**Continuous positive airway pressure**
Continuous positive airway pressure:
Continuous positive airway pressure (CPAP) is a form of positive airway pressure (PAP) ventilation in which a constant level of pressure greater than atmospheric pressure is continuously applied to the upper respiratory tract of a person. The application of positive pressure may be intended to prevent upper airway collapse, as occurs in obstructive sleep apnea, or to reduce the work of breathing in conditions such as acute decompensated heart failure. CPAP therapy is highly effective for managing obstructive sleep apnea. Compliance with and acceptance of CPAP therapy can be a limiting factor, with 8% of people stopping use after the first night and 50% within the first year.
Medical uses:
Moderate to severe obstructive sleep apnea CPAP is the most effective treatment for moderate to severe obstructive sleep apnea, in which the mild pressure from the CPAP prevents the airway from collapsing or becoming blocked. In the majority of people who use the therapy as their physician recommends, CPAP has been shown to eliminate obstructive sleep apneas entirely. In addition, a meta-analysis showed that CPAP therapy may reduce erectile dysfunction symptoms in male patients with obstructive sleep apnea.
Medical uses:
Upper airway resistance syndrome Upper airway resistance syndrome is another form of sleep-disordered breathing with symptoms that are similar to obstructive sleep apnea, but not severe enough to be considered OSA. CPAP can be used to treat UARS as the condition progresses, in order to prevent it from developing into obstructive sleep apnea.
Medical uses:
Pre-term infants CPAP also may be used to treat pre-term infants whose lungs are not yet fully developed. For example, physicians may use CPAP in infants with respiratory distress syndrome. It is associated with a decrease in the incidence of bronchopulmonary dysplasia. In some preterm infants whose lungs have not fully developed, CPAP improves survival and decreases the need for steroid treatment for their lungs. In resource-limited settings where CPAP improves respiratory rate and survival in children with primary pulmonary disease, researchers have found that nurses can initiate and manage care with once- or twice-daily physician rounds.
Medical uses:
COVID-19 In March 2020, the US Food and Drug Administration (FDA) suggested that CPAP devices may be used to support patients affected by COVID-19; however, it recommended additional filtration since non-invasive ventilation may increase the risk of infectious transmission.
Other uses CPAP also has been suggested for treating acute hypoxaemic respiratory failure in children. However, due to a limited number of clinical studies, the effectiveness and safety of this approach to providing respiratory support is not clear.
Contraindications:
CPAP cannot be used in the following situations or conditions:
- A person is not breathing on their own
- A person is uncooperative or anxious
- A person cannot protect their own airway (i.e., has altered consciousness for reasons other than sleep, such as extreme illness, intoxication, coma, etc.)
- A person is not stable due to respiratory arrest
- A person has experienced facial trauma or facial burns
- A person who has had previous facial, esophageal, or gastric surgery may find this a difficult or unsuitable treatment option, or may need to fully heal from surgery before using this treatment
Adverse effects:
Some people experience difficulty adjusting to CPAP therapy and report general discomfort, nasal congestion, abdominal bloating, sensations of claustrophobia, mask leak problems, and convenience-related complaints. Oral leak problems also interfere with CPAP effectiveness.
Mechanism:
CPAP therapy uses machines specifically designed to deliver a flow of air at a constant pressure. CPAP machines possess a motor that pressurizes room temperature air and delivers it through a hose connected to a mask or tube worn by the patient. This constant stream of air opens and keeps the upper airway unobstructed during inhalation and exhalation. Some CPAP machines have other features as well, such as heated humidifiers.
Mechanism:
The therapy is an alternative to positive end-expiratory pressure (PEEP). Both modalities stent open the alveoli in the lungs and thus recruit more of the lung surface area for ventilation. However, while PEEP refers to devices that impose positive pressure only at the end of the exhalation, CPAP devices apply continuous positive airway pressure throughout the breathing cycle. Thus, the ventilator does not cycle during CPAP, no additional pressure greater than the level of CPAP is provided, and patients must initiate all of their breaths.
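As an illustration of this distinction, the sketch below generates a constant CPAP pressure trace alongside a PEEP-style trace in which positive pressure is only guaranteed at end-expiration, on top of ventilator-cycled breaths. The pressure levels and cycle timings are arbitrary assumptions chosen for illustration, not clinical settings.

```python
# Illustrative sketch (not a clinical model): CPAP holds the set pressure
# throughout the breathing cycle, whereas a PEEP-style trace only guarantees
# positive pressure at end-expiration, with ventilator-driven breaths on top.
# All numbers below are assumed example values.
import numpy as np

t = np.linspace(0, 8, 801)                 # seconds, two 4-second breath cycles
cpap_level = 10.0                          # cmH2O, assumed CPAP setting
cpap_trace = np.full_like(t, cpap_level)   # constant pressure; patient triggers all breaths

peep_level = 5.0                           # cmH2O, assumed PEEP setting
driving_pressure = 10.0                    # cmH2O above PEEP during ventilator inspiration
inspiration = (t % 4) < 1.5                # ventilator cycles: 1.5 s inspiration per 4 s cycle
peep_trace = peep_level + driving_pressure * inspiration

print(f"CPAP trace range: {cpap_trace.min():.1f}-{cpap_trace.max():.1f} cmH2O (no cycling)")
print(f"PEEP trace range: {peep_trace.min():.1f}-{peep_trace.max():.1f} cmH2O (ventilator cycles)")
```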
Mechanism:
Method of delivery of CPAP Nasal CPAP Nasal prongs or a nasal mask is the most common modality of treatment. Nasal prongs are placed directly in the person's nostrils. A nasal mask is a small mask that covers the nose. There are also nasal pillow masks, which have a cushion at the base of the nostrils and are considered the least invasive option. Frequently, nasal CPAP is used for infants, although this use is controversial. Studies have shown that nasal CPAP reduces ventilator time, but it is also associated with an increased occurrence of pneumothorax.
Mechanism:
Nasopharyngeal CPAP Nasopharyngeal CPAP is administered by a tube that is placed through the person's nose and ends in the nasopharynx. This tube bypasses the nasal cavity in order to deliver the CPAP farther down in the upper respiratory system.
Face mask A full face mask over the mouth and nose is another approach for people who breathe through their mouths when they sleep. Often, oral masks and naso-oral masks are used when nasal congestion or obstruction is an issue. There are also devices that combine nasal pressure with mandibular advancement devices (MAD).
Mechanism:
Compliance A large portion of people do not adhere to the recommended method of CPAP therapy, with more than 50% of people discontinuing use in the first year. A significant change in behaviour is required in order to commit to long-term use of CPAP therapy and this can be difficult for many people, since CPAP equipment must be used consistently for all sleep (including naps and overnight trips away from home) and needs to be regularly maintained and replaced over time. In addition, people with moderate to severe obstructive sleep apnea have a higher risk of concomitant symptoms such as anxiety and depression, which can make it more difficult to change their sleep habits and to use CPAP on a regular basis. Educational and supportive approaches have been shown to help motivate people who need CPAP therapy to use their devices more often.
History:
Colin Sullivan, an Australian physician and professor, invented CPAP in 1980 at Royal Prince Alfred Hospital in Sydney.
**Main Core**
Main Core:
Main Core is an alleged American government database containing information on those believed to be threats to national security.
History:
The existence of the database was first asserted in May 2008 by Christopher Ketcham and again in July 2008 by Tim Shorrock.
Description:
The Main Core data, which is believed to come from the NSA, FBI, CIA, and other sources, is collected and stored without warrants or court orders. The database's name derives from the fact that it contains "copies of the 'main core' or essence of each item of intelligence information on Americans produced by the FBI and the other agencies of the U.S. intelligence community". As of 2008, there were allegedly eight million Americans listed in the database as possible threats, often for trivial reasons, whom the government may choose to track, question, or detain in a time of crisis.
**Technology intelligence**
Technology intelligence:
Technology Intelligence (TI) is an activity that enables companies to identify the technological opportunities and threats that could affect the future growth and survival of their business. It aims to capture and disseminate the technological information needed for strategic planning and decision making. As technology life cycles shorten and businesses become more globalized, having effective TI capabilities is becoming increasingly important.

In the United States, Project Socrates identified the exploitation of technology as the most effective foundation for decision making for the complete set of functions within the private and public sectors that determine competitiveness. The Centre for Technology Management has defined technology intelligence as "the capture and delivery of technological information as part of the process whereby an organisation develops an awareness of technological threats and opportunities." The Internet has contributed to the growth of data sources for technology intelligence, which has been important for the advancement of the field. Technology intelligence gives organizations the ability to be aware of technology threats and opportunities; it is important for companies to be able to identify emerging technologies, in the form of both opportunities and threats, and to understand how they can affect their business.

In the past two decades there has been massive growth in the number of products and services that technology has produced, because it has become far easier and cheaper to acquire and store data from different sources that can be analyzed and used in different industries. Interest in the field grew from 1994, and the technology intelligence process has evolved since then. The process can be used to improve and further the growth of a business, because the need to shorten the time lag between data acquisition and decision making is spurring innovations in business intelligence technologies. Several tools, such as text mining and Tech-Pioneer, make the technology intelligence process actionable and effective. The process consists of four steps: organizing the competitive intelligence effort, collecting the information, analyzing the information, and disseminating the results. Although the process is very beneficial to organizations, there are challenges, such as communication and interpreting the results the process provides.
Historic Development:
Technology intelligence is not new, but it is more important now that organizations and societies are being disrupted by the shift to an information- and networking-based economy. Also known as competitive intelligence, the field has passed through several stages of evolution. Interest surged in 1994, with numerous publications on the topic, government efforts to encourage competitive intelligence, and the creation of competitive intelligence courses and programs in universities. In the 1980s, the work of Michael Porter on strategic management had already renewed interest in the area. Between the 1970s and 1980s, a few companies started adopting technology intelligence processes but were not successful, and this failure still causes great uncertainty about how companies can adopt these practices. Nevertheless, interest in technology intelligence processes has continued to grow in recent years.

The first generation of technology intelligence occurred when there was no long-term strategic framework for research and development (R&D) management. A number of inefficient innovations were created, because there was little coordination between the central research department and the company's technology needs. Technology monitoring was introduced to the central research department, but errors remained: the recommendations were not efficient, their presentation was poor, and they did little for the resource allocation process.

The second generation of technology intelligence tried to strengthen the link between companies and R&D management by addressing short-term technological needs, but this was not enough, as the corporate strategy did not offer long-term guidance. Emerging technologies could not be easily implemented because they had not been adequately planned for and the organizations were not receptive to recommendations. The technology intelligence processes of this generation focused on customers in the short term. Information was collected, analyzed, and organized in a controlled manner based on the researcher's own technology intelligence process, which limited the efficiency of the technology intelligence specialists.

In the third generation of technology intelligence, corporate and technology management jointly decide the strategy and content of R&D. Insufficient information about the future market is treated as an opportunity to introduce long-term innovations that help the company grow and take advantage of opportunities. This strengthens the company's learning ability, as all necessary parties are involved. It differs from the previous generations in that technology intelligence positions have a coordinating role and are decentralized. Only a few companies have moved to this stage, and others are gradually doing the same.
Tools:
The key to actionable technology intelligence is being able to properly implement IT tools to collect and analyze relevant data. The use of open innovation is a good way for businesses to take advantage of technology intelligence: when people within an organization can contribute technologies and ideas, it allows for increased growth. IT tools such as text mining make technology intelligence more efficient and actionable, and they are important in planning for technology development because they provide frameworks that aid the technology intelligence process. A commonly used tool is text mining, which obtains information from a company's data and analyzes it to identify patterns that will be beneficial to the company. One benefit of text mining is its keyword-based morphology analysis, which allows the economic and technological value of a future technology to be assessed. Another tool is Tech-Pioneer, which identifies technology opportunities systematically by using a computerized procedure to identify keywords and analyze the architecture and framework of technologies. These tools are mostly used to generate a range of possible future technologies rather than to predict the future, and the scenarios they provide are pivotal in the technology intelligence process. Scenario planning is also part of the technology intelligence process: it improves decision making and creates images of how the future might evolve, which allows companies to take advantage of opportunities to grow. These scenarios can also identify possible threats.
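As a rough illustration of the keyword-oriented text mining described above, the sketch below ranks TF-IDF terms in a tiny corpus of technology-related documents. The example corpus and the scikit-learn-based approach are assumptions made for illustration; they are not the specific text mining or Tech-Pioneer tools referenced in the article.

```python
# Minimal sketch of keyword-based text mining for technology intelligence:
# rank TF-IDF terms per document so that high-scoring terms hint at topics
# worth monitoring. The documents below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "Solid-state battery patents target higher energy density for electric vehicles.",
    "Perovskite solar cells approach commercial efficiency in pilot production.",
    "Battery recycling startups recover lithium and cobalt from end-of-life packs.",
]

vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(documents)
terms = vectorizer.get_feature_names_out()

for i, doc in enumerate(documents):
    row = tfidf[i].toarray().ravel()
    top = row.argsort()[::-1][:3]          # three highest-weighted terms
    print(f"Doc {i}: {[terms[j] for j in top]}")
```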
Process:
The technology intelligence process consists of four steps: planning, organizing, and directing the competitive intelligence effort; collecting intelligence information; analyzing the data; and disseminating the results of intelligence for action. The planning step involves the company deciding to seize technology-based opportunities. Collecting information involves a number of techniques used to gather insights from data. The third step involves identifying technological opportunities from the results. The last step involves putting the results into action and taking advantage of the knowledge the process has provided.

Technology intelligence is crucial for success in technology-based companies. It identifies technology opportunities by generating insights that can affect revenues and profits, and it can result in a shift in strategy and improve the quality of a business's products and services. The process also deals with large volumes of data and generates information that would be impractical to produce manually.
Challenges:
A challenge of technology intelligence is that the analysis generated does not always align with the timing of planning. There can also be difficulties in communicating and acting on the results. Even when a firm has good technology intelligence, it can be problematic to actually get the information discovered to decision makers.
**Cleromancy**
Cleromancy:
Cleromancy is a form of sortition (casting of lots) in which an outcome is determined by means that normally would be considered random, such as the rolling of dice (astragalomancy), but that are sometimes believed to reveal the will of a deity.
In classical civilization:
In ancient Rome fortunes were told through the casting of lots or sortes.
In Judaic and Christian tradition:
Casting of lots (Hebrew: גּוֹרָל, romanized: gōral, Greek: κλῆρος, romanized: klē̂ros) is mentioned 47 times in the Bible. Some examples in the Hebrew Bible of the casting of lots as a means of determining God's will: In the Book of Leviticus 16:8, God commanded Moses, "And Aaron shall cast lots upon the two goats; one lot for the LORD, and the other lot for the scapegoat." One goat will be sacrificed as a sin offering, while the scapegoat is loaded up with the sins of the people and sent into the wilderness.
In Judaic and Christian tradition:
According to Numbers 26:55, Moses allocated territory to the tribes of Israel according to each tribe's male population and by lot.
In Joshua 7:14, a guilty party (Achan) is found by lot.
In Judaic and Christian tradition:
In the Book of Joshua 18:6, Joshua says, "Ye shall therefore describe the land into seven parts, and bring the description hither to me, that I may cast lots for you here before the LORD our God." The Hebrews took this action to know God's will as to the dividing of the land between the seven tribes of Israel who had not yet "received their inheritance" (Joshua 18:2).
In Judaic and Christian tradition:
In the First Book of Samuel 14:42, lots are used to determine that it was Jonathan, Saul's son, who broke the oath that Saul made, "Cursed be the man that eateth any food until evening, that I may be avenged on mine enemies" (1 Samuel 14:24).
In Judaic and Christian tradition:
In the Book of Jonah 1:7, the desperate sailors cast lots to see whose god was responsible for creating the storm: "Then the sailors said to each other, 'Come, let us cast lots to find out who is responsible for this calamity.' They cast lots and the lot fell on Jonah."

Other places in the Hebrew Bible relevant to divination include: Book of Proverbs 16:33: "The lot is cast into the lap, but its every decision is from Yahweh" and 18:18: "The lot settles disputes, and keeps strong ones apart." Book of Leviticus 19:26 KJV "... neither shall you practice enchantment, nor observe times." The original Hebrew word for enchantment, as found in Strong's Concordance, is pronounced naw-khash'. The translation given by Strong's is "to practice divination, divine, observe signs, learn by experience, diligently observe, practice fortunetelling, take as an omen"; and "1. to practice divination 2. to observe the signs or omens". Times in the original Hebrew is pronounced aw-nan'. Its translation in Strong's is "to make appear, produce, bring (clouds), to practise soothsaying, conjure;" and "1. to observe times, practice soothsaying or spiritism or magic or augury or witchcraft 2. soothsayer, enchanter, sorceress, diviner, fortune-teller, barbarian...". In the Hebrew-Interlinear Bible, the verse reads, "not you shall augur and not you shall consult cloud".
In Judaic and Christian tradition:
Deuteronomy 18:10 "let no one be found among you who [qasam qesem], performs [onan], [nahash], or [kashaph]". qasam qesem literally means distributes distributions, and may possibly refer to cleromancy. Kashaph seems to mean mutter, although the Septuagint renders the same phrase as pharmakia (poison), so it may refer to magic potions.
In the Book of Esther, Haman casts lots to decide the date on which to exterminate the Jews of Shushan; the Jewish festival of Purim commemorates the subsequent chain of events.
In I Chronicles 26:13 guard duties are assigned by lot.
In Judaic and Christian tradition:
To Christian doctrine, perhaps the most significant ancient Hebrew mention of lots occurs in the Book of Psalms, 22:18 "They divide my garments among them, and for my clothing they cast lots." This came to be regarded as a prophecy connecting that psalm and the one that follows to the crucifixion and resurrection of Jesus, since all four gospels (for example, John 19:24) tell of the Roman soldiers at Jesus's crucifixion casting lots to see who would take possession of his clothing. That final act of profanation became the central theme of The Robe, a 1953 film starring Richard Burton.

A notable example in the New Testament occurs in the Acts of the Apostles 1:23–26, where the eleven remaining apostles cast lots to determine whether to select Matthias or Barsabbas (surnamed Justus) to replace Judas.
In Judaic and Christian tradition:
The Eastern Orthodox Church still occasionally uses this method of selection. In 1917, Metropolitan Tikhon became Patriarch of Moscow by the drawing of lots. The Coptic Orthodox Church uses drawing lots to choose the Coptic pope, most recently done in November 2012 to choose Pope Tawadros II. German Pietist Christians in the 18th century often followed the New Testament precedent of drawing lots to determine the will of God. They often did so by selecting a random Bible passage. The most extensive use of drawing of lots in the Pietist tradition may have come with Count von Zinzendorf and the Moravian Brethren of Herrnhut, who drew lots for many purposes, including selection of church sites, approval of missionaries, the election of bishops, and many others. This practice was greatly curtailed after the General Synod of the worldwide Moravian Unity in 1818 and finally discontinued in the 1880s. Many Amish customarily select ordinary preachers by lot. (Note that the Greek word for "lot" (kleros) serves as the etymological root for English words like "cleric" and "clergy" as well as for "cleromancy".)
In Germania:
Tacitus, in Chapter X of his Germania (circa 98 AD), describes casting lots as a practice used by the Germanic tribes. He states: "To divination and casting of lots, they pay attention beyond any other people. Their method of casting lots is a simple one: they cut a branch from a fruit-bearing tree and divide it into small pieces which they mark with certain distinctive signs and scatter at random onto a white cloth. Then, the priest of the community if the lots are consulted publicly, or the father of the family if it is done privately, after invoking the gods and with eyes raised to heaven, picks up three pieces, one at a time, and interprets them according to the signs previously marked upon them." In the ninth century Anskar, a Frankish missionary and later bishop of Hamburg-Bremen, observed the same practice several times in the decision-making process of the Danish peoples. In this version, the chips were believed to determine the support or otherwise of gods, whether Christian or Norse, for a course of action or act. For example, in one case a Swedish man feared he had offended a god and asked a soothsayer to cast lots to find out which god. The soothsayer determined that the Christian god had taken offence; the Swede later found a book that his son had stolen from Bishop Gautbert in his house.
In Asian culture:
In ancient China, and especially in Chinese folk religion, various means of divination through random means are employed, such as qiúqiān (求簽). In Japan, omikuji is one form of drawing lots.
In Asian culture:
I Ching divination, which dates from early China, has played a major role in Chinese culture and philosophy for more than two thousand years. The I Ching tradition descended in part from the oracle bone divination system that was used by rulers in the Shang dynasty, and grew over time into a rich literary wisdom tradition that was closely tied to the philosophy of yin and yang. I Ching practice is widespread throughout East Asia, and commonly involves the use of coins or (traditionally) sticks of yarrow.
In Asian culture:
In South India, the custom of ritualistically tossing sea shells (sozhi) and interpreting the results based on the positions of the shells is prevalent, predominantly in the state of Kerala.
In West African culture:
In Yoruba and Yoruba-inspired religions, babalawos use variations on a common type of cleromancy called Ifá divination. Ifá divination is performed by "pounding ikin"—transferring consecrated oil palm kernels from one hand to another to create a pattern of eight to sixteen marks called "Odù" onto a tray of iyerosun, or consecrated termite dust from the Irosun tree. The casting itself is called Dafá in Yoruba language speaking areas in West Africa. Similar to I Ching, this form of divination forms a binary-like series of eight broken or unbroken pairs. This allows for 256 combinations, each of which references sets of tonal poems that contain a structure that includes various issues, problems and adversities and the prescriptions of offerings to correct them.
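The 256 figure quoted above follows directly from treating the casting as eight binary positions; a quick enumeration (with purely illustrative labels, not authentic Ifá notation) confirms the count.

```python
# Sanity check of the combinatorics mentioned above: eight binary positions
# (each mark either single or double) give 2**8 = 256 possible figures,
# matching the 256 combinations referenced in the text.
from itertools import product

figures = list(product(("I", "II"), repeat=8))   # one symbol per position
print(len(figures))                               # 256
print("".join(figures[0]), "...", "".join(figures[-1]))
```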
In Mi'kmaq tradition:
The game of Waltes is a form of cleromancy practiced by traditional Mi'kmaq and preserved despite colonial potlatch law, the Indian Act, and residential schools in Canada. It is played with a bowl, six bone dice, and counting sticks. Three sticks are grandmothers and one is the grandfather.
**Protein bar**
Protein bar:
Protein bars are nutrition bars that contain a high proportion of protein to carbohydrates/fats.
Dietary purpose:
Protein bars are targeted to people who primarily want a convenient source of protein that does not require preparation (unless homemade). There are different kinds of food bars to fill different purposes. Energy bars provide the majority of their food energy (calories) in carbohydrate form. Meal replacement bars are intended to replace the variety of nutrients in a meal. Protein bars are usually lower in carbohydrates than energy bars, lower in vitamins and dietary minerals than meal replacement bars, and significantly higher in protein than either.
Dietary purpose:
Protein bars are mainly used by athletes or exercise enthusiasts for muscle building.
Protein bar niche:
In addition to other nutrients, the human body needs protein to build muscles. In the fitness and medical fields it is generally accepted that protein after exercise helps build the muscles used. Whey protein is one of the most popular protein sources used for athletic performance. Other protein sources include egg albumen protein and casein, which is typically known as the slow digestive component of milk protein. Alternative protein bars may use insect protein as an ingredient. Vegan protein bars contain only plant-based proteins from sources like peas, brown rice, hemp, and soybeans.
Issues:
Sugar content Protein bars may contain high levels of sugar and sometimes are called "candy bars in disguise".
Supplementation controversy There is disagreement over the amount of protein required for active individuals and athletic performance. Some research shows that protein supplementation is not necessary. Athletes generally consume higher levels of protein than the general population to support muscular hypertrophy and to reduce the loss of lean body mass during weight loss.
**Nihil admirari**
Nihil admirari:
Nihil admirari (or "Nil admirari") is a Latin phrase. It means "to be surprised by nothing", or in the imperative, "Let nothing astonish you".
Origin:
Marcus Tullius Cicero argues that real sapience consists of preparing oneself for all possible incidents and not being surprised by anything, using as an example Anaxagoras, who, when informed about the death of his son, said, "Sciebam me genuisse mortalem" (I knew that I begot a mortal). Horace and Seneca refer to similar occurrences and admired such moral fortitude: "Marvel at nothing – that is perhaps the one and only thing that can make a man happy and keep him so."

Nietzsche wrote that in this proposition the ancient philosopher "sees the whole of philosophy", opposing it to Schopenhauer's admirari id est philosophari (to marvel is to philosophize).
**Placing reflexes**
Placing reflexes:
There are two frequently used placing reflexes. They are tests which allow clinicians to assess the proprioceptive abilities of small domestic animals (cats and dogs in particular). The first test is to lift an animal and bring the anterior/dorsal surface of a paw up to a table edge. The normal animal will position its paw onto the surface properly. The second (sometimes called the proprioceptive positioning reflex) is similar. The dorsal (top) surface of an animal's paw is placed onto a surface, and a fully healthy animal would flick it back up to be in the normal position (dorsal side up). If the animal cannot do this, it implies that there is either a motor deficit or damage to the sensory pathway for proprioception, or damage to the centres of the brain which would normally integrate this response. These brain centres would include the cerebellum, and possibly (debated) portions of the cerebrum. There is no evidence to suggest whether the cerebrum is specifically involved with this reflex. Evidence for the involvement of the cerebellum comes, in part, from the fact that cerebellar ataxia can lead to a loss of this particular reflex.
Placing reflexes:
It is sometimes referred to as a "response", to allow for possible conscious cerebral influence of the action. However, hopping and placing reactions, as well as long-loop stretch reflexes, are probably integrated by the cerebral cortex. Decorticate animals show absence of this reflex.
**Splitting theorem**
Splitting theorem:
In the mathematical field of differential geometry, there are various splitting theorems on when a pseudo-Riemannian manifold can be given as a metric product. The best-known is the Cheeger–Gromoll splitting theorem for Riemannian manifolds, although there has also been research into splitting of Lorentzian manifolds.
Cheeger and Gromoll's Riemannian splitting theorem:
Any connected Riemannian manifold M has an underlying metric space structure, and this allows the definition of a geodesic line as a map c: ℝ → M such that the distance from c(s) to c(t) equals | t − s | for arbitrary s and t. This is to say that the restriction of c to any bounded interval is a curve of minimal length which connects its endpoints.

In 1971, Jeff Cheeger and Detlef Gromoll proved that, if a geodesically complete and connected Riemannian manifold of nonnegative Ricci curvature contains any geodesic line, then it must split isometrically as the product of a complete Riemannian manifold with ℝ. The proof was later simplified by Jost Eschenburg and Ernst Heintze. In 1936, Stefan Cohn-Vossen had originally formulated and proved the theorem in the case of two-dimensional manifolds, and Victor Toponogov had extended Cohn-Vossen's work to higher dimensions, under the special condition of nonnegative sectional curvature.

The proof can be summarized as follows. The condition of a geodesic line allows for two Busemann functions to be defined. These can be thought of as a normalized Riemannian distance function to the two endpoints of the line. From the fundamental Laplacian comparison theorem proved earlier by Eugenio Calabi, these functions are both superharmonic under the Ricci curvature assumption. Either of these functions could be negative at some points, but the triangle inequality implies that their sum is nonnegative. The strong maximum principle implies that the sum is identically zero and hence that each Busemann function is in fact (weakly) a harmonic function. Weyl's lemma implies the infinite differentiability of the Busemann functions. Then, the proof can be finished by using Bochner's formula to construct parallel vector fields, setting up the de Rham decomposition theorem. Alternatively, the theory of Riemannian submersions may be invoked.

As a consequence of their splitting theorem, Cheeger and Gromoll were able to prove that the universal cover of any closed manifold of nonnegative Ricci curvature must split isometrically as the product of a closed manifold with a Euclidean space. If the universal cover is topologically contractible, then it follows that all metrics involved must be flat.
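For reference, the Busemann functions used in the proof sketch can be written out explicitly; the following is the standard textbook definition and the triangle-inequality step mentioned above, not notation taken from Cheeger and Gromoll's paper.

```latex
% Busemann functions associated with the geodesic line c:
\[
  b_{\pm}(x) \;=\; \lim_{t \to \infty} \Bigl( d\bigl(x, c(\pm t)\bigr) - t \Bigr).
\]
% The triangle inequality, applied along the line, gives the nonnegativity of the sum:
\[
  b_{+}(x) + b_{-}(x)
  \;=\; \lim_{t\to\infty} \Bigl( d\bigl(x,c(t)\bigr) + d\bigl(x,c(-t)\bigr) - 2t \Bigr)
  \;\ge\; \lim_{t\to\infty} \Bigl( d\bigl(c(-t),c(t)\bigr) - 2t \Bigr) \;=\; 0 .
\]
```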
Lorentzian splitting theorem:
In 1982, Shing-Tung Yau conjectured that a particular Lorentzian version of Cheeger and Gromoll's theorem should hold. Proofs in various levels of generality were found by Jost Eschenburg, Gregory Galloway, and Richard Newman. In these results, the role of geodesic completeness is replaced by either the condition of global hyperbolicity or of timelike geodesic completeness. The nonnegativity of Ricci curvature is replaced by the timelike convergence condition that the Ricci curvature is nonnegative in all timelike directions. The geodesic line is required to be timelike.
**Cobalt-59 nuclear magnetic resonance**
Cobalt-59 nuclear magnetic resonance:
Cobalt-59 nuclear magnetic resonance is a form of nuclear magnetic resonance spectroscopy that uses cobalt-59, a cobalt isotope. 59Co is a nucleus of spin 7/2 and 100% natural abundance. The nucleus has an electric quadrupole moment. Among all NMR-active nuclei, 59Co has the largest chemical shift range, and the chemical shift can be correlated with the spectrochemical series. Resonances are observed over a range of 20000 ppm, with signal widths of up to 20 kHz. A widely used standard is potassium hexacyanocobaltate (0.1 M K3Co(CN)6 in D2O), which, due to its high symmetry, has a rather small line width. Systems of low symmetry can yield signals broadened to an extent that renders them unobservable in fluid-phase NMR; in these cases, signals can still be observable in solid-state NMR.
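To give a sense of scale, the chemical shift range can be converted into an absolute frequency window via Δν = Δδ · ν₀ · 10⁻⁶. The sketch below does this for an assumed 59Co spectrometer frequency; the 100 MHz figure is an illustrative assumption, not a value from the article.

```python
# Convert the ~20,000 ppm 59Co chemical-shift range into a frequency window.
shift_range_ppm = 20_000          # from the text
nu0_co59_mhz = 100.0              # assumed 59Co resonance frequency of the spectrometer, MHz

range_khz = shift_range_ppm * nu0_co59_mhz * 1e-6 * 1e3   # ppm -> fraction -> MHz -> kHz
print(f"{range_khz:.0f} kHz ({range_khz / 1e3:.1f} MHz) spread at {nu0_co59_mhz} MHz")
```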
**Methyl perchlorate**
Methyl perchlorate:
Methyl perchlorate is an organic chemical compound. Like many other perchlorates, it is a high energy material. It is also a toxic alkylating agent and exposure to the vapor can cause death. It can be prepared by treating iodomethane with a solution of silver perchlorate in benzene.
**Pseudocholinesterase deficiency**
Pseudocholinesterase deficiency:
Pseudocholinesterase deficiency is an autosomal recessive inherited blood plasma enzyme abnormality in which the body's production of butyrylcholinesterase (BCHE; pseudocholinesterase aka PCE) is impaired. People who have this abnormality may be sensitive to certain anesthetic drugs, including the muscle relaxants succinylcholine and mivacurium as well as other ester local anesthetics.
Signs and symptoms:
The effects are varied depending on the particular drug given. When anesthetists administer standard doses of these anesthetic drugs to a person with pseudocholinesterase deficiency, the patient experiences prolonged paralysis of the respiratory muscles, requiring an extended period of time during which the patient must be mechanically ventilated. Eventually the muscle-paralyzing effects of these drugs will wear off despite the deficiency of the pseudocholinesterase enzyme. If the patient is maintained on a mechanical respirator until normal breathing function returns, there is little risk of harm to the patient.

Because it is rare in the general population, pseudocholinesterase deficiency is sometimes overlooked when a patient does not wake up after surgery. If this happens, there are two major complications that can arise. First, the patient may lie awake and paralyzed while medical providers try to determine the cause of the patient's unresponsiveness. Second, the breathing tube may be removed before the patient is strong enough to breathe properly, potentially causing respiratory arrest.
Signs and symptoms:
This enzyme abnormality is a benign condition unless a person with pseudocholinesterase deficiency is exposed to the offending pharmacological agents.
Complications The main complication resulting from pseudocholinesterase deficiency is the possibility of respiratory failure secondary to succinylcholine or mivacurium-induced neuromuscular paralysis. Individuals with pseudocholinesterase deficiency also may be at increased risk of toxic reactions, including sudden cardiac death, associated with recreational use of the aromatic ester cocaine.
Genetics:
The body has two primary ways of metabolizing choline esters: via the common, neuronal acetylcholinesterase (ACHE), and via the blood plasma-carried butyrylcholinesterase (BCHE) described here. Several single-nucleotide polymorphisms in the BCHE gene have been identified, such as the D98G missense SNP chr3:165830741 A->G (Asp to Gly at 98) rs1799807 present in 1% of the populace (e.g. the dibucaine-resistant "atypical" enzyme at 41% of normal activity), and the A567T missense SNP chr3:165773492 G->A (Ala to Thr at 567) rs1803274 (the common K-variant "Kalow" at -7% of normal activity). Many uncommon variants, with greater effects on enzyme activity, are known, such as S1, F1, and F2.

Genes encoding cholinesterase 1 (CHE1) and CHE2 have been mapped to 3q26.1-q26.2. One gene is silent. Specifically, there are sixteen possible genotypes, expressed as ten phenotypes; six of these phenotypes are associated with a marked reduction in the hydrolysis of succinylcholine. The plasma cholinesterase activity level is genetically determined by four alleles identified as silent (s), usual allele (u), dibucaine (d), or fluoride (f); also, this allele can be absent (a).

The inherited defect is caused by either the presence of an atypical PCE or complete absence of the enzyme. Cholinesterases are enzymes that facilitate hydrolysis of the esters of choline. Acetylcholine, the most commonly encountered of these esters, is the mediator of the whole cholinergic system. Acetylcholine is immediately inactivated "in situ" by a specific acetylcholinesterase in the ganglia of the autonomic nervous system (preganglionic and postganglionic in the parasympathetic nervous system and almost exclusively preganglionic in the sympathetic nervous system), in the synapses of the central nervous system, and in the neuromuscular junctions. The affinity of PCE is lower for acetylcholine, but higher for other esters of choline, such as butyrylcholine, benzoylcholine, and succinylcholine, and for aromatic esters (e.g., procaine, chloroprocaine, tetracaine). Normal PCE is produced in the liver, has a plasma half-life of 8 to 12 days, and can be found in plasma, erythrocytes, glial tissue, liver, pancreas, and bowel. When succinylcholine is used for anesthesia, its high plasma concentration immediately after intravenous injection decreases rapidly in normal individuals because of the rapid action of plasma PCE. In the case of an atypical PCE or complete absence of PCE, the effect of the injected succinylcholine can last for up to 10 hours.
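The genotype count quoted above can be checked with a small enumeration: four alleles give sixteen ordered allele pairs, which collapse to ten distinct unordered genotypes. The mapping of genotypes onto the ten phenotypes is a biological classification and is not reproduced here.

```python
# Sanity check of the counts quoted above for the four BCHE alleles
# (usual, dibucaine, fluoride, silent).
from itertools import combinations_with_replacement, product

alleles = ["u", "d", "f", "s"]
ordered = list(product(alleles, repeat=2))                     # 16 ordered pairs
unordered = list(combinations_with_replacement(alleles, 2))    # 10 distinct genotypes
print(len(ordered), len(unordered))                            # 16 10
```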
Drug reactions:
These patients should notify others in their family who may be at risk for carrying one or more abnormal butyrylcholinesterase gene alleles.
Drug reactions:
Drugs to avoid: Succinylcholine, also known as suxamethonium, which is commonly given to paralyse skeletal muscles as part of a general anaesthetic for surgery. A dose that would paralyze the average individual for 5-10 minutes can paralyze the enzyme-deficient individual for up to 8 hours. If this condition is recognized by the anesthesiologist early, then there is rarely a problem, as the patient can be kept intubated and sedated until the muscle relaxation resolves. If not identified, residual paralysis can cause serious complications due to weakness of the muscles of respiration after the patient's breathing support has been withdrawn.
Drug reactions:
Mivacurium, like succinylcholine, is a muscle relaxant and will have prolonged action in those with butyrylcholinesterase deficiency.
Pilocarpine (trade name Salagen) is used to treat dry mouth. As the name suggests, dry mouth is a medical condition that occurs when saliva production goes down. There are a variety of causes of dry mouth including side effect of various drugs.
Butyrylcholine - this is rarely used to treat exposure to nerve agents, pesticides, toxins, etc.
Drugs containing Huperzine A and Donepezil, which are used to slow the progression of Alzheimer's disease.
Drugs containing propionylcholine and acetylcholine
Parathion, an agricultural pesticide
Procaine, a local anaesthetic agent used before and during various surgical or dental procedures. Procaine causes loss of feeling in the skin and surrounding tissues.
Diagnosis:
This inherited condition can be diagnosed with a blood test. If the total cholinesterase activity in the patient's blood is low, this may suggest an atypical form of the enzyme is present, putting the patient at risk of sensitivity to suxamethonium and related drugs. Inhibition studies may also be performed to give more information about potential risk. In some cases, genetic studies may be carried out to help identify the form of the enzyme that is present.
Prevention:
Patients with known pseudocholinesterase deficiency may wear a medic-alert bracelet that will notify healthcare workers of increased risk from administration of succinylcholine, and use a non-depolarising neuromuscular-blocking drug for general anesthesia, such as rocuronium.
Prognosis:
Prognosis for recovery following administration of succinylcholine is excellent when medical support includes close monitoring and respiratory support measures.
In nonmedical settings in which subjects with pseudocholinesterase deficiency are exposed to cocaine, sudden cardiac death can occur.
Frequency:
For homozygosity, the incidence is approximately 1:2,000-4,000, whereas the incidence for heterozygosity increases to up to 1:500. The variant EaEa genotype, homozygous absent, is approximately 1:3200. The gene for the dibucaine-resistant atypical cholinesterase appears to be widely distributed. Among Caucasians, males are affected almost twice as often as females. The frequency for heterozygosity is low among black people, Japanese and non-Japanese Asians, South Americans, Australian Aboriginal peoples, and Arctic Inuit (in general). However, there are a few Inuit populations (e.g., Alaskan Inuit) with an unusually high gene frequency for PCE deficiency. A relatively high frequency also was reported among Jews from Iran and Iraq, Caucasians from North America, Great Britain, Portugal, Yugoslavia, and Greece.
Frequency:
Arya Vysyas Multiple studies done both in and outside India have shown an increased prevalence of pseudocholinesterase deficiency amongst the Arya Vysya community. A study performed in the Indian state of Tamil Nadu in Coimbatore on 22 men and women from this community showed that 9 of them had pseudocholinesterase deficiency, which translates to a prevalence that is 4000-fold higher than that in European and American populations.
Frequency:
Persian Jews Pseudocholinesterase deficiency is common within the Persian and Iraqi Jewish populations. Approximately one in 10 Persian Jews are known to have a mutation in the gene causing this disorder and thus one in 100 couples will both carry the mutant gene and each of their children will have a 25% chance of having two mutant genes, and thus be affected with this disorder. This means that one out of 400 Persian Jews is affected with this condition.
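The arithmetic behind these figures is the standard autosomal recessive calculation, shown here for reference using the carrier frequency quoted above:

```latex
\[
  P(\text{both partners carriers}) = \tfrac{1}{10}\times\tfrac{1}{10}=\tfrac{1}{100},
  \qquad
  P(\text{affected child}\mid\text{both carriers}) = \tfrac{1}{4},
\]
\[
  P(\text{affected child}) = \tfrac{1}{100}\times\tfrac{1}{4}=\tfrac{1}{400}.
\]
```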
**Glass wool**
Glass wool:
Glass wool is an insulating material made from glass fiber arranged using a binder into a texture similar to wool. The process traps many small pockets of air between the glass, and these small air pockets result in high thermal insulation properties. Glass wool is produced in rolls or in slabs, with different thermal and mechanical properties. It may also be produced as a material that can be sprayed or applied in place, on the surface to be insulated. The modern method for producing glass wool was invented by Games Slayter while he was working at the Owens-Illinois Glass Co. (Toledo, Ohio). He first applied for a patent for a new process to make glass wool in 1933.
Principles of function:
Gases possess poor thermal conduction properties compared to liquids and solids and thus make good insulation material if they can be trapped in materials so that much of the heat that flows through the material is forced to flow through the gas. In order to further augment the effectiveness of a gas (such as air) it may be disrupted into small cells which cannot effectively transfer heat by natural convection. Natural convection involves a larger bulk flow of gas driven by buoyancy and temperature differences, and it does not work well in small gas cells where there is little density difference to drive it, and the high surface area to volume ratios of the small cells retards bulk gas flow inside them by means of viscous drag.
Principles of function:
In order to accomplish the formation of small gas cells in man-made thermal insulation, glass and polymer materials can be used to trap air in a foam-like structure. The same principle used in glass wool is used in other man-made insulators such as rock wool, Styrofoam, wet suit neoprene foam fabrics, and fabrics such as Gore-Tex and polar fleece. The air-trapping property is also the insulation principle used in nature in down feathers and insulating hair such as natural wool.
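One way to make the effect of the trapped air concrete is the slab thermal-resistance relation R = d/k. The sketch below compares a glass-wool slab with solid glass of the same thickness; the conductivity values are assumed typical order-of-magnitude figures chosen for illustration, not data from the article.

```python
# Minimal illustration of why trapped air makes glass wool a good insulator:
# thermal resistance of a uniform slab is R = thickness / conductivity.
def r_value(thickness_m: float, conductivity_w_mk: float) -> float:
    """Thermal resistance of a uniform slab, in m^2*K/W."""
    return thickness_m / conductivity_w_mk

thickness = 0.10        # 100 mm slab
k_glass_wool = 0.04     # W/(m*K), assumed typical value (mostly still air)
k_solid_glass = 1.0     # W/(m*K), assumed typical value for bulk glass

print(f"Glass wool : R = {r_value(thickness, k_glass_wool):.1f} m2K/W")
print(f"Solid glass: R = {r_value(thickness, k_solid_glass):.2f} m2K/W")
```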
Manufacturing process:
Natural sand and recycled glass are mixed and heated to 1,450 °C, to produce glass. The fiberglass is usually produced by a method similar to making cotton candy, by forcing it through a fine mesh by centrifugal force, cooling on contact with the air. Cohesion and mechanical strength are obtained by the presence of a binder that “cements” the fibers together. A drop of binder is placed at each fiber intersection. The fiber mat is then heated to around 200 °C to polymerize the resin and is calendered to give it strength and stability. Finally, the wool mat is cut and packed in rolls or panels, palletized, and stored for use.
Uses:
Glass wool is a thermal insulation material consisting of intertwined and flexible glass fibers, which causes it to "package" air, resulting in a low density that can be varied through compression and binder content (as noted above, these air cells are the actual insulator). Glass wool can be a loose-fill material, blown into attics, or together with an active binder, sprayed on the underside of structures, sheets, and panels that can be used to insulate flat surfaces such as cavity wall insulation, ceiling tiles, curtain walls, and ducting. It is also used to insulate piping and for soundproofing.
Fiberglass batts and blankets:
Batts are precut, whereas blankets are available in continuous rolls. Compressing the material reduces its effectiveness. Cutting it to accommodate electrical boxes and other obstructions allows air a free path to cross through the wall cavity. One can install batts in two layers across an unfinished attic floor, perpendicular to each other, for increased effectiveness at preventing heat bridging. Blankets can cover joists and studs as well as the space between them. Batts can be challenging and unpleasant to hang under floors between joists; straps, or staple cloth or wire mesh across joists, can hold it up.
Fiberglass batts and blankets:
Gaps between batts (bypasses) can become sites of air infiltration or condensation (both of which reduce the effectiveness of the insulation) and require strict attention during the installation. By the same token careful weatherization and installation of vapour barriers is required to ensure that the batts perform optimally. Air infiltration can be also reduced by adding a layer of cellulose loose-fill on top of the material.
Health problems:
Fiberglass will irritate the eyes, skin, and the respiratory system. Potential symptoms include irritation of eyes, skin, nose, and throat, dyspnea (breathing difficulty), sore throat, hoarseness and cough. Fiberglass used for insulating appliances appears to produce human disease that is similar to asbestosis. Scientific evidence demonstrates that fiberglass is safe to manufacture, install and use when recommended work practices are followed to reduce temporary mechanical irritation. Unfortunately these work practices are not always followed, and fiberglass is often left exposed in basements that later become occupied. Fiberglass insulation should never be left exposed in an occupied area, according to the American Lung Association.
Health problems:
In June 2011, the United States' National Toxicology Program (NTP) removed from its Report on Carcinogens all biosoluble glass wool used in home and building insulation and for non-insulation products. Similarly, California's Office of Environmental Health Hazard Assessment ("OEHHA"), in November 2011, published a modification to its Proposition 65 listing to include only "Glass wool fibers (inhalable and biopersistent)." The United States' NTP and California's OEHHA action means that a cancer warning label for biosoluble fiber glass home and building insulation is no longer required under Federal or California law. All fiberglass wools commonly used for thermal and acoustical insulation were reclassified by the International Agency for Research on Cancer (IARC) in October 2001 as Not Classifiable as to carcinogenicity to humans (Group 3).

Fiberglass itself is resistant to mold. If mold is found in or on fiberglass, it is more likely that the binder is the source of the mold, since binders are often organic and more hygroscopic than the glass wool. In tests, glass wool was found to be highly resistant to the growth of mold. Only exceptional circumstances resulted in mold growth: very high relative humidity, 96% and above, or saturated glass wool, although saturated glass wool will only show moderate growth.
**Choline-phosphate cytidylyltransferase**
Choline-phosphate cytidylyltransferase:
Choline-phosphate cytidylyltransferase (EC 2.7.7.15) is an enzyme that catalyzes the chemical reaction

CTP + choline phosphate ⇌ diphosphate + CDP-choline

where the two substrates of this enzyme are CTP and choline phosphate, and the two products are diphosphate and CDP-choline. It is responsible for regulating phosphatidylcholine content in membranes.
Choline-phosphate cytidylyltransferase:
This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). The systematic name of this enzyme class is CTP:choline-phosphate cytidylyltransferase. Other names in common use include phosphorylcholine transferase, CDP-choline pyrophosphorylase, CDP-choline synthetase, choline phosphate cytidylyltransferase, CTP-phosphocholine cytidylyltransferase, CTP:phosphorylcholine cytidylyltransferase, cytidine diphosphocholine pyrophosphorylase, phosphocholine cytidylyltransferase, phosphorylcholine cytidylyltransferase, and phosphorylcholine:CTP cytidylyltransferase. This enzyme participates in aminophosphonate metabolism and glycerophospholipid metabolism.
Structural studies:
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1PEH and 1PEI.
**Phosphorylethanolamine**
Phosphorylethanolamine:
Phosphorylethanolamine or phosphoethanolamine is an ethanolamine derivative that is used to construct two different categories of phospholipids: glycerophospholipids, and sphingomyelins (more specifically, within the sphingomyelin class, sphingophospholipids). Phosphorylethanolamine is a polyprotic acid with two pKa values, at 5.61 and 10.39.

Phosphorylethanolamine has been falsely promoted as a cancer treatment.
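As an illustration of what the two pKa values imply, the Henderson–Hasselbalch relation can be used to estimate how ionized each group is at a given pH; the pH of 7.4 used below is an assumed example value, and the calculation is a textbook approximation rather than anything taken from the article.

```python
# Estimate the deprotonated fraction of each ionizable group at an assumed pH,
# using the Henderson-Hasselbalch relation and the pKa values quoted above.
def fraction_deprotonated(pka: float, ph: float) -> float:
    return 1.0 / (1.0 + 10 ** (pka - ph))

ph = 7.4
for label, pka in [("pKa1 = 5.61", 5.61), ("pKa2 = 10.39", 10.39)]:
    f = fraction_deprotonated(pka, ph)
    print(f"{label}: {f:.3f} deprotonated at pH {ph}")
```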
Effectiveness:
As a potential drug, phosphorylethanolamine has undergone human clinical trials. These were halted when no evidence of benefit was found. Edzard Ernst has called phosphorylethanolamine "the most peculiar case of Brazilian quackery".
Legality:
There has been ongoing controversy and litigation in Brazil with regard to its use as a cancer treatment without approval by the National Health Surveillance Agency. For years, Gilberto Chierice, a Chemistry Professor at the São Carlos campus of the University of São Paulo, used resources from a campus laboratory to unofficially manufacture, distribute, and promote the drug to cancer patients without it having gone through clinical testing. In September 2015, university administrators began preventing the professor from continuing this practice. In October 2015, several courts in Brazil ruled in favor of plaintiffs who wanted the right to try the compound. However, a state court overturned the lower courts' decision a month later. Jailson Bittencourt de Andrade, secretary for Brazil's science and technology ministry, said the ministry plans to fund further research on the compound, but that it will be years before a determination can be made about phosphorylethanolamine's safety and efficacy in humans.

On April 14, 2016, a law was passed in Brazil allowing the use of synthetic phosphorylethanolamine for cancer treatment, despite opposition from the Brazilian Medical Association, the Brazilian Society of Clinical Oncology, and the regulatory agency Anvisa. However, shortly after, the country's Supreme Court suspended the law.
**Ciência Hoje**
Ciência Hoje:
Ciência Hoje (Portuguese: Science Today) is a Brazilian science magazine created in 1982 by the Sociedade Brasileira para o Progresso da Ciência (SBPC). Its first edition was issued in 1982, during the SBPC's 34th annual meeting, held in Campinas. The magazine's first editors were the biologists Darcy Fountoura and Roberto Lent and the physicists Alberto Passos Guimarães and Ennio Candotti.

In 2003 the magazine became part of the Instituto Ciência Hoje (ICH), a public-interest social organization responsible for publishing the Ciência Hoje and Ciência Hoje das Crianças (Ciência Hoje for kids) magazines. The Institute also publishes Ciência Hoje na Escola (supplemental educational material) and science popularization books. The magazine deals with several fields of knowledge, including biology, mathematics, physics, chemistry, philosophy and sociology, and is written by journalists and researchers.
**Extinction (neurology)**
Extinction (neurology):
Extinction is a neurological disorder that impairs the ability to perceive multiple stimuli of the same type simultaneously. Extinction is usually caused by damage resulting in lesions on one side of the brain. Those who are affected by extinction have a lack of awareness in the contralesional side of space (towards the left side of space following a right-sided lesion) and a loss of exploratory search and other actions normally directed toward that side.
Effect of the laterality of the sensory inputs:
Unilateral lesions of various brain structures can cause a failure to sense contralesional stimuli in the absence of obvious sensory losses. This failure is defined as unilateral extinction if it occurs solely in the case of simultaneous bilateral sensory stimulations. Unilateral extinction can occur with bilateral visual, auditory and tactile stimuli, as well as with bilateral cross-modal stimulations of these sensory systems, and is more frequent following right hemisphere brain damage (RHD) than left hemisphere brain damage (LHD). Unilateral sensory extinction is thought by most to be explained by competition models of selective attention where each stimulus competes to gain access to limited pools of attentional resources. Because of a special role of the right hemisphere in attention, lesions of that hemisphere would disadvantage sensory inputs from the contralateral left hemispace relative from those from the right space.
Effect of the laterality of the sensory inputs:
The idea that inputs from the contralesional side of space may undergo faulty processing regardless of whether they are primarily directed to the damaged or the intact hemisphere has been supported for the most part by studies on olfactory neglect and extinction. The laterality of the sensory inputs makes a difference insofar as left-sided inputs directed to the intact left hemisphere are not affected by extinction, or are affected to a much smaller degree than left-sided inputs directed to the damaged right hemisphere. In other words, the lateral organization of sensory inputs should be reconsidered as a far from negligible factor in the cross-modal pattern of unilateral sensory extinction from unilateral brain damage.
Theories of unilateral extinction:
Two of the major theories of unilateral extinction are the sensory theory and the representational theory. The sensory theory involves an attenuation of sensory input to the right hemisphere from the contralateral side of the body and space. The representational theory involves a disordered internal representation of the contralateral side of the body and space, not dependent on sensory input. Recent literature suggests that unilateral extinction patients not only fail to respond to the contralateral external space, but also to internally represented stimuli with patients frequently locating the details of the left side on the right side.
Research and characteristics of extinction:
German neurologists documented clinical descriptions of extinction a century ago, but the syndrome subsequently received less systematic attention than other classical neurological syndromes, in part due to the scarcity of suitable theoretical ideas. Moreover, despite the dramatic loss of awareness for one side, extinction was rarely considered in discussions of the neural basis of conscious perceptual experience until recently. In extinction there is a spatially specific loss of awareness, which has been difficult to explain because many neural pathways conventionally associated with conscious perception (including primary sensory areas) remain intact in many patients. There is also much excitement about the possibility of relating awareness to neural substrates in extinction studies. In addition to revealing the critical lesion sites associated with the various clinical manifestations of visual neglect, a key message of current investigations is that there is a need to develop more sensitive and nuanced assessment tools to characterize the different facets of this heterogeneous syndrome. It will be important to bring laboratory tests into the clinic, examining specific cognitive functions in isolation, so that more precise descriptions of extinction can be combined with clinical measures that isolate specific cognitive functions and yield more consistent lesion-mapping results in the future.
Grouping effect in extinction:
Neglect and extinction often present simultaneously in patients. When looking at neglect, studies have demonstrated that there is more to its spatial nature than mere primary sensory loss. Proposals of this kind have become increasingly frequent in recent years, but attentional accounts of neglect are not universally popular. One primary component of neglect can be thought of as involving inattention, and extinction is by no means the whole story for neglect. Extinction nevertheless encapsulates a critical general principle that applies to most aspects of neglect, namely that the patient's spatial deficit is most apparent in competitive situations, where information towards the good ipsilesional side comes to dominate information that would otherwise be acknowledged towards the contralesional side. This may relate to the attentional limitation seen in neurologically healthy people, who cannot become aware of multiple targets all at once even if their sensory systems have transduced them. This is seen in patients with extinction, who are able to detect a single target in any location, with a deficit only for multiple concurrent targets. Extinction can therefore be regarded as a pathological, spatially specific exaggeration of the normal difficulty in distributing attention to multiple targets, which predicts that it should be reduced if the two competing events can be grouped together. Several recent findings from right-parietal patients with left extinction confirm this prediction, suggesting that grouping mechanisms may still operate, despite the patient's pathological spatial bias, to influence whether a particular stimulus will reach awareness. Thus extinction is reduced when the concurrent target events can be linked into a single subjective object, becoming allies rather than competitors in the bid to attract attention. Furthermore, the extent of residual processing of extinguished stimuli can vary from one patient to another, depending on the exact extent of their lesion. The examples of residual unconscious processing so far all concern the visual modality, although evidence is starting to emerge concerning similar effects for extinguished tactile and auditory stimuli.
Physiology/characteristics:
Extinction, like spatial neglect, is a deficit caused by large lesions in the vascular territory of the middle cerebral artery. Some studies report that extinction can occur after damage to either the right or the left hemisphere. Patients with extinction do not report stimuli located in space contralateral to their damaged hemisphere when the stimuli are presented simultaneously with ipsilesional stimuli. Extinction to double simultaneous stimulation cannot be attributed solely to a primary sensory deficit, since these patients are aware of contralateral stimuli presented individually; accounts therefore increasingly focus on the role of abnormal attentional processing of contralesional input. Because patients have a pathologically limited attentional capacity and an attentional bias towards ipsilesional space, they are more likely to attend to and become aware of ipsilesional stimuli at the expense of contralesional ones. It is noteworthy that the right temporoparietal junction (TPJ) has been linked to a number of cognitive functions that suggest a role in modulating competitive interactions between stimulus representations, which would converge with the importance of this area for the attentional deficit displayed by extinction patients. The critical lesion site responsible for the syndrome has been debated for more than a decade; different criteria have been used to identify extinction across patient samples, which has led to inconsistencies in the critical lesion sites reported across studies. Recent studies using measures such as ERPs and fMRI suggest that the parietal lobe mediates the internal representation of both body and space. In one sample, a cortical lesion was almost always found in the right parietal angular gyrus region, and patients typically showed damage to the inferior parietal areas of the brain. Function can be preserved in the superior parietal lobe even with inferior parietal damage. Parietal regions include some neurons with ipsilateral receptive fields, so that while the representation within one hemisphere emphasizes contralateral space overall, some ipsilateral representation is present as well. More specifically, the number of left-hemisphere neurons with visual receptive fields at a particular location decreases monotonically as one considers increasingly peripheral locations in the left visual field, and vice versa in the right hemisphere. This might go some way towards explaining why extinction is more severe after right-hemisphere lesions, leaving the patient with just the steep gradient of the intact left hemisphere.
Types:
Tactile Patients with tactile extinction are aware of being touched on a contralesional limb, but seem unaware of a similar contralesional touch if touched simultaneously on their ipsilesional limb. In the tactile modality, extinction occurs at the level of the hands, the face and neck, and the arms and legs, in the case of both symmetrical and asymmetrical stimulations, and even between the two sides of a single body part. An extinguished tactile stimulus does not access consciousness, but it may still interfere with perception of the ipsilesional one; considerable processing can therefore still take place prior to the level at which the loss of awareness arises. Extinction can also arise in bilateral conditions. In a patient study, bilateral trials with extinction still revealed residual early components over the right hemisphere in response to the extinguished left touches, although this somatosensory activity in the right hemisphere was reduced in amplitude compared with that evoked in the left hemisphere by right-hand stimulation. It can therefore be concluded that tactile extinction is defined in conditions of bilateral stimulation, and perhaps unilateral stimulation as well, and that it arises at a high level of tactile input processing.
Types:
Visual extinction Visual/spatial extinction, also known as pseudohemianopia, is the inability to perceive two simultaneous stimuli, one in each visual field. Those who show spatial extinction can detect a single item in either the left or right visual field but, under certain conditions of bilateral double simultaneous stimulation (DSS), fail to detect the item in one field. It is thus believed that extinction is caused by sensory neglect, and that it reflects an attentional deficit rather than a contralesional deficit in primary perceptual processing. In visual extinction this attentional deficit in perception applies mainly to attention in the relevant dimension. Visual extinction is greatest when objects have either the same color or the same shape.
Types:
Studies suggest that brain damage to the parietal lobe causes sensory neglect, which in turn causes extinction; spatial neglect specifically leads to visual extinction. Neglect often follows right inferior parietal damage and is characterized by impaired attention and a lack of awareness for stimuli on the contralesional (left) side of space. Many kinds of brain damage, such as stroke, brain tissue death, or tumors, can lead to neglect by causing unilateral damage to one side of the parietal lobe. Overall, a person with parietal brain damage still has intact visual fields.
Types:
One way to reduce the effects of extinction is grouping of items. Brightness-based and edge-based grouping reduce visual extinction, and they act in an additive way. Grouping with similar shapes also reduces the effects of extinction. This suggests that the attentional deficit in extinction can be compensated, at least in part, by the brain's object recognition systems.
Types:
While the parietal lobe deals with sensation and perception, the amygdala is involved in the perception of fear and emotion. By utilizing the perceptual abilities of the amygdala, emotional properties of contralesional stimuli can be extracted despite pathological inattention and unawareness, because the amygdala's perception of fear is automatic and requires no conscious effort or attention. However, studies have shown that the perception of fear can become habituated, so relying on the amygdala to reduce extinction can be unreliable.
Types:
Auditory extinction Auditory extinction is the failure to hear simultaneous stimuli presented on the left and right sides. Like other forms of extinction, it is caused by brain damage on one side of the brain, with awareness lost on the contralesional side. Affected people can report the presence of side-specific phonemes presented alone, yet extinguish them under simultaneous stimulation. This points to the fact that auditory extinction, like other forms of extinction, is more about acknowledging a stimulus on the contralesional side than about the actual sensing of the stimulus.
Types:
Just like other forms of extinction, auditory extinction is caused by brain damage resulting in lesions to one side of the parietal lobe. Auditory extinction appears to be a rather common phenomenon in the acute state of vascular disease; the acute state usually leads to neglect, which in turn leads to auditory extinction. Multiple lesions have an additive effect when they occur in combination with recent damage. When it comes to treating and recognizing the occurrence of auditory extinction, most sound can still be perceived with the other ear. The nature of sound, which possesses directionality but still fills space, makes it more amenable to misattribution of source location. This is called the 'prior entry' effect, in which a stimulus occurring at an attended location receives privileged access to awareness relative to one occurring at an unattended location.
Types:
Chemical extinction Little is known about the occurrence of unilateral extinction or neglect for sensory modalities that are traditionally thought to project to the brain in a predominantly uncrossed fashion, such as olfaction and taste. To date, only a limited number of investigations concerning the suppression of (or competition among) spatial information processed through the so-called chemical senses have been reported. Several reasons may account for this lack of research. First, the distinction between purely chemical and somatosensory information is often problematic. Second, it is widely assumed that olfaction and taste are senses that are not specialized for conveying spatial information.
Types:
Olfactory extinction Multiple case studies and investigations have been conducted on unilateral neglect within the visual, auditory, and tactile sensory modalities, but only three case studies have been reported on neglect within the olfactory modality. It is still unclear whether humans can localize the source of olfactory stimulation at all by distinguishing between odors processed through the right versus the left nostril. This is particularly true when the stimulus is a pure odorant rather than a trigeminal one, that is, when the odor does not cause any somatosensory stimulation of the kind known to be encoded by the trigeminal system. It was found that when pure odorants such as hydrogen sulfide or vanillin were used as stimulants, localization was random, whereas stimulation with carbon dioxide or menthol yielded identification rates of more than 96%. These results established that directional orientation, for single momentary odorous sensations, is only possible when the olfactory stimulants simultaneously excite the trigeminal somatosensory system. Thus it is possible to distinguish between the right and left side when the substances additionally or mainly excite the trigeminal nerve. RHD patients with left tactile and visual neglect have been reported to exhibit neglect and extinction of olfactory stimuli to the left nostril, in spite of the anatomically constrained projection of the olfactory input from that nostril to the intact left hemisphere. This finding was taken to suggest an impaired processing of all inputs from the contralesional side of space, regardless of whether such inputs were primarily directed to the damaged right hemisphere or the intact left hemisphere. Yet this interpretation is questionable, because normal subjects appear unable to localize a lateralized olfactory stimulus to a nostril without the aid of an associated stimulation of the crossed trigeminal input from that same nostril. Further, and in keeping with the above notion, on a number of unilateral and bilateral olfactory stimulations those patients identified the left-nostril input correctly but misplaced it to the right nostril, possibly because of a rightward response bias related to left-sided neglect. Specifically, when two different stimuli were delivered, one to each nostril, RHD patients consistently failed to report the stimulus delivered to the left nostril. Since the olfactory system predominantly projects its fibers ipsilaterally, these results are evidence supporting the representational theory of neglect. Patients affected by olfactory extinction also showed a large number of displacements, in that correctly identified stimuli presented to the left nostril were described as being in the right nostril.
Types:
Nevertheless, it is not completely possible to determine the exact influence exerted by nasal somatosensation in the olfactory extinction reported, since one of the odours considered to be a pure odorant was later found to be processed, probably, also by the trigeminal system. It appears that the human olfactory system is able to localize the source of olfactory stimulation only when the odour also elicits a trigeminal response. This contradicts the idea that trained participants can localize both trigeminal stimuli and pure odorants between the two nostrils; moreover, it was recently shown that naive participants were able to reliably localize pure odorants between the two nostrils. Clearly, if the ability of the olfactory system to extract spatial information from non-trigeminal stimuli turns out to be real, new light could be shed on the extinction phenomena described for odors. The olfactory sense also provides a unique mechanism to test the sensory and representational theories of unilateral neglect, because olfactory information projects predominantly to the ipsilateral hemisphere. Patients with a right-hemisphere lesion show left-sided neglect in other modalities and fail to respond to the left, contralateral nostril, thus supporting the representational theory. It was suggested that, since the olfactory sensory pathways to the cerebral hemispheres are not crossed, neglect should have occurred on the right side if a sensory loss were the cause of neglect. Neglect in the olfactory sense is compared with its occurrence in the trigeminal sense, a sense stimulated in the same manner as olfaction (chemically, through the nasal passages) but contralaterally innervated. Studies supporting the representational theory of unilateral neglect show that right-hemisphere lesion patients with left unilateral neglect failed to respond to their left, contralateral nostril on olfactory double simultaneous stimulation in spite of adequate olfactory sensitivity. This demonstrated that the occurrence of unilateral neglect is not a function of sensory attenuation; in fact, olfactory sensitivity did not correlate with the number of extinctions.
Types:
Extinction of taste The existence of neglect and/or extinction in taste is less explored than in olfaction, even though the human ability to localize taste stimuli presented on the tongue has been previously described. In the case of a patient with a wide parietal-occipital tumor, tactile extinction on the upper limbs and extinction of taste sensations on the left part of the tongue were seen when two tastes were presented simultaneously, one on each hemitongue. The results of the assessment revealed unimodal taste extinction and displacement of taste sensations under crossmodal taste-tactile stimulation. In particular, when a touch was delivered to the right hemitongue and a taste was applied to the left hemitongue, the patient repeatedly reported bilateral taste stimulation, thus surprisingly extinguishing the right touch and partially misplacing the left taste stimulus. Gustatory extinction also seems to occur as a consequence of severe tactile extinction. In a gustatory test done on patients with right brain damage (RHD) or left brain damage (LHD) and on healthy subjects, nine RHD patients with left hemitongue tactile extinction showed no gustatory extinction for either unilateral or bilateral stimulations. Contrary to the largely crossed cortical representation of the limbs and other exteroceptive body sites, the tongue has traditionally been thought to enjoy a bilateral representation in the cortex for both the somatic and gustatory modalities. In fact the tongue representation is bilateral in both modalities, but predominantly ipsilateral in the gustatory modality and predominantly contralateral in the tactile modality. The absence of left gustatory extinction in those patients can be attributed to the predominant channeling of left hemitongue taste inputs into the intact left hemisphere. Since no severe disturbances were manifested in any of the RHD or LHD patients, it seems reasonable to assume that gustatory extinction surfaces only as an accompaniment, and possibly a consequence, of a very marked extinction of tactile lingual sensitivity, or even a full-blown intraoral tactile hemineglect. There is still no clear evidence of the existence of purely gustatory extinction and/or neglect. More evidence suggesting a relationship between tactile and taste extinction in the tongue comes from a patient with a right parieto-occipital glioblastoma, tested with local applications of the four basic tastants (bitter, salty, sour, sweet), or with touch and pin-prick stimuli, to the two sides of the tongue. The patient missed most of the left hemitongue stimuli on bilateral stimulation or, less frequently, wrongly attributed to them the quality of the concurrent right stimulus. Combinations of taste and mechanical stimuli showed an interference of left-side stimuli on the perception of right stimuli, suggesting a complex alteration of the central tactile and gustatory representations of both sides of the tongue. Given that taste perception is usually co-mingled with tactile sensations, it is possible that left-sided gustatory extinction in severe left buccal hemineglect was secondary to left-sided lingual tactile extinction.
Types:
Multisensory Neglect and extinction can overlap for a single sensory modality, and even for multiple sensory modalities. Extinction affecting a unimodal sensory system can be influenced by the concurrent activation of another modality. Tactile extinction, for example, can be modulated by visual events presented simultaneously in the region near the tactile stimulation, increasing or reducing tactile perception depending upon the spatial arrangement of the stimuli. In one example of this visual-tactile relationship, visual stimulation on the ipsilesional side exacerbates contralesional tactile extinction, whereas presenting the visual and tactile stimuli on the same contralesional side can reduce the deficit. Tactile and visual information can also be integrated in other peripersonal space regions, such as around the face. A similar modulatory interaction exists between audition and touch: contralesional tactile detection is hampered by sounds in tactile extinction patients. Moreover, a multisensory effect observed in the space in front of the patients' head was even stronger when cross-modal auditory-tactile extinction was assessed in the space behind the patients' back. Different degrees of multisensory integration may thus occur depending upon the functional relevance of a given modality. Altogether, cross-modal interactions seem to be a rather frequent occurrence. The results of these studies underline the relevance of cross-modal integration in enhancing visual processing in neglect patients and in patients with visual field deficits.
**Ext.NET**
Ext.NET:
Ext.NET (known as Coolite until November 2010 and now part of the Object.Net suite) is a suite of professional ASP.NET AJAX Web Controls (Web forms + MVC) which includes the Sencha Ext JS JavaScript Framework.
The suite of web controls is built with a focus on bringing the Ext JS Framework to Visual Studio and the .NET Framework via a combination of server-side and client-side tools.
**Humbert surface**
Humbert surface:
In algebraic geometry, a Humbert surface, studied by Humbert (1899), is a surface in the moduli space of principally polarized abelian surfaces consisting of the surfaces with a symmetric endomorphism of some fixed discriminant.
**Thiophenol**
Thiophenol:
Thiophenol is an organosulfur compound with the formula C6H5SH, sometimes abbreviated as PhSH. This foul-smelling, colorless liquid is the simplest aromatic thiol. The chemical structures of thiophenol and its derivatives are analogous to those of phenols, except that the oxygen atom of the hydroxyl group (-OH) bonded to the aromatic ring is replaced by a sulfur atom. The prefix thio- denotes a sulfur-containing compound; when used before a root word that would normally name an oxygen-containing compound, as in 'thiol', it indicates that the alcohol oxygen atom is replaced by a sulfur atom.
Thiophenol:
Thiophenols also describes a class of compounds formally derived from thiophenol itself. All have a sulfhydryl group (-SH) covalently bonded to an aromatic ring. The organosulfur ligand in the medicine thiomersal is a thiophenol.
Synthesis:
There are several methods of synthesis for thiophenol and related compounds, although thiophenol itself is usually purchased for laboratory operations. Two methods are the reduction of benzenesulfonyl chloride with zinc and the action of elemental sulfur on phenylmagnesium halide or phenyllithium, followed by acidification.
Via the Newman–Kwart rearrangement, phenols (1) can be converted to the thiophenols (5) by conversion to the O-aryl dialkylthiocarbamates (3), followed by heating to give the isomeric S-aryl derivative (4).
Synthesis:
In the Leuckart thiophenol reaction, the starting material is an aniline, which is converted via the diazonium salt (ArN2X) and the xanthate (ArS(C=S)OR). Alternatively, sodium sulfide and a triazene can react in organic solvents to yield thiophenols. Thiophenol can be manufactured from chlorobenzene and hydrogen sulfide over alumina at 700 to 1,300 °F (371 to 704 °C); the disulfide is the primary byproduct. The reaction medium is corrosive and requires a ceramic or similar reactor lining. Aryl iodides and sulfur may also produce thiophenols under certain conditions.
Applications:
Thiophenols are used in the production of pharmaceuticals, including sulfonamides. The antifungal agents butoconazole and merthiolate are derivatives of thiophenols.
Properties and reactions:
Acidity Thiophenol has appreciably greater acidity than does phenol, as is shown by their pKa values (6.62 for thiophenol and 9.95 for phenol). A similar pattern is seen for H2S versus H2O, and all thiols versus the corresponding alcohols. Treatment of PhSH with strong base such as sodium hydroxide (NaOH) or sodium metal affords the salt sodium thiophenolate (PhSNa).
Alkylation The thiophenolate is highly nucleophilic, which translates to a high rate of alkylation. Thus, treatment of C6H5SH with methyl iodide in the presence of a base gives methyl phenyl sulfide, C6H5SCH3, a thioether often referred to as thioanisole. Such reactions are fairly irreversible. C6H5SH also adds to α,β-unsaturated carbonyls via Michael addition.
Oxidation Thiophenols, especially in the presence of base, are easily oxidized to diphenyl disulfide: 4 C6H5SH + O2 → 2 C6H5S-SC6H5 + 2 H2O. The disulfide can be reduced back to the thiol using sodium borohydride followed by acidification. This redox reaction is also exploited in the use of C6H5SH as a source of H atoms.
Chlorination Phenylsulfenyl chloride, a blood-red liquid (b.p. 41–42 °C, 1.5 mm Hg), can be prepared by the reaction of thiophenol with chlorine (Cl2).
Coordination to metals Metal cations form thiophenolates, some of which are polymeric. One example is "C6H5SCu," obtained by treating copper(I) chloride with thiophenol.
Safety:
The US National Institute for Occupational Safety and Health has established a recommended exposure limit at a ceiling of 0.1 ppm (0.5 mg m−3), and exposures not greater than 15 minutes.
**Seeding (computing)**
Seeding (computing):
In computing, and specifically peer-to-peer file sharing, seeding is the uploading of already downloaded content for others to download from. A peer, a computer that is connected to the network, becomes a seed when, having acquired the entire set of data, it begins to offer its upload bandwidth to other peers attempting to download the file. The data consists of small parts so that seeds can effectively share their content with other peers, handing out the missing pieces. A peer deliberately chooses to become a seed by leaving the upload task active once the content has downloaded. The motivation to seed is mainly to keep the file in circulation (as there is no central hub that continues uploading in the absence of seeders) and a desire not to act as a parasite. The opposite of a seed is a leech: a peer that downloads more than it uploads.
Background:
Seeding is a practice within peer-to-peer file sharing, a content distribution model that connects computers with the use of a peer-to-peer (P2P) software program in order to share desired content. An example of such a peer-to-peer program is BitTorrent. Peer-to-peer file sharing differs from the client-server model, where content is distributed directly from a server to its clients. To make peer-to-peer file sharing work effectively, content is divided into parts of 256 kilobytes (KB). This segmented downloading allows the parts that peers are missing to be transferred by seeds, and it also makes downloads faster, as pieces can be exchanged between peers. All peers (including seeds) sharing the same content are called a swarm. Data shared via peer-to-peer file sharing includes shared file content, computing cycles and disk storage, among other resources.
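How a swarm exchanges fixed-size pieces can be sketched in a few lines. The following is a minimal, illustrative Python model rather than BitTorrent's actual wire protocol; the class and function names are invented for the example.

```python
PIECE_SIZE = 256 * 1024  # 256 KB, the piece size mentioned above


def split_into_pieces(data: bytes, piece_size: int = PIECE_SIZE) -> list[bytes]:
    """Divide the shared content into fixed-size pieces (the last piece may be shorter)."""
    return [data[i:i + piece_size] for i in range(0, len(data), piece_size)]


class Peer:
    """A peer holds some subset of the pieces; a seed holds all of them."""

    def __init__(self, name: str, pieces: dict[int, bytes]):
        self.name = name
        self.pieces = pieces  # piece index -> piece data

    def missing(self, total: int) -> set[int]:
        return set(range(total)) - set(self.pieces)

    def is_seed(self, total: int) -> bool:
        return not self.missing(total)


def transfer(seed: "Peer", downloader: "Peer", total: int) -> None:
    """Hand out one piece the downloader is missing and the seed has."""
    wanted = downloader.missing(total) & set(seed.pieces)
    if wanted:
        index = min(wanted)
        downloader.pieces[index] = seed.pieces[index]


# Example: a seed shares a small "file" with an empty peer until it, too, becomes a seed.
content = b"example content" * 100_000
pieces = split_into_pieces(content)
seed = Peer("seed", dict(enumerate(pieces)))
downloader = Peer("downloader", {})
while not downloader.is_seed(len(pieces)):
    transfer(seed, downloader, len(pieces))
assert b"".join(downloader.pieces[i] for i in range(len(pieces))) == content
```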
Motivations:
In peer-to-peer file sharing, the strength of a swarm depends on user behaviour, as peers ideally upload more than they download. This is done by seeding, and there are different motivations and incentive mechanisms for it. Two popular incentive mechanisms are the reputation-based mechanism and the tit-for-tat mechanism. As the name suggests, the former is based on the reputation of a peer: peers with a good reputation get better treatment from uploaders. The tit-for-tat mechanism prevents peers from downloading content if they do not upload to the peers they download from, which forces a peer to upload. Although seeding is only a social norm, some scholars see the practice of uploading parts of the data bulk to others as a duty, claiming that "downloaders are forced to reward uploaders in order to compensate for their resource consumption and encourage further altruistic behaviour." Other scholars are milder and believe that a group of highly motivated seeders could already provide a notion of fairness by scheduling when to seed, uploading more effectively.
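The tit-for-tat idea can be illustrated with a small sketch: a peer preferentially "unchokes" (uploads to) the neighbours that have recently uploaded the most to it, so free riders receive little bandwidth in return. Real clients add refinements such as optimistic unchoking and time slots; the names and numbers below are illustrative only.

```python
def choose_unchoked(upload_from: dict[str, int], slots: int = 4) -> list[str]:
    """Tit for tat: reciprocate with the peers that contributed the most to us.

    upload_from maps a neighbour's id to the bytes it recently uploaded to us;
    only the top `slots` contributors are unchoked (allowed to download from us).
    """
    ranked = sorted(upload_from, key=upload_from.get, reverse=True)
    return ranked[:slots]


# Peers that upload nothing to us (free riders) fall outside the slots and are choked.
recent_contributions = {"alice": 900_000, "bob": 0, "carol": 450_000,
                        "dave": 120_000, "erin": 310_000}
print(choose_unchoked(recent_contributions))  # ['alice', 'carol', 'erin', 'dave']
```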
Threats:
Leechers, peers that download more than they upload, are a threat to peer-to-peer file sharing and the practice of seeding. Whereas the goal of seeding is to upload more than to download, thus contributing to the sharing of content, leechers stop uploading as soon as their download is finished. This means that seeders must upload more parts of the data bulk in order to guarantee a successful download for others in the swarm. Leeching is a form of "free riding" and is associated with the free-rider problem: temporary downloaders who, by not seeding, do not support the distribution of content.
Threats:
Although leeching is a threat to peer-to-peer sharing and an opposite of seeding, it is not regarded as an immediate problem. With downloads rising, upload is still guaranteed, though few contributors in the system account for most of the services.
Opportunities:
Research sees opportunities for seeding as a practice that fosters contribution within peer-to-peer file sharing and the distribution of content in the digital world in general. A term for this is economic traffic management (ETM), which is concerned with traffic management solutions that involve all peers, both seeders and leechers. ETM's goal is to unite peers that have different objectives and to make the sharing of content with peer-to-peer file sharing more efficient. Locality awareness is raised as the most promising concept by scholars. This entails stimulating peers to seed downloads in their neighbourhood, which speeds up uploads and saves inter-domain traffic over the Internet.
Opportunities:
Other opportunities that have arisen out of research are to schedule seeding and use models that reduce the power consumption of seeding computers.
Legal issues:
Peer-to-peer file sharing is legal; however, the platform may also be used to share illegal and pirated content. With sharing happening between peers all over the world, there is no supervision, so control over illegal or manipulated content is difficult. Seeding is a part of this, and a peer can therefore be involved in helping other peers download illegal content. One of the largest contenders against peer-to-peer sharing in general is the Motion Picture Association of America, which has led many lawsuits against peer-to-peer sharing websites. Notable examples include the Megaupload legal case and torrent websites like The Pirate Bay (see The Pirate Bay trial and The Pirate Bay raid).
**Land allocation decision support system**
Land allocation decision support system:
LADSS, or land allocation decision support system, is an agricultural land-use planning tool developed at The Macaulay Institute. More recently the term LADSS is used to refer to the research of the team behind the original planning tool.
Overview of research:
The focus of the research of the LADSS team has evolved over time from land use decision support towards policy support, climate change and the concepts of resilience and adaptive capacity.
Recent studies:
The team has recently published a study which examines, from a Scottish perspective, a number of alternative scenarios for reform of CAP Pillar 1 Area Payments. It focuses on two alternative classifications, the Macaulay Land Capability for Agriculture classification and Less Favoured Area Designations, and includes analysis of the redistribution of payments from the current historical system. The study is entitled Modelling Scenarios for CAP Pillar 1 Area Payments using Macaulay Land Capability for Agriculture (& Less Favoured Area Designations) and was used to inform the Pack Inquiry.
Recent studies:
The EU FP7 SMILE (Synergies in Multi-scale Inter-Linkages of Eco-social Systems) project focuses on the concept of social metabolism, which draws attention to how energy, material, money and ideas are utilised by society.
The Aquarius project aims to find and implement sustainable, integrated land-water management through engaging with land managers.
Recent studies:
The COP15 website provides a series of briefing and scoping papers, produced by the United Nations Environment Programme (UNEP) and contributed to by The Macaulay Institute, to raise the profile of the ecosystems approach at the UNFCCC 15th Conference of the Parties meeting in Copenhagen, addressing not just climate change mitigation and adaptation, but also poverty alleviation, disaster risk reduction, biodiversity loss and many other environmental issues.
LADSS planning tool:
The LADSS planning tool is implemented using the programming language G2 from Gensym alongside a Smallworld GIS application using the Magik programming language and an Oracle database. LADSS models crops using the CropSyst simulation model. LADSS also contains a livestock model plus social, environmental and economic impact assessments.
LADSS planning tool:
LADSS has been used to address climate change issues affecting agriculture in Scotland and Italy. Part of this work has involved the use of General Circulation Models (also known as Global climate models) to predict future climate scenarios. Other work has included a study into how Common Agricultural Policy reform will affect the uplands of Scotland, an assessment of agricultural sustainability and rural development research within the AGRIGRID project.
Resources:
Peer reviewed papers produced by LADSS are available for download in PDF format.
**Swiss wing**
Swiss wing:
Swiss wing (simplified Chinese: 瑞士鸡翼; traditional Chinese: 瑞士雞翼; Jyutping: seoi6 si6 gai1 jik6) is a kind of sweet soy sauce-flavored chicken wings served in some restaurants in Hong Kong. It is marinated in sauce made up of soy sauce, sugar, Chinese wine, and spices. Despite the name "Swiss", it is unrelated to Switzerland. Instead, it is believed to have originated in either Hong Kong or Guangzhou.
Naming:
There are no concrete answers as to the source or the name of the dish. One story — likely to be a mere urban legend — goes that a Westerner came across the dish "sweetened soya sauce chicken wings" in a restaurant, and asked a Chinese waiter what that was. The waiter, who did not speak perfect English, introduced the dish as "sweet wing". The customer misheard "sweet" as "Swiss", and the name "Swiss wing" has been used ever since.
Origin:
Some claim that the dish was invented by a local restaurant, the Tai Ping Koon. It is a common practice in Hong Kong restaurants to name a new dish after a place, which may or may not have any connection with the dish itself at all.
**Activin type 1 receptors**
Activin type 1 receptors:
The activin type I receptors transduce signals for a variety of members of the transforming growth factor beta superfamily of ligands. This family of cytokines and hormones includes activin, anti-Müllerian hormone (AMH), bone morphogenetic proteins (BMPs), and Nodal. They are involved in a host of physiological processes including growth, cell differentiation, homeostasis, osteogenesis, apoptosis and many other functions. There are three type I activin receptors: ACVR1, ACVR1B, and ACVR1C. Each binds to a specific type II receptor-ligand complex. Despite the large number of processes that these ligands regulate, they all operate through essentially the same pathway: a ligand binds to a type II receptor, which recruits and trans-phosphorylates a type I receptor. The type I receptor recruits a receptor-regulated SMAD (R-SMAD), which it phosphorylates. The R-SMAD then translocates to the nucleus, where it functions as a transcription factor.
**USP15**
USP15:
Ubiquitin carboxyl-terminal hydrolase 15 is an enzyme that in humans is encoded by the USP15 gene. Ubiquitin is a highly conserved protein involved in the regulation of intracellular protein breakdown, cell cycle regulation, and stress response; it is released from degraded proteins by disassembly of the polyubiquitin chains. The disassembly process is mediated by ubiquitin-specific proteases (USPs).
**Microkeratome**
Microkeratome:
A microkeratome is a precision surgical instrument with an oscillating blade designed for creating the corneal flap in LASIK or ALK surgery. The normal human cornea varies from around 500 to 600 micrometres in thickness; and in the LASIK procedure, the microkeratome creates an 83 to 200 micrometre thick flap.
This piece of equipment is used all around the world to cut the corneal flap.
The microkeratome is also used in Descemet's stripping automated endothelial keratoplasty (DSAEK), where it is used to slice a thin layer from the back of the donor cornea, which is then transplanted into the posterior cornea of the recipient. It was invented by Jose Barraquer and Cesar Carlos Carriazo in the 1950s in Colombia.
**Pound per hour**
Pound per hour:
Pound per hour is a mass flow unit. It is abbreviated as PPH or more conventionally as lb/h. Fuel flow for engines is usually expressed using this unit. It is particularly useful when dealing with gases or liquids, as volume flow varies more with temperature and pressure.
Pound per hour:
In the US utility industry, steam and water flows throughout turbine cycles are typically expressed in PPH, while in Europe these mass flows are usually expressed in metric tonnes per hour: 1 lb/h = 0.45359237 kg/h = 126.00 mg/s. Minimum fuel intake on a jumbo jet can be as low as 150 lb/h when idling; however, this is not enough to sustain flight.
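The conversions quoted above follow from the definition of the pound (1 lb = 0.45359237 kg exactly); a small Python sketch with hypothetical helper names:

```python
LB_TO_KG = 0.45359237  # exact definition of the international avoirdupois pound


def lb_per_hour_to_kg_per_hour(pph: float) -> float:
    return pph * LB_TO_KG


def lb_per_hour_to_mg_per_second(pph: float) -> float:
    return pph * LB_TO_KG * 1_000_000 / 3600  # kg -> mg, hour -> second


print(lb_per_hour_to_kg_per_hour(1))     # 0.45359237 kg/h
print(lb_per_hour_to_mg_per_second(1))   # ~125.998 mg/s, i.e. about 126 mg/s
print(lb_per_hour_to_kg_per_hour(150))   # ~68 kg/h for an idling jumbo jet
```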
**5-HT6 receptor**
5-HT6 receptor:
The 5HT6 receptor is a subtype of 5HT receptor that binds the endogenous neurotransmitter serotonin (5-hydroxytryptamine, 5HT). It is a G protein-coupled receptor (GPCR) that is coupled to Gs and mediates excitatory neurotransmission. HTR6 denotes the human gene encoding for the receptor.
Distribution:
The 5HT6 receptor is expressed almost exclusively in the brain. It is distributed in various areas including, but not limited to, the olfactory tubercle, cerebral cortex (frontal and entorhinal regions), nucleus accumbens, striatum, caudate nucleus, hippocampus, and the molecular layer of the cerebellum. Based on its abundance in extrapyramidal, limbic, and cortical regions it can be suggested that the 5HT6 receptor plays a role in functions like motor control, emotionality, cognition, and memory.
Function:
Blockade of central 5HT6 receptors has been shown to increase glutamatergic and cholinergic neurotransmission in various brain areas, whereas activation enhances GABAergic signaling in a widespread manner. Antagonism of 5HT6 receptors also facilitates dopamine and norepinephrine release in the frontal cortex, while stimulation has the opposite effect.
Function:
As a drug target for antagonists Despite the 5HT6 receptor having a functionally excitatory action, it is largely co-localized with GABAergic neurons and therefore produces an overall inhibition of brain activity. In parallel with this, 5HT6 antagonists are hypothesized to improve cognition, learning, and memory. Agents such as latrepirdine, idalopirdine (Lu AE58054), and intepirdine (SB-742,457/RVT-101) were evaluated as novel treatments for Alzheimer's disease and other forms of dementia. However, phase III trials of latrepirdine, idalopirdine, and intepirdine have failed to demonstrate efficacy.
Function:
5HT6 antagonists have also been shown to reduce appetite and produce weight loss, and as a result, PRX-07034, BVT-5,182, and BVT-74,316 are being investigated for the treatment of obesity.
Function:
As a drug target for agonists Recently, the 5HT6 agonists WAY-181,187 and WAY-208,466 have been demonstrated to be active in rodent models of depression, anxiety, and obsessive-compulsive disorder (OCD), and such agents may be useful treatments for these conditions. Additionally, indirect 5HT6 activation may play a role in the therapeutic benefits of serotonergic antidepressants like the selective serotonin reuptake inhibitors (SSRIs) and tricyclic antidepressants (TCAs).
Ligands:
A large number of selective 5HT6 ligands have now been developed.
Agonists
Full agonists
Partial agonists:
E-6801
E-6837 – partial agonist at rat 5-HT6 receptors; orally active in rats, and caused weight loss with chronic administration
EMD-386,088 – potent partial agonist (EC50 = 1 nM) but non-selective
LSD – Emax = 60%
Antagonists and inverse agonists
Genetics:
Polymorphisms in the HTR6 gene are associated with neuropsychiatric disorders. For example, an association between the C267T (rs1805054) polymorphism and Alzheimer's disease has been shown.
Others have studied the polymorphism in relation to Parkinson's disease.
**Disclosure widget**
Disclosure widget:
A disclosure widget, expander, or disclosure triangle is a graphical control element that is used to show or hide a collection of "child" widgets in a specific area of the interface. The widget hides non-essential settings or information and thus makes the dialog less cluttered.
Disclosure widget:
The disclosure widget may be expanded or collapsed by the user; when this occurs, the containing window may be expanded to accommodate the increased space requirement. The state of the widget is often signified by a label with a triangle next to it, pointing sideways when it is collapsed and downward when it is expanded (corresponding to the widget's current state), or a button with an arrow pointing downward when it is collapsed and upward when it is expanded (corresponding to how the widget will change state if the button is clicked). Some disclosure widgets can appear as a plus button when collapsed and a minus button when expanded.
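The expand/collapse behaviour described above can be sketched with a small Tkinter toy. This is an illustrative example, not the implementation of any particular toolkit's widget; the class name and triangle glyphs are arbitrary choices.

```python
import tkinter as tk


class Disclosure(tk.Frame):
    """A toy disclosure widget: a header that shows or hides its child area."""

    def __init__(self, parent, label):
        super().__init__(parent)
        self.expanded = False
        self.label = label
        self.header = tk.Button(self, text="\u25B6 " + label, anchor="w",
                                relief="flat", command=self.toggle)
        self.header.pack(fill="x")
        self.body = tk.Frame(self)  # container for the "child" widgets, hidden at first

    def toggle(self):
        # Expanding shows the children and rotates the triangle; collapsing hides them.
        self.expanded = not self.expanded
        arrow = "\u25BC" if self.expanded else "\u25B6"
        self.header.config(text=f"{arrow} {self.label}")
        if self.expanded:
            self.body.pack(fill="x", padx=16)
        else:
            self.body.pack_forget()  # the window shrinks back around the header


if __name__ == "__main__":
    root = tk.Tk()
    section = Disclosure(root, "Advanced settings")
    tk.Checkbutton(section.body, text="Non-essential option").pack(anchor="w")
    section.pack(fill="x")
    root.mainloop()
```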
Disclosure widget:
In some implementations, the widget may be able to remember its state between invocations; this may increase user familiarity with the interface. In other implementations, the widget may disappear when clicked in order to make room for the newly revealed controls; this state is not remembered.
Some user interface designers call this widget a "norgie", or "twistie".
**Homeobox A10**
Homeobox A10:
Homeobox protein Hox-A10 is a protein that in humans is encoded by the HOXA10 gene.
Function:
In vertebrates, the genes encoding the class of transcription factors called homeobox genes are found in clusters named A, B, C, and D on four separate chromosomes. Expression of these proteins is spatially and temporally regulated during embryonic development. This gene is part of the A cluster on chromosome 7 and encodes a DNA-binding transcription factor that may regulate gene expression, morphogenesis, and differentiation. More specifically, it may function in fertility, embryo viability, and regulation of hematopoietic lineage commitment. Alternatively spliced transcript variants encoding different isoforms have been described.
Function:
Downregulation of HOXA10 is observed in the human and baboon decidua after implantation and this downregulation promotes trophoblast invasion by activating STAT3.
Interactions:
Homeobox A10 has been shown to interact with PTPN6.
**Yule–Simon distribution**
Yule–Simon distribution:
In probability and statistics, the Yule–Simon distribution is a discrete probability distribution named after Udny Yule and Herbert A. Simon. Simon originally called it the Yule distribution. The probability mass function (pmf) of the Yule–Simon(ρ) distribution is $f(k;\rho)=\rho\,\mathrm{B}(k,\rho+1)$ for integer $k \ge 1$ and real $\rho > 0$, where $\mathrm{B}$ is the beta function. Equivalently, the pmf can be written in terms of the falling factorial as $f(k;\rho)=\dfrac{\rho\,\Gamma(\rho+1)}{(k+\rho)^{\underline{\rho+1}}}$, where $\Gamma$ is the gamma function and $(k+\rho)^{\underline{\rho+1}}=\Gamma(k+\rho+1)/\Gamma(k)$ denotes the falling factorial. Thus, if $\rho$ is an integer, $f(k;\rho)=\dfrac{\rho\,\rho!\,(k-1)!}{(k+\rho)!}$.
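Assuming the pmf above, it can be evaluated numerically with log-gamma functions; the function name in this short Python sketch is our own.

```python
from math import exp, lgamma


def yule_simon_pmf(k: int, rho: float) -> float:
    """pmf f(k; rho) = rho * B(k, rho + 1), computed with log-gammas for stability."""
    log_beta = lgamma(k) + lgamma(rho + 1) - lgamma(k + rho + 1)
    return rho * exp(log_beta)


# The probabilities over k = 1, 2, 3, ... sum to 1.
print(yule_simon_pmf(1, 3.0))                                  # 0.75 for rho = 3
print(sum(yule_simon_pmf(k, 3.0) for k in range(1, 10_000)))   # approximately 1.0
```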
Yule–Simon distribution:
The parameter ρ can be estimated using a fixed point algorithm. The probability mass function f has the property that, for sufficiently large k,
$$f(k;\rho)\approx\frac{\rho\,\Gamma(\rho+1)}{k^{\rho+1}}\propto\frac{1}{k^{\rho+1}}.$$
This means that the tail of the Yule–Simon distribution is a realization of Zipf's law: f(k;ρ) can be used to model, for example, the relative frequency of the k-th most frequent word in a large collection of text, which according to Zipf's law is inversely proportional to a (typically small) power of k.
Occurrence:
The Yule–Simon distribution arose originally as the limiting distribution of a particular model studied by Udny Yule in 1925 to analyze the growth in the number of species per genus in some higher taxa of biotic organisms. The Yule model makes use of two related Yule processes, where a Yule process is defined as a continuous-time birth process which starts with one or more individuals. Yule proved that when time goes to infinity, the limit distribution of the number of species in a genus selected uniformly at random has a specific form and exhibits a power-law behavior in its tail. Thirty years later, the Nobel laureate Herbert A. Simon proposed a time-discrete preferential attachment model to describe the appearance of new words in a large piece of text. Interestingly enough, the limit distribution of the number of occurrences of each word, when the number of words diverges, coincides with that of the number of species belonging to the randomly chosen genus in the Yule model, for a specific choice of the parameters. This fact explains the designation Yule–Simon distribution that is commonly assigned to that limit distribution. In the context of random graphs, the Barabási–Albert model also exhibits an asymptotic degree distribution that equals the Yule–Simon distribution for a specific choice of the parameters, and still presents power-law characteristics for more general choices of the parameters. The same happens also for other preferential attachment random graph models. The preferential attachment process can also be studied as an urn process in which balls are added to a growing number of urns, each ball being allocated to an urn with probability linear in the number (of balls) the urn already contains.
Occurrence:
The distribution also arises as a compound distribution, in which the parameter of a geometric distribution is treated as a function of a random variable having an exponential distribution. Specifically, assume that $W$ follows an exponential distribution with scale $1/\rho$, or equivalently rate $\rho$:
$$W \sim \mathrm{Exponential}(\rho), \qquad h(w;\rho)=\rho\,e^{-\rho w}.$$
Then a Yule–Simon distributed variable $K$ has the following geometric distribution conditional on $W$:
$$K \sim \mathrm{Geometric}\left(e^{-W}\right).$$
The pmf of a geometric distribution is $g(k;p)=p(1-p)^{k-1}$ for $k\in\{1,2,\ldots\}$. The Yule–Simon pmf is then the following exponential-geometric compound distribution:
$$f(k;\rho)=\int_0^{\infty} g\!\left(k;e^{-w}\right)h(w;\rho)\,\mathrm{d}w.$$
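This compounding also gives a simple way to draw Yule–Simon samples; a sketch using NumPy (the final line is only a sanity check against the pmf value computed earlier):

```python
import numpy as np


def sample_yule_simon(rho: float, size: int, rng=None) -> np.ndarray:
    """Draw Yule-Simon variates via the exponential-geometric compounding above."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.exponential(scale=1.0 / rho, size=size)  # W ~ Exponential(rate rho)
    p = np.exp(-w)                                   # conditional success probability
    return rng.geometric(p)                          # K | W ~ Geometric(exp(-W)), support 1, 2, ...


samples = sample_yule_simon(rho=3.0, size=100_000, rng=np.random.default_rng(0))
print(np.mean(samples == 1))  # should be close to f(1; 3) = 0.75
```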
Occurrence:
The maximum likelihood estimator for the parameter ρ, given the observations $k_1,k_2,k_3,\ldots,k_N$, is the solution to the fixed point equation
$$\rho^{(t+1)}=\frac{N+a-1}{b+\sum_{i=1}^{N}\sum_{j=1}^{k_i}\frac{1}{\rho^{(t)}+j}},$$
where $b=0$ and $a=1$ are the rate and shape parameters of the gamma distribution prior on ρ. This algorithm is derived by Garcia by directly optimizing the likelihood. Roberts and Roberts generalize the algorithm to Bayesian settings with the compound geometric formulation described above. Additionally, Roberts and Roberts are able to use the expectation-maximisation (EM) framework to show convergence of the fixed point algorithm, and they derive the sub-linearity of its convergence rate. They also use the EM formulation to give two alternate derivations of the standard error of the estimator from the fixed point equation. The variance of the λ estimator is
$$\operatorname{Var}(\hat{\lambda})=\frac{1}{\frac{N}{\hat{\lambda}^{2}}-\sum_{i=1}^{N}\sum_{j=1}^{k_i}\frac{1}{(\hat{\lambda}+j)^{2}}},$$
and the standard error is the square root of the quantity of this estimate divided by N.
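A direct transcription of the fixed point update, with a = 1 and b = 0 (a flat prior), might look like the following sketch; the variable names are our own and no convergence safeguards are included.

```python
def estimate_rho(observations, a: float = 1.0, b: float = 0.0,
                 rho0: float = 1.0, iterations: int = 200) -> float:
    """Fixed point iteration for the Yule-Simon maximum likelihood estimate of rho."""
    n = len(observations)
    rho = rho0
    for _ in range(iterations):
        denom = b + sum(1.0 / (rho + j) for k in observations for j in range(1, k + 1))
        rho = (n + a - 1.0) / denom
    return rho


# With data simulated from rho = 3 the estimate should land near 3, e.g.:
# data = sample_yule_simon(3.0, 100_000)   # using the sampler sketched above
# print(estimate_rho(list(data)))
```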
Generalizations:
The two-parameter generalization of the original Yule distribution replaces the beta function with an incomplete beta function. The probability mass function of the generalized Yule–Simon(ρ, α) distribution is defined as
$$f(k;\rho,\alpha)=\frac{\rho}{1-\alpha^{\rho}}\,\mathrm{B}_{1-\alpha}(k,\rho+1),$$
with $0\le\alpha<1$, where $\mathrm{B}_x(\cdot,\cdot)$ denotes the incomplete beta function. For α = 0 the ordinary Yule–Simon(ρ) distribution is obtained as a special case. The use of the incomplete beta function has the effect of introducing an exponential cutoff in the upper tail.
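Assuming SciPy's conventions (scipy.special.betainc returns the regularized incomplete beta function, so it is rescaled by the complete beta function here), the generalized pmf can be sketched as follows.

```python
from scipy.special import beta, betainc


def generalized_yule_simon_pmf(k: int, rho: float, alpha: float) -> float:
    """pmf of the generalized Yule-Simon(rho, alpha) distribution, 0 <= alpha < 1."""
    # Unregularized incomplete beta: B_x(a, b) = betainc(a, b, x) * B(a, b).
    incomplete = betainc(k, rho + 1, 1.0 - alpha) * beta(k, rho + 1)
    return rho / (1.0 - alpha ** rho) * incomplete


print(generalized_yule_simon_pmf(1, 3.0, 0.0))  # alpha = 0 recovers the ordinary pmf: 0.75
print(generalized_yule_simon_pmf(5, 3.0, 0.1))  # the exponential cutoff lowers the tail mass
```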
**Comparison of vector algebra and geometric algebra**
Comparison of vector algebra and geometric algebra:
Geometric algebra is an extension of vector algebra, providing additional algebraic structures on vector spaces, with geometric interpretations.
Vector algebra uses all dimensions and signatures, as does geometric algebra, notably 3+1 spacetime as well as 2 dimensions.
Basic concepts and operations:
Geometric algebra (GA) is an extension or completion of vector algebra (VA). The reader is herein assumed to be familiar with the basic concepts and operations of VA, and this article will mainly concern itself with operations in G3, the GA of 3D space (the article is not intended to be mathematically rigorous). In GA, vectors are not normally written boldface, as the meaning is usually clear from the context.
Basic concepts and operations:
The fundamental difference is that GA provides a new product of vectors called the "geometric product". Elements of GA are graded multivectors: scalars are grade 0, usual vectors are grade 1, bivectors are grade 2, and the highest grade (3 in the 3D case) is traditionally called the pseudoscalar and designated I. The ungeneralized 3D vector form of the geometric product is ab = a⋅b + a∧b, that is, the sum of the usual dot (inner) product and the outer (exterior) product (the latter is closely related to the cross product and will be explained below).
Basic concepts and operations:
In VA, entities such as pseudovectors and pseudoscalars need to be bolted on, whereas in GA the equivalent bivector and pseudovector respectively exist naturally as subspaces of the algebra.
Basic concepts and operations:
For example, applying vector calculus in 2 dimensions, such as to compute torque or curl, requires adding an artificial 3rd dimension and extending the vector field to be constant in that dimension, or alternately considering these to be scalars. The torque or curl is then a normal vector field in this 3rd dimension. By contrast, geometric algebra in 2 dimensions defines these as a pseudoscalar field (a bivector), without requiring a 3rd dimension. Similarly, the scalar triple product is ad hoc, and can instead be expressed uniformly using the exterior product and the geometric product.
Translations between formalisms:
Here are some comparisons between standard R3 vector relations and their corresponding exterior product and geometric product equivalents. All the exterior and geometric product equivalents here are good for more than three dimensions, and some also for two. In two dimensions the cross product is undefined even if what it describes (like torque) is perfectly well defined in a plane without introducing an arbitrary normal vector outside of the space.
Translations between formalisms:
Many of these relationships only require the introduction of the exterior product to generalize, but since that may not be familiar to somebody with only a background in vector algebra and calculus, some examples are given.
Cross and exterior products u×v is perpendicular to the plane containing u and v. u∧v is an oriented representation of the same plane.
Translations between formalisms:
We have the pseudoscalar I = e1e2e3 (right-handed orthonormal frame), and so e1I = Ie1 = e2e3 returns a bivector, while I(e2∧e3) = Ie2e3 = −e1 returns a vector perpendicular to the e2∧e3 plane. This yields a convenient definition for the cross product of traditional vector algebra: u×v = −I(u∧v) (this is antisymmetric). Relevant is the distinction between polar and axial vectors in vector algebra, which is natural in geometric algebra as the distinction between vectors and bivectors (elements of grade two).
Translations between formalisms:
The I here is a unit pseudoscalar of Euclidean 3-space, which establishes a duality between the vectors and the bivectors, and is named so because of the expected property
$$I^2=(e_1e_2e_3)^2=e_1e_2e_3e_1e_2e_3=-e_1e_2e_1e_3e_2e_3=e_1e_1e_2e_3e_2e_3=-e_3e_2e_2e_3=-1.$$
The equivalence of the $\mathbb{R}^3$ cross product and the exterior product expression above can be confirmed by direct multiplication of $-I=-e_1e_2e_3$ with a determinant expansion of the exterior product
$$u\wedge v=\sum_{1\le i<j\le 3}(u_i v_j-v_i u_j)\,e_i\wedge e_j=\sum_{1\le i<j\le 3}(u_i v_j-v_i u_j)\,e_i e_j.$$
See also Cross product as an exterior product. Essentially, the geometric product of a bivector and the pseudoscalar of Euclidean 3-space provides a method of calculation of the Hodge dual.
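The duality u×v = −I(u∧v) can be checked numerically by comparing the bivector coefficients of u∧v with the components of the cross product, using the correspondence e2e3 → e1, e1e3 → −e2, e1e2 → e3 implied above. A small NumPy sketch, with no geometric algebra library assumed:

```python
import numpy as np


def wedge_coefficients(u: np.ndarray, v: np.ndarray) -> dict[str, float]:
    """Coefficients of u ^ v on the bivector basis e1^e2, e1^e3, e2^e3."""
    return {
        "e12": u[0] * v[1] - u[1] * v[0],
        "e13": u[0] * v[2] - u[2] * v[0],
        "e23": u[1] * v[2] - u[2] * v[1],
    }


u, v = np.array([1.0, 2.0, 3.0]), np.array([-4.0, 0.5, 2.0])
w = wedge_coefficients(u, v)

# Applying -I maps e2^e3 -> e1, e1^e3 -> -e2, e1^e2 -> e3, recovering the cross product.
dual = np.array([w["e23"], -w["e13"], w["e12"]])
print(np.allclose(dual, np.cross(u, v)))  # True
```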
Translations between formalisms:
Cross and commutator products The pseudovector/bivector subalgebra of the geometric algebra of Euclidean 3-dimensional space forms a 3-dimensional vector space itself. Let the standard unit pseudovectors/bivectors of the subalgebra be i = e2e3, j = e1e3, and k = e1e2, and let the anti-commutative commutator product be defined as A×B = ½(AB − BA), where AB is the geometric product. The commutator product is distributive over addition and linear, as the geometric product is distributive over addition and linear.
Translations between formalisms:
From the definition of the commutator product, i, j and k satisfy the following equalities: i×j = k, j×k = i, k×i = j, which imply, by the anti-commutativity of the commutator product, that j×i = −k, k×j = −i, i×k = −j. The anti-commutativity of the commutator product also implies that i×i = j×j = k×k = 0. These equalities and properties are sufficient to determine the commutator product of any two pseudovectors/bivectors A and B. As the pseudovectors/bivectors form a vector space, each pseudovector/bivector can be written as the sum of three orthogonal components parallel to the standard basis pseudovectors/bivectors: A = a1i + a2j + a3k and B = b1i + b2j + b3k. Their commutator product A×B can be expanded using its distributive property: A×B = (a2b3 − a3b2)i + (a3b1 − a1b3)j + (a1b2 − a2b1)k, which is precisely the cross product in vector algebra for pseudovectors.
Translations between formalisms:
Norm of a vector Ordinarily, $\lVert u\rVert^2 = u\cdot u$. Making use of the geometric product and the fact that the exterior product of a vector with itself is zero: $uu = u^2 = u\cdot u + u\wedge u = u\cdot u = \lVert u\rVert^2$.
Lagrange identity In three dimensions the product of two vector lengths can be expressed in terms of the dot and cross products: $\lVert u\rVert^2\lVert v\rVert^2 = (u\cdot v)^2 + \lVert u\times v\rVert^2$. The corresponding generalization expressed using the geometric product is $\lVert u\rVert^2\lVert v\rVert^2 = (u\cdot v)^2 - (u\wedge v)^2$. This follows from expanding the geometric product of a pair of vectors with its reverse: $(uv)(vu) = (u\cdot v + u\wedge v)(u\cdot v - u\wedge v)$.
Determinant expansion of cross and wedge products
$$u\times v=\sum_{i<j}\begin{vmatrix}u_i & u_j\\ v_i & v_j\end{vmatrix}\, e_i\times e_j, \qquad u\wedge v=\sum_{i<j}\begin{vmatrix}u_i & u_j\\ v_i & v_j\end{vmatrix}\, e_i\wedge e_j.$$
Linear algebra texts will often use the determinant for the solution of linear systems by Cramer's rule or for matrix inversion.
Translations between formalisms:
An alternative treatment is to axiomatically introduce the wedge product, and then demonstrate that this can be used directly to solve linear systems. This is shown below, and does not require sophisticated math skills to understand.
It is then possible to define determinants as nothing more than the coefficients of the wedge product in terms of "unit k-vector" expansions ($e_i \wedge e_j$ terms) as above.
A one-by-one determinant is the coefficient of e1 for an R1 1-vector.
Translations between formalisms:
A two-by-two determinant is the coefficient of $e_1 \wedge e_2$ for an $\mathbb{R}^2$ bivector. A three-by-three determinant is the coefficient of $e_1 \wedge e_2 \wedge e_3$ for an $\mathbb{R}^3$ trivector. ... When linear system solution is introduced via the wedge product, Cramer's rule follows as a side-effect, and there is no need to lead up to the end results with definitions of minors, matrices, matrix invertibility, adjoints, cofactors, Laplace expansions, theorems on determinant multiplication and row and column exchanges, and so forth.
Translations between formalisms:
Matrix Related Matrix inversion (Cramer's rule) and determinants can be naturally expressed in terms of the wedge product.
The use of the wedge product in the solution of linear equations can be quite useful for various geometric product calculations.
Traditionally, instead of using the wedge product, Cramer's rule is usually presented as a generic algorithm that can be used to solve linear equations of the form $Ax = b$ (or equivalently to invert a matrix), namely $x = \frac{1}{\det A}\,\operatorname{adj}(A)\, b$.
This is a useful theoretic result. For numerical problems row reduction with pivots and other methods are more stable and efficient.
When the wedge product is coupled with the Clifford product and put into a natural geometric context, the fact that the determinants are used in the expression of RN parallelogram area and parallelepiped volumes (and higher-dimensional generalizations thereof) also comes as a nice side-effect.
As is also shown below, results such as Cramer's rule also follow directly from the wedge product's selection of non-identical elements. The result is then simple enough that it could be derived easily if required instead of having to remember or look up a rule.
Two variables example
$$\begin{bmatrix} a & b \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = ax + by = c.$$
Pre- and post-multiplying by $a$ and $b$,
$$(ax + by) \wedge b = (a \wedge b)x = c \wedge b$$
$$a \wedge (ax + by) = (a \wedge b)y = a \wedge c$$
Provided $a \wedge b \ne 0$, the solution is
$$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{a \wedge b}\begin{bmatrix} c \wedge b \\ a \wedge c \end{bmatrix}.$$
For $a, b \in \mathbb{R}^2$, this is Cramer's rule, since the $e_1 \wedge e_2$ factors of the wedge products
$$u \wedge v = \begin{vmatrix} u_1 & u_2 \\ v_1 & v_2 \end{vmatrix} e_1 \wedge e_2$$
divide out.
Similarly, for three, or $N$ variables, the same ideas hold
$$\begin{bmatrix} a & b & c \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = d$$
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \frac{1}{a \wedge b \wedge c}\begin{bmatrix} d \wedge b \wedge c \\ a \wedge d \wedge c \\ a \wedge b \wedge d \end{bmatrix}$$
Again, for the three variable three equation case this is Cramer's rule, since the $e_1 \wedge e_2 \wedge e_3$ factors of all the wedge products divide out, leaving the familiar determinants.
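For concreteness, here is a small Python sketch (illustrative, with made-up numbers) of the two-variable case: in $\mathbb{R}^2$ each wedge product is just its $e_1 \wedge e_2$ coefficient, i.e. a 2×2 determinant, so the quotients above reduce to Cramer's rule.

```python
# Solve a x + b y = c for column vectors a, b, c in R^2 using wedge-product
# quotients; cross-checked against a standard linear solver.
import numpy as np

def wedge_2d(p, q):
    """Coefficient of e1 ^ e2 in p ^ q for p, q in R^2."""
    return p[0]*q[1] - p[1]*q[0]

a, b = np.array([2.0, 1.0]), np.array([1.0, 3.0])   # columns of the system
c = np.array([5.0, 10.0])                           # right-hand side

denom = wedge_2d(a, b)                              # a ^ b (must be nonzero)
x = wedge_2d(c, b) / denom                          # x = (c ^ b) / (a ^ b)
y = wedge_2d(a, c) / denom                          # y = (a ^ c) / (a ^ b)

assert np.allclose(np.linalg.solve(np.column_stack([a, b]), c), [x, y])
print(x, y)   # 1.0 3.0
```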
A numeric example with three equations and two unknowns: In case there are more equations than variables and the equations have a solution, then each of the k-vector quotients will be a scalar.
To illustrate here is the solution of a simple example with three equations and two unknowns.
$$\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} x + \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} y = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}$$
The right wedge product with $(1,1,1)$ solves for $x$
$$\left((1,1,0) \wedge (1,1,1)\right) x = (1,1,2) \wedge (1,1,1)$$
and a left wedge product with $(1,1,0)$ solves for $y$
$$\left((1,1,0) \wedge (1,1,1)\right) y = (1,1,0) \wedge (1,1,2).$$
Observe that both of these equations have the same factor, so one can compute this only once (if this was zero it would indicate the system of equations has no solution).
Collection of results for $x$ and $y$ yields a Cramer's rule-like form:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{(1,1,0) \wedge (1,1,1)}\begin{bmatrix} (1,1,2) \wedge (1,1,1) \\ (1,1,0) \wedge (1,1,2) \end{bmatrix}.$$
Writing $e_i \wedge e_j = e_{ij}$, we have the result:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{e_{13} + e_{23}}\begin{bmatrix} -e_{13} - e_{23} \\ 2e_{13} + 2e_{23} \end{bmatrix} = \begin{bmatrix} -1 \\ 2 \end{bmatrix}.$$
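The same worked example can be reproduced numerically. In the sketch below (an illustration, not from the original), each wedge product in $\mathbb{R}^3$ is represented by its coefficients on $(e_{12}, e_{13}, e_{23})$; when the system is consistent the three bivectors are parallel, so the quotients are computed as ratios of their coefficient vectors via a dot-product projection.

```python
# Reproduce the three-equation, two-unknown example: (1,1,0) x + (1,1,1) y = (1,1,2).
import numpy as np

def wedge_coeffs(p, q):
    """Coefficients of p ^ q on (e1^e2, e1^e3, e2^e3) for p, q in R^3."""
    return np.array([p[0]*q[1] - p[1]*q[0],
                     p[0]*q[2] - p[2]*q[0],
                     p[1]*q[2] - p[2]*q[1]])

a = np.array([1.0, 1.0, 0.0])
b = np.array([1.0, 1.0, 1.0])
c = np.array([1.0, 1.0, 2.0])

ab = wedge_coeffs(a, b)                      # e13 + e23, the common factor
x = (wedge_coeffs(c, b) @ ab) / (ab @ ab)    # quotient of parallel bivectors
y = (wedge_coeffs(a, c) @ ab) / (ab @ ab)
print(x, y)                                  # -1.0 2.0
```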
Equation of a plane For the plane of all points $r$ passing through three independent points $r_0$, $r_1$, and $r_2$, the normal form of the equation is
$$\left((r_1 - r_0) \times (r_2 - r_0)\right) \cdot (r - r_0) = 0.$$
The equivalent wedge product equation is
$$(r_1 - r_0) \wedge (r_2 - r_0) \wedge (r - r_0) = 0.$$
Projection and rejection Using the Gram–Schmidt process a single vector can be decomposed into two components with respect to a reference vector, namely the projection onto a unit vector in a reference direction, and the difference between the vector and that projection.
Translations between formalisms:
With $\hat u = u/\|u\|$, the projection of $v$ onto $\hat u$ is
$$\mathrm{Proj}_{\hat u}\, v = \hat u (\hat u \cdot v)$$
Orthogonal to that vector is the difference, designated the rejection,
$$v - \hat u(\hat u \cdot v) = \frac{1}{\|u\|^2}\left(\|u\|^2 v - u (u \cdot v)\right)$$
The rejection can be expressed as a single geometric algebraic product in a few different ways
$$\frac{u}{u^2}(uv - u \cdot v) = \frac{1}{u}(u \wedge v) = \hat u (\hat u \wedge v) = (v \wedge \hat u)\hat u$$
The similarity in form between the projection and the rejection is notable. The sum of these recovers the original vector
$$v = \hat u(\hat u \cdot v) + \hat u(\hat u \wedge v)$$
Here the projection is in its customary vector form. An alternate formulation is possible that puts the projection in a form that differs from the usual vector formulation
$$v = \frac{1}{u}(u \cdot v) + \frac{1}{u}(u \wedge v) = (v \cdot u)\frac{1}{u} + (v \wedge u)\frac{1}{u}$$
Working backwards from the result, it can be observed that this orthogonal decomposition result can in fact follow more directly from the definition of the geometric product itself.
Translations between formalisms:
$$v = \hat u \hat u v = \hat u(\hat u \cdot v + \hat u \wedge v)$$
With this approach, the original geometrical consideration is not necessarily obvious, but it is a much quicker way to get at the same algebraic result.
Translations between formalisms:
However, given the hint that one can work backwards, coupled with the knowledge that the wedge product can be used to solve sets of linear equations (as shown above), the problem of orthogonal decomposition can be posed directly. Let $v = au + x$, where $u \cdot x = 0$. To discard the portions of $v$ that are colinear with $u$, take the exterior product
$$u \wedge v = u \wedge (au + x) = u \wedge x$$
Here the geometric product can be employed
$$u \wedge v = u \wedge x = ux - u \cdot x = ux$$
Because the geometric product is invertible, this can be solved for $x$:
$$x = \frac{1}{u}(u \wedge v).$$
Translations between formalisms:
The same techniques can be applied to similar problems, such as calculation of the component of a vector in a plane and perpendicular to the plane.
For three dimensions the projective and rejective components of a vector with respect to an arbitrary non-zero unit vector can be expressed in terms of the dot and cross product
$$v = (v \cdot \hat u)\hat u + \hat u \times (v \times \hat u).$$
For the general case the same result can be written in terms of the dot and wedge product and the geometric product of that and the unit vector
$$v = (v \cdot \hat u)\hat u + (v \wedge \hat u)\hat u.$$
It is also worthwhile to point out that this result can also be expressed using right or left vector division as defined by the geometric product:
$$v = (v \cdot u)\frac{1}{u} + (v \wedge u)\frac{1}{u} \qquad v = \frac{1}{u}(u \cdot v) + \frac{1}{u}(u \wedge v).$$
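A short numerical illustration (not part of the original text) of the cross-product form of this decomposition, checking that the projection and rejection recover $v$ and that the rejection is orthogonal to $u$:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, -1.0, 0.5])
u_hat = u / np.linalg.norm(u)

proj = np.dot(v, u_hat) * u_hat              # (v . u_hat) u_hat
rej = np.cross(u_hat, np.cross(v, u_hat))    # u_hat x (v x u_hat)

assert np.allclose(proj + rej, v)            # the two parts recover v
assert np.isclose(np.dot(rej, u), 0.0)       # rejection is perpendicular to u
print(proj, rej)
```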
Like vector projection and rejection, higher-dimensional analogs of that calculation are also possible using the geometric product.
As an example, one can calculate the component of a vector perpendicular to a plane and the projection of that vector onto the plane.
Let $w = au + bv + x$, where $u \cdot x = v \cdot x = 0$. As above, to discard the portions of $w$ that are colinear with $u$ or $v$, take the wedge product
$$w \wedge u \wedge v = (au + bv + x) \wedge u \wedge v = x \wedge u \wedge v.$$
Translations between formalisms:
Having done this calculation with a vector projection, one can guess that this quantity equals $x(u \wedge v)$. One can also guess that there is a vector and bivector dot-product-like quantity that allows the calculation of the component of a vector that is in the "direction of a plane". Both of these guesses are correct, and validating these facts is worthwhile. However, skipping ahead slightly, this to-be-proven fact allows for a nice closed form solution of the vector component outside of the plane:
$$x = (w \wedge u \wedge v)\frac{1}{u \wedge v} = \frac{1}{u \wedge v}(u \wedge v \wedge w).$$
Translations between formalisms:
Notice the similarities between this planar rejection result and the vector rejection result. To calculate the component of a vector outside of a plane we take the volume spanned by three vectors (trivector) and "divide out" the plane.
Translations between formalisms:
Independent of any use of the geometric product it can be shown that this rejection in terms of the standard basis is
$$x = \frac{1}{(A_{u,v})^2} \sum_{i<j<k} \begin{vmatrix} w_i & w_j & w_k \\ u_i & u_j & u_k \\ v_i & v_j & v_k \end{vmatrix} \begin{vmatrix} u_i & u_j & u_k \\ v_i & v_j & v_k \\ e_i & e_j & e_k \end{vmatrix}$$
where
$$(A_{u,v})^2 = \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}^2 = -(u \wedge v)^2$$
is the squared area of the parallelogram formed by $u$ and $v$. The (squared) magnitude of $x$ is
$$\|x\|^2 = x \cdot w = \frac{1}{(A_{u,v})^2} \sum_{i<j<k} \begin{vmatrix} w_i & w_j & w_k \\ u_i & u_j & u_k \\ v_i & v_j & v_k \end{vmatrix}^2$$
Thus, the (squared) volume of the parallelopiped (base area times perpendicular height) is
$$\sum_{i<j<k} \begin{vmatrix} w_i & w_j & w_k \\ u_i & u_j & u_k \\ v_i & v_j & v_k \end{vmatrix}^2$$
Note the similarity in form to the $w$, $u$, $v$ trivector itself,
$$\sum_{i<j<k} \begin{vmatrix} w_i & w_j & w_k \\ u_i & u_j & u_k \\ v_i & v_j & v_k \end{vmatrix} e_i \wedge e_j \wedge e_k,$$
which, if one takes the set of $e_i \wedge e_j \wedge e_k$ as a basis for the trivector space, suggests this is the natural way to define the measure of a trivector. Loosely speaking, the measure of a vector is a length, the measure of a bivector is an area, and the measure of a trivector is a volume.
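These determinant identities can be spot-checked numerically. The sketch below (illustrative; it uses ordinary least squares rather than geometric algebra to obtain the rejection) works in $\mathbb{R}^4$ so that the sums over $i<j$ and $i<j<k$ are non-trivial, and verifies that the squared parallelogram area times the squared perpendicular height equals the sum of squared 3×3 minors, i.e. the squared parallelepiped volume.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
u, v, w = rng.normal(size=(3, 4))

# Rejection x of w from span{u, v}: subtract the least-squares projection.
basis = np.column_stack([u, v])
coeffs, *_ = np.linalg.lstsq(basis, w, rcond=None)
x = w - basis @ coeffs

area_sq = sum(np.linalg.det([[u[i], u[j]],
                             [v[i], v[j]]]) ** 2
              for i, j in combinations(range(4), 2))      # (A_{u,v})^2
vol_sq = sum(np.linalg.det([[w[i], w[j], w[k]],
                            [u[i], u[j], u[k]],
                            [v[i], v[j], v[k]]]) ** 2
             for i, j, k in combinations(range(4), 3))    # sum of squared minors

assert np.isclose(area_sq * np.dot(x, x), vol_sq)          # base^2 * height^2 = volume^2
print(np.dot(x, x), vol_sq / area_sq)
```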
Translations between formalisms:
If a vector is factored directly into projective and rejective terms using the geometric product $v = \frac{1}{u}(u \cdot v + u \wedge v)$, then it is not necessarily obvious that the rejection term, a product of a vector and a bivector, is even a vector. Expansion of the vector–bivector product in terms of the standard basis vectors has the following form. Let
$$r = \frac{1}{u}(u \wedge v) = \frac{u}{u^2}(u \wedge v) = \frac{1}{\|u\|^2}\, u (u \wedge v)$$
It can be shown that
$$r = \frac{1}{\|u\|^2} \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix} \begin{vmatrix} u_i & u_j \\ e_i & e_j \end{vmatrix}$$
(a result that can be shown more easily straight from $r = v - \hat u(\hat u \cdot v)$).
Translations between formalisms:
The rejective term is perpendicular to $u$, since $\begin{vmatrix} u_i & u_j \\ u_i & u_j \end{vmatrix} = 0$ implies $r \cdot u = 0$. The magnitude of $r$ is
$$\|r\|^2 = r \cdot v = \frac{1}{\|u\|^2} \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}^2.$$
So, the quantity
$$\|r\|^2 \|u\|^2 = \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}^2$$
is the squared area of the parallelogram formed by $u$ and $v$. It is also noteworthy that the bivector can be expressed as
$$u \wedge v = \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix} e_i \wedge e_j.$$
Thus it is natural, if one considers each term $e_i \wedge e_j$ as a basis vector of the bivector space, to define the (squared) "length" of that bivector as the (squared) area.
Going back to the geometric product expression for the length of the rejection $\frac{1}{u}(u \wedge v)$, we see that the length of the quotient, a vector, is in this case the "length" of the bivector divided by the length of the divisor.
Translations between formalisms:
This may not be a general result for the length of the product of two k-vectors, however it is a result that may help build some intuition about the significance of the algebraic operations. Namely, when a vector is divided out of the plane (parallelogram span) formed from it and another vector, what remains is the perpendicular component of the remaining vector, and its length is the planar area divided by the length of the vector that was divided out.
Translations between formalisms:
Area of the parallelogram defined by u and v If $A$ is the area of the parallelogram defined by $u$ and $v$, then
$$A^2 = \|u \times v\|^2 = \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}^2,$$
and
$$A^2 = -(u \wedge v)^2 = \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}^2.$$
Note that this squared bivector is a geometric multiplication; this computation can alternatively be stated as the Gram determinant of the two vectors.
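As a small illustration of that remark (not from the original), the sum of squared 2×2 minors can be compared with the Gram determinant of the two vectors:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
u, v = rng.normal(size=(2, 5))    # two vectors in R^5

minors_sq = sum((u[i]*v[j] - u[j]*v[i]) ** 2 for i, j in combinations(range(5), 2))
gram = np.linalg.det(np.array([[u @ u, u @ v],
                               [v @ u, v @ v]]))
assert np.isclose(minors_sq, gram)    # both equal the squared parallelogram area
print(minors_sq, gram)
```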
Translations between formalisms:
Angle between two vectors
$$(\sin\theta)^2 = \frac{\|u \times v\|^2}{\|u\|^2\|v\|^2} \qquad (\sin\theta)^2 = -\frac{(u \wedge v)^2}{u^2 v^2}$$
Volume of the parallelopiped formed by three vectors In vector algebra, the volume of a parallelopiped is given by the square root of the squared norm of the scalar triple product:
$$V = \sqrt{(w \cdot (u \times v))^2} = |w \cdot (u \times v)|$$
Product of a vector and a bivector In order to justify the normal to a plane result above, a general examination of the product of a vector and bivector is required. Namely,
$$w(u \wedge v) = \sum_i w_i e_i \sum_{j<k} \begin{vmatrix} u_j & u_k \\ v_j & v_k \end{vmatrix} e_j e_k$$
This has two parts, the vector part where $i = j$ or $i = k$, and the trivector parts where no indexes are equal. After some index summation trickery, grouping terms and so forth, the trivector term works out to be $w \wedge u \wedge v$. Expansion of $(u \wedge v)w$ yields the same trivector term (it is the completely symmetric part), and the vector term is negated. Like the geometric product of two vectors, this geometric product can be grouped into symmetric and antisymmetric parts, one of which is a pure k-vector. In analogy the antisymmetric part of this product can be called a generalized dot product, and is roughly speaking the dot product of a "plane" (bivector) and a vector.
Translations between formalisms:
The properties of this generalized dot product remain to be explored, but first here is a summary of the notation. Let $w = x + y$, where $x = au + bv$ and $y \cdot u = y \cdot v = 0$. Expressing $w$ and the $u \wedge v$ products in terms of these components, with the conditions and definitions above, and with some manipulation, it can be shown that the term $y \cdot (u \wedge v) = 0$, which then justifies the previous solution of the normal-to-a-plane problem. Since the vector term of the vector–bivector product (the part named the dot product) is zero when the vector is perpendicular to the plane (bivector), this vector–bivector "dot product" selects only the components that are in the plane; so, in analogy to the vector–vector dot product, the name itself is justified by more than the fact that it is the non-wedge-product term of the geometric vector–bivector product.
Translations between formalisms:
Derivative of a unit vector It can be shown that a unit vector derivative can be expressed using the cross product
$$\frac{d}{dt}\left(\frac{r}{\|r\|}\right) = \frac{1}{\|r\|^3}\left(r \times \frac{dr}{dt}\right) \times r = \left(\hat r \times \frac{1}{\|r\|}\frac{dr}{dt}\right) \times \hat r$$
The equivalent geometric product generalization is
$$\frac{d}{dt}\left(\frac{r}{\|r\|}\right) = \frac{1}{\|r\|^3}\, r\left(r \wedge \frac{dr}{dt}\right) = \frac{1}{r}\left(\hat r \wedge \frac{dr}{dt}\right)$$
Thus this derivative is the component of $\frac{1}{\|r\|}\frac{dr}{dt}$ in the direction perpendicular to $r$. In other words, this is $\frac{1}{\|r\|}\frac{dr}{dt}$ minus the projection of that vector onto $\hat r$. This intuitively makes sense (but a picture would help) since a unit vector is constrained to circular motion, and any change to a unit vector due to a change in its generating vector has to be in the direction of the rejection of $\hat r$ from $\frac{dr}{dt}$. That rejection has to be scaled by $1/\|r\|$ to get the final result.
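A finite-difference check of the cross-product form (an illustration only; the sample curve is arbitrary):

```python
import numpy as np

def r(t):
    """An arbitrary sample curve in R^3."""
    return np.array([np.cos(t), np.sin(2.0 * t), t])

def r_hat(t):
    rt = r(t)
    return rt / np.linalg.norm(rt)

t, h = 0.7, 1e-6
drdt = (r(t + h) - r(t - h)) / (2 * h)                  # numerical dr/dt
dhat_numeric = (r_hat(t + h) - r_hat(t - h)) / (2 * h)  # numerical d(r_hat)/dt

rt = r(t)
dhat_formula = np.cross(np.cross(rt, drdt), rt) / np.linalg.norm(rt) ** 3
assert np.allclose(dhat_numeric, dhat_formula, atol=1e-6)
print(dhat_numeric, dhat_formula)
```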
Translations between formalisms:
When the objective isn't comparing to the cross product, it's also notable that this unit vector derivative can be written
$$r \frac{d\hat r}{dt} = \hat r \wedge \frac{dr}{dt}$$
**Near-native speaker**
Near-native speaker:
In linguistics, the term near-native speakers is used to describe speakers who have achieved "levels of proficiency that cannot be distinguished from native levels in everyday spoken communication and only become apparent through detailed linguistic analyses" (p.484) in their second language or foreign languages. Analysis of native and near-native speakers indicates that they differ in their underlying grammar and intuition, meaning that they do not interpret grammatical contrasts the same way. However, this divergence typically does not impact a near-native speaker's regular usage of the language.
Domains of proficiency:
Although the vast majority of literature has shown that the age of acquisition of the learner is important in determining whether learners can attain nativelike proficiency, a small number of late learners have demonstrated accents and knowledge of certain areas of grammar that are as proficient as that of native speakers.
Domains of proficiency:
Phonetics and pronunciation Late learners who learn a language after the critical period can acquire an accent that is similar to that of native speakers, provided that they have attained relatively high levels of proficiency. In one study that employed speech samples of language learners, advanced learners of Dutch who spoke different first languages were tasked to read Dutch sentences. In addition, native Dutch speakers who were matched for levels of education served as controls. The recordings of these sentences from both the advanced learners of Dutch and the Dutch native speakers were played to Dutch native speakers (some of whom were Dutch language teachers). They were then asked to rate each speaker on a five-point scale based on how "foreign" they perceived the accent to be. The authors reported that four out of 30 learners were perceived by the native speakers to have achieved nativelike pronunciation. Similarly, the pronunciation of five out of 11 Dutch university students of English language and literature was perceived to be as good as that of native speakers of English even though they did not receive formal instruction in English before the age of 12. Second language learners' pronunciation measured by voice onset time production has also been shown to be close to that of native speakers. Four out of 10 Spanish speakers who started learning Swedish as a second language at or after the age of 12 years exhibited voice onset time measurements that were similar to those of native speakers when asked to read out Swedish words with the voiceless stops /p/, /t/ and /k/. Likewise, a small number of late Spanish learners of Swedish were also able to perceive voiced and voiceless stops in Swedish as well as native speakers.
Domains of proficiency:
Morphology and syntax In the domain of morphology and syntax ('morphosyntax'), second language learners' proficiency is typically tested using the Grammaticality Judgment Test. Participants in such studies have to decide whether the sentence presented is grammatical or ungrammatical. As in the domain of phonology and phonetics, highly proficient second language learners have also shown near-native proficiency despite the fact that the target language was acquired at a later age. English, Dutch and Russian learners who exhibited near-native proficiencies in German were shown to perform as well as German native speakers in Grammaticality Judgment Tests that focused on word order and case markings. This was also consistent with an earlier study on learners of English as a second language who started learning English after puberty. These highly proficient learners displayed accuracy rates and reaction times similar to those of native speakers of English when asked to judge grammatical and ungrammatical wh- questions in the Grammaticality Judgment Test.
Common methods used to test for second language speakers' proficiency:
Grammaticality Judgement Test The Grammaticality Judgment Test (GJT) is one of the many ways to measure language proficiency and knowledge of grammar cross-linguistically. It was first introduced to second language research by Jacqueline S. Johnson and Elissa L. Newport. Participants are tested on various grammatical structures in the second language. The test involves showing participants sentences that may or may not contain grammatical mistakes, and they have to decide on the absence or presence of grammatical issues. The test assumes that one's language proficiency is derived from language competence and language performance and reflects what sentence structures learners think are plausible or not in the language. However, in recent years, there have been papers that questioned the reliability of GJT. Some papers have argued that most sentences in GJT have been taken out of context. The lack of standardisation when administering the GJT in studies has also been deemed controversial, with the most prominent issue being that participants in untimed GJT perform better than those under timed GJT.
Common methods used to test for second language speakers' proficiency:
White noise test The white noise test was first developed by Spolsky, Sigurd, Sato, Walker & Arterburn in 1968. In these tests, recordings of speech with varying levels of white noise are played to participants and they are then asked to repeat what they have heard. Participants will need to rely more on their own knowledge of the language to understand the speech signal when there is a higher level of white noise in the speech signal. In addition, the ability of the second language speakers to decode what was being said in various conditions will usually determine their proficiency in the second language.
Common methods used to test for second language speakers' proficiency:
Cloze test Participants of cloze tests are typically given texts with blanks and are tasked to complete the blanks with appropriate words. To identify the correct item that fits the missing portion, cloze tests require second language speakers to understand the context, vocabulary, grammatical and pragmatic knowledge of the second language.
Common methods used to test for second language speakers' proficiency:
Voice onset time The voice onset time (VOT) helps to measure the second language speaker's proficiency by analysing the participants' ability to detect distinctions between similar-sounding phonemes. VOT refers to "the time interval between the onset of the release burst of a stop consonant and the onset of periodicity from vocal fold vibration" (p.75). In studies that employed this method, participants were either required to read aloud words containing the phonemes of interest or determine if the minimal pairs that they heard were voiced or voiceless. The performance of late learners can then be compared to that of native speakers to determine if the late learners exhibit near-native speaker proficiencies.
Common methods used to test for second language speakers' proficiency:
Ratings of accent by native speakers In reading or production tasks, learners of the language are tasked to read aloud sentences or texts containing phonemes of the target language that may be more difficult for learners to pronounce. Some studies also elicited speech samples from participants by encouraging them to talk about anything they would like with regards to a specific topic. Native speakers of the language also serve as controls. These recordings from both the learners and native speakers are then rated by another group of native speakers of the language using scales based on how much foreign accent they perceive. Learners are determined to have sounded like natives if they have ratings that are within two standard deviations of the native controls.
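A minimal sketch of that two-standard-deviation criterion (the rating values below are invented purely for illustration):

```python
# Decide whether a learner's mean accent rating falls within two standard
# deviations of the native-speaker controls; numbers are hypothetical.
from statistics import mean, stdev

native_ratings = [4.6, 4.8, 4.5, 4.9, 4.7]   # mean ratings of native controls
learner_rating = 4.4                          # mean rating of one learner

within_two_sd = abs(learner_rating - mean(native_ratings)) <= 2 * stdev(native_ratings)
print("rated native-like:", within_two_sd)
```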
Factors that lead to near-native speakers:
Motivation Motivation acts as a tool for near-native speakers in attaining near-native proficiency. Late learners of a second language who show nativelike proficiencies are typically motivated to sound like natives and to attain high levels of proficiency for professional reasons. Near-native speakers also tend to embark on careers that are related to the second language, such as translators or language teachers.
Factors that lead to near-native speakers:
Training To sound like natives, non-native learners can take up suprasegmental and segmental training. In a study on graduate students of the German language who only had exposure and formal instruction in the German language after the age of eleven, those who had suprasegmental and segmental training were more likely to be rated to be close to native speakers of German for recordings of their speech samples. Similarly, near-native Dutch learners of English were reported to also have received training on the phonemes of English. These studies highlight the importance of suprasegmental and segmental training in language pedagogy, especially for late learners of a target language who are motivated to sound like native speakers.
Factors that lead to near-native speakers:
Typological similarity between the first language and target language(s) The extent to which late learners can achieve near-native speaker status could also depend on the typological distance between the learner’s first language and target language(s). In an examination of Dutch late learners of different first languages, the late learners that exhibited nativelike accents were speakers of German and English which are also Germanic languages like Dutch. Hence, similarities between the learner’s first language and target language(s) may facilitate acquisition of the target language(s).
Factors that lead to near-native speakers:
Necessity of the second language in daily lives The necessity of using their second language(s) in their daily lives, be it for professional or personal progression, has also been shown to help hone near-native speakers' second language proficiency. A study on 43 very advanced late learners of Dutch revealed that those who were employed in language-related jobs exhibited nativelike proficiencies. Therefore, the need to use the second language for work purposes aids in attaining native-like proficiencies. Furthermore, late learners who have performed as well as native speakers in language tasks have typically married native speakers of the target language, hence showing that daily usage of the language is one of the factors that lead to high levels of proficiency. By having an environment that requires the continual training and usage of the second language, near-native speakers' proficiency in the second language is expected to improve because professional lives provide linguistic opportunities for conscious and explicit reflection on the second language's linguistic structure, hence helping near-native speakers to become more proficient in their second language.
Factors that lead to near-native speakers:
Aptitude Language-learning aptitude refers to a “largely innate, relatively fixed talent for learning languages" (p.485). A comparison of highly proficient late learners and early learners of Swedish concluded that the late learners generally performed better than early learners on a language aptitude test (as measured by the Swansea Language Aptitude Test). Thus, an aptitude for learning languages may help late learners in achieving near-native proficiencies.
Factors that lead to near-native speakers:
Contention against Aptitude Arguments against aptitude as a factor for near-native speakers' proficiency exist in the linguistic field. Prominent academics like Bialystok (1997) argued that coincidental circumstances (social, educational, etc.) allow near-native speakers to be proficient in second language(s) and aptitude does not account for their proficiency since they are not "rare individuals with an unusual and prodigious talent" performing "extraordinary feats" (p.134).
Factors that lead to near-native speakers:
Language Aptitude Test Carroll (1981) identifies four important constituents of language aptitude: phonetic coding ability, grammatical sensitivity, rote learning ability and inductive learning ability.
Examples of Language Aptitude Tests Modern Language Aptitude Test (MLAT) Pimsleur Language Aptitude Battery (PLAB) Defense Language Aptitude Battery (DLAB) VORD CANAL-F Swansea Language Aptitude Test Llama Language Aptitude Tests
Examples of near-native speakers:
Non-native language teachers An example of near-native speakers are non-native language teachers. Since non-native English-speaking teachers (NNESTs) need to teach their second language in their daily lives to be competent language teachers, they have to continuously train their linguistic ability and capacity in the second language. Hence, teaching it daily helps to increase their likelihood of being near-native.
As English-language proficiency tests are usually recognised as the 'make-or-break' requirement in ESL, it becomes a professional duty for NNESTs to improve their English linguistic capacity. The continual training of the second language thus helps to train their linguistic ability and capacity to become near-native speakers.
Examples of near-native speakers:
One study on the difference in teaching behaviour between native English-speaking teachers (NESTs) and NNESTs found that NNESTs' attitude towards teaching English is significantly different from that of NESTs. NNESTs’ own experience from learning the language increases their perceptiveness to better pick up on probable difficulties students might have during the language learning process. This is because NNESTs typically share the same learning foundation as their students when learning the second language and over time, they can better analyse and predict probable linguistic errors students might make. These NNESTs end up becoming labelled as "more insightful" (p.435) and having "sixth sense" (p.438) when teaching English.
Examples of near-native speakers:
Moreover, research on hiring NESTs and NNESTs found that recruiters who hired non-native speakers had positive experiences and that students of NNESTs are not dissatisfied with non-native teachers of the language. Hence, through the constant need to use their second language in their professional lives, NNESTs can be said to have attained near-native speaker status and are also effective language teachers like NESTs.
Examples of near-native speakers:
Notable near-native speakers Henry Kissinger, 56th United States Secretary of State
**Helicobacter cellulitis**
Helicobacter cellulitis:
Helicobacter cellulitis is a cutaneous condition caused by Helicobacter cinaedi. H. cinaedi can cause cellulitis and bacteremia in immunocompromised people.
**Bisin**
Bisin:
Bisin is a naturally occurring lantibiotic (an antibacterial peptide) discovered by University of Minnesota microbiologist Dan O'Sullivan. Unlike earlier lantibiotics discovered, such as nisin, bisin also kills Gram-negative bacteria, including E. coli, Salmonella and Listeria.
The food-preservative properties of bisin could lead to food products that resist spoilage for years.
**Patrick J. Miller**
Patrick J. Miller:
Patrick J. Miller is a computer scientist and high performance parallel applications developer with a Ph.D. in Computer Science from University of California, Davis, in run-time error detection and correction. Until recently he was with Lawrence Livermore National Laboratory.
Patrick J. Miller:
He is most noted for building and assembling the largest temporary supercomputer in the world, FlashMob I, in an attempt to break into the Top 500 list of supercomputers with students from his "Do-it-yourself Supercomputing" class at the University of San Francisco in April 2004. This effort was featured on the front page of the New York Times on February 23, 2004. In September 2005, he and others at Bryn Mawr recreated a FlashMob Supercomputer to calculate the value of pi to 15,000 digits and performed 15,800 steps to simulate the unfolding of a protein interacting with an anthrax toxin. More recently, he is the author of the popular pyMPI distributed parallel version of the Python programming language.
Patrick J. Miller:
Miller now works as a software developer for Aurora Innovation in Palo Alto, California.
**Black light theatre**
Black light theatre:
Black light theatre (in Czech černé divadlo) or simply black theatre, is a theatrical performance style characterized by the use of black box theatre augmented by black light illusion. This form of theatre originated in Asia and can be found in many places around the world. It has become a speciality of Prague, where many theatres use it. The distinctive characteristics of "black theatre" are the use of black curtains, a darkened stage, and "black lighting" (UV light), paired with fluorescent costumes in order to create intricate visual illusions. This "black cabinet" technique was used by Georges Méliès, and by theatre revolutionary Konstantin Stanislavski (especially in his production of Cain). The technique, paired with the expressive artistry of dance, mime and acrobatics of the performers, is able to create remarkable spectacles.
Optics:
A key principle of black light theatre is the inability of the human eye to distinguish black objects from a black background. This effect results in effective invisibility for any objects not illuminated by the 'black light'. The second optical principle behind black light theatre is the effect of UV light on fluorescent objects. Black lights actually emit as much light as 'normal' lights, but at a frequency that humans cannot detect. While most objects either absorb UV light or reflect it back at the same frequency at which it came in, fluorescent objects absorb UV light then re-emit it at a longer wavelength that human eyes can detect. The combined effect is that designers can make some objects appear as bright as if the room were fully lighted, while making other objects appear as dark as if the room were completely dark.
History:
The black box trick of using performers dressed in black in a dark playing space has been in use for millennia, starting with the jugglers performing for the emperor in ancient China. Japan developed this technique in its Bunraku Theatre by having puppeteers wear black in order to place complete emphasis on the puppet. In modern theatre, the black box trick was adopted by Russian director Konstantin Stanislavski, film director Georges Méliès, and various French avant-garde directors of the 1950s. Among these directors, George Lafaye became an early pioneer of black cabinet. But all these directors used the simple trick of black cabinet for only a few moments during their performances, mostly to make something on the stage disappear.
History:
The father of modern black light theatre, author of the principle of black cabinet as it is used nowadays (placement of spot lights, placement of UV lights, selection of black velvet as the best material to absorb residual light on the scene...) and even author of the name "black light theatre", and thus the creator of the first black light theatre in the world, is Jiří Srnec. The first performance of the ensemble took place in 1959 in Vienna. It became better known after its participation in the Theatre Festival in Edinburgh in 1962. Later, other groups using the technique of black box appeared, starting the new wave of this theatre style. Prague has since become the home of black light theatre with around 10 black light theatre companies. Another well-known black light theatre group is HILT black light theatre Prague, whose performances are based on modern music and dance choreographies, also incorporating live singing. The group was founded in 2006 by Czech dancer, choreographer, director and music composer Theodor Hoidekr. In 2016, a new black light theatre style called "shadow film theatre" was created by the HILT Prague group. In addition to black light theatre shows, their performances also include the first shadow film theatre - dancers and actors play with their shadows on a screen with projections of real places.
History:
In Germany, Rainer Pawelke presented his interpretation of black theatre in a stage show for the first time in 1980. The stage show was developed together with his students from the University of Regensburg and was the precursor to the educational sports theatre project Traumfabrik. The project enjoyed considerable popularity in the 1980s - touring and appearing in prime time television in Germany. Traumfabrik still tours every year, with 40 shows per year including black light theatre acts. Rainer Pawelke co-authored a book, "Schwarzes Theater aus der Traumfabrik" ("Black Theatre from the Traumfabrik"), to share and consolidate his insights about black theatre.
Modern dance in black light theatre:
In 1989 the Image Theatre was founded by dancer Eva Asterová (a former member of Pavel Smok's famous Czech ballet company) and Alexander Čihař. They brought new aspects into black light theatre effects, such as modern dance and non-verbal acting. Under the hand of Eva Asterová, the theatre's artistic director, Image seeks to produce an individual look and signature for each scene. The audience also often becomes an integral part of the performance. The repertory of the Image theatre is composed of its own devised works. Apart from Eva Asterová, there are also other authors cooperating repeatedly on Image Theatre's performances, such as Josef Tichý, Petr Liška, René Pyš or Zdeněk Zdeněk.
Modern dance in black light theatre:
Since the beginning, Image Theatre has presented 10 different performances, and in each of them it has introduced some new black light technique effects. Magic poetics, playfulness, and humor are the most important trademarks of this ensemble.
The Image theatre has 22 years' experience in Prague's black light theatre scene. Apart from regular performances in Prague, Image also performs internationally (Korea, Hong Kong, Macau, Israel, Turkey, India, Lebanon, Greece, Germany, France, Italy, Switzerland, Belgium, Hungary, Slovakia, Cyprus).
Black light theatre today:
Nowadays there are many black light theatre companies outside Prague as well (Hungary, Ukraine, Germany, USA) trying to make shows similar to those of the black light theatres in Prague. The Prague scene has changed its face to the style of the 20th century – modern dance has been incorporated, costume designs have become more effective, and black light theatre shows have become more musical.
Prague is still the home of black light theatre – hundreds of thousands of tourists visit its shows every month.
Black light theatre today:
HILT – the black light theatre of Theodor Hoidekr – was the first ever to incorporate live singing into a black light show. HILT's most popular black light theatre musical show was "Juliet's Dream", premiered on 14 February 2012 in Prague. HILT is working on a new style of black light theatre – all the members are experienced in black light theatres all over Prague. Its founder and director Theodor Hoidekr was originally a dancer who later started to work with the black light theatre style. His experience comes from Prague, Slovakia, Germany, Malta, Greece, South America and India. Since 8 April 2015 HILT has performed the new show "Phantom", the world's first black light theatre version of the mysterious phantom figure. With it, the black light theatre HILT started a new part of its history in the Theatre Royal Prague. In 2016 HILT presented the first shadow film theatre show, Cinderella.
Black light theatre today:
In 2017 HILT moved to the popular U Valšů Theatre in the centre of Old Prague. On 2 June 2017 it also began presenting its best-of show, Phantom (best of 2007–2017), a combination of black light theatre, shadow theatre and projections. This show was the only Czech theatre and the only black light theatre presented at the 8th World Theatre Olympics 2018 in India.
In Performance:
The effect of black light theatre allows invisible performers to move visible props, turning the objects into independent participants in the theatre at the same level as the human actors. Furthermore, the appearance of objects and actors in a performance can be sudden and can occur anywhere on stage, even within a few meters of an audience member. In order to achieve this effect it is necessary to create an intense field of UV light throughout the entire playing space. Because the intensity of light emitted from a typical 'black light' source diminishes significantly with increased distance from the source, covering an entire theatre space with UV light requires either that the 'black light' sources be spaced as close as one meter apart or emit much more light than a typical 'black light'. Another important consideration is that, since most of the space is completely dark, and the form is heavily dance based, a single wrong move by a single performer can negatively impact the entire production. For this reason performers train extensively specifically for the black light theatre environment.
In Performance:
Contemporary black light theatre often includes many highly technical devices, in addition to the standard 'black light' technique. Such devices can include "flying" performers, dancers in LED-suits, large video projections, and even massive puppets. These technical devices serve as a significant factor in black light theatre's worldwide popularity; since its most important devices are entirely visual, audiences throughout the world can understand most black light theatre performances. The intended result is a theatrical work that combines grand spectacle with beautiful and moving art. Major companies currently producing black light theatre include Srnec Theatre, HILT, Ta Fantastika Theatre, Image Theatre, Metro Theatre, and All Colours Theatre.
**NERD (sabermetrics)**
NERD (sabermetrics):
In baseball statistics, NERD is a quantitative measure of expected aesthetic value. NERD was originally created by Carson Cistulli and is part of his project of exploring the "art" of sabermetric research. The original NERD formula only took into account the pitcher's expected performance while a later model factors in the entire team's performance.
History:
The premise for NERD was developed in Cistulli's piece "Why We Watch" in which he establishes the five reasons that baseball continues to captivate the American imagination from game to game: "Pitching Matchups," "Statistically Notable (or Otherwise Compelling) Players," "Rookies (and Debuts)," "Seasonal Context," and "Quality of Broadcast". Fellow sabermetrician Rob Neyer, who had collaborated with Cistulli on this piece, wrote "the only thing missing [...] is a points system that would let us put a number on each game" and on June 2, 2010, Cistulli unveiled the Pitcher NERD formula.
Pitcher NERD:
Pitcher NERD tries to determine which pitchers will be the most aesthetically appealing to watch for a baseball fan and is both a historical and a predictive statistic. The formula uses a player's standard deviations from the mean (a weighted z-score) of the DIPS statistic xFIP (expected Fielding Independent Pitching), swinging strike percentage, overall strike percentage, and the differential between the pitcher's ERA and xFIP to determine a quantitative value for each pitcher.
Pitcher NERD:
The formula combines the pitcher's weighted z-scores for xFIP, swinging-strike percentage and overall strike percentage with the luck term, plus a constant of 4.69. The factor of 4.69 is added to make the number fit on a 0 to 10 scale. While there has been some disagreement on the calculation of Cistulli's luck component, the general consensus among sports writers seems to be that a player with a below-average ERA and an above-average xFIP has been "unlucky".
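The mechanics of such a score are easy to sketch in code. The example below is purely illustrative: the weights, the sign conventions and the luck term are placeholders rather than Cistulli's published coefficients; only the overall shape (weighted z-scores plus a constant near 4.69) follows the description above.

```python
# Illustrative pitcher-NERD-style score; all weights and the luck convention
# are assumptions, not the actual published formula.
from statistics import mean, pstdev

def z(value, population):
    """Standard score of `value` against the league population."""
    return (value - mean(population)) / pstdev(population)

def pitcher_nerd(p, league, w_xfip=2.0, w_swstr=0.5, w_strike=0.5, constant=4.69):
    # p and each entry of league are dicts with keys 'xfip', 'swstr', 'strike', 'era'
    xfip_z = -z(p['xfip'], [q['xfip'] for q in league])      # lower xFIP is better
    swstr_z = z(p['swstr'], [q['swstr'] for q in league])
    strike_z = z(p['strike'], [q['strike'] for q in league])
    luck = p['era'] - p['xfip']                              # ERA-xFIP differential (sign convention assumed)
    return w_xfip * xfip_z + w_swstr * swstr_z + w_strike * strike_z + luck + constant
```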
Team NERD:
Following the model of his Pitching NERD, Team NERD tries to give a quantitative value to the aesthetic value of each of the 30 baseball teams. For factors it accounts for "Age," "Park-Adjusted weighted Runs Above Average (wRAA)," "Park-Adjusted Home Run per Fly Ball (HR/FB)," "Team Speed," "Bullpen Strength," "Team Defense," "Luck" (Base Runs – Actual Runs Scored), and "Payroll".
Team NERD:
Team NERD sums weighted terms for team age, park-adjusted batting (wRAA), park-adjusted home runs per fly ball, team speed (stolen-base attempts and rate, extra bases taken), bullpen strength, team defense (UZR), payroll, and luck. In an interview, Cistulli admitted that there is a disconnect between the Tampa Bay Rays' high tNERD rating and low attendance, saying that he is considering adding a "park-adjustment" to his formula which would reflect either the stadium itself or "attendance relative to the stadium's capacity", but overall reception of this statistic has been positive and Fangraphs started reporting Team NERD in Cistulli's "One Night Only" columns beginning August 23, 2010.
**Absurdity**
Absurdity:
An absurdity is a state or condition of being extremely unreasonable, meaningless or unsound in reason so as to be irrational or not taken seriously. "Absurd" is an adjective used to describe an absurdity, e.g., "Tyler and the boys laughed at the absurd situation." It derives from the Latin absurdum meaning "out of tune". The Latin surdus means "deaf", implying stupidity.
Absurdity:
Absurdity is contrasted with being realistic or reasonable. In general usage, absurdity may be synonymous with fanciful, foolish, bizarre, wild or nonsense. In specialized usage, absurdity is related to extremes in bad reasoning or pointlessness in reasoning; ridiculousness is related to extremes of incongruous juxtaposition, laughter, and ridicule; and nonsense is related to a lack of meaningfulness. Absurdism is a concept in philosophy related to the notion of absurdity.
History:
Absurdity has been used throughout history regarding foolishness and extremely poor reasoning to form belief. Ancient Greece In Aristophanes' 5th century BC comedy The Wasps, his protagonist Philocleon learned the "absurdities" of Aesop's Fables, considered to be unreasonable fantasy, and not real. Plato often used "absurdity" to describe very poor reasoning, or the conclusion from adopting a position that is false and thus reaching a false conclusion, called an "absurdity" (argument by reductio ad absurdum). Plato describes himself as not using absurd argumentation against himself in Parmenides. In Gorgias, Plato refers to an "inevitable absurdity" as the outcome of reasoning from a false assumption. Aristotle rectified an irrational absurdity in reasoning with empiricism using likelihood: "once the irrational has been introduced and an air of likelihood imparted to it, we must accept it in spite of the absurdity". He claimed that absurdity in reasoning can be veiled by charming language in poetry: "As it is, the absurdity is veiled by the poetic charm with which the poet invests it... But in the Epic poem the absurdity passes unnoticed." Renaissance and early modern periods Michel de Montaigne, father of the essay and modern skepticism, argued that the process of abridgement is foolish and produces absurdity: "Every abridgement of a good book is a foolish abridgement... absurdity [is] not to be cured... satisfied with itself than any reason, can reasonably be." Francis Bacon, an early promoter of empiricism and the scientific method, argued that absurdity is a necessary component of scientific progress, and should not always be laughed at. He continued that bold new ways of thinking and bold hypotheses often led to absurdity: "For if absurdity be the subject of laughter, doubt you but great boldness is seldom without some absurdity."
Approaches to absurdity:
Rhetoric Absurdity arises when one's own speech deviates from common sense, is too poetic, or when one is unable to defend oneself with speech and reason. In Aristotle's book Rhetoric, Aristotle discusses the situations in which absurdity is employed and how it affects one's use of persuasion. According to Aristotle, the idea of a man being unable to persuade someone by his words is absurd. Any information unnecessary to the case is unreasonable and makes the speech unclear. If the speech becomes too unclear, the justification for the case becomes unpersuasive, making the argument absurd.
Approaches to absurdity:
Philosophy Absurdity is used in existentialist and related philosophy to describe absurdly pointless efforts to try to find meaning or purpose in an objective and uncaring world, a philosophy known as absurdism. It is illogical to seek purpose or meaning in an uncaring world without purpose or meaning, or to accumulate excessive wealth in the face of certain death. In his paper, The Absurd, Thomas Nagel analyzed the perpetual absurdity of human life. Absurdity in life becomes apparent when we realize the fact that we take our lives seriously, while simultaneously perceiving that there is a certain arbitrariness in everything we do. He suggests never to stop searching for the absurd. Furthermore, he suggests searching for irony amongst the absurdity. Philosophy of language G. E. Moore, an English analytic philosopher, cited as a paradox of language such superficially absurd statements as, "I went to the pictures last Tuesday but I don't believe it". They can be true and logically consistent, and are not contradictory on further consideration of the user's linguistic intent. Wittgenstein observes that in some unusual circumstances absurdity itself disappears in such statements, as there are cases where "It is raining but I don't believe it" can make sense, i.e., what appears to be an absurdity is not nonsense. Demarcation with sound reasoning Medical commentators have criticized methods and reasoning in alternative and complementary medicine and integrative medicine as being either absurdities or being between evidence and absurdity. They state it often misleads the public with euphemistic terminology, such as the expressions "alternative medicine" and "complementary medicine", and call for a clear demarcation between valid scientific evidence and scientific methodology and absurdity.
Approaches to absurdity:
Absurdity in literature Hobbes' Table of Absurdity Thomas Hobbes distinguished absurdity from errors, including basic linguistic errors as when a word is simply used to refer to something which does not have that name. According to Aloysius Martinich: "What Hobbes is worried about is absurdity. Only human beings can embrace an absurdity, because only human beings have language, and philosophers are more susceptible to it than others". Hobbes wrote that "words whereby we conceive nothing but the sound, are those we call absurd, insignificant, and nonsense. And therefore if a man should talk to me of a round quadrangle; or, accidents of bread in cheese; or, immaterial substances; or of a free subject; a free will; or any free, but free from being hindered by opposition, I should not say he were in an error, but that his words were without meaning, that is to say, absurd". He distinguished seven types of absurdity. Below is the summary of Martinich, based on what he describes as Hobbes' "mature account" found in "De Corpore" 5., which all use examples that could be found in Aristotelian or scholastic philosophy, and all reflect "Hobbes' commitment to the new science of Galileo and Harvey". This is known as "Hobbes' Table of Absurdity".
Approaches to absurdity:
"Combining the name of a body with the name of an accident." For example, "existence is a being" or, "a being is existence". These absurdities are typical of scholastic philosophy according to Hobbes.
"Combining the name of a body with the name of a phantasm." For example, "a ghost is a body".
"Combining the name of a body with the name of a name." For example, "a universal is a thing".
"Combining the name of an accident with the name of a phantasm." For example, "colour appears to a perceiver".
"Combining the name of an accident with the name of a name." For example, "a definition is the essence of a thing".
"Combining the name of a phantasm with the name of a name." For example, "the idea of a man is a universal".
"Combining the name of a thing with the name of a speech act." For example, "some entities are beings per se". According to Martinich, Gilbert Ryle discussed the types of problem Hobbes refers to as absurdities under the term "category error".
Although common usage now considers "absurdity" to be synonymous with "ridiculousness", Hobbes discussed the two concepts as different, in that absurdity is viewed as having to do with invalid reasoning, while ridiculousness has to do with laughter, superiority, and deformity. Theater of the Absurd The Theater of the Absurd was a surrealist movement demonstrating motifs of absurdism.
"Theater should be a bloody and inhuman spectacle designed to exercise (sic: exorcise) the spectator's repressed criminal and erotic obsessions."
Approaches to absurdity:
Theology "I believe because it is absurd" Absurdity is cited as a basis for some theological reasoning about the formation of belief and faith, such as in fideism, an epistemological theory that reason and faith may be hostile to each other. The statement "Credo quia absurdum" ("I believe because it is absurd") is attributed to Tertullian from De Carne Christi, as translated by philosopher Voltaire. According to the New Advent Church, what Tertullian said in DCC 5 was "[...] the Son of God died; it is by all means to be believed, because it is absurd." In the 15th century, the Spanish theologian Tostatus used what he thought was a reduction to absurdity arguing against a spherical earth using dogma, claiming that a spherical earth would imply the existence of antipodes. He argued that this would be impossible since it would require either that Christ had appeared twice or that the inhabitants of the antipodes would be forever damned, which he claimed was an absurdity. Absurdity can refer to any strict religious dogma that pushes something to the point of violating common sense. For example, inflexible religious dictates are sometimes termed pharisaism, referring to unreasonable emphasis on observing exact words or rules, rather than the intent or spirit. Andrew Willet grouped absurdities with "flat contradictions to scripture" and "heresies".
Attitudes towards absurdity:
Psychology Psychologists study how humans adapt to constant absurdities in life. In advertising, the presence or absence of an absurd image was found to moderate negative attitudes toward products and increase product recognition.
Attitudes towards absurdity:
Humor "I can see nothing" – Alice in Wonderland. "My, you must have good eyes" – the Cheshire Cat. Absurdity is used in humor to make people laugh or to make a sophisticated point. One example is Lewis Carroll's "Jabberwocky", a poem of nonsense verse, originally featured as a part of his absurdist novel Through the Looking-Glass, and What Alice Found There (1872). Carroll was a logician and parodied logic using illogic and inverting logical methods.
Attitudes towards absurdity:
Argentine novelist Jorge Luis Borges used absurdities in his short stories to make points. Franz Kafka's The Metamorphosis is considered absurdist by some.
Absurdity in various disciplines:
Legal The absurdity doctrine is a legal theory in American courts. One type of absurdity, known as the "scrivener's error", occurs when simple textual correction is needed to amend an obvious clerical error, such as a misspelled word. Another type of absurdity, called "evaluative absurdity", arises when a legal provision, despite appropriate spelling and grammar, "makes no substantive sense". An example would be a statute that mistakenly provided for a winning rather than losing party to pay the other side's reasonable attorney's fees. In order to stay within the remit of textualism and not reach further into purposivism, the doctrine is restricted by two limiting principles: "...the absurdity and the injustice of applying the provision to the case would be so monstrous, that all mankind would, without hesitation, unite in rejecting the application" and the absurdity must be correctable "...by modifying the text in relatively simple ways". This doctrine is seen as being consistent with examples of historical common sense.
Absurdity in various disciplines:
"The common sense of man approves the judgment mentioned by Pufendorf [sic. Puffendorf], that the Bolognian law which enacted 'that whoever drew blood in the streets should be punished with the utmost severity', did not extend to the surgeon who opened the vein of a person that fell down in the street in a fit. The same common sense accepts the ruling, cited by Plowden, that the statute of 1st Edward II, which enacts that a prisoner who breaks prison shall be guilty of a felony, does not extend to a prisoner who breaks out when the prison is on fire – 'for he is not to be hanged because he would not stay to be burnt'." Logic and computer science Reductio ad absurdum Reductio ad absurdum, reducing to an absurdity, is a method of proof in polemics, logic and mathematics, whereby assuming that a proposition is true leads to absurdity; a proposition is assumed to be true and this is used to deduce a proposition known to be false, so the original proposition must have been false. It is also an argumentation style in polemics, whereby a position is demonstrated to be false, or "absurd", by assuming it and reasoning to reach something known to be believed as false or to violate common sense; it is used by Plato to argue against other philosophical positions.
Absurdity in various disciplines:
An absurdity constraint is used in the logic of model transformations. Constant in logic The "absurdity constant", often denoted by the symbol ⊥, is used in formal logic. It represents the concept of falsum, an elementary logical proposition, denoted by a constant "false" in several programming languages.
Rule in logic The absurdity rule is a rule in logic, as used by Patrick Suppes in Logic, methodology and philosophy of science: Proceedings.
**Choline**
Choline:
Choline ( KOH-leen) is a cation with the chemical formula [(CH3)3NCH2CH2OH]+. Choline forms various salts, for example choline chloride and choline bitartrate.
Chemistry:
Choline is a quaternary ammonium cation. The cholines are a family of water-soluble quaternary ammonium compounds. Choline is the parent compound of the cholines class, consisting of an ethanolamine residue having three methyl groups attached to the same nitrogen atom. Choline hydroxide is known as choline base. It is hygroscopic and thus often encountered as a colorless viscous hydrated syrup that smells of trimethylamine (TMA). Aqueous solutions of choline are stable, but the compound slowly breaks down to ethylene glycol, polyethylene glycols, and TMA. Choline chloride can be made by treating TMA with 2-chloroethanol: (CH3)3N + ClCH2CH2OH → [(CH3)3NCH2CH2OH]+Cl−. The 2-chloroethanol can be generated from ethylene oxide. Choline has historically been produced from natural sources, such as via hydrolysis of lecithin.
Choline in nature:
Choline is widespread in nature in living beings. In most animals, choline phospholipids are necessary components in cell membranes, in the membranes of cell organelles, and in very low-density lipoproteins.
Choline as a nutrient:
Choline is an essential nutrient for humans and many other animals. Humans are capable of some de novo synthesis of choline but require additional choline in the diet to maintain health. Dietary requirements can be met by choline by itself or in the form of choline phospholipids, such as phosphatidylcholine. Choline is not formally classified as a vitamin despite being an essential nutrient with an amino acid–like structure and metabolism. Choline is required to produce acetylcholine – a neurotransmitter – and S-adenosylmethionine (SAM), a universal methyl donor. Upon methylation SAM is transformed into S-adenosyl homocysteine. Symptomatic choline deficiency causes non-alcoholic fatty liver disease and muscle damage. Excessive consumption of choline (greater than 7.5 grams per day) can cause low blood pressure, sweating, diarrhea and fish-like body smell due to trimethylamine, which forms in the metabolism of choline. Rich dietary sources of choline and choline phospholipids include organ meats, egg yolks, dairy products, peanuts, certain beans, nuts and seeds. Vegetables with pasta and rice also contribute to choline intake in the American diet.
Metabolism:
Biosynthesis In plants, the first step in de novo biosynthesis of choline is the decarboxylation of serine into ethanolamine, which is catalyzed by a serine decarboxylase. The synthesis of choline from ethanolamine may take place in three parallel pathways, where three consecutive N-methylation steps catalyzed by a methyl transferase are carried out on either the free-base, phospho-bases, or phosphatidyl-bases. The source of the methyl group is S-adenosyl-L-methionine and S-adenosyl-L-homocysteine is generated as a side product.
Metabolism:
In humans and most other animals, de novo synthesis of choline is via the phosphatidylethanolamine N-methyltransferase (PEMT) pathway, but biosynthesis is not enough to meet human requirements. In the hepatic PEMT route, glycerol 3-phosphate receives 2 acyl groups from acyl-CoA, forming a phosphatidic acid. It reacts with cytidine triphosphate to form cytidine diphosphate-diacylglycerol. Its hydroxyl group reacts with serine to form phosphatidylserine, which decarboxylates to give phosphatidylethanolamine (PE). A PEMT enzyme moves three methyl groups from three S-adenosyl methionine (SAM) donors to the ethanolamine group of the phosphatidylethanolamine to form choline in the form of a phosphatidylcholine. Three S-adenosylhomocysteines (SAHs) are formed as a byproduct. Choline can also be released from more complex choline-containing molecules. For example, phosphatidylcholines (PC) can be hydrolyzed to choline (Chol) in most cell types. Phosphatidylcholines can also be produced by the CDP-choline route: cytosolic choline kinases (CK) phosphorylate choline with ATP to phosphocholine (PChol). This happens in some cell types, such as liver and kidney. Choline-phosphate cytidylyltransferases (CPCT) transform PChol to CDP-choline (CDP-Chol) with cytidine triphosphate (CTP). CDP-choline and diglyceride are transformed to PC by diacylglycerol cholinephosphotransferase (CPT). In humans, certain PEMT-enzyme mutations and estrogen deficiency (often due to menopause) increase the dietary need for choline. In rodents, 70% of phosphatidylcholines are formed via the PEMT route and only 30% via the CDP-choline route. In knockout mice, PEMT inactivation makes them completely dependent on dietary choline.
Metabolism:
Absorption In humans, choline is absorbed from the intestines via the SLC44A1 (CTL1) membrane protein via facilitated diffusion governed by the choline concentration gradient and the electrical potential across the enterocyte membranes. SLC44A1 has limited ability to transport choline: at high concentrations part of it is left unabsorbed. Absorbed choline leaves the enterocytes via the portal vein, passes the liver and enters systemic circulation. Gut microbes degrade the unabsorbed choline to trimethylamine, which is oxidized in the liver to trimethylamine N-oxide.Phosphocholine and glycerophosphocholines are hydrolyzed via phospholipases to choline, which enters the portal vein. Due to their water solubility, some of them escape unchanged to the portal vein. Fat-soluble choline-containing compounds (phosphatidylcholines and sphingomyelins) are either hydrolyzed by phospholipases or enter the lymph incorporated into chylomicrons.
Metabolism:
Transport In humans, choline is transported as a free molecule in blood. Choline-containing phospholipids and other substances, like glycerophosphocholines, are transported in blood lipoproteins. Blood plasma choline levels in healthy fasting adults are 7–20 micromoles per liter (μmol/L) and 10 μmol/L on average. Levels are regulated, but choline intake and deficiency alter these levels. Levels are elevated for about 3 hours after choline consumption. Phosphatidylcholine levels in the plasma of fasting adults are 1.5–2.5 mmol/L. Its consumption elevates the free choline levels for about 8–12 hours, but does not affect phosphatidylcholine levels significantly. Choline is a water-soluble ion and thus requires transporters to pass through fat-soluble cell membranes. Three types of choline transporters are known: SLC5A7; the CTLs: CTL1 (SLC44A1), CTL2 (SLC44A2) and CTL4 (SLC44A4); and the OCTs: OCT1 (SLC22A1) and OCT2 (SLC22A2). SLC5A7s are sodium- (Na+) and ATP-dependent transporters. They have high binding affinity for choline, transport it primarily to neurons and are indirectly associated with acetylcholine production. Their deficient function causes hereditary weakness in the pulmonary and other muscles in humans via acetylcholine deficiency. In knockout mice, their dysfunction results easily in death with cyanosis and paralysis. CTL1s have moderate affinity for choline and transport it in almost all tissues, including the intestines, liver, kidneys, placenta and mitochondria. CTL1s supply choline for phosphatidylcholine and trimethylglycine production. CTL2s occur especially in the mitochondria in the tongue, kidneys, muscles and heart. They are associated with the mitochondrial oxidation of choline to trimethylglycine. CTL1s and CTL2s are not associated with acetylcholine production, but transport choline together via the blood–brain barrier. Only CTL2s occur on the brain side of the barrier. They also remove excess choline from the neurons back to blood. CTL1s occur only on the blood side of the barrier, but also on the membranes of astrocytes and neurons. OCT1s and OCT2s are not associated with acetylcholine production. They transport choline with low affinity. OCT1s transport choline primarily in the liver and kidneys; OCT2s in kidneys and the brain.
Metabolism:
Storage Choline is stored in the cell membranes and organelles as phospholipids, and inside cells as phosphatidylcholines and glycerophosphocholines.
Excretion Even at choline doses of 2–8 g, little choline is excreted into urine in humans. Excretion happens via transporters that occur within kidneys (see transport). Trimethylglycine is demethylated in the liver and kidneys to dimethylglycine (tetrahydrofolate receives one of the methyl groups). Methylglycine then forms and is either excreted into urine or demethylated to glycine.
Function:
Choline and its derivatives have many functions in humans and in other organisms. The most notable function is that choline serves as a synthetic precursor for other essential cell components and signalling molecules, such as phospholipids that form cell membranes, the neurotransmitter acetylcholine, and the osmoregulator trimethylglycine (betaine). Trimethylglycine in turn serves as a source of methyl groups by participating in the biosynthesis of S-adenosylmethionine.
Function:
Phospholipid precursor Choline is transformed into different phospholipids, like phosphatidylcholines and sphingomyelins. These are found in all cell membranes and the membranes of most cell organelles. Phosphatidylcholines are a structurally important part of the cell membranes; in humans, 40–50% of their phospholipids are phosphatidylcholines. Choline phospholipids also form lipid rafts in the cell membranes along with cholesterol. The rafts are centers, for example, for receptors and receptor signal transduction enzymes. Phosphatidylcholines are needed for the synthesis of VLDLs: 70–95% of their phospholipids are phosphatidylcholines in humans. Choline is also needed for the synthesis of pulmonary surfactant, which is a mixture consisting mostly of phosphatidylcholines. The surfactant is responsible for lung elasticity, that is, for the lung tissue's ability to contract and expand. For example, deficiency of phosphatidylcholines in the lung tissues has been linked to acute respiratory distress syndrome. Phosphatidylcholines are excreted into bile and work together with bile acid salts as surfactants in it, thus helping with the intestinal absorption of lipids.
Function:
Acetylcholine synthesis Choline is needed to produce acetylcholine. This is a neurotransmitter which plays a necessary role in muscle contraction, memory and neural development, for example. Nonetheless, there is little acetylcholine in the human body relative to other forms of choline. Neurons also store choline in the form of phospholipids in their cell membranes for the production of acetylcholine.
Function:
Source of trimethylglycine In humans, choline is oxidized irreversibly in liver mitochondria to glycine betaine aldehyde by choline oxidases. This is oxidized by mitochondrial or cytosolic betaine-aldehyde dehydrogenases to trimethylglycine. Trimethylglycine is a necessary osmoregulator. It also works as a substrate for the BHMT enzyme, which methylates homocysteine to methionine. This is an S-adenosylmethionine (SAM) precursor. SAM is a common reagent in biological methylation reactions. For example, it methylates certain bases of DNA and certain lysines of histones. Thus it is part of gene expression and epigenetic regulation. Choline deficiency thus leads to elevated homocysteine levels and decreased SAM levels in blood.
Content in foods:
Choline occurs in foods as a free molecule and in the form of phospholipids, especially as phosphatidylcholines. Choline is highest in organ meats and egg yolks, though it is found to a lesser degree in non-organ meats, grains, vegetables, fruit and dairy products. Cooking oils and other food fats have about 5 mg/100 g of total choline. In the United States, food labels express the amount of choline in a serving as a percentage of daily value (%DV) based on the adequate intake of 550 mg/day. 100% of the daily value means that a serving of food has 550 mg of choline. "Total choline" is defined as the sum of free choline and choline-containing phospholipids, without accounting for mass fraction. Human breast milk is rich in choline. Exclusive breastfeeding corresponds to about 120 mg of choline per day for the baby. An increase in a mother's choline intake raises the choline content of breast milk, and low intake decreases it. Infant formulas may or may not contain enough choline. In the EU and the US, it is mandatory to add at least 7 mg of choline per 100 kilocalories (kcal) to every infant formula. In the EU, levels above 50 mg/100 kcal are not allowed. Trimethylglycine is a functional metabolite of choline. It substitutes for choline nutritionally, but only partially. High amounts of trimethylglycine occur in wheat bran (1,339 mg/100 g), toasted wheat germ (1,240 mg/100 g) and spinach (600–645 mg/100 g), for example.
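As a quick arithmetic illustration of the %DV labelling convention described above, the following sketch converts a per-serving choline content into the percentage shown on a US label; the egg-yolk figure used below is an illustrative assumption, not a measured value.

```python
# Minimal sketch of the US %DV conversion for choline described above.
# The 550 mg Daily Value comes from the text; the 140 mg egg-yolk figure
# is only an illustrative placeholder.

CHOLINE_DV_MG = 550  # adult Daily Value used on US food labels

def percent_dv(mg_per_serving: float) -> float:
    """Percent Daily Value for a given choline content per serving."""
    return 100 * mg_per_serving / CHOLINE_DV_MG

print(f"{percent_dv(140):.0f}% DV")  # a serving with ~140 mg -> about 25% DV
```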
Content in foods:
Daily values The following table contains updated sources of choline to reflect the new Daily Value and the new Nutrition Facts and Supplement Facts Labels. It reflects data from the U.S. Department of Agriculture, Agricultural Research Service. FoodData Central, 2019.
Content in foods:
DV = Daily Value. The U.S. Food and Drug Administration (FDA) developed DVs to help consumers compare the nutrient contents of foods and dietary supplements within the context of a total diet. The DV for choline is 550 mg for adults and children age 4 years and older. The FDA does not require food labels to list choline content unless choline has been added to the food. Foods providing 20% or more of the DV are considered to be high sources of a nutrient, but foods providing lower percentages of the DV also contribute to a healthful diet.
Content in foods:
The U.S. Department of Agriculture's (USDA's) FoodData Central lists the nutrient content of many foods and provides a comprehensive list of foods containing choline arranged by nutrient content.
Dietary recommendations:
Insufficient data is available to establish an estimated average requirement (EAR) for choline, so the Food and Nutrition Board (FNB) established adequate intakes (AIs). For adults, the AI for choline was set at 550 mg/day for men and 425 mg/day for women. These values have been shown to prevent hepatic alteration in men. However, the study used to derive these values did not evaluate whether less choline would be effective, as researchers only compared a choline-free diet to a diet containing 550 mg of choline per day. From this, the AIs for children and adolescents were extrapolated.Recommendations are in milligrams per day (mg/day). The European Food Safety Authority (EFSA) recommendations are general recommendations for the EU countries. The EFSA has not set any upper limits for intake. Individual EU countries may have more specific recommendations. The National Academy of Medicine (NAM) recommendations apply in the United States, Australia and New Zealand.
Intake in populations:
Twelve surveys undertaken in 9 EU countries between 2000 and 2011 estimated choline intake of adults in these countries to be 269–468 milligrams per day. Intake was 269–444 mg/day in adult women and 332–468 mg/day in adult men. Intake was 75–127 mg/day in infants, 151–210 mg/day in 1- to 3-year-olds, 177–304 mg/day in 3- to 10-year-olds and 244–373 mg/day in 10- to 18-year-olds. The total choline intake mean estimate was 336 mg/day in pregnant adolescents and 356 mg/day in pregnant women.A study based on the NHANES 2009–2012 survey estimated the choline intake to be too low in some US subpopulations. Intake was 315.2–318.8 mg/d in 2+ year olds between this time period. Out of 2+ year olds, only 15.6±0.8% of males and 6.1±0.6% of females exceeded the adequate intake (AI). AI was exceeded by 62.9±3.1% of 2- to 3-year-olds, 45.4±1.6% of 4- to 8-year-olds, 9.0±1.0% of 9- to 13-year-olds, 1.8±0.4% of 14–18 and 6.6±0.5% of 19+ year olds. Upper intake level was not exceeded in any subpopulations.A 2013–2014 NHANES study of the US population found the choline intake of 2- to 19-year-olds to be 256±3.8 mg/day and 339±3.9 mg/day in adults 20 and over. Intake was 402±6.1 mg/d in men 20 and over and 278 mg/d in women 20 and over.
Deficiency:
Signs and symptoms Symptomatic choline deficiency is rare in humans. Most obtain sufficient amounts of it from the diet and are able to biosynthesize limited amounts of it via PEMT. Symptomatic deficiency is often caused by certain diseases or by other indirect causes. Severe deficiency causes muscle damage and non-alcoholic fatty liver disease, which may develop into cirrhosis.Besides humans, fatty liver is also a typical sign of choline deficiency in other animals. Bleeding in the kidneys can also occur in some species. This is suspected to be due to deficiency of choline derived trimethylglycine, which functions as an osmoregulator.
Deficiency:
Causes and mechanisms Estrogen production is a relevant factor which predisposes individuals to deficiency, along with low dietary choline intake. Estrogens activate phosphatidylcholine-producing PEMT enzymes. Women before menopause have a lower dietary need for choline than men due to women's higher estrogen production. Without estrogen therapy, the choline needs of post-menopausal women are similar to men's. Some single-nucleotide polymorphisms (genetic factors) affecting choline and folate metabolism are also relevant. Certain gut microbes also degrade choline more efficiently than others, so they are also relevant. In deficiency, the availability of phosphatidylcholines in the liver is decreased – these are needed for the formation of VLDLs. Thus VLDL-mediated fatty acid transport out of the liver decreases, leading to fat accumulation in the liver. Other simultaneously occurring mechanisms explaining the observed liver damage have also been suggested. For example, choline phospholipids are also needed in mitochondrial membranes. Their unavailability leads to the inability of mitochondrial membranes to maintain a proper electrochemical gradient, which, among other things, is needed for degrading fatty acids via β-oxidation. Fat metabolism within the liver therefore decreases.
Excess intake:
Excessive doses of choline can have adverse effects. Daily 8–20 g doses of choline, for example, have been found to cause low blood pressure, nausea, diarrhea and fish-like body odor. The odor is due to trimethylamine (TMA) formed by the gut microbes from the unabsorbed choline (see trimethylaminuria). The liver oxidizes TMA to trimethylamine N-oxide (TMAO). Elevated levels of TMA and TMAO in the body have been linked to increased risk of atherosclerosis and mortality. Thus, excessive choline intake has been hypothesized to increase these risks, in addition to carnitine, which is also formed into TMA and TMAO by gut bacteria. However, choline intake has not been shown to increase the risk of dying from cardiovascular diseases. It is plausible that elevated TMA and TMAO levels are just a symptom of other underlying illnesses or genetic factors that predispose individuals to increased mortality. Such factors may not have been properly accounted for in certain studies observing TMA- and TMAO-related mortality. Causality may be reversed or confounded, and large choline intake might not increase mortality in humans. For example, kidney dysfunction predisposes to cardiovascular diseases, but can also decrease TMA and TMAO excretion.
Health effects:
Neural tube closure Low maternal intake of choline is associated with an increased risk of neural tube defects. Higher maternal intake of choline is likely associated with better neurocognition/neurodevelopment in children. Choline and folate, interacting with vitamin B12, act as methyl donors to homocysteine to form methionine, which can then go on to form SAM (S-adenosylmethionine). SAM is the substrate for almost all methylation reactions in mammals. It has been suggested that disturbed methylation via SAM could be responsible for the relation between folate and NTDs. This may also apply to choline. Certain mutations that disturb choline metabolism increase the prevalence of NTDs in newborns, but the role of dietary choline deficiency remains unclear, as of 2015.
Health effects:
Cardiovascular diseases and cancer Choline deficiency can cause fatty liver, which increases cancer and cardiovascular disease risk. Choline deficiency also decreases SAM production, which partakes in DNA methylation – this decrease may also contribute to carcinogenesis. Thus, deficiency and its association with such diseases has been studied. However, observational studies of free populations have not convincingly shown an association between low choline intake and cardiovascular diseases or most cancers. Studies on prostate cancer have been contradictory.
Health effects:
Cognition Studies observing the effect between higher choline intake and cognition have been conducted in human adults, with contradictory results. Similar studies on human infants and children have been contradictory and also limited.
Perinatal development:
Both pregnancy and lactation increase demand for choline dramatically. This demand may be met by upregulation of PEMT via increasing estrogen levels to produce more choline de novo, but even with increased PEMT activity, the demand for choline is still so high that bodily stores are generally depleted. This is exemplified by the observation that Pemt −/− mice (mice lacking functional PEMT) will abort at 9–10 days unless fed supplemental choline.While maternal stores of choline are depleted during pregnancy and lactation, the placenta accumulates choline by pumping choline against the concentration gradient into the tissue, where it is then stored in various forms, mostly as acetylcholine. Choline concentrations in amniotic fluid can be ten times higher than in maternal blood.
Perinatal development:
Functions in the fetus Choline is in high demand during pregnancy as a substrate for building cellular membranes (rapid fetal and mother tissue expansion), increased need for one-carbon moieties (a substrate for methylation of DNA and other functions), raising choline stores in fetal and placental tissues, and for increased production of lipoproteins (proteins containing "fat" portions). In particular, there is interest in the impact of choline consumption on the brain. This stems from choline's use as a material for making cellular membranes (particularly in making phosphatidylcholine). Human brain growth is most rapid during the third trimester of pregnancy and continues to be rapid to approximately five years of age. During this time, the demand is high for sphingomyelin, which is made from phosphatidylcholine (and thus from choline), because this material is used to myelinate (insulate) nerve fibers. Choline is also in demand for the production of the neurotransmitter acetylcholine, which can influence the structure and organization of brain regions, neurogenesis, myelination, and synapse formation. Acetylcholine is even present in the placenta and may help control cell proliferation and differentiation (increases in cell number and changes of multiuse cells into dedicated cellular functions) and parturition.Choline uptake into the brain is controlled by a low-affinity transporter located at the blood–brain barrier. Transport occurs when arterial blood plasma choline concentrations increase above 14 μmol/L, which can occur during a spike in choline concentration after consuming choline-rich foods. Neurons, conversely, acquire choline by both high- and low-affinity transporters. Choline is stored as membrane-bound phosphatidylcholine, which can then be used for acetylcholine neurotransmitter synthesis later. Acetylcholine is formed as needed, travels across the synapse, and transmits the signal to the following neuron. Afterwards, acetylcholinesterase degrades it, and the free choline is taken up by a high-affinity transporter into the neuron again.
Uses:
Choline chloride and choline bitartrate are used in dietary supplements. Bitartrate is used more often due to its lower hygroscopicity. Certain choline salts are used to supplement chicken, turkey and some other animal feeds. Some salts are also used as industrial chemicals: for example, in photolithography to remove photoresist. Choline theophyllinate and choline salicylate are used as medicines, as well as structural analogs, like methacholine and carbachol. Radiolabeled cholines, like 11C-choline, are used in medical imaging. Other commercially used salts include tricholine citrate and choline bicarbonate.
Antagonists and inhibitors:
Hundreds of choline antagonists and enzyme inhibitors have been developed for research purposes. Aminomethylpropanol is among the first ones used as a research tool. It inhibits choline and trimethylglycine synthesis. It is able to induce choline deficiency that in turn results in fatty liver in rodents. Diethanolamine is another such compound, but also an environmental pollutant. N-cyclohexylcholine inhibits choline uptake primarily in brains. Hemicholinium-3 is a more general inhibitor, but also moderately inhibits choline kinases. More specific choline kinase inhibitors have also been developed. Trimethylglycine synthesis inhibitors also exist: carboxybutylhomocysteine is an example of a specific BHMT inhibitor. The cholinergic hypothesis of dementia has not only led to medicinal acetylcholinesterase inhibitors, but also to a variety of acetylcholine inhibitors. Examples of such inhibiting research chemicals include triethylcholine, homocholine and many other N-ethyl derivatives of choline, which are false neurotransmitter analogs of acetylcholine. Choline acetyltransferase inhibitors have also been developed.
History:
Discovery In 1849, Adolph Strecker was the first to isolate choline from pig bile. In 1852, L. Babo and M. Hirschbrunn extracted choline from white mustard seeds and named it sinkaline. In 1862, Strecker repeated his experiment with pig and ox bile, calling the substance choline for the first time after the Greek word for bile, chole, and identifying it with the chemical formula C5H13NO. In 1850, Theodore Nicolas Gobley extracted from the brains and roe of carps a substance he named lecithin, after the Greek word for egg yolk, lekithos, showing in 1874 that it was a mixture of phosphatidylcholines. In 1865, Oscar Liebreich isolated "neurine" from animal brains. The structural formulas of acetylcholine and Liebreich's "neurine" were resolved by Adolf von Baeyer in 1867. Later that year, "neurine" and sinkaline were shown to be the same substance as Strecker's choline. Thus, Baeyer was the first to resolve the structure of choline. The compound now known as neurine is unrelated to choline.
History:
Discovery as a nutrient In the early 1930s, Charles Best and colleagues noted that fatty liver in rats on a special diet and in diabetic dogs could be prevented by feeding them lecithin, proving in 1932 that choline in lecithin was solely responsible for this preventive effect. In 1998, the US National Academy of Medicine reported their first recommendations for choline in the human diet.
**Multipolar exchange interaction**
Multipolar exchange interaction:
Magnetic materials with strong spin-orbit interaction, such as LaFeAsO, PrFe4P12, YbRu2Ge2, UO2, NpO2, Ce1−xLaxB6, URu2Si2 and many other compounds, are found to have magnetic ordering constituted by high-rank multipoles, e.g. quadrupoles, octupoles, etc. Due to the strong spin-orbit coupling, multipoles are automatically introduced to the system when the total angular momentum quantum number J is larger than 1/2. If those multipoles are coupled by some exchange mechanism, they can tend to order, just as spins do in the conventional spin-1/2 Heisenberg problem. Beyond multipolar ordering, many hidden-order phenomena are believed to be closely related to multipolar interactions.
Tensor operator expansion:
Basic concepts Consider a quantum mechanical system with a Hilbert space spanned by |j,mj⟩, where j is the total angular momentum and mj is its projection on the quantization axis. Then any quantum operator can be represented in the basis set {|j,mj⟩} as a (2j+1)×(2j+1) matrix. Therefore, one can define (2j+1)² matrices to completely expand any quantum operator in this Hilbert space. Taking J=1/2 as an example, a quantum operator A can be expanded as

$$A=\begin{pmatrix}1&2\\3&4\end{pmatrix}=1\begin{pmatrix}1&0\\0&0\end{pmatrix}+2\begin{pmatrix}0&1\\0&0\end{pmatrix}+3\begin{pmatrix}0&0\\1&0\end{pmatrix}+4\begin{pmatrix}0&0\\0&1\end{pmatrix}=1L_{1,1}+2L_{1,2}+3L_{2,1}+4L_{2,2}$$

Obviously, the matrices L_{ij}=|i⟩⟨j| form a basis set in the operator space. Any quantum operator defined in this Hilbert space can be expanded in the {L_{ij}} operators. In the following, these matrices are called a super basis, to distinguish them from the eigenbasis of quantum states. More specifically, the above super basis {L_{ij}} can be called a transition super basis because it describes the transition between states |i⟩ and |j⟩. In fact, this is not the only super basis that does the trick. We can also use the Pauli matrices and the identity matrix to form a super basis:

$$A=\begin{pmatrix}1&2\\3&4\end{pmatrix}=\tfrac{5}{2}\begin{pmatrix}1&0\\0&1\end{pmatrix}+\tfrac{5}{2}\begin{pmatrix}0&1\\1&0\end{pmatrix}-\tfrac{i}{2}\begin{pmatrix}0&-i\\i&0\end{pmatrix}-\tfrac{3}{2}\begin{pmatrix}1&0\\0&-1\end{pmatrix}=\tfrac{5}{2}I+\tfrac{5}{2}\sigma_x-\tfrac{i}{2}\sigma_y-\tfrac{3}{2}\sigma_z$$

Since the rotation properties of σx, σy, σz follow the same rules as the rank 1 tensors of cubic harmonics Tx, Ty, Tz, and the identity matrix I follows the same rules as the rank 0 tensor Ts, the basis set {I, σx, σy, σz} can be called the cubic super basis. Another commonly used super basis is the spherical harmonic super basis, which is built by replacing σx and σy with raising- and lowering-type operators, {I, σ−1, σ0, σ+1}:

$$A=\begin{pmatrix}1&2\\3&4\end{pmatrix}=\tfrac{5}{2}\begin{pmatrix}1&0\\0&1\end{pmatrix}+2\begin{pmatrix}0&1\\0&0\end{pmatrix}+\tfrac{3}{2}\begin{pmatrix}-1&0\\0&1\end{pmatrix}-3\begin{pmatrix}0&0\\-1&0\end{pmatrix}=\tfrac{5}{2}I+2\sigma_{+1}+\tfrac{3}{2}\sigma_0-3\sigma_{-1}$$

Again, σ−1, σ0, σ+1 share the same rotational properties as the rank 1 spherical harmonic tensors Y^1_{−1}, Y^1_{0}, Y^1_{+1}, so this set is called the spherical super basis.
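The cubic-super-basis expansion above can be checked numerically. The following is a small sketch (assuming NumPy, not part of the original treatment); the coefficient formula c = Tr(Aσ)/2 follows from the orthogonality and tracelessness of the Pauli matrices.

```python
# Numerical check (sketch) of the J = 1/2 example above: expand
# A = [[1, 2], [3, 4]] in the cubic super basis {I, sigma_x, sigma_y, sigma_z}.
# The coefficients follow from the trace inner product, c_k = Tr(A sigma_k)/2,
# since Tr(sigma_j sigma_k) = 2 delta_jk and the Pauli matrices are traceless.
import numpy as np

A  = np.array([[1, 2], [3, 4]], dtype=complex)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

basis  = {"I": I2, "sx": sx, "sy": sy, "sz": sz}
coeffs = {name: np.trace(A @ m) / 2 for name, m in basis.items()}
print(coeffs)  # {'I': 5/2, 'sx': 5/2, 'sy': -i/2, 'sz': -3/2}

# Rebuilding A from the coefficients shows the expansion is exact.
A_rebuilt = sum(c * basis[name] for name, c in coeffs.items())
assert np.allclose(A, A_rebuilt)
```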
Tensor operator expansion:
Because atomic orbitals s,p,d,f are also described by spherical or cubic harmonic functions, one can imagine or visualize these operators using the wave functions of atomic orbitals although they are essentially matrices not spatial functions.
Tensor operator expansion:
If we extend the problem to J=1, we will need 9 matrices to form a super basis. For the transition super basis, we have {L_{ij}; i,j = 1–3}. For the cubic super basis, we have {T_s, T_x, T_y, T_z, T_{xy}, T_{yz}, T_{zx}, T_{x²−y²}, T_{3z²−r²}}. For the spherical super basis, we have {Y^0_0, Y^1_{−1}, Y^1_0, Y^1_{+1}, Y^2_{−2}, Y^2_{−1}, Y^2_0, Y^2_{+1}, Y^2_{+2}}. In group theory, T_s / Y^0_0 is called a scalar or rank 0 tensor, T_{x,y,z} / Y^1_{−1,0,+1} are called dipoles or rank 1 tensors, and T_{xy,yz,zx,x²−y²,3z²−r²} / Y^2_{−2,−1,0,+1,+2} are called quadrupoles or rank 2 tensors. The example tells us that, for a J-multiplet problem, one needs all tensor operators of rank 0 to 2J to form a complete super basis. Therefore, for a J=1 system, its density matrix must have quadrupole components. This is the reason why a J > 1/2 problem automatically introduces high-rank multipoles into the system.

Formal definitions A general definition of the spherical harmonic super basis of a J-multiplet problem can be expressed as

$$Y_Q^K(J)=\sum_{M,M'}(-1)^{J-M}(2K+1)^{1/2}\begin{pmatrix}J & J & K\\ M' & -M & Q\end{pmatrix}|JM\rangle\langle JM'|,$$

where the parentheses denote a 3-j symbol; K is the rank, which ranges from 0 to 2J; and Q is the projection index of rank K, which ranges from −K to +K. A cubic harmonic super basis, in which all the tensor operators are Hermitian, can be defined as

$$T_Q^K=\frac{1}{\sqrt{2}}\left[(-1)^Q Y_Q^K(J)+Y_{-Q}^K(J)\right],\qquad T_{-Q}^K=\frac{i}{\sqrt{2}}\left[Y_Q^K(J)-(-1)^Q Y_{-Q}^K(J)\right]$$

Then, any quantum operator A defined in the J-multiplet Hilbert space can be expanded as

$$A=\sum_{K,Q}\alpha_Q^K Y_Q^K=\sum_{K,Q}\beta_Q^K T_Q^K=\sum_{i,j}\gamma_{i,j}L_{i,j}$$

where the expansion coefficients can be obtained by taking the trace inner product, e.g. α_Q^K = Tr[A Y_Q^{K†}]. Evidently, one can make linear combinations of these operators to form a new super basis with different symmetries.
Tensor operator expansion:
Multi-exchange description Using the addition theorem of tensor operators, the product of a rank n tensor and a rank m tensor can generate a new tensor with rank ranging from |n−m| to n+m. Therefore, a high-rank tensor can be expressed as a product of low-rank tensors. This convention is useful for interpreting the high-rank multipolar exchange terms as a "multi-exchange" process of dipoles (or pseudospins). For example, for the spherical harmonic tensor operators of the J=1 case, we have

$$Y_{-2}^{2}=2\,Y_{-1}^{1}Y_{-1}^{1},\qquad Y_{-1}^{2}=\sqrt{2}\,\big(Y_{-1}^{1}Y_{0}^{1}+Y_{0}^{1}Y_{-1}^{1}\big),\qquad Y_{0}^{2}=\tfrac{2}{\sqrt{6}}\,\big(Y_{-1}^{1}Y_{+1}^{1}+2Y_{0}^{1}Y_{0}^{1}+Y_{+1}^{1}Y_{-1}^{1}\big),$$
$$Y_{+1}^{2}=\sqrt{2}\,\big(Y_{0}^{1}Y_{+1}^{1}+Y_{+1}^{1}Y_{0}^{1}\big),\qquad Y_{+2}^{2}=2\,Y_{+1}^{1}Y_{+1}^{1}$$

If so, a quadrupole-quadrupole interaction (see next section) can be considered as a two-step dipole-dipole interaction. For example, Y_{+2}^{2,i}Y_{-2}^{2,j} = 4 Y_{+1}^{1,i}Y_{+1}^{1,i}Y_{-1}^{1,j}Y_{-1}^{1,j}, so the one-step quadrupole transition Y_{+2}^{2,i} on site i becomes a two-step dipole transition Y_{+1}^{1,i}Y_{+1}^{1,i}. Hence not only inter-site exchange but also intra-site exchange terms appear (so-called multi-exchange). If J is even larger, one can expect more complicated intra-site exchange terms to appear. However, one has to note that this is not a perturbation expansion but just a mathematical technique. The high-rank terms are not necessarily smaller than the low-rank terms; in many systems, high-rank terms are more important than low-rank terms.
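The J = 1 relations above can be checked directly from the 3-j-symbol definition of the spherical super basis given earlier. The following is a small sketch (assuming SymPy and its wigner_3j function; not part of the original treatment) that builds the operators and verifies the quadrupole-as-two-dipoles identity for the Q = +2 component.

```python
# Sketch: construct Y^K_Q(J) for J = 1 from the 3-j-symbol definition above
# and check the product relation Y^2_{+2} = 2 Y^1_{+1} Y^1_{+1}.
from sympy import sqrt, Matrix
from sympy.physics.wigner import wigner_3j

def Y(K, Q, J=1):
    Ms = [J - i for i in range(2 * J + 1)]      # M = J, J-1, ..., -J
    return Matrix(len(Ms), len(Ms),
                  lambda a, b: (-1) ** (J - Ms[a]) * sqrt(2 * K + 1)
                               * wigner_3j(J, J, K, Ms[b], -Ms[a], Q))

assert Y(2, 2) == 2 * Y(1, 1) * Y(1, 1)         # quadrupole = two dipole steps
print(Y(2, 2))                                   # a single |1,+1><1,-1| transition
```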
Multipolar exchange interactions:
There are four major mechanisms that induce exchange interactions between two magnetic moments in a system: (1) direct exchange, (2) RKKY, (3) superexchange, and (4) spin-lattice coupling. No matter which one dominates, a general form of the exchange interaction can be written as

$$H=\sum_{ij}\sum_{KQ}C_{Q_iQ_j}^{K_iK_j}\,T_{Q_i}^{K_i}T_{Q_j}^{K_j}$$

where i, j are the site indexes and C_{Q_iQ_j}^{K_iK_j} is the coupling constant that couples the two multipole moments T_{Q_i}^{K_i} and T_{Q_j}^{K_j}. One can immediately see that if K is restricted to 1 only, the Hamiltonian reduces to the conventional Heisenberg model.
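As a toy illustration of this Hamiltonian, the sketch below assembles a two-site matrix by summing tensor products of multipole operators; the operators and coupling constants are assumptions chosen for a J = 1 pseudospin pair, not values for any real compound.

```python
# Toy two-site multipolar exchange Hamiltonian, H = sum_K C_K T^K_i (x) T^K_j,
# restricted to dipole-dipole and quadrupole-quadrupole terms for J = 1.
import numpy as np

# Spin-1 operators in the |+1>, |0>, |-1> basis.
Jz = np.diag([1.0, 0.0, -1.0])
Jx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
O20 = 3 * Jz @ Jz - 2 * np.eye(3)          # quadrupole operator 3Jz^2 - J(J+1)

couplings = [          # (made-up coupling constant, operator on i, operator on j)
    (1.0, Jz, Jz),     # longitudinal dipole-dipole (Heisenberg-like) term
    (0.5, Jx, Jx),     # transverse dipole-dipole term
    (0.3, O20, O20),   # quadrupole-quadrupole term
]

H = sum(c * np.kron(Ti, Tj) for c, Ti, Tj in couplings)
print(H.shape)                    # (9, 9) for two coupled J = 1 sites
print(np.linalg.eigvalsh(H))      # spectrum of the toy model
```

Restricting the list to the rank-1 (dipole) entries recovers a conventional Heisenberg-type model, as noted above.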
Multipolar exchange interactions:
An important feature of the multipolar exchange Hamiltonian is its anisotropy. The value of the coupling constant C_{Q_iQ_j}^{K_iK_j} is usually very sensitive to the relative angle between two multipoles. Unlike the conventional spin-only exchange Hamiltonian, where the coupling constants are isotropic in a homogeneous system, the highly anisotropic atomic orbitals (recall the shape of the s, p, d, f wave functions) coupling to the system's magnetic moments will inevitably introduce huge anisotropy even in a homogeneous system. This is one of the main reasons that most multipolar orderings tend to be non-collinear.
Antiferromagnetism of multipolar moments:
Unlike magnetic spin ordering, where antiferromagnetism can be defined by flipping the magnetization axis of two neighboring sites from a ferromagnetic configuration, flipping the magnetization axis of a multipole is usually meaningless. Taking a T_{yz} moment as an example, if one flips the z-axis by making a π rotation toward the y-axis, nothing changes. Therefore, a suggested definition of antiferromagnetic multipolar ordering is to flip the phases of the multipoles by π, i.e. T_{yz} → e^{iπ}T_{yz} = −T_{yz}. In this regard, antiferromagnetic spin ordering is just a special case of this definition, i.e. flipping the phase of a dipole moment is equivalent to flipping its magnetization axis. For higher-rank multipoles, e.g. T_{yz}, the phase flip actually corresponds to a π/2 rotation, and for T_{3z²−r²} it is not any kind of rotation at all.
Computing coupling constants:
Calculation of multipolar exchange interactions remains a challenging issue in many aspects. Although there have been many works based on fitting model Hamiltonians to experiments, predictions of the coupling constants based on first-principles schemes remain lacking. Currently there are two studies that have implemented first-principles approaches to explore multipolar exchange interactions. An early study was developed in the 1980s. It is based on a mean-field approach that can greatly reduce the complexity of the coupling constants induced by the RKKY mechanism, so the multipolar exchange Hamiltonian can be described by just a few unknown parameters, which can be obtained by fitting to experimental data. Later on, a first-principles approach to estimate the unknown parameters was further developed and achieved good agreement with a few selected compounds, e.g. cerium monopnictides. Another first-principles approach was also proposed recently. It maps all the coupling constants induced by all static exchange mechanisms to a series of DFT+U total energy calculations and achieved agreement with uranium dioxide.
**Florentine bronze**
Florentine bronze:
Florentine bronze is a modern term for a type of bronzed metal.
Florentine bronze:
Prior to 1828, the primary artificial bronze used for copper and copper alloys was antique green of various shades. A metal colouring called "Florentine bronze" was introduced by a Frenchman named Lafleur around 1828 and soon became popular. A variation, Florentine fremé ("smoked" bronze), was introduced by another Frenchman named Camus in 1833. The alloy is usually formed as a mixture of aluminium or tin (<10%) and copper (>90%). There is currently no single chemical formula for Florentine bronze, as the alloy's proportions are not standardised worldwide. "Florentine bronze" bears no relation to the 16th-century bronze reductions of full-scale sculptures that were made in Florence after models by Giambologna and other Mannerist sculptors, to satisfy a collectors' market.
**HD 213240**
HD 213240:
HD 213240 is a possible binary star system in the constellation Grus. It has an apparent visual magnitude of 6.81, which lies below the limit of visibility for normal human sight. The system is located at a distance of 133.5 light years from the Sun based on parallax. The primary has an absolute magnitude of 3.77. This is an ordinary G-type main-sequence star with a stellar classification of G0/G1V. It is a metal-rich star with an age that has been calculated as being anywhere from 2.7 to 4.6 billion years. The star has 1.6 times the mass of the Sun and 1.56 times the Sun's radius. It is spinning with a projected rotational velocity of 3.5 km/s. The star is radiating 2.69 times the luminosity of the Sun from its photosphere at an effective temperature of 5,921 K. A red dwarf companion star was detected in 2005 with a projected separation of 3,898 AU.
Planetary system:
The Geneva extrasolar planet search team discovered a planet orbiting this star in 2001. Since this planet was discovered by radial velocity, only its minimum mass was initially known, and there was a 5% chance of it being massive enough to be a brown dwarf. In 2023, the inclination and true mass of HD 213240 b were determined via astrometry, confirming its planetary nature.
**Refractive index and extinction coefficient of thin film materials**
Refractive index and extinction coefficient of thin film materials:
A. R. Forouhi and I. Bloomer deduced dispersion equations for the refractive index, n, and extinction coefficient, k, which were published in 1986 and 1988. The 1986 publication relates to amorphous materials, while the 1988 publication relates to crystalline materials. Subsequently, in 1991, their work was included as a chapter in “The Handbook of Optical Constants”. The Forouhi–Bloomer dispersion equations describe how photons of varying energies interact with thin films. When used with a spectroscopic reflectometry tool, the Forouhi–Bloomer dispersion equations specify n and k for amorphous and crystalline materials as a function of photon energy E. Values of n and k as a function of photon energy, E, are referred to as the spectra of n and k, which can also be expressed as functions of the wavelength of light, λ, since E = hc/λ. The symbol h represents Planck’s constant and c, the speed of light in vacuum. Together, n and k are often referred to as the “optical constants” of a material (though they are not constants since their values depend on photon energy).
Refractive index and extinction coefficient of thin film materials:
The derivation of the Forouhi–Bloomer dispersion equations is based on obtaining an expression for k as a function of photon energy, symbolically written as k(E), starting from first principles quantum mechanics and solid state physics. An expression for n as a function of photon energy, symbolically written as n(E), is then determined from the expression for k(E) in accordance to the Kramers–Kronig relations which states that n(E) is the Hilbert transform of k(E).
Refractive index and extinction coefficient of thin film materials:
The Forouhi–Bloomer dispersion equations for n(E) and k(E) of amorphous materials are given as:

$$k(E)=\frac{A\,(E-E_g)^2}{E^2-BE+C}$$
$$n(E)=n(\infty)+\frac{B_0E+C_0}{E^2-BE+C}$$

The five parameters A, B, C, Eg, and n(∞) each have physical significance. Eg is the optical energy band gap of the material. A, B, and C depend on the band structure of the material. They are positive constants such that 4C − B² > 0. Finally, n(∞), a constant greater than unity, represents the value of n at E = ∞. The parameters B0 and C0 in the equation for n(E) are not independent parameters, but depend on A, B, C, and Eg. They are given by:

$$B_0=\frac{A}{Q}\left(-\frac{B^2}{2}+E_gB-E_g^2+C\right)$$
$$C_0=\frac{A}{Q}\left[(E_g^2+C)\frac{B}{2}-2E_gC\right]$$

where

$$Q=\frac{1}{2}\left(4C-B^2\right)^{1/2}$$

Thus, for amorphous materials, a total of five parameters is sufficient to fully describe the dependence of both n and k on photon energy, E.
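A short numerical sketch of these expressions follows (Python assumed; the parameter values are placeholders rather than fitted constants for any particular material):

```python
# Evaluate the single-term (amorphous) Forouhi-Bloomer n(E) and k(E) above.
import numpy as np

def forouhi_bloomer_nk(E, A, B, C, Eg, n_inf):
    """Return (n, k) at photon energy E in eV; requires 4C - B**2 > 0."""
    Q  = 0.5 * np.sqrt(4 * C - B**2)
    B0 = (A / Q) * (-B**2 / 2 + Eg * B - Eg**2 + C)
    C0 = (A / Q) * ((Eg**2 + C) * B / 2 - 2 * Eg * C)
    denom = E**2 - B * E + C
    return n_inf + (B0 * E + C0) / denom, A * (E - Eg)**2 / denom

E = np.linspace(1.5, 6.5, 6)                           # photon energies, eV
n, k = forouhi_bloomer_nk(E, A=0.06, B=7.0, C=12.6, Eg=1.6, n_inf=1.9)
print(np.round(n, 3))
print(np.round(k, 3))
```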
Refractive index and extinction coefficient of thin film materials:
For crystalline materials, which have multiple peaks in their n and k spectra, the Forouhi–Bloomer dispersion equations can be extended as follows:

$$k(E)=\sum_{i=1}^{q}\left[\frac{A_i\,(E-E_{g_i})^2}{E^2-B_iE+C_i}\right]$$
$$n(E)=n(\infty)+\sum_{i=1}^{q}\left[\frac{B_{0i}E+C_{0i}}{E^2-B_iE+C_i}\right]$$

The number of terms in each sum, q, is equal to the number of peaks in the n and k spectra of the material. Every term in the sum has its own values of the parameters A, B, C, Eg, as well as its own values of B0 and C0. Analogous to the amorphous case, the terms all have physical significance.
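The multi-peak form is a direct sum of such terms; a minimal sketch (with placeholder parameters again) is:

```python
# Sum the crystalline (multi-peak) extension of k(E); each peak carries its
# own A_i, B_i, C_i, E_gi. Values below are placeholders, not fitted data.
import numpy as np

def fb_crystalline_k(E, peaks):
    """peaks: iterable of (A, B, C, Eg) tuples, one per peak in the spectrum."""
    E = np.asarray(E, dtype=float)
    return sum(A * (E - Eg)**2 / (E**2 - B * E + C) for A, B, C, Eg in peaks)

peaks = [(0.05, 7.0, 12.6, 1.5),    # first peak
         (0.03, 10.0, 25.5, 1.5)]   # second peak
print(fb_crystalline_k([2.0, 3.0, 4.5], peaks))
```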
Characterizing thin films:
The refractive index (n) and extinction coefficient (k) are related to the interaction between a material and incident light, and are associated with refraction and absorption (respectively). They can be considered as the “fingerprint of the material". Thin film material coatings on various substrates provide important functionalities for the microfabrication industry, and the n, k, as well as the thickness, t, of these thin film constituents must be measured and controlled to allow for repeatable manufacturing.
Characterizing thin films:
The Forouhi–Bloomer dispersion equations for n and k were originally expected to apply to semiconductors and dielectrics, whether in amorphous, polycrystalline, or crystalline states. However, they have been shown to describe the n and k spectra of transparent conductors, as well as metallic compounds. The formalism for crystalline materials was found to also apply to polymers, which consist of long chains of molecules that do not form a crystallographic structure in the classical sense.
Characterizing thin films:
Other dispersion models that can be used to derive n and k, such as the Tauc–Lorentz model, can be found in the literature. Two well-known models—Cauchy and Sellmeier—provide empirical expressions for n valid over a limited measurement range, and are only useful for non-absorbing films where k=0. Consequently, the Forouhi–Bloomer formulation has been used for measuring thin films in various applications.In the following discussions, all variables of photon energy, E, will be described in terms of wavelength of light, λ, since experimentally variables involving thin films are typically measured over a spectrum of wavelengths. The n and k spectra of a thin film cannot be measured directly, but must be determined indirectly from measurable quantities that depend on them. Spectroscopic reflectance, R(λ), is one such measurable quantity. Another, is spectroscopic transmittance, T(λ), applicable when the substrate is transparent. Spectroscopic reflectance of a thin film on a substrate represents the ratio of the intensity of light reflected from the sample to the intensity of incident light, measured over a range of wavelengths, whereas spectroscopic transmittance, T(λ), represents the ratio of the intensity of light transmitted through the sample to the intensity of incident light, measured over a range of wavelengths; typically, there will also be a reflected signal, R(λ), accompanying T(λ).
Characterizing thin films:
The measurable quantities, R(λ) and T(λ) depend not only on n(λ) and k(λ) of the film, but also on film thickness, t, and n(λ) and k(λ) of the substrate. For a silicon substrate, the n(λ) and k(λ) values are known and are taken as a given input. The challenge of characterizing thin films involves extracting t, n(λ) and k(λ) of the film from the measurement of R(λ) and/or T(λ). This can be achieved by combining the Forouhi–Bloomer dispersion equations for n(λ) and k(λ) with the Fresnel equations for the reflection and transmission of light at an interface to obtain theoretical, physically valid, expressions for reflectance and transmittance. In so doing, the challenge is reduced to extracting the five parameters A, B, C, Eg, and n(∞) that constitute n(λ) and k(λ), along with film thickness, t, by utilizing a nonlinear least squares regression analysis fitting procedure. The fitting procedure entails an iterative improvement of the values of A, B, C, Eg, n(∞), t, in order to reduce the sum of the squares of the errors between the theoretical R(λ) or theoretical T(λ) and the measured spectrum of R(λ) or T(λ).
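The regression described above can be sketched in a highly simplified form. The assumptions here are normal incidence, a single absorbing film on a substrate with a known, constant optical index, and synthetic rather than measured reflectance; a production reflectometer would use the full multilayer Fresnel model and real data.

```python
# Simplified sketch of fitting (A, B, C, Eg, n_inf, t) to a reflectance spectrum.
import numpy as np
from scipy.optimize import least_squares

HC_EV_NM = 1239.84  # hc in eV*nm, converts wavelength (nm) to photon energy (eV)

def fb_nk(E, A, B, C, Eg, n_inf):
    Q  = 0.5 * np.sqrt(np.maximum(4 * C - B**2, 1e-12))  # guard: model needs 4C - B^2 > 0
    B0 = (A / Q) * (-B**2 / 2 + Eg * B - Eg**2 + C)
    C0 = (A / Q) * ((Eg**2 + C) * B / 2 - 2 * Eg * C)
    d  = E**2 - B * E + C
    return n_inf + (B0 * E + C0) / d, A * (E - Eg)**2 / d

def reflectance(wl_nm, A, B, C, Eg, n_inf, t_nm, N_sub):
    """Normal-incidence reflectance of one film of thickness t_nm on a substrate."""
    n, k = fb_nk(HC_EV_NM / wl_nm, A, B, C, Eg, n_inf)
    N1   = n - 1j * k
    r01  = (1 - N1) / (1 + N1)                  # air/film interface
    r12  = (N1 - N_sub) / (N1 + N_sub)          # film/substrate interface
    phase = np.exp(-4j * np.pi * N1 * t_nm / wl_nm)
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return np.abs(r)**2

wl    = np.linspace(250, 1000, 200)
N_sub = 4.0 - 0.05j                             # crude stand-in for a silicon substrate
R_meas = reflectance(wl, 0.06, 7.0, 12.6, 1.6, 1.9, 120.0, N_sub)  # synthetic "data"

def residuals(p):
    return reflectance(wl, *p, N_sub) - R_meas

fit = least_squares(residuals, x0=[0.05, 6.5, 12.0, 1.4, 1.8, 100.0],
                    bounds=([0, 0, 0, 0, 1, 1], [1, 20, 100, 5, 5, 1000]))
print(fit.x)   # fitted A, B, C, Eg, n(inf), t (should land near the inputs)
```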
Characterizing thin films:
Besides spectroscopic reflectance and transmittance, spectroscopic ellipsometry can also be used in an analogous way to characterize thin films and determine t, n(λ) and k(λ).
Measurement examples:
The following examples show the versatility of using the Forouhi–Bloomer dispersion equations to characterize thin films utilizing a tool based on near-normal incident spectroscopic reflectance. Near-normal spectroscopic transmittance is also utilized when the substrate is transparent. The n(λ) and k(λ) spectra of each film are obtained along with film thickness, over a wide range of wavelengths from deep ultraviolet to near infrared wavelengths (190–1000 nm).
Measurement examples:
In the following examples, the notation for theoretical and measured reflectance in the spectral plots is expressed as “R-theor” and “R-meas”, respectively.
Measurement examples:
Below are schematics depicting the thin film measurement process: The Forouhi–Bloomer dispersion equations in combination with Rigorous Coupled-Wave Analysis (RCWA) have also been used to obtain detailed profile information (depth, CD, sidewall angle) of trench structures. In order to extract structure information, polarized broadband reflectance data, Rs and Rp, must be collected over a large wavelength range from a periodic structure (grating), and then analyzed with a model that incorporates Forouhi–Bloomer dispersion equations and RCWA. Inputs into the model include grating pitch and n and k spectra of all materials within the structure, while outputs can include Depth, CDs at multiple locations, and even sidewall angle. The n and k spectra of such materials can be obtained in accordance with the methodology described in this section for thin film measurements.
Measurement examples:
Below are schematics depicting the measurement process for trench structures. Examples of trench measurements then follow.
Example 1: Amorphous silicon on oxidized silicon substrate (a-Si/SiO2/Si-Sub) Example 1 shows one broad maximum in the n(λ) and k(λ) spectra of the a-Si film, as is expected for amorphous materials. As a material transitions toward crystallinity, the broad maximum gives way to several sharper peaks in its n(λ) and k(λ) spectra, as demonstrated in the graphics.
Measurement examples:
When the measurement involves two or more films in a stack of films, the theoretical expression for reflectance must be expanded to include the n(λ) and k(λ) spectra, plus thickness, t, of each film. However, the regression may not converge to unique values of the parameters, due to the non-linear nature of the expression for reflectance. So it is helpful to eliminate some of the unknowns . For example, the n(λ) and k(λ) spectra of one or more of the films may be known from the literature or previous measurements, and held fixed (not allowed to vary) during the regression. To obtain the results shown in Example 1, the n(λ) and k(λ) spectra of the SiO2 layer was fixed, and the other parameters, n(λ) and k(λ) of a-Si, plus thicknesses of both a-Si and SiO2 were allowed to vary.
Measurement examples:
Example 2: 248 nm photoresist on silicon substrate (PR/Si-Sub) Polymers such as photoresist consist of long chains of molecules which do not form a crystallographic structure in the classic sense. However, their n(λ) and k(λ) spectra exhibit several sharp peaks rather than a broad maximum expected for non-crystalline materials. Thus, the measurement results for a polymer are based on the Forouhi–Bloomer formulation for crystalline materials. Most of the structure in the n(λ) and k(λ) spectra occurs in the deep UV wavelength range and thus to properly characterize a film of this nature, it is necessary that the measured reflectance data in the deep UV range is accurate.
Measurement examples:
The figure shows a measurement example of a photoresist (polymer) material used for 248 nm micro-lithography. Six terms were used in the Forouhi–Bloomer equations for crystalline materials to fit the data and achieve the results.
Measurement examples:
Example 3: Indium tin oxide on glass substrate (ITO/Glass-Sub) Indium tin oxide (ITO) is a conducting material with the unusual property that it is transparent, so it is widely used in the flat panel display industry. Reflectance and transmittance measurements of the uncoated glass substrate were needed in order to determine the previously unknown n(λ) and k(λ) spectra of the glass. The reflectance and transmittance of ITO deposited on the same glass substrate were then measured simultaneously, and analyzed using the Forouhi–Bloomer equations.
Measurement examples:
As expected, the k(λ) spectrum of ITO is zero in the visible wavelength range, since ITO is transparent. The behavior of the k(λ) spectrum of ITO in the near-infrared (NIR) and infrared (IR) wavelength ranges resembles that of a metal: non-zero in the NIR range of 750–1000 nm (difficult to discern in the graphics since its values are very small) and reaching a maximum value in the IR range (λ>1000 nm). The average k value of the ITO film in the NIR and IR range is 0.05.
Measurement examples:
Example 4: Multi-spectral analysis of germanium (40%)–selenium (60%) thin films When dealing with complex films, in some instances the parameters cannot be resolved uniquely. To constrain the solution to a set of unique values, a technique involving multi-spectral analysis can be used. In the simplest case, this entails depositing the film on two different substrates and then simultaneously analyzing the results using the Forouhi–Bloomer dispersion equations.
Measurement examples:
For example, the single measurement of reflectance in 190–1000 nm range of Ge40Se60/Si does not provide unique n(λ) and k(λ) spectra of the film. However, this problem can be solved by depositing the same Ge40Se60 film on another substrate, in this case oxidized silicon, and then simultaneously analyzing the measured reflectance data to determine: Thickness of the Ge40Se60/Si film on the silicon substrate as 34.5nm, Thickness of the Ge40Se60/Si film on the oxidized silicon substrate as 33.6nm, Thickness of SiO2 (with n and k spectra of SiO2 held fixed), and n and k spectra, in 190–1000 nm range, of Ge40Se60/Si.
Measurement examples:
Example 5: Complex trench structure The trench structure depicted in the adjacent diagram repeats itself in 160 nm intervals, that is, it has a given pitch of 160 nm. The trench is composed of the following materials: Accurate n and k values of these materials are necessary in order to analyze the structure. Often a blanket area on the trench sample with the film of interest is present for the measurement. In this example, the reflectance spectrum of the poly-silicon was measured on a blanket area containing the poly-silicon, from which its n and k spectra were determined in accordance with the methodology described in this article that utilizes the Forouhi–Bloomer dispersion equations. Fixed tables of n and k values were used for the SiO2 and Si3N4 films.
Measurement examples:
Combining the n and k spectra of the films with Rigorous Coupled-Wave Analysis (RCWA), the following critical parameters were determined (with measured results as well):
**Endoscopy unit**
Endoscopy unit:
An endoscopy unit refers to a dedicated area where medical procedures are performed with endoscopes, which are cameras used to visualize structures within the body, such as the digestive tract and genitourinary system. Endoscopy units may be located within a hospital, incorporated within other medical care centres, or may be stand-alone in nature.
In the early days of endoscopy, when fewer procedures were carried out, facilities such as operating theatres tended to be used; as the number of procedures carried out and the complexity of the procedures and equipment increased, the need for specialised rooms and staff became apparent.
Components:
An endoscopy unit consists of the following components: trained and accredited endoscopists (which are usually gastroenterologists or surgeons); trained nursing and additional staff; endoscopes and other equipment; preparation, procedural and recovery areas; a disinfection and cleaning area for equipment; emergency equipment and personnel; and, a program for quality assurance. Procedures performed within an endoscopy unit may include gastrointestinal endoscopy (such as gastroscopy, colonoscopy, ERCP, and endoscopic ultrasound), bronchoscopy, cystoscopy, or other more specialized procedures. Endoscopy units may be part of a hospital, where emergency procedures may be performed on ill patients admitted to hospital; however, most endoscopies are performed on ambulatory patients in the outpatient setting.
Layout:
Endoscopy units consist of a number of areas: Reception and waiting area for patients and relatives.
Consultation rooms.
Changing areas.
Procedure rooms.
Recovery area.
Decontamination area.
Procedure rooms These are the rooms where the endoscopic procedures are performed.
Procedure rooms should contain: Patient trolley.
Endoscopy 'stack' and video monitor(s) – this equipment contains the light source and processor required for the endoscopes to produce images.
Monitoring equipment to allow continuous monitoring of patient condition during procedures.
suction equipment to allow both aspiration of airway secretions and to allow aspiration of fluid through the endoscope.
Piped oxygen supply.
Medication used to provide procedural sedation.
Ancillary equipment - endoscopy biopsy forceps, snares, injectors (see Instruments used in gastroenterology).
Diathermy and/or Argon plasma coagulation equipment.
Computer(s) used to generate endoscopy reports. Procedure rooms should be at least 200 square feet (19 m2) in size, and hospitals should have at least two procedure rooms. Larger endoscopy units should contain one procedure room per 1,000 to 1,500 procedures performed annually.
Recovery area Since a number of patients undergoing endoscopy receive sedation, and a few emergency patients may be unstable, there must be an area available for the observation of patients until they have recovered. These areas also need to have piped oxygen, full monitoring facilities (including pulse oximetry), suction, resuscitation equipment and emergency drugs.
**National research and education network**
National research and education network:
A national research and education network (NREN) is a specialised internet service provider dedicated to supporting the needs of the research and education communities within a country.
It is usually distinguished by support for a high-speed backbone network, often offering dedicated channels for individual research projects.
In recent years NRENs have developed many 'above the net' services.
List of NRENs by geographic area:
East and Southern Africa UbuntuNet Alliance for Research and Education Networking - the Alliance of NRENs of East and Southern Africa Eb@le - DRC NREN EthERNet - Ethiopian NREN iRENALA - Malagasy NREN KENET - Kenyan NREN MAREN - Malawian NREN MoRENet - Mozambican NREN RENU - Ugandan NREN RwEdNet - Rwanda NREN SomaliREN - Somali NREN SudREN - Sudanese NREN TENET/SANReN - South African NREN TERNET - Tanzanian NREN Xnet - Namibian NREN ZAMREN - Zambian NREN North Africa ASREN - Arab States Research and Education Network TUREN - Tunisian NREN MARWAN - Moroccan NREN ENREN - Egyptian NREN ARN (Algeria) - Algerian NREN SudREN - Sudanese NREN SomaliREN - Somali NREN West and Central Africa WACREN - West and Central African Research and Education Network GARNET - Ghanaian NREN TogoRER - Togolese NREN GhREN - Ghanaian NREN MaliREN - Mali NREN Niger-REN - Nigerien NREN RITER - Côte d'Ivoire NREN SnRER - Senegalese NREN NgREN - Nigerian NREN Eko-Konnect Research and Education Network - Nigerian NREN LRREN - Liberia Research and Education Network Asia Pacific APAN - Asia-Pacific Advanced Network AARNet - Australian NREN AfgREN - Afghanistan NREN BDREN - Bangladeshi NREN CSTNET - China Science and Technology Network CERNET - China Education and Research Network ERNET - Indian NREN HARNET - Hong Kong NREN KOREN - Korean NREN KREONET- Korean NREN IDREN - Indonesian NREN LEARN - Sri Lankan NREN SINET - Japanese NREN MYREN - Malaysian NREN NKN - Indian NREN NREN - Nepal NREN NREN - Islamic Republic of Iran NREN REANNZ - New Zealand NREN PERN - Pakistani NREN PREGINET - Philippine NREN SingAREN - Singaporean NREN TWAREN - Taiwanese NREN UniNet - Thai NREN VinaRen - Vietnamese NREN CamREN- Cambodia NREN TEIN - Trans Eurasia Information Network North America United States – although advocated since the 1980s, the U.S. does not have one single NREN.Canada Latin America RedCLARA - Cooperación Latino Americana de Redes Avanzadas (Association of Latin American NRENs) Innova-Red - Argentinian NREN ADSIB - Bolivian NREN RNP - Brazilian NREN REUNA - Chilean NREN RENATA - Colombian NREN RedCONARE - Costa Rican NREN CEDIA - Ecuadorian NREN RAICES - El Salvadoran NREN RAGIE - Guatemalan NREN Universidad Tecnológica Centroamericana (UNITEC) - Honduran NREN CUDI - Mexican NREN RENIA - Nicaraguan NREN RedCyT - Panamanian NREN Arandu - Paraguayan NREN RAAP - Peruvian NREN RAU - Uruguayan NREN REACCIUN CNTI?- Venezuelan NREN Caribbean C@ribNET - Caribbean NREN TTRENT - Trinidad and Tobago NREN JREN - Jamaica NREN RADEI - NREN of the Dominican Republic Europe European Academic and Research Network GÉANT - Develops and maintains the GÉANT backbone network on behalf of European NRENs. Formerly DANTE and TERENA.
List of NRENs by geographic area:
CEENet - Central and Eastern European Research Networking Association Eumedconnect - South Mediterranean Backbone ANA Albanian NREN ASNET-AM - Armenian NREN ACOnet - Austrian NREN AzScienceNet Azerbaijan NREN BASNET - Belarus NREN Belnet - Belgian NREN BREN - Bulgarian NREN CESNET - Czech NREN CARNET - Croatian NREN CYNET - Cypriot NREN SURFnet - Dutch NREN EENet - Estonian NREN RENATER - French NREN Deutsches Forschungsnetz (DFN) - German NREN GRENA - Georgian NREN GRNET - Greek NREN KIFU/NIIF Program - Hungarian NREN HEAnet - Irish NREN GARR - Italian NREN KazRENA - Kazakhstan NREN SigmaNet - Latvian NREN LITNET - Lithuanian NREN RESTENA - Luxembourg NREN MARNET - Macedonian NREN RiċerkaNet - Maltese NREN RENAM - Moldovian NREN MREN - Montenegro NREN PIONIER (PSNC) - Polish NREN FCCN - Portuguese NREN RoEduNet - Romanian NREN RUNNet - Russian NREN AMRES - Serbian NREN ARNES - Slovenian NREN SANET - Slovakian NREN RedIRIS - Spanish NREN SWITCH - Swiss NREN ULAKBIM - Turkish NREN URAN - Ukrainian NREN Jisc - United Kingdom NREN, operator of the Janet network KREN - Kosovo NREN Nordic countries NORDUnet - Nordic backbone network DeiC - Danish NREN FUNET - Finnish NREN RHnet - Icelandic NREN SUNET - Swedish NREN UNINETT - Norwegian NREN Middle East Maeen Saudi Arabia NREN Eumedconnect - Mediterranean/North African Backbone ANKABUT UAE NREN OMREN Omani NREN IUCC - Israeli NREN JUnet Jordanian NREN IRAN SHOA Iranian NREN PALNREN Palestinian NREN Lebanon Birzeit Uni/AlQuds Palestinian Authority QNREN - Qatar NREN HIAST Syrian NREN Central Asia RUNNet - Russian University Network, Russian NREN ASNET-AM - Armenian AzScienceNet Azerbaijan NREN GRENA - Georgian NREN KazRENA - Kazakhstan NREN KRENA - Kyrgyzian NREN TuRENA - Turkmenistan NREN UZSCINET - Uzbekistan NREN | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**German Braille**
German Braille:
German Braille is one of the older braille alphabets. The French-based order of the letter assignments was largely settled on with the 1878 convention that decided the standard for international braille. However, the assignments for German letters beyond the 26 of the basic Latin alphabet are mostly unrelated to French values.
Letters:
The letters are assigned in numerical order by decade. The generic accent sign, ⠈, is used with foreign names such as ⠍⠕⠇⠊⠈⠑⠗⠑ Molière that have accented letters not found in German. There are numerous contractions and abbreviations.
Punctuation:
For punctuation, only the first asterisk is marked with dot 6, so print *** is in braille ⠠⠔⠔⠔.
⠴ is the Artikel sign, marking an article of a document.
For the brackets of phonetic transcription, German Braille uses a modified form, ⠰⠶...⠰⠶.
Additional punctuation and symbols, especially mathematical, are explained in the external reference below.
Numbers:
Numbers are introduced with the sign ⠼. They are dropped to decade 5 for ordinals and for the denominator of fractions.
So, for example, ⠼⠙ is ⟨4⟩, while ⠼⠲ is ⟨4.⟩ (4th), and ⠼⠉⠲ is ⟨3⁄4⟩.
The percent sign requires the number sign even after a number: ⠼⠃⠼⠴ ⟨2%⟩; otherwise it would look like the (undefined) fraction 2⁄0.
In a compound fraction, a repeat of the number sign separates the whole number from the fraction: ⠼⠁⠼⠁⠆ ⟨1+1⁄2⟩.
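To make the number-sign mechanics above concrete, here is a minimal Python sketch; it is purely illustrative, and the digit-to-cell tables (in particular the lowered "decade 5" forms) are assumptions that should be checked against an authoritative German Braille reference.

```python
# Illustrative sketch of the number notation described above (not a standard library).
# Ordinary digits: number sign ⠼ followed by the letters a–j.
# Ordinals and fraction denominators: the "dropped" (decade 5) forms (assumed table).
NUMBER_SIGN = "⠼"
DIGITS = dict(zip("1234567890", "⠁⠃⠉⠙⠑⠋⠛⠓⠊⠚"))
DROPPED = dict(zip("1234567890", "⠂⠆⠒⠲⠢⠖⠶⠦⠔⠴"))

def braille_number(digits: str, dropped: bool = False) -> str:
    """Render a digit string with the number sign, optionally in the dropped form."""
    table = DROPPED if dropped else DIGITS
    return NUMBER_SIGN + "".join(table[d] for d in digits)

print(braille_number("4"))                  # ⠼⠙  -> ⟨4⟩
print(braille_number("4", dropped=True))    # ⠼⠲  -> ⟨4.⟩ (4th)
print(braille_number("3") + DROPPED["4"])   # ⠼⠉⠲ -> ⟨3⁄4⟩
```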
Formatting:
The emphasis sign (for italics, underline, or bold) is marked with an extra point, ⠠⠸, when it occurs in the middle of a word. It is doubled, ⠸⠸, when more than one word is emphasized, in which case the ending sign ⠠⠄ will be required at the end of the last word.
Formatting:
The all-caps sign is used for initialisms and the like. Doubled, it is used for all-cap text, such as titles, and the same ending sign, ⠠⠄, is used. Names with initials, such as J.S. Bach, do not require the cap sign. The lower-case sign ⠠ is used to mark mixed case or exceptions to expected capitalization; as such, it replaces the apostrophe that sets off the plural -s in print: ⠘⠊⠉⠠⠎ ⟨IC's⟩, ⠘⠍⠨⠓⠵ ⟨MHz⟩, ⠨⠛⠍⠃⠘⠓ ⟨GmbH⟩. (Note that the initialism sign can be used for a single letter.) Lower-case metric units are marked as lower-case: ⠠⠅⠘⠺ ⟨kW⟩. This is useful, as it ends the scope of the number sign ⠼: ⠼⠁⠉⠚⠠⠓⠨⠏⠁ ⟨130 hPa⟩, ⠼⠁⠉⠚⠠⠅⠘⠧⠁ ⟨130 kVA⟩. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pit manager**
Pit manager:
A pit boss (more commonly known today as the pit manager) is the person who directs the employees who work in a casino pit. The job of the pit boss is to manage the floormen, who are the supervisors for table games dealers in a casino. One pit boss monitors all floormen, dealers, and players in the pit; there is usually one floorman for every six tables. The floormen correct minor mistakes, but if a severe gaming discrepancy arises (such as duplicate cards being found in a deck), it is the job of the pit boss to sort it out. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Emotion Markup Language**
Emotion Markup Language:
An Emotion Markup Language (EML or EmotionML) was first defined by the W3C Emotion Incubator Group (EmoXG) as a general-purpose emotion annotation and representation language, which should be usable in a large variety of technological contexts where emotions need to be represented. Emotion-oriented computing (or "affective computing") is gaining importance as interactive technological systems become more sophisticated. Representing the emotional states of a user or the emotional states to be simulated by a user interface requires a suitable representation format; in this case a markup language is used.
Emotion Markup Language:
EmotionML version 1.0 was published by the group in May 2014.
History:
In 2006, a first W3C Incubator Group, the Emotion Incubator Group (EmoXG), was set up "to investigate a language to represent the emotional states of users and the emotional states simulated by user interfaces", with the final report published on 10 July 2007. In 2007, the Emotion Markup Language Incubator Group (EmotionML XG) was set up as a follow-up to the Emotion Incubator Group, "to propose a specification draft for an Emotion Markup Language, to document it in a way accessible to non-experts, and to illustrate its use in conjunction with a number of existing markups." The final report of the Emotion Markup Language Incubator Group, Elements of an EmotionML 1.0, was published on 20 November 2008. The work was then continued in 2009 within the framework of the W3C's Multimodal Interaction Activity, with the First Public Working Draft of "Emotion Markup Language (EmotionML) 1.0" being published on 29 October 2009. The Last Call Working Draft of "Emotion Markup Language 1.0" was published on 7 April 2011. The Last Call Working Draft addressed all open issues that arose from community feedback on the First Public Working Draft as well as the results of a workshop held in Paris in October 2010. Along with the Last Call Working Draft, a list of vocabularies for EmotionML was published to aid developers in using common vocabularies for annotating or representing emotions.
History:
Annual draft updates were published until the 1.0 version was finished in 2014.
Reasons for defining an emotion markup language:
A standard for an emotion markup language would be useful for the following purposes: To enhance computer-mediated human-human or human-machine communication. Emotions are a basic part of human communication and should therefore be taken into account, e.g. in emotional chat systems or empathic voice boxes. This involves the specification, analysis and display of emotion-related states.
To enhance systems' processing efficiency. Emotion and intelligence are strongly interconnected. The modeling of human emotions in computer processing can help to build more efficient systems, e.g. using emotional models for time-critical decision enforcement.
Reasons for defining an emotion markup language:
To allow the analysis of non-verbal behavior, emotion, and mental states, which can be provided using web services to enable data collection, analysis, and reporting. Concrete examples of existing technology that could apply EmotionML include: Opinion mining / sentiment analysis in Web 2.0, to automatically track customers' attitudes regarding a product across blogs; Affective monitoring, such as ambient assisted living applications, fear detection for surveillance purposes, or using wearable sensors to test customer satisfaction; Wellness technologies that provide assistance according to a person's emotional state with the goal of improving the person's well-being; Character design and control for games and virtual worlds; Building web services to capture, analyse, and report data on the non-verbal behavior, emotion and mental states of an individual or group across the internet using standard web technologies such as HTML5 and JSON.
Reasons for defining an emotion markup language:
Social robots, such as guide robots engaging with visitors; Expressive speech synthesis, generating synthetic speech with different emotions, such as happy or sad, friendly or apologetic; expressive synthetic speech would for example make more information available to blind and partially sighted people, and enrich their experience of the content; Emotion recognition (e.g., for spotting angry customers in speech dialog systems, to improve computer games or e-Learning applications); Support for people with disabilities, such as educational programs for people with autism. EmotionML can be used to make the emotional intent of content explicit. This would enable people with learning disabilities (such as Asperger syndrome) to realise the emotional context of the content; EmotionML can be used for media transcripts and captions. Where emotions are marked up to help deaf or hearing-impaired people who cannot hear the soundtrack, more information is made available to enrich their experience of the content. The Emotion Incubator Group has listed 39 individual use cases for an emotion markup language. A standardised way to mark up the data needed by such "emotion-oriented systems" has the potential to boost development, primarily because data that was annotated in a standardised way can be interchanged between systems more easily, thereby simplifying a market for emotional databases. The standard can also be used to ease a market of providers for sub-modules of emotion processing systems, e.g. a web service for the recognition of emotion from text, speech or multi-modal input.
The challenge of defining a generally usable emotion markup language:
Any attempt to standardize the description of emotions using a finite set of fixed descriptors is doomed to failure, as there is no consensus on the number of relevant emotions, on the names that should be given to them, or on how else best to describe them. For example, the difference between ":)" and "(:" is small, but a standardized markup with a fixed set of descriptors would render one of them invalid. Even more basically, the list of emotion-related states that should be distinguished varies depending on the application domain and the aspect of emotions to be focused on. In short, the vocabulary needed depends on the context of use.
The challenge of defining a generally usable emotion markup language:
On the other hand, the basic structure of concepts is less controversial: it is generally agreed that emotions involve triggers, appraisals, feelings, expressive behavior including physiological changes, and action tendencies; emotions in their entirety can be described in terms of categories or a small number of dimensions; emotions have an intensity, and so on. For details, see the Scientific Descriptions of Emotions in the Final Report of the Emotion Incubator Group.
The challenge of defining a generally usable emotion markup language:
Given this lack of agreement on descriptors in the field, the only practical way of defining an emotion markup language is to define the possible structural elements and to allow users to "plug in" vocabularies that they consider appropriate for their work.
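As a rough illustration of this "plug in a vocabulary" design, the short Python sketch below builds an EmotionML-style document with ElementTree. It is only a sketch: the namespace and the category-set URI shown are taken from W3C draft material and are assumptions here, to be verified against the final specification.

```python
# Minimal sketch of an EmotionML-style annotation with a pluggable category vocabulary.
# The namespace and vocabulary URIs are assumptions for illustration only.
import xml.etree.ElementTree as ET

EMOTIONML_NS = "http://www.w3.org/2009/10/emotionml"          # assumed namespace
CATEGORY_SET = "http://www.w3.org/TR/emotion-voc/xml#big6"    # assumed vocabulary URI

root = ET.Element("emotionml", {"xmlns": EMOTIONML_NS, "category-set": CATEGORY_SET})
emotion = ET.SubElement(root, "emotion")
ET.SubElement(emotion, "category", {"name": "happiness", "value": "0.8"})

print(ET.tostring(root, encoding="unicode"))
```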
The challenge of defining a generally usable emotion markup language:
An additional challenge lies in the aim to provide a markup language that is generally usable. The requirements that arise from different use cases are rather different. Whereas manual annotation tends to require all the fine-grained distinctions considered in the scientific literature, automatic recognition systems can usually distinguish only a very small number of different states and affective avatars need yet another level of detail for expressing emotions in an appropriate way.
The challenge of defining a generally usable emotion markup language:
For the reasons outlined here, it is clear that there is an inevitable tension between flexibility and interoperability, which need to be weighed in the formulation of an EmotionML. The guiding principle in the following specification has been to provide a choice only where it is needed, and to propose reasonable default options for every choice.
Applications and web services benefiting from an emotion markup language:
There is a range of existing projects and applications for which an emotion markup language will enable the building of web services to capture data on individuals' non-verbal behavior, mental states, and emotions, and to allow results to be reported and rendered in a standardized format using standard web technologies such as JSON and HTML5. One such project is measuring affect data across the Internet using EyesWeb. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Patulous Eustachian tube**
Patulous Eustachian tube:
Patulous Eustachian tube (PET) is the name of a physical disorder where the Eustachian tube, which is normally closed, instead stays intermittently open. When this occurs, the person experiences autophony, the hearing of self-generated sounds. These sounds, such as one's own breathing, voice, and heartbeat, vibrate directly onto the ear drum and can create a "bucket on the head" effect. PET is a form of eustachian tube dysfunction (ETD), which is said to be present in about 1 percent of the general population.
Signs and symptoms:
With patulous Eustachian tube, variations in upper airway pressure associated with respiration are transmitted to the middle ear through the Eustachian tube. This causes an unpleasant fullness feeling in the middle ear and alters the auditory perception. Complaints seem to include muffled hearing and autophony. In addition, patulous Eustachian tube generally feels dry with no clogged feeling or sinus pressure.
Signs and symptoms:
Patients hear their own voice or its echo from inside. They describe it as being amplified and unpleasant. Lying head down may help since it increases venous blood pressure and congestion of the mucosa.
Causes:
Patulous Eustachian tube is a physical disorder. The exact causes may vary depending on the person. Weight loss is a commonly cited cause of the disorder due to the nature of the Eustachian tube itself and is associated with approximately one-third of reported cases. Fatty tissues hold the tube closed most of the time in healthy individuals. When circumstances cause overall body fat to diminish, the tissue surrounding the Eustachian tube shrinks and this function is disrupted. Activities and substances which dehydrate the body have the same effect and are also possible causes of patulous Eustachian tube. Examples are stimulants (including caffeine) and exercise. Exercise may have a more short-term effect than caffeine or weight loss in this regard.
Causes:
Pregnancy can also be a cause of patulous Eustachian tube due to the effects of pregnancy hormones on surface tension and mucus in the respiratory system. Granulomatosis with polyangiitis can also be a cause of this disorder; it is not yet known why.
PET can occur as a result of liquid residue in the Eustachian tube, after suffering a middle ear infection (otitis media).
Other reported causes include radiation therapy, high levels of estrogen, nasal decongestants, stress, sudden weight loss, and neurological disorders.
Diagnosis:
Upon examination of a suspected case of patulous Eustachian tube, a doctor can directly view the tympanic membrane with a light and observe that it vibrates with every breath taken by the patient. A tympanogram may also help with the diagnosis. Patulous Eustachian tube is likely if brisk inspiration causes a significant pressure shift.
Diagnosis:
Patulous Eustachian tube is frequently misdiagnosed as standard congestion due to the similarity in symptoms and the rarity of the disorder. Audiologists are more likely to recognize the disorder, usually with tympanometry or nasally delivered masking noise during a hearing assessment, which is highly sensitive to this condition. When misdiagnosis occurs, a decongestant medication is sometimes prescribed. This type of medication aggravates the condition, as the Eustachian tube relies on sticky fluids to keep closed and the drying effect of a decongestant would make it even more likely to remain open and cause symptoms. The misdiagnosed patient may also have tubes surgically inserted into the eardrum, which increases the risk of ear infection and will not alleviate patulous Eustachian tube. If these treatments are tried and fail, and the doctor is not aware of the actual condition, the symptoms may even be classified as psychological.
Diagnosis:
Incidentally, patients who instead suffer from the even rarer condition of superior canal dehiscence are at risk for misdiagnosis of patulous Eustachian tube due to the similar autophony in both conditions.
Treatment:
Estrogen nasal drops or saturated potassium iodide have been used to induce edema of the eustachian tube opening. Nasal medications containing diluted hydrochloric acid, chlorobutanol, and benzyl alcohol have been reported to be effective in some patients, with few side effects; Food and Drug Administration approval is still pending, however. Nasal sprays have also been a very effective temporary treatment for this disease. In extreme cases, surgical intervention may attempt to restore the Eustachian tube tissues with fat, gel foam, or cartilage, or scar it closed with cautery. These methods are not always successful. For example, early attempts at surgical correction involving injections of polytetrafluoroethylene (Teflon) paste gave transient relief, but were discontinued due to several deaths that resulted from inadvertent intracarotid injections. Although a temporary solution, surgical ventilation tube placement in the ear drum has also proven to be an effective treatment option. This treatment is known as either a unilateral or bilateral myringotomy. 50% of patients reported relief of PET symptoms when given this treatment. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Long dice**
Long dice:
Long dice (sometimes oblong or stick dice) are dice, often roughly right prisms or (in the case of barrel dice) antiprisms, designed to land on any of several marked lateral faces, but neither end. Landing on end may be rendered very rare simply by their small size relative to the faces, by the instability implicit in the height of the dice, and by rolling the long dice along their axes rather than tossing. Many long dice provide further insurance against landing on end by giving the ends a rounded or peaked shape, rendering such an outcome physically impossible (at least on a flat solid surface).
Long dice:
Design advantages of long dice include the relative ease of creating fair dice with an odd number of faces, and (for four-faced dice) being easier to roll than the tetrahedral d4 dice found in many role-playing games.
Four faces (square prisms):
Both cubic dice and four-faced long dice are found as early as the mid third millennium BCE at Indus Valley civilisation sites; these are marked variously with dot-and-ring figures, linear devices, and Indus Valley signs. Dot-and-ring figures are used to this day on long dice in India, and predominate on central European long dice. In India, long dice (pasa) are used to play Chaupar (a relative of Pachisi); the faces may be marked with the values 1-3-4-6 or 1-2-5-6, though older Indian long dice were marked 1-2-3-4. Similar dice were used by Germanic people before the Migration Period. These include distinctive roughly ovoid Westerwanna-type dice (named for the site of their initial discovery in Lower Saxony); these are typically about 2 cm in length and marked with dot-and-ring figures of values 2-3-4-5. Long dice are used with the Scandinavian games Daldøs (typically marked A-II-III-IIII or X-II-III-IIII) and Sáhkku (with a variety of similar markings including X-II-III-[blank]); these dice may be so short as to exhibit nearly square faces, and therefore feature pyramidal ends.
More faces (n-gonal prisms):
A five-faced long die (pentagonal prism) is used in the Korean game of Dignitaries. Owzthat and similar forms of pencil cricket (a cricket simulation game) use two six-faced long dice (hexagonal prisms, like segments of a pencil). Though the traditional English Lang Larence ("Long Lawrence") was sometimes four-faced, it commonly appeared with eight faces (octagonal prism), even though it continued to display only four distinct values (each value being displayed on two faces).
More faces (n-gonal prisms):
The gambling game played with the Lang Larence is the same as that usually played with teetotums. A teetotum is essentially a long die (though not necessarily physically long) with a spindle through its axis, allowing it to be spun and preventing it from landing on end. Though many teetotums (for example, the dreidel) are four-faced, they may have any practical number of faces.
Barrel dice:
Barrel dice are a more recent design, used most often by players of role playing games and wargames. They appear roughly cylindrical, and are generally modified antiprisms with between four and twenty flattened triangular facets, each numbered. Each triangular face alternates in alignment by 180 degrees. The two ends are formed by half as many triangular facets as there are numbered faces, arranged as a pyramid so that it is impossible for the die to stop on one of its ends. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dorsal nerve of the clitoris**
Dorsal nerve of the clitoris:
The dorsal nerve of the clitoris is a nerve in females that branches off the pudendal nerve to innervate the clitoris. The nerve is important for female sexual pleasure, and it may play a role in clitoral erections. It travels from below the inferior pubic ramus to the suspensory ligament of the clitoris. At its thickest, the DNC is 2 mm (0.079 in) in diameter, visible to the naked eye during dissection. The DNC splits into two nerve branches on either side of the midline, closely following the crura of the clitoris. Some surgeries—for example, sling surgeries to treat female urinary incontinence—can damage the DNC, causing a loss of sensation in the clitoris. Understanding the nerve is important for urologists and gynecologists who may operate on organs near the DNC. The dorsal nerve of the clitoris is analogous to the dorsal nerve of the penis in males. It is a terminal branch of the pudendal nerve. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pyromorphite**
Pyromorphite:
Pyromorphite is a mineral species composed of lead chlorophosphate: Pb5(PO4)3Cl, sometimes occurring in sufficient abundance to be mined as an ore of lead. Crystals are common, and have the form of a hexagonal prism terminated by the basal planes, sometimes combined with narrow faces of a hexagonal pyramid. Crystals with a barrel-like curvature are not uncommon. Globular and reniform masses are also found. It is part of a series with two other minerals: mimetite (Pb5(AsO4)3Cl) and vanadinite (Pb5(VO4)3Cl); the resemblance in external characters is so close that, as a rule, it is only possible to distinguish between them by chemical tests. They were formerly confused under the names green lead ore and brown lead ore (German: Grünbleierz and Braunbleierz). The phosphate was first distinguished chemically by M. H. Klaproth in 1784, and it was named pyromorphite by J. F. L. Hausmann in 1813. The name is derived from the Greek for pyr (fire) and morfe (form) due to its crystallization behavior after being melted. Paecilomyces javanicus is a mold collected from a lead-polluted soil that is able to form biominerals of pyromorphite.
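As a simple arithmetic check on the formula Pb5(PO4)3Cl, the following Python sketch estimates the molar mass from rounded standard atomic weights (the helper and the numerical values are illustrative, not taken from this article).

```python
# Estimate the molar mass of pyromorphite, Pb5(PO4)3Cl, from rounded atomic weights.
ATOMIC_WEIGHT = {"Pb": 207.2, "P": 30.97, "O": 16.00, "Cl": 35.45}  # g/mol (rounded)

def molar_mass(counts):
    """Sum atomic weights times the number of atoms of each element."""
    return sum(ATOMIC_WEIGHT[element] * n for element, n in counts.items())

# Pb5(PO4)3Cl contains 5 Pb, 3 P, 12 O and 1 Cl per formula unit.
mass = molar_mass({"Pb": 5, "P": 3, "O": 12, "Cl": 1})
print(f"approximately {mass:.0f} g/mol")  # roughly 1356 g/mol
```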
Properties and isomorphism:
The color of the mineral is usually some bright shade of green, yellow or brown, and the luster is resinous. The hardness is 3.5 to 4, and the specific gravity 6.5 - 7.1. Owing to isomorphous replacement of the phosphorus by arsenic there may be a gradual passage from pyromorphite to mimetite. Varieties containing calcium isomorphously replacing lead are lower in density (specific gravity 5.9 - 6.5) and usually lighter in color; they bear the names polysphaerite (because of the globular form), miesite from Stříbro (pronounced Mies in German) in Bohemia, nussierite from Nuizière, Chénelette, near Beaujeu, Rhône, France, and cherokine from Cherokee County in Georgia. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Etemadi's inequality**
Etemadi's inequality:
In probability theory, Etemadi's inequality is a so-called "maximal inequality", an inequality that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound. The result is due to Nasrollah Etemadi.
Statement of the inequality:
Let X1, ..., Xn be independent real-valued random variables defined on some common probability space, and let α ≥ 0. Let Sk denote the partial sum
$$S_k = X_1 + \cdots + X_k.$$
Then
$$\Pr\Bigl(\max_{1\le k\le n}|S_k|\ge 3\alpha\Bigr)\le 3\max_{1\le k\le n}\Pr\bigl(|S_k|\ge\alpha\bigr).$$
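The inequality is easy to probe numerically. The following Python sketch is an illustrative Monte Carlo check with standard normal summands (the choice of distribution, sample sizes and threshold are arbitrary assumptions); it estimates both sides of the bound.

```python
# Monte Carlo check of Etemadi's inequality for independent standard normal X_k.
import numpy as np

rng = np.random.default_rng(0)
n, trials, alpha = 50, 20000, 5.0
X = rng.standard_normal((trials, n))      # rows: independent experiments
S = np.cumsum(X, axis=1)                  # partial sums S_1, ..., S_n per experiment

lhs = np.mean(np.max(np.abs(S), axis=1) >= 3 * alpha)     # P(max_k |S_k| >= 3*alpha)
rhs = 3 * np.max(np.mean(np.abs(S) >= alpha, axis=0))     # 3 * max_k P(|S_k| >= alpha)
print(f"{lhs:.4f} <= {rhs:.4f}")          # the left side should not exceed the right side
```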
Remark:
Suppose that the random variables Xk have common expected value zero. Apply Chebyshev's inequality to the right-hand side of Etemadi's inequality after replacing α by α/3: since Var(Sk) ≤ Var(Sn) for independent summands, each term satisfies Pr(|Sk| ≥ α/3) ≤ 9 Var(Sn)/α². The result is Kolmogorov's inequality with an extra factor of 27 on the right-hand side:
$$\Pr\Bigl(\max_{1\le k\le n}|S_k|\ge\alpha\Bigr)\le\frac{27}{\alpha^2}\operatorname{Var}(S_n).$$ | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Horn antenna**
Horn antenna:
A horn antenna or microwave horn is an antenna that consists of a flaring metal waveguide shaped like a horn to direct radio waves in a beam. Horns are widely used as antennas at UHF and microwave frequencies, above 300 MHz. They are used as feed antennas (called feed horns) for larger antenna structures such as parabolic antennas, as standard calibration antennas to measure the gain of other antennas, and as directive antennas for such devices as radar guns, automatic door openers, and microwave radiometers. Their advantages are moderate directivity, broad bandwidth, low losses, and simple construction and adjustment. One of the first horn antennas was constructed in 1897 by Bengali-Indian radio researcher Jagadish Chandra Bose in his pioneering experiments with microwaves. The modern horn antenna was invented independently in 1938 by Wilmer Barrow and G. C. Southworth. The development of radar in World War II stimulated horn research to design feed horns for radar antennas. The corrugated horn invented by Kay in 1962 has become widely used as a feed horn for microwave antennas such as satellite dishes and radio telescopes. An advantage of horn antennas is that since they have no resonant elements, they can operate over a wide range of frequencies, a wide bandwidth. The usable bandwidth of horn antennas is typically of the order of 10:1, and can be up to 20:1 (for example allowing it to operate from 1 GHz to 20 GHz). The input impedance is slowly varying over this wide frequency range, allowing low voltage standing wave ratio (VSWR) over the bandwidth. The gain of horn antennas ranges up to 25 dBi, with 10–20 dBi being typical.
Description:
A horn antenna is used to transmit radio waves from a waveguide (a metal pipe used to carry radio waves) out into space, or collect radio waves into a waveguide for reception. It typically consists of a short length of rectangular or cylindrical metal tube (the waveguide), closed at one end, flaring into an open-ended conical or pyramidal shaped horn on the other end. The radio waves are usually introduced into the waveguide by a coaxial cable attached to the side, with the central conductor projecting into the waveguide to form a quarter-wave monopole antenna. The waves then radiate out the horn end in a narrow beam. In some equipment the radio waves are conducted between the transmitter or receiver and the antenna by a waveguide; in this case the horn is attached to the end of the waveguide. In outdoor horns, such as the feed horns of satellite dishes, the open mouth of the horn is often covered by a plastic sheet transparent to radio waves, to exclude moisture.
How it works:
A horn antenna serves the same function for electromagnetic waves that an acoustical horn does for sound waves in a musical instrument such as a trumpet. It provides a gradual transition structure to match the impedance of a tube to the impedance of free space, enabling the waves from the tube to radiate efficiently into space. If a simple open-ended waveguide is used as an antenna, without the horn, the sudden end of the conductive walls causes an abrupt impedance change at the aperture, from the wave impedance in the waveguide to the impedance of free space (about 377 Ω). When radio waves travelling through the waveguide hit the opening, this impedance step reflects a significant fraction of the wave energy back down the guide toward the source, so that not all of the power is radiated. This is similar to the reflection at an open-ended transmission line or a boundary between optical media with a low and high index of refraction, like at a glass surface. The reflected waves cause standing waves in the waveguide, increasing the SWR, wasting energy and possibly overheating the transmitter. In addition, the small aperture of the waveguide (less than one wavelength) causes significant diffraction of the waves issuing from it, resulting in a wide radiation pattern without much directivity.
How it works:
To improve these poor characteristics, the ends of the waveguide are flared out to form a horn. The taper of the horn changes the impedance gradually along the horn's length. This acts like an impedance matching transformer, allowing most of the wave energy to radiate out the end of the horn into space, with minimal reflection. The taper functions similarly to a tapered transmission line, or an optical medium with a smoothly varying refractive index. In addition, the wide aperture of the horn projects the waves in a narrow beam.
How it works:
The horn shape that gives minimum reflected power is an exponential taper. Exponential horns are used in special applications that require minimum signal loss, such as satellite antennas and radio telescopes. However conical and pyramidal horns are most widely used, because they have straight sides and are easier to design and fabricate.
Radiation pattern:
The waves travel down a horn as spherical wavefronts, with their origin at the apex of the horn, a point called the phase center. The pattern of electric and magnetic fields at the aperture plane at the mouth of the horn, which determines the radiation pattern, is a scaled-up reproduction of the fields in the waveguide. Because the wavefronts are spherical, the phase increases smoothly from the edges of the aperture plane to the center, because of the difference in length of the center point and the edge points from the apex point. The difference in phase between the center point and the edges is called the phase error. This phase error, which increases with the flare angle, reduces the gain and increases the beamwidth, giving horns wider beamwidths than similar-sized plane-wave antennas such as parabolic dishes.
Radiation pattern:
At the flare angle, the radiation of the beam lobe is down about 20 dB from its maximum value. As the size of a horn (expressed in wavelengths) is increased, the phase error increases, giving the horn a wider radiation pattern. Keeping the beamwidth narrow requires a longer horn (smaller flare angle) to keep the phase error constant. The increasing phase error limits the aperture size of practical horns to about 15 wavelengths; larger apertures would require impractically long horns. This limits the gain of practical horns to about 1000 (30 dBi) and the corresponding minimum beamwidth to about 5–10°.
Types:
Below are the main types of horn antennas. Horns can have different flare angles as well as different expansion curves (elliptic, hyperbolic, etc.) in the E-field and H-field directions, making possible a wide variety of different beam profiles.
Pyramidal horn (a, right) – a horn antenna with the horn in the shape of a four-sided pyramid, with a rectangular cross section. They are a common type, used with rectangular waveguides, and radiate linearly polarized radio waves.
Sectoral horn – A pyramidal horn with only one pair of sides flared and the other pair parallel. It produces a fan-shaped beam, which is narrow in the plane of the flared sides, but wide in the plane of the narrow sides. These types are often used as feed horns for wide search radar antennas.
E-plane horn (b) – A sectoral horn flared in the direction of the electric or E-field in the waveguide.
H-plane horn (c) – A sectoral horn flared in the direction of the magnetic or H-field in the waveguide.
Conical horn (d) – A horn in the shape of a cone, with a circular cross section. They are used with cylindrical waveguides.
Types:
Exponential horn (e) – A horn with curved sides, in which the separation of the sides increases as an exponential function of length. Also called a scalar horn, they can have pyramidal or conical cross sections. Exponential horns have minimum internal reflections, and almost constant impedance and other characteristics over a wide frequency range. They are used in applications requiring high performance, such as feed horns for communication satellite antennas and radio telescopes.
Types:
Corrugated horn – A horn with parallel slots or grooves, small compared with a wavelength, covering the inside surface of the horn, transverse to the axis. Corrugated horns have wider bandwidth and smaller sidelobes and cross-polarization, and are widely used as feed horns for satellite dishes and radio telescopes.
Dual-mode conical horn – (The Potter horn ) This horn can be used to replace the corrugated horn for use at sub-mm wavelengths where the corrugated horn is lossy and difficult to fabricate.
Diagonal horn – This simple dual-mode horn superficially looks like a pyramidal horn with a square output aperture. On closer inspection, however, the square output aperture is seen to be rotated 45° relative to the waveguide. These horns are typically machined into split blocks and used at sub-mm wavelengths.
Ridged horn – A pyramidal horn with ridges or fins attached to the inside of the horn, extending down the center of the sides. The fins lower the cutoff frequency, increasing the antenna's bandwidth.
Septum horn – A horn which is divided into several subhorns by metal partitions (septums) inside, attached to opposite walls.
Types:
Aperture-limited horn – a long narrow horn, long enough so the phase error is a negligible fraction of a wavelength, so it essentially radiates a plane wave. It has an aperture efficiency of 1.0 so it gives the maximum gain and minimum beamwidth for a given aperture size. The gain is not affected by the length but only limited by diffraction at the aperture. Used as feed horns in radio telescopes and other high-resolution antennas.
Optimum horn:
For a given frequency and horn length, there is some flare angle that gives minimum reflection and maximum gain. The internal reflections in straight-sided horns come from the two locations along the wave path where the impedance changes abruptly; the mouth or aperture of the horn, and the throat where the sides begin to flare out. The amount of reflection at these two sites varies with the flare angle of the horn (the angle the sides make with the axis). In narrow horns with small flare angles most of the reflection occurs at the mouth of the horn. The gain of the antenna is low because the small mouth approximates an open-ended waveguide, with a large impedance step. As the angle is increased, the reflection at the mouth decreases rapidly and the antenna's gain increases. In contrast, in wide horns with flare angles approaching 90° most of the reflection is at the throat. The horn's gain is again low because the throat approximates an open-ended waveguide. As the angle is decreased, the amount of reflection at this site drops, and the horn's gain again increases.
Optimum horn:
This discussion shows that there is some flare angle between 0° and 90° which gives maximum gain and minimum reflection. This is called the optimum horn. Most practical horn antennas are designed as optimum horns. In a pyramidal horn, the dimensions that give an optimum horn are:
$$a_E = \sqrt{2\lambda L_E}\qquad a_H = \sqrt{3\lambda L_H}$$
For a conical horn, the dimensions that give an optimum horn are:
$$d = \sqrt{3\lambda L}$$
where aE is the width of the aperture in the E-field direction, aH is the width of the aperture in the H-field direction, LE is the slant height of the side in the E-field direction, LH is the slant height of the side in the H-field direction, d is the diameter of the cylindrical horn aperture, L is the slant height of the cone from the apex, and λ is the wavelength. An optimum horn does not yield maximum gain for a given aperture size; that is achieved with a very long horn (an aperture-limited horn). The optimum horn yields maximum gain for a given horn length. Tables showing dimensions for optimum horns for various frequencies are given in microwave handbooks.
Gain:
Horns have very little loss, so the directivity of a horn is roughly equal to its gain. The gain G of a pyramidal horn antenna (the ratio of the radiated power intensity along its beam axis to the intensity of an isotropic antenna with the same input power) is:
$$G = \frac{4\pi A}{\lambda^2} e_A$$
For conical horns, the gain is:
$$G = \left(\frac{\pi d}{\lambda}\right)^2 e_A$$
where A is the area of the aperture, d is the aperture diameter of a conical horn, λ is the wavelength, and eA is a dimensionless parameter between 0 and 1 called the aperture efficiency. The aperture efficiency ranges from 0.4 to 0.8 in practical horn antennas. For optimum pyramidal horns, eA = 0.511, while for optimum conical horns eA = 0.522, so an approximate figure of 0.5 is often used. The aperture efficiency increases with the length of the horn, and for aperture-limited horns is approximately unity.
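To make these formulas concrete, here is a small Python sketch (an illustrative calculation only; the 10 GHz frequency and 27 cm slant heights are arbitrary example values) that computes the optimum-horn aperture widths and the resulting pyramidal-horn gain.

```python
# Optimum pyramidal horn dimensions and gain, per the formulas above.
import math

def optimum_pyramidal_apertures(wavelength, L_E, L_H):
    """Aperture widths a_E = sqrt(2*lambda*L_E) and a_H = sqrt(3*lambda*L_H)."""
    return math.sqrt(2 * wavelength * L_E), math.sqrt(3 * wavelength * L_H)

def pyramidal_horn_gain(a_E, a_H, wavelength, e_A=0.511):
    """Gain G = (4*pi*A/lambda^2) * e_A for aperture area A = a_E * a_H."""
    G = 4 * math.pi * a_E * a_H / wavelength**2 * e_A
    return G, 10 * math.log10(G)  # linear gain and dBi

wl = 0.03                                   # 10 GHz -> wavelength of about 3 cm
aE, aH = optimum_pyramidal_apertures(wl, 0.27, 0.27)
G, G_dBi = pyramidal_horn_gain(aE, aH, wl)
print(f"a_E = {aE:.3f} m, a_H = {aH:.3f} m, gain = {G:.0f} ({G_dBi:.1f} dBi)")
```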
Horn-reflector antenna:
A type of antenna that combines a horn with a parabolic reflector is known as a Hogg-horn, or horn-reflector antenna, invented by Alfred C. Beck and Harald T. Friis in 1941 and further developed by David C. Hogg at Bell labs in 1961. It is also referred to as the "sugar scoop" due to its characteristic shape. It consists of a horn antenna with a reflector mounted in the mouth of the horn at a 45 degree angle so the radiated beam is at right angles to the horn axis. The reflector is a segment of a parabolic reflector, and the focus of the reflector is at the apex of the horn, so the device is equivalent to a parabolic antenna fed off-axis. The advantage of this design over a standard parabolic antenna is that the horn shields the antenna from radiation coming from angles outside the main beam axis, so its radiation pattern has very small sidelobes. Also, the aperture isn't partially obstructed by the feed and its supports, as with ordinary front-fed parabolic dishes, allowing it to achieve aperture efficiencies of 70% as opposed to 55–60% for front-fed dishes. The disadvantage is that it is far larger and heavier for a given aperture area than a parabolic dish, and must be mounted on a cumbersome turntable to be fully steerable. This design was used for a few radio telescopes and communication satellite ground antennas during the 1960s. Its largest use, however, was as fixed antennas for microwave relay links in the AT&T Long Lines microwave network. Since the 1970s this design has been superseded by shrouded parabolic dish antennas, which can achieve equally good sidelobe performance with a lighter more compact construction. Probably the most photographed and well-known example is the 15-meter-long (50-foot) Holmdel Horn Antenna at Bell Labs in Holmdel, New Jersey, with which Arno Penzias and Robert Wilson discovered cosmic microwave background radiation in 1965, for which they won the 1978 Nobel Prize in Physics. Another more recent horn-reflector design is the cass-horn, which is a combination of a horn with a cassegrain parabolic antenna using two reflectors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mehler reaction**
Mehler reaction:
The Mehler reaction is named after Alan H. Mehler, who, in 1951, presented data showing that isolated chloroplasts reduce oxygen to form hydrogen peroxide (H2O2). Mehler observed that the H2O2 formed in this way is not an active intermediate in photosynthesis; rather, as a reactive oxygen species, it can be toxic to surrounding biological processes as an oxidizing agent. In the scientific literature, the Mehler reaction is often used interchangeably with the Water-Water Cycle to refer to the formation of H2O2 by photosynthesis. Sensu stricto, the Water-Water Cycle encompasses the Hill reaction, in which water is split to form oxygen, as well as the Mehler reaction, in which oxygen is reduced to form H2O2, and, finally, the scavenging of this H2O2 by antioxidants to form water. Beginning in the 1970s, Professor Kozi Asada elucidated that oxygen can be reduced by electrons emerging from ferredoxin of photosystem I to form superoxide, which is then converted by superoxide dismutase to H2O2. This photochemical H2O2 is then reduced by the action of ascorbate peroxidase to form water and oxidized ascorbate. Asada argued that oxygen represents an important sink for excess excitation energy acquired during plant exposure to bright light. He would often begin seminars by asking: 'Why aren't plants sunburnt despite being exposed to light?'. How much of a photoprotective role the Water-Water Cycle plays has been the occasion for some debate. In terrestrial plants, transfer of electrons to oxygen from ferredoxin at PSI accounts for easily less than 10% of total photosynthetic electron transport. In algae and other unicellular photosynthetic organisms, however, this amount can account for 20 to 30% of total electron transport. It is possible that the reduction of oxygen by free electrons emerging from PSI prevents components of the electron transport chain from becoming over-reduced. The Water-Water Cycle is not related to photorespiration, as it comprises different reactions and results in no net oxygen consumption. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Quinuclidone**
Quinuclidone:
Quinuclidones are a class of bicyclic organic compounds with chemical formula C7H11NO, with two structural isomers of the base skeleton: 3-quinuclidone and 2-quinuclidone.
Quinuclidone:
3-Quinuclidone (1-azabicyclo[2.2.2]octan-3-one) is an uneventful molecule that can be synthesized as the hydrochloric acid salt by a Dieckmann condensation. The other isomer, 2-quinuclidone, appears equally uneventful, but in fact it defied synthesis until 2006. The reason is that this molecule is very unstable because, as a result of steric strain, its amide group does not have the amine lone pair and the carbonyl group properly aligned, as would be expected for an amide. This behaviour is predicted by Bredt's rule, and the formal amide group in fact resembles an amine, as evidenced by the ease of salt formation.
Quinuclidone:
The organic synthesis of the tetrafluoroborate salt of 2-quinuclidone is a six-step affair starting from norcamphor, the final step being an azide–ketone Schmidt reaction (38% yield). This compound rapidly reacts with water to give the corresponding amino acid, with a chemical half-life of 15 seconds. X-ray diffraction shows pyramidalization at the nitrogen atom (59°, compared to 0° for reference dimethylformamide) and torsion around the carbon–nitrogen bond to an extent of 91°. Attempts to prepare the free base lead to uncontrolled polymerization.
Quinuclidone:
It is, nevertheless, possible to estimate its basicity in an experiment in which amine pairs (the quinuclidonium salt and a reference amine such as diethylamine or indoline) are introduced into a mass spectrometer. The relative basicity is then revealed by collision-induced dissociation of the heterodimer. Further analysis via the extended kinetic method allows for the determination of the proton affinity and gas phase basicity of 2-quinuclidonium. This method has determined that quinuclidone ranks among secondary and tertiary amines in terms of proton affinity. This high basicity is hypothesized to be due to the loss of electron delocalization when the amide bond is twisted—this causes misalignment of the pi orbitals, resulting in loss of electron resonance. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fuchsine**
Fuchsine:
Fuchsine (sometimes spelled fuchsin) or rosaniline hydrochloride is a magenta dye with chemical formula C20H19N3·HCl. There are other similar chemical formulations of products sold as fuchsine, and several dozen other synonyms of this molecule. It becomes magenta when dissolved in water; as a solid, it forms dark green crystals. As well as dyeing textiles, fuchsine is used to stain bacteria and sometimes as a disinfectant. In the literature of biological stains the name of this dye is frequently misspelled, with omission of the terminal -e, which indicates an amine. American and English dictionaries (Webster's, Oxford, Chambers, etc.) give the correct spelling, which is also used in the literature of industrial dyeing. It is well established that production of fuchsine results in the development of bladder cancers in production workers. Production of magenta is listed as a circumstance known to result in cancer.
History:
Fuchsine was first created by Jakub Natanson in 1856 from aniline and 1,2-dichloroethane. In 1858, August Wilhelm von Hofmann obtained it from aniline and carbon tetrachloride. François-Emmanuel Verguin discovered the substance independently of Hofmann the same year and patented it. Fuchsine was named by its original manufacturer, Renard frères et Franc; the name is usually cited with one of two etymologies: from the color of the flowers of the plant genus Fuchsia, named in honor of botanist Leonhart Fuchs, or as the German translation Fuchs of the French name Renard, which means fox. An 1861 article in Répertoire de Pharmacie said that the name was chosen for both reasons.
Acid fuchsine:
Acid fuchsine is a mixture of homologues of basic fuchsine, modified by addition of sulfonic groups. While this yields twelve possible isomers, all of them are satisfactory despite slight differences in their properties.
Basic fuchsine:
Basic fuchsine is a mixture of rosaniline, pararosaniline, new fuchsine and Magenta II. Formulations usable for making of Schiff reagent must have high content of pararosanilin. The actual composition of basic fuchsine tends to somewhat vary by vendor and batch, making the batches differently suitable for different purposes.
In solution with phenol (also called carbolic acid) as an accentuator it is called carbol fuchsin and is used for the Ziehl–Neelsen and other similar acid-fast staining of the mycobacteria which cause tuberculosis, leprosy etc. Basic fuchsine is widely used in biology to stain the nucleus, and is also a component of Lactofuchsin, used for Lactofuchsin mounting.
Properties:
Crystals of basic fuchsine, also known as basic violet 14, basic red 9, pararosaniline or CI 42500, differ from the fuchsine structure described above by the absence of the methyl group on the upper ring; otherwise the two are quite similar.
They are soft, with a hardness of less than 1, about the same as or less than talc. They possess a strong metallic lustre and a greenish-yellow color. They leave dark greenish streaks on paper, and when these are moistened with a solvent, the strong magenta color appears.
Chemical structure:
Fuchsine is an amine salt and has three amine groups, two primary amines and a secondary amine. If one of these is protonated to form ABCNH+, the positive charge is delocalized across the whole symmetrical molecule due to pi cloud electron movement.
Chemical structure:
The positive charge can be thought of as residing on the central carbon atom and all three "wings" becoming identical aromatic rings terminated by a primary amine group. Other resonance structures can be conceived, where the positive charge "moves" from one amine group to the next, or one third of the positive charge resides on each amine group. The ability of fuchsine to be protonated by a stronger acid gives it its basic property. The positive charge is neutralized by the negative charge on the chloride ion. The positive "basic fuchsinium ions" and negative chloride ions stack to form the salt "crystals" depicted above. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flip chart**
Flip chart:
A flip chart is a stationery item consisting of a pad of large paper sheets. It is typically fixed to the upper edge of a whiteboard, or supported on a tripod or four-legged easel. Such charts are commonly used for presentations.
Forms:
Although most commonly supported on a tripod, flip charts come in various forms. Some of these are: stand-alone flip chart: resembles a big isosceles triangle box that usually sits on a table. Imagine a book that you would open at a 270° angle and then lay on a table. The paper is flipped from one side of the top of the triangle box to the other.
Forms:
metallic tripod (or easel) stand: usually has 3 or 4 metallic legs that are linked together at one extremity. A support board is attached to two of these legs to support the large paper pad. This is the most common type of flip chart stand.
metallic mount on wheels: usually has a flat base to support the paper pad and is mounted on one or two legs that then have a set of wheels. The advantage of these more recent forms of stands is that it is easier to transport the flip chart from one location to another.
Usage:
Text is usually hand written with marker pens and may include figures or charts. A sheet can be flipped over by the presenter to continue to a new page.
Some flip charts may have a reduced version of the page that faces the audience printed on the back of the preceding page, making it possible for the presenter to see the same thing the audience is seeing. Others have teaching notes printed on the back.
Usage:
Flip charts are used in many different settings, such as: in any type of presentation where the paper pads are pre-filled with information on a given topic; for capturing information in meetings and brainstorming sessions; in classrooms and teaching institutions of any kind; to record relevant information in manufacturing plants; as a creative drawing board for art students; as a palette for artists in “life-drawing” classes; for strategy coaching for sports teams; and for teaching. A variety of paper sizes are used, from the floor-standing through to the smaller table-top versions, subject to the country's adopted paper sizes. These include A1, B1, and 25" x 30" through to 20" x 23".
History:
The earliest known patent of a flipchart is from May 8, 1913. Flip charts have been in use since the 1900s; the earliest recorded use of a flip chart is a 1912 photo of John Henry Patterson (1844-1922), NCR's CEO, addressing the 100 Point Club while standing next to a pair of flip charts on casters. The flipchart we know (on a small whiteboard) was invented by Peter Kent in the 1970s. Peter Kent was the founder and CEO of the visual communications group Nobo plc, and it is believed that they were the first company to put the large pieces of paper over whiteboards, rather than over other materials.
Digital:
Recently, scientists have developed a digital self-writing flip chart which writes word for word everything it is instructed to record. The disability action group "Armless" has stated that this is a significant step forward in enabling disability groups to hold conferences like people without disabilities. Also available are flipchart stands that are self-heightening. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nano-Structures & Nano-Objects**
Nano-Structures & Nano-Objects:
Nano-Structures & Nano-Objects is an interdisciplinary peer-reviewed scientific journal devoted to all aspects of the synthesis and properties of nanostructures and nano-objects. The journal focuses on novel architectures at the nanolevel, with an emphasis on new synthesis and characterization methods, and on the objects themselves rather than on their applications. However, various novel applications (nano-electronics, energy conversion, catalysis, drug delivery and nano-medicine) using nanostructures and nano-objects are considered in this journal.
Nano-Structures & Nano-Objects:
The journal is published by Elsevier and publishes four volumes per year.
Editor-in-Chief:
Sabu Thomas is the current Editor-in-Chief of Nano-Structures & Nano-Objects.
Indexing:
The journal is indexed in Scopus, INSPEC, and PubMed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sega Swirl**
Sega Swirl:
Sega Swirl is a puzzle game that was created for the Dreamcast, personal computers, and Palm OS. The game was included in various demo discs released for the Dreamcast (through the Official Dreamcast Magazine (UK) and Official Dreamcast Magazine (US) magazines and on newly released consoles), and is free to download and play on the PC.
Sega Swirl was created by Scott Hawkins while he worked at Sega. Hawkins designed the game and programmed the original PC version, and worked with Tremor Entertainment to develop the Dreamcast version.
Sega Swirl:
The game presented swirls of different colors stacked upon each other. The player would try to match up as many of the same-colored swirls onscreen as possible, then, when satisfied with a combo, they would press the color, making those swirls disappear. The more swirls one can gather together, the more points earned, as well as a reward of seeing the swirls disappear in different ways. The most rewarding way to see the swirls disappear is when they all go into the air and burst with firework-like sounds and cheers. If a swirl of a certain color is alone within a stack of other colored swirls, the player actually loses points.
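The core matching mechanic, gathering adjacent same-colored swirls, is essentially a flood fill over a grid. The Python sketch below is purely illustrative; it is not Sega Swirl's actual code, and the lone-swirl penalty at the end is an assumption based only on the description above.

```python
# Illustrative flood fill: find the group of same-colored swirls connected to a chosen cell.
from collections import deque

def connected_group(grid, row, col):
    """Return the set of cells orthogonally connected to (row, col) with the same color."""
    color = grid[row][col]
    seen = {(row, col)}
    queue = deque([(row, col)])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and (nr, nc) not in seen and grid[nr][nc] == color:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

grid = [list("RRGB"), list("RGGB"), list("BBGB")]
group = connected_group(grid, 0, 0)               # the three connected 'R' swirls
score = len(group) if len(group) > 1 else -1      # assumed scoring: lone swirls cost points
print(sorted(group), score)
```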
Sega Swirl:
The Dreamcast version featured a snake in the bottom right corner of the screen, who would act pleased when the player did well and shook his head when they did poorly. If the player did nothing for an extended length of time, the snake would stare at them and then gesture to the left, towards the play field.
Sega Swirl:
On the Dreamcast, it could be played in Versus mode (players compete on one swirl screen) with up to four players, in an email mode (using the Dreamcast modem), and in split screen (four players with their own swirl play fields). On the PC, split screen is not available, and versus is limited to two players. Both versions allowed one to compete with another human player via email (Dreamcast players may also play against PC players through this).
Sega Swirl:
The Palm version of Sega Swirl includes a two player head-to-head mode that can be played in real-time over the handheld's infrared port. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Low-energy plasma-enhanced chemical vapor deposition**
Low-energy plasma-enhanced chemical vapor deposition:
Low-energy plasma-enhanced chemical vapor deposition (LEPECVD) is a plasma-enhanced chemical vapor deposition technique used for the epitaxial deposition of thin semiconductor (silicon, germanium and SiGe alloys) films. A remote low energy, high density DC argon plasma is employed to efficiently decompose the gas phase precursors while leaving the epitaxial layer undamaged, resulting in high quality epilayers and high deposition rates (up to 10 nm/s).
Working principle:
The substrate (typically a silicon wafer) is inserted in the reactor chamber, where it is heated by a graphite resistive heater from the backside. An argon plasma is introduced into the chamber to ionize the precursor molecules, generating highly reactive radicals which result in the growth of an epilayer on the substrate. Moreover, the bombardment by Ar ions removes the hydrogen atoms adsorbed on the surface of the substrate while introducing no structural damage.
Working principle:
The high reactivity of the radicals and the removal of hydrogen from the surface by ion bombardment prevent the typical problems of Si, Ge and SiGe alloy growth by thermal chemical vapor deposition (CVD), which are: dependence of the growth rate on the substrate temperature, due to the thermal energy needed for precursor decomposition and hydrogen desorption from the substrate; the high temperatures (>1000 °C for silicon) required to obtain a significant growth rate, which is strongly limited by the aforementioned effects; and a strong dependence of the deposition rate on the SiGe alloy composition, due to the large difference between the hydrogen desorption rates from Si and Ge surfaces. As a result, the growth rate in a LEPECVD reactor depends only on the plasma parameters and the gas fluxes, and it is possible to obtain epitaxial deposition at much lower temperatures compared to a standard CVD tool.
LEPECVD reactor:
The LEPECVD reactor is divided into three main parts: a loadlock, used to load the substrates into the chamber without breaking the vacuum; the main chamber, which is kept in ultra-high vacuum (UHV) at a base pressure of ~10⁻⁹ mbar; and the plasma source, where the plasma is generated. The substrate is placed at the top of the chamber, facing down toward the plasma source. Heating is provided from the back side by thermal radiation from a resistive graphite heater encapsulated between two boron nitride discs, which improve the temperature uniformity across the heater. Thermocouples are used to measure the temperature above the heater, which is then correlated to that of the substrate by a calibration done with an infrared pyrometer. Typical substrate temperatures for monocrystalline films range from 400 °C for germanium to 760 °C for silicon.
LEPECVD reactor:
The potential of the wafer stage can be controlled by an external power supply, influencing the amount and the energy of radicals impinging on the surface, and is typically kept at 10-15 V with respect to the chamber walls.
The process gases are introduced into the chamber through a gas dispersal ring placed below the wafer stage. The gases used in a LEPECVD reactor are silane (SiH4) and germane (GeH4) for silicon and germanium deposition respectively, together with diborane (B2H6) and phosphine (PH3) for p- and n-type doping.
Plasma source The plasma source is the most critical component of a LEPECVD reactor, as the low-energy, high-density plasma is the key difference from a typical PECVD deposition system.
LEPECVD reactor:
The plasma is generated in a source attached to the bottom of the chamber. Argon is fed directly into the source, where tantalum filaments are heated to create an electron-rich environment by thermionic emission. The plasma is then ignited by a DC discharge from the heated filaments to the grounded walls of the source. Thanks to the high electron density in the source, the voltage required to obtain a discharge is around 20-30 V, resulting in an ion energy of about 10-20 eV, while the discharge current is of the order of several tens of amperes, giving a high ion density.
LEPECVD reactor:
The DC discharge current can be tuned to control the ion density, thus changing the growth rate: in particular at a larger discharge current the ion density is higher, therefore increasing the rate.
Plasma confinement The plasma enters the growth chamber through an anode electrically connected to the grounded chamber walls, which is used to focus and stabilize the discharge and the plasma.
Further focusing is provided by a magnetic field directed along the chamber's axis, provided by external copper coils wrapped around the chamber. The current flowing through the coils (i.e. the intensity of the magnetic field) can be controlled to change the ion density at the substrate's surface, thus changing the growth rate.
Additional coils ("wobblers") are placed around the chamber, with their axis perpendicular to the magnetic field, to continuously sweep the plasma over the substrate, improving the homogeneity of the deposited film.
Applications:
Thanks to the possibility of changing the growth rate (through the plasma density or gas fluxes) independently from the substrate temperature, both thin films with sharp interfaces and nanometer-scale precision at rates as low as 0.4 nm/s, and thick layers (up to 10 µm or more) at rates as high as 10 nm/s, can be grown in the same reactor and in the same deposition process. This has been exploited to grow low-loss composition-graded waveguides for NIR and MIR and integrated nanostructures (i.e. quantum well stacks) for NIR optical amplitude modulation. The capability of LEPECVD to grow very sharp quantum wells on thick buffers in the same deposition step has also been employed to realize high-mobility strained Ge channels. Another promising application of the LEPECVD technique is the possibility of growing high aspect ratio, self-assembled silicon and germanium microcrystals on deeply patterned Si substrates. This solves many problems related to heteroepitaxy (i.e. thermal expansion coefficient and crystal lattice mismatch), leading to very high crystal quality, and is possible thanks to the high rates and low temperatures found in a LEPECVD reactor.
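To give a rough sense of the time scales these rates imply, the sketch below converts the figures quoted above (0.4 nm/s for precision layers, 10 nm/s for thick buffers) into deposition times. The function and variable names are illustrative only and are not part of any LEPECVD control software.

```python
# Rough deposition-time estimates from the growth rates quoted above
# (0.4 nm/s for nanometer-precision layers, 10 nm/s for thick buffers).
# Names and layer choices are illustrative assumptions, not reactor settings.

def deposition_time_s(thickness_nm: float, rate_nm_per_s: float) -> float:
    """Time needed to grow a layer of the given thickness at a constant rate."""
    return thickness_nm / rate_nm_per_s

quantum_well = deposition_time_s(10, 0.4)        # a 10 nm layer at the low rate
thick_buffer = deposition_time_s(10_000, 10.0)   # a 10 um buffer at the high rate

print(f"10 nm layer at 0.4 nm/s : {quantum_well:.0f} s")          # 25 s
print(f"10 um buffer at 10 nm/s : {thick_buffer / 60:.0f} min")   # ~17 min
```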
**Heavy water**
Heavy water:
Heavy water (deuterium oxide, 2H2O, D2O) is a form of water whose hydrogen atoms are all deuterium (2H or D, also known as heavy hydrogen) rather than the common hydrogen-1 isotope (1H or H, also called protium) that makes up most of the hydrogen in normal water. The presence of the heavier hydrogen isotope gives the water different nuclear properties, and the increase in mass gives it slightly different physical and chemical properties when compared to normal water.
Heavy water:
Deuterium is a heavy hydrogen isotope. Heavy water contains deuterium atoms and is used in nuclear reactors. Semiheavy water (HDO) is more common than pure heavy water, while heavy-oxygen water is denser than ordinary water but lacks deuterium's distinctive nuclear properties. Tritiated water is radioactive due to its tritium content.
Heavy water:
Heavy water (D2O) has different physical properties than regular water, such as being 10.6% denser and having a higher melting point. Heavy water is less dissociated at a given temperature, and it does not have the blue color of regular water. While it has no significant taste difference, it can taste slightly sweet. Heavy water affects biological systems by altering enzymes, hydrogen bonds, and cell division in eukaryotes. It can be lethal to multicellular organisms at concentrations over 50%. However, some prokaryotes like bacteria can survive in a heavy hydrogen environment. Heavy water can be toxic to humans, but a large amount would be needed for poisoning to occur.
Heavy water:
Deuterated water (HDO) occurs naturally in normal water and can be separated through distillation, electrolysis, or chemical exchange processes. The most cost-effective process for producing heavy water is the Girdler sulfide process. Heavy water is used in various industries and is sold in different grades of purity. Some of its applications include nuclear magnetic resonance, infrared spectroscopy, neutron moderation, neutrino detection, metabolic rate testing, neutron capture therapy, and the production of radioactive materials such as plutonium and tritium.
Composition:
Deuterium is a hydrogen isotope with a nucleus containing a neutron and a proton; the nucleus of a protium (normal hydrogen) atom consists of just a proton. The additional neutron makes a deuterium atom roughly twice as heavy as a protium atom.
Composition:
A molecule of heavy water has two deuterium atoms in place of the two protium atoms of ordinary "light" water. The term heavy water as defined by the IUPAC Gold Book can also refer to water in which a higher than usual proportion of hydrogen atoms are deuterium rather than protium. For comparison, ordinary water (the "ordinary water" used for a deuterium standard) contains only about 156 deuterium atoms per million hydrogen atoms, meaning that 0.0156% of the hydrogen atoms are of the heavy type. Thus heavy water as defined by the Gold Book includes hydrogen-deuterium oxide (HDO) and other mixtures of D2O, H2O, and HDO in which the proportion of deuterium is greater than usual. For instance, the heavy water used in CANDU reactors is a highly enriched water mixture that contains mostly deuterium oxide D2O, but also some hydrogen-deuterium oxide and a smaller amount of ordinary hydrogen oxide H2O. It is 99.75% enriched by hydrogen atom-fraction—meaning that 99.75% of the hydrogen atoms are of the heavy type; however, heavy water in the Gold Book sense need not be so highly enriched. The weight of a heavy water molecule, however, is not substantially different from that of a normal water molecule, because about 89% of the molecular weight of water comes from the single oxygen atom rather than the two hydrogen atoms. Heavy water is not radioactive. In its pure form, it has a density about 11% greater than water, but is otherwise physically and chemically similar. Nevertheless, the various differences in deuterium-containing water (especially affecting the biological properties) are larger than in any other commonly occurring isotope-substituted compound because deuterium is unique among heavy stable isotopes in being twice as heavy as the lightest isotope. This difference increases the strength of water's hydrogen–oxygen bonds, and this in turn is enough to cause differences that are important to some biochemical reactions. The human body naturally contains deuterium equivalent to about five grams of heavy water, which is harmless. When a large fraction of water (> 50%) in higher organisms is replaced by heavy water, the result is cell dysfunction and death. Heavy water was first produced in 1932, a few months after the discovery of deuterium. With the discovery of nuclear fission in late 1938, and the need for a neutron moderator that captured few neutrons, heavy water became a component of early nuclear energy research. Since then, heavy water has been an essential component in some types of reactors, both those that generate power and those designed to produce isotopes for nuclear weapons. These heavy water reactors have the advantage of being able to run on natural uranium without using graphite moderators that pose radiological and dust explosion hazards in the decommissioning phase. The graphite moderated Soviet RBMK design tried to avoid using either enriched uranium or heavy water (being cooled with ordinary "light" water instead) which produced the positive void coefficient that was one of a series of flaws in reactor design leading to the Chernobyl disaster. Most modern reactors use enriched uranium with ordinary water as the moderator.
Other heavy forms of water:
Semiheavy water Semiheavy water, HDO, exists whenever there is water with light hydrogen (protium, 1H) and deuterium (D or 2H) in the mix. This is because hydrogen atoms (hydrogen-1 and deuterium) are rapidly exchanged between water molecules. Water containing 50% H and 50% D in its hydrogen actually contains about 50% HDO and 25% each of H2O and D2O, in dynamic equilibrium.
Other heavy forms of water:
In normal water, about 1 molecule in 3,200 is HDO (one hydrogen in 6,400 is in the form of D), and heavy water molecules (D2O) only occur in a proportion of about 1 molecule in 41 million (i.e. one in 6,400²). Thus semiheavy water molecules are far more common than "pure" (homoisotopic) heavy water molecules.
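These proportions follow from treating the two hydrogen sites of each water molecule as statistically independent. A short Python check of the arithmetic is sketched below; the variable names are illustrative.

```python
# Check of the isotopic statistics quoted above, assuming hydrogen atoms are
# distributed independently over the two hydrogen sites of each water molecule.

d_fraction = 1 / 6400                 # about one hydrogen atom in 6,400 is deuterium
h_fraction = 1 - d_fraction

hdo_fraction = 2 * h_fraction * d_fraction   # H-D or D-H arrangement
d2o_fraction = d_fraction ** 2               # both sites occupied by deuterium

print(f"HDO: about 1 molecule in {1 / hdo_fraction:,.0f}")   # ~1 in 3,200
print(f"D2O: about 1 molecule in {1 / d2o_fraction:,.0f}")   # ~1 in 41 million

# The 50/50 mixture discussed above: roughly 25% H2O, 50% HDO, 25% D2O.
x = 0.5
print(f"50% D mix -> H2O {(1 - x)**2:.0%}, HDO {2*x*(1 - x):.0%}, D2O {x**2:.0%}")
```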
Other heavy forms of water:
Heavy-oxygen water Water enriched in the heavier oxygen isotopes 17O and 18O is also commercially available. It is "heavy water" as it is denser than normal water (H218O is approximately as dense as D2O, H217O is about halfway between H2O and D2O)—but is rarely called heavy water, since it does not contain the deuterium that gives D2O its unusual nuclear and biological properties. It is more expensive than D2O due to the more difficult separation of 17O and 18O. H218O is also used for production of fluorine-18 in radiopharmaceuticals and radiotracers, and positron emission tomography. Small amounts of 17O and 18O are naturally present in water, and most processes enriching heavy water also enrich heavier isotopes of oxygen as a side-effect. This is undesirable if the heavy water is to be used as a neutron moderator in nuclear reactors, as 17O can undergo neutron capture, followed by emission of an alpha particle, producing radioactive 14C. However, doubly labeled water, containing both a heavy oxygen and hydrogen, is useful as a non-radioactive isotopic tracer.
Other heavy forms of water:
Compared to the isotopic change of hydrogen atoms, the isotopic change of oxygen has a smaller effect on the physical properties.
Tritiated water Tritiated water contains tritium (3H) in place of protium (1H) or deuterium (2H), and, as tritium itself is radioactive, tritiated water is also radioactive.
Physical properties:
The physical properties of water and heavy water differ in several respects. Heavy water is less dissociated than light water at a given temperature, and the true concentration of D+ ions is less than H+ ions would be for a light water sample at the same temperature. The same is true of OD− vs. OH− ions. For heavy water Kw D2O (25.0 °C) = 1.35 × 10⁻¹⁵, and [D+] must equal [OD−] for neutral water. Thus pKw D2O = p[OD−] + p[D+] = 7.44 + 7.44 = 14.87 (25.0 °C), and the p[D+] of neutral heavy water at 25.0 °C is 7.44.
Physical properties:
The pD of heavy water is generally measured using pH electrodes, giving a pH (apparent) value, or pHa, and at various temperatures a true acidic pD can be estimated from the pHa measured directly with the pH meter, such that pD+ = pHa (apparent reading from the pH meter) + 0.41. The electrode correction for alkaline conditions is 0.456 for heavy water, so the alkaline correction is pD+ = pHa (apparent reading from the pH meter) + 0.456. These corrections differ slightly from 0.44, the difference between the p[D+] and p[OD−] values of neutral heavy water and the corresponding values of neutral light water. Heavy water is 10.6% denser than ordinary water, and heavy water's physically different properties can be seen without equipment if a frozen sample is dropped into normal water, as it will sink. If the water is ice-cold, the higher melting temperature of heavy ice can also be observed: it melts at 3.7 °C and thus does not melt in ice-cold normal water. A 1935 experiment reported not the "slightest difference" in taste between ordinary and heavy water. However, a more recent study appears to confirm the anecdotal observation that heavy water tastes slightly sweet to humans, with the effect mediated by the TAS1R2/TAS1R3 taste receptor. Rats given a choice between distilled normal water and heavy water were able to avoid the heavy water based on smell, and it may have a different taste. Some people report that minerals in water affect taste, e.g. potassium lending a sweet taste to hard water, but there are many factors affecting the perceived taste of water besides mineral content. Heavy water lacks the characteristic blue color of light water; this is because the molecular vibration harmonics, which in light water cause weak absorption in the red part of the visible spectrum, are shifted into the infrared, and thus heavy water does not absorb red light. No physical properties are listed for "pure" semi-heavy water, because it is unstable as a bulk liquid. In the liquid state, a few water molecules are always in an ionised state, which means the hydrogen atoms can exchange among different oxygen atoms. Semi-heavy water could, in theory, be created via a chemical method, but it would rapidly transform into a dynamic mixture of 25% light water, 25% heavy water, and 50% semi-heavy water. However, if it were made in the gas phase and directly deposited into a solid, semi-heavy water in the form of ice could be stable. This is because collisions between water vapor molecules are almost completely negligible in the gas phase at standard temperatures, and once crystallized, collisions between the molecules cease altogether due to the rigid lattice structure of solid ice.
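The dissociation and electrode-correction figures above can be restated as a short calculation. The constants in the sketch below are the ones quoted in this section; the function name and the example meter reading are illustrative.

```python
import math

# Ion-product and pD arithmetic using the constants quoted above for heavy water.
Kw_D2O_25C = 1.35e-15                      # ion product of D2O at 25.0 degrees C
pKw = -math.log10(Kw_D2O_25C)              # ~14.87
pD_neutral = pKw / 2                       # [D+] = [OD-] in neutral heavy water
print(f"pKw = {pKw:.2f}, neutral pD = {pD_neutral:.2f}")   # ~14.87 and ~7.4

def estimate_pD(pHa: float, alkaline: bool = False) -> float:
    """Apply the glass-electrode corrections quoted above (+0.41 acidic, +0.456 alkaline)."""
    return pHa + (0.456 if alkaline else 0.41)

print(estimate_pD(7.03))   # an apparent meter reading of 7.03 suggests a pD of about 7.44
```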
History:
The US scientist and Nobel laureate Harold Urey discovered the isotope deuterium in 1931 and was later able to concentrate it in water. Urey's mentor Gilbert Newton Lewis isolated the first sample of pure heavy water by electrolysis in 1933. George de Hevesy and Erich Hofer used heavy water in 1934 in one of the first biological tracer experiments, to estimate the rate of turnover of water in the human body. The history of large-quantity production and use of heavy water, in early nuclear experiments, is described below.Emilian Bratu and Otto Redlich studied the autodissociation of heavy water in 1934.
Effect on biological systems:
Different isotopes of chemical elements have slightly different chemical behaviors, but for most elements the differences are far too small to have a biological effect. In the case of hydrogen, larger differences in chemical properties among protium (light hydrogen), deuterium, and tritium occur, because chemical bond energy depends on the reduced mass of the nucleus–electron system; this is altered in heavy-hydrogen compounds (hydrogen-deuterium oxide is the most common species) more than for heavy-isotope substitution involving other chemical elements. The isotope effects are especially relevant in biological systems, which are very sensitive to even the smaller changes, due to isotopically influenced properties of water when it acts as a solvent.
Effect on biological systems:
To perform their tasks, enzymes rely on their finely tuned networks of hydrogen bonds, both in the active center with their substrates, and outside the active center, to stabilize their tertiary structures. As a hydrogen bond with deuterium is slightly stronger than one involving ordinary hydrogen, in a highly deuterated environment, some normal reactions in cells are disrupted.
Effect on biological systems:
Particularly hard-hit by heavy water are the delicate assemblies of mitotic spindle formations necessary for cell division in eukaryotes. Plants stop growing and seeds do not germinate when given only heavy water, because heavy water stops eukaryotic cell division. Cells grown in heavy water are larger, and the direction of their division is altered; the cell membrane also changes, and it is the first structure to react to the impact of heavy water. In 1972, it was demonstrated that an increase in the percentage content of deuterium in water reduces plant growth. Research conducted on the growth of prokaryote microorganisms in artificial conditions of a heavy hydrogen environment showed that in this environment, all the hydrogen atoms of water could be replaced with deuterium. Experiments showed that bacteria can live in 98% heavy water. Concentrations over 50% are lethal to multicellular organisms; however, a few exceptions are known, such as switchgrass (Panicum virgatum), which is able to grow on 50% D2O; the plant Arabidopsis thaliana (70% D2O); the plant Vesicularia dubyana (85% D2O); the plant Funaria hygrometrica (90% D2O); and the anhydrobiotic species of nematode Panagrolaimus superbus (nearly 100% D2O). A comprehensive study of heavy water on the fission yeast Schizosaccharomyces pombe showed that the cells displayed an altered glucose metabolism and slow growth at high concentrations of heavy water. In addition, the cells activated the heat-shock response pathway and the cell integrity pathway, and mutants in the cell integrity pathway displayed increased tolerance to heavy water. Heavy water affects the period of circadian oscillations, consistently increasing the length of each cycle. The effect has been demonstrated in unicellular organisms, green plants, isopods, insects, birds, mice, and hamsters; the mechanism is unknown. Despite its toxicity at high levels, heavy water has also been observed to extend the lifespan of certain yeasts by up to 85%, with the hypothesized mechanism being the reduction of reactive oxygen species turnover.
Effect on biological systems:
Effect on animals Experiments with mice, rats, and dogs have shown that a degree of 25% deuteration causes (sometimes irreversible) sterility, because neither gametes nor zygotes can develop. High concentrations of heavy water (90%) rapidly kill fish, tadpoles, flatworms, and Drosophila. The only known exception is the anhydrobiotic nematode Panagrolaimus superbus, which is able to survive and reproduce in 99.9% D2O. Mammals (for example, rats) given heavy water to drink die after a week, at a time when their body water approaches about 50% deuteration. The mode of death appears to be the same as that in cytotoxic poisoning (such as chemotherapy) or in acute radiation syndrome (though deuterium is not radioactive), and is due to deuterium's action in generally inhibiting cell division. It is more toxic to malignant cells than normal cells, but the concentrations needed are too high for regular use. As may occur in chemotherapy, deuterium-poisoned mammals die of a failure of bone marrow (producing bleeding and infections) and of intestinal-barrier functions (producing diarrhea and loss of fluids).
Effect on biological systems:
Despite the problems of plants and animals in living with too much deuterium, prokaryotic organisms such as bacteria, which do not have the mitotic problems induced by deuterium, may be grown and propagated in fully deuterated conditions, resulting in replacement of all hydrogen atoms in the bacterial proteins and DNA with the deuterium isotope. In higher organisms, full replacement with heavy isotopes can be accomplished with other non-radioactive heavy isotopes (such as carbon-13, nitrogen-15, and oxygen-18), but this cannot be done for deuterium. This is a consequence of the ratio of nuclear masses between the isotopes of hydrogen, which is much greater than for any other element. Deuterium oxide is used to enhance boron neutron capture therapy, but this effect does not rely on the biological or chemical effects of deuterium, but instead on deuterium's ability to moderate (slow) neutrons without capturing them. Recent experimental evidence indicates that systemic administration of deuterium oxide (30% drinking water supplementation) suppresses tumor growth in a standard mouse model of human melanoma, an effect attributed to selective induction of cellular stress signaling and gene expression in tumor cells.
Effect on biological systems:
Toxicity in humans Because it would take a very large amount of heavy water to replace 25% to 50% of a human being's body water (water being in turn 50–75% of body weight) with heavy water, accidental or intentional poisoning with heavy water is unlikely to the point of practical disregard. Poisoning would require that the victim ingest large amounts of heavy water without significant normal water intake for many days to produce any noticeable toxic effects.
Effect on biological systems:
Oral doses of heavy water in the range of several grams, as well as heavy oxygen 18O, are routinely used in human metabolic experiments. (See doubly labeled water testing.) Since one in about every 6,400 hydrogen atoms is deuterium, a 50-kilogram (110 lb) human containing 32 kilograms (71 lb) of body water would normally contain enough deuterium (about 1.1 grams or 0.039 ounces) to make 5.5 grams (0.19 oz) of pure heavy water, so roughly this dose is required to double the amount of deuterium in the body.
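The figure of roughly 5.5 grams quoted above can be reproduced with straightforward arithmetic. The sketch below redoes the calculation with the numbers given in this paragraph; the variable names are illustrative.

```python
# Reproduce the body-deuterium estimate quoted above: a 50 kg person with 32 kg
# of body water, with about one hydrogen atom in 6,400 being deuterium.

M_H2O, M_D2O, M_D = 18.015, 20.028, 2.014   # molar masses in g/mol

body_water_g = 32_000
moles_water = body_water_g / M_H2O
moles_h_sites = 2 * moles_water              # two hydrogen atoms per molecule
moles_d = moles_h_sites / 6400               # natural deuterium abundance (approx.)

deuterium_g = moles_d * M_D
heavy_water_g = (moles_d / 2) * M_D2O        # two deuterium atoms per D2O molecule

print(f"deuterium in the body: {deuterium_g:.1f} g")    # about 1.1 g
print(f"equivalent pure D2O  : {heavy_water_g:.1f} g")  # roughly 5.5 g (5.6 g here, to rounding)
```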
Effect on biological systems:
A loss of blood pressure may partially explain the reported incidence of dizziness upon ingestion of heavy water. However, it is more likely that this symptom can be attributed to altered vestibular function.
Effect on biological systems:
Heavy water radiation contamination confusion Although many people associate heavy water primarily with its use in nuclear reactors, pure heavy water is not radioactive. Commercial-grade heavy water is slightly radioactive due to the presence of minute traces of natural tritium, but the same is true of ordinary water. Heavy water that has been used as a coolant in nuclear power plants contains substantially more tritium as a result of neutron bombardment of the deuterium in the heavy water (tritium is a health risk when ingested in large quantities).
Effect on biological systems:
In 1990, a disgruntled employee at the Point Lepreau Nuclear Generating Station in Canada obtained a sample (estimated as about a "half cup") of heavy water from the primary heat transport loop of the nuclear reactor, and loaded it into a cafeteria drink dispenser. Eight employees drank some of the contaminated water. The incident was discovered when employees began leaving bioassay urine samples with elevated tritium levels. The quantity of heavy water involved was far below levels that could induce heavy water toxicity, but several employees received elevated radiation doses from tritium and neutron-activated chemicals in the water. This was not an incident of heavy water poisoning, but rather radiation poisoning from other isotopes in the heavy water.
Effect on biological systems:
Some news services were not careful to distinguish these points, and some of the public were left with the impression that heavy water is normally radioactive and more severely toxic than it actually is. Even if pure heavy water had been used in the water cooler indefinitely, it is not likely the incident would have been detected or caused harm, since no employee would be expected to get much more than 25% of their daily drinking water from such a source.
Production:
On Earth, deuterated water, HDO, occurs naturally in normal water at a proportion of about 1 molecule in 3,200. This means that 1 in 6,400 hydrogen atoms in water is deuterium, which is 1 part in 3,200 by weight (hydrogen weight). The HDO may be separated from normal water by distillation or electrolysis and also by various chemical exchange processes, all of which exploit a kinetic isotope effect, with the partial enrichment also occurring in natural bodies of water under particular evaporation conditions. (For more information about the isotopic distribution of deuterium in water, see Vienna Standard Mean Ocean Water.) In theory, deuterium for heavy water could be created in a nuclear reactor, but separation from ordinary water is the cheapest bulk production process.
Production:
The difference in mass between the two hydrogen isotopes translates into a difference in the zero-point energy and thus into a slight difference in the speed of the reaction. Once HDO becomes a significant fraction of the water, heavy water becomes more prevalent as water molecules trade hydrogen atoms very frequently. Production of pure heavy water by distillation or electrolysis requires a large cascade of stills or electrolysis chambers and consumes large amounts of power, so the chemical methods are generally preferred.
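To illustrate why a "large cascade" is required, the sketch below counts the stages of an idealized cascade in which each stage multiplies the D/H ratio by a fixed separation factor. The separation factors used (about 1.05 for water distillation and about 7 for electrolysis) are typical textbook ballpark values and are assumptions here, not figures taken from this article.

```python
import math

# Highly simplified ideal-cascade estimate: how many enrichment stages are needed
# to raise the D/H atom ratio from its natural value to reactor grade, if each
# stage multiplies the ratio by a fixed separation factor. The separation factors
# below are assumed textbook ballpark values, not figures from this article.

natural_ratio = 1 / 6400                  # D/H atom ratio in ordinary water
target_fraction = 0.9975                  # reactor-grade deuterium atom fraction
target_ratio = target_fraction / (1 - target_fraction)

def stages_needed(separation_factor: float) -> int:
    return math.ceil(math.log(target_ratio / natural_ratio) / math.log(separation_factor))

print("water distillation (alpha ~ 1.05):", stages_needed(1.05), "stages")   # hundreds
print("electrolysis       (alpha ~ 7)   :", stages_needed(7.0), "stages")    # a handful
```

Fewer stages do not make electrolysis cheap, since, as noted above, it consumes large amounts of power; hence the general preference for chemical exchange methods.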
Production:
The most cost-effective process for producing heavy water is the dual temperature exchange sulfide process (known as the Girdler sulfide process) developed in parallel by Karl-Hermann Geib and Jerome S. Spevack in 1943. An alternative process, patented by Graham M. Keyser, uses lasers to selectively dissociate deuterated hydrofluorocarbons to form deuterium fluoride, which can then be separated by physical means. Although the energy consumption for this process is much less than for the Girdler sulfide process, this method is currently uneconomical due to the expense of procuring the necessary hydrofluorocarbons.
Production:
As noted, modern commercial heavy water is almost universally referred to, and sold as, deuterium oxide. It is most often sold in various grades of purity, from 98% enrichment to 99.75–99.98% deuterium enrichment (nuclear reactor grade) and occasionally even higher isotopic purity.
Production:
Argentina Argentina was the main producer of heavy water, using an ammonia/hydrogen exchange based plant supplied by Switzerland's Sulzer company. It was also a major exporter to Canada, Germany, the US and other countries. The heavy water production facility located in Arroyito was the world's largest heavy water production facility. Argentina produced 200 short tons (180 tonnes) of heavy water per year in 2015 using the monothermal ammonia-hydrogen isotopic exchange method. Since 2017, the Arroyito plant has not been operational.
Production:
Soviet Union In October 1939, Soviet physicists Yakov Borisovich Zel'dovich and Yulii Borisovich Khariton concluded that heavy water and carbon were the only feasible moderators for a natural uranium reactor, and in August 1940, along with Georgy Flyorov, submitted a plan to the Russian Academy of Sciences calculating that 15 tons of heavy water were needed for a reactor. With the Soviet Union having no uranium mines at the time, young Academy workers were sent to Leningrad photographic shops to buy uranium nitrate, but the entire heavy water project was halted in 1941 when German forces invaded during Operation Barbarossa.
Production:
By 1943, Soviet scientists had discovered that all scientific literature relating to heavy water had disappeared from the West, which Flyorov in a letter warned Soviet leader Joseph Stalin about, and at which time there was only 2–3 kg of heavy water in the entire country. In late 1943, the Soviet purchasing commission in the U.S. obtained 1 kg of heavy water and a further 100 kg in February 1945, and upon World War II ending, the NKVD took over the project.
Production:
In October 1946, as part of the Russian Alsos, the NKVD deported to the Soviet Union from Germany the German scientists who had worked on heavy water production during the war, including Karl-Hermann Geib, the inventor of the Girdler sulfide process. These German scientists worked under the supervision of German physical chemist Max Volmer at the Institute of Physical Chemistry in Moscow with the plant they constructed producing large quantities of heavy water by 1948.
Production:
United States During the Manhattan Project the United States constructed three heavy water production plants as part of the P-9 Project at Morgantown Ordnance Works, near Morgantown, West Virginia; at the Wabash River Ordnance Works, near Dana and Newport, Indiana; and at the Alabama Ordnance Works, near Childersburg and Sylacauga, Alabama. Heavy water was also acquired from the Cominco plant in Trail, British Columbia, Canada. The Chicago Pile-3 experimental reactor used heavy water as a moderator and went critical in 1944. The three domestic production plants were shut down in 1945 after producing around 81,470 lb of product. The Wabash plant resumed heavy water production in 1952.
Production:
In 1953, the United States began using heavy water in plutonium production reactors at the Savannah River Site. The first of the five heavy water reactors came online in 1953, and the last was placed in cold shutdown in 1996. The SRS reactors were heavy water reactors so that they could produce both plutonium and tritium for the US nuclear weapons program.
Production:
The U.S. developed the Girdler sulfide chemical exchange production process—which was first demonstrated on a large scale at the Dana, Indiana plant in 1945 and at the Savannah River Plant, South Carolina, in 1952. DuPont operated the SRP for the USDOE until 1 April 1989, when Westinghouse took it over.
India India is one of the world's largest producers of heavy water through its Heavy Water Board. It exports heavy water to countries including the Republic of Korea, China, and the United States.
Empire of Japan In the 1930s, it was suspected by the United States and Soviet Union that Austrian chemist Fritz Johann Hansgirg built a pilot plant for the Empire of Japan in Japanese ruled northern Korea to produce heavy water by using a new process he had invented.
Production:
Norway In 1934, Norsk Hydro built the first commercial heavy water plant at Vemork, Tinn, eventually producing 4 kilograms (8.8 lb) per day. From 1940 and throughout World War II, the plant was under German control and the Allies decided to destroy the plant and its heavy water to inhibit German development of nuclear weapons. In late 1942, a planned raid called Operation Freshman by British airborne troops failed, both gliders crashing. The raiders were killed in the crash or subsequently executed by the Germans.
Production:
On the night of 27 February 1943, Operation Gunnerside succeeded. Norwegian commandos and local resistance managed to demolish small but key parts of the electrolytic cells, dumping the accumulated heavy water down the factory drains. On 16 November 1943, the Allied air forces dropped more than 400 bombs on the site. The Allied air raid prompted the Nazi government to move all available heavy water to Germany for safekeeping. On 20 February 1944, a Norwegian partisan sank the ferry M/F Hydro carrying heavy water across Lake Tinn, at the cost of 14 Norwegian civilian lives, and most of the heavy water was presumably lost. A few of the barrels were only half full, hence buoyant, and may have been salvaged and transported to Germany.
Production:
Recent investigation of production records at Norsk Hydro and analysis of an intact barrel that was salvaged in 2004 revealed that although the barrels in this shipment contained water of pH 14—indicative of the alkaline electrolytic refinement process—they did not contain high concentrations of D2O. Despite the apparent size of the shipment, the total quantity of pure heavy water was quite small, most barrels only containing 0.5–1% pure heavy water. The Germans would have needed a total of about 5 tons of heavy water to get a nuclear reactor running. The manifest clearly indicated that there was only half a ton of heavy water being transported to Germany. Hydro was carrying far too little heavy water for one reactor, let alone the 10 or more tons needed to make enough plutonium for a nuclear weapon. The German nuclear weapons program was much less advanced than the Manhattan project and no reactor constructed in Nazi Germany ever came close to reaching criticality. No amount of heavy water would have changed that.
Production:
Israel admitted running the Dimona reactor with Norwegian heavy water sold to it in 1959. Through re-export using Romania and Germany, India probably also used Norwegian heavy water.
Sweden During the Second World War, the company Fosfatbolaget in Ljungaverk, Sweden, produced 2,300 liters of heavy water per year. The heavy water was sold both to Germany and to the Manhattan Project in the USA at a price of 1.40 SEK per gram.
Production:
Canada As part of its contribution to the Manhattan Project, Canada built and operated a 1,000 pounds (450 kg) to 1,200 pounds (540 kg) per month (design capacity) electrolytic heavy water plant at Trail, British Columbia, which started operation in 1943. The Atomic Energy of Canada Limited (AECL) design of power reactor requires large quantities of heavy water to act as a neutron moderator and coolant. AECL ordered two heavy water plants, which were built and operated in Atlantic Canada at Glace Bay, Nova Scotia (by Deuterium of Canada Limited) and Port Hawkesbury, Nova Scotia (by General Electric Canada). These plants proved to have significant design, construction and production problems. Consequently, AECL built the Bruce Heavy Water Plant (44.1854°N, 81.3618°W), which it later sold to Ontario Hydro, to ensure a reliable supply of heavy water for future power plants. The two Nova Scotia plants were shut down in 1985 when their production proved unnecessary.
Production:
The Bruce Heavy Water Plant (BHWP) in Ontario was the world's largest heavy water production plant with a capacity of 1600 tonnes per year at its peak (800 tonnes per year per full plant, two fully operational plants at its peak). It used the Girdler sulfide process to produce heavy water, and required 340,000 tonnes of feed water to produce one tonne of heavy water. It was part of a complex that included eight CANDU reactors, which provided heat and power for the heavy water plant. The site was located at Douglas Point/Bruce Nuclear Generating Station near Tiverton, Ontario, on Lake Huron where it had access to the waters of the Great Lakes. AECL issued the construction contract in 1969 for the first BHWP unit (BHWP A). Commissioning of BHWP A was done by Ontario Hydro from 1971 through 1973, with the plant entering service on 28 June 1973, and design production capacity being achieved in April 1974. Due to the success of BHWP A and the large amount of heavy water that would be required for the large numbers of upcoming planned CANDU nuclear power plant construction projects, Ontario Hydro commissioned three additional heavy water production plants for the Bruce site (BHWP B, C, and D). BHWP B was placed into service in 1979. These first two plants were significantly more efficient than planned, and the number of CANDU construction projects ended up being significantly lower than originally planned, which led to the cancellation of construction on BHWP C & D. In 1984, BHWP A was shut down. By 1993 Ontario Hydro had produced enough heavy water to meet all of its anticipated domestic needs (which were lower than expected due to improved efficiency in the use and recycling of heavy water), so they shut down and demolished half of the capacity of BHWP B. The remaining capacity continued to operate in order to fulfil demand for heavy water exports until it was permanently shut down in 1997, after which the plant was gradually dismantled and the site cleared. AECL is currently researching other more efficient and environmentally benign processes for creating heavy water. This is relevant for CANDU reactors since heavy water represented about 15–20% of the total capital cost of each CANDU plant in the 1970s and 1980s.
Production:
Iran Since 1996, a plant for the production of heavy water has been under construction at Khondab near Arak. On 26 August 2006, Iranian President Ahmadinejad inaugurated the expansion of the country's heavy-water plant. Iran has indicated that the heavy-water production facility will operate in tandem with a 40 MW research reactor that had a scheduled completion date in 2009. Iran produced deuterated solvents for the first time in early 2011. The core of the IR-40 is supposed to be redesigned based on the nuclear agreement of July 2015.
Production:
Iran is permitted to store only 130 tonnes (140 short tons) of heavy water. Iran exports production in excess of this allotment, making it the world's third-largest exporter of heavy water. As of 2023, Iran sells heavy water commercially, with customers reportedly offering more than 1,000 dollars per liter.
Pakistan The 50 MWth heavy water and natural uranium research reactor at Khushab, in Punjab province, is a central element of Pakistan's program for production of plutonium, deuterium and tritium for advanced compact warheads (i.e. thermonuclear weapons). Pakistan succeeded in acquiring a tritium purification and storage plant and deuterium and tritium precursor materials from two German firms.
Production:
Other countries Romania produced heavy water at the now-decommissioned Drobeta Girdler sulfide plant for domestic and export purposes. France operated a small plant during the 1950s and 1960s. Heavy water exists in elevated concentration in the hypolimnion of Lake Tanganyika in East Africa. It is likely that similar elevated concentrations exist in lakes with similar limnology, but this is only 4% enrichment (24 vs. 28), and surface waters are usually enriched in D2O to an even greater extent by the faster evaporation of H2O.
Applications:
Nuclear magnetic resonance Deuterium oxide is used in nuclear magnetic resonance spectroscopy when using water as solvent if the nuclide of interest is hydrogen. This is because the signal from light-water (1H2O) solvent molecules interferes with the signal from the molecule of interest dissolved in it. Deuterium has a different magnetic moment and therefore does not contribute to the 1H-NMR signal at the hydrogen-1 resonance frequency.
Applications:
For some experiments, it may be desirable to identify the labile hydrogens on a compound, that is hydrogens that can easily exchange away as H+ ions on some positions in a molecule. With addition of D2O, sometimes referred to as a D2O shake, labile hydrogens exchange away and are substituted by deuterium (2H) atoms. These positions in the molecule then do not appear in the 1H-NMR spectrum.
Applications:
Organic chemistry Deuterium oxide is often used as the source of deuterium for preparing specifically labelled isotopologues of organic compounds. For example, C-H bonds adjacent to ketonic carbonyl groups can be replaced by C-D bonds, using acid or base catalysis. Trimethylsulfoxonium iodide, made from dimethyl sulfoxide and methyl iodide, can be recrystallized from deuterium oxide, and then dissociated to regenerate methyl iodide and dimethyl sulfoxide, both deuterium labelled. In cases where specific double labelling by deuterium and tritium is contemplated, the researcher must be aware that deuterium oxide, depending upon age and origin, can contain some tritium.
Applications:
Infrared spectroscopy Deuterium oxide is often used instead of water when collecting FTIR spectra of proteins in solution. H2O creates a strong band that overlaps with the amide I region of proteins. The band from D2O is shifted away from the amide I region.
Neutron moderator Heavy water is used in certain types of nuclear reactors, where it acts as a neutron moderator to slow down neutrons so that they are more likely to react with the fissile uranium-235 than with uranium-238, which captures neutrons without fissioning.
Applications:
The CANDU reactor uses this design. Light water also acts as a moderator, but because light water absorbs more neutrons than heavy water, reactors using light water for a reactor moderator must use enriched uranium rather than natural uranium, otherwise criticality is impossible. A significant fraction of outdated power reactors, such as the RBMK reactors in the USSR, were constructed using normal water for cooling but graphite as a moderator. However, the danger of graphite in power reactors (graphite fires in part led to the Chernobyl disaster) has led to the discontinuation of graphite in standard reactor designs.
Applications:
The breeding and extraction of plutonium can be a relatively rapid and cheap route to building a nuclear weapon, as chemical separation of plutonium from fuel is easier than isotopic separation of U-235 from natural uranium.
Among current and past nuclear weapons states, Israel, India, and North Korea first used plutonium from heavy water moderated reactors burning natural uranium, while China, South Africa and Pakistan first built weapons using highly enriched uranium.
Applications:
The Nazi nuclear program, operating with more modest means than the contemporary Manhattan Project and hampered by many leading scientists having been driven into exile (many of them ending up working for the Manhattan Project), as well as continuous infighting, wrongly dismissed graphite as a moderator due to not recognizing the effect of impurities. Given that isotope separation of uranium was deemed too big a hurdle, this left heavy water as a potential moderator. Other problems were the ideological aversion regarding what propaganda dismissed as "Jewish physics" and the mistrust between those who had been enthusiastic Nazis even before 1933 and those who were Mitläufer or trying to keep a low profile. In part due to allied sabotage and commando raids on Norsk Hydro (then the world's largest producer of heavy water) as well as the aforementioned infighting, the German nuclear program never managed to assemble enough uranium and heavy water in one place to achieve criticality despite possessing enough of both by the end of the war.
Applications:
In the U.S., however, the first experimental atomic reactor (1942), as well as the Manhattan Project Hanford production reactors that produced the plutonium for the Trinity test and Fat Man bombs, all used pure carbon (graphite) neutron moderators combined with normal water cooling pipes. They functioned with neither enriched uranium nor heavy water. Russian and British plutonium production also used graphite-moderated reactors.
Applications:
There is no evidence that civilian heavy water power reactors—such as the CANDU or Atucha designs—have been used to produce military fissile materials. In nations that do not already possess nuclear weapons, nuclear material at these facilities is under IAEA safeguards to discourage any diversion.
Applications:
Due to its potential for use in nuclear weapons programs, the possession or import/export of large industrial quantities of heavy water are subject to government control in several countries. Suppliers of heavy water and heavy water production technology typically apply IAEA (International Atomic Energy Agency) administered safeguards and material accounting to heavy water. (In Australia, the Nuclear Non-Proliferation (Safeguards) Act 1987.) In the U.S. and Canada, non-industrial quantities of heavy water (i.e., in the gram to kg range) are routinely available without special license through chemical supply dealers and commercial companies such as the world's former major producer Ontario Hydro.
Applications:
Neutrino detector The Sudbury Neutrino Observatory (SNO) in Sudbury, Ontario uses 1,000 tonnes of heavy water on loan from Atomic Energy of Canada Limited. The neutrino detector is 6,800 feet (2,100 m) underground in a mine, to shield it from muons produced by cosmic rays. SNO was built to answer the question of whether or not electron-type neutrinos produced by fusion in the Sun (the only type the Sun should be producing directly, according to theory) might be able to turn into other types of neutrinos on the way to Earth. SNO detects the Cherenkov radiation in the water from high-energy electrons produced from electron-type neutrinos as they undergo charged current (CC) interactions with neutrons in deuterium, turning them into protons and electrons (however, only the electrons are fast enough to produce Cherenkov radiation for detection).
Applications:
SNO also detects neutrino electron scattering (ES) events, where the neutrino transfers energy to the electron, which then proceeds to generate Cherenkov radiation distinguishable from that produced by CC events. The first of these two reactions is produced only by electron-type neutrinos, while the second can be caused by all of the neutrino flavors. The use of deuterium is critical to the SNO function, because all three "flavours" (types) of neutrinos may be detected in a third type of reaction as well, neutrino-disintegration, in which a neutrino of any type (electron, muon, or tau) scatters from a deuterium nucleus (deuteron), transferring enough energy to break up the loosely bound deuteron into a free neutron and proton via a neutral current (NC) interaction.
Applications:
This event is detected when the free neutron is absorbed by 35Cl− present from NaCl deliberately dissolved in the heavy water, causing emission of characteristic capture gamma rays. Thus, in this experiment, heavy water not only provides the transparent medium necessary to produce and visualize Cherenkov radiation, but it also provides deuterium to detect exotic mu type (μ) and tau (τ) neutrinos, as well as a non-absorbent moderator medium to preserve free neutrons from this reaction, until they can be absorbed by an easily detected neutron-activated isotope.
Applications:
Metabolic rate and water turnover testing in physiology and biology Heavy water is employed as part of a mixture with H218O for a common and safe test of mean metabolic rate in humans and animals undergoing their normal activities. The elimination rate of deuterium alone is a measure of body water turnover. This is highly variable between individuals and depends on environmental conditions as well as subject size, sex, age and physical activity.
Applications:
Tritium production Tritium is the active substance in self-powered lighting and controlled nuclear fusion, its other uses including autoradiography and radioactive labeling. It is also used in nuclear weapon design for boosted fission weapons and initiators. Tritium undergoes beta decay into helium-3, which is a stable but rare isotope of helium that is itself highly sought after. Some tritium is created in heavy water moderated reactors when deuterium captures a neutron. This reaction has a small cross-section (probability of a single neutron-capture event) and produces only small amounts of tritium, although enough to justify cleaning tritium from the moderator every few years to reduce the environmental risk of tritium escape. Given that helium-3 is a neutron poison with orders of magnitude higher capture cross section than any component of heavy or tritiated water, its accumulation in a heavy water neutron moderator or target for tritium production must be kept to a minimum.
Applications:
Producing a lot of tritium in this way would require reactors with very high neutron fluxes, or with a very high proportion of heavy water to nuclear fuel and very low neutron absorption by other reactor material. The tritium would then have to be recovered by isotope separation from a much larger quantity of deuterium, unlike production from lithium-6 (the present method), where only chemical separation is needed.
Applications:
Deuterium's absorption cross section for thermal neutrons is 0.52 millibarns (5.2 × 10⁻³² m²; 1 barn = 10⁻²⁸ m²), while those of oxygen-16 and oxygen-17 are 0.19 millibarns and 0.24 barns (240 millibarns), respectively. 17O makes up 0.038% of natural oxygen, making the overall cross section of natural oxygen 0.28 millibarns. Therefore, in D2O with natural oxygen, 21% of neutron captures are on oxygen, rising higher as 17O builds up from neutron capture on 16O. Also, 17O may emit an alpha particle on neutron capture, producing radioactive carbon-14.
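The 21% figure follows from weighting the per-atom cross sections by the two deuterium atoms and one oxygen atom of each D2O molecule. A quick Python check using the constants quoted above is sketched below; the variable names are illustrative.

```python
# Check of the capture-fraction figure quoted above for D2O with natural oxygen.
sigma_d   = 0.52e-3    # barns, deuterium thermal neutron capture
sigma_o16 = 0.19e-3    # barns
sigma_o17 = 0.24       # barns (about 240 millibarns)
f_o17     = 0.00038    # natural abundance of oxygen-17

sigma_o = (1 - f_o17) * sigma_o16 + f_o17 * sigma_o17    # effective oxygen cross section
capture_on_oxygen = sigma_o / (2 * sigma_d + sigma_o)    # two D atoms per D2O molecule

print(f"effective oxygen cross section: {sigma_o * 1e3:.2f} mb")   # ~0.28 mb
print(f"fraction of captures on oxygen: {capture_on_oxygen:.0%}")  # ~21%
```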
**Mortuary house**
Mortuary house:
In archaeology and anthropology a mortuary house is any purpose-built structure, often resembling a normal dwelling in many ways, in which a dead body is buried.
Mortuary house:
Proper treatment and placing of the dead has always been of great concern to people around the world. While the choice of burial location and the treatment of the corpse usually depend on beliefs and ritual standards within a specific cultural context, they are also of a strategic nature. Burial decisions are affected by cultural norms regarding the deceased's age, gender, and vertical or horizontal status, and by the relationship of people to places and other people. Ideas concerning proper burial also apply to those who have been dead for quite some time: bodies have been exhumed, reburied and desecrated in order to redefine – elevate or degrade – the status of their owners, construct new affiliations, rewrite history, and retrieve or construct social memory. Following the laying to rest of the deceased, who is often surrounded with grave goods, an earthwork called a kurgan in Russian or a barrow in English is raised over the house and the structure is left sealed.
Mortuary house:
The term has parallels with Christian sepulchres which contain only one burial. Mortuary houses differ from mortuary enclosures in size, design and in the latter's capacity for multiple burials.
Origin:
According to the Online Etymology Dictionary, the word mortuary derives, in the early 14th century, from mortuarie, an Anglo-French word meaning "gift to a parish priest from a deceased parishioner"; from the Medieval Latin word mortuarium, a noun use of the neuter of the Late Latin adjective mortuarius, "pertaining to the dead", from Latin mortuus, past participle of mori, "to die". The meaning "place where bodies are kept temporarily" was first recorded in 1865, a euphemism for the earlier deadhouse.
History:
Philip Lieberman suggests that burial and mortuary housing may signify a "concern for the dead that transcends daily life." It may be one of the earliest detectable forms of religious practice. Mortuary housing rituals can be detected back to the earliest days of human existence. Evidence suggests that the Neanderthals were the first human species to practice burial behavior and intentionally bury their dead, doing so in shallow graves along with stone tools and animal bones. The earliest undisputed human burial, discovered so far, dates back 100,000 years. Human skeletal remains stained with red ochre were discovered in the Skhul cave at Qafzeh, Israel.
History:
Egyptian Pyramids Ancient Egypt is well known for its unique housing of the dead. The complex construction of burial chambers was both alluring and mysterious. The tombs served as mortuary temples for the dead and the afterlife.
Case studies:
Ballyveelish, Co. Tipperary, Ireland The outline of a timber building was discovered by an archaeologist. From the circle of post holes and foundation trenches, the house was determined to measure 7 m × 5.1 m. The structure was classified as a mortuary house, rather than a dwelling, because of the lack of evidence of a hearth.
It is believed the mortuary house was built to serve a ceremonial function associated with the interment of human remains. Radiocarbon dating indicates that the site was erected in the Bronze Age.
**Sniffing (behavior)**
Sniffing (behavior):
Sniffing is a perceptually relevant behavior, defined as the active sampling of odors through the nasal cavity for the purpose of information acquisition. This behavior, displayed by all terrestrial vertebrates, is typically identified based upon changes in respiratory frequency and/or amplitude, and is often studied in the context of odor-guided behaviors and olfactory perceptual tasks. Sniffing is quantified by measuring intra-nasal pressure or flow of air or, less accurately, through a strain gauge on the chest to measure total respiratory volume. Strategies for sniffing behavior vary depending upon the animal, with small animals (rats, mice, hamsters) displaying sniffing frequencies ranging from 4 to 12 Hz but larger animals (humans) sniffing at much lower frequencies, usually less than 2 Hz. Subserving sniffing behaviors, evidence exists for an "olfactomotor" circuit in the brain, wherein perception or expectation of an odor can trigger the brain's respiratory centers to allow for the modulation of sniffing frequency and amplitude and thus the acquisition of odor information. Sniffing is analogous to other stimulus-sampling behaviors, including visual saccades, active touch, and whisker movements in small animals (viz., whisking). Atypical sniffing has been reported in cases of neurological disorders, especially those disorders characterized by impaired motor function and olfactory perception.
Background and history of sniffing:
Background The behavior of sniffing incorporates changes in air flow within the nose. This can involve changes in the depth of inhalation and the frequency of inhalations. Both of these entail modulations in the manner whereby air flows within the nasal cavity and through the nostrils. As a consequence, when the air being breathed is odorized, odors can enter and leave the nasal cavity with each sniff. The same applies regardless of what gas is being inhaled, including toxins and solvents, and other industrial chemicals which may be inhaled as a form of drug or substance abuse. The act of sniffing is considered distinct from respiration on several grounds. In humans, one can assess the occurrence of a sniff based upon volitional control of air movement through the nose. In these cases, human subjects can be asked to inhale for a certain amount of time, or in a particular pattern. Some animals are obligate nasal breathers, wherein the only air for respiration must arrive into the lungs via the nose. This includes rats and mice. Thus, in these animals the distinction between a breath and a sniff is not clear and could be argued to be indistinguishable. (See sniffing in small animals.) Sniffing is observed among all terrestrial vertebrates, wherein they inhale environmental air. Sniffing may also occur in underwater environments wherein an animal may exhale air from within its lungs and nasal cavity to acquire odors within an aquatic environment and then re-inhale this air. (See sniffing in small animals.) While sniffing behavior is often observed and discussed within the context of acquiring odor information, sniffing is also displayed during the performance of motivated behaviors and upon deep brain electrical stimulation of brain reward centers. For instance, prior to obtaining a food reward, mice and rabbits increase their sniffing frequency in a manner independent of seeking odor information. Sniffing behavior is also displayed by animals upon involuntary electrical stimulation of numerous brain structures. Thus, while sniffing is often considered a critical part of olfaction, its link with motivated and reward behaviors suggests it plays a role in other behaviors.
Background and history of sniffing:
History Studies into the perceptual correlates of sniffing on human olfaction did not reach the mainstream scientific community until the 1950s. Frank Jones, an American psychologist, published a paper demonstrating the interplay between parameters of sniffing and odor detection thresholds. He found that deep sniffs, consisting of a large volume of air, allowed for consistent and accurate detection of odors. One of the earliest reports exploring sniffing in non-human animals was provided by Welker in his 1964 article, Analysis of sniffing in the albino rat. In this study, Welker used video recordings of rats during presentation of odors and other stimuli, using chest movements as an index of sniffing. This was the first paper to report that rats can sniff at frequencies reaching 12 Hz upon detection of odors and during free exploration. It also provided early evidence that the rhythm of sniffing is coupled with other sensory behaviors, such as whisking, the movement of the whiskers.
Background and history of sniffing:
While behavioral and psycho-physical studies into sniffing and its influence on odor perception began to surface, much less work was being performed to explore the influence of sniffing behaviors on the physiological processing of odors within the brain. Early recordings from the olfactory bulbs of hedgehogs by Lord Edgar Adrian, who had previously won the 1932 Nobel Prize along with Sir Charles Sherrington for their work on the functions of neurons, revealed that neural oscillations within the hedgehog olfactory bulb were entrained to the respiratory cycle. Further, odor-evoked oscillations (including those evoked by an exhaled puff from a pipe) were amplified along with the respiratory cycle. These data gave evidence that information processing within the brain, particularly that of odors, is linked with respiration, establishing the integral nature of sniffing for the physiological processing of odors. About 20 years later, Max Mozell published a series of studies wherein he further proposed that the flow rate and the sorption properties of odorants interplay to affect the location of odorant binding to olfactory receptor neurons in the nose and, consequently, odor input to the brain. Later, evidence that single neurons in the olfactory bulb, the brain's first relay station for odor information, are entrained with respiration was presented, establishing a solid basis for the control of odor input to the brain and the processing of odors by sniffing.
Methods for quantifying sniffing:
There are multiple methods available for measuring sniffing. While these methods are applicable for most animal models (mice to humans), selection of appropriate sniff measurement methods should be determined by experimental need for precision.
Methods for quantifying sniffing:
Video Perhaps the simplest method for determining the moment of sniffing is video-based. High resolution video of small animals (e.g., rats) during immobile respiration enables approximations of sniffing, including identification of individual sniff events. Similar methods can be employed to identify fast, high frequency sniffing during states of arousal and stimulus investigation. This method, however, does not provide direct evidence for sniffing and is not reliable in larger animals (rabbits to humans).
Methods for quantifying sniffing:
Chest strain Sensors that measure chest expansion during inhalation provide direct information about sniff cycles. These methods include mechanical and optical devices. Mechanical devices for sniffing measurements are piezo foils placed under the chests of small animals and strain gauges placed around the chests of larger animals. In both cases, a positive increase in signal output (voltage) can be identified and used to index inhalation events. Alternatively, a photo transducer can be placed on the opposite side of an animal's chest from a light source (e.g., a light-emitting diode). In this design, a decrease in signal reflects inhalation (chest expansion), as the expanding chest interrupts the light path to the photo transducer.
Methods for quantifying sniffing:
Nasal microphone As a direct measurement of sniffing, early studies favored the use of microphones placed or secured external to the anterior nares, the external openings of the nasal cavity. This method has the advantage of directly indexing air leaving the nares (an increase in microphone output) while remaining mostly non-invasive. Because of this non-invasive nature, microphone measures have been employed in dogs during odor-tracking exercises and are useful for measuring sniffing on a temporary basis in other large animals.
Methods for quantifying sniffing:
Nasal thermocouple and nasal pressure sensor The most precise methods to date for measuring sniffing involve direct intranasal measures through the use of a temperature probe, called a thermocouple, or a pressure sensor. These can be inserted temporarily into the nares or implanted surgically. The basic principles of operation are shared between the temperature and pressure devices. Inhalation of ambient air brings cooler air into the nasal cavity, whereas exhalation returns warmed air through the nasal cavity and simultaneously increases intranasal pressure as air from the lungs is forced out of the nostrils. Placement of these sensors close to the olfactory epithelium allows transients of odorized air to be measured as they reach the olfactory receptors, making these common methods for measuring sniffing in sensory neuroscience and psychological studies.
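As a rough illustration of how an intranasal pressure or thermocouple trace is turned into sniff-by-sniff measurements, the sketch below detects inhalation peaks and computes instantaneous sniff frequency. It is a minimal example assuming a generic digitized signal; the sampling rate, filter band, and peak-detection thresholds are illustrative choices, not values taken from any particular study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def sniff_frequency(signal, fs):
    """Estimate instantaneous sniff frequency (Hz) from an intranasal
    pressure or thermocouple trace sampled at fs samples per second."""
    # Band-pass around the physiological sniffing range (~1-15 Hz)
    # to suppress slow drift and high-frequency noise.
    b, a = butter(2, [1.0, 15.0], btype="band", fs=fs)
    filtered = filtfilt(b, a, signal)

    # Each inhalation appears as a peak; require peaks to be at least
    # ~60 ms apart so frequencies above ~16 Hz are rejected as noise.
    peaks, _ = find_peaks(filtered, distance=int(0.06 * fs),
                          prominence=0.5 * np.std(filtered))

    # Instantaneous frequency is the reciprocal of each inter-sniff interval.
    sniff_times = peaks / fs
    inst_freq = 1.0 / np.diff(sniff_times)
    return sniff_times, inst_freq

# Synthetic demonstration: 2 Hz "resting" breathing followed by 8 Hz sniffing.
fs = 1000
t = np.arange(0, 10, 1 / fs)
trace = np.where(t < 5, np.sin(2 * np.pi * 2 * t), np.sin(2 * np.pi * 8 * t))
times, freqs = sniff_frequency(trace + 0.05 * np.random.randn(t.size), fs)
```

In practice, thermocouple and pressure traces differ in polarity and waveform shape, so the filter settings and peak criteria would be tuned to the recording hardware.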
Sniffing in small animals:
The earliest published study of sniffing behavior in small animals was performed in laboratory rats using video-based measures. In this study, robust changes in respiratory frequency were reported during exploration of an open arena and of novel odors. Resting respiration occurs ~2 times/second (Hz), and increases to about 12 Hz are noted during states of exploration and arousal. Similar transitions in sniffing frequency are observed in freely exploring mice, which, however, maintain generally higher sniffing frequencies than rats (3 [rest] to 15 Hz [exploration] vs 2 to 12 Hz).
Sniffing in small animals:
Transitions in sniffing frequency are observed in animals performing odor-guided tasks. Studies recording sniffing in the context of odor-guided tasks involve implanting intranasal temperature or pressure sensors into the nasal cavity of animals and either measuring odor-orienting responses (fast sniffing) or measuring sniffing during performance of operant odor-guided tasks. Alternatively, animals can be conditioned to insert their snouts into an air-tight chamber with an embedded pressure transducer to access nasal transients while odors are simultaneously presented, allowing responses to be measured while nose-poking. Notably, several studies have reported that the modulation of sniffing frequency may be just as great in anticipation of odor sampling as during the sampling of odors itself. Similar changes in sniffing frequency are even seen in animals presented with novel auditory stimuli, suggesting a relationship between sniffing and arousal.
Sniffing in small animals:
Sniffing in semi-aquatic animals While sniffing is generally thought to occur solely in terrestrial animals, semi-aquatic mammals such as the American water shrew also display sniffing behaviors during underwater odor-guided tasks. Shrews inhale and exhale small amounts of air in a precise and coordinated fashion while tracking an underwater odor trail. The air, inhaled above the surface, allows odors to be volatilized in an environment otherwise devoid of air.
Sniffing in small animals:
Sniffing and control of odor input to the brain Measurements of sniffing made simultaneously with physiological recordings from olfactory centers in the brain have provided information on how sniffing modulates the access and processing of odors at the neural level. Inhalation is necessary for odor input to the brain. Further, odor input to the brain is temporally linked to the respiratory cycle, with bouts of activity occurring with each inhalation. This linkage between sniffing frequency and odor processing provides a mechanism for the control of odor input to the brain by respiratory frequency and possibly amplitude, though the latter is not well established.
Sniffing in humans:
The nature of sniffing regulates odor perception in humans; in fact, a single sniff is often sufficient for optimal odor perception. For instance, a deep, steady inhalation of a faint odor yields a more potent percept than a shallow inhalation. Similarly, more frequent sniffs provide a faster percept of the odor environment than sniffing only once every 3 seconds. These examples have been supported by empirical studies (see above) and have provided insight into ways in which humans may change their sniffing strategies to modulate odor perception. Odor inhalation evokes activity throughout olfactory structures in humans. Neuroimaging studies lack the resolution to determine the impact of sniffing frequency on the structure of odor input to the brain, although imaging studies have revealed that the motor act of sniffing is anatomically independent of sniff-evoked odor perception. This implies shared but distributed pathways for odor processing in the brain.
Neural control of sniffing:
Sniffing is fundamentally controlled by respiratory centers in the brainstem, including the pre-Bötzinger complex, which governs inhalation/exhalation patterns. Activity from respiratory brainstem structures then modulates motor output to the respiratory muscles that drive lung inflation and deflation. To exert changes in respiration, and thereby evoke sniffing behavior, volitional centers in the cerebral cortex must stimulate brainstem structures. It is through this simple pathway that the decision to inhale or sniff may occur.
Neural control of sniffing:
The rapid modulation of sniffing upon inhalation of a novel odor or an irritating odor is evidence for an "olfactomotor" loop in the brain. In this loop, odor-evoked changes in sniffing can occur rapidly upon perception of a novel odor, an odor of interest, or an aversive odor.
Relation of sniffing to other stimulus sampling behaviors:
Sniffing, as an active sampling behavior, is often grouped with other behaviors used to acquire sensory stimuli. For instance, sniffing has been compared to rapid eye movements, or saccades, in that both provide rapid "snapshots" of information to the brain. This analogy, though, may be imprecise, since small animals (e.g., mice) make odor-based decisions (through sniffing) while also making visual decisions, yet do not saccade. Sniffing is also fundamentally similar to active touch, including swiping one's finger along a surface to scan its texture.
Relation of sniffing to other stimulus sampling behaviors:
In part due to the interconnection of the respiratory brainstem structures with other central pattern generators responsible for governing other active sampling behaviors, sniffing in animals often occurs at similar frequencies (2 to 12 Hz) and in a phasic relationship to the active sampling behaviors of whisking and licking. Whisking and sniffing are tightly correlated in their occurrence, with sniff inhalations occurring during whisker protraction. Due to the metabolic need to coordinate breathing and swallowing, small animals (rats and mice) often lick at frequencies similar to sniffing (4 to 8 Hz) and swallow in between inhalations or during brief periods of apnea (cessation of breathing).
Relevance to neurological disorders:
Few studies have explored the impact of neurological disorders on sniffing behavior, although numerous neurological disorders affect respiration. Humans with Parkinson's disease have abnormal sniffing capabilities (i.e., reduced volume and flow rate), which may underlie olfactory perceptual impairments in the disease. Studies of sniffing in mouse models of Alzheimer's disease, and also in humans, have not found major effects of Alzheimer's pathology on either basal respiration or odor-evoked sniffing.
**Natural class**
Natural class:
In phonology, a natural class is a set of phonemes in a language that share certain distinctive features. A natural class is determined by participation in shared phonological processes, described using the minimum number of features necessary for descriptive adequacy.
Overview:
Classes are defined by distinctive features having reference to articulatory and acoustic phonetic properties, including manners of articulation, places of articulation, voicing, and continuance. For example, the set containing the sounds /p/, /t/, and /k/ is a natural class of voiceless stops in American Standard English. This class is one of several other classes, including the voiced stops (/b/, /d/, and /g/), voiceless fricatives (/f/, /θ/, /s/, /ʃ/, and /h/), sonorants, and vowels.
Overview:
To give a further example, the system of Chomsky and Halle defines the class of voiceless stops by the specification of two binary features: [-continuant] and [-voice]. Any sound with both the feature [-continuant] (not able to be pronounced continuously) and the feature [-voice] (not pronounced with vibration of the vocal cords) is included in the class, thus specifying all and only the voiceless stops.
Overview:
By implication, the class is also described as not having the features [+continuant] or [+voice]. This means that all sounds with either the feature [+continuant] (able to be lengthened in pronunciation) or [+voice] (pronounced with vibration of the vocal cords) are excluded from the class. This excludes all natural classes of sounds besides voiceless stops. For instance, it excludes voiceless fricatives, which have the feature [+continuant], voiced stops, which have the feature [+voice], and liquids and vowels, which have the features [+continuant] and [+voice].
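The feature-based definition can be made concrete with a small sketch. The table below is a hypothetical, much-simplified feature chart for a handful of English consonants; selecting all and only the segments marked [-continuant] and [-voice] yields exactly the voiceless stops.

```python
# Toy feature table for a few American English consonants.
# Feature values are simplified for illustration only.
features = {
    "p": {"continuant": False, "voice": False},
    "t": {"continuant": False, "voice": False},
    "k": {"continuant": False, "voice": False},
    "b": {"continuant": False, "voice": True},
    "d": {"continuant": False, "voice": True},
    "g": {"continuant": False, "voice": True},
    "f": {"continuant": True,  "voice": False},
    "s": {"continuant": True,  "voice": False},
    "z": {"continuant": True,  "voice": True},
}

def natural_class(feature_table, spec):
    """Return the set of segments matching every feature in `spec`."""
    return {seg for seg, feats in feature_table.items()
            if all(feats.get(f) == v for f, v in spec.items())}

# [-continuant, -voice] picks out /p t k/ and nothing else.
voiceless_stops = natural_class(features, {"continuant": False, "voice": False})
print(voiceless_stops)  # {'p', 't', 'k'}
```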
Overview:
Voiceless stops also have other, redundant, features, such as [+consonantal] and [-lateral]. These are not relevant to the description of the class and are unnecessary, since the features [-continuant] and [-voice] already include all voiceless stops and exclude all other sounds.
It is expected that members of a natural class will behave similarly in the same phonetic environment, and will have a similar effect on sounds that occur in their environment.
**Superinsulator**
Superinsulator:
A superinsulator is a material that at low but finite temperatures does not conduct electricity, i.e. has an infinite resistance so that no electric current passes through it. The phenomenon of superinsulation can be regarded as an exact dual to superconductivity.
Superinsulator:
The superinsulating state can be destroyed by increasing the temperature or by applying an external magnetic field or voltage. A superinsulator was first predicted by M. C. Diamantini, P. Sodano, and C. A. Trugenberger in 1996, who found a superinsulating ground state dual to superconductivity, emerging at the insulating side of the superconductor-insulator transition in a Josephson junction array due to electric-magnetic duality. Superinsulators were independently rediscovered by T. Baturina and V. Vinokur in 2008 on the basis of duality between two different symmetry realizations of the uncertainty principle and were experimentally found in titanium nitride (TiN) films. The 2008 measurements revealed giant resistance jumps interpreted as manifestations of the voltage threshold transition to a superinsulating state, which was identified as the low-temperature confined phase emerging below the charge Berezinskii-Kosterlitz-Thouless transition. These jumps were similar to earlier findings of resistance jumps in indium oxide (InO) films. The finite-temperature phase transition into the superinsulating state was finally confirmed by Mironov et al. in NbTiN films in 2018. Other researchers have seen a similar phenomenon in disordered indium oxide films.
Mechanism:
Both superconductivity and superinsulation rest on the pairing of conduction electrons into Cooper pairs. In superconductors, all the pairs move coherently, allowing electric current to flow without resistance. In superinsulators, both Cooper pairs and normal excitations are confined and electric current cannot flow. A mechanism behind superinsulation is the proliferation of magnetic monopoles at low temperatures. In two dimensions (2D), magnetic monopoles are quantum tunneling events (instantons) that are often referred to as a monopole “plasma”. In three dimensions (3D), monopoles form a Bose condensate. The monopole plasma or monopole condensate squeezes Faraday's electric field lines into thin electric flux filaments, or strings, dual to Abrikosov vortices in superconductors. Cooper pairs of opposite charges at the ends of these electric strings feel an attractive linear potential. When the corresponding string tension is large, it is energetically favorable to pull many charge-anticharge pairs out of the vacuum and to form many short strings rather than to continue stretching the original one. As a consequence, only neutral “electric pions” exist as asymptotic states and electric conduction is absent. This mechanism is a single-color version of the confinement mechanism that binds quarks into hadrons. Because electric forces are much weaker than the strong forces of particle physics, the typical size of “electric pions” well exceeds the size of the corresponding elementary particles. This implies that, by preparing sufficiently small samples, one can peer inside an “electric pion,” where electric strings are loose and Coulomb interactions are screened, hence electric charges are effectively unbound and move as if they were in a metal. The low-temperature saturation of the resistance to metallic behavior has been observed in TiN films with small lateral dimensions.
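The string-breaking picture described above can be summarized with a heuristic energy-balance sketch; the string tension σ and the pair-creation energy 2Δ below are generic placeholder symbols, not measured quantities from the cited experiments.

```latex
% Linear confining potential between a Cooper pair and its anticharge
% at separation r, with string tension \sigma:
V(r) \approx \sigma\, r .

% Stretching the string beyond r^{*} costs more energy than creating a new
% charge-anticharge pair of energy 2\Delta, so the string breaks instead:
\sigma\, r^{*} \sim 2\Delta
\quad\Longrightarrow\quad
r^{*} \sim \frac{2\Delta}{\sigma},
% leaving only short strings bound into neutral "electric pions" and hence
% no free charges to carry a current.
```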
Future applications:
Superinsulators could potentially be used as a platform for high-performance sensors and logical units. Combined with superconductors, superinsulators could be used to create switching electrical circuits with no energy loss as heat.
**Data integration**
Data integration:
Data integration involves combining data residing in different sources and providing users with a unified view of them. This process becomes significant in a variety of situations, both commercial (such as when two similar companies need to merge their databases) and scientific (combining research results from different bioinformatics repositories, for example). Data integration appears with increasing frequency as the volume of data (that is, big data) and the need to share existing data explode. It has become the focus of extensive theoretical work, and numerous open problems remain unsolved. Data integration encourages collaboration between internal as well as external users. The data being integrated must be received from a heterogeneous database system and transformed into a single coherent data store that provides synchronous data across a network of files for clients. A common use of data integration is in data mining, when analyzing and extracting information from existing databases that can be useful for business information.
History:
Issues with combining heterogeneous data sources, often referred to as information silos, under a single query interface have existed for some time. In the early 1980s, computer scientists began designing systems for interoperability of heterogeneous databases. The first data integration system driven by structured metadata was designed at the University of Minnesota in 1991 for the Integrated Public Use Microdata Series (IPUMS). IPUMS used a data warehousing approach, which extracts, transforms, and loads data from heterogeneous sources into a unique view schema so data from different sources become compatible. By making thousands of population databases interoperable, IPUMS demonstrated the feasibility of large-scale data integration. The data warehouse approach offers a tightly coupled architecture because the data are already physically reconciled in a single queryable repository, so it usually takes little time to resolve queries. The data warehouse approach is less feasible for data sets that are frequently updated, requiring the extract, transform, load (ETL) process to be continuously re-executed for synchronization. Difficulties also arise in constructing data warehouses when one has only a query interface to summary data sources and no access to the full data. This problem frequently emerges when integrating several commercial query services like travel or classified advertisement web applications.
History:
As of 2009 the trend in data integration favored the loose coupling of data and providing a unified query-interface to access real time data over a mediated schema (see Figure 2), which allows information to be retrieved directly from original databases. This is consistent with the SOA approach popular in that era. This approach relies on mappings between the mediated schema and the schema of original sources, and translating a query into decomposed queries to match the schema of the original databases. Such mappings can be specified in two ways: as a mapping from entities in the mediated schema to entities in the original sources (the "Global-as-View" (GAV) approach), or as a mapping from entities in the original sources to the mediated schema (the "Local-as-View" (LAV) approach). The latter approach requires more sophisticated inferences to resolve a query on the mediated schema, but makes it easier to add new data sources to a (stable) mediated schema.
History:
As of 2010 some of the work in data integration research concerns the semantic integration problem. This problem addresses not the structuring of the architecture of the integration, but how to resolve semantic conflicts between heterogeneous data sources. For example, if two companies merge their databases, certain concepts and definitions in their respective schemas, like "earnings", inevitably have different meanings. In one database it may mean profits in dollars (a floating-point number), while in the other it might represent the number of sales (an integer). A common strategy for the resolution of such problems involves the use of ontologies which explicitly define schema terms and thus help to resolve semantic conflicts. This approach represents ontology-based data integration. On the other hand, the problem of combining research results from different bioinformatics repositories requires benchmarking of the similarities, computed from different data sources, on a single criterion such as positive predictive value. This enables the data sources to be directly comparable and allows them to be integrated even when the natures of the experiments are distinct. As of 2011 it was determined that current data modeling methods were imparting data isolation into every data architecture in the form of islands of disparate data and information silos. This data isolation is an unintended artifact of the data modeling methodology that results in the development of disparate data models. Disparate data models, when instantiated as databases, form disparate databases. Enhanced data model methodologies have been developed to eliminate the data isolation artifact and to promote the development of integrated data models. One enhanced data modeling method recasts data models by augmenting them with structural metadata in the form of standardized data entities. As a result of recasting multiple data models, the set of recast data models will share one or more commonality relationships that relate the structural metadata now common to these data models. Commonality relationships are a peer-to-peer type of entity relationship that relates the standardized data entities of multiple data models. Multiple data models that contain the same standard data entity may participate in the same commonality relationship. When integrated data models are instantiated as databases and are properly populated from a common set of master data, these databases are integrated.
History:
Since 2011, data hub approaches have been of greater interest than fully structured (typically relational) enterprise data warehouses. Since 2013, data lake approaches have risen to the level of data hubs. (See the popularity of all three search terms on Google Trends.) These approaches combine unstructured or varied data into one location, but do not necessarily require an (often complex) master relational schema to structure and define all data in the hub.
History:
Data integration plays a big role in business with regard to the data collection used for studying the market. Converting the raw data retrieved from consumers into coherent data is something businesses try to do when considering what steps they should take next. Organizations are more frequently using data mining for collecting information and patterns from their databases, and this process helps them develop new business strategies to increase business performance and perform economic analyses more efficiently. Compiling the large amounts of data they collect to be stored in their systems is a form of data integration adapted for business intelligence to improve their chances of success.
Example:
Consider a web application where a user can query a variety of information about cities (such as crime statistics, weather, hotels, demographics, etc.). Traditionally, the information must be stored in a single database with a single schema. But any single enterprise would find information of this breadth somewhat difficult and expensive to collect. Even if the resources exist to gather the data, it would likely duplicate data in existing crime databases, weather websites, and census data.
Example:
A data-integration solution may address this problem by considering these external resources as materialized views over a virtual mediated schema, resulting in "virtual data integration". This means application-developers construct a virtual schema—the mediated schema—to best model the kinds of answers their users want. Next, they design "wrappers" or adapters for each data source, such as the crime database and weather website. These adapters simply transform the local query results (those returned by the respective websites or databases) into an easily processed form for the data integration solution (see figure 2). When an application-user queries the mediated schema, the data-integration solution transforms this query into appropriate queries over the respective data sources. Finally, the virtual database combines the results of these queries into the answer to the user's query.
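A minimal sketch of this wrapper pattern is shown below, assuming two hypothetical sources (a crime database and a weather website). The source shapes, field names, and query form are invented for illustration and are not taken from any specific data-integration product.

```python
# Virtual data integration sketch: each wrapper translates a mediated-schema
# lookup into a source-specific lookup and maps the result back into the
# mediated schema's field names. All sources and fields are hypothetical.

CRIME_DB = {"Springfield": {"burglaries": 120}}                  # stand-in for a crime database
WEATHER_SITE = {"Springfield": {"temp_f": 71, "sky": "clear"}}   # stand-in for a weather website

def crime_wrapper(city):
    row = CRIME_DB.get(city, {})
    return {"city": city, "burglaries": row.get("burglaries")}

def weather_wrapper(city):
    row = WEATHER_SITE.get(city, {})
    return {"city": city, "temperature": row.get("temp_f"), "conditions": row.get("sky")}

def query_mediated_schema(city):
    """Answer a query over the mediated schema by decomposing it into
    per-source queries and merging the wrapped results."""
    answer = {}
    for wrapper in (crime_wrapper, weather_wrapper):
        answer.update(wrapper(city))
    return answer

print(query_mediated_schema("Springfield"))
# {'city': 'Springfield', 'burglaries': 120, 'temperature': 71, 'conditions': 'clear'}
```

Adding a new source under this pattern means writing one more wrapper function, which is exactly the convenience discussed in the next paragraph.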
Example:
This solution offers the convenience of adding new sources by simply constructing an adapter or an application software blade for them. It contrasts with ETL systems or with a single database solution, which require manual integration of an entire new data set into the system. Virtual ETL solutions leverage the virtual mediated schema to implement data harmonization, whereby the data are copied from the designated "master" source to the defined targets, field by field. Advanced data virtualization is also built on the concept of object-oriented modeling in order to construct a virtual mediated schema or virtual metadata repository, using a hub and spoke architecture.
Example:
Each data source is disparate and as such is not designed to support reliable joins between data sources. Therefore, data virtualization as well as data federation depends upon accidental data commonality to support combining data and information from disparate data sets. Because of the lack of data value commonality across data sources, the return set may be inaccurate, incomplete, and impossible to validate.
Example:
One solution is to recast disparate databases to integrate these databases without the need for ETL. The recast databases support commonality constraints where referential integrity may be enforced between databases. The recast databases provide designed data access paths with data value commonality across databases.
Theory:
The theory of data integration forms a subset of database theory and formalizes the underlying concepts of the problem in first-order logic. Applying the theories gives indications as to the feasibility and difficulty of data integration. While its definitions may appear abstract, they have sufficient generality to accommodate all manner of integration systems, including those based on nested relational / XML databases and those that treat databases as programs. Connections to particular database systems such as Oracle or DB2 are provided by implementation-level technologies such as JDBC and are not studied at the theoretical level.
Theory:
Definitions Data integration systems are formally defined as a tuple ⟨G,S,M⟩ where G is the global (or mediated) schema, S is the heterogeneous set of source schemas, and M is the mapping that maps queries between the source and the global schemas. Both G and S are expressed in languages over alphabets composed of symbols for each of their respective relations. The mapping M consists of assertions between queries over G and queries over S . When users pose queries over the data integration system, they pose queries over G and the mapping then asserts connections between the elements in the global schema and the source schemas.
Theory:
A database over a schema is defined as a set of sets, one for each relation (in a relational database). The database corresponding to the source schema S would comprise the set of sets of tuples for each of the heterogeneous data sources and is called the source database. Note that this single source database may actually represent a collection of disconnected databases. The database corresponding to the virtual mediated schema G is called the global database. The global database must satisfy the mapping M with respect to the source database. The legality of this mapping depends on the nature of the correspondence between G and S . Two popular ways to model this correspondence exist: Global as View or GAV and Local as View or LAV.
Theory:
GAV systems model the global database as a set of views over S . In this case M associates to each element of G a query over S . Query processing becomes a straightforward operation due to the well-defined associations between G and S . The burden of complexity falls on implementing mediator code instructing the data integration system exactly how to retrieve elements from the source databases. If any new sources join the system, considerable effort may be necessary to update the mediator, thus the GAV approach appears preferable when the sources seem unlikely to change.
Theory:
In a GAV approach to the example data integration system above, the system designer would first develop mediators for each of the city information sources and then design the global schema around these mediators. For example, consider if one of the sources served a weather website. The designer would likely then add a corresponding element for weather to the global schema. Then the bulk of effort concentrates on writing the proper mediator code that will transform predicates on weather into a query over the weather website. This effort can become complex if some other source also relates to weather, because the designer may need to write code to properly combine the results from the two sources.
Theory:
On the other hand, in LAV, the source database is modeled as a set of views over G . In this case M associates to each element of S a query over G . Here the exact associations between G and S are no longer well-defined. As is illustrated in the next section, the burden of determining how to retrieve elements from the sources is placed on the query processor. The benefit of LAV modeling is that new sources can be added with far less work than in a GAV system, thus the LAV approach should be favored in cases where the mediated schema is less stable or likely to change. In an LAV approach to the example data integration system above, the system designer designs the global schema first and then simply inputs the schemas of the respective city information sources. Consider again if one of the sources serves a weather website. The designer would add corresponding elements for weather to the global schema only if none existed already. Then programmers write an adapter or wrapper for the website and add a schema description of the website's results to the source schemas. The complexity of adding the new source moves from the designer to the query processor.
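For the weather example, the two mapping styles can be written as schematic rule-like assertions. The notation is illustrative only: WeatherSite is a hypothetical source relation, and Weather and Covered are hypothetical global relations.

```latex
% GAV: each global relation is defined as a view (a query) over the sources;
% answering a query about Weather simply expands into a source lookup.
\text{GAV:}\qquad \mathit{Weather}(c,\,t)\ \leftarrow\ \mathit{WeatherSite}(c,\,t)

% LAV: each source relation is described as a view over the global schema,
% e.g. the site only holds weather for cities it covers; the query processor
% must reason about which sources can contribute to a query over Weather.
\text{LAV:}\qquad \mathit{WeatherSite}(c,\,t)\ \leftarrow\ \mathit{Weather}(c,\,t),\ \mathit{Covered}(c)
```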
Theory:
Query processing The theory of query processing in data integration systems is commonly expressed using conjunctive queries and Datalog, a purely declarative logic programming language. One can loosely think of a conjunctive query as a logical function applied to the relations of a database, such as "f(A, B) where A < B". If a tuple or set of tuples is substituted into the rule and satisfies it (makes it true), then we consider that tuple as part of the set of answers to the query. While formal languages like Datalog express these queries concisely and without ambiguity, common SQL queries count as conjunctive queries as well.
Theory:
In terms of data integration, "query containment" represents an important property of conjunctive queries. A query A contains another query B (denoted A ⊃ B) if the results of applying B are a subset of the results of applying A for any database. The two queries are said to be equivalent if the resulting sets are equal for any database. This is important because in both GAV and LAV systems, a user poses conjunctive queries over a virtual schema represented by a set of views, or "materialized" conjunctive queries. Integration seeks to rewrite the queries represented by the views to make their results equivalent or maximally contained by the user's query. This corresponds to the problem of answering queries using views (AQUV). In GAV systems, a system designer writes mediator code to define the query rewriting. Each element in the user's query corresponds to a substitution rule, just as each element in the global schema corresponds to a query over the source. Query processing simply expands the subgoals of the user's query according to the rules specified in the mediator, and thus the resulting query is likely to be equivalent. While the designer does the majority of the work beforehand, some GAV systems such as Tsimmis involve simplifying the mediator description process.
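As a concrete instance of the containment relation defined above (schematic notation; R is a hypothetical relation), adding a conjunct can only shrink the answer set, so the more constrained query is contained in the less constrained one.

```latex
Q_1(x)\ \leftarrow\ R(x,\,y)
\qquad\qquad
Q_2(x)\ \leftarrow\ R(x,\,y),\ R(y,\,x)

% Every answer to Q_2 is also an answer to Q_1 on any database instance,
% so Q_1 contains Q_2; the two are equivalent only if the extra condition
% R(y, x) never eliminates an answer.
```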
Theory:
In LAV systems, queries undergo a more radical process of rewriting because no mediator exists to align the user's query with a simple expansion strategy. The integration system must execute a search over the space of possible queries in order to find the best rewrite. The resulting rewrite may not be an equivalent query but maximally contained, and the resulting tuples may be incomplete. As of 2011 the GQR algorithm is the leading query rewriting algorithm for LAV data integration systems.
Theory:
In general, the complexity of query rewriting is NP-complete. If the space of rewrites is relatively small, this does not pose a problem — even for integration systems with hundreds of sources.
Medicine and Life Sciences:
Large-scale questions in science, such as real world evidence, global warming, invasive species spread, and resource depletion, increasingly require the collection of disparate data sets for meta-analysis. This type of data integration is especially challenging for ecological and environmental data because metadata standards are not agreed upon and there are many different data types produced in these fields. National Science Foundation initiatives such as Datanet are intended to make data integration easier for scientists by providing cyberinfrastructure and setting standards. The five funded Datanet initiatives are DataONE, led by William Michener at the University of New Mexico; The Data Conservancy, led by Sayeed Choudhury of Johns Hopkins University; SEAD: Sustainable Environment through Actionable Data, led by Margaret Hedstrom of the University of Michigan; the DataNet Federation Consortium, led by Reagan Moore of the University of North Carolina; and Terra Populus, led by Steven Ruggles of the University of Minnesota. The Research Data Alliance has more recently explored creating global data integration frameworks. The OpenPHACTS project, funded through the European Union Innovative Medicines Initiative, built a drug discovery platform by linking datasets from providers such as the European Bioinformatics Institute, Royal Society of Chemistry, UniProt, WikiPathways and DrugBank.
**Meyer's law**
Meyer's law:
Meyer's law is an empirical relation between the size of a hardness test indentation and the load required to leave the indentation. The formula was devised by Eugene Meyer of the Materials Testing Laboratory at the Imperial School of Technology, Charlottenburg, Germany, circa 1908.
Equation:
It takes the form:

P = k d^n

where:
- P is the pressure in megapascals
- k is the resistance of the material to initial penetration
- n is Meyer's index, a measure of the effect of the deformation on the hardness of the material
- d is the chordal diameter (diameter of the indentation)

The index n usually lies between the values of 2, for fully strain hardened materials, and 2.5, for fully annealed materials. It is roughly related to the strain hardening coefficient in the equation for the true stress-true strain curve by adding 2. Note, however, that below approximately d = 0.5 mm (0.020 in) the value of n can surpass 3. Because of this, Meyer's law is often restricted to values of d greater than 0.5 mm up to the diameter of the indenter. The variables k and n are also dependent on the size of the indenter. Despite this, it has been found that the values can be related using the equation:

P = k1 d1^n1 = k2 d2^n2 = k3 d3^n3 = ...
Equation:
Meyer's law is often used to relate hardness values based on the fact that if the weight is quartered, the diameter of the indenter is halved. For instance, the hardness values are the same for a test load of 3000 kgf with a 10 mm indenter and for a test load of 750 kgf with a 5 mm diameter indenter. This relationship isn't perfect, but its percent error is relatively small. A modified form of this equation was put forth by Onitsch:

1.854 k d^(n−2)
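Returning to the load-scaling example above, the claim can be checked directly (a schematic check, writing D for the ball diameter of the indenter; not a full derivation): the two test conditions keep the ratio P/D² fixed, which is the usual condition for geometrically similar indentations and therefore for equal hardness readings.

```latex
\frac{P_{1}}{D_{1}^{2}} = \frac{3000\ \text{kgf}}{(10\ \text{mm})^{2}} = 30\ \text{kgf/mm}^{2},
\qquad
\frac{P_{2}}{D_{2}^{2}} = \frac{750\ \text{kgf}}{(5\ \text{mm})^{2}} = 30\ \text{kgf/mm}^{2}.
% Quartering the load while halving the indenter diameter leaves P/D^2
% unchanged, so the indentations are geometrically similar and the measured
% hardness values agree to within the small error noted above.
```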
**Lavinite**
Lavinite:
Lavinite (Polish: Lawinit) is a mixture of metal particles (usually iron) and sand held together by solidified molten sulfur. Instead of metal particles, magnesite could be used to give a whiter product. The idea was to make a material that looks like marble. It was invented c. 1912 by Willy Henker, who in that year opened the factory "Kunststein-Industrie W. Henker & Co" in Berlin, which was in operation until at least 1936. Henker produced decorative items from lavinite such as vases, candlesticks, lamps, chandeliers and rosettes as well as letters and advertising signs. Lavinite products were usually black, less often white or colored, enameled or covered with "antique" bronze. Initially, the factory offered items in the Art Nouveau style. Later they introduced lines in antique, oriental and Art Deco styles. In 1922, Kunststein-Industrie W. Henker & Co opened a sales office in New York City and lavinite became very popular in the United States. Afterwards, Henker sold the patent for lavinite production to the U.S., France, Austria and Poland. In 1923 the factory "Lavinit. Krupka I Perlicz" opened in Włocławek, Poland, where it operated until 1939. They offered products from Willy Henker's factory catalogue. Over time, the assortment was expanded with items referring to the history of Poland, such as busts of Prince Józef Poniatowski or Adam Mickiewicz. For a short time, lavinite items were also produced by the Wulkanit factory in Grudziądz. Currently, decorative items made of lavinite are popular and valued at auctions all around the world. The biggest collection of them, comprising 63 items, is in the Muzeum Ziemi Kujawskiej i Dobrzyńskiej in Włocławek.
**Polysulfide**
Polysulfide:
Polysulfides are a class of chemical compounds derived from anionic chains of sulfur atoms. There are two main classes of polysulfides: inorganic and organic. The inorganic polysulfides have the general formula Sn2−. These anions are the conjugate bases of the polysulfanes H2Sn. Organic polysulfides generally have the formula R1SnR2, where R = alkyl or aryl.
Polysulfide salts and complexes:
The alkali metal polysulfides arise by treatment of a solution of sulfide, e.g. sodium sulfide, with elemental sulfur:

S2− + n S → [Sn+1]2−

In some cases, these anions have been obtained as organic salts, which are soluble in organic solvents. The energy released in the reaction of sodium and elemental sulfur is the basis of battery technology. The sodium–sulfur battery and the lithium–sulfur battery require high temperatures to maintain liquid polysulfide and Na+-conductive membranes that are unreactive toward sodium, sulfur, and sodium sulfide.
Polysulfide salts and complexes:
Polysulfides are ligands in coordination chemistry. Examples of transition metal polysulfido complexes include (C5H5)2TiS5, [Ni(S4)2]2−, and [Pt(S5)3]2−. Main group elements also form polysulfides.
Organic polysulfides:
In commerce, the term "polysulfide" usually refers to a class of polymers with alternating chains of several sulfur atoms and hydrocarbons. They have the formula R1SnR2, in which n indicates the number of sulfur atoms (or "rank"). Polysulfide polymers can be synthesized by condensation polymerization reactions between organic dihalides and alkali metal salts of polysulfide anions:

n Na2S5 + n ClCH2CH2Cl → [CH2CH2S5]n + 2n NaCl

Dihalides used in this condensation polymerization are dichloroalkanes, such as 1,2-dichloroethane, bis-(2-chloroethyl)formal (ClCH2CH2OCH2OCH2CH2Cl), and 1,3-dichloropropane. The polymers are called thiokols. In some cases, polysulfide polymers can be formed by ring-opening polymerization reactions.
Organic polysulfides:
Polysulfide polymers are also prepared by the addition of polysulfanes to alkenes. An idealized equation is:

2 RCH=CH2 + H2Sn → (RCH2CH2)2Sn

In reality, homogeneous samples of H2Sn are difficult to prepare. Polysulfide polymers are insoluble in water, oils, and many other organic solvents. Because of their solvent resistance, these materials find use as sealants to fill the joints in pavement, automotive window glass, and aircraft structures.
Organic polysulfides:
Polymers containing one or two sulfur atoms separated by hydrocarbon sequences are usually not classified as polysulfides, e.g. poly(p-phenylene sulfide) (C6H4S)n.
Organic polysulfides:
Polysulfides in vulcanized rubber Many commercial elastomers contain polysulfides as crosslinks. These crosslinks interconnect neighboring polymer chains, thereby conferring rigidity. The degree of rigidity is related to the number of crosslinks. Elastomers, therefore, have a characteristic ability to "snap back" to their original shape after being stretched or compressed. Because of this memory for their original cured shape, elastomers are commonly referred to as rubbers. The process of crosslinking the polymer chains in these polymers with sulfur is called vulcanization. The sulfur chains attach themselves to the "allylic" carbon atoms, which are adjacent to C=C linkages. Vulcanization is a step in the processing of several classes of rubbers, including polychloroprene (Neoprene), styrene-butadiene, and polyisoprene, which is chemically similar to natural rubber. Charles Goodyear's discovery of vulcanization, involving the heating of polyisoprene with sulfur, was revolutionary because it converted a sticky and almost useless material into an elastomer that could be fabricated into useful products.
Occurrence in gas giants:
In addition to water and ammonia, the clouds in the atmospheres of the gas giant planets contain ammonium sulfides. The reddish-brownish clouds are attributed to polysulfides, arising from the exposure of the ammonium sulfides to light.
Properties:
Polysulfides, like sulfides, can induce stress corrosion cracking in carbon steel and stainless steel.
**Plasma cleaning**
Plasma cleaning:
Plasma cleaning is the removal of impurities and contaminants from surfaces through the use of an energetic plasma or dielectric barrier discharge (DBD) plasma created from gaseous species. Gases such as argon and oxygen, as well as mixtures such as air and hydrogen/nitrogen are used. The plasma is created by using high frequency voltages (typically kHz to >MHz) to ionise the low pressure gas (typically around 1/1000 atmospheric pressure), although atmospheric pressure plasmas are now also common.
Methods:
In plasma, gas atoms are excited to higher energy states and also ionized. As the atoms and molecules 'relax' to their normal, lower energy states they release a photon of light, this results in the characteristic “glow” or light associated with plasma. Different gases give different colors. For example, oxygen plasma emits a light blue color.
A plasma’s activated species include atoms, molecules, ions, electrons, free radicals, metastables, and photons in the short wave ultraviolet (vacuum UV, or VUV for short) range. This mixture then interacts with any surface placed in the plasma.
Methods:
If the gas used is oxygen, the plasma is an effective, economical, environmentally safe method for critical cleaning. The VUV energy is very effective at breaking most organic bonds (i.e., C–H, C–C, C=C, C–O, and C–N) of surface contaminants. This helps to break apart high molecular weight contaminants. A second cleaning action is carried out by the oxygen species created in the plasma (O2+, O2−, O3, O, O+, O−, ionised ozone, metastable excited oxygen, and free electrons). These species react with organic contaminants to form H2O, CO, CO2, and lower molecular weight hydrocarbons. These compounds have relatively high vapor pressures and are evacuated from the chamber during processing. The resulting surface is ultra-clean. Fig. 2 shows the relative carbon content as a function of material depth before and after cleaning with excited oxygen [1].
Methods:
If the part consists of easily oxidized materials such as silver or copper, the treatment uses inert gases such as argon or helium instead. Plasma activated atoms and ions behave like a molecular sandblast and can break down organic contaminants. These contaminants vaporize during processing and are evacuated from the chamber.
Most of these by-products are small quantities of gases, such as carbon dioxide and water vapor with trace amounts of carbon monoxide and other hydrocarbons.
Methods:
Whether or not organic removal is complete can be assessed with contact angle measurements. When an organic contaminant is present, the contact angle of water with the device is high. Contaminant removal reduces the contact angle to that characteristic of contact with the pure substrate. In addition, XPS and AFM are often used to validate surface cleaning and sterilization applications. If a surface to be treated is coated with a patterned conductive layer (metal, ITO), treatment by direct contact with the plasma (which can contract into microarcs) could be destructive. In this case, cleaning by neutral atoms excited in the plasma to a metastable state can be applied. Results of such treatment applied to the surfaces of glass samples coated with Cr and ITO layers are shown in Fig. 3.
Methods:
After treatment, the contact angle of a water droplet is decreased, becoming less than its value on the untreated surface. In Fig. 4, the relaxation curve for the droplet footprint is shown for a glass sample. A photograph of the same droplet on the untreated surface is shown in the Fig. 4 inset. The surface relaxation time corresponding to the data shown in Fig. 4 is about 4 hours.
Methods:
Plasma ashing is a process that uses plasma cleaning solely to remove carbon. Plasma ashing is always done with O2 gas.
Applications:
Cleaning & Sterilization Plasma cleaning removes organic contamination through chemical reaction or physical ablation of hydrocarbons on treated surfaces. Chemically reactive process gases (air, oxygen) react with hydrocarbon monolayers to form gaseous products that are swept away by the continuous gas flow in the plasma cleaner chamber. Plasma cleaning can be used in place of wet chemical processes, such as piranha etching, which involve dangerous chemicals, increase the danger of reagent contamination and risk etching treated surfaces.
Applications:
- Removal of self-assembled monolayers of alkanethiolates from gold surfaces
- Residual proteins on biomedical devices
- Nanoelectrode cleaning

Life Sciences Cell viability, function, proliferation and differentiation are determined by adhesion to their microenvironment. Plasma is often used as a chemical-free means of adding biologically relevant functional groups (carbonyl, carboxyl, hydroxyl, amine, etc.) to material surfaces. As a result, plasma cleaning improves material biocompatibility or bioactivity and removes contaminating proteins and microbes. Plasma cleaners are a general tool in the life sciences, being used to activate surfaces for cell culture, tissue engineering, implants and more.
Applications:
- Tissue engineering substrates
- Polyethylene terephthalate (PET) cell adhesion
- Improved biocompatibility of implants: vascular grafts, stainless steel screws
- Long-term cell confinement studies
- Plasma lithography for patterning cell culture substrates
- Cell sorting by strength of adhesion
- Antibiotic removal by plasma-activated steel shavings
- Single-cell sequencing

Materials Science Surface wetting and modification is a fundamental tool in materials science for enhancing material characteristics without affecting bulk properties. Plasma cleaning is used to alter material surface chemistries through the introduction of polar functional groups. Increased surface hydrophilicity (wetting) following plasma treatment improves adhesion with aqueous coatings, adhesives, inks and epoxies:
- Enhanced thermopower of graphene films
- Work function enhancement in polymer semiconductor heterostructures
- Improved adhesion of ultra-high modulus polyethylene (Spectra) fibers and aramid fibers
- Plasma lithography for nanoscale surface structures and quantum dots
- Micropatterning of thin films

Microfluidics The unique characteristics of micro- or nanoscale fluid flow are harnessed by microfluidic devices for a wide variety of research applications. The most widely used material for microfluidic device prototyping is polydimethylsiloxane (PDMS), for its rapid development and adjustable material properties. Plasma cleaning is used to permanently bond PDMS microfluidic chips with glass slides or PDMS slabs to create water-tight microchannels.
Applications:
- Blood plasma separation
- Single-cell RNA sequencing
- Electroosmotic flow valves
- Wettability patterning in microfluidic devices
- Long-term retention of microfluidic device hydrophilicity
- Improved adhesion to poly(propylene)

Solar Cells & Photovoltaics Plasma has been used to enhance the performance of solar cells and energy conversion within photovoltaic devices:
- Reduction of molybdenum oxide (MoO3) enhances short-circuit current density
- Modification of TiO2 nanosheets to improve hydrogen generation
- Enhanced conductivity of PEDOT:PSS for better efficiency in ITO-free perovskite solar cells
**Tectin (secretion)**
Tectin (secretion):
Tectin is an organic substance secreted by certain ciliates. Tectin may form an adhesive stalk, disc or other sticky secretion. Tectin may also form a gelatinous envelope or membrane enclosing some ciliates as a protective capsule or lorica. Tectin is also called pseudochitin. Granules or rods (called protrichocysts) in the pellicle of some ciliates are also thought to be involved in tectin secretion.
**Suction cup**
Suction cup:
A suction cup, also known as a sucker, is a device or object that uses the negative fluid pressure of air or water to adhere to nonporous surfaces, creating a partial vacuum. Suction cups are peripheral traits of some animals such as octopuses and squids, and have been reproduced artificially for numerous purposes.
Theory:
The working face of the suction cup is made of elastic, flexible material and has a curved surface. When the center of the suction cup is pressed against a flat, non-porous surface, the volume of the space between the suction cup and the flat surface is reduced, which causes the air or water between the cup and the surface to be expelled past the rim of the circular cup. The cavity which develops between the cup and the flat surface has little to no air or water in it because most of the fluid has already been forced out of the inside of the cup, causing a lack of pressure. The pressure difference between the atmosphere on the outside of the cup and the low-pressure cavity on the inside of the cup keeps the cup adhered to the surface.
Theory:
When the user ceases to apply physical pressure to the outside of the cup, the elastic substance of which the cup is made tends to resume its original, curved shape. The length of time for which the suction effect can be maintained depends mainly on how long it takes for air or water to leak back into the cavity between the cup and the surface, equalizing the pressure with the surrounding atmosphere. This depends on the porosity and flatness of the surface and the properties of the cup's rim. A small amount of mineral oil or vegetable oil is often employed to help maintain the seal.
Calculations:
The force required to detach an ideal suction cup by pulling it directly away from the surface is given by the formula:

F = A P

where:
- F is the force,
- A is the area of the surface covered by the cup,
- P is the pressure outside the cup (typically atmospheric pressure).

This is derived from the definition of pressure, P = F / A. For example, a suction cup of radius 2.0 cm has an area of π (0.020 m)^2 = 0.0013 square meters. Using the force formula (F = AP), the result is F = (0.0013 m^2)(100,000 Pa) = about 130 newtons.
Calculations:
The above formula relies on several assumptions: The outer diameter of the cup does not change when the cup is pulled.
No air leaks into the gap between the cup and the surface.
The pulling force is applied perpendicular to the surface so that the cup does not slide sideways or peel off.
The suction cup contains a perfect vacuum; in reality, a small partial pressure will remain on the interior, and P is the differential pressure.
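A small sketch of this force calculation is shown below, treating the cup as holding a partial rather than perfect vacuum; the 80% figure is an arbitrary illustrative value, not a property of any particular cup.

```python
import math

def detachment_force(radius_m, ambient_pa=101_325, vacuum_fraction=1.0):
    """Force (N) needed to pull an ideal suction cup straight off a surface.

    vacuum_fraction = 1.0 corresponds to a perfect vacuum inside the cup;
    real cups hold only a partial vacuum, so the differential pressure is
    some fraction of ambient pressure.
    """
    area = math.pi * radius_m ** 2          # contact area covered by the cup
    delta_p = ambient_pa * vacuum_fraction  # differential pressure across the cup
    return area * delta_p

# A 2.0 cm radius cup: ~127 N for a perfect vacuum, ~102 N at 80% of ambient pressure.
print(round(detachment_force(0.020), 1))                       # ≈ 127.3
print(round(detachment_force(0.020, vacuum_fraction=0.8), 1))  # ≈ 101.9
```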
Artificial use:
Artificial suction cups are believed to have first been used in the third century B.C., and were made out of gourds. They were used to suction "bad blood" from internal organs to the surface. Hippocrates is believed to have invented this procedure. The first modern suction cup patents were issued by the United States Patent and Trademark Office during the 1860s. TC Roche was awarded U.S. Patent No. 52,748 in 1866 for a "Photographic Developer Dipping Stick"; the patent discloses a primitive suction cup means for handling photographic plates during developing procedures. In 1868, Orwell Needham patented a more refined suction cup design, U.S. Patent No. 82,629, calling his invention an "Atmospheric Knob" purposed for general use as a handle and drawer opening means. Suction cups have a number of commercial and industrial applications: To attach an object to a flat, nonporous surface, such as a refrigerator door or a tile on a wall. This is also used for mooring ships.
Artificial use:
To move an object, such as a pane of glass or a raised floor tile, by attaching the suction cup to a flat, nonporous part of the object and then sliding or lifting the object.
In some toys, such as Nerf darts.
As toilet plungers.
Artificial use:
To climb almost or completely vertically up or down a flat, nonporous surface, such as the sides of some buildings. This is part of buildering, which is also known as urban climbing. To hold an object still while it is worked on, such as holding a piece of glass while performing edge grinding. On May 25, 1981, Dan Goodwin, a.k.a. SpiderDan, scaled Sears Tower, the former world's tallest building, with a pair of suction cups. He went on to scale the Renaissance Center in Dallas, the Bonaventure Hotel in Los Angeles, the World Trade Center in New York City, Parque Central Tower in Caracas, the Nippon TV station in Tokyo, and the Millennium Tower in San Francisco.
**Defatting (medical)**
Defatting (medical):
Defatting is the chemical dissolution of dermal lipids from the skin on contact with defatting agents. This can result in water loss from the affected area and cause whitening and drying of the skin, which may lead to cracking, secondary infection and chemical irritant contact dermatitis.
Cause:
Defatting is caused by the exposure of human skin to a chemical substance, including alcohols, detergents, chemical solvents and motor oil. Aliphatic compounds (commonly found in kerosene) cause defatting action, with lower-boiling point aliphatics having the greatest defatting action and therefore the most potential to cause dermatitis. Aromatic compounds, such as styrene, also have a defatting capacity.
Prevention:
Defatting can be prevented by wearing appropriate protective clothing such as gloves, lab coats and aprons when working regularly with defatting agents. Prolonged skin contact or chronic defatting of the skin increases the possibility for developing irritant contact dermatitis and has the potential to worsen pre-existing skin conditions. Patients with chronic dermatitis are advised to use non-irritating soaps and dishwashing liquids sparingly and to choose those with a neutral pH and minimal defatting capability. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Forensic facial reconstruction**
Forensic facial reconstruction:
Forensic facial reconstruction (or forensic facial approximation) is the process of recreating the face of an individual (whose identity is often not known) from their skeletal remains through an amalgamation of artistry, anthropology, osteology, and anatomy. It is easily the most subjective—as well as one of the most controversial—of the techniques in the field of forensic anthropology. Despite this controversy, facial reconstruction has proved successful frequently enough that research and methodological developments continue to be advanced.
Forensic facial reconstruction:
In addition to identification of unidentified decedents, facial reconstructions are created for remains believed to be of historical value and for remains of prehistoric hominids and humans.
Types of identification:
There are two forms pertaining to identification in forensic anthropology: circumstantial and positive.
Circumstantial identification is established when an individual fits the biological profile of a set of skeletal or largely skeletal remains. This type of identification does not prove or verify identity because any number of individuals may fit the same biological description.
Types of identification:
Positive identification, one of the foremost goals of forensic science, is established when a unique set of biological characteristics of an individual is matched with a set of skeletal remains. This type of identification requires the skeletal remains to correspond with medical or dental records, unique ante mortem wounds or pathologies, DNA analysis, or other means. Facial reconstruction presents investigators and family members involved in criminal cases concerning unidentified remains with a unique alternative when all other identification techniques have failed. Facial approximations often provide the stimuli that eventually lead to the positive identification of remains.
Legal admissibility:
In the U.S., the Daubert Standard is a legal precedent set in 1993 by the Supreme Court regarding the admissibility of expert witness testimony during legal proceedings, set in place to ensure that expert testimony is based on sufficient facts or data, derived from proper application of reliable principles and methods. When multiple forensic artists produce approximations for the same set of skeletal remains, no two reconstructions are ever the same and the data from which approximations are created are largely incomplete. Because of this, forensic facial reconstruction does not uphold the Daubert Standard, is not considered a legally recognized technique for positive identification, and is not admissible as expert testimony. Currently, reconstructions are only produced to aid the process of positive identification in conjunction with verified methods.
Types of reconstructions:
Two-dimensional reconstructions Two-dimensional facial reconstructions are based on ante mortem photographs, and the skull. Occasionally skull radiographs are used but this is not ideal since many cranial structures are not visible or at the correct scale. This method usually requires the collaboration of an artist and a forensic anthropologist. A commonly used method of 2D facial reconstruction was pioneered by Karen T. Taylor of Austin, Texas during the 1980s. Taylor's method involves adhering tissue depth markers on an unidentified skull at various anthropological landmarks, then photographing the skull. Life-size or one-to-one frontal and lateral photographic prints are then used as a foundation for facial drawings done on transparent vellum. Recently developed, the F.A.C.E. and C.A.R.E.S. computer software programs quickly produce two-dimensional facial approximations that can be edited and manipulated with relative ease. These programs may help speed the reconstruction process and allow subtle variations to be applied to the drawing, though they may produce more generic images than hand-drawn artwork.
Types of reconstructions:
Three-dimensional reconstructions Three-dimensional facial reconstructions are either: 1) sculptures (made from casts of cranial remains) created with modeling clay and other materials or 2) high-resolution, three-dimensional computer images. Like two-dimensional reconstructions, three-dimensional reconstructions usually require both an artist and a forensic anthropologist. Computer programs create three-dimensional reconstructions by manipulating scanned photographs of the unidentified cranial remains, stock photographs of facial features, and other available reconstructions. These computer approximations are usually most effective in victim identification because they do not appear too artificial. This method has been adopted by the National Center for Missing & Exploited Children, which uses this method often to show approximations of an unidentified decedent to release to the public in hopes to identify the subject.
Types of reconstructions:
Superimposition Superimposition is a technique that is sometimes included among the methods of forensic facial reconstruction. It is not always included as a technique because investigators must already have some kind of knowledge about the identity of the skeletal remains with which they are dealing (as opposed to 2D and 3D reconstructions, when the identity of the skeletal remains are generally completely unknown). Forensic superimpositions are created by superimposing a photograph of an individual suspected of belonging to the unidentified skeletal remains over an X-ray of the unidentified skull. If the skull and the photograph are of the same individual, then the anatomical features of the face should align accurately.
Types of reconstructions:
Methods of Reconstruction Different versions of craniofacial reconstruction have been used in multiple disciplines since the technique was developed. Today it is used widely across the globe and has proven to aid forensic investigations by identifying the victims of various crimes. Forensic experts use their in-depth knowledge of facial musculature and tissue attachments on the skull to recreate the face of the victim. To do so, they consider the appearance of the skull, any attached soft tissue, and corresponding scans (X-ray, CT, ultrasound). Craniofacial reconstruction was originally performed manually, using clay, in both 2D and 3D forms. Today, however, technology can assist in the reconstruction through three related but distinct techniques: the Russian Method, the American Method and the Manchester Method. The Russian Method reconstructs the musculature of the skull: a clay-like substance is used to recreate the victim's facial muscles, with the focus on the insertion of the muscles onto the skull. The American Method instead focuses on the overlying tissue of the skull; it requires facial tissue depth data recorded from previous remains or from live patients using tissue-puncture markers and/or ultrasound, and it can reflect differences between reconstructions based on factors such as race, sex and age. The Manchester Method is a combination of the Russian and American Methods: it uses the musculature of the skull as well as tissue depth markers and landmarks, and it is the technique most commonly used today.
History:
Hermann Welcker in 1883 and Wilhelm His, Sr. in 1895, were the first to reproduce three-dimensional facial approximations from cranial remains. Most sources, however, acknowledge His as the forerunner in advancing the technique. His also produced the first data on average facial tissue thickness, followed by Kollmann and Buchly, who later collected additional data and compiled tables that are still referenced in most laboratories working on facial reproductions today. Facial reconstruction originated in two of the four major subfields of anthropology. In biological anthropology, reconstructions were used to approximate the appearance of early hominid forms, while in archaeology they were used to validate the remains of historic figures. In 1964, Mikhail Gerasimov was probably the first to attempt paleo-anthropological facial reconstruction to estimate the appearance of ancient peoples. Although students of Gerasimov later used his techniques to aid in criminal investigations, it was Wilton M. Krogman who popularized facial reconstruction's application to the forensic field. Krogman presented his method for facial reconstruction in his 1962 book, detailing his method for approximation. Others who helped popularize three-dimensional facial reconstruction include Cherry (1977), Angel (1977), Gatliff (1984), Snow (1979), and Iscan (1986). In 2004, working for Dr. Andrew Nelson of the Department of Anthropology at the University of Western Ontario, noted Canadian artist Christian Corbet created the first forensic facial reconstruction of an approximately 2,200-year-old mummy based on CT and laser scans. This reconstruction is known as the Sulman Mummy project.
Technique for creating a three-dimensional clay reconstruction:
Because a standard method for creating three-dimensional forensic facial reconstructions has not been widely agreed upon, multiple methods and techniques are used. The process detailed below reflects the method presented by Taylor and Angel from their chapter in Craniofacial Identification in Forensic Medicine, pgs 177–185. This method assumes that the sex, age, and race of the remains to undergo facial reconstruction have already been determined through traditional forensic anthropological techniques.
Technique for creating a three-dimensional clay reconstruction:
The skull is the basis of facial reconstruction; however, other physical remains that are sometimes available often prove to be valuable. Occasionally, remnants of soft tissue are found on a set of remains. Through close inspection, the forensic artist can easily approximate the thickness of the soft tissue over the remaining areas of the skull based on the presence of these tissues. This eliminates one of the most difficult aspects of reconstruction, the estimation of tissue thickness. Additionally, any other bodily or physical evidence found in association with remains (e.g. jewelry, hair, glasses, etc.) are vital to the final stages of reconstruction because they directly reflect the appearance of the individual in question.
Technique for creating a three-dimensional clay reconstruction:
Most commonly, however, only the bony skull and minimal or no other soft tissues are present on the remains presented to forensic artists. In this case, a thorough examination of the skull is completed. This examination focuses on, but is not limited to, the identification of any bony pathologies or unusual landmarks, ruggedness of muscle attachments, profile of the mandible, symmetry of the nasal bones, dentition, and wear of the occlusal surfaces. All of these features have an effect on the appearance of an individual's face.
Technique for creating a three-dimensional clay reconstruction:
Once the examination is complete, the skull is cleaned and any damaged or fragmented areas are repaired with wax. The mandible is then reattached, again with wax, according to the alignment of teeth, or, if no teeth are present, by averaging the vertical dimensions between the mandible and maxilla. Undercuts (like the nasal openings) are filled in with modeling clay and prosthetic eyes are inserted into the orbits centered between the superior and inferior orbital rims. At this point, a plaster cast of the skull is prepared. Extensive detail of the preparation of such a cast is presented in the article from which these methods are presented.
Technique for creating a three-dimensional clay reconstruction:
After the cast is set, colored plastics or the colored ends of safety matches are attached at twenty-one specific "landmark" areas that correspond to the reference data. These sites represent the average facial tissue thickness for persons of the same sex, race, and age as that of the remains. From this point on, all features are added using modeling clay.
Technique for creating a three-dimensional clay reconstruction:
First, the facial muscles are layered onto the cast in the following order: temporalis, masseter, buccinator and occipito-frontals, and finally the soft tissues of the neck. Next, the nose and lips are reconstructed before any of the other muscles are formed. The lips are approximately as wide as the interpupillary distance. However, this distance varies significantly with age, sex, race, and occlusion. The nose is one of the most difficult facial features to reconstruct because the underlying bone is limited and the possibility of variation is expansive. The nasal profile is constructed by first measuring the width of the nasal aperture and the nasal spine. Three times the length of the spine plus the depth of tissue marker number five yields the approximate nose length. Next, the pitch of the nose is determined by examining the direction of the nasal spine – down, flat, or up. A block of clay that is the proper length is then placed on the nasal spine and the remaining nasal tissue is filled in using tissue markers two and three as a guide for the bridge of the nose. The alae are created by first marking a point five millimeters below the bottom of the nasal aperture. After the main part of the nose is constructed, the alae are created as small egg-shaped balls of clay, about five millimeters in diameter at the widest point, which are positioned on the sides of the nose at the mark made previously. The alae are then blended to the nose and the overall structure of the nose is rounded out and shaped appropriately.
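For readers who want the arithmetic spelled out, the short Python sketch below applies the nose-profile calculation described above; all measurement values are hypothetical and chosen only for illustration.

```python
# Illustrative sketch of the nose-profile arithmetic described above.
# All measurements are hypothetical values, in millimeters.
nasal_spine_length_mm = 8.0     # measured length of the nasal spine (assumed)
tissue_marker_5_depth_mm = 6.5  # tissue depth at marker number five (assumed)

# Approximate nose length = 3 x spine length + depth of tissue marker five.
nose_length_mm = 3 * nasal_spine_length_mm + tissue_marker_5_depth_mm

alae_mark_below_aperture_mm = 5.0   # point marked 5 mm below the nasal aperture
alae_ball_diameter_mm = 5.0         # egg-shaped clay balls, ~5 mm at the widest point

print(f"Approximate nose length: {nose_length_mm:.1f} mm")
```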
Technique for creating a three-dimensional clay reconstruction:
The muscles of facial expression and the soft tissue around the eyes are added next. Additional measurements are made according to race (especially for those with eye folds characteristic of Asian descent) during this stage. Next, tissues are built up to within one millimeter of the tissue thickness markers and the ears (noted as being extremely complicated to reproduce) are added. Finally, the face is "fleshed," meaning clay is added until the tissue thickness markers are covered, and any specific characterization is added (for example, hair, wrinkles in the skin, noted racial traits, glasses, etc.). The skull of Mozart was the basis of his facial reconstruction from anthropological data. The bust was unveiled at the "Salon du Son", Paris, in 1991.
Problems with facial reconstruction:
There are multiple outstanding problems associated with forensic facial reconstruction.
Problems with facial reconstruction:
Insufficient tissue thickness data The most pressing issue relates to the data used to average facial tissue thickness. The data available to forensic artists are still very limited in ranges of ages, sexes, and body builds. This disparity greatly affects the accuracy of reconstructions. Until this data is expanded, the likelihood of producing the most accurate reconstruction possible is largely limited.
Problems with facial reconstruction:
Lack of methodological standardization A second problem is the lack of methodological standardization in approximating facial features. A single, official method for reconstructing the face has yet to be recognized. This presents a major setback in facial approximation because facial features like the eyes and nose and individuating characteristics like hairstyle – the features most likely to be recalled by witnesses – lack a standard way of being reconstructed. Recent research on computer-assisted methods, which take advantage of digital image processing and pattern recognition, promises to overcome current limitations in facial reconstruction and linkage.
Problems with facial reconstruction:
Subjectivity Reconstructions only reveal the type of face a person may have exhibited because of artistic subjectivity. Soft tissue reconstruction is an approximation based on osteological measurements; therefore, distinguishing characteristics used in identification could be missed. The position and general shape of the main facial features are mostly accurate because they are greatly determined by the skull.
Neolithic dog's head forensic reconstruction:
An image of the forensic model of a Neolithic dog skull found at Cuween Hill Chambered Cairn, Orkney, Scotland was published by Sci-News.com on April 22, 2019.
Forensic artist Amy Thornton made a model of the dog's head using a 3D print, based on a CT scan made at the Royal (Dick) School of Veterinary Studies of one of the 24 canine skulls found at the site.
Neolithic dog's head forensic reconstruction:
According to Dr. Alison Sheridan, Principal Archaeological Research Curator in the Department of Scottish History and Archaeology at National Museums Scotland, "The size of a large collie, and with features reminiscent of that of a European grey wolf, the Cuween dog has much to tell us ... While reconstructions have previously been made of people from the Neolithic era, we do not know of any previous attempt to forensically reconstruct an animal from this time."
In popular culture:
In recent years, the presence of forensic facial reconstructions in the entertainment industry and the media has increased. The way fictional criminal investigators and forensic anthropologists utilize forensics and facial reconstructions is, however, often misrepresented (an influence known as the "CSI effect"). For example, fictional forensic investigators will often call for the creation of a facial reconstruction as soon as a set of skeletal remains is discovered. In practice, facial reconstructions are typically used as a last resort to stimulate the possibility of identifying an individual. Facial reconstruction has been featured as part of an array of forensic science methods in fictional TV shows like CSI: Crime Scene Investigation and NCIS and their respective spinoffs in the CSI and NCIS franchises.
In popular culture:
In Bones, a long-running TV series centered around forensic analysis of decomposed and skeletal human remains, facial reconstruction is featured in the majority of episodes, used much like a police artist sketch in police procedurals. Regular cast character Angela Montenegro, the Bones team's facial reconstruction specialist, employs 3D software and holographic projection to "give victims back their faces" (as noted in the episode, "A Boy in a Bush").
In popular culture:
In the MacGyver episode "The Secret of Parker House", MacGyver reconstructs the skull of Penny's aunt Betty while investigating her house.
The facial reconstruction of Egypt's Tutankhamun, popularly known as King Tut, made the June 2005 cover of National Geographic.
In popular culture:
A variety of facial reconstruction kit toys are available, including "crime scene" versions as well as reconstructions of famous historical figures such as Tutankhamen and the dinosaur Tyrannosaurus rex. Recently, facial reconstruction has been part of the process used by researchers attempting to identify the human remains of two Canadian Army soldiers lost in World War I. One soldier, sculpted by Christian Corbet, was identified through DNA analysis in 2007, but due to DNA deterioration, identifying the second using the same techniques failed. In 2011, the remains of the second soldier, discovered at Avion, France, were identified through a combination of 3-D printing software, reconstructive sculpture and isotopic analysis of bone. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**AR-A000002**
AR-A000002:
AR-A000002 is a drug which is one of the first compounds developed to act as a selective antagonist for the serotonin receptor 5-HT1B, with approximately 10x selectivity for 5-HT1B over the closely related 5-HT1D receptor. It has been shown to produce sustained increases in levels of serotonin in the brain, and has anxiolytic effects in animal studies. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**1-Hydroxy-7-azabenzotriazole**
1-Hydroxy-7-azabenzotriazole:
1-Hydroxy-7-azabenzotriazole (HOAt) is a triazole used as a peptide coupling reagent. It suppresses racemization that can otherwise occur during the reaction. HOAt has a melting point between 213 and 216 degrees Celsius. As a liquid, it is transparent and colorless. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Human tooth sharpening**
Human tooth sharpening:
Human tooth sharpening is the practice of manually sharpening the teeth, usually the front incisors. Filed teeth are customary in various cultures. Many Remojadas figurines found in parts of Mexico have filed teeth, and the practice is believed to have been common in their culture. The Zappo Zap people of the Democratic Republic of Congo are believed to have filed their teeth.
Human tooth sharpening:
Historically it was done for spiritual purposes, with some exceptions, but in modern times it is usually aesthetic in nature as a form of body modification.
History:
Many cultures have practised this form of body modification. In Bali, in a ritual known as Potong gigi or cut teeth, teenagers have their canine teeth filed down because it is thought they represent negative emotions such as anger and jealousy. It is also seen as a way to spiritually separate them from their animalistic instincts and ancestors. After this tradition is completed the teens are considered adults and are allowed to have sex and marry. During this ritual the person receiving the procedure is dressed in very nice traditional clothing and would traditionally be carried from place to place by their parents as they are not allowed to touch the ground. This is done to avoid encountering evil forces. In a more modernized version of the ritual the teen would wear socks to walk from place to place in order to stay off the ground. Around the year 1910, the African Herero people participated in forms of tooth sharpening. Both the boys and girls at puberty would have four of their lower teeth knocked out. This was followed by the top teeth being sharpened to points that resembled a “V”. The tribe regarded this tradition as a form of beauty. It was said that a girl that had not undergone this procedure would not be able to attract a lover. In Ancient China, a group called Ta-ya Kih-lau (打牙仡佬, literally "仡佬 (Gelao people) who beat out their teeth") had every woman about to wed knock out two of her anterior teeth to "prevent damage to the husband's family." Some cultures have distinctions between which sex does what to their teeth. In the central Congo region, the Upoto tribe has men file only teeth in the maxillary arch, whereas women file both maxillary and mandibular arches. The Mentawai people have also traditionally engaged in this practice. The Mentawai people believed that the soul and body were separate. If the soul was not pleased by its body it would leave and the person would die. As a result, the Mentawai people started modifying their bodies to be more beautiful. In Mentawai culture, those with teeth that have been sharpened are deemed more beautiful. Tooth sharpening would traditionally have been done at puberty, though contact with outside civilizations has resulted in a decline of tooth sharpening. Today, the Mentawai people use a sharpened chisel and another object that acts as a hammer. They use no anesthetics or pain killers, and bite down on a piece of wood. Green bananas are bitten on to reduce pain after the procedure. David Livingstone mentioned a number of African tribes who practiced teeth-filing, including the Bemba, Yao, Makonde, Matambwe, Mboghwa and Chipeta. Toetik Koesbardiati mentions Indonesian tribes that practiced human tooth sharpening in the prehistoric and Islamic populations of Indonesia. In the prehistoric populations of Java, Bali, Sumba, and Flores, dental modifications primarily occurred in canines and incisors, but not all of the modifications were for survival. The extraction method practiced by the Flores was for beauty purposes. Human tooth sharpening also continued to occur during the 17th century, but this was mostly practiced by those in nobility or those with social prominence. Skeletal remains in the area show that dental filing occurred.
Examples in the modern world:
Horace Ridler, "the Zebra man", included tooth sharpening as one of many body modifications he underwent in order to serve as a circus performer.
In the Indonesian population of Bali, there is a sacred religious practice in which the maxillary front teeth are filed for the purpose of refraining from evil lust. Note that the teeth are flattened, not sharpened.
In the Indonesian population of Timor, residents file the occlusion surface for beauty purposes as it makes the residents feel more comfortable around others.
Among the Mentawai people in Indonesia, the wife of the soon-to-be chief decides to have her teeth sharpened as a sign of great beauty. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Placental infarction**
Placental infarction:
A placental infarction results from the interruption of blood supply to a part of the placenta, causing its cells to die.
Small placental infarcts, especially at the edge of the placental disc, are considered to be normal at term. Large placental infarcts are associated with vascular abnormalities, e.g. hypertrophic decidual vasculopathy, as seen in hypertension. Very large infarcts lead to placental insufficiency and may result in fetal death.
Relation to maternal floor infarct:
Maternal floor infarcts are not considered to be true placental infarcts, as they result from deposition of fibrin around the chorionic villi, i.e. perivillous fibrin deposition. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**What Is This Thing Called Science?**
What Is This Thing Called Science?:
What Is This Thing Called Science? (1976) is a best-selling textbook by Alan Chalmers.
Overview:
The book is a guide to the philosophy of science which outlines the shortcomings of naive empiricist accounts of science, and describes and assesses modern attempts to replace them. The book is written with minimal use of technical terms. What Is This Thing Called Science? was first published in 1976, and has been translated into many languages.
Editions:
What Is This Thing Called Science?, Queensland University Press and Open University Press, 1976, pp. 157 + xvii. (Translated into German, Dutch, Italian, Spanish and Chinese.) What Is This Thing Called Science?, Queensland University Press, Open University Press and Hackett, 2nd revised edition (6 new chapters), 1982, pp. 179 + xix. (Translated into German, Persian, French, Italian, Spanish, Dutch, Chinese, Japanese, Indonesian, Portuguese, Polish, Danish, Greek and Estonian.) What Is This Thing Called Science?, University of Queensland Press, Open University Press, 3rd revised edition, Hackett, 1999. (Translated into Korean.) What Is This Thing Called Science?, University of Queensland Press, Open University Press, 4th edition, 2013.
**Right border of heart**
Right border of heart:
The right border of the heart (right margin of heart) is a long border on the surface of the heart, and is formed by the right atrium.
The atrial portion is rounded and almost vertical; it is situated behind the third, fourth, and fifth right costal cartilages about 1.25 cm. from the margin of the sternum.
The ventricular portion, thin and sharp, is named the acute margin; it is nearly horizontal, and extends from the sternal end of the sixth right costal cartilage to the apex of the heart. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Simulation (journal)**
Simulation (journal):
Simulation is a monthly peer-reviewed scientific journal that covers the field of computer science. The editor-in-chief is Gabriel Wainer (Carleton University). The journal was established in 1963 and is published by SAGE Publications in association with the Society for Modeling and Simulation International.
Abstracting and indexing:
The journal is abstracted and indexed in Scopus and the Science Citation Index Expanded. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Aces and eights (blackjack)**
Aces and eights (blackjack):
Splitting aces and eights is part of blackjack basic strategy. Rules vary across gambling establishments regarding resplitting, doubling, multiple card draws, and the payout for blackjack, and there are conditional strategic responses that depend upon the number of decks used, the frequency of shuffling and dealer's cards. However, regardless of the various situations, the common strategic wisdom in the blackjack community is to "Always split aces and eights" when dealt either pair as initial cards. This is generally the first rule of any splitting strategy.
Splitting:
The objective of blackjack is for a player to defeat the dealer by obtaining a sum as close to 21 as possible without accumulating a total that exceeds this number. In blackjack, the standard rule is that if the player is dealt a pair of identically ranked initial cards, known as a pair, the player is allowed to split them into separate hands and ask for a new second card for each while placing a full initial bet identical to the original wager with each. After placing the wager for the split hands the dealer gives the player an additional card for each split card. The two hands created by splitting are considered independently in competition against the dealer. Splitting allows the gambler to turn a bad hand into one or two hands with a good possibility of winning. It also allows the player to double the bet when the dealer busts. Some rules even allow for resplitting until the player has as many as four hands or allow doubling the bet after a split so that each hand has a bet double the original. The standard rules are that when a bet is doubled on a hand, the player is only allowed to draw one more card for that hand.
Splitting:
Aces A pair of aces gives the blackjack player a starting hand value of either a 2 or a soft 12 which is a problematic starting hand in either case. Splitting aces gives a player two chances to hit 21. Splitting aces is so favorable to the player that most gambling establishments have rules limiting the player's rights to do so. In most casinos the player is only allowed to draw one card on each split ace. As a general rule, a ten on a split ace (or vice versa) is not considered a natural blackjack and does not get any bonus. Prohibiting resplitting and redoubling is also common. Regardless of the payout for blackjack, the rules for resplitting, the rules for doubling, the rules for multiple card draws and the dealer's cards, one should always split aces.
Splitting:
Eights If a player is dealt a pair of eights, the total of 16 is considered a troublesome hand. In fact, the value 16 is said to be the worst hand one can have in blackjack. Since sixteen of the other fifty cards have a value of 10 and four have a value of 11, there is a strong chance of getting at least an 18 with either or both split cards. A hand totaling 18 or 19 is much stronger than having a 16. Splitting eights limits one's losses and improves one's hand. Probabilistic research of expected value scenarios shows that by splitting eights one can convert a hand that presents an expected loss to two hands that may present an expected profit or a reduced loss, depending on what the dealer is showing. A split pair of eights is expected to win against dealer upcards of 2 through 7 and to lose less against dealer upcards of 8 through ace. If a player hits on a pair of eights, he is expected to lose $52 for a $100 bet. If the player splits the eights, he is expected to lose only $43 for a $100 bet.
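Below is a minimal single-deck Python sketch of the arithmetic behind that "strong chance of at least an 18" claim; the single-deck card counts and the restriction to one draw per split hand are simplifying assumptions for illustration only.

```python
from fractions import Fraction

# After removing the player's pair of eights from a single 52-card deck, 50 cards remain:
# four each of 2-7 and 9, two remaining eights, sixteen ten-valued cards, and four aces.
remaining = {2: 4, 3: 4, 4: 4, 5: 4, 6: 4, 7: 4, 8: 2, 9: 4, 10: 16, 11: 4}
total_cards = sum(remaining.values())  # 50

# Probability distribution of hand totals after drawing one card onto a single 8.
totals = {8 + value: Fraction(count, total_cards) for value, count in remaining.items()}

p_at_least_18 = totals[18] + totals[19]   # ten-valued card -> 18, ace -> (soft) 19
print(f"P(18 or 19 on the first draw to an 8) = {p_at_least_18} ≈ {float(p_at_least_18):.2f}")
```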
History:
Blackjack's "Four Horsemen" (Roger Baldwin, Wilbert Cantey, Herbert Maisel and James McDermott), using adding machines, determined that splitting eights was less costly than playing the pair of eights as a 16. They were part of a 1950s group that discovered that strategy could reduce the house edge to almost zero in blackjack. Now a typical strategy involves the following sequence of playing decisions: one decides whether to surrender, whether to split, whether to double down, and whether to hit or stand.One of the earliest proponents of the strategy of splitting eights is Ed Thorp, who developed the strategy on an IBM 704 as part of an overall blackjack strategic theory published in Beat the Dealer: A Winning Strategy for the Game of Twenty-One in 1962. Thorp was the originator of the card counting system for blackjack. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Juicer**
Juicer:
A juicer, also known as a juice extractor, is a tool used to extract juice from fruits, herbs, leafy greens and other types of vegetables in a process called juicing. It crushes, grinds, and/or squeezes the juice out of the pulp. Some types of juicers can also function as a food processor. Most of the twin gear and horizontal masticating juicers have attachments for crushing herbs and spices, extruding pasta, noodles or bread sticks, making baby food and nut butter, grinding coffee, making nut milk, etc.
Types:
Reamers Squeezers are used for squeezing juice from citrus such as grapefruits, lemons, limes, and oranges. Juice is extracted by pressing or grinding a halved citrus along a juicer's ridged conical center and discarding the rind. Some reamers are stationary and require a user to press and turn the fruit, while others are electrical, automatically turning the ridged center when fruit is pressed upon.
Types:
Centrifugal juicers A centrifugal juicer cuts up the fruit or vegetable with a flat cutting blade. It then spins the produce at a high speed to separate the juice from the pulp.
Masticating juicers A masticating juicer known as cold press juicer or slow juicer uses a single auger to compact and crush produce into smaller sections before squeezing out its juice along a static screen while the pulp is expelled through a separate outlet.
Triturating juicers Triturating juicers (twin gear juicers) have twin augers to crush and press produce.
Types:
Juicing press A juicing press, such as a fruit press or wine press, is a larger scale press that is used in agricultural production. These presses can be stationary or mobile. A mobile press has the advantage that it can be moved from one orchard to another. The process is primarily used for apples and involves a stack of apple mash, wrapped in fine mesh cloth, which is then pressed under approx 40 tonnes. These machines are popular in Europe and have now been introduced to North America.
Types:
Steam juice extractor A stovetop steam juice extractor is typically a pot to generate steam that is used to heat a batch of berries (or other fruit) in a perforated pot stacked on top of a juice collecting container that is above the steam pot. The juice is extracted without mechanical means so it is remarkably clear and because of the steam heating it is also pasteurized for long term storage. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Service-orientation design principles**
Service-orientation design principles:
Service-orientation design principles are proposed principles for developing the solution logic of services within service-oriented architectures (SOA).
Overview:
The success of software development based on any particular design paradigm is never assured. Software developed under the service-oriented design paradigm carries even greater risks. This is because a service-oriented architecture usually spans multiple business areas and requires considerable initial analysis. Therefore, SOA developed without concrete guidelines is very likely to fail. To ensure that the move towards service-orientation is a positive change that delivers on its promised benefits, it is helpful to adopt a set of rules. The service-orientation design principles may be broadly categorized as follows, following Thomas Erl's SOA Principles of Service Design: standardized service contract, service loose coupling, service abstraction, service reusability, service autonomy, service statelessness, service discoverability, and service composability. It is the application of these design principles that creates technology-independent services and hence provides interoperability in the long term. These design principles serve as a guideline for identifying services.
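To make the first of these principles concrete, here is a small, hypothetical Python sketch of a standardized service contract; the service name, message fields, and operation are illustrative assumptions rather than anything prescribed by Erl's text.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hypothetical example of a standardized service contract.
# Consumers depend only on this abstraction (service abstraction, loose coupling),
# never on a concrete implementation.

@dataclass(frozen=True)
class InvoiceRequest:
    """Standardized request message: everything the operation needs is passed in."""
    customer_id: str
    order_id: str

@dataclass(frozen=True)
class InvoiceResult:
    """Standardized response message."""
    invoice_id: str
    total: float

class InvoiceService(ABC):
    """The published contract; concrete services implement it without changing it."""

    @abstractmethod
    def generate_invoice(self, request: InvoiceRequest) -> InvoiceResult:
        """Stateless operation: no session state is held between calls,
        which keeps the service reusable and composable."""
        raise NotImplementedError
```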
Strategic goals:
The application of these principles helps in attaining the underlying goals linked with the adoption of service-orientation in the first place. These goals are strategic in nature i.e.
Strategic goals:
long term and look beyond the immediate needs of an organization. These strategic objectives could be summarized into the following seven goals & benefits: increased intrinsic interoperability, increased federation, increased vendor diversification options, increased business and technology alignment, increased ROI, increased organizational agility, and a reduced IT burden. Each of the above goals and benefits directly helps towards developing an agile organization that can quickly respond to the ever-changing market conditions with reduced efforts and time.
Characteristics:
The service-orientation design principles help in distinguishing a service-oriented solution from a traditional object-oriented solution by promoting distinct design characteristics. The presence of these characteristics in a service-oriented solution greatly improves the chances of realizing the aforementioned goals and benefits. Erl has identified four service-orientation characteristics as follows: vendor-neutral, business-driven, enterprise-centric, and composition-centric. A vendor-neutral service-oriented solution helps to evolve the underlying technology architecture in response to ever-changing business requirements. By not being dependent on a particular vendor, any aging infrastructure could be replaced by more efficient technologies without the need for redesigning the whole solution from scratch. This also helps in creating a heterogeneous technology environment where particular business automation requirements are fulfilled by specific technologies.
Characteristics:
Within a SOA, the development of solution logic is driven by the needs of the business and is designed in a manner that focuses on the long-term requirements of the business. As a result, the technology architecture is more aligned with the business needs.
Unlike traditional silo-based application development, a SOA takes into account the requirements of either the whole of the enterprise or at least some considerable part of it. As a result, the developed services are interoperable and reusable across the different segments of the enterprise.
A service-oriented solution makes it possible to deal with new and changing requirements, within a reduced amount of time, by making use of existing services. The services are designed so that they can be recomposed, i.e. become part of different solutions.
Application:
The service-orientation design principles are applied during the service-oriented analysis and design process. The extent to which each of these principles could be applied is always relative and needs to be weighed against the overall goals and objectives of an organization as well as the time constraints. One important factor to keep in mind is that it is not the application of these design principles alone but their consistent application that guarantees the realization of the service-orientation design goals linked with the adoption of service-orientation. This is because services are an enterprise resource: they are trusted to conform to certain standards and to be reusable within multiple solutions. To remain such a resource, they must emerge from a process to which these principles have been applied consistently, as inconsistent application would result in services that are not compatible with each other and in the loss of the fundamental service-orientation design characteristics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pseudoprotease**
Pseudoprotease:
Pseudoproteases are catalytically-deficient pseudoenzyme variants of proteases that are represented across the kingdoms of life. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bivariate analysis**
Bivariate analysis:
Bivariate analysis is one of the simplest forms of quantitative (statistical) analysis. It involves the analysis of two variables (often denoted as X, Y), for the purpose of determining the empirical relationship between them. Bivariate analysis can be helpful in testing simple hypotheses of association. Bivariate analysis can help determine to what extent it becomes easier to know and predict a value for one variable (possibly a dependent variable) if we know the value of the other variable (possibly the independent variable) (see also correlation and simple linear regression). Bivariate analysis can be contrasted with univariate analysis in which only one variable is analysed. Like univariate analysis, bivariate analysis can be descriptive or inferential. It is the analysis of the relationship between the two variables. Bivariate analysis is a simple (two variable) special case of multivariate analysis (where multiple relations between multiple variables are examined simultaneously).
When there is a dependent variable:
If the dependent variable—the one whose value is determined to some extent by the other, independent variable— is a categorical variable, such as the preferred brand of cereal, then probit or logit regression (or multinomial probit or multinomial logit) can be used. If both variables are ordinal, meaning they are ranked in a sequence as first, second, etc., then a rank correlation coefficient can be computed. If just the dependent variable is ordinal, ordered probit or ordered logit can be used. If the dependent variable is continuous—either interval level or ratio level, such as a temperature scale or an income scale—then simple regression can be used.
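As a small illustration of matching the technique to the variable types, the sketch below uses SciPy's standard routines (scipy.stats.linregress and scipy.stats.spearmanr); the data values are made up for the example.

```python
import numpy as np
from scipy import stats

# Made-up example data: x is the independent variable, y the dependent variable.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Continuous dependent variable -> simple linear regression.
fit = stats.linregress(x, y)
print(f"slope={fit.slope:.2f}, intercept={fit.intercept:.2f}, r={fit.rvalue:.3f}")

# If both variables were merely ordinal (ranked), a rank correlation would be used instead.
rho, p_value = stats.spearmanr(x, y)
print(f"Spearman rank correlation: rho={rho:.3f} (p={p_value:.3f})")
```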
When there is a dependent variable:
If both variables are time series, a particular type of causality known as Granger causality can be tested for, and vector autoregression can be performed to examine the intertemporal linkages between the variables.
When there is not a dependent variable:
When neither variable can be regarded as dependent on the other, regression is not appropriate but some form of correlation analysis may be.
Graphical methods:
Graphs that are appropriate for bivariate analysis depend on the type of variable. For two continuous variables, a scatterplot is a common graph. When one variable is categorical and the other continuous, a box plot is common and when both are categorical a mosaic plot is common. These graphs are part of descriptive statistics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Convection (heat transfer)**
Convection (heat transfer):
Convection (or convective heat transfer) is the transfer of heat from one place to another due to the movement of fluid. Although often discussed as a distinct method of heat transfer, convective heat transfer involves the combined processes of conduction (heat diffusion) and advection (heat transfer by bulk fluid flow). Convection is usually the dominant form of heat transfer in liquids and gases.
Convection (heat transfer):
Note that this definition of convection is only applicable in heat transfer and thermodynamic contexts. It should not be confused with the dynamic fluid phenomenon of convection, which is typically referred to as natural convection in thermodynamic contexts in order to distinguish the two.
Overview:
Convection can be "forced" by movement of a fluid by means other than buoyancy forces (for example, a water pump in an automobile engine). Thermal expansion of fluids may also force convection. In other cases, natural buoyancy forces alone are entirely responsible for fluid motion when the fluid is heated, and this process is called "natural convection". An example is the draft in a chimney or around any fire. In natural convection, an increase in temperature produces a reduction in density, which in turn causes fluid motion due to pressures and forces when fluids of different densities are affected by gravity (or any g-force). For example, when water is heated on a stove, hot water from the bottom of the pan is displaced (or forced up) by the colder denser liquid, which falls. After heating has stopped, mixing and conduction from this natural convection eventually result in a nearly homogeneous density, and even temperature. Without the presence of gravity (or conditions that cause a g-force of any type), natural convection does not occur, and only forced-convection modes operate.
Overview:
The convection heat transfer mode comprises two mechanisms. In addition to energy transfer due to random molecular motion (diffusion), energy is transferred by the bulk, or macroscopic, motion of the fluid. This motion is associated with the fact that, at any instant, large numbers of molecules are moving collectively or as aggregates. Such motion, in the presence of a temperature gradient, contributes to heat transfer. Because the molecules in aggregate retain their random motion, the total heat transfer is then due to the superposition of energy transport by random motion of the molecules and by the bulk motion of the fluid. It is customary to use the term convection when referring to this cumulative transport and the term advection when referring to the transport due to bulk fluid motion.
Types:
Two types of convective heat transfer may be distinguished: Free or natural convection: when fluid motion is caused by buoyancy forces that result from density variations due to variations of temperature in the fluid. In the absence of an internal source, when the fluid is in contact with a hot surface, its molecules separate and scatter, causing the fluid to be less dense. As a consequence, the hotter, less dense fluid rises while the cooler, denser fluid sinks. Thus, the hotter volume transfers heat towards the cooler volume of that fluid. Familiar examples are the upward flow of air due to a fire or hot object and the circulation of water in a pot that is heated from below.
Types:
Forced convection: when a fluid is forced to flow over the surface by an external source such as fans, pumps, or stirring, creating an artificially induced convection current. In many real-life applications (e.g. heat losses at solar central receivers or cooling of photovoltaic panels), natural and forced convection occur at the same time (mixed convection). Convection can also be classified by internal and external flow. Internal flow occurs when a fluid is enclosed by a solid boundary such as when flowing through a pipe. An external flow occurs when a fluid extends indefinitely without encountering a solid surface. Both of these types of convection, either natural or forced, can be internal or external because they are independent of each other. The bulk temperature, or the average fluid temperature, is a convenient reference point for evaluating properties related to convective heat transfer, particularly in applications related to flow in pipes and ducts.
Types:
Further classification can be made depending on the smoothness and undulations of the solid surfaces. Not all surfaces are smooth, though the bulk of the available information deals with smooth surfaces. Wavy irregular surfaces are commonly encountered in heat transfer devices, which include solar collectors, regenerative heat exchangers, and underground energy storage systems. They have a significant role to play in the heat transfer processes in these applications. Since they bring in an added complexity due to the undulations in the surfaces, they need to be tackled with mathematical finesse through elegant simplification techniques. Also, they do affect the flow and heat transfer characteristics, thereby behaving differently from straight smooth surfaces. For a visual experience of natural convection, a glass filled with hot water and some red food dye may be placed inside a fish tank with cold, clear water. The convection currents of the red liquid may be seen to rise and fall in different regions, then eventually settle, illustrating the process as heat gradients are dissipated.
Newton's law of cooling:
Convection-cooling is sometimes loosely assumed to be described by Newton's law of cooling. Newton's law states that the rate of heat loss of a body is proportional to the difference in temperatures between the body and its surroundings while under the effects of a breeze. The constant of proportionality is the heat transfer coefficient. The law applies when the coefficient is independent, or relatively independent, of the temperature difference between object and environment.
Newton's law of cooling:
In classical natural convective heat transfer, the heat transfer coefficient is dependent on the temperature. However, Newton's law does approximate reality when the temperature changes are relatively small, and for forced air and pumped liquid cooling, where the fluid velocity does not rise with increasing temperature difference.
Convective heat transfer:
The basic relationship for heat transfer by convection is Q̇ = hA(T − Tf), where Q̇ is the heat transferred per unit time, A is the area of the object, h is the heat transfer coefficient, T is the object's surface temperature, and Tf is the fluid temperature. The convective heat transfer coefficient is dependent upon the physical properties of the fluid and the physical situation. Values of h have been measured and tabulated for commonly encountered fluids and flow situations. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
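A minimal Python sketch of this relationship follows; the heat transfer coefficient, area, and temperatures are illustrative assumptions, since real values of h depend on the fluid and the flow situation and are taken from tables.

```python
def convective_heat_rate(h: float, area: float, surface_temp: float, fluid_temp: float) -> float:
    """Heat transferred per unit time, Q = h * A * (T - Tf), in watts.

    h: heat transfer coefficient in W/(m^2*K)
    area: surface area in m^2
    surface_temp, fluid_temp: temperatures in degrees Celsius (only the difference matters)
    """
    return h * area * (surface_temp - fluid_temp)

# Illustrative values: forced air (assumed h ≈ 50 W/(m^2*K)) over a 0.25 m^2 plate
# at 80 °C in 20 °C air.
q_watts = convective_heat_rate(h=50.0, area=0.25, surface_temp=80.0, fluid_temp=20.0)
print(f"Q ≈ {q_watts:.0f} W")   # 50 * 0.25 * 60 = 750 W
```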
**Seiberg–Witten map**
Seiberg–Witten map:
The Seiberg–Witten map is a map used in gauge theory and string theory introduced by Nathan Seiberg and Edward Witten which relates non-commutative degrees of freedom of a gauge theory to their commutative counterparts. It was argued by Seiberg and Witten that certain non-commutative gauge theories are equivalent to commutative ones and that there exists a map from a commutative gauge field to a non-commutative one, which is compatible with the gauge structure of each. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fashion Square**
Fashion Square:
Fashion Square or Fashion Square Mall may refer to any of the following shopping malls in the United States: Fashion Square Mall in Saginaw, Michigan Charlottesville Fashion Square in Charlottesville, Virginia Orlando Fashion Square in Orlando, Florida Scottsdale Fashion Square in Scottsdale, Arizona Westfield Fashion Square, formerly Sherman Oaks Fashion Square, in Sherman Oaks, California | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DDX52**
DDX52:
Probable ATP-dependent RNA helicase DDX52 is an enzyme that in humans is encoded by the DDX52 gene. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Optical properties of carbon nanotubes**
Optical properties of carbon nanotubes:
The optical properties of carbon nanotubes are highly relevant for materials science. The way those materials interact with electromagnetic radiation is unique in many respects, as evidenced by their peculiar absorption, photoluminescence (fluorescence), and Raman spectra.
Optical properties of carbon nanotubes:
Carbon nanotubes are unique "one-dimensional" materials, whose hollow fibers (tubes) have a unique and highly ordered atomic and electronic structure, and can be made in a wide range of dimensions. The diameter typically varies from 0.4 to 40 nm (i.e., a range of ~100 times). However, the length can reach 55.5 cm (21.9 in), implying a length-to-diameter ratio as high as 132,000,000:1, which is unequaled by any other material. Consequently, all the electronic, optical, electrochemical and mechanical properties of the carbon nanotubes are extremely anisotropic (directionally dependent) and tunable. Applications of carbon nanotubes in optics and photonics are still less developed than in other fields. Some properties that may lead to practical use include tuneability and wavelength selectivity. Potential applications that have been demonstrated include light emitting diodes (LEDs), bolometers and optoelectronic memory. Apart from direct applications, the optical properties of carbon nanotubes can be very useful in their manufacture and application to other fields. Spectroscopic methods offer the possibility of quick and non-destructive characterization of relatively large amounts of carbon nanotubes, yielding detailed measurements of non-tubular carbon content, tube type and chirality, structural defects, and many other properties that are relevant to those other applications.
Geometric structure:
Chiral angle A single-walled carbon nanotube (SWCNT) can be envisioned as a strip of a graphene molecule (a single sheet of graphite) rolled and joined into a seamless cylinder. The structure of the nanotube can be characterized by the width of this hypothetical strip (that is, the circumference c or diameter d of the tube) and the angle α of the strip relative to the main symmetry axes of the hexagonal graphene lattice. This angle, which may vary from 0 to 30 degrees, is called the "chiral angle" of the tube.
Geometric structure:
The (n,m) notation Alternatively, the structure can be described by two integer indices (n,m) that describe the width and direction of that hypothetical strip as coordinates in a fundamental reference frame of the graphene lattice. If the atoms around any 6-member ring of the graphene are numbered sequentially from 1 to 6, the two vectors u and v of that frame are the displacements from atom 1 to atoms 3 and 5, respectively. Those two vectors have the same length, and their directions are 60 degrees apart. The vector w = n u + m v is then interpreted as the circumference of the unrolled tube on the graphene lattice; it relates each point A1 on one edge of the strip to the point A2 on the other edge that will be identified with it as the strip is rolled up. The chiral angle α is then the angle between u and w. The pairs (n,m) that describe distinct tube structures are those with 0 ≤ m ≤ n and n > 0. All geometric properties of the tube, such as diameter, chiral angle, and symmetries, can be computed from these indices.
Geometric structure:
The type also determines the electronic structure of the tube. Specifically, the tube behaves like a metal if |m–n| is a multiple of 3, and like a semiconductor otherwise.
Zigzag and armchair tubes Tubes of type (n,m) with n=m (chiral angle = 30°) are called "armchair" and those with m=0 (chiral angle = 0°) "zigzag". These tubes have mirror symmetry, and can be viewed as stacks of simple closed paths ("zigzag" and "armchair" paths, respectively).
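The geometric and electronic classifications described above can be computed directly from the (n, m) indices; the short Python sketch below does so, assuming a textbook carbon-carbon bond length of about 0.142 nm.

```python
import math

A_CC_NM = 0.142                        # carbon-carbon distance in nm (assumed textbook value)
A_LATTICE_NM = A_CC_NM * math.sqrt(3)  # graphene lattice constant |u| = |v|

def diameter_nm(n: int, m: int) -> float:
    """Tube diameter d = |w| / pi, with w = n*u + m*v."""
    return A_LATTICE_NM * math.sqrt(n * n + n * m + m * m) / math.pi

def chiral_angle_deg(n: int, m: int) -> float:
    """Chiral angle between w and u: 0 deg for zigzag (m = 0), 30 deg for armchair (n = m)."""
    return math.degrees(math.atan2(math.sqrt(3) * m, 2 * n + m))

def is_metallic(n: int, m: int) -> bool:
    """Metallic if (n - m) is a multiple of 3, semiconducting otherwise."""
    return (n - m) % 3 == 0

for n, m in [(10, 10), (10, 1), (8, 3)]:
    kind = "metallic" if is_metallic(n, m) else "semiconducting"
    print(f"({n},{m}): d = {diameter_nm(n, m):.3f} nm, "
          f"chiral angle = {chiral_angle_deg(n, m):.1f} deg, {kind}")
```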
Electronic structure:
The optical properties of carbon nanotubes are largely determined by their unique electronic structure. The rolling up of the graphene lattice affects that structure in ways that depend strongly on the geometric structure type (n,m).
Electronic structure:
Van Hove singularities A characteristic feature of one-dimensional crystals is that their distribution of density of states (DOS) is not a continuous function of energy: it descends gradually and then increases in a discontinuous spike. These sharp peaks are called Van Hove singularities. In contrast, three-dimensional materials have a continuous DOS. Van Hove singularities result in the following remarkable optical properties of carbon nanotubes: Optical transitions occur between the v1 − c1, v2 − c2, etc., states of semiconducting or metallic nanotubes and are traditionally labeled as S11, S22, M11, etc., or, if the "conductivity" of the tube is unknown or unimportant, as E11, E22, etc. Crossover transitions c1 − v2, c2 − v1, etc., are dipole-forbidden and thus are extremely weak, but they were possibly observed using cross-polarized optical geometry. The energies between the Van Hove singularities depend on the nanotube structure. Thus, by varying this structure, one can tune the optoelectronic properties of carbon nanotubes. Such fine tuning has been experimentally demonstrated using UV illumination of polymer-dispersed CNTs. Optical transitions are rather sharp (~10 meV) and strong. Consequently, it is relatively easy to selectively excite nanotubes having certain (n, m) indices, as well as to detect optical signals from individual nanotubes.
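A standard piece of one-dimensional band theory (not specific to the source above) makes the origin of these spikes concrete: in one dimension the density of states is g(E) ∝ |dE/dk|⁻¹, so near a parabolic band edge E = E_c + ħ²k²/2m* one gets g(E) ∝ 1/√(E − E_c), which diverges as E approaches E_c — the Van Hove singularity. In a three-dimensional parabolic band, by contrast, g(E) ∝ √(E − E_c), which vanishes smoothly at the band edge, hence the continuous DOS mentioned above.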
Electronic structure:
Kataura plot The band structure of carbon nanotubes having certain (n, m) indexes can be easily calculated. A theoretical graph based on these calculations was designed in 1999 by Hiromichi Kataura to rationalize experimental findings. A Kataura plot relates the nanotube diameter and its bandgap energies for all nanotubes in a diameter range. The oscillating shape of every branch of the Kataura plot reflects the intrinsic strong dependence of the SWNT properties on the (n, m) index rather than on its diameter. For example, (10, 1) and (8, 3) tubes have almost the same diameter, but very different properties: the former is a metal, but the latter is a semiconductor.
Optical properties:
Optical absorption Optical absorption in carbon nanotubes differs from absorption in conventional 3D materials by the presence of sharp peaks (1D nanotubes) instead of an absorption threshold followed by an absorption increase (most 3D solids). Absorption in nanotubes originates from electronic transitions from the v2 to c2 (energy E22) or v1 to c1 (E11) levels, etc. The transitions are relatively sharp and can be used to identify nanotube types. Note that the sharpness deteriorates with increasing energy, and that many nanotubes have very similar E22 or E11 energies, so significant overlap occurs in absorption spectra. This overlap is avoided in photoluminescence mapping measurements (see below), which instead of a combination of overlapping transitions identify individual (E22, E11) pairs. Interactions between nanotubes, such as bundling, broaden optical lines. While bundling strongly affects photoluminescence, it has a much weaker effect on optical absorption and Raman scattering. Consequently, sample preparation for the latter two techniques is relatively simple.
Optical properties:
Optical absorption is routinely used to quantify the quality of carbon nanotube powders. The spectrum is analyzed in terms of the intensities of the nanotube-related peaks, the background, and the pi-carbon peak; the latter two mostly originate from non-nanotube carbon in contaminated samples. However, it has recently been shown that by aggregating nearly single-chirality semiconducting nanotubes into closely packed Van der Waals bundles, the absorption background can be attributed to free-carrier transitions originating from intertube charge transfer.
Optical properties:
Carbon nanotubes as a black body An ideal black body should have emissivity or absorbance of 1.0, which is difficult to attain in practice, especially in a wide spectral range. Vertically aligned "forests" of single-wall carbon nanotubes can have absorbances of 0.98–0.99 from the far-ultraviolet (200 nm) to far-infrared (200 μm) wavelengths.
Optical properties:
These SWNT forests (buckypaper) were grown by the super-growth CVD method to about 10 μm height. Two factors could contribute to strong light absorption by these structures: (i) a distribution of CNT chiralities resulted in various bandgaps for individual CNTs. Thus a compound material was formed with broadband absorption. (ii) Light might be trapped in those forests due to multiple reflections.
Optical properties:
Luminescence Photoluminescence (fluorescence) Semiconducting single-walled carbon nanotubes emit near-infrared light upon photoexcitation, described interchangeably as fluorescence or photoluminescence (PL). The excitation of PL usually occurs as follows: an electron in a nanotube absorbs excitation light via S22 transition, creating an electron-hole pair (exciton). Both electron and hole rapidly relax (via phonon-assisted processes) from c2 to c1 and from v2 to v1 states, respectively. Then they recombine through a c1 − v1 transition resulting in light emission.
Optical properties:
No excitonic luminescence can be produced in metallic tubes. Their electrons can be excited, thus resulting in optical absorption, but the holes are immediately filled by other electrons out of the many available in the metal. Therefore, no excitons are produced.
Salient properties Photoluminescence from SWNTs, as well as optical absorption and Raman scattering, is linearly polarized along the tube axis. This allows monitoring of the SWNT orientation without direct microscopic observation.
PL is quick: relaxation typically occurs within 100 picoseconds.
Optical properties:
PL efficiency was first found to be low (~0.01%), but later studies measured much higher quantum yields. By improving the structural quality and isolation of nanotubes, emission efficiency increased. A quantum yield of 1% was reported in nanotubes sorted by diameter and length through gradient centrifugation, and it was further increased to 20% by optimizing the procedure of isolating individual nanotubes in solution.
Optical properties:
The spectral range of PL is rather wide. Emission wavelength can vary between 0.8 and 2.1 micrometers depending on the nanotube structure.
Excitons are apparently delocalized over several nanotubes in single chirality bundles as the photoluminescence spectrum displays a splitting consistent with intertube exciton tunneling.
Optical properties:
Interaction between nanotubes or between a nanotube and another material may quench or increase PL. No PL is observed in multi-walled carbon nanotubes. PL from double-wall carbon nanotubes strongly depends on the preparation method: CVD-grown DWCNTs show emission both from inner and outer shells. However, DWCNTs produced by encapsulating fullerenes into SWNTs and annealing show PL only from the outer shells. Isolated SWNTs lying on a substrate show extremely weak PL, which has been detected in only a few studies. Detachment of the tubes from the substrate drastically increases PL. The position of the (S22, S11) PL peaks depends slightly (within 2%) on the nanotube environment (air, dispersant, etc.). However, the shift depends on the (n, m) index, and thus the whole PL map not only shifts, but also warps upon changing the CNT medium.
Optical properties:
Raman scattering Raman spectroscopy has good spatial resolution (~0.5 micrometers) and sensitivity (single nanotubes); it requires only minimal sample preparation and is rather informative. Consequently, Raman spectroscopy is probably the most popular technique of carbon nanotube characterization. Raman scattering in SWNTs is resonant, i.e., only those tubes are probed which have one of the bandgaps equal to the exciting laser energy. Several scattering modes dominate the SWNT spectrum, as discussed below.
Optical properties:
Similar to photoluminescence mapping, the energy of the excitation light can be scanned in Raman measurements, thus producing Raman maps. Those maps also contain oval-shaped features uniquely identifying (n, m) indices. In contrast to PL, Raman mapping detects not only semiconducting but also metallic tubes, and it is less sensitive to nanotube bundling than PL. However, the requirement of a tunable laser and a dedicated spectrometer is a strong technical impediment.
Optical properties:
Radial breathing mode Radial breathing mode (RBM) corresponds to radial expansion-contraction of the nanotube. Its frequency νRBM (in cm−1) therefore depends on the nanotube diameter d (in nanometers) as νRBM = A/d + B, where A and B are constants that depend on the environment in which the nanotube is present (for example, B = 0 for individual nanotubes). The frequency can be estimated as νRBM = 234/d + 10 for SWNT or νRBM = 248/d for DWNT, which is very useful for deducing the CNT diameter from the RBM position. The typical RBM range is 100–350 cm−1. If the RBM intensity is particularly strong, its weak second overtone can be observed at double the frequency.
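A minimal sketch of how the quoted relation is used in practice, taking the SWNT constants A = 234 cm⁻¹·nm and B = 10 cm⁻¹ from the paragraph above as defaults (function names are illustrative only):

```python
def diameter_to_rbm(d_nm: float, A: float = 234.0, B: float = 10.0) -> float:
    """nu_RBM = A/d + B, with d in nm and the result in cm^-1."""
    return A / d_nm + B

def rbm_to_diameter(nu_rbm_cm1: float, A: float = 234.0, B: float = 10.0) -> float:
    """Invert nu_RBM = A/d + B to recover the diameter d in nm."""
    return A / (nu_rbm_cm1 - B)

# A 1.0 nm SWNT is expected near 244 cm^-1 with these constants;
# an observed RBM at 170 cm^-1 corresponds to d = 234 / 160 ≈ 1.46 nm.
print(diameter_to_rbm(1.0))      # 244.0
print(rbm_to_diameter(170.0))    # ~1.46
```

For DWNTs, the same sketch applies with A = 248 and B = 0, per the relation quoted above.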
Optical properties:
Bundling mode The bundling mode is a special form of RBM supposedly originating from collective vibration in a bundle of SWNTs.
Optical properties:
G mode Another very important mode is the G mode (G from graphite). This mode corresponds to planar vibrations of carbon atoms and is present in most graphite-like materials. The G band in SWNTs is shifted to lower frequencies relative to graphite (1580 cm−1) and is split into several peaks. The splitting pattern and intensity depend on the tube structure and excitation energy; they can be used, though with much lower accuracy than the RBM mode, to estimate the tube diameter and whether the tube is metallic or semiconducting.
Optical properties:
D mode D mode is present in all graphite-like carbons and originates from structural defects. Therefore, the ratio of the G/D modes is conventionally used to quantify the structural quality of carbon nanotubes: high-quality nanotubes have a ratio significantly higher than 100, and at low degrees of functionalisation the G/D ratio remains almost unchanged. The ratio thus also gives an idea of the degree of functionalisation of a nanotube.
Optical properties:
G' mode The name of this mode is misleading: it is given because in graphite, this mode is usually the second strongest after the G mode. However, it is actually the second overtone of the defect-induced D mode (and thus should logically be named D'). Its intensity is stronger than that of the D mode due to different selection rules. In particular, D mode is forbidden in the ideal nanotube and requires a structural defect, providing a phonon of certain angular momentum, to be induced. In contrast, G' mode involves a "self-annihilating" pair of phonons and thus does not require defects. The spectral position of G' mode depends on diameter, so it can be used roughly to estimate the SWNT diameter. In particular, G' mode is a doublet in double-wall carbon nanotubes, but the doublet is often unresolved due to line broadening.
Optical properties:
Other overtones, such as a combination of RBM+G mode at ~1750 cm−1, are frequently seen in CNT Raman spectra. However, they are less important and are not considered here.
Optical properties:
Anti-Stokes scattering All the above Raman modes can be observed both as Stokes and anti-Stokes scattering. As mentioned above, Raman scattering from CNTs is resonant in nature, i.e. only tubes whose band gap energy is similar to the laser energy are excited. The difference between those two energies, and thus the band gap of individual tubes, can be estimated from the intensity ratio of the Stokes/anti-Stokes lines. This estimate however relies on the temperature factor (Boltzmann factor), which is often miscalculated – a focused laser beam is used in the measurement, which can locally heat the nanotubes without changing the overall temperature of the studied sample.
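As a sketch of the temperature (Boltzmann) factor mentioned above — deliberately ignoring the resonance and frequency prefactors that also enter for CNTs — the anti-Stokes/Stokes intensity ratio for a phonon of Raman shift ν̃ is roughly exp(−hcν̃/k_BT):

```python
import math

HC_OVER_KB = 1.4388  # hc/k_B in cm·K (the second radiation constant; assumed standard value)

def anti_stokes_to_stokes_ratio(raman_shift_cm1: float, temperature_K: float) -> float:
    """Thermal (Boltzmann) factor exp(-hc*nu/(k_B*T)); resonance effects are ignored in this sketch."""
    return math.exp(-HC_OVER_KB * raman_shift_cm1 / temperature_K)

# An RBM phonon at 200 cm^-1: local laser heating from 300 K to 450 K raises the
# apparent ratio from ~0.38 to ~0.53, which would skew a band-gap estimate.
print(anti_stokes_to_stokes_ratio(200.0, 300.0))
print(anti_stokes_to_stokes_ratio(200.0, 450.0))
```

This illustrates why the local heating mentioned above can bias the estimate even when the sample's average temperature is unchanged.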
Optical properties:
Rayleigh scattering Carbon nanotubes have a very large aspect ratio, i.e., their length is much larger than their diameter. Consequently, as expected from classical electromagnetic theory, elastic light scattering (or Rayleigh scattering) by straight CNTs has an anisotropic angular dependence, and from its spectrum the band gaps of individual nanotubes can be deduced. Another manifestation of Rayleigh scattering is the "antenna effect": an array of nanotubes standing on a substrate has specific angular and spectral distributions of reflected light, and both of those distributions depend on the nanotube length.
Applications:
Light-emitting diodes (LEDs) and photo-detectors based on a single nanotube have been produced in the lab. Their unique feature is not the efficiency, which is still relatively low, but the narrow selectivity in the wavelength of emission and detection of light and the possibility of its fine tuning through the nanotube structure. In addition, bolometer and optoelectronic memory devices have been realised on ensembles of single-walled carbon nanotubes.
Applications:
Photoluminescence is used for characterization purposes to measure the quantities of semiconducting nanotube species in a sample. Nanotubes are isolated (dispersed) using an appropriate chemical agent ("dispersant") to reduce intertube quenching. Then PL is measured, scanning both the excitation and emission energies and thereby producing a PL map. The ovals in the map define (S22, S11) pairs, which uniquely identify the (n, m) index of a tube. The data of Weisman and Bachilo are conventionally used for the identification. Nanotube fluorescence has been investigated for the purposes of imaging and sensing in biomedical applications.
Applications:
Sensitization Optical properties, including the PL efficiency, can be modified by encapsulating organic dyes (carotene, lycopene, etc.) inside the tubes. Efficient energy transfer occurs between the encapsulated dye and the nanotube: light is efficiently absorbed by the dye and transferred to the SWNT without significant loss. Thus, potentially, the optical properties of a carbon nanotube could be controlled by encapsulating a certain molecule inside it. In addition, encapsulation allows the isolation and characterization of organic molecules that are unstable under ambient conditions. For example, Raman spectra are extremely difficult to measure from dyes because of their strong PL (efficiency close to 100%). However, encapsulation of dye molecules inside SWNTs completely quenches the dye PL, thus allowing measurement and analysis of their Raman spectra.
Applications:
Cathodoluminescence Cathodoluminescence (CL) — light emission excited by electron beam — is a process commonly observed in TV screens. An electron beam can be finely focused and scanned across the studied material. This technique is widely used to study defects in semiconductors and nanostructures with nanometer-scale spatial resolution. It would be beneficial to apply this technique to carbon nanotubes. However, no reliable CL, i.e. sharp peaks assignable to certain (n, m) indices, has been detected from carbon nanotubes yet.
Applications:
Electroluminescence If appropriate electrical contacts are attached to a nanotube, electron-hole pairs (excitons) can be generated by injecting electrons and holes from the contacts. Subsequent exciton recombination results in electroluminescence (EL). Electroluminescent devices have been produced from single nanotubes and their macroscopic assemblies. Recombination appears to proceed via triplet-triplet annihilation giving distinct peaks corresponding to E11 and E22 transitions.
Multi-walled carbon nanotubes:
Multi-walled carbon nanotubes (MWNT) may consist of several nested single-walled tubes, or of a single graphene strip rolled up multiple times, like a scroll. They are difficult to study because their properties are determined by the contributions and interactions of all the individual shells, which have different structures. Moreover, the methods used to synthesize them are poorly selective and result in a higher incidence of defects. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Speibecken**
Speibecken:
A Speibecken or Kotzbecken is a basin for people to vomit into. These sinks are installed in some bars, restaurants and student fraternities in German-speaking countries as well as in bars in Vietnam. The Speibecken is often a large ceramic bowl installed at waist height, with handles for the user to hold onto and a shower head to flush the unit. They are encountered more often in men's facilities than in women's. In Germany and Austria they have become associated with the heavy drinking traditions of student fraternities. They have also been provided at supervised injection sites for drug users.
Names:
Speibecken comes from the German speien ("to spit" but also "to vomit") and Becken ("bowl, basin"). The term also has the meaning of the traditional spittoon, used by tobacco chewers or in dentists' surgeries. In some parts of Austria and Germany they are known as Kotzbecken (from kotzen, "to puke"). In Vietnam they are called bồn ói [nôn], meaning "puke sink". Speibecken are nicknamed Papst ("pope"), often said to be because people must bow their heads to use them. In some German-speaking regions vomiting is known as papsten ("poping"). The shower head fixed nearby to flush the Speibecken is also nicknamed the "large white telephone". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Blading (professional wrestling)**
Blading (professional wrestling):
In professional wrestling, blading is the practice of intentionally cutting oneself to provoke bleeding. It is also known as "juicing", "gigging", or "getting color". Similarly, a blade is an object used for blading, and a bladejob is a specific act of blading. The act is usually done a good length into the match, as the blood will mix with the flowing sweat on a wrestler's brow to make it look like much more blood is flowing from the wound than there actually is. The preferred area for blading is usually the forehead, as scalp wounds bleed profusely and heal easily. Legitimate, unplanned bleeding which occurs outside the storyline is called "juicing the hard way".
History:
Origin Prior to the advent of blading, most storyline blood in wrestling came from one wrestler deliberately splitting the flesh over their opponent's eyebrow bone with a hard, well-placed punch. In his third autobiography, The Hardcore Diaries, Mick Foley cites Terry Funk as one of the few remaining active wrestlers who knows how to "bust an eyebrow open" in this way. On a rare occasion, however, at the 2012 Extreme Rules event, Brock Lesnar caused John Cena to bleed without blading through a vicious elbow to his head and further hard strikes to Cena's body, though Cena ultimately won the match, which was critically acclaimed. The forehead has always been the preferred blading surface, due to the abundance of blood vessels. A cut in this area will bleed freely for a length of time and will heal quickly. A cut in this location will allow the blood to mix in with the sweat on the wrestler's face, giving them a "crimson mask" effect.
History:
Contemporary history Popularity of blading has declined in recent years. The wrestler always runs the risk of cutting too deeply and slicing an artery in the forehead. In 2004, Eddie Guerrero accidentally did this during his match with JBL at Judgment Day, resulting in a rush of blood pouring from the bladed area. Guerrero lost so much blood because of the cut that he felt the effects from it for two weeks.
History:
In the past, in North American professional wrestling, blading was almost exclusively performed by and on male performers. However, in promotions that allow blading in the 2020s, such as All Elite Wrestling (AEW), women have bladed as well; for example, in a match between Britt Baker and Thunder Rosa in 2021, Baker bled heavily because of blading during the match. Some wrestlers, like Abdullah the Butcher, Dusty Rhodes, New Jack, Bruiser Brody, King Curtis Iaukea, Carlos Colón Sr., Perro Aguayo, Devon Hughes (Brother Devon/D-Von Dudley), Steve Corino, Tarzan Goto, Balls Mahoney, Kintaro Kanemura, Jun Kasai, Villano III, Ian Rotten, Sabu and Manny Fernandez, have disfiguring scars on their heads from frequently blading throughout their careers. According to Mick Foley, the scars in Abdullah's forehead are so deep that he enjoys holding coins or gambling chips in them as a macabre party trick. Presently, blading is a lot less popular than in the past, due to the prevalence and heightened awareness of AIDS and hepatitis. In the 1980s, the willingness to blade was seen as an advantage for new wrestlers. From July 2008 onward, due to its TV-PG rating, WWE has not allowed wrestlers to blade themselves. In most cases, any blood coming from the wrestlers is unintentional. To maintain their TV-PG rating, when a wrestler bleeds on live television, WWE tends to attempt to stop the bleeding mid-match or use different camera angles to avoid showing excessive blood. During repeats of said footage, WWE television programs often shift to black-and-white. However, beginning in 2023, WWE has once again permitted wrestlers to bleed intentionally during matches. Impact Wrestling, formerly known as Total Nonstop Action Wrestling (TNA), used blading frequently until adopting a new no-blood policy in 2014. Wrestlers Abyss and Raven were famed for the matches involving the most blood in TNA before the new policy.
Examples:
One of the most famous such incidents was a bladejob performed by Japanese wrestler The Great Muta in a 1992 match with Hiroshi Hase; the amount of blood Muta lost was so great that many people to this day judge the severity of bladejobs on the Muta Scale. Extreme Championship Wrestling (ECW) was famous for its hardcore style of wrestling employing excessive use of blading. By far the most controversial incident relating to blading was the Mass Transit incident at an ECW house show on November 23, 1996. During a scheduled tag team match between the team of Axl Rotten and D-Von Dudley and the team of New Jack and Mustafa Saed, Axl Rotten could not make the show and was replaced by 17-year-old fan Erich Kulas, who lied about both his age (claiming to be 21) and his wrestling experience. Before the match, Kulas asked New Jack, who was notorious for his stiff hardcore wrestling style and for shooting on opponents, to blade him, since he had never done it himself, and New Jack agreed. New Jack bladed Kulas with a surgical scalpel but cut too deeply and severed two arteries in Kulas' forehead. Kulas screamed in pain, then passed out as blood poured from his head, and was later hospitalized. The incident generated much negative publicity and a lawsuit by Kulas's family; New Jack was charged, but the jury dropped all charges, as the blading was done at Kulas's request and Kulas had lied about his age. Erich Kulas later died on May 12, 2002, but no connection was made between his death and the incident. During an interview on Jimmy Kimmel Live!, Mickey Rourke spoke about his experience with gigging himself for a scene in the 2008 movie The Wrestler. Rourke agreed to gig at the initial request of director Darren Aronofsky in hopes that he would revoke the demand come production time. Indeed, later during filming, Aronofsky admitted that Rourke needn't actually gig; however, by his own will, Rourke decided to go through with it anyway. In the film itself, Rourke's character is seen preparing for a match by wrapping a razor blade inside his wrist tape.
Examples:
There is one notable incident of blading in association football. In 1989, Chile national team goalkeeper Roberto Rojas bladed himself to prevent a loss, blaming the injury on fireworks thrown by opposing fans. FIFA saw through the ruse and ended up banning Rojas for life and banning Chile from the 1994 FIFA World Cup. Rojas's ban was lifted in 2001. Canadian wrestler Devon Nicholson pressed charges against Abdullah the Butcher, claiming that he contracted hepatitis C after Abdullah bladed him without consent. An Ontario court ruled in favor of Nicholson and ordered Abdullah to pay $2.3 million. During their King of the Road match at Uncensored 1995, Dustin Rhodes and The Blacktop Bully bladed, which was against the policy of World Championship Wrestling (WCW) at the time, and they were both fired as a result. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mercury pollution in Canada**
Mercury pollution in Canada:
Mercury is a poisonous element found in various forms in Canada. It can be emitted into the atmosphere both naturally and anthropogenically, with human activity being the main cause of mercury emissions into the environment. Mercury pollution has become a sensitive issue in Canada over the past few decades, and many steps have been taken toward prevention at local, national, and international levels. It has been found to have various negative health and environmental effects. Methylmercury is the most toxic form of mercury; it is easily taken up and digested by living organisms, and it is this form of mercury that causes serious harm to human and wildlife health. Mercury contamination in Grassy Narrows poisoned many people from the Asubpeeschoseewagong First Nation during the 1960s and 1970s in one of the worst cases of environmental poisoning in Canadian history. The effects of this poisoning are ongoing.
Sources of mercury:
Electricity generation Coal-fired electricity generation used to be the largest source of mercury emissions in Canada. However, after the mercury-free environment campaign and the provincial and territorial caps on mercury emissions from coal in the 2010s, its share dropped dramatically: it was 34% in 2003 and had declined by approximately 50% by 2010 following the cap and other measures (Canadian Council of Ministers of the Environment, 2016). Canada is now the second-largest generator of hydroelectricity in the world, with only 9% of its electricity coming from coal.
Sources of mercury:
Waste incineration Many kinds of waste contain mercury (the level varies from waste to waste), including municipal solid waste, sewage waste, industrial waste, hazardous waste, biochemical waste, crematoria waste, farm waste, and so on. Waste incineration was a big contributor to atmospheric mercury emissions, ranking in the top two in 2003 with 20% of the total (Environment and Climate Change Canada, 2013). However, this fell to only 1.49 tonnes of emissions in 2007 and 0.44 tonnes in 2017, a decrease of almost 70% (Government of Canada, 2020 July 3). This decline is mainly a result of processing the waste before disposal.
Sources of mercury:
Base metal mining and smelting industry This industry is one of the major mercury emitters in Canada, and its emissions have risen and fallen over time; for instance, from 1990 through 1995, emissions fell from 35 tonnes to 11 tonnes. Nevertheless, it contributed the largest amount in 1995, accounting for 40% of total human-caused mercury emissions, and emissions again declined rapidly between 1995 and 2003, to only 7 tonnes of mercury discharge. In 2003, it accounted for 19% of the total, still placing it among the top three Canadian mercury emitters (Environment and Climate Change Canada, 2013). This industry has also gone through many transformations after the mercury-free environment drive, such as technological changes and mercury-free processing (at least to some extent).
Sources of mercury:
Chloralkali industry Until the 1980s, the chloralkali industry was the major source of mercury emissions in Canada. In the 1970s, there were approximately 15 chloralkali factories, which were reduced to one (in New Brunswick) after the implementation of mercury emission limits. This industry still contributes to mercury emissions, but in quite small quantities because of anti-pollution rules and the reduction in the number of plants (Environment Canada, 2000).
Process of conversion of mercury in an accessible form:
The most common sources through which mercury enters the food chain are marine animals such as fish and shellfish. Mercury can enter water naturally, for example through runoff, or through human activities such as the disposal of toxic materials directly into the water. Bacterial action then converts inorganic mercury into methylmercury. Methylmercury is a form of mercury that is readily taken up by living organisms; it can be bioaccumulated^1 by bacteria and other small aquatic organisms in their tissues, which are in turn eaten by fish (Green facts – Facts on the Health and the Environment, 2004). Fish is a popular seafood frequently eaten by humans and other organisms in the food web, and this is how mercury becomes biomagnified^2 throughout the food chain, leaving almost everyone mercury-contaminated to some extent. According to research, big and old fish are found to have higher mercury concentrations than small and newly hatched fish. Further, the higher an organism is in the food chain, the higher its concentration of methylmercury (Green facts – Facts on the Health and the Environment, 2004).
Effects of the mercury:
Environment Mercury is unhealthy for the environment, as it makes soil, water, and air toxic. It has become a common contaminant in all environments over the past few years, but it is not an issue everywhere. It becomes a problem only in environments where methylmercury is converted back to inorganic mercury at a low rate compared with the rate of its formation.
Effects of the mercury:
Wildlife Mercury effects were first noticed in wildlife before humans: birds were noted to have flying difficulties and other abnormal behaviour. Almost all forms of mercury can be taken up by living organisms to some extent, and the different forms have different impacts. Methylmercury, the most common form, is highly toxic and mainly affects the nervous system. It also impacts the reproductive system and poses a danger to the developing fetus. According to studies, methylmercury can affect birds at egg concentrations as low as 0.05 to 2.0 mg/kg (wet weight), and some Canadian species have already been found to be in this range. In parts of the Canadian Arctic, mercury concentrations in arctic ringed seals and beluga whales have risen by 2 to 4 times (Green facts – Facts on the Health and the Environment, 2004).
Effects of the mercury:
Human health The health effects of mercury depend on various factors, such as the form of mercury, the age of the person, prior exposure, the health of the person, the concentration, and how it enters the individual (eating, breathing, etc.). According to research, the most common form found in living organisms is methylmercury; poisoning by it is known as Minamata disease^3. Methylmercury can have serious effects on the nervous system, lower immunity, and cause kidney malfunction and toxicity, numbness, skin-related problems, hearing and sight issues, lack of muscle coordination, intellectual impairment, and so on. It has also shown severe effects on the fetus and on developing children (especially through their mothers), causing serious nervous system, hearing, vision, and speech issues (United States Environment Protection Agency, 2020).
Steps taken by the government:
1. Light bulbs containing mercury are dumped in landfills in large quantities every year. According to Canada's national strategy for bulbs containing mercury, all mercury-containing lamps need to be collected and sent to special facilities, which process them and ensure environmentally friendly disposal. Moreover, citizens are made aware from time to time and are encouraged to purchase mercury-free products.
2. Energy generation using coal represents a huge emission of mercury into the environment. That is why people are encouraged to demand less energy through alternative means, like using less electricity, installing energy savers at home, using energy-efficient goods, etc. (Government of Canada, 2016, March 11).
3. A great many products contain mercury. The Canadian government encourages the use of mercury-free items and has made many policies on the proper disposal of mercury-containing goods (Government of Canada, 2016, March 11).
Steps taken by the government:
4. Mercury is present in all lakes and rivers throughout the world, although often at very low concentrations. To reduce its consumption by humans (through marine animals, especially fish), federal, provincial, and territorial governments have created fish consumption advisories for various fish species and locations to minimize the risk of mercury consumption (Government of Canada, 2013, July 9).
Steps taken by the government:
5. Governments have conducted a variety of research at different locations and on different sources and have published reports on official websites for public use and knowledge. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nontheism**
Nontheism:
Nontheism or non-theism is a range of both religious and non-religious attitudes characterized by the absence of espoused belief in the existence of a God or gods. Nontheism has generally been used to describe apathy or silence towards the subject of gods and differs from atheism, or active disbelief in any gods. It has been used as an umbrella term for summarizing various distinct and even mutually exclusive positions, such as agnosticism, ignosticism, ietsism, skepticism, pantheism, pandeism, transtheism, atheism (strong or positive, implicit or explicit), and apatheism. It is in use in the fields of Christian apologetics and general liberal theology.
Nontheism:
An early usage of the hyphenated term non-theism is attributed to George Holyoake in 1852. Within the scope of nontheistic agnosticism, philosopher Anthony Kenny distinguishes between agnostics who find the claim "God exists" uncertain and theological noncognitivists who consider all discussion of God to be meaningless. Some agnostics, however, are not nontheists but rather agnostic theists. Other related philosophical opinions about the existence of deities are ignosticism and skepticism. Because of the various definitions of the term God, a person could be an atheist in terms of certain conceptions of gods, while remaining agnostic in terms of others.
Origin and definition:
The Oxford English Dictionary (2007) does not have an entry for nontheism or non-theism, but it does have an entry for non-theist, defined as "A person who is not a theist", and an entry for the adjectival non-theistic. An early usage of the hyphenated non-theism is by George Holyoake in 1852, who introduces it because: Mr. [Charles] Southwell has taken an objection to the term Atheism. We are glad he has. We have disused it a long time [...]. We disuse it, because Atheist is a worn-out word. Both the ancients and the moderns have understood by it one without God, and also without morality. Thus the term connotes more than any well-informed and earnest person accepting it ever included in it; that is, the word carries with it associations of immorality, which have been repudiated by the Atheist as seriously as by the Christian. Non-theism is a term less open to the same misunderstanding, as it implies the simple non-acceptance of the Theist's explanation of the origin and government of the world.
Origin and definition:
This passage is cited by James Buchanan in his 1857 Modern Atheism under its forms of Pantheism, Materialism, Secularism, Development, and Natural Laws, who however goes on to state: "Non-theism" was afterwards exchanged [by Holyoake] for "Secularism", as a term less liable to misconstruction, and more correctly descriptive of the real import of the theory.
Origin and definition:
Spelling without a hyphen saw scattered use in the later 20th century, following Harvey Cox's 1966 Secular City: "Thus the hidden God or deus absconditus of biblical theology may be mistaken for the no-god-at-all of nontheism." Usage increased in the 1990s in contexts where association with the terms atheism or antitheism was unwanted. The 1998 Baker Encyclopedia of Christian Apologetics states, "In the strict sense, all forms of nontheisms are naturalistic, including atheism, pantheism, deism, and agnosticism." Pema Chödrön uses the term in the context of Buddhism: The difference between theism and nontheism is not whether one does or does not believe in God.[...] Theism is a deep-seated conviction that there's some hand to hold [...] Non-theism is relaxing with the ambiguity and uncertainty of the present moment without reaching for anything to protect ourselves [...] Nontheism is finally realizing there is no babysitter you can count on.
Nontheistic religions:
Nontheistic traditions of thought have played roles in Buddhism, Christianity, Hinduism, Jainism, Taoism, Creativity, Dudeism, Raëlism, Humanistic Judaism, Laveyan Satanism, The Satanic Temple, Unitarian Universalism, and Ethical culture. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Reproductive biology**
Reproductive biology:
Reproductive biology includes both sexual and asexual reproduction. It covers a wide range of fields: reproductive systems, endocrinology, sexual development (puberty), sexual maturity, reproduction, and fertility.
Human reproductive biology:
Endocrinology Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands.
Reproductive systems Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring.
Female reproductive system The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth.
These structures include the ovaries, oviducts, uterus, vagina, and mammary glands. Estrogen is one of the sexual reproductive hormones that support the female reproductive system.
Human reproductive biology:
Male reproductive system The male reproductive system includes the testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia. Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in the testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract.
Animal Reproductive Biology:
Animal reproduction occurs by two modes of action, sexual and asexual reproduction. In asexual reproduction, the generation of new organisms does not require the fusion of sperm with an egg. In sexual reproduction, however, new organisms are formed by the fusion of a haploid sperm and egg, resulting in what is known as the zygote. Although animals exhibit both sexual and asexual reproduction, the vast majority of animals reproduce by sexual reproduction. In many species, relatively little is known about the conditions needed for successful breeding. Such information may be critical to preventing widespread extinction as species are increasingly affected by climate change and other threats. In the case of some species of frogs, such as the Mallorcan midwife toad and the Kihansi spray toad, it has been possible to repopulate areas where wild populations had been lost. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
Gametogenesis:
Gametogenesis is the formation of gametes, or reproductive cells.
Gametogenesis:
Spermatogenesis Spermatogenesis is the production of sperm cells in the testis. In mature testes primordial germ cells divide mitotically to form the spermatogonia, which in turn generate spermatocytes by mitosis. Then each spermatocyte gives rise to four spermatids through meiosis. Spermatids are now haploid and undergo differentiation into sperm cells. Later in reproduction the sperm will fuse with a female oocyte to form the zygote.
Gametogenesis:
Oogenesis Oogenesis is the formation of a cell that will produce one ovum and three polar bodies. Oogenesis begins in the female embryo with the production of oogonia from primordial germ cells. As in spermatogenesis, the primordial germ cells undergo mitotic division to form the cells that will later undergo meiosis, but meiosis is halted at the prophase I stage. This cell is known as the primary oocyte. Human females are born with all the primary oocytes they will ever have. Starting at puberty, the process of meiosis can complete, resulting in the secondary oocyte and the first polar body. The secondary oocyte can later be fertilized by the male sperm. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Final value theorem**
Final value theorem:
In mathematical analysis, the final value theorem (FVT) is one of several similar theorems used to relate frequency domain expressions to the time domain behavior as time approaches infinity.
Final value theorem:
Mathematically, if f(t) in continuous time has (unilateral) Laplace transform F(s), then a final value theorem establishes conditions under which lim_{t→∞} f(t) = lim_{s→0} sF(s). Likewise, if f[k] in discrete time has (unilateral) Z-transform F(z), then a final value theorem establishes conditions under which lim_{k→∞} f[k] = lim_{z→1} (z−1)F(z). An Abelian final value theorem makes assumptions about the time-domain behavior of f(t) (or f[k]) to calculate lim_{s→0} sF(s). Conversely, a Tauberian final value theorem makes assumptions about the frequency-domain behaviour of F(s) to calculate lim_{t→∞} f(t) (or lim_{k→∞} f[k]) (see Abelian and Tauberian theorems for integral transforms).
Final value theorems for the Laplace transform:
Deducing lim_{t→∞} f(t) In the following statements, the notation "s → 0" means that s approaches 0, whereas "s ↓ 0" means that s approaches 0 through the positive numbers.
Final value theorems for the Laplace transform:
Standard Final Value Theorem: Suppose that every pole of F(s) is either in the open left half-plane or at the origin, and that F(s) has at most a single pole at the origin. Then sF(s) → L ∈ R as s → 0, and lim_{t→∞} f(t) = L.
Final Value Theorem using the Laplace transform of the derivative: Suppose that f(t) and f′(t) both have Laplace transforms that exist for all s > 0. If lim_{t→∞} f(t) exists and lim_{s→0} sF(s) exists, then lim_{t→∞} f(t) = lim_{s→0} sF(s).
Remark: Both limits must exist for the theorem to hold. For example, if f(t) = sin(t), then lim_{t→∞} f(t) does not exist, but lim_{s→0} sF(s) = lim_{s→0} s/(s² + 1) = 0.
Improved Tauberian converse Final Value Theorem: Suppose that f : (0,∞) → C is bounded and differentiable, and that t f′(t) is also bounded on (0,∞). If sF(s) → L ∈ C as s → 0, then lim_{t→∞} f(t) = L.
Extended Final Value Theorem: Suppose that every pole of F(s) is either in the open left half-plane or at the origin. Then one of the following occurs:
1. sF(s) → L ∈ R as s ↓ 0, and lim_{t→∞} f(t) = L.
2. sF(s) → +∞ as s ↓ 0, and f(t) → +∞ as t → ∞.
3. sF(s) → −∞ as s ↓ 0, and f(t) → −∞ as t → ∞.
In particular, if s = 0 is a multiple pole of F(s) then case 2 or 3 applies (f(t) → +∞ or f(t) → −∞).
Final value theorems for the Laplace transform:
Generalized Final Value Theorem: Suppose that f(t) is Laplace transformable. Let λ > −1. If lim_{t→∞} f(t)/t^λ exists and lim_{s↓0} s^{λ+1} F(s) exists, then lim_{t→∞} f(t)/t^λ = (1/Γ(λ+1)) lim_{s↓0} s^{λ+1} F(s), where Γ(x) denotes the Gamma function.
Applications Final value theorems for obtaining lim_{t→∞} f(t) have applications in establishing the long-term stability of a system.
Final value theorems for the Laplace transform:
Deducing lim_{s→0} sF(s) Abelian Final Value Theorem: Suppose that f : (0,∞) → C is bounded and measurable and lim_{t→∞} f(t) = α ∈ C. Then F(s) exists for all s > 0 and lim_{s→0+} sF(s) = α. Elementary proof: Suppose for convenience that |f(t)| ≤ 1 on (0,∞), and let α = lim_{t→∞} f(t). Let ϵ > 0, and choose A so that |f(t) − α| < ϵ for all t > A. Since s∫_0^∞ e^{−st} dt = 1, for every s > 0 we have sF(s) − α = s∫_0^∞ (f(t) − α) e^{−st} dt; hence |sF(s) − α| ≤ s∫_0^A |f(t) − α| e^{−st} dt + s∫_A^∞ |f(t) − α| e^{−st} dt ≤ 2s∫_0^A e^{−st} dt + ϵ s∫_A^∞ e^{−st} dt = I + II.
Final value theorems for the Laplace transform:
Now for every s > 0 we have II < ϵ s∫_0^∞ e^{−st} dt = ϵ. On the other hand, since A < ∞ is fixed, it is clear that lim_{s→0} I = 0, and so |sF(s) − α| < ϵ if s > 0 is small enough.
Final Value Theorem using the Laplace transform of the derivative: Suppose that all of the following conditions are satisfied: (1) f : (0,∞) → C is continuously differentiable and both f and f′ have a Laplace transform; (2) f′ is absolutely integrable, that is, ∫_0^∞ |f′(τ)| dτ is finite; (3) lim_{t→∞} f(t) exists and is finite. Then lim_{s→0} sF(s) = lim_{t→∞} f(t). Remark: The proof uses the dominated convergence theorem.
Final value theorems for the Laplace transform:
Final Value Theorem for the mean of a function: Let f : (0,∞) → C be a continuous and bounded function such that the following limit exists: lim_{T→∞} (1/T)∫_0^T f(t) dt = α ∈ C. Then lim_{s→0, s>0} sF(s) = α.
Final Value Theorem for asymptotic sums of periodic functions: Suppose that f : [0,∞) → R is continuous and absolutely integrable in [0,∞). Suppose further that f is asymptotically equal to a finite sum of periodic functions f_as, that is, |f(t) − f_as(t)| < ϕ(t), where ϕ(t) is absolutely integrable in [0,∞) and vanishes at infinity. Then lim_{s→0} sF(s) = lim_{t→∞} (1/t)∫_0^t f(x) dx.
Final Value Theorem for a function that diverges to infinity: Let f(t) : [0,∞) → R and let F(s) be the Laplace transform of f(t). Suppose that f(t) satisfies all of the following conditions: (1) f(t) is infinitely differentiable at zero; (2) f^(k)(t) has a Laplace transform for all non-negative integers k; (3) f(t) diverges to infinity as t → ∞. Then sF(s) diverges to infinity as s → 0+.
Final Value Theorem for improperly integrable functions (Abel's theorem for integrals): Let h : [0,∞) → R be measurable and such that the (possibly improper) integral f(x) := ∫_0^x h(t) dt converges for x → ∞. Then ∫_0^∞ h(t) dt := lim_{x→∞} f(x) = lim_{s↓0} ∫_0^∞ e^{−st} h(t) dt.
Final value theorems for the Laplace transform:
This is a version of Abel's theorem.
To see this, notice that f′(t) = h(t) and apply the final value theorem to f after an integration by parts: for s > 0, s∫_0^∞ e^{−st} f(t) dt = [−e^{−st} f(t)]_{t=0}^{∞} + ∫_0^∞ e^{−st} f′(t) dt = ∫_0^∞ e^{−st} h(t) dt.
By the final value theorem, the left-hand side converges to lim_{x→∞} f(x) as s → 0. To establish the convergence of the improper integral lim_{x→∞} f(x) in practice, Dirichlet's test for improper integrals is often helpful. An example is the Dirichlet integral.
Final value theorems for the Laplace transform:
Applications Final value theorems for obtaining lim_{s→0} sF(s) have applications in probability and statistics to calculate the moments of a random variable. Let R(x) be the cumulative distribution function of a continuous random variable X and let ρ(s) be the Laplace–Stieltjes transform of R(x). Then the n-th moment of X can be calculated as E[X^n] = (−1)^n d^n ρ(s)/ds^n |_{s=0}. The strategy is to write d^n ρ(s)/ds^n = F(G_1(s), G_2(s), …, G_k(s), …), where F(…) is continuous and, for each k, G_k(s) = sF_k(s) for a function F_k(s). For each k, put f_k(t) as the inverse Laplace transform of F_k(s), obtain lim_{t→∞} f_k(t), and apply a final value theorem to deduce lim_{s→0} G_k(s) = lim_{s→0} sF_k(s) = lim_{t→∞} f_k(t). Then lim_{s→0} d^n ρ(s)/ds^n = F(lim_{s→0} G_1(s), lim_{s→0} G_2(s), …, lim_{s→0} G_k(s), …), and hence E[X^n] is obtained.
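As a hypothetical worked example of the moment formula (the exponential distribution is not discussed in the text, and this checks E[X^n] = (−1)^n ρ^(n)(0) directly rather than going through the full final-value strategy): for X ~ Exp(λ) the Laplace–Stieltjes transform is ρ(s) = λ/(λ + s), and the sketch below recovers E[X^n] = n!/λ^n.

```python
import sympy as sp

s, lam = sp.symbols("s lambda", positive=True)
rho = lam / (lam + s)   # Laplace–Stieltjes transform of an Exp(lambda) random variable (assumed example)

for n in (1, 2, 3):
    moment = (-1) ** n * sp.diff(rho, s, n).subs(s, 0)
    print(n, sp.simplify(moment))   # 1/lambda, 2/lambda**2, 6/lambda**3
```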
Final value theorems for the Laplace transform:
Examples Example where FVT holds For example, for a system described by the transfer function H(s) = 6/(s + 2), the impulse response converges to lim_{t→∞} h(t) = lim_{s→0} s·6/(s + 2) = 0.
That is, the system returns to zero after being disturbed by a short impulse. However, the Laplace transform of the unit step response is G(s) = (1/s)·6/(s + 2), and so the step response converges to lim_{s→0} s·(1/s)·6/(s + 2) = 6/2 = 3. So a zero-state system will follow an exponential rise to a final value of 3.
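These two limits are easy to check symbolically; the sketch below (using SymPy, a tooling assumption rather than part of the original) reproduces the values 0 and 3 and also shows the corresponding time-domain responses.

```python
import sympy as sp

s, t = sp.symbols("s t", positive=True)

H = 6 / (s + 2)    # transfer function from the example
G = H / s          # Laplace transform of the unit step response

# Final value theorem predictions
print(sp.limit(s * H, s, 0))   # 0  -> impulse response decays to zero
print(sp.limit(s * G, s, 0))   # 3  -> step response settles at 6/2 = 3

# Time-domain cross-check: h(t) = 6*exp(-2t), step response = 3 - 3*exp(-2t)
print(sp.inverse_laplace_transform(H, s, t))
print(sp.inverse_laplace_transform(G, s, t))
```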
Final value theorems for the Laplace transform:
Example where FVT does not hold For a system described by the transfer function H(s) = 9/(s² + 9), the final value theorem appears to predict the final value of the impulse response to be 0 and the final value of the step response to be 1. However, neither time-domain limit exists, and so the final value theorem predictions are not valid. In fact, both the impulse response and step response oscillate, and (in this special case) the final value theorem describes the average values around which the responses oscillate.
Final value theorems for the Laplace transform:
There are two checks performed in control theory which confirm valid results for the final value theorem:
1. All non-zero roots of the denominator of H(s) must have negative real parts.
2. H(s) must not have more than one pole at the origin.
Rule 1 was not satisfied in this example, in that the roots of the denominator are 0 + j3 and 0 − j3.
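A minimal numerical sketch of these two checks, using the roots of the denominator polynomial (the helper name is illustrative, and the tolerance handling is simplified):

```python
import numpy as np

def fvt_applicable(den_coeffs) -> bool:
    """Check the two conditions on the denominator of H(s):
    all non-zero roots in the open left half-plane, and at most one pole at the origin."""
    roots = np.roots(den_coeffs)
    at_origin = np.isclose(roots, 0.0)
    return bool(np.all(roots[~at_origin].real < 0) and np.count_nonzero(at_origin) <= 1)

print(fvt_applicable([1, 2]))      # s + 2      -> True  (H(s) = 6/(s + 2), FVT valid)
print(fvt_applicable([1, 0, 9]))   # s**2 + 9   -> False (poles at ±j3, FVT not valid)
```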
Final value theorems for the Z transform:
Deducing lim_{k→∞} f[k] Final Value Theorem: If lim_{k→∞} f[k] exists and lim_{z→1} (z − 1)F(z) exists, then lim_{k→∞} f[k] = lim_{z→1} (z − 1)F(z).
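For a hedged illustration in discrete time, take f[k] = 1 − a^k with 0 < a < 1 (an assumed example, relying on the standard unilateral Z-transform pairs for the unit step and a^k): its final value is 1, and (z − 1)F(z) indeed tends to 1 as z → 1.

```python
import sympy as sp

z, a = sp.symbols("z a", positive=True)

# f[k] = 1 - a**k with 0 < a < 1; standard Z-transform pairs (assumed) give
# F(z) = z/(z - 1) - z/(z - a)
F = z / (z - 1) - z / (z - a)

print(sp.simplify(sp.limit((z - 1) * F, z, 1)))   # 1, matching lim_{k->oo} f[k]
```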
Final value of linear systems:
Continuous-time LTI systems The final value of the system ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), in response to a step input u(t) with amplitude R, is lim_{t→∞} y(t) = −CA⁻¹BR. Sampled-data systems The sampled-data system of the above continuous-time LTI system at the aperiodic sampling times t_i, i = 1, 2, ...
is the discrete-time system x(t_{i+1}) = Φ(h_i)x(t_i) + Γ(h_i)u(t_i), y(t_i) = Cx(t_i), where h_i = t_{i+1} − t_i, Φ(h_i) = e^{A h_i}, and Γ(h_i) = ∫_0^{h_i} e^{As} ds. The final value of this system in response to a step input u(t) with amplitude R is the same as the final value of its original continuous-time system. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
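A small numerical sketch of the continuous-time formula (the matrices below are illustrative values, not taken from the source): it compares −CA⁻¹BR with the steady state reached by a crude simulation.

```python
import numpy as np

# A hypothetical stable single-input single-output system (illustrative values only)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
R = 5.0   # step amplitude

# Final value predicted by the formula above: lim y(t) = -C A^{-1} B R
y_inf = (-C @ np.linalg.inv(A) @ B * R).item()
print(y_inf)   # 2.5 for this choice of A, B, C

# Cross-check with a crude forward-Euler simulation of xdot = Ax + Bu, y = Cx
x = np.zeros((2, 1))
dt = 1e-3
for _ in range(50_000):          # 50 s of simulated time
    x = x + dt * (A @ x + B * R)
print((C @ x).item())            # approaches 2.5
```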
**Vinyl tributyltin**
Vinyl tributyltin:
Vinyl tributyltin is an organotin compound with the formula Bu3SnCH=CH2 (Bu = butyl). It is a white, air-stable solid. It is used as a source of vinyl anion equivalent in Stille coupling reactions. As a source of vinyltin reagents, early work used vinyl trimethyltin, but trimethyltin compounds are avoided nowadays owing to their toxicity.
Preparation:
The compound is prepared by the reaction of vinylmagnesium bromide with tributyltin chloride. It can be synthesized in the laboratory by hydrostannylation of acetylene with tributyltin hydride. It is commercially available. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
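Schematically, and consistent with the routes named above (the stoichiometry shown is a sketch, not taken verbatim from the source), the two preparations can be written as: CH2=CHMgBr + Bu3SnCl → Bu3SnCH=CH2 + MgBrCl (Grignard route), and HC≡CH + Bu3SnH → Bu3SnCH=CH2 (hydrostannylation of acetylene with tributyltin hydride).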
**Cable guide**
Cable guide:
A cable guide is a fitting or part of a bicycle frame which guides a piece of bare inner bowden cable around a corner. Most multi-speed bicycles have cable guides to get the derailleur cables past the bottom bracket.
Older derailleur bicycles used either brazed-on or clamp-on guides just above the bottom bracket, but newer bicycles have a guide under the bottom bracket.
Below the bottom bracket:
Cable guides below the bottom bracket can be cheaper, just a piece of moulded plastic, and, for some bikes with very small chainrings, eliminate interference between the rear derailleur cable and the bottom of the front derailleur cage. They also make for a cleaner appearance and easier to clean frame in the bottom bracket area. Poor lubrication of bottom-bracket cable guides is a common cause of autoshifting.
Above the bottom bracket:
Cable guides above the bottom bracket are usually made of metal, causing more friction and wear on the cable, and are more complex, as they do not follow the shape of the bottom bracket shell. They do allow use of a slightly shorter cable, tend to keep the cable cleaner (as it is more protected from grit thrown up from the road), allow the cable to protect the chainstay from chain slap, and the loop of housing at the rear derailleur does not need to bend quite as tightly, since the cable stop is on the top side of the chainstay, rather than beneath it. Despite these advantages, this routing is almost exclusively found on older bikes.
Beside top of seat tube:
Some bicycles use a cable guide on one side of the seat cluster for a rear cantilever brake cable, rather than use a short length of housing between two housing stops. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |