**Card reader**
Card reader:
A card reader is a data input device that reads data from a card-shaped storage medium. The first were punched card readers, which read the paper or cardboard punched cards that were used during the first several decades of the computer industry to store information and programs for computer systems. Modern card readers are electronic devices that can read plastic cards embedded with either a barcode, magnetic strip, computer chip or another storage medium.
A memory card reader is a device used for communication with a smart card or a memory card.
A magnetic card reader is a device used to read magnetic stripe cards, such as credit cards.
A business card reader is a device used to scan and electronically save printed business cards.
Smart card readers:
A smart card reader is an electronic device that reads smart cards and can be found in the following forms:
- keyboards with a built-in card reader;
- external devices and internal drive-bay card reader devices for personal computers (PCs);
- laptop models containing a built-in smart card reader and/or using flash-upgradeable firmware.

External devices that can read a personal identification number (PIN) or other information may also be connected to a keyboard (usually called "card readers with PIN pad"). This model works by supplying the integrated circuit on the smart card with electricity and communicating via protocols, thereby enabling the user to read and write to a fixed address on the card.
If the card does not use any standard transmission protocol, but uses a custom/proprietary protocol, it has the communication protocol designation T=14.

The latest PC/SC CCID specifications define a new smart card framework. This framework works with USB devices with the specific device class 0x0B. Readers with this class do not need device drivers when used with PC/SC-compliant operating systems, because the operating system supplies the driver by default.

PKCS#11 is an API designed to be platform-independent, defining a generic interface to cryptographic tokens such as smart cards. This allows applications to work without knowledge of the reader details.
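The platform-independence that PKCS#11 provides can be illustrated with a minimal sketch. The class and method names below are hypothetical stand-ins, not the real PKCS#11 C API: the point is only that applications call a generic token interface, so any reader backend can be substituted without changing application code.

```python
import hashlib
from abc import ABC, abstractmethod

class CryptoToken(ABC):
    """Illustrative stand-in for a PKCS#11-style generic token
    interface. Applications program against this interface and need
    no knowledge of the underlying reader hardware."""

    @abstractmethod
    def sign(self, data: bytes) -> bytes: ...

class SmartCardToken(CryptoToken):
    """Hypothetical backend: a token reached through a smart card reader."""
    def sign(self, data: bytes) -> bytes:
        # A real token would sign on-card; here we fake a tagged digest.
        return b"card:" + hashlib.sha256(data).digest()

class UsbToken(CryptoToken):
    """Hypothetical backend: a USB (CCID class 0x0B) token."""
    def sign(self, data: bytes) -> bytes:
        return b"usb:" + hashlib.sha256(data).digest()

def authenticate(token: CryptoToken, challenge: bytes) -> bytes:
    # The application only ever sees the generic interface.
    return token.sign(challenge)

print(authenticate(SmartCardToken(), b"challenge")[:5])  # b'card:'
```

Either backend can be passed to `authenticate` unchanged, which is the essence of what a generic cryptographic-token API buys the application developer.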
Memory card readers:
A memory card reader is a device, typically having a USB interface, for accessing the data on a memory card such as a CompactFlash (CF), Secure Digital (SD) or MultiMediaCard (MMC). Most card readers also offer write capability, and together with the card, this can function as a pen drive.
Access control card reader:
Access control card readers are used in physical security systems to read a credential that allows access through access control points, typically a locked door. An access control reader can be a magnetic stripe reader, a bar code reader, a proximity reader, a smart card reader, or a biometric reader.
Access control readers are classified by the functions they are able to perform and by identification technology.

Barcode: A barcode is a series of alternating dark and light stripes that are read by an optical scanner. The organization and width of the lines is determined by the barcode protocol selected. There are many different protocols, such as the prevalent Code 39. Sometimes the digits represented by the dark and light bars are also printed to allow people to read the number without an optical reader.
The advantage of using barcode technology is that it is cheap and easy to generate the credential, and it can easily be applied to cards or other items. However, the same affordability and simplicity makes the technology susceptible to fraud, because fake barcodes can also be created cheaply and easily, for example by photocopying real ones. One attempt to reduce fraud is to print the barcode using carbon-based ink, and then cover the barcode with a dark red overlay. The barcode can then be read with an optical reader tuned to the infrared spectrum, but cannot easily be copied by a copy machine. This does not address the ease with which barcode numbers can be generated from a computer using almost any printer.
Biometric: There are several forms of biometric identification employed in access control: fingerprint, hand geometry, iris, voice recognition, and facial recognition. Biometric technology has been promoted for its ability to significantly increase the security level of systems. Proponents claim that the technology eliminates such problems as lost, stolen or loaned ID cards and forgotten PINs.

All biometric readers work similarly, by comparing the template stored in memory to the scan obtained during the process of identification. If there is a high enough degree of probability that the template in the memory is compatible with the live scan (i.e. the scan belongs to the authorized person), the ID number of that person is sent to a control panel. The control panel then checks the permission level of the user and determines whether access should be allowed. The communication between the reader and the control panel is usually transmitted using the industry-standard Wiegand interface. The only exception is the intelligent biometric reader, which does not require any panel and directly controls all door hardware.
Biometric templates may be stored in the memory of readers, limiting the number of users by the reader memory size (there are reader models that have been manufactured with a storage capacity of up to 50,000 templates). User templates may also be stored in the memory of the smart card, thereby removing all limits to the number of system users (finger-only identification is not possible with this technology), or a central server PC can act as the template host. For systems where a central server is employed, known as "server-based verification", readers first read the biometric data of the user and then forward it to the main computer for processing. Server-based systems support a large number of users but are dependent on the reliability of the central server, as well as communication lines.
1-to-1 and 1-to-many are the two possible modes of operation of a biometric reader: In the 1-to-1 mode a user must first either present an ID card or enter a PIN. The reader then looks up the template of the corresponding user in the database and compares it with the live scan. The 1-to-1 method is considered more secure and is generally faster as the reader needs to perform only one comparison. Most 1-to-1 biometric readers are "dual-technology" readers: they either have a built-in proximity, smart card or keypad reader, or they have an input for connecting an external card reader.
In the 1-to-many mode a user presents biometric data such as a fingerprint or retina scan and the reader then compares the live scan to all the templates stored in the memory. This method is preferred by most end-users, because it eliminates the need to carry ID cards or use PINs. On the other hand, this method is slower, because the reader may have to perform thousands of comparison operations until it finds the match. An important technical characteristic of a 1-to-many reader is therefore the number of comparisons that can be performed in one second, since one second is roughly the maximum time users will wait at a door without noticing a delay. Currently most 1-to-many readers are capable of performing 2,000–3,000 matching operations per second.
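The two modes of operation can be sketched as follows. Everything here is an illustrative simplification: the templates, the byte-matching similarity function, and the threshold are hypothetical stand-ins for what a real biometric reader implements.

```python
# Illustrative sketch of 1-to-1 vs 1-to-many biometric matching.

def similarity(a: bytes, b: bytes) -> float:
    """Toy similarity score: fraction of matching bytes (a stand-in
    for a real biometric matching algorithm)."""
    if len(a) != len(b):
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / len(a)

THRESHOLD = 0.9  # hypothetical acceptance threshold

def verify_1_to_1(templates: dict, user_id: str, live_scan: bytes) -> bool:
    """1-to-1: the user first presents an ID card or PIN, so the
    reader compares the live scan against one stored template."""
    stored = templates.get(user_id)
    return stored is not None and similarity(stored, live_scan) >= THRESHOLD

def identify_1_to_many(templates: dict, live_scan: bytes):
    """1-to-many: the reader compares the live scan against every
    stored template and returns the matching user's ID, if any."""
    for user_id, stored in templates.items():
        if similarity(stored, live_scan) >= THRESHOLD:
            return user_id
    return None

templates = {"alice": b"\x01\x02\x03\x04", "bob": b"\x09\x08\x07\x06"}
print(verify_1_to_1(templates, "alice", b"\x01\x02\x03\x04"))  # True
print(identify_1_to_many(templates, b"\x09\x08\x07\x06"))      # bob
```

The sketch makes the performance trade-off visible: `verify_1_to_1` does one comparison, while `identify_1_to_many` loops over the whole template store, which is why the matches-per-second figure matters for 1-to-many readers.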
Magnetic stripe: Magnetic stripe technology, usually called mag-stripe, is so named because of the stripe of magnetic oxide tape that is laminated on a card. There are three tracks of data on the magnetic stripe. Typically the data on each of the tracks follows a specific encoding standard, but it is possible to encode any format on any track. A mag-stripe card is cheap compared to other card technologies and is easy to program. The magnetic stripe holds more data than a barcode can in the same space. While a mag-stripe is more difficult to generate than a barcode, the technology for reading and encoding data on a mag-stripe is widespread and easy to acquire. Magnetic stripe technology is also susceptible to misreads, card wear, and data corruption. These cards are also susceptible to some forms of skimming, where external devices are placed over the reader to intercept the data read.
Wiegand card: Wiegand card technology is a patented technology using embedded ferromagnetic wires strategically positioned to create a unique pattern that generates the identification number. Like magnetic stripe or barcode technology, this card must be swiped through a reader to be read. Unlike the other technologies, the identification media is embedded in the card and not susceptible to wear. This technology once gained popularity because it is difficult to duplicate, creating a high perception of security. This technology is being replaced by proximity cards, however, because of the limited source of supply, the relatively better tamper resistance of proximity readers, and the convenience of the touch-less functionality in proximity readers.
Proximity card readers are still referred to as "Wiegand output readers", but no longer use the Wiegand effect. Proximity technology retains the Wiegand upstream data so that the new readers are compatible with old systems.
Proximity card: A reader radiates a 1" to 20" electrical field around itself. Cards use a simple LC circuit. When a card is presented to the reader, the reader's electrical field excites a coil in the card. The coil charges a capacitor and in turn powers an integrated circuit. The integrated circuit outputs the card number to the coil, which transmits it to the reader.
A common proximity format is 26-bit Wiegand. This format uses a facility code, sometimes also called a site code. The facility code is a unique number common to all of the cards in a particular set. The idea is that an organization will have its own facility code and a set of numbered cards incrementing from 1. Another organization has a different facility code, and their card set also increments from 1. Thus different organizations can have card sets with the same card numbers, but since the facility codes differ, the cards only work at one organization. This idea worked early in the technology, but as there is no governing body controlling card numbers, different manufacturers can supply cards with identical facility codes and identical card numbers to different organizations. Thus there may be duplicate cards that allow access to multiple facilities in one area. To counteract this problem some manufacturers have created formats beyond 26-bit Wiegand that they control and issue to organizations.
In the 26-bit Wiegand format, bit 1 is an even parity bit, bits 2–9 are the facility code, bits 10–25 are the card number, and bit 26 is an odd parity bit, giving the layout 1/8/16/1. By convention, the leading even parity bit covers the first twelve data bits (2–13) and the trailing odd parity bit covers the last twelve (14–25). Other formats have a similar structure of a leading facility code followed by the card number, with parity bits for error checking, such as the 1/12/12/1 format used by some American access control companies.

The 1/8/16/1 layout gives a facility-code limit of 255 and a card-number limit of 65,535; the 1/12/12/1 layout gives a facility-code limit of 4,095 and a card-number limit of 4,095. The Wiegand format has also been stretched to 34 bits, 56 bits, and other lengths.
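The 1/8/16/1 layout can be sketched in a few lines of code. This is a minimal illustration of the bit layout described above, assuming the common parity convention (leading even parity over the first half of the data bits, trailing odd parity over the second half):

```python
# Minimal 26-bit Wiegand (1/8/16/1) encoder/decoder:
# [even parity][8-bit facility code][16-bit card number][odd parity]

def encode_wiegand26(facility: int, card: int) -> list[int]:
    assert 0 <= facility <= 255 and 0 <= card <= 65535
    data = [(facility >> i) & 1 for i in reversed(range(8))] + \
           [(card >> i) & 1 for i in reversed(range(16))]
    even = sum(data[:12]) % 2        # bit 1: even parity over data bits 2-13
    odd = (sum(data[12:]) + 1) % 2   # bit 26: odd parity over data bits 14-25
    return [even] + data + [odd]

def decode_wiegand26(bits: list[int]) -> tuple[int, int]:
    assert len(bits) == 26
    data = bits[1:25]
    assert bits[0] == sum(data[:12]) % 2, "even parity error"
    assert bits[25] == (sum(data[12:]) + 1) % 2, "odd parity error"
    facility = int("".join(map(str, data[:8])), 2)
    card = int("".join(map(str, data[8:])), 2)
    return facility, card

bits = encode_wiegand26(118, 1234)
print(decode_wiegand26(bits))  # (118, 1234)
```

The limits quoted above fall directly out of the field widths: 8 bits give a maximum facility code of 2^8 − 1 = 255, and 16 bits give a maximum card number of 2^16 − 1 = 65,535.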
Smart card: There are two types of smart cards: contact and contactless. Both have an embedded microprocessor and memory. The smart card differs from the proximity card in that the microchip in the proximity card has only one function: to provide the reader with the card's identification number. The processor on the smart card has an embedded operating system and can handle multiple applications such as a cash card, a pre-paid membership card, or an access control card.
The difference between the two types of smart cards is the manner in which the microprocessor on the card communicates with the outside world. A contact smart card has eight contact points, which must physically touch the contacts on the reader to convey information between them. Since contact cards must be inserted into readers carefully in the proper orientation, the speed and convenience of such a transaction is not acceptable for most access control applications. The use of contact smart cards as physical access control is limited mostly to parking applications, when payment data is stored in card memory and the speed of transactions is not as important.
A contactless smart card uses the same radio-based technology as the proximity card, with the exception of the frequency band used: it uses a higher frequency (13.56 MHz instead of 125 kHz), which allows the transfer of more data and communication with several cards at the same time. A contactless card does not have to touch the reader or even be taken out of a wallet or purse. Most access control systems only read serial numbers of contactless smart cards and do not utilize the available memory. Card memory may be used for storing biometric data (e.g. a fingerprint template) of a user. In such a case a biometric reader first reads the template on the card and then compares it to the finger (hand, eye, etc.) presented by the user. In this way biometric data of users does not have to be distributed and stored in the memory of controllers or readers, which simplifies the system and reduces memory requirements.
Smartcard readers have been targeted successfully by criminals in what is termed a supply chain attack, in which the readers are tampered with during manufacture or in the supply chain before delivery. The rogue devices capture customers' card details before transmitting them to criminals.
Banking card readers:
Some banks have issued hand-held smartcard readers to their customers to support different electronic payment applications: Chip Authentication Program (CAP) uses EMV banking cards to authenticate online transactions as a phishing countermeasure.
Geldkarte is a German electronic purse scheme where card readers are used to allow the card holder to verify the amount of money stored on the card and the details of the last few transactions.
**Algebrator**
Algebrator:
Algebrator (also called Softmath) is a computer algebra system (CAS), which was developed in the late 1990s by Neven Jurkovic of Softmath, San Antonio, Texas. It is a CAS specifically geared towards algebra education. Besides the computation results, it shows the solution process step by step, with context-sensitive explanations.
**Making Mathematics with Needlework**
Making Mathematics with Needlework:
Making Mathematics with Needlework: Ten Papers and Ten Projects is an edited volume on mathematics and fiber arts. It was edited by Sarah-Marie Belcastro and Carolyn Yackel, and published in 2008 by A K Peters, based on a meeting held in 2005 in Atlanta by the American Mathematical Society.
Topics:
The book includes ten different mathematical fiber arts projects, by eight contributors. An introduction provides a history of the connections between mathematics, mathematics education, and the fiber arts. Each of its ten project chapters is illustrated by many color photographs and diagrams, and is organized into four sections: an overview of the project, a section on the mathematics connected to it, a section of ideas for using the project as a teaching activity, and directions for constructing the project. Although there are some connections between topics, they can be read independently of each other, in any order. The thesis of the book is that directed exercises in fiber arts construction can help teach both mathematical visualization and concepts from three-dimensional geometry.

The book uses knitting, crochet, sewing, and cross-stitch, but deliberately avoids weaving as a topic already well covered in mathematical fiber arts publications. Projects in the book include a quilt in the form of a Möbius strip, a "bidirectional hat" connected to the theory of Diophantine equations, a shawl with a fractal design, a knitted torus connecting to discrete approximations of curvature, a sampler demonstrating different forms of wallpaper-group symmetry, "algebraic socks" with connections to modular arithmetic and the Klein four-group, a one-sided purse sewn together following a description by Lewis Carroll, a demonstration of braid groups on a cable-knit pillow, an embroidered graph drawing of an Eulerian graph, and topological pants.

Beyond Belcastro and Yackel, the contributors to the book include Susan Goldstine, Joshua Holden, Lana Holden, Mary D. Shepherd, Amy F. Szczepański, and D. Jacob Wildstrom.
Audience and reception:
Reviewers had mixed opinions on the appropriate audience for the book and its success in targeting that audience. Ketty Peeva writes that the book is "of interest to mathematicians, mathematics educators and crafters", and Mary Fortune writes that a wide group of people would enjoy browsing its contents. However, Kate Atherley warns that it is "not for the faint-of-heart" (either among mathematicians or crafters), and Mary Goetting complains that the audience for the book is not clearly defined, and is inconsistent across the book, with some chapters written for professional mathematicians and others for mathematical beginners. She writes that most readers will have to pick and choose among the chapters for material appealing to them. Similarly, reviewer Michelle Sipics writes that in aiming at multiple audiences, the book "sacrifices some accessibility". And although reviewer Gwen Fisher downplays the potential pedagogical applications of the book, complaining that its teaching ideas do not provide enough detail to be usable and are not a good fit for typical teaching curricula, Sipics calls mathematics teachers "perhaps the greatest beneficiaries of this text".

Fortune writes that, though the book increased her appreciation and understanding of needlework, she didn't gain much new mathematical insight from reading it. In contrast, Fisher argues that by using only "straightforward applications of traditional needlework skills" the book is accessible even to beginners in the fiber arts, and that the book is "much more about maths than about fibre technique". The real value of the book, she argues, is in the scholarly connection it forges between traditional women's activities and mathematics. Pao-Sheng Hsu says that it would be "a great coffee table book" for browsing. And Anna Lena Phillips calls the book "an excellent synthesis" of textile crafts and mathematics, providing inspiration to those interested in either topic.
**Nimotuzumab**
Nimotuzumab:
Nimotuzumab (h-R3, BIOMAb EGFR, Biocon, India; TheraCIM, CIMYM Biosciences, Canada; Theraloc, Oncoscience, Europe, CIMAher, Center of Molecular Immunology, Havana, Cuba) is a humanized monoclonal antibody that as of 2014 had orphan status in the US and EU for glioma, and marketing approval in India, China, and other countries for squamous cell carcinomas of the head and neck, and was undergoing several clinical trials. Like cetuximab, nimotuzumab binds to the epidermal growth factor receptor (EGFR), a signalling protein that normally controls cell division. In some cancers, this receptor is altered to cause uncontrolled cell division, a hallmark of cancer. These monoclonal antibodies block EGFR and stop the uncontrolled cell division. It has a humanized human-mouse h-R3 heavy chain and a humanized human-mouse h-R3 κ-chain.
Mechanism:
Nimotuzumab binds with optimal affinity and high specificity to the extracellular region of EGFR (epidermal growth factor receptor). This results in a blockade of ligand binding and receptor activation. Epidermal growth factor receptor (EGFR) is a key target in the development of cancer therapeutics. EGFR-targeting drugs have been shown to improve response when used with conventional treatments such as radiation therapy and chemotherapy.
Development status:
It was developed at the Center of Molecular Immunology (CIM) in Havana, Cuba. CIM's commercialization arm, CIMAB S.A., formed a joint venture with YM Biosciences called CIMYM BioSciences in 1995 that was 80% owned by YM and 20% owned by CIMAB. CIMYM BioSciences licensed European rights to nimotuzumab to Oncoscience AG in 2003, the South Korean rights to Kuhnil Pharmaceutical Co., Ltd. in 2005, and in 2006 licensed the Japanese rights to Daiichi Sankyo and rights to certain countries in Asia and Africa to Innogene Kalbiotech Pte Ltd. Other licensees for nimotuzumab include Biocon BioPharmaceuticals Ltd. (BBPL) in India, Biotech Pharmaceutical Co. Ltd. in China, Delta Laboratories in Colombia, European Chemicals SAC, Quality Pharma in Peru, Eurofarma Laboratorios Ltda. in Brazil, Ferozsons Labs in Pakistan, Laboratorio Elea S.A.C.I.F.yA. in Argentina, EL KENDI Pharmaceutical in Algeria and Laboratorios PiSA in Mexico. In December 2012, CIMYM BioSciences dissolved and sold its assets related to nimotuzumab to InnoKeys PTE Ltd.

According to a 2009 review: "Nimotuzumab was approved for the following indications—For squamous cell carcinoma in head and neck (SCCHN) in India, Cuba, Argentina, Colombia, Ivory Coast, Gabon, Ukraine, Peru and Sri Lanka (expired now); for glioma (pediatric and adult) in Cuba, Argentina, Philippines and Ukraine; for nasopharyngeal cancer in China. It has been granted orphan drug status for glioma in USA and for glioma and pancreatic cancer in Europe."

As of 2014, nimotuzumab was in additional Phase I and II clinical trials. In April 2014, Daiichi Sankyo announced that it was halting a multicenter, randomized, double-blind, placebo-controlled Phase III study investigating nimotuzumab for first-line therapy in patients with unresectable and locally advanced squamous cell lung cancer, due to safety issues in certain patients who received a combination of cisplatin, vinorelbine, radiotherapy, and nimotuzumab.
Safety:
The toxicity and safety of nimotuzumab have been assessed in several pre-clinical and clinical studies, in which it was noticed that side effects usually caused by EGFR inhibitors, especially rashes and other skin toxicities, were negligible. Scientists have hypothesized that this is because nimotuzumab binds only to cells that express moderate to high EGFR levels.

Nimotuzumab has been found to be very well tolerated in clinical trials. Common adverse reactions seen in patients treated with nimotuzumab include:
- chills
- fever
- nausea and vomiting
- dryness of mouth
- asthenia
- hypertension/hypotension
- flushing
**CEN/TC 165**
CEN/TC 165:
CEN/TC 165 (CEN Technical Committee 165) is a technical decision-making body within the CEN system working on standardization in the field of wastewater engineering in the European Union. Its goal is to develop functional standards, and standards for performance and installation, for systems and components in the field of wastewater engineering.

CEN/TC 165 was created on 1 January 1988, and the Working Groups (WG) established under this Technical Committee are:
- WG1: General requirements for pipes
- WG2: Vitrified clay pipes
- WG4: Manhole tops, gully tops, drainage channels and other ancillary components for use outside buildings
- WG5: Fibre cement pipes
- WG6: Cast iron pipes
- WG7: Steel pipes
- WG8: Separators
- WG9: Concrete pipes
- WG10: Installation of buried pipes for gravity drain and sewer systems
- WG11: Gratings, covers and other ancillary components for use inside buildings
- WG12: Structural design of buried pipelines
- WG13: Renovation and repair of drains and sewers
- WG21: Drainage systems inside buildings
- WG22: Drain and sewer systems outside buildings
- WG23: Special projects
- WG30: Terminology in the field of wastewater engineering
- WG40: Wastewater treatment plants > 50 PT
- WG41: Small sewage treatment plants (< 50 inhabitants)
- WG42: Treatment plants from 51 to 500 population equivalents; general processes
- WG43: Wastewater treatment plants; general requirements and special processes
- WG50: Use of treated wastewater
**Double negative**
Double negative:
A double negative is a construction occurring when two forms of grammatical negation are used in the same sentence. This is typically used to convey a different shade of meaning from a strictly positive sentence ("You're not unattractive" vs "You're attractive"). Multiple negation is the more general term referring to the occurrence of more than one negative in a clause. In some languages, double negatives cancel one another and produce an affirmative; in other languages, doubled negatives intensify the negation. Languages where multiple negatives affirm each other are said to have negative concord or emphatic negation. Portuguese, Persian, French, Russian, Polish, Bulgarian, Greek, Spanish, Old English, Italian, Afrikaans, and Hebrew are examples of negative-concord languages. This is also true of many vernacular dialects of modern English. Chinese, Latin, German, Dutch, Japanese, Swedish and modern Standard English are examples of languages that do not have negative concord. Typologically, negative concord occurs in a minority of languages.

Languages without negative concord typically have negative polarity items that are used in place of additional negatives when another negating word already occurs. Examples are "ever", "anything" and "anyone" in the sentence "I haven't ever owed anything to anyone" (cf. "I haven't never owed nothing to no one" in negative-concord dialects of English, "Nunca devi nada a ninguém" in Portuguese, lit. "Never have I owed nothing to no one", or "Non ho mai dovuto nulla a nessuno" in Italian). Negative polarity can be triggered not only by direct negatives such as "not" or "never", but also by words such as "doubt" or "hardly" ("I doubt he has ever owed anything to anyone" or "He has hardly ever owed anything to anyone").
Because standard English does not have negative concord but many varieties and registers of English do, and because most English speakers can speak or comprehend across varieties and registers, double negatives as collocations are functionally auto-antonymic (contranymic) in English; for example, a collocation such as "ain't nothin" or "not nothing" can mean either "something" or "nothing", and its disambiguation is resolved via the contexts of register, variety, location, and content of ideas. Stylistically, in English, double negatives can sometimes be used for affirmation (e.g. "I'm not feeling unwell"), an understatement of the positive ("I'm feeling well"). The rhetorical term for this is litotes.
English:
Two negatives resolving to a positive: When two negatives are used in one independent clause, in standard English the negatives are understood to cancel one another and produce a weakened affirmative (see the Robert Lowth citation below); this is known as litotes. However, depending on how such a sentence is constructed, in some dialects, if a verb or adverb falls between two negatives, then the latter negative is assumed to intensify the former, adding weight or feeling to the negative clause of the sentence. For this reason, it is difficult to portray double negatives in writing, as the intonation that adds weight in speech is lost. A double negative intensifier does not necessarily require the prescribed steps, and can easily be ascertained from the mood or intonation of the speaker. Compare:

There isn't no other way. = There's some other way. (negatives: isn't (is not), no)

versus

There isn't no other way! = There's no other way!

These two sentences would be different in how they are communicated by speech. Either assumption would be correct, and the first sentence can be just as right or wrong in intensifying a negative as it is in cancelling it out, thereby rendering the sentence's meaning ambiguous. Since there is no adverb or verb to support the latter negative, the usage here is ambiguous and lies entirely in the context behind the sentence. In light of punctuation, the second sentence can be viewed as the intensifier, and the former as a statement, thus an admonishment.
In Standard English, two negatives are understood to resolve to a positive. This rule was observed as early as 1762, when Bishop Robert Lowth wrote A Short Introduction to English Grammar with Critical Notes. For instance, "I don't disagree" could mean "I certainly agree", "I agree", "I sort of agree", "I don't understand your point of view (POV)", "I have no opinion", and so on; it is a form of "weasel words". Further statements are necessary to resolve which particular meaning was intended.
This is opposed to the single negative "I don't agree", which typically means "I disagree". However, the statement "I don't completely disagree" is a similar double negative to "I don't disagree" but needs little or no clarification.
With the meaning "I completely agree", Lowth would have been referring to litotes, wherein two negatives simply cancel each other out. However, the usage of intensifying negatives and examples are presented in his work, which could also imply he wanted either usage of double negatives abolished. Because of this ambiguity, double negatives are frequently employed when making back-handed compliments. The phrase "Mr. Jones wasn't incompetent" will seldom mean "Mr. Jones was very competent", since the speaker would have found a more flattering way to say so. Instead, some kind of problem is implied, though Mr. Jones possesses basic competence at his tasks.
Two or more negatives resolving to a negative: In discussions of English grammar, the term "double negative" is often, though not universally, applied to the non-standard use of a second negative as an intensifier to a negation.
Double negatives are usually associated with regional and ethnic dialects such as Southern American English, African American Vernacular English, and various British regional dialects. Indeed, they were used in Middle English: for example, Chaucer made extensive use of double, triple, and even quadruple negatives in his Canterbury Tales. About the Friar, he writes "Ther nas no man no wher so vertuous" ("There never was no man nowhere so virtuous"). About the Knight, "He nevere yet no vileynye ne sayde / In all his lyf unto no maner wight" ("He never yet no vileness didn't say / In all his life to no manner of man").
Following the battle of Marston Moor, Oliver Cromwell quoted his nephew's dying words in a letter to the boy's father Valentine Walton: "A little after, he said one thing lay upon his spirit. I asked him what it was. He told me it was that God had not suffered him to be no more the executioner of His enemies." Although this particular letter has often been reprinted, it is frequently changed to read "not ... to be any more" instead.

Whereas some double negatives may resolve to a positive, in some dialects others resolve to intensify the negative clause within a sentence. For example:

I didn't go nowhere today.
I'm not hungry no more.
You don't know nothing.
There was never no more laziness at work than before.

In contrast, some double negatives become positives:

I didn't not go to the park today.
We can't not go to sleep! This is something you can't not watch.The key to understanding the former examples and knowing whether a double negative is intensive or negative is finding a verb between the two negatives. If a verb is present between the two, the latter negative becomes an intensifier which does not negate the former. In the first example, the verb to go separates the two negatives; therefore the latter negative does not negate the already negated verb. Indeed, the word 'nowhere' is thus being used as an adverb and does not negate the argument of the sentence. Double negatives such as I don't want to know no more contrast with Romance languages such as French in Je ne veux pas savoir.An exception is when the second negative is stressed, as in I'm not doing nothing; I'm thinking. A sentence can otherwise usually only become positive through consecutive uses of negatives, such as those prescribed in the later examples, where a clause is void of a verb and lacks an adverb to intensify it. Two of them also use emphasis to make the meaning clearer. The last example is a popular example of a double negative that resolves to a positive. This is because the verb 'to doubt' has no intensifier which effectively resolves a sentence to a positive. Had we added an adverb thus: I never had no doubt this sentence is false.Then what happens is that the verb to doubt becomes intensified, which indeed deduces that the sentence is indeed false since nothing was resolved to a positive. The same applies to the third example, where the adverb 'more' merges with the prefix no- to become a negative word, which when combined with the sentence's former negative only acts as an intensifier to the verb hungry. Where people think that the sentence I'm not hungry no more resolves to a positive is where the latter negative no becomes an adjective which only describes its suffix counterpart more which effectively becomes a noun, instead of an adverb. 
This is a valid argument, since adjectives do indeed describe the nature of a noun; yet it fails to take into account that the phrase no more here is only an adverb and simply serves as an intensifier. Another argument used to support the position that double negatives are not acceptable is a mathematical analogy: negating a negative number results in a positive one, e.g. −(−2) = +2; therefore, it is argued, I did not go nowhere resolves to I went somewhere.
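The mathematical analogy can be made concrete. Under a strictly logical reading, each negation flips the truth value, so an even number of negatives yields an affirmative and an odd number a negative. The sketch below (illustrative Python; the function name is invented for this example) models only this prescriptive view, not the negative-concord dialects discussed above, in which extra negatives intensify rather than cancel:

```python
def resolve(proposition: bool, num_negatives: int) -> bool:
    """Apply num_negatives strict logical negations to a proposition."""
    for _ in range(num_negatives):
        proposition = not proposition
    return proposition

# "I did not go nowhere" read logically: two negations of "I went somewhere".
print(resolve(True, 2))  # True, i.e. "I went somewhere"

# A triple negative flips back to a negative.
print(resolve(True, 3))  # False
```

Equivalently, only the parity of the count matters: an even number of negations leaves the proposition unchanged.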
Other forms of double negatives, which are popular to this day and strictly enhance the negative rather than destroying it, are described thus: I'm not entirely familiar with Nihilism nor Existentialism. Philosophies aside, this form of double negative is still in use, whereby the use of 'nor' enhances the negative clause by emphasizing what isn't to be. Opponents of double negatives would have preferred I'm not entirely familiar with Nihilism or Existentialism; however, this renders the sentence somewhat empty of the negative clause being advanced. This form of double negative, along with the others described, is a standard way of intensifying as well as enhancing a negative. The use of 'nor' to emphasise the negative clause is still popular today, and was popular in the past through the works of Shakespeare and Milton:
Nor did they not perceive the evil plight / In which they were ~ John Milton, Paradise Lost
I never was, nor never will be ~ William Shakespeare, Richard III
The negatives herein do not cancel each other out but simply emphasize the negative clause.
Up to the 18th century, double negatives were used to emphasize negation. "Prescriptive grammarians" recorded and codified a shift away from the double negative in the 1700s. Double negatives continue to be spoken by users of vernacular dialects, such as Appalachian English and African American Vernacular English. Such speakers view double negatives as emphasizing the negative rather than cancelling out the negatives. Researchers have studied African American Vernacular English (AAVE) and traced its origins back to colonial English; this shows that double negatives were present in colonial English, and thus presumably English as a whole, and were acceptable at that time. After the 18th century, English was changed to become more logical, and double negatives became seen as cancelling each other out, as in mathematics. The use of double negatives became associated with being uneducated and illogical. In his Essay towards a practical English Grammar of 1711, James Greenwood first recorded the rule: "Two Negatives, or two Adverbs of Denying do in English affirm". Robert Lowth stated in his grammar textbook A Short Introduction to English Grammar (1762) that "two negatives in English destroy one another, or are equivalent to an affirmative". Grammarians have assumed that Latin was the model for Lowth and other early grammarians in prescribing against negative concord, as Latin does not feature it. Data indicates, however, that negative concord had already fallen into disuse in Standard English by the time of Lowth's grammar, and no evidence exists that the loss was driven by prescriptivism; the loss was already well established by the time prescriptive rules against it appeared.
In film and TV:
Double negatives have been employed in various films and television shows. In the film Mary Poppins (1964), the chimney sweep Bert employs a double negative when he says, "If you don't wanna go nowhere..." Another is used by the bandits in the "Stinking Badges" scene of John Huston's The Treasure of the Sierra Madre (1948): "Badges? We ain't got no badges. We don't need no badges!" The Simpsons episode "Hello Gutter, Hello Fadder" (1999) features Bart writing "I won't not use no double negatives" as part of the opening sequence chalkboard gag. More recently, the British television show EastEnders has received some publicity over the Estuary accent of character Dot Branning, who speaks with double and triple negatives ("I ain't never heard of no licence."). In the Harry Enfield sketch "Mr Cholmondley-Warner's Guide to the Working-Class", a stereotypical Cockney employs a septuple negative: "Inside toilet? I ain't never not heard of one of them nor I ain't nor nothing." In music, double negatives can be employed to similar effect (as in Pink Floyd's "Another Brick in the Wall", in which schoolchildren chant "We don't need no education / We don't need no thought control") or used to establish a frank and informal tone (as in The Rolling Stones' "(I Can't Get No) Satisfaction"). Other examples include "Ain't Nobody" (Chaka Khan), "Ain't No Sunshine" (Bill Withers) and "Ain't No Mountain High Enough" (Marvin Gaye).
Other Germanic languages:
Double negation is uncommon in other West Germanic languages. A notable exception is Afrikaans in which it is mandatory (for example, "He cannot speak Afrikaans" becomes Hy kan nie Afrikaans praat nie, "He cannot Afrikaans speak not"). Dialectal Dutch, French and San have been suggested as possible origins for this trait. Its proper use follows a set of fairly complex rules as in these examples provided by Bruce Donaldson: Ek het nie geweet dat hy sou kom nie. ("I did not know that he would be coming.") Ek het geweet dat hy nie sou kom nie. ("I knew that he would not be coming.") Hy sal nie kom nie, want hy is siek. ("He will not be coming because he is sick.") Dit is nie so moeilik om Afrikaans te leer nie. ("It is not so difficult to learn Afrikaans.")Another point of view is that the construction is not really an example of a "double negative" but simply a grammatical template for negation. The second nie cannot be understood as a noun or adverb (unlike pas in French, for example), and it cannot be substituted by any part of speech other than itself with the sentence remaining grammatical. The grammatical particle has no independent meaning and happens to be spelled and pronounced the same as the embedded nie, meaning "not", by a historical accident.
The second nie is used if and only if the sentence or phrase does not already end with either nie or another negating adverb.
Ek sien jou nie. ("I don't see you") Ek sien jou nooit. ("I never see you") Afrikaans shares with English the property that two negatives make a positive: Ek stem nie met jou saam nie. ("I don't agree with you.") Ek stem nie nié met jou saam nie. ("I don't not agree with you," i.e., I agree with you.) Double negation is still found in the Low Franconian dialects of West Flanders (e.g., Ik ne willen da nie doen, "I do not want to do that") and in some villages in the central Netherlands such as Garderen, but it takes a different form from that found in Afrikaans. Belgian Dutch dialects, however, still have some widely used expressions like nooit niet ("never not") for "never".
Like some dialects of English, Bavarian has both single and double negation, with the latter denoting special emphasis. For example, the Bavarian Des hob i no nia ned g'hört ("This have I yet never not heard") can be compared to the Standard German "Das habe ich noch nie gehört". The German emphatic "niemals!" (roughly "never ever") corresponds to Bavarian "(går) nia ned" or even "nie nicht" in the Standard German pronunciation.
Another exception is Yiddish, in which Slavic influence makes the double (and sometimes even triple) negative quite common.
A few examples would be: איך האב קיינמאל נישט געזאגט ikh hob keynmol nisht gesogt ("I never didn't say") איך האב נישט קיין מורא פאר קיינעם ניט ikh hob nisht keyn more far keynem nit ("I have no fear of no one not") It is common to add נישט ("not") after the Yiddish word גארנישט ("nothing"), i.e. איך האב גארנישט נישט געזאגט ("I haven't said nothing")
Latin and Romance languages:
In Latin a second negative word appearing along with non turns the meaning into a positive one: ullus means "any", nullus means "no", non...nullus (nonnullus) means "some". In the same way, umquam means "ever", numquam means "never", non...numquam (nonnumquam) means "sometimes". In many Romance languages a second term indicating a negative is required.
In French, the usual way to express simple negation is to employ two words, e.g. ne [verb] pas, ne [verb] plus, or ne [verb] jamais, as in the sentences Je ne sais pas, Il n'y a plus de batterie, and On ne sait jamais. The second term was originally an emphatic; pas, for example, derives from the Latin passus, meaning "step", so that French Je ne marche pas and Catalan No camino pas originally meant "I will not walk a single step." This initial usage spread so thoroughly that it became a necessary element of any negation in modern French, to the point that in colloquial speech ne is generally dropped entirely, as in Je sais pas. In Northern Catalan, no may be omitted in colloquial language, as it may in Occitan, which uses non only as a short answer to questions. In Venetian, the double negation no ... mìa can likewise lose the first particle and rely only on the second: magno mìa ("I eat not") and vegno mìa ("I come not"). These developments exemplify Jespersen's cycle.
Jamais, rien, personne and nulle part (never, nothing, no one, nowhere) can be mixed with each other, and/or with ne...plus (not anymore/not again) in French, e.g. to form sentences like Je n'ai rien dit à personne (I didn't say anything to anyone) or even Il ne dit jamais plus rien à personne (He never says anything to anyone anymore).
The Spanish, Italian, Portuguese and Romanian languages usually employ doubled negative correlatives. Portuguese Não vejo nada, Spanish No veo nada, Romanian Nu văd nimic and Italian Non vedo niente (literally, "I do not see nothing") are used to express "I do not see anything". In Italian, a second following negative particle non turns the phrase into a positive one, but with a slightly different meaning. For instance, while both Voglio mangiare ("I want to eat") and Non voglio non mangiare ("I don't want not to eat") mean "I want to eat", the latter phrase more precisely means "I'd prefer to eat".
Other Romance languages employ double negatives less regularly. In Asturian, an extra negative particle is used with negative adverbs: Yo nunca nun lu viera ("I had not never seen him") means "I have never seen him" and A mi tampoco nun me presta ("I neither do not like it") means "I do not like it either". Standard Catalan and Galician also used to possess a tendency to double no with other negatives, so Jo tampoc no l'he vista or Eu tampouco non a vira, respectively meant "I have not seen her either". This practice is dying out.
Welsh:
In spoken Welsh, the word ddim (not) often occurs with a prefixed or mutated verb form that is negative in meaning: Dydy hi ddim yma (word-for-word, "Not-is she not here") expresses "She is not here" and Chaiff Aled ddim mynd (word-for-word, "Not-will-get Aled not go") expresses "Aled is not allowed to go".
Negative correlatives can also occur with already negative verb forms. In literary Welsh, the mutated verb form is caused by an initial negative particle, ni or nid. The particle is usually omitted in speech but the mutation remains: [Ni] wyddai neb (word-for-word, "[Not] not-knew nobody") means "Nobody knew" and [Ni] chaiff Aled fawr o bres (word-for-word, "[Not] not-will-get Aled lots of money") means "Aled will not get much money". This is not usually regarded as three negative markers, however, because the negative mutation is really just an effect of the initial particle on the following word.
Greek:
Ancient Greek:
Doubled negatives are perfectly correct in Ancient Greek. With few exceptions, a simple negative (οὐ or μή) following another negative (for example, οὐδείς, no one) results in an affirmation: οὐδείς οὐκ ἔπασχέ τι ("No one was not suffering") means more simply "Everyone was suffering". Meanwhile, a compound negative following a negative strengthens the negation: μὴ θορυβήσῃ μηδείς ("Do not permit no one to raise an uproar") means "Let not a single one among them raise an uproar".
Those constructions apply only when the negatives all refer to the same word or expression. Otherwise, the negatives simply work independently of one another: οὐ διὰ τὸ μὴ ἀκοντίζειν οὐκ ἔβαλον αὐτόν means "It was not on account of their not throwing that they did not hit him", and one should not blame them for not trying.
Modern Greek:
In Modern Greek, a double negative can express either an affirmation or a negation, depending on the word combination. When expressing negation, it usually carries an emphasis with it. Native speakers can usually infer the meaning of a sentence from the tone of voice and the context.
Examples:
A combination of χωρίς/δίχως and δε/δεν has an affirmative meaning: "Χωρίς/δίχως αυτό να σημαίνει ότι δε μπορούμε να το βρούμε." translates "Without that meaning that we can't find it." i.e. We can find it.
A combination of δε/δεν and δε/δεν also has an affirmative meaning: "Δε(ν) σημαίνει ότι δε(ν) μπορούμε να το βρούμε." translates "Doesn't mean that we can't find it." i.e. We can find it.
A combination of δε/δεν and κανείς/κανένας/καμία/κανένα has a negative meaning: "Δε(ν) θα πάρεις κανένα βιβλίο." translates "You won't get any book."
Slavic languages:
In Slavic languages, multiple negatives reinforce one another. Indeed, if a sentence contains a negated verb, any indefinite pronouns or adverbs must be used in their negative forms. For example, in Serbo-Croatian, ni(t)ko nikad(a) nigd(j)e ništa nije uradio ("Nobody never did not do nothing nowhere") means "Nobody has ever done anything, anywhere", and nikad nisam tamo išao/išla ("Never I did not go there") means "I have never been there". In Czech, it is nikdy jsem nikde nikoho neviděl ("I have not seen never no-one nowhere"). In Bulgarian, it is никога не съм виждал никого никъде [nikoga ne sam vizhdal nikogo nikade], lit. "I have not seen never no-one nowhere", or не знам нищо [ne znam nishto], lit. "I don't know nothing". In Russian, "I know nothing" is я ничего не знаю [ya nichevo ne znayu], lit. "I don't know nothing".
Negating the verb without negating the pronoun (or vice versa), while syntactically correct, may result in a very unusual meaning or make no sense at all. Saying "I saw nobody" in Polish (widziałem nikogo) instead of the more usual "I did not see nobody" (nikogo nie widziałem) might mean "I saw an instance of nobody" or "I saw Mr Nobody", but it would not have its plain English meaning. Likewise, in Slovenian, saying "I do not know anyone" (ne poznam kogarkoli) in place of "I do not know no one" (ne poznam nikogar) has the connotation "I do not know just anyone: I know someone important or special." In Czech, as in many other languages, a standard double negative is used in sentences with a negative pronoun or negative conjunction where the verb is also negated (nikdo nepřišel "nobody came", literally "nobody didn't come"). This doubling also carries over to forms in which the verbal copula is dropped and the negation is joined to the nominal form, and such a phrase can be ambiguous: nikdo nezraněn ("nobody unscathed") can mean both "nobody healthy" and "all healthy". Similarly with nepřítomen nikdo ("nobody absent") or plánovány byly tři úkoly, nesplněn žádný ("three tasks were planned, none uncompleted"). The sentence všichni tam nebyli ("all were not there") means not "all were absent" but "not all were there" (i.e., at least one of them was absent). If all were absent, one would say nikdo tam nebyl ("nobody was there", literally "nobody wasn't there"). However, in many cases a double, triple, or quadruple negative can really work in such a way that each negative cancels out the next, and such a sentence may act as a trap and be incomprehensible to a less attentive addressee. For example, the sentence nemohu se nikdy neoddávat nečinnosti ("I can never not indulge in inaction") contains four negations, and it is very confusing which of them form a "double negative" and which cancel each other out.
Such confusing sentences can then diplomatically soften or blur rejection or unpleasant information or even agreement, but at the expense of intelligibility: nelze nevidět ("it can't be not seen"), nejsem nespokojen ("I'm not dissatisfied"), není nezajímavý ("it/he is not uninteresting"), nemohu nesouhlasit ("I can't disagree").
Baltic languages:
As with most synthetic satem languages, the double negative is mandatory in Latvian and Lithuanian. Furthermore, all verbs and indefinite pronouns in a given statement must be negated, so it could be said that multiple negation is mandatory in Latvian.
For instance, a statement "I have not ever owed anything to anyone" would be rendered as es nekad nevienam neko neesmu bijis parādā. The only alternative would be using a negating subordinate clause and subjunctive in the main clause, which could be approximated in English as "there has not ever been an instance that I would have owed anything to anyone" (nav bijis tā, ka es kādreiz būtu kādam bijis kaut ko parādā), where negative pronouns (nekad, neviens, nekas) are replaced by indefinite pronouns (kādreiz, kāds, kaut kas) more in line with the English "ever, any" indefinite pronoun structures.
Uralic languages:
Double or multiple negatives are grammatically required in Hungarian with negative pronouns: Nincs semmim (word for word: "[doesn't-exists] [nothing-of-mine]", and translates literally as "I do not have nothing") means "I do not have anything". Negative pronouns are constructed by means of adding the prefixes se-, sem-, and sen- to interrogative pronouns.
Something superficially resembling double negation is required also in Finnish, which uses the auxiliary verb ei to express negation. Negative pronouns are constructed by adding one of the suffixes -an, -än, -kaan, or -kään to interrogative pronouns: Kukaan ei soittanut minulle means "No one called me". These suffixes are, however, never used alone, but always in connection with ei. This phenomenon is commonplace in Finnish, where many words have alternatives that are required in negative expressions, for example edes for jopa ("even"), as in jopa niin paljon meaning "even so much", and ei edes niin paljoa meaning "not even so much".
Turkish:
Negative verb forms are grammatically required in Turkish phrases with negative pronouns or adverbs that impart a negative meaning on the whole phrase. For example, Hiçbir şeyim yok (literally, word for word, "Not-one thing-of-mine exists-not") means "I don't have anything". Likewise, Asla memnun değilim (literally, "Never satisfied not-I-am") means "I'm never satisfied".
Japanese:
Japanese employs litotes to phrase ideas in a more indirect and polite manner. Thus, one can indicate necessity by emphasizing that not doing something would not be proper. For instance, しなければならない (shinakereba naranai, "must", more literally "if not done, [can] not be") means "not doing [it] wouldn't be proper". しなければいけない (shinakereba ikenai, also "must", "if not done, can not go") similarly means "not doing [it] can't go forward".
Japanese:
Of course, indirectness can also be employed to put an edge on one's rudeness as well. Whilst "He has studied Japanese, so he should be able to write kanji" can be phrased 彼は日本語を勉強したから漢字で書けないわけがない (kare wa nihongo o benkyō shita kara kanji de kakenai wake ga nai), there is a harsher idea in it: "As he's studied Japanese, the reasoning that he can't write Kanji doesn't exist".
Chinese:
Mandarin Chinese and most other Chinese languages also employ litotes in a similar manner. One common construction is "不得不" (Pinyin: bù dé bù, "mustn't not" or "shalln't not"), which is used to express (or feign) a necessity more regretful and compelled than that expressed by "必须" (bìxū, "must"). Compared with "我必须走" (Wǒ bìxū zǒu, "I must go"), "我不得不走" (Wǒ bù dé bù zǒu, "I mustn't not go") emphasizes that the situation is out of the speaker's hands and that the speaker has no choice in the matter: "Unfortunately, I have got to go". Similarly, "没有人不知道" (méiyǒu rén bù zhīdào), or idiomatically "无人不知" (wú rén bù zhī, "There is no one who does not know"), is a more emphatic way to express "Everyone knows".
Chinese:
A double negative almost always resolves to a positive meaning, even more so in colloquial speech where the speaker particularly stresses the first negative word. Meanwhile, a triple negative resolves to a negative meaning, which carries a stronger negation than a single negative. For example, "我不覺得没有人不知道" (Wǒ bù juédé méiyǒu rén bù zhīdào, "I do not think there is no one who does not know") ambiguously means either "I don't think everyone knows" or "I think someone does not know". A quadruple negative further resolves to a positive meaning that carries a stronger affirmation than a double negative; for example, "我不是不知道没人不喜欢他" (Wǒ bú shì bù zhīdào méi rén bù xǐhuan tā, "It is not the case that I do not know that no one doesn't like him") means "I do know that everyone likes him". However, constructions beyond the triple negative are frequently perceived as obscure and are rarely encountered.
Historical development:
Many languages, including all living Germanic languages, French, Welsh and some Berber and Arabic dialects, have gone through a process known as Jespersen's cycle, where an original negative particle is replaced by another, passing through an intermediate stage employing two particles (e.g. Old French jeo ne dis → Modern Standard French je ne dis pas → Modern Colloquial French je dis pas "I don't say").
In many cases, the original sense of the new negative particle is not negative per se (thus in French pas "step", originally "not a step" = "not a bit"). However, in Germanic languages such as English and German, the intermediate stage was a case of double negation, as the current negatives not and nicht in these languages originally meant "nothing": e.g. Old English ic ne seah "I didn't see" >> Middle English I ne saugh nawiht, lit. "I didn't see nothing" >> Early Modern English I saw not. A similar development to a circumfix from double negation can be seen in non-Indo-European languages, too: for example, in Maltese, kiel "he ate" is negated as ma kielx "he did not eat", where the verb is preceded by a negative particle ma- "not" and followed by the particle -x, which was originally a shortened form of xejn "nothing"; thus, "he didn't eat nothing".
**Cribbage King / Gin King**
Cribbage King / Gin King:
Cribbage King / Gin King is a 1989 video game published by The Software Toolworks.
Gameplay:
Cribbage King / Gin King is a customizable card game package that can be played using either a keyboard or a mouse.
Reception:
Michael S. Lasky reviewed the game for Computer Gaming World, and stated that "Cribbage King is well worth its price. To have Gin King also included makes it a decided computer game bargain for any card shark."
**Ridge turret**
Ridge turret:
A ridge turret is a turret or small tower constructed over the ridge or apex between two or more sloping roofs of a building. It is usually built either as an architectural ornament for purely decorative purposes or for the practical housing of a clock, a bell or an observation platform. Its function is thus different from that of a roof lantern, despite a frequent similarity of external appearance. It can have a flat roof but usually has a pointed roof or another form of apex.
When the height of a roof turret exceeds its width it is usually called a tower or steeple in English architecture, and when the height of a ridge turret's roof exceeds its width, it is called a spire in English architecture or a flèche in French architecture.
**Aldehyde dehydrogenase 5 family, member A1**
Aldehyde dehydrogenase 5 family, member A1:
Succinate-semialdehyde dehydrogenase, mitochondrial is an enzyme that in humans is encoded by the ALDH5A1 gene.
Function:
This protein belongs to the aldehyde dehydrogenase family of proteins. This gene encodes a mitochondrial NAD+-dependent succinic semialdehyde dehydrogenase. A deficiency of this enzyme, known as 4-hydroxybutyric aciduria, is a rare inborn error in the metabolism of the neurotransmitter γ-aminobutyric acid (GABA). In response to the defect, physiologic fluids from patients accumulate GHB (γ-hydroxybutyrate), a compound with numerous neuromodulatory properties. Two transcript variants encoding distinct isoforms have been identified for this gene.
**INSL5**
INSL5:
Insulin-like peptide 5 (INSL5) is a protein that in humans is encoded by the INSL5 gene.
Function:
The protein encoded by this gene contains a classical signature of the insulin superfamily and is highly similar to relaxin 3 (RLN3/INSL7).
**Isotopes of hafnium**
Isotopes of hafnium:
Natural hafnium (72Hf) consists of five stable isotopes (176Hf, 177Hf, 178Hf, 179Hf, and 180Hf) and one very long-lived radioisotope, 174Hf, with a half-life of 7.0×10^16 years. In addition, there are 30 known synthetic radioisotopes, the most stable of which is 182Hf with a half-life of 8.9×10^6 years. This extinct radionuclide is used in hafnium–tungsten dating to study the chronology of planetary differentiation. No other radioisotope has a half-life over 1.87 years. Most isotopes have half-lives under 1 minute. There are also 26 known nuclear isomers, the most stable of which is 178m2Hf with a half-life of 31 years. All isotopes of hafnium are either radioactive or observationally stable, meaning that they are predicted to be radioactive but no actual decay has been observed.
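The quoted half-lives can be turned into remaining fractions with the standard decay law N(t)/N0 = 2^(−t/T½). A small illustrative Python sketch using the half-life values stated above (the function and constant names are invented for this example, not from any nuclear-data library):

```python
def remaining_fraction(t_years: float, half_life_years: float) -> float:
    """Fraction of a radionuclide remaining after t_years, from N/N0 = 2**(-t/T)."""
    return 2.0 ** (-t_years / half_life_years)

HALF_LIFE_HF182 = 8.9e6   # years, as stated above
HALF_LIFE_HF174 = 7.0e16  # years

# After one half-life, half of the 182Hf remains.
print(remaining_fraction(8.9e6, HALF_LIFE_HF182))  # 0.5

# Over the age of the Solar System (~4.57e9 years) essentially no 182Hf
# survives, which is why it is described as an extinct radionuclide.
print(remaining_fraction(4.57e9, HALF_LIFE_HF182) < 1e-100)  # True
```

This is the sense in which 182Hf is "extinct": its decay ran to completion early in Solar System history, and only its decay products remain today.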
**MapInfo Professional**
MapInfo Professional:
MapInfo Pro is a desktop geographic information system (GIS) software product produced by Precisely (formerly: Pitney Bowes Software and MapInfo Corporation) and used for mapping and location analysis. MapInfo Pro allows users to visualize, analyze, edit, interpret, understand and output data to reveal relationships, patterns, and trends. MapInfo Pro allows users to explore spatial data within a dataset, symbolize features, and create maps.
History:
Version 4 of the product, released in 1995, saw it renamed to "MapInfo Professional". Version 9.5 was released in June 2008, and version 9.5.1 in December 2008. The primary enhancements in these releases included a new graphics engine which allows for translucency and anti-aliasing when displaying maps. A set of CAD-like editing tools was also added.
Version 10 was released in June 2009. The primary enhancements included a more intuitive user interface, including a rewritten Layer Control dialog box, compatibility with PostGIS, and a PDF generator that supports both layered and georeferenced PDF files.
Version 10.5 was released in May 2010. The primary enhancements included a new Table Manager window, a built in ability to publish to MapInfo Stratus, ability to ingest Bing Maps directly as background mapping and enhanced support for Catalog Service for the Web (CSW).
Version 11 was released in June 2011. The primary enhancements included performance tuning and usability improvements in the Browser window for creating and analysing tabular data, and integration with MapInfo Manager, a product for managing spatial data and providing INSPIRE compliance. Support for 64-bit operating systems was improved with the ability to use up to 4 GB of RAM (instead of 2 GB, the limit when running on 32-bit operating systems).
Version 11.5 was released in June 2012. The primary enhancements include a new window for Creating Legends, further enhancements to the new Browser window (introduced in v11.0) and further integration with MapInfo Manager, including the ability to edit metadata within the Catalog Browser.
Version 12 was released in June 2013, with improvements to Cartographic Output; Support for Windows 8, SQL Server 2012, PostGIS2; and a new In-Product Notifications feature utilizing RSS.
Version 12.5 of MapInfo Pro was the first release to include a 64-bit version of the product. MapInfo Pro 12.5 32-bit was released in July 2014 and the 64-bit version in October 2014. The 64-bit release saw the introduction of a new ribbon UI and layout window, as well as a new framework to handle background processing and multi-threading.
Version 15 of MapInfo Pro 32 bit was released in June 2015 and 64 bit (15.2) was released in October 2015. Highlights include geopackage support as well as changes to the TAB file format to allow larger files and Unicode. The 64 bit version of 15.2 saw the introduction of MapInfo Pro Advanced as a new licensing level for the product which incorporates all new raster capabilities into the product including a .NET SDK. MapInfo Pro Advanced allows users to visualize very large raster files at high resolution such as 1m for a whole country and incorporating multiple satellite bands. This is achieved using a new multi resolution raster file format (.mrr).
Version 16 of MapInfo Pro 64 bit was released in September 2016. Notable features include redesigned Ribbon interface, new interactive interface for thematic mapping, WFS 2.0 and WMTS support, Geopackage support. All new 64-bit version of EasyLoader is included with the release.
Version 17.0 of MapInfo Pro 64 bit was released in April 2018. Python support was added.
Version 2019 of MapInfo Pro 64-bit was released in November 2019. Greatly extended SQL support is a key new feature. The parent company was rebranded as Precisely by its new owner, Syncsort.
Version 2021 of MapInfo Pro 64 bit was released in Oct 2021. Support for Time Series (mapping/visualizing geographic data changing over time) was added.
Uses:
MapInfo Pro is a 64-bit GIS (Geographic Information System) application used by GIS engineers and business analysts. Industry examples include: Insurance – Analyze exposure to risk from environmental or natural hazards such as floods, tornadoes, hurricanes or crime. Perform demographic and risk analysis to determine the best target locations to acquire new potential policy holders.
Environment – Analyze and assess environmental impacts such as pollution, erosion, invasive species, climate changes including human induced changes to the environment.
Engineering – Coordinate with local planning and engineering groups for construction projects. Assist related groups by helping them understand environmental impacts or locations of public or utility infrastructure such as water, gas and electrical services.
Telco – Produce coverage maps, visualize gaps in coverage, plan for additional coverage. Maximize new investment based on demographics, local terrain and available real estate for cell tower sites.
Marketing - The application of location intelligence to identify geographic areas in which to deliver marketing.
Retail Site Selection - Determining the optimum location to open or close a site (store, factory, depot etc.). The selection process is typically based on customers or worker location, demographics, buying patterns, transport links, nearby facilities.
Crime Analysis - Systematic analysis of spatial data for identifying and analyzing patterns and trends in crime and disorder.
Mineral Exploration - Visualisation of spatial data such as drill holes, soil samples, geophysical survey data, tenement boundaries and cadastral data.
System Features:
Data Format --- MapInfo Pro is a database which manages information as a system of tables. Each table is either a map file (graphic) or a database file (text) and is denoted by the file extension .TAB. MapInfo creates a visual display of the data in the form of a map (map window) and/or in tabular form (browser window). Once data has been referenced in a table, it is assigned X and Y coordinates so that the records can be displayed as objects on a map. This is known as geocoding.
Objects (points, lines, polygons) can be enhanced to highlight specific variations on a theme through the creation of a Thematic map. The basic data is overlaid with graphic styles (e.g. colour shades, hatch patterns) to display information on a more sophisticated level. For example, population density between urban and rural areas may show the cities in deep red (to indicate a high ratio of inhabitants per square mile), while showing remote areas in very pale red (to indicate a low concentration of inhabitants).
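The population-density example above amounts to a shading rule. A minimal Python sketch follows; the thresholds and shade labels are invented for illustration and are not MapInfo behavior.

```python
# Hypothetical thematic-shading rule: map a population density
# (inhabitants per square mile) to a colour shade, as a thematic map does.
def density_shade(density_per_sq_mile):
    """Return a shade label for a population-density theme."""
    if density_per_sq_mile >= 10000:
        return "deep red"       # dense urban core
    elif density_per_sq_mile >= 1000:
        return "medium red"     # suburban
    else:
        return "very pale red"  # remote/rural

print(density_shade(25000))  # deep red
print(density_shade(40))     # very pale red
```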
Retrieval of information is conducted using data filters and "Query" functions. Selecting an object in a map window or records in a browser produces a temporary table that provides a range of values specified by the end-user. More advanced Structured Query Language (SQL) analysis allows the user to combine a variety of operations to derive answers to complex questions. This may involve a combination of tables, and resultant calculations may include the number of points in polygons, proportional overlaps, and statistical breakdowns. The quantity and quality of the attributes associated with objects depend on the structure of the original tables.
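One of the calculations mentioned above, the number of points in polygons, can be sketched with the standard ray-casting test. This is a generic textbook algorithm, not MapInfo's implementation, and the region and point data are invented.

```python
# Ray-casting point-in-polygon test, used here to count how many points
# fall inside a polygon (one of the calculations mentioned above).
def point_in_polygon(x, y, polygon):
    """Return True if (x, y) lies inside the polygon (list of (x, y) vertices)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Toggle on each edge crossed by a horizontal ray extending rightward.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Count customer points falling inside a sales region (hypothetical data).
region = [(0, 0), (4, 0), (4, 4), (0, 4)]
points = [(1, 1), (2, 3), (5, 5), (-1, 2)]
print(sum(point_in_polygon(px, py, region) for px, py in points))  # 2
```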
Vector analysis is a primary function of MapInfo, based on X and Y coordinates, and the user can create and edit data directly with commands such as node editing, combine, split, erase, buffer, and clip region. MapInfo Pro includes a range of engineering “CAD-like” drawing and editing tools such as lines, circles, and polygons (referred to as "regions") which can be incorporated into tables or drawn as temporary overlays.
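One of the vector commands listed above, buffer, can be sketched as generating an approximate circle of vertices around a point. This is a hypothetical illustration of the geometric idea, not MapInfo's buffer command.

```python
import math

# Hypothetical sketch of a point buffer: approximate the circular buffer
# around (x, y) as a polygon with n_vertices vertices.
def buffer_point(x, y, radius, n_vertices=16):
    """Return the buffer ring around (x, y) as a list of (x, y) vertices."""
    return [
        (x + radius * math.cos(2 * math.pi * i / n_vertices),
         y + radius * math.sin(2 * math.pi * i / n_vertices))
        for i in range(n_vertices)
    ]

ring = buffer_point(0.0, 0.0, 10.0)
print(len(ring))  # 16 vertices approximating the circle
```

More vertices give a smoother ring; GIS buffers on lines and regions generalize this by offsetting every edge.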
Printout of MapInfo maps and/or statistics is managed through design settings in the Layout Window. Layout design enables the creation of composite presentations with maps, tables, legends, text, images, lines and shapes. Output hardware includes large-format plotters and high-spec business printers.
Data from MapInfo may be embedded into applications such as Microsoft PowerPoint or Word using copy/paste commands and resized as required.
Compatibility with External Software Systems --- MapInfo Pro can read and write other file formats for data exchange with applications such as:
ESRI Shapefile and AutoCAD DXF
CSV and delimited ASCII text
Microsoft Excel and Microsoft Access
Bitmaps or Raster Formats such as GeoTIFF, ECW, MrSID, JPEG, PNG, MRR
Spatial Databases: Oracle, PostGIS, SQL Server, SQLite and GeoPackage
Open Geospatial Consortium Web Services: Web Feature Service, Web Map Service, Catalog Service for the Web
Web Base Maps: Bing, OpenStreetMap (OSM)
Historical Notes:
With MapInfo Professional, the Sydney Organising Committee for the Olympic Games (SOCOG) created hundreds of maps for the longest torch relay in the history of the modern games. The Olympic Torch Relay covered 26,940 kilometres (16,740 miles) in 100 days and traversed Australia by road, railway and boat. The torch route was designed to ensure that more than 85 percent of the Australian population was within a one-hour drive of the chosen route, which passed through 1,000 towns. In addition, TNT Express used MapInfo to map more than 5,500 delivery routes to deliver Olympic tickets to more than 400,000 Australian homes.
**Phosphinane**
Phosphinane:
Phosphinane is the organophosphorus compound with the formula (CH2)5PH. This colorless liquid is the parent member of a family of six-membered, saturated rings containing phosphorus. These compounds are mainly of academic interest. The ring adopts a flexible cyclohexane-like chair conformation. Phosphinane can be prepared via the Arbuzov reaction of triethylphosphite and 1,5-dibromopentane followed by cyclization and reduction steps. Phosphinane can also be prepared by reduction of 1-chlorophosphinane, which in turn is obtained by the reaction of 1-phenylphosphinane and phosphorus trichloride.
**Navel**
Navel:
The navel (clinically known as the umbilicus; PL: umbilici or umbilicuses; commonly known as the belly button or tummy button) is a protruding, flat, or hollowed area on the abdomen at the attachment site of the umbilical cord. All placental mammals have a navel, although it is generally more conspicuous in humans.
Structure:
The umbilicus is used to visually separate the abdomen into quadrants. It is a prominent scar on the abdomen, with its position being relatively consistent among humans. The skin around the waist at the level of the umbilicus is supplied by the tenth thoracic spinal nerve (T10 dermatome). The umbilicus itself typically lies at a vertical level corresponding to the junction between the L3 and L4 vertebrae, with normal variation among people between the L3 and L5 vertebrae. Parts of the adult navel include the "umbilical cord remnant" or "umbilical tip", the often protruding scar left by the detachment of the umbilical cord. This is located in the center of the navel, sometimes described as the belly button. Around the cord remnant is the "umbilical collar", formed by the dense fibrous umbilical ring. Surrounding the umbilical collar is the periumbilical skin. Directly behind the navel is a thick fibrous cord formed from the umbilical cord, called the urachus, which originates from the bladder. The navel is unique to each individual because it is a scar, and various general forms have been classified by medical practitioners.
Outie: A navel consisting of the umbilical tip protruding past the periumbilical skin is an outie. Essentially any navel which is not concave.
Swirly/spiral: A rare form in which the umbilical cord scar forms a swirl shape.
Split: The protruding umbilical cord scar extends outwards, but is cleft in two by a fissure which extends part or all the way through the umbilical cord scar. This form is similar in appearance to a coffee bean.
Protrusion: The umbilical cord remnant is completely divulged, exposing the full umbilical scar.
Circlet: Although the entirety of the umbilical cord remnant sits outside the umbilical collar, the centre of the knot is inset by a deep fissure. Unlike a split outie, in this form the fissure is contained centrally and does not extend past the umbilical cord remnant in any direction, much akin to a 'donut' shape.
Innie: A navel in which the umbilical tip does not protrude past the periumbilical skin. Any navel which is concave.
Round: Round navels are completely circular with no hooding.
Vertical: Some navels present in the form of a more elongate hollow parallel with the linea alba.
Oval: This form consists of three variants; superior hooding, inferior hooding, no hooding.
T-shaped: As the name states, the scar is in the shape of a T, and may have superior hooding to various extent.
Horizontal: The scar is the least visible, as the natural lines of the tendinous intersection fold over the scar.
Distorted: Any navel which does not fit well into any of the other categories.
Clinical significance:
Disorders
Outies are sometimes mistaken for umbilical hernias; however, they are a completely different shape with no health concern, unlike an umbilical hernia. The navel (specifically the abdominal wall) would be considered an umbilical hernia if the protrusion were 5 centimeters or more. The diameter of an umbilical hernia is usually 1/2 inch or more. Navels that are concave are nicknamed "innies". While the shape of the human navel may be affected by long-term changes to diet and exercise, an unexpected change in shape may be the result of ascites.
In addition to change in shape being a possible side effect from ascites and umbilical hernias, the navel can be involved in umbilical sinus or fistula, which in rare cases can lead to menstrual or fecal discharge from the navel. Menstrual discharge from the umbilicus is a rare disorder associated with umbilical endometriosis.
Other disorders
Omphalitis is an inflammatory condition of the umbilicus in the newborn, usually caused by a bacterial infection.
Omphalophobia is the fear of belly buttons. People suffering from omphalophobia are terrified of belly buttons—their own or, in some cases, those of others. They do not like touching their belly buttons (or even other people touching it). Sometimes just seeing a belly button is enough to make them feel disgusted or terrified.
Surgery
To minimize scarring, the navel is a recommended site of incision for various surgeries, including transgastric appendicectomy, gall bladder surgery, and the umbilicoplasty procedure itself.
Fashion, society and culture:
The public exposure of the male and female midriff and bare navel was considered taboo at times in the past in Western cultures, being considered immodest or indecent. Female navel exposure was banned in some jurisdictions, but community perceptions have changed to this now being acceptable. The crop top is a shirt that often exposes the belly button and has become more common among young people. Exposure of the male navel has rarely been stigmatised and has become particularly popular in recent years, due to the strong resurgence of the male crop top and male navel piercing. The navel and midriff are often also displayed in bikinis, or when low-rise pants are worn.
While the West was relatively resistant to navel-baring clothing until the 1980s, it has long been a fashion with Indian women, often displayed with saris or lehengas.
The Japanese have long had a special regard for the navel. During the early Jōmon period in northern Japan, three small balls indicating the breasts and navel were pasted onto flat clay objects to represent the female body. The navel was exaggerated in size, informed by the belief that the navel symbolized the center where life began. In Arabic-Levantine culture, belly dancing is a popular art form that consists of dance movements focused on the torso and navel. Buddhism and Hinduism refer to the chakra of the navel as the manipura. In qigong, the navel is seen as the main energy centre, or dantian. In Hinduism, the Kundalini energy is sometimes described as being located at the navel.
**Interosseous muscles of the hand**
Interosseous muscles of the hand:
The interosseous muscles of the hand are muscles found near the metacarpal bones that help to control the fingers. They are considered voluntary muscles.
They are generally divided into two sets: 4 Dorsal interossei - Abduct the digits away from the 3rd digit (away from axial line) and are bipennate.
3 Palmar interossei - Adduct the digits towards the 3rd digit (towards the axial line) and are unipennate.
This is often remembered by the mnemonic PAD-DAB, as the Palmar interosseous muscles ADduct, and the Dorsal interosseous muscles ABduct. The axial line is an imaginary line running down the middle of the 3rd digit, towards the palm of the hand.
Both sets of muscles are innervated by the deep branch of the ulnar nerve.
**Structured light**
Structured light:
Structured light is the process of projecting a known pattern (often grids or horizontal bars) on to a scene. The way that these deform when striking surfaces allows vision systems to calculate the depth and surface information of the objects in the scene, as used in structured light 3D scanners.
Invisible (or imperceptible) structured light uses structured light without interfering with other computer vision tasks for which the projected pattern will be confusing. Example methods include the use of infrared light or of extremely high frame rates alternating between two exact opposite patterns.
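The depth calculation described above, recovering distance from how a projected pattern shifts on a surface, can be sketched with the classic triangulation relation for a rectified projector-camera pair. The calibration numbers below (focal length, baseline, disparity) are invented for illustration, not from any specific scanner.

```python
# Simplified structured-light depth recovery: in a rectified projector-camera
# setup, a projected stripe shifts horizontally by a disparity d (pixels)
# that relates to surface depth by triangulation.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic triangulation: depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A stripe shifted by 50 px, with an 800 px focal length and 0.2 m baseline:
print(depth_from_disparity(800, 0.2, 50))  # 3.2 (metres)
```

Larger disparities correspond to nearer surfaces, which is why the deformation of the pattern encodes the scene's depth.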
Structured light is used by a number of police forces for the purpose of photographing fingerprints in a 3D scene. Where previously they would use tape to extract the fingerprint and flatten it out, they can now use cameras and flatten the fingerprint digitally, which allows the process of identification to begin before the officer has even left the scene.
**Cervical ectropion**
Cervical ectropion:
Cervical ectropion is a condition in which the cells from the 'inside' of the cervical canal, known as glandular cells (or columnar epithelium), are present on the 'outside' of the vaginal portion of the cervix. The cells on the 'outside' of the cervix are typically squamous epithelial cells. Where the two cell types meet is called the transformation zone. Cervical ectropion can be grossly indistinguishable from early cervical cancer and must be evaluated by a physician to determine risks and prognosis. It may be found incidentally when a vaginal examination (or pap smear test) is done. The area may look red because the glandular cells are red. While many women are born with cervical ectropion, it can also arise for a number of reasons, such as hormonal changes (making it common in young women), use of oral contraceptives, and pregnancy.
Signs and symptoms:
Cervical ectropion can be associated with excessive, non-purulent vaginal discharge due to the increased surface area of columnar epithelium containing mucus-secreting glands as well as intermenstrual bleeding (bleeding outside of regular menses). It may also give rise to post-coital bleeding, as fine blood vessels present within the columnar epithelium are easily traumatized.
Causes:
Cervical ectropion is a normal phenomenon, especially in the ovulatory phase in younger women, during pregnancy, and in women taking oral contraceptives, which increase the total estrogen level in the body. It may also be congenital, due to persistence of the squamocolumnar junction, which is normally present prior to birth.
Mucopurulent cervicitis may increase the size of the cervical ectropion.
Mechanism:
The squamocolumnar junction, where the columnar secretory epithelium of the endocervical canal meets the stratified squamous covering of the ectocervix, is located at the external os before puberty. As estrogen levels rise during puberty, the cervical os opens, exposing the endocervical columnar epithelium onto the ectocervix. This area of columnar cells on the ectocervix forms an area that is red and raw in appearance called an ectropion (cervical erosion). It is then exposed to the acidic environment of the vagina and, through a process of squamous metaplasia, transforms into stratified squamous epithelium.
Treatment:
Usually no treatment is indicated for clinically asymptomatic cervical ectropions. Hormonal therapy may be indicated for symptomatic erosion. If it becomes troublesome to the patient, it can be treated by discontinuing oral contraceptives, cryotherapy treatment, or by using ablation treatment under local anesthetic. Ablation involves using a preheated probe (100 °C) to destroy 3–4 mm of the epithelium. In post-partum erosion, observation and re-examination are necessary for 3 months after labour.
**Steroid-induced skin atrophy**
Steroid-induced skin atrophy:
Steroid-induced skin atrophy is thinning of the skin as a result of prolonged exposure to steroids. In people with psoriasis using topical steroids it occurs in up to 5% of people after a year of use. Skin atrophy can occur with both prescription and over-the-counter steroid creams. Low doses of prednisone by mouth can also result in skin atrophy.
Signs and symptoms:
It can also present with telangiectasia, easy bruising, purpura, and striae. Occlusive dressings and fluorinated steroids both increase the likelihood of developing atrophy.
Prevention:
In general, a potent preparation is used short term and a weaker preparation for maintenance between flare-ups. While there is no proven best benefit-to-risk ratio, if prolonged use of a topical steroid on a skin surface is required, pulse therapy should be undertaken.
Prevention:
Pulse therapy refers to the application of a corticosteroid for 2 or 3 consecutive days each week or two. This is useful for maintaining control of chronic diseases. Generally a milder topical steroid or non-steroid treatment is used on the in-between days. Strong steroids should be avoided on sensitive sites such as the face, groin and armpits. Even the application of weaker or safer steroids should be limited to less than two weeks on those sites.
Treatment:
The obvious priority is immediate discontinuation of any further topical corticosteroid use. Protection and support of the impaired skin barrier is another priority. Eliminating harsh skin regimens or products will be necessary to minimize potential for further purpura or trauma, skin sensitivity, and potential infection. Steroid-induced skin atrophy is often permanent, though if caught soon enough and the topical corticosteroid discontinued in time, the degree of damage may be arrested or slightly improve. However, while the accompanying telangiectasias may improve marginally, the stretch marks are permanent and irreversible.
**Thioxoethenylidene**
Thioxoethenylidene:
Thioxoethenylidene is a reactive heteroallene molecule with the formula CCS.
Occurrence:
CCS is found in space in large quantities, including in the Taurus Molecular Cloud at the positions TMC-1, TMC-1c and L1521B. These are likely young starless molecular cloud cores.
Production:
By condensing propadienedithione (SCCCS) or thioxopropadienone (OCCCS) in solid argon and irradiating with ultraviolet radiation, CCS is formed. Another route is a glow discharge in a mixture of carbon disulfide and helium. Yet another is electron irradiation of sulfur-containing heterocycles. CCS and the anion CCS− can also be formed in solid neon matrices.
Properties:
CCS can act as a ligand. It can form an asymmetrical bridge between two molybdenum atoms in Mo2(μ,σ(C):η2(C′S)-CCS)(CO)4(hydrotris(3,5-dimethylpyrazol-1-yl)borate)2. In this complex, one carbon atom has a triple bond to one molybdenum and the other carbon has a double bond to the other molybdenum atom, which also has a single bond to the sulfur atom. The ultraviolet spectrum shows absorption bands between 2800 and 3370 Å and also in the near infrared between 7500 and 10000 Å.
CCS can react with CCCS to form C5S. The infrared spectrum in solid argon shows a vibration band at 1666.6 cm−1 called ν1 and another, ν2, at 862.7 cm−1. The 2ν1 overtone is at 3311.1 cm−1. A combination vibration and bending band is at 2763.4 cm−1. The microwave spectrum has emission lines 43 − 32 at 45.4 GHz and 21 − 10 at 22.3 GHz, which are important for the detection of the molecule in molecular clouds. Theoretical predictions give a C–C bond length of 1.304 Å and a C–S bond length of 1.550 Å.
**Testosterone butyrate**
Testosterone butyrate:
Testosterone butyrate, or testosterone butanoate, also known as androst-4-en-17β-ol-3-one 17β-butanoate, is a synthetic, steroidal androgen and an androgen ester – specifically, the C17β butanoate ester of testosterone – which was first synthesized in the 1930s and was never marketed. Its ester side-chain length and duration of effect are intermediate between those of testosterone propionate and testosterone valerate.
**Adjunct (grammar)**
Adjunct (grammar):
In linguistics, an adjunct is an optional, or structurally dispensable, part of a sentence, clause, or phrase that, if removed or discarded, will not structurally affect the remainder of the sentence. Example: In the sentence John helped Bill in Central Park, the phrase in Central Park is an adjunct. A more detailed definition of the adjunct emphasizes its attribute as a modifying form, word, or phrase that depends on another form, word, or phrase, being an element of clause structure with adverbial function. An adjunct is not an argument (nor is it a predicative expression), and an argument is not an adjunct. The argument–adjunct distinction is central in most theories of syntax and semantics. The terminology used to denote arguments and adjuncts can vary depending on the theory at hand. Some dependency grammars, for instance, employ the term circonstant (instead of adjunct), following Tesnière (1959).
The area of grammar that explores the nature of predicates, their arguments, and adjuncts is called valency theory. Predicates have valency; they determine the number and type of arguments that can or must appear in their environment. The valency of predicates is also investigated in terms of subcategorization.
Examples:
Take the sentence John helped Bill in Central Park on Sunday as an example: John is the subject argument.
helped is the predicate.
Bill is the object argument.
in Central Park is the first adjunct.
on Sunday is the second adjunct.
An adverbial adjunct is a sentence element that often establishes the circumstances in which the action or state expressed by the verb takes place. The following sentence uses adjuncts of time and place: Yesterday, Lorna saw the dog in the garden. Notice that this example is ambiguous between whether the adjunct in the garden modifies the verb saw (in which case it is Lorna who saw the dog while she was in the garden) or the noun phrase the dog (in which case it is the dog who is in the garden). The definition can be extended to include adjuncts that modify nouns or other parts of speech (see noun adjunct).
Forms and domains:
An adjunct can be a single word, a phrase, or an entire clause.
Single word: She will leave tomorrow.
Phrase: She will leave in the morning.
Clause: She will leave after she has had breakfast.
Most discussions of adjuncts focus on adverbial adjuncts, that is, on adjuncts that modify verbs, verb phrases, or entire clauses, like the adjuncts in the three examples just given. Adjuncts can appear in other domains, however; that is, they can modify most categories. An adnominal adjunct is one that modifies a noun: for a list of possible types of these, see Components of noun phrases. Adjuncts that modify adjectives and adverbs are occasionally called adadjectival and adadverbial.
the discussion before the game – before the game is an adnominal adjunct.
very happy – very is an "adadjectival" adjunct.
too loudly – too is an "adadverbial" adjunct.
Adjuncts are always constituents. Each of the adjuncts in the examples throughout this article is a constituent.
Semantic function:
Adjuncts can be categorized in terms of the functional meaning that they contribute to the phrase, clause, or sentence in which they appear. The following list of semantic functions is by no means exhaustive, but it does include most of the semantic functions of adjuncts identified in the literature on adjuncts:
Causal – Causal adjuncts establish the reason for, or purpose of, an action or state.
The ladder collapsed because it was old. (reason)
Concessive – Concessive adjuncts establish contrary circumstances.
Lorna went out although it was raining.
Conditional – Conditional adjuncts establish the condition in which an action occurs or state holds.
I would go to Paris, if I had the money.
Consecutive – Consecutive adjuncts establish an effect or result.
It rained so hard that the streets flooded.
Final – Final adjuncts establish the goal of an action (what one wants to accomplish).
He works a lot to earn money for school.
Instrumental – Instrumental adjuncts establish the instrument used to accomplish an action.
Mr. Bibby wrote the letter with a pencil.
Locative – Locative adjuncts establish where, to where, or from where a state or action happened or existed.
She sat on the table. (locative)
Measure – Measure adjuncts establish the measure of the action, state, or quality that they modify.
I am completely finished.
That is mostly true.
We want to stay in part.
Modal – Modal adjuncts establish the extent to which the speaker views the action or state as (im)probable.
They probably left.
In any case, we didn't do it.
That is perhaps possible.
I'm definitely going to the party.
Modificative – Modificative adjuncts establish how the action happened or the state existed.
He ran with difficulty. (manner)
He stood in silence. (state)
He helped me with my homework. (limiting)
Temporal – Temporal adjuncts establish when, how long, or how frequently the action or state happened or existed.
He arrived yesterday. (time point)
He stayed for two weeks. (duration)
She drinks in that bar every day. (frequency)
Distinguishing between predicative expressions, arguments, and adjuncts:
Omission diagnostic
The distinction among arguments, adjuncts, and predicates is central to most theories of syntax and grammar. Predicates take arguments and they permit (certain) adjuncts. The arguments of a predicate are necessary to complete the meaning of the predicate. The adjuncts of a predicate, in contrast, provide auxiliary information about the core predicate-argument meaning, which means they are not necessary to complete the meaning of the predicate. Adjuncts and arguments can be identified using various diagnostics. The omission diagnostic, for instance, helps identify many arguments and thus indirectly many adjuncts as well. If a given constituent cannot be omitted from a sentence, clause, or phrase without resulting in an unacceptable expression, that constituent is NOT an adjunct, e.g.
a. Fred certainly knows.
b. Fred knows. – certainly may be an adjunct (and it is).
a. He stayed after class.
b. He stayed. – after class may be an adjunct (and it is).
a. She trimmed the bushes.
b. *She trimmed. – the bushes is NOT an adjunct.
a. Jim stopped.
b. *Stopped. – Jim is NOT an adjunct.
Other diagnostics
Further diagnostics used to distinguish between arguments and adjuncts include multiplicity, distance from head, and the ability to coordinate. A head can have multiple adjuncts but only one object argument (=complement):
a. Bob ate the pizza. – the pizza is an object argument (=complement).
b. Bob ate the pizza and the hamburger. – the pizza and the hamburger is a noun phrase that functions as object argument.
c. Bob ate the pizza with a fork. – with a fork is an adjunct.
d. Bob ate the pizza with a fork on Tuesday. – with a fork and on Tuesday are both adjuncts.
Object arguments are typically closer to their head than adjuncts:
a. the collection of figurines (complement) in the dining room (adjunct)
b. *the collection in the dining room (adjunct) of figurines (complement)
Adjuncts can be coordinated with other adjuncts, but not with arguments:
a. *Bob ate the pizza and with a fork.
b. Bob ate with a fork and with a spoon.
Optional arguments vs. adjuncts
The distinction between arguments and adjuncts is much less clear than the simple omission diagnostic (and the other diagnostics) suggests. Most accounts of the argument vs. adjunct distinction acknowledge a further division. One distinguishes between obligatory and optional arguments. Optional arguments pattern like adjuncts when just the omission diagnostic is employed, e.g.
a. Fred ate a hamburger.
b. Fred ate. – a hamburger is NOT an obligatory argument, but it could be (and it is) an optional argument.
a. Sam helped us.
b. Sam helped. – us is NOT an obligatory argument, but it could be (and it is) an optional argument.
The existence of optional arguments blurs the line between arguments and adjuncts considerably. Further diagnostics (beyond the omission diagnostic and the others mentioned above) must be employed to distinguish between adjuncts and optional arguments. One such diagnostic is the relative clause test. The test constituent is moved from the matrix clause to a subordinate relative clause containing which occurred/happened. If the result is unacceptable, the test constituent is probably NOT an adjunct:
a. Fred ate a hamburger.
b. Fred ate. – a hamburger is not an obligatory argument.
c. *Fred ate, which occurred a hamburger. – a hamburger is not an adjunct, which means it must be an optional argument.
a. Sam helped us.
b. Sam helped. – us is not an obligatory argument.
c. *Sam helped, which occurred us. – us is not an adjunct, which means it must be an optional argument.
The particular merit of the relative clause test is its ability to distinguish between many argument and adjunct PPs, e.g.
a. We are working on the problem.
b. We are working.
c. *We are working, which is occurring on the problem. – on the problem is an optional argument.
a. They spoke to the class.
b. They spoke.
c. *They spoke, which occurred to the class. – to the class is an optional argument.
The reliability of the relative clause diagnostic is actually limited. For instance, it incorrectly suggests that many modal and manner adjuncts are arguments. This fact bears witness to the difficulty of providing an absolute diagnostic for the distinctions currently being examined. Despite the difficulties, most theories of syntax and grammar distinguish on the one hand between arguments and adjuncts and on the other hand between optional arguments and adjuncts, and they grant a central position to these divisions in the overarching theory.
Predicates vs. adjuncts:
Many phrases have the outward appearance of an adjunct but are in fact (part of) a predicate instead. The confusion occurs often with copular verbs, in particular with a form of be, e.g.
It is under the bush.
The party is at seven o'clock.
The PPs in these sentences are NOT adjuncts, nor are they arguments. The preposition in each case is, rather, part of the main predicate. The matrix predicate in the first sentence is is under; this predicate takes the two arguments It and the bush. Similarly, the matrix predicate in the second sentence is is at; this predicate takes the two arguments The party and seven o'clock. Distinguishing between predicates, arguments, and adjuncts becomes particularly difficult when secondary predicates are involved, for instance with resultative predicates, e.g.
That made him tired.
The resultative adjective tired can be viewed as an argument of the matrix predicate made. But it is also definitely a predicate over him. Such examples illustrate that distinguishing predicates, arguments, and adjuncts can become difficult, and there are many cases where a given expression functions in more than one way.
Overview
The following overview is a breakdown of the current divisions. It acknowledges three types of entities: predicates, arguments, and adjuncts, whereby arguments are further divided into obligatory and optional ones.
Representing adjuncts:
Many theories of syntax and grammar employ trees to represent the structure of sentences. Various conventions are used to distinguish between arguments and adjuncts in these trees. In phrase structure grammars, many adjuncts are distinguished from arguments insofar as the adjuncts of a head predicate will appear higher in the structure than the object argument(s) of that predicate. The adjunct is adjoined to a projection of the head predicate above and to the right of the object argument, e.g.
The object argument each time is identified insofar as it is a sister of V that appears to the right of V, and the adjunct status of the adverb early and the PP before class is seen in the higher position to the right of and above the object argument. Other adjuncts, in contrast, are assumed to adjoin to a position that is between the subject argument and the head predicate or above and to the left of the subject argument, e.g.
Representing adjuncts:
The subject is identified as an argument insofar as it appears as a sister and to the left of V(P). The modal adverb certainly is shown as an adjunct insofar as it adjoins to an intermediate projection of V or to a projection of S.
In X-bar theory, adjuncts are represented as elements that are sisters to X' levels and daughters of X' level [X' adjunct [X'...]].
Theories that assume sentence structure to be less layered than the analyses just given sometimes employ a special convention to distinguish adjuncts from arguments. Some dependency grammars, for instance, use an arrow dependency edge to mark adjuncts, e.g.
The arrow dependency edge points away from the adjunct toward the governor of the adjunct. The arrows identify six adjuncts: Yesterday, probably, many times, very, very long, and that you like. The standard, non-arrow dependency edges identify Sam, Susan, that very long story that you like, etc. as arguments (of one of the predicates in the sentence). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**3-O-Methyldopa**
3-O-Methyldopa:
3-O-Methyldopa (3-OMD) is one of the most important metabolites of L-DOPA, a drug used in the treatment of Parkinson's disease.
3-O-Methyldopa:
3-O-methyldopa is produced by the methylation of L-DOPA by the enzyme catechol-O-methyltransferase. The necessary cofactor for this enzymatic reaction is S-adenosylmethionine (SAM). Its half-life (approximately 15 hours) is longer than L-DOPA's half-life, which is about one hour. This means that it accumulates in the plasma and brain of patients on chronic L-DOPA therapy, such as people with Parkinson's disease.
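The difference in half-lives explains the accumulation. As an illustration only, under a simple one-compartment model with first-order elimination (an assumption for this sketch, not a pharmacokinetic claim from the text, and the 6-hour dosing interval is likewise hypothetical), the steady-state accumulation factor for repeated dosing can be computed as:

```python
def accumulation_factor(half_life_h: float, dosing_interval_h: float) -> float:
    """Steady-state accumulation factor for repeated dosing in a simple
    one-compartment model with first-order elimination:
        R = 1 / (1 - 2 ** (-tau / t_half))
    where tau is the dosing interval and t_half the elimination half-life.
    """
    return 1.0 / (1.0 - 2.0 ** (-dosing_interval_h / half_life_h))

# With the half-lives from the text (3-OMD ~15 h, L-DOPA ~1 h) and an
# illustrative 6-hour dosing interval, 3-OMD accumulates far more:
r_3omd = accumulation_factor(15.0, 6.0)   # ~4.1x
r_ldopa = accumulation_factor(1.0, 6.0)   # ~1.02x
```

The roughly fourfold accumulation of the long-lived metabolite versus the negligible accumulation of L-DOPA itself is consistent with the elevated 3-OMD levels described above.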
3-O-Methyldopa:
3-OMD is often elevated in the plasma and cerebrospinal fluid of Parkinson's disease patients taking L-DOPA.
Effects:
Recent studies suggest that 3-OMD affects chronic L-DOPA treatment. Evidence includes: Higher levels of dyskinesia.
L-DOPA related motor dysfunction.
Inhibition of striatal uptake of tyrosine.
Competition with L-DOPA for the blood–brain barrier transporter system.
Inhibition of dopamine release.
In relation to levodopa:
The most common and important treatment for Parkinson's disease is L-DOPA, used in all patients at every stage of the disease. It reduces the symptoms of the disease; in fact, almost all patients treated with this drug show considerable improvement. However, there is controversy over whether L-DOPA and 3-OMD may be toxic.
Some studies have proposed that 3-OMD increases homocysteine levels, and this amino acid induces cardiovascular disease and neuronal damage. Other possible toxic effects include oxidative DNA damage, which can cause cell death, decreased locomotor activity, and a reduction in mitochondrial membrane potential.
Modulation:
Administered L-DOPA must cross the blood–brain barrier (BBB) to compensate for the lack of dopamine in patients with Parkinson's. Because of the high peripheral degradation rate of L-DOPA, high doses are required to deliver adequate levels of the drug across the blood–brain barrier, and these high doses are often associated with dopaminergic side effects. For this reason, several studies have reported mechanisms that can prolong the concentration of L-DOPA. Compounds capable of decreasing 3-O-methyldopa formation, like entacapone, tolcapone and opicapone (COMT inhibitors), when administered in combination with L-DOPA, lead to prolonged availability of the drug, thereby prolonging its effects.
Modulation:
On the other hand, the possibility of blocking peripheral decarboxylation by adding an aromatic amino acid decarboxylase (AADC) inhibitor has been studied. This increases the methylation of L-DOPA and thus the concentration of 3-O-methyldopa. Clivel Charlton et al. demonstrated that 3-OMD accumulation from long-term L-DOPA treatment may be involved in the adverse effects of L-DOPA therapy, although more studies are needed to corroborate this.
Metabolic pathway:
3-O-methyldopa is a major metabolite of L-3,4-dihydroxyphenylalanine (L-DOPA) and is formed by catechol-O-methyltransferase (COMT).
L-DOPA plays the main role in the metabolic pathway as a metabolite in the biosynthesis of dopamine. This reaction happens through decarboxylation by aromatic amino acid decarboxylase (AADC), also called DOPA decarboxylase (DDC).
Furthermore, L-DOPA can also be methylated to 3-O-methyldopa. When DDC is blocked by a decarboxylase inhibitor, COMT becomes the main metabolic pathway catalyzing this conversion of levodopa.
This process is catalyzed by catechol-O-methyltransferase (COMT). The metabolite of L-DOPA thus formed, 3-OMD, is transaminated to vanilpyruvate by tyrosine aminotransferase. Vanilpyruvate is then reduced to vanillactate, predominantly by aromatic α-keto acid reductase and also by lactate dehydrogenase. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**File eXchange Protocol**
File eXchange Protocol:
File eXchange Protocol (FXP or FXSP) is a method of data transfer which uses FTP to transfer data from one remote server to another (inter-server) without routing this data through the client's connection. Conventional FTP involves a single server and a single client; all data transmission is done between these two. In the FXP session, a client maintains a standard FTP connection to two servers, and can direct either server to connect to the other to initiate a data transfer. The advantage of using FXP over FTP is evident when a high-bandwidth server demands resources from another high-bandwidth server, but only a low-bandwidth client, such as a network administrator working away from location, has the authority to access the resources on both servers.
Risk:
Enabling FXP support can make a server vulnerable to an exploit known as FTP bounce. As a result of this, FTP server software often has FXP disabled by default. Some sites restrict IP addresses to trusted sites to limit this risk.
FXP over SSL:
Some FTP servers such as glFTPd, cuftpd, RaidenFTPD, drftpd, and wzdftpd support negotiation of a secure data channel between two servers using either of the FTP protocol extension commands: CPSV or SSCN. This normally works by the client issuing CPSV in lieu of the PASV command—or by sending SSCN prior to PASV transfers—which instructs the server to create an SSL or TLS connection. However, both methods—CPSV and SSCN—may be susceptible to man-in-the-middle attacks if the two FTP servers do not verify each other's SSL certificates. SSCN was first introduced by RaidenFTPD and SmartFTP in 2003 and has been widely adopted.
Technical:
Although FXP is often considered a distinct protocol, it is in fact merely an extension of the FTP protocol and is specified in RFC 959:

    User-PI - Server A (Dest)         User-PI - Server B (Source)
    ------------------                ------------------
    C->A : Connect                    C->B : Connect
    C->A : PASV
    A->C : 227 Entering Passive Mode. A1,A2,A3,A4,a1,a2
                                      C->B : PORT A1,A2,A3,A4,a1,a2
                                      B->C : 200 Okay
    C->A : STOR                       C->B : RETR
    B->A : Connect to HOST-A, PORT-a
| kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
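The six comma-separated numbers in the 227 reply encode the destination's IPv4 address and 16-bit data port, which the FXP client relays verbatim to the source server in its PORT command. A minimal sketch of that decoding (the helper name is ours, not part of any FTP library):

```python
import re

def parse_pasv(reply: str) -> tuple[str, int]:
    """Decode a '227 Entering Passive Mode' reply into (host, port).

    The six numbers h1,h2,h3,h4,p1,p2 give the IPv4 address h1.h2.h3.h4
    and the 16-bit data port p1*256 + p2 -- exactly the values an FXP
    client forwards to the source server via PORT.
    """
    m = re.search(r"(\d+),(\d+),(\d+),(\d+),(\d+),(\d+)", reply)
    if m is None:
        raise ValueError("not a 227 PASV reply: " + reply)
    h1, h2, h3, h4, p1, p2 = map(int, m.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

host, port = parse_pasv("227 Entering Passive Mode (10,0,0,2,19,137)")
# host == "10.0.0.2", port == 19 * 256 + 137 == 5001
```

In a real FXP session the client would then send "PORT 10,0,0,2,19,137" to the source server, followed by STOR on the destination and RETR on the source.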
**Bathybius haeckelii**
Bathybius haeckelii:
Bathybius haeckelii was a substance that British biologist Thomas Henry Huxley discovered and initially believed to be a form of primordial matter, a source of all organic life. He later admitted his mistake when it proved to be just the product of an inorganic chemical process (precipitation).
In 1868 Huxley studied an old sample of mud from the Atlantic seafloor taken in 1857. When he first examined it, he had found only protozoan cells and placed the sample into a jar of alcohol to preserve it. Now he noticed that the sample contained an albuminous slime that appeared to be criss-crossed with veins.
Huxley thought he had discovered a new organic substance and named it Bathybius haeckelii, in honor of German biologist Ernst Haeckel. Haeckel had theorized about Urschleim ("primordial slime"), a protoplasm from which all life had originated. Huxley thought Bathybius could be that protoplasm, a missing link (in modern terms) between inorganic matter and organic life.
Bathybius haeckelii:
Huxley published a description of Bathybius that year and also wrote to Haeckel to tell him about it. Haeckel was impressed and flattered and procured a sample for himself. In the next edition of his textbook The History of Creation, Haeckel suggested that the substance was constantly coming into being at the bottom of the sea, "monera" arising from nonliving matter due to "physicochemical causes." Huxley asserted in a speech given to the Royal Geographical Society in 1870 that Bathybius undoubtedly formed a continuous mat of living protoplasm covering the whole ocean floor for thousands of square miles, probably a continuous sheet around the Earth. Sir Charles Wyville Thomson examined some samples in 1869 and regarded them as analogous to mycelium: "no trace of differentiation of organs", "an amorphous sheet of a protein compound, irritable to a low degree and capable of assimilating food... a diffused formless protoplasm." Other scientists were less enthusiastic. George Charles Wallich claimed that Bathybius was a product of chemical disintegration.
Bathybius haeckelii:
In 1872 the Challenger expedition began; it spent three years studying the oceans. The expedition also took soundings at 361 ocean stations. They did not find any sign of Bathybius, despite the claim that it was a nearly universal substance. In 1875 ship's chemist John Young Buchanan analyzed a substance that looked like Bathybius from an earlier collected sample. He noticed that it was a precipitate of calcium sulfate from the seawater that had reacted with the preservative liquid (alcohol), forming a gelatinous ooze which clung to particles as if ingesting them. Buchanan suspected that all the Bathybius samples had been prepared the same way and notified Sir Charles Thomson, now the leader of the expedition. Thomson sent a polite letter to Huxley telling him about the discovery.
Bathybius haeckelii:
Huxley realized that he had been too eager and made a mistake. He published part of the letter in Nature and recanted his previous views. Later, during the 1879 meeting of the British Association for the Advancement of Science, he stated that he was ultimately responsible for spreading the theory and convincing others. Most biologists accepted this acknowledgement of error. Haeckel, however, did not want to abandon the idea of Bathybius because it was so close to proof of his own theories about Urschleim. He claimed without foundation that Bathybius "had been observed" in the Atlantic. Haeckel drew a series of pictures of the evolution of his Urschleim, supposedly based on observations. He continued to support this position until 1883.
Bathybius haeckelii:
Huxley's rival George Charles Wallich claimed that Huxley had committed deliberate fraud and also accused Haeckel of falsifying data. Other opponents of evolution, including George Campbell, 8th Duke of Argyll, tried to use the case as an argument against evolution. The entire affair was a blow to the evolutionists, who had posited Bathybius as their long-sought evolutionary origin of life from nonliving chemistry by natural processes, without the necessity of divine intervention. In retrospect, their error was in dismissing the necessary role of photosynthesis in supporting the entire food chain of life, and the corresponding requirement for sunlight, abundant at the surface but absent on the ocean floor. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tephrochronology**
Tephrochronology:
Tephrochronology is a geochronological technique that uses discrete layers of tephra—volcanic ash from a single eruption—to create a chronological framework in which paleoenvironmental or archaeological records can be placed. Such an established event provides a "tephra horizon". The premise of the technique is that each volcanic event produces ash with a unique chemical "fingerprint" that allows the deposit to be identified across the area affected by fallout. Thus, once the volcanic event has been independently dated, the tephra horizon will act as time marker. It is a variant of the basic geological technique of stratigraphy.
Tephrochronology:
The main advantages of the technique are that the volcanic ash layers can be relatively easily identified in many sediments and that the tephra layers are deposited relatively instantaneously over a wide spatial area. This means they provide accurate temporal marker layers which can be used to verify or corroborate other dating techniques, linking sequences widely separated by location into a unified chronology that correlates climatic sequences and events. This results in "age-equivalent dating". Effective tephrochronology requires accurate geochemical fingerprinting (usually via an electron microprobe). An important recent advance is the use of LA-ICP-MS (i.e. laser ablation ICP-MS) to measure trace-element abundances in individual tephra shards. One problem in tephrochronology is that tephra chemistry can become altered over time, at least for basaltic tephras.
History of speciality:
The term tephrochronology appears to have been used by Sigurdur Thórarinsson as early as 1944. A key point in the establishment of this scientific field, with what evolved to be a unique geoscientific method, came in 1961, when a proposal supported by him and led by Japanese researchers, including Professor Kunio Kobayashi, resulted in the establishment of an international scientific group. Much work had preceded this, but it was limited by the techniques available at the time in geology, which meant that tephra formations often went unlinked and timings were too inaccurate to be related to, say, events with worldwide traces.
History of speciality:
What would now be known as cryptotephra studies occurred on sea-floor samples in the 1940s, but Christer Persson in Scandinavia was the first to publish articles in this field, in the 1960s. Andrew Dugmore in 1989 was the first to use modern systematic methodology. Since then researchers have targeted stratigraphic archives of peat, lake sediment, ice cores, marine sediments, loess, floors of caves and rock shelters, and stalagmites, as well as contemporary eruption deposits. Early tephra horizons were identified with the Saksunarvatn tephra (Icelandic origin, c. 10.2 cal. ka BP), forming a horizon in the late Pre-Boreal of Northern Europe, the Vedde ash (also Icelandic in origin, c. 12.0 cal. ka BP) and the Laacher See tephra (in the Eifel volcanic field, c. 12.9 cal. ka BP). Major volcanoes which have been used in tephrochronological studies include Vesuvius, Hekla and Santorini. Minor volcanic events may also leave their fingerprint in the geological record: Hayes Volcano is responsible for a series of six major tephra layers in the Cook Inlet region of Alaska. Tephra horizons provide a synchronous check against which to correlate the palaeoclimatic reconstructions that are obtained from terrestrial records, like fossil pollen studies (palynology), from varves in lake sediments or from marine deposits and ice-core records, and to extend the limits of carbon-14 dating.
History of speciality:
A pioneer in the use of tephra layers as marker horizons to establish chronology was Sigurdur Thorarinsson, who began by studying the layers he found in his native Iceland. Since the late 1990s, techniques developed by Chris S. M. Turney (QUB, Belfast; now University of Exeter) and others for extracting tephra horizons invisible to the naked eye ("cryptotephra") have revolutionised the application of tephrochronology. This technique relies upon the difference between the specific gravity of the microtephra shards and the host sediment matrix. It has led to the first discovery of the Vedde ash on the mainland of Britain, in Sweden, in the Netherlands, in the Swiss Lake Soppensee and in two sites on the Karelian Isthmus of Baltic Russia.
History of speciality:
It has also revealed previously undetected ash layers, such as the Borrobol Tephra first discovered in northern Scotland, dated to c. 14.4 cal. ka BP, the microtephra horizons of equivalent geochemistry from southern Sweden, dated at 13,900 Cariaco varve yrs BP, and from northwest Scotland, dated at 13.6 cal. ka BP. Since 2010, Bayesian age modelling built around ever-improving 14C-calibration curves and other age-related data, such as zircon double dating, has continued to better define tephrochronology.
Sources:
Alloway B.V., Larsen G., Lowe D.J., Shane P.A.R., Westgate J.A. (2007). "Tephrochronology", Encyclopedia of Quaternary Science (editor—Elias S.A.) 2869–2869 (Elsevier).
Davies, S.M.; Wastegård, S.; Wohlfarth, B. (2003). "Extending the limits of the Borrobol Tephra to Scandinavia and detection of new early Holocene tephras". Quaternary Research. 59 (3): 345–352. Bibcode:2003QuRes..59..345D. doi:10.1016/S0033-5894(03)00035-8. S2CID 59409634.
Dugmore, Andrew; Buckland, Paul (1991). "Tephrochronology and Late Holocene Soil Erosion in South Iceland". Environmental Change in Iceland: Past and Present. Glaciology and Quaternary Geology. Vol. 7. pp. 147–159. doi:10.1007/978-94-011-3150-6_10. ISBN 978-94-010-5389-1.
Keenan, Douglas J. (2003). "Volcanic ash retrieved from the GRIP ice core is not from Thera" (PDF). Geochemistry, Geophysics, Geosystems. 4 (11): 1097. Bibcode:2003GGG.....4....1K. doi:10.1029/2003GC000608.
Þórarinsson S. (1970). "Tephrochronology in medieval Iceland", Scientific Methods in Medieval Archaeology (ed. R. Berger) 295–328 (Berkeley: University of California Press). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**TAP1**
TAP1:
Transporter associated with antigen processing 1 (TAP1) is a protein that in humans is encoded by the TAP1 gene. A member of the ATP-binding cassette transporter family, it is also known as ABCB2.
Function:
The membrane-associated protein encoded by this gene is a member of the superfamily of ATP-binding cassette (ABC) transporters. ABC proteins transport various molecules across extra- and intra-cellular membranes. ABC genes are divided into seven distinct subfamilies (ABC1, MDR/TAP, MRP, ALD, OABP, GCN20, White). This protein is a member of the MDR/TAP subfamily. Members of the MDR/TAP subfamily are involved in multidrug resistance. The protein encoded by this gene is involved in the pumping of degraded cytosolic peptides across the endoplasmic reticulum into the membrane-bound compartment where class I molecules assemble. Mutations in this gene may be associated with ankylosing spondylitis, insulin-dependent diabetes mellitus, and celiac disease.
Interactions:
TAP1 has been shown to interact with: HLA-A, and Tapasin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DeepStack**
DeepStack:
DeepStack is an artificial intelligence computer program designed to play two-player poker, specifically heads up no-limit Texas hold 'em. It is the first computer program to outplay human professionals in this game.
Background:
Poker is a key benchmark game in the academic community, and a substantial amount of research has been done on finding optimal strategies against worst-case adversaries. While human professionals were outplayed in large perfect-information games, such as chess, decades earlier, imperfect-information games require much more complex recursive reasoning.
Prior popular approaches relied mainly on simplification of the game by using abstractions. However, abstractions in imperfect-information games often result in highly exploitable strategies.
Instead, DeepStack uses several algorithmic innovations, such as the use of neural networks and continual resolving.
The program was developed by an international team from Charles University, Czech Technical University and University of Alberta.
Algorithm:
At the core of the program is the use of neural networks for determining the value of specific card combinations. The networks are trained on only a small number of game states and are used to generalize to situations not seen during training.
Algorithm:
The program uses search with the neural networks and continual resolving to ensure strategy found at each step is consistent with the strategy used in previous steps. The search procedure uses counterfactual regret minimization to iteratively update strategy in its lookahead tree, and the neural networks are used for leaf evaluation. The leaf evaluation avoids reasoning about the entire remainder of the game by substituting the computation beyond a certain depth with a fast approximate estimate.
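The per-information-set update at the heart of counterfactual regret minimization is regret matching: cumulative regrets are clipped at zero and normalized into a strategy. A minimal sketch of that rule alone (an illustration of the general technique, not DeepStack's actual implementation):

```python
def regret_matching(regrets: list[float]) -> list[float]:
    """Turn cumulative counterfactual regrets into a strategy.

    Actions with positive regret are played in proportion to that regret;
    if no action has positive regret, play uniformly at random.
    """
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0.0:
        return [1.0 / len(regrets)] * len(regrets)
    return [p / total for p in positive]

# An action with regret 3.0 is played three times as often as one with
# regret 1.0, and negative-regret actions are dropped entirely:
# regret_matching([1.0, 3.0, -2.0]) -> [0.25, 0.75, 0.0]
```

Iterating this update over the lookahead tree, with the neural network supplying values at the depth limit, is what "continual re-solving" repeats at each decision point.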
2016 tournament with professional players:
In a study completed December 2016, DeepStack defeated 11 professional poker players by playing 44,000 hands of poker. Over all games played, DeepStack won 49 big blinds/100 (always folding would only lose 75 bb/100), over four standard deviations from zero, making it the first computer program to beat professional poker players in heads-up no-limit Texas hold'em poker.
Competing approaches:
Concurrently with DeepStack, a competing approach from a Carnegie Mellon University research group, called Libratus, was published. From January 11 to 31, 2017, Libratus was pitted against four top-class human poker players in a tournament. The algorithm was also published in Science. Libratus does not use neural networks for leaf evaluation. Experts argue that using learning with neural networks (as done by DeepStack) is more general, and it has indeed been used in subsequent works that generalize to other games with imperfect information.
Reception by the poker community:
Dara O'Kearney, an Irish poker professional who completed 456 hands, claimed that DeepStack played in a style similar to one used by some human players, based on game theory. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kraton (polymer)**
Kraton (polymer):
Kraton is the trade name given to a number of high-performance elastomers manufactured by Kraton Polymers, and used as synthetic replacements for rubber. Kraton polymers offer many of the properties of natural rubber, such as flexibility, high traction, and sealing abilities, but with increased resistance to heat, weathering, and chemicals.
Company:
The origin of Kraton polymers goes back to the synthetic rubber (GR-S) program funded by the U.S. government during World War II to develop and establish a domestic supply capability for synthetic styrene-butadiene rubber (SBR) as an alternative to natural rubber. Shell Oil Company purchased the Torrance, California facility that the U.S. government had built to make synthetic styrene-butadiene rubber. The company formed an Elastomers Division that eventually became Kraton Corporation. Shell Oil Company broadened its portfolio of elastomers in the 1950s under the technical leadership of Murray Luftglass and Norman R. Legge. As part of the divestment program announced by Shell in December 1998, the Kraton elastomers business was sold to the private equity firm Ripplewood Holdings in 2000. Kraton completed an IPO on December 17, 2009, to become a separate publicly traded company. In 2021, Kraton employees won an ASC Innovation Award for "Next Generation of Biobased Tackifiers REvolution™".
Properties:
Kraton polymers are styrenic block copolymers (SBCs) consisting of polystyrene blocks and rubber blocks. The rubber blocks consist of polybutadiene, polyisoprene, or their hydrogenated equivalents. The tri-block with polystyrene blocks at both extremities linked together by a rubber block is the most important polymer structure observed in SBCs. If the rubber block consists of polybutadiene, the corresponding triblock structure is poly(styrene-block-butadiene-block-styrene), usually abbreviated as SBS. Kraton D (SBS and SIS) and their selectively hydrogenated versions Kraton G (SEBS and SEPS) are the major Kraton polymer structures. The microstructure of SBS consists of domains of polystyrene arranged regularly in a matrix of polybutadiene, as shown in the TEM micrograph. The picture was obtained on a thin film of polymer cast onto mercury from solution, and then stained with osmium tetroxide.
Properties:
The glass transition temperature (Tg) of the polybutadiene blocks is typically −90 °C and the Tg of the polystyrene blocks is +100 °C. So, at any temperature between about −90 °C and +100 °C, Kraton SBS will act as a physically crosslinked elastomer. If Kraton polymers are heated substantially above the Tg of the styrene-derived blocks, that is, above about 100 °C (for example, to 170 °C), the physical cross-links change from rigid glassy regions to flowable melt regions, and the entire material flows and can therefore be cast, molded, or extruded into any desired form. On cooling, this new form resumes its elastomeric character. This is the reason such a material is called a thermoplastic elastomer (TPE). The polystyrene blocks form domains of nanometre size in the microstructure, and they stabilize the form of the molded material. Depending on the rubber-to-polystyrene ratio in the material, the polystyrene domains can be spherical or form cylinders or lamellae. The hydrogenated Kraton polymers, named Kraton G, exhibit improved resistance to temperature (processing at 200–230 °C is common), to oxidation, and to UV. SEBS and SEPS, due to their polyolefinic rubber nature, present excellent compatibility with polyolefins and paraffinic oils.
Applications:
Kraton polymers are always used in blends with various other ingredients like paraffinic oils, polyolefins, polystyrene, bitumen, tackifying resins, and fillers to provide a very large range of end-use products, ranging from hot melt adhesives to impact-modified transparent polypropylene bins, from medical TPE compounds to modified bitumen roofing felts, and from oil gel toys (including sex toys) to elastic attachments in diapers. It can make asphalt flexible, which is necessary if the asphalt is to be used to coat a surface that is below grade or for highly demanding paving applications like F1 racing tracks. Kraton-based compounds are also used in non-slip knife handles. The earliest commercial components using Kraton G (thermoplastic rubber) in the automobile industry appeared in the 1970s. The implementation of U.S. requirements for automobile bumpers to absorb 5 mph (8 km/h) impacts with no damage to the car's safety equipment led to the first successful commercial automotive application of specialized flexible polymers as fascia for the 1974 AMC Matador. American Motors Corporation (AMC) also used this polymer on the AMC Eagle for the color-matched flexible wheel arch flares that flowed into rocker panel extensions. This was needed because of the Eagle's 2-inch wider track compared to the AMC Concord platform on which the AWD cars were based. The Eagle's Kraton bodywork was lightweight, flexible, and did not crack in cold weather, as is typical of fiberglass automobile body components. Some grades of Kraton can also be dissolved into hydrocarbon oils to create "shear thinning" grease-type products that are used in the manufacture of telecommunications cables containing optical fibers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lilo Pozzo**
Lilo Pozzo:
Lilo Danielle Pozzo is an American chemical engineer who is a professor of chemical engineering at the University of Washington. Her research considers the development, measurement and control of molecular self-assembly. She is interested in the realization of materials for energy storage and conversion. Pozzo serves on the editorial board of the Royal Society of Chemistry journal Digital Discovery.
Early life and education:
Pozzo was born in Argentina and raised in Puerto Rico. She was an undergraduate student at the University of Puerto Rico at Mayagüez where she studied chemical engineering, earning her bachelor's degree in 2001. After graduating she joined Carnegie Mellon University, where she studied Triblock copolymers as thermoreversible micellar templates for three-dimensional arrays under the supervision of Lynn M. Walker. Pozzo joined National Institute of Standards and Technology as a postdoctoral fellow.
Research and career:
Pozzo's research considers polymers and colloidal systems and the application of advanced characterization techniques to understand their structure–property relationships. She has applied these materials to medical imaging contrast agents and energy storage technologies.
Research and career:
In 2017, Pozzo and her research team launched a project in Jayuya, Puerto Rico, seeking to evaluate how extended power outages impacted the health of rural patients. In the wake of Hurricane Maria, Pozzo raised funding from people in Seattle to build renewable energy infrastructure in Puerto Rico. As part of these efforts, she installed several solar nanogrid arrays (small-scale systems that can produce, store and distribute electricity) to power refrigerators. Pozzo has also worked on data-driven materials design and high-throughput experimentation. She focuses on ways to adapt hardware and software to design new materials for clean energy and healthcare. In 2018, Pozzo was awarded the United States Department of Energy Clean Energy, Education and Empowerment (C3E) initiative education award. The award recognizes the efforts of advocates in driving uptake of clean energy technologies in society. Later that year she was honored at the Latinx Faculty Recognition Event. From 2021 to 2023, Pozzo served as interim chair of the Materials Science department at the University of Washington.
Selected publications:
Wu, Chen-Hao; Chueh, Chu-Chen; Xi, Yu-Yin; Zhong, Hong-Liang; Gao, Guang-Peng; Wang, Zhao-Hui; Pozzo, Lilo D.; Wen, Ten-Chin; Jen, Alex K.-Y. (2015-07-24). "Influence of Molecular Geometry of Perylene Diimide Dimers and Polymers on Bulk Heterojunction Morphology Toward High-Performance Nonfullerene Polymer Solar Cells". Advanced Functional Materials. 25 (33): 5326–5332. doi:10.1002/adfm.201501971. ISSN 1616-301X. S2CID 93752655.
Katie M Weigandt; Nathan White; Dominic Chung; Erica Ellingson; Yi Wang; Xiaoyun Fu; Danilo C Pozzo (1 December 2012). "Fibrin clot structure and mechanics associated with specific oxidation of methionine residues in fibrinogen". Biophysical Journal. 103 (11): 2399–2407. doi:10.1016/J.BPJ.2012.10.036. ISSN 0006-3495. PMC 3514520. PMID 23283239. Wikidata Q36445672.
Leslie W Chan; Xu Wang; Hua Wei; Lilo D Pozzo; Nathan J White; Suzie H Pun (1 March 2015). "A synthetic fibrin cross-linking polymer for modulating clot properties and inducing hemostasis". Science Translational Medicine. 7 (277): 277ra29. doi:10.1126/SCITRANSLMED.3010383. ISSN 1946-6234. PMC 4470483. PMID 25739763. Wikidata Q27327667.
Awards:
Anne Mayes Neutron Scattering Award, 2022
DOE Women in Clean Energy C3E Education Award, 2018
UW College of Engineering Distinguished Teaching Award, 2018
UW LatinX Faculty Recognition Award, 2017
Department of Energy Early Career Award, 2013
University of Washington Outstanding Undergraduate Research Mentor Award, 2013 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Riffle shuffle permutation**
Riffle shuffle permutation:
In the mathematics of permutations and the study of shuffling playing cards, a riffle shuffle permutation is one of the permutations of a set of n items that can be obtained by a single riffle shuffle, in which a sorted deck of n cards is cut into two packets and then the two packets are interleaved (e.g. by moving cards one at a time from the bottom of one or the other of the packets to the top of the sorted deck). Beginning with an ordered set (1 rising sequence), mathematically a riffle shuffle is defined as a permutation on this set containing 1 or 2 rising sequences. The only permutation with 1 rising sequence is the identity permutation.
As a special case of this, a (p,q) -shuffle, for numbers p and q with p+q=n , is a riffle in which the first packet has p cards and the second packet has q cards.
Combinatorial enumeration:
Since a (p,q)-shuffle is completely determined by how its first p elements are mapped, the number of (p,q)-shuffles is the binomial coefficient $\binom{n}{p}$. However, the number of distinct riffles is not quite the sum of this formula over all choices of p and q adding to n (which would be $2^n$), because the identity permutation can be represented in multiple ways as a (p,q)-shuffle for different values of p and q. Instead, the number of distinct riffle shuffle permutations of a deck of n cards, for n = 1, 2, 3, …, is 1, 2, 5, 12, 27, 58, 121, 248, …. More generally, the formula for this number is $2^n - n$; for instance, there are 4503599627370444 riffle shuffle permutations of a 52-card deck.
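These counts are small enough to check directly. The following Python sketch (the helper name is illustrative) enumerates every (p, q)-shuffle of a sorted deck and confirms that the number of distinct outcomes is 2^n − n:

```python
from itertools import combinations

def riffle_shuffles(n):
    """All distinct permutations of 1..n reachable by a single riffle shuffle."""
    perms = set()
    for p in range(n + 1):                      # cut point: first packet holds p cards
        top = list(range(1, p + 1))
        bottom = list(range(p + 1, n + 1))
        # a (p, q)-shuffle is fixed by the positions the first packet occupies
        for positions in combinations(range(n), p):
            pos = set(positions)
            it_top, it_bot = iter(top), iter(bottom)
            perms.add(tuple(next(it_top) if i in pos else next(it_bot)
                            for i in range(n)))
    return perms

for n in range(1, 9):
    assert len(riffle_shuffles(n)) == 2**n - n  # distinct riffles of n cards

assert 2**52 - 52 == 4503599627370444           # the 52-card count quoted above
```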
The number of permutations that are both a riffle shuffle permutation and the inverse permutation of a riffle shuffle is $\binom{n+1}{3} + 1$. For n = 1, 2, 3, …, this is 1, 2, 5, 11, 21, 36, …, and for n = 52 there are exactly 23427 invertible shuffles.
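This count can be checked by brute force for small n, using the characterization that a permutation is reachable by one riffle exactly when its inverse has at most one descent (helper names here are illustrative); the closed form C(n+1, 3) + 1 also reproduces the 23427 figure quoted for a 52-card deck:

```python
from itertools import permutations
from math import comb

def descents(w):
    return sum(1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def inverse(w):
    inv = [0] * len(w)
    for i, v in enumerate(w):
        inv[v - 1] = i + 1
    return tuple(inv)

def is_riffle(w):
    # w has at most 2 rising sequences <=> its inverse has at most one descent
    return descents(inverse(w)) <= 1

for n in range(1, 8):
    count = sum(1 for w in permutations(range(1, n + 1))
                if is_riffle(w) and is_riffle(inverse(w)))
    assert count == comb(n + 1, 3) + 1

assert comb(52 + 1, 3) + 1 == 23427   # matches the 52-card figure in the text
```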
Random distribution:
The Gilbert–Shannon–Reeds model describes a random probability distribution on riffle shuffles that is a good match for observed human shuffles. In this model, the identity permutation has probability $(n+1)/2^n$ of being generated, and all other riffle permutations have equal probability $1/2^n$ of being generated. Based on their analysis of this model, mathematicians have recommended that a deck of 52 cards be given seven riffles in order to thoroughly randomize it.
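These probabilities can be verified exhaustively for a small deck: enumerating all 2^n equally likely (cut, interleaving) outcomes of the model shows the identity arising n + 1 times and every other riffle exactly once (a quick sketch; the function name is illustrative):

```python
from itertools import combinations
from collections import Counter

def gsr_counts(n):
    """Tally each permutation over all 2**n equally likely outcomes of the
    Gilbert–Shannon–Reeds model: a cut at p, then one of C(n, p) interleavings."""
    counts = Counter()
    for p in range(n + 1):
        for positions in combinations(range(n), p):
            pos = set(positions)
            it_top = iter(range(1, p + 1))
            it_bot = iter(range(p + 1, n + 1))
            counts[tuple(next(it_top) if i in pos else next(it_bot)
                         for i in range(n))] += 1
    return counts

n = 6
counts = gsr_counts(n)
identity = tuple(range(1, n + 1))
assert sum(counts.values()) == 2**n          # sum over p of C(n, p)
assert counts[identity] == n + 1             # identity: probability (n+1)/2**n
assert all(c == 1 for w, c in counts.items() if w != identity)  # others: 1/2**n
```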
Permutation patterns:
A pattern in a permutation is a smaller permutation formed from a subsequence of some k values in the permutation by reducing these values to the range from 1 to k while preserving their order. Several important families of permutations can be characterized by a finite set of forbidden patterns, and this is true also of the riffle shuffle permutations: they are exactly the permutations that do not have 321, 2143, and 2413 as patterns. Thus, for instance, they are a subclass of the vexillary permutations, which have 2143 as their only minimal forbidden pattern.
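The forbidden-pattern characterization can be checked by brute force for small n (an illustrative sketch; `contains` tests classical pattern containment by reducing each subsequence to its rank order):

```python
from itertools import permutations, combinations

def contains(w, pat):
    """True if permutation w contains pat as a (classical) pattern."""
    k = len(pat)
    for idx in combinations(range(len(w)), k):
        sub = [w[i] for i in idx]
        order = sorted(range(k), key=lambda j: sub[j])
        reduced = [0] * k
        for rank, j in enumerate(order, start=1):
            reduced[j] = rank                 # rank order of the subsequence
        if tuple(reduced) == pat:
            return True
    return False

def is_riffle(w):
    # an interleaving of the increasing runs 1..p and p+1..n, for some p
    n = len(w)
    pos = {v: i for i, v in enumerate(w)}
    return any(all(pos[v] < pos[v + 1] for v in range(1, p))
               and all(pos[v] < pos[v + 1] for v in range(p + 1, n))
               for p in range(n + 1))

forbidden = [(3, 2, 1), (2, 1, 4, 3), (2, 4, 1, 3)]
for n in range(1, 7):
    for w in permutations(range(1, n + 1)):
        avoids = not any(contains(w, pat) for pat in forbidden)
        assert avoids == is_riffle(w)
```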
Perfect shuffles:
A perfect shuffle is a riffle in which the deck is split into two equal-sized packets, and in which the interleaving between these two packets strictly alternates between the two. There are two types of perfect shuffle, an in shuffle and an out shuffle, both of which can be performed consistently by some well-trained people. When a deck is repeatedly shuffled using these permutations, it remains much less random than with typical riffle shuffles, and it will return to its initial state after only a small number of perfect shuffles. In particular, a deck of 52 playing cards will be returned to its original ordering after 52 in shuffles or 8 out shuffles. This fact forms the basis of several magic tricks.
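The cycle lengths quoted above are easy to confirm by simulating the two perfect shuffles (a minimal sketch; a deck is represented as a list of card indices):

```python
def out_shuffle(deck):
    # split into equal halves; interleave so the original top card stays on top
    h = len(deck) // 2
    top, bottom = deck[:h], deck[h:]
    return [c for pair in zip(top, bottom) for c in pair]

def in_shuffle(deck):
    # interleave so the bottom half's first card ends up on top
    h = len(deck) // 2
    top, bottom = deck[:h], deck[h:]
    return [c for pair in zip(bottom, top) for c in pair]

def order(shuffle, n):
    """Number of repetitions until the deck returns to its starting order."""
    start = list(range(n))
    deck, k = shuffle(start), 1
    while deck != start:
        deck, k = shuffle(deck), k + 1
    return k

assert order(out_shuffle, 52) == 8    # 8 out shuffles restore a 52-card deck
assert order(in_shuffle, 52) == 52    # 52 in shuffles are needed
```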
Algebra:
Riffle shuffles may be used to define the shuffle algebra. This is a Hopf algebra where the basis is a set of words, and the product is the shuffle product denoted by the sha symbol ш, the sum of all riffle shuffles of two words.
In exterior algebra, the wedge product of a p -form and a q -form can be defined as a sum over (p,q) -shuffles. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
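The shuffle product of two words, as used in the shuffle algebra, can be computed by the standard recursion on first letters (illustrative sketch; the result lists interleavings with multiplicity):

```python
def shuffle_product(u, v):
    """All riffle shuffles (interleavings) of the words u and v."""
    if not u:
        return [v]
    if not v:
        return [u]
    # every shuffle starts with either the first letter of u or of v
    return [u[0] + w for w in shuffle_product(u[1:], v)] + \
           [v[0] + w for w in shuffle_product(u, v[1:])]

# "ab" sha "cd": C(4, 2) = 6 interleavings, each preserving internal order
result = shuffle_product("ab", "cd")
assert len(result) == 6
assert sorted(result) == sorted(["abcd", "acbd", "acdb", "cabd", "cadb", "cdab"])
```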
**Sensitive compartmented information**
Sensitive compartmented information:
Sensitive compartmented information (SCI) is a type of United States classified information concerning or derived from sensitive intelligence sources, methods, or analytical processes. All SCI must be handled within formal access control systems established by the Director of National Intelligence. SCI is not a classification; SCI clearance has sometimes been called "above Top Secret", but information at any classification level may exist within an SCI control system. When "decompartmentalized", this information is treated the same as collateral information at the same classification level.
The federal government requires that SCI be processed, stored, used, or discussed in a sensitive compartmented information facility (SCIF).
Access:
Eligibility for access to SCI is determined by a Single Scope Background Investigation (SSBI) or periodic reinvestigation. Because the same investigation is used to grant Top Secret security clearances, the two are often written together as TS//SCI. Eligibility alone does not confer access to any specific SCI material; it is simply a qualification. One must receive explicit permission to access an SCI control system or compartment. This process may include a polygraph or other approved investigative or adjudicative action. Once it is determined a person should have access to an SCI compartment, they sign a nondisclosure agreement, are "read in" or indoctrinated, and the fact of this access is recorded in a local access register or in a computer database. Upon termination from a particular compartment, the employee again signs the nondisclosure agreement.
Control systems:
SCI is divided into control systems, which are further subdivided into compartments and sub-compartments. These systems and compartments are usually identified by a classified codeword. Several such codewords have been declassified. The following SCI control systems, with their abbreviations and compartments, are known: Special Intelligence (SI): Special Intelligence (spelled out in the CAPCO manual, but always SI in document markings) is the control system covering communications intelligence; Special Intelligence is a term for communications intercepts. The previous title for this control system was COMINT, but this was deprecated in 2011. SI has several compartments, of which several are known or declassified. SI-NK and SI-EU are also possible, as under ENDSEAL. Several now-retired codewords protected SI compartments based on their sensitivity, generally referred to as Top Secret Codeword (TSC) and Secret Codeword (SC). These three codewords, the usage of which was terminated in 1999, were attached directly to the classification without reference to COMINT or SI, e.g. Top Secret UMBRA.
STELLARWIND (STLW): This codeword was revealed on June 27, 2013, when The Guardian published a draft report from the NSA Inspector General about the electronic surveillance program STELLARWIND. This program was started by President George W. Bush shortly after the 9/11 attacks. For information about this program, a new security compartment was created which was given STELLARWIND as its permanent cover term on October 31, 2001.
ENDSEAL (EL): This U.S. Navy control system was revealed in the 2013 Classification Manual. ENDSEAL information must always be classified as Special Intelligence (SI), so it is probably related to SIGINT or ELINT. It has two subcompartments: ECRU (SI-EU) and NONBOOK (SI-NK).
TALENT KEYHOLE (TK): TK covers space-based IMINT (imagery intelligence), SIGINT (signals intelligence), and MASINT (measurement and signature intelligence) collection platforms; related processing and analysis techniques; and research, design, and operation of these platforms (but see Reserve below). The original TALENT compartment was created in the mid-1950s for the U-2. In 1960, it was broadened to cover all national aerial reconnaissance (later including SR-71-sourced imagery), and the KEYHOLE compartment was created for satellite intelligence. TALENT KEYHOLE is now a top-level control system that merged with KLONDIKE; KEYHOLE is no longer a distinct compartment. Known compartments include RUFF (IMINT satellites), ZARF (ELINT satellites), and CHESS (U-2). The KEYHOLE series KH-1 through KH-4b were part of the new TALENT-KEYHOLE designation. The RSEN (Risk Sensitive Notice, portion marking RS) keyword is used for imagery products.
HUMINT Control System (HCS): HCS is the HUMINT (human-source intelligence) control system. The system was simply designated "HUMINT" until confusion arose between collateral (regular) HUMINT and the control system; the current nomenclature was chosen to eliminate the ambiguity. There are two compartments: HCS-O (Operation) and HCS-P (Product). The HCS-O-P marking was also used in "Review of the Unauthorized Disclosures of Former National Security Agency Contractor Edward Snowden".
KLONDIKE (KDK): KLONDIKE is a legacy system that protected sensitive geospatial intelligence. It had three main subcompartments: KDK BLUEFISH (KDK-BLFH), KDK IDITAROD (KDK-IDIT), and KDK KANDIK (KDK-KAND); each may carry a suffix of up to six alphanumeric characters (e.g. KDK-BLFH-xxxxxx) indicating a further sub-compartment. Nowadays it exists under TALENT KEYHOLE (TK-BLFH, TK-IDIT, TK-KAND).
RESERVE (RSV): RESERVE is the control system for National Reconnaissance Office compartments protecting new sources and methods during the research, development, and acquisition process. Sub-compartments are marked RSV-XXX, where XXX represents three alphanumeric characters.
BYEMAN (BYE): BYEMAN is a retired control system covering certain overhead collection systems, including CORONA and OXCART. Most BYE content was transferred to TK; BYE Special Handling content was transferred to Reserve.
Markings:
SCI control system markings are placed immediately after the classification level markings in a banner line (where the classification is spelled out in full, e.g. TOP SECRET) or a portion marking (where the abbreviation TS is used). Sometimes, especially on older documents, they are stamped. The following banner line and portion marking describe a top secret document containing information from the notional SI-GAMMA 1234 subcompartment, the notional SI-MANSION compartment, and the notional TALENT KEYHOLE-BLUEFISH compartment (TK is always abbreviated, because in some cases even the full meaning may be classified, as with the BUR keyword: BUR-BLG-HCAS, BUR-BLG-JETS): Older documents were marked with HANDLE VIA xxxx CONTROL CHANNELS ("HVxCC"), HANDLE VIA xxxx CHANNELS ONLY ("HVxCO"), or HANDLE VIA xxxx CHANNELS JOINTLY ("HVxCJ"), but this requirement was rescinded in 2006. For example, COMINT documents were marked HANDLE VIA COMINT CHANNELS ONLY. This marking led to the use of the caveat CCO (COMINT Channels Only) in portion markings, but CCO is also obsolete. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chiasmus**
Chiasmus:
In rhetoric, chiasmus (ky-AZ-məs) or, less commonly, chiasm (Latin term from Greek χίασμα, "crossing", from the Greek χιάζω, chiázō, "to shape like the letter Χ"), is a "reversal of grammatical structures in successive phrases or clauses – but no repetition of words". A similar device, antimetabole, also involves a reversal of grammatical structures in successive phrases or clauses in an A-B-B-A configuration, but unlike chiasmus, it presents a repetition of words.
Examples:
Chiasmus balances words or phrases with similar, though not identical, meanings: "Dotes" and "strongly loves" share the same meaning and bracket, as do "doubts" and "suspects".
Additional examples of chiasmus: By day the frolic, and the dance by night.
Despised, if ugly; if she's fair, betrayed.
For comparison, the following is considered antimetabole, in which the reversal in structure involves the same words: Pleasure's a sin, and sometimes sin's a pleasure.
Both chiasmus and antimetabole can be used to reinforce antithesis. In chiasmus, the clauses display inverted parallelism. Chiasmus was particularly popular in the literature of the ancient world, including Hebrew, Greek, Latin and Ancient K'iche' Maya, where it was used to articulate the balance of order within the text. Many long and complex chiasmi have been found in Shakespeare and the Greek and Hebrew texts of the Bible. It is also found throughout the Quran and the Book of Mormon.
Conceptual chiasmus:
Chiasmus can be used in the structure of entire passages to parallel concepts or ideas. This process, termed "conceptual chiasmus", uses a criss-crossing rhetorical structure to cause an overlapping of "intellectual space". Conceptual chiasmus utilizes specific linguistic choices, often metaphors, to create a connection between two differing disciplines. By employing a chiastic structure to a single presented concept, rhetors encourage one area of thought to consider an opposing area's perspective.
Effectiveness:
Chiasmus derives its effectiveness from its symmetrical structure. The structural symmetry of the chiasmus imposes the impression upon the reader or listener that the entire argument has been accounted for. In other words, chiasmus creates only two sides of an argument or idea for the listener to consider, and then leads the listener to favor one side of the argument.
Thematic chiasmus:
The Wilhelmus, the national anthem of the Netherlands, has a structure composed around a thematic chiasmus: the 15 stanzas of the text are symmetrical, in that verses one and 15 resemble one another in meaning, as do verses two and 14, three and 13, etc., until they converge in the eighth verse, the heart of the song. Written in the 16th century, the Wilhelmus originated in the nation's struggle to achieve independence. It tells of the Father of the Nation William of Orange who was stadholder in the Netherlands under the king of Spain. In the first person, as if quoting himself, William speaks to the Dutch people and talks about both the outer conflict – the Dutch Revolt – as well as his own, inner struggle: on one hand, he tries to be faithful to the king of Spain, on the other hand, he is above all faithful to his conscience: to serve God and the Dutch people. This is made apparent in the central 8th stanza: "Oh David, thou soughtest shelter from King Saul's tyranny. Even so I fled this welter". Here the comparison is made between the biblical David and William of Orange as merciful and just leaders who both serve under tyrannic kings. As the merciful David defeats the unjust Saul and is rewarded by God with the kingdom of Israel, so too, with the help of God, will William be rewarded a kingdom; being either or both the Netherlands, and the kingdom of God.
Sources:
Baldrick, Chris. 2008. Oxford Dictionary of Literary Terms. New York: Oxford University Press. ISBN 978-0-19-920827-2.
Corbett, Edward P. J. and Connors, Robert J. 1999. Style and Statement. New York, Oxford: Oxford University Press. ISBN 0-19-511543-0.
Forsyth, Mark. 2014. The Elements of Eloquence. New York: Berkley Publishing Group/Penguin Publishing. ISBN 978-0-425-27618-1.
Lund, Nils Wilhelm (1942). Chiasmus in the New Testament, a study in formgeschichte. Chapel Hill: University of North Carolina Press. OCLC 2516087.
McCoy, Brad (Fall 2003). "Chiasmus: An Important Structural Device Commonly Found in Biblical Literature" (PDF). CTS Journal. Albuquerque, New Mexico: Chafer Theological Seminary. 9 (2): 18–34. Archived from the original (PDF) on November 22, 2012. Retrieved June 18, 2014.
Parry, Donald W. (2007). Poetic Parallelisms in the Book of Mormon (PDF). Provo, Utah: Neal A. Maxwell Institute for Religious Scholarship. ISBN 978-0-934893-36-7. Archived from the original (PDF) on July 14, 2014. Retrieved June 18, 2014.
Smyth, Herbert Weir (1920). A Greek Grammar for Colleges. New York: American Book Company. p. 677. OCLC 402001.
Welch, John W. (1995). "Criteria for Identifying and Evaluating the Presence of Chiasmus". Journal of Book of Mormon Studies. Brigham Young University. 4 (2). Archived from the original on October 13, 2015. Retrieved June 18, 2014.
Welch, John W. (1999) [1981]. Chiasmus in antiquity: structures, analyses, exegesis. Provo, Utah: Research Press. ISBN 0934893330. OCLC 40126818. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rheumatoid vasculitis**
Rheumatoid vasculitis:
Rheumatoid vasculitis is a skin condition that is a typical feature of rheumatoid arthritis, presenting as peripheral vascular lesions that are localized purpura, cutaneous ulceration, and gangrene of the distal parts of the extremities. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bicoid 3′-UTR regulatory element**
Bicoid 3′-UTR regulatory element:
The bicoid 3′-UTR regulatory element is an mRNA regulatory element that controls the gene expression of the bicoid protein in the fruit fly Drosophila melanogaster.
The structured RNA element consists of four domains (denoted as II, III, IV and V) in the 3′UTR of the mRNA. It is essential for the correct transport and localisation of bicoid mRNA during oocyte and embryo differentiation, which has been studied most thoroughly in the development of Drosophila melanogaster (fruitfly) larvae. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ruth Stephens Gani Medal**
Ruth Stephens Gani Medal:
The Ruth Stephens Gani Medal is awarded annually by the Australian Academy of Science to recognise research in human genetics. The award honours the contributions of Ruth Stephens Gani to human cytogenetics. It is an early career award, normally for Australian-resident nominees up to ten years post-doctorate. Below is a list of recipients from 2008–2018 in the field: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**FLEPia**
FLEPia:
The Fujitsu FLEPia is a discontinued e-reader capable of displaying up to 260,000 colors. It was released in Japan in 2009.
Specifications:
Size: 158 mm × 240 mm × 12.5 mm
Weight: 385 g
Display: 8 inch
Resolution: 768 × 1024 pixels
Number of displayable colors: 260,000 (3 scans); 4,096 (2 scans); 64 (1 scan)
Re-draw speed: 1.8 seconds (1 scan); 5 seconds (2 scans); 8 seconds (3 scans)
Memory: SD memory card (up to 4 GB)
Battery: 40 continuous hours (displaying 2,400 pages, at 1 minute per page, with 64 colors)
Wireless: Wi-Fi and Bluetooth
MSRP: 99,750 JPY (~$1075)
Available case colors: white and black. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OpenEV**
OpenEV:
OpenEV is an open-source geospatial toolkit and a frontend to that toolkit. OpenEV was developed using Python and uses the GDAL library to display georeferenced images and elevation data. The application also has image editing capabilities and uses OpenGL to display elevation data in three-dimensions.
History:
The original version of OpenEV was developed by Atlantis Scientific (later renamed Vexcel) as a prototype viewer for the Canadian Geospatial Data Infrastructure (CGDI). Its development was supported by the Canada Centre for Remote Sensing GeoConnections program and J-2 Geomatics (Canadian Department of National Defence). The goal was to create a free, downloadable, advanced satellite imagery viewer that allowed users to work interactively with CGDI data.
Vexcel, Inc. was acquired by Microsoft in May 2006, which left the software to Mario Beauchamp and a team of developers. OpenEV has since been used by NASA's Jet Propulsion Laboratory and the American Museum of Natural History. It was also the base for the CIETmap software, which is now also developed by Mario Beauchamp.
Supported data:
OpenEV supports numerous raster and vector formats, such as shapefiles. Since it uses the GDAL library to read and display images, it supports the same raster formats as GDAL. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**11β-Hydroxyandrostenedione**
11β-Hydroxyandrostenedione:
11β-Hydroxyandrostenedione (11β-OHA4), also known as 11β-hydroxyandrost-4-ene-3,17-dione, is an endogenous, naturally occurring steroid and androgen prohormone that is produced primarily, if not exclusively, in the adrenal glands. It is closely related to adrenosterone (11-ketoandrostenedione; 11-KA4), 11-ketotestosterone (11-KT), and 11-ketodihydrotestosterone (11-KDHT), which are also produced in the adrenal glands. It can be used as a biomarker for guiding primary aldosteronism subtyping in adrenal vein sampling, where blood samples are taken from both adrenal glands to compare the amount of hormone made by each gland. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zebra Technologies**
Zebra Technologies:
Zebra Technologies Corporation is an American mobile computing company specializing in technology used to sense, analyze, and act in real time, sometimes known as smart data capture. The company manufactures and sells marking, tracking, and computer printing technologies. Its products include mobile computers and tablets; software; thermal barcode label and receipt printers; RFID smart-label printers and encoders, fixed and handheld RFID readers, and antennas; autonomous mobile robots (AMRs); machine vision (MV); and fixed industrial scanning hardware and software.
History:
Zebra was incorporated in 1969 as Data Specialties Incorporated, a manufacturer of high-speed electromechanical products. The company changed its focus to specialty on-demand labeling and ticketing systems in 1982 and became Zebra Technologies Corporation in 1986. Zebra became a publicly traded company in 1991.
In 1998, Zebra Technologies merged with Eltron International, Inc. In 2000, Comtec Information Systems was acquired by Zebra Technologies, followed in 2003 by the acquisition of Atlantek, Inc., a manufacturer of photo ID printers. In 2004, the company expanded into RFID smart label manufacturing. In the following years, Zebra also acquired Swecoin, WhereNet Corp, Proveo AG, and Navis Holdings (later divested in 2011). The company bought the Enterprise Solutions Group (ESG) in 2008 and renamed the group Zebra Enterprise Solutions in 2009. In the same year, Multispectral Solutions, Inc. was acquired. In 2012, the companies LaserBand and StepOne Systems were purchased for $1.5 million in cash. In 2013, the company acquired Hart Systems for $94 million in cash from the private equity firm Topspin Partners LBO. In 2014, Zebra acquired Motorola Solutions' Enterprise Division in a $3.45 billion transaction, providing mobile computing and advanced data capture communications technologies and services. Zebra's acquisition of the Enterprise Division included the Symbol Technologies and Psion product lines. Also in 2014, Zebra provided its real-time location system (RTLS) in NFL stadiums to track players and officials and provide location-based data for the NFL's Next Gen Stats program; Zebra's partnership with the NFL extends through the 2025 football season. In 2018, the company acquired Xplore Technologies, a maker of ruggedized tablets and other hard-wearing hardware. In 2019, Zebra acquired Temptime Corporation, a provider of temperature monitoring devices to the healthcare industry.
That same year, Zebra also acquired Profitect, a retail software company that developed a product line used for tracking and identifying inventory losses. In 2020, Zebra acquired Reflexis Systems, a provider of workforce scheduling and task management software to the retail, food service, hospitality, and banking industries, for $575 million. In 2021, Zebra acquired Adaptive Vision (provider of graphical MV software), Fetch Robotics (manufacturer of autonomous mobile robots), and Antuit.ai (provider of AI-powered SaaS). In 2022, Zebra acquired Matrox Imaging, a developer of machine vision components and systems.
Locations:
Zebra Technologies has more than 128 offices in 55 countries, including Australia, Brazil, Canada, China, France, Germany, India, Japan, Mexico, Russia, the United Arab Emirates, and the United Kingdom. The company also has more than 10,000 partners across 180 countries. In the 2021 annual report, Zebra stated that it traded in 180 countries, with approximately 128 facilities and 9,800 employees.
Reception:
Newsweek included Zebra on its 2023 America’s Greatest Workplaces for Diversity list. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Van der Corput inequality**
Van der Corput inequality:
In mathematics, the van der Corput inequality is a corollary of the Cauchy–Schwarz inequality that is useful in the study of correlations among vectors, and hence random variables. It is also useful in the study of equidistributed sequences, for example in the Weyl equidistribution estimate. Loosely stated, the van der Corput inequality asserts that if a unit vector v in an inner product space V is strongly correlated with many unit vectors u1,…,un∈V , then many of the pairs ui,uj must be strongly correlated with each other. Here, the notion of correlation is made precise by the inner product of the space V : when the absolute value of ⟨u,v⟩ is close to 1 , then u and v are considered to be strongly correlated. (More generally, if the vectors involved are not unit vectors, then strong correlation means that |⟨u,v⟩|≈‖u‖‖v‖ .)
Statement of the inequality:
Let $V$ be a real or complex inner product space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\| \cdot \|$. Suppose that $v, u_1, \ldots, u_n \in V$ and that $\|v\| = 1$. Then
$$\left( \sum_{i=1}^{n} |\langle v, u_i \rangle| \right)^{2} \leq \sum_{i,j=1}^{n} |\langle u_i, u_j \rangle|.$$
In terms of the correlation heuristic mentioned above, if v is strongly correlated with many unit vectors u1,…,un∈V , then the left-hand side of the inequality will be large, which then forces a significant proportion of the vectors ui to be strongly correlated with one another.
Proof of the inequality: We start by noticing that for any $i \in \{1, \ldots, n\}$ there exists $\epsilon_i$ (real or complex) such that $|\epsilon_i| = 1$ and $|\langle v, u_i \rangle| = \epsilon_i \langle v, u_i \rangle$. Then,
$$\begin{aligned}
\left( \sum_{i=1}^{n} |\langle v, u_i \rangle| \right)^{2}
&= \left( \sum_{i=1}^{n} \epsilon_i \langle v, u_i \rangle \right)^{2} \\
&= \left( \left\langle v, \sum_{i=1}^{n} \epsilon_i u_i \right\rangle \right)^{2} && \text{since the inner product is bilinear} \\
&\leq \|v\|^{2} \left\| \sum_{i=1}^{n} \epsilon_i u_i \right\|^{2} && \text{by the Cauchy–Schwarz inequality} \\
&= \|v\|^{2} \left\langle \sum_{i=1}^{n} \epsilon_i u_i, \sum_{j=1}^{n} \epsilon_j u_j \right\rangle && \text{by the definition of the induced norm} \\
&= \sum_{i,j=1}^{n} \epsilon_i \epsilon_j \langle u_i, u_j \rangle && \text{since } \|v\| = 1 \text{ and the inner product is bilinear} \\
&\leq \sum_{i,j=1}^{n} |\langle u_i, u_j \rangle| && \text{since } |\epsilon_i| = 1 \text{ for all } i
\end{aligned}$$ | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
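The inequality is easy to check numerically with random real unit vectors (an illustrative sketch; real case only, so no conjugation is needed):

```python
import random

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return inner(x, x) ** 0.5

def unit(dim):
    """A random unit vector in R^dim (Gaussian components, normalized)."""
    v = [random.gauss(0, 1) for _ in range(dim)]
    n = norm(v)
    return [a / n for a in v]

random.seed(1)
dim, n = 5, 8
v = unit(dim)
us = [unit(dim) for _ in range(n)]
lhs = sum(abs(inner(v, u)) for u in us) ** 2          # (sum_i |<v, u_i>|)^2
rhs = sum(abs(inner(us[i], us[j]))                    # sum_{i,j} |<u_i, u_j>|
          for i in range(n) for j in range(n))
assert lhs <= rhs + 1e-12
```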
**T-antenna**
T-antenna:
A ‘T’-antenna, ‘T’-aerial, or flat-top antenna is a monopole radio antenna consisting of one or more horizontal wires suspended between two supporting radio masts or buildings and insulated from them at the ends. A vertical wire is connected to the center of the horizontal wires and hangs down close to the ground, connected to the transmitter or receiver. Combined, the top and vertical sections form a ‘T’ shape, hence the name. The transmitter power is applied, or the receiver is connected, between the bottom of the vertical wire and a ground connection.
‘T’-antennas are typically used in the VLF, LF, MF, and shortwave bands,: 578–579 and are widely used as transmitting antennas for amateur radio stations, and long wave and medium wave AM broadcasting stations. They can also be used as receiving antennas for shortwave listening.
The ‘T’-antenna functions as a monopole antenna with capacitive top-loading; other antennas in this category include the inverted-‘L’, umbrella, and triatic antennas. It was invented during the first decades of radio, in the wireless telegraphy era, before 1920.
How it works:
The electrical design of a ‘T’-antenna is effectively that of a giant capacitor.
The ‘T’-type antenna is most easily understood as having three functional parts: Top hat – the horizontal top section that in effect is the upper plate of the capacitor (also called the capacitance hat).
Radiator – the vertical center section (often the antenna mast itself) that carries current from the feedpoint at the base to the top; unbalanced current in the vertical segment generates the emitted radio waves.
Counterpoise – the base-level ground system, ground plane, or base radials, which forms the bottom plate of the capacitor. The wires of the top hat and the counterpoise (or ground system) are both (ideally) arranged symmetrically; currents flowing in the oppositely directed symmetrical wires of the top hat cancel each other's fields and so produce no net radiation, with the same cancellation happening in the ground system. The top and ground sections effectively function as oppositely charged reservoirs, storing more excess or deficit electrons than could be stored along the top end of a bare vertical wire of the same height. A greater stored charge causes greater current to flow through the vertical segment between the top and base, and that current in the vertical segment produces the radiation emitted by the T-antenna.
Capacitance ‘hat’ – The left and right sections of horizontal wire across the top of the ‘T’ carry equal but oppositely-directed currents. Therefore, far from the antenna, the radio waves radiated by each wire are 180° out of phase with the waves from the other wire, and tend to cancel. There is a similar cancellation of radio waves reflected from the ground. Thus the horizontal wires radiate (almost) no radio power.: 554 Instead of radiating, the horizontal wires increase the capacitance at the top of the antenna. More current is required in the vertical wire to charge and discharge this added capacitance during the RF oscillation cycle.: 554 The increased currents in the vertical wire effectively increase the antenna's radiation resistance and thus the RF power radiated. The top-load capacitance increases as more wires are added, so several parallel horizontal wires are often used, connected together at the center where the vertical wire attaches. Because each wire's electric field impinges on those of adjacent wires, the additional capacitance from each added wire diminishes.
Efficiency of capacitive top loading – The horizontal top load wire can increase radiated power by 2 to 4 times (3 to 6 dB) for a given base current. Consequently the ‘T’-antenna can radiate more power than a simple vertical monopole of the same height. Similarly, a receiving T-antenna can intercept more power from the same incoming radio wave signal strength than the same-height vertical antenna can. In antennas built for frequencies near or below 600 kHz, the length of an antenna's wire segments is usually shorter than a quarter wavelength (1/4 λ ≈ 125 metres (410 ft) at 600 kHz), the shortest length of unloaded straight wire that achieves resonance.
In this circumstance, a ‘T’-antenna is a capacitively top-loaded, electrically short, vertical monopole.: 578–579 Despite its improvements over a short vertical, the typical ‘T’-antenna is still not as efficient as a full-height 1/4 λ vertical monopole, and has a higher Q and thus a narrower bandwidth. ‘T’-antennas are typically used at low frequencies where building a full-size quarter-wave vertical antenna is not practical, and the vertical radiating wire is often very electrically short: only a small fraction of a wavelength long, 1/10 λ or less. An electrically short antenna has a base reactance that is capacitive, and although capacitive loading at the top reduces the capacitive reactance at the base, usually some residual capacitive reactance remains. For transmitting antennas, this residual reactance must be tuned out by added inductive reactance from a loading coil, so the antenna can be efficiently fed power.
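The top-loading gain can be illustrated with a rough worked example. All numbers and formulas below are textbook small-antenna approximations assumed for illustration, not values from this article: a short monopole over perfect ground with triangular current (no top hat) has radiation resistance roughly 10·π²·(h/λ)² ohms, while ideally uniform current from full top loading gives roughly 40·π²·(h/λ)², the 4× (6 dB) upper end of the improvement quoted above:

```python
import math

def r_rad_monopole(h, wavelength, uniform_current=False):
    """Approximate radiation resistance (ohms) of an electrically short monopole
    over perfect ground. Triangular current: 10*pi^2*(h/lambda)^2;
    uniform current (ideal full top loading): 40*pi^2*(h/lambda)^2."""
    k = 40 if uniform_current else 10
    return k * math.pi**2 * (h / wavelength) ** 2

wavelength = 3e8 / 500e3            # 600 m at 500 kHz (assumed example)
h = 50.0                            # 50 m mast, about lambda/12: electrically short
bare = r_rad_monopole(h, wavelength)
loaded = r_rad_monopole(h, wavelength, uniform_current=True)
assert bare < 1.0                                # under an ohm, as the text notes
assert abs(loaded / bare - 4.0) < 1e-9           # top loading: up to 4x (6 dB)
```

With these assumed dimensions the bare radiator comes out near 0.7 ohm, consistent with the "often less than 1 ohm" figure discussed under Resistance below.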
Radiation pattern:
Since the vertical wire is the actual radiating element, the antenna radiates vertically polarized radio waves in an omnidirectional radiation pattern, with equal power in all azimuthal directions.
The axis of the horizontal wire makes little difference. The power is maximum in a horizontal direction or at a shallow elevation angle, decreasing to zero at the zenith. This makes it a good antenna at LF or MF frequencies, which propagate as ground waves with vertical polarization, but it also radiates enough power at higher elevation angles to be useful for sky wave ("skip") communication. The effect of poor ground conductivity is generally to tilt the pattern up, with the maximum signal strength at a higher elevation angle.
Transmitting antennas:
In the longer wavelength ranges where ‘T’-antennas are typically used, the electrical characteristics of antennas are generally not critical for modern radio receivers; reception is limited by natural noise, rather than by the signal power gathered by the receiving antenna. Transmitting antennas are different, and feedpoint impedance is critical: the combination of reactance and resistance at the antenna feedpoint must be matched to the impedance of the feedline, and beyond it, the transmitter's output stage. If mismatched, current sent from the transmitter to the antenna will reflect back down the feedline from the antenna, creating a condition called standing waves on the line. This reduces the power radiated by the antenna, and at worst may damage the transmitter.
Reactance:
Any monopole antenna that is shorter than 1/4 wave has a capacitive reactance; the shorter it is, the higher that reactance, and the greater the proportion of the feed current that will be reflected back towards the transmitter.
To drive current efficiently into a short transmitting antenna, it must be made resonant (reactance-free) if the top section has not already made it so. The capacitance is usually canceled out by an added loading coil or its equivalent; the loading coil is conventionally placed at the base of the antenna for accessibility, connected between the antenna and its feedline.
The horizontal top section of a ‘T’-antenna also reduces the capacitive reactance at the feedpoint, substituting for a vertical section whose height would be about 2/3 its length; if it is long enough, it completely eliminates the reactance and obviates any need for a coil at the feedpoint.
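The coil sizing described above follows the series-resonance condition ω²LC = 1. A minimal sketch, assuming an illustrative (hypothetical) antenna capacitance and operating frequency:

```python
import math

def loading_coil_inductance(f_hz: float, c_farads: float) -> float:
    """Inductance (H) that resonates out the antenna capacitance at f_hz."""
    omega = 2 * math.pi * f_hz
    return 1.0 / (omega ** 2 * c_farads)

# Assumed illustrative values: a short LF monopole presenting ~400 pF,
# operated at 137 kHz.
L = loading_coil_inductance(137e3, 400e-12)
print(f"Loading coil: {L * 1e3:.2f} mH")
```

With these assumed figures the coil comes out in the millihenry range, which is why LF loading coils are physically large.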
At medium and low frequencies, the high capacitive reactance of the antenna and the high inductance of the loading coil, compared to the short antenna’s low radiation resistance, make the loaded antenna behave like a high-Q tuned circuit, with a narrow bandwidth over which it remains well matched to the transmission line, when compared to a 1/4 λ monopole. To operate over a large frequency range the loading coil often must be adjustable, and adjusted when the frequency is changed, to limit the power reflected back towards the transmitter. The high Q also causes a high voltage on the antenna, which is maximum at the current nodes at the ends of the horizontal wire: roughly Q times the driving-point voltage. The insulators at the ends must be designed to withstand these voltages. In high-power transmitters the output power is often limited by the onset of corona discharge from the wires.
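The narrow bandwidth can be sketched with the usual series-tuned-circuit approximations Q = X/R and bandwidth ≈ f/Q. The reactance and resistance figures below are illustrative assumptions, not measurements of any particular antenna:

```python
def antenna_q(reactance_ohms: float, resistance_ohms: float) -> float:
    """Q of the loaded antenna treated as a series tuned circuit."""
    return reactance_ohms / resistance_ohms

def bandwidth_hz(f_hz: float, q: float) -> float:
    """Approximate 3 dB bandwidth of a resonant circuit."""
    return f_hz / q

# Assumed illustrative figures: 2900 ohm capacitive reactance and 30 ohm
# total (loss + radiation) resistance at 137 kHz.
q = antenna_q(2900.0, 30.0)
print(f"Q ~ {q:.0f}, bandwidth ~ {bandwidth_hz(137e3, q):.0f} Hz")
```

The same Q figure multiplies the driving-point voltage at the wire ends, so even a modest drive voltage can place kilovolts on the end insulators.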
Resistance:
Radiation resistance is the equivalent resistance of an antenna due to its radiation of radio waves; for a full-height quarter-wave monopole the radiation resistance is around 36 ohms. Any antenna that is short compared to the operating wavelength has a lower radiation resistance than a longer antenna, sometimes drastically so, far beyond what the top load of a ‘T’-antenna can make up. So at low frequencies even a ‘T’-antenna can have a very low radiation resistance, often less than 1 ohm, and its efficiency is limited by the other resistances in the antenna and the ground system. The input power is divided between the radiation resistance and the ‘ohmic’ resistances of the antenna-ground circuit, chiefly the loading coil and the ground. The resistance of the coil and particularly of the ground system must be kept very low to minimize the power dissipated in them.
It can be seen that at low frequencies the design of the loading coil can be challenging: it must have high inductance but very low losses at the transmitting frequency (high Q), must carry high currents, withstand high voltages at its ungrounded end, and be adjustable. It is often made of litz wire. At low frequencies the antenna requires a good low-resistance ground to be efficient. The RF ground is typically constructed as a star of many radial copper cables buried about 1 ft in the earth, extending out from the base of the vertical wire and connected together at the center. The radials should ideally be long enough to extend beyond the displacement current region near the antenna. At VLF frequencies the resistance of the soil becomes a problem, and the radial ground system is usually raised and mounted a few feet above ground, insulated from it, to form a counterpoise.
Equivalent circuit:
The power radiated (or received) by any electrically short vertical antenna, like the ‘T’-antenna, is proportional to the square of the effective height of the antenna, so the antenna should be made as high as possible. Without the horizontal wire, the RF current distribution in the vertical wire would decrease very nearly linearly to zero at the top (see drawing “a” above), giving an effective height of half the physical height of the antenna. With an ideal “infinite capacitance” top load wire, the current in the vertical would be constant along its length, giving an effective height equal to the physical height, thereby quadrupling the radiated power for the same base current. So the power radiated (or received) by a ‘T’-antenna lies between that of a plain vertical monopole of the same height and four times that value.
The radiation resistance of an ideal T-antenna with very large top load capacitance is

RR ≈ 5 (4πh/λ)²

so the radiated power is

P = RR I0² ≈ 5 (4πhI0/λ)²

where h is the height of the antenna, λ is the wavelength, and I0 is the RMS input current in amperes. This formula shows that the radiated power depends on the product of the base current and the effective height, and is used to determine how many metre-amps are required to achieve a given amount of radiated power.
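These formulas can be evaluated directly. The mast height, wavelength, and base current below are illustrative assumptions:

```python
import math

def radiation_resistance(h_m: float, wavelength_m: float) -> float:
    """R_R ~ 5 * (4*pi*h/lambda)^2 ohms, ideal fully top-loaded monopole."""
    return 5.0 * (4 * math.pi * h_m / wavelength_m) ** 2

def radiated_power(h_m: float, wavelength_m: float, i0_rms: float) -> float:
    """P = R_R * I0^2 watts, with I0 the RMS base current."""
    return radiation_resistance(h_m, wavelength_m) * i0_rms ** 2

# Assumed illustrative figures: 60 m effective height at 100 kHz
# (lambda = 3000 m), fed with 50 A RMS, i.e. 3000 metre-amps.
print(f"R_R ~ {radiation_resistance(60.0, 3000.0):.3f} ohm")
print(f"P   ~ {radiated_power(60.0, 3000.0, 50.0):.0f} W")
```

Note how a fraction-of-an-ohm radiation resistance still yields hundreds of watts radiated, but only because the base current is made very large.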
The equivalent circuit of the antenna (including the loading coil) is the series combination of the capacitive reactance of the antenna, the inductive reactance of the loading coil, and the radiation resistance and the other resistances of the antenna-ground circuit. So the input impedance is

Z = RC + RD + Rℓ.c. + RG + RR + j(ωLℓ.c. − 1/(ωCant.))

where:
RC is the ohmic resistance of the antenna conductors (copper losses)
RD is the equivalent series resistance of the dielectric losses
Rℓ.c. is the series resistance of the loading coil
RG is the resistance of the ground system
RR is the radiation resistance
Cant. is the apparent capacitance of the antenna at the input terminals
Lℓ.c. is the inductance of the loading coil

At resonance the capacitive reactance of the antenna is cancelled by the loading coil, so the input impedance at resonance, Z0, is just the sum of the resistances in the antenna circuit:

Z0 = RC + RD + Rℓ.c. + RG + RR

The efficiency of the antenna at resonance, η, is the ratio of radiated power to input power from the feedline. Since power dissipated as radiation or as heat is proportional to resistance, the efficiency is given by

η = RR / (RC + RD + Rℓ.c. + RG + RR)

It can be seen that, since the radiation resistance is usually very low, the major design problem is to keep the other resistances in the antenna-ground system low to obtain the highest efficiency.
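The efficiency formula is easy to sketch numerically; the resistance values below are illustrative assumptions for a low-frequency installation, not data from the text:

```python
def antenna_efficiency(r_radiation: float, r_conductor: float,
                       r_dielectric: float, r_coil: float,
                       r_ground: float) -> float:
    """eta = R_R / (R_C + R_D + R_coil + R_G + R_R) at resonance."""
    return r_radiation / (r_conductor + r_dielectric + r_coil
                          + r_ground + r_radiation)

# Assumed illustrative values (ohms): the loss resistances dwarf the
# sub-ohm radiation resistance typical at these frequencies.
eta = antenna_efficiency(r_radiation=0.5, r_conductor=0.1,
                         r_dielectric=0.1, r_coil=1.0, r_ground=2.0)
print(f"Efficiency ~ {eta:.1%}")
```

Even with a good ground, most of the input power in this example heats the coil and soil rather than being radiated, which is why ground-system resistance dominates LF antenna design.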
Multiple-tuned antenna:
The multiple-tuned flattop antenna is a variant of the ‘T’-antenna used in high-power low-frequency transmitters to reduce ground power losses. It consists of a long capacitive top load of multiple parallel wires supported by a line of transmission towers, sometimes several miles long. Several vertical radiator wires hang down from the top load, each attached to its own ground through a loading coil. The antenna is driven either at one of the radiator wires or, more often, at one end of the top load, by bringing the wires of the top load diagonally down to the transmitter. Although the vertical wires are separated, the distance between them is small compared to the length of the LF waves, so the currents in them are in phase and they can be considered as one radiator. Since the antenna current flows into the ground through N parallel loading coils and grounds rather than one, the equivalent loading coil and ground resistance, and therefore the power dissipated in the loading coil and ground, is reduced to 1/N that of a simple ‘T’-antenna. The antenna was used in the powerful radio stations of the wireless telegraphy era but has fallen out of favor due to the expense of multiple loading coils.
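The 1/N scaling of the loss resistance translates directly into efficiency. The resistance figures below are assumptions chosen purely for illustration:

```python
def multiple_tuned_efficiency(r_radiation: float, r_loss_per_path: float,
                              n_paths: int) -> float:
    """N parallel coil+ground paths reduce the effective loss to 1/N."""
    return r_radiation / (r_radiation + r_loss_per_path / n_paths)

# Assumed illustrative figures: 0.5 ohm radiation resistance and 3 ohm
# coil+ground loss per down-lead.
for n in (1, 3, 6):
    eff = multiple_tuned_efficiency(0.5, 3.0, n)
    print(f"N={n}: efficiency ~ {eff:.1%}")
```

With these assumed numbers, going from one tuned down-lead to six raises the efficiency from about 14% to 50%, which is why the technique was worth the cost of extra coils in high-power stations.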
**Floral Jamming**
Floral Jamming:
Floral Jamming is a floral design activity originating from Hong Kong, where participants design their own original sculpture with floral materials. The floral designer provides all materials required for the floral display including flowers and foliage. Participants select the materials to create their own floral creation within a set time.
The mission of Floral Jamming is to promote floral art to the general public. Participants of Floral Jamming are usually beginners who have little to no experience in floral arrangement or floral design. A key feature of Floral Jamming is that the sessions must be conducted by a floral designer with professional certification.
Through Floral Jamming, these participants can assemble a final floral product under professional guidance without having taken any classes. During a Floral Jamming session, the floral designer only provides guidance upon participants’ request. Based on the individual participants’ design, the floral designer could make suggestions on how to refine the arrangement.
Floral Jamming is also educational. It can introduce concepts of nature and design to young participants. It is an activity that can enhance family bonding and interpersonal skills.
Many Floral Jammers have commented that the process of focusing on producing their own individual creation has helped them to de-stress.
Origin:
The first Floral Jamming was held at Tallensia Floral Art, Hong Kong, on July 1, 2011.
Founder:
Floral Jamming was founded by Lowdi Kwan, a floral designer certified by the American Institute of Floral Designers in 1996.
Benefits of Floral Jamming:
Psychological research has shown that the art of floral arrangement can help relieve mood swings, improve cognitive and perceptive functions, and enhance social skills. Bearing this in mind, Kwan conceived the concept of Floral Jamming.
Sessions:
Prior to the session, materials including the necessary tools and accessories are displayed. To start off a session, the floral designer gives a brief introduction to the materials and to how to start a floral arrangement by introducing the fundamental principles of floral arrangement. The participants then select flowers and other floral materials to construct their own floral arrangement. The floral designer provides assistance and technical support throughout the session. It is important to note that the floral designer only provides guidance or suggestions upon request, as support and not to change the participants’ individual designs.
Floral Jamming has been organized for functions such as birthday parties, bridal showers, school fairs, corporate team building and charity events.
**Single-minded agent**
Single-minded agent:
In computational economics, a single-minded agent is an agent who wants only a very specific combination of items. The valuation function of such an agent assigns a positive value only to a specific set of items, and to all sets that contain it. It assigns a zero value to all other sets. A single-minded agent regards the set of items he wants as purely complementary goods. Various computational problems related to allocation of items are easier when all the agents are known to be single-minded. For example:
Revenue-maximizing auctions.
Multi-item exchange.
Fair cake-cutting and fair item allocation.
Combinatorial auctions.
Envy-free pricing.
Comparison to other valuation functions:
As mentioned above, a single-minded agent regards the goods as purely complementary goods. In contrast, an additive agent assigns a positive value to every item, and assigns to every bundle a value that is the sum of the values of the items it contains. An additive agent regards the set of items he wants as purely independent goods.
In contrast, a unit-demand agent wants only a single item, and assigns to every bundle a value that is the maximum value of an item contained in it. A unit-demand agent regards the items as purely substitute goods.
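The three valuation classes above can be sketched as simple Python functions (an illustrative sketch, not taken from any particular library):

```python
from typing import Callable, Dict, FrozenSet

Bundle = FrozenSet[str]

def single_minded(desired: Bundle, value: float) -> Callable[[Bundle], float]:
    """Positive value only for bundles containing the entire desired set."""
    return lambda bundle: value if desired <= bundle else 0.0

def additive(values: Dict[str, float]) -> Callable[[Bundle], float]:
    """Bundle value is the sum of its items' values (independent goods)."""
    return lambda bundle: sum(values.get(i, 0.0) for i in bundle)

def unit_demand(values: Dict[str, float]) -> Callable[[Bundle], float]:
    """Bundle value is its single best item (substitute goods)."""
    return lambda bundle: max((values.get(i, 0.0) for i in bundle), default=0.0)

v_sm = single_minded(frozenset({"a", "b"}), 10.0)
v_ad = additive({"a": 4.0, "b": 6.0})
v_ud = unit_demand({"a": 4.0, "b": 6.0})
print(v_sm(frozenset({"a", "b", "c"})))  # 10.0: contains the desired set
print(v_sm(frozenset({"a"})))            # 0.0: one desired item is missing
print(v_ad(frozenset({"a", "b"})))       # 10.0: 4 + 6
print(v_ud(frozenset({"a", "b"})))       # 6.0: best single item only
```

The contrast is visible on the same bundle {a, b}: the single-minded agent jumps from 0 to full value once the whole desired set is present, the additive agent sums, and the unit-demand agent takes the maximum.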
**Menthol**
Menthol:
Menthol is an organic compound, more specifically a monoterpenoid, made synthetically or obtained from the oils of corn mint, peppermint, or other mints. It is a waxy, clear or white crystalline substance, which is solid at room temperature and melts slightly above it.
The main form of menthol occurring in nature is (−)-menthol, which is assigned the (1R,2S,5R) configuration. Menthol has local anesthetic and counterirritant qualities, and it is widely used to relieve minor throat irritation. Menthol also acts as a weak κ-opioid receptor agonist.
Structure:
Natural menthol exists as one pure stereoisomer, nearly always the (1R,2S,5R) form (bottom left corner of the diagram below). The eight possible stereoisomers are: In the natural compound, the isopropyl group is in the trans orientation to both the methyl and hydroxyl groups. Thus, it can be drawn in any of the ways shown: The (+)- and (−)-enantiomers of menthol are the most stable among these based on their cyclohexane conformations. With the ring itself in a chair conformation, all three bulky groups can orient in equatorial positions.
The two crystal forms for racemic menthol have melting points of 28 °C and 38 °C. Pure (−)-menthol has four crystal forms, of which the most stable is the α form, the familiar broad needles.
Biological properties:
Menthol's ability to chemically trigger the cold-sensitive TRPM8 receptors in the skin is responsible for the well-known cooling sensation it provokes when inhaled, eaten, or applied to the skin. In this sense, it is similar to capsaicin, the chemical responsible for the spiciness of hot chilis (which stimulates heat sensors, also without causing an actual change in temperature).
Menthol's analgesic properties are mediated through a selective activation of κ-opioid receptors. Menthol blocks calcium channels and voltage-sensitive sodium channels, reducing neural activity that may stimulate muscles. Some studies show that menthol acts as a GABAA receptor positive allosteric modulator and increases GABAergic transmission in PAG neurons. Menthol also shares anaesthetic properties similar to those of propofol, modulating the same sites of the GABAA receptor. Menthol is widely used in dental care as a topical antibacterial agent, effective against several types of streptococci and lactobacilli. Menthol also lowers blood pressure and antagonizes vasoconstriction through TRPM8 activation.
Occurrence:
Mentha arvensis (wild mint) is the primary species of mint used to make natural menthol crystals and natural menthol flakes. This species is primarily grown in the Uttar Pradesh region in India. Menthol occurs naturally in peppermint oil (along with a little menthone, the ester menthyl acetate and other compounds), obtained from Mentha × piperita (peppermint). Japanese menthol also contains a small percentage of the 1-epimer neomenthol.
Biosynthesis:
The biosynthesis of menthol has been investigated in Mentha × piperita and the enzymes involved have been identified and characterized. It begins with the synthesis of the terpene limonene, followed by hydroxylation, and then several reduction and isomerization steps.
More specifically, the biosynthesis of (−)-menthol takes place in the secretory gland cells of the peppermint plant. Geranyl diphosphate synthase (GPPS), first catalyzes the reaction of IPP and DMAPP into geranyl diphosphate. Next (−)-limonene synthase (LS) catalyzes the cyclization of geranyl diphosphate to (−)-limonene. (−)-Limonene-3-hydroxylase (L3OH), using O2 and NADPH, then catalyzes the allylic hydroxylation of (−)-limonene at the 3 position to (−)-trans-isopiperitenol. (−)-trans-Isopiperitenol dehydrogenase (iPD) further oxidizes the hydroxyl group on the 3 position using NAD+ to make (−)-isopiperitenone. (−)-Isopiperitenone reductase (iPR) then reduces the double bond between carbons 1 and 2 using NADPH to form (+)-cis-isopulegone. (+)-cis-Isopulegone isomerase (iPI) then isomerizes the remaining double bond to form (+)-pulegone. (+)-Pulegone reductase (PR) then reduces this double bond using NADPH to form (−)-menthone. (−)-Menthone reductase (MR) then reduces the carbonyl group using NADPH to form (−)-menthol.
Production:
Natural menthol is obtained by freezing peppermint oil. The resultant crystals of menthol are then separated by filtration.
Total world production of menthol in 1998 was 12,000 tonnes, of which 2,500 tonnes was synthetic. In 2005, the annual production of synthetic menthol was almost double. Prices are in the $10–20/kg range with peaks in the $40/kg region but have reached as high as $100/kg. In 1985, it was estimated that China produced most of the world's supply of natural menthol, although it appears that India has pushed China into second place. Menthol is manufactured as a single enantiomer (94% e.e.) on the scale of 3,000 tonnes per year by Takasago International Corporation. The process involves an asymmetric synthesis developed by a team led by Ryōji Noyori, who won the 2001 Nobel Prize for Chemistry in recognition of his work on this process: The process begins by forming an allylic amine from myrcene, which undergoes asymmetric isomerisation in the presence of a BINAP rhodium complex to give (after hydrolysis) enantiomerically pure R-citronellal. This is cyclised by a carbonyl-ene-reaction initiated by zinc bromide to isopulegol, which is then hydrogenated to give pure (1R,2S,5R)-menthol.
Another commercial process is the Haarmann–Reimer process (after the company Haarmann & Reimer, now part of Symrise). This process starts from m-cresol, which is alkylated with propene to thymol. This compound is hydrogenated in the next step. Racemic menthol is isolated by fractional distillation. The enantiomers are separated by chiral resolution via reaction with methyl benzoate and selective crystallisation, followed by hydrolysis.
Racemic menthol can also be formed by hydrogenation of thymol, menthone, or pulegone. In these cases, with further processing (crystallizative entrainment resolution of the menthyl benzoate conglomerate), it is possible to concentrate the L-enantiomer; however, this tends to be less efficient, although the higher processing costs may be offset by lower raw material costs. A further advantage of this process is that D-menthol becomes inexpensively available for use as a chiral auxiliary, along with the more usual L-antipode.
Applications:
Menthol is included in many products, and for a variety of reasons.
Cosmetic:
In nonprescription products for short-term relief of minor sore throat and minor mouth or throat irritation, e.g. lip balms and cough medicines.
In some beauty products such as hair conditioners, based on natural ingredients (e.g., St. Ives).
Medical:
As an antipruritic to reduce itching.
As a topical analgesic, it is used to relieve minor aches and pains, such as muscle cramps, sprains, headaches and similar conditions, alone or combined with chemicals such as camphor, eucalyptus oil or capsaicin. In Europe, it tends to appear as a gel or a cream, while in the U.S., patches and body sleeves are very frequently used, e.g.: Tiger Balm, or IcyHot patches or knee/elbow sleeves.
As a penetration enhancer in transdermal drug delivery.
In decongestants for chest and sinuses (cream, patch or nose inhaler).
Examples: Vicks VapoRub, Mentholatum, Axe Brand, VapoRem, Mentisan.
In certain medications used to treat sunburns, as it provides a cooling sensation (then often associated with aloe).
Commonly used in oral hygiene products and bad-breath remedies, such as mouthwash, toothpaste, mouth and tongue sprays, and more generally as a food flavor agent; such as in chewing gum and candy.
In first aid products such as "mineral ice" to produce a cooling effect as a substitute for real ice in the absence of water or electricity (pouch, body patch/sleeve or cream).
Others:
In aftershave products to relieve razor burn.
As a smoking tobacco additive in some cigarette brands, for flavor, and to reduce throat and sinus irritation caused by smoking. Menthol also increases nicotine receptor density, increasing the addictive potential of tobacco products.
As a pesticide against tracheal mites of honey bees.
In perfumery, menthol is used to prepare menthyl esters to emphasize floral notes (especially rose).
In various patches ranging from fever-reducing patches applied to children's foreheads to "foot patches" to relieve numerous ailments (the latter being much more frequent and elaborate in Asia, especially Japan: some varieties use "functional protrusions", or small bumps to massage one's feet as well as soothing them and cooling them down).
As an antispasmodic and smooth muscle relaxant in upper gastrointestinal endoscopy.
Organic chemistry:
In organic chemistry, menthol is used as a chiral auxiliary in asymmetric synthesis. For example, sulfinate esters made from sulfinyl chlorides and menthol can be used to make enantiomerically pure sulfoxides by reaction with organolithium reagents or Grignard reagents. Menthol reacts with chiral carboxylic acids to give diastereomeric menthyl esters, which are useful for chiral resolution.
It can be used as a catalyst for sodium production for the amateur chemist via the alcohol catalysed magnesium reduction process.
Menthol is potentially ergogenic (performance-enhancing) for athletic performance in hot environments.
Reactions:
Menthol reacts in many ways like a normal secondary alcohol. It is oxidised to menthone by oxidising agents such as chromic acid or dichromate, though under some conditions the oxidation can go further and break open the ring. Menthol is easily dehydrated to give mainly 3-menthene, by the action of 2% sulfuric acid. Phosphorus pentachloride (PCl5) gives menthyl chloride.
History:
In the West, menthol was first isolated in 1771, by the German, Hieronymus David Gaubius. Early characterizations were done by Oppenheim, Beckett, Moriya, and Atkinson. It was named by F. L. Alphons Oppenheim (1833–1877) in 1861.
Compendial status:
United States Pharmacopeia 23
Japanese Pharmacopoeia 15
Food Chemicals Codex
Safety:
The estimated lethal dose for menthol (and peppermint oil) in humans may be as low as 50–500 mg/kg (LD50 acute: 3300 mg/kg [rat], 3400 mg/kg [mouse], 800 mg/kg [cat]).
Survival after doses of 8 to 9 g has been reported. Overdose effects are abdominal pain, ataxia, atrial fibrillation, bradycardia, coma, dizziness, lethargy, nausea, skin rash, tremor, vomiting, and vertigo.
**Exchange transfusion**
Exchange transfusion:
An exchange transfusion is a blood transfusion in which the patient's blood or components of it are exchanged with (replaced by) other blood or blood products. The patient's blood is removed and replaced by donated blood or blood components. The exchange can be performed manually or using a machine (apheresis). Most blood transfusions involve adding blood or blood products without removing any blood; these are also known as simple transfusions or top-up transfusions. Exchange transfusion is used in the treatment of a number of diseases, including sickle-cell disease and hemolytic disease of the newborn. Partial exchange might be required for polycythemia.
Nearly all exchange transfusions are allogeneic (that is, the new blood or blood products come from another person or persons, via donated blood); autologous exchange transfusion is possible (using autologous blood banking), but there are not many situations in which a need for it arises, as most autologous transfusions involve no exchange.
Description:
An exchange transfusion requires that the patient's blood can be removed and replaced. In most cases, this involves placing one or more thin tubes, called catheters, into a blood vessel. The exchange transfusion is done in cycles, each of which usually lasts a few minutes. The patient’s blood is slowly withdrawn (usually about 5 to 20 mL at a time, depending on the patient’s size and the severity of illness), and a slightly larger amount of fresh, prewarmed blood or plasma flows into the patient's body. This cycle is repeated until the correct volume of blood has been replaced. After the exchange transfusion, catheters may be left in place in case the procedure needs to be repeated.
In diseases such as sickle cell anemia, blood is removed and replaced with donor blood. In conditions such as neonatal polycythemia, a specific amount of the child’s blood is removed and replaced with normal saline, plasma (the clear liquid portion of blood), or an albumin solution. This decreases the total number of red blood cells in the body and makes it easier for blood to flow through the body.
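For the partial exchange described above, a commonly quoted clinical formula (not stated in the text, shown purely for illustration and never for clinical use) relates the exchange volume to the observed and desired hematocrit. The weight, hematocrit values, and per-kilogram blood volume below are all assumptions:

```python
import math

def partial_exchange_volume(weight_kg: float, hct_observed: float,
                            hct_desired: float,
                            blood_volume_ml_per_kg: float = 80.0) -> float:
    """Exchange volume (mL) = blood volume * (Hct_obs - Hct_desired) / Hct_obs."""
    total_blood_volume = weight_kg * blood_volume_ml_per_kg
    return total_blood_volume * (hct_observed - hct_desired) / hct_observed

def exchange_cycles(volume_ml: float, aliquot_ml: float) -> int:
    """Number of withdraw/replace cycles at a given aliquot size."""
    return math.ceil(volume_ml / aliquot_ml)

# Assumed illustrative numbers: a 3 kg neonate, hematocrit 70% -> 55%,
# exchanged in 10 mL aliquots as described in the text.
v = partial_exchange_volume(3.0, 0.70, 0.55)
print(f"{v:.0f} mL over {exchange_cycles(v, 10.0)} cycles")
```

This shows why the procedure proceeds in many small cycles: even a modest target change in hematocrit implies dozens of millilitres exchanged a few millilitres at a time.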
Medical Uses:
Sickle Cell Disease:
Transfusion therapy is used as an emergency procedure to treat life-threatening complications of sickle-cell disease, as well as an elective procedure to stop these complications occurring.
Treatment of life-threatening complications:
Acute cerebrovascular event (stroke)
Acute chest syndrome with respiratory failure
Multi-organ failure
Mesenteric girdle syndrome
The commonest emergency reason is to treat an acute chest syndrome.
Prevention:
Prior to surgery in people with sickle cell anemia (HbSS) who already have a hemoglobin above 85 g/L, or who require a prolonged operation under general anesthetic, or who need high-risk surgery
To optimise hemoglobin S levels, for example to prevent a stroke occurring in a child; the target is usually to maintain a hemoglobin S level below 30% to prevent complications occurring
The most common routine reason is to prevent a stroke occurring or re-occurring.
Hemolytic Disease of the Newborn:
Exchange transfusion to treat hemolytic disease of the newborn is now uncommon since the introduction of anti-D prophylaxis in pregnancy. However, the disease can still occur due to the development of other antibodies such as anti-c, anti-E, and ABO antibodies.
Polycythemia:
Polycythemia, a condition in which the number of red cells in the blood is too high, is usually diagnosed when the hematocrit is above 65%. Polycythemia can occur in neonates for multiple different reasons including: babies born after 42 weeks gestation (post-term), babies born to diabetic mothers, twin-to-twin transfusion, intrauterine growth restriction, and babies with genetic abnormalities. Polycythemia can make the blood thicker than normal and therefore lead to complications. Partial exchange transfusion has been used as a treatment to prevent complications, and has been shown to improve cerebral blood flow, but there is no evidence that it prevents long-term complications.
Severe malaria:
Exchange transfusion has been used for the treatment of severe malaria in the past. However, in 2013 the CDC examined the limited evidence available and found no evidence that exchange transfusion has any beneficial effect (decreased mortality) in people with very high parasite loads (>10%). Also, although uncommon, exchange transfusion can cause complications: low blood pressure (hypotension), abnormal heart rhythms (ventricular fibrillation), and breathing problems (acute respiratory distress syndrome). Based on this evidence, the CDC no longer recommends the use of exchange transfusion in the treatment of malaria.
Risks:
General risks are the same as with any transfusion. Other possible complications include:
Blood clots
Changes in blood chemistry (high or low potassium, low calcium, low glucose, change in acid-base balance in the blood)
Heart and lung problems
Infection (greatly decreased risk due to careful screening of blood)
Shock due to inadequate replacement of blood
Recovery:
The person may need to be monitored for several days in the hospital after the transfusion, but the length of stay generally depends on the condition for which the exchange transfusion was performed. Sickle Cell Disease patients may be exchanged in an outpatient setting and can be sent home the very same day.
History:
The technique was originally developed by Alexander S. Wiener, soon after he co-discovered the Rh factor.
**Ixazomib**
Ixazomib:
Ixazomib (trade name Ninlaro) is a drug for the treatment of multiple myeloma, a type of white blood cell cancer, in combination with other drugs. It is taken by mouth in the form of capsules.
Common side effects include diarrhea, constipation and low platelet count. Like the older bortezomib (which can only be given by injection), it acts as a proteasome inhibitor, has orphan drug status in the US and Europe, and is a boronic acid derivative.
The drug was developed by Takeda. It has been approved in the US since November 2015, and in the EU since November 2016.
Medical uses:
Ixazomib is used in combination with lenalidomide and dexamethasone for the treatment of multiple myeloma in adults after at least one prior therapy. There is no experience in children and adolescents under 18 years of age. The study relevant for approval included 722 people. In this study, ixazomib increased the median time of progression-free survival from 14.7 months (in the placebo+lenalidomide+dexamethasone study arm, including 362 people) to 20.6 months (under ixazomib+lenalidomide+dexamethasone, 360 people), which was a statistically significant effect (p = 0.012). 11.7% of patients in the ixazomib group had a complete response to the treatment, versus 6.6% in the placebo group. Overall response rate (complete plus partial) was 78.3% versus 71.5%. A phase 3 study demonstrated a significant improvement in progression-free survival (PFS) with ixazomib-lenalidomide-dexamethasone (IRd) compared with placebo. High-risk cytogenetic abnormalities were defined as del(17p), t(4;14), and/or t(14;16); additionally, patients were assessed for 1q21 amplification. Of 722 randomized patients, 552 had cytogenetic results; 137 (25%) had high-risk cytogenetic abnormalities and 172 (32%) had 1q21 amplification alone. PFS was improved with IRd versus placebo in both high-risk and standard-risk cytogenetics subgroups: in high-risk patients, with median PFS of 21.4 versus 9.7 months; in standard-risk patients, with median PFS of 20.6 versus 15.6 months. This PFS benefit was consistent across subgroups with individual high-risk cytogenetic abnormalities, including patients with del(17p). PFS was also longer with IRd versus placebo in patients with 1q21 amplification, and in the "expanded high-risk" group, defined as those with high-risk cytogenetic abnormalities and/or 1q21 amplification.
IRd demonstrated substantial benefit compared with placebo in relapsed/refractory multiple myeloma patients with high-risk and standard-risk cytogenetics, and improves the poor PFS associated with high-risk cytogenetic abnormalities.
Pregnancy and breastfeeding:
Ixazomib and lenalidomide are teratogenic in animal studies. The latter is contraindicated in pregnant women, making this therapy regimen unsuitable for this group. It is not known whether ixazomib or its metabolites pass into breast milk.
Side effects:
Common side effects of the ixazomib+lenalidomide+dexamethasone study therapy included diarrhoea (42% versus 36% under placebo+lenalidomide+dexamethasone), constipation (34% versus 25%), thrombocytopenia (low platelet count; 28% versus 14%), peripheral neuropathy (28% versus 21%), nausea (26% versus 21%), peripheral oedema (swelling; 25% versus 18%), vomiting (22% versus 11%), and back pain (21% versus 16%). Serious diarrhoea or thrombocytopenia each occurred in 2% of patients. Side effects of ixazomib alone were only assessed in a small number of people. Diarrhoea of grade 2 or higher was found in 24% of these patients, thrombocytopenia of grade 3 or higher in 28%, and fatigue of grade 2 or higher in 26%.
Interactions:
The drug has a low potential for interactions via cytochrome P450 (CYP) liver enzymes and transporter proteins. The only relevant finding in studies was a reduction of ixazomib blood levels when combined with the strong CYP3A4 inducer rifampicin. The Cmax was reduced by 54% and the area under the curve by 74% in this study.
Pharmacology:
Mechanism of action At therapeutic concentrations, ixazomib selectively and reversibly inhibits the proteasome subunit beta type-5 (PSMB5) with a dissociation half-life of 18 minutes. This mechanism is the same as that of bortezomib, which has a much longer dissociation half-life of 110 minutes; the related drug carfilzomib, by contrast, blocks PSMB5 irreversibly. Proteasome subunits beta type-1 and type-2 are only inhibited at the high concentrations reached in cell culture models. PSMB5 is part of the 20S proteasome complex and has enzymatic activity similar to chymotrypsin. Its inhibition induces apoptosis, a type of programmed cell death, in various cancer cell lines. A synergistic effect of ixazomib and lenalidomide has been found in a large number of myeloma cell lines.
Pharmacology:
Pharmacokinetics The medication is taken orally as a prodrug, ixazomib citrate, which is a boronic ester; this ester rapidly hydrolyzes under physiological conditions to its biologically active form, ixazomib, a boronic acid. Absolute bioavailability is 58%, and the highest blood plasma concentrations of ixazomib are reached after one hour. Plasma protein binding is 99%. The substance is metabolized by many CYP enzymes (percentages in vitro, at higher than clinical concentrations: CYP3A4 42.3%, CYP1A2 26.1%, CYP2B6 16.0%, CYP2C8 6.0%, CYP2D6 4.8%, CYP2C19 4.8%, CYP2C9 <1%) as well as by non-CYP enzymes, which could explain the low interaction potential. Clearance is about 1.86 litres per hour with a wide inter-individual variability of 44%, and the plasma half-life is 9.5 days. 62% of ixazomib and its metabolites are excreted via the urine (of which less than 3.5% in unchanged form) and 22% via the faeces.
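As a rough illustration of what a 9.5-day half-life implies, the sketch below assumes simple first-order elimination (an illustrative simplification, not a claim about ixazomib's full kinetics):

```python
# Illustrative only: fraction of drug remaining after t days, assuming
# simple first-order elimination with the 9.5-day terminal half-life above.
def fraction_remaining(t_days, half_life_days=9.5):
    return 0.5 ** (t_days / half_life_days)

# After a one-week dosing interval, about 60% of the previous dose remains,
# which is why such a long half-life leads to accumulation with weekly dosing.
round(fraction_remaining(7.0), 2)  # 0.6
```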
Chemistry:
Ixazomib is a boronic acid and peptide analogue like the older bortezomib. It contains a derivative of the amino acid leucine with the carboxylic acid group being replaced by a boronic acid; and the remainder of the molecule has been likened to phenylalanine. The structure has been found through a large-scale screening of boron-containing molecules.
History:
The drug was developed by Takeda. It received US and European orphan drug status for multiple myeloma in 2011, and for AL amyloidosis in 2012. Takeda submitted a US new drug application for multiple myeloma in July 2015. In September 2015, the U.S. Food and Drug Administration (FDA) granted ixazomib combined with lenalidomide and dexamethasone a priority review designation for multiple myeloma. On 20 November 2015, the FDA approved this combination for second-line treatment. The request for marketing authorisation in Europe was initially refused by the European Medicines Agency (EMA) in May 2016 due to insufficient data showing a benefit of treatment. After Takeda requested a re-examination, the EMA granted a marketing authorisation on 21 November 2016, on the condition that further efficacy studies be conducted. The approval indication is the same as in the US.
Research:
As of January 2017, ixazomib is also in Phase III clinical trials for the treatment of AL amyloidosis and plasmacytoma of the bones, and in Phase I/II trials for various other conditions.
**Y-SNP**
Y-SNP:
A Y-SNP is a single-nucleotide polymorphism on the Y chromosome. Y-SNPs are often used in paternal genealogical DNA testing.
SNP markers:
A single nucleotide polymorphism (SNP) is a change to a single nucleotide in a DNA sequence. The relative mutation rate for an SNP is extremely low. This makes them ideal for marking the history of the human genetic tree. SNPs are named with a letter code and a number. The letter indicates the lab or research team that discovered the SNP. The number indicates the order in which it was discovered. For example, M173 is the 173rd SNP documented by the Human Population Genetics Laboratory at Stanford University, which uses the letter M.
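The naming convention described above (a lab letter code followed by a discovery-order number) can be sketched as a small parser; the helper name is hypothetical and purely illustrative:

```python
import re

# Hypothetical helper (not from the text) illustrating the naming scheme:
# a letter code for the discovering lab/team, then the discovery-order number.
def parse_snp_name(name):
    m = re.fullmatch(r"([A-Za-z]+)(\d+)", name)
    if m is None:
        raise ValueError("not a simple SNP marker name: %r" % name)
    return m.group(1), int(m.group(2))

# "M173": M = Stanford's Human Population Genetics Laboratory, 173rd SNP documented
print(parse_snp_name("M173"))  # ('M', 173)
```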
**CM Browser**
CM Browser:
CM Browser (Chinese: 猎豹安全浏览器) was a web browser developed by Cheetah Mobile. The browser is based on Chromium and supports both the WebKit and Trident browser engines. Jinshan Network claims that CM Browser is the first secure dual-engine browser with a "browser intrusion prevention system". On June 3, 2013, CM Browser was released on Android and iOS.
Controversies:
Version 1 of CM Browser used version 17 of Chromium, which was far behind the official Chromium release. This prevented the use of the Chrome Web Store on CM Browser. On September 21, 2014, Jinshan was ordered to pay Youku ¥300,000 for violating Chinese competition laws by allowing CM Browser to filter video ads on Youku's website. In the preceding trial, Youku claimed to have suffered an economic loss as a result of CM Browser's ad filtering, as the company earned revenue from ads and from premium subscriptions that allowed users to skip ads. Jinshan stated that CM Browser's ad filtering feature was vendor-neutral and that users had to opt in to activate it. In November 2018, the Shanghai Consumer Protection Committee commissioned an evaluation of the application permissions of 18 popular mobile apps, including CM Browser. The study found that CM Browser requested sensitive phone- and SMS-related permissions that allowed the browser to monitor the phone's outbound calls. A representative for CM Browser responded that the browser needed to determine whether a phone call was active in order to prevent interference when the browser was playing audio, and indicated that CM Browser would be updated to address the privacy concerns. In February 2020, all of Cheetah Mobile's applications were removed from the Play Store. Following that, the company was accused of spying on its users, based on an unofficial report published by security researcher Gabriel Cirlig, which alleged that CM Browser sent encrypted data to its Chinese servers, exfiltrating the URLs visited by its users and selling them to third parties. The report's framing has also been criticized as a double standard, on the grounds that Facebook and Google openly engage in similar data practices and that security rhetoric aimed at Chinese apps serves Western market protectionism.
Controversies:
Ban in India In June 2020, the Government of India banned CM Browser along with 58 other Chinese origin apps, citing data and privacy concerns. The border tensions in 2020 between India and China might have also played a role in the ban.
**Leyla Soleymani**
Leyla Soleymani:
Leyla Soleymani is a scientist and Canada Research Chair at McMaster University's faculty of engineering. Her research includes the development of advanced materials for biosensing and repellent surfaces.
Biography:
Soleymani received her Ph.D. in Electrical and Computer Engineering from the University of Toronto in 2010 under the mentorship of Ted Sargent. Her dissertation was entitled "Ultrasensitive Detection of Nucleic Acids using an Electronic Chip". In 2019, Soleymani developed a plastic wrap that repels pathogens, such as the superbug methicillin-resistant Staphylococcus aureus, from surfaces. As of 2020, this wrap was being adapted to help halt the spread of COVID-19.
**Curvature of Space and Time, with an Introduction to Geometric Analysis**
Curvature of Space and Time, with an Introduction to Geometric Analysis:
Curvature of Space and Time, with an Introduction to Geometric Analysis is an undergraduate-level textbook for mathematics and physics students on differential geometry, focusing on applications to general relativity. It was written by Iva Stavrov, based on a course she taught at the 2013 Park City Mathematics Institute and subsequently at Lewis & Clark College, and was published in 2020 by the American Mathematical Society, as part of their Student Mathematical Library book series.
Topics:
Curvature of Space and Time is arranged into five chapters with 14 sections in total, with each section covering a single lecture's worth of material. Its topics are covered both mathematically and historically, with reference to the original source material of Bernhard Riemann and others. However, it deliberately avoids some topics from differential topology that have traditionally been covered in differential geometry courses, including abstract manifolds and tangent vectors. Instead, it approaches the subject through coordinate-based geometry, emphasizing quantities that are invariant under changes of coordinates. Its goals include both providing a shortened path for students to reach an understanding of Einstein's mathematics, and promoting curvature as a central way of describing shape and geometry. The first chapter defines Riemannian manifolds as embedded subsets of Euclidean spaces rather than as abstract spaces. It uses Christoffel symbols to formulate differential equations having the geodesics as their solutions, and describes the Koszul formula and the energy functional. Examples include the Euclidean metric, spherical geometry, projective geometry, and the Poincaré half-plane model of the hyperbolic plane. Chapter 2 includes vector fields, gradients, divergence, directional derivatives, tensor calculus, Lie brackets, Green's identities, the maximum principle, and the Levi-Civita connection. It begins a discussion of curvature and the Riemann curvature tensor that is continued into Chapter 3, "the heart of the book", whose topics include Jacobi fields, Ricci curvature, scalar curvature, Myers's theorem, the Bishop–Gromov inequality, and parallel transport. After these mathematical preliminaries, the final two chapters are more physical, with the fourth chapter concerning special relativity, general relativity, the Schwarzschild metric, and Kruskal–Szekeres coordinates.
Topics in the final chapter include geometric analysis, Poisson's equation for the potential fields of charge distributions, and mass in general relativity.
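For reference, the geodesic differential equations formulated via Christoffel symbols in the first chapter take the standard coordinate form (a standard presentation, not quoted from the book):

```latex
\frac{d^2 x^k}{dt^2} + \sum_{i,j} \Gamma^k_{ij}\,\frac{dx^i}{dt}\,\frac{dx^j}{dt} = 0,
\qquad
\Gamma^k_{ij} = \frac{1}{2} \sum_{l} g^{kl}\left(\partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij}\right)
```

Here the $g_{ij}$ are the components of the metric in the chosen coordinates, matching the book's coordinate-based emphasis.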
Audience and reception:
As is usual for a textbook, Curvature of Space and Time has exercises that extend the coverage of its topics and make it suitable as the text for undergraduate courses.
Audience and reception:
Although there are multiple undergraduate-level textbooks on differential geometry, they have generally taken an abstract mathematical view of the subject, and at the time Curvature of Space and Time was published, courses based on this material had somewhat fallen out of fashion. This book is unusual in taking a more direct approach to the parts of the subject that are most relevant to physics. However, although it attempts to cover this material in a self-contained way, reviewer Mark Hunacek warns that it may be too advanced for typical mathematics students, and perhaps better reserved for honors students and "mathematically sophisticated physics majors". He also suggests the book as an introduction to the area for researchers in other topics. Reviewer Hans-Bert Rademacher calls this a "remarkable book", with "excellent motivations and insights", but suggests it as a supplement to standard texts and courses rather than as the main basis for teaching this material. And although finding fault with a few details, reviewer Justin Corvino suggests that, with faculty guidance over these rough spots, the book would be suitable either for independent study or for an advanced topics course, and "required reading" for students enthusiastic about learning the mathematics behind Einstein's theories.
**Leukocyte adhesion molecule deficiency**
Leukocyte adhesion molecule deficiency:
Leukocyte adhesion molecule deficiency is a rare autosomal recessive disorder characterized by recurrent bacterial and fungal infections and impaired neutrophil migration.
**Braking action**
Braking action:
Braking action in aviation is a description of how easily an aircraft can stop after landing on a runway. Either pilots or airport management can report the braking action according to the U.S. Federal Aviation Administration. When reporting braking action, any of the following terms may be used: Good; Medium; Poor; Nil (bad or no braking action). If an air traffic controller receives a braking action report worse than good, a Pilot Report (PIREP) must be completed and an advisory must be included in the Automatic Terminal Information Service ("Braking Action Advisories are in effect"). As of October 2019, the FAA no longer uses mu values to describe braking conditions.
Europe:
In Europe this differs from the above reference. Braking action reports in Europe are an indication/declaration of reduced friction on a runway due to runway contamination (see Landing performance, under the Runway Surface section) which may impact an aircraft's crosswind limits. European reports have nothing to do with stopping distances on a runway, though they should alert pilots that stopping distances will also be affected. Landing distances are empirically dealt with by landing performance data on dry/wet/contaminated runways for each aircraft type.
Crosswind limits:
Whenever braking actions are issued, they are informing pilots that the aircraft maximum crosswind limits may have to be reduced on that runway because of reduced surface friction (grip). This should alert pilots that they may experience lateral/directional control issues during the landing roll-out. In a crosswind landing, the pilot tacks into wind to make allowance for the sideways force that is being applied to the aircraft (also known as using a crab angle). This sideways force occurs as the wind strikes the aircraft's vertical fin, causing the aircraft to weathercock or weathervane. This manifests itself as an angular displacement of the fuselage relative to the runway centreline, known as drift angle. Just before or upon initial ground contact, the pilot must re-align the fuselage to zero the drift angle (i.e. correct it to parallel with the runway's centreline). This re-alignment is accomplished using the rudder flight control surface. As the wheels make contact with the runway surface, considerable side forces and torsion are placed on the tyres as they counter the weathervane effect, which continues to act upon the aircraft. A combination of the inherent strength of the tyres and the friction between them and the runway surface ensures that the pilot can continue to keep the aircraft aligned with the runway as it decelerates during the landing roll. If, however, the surface friction is diminished because of contamination, this may upset the balance of forces, resulting in insufficient directional control to keep the aircraft on the runway. To ensure this does not occur, there is a pro-rata reduction in the aircraft's crosswind limits, which in turn limits the sideways forces acting on the aircraft, thus ensuring sufficient directional control. This is the explanation for approach and landing; for takeoff the converse is true.
The rudder applies a force to counter the crosswind forces as the aircraft accelerates down the runway. At the same time the tyres are accommodating these forces through sidewall torsion, and giving grip with the runway surface. As the aircraft transitions from a ground vehicle to a flying vehicle, rudder input is stopped by the pilot and the aircraft will weathervane. The subsequent drift angle will allow the aircraft to fly on a straight course.
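For illustration, the crosswind component that such limits constrain can be estimated from the wind speed and the angle between the wind and the runway heading (a standard flight-planning calculation, not taken from this text):

```python
import math

# Illustrative flight-planning arithmetic: the crosswind component is the
# wind speed times the sine of the angle between wind direction and runway
# heading; the headwind/tailwind component uses the cosine instead.
def crosswind_component(wind_speed_kt, wind_dir_deg, runway_heading_deg):
    angle = math.radians(wind_dir_deg - runway_heading_deg)
    return abs(wind_speed_kt * math.sin(angle))

# A 20 kt wind 50 degrees off the runway heading gives roughly a 15 kt
# crosswind component, which is what a published crosswind limit constrains.
crosswind_component(20, 230, 180)
```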
Crosswind limits:
Pilots may receive this data through a "Snowtam runway state decoder" which forms an appendix to the internationally recognised METAR (METeorological Aerodrome Report).
Crosswind limits:
In some countries in Europe, pilots will not receive local updated/modified braking action reports directly from an air traffic control (ATC) source unless a recent braking action test has been carried out and is being officially issued. ATC may advise other pilots that they have received a pilot report of a braking action, but since these reports can be variable and subjective, without any empirical value, it should be treated as an advisory.
Braking action tests:
Braking action tests are subject to many variables. A test is an instantaneous report, and its data may no longer be valid after a short period of time in active or changing weather conditions. Caution: the reported value is an average/mean value for the runway length (usually split into thirds) and does not rule out that localised areas are better or worse than reported.
Braking action tests:
The scheduled time interval frequency of such tests and their reports may not be regular. In other words, one may be reading an old braking action report attached to an up-to-date METAR.
Various manufacturers of friction testing equipment provide different (non-homogeneous) readings on the same surface.
Braking action tests:
Most of these friction testing devices employ a trailing wheel or tyre combination which is in contact with the runway surface. It is not an aircraft tyre, and thus the devices are not fully representative in size, weight or speed. Many, if not all, of the tests are carried out below the normal approach/landing speeds that a code C aircraft will fly; code C aircraft typically fly an approach speed of up to 140 kt (161 mph or 259 km/h) indicated airspeed (IAS) (approach speed depends on aircraft mass, pressure altitude, temperature, centre of gravity, and aircraft configuration).
Braking action tests:
Runway condition is a further variable: how old is the tarmac/concrete? Is the runway surface grooved or smooth? Does it have an upward or downward slope? Is it clean, or has it accumulated rubber on its surface (high/low utilisation)? This reported data is used by the airport operators and authorities to determine whether the runway should be closed for de-icing or contamination removal, or remain operational until the next scheduled or requested test or report.
Braking action tests:
Pilots/ATC may request that an official braking action test be carried out prior to a landing.
Format of braking action declarations:
In Europe, braking action declarations are given using the Greek term mu, the coefficient of friction:
Good = a mu value of 0.40 and above; measured snowtam decode is 95
Medium/Good = a mu value of 0.36 to 0.39; measured snowtam decode is 94
Medium = a mu value of 0.30 to 0.35; measured snowtam decode is 93
Medium/Poor = a mu value of 0.26 to 0.29; measured snowtam decode is 92
Poor = a mu value of 0.25 and below; measured snowtam decode is 91
Unreliable = reading unreliable; measured snowtam decode is 99
Reading not measurable or not operationally significant; snowtam decode is
The Snowtam Format reference is International Civil Aviation Organization (ICAO) document Annex 15, Appendix 2.
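The declaration scale above can be sketched as a simple lookup; the function name is illustrative, and the handling of values falling between the listed bands (e.g. 0.29 to 0.30) is an assumption:

```python
def braking_action_from_mu(mu):
    """Map a measured mu value to the European braking action term and
    snowtam decode value, per the thresholds listed above."""
    if mu >= 0.40:
        return "GOOD", 95
    if mu >= 0.36:
        return "MEDIUM/GOOD", 94
    if mu >= 0.30:
        return "MEDIUM", 93
    if mu >= 0.26:
        return "MEDIUM/POOR", 92
    return "POOR", 91

print(braking_action_from_mu(0.32))  # ('MEDIUM', 93)
```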
**National coat of arms**
National coat of arms:
A national coat of arms is a symbol which denotes an independent state in the form of a heraldic achievement. While a national flag is usually used by the population at large and is flown outside and on ships, a national coat of arms is normally considered a symbol of the government or (especially in monarchies) the head of state personally and tends to be used in print, on armorial ware, and as a wall decoration in official buildings. The royal arms of a monarchy, which may be identical to the national arms, are sometimes described as arms of dominion or arms of sovereignty. An important use for national coats of arms is as the main symbol on the covers of passports, the document used internationally to prove the citizenship of a person. Another use for national coats of arms is as a symbol on coins of the associated state for general circulation.
National coat of arms:
For a symbol to be called a "national coat of arms", it should follow the rules of heraldry. If it does not, then the symbol is not formally a coat of arms but rather a national emblem. However, many unheraldic national emblems are colloquially called national coats of arms anyway, because they are used for the same purposes as national coats of arms.
Types of national coats of arms:
Heraldic achievements The original national coats of arms were (and continue to be) heraldic arms, which have a shield (escutcheon) which carries symbols upon it (charges) and often other symbols such as a crown on top of the shield and supporters. In the real sense of the word, these national coats of arms are the only ones which should be called coats of arms, since that term reflects that the emblem used is following the rules of heraldry. Heraldry originated in Western Europe and has now spread to all parts of the world.
Types of national coats of arms:
Up until the 20th century, most independent nations in the civilized world were monarchies and therefore used the monarchistic style of coat of arms. This style is illustrated below by the coat of arms of Sweden and the royal coat of arms of the United Kingdom[a], both of which are still in use. Characteristic of this style are the escutcheon (shield) of the kingdom, the supporters on either side (usually beasts as in these cases, but may also be birds, fishes, humans/humanoids or even inanimate objects as depicted on the coat of arms of Spain) and the crown topping the arms. The crown on the UK arms is specifically the Tudor Crown.[a] Both also feature a symbol of the monarch's chivalric order encircling the escutcheon: the chain of the Order of the Seraphim on the Swedish arms and the belt of the Order of the Garter on the UK's arms.[a] A motto is often present either below or above the escutcheon (as shown on the UK arms); this is absent on the Swedish arms. In common with many European monarchies, the Swedish arms features a representation of a royal robe (see mantle and pavilion) topped with another crown, which became common around the 19th century (and which can also be seen in the Romanian arms below); this type of mantle does not feature at all in British heraldry. The Swedish arms also feature an inescutcheon, a secondary escutcheon within the main one which represents (in this case) the monarch's dynasty, although they may also represent other things; the UK arms featured an inescutcheon from 1801, representing Hanover, until 1837, when it was removed. When used by the monarch, the UK arms features a helmet with mantling and crest which are absent from the version of the arms used by the state, and also from the Swedish arms. These features were all commonly used among the arms of European kingdoms.
Types of national coats of arms:
The lion (sometimes referred to as a leopard when depicted walking; not to be confused with the non-heraldic leopard), being a symbol of power and sovereignty, as well as of Jesus (the Lion of Judah), is a common charge on monarchal coats of arms and features on the coats of arms of all surviving European kingdoms (i.e. the coats of arms of Belgium, Denmark, Luxembourg, the Netherlands, Norway, Spain (where it represents León), Sweden, and the UK), as well as several former monarchies.
Types of national coats of arms:
There is much diversity in the coats of arms of the European republics. Many have chosen to use the same coat of arms they used as monarchies (or as part of monarchies) or a modified version of it. Finland for example uses the former coat of arms of the Grand Duke of Finland, a title held by the Swedish Monarch until 1809 and then by the Russian Emperor until 1917. Other examples include Bulgaria, the Czech Republic, and Estonia, all of which also feature lions.
Types of national coats of arms:
Like lions, eagles were common charges in the arms of many former European monarchies (although they do not feature on the arms of any surviving European monarchies). Double-headed eagles were also associated with imperial power (specifically that of the Byzantine, Holy Roman, Austrian, Serbian and Russian Empires). Single-headed eagles can be found today on the coats of arms of Poland, Germany, and Romania; double-headed eagles can be found on the coats of arms of Russia (without the ermine mantling and crown of the Russian Empire), Serbia, Montenegro, and Albania. Austria uses a single-headed eagle as a supporter for its coat of arms, but this is officially unrelated to and distinct from the double-headed eagle used by the former Austrian Empire; the escutcheon (gules, a fess argent) is however a pre-republic symbol dating back to the middle ages. Eagles also feature prominently as supporters on the coats of arms of Arab states, having been derived from the Eagle of Saladin. These include the coats of arms of Egypt, Iraq (see below) and Palestine, and formerly on the coats of arms of Libya, Yemen, and the United Arab Republic.
Types of national coats of arms:
Many former European colonies have chosen to use a heraldic coat of arms, but with no connection to the coat of arms used by the colonizing empires. Australia and Jamaica are examples of countries that have created such a modern coat of arms according to old heraldic principles. These two nations also have chosen not to use a crown on top of their coats of arms although they formally are monarchies (Australia, however, does use St Edward's Crown within the coat of arms, on the parts representing Queensland and Victoria). The coat of arms of Uganda below is a typical example of an African coat of arms, with a tribal shield supported by native animals.
Types of national coats of arms:
Often, a country will employ different versions of their coats of arms for different purposes. For example, many have a heavily simplified "lesser" version of their arms, with the full or "greater" version being restricted for use by the monarch or in other specific circumstances.
Types of national coats of arms:
^a In Scotland a separate version of the Coat of Arms is used which gives precedence to the Scottish elements. It places the royal arms of Scotland in the first and fourth quarters of the escutcheon (and places the royal arms of England in the second quarter), replaces the order belt, the crown atop the helmet (but not the lion), the motto and the crest with Scottish equivalents (the chain of the Order of the Thistle, the Crown of Scotland, Nemo me impune lacessit), reverses the order of the supporters, crowns the unicorn (with the Crown of Scotland) and replaces the shamrocks and Tudor roses with Scottish thistles. It also adds another motto banner above the crown (In defens) and places a Scottish and English flag on poles in the arms of the supporters.
Types of national coats of arms:
National seals Another common type of national coat of arms is the seal. Originally, a seal was used for authenticating documents by stamping an impression on documents and the like. These seals would often contain coats of arms. The United States adopted a seal whose graphical design would also be used as a state symbol and not only as impressions on state documents. This is common in the Americas but also around the world. The round form with text saying what it stands for is easy to recognise.
Types of national coats of arms:
Many national seals are actually, to some extent, in part heraldic and can even have set colours which are always used, even if a seal has another use originally - as a stamp in wax - and in this sense formally never has colours.
Types of national coats of arms:
National emblems An emblem which does not follow the rules of heraldry, but which fulfills the same use as a national coat of arms, can be called a national emblem. These are often used by countries whose regimes are or once were revolutionary, or have their own local rules on national symbolism, and therefore did not use traditional European-style heraldry.
Types of national coats of arms:
National emblems of the East Asian tradition The Japanese equivalent to a heraldic coat of arms is the mon (Japanese: 紋, "sign" or "emblem"), which in its use can be compared to heraldry of the Western world. Similar symbols are common throughout East Asia.
Types of national coats of arms:
Socialist state emblems Many countries which came under the influence of the Soviet Union during the 20th century took after the design of the State Emblem of the Soviet Union, created in the 1920s. The forms followed a very common pattern, and since these national emblems were used in the same way as traditional heraldic coats of arms, even though they did not follow the rules of heraldry, they have been called "socialist heraldry". Many of them incorporated symbols of industry and agriculture, the hammer and sickle, a rising sun, and the red star of communism. It was not uncommon to show landscapes and weapons, as can be seen in the examples below. After giving up communism, most of these countries returned to traditional heraldry – see for instance the coats of arms of Bulgaria, Georgia, Hungary, and Romania.
Types of national coats of arms:
The designs of socialist heraldry also influenced some non-socialist states, such as Italy. In particular, the emblem of Italy, shaped as a Roman wreath, comprises a white five-pointed star, the Stella d'Italia (English: "Star of Italy"), which is the oldest national symbol of Italy, since it dates back to ancient Greece, with a thin red border, superimposed upon a five-spoked cogwheel, standing between an olive branch to the left side and an oak branch to the right side. The cogwheel surrounding the star refers to Article 1 of the Constitution of the Italian Republic, which states: "Italy is a democratic republic, built on labour."
**Swingman**
Swingman:
A swingman is an athlete capable of playing multiple positions in their sport.
Basketball:
In basketball, the term “swingman” (a.k.a. “wing” or “guard-forward”) denotes a player who can play both the shooting guard (2) and small forward (3) positions, and in essence swing between the positions. Examples include NBA players Tracy McGrady, Paul Pierce, Jimmy Butler, Michael Jordan, Kobe Bryant, DeMar DeRozan, Paul George, Andre Iguodala, Klay Thompson, Khris Middleton, LeBron James, Danny Green, and Evan Turner, and WNBA players Seimone Augustus, Maya Moore, Tamika Catchings, and Angel McCoughtry.
Baseball:
In baseball, a swingman is a pitcher who can work either as a reliever or as a starter. To thrive in this role, pitchers must possess the stamina of a starter as well as the flexibility to work out of the bullpen. It may be difficult for swingmen to settle into the same type of routine as pitchers used exclusively in one role.
Baseball:
History In 19th century baseball, since the vast majority of games were finished by the starting pitcher, the swingman role did not exist. In the early 1900s, as the percentage of complete games fell, relief appearances became more common, and swingmen began to appear. Early examples included star pitchers such as Mordecai Brown and Ed Walsh (both in the Baseball Hall of Fame) as well as pioneers of the relief role such as Doc Crandall and Firpo Marberry. Through the 1930s, teams continued to use their best pitchers as both starters and relievers. Dizzy Dean, Lefty Grove, and (to a lesser extent) Carl Hubbell were all used as swingmen during this era. In the 1950s and 1960s, strict starting rotations and specific roles for relief pitchers became standard; these trends reduced the prevalence of swingmen. From 1970 through the present day, the usage of swingmen has continued to decline due to the increased specialization of pitchers. During this era, pitchers may be deployed as swingmen early in their careers to ease their transition to the major leagues, move to a permanent starting role once they are deemed ready, and transition back to a swingman/bullpen role as they decline with age, a career arc exemplified by Rudy May. Swingmen are also valuable in the postseason, when they may be needed to replace a struggling starter early in a game and pitch multiple innings while keeping the score close.
Other sports:
Australian football:
In Australian rules football, a swingman is typically a player who can play both in attack and in defence, usually as a key position player. Examples include Harry Taylor, Ryan Schoenmakers, Ben Reid and Jarryd Roughead.
Ice hockey:
In ice hockey, a swingman is a player who can play both defenseman and forward, such as Brent Burns of the San Jose Sharks, Dustin Byfuglien of the Winnipeg Jets, Brendan Smith of the New York Rangers and Calder Cup Champion Paul Bissonnette.
**Unchained camera technique**
Unchained camera technique:
The unchained camera technique (entfesselte Kamera in German) was an innovation by cinematographer Karl Freund that allowed filmmakers to capture shots with cameras in motion, enabling them to use pan shots, tracking shots, tilts, crane shots, etc. Though films such as 1923's Sylvester: Tragödie einer Nacht pre-date it, the technique was expanded and popularized by Freund in the 1924 silent film The Last Laugh, and is arguably the most important stylistic innovation of the 20th century, setting the stage for some of the most commonly used techniques of contemporary cinema.
**Software token**
Software token:
A software token (a.k.a. soft token) is a piece of a two-factor authentication security device that may be used to authorize the use of computer services. Software tokens are stored on a general-purpose electronic device such as a desktop computer, laptop, PDA, or mobile phone and can be duplicated. (Contrast hardware tokens, where the credentials are stored on a dedicated hardware device and therefore cannot be duplicated, absent physical invasion of the device.) Because software tokens are something one does not physically possess, they are exposed to unique threats based on duplication of the underlying cryptographic material, such as computer viruses and software attacks. Both hardware and software tokens are vulnerable to bot-based man-in-the-middle attacks, or to simple phishing attacks in which the one-time password provided by the token is solicited and then supplied to the genuine website in a timely manner. Software tokens do have benefits: there is no physical token to carry, they do not contain batteries that will run out, and they are cheaper than hardware tokens.
Security architecture:
There are two primary architectures for software tokens: shared secret and public-key cryptography. For a shared secret, an administrator will typically generate a configuration file for each end-user. The file will contain a username, a personal identification number, and the secret. This configuration file is given to the user.
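The shared-secret scheme described above can be sketched in a few lines. The following is a minimal, illustrative derivation of a one-time code in the style of RFC 6238 TOTP (an HMAC over a time-window counter); the function name and parameters here are illustrative, not any particular vendor's API:

```python
import hmac
import hashlib
import struct
import time

def totp_code(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a one-time code from a shared secret and a timestamp
    (HMAC-based, in the style of RFC 6238 TOTP / RFC 4226 HOTP)."""
    counter = timestamp // step                       # index of the time window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Client and server share the secret, so both derive the same code
# for the same 30-second window.
secret = b"example-shared-secret"
now = int(time.time())
assert totp_code(secret, now) == totp_code(secret, now)
```

Note how the time-based design also explains the clock-rollforward attack discussed below: anyone holding the secret can compute codes for any future window simply by passing a future timestamp.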
The shared secret architecture is potentially vulnerable in a number of areas. The configuration file can be compromised if it is stolen and the token is copied. With time-based software tokens, it is possible to borrow an individual's PDA or laptop, set the clock forward, and generate codes that will be valid in the future. Any software token that uses shared secrets and stores the PIN alongside the shared secret in a software client can be stolen and subjected to offline attacks. Shared secret tokens can be difficult to distribute, since each token is essentially a different piece of software. Each user must receive a copy of the secret, which can create time constraints.
Some newer software tokens rely on public-key cryptography, or asymmetric cryptography. This architecture eliminates some of the traditional weaknesses of software tokens, but does not affect their primary weakness (ability to duplicate). A PIN can be stored on a remote authentication server instead of with the token client, making a stolen software token useless unless the PIN is known as well. However, in the case of a virus infection, the cryptographic material can be duplicated and then the PIN can be captured (via keylogging or similar) the next time the user authenticates. If there are attempts made to guess the PIN, they can be detected and logged on the authentication server, which can disable the token. Using asymmetric cryptography also simplifies implementation, since the token client can generate its own key pair and exchange public keys with the server.
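As a rough illustration of the asymmetric approach, the toy sketch below uses textbook RSA with deliberately tiny primes (completely insecure, for exposition only): the token signs a server challenge with its private key, while the server verifies using only the public key, so no shared secret needs to be distributed:

```python
import hashlib

# Toy RSA parameters (far too small for real use; illustration only).
p, q = 61, 53
n = p * q                            # modulus, 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, 2753

def sign(challenge: bytes) -> int:
    """Token side: sign a server challenge with the private key d."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Server side: check the signature using only the public key (n, e)."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

assert verify(b"login-nonce-42", sign(b"login-nonce-42"))
```

Because the server holds no signing capability, duplicating the server database does not let an attacker impersonate the token; duplicating the token client, as the text notes, still does.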
**Web-Based Enterprise Management**
Web-Based Enterprise Management:
In computing, Web-Based Enterprise Management (WBEM) comprises a set of systems-management technologies developed to unify the management of distributed computing environments. The WBEM initiative, initially sponsored in 1996 by BMC Software, Cisco Systems, Compaq Computer, Intel, and Microsoft, is now widely adopted. WBEM is based on Internet standards and Distributed Management Task Force (DMTF) open standards:
Common Information Model (CIM) infrastructure and schema
CIM-XML
CIM operations over HTTP
WS-Management for web services
CIM Operations over RESTful Services
Although the name labels WBEM as "web-based", it is not necessarily dependent on any particular user interface (see below). Other systems-management approaches include remote shells, proprietary solutions and IETF-standardized network-management architectures such as SNMP and Netconf.
Features:
WBEM allows the management of any element in a standard and interoperable manner. WBEM provides the technology underlying different management initiatives in information technology:
Desktop management (DASH)
Network management (NetMan); a DMTF page lists the published DSP profiles of the NetMan initiative
Storage management (SMI)
Systems management (SMASH)
Virtualization management (VMAN); a DMTF page lists the published DSP profiles of the VMAN initiative
Architecture:
To understand the WBEM architecture, consider the components which lie between the operator trying to manage a device (configure it, turn it off and on, collect alarms, etc.) and the actual hardware and software of the device: The operator will invoke some form of graphical user interface (GUI), Browser User Interface (BUI), or command-line interface (CLI). The WBEM standard has nothing to say about this interface (although the definition of a CLI for specific applications has started): WBEM operates independently of the human interface, since human interfaces can change without the rest of the system needing to note such changes.
The GUI, BUI or CLI will interface with a WBEM client through a small set of application programming interfaces (APIs). This client will find the WBEM server for the managed device (typically on the device itself) and construct an XML message containing the request.
The client will use the HTTP (or HTTPS) protocol to pass the request, encoding it in CIM-XML, to the WBEM server.
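To make the request construction above concrete, here is an illustrative sketch of building a CIM-XML EnumerateInstances payload of the kind defined by the DMTF's CIM-XML representation specification (DSP0201); the element names follow the published DTD, while the helper function itself is hypothetical:

```python
import xml.etree.ElementTree as ET

def enumerate_instances_request(namespace: str, classname: str, msg_id: str = "1001") -> bytes:
    """Build a CIM-XML EnumerateInstances request body (cf. DSP0201)."""
    cim = ET.Element("CIM", CIMVERSION="2.0", DTDVERSION="2.0")
    msg = ET.SubElement(cim, "MESSAGE", ID=msg_id, PROTOCOLVERSION="1.0")
    call = ET.SubElement(ET.SubElement(msg, "SIMPLEREQ"),
                         "IMETHODCALL", NAME="EnumerateInstances")
    # Namespace path, e.g. root/cimv2 -> <NAMESPACE NAME="root"/><NAMESPACE NAME="cimv2"/>
    path = ET.SubElement(call, "LOCALNAMESPACEPATH")
    for part in namespace.split("/"):
        ET.SubElement(path, "NAMESPACE", NAME=part)
    param = ET.SubElement(call, "IPARAMVALUE", NAME="ClassName")
    ET.SubElement(param, "CLASSNAME", NAME=classname)
    return ET.tostring(cim, encoding="utf-8", xml_declaration=True)

payload = enumerate_instances_request("root/cimv2", "CIM_ComputerSystem")
# The client would POST this body over HTTP(S), with headers such as
# CIMOperation: MethodCall and CIMMethod: EnumerateInstances.
```

In practice a client library such as pywbem hides this construction entirely behind a method call.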
The WBEM server will decode the incoming request, perform the necessary authentication and authorization checks and then consult the previously defined model of the managed device to see how to handle the request. This model provides the power of the architecture: it represents the pivot point of the transaction, with the client simply interacting with the model and the model interacting with the real hardware or software. The model uses the Common Information Model standard; the DMTF has published many models for commonly managed devices and services: IP routers, storage servers, desktop computers, etc.
For most operations, the WBEM server determines from the model that it needs to communicate with the actual hardware or software. So-called "providers" handle the interaction: small pieces of code interface between the WBEM server (using a standardized interface known as CMPI) and the real hardware or software. Because the interface is well-defined and the number of types of call is small, it is normally easy to write providers. In particular, the writer of the provider knows nothing of the GUI, BUI, or CLI used by the operator.
WBEM specifications:
Mappings:
URI (WBEM URI Mapping Specification 1.0)
XML (xmlCIM, as used in CIM-XML)
XML (WS-CIM, as used in WS-Management)
UML
Protocols:
CIM-XML
WS-Management
CIM-RS
Discovery:
SLP (WBEM Discovery using SLP; SLP Template)
Query languages:
CQL (CIM Query Language 1.0)
FQL (Filter Query Language 1.0)
Implementing support:
The device manufacturer or service provider has to write three pieces in order to properly implement the management system.
The model: normally done by extending, as necessary, one of the standard models published by the DMTF.
The BUI, GUI, or CLI.
The providers.
The client and server usually do not need to be written, because many open-source and commercial implementations are available (see External links below). The WBEM architecture thus allows the manufacturer of a device or developer of a service to provide a standards-compliant management interface to that device simply and cheaply.
Implementations:
WBEM in operating systems
Apple Inc. uses an implementation of WBEM in its Apple Remote Desktop management tool, and Mac OS X clients ship with support for Remote Management.
Hewlett-Packard has included the WBEM Services CORE product in the HP-UX operating system (with all operating environments) since version 11iv1, and in OpenVMS V8.3-1H1 and V8.4. IBM ships support in z/OS and AIX.
Microsoft has developed the WMI technology and has included it in Microsoft Windows.
Red Hat ships OpenPegasus as part of Red Hat Enterprise Linux.
Oracle has WBEM Services for the Solaris operating environment.
Ubuntu ships with an updated CIM instrumentation stack, powered by the latest version of the lightweight CIMOM, SBLIM SFCB.
WBEM implementations
WS-Management
OpenPegasus, open-source client and server written in C++
Open Management Infrastructure, open-source client and server written in C
SBLIM (pronounced "sublime"), Standards Based Linux Instrumentation for Manageability, in C, C++ and Java
Pywbem, open-source WBEM library written in Python
WBEM Solutions J WBEM Server and SDK
**Flame of Hope (Special Olympics)**
Flame of Hope (Special Olympics):
The Flame of Hope is the symbol of the Special Olympics Games.
Used in much the same spirit as the Olympic Flame at the Olympic Games, the Flame of Hope is lit during a traditional ceremony in Athens, Greece. After lighting, the Flame is relayed on foot to the organizing city by members of law enforcement agencies (mostly policemen and policewomen) and Special Olympics athletes. This relay, officially the Law Enforcement Torch Run, is the flagship of an international fundraising effort. In 2018, the Flame of Hope was memorialized by the Chicago Park District, which erected the 30-foot (9.1 m) "Eternal Flame of Hope" in honor of the Special Olympics. The sculpture by Richard Hunt stands in a plaza next to Soldier Field, where the first games were held 50 years earlier, in 1968.
**Virtual design and construction**
Virtual design and construction:
Virtual design and construction (VDC) is the management of integrated multi-disciplinary performance models of design–construction projects, including the product (facilities), the work processes, and the organization of the design–construction–operation team, in support of explicit and public business objectives. This is usually achieved by creating a digital twin of the project, within which the project information is managed.
The theoretical basis of VDC includes:
Engineering modeling methods: product, organization, process
Analysis methods (model-based design), including quantities, schedule, cost, 4D interactions, and process risks; these are termed building information modeling (BIM) tools
Visualization methods
Business metrics (within business analytics) and a focus on strategic management
Economic impact analysis, i.e., models of both the cost and value of capital investments
BIM managed project:
"Virtual design and construction BIMs are virtual because they show computer-based descriptions of the project. The BIM project model emphasizes those aspects of the project that can be designed and managed, i.e., the product (typically a building or plant [and infrastructure]), the organization that will define, design, construct, and operate it, and the process the organization teams will follow, that is, the product–organization–process or POP. These models are logically integrated in the sense that they all can access shared data, and if a user highlights or changes an aspect of one, the integrated models can highlight or change the dependent aspects of related models. The models are multi-disciplinary in the sense that they represent the architect, engineering, construction (AEC), and owner of the project, as well as relevant sub-disciplines. The models are performance models in the sense that they predict some aspects of project performance, track many that are relevant, and can show predicted and measured performance in relationship to stated project performance objectives. Some companies now practice the first steps of BIM modeling, and they consistently find that they improve business performance by doing so." Companies are also now considering developing BIMs in various levels of detail, since depending on the application of BIM, more or less detail is needed, and there is varying modeling effort associated with generating building information models at different levels of detail.
Methodologies underpinning BIM:
Advances in construction engineering began with the ten volumes on architecture completed by Vitruvius, a 1st-century B.C. Roman. Vitruvius laid the key and lasting foundation for the study of construction.
A principle of construction is the use of an applied ontology grounded in an upper ontology. In practice, these ontologies take the form of breakdown structures such as the work breakdown structure. Usually breakdown structures form metadata representing a construction activity; there are notable cases at exceptionally large construction companies where they are simply numbered. In practice, an ontology approach requires a semantic integration approach to construction data, so as to capture the present status of construction activities (i.e., the project).
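As a minimal illustration of a numbered breakdown structure serving as metadata, the sketch below models WBS codes as hierarchical dotted strings; the codes and activity names are hypothetical:

```python
# A hypothetical work breakdown structure (WBS): hierarchical numeric codes
# act as metadata identifying each construction activity.
wbs = {
    "1":     "Substructure",
    "1.1":   "Excavation",
    "1.2":   "Foundations",
    "2":     "Superstructure",
    "2.1":   "Frame",
    "2.1.1": "Columns",
}

def children(code: str):
    """Direct children of a WBS code: one level deeper, sharing the prefix."""
    return [c for c in wbs
            if c.startswith(code + ".") and c.count(".") == code.count(".") + 1]

print(children("2.1"))   # ['2.1.1']
```

The dotted-code convention makes the hierarchy queryable without any extra data structure, which is one reason purely numbered breakdown structures persist at large firms.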
The research that forms virtual design and construction (VDC) is based on scientific evidence and validation measured against a best theory, as opposed to a best practice. This approach, pioneered by Kunz, was a departure from earlier construction engineering methodologies that focused on studies of best practices. The scientific-evidence method requires formulating a hypothesis and then testing that hypothesis to failure so as to validate it. A range of scientific methodologies have proven useful in construction engineering research, in both qualitative and quantitative research. Because construction is difficult to replicate in a controlled setting, the case-based reasoning, case study and action research methodologies prevail. The power of a method is important to include in results; the case study is often broad and the action research often focused.
A core concept in VDC is spacetime dimensions. There are four dimensions: three of space and a fourth, time. There are additional dimensions of cost and quality, but these four form the core. The four dimensions were first understood by Vitruvius as the importance of perspective (i.e., 3D) and time (i.e., 4D). Prior to computing, the focus was on the fourth dimension, time; in practice, time is the focus of the critical path method. With advances in computing, the representation of the three dimensions of space has increased. The merging of space and the above-discussed ontology formed the information model, known in the construction engineering field as building information modeling. The combination of space and time in practice is shown by the linear scheduling method and, in close relation, the 4D model.
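The critical path method mentioned above reduces, in its forward pass, to a simple recursion over task predecessors: a task's earliest finish is its duration plus the latest earliest finish among its predecessors. A sketch with hypothetical activities and durations:

```python
# Hypothetical activities: name -> (duration in days, predecessor names)
tasks = {
    "excavate":   (3, []),
    "foundation": (5, ["excavate"]),
    "framing":    (7, ["foundation"]),
    "electrical": (4, ["framing"]),
    "plumbing":   (3, ["framing"]),
    "finishes":   (6, ["electrical", "plumbing"]),
}

def earliest_finish(tasks):
    """Forward pass of the critical path method: earliest finish per task."""
    finish = {}
    def ef(name):
        if name not in finish:
            dur, preds = tasks[name]
            finish[name] = dur + max((ef(p) for p in preds), default=0)
        return finish[name]
    for name in tasks:
        ef(name)
    return finish

ef = earliest_finish(tasks)
print(max(ef.values()))   # project duration: 25
```

A 4D model pairs exactly this kind of schedule with the 3D geometry of the elements each activity produces.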
Computing brought about the need to align with a software developer. Pencil and paper were forgiving of the mixing of methods from different schools of thought; software is not, and combining software tools requires interoperability as an explicit goal. This forms the field of interoperability research. The practical application is demonstrated by the Industry Foundation Classes.
Today, the most compelling advances in VDC are in computer vision (List of computer vision topics), artificial intelligence, and the architecture of transmission (AoT), an object-oriented project lifecycle management process which acts as a counterpoint to commissioned IoT technologies. An important application of VDC is in the work zone, where the construction activities reside and the workforce is a core component. To create an educated workforce with the technical know-how to use the technology tools now available, VDC includes the development of advanced vocational education topics.
Research centers:
Center for Integrated Facility Engineering (CIFE), Stanford University
mosaic, Carnegie Mellon University
BIM at UTexas
Field Systems and Construction Automation Laboratory (FSCAL), UTexas
Construction Information Technology Laboratory (CITL), Georgia Institute of Technology
RAPIDS Laboratory, Georgia Institute of Technology
**Pseudo Stirling cycle**
Pseudo Stirling cycle:
The pseudo Stirling cycle, also known as the adiabatic Stirling cycle, is a thermodynamic cycle with an adiabatic working volume and isothermal heater and cooler, in contrast to the ideal Stirling cycle, which has an isothermal working space. The working fluid has no bearing on the maximum thermal efficiency of the pseudo Stirling cycle. Practical Stirling engines usually follow an adiabatic Stirling cycle, as the ideal Stirling cycle cannot be practically implemented.
The nomenclature (practical engines and the ideal cycle are both named "Stirling") and a lack of specificity (omitting "ideal" or "adiabatic" before "Stirling cycle") can cause confusion.
History:
The pseudo Stirling cycle was designed to address predictive shortcomings in the ideal isothermal Stirling cycle. Specifically, the ideal cycle does not give usable figures or criteria for judging the performance of real-world Stirling engines.
**Sump buster**
Sump buster:
A sump buster is a device installed within a bus route to limit that thoroughfare to buses. It discourages traffic from entering a lane by promising to destroy the oil pan of any vehicle with insufficient ground clearance to get over it, making them similar in use (but not design) to rising bollards. A sump buster can also be known as a "sump breaker" or "sump trap". Sump busters were first used in the 1980s.
Function:
The sump buster uses a non-mechanical solid mass of concrete, or sometimes other aggregates or metal, to demobilise a vehicle when access to a restricted area is attempted. When a vehicle attempts to traverse the sump buster, the device will demolish the vehicle's oil pan (literally "busting the sump"). The track and ground clearance on permitted vehicles, usually buses, is such that they may clear the device with ease. In some cases, advisory or mandatory speed limits are given.
Impact on the community:
A major purpose of the sump buster is to prevent road systems from being used as rat runs and, to a certain extent, to deter joyriding. For this reason, devices have been vandalised (either through annoyance at their existence or in an attempt to gain passage), resulting in accidents (and injuries) to legitimate road users. In January 2005, Devon County Council dismissed an application by the Stagecoach Group for the installation of a sump buster on Tan Lane (a restricted-access road) in Exeter. The Exeter Highways and Traffic Orders Committee stated that "...[using a sump buster] is not an option that the County Council could support [as] it would not differentiate between high clearance vehicles and for example cars and vans that are authorised to use the link under the current Traffic Regulation Order".
Sump busters have led to serious injuries to scooter riders and cyclists who fail to notice them. Municipalities in the Netherlands have been sued in tort for damage or injuries caused by insufficiently marked sump busters.
**Endopeptidase La**
Endopeptidase La:
Endopeptidase La (EC 3.4.21.53, ATP-dependent serine proteinase, lon proteinase, protease La, proteinase La, ATP-dependent lon proteinase, ATP-dependent protease La, Escherichia coli proteinase La, Escherichia coli serine proteinase La, gene lon protease, gene lon proteins, PIM1 protease, PIM1 proteinase, serine protease La) is an enzyme. This enzyme catalyses hydrolysis of proteins in the presence of ATP.
This enzyme is a product of the lon gene in Escherichia coli.
**Michael Brin Prize in Dynamical Systems**
Michael Brin Prize in Dynamical Systems:
The Michael Brin Prize in Dynamical Systems, abbreviated as the Brin Prize, is awarded to mathematicians who have made outstanding advances in the field of dynamical systems and are within 14 years of their PhD. The prize is endowed by and named after Michael Brin, whose son, Sergey Brin, is a co-founder of Google. Michael Brin is a retired mathematician at the University of Maryland and a specialist in dynamical systems. The first prize was awarded in 2008; from 2009 to 2017 it was awarded biennially, and since 2017 annually. Artur Avila, the 2011 awardee, went on to win the Fields Medal in 2014.
Past winners:
2008 : Giovanni Forni for his work on area-preserving flows.
2009 : Dmitry Dolgopyat for his work on rapid mixing of flows.
2011 : Artur Avila for his work on Teichmüller dynamics and interval-exchange transformations.
2013 : Omri Sarig for his work on the thermodynamics of countable Markov shifts and his Markov partition for surface diffeomorphisms.
2015 : Federico Rodriguez Hertz for his work on geometric and measure rigidity and on stable ergodicity of partially hyperbolic systems.
2017 : Lewis Bowen for creation of entropy theory for a broad class of non-amenable groups and solution of the long-standing isomorphism problem for Bernoulli actions of such groups.
2018 : Mike Hochman for his work in ergodic theory and fractal geometry.
2019 : Sébastien Gouëzel for his work on the spectral theory of transfer operators and statistical properties of hyperbolic dynamical systems and random walks on hyperbolic groups.
2020 : Corinna Ulcigrai for her work on the ergodic theory of locally Hamiltonian flows on surfaces and translation flows on periodic surfaces.
2021 : Tim Austin for his proof of the weak Pinsker conjecture, for his groundbreaking approach to non-conventional multiple ergodic theorems, and for his contributions to geometric group theory.
2022 : Zhiren Wang for his fundamental contributions to the study of topological and measure rigidity of higher rank actions, and his proof of Moebius disjointness for several classes of dynamical systems.
**History of radio**
History of radio:
The early history of radio is the history of technology that produces and uses radio instruments that use radio waves. Within the timeline of radio, many people contributed theory and inventions in what became radio. Radio development began as "wireless telegraphy". Later radio history increasingly involves matters of broadcasting.
Discovery:
In an 1864 presentation, published in 1865, James Clerk Maxwell proposed a theory of electromagnetism, with mathematical proofs, which showed that light was an electromagnetic wave and predicted that radio waves and X-rays were likewise types of electromagnetic waves propagating through free space. Between 1886 and 1888, Heinrich Rudolf Hertz published the results of experiments wherein he was able to transmit electromagnetic waves (radio waves) through the air, proving Maxwell's electromagnetic theory.
Exploration of optical qualities:
After their discovery, many scientists and inventors experimented with transmitting and detecting "Hertzian waves" (it would take almost 20 years for the term "radio" to be universally adopted for this type of electromagnetic radiation). Maxwell's theory showing that light and Hertzian electromagnetic waves were the same phenomenon at different wavelengths led "Maxwellian" scientists such as John Perry, Frederick Thomas Trouton and Alexander Trotter to assume they would be analogous to optical light. Following Hertz's untimely death in 1894, British physicist and writer Oliver Lodge presented a widely covered lecture on Hertzian waves at the Royal Institution on June 1 of the same year. Lodge focused on the optical qualities of the waves and demonstrated how to transmit and detect them (using an improved variation of French physicist Édouard Branly's detector, which Lodge named the "coherer"). Lodge further expanded on Hertz's experiments, showing how these new waves exhibited, like light, refraction, diffraction, polarization, interference and standing waves, confirming that Hertz's waves and light waves were both forms of Maxwell's electromagnetic waves. During part of the demonstration the waves were sent from the neighboring Clarendon Laboratory building and received by apparatus in the lecture theater.
After Lodge's demonstrations, researchers pushed their experiments further down the electromagnetic spectrum towards visible light to further explore the quasi-optical nature of these wavelengths. Oliver Lodge and Augusto Righi experimented with 1.5 and 12 GHz microwaves respectively, generated by small metal-ball spark resonators. Russian physicist Pyotr Lebedev in 1895 conducted experiments in the 50 GHz (6 millimeter) range. Bengali Indian physicist Jagadish Chandra Bose conducted experiments at wavelengths of 60 GHz (5 millimeter) and invented waveguides, horn antennas, and semiconductor crystal detectors for use in his experiments. He would later write an essay, "Adrisya Alok" ("Invisible Light"), on how in November 1895 he conducted a public demonstration at the Town Hall of Kolkata, India, using millimeter-wavelength microwaves to trigger detectors that ignited gunpowder and rang a bell at a distance.
Proposed applications:
Between 1890 and 1892, physicists such as John Perry, Frederick Thomas Trouton and William Crookes proposed electromagnetic or Hertzian waves as a navigation aid or means of communication, with Crookes writing on the possibilities of wireless telegraphy based on Hertzian waves in 1892. Among physicists, perceived technical limitations of these new waves, such as delicate equipment, the need for large amounts of power to transmit over limited ranges, and their similarity to already existing optical light-transmitting devices, led to a belief that applications were very limited. The Serbian-American engineer Nikola Tesla considered Hertzian waves relatively useless for long-range transmission, since "light" could not transmit further than line of sight. There was speculation that this fog- and stormy-weather-penetrating "invisible light" could be used in maritime applications such as lighthouses; the London journal The Electrician (December 1895) commented on Bose's achievements, saying "we may in time see the whole system of coast lighting throughout the navigable world revolutionized by an Indian Bengali scientist working single handed[ly] in our Presidency College Laboratory." In 1895, adapting the techniques presented in Lodge's published lectures, Russian physicist Alexander Stepanovich Popov built a lightning detector that used a coherer-based radio receiver. He presented it to the Russian Physical and Chemical Society on May 7, 1895.
Marconi and radio telegraphy:
In 1894, the young Italian inventor Guglielmo Marconi began working on the idea of building a long-distance wireless transmission system based on the use of Hertzian waves (radio waves), a line of inquiry that he noted other inventors did not seem to be pursuing. Marconi read through the literature and used the ideas of others who were experimenting with radio waves, but did a great deal to develop devices such as portable transmitters and receiver systems that could work over long distances, turning what was essentially a laboratory experiment into a useful communication system. By August 1895, Marconi was field-testing his system, but even with improvements he was only able to transmit signals up to one-half mile, a distance Oliver Lodge had predicted in 1894 as the maximum transmission distance for radio waves. Marconi raised the height of his antenna and hit upon the idea of grounding his transmitter and receiver. With these improvements the system was capable of transmitting signals up to 2 miles (3.2 km) and over hills. This apparatus proved to be the first engineering-complete, commercially successful radio transmission system, and Marconi went on to receive British patent 12039, Improvements in transmitting electrical impulses and signals and in apparatus there-for, in 1896.
Nautical and transatlantic transmissions
In 1897, Marconi established a radio station on the Isle of Wight, England, and opened his "wireless" factory in the former silk-works at Hall Street, Chelmsford, England, in 1898, employing around 60 people.
On 12 December 1901, using a 500-foot (150 m) kite-supported antenna for reception at Signal Hill in St. John's, Newfoundland, Marconi received a message transmitted across the Atlantic Ocean by the company's new high-power station at Poldhu, Cornwall. Marconi then began to build high-powered stations on both sides of the Atlantic to communicate with ships at sea. In 1904, he established a commercial service to transmit nightly news summaries to subscribing ships, which could incorporate them into their on-board newspapers. A regular transatlantic radio-telegraph service was finally begun on 17 October 1907 between Clifden, Ireland, and Glace Bay, but even after this the company struggled for many years to provide reliable communication to others.
Marconi's apparatus is also credited with saving the 700 people who survived the tragic Titanic disaster.
Audio transmission:
In the late 1890s, Canadian-American inventor Reginald Fessenden came to the conclusion that he could develop a far more efficient system than the spark-gap transmitter and coherer receiver combination. To this end he worked on developing a high-speed alternator (referred to as "an alternating-current dynamo") that generated "pure sine waves" and produced "a continuous train of radiant waves of substantially uniform strength", or, in modern terminology, a continuous-wave (CW) transmitter. While working for the United States Weather Bureau on Cobb Island, Maryland, Fessenden researched using this setup for audio transmissions via radio. By the fall of 1900, he successfully transmitted speech over a distance of about 1.6 kilometers (one mile), which appears to have been the first successful audio transmission using radio signals. Although successful, the sound transmitted was far too distorted to be commercially practical. According to some sources, notably the biography written by Fessenden's wife Helen, on Christmas Eve 1906 Reginald Fessenden used an Alexanderson alternator and rotary spark-gap transmitter to make the first radio audio broadcast, from Brant Rock, Massachusetts; ships at sea heard a broadcast that included Fessenden playing O Holy Night on the violin and reading a passage from the Bible. Around the same time, American inventor Lee de Forest experimented with an arc transmitter, which, unlike the discontinuous pulses produced by spark transmitters, created a steady "continuous wave" signal that could be used for amplitude-modulated (AM) audio transmissions. In February 1907 he transmitted electronic telharmonium music from his laboratory station in New York City. This was followed by tests that included, in the fall, Eugenia Farrar singing "I Love You Truly".
In July 1907 he made ship-to-shore transmissions by radiotelephone—race reports for the Annual Inter-Lakes Yachting Association (I-LYA) Regatta held on Lake Erie—which were sent from the steam yacht Thelma to his assistant, Frank E. Butler, located in the Fox's Dock Pavilion on South Bass Island.
Broadcasting:
The Dutch company Nederlandsche Radio-Industrie and its owner-engineer, Hanso Idzerda, made its first regular entertainment radio broadcast over station PCGG from its workshop in The Hague on 6 November 1919. The company manufactured both transmitters and receivers. Its popular program was broadcast four nights per week using narrow-band FM transmissions on 670 metres (448 kHz), until 1924, when the company ran into financial trouble.
Regular entertainment broadcasts began in Argentina, pioneered by Enrique Telémaco Susini and his associates. At 9 pm on August 27, 1920, Sociedad Radio Argentina aired a live performance of Richard Wagner's opera Parsifal from the Coliseo Theater in downtown Buenos Aires. Only about twenty homes in the city had receivers to tune in this program.
On 31 August 1920 the Detroit News began publicized daily news and entertainment "Detroit News Radiophone" broadcasts, originally as licensed amateur station 8MK, and later as WBL and then WWJ, in Detroit, Michigan.
Union College in Schenectady, New York began broadcasting on October 14, 1920, over 2ADD, an amateur station licensed to Wendell King, an African-American student at the school. Broadcasts included a series of Thursday night concerts, initially heard within a 100-mile (160 km) radius and later within a 1,000-mile (1,600 km) radius.

In 1922 regular audio broadcasts for entertainment began in the UK from the Marconi Research Centre station 2MT at Writtle near Chelmsford, England.
Wavelength (meters) vs. frequency (kilocycles, kilohertz):
In early radio, and to a limited extent much later, the transmission signal of a radio station was specified in meters, referring to the wavelength, the length of the radio wave. This is the origin of the terms long wave, medium wave, and short wave radio. Portions of the radio spectrum reserved for specific purposes were often referred to by wavelength: the 40-meter band, used for amateur radio, for example. The relation between wavelength and frequency is reciprocal: the higher the frequency, the shorter the wave, and vice versa.
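As a quick worked check of this reciprocal relation (taking the round value c ≈ 3×10⁸ m/s for the speed of light), the 40-meter amateur band mentioned above corresponds to a frequency of roughly 7 MHz:

```latex
\lambda = \frac{c}{f},
\qquad
f = \frac{c}{\lambda} \approx \frac{3 \times 10^{8}\ \mathrm{m/s}}{40\ \mathrm{m}} = 7.5\ \mathrm{MHz}
```

The actual 40-meter allocation sits at 7.0–7.3 MHz; the small discrepancy comes from rounding c.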
As equipment progressed, precise frequency control became possible; early stations often did not hold a precise frequency, as it was affected by the temperature of the equipment, among other factors. Identifying a radio signal by its frequency rather than its wavelength proved much more practical and useful, and starting in the 1920s this became the usual method of identifying a signal, especially in the United States. Frequencies specified in cycles per second (kilocycles, megacycles) were replaced by the hertz designation (one cycle per second) about 1965.
Radio companies:
British Marconi:
Using various patents, the British Marconi company was established in 1897 by Guglielmo Marconi and began communication between coast radio stations and ships at sea. A year later, in 1898, it successfully introduced its first radio station in Chelmsford. This company, along with its subsidiaries Canadian Marconi and American Marconi, had a stranglehold on ship-to-shore communication. It operated much the way American Telephone and Telegraph operated until 1983, owning all of its equipment and refusing to communicate with ships not equipped with Marconi gear. Many inventions improved the quality of radio, and amateurs experimented with uses of radio, thus planting the first seeds of broadcasting.
Telefunken:
The company Telefunken was founded on May 27, 1903, as the "Telefunken Society for Wireless Telegraphy", a joint undertaking for radio engineering in Berlin by Siemens & Halske (S & H) and the Allgemeine Elektrizitäts-Gesellschaft (General Electricity Company). It continued as a joint venture of AEG and Siemens AG until Siemens left in 1941. In 1911, Kaiser Wilhelm II sent Telefunken engineers to West Sayville, New York to erect three 600-foot (180 m) radio towers there; Nikola Tesla assisted in the construction. A similar station was erected in Nauen, creating the only wireless communication between North America and Europe.
Technological development:
Amplitude-modulated (AM):
The invention of amplitude-modulated (AM) radio, which allows more closely spaced stations to send signals simultaneously (as opposed to spark-gap radio, where each transmission occupies a wide bandwidth), is attributed to Reginald Fessenden, Valdemar Poulsen and Lee de Forest.
Crystal set receivers:
The most common type of receiver before vacuum tubes was the crystal set, although some early radios used some type of amplification through electric current or battery. Inventions of the triode amplifier, motor-generator, and detector enabled audio radio. The use of amplitude modulation (AM), by which sound waves can be transmitted over a continuous-wave radio signal of narrow bandwidth (as opposed to spark-gap radio, which sent rapid strings of damped-wave pulses that consumed much bandwidth and were only suitable for Morse-code telegraphy), was pioneered by Fessenden, Poulsen and Lee de Forest.

The art and science of crystal sets is still pursued as a hobby in the form of simple unamplified radios that "run on nothing, forever". They are used as a teaching tool by groups such as the Boy Scouts of America to introduce youngsters to electronics and radio. As the only energy available is that gathered by the antenna system, loudness is necessarily limited.
Vacuum tubes:
During the mid-1920s, amplifying vacuum tubes (or thermionic valves in the UK) revolutionized radio receivers and transmitters. John Ambrose Fleming developed the vacuum tube diode; Lee de Forest added a "grid" electrode, creating the triode. Early radios ran the entire power of the transmitter through a carbon microphone. In the 1920s, the Westinghouse company bought Lee de Forest's and Edwin Armstrong's patents, and Westinghouse engineers developed a more modern vacuum tube.
The first radios still required batteries, but in 1926 the "battery eliminator" was introduced to the market. This tube technology allowed radios to be powered from the electrical grid instead. They still required batteries to heat the vacuum-tube filaments, but after the invention of indirectly heated vacuum tubes, the first completely battery-free radios became available in 1927. In 1929 a new screen-grid tube, the UY-224, was introduced: an amplifier designed to operate directly on alternating current.

A problem with the early radios was fading stations and fluctuating volume. The invention of the superheterodyne receiver solved this problem, and the first radios with a superheterodyne receiver went on sale in 1924. But the design was costly and was shelved while the technology matured; in 1929 the Radiola 66 and Radiola 67 went on sale.
Loudspeakers:
In the early days one had to use headphones to listen to the radio. Later, loudspeakers in the form of a phonograph-style horn equipped with a telephone receiver became available, but the sound quality was poor. In 1926 the first radios with electrodynamic loudspeakers went on sale, which greatly improved the sound quality. At first the loudspeakers were separate from the radio, but soon radios came with a built-in loudspeaker.

Other inventions related to sound were the automatic volume control (AVC), first commercially available in 1928, and the tone control knob, added to radios in 1930, which allowed listeners to compensate for imperfect broadcasting. The magnetic cartridge, introduced in the mid-1920s, greatly improved the broadcasting of music: before it, a microphone had to be placed close to a horn loudspeaker when playing music from a phonograph. The invention allowed the electric signals to be amplified and then fed directly to the broadcast transmitter.
Transistor technology:
Following the development of transistor technology, bipolar junction transistors led to the development of the transistor radio. In 1954, the Regency company introduced a pocket transistor radio, the TR-1, powered by a "standard 22.5 V battery". In 1955, the newly formed Sony company introduced its first transistorized radio, the TR-55. It was small enough to fit in a vest pocket, powered by a small battery, and durable, because it had no vacuum tubes to burn out. In 1957, Sony introduced the TR-63, the first mass-produced transistor radio, leading to the mass-market penetration of transistor radios. Over the next 20 years, transistors replaced tubes almost completely, except in high-power transmitters.
By the mid-1960s, the Radio Corporation of America (RCA) was using metal–oxide–semiconductor field-effect transistors (MOSFETs) in its consumer products, including FM radios, televisions and amplifiers. Metal–oxide–semiconductor (MOS) large-scale integration (LSI) provided a practical and economical solution for radio technology and was used in mobile radio systems by the early 1970s.
Integrated circuit:
The first integrated circuit (IC) radio, the P1740 by General Electric, became available in 1966.
Car radio:
The first car radio was introduced in 1922, but it was so large that it took up too much space in the car. The first commercial car radio that could easily be installed in most cars went on sale in 1930.
Radio telex:
Telegraphy did not disappear from radio. Instead, the degree of automation increased. On land lines in the 1930s, teletypewriters automated encoding and were adapted to pulse-code dialing to automate routing, a service called telex. For thirty years, telex was the cheapest form of long-distance communication, because up to 25 telex channels could occupy the same bandwidth as one voice channel. For business and government, it was an advantage that telex directly produced written documents.
Telex systems were adapted to short-wave radio by sending tones over single sideband. CCITT R.44 (the most advanced pure-telex standard) incorporated character-level error detection and retransmission as well as automated encoding and routing. For many years, telex-on-radio (TOR) was the only reliable way to reach some third-world countries. TOR remains reliable, though less-expensive forms of e-mail are displacing it. Many national telecom companies historically ran nearly pure telex networks for their governments, and they ran many of these links over short wave radio.
Documents including maps and photographs went by radiofax, or wireless photoradiogram, invented in 1924 by Richard H. Ranger of Radio Corporation of America (RCA). This method prospered in the mid-20th century and faded late in the century.
Radio navigation:
One of the first developments in the early 20th century was that aircraft used commercial AM radio stations for navigation; AM stations are still marked on U.S. aviation charts. Radio navigation played an important role in wartime, especially in World War II. Before the discovery of the crystal oscillator, radio navigation had many limits, but as radio technology advanced, navigation became easier to use and provided more accurate positions. Despite these advantages, radio navigation systems often involve complex equipment, such as the radio compass receiver, the compass indicator, or the radar plan position indicator, all of which require users to acquire specialized knowledge.
In the 1960s VOR systems became widespread. In the 1970s, LORAN became the premier radio navigation system. Soon, the US Navy experimented with satellite navigation. In 1987, the Global Positioning System (GPS) constellation of satellites was launched.
FM:
In 1933, FM radio was patented by inventor Edwin H. Armstrong. FM uses frequency modulation of the radio wave to reduce static and interference from electrical equipment and the atmosphere. In 1937, W1XOJ, the first experimental FM radio station after Armstrong's W2XMN in Alpine, New Jersey, was granted a construction permit by the US Federal Communications Commission (FCC).
FM in Europe:
After World War II, FM radio broadcasting was introduced in Germany. At a meeting in Copenhagen in 1948, a new wavelength plan was set up for Europe. Because of the recent war, Germany (which did not exist as a state and so was not invited) was only given a small number of medium-wave frequencies, which were not very good for broadcasting. For this reason Germany began broadcasting on UKW ("Ultrakurzwelle", i.e. ultra short wave, nowadays called VHF), which was not covered by the Copenhagen plan. After some experience with amplitude modulation on VHF, it was realized that FM was a much better alternative for VHF radio than AM. Because of this history, FM radio is still referred to as "UKW Radio" in Germany. Other European nations followed a bit later, once the superior sound quality of FM and the ability to run many more local stations (because of the more limited range of VHF broadcasts) were realized.
Television:
In the 1930s, regular analog television broadcasting began in some parts of Europe and North America. By the end of the decade there were roughly 25,000 all-electronic television receivers in existence worldwide, the majority of them in the UK. In the US, Armstrong's FM system was designated by the FCC to transmit and receive television sound.
Color television:
1953: NTSC-compatible color television introduced in the US.
1962: Telstar 1, the first communications satellite, relayed the first publicly available live transatlantic television signal.
Mid-1960s: The metal–oxide–semiconductor field-effect transistor (MOSFET) was first used for television by the Radio Corporation of America (RCA); the power MOSFET was later widely adopted for television receiver circuits.
By 1963, color television was being broadcast commercially (though not all broadcasts or programs were in color), and the first (radio) communication satellite, Telstar, had been launched.
Mobile phones:
In 1947, AT&T commercialized the Mobile Telephone Service. From its start in St. Louis in 1946, AT&T then introduced Mobile Telephone Service to one hundred towns and highway corridors by 1948. Mobile Telephone Service was a rarity, with only 5,000 customers placing about 30,000 calls each week. Because only three radio channels were available, only three customers in any given city could make mobile telephone calls at one time. Mobile Telephone Service was expensive, costing US$15 per month, plus $0.30–0.40 per local call, equivalent to (in 2012 US dollars) about $176 per month and $3.50–4.75 per call. The development of metal–oxide–semiconductor (MOS) large-scale integration (LSI) technology, information theory and cellular networking led to the development of affordable mobile communications. The Advanced Mobile Phone System analog mobile phone system, developed by Bell Labs and introduced in the Americas in 1978, gave much more capacity. It was the primary analog mobile phone system in North America (and other locales) through the 1980s and into the 2000s.
Broadcast and copyright:
The British government and the state-owned postal services found themselves under massive pressure from the wireless industry (including telegraphy) and early radio adopters to open up to the new medium. In an internal confidential report from February 25, 1924, the Imperial Wireless Telegraphy Committee stated: "We have been asked 'to consider and advise on the policy to be adopted as regards the Imperial Wireless Services so as to protect and facilitate public interest.' It was impressed upon us that the question was urgent. We did not feel called upon to explore the past or to comment on the delays which have occurred in the building of the Empire Wireless Chain. We concentrated our attention on essential matters, examining and considering the facts and circumstances which have a direct bearing on policy and the condition which safeguard public interests."

When radio was introduced in the early 1920s, many predicted it would kill the phonograph record industry. Radio was a free medium for the public to hear music for which they would normally pay. While some companies saw radio as a new avenue for promotion, others feared it would cut into profits from record sales and live performances. Many record companies would not license their records to be played over the radio, and had their major stars sign agreements that they would not perform on radio broadcasts.

Indeed, the music recording industry had a severe drop in profits after the introduction of the radio. For a while, it appeared as though radio was a definite threat to the record industry. Radio ownership grew from two out of five homes in 1931 to four out of five homes in 1938. Meanwhile, record sales fell from $75 million in 1929 to $26 million in 1938 (with a low point of $5 million in 1933), though the economics of the situation were also affected by the Great Depression.

The copyright owners were concerned that they would see no gain from the popularity of radio and the 'free' music it provided.
What they needed to make this new medium work for them already existed in previous copyright law. The copyright holder for a song had control over all public performances ‘for profit.’ The problem now was proving that the radio industry, which was just figuring out for itself how to make money from advertising and currently offered free music to anyone with a receiver, was making a profit from the songs.
The test case was against Bamberger's Department Store in Newark, New Jersey in 1922. The store was broadcasting music from its store on the radio station WOR. No advertisements were heard, except at the beginning of the broadcast which announced "L. Bamberger and Co., One of America's Great Stores, Newark, New Jersey." It was determined through this and previous cases (such as the lawsuit against Shanley's Restaurant) that Bamberger was using the songs for commercial gain, thus making it a public performance for profit, which meant the copyright owners were due payment.
With this ruling the American Society of Composers, Authors and Publishers (ASCAP) began collecting licensing fees from radio stations in 1923. The beginning sum was $250 for all music protected under ASCAP, but for larger stations the price soon ballooned to $5,000. Edward Samuels reports in his book The Illustrated Story of Copyright that "radio and TV licensing represents the single greatest source of revenue for ASCAP and its composers […] and [a]n average member of ASCAP gets about $150–$200 per work per year, or about $5,000–$6,000 for all of a member's compositions." Not long after the Bamberger ruling, in 1924, ASCAP had to once again defend its right to charge fees: the Dill Radio Bill would have allowed radio stations to play music without paying any licensing fees to ASCAP or other music-licensing corporations. The bill did not pass.
Regulations of radio stations in the U.S.:
Wireless Ship Act of 1910:
Radio technology was first used for ships to communicate at sea. To ensure safety, the Wireless Ship Act of 1910 marked the first time the U.S. government imposed regulations on radio systems aboard ships. The act required ships to carry a radio system with a professional operator if they traveled more than 200 miles offshore or had more than 50 people on board. However, the act had many flaws, including unchecked competition between radio operators, notably the two major companies, British and American Marconi, which tended to delay communication for ships that used a competitor's system. This contributed to the tragedy of the sinking of the Titanic in 1912.
Radio Act of 1912:
In 1912, distress calls sent to aid the sinking Titanic were met with a large amount of interfering radio traffic, severely hampering the rescue effort. Subsequently, the US government passed the Radio Act of 1912 to help prevent a repeat of such a tragedy. The act distinguishes between normal radio traffic and (primarily maritime) emergency communication, and specifies the role of government during such an emergency.
The Radio Act of 1927:
The Radio Act of 1927 gave the Federal Radio Commission the power to grant and deny licenses, and to assign frequencies and power levels for each licensee. In 1928 it began requiring licenses of existing stations and setting controls on who could broadcast from where, on what frequency, and at what power. Some stations could not obtain a license and ceased operations. Section 29 of the act provided that broadcast content should remain free, prohibiting the government from interfering with it through censorship.
The Communications Act of 1934:
The introduction of the Communications Act of 1934 led to the establishment of the Federal Communications Commission (FCC), whose responsibility is to regulate the industry, including "telephone, telegraph, and radio communications." Under this Act, all carriers have to keep records of authorized and unauthorized interference. The Act also supports the President in time of war: if the government needs to use communication facilities during wartime, it is allowed to do so.
The Telecommunications Act of 1996:
The Telecommunications Act of 1996 was the first significant overhaul of the Communications Act of 1934 in over 60 years. Coming only a dozen years after the breakup of AT&T, the act set out to open telecommunications markets and the networks they are a part of to competition. The effects of the act have since been seen, but some of the problems it set out to fix remain unresolved, such as the failure to create a fully open, competitive market.
Licensed commercial public radio stations:
The question of the 'first' publicly targeted licensed radio station in the U.S. has more than one answer and depends on semantics; settlement of this 'first' question may hang largely upon what constitutes 'regular' programming. It is commonly attributed to KDKA in Pittsburgh, Pennsylvania, which in October 1920 received its license and went on the air as the first US licensed commercial broadcasting station on November 2, 1920, with the presidential election results as its inaugural show, but it was not broadcasting daily until 1921. (Its engineer Frank Conrad had been broadcasting under the call signs 8XK and 8YK since 1916.) Technically, KDKA was the first of several already-extant stations to receive a 'limited commercial' license.
On February 17, 1919, station 9XM at the University of Wisconsin in Madison broadcast human speech to the public at large. 9XM was first experimentally licensed in 1914, began regular Morse code transmissions in 1916, and its first music broadcast in 1917. Regularly scheduled broadcasts of voice and music began in January 1921. That station is still on the air today as WHA.
On August 20, 1920, 8MK began broadcasting daily; it was later claimed by famed inventor Lee de Forest as the first commercial station. 8MK was licensed to a teenager, Michael DeLisle Lyons, and financed by E. W. Scripps. In 1921, 8MK changed its call sign to WBL and then, in 1922, to WWJ, in Detroit. It has carried a regular schedule of programming to the present, and it also broadcast the 1920 presidential election returns, just as KDKA did. Lee de Forest claims to have been present during 8MK's earliest broadcasts, since the station was using a transmitter sold by his company.
The first station to receive a commercial license was WBZ, then in Springfield, Massachusetts. Lists provided to the Boston Globe by the U.S. Department of Commerce showed that WBZ received its commercial license on 15 September 1921; another Westinghouse station, WJZ, then in Newark, New Jersey, received its commercial license on November 7, the same day as KDKA did. What separates WJZ and WBZ from KDKA is the fact that neither of the former stations remain in their original city of license, whereas KDKA has remained in Pittsburgh for its entire existence.
2XG: Launched by Lee de Forest in the Highbridge section of New York City, that station began daily broadcasts in 1916. Like most experimental radio stations, however, it had to go off the air when the U.S. entered World War I in 1917, and did not return to the air.
1XE: Launched by Harold J. Power in Medford, Massachusetts, 1XE was an experimental station that started broadcasting in 1917. It had to go off the air during World War I, but started up again after the war, and began regular voice and music broadcasts in 1919. However, the station did not receive its commercial license, becoming WGI, until 1922.
WWV, the U.S. government time service, is believed to have started broadcasting in Washington, D.C. about six months before KDKA; in 1966 it was transferred to Ft. Collins, Colorado.
WRUC, the Wireless Radio Union College station at Union College in Schenectady, New York, was launched as W2XQ.
KQV, one of Pittsburgh's five original AM stations, signed on as amateur station "8ZAE" on November 19, 1919, but did not receive a commercial license until January 9, 1922.
Media and documentaries:
Empire of the Air: The Men Who Made Radio (1992) by Ken Burns, PBS documentary based on the 1991 book, Empire of the Air: The Men Who Made Radio by Tom Lewis, 1st ed., New York: E. Burlingame Books, ISBN 0060182156
**Chmod**
Chmod:
In Unix and Unix-like operating systems, chmod is the command and system call used to change the access permissions and the special mode flags (the setuid, setgid, and sticky flags) of file system objects (files and directories). Collectively these were originally called its modes, and the name chmod was chosen as an abbreviation of change mode.
History:
A chmod command first appeared in AT&T UNIX version 1, along with the chmod system call.
As systems grew in number and types of users, access-control lists were added to many file systems in addition to these most basic modes to increase flexibility.
The version of chmod bundled in GNU coreutils was written by David MacKenzie and Jim Meyering. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. The chmod command has also been ported to the IBM i operating system.
Command syntax:
Throughout this section, user refers to the owner of the file, as a reminder that the symbolic form of the command uses "u".
chmod [options] mode[,mode] file1 [file2 ...]

Usually implemented options include:
-R Recursive, i.e. include objects in subdirectories.
-v Verbose, show objects changed (unchanged objects are not shown).

If a symbolic link is specified, the target object is affected. File modes directly associated with symbolic links themselves are typically not used.
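A small sketch of these two options (the file and directory names are made up for illustration, and the exact -v message wording varies between implementations; the mode query uses GNU stat):

```shell
#!/bin/sh
# Hypothetical names; everything happens in a scratch directory.
cd "$(mktemp -d)" || exit 1
umask 022
touch notes.txt
mkdir scripts && touch scripts/deploy.sh

chmod -v 644 notes.txt   # -v: report what was (or was not) changed
chmod -R u+x scripts     # -R: apply u+x to scripts/ and everything inside it

stat -c '%a %n' notes.txt scripts/deploy.sh   # 644 notes.txt / 744 scripts/deploy.sh
```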
To view the file mode, the ls or stat commands may be used. In ls output, the r, w, and x characters specify the read, write, and execute access (the first character of the ls display denotes the object type; a hyphen represents a plain file). The script findPhoneNumbers.sh can be read, written to, and executed by the user dgerman; read and executed by members of the staff group; and only read by any other users.
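The mode just described can be reproduced and inspected as follows (the file is recreated here so the sketch is self-contained; stat -c assumes GNU coreutils):

```shell
#!/bin/sh
cd "$(mktemp -d)" || exit 1
touch findPhoneNumbers.sh
chmod 754 findPhoneNumbers.sh   # user rwx, group r-x, others r--

ls -l findPhoneNumbers.sh               # first column reads -rwxr-xr--
stat -c '%A (%a)' findPhoneNumbers.sh   # -rwxr-xr-- (754)
```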
The main parts of the chmod permissions, for example: rwxr-x---

Each group of three characters defines permissions for one class:
the three leftmost characters, rwx, define permissions for the User class (i.e. the file owner);
the middle three characters, r-x, define permissions for the Group class (i.e. the group owning the file);
the rightmost three characters, ---, define permissions for the Others class. In this example, users who are not the owner of the file and who are not members of the Group (and, thus, are in the Others class) have no permission to access the file.
Numerical permissions:
The chmod numerical format accepts up to four digits. The three rightmost digits define permissions for the file user, the group, and others. The optional leading digit, when four digits are given, specifies the special setuid, setgid, and sticky flags. Each of the three rightmost digits is a sum of binary values controlling the "read" (4), "write" (2) and "execute" (1) permissions respectively: a bit set to 1 means the class is allowed that action, while 0 means it is disallowed.
For example, 754 would allow:
"read" (4), "write" (2), and "execute" (1) for the User class; i.e. 7 (4 + 2 + 1);
"read" (4) and "execute" (1) for the Group class; i.e. 5 (4 + 1);
only "read" (4) for the Others class.

A numerical code permits execution if and only if it is odd (i.e. 1, 3, 5, or 7); it permits "read" if and only if it is greater than or equal to 4 (i.e. 4, 5, 6, or 7); and it permits "write" if and only if it is 2, 3, 6, or 7.
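Those three rules are simply bit tests on each octal digit, as this small sketch shows:

```shell
#!/bin/sh
# Decode one octal permission digit: read = 4, write = 2, execute = 1.
decode() {
  d=$1; r=-; w=-; x=-
  [ $((d & 4)) -ne 0 ] && r=r   # "read" iff the digit is >= 4
  [ $((d & 2)) -ne 0 ] && w=w   # "write" iff the digit is 2, 3, 6, or 7
  [ $((d & 1)) -ne 0 ] && x=x   # "execute" iff the digit is odd
  printf '%s%s%s\n' "$r" "$w" "$x"
}
decode 7   # rwx
decode 5   # r-x
decode 4   # r--
```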
Numeric example:
Change permissions to permit members of the programmers group to update a file. Since the setuid, setgid and sticky bits are not specified, a three-digit numeric mode is equivalent to the same mode written with a leading zero.

Symbolic modes:
The chmod command also accepts a finer-grained symbolic notation, which allows modifying specific modes while leaving other modes untouched. A symbolic mode is composed of three components, combined to form a single string of text: classes of users, an operator, and modes. Classes of users distinguish to whom the permissions apply; if no class is specified, "all" is implied. An operator specifies how the modes of a file should be adjusted, and the modes indicate which permissions are to be granted or removed from the specified classes; there are three basic modes, corresponding to the basic read, write, and execute permissions. Multiple changes can be specified by separating multiple symbolic modes with commas (without spaces). If no class is specified, chmod checks the umask, and the effect is as if "a" were specified, except that bits set in the umask are not affected.
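The numeric command itself is not reproduced in this text; a plausible sketch, assuming the file is group-owned by the programmers group and should become group-writable (the 664 mode and the file name are illustrative assumptions):

```shell
#!/bin/sh
cd "$(mktemp -d)" || exit 1
touch projectNotes.txt          # hypothetical file name
chmod 664 projectNotes.txt      # user rw-, group rw-, others r--

# With only three digits given, setuid/setgid/sticky are all zero,
# so the command above is equivalent to the four-digit form:
chmod 0664 projectNotes.txt

stat -c '%a' projectNotes.txt   # 664
```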
Symbolic examples:
Add write permission (w) to the Group's (g) access modes of a directory, allowing users in the same group to add files.
Remove write permission (w) for all classes (a), preventing anyone from writing to the file.
Set the permissions for the User and the Group (ug) to read and execute (rx) only (no write permission) on referenceLib, preventing anyone from adding files.
Add the read and write permissions to the user and group classes of a file or directory named sample. Remove all permissions from sample, so that no one can read, write, or execute it.
Change the permissions for the user and the group to read and execute only (no write permission) on sample.
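These operations on sample map directly onto Python's os.chmod with the stat constants; a hedged sketch (the name sample comes from the text, a temporary file stands in for it here, and the exact bit behaviour assumes a POSIX system):

```python
import os
import stat
import tempfile

# Create a throwaway file to stand in for "sample".
fd, sample = tempfile.mkstemp()
os.close(fd)

# Add read and write permission for the user and group classes (like ug+rw).
mode = os.stat(sample).st_mode
os.chmod(sample, mode | stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IWGRP)

# Remove all permissions (like a-rwx): numeric mode 0.
os.chmod(sample, 0)

# Set user and group to read and execute only (like ug=rx with no other bits).
os.chmod(sample, stat.S_IRUSR | stat.S_IXUSR | stat.S_IRGRP | stat.S_IXGRP)
print(oct(stat.S_IMODE(os.stat(sample).st_mode)))  # 0o550 on POSIX

os.remove(sample)
```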
Special modes:
The chmod command is also capable of changing the additional permissions or special modes of a file or directory. The symbolic modes use 's' to represent the setuid and setgid modes, and 't' to represent the sticky mode. The modes are only applied to the appropriate classes, regardless of whether or not other classes are specified.
Most operating systems support the specification of special modes numerically, particularly in octal, but some do not. On these systems, only the symbolic modes can be used.
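Where numeric special modes are supported, they occupy a fourth, leading octal digit (setuid = 4, setgid = 2, sticky = 1). Python's stat module exposes the same bits; a short illustrative check:

```python
import stat

# The special modes sit above the three rwx triplets, in a leading octal digit.
assert stat.S_ISUID == 0o4000  # setuid
assert stat.S_ISGID == 0o2000  # setgid
assert stat.S_ISVTX == 0o1000  # sticky bit

# For example, numeric mode 2755 is setgid plus rwxr-xr-x:
mode = stat.S_ISGID | 0o755
print(oct(mode))  # 0o2755
```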
**Antiparticle**
Antiparticle:
In particle physics, every type of particle is associated with an antiparticle with the same mass but with opposite physical charges (such as electric charge). For example, the antiparticle of the electron is the positron (also known as an antielectron). While the electron has a negative electric charge, the positron has a positive electric charge, and is produced naturally in certain types of radioactive decay. The opposite is also true: the antiparticle of the positron is the electron.
Antiparticle:
Some particles, such as the photon, are their own antiparticle. Otherwise, for each pair of antiparticle partners, one is designated as the normal particle (the one that occurs in matter usually interacted with in daily life). The other (usually given the prefix "anti-") is designated the antiparticle.
Particle–antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate themselves with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography.
Antiparticle:
The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than being a half-and-half mixture of matter and antimatter. The discovery of charge parity violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate.
Antiparticle:
Because charge is conserved, it is not possible to create an antiparticle without either destroying another particle of the same charge (as is for instance the case when antiparticles are produced naturally via beta decay or the collision of cosmic rays with Earth's atmosphere), or by the simultaneous creation of both a particle and its antiparticle, which can occur in particle accelerators such as the Large Hadron Collider at CERN.
Antiparticle:
Particles and their antiparticles have equal and opposite charges, so that an uncharged particle also gives rise to an uncharged antiparticle. In many cases, the antiparticle and the particle coincide: pairs of photons, Z0 bosons, π0 mesons, and hypothetical gravitons and some hypothetical WIMPs all self-annihilate. However, electrically neutral particles need not be identical to their antiparticles: for example, the neutron and antineutron are distinct.
History:
Experiment:
In 1932, soon after the prediction of positrons by Paul Dirac, Carl D. Anderson found that cosmic-ray collisions produced these particles in a cloud chamber – a particle detector in which moving electrons (or positrons) leave behind trails as they move through the gas. The electric charge-to-mass ratio of a particle can be measured by observing the radius of curling of its cloud-chamber track in a magnetic field. Positrons, because of the direction that their paths curled, were at first mistaken for electrons travelling in the opposite direction. Positron paths in a cloud chamber trace the same helical path as an electron but rotate in the opposite direction with respect to the magnetic field, because positrons have the same magnitude of charge-to-mass ratio as electrons but with opposite charge, and therefore an opposite-signed charge-to-mass ratio.
History:
The antiproton and antineutron were found by Emilio Segrè and Owen Chamberlain in 1955 at the University of California, Berkeley. Since then, the antiparticles of many other subatomic particles have been created in particle accelerator experiments. In recent years, complete atoms of antimatter have been assembled out of antiprotons and positrons, collected in electromagnetic traps.
History:
Dirac hole theory:
Solutions of the Dirac equation contain negative-energy quantum states. As a result, an electron could always radiate energy and fall into a negative-energy state. Even worse, it could keep radiating infinite amounts of energy, because there were infinitely many negative-energy states available. To prevent this unphysical situation, Dirac proposed that a "sea" of negative-energy electrons fills the universe, already occupying all of the lower-energy states so that, due to the Pauli exclusion principle, no other electron could fall into them. Sometimes, however, one of these negative-energy particles could be lifted out of this Dirac sea to become a positive-energy particle. But, when lifted out, it would leave behind a hole in the sea that would act exactly like a positive-energy electron with a reversed charge. Dirac interpreted these holes as particles and mistakenly identified them with protons in his 1930 paper A Theory of Electrons and Protons. However, these particles turned out to be positrons, not protons.
History:
This picture implied an infinite negative charge for the universe – a problem of which Dirac was aware. Dirac tried to argue that we would perceive this as the normal state of zero charge. Another difficulty was the difference in masses of the electron and the proton. Dirac tried to argue that this was due to the electromagnetic interactions with the sea, until Hermann Weyl proved that hole theory was completely symmetric between negative and positive charges. Dirac also predicted a reaction e− + p+ → γ + γ, where an electron and a proton annihilate to give two photons. Robert Oppenheimer and Igor Tamm, however, proved that this would cause ordinary matter to disappear too fast. A year later, in 1931, Dirac modified his theory and postulated the positron, a new particle of the same mass as the electron. The discovery of this particle the next year removed the last two objections to his theory.
History:
Within Dirac's theory, the problem of infinite charge of the universe remains. Some bosons also have antiparticles, but since bosons do not obey the Pauli exclusion principle (only fermions do), hole theory does not work for them. A unified interpretation of antiparticles is now available in quantum field theory, which solves both these problems by describing antimatter as negative energy states of the same underlying matter field, i.e. particles moving backwards in time.
Particle–antiparticle annihilation:
If a particle and antiparticle are in the appropriate quantum states, then they can annihilate each other and produce other particles. Reactions such as e− + e+ → γγ (the two-photon annihilation of an electron-positron pair) are an example. The single-photon annihilation of an electron-positron pair, e− + e+ → γ, cannot occur in free space because it is impossible to conserve energy and momentum together in this process. However, in the Coulomb field of a nucleus the translational invariance is broken and single-photon annihilation may occur. The reverse reaction (in free space, without an atomic nucleus) is also impossible for this reason. In quantum field theory, this process is allowed only as an intermediate quantum state for times short enough that the violation of energy conservation can be accommodated by the uncertainty principle. This opens the way for virtual pair production or annihilation in which a one particle quantum state may fluctuate into a two particle state and back. These processes are important in the vacuum state and renormalization of a quantum field theory. It also opens the way for neutral particle mixing through processes such as the one pictured here, which is a complicated example of mass renormalization.
Properties:
Quantum states of a particle and an antiparticle are interchanged by the combined application of charge conjugation C, parity P and time reversal T. C and P are linear, unitary operators; T is antilinear and antiunitary, ⟨Ψ|TΦ⟩ = ⟨Φ|T^(−1)Ψ⟩. If |p, σ, n⟩ denotes the quantum state of a particle n with momentum p and spin J whose component in the z-direction is σ, then CPT |p, σ, n⟩ = (−1)^(J−σ) |p, −σ, n^c⟩, where n^c denotes the charge-conjugate state, that is, the antiparticle. In particular, a massive particle and its antiparticle transform under the same irreducible representation of the Poincaré group, which means the antiparticle has the same mass and the same spin.
Properties:
If C, P and T can be defined separately on the particles and antiparticles, then
T |p, σ, n⟩ ∝ |−p, −σ, n⟩,
CP |p, σ, n⟩ ∝ |−p, σ, n^c⟩,
C |p, σ, n⟩ ∝ |p, σ, n^c⟩,
where the proportionality sign indicates that there might be a phase on the right-hand side.
Since CPT anticommutes with the charges, CPT Q = −Q CPT, a particle and its antiparticle have opposite electric charges q and −q.
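The last step is worth making explicit. Writing Θ = CPT and acting on a charge eigenstate gives, using the anticommutation relation (a standard one-line argument, added here for completeness):

```latex
Q\,|p,\sigma,n\rangle = q\,|p,\sigma,n\rangle
\;\Longrightarrow\;
Q\,\bigl(\Theta|p,\sigma,n\rangle\bigr)
= -\,\Theta\,Q\,|p,\sigma,n\rangle
= -q\,\bigl(\Theta|p,\sigma,n\rangle\bigr),
```

so the CPT-conjugate state, which up to a phase is the antiparticle state |p, −σ, n^c⟩, carries charge −q.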
Quantum field theory:
This section draws upon the ideas, language and notation of canonical quantization of a quantum field theory. One may try to quantize an electron field without mixing the annihilation and creation operators by writing ψ(x) = ∑_k u_k(x) a_k e^(−iE(k)t), where the symbol k denotes the quantum numbers p and σ of the previous section together with the sign of the energy E(k), and a_k denotes the corresponding annihilation operator. Of course, since we are dealing with fermions, the operators have to satisfy canonical anticommutation relations. However, if one now writes down the Hamiltonian H = ∑_k E(k) a_k† a_k, then one sees immediately that the expectation value of H need not be positive: E(k) can have either sign, while the combination a_k† a_k has expectation value 1 or 0.
Quantum field theory:
So one has to introduce the charge-conjugate antiparticle field, with its own creation and annihilation operators satisfying the relations b_(k′) = a_k† and b_(k′)† = a_k, where k′ has the same p, and opposite σ and sign of the energy. Then one can rewrite the field in the form ψ(x) = ∑_(k+) u_k(x) a_k e^(−iE(k)t) + ∑_(k−) u_k(x) b_k† e^(−iE(k)t), where the first sum is over positive-energy states and the second over negative-energy states. The energy becomes H = ∑_(k+) E(k) a_k† a_k + ∑_(k−) |E(k)| b_k† b_k + E_0, where E_0 is an infinite negative constant. The vacuum state is defined as the state with no particle or antiparticle, i.e., a_k|0⟩ = 0 and b_k|0⟩ = 0. Then the energy of the vacuum is exactly E_0. Since all energies are measured relative to the vacuum, H is positive definite. Analysis of the properties of a_k and b_k shows that one is the annihilation operator for particles and the other for antiparticles. This is the case of a fermion.
Quantum field theory:
This approach is due to Vladimir Fock, Wendell Furry and Robert Oppenheimer. If one quantizes a real scalar field, then one finds that there is only one kind of annihilation operator; therefore, real scalar fields describe neutral bosons. Since complex scalar fields admit two different kinds of annihilation operators, which are related by conjugation, such fields describe charged bosons.
Quantum field theory:
Feynman–Stueckelberg interpretation:
By considering the propagation of the negative-energy modes of the electron field backward in time, Ernst Stueckelberg reached a pictorial understanding of the fact that the particle and antiparticle have equal mass m and spin J but opposite charges q. This allowed him to rewrite perturbation theory precisely in the form of diagrams. Richard Feynman later gave an independent systematic derivation of these diagrams from a particle formalism, and they are now called Feynman diagrams. Each line of a diagram represents a particle propagating either backward or forward in time. In Feynman diagrams, antiparticles are shown traveling backward in time relative to normal matter. This technique is the most widespread method of computing amplitudes in quantum field theory today.
Quantum field theory:
Since this picture was first developed by Stueckelberg, and acquired its modern form in Feynman's work, it is called the Feynman–Stueckelberg interpretation of antiparticles to honor both scientists.
**Internet Explorer 6**
Internet Explorer 6:
Microsoft Internet Explorer 6 (IE6) is a graphical web browser developed by Microsoft for Windows operating systems. Released on August 24, 2001, it is the sixth version of Internet Explorer, now discontinued, and the successor to Internet Explorer 5. It was the default browser in Windows XP (whose later default was Internet Explorer 8) and Windows Server 2003, and it can replace previous versions of Internet Explorer on Windows NT 4.0, Windows 98, Windows 2000 and Windows ME; unlike version 5, however, this version does not support Windows 95 or earlier. IE6 SP2+ and IE7 were only included in (IE6 SP2+) or available for (IE7) Windows XP SP2+.
Internet Explorer 6:
Despite dominating market share (attaining a peak of 90% in mid-2004), this version of Internet Explorer has been widely criticized for its security issues and lack of support for modern web standards, making frequent appearances in "worst tech products of all time" lists, with PC World labeling it "the least secure software on the planet." In 2004, Mozilla finalized Firefox to rival IE6, and it became highly popular and acclaimed for its security, add-ons, speed and other modern features such as tabbed browsing. Microsoft planned to fix these issues in Internet Explorer 7 by June–August 2005, but it was delayed until an October 2006 release, over 5 years after IE6 debuted.
Internet Explorer 6:
Because a substantial percentage of the web audience still used the outdated browser (especially in China), campaigns were established in the late 2000s to encourage users to upgrade to newer versions of Internet Explorer or switch to different browsers. Some websites dropped support for IE6 entirely, most notable of which was Google dropping support in some of its services in March 2010. According to Microsoft's modern.ie website, as of August 2015, 3.1% of users in China and less than 1% in other countries were using IE6. Internet Explorer 6 was the last version to be called Microsoft Internet Explorer. The software was rebranded as Windows Internet Explorer starting in 2006 with the release of Internet Explorer 7.
Internet Explorer 6:
Internet Explorer 6 is no longer supported, and is not available for download from Microsoft.
It is the last version of Internet Explorer to support Windows NT 4.0 SP6a, Windows 98, Windows 2000 and Windows ME, though it is only available pre-installed in Windows XP RTM through SP1 and Windows Server 2003 RTM, as the following version, Internet Explorer 7, only supports Windows XP SP2 or later and Windows Server 2003 SP1 or later.
Overview:
When IE6 was released, it included a number of enhancements over its predecessor, Internet Explorer 5. It and its browser engine MSHTML (Trident) are required for many programs including Microsoft Encarta. IE6 improved support for Cascading Style Sheets, adding a number of properties that previously had not been implemented and fixing bugs such as the Internet Explorer box model bug. In Windows XP, IE6 introduced a redesigned interface based on the operating system's default theme, Luna.
Overview:
In addition, IE6 added DHTML enhancements, content-restricted inline frames, and partial support of DOM level 1 and SMIL 2.0. The MSXML engine was also updated to version 3.0. Other new features included a new version of the Internet Explorer Administration Kit (IEAK) which introduced IExpress, a utility to create self-extracting INF-based installation packages, the Media bar, Windows Messenger integration, fault collection, automatic image resizing, and P3P. Meanwhile, in 2002, the Gopher protocol was disabled, and XBM support was dropped. IE6 was the most widely used web browser during its tenure, surpassing Internet Explorer 5.x. At its peak in 2002 and 2003, IE6 attained a total market share of nearly 90%, with all versions of IE combined reaching 95%. There was little change in IE's market share for several years until Mozilla Firefox was released and gradually began to gain popularity. Microsoft subsequently resumed development of Internet Explorer and released Internet Explorer 7, further reducing the number of IE6 users.
Overview:
In a May 7, 2003 Microsoft online chat, Brian Countryman, Internet Explorer Program Manager, declared that Internet Explorer would cease to be distributed separately from Windows (IE6 would be the last standalone version); it would, however, be continued as a part of the evolution of Windows, with updates coming only bundled in Windows upgrades. Thus, Internet Explorer and Windows itself would be kept more in sync. However, after one release in this fashion (IE6 SP2 in Windows XP SP2, in August 2004), Microsoft changed its plan and released Internet Explorer 7 for Windows XP SP2 and Windows Server 2003 SP1 in late 2006. Microsoft Internet Explorer 6 was the last version of Internet Explorer to have "Microsoft" in the title: later versions changed branding to "Windows Internet Explorer", as a reaction to the findings of anti-competitive tying of Internet Explorer and Windows raised in United States v. Microsoft and the European Union Microsoft competition case. On March 4, 2011, Microsoft urged web users to stop using IE6 in favor of newer versions of Internet Explorer, launching a website called IE6 Countdown, which showed what percentage of the world was still using IE6 and aimed to get people to upgrade.
Overview:
Since 2015, all of the older sample questions offered by IE6 Search Companion on Windows XP and other unique functions have been replaced with "Windows 10 Upgrade".
Security problems:
The security advisory site Secunia reported 24 unpatched vulnerabilities in Internet Explorer 6 as of February 9, 2010. These vulnerabilities, which include several "moderately critical" ratings, amount to 17% of the total 144 security risks listed on the website as of February 11, 2010. As of June 23, 2006, Secunia counted 20 unpatched security flaws for Internet Explorer 6, many more and older than for any other browser, even in each individual criticality level, although some of these flaws only affect Internet Explorer when running on certain versions of Windows or when running in conjunction with certain other applications. On June 23, 2004, an attacker used two previously undiscovered security holes in Internet Explorer to insert spam-sending software on an unknown number of end-user computers. This malware became known as Download.ject and caused users to infect their computers with a back door and key logger merely by viewing a web page. Infected sites included several financial sites.
Security problems:
Probably the biggest generic security failing of Internet Explorer (and other web browsers too) is the fact that it runs with the same level of access as the logged in user, rather than adopting the principle of least user access. Consequently, any malware executing in the Internet Explorer process via a security vulnerability (e.g. Download.ject in the example above) has the same level of access as the user, something that has particular relevance when that user is an Administrator. Tools such as DropMyRights are able to address this issue by restricting the security token of the Internet Explorer process to that of a limited user. However this added level of security is not installed or available by default, and does not offer a simple way to elevate privileges ad hoc when required (for example to access Microsoft Update).
Security problems:
Art Manion, a representative of the United States Computer Emergency Readiness Team (US-CERT) noted in a vulnerability report that the design of Internet Explorer 6 Service Pack 1 made it difficult to secure. He stated that: There are a number of significant vulnerabilities in technologies relating to the IE domain/zone security model, local file system (Local Machine Zone) trust, the Dynamic HTML (DHTML) document object model (in particular, proprietary DHTML features), the HTML Help system, MIME type determination, the graphical user interface (GUI), and ActiveX. … IE is integrated into Windows to such an extent that vulnerabilities in IE frequently provide an attacker significant access to the operating system.
Security problems:
Manion later clarified that most of these concerns were addressed in 2004 with the release of Windows XP Service Pack 2, and other browsers had begun to suffer the same vulnerabilities he identified in the above CERT report. In response to a belief that Internet Explorer's frequency of exploitation is due in part to its ubiquity, since its market dominance made it the most obvious target, David Wheeler argues that this is not the full story. He notes that Apache HTTP Server had a much larger market share than Microsoft IIS, yet Apache traditionally had fewer security vulnerabilities at the time. As a result of its issues, some security experts, including Bruce Schneier in 2004, recommended that users stop using Internet Explorer for normal browsing, and switch to a different browser instead. Several notable technology columnists suggested the same idea, including The Wall Street Journal's Walt Mossberg and eWeek's Steven Vaughan-Nichols. On July 6, 2004, US-CERT released an exploit report in which the last of seven workarounds was to use a different browser, especially when visiting untrusted sites.
Market share:
Internet Explorer 6 was the most widely used web browser during its tenure (surpassing Internet Explorer 5.x), attaining a peak usage share in the high 80s during 2002 and 2003, and together with other versions up to 95%. Its share declined only slowly until late 2006, after which it lost about half its market share to Internet Explorer 7 and Mozilla Firefox by 2008.
Market share:
IE6 remained more popular than its successor in business use for more than a year after IE7 came out. A 2008 DailyTech article noted, "A Survey found 55.2% of companies still use IE 6 as of December 2007", while "IE 7 only has a 23.4 percent adoption rate". Net Applications estimated IE6 market share at almost 39% for September 2008. According to the same source, IE7 users migrated faster to IE8 than users of its predecessor IE6 did, leading to IE6 once again becoming the most widely used browser during the summer and fall of 2009, eight years after its introduction. As of February 2010, estimates of IE6's global market share ranged from 10 to 20%. Nonetheless, IE6 continued to maintain a plurality or even majority presence in the browser market of certain countries, notably China and South Korea. Google Apps and YouTube dropped support for IE6 in March 2010, followed by Facebook chat in September. On January 3, 2012, Microsoft announced that usage of IE6 in the United States had dropped below 1%. In August 2012, IE6 was still the most popular IE web browser in China. It was also the second most used browser overall, with a total market share of 22.41%, just behind the Chinese-made 360 Secure Browser at 26.96%. In July 2013, Net Applications reported the global market share of IE6 amongst all Internet Explorer browsers to be 10.9%. As of August 2015, IE6 was being used by fewer than 1% of users in most countries, the only exception being China (3.1%). Usage in China fell below 1% by the end of the year.
Criticism:
A common criticism of Internet Explorer is the speed at which fixes are released after the discovery of security problems.
Criticism:
Microsoft attributes the perceived delays to rigorous testing. A posting to the Internet Explorer team blog on August 17, 2004 explained that there are, at minimum, 234 distinct releases of Internet Explorer that Microsoft supports (covering more than two dozen languages, and several different revisions of the operating system and browser level for each language), and that every combination is tested before a patch is released. In May 2006, PC World rated Internet Explorer 6 the eighth worst tech product of all time. A certain degree of complacency has been alleged against Microsoft over IE6: with nearly 90% of the browser market, the motive to innovate was not strongly present, resulting in the five years between IE6's introduction and its replacement by IE7. This was a contributing factor in the rapid rise of the free software alternative Mozilla Firefox.
Criticism:
Programming interface:
Unlike most other modern browsers, IE6 does not fully or properly support CSS version 2, which made it difficult for web developers to ensure compatibility with the browser without degrading the experience for users of more advanced browsers. Developers often resorted to strategies such as CSS hacks, conditional comments, or other forms of browser sniffing to make their websites work in IE6.
Criticism:
Additionally, IE6 lacks support for alpha transparency in PNG images, replacing transparent pixels with a solid colour background (grey unless defined in a PNG bKGD chunk). There is a workaround by way of Microsoft's proprietary AlphaImageLoader, but it is more complicated and not wholly comparable in function. Due to the long-lasting popularity of Internet Explorer 6, web developers had to work around its lack of interfaces. For example, because the position: fixed parameter in CSS was unavailable for elements such as top bars that should remain on screen when the user scrolls, JavaScript code had to be used to determine the user's scrolling position and then push down an element positioned with position: absolute by the same distance to keep it on screen, or the page's hypertext had to be divided into subframes using the <frameset> tag. With media queries unavailable, responsive widths could be implemented to a limited extent by wrapping elements inside tables.
Criticism:
Bugs:
Internet Explorer 6 has also been criticized for its instability: certain code on a website could cause the browser to crash, and the user could crash the browser with a single line of code in the address bar, causing a pointer overflow.
Deprecation of support:
Several campaigns were later aimed at ridding the browser market of Internet Explorer 6:
In July 2008, 37signals announced it would phase out support for IE6 beginning in October 2008.
In February 2009, some Norwegian sites began hosting campaigns with the same aim.
In March 2009, a Danish anti-IE6 campaign was launched.
In July 2009, developers of YouTube placed a site notice that warned about the impending deprecation of support for Internet Explorer 6, prompting its users to upgrade their browser. It is claimed that they represented 18% of the site traffic at that time.
In January 2010, the German Government, and subsequently the French Government each advised their citizens to move away from IE6.
Also in January 2010, Google announced it would no longer support IE6.
In February 2010, British citizens began to petition their government to stop using IE6, though this was rejected in July 2010.
In March 2010, in agreement with the EU, Microsoft began prompting users of Internet Explorer 6 in the EU with a ballot screen in which they are presented with a list of browsers in random order to select and upgrade to. The website is located at BrowserChoice.eu.
Deprecation of support:
In May 2010, Microsoft's Australian division launched a campaign which compared IE6 to 9-year-old milk and urged users to upgrade to IE8. With its increasing incompatibility with modern web standards, popular websites began removing support for IE6 in 2010, including YouTube and its parent company Google; however, large corporate IT support teams and other employers forcing staff to use IE6 for compatibility reasons slowed upgrades. Microsoft eventually began its own campaign to encourage users to stop using IE6, while stating that it would support IE6 until support for Windows XP SP3 (including embedded versions) was removed. However, on January 12, 2016, when the new Microsoft Lifecycle Support policy for Internet Explorer went into effect, IE6 support on all Windows versions ended, more than 14 years after its original release, making the January 2016 security update for multiple versions of XP Embedded the last that Microsoft publicly issued for IE6.
Security framework:
Internet Explorer uses a zone-based security framework, which means that sites are grouped based upon certain conditions. IE allows the restriction of broad areas of functionality, and also allows specific functions to be restricted. The administration of Internet Explorer is accomplished through the Internet Properties control panel. This utility also administers the Internet Explorer framework as it is implemented by other applications.
Security framework:
Patches and updates to the browser are released periodically and made available through Windows Update web site. Windows XP Service Pack 2 adds several important security features to Internet Explorer, including a popup blocker and additional security for ActiveX controls. ActiveX support remains in Internet Explorer although access to the "Local Machine Zone" is denied by default since Service Pack 2. However, once an ActiveX control runs and is authorized by the user, it can gain all the privileges of the user, instead of being granted limited privileges as Java or JavaScript do. This was later solved in the Windows Vista version of IE 7, which supported running the browser in a low-permission mode, making malware unable to run unless expressly granted permission by the user.
Quirks mode:
Internet Explorer 6 dropped Compatibility Mode, which allowed Internet Explorer 4 to be run side by side with 5.x. Instead, IE6 introduced quirks mode, which causes it to emulate many behaviors of IE 5.5. Rather than being activated by the user, quirks mode is automatically and silently activated when viewing web pages that contain an old, invalid or no DOCTYPE. This feature was later added to all other major browsers to maximize compatibility with old or poorly-coded web pages.
Supported platforms:
Internet Explorer 6 supports Windows NT 4.0 (SP6a only), Windows 98, Windows 2000, Windows ME, Windows XP and Windows Server 2003. The Service Pack 1 update supports all of these versions, but Security Version 1 is only available as part of Windows XP SP2 and Windows Server 2003 SP1 and later service packs for those versions. This would later be followed by Internet Explorer 7 dropping support for Windows NT 4.0, Windows 98, Windows 2000 and Windows ME, and thus making Internet Explorer 6 the final version of Internet Explorer with support for Windows versions prior to Windows XP and Windows Server 2003.
System requirements:
IE6 requires at least: a 486/66 MHz processor; Windows 98 or Windows NT 4.0 SP6a; a Super VGA (800 × 600) monitor with 256 colors; a mouse or compatible pointing device; 16–32 MB of RAM; and 8.7–12.7 MB of free disk space.
**Play money**
Play money:
Play money is noticeably fake bills or coins intended to be used as toy currency, especially for classroom instruction or as an in-game currency in board games such as Monopoly, rather than currency in a legitimate exchange market. Play money coins and bills are collected widely. They can be found made from metals, cardboard or, more frequently today, plastic. For card games such as poker, casino tokens are commonly used instead.
In 1997, the Winston Million (a cash prize award program on the NASCAR Winston Cup Series) was won by Jeff Gordon at the Mountain Dew Southern 500. A Brinks truck led him around the victory lap, spewing bags of Winston play money. Many online gambling sites offer "play money" games which can be played for freely obtainable credits. These are usually offered alongside "real money" games. However, some sites also offer software that only offers play money games. Such software is usually downloadable from a parallel .net web address, which can then be advertised to the general public as a non-gambling website.
**Blowhole (geology)**
Blowhole (geology):
In geology, a blowhole or marine geyser is formed as sea caves grow landward and upward into vertical shafts and expose themselves toward the surface, which can result in hydraulic compression of seawater that is released through a port from the top of the blowhole. The geometry of the cave and blowhole along with tide levels and swell conditions determine the height of the spray.
Mechanics:
Blowholes are likely to occur in areas where there are crevices, such as lava tubes, in rock along the coast. These areas are often located along fault lines and on islands. As powerful waves hit the coast, water rushes into these crevices and bursts out in a high-pressured release. It is often accompanied by a loud noise and wide spray, and for this reason blowholes are often sites of tourism. Marine erosion on rocky coastlines produces blowholes that are found throughout the world. They are found at intersecting faults and on the windward sides of a coastline, where they receive higher wave energy from the open ocean. The development of a blowhole is linked to the formation of a littoral cave; these two elements make up the blowhole system. A blowhole system always contains three main features: a catchment entrance, a compression cavern and an expelling port. The arrangement, angle and size of these three features determine the force and the air-to-water ratio of what is ejected from the port. The blowhole feature tends to occur in the most distal section of a littoral cave. As their name suggests, blowholes can move air rapidly: strong reverse draughts in response to pressure changes in a connecting littoral cave can reach wind speeds upwards of 70 km/h. The formation of a blowhole system begins as a littoral cave is formed. The main factors that contribute to littoral cave formation are wave dynamics and the parent material's rock properties. A property such as susceptibility or resistance to weathering plays a major role in the development of caves. Littoral caves can be formed by one of two processes: caves in limestone are produced by karst (dissolution) processes, and caves in igneous rock are produced by pseudokarst (non-dissolutional) processes. In time, the littoral cave enlarges, growing inland and vertically through weak joints in the parent material.
As weathering continues, the roof of the cave is exposed and the blowhole continues to enlarge; eventually the roof of the littoral cave weakens and collapses. This creates a steep-walled inlet that allows the next stage of coastal morphology to progress. La Bufadora is a large example of a blowhole, located on the Punta Banda Peninsula of Baja California, Mexico. It consists of a littoral cave with a thin opening that has an eruption recurrence interval of 13–17 seconds, ejecting water up to 100 ft above sea level.
Ecological impacts:
Blowholes have the capacity to change the topography near their locations. Blowholes can eventually erode the area surrounding the crevices to form larger sea caves. In some instances, the cave itself may collapse. This event may create shallow pools along the coast.
Other:
A blowhole is also the name of a rare geologic feature in which air is blown through a small hole at the surface due to pressure differences between a closed underground system and the surface. The blowholes of Wupatki National Monument are an example of such a phenomenon. It is estimated that the closed underground passages have a volume of at least seven billion cubic feet. Wind speeds can approach 30 miles per hour. Another well-known example of this kind of blowhole is the natural entrance to Wind Cave in South Dakota.
**Mer (software distribution)**
Mer (software distribution):
Mer was a free and open-source software distribution, targeted at hardware vendors to serve as middleware for Linux kernel-based mobile-oriented operating systems. It was a fork of MeeGo.
Goals:
Some goals of the project were:
Openly developed, with transparency built into the fabric of the project
Provide a mobile-device-oriented architecture
Primary customers are mobile device vendors, not end-users
Have structure, processes and tools to make life easy for device manufacturers
Support innovation in the mobile OS space
Inclusive of projects and technologies (e.g. MeeGo, Tizen, Qt, Enlightenment Foundation Libraries (EFL), HTML5)
Governed as a meritocracy
Run as a non-profit through donations
Software architecture:
Mer is not an operating system; it is aimed to be one component of an operating system based on the Linux kernel. Mer is a part of the operating system above the Linux kernel and below the graphical user interface (GUI).
Mer just provides the equivalent of the MeeGo core. The former MeeGo user interfaces and hardware adaptation are to be done by various other projects and by hardware manufacturers, which will be able to build their products on top of the Mer core.
Components
There is support for systemd, Wayland, Hybris, and other current FOSS software.
Zephyr was an attempt at creating a stack for use by other projects exploring lightweight, high-performance, next-generation UIs based on Mer, Qt5, QML Compositor and Wayland. Weston 1.3, which was released on 11 October 2013, supports libhybris, making it possible to use Android device drivers with Wayland.
Supported hardware:
Mer can be compiled for a number of instruction sets such as x86, ARM or MIPS.
There are Mer-based builds available for various devices, including Raspberry Pi, Beagleboard, Nokia N900, Nokia N950, Nokia N9 and various Intel Atom-based tablets. These also include hardware adaptation packages and various UXes running on top of Mer, provided by different projects. They can be flashed onto the device and might work in dual-boot mode with the original firmware. Mer uses the Open Build Service (OBS), with one repository per architecture.
Products based on Mer:
KDE Plasma Active
Mer was used as a reference platform for KDE's Plasma Active.
Vivaldi Tablet and Improv computer
In January 2012 a Plasma Active tablet device, initially known as the 'Spark tablet' and soon renamed the 'Vivaldi Tablet', was announced. Based on the Allwinner A20 SoC, it would have a 7" multitouch display, run the Plasma Active user interface on top of Mer, and have a target price of about €200. The project encountered problems when its hardware partner in China completely changed the internal components and was reluctant to release the kernel source for the new hardware. As of early July 2012 the Vivaldi had been set back, but a solution was "in the pipes", according to Plasma developer Aaron Seigo. As a side project, the Improv computer was targeted at developers and was to be released in January 2014 with Mer preinstalled. In mid-2014 both projects were discontinued.
Nemo Mobile
Parallel to Sailfish OS by Jolla, Nemo Mobile is a community-driven operating system based on a Linux kernel, Mer, a GUI and diverse applications.
Since 2019, Nemo Mobile is no longer using the Mer Project as a base, having switched to Manjaro Linux. The main reason for the move was obsolete components, such as Qt, which was stuck at version 5.6 due to licensing restrictions.
Jolla and Sailfish OS
In July 2012 Jolla, a Finnish company founded by former Nokia employees involved in MeeGo development, announced their work on a new operating system called Sailfish OS, which is based on MeeGo and Mer's core with added proprietary GUI and hardware implementation layers. It was presented in late November 2012. Jolla released its first smartphone using Sailfish in 2013, simply called the Jolla. In October 2014 Jolla announced the Jolla Tablet for May 2015, with Sailfish OS 2.0 running 64-bit on a quad-core Intel CPU. Sailfish OS 2.0 is also ready for licensing, and is used in products such as the Aqua Fish by Intex and the PuzzlePhone.
Yuanxin OS
In November 2014, Yuanxin Technology in China announced that it was working on Yuanxin OS. The company's president, Shi Wenyong, called the OS "China's own smartphone OS", to be on par with Android and Apple iOS. Shi explained to a reporter that Yuanxin OS is based on the Mer distribution.
History:
Mer's initial aim was to provide a completely free alternative to the Maemo operating system, which was able to run on Nokia Internet Tablets such as the N800 and N810 (collectively known as the N8x0 devices). It was based on Ubuntu 9.04, and with the release of Maemo 5/Fremantle, a new goal emerged: "[To bring] as much of Fremantle as we can get on the N8x0."
Shift to MeeGo
Mer suspended development at release 0.17, since focus had switched to building MeeGo for the N800 and N810 devices. By then, MeeGo was available and supported by a much wider community.
Collapse of MeeGo
Development was silently resumed during the summer of 2011 by a handful of MeeGo developers (some of them previously active in the Mer project), after Nokia changed its strategy in February 2011. These developers were not satisfied with the way MeeGo had been governed behind closed doors, especially after Nokia departed, and they were also concerned that MeeGo depended heavily on big companies which could stop supporting it, as was the case when Nokia abandoned MeeGo as part of its new strategy. This was again proven to be a problem after Intel, Samsung and the Linux Foundation announced they were going to create a new operating system called Tizen. The new OS began focusing on HTML5 and using the Enlightenment Foundation Libraries (EFL) instead of Qt for native applications. However, on May 14, 2014 it was announced that Tizen:Common would be bringing Qt back by starting to ship with it integrated.
Revival with "MeeGo Reconstructed"
After the Tizen project was announced, the revival of the Mer project was announced on the MeeGo mailing list, with the promise that it would be developed and governed completely in the open as a meritocracy, unlike MeeGo and Tizen. It would also be based on the MeeGo code base and tools, aiming to provide just the equivalent of the MeeGo core with no default UI. The APIs for third-party application development are included, meaning that Qt, EFL, and HTML5 would be supported on the platform, and perhaps others if widely requested.
The project quickly started to gain traction among many open source developers who had been involved in MeeGo, and it started being used by former MeeGo projects, such as the reference handset UX, now rebased on top of Mer and called Nemo Mobile. A couple of projects targeting tablet UXes, such as Cordia (a reimplementation of the Maemo 5 Hildon UX) and Plasma Active, also emerged on top of Mer. Equivalent Mer-based projects for the former MeeGo IVI and Smart TV UXes are not known to exist.
The aim of the Mer community is to build, on solid foundations, what could not be achieved with MeeGo; Mer is to become what MeeGo was expected to be but never became. Mer aims to become MeeGo 2.0 when the Linux Foundation finds that it complies with all of the MeeGo requirements.
Merger with Sailfish
In early 2019 it was announced that the Mer and Sailfish operations would be unified under one brand, Sailfish OS, discontinuing use of the name Mer.
**Photograph**
Photograph:
A photograph (also known as a photo, image, or picture) is an image created by light falling on a photosensitive surface, usually photographic film or an electronic image sensor, such as a CCD or a CMOS chip. Most photographs are now created using a smartphone or camera, which uses a lens to focus the scene's visible wavelengths of light into a reproduction of what the human eye would see. The process and practice of creating such images is called photography.
Etymology:
The word photograph was coined in 1839 by Sir John Herschel and is based on the Greek φῶς (phos), meaning "light," and γραφή (graphê), meaning "drawing, writing," together meaning "drawing with light."
History:
The first permanent photograph, a contact-exposed copy of an engraving, was made in 1822 using the bitumen-based "heliography" process developed by Nicéphore Niépce. The first photographs of a real-world scene, made using a camera obscura, followed a few years later at Le Gras, France, in 1826, but Niépce's process was not sensitive enough to be practical for that application: a camera exposure lasting for hours or days was required. In 1829 Niépce entered into a partnership with Louis Daguerre and the two collaborated to work out a similar but more sensitive and otherwise improved process.
After Niépce's death in 1833 Daguerre concentrated on silver halide-based alternatives. He exposed a silver-plated copper sheet to iodine vapor, creating a layer of light-sensitive silver iodide; exposed it in the camera for a few minutes; developed the resulting invisible latent image to visibility with mercury fumes; then bathed the plate in a hot salt solution to remove the remaining silver iodide, making the results light-fast. He named this first practical process for making photographs with a camera the daguerreotype, after himself. Its existence was announced to the world on 7 January 1839 but working details were not made public until 19 August. Other inventors soon made improvements which reduced the required exposure time from a few minutes to a few seconds, making portrait photography truly practical and widely popular.
The daguerreotype had shortcomings, notably the fragility of the mirror-like image surface and the particular viewing conditions required to see the image properly. Each was a unique opaque positive that could only be duplicated by copying it with a camera. Inventors set about working out improved processes that would be more practical. By the end of the 1850s the daguerreotype had been replaced by the less expensive and more easily viewed ambrotype and tintype, which made use of the recently introduced collodion process. Glass plate collodion negatives used to make prints on albumen paper soon became the preferred photographic method and held that position for many years, even after the introduction of the more convenient gelatin process in 1871. Refinements of the gelatin process have remained the primary black-and-white photographic process to this day, differing primarily in the sensitivity of the emulsion and the support material used, which was originally glass, then a variety of flexible plastic films, along with various types of paper for the final prints.
Color photography is almost as old as black-and-white, with early experiments including John Herschel's Anthotype prints in 1842, the pioneering work of Louis Ducos du Hauron in the 1860s, and the Lippmann process unveiled in 1891, but for many years color photography remained little more than a laboratory curiosity. It first became a widespread commercial reality with the introduction of Autochrome plates in 1907, but the plates were very expensive and not suitable for casual snapshot-taking with hand-held cameras. The mid-1930s saw the introduction of Kodachrome and Agfacolor Neu, the first easy-to-use color films of the modern multi-layer chromogenic type. These early processes produced transparencies for use in slide projectors and viewing devices, but color prints became increasingly popular after the introduction of chromogenic color print paper in the 1940s. The needs of the motion picture industry generated a number of special processes and systems, perhaps the best-known being the now-obsolete three-strip Technicolor process.
Types of photographs:
Non-digital photographs are produced with a two-step chemical process. In the two-step process the light-sensitive film captures a negative image (colors and lights/darks are inverted). To produce a positive image, the negative is most commonly transferred ('printed') onto photographic paper. Printing the negative onto transparent film stock is used to manufacture motion picture films.
Alternatively, the film is processed to invert the negative image, yielding positive transparencies. Such positive images are usually mounted in frames, called slides. Before recent advances in digital photography, transparencies were widely used by professionals because of their sharpness and accuracy of color rendition. Most photographs published in magazines were taken on color transparency film.
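The tonal inversion in the negative-to-positive step can be shown numerically: each 8-bit channel value v in the negative maps to 255 − v in the print. This is a deliberately simplified model (it ignores, for example, the orange mask and nonlinear response of real color negative film):

```python
def invert(pixel):
    """Invert one 8-bit RGB pixel, modeling the negative-to-positive step."""
    return tuple(255 - channel for channel in pixel)

# A dark scene area records as a light area on the negative, and vice versa.
negative_row = [(245, 55, 225), (255, 255, 255), (0, 0, 0)]
positive_row = [invert(p) for p in negative_row]
print(positive_row)  # [(10, 200, 30), (0, 0, 0), (255, 255, 255)]
```

Applying the inversion twice returns the original values, which is why printing a negative onto another negative medium (as in motion picture duplication) recovers a positive image.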
Originally, all photographs were monochromatic or hand-painted in color. Although methods for developing color photos were available as early as 1861, they did not become widely available until the 1940s or 1950s, and even so, until the 1960s most photographs were taken in black and white. Since then, color photography has dominated popular photography, although black-and-white is still used, being easier to develop than color.
Panoramic format images can be taken with cameras like the Hasselblad Xpan on standard film. Since the 1990s, panoramic photos have been available on the Advanced Photo System (APS) film. APS was developed by several of the major film manufacturers to provide a film with different formats and computerized options available, though APS panoramas were created using a mask in panorama-capable cameras, far less desirable than a true panoramic camera, which achieves its effect through a wider film format. APS has become less popular and has been discontinued.
The advent of the microcomputer and digital photography has led to the rise of digital prints. These prints are created from stored graphic formats such as JPEG, TIFF, and RAW. The types of printers used include inkjet printers, dye-sublimation printers, laser printers, and thermal printers. Inkjet prints are sometimes given the coined name "Giclée".
The Web has been a popular medium for storing and sharing photos ever since the first photograph was published on the web by Tim Berners-Lee in 1992 (an image of the CERN house band Les Horribles Cernettes). Today popular sites such as Flickr, PhotoBucket and 500px are used by millions of people to share their pictures.
Preservation:
Paper folders
Ideal photograph storage involves placing each photo in an individual folder constructed from buffered, or acid-free, paper. Buffered paper folders are especially recommended in cases where a photograph was previously mounted onto poor-quality material or with an adhesive that will lead to even more acid creation. Store photographs measuring 8x10 inches or smaller vertically along the longer edge of the photo in the buffered paper folder, within a larger archival box, and label each folder with relevant information to identify it. The rigid nature of the folder protects the photo from slumping or creasing, as long as the box is not packed too tightly or underfilled. Stack folders of larger or brittle photos flat within archival boxes with other materials of comparable size.
Polyester enclosures
The most stable of the plastics used in photo preservation, polyester, does not generate any harmful chemical elements, nor does it have any capability to absorb acids generated by the photograph itself. Polyester sleeves and encapsulation have been praised for their ability to protect the photograph from humidity and environmental pollution, slowing the reaction between the item and the atmosphere. However, the polyester just as frequently traps these elements next to the material it is intended to protect. This is especially risky in a storage environment that experiences drastic fluctuations in humidity or temperature, leading to ferrotyping, or sticking of the photograph to the plastic. Photographs sleeved or encapsulated in polyester cannot be stored vertically in boxes, because they will slide down next to each other within the box, bending and folding; nor can the archivist write directly onto the polyester to identify the photograph. It is therefore necessary either to stack polyester-protected photographs horizontally within a box or to bind them in a three-ring binder. Stacking the photos horizontally within a flat box greatly reduces ease of access, while binders leave three sides of the photo exposed to the effects of light and do not support the photograph evenly on both sides, leading to slumping and bending within the binder. The plastic used for enclosures is manufactured to be as frictionless as possible to prevent scratching photos during insertion into the sleeves. Unfortunately, the slippery nature of the enclosure generates a build-up of static electricity, which attracts dust and lint particles. The static can attract dust to the inside of the sleeve as well, where it can scratch the photograph.
Likewise, the components that aid in insertion of the photo, referred to as slip agents, can break down and transfer from the plastic to the photograph, where they deposit as an oily film, attracting further lint and dust. At this time, there is no test to evaluate the long-term effects of these components on photographs. In addition, the plastic sleeves can develop kinks or creases in the surface, which will scratch away at the emulsion during handling.
Handling and care
It is best to leave photographs lying flat on the table when viewing them. Do not pick a photograph up from a corner, or even from two sides, and hold it at eye level. Every time the photograph bends, even a little, the emulsion can break down. The very nature of enclosing a photograph in plastic encourages users to pick it up; users tend to handle plastic-enclosed photographs less gently than non-enclosed photographs, simply because they feel the plastic enclosure makes the photo impervious to mishandling. As long as a photo is in its folder, there is no need to touch it; simply remove the folder from the box, lay it flat on the table, and open the folder. If for some reason researchers or archivists do need to handle the actual photo, perhaps to examine the verso for writing, they can use gloves if there appears to be a risk from oils or dirt on the hands.
Myths and beliefs:
Because daguerreotypes were rendered on a mirrored surface, many spiritualists also became practitioners of the new art form. Spiritualists would claim that the human image on the mirrored surface was akin to looking into one's soul. The spiritualists also believed that it would open their souls and let demons in. Among Muslims, it is makruh (disliked) to perform salah (worship) in a place decorated with photographs. Photography and darkroom anomalies and artifacts sometimes lead viewers to believe that spirits or demons have been captured in photos.
Legality:
The production or distribution of certain types of photograph has been forbidden under modern laws, such as photographs of government buildings, highly classified regions, private property, copyrighted works, children's genitalia, child pornography and, less commonly, pornography overall. These laws vary greatly between jurisdictions.
**Phosphatidylinositol deacylase**
Phosphatidylinositol deacylase:
The enzyme phosphatidylinositol deacylase (EC 3.1.1.52) catalyzes the reaction
1-phosphatidyl-D-myo-inositol + H2O ⇌ 1-acylglycerophosphoinositol + a carboxylate
This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is 1-phosphatidyl-D-myo-inositol 2-acylhydrolase. Other names in common use include phosphatidylinositol phospholipase A2, and phospholipase A2.
**Lenos Trigeorgis**
Lenos Trigeorgis:
Lenos Trigeorgis is the Bank of Cyprus Chair Professor of Finance in the School of Economics and Management, University of Cyprus. He is considered a leading authority on capital budgeting and strategy, having pioneered the field of real options, and having authored several books on related topics.
He has taught at universities including Boston University, MIT, Columbia University, UC Berkeley, London Business School, University of Chicago, and Durham University. He has published in numerous journals, and serves on the editorial boards of several journals. He is also President of the Real Options Group (ROG), a boutique strategy consulting firm focusing on real options valuation. Every year since 1997, ROG has organized the Annual International Conference on Real Options.
Prof. Trigeorgis is the author of Real Options (MIT Press, 1996) and co-authored Strategic Investment (Princeton University Press, 2004), and Competitive Strategy (MIT Press, 2012). With Michael Brennan, he edited Project Flexibility, Agency, and Competition (Oxford University Press, 1999), and, with Eduardo Schwartz, Real Options and Investment Under Uncertainty (MIT Press, 2001).
He received his D.B.A. from Harvard University in 1986.
**Insulin like 6**
Insulin like 6:
Insulin like 6 is a protein that in humans is encoded by the INSL6 gene.
Function:
The protein encoded by this gene contains a classical signature of the insulin superfamily and is significantly similar to relaxin and relaxin-like factor. This gene is preferentially expressed in testis. Its expression in testis is restricted to interstitial cells surrounding seminiferous tubules, which suggests a role in sperm development and fertilization.
**Solipsism**
Solipsism:
Solipsism (from Latin solus 'alone' and ipse 'self') is the philosophical idea that only one's mind is sure to exist. As an epistemological position, solipsism holds that knowledge of anything outside one's own mind is unsure; the external world and other minds cannot be known and might not exist outside the mind.
Varieties:
There are varying degrees of solipsism that parallel the varying degrees of skepticism:
Metaphysical
Metaphysical solipsism is a variety of solipsism. Based on a philosophy of subjective idealism, metaphysical solipsists maintain that the self is the only existing reality and that all other realities, including the external world and other persons, are representations of that self, and have no independent existence. There are several versions of metaphysical solipsism, such as Caspar Hare's egocentric presentism (or perspectival realism), in which other people are conscious, but their experiences are simply not present.
Epistemological
Epistemological solipsism is the variety of idealism according to which only the directly accessible mental contents of the solipsistic philosopher can be known. The existence of an external world is regarded as an unresolvable question rather than actually false. Further, one also cannot be certain as to what extent the external world exists independently of one's mind. For instance, it may be that a God-like being controls the sensations received by the mind, making it appear as if there is an external world when most of it (excluding the God-like being and oneself) is false. However, the point remains that epistemological solipsists consider this an "unresolvable" question.
Methodological
Methodological solipsism is an agnostic variant of solipsism. It exists in opposition to the strict epistemological requirements for "knowledge" (e.g. the requirement that knowledge must be certain). It still maintains that any induction is fallible. Methodological solipsism sometimes goes even further, holding that even what we perceive as the brain is actually part of the external world, for it is only through our senses that we can see or feel it. Only the existence of thoughts is known for certain.
Methodological solipsists do not intend to conclude that the stronger forms of solipsism are actually true. They simply emphasize that justifications of an external world must be founded on indisputable facts about their own consciousness. The methodological solipsist believes that subjective impressions (empiricism) or innate knowledge (rationalism) are the sole possible or proper starting point for philosophical construction. Often methodological solipsism is not held as a belief system, but rather used as a thought experiment to assist skepticism (e.g. Descartes' Cartesian skepticism).
Main points:
Denial of material existence, in itself, does not constitute solipsism. Philosophers try to build knowledge on more than an inference or analogy. The failure of Descartes' epistemological enterprise brought to popularity the idea that all certain knowledge may go no further than "I think; therefore I exist" without providing any real details about the nature of the "I" that has been proven to exist.
The theory of solipsism also merits close examination because it relates to three widely held philosophical presuppositions, each itself fundamental and wide-ranging in importance:
One's most certain knowledge is the content of one's own mind—my thoughts, experiences, affects, etc.
There is no conceptual or logically necessary link between the mental and the physical—between, for example, the occurrence of certain conscious experiences or mental states and the "possession" and behavioral dispositions of a "body" of a particular kind.
The experience of a given person is necessarily private to that person.
To expand on the second point, the conceptual problem here is that the previous point assumes that mind or consciousness (which are attributes) can exist independently of some entity having those attributes, i.e., that an attribute of an existent can exist apart from the existent itself. If one admits to the existence of an independent entity (e.g., the brain) having that attribute, the door is open to an independent reality. (See Brain in a vat.) Some people hold that, while it cannot be proven that anything independent of one's mind exists, the point that solipsism makes is irrelevant. This is because, whether the world as we perceive it exists independently or not, we cannot escape this perception; hence it is best to act assuming that the world is independent of our minds. (See Falsifiability and testability below.) However, being aware simply acknowledges existence; it does not identify the actual creations until they are observed by the user.
History:
Origins of solipsist thought are found in ancient Greece and in later thinkers such as Thomas Hobbes and Descartes.
Gorgias
Solipsism was first recorded by the Greek presocratic sophist Gorgias (c. 483–375 BC), who is quoted by the Roman sceptic Sextus Empiricus as having stated:
Nothing exists.
Even if something exists, nothing can be known about it.
Even if something could be known about it, knowledge about it cannot be communicated to others.
Much of the point of the sophists was to show that objective knowledge was a literal impossibility.
History:
Descartes
The foundations of solipsism are in turn the foundations of the view that the individual's understanding of any and all psychological concepts (thinking, willing, perceiving, etc.) is accomplished by making an analogy with their own mental states; i.e., by abstraction from inner experience. And this view, or some variant of it, has been influential in philosophy since Descartes elevated the search for incontrovertible certainty to the status of the primary goal of epistemology, whilst also elevating epistemology to "first philosophy".
Berkeley:
George Berkeley's arguments against materialism in favour of idealism provide the solipsist with a number of arguments not found in Descartes. While Descartes defends ontological dualism, thus accepting the existence of a material world (res extensa) as well as immaterial minds (res cogitans) and God, Berkeley denies the existence of matter but not minds, of which God is one.
Relation to other ideas:
Idealism and materialism:
One of the most fundamental debates in philosophy concerns the "true" nature of the world—whether it is some ethereal plane of ideas or a reality of atomic particles and energy. Materialism posits a real "world out there", as well as in and through us, that can be sensed—seen, heard, tasted, touched and felt, sometimes with prosthetic technologies corresponding to human sensing organs. (Materialists do not claim that human senses or even their prosthetics can, even when collected, sense the totality of the universe; simply that they collectively cannot sense what cannot in any way be known to us.) Materialists do not find this a useful way of thinking about the ontology and ontogeny of ideas, but we might say that from a materialist perspective pushed to a logical extreme communicable to an idealist, ideas are ultimately reducible to a physically communicated, organically, socially and environmentally embedded 'brain state'. While reflexive existence is not considered by materialists to be experienced on the atomic level, the individual's physical and mental experiences are ultimately reducible to the unique tripartite combination of environmentally determined, genetically determined, and randomly determined interactions of firing neurons and atomic collisions.
For materialists, ideas have no primary reality as essences separate from our physical existence. From a materialist perspective, ideas are social (rather than purely biological), and formed and transmitted and modified through the interactions between social organisms and their social and physical environments. This materialist perspective informs scientific methodology, insofar as that methodology assumes that humans have no access to omniscience and that therefore human knowledge is an ongoing, collective enterprise that is best produced via scientific and logical conventions adjusted specifically for material human capacities and limitations. Modern idealists believe that the mind and its thoughts are the only true things that exist. This is the reverse of what is sometimes called "classical idealism" or, somewhat confusingly, "Platonic idealism" due to the influence of Plato's theory of forms (εἶδος eidos or ἰδέα idea) which were not products of our thinking. The material world is ephemeral, but a perfect triangle or "beauty" is eternal. Religious thinking tends to be some form of idealism, as God usually becomes the highest ideal (such as neoplatonism). On this scale, solipsism can be classed as idealism. Thoughts and concepts are all that exist, and furthermore, only the solipsist's own thoughts and consciousness exist. The so-called "reality" is nothing more than an idea that the solipsist has (perhaps unconsciously) created.
Cartesian dualism:
There is another option: the belief that both ideals and "reality" exist. Dualists commonly argue that the distinction between the mind (or 'ideas') and matter can be proven by employing Leibniz's principle of the identity of indiscernibles, which states that if two things share exactly the same qualities, then they must be identical, as in indistinguishable from each other and therefore one and the same thing. Dualists then attempt to identify attributes of mind that are lacked by matter (such as privacy or intentionality) or vice versa (such as having a certain temperature or electrical charge). One notable application of the identity of indiscernibles was by René Descartes in his Meditations on First Philosophy. Descartes concluded that he could not doubt the existence of himself (the famous cogito ergo sum argument), but that he could doubt the (separate) existence of his body. From this, he inferred that the person Descartes must not be identical to the Descartes body since one possessed a characteristic that the other did not: namely, it could be known to exist. Solipsism agrees with Descartes in this aspect, and goes further: only things that can be known to exist for sure should be considered to exist. The Descartes body could only exist as an idea in the mind of the person Descartes. Descartes and dualism aim to prove the actual existence of reality as opposed to a phantom existence (as well as the existence of God in Descartes' case), using the realm of ideas merely as a starting point, but solipsism usually finds those further arguments unconvincing. The solipsist instead proposes that their own unconscious is the author of all seemingly "external" events from "reality".
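The principle of the identity of indiscernibles used in the dualist argument can be stated formally; the rendering below, in second-order logic, is a standard textbook formulation rather than a quotation from any of the authors discussed here:

```latex
% Identity of indiscernibles (second-order logic):
% if x and y share every property, they are one and the same thing.
\forall x\,\forall y\,\bigl[\,\forall P\,\bigl(P(x) \leftrightarrow P(y)\bigr) \rightarrow x = y\,\bigr]
% Contrapositive, as applied by Descartes: if some property holds of
% the mind but not of the body, mind and body are distinct.
\exists P\,\bigl(P(x) \land \lnot P(y)\bigr) \rightarrow x \neq y
```

Descartes' argument instantiates the contrapositive: the property "can be known (by me) to exist" is claimed to hold of the person but not of the body, from which their distinctness is inferred.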
Philosophy of Schopenhauer:
The World as Will and Representation is the central work of Arthur Schopenhauer. Schopenhauer saw the human will as our one window to the world behind the representation, the Kantian thing-in-itself. He believed, therefore, that we could gain knowledge about the thing-in-itself, something Kant said was impossible, since the rest of the relationship between representation and thing-in-itself could be understood by analogy as the relationship between human will and human body.
Idealism:
The idealist philosopher George Berkeley argued that physical objects do not exist independently of the mind that perceives them. An item truly exists only as long as it is observed; otherwise, it is not only meaningless but simply nonexistent. Berkeley does attempt to show things can and do exist apart from the human mind and our perception, but only because there is an all-encompassing Mind in which all "ideas" are perceived – in other words, God, who observes all. Solipsism agrees that nothing exists outside of perception, but would argue that Berkeley falls prey to the egocentric predicament – he can only make his own observations, and thus cannot be truly sure that this God or other people exist to observe "reality". The solipsist would say it is better to disregard the unreliable observations of alleged other people and rely upon the immediate certainty of one's own perceptions.
Rationalism:
Rationalism is the philosophical position that truth is best discovered by the use of reasoning and logic rather than by the use of the senses (see Plato's theory of forms). Solipsism is also skeptical of sense-data.
Philosophical zombie:
The theory of solipsism crosses over with the theory of the philosophical zombie in that other seemingly conscious beings may actually lack true consciousness; instead, they only display traits of consciousness to the observer, who may be the only conscious being there is.
Falsifiability and testability:
Solipsism is not a falsifiable hypothesis as described by Karl Popper: there does not seem to be an imaginable disproof. According to Popper, a hypothesis that cannot be falsified is not scientific; yet a solipsist can still observe "the success of the sciences" (see also the no miracles argument). One critical test is nevertheless to consider the induction from experience that the externally observable world does not seem, at first approach, to be directly manipulable by mental energies alone. One can indirectly manipulate the world through the medium of the physical body, but it seems impossible to do so through pure thought (psychokinesis). It might be argued that if the external world were merely a construct of a single consciousness, i.e. the self, it should follow that the external world is somehow directly manipulable by that consciousness, and if it is not, then solipsism is false. An argument against this holds that the test is circular and incoherent: it assumes at the outset that the "construct of a single consciousness" is something false, and then attempts to manipulate the external world it has just assumed to be false. That this attempt fails does not disprove solipsism; it is simply poor reasoning by the standards of pure, idealized logic. This is why David Deutsch argues that when other scientific methods are used alongside logic, solipsism is "indefensible", including by appeal to the simplest explanation: "If, according to the simplest explanation, an entity is complex and autonomous, then that entity is real." The method of the typical scientist is naturalist: they first assume that the external world exists and can be known. But the scientific method, in the sense of a predict-observe-modify loop, does not require the assumption of an external world.
A solipsist may perform a psychological test on themselves to discern the nature of the reality in their mind; however, David Deutsch uses this fact to counter-argue that the "outer parts" of the solipsist behave independently, and are therefore independent of the narrowly defined (conscious) self. A solipsist's investigations may not be proper science, however, since they would not include the co-operative and communitarian aspects of scientific inquiry that normally serve to diminish bias.
Minimalism:
Solipsism is a form of logical minimalism. Many people are intuitively unconvinced of the nonexistence of the external world from the basic arguments of solipsism, but a solid proof of its existence is not available at present. The central assertion of solipsism rests on the nonexistence of such a proof, and strong solipsism (as opposed to weak solipsism) asserts that no such proof can be made. In this sense, solipsism is logically related to agnosticism in religion: the distinction between believing you do not know, and believing you could not have known.
However, minimality (or parsimony) is not the only logical virtue. A common misapprehension of Occam's razor has it that the simpler theory is always the best. In fact, the principle is that the simpler of two theories of equal explanatory power is to be preferred. In other words: additional "entities" can pay their way with enhanced explanatory power. So the naturalist can claim that, while their world view is more complex, it is more satisfying as an explanation.
Solipsism in infants:
Some developmental psychologists believe that infants are solipsistic, and that eventually children infer that others have experiences much like theirs and reject solipsism.
Hinduism:
The earliest reference to solipsism is found in the ideas of Hindu philosophy in the Brihadaranyaka Upanishad, dated to the early 1st millennium BC. The Upanishad holds the mind to be the only god, and all actions in the universe are thought to be a result of the mind assuming infinite forms. After the development of distinct schools of Indian philosophy, the Advaita Vedanta and Samkhya schools are thought to have originated concepts similar to solipsism.
Advaita Vedanta:
Advaita is one of the six most known Hindu philosophical systems and literally means "non-duality". Its first great consolidator was Adi Shankaracharya, who continued the work of some of the Upanishadic teachers, and that of his teacher's teacher Gaudapada. By using various arguments, such as the analysis of the three states of experience—wakefulness, dream, and deep sleep—he established the singular reality of Brahman, in which Brahman, the universe and the Atman or the Self were one and the same.
One who sees everything as nothing but the Self, and the Self in everything one sees, such a seer withdraws from nothing.
For the enlightened, all that exists is nothing but the Self, so how could any suffering or delusion continue for those who know this oneness? The concept of the Self in the philosophy of Advaita could be interpreted as solipsism. However, the theological definition of the Self in Advaita protects it from true solipsism as found in the West. Similarly, the Vedantic text Yogavasistha escapes the charge of solipsism because the real "I" is thought to be nothing but the absolute whole looked at through a particular unique point of interest. It is mentioned in the Yoga Vasistha that "...according to them (we can safely assume that 'them' refers to present-day solipsists) this world is mental in nature. There is no reality other than the ideas of one's own mind. This view is incorrect, because the world cannot be the content of an individual's mind. If it were so, an individual would have created and destroyed the world according to his whims. This theory is called atma khyati – the pervasion of the little self (intellect)." (Yoga Vasistha, Nirvana Prakarana, Uttarardha, Volume 6, p. 107, by Swami Jyotirmayananda.)
Samkhya and Yoga:
Samkhya philosophy, which is sometimes seen as the basis of Yogic thought, adopts a view that matter exists independently of individual minds. Representation of an object in an individual mind is held to be a mental approximation of the object in the external world. Therefore, Samkhya chooses representational realism over epistemological solipsism. Having established this distinction between the external world and the mind, Samkhya posits the existence of two metaphysical realities: Prakriti (matter) and Purusha (consciousness).
Buddhism:
Some interpretations of Buddhism assert that external reality is an illusion, and sometimes this position is [mis]understood as metaphysical solipsism. Buddhist philosophy, though, generally holds that the mind and external phenomena are both equally transient, and that they arise from each other. The mind cannot exist without external phenomena, nor can external phenomena exist without the mind. This relation is known as "dependent arising" (pratityasamutpada).
The Buddha stated, "Within this fathom long body is the world, the origin of the world, the cessation of the world and the path leading to the cessation of the world". Whilst not rejecting the occurrence of external phenomena, the Buddha focused on the illusion created within the mind of the perceiver by the process of ascribing permanence to impermanent phenomena, satisfaction to unsatisfying experiences, and a sense of reality to things that were effectively insubstantial.
Mahayana Buddhism also challenges the idea that one can experience an 'objective' reality independent of individual perceiving minds.
From the standpoint of Prasangika (a branch of Madhyamaka thought), external objects do exist, but are devoid of any type of inherent identity: "Just as objects of mind do not exist [inherently], mind also does not exist [inherently]". In other words, even though a chair may physically exist, individuals can only experience it through the medium of their own mind, each with their own literal point of view. Therefore, an independent, purely 'objective' reality could never be experienced.
The Yogacara (sometimes translated as "Mind only") school of Buddhist philosophy contends that all human experience is constructed by mind. Some later representatives of one Yogacara subschool (Prajnakaragupta, Ratnakīrti) propounded a form of idealism that has been interpreted as solipsism. A view of this sort is contained in the 11th-century treatise of Ratnakirti, "Refutation of the existence of other minds" (Santanantara dusana), which provides a philosophical refutation of external mind-streams from the Buddhist standpoint of ultimate truth (as distinct from the perspective of everyday reality). In addition to this, the Bardo Thodol, Tibet's famous book of the dead, repeatedly states that all of reality is a figment of one's perception, although this occurs within the "Bardo" realm (post-mortem). For instance, within the sixth part of the section titled "The Root Verses of the Six Bardos", there appears the following line: "May I recognize whatever appeareth as being mine own thought-forms"; there are many lines in a similar vein.
**Sarah Pallas**
Sarah Pallas:
Sarah L. Pallas is an American neuroscientist and a professor of biology at the University of Massachusetts Amherst. She is a fellow of the American Association for the Advancement of Science (AAAS), known for her cross-modal plasticity work and map compression studies in the visual and auditory cortical pathways.
Background and education:
Sarah Pallas was born in Minnesota. Pallas completed her undergraduate education at the University of Minnesota and graduated with a B.S. in biology. From there, she attended Iowa State University to obtain a master's degree in zoology. For her Ph.D., she studied developmental plasticity in Ronald R. Hoy's and Barbara L. Finlay's labs at Cornell University. Pallas completed her postdoctoral training at the Massachusetts Institute of Technology in Mriganka Sur's lab in the Brain and Cognitive Sciences department. She started her own lab in 1992 at Baylor College of Medicine and moved to Georgia State University in 1997 along with her husband Paul Katz. Pallas was promoted to Full Professor in 2006 and was appointed a Fellow of the American Association for the Advancement of Science (AAAS). As of 2019, Pallas is an Associate Professor of Biology at the University of Massachusetts Amherst and runs her own lab studying neural development and plasticity in auditory and visual pathways. Her lab functions under the Neuroscience and Behavior and the Molecular and Cellular Biology graduate programs in the University of Massachusetts Amherst's Biology Department.
Career:
In her scientific career, Sarah Pallas has worked on a variety of projects in understanding the mechanisms behind neural development and plasticity. Her prior work includes cross-modal plasticity of visual and auditory inputs in ferrets. In addition, Pallas has also worked on topographic map compression in the superior colliculus (SC).
Awards and honors:
During her postdoctoral training at M.I.T., Pallas was awarded an NRSA fellowship from the National Institutes of Health and the National Eye Institute. In 2005, she received the Evolution Education Award from the National Association of Biology Teachers while working at Georgia State University. In addition, Pallas was appointed a fellow of the American Association for the Advancement of Science (AAAS) in 2012. Most recently, Pallas received the Alumni Achievement Award in 2020 from her alma mater, the University of Minnesota. Lastly, Pallas was awarded tenure in 2020 by the University of Massachusetts Amherst.
**Hydrocortisone/oxytetracycline**
Hydrocortisone/oxytetracycline:
Hydrocortisone/oxytetracycline (trade name Terra-Cortril) is a combination drug, consisting of the anti-inflammatory drug hydrocortisone and the antibiotic drug oxytetracycline.
It is indicated, for example, in steroid-responsive inflammatory ocular conditions where bacterial infection or a risk of bacterial ocular infection exists.
**Bioelectromagnetics**
Bioelectromagnetics:
Bioelectromagnetics, also known as bioelectromagnetism, is the study of the interaction between electromagnetic fields and biological entities. Areas of study include electromagnetic fields produced by living cells, tissues or organisms, the effects of man-made sources of electromagnetic fields like mobile phones, and the application of electromagnetic radiation toward therapies for the treatment of various conditions.
Biological phenomena:
Bioelectromagnetism is studied primarily through the techniques of electrophysiology. In the late eighteenth century, the Italian physician and physicist Luigi Galvani first recorded the phenomenon while dissecting a frog at a table where he had been conducting experiments with static electricity. Galvani coined the term animal electricity to describe the phenomenon, while contemporaries labeled it galvanism. Galvani and contemporaries regarded muscle activation as resulting from an electrical fluid or substance in the nerves. Short-lived electrical events called action potentials occur in several types of animal cells, which are called excitable cells, a category that includes neurons, muscle cells, and endocrine cells, as well as in some plant cells. These action potentials are used to facilitate inter-cellular communication and activate intracellular processes. The physiological phenomenon of the action potential is possible because voltage-gated ion channels allow the resting potential, caused by the electrochemical gradient on either side of the cell membrane, to discharge.
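The resting potential set by an electrochemical gradient can be illustrated with the Nernst equation, E = (RT/zF)·ln([ion]out/[ion]in). A minimal sketch, using approximate textbook potassium concentrations for a mammalian neuron (illustrative values, not from the source):

```python
import math

# Nernst equation: equilibrium potential for one ion species across a membrane.
# E = (R*T / (z*F)) * ln([ion]_outside / [ion]_inside)
R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol

def nernst_potential(c_out_mM, c_in_mM, z=1, T=310.0):
    """Equilibrium potential in volts at temperature T (kelvin)."""
    return (R * T) / (z * F) * math.log(c_out_mM / c_in_mM)

# Approximate mammalian K+ concentrations: ~5 mM outside, ~140 mM inside.
e_k = nernst_potential(5.0, 140.0)
print(f"E_K = {e_k * 1000:.0f} mV")   # close to the neuronal resting potential
```

Because [K+] is much higher inside the cell, the computed potential is negative (roughly −90 mV at body temperature), which is why the resting membrane potential sits near the potassium equilibrium potential.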
Several animals are suspected to have the ability to sense electromagnetic fields; for example, several aquatic animals have structures potentially capable of sensing changes in voltage caused by a changing magnetic field, while migratory birds are thought to use magnetoreception in navigation.
Bioeffects of electromagnetic radiation:
Most of the molecules in the human body interact weakly with electromagnetic fields in the radio frequency or extremely low frequency bands. One such interaction is absorption of energy from the fields, which can cause tissue to heat up; more intense fields will produce greater heating. This can lead to biological effects ranging from muscle relaxation (as produced by a diathermy device) to burns. Many nations and regulatory bodies like the International Commission on Non-Ionizing Radiation Protection have established safety guidelines to limit EMF exposure to a non-thermal level. This can be defined either as heating only to the point where the excess heat can be dissipated, or as a fixed increase in temperature, such as 0.1 °C, that is not detectable with current instruments. However, biological effects have been shown to be present for these non-thermal exposures; various mechanisms have been proposed to explain them, and there may be several mechanisms underlying the differing phenomena observed.
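The heating described above can be given a rough scale with the specific absorption rate (SAR): ignoring heat loss through perfusion and conduction, the initial temperature rise is dT/dt ≈ SAR/c, where c is the tissue's specific heat capacity. The SAR value and heat capacity below are illustrative assumptions, not figures from the source:

```python
# Worst-case estimate of initial tissue heating from RF absorption,
# neglecting all heat dissipation: dT/dt = SAR / c.
C_TISSUE = 3500.0   # approximate specific heat of soft tissue, J/(kg*K)

def temperature_rise(sar_w_per_kg, seconds):
    """Upper-bound temperature rise in kelvin with no heat loss."""
    return sar_w_per_kg * seconds / C_TISSUE

# e.g. a localized SAR of 2 W/kg sustained for one minute:
print(f"dT = {temperature_rise(2.0, 60.0):.3f} K")
```

Even this no-dissipation bound gives only a few hundredths of a kelvin per minute at 2 W/kg, which is why guideline-level exposures are characterized as non-thermal.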
Many behavioral effects at different intensities have been reported from exposure to magnetic fields, particularly with pulsed magnetic fields. The specific pulseform used appears to be an important factor for the behavioral effect seen; for example, a pulsed magnetic field originally designed for spectroscopic MRI, referred to as Low Field Magnetic Stimulation, was found to temporarily improve patient-reported mood in bipolar patients, while another MRI pulse had no effect. A whole-body exposure to a pulsed magnetic field was found to alter standing balance and pain perception in other studies. A strong changing magnetic field can induce electrical currents in conductive tissue such as the brain. Since the magnetic field penetrates tissue, it can be generated outside of the head to induce currents within, causing transcranial magnetic stimulation (TMS). These currents depolarize neurons in a selected part of the brain, leading to changes in the patterns of neural activity. In repeated pulse TMS therapy or rTMS, the presence of incompatible EEG electrodes can result in electrode heating and, in severe cases, skin burns. A number of scientists and clinicians are attempting to use TMS to replace electroconvulsive therapy (ECT) to treat disorders such as severe depression and hallucinations. Instead of one strong electric shock through the head as in ECT, a large number of relatively weak pulses are delivered in TMS therapy, typically at the rate of about 10 pulses per second. If very strong pulses at a rapid rate are delivered to the brain, the induced currents can cause convulsions much like in the original electroconvulsive therapy. Sometimes, this is done deliberately in order to treat depression, such as in ECT.
Effects of electromagnetic radiation on human health:
While health effects from extremely low frequency (ELF) electric and magnetic fields (0 to 300 Hz) generated by power lines, and radio/microwave frequencies (RF) (10 MHz - 300 GHz) emitted by radio antennas and wireless networks, have been well studied, the intermediate range (IR) (300 Hz to 10 MHz) has been studied far less. Direct effects of low power radiofrequency electromagnetism on human health have been difficult to prove, and documented life-threatening effects from radiofrequency electromagnetic fields are limited to high power sources capable of causing significant thermal effects and medical devices such as pacemakers and other electronic implants. However, many studies have been conducted with electromagnetic fields to investigate their effects on cell metabolism, apoptosis, and tumor growth. Electromagnetic radiation in the intermediate frequency range has found a place in modern medical practice for the treatment of bone healing and for nerve stimulation and regeneration. It is also approved as a cancer therapy in the form of Tumor Treating Fields, using alternating electric fields in the frequency range of 100–300 kHz. Since some of these methods involve magnetic fields that induce electric currents in biological tissues and others only involve electric fields, they are strictly speaking electrotherapies, although their modes of application with modern electronic equipment have placed them in the category of bioelectromagnetic interactions.
**Fast protein liquid chromatography**
Fast protein liquid chromatography:
Fast protein liquid chromatography (FPLC) is a form of liquid chromatography that is often used to analyze or purify mixtures of proteins. As in other forms of chromatography, separation is possible because the different components of a mixture have different affinities for two materials, a moving fluid (the mobile phase) and a porous solid (the stationary phase). In FPLC the mobile phase is an aqueous solution, or "buffer". The buffer flow rate is controlled by a positive-displacement pump and is normally kept constant, while the composition of the buffer can be varied by drawing fluids in different proportions from two or more external reservoirs. The stationary phase is a resin composed of beads, usually of cross-linked agarose, packed into a cylindrical glass or plastic column. FPLC resins are available in a wide range of bead sizes and surface ligands depending on the application. In the most common FPLC strategy, ion exchange, a resin is chosen such that the protein of interest will bind to it by a charge interaction while in buffer A (the running buffer) but become dissociated and return to solution in buffer B (the elution buffer). A mixture containing one or more proteins of interest is dissolved in 100% buffer A and pumped into the column. The proteins of interest bind to the resin while other components are carried out in the buffer. The total flow rate of the buffer is kept constant; however, the proportion of buffer B (the "elution" buffer) is gradually increased from 0% to 100% according to a programmed change in concentration (the "gradient"). At some point during this process each of the bound proteins dissociates and appears in the eluant. The eluant passes through two detectors which measure salt concentration (by conductivity) and protein concentration (by absorption of ultraviolet light at a wavelength of 280 nm).
As each protein is eluted, it appears in the eluant as a "peak" in protein concentration, and can be collected for further use. FPLC was developed and marketed in Sweden by Pharmacia in 1982, and was originally called fast performance liquid chromatography to contrast it with HPLC or high-performance liquid chromatography. FPLC is generally applied only to proteins; however, because of the wide choice of resins and buffers it has broad applications. In contrast to HPLC, the buffer pressure used is relatively low, typically less than 5 bar, but the flow rate is relatively high, typically 1-5 ml/min. FPLC can be readily scaled from analysis of milligrams of mixtures in columns with a total volume of 5 ml or less to industrial production of kilograms of purified protein in columns with volumes of many liters. When used for analysis of mixtures, the eluant is usually collected in fractions of 1-5 ml which can be further analyzed (for example, by matrix-assisted laser desorption/ionization (MALDI) mass spectrometry). When used for protein purification there may be only two collection containers: one for the purified product and one for waste.
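The gradient-elution scheme described above (buffer B ramped linearly from 0% to 100% while each bound protein desorbs at a characteristic %B) can be sketched as a toy calculation; the protein names and dissociation thresholds below are invented for illustration:

```python
# Toy model of linear gradient elution: %B rises from 0 to 100 over the run,
# and each bound protein elutes once %B reaches its (hypothetical) threshold.
flow_rate_ml_min = 1.0
gradient_minutes = 60.0

proteins = {            # protein -> %B at which it dissociates (illustrative)
    "protein_A": 15.0,
    "protein_B": 42.0,
    "protein_C": 78.0,
}

def elution_profile(proteins, gradient_minutes, flow_rate_ml_min):
    """Return each protein's elution time (min) and cumulative eluted volume (ml)."""
    out = {}
    for name, pct_b in sorted(proteins.items(), key=lambda kv: kv[1]):
        t = pct_b / 100.0 * gradient_minutes       # linear gradient
        out[name] = (t, t * flow_rate_ml_min)
    return out

for name, (t, vol) in elution_profile(proteins, gradient_minutes, flow_rate_ml_min).items():
    print(f"{name}: elutes at {t:.1f} min ({vol:.1f} ml)")
```

In a real run the elution %B depends on the resin, the buffer salts, and the protein's surface charge, and peaks have finite width; the point here is only the mapping from gradient position to elution time and volume.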
FPLC system components:
A typical laboratory FPLC consists of one or two high-precision pumps, a control unit, a column, a detection system and a fraction collector. Although it is possible to operate the system manually, the components are normally linked to a personal computer or, in older units, a microcontroller.
Pumps:
The majority of systems utilize two two-cylinder piston pumps, one for each buffer, combining the output of both in a mixing chamber. Some simpler systems use a single peristaltic pump which draws both buffers from separate reservoirs through a proportioning valve and mixing chamber. In either case the system allows the fraction of each buffer entering the column to be continuously varied. The flow rate can go from a few milliliters per minute in bench-top systems to liters per minute for industrial scale purifications. The wide flow range makes it suitable both for analytical and preparative chromatography.
Injection loop:
The injection loop is a segment of tubing of known volume which is filled with the sample solution before it is injected into the column. Loop volume can range from a few microliters to 50 ml or more.
Injection valve:
The injection valve is a motorized valve which links the mixer and sample loop to the column. Typically the valve has three positions: for loading the sample loop, for injecting the sample from the loop into the column, and for connecting the pumps directly to the waste line to wash them or change buffer solutions. The injection valve has a sample loading port through which the sample can be loaded into the injection loop, usually from a hypodermic syringe using a Luer-lock connection.
Column:
The column is a glass or plastic cylinder packed with beads of resin and filled with buffer solution. It is normally mounted vertically with the buffer flowing downward from top to bottom. A glass frit at the bottom of the column retains the resin beads in the column while allowing the buffer and dissolved proteins to exit.
Flow cell:
The eluant from the column passes through one or more flow cells to measure the concentration of protein in the eluant (by UV light absorption at 280 nm). The conductivity cell measures the buffer conductivity, usually in millisiemens/cm, which indicates the concentration of salt in the buffer. A flow cell which measures the pH of the buffer is also commonly included. Usually each flow cell is connected to a separate electronics module which provides power and amplifies the signal.
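The UV flow cell's absorbance reading converts to protein concentration via the Beer-Lambert law, A = ε·l·c, so c = A/(ε·l). The extinction coefficient, path length, and absorbance below are hypothetical values chosen for illustration:

```python
# Beer-Lambert law as applied to a UV (A280) flow-cell reading:
# absorbance A = epsilon * path_length * concentration, so c = A / (epsilon * l).
def protein_concentration(a280, epsilon_M_cm, path_cm):
    """Molar concentration from absorbance at 280 nm."""
    return a280 / (epsilon_M_cm * path_cm)

# Hypothetical protein: molar extinction coefficient 50,000 1/(M*cm),
# flow-cell path length 0.2 cm, measured A280 of 0.5:
c_molar = protein_concentration(0.5, 50_000.0, 0.2)
print(f"c = {c_molar * 1e6:.0f} uM")
```

Each protein has its own ε at 280 nm (dominated by tryptophan and tyrosine content), so the same absorbance trace corresponds to different molar concentrations for different proteins.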
Monitor/recorder:
The flow cells are connected to a display and/or recorder. On older systems this was a simple chart recorder; on modern systems a computer with hardware interface and display is used. This permits the experimenter to identify when peaks in protein concentration occur, indicating that specific components of the mixture are being eluted.
Fraction collector:
The fraction collector is typically a rotating rack that can be filled with test tubes or similar containers. It allows samples to be collected in fixed volumes, or can be controlled to direct specific fractions, detected as peaks of protein concentration, into separate containers.
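Fixed-volume fraction collection amounts to mapping cumulative eluted volume to a tube index, which makes it easy to work out which tubes span a detected peak. The fraction size and peak volumes below are illustrative:

```python
# Fixed-volume fraction collection: map a cumulative eluted volume to a tube.
def fraction_number(eluted_ml, fraction_ml):
    """0-based tube index for a given cumulative eluted volume."""
    return int(eluted_ml // fraction_ml)

# With 2 ml fractions, a peak eluting between 23 ml and 27 ml spans these tubes:
tubes = sorted({fraction_number(v, 2.0) for v in (23.0, 25.0, 27.0)})
print(tubes)   # tubes 11 through 13
```

Pooling exactly these tubes recovers the peak while excluding neighboring fractions, which is how fractions flagged by the UV trace are selected for further analysis.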
Many systems include various optional components. A filter may be added between the mixer and column to minimize clogging. In large FPLC columns the sample may be loaded into the column directly using a small peristaltic pump rather than an injection loop. When the buffer contains dissolved gas, bubbles may form as pressure drops where the buffer exits the column; these bubbles create artifacts if they pass through the flow cells. This may be prevented by degassing the buffers, e.g. with a degasser, or by adding a flow restrictor downstream of the flow cells to maintain a pressure of 1-5 bar in the eluant line.
FPLC columns:
The columns used in FPLC are large tubes (with inner diameters in the millimeter range) that contain small (micrometer-scale) particles or gel beads known as the stationary phase. The chromatographic bed is composed of the gel beads inside the column; the sample is introduced through the injector and carried into the column by the flowing solvent. The sample mixture separates as its components adhere to, or diffuse through, the gel at different rates. Columns used with an FPLC can separate macromolecules based on size, charge distribution (ion exchange), hydrophobicity, reverse-phase behavior or biorecognition (as with affinity chromatography). For ease of use, a wide range of pre-packed columns for techniques such as ion exchange, gel filtration (size exclusion), hydrophobic interaction, and affinity chromatography is available. FPLC differs from HPLC in that FPLC columns can only be used up to a maximum pressure of 3-4 MPa (435-580 psi). Thus, if the HPLC pump pressure can be limited accordingly, an FPLC column may also be used in an HPLC machine.
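As a quick check of the pressure figures above (1 MPa ≈ 145.04 psi, so 3-4 MPa is roughly 435-580 psi), a small sketch can verify whether a given pump pressure stays within an FPLC column's rating. The function name and the 4 MPa default limit are illustrative assumptions.

```python
# Sanity-check whether a pump's operating pressure is within an FPLC
# column's rating. Conversion factor: 1 MPa = 145.038 psi.

PSI_PER_MPA = 145.038

def within_column_rating(pump_psi, column_limit_mpa=4.0):
    """Return True if the pump pressure (psi) is below the column limit."""
    return pump_psi < column_limit_mpa * PSI_PER_MPA

print(round(3 * PSI_PER_MPA))   # 435 psi, the lower figure quoted above
print(within_column_rating(500))   # True: fine on a 4 MPa FPLC column
print(within_column_rating(2000))  # False: a typical HPLC pressure
```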
Optimizing protein purification:
Combinations of chromatographic methods can be used to purify a target molecule. The purpose of purifying proteins with FPLC is to deliver quantities of the target, at sufficient purity and in a biologically active state, to suit its further use. The quality of the end product varies depending on the type and amount of starting material, the efficiency of separation, and the selectivity of the purification resin. The ultimate goal of a given purification protocol is to deliver the required yield and purity of the target molecule in the quickest, cheapest, and safest acceptable way. The purity required ranges from that needed for basic analysis (SDS-PAGE or ELISA, for example), with only bulk impurities removed, to purity sufficient for structural analysis (NMR or X-ray crystallography), approaching >99% target molecule. Purity required can also mean pure enough that the biological activity of the target is retained. These demands can be used to determine the amount of starting material required to reach the experimental goal. If the starting material is limited and full optimization of the purification protocol cannot be performed, then a safe standard protocol requiring minimal adjustment and optimization is preferred. This may not be optimal with respect to experimental time, yield, and economy, but it will achieve the experimental goal. On the other hand, if there is enough starting material to develop a more complete protocol, the amount of work needed to reach the separation goal depends on the available sample information and the properties of the target molecule. Limits to the development of purification protocols often depend on the source of the substance to be purified, whether from natural sources (harvested tissues or organisms, for example), recombinant sources (such as prokaryotic or eukaryotic vectors in their respective expression systems), or totally synthetic sources.
No chromatographic technique provides a 100% yield of active material, and the overall yield depends on the number of steps in the purification protocol. By optimizing each step for its intended purpose and arranging the steps to minimize inter-step treatments, the number of steps can be kept to a minimum.
A typical multistep purification protocol starts with a preliminary capture step, which often utilizes ion exchange chromatography (IEC). The media (stationary phase) resin consists of beads, which range in size from large (good for fast flow rates and little to no sample clarification, at the expense of resolution) to small (for the best possible resolution, all other factors being equal). Short and wide column geometries are amenable to high flow rates, also at the expense of resolution, typically because of lateral diffusion of the sample on the column. For techniques such as size exclusion chromatography to be useful, very long, thin columns and minimal sample volumes (at most 5% of the column volume) are required. Hydrophobic interaction chromatography (HIC) can also be used for first and/or intermediate steps. Selectivity in HIC is independent of running pH, and descending salt gradients are used. For HIC, conditioning involves adding ammonium sulfate to the sample to match the buffer A concentration. If HIC is used before IEC, the ionic strength would have to be lowered to match that of buffer A for the IEC step by dilution, dialysis, or buffer exchange by gel filtration. This is why IEC is usually performed prior to HIC: the high-salt elution conditions of IEC are ideal for binding to HIC resins in the next purification step. Polishing is used to achieve the final level of purification required and is commonly performed on a gel filtration column. An extra intermediate purification step can be added, or the different steps can be optimized, to improve purity. This extra step usually involves another round of IEC under completely different conditions.
Although this is an example of a common purification protocol for proteins, the buffer conditions, flow rates, and resins used to achieve final goals can be chosen to cover a broad range of target proteins. This flexibility is imperative for a functional purification system as all proteins behave differently and often deviate from predictions.
**Injection mold construction**
Injection mold construction:
Injection mold construction is the process of creating molds that are used to perform injection molding operations using an injection molding machine. These are generally used to produce plastic parts using a core and a cavity.
Molds are designed as two-plate or three-plate molds, depending on the type of component to be manufactured. The two-plate mold requires a single daylight (mold opening), while the three-plate mold requires two daylights. Mold construction depends on the shape of the component, which determines the parting line selection, runner and gate selection, and component ejection system selection. The mold base size depends on the component size and the number of cavities planned per mold.
Design considerations:
Draft: Required in both the core and cavity for easy ejection of the finished component.
Shrinkage allowance: Depends on the shrinkage property of the material; determines the core and cavity size.
Cooling circuit: In order to reduce the cycle time, water circulates through holes drilled in both the core and cavity plates.
Ejection gap: The gap between the ejector plate face and the core back plate face must be large enough to allow the component to be fully removed from the mold.
Air vents: Remove gases entrapped between the core and cavity (usually a gap of less than 0.02 mm), because excessive gaps can result in flash defects.
Mold polishing: The core, cavity, runner and sprue should have a good surface finish and should be polished along the material flow direction.
Mold filling: The gate should be placed such that the component fills from the thicker section to the thinner section.
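The shrinkage allowance mentioned above can be illustrated with a simple calculation: the cavity is cut oversize so that the part reaches its nominal dimension after the plastic cools and shrinks. The 2% shrinkage figure below is hypothetical; actual rates come from the resin datasheet.

```python
# Illustrative sketch of applying a shrinkage allowance:
# cavity dimension = nominal part dimension * (1 + shrinkage fraction).
# The default 0.02 (2%) shrinkage is a made-up example value.

def cavity_dimension(part_dim_mm, shrinkage=0.02):
    """Oversized cavity dimension (mm) for a given nominal part dimension."""
    return part_dim_mm * (1.0 + shrinkage)

print(round(cavity_dimension(100.0), 3))  # 102.0 mm cavity for a 100 mm part
```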
Elements:
Register ring—Aligns injection molding machine screws with the injection mold. Usually made of case-hardened, medium carbon steel material (CHMCS).
Sprue bushing — The bush has a taper hole of 3° to 5° and is usually made of CHMCS. The material enters the mold through the sprue bush.
Top plate—It is used to clamp the top half of the mold to the moving half of the molding machine and is usually made of mild steel.
Cavity plate—The plate used to create a cavity (via a gap) that will be filled with the plastic material and form the plastic component. Usually made of mild steel.
Core plate—The core plate projects into the cavity plate and creates hollow portions in the plastic component. It is usually made of pre-hardened P20 hot-die steel, requiring no further hardening after core machining.
Sprue puller bushing — The sprue puller bush is used to accommodate the sprue puller pin; usually made of CHMCS.
Sprue puller pin—The sprue puller pin pulls the sprue from the sprue bush. It is usually made of CHMCS.
Core back plate—It holds the core insert in place and acts as a "stiffener". It is usually made of mild steel.
Guide pillar and guide bushing — The guide pillar and guide bush align the fixed and moving halves of a mold on each cycle. They are usually made of case-hardened medium carbon steel for higher hardness.
Ejector guide pillar and guide bush—These components ensure the alignment of the ejector assembly so that the ejector pins are not damaged. They are usually made of CHMCS. The guide pillar typically has higher hardness than the guide bush.
Ejector plate—This holds the ejector pins and is usually made of mild steel.
Ejector back plate—It prevents the ejector pins from disengaging; usually of mild steel material.
Heel blocks—Provides a gap for the ejector assembly, so that the finished component ejects from the core. Usually made of mild steel.
Bottom plate—Clamps the bottom half of the mold with the fixed half of the molding machine; usually made of mild steel.
Centering bush—Provides alignment between the bottom plate and the core back plate; usually made of CHMCS.
Rest button—Supports the ejection assembly and reduces the area of contact between the ejection assembly and the bottom plate. It is most helpful when cleaning the injection molding machine, which is essential to ensure an "unmarked" finished component: small foreign particles sticking to the bottom plate may cause ejection pins to project out from the core and leave ejection pin marks on the component. The core and cavity will usually be made of P20, En 30B, S7, H13, or 420SS grade steel. The core is the male part, which forms the internal shape of the molding. The cavity is the female part, which forms the external shape of the molding.
Gate types The two main gate systems are manually trimmed gates and automatically trimmed gates. The following examples show where they are used: Sprue gate: Used for large components; the gate mark is visible on the component and no runner is required, e.g. bucket molding (a cylindrical gate mark on the back side is visible and can be felt).
Edge gate: Most suitable for square, rectangular components.
Ring gate: Most suitable for cylindrical components, to eliminate weld line defects.
Diaphragm gate: Most suitable for hollow, cylindrical components.
Tab gate: Most suitable for solid, thick components.
Submarine gate: Used when automatic de-gating is required to reduce cycle time.
Reverse taper sprue gate (pin gate): Generally used in three-plate molds.
Winkle gate: Mainly used for electronics products; the gate feeds the material from under the core side.
Ejection system types Pin ejection—Cylindrical pins eject the finished component. In the case of square and rectangular components, a minimum of four pins (at the four corners) is required. In the case of cylindrical components, three equidistant pins (i.e. 120° apart) are required. The number of pins required may vary based on the component profile, size and area of ejection. This ejection system leaves visible ejection marks on the finished component.
Sleeve ejection—This type of ejection is preferred for (and limited to) cylindrical cores, where the core is fixed in the bottom plate. In this system, the ejection assembly consists of a sleeve that slides over the core and ejects the component. No visible ejection marks are apparent on the component.
Stripper plate ejection—This ejection is preferred for components with larger areas. This system calls for an additional plate (stripper) between the core and cavity plates. To avoid flash, the stripper plate remains in contact with the cavity plate and a gap is maintained between the cavity and core plate. Visible ejection marks are usually not noted on components.
Blade ejection—This type of ejection is preferred for thin, rectangular cross sections. Rectangular blades are inserted in cylindrical pins (or cylindrical pins are machined to rectangular cross sections) to create an appropriate ejection length for the component. For easy accommodation of the ejection pin head, a counter bore is provided in the ejection plates.
By rotation of core (internal threaded components)—Used for threaded components, where the component is automatically ejected by rotating the core insert.
Air ejection—Used to actuate the ejection pin fitted in the core using compressed air. The ejection pin is retracted using a spring.
Alignment Injection molds are designed as two halves, a core half and a cavity half, in order to eject the component. On each cycle, the core and cavity are aligned to ensure quality. This alignment is ensured by the guide pillars and guide bushes. Usually, four guide pillars and guide bushes are used, of which three pillars are of one diameter and one is of a different diameter, to force the plates into a single configuration (based on the poka-yoke [mistake-proofing] concept). The register ring has an interference fit in the top plate and a transition fit with the injection molding machine platen, aligning the machine platen and the top plate.
Mold cooling Desirable attributes of the mold cooling design include:
Constant mold temperature for uniform quality
Reduced cycle time for productivity
Improved surface finish without defects
Avoiding warpage by uniform mold surface temperature (warpage is caused by nonuniform cooling)
Long mold life
Methods: Cavity plate cooling by drilled holes—The cavity plate is drilled around the cavity insert and plugged with copper or aluminum taper plugs at the ends of the openings. Using pipes connected at the inlet and outlet ports, water is circulated to cool the mold.
Direct cooling of core insert (baffle system)—The core is drilled while keeping sufficient wall thickness. A baffle plate is located within the drilled hole, dividing it into two halves, which allows the water to contact the maximum area of the core so cooling can take place.
Annular cooling of cavity insert—A circular groove is made on the core for water circulation. To prevent leakage, O-rings are used above and below the cooling channel.
In a mold, the core is on the moving side and the cavity is on the fixed side.
**Travelling exhibition**
Travelling exhibition:
A travelling exhibition, also referred to as a "travelling exhibit" or a "touring exhibition", is a type of exhibition that is presented at more than one venue.
Temporary exhibitions can bring together objects that might be dispersed among several collections, to reconstruct an original context such as an artist's career or a patron's collection, or to propose connections – perhaps the result of recent research – which give new insights or a different way of understanding items in museum collections. The whole exhibition, usually with associated services, including insurance, shipping, storage, conservation, mounting, set up, etc., can then be loaned to one or more venues to lengthen the life of the exhibition and to allow the widest possible audiences – regionally, nationally or internationally – to experience these objects and the stories they contain. Such collaborations can add interest to museums where displays of permanent collections might change only slowly, helping to provide fresh interpretations or more complete stories and attract new audiences. They also provide fresh ideas and breathing space for organisations which have exhibition spaces but lack permanent collections.
To have more than one location for the same exhibition can benefit the organiser because it can then share a part of the production costs among the venues, so museums and galleries frequently use touring as a cost-efficient way of promoting access to their collections. For organisers and their venues, touring exhibitions are important for sharing ideas (for example, promoting techniques for providing for visitors with visual impairments or producing displays which examine current or topical issues) and materials (especially objects that might not be seen in public frequently or even shown together), as well as resources (human as well as financial). Touring is a way of sharing with like-minded institutions and of achieving economies of scale which allow more ambitious projects to happen.
Travelling exhibitions are often supported by governmental organizations to promote access to knowledge and materials that might not be available locally. To acknowledge the importance of travelling exhibitions, in 1983 the International Council of Museums (ICOM) established the International Committee for Exhibition Exchange (ICEE) as a forum to discuss the different aspects of exhibition development, circulation and exchange.
Examples of travelling exhibitions In celebration of the 200th anniversary of the birth of its founder, Louis Vuitton, the brand's "200 Trunks, 200 Visionaries: The Exhibition" has gone on an international tour, starting in Asnières-sur-Seine, France, and travelling since then to Singapore, Beverly Hills and New York. The exhibition displays the work of 200 visionaries across many different fields, ranging from art to science, inspired by the brand's iconic trunk.
**Studia Quaternaria**
Studia Quaternaria:
Studia Quaternaria is a peer-reviewed open access scholarly journal publishing research articles on Quaternary science. It is published by the Polish Academy of Sciences (PAN). The current editor-in-chief is Leszek Marks. The journal changed its name from Quaternary Studies in Poland to the current title in 2000.
Abstracting and indexing:
The journal is abstracted and indexed in:
**Hypotaurocyamine kinase**
Hypotaurocyamine kinase:
In enzymology, a hypotaurocyamine kinase (EC 2.7.3.6) is an enzyme that catalyzes the chemical reaction: ATP + hypotaurocyamine ⇌ ADP + Nω-phosphohypotaurocyamine. Thus, the two substrates of this enzyme are ATP and hypotaurocyamine, whereas its two products are ADP and Nω-phosphohypotaurocyamine.
This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with a nitrogenous group as acceptor. The systematic name of this enzyme class is ATP:hypotaurocyamine N-phosphotransferase.
**Preoptic anterior hypothalamus**
Preoptic anterior hypothalamus:
POAH is an acronym for preoptic anterior hypothalamus, the part of the brain that senses core body temperature and regulates it to about 37 °C (98.6 °F).
**Con Kolivas**
Con Kolivas:
Con Kolivas is a Greek-Australian anaesthetist. He has worked as a computer programmer on the Linux kernel and on the development of the cryptographic currency mining software CGMiner. His Linux contributions include patches for the kernel to improve its desktop performance, particularly reducing I/O impact.
Linux:
Kolivas is most notable for his work with CPU scheduling, most significantly his implementation of "fair scheduling", which inspired Ingo Molnár to develop his Completely Fair Scheduler, as a replacement for the earlier O(1) scheduler, crediting Kolivas in his announcement. Kolivas developed several CPU schedulers such as the Staircase in 2004, then Rotating Staircase Deadline (RSDL), and subsequently Staircase Deadline (SD) schedulers to address interactivity concerns of the Linux kernel with respect to desktop computing. Additionally, he has written a "swap prefetch" patch, which allows processes to respond quickly after the operating system has been idle for some time and their working sets have been swapped out. Many of his experimental "-CK" patches, such as his prefetching and scheduling code, did not get merged with the official Linux kernel.
In 2007, Kolivas announced in an email that he would cease developing for the Linux kernel. Discussing his reasons in an interview, he expressed frustration with aspects of the mainline kernel development process, which he felt did not give sufficient priority to desktop interactivity, in addition to hacking taking a toll on his health, work and family. He has also written a benchmarking tool called ConTest that can be used to compare the performance of different kernel versions. On 31 August 2009, Kolivas posted a new scheduler called BFS (Brain Fuck Scheduler). It is designed for desktop use and to be very simple (hence it may not scale well to machines with many CPU cores). Kolivas did not intend to get it merged into the mainline kernel. He has since retired BFS in favour of MuQSS, a rewritten implementation of the same concept.
CGMiner:
On 13 July 2011, Kolivas introduced a new piece of software for "windows, linux, OSX and other" called CGMiner, which is used for mining cryptocurrencies such as bitcoin and Litecoin.
**Valsalva device**
Valsalva device:
The Valsalva device is a device used in spacesuits, some full face diving masks and diving helmets to allow astronauts and commercial divers to equalize the pressure in their ears by performing the Valsalva maneuver inside the suit without using their hands to block their nose. Astronaut Drew Feustel has described it as "a spongy device called a Valsalva that is typically used to block the nose in case a pressure readjustment is needed." In November 2011 ESA astronaut Samantha Cristoforetti posted on Twitter a picture of her demonstrating the use of the Valsalva device in the Sokol space suit during suit pressurization. The Valsalva device has also been used for other purposes. On 25 May 2011, NASA reported that during the second spacewalk of Space Shuttle mission STS-134, Feustel was able to clear tears from his eye by wiggling down far enough in his Extravehicular Mobility Unit to use the Valsalva device in his suit as a sponge to clear up tears caused when anti-fogging agent (liquid soap) came free from the inside of the helmet and floated into his eye.
On 3 April 2001, due to a missing Valsalva device in his suit, astronaut Leland D. Melvin suffered an ear injury while training in the Neutral Buoyancy Laboratory at Johnson Space Center. On 16 July 2013, EVA-23 was cut short as the helmet of Luca Parmitano's Extravehicular Mobility Unit started filling with water. After the spacewalk, during rapid repressurization of the airlock, the Valsalva device in Luca's helmet failed and came apart when he tried to use it, as it was not waterproof.
**Call of Duty: Modern Warfare 3 downloadable content**
Call of Duty: Modern Warfare 3 downloadable content:
Call of Duty: Modern Warfare 3 is a 2011 first-person shooter video game, jointly developed by Infinity Ward and Sledgehammer Games and published by Activision. The game was released worldwide in November 2011 for Microsoft Windows, the Xbox 360, PlayStation 3, Wii, and OS X. It is the sequel to Call of Duty: Modern Warfare 2 (2009), serving as the third and final installment in the original Modern Warfare trilogy and the eighth Call of Duty installment overall. A separate version for the Nintendo DS was developed by n-Space, while Treyarch developed the game's Wii port. In Japan, Square Enix published the game with a separate subtitled and dubbed version. The game's campaign follows Modern Warfare 2 and begins right after the events of its final mission. Similar to Modern Warfare 2, it is centered around Task Force 141, which contains Captain Price, Soap MacTavish, and a newly introduced playable character, Yuri. Alongside the Delta Force and Special Air Service, they hunt Vladimir Makarov (the main antagonist of the trilogy), a Russian terrorist who leads the Russian Ultranationalist party. He leads several terror attacks across Europe, triggering a large-scale war between the Ultranationalists and friendly forces. For the game's multiplayer mode, new mode types and killstreak choices were brought in. Improvements were also made to the mode that solved issues that appeared in Modern Warfare 2.
Using an enhanced version of Modern Warfare 2's IW engine, development for the game began in 2010 with more than one developer studio. Prior to development, Infinity Ward co-founders Jason West and Vince Zampella left the company to form Respawn Entertainment. Other members had been fired or had left the company following the duo's departure. Sledgehammer Games had joined the Modern Warfare 3 development force, with Raven Software also developing the game's multiplayer mode. Following a large leak containing detailed information about the game, multiple teaser trailers were released, with each showcasing a location featured in the game's campaign, leading up to a full reveal.
Modern Warfare 3 received positive reviews from critics, with praise for its gameplay, campaign, and multiplayer, although there was some criticism for its story and lack of innovation. It won the award for Best Shooter at the 2011 Spike Video Game Awards. It was a massive commercial success. Within 24 hours of going on sale, the game sold 6.5 million copies in the United States and the United Kingdom and grossed $400 million, contemporaneously making it the largest entertainment launch ever.
Gameplay:
Modern Warfare 3 is a first-person shooter video game much like its predecessors. Modern Warfare 3 for Microsoft Windows has dedicated server support.
Campaign The player assumes the role of various characters during the single-player campaign, changing perspectives throughout the progression of the story, which, like its predecessors, is divided into three sets of missions called "Acts". Each mission in an act features a series of objectives that are displayed on the heads-up display, which marks the direction and distance between such objectives and the player. Damage to the player is shown by the visualization of blood spatter or red-outs on the screen. The player's health regenerates over time as long as the player character avoids taking damage for a limited time. Mission objectives vary in their requirements, ranging from having the player arrive at a particular checkpoint, to eliminating enemies in a specified location, to standing their ground against enemy squadrons, directing remote-operated weapons, and planting explosive charges on enemy installations. The player is also accompanied by fellow soldiers who cannot be issued orders. Like its predecessor, the game includes an interactive scene of a terror attack against civilians, which the player is given the option of skipping due to the portrayal of graphic and potentially upsetting content, including harm to children.
Cooperative Modern Warfare 3 features a new mode, called Survival. This mode allows one or two players to fight massive waves of enemies, with each wave becoming increasingly difficult. It differs from the Nazi Zombies mode in Call of Duty: World at War principally in that enemies no longer spawn at fixed locations as the zombies do, but instead appear at tactical positions based on the current location of the player. The mode is available on all multiplayer maps in the game. Players earn in-game cash for items such as weapons, upgrades, ammo, air/ground support, and equipment by killing or assisting in killing enemies, while more items can be unlocked by earning XP, which is also increased by killing enemies. Special Ops also returns from Modern Warfare 2. These challenge missions feature up to 48 stars, unlike the previous installment, which featured 69. Some weapons in Spec Ops are exclusive to that game mode and are not available in multiplayer mode.
Multiplayer The entire Killstreak reward system has been altered to make it more difficult for players to get early unlocks. Killstreaks are now known as Pointstreaks, and kills are no longer the only way to increase the player's point streak. Completing objectives such as planting a bomb or capturing a flag in Capture The Flag awards points towards the player's Pointstreak. Pointstreak rewards are organized into three different "strike packages" called Assault, Support, and Specialist.
The Assault strike package works the same as the Killstreak reward system in Modern Warfare 2 and Black Ops: the player must earn more and more points without dying. Once the player is killed, their points are reset to zero. Likewise, the Specialist strike package rewards players with perks after every second consecutive kill. Upon death, however, the player loses all the perks and the points are reset to zero. In contrast, the Support strike package is awarded based on the total points that the player has earned over the entire match, regardless of how often the player dies. It is important to note that if a player switches to a custom class with a different reward system (either assault or specialist) during gameplay, all points are automatically reset to 0. Players are allowed to choose which Pointstreak rewards they want to use when they gain it during the match, rather than choosing them between rounds.
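The difference between the Assault and Support strike packages described above can be sketched as two point counters that differ only in whether death resets them. The class name and the point values here are illustrative, not taken from the game's actual data.

```python
# Minimal sketch of the Assault vs Support strike package difference:
# Assault streak points reset to zero on death; Support points persist
# across deaths for the whole match.

class Streak:
    def __init__(self, resets_on_death):
        self.points = 0
        self.resets_on_death = resets_on_death

    def earn(self, pts):
        # Kills and objectives (bomb plants, flag captures) both add points.
        self.points += pts

    def die(self):
        if self.resets_on_death:   # Assault behaviour
            self.points = 0        # Support keeps accumulating instead

assault, support = Streak(True), Streak(False)
for s in (assault, support):
    s.earn(3); s.die(); s.earn(2)

print(assault.points, support.points)  # 2 5
```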
Along with revamping the entire Killstreak reward system, Modern Warfare 3 also has a completely modified ranking and unlocks system, which does not use a currency system for unlocks. The player's primary weapon levels up alongside the player and unlocks a number of "Proficiency" perks such as Attachments (allows two attachments; a successor to the "Bling" and "Warlord" perks), Kick (reduced recoil while aiming down the sight) and Focus (reduced flinching while under fire). Only one Proficiency can be put on a primary weapon. Another new addition is the ability to equip "Hybrid Scopes" on a weapon, such as a holographic sight with a red dot magnifier, allowing players to switch between magnified and non-magnified views. Modern Warfare 3 introduces a new "Prestige Shop", which unlocks only after the player has chosen to prestige for the first time. The Prestige Shop allows Prestige players to use tokens gained from the Prestige option to buy exclusive features such as double XP and an extra custom weapon class. The player is able to Prestige 20 times. Several controversial perks in Modern Warfare 2, accused of being overpowered, have been removed in Modern Warfare 3. Diving from standing to prone, known as "dolphin diving", has been removed due to balancing issues. Modern Warfare 3 utilized Treyarch's hot fix system to fix bugs and glitches. Modern Warfare 3 features a local and online split-screen option (not available for the Wii version). New game modes were added: Kill Confirmed: players must collect floating dog tags from the corpse of a downed enemy before the kill can be registered; the opposing team can pick up the dog tags as well, denying the other team both the kill and the point.
Team Defender: both teams attempt to capture a flag dropped by the first person who is killed when the match starts, the team of the person who holds the flag will get double points for their kills; while the team without the flag only gets the default number per kill.
Private matches also now include pre-made game modes including: "Infected" (where the infected kill enemies to recruit them to their team), "Drop Zone" (where the player must hold a drop zone for points and care packages), "Team Juggernaut" (each team plays alongside an AI Juggernaut character), "Gun Game" (be the first to get one kill with every gun in the game), "One in the Chamber" (in which players are only allowed one pistol with one bullet and three lives where they can only get more bullets by killing other players), and "Juggernaut" (free for all against a juggernaut, kill the juggernaut to become it). Along with this, players are allowed to create their own game modes with customized settings such as number of players and time limit.
Special Ops:
The Special Ops game mode from Modern Warfare 2 is present in Modern Warfare 3, and includes new features which make it more replayable and similar to other game modes, such as Nazi Zombies from previous Call of Duty games and Horde mode from Gears of War. The two main Special Ops modes include one that is generally the same as in the last Modern Warfare game, and a new wave-based mode, Survival, in which the player is inserted into a multiplayer map alone or with a single partner and defends an area against waves of enemies. Players can buy support options with money earned during each round by killing enemies and completing the optional objectives on each wave, which vary from getting multi-kills to not taking damage. Players also gain experience for kills, which unlocks more weapons as well as other support options. The mode also works with the DLC multiplayer maps.
Call of Duty: Elite:
Call of Duty: Elite was an online service developed by Beachhead Studios for the multiplayer portion of Modern Warfare 3 (as well as the previous installment in the series, Black Ops). It was first showcased at E3 2011 and was released on November 8, 2011, to coincide with the release of Modern Warfare 3. The free version included features such as lifetime statistics and social-networking integration, while the premium membership included monthly downloadable content. The service was shut down by Activision on February 28, 2014, and did not support Call of Duty: Ghosts.
Downloadable content:
The downloadable content (DLC) available for Modern Warfare 3 is an assortment of additional multiplayer maps, Special Ops missions, and Face-Off maps that came as part of the Call of Duty Elite premium membership. Downloadable content was split into four unique "Content Collections", each with 2–3 content packs. During the release of Modern Warfare 3 and the Call of Duty Elite service, premium members were promised 20 pieces of DLC over a 9-month period, with content releases for each platform every month. This number was later increased to 22 on Call of Duty's official Elite Content Calendar. Initially, all downloadable content was only available to Call of Duty: Elite premium members. Xbox 360 users received all DLC about a month before PlayStation 3 users, regardless of Elite membership, due to a special contract between Microsoft and Activision. As an example, the first Collection dropped on Xbox 360 on January 24, and on February 28 on the PS3. Content Drops were released monthly exclusively to all Call of Duty: Elite premium and founder members. There were a total of 9 monthly DLC releases up until the end of Modern Warfare 3's 2012 content season. September was the last month DLC was released for Xbox 360, and October was the last month for PlayStation 3. Since Call of Duty: Elite was not available for PC gamers, DLC for PC was only released in the form of Content Collections.
On May 9, 2012, it was announced that the Face-Off mode would be introduced to Modern Warfare 3. It included smaller maps, which promoted fast gameplay matches. Face-Off included options for 1v1, 2v2, and 3v3 battles. Two free Face-Off maps became available for all Xbox Live Gold subscribers on May 15, 2012, regardless of Call of Duty: Elite membership.
Story:
Characters:
The game sees the return of former Task Force 141 Captain John "Soap" MacTavish (voiced by Kevin McKidd), former SAS Captain John Price (Billy Murray) and Russian informant "Nikolai", who are on the run after killing the rogue U.S. Army Lieutenant General Shepherd, the main antagonist of the previous game. Task Force 141 was officially disavowed after Shepherd's death, with the truth of his involvement in igniting the war known only to Price, Soap, and Nikolai. For most of the game, the player controls Yuri (Tony Curran), an ex-Spetsnaz operative and former associate of Makarov, who joins Price on his hunt for Russian Ultranationalist leader Vladimir Makarov (Roman Varshavsky). Makarov returns as the game's main antagonist, and has a new contact named "Volk", a Russian bombmaker in Paris, France. Several playable characters have been added, including Delta Force operative Staff Sergeant Derek "Frost" Westbrook; SAS Sgt. Marcus Burns; and Andrei Harkov, a Russian FSO agent tasked with protecting the Russian President. Just like President Al-Fulani in the first game, Soap is only "playable" during the game's opening sequence, while Price becomes the player character in the game's final mission, Dust to Dust. The player also takes brief control of a civilian tourist in London, seconds before he and his family are killed by a chemical agent, as well as an AC-130 TV operator during Team Metal's escape from Paris in the mission Iron Lady.
New non-player characters (NPCs) include: Delta Force operatives "Sandman" (William Fichtner), "Truck" (Idris Elba), and "Grinch" (Timothy Olyphant), who serve as Frost's squadmates and father figures. Captain MacMillan (Tony Curran) returns from Call of Duty 4: Modern Warfare as Baseplate, to provide the 141 with critical intelligence. Craig Fairbrass, who originally voiced the characters Gaz and Ghost, returns to voice SAS operative Sergeant Wallcroft, who originally had a minor role in the first Modern Warfare.
Plot:
Immediately following the events of the previous game, Soap is extracted to a Russian loyalist safehouse in Himachal Pradesh alongside Price and Nikolai. Makarov's forces, however, arrive soon after, prompting the now-disbanded Task Force 141 members to flee with assistance from Yuri, one of Nikolai's associates who shares a common grudge against Makarov.
Meanwhile, a four-man Delta Force team codenamed "Metal" assists U.S. forces in the defense of New York City, currently under siege by Russian Airborne Troops. Metal destroys a radio jammer and re-establishes USAF superiority. The team then raids an Oscar II-class submarine and uses its own missile payload to destroy the Russian Navy in New York Bay, prompting Russia to withdraw its forces from the East Coast.
Three months later, Boris Vorshevsky, the President of Russia, flies to take part in peace talks with the United States in Hamburg. Makarov's forces, however, ambush the plane and kill most of the cabinet, forcing it to crash-land. Makarov demands the President relinquish Russia's nuclear launch codes, threatening his daughter's life if he does not comply. Makarov then orders chemical bombs to be sent from Sierra Leone to several NATO-affiliated capital cities and military bases. The Special Air Service identify such a shipment inbound to London, but despite their best efforts the bomb detonates anyway. With Europe's defense forces crippled by the attacks, Russian ground forces—now under Makarov's control—launch an invasion of the European continent.
While the U.S. military coordinates efforts to repel the invasion of Europe, Price's team acquires a lead from MacMillan that Volk, CEO of Fregata Industries, is responsible for constructing the bombs. Price acquires Volk's location from his Somali contact before killing him, relaying the information to Metal, who are deployed to capture Volk in Paris with assistance from the GIGN. Volk reveals that Makarov is meeting with his associates in Prague. However, the meeting turns out to be a trap: Sergeant Kamarov, a Russian loyalist and close contact of Price's, is killed by Makarov, who addresses Yuri as "[his] friend". In the ensuing fight, Soap suffers a severe fall injury that reopens his knife wound, and he dies from blood loss soon after revealing Makarov's association with Yuri.
Price interrogates Yuri at gunpoint, demanding to know more about his past with Makarov. Yuri reveals that he was present in Pripyat during Zakhaev's attempted assassination and became disillusioned with the Ultranationalist cause following the nuclear detonation in the Middle East. He then explains that he tried to thwart the massacre at Zakhaev International Airport but was critically wounded by Makarov and later rescued by airport security. Price begrudgingly cooperates with Yuri, and with further help from MacMillan, the two infiltrate Makarov's castle base to acquire intel on his whereabouts.
Price learns that the Russian President's daughter, Alena, is hiding in Berlin. Team Metal receives the intel and works with the German Bundeswehr to reach her, but Makarov's forces capture her first. Metal then tracks Alena and her father to a diamond mine in Siberia and launches a rescue mission alongside Yuri and Price. While the mission is successful, the Delta team sacrifices itself as the mine collapses. With the President safe, relations between Russia and the U.S. are restored and political tensions are alleviated, prompting the two countries to call an immediate ceasefire, ending the war abroad and forcing Makarov into hiding.
Three months later in 2017, Price and Yuri track Makarov to a hotel in Dubai. Overcoming fierce resistance, Price and Yuri corner Makarov on the hotel rooftop. Though Makarov gains the upper hand, Yuri intervenes but is killed in the process, allowing Price to strangle the distracted Makarov, hanging him from a steel cable. As first responders approach the hotel, a victorious Price smokes a cigar.
Development:
A Q3 2010 earnings call from Activision confirmed that the eighth installment of the franchise was in development by Sledgehammer Games and Raven Software and due for release "during the back half of 2011". This was revealed to be Infinity Ward's Call of Duty: Modern Warfare 3, with Sledgehammer and Raven co-developing the multiplayer. Call of Duty: Modern Warfare 3 was known to be in development after a legal dispute between Infinity Ward co-founders Jason West and Vince Zampella and Activision resulted in the pair being fired from the company. Several dozen Infinity Ward employees followed West and Zampella as a result of the ongoing dispute, leading Activision to enlist Sledgehammer Games and Raven Software to assist with development of the title. The game was said to have been in development since only two weeks after the release of the previous game, Call of Duty: Modern Warfare 2. It was also reported that Sledgehammer was aiming for a "bug free" first outing in the Call of Duty franchise and had set a goal of Metacritic review scores above 95. Modern Warfare 3 utilizes the MW3 Engine, unofficially known as the IW 5.0 Engine; improvements include better streaming and audio. Sledgehammer Games announced that the game would be the first entry in the Modern Warfare series with built-in support for color-blind gamers.

The Official UK PlayStation Magazine lent credence to speculation that Modern Warfare 3 would be a prequel starring fan-favorite character Ghost. The magazine's sources strengthened a rumor which first surfaced online in early January 2011: on the Rumor Machine page of issue 055, OPM pointed to "insider whispers" suggesting that "Infinity Ward's next Modern Warfare title will be a prequel, with Ghost in the lead role." According to PSM3, the first snippet of Modern Warfare 3 gameplay would be revealed in mid-April.
According to the publication's May 2011 issue, insider rumors said "the next in Activision's megaton FPS series will be announced in mid-April". On May 13, 2011, the video game website Kotaku revealed the existence of Modern Warfare 3 following a massive leak. According to Kotaku, the leak came from multiple sources who may or may not work at Activision and Infinity Ward. It contained thorough information about the game, confirming that it would be a direct sequel to Call of Duty: Modern Warfare 2, as well as details regarding weapons, levels, and modes found in the game. In response to the leaks, Robert Bowling tweeted: "A lot of hype & a lot of leaked info on MW3, some still accurate, some not. To avoid spoiling the experience, I'd wait for the real reveal." Just hours after the leaked assets appeared on Kotaku, four teaser trailers were released on the official Call of Duty YouTube page, separately titled "America", "England", "France" and "Germany", indicating the various locales of the game.
Marketing:
On May 23, 2011, Activision released the first gameplay trailer for Call of Duty: Modern Warfare 3 on YouTube ahead of its official premiere during the NBA Western Conference Finals. On May 31, 2011, Activision announced Call of Duty: Elite, a new social service for the Call of Duty community to track and compare statistics, create videos and access premium content. The service is fully integrated into Call of Duty: Modern Warfare 3, and launched on November 8, 2011, to coincide with the game's release. On June 6, 2011, at 11:00 AM (PDT), the first live gameplay demo of Modern Warfare 3 was presented by Robert Bowling and Glen Schofield at E3 2011. On June 14, 2011, at 12:35 PM (EST), the first live gameplay demo of the new Survival Mode was played by Jimmy Fallon and Simon Pegg on Late Night with Jimmy Fallon. On August 9, 2011, the trailer for the new Survival Mode was released on YouTube. On September 2, 2011, the multiplayer world premiere trailer was released on YouTube. On September 3, 2011, another multiplayer trailer was released on YouTube showing off the heads-up display along with various weapons, perks, and killstreaks. On October 6, 2011, a second full-length cinematic trailer was released. On October 22, 2011, the launch trailer was released.

On July 19, 2011, UK distributor Lygo's unveiled a range of Turtle Beach Ear Force Modern Warfare 3 gaming headsets that launched in November 2011. The headsets are distinguished by custom audio presets designed by the audio teams at developers Infinity Ward and Sledgehammer Games "in order to provide the ultimate immersion into the cinematic world of Modern Warfare 3". On August 24, 2011, Activision revealed the official Modern Warfare 3 sunglasses. These Call of Duty-branded glasses come from technology eyewear manufacturer GUNNAR – in a licensed partnership with Activision – and join its "Advanced Gaming Eyewear" line. They come with a limited edition Modern Warfare 3 carrying case and cleaning cloth.
The product is sold exclusively in North America at Best Buy retail locations and at select European retailers. Microsoft released two limited-edition Modern Warfare 3-themed accessories on October 11, 2011: a wireless controller and a wireless headset. On September 29, 2011, Munitio announced a partnership with Activision to make a special edition Modern Warfare 3 9 mm "billet" earphone featuring the Modern Warfare 3 logo, among other things. The earphones were available for pre-order and were released on October 23, 2011. On October 18, 2011, Logitech announced a partnership with Activision to make a special edition Modern Warfare 3 mouse and keyboard, both featuring the Modern Warfare 3 logo.

On September 2, 2011, Jeep announced a partnership with Activision, for the second year in a row, to make a special edition Modern Warfare 3 Jeep based on the Wrangler Rubicon model, with the interior and exterior designed around a Modern Warfare 3 theme. Jeep dealers started selling this model in November 2011.

On August 24, 2011, the PepsiCo-owned brand Mountain Dew officially announced on their Facebook page that they would be promoting the game with their "Game Fuel" soda variants: cherry-citrus-flavored (the original Game Fuel that promoted Halo 3 in 2007 and was brought back in 2009 to promote World of Warcraft) and tropical-flavored (a new flavor that was tested by 500 Dew Labs members). The drinks featured codes granting the player double experience points in-game, with the amount depending on the size of the drink. Another PepsiCo-owned brand, Doritos, promoted the game with its "Cool Ranch" and "Nacho Cheese" flavors under the same rules as the Mountain Dew promotion. Both promotions started on October 10, 2011, and ended on December 31, 2011.
In Australia, 500 ml cans of V Energy Drink were branded with the Modern Warfare 3 logo, along with a code which could be used for downloads and previews.

To promote the game, Activision held a two-day event called Call of Duty: Experience 2011 (Call of Duty: XP for short), which took place in Los Angeles from September 2–3, 2011. It featured, among other things, the reveal of the new multiplayer, which attendees were able to play for the first time. In addition, all attendees received the Hardened Edition for free as a gift for attending. At a Call of Duty: Modern Warfare 3 VIP party in Amsterdam, Dutch porn star and avid fan of the series Kim Holland was originally invited to attend the event until she was suddenly uninvited when Activision discovered her profession. In her blog, she shared her opinion of and feelings towards Activision's sudden decision, writing: "People murdering people is neat, [...] but love-makers are dirty?" Activision did not respond to any comments about the subject.

Activision had planned to set up an official website to promote the game; however, the domain name "ModernWarfare3.com" had already been taken and was being used for an anti-Call of Duty website that redirected users to Electronic Arts's game Battlefield 3. Activision filed a US$2,600 complaint against the site with the National Arbitration Forum. On September 8, 2011, Activision won the complaint and acquired the rights to the domain name. In November 2011, actors Jonah Hill and Sam Worthington (who voiced the main character Alex Mason in Black Ops, the game preceding Modern Warfare 3), and NBA athlete Dwight Howard starred in commercials advertising the game.
Release:
Two weeks before the release of the game, it was reported that half of the PC version had been uploaded online after being stolen from a warehouse in Fresno, California. Investigators working on behalf of Activision searched torrent websites for traces of the game and visited people across the United States who had downloaded a copy, requesting that they remove it or face a fine of US$5,000. As early as late October, reports were already surfacing of copies being sold early, with gameplay videos uploaded online. On November 3, 2011, it was reported that copies of the game were already being sold early in the United States: Kmart had started selling copies of the game before its scheduled release date, with copies already appearing on eBay and Craigslist. This was due to an error by one of the shipping companies, which told Kmart to sell copies of the game immediately after receiving the shipments; Activision contacted Kmart about this and had the issue resolved.

French site TF1 News reported that on November 6, 2011, in Créteil, south of Paris, a truck collided with a car, before two masked individuals emerged from the car. The criminals reportedly used tear gas to neutralize the truck drivers before hopping in and making off with the video game shipment, said to be worth 400,000 euros. A separate report said the truck contained a delivery of 6,000 copies of Modern Warfare 3.

Shortly after the game's release, a man from Aurora, Colorado who did not receive a copy of the game at his local Best Buy, despite pre-ordering it, claimed to be so angry that he "could blow this place up". He was also reported as having threatened to shoot employees once they left the store. Lomon Sar, age 31, was issued a citation and court summons by police responding to the disturbance. The game was released for OS X on May 20, 2014.
Retail versions:
Modern Warfare 3 was released in two retail versions for the PlayStation 3 and Xbox 360 platforms: Standard and Hardened. The Standard version consists of the game and an instruction manual and is the only version available for Microsoft Windows. The Hardened Edition includes the game disc with "unique art"; a one-year membership to Call of Duty: Elite; "special founder status" on Call of Duty: Elite, which includes an exclusive in-game emblem, player card, weapon camouflage, clan XP boost, and other exclusive benefits; a premium collectible Steelbook case; an exclusive animated timeline theme (PS3 only); an exclusive Spec Ops Juggernaut Xbox Live avatar outfit (Xbox 360 only); and a limited-edition collectible field journal, which chronicles "the entire saga with 100+ pages of authentic military sketches, diagrams, and written entries."

Robert Bowling of Infinity Ward confirmed that there would be no Prestige Edition of Modern Warfare 3. In the past, the Prestige editions of previous Call of Duty games included physical items such as a remote-controlled car for Black Ops, and a pair of night vision goggles with a life-sized plastic head to put them on for Modern Warfare 2. On August 19, 2011, UK retailer Game announced an Intel Pack of Modern Warfare 3, which comes with a British special forces avatar for both the Xbox 360 and PlayStation 3 versions, as well as a Brady strategy guide. On September 3, 2011, Activision and Microsoft jointly announced a special, limited Modern Warfare 3 version of the Xbox 360 with a 320 GB hard disk. The unit is designed by the Call of Duty team and includes two custom wireless controllers, a copy of Modern Warfare 3, and custom sounds when the console is turned on/off or when the disc tray is ejected. A one-month subscription to Xbox Live Gold is also included, as well as exclusive avatar items.
Reception:
Critical response:
Call of Duty: Modern Warfare 3 received "generally positive" reviews, according to review aggregator Metacritic, except on the Wii, where it received "mixed or average" reviews. The Daily Telegraph gave the game's Xbox 360 version 5 stars out of 5, stating that even as "the series has always been renowned for elements like the excellent sound design, the gloss, polish, and compulsion of its gameplay," it is "a game that not only lives up to the brand hype but exceeds it. A game where the mass appeal is justified and the expectations are met." Gaming Evolution gave the PS3 and Xbox 360 versions a 9.0 out of 10, stating "Modern Warfare 3 lives up to the hype. It is proving itself one of the best FPS the genre has to offer." IGN gave the game's Xbox 360 version a 9.0 out of 10.0, pointing out that the game offers "great multiplayer, [a] fun campaign, tons of content, but [also] a forgettable story."

GameSpot qualified its review, stating that "the series' signature thrills have lost some of their luster. Modern Warfare 3 iterates rather than innovates, so the fun you have is familiar", but concluded by affirming that "fortunately, [the game is] also utterly engrossing and immensely satisfying, giving fans another reason to rejoice in this busy shooter season". Eurogamer gave the game an 8/10, noting that it is a "ferocious and satisfying game that knows exactly what players expect, and delivers on that promise with bullish confidence" but with "an outmoded single-player campaign". Reviews for the Wii version were less favorable: IGN rated it only 4.5 out of 10, blaming the lackluster graphics and poor friend-code system for bringing it down.
Sales and revenue:
Activision has said that it believes Call of Duty: Modern Warfare 3 day-one shipments were the largest for any game ever. "The record number of pre-orders from Modern Warfare 3 drove the largest day-one shipments in our history, and in the industry's history," said Activision Publishing CEO Eric Hirshberg during an earnings call on November 8, 2011. Hirshberg said more than 1.5 million people queued at 13,000 shops at midnight on Monday to buy Modern Warfare 3, "making it the largest retail release in Activision's history and in the industry's history". Activision reported that Modern Warfare 3 sold more than 6.5 million copies in the U.S. and UK on launch day and grossed $400 million in those two territories alone in its first 24 hours, making it the biggest entertainment launch of all time. It was the third year in a row that the Call of Duty series had broken the same record: 2010's Black Ops grossed $360 million on day one, and 2009's Modern Warfare 2 brought in $310 million. Activision Blizzard president and CEO Robert Kotick stated that "the launch of Call of Duty: Modern Warfare 3 is the biggest entertainment launch of all time in any medium, and we [Activision] achieved this record with sales from only two territories."

The title grossed more than $775 million globally in its first five days of availability, exceeding the $650 million record set by 2010's Call of Duty: Black Ops and the $550 million record achieved by 2009's Modern Warfare 2; it beat theatrical box office, book, and video game sales records for five-day worldwide sell-through in dollars. Modern Warfare 3 went on to gross $1 billion worldwide in 16 days of availability, beating Avatar's record of 19 days, according to Activision. According to NPD Group, Modern Warfare 3 was November's biggest-selling game of the month in the U.S.
Modern Warfare 3 sales surpassed first-month sales of 2010's Black Ops by 7 percent, and sales for November sat at around the 9 million unit mark. In November 2013, IGN confirmed that Modern Warfare 3 had sold 26.5 million copies, making it the highest-selling game in the Call of Duty series. Modern Warfare 3 topped the UK video game sales chart in its first week, becoming the biggest video game launch in history by revenue. By November 21, 2011, the game remained the bestselling title in the United Kingdom, despite sales dropping by 87%. Modern Warfare 3 held the top spot on the UK charts for four weeks running before being replaced by The Elder Scrolls V: Skyrim in its fifth week on the market. The PlayStation 3 version of Modern Warfare 3 also topped the Japanese chart in its first week on sale, shifting 180,372 copies, while the Xbox 360 version sold around 30,000.
Awards:
Modern Warfare 3 received the Best Shooter award at the 2011 Spike Video Game Awards, where it was also nominated for Best Multiplayer Game. At the Interactive Achievement Awards (DICE Awards), Modern Warfare 3 won Action Game of the Year and was nominated for Outstanding Achievement in Online Gameplay, Outstanding Achievement in Sound Design, and Outstanding Achievement in Connectivity.
**Rayner (company)**
Rayner (company):
Rayner designs and manufactures intraocular lenses and proprietary injection devices for use in cataract surgery. With Sir Harold Ridley, they were pioneers in the field from 1949 when Ridley successfully implanted the first intraocular lens (IOL) at St Thomas' Hospital, London.
The origin of the company:
The story of the Rayner Company begins in 1910, when Mr John Baptiste Reiner and Mr Charles Davis Keeler opened their first optician's shop at No 9, Vere Street, London, England. They registered their company as Reiner & Keeler Ltd on 30 October 1910. Before forming the company, J.B. Reiner had completed an apprenticeship in 'the art of an optician and scientific instrument maker' in 1891 and had gone on to work for E.B. Merrowitz Ltd, a branch of a well-known American optical company.
In 1915, during the First World War, the company name was changed to Rayner & Keeler Ltd. This was almost certainly a commercial decision of the time as J. B. Reiner retained his name all his life.
The two founding directors separated in 1917 when C. D. Keeler resigned and severed all his interests with the company.
The first intraocular lens and Rayner's association with Ridley:
In 1948, Mr Harold Ridley, consultant ophthalmologist at St Thomas' Hospital and Moorfields Eye Hospital, London, together with John Pike of Rayner met privately to discuss a new project. Pike, a director of Rayner and their senior optical specialist, had assisted Ridley with several projects, most recently on the development of electronic ophthalmoscopy. Ridley called his new project the artificial lenticulus project and asked Pike for Rayner's help in the design and manufacture of an implantable lens.
In David Apple's article from the January 1996 issue of Survey of Ophthalmology, Ridley recalls: "... After months of secret thought, I called my friend John Pike, the optical scientist at Rayners of London with whom I had recently worked on electronic ophthalmoscopy. I suggested that we meet in my car after completing our routine duties that day. So it came about that two men sitting in a car in Cavendish Square one evening devised all the principles of a new operation."

Perspex was chosen as the preferred material because of its light weight and good optical properties. Wartime observations of eye injuries to RAF personnel had also shown that Perspex appeared inert within body tissues. Perspex was registered in 1934 by ICI as the trademark for their polymethylmethacrylate acrylic sheet. In the late 1930s, as a result of Britain's rearmament programme, ICI's total production of Perspex was reserved for the aircraft industry, and the material was specifically developed for use in fighter aircraft. The required properties of transparency, strength and resistance to heat demanded a high degree of purity and polymerisation. The postwar commercial development of Perspex had resulted in a quite different material from that of the war years but, to ICI's credit, a team led by Dr John Holt again produced the high-quality fighter-aircraft Perspex, which they called Transpex I.
On 29 November 1949, at St Thomas' Hospital, London, Ridley performed the first IOL operation on the eye of a 45-year-old female patient. The operation was conducted in secret, done in two stages with the artificial lens permanently implanted three months later.
Rayner in the United States:
In 1952 the first IOL implant was performed in the United States: a Ridley-Rayner lens was implanted at the Wills Eye Hospital in Philadelphia. Surgeons Turgut Hamdi MD and Warren Reese MD implanted a series of these lenses, some with good visual function results (reported in a review by Dr Charles Letocha in the Journal of Cataract and Refractive Surgery in 1999). A lens designed by Ridley's pupil Peter Choyce was the first to be deemed "safe and effective" and approved for use in the US by the Food and Drug Administration in 1981. These first FDA-approved lenses (the Choyce Mark VIII and Choyce Mark IX anterior chamber lenses) were manufactured by Rayner.
The Rayner C-flex injectable IOL has been approved by the FDA since May 2007.
Developments in this century:
On 21 April 2009 the prestigious Queen's Award for Enterprise was awarded to Rayner Intraocular Lenses Limited in recognition of sustained international trade in overseas markets. Also in 2009 Rayner celebrated 60 years of continuous manufacturing and sales of intraocular lenses.
Developments in this century:
Rayner is the only British manufacturer of IOLs: all its intraocular lenses were made at its Sackville Road manufacturing facility in Hove, East Sussex until May 2017, when the company moved into a new global HQ and manufacturing facility in Worthing, West Sussex. The new building was named the Ridley Innovation Centre. The plant includes a new generation of manufacturing equipment from companies such as GB Innomech to improve process efficiencies and more than double manufacturing capacity. On 2 December 2009 in the House of Commons, during Prime Minister's Questions, the sixtieth anniversary of the IOL was mentioned (and recorded in Hansard): Ms Celia Barlow (Hove) (Lab): "Will the Prime Minister join me in marking 60 years since the British surgeon Sir Harold Ridley commissioned my Hove company, Rayner Opticians, to produce the first intraocular lens? Will he also congratulate the company on receiving the Queen's Award for Enterprise on Friday, and on the fact that it still works with charities across the world in restoring sight?" The Prime Minister: "In my hon. Friend's constituency, there are many excellent companies, and one of them is Rayner. I want to congratulate all those who have contributed to the success of ophthalmic medicine over the past few years. The inventions that have come from Britain are truly wonderful. We should be very proud of our British scientists and engineers, but also very proud of our medical researchers and medical firms." On 1 February 2014 the High Street retail business of Rayner Opticians was bought by Vision Express from JBR1910 Limited. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Unicity (data analysis)**
Unicity (data analysis):
Unicity ( εp ) is a risk metric for measuring the re-identifiability of high-dimensional anonymous data. First introduced in 2013, unicity is measured by the number of points p needed to uniquely identify an individual in a data set. The fewer points needed, the more unique the traces are and the easier they would be to re-identify using outside information.
Unicity (data analysis):
In a high-dimensional human-behavioural data set, such as mobile phone metadata, there potentially exist thousands of different records for each person. In the case of mobile phone metadata, credit card transaction histories and many other types of personal data, this information includes the time and location of an individual.
Unicity (data analysis):
In research, unicity is widely used to illustrate the re-identifiability of anonymous data sets. In 2013 researchers from the MIT Media Lab showed that only 4 points were needed to uniquely identify 95% of individual trajectories in a de-identified data set of 1.5 million mobility trajectories. These points were location-time pairs that appeared with the resolution of 1 hour and 0.15 km² to 15 km². These results were shown to hold true for credit card transaction data as well, with 4 points being enough to re-identify 90% of trajectories. Further research studied the unicity of the apps installed by people on their smartphones, the trajectories of vehicles, mobile phone data from Boston and Singapore, and public transport data in Singapore obtained from smartcards.
Measuring unicity:
Unicity ( εp ) is formally defined as the expected value of the fraction of uniquely identifiable trajectories, given p points selected from those trajectories uniformly at random. A full computation of εp of a data set D requires picking p points uniformly at random from each trajectory Ti∈D , and then checking whether or not any other trajectory also contains those p points. Averaging over all possible sets of p points for each trajectory results in a value for εp . This is usually prohibitively expensive as it requires considering every possible set of p points for each trajectory in the data set — trajectories that sometimes contain thousands of points. Instead, unicity is usually estimated using sampling techniques. Specifically, given a data set D , the estimated unicity is computed by sampling from D a fraction of the trajectories S and then checking whether each trajectory Tj∈S is unique in D given p randomly selected points from each Tj . The fraction of S that is uniquely identifiable is then the unicity estimate. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
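The sampling estimator described above can be sketched in a few lines of Python. The representation of trajectories as sets of (location, time) pairs and all names here are illustrative assumptions, not taken from the original papers:

```python
import random

def estimate_unicity(trajectories, p, sample_size, seed=0):
    """Estimate unicity: the fraction of sampled trajectories that are
    uniquely identified by p of their points chosen at random."""
    rng = random.Random(seed)
    sample = rng.sample(range(len(trajectories)), sample_size)
    unique = 0
    for i in sample:
        # Pick p random points from trajectory i.
        points = set(rng.sample(sorted(trajectories[i]), p))
        # The trajectory is unique if no *other* trajectory contains all p points.
        if not any(points <= trajectories[j]
                   for j in range(len(trajectories)) if j != i):
            unique += 1
    return unique / sample_size

# Toy example: trajectories as sets of (cell, hour) pairs.
trajs = [{(1, 1), (2, 2), (3, 3)},
         {(1, 1), (2, 2), (4, 4)},
         {(5, 5), (6, 6), (7, 7)}]
assert estimate_unicity(trajs, p=3, sample_size=3) == 1.0
```

On real mobility data the trajectories would be far larger and the sample a small fraction of D, which is exactly why the sampling estimate is preferred over the exhaustive computation.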
**Looking Glass (UNIX desktop)**
Looking Glass (UNIX desktop):
Looking Glass is a desktop environment for computers running the UNIX operating system. Developed by Visix Software, it was sold commercially until Visix went out of business.
Looking Glass was used as the desktop software bundled with INTERACTIVE UNIX System and Caldera OpenLinux. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Google Translator Toolkit**
Google Translator Toolkit:
Google Translator Toolkit was an online computer-assisted translation tool (CAT)—a web application designed to permit translators to edit the translations that Google Translate automatically generated using its own and/or user-uploaded files of appropriate glossaries and translation memory. The toolkit was designed to let translators organize their work and use shared translations, glossaries and translation memories, and was compatible with Microsoft Word, HTML, and other formats.
Google Translator Toolkit:
Google Translator Toolkit by default used Google Translate to automatically pre-translate uploaded documents which translators could then improve.
Google Translator Toolkit:
Google Inc released Google Translator Toolkit on June 8, 2009. This product was expected to be named Google Translation Center, as had been announced in August 2008. However, Google Translator Toolkit turned out to be a less ambitious product: "document rather than project-based, intended not as a process management package but simply another personal translation memory tool". Originally the Google Translator Toolkit was meant to attract collaboratively minded people, such as those who translate Wikipedia entries or material for non-governmental organizations. However, later it was used widely in commercial translation projects. A review of the toolkit in Multilingual noted: "The significance of the Google Translator Toolkit is its position as a fully online software-as-a-service (SaaS) that mainstreams some backend enterprise features and hitherto fringe innovations, presaging a radical change in how and by whom the translation is performed". Translator Toolkit was shut down on December 4, 2019.
Source and target languages:
The Toolkit began in June 2009 with only one source language—English—and forty-seven target languages, but later supported 345 source languages and 345 target languages for approximately 100,000 language pairs. Google Translator Toolkit's user interface was available in eighty-five languages:
Workflow:
To use Google Translator Toolkit, users first uploaded a file from their desktop or entered the URL of a web page or Wikipedia article that they wanted to translate. Google Translator Toolkit automatically 'pretranslated' the document. It divided the document into segments, usually sentences, headers, or bullets. Next, it searched all available translation databases for previous human translations of each segment. If any previous human translations of the segment existed, Google Translator Toolkit picked the highest-ranked search result and 'pretranslated' the segment with that translation. If no previous human translation of the segment existed, it used machine translation to produce an 'automatic translation' for the segment, without intervention from human translators.
Workflow:
Users could then review and improve the automatic translation by clicking on the sentence and fixing a translation, or using Google's translation tools to help them translate by clicking the "Show toolkit" button.
Workflow:
Users could view translations previously entered by other users in the "Translation search results" tab or use the "Dictionary" tab to search for the right translations for hard-to-find words. In addition, translators could use features like custom, multi-lingual glossaries and view the machine translation for reference. They could also share their translations, or invite others to help edit or view them. Translations could be downloaded and, for Wikipedia articles, published back to the source pages.
API:
Google Translator Toolkit used to provide an API which was restricted to approved users only. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Maltokinase**
Maltokinase:
Maltokinase (EC 2.7.1.175) is an enzyme with systematic name ATP:alpha-maltose 1-phosphotransferase. This enzyme catalyses the following chemical reaction: ATP + maltose ⇌ ADP + alpha-maltose 1-phosphate. This enzyme requires Mg2+ for activity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Simmondsin**
Simmondsin:
Simmondsin is a component of the seeds of jojoba (pronounced "ho-HO-bah"; Simmondsia chinensis). While it had been considered toxic because jojoba seed meal caused weight loss in animals, in recent years its appetite-suppressant effect has been researched as a potential treatment for obesity. It is thought to reduce appetite by increasing levels of cholecystokinin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cornell's sign**
Cornell's sign:
Cornell's sign is a clinical sign in which scratching along the inner side of the extensor hallucis longus tendon elicits an extensor plantar reflex. It is found in patients with pyramidal tract lesions, and is one of a number of Babinski-like responses. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Free monoid**
Free monoid:
In abstract algebra, the free monoid on a set is the monoid whose elements are all the finite sequences (or strings) of zero or more elements from that set, with string concatenation as the monoid operation and with the unique sequence of zero elements, often called the empty string and denoted by ε or λ, as the identity element. The free monoid on a set A is usually denoted A∗. The free semigroup on A is the subsemigroup of A∗ containing all elements except the empty string. It is usually denoted A+. More generally, an abstract monoid (or semigroup) S is described as free if it is isomorphic to the free monoid (or semigroup) on some set. As the name implies, free monoids and semigroups are those objects which satisfy the usual universal property defining free objects, in the respective categories of monoids and semigroups. It follows that every monoid (or semigroup) arises as a homomorphic image of a free monoid (or semigroup). The study of semigroups as images of free semigroups is called combinatorial semigroup theory.
Free monoid:
Free monoids (and monoids in general) are associative, by definition; that is, they are written without any parenthesis to show grouping or order of operation. The non-associative equivalent is the free magma.
Examples:
Natural numbers The monoid (N0,+) of natural numbers (including zero) under addition is a free monoid on a singleton free generator, in this case the natural number 1. According to the formal definition, this monoid consists of all sequences like "1", "1+1", "1+1+1", "1+1+1+1", and so on, including the empty sequence. Mapping each such sequence to its evaluation result and the empty sequence to zero establishes an isomorphism from the set of such sequences to N0.
Examples:
This isomorphism is compatible with "+", that is, for any two sequences s and t, if s is mapped (i.e. evaluated) to a number m and t to n, then their concatenation s+t is mapped to the sum m+n.
Kleene star In formal language theory, usually a finite set of "symbols" A (sometimes called the alphabet) is considered. A finite sequence of symbols is called a "word over A", and the free monoid A∗ is called the "Kleene star of A".
Examples:
Thus, the abstract study of formal languages can be thought of as the study of subsets of finitely generated free monoids. For example, assuming an alphabet A = {a, b, c}, its Kleene star A∗ contains all concatenations of a, b, and c: {ε, a, ab, ba, caa, cccbabbc, ...}. If A is any set, the word length function on A∗ is the unique monoid homomorphism from A∗ to (N0,+) that maps each element of A to 1. A free monoid is thus a graded monoid. (A graded monoid M is a monoid that can be written as M=M0⊕M1⊕M2⋯. Each Mn is a grade; the grading here is just the length of the string. That is, Mn contains those strings of length n.
Examples:
The ⊕ symbol here can be taken to mean "set union"; it is used instead of the symbol ∪ because, in general, set unions might not be monoids, and so a distinct symbol is used. By convention, gradations are always written with the ⊕ symbol.) There are deep connections between the theory of semigroups and that of automata. For example, every formal language has a syntactic monoid that recognizes that language. For the case of a regular language, that monoid is isomorphic to the transition monoid associated to the semiautomaton of some deterministic finite automaton that recognizes that language. The regular languages over an alphabet A are the closure of the finite subsets of A*, the free monoid over A, under union, product, and generation of submonoid. For the case of concurrent computation, that is, systems with locks, mutexes or thread joins, the computation can be described with history monoids and trace monoids. Roughly speaking, elements of the monoid can commute (e.g. different threads can execute in any order), but only up to a lock or mutex, which prevents further commutation (e.g. serializes thread access to some object).
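The word-length homomorphism and the grading described above can be checked concretely. In this illustrative sketch, A∗ is modelled as Python strings over the alphabet {a, b, c}, with concatenation as the monoid operation:

```python
# Free monoid A* modelled as Python strings over A = {'a', 'b', 'c'};
# concatenation is the monoid operation and '' is the identity.

def word_length(w):
    # The unique monoid homomorphism A* -> (N0, +) sending each letter to 1.
    return len(w)

u, v = "cab", "bacc"
# Homomorphism property: the length of a concatenation is the sum of lengths.
assert word_length(u + v) == word_length(u) + word_length(v)
# The identity element maps to the identity: the empty word goes to 0.
assert word_length("") == 0
# Grading: M_n is the set of words of length n; here |M_2| = 3^2 = 9.
M2 = {x + y for x in "abc" for y in "abc"}
assert all(word_length(w) == 2 for w in M2) and len(M2) == 9
```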
Conjugate words:
We define a pair of words in A∗ of the form uv and vu as conjugate: the conjugates of a word are thus its circular shifts. Two words are conjugate in this sense if they are conjugate in the sense of group theory as elements of the free group generated by A.
Conjugate words:
Equidivisibility A free monoid is equidivisible: if the equation mn = pq holds, then there exists an s such that either m = ps, sn = q or ms = p, n = sq. This result is also known as Levi's lemma. A monoid is free if and only if it is graded (in the strong sense that only the identity has gradation 0) and equidivisible.
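Levi's lemma is constructive for words: given mn = pq, the overlap word s can simply be read off by comparing lengths. A minimal sketch (the function name is mine):

```python
def levi(m, n, p, q):
    """Levi's lemma for words: given m+n == p+q, return s such that
    either m == p+s (and then s+n == q) or p == m+s (and then n == s+q)."""
    assert m + n == p + q, "the equation mn = pq must hold"
    if len(m) >= len(p):
        s = m[len(p):]              # m = p s, hence q = s n
        assert m == p + s and s + n == q
    else:
        s = p[len(m):]              # p = m s, hence n = s q
        assert p == m + s and n == s + q
    return s

# "ab" + "cd" == "abc" + "d", and the overlap is "c".
assert levi("ab", "cd", "abc", "d") == "c"
```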
Free generators and rank:
The members of a set A are called the free generators for A∗ and A+. The superscript * is then commonly understood to be the Kleene star. More generally, if S is an abstract free monoid (semigroup), then a set of elements which maps onto the set of single-letter words under an isomorphism to a monoid A∗ (semigroup A+) is called a set of free generators for S.
Free generators and rank:
Each free monoid (or semigroup) S has exactly one set of free generators, the cardinality of which is called the rank of S.
Free generators and rank:
Two free monoids or semigroups are isomorphic if and only if they have the same rank. In fact, every set of generators for a free monoid or semigroup S contains the free generators, since a free generator has word length 1 and hence can only be generated by itself. It follows that a free semigroup or monoid is finitely generated if and only if it has finite rank.
Free generators and rank:
A submonoid N of A∗ is stable if u, v, ux, xv in N together imply x in N. A submonoid of A∗ is stable if and only if it is free.
Free generators and rank:
For example, using the set of bits { "0", "1" } as A, the set N of all bit strings containing an even number of "1"s is a stable submonoid because if u contains an even number of "1"s, and ux as well, then x must contain an even number of "1"s, too. While N cannot be freely generated by any set of single bits, it can be freely generated by the set of bit strings { "0", "11", "101", "1001", "10001", ... } – the set of strings of the form "10n1" for some nonnegative integer n (along with the string "0").
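The bit-string example above can be verified mechanically: every word in N factors uniquely over the generators "0" and "1 0ⁿ 1", which is what freeness means. A small sketch (the greedy decomposition and its name are mine):

```python
def decompose(w):
    """Greedily factor a bit string with an even number of '1's into the
    free generators '0' and '1' 0^n '1' of the stable submonoid N."""
    assert w.count("1") % 2 == 0, "w must contain an even number of '1's"
    factors, i = [], 0
    while i < len(w):
        if w[i] == "0":
            factors.append("0")
            i += 1
        else:
            j = w.index("1", i + 1)      # match this '1' with the next one
            factors.append(w[i:j + 1])   # a generator of the form 1 0^n 1
            i = j + 1
    return factors

# "0110101" factors as 0 . 11 . 0 . 101, and concatenation recovers it.
assert decompose("0110101") == ["0", "11", "0", "101"]
assert "".join(decompose("0110101")) == "0110101"
```

Because the next "1" after an opening "1" is uniquely determined, this factorization is unique, which is exactly the freeness of the submonoid on these generators.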
Free generators and rank:
Codes A set of free generators for a free monoid P is referred to as a basis for P: a set of words C is a code if C* is a free monoid and C is a basis. A set X of words in A∗ is a prefix, or has the prefix property, if it does not contain a proper (string) prefix of any of its elements. Every prefix in A+ is a code, indeed a prefix code. A submonoid N of A∗ is right unitary if x, xy in N implies y in N. A submonoid is generated by a prefix if and only if it is right unitary.
Factorization:
A factorization of a free monoid is a sequence of subsets of words with the property that every word in the free monoid can be written as a concatenation of elements drawn from the subsets. The Chen–Fox–Lyndon theorem states that the Lyndon words furnish a factorization. More generally, Hall words provide a factorization; the Lyndon words are a special case of the Hall words.
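The Chen–Fox–Lyndon factorization can be computed in linear time with Duval's algorithm, which is standard but not described in the text above; this is a sketch of that algorithm, not of the Hall-word generalization:

```python
def lyndon_factorization(s):
    """Duval's algorithm: factor s into a non-increasing sequence of
    Lyndon words (the Chen-Fox-Lyndon factorization)."""
    factors, i, n = [], 0, len(s)
    while i < n:
        j, k = i + 1, i
        # Extend the current run of Lyndon-word repetitions.
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        # Emit each complete Lyndon word of length j - k found in the run.
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors

# "banana" factors as b >= an >= an >= a.
assert lyndon_factorization("banana") == ["b", "an", "an", "a"]
```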
Free hull:
The intersection of free submonoids of a free monoid A∗ is again free. If S is a subset of a free monoid A* then the intersection of all free submonoids of A* containing S is well-defined, since A* itself is free, and contains S; it is a free monoid and called the free hull of S. A basis for this intersection is a code.
Free hull:
The defect theorem states that if X is finite and C is the basis of the free hull of X, then either X is a code and C = X, or |C| ≤ |X| − 1.
Morphisms:
A monoid morphism f from a free monoid B∗ to a monoid M is a map such that f(xy) = f(x)⋅f(y) for words x,y and f(ε) = ι, where ε and ι denote the identity elements of B∗ and M, respectively. The morphism f is determined by its values on the letters of B and conversely any map from B to M extends to a morphism. A morphism is non-erasing or continuous if no letter of B maps to ι and trivial if every letter of B maps to ι. A morphism f from a free monoid B∗ to a free monoid A∗ is total if every letter of A occurs in some word in the image of f; cyclic or periodic if the image of f is contained in {w}∗ for some word w of A∗. A morphism f is k-uniform if the length |f(a)| is constant and equal to k for all a in B. A 1-uniform morphism is strictly alphabetic or a coding. A morphism f from a free monoid B∗ to a free monoid A∗ is simplifiable if there is an alphabet C of cardinality less than that of B such that the morphism f factors through C∗, that is, it is the composition of a morphism from B∗ to C∗ and a morphism from that to A∗; otherwise f is elementary. The morphism f is called a code if the image of the alphabet B under f is a code. Every elementary morphism is a code.
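The fact that a morphism is determined by its values on letters makes it easy to model: extend a dictionary of letter images to the whole free monoid. A minimal sketch (all names are illustrative):

```python
def extend(letter_images):
    """Extend a map B -> A* (given as a dict) to the unique monoid
    morphism B* -> A* it determines."""
    def f(word):
        return "".join(letter_images[b] for b in word)
    return f

# A 2-uniform morphism from {x, y}* to {a, b}*: every letter image has length 2.
f = extend({"x": "ab", "y": "ba"})
# Morphism property: f(xy) = f(x) f(y), and the identity maps to the identity.
assert f("xyx") == f("x") + f("y") + f("x") == "abbaab"
assert f("") == ""
# This morphism is non-erasing: no letter maps to the empty word.
assert all(img != "" for img in {"x": "ab", "y": "ba"}.values())
```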
Morphisms:
Test sets For L a subset of B∗, a finite subset T of L is a test set for L if morphisms f and g on B∗ agree on L if and only if they agree on T. The Ehrenfeucht conjecture is that any subset L has a test set: it has been proved independently by Albert and Lawrence; McNaughton; and Guba. The proofs rely on Hilbert's basis theorem.
Morphisms:
Map and fold The computational embodiment of a monoid morphism is a map followed by a fold. In this setting, the free monoid on a set A corresponds to lists of elements from A with concatenation as the binary operation. A monoid homomorphism from the free monoid to any other monoid (M,•) is a function f such that f(x1...xn) = f(x1) • ... • f(xn) and f() = e, where e is the identity on M. Computationally, every such homomorphism corresponds to a map operation applying f to all the elements of a list, followed by a fold operation which combines the results using the binary operator •. This computational paradigm (which can be generalized to non-associative binary operators) has inspired the MapReduce software framework.
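The map-then-fold realization of a monoid homomorphism can be sketched directly with `functools.reduce`; the helper name `hom` is mine:

```python
from functools import reduce

def hom(f, op, identity):
    """Build the monoid homomorphism from the free monoid (Python lists)
    to (M, op): map f over the list, then fold with the monoid operation."""
    return lambda xs: reduce(op, map(f, xs), identity)

# Word length as a homomorphism: send every element to 1, fold with +.
length = hom(lambda x: 1, lambda a, b: a + b, 0)
assert length(["a", "b", "c"]) == 3
assert length([]) == 0          # the empty word maps to the identity

# Homomorphism property: h(xs + ys) == h(xs) op h(ys).
assert length(["a", "b"] + ["c"]) == length(["a", "b"]) + length(["c"])
```

Because the fold operator is associative, the list can be split into chunks, mapped and folded independently, and the partial results combined, which is the observation behind MapReduce.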
Endomorphisms:
An endomorphism of A∗ is a morphism from A∗ to itself. The identity map I is an endomorphism of A∗, and the endomorphisms form a monoid under composition of functions.
An endomorphism f is prolongable if there is a letter a such that f(a) = as for a non-empty string s.
String projection The operation of string projection is an endomorphism. That is, given a letter a ∈ Σ and a string s ∈ Σ∗, the string projection pa(s) removes every occurrence of a from s; it is formally defined by pa(ε) = ε (the empty string), pa(sb) = pa(s) if b = a, and pa(sb) = pa(s)b if b ≠ a.
Endomorphisms:
Note that string projection is well-defined even if the rank of the monoid is infinite, as the above recursive definition works for all strings of finite length. String projection is a morphism in the category of free monoids, so that pa(Σ∗)=(Σ−a)∗ where pa(Σ∗) is understood to be the free monoid of all finite strings that don't contain the letter a. Projection commutes with the operation of string concatenation, so that pa(st)=pa(s)pa(t) for all strings s and t. There are many right inverses to string projection, and thus it is a split epimorphism.
Endomorphisms:
The identity morphism is pε, defined as pε(s)=s for all strings s, and pε(ε)=ε . String projection is commutative, as clearly pa(pb(s))=pb(pa(s)).
For free monoids of finite rank, this follows from the fact that free monoids of the same rank are isomorphic, as projection reduces the rank of the monoid by one.
String projection is idempotent, as pa(pa(s))=pa(s) for all strings s. Thus, projection is an idempotent, commutative operation, and so it forms a bounded semilattice or a commutative band.
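The three properties of string projection stated above (it is a morphism, idempotent, and commutative) are easy to check concretely; this sketch models pa over Python strings:

```python
def project(a, s):
    """String projection p_a: remove every occurrence of the letter a from s."""
    return "".join(c for c in s if c != a)

s, t = "abcab", "bca"
# p_a is a morphism: it commutes with concatenation, p_a(st) = p_a(s) p_a(t).
assert project("a", s + t) == project("a", s) + project("a", t)
# Idempotent: applying p_a twice is the same as applying it once.
assert project("a", project("a", s)) == project("a", s)
# Commutative: p_a p_b = p_b p_a.
assert project("a", project("b", s)) == project("b", project("a", s))
```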
The free commutative monoid:
Given a set A, the free commutative monoid on A is the set of all finite multisets with elements drawn from A, with the monoid operation being multiset sum and the monoid unit being the empty multiset.
For example, if A = {a, b, c}, elements of the free commutative monoid on A are of the form {ε, a, ab, a2b, ab3c4, ...}. The fundamental theorem of arithmetic states that the monoid of positive integers under multiplication is a free commutative monoid on an infinite set of generators, the prime numbers.
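The fundamental theorem of arithmetic can be illustrated with multisets as `collections.Counter` objects: multiplication of positive integers corresponds to multiset sum of prime factorizations. A sketch (the trial-division helper is mine):

```python
from collections import Counter
from math import prod

def factor(n):
    """Prime factorization of a positive integer n as a multiset (Counter)."""
    c, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            c[d] += 1
            n //= d
        d += 1
    if n > 1:
        c[n] += 1
    return c

# The monoid operation on multisets (Counter sum) matches multiplication:
assert factor(12) + factor(15) == factor(180)
# The unit 1 corresponds to the empty multiset.
assert factor(1) == Counter()
# Round trip: multiplying the primes back together recovers the integer.
assert prod(p ** k for p, k in factor(360).items()) == 360
```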
The free commutative semigroup is the subset of the free commutative monoid that contains all multisets with elements drawn from A except the empty multiset.
The free partially commutative monoid, or trace monoid, is a generalization that encompasses both the free and free commutative monoids as instances. This generalization finds applications in combinatorics and in the study of parallelism in computer science. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pulmonary hypoplasia**
Pulmonary hypoplasia:
Pulmonary hypoplasia is incomplete development of the lungs, resulting in an abnormally low number or small size of bronchopulmonary segments or alveoli. A congenital malformation, it most often occurs secondary to other fetal abnormalities that interfere with normal development of the lungs. Primary (idiopathic) pulmonary hypoplasia is rare and usually not associated with other maternal or fetal abnormalities.
Reported incidence of pulmonary hypoplasia ranges from 9–11 per 10,000 live births to 14 per 10,000 births. Pulmonary hypoplasia is a relatively common cause of neonatal death. It also is a common finding in stillbirths, although not regarded as a cause of these.
Causes:
Causes of pulmonary hypoplasia include a wide variety of congenital malformations and other conditions in which pulmonary hypoplasia is a complication. These include congenital diaphragmatic hernia, congenital cystic adenomatoid malformation, fetal hydronephrosis, caudal regression syndrome, mediastinal tumor, and sacrococcygeal teratoma with a large component inside the fetus. Large masses of the neck (such as cervical teratoma) also can cause pulmonary hypoplasia, presumably by interfering with the fetus's ability to fill its lungs. In the presence of pulmonary hypoplasia, the EXIT procedure to rescue a baby with a neck mass is not likely to succeed. Fetal hydrops can be a cause, or conversely a complication. Pulmonary hypoplasia is associated with oligohydramnios through multiple mechanisms. Both conditions can result from blockage of the urinary bladder. Blockage prevents the bladder from emptying, and the bladder becomes very large and full. The large volume of the full bladder interferes with normal development of other organs, including the lungs. Pressure within the bladder becomes abnormally high, causing abnormal function in the kidneys and hence abnormally high pressure in the vascular system entering the kidneys. This high pressure also interferes with normal development of other organs. An experiment in rabbits showed that PH also can be caused directly by oligohydramnios. Pulmonary hypoplasia is associated with dextrocardia of embryonic arrest in that both conditions can result from early errors of development, resulting in congenital cardiac disorders.
Causes:
PH is a common direct cause of neonatal death resulting from pregnancy-induced hypertension.
Diagnosis:
Medical diagnosis of pulmonary hypoplasia in utero may use imaging, usually ultrasound or MRI. The extent of hypoplasia is a very important prognostic factor. One study of 147 fetuses (49 normal, 98 with abnormalities) found that a simple measurement, the ratio of chest length to trunk (torso) length, was a useful predictor of postnatal respiratory distress. In a study of 23 fetuses, subtle differences seen on MRIs of the lungs were informative. In a study of 29 fetuses with suspected pulmonary hypoplasia, the group that responded to maternal oxygenation had a more favorable outcome. Pulmonary hypoplasia is also diagnosed clinically.
Management:
Management has three components: interventions before delivery, timing and place of delivery, and therapy after delivery.
Management:
In some cases, fetal therapy is available for the underlying condition; this may help to limit the severity of pulmonary hypoplasia. In exceptional cases, fetal therapy may include fetal surgery. A 1992 case report of a baby with a sacrococcygeal teratoma (SCT) reported that the SCT had obstructed the outlet of the urinary bladder, causing the bladder to rupture in utero and fill the baby's abdomen with urine (a form of ascites). The outcome was good. The baby had normal kidneys and lungs, leading the authors to conclude that obstruction occurred late in the pregnancy and to suggest that the rupture may have protected the baby from the usual complications of such an obstruction. Subsequent to this report, use of a vesicoamniotic shunting procedure (VASP) has been attempted, with limited success. Often, a baby with a high risk of pulmonary hypoplasia will have a planned delivery in a specialty hospital such as (in the United States) a tertiary referral hospital with a level 3 neonatal intensive-care unit. The baby may require immediate advanced resuscitation and therapy. Early delivery may be required in order to rescue the fetus from an underlying condition that is causing pulmonary hypoplasia. However, pulmonary hypoplasia increases the risks associated with preterm birth, because once delivered the baby requires adequate lung capacity to sustain life. The decision whether to deliver early includes a careful assessment of the extent to which delaying delivery may increase or decrease the pulmonary hypoplasia. It is a choice between expectant management and active management. An example is congenital cystic adenomatoid malformation with hydrops; impending heart failure may require a preterm delivery.
Severe oligohydramnios of early onset and long duration, as can occur with early preterm rupture of membranes, can cause increasingly severe PH; if delivery is postponed by many weeks, PH can become so severe that it results in neonatal death. After delivery, most affected babies will require supplemental oxygen. Some severely affected babies may be saved with extracorporeal membrane oxygenation (ECMO). Not all specialty hospitals have ECMO, and ECMO is considered the therapy of last resort for pulmonary insufficiency. An alternative to ECMO is high-frequency oscillatory ventilation.
History:
In 1908, Maude Abbott documented pulmonary hypoplasia occurring with certain defects of the heart. In 1915, Abbott and J. C. Meakins showed that pulmonary hypoplasia was part of the differential diagnosis of dextrocardia. In 1920, decades before the advent of prenatal imaging, the presence of pulmonary hypoplasia was taken as evidence that diaphragmatic hernias in babies were congenital, not acquired. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Regression diagnostic**
Regression diagnostic:
In statistics, a regression diagnostic is one of a set of procedures available for regression analysis that seek to assess the validity of a model in any of a number of different ways. This assessment may be an exploration of the model's underlying statistical assumptions, an examination of the structure of the model by considering formulations that have fewer, more or different explanatory variables, or a study of subgroups of observations, looking for those that are either poorly represented by the model (outliers) or that have a relatively large effect on the regression model's predictions.
Regression diagnostic:
A regression diagnostic may take the form of a graphical result, informal quantitative results or a formal statistical hypothesis test, each of which provides guidance for further stages of a regression analysis.
Introduction:
Regression diagnostics have often been developed or were initially proposed in the context of linear regression or, more particularly, ordinary least squares. This means that many formally defined diagnostics are only available for these contexts.
Assessing assumptions:
Distribution of model errors: normal probability plot. Homoscedasticity: Goldfeld–Quandt test, Breusch–Pagan test, Park test, White test. Correlation of model errors: Breusch–Godfrey test.
Assessing model structure:
Adequacy of existing explanatory variables: partial residual plot; Ramsey RESET test; F test, for use when there are replicated observations, so that a comparison can be made between the lack-of-fit sum of squares and the pure-error sum of squares, under the assumption that model errors are homoscedastic and have a normal distribution. Adding or dropping explanatory variables: partial regression plot; Student's t test for testing inclusion of a single explanatory variable, or the F test for testing inclusion of a group of variables, both under the assumption that model errors are homoscedastic and have a normal distribution. Change of model structure between groups of observations: structural break test; Chow test. Comparing model structures: PRESS statistic.
Important groups of observations:
Outliers. Influential observations: leverage (statistics), partial leverage, DFFITS, Cook's distance. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
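Two of the listed diagnostics for influential observations, leverage (the hat values) and Cook's distance, can be computed by hand for simple one-predictor least squares. This is an illustrative pure-Python sketch using the textbook formulas; the function name and data are mine:

```python
def diagnostics(x, y):
    """Leverage and Cook's distance for simple (one-predictor) OLS."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    # Fit y = b0 + b1 x by least squares.
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    p = 2                                        # number of fitted parameters
    s2 = sum(e * e for e in resid) / (n - p)     # residual variance estimate
    # Hat values: h_ii = 1/n + (x_i - xbar)^2 / Sxx.
    lev = [1 / n + (xi - xbar) ** 2 / sxx for xi in x]
    # Cook's distance: D_i = (e_i^2 / (p s^2)) * h_ii / (1 - h_ii)^2.
    cook = [(e * e / (p * s2)) * (h / (1 - h) ** 2)
            for e, h in zip(resid, lev)]
    return lev, cook

# An outlying last point gets the highest leverage and Cook's distance.
x = [1, 2, 3, 4, 10]
y = [1.1, 1.9, 3.2, 3.9, 20.0]
lev, cook = diagnostics(x, y)
assert max(lev) == lev[-1] and max(cook) == cook[-1]
```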
**Dephasing rate SP formula**
Dephasing rate SP formula:
The SP formula for the dephasing rate Γφ of a particle that moves in a fluctuating environment unifies various results that have been obtained, notably in condensed matter physics, with regard to the motion of electrons in a metal. The general case requires taking into account not only the temporal correlations but also the spatial correlations of the environmental fluctuations. These can be characterized by the spectral form factor S~(q,ω), while the motion of the particle is characterized by its power spectrum P~(q,ω). Consequently, at finite temperature the expression for the dephasing rate takes the following form that involves the S and P functions: Γφ = ∫dq ∫(dω/2π) S~(q,ω) P~(−q,−ω). Due to inherent limitations of the semiclassical (stationary phase) approximation, the physically correct procedure is to use the non-symmetrized quantum versions of S~(q,ω) and P~(q,ω). The argument is based on the analogy of the above expression with the Fermi-golden-rule calculation of the transitions that are induced by the system-environment interaction.
Derivation:
It is most illuminating to understand the SP formula in the context of the DLD model, which describes motion in dynamical disorder. In order to derive the dephasing rate formula from first principles, a purity-based definition of the dephasing factor can be adopted. The purity P(t) = e^{−F(t)} describes how a quantum state becomes mixed due to the entanglement of the system with the environment. Using perturbation theory, one recovers at finite temperatures, in the long time limit, F(t) = Γ_φ t, where the decay constant is given by the dephasing rate formula with non-symmetrized spectral functions, as expected. There is a somewhat controversial possibility of getting power-law decay of P(t) in the limit of zero temperature. The proper way to incorporate Pauli blocking in the many-body dephasing calculation, within the framework of the SP formula approach, has been clarified as well.
Example:
For the standard 1D Caldeira-Leggett Ohmic environment, with temperature T and friction η, the spectral form factor is

S̃(q,ω) = (2π δ(q)/q²) [2ηω/(1 − e^{−ω/T})]

This expression reflects that in the classical limit the electron experiences "white temporal noise", meaning a force that is not correlated in time but is uniform in space (high-q components are absent). In contrast, for diffusive motion of an electron in a 3D metallic environment, which is created by the rest of the electrons, the spectral form factor is

S̃(q,ω) = (1/(νDq²)) [2ω/(1 − e^{−ω/T})]
Example:
This expression reflects that in the classical limit the electron experiences "white spatio-temporal noise", meaning a force that is correlated neither in time nor in space. The power spectrum of a single diffusive electron is

P̃(q,ω) = 2Dq² / (ω² + (Dq²)²)

But in the many-body context this expression acquires a "Fermi blocking factor":

P̃(q,ω) = (d/dω)[ω/(1 − e^{−ω/T})] × 2Dq² / (ω² + (Dq²)²)

Calculating the SP integral we get the well-known result Γ_φ ∝ T^{3/2}.
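As a small consistency check on the diffusive power spectrum quoted above: the Lorentzian P̃(q,ω) = 2Dq²/(ω² + (Dq²)²) integrates to unity over dω/(2π) at every fixed q, as expected for the power spectrum of a normalized trajectory observable. A minimal numeric sketch (the values of D and q are arbitrary illustrative choices):

```python
import numpy as np
from scipy.integrate import quad

D, q = 0.7, 1.3          # arbitrary illustrative values
a = D * q**2             # Lorentzian half-width D*q^2

# Power spectrum of a single diffusive particle at wavevector q.
def P(omega):
    return 2 * a / (omega**2 + a**2)

# Integrate P over domega/(2*pi); analytically the result is exactly 1,
# since the integral of 2a/(w^2 + a^2) over all w equals 2*pi.
val, err = quad(lambda w: P(w) / (2 * np.pi), -np.inf, np.inf)
print(val)   # ≈ 1.0
```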
**Schwartz–Bruhat function**
Schwartz–Bruhat function:
In mathematics, a Schwartz–Bruhat function, named after Laurent Schwartz and François Bruhat, is a complex-valued function on a locally compact abelian group, such as the adeles, that generalizes a Schwartz function on a real vector space. A tempered distribution is defined as a continuous linear functional on the space of Schwartz–Bruhat functions.
Definitions:
On a real vector space R^n, the Schwartz–Bruhat functions are just the usual Schwartz functions (all derivatives rapidly decreasing), and they form the space S(R^n). On a torus, the Schwartz–Bruhat functions are the smooth functions.
On a sum of copies of the integers, the Schwartz–Bruhat functions are the rapidly decreasing functions.
On an elementary group (i.e., an abelian locally compact group that is a product of copies of the reals, the integers, the circle group, and finite groups), the Schwartz–Bruhat functions are the smooth functions all of whose derivatives are rapidly decreasing.
Definitions:
On a general locally compact abelian group G , let A be a compactly generated subgroup, and B a compact subgroup of A such that A/B is elementary. Then the pullback of a Schwartz–Bruhat function on A/B is a Schwartz–Bruhat function on G , and all Schwartz–Bruhat functions on G are obtained like this for suitable A and B . (The space of Schwartz–Bruhat functions on G is endowed with the inductive limit topology.) On a non-archimedean local field K , a Schwartz–Bruhat function is a locally constant function of compact support.
Definitions:
In particular, on the ring of adeles A_K over a global field K, the Schwartz–Bruhat functions f are finite linear combinations of the products ∏_v f_v over the places v of K, where each f_v is a Schwartz–Bruhat function on the local field K_v, and f_v = 1_{O_v} is the characteristic function of the ring of integers O_v for all but finitely many v. (For the archimedean places of K, the f_v are just the usual Schwartz functions on R^n, while for the non-archimedean places the f_v are the Schwartz–Bruhat functions of non-archimedean local fields.) The space of Schwartz–Bruhat functions on the adeles A_K is defined to be the restricted tensor product S(A_K) := lim_{→E} (⊗_{v∈E} S(K_v)) of the Schwartz–Bruhat spaces S(K_v) of the local fields, where E ranges over finite sets of places of K. The elements of this space are of the form f = ⊗_v f_v, where f_v ∈ S(K_v) for all v and f_v|_{O_v} = 1 for all but finitely many v. For each x = (x_v)_v ∈ A_K we can write f(x) = ∏_v f_v(x_v), which is a finite product and thus well defined.
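To make the restricted product concrete, here is a small sketch (all function names are hypothetical) that evaluates a pure-tensor Schwartz–Bruhat function on the rational adeles at a rational point embedded diagonally. The archimedean factor is taken to be the Gaussian e^{−πx²}, every finite factor is 1_{Z_p}, and the product over primes is finite because only primes dividing the denominator of x can contribute a factor other than 1:

```python
import math
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational."""
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def indicator_Zp(x: Fraction, p: int) -> int:
    """1_{Z_p}(x): equals 1 iff v_p(x) >= 0."""
    return 1 if x == 0 or vp(x, p) >= 0 else 0

def f_adelic(x: Fraction) -> float:
    """Evaluate f = f_inf ⊗ (⊗_p 1_{Z_p}) at the diagonal adele of x.
    Only primes dividing the denominator of x can make a finite factor 0."""
    f_inf = math.exp(-math.pi * float(x) ** 2)  # archimedean Gaussian factor
    den = x.denominator
    p = 2
    while den > 1:
        if den % p == 0:
            if indicator_Zp(x, p) == 0:
                return 0.0
            while den % p == 0:
                den //= p
        p += 1
    return f_inf

print(f_adelic(Fraction(2)))      # all finite factors equal 1, Gaussian survives
print(f_adelic(Fraction(1, 2)))   # 0.0, since 1/2 is not in Z_2
```

This also illustrates why a diagonal rational lies in the support of ⊗_p 1_{Z_p} exactly when it is an integer.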
Examples:
Every Schwartz–Bruhat function f ∈ S(Q_p) can be written as f = ∑_{i=1}^n c_i 1_{a_i + p^{k_i} Z_p}, where each a_i ∈ Q_p, k_i ∈ Z, and c_i ∈ C. This can be seen by observing that Q_p being a local field implies that f by definition has compact support, i.e., supp(f) has a finite subcover. Since every open set in Q_p can be expressed as a disjoint union of open balls of the form a + p^k Z_p (for some a ∈ Q_p and k ∈ Z), we have supp(f) = ∐_{i=1}^n (a_i + p^{k_i} Z_p). The function f must also be locally constant, so f|_{a_i + p^{k_i} Z_p} = c_i 1_{a_i + p^{k_i} Z_p} for some c_i ∈ C. (As for f evaluated at zero, f(0) 1_{Z_p} is always included as a term.)

On the rational adeles A_Q, all functions in the Schwartz–Bruhat space S(A_Q) are finite linear combinations of ∏_{p≤∞} f_p = f_∞ × ∏_{p<∞} f_p over all rational primes p, where f_∞ ∈ S(R), f_p ∈ S(Q_p), and f_p = 1_{Z_p} for all but finitely many p. The sets Q_p and Z_p are the field of p-adic numbers and the ring of p-adic integers, respectively.
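The representation of f as a finite sum of indicators of balls can be turned into a tiny evaluator, since membership x ∈ a + p^k Z_p is just the valuation condition v_p(x − a) ≥ k. A sketch restricted to rational inputs (names are illustrative; balls may overlap in general, and the evaluator simply sums the indicator terms):

```python
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational."""
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def in_ball(x: Fraction, a: Fraction, k: int, p: int) -> bool:
    """x in a + p^k Z_p  iff  x == a or v_p(x - a) >= k."""
    return x == a or vp(x - a, p) >= k

def evaluate(f, x: Fraction, p: int):
    """f is a list of (c_i, a_i, k_i) triples representing
    sum_i c_i * 1_{a_i + p^{k_i} Z_p}."""
    return sum(c for (c, a, k) in f if in_ball(x, a, k, p))

# f = 2 * 1_{Z_3} - 1 * 1_{1 + 3 Z_3}, an element of S(Q_3)
f = [(2, Fraction(0), 0), (-1, Fraction(1), 1)]
print(evaluate(f, Fraction(9), 3))    # 9 in Z_3, 9 not in 1+3Z_3  -> 2
print(evaluate(f, Fraction(4), 3))    # 4 in both balls            -> 1
print(evaluate(f, Fraction(1, 3), 3)) # 1/3 not in Z_3             -> 0
```

Local constancy is visible here: the value at x depends only on which of the finitely many balls contain x.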
Properties:
The Fourier transform of a Schwartz–Bruhat function on a locally compact abelian group is a Schwartz–Bruhat function on the Pontryagin dual group. Consequently, the Fourier transform takes tempered distributions on such a group to tempered distributions on the dual group. Given the (additive) Haar measure on A_K, the Schwartz–Bruhat space S(A_K) is dense in the space L²(A_K, dx).
Applications:
In algebraic number theory, the Schwartz–Bruhat functions on the adeles can be used to give an adelic version of the Poisson summation formula from analysis: for every f ∈ S(A_K) one has ∑_{x∈K} f(ax) = (1/|a|) ∑_{x∈K} f̂(a⁻¹x), where a ∈ A_K^×. John Tate developed this formula in his doctoral thesis to prove a more general version of the functional equation for the Riemann zeta function. This involves giving the zeta function of a number field an integral representation in which the integral of a Schwartz–Bruhat function, chosen as a test function, is twisted by a certain character and integrated over A_K^× with respect to the multiplicative Haar measure of this group. This allows one to apply analytic methods to study zeta functions through these zeta integrals.
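For the special choice K = Q, f = f_∞ ⊗ (⊗_p 1_{Z_p}) with f_∞ a Schwartz function, and a = t ∈ R_{>0} placed at the archimedean coordinate, the adelic formula reduces to the classical Poisson summation over Z: ∑_n f_∞(tn) = (1/t) ∑_n f̂_∞(n/t). A quick numeric check with the self-dual Gaussian e^{−πx²} (an illustrative sketch, not Tate's construction):

```python
import math

def gaussian(x: float) -> float:
    # e^{-pi x^2} is its own Fourier transform (with kernel e^{-2*pi*i*x*xi})
    return math.exp(-math.pi * x * x)

def lhs(t: float, N: int = 50) -> float:
    # sum over n in Z of f(t*n), truncated (terms decay super fast)
    return sum(gaussian(t * n) for n in range(-N, N + 1))

def rhs(t: float, N: int = 50) -> float:
    # (1/t) * sum over n in Z of f_hat(n/t); f_hat = f for this Gaussian
    return sum(gaussian(n / t) for n in range(-N, N + 1)) / t

t = 2.0
print(lhs(t), rhs(t))   # the two sums agree to machine precision
```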
**Modern psychoanalysis**
Modern psychoanalysis:
Modern psychoanalysis is the term used by Hyman Spotnitz to describe the techniques he developed for the treatment of narcissistic (also called preverbal or preoedipal) disorders.
Theory:
Narcissism is understood (by Spotnitz) as a state in which unexpressed aggression and hostility are trapped within the psychic apparatus, with corrosive effects on mind and body. The bottled-up aggression is turned against the self by a weak and undeveloped ego that is not capable of handling the stress of hateful feelings. The techniques of modern psychoanalysis are aimed at allowing the ego to direct aggression outward in productive ways and at protecting a fragile ego against the self-attack seen in cases ranging from schizophrenia, depression, and somatization to neurotic forms of self-sabotage. This is accomplished by helping the patient to "say everything." The ego is protected by what are called "object-oriented questions." These are questions directed toward the motives of other people rather than the patient, e.g., "What makes her do that?" or "Why did he do that?"

To guide the quality and number of such interventions, modern analysts follow the "contact function," the efforts made by the patient to establish some discourse with the analyst. Questions asked by the patient indicate what the patient is ready to talk about and are explored to help the patient say more. Meadow describes the contact function as responding "in kind," thus replacing subjectively determined timing, as used in traditional insight-oriented interpretation, with what might be called "demand feeding."

In the interest of helping patients to say everything while functioning at an optimum level, the analyst refrains from interpreting defenses and instead "joins the resistance." In joining, the analyst conveys acceptance of the patient's thoughts and feelings, stated or unstated, conscious or unconscious. Joining reduces the need for a particular defense by making the patient less defensive. Although modern analysis forgoes interpretation as the main form of intervention, it retains the classical psychoanalytic focus on transference, countertransference, and resistance.
The transference is usually a narcissistic one in which feelings and patterns of defense from the first years of life are revived. The "narcissistic transference" is not so much a projection of figures from the past onto the analyst as an externalization of parts of the patient's self. Often a benign feeling of oneness with the analyst prevails at the beginning of treatment. Such patients may make little or no contact with the analyst. Modern analysts find that narcissistic transference develops in all patients, and to facilitate its full expression they recommend that the analyst not attempt to correct the patient's perceptions, which would emphasize the differences between patient and analyst and undermine their narcissistic connection. Since patients who are struggling with bottled-up rage often hate themselves, they are apt to hate the analyst as well. The transference, which binds them to the therapist, permits the expression of feelings patients cannot own. In the negative narcissistic transference, they hate the analyst as they hate themselves. When the analyst is seen as an extension of the self, aggression may be more freely and safely expressed, lessening patients' self-hatred and allowing them to slowly emerge from their narcissistic state.

Patients are encouraged to have and express all their feelings toward their analysts, including the most hostile and negative ones. Analysts are expected to have, but not necessarily express, all possible feelings for their patients. Eventually the analyst's emotional responses (objective countertransference) will be used for therapeutic purposes, but not until patients are able to hear them without narcissistic injury.
In The Edinburgh International Encyclopedia of Psychoanalysis, an entry describing modern psychoanalysis reads in part: "The analyst was advised to use induced countertransference emotions as the basis for responses to the patient rather than cognitive explanations…. The modern talking cure emphasizes experiences lived and spoken in the analytic room: de-emphasizing reconstruction of the past."

An outcome study by Meadow explored the relative effectiveness of two types of interventions: interpretation and reflection. In the presence of a transference resistance she randomly offered either an interpretation of unconscious motives or a joining of the defense. However, this type of quantitative statistical study is unusual in the psychoanalytic community. The qualitative research method recommended by modern analytic institutes is described in an issue of the journal Modern Psychoanalysis. Candidates conduct single case studies in which the psychoanalytic sessions are used as laboratories to investigate the unconscious motives of specific transference resistances. Other modern analytic writings consider such topics as a comparison of the work of Kernberg, Kohut, and Spotnitz; the interactions of the psyche and soma; the application of modern techniques in schools, in analytic training, and in groups; and gender studies.

Spotnitz's repeated advice to clinicians he trained was to "just get the patient to say everything." The book Just Say Everything has contributions by those who were analyzed or supervised by Spotnitz, who "say everything" about Spotnitz and themselves.
Institutions:
A number of institutes offer training in modern psychoanalysis leading to licensure, certification, and/or advanced academic degrees.
Institutions:
The Center for Modern Psychoanalytic Studies in New York offers a certificate leading to eligibility for New York state licensure as a psychoanalyst. The Boston Graduate School of Psychoanalysis in Massachusetts offers accredited masters and doctoral degrees in psychoanalysis. It also offers a non-clinical doctoral degree in psychoanalysis and culture. Other modern analytic institutes include The Center for Human Development in New York City, The Philadelphia School of Psychoanalysis, The Academy of Clinical and Applied Psychoanalysis in New Jersey, and the New Jersey Center for Modern Psychoanalysis.