**Flying height**
Flying height:
The flying height, floating height, or head gap is the distance between the disk read/write head on a hard disk drive and the platter. The first commercial hard-disk drive, the IBM 305 RAMAC (1956), used forced air to maintain a gap of 0.002 inches (51 μm) between the head and the disk. The IBM 1301, introduced in 1961, was the first disk drive in which the head was attached to a "hydrodynamic air bearing slider," which generates its own cushion of pressurized air, allowing the slider and head to fly much closer, 0.00025 inches (6.35 μm), above the disk surface. In 2011, the flying height in modern drives was a few nanometers (about 5 nm). At that height, the head can collide with an obstruction as thin as a fingerprint or a particle of smoke. Despite the risk of drive failure from such foreign objects, hard drives generally allow for ventilation (albeit through a filter) so that the air pressure within the drive can equalize with the air pressure outside. Because disk drives depend on the head floating on a cushion of air, they are not designed to operate in a vacuum. Regulation of flying height will become even more important in future high-capacity drives. However, hermetically sealed enclosures are beginning to be adopted for hard drives filled with helium gas, with the first products launched in December 2015, starting with capacities of 10 TB.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Dual-motor, four-wheel-drive layout**
Dual-motor, four-wheel-drive layout:
In automotive design, the dual-motor, four-wheel-drive layout is used mainly in battery electric vehicles: electric motors are placed on both the front and rear axles, driving all four roadwheels and creating a four-wheel-drive layout. This layout is made possible by the small size of electric motors compared with internal combustion engines, which allows them to be placed in multiple locations. It also eliminates the need for the drive shaft commonly used by conventional four-wheel-drive vehicles, freeing space for the batteries that are typically mounted in the floor of electric vehicles. The layout is also beneficial for distributing the available electrical power to maximize torque and power in response to road grip conditions and weight transfer in the vehicle. For example, during hard acceleration the front motor must reduce torque and power to prevent the front wheels from spinning as weight transfers to the rear of the vehicle; the excess power is transferred to the rear motor, where it can be used immediately. The opposite applies when braking, when the front motor can accept more regenerative braking torque and power. In addition, electric vehicles may be equipped with more than two electric motors to achieve greater power output and superior handling. The first mass-produced triple-motor layout was introduced on the Audi e-tron in 2020, which consists of one motor at the front and two motors at the rear.
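To make the torque-shifting idea concrete, here is a minimal Python sketch of front/rear torque allocation under weight transfer. It is an illustration only, not any manufacturer's traction-control algorithm; the vehicle parameters, the simple weight-transfer model, and the even starting split are all assumptions chosen for readability.

```python
# Simplified sketch of front/rear torque allocation in a dual-motor EV.
# All vehicle parameters and the weight-transfer model are illustrative assumptions.

MASS_KG = 2000.0           # vehicle mass (assumed)
WHEELBASE_M = 2.9          # wheelbase (assumed)
CG_HEIGHT_M = 0.55         # centre-of-gravity height (assumed)
STATIC_FRONT_SHARE = 0.5   # static front/rear weight split (assumed)
MU = 0.9                   # tyre-road friction coefficient (assumed)
WHEEL_RADIUS_M = 0.35
G = 9.81

def allocate_torque(requested_nm: float, accel_mps2: float) -> tuple[float, float]:
    """Split a total torque request between front and rear motors.

    Under acceleration, weight transfers rearward, so the front axle's traction
    limit drops and surplus torque is sent to the rear; braking reverses this.
    """
    # Longitudinal weight transfer between axles
    transfer_n = MASS_KG * accel_mps2 * CG_HEIGHT_M / WHEELBASE_M
    front_load_n = MASS_KG * G * STATIC_FRONT_SHARE - transfer_n
    rear_load_n = MASS_KG * G * (1 - STATIC_FRONT_SHARE) + transfer_n

    # Traction-limited torque each axle can transmit to the road
    front_limit = MU * max(front_load_n, 0.0) * WHEEL_RADIUS_M
    rear_limit = MU * max(rear_load_n, 0.0) * WHEEL_RADIUS_M

    # Start from an even split, then move any excess to the other axle
    front = min(requested_nm / 2, front_limit)
    rear = min(requested_nm - front, rear_limit)
    front = min(requested_nm - rear, front_limit)
    return front, rear

if __name__ == "__main__":
    # Hard acceleration: the rear motor receives the larger share
    print(allocate_torque(requested_nm=6000.0, accel_mps2=5.0))
```

With these assumed numbers, a 6000 N·m request under hard acceleration comes out at roughly 2500 N·m front and 3500 N·m rear, mirroring the rearward shift described above.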
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Testosterone decanoate**
Testosterone decanoate:
Testosterone decanoate (BAN) is an androgen and anabolic steroid and a testosterone ester. It is a component of Sustanon, along with testosterone propionate, testosterone phenylpropionate, and testosterone isocaproate. The medication has not been marketed as a single-drug preparation. Testosterone decanoate has been investigated as a potential long-acting injectable male contraceptive. It has a longer duration of action than testosterone enanthate, but its duration is not as prolonged as that of testosterone undecanoate.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**TF-1 cell**
TF-1 cell:
TF-1 cells are an immortalized cell line derived from human erythroleukemia and used in biomedical research. These cells proliferate in response to interleukin-3 (IL-3) or granulocyte-macrophage colony-stimulating factor (GM-CSF). TF-1 cells carry a CBFA2T3-ABHD12 gene fusion.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Panna cotta**
Panna cotta:
Panna cotta (Italian for "cooked cream") is an Italian dessert of sweetened cream thickened with gelatin and molded. The cream may be aromatized with coffee, vanilla, or other flavorings.
History:
The name panna cotta is not mentioned in Italian cookbooks before the 1960s, yet it is often cited as a traditional dessert of the northern Italian region of Piedmont. One unverified story says that it was invented by a Hungarian woman in the Langhe in the early 1900s. An 1879 dictionary mentions a dish called latte inglese ("English milk"), made of cream cooked with gelatin and molded, though other sources say that latte inglese is made with egg yolks, like crème anglaise; perhaps the name covered any thickened custard-like preparation.
History:
The dish might also come from the French recipe for fromage bavarois by Marie-Antoine Carême. The recipe found in Le Pâtissier royal parisien is essentially the same as modern panna cotta, except that part of the cream is whipped into chantilly and folded into the preparation before the gelatin is added. The Region of Piedmont includes panna cotta in its 2001 list of traditional food products. Its recipe includes cream, milk, sugar, vanilla, gelatin, rum, and marsala poured into a mold with caramel. Another author considers the traditional flavoring to be peach eau-de-vie, and the traditional presentation to have no sauce or other garnishes. Panna cotta became fashionable in the United States in the 1990s.
Preparation:
Sugar is dissolved in warm cream. The cream may be flavored by infusing spices and the like in it or by adding rum, coffee, vanilla, and so on. Gelatin is dissolved in a cold liquid (usually water), then added to the warm cream mixture. This is poured into molds and allowed to set. The molds may have caramel in the bottoms, giving a result similar to a crème caramel. After it solidifies, the panna cotta is usually unmolded onto a serving plate.
Preparation:
Although the name means 'cooked cream', the ingredients are only warm enough to dissolve the gelatin and sugar. Italian recipes sometimes call for colla di pesce 'fish glue,' which may literally be isinglass or, more likely, simply a name for common gelatin.
Garnishes:
Panna cotta is often served with a coulis of berries or a sauce of caramel or chocolate. It may be covered with other fruits or liqueurs.
Related dishes:
Bavarian cream is similar to panna cotta but usually includes eggs as well as gelatin and is mixed with whipped cream before setting. Blancmange is sometimes thickened with gelatin or isinglass, and sometimes with cornstarch. Panna cotta is sometimes called a custard, but a true custard is thickened with egg yolks, not gelatin. A lighter version substitutes Greek yogurt for the cream.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Baud**
Baud:
In telecommunication and electronics, baud (symbol: Bd) is a common unit of measurement of symbol rate, which is one of the components that determine the speed of communication over a data channel.
It is the unit for symbol rate or modulation rate in symbols per second or pulses per second. It is the number of distinct symbol changes (signalling events) made to the transmission medium per second in a digitally modulated signal or a baud rate line code.
Baud is related to gross bit rate, which can be expressed in bits per second. If there are precisely two symbols in the system (typically 0 and 1), then baud and bit per second (bit/s) are equivalent.
Naming:
The baud unit is named after Émile Baudot, the inventor of the Baudot code for telegraphy, and is represented according to the rules for SI units. That is, the first letter of its symbol is uppercase (Bd), but when the unit is spelled out, it should be written in lowercase (baud) except when it begins a sentence.
It was defined by the CCITT (now the ITU) in November 1926. The earlier standard had been the number of words per minute, which was a less robust measure since word length can vary.
Definitions:
The symbol duration time, also known as the unit interval, can be directly measured as the time between transitions by looking at an eye diagram of the signal on an oscilloscope. The symbol duration time Ts can be calculated as Ts = 1/fs, where fs is the symbol rate.
Because baud is often confused with bit rate, there is also a chance of miscommunication that leads to ambiguity.
Definitions:
Example: Communication at the baud rate 1000 Bd means communication by sending 1000 symbols per second. In the case of a modem, this corresponds to 1000 tones per second; similarly, in the case of a line code, this corresponds to 1000 pulses per second. The symbol duration time is 1/1000 second (that is, 1 millisecond). In digital systems (i.e., using discrete/discontinuous values) with binary code, 1 Bd = 1 bit/s. By contrast, non-digital (or analog) systems use a continuous range of values to represent information, and in these systems the exact informational size of 1 Bd varies.
Definitions:
The baud is scaled using standard metric prefixes, so that, for example: 1 kBd (kilobaud) = 1000 Bd; 1 MBd (megabaud) = 1000 kBd; 1 GBd (gigabaud) = 1000 MBd.
Relationship to gross bit rate:
The symbol rate is related to gross bit rate expressed in bit/s.
Relationship to gross bit rate:
The term baud has sometimes incorrectly been used to mean bit rate, since these rates are the same in old modems as well as in the simplest digital communication links using only one bit per symbol, such that binary digit "0" is represented by one symbol, and binary digit "1" by another symbol. In more advanced modems and data transmission techniques, a symbol may have more than two states, so it may represent more than one bit. A bit (binary digit) always represents one of two states.
Relationship to gross bit rate:
If N bits are conveyed per symbol, and the gross bit rate is R, inclusive of channel coding overhead, the symbol rate fs can be calculated as fs = R/N.
By taking the information per pulse N in bit/pulse to be the base-2 logarithm of the number of distinct messages M that could be sent, Hartley constructed a measure of the gross bit rate R as R = fs × N, where N = ⌈log2(M)⌉.
Here, ⌈x⌉ denotes the ceiling function of x: for any real number x greater than zero, the ceiling function rounds x up to the nearest natural number (for example, ⌈2.11⌉ = 3).
Relationship to gross bit rate:
In that case, M = 2N different symbols are used. In a modem, these may be time-limited sinewave tones with unique combinations of amplitude, phase and/or frequency. For example, in a 64QAM modem, M = 64, and so the bit rate is N = log2(64) = 6 times the baud rate. In a line code, these may be M different voltage levels.
Relationship to gross bit rate:
The ratio is not necessarily even an integer; in 4B3T coding, the bit rate is 4/3 of the baud rate. (A typical basic rate interface with a 160 kbit/s raw data rate operates at 120 kBd.) Codes with many symbols, and thus a bit rate higher than the symbol rate, are most useful on channels such as telephone lines with a limited bandwidth but a high signal-to-noise ratio within that bandwidth. In other applications, the bit rate is less than the symbol rate. Eight-to-fourteen modulation as used on audio CDs has bit rate 8/17 of the baud rate.
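As a worked illustration of the relationships above (fs = R/N and N = ⌈log2(M)⌉), the following Python sketch reproduces the 64QAM and 4B3T figures from the text; the 9600 bit/s rate in the first example is just an assumed input for demonstration.

```python
import math

def symbol_rate(gross_bit_rate: float, bits_per_symbol: float) -> float:
    """fs = R / N: symbols per second from gross bit rate and bits per symbol."""
    return gross_bit_rate / bits_per_symbol

def bits_per_symbol(num_symbols: int) -> int:
    """N = ceil(log2(M)): bits conveyed by each of M distinct symbols."""
    return math.ceil(math.log2(num_symbols))

# 64QAM modem: M = 64 symbols, so N = 6 and the bit rate is 6x the baud rate.
n = bits_per_symbol(64)              # -> 6
print(symbol_rate(9600, n))          # an assumed 9600 bit/s needs only 1600 Bd

# 4B3T line code: 4 bits are carried in 3 ternary symbols, so the bit rate is
# 4/3 of the baud rate (e.g. a 160 kbit/s raw data rate operates at 120 kBd).
print(symbol_rate(160_000, 4 / 3))   # -> 120000.0 Bd
```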
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Library Freedom Project**
Library Freedom Project:
The Library Freedom Project teaches librarians about surveillance threats, privacy rights, and digital tools to thwart surveillance. In 2015 the Project began an endeavour to place relays and, particularly, exit nodes of the Tor anonymity network in public libraries.
Tor Exit Relay Project:
Its pilot project enabled the Kilton Public Library in Lebanon, New Hampshire to become, in July 2015, the first library in the United States to host Tor, running a middle relay on its excess bandwidth. This service was put on hold in early September, however, when the library was visited by the local police department after they had received a "heads up" e-mail from the Department of Homeland Security highlighting the criminal uses of the Tor network (and which falsely claimed that this was the network's primary usage), whereupon the library began reconsidering the deployment from a public relations perspective. After an outpouring of support from the Electronic Frontier Foundation, the Massachusetts and New Hampshire affiliates of the ACLU, the Tor Project itself, an editorial in the local paper Valley News strongly in favor of the pilot project, and virtually unanimous public testimony, the library board of trustees decided on 15 September 2015 to renew the anonymity service, letting stand its previous unanimous vote to establish the middle relay. A dozen libraries and their supporters nationwide expressed interest in hosting their own nodes after the DHS involvement became public (an example of the Streisand effect), and U.S. Rep. Zoe Lofgren (D-Calif) released a letter on 10 December 2015, in which she asked the DHS to clarify its procedures, stating that "While the Kilton Public Library's board ultimately voted to restore their Tor relay, I am no less disturbed by the possibility that DHS employees are pressuring or persuading public and private entities to discontinue or degrade services that protect the privacy and anonymity of U.S. citizens." In March 2016, New Hampshire state representative Keith Ammon introduced a bill allowing public libraries to run privacy software; the bill specifically referenced Tor. The bill was crafted with extensive input from Library Freedom director Alison Macrina, and was the direct result of the Kilton Public Library imbroglio. The bill was passed by the House 268-62. Also in March 2016, the first Tor middle relay at a library in Canada was established, at the University of Western Ontario. Given that the running of a Tor exit node is an unsettled area of Canadian law, and that institutions are better able than individuals to cope with legal pressures, Alison Macrina has opined that in some ways she would like to see intelligence agencies and law enforcement attempt to intervene in the event that an exit node were established. Also in March 2016, the Library Freedom Project was awarded the Free Software Foundation's 2015 Free Software Award for Projects of Social Benefit at MIT. As of 26 June 2016, the Kilton Library is the only library in the U.S. running a Tor exit node. However, in August of that same year, Kilton Library's IT Manager, Chuck McAndrew, said they still hoped other libraries would run their own, adding, "We always planned on our library simply being the pilot for a larger nationwide program. Like everything, this will take time. We continue to talk to other libraries, and the Library Freedom Project is actively working with a number of libraries that have an interest in participating."
Workshops:
Working with ACLU affiliates across the United States, the Library Freedom Project provides workshops to educate librarians about "some of the major surveillance programs and authorizations, including the USA PATRIOT Act, section 702 of the FISA Amendments Act, PRISM, XKEYSCORE, and more, connecting the NSA’s dragnet with FBI and local police surveillance". They also discuss current and developing privacy law on both the federal and state levels, in addition to advising librarians how to handle issues like gag orders and National Security Letters. Other topics covered include Privacy Enhancing Technology (PET) that might help library patrons browse anonymously or evade online tracking.
Workshops:
Furthermore, the project conducts training classes for library patrons themselves which focus on online security and privacy. The classes can be adjusted to accommodate any level of user, from beginner to advanced, and various security needs. Given that library patrons, including but not limited to domestic violence survivors, political activists, whistleblowers, journalists, and LGBT teens or adults in many communities, face various threat models, digital security is not a matter of one-size-fits-all. In this regard Alison Macrina has remarked at a library conference that "Digital security isn't about which tools you use; rather, it's about understanding the threats you face and how you can counter those threats. To become more secure, you must determine what you need to protect, and whom you need to protect it from. Threats can change depending on where you're located, what you're doing, and whom you're working with." The Library Freedom Project is a member of the torservers.net network, an organization of nonprofits which specializes in the general establishment of exit nodes via workshops and donations.
Library Freedom Institute:
Beginning in 2018, Library Freedom Project began offering the Library Freedom Institute as a joint partnership with New York University. The institute is "a free, privacy-focused... program for librarians to teach them the skills necessary to thrive as Privacy Advocates; from educating community members to influencing public policy." The format of the Institute has changed slightly with each cohort, but lasts four to six months and features lecturers and discussions in the areas of technology, community building, media, activism, and education. Participants create capstone projects at the end of the course. Since its inception, the Library Freedom Institute has been supported by grants from the Institute of Museum and Library Services.
Library Freedom Institute:
As of July 2020, there have been four cohorts of Library Freedom Institute with over 100 graduates from the program.
Funding:
In January 2015 the Library Freedom Project received $244,700 in grant funding from the Knight Foundation, and in January 2016 $50,000 from the Rose Foundation's Consumer Privacy Rights Fund (the fiscal sponsor of that grant being the Miami Foundation). In August 2017 the Library Freedom Project was awarded a $249,504 grant from the Laura Bush 21st Century Librarian Program to facilitate the use of practical privacy tools in libraries using a "training the trainers" model. 40 geographically dispersed Privacy Advocates are expected to be trained in a six-month course. New York University (NYU) and the Library Freedom Project have since created a formal collaborative program funded by the Institute of Museum and Library Services called Library Freedom Institute; its inaugural course began in June 2018.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Sulfur oxygenase/reductase**
Sulfur oxygenase/reductase:
In enzymology, a sulfur oxygenase/reductase (EC 1.13.11.55) is an enzyme that catalyzes the chemical reaction: 4 sulfur + 4 H2O + O2 ⇌ 2 hydrogen sulfide + 2 bisulfite + 2 H+. The 3 substrates of this enzyme are sulfur, H2O, and O2, whereas its 3 products are hydrogen sulfide, bisulfite, and H+.
Sulfur oxygenase/reductase:
This enzyme belongs to the family of oxidoreductases, specifically those acting on single donors with O2 as oxidant and incorporation of two atoms of oxygen into the substrate (oxygenases). The oxygen incorporated need not be derived from O2. The systematic name of this enzyme class is sulfur:oxygen oxidoreductase (hydrogen-sulfide- and sulfite-forming). Other names in common use include SOR, sulfur oxygenase, and sulfur oxygenase reductase.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Open Source Tripwire**
Open Source Tripwire:
Open Source Tripwire is a free software security and data integrity tool for monitoring and alerting on specific file change(s) on a range of systems. The project is based on code originally contributed by Tripwire, Inc. in 2000.
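As a rough illustration of the file-integrity idea behind such tools (this is a minimal sketch, not Open Source Tripwire's actual code, policy language, or database format), a baseline of cryptographic hashes can be recorded once and later compared against the current state of the files:

```python
import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline(root: str, db: str = "baseline.json") -> None:
    """Record a digest for every regular file under `root`."""
    digests = {str(p): hash_file(p) for p in Path(root).rglob("*") if p.is_file()}
    Path(db).write_text(json.dumps(digests, indent=2))

def check_integrity(db: str = "baseline.json") -> list[str]:
    """Report files that changed or disappeared since the baseline was taken."""
    baseline = json.loads(Path(db).read_text())
    violations = []
    for name, old_digest in baseline.items():
        p = Path(name)
        if not p.exists() or hash_file(p) != old_digest:
            violations.append(name)
    return violations

if __name__ == "__main__":
    build_baseline("./watched")   # example path; take the baseline on a known-good system
    print(check_integrity())      # later runs flag modified or missing files
```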
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Duospaced font**
Duospaced font:
A duospaced font (also called a duospace font) is a fixed-width font whose letters and characters occupy either of two integer multiples of a specified, fixed horizontal space. Traditionally this means either a single or a double character width, although the term has also been applied to fonts using fixed character widths with another simple ratio between them. These dual character widths are also referred to as half-width and full-width, where a full-width character occupies double the width of a half-width character. This contrasts with variable-width fonts, where the letters and spacings have more than two different widths. And, unlike in monospaced fonts, a character can occupy up to two effective character widths instead of a single character width. This extra horizontal space accommodates wider glyphs, such as large ideographs, that cannot reasonably fit into the single character width of a strictly uniform, monospaced font.
In CJK typography:
The idea of a "duospaced" font came from East Asian typography, where the local scripts of CJK characters simply cannot fit into the narrow column used in Latin fixed-pitch fonts. Note that the "duospace" name is mostly a historical (c. 1990) Western distinction; Asian typefaces with these characteristics simply call themselves "monospaced" or "fixed pitch". CJK monospace fonts typically include halfwidth and fullwidth forms of characters that provide different widths for typesetting. In addition to East Asian characters and such forms, it is common for other technical and pictographic symbols to become duospaced in some East Asian fonts, a phenomenon known as "ambiguous width". It is a common pitfall for Western programmers to neglect support for such fonts: terminal applications may produce misaligned output when they assume that every character's "pitch" is one column wide. The wcwidth() function, originally part of POSIX, is available for querying the width of characters.
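To make the width problem concrete, here is a small Python sketch that approximates a duospaced layout using the standard library's unicodedata module (rather than the POSIX wcwidth() mentioned above). Treating only the "Wide" and "Fullwidth" East Asian Width classes as two columns is a simplifying assumption that ignores ambiguous-width characters and combining marks.

```python
import unicodedata

def display_width(text: str) -> int:
    """Approximate terminal columns needed for `text` in a duospaced font.

    Characters whose East Asian Width is Wide (W) or Fullwidth (F) take two
    columns; everything else is counted as one. A full wcwidth() implementation
    also handles ambiguous-width characters, combining marks, and control codes.
    """
    return sum(2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1
               for ch in text)

print(display_width("abc"))     # 3 columns
print(display_width("日本語"))   # 6 columns: each ideograph is full-width
print(display_width("ｱｲｳ"))     # 3 columns: halfwidth katakana
```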
In CJK typography:
Qt has a bug where it fails to list CJK monospaced fonts because the underlying fontconfig defines "monospace" as "fixed-pitch" fonts. With the exception of some Japanese monospace fonts like Source Han Code JP, where a 1.5× width is used as the ideograph width, almost all CJK monospace fonts use 2× as the ideograph width. (In the case of the Korean language, Hangul characters, which are usually slightly narrower than the ideographs, are made to match them.) Some CJK monospace fonts with two or more widths are: Andale Duospace WT; GNU Unifont (pan-character set); Migu 1M and Migu 2M; Monotype Sans Duospace WT; Thorndale Duospace WT; WorldType Sans Duo and WorldType Serif Duo; Source Han Code JP (1.5×); WenQuanYi Micro Hei Mono and WenQuanYi Zen Hei Mono.
In Western typography:
Western duospaced fonts are similar in purpose to CJK duospaced fonts, but they are much rarer and less well supported. The idea seems to be limited to an iA Writer typeface in which the Latin characters w, m, W, and M have 1.5× widths, so that they retain their traditional letter shapes better.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Acquired C1 esterase inhibitor deficiency**
Acquired C1 esterase inhibitor deficiency:
Acquired C1 esterase inhibitor deficiency, also referred to as acquired angioedema (AAE), is a rare medical condition that presents as body swelling that can be life-threatening and manifests due to another underlying medical condition. The acquired form of this disease can occur from a deficiency or abnormal function of the enzyme C1 esterase inhibitor (C1-INH). This disease is also abbreviated in medical literature as C1INH-AAE. This form of angioedema is considered acquired due to its association with lymphatic malignancies, immune system disorders, or infections. Typically, acquired angioedema presents later in adulthood, in contrast to hereditary angioedema, which usually presents from early childhood and with similar symptoms. Acquired angioedema is usually found after recurrent episodes of swelling and can in some cases take several months to diagnose. Diagnosis usually consists of medical evaluation in addition to laboratory testing. Laboratory evaluation includes complement studies, in which typical cases demonstrate low C4 levels, low C1q levels, and normal C3 levels. Determining the etiology, or cause, of acquired angioedema is often helpful in providing appropriate management of AAE.
Acquired C1 esterase inhibitor deficiency:
Management of AAE usually includes treating any underlying disorder that could be responsible for the condition. Additionally, symptom management is important, especially in cases that are life-threatening. There are medications available to treat AAE, which are focused on replacing deficient levels of C1-INH or abnormal C1-INH enzymes. There are some cases of partial improvement and full resolution with treatment of the underlying medical problems contributing to AAE.
Epidemiology:
It is estimated that the worldwide prevalence of AAE ranges from 1 affected person in every 10,000 people to 1 case in every 150,000 people. However, the true prevalence could be higher due to diagnostic oversight and the symptoms acquired angioedema shares with similar diseases. The disease tends to affect males and females equally. Additionally, individuals with acquired angioedema usually develop symptoms in their fourth decade of life or older. Of note, Saini reports the difficulty of diagnosing angioedema accurately due to certain challenges. These obstacles include the lack of awareness about angioedema presentation and a potentially higher than expected worldwide prevalence. Further challenges include the similarities to paraneoplastic disorders that often require higher priority of care, the evolution of symptoms over time, and mild cases that might be attributed to medication use or allergic reactions in an individual's existing medical history. As a result, accurate diagnosis of AAE can take several months, which can delay targeted and specific treatment.
Causes:
There are various disease comorbidities associated with acquired C1 esterase inhibitor deficiency, including: Lymphoproliferative disorders and lymphatic malignancies. Lymphoproliferative disorders, such as monoclonal gammopathy of undetermined significance (MGUS) and non-Hodgkin lymphoma, are associated with acquired angioedema. In cohort studies, MGUS is considered one of the most common disorders associated with AAE. MGUS is a premalignant plasma cell disorder associated with pathologic changes in the bone marrow leading to abnormal production of M protein, and it can progress to other hematologic diseases. Additionally, in retrospective case studies performed in France, Gobert et al. found that non-Hodgkin lymphoma was associated with 48% of a sample of 92 cases of acquired angioedema.
Causes:
Multiple myeloma (MM) is a malignant plasma cell disorder that progresses from MGUS. MM is characterized by elevated monoclonal paraprotein in addition to end organ damage, such as kidney failure.
Lymphoplasmacytic lymphoma (also known as Waldenström macroglobulinemia) is a B cell malignancy with hematologic changes that affect the lymphatic system. Some of the clinical manifestations seen in this lymphoma are anemia, hyperviscosity syndrome, and neuropathy.
Autoimmune disorders. Autoimmune disorders, such as systemic lupus erythematosus (SLE), are observed with AAE. SLE is an autoimmune disease with variable manifestations, from mild symptoms to multiorgan involvement. The autoantibody component involved in SLE has been investigated and is thought to be associated with angioedema manifestations.
Causes:
Certain vasculitic diseases, such as eosinophilic granulomatosis with polyangiitis (also known as Churg–Strauss syndrome) have been associated with AAE. Eosinophilic granulomatosis with polyangiitis (EGPA) is an inflammatory disease characterized by necrotizing vasculitis that affects small and medium-sized vessels of the body. This vasculitis is associated with certain comorbidities including asthma, rhinosinusitis, and eosinophilia (blood cells responsible for activating immune responses and downstream signals in inflammation).
Causes:
Infections. Human immunodeficiency virus (HIV) is a transmissible retrovirus that can predispose individuals carrying the virus to acquired immunodeficiency syndrome (AIDS), which leads to opportunistic infections.
Hepatitis B viral infection (HBV) is a transmissible DNA virus that can potentially lead to liver injury. In a series of case studies with patients reporting symptoms of angioedema, some of these individuals were found to have positive markers of HBV.
Metabolic disorders. Xanthomatosis is a systemic metabolic disorder marked by fatty deposits in the presence of hypercholesterolemia, or high cholesterol.
Idiopathic causes. Idiopathic etiology is considered when well-understood and known causes are excluded after a thorough medical evaluation.
Pathophysiology:
The C1 esterase inhibitor (C1-INH) enzyme plays a role in the classical pathway of the complement cascade, a component of the immune response that acts to protect the human body from a variety of foreign substances. The complement cascade starts with the C1q protein, which binds to an antibody-antigen complex that arises during an immune response to an invading substance. When the complex is signaled for activation, downstream proteins in the complement cascade are activated, including complement component 2 (C2), complement component 3 (C3), and complement component 4 (C4). When enzymes such as C3 and C4 are activated, their subsequent signals lead to an inflammatory response that involves localized edema, or swelling. The role of C1-INH is to regulate and control the activities of the complement cascade, so that complement proteins remain in check and do not lead to unnecessary activity. When there is a deficiency of C1-INH due to one of the previously mentioned causes, the complement cascade remains continuously activated and can lead to potentially life-threatening swelling.
Clinical Presentation:
Acquired angioedema presents as mucosal swelling on external and/or internal surfaces of the body. Typical areas of swelling include the face, arms, and legs, while internally some individuals have swelling of the tongue and upper airways. In contrast to hereditary angioedema, there tend to be fewer abdominal or gastrointestinal symptoms, though nausea, vomiting, and diarrhea have been seen in acquired angioedema. Although this condition appears similar to other skin conditions in which swelling occurs, acquired angioedema does not cause itchy skin (pruritus) or hives (urticaria).
Diagnosis:
Acquired angioedema is diagnosed through a supportive clinical examination, usually in addition to laboratory evaluation. The clinical history consists of recurrent angioedema episodes, symptom onset after 30 years of age, and a negative family history of hereditary angioedema. Laboratory evaluation typically consists of complement studies, genotyping, and/or checking for antibodies against C1-INH. To help confirm cases of acquired angioedema, the following pattern of complement studies is observed: low C4 level, low C1-INH protein level, low C1q level, and decreased C1-INH protein function. Using this diagnostic approach, acquired angioedema is categorized into subtypes for targeted management: AAE-I, AAE-II, sex-hormone-dependent AAE, and drug-induced AAE. The AAE-I subtype groups paraneoplastic syndromes or B-cell malignancies that lead to destruction of the C1-INH enzyme, causing acquired angioedema. The AAE-II subtype groups autoimmune disorders, such as systemic lupus, causing acquired angioedema. Sex-hormone-dependent AAE is associated with case reports of individuals with abnormally elevated estrogen levels or cases where physiologically elevated estrogen is expected, as in pregnancy. Drug-induced AAE can be triggered by certain medications, including ACE inhibitors or angiotensin receptor blockers. Furthermore, additional laboratory testing can be done to consider other causes of swelling that appear similar to angioedema. Some of the common differential diagnoses for angioedema include: allergic reactions, contact dermatitis, skin and soft tissue infections (i.e. cellulitis), lymphedema, and foreign body aspiration.
Management:
Treatment of acquired angioedema is separated into two main parts. First, controlling acute symptoms during angioedema attacks is crucial for preventing and lowering the risk of mortality. Second, managing AAE chronically with prophylactic treatment is important to improve prognosis and quality of life. Both pharmacologic therapies (i.e. medications) and symptom management can be used in acute and chronic treatment of AAE. Pharmacologic treatment in acute situations consists of replacing the enzyme concentrate that is deficient or dysfunctional in this disease process. In life-threatening situations, including cases of oral and pharyngeal swelling, it is important to manage these symptoms and to protect the airways in order to lower the risk of mortality. Typical treatments for anaphylaxis and allergic reactions, such as epinephrine, corticosteroids, and antihistamines, are often used in acute cases of AAE with variable resolution. C1-INH concentrates are available for intravenous (IV) and intramuscular (IM) delivery. C1-INH concentrate therapy has shown considerable efficacy in acute and prophylactic treatment of hereditary angioedema, but has varying levels of efficacy in AAE. For prophylaxis, clinicians focus on controlling underlying disorders, such as those mentioned under causes, that could be contributing to AAE pathophysiology. Beyond controlling comorbidities, angioedema is usually managed with medications to prevent attacks and to reduce their number. C1-INH concentrate can be used to replace deficient or abnormal C1-INH enzyme with considerable efficacy. Other medical therapies used for prophylaxis include androgens, tranexamic acid, and monoclonal antibodies such as rituximab. These agents all have varying roles, efficacy, and potential risks.
Prognosis:
The evaluation of acquired angioedema usually prompts an investigation into the underlying cause. As mentioned in the causes section, malignancy or autoimmune disorders are the more common causes, which must be further explored and considered for treatment if found in an individual. Prognosis depends on the underlying disorder, which may be found at the time of initial diagnosis or through ongoing monitoring. Additionally, successful treatment of the underlying disorder has been observed in some cases to resolve acquired angioedema from partial to complete remission.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Anti-CRISPR**
Anti-CRISPR:
Anti-CRISPR (anti-clustered regularly interspaced short palindromic repeats, or Acr) is a group of proteins found in phages that inhibit the normal activity of CRISPR-Cas, the immune system of certain bacteria. CRISPR consists of genomic sequences found in prokaryotic organisms that derive from bacteriophages that previously infected the bacteria and are used to defend the cell from further viral attacks. Anti-CRISPR results from an evolutionary process in phages to avoid having their genomes destroyed by the prokaryotic cells they infect. Before the discovery of this protein family, the acquisition of mutations was the only known way that phages could avoid CRISPR-Cas-mediated destruction, by reducing the binding affinity between the phage and CRISPR. Nonetheless, bacteria have mechanisms to retarget the mutant bacteriophage, a process called "priming adaptation". So, as far as researchers currently know, anti-CRISPR is the most effective way to ensure the survival of phages throughout the infection process of bacteria.
History:
Anti-CRISPR systems were first seen in Pseudomonas aeruginosa prophages, which disabled the type I-F CRISPR–Cas system characteristic of some strains of these bacteria. After analysing the genomic sequences of these phages, genes encoding five different anti-CRISPR proteins (also named Acrs) were discovered: AcrF1, AcrF2, AcrF3, AcrF4 and AcrF5. Research found that none of these proteins disrupted the expression of Cas genes or the assembly of CRISPR molecules, so it was thought that these type I-F proteins directly affected CRISPR–Cas interference. Further investigation confirmed this hypothesis with the discovery of four other proteins (AcrE1, AcrE2, AcrE3 and AcrE4), which were shown to impede Pseudomonas aeruginosa's CRISPR-Cas system. Furthermore, the locus of the genes encoding these type I-E proteins was very close to the one responsible for type I-F protein expression in the same group of phages, leading to the conclusion that both types of proteins worked together. However, these first nine proteins shared no common sequence motifs, which would have made the identification of new anti-CRISPR protein families easier.
History:
Later on, it was seen that phages producing such proteins also encoded a putative transcriptional regulator named Aca1 (anti-CRISPR associated 1), located very close to the anti-CRISPR genes. This regulatory protein is thought to be responsible for anti-CRISPR gene expression during the infectious cycle of the phage; therefore, both types of proteins (anti-CRISPR and Aca1) seem to work together as a single mechanism. After further studies, an amino-acid sequence similar to that of Aca1 was found, leading to the discovery of Aca2, a new family of Aca proteins. Aca2 also revealed the existence of five new groups of type I-F anti-CRISPR proteins due to their genomic proximity: AcrF6, AcrF7, AcrF8, AcrF9 and AcrF10. These proteins were not present only in Pseudomonas aeruginosa's phages, as they also affected other cells of the Pseudomonadota (formerly Proteobacteria). Thanks to the use of bioinformatic tools, in 2016 the AcrIIC1, AcrIIC2 and AcrIIC3 protein families were discovered in Neisseria meningitidis (which had previously been infected by phages). These proteins were the first inhibitors of type II CRISPR–Cas to be found (specifically, they impeded type II-C CRISPR–Cas9, the kind of mechanism used in the genetic editing of human cells). A year later, a study confirmed the presence of type II-A CRISPR–Cas9 inhibitors (AcrIIA1, AcrIIA2, AcrIIA3 and AcrIIA4) in Listeria monocytogenes (infected by bacteriophages that introduced the anti-CRISPR proteins). Two of those proteins (AcrIIA2 and AcrIIA4) were demonstrated to work against the Streptococcus pyogenes type II-A defensive CRISPR system.
History:
The result of all this research has been the discovery of 21 different anti-CRISPR protein families, although other inhibitors may exist given the rapid mutational process of phages. Thus, more research is needed to unravel the complexity of anti-CRISPR systems.
Types:
Anti-CRISPR genes can be found in different parts of the phage DNA: in the capsid, the tail and at the extreme end. Moreover, it has been found that many MGEs have two or even three Acr genes in a single operon, which suggests that they could have been exchanged between MGEs. As with all proteins, Acr family proteins are produced by the transcription and translation of their genes, and their classification is based on the type of CRISPR-Cas system they inhibit, since each anti-CRISPR protein inhibits a specific CRISPR-Cas system. Although not many anti-CRISPR proteins have been discovered, the ones found so far are described below. To date, genes encoding anti-CRISPR proteins have been found in myophages, siphophages, putative conjugative elements and pathogenicity islands.
Types:
Attempts have been made to find common genetic features surrounding anti-CRISPR genes, but without success. Nevertheless, the presence of an aca gene just downstream of anti-CRISPR genes has been observed. The first Acr protein families to be discovered were AcrF1, AcrF2, AcrF3, AcrF4 and AcrF5. These inhibitors are mainly found in Pseudomonas phages, which are capable of infecting Pseudomonas aeruginosa strains possessing a type I-F CRISPR–Cas system. In another study, the AcrE1, AcrE2, AcrE3 and AcrE4 protein families were found to inhibit the type I-E CRISPR–Cas system in Pseudomonas aeruginosa. Later on, the AcrF6, AcrF7, AcrF8, AcrF9 and AcrF10 protein families, which are also able to inhibit type I-F CRISPR–Cas, were found to be very common in Pseudomonadota MGEs. The first inhibitors of a type II CRISPR–Cas system were then discovered: AcrIIC1, AcrIIC2 and AcrIIC3, which block the type II-C CRISPR–Cas9 activity of Neisseria meningitidis. Finally, AcrIIA1, AcrIIA2, AcrIIA3 and AcrIIA4 were found; these protein families have the ability to inhibit the type II-A CRISPR–Cas system of Listeria monocytogenes. As for the naming convention of Acr family proteins, it is established as follows: first the type of system inhibited, then a numerical value referring to the protein family, and finally the source of the specific anti-CRISPR protein. For example, AcrF9Vpa is active against the type I-F CRISPR–Cas system; it was the ninth anti-CRISPR described for this system, and it is encoded in an integrated MGE in a Vibrio parahaemolyticus genome.
Structure:
As noted above, there is a wide spectrum of anti-CRISPR proteins, but few have been studied in depth. One of the most studied and best-defined Acrs is AcrIIA4, which inhibits Cas9, thereby blocking the type II-A CRISPR-Cas system of Streptococcus pyogenes.
Structure:
AcrIIA4. The structure of this protein was solved using nuclear magnetic resonance (NMR); it contains 87 residues and its molecular weight is 10.182 kDa. AcrIIA4 contains: three antiparallel β-strands (the first from residues 16 to 19, the second from 29 to 33, and the third from 40 to 44) that form a β-sheet. This represents 16.1% of the total number of amino acids, as 14 of them form the β-strands.
Structure:
Three α-helices (the first spanning residues 2–13, the second 50–59, and the third 68–85).
One 3₁₀ helix placed between the first (β1) and second (β2) β-strands, which starts at residue 22 and ends at residue 25. The total helical part is composed of 44 residues, which is 50.6% of the protein.
Structure:
Loops joining the different secondary structures. The secondary structures are well defined, as the three α-helices are packed near the three β-strands. Strikingly, between the β3 strand and the α2 and α3 helices there is a hydrophobic core, formed by a cluster of aromatic side chains attracted by non-covalent interactions such as pi stacking. Moreover, as it is an acidic protein, there is a high concentration of negatively charged residues in the loops between β3 and α2, between α2 and α3, and in the first part of α3, which may play an important role in the inhibition of Cas9, as the negative charges might mimic the phosphates of nucleic acids.
Structure:
AcrF1. Another Acr, AcrF1, has not been studied as extensively as the protein described above, although there is a good description of its structure. It inhibits the I-F CRISPR-Cas system of Pseudomonas aeruginosa. Maxwell et al. solved its 3D structure using NMR.
Structure:
The protein contains 78 residues, which interact to form secondary structures. The structure of AcrF1 consists of two anti-parallel α-helices and a β-sheet containing four anti-parallel β-strands. This β-sheet is placed on the opposite side from the α-helical part, which creates a hydrophobic core formed of 13 amino acids. Turns can also be found in different parts of the protein, for instance joining the β-strands. There are surface residues that actively participate in the active site of AcrF1: two of them are tyrosines (Y6 and Y20) and the third is a glutamic acid (E31). Mutating them to alanine causes a 100-fold decrease in the activity of the protein (with the Y20A and E31A mutations), and a 107-fold decrease when Y6 is mutated.
Structure:
The different structures that form the protein create an unusual combination: Maxwell et al. conducted a DALI search to find similar proteins and found no informative similarities.
Function:
Avoiding destruction of the phage DNA. The principal function of anti-CRISPR proteins is to interact with specific components of CRISPR-Cas systems, such as the effector nucleases, to avoid the destruction of the phage DNA (by binding or cleavage). When a phage introduces its DNA into a prokaryotic cell, the cell usually detects a sequence known as the "target", which activates the CRISPR-Cas immune system; but the presence of an initial sequence (before the target) encoding Acr proteins prevents phage destruction. Acr proteins are formed before the target sequence is read. This way, the CRISPR-Cas system is blocked before it can mount a response.
Function:
The procedure starts with the CRISPR locus being transcribed into crRNAs (CRISPR RNAs). The crRNAs combine with Cas proteins to form a ribonucleoprotein complex called Cascade. This complex surveys the cell to find sequences complementary to the crRNA. When such a sequence is found, the Cas3 nuclease is recruited to the Cascade and the target DNA from the phage is cleaved. However, when the anti-CRISPR proteins AcrF1 and AcrF2 are present, they interact with Cas7f and Cas8f-Cas5f, respectively, preventing binding to the phage DNA. Moreover, cleavage of the target is prevented by the binding of AcrF3 to Cas3.
Function:
The majority of Acr genes are located next to anti-CRISPR-associated (Aca) genes, which encode proteins with a helix-turn-helix DNA-binding motif. Aca genes are conserved, and researchers are using them to identify Acr genes, but the function of the proteins they encode is not totally clear. The Acr-associated promoter produces high levels of Acr transcription just after the phage DNA is injected into the bacterium and, afterwards, Aca proteins repress this transcription. If it were not repressed, constant transcription of the gene would be lethal to the phage; therefore, Aca activity is essential to ensure its survival.
Function:
Phage-phage cooperation. Moreover, it has been verified that bacteria with CRISPR-Cas systems are still partially immune to Acr. Consequently, initial abortive phage infections may be unable to hamper CRISPR immunity, but phage-phage cooperation can increasingly boost Acr production and promote immunosuppression, which can increase the vulnerability of the host cell to reinfection and finally allow a successful infection and spreading of a second phage. This cooperation creates an epidemiological tipping point at which, depending on the initial density of Acr-phages and the strength of CRISPR/Acr binding, phages are either eliminated or give rise to a phage epidemic (the number of bacteriophages is amplified). If the starting levels of phages are high enough, the density of immunosuppressed hosts reaches a critical point where there are more successful infections than unsuccessful ones, and an epidemic begins. If this point is not reached, phage extinction occurs and immunosuppressed hosts recover their initial state.
Function:
Phage immune evasion. It has become clear that Acr proteins play an important role in allowing phage immune evasion, though it is still unclear how anti-CRISPR protein synthesis can overcome the host's CRISPR-Cas system, which can destroy the phage genome within minutes of infection.
Mechanisms:
Of all the anti-CRISPR proteins discovered so far, mechanisms have been described for only 15. These mechanisms can be divided into three types: crRNA loading interference, DNA binding blockage and DNA cleavage prevention.
CrRNA loading interference. The crRNA (CRISPR RNA) loading interference mechanism has mainly been associated with the AcrIIC2 protein family; to block Cas9 activity, it prevents the correct assembly of the crRNA-Cas9 complex.
Mechanisms:
DNA binding blockage. AcrIIC2 is not the only protein capable of blocking DNA binding; 11 other Acr family proteins can also do so. Among them are AcrIF1, AcrIF2, and AcrIF10, which act on different subunits of the Cascade effector complex of the type I-F CRISPR-Cas system, preventing the DNA from binding to the complex. Furthermore, AcrIIC3 prevents DNA binding by promoting dimerization of Cas9, and AcrIIA2 mimics DNA, thereby blocking the PAM recognition residues and consequently preventing dsDNA (double-stranded DNA) recognition and binding.
Mechanisms:
DNA cleavage prevention. AcrE1, AcrIF3 and AcrIIC1 can prevent target DNA cleavage. Using X-ray crystallography, AcrE1 was found to bind to the CRISPR-associated Cas3. Likewise, biochemical and structural analysis of AcrIF3 showed that it binds to Cas3 as a dimer, preventing the recruitment of Cas3 to the Cascade complex. Finally, biochemical and structural studies of AcrIIC1 found that it binds to the active site of the HNH endonuclease domain of Cas9, which prevents DNA cleavage and leaves Cas9 in an inactive but DNA-bound state.
Applications:
Reducing CRISPR-Cas9 off-target cuts. AcrIIA4 is one of the proteins responsible for inhibiting the CRISPR-Cas9 system, the mechanism used to edit mammalian cells. Adding AcrIIA4 to human cells prevents Cas9 from interacting with the CRISPR system, reducing its ability to cut DNA. However, several studies have concluded that adding it in small amounts after genome editing has been carried out reduces the number of off-target cuts at the specific sites where Cas9 interacts, which makes the whole system much more precise.
Applications:
Avoiding ecological consequences. One of the main objectives of using CRISPR-Cas9 technology is eradicating diseases, some of which persist in disease vectors such as mosquitoes. Anti-CRISPR proteins can halt a gene drive, which might otherwise have uncertain and catastrophic consequences for ecosystems.
Detecting the presence of Cas9 in a sample. To determine whether a certain bacterium synthesizes Cas9, and therefore uses CRISPR-Cas9, or to detect accidental or unauthorized use of this system, AcrIIC1 can be used. Because this protein binds to Cas9, a centrifugal microfluidic platform has been designed to detect Cas9 and determine its catalytic activity.
Applications:
Phage therapy. Antibiotic resistance is a constantly growing public health problem, driven by the misuse of antibiotics. Phage therapy consists of infecting bacteria with phages, which are much more specific and cause fewer side effects than antibiotics. Acrs could inhibit the CRISPR-Cas9 system of some bacteria and allow these phages to infect bacterial cells without being attacked by their immune systems.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Godzilla Generations**
Godzilla Generations:
Godzilla Generations is an action game developed by General Entertainment and published by Sega for the Dreamcast in 1998. It was exclusively released in Japan as one of the system's four launch titles. The game is based on the Godzilla franchise and involves the player controlling various giant monsters in an attempt to destroy real-life Japanese cities.
A sequel, Godzilla Generations: Maximum Impact, was released in Japan in 1999.
Gameplay:
Godzilla Generations is an action game in which the player controls one of five monsters from the Godzilla universe. Initially, only Godzilla and Mechagodzilla can be selected; the other characters are unlocked by progressing through the game. The game world is composed of five cities, each comprising two stages, except the final city, which has three. The object of the game is to proceed to the next stage by destroying everything on the stage, such as buildings and trees, within a set time limit. Each character has projectile attacks, the ability to block incoming attacks and the ability to heal themselves.
Development and release:
Godzilla Generations was developed by General Entertainment and published by Sega as a launch title for the Dreamcast. It was originally known as simply Godzilla, before its name was changed in July 1998. The game was exclusively released in Japan on November 27, 1998.
Reception:
Godzilla Generations received lukewarm reviews from the Japanese gaming magazine Famitsu and a very negative response from Western journalists, despite fans showing interest in the game at the 1998 Tokyo Game Show. Computer and Video Games reviewer Kim Randell described the game as dull and cited issues such as poor controls, a constantly shifting camera and the player character blocking the player's view. Peter Bartholow of GameSpot derided the game as "terrible" and one of the worst games of 1998. Bartholow found it impossible to block incoming attacks due to the creatures' slow gait. He stated that because of this the developers added a healing ability to each creature, allowing players to continue through the game without fear of their character dying: "There's no strategy, no technique. Just the extreme tedium of tromping through cities." Edge criticized the graphics quality, clumsy controls, and confusing camera system, which was said to make in-game objects difficult for players to locate. Despite showing interest in a preview, describing the game as looking like "a riot", Jaz Rignall of IGN and his colleagues were less enthusiastic when their first Dreamcast console arrived three months later with three Japanese launch games. He found that "while it brought many smiles and jeers, it didn't impress", and the gathered journalists quickly lost interest and moved on to another game. In a November 2002 review of Godzilla: Destroy All Monsters Melee, GameSpy's David Hodgson described himself as "still wincing from Godzilla: Generations". He went on to say the game "seemed to adhere to the loony premise that bizarre camera angles, a monster trudging in extreme slow motion, and the knuckle-gnawingly slow chipping away of scenery was the new high watermark in monstrous fighting action. It wasn't. It was crap". Japan-GameCharts reported that the game sold approximately 22,870 copies.
Sequel:
Godzilla Generations: Maximum Impact was developed by General Entertainment and published by Sega for the Dreamcast on December 23, 1999, exclusively in Japan. The game is split into levels in which Godzilla stomps forward through a city while shooting enemies. The player can also make Godzilla duck attacks by holding or tapping the analog pad. In other levels, Godzilla can walk freely and has to fight one-on-one against Biollante, King Ghidorah, Mothra, the new robot bosses SMG-IInd and MGR-IInd, SpaceGodzilla, the Super X-III (the game's smallest boss) and the final boss, Destoroyah. Godzilla is the only playable character in the game. He can shoot heat rays at his enemies. IGN gave the game 2.5 out of 10 in their review.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Urushi-e**
Urushi-e:
Urushi-e (漆絵 "lacquer picture[s]") refers to three different techniques in Japanese art. Though urushi-e is most associated with woodblock, the term urushi-e is not exclusive to that medium. It can also refer to pictures using lacquer as a paint on three-dimensional lacquered objects; and paintings using actual lacquer on paper or occasionally silk.
Technique:
In Japanese woodblock printing, urushi-e generally refers to a hand-painted technique: instead of being printed, the urushi (natural lacquer) was painted on by hand. This meant that urushi-e pictures could be more colorful than most block prints of the time. Five colors were available when the technique was first developed: brown, yellow, green, red, and black. Urushi-e was sometimes used as a term for all hand-painted woodblock prints in Japan, not only those painted with lacquer; however, only true urushi-e used iro-urushi, colored lacquer made by mixing clear lacquer with one of the five pigments. Artists such as Nishimura Shigenaga (c. 1680s–1750) were also known to use black ink thickened with hide glue to attain a lacquer-like effect on some of their prints. In addition to colored lacquer, gold was sometimes applied to urushi-e works in the form of gold leaf and powders.
Prints:
Urushi-e woodblock prints were made using thick, dark black lines, and were sometimes hand-colored. The ink was mixed with an animal-based glue called nikawa, which thickened it and gave it a lustrous shine said to resemble lacquer. Most often, this was used not to create the entire print but only to enhance a particular element, such as an obi or a figure's hair, to give it shine and make the image more luxurious overall. Prints which include urushi-e elements are likely to also feature mica, metal dusts, and other elements that enhanced the appearance, quality and value of the works. The technique was most popular in early 18th-century Japan, during the Edo period, and can be seen in works by many artists of the time.
Paintings:
In painting, the term refers to the use of colored lacquers, produced by mixing pigments with clear lacquer. The use of colored lacquer for painting goes back to the prehistoric Jōmon period, and became especially popular in the Nara period (8th century), when a great many works were made using red lacquer against a black background. Until the 19th century, however, the use of natural pigments restricted the colors accessible to artists to red, black, yellow, green, and light brown.
Artists:
Artist Shibata Zeshin (1807-1891) is known for his innovations in this regard, and is believed by some to be the first to use lacquer not just as a decorative element (in painting boxes, furniture, and pottery) but as a medium for painted scrolls. Zeshin experimented extensively with various substances, which he mixed with lacquer to create a variety of effects, including simulating the appearance of various metals (iron, gold, bronze, copper), and imitating the appearance and texture of Western oil painting. Other artists who used the technique include Torii Kiyonobu I (1664-1729), a member of the Torii ukiyo-e school.
Artists:
Torii Kiyomasu, another member of the Torii school, also made five-pigment urushi-e.
The artist Nishimura Shigenaga also used brass powder in some of his urushi-e works.
Okumura Masanobu was another who used this technique in the Edo era.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Ten Little Fingers and Ten Little Toes**
Ten Little Fingers and Ten Little Toes:
Ten Little Fingers and Ten Little Toes is a 2008 children's picture book by Mem Fox and Helen Oxenbury. It is about babies, who, although they are from around the world, all share the common trait of having the same number of digits.
Reception:
Ten Little Fingers has been commended for its positive treatment of racial diversity. A review by The New York Times stated that "two beloved picture-book creators — the storyteller Mem Fox and the artist Helen Oxenbury — merge their talents in a winsome look at babies around the world". Booklist called it "a standout for its beautiful simplicity" and "a gentle, joyous offering". School Library Journal described it as a "nearly perfect picture book" and concluded: "Whether shared one-on-one or in storytimes, where the large trim size and big, clear images will carry perfectly, this selection is sure to be a hit". Publishers Weekly, in a starred review, wrote: "Put two titans of kids' books together for the first time, and what do you get (besides the urge to shout, "What took you so long?")? The answer: an instant classic". New York Journal of Books, in a review of a bilingual edition, wrote: "This is a sturdy, toddler-sized board book that has something for everybody. Ms. Fox's text, soft and pure, offers sweet innocence, the joy of lives beginning, and the unique beauty of the mother-child love. Artist Helen Oxenbury's exquisite illustrations are the perfect complement to the text". The Horn Book Magazine referred to it as a "love song": "Snuggle up with your favorite baby and kiss those fingers and toes to both your hearts' content". BookPage Reviews called it "a jewel of a picture book" and wrote: "With minimal text, and sweet illustrations by beloved British artist Helen Oxenbury, it's truly an international treat ... Ten Little Fingers and Ten Little Toes gently presents—but never preaches—a satisfying lesson about humanity and international harmony". Ten Little Fingers has also been reviewed by the Journal of Children's Literature, The Christian Century, First Opinions -- Second Reactions, YC: Young Children, Library Sparks, Reading Time, and the New England Reading Association Journal. It won the 2009 Australian Book Industry Book of the Year for Younger Children Award.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Chasles–Cayley–Brill formula**
Chasles–Cayley–Brill formula:
In algebraic geometry, the Chasles–Cayley–Brill formula, also known as the Cayley–Brill formula, states that a correspondence T of valence k from an algebraic curve C of genus g to itself has d + e + 2kg united points, where d and e are the degrees of T and its inverse.
Michel Chasles introduced the formula for genus g = 0, Arthur Cayley stated the general formula without proof, and Alexander von Brill gave the first proof.
The number of united points of the correspondence is the intersection number of the correspondence with the diagonal Δ of C×C.
The correspondence has valence k if and only if it is homologous to a linear combination a(C×1) + b(1×C) – kΔ where Δ is the diagonal of C×C. The Chasles–Cayley–Brill formula follows easily from this together with the fact that the self-intersection number of the diagonal is 2 – 2g.
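The formula can be recovered from these facts by a short intersection-number computation. The following is a sketch of the standard argument (reconstructed here rather than quoted from a particular source), taking d = T·(1×C) and e = T·(C×1):

Since (C×1)·(1×C) = 1, (C×1)·(C×1) = (1×C)·(1×C) = 0 and Δ·(C×1) = Δ·(1×C) = 1, intersecting T ~ a(C×1) + b(1×C) – kΔ with the two fibre classes gives
\[ d = T\cdot(1\times C) = a - k, \qquad e = T\cdot(C\times 1) = b - k. \]
Intersecting with the diagonal and using Δ·Δ = 2 – 2g then yields the stated count of united points:
\[ T\cdot\Delta = a + b - k(2-2g) = (d+k) + (e+k) - 2k + 2kg = d + e + 2kg. \]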
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Rafoxanide**
Rafoxanide:
Rafoxanide is a salicylanilide used as an anthelmintic. It is most commonly used in ruminant animals to treat adult liver flukes of the species Fasciola hepatica and Fasciola gigantica.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Threaded pipe**
Threaded pipe:
A threaded pipe is a pipe with screw-threaded ends for assembly.
Tapered threads:
The threaded pipes used in some plumbing installations for the delivery of gases or liquids under pressure have a tapered thread that is slightly conical (in contrast to the parallel sided cylindrical section commonly found on bolts and leadscrews). The seal provided by a threaded pipe joint depends upon multiple factors: the labyrinth seal created by the threads; a positive seal between the threads created by thread deformation when they are tightened to the proper torque; and sometimes on the presence of a sealing coating, such as thread seal tape or a liquid or paste pipe sealant such as pipe dope. Tapered thread joints typically do not include a gasket.
Tapered threads:
Especially precise threads are known as "dry fit" or "dry seal" and require no sealant for a gas-tight seal. Such threads are needed where the sealant would contaminate or react with the media inside the piping, e.g., oxygen service.
Tapered threaded fittings are sometimes used on plastic piping. Due to the wedging effect of the tapered thread, extreme care must be used to avoid overtightening the joint. The overstressed female fitting may split days, weeks, or even years after initial installation. Therefore many municipal plumbing codes restrict the use of threaded plastic pipe fittings.
Both British standard and National pipe thread standards specify a thread taper of 1:16; the change in diameter is one sixteenth the distance travelled along the thread. The nominal diameter is achieved some small distance (the "gauge length") from the end of the pipe.
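As a rough numerical illustration of the 1:16 taper (the engagement length below is invented for the example and is not taken from any standard's tables):

#include <stdio.h>

int main(void)
{
    const double taper = 1.0 / 16.0;     /* change in diameter per unit length along the thread */
    const double engagement_mm = 12.0;   /* hypothetical length of thread engagement */
    printf("Over %.1f mm of thread the diameter changes by %.3f mm\n",
           engagement_mm, engagement_mm * taper);
    return 0;
}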
Straight threads:
Pipes may also be threaded with cylindrical threaded sections, in which case the threads do not themselves provide any sealing function other than some labyrinth seal effect, which may not be enough to satisfy either functional or code requirements. Instead, an O-ring seated between the shoulder of the male pipe section and an interior surface on the female provides the seal.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Alphanumeric brand name**
Alphanumeric brand name:
An alphanumeric brand name is a brand name composed only of letters and numbers (alphanumericals). Examples include 7 Up, Saks Fifth Avenue, Audi A4, and Canon A75. They may serve as abbreviations (e.g. 3M, formerly known as the Minnesota Mining and Manufacturing Company), indicate model extensions (iPhone 3G, iPhone 4, etc.), symbolize physical product attributes (the V-shaped V8 engine), incorporate technical attributes (AMD32 chips use 32-bit processors), or refer to inventory codes or internal design numbers (e.g., Levi's 501). Kunter Gunasti and William T. Ross (2010) define two dimensions of alphanumeric brand names: "link", or the connection between the brand name and a specific product feature or the product as a whole; and "alignability", or whether the preferences for a product can be aligned with the numbers included in the brand names in an ascending or descending trend. Selcan Kara, Gunasti and Ross (2015) delineated the number and letter components of alphanumeric brands and observed that for new brand extensions firms can either change the letters or numbers of their parent brand names. Altering the number components of brand names (e.g. Audi A3 vs. A4 vs. A6 vs. A8) led to more favorable consumer reactions compared to changing the letter components (e.g. Mercedes C350 vs. E350 vs. S350). Gunasti and Timucin Ozcan (2016) further categorized alphanumeric brand names as either "round" or "non-round". They showed that use of "round numbers" in brand names is pervasive because this practice increases the tendency of consumers to perceive products as more complete (including all necessary attributes). For example, labeling an identical product with an "S200" brand (round number) as opposed to an "S198" or "S203" brand can make consumers believe that the product is superior and more well-rounded. They also found that the presence of a competitor's alphanumeric brand name (e.g. Garmin 370) can affect consumer choices among the focal brand's options (e.g. TomTom 350 vs. TomTom 360). Gunasti and Berna Devezer (2016) observed that this effect occurs only for competing firms' products.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**IBM Information Management System**
IBM Information Management System:
The IBM Information Management System (IMS) is a joint hierarchical database and information management system that supports transaction processing.
History:
IBM designed the IMS with Rockwell and Caterpillar starting in 1966 for the Apollo program, where it was used to inventory the very large bill of materials (BOM) for the Saturn V moon rocket and Apollo space vehicle. The first "IMS READY" message appeared on an IBM 2740 terminal in Downey, California, on August 14, 1968.
History:
In the interim period, IMS has undergone many developments as IBM System/360 technology evolved into the current z/OS and IBM zEnterprise System technologies. For example, IMS now supports the Java programming language, JDBC, XML, and, since late 2005, web services. Vern Watts was IMS's chief architect for many years. Watts joined IBM in 1956 and worked at IBM's Silicon Valley development labs until his death on April 4, 2009. He had continuously worked on IMS since the 1960s.
Database:
The IMS Database component stores data using a hierarchical model, which is quite different from IBM's later released relational database, IBM Db2. In IMS, the hierarchical model is implemented using blocks of data known as segments. Each segment can contain several pieces of data, which are called fields. For example, a customer database may have a root segment (or the segment at the top of the hierarchy) with fields such as phone, name, and age. Child segments may be added underneath another segment; for instance, one order segment under each customer segment represents each order a customer has placed with a company. Likewise, each order segment may have many child segments for each item on the order. Unlike other databases, you do not need to define all of the data in a segment to IMS. A segment may be defined with a size of 40 bytes but only define one field that is six bytes long as a key field that you can use to find the segment when performing queries. IMS will retrieve and save all 40 bytes as directed by a program but may not understand (or care) what the other bytes represent. In practice, often all the data in a segment may map to a COBOL copybook. Besides DL/I query usage, a field may be defined in IMS so that the data can be hidden from certain applications for security reasons. The database component of IMS can be purchased standalone, without the transaction manager component, and used by systems such as CICS. There are three basic forms of IMS hierarchical databases. "Full Function" databases: directly descended from the Data Language Interface (DL/I) databases originally developed for Apollo, full function databases can have primary and secondary indexes, accessed using DL/I calls from an application program, like SQL calls to IBM Db2 or Oracle.
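As a rough illustration of the segment-and-field structure described above, the following C sketch models the customer/order/item hierarchy as fixed-size records in which only a short key field is meaningful to the database; all names and sizes here are hypothetical and do not come from a real IMS database definition:

#include <stdio.h>
#include <string.h>

/* A 40-byte "customer" root segment in which only the 6-byte key is described
   to the database; the remaining bytes are stored but not interpreted by it. */
struct customer_segment {
    char custno[6];        /* key field used to locate the segment */
    char other_data[34];   /* opaque bytes, typically mapped by a COBOL copybook */
};

struct order_segment {     /* child of customer: one per order placed */
    char orderno[8];
    char order_data[32];
};

struct item_segment {      /* child of order: one per item on the order */
    char itemno[8];
    char item_data[24];
};

int main(void)
{
    struct customer_segment cust;
    memset(&cust, ' ', sizeof cust);
    memcpy(cust.custno, "C00042", 6);
    printf("customer segment: %zu bytes total, %zu-byte key\n",
           sizeof cust, sizeof cust.custno);
    return 0;
}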
Database:
Full function databases can be accessed by a variety of methods, although Hierarchical Direct (HDAM) and Hierarchical Indexed Direct (HIDAM) dominate. The other formats are Simple Hierarchical Indexed Sequential (SHISAM), Hierarchical Sequential (HSAM), and Hierarchical Indexed Sequential (HISAM).
Full function databases store data using VSAM, a native z/OS access method, or Overflow Sequential (OSAM), an IMS-specific access method that optimizes the I/O channel program for IMS access patterns. In particular, OSAM performance benefits from sequential access of IMS databases (OSAM Sequential Buffering).
Database:
"Fast Path" databases: Fast Path databases are optimized for extremely high transaction rates. Data Entry Databases (DEDBs) and Main Storage Databases (MSDBs) are the two types of Fast Path databases. DEDBs use a direct (randomizer) access technique similar to Full Function HDAM, and IMS V12 provided a DEDB Secondary Index function. MSDBs do not support secondary indexing. Virtual Storage Option (VSO) DEDBs can replace MSDBs in modern IMS releases, so MSDBs are gradually disappearing. DEDB performance comes from use of high performance (Media Manager) access methods, asynchronous write after commit, and optimized code paths. Logging is minimized because no data is updated on disk until commit, so UNDO (before image) logging is not needed, nor is a backout function. Uncommitted changes can simply be discarded. Starting with IMS Version 11, DEDBs can use z/OS 64-bit storage for database buffers. The DEDB architecture includes a Unit of Work (UOW) concept which made an effective online reorganization utility simple to implement. This function is included in the base product.
Database:
High Availability Large Databases (HALDBs): IMS V7 introduced HALDBs, an extension of IMS full function databases to provide better availability, better handling of extremely large data volumes, and, with IMS V9, online reorganization to support continuous availability. (Third party tools exclusively provided online reorganization prior to IMS V9.) A HALDB can store in excess of 40 terabytes of data. Fast path DEDBs can only be built atop VSAM. DL/I databases can be built atop either VSAM or OSAM, with some restrictions depending on database organization. Although the maximum size of a z/OS VSAM dataset increased to 128 TB a few years ago, IMS still limits a VSAM dataset to 4 GB (and OSAM to 8 GB). This "limitation" simply means that IMS customers will use multiple datasets for large amounts of data. VSAM and OSAM are usually referred to as the access methods, and the IMS "logical" view of the database is referred to as the database "organization" (HDAM, HIDAM, HISAM, etc.). Internally, the data are linked using 4-byte pointers or addresses. In the database datasets (DBDSs) the pointers are referred to as RBAs (relative byte addresses). Collectively, the database-related IMS capabilities are often called IMS DB. IMS DB has grown and evolved over nearly four decades to support myriad business needs. IMS, with assistance from z/OS hardware – the Coupling Facility – supports N-way inter-IMS sharing of databases. Many large configurations involve multiple IMS systems managing common databases, a technique providing for scalable growth and system redundancy in the event of hardware or software failures.
Transaction Manager:
IMS is also a robust transaction manager (IMS TM, also known as IMS DC) – one of the "big three" classic transaction managers along with CICS and BEA (now Oracle) Tuxedo. A transaction manager interacts with an end user (connected through VTAM or TCP/IP, including 3270 and Web user interfaces) or another application, processes a business function (such as a banking account withdrawal), and maintains state throughout the process, making sure that the system records the business function correctly to a data store. Thus IMS TM is quite like a Web application, operating through a CGI program (for example), to provide an interface to query or update a database. IMS TM typically uses either IMS DB or Db2 as its backend database. When used alone with Db2 the IMS TM component can be purchased without the IMS DB component. IMS TM uses a messaging and queuing paradigm. An IMS control program receives a transaction entered from a terminal (or Web browser or other application) and then stores the transaction on a message queue (in memory or in a dataset). IMS then invokes its scheduler on the queued transaction to start the business application program in a message processing region. The message processing region retrieves the transaction from the IMS message queue and processes it, reading and updating IMS and/or Db2 databases, assuring proper recording of the transaction. Then, if required, IMS enqueues a response message back onto the IMS message queue. Once the output message is complete and available the IMS control program sends it back to the originating terminal. IMS TM can handle this whole process thousands (or even tens of thousands) of times per second. In 2013 IBM completed a benchmark on IMS Version 13 demonstrating the ability to process 100,000 transactions per second on a single IMS system.
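The enqueue/schedule/process/reply cycle described above can be caricatured in a few lines of C; this is a toy sketch of the message-queue idea only, not IMS code and not any IMS API:

#include <stdio.h>
#include <string.h>

#define QLEN 8

struct message { char text[64]; };

static struct message queue[QLEN];   /* stand-in for the IMS message queue */
static int head = 0, tail = 0;

static void enqueue(const char *text)          /* control program queues the input */
{
    strncpy(queue[tail % QLEN].text, text, sizeof queue[0].text - 1);
    queue[tail % QLEN].text[sizeof queue[0].text - 1] = '\0';
    tail++;
}

static void process_region(void)               /* message processing region */
{
    while (head < tail) {
        struct message *m = &queue[head++ % QLEN];
        /* business logic would read/update the database here, then build a reply */
        printf("processed \"%s\"; reply queued for the originating terminal\n", m->text);
    }
}

int main(void)
{
    enqueue("WITHDRAW ACCT=123 AMT=50");       /* transaction arriving from a terminal */
    enqueue("BALANCE  ACCT=123");
    process_region();
    return 0;
}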
Application:
Prior to IMS, businesses and governments had to write their own transaction processing environments. IMS TM provides a straightforward, easy-to-use, reliable, standard environment for high performance transaction execution. In fact, much of the world's banking industry relies on IMS, including the U.S. Federal Reserve. For example, chances are that withdrawing money from an automated teller machine (ATM) will trigger an IMS transaction. By the late 2000s, several Chinese banks had purchased IMS to support that country's burgeoning financial industry. Today IMS complements IBM Db2, IBM's relational database system, introduced in 1982. In general, IMS performs faster than Db2 for the common tasks but may require more programming effort to design and maintain for non-primary duties. Relational databases have generally proven superior in cases where the requirements, especially reporting requirements, change frequently or require a variety of viewpoint "angles" outside the primary or original function. A relational "data warehouse" may be used to supplement an IMS database. For example, IMS may provide primary ATM transactions because it performs well for such a specific task. However, the IMS data may be copied nightly to relational systems such that a variety of reports and processing tasks may be performed on the data. This allows each kind of database to focus best on its relative strength.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Autoclave**
Autoclave:
An autoclave is a machine used to carry out industrial and scientific processes requiring elevated temperature and pressure relative to ambient conditions. Autoclaves are used to sterilize equipment before surgical procedures, and in the chemical industry to cure coatings, vulcanize rubber, and perform hydrothermal synthesis. Industrial autoclaves are also used in manufacturing, especially in the production of composites.
Autoclave:
Many autoclaves are used to sterilize equipment and supplies by subjecting them to pressurized saturated steam at 121 °C (250 °F) for around 30-60 minutes at a pressure of 15 psi above atmospheric pressure (205 kPa or 2.02 atm) depending on the size of the load and the contents. The autoclave was invented by Charles Chamberland in 1879, although a precursor known as the steam digester was created by Denis Papin in 1679. The name comes from Greek auto-, ultimately meaning self, and Latin clavis meaning key, thus a self-locking device.
Uses:
Sterilization autoclaves are widely used in microbiology and mycology, medicine and prosthetics fabrication, tattooing and body piercing, and funerary practice. They vary in size and function depending on the media to be sterilized and are sometimes called retort in the chemical and food industries.
Uses:
Typical loads include laboratory glassware, other equipment and waste, surgical instruments, and medical waste. A notable recent and increasingly popular application of autoclaves is the pre-disposal treatment and sterilization of waste material, such as pathogenic hospital waste. Machines in this category largely operate under the same principles as conventional autoclaves in that they are able to neutralize potentially infectious agents by using pressurized steam and superheated water. A new generation of waste converters is capable of achieving the same effect without a pressure vessel to sterilize culture media, rubber material, gowns, dressings, gloves, etc. It is particularly useful for materials which cannot withstand the higher temperature of a hot air oven. Autoclaves are also widely used to cure composites, especially for melding multiple layers without any voids that would decrease material strength, and in the vulcanization of rubber. The high heat and pressure that autoclaves generate help to ensure that the best possible physical properties are repeatable. Manufacturers of spars for sailboats have autoclaves well over 50 feet (15 m) long and 10 feet (3 m) wide, and some autoclaves in the aerospace industry are large enough to hold whole airplane fuselages made of layered composites. Other types of autoclaves are used to grow crystals under high temperatures and pressures. Synthetic quartz crystals used in the electronics industry are grown in autoclaves. Packing of parachutes for specialist applications may be performed under vacuum in an autoclave, which allows the chutes to be warmed and inserted into their packs at the smallest volume.
Uses:
A thermal effluent decontamination system functions as a single-purpose autoclave designed for the sterilization of liquid waste and effluent.
Air removal:
It is very important to ensure that all of the trapped air is removed from the autoclave before activation, as trapped air is a very poor medium for achieving sterility. Steam at 134 °C (273 °F) can achieve a desired level of sterility in three minutes, while achieving the same level of sterility in hot air requires two hours at 160 °C (320 °F). Methods of air removal include: Downward displacement (or gravity-type): As steam enters the chamber, it fills the upper areas first as it is less dense than air. This process compresses the air to the bottom, forcing it out through a drain which often contains a temperature sensor. Only when air evacuation is complete does the discharge stop. Flow is usually controlled by a steam trap or a solenoid valve, but bleed holes are sometimes used. As the steam and air mix, it is also possible to force out the mixture from locations in the chamber other than the bottom.
Air removal:
Steam pulsing: Air dilution by using a series of steam pulses, in which the chamber is alternately pressurized and then depressurized to near atmospheric pressure.
Vacuum pumps: A vacuum pump sucks air or air/steam mixtures from the chamber.
Superatmospheric cycles: Achieved with a vacuum pump. It starts with a vacuum followed by a steam pulse followed by a vacuum followed by a steam pulse. The number of pulses depends on the particular autoclave and cycle chosen.
Subatmospheric cycles: Similar to the superatmospheric cycles, but chamber pressure never exceeds atmospheric pressure until they pressurize up to the sterilizing temperature. Stovetop autoclaves used in poorer or non-medical settings do not always have automatic air removal programs. The operator is required to manually perform steam pulsing at certain pressures as indicated by the gauge.
In medicine:
A medical autoclave is a device that uses steam to sterilize equipment and other objects. This means that all bacteria, viruses, fungi, and spores are inactivated. However, prions, such as those associated with Creutzfeldt–Jakob disease, and some toxins released by certain bacteria, such as Cereulide, may not be destroyed by autoclaving at the typical 134 °C for three minutes or 121 °C for 15 minutes and instead should be immersed in sodium hydroxide (1N NaOH) and heated in a gravity displacement autoclave at 121 °C for 30 min, cleaned, rinsed in water and subjected to routine sterilization. Although a wide range of archaea species, including Geogemma barosii, can survive and even reproduce at temperatures found in autoclaves, their growth rate is so slow at the lower temperatures in the less extreme environments occupied by humans that it is unlikely they could compete with other organisms. None of them are known to be infectious or otherwise pose a health risk to humans; in fact, their biochemistry is so different from our own and their multiplication rate is so slow that microbiologists need not worry about them. Autoclaves are found in many medical settings, laboratories, and other places that need to ensure the sterility of an object. Many procedures today employ single-use items rather than sterilizable, reusable items. This first happened with hypodermic needles, but today many surgical instruments (such as forceps, needle holders, and scalpel handles) are commonly single-use rather than reusable items (see waste autoclave). Autoclaves are of particular importance in poorer countries due to the much greater amount of equipment that is re-used. Providing stove-top or solar autoclaves to rural medical centers has been the subject of several proposed medical aid missions. Because damp heat is used, heat-labile products (such as some plastics) cannot be sterilized this way or they will melt. Paper and other products that may be damaged by steam must also be sterilized another way. In all autoclaves, items should always be separated to allow the steam to penetrate the load evenly.
In medicine:
Autoclaving is often used to sterilize medical waste prior to disposal in the standard municipal solid waste stream. This application has become more common as an alternative to incineration due to environmental and health concerns raised because of the combustion by-products emitted by incinerators, especially from the small units which were commonly operated at individual hospitals. Incineration or a similar thermal oxidation process is still generally mandated for pathological waste and other very toxic or infectious medical waste. For liquid waste, an effluent decontamination system is the equivalent hardware.
In medicine:
In dentistry, autoclaves provide sterilization of dental instruments.
In medicine:
In most of the industrialized world medical-grade autoclaves are regulated medical devices. Many medical-grade autoclaves are therefore limited to running regulator-approved cycles. Because they are optimized for continuous hospital use, they favor rectangular designs, require demanding maintenance regimens, and are costly to operate. (A properly calibrated medical-grade autoclave uses thousands of gallons of water each day, independent of task, with correspondingly high electric power consumption.)
In research:
Autoclaves used in education, research, biomedical research, pharmaceutical research and industrial settings (often called "research-grade" autoclaves) are used to sterilize lab instruments, glassware, culture media, and liquid media. Research-grade autoclaves are increasingly used in these settings where efficiency, ease-of-use, and flexibility are at a premium. Research-grade autoclaves may be configured for "pass-through" operation. This makes it possible to maintain absolute isolation between "clean" and potentially contaminated work areas. Pass-through research autoclaves are especially important in BSL-3 or BSL-4 facilities.
In research:
Research-grade autoclaves—which are not approved for use in sterilizing instruments that will be directly used on humans—are primarily designed for efficiency, flexibility, and ease-of-use. They display a wide range of designs and sizes, and are frequently tailored to their use and load type. Common variations include either a cylindrical or square pressure chamber, air- or water-cooling systems, and vertically or horizontally opening chamber doors (which may be electrically or manually powered).
In research:
In 2016, the Office of Sustainability at the University of California, Riverside (UCR) conducted a study of autoclave efficiency in their genomics and entomology research labs, tracking several units' power and water consumption. They found that, even when functioning within intended parameters, the medical-grade autoclaves used in their research labs were each consuming 700 gallons of water and 90 kWh of electricity per day (1,134 MWh of electricity and 8.8 million gallons of water total), because they consumed energy and water continuously, even when not in use. UCR's research-grade autoclaves performed the same tasks with equal effectiveness, but used 83% less energy and 97% less water.
Quality assurance:
In order to sterilize items effectively, it is important to use optimal parameters when running an autoclave cycle. A 2017 study performed by the Johns Hopkins Hospital biocontainment unit tested the ability of pass-through autoclaves to decontaminate loads of simulated biomedical waste when run on the factory default setting. The study found that 18 of 18 (100%) mock patient loads (6 PPE, 6 linen, and 6 liquid loads) passed sterilization tests with the optimized parameters compared to only 3 of 19 (16%) mock loads that passed with use of the factory default settings. There are physical, chemical, and biological indicators that can be used to ensure that an autoclave reaches the correct temperature for the correct amount of time. If a non-treated or improperly treated item can be confused for a treated item, then there is the risk that they will become mixed up, which, in some areas such as surgery, is critical.
Quality assurance:
Chemical indicators on medical packaging and autoclave tape change color once the correct conditions have been met, indicating that the object inside the package, or under the tape, has been appropriately processed. Autoclave tape is only a marker that steam and heat have activated the dye. The marker on the tape does not indicate complete sterility. A more difficult challenge device, named the Bowie-Dick device after its inventors, is also used to verify a full cycle. This contains a full sheet of chemical indicator placed in the center of a stack of paper. It is designed specifically to prove that the process achieved full temperature and time required for a normal minimum cycle of 134 °C for 3.5–4 minutes. To prove sterility, biological indicators are used. Biological indicators contain spores of a heat-resistant bacterium, Geobacillus stearothermophilus. If the autoclave does not reach the right temperature, the spores will germinate when incubated and their metabolism will change the color of a pH-sensitive chemical. Some physical indicators consist of an alloy designed to melt only after being subjected to a given temperature for the relevant holding time. If the alloy melts, the change will be visible. Some computer-controlled autoclaves use an F0 (F-nought) value to control the sterilization cycle. F0 values are set for the number of minutes of sterilization equivalent to 121 °C (250 °F) at 103 kPa (14.9 psi) above atmospheric pressure for 15 minutes. Since exact temperature control is difficult, the temperature is monitored, and the sterilization time adjusted accordingly.
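The F0 accumulation can be sketched numerically. The sketch below uses the commonly quoted lethality formula F0 = Σ Δt · 10^((T − 121.1 °C)/z) with z = 10 °C; the one-minute temperature log is invented for the example:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Hypothetical chamber temperatures (deg C), read once per minute. */
    const double temps[] = { 118.0, 120.5, 121.3, 121.6, 121.4, 120.9 };
    const int n = sizeof temps / sizeof temps[0];
    const double t_ref = 121.1;   /* reference temperature, 121.1 C (250 F) */
    const double z = 10.0;        /* z-value in deg C */
    const double dt_min = 1.0;    /* minutes between readings */
    double f0 = 0.0;

    for (int i = 0; i < n; i++)
        f0 += dt_min * pow(10.0, (temps[i] - t_ref) / z);

    printf("Accumulated F0 = %.2f equivalent minutes at 121.1 C\n", f0);
    return 0;
}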
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Genotyping by sequencing**
Genotyping by sequencing:
In the field of genetic sequencing, genotyping by sequencing, also called GBS, is a method to discover single nucleotide polymorphisms (SNP) in order to perform genotyping studies, such as genome-wide association studies (GWAS). GBS uses restriction enzymes to reduce genome complexity and genotype multiple DNA samples. After digestion, PCR is performed to increase the fragment pool, and then GBS libraries are sequenced using next generation sequencing technologies, usually resulting in about 100 bp single-end reads. It is relatively inexpensive and has been used in plant breeding. Although GBS presents an approach similar to the restriction-site-associated DNA sequencing (RAD-seq) method, they differ in some substantial ways.
Methods:
GBS is a robust, simple, and affordable procedure for SNP discovery and mapping. Overall, this approach reduces genome complexity with restriction enzymes (REs) in high-diversity, large-genome species for efficient high-throughput, highly multiplexed sequencing. By using appropriate REs, repetitive regions of genomes can be avoided and lower copy regions can be targeted, which reduces alignment problems in genetically highly diverse species. The method was first described by Elshire et al. (2011). In summary, high molecular weight DNAs are extracted and digested using a specific RE previously defined by cutting frequently in the major repetitive fraction of the genome. ApeKI is the most used RE. Barcode adapters are then ligated to sticky ends and PCR amplification is performed. Next-generation sequencing is then performed, resulting in about 100 bp single-end reads. Raw sequence data are filtered and aligned to a reference genome, usually using the Burrows–Wheeler Alignment tool (BWA) or Bowtie 2. The next step is to identify SNPs from aligned tags and score all discovered SNPs for various coverage, depth and genotypic statistics. Once a large-scale, species-wide SNP production has been run, it is possible to quickly call known SNPs in newly sequenced samples. When initially developed, the GBS approach was tested and validated in recombinant inbred lines (RILs) from a high-resolution maize mapping population (IBM) and doubled haploid (DH) barley lines from the Oregon Wolfe Barley (OWB) mapping population. Up to 96 RE (ApeKI)-digested DNA samples were pooled and processed simultaneously during the GBS library construction, which was checked on a Genome Analyzer II (Illumina, Inc.). Overall, 25,185 biallelic tags were mapped in maize, while 24,186 sequence tags were mapped in barley. Barley GBS marker validation using a single DH line (OWB003) showed 99% agreement between the reference markers and the mapped GBS reads. Although barley lacks a complete genome sequence, GBS does not require a reference genome for sequence tag mapping; the reference is developed during the process of sample genotyping. Tags can also be treated as dominant markers for alternative genetic analysis in the absence of a reference genome. Other than the multiplex GBS skimming, imputation of missing SNPs has the potential to further reduce GBS costs. GBS is a versatile and cost-effective procedure that will allow mining the genomes of any species without prior knowledge of their genome structure.
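As a toy illustration of complexity reduction with a restriction enzyme, the C sketch below scans a sequence for the ApeKI recognition site GCWGC (W = A or T); the input string is invented for the example, and a real GBS pipeline of course works on sequencing reads and a reference genome rather than a literal:

#include <stdio.h>
#include <string.h>

/* Return 1 if the five bases at s match the ApeKI site G C W G C (W = A or T). */
static int is_apeki_site(const char *s)
{
    return s[0] == 'G' && s[1] == 'C' &&
           (s[2] == 'A' || s[2] == 'T') &&
           s[3] == 'G' && s[4] == 'C';
}

int main(void)
{
    const char *seq = "TTGCAGCAAGGCTGCTTAGCAGCGT";   /* invented test sequence */
    size_t len = strlen(seq);

    for (size_t i = 0; i + 5 <= len; i++)
        if (is_apeki_site(seq + i))
            printf("ApeKI site at position %zu\n", i);
    return 0;
}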
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**XOR swap algorithm**
XOR swap algorithm:
In computer programming, the exclusive or swap (sometimes shortened to XOR swap) is an algorithm that uses the exclusive or bitwise operation to swap the values of two variables without using the temporary variable which is normally required.
The algorithm is primarily a novelty and a way of demonstrating properties of the exclusive or operation. It is sometimes discussed as a program optimization, but there are almost no cases where swapping via exclusive or provides benefit over the standard, obvious technique.
The algorithm:
Conventional swapping requires the use of a temporary storage variable. Using the XOR swap algorithm, however, no temporary storage is needed. The algorithm consists of three successive XOR operations, shown after this paragraph. Since XOR is a commutative operation, either X XOR Y or Y XOR X can be used interchangeably in any of the three lines. Note that on some architectures the first operand of the XOR instruction specifies the target location at which the result of the operation is stored, preventing this interchangeability. The algorithm typically corresponds to three machine-code instructions, represented by the pseudocode and assembly forms sketched below. In the System/370 assembly form, R1 and R2 are distinct registers, and each XR operation leaves its result in the register named in the first argument. Using x86 assembly, values X and Y are in registers eax and ebx (respectively), and xor places the result of the operation in the first register.
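The three steps, written as pseudocode with the System/370 and x86 forms described above alongside (a reconstruction of the table, which is not reproduced here):

X := X XOR Y      (System/370: XR R1,R2    x86: xor eax, ebx)
Y := Y XOR X      (System/370: XR R2,R1    x86: xor ebx, eax)
X := X XOR Y      (System/370: XR R1,R2    x86: xor eax, ebx)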
The algorithm:
However, in the pseudocode or high-level language version or implementation, the algorithm fails if x and y use the same storage location, since the value stored in that location will be zeroed out by the first XOR instruction, and then remain zero; it will not be "swapped with itself". This is not the same as if x and y have the same values. The trouble only comes when x and y use the same storage location, in which case their values must already be equal. That is, if x and y use the same storage location, then the first line, X := X XOR Y, sets x to zero (because x = y, so X XOR Y is zero) and sets y to zero (since it uses the same storage location), causing x and y to lose their original values.
Proof of correctness:
The binary operation XOR over bit strings of length N exhibits the following properties (where ⊕ denotes XOR):
L1. Commutativity: A ⊕ B = B ⊕ A
L2. Associativity: (A ⊕ B) ⊕ C = A ⊕ (B ⊕ C)
L3. Identity exists: there is a bit string 0 (of length N) such that A ⊕ 0 = A for any A
L4. Each element is its own inverse: for each A, A ⊕ A = 0
Suppose that we have two distinct registers R1 and R2, with initial values A and B respectively. We perform the operations below in sequence, and reduce our results using the properties listed above.
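The step-by-step reduction referred to above (a reconstruction of the table, using properties L1–L4):

R1 := R1 ⊕ R2   →   R1 = A ⊕ B,  R2 = B
R2 := R1 ⊕ R2   →   R2 = (A ⊕ B) ⊕ B = A ⊕ (B ⊕ B) = A ⊕ 0 = A   (by L2, L4, L3)
R1 := R1 ⊕ R2   →   R1 = (A ⊕ B) ⊕ A = (B ⊕ A) ⊕ A = B ⊕ (A ⊕ A) = B ⊕ 0 = B   (by L1, L2, L4, L3)

R1 now holds B and R2 holds A, so the values have been exchanged.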
Proof of correctness:
Linear algebra interpretation: As XOR can be interpreted as binary addition and a pair of bits can be interpreted as a vector in a two-dimensional vector space over the field with two elements, the steps in the algorithm can be interpreted as multiplication by 2×2 matrices over the field with two elements. For simplicity, assume initially that x and y are each single bits, not bit vectors.
Proof of correctness:
For example, the step X := X XOR Y, which also carries the implicit Y := Y, corresponds to the matrix $\begin{pmatrix}1&1\\0&1\end{pmatrix}$, since $\begin{pmatrix}1&1\\0&1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}x+y\\y\end{pmatrix}$.
The sequence of operations is then expressed as $\begin{pmatrix}1&1\\0&1\end{pmatrix}\begin{pmatrix}1&0\\1&1\end{pmatrix}\begin{pmatrix}1&1\\0&1\end{pmatrix}=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ (working with binary values, so 1 + 1 = 0), which expresses the elementary matrix of switching two rows (or columns) in terms of the transvections (shears) of adding one element to the other.
To generalize to where X and Y are not single bits, but instead bit vectors of length n, these 2×2 matrices are replaced by 2n×2n block matrices such as $\begin{pmatrix}I_n&I_n\\0&I_n\end{pmatrix}$.
These matrices are operating on values, not on variables (with storage locations), hence this interpretation abstracts away from issues of storage location and the problem of both variables sharing the same storage location.
Code example:
A C function implementing the XOR swap algorithm is sketched below. The code first checks whether the addresses are distinct; otherwise, if they were equal, the algorithm would fold to a triple *x ^= *x, resulting in zero.
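A minimal version of such a function might look like the following (a sketch, not necessarily the article's original listing):

void xor_swap(int *x, int *y)
{
    if (x != y) {     /* addresses must be distinct; otherwise *x ^= *x zeroes the value */
        *x ^= *y;
        *y ^= *x;
        *x ^= *y;
    }
}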
The XOR swap algorithm can also be defined with a macro:
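One common form of such a macro (an illustrative sketch; note that, unlike the function above, this form does not guard against the two arguments naming the same object):

#define XOR_SWAP(a, b) do { (a) ^= (b); (b) ^= (a); (a) ^= (b); } while (0)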
Reasons for avoidance in practice:
On modern CPU architectures, the XOR technique can be slower than using a temporary variable to do swapping. At least on recent x86 CPUs, both by AMD and Intel, moving between registers regularly incurs zero latency. (This is called MOV-elimination.) Even if there is not any architectural register available to use, the XCHG instruction will be at least as fast as the three XORs taken together. Another reason is that modern CPUs strive to execute instructions in parallel via instruction pipelines. In the XOR technique, the inputs to each operation depend on the results of the previous operation, so they must be executed in strictly sequential order, negating any benefits of instruction-level parallelism.
Reasons for avoidance in practice:
Aliasing The XOR swap is also complicated in practice by aliasing. If an attempt is made to XOR-swap the contents of some location with itself, the result is that the location is zeroed out and its value lost. Therefore, XOR swapping must not be used blindly in a high-level language if aliasing is possible. This issue does not apply if the technique is used in assembly to swap the contents of two registers.
Reasons for avoidance in practice:
Similar problems occur with call by name, as in Jensen's Device, where swapping i and A[i] via a temporary variable yields incorrect results due to the arguments being related: swapping via temp = i; i = A[i]; A[i] = temp changes the value for i in the second statement, which then results in the incorrect i value for A[i] in the third statement.
Variations:
The underlying principle of the XOR swap algorithm can be applied to any operation meeting criteria L1 through L4 above. Replacing XOR by addition and subtraction gives various slightly different, but largely equivalent, formulations; one such formulation is sketched after this paragraph. Unlike the XOR swap, this variation requires that the underlying processor or programming language uses a method such as modular arithmetic or bignums to guarantee that the computation of X + Y cannot cause an error due to integer overflow. Therefore, it is seen even more rarely in practice than the XOR swap.
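A sketch of such an addition/subtraction swap, written with unsigned integers as the following paragraph requires (illustrative only; the name AddSwap follows the text below):

void AddSwap(unsigned int *x, unsigned int *y)
{
    if (x != y) {          /* same aliasing caveat as the XOR swap */
        *x = *x + *y;      /* X := X + Y (wraps modulo 2^s on overflow) */
        *y = *x - *y;      /* Y := (X + Y) - Y, i.e. the original X   */
        *x = *x - *y;      /* X := (X + Y) - X, i.e. the original Y   */
    }
}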
Variations:
However, the implementation of AddSwap above in the C programming language always works even in case of integer overflow, since, according to the C standard, addition and subtraction of unsigned integers follow the rules of modular arithmetic, i.e., are done in the cyclic group Z/2^sZ where s is the number of bits of unsigned int. Indeed, the correctness of the algorithm follows from the fact that the formulas (x+y)−y=x and (x+y)−((x+y)−y)=y hold in any abelian group. This generalizes the proof for the XOR swap algorithm: XOR is both the addition and subtraction in the abelian group (Z/2Z)^s (which is the direct sum of s copies of Z/2Z).
Variations:
This doesn't hold when dealing with the signed int type (the default for int). Signed integer overflow is an undefined behavior in C and thus modular arithmetic is not guaranteed by the standard, which may lead to incorrect results.
The sequence of operations in AddSwap can be expressed via matrix multiplication as $\begin{pmatrix}1&-1\\0&1\end{pmatrix}\begin{pmatrix}1&0\\1&-1\end{pmatrix}\begin{pmatrix}1&1\\0&1\end{pmatrix}=\begin{pmatrix}0&1\\1&0\end{pmatrix}$.
Application to register allocation:
On architectures lacking a dedicated swap instruction, the XOR swap algorithm is required for optimal register allocation because it avoids the extra temporary register. This is particularly important for compilers using static single assignment form for register allocation; these compilers occasionally produce programs that need to swap two registers when no registers are free. The XOR swap algorithm avoids the need to reserve an extra register or to spill any registers to main memory. The addition/subtraction variant can also be used for the same purpose. This method of register allocation is particularly relevant to GPU shader compilers. On modern GPU architectures, spilling variables is expensive due to limited memory bandwidth and high memory latency, while limiting register usage can improve performance due to dynamic partitioning of the register file. The XOR swap algorithm is therefore required by some GPU compilers.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Acute hemorrhagic edema of infancy**
Acute hemorrhagic edema of infancy:
Acute hemorrhagic edema of infancy is a skin condition that affects children under the age of two with a recent history of upper respiratory illness, a course of antibiotics, or both. The disease was first described in 1938 by Finkelstein and later by Seidlmayer as "Seidlmayer cockade purpura".
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Major second**
Major second:
In Western music theory, a major second (sometimes also called a whole tone or a whole step) is a second spanning two semitones. A second is a musical interval encompassing two adjacent staff positions (see Interval number for more details). For example, the interval from C to D is a major second, as the note D lies two semitones above C, and the two notes are notated on adjacent staff positions. Diminished, minor and augmented seconds are notated on adjacent staff positions as well, but consist of a different number of semitones (zero, one, and three).
Major second:
The intervals from the tonic (keynote) in an upward direction to the second, to the third, to the sixth, and to the seventh scale degrees of a major scale are called major.
Major second:
The major second is the interval that occurs between the first and second degrees of a major scale, the tonic and the supertonic. On a musical keyboard, a major second is the interval between two keys separated by one key, counting white and black keys alike. On a guitar string, it is the interval separated by two frets. In moveable-do solfège, it is the interval between do and re. It is considered a melodic step, as opposed to larger intervals called skips.
Major second:
Intervals composed of two semitones, such as the major second and the diminished third, are also called tones, whole tones, or whole steps.
In just intonation, major seconds can occur in at least two different frequency ratios: 9:8 (about 203.9 cents) and 10:9 (about 182.4 cents). The largest (9:8) ones are called major tones or greater tones, the smallest (10:9) are called minor tones or lesser tones. Their size differs by exactly one syntonic comma (81:80, or about 21.5 cents).
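These cent values follow directly from the definition of the cent as 1200 times the base-2 logarithm of the frequency ratio; as a check:

\[ 1200\log_2\tfrac{9}{8} \approx 203.91 \text{ cents}, \qquad 1200\log_2\tfrac{10}{9} \approx 182.40 \text{ cents}, \]
\[ 1200\log_2\tfrac{81}{80} = 1200\log_2\tfrac{9/8}{10/9} \approx 21.51 \text{ cents (the syntonic comma)}. \]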
Some equal temperaments, such as 15-ET and 22-ET, also distinguish between a greater and a lesser tone.
The major second was historically considered one of the most dissonant intervals of the diatonic scale, although much 20th-century music saw it reimagined as a consonance. It is common in many different musical systems, including Arabic music, Turkish music and music of the Balkans, among others. It occurs in both diatonic and pentatonic scales.
In equal temperament, a major second, for example from middle C up to the D above it, spans 200 cents.
Major and minor tones:
In tuning systems using just intonation, such as 5-limit tuning, in which major seconds occur in two different sizes, the wider of them is called a major tone or greater tone, and the narrower a minor tone or lesser tone. The difference in size between a major tone and a minor tone is equal to one syntonic comma (about 21.51 cents).
Major and minor tones:
The major tone is the 9:8 interval, and it is an approximation thereof in other tuning systems, while the minor tone is the 10:9 ratio. The major tone may be derived from the harmonic series as the interval between the eighth and ninth harmonics. The minor tone may be derived from the harmonic series as the interval between the ninth and tenth harmonics. The 10:9 minor tone arises in the C major scale between D & E and G & A, and is "a sharper dissonance" than 9:8. The 9:8 major tone arises in the C major scale between C & D, F & G, and A & B. This 9:8 interval was named epogdoon (meaning 'one eighth in addition') by the Pythagoreans.
Major and minor tones:
Notice that in these tuning systems, a third kind of whole tone, even wider than the major tone, exists. This interval of two semitones, with ratio 256:225, is simply called the diminished third (for further details, see Five-limit tuning § Size of intervals).
Some equal temperaments also produce major seconds of two different sizes, called greater and lesser tones (or major and minor tones). For instance, this is true for 15-ET, 22-ET, 34-ET, 41-ET, 53-ET, and 72-ET.
Conversely, in twelve-tone equal temperament, Pythagorean tuning, and meantone temperament (including 19-ET and 31-ET) all major seconds have the same size, so there cannot be a distinction between a greater and a lesser tone.
Major and minor tones:
In any system where there is only one size of major second, the terms greater and lesser tone (or major and minor tone) are rarely used with a different meaning. Namely, they are used to indicate the two distinct kinds of whole tone, more commonly and more appropriately called major second (M2) and diminished third (d3). Similarly, major semitones and minor semitones are more often and more appropriately referred to as minor seconds (m2) and augmented unisons (A1), or diatonic and chromatic semitones.
Major and minor tones:
Unlike almost all uses of the terms major and minor, these intervals span the same number of semitones. They both span 2 semitones, while, for example, a major third (4 semitones) and minor third (3 semitones) differ by one semitone. Thus, to avoid ambiguity, it is preferable to call them greater tone and lesser tone (see also greater and lesser diesis).
Major and minor tones:
Two major tones equal a ditone.
Epogdoon:
In Pythagorean music theory, the epogdoon (Ancient Greek: ἐπόγδοον) is the interval with the ratio 9 to 8. The word is composed of the prefix epi- meaning "on top of" and ogdoon meaning "one eighth"; so it means "one eighth in addition". For example, the natural numbers 8 and 9 stand in this relation (8 + (1/8 × 8) = 9).
Epogdoon:
According to Plutarch, the Pythagoreans hated the number 17 because it separates the 16 from its epogdoon 18. "[Epogdoos] is the 9:8 ratio that corresponds to the tone, [hêmiolios] is the 3:2 ratio that is associated with the musical fifth, and [epitritos] is the 4:3 ratio associated with the musical fourth. It is common to translate epogdoos as 'tone' [major second]."
Further reading:
Barker, Andrew (2007). The Science of Harmonics in Classical Greece. Cambridge University Press. ISBN 9780521879514.
Plutarch (2005). Moralia. Translated by Frank Cole Babbitt. Kessinger Publishing. ISBN 9781417905003.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Head shadow**
Head shadow:
A head shadow (or acoustic shadow) is a region of reduced amplitude of a sound because it is obstructed by the head. It is an example of diffraction. Sound may have to travel through and around the head in order to reach an ear. The obstruction caused by the head can account for attenuation (reduced amplitude) of overall intensity as well as cause a filtering effect. The filtering effects of head shadowing are an essential element of sound localisation—the brain weighs the relative amplitude, timbre, and phase of a sound heard by the two ears and uses the difference to interpret directional information.
Head shadow:
The shadowed ear, the ear further from the sound source, receives sound slightly later (up to approximately 0.7 ms later) than the unshadowed ear, and the timbre, or frequency spectrum, of the shadowed sound wave is different because of the obstruction of the head.
The head shadow causes particular difficulty in sound localisation in people suffering from unilateral hearing loss. It is a factor to consider when correcting hearing loss with directional hearing aids.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**(Z)-gamma-bisabolene synthase**
(Z)-gamma-bisabolene synthase:
(Z)-γ-bisabolene synthase (EC 4.2.3.40) is an enzyme with systematic name (2E,6E)-farnesyl-diphosphate diphosphate-lyase ((Z)-γ-bisabolene-forming). This enzyme catalyses the following chemical reaction: (2E,6E)-farnesyl diphosphate ⇌ (Z)-γ-bisabolene + diphosphate. This enzyme is expressed in the root, hydathodes and stigma of the plant Arabidopsis thaliana.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**GOM Player**
GOM Player:
GOM Player is a media player for Windows, developed by GOM & Company. With more than 100 million downloads, it is also known as the most used player in South Korea. Its main features include the ability to play some broken media files and find missing codecs using a codec finder service. The word gom (곰) means "bear" in Korean, and as such the icon of GOM Player looks like a bear's paw.
GOM Player:
GOM Player has a free version and a paid version. The paid version, GOM Player Plus, allows video playback without advertisements and includes convenient features such as simple configuration.
Features:
GOM Player has several embedded video and audio codecs, so it can play immediately without installing any external codecs. If there is no codec, it can be found with a codec search feature.
The basic embedded codec has the advantage of making it easier for computer beginners who lack knowledge of codecs to play videos.
Video files that are incomplete, damaged, or not completely downloaded can also be played.
If the file name of a video file and subtitles are the same, subtitles are automatically displayed when the video is running.
If there is no subtitle file, subtitles can be found in the subtitles archive supported by GOM Player.
In December 2015, GOM Player became the first South Korean video player to support 360-degree video (360 VR), and the GOM Player mobile app has also supported 360-degree video since June 2016.
360 degree videos and trending videos on YouTube can be played through the right panel called Miniweb.
In 2019, GOM Player's preferences menu was reorganized.
In 2020, a new macOS version of GOM Player was released.
GOM Player is also available on mobile (Android/iOS).
In 2020, the company released a new GOM Player app optimized for iPhones.
Supported formats:
GOM Player can play the following multimedia formats.
Video formats:
- Windows: .avi, .ogm, .mkv, .mp4, .k3g, .ifo, .ts, .asf, .wmv, .wma, .mov, .mpg, .m1v, .m2v, .vob, .m4v, .3gp/3gp2, .rmvb, .rm, .ogg, .flv, .asx (video), .dat (other video formats can also be played when an external codec is used)
- Mac: .mkv, .mp4, .avi, .m4v, .mov, .3gp, .ts, .mts, .m2ts, .wmv, .flv, .f4v, .asf, .webm, .rm, .rmvb, .qt, .dv, .mpg, .mpeg, .mxf, .vob, .gif
Audio formats:
- Windows: .mp3, .m4a, .aac, .ogg, .flac, .wav, .wma, .rma, .alac (other audio formats can also be played when an external codec is used)
- Mac: .mp3, .aac, .mka, .flac, .ogg, .oga, .mogg, .m4a, .opus, .wav, .wv, .aiff, .ape, .tta, .tak
Subtitle formats:
- Windows: .smi, .srt, .rt, .sub (& IDX), .vtt (text sub), .dvb, .ass, .psb, .txt, .sbv, vobsub (embedded sub)
- Mac: .utf, .utf8, .utf-8, .idx, .sub, .srt, .smi, .rt, .ssa, .aqt, .jss, .js, .ass, .mks, .vtt, .sup, .scc
Playlist formats:
- Windows: .asx, .pls
- Mac: .json (self-formatting)
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Ciliated cyst of the vulva**
Ciliated cyst of the vulva:
Ciliated cyst of the vulva (also known as "Cutaneous Müllerian cyst," and "Paramesonephric mucinous cyst of the vulva") is a cutaneous condition characterized by a cyst of the vulva.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Affinity maturation**
Affinity maturation:
In immunology, affinity maturation is the process by which TFH cell-activated B cells produce antibodies with increased affinity for antigen during the course of an immune response. With repeated exposures to the same antigen, a host will produce antibodies of successively greater affinities. A secondary response can elicit antibodies with several fold greater affinity than in a primary response. Affinity maturation primarily occurs on membrane immunoglobulin of germinal center B cells and as a direct result of somatic hypermutation (SHM) and selection by TFH cells.
In vivo:
The process is thought to involve two interrelated processes, occurring in the germinal centers of the secondary lymphoid organs: Somatic hypermutation: Mutations in the variable, antigen-binding coding sequences (known as complementarity-determining regions (CDR)) of the immunoglobulin genes. The mutation rate is up to 1,000,000 times higher than in cell lines outside the lymphoid system. Although the exact mechanism of the SHM is still not known, a major role for the activation-induced (cytidine) deaminase has been discussed. The increased mutation rate results in 1-2 mutations per CDR and, hence, per cell generation. The mutations alter the binding specificity and binding affinities of the resultant antibodies.
In vivo:
Clonal selection: B cells that have undergone SHM must compete for limiting growth resources, including the availability of antigen and paracrine signals from TFH cells. The follicular dendritic cells (FDCs) of the germinal centers present antigen to the B cells, and the B cell progeny with the highest affinities for antigen, having gained a competitive advantage, are favored for positive selection leading to their survival. Positive selection is based on steady cross-talk between TFH cells and their cognate antigen presenting GC B cell. Because a limited number of TFH cells reside in the germinal center, only highly competitive B cells stably conjugate with TFH cells and thus receive T cell-dependent survival signals. B cell progeny that have undergone SHM, but bind antigen with lower affinity will be out-competed, and be deleted. Over several rounds of selection, the resultant secreted antibodies produced will have effectively increased affinities for antigen.
In vitro:
Like the natural process, in vitro affinity maturation is based on the principles of mutation and selection. It has been used successfully to optimize antibodies, antibody fragments, and other peptide molecules such as antibody mimetics. Random mutations inside the CDRs are introduced using radiation, chemical mutagens, or error-prone PCR. In addition, genetic diversity can be increased by chain shuffling. Two or three rounds of mutation and selection using display methods such as phage display usually result in antibody fragments with affinities in the low nanomolar range.
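Conceptually, these rounds of diversification and selection behave like a simple evolutionary search. The sketch below is a toy simulation only (the alphabet, "affinity" score, mutation rate, and population sizes are all invented for illustration and do not model real antibody chemistry or phage-display data):

```python
import random

random.seed(0)

TARGET = "ANTIGEN"  # toy "epitope"; affinity = number of positions matching this string
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # one-letter amino acid codes

def affinity(sequence: str) -> int:
    """Toy affinity score: count of positions identical to the target."""
    return sum(a == b for a, b in zip(sequence, TARGET))

def mutate(sequence: str, rate: float = 0.2) -> str:
    """Randomly replace residues, caricaturing somatic hypermutation / error-prone PCR."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in sequence)

# Start from random weak binders and run rounds of diversification + selection.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
for round_number in range(1, 4):
    population = [mutate(seq) for seq in population for _ in range(10)]   # mutation
    population = sorted(population, key=affinity, reverse=True)[:50]      # selection
    print(f"round {round_number}: best affinity = {affinity(population[0])}")
```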
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Magnesium orthosilicate**
Magnesium orthosilicate:
Magnesium orthosilicate is a chemical compound with the formula Mg2SiO4. It is the orthosilicate salt of magnesium. It occurs in nature as the mineral forsterite.
Production:
Magnesium orthosilicate is made by the fusion of stoichiometric amounts of magnesium and silicon oxides at 1,900 °C (3,450 °F).
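For reference, the stoichiometric fusion described above corresponds to the balanced reaction of the two oxides:

$$2\,\mathrm{MgO} + \mathrm{SiO_2} \longrightarrow \mathrm{Mg_2SiO_4}$$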
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Cambridge Optical Aperture Synthesis Telescope**
Cambridge Optical Aperture Synthesis Telescope:
COAST, the Cambridge Optical Aperture Synthesis Telescope, is a multi-element optical astronomical interferometer with baselines of up to 100 metres, which uses aperture synthesis to observe stars with angular resolution as high as one thousandth of one arcsecond (producing much higher resolution images than individual telescopes, including the Hubble Space Telescope). The principal limitation is that COAST can only image bright stars.
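As a rough order-of-magnitude check (assuming an observing wavelength of about 500 nm, a figure chosen here for illustration rather than taken from the source), the diffraction-limited resolution of a 100-metre baseline is

$$\theta \approx \frac{\lambda}{B} = \frac{5\times10^{-7}\,\mathrm{m}}{100\,\mathrm{m}} = 5\times10^{-9}\,\mathrm{rad} \approx 0.001\ \mathrm{arcseconds},$$

consistent with the milliarcsecond-scale resolution quoted above.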
Cambridge Optical Aperture Synthesis Telescope:
COAST was the first long-baseline interferometer to obtain high-resolution images of the surfaces of stars other than the Sun (although the surfaces of other stars had previously been imaged at lower resolution using aperture masking interferometry on the William Herschel Telescope).
The COAST array was conceived by John E. Baldwin and is operated by the Cavendish Astrophysics Group. It is situated at the Mullard Radio Astronomy Observatory in Cambridgeshire, England.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Zero-acquaintance personality judgments**
Zero-acquaintance personality judgments:
A zero-acquaintance situation requires a perceiver to make a judgment about a target with whom the perceiver has had no prior social interaction. These judgments can be made using a variety of cues, including brief interactions with the target, video recordings of the target, photographs of the target, and observations of the target's personal environments, among others. In zero-acquaintance studies, the target's actual personality is determined through the target's self-rating and/or ratings from close acquaintance(s) of that target. Consensus in ratings is determined by how consistently perceivers rate the target's personality when compared to other raters. Accuracy in ratings is determined by how well perceivers' ratings of a target compare to that target's self-ratings on the same scale, or to that target's close acquaintances' ratings of the target. Zero-acquaintance judgments are regularly made in day-to-day life. Given that these judgments tend to remain stable, even as the length of interaction increases, they can influence important interpersonal outcomes.
Background:
History: The study of zero-acquaintance personality judgments developed from Cleeton and Knight's (1924) intent to demonstrate the futility of using physical criteria to predict unobservable individual traits. In order to accomplish this, Cleeton and Knight (1924) recruited 30 target participants from national fraternities and sororities, so that a large group of close acquaintances from these organizations could rate eight traits (i.e. individual traits included sound judgment, intellectual capacity, frankness, willpower, ability to make friends, leadership, originality, and impulsiveness) of the target participants. Cleeton and Knight (1924) then asked a group of strangers to rate these eight traits of each target participant after viewing the target participant from a distance for only a few minutes. After measuring several objective physical traits of the target participants, such as cranium size and eye width, Cleeton and Knight (1924) found that these physical traits were unrelated to close acquaintances' ratings of unobservable individual traits. However, they found that strangers' ratings of an unfamiliar individual's traits were reliable; strangers tended to rate a target's personality similarly. Although these ratings were inaccurate, it became apparent that raters must be using similar indicators to make judgements about individual traits.
Background:
Passini and Norman (1966) found comparable evidence that strangers provide similar ratings of unobservable personality traits of a target participant with no prior acquaintanceship. In an introductory undergraduate psychology course, students with no previous interactions were placed into groups and asked to complete personality ratings for each member in the group. Given that the strangers tended to rate a target participant's personality similarly, Passini and Norman (1966) posited that some common observable characteristics must be informing these judgements. In the same year, Taft (1966) demonstrated that strangers can make judgments of personality more accurately than chance, but not as accurately as close acquaintances.
Background:
These findings went unnoticed for over twenty years, until Albright, Kenny, and Malloy (1988) revived interest and formally coined the term zero-acquaintance personality judgments. These researchers established that certain physical appearance variables, including attractiveness, type of dress (both formal and neatness), and perceived age, informed strangers' zero-acquaintance personality judgments. Ratings between strangers were most similar to each other and to the target's self-rating for the traits "sociable" and "responsible". Ratings of target attractiveness informed judgements of sociability; formality and neatness of dress informed judgements of responsibility.
Background:
Consensus in ratings: Consensus in zero-acquaintance studies refers to the degree to which multiple perceivers of a target make similar judgments of that target's personality traits. Even from a momentary interaction, multiple perceivers can come to the same conclusion about aspects of a person's personality. A few different explanations exist for this phenomenon. One such explanation is called similar meaning systems. This explanation posits that consensus arises when raters agree on the meaning of the information they use to make personality judgments. Some aspect of the individual (such as facial expressions or posture) appears to have the same meaning to each perceiver. For example, a smile or an erect stance might be an indicator of extraversion (one of the five traits in the five-factor model of personality) to all perceivers; therefore, these raters will all provide similar extraversion ratings. This does not necessarily mean that the ratings are accurate, just that all perceivers rated the individual similarly.
Background:
Stereotypes also influence consensus across perceivers. If perceivers of a target individual hold the same stereotypes about the target and use them in making personality judgments, consensus will be higher. For example, one gender may be stereotypically considered less emotionally stable than the other (neuroticism). Assuming perceivers hold this stereotype, they will make similar emotional stability ratings when the target's gender is known. Again, this does not assume that the stereotype is valid. In fact, some common stereotypes may prove to be invalid; use of such a stereotype by multiple perceivers will result in consistent, but inaccurate ratings (or high consensus and low accuracy).
Background:
It is worth noting that different personality traits show different levels of consensus. Of the traits in the Five Factor Model of personality, conscientiousness and extraversion tend to show higher levels of consensus, while agreeableness tends to demonstrate the least consensus. These patterns of findings suggest that some traits are more easily judged from brief interactions and more likely to be agreed upon than others. For example, facial expressions, which often indicate levels of extraversion, are easily detected in brief meetings, pictures, or video clips; perceivers tend to agree on what traits these facial expressions convey. Therefore, consensus tends to be higher for extraversion. Consensus can also result from beliefs about the target's physical attractiveness and traits commonly associated with attractive individuals. For example, extraversion is often associated with physical attractiveness. Because perceivers tend to agree on targets' physical attractiveness, consensus for extraversion is generally high. Studies examining these differences in consensus across the Big Five traits have found that consensus ratings for extraversion tend to be around .27, compared to consensus of .03 for agreeableness. These numbers indicate how similarly raters view the same target's personality, with higher numbers indicating greater consensus for a particular trait. For agreeableness (with a consensus score of .03), there was virtually no agreement across raters of the target's level of agreeableness. For extraversion (.27), there was much more consensus about the target's level of extraversion.
Background:
Studies have also examined the differences in consensus for zero-acquaintance raters and raters who have been long acquainted with the target. For extraversion, consensus ratings seem to be similar (.27 for zero-acquaintance and .29 for long-term acquaintance). For all other traits, long-term acquaintance ratings tend to converge much more than for zero-acquaintance ratings. For example, ratings of agreeableness demonstrate consensus estimates of .27 when long-term acquaintances are the raters, compared to .03 when there are zero-acquaintance raters. Those who are familiar with the target individual tend to agree about their personality traits much more than individuals who do not know the target, with the exception of extraversion. These study findings suggest that extraversion is a fairly observable trait for any person, whether they know the target or not, and people interpret social cues related to extraversion quite similarly.
Background:
Accuracy in ratings: To determine if the perceiver in a zero-acquaintance context has made an accurate judgment of a target's personality, perceiver ratings are compared to the target's own ratings of their personality. The degree to which these two ratings converge is known as accuracy. Peer ratings (from people who have frequent contact with the individual being rated) can also be used to determine accuracy; if perceiver ratings of a target converge with peer ratings of the target, accuracy has been established. Accuracy is achieved when perceivers use what is known as "good information" about the target to make ratings. Information is "good" when it actually represents the trait being rated. For example, if a smile is actually reflective of extraversion, accuracy will be increased when perceivers use smiles to influence their extraversion ratings. This notion is known as the Realistic Accuracy Model (RAM; David C. Funder). When perceivers utilize good information in their ratings and ignore bad or irrelevant information, accuracy increases.
Background:
As mentioned above, both self-ratings and peer-ratings of a target can be used to calculate accuracy; in fact, several researchers suggest it is best to combine self-reports and peer reports when measuring accuracy (self-report inventory). This helps eliminate the flaws of each measure on its own and can increase confidence that the accuracy measure is itself accurate. This combined rating (usually an average of peer- and self-score) is then correlated with the zero-acquaintance perceiver ratings to determine accuracy, in what has been termed the "Gold Standard" in personality research. Like consensus, accuracy is greater for some traits than others. In fact, a similar pattern to consensus exists, with extraversion and conscientiousness ratings tending to be the most accurate and agreeableness and openness ratings the least accurate of the five-factor model of personality. This is likely due to the trait-relevant information available to perceivers in zero-acquaintance settings. For example, in photographs there is more information present that accurately reflects conscientiousness and extraversion (such as cleanliness and facial expressions), than that which reflects agreeableness or openness. Sociability-related traits have typically been found to be the easiest to judge when raters have little acquaintance with targets. Since extraversion largely measures social tendencies, it makes sense that the highest consensus and accuracy are found for this trait. Extraversion ratings by peer raters who had little interaction with targets demonstrated validities of .35, compared to validities of .01 for agreeableness. This stark difference in validity suggests that agreeableness is much harder to judge with accuracy when the rater is unfamiliar with the target. Conscientiousness has also demonstrated relatively high accuracy when the rater is unacquainted, with a validity of .29. The validity coefficient tells us how related the raters' judgments are to the target's own ratings of their personality. A validity coefficient of .35 (such as for extraversion) is fairly high, compared to a validity coefficient of .01, which indicates that there is absolutely no relationship between the two sets of ratings.
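A minimal sketch of this accuracy calculation, assuming the criterion is simply the mean of self- and peer-ratings and that accuracy is reported as a Pearson correlation (all ratings below are invented for illustration):

```python
import numpy as np

# Invented ratings for ten targets on a single trait (e.g. extraversion), 1-5 scale.
self_ratings = np.array([4, 2, 5, 3, 4, 1, 5, 2, 3, 4], dtype=float)
peer_ratings = np.array([5, 2, 4, 3, 5, 2, 4, 1, 3, 5], dtype=float)
perceiver_ratings = np.array([4, 3, 5, 2, 4, 2, 5, 2, 2, 4], dtype=float)

# Combined criterion ("gold standard"): average of self-report and peer report.
criterion = (self_ratings + peer_ratings) / 2

# Accuracy (validity coefficient): correlation of zero-acquaintance judgments with the criterion.
accuracy = np.corrcoef(perceiver_ratings, criterion)[0, 1]
print(f"validity coefficient: {accuracy:.2f}")
```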
Background:
When raters are not very acquainted with the targets they are rating, it appears that the length of time they are actually able to observe the person doesn't have a strong influence on the accuracy of the behavioral judgments made. When raters have as little as 30 seconds to make a judgment about a target, they have the same level of accuracy as when they have four to five hours. This suggests that several cues about a person can be displayed in a very short window of time. Across all ratings in all time windows, accuracy was .39, which is similar to the results of other zero-acquaintance studies. It is also important to note that while zero-acquaintance judgments can be accurate and particular traits are easy to judge, research generally shows that the greater acquaintance someone has with the target, the more accurate their ratings are. A classic study showed that the Big Five traits on average have higher validity for acquaintances (.40) than strangers (.29). This is a 38% increase in accuracy for acquaintances, compared to strangers. A later study examined this same topic over an extended period of time in order to better test an "acquaintance effect" on ratings. When people were asked to rate targets at multiple points in time, their ratings became more accurate (more similar to the targets' self-ratings) over time, as they got to know the target better. At weeks 1, 4, and 7, accuracy in ratings across the Big Five traits increased from .21 to .26, to .30, respectively. This demonstrates the phenomenon known as the acquaintance effect: over time, as people get to know someone better, they are better able to estimate that target's true personality.
Background:
Recent research: Recent research employing zero-acquaintance situations has largely focused on what traits are judged most consistently and accurately, and in what contexts. The most common contexts for zero-acquaintance judgments are those that allow for the observation of a target's physical appearance. Information about physical attributes is generally garnered in one of two ways: observation or photographs. Observation ratings typically come from video or sound recordings. For example, Borkenau and Liebler (1992) asked perceivers to view either a silent video, a video with sound, or audio only of targets entering a room, sitting down, and reading a script. Perceivers provided the most accurate judgements for extraversion and agreeableness with both audio and visual cues; conscientiousness was most accurate with visual cues alone.
Background:
Interest in photographs has continued to increase in recent years, given the widespread use of these images on social media sites. For example, Naumann, Vazire, Rentfrow, and Gosling (2009) found that perceivers can make more accurate judgments of targets when these targets appear in spontaneous or "strike a pose" photographs than in non-expressive full body photographs. Specifically, perceivers were only able to accurately judge extraversion, self-esteem, and religiosity using a non-expressive full body photograph. When the target struck a pose, however, the perceiver was able to judge extraversion, agreeableness, emotional stability, openness, likeability, self-esteem, loneliness, religiosity, and political orientation with some degree of accuracy. In a more recent study exploring the accuracy of personality judgments in selfie profile pictures, Qiu, Lu, Yang, Qu, and Zhu (2015) found that perceivers only accurately predict openness from selfies; these ratings were formed through impressions of the target's emotional positivity.
Background:
Zero-acquaintance personality judgments can also be made through other artifacts, such as personal belongings or social media profiles. Gosling and colleagues (2002) found that perceivers can more accurately rate targets' personality by observing their bedrooms than their offices. When observing the bedrooms, perceivers could accurately rate extraversion, agreeableness, conscientiousness, emotional stability, and openness to experience. When observing the offices, however, perceivers could only accurately rate extraversion, conscientiousness, and openness to experience. The researchers also found which cues informed which rating of personality. For instance, neatness informed judgments of conscientiousness, whereas variety and quantity of books informed judgements of openness to experience. More recently, Back et al. (2010) investigated how well perceivers could make judgements of targets by viewing their social media profiles, either on Facebook or StudiVZ. The researchers found that perceivers could accurately predict extraversion, agreeableness, conscientiousness, and openness of a target by simply perusing the social media profile.
Implications and unanswered questions:
Personality judgments made in zero-acquaintance contexts are extremely common in day-to-day life. As can be seen from the studies referenced previously, personality impressions can be formed from extremely brief interactions, physical appearance, and personal environments. As the world becomes increasingly virtual, these zero-acquaintance judgments are becoming even more popular, as people turn to online profiles to infer people's personalities for use in both professional and interpersonal contexts. These judgments can have important implications, as they can influence the decision to further engage with an individual, and how so. For example, when employees were presented with the job application and picture of a fictitious future manager, their zero-acquaintance judgments of personality predicted these employees' willingness to work under this hypothetical manager. Furthermore, research has indicated that these first personality impressions remain relatively stable over time, even as the extent of interaction with the target person increases over time. In other words, the first impressions created in zero-acquaintance contexts are hard to change, and thus are predictive of further outcomes. For example, student impressions of an instructor on the first day of class were shown to be relatively consistent with impressions after the class had concluded, and thus these judgments predicted student course evaluations. Additionally, interviewer first impressions of interviewees have been shown to influence the amount of information provided by the interviewer during the interview, as well as the communication style and rapport established. Despite research demonstrating the stability of these zero-acquaintance judgments, as well as their accuracy and consensus, many questions still remain. For example, researchers have just begun to examine how different forms of zero-acquaintance judgments compare to one another. In other words, it is likely that one trait may be better judged in a particular context (e.g. photographs) while a different trait may be more accurately judged via a different context (e.g. video recordings). More research is needed in order to determine exactly which traits are best judged in which contexts.
Implications and unanswered questions:
Additionally, it is possible that different zero-acquaintance contexts may provide conflicting information regarding the same trait. If a perceiver encounters both sources of information, it is unknown which source would be emphasized in the perceiver's judgments. Moreover, the weight that any particular perceiver attributes to one source over another may be influenced by characteristics of that perceiver or of the situation, and future research is needed to investigate this possibility.
Implications and unanswered questions:
Finally, the role of culture and demographics in zero-acquaintance judgments has been identified as an area ripe for further studies. Perceivers from different cultures may judge the same source of information in different ways. Additionally, some sources of information (e.g. writing samples) portray culture and demographics to a lesser extent than others (e.g. photographs), which may lead to important differences in the judgments made. While research has begun to delve into these questions, many remain unanswered.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Concrete plant**
Concrete plant:
A concrete plant, also known as a batch plant or batching plant or a concrete batching plant, is equipment that combines various ingredients to form concrete. Some of these inputs include water, air, admixtures, sand, aggregate (rocks, gravel, etc.), fly ash, silica fume, slag, and cement. A concrete plant can have a variety of parts and accessories, including: mixers (either tilt drum or horizontal, or in some cases both), cement batchers, aggregate batchers, conveyors, radial stackers, aggregate bins, cement bins, heaters, chillers, cement silos, batch plant controls, and dust collectors.
Concrete plant:
The heart of the concrete batching plant is the mixer, and there are many types of mixers, such as tilt drum, pan, planetary, single shaft and twin shaft. The twin shaft mixer can ensure an even mixture of concrete through the use of high horsepower motors, while the tilt mixer offers a comparatively large batch of concrete mix. In North America, the predominant central mixer type is a tilt drum style, while in Europe and other parts of the world, a twin shaft mixer is more prevalent. A pan or planetary mixer is more common at a precast plant. Aggregate bins have 2 to 6 compartments for storage of various sand and aggregate (rocks, gravel, etc.) sizes, while cement silos are typically one or two compartments, but at times up to 4 compartments in a single silo. Conveyors are typically between 24-48 inches wide and carry aggregate from the ground hopper to the aggregate bin, as well as from the aggregate batcher to the charge chute.
Concrete plant:
The aggregate batcher, also known as the aggregate bins, is used to store and batch the sand, gravel and crushed stone of the concrete plant. There are many types of aggregate batchers, but most of them measure aggregate by weighing. Some use a weighing hopper, others a weighing belt.
Concrete plant:
The cement silos are indispensable devices in the production of concrete. They mainly store bulk cement, fly ash, mineral powder and others. There are three types of cement silos: bolted cement silos, horizontal cement silos and integrated cement silos. Integrated cement silos are made in factories, and can be used directly. Bolted cement silos are bolted for easy installation and removal. Horizontal cement silos have lower requirements on foundations and can be transported by truck or flatbed without disassembly.
Concrete plant:
The screw conveyor is a machine that transfers materials from the cement silos to the powder weighing hopper. Concrete plants use a control system to control the operation of the machinery. Concrete batch plants employ computer-aided control to assist in fast and accurate measurement of input constituents or ingredients. With concrete performance so dependent on accurate water measurement, systems often use digital scales for cementitious materials and aggregates, and moisture probes to measure aggregate water content as it enters the aggregate batcher, to automatically compensate for the mix design water/cement ratio target. Many producers find moisture probes work well only in sand, with marginal results on larger-sized aggregate.
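As a rough illustration of the water/cement-ratio compensation described above (a simplified sketch only; real batch controllers also handle aggregate absorption, admixture water, and per-bin probes, and every figure below is invented):

```python
def batch_water_to_add(target_w_c_ratio: float,
                       cement_kg: float,
                       aggregate_kg: float,
                       free_moisture_fraction: float) -> float:
    """Mixing water (kg) to add after crediting free surface moisture on the aggregate.

    free_moisture_fraction is free moisture as a fraction of aggregate mass,
    e.g. 0.02 for 2 %. Aggregate absorption, admixture water, ice, etc. are ignored.
    """
    total_water_needed = target_w_c_ratio * cement_kg
    water_already_in_aggregate = aggregate_kg * free_moisture_fraction
    return max(total_water_needed - water_already_in_aggregate, 0.0)

# Example: 0.45 w/c target, 350 kg cement, 1800 kg aggregate at 2 % free moisture.
print(batch_water_to_add(0.45, 350, 1800, 0.02))  # 157.5 - 36.0 = 121.5 kg to add
```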
Types:
There are many ways to classify concrete plants. Concrete plants can be divided into dry mix plants and wet mix plants, depending on whether a central mixer is used. They can also be divided into stationary concrete plants and mobile concrete plants, depending on whether they can be moved.
Types:
Dry mix concrete plant: A dry mix concrete plant, also known as a transit mix plant, weighs sand, gravel and cement in weigh batchers via digital or manual scales. All the ingredients are then discharged into a chute, which discharges into a truck. Meanwhile, water is either being weighed or volumetrically metered and discharged through the same charging chute into the mixer truck. These ingredients are then mixed for a minimum of 70 to 100 revolutions during transportation to the jobsite.
Types:
Wet mix concrete plant: A wet mix concrete plant combines some or all of the above ingredients (including water) at a central location into a concrete mixer - that is, the concrete is mixed at a single point, and then simply agitated on the way to the jobsite to prevent setting (using agitators or ready mix trucks) or hauled to the jobsite in an open-bodied dump truck. Dry mix plants differ from wet mix plants in that wet mix plants contain a central mixer, which can offer a more consistent mixture in a shorter time (generally 5 minutes or less). Dry mix plants typically see more break strength standard deviation and variation from load to load because of inconsistencies in mix times, truck blade and drum conditions, traffic conditions, etc. With a central mix plant, all loads see the same mixing action and there is an initial quality control point when discharging from the central mixer. Certain plants combine both dry and wet characteristics for increased production or for seasonality. For example, a mobile batch plant can be constructed on a large job site.
Types:
Mobile concrete plant: The mobile batch plant, also known as a portable concrete plant, is a very productive, reliable and cost-effective piece of equipment to produce batches of concrete. It allows the user to batch concrete at almost any location, then move to another location and batch concrete. Portable plants are the best choice for temporary site projects or even stationary locations where the equipment height is a factor or the required production rate is lower.
Types:
Stationary concrete plant: The stationary concrete plant is designed to produce high-quality concrete. It has the advantages of large output, high efficiency, high stability and high specification. Stationary concrete batching plants are reliable and flexible, easy to maintain and have a low failure rate. They are widely used in various projects such as roads and bridges, ports, tunnels, dams and buildings.
Application:
Typical concrete plants are used for ready mix, civil infrastructure, and precast applications.
Application:
For ready mix: The global Ready Mix Concrete (RMC) market was valued at US$394.44 billion in 2017 and is expected to reach US$624.82 billion by the end of 2025, growing at a CAGR of 5.92% between 2016 and 2022. A ready mix concrete plant is generally located inside the city, transporting ready-mixed concrete for projects through concrete truck mixers. Ready mix concrete plants have higher requirements for durability, reliability, safety and environmental protection of the concrete plant's system than other types of plant.
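As a consistency check on the quoted growth figures (treating 2017 to 2025 as an eight-year span, an assumption about how the source counts years):

$$\mathrm{CAGR} = \left(\frac{624.82}{394.44}\right)^{1/8} - 1 \approx 0.059 \approx 5.9\%,$$

in line with the roughly 5.92% rate quoted above.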
Application:
For precast applications: Precast concrete, also known as a PC component, is a concrete product that is produced in a standardized process in a factory. Compared with cast-in-place concrete, precast concrete can be produced, poured and cured in batches. A precast concrete batching plant offers a safer construction environment, lower cost, and higher-quality products compared with concrete poured on site, and construction speed can be guaranteed. In addition, it is widely used in transportation, construction, water conservancy and other fields.
Application:
Precast and prestress concrete producers supply critical elements used in world-wide infrastructure, including buildings, bridges, parking decks, road surfaces, and retaining walls.
Dust and water pollution:
Municipalities, especially in urban or residential areas, have been concerned by pollution from concrete batching plants. The absence of suitable dust collection and filter systems in cement silos or at the truck loading point is the major source of particulate matter emission in the air. The loading point is a large emission point for dust pollution, so many concrete producers use central dust collectors to contain this dust. Notably, many transit mix (dry loading) plants create significantly more dust pollution than central mix plants due to the nature of the batching process. A final source of concern for many municipalities is the extensive water runoff from, and the reuse of water spilled on, a producer's site.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Nodulosis–arthropathy–osteolysis syndrome**
Nodulosis–arthropathy–osteolysis syndrome:
Nodulosis–arthropathy–osteolysis syndrome is a cutaneous condition that shares features with juvenile hyaline fibromatosis.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**RacoWireless**
RacoWireless:
RacoWireless is a provider of wireless products and services focusing on the machine to machine (M2M) industry. The company delivers wireless data and provides a platform for companies to build and support wireless M2M applications.
In November 2014, RacoWireless was acquired by KORE Wireless Group, a competing, private-equity-backed M2M MVNO.
History:
RacoWireless was initially formed as a subsidiary of RACO Industries LLC in Cincinnati, Ohio. The parent organization RACO Industries was founded in 1988 by current CEO Rob Adams as a value-added reseller of barcoding hardware and data-collection services. The company also partnered with network carriers to provide vehicle-tracking solutions.
History:
In 2005, the company shifted its focus towards data aggregation and began building a data aggregation platform. The following year, the company was approached by T-Mobile with the concept of becoming a data aggregator in the M2M space, and RacoWireless was officially founded in 2006. RacoWireless developed its Omega Management Suite as a web-based platform to provide customers with an M2M device management and monitoring system. The company worked with T-Mobile and their M2M solutions team, which was led by national director of M2M John Horn.
History:
In 2011, RacoWireless announced that Horn had left T-Mobile to become President of RacoWireless. As part of the move, RacoWireless signed a deal to become T-Mobile's preferred partner for new M2M business and operational support. In 2011, RacoWireless and T-Mobile partnered with Audi to offer Audi Connect – an in-car service that allows users access to news, weather, and fuel prices while turning the vehicle into a secure mobile Wi-Fi hotspot allowing passengers access to the Internet. In recent years, RacoWireless has formed partnerships with other international mobile network carriers including EE in the UK, Rogers in Canada, Sprint, and Telefonica out of Spain and Latin America. In October 2012, Inverness Graham Investments, a private equity firm out of Philadelphia, announced a controlled recapitalization of RacoWireless. In July 2013, RacoWireless announced the acquisition of Position Logic, a provider of B2B location-based services with operations in North America, South America, Europe, Africa and the Middle East. In November 2014, RacoWireless was acquired by KORE Wireless Group, a competing, private-equity-backed M2M MVNO.
Company:
RacoWireless provides products and services for the machine-to-machine (M2M) world. Additionally, it offers the Omega Management Suite, an information tool that provides web-based M2M management, reporting, and alerting features; SIM activation, maintenance, and management; web-based billing solutions; consulting, carrier device certification, application hosting, and virtual LAN solutions.
RacoWireless serves markets including fleet management, asset tracking, healthcare, monitoring and control, and point-of-sale transaction processing. The company has operations in 60 countries and employs 100 people in the US and Latin America.
New Technology:
Working in partnership with T-Mobile, RacoWireless was the first M2M provider to launch embedded SIM. This technology has allowed GSM solutions to enter more restrictive verticals where temperature and vibration had previously kept earlier technologies out.
RacoWireless, partnered with EE and Giesecke & Devrient, recently introduced the first Multi-IMSI SIM to the market. Multi-IMSI technology allows a single SIM card to be assigned to multiple subscriptions and carriers.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Royal jelly**
Royal jelly:
Royal jelly is a honey bee secretion that is used in the nutrition of larvae and adult queens. It is secreted from the glands in the hypopharynx of nurse bees, and fed to all larvae in the colony, regardless of sex or caste.
Royal jelly:
During the process of creating new queens, the workers construct special queen cells. The larvae in these cells are fed with copious amounts of royal jelly. This type of feeding triggers the development of queen morphology, including the fully developed ovaries needed to lay eggs. Royal jelly is sometimes used in alternative medicine under the category apitherapy. It is often sold as a dietary supplement for humans, but the European Food Safety Authority has concluded that current evidence does not support the claim that consuming royal jelly offers health benefits to humans. In the United States, the Food and Drug Administration has taken legal action against companies that have marketed royal jelly products using unfounded claims of health benefits.
Production:
Royal jelly is secreted from the glands in the heads of worker bees and is fed to all bee larvae, whether they are destined to become drones (males), workers (sterile females), or queens (fertile females). After three days, the drone and worker larvae are no longer fed with royal jelly, but queen larvae continue to be fed this special substance throughout their development.
Composition:
Royal jelly is 67% water, 12.5% protein, 11% simple sugars (monosaccharides), 6% fatty acids and 3.5% 10-hydroxy-2-decenoic acid (10-HDA). It also contains trace minerals, antibacterial and antibiotic components, pantothenic acid (vitamin B5), pyridoxine (vitamin B6) and trace amounts of vitamin C, but none of the fat-soluble vitamins: A, D, E or K.
Composition:
Proteins: Major royal jelly proteins (MRJPs) are a family of proteins secreted by honey bees. The family consists of nine proteins, of which MRJP1 (also called royalactin), MRJP2, MRJP3, MRJP4, and MRJP5 are present in the royal jelly secreted by worker bees. MRJP1 is the most abundant and the largest in size. The five proteins constitute 83–90% of the total proteins in royal jelly. They are synthesised by a family of nine genes (mrjp genes), which are in turn members of the yellow family of genes, such as in the fruitfly (Drosophila) and bacteria. They are thought to be involved in the differential development of queen and worker larvae, thus establishing the division of labour in the bee colony.
Epigenetic effects:
The honey bee queens and workers represent one of the most striking examples of environmentally controlled phenotypic polymorphism. Even if two larvae had identical DNA, one raised to be a worker, the other a queen, the two adults would be strongly differentiated across a wide range of characteristics including anatomical and physiological differences, longevity, and reproductive capacity. Queens constitute the female sexual caste and have large active ovaries, whereas female workers have only rudimentary, inactive ovaries and are functionally sterile. The queen–worker developmental divide is controlled epigenetically by differential feeding with royal jelly; this appears to be due specifically to the protein royalactin. A female larva destined to become a queen is fed large quantities of royal jelly; this triggers a cascade of molecular events resulting in development of a queen. It has been shown that this phenomenon is mediated by an epigenetic modification of DNA known as CpG methylation. Silencing the expression of an enzyme that methylates DNA in newly hatched larvae led to a royal jelly-like effect on the larval developmental trajectory; the majority of individuals with reduced DNA methylation levels emerged as queens with fully developed ovaries. This finding suggests that DNA methylation in honey bees allows the expression of epigenetic information to be differentially altered by nutritional input.
Use by humans:
Cultivation: Royal jelly is harvested by stimulating colonies with movable frame hives to produce queen bees. Royal jelly is collected from each individual queen cell (honeycomb) when the queen larvae are about four days old. These are the only cells in which large amounts are deposited. This is because when royal jelly is fed to worker larvae, it is fed directly to them, and they consume it as it is produced, while the cells of queen larvae are "stocked" with royal jelly much faster than the larvae can consume it. Therefore, only in queen cells is the harvest of royal jelly practical.
Use by humans:
A well-managed hive during a season of 5–6 months can produce approximately 500 g (18 oz) of royal jelly. Since the product is perishable, producers must have immediate access to proper cold storage (e.g., a household refrigerator or freezer) in which the royal jelly is stored until it is sold or conveyed to a collection center. Sometimes honey or beeswax is added to the royal jelly, which is thought to aid its preservation. The Vegetarian Society considers royal jelly to be non-vegetarian.
Use by humans:
Adverse effects: Royal jelly may cause allergic reactions in humans, ranging from hives and asthma to fatal anaphylaxis. The incidence of allergic side effects in people who consume royal jelly is unknown. The risk of having an allergy to royal jelly is higher in people who have other allergies.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Poly(hydridocarbyne)**
Poly(hydridocarbyne):
Poly(hydridocarbyne) (PHC) is one of a class of carbon-based random network polymers primarily composed of tetrahedrally hybridized carbon atoms, each having one hydride substituent, exhibiting the generic formula [HC]n. PHC is made from bromoform, a liquid halocarbon that is commercially manufactured from methane. At room temperature, poly(hydridocarbyne) is a dark brown powder. It can be easily dissolved in a number of solvents (tetrahydrofuran, ether, toluene etc.), forming a colloidal suspension that is clear and non-viscous, which may then be deposited as a film or coating on various substrates. Upon thermolysis in argon at atmospheric pressure and temperatures of 110 °C to 1000 °C, decomposition of poly(hydridocarbyne) results in hexagonal diamond (lonsdaleite).
Poly(hydridocarbyne):
More recently, poly(hydridocarbyne) has been synthesized by a much simpler method using electrolysis of chloroform (May 2008) and hexachloroethane (June 2009). The novelty of PHC (and its related polymer poly(methylsilyne)) is that the polymer may be readily fabricated into various forms (e.g. films, fibers, plates) and then thermolyzed into a final hexagonal diamond ceramic.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Somatotropin family**
Somatotropin family:
The Somatotropin family is a protein family whose titular representative is somatotropin, also known as growth hormone, a hormone that plays an important role in growth control. Other members include choriomammotropin (lactogen), its placental analogue; prolactin, which promotes lactation in the mammary gland, and placental prolactin-related proteins; proliferin and proliferin related protein; and somatolactin from various fishes. The 3D structure of bovine somatotropin has been predicted using a combination of heuristics and energy minimisation.
Human peptides from this family:
CSH1; CSH2; CSHL1; GH1; GH2 (hGH-V); PRL;
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Heteronuclear molecule**
Heteronuclear molecule:
A heteronuclear molecule is a molecule composed of atoms of more than one chemical element. For example, a molecule of water (H2O) is heteronuclear because it has atoms of two different elements, hydrogen (H) and oxygen (O).
Heteronuclear molecule:
Similarly, a heteronuclear ion is an ion that contains atoms of more than one chemical element. For example, the carbonate ion (CO₃²⁻) is heteronuclear because it has atoms of carbon (C) and oxygen (O). The lightest heteronuclear ion is the helium hydride ion (HeH+). This is in contrast to a homonuclear ion, which contains all the same kind of atom, such as the dihydrogen cation, or atomic ions that only contain one atom such as the hydrogen anion (H−).
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Vasomotor center**
Vasomotor center:
The vasomotor center (VMC) is a portion of the medulla oblongata. Together with the cardiovascular center and respiratory center, it regulates blood pressure. It also has a more minor role in other homeostatic processes. Upon an increase in the carbon dioxide level at the central chemoreceptors, it stimulates the sympathetic system to constrict vessels. This is the opposite of the effect of carbon dioxide within tissues, which causes vasodilatation, especially in the brain. Cranial nerves IX (glossopharyngeal nerve) and X (vagus nerve) both feed into the vasomotor centre and are themselves involved in the regulation of blood pressure.
Structure:
The vasomotor center is a collection of integrating neurons in the medulla oblongata, in the lower portion of the brainstem. The term "vasomotor center" is not truly accurate, since this function relies not on a single brain structure ("center") but rather represents a network of interacting neurons.
Structure:
Afferent fibres: The vasomotor center integrates nerve impulses from many places via the solitary nucleus: central chemoreceptors; aortic body chemoreceptors, which send impulses via the vagus nerves; carotid body chemoreceptors, which send impulses via the glossopharyngeal nerves; aortic arch high-pressure baroreceptors, which send impulses via the aortic nerve; and carotid sinus high-pressure baroreceptors, which send impulses via the glossopharyngeal nerves. Efferent fibres: The vasomotor center gives off sympathetic fibres through the spinal cord and sympathetic ganglia, which reach vascular smooth muscle.
Function:
The vasomotor center changes vascular smooth muscle tone. This changes local and systemic blood pressure. A drop in blood pressure leads to increased sympathetic tone from the vasomotor center. This acts to raise blood pressure.
Clinical significance:
Methyldopa acts on the vasomotor center, leading to selective stimulation of α2-adrenergic receptors. Guanfacine also causes the same stimulation. This reduces sympathetic tone to vascular smooth muscle. This reduces heart rate and vascular resistance. Digoxin increases vagal tone from the vasomotor centre, which decreases pulse. G-series nerve agents have their most potent effect in the vasomotor center. Unlike other parts of the body, where continued stimulation of acetylcholine receptors leads to recoverable paralysis, overstimulation of the vasomotor center often causes a fatal rise in blood pressure.
History:
The localization of vasomotor center was determined by Filipp Ovsyannikov in 1871.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Musical expression**
Musical expression:
Musical expression is the art of playing or singing with a personal response to the music. At a practical level, this means making appropriate use of dynamics, phrasing, timbre and articulation to bring the music to life. Composers may specify these aspects of expression to a greater or lesser extent in the notation of their musical score.
Musical expression:
The nature of musical expression has also been discussed at a theoretical level throughout the history of classical music. One common view is that music both expresses and evokes emotion, forming a conduit for emotional communication between the musician and the audience. This view has been present through most of musical history, though it was most clearly expressed in musical romanticism. However, emotion's role in music has been challenged on occasion by those like Igor Stravinsky who see music as a pure art form and expression as an irrelevant distraction.
Mimesis and rhetoric:
In the Baroque and Classical periods of music, music (and aesthetics as a whole) was strongly influenced by Aristotle's theory of mimesis. Art represented the perfection and imitation of nature, speech and emotion. As speech was taken as a model for music, composition and performance in the Baroque period were strongly influenced by rhetoric. According to what has become known as the theory of affect, a musician was expected to stir feelings in his audience in much the same way as an orator making a speech in accordance with the rules of classical rhetoric. As a result, the aim of a piece of music was to produce a particular emotion, for instance joy, sadness, anger or calm. The harmony, melody, tonality, metre and structure of the music worked to this end, as did all the aspects under the performer's control such as articulation and dynamics. As Johann Joachim Quantz wrote: "The orator and the musician have, at bottom, the same aim in regard to both the preparation and the final execution of their productions, namely to make themselves the masters of the hearts of their listeners, to arouse or still their passions, and to transport them now to this sentiment, now that."
Mimesis and rhetoric:
Baroque composers used expressive markings relatively rarely, so it can be a challenge for musicians today to interpret Baroque scores, in particular if they adopt a historically informed performance perspective and aim to recreate an approach that might have been recognised at the time. There are some general principles. Looking at the rhythm of a piece, slow rhythms tend to be serious while quick ones tend towards light and frivolous. In the melodic line, small intervals typically represented melancholy while large leaps were used to represent joy. In harmony, the choice of dissonances used had a significant effect on which emotion was intended (or produced), and Quantz recommended that the more extreme the dissonance, the louder it should be played. A cadence normally represented the end of a sentence. The rhetorical approach to music begged the philosophical question of whether stirring the listener's passions in this manner was compatible with Aristotle's idea that art was only effective because it imitated nature. Some writers on music in the 18th century stayed closely true to Aristotle, with Charles Batteux writing that the sole unifying principle of taste and beauty was the reproduction of the ideal form that lay behind natural things. However, this view was challenged by others who felt that the role of music was to produce an emotional effect. For instance, Sir William Jones wrote in 1772 that "it will appear, that the finest parts of poetry, musick, and painting, are expressive of the passions, and operate on our minds by sympathy; that the inferior parts of them are descriptive of natural objects, and affect us chiefly by substitution". In 1785, Michel de Chabanon proposed that music was best understood as its own language, which then prompted an emotional response linked to but not limited by the musical expression. The same music could be associated with a wide range of emotional responses in the listener. Chabanon rejected the rhetorical approach to music, because he did not believe that there was a simple correspondence between musical characteristics and emotional affects. Much subsequent philosophy of music depended on Chabanon's views.
Romantic era:
Around the start of the 19th century, the idea of music as a kind of 'ultimate language of the emotions' gained currency. The new aesthetic doctrine of Romanticism placed sublime, heightened emotion at the core of artistic experience, and communicating these emotions became the aim of musical performance. Music was expected to convey intense feelings, highly personal to the vision of the composer. As the 19th century developed, musical nationalism extended these emotions beyond the personal level to embodying the feelings of entire nations. This emphasis on emotional communication was supported by an increasing confidence in using more complex harmony, and by instruments and ensembles capable of greater extremes of dynamic. At the start of the 19th century, dynamic markings like "pp" and "ff" were most commonly used, but by the late century, markings like "pppp" and "ffff" began to appear on the score. Romantic composers also made increasingly detailed use of expressive markings like crescendos and diminuendos, accents and articulation markings.
Against expression:
After the increasing dominance of expression and emotion in music during the 19th and early 20th centuries, there was a backlash. "Most people like music because it gives them certain emotions such as joy, grief, sadness, an image of nature, a subject for daydreams or – still better – oblivion from “everyday life”. They want a drug – dope –…. Music would not be worth much if it were reduced to such an end. When people have learned to love music for itself, when they listen with other ears, their enjoyment will be of a far higher and more potent order, and they will be able to judge it on a higher plane and realise its intrinsic value." - Igor Stravinsky
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Salicylamide**
Salicylamide:
Salicylamide (o-hydroxybenzamide or amide of salicyl) is a non-prescription drug with analgesic and antipyretic properties. Its medicinal uses are similar to those of aspirin. Salicylamide is used in combination with both aspirin and caffeine in the over-the-counter pain remedy PainAid. It was also an ingredient in the over-the-counter pain remedy BC Powder but was removed from the formulation in 2009, and Excedrin used the ingredient from 1960 to 1980 in conjunction with aspirin, acetaminophen, and caffeine. It was used in later formulations of Vincent's powders in Australia as a substitute for phenacetin.
Derivatives:
Derivatives of salicylamide include ethenzamide, labetalol, medroxalol, lopirin, otilonium, oxyclozanide, salicylanilide, niclosamide, and raclopride.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Soumen Basak**
Soumen Basak:
Soumen Basak is an Indian immunologist and virologist at the National Institute of Immunology (NII). A former fellow of the Wellcome Trust DBT India Alliance, he is known for his studies on the NF-kappaB signaling system.
Career:
Soumen earned an M.Sc. from the Biochemistry department of the University of Calcutta in 1998, and then joined the same department to carry out doctoral research under the guidance of Prof. Dhrubajyoti Chattopadhyay. After receiving a Ph.D. degree in 2003, he joined the laboratory of Prof. Alexander Hoffmann at the University of California, San Diego for post-doctoral studies. On his return to India in 2010, he joined the National Institute of Immunology, where he has been heading the Systems Immunology Research Group since then. Soumen has contributed substantially to developing an understanding of immune regulatory mechanisms. Extracellular cues engage discrete cell signaling pathways to control dynamically specific sets of transcription factors, which trigger distinct gene-expression programs. However, a myriad of biochemical processes links these pathways within an integrated cellular network. Moreover, mammalian cells in their anatomic niche receive signals simultaneously from a variety of stimuli that generate plausible crosstalk between concomitantly activated intracellular pathways. Combining biochemistry, mouse genetics, and computational modeling tools, Soumen's group has been characterizing how such cross-regulatory signaling mechanisms tune immune responses and whether aberrant signaling crosstalk underlies human diseases. His work established physiological functions of such signaling crosstalk in tuning inflammatory responses to gut pathogens and in orchestrating immune homeostasis in the secondary lymphoid organs. His research also captured a pathophysiological role of signaling crosstalk in neoplastic diseases – aberrant crosstalk provoked anomalous gene expression in multiple myeloma. Soumen's current work indicates that cross-regulatory NF-kappaB controls may have ramifications for pathological intestinal inflammation. Soumen has published his research findings in a series of research articles in internationally renowned, peer-reviewed journals.
Career:
Unchecked inflammation has been implicated in human ailments. Mainstay signaling pathways mediate multiple biological functions, and their therapeutic targeting leads to devastating side effects. In this context, Soumen's findings bear promise for disease-specific interventions in inflammation-associated diseases that target a newly discovered crosstalk motif delinking the inflammatory module from the integrated network. The Department of Biotechnology of the Government of India awarded him the National Bioscience Award for Career Development in 2018 for his outstanding research contributions. In 2019, Soumen was conferred with the prestigious Shanti Swarup Bhatnagar Prize for Science and Technology, the highest civilian award conferred to Indian scientists, for his exceptional contributions in biological sciences by the Council of Scientific and Industrial Research. Soumen is a member of the Guha Research Conference, and also a fellow of all three Indian Science Academies, namely the Indian National Science Academy, the Indian Academy of Sciences, and the National Academy of Sciences, India.
Selected bibliography:
Chawla, Meenakshi; Mukherjee, Tapas; Deka, Alvina; Chatterjee, Budhaditya; Sarkar, Uday Aditya; Singh, Amit K.; Kedia, Saurabh; Lum, Josephine; Kaur, Manpreet; Banoth, Balaji; Biswas, Subhra K.; Ahuja, Vineet; Basak, Soumen (22 June 2021). "An epithelial Nfkb2 pathway exacerbates intestinal inflammation by supplementing latent RelA dimers to the canonical NF-kB module". PNAS. 118 (25): e2024828118. Bibcode:2021PNAS..11824828C. doi:10.1073/pnas.2024828118. PMC 8237674. PMID 34155144.
Chawla, Meenakshi; Roy, Payel; Basak, Soumen (1 February 2021). "Role of the NF-kB system in context-specific tuning of the inflammatory gene response". Current Opinion in Immunology. 68: 21–27. doi:10.1016/j.coi.2020.08.005. PMID 32898750. S2CID 221572636.
Mukherjee, Tapas; Chatterjee, Budhaditya; Dhar, Atika; Bais, Sachendra S; Chawla, Meenakshi; Roy, Payel; George, Anna; Bal, Vineeta; Rath, Satyajit; Basak, Soumen (23 October 2017). "A TNF‐p100 pathway subverts noncanonical NF‐kB signaling in inflamed secondary lymphoid organs". EMBO J. 36 (23): 3501–3516. doi:10.15252/embj.201796919. PMC 5709727. PMID 29061763.
Roy, Payel; Chatterjee, Budhaditya; Mukherjee, Tapas; Vijayraghvan, Bharath; Banoth, Balaji; Basak, Soumen (1 March 2017). "Non-canonical NF-kB mutations reinforce pro-survival TNF response in multiple myeloma through an autoregulatory RelB:p50 NF-kB pathway". Oncogene. 36 (10): 1417–1429. doi:10.1038/onc.2016.309. PMC 5346295. PMID 27641334.
Basak, Soumen; Kim, Ha Na; Kearns, Jeff; Tergaonkar, Vinay; O'Dea, Ellen; Werner, Shannon L.; Benedict, Chris A.; Ware, Carl F.; Ghosh, Gourisankar; Verma, Inder M.; Hoffmann, Alexander (26 January 2007). "A fourth IkB protein within the NF-kB signaling module". Cell. 128 (2): 369–381. doi:10.1016/j.cell.2006.12.033. PMC 1831796. PMID 17254973.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Headworks**
Headworks:
Headworks is a civil engineering term for any structure at the head or diversion point of a waterway. It is smaller than a barrage and is used to divert water from a river into a canal or from a large canal into a smaller canal. An example is the Horseshoe Falls at the start of the Llangollen Canal.
Historically the phrase "headworks" derives from the traditional approach of diverting water at the start of an irrigation network and the location of these processes at the "head of the works".
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Tetramethyl acetyloctahydronaphthalenes**
Tetramethyl acetyloctahydronaphthalenes:
Tetramethyl acetyloctahydronaphthalenes (International Nomenclature for Cosmetic Ingredients (INCI) name), systematically 1-(1,2,3,4,5,6,7,8-octahydro-2,3,8,8-tetramethyl-2-naphthyl)ethan-1-one, is a synthetic ketone fragrance also known as OTNE (octahydrotetramethyl acetophenone) and by other commercial trade names such as Iso E Super, Iso Gamma Super, Anthamber, Amber Fleur, Boisvelone, Iso Ambois, Amberlan, Iso Velvetone, Orbitone, and Amberonne. It is a synthetic woody odorant and is used as a fragrance ingredient in perfumes, laundry products and cosmetics.
Odour:
OTNE has a woody, slightly ambergris odour, reminiscent of clean human skin. Its odour is long-lasting on skin and fabric.
Uses:
Iso E Super is a very common perfume ingredient, providing a sandalwood-like and cedarwood-like fragrance, in soap, shampoo, perfumes, detergents, fabric fresheners, antiperspirants or deodorants, and air fresheners. It is also used as a tobacco flavoring (at 200–2000 ppm), as a plasticizer and as a precursor for the delivery of organoleptic and antimicrobial compounds.
Production:
Iso E Super is produced commercially by Diels–Alder reaction of myrcene with 3-methyl-3-penten-2-one in the presence of aluminium chloride to give a monocyclic intermediate that is cyclized in the presence of 85% phosphoric acid.
Production:
Carrying out the initial Diels–Alder reaction using a Lewis acid catalyst such as aluminium chloride appears to ensure that the acetyl group is at position 2 of the resulting cyclohexene adduct, which distinguishes Iso E Super from other (previously patented) fragrances based on tetramethylacetyloctaline. The second cyclization reaction yields a mixture of diastereomers with the general structure depicted above, the predominant ones being (2R,3R) and (2S,3S).
Chemical Summary:
OTNE is the abbreviation for the fragrance material with Chemical Abstract Service (CAS) numbers 68155-66-8, 54464-57-2 and 68155-67-9 and EC List number 915-730-3. It is a multi-constituent isomer mixture containing 1-(1,2,3,4,5,6,7,8-octahydro-2,3,8,8-tetramethyl-2-naphthyl)ethan-1-one (CAS 54464-57-2), 1-(1,2,3,5,6,7,8,8a-octahydro-2,3,8,8-tetramethyl-2-naphthyl)ethan-1-one (CAS 68155-66-8) and 1-(1,2,3,4,6,7,8,8a-octahydro-2,3,8,8-tetramethyl-2-naphthyl)ethan-1-one (CAS 68155-67-9). All isomers conform to the chemical formula C16H26O and have a molecular weight of 234.4 g/mol.
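As a quick sanity check, the stated molecular weight can be recomputed from the formula C16H26O using standard atomic masses; the short Python sketch below does exactly that (the rounded atomic-mass values are the only inputs not taken from the text above).

```python
# Recompute the molecular weight of C16H26O from standard atomic masses.
atomic_mass = {"C": 12.011, "H": 1.008, "O": 15.999}   # g/mol, rounded
formula = {"C": 16, "H": 26, "O": 1}

mw = sum(atomic_mass[element] * count for element, count in formula.items())
print(f"{mw:.1f} g/mol")   # ~234.4 g/mol, matching the value quoted above
```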
Physical-chemical properties:
OTNE is a clear yellow liquid at 20 °C. Its melting point is below −20 °C at atmospheric pressure, and its boiling point is determined to be around 290 °C (modified OECD 103 method). All physicochemical data have been obtained from the OTNE REACH registration dossier.
Safety:
Iso E Super may cause allergic reactions detectable by patch tests in humans, and chronic exposure to Iso E Super from perfumes may result in permanent hypersensitivity. In a study with female mice, Iso E Super was positive in the local lymph node assay (LLNA) and irritancy assay (IRR), but negative in the mouse ear swelling test (MEST). No data were available regarding chemical disposition, metabolism, or toxicokinetics; acute, short-term, subchronic, or chronic toxicity; synergistic or antagonistic activity; reproductive or teratological effects; carcinogenicity; genotoxicity; or immunotoxicity. The International Fragrance Association (IFRA) has published safe use levels for Iso E Super in consumer products. OTNE is not toxic and not a CMR substance. OTNE is classified as a skin irritant (R38 EU DSD, H315 EU CLP) and is positive in the Local Lymph Node Assay (LLNA – OECD 429) and therefore classified as a skin sensitiser (R43 EU DSD, H317 EU CLP), though OTNE lacks any structural alerts for sensitisation in in silico prediction models (DEREK) and is not identified as an allergen in in vivo Human Repeated Patch Tests. Several health-related studies have been conducted on OTNE, and based on these studies, OTNE has been determined to be safe under the current conditions of use. Given the sensitization classification of OTNE, and its use in fragrances, the International Fragrance Association (IFRA) has published safe use levels for OTNE in consumer products, which have been in effect since August 2009.
Environmental data:
OTNE is classified as H410 Very toxic to aquatic life with long-lasting effects (EU-CLP) or R51/53 Toxic to aquatic organisms, may cause long-term adverse effects in the aquatic environment (EU DSD).
The biodegradation half-life (T1/2) of OTNE in fresh water is at most 40 days, and at most 120 days in sediment (OECD 314 test), though the biodegradation within the 28-day window was around 11% (OECD 301-C). Given the outcome of the OECD 314 test, OTNE does not meet the criteria for "Persistent" (P) or "very Persistent" (vP).
The measured bioconcentration factor (BCF) is 391 L/kg, which is well below the EU limit of 2000 and the US limit of 1000 for bioaccumulation (B) classification. The log Kow for OTNE has been measured to be 5.65. OTNE is therefore not classified as a PBT or vPvB substance under EU or any other global criteria.
OTNE has been detected in surface water at levels of 29–180 ng/L. These values are well below the predicted no effect concentration (PNEC), and as a result the overall environmental risk ratio (also referred to as RCR or PEC/PNEC) is determined to be below 1.
Regulatory status:
OTNE is registered on all major chemical inventories (US, Japan, China, Korea, Philippines, and Australia) and was EU REACH registered in 2010.
Regulatory status:
In 2014 the US National Toxicology Program (NTP) conducted a 13-week repeat dose toxicity study and found no adverse effects. OTNE has been recommended for inclusion in an update of the EU fragrance allergens labelling for cosmetic products, based on a small number of positive reactions in dermatological clinics of around 0.2% to 1.7% of patients tested in three studies. If the proposed SCCS Opinion is taken forward into legislation, then OTNE will be labelled on cosmetic products in the EU several years after publication of the new legislation.
Commercial products:
The fragrance Molecule 01 (Escentric Molecules, 2005) is a specific isomer of Iso E Super, by the company IFF. Its partner fragrance Escentric 01 contains Iso E Super along with ambroxan, pink pepper, green lime with balsamic notes like benzoin mastic and incense.
The fragrance Eternity by Calvin Klein (1988) contained 11.7% Iso E Super in the fragrance portion of the formula.
The fragrance Scent of a Dream by Charlotte Tilbury contains Iso E Super.
The fragrance No.1 Invisible by Perfume Extract contains Iso E Super.
History:
OTNE was patented in 1975 as an invention of International Flavors and Fragrances.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Bromoxylene**
Bromoxylene:
A bromoxylene is an aromatic compound consisting of a benzene ring bearing two methyl groups and a bromine atom. There are several isomers.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Phaclofen**
Phaclofen:
Phaclofen, or phosphonobaclofen, is a selective antagonist for the GABAB receptor. It was the first selective GABAB antagonist discovered, but its utility was limited by the fact that it does not cross the blood–brain barrier.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Dernford Fen**
Dernford Fen:
Dernford Fen is a 10.3-hectare (25-acre) biological Site of Special Scientific Interest north-west of Sawston in Cambridgeshire. The site is a rare surviving example of rough fen and carr. Other habitats are dry grassland and scrub, together with ditches and a chalk stream. There are breeding warblers, and the diverse habitats are valuable for amphibians and reptiles. The site is private land with no public access.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Operator associativity**
Operator associativity:
In programming language theory, the associativity of an operator is a property that determines how operators of the same precedence are grouped in the absence of parentheses. If an operand is both preceded and followed by operators (for example, ^ 3 ^), and those operators have equal precedence, then the operand may be used as input to two different operations (i.e. the two operations indicated by the two operators). The choice of which operations to apply the operand to is determined by the associativity of the operators. Operators may be associative (meaning the operations can be grouped arbitrarily), left-associative (meaning the operations are grouped from the left), right-associative (meaning the operations are grouped from the right) or non-associative (meaning operations cannot be chained, often because the output type is incompatible with the input types). The associativity and precedence of an operator is a part of the definition of the programming language; different programming languages may have different associativity and precedence for the same type of operator.
Operator associativity:
Consider the expression a ~ b ~ c. If the operator ~ has left associativity, this expression would be interpreted as (a ~ b) ~ c. If the operator has right associativity, the expression would be interpreted as a ~ (b ~ c). If the operator is non-associative, the expression might be a syntax error, or it might have some special meaning. Some mathematical operators have inherent associativity. For example, subtraction and division, as used in conventional math notation, are inherently left-associative. Addition and multiplication, by contrast, are both left and right associative. (e.g. (a * b) * c = a * (b * c)).
Operator associativity:
Many programming language manuals provide a table of operator precedence and associativity; see, for example, the table for C and C++.
Operator associativity:
The concept of notational associativity described here is related to, but different from, mathematical associativity. An operation that is mathematically associative, by definition, requires no notational associativity. (For example, addition has the associative property, therefore it does not have to be either left associative or right associative.) An operation that is not mathematically associative, however, must be notationally left-, right-, or non-associative. (For example, subtraction does not have the associative property, therefore it must have notational associativity.)
Examples:
Associativity is only needed when the operators in an expression have the same precedence. Usually + and - have the same precedence. Consider the expression 7 - 4 + 2. The result could be either (7 - 4) + 2 = 5 or 7 - (4 + 2) = 1. The former result corresponds to the case when + and - are left-associative, the latter to when + and - are right-associative.
Examples:
In order to reflect normal usage, addition, subtraction, multiplication, and division operators are usually left-associative, while for an exponentiation operator (if present) and Knuth's up-arrow operators there is no general agreement. Any assignment operators are typically right-associative. To prevent cases where operands would be associated with two operators, or no operator at all, operators with the same precedence must have the same associativity.
Examples:
A detailed example Consider the expression 5^4^3^2, in which ^ is taken to be a right-associative exponentiation operator. A parser reading the tokens from left to right would apply the associativity rule to a branch, because of the right-associativity of ^, in the following way: Term 5 is read.
Nonterminal ^ is read. Node: "5^".
Term 4 is read. Node: "5^4".
Nonterminal ^ is read, triggering the right-associativity rule. Associativity decides node: "5^(4^".
Term 3 is read. Node: "5^(4^3".
Nonterminal ^ is read, triggering the re-application of the right-associativity rule. Node "5^(4^(3^".
Term 2 is read. Node "5^(4^(3^2".
No tokens to read. Apply associativity to produce parse tree "5^(4^(3^2))". This can then be evaluated depth-first, starting at the top node (the first ^): The evaluator walks down the tree, from the first, over the second, to the third ^ expression.
It evaluates as: 3^2 = 9. The result replaces the expression branch as the second operand of the second ^.
Evaluation continues one level up the parse tree as: 4^9 = 262,144. Again, the result replaces the expression branch as the second operand of the first ^.
Again, the evaluator steps up the tree to the root expression and evaluates as: 5^262,144 ≈ 6.2060699×10^183,230. The last remaining branch collapses and the result becomes the overall result, therefore completing overall evaluation. A left-associative evaluation would have resulted in the parse tree ((5^4)^3)^2 and the completely different result (625^3)^2 = 244,140,625^2 ≈ 5.9604645×10^16.
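To make the grouping concrete, here is a minimal Python sketch of a right-recursive parser for a '^' chain (the function name and toy tokenizer are illustrative, not taken from the article); the right recursion in the grammar is exactly what yields right-associative grouping as described above.

```python
def parse_pow(tokens):
    """Evaluate a right-associative '^' chain.

    Grammar (right-recursive, hence right-associative):
        power := NUMBER ('^' power)?
    """
    base = int(tokens.pop(0))              # consume a number
    if tokens and tokens[0] == "^":
        tokens.pop(0)                      # consume '^'
        return base ** parse_pow(tokens)   # the right-hand side is parsed and bound first
    return base

tokens = "2 ^ 3 ^ 2".split()
print(parse_pow(tokens))   # 512, i.e. 2^(3^2); left-associative grouping would give (2^3)^2 = 64
```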
Right-associativity of assignment operators:
In many imperative programming languages, the assignment operator is defined to be right-associative, and assignment is defined to be an expression (which evaluates to a value), not just a statement. This allows chained assignment by using the value of one assignment expression as the right operand of the next assignment expression.
Right-associativity of assignment operators:
In C, the assignment a = b is an expression that evaluates to the same value as the expression b converted to the type of a, with the side effect of storing the R-value of b into the L-value of a. Therefore the expression a = (b = c) can be interpreted as b = c; a = b;. The alternative expression (a = b) = c raises an error because a = b is not an L-value expression, i.e. it has an R-value but not an L-value in which to store the R-value of c. The right-associativity of the = operator allows expressions such as a = b = c to be interpreted as a = (b = c).
Right-associativity of assignment operators:
In C++, the assignment a = b is an expression that evaluates to the same value as the expression a, with the side effect of storing the R-value of b into the L-value of a. Therefore the expression a = (b = c) can still be interpreted as b = c; a = b;. And the alternative expression (a = b) = c can be interpreted as a = b; a = c; instead of raising an error. The right-associativity of the = operator allows expressions such as a = b = c to be interpreted as a = (b = c).
Non-associative operators:
Non-associative operators are operators that have no defined behavior when used in sequence in an expression. In Prolog the infix operator :- is non-associative because constructs such as "a :- b :- c" constitute syntax errors.
Non-associative operators:
Another possibility is that sequences of certain operators are interpreted in some other way, which cannot be expressed as associativity. This generally means that syntactically, there is a special rule for sequences of these operations, and semantically the behavior is different. A good example is in Python, which has several such constructs. Since assignments are statements, not operations, the assignment operator does not have a value and is not associative. Chained assignment is instead implemented by having a grammar rule for sequences of assignments a = b = c, which are then assigned left-to-right. Further, combinations of assignment and augmented assignment, like a = b += c, are not legal in Python, though they are legal in C. Other examples are comparison operators, such as >, ==, and <=. A chained comparison like a < b < c is interpreted as (a < b) and (b < c), which is not equivalent to either (a < b) < c or a < (b < c).
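The difference is easy to observe directly in Python; a small illustration (the specific values are arbitrary):

```python
a, b, c = 1, 5, 3

print(a < b < c)      # False: parsed as (a < b) and (b < c), and 5 < 3 fails
print((a < b) < c)    # True: (a < b) is True, which compares as 1, and 1 < 3 holds

# Chained assignment is a dedicated grammar rule, not an associative operator:
x = y = []            # both names end up bound to the same list object
# "x = y += [1]" would be a SyntaxError in Python, unlike the analogous form in C.
```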
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Calligra Words**
Calligra Words:
Calligra Words is a word processor, which is part of Calligra Suite and developed by KDE as free software.
History:
When the Calligra Suite was formed, unlike the other Calligra applications, Words was not a continuation of the corresponding KOffice application – KWord. Words was largely written from scratch – in May 2011 a completely new layout engine was announced. The first release was made available on April 11, 2012, using the version number 2.4 to match the rest of Calligra Suite.
Reception:
Initial reception of Calligra Words shortly after the 2.4 release was mixed. While Linux Pro Magazine Online's Bruce Byfield wrote "Calligra needed an impressive first release. Perhaps surprisingly, and to the development team's credit, it has managed one in 2.4.", he also noted that "Words in particular is still lacking features". He concluded that Calligra is "worth keeping an eye on". On the other hand, Calligra Words became the default word processor in Kubuntu 12.04 – replacing LibreOffice Writer.
Formula editor:
Formulas in Calligra Words are provided by the Formula plugin. It is a formula editor with a WYSIWYG interface.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Temporary equilibrium method**
Temporary equilibrium method:
The temporary equilibrium method has been devised by Alfred Marshall for analyzing economic systems that comprise interdependent variables of different speed. Sometimes it is referred to as the moving equilibrium method.
Temporary equilibrium method:
For example, assume an industry with a certain capacity that produces a certain commodity. Given this capacity, the supply offered by the industry will depend on the prevailing price. The corresponding supply schedule gives short-run supply. The demand depends on the market price. The price in the market declines if supply exceeds demand, and it increases if supply is less than demand. The price mechanism leads to market clearing in the short run. However, if this short-run equilibrium price is sufficiently high, production will be very profitable, and capacity will increase. This shifts the short-run supply schedule to the right, and a new short-run equilibrium price will be obtained. The resulting short-run equilibria are termed temporary equilibria. The overall system involves two state variables: price and capacity. Using the temporary equilibrium method, it can be reduced to a system involving only one state variable. This is possible because each short-run equilibrium price will be a function of the prevailing capacity, and the change of capacity will be determined by the prevailing price. Hence the change of capacity will be determined by the prevailing capacity. The method works if the price adjusts fast and capacity adjustment is comparatively slow. The mathematical background is provided by the Moving equilibrium theorem.
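The reduction can be illustrated numerically. The Python sketch below uses made-up linear demand and supply schedules and parameter values (none of which come from the article): the fast price variable is solved out of the system at every step, and the slow capacity variable then adjusts gradually, which is the temporary equilibrium method in miniature.

```python
def short_run_price(k, a=100.0, b=2.0, c=1.0):
    """Market-clearing price for a toy market: demand D(p) = a - b*p,
    short-run supply S(p, k) = c*k*p. Clearing D(p) = S(p, k) gives p = a/(b + c*k)."""
    return a / (b + c * k)

def simulate(k0=5.0, break_even_price=10.0, speed=0.1, steps=50):
    """Temporary-equilibrium dynamics: the price (fast variable) clears the market
    instantly, while capacity (slow variable) drifts up when the temporary
    equilibrium price is above the assumed break-even level and down otherwise."""
    k, path = k0, []
    for _ in range(steps):
        p = short_run_price(k)                     # fast variable solved out each period
        k = k + speed * (p - break_even_price)     # slow variable adjusts gradually
        path.append((p, k))
    return path

for p, k in simulate()[:5]:
    print(f"price = {p:5.2f}   capacity = {k:5.2f}")
```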
Temporary equilibrium method:
In physics, the method is known as scale separation.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Undergrowth**
Undergrowth:
In forestry and ecology, understory (American English), or understorey (Commonwealth English), also known as underbrush or undergrowth, includes plant life growing beneath the forest canopy without penetrating it to any great extent, but above the forest floor. Only a small percentage of light penetrates the canopy so understory vegetation is generally shade-tolerant. The understory typically consists of trees stunted through lack of light, other small trees with low light requirements, saplings, shrubs, vines and undergrowth. Small trees such as holly and dogwood are understory specialists.
Undergrowth:
In temperate deciduous forests, many understory plants start into growth earlier in the year than the canopy trees, to make use of the greater availability of light at that particular time of year. A gap in the canopy caused by the death of a tree stimulates the potential emergent trees into competitive growth as they grow upwards to fill the gap. These trees tend to have straight trunks and few lower branches. At the same time, the bushes, undergrowth, and plant life on the forest floor become denser. The understory experiences greater humidity than the canopy, and the shaded ground does not vary in temperature as much as open ground. This causes a proliferation of ferns, mosses, and fungi and encourages nutrient recycling, which provides favorable habitats for many animals and plants.
Understory structure:
The understory is the underlying layer of vegetation in a forest or wooded area, especially the trees and shrubs growing between the forest canopy and the forest floor.
Understory structure:
Plants in the understory comprise an assortment of seedlings and saplings of canopy trees together with specialist understory shrubs and herbs. Young canopy trees often persist in the understory for decades as suppressed juveniles until an opening in the forest overstory permits their growth into the canopy. In contrast understory shrubs complete their life cycles in the shade of the forest canopy. Some smaller tree species, such as dogwood and holly, rarely grow tall and generally are understory trees.
Understory structure:
The canopy of a tropical forest is typically about 10 m thick and intercepts around 95% of the sunlight. The understory therefore receives less intense light than plants in the canopy, and such light as does penetrate is impoverished in the wavelengths that are most effective for photosynthesis. Understory plants therefore must be shade tolerant—they must be able to photosynthesize adequately using such light as does reach their leaves. They often are able to use wavelengths that canopy plants cannot. In temperate deciduous forests towards the end of the leafless season, understory plants take advantage of the shelter of the still leafless canopy plants to "leaf out" before the canopy trees do. This is important because it provides the understory plants with a window in which to photosynthesize without the canopy shading them. This brief window (usually 1–2 weeks) is often a crucial period in which the plant can maintain a net positive carbon balance over the course of the year.
Understory structure:
As a rule forest understories also experience higher humidity than exposed areas. The forest canopy reduces solar radiation, so the ground does not heat up or cool down as rapidly as open ground. Consequently, the understory dries out more slowly than more exposed areas do. The greater humidity encourages epiphytes such as ferns and mosses, and allows fungi and other decomposers to flourish. This drives nutrient cycling, and provides favorable microclimates for many animals and plants, such as the pygmy marmoset.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Ventricular-brain ratio**
Ventricular-brain ratio:
Ventricular-brain ratio (VBR), also known as the ventricle-to-brain ratio or ventricle-brain ratio, is the ratio of total ventricle area to total brain area, which can be calculated with planimetry from brain imaging techniques such as CT scans.
Ventricular-brain ratio:
It is a common measure of ventricular dilation or cerebral atrophy in patients with traumatic brain injury or hydrocephalus ex vacuo. VBR also tends to increase with age. Generally, a higher VBR means a worse prognosis for recovering from a brain injury. For example, VBR is significantly correlated with performance on the Luria-Nebraska neuropsychological battery. Studies have found that people with schizophrenia have larger third ventricles and higher VBR. Correlational studies have found that the ventricle-brain ratio is positively associated with binge eating and inversely associated with plasma thyroid hormone concentration.
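Since VBR is simply an area ratio taken from segmented images, it reduces to counting pixels in two masks. The following Python sketch assumes hypothetical binary segmentation masks; the array sizes and traced regions are placeholders rather than real imaging data.

```python
import numpy as np

# Hypothetical binary masks from one segmented axial slice:
# 1 marks pixels inside the structure, 0 marks background.
brain_mask = np.zeros((256, 256), dtype=np.uint8)
ventricle_mask = np.zeros((256, 256), dtype=np.uint8)
brain_mask[28:228, 28:228] = 1           # stand-in for the traced brain area
ventricle_mask[118:138, 108:148] = 1     # stand-in for the traced ventricles

vbr = ventricle_mask.sum() / brain_mask.sum()   # total ventricle area / total brain area
print(f"VBR = {vbr:.4f}")                       # often reported multiplied by 100
```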
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Collaborative e-democracy**
Collaborative e-democracy:
Collaborative e-democracy refers to a hybrid democratic model combining elements of direct democracy, representative democracy, and e-democracy (or the incorporation of ICTs into democratic processes). This concept, first introduced at international academic conferences in 2009, offers a pathway for citizens to directly or indirectly engage in policymaking. Steven Brams and Peter Fishburn describe it as an "innovative way to engage citizens in the democratic process," that potentially makes government "more transparent, accountable, and responsive to the needs of the people." Collaborative e-democracy is a political system that enables governmental stakeholders (such as politicians, parties, ministers, MPs) and non-governmental stakeholders (including NGOs, political lobbies, local communities, and individual citizens) to collaborate in the development of public laws and policies. This collaborative policymaking process occurs through a government-sanctioned social networking site, with all citizens as members, thus facilitating collaborative e-policy-making. Michael Gallagher suggests that it can be a "powerful tool that can be used to improve the quality of decision-making." Andrew Reynolds even believes that "collaborative e-democracy is the future of democracy." In this system, directly elected government officials, or 'proxy representatives', would undertake most law and policy-making processes, embodying aspects of representative democracy. However, citizens retain final voting power on each issue, a feature of direct democracy. Furthermore, every citizen is empowered to propose their own policies and, where relevant, initiate new policy processes (initiative). Collaboratively formulated policies, considering the views of a larger proportion of the citizenry, may result in more just, sustainable, and therefore, implementable outcomes. As Steven Brams and Peter Fishburn suggest, "collaborative e-democracy can help to ensure that all voices are heard, and that decisions are made in the best interests of the community." They argue that this can lead to "more just and sustainable outcomes." Collaborative e-democracy can also help to improve the quality of decision-making, as noted by Michael Gallagher, who states, "By involving a wider range of people in the decision-making process, collaborative e-democracy can help to ensure that decisions are made on the basis of sound evidence and reasoning." Gallagher further proposes that this collaborative approach can contribute to "more sustainable outcomes." Andrew Reynolds posits that "Collaborative e-democracy can help to make government more responsive to the needs of the people. By giving citizens a direct say in the decision-making process, collaborative e-democracy can help to ensure that government is more accountable to the people. This can lead to more implementable outcomes, as decisions are more likely to be supported by the people." Additional references support the idea that collaborative e-democracy can lead to more just, sustainable, and implementable outcomes.
Theoretical Framework:
Collaborative e-democracy encompasses the following theoretical components: Collaborative Democracy: A political framework where electors and elected officials actively collaborate to achieve optimal solutions using technologies that facilitate broad citizen participation in government.
Theoretical Framework:
Collaborative e-Policymaking (CPM): A software-facilitated, five-phase policy process in which citizens participate either directly or indirectly via proxy representatives. This process unfolds on a government-backed social networking site, with all citizens as members. Each member can propose issues, evaluate and rank other members' suggestions, and vote on laws and policies that will affect them. In a broader context, CPM is a universal process that could enable every organization (e.g., businesses, governments) or self-selected group (e.g., unions, online communities) to co-create their own regulations (such as laws or codes of conduct) and strategies (e.g., governmental actions, business strategies), involving all stakeholders in the respective decision-making processes.
Theoretical Framework:
Proxy voting and Liquid Democracy: In a collaborative e-democracy, the system takes into account the limitations of direct democracy, where each citizen is expected to vote on every policy issue. Recognizing that this could impose an excessive burden, collaborative e-democracy allows citizens to delegate voting power to trusted representatives, or proxies, for issues or domains where they lack the time, interest, or expertise for direct participation. Despite this delegation, the original citizen maintains final voting power on each issue, amalgamating the benefits of both direct and representative democracy on the social networking platform.
Policy Process:
Collaborative e-democracy engages various stakeholders such as affected individuals, domain experts, and parties capable of implementing solutions in the process of shaping public laws and policies. The cycle of each policy begins with the identification of a common issue or objective by the collective participants - citizens, experts, and proxy representatives. As Steven Brams and Peter Fishburn argue, "collaborative e-democracy can help to ensure that all voices are heard, and that decisions are made in the best interests of the community." Suggestion & Ranking Phase: Participants are prompted to offer policy solutions aimed at resolving the identified issue or reaching the proposed goal, a method known as policy crowdsourcing. Subsequently, these suggestions are ranked with those having the most support taking precedence. This process, according to Michael Gallagher, helps to "improve the quality of decision-making" by involving a wider range of people, ensuring that "decisions are made on the basis of sound evidence and reasoning." Evaluation Phase: For each top-ranking proposal (i.e., law or government action), pros and cons of its implementation are identified, enabling the collective to assess how they might be impacted by each policy. Independent domain experts assist this evaluation process.
Policy Process:
Voting Phase: Based on the collectively created information, the group votes for the proposal perceived as the most optimal solution for the identified issue or goal. The outcome of this phase may result in the introduction of a new law or execution of a new government action. As Andrew Reynolds notes, giving citizens a "direct say in the decision-making process... can lead to more implementable outcomes, as decisions are more likely to be supported by the people." Revision Phase: A predetermined period after implementation, the collective is consulted to ascertain whether the policy enacted was successful in resolving the issue or attaining the goal. If the policy is deemed successful, the cycle concludes; if not, the process reinitiates with the suggestion phase until a resolution is reached. Note that as a software process, CPM is automated and conducted on a governmental social networking site.
Principles:
Collaborative e-democracy operates on several key principles: Self-government and Direct Democracy: Collaborative e-democracy is grounded in the ideal of self-governance and direct democracy. It embodies the ancient Roman law maxim, quod omnes tangit ab omnibus approbetur, which translates to “that which affects all people must be approved by all people.” This stands in stark contrast to representative democracy, which is often influenced by corporate lobbies (Corporatocracy).
Principles:
Open source governance: This philosophy promotes the application of open source and open content principles to democracy, enabling any engaged citizen to contribute to policy creation.
Aggregation: The social networking platform plays a role in gathering citizens' opinions on different issues, such as agreement with a specific policy. Based on these common views, ad hoc groups may form to address these concerns.
Collaboration: The platform also encourages collaboration of like-minded individuals on shared issues, aiding the co-creation of policy proposals within or between groups. Groups with contrasting strategies or perspectives but similar goals can compete with each other.
Collective intelligence: The CPM process leverages collective intelligence — a group intelligence emerging from aggregation, collaboration, competition, and consensus decision-making. This collective intelligence helps identify issues and co-create solutions beneficial for most people, reflecting the design pattern of Web 2.0.
Principles:
Collective Learning & Adoption: The direct democracy aspect of collaborative e-democracy shifts policymaking responsibility from government teams (top-down) to the citizen collective (bottom-up). The repercussions of their decisions initiate a collective learning process. Collaborative e-democracy, being flexible and adaptable, integrates learning experiences quickly and adjusts to new social, economic, or environmental circumstances. This principle mirrors 'Perpetual Beta,' another design pattern of Web 2.0.
Benefits and Limitations:
Collaborative e-democracy aims to bring forth several benefits: Transparency and Accessibility: The CPM process aspires to provide transparency and make governmental operations accessible to all citizens via the internet. Political efficacy: Engaging citizens in governmental processes could heighten political efficacy and help counter the democratic deficit. Deliberation: The governmental social networking site, serving as the primary platform for political information and communication, could enhance deliberation quality among the nation's various governmental and non-governmental stakeholders. Collective Awareness: Large-scale online participation could boost public awareness of collective problems, goals, or policy issues, including minority opinions, and facilitate harnessing the nation's collective intelligence for policy development. However, collaborative e-democracy has its limitations: Constitutional Constraints: Many democratic nations have constitutional limits on direct democracy, and governments may be reluctant to surrender policymaking authority to the collective. Digital divide: People without internet access could be at a disadvantage in a collaborative e-democracy. Traditional democratic procedures need to remain available until the digital divide is resolved. Majority rule: As in most democratic decision processes, majorities could overshadow minorities. The evaluation process could provide advance notice if a minority group would be significantly disadvantaged by a proposed policy. Potential for Naive Voting: Voters may not have comprehensive understanding of the facts and data related to their options, leading to votes that do not represent their actual intentions. However, the system's included proxy voting/delegation, coupled with potential improvement in education, critical thinking, and reasoning skills (potentially fostered by a better form of government and internet usage), should help mitigate this issue. Additionally, the CPM process incorporates proxies and experts to educate people on policy implications before decisions are made.
Research and Development:
The concepts of collaborative e-democracy and collaborative e-policy-making were first introduced at two academic conferences on e-governance and e-democracy in 2009. The key presentations were: Petrik, Klaus (2009). "Participation and e-Democracy: How to Utilize Web 2.0 for Policy Decision-Making." Presented at the 10th International Digital Government Research Conference: "Social Networks: Making Connections between Citizens, Data & Government" in Puebla, Mexico. Petrik, Klaus (2009). "Deliberation and Collaboration in the Policy Process: A Web 2.0 Approach." Presented at The 3rd Conference on Electronic Democracy in Vienna, Austria. An additional publication appeared in the "Journal of eDemocracy and Open Government", Vol 2, No 1 (2010).
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Cadbury Creme Egg**
Cadbury Creme Egg:
A Cadbury Creme Egg, originally named Fry's Creme Egg, is a chocolate confection produced in the shape of an egg. It originated from the British chocolatier Fry's in 1963 before being renamed by Cadbury in 1971. The product consists of a thick chocolate shell containing a sweet white and yellow filling that resembles fondant. The filling mimics the albumen and yolk of a soft boiled egg.
Cadbury Creme Egg:
The confectionery is produced by Cadbury in the United Kingdom, by The Hershey Company in the United States, and by Cadbury Adams in Canada.
History:
While filled eggs were first manufactured by the Cadbury Brothers in 1923, the Creme Egg in its current form was introduced in 1963. Initially sold as "Fry's Creme Eggs" (incorporating the Fry's brand, after the British chocolatier), they were renamed "Cadbury's Creme Eggs" in 1971.
Composition:
Cadbury Creme Eggs are manufactured as two chocolate half shells, each of which is filled with a white fondant made from sugar, glucose syrup, inverted sugar syrup, dried egg white, and flavouring. The fondant in each half is topped with a smaller amount of the same mixture coloured yellow with paprika extract, to mimic the yolk and white of a real egg. Both halves are then quickly joined and cooled, the shell bonding together in the process. The solid eggs are removed from the moulds and wrapped in foil. During an interview in a 2007 episode of Late Night with Conan O'Brien, actor B. J. Novak drew attention to the fact that American market Cadbury Creme Eggs had decreased in size, despite the official Cadbury website stating otherwise. American Creme Eggs at the time weighed 34 g (1.2 oz) and contained 150 kcal. Before 2006, the eggs marketed by Hershey were identical to the UK version, weighing 39 g (1.4 oz) and containing 170 kcal. In 2015, the British Cadbury company under the American Mondelēz International conglomerate announced that it had changed the formula of the Cadbury Creme Egg by replacing its Cadbury Dairy Milk chocolate with "standard cocoa mix chocolate". It had also reduced the packaging from 6 eggs to 5 with a less than proportionate decrease in price. This resulted in a large number of complaints from consumers. Analysts at IRI found that Cadbury lost more than $12 million in Creme Egg sales in the UK.
Manufacture and sales:
Creme Eggs are produced by Cadbury in the United Kingdom, by The Hershey Company in the United States, and by Cadbury Adams in Canada. They are sold by Mondelez International in all markets except the US, where The Hershey Company has the local marketing rights. At the Bournville factory in Birmingham in the UK, they are manufactured at a rate of 1.5 million per day. The Creme Egg was also previously manufactured in New Zealand, but has been imported from the UK since 2009. A YouGov poll saw the Creme Egg ranked as the most famous confectionery in the UK. As of 2011 the Creme Egg was the best-selling confectionery item between New Year's Day and Easter in the UK, with annual sales in excess of 200 million eggs and a brand value of approximately £55 million. However, in 2016 sales plummeted after the controversial decision to change the recipe from the original Cadbury Dairy Milk chocolate to a cheaper substitute, with reports of a loss of more than £6M in sales.
Manufacture and sales:
Creme Eggs are available individually and in boxes, with the numbers of eggs per package varying per country. The foil wrapping of the eggs was traditionally pink, blue, purple, and yellow in the United Kingdom and Ireland, though green was removed and purple replaced blue early in the 21st century. In the United States, some green is incorporated into the design, which previously featured the product's mascot, the Creme Egg Chick. As of 2015, the packaging in Canada has been changed to a 34 g (1.2 oz), purple, red and yellow soft plastic shell.
Manufacture and sales:
Creme Eggs are available annually between 1 January and Easter Sunday. In the UK in the 1980s, Cadbury made Creme Eggs available year-round but sales dropped and they returned to seasonal availability. In 2018, white chocolate versions of the Creme Eggs were made available. These eggs were not given a wrapper that clearly marked them as white chocolate eggs, and were mixed in with the normal Creme Eggs in the United Kingdom. Individuals who discovered an egg would win money via a ticket that had a code printed on it inside of the wrapper. Creme Eggs were manufactured in New Zealand at the Cadbury factory in Dunedin from 1983 to 2009. Cadbury in New Zealand and Australia went through a restructuring process, with most Cadbury products previously produced in New Zealand being manufactured instead at Cadbury factories in Australia. Cadbury Australia produces some Creme Egg products for the Australian market, most prominently the Mini Creme Egg. New Zealand's Dunedin plant later received a $69 million upgrade to specialise in boxed products such as Cadbury Roses, and Creme Eggs were no longer produced there. The result of the changes meant that Creme Eggs were instead imported from the United Kingdom. The change also saw the range of Creme Eggs available for sale decrease. The size also dropped from 40 g (1.4 oz) to 39 g (1.4 oz) in this time. The response from New Zealanders was not positive, with complaints including the filling not being as runny as the New Zealand version. As of 2023, Cadbury Australia continue to produce the Mini Egg variant.
Advertising:
The Creme Egg has been marketed in the UK and Ireland with the question "How do you eat yours?" and in New Zealand with the slogan "Don't get caught with egg on your face". Australia and New Zealand have also used a variation of the UK question, using the slogan "How do you do it?" In the US, Creme Eggs are advertised on television with a small white rabbit called the Cadbury Bunny (alluding to the Easter Bunny) which clucks like a chicken. Other animals dressed with bunny ears have also been used in the television ads, and in 2021, out of over 12,000 submissions in the Hershey Company's third annual tryouts, an Australian tree frog named Betty was named the newest Cadbury Bunny. Ads for caramel eggs use a larger gold-coloured rabbit which also clucks, and chocolate eggs use a large brown rabbit which clucks in a deep voice. The advertisements use the slogan "Nobunny knows Easter better than him", spoken by TV personality Mason Adams. The adverts have continued to air nearly unchanged into the high definition era and after Adams's death in 2005, though currently the ad image is slightly zoomed to fill the screen. The majority of rabbits used in the Cadbury commercials are Flemish Giants. In the UK, around the year 2000, selected stores were provided standalone paperboard cutouts of something resembling a "love tester". The shopper would press a button in the centre and a "spinner" (a series of LED lights) would select at random a way of eating the Creme Egg, e.g. "with chips". These were withdrawn within a year. There are also the "Creme Egg Cars" which are, as the name suggests, ovular vehicles painted to look like Creme Eggs. They are driven to various places to advertise the eggs but are based mainly at the Cadbury factory in Bournville. Five "Creme Egg Cars" were built from Bedford Rascal chassis. The headlights are taken from a Citroën 2CV. For the 2008/2009 season, advertising in the UK, Ireland, Australia, New Zealand and Canada consisted of stopmotion adverts in the "Here Today, Goo Tomorrow" campaign which comprised a Creme Egg stripping its wrapper off and then breaking its own shell, usually with household appliances and equipment, while making various 'goo' sounds, and a 'relieved' noise when finally able to break its shell. The Cadbury's Creme Egg website featured games where the player had to prevent the egg from finding a way to release its goo.
Advertising:
A similar advertising campaign in 2010 featured animated Creme Eggs destroying themselves in large numbers, such as gathering together at a cinema before bombarding into each other to release all of the eggs' goo, and another which featured eggs being destroyed by mouse traps.In Halloween 2011, 2012 and 2013, advertising in Canada and New Zealand consisted of the "Screme Egg" Easter aliens, such as 48 seconds in the advertising.
Advertising:
Campaigns/slogans:
c. 1970s: "Shopkeeper" campaign in which a boy asks for 6000 Cadbury Creme Eggs; "Irresistibly" campaign showing characters prepared to do something unusual for a Creme Egg, similar to the "What would you do for a Klondike bar?" campaign
Early 1980s: "Can't Resist Them"
1985: The "How Do You Eat Yours?" campaign
Mid-1980s–present: "Nobunny Knows Easter Better than Cadbury"
1985–1996: "Don't get caught with egg on your face"
1990–1993: The first television campaign to use the "How Do You Eat Yours?" theme, featuring the zodiac signs
1994–1996: Spitting Image characters continued "How Do You Eat Yours?"
1997–1999: Matt Lucas, with the catchphrase "I've seen the future, and it's egg shaped!"
2000–2003: The "Pointing Finger"
2004: The "Roadshow" finger
2005: "Licky, Sticky, Happy"
2006–2007: "Eat It Your Way"
2008–2010: "Here Today, Goo Tomorrow"
2008–2009: "Unleash the Goo"
2009: "Release the Goo"
2010: "You'll Miss Me When I'm Gone"
2011: "Goo Dares Wins"
2011: "Get Your Goo On!"
2012: "Gooing For Gold"
2012: "It's Goo Time"
2013–2016: "Have a fling with a Creme Egg"
2017–2019: "It's Hunting Season"
2020–2021: "Creme Egg Eatertainment"
2021: "Creme Egg Golden Goobilee"
2022–2023: "How do you NOT eat yours?"
Creme Egg Café: In 2016, Cadbury opened a pop-up café titled "Crème de la Creme Egg Café" in London. Tickets for the café sold out within an hour of being published online. The café on Greek Street, Soho, was open every Friday, Saturday and Sunday from 22 January to 6 March 2016.
Advertising:
Creme Egg Camp: In 2018, Cadbury opened a pop-up camp. The camp in Last Days of Shoreditch, Old Street, was open every Thursday to Sunday from 19 January to 18 February 2018.
Varieties:
Cadbury has introduced many variants of the original Creme Egg. Other products include:
Creme Egg Fondant in a narrow cardboard tube (limited edition)
Creme Egg ice cream with a fondant sauce in milk chocolate
Creme Egg Pots Of Joy – melted Cadbury milk chocolate with a fondant layer
Screme Egg Pots Of Joy – melted Cadbury milk chocolate but with a layer of Screme Egg fondant
Creme Egg Layers Of Joy – a layered sharing dessert with Cadbury milk chocolate, chocolate mousse, chocolate chip cookie and fondant dessert with a creamy topping.
Varieties:
Jaffa Egg – manufactured in New Zealand; dark chocolate with orange filling
Marble Egg – manufactured in New Zealand; Dairy Milk and Dream chocolate swirled together
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Fibrous tunic of eyeball**
Fibrous tunic of eyeball:
The sclera and cornea form the fibrous tunic of the bulb of the eye; the sclera is opaque, and constitutes the posterior five-sixths of the tunic; the cornea is transparent, and forms the anterior sixth.
The term "corneosclera" is also used to describe the sclera and cornea together.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Noncommutative projective geometry**
Noncommutative projective geometry:
In mathematics, noncommutative projective geometry is a noncommutative analog of projective geometry in the setting of noncommutative algebraic geometry.
Examples:
The quantum plane, the most basic example, is the quotient ring of the free ring:
k⟨x, y⟩ / (yx − q·xy)
More generally, the quantum polynomial ring is the quotient ring:
k⟨x_1, …, x_n⟩ / (x_i x_j − q_{ij} x_j x_i)
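Concretely, the defining relation of the quantum plane, yx = q·xy, lets any word in x and y be rewritten into a normal form q^m · x^a · y^b. Below is a small Python sketch of this normal-ordering computation (the function name and string representation of monomials are my own, purely illustrative choices):

```python
def normal_order(word, q):
    """Rewrite a monomial in the quantum plane k<x, y>/(yx - q*xy) into
    normal form q**m * x**a * y**b: each swap of an adjacent 'yx' into 'xy'
    contributes one factor of q, so m counts y-before-x inversions."""
    a = word.count("x")
    b = word.count("y")
    m = 0
    ys_seen = 0
    for ch in word:
        if ch == "y":
            ys_seen += 1
        elif ch == "x":
            m += ys_seen     # every y already to the left must hop over this x
    return q ** m, a, b

coeff, a, b = normal_order("yxyx", 2)   # yxyx = q^3 * x^2 * y^2, so with q = 2 the coefficient is 8
print(coeff, a, b)                      # 8 2 2
```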
Proj construction:
By definition, the Proj of a graded ring R is the quotient category of the category of finitely generated graded modules over R by the subcategory of torsion modules. If R is a commutative Noetherian graded ring generated by degree-one elements, then the Proj of R in this sense is equivalent to the category of coherent sheaves on the usual Proj of R. Hence, the construction can be thought of as a generalization of the Proj construction for a commutative graded ring.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Federated learning**
Federated learning:
Federated learning (also known as collaborative learning) is a machine learning technique that trains an algorithm via multiple independent sessions, each using its own dataset. This approach stands in contrast to traditional centralized machine learning techniques where local datasets are merged into one training session, as well as to approaches that assume that local data samples are identically distributed.
Federated learning:
Federated learning enables multiple actors to build a common, robust machine learning model without sharing data, thus addressing critical issues such as data privacy, data security, data access rights and access to heterogeneous data. Its applications engage industries including defense, telecommunications, Internet of Things, and pharmaceuticals. A major open question is when/whether federated learning is preferable to pooled data learning. Another open question concerns the trustworthiness of the devices and the impact of malicious actors on the learned model.
Definition:
Federated learning aims at training a machine learning algorithm, for instance deep neural networks, on multiple local datasets contained in local nodes without explicitly exchanging data samples. The general principle consists in training local models on local data samples and exchanging parameters (e.g. the weights and biases of a deep neural network) between these local nodes at some frequency to generate a global model shared by all nodes.
Definition:
The main difference between federated learning and distributed learning lies in the assumptions made on the properties of the local datasets, as distributed learning originally aims at parallelizing computing power whereas federated learning originally aims at training on heterogeneous datasets. While distributed learning also aims at training a single model on multiple servers, a common underlying assumption is that the local datasets are independent and identically distributed (i.i.d.) and roughly have the same size. None of these hypotheses are made for federated learning; instead, the datasets are typically heterogeneous and their sizes may span several orders of magnitude. Moreover, the clients involved in federated learning may be unreliable as they are subject to more failures or drop out, since they commonly rely on less powerful communication media (e.g. Wi-Fi) and battery-powered systems (e.g. smartphones and IoT devices) compared to distributed learning, where nodes are typically datacenters that have powerful computational capabilities and are connected to one another with fast networks.
Definition:
Mathematical formulation The objective function for federated learning is as follows: f(x_1, …, x_K) = (1/K) ∑_{i=1}^{K} f_i(x_i), where K is the number of nodes, x_i are the weights of the model as viewed by node i, and f_i is node i's local objective function, which describes how the model weights x_i conform to node i's local dataset.
The goal of federated learning is to train a common model on all of the nodes' local datasets, in other words: optimizing the objective function f(x_1, …, x_K), and achieving consensus on the x_i, i.e. x_1, …, x_K converge to some common x at the end of the training process.
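A direct translation of this objective into code, using a hypothetical per-node squared-error loss on a toy linear model (every name, the model choice, and the data here are illustrative placeholders, not a prescribed formulation):

```python
import numpy as np

def local_objective(x_i, data_i):
    """Hypothetical per-node loss f_i: mean squared error of a linear model
    with weights x_i on node i's local (features, targets) data."""
    features, targets = data_i
    residual = features @ x_i - targets
    return float(np.mean(residual ** 2))

def federated_objective(xs, datasets):
    """f(x_1, ..., x_K) = (1/K) * sum_i f_i(x_i)."""
    return sum(local_objective(x, d) for x, d in zip(xs, datasets)) / len(xs)

# Toy usage with K = 3 nodes, each holding a different local dataset.
rng = np.random.default_rng(0)
datasets = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]
xs = [np.zeros(4) for _ in range(3)]     # each node's current model weights
print(federated_objective(xs, datasets))
```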
Definition:
Centralized federated learning In the centralized federated learning setting, a central server is used to orchestrate the different steps of the algorithms and coordinate all the participating nodes during the learning process. The server is responsible for the nodes selection at the beginning of the training process and for the aggregation of the received model updates. Since all the selected nodes have to send updates to a single entity, the server may become a bottleneck of the system.
Definition:
Decentralized federated learning In the decentralized federated learning setting, the nodes are able to coordinate themselves to obtain the global model. This setup prevents single points of failure, as the model updates are exchanged only between interconnected nodes without the orchestration of the central server. Nevertheless, the specific network topology may affect the performance of the learning process. See blockchain-based federated learning and the references therein.
Definition:
Heterogeneous federated learning An increasing number of application domains involve a large set of heterogeneous clients, e.g., mobile phones and IoT devices. Most of the existing Federated learning strategies assume that local models share the same global model architecture. Recently, a new federated learning framework named HeteroFL was developed to address heterogeneous clients equipped with very different computation and communication capabilities. The HeteroFL technique can enable the training of heterogeneous local models with dynamically varying computation and non-iid data complexities while still producing a single accurate global inference model.
Main features:
Iterative learning To ensure good task performance of a final, central machine learning model, federated learning relies on an iterative process broken up into an atomic set of client-server interactions known as a federated learning round. Each round of this process consists in transmitting the current global model state to participating nodes, training local models on these local nodes to produce a set of potential model updates at each node, and then aggregating and processing these local updates into a single global update and applying it to the global model. In the methodology below, a central server is used for aggregation, while local nodes perform local training depending on the central server's orders. However, other strategies lead to the same results without central servers, in a peer-to-peer approach, using gossip or consensus methodologies. Assuming a federated round composed of one iteration of the learning process, the learning procedure can be summarized as follows: Initialization: according to the server inputs, a machine learning model (e.g., linear regression, neural network, boosting) is chosen to be trained on local nodes and initialized. Then, nodes are activated and wait for the central server to give the calculation tasks.
Main features:
Client selection: a fraction of local nodes are selected to start training on local data. The selected nodes acquire the current statistical model while the others wait for the next federated round.
Configuration: the central server orders selected nodes to undergo training of the model on their local data in a pre-specified fashion (e.g., for some mini-batch updates of gradient descent).
Reporting: each selected node sends its local model to the server for aggregation. The central server aggregates the received models and sends back the model updates to the nodes. It also handles failures for disconnected nodes or lost model updates. The next federated round is started returning to the client selection phase.
Main features:
Termination: once a pre-defined termination criterion is met (e.g., a maximum number of iterations is reached or the model accuracy is greater than a threshold), the central server aggregates the updates and finalizes the global model. The procedure considered before assumes synchronized model updates. Recent federated learning developments introduced novel techniques to tackle asynchronicity during the training process, or training with dynamically varying models. Compared to synchronous approaches, where local models are exchanged once the computations have been performed for all layers of the neural network, asynchronous ones leverage the properties of neural networks to exchange model updates as soon as the computations of a certain layer are available. These techniques are also commonly referred to as split learning, and they can be applied both at training and inference time, regardless of centralized or decentralized federated learning settings.
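The client-selection, local-training, and aggregation steps of a round can be sketched in a few lines of Python. The simulation below uses toy linear-regression data; the data-size-weighted average is one common aggregation choice (as in the FedAvg algorithm), and every function name, model, and parameter value is an illustrative assumption rather than a prescribed implementation.

```python
import numpy as np

def local_update(weights, data, lr=0.05, epochs=1, batch_size=8):
    """One node's training: a few epochs of mini-batch gradient descent on a
    linear least-squares model fitted to that node's local data."""
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        order = np.random.permutation(len(y))
        for start in range(0, len(y), batch_size):
            batch = order[start:start + batch_size]
            grad = 2 * X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)
            w -= lr * grad
    return w

def federated_round(global_w, node_data, client_fraction=0.5):
    """One federated round: select a fraction of clients, train locally on each,
    then aggregate the returned models with a data-size-weighted average."""
    K = len(node_data)
    selected = np.random.choice(K, size=max(1, int(client_fraction * K)), replace=False)
    updates = [local_update(global_w, node_data[i]) for i in selected]
    sizes = np.array([len(node_data[i][1]) for i in selected], dtype=float)
    return sum(w * s for w, s in zip(updates, sizes)) / sizes.sum()

rng = np.random.default_rng(1)
node_data = [(rng.normal(size=(30, 4)), rng.normal(size=30)) for _ in range(10)]
w = np.zeros(4)
for _ in range(20):                      # 20 federated rounds
    w = federated_round(w, node_data)
print(w)
```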
Non-IID data In most cases, the assumption of independent and identically distributed samples across local nodes does not hold for federated learning setups. Under this setting, the performance of the training process may vary significantly according to the unbalanced local data samples as well as the particular probability distribution of the training examples (i.e., features and labels) stored at the local nodes. To further investigate the effects of non-IID data, the following description considers the main categories presented in the preprint by Peter Kairouz et al. from 2019. The description of non-IID data relies on the analysis of the joint probability between features and labels for each node.
This makes it possible to decouple each contribution according to the specific distribution available at the local nodes.
The main categories for non-IID data can be summarized as follows: Covariate shift: local nodes may store examples that have different statistical distributions compared to other nodes. An example occurs in handwriting recognition datasets, where people typically write the same digits/letters with different stroke widths or slants.
Prior probability shift: local nodes may store labels that have different statistical distributions compared to other nodes. This can happen if datasets are regional and/or demographically partitioned. For example, datasets containing images of animals vary significantly from country to country.
Concept drift (same label, different features): local nodes may share the same labels but some of them correspond to different features at different local nodes. For example, images that depict a particular object can vary according to the weather condition in which they were captured.
Concept shift (same features, different labels): local nodes may share the same features but some of them correspond to different labels at different local nodes. For example, in natural language processing, the sentiment analysis may yield different sentiments even if the same text is observed.
Unbalanced: the amount of data available at the local nodes may vary significantly in size. The loss in accuracy due to non-IID data can be bounded by using more sophisticated means of data normalization than batch normalization.
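As an illustration of one of these categories (prior probability shift, i.e. label skew), the following sketch partitions a labeled dataset across clients by drawing per-client class proportions from a Dirichlet distribution, a common way to simulate non-IID splits; the function name and the concentration parameter alpha are assumptions for the example, with smaller alpha giving more skewed partitions:

```python
import numpy as np

def dirichlet_label_partition(labels, num_clients, alpha=0.5, seed=0):
    """Assign sample indices to clients with label-skewed (non-IID) proportions."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Per-client share of this class, drawn from a Dirichlet distribution.
        shares = rng.dirichlet(alpha * np.ones(num_clients))
        cut_points = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for client, chunk in zip(client_indices, np.split(idx, cut_points)):
            client.extend(chunk.tolist())
    return client_indices
```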
Algorithmic hyper-parameters:
Network topology The way the statistical local outputs are pooled and the way the nodes communicate with each other can change from the centralized model explained in the previous section. This leads to a variety of federated learning approaches: for instance, no central orchestrating server, or stochastic communication. In particular, orchestrator-less distributed networks are one important variation. In this case, there is no central server dispatching queries to local nodes and aggregating local models. Each local node sends its outputs to several randomly selected others, which aggregate their results locally. This limits the number of transactions, thereby sometimes reducing training time and computing cost.
Federated learning parameters Once the topology of the node network is chosen, one can control different parameters of the federated learning process (in addition to the machine learning model's own hyperparameters) to optimize learning: Number of federated learning rounds: T; Total number of nodes used in the process: K; Fraction of nodes used at each iteration: C; Local batch size used at each learning iteration: B. Other model-dependent parameters can also be adjusted, such as: Number of iterations for local training before pooling: N; Local learning rate: η. Those parameters have to be optimized depending on the constraints of the machine learning application (e.g., available computing power, available memory, bandwidth). For instance, stochastically choosing a limited fraction C of nodes for each iteration diminishes computing cost and may prevent overfitting, in the same way that stochastic gradient descent can reduce overfitting.
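For reference, these quantities can be collected into a single configuration object; the sketch below simply mirrors the notation in the text (T, K, C, B, N, η) with assumed default values:

```python
from dataclasses import dataclass

@dataclass
class FederatedConfig:
    rounds: int = 100            # T: number of federated learning rounds
    num_clients: int = 1000      # K: total number of nodes in the process
    fraction: float = 0.1        # C: fraction of nodes used at each iteration
    batch_size: int = 32         # B: local batch size at each learning iteration
    local_iters: int = 1         # N: local iterations before pooling
    learning_rate: float = 0.01  # η: local learning rate
```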
Technical limitations:
Federated learning requires frequent communication between nodes during the learning process. Thus, it requires not only enough local computing power and memory, but also high-bandwidth connections to be able to exchange parameters of the machine learning model. However, the technology also avoids data communication, which can require significant resources before starting centralized machine learning. Nevertheless, the devices typically employed in federated learning are communication-constrained: for example, IoT devices or smartphones are generally connected to Wi-Fi networks, so even though models are commonly less expensive to transmit than raw data, federated learning mechanisms may not be suitable in their general form. Federated learning raises several statistical challenges: Heterogeneity between the different local datasets: each node may have some bias with respect to the general population, and the size of the datasets may vary significantly; Temporal heterogeneity: each local dataset's distribution may vary with time; Interoperability of each node's dataset is a prerequisite; Each node's dataset may require regular curation; Hiding training data might allow attackers to inject backdoors into the global model; Lack of access to global training data makes it harder to identify unwanted biases entering the training, e.g. age, gender, sexual orientation; Partial or total loss of model updates due to node failures affecting the global model; Lack of annotations or labels on the client side.
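To see why bandwidth is often the binding constraint, a rough back-of-the-envelope estimate of the traffic generated by full-model exchange can help; the sketch below is a deliberate simplification (it ignores compression, protocol overhead and asymmetric up/down links):

```python
def total_traffic_bytes(num_params, bytes_per_param, clients_per_round, rounds):
    """Upper-bound estimate: every selected client downloads and uploads one
    full copy of the model parameters in each federated round."""
    per_round = 2 * num_params * bytes_per_param * clients_per_round
    return per_round * rounds

# Example: a 10-million-parameter float32 model, 100 clients per round,
# 1000 rounds -> total_traffic_bytes(10_000_000, 4, 100, 1000) = 8e12 bytes (~8 TB).
```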
Federated learning variations:
In this section, the notation of the paper published by H. Brendan McMahan et al. in 2017 is followed. To describe the federated strategies, let us introduce some notation: K: total number of clients; k: index of clients; n_k: number of data samples available during training for client k; w_t^k: model's weight vector on client k at federated round t; ℓ(w, b): loss function for weights w and batch b; E: number of local updates. Federated stochastic gradient descent (FedSGD) Deep learning training mainly relies on variants of stochastic gradient descent, where gradients are computed on a random subset of the total dataset and then used to make one step of the gradient descent.
Federated stochastic gradient descent is the direct transposition of this algorithm to the federated setting, but using a random fraction C of the nodes and all the data on these nodes. The gradients are averaged by the server proportionally to the number of training samples on each node, and used to make a gradient descent step.
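A minimal sketch of a FedSGD round under these assumptions, using a least-squares gradient as a stand-in for an arbitrary differentiable model (the server averages full-batch client gradients, weighted by local sample counts, and takes a single descent step):

```python
import numpy as np

def fedsgd_round(w, client_datasets, lr=0.1):
    """One FedSGD round over the selected clients' (X, y) datasets."""
    grads, sizes = [], []
    for X, y in client_datasets:
        grads.append(X.T @ (X @ w - y) / len(y))   # local full-batch gradient
        sizes.append(len(y))
    total = float(sum(sizes))
    avg_grad = sum(g * (n / total) for g, n in zip(grads, sizes))
    return w - lr * avg_grad                        # single global descent step
```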
Federated averaging Federated averaging (FedAvg) is a generalization of FedSGD, which allows local nodes to perform more than one batch update on local data and exchanges the updated weights rather than the gradients. The rationale behind this generalization is that in FedSGD, if all local nodes start from the same initialization, averaging the gradients is strictly equivalent to averaging the weights themselves. Further, averaging tuned weights coming from the same initialization does not necessarily hurt the resulting averaged model's performance.
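The corresponding FedAvg round, sketched under the same simplifying assumptions, differs only in that each client performs several local updates and returns weights, which the server then averages:

```python
import numpy as np

def fedavg_round(w, client_datasets, lr=0.1, local_epochs=5):
    """One FedAvg round: clients run several local gradient steps from the
    shared weights, then the server averages the returned weight vectors."""
    models, sizes = [], []
    for X, y in client_datasets:
        w_k = np.array(w, copy=True)
        for _ in range(local_epochs):
            w_k -= lr * X.T @ (X @ w_k - y) / len(y)   # local update
        models.append(w_k)
        sizes.append(len(y))
    total = float(sum(sizes))
    return sum(m * (n / total) for m, n in zip(models, sizes))
```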
Federated Learning with Dynamic Regularization (FedDyn) Federated learning methods suffer when the device datasets are heterogeneously distributed. The fundamental dilemma in the heterogeneously distributed device setting is that minimizing the device loss functions is not the same as minimizing the global loss objective. In 2021, Acar et al. introduced the FedDyn method as a solution to the heterogeneous dataset setting. FedDyn dynamically regularizes each device's loss function so that the modified device losses converge to the actual global loss. Since the local losses are aligned, FedDyn is robust to different heterogeneity levels and can safely perform full minimization on each device. Theoretically, FedDyn converges to the optimum (a stationary point for nonconvex losses) while being agnostic to the heterogeneity levels. These claims are verified with extensive experiments on various datasets. Minimizing the number of communications is the gold standard for comparison in federated learning. It may also be desirable to decrease the local computation level per device in each round. FedDynOneGD is an extension of FedDyn with lower local compute requirements. FedDynOneGD calculates only one gradient per device in each round and updates the model with a regularized version of the gradient. Hence, the computation complexity is linear in the local dataset size. Moreover, the gradient computation can be parallelized within each device, unlike successive SGD steps. Theoretically, FedDynOneGD achieves the same convergence guarantees as FedDyn with less local computation.
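The core of FedDyn can be sketched as a modified local objective: the device's empirical loss is augmented with a linear correction term (per-device state carried across rounds) and a quadratic term that anchors the local solution to the current global model. The snippet below is a simplified illustration of that regularized loss, not a faithful reimplementation of the published algorithm; the parameter alpha and the way the state vector is maintained are assumptions for the example.

```python
import numpy as np

def feddyn_local_loss(w_local, w_global, grad_state, local_loss_fn, alpha=0.01):
    """Dynamically regularized device loss (simplified illustration):
    empirical loss - <state, w> + (alpha / 2) * ||w - w_global||^2."""
    proximal = 0.5 * alpha * np.sum((w_local - w_global) ** 2)
    correction = float(np.dot(grad_state, w_local))
    return local_loss_fn(w_local) - correction + proximal
```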
Personalized Federated Learning by Pruning (Sub-FedAvg) Federated learning methods cannot achieve good global performance under non-IID settings, which motivates the participating clients to yield personalized models in federation. Recently, Vahidian et al. introduced Sub-FedAvg, opening a new personalized FL algorithm paradigm by proposing hybrid pruning (structured + unstructured pruning) with averaging on the intersection of clients' drawn subnetworks, which simultaneously handles communication efficiency, resource constraints and personalized model accuracy. Sub-FedAvg is the first work to show, through experiments, the existence of personalized winning tickets for clients in federated learning. It also proposes two algorithms on how to effectively draw the personalized subnetworks. Sub-FedAvg tries to extend the "Lottery Ticket Hypothesis", originally formulated for centrally trained neural networks, to neural networks trained in federated learning, leading to this open research problem: "Do winning tickets exist for clients' neural networks being trained in federated learning? If yes, how to effectively draw the personalized subnetworks for each client?" Dynamic Aggregation - Inverse Distance Aggregation IDA (Inverse Distance Aggregation) is a novel adaptive weighting approach for clients based on meta-information which handles unbalanced and non-IID data. It uses the distance of the model parameters as a strategy to minimize the effect of outliers and improve the model's convergence rate.
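The inverse-distance idea behind IDA can be illustrated with a short sketch: client models whose parameters lie far from the mean model (potential outliers) receive smaller aggregation weights. The exact weighting scheme of the published method may differ; this only conveys the general principle.

```python
import numpy as np

def inverse_distance_aggregate(client_weights, eps=1e-8):
    """Aggregate client parameter vectors with weights inversely proportional
    to their distance from the mean model, down-weighting outliers."""
    stacked = np.stack(client_weights)              # shape: (clients, params)
    mean = stacked.mean(axis=0)
    dists = np.linalg.norm(stacked - mean, axis=1) + eps
    coeffs = (1.0 / dists) / np.sum(1.0 / dists)    # normalized inverse distances
    return np.tensordot(coeffs, stacked, axes=1)
```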
Hybrid Federated Dual Coordinate Ascent (HyFDCA) Very few methods for hybrid federated learning, where clients only hold subsets of both features and samples, exist. Yet, this scenario is very important in practical settings. Hybrid Federated Dual Coordinate Ascent (HyFDCA) is a novel algorithm proposed in 2022 that solves convex problems in the hybrid FL setting. This algorithm extends CoCoA, a primal-dual distributed optimization algorithm introduced by Jaggi et al. (2014) and Smith et al. (2017), to the case where both samples and features are partitioned across clients.
HyFDCA claims several improvements over existing algorithms: HyFDCA is a provably convergent primal-dual algorithm for hybrid FL in at least the following settings.
In the hybrid federated setting with complete client participation and in the horizontal federated setting with random subsets of available clients, the authors show that HyFDCA enjoys a convergence rate of O(1/t), which matches the convergence rate of FedAvg (see below).
In the vertical federated setting with incomplete client participation, the authors show that HyFDCA enjoys a convergence rate of O(log(t)/t), whereas FedBCD exhibits a slower O(1/sqrt(t)) convergence rate and requires full client participation.
HyFDCA provides the privacy steps that ensure privacy of client data in the primal-dual setting. These principles apply to future efforts in developing primal-dual algorithms for FL.
HyFDCA empirically outperforms FedAvg in loss function value and validation accuracy across a multitude of problem settings and datasets. The authors also introduce a hyperparameter selection framework for FL with competing metrics using ideas from multiobjective optimization. There is only one other algorithm that focuses on hybrid FL, HyFEM, proposed by Zhang et al. (2020). This algorithm uses a feature matching formulation that balances clients building accurate local models and the server learning an accurate global model. This requires a matching regularizer constant that must be tuned based on user goals and results in disparate local and global models. Furthermore, the convergence results provided for HyFEM only prove convergence of the matching formulation, not of the original global problem. This work is substantially different from HyFDCA's approach, which uses data on local clients to build a global model that converges to the same solution as if the model were trained centrally. Furthermore, the local and global models are synchronized and do not require the adjustment of a matching parameter between local and global models. However, HyFEM is suitable for a vast array of architectures including deep learning architectures, whereas HyFDCA is designed for convex problems like logistic regression and support vector machines.
Federated ViT using Dynamic Aggregation (FED-REV) Federated learning (FL) provides training of a global shared model using decentralized data sources on edge nodes while preserving data privacy. However, its performance in computer vision applications using convolutional neural networks (CNNs) lags considerably behind that of centralized training, due to limited communication resources and low processing capability at edge nodes. Pure vision transformer (ViT) models, by contrast, outperform CNNs by almost a factor of four in computational efficiency and accuracy. FED-REV is an FL model with a reconstructive strategy that illustrates how attention-based structures (pure vision transformers) enhance FL accuracy over large and diverse data distributed over edge nodes. Its reconstruction strategy determines the dimension influence of each stage of the vision transformer and then reduces its dimensional complexity, which lowers the computation cost of edge devices while preserving the accuracy achieved by the pure vision transformer.
Current research topics:
Federated learning started to emerge as an important research topic in 2015 and 2016, with the first publications on federated averaging in telecommunication settings. Another important aspect of active research is the reduction of the communication burden during the federated learning process. In 2017 and 2018, publications emphasized the development of resource allocation strategies, especially to reduce communication requirements between nodes with gossip algorithms, as well as the characterization of robustness to differential privacy attacks. Other research activities focus on the reduction of bandwidth during training through sparsification and quantization methods, where the machine learning models are sparsified and/or compressed before they are shared with other nodes. Developing ultra-light DNN architectures is essential for device-/edge-learning, and recent work recognizes both the energy efficiency requirements for future federated learning and the need to compress deep learning, especially during learning. Recent research advancements are starting to consider real-world propagating channels, as previous implementations assumed ideal channels. Another active direction of research is to develop federated learning for training heterogeneous local models with varying computation complexities and producing a single powerful global inference model. A learning framework named Assisted learning was recently developed to improve each agent's learning capabilities without transmitting private data, models, and even learning objectives. Compared with federated learning, which often requires a central controller to orchestrate the learning and optimization, Assisted learning aims to provide protocols for the agents to optimize and learn among themselves without a global model.
Use cases:
Federated learning typically applies when individual actors need to train models on larger datasets than their own, but cannot afford to share the data itself with others (e.g., for legal, strategic or economic reasons). The technology nevertheless requires good connections between local servers and a minimum of computational power for each node.
Transportation: self-driving cars Self-driving cars encapsulate many machine learning technologies to function: computer vision for analyzing obstacles, machine learning for adapting their pace to the environment (e.g., bumpiness of the road). Due to the potentially high number of self-driving cars and the need for them to quickly respond to real-world situations, the traditional cloud approach may generate safety risks. Federated learning can represent a solution for limiting the volume of data transfer and accelerating learning processes.
Industry 4.0: smart manufacturing In Industry 4.0, there is widespread adoption of machine learning techniques to improve the efficiency and effectiveness of industrial processes while guaranteeing a high level of safety. Nevertheless, the privacy of sensitive data for industries and manufacturing companies is of paramount importance. Federated learning algorithms can be applied to these problems as they do not disclose any sensitive data. In addition, FL has also been implemented for PM2.5 prediction to support smart city sensing applications.
Medicine: digital health Federated learning seeks to address the problem of data governance and privacy by training algorithms collaboratively without exchanging the data itself. Today's standard approach of centralizing data from multiple centers comes at the cost of critical concerns regarding patient privacy and data protection. To solve this problem, the ability to train machine learning models at scale across multiple medical institutions without moving the data is a critical technology. Nature Digital Medicine published the paper "The Future of Digital Health with Federated Learning" in September 2020, in which the authors explore how federated learning may provide a solution for the future of digital health, and highlight the challenges and considerations that need to be addressed. Recently, a collaboration of 20 different institutions around the world validated the utility of training AI models using federated learning. In a paper published in Nature Medicine, "Federated learning for predicting clinical outcomes in patients with COVID-19", they showcased the accuracy and generalizability of a federated AI model for the prediction of oxygen needs in patients with COVID-19 infections. Furthermore, in the published paper "A Systematic Review of Federated Learning in the Healthcare Area: From the Perspective of Data Properties and Applications", the authors attempt to provide a set of FL challenges from a medical data-centric perspective.
Robotics Robotics includes a wide range of applications of machine learning methods: from perception and decision-making to control. As robotic technologies have been increasingly deployed from simple and repetitive tasks (e.g. repetitive manipulation) to complex and unpredictable tasks (e.g. autonomous navigation), the need for machine learning grows. Federated learning provides a solution to improve over conventional machine learning training methods. In one study, mobile robots learned navigation over diverse environments using an FL-based method, helping generalization. In another, federated learning was applied to improve multi-robot navigation under limited communication bandwidth scenarios, which is a current challenge in real-world learning-based robotic tasks. In a third, federated learning was used to learn vision-based navigation, helping better sim-to-real transfer.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**LibGDX**
LibGDX:
libGDX is a free and open-source game-development application framework written in the Java programming language with some C and C++ components for performance-dependent code. It allows for the development of desktop and mobile games by using the same code base. It is cross-platform, supporting Windows, Linux, Mac OS X, Android, iOS, BlackBerry and web browsers with WebGL support.
History:
In the middle of 2009 Mario Zechner, the creator of libGDX, wanted to write Android games and started developing a framework called AFX (Android Effects) for this. When he found that deploying the changes from Desktop to Android device was cumbersome, he modified AFX to work on the Desktop as well, making it easier to test programs. This was the first step toward the game framework later known as libGDX. In March 2010 Zechner decided to open-source AFX, hosting it on Google Code under the GNU Lesser General Public License (LGPL). However, at the time he stated that "It's not the intention of the framework to be used for creating desktop games anyway", intending the framework to primarily target Android. In April, it got its first contributor. When Zechner created a Box2D JNI wrapper, this attracted more users and contributors because physics games were popular at the time. Many of the issues with Android were resolved because of this. Because many users suggested switching to a different license due to LGPL not being suitable for Android, libGDX changed its license to the Apache License 2.0 in July 2010, making it possible to use the framework in closed-source commercial games. The same month its phpBB forum was launched. Due to issues with Java Sound the audio desktop implementation switched to OpenAL in January 2011. Development of a small image manipulation library called Gdx2D was finished as well, which depends on the open source STB library. The rest of 2011 was spent adding a UI library and working on the basics of a 3D API. At the start of 2012 Zechner created a small helper library called gdx-jnigen for easing the development of JNI bindings. This made it possible for the gdx-audio and gdx-freetype extensions to be developed over the following months. Inspired by Google's PlayN cross-platform game development framework that used Google Web Toolkit (GWT) to compile Java to JavaScript code, Zechner wrote an HTML/JavaScript backend over the course of several weeks, which allowed libGDX applications to be run in any browser with WebGL support. After Google abandoned PlayN, it was maintained by Michael Bayne, who added iOS support to it. libGDX used parts of this work for its own MonoTouch-based backend. In August 2012 the project switched its version control system from Subversion to Git, moving from Google Code to GitHub. However, the issue tracker and wiki remained on Google Code for another year. The main build system was also changed to Maven, making it easier for developers with different IDEs to work together. Because of issues with the MonoTouch iOS backend Niklas Thernig wrote a RoboVM backend for libGDX in March 2013, which was integrated into the project in September. From March to May 2013 a new 3D API was developed as well and integrated into the library. In June 2013 the project's website was redone, now featuring a gallery where users can submit their games created with libGDX. As of January 2016 more than 3000 games have been submitted. After the source code migration to GitHub the year before, in September 2013 the issue tracker and wiki were also moved there from Google Code. The same month the build and dependency management system was switched from Maven to Gradle. After a cleanup phase in the first months of 2014 libGDX version 1.0 was released on 20 April, more than four years after the start of the project. In 2014 libGDX was one of the annual Duke's Choice Award winners, being chosen for its focus on platform-independence.
From a diverse team of open source enthusiasts comes libGDX, a cross-platform game development framework that allows programmers to write, test, and debug Java games on a desktop PC running Windows, Linux, or Mac OS X and deploy that same code to Android, iOS and WebGL-enabled browsers—something not widely available right now. The goal of libGDX, says creator Mario Zechner, "is to fulfill the 'write once, run anywhere' promise of the Java platform specifically for game development." In April 2016 it was announced that libGDX would switch to Intel's Multi-OS Engine on the iOS backend after the discontinuation of RoboVM. With the release of libGDX 1.9.3 on 16 May 2016 Multi-OS is provided as an alternative, while by default the library uses its own fork of the open source version of RoboVM.
libGDX Jam From 18 December 2015 to 18 January 2016 a libGDX game jam was organized together with RoboVM, itch.io and Robotality. From initially 180 theme suggestions "Life in space" was chosen as the jam's main theme, and 83 games were created over the course of the competition.
Release versions
Architecture:
libGDX allows the developer to write, test, and debug their application on their own desktop PC and use the same code on Android. It abstracts away the differences between a common Windows/Linux application and an Android application. The usual development cycle consists of staying on the desktop PC as much as possible while periodically verifying that the project still works on Android. Its main goal is to provide total compatibility between desktop and mobile devices, the main difference being speed and processing power.
Backends The library transparently uses platform-specific code through various backends to access the capabilities of the host platform. Most of the time the developer does not have to write platform-specific code, except for starter classes (also called launchers) that require different setup depending on the backend.
On the desktop the Lightweight Java Game Library (LWJGL) is used. There is also an experimental JGLFW backend that is not being continued anymore. In Version 1.8 a new LWJGL 3 backend was introduced, intended to replace the older LWJGL 2 backend.
The HTML5 backend uses the Google Web Toolkit (GWT) for compiling the Java to JavaScript code, which is then run in a normal browser environment. libGDX provides several implementations of standard APIs that are not directly supported there, most notably reflection.
The Android backend runs Java code compiled for Android with the Android SDK.
For iOS a custom fork of RoboVM is used to compile Java to native iOS instructions. Intel's Multi-OS Engine has been provided as an alternative since the discontinuation of RoboVM.
Other JVM languages While libGDX is written primarily in Java, the compiled bytecode is language-independent, allowing many other JVM languages to directly use the library. The documentation specifically states the interoperability with Ceylon, Clojure, Kotlin, Jython, JRuby and Scala.
Extensions:
Several official and third-party extensions exist that add additional functionality to the library.
gdxAI An artificial intelligence (AI) framework that was split from the main library with version 1.4.1 in October 2014 and moved into its own repository. While it was initially made for libGDX, it can be used with other frameworks as well. The project focuses on AI useful for games, among them pathfinding, decision making and movement.
gdx freetype Can be used to render FreeType fonts at run time instead of using static bitmap images, which do not scale as well.
Box2D A wrapper for the Box2D physics library was introduced in 2010 and moved to an extension with the 1.0 release.
packr A helper tool that bundles a custom JRE with the application so end users do not need to have a JRE of their own installed.
Notable games:
Drag Racing: Streets, Ingress (before it was relaunched as Ingress Prime), Slay the Spire, Delver, HOPLITE, Deep Town, Sandship, Unciv, Mindustry, Space Haven, Pathway, Halfway, Riiablo, Mirage Realms, Raindancer, PokeMMO, Zombie Age 3, Epic Heroes War, Shattered Pixel Dungeon, Hair Dash, Antiyoy, Wildermyth, Line-Of-Five, Labyrinthian
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Diethyl selenide**
Diethyl selenide:
Diethyl selenide is an organoselenium compound with the formula C4H10Se. First reported in 1836, it was the first organoselenium compound to be discovered. It is the selenium analogue of diethyl ether. It has a strong and unpleasant smell.
Occurrence:
Diethyl selenide has been detected in biofuel produced from plantain peel.
It is also a minor air pollutant in some areas.
Preparation:
It may be prepared by a substitution reaction similar to the Williamson ether synthesis: reaction of a metal selenide, such as sodium selenide, with two equivalents of ethyl iodide or a similar reagent to supply the ethyl groups: Na2Se + 2 C2H5I → (C2H5)2Se + 2 NaI
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**ZMYND11**
ZMYND11:
Zinc finger MYND domain-containing protein 11 is a protein that in humans is encoded by the ZMYND11 gene.
Function:
The protein encoded by this gene was first identified by its ability to bind the adenovirus E1A protein. The protein localizes to the nucleus. It functions as a transcriptional repressor, and expression of E1A inhibits this repression. Alternatively spliced transcript variants encoding different isoforms have been identified.
Interactions:
ZMYND11 has been shown to interact with: BMPR1A, C11orf30, ETS2, and TAB1.
H3.3K36me3
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**K-index (meteorology)**
K-index (meteorology):
The K-Index or George's Index is a measure of thunderstorm potential in meteorology. According to the National Weather Service, the index harnesses measurements such as "vertical temperature lapse rate, moisture content of the lower atmosphere, and the vertical extent of the moist layer." It was developed by the American meteorologist Joseph J. George, and published in the 1960 book Weather Forecasting for Aeronautics.
Definition:
The index is derived arithmetically by:
K = (T850 − T500) + TD850 − (T700 − TD700)
where:
TD850 = dew point at 850 hPa
T850 = temperature at 850 hPa
TD700 = dew point at 700 hPa
T700 = temperature at 700 hPa
T500 = temperature at 500 hPa
Interpretation:
The K Index is related to the probability of occurrence of a thunderstorm. It was developed with the idea that Potential = 4 x (KI - 15), which gives the following interpretation:
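For illustration, the index and the potential relation above can be computed directly; the snippet below is a simple sketch (the variable names and the clipping of the percentage to the 0–100 range are assumptions for the example, not part of George's original formulation):

```python
def k_index(t850, td850, t700, td700, t500):
    """George's K-index from temperatures and dew points (degrees C)
    at the 850, 700 and 500 hPa levels."""
    return (t850 - t500) + td850 - (t700 - td700)

def thunderstorm_potential(k):
    """Approximate thunderstorm potential (%) using Potential = 4 * (KI - 15)."""
    return max(0, min(100, 4 * (k - 15)))

# Example sounding: k_index(20, 14, 6, 0, -10) -> 38, potential -> 92
```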
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Millosevichite**
Millosevichite:
Millosevichite is a rare sulfate mineral with the chemical formula Al2(SO4)3. Aluminium is often substituted by iron. It forms finely crystalline and often porous masses.
It was first described in 1913 for an occurrence in Grotta dell'Allume, Porto Levante, Vulcano Island, Lipari, Aeolian Islands, Sicily. It was named for Italian mineralogist Federico Millosevich (1875–1942) of the University of Rome. The mineral is mainly known from burning coal dumps, acting as one of the main minerals forming sulfate crusts. It can also be found in volcanic solfatara environments.
It occurs with native sulfur, sal ammoniac, letovicite, alunogen and boussingaultite.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Gudjonsson suggestibility scale**
Gudjonsson suggestibility scale:
The Gudjonsson suggestibility scale (GSS) is a psychological test that measures suggestibility of a subject. It was created in 1983 by Icelandic psychologist Gísli Hannes Guðjónsson. It involves reading a short story to the subject and testing recall. This test has been used in court cases in several jurisdictions but has been the subject of various criticisms.
History:
The Gudjonsson suggestibility scale (GSS) was created in 1983 by Icelandic psychologist Gísli Hannes Guðjónsson. Given his large number of publications on suggestibility, Gísli was often called as an expert witness in court cases where the suggestibility of those involved in the case was crucial to the proceedings. To measure suggestibility, Gísli created a scale that was relatively straightforward and could be administered in a wide variety of settings. He noticed that while there was a significant body of research on the effects of leading questions on suggestibility, less was known about the effects of "specific instruction" and "interpersonal pressure". Previous methods of measuring suggestibility were primarily aimed at "hypnotic phenomena"; however, Gísli's scale was the first created to be used specifically in conjunction with interrogative events.His test relies on two different aspects of interrogative suggestibility: it measures how much an interrogated person yields to leading questions, as well as how much an interrogated person shifts their responses when additional interrogative pressure is applied. The test is designed specifically to measure the effects of suggestive questions and instructions. Although originally developed in English, the scale has been translated into several different languages, including Portuguese, Italian, Dutch, and Polish.
Method The GSS involves reading a short story to the subject, followed by a general recall activity, a test, and a retest. It begins with a short story being read to the subject: Anna Thomson of South Croydon was on holiday in Spain when she was held up outside her hotel and robbed of her handbag, which contained $50 worth of traveler's checks and her passport. She screamed for help and attempted to put up a fight by kicking one of the assailants in the shins. A police car shortly arrived and the woman was taken to the nearest police station, where she was interviewed by Detective Sergeant Delgado. The woman reported that she had been attacked by three men, one of whom she described as oriental looking. The men were said to be slim and in their early twenties. The police officer was touched by the woman’s story and advised her to contact the British Embassy. Six days later, the police recovered the lady’s handbag, but the contents were never found. Three men were subsequently charged, two of whom were convicted and given prison sentences. Only one had had previous convictions for similar offences. The lady returned to Britain with her husband Simon and two friends but remained frightened of being out on her own.
The subject is instructed to listen carefully to the story being read to them because they will have to report what they remember afterward. After the researcher reads the story aloud to the participant, the subject is asked to engage in free recall in which they report everything remembered of what was just read. To make the assessment more difficult, subjects may be asked to report these facts after 50 minutes in addition to immediately following the story. This part of the assessment is scored based on how many facts the subject recalls correctly.The second part of the assessment consists of the actual scale. It consists of twenty questions regarding the short story: fifteen questions being suggestive and five being neutral. The fifteen suggestive questions can be separated into three types of suggestibility: leading questions, affirmative questions, and false alternative questions. Their purpose is to measure how much a participant "yields" to suggestive questions.
Leading questions contained some "salient precedence" and are worded in such a way that they seem plausible and lend themselves to an affirmative answer. A leading question on the GSS would ask, "Did the woman's glasses break in the struggle?" Affirmative questions were those that presented facts that did not appear in the story, but that contain an affirmative response bias. An example of an affirmative question would be "Were the assailants convicted six weeks after their arrest?" False alternative questions also contain information not present in the story; however, these questions focus specifically on objects, people, and events not found in the story. One of these questions would be, "Did the woman hit one of the assailants with her fist or handbag?" The five neutral questions contain a correct answer that is affirmative; the correct answer is yes. After 1987, the GSS was altered so that these five questions were included in the shift score as well. This version is referred to as the Gudjonsson suggestibility scale 2, or GSS2.
The twenty questions are dispersed within the assessment in order to conceal its aim. The person under interrogation is told in a "forceful manner" that there are errors in their story, and they must answer the questions a second time. After answering the initial questionnaire, the subjects are told that they made a certain number of errors and are instructed to go over the assessment again and correct any errors they detect. Any changes made in the suggestive questions are recorded.
Scoring Scoring can be broken down into two main categories: memory recall and suggestibility. Memory recall refers to the number of facts the subject correctly remembered during the free recall. Each fact is worth one point, and the subject can earn a maximum of forty points for this section. The suggestibility section is broken into three subcategories: Yield, Shift, and Total. Yield refers to the number of suggestive questions answered incorrectly, based on the original story. With each question being worth one point, subjects can score up to fifteen points on this section. If the subject engaged in two recall activities, the score for the second trial is not included in the scoring. Shift refers to any notable change in the participant's answers after they were told to go over their original answers and correct their mistakes. Subjects can also score up to fifteen points on this section. The total score refers to the sum of both the Yield and Shift scores. In a sample of 195 people, the Yield 1 mean score was 4.9, with a standard deviation of 3.0. The Yield 2 mean score was 6.9, with a standard deviation of 3.4. The average Shift score was 3.6, with a standard deviation of 2.7. For total suggestibility (Yield + Shift), the average score was 8.5, with a standard deviation of 4.3. The average memory recall score was 19.2, with a standard deviation of 8.0.
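To make the scoring scheme concrete, the following sketch computes Yield, Shift and Total suggestibility from a subject's two passes over the twenty questions; it follows the later (GSS2) convention in which changes on all twenty questions count toward Shift, and the argument names are assumptions for the example:

```python
def gss_scores(first_pass, second_pass, correct_answers, suggestive_items):
    """Yield: suggestive questions answered incorrectly on the first pass.
    Shift: answers that change between the first and second pass.
    Total suggestibility: Yield + Shift."""
    yield_score = sum(
        1 for i in suggestive_items if first_pass[i] != correct_answers[i]
    )
    shift_score = sum(
        1 for a, b in zip(first_pass, second_pass) if a != b
    )
    return yield_score, shift_score, yield_score + shift_score
```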
Measures of reliability and validity Internal consistency scores between Yield 1 and Shift for the GSS range from −.23 to .28. Internal consistency for the fifteen Yield and fifteen Shift questions were reportedly 0.77 and 0.67, respectively.The GSS2 showed higher internal consistency than the GSS1. Test-retest reliability was reportedly 0.55. Overall, Shift scores showed the lowest internal consistency, at 0.11. Other scores were significant. External validity, tested with the Portuguese version of the GSS, showed no correlation between interrogative suggestibility and factors of personality, nor interrogative suggestibility and anxiety. Immediate recall and delayed recall correlated negatively with all suggestibility scores.
Uses in the justice system:
Use in criminal proceedings The GSS is used most often in criminal justice systems. Human memory has been known to be unreliable, as is eyewitness testimony. But Western countries rely strongly on such testimony, and wrongful convictions based on incorrect eyewitness testimony have been publicized, raising this as an issue to the wider public. The GSS allows psychologists to identify individuals who may be susceptible to giving false accounts of events when questioned. The GSS could be useful in a situation where a defendant is being interrogated or cross-examined. There is evidence that GSS scores vary between inmates and the general population. In the general population, high scores on the GSS are associated with an increased likelihood of false confession. Pires (2014) studied 40 Portuguese prisoners and found that inmates had higher suggestibility scores than the general population. This group had the lowest scores in the immediate recall portion of the GSS, suggesting that their higher suggestibility was due to their lower memory capacity. Possible explanations for this may be that the inmates participated in the study voluntarily, and were told that participation would have no negative effect on them. Therefore, even for inmates with antisocial personality disorder, the study took place in a "cooperative atmosphere". Inmates who had a negative attitude toward the test situation or the examiner had decreased vulnerability to suggestion. Additionally, repeat offenders were more resistant to interrogative pressure than those without prior convictions; this may be due to their experience in interrogation settings. Studies have found that GSS scores are higher in people who confess to crimes they did not commit than in people who are more resistant to police questioning. The use of the GSS in court proceedings has been met with mixed responses. In the United States, courts in many states have ruled that the GSS does not meet either the Frye standard or the Daubert standard for the admissibility of expert testimony. In Soares v. Massachusetts (2001), for example, the Massachusetts Appeals Court stated that the case was "devoid of evidence demonstrating either the scientific validity or reliability of the GSS as a measure of susceptibility to suggestion or appropriate applications of the test results." In the same year, the Wisconsin Supreme Court, in Summers v. Wisconsin, affirmed the trial court's decision to exclude the defense's expert testimony on the GSS because it was "vague regarding what information or insights the expert could offer that would assist the jury and the scientific bases of these insights." Despite these decisions, the GSS has been permitted to be used in several court cases. For example, in Oregon v. Romero (2003), the Oregon Court of Appeals held that the testimony of a defense expert about the results of a Gudjonsson suggestibility test, offered in support of the defendant's claim that her confession to police was involuntary, met "the threshold for admissibility" because "It would have been probative, relevant, and helpful to the trier of fact." Experts have linked GSS suggestibility to the voluntary aspect of Miranda waivers during legal proceedings. Despite this, there are very few appellate cases in which the GSS has been presented to a court with any reference to whether a waiver of Miranda rights by a suspect was voluntary. Rogers (2010) specifically examined the GSS in terms of its ability to predict people's ability to understand and agree to Miranda rights.
This study found that suggestibility, as assessed by the GSS, appeared to be unrelated to "Miranda comprehension, reasoning, and detainees' perceptions of police coercion". Defendants with high compliance were found to have significantly lower Miranda comprehension and ability to reason about exercising Miranda rights when compared to counterparts with low compliance.
Use in juvenile delinquency proceedings Scores of adolescents in the justice system differ from those of adults. Richardson (1995) administered the GSS to 65 juvenile offenders. When matched with adult offenders on IQ and memory, juveniles were much more susceptible to giving in to interrogative pressure (Shift), specifically by changing their answers after they were given negative feedback. Their answers to the leading questions, however, were no more affected by suggestibility than those of their adult cohorts. These results were likely not due to memory capacity, as studies have shown that the information children can retrieve during free recall increases with age and is equal to adults by around age 12. Singh (1992) compared non-offending adults and adolescents, and showed that adolescents still had higher suggestibility scores than adults. A study comparing delinquent adolescents to normal adults found the same results. Researchers suggest that police interviewers should not place adolescent suspects and witnesses under excessive pressure by criticizing their answers.
Critiques:
Use with people with intellectual disabilities Use of the GSS with people who have an intellectual disability has been met with criticism. This controversy is partially due to the large memory component of the GSS. Research has shown that the high levels of suggestibility demonstrated by people with intellectual disabilities are related to poor memory for the information presented in the GSS. People with intellectual disabilities have difficulty remembering aspects of the fictional story of the GSS because it is not relevant to them. When those with intellectual disabilities are tested on events that are of personal significance to them, suggestibility decreases significantly. In terms of false confession, which involves a situation in which the defendant was not present, the GSS might have more relevance to confessions than it does to witness testimony. Another context in which the GSS is sometimes used is as part of the assessment of whether people accused of a crime have the capacity to plead to the charge. Despite this perceived usefulness, it is advised that the GSS not be used in court for this purpose, as its results may not accurately represent a defendant's ability to understand the charges against them or to stand trial.
Internal consistency reliability One issue with the GSS is internal consistency reliability, specifically in regards to the Shift portion of the measure. Both Shift-positive and Shift-negative are associated with levels of internal consistency reliability of x2 < .60. Internal Shift scores have been reported as x2 = .60, which is "unacceptably low". These numbers serve as a possible explanation for why studies have not found "theoretically meaningful correlations" between the Shift sub-scale and other external criteria. Researchers argue against the use of a Total suggestibility composite due to evidence that Yield 1 and Shift scores do not significantly correlate with each other. This absence of a correlation is problematic because it "suggests that yielding to a leading question and yielding to negative feedback from an interviewer operate under completely different processes". Other researchers have found that there are two types of suggestibility: direct and indirect. The failure to take these into account may have led to methodological problems with the GSS. Researchers suggest that until these issues have been addressed, the GSS should only be limited to the Yield sub-scale.
Effects of cognitive load on suggestibility Drake et al. (2013) aimed to discover the effects that increasing cognitive load had on suggestibility scores on the GSS, and specifically attempts at faking interrogative suggestibility. The study was conducted using 80 undergraduate students, each of whom were assigned to one of four conditions from a combination of instruction type (genuine or instructed faking) and concurrent task (yes or no). Findings showed that instructed fakers not performing a concurrent task scored significantly higher on yield 1 compared with "genuine interviewees". Instructed fakers who were performing a concurrent task scored significantly lower on yield 1 scores. Genuines (non-fakers) did not exhibit this pattern in response to cognitive load differences. These results suggest that an increase in cognitive load may indicate an attempt at faking on the yield portion of the GSS. Increasing cognitive load may facilitate the detection of deception because it is more difficult to act deceptively under these conditions.
Validity One possible issue with the GSS is its validity – whether it measures genuine "internalization of the suggested materials" or simply "compliance with the interrogator". To test this, Mastroberardino (2013) conducted two experiments. In the first, participants were administered the GSS2 and then immediately performed a "source identification task" for the items on the scale. In the second experiment, half of the participants were administered this identification task immediately while the other half were administered it after 24 hours. Both experiments found a higher proportion of compliant responses. Participants internalized more suggested information after yield 1, and made more compliant responses during the shift portion of the assessment. In the second experiment, participants in the delayed condition internalized less material than those in the immediate condition. These results support the idea that different processes underlie the yield 1 and shift parts of the GSS2: yield 1 may include internalization of suggested materials and compliance, while shift may be due mostly to compliance with the interrogator. The GSS is not able to differentiate between compliance and suggestibility, as the outcome behaviors of these two cognitive processes are the same.
Suggestibility and false memory Leavitt (1997) compared suggestibility (evaluated by the GSS) in participants who recovered memories of sexual assault to that of those without a history of sexual trauma. The results of this study showed that those who had recovered memories had a lower average suggestibility score than those who did not have a history of sexual abuse – 6.7 versus 10.6. These results suggest that suggestibility does not play as large a role in the formation of memories as previously assumed.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**4-Methoxyestriol**
4-Methoxyestriol:
4-Methoxyestriol (4-MeO-E3) is an endogenous estrogen metabolite. It is the 4-methyl ether of 4-hydroxyestriol and a metabolite of estriol and 4-hydroxyestriol. 4-Methoxyestriol has very low affinities for the estrogen receptors. Its relative binding affinities (RBAs) for estrogen receptor alpha (ERα) and estrogen receptor beta (ERβ) are both about 1% of those of estradiol. For comparison, estriol had RBAs of 11% and 35%, respectively.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Phenyl-C61-butyric acid methyl ester**
Phenyl-C61-butyric acid methyl ester:
PCBM is the common abbreviation for the fullerene derivative [6,6]-phenyl-C61-butyric acid methyl ester. It is being investigated in organic solar cells. PCBM is a fullerene derivative of the C60 buckyball that was first synthesized in the 1990s. It is an electron acceptor material and is often used in organic solar cells (plastic solar cells) or flexible electronics in conjunction with electron donor materials such as P3HT or other conductive polymers. It is a more practical choice for an electron acceptor than unmodified fullerenes because of its solubility in chlorobenzene. This allows for solution-processable donor/acceptor mixes, a necessary property for "printable" solar cells. However, considering the cost of fabricating fullerenes, it is not certain that this derivative can be synthesized on a large scale for commercial applications.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Electrocochleography**
Electrocochleography:
Electrocochleography (abbreviated ECochG or ECOG) is a technique of recording electrical potentials generated in the inner ear and auditory nerve in response to sound stimulation, using an electrode placed in the ear canal or tympanic membrane. The test is performed by an otologist or audiologist with specialized training, and is used for detection of elevated inner ear pressure (endolymphatic hydrops) or for the testing and monitoring of inner ear and auditory nerve function during surgery.
Clinical applications:
The most common clinical applications of electrocochleography include: Objective identification and monitoring of Ménière's disease and endolymphatic hydrops (EH) Intraoperative monitoring of auditory system function during surgery on the brainstem or cerebellum Enhancement of Wave I of the auditory brainstem response, particularly in patients who are hard of hearing Diagnosis of auditory neuropathy
Cochlear physiology:
The basilar membrane and the hair cells of the cochlea function as a sharply tuned frequency analyzer. Sound is transmitted to the inner ear via vibration of the tympanic membrane, leading to movement of the middle ear bones (malleus, incus, and stapes). Movement of the stapes on the oval window generates a pressure wave in the perilymph within the cochlea, causing the basilar membrane to vibrate. Sounds of different frequencies vibrate different parts of the basilar membrane, and the point of maximal vibration amplitude depends on the sound frequency.As the basilar membrane vibrates, the hair cells attached to this membrane are rhythmically pushed up against the tectorial membrane, bending the hair cell stereocilia. This opens mechanically gated ion channels on the hair cell, allowing influx of potassium (K+) and calcium (Ca2+) ions. The flow of ions generates an AC current through the hair cell surface, at the same frequency as the acoustic stimulus. This measurable AC voltage is called the cochlear microphonic (CM), which mimics the stimulus. The hair cells function as a transducer, converting the mechanical movement of the basilar membrane into electrical voltage, in a process requiring ATP from the stria vascularis as an energy source.
The depolarized hair cell releases neurotransmitters across a synapse to primary auditory neurons of the spiral ganglion. Upon reaching receptors on the postsynaptic spiral ganglion neurons, the neurotransmitters induce a postsynaptic potential or generator potential in the neuronal projections. When a certain threshold potential is reached, the spiral ganglion neuron fires an action potential, which enters the auditory processing pathway of the brain.
Cochlear potentials The resting endolymphatic potential of a normal cochlea is +80 mV. There are at least 3 other potentials generated upon cochlear stimulation: Cochlear microphonic (CM) Summating potential (SP) Action potential (AP) As described above, the cochlear microphonic (CM) is an alternating current (AC) voltage that mirrors the waveform of the acoustic stimulus. It is dominated by the outer hair cells of the organ of Corti. The magnitude of the recording is dependent on the proximity of the recording electrodes to the hair cells. The CM is proportional to the displacement of the basilar membrane. A fourth potential, the auditory nerve neurophonic, is sometimes dissociated from the CM. The neurophonic represents the neural part (auditory nerve spikes) phase-locked to the stimulus and is similar to the frequency following response. The summating potential (SP), first described by Tasaki et al. in 1954, represents the direct current (DC) response of the hair cells as they move in conjunction with the basilar membrane, as well as the DC response from dendritic and axonal potentials of the auditory nerve. The SP is the stimulus-related potential of the cochlea. Although historically it has been the least studied, renewed interest has surfaced due to changes in the SP reported in cases of endolymphatic hydrops or Ménière's disease.
The auditory nerve action potential, also called the compound action potential (CAP), is the most widely studied component in ECochG. The AP represents the summed response of the synchronous firing of the nerve fibers. It also appears as an AC voltage. The first and largest wave (N1) is identical to wave I of auditory brainstem response (ABR). Following this is N2, which is identical to wave II of the ABR. The magnitude of the action potential reflects the number of fibers that are firing. The latency of the AP is measured as the time between the onset and the peak of the N1 wave.
Procedure and recording parameters:
ECochG can be performed with either invasive or non-invasive electrodes. Invasive electrodes, such as transtympanic (TT) needles, give clearer, more robust electrical responses (with larger amplitudes) since the electrodes are very close to the voltage generators. The needle is placed on the promontory wall of the middle ear and the round window. Non-invasive, or extratympanic (ET), electrodes have the advantage of not causing pain or discomfort to the patient. Unlike with invasive electrodes, there is no need for sedation, anesthesia, or medical supervision. The responses, however, are smaller in magnitude.
Auditory stimuli in the form of broadband clicks 100 microseconds in duration are used. The stimulus polarity can be rarefaction polarity, condensation polarity, or alternating polarity. Signals are recorded from a primary recording (non-inverted) electrode located in the ear canal, tympanic membrane, or promontory (depending on type of electrode used). Reference (inverting) electrodes can be placed on the contralateral earlobe, mastoid, or ear canal.
Procedure and recording parameters:
The signal is processed, including signal amplification (by as much as a factor of 100,000 for extratympanic electrode recordings), noise filtration, and signal averaging. A band-pass filter from 10 Hz to 1.5 kHz is often used.
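As a rough illustration of this processing chain (not part of the source), the sketch below applies a 10 Hz to 1.5 kHz band-pass filter and averages repeated stimulus-locked epochs; the sampling rate, gain, and synthetic waveform are invented assumptions chosen only to make the example run.

```python
# Minimal sketch of the processing described above: amplify, band-pass filter
# from 10 Hz to 1.5 kHz, and average repeated stimulus-locked epochs.
# Sampling rate, gain, and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 16_000          # sampling rate in Hz (assumed)
GAIN = 100_000       # amplification factor for extratympanic recordings

def bandpass_10hz_1500hz(x, fs=FS):
    # 4th-order Butterworth band-pass, applied forward and backward (zero phase).
    b, a = butter(4, [10 / (fs / 2), 1500 / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def average_epochs(epochs):
    # Signal averaging: the stimulus-locked response adds coherently while noise tends to cancel.
    return np.mean(epochs, axis=0)

# Synthetic example: 500 stimulus-locked epochs of 20 ms each, buried in noise.
rng = np.random.default_rng(0)
t = np.arange(int(0.020 * FS)) / FS
response = 1e-6 * np.sin(2 * np.pi * 1000 * t) * np.exp(-t / 0.005)  # microvolt-scale response
epochs = [GAIN * (response + 5e-6 * rng.standard_normal(t.size)) for _ in range(500)]
averaged = average_epochs([bandpass_10hz_1500hz(e) for e in epochs])
print("peak of averaged response (arbitrary units):", float(np.max(np.abs(averaged))))
```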
Interpretation of results:
The CM, SP, and AP are all used in the diagnosis of endolymphatic hydrops and Ménière's disease. In particular, an abnormally high SP and a high SP:AP ratio are signs of Ménière's disease. An SP:AP ratio of 0.45 or greater is considered abnormal.
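For illustration only, the ratio check described above reduces to a single division and a comparison against the 0.45 cutoff; the amplitude values in this sketch are invented, not clinical data.

```python
# Minimal sketch: flagging an elevated SP:AP ratio (amplitudes are invented examples).
def sp_ap_ratio(sp_amplitude_uv: float, ap_amplitude_uv: float) -> float:
    return sp_amplitude_uv / ap_amplitude_uv

ratio = sp_ap_ratio(sp_amplitude_uv=0.6, ap_amplitude_uv=1.2)
print(f"SP:AP = {ratio:.2f} ->", "abnormal (>= 0.45)" if ratio >= 0.45 else "within normal limits")
```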
History:
The CM was first discovered in 1930 by Ernest Wever and Charles Bray in cats. Wever and Bray mistakenly concluded that this recording was generated by the auditory nerve. They named the discovery the "Wever-Bray effect". Hallowell Davis and A.J. Derbyshire from Harvard replicated the study and concluded that the waves were in fact of cochlear origin and not from the auditory nerve. Fromm et al. were the first investigators to employ the ECochG technique in humans by inserting a wire electrode through the tympanic membrane and recording the CM from the niche of the round window and cochlear promontory. Their first measurement of the CM in humans was in 1935. They also discovered the N1, N2, and N3 waves following the CM, but it was Tasaki who identified these waves as auditory nerve action potentials.
History:
Fisch and Ruben were the first to record the compound action potentials from both the round window and the eighth cranial nerve (CN VIII) in cats and mice. Ruben was also the first person to use CM and AP clinically.
History:
The summating potential, a stimulus-related hair cell potential, was first described by Tasaki and colleagues in 1954. Ernest J. Moore was the first investigator to record the CM from surface electrodes. In 1971, Moore conducted five experiments in which he recorded CM and AP from 38 human subjects using surface electrodes. The purpose of the experiment was to establish the validity of the responses and to develop an artifact-free earphone system. Unfortunately, the bulk of his work was never published.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Graphene morphology**
Graphene morphology:
A graphene morphology is any of the structures related to, and formed from, single sheets of graphene. 'Graphene' is typically used to refer to the crystalline monolayer of the naturally occurring material graphite. Due to quantum confinement of electrons within the material at these low dimensions, small differences in graphene morphology can greatly impact the physical and chemical properties of these materials. Commonly studied graphene morphologies include the monolayer sheets, bilayer sheets, graphene nanoribbons and other 3D structures formed from stacking of the monolayer sheets.
Monolayer sheets:
In 2013 researchers developed a production unit that produces continuous sheets of high-strength monolayer graphene (HSMG). The process is based on graphene growth on a liquid metal matrix.
Bilayer:
Bilayer graphene displays the anomalous quantum Hall effect, a tunable band gap and potential for excitonic condensation. Bilayer graphene typically can be found either in twisted configurations where the two layers are rotated relative to each other or graphitic Bernal stacked configurations where half the atoms in one layer lie atop half the atoms in the other. Stacking order and orientation govern its optical and electronic properties.
Bilayer:
One synthesis method is chemical vapor deposition, which can produce large bilayer regions that almost exclusively conform to a Bernal stack geometry.
Superlattices:
Periodically stacked graphene and its insulating isomorph provide a fascinating structural element in implementing highly functional superlattices at the atomic scale, which offers possibilities in designing nanoelectronic and photonic devices. Various types of superlattices can be obtained by stacking graphene and its related forms. The energy band in layer-stacked superlattices is more sensitive to the barrier width than that in conventional III–V semiconductor superlattices. When adding more than one atomic layer to the barrier in each period, the coupling of electronic wavefunctions in neighboring potential wells can be significantly reduced, which leads to the degeneration of continuous subbands into quantized energy levels. When varying the well width, the energy levels in the potential wells along the L–M direction behave distinctly from those along the K–H direction.
Superlattices:
Precisely aligned graphene on h-BN always produces a giant superlattice known as a moiré pattern. Moiré patterns are observed, and the sensitivity of moiré interferometry proves that the graphene grains can align precisely with the underlying h-BN lattice within an error of less than 0.05°. The occurrence of the moiré pattern clearly indicates that the graphene locks into h-BN via van der Waals epitaxy, with its interfacial stress greatly released.
Superlattices:
The existence of the giant Moiré pattern in graphene nanoribbon (GNR) embedded in hBN indicates that the graphene was highly crystalline and precisely aligned with the h-BN underneath. It was noticed that the Moiré pattern appeared to be stretched along the GNR, while it appeared relaxed laterally. This trend differs from regular hexagons with a periodicity of ~14 nm, which have always been observed with well-aligned graphene domains on h-BN. This observation gives a strong indication of the in-plane epitaxy between the graphene and the h-BN at the edges of the trench, where the graphene is stretched by tensile strain along the ribbon, due to a lattice mismatch between the graphene and h-BN.
Nanoribbons:
Graphene nanoribbons ("nanostripes" in the "zig-zag" orientation), at low temperatures, show spin-polarized metallic edge currents, which suggest spintronics applications. (In the "armchair" orientation, the edges behave like semiconductors.)
Fiber:
In 2011, researchers reported making fibers using chemical vapor deposition grown graphene films. The method was scalable and controllable, delivering tunable morphology and pore structure by controlling the evaporation of solvents with suitable surface tension. Flexible all-solid-state supercapacitors based on such fibers were demonstrated in 2013.In 2015 intercalating small graphene fragments into the gaps formed by larger, coiled graphene sheets after annealing provided pathways for conduction, while the fragments helped reinforce the fibers. The resulting fibers offered better thermal and electrical conductivity and mechanical strength. Thermal conductivity reached 1290 watts per meter per kelvin, while tensile strength reached 1080 megapascals.In 2016, kilometer-scale continuous graphene fibers with outstanding mechanical properties and excellent electrical conductivity were produced by high-throughput wet-spinning of graphene oxide liquid crystals followed by graphitization through a full-scale synergetic defect-engineering strategy.
3D:
Three dimensional bilayer graphene was reported in 2012 and 2014. In 2013, a three-dimensional honeycomb of hexagonally arranged carbon was termed 3D graphene. Self-supporting 3D graphene was produced that year. Researchers at Stony Brook University have reported a novel radical-initiated crosslinking method to fabricate porous 3D free-standing architectures of graphene and carbon nanotubes using nanomaterials as building blocks without any polymer matrix as support. 3D structures can be fabricated by using either CVD or solution-based methods. A 2016 review summarized the techniques for fabrication of 3D graphene and other related two-dimensional materials. These 3D graphene (all-carbon) scaffolds/foams have potential applications in fields such as energy storage, filtration, thermal management and biomedical devices and implants. In 2016, a box-shaped graphene (BSG) nanostructure resulting from mechanical cleavage of pyrolytic graphite was reported. The discovered nanostructure is a multilayer system of parallel hollow nanochannels, located along the surface, with a quadrangular cross-section. The thickness of the channel walls is approximately 1 nm, and the typical width of the channel facets is about 25 nm. Potential applications include: ultra-sensitive detectors, high-performance catalytic cells, nanochannels for DNA sequencing and manipulation, high-performance heat-sinking surfaces, rechargeable batteries of enhanced performance, nanomechanical resonators, electron multiplication channels in emission nanoelectronic devices, and high-capacity sorbents for safe hydrogen storage.
3D:
In 2017 researchers simulated a graphene gyroid that has five percent of the density of steel, yet is ten times as strong, with an enormous surface-area-to-volume ratio. They compressed heated graphene flakes. They then constructed high-resolution 3D-printed plastic models of various configurations – similar to the gyroids that graphene forms naturally, though thousands of times larger. These shapes were then tested for tensile strength and compression, and compared to the computer simulations. When the graphene was swapped out for polymers or metals, similar gains in strength were seen. A film of graphene soaked in solvent to make it swell and become malleable was overlaid on an underlying substrate "former". The solvent evaporated, leaving behind a layer of graphene that had taken on the shape of the underlying structure. In this way the team was able to produce a range of relatively intricate micro-structured shapes. Features vary from 3.5 to 50 μm. Pure graphene and gold-decorated graphene were each successfully integrated with the substrate. An aerogel made of graphene layers separated by carbon nanotubes was measured at 0.16 milligrams per cubic centimeter. A solution of graphene and carbon nanotubes in a mold is freeze-dried to dehydrate the solution, leaving the aerogel. The material has superior elasticity and absorption. It can recover completely after more than 90% compression, and absorb up to 900 times its weight in oil, at a rate of 68.8 grams per second. At the end of 2017, fabrication of freestanding graphene gyroids with 35 nm and 60 nm unit cells was reported. The gyroids were made via controlled direct chemical vapor deposition, are self-supporting, and can be transferred onto a variety of substrates. Furthermore, they represent the smallest free-standing periodic graphene 3D structures yet produced, with a pore size of tens of nm. Due to their high mechanical strength, good conductivity (sheet resistance: 240 Ω/sq) and huge surface-area-to-volume ratio, the graphene gyroids might find their way to various applications, ranging from batteries and supercapacitors to filtration and optoelectronics.
Pillared:
Pillared graphene is a hybrid carbon structure consisting of an oriented array of carbon nanotubes connected at each end to a graphene sheet. It was first described theoretically in 2008. Pillared graphene has not been synthesized in the laboratory.
Reinforced:
Graphene sheets reinforced with embedded carbon nanotubes ("rebar") are easier to manipulate, while improving the electrical and mechanical qualities of both materials.Functionalized single- or multiwalled carbon nanotubes are spin-coated on copper foils and then heated and cooled, using the nanotubes as the carbon source. Under heating, the functional carbon groups decompose into graphene, while the nanotubes partially split and form in-plane covalent bonds with the graphene, adding strength. π–π stacking domains add more strength. The nanotubes can overlap, making the material a better conductor than standard CVD-grown graphene. The nanotubes effectively bridge the grain boundaries found in conventional graphene. The technique eliminates the traces of substrate on which later-separated sheets were deposited using epitaxy.Stacks of a few layers have been proposed as a cost-effective and physically flexible replacement for indium tin oxide (ITO) used in displays and photovoltaic cells.
Nanocoil:
In 2015 a coiled form of graphene was discovered in graphitic carbon (coal). The spiraling effect is produced by defects in the material's hexagonal grid that cause it to spiral along its edge, mimicking a Riemann surface, with the graphene surface approximately perpendicular to the axis. When voltage is applied to such a coil, current flows around the spiral, producing a magnetic field. The phenomenon applies to spirals with either zigzag or armchair orientations, although with different current distributions. Computer simulations indicated that a conventional spiral inductor of 205 microns in diameter could be matched by a nanocoil just 70 nanometers wide, with a field strength reaching as much as 1 tesla, about the same as the coils found in typical loudspeakers and about the same field strength as some MRI machines. They found the magnetic field would be strongest in the hollow, nanometer-wide cavity at the spiral's center. A solenoid made with such a coil behaves as a quantum conductor whose current distribution between the core and exterior varies with applied voltage, resulting in nonlinear inductance.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**H.241**
H.241:
H.241 is a Recommendation from the ITU Telecommunication Standardization Sector (ITU-T) that defines extended video procedures and control signals for H.300-series terminals, including H.323 and H.320.
This Recommendation defines the use of advanced video codecs, including H.264: Command and Indication; Capability exchange signaling; Transport (which requires support of the single NAL unit mode, packetization mode 0, of RFC 6184); Reduced-Complexity Decoding Operation (RCDO) for H.264 baseline profile bit streams; and Negotiation of video submodes.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Transformers: Super-God Masterforce**
Transformers: Super-God Masterforce:
Transformers: Super-God Masterforce (トランスフォーマー 超神マスターフォース, Toransufōmā: Chōjin Masutāfōsu) is a Japanese Transformers line of toys and anime series that ran from April 12, 1988, to March 7, 1989, for 42 episodes. On July 3, 2006, the series was released on DVD in the UK, and it was aired on AnimeCentral in the UK a few years later. In 2008, Madman Entertainment released the series on DVD in Australia in Region 4, PAL format. On May 1, 2012, the series was released on DVD in the US. It serves as the second sequel series to the Japanese dub of the original The Transformers cartoon series as part of the Generation 1 franchise, preceded by Transformers: The Headmasters and followed by Transformers: Victory.
Story:
The core concept of Masterforce begins with the human beings themselves rising up to fight and defend their home, rather than the alien Transformers doing it for them. Going hand-in-hand with this idea, the Japanese incarnations of the Autobot Pretenders actually shrink down to pass for normal human beings, whose emotions and strengths they value and wish to safeguard. The Decepticon Pretenders tend to remain large monsters, unless they battle in their robot forms. Later on, children and adults would be recruited to become Headmaster Juniors for both the Autobots and Decepticons, but as the story progressed it focused more on the Godmasters (released as Powermasters in the West), who became the more powerful Transformers on the show. The Godmasters themselves are human beings with the ability to merge with their Transtectors (robot bodies). Most of the Godmasters would be adults, with the exception of Clouder, who is about the same age as the Junior Headmasters. Other characters would later appear, including Black Zarak, who would later merge with the Decepticon leader Devil Z for the final battle, and, for the Autobots, Grand Maximus, who has a Pretender guise and is Fortress Maximus' younger brother. Also, the Firecons make a brief appearance in one episode, and a robot who transforms into a gun (similar to G1 Megatron) was given to Cancer of the Headmaster Junior Decepticons as a gift from Lady Mega; his name was Browning (or BM in the dub). The Decepticons also had the Targetmaster Seacons under their command, but like the Pretenders, they were sentient robots and didn't require humans to operate them. The Autobots would also gain the help of another sentient robot called Sixknight (or, as he is known outside Japan, Quickswitch), who appeared on Earth as a travelling warrior who wanted to challenge Ginrai (the Godmaster of the body of Optimus Prime) to a battle, but soon decided for himself to fight for the Autobot cause. The story basically tells of the efforts of the heroic Autobot forces as they protect the Earth from the Decepticons. Only this time round, human characters played a more important role than in other Transformers series.
Development:
With the conclusion of the US Transformers cartoon series in 1987, Japan produced their first exclusive anime series, Transformers: The Headmasters, to replace the fourth and final US season and to carry out the story concepts begun in The Transformers: The Movie and carried on through the third season, using the existing cast and adding the eponymous Headmasters into the mix. With the completion of the series, the evil Decepticons had finally been forced off Earth, and the stage was set for the beginning of Super-God Masterforce.
Development:
Although nominally occurring in the same continuity as the previous Transformers series, there was a very obvious effort on head writer Masumi Kaneda's part to make Masterforce a "fresh start" as a mecha story, introducing an entirely new cast of characters from scratch, rather than using any of the previous ones. To this end, although the toys are mostly the same in both Japan and the West (barring some different color schemes), the characters which they represent are vastly different; most prominently, Powermaster Optimus Prime's counterpart is Ginrai, a human trucker who combines with a transtector (a non-sentient Transformer body, a concept lifted from Headmasters) to become a Transformer himself, and the same applies to the other Powermasters' counterparts, the Godmasters. The Pretender figures released during that year were the same, but in Masterforce the Autobot Pretenders disguise themselves as regular-sized humans that can wear normal clothing, instead of being giant humans wearing armor as they were in the contemporary Marvel comics.
Development:
The attempt to start things afresh with Masterforce does give rise to some continuity quirks, however, such as Earth technology being portrayed as contemporary rather than futuristic as in 2010 and Headmasters, and some characters being totally unaware of what Transformers are, even though they have been public figures for over two decades. Similarly, the show never supplied the viewer with the full backstory: within the main 42 episodes of the series, important aspects, such as what the true villain Devil Z is or who Black Zarak is, are never explained. Even the timeframe of the show was never revealed, with the series taking place an indeterminate amount of time after Headmasters. Most of these facts would be revealed later in made-for-video clip shows and other media, including a Special Secrets episode where both Shuta and Grand Maximus explain and reveal several pieces of trivia about the show.
Adaptations:
The series was dubbed into English in Hong Kong by the dubbing company Omni Productions for broadcast on the Malaysian TV channel RTM1, along with Headmasters and the following series, Victory. These dubs, however, are more famous for their time on the Singapore satellite channel Star TV, where they were grouped under the umbrella title of "Transformers Takara" and all given Victory's opening sequence. Later acquired by the US Transformers animated series creator Sunbow Productions, they were given English-language closing credits (even including the English Transformers theme), but no official release of them has ever been carried in the US because of their poor quality. Performed by a small group (fewer than half a dozen actors), the dubs feature many incorrect names and nonsensical translations - in the case of Masterforce especially, all the English-equivalent names are used for the characters, so throughout the series the clearly human Ginrai is referred to as "Optimus Prime", and the little blonde girl called Minerva is referred to by the inappropriate name "Nightbeat".
Adaptations:
In 2006, the complete series was released in Region 2 with Japanese audio and subtitles (although, like the Shout! Factory release, it does not contain the English dub). For the Shout! Factory release, the Cybertronians are still referred to as Autobots and the Destrons are still known as the Decepticons, and many of the characters are given the names of the American releases of their toys.
Adaptations:
A twelve-chapter manga adaptation of this anime was written by Masumi Kaneda and illustrated by Ban Magami.
Theme songs:
Openings
"Super-God Masterforce Theme" (超神マスターフォースのテーマ, Chōjin Masutāfōsu no Tēma). April 12, 1988 - March 7, 1989. Lyricist: Machiko Ryu / Composer: Masahiro Kawasaki / Arranger: Masahiro Kawasaki / String Arranger: Tomoyuki Asakawa / Singers: Toshiya Igarashi. Episodes: 1–47
Endings
"Let's Go! Transformers" (燃えろ!トランスフォーマー, Moero! Toransufōmā). April 12, 1988 - March 7, 1989. Lyricist: Machiko Ryu / Composer: Masahiro Kawasaki / Arranger: Masahiro Kawasaki / String Arranger: Tomoyuki Asakawa / Singers: Toshiya Igarashi, Mori no Ki Jido Gassho-dan. Episodes: 1–47
Insert Songs
"Miracle Transformers" (奇跡のトランスフォーマー, Kiseki no Toransufōmā). September 13, 1988, November 1, 1988, November 15, 1988, December 6, 1988. Lyricist: Machiko Ryu / Composer: Masahiro Kawasaki / Arranger: Masahiro Kawasaki / Singers: Toshiya Igarashi. Episodes: 20, 27, 29, 32
"Advance! Super-God Masterforce" (進め!超神マスターフォース, Susume! Chōjin Masutāfōsu). September 27, 1988, November 8, 1988. Lyricist: Machiko Ryu / Composer: Masahiro Kawasaki / Arranger: Masahiro Kawasaki / Singers: Toshiya Igarashi. Episodes: 22, 28
"WE BELIEVE TOMORROW". December 13, 1988, February 28, 1989. Lyricist: Machiko Ryu / Composer: Komune Negishi / Arranger: Kimio Nomura / Singers: Toshiya Igarashi. Episodes: 33, 42
"Super Ginrai Theme" (スーパージンライのテーマ, Sūpā Jinrai no Tēma). Lyricist: Machiko Ryu / Composer: Komune Negishi / Arranger: Katsunori Ishida / Singers: Toshiya Igarashi. Episodes: 34, 39
"Transform! Godmaster" (変身!ゴッドマスター, Henshin! Goddomasutā). Lyricist: Machiko Ryu / Composer: Masahiro Kawasaki / Arranger: Kimio Nomura / Singers: Toshiya Igarashi. Episodes: None
"Small Warrior: Headmaster Jr Theme" (小さな勇士~ヘッドマスターJrのテーマ~, Chīsana Yūshi: Heddomasutā Junia no Tēma). Lyricist: Kayoko Fuyusha / Composer: Komune Negishi / Arranger: Katsunori Ishida / Singers: Yumi Toma, Hiroko Emori, Yuriko Yamamoto. Episodes: None
"See See Seacons" (See See シーコンズ, Sī Sī Shīkonzu). Lyricist: Kayoko Fuyusha / Composer: Komune Negishi / Arranger: Katsunori Ishida / Singers: Masato Hirano. Episodes: None
"Ruler of the Universe: Devil Z" (宇宙の支配者・デビルZ, Uchū no Shihaisha: Debiru Zetto). Lyricist: Machiko Ryu / Composer: Komune Negishi / Arranger: Katsunori Ishida / Singers: Toshiya Igarashi. Episodes: None
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Completeness (cryptography)**
Completeness (cryptography):
In cryptography, a boolean function is said to be complete if the value of each output bit depends on all input bits. This is a desirable property to have in an encryption cipher, so that if one bit of the input (plaintext) is changed, every bit of the output (ciphertext) has an average of 50% probability of changing. The easiest way to show why this is good is the following: consider that if we changed our 8-byte plaintext's last byte, it would only have any effect on the 8th byte of the ciphertext. This would mean that if the attacker guessed 256 different plaintext-ciphertext pairs, he would always know the last byte of every 8-byte sequence we send (effectively 12.5% of all our data). Finding out 256 plaintext-ciphertext pairs is not hard at all in the internet world, given that standard protocols are used, and standard protocols have standard headers and commands (e.g. "get", "put", "mail from:", etc.) which the attacker can safely guess. On the other hand, if our cipher has this property (and is generally secure in other ways, too), the attacker would need to collect 2^64 (~10^20) plaintext-ciphertext pairs to crack the cipher in this way.
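A minimal sketch of how completeness can be probed empirically is shown below; the 16-bit toy_cipher and the sampling-based is_complete check are hypothetical constructions for illustration, not a real cipher or a standard test suite.

```python
# Minimal sketch (not from the source): for every input bit i and output bit j,
# look for at least one input whose output bit j flips when input bit i is flipped.
# If every (i, j) pair is observed, the function is complete as far as the samples show.
import random

def toy_cipher(x: int) -> int:
    """Hypothetical 16-bit mixing function used only for illustration."""
    x &= 0xFFFF
    for _ in range(2):
        x = (x * 0x9E37 + 0x1234) & 0xFFFF    # multiply-add mixing
        x ^= x >> 7                           # shift-xor diffusion
        x = ((x << 5) | (x >> 11)) & 0xFFFF   # 16-bit left rotation by 5
    return x

def is_complete(f, bits=16, samples=2000):
    # depends[i][j] becomes True once flipping input bit i is seen to change output bit j.
    depends = [[False] * bits for _ in range(bits)]
    for _ in range(samples):
        x = random.getrandbits(bits)
        y = f(x)
        for i in range(bits):
            diff = y ^ f(x ^ (1 << i))        # output bits affected by flipping input bit i
            for j in range(bits):
                if diff & (1 << j):
                    depends[i][j] = True
    return all(all(row) for row in depends)

print("complete (empirically):", is_complete(toy_cipher))
```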
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**FASTOPEN**
FASTOPEN:
In computing, FASTOPEN is a DOS terminate-and-stay-resident command, introduced in MS-DOS version 3.3, that provides accelerated access to frequently-used files and directories. The command is also available in SISNE plus.
Overview:
The command works with hard disks, but not with diskettes (probably for security when swapping) or with network drives (probably because such drives do not offer block-level access, only file-level access).
It is possible to specify for which drives FASTOPEN should operate, how many files and directories should be cached on each (10 by default, up to 999 total), how many regions for each drive should be cached and whether the cache should be located in conventional or expanded memory.
If a disk defragmenter tool is used, or if Windows Explorer is to move files or directories, while FASTOPEN is installed, it is necessary to reboot the computer afterwards, because FASTOPEN would remember the old position of files and directories, causing MS-DOS to display garbage if e.g. "DIR" was performed.
DR DOS 6.0 includes an implementation of the FASTOPEN command. FASTOPEN is also part of the Windows XP MS-DOS subsystem to maintain MS-DOS and MS OS/2 version 1.x compatibility. It is not available on Windows XP 64-Bit Edition.The "fastopen" name has since been reused for various other "accelerating" software products.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Calculation of glass properties**
Calculation of glass properties:
The calculation of glass properties (glass modeling) is used to predict glass properties of interest or glass behavior under certain conditions (e.g., during production) without experimental investigation, based on past data and experience, with the intention to save time, material, financial, and environmental resources, or to gain scientific insight. It was first practised at the end of the 19th century by A. Winkelmann and O. Schott. The combination of several glass models together with other relevant functions can be used for optimization and six sigma procedures. In the form of statistical analysis glass modeling can aid with accreditation of new data, experimental procedures, and measurement institutions (glass laboratories).
History:
Historically, the calculation of glass properties is directly related to the founding of glass science. At the end of the 19th century the physicist Ernst Abbe developed equations that allow calculating the design of optimized optical microscopes in Jena, Germany, stimulated by co-operation with the optical workshop of Carl Zeiss. Before Ernst Abbe's time the building of microscopes was mainly a work of art and experienced craftsmanship, resulting in very expensive optical microscopes with variable quality. Now Ernst Abbe knew exactly how to construct an excellent microscope, but unfortunately, the required lenses and prisms with specific ratios of refractive index and dispersion did not exist. Ernst Abbe was not able to find answers to his needs from glass artists and engineers; glass making was not based on science at this time. In 1879 the young glass engineer Otto Schott sent Abbe glass samples with a special composition (lithium silicate glass) that he had prepared himself and that he hoped would show special optical properties. Following measurements by Ernst Abbe, Schott's glass samples did not have the desired properties, and they were also not as homogeneous as desired. Nevertheless, Ernst Abbe invited Otto Schott to work on the problem further and to evaluate all possible glass components systematically. Finally, Schott succeeded in producing homogeneous glass samples, and he invented borosilicate glass with the optical properties Abbe needed. These inventions gave rise to the well-known companies Zeiss and Schott Glass (see also Timeline of microscope technology). Systematic glass research was born. In 1908, Eugene Sullivan founded glass research also in the United States (Corning, New York). At the beginning of glass research it was most important to know the relation between the glass composition and its properties. For this purpose Otto Schott introduced the additivity principle in several publications for calculation of glass properties. This principle implies that the relation between the glass composition and a specific property is linear to all glass component concentrations, assuming an ideal mixture, with Ci and bi representing specific glass component concentrations and related coefficients respectively in the equation below. The additivity principle is a simplification and only valid within narrow composition ranges as seen in the displayed diagrams for the refractive index and the viscosity. Nevertheless, the application of the additivity principle led the way to many of Schott's inventions, including optical glasses, glasses with low thermal expansion for cooking and laboratory ware (Duran), and glasses with reduced freezing point depression for mercury thermometers. Subsequently, English and Gehlhoff et al. published similar additive glass property calculation models. Schott's additivity principle is still widely in use today in glass research and technology.
Global models:
Schott and many scientists and engineers afterwards applied the additivity principle to experimental data measured in their own laboratory within sufficiently narrow composition ranges (local glass models). This is most convenient because disagreements between laboratories and non-linear glass component interactions do not need to be considered. In the course of several decades of systematic glass research thousands of glass compositions were studied, resulting in millions of published glass properties, collected in glass databases. This huge pool of experimental data was not investigated as a whole, until Bottinga, Kucuk, Priven, Choudhary, Mazurin, and Fluegel published their global glass models, using various approaches. In contrast to the models by Schott the global models consider many independent data sources, making the model estimates more reliable. In addition, global models can reveal and quantify non-additive influences of certain glass component combinations on the properties, such as the mixed-alkali effect as seen in the adjacent diagram, or the boron anomaly. Global models also reflect interesting developments of glass property measurement accuracy, e.g., a decreasing accuracy of experimental data in modern scientific literature for some glass properties, shown in the diagram. They can be used for accreditation of new data, experimental procedures, and measurement institutions (glass laboratories). In the following sections (except melting enthalpy) empirical modeling techniques are presented, which seem to be a successful way for handling huge amounts of experimental data. The resulting models are applied in contemporary engineering and research for the calculation of glass properties.
Global models:
Non-empirical (deductive) glass models exist. They are often not created to obtain reliable glass property predictions in the first place (except melting enthalpy), but to establish relations among several properties (e.g. atomic radius, atomic mass, chemical bond strength and angles, chemical valency, heat capacity) to gain scientific insight. In future, the investigation of property relations in deductive models may ultimately lead to reliable predictions for all desired properties, provided the property relations are well understood and all required experimental data are available.
Methods:
Glass properties and glass behavior during production can be calculated through statistical analysis of glass databases such as GE-SYSTEM SciGlass and Interglad, sometimes combined with the finite element method. For estimating the melting enthalpy thermodynamic databases are used.
Methods:
Linear regression If the desired glass property is not related to crystallization (e.g., liquidus temperature) or phase separation, linear regression can be applied using common polynomial functions up to the third degree. Below is an example equation of the second degree. The C-values are the glass component concentrations like Na2O or CaO in percent or other fractions, the b-values are coefficients, and n is the total number of glass components. The glass main component silica (SiO2) is excluded in the equation below because of over-parametrization due to the constraint that all components sum up to 100%. Many terms in the equation below can be neglected based on correlation and significance analysis. Systematic errors such as seen in the picture are quantified by dummy variables. Further details and examples are available in an online tutorial by Fluegel.
Methods:
Glass Property $= b_0 + \sum_{i=1}^{n} \left( b_i C_i + \sum_{k=i}^{n} b_{ik} C_i C_k \right)$
Non-linear regression The liquidus temperature has been modeled by non-linear regression using neural networks and disconnected peak functions. The disconnected peak functions approach is based on the observation that within one primary crystalline phase field linear regression can be applied and at eutectic points sudden changes occur.
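As an illustrative sketch (not from the source), the second-degree additive model above can be fitted to tabulated composition and property data by ordinary least squares; the component set (Na2O and CaO, with SiO2 as the balance) and all numerical values below are invented placeholders.

```python
# Minimal sketch: fitting the second-degree additive model
# property = b0 + sum_i b_i*C_i + sum_{k>=i} b_ik*C_i*C_k
# to hypothetical composition/property data. SiO2 is excluded as the balance
# component, as described in the text above.
import numpy as np
from itertools import combinations_with_replacement

# Hypothetical data: concentrations of Na2O and CaO in mol% (SiO2 is the rest)
# and a measured property (e.g. refractive index); all values are invented.
C = np.array([
    [10.0,  5.0],
    [15.0,  5.0],
    [10.0, 10.0],
    [20.0,  8.0],
    [12.0, 12.0],
    [18.0,  6.0],
])
prop = np.array([1.510, 1.515, 1.518, 1.522, 1.525, 1.519])

# Design matrix: intercept, linear terms, and pairwise products C_i*C_k (k >= i).
n = C.shape[1]
columns = [np.ones(len(C))]
columns += [C[:, i] for i in range(n)]
columns += [C[:, i] * C[:, k] for i, k in combinations_with_replacement(range(n), 2)]
X = np.column_stack(columns)

# Ordinary least squares for the coefficients b.
b, *_ = np.linalg.lstsq(X, prop, rcond=None)
print("coefficients:", b)
print("predicted properties:", X @ b)
```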
Methods:
Glass melting enthalpy The glass melting enthalpy reflects the amount of energy needed to convert the mix of raw materials (batch) to a melt glass. It depends on the batch and glass compositions, on the efficiency of the furnace and heat regeneration systems, the average residence time of the glass in the furnace, and many other factors. A pioneering article about the subject was written by Carl Kröger in 1953.
Methods:
Finite element method For modeling of the glass flow in a glass melting furnace the finite element method is applied commercially, based on data or models for viscosity, density, thermal conductivity, heat capacity, absorption spectra, and other relevant properties of the glass melt. The finite element method may also be applied to glass forming processes.
Optimization It is often required to optimize several glass properties simultaneously, including production costs.
Methods:
This can be performed, e.g., by simplex search, or in a spreadsheet as follows:
1. Listing of the desired properties;
2. Entering of models for the reliable calculation of properties based on the glass composition, including a formula for estimating the production costs;
3. Calculation of the squares of the differences (errors) between desired and calculated properties;
4. Reduction of the sum of square errors using the Solver option in Microsoft Excel with the glass components as variables.
Other software (e.g. Microcal Origin) can also be used to perform these optimizations. It is possible to weight the desired properties differently. Basic information about the principle can be found in an article by Huff et al. The combination of several glass models together with further relevant technological and financial functions can be used in six sigma optimization.
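Outside a spreadsheet, the same weighted sum-of-squared-errors minimization can be sketched with a general-purpose numerical optimizer; the linear property models, target values, weights, and composition bounds below are invented assumptions used only to demonstrate the procedure, not actual glass models.

```python
# Minimal sketch: simultaneous optimization of several glass properties by
# minimizing a weighted sum of squared errors over the glass composition.
# The "models", targets, weights and bounds are hypothetical placeholders.
from scipy.optimize import minimize

# Composition variables: concentrations of Na2O and CaO in mol% (SiO2 is the balance).
targets = {"refractive_index": 1.520, "density_g_cm3": 2.50, "cost_per_kg": 0.80}
weights = {"refractive_index": 1.0, "density_g_cm3": 1.0, "cost_per_kg": 0.2}

def models(x):
    na2o, cao = x
    # Hypothetical additive property models (all coefficients are invented).
    return {
        "refractive_index": 1.458 + 0.0020 * na2o + 0.0030 * cao,
        "density_g_cm3":    2.20  + 0.016  * na2o + 0.020  * cao,
        "cost_per_kg":      0.50  + 0.010  * na2o + 0.025  * cao,
    }

def objective(x):
    # Weighted sum of squared differences between desired and calculated properties.
    p = models(x)
    return sum(weights[k] * (p[k] - targets[k]) ** 2 for k in targets)

result = minimize(objective, x0=[12.0, 8.0], bounds=[(0, 25), (0, 20)])
print("optimized composition (Na2O, CaO in mol%):", result.x)
print("predicted properties:", models(result.x))
```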
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**CSI 300 Index**
CSI 300 Index:
The CSI 300 (Chinese: 沪深300) is a capitalization-weighted stock market index designed to replicate the performance of the top 300 stocks traded on the Shanghai Stock Exchange and the Shenzhen Stock Exchange. It has two sub-indexes: the CSI 100 Index and the CSI 200 Index. Over the years, it has been deemed the Chinese counterpart of the S&P 500 index and a better gauge of the Chinese stock market than the more traditional SSE Composite Index.
CSI 300 Index:
The index is compiled by the China Securities Index Company, Ltd. It has been calculated since April 8, 2005. Its value is normalized relative to a base of 1000 on December 31, 2004. It is considered to be a blue chip index for Mainland China stock exchanges.
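As a rough sketch of how a capitalization-weighted index normalized to a base value works (the constituent prices and share counts below are invented placeholders, not CSI 300 data), the index level is the total market value of the constituents divided by a divisor fixed so that the index equals 1000 on the base date:

```python
# Minimal sketch of a capitalization-weighted index normalized to a base value.
# Prices and share counts are invented placeholders, not actual CSI 300 data.

def market_cap(prices, shares):
    # Total market capitalization of all constituents.
    return sum(p * s for p, s in zip(prices, shares))

# Base date (e.g. December 31, 2004): the index is defined to equal 1000.
base_prices = [10.0, 25.0, 8.0]
base_shares = [1_000_000, 400_000, 2_500_000]
BASE_VALUE = 1000.0
divisor = market_cap(base_prices, base_shares) / BASE_VALUE

# A later date: the index level scales with total market capitalization.
today_prices = [12.5, 23.0, 9.1]
index_level = market_cap(today_prices, base_shares) / divisor
print(round(index_level, 2))
```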
Annual Returns:
The following table shows the annual development of the CSI 300 Index since 2005.
Sub-Indices:
Moreover, there are the following ten sub-indices, which reflect specific sectors: CSI 300 Energy Index, CSI 300 Materials Index, CSI 300 Industrials Index, CSI 300 Consumer Discretionary Index, CSI 300 Consumer Staples Index, CSI 300 Health Care Index, CSI 300 Financial Index, CSI 300 Information Technology Index, CSI 300 Telecommunications Index, and CSI 300 Utilities Index. The CSI 300 Index is also split into the CSI 100 Index and the CSI 200 Index, covering the top 100 companies and the 101st to 300th companies respectively.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Degenerative disease**
Degenerative disease:
Degenerative disease is the result of a continuous process based on degenerative cell changes, affecting tissues or organs, which will increasingly deteriorate over time.In neurodegenerative diseases, cells of the central nervous system stop working or die via neurodegeneration. An example of this is Alzheimer's disease. The other two common groups of degenerative diseases are those that affect circulatory system (e.g. coronary artery disease) and neoplastic diseases (e.g. cancers).Many degenerative diseases exist and some are related to aging. Normal bodily wear or lifestyle choices (such as exercise or eating habits) may worsen degenerative diseases, but this depends on the disease. Sometimes the main or partial cause behind such diseases is genetic. Thus some are clearly hereditary like Huntington's disease. Sometimes the cause is viruses, poisons or other chemicals. The cause may also be unknown.Some degenerative diseases can be cured. In those that can not, it may be possible to alleviate the symptoms.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Gaia philosophy**
Gaia philosophy:
Gaia philosophy (named after Gaia, Greek goddess of the Earth) is a broadly inclusive term for related concepts about humanity as an effect of the life of this planet.
Gaia philosophy:
The Gaia hypothesis holds that all organisms on a life-giving planet regulate the biosphere in such a way as to promote its habitability. Gaia concepts draw a connection between the survivability of a species (hence its evolutionary course) and its usefulness to the survival of other species. While there were a number of precursors to the Gaia hypothesis, the first scientific form of this idea was proposed as the Gaia hypothesis by James Lovelock, a UK chemist, in 1970. The Gaia hypothesis deals with the concept of biological homeostasis, and claims the resident life forms of a host planet coupled with their environment have acted and act like a single, self-regulating system. This system includes the near-surface rocks, the soil, and the atmosphere. Today, many scientists consider such ideas to be unsupported by, or at odds with, the available evidence (see Gaia hypothesis criticism). These theories are, however, significant in green politics.
Predecessors to the Gaia theory:
There are some mystical, scientific and religious predecessors to the Gaia philosophy, which had a Gaia-like conceptual basis. Many religious mythologies had a view of Earth as being a whole that is greater than the sum of its parts (e.g. some Native American religions and various forms of shamanism).
Predecessors to the Gaia theory:
Isaac Newton wrote of the earth, "Thus this Earth resembles a great animal or rather inanimate vegetable, draws in æthereall breath for its dayly refreshment & vitall ferment & transpires again with gross exhalations, And according to the condition of all other things living ought to have its times of beginning youth old age & perishing."Pierre Teilhard de Chardin, a paleontologist and geologist, believed that evolution fractally unfolded from cell to organism to planet to solar system and ultimately the whole universe, as we humans see it from our limited perspective. Teilhard later influenced Thomas Berry and many Catholic humanist thinkers of the 20th century.
Predecessors to the Gaia theory:
Lewis Thomas believed that Earth should be viewed as a single cell; he derived this view from Johannes Kepler's view of Earth as a single round organism.Buckminster Fuller is generally credited with making the idea respectable in Western scientific circles in the 20th century. Building to some degree on his observations and artifacts, e.g. the Dymaxion map of the Earth he created, others began to ask if there was a way to make the Gaia theory scientifically sound.
Predecessors to the Gaia theory:
In 1931, L.G.M. Baas Becking delivered an inaugural lecture about Gaia in the sense of life and earth. In 1970, Oberon Zell-Ravenheart, in an article in Green Egg Magazine, independently articulated the Gaia Thesis. Many believe that these ideas cannot be considered scientific hypotheses; by definition a scientific hypothesis must make testable predictions. As the above claims are not currently testable, they are outside the bounds of current science. This does not mean that these ideas are not theoretically testable. Since one can postulate tests that could be applied, given enough time and space, these ideas should be seen as scientific hypotheses.
Predecessors to the Gaia theory:
These are conjectures and perhaps can only be considered as social and maybe political philosophy; they may have implications for theology, or thealogy as Zell-Ravenheart and Isaac Bonewits put it.
Range of views:
According to James Kirchner there is a spectrum of Gaia hypotheses, ranging from the undeniable to radical. At one end is the undeniable statement that the organisms on the Earth have radically altered its composition. A stronger position is that the Earth's biosphere effectively acts as if it is a self-organizing system which works in such a way as to keep its systems in some kind of equilibrium that is conducive to life. Today many scientists consider that such a view (and any stronger views) are unlikely to be correct. An even stronger claim is that all lifeforms are part of a single planetary being, called Gaia. In this view, the atmosphere, the seas, the terrestrial crust would be the result of interventions carried out by Gaia, through the coevolving diversity of living organisms.
Range of views:
The most extreme form of Gaia theory is that the entire Earth is a single unified organism with a highly intelligent mind that arose as an emergent property of the whole biosphere. In this view, the Earth's biosphere is consciously manipulating the climate in order to make conditions more conducive to life. Scientists contend that there is no evidence at all to support this last point of view, and it has come about because many people do not understand the concept of homeostasis. Many non-scientists instinctively and incorrectly see homeostasis as a process that requires conscious control. The more speculative versions of Gaia, including versions in which it is believed that the Earth is actually conscious, sentient, and highly intelligent, are usually considered outside the bounds of what is usually considered science.
Gaia in biology and science:
Buckminster Fuller has been credited as the first to incorporate scientific ideas into a Gaia theory, which he did with his Dymaxion map of the Earth.
The first scientifically rigorous theory was the Gaia hypothesis by James Lovelock, a UK chemist.
A variant of this hypothesis was developed by Lynn Margulis, a microbiologist, in 1979.
Her version is sometimes called the "Gaia Theory" (note uppercase-T). Her model is more limited in scope than the one that Lovelock proposed.
Gaia in biology and science:
Whether this sort of system is present on Earth is still open to debate. Some relatively simple homeostatic mechanisms are generally accepted. For example, when atmospheric carbon dioxide levels rise, plants are able to grow better and thus remove more carbon dioxide from the atmosphere. Other biological effects and feedbacks exist, but the extent to which these mechanisms have stabilized and modified the Earth's overall climate is largely not known.
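A toy numerical sketch of such a negative feedback is given below; every number in it is invented, and it is meant only to illustrate how a self-damping mechanism settles toward a steady state, not to model real carbon dynamics.

```python
# Minimal toy sketch (all numbers invented) of the negative feedback described
# above: higher CO2 increases plant uptake, which pulls CO2 back toward a
# steady state. This only illustrates the idea of homeostasis, not real climate.
co2 = 300.0              # atmospheric CO2 in ppm (arbitrary starting value)
emissions = 10.0         # constant input per step (arbitrary units)
UPTAKE_PER_PPM = 0.025   # plant uptake grows in proportion to the CO2 level

for step in range(200):
    uptake = UPTAKE_PER_PPM * co2   # more CO2 -> more plant growth -> more uptake
    co2 += emissions - uptake

print(f"CO2 settles near {co2:.0f} ppm "
      f"(steady state = emissions / uptake rate = {emissions / UPTAKE_PER_PPM:.0f} ppm)")
```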
Gaia in biology and science:
The Gaia hypothesis is sometimes viewed from significantly different philosophical perspectives. Some environmentalists view it as an almost conscious process, in which the Earth's ecosystem is literally viewed as a single unified organism. Some evolutionary biologists, on the other hand, view it as an undirected emergent property of the ecosystem: as each individual species pursues its own self-interest, their combined actions tend to have counterbalancing effects on environmental change. Proponents of this view sometimes point to examples of life's actions in the past that have resulted in dramatic change rather than stable equilibrium, such as the conversion of the Earth's atmosphere from a reducing environment to an oxygen-rich one.
Gaia in biology and science:
Depending on how strongly the case is stated, the hypothesis conflicts with mainstream neo-Darwinism. Most biologists would accept Daisyworld-style homeostasis as possible, but would certainly not accept the idea that this equates to the whole biosphere acting as one organism.
A very small number of scientists, and a much larger number of environmental activists, claim that Earth's biosphere is consciously manipulating the climate in order to make conditions more conducive to life. Scientists contend that there is no evidence to support this belief.
Gaia in the social sciences and politics:
A social science view of Gaia theory is the role of humans as a keystone species who may be able to accomplish global homeostasis. Whilst a few social scientists who draw inspiration from 'organic' views of society have embraced Gaia philosophy as a way to explain the human-nature interconnections, most professional social scientists are more involved in reflecting upon the way Gaia philosophy is used and engaged with within sub-sections of society. Alan Marshall, in the Department of Social Sciences at Mahidol University, for example, reflects upon the way Gaia philosophy has been used and advocated in various societal settings by environmentalists, spiritualists, managers, economists, and scientists and engineers. As Marshall explains, most social scientists had already given up on systems ideas of society in the 1960s before Gaia philosophy was born under James Lovelock's ideas since such ideas were interpreted as supporting conservatism and traditionalism.Gaia theory also influenced the dynamics of green politics.
Gaia in religion:
Rosemary Radford Ruether, the American feminist scholar and theologian, wrote a book called "Gaia and God: An Ecofeminist Theology of Earth Healing".
Gaia in religion:
A book edited by Allan Hunt Badiner called Dharma Gaia explores the ground where Buddhism and ecology meet through writings by the Dalai Lama, Gary Snyder, Thich Nhat Hanh, Allen Ginsberg, David Abram, Joanna Macy, Robert Aitken, and 25 other Buddhists and ecologists. Gaianism, an earth-centered philosophical, holistic, and spiritual belief that shares expressions with earth religions and paganism while not identifying exclusively with any specific religion, sprang from the Gaia hypothesis.
Criticism:
One of the most problematic issues with referring to Gaia as an organism is its apparent failure to meet the biological criterion of being able to reproduce. Obviously this limited view misunderstands cosmic cycles of death of planets and stars into star stuff that creates more planets and stars over billions of years. Richard Dawkins has asserted that the planet is not the offspring of any parents and is unable to reproduce.
Books on Gaia:
Alan Marshall (2002), The Unity of Nature, Imperial College Press.
Books on Gaia:
Mary Midgley (2007), Earthy Realism: The Meaning of Gaia
Mary Midgley (2001), Gaia: The Next Big Idea
Lawrence E. Joseph (1991), Gaia: The Growth of an Idea
Stephen Henry Schneider (2004), Scientists Debate Gaia: The Next Century
Allan Hunt Badiner (1990), Dharma Gaia: A Harvest of Essays in Buddhism and Ecology
George Ronald Williams (1996), The Molecular Biology of Gaia
Tyler Volk (2003), Gaia's Body: Toward a Physiology of Earth
Norman Myers (1993), Gaia: An Atlas of Planet Management
Anne Primavesi (2008), Gaia and Climate Change: A Theology of Gift Events
Anne Primavesi (2000), Sacred Gaia: Holistic Theology and Earth System Science
Anne Primavesi (2003), Gaia's Gift: Earth, Ourselves, and God after Copernicus
Peter Bunyard (1996), Gaia in Action: Science of the Living Earth
Francesca Ciancimino Howell (2002), Making Magic with Gaia: Practices to Heal Ourselves and Our Planet
Pepper Lewis (2005), Gaia Speaks
Toby Tyrrell (2013), On Gaia
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Magic eye tube**
Magic eye tube:
A magic eye tube or tuning indicator, in technical literature called an electron-ray indicator tube, is a vacuum tube which gives a visual indication of the amplitude of an electronic signal, such as an audio output, radio-frequency signal strength, or other functions. The magic eye (also called a cat's eye, or tuning eye in North America) is a specific type of such a tube with a circular display similar to the EM34 illustrated. Its first broad application was as a tuning indicator in radio receivers, to give an indication of the relative strength of the received radio signal, to show when a radio station was properly tuned in.The magic eye tube was the first in a line of development of cathode ray type tuning indicators developed as a cheaper alternative to needle movement meters. It was not until the 1960s that needle meters were made economically enough in Japan to displace indicator tubes. Tuning indicator tubes were used in vacuum tube receivers from around 1936 to 1980, before vacuum tubes were replaced by transistors in radios. An earlier tuning aid which the magic eye replaced was the "tuneon" neon lamp.
History:
The magic eye tube (or valve) for tuning radio receivers was invented in 1932 by Allen B. DuMont (who spent most of the 1930s improving the lifetime of cathode ray tubes, and ultimately formed the DuMont Television Network).The RCA 6E5 from 1935 was the first commercial tube.The earlier types were end-viewed (see the EM34), usually with an octal or side-contact base. Later developments featured a smaller side-viewed noval B9A based all-glass type with either a fan type display or a band display (see the EM84). The end-viewed version had a round cone-shaped fluorescent screen together with the black cap that shielded the red light from the cathode/heater assembly. This design prompted the contemporary advertisers to coin the term magic eye, a term still used.
History:
There was also a sub-miniature version with wire ends (Mullard DM70/DM71, Mazda 1M1/1M3, GEC/Marconi Y25) intended for battery operation, used in one Ever Ready AM/FM battery receiver with push-pull output, as well as a small number of AM/FM mains receivers, which lit the valve from the 6.3 V heater supply via a 220 ohm resistor or from the audio output valve's cathode bias. Some reel-to-reel tape recorders also used the DM70/DM71 to indicate recording level, including a transistorized model with the valve lit from the bias-oscillator voltage.
History:
The function of a magic eye can be achieved with modern semiconductor circuitry and optoelectronic displays. The high voltages (100 volts or more) required by these tubes are not present in modern devices, so the magic eye tube is now obsolete.
Method of operation:
A magic eye tube is a miniature cathode ray tube, usually with a built-in triode signal amplifier. It usually glows bright green (occasionally yellow in some very old types, e.g., EM4), and the glowing ends grow to meet in the middle as the voltage on a control grid increases. It is used in a circuit that drives the grid with a voltage that changes with signal strength; as the tuning knob is turned, the gap in the eye becomes narrowest when a station is tuned in correctly.
Method of operation:
Internally, the device is a vacuum tube consisting of two plate electrode assemblies, one creating a triode amplifier and the other a display section consisting of a conical-shaped target anode coated with zinc silicate or similar material. The display section's anode is usually directly connected to the receiver's full positive high tension (HT) voltage, whilst the triode-anode is usually (internally) connected to a control electrode mounted between the cathode and the target-anode, and externally connected to positive HT via a high-value resistor, typically 1 megaohm.
Method of operation:
When the receiver is switched on but not tuned to a station, the target-anode glows green due to electrons striking it, with the exception of the area by the internal control-electrode. This electrode is typically 150–200 V negative with respect to the target-anode, repelling electrons from the target in this region, causing a dark sector to appear on the display.
Method of operation:
The control-grid of the triode-amplifier section is connected to a point where a negative control voltage dependent on signal strength is available, e.g. the AGC line in an AM superheterodyne receiver, or the limiter stage or FM detector in an FM receiver. As a station is tuned in, the triode-grid becomes more negative with respect to the common cathode. This reduces the triode's anode current, so less voltage is dropped across the high-value series resistor; the control electrode therefore rises towards the target-anode potential and the dark sector narrows.
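A rough numerical sketch of this behaviour follows; the supply voltage, resistor value, and currents are illustrative assumptions consistent with the figures mentioned above, not measurements from a specific tube.

```python
# Minimal sketch: how the magic eye's shadow tracks signal strength.
# The control electrode sits at HT minus the drop across the series resistor,
# so less triode current (stronger signal, more negative grid) means a smaller
# drop and a narrower shadow. All numbers are illustrative assumptions.
HT_VOLTS = 250.0        # receiver high-tension supply (assumed)
R_SERIES = 1_000_000.0  # series resistor to the triode anode / control electrode (1 megaohm)

def control_electrode_voltage(triode_current_ma: float) -> float:
    """Voltage on the display control electrode for a given triode anode current."""
    return HT_VOLTS - (triode_current_ma / 1000.0) * R_SERIES

for current_ma, condition in [(0.2, "no station (grid near 0 V)"),
                              (0.05, "station tuned in (grid more negative)")]:
    v = control_electrode_voltage(current_ma)
    print(f"{condition}: control electrode at {v:.0f} V, "
          f"{HT_VOLTS - v:.0f} V below the target anode")
```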
Use in radios:
The purpose of magic eye tubes in radio sets is to help with accurate tuning to a station; the tube makes peaks in signal strength more obvious by producing a visual indication, which is better than using the ear alone. The eye is especially useful because the automatic gain control (AGC) action tends to increase the audio volume of a mistuned station, so the volume varies relatively little as the tuning knob is turned. The tuning eye was driven by the AGC voltage rather than the audio signal.
Use in radios:
When, in the early 1950s, FM radio sets were made available on the UK market, many different types of magic eye tubes were made available, with differing displays, but they all worked the same way. Some had a separate small display to light up indicating a stereo signal on FM.
The British Leak company used an EM84 indicator as a very precise tuning-indicator in their Troughline FM tuner series, by mixing the AGC voltages from the two limiter valve grids at the indicator sensing-grid. By this means accurate tuning was indicated by a fully open sharp shadow, whilst off-tune the indicator produced a partially closed shadow.
Common types:
In USA-made radios, the first type issued was the type 6E5, with a single pie-shaped image, introduced by RCA and used in their 1936 line of radios. Other radio makers originally used the 6E5 as well until, soon after, the less sensitive type 6G5 was introduced. Also, a type 6AB5 (aka 6N5) tube with lower plate voltage was introduced for series-filament radios. Type number 6U5 was similar to the 6G5 but had a straight glass envelope. Zenith Radio used a type 6T5 in their 1938 model year radios, with a "Target tuning" indicator (resembling a camera iris), but it was abandoned after only a year, with Ken-Rad manufacturing a replacement type. All the above types use a 6-pin base with two larger pins for filament connection.
Common types:
Several other "eye tubes" were introduced in USA radios and also used in test equipment and audio gear, including the octal-based types 6AF6GT, 6AD6GT and 1629. The latter was an industrial type with 12 volt filament looking identical to type 6E5. Later USA made audio gear used European tubes like EM80 (equivalent to 6BR5), EM81 (6DA5), EM84 (6FG6), EM85 (6DG7) or EM87 (6HU6).
Other applications:
Magic eye tubes were used as the recording level indicator for tape recorders (for example in the Echolette), and it is also possible to use them (in a specially adapted circuit) as a means of rough frequency comparison as a simpler alternative to Lissajous figures.
A magic eye tube acts as an inexpensive uncalibrated (and not necessarily linear) voltage indicator, and can be used wherever an indication of voltage is needed, saving the cost of a more accurate calibrated meter.
At least one design of capacitance bridge uses this type of tube to indicate that the bridge is balanced.
The magic eye tube also appears on the cover of My Morning Jacket's 2011 album Circuital. The tube as shown is almost fully lit.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Acquired brain injury**
Acquired brain injury:
Acquired brain injury (ABI) is brain damage caused by events after birth, rather than as part of a genetic or congenital disorder such as fetal alcohol syndrome, perinatal illness or perinatal hypoxia. ABI can result in cognitive, physical, emotional, or behavioural impairments that lead to permanent or temporary changes in functioning. These impairments result from either traumatic brain injury (e.g. physical trauma due to accidents, assaults, neurosurgery, head injury etc.) or nontraumatic injury derived from either an internal or external source (e.g. stroke, brain tumours, infection, poisoning, hypoxia, ischemia, encephalopathy or substance abuse). ABI does not include damage to the brain resulting from neurodegenerative disorders.While research has demonstrated that thinking and behavior may be altered in virtually all forms of ABI, brain injury is itself a very complex phenomenon having dramatically varied effects. No two persons can expect the same outcome or resulting difficulties. The brain controls every part of human life: physical, intellectual, behavioral, social and emotional. When the brain is damaged, some part of a person's life will be adversely affected.Consequences of ABI often require a major life adjustment around the person's new circumstances, and making that adjustment is a critical factor in recovery and rehabilitation. While the outcome of a given injury depends largely upon the nature and severity of the injury itself, appropriate treatment plays a vital role in determining the level of recovery.
Signs and symptoms:
Emotional: ABI has been associated with a number of emotional difficulties such as depression, issues with self-control, managing anger impulses and challenges with problem-solving; these challenges also contribute to psychosocial concerns involving social anxiety, loneliness and lower levels of self-esteem. These psychosocial problems have been found to contribute to other dilemmas such as reduced frequency of social contact and leisure activities, unemployment, family problems and marital difficulties. How the patient copes with the injury has been found to influence the level at which they experience the emotional complications correlated with ABI. Three coping strategies for emotions related to ABI have presented themselves in the research: approach-oriented coping, passive coping and avoidant coping. Approach-oriented coping has been found to be the most effective strategy, as it has been negatively correlated with rates of apathy and depression in ABI patients; this coping style is present in individuals who consciously work to minimize the emotional challenges of ABI. Passive coping is characterized by the person choosing not to express emotions and by a lack of motivation, which can lead to poor outcomes for the individual. Increased levels of depression have been correlated with avoidance coping methods in patients with ABI; this strategy is seen in people who actively evade coping with emotions. These challenges and coping strategies should be kept in mind when seeking to understand individuals with ABI.
Signs and symptoms:
Memory: Following acquired brain injury, it is common for people to experience memory loss; memory disorders are one of the most prevalent cognitive deficits in affected people. However, because some aspects of memory are directly linked to attention, it can be challenging to assess which components of a deficit are caused by memory problems and which are fundamentally attention problems. There is often partial recovery of memory functioning following the initial recovery phase; however, permanent handicaps are often reported, with ABI patients reporting significantly more memory difficulties when compared with people without an acquired brain injury. In order to cope more efficiently with memory disorders, many people with ABI use memory aids; these include external items such as diaries, notebooks and electronic organizers, internal strategies such as visual associations, and environmental adaptations such as labelling kitchen cupboards. Research has found that ABI patients use more memory aids after their injury than they did prior to it, and these aids vary in their degree of effectiveness. One popular aid is the use of a diary. Studies have found that the use of a diary is more effective if it is paired with self-instructional training, as training leads to more frequent use of the diary over time and thus more successful use as a memory aid.
Children:
In children and youth with pediatric acquired brain injury, the cognitive and emotional difficulties that stem from their injury can negatively impact their level of participation in home, school and other social situations; participation in structured events has been found to be especially hindered under these circumstances. Involvement in social situations is important for the normal development of children as a means of learning how to work effectively with others. Furthermore, young people with ABI are often reported as having insufficient problem-solving skills, which can further hinder their performance in various academic and social settings. It is important for rehabilitation programs to address these challenges, which are specific to children who had not fully developed at the time of their injury.
Management:
Rehabilitation following an acquired brain injury does not follow a set protocol, owing to the variety of mechanisms of injury and structures affected. Rather, rehabilitation is an individualized process that will often involve a multi-disciplinary approach. The rehabilitation team may include, but is not limited to, nurses, neurologists, physiotherapists, psychiatrists (particularly those specialized in Brain Injury Medicine), occupational therapists, speech-language pathologists, music therapists, and rehabilitation psychologists. Physical therapy and other professions may be utilized after brain injury in order to control muscle tone, regain normal movement patterns, and maximize functional independence. Rehabilitation should be patient-centered and guided by the individual's needs and goals. There is some evidence that rhythmic auditory stimulation is beneficial in gait rehabilitation following a brain injury. Music therapy may assist patients to improve gait, arm swing while walking, communication, and quality of life after experiencing a stroke. Newer treatment methods such as virtual reality and robotics remain under-researched; however, there is reason to believe that virtual reality may be useful in upper limb rehabilitation following an acquired brain injury. Because there are few randomized controlled trials and the evidence is generally weak, more research is needed to gain a complete understanding of the ideal type and parameters of therapeutic interventions for the treatment of acquired brain injuries. For more information on therapeutic interventions for acquired brain injury, see stroke and traumatic brain injury.
Management:
Memory: Some strategies for rehabilitating the memory of those affected by ABI have used repetitive tasks to attempt to increase the patients' ability to recall information. While this type of training increases performance on the task at hand, there is little evidence that the skills translate to improved performance on memory challenges outside of the laboratory. Awareness of memory strategies, motivation and dedication to increasing memory have been related to successful increases in memory capability among patients; one example is the use of attention process training and brain injury education in patients with memory disorders related to brain injury, which has been shown to increase memory functioning based on self-report measures. Another strategy for improvement amongst individuals with poor memory functioning is the use of elaboration to improve encoding of items; one form of this strategy is called self-imagining, whereby the patient imagines the event to be recalled from a more personal perspective. Self-imagining has been found to improve recognition memory by coding the event in a manner that is more individually salient to the subject, and it improves recall in individuals with and without memory disorders. There is research evidence to suggest that rehabilitation programs that are geared toward the individual may produce greater results than group-based interventions for improving memory in ABI patients, because they are tailored to the symptoms experienced by the individual. More research is necessary in order to draw conclusions on how to improve memory among individuals with ABI who experience memory loss.
Notable cases:
There have been many popularized cases of various forms of ABI. Phineas Gage's case of traumatic brain injury greatly stimulated discussion on brain function and physiology. Henry Molaison, formerly known as patient H.M., underwent neurosurgery to remove scar tissue in his brain that was causing debilitating epileptic seizures; neurosurgeon William Beecher Scoville performed the surgery, which created bilateral lesions near the hippocampus. These lesions helped remove the symptoms of Molaison's epilepsy but resulted in anterograde amnesia. Molaison has since been studied by hundreds of researchers, most notably Brenda Milner, and has been greatly influential in the study of memory and the brain.
Notable cases:
Zasetsky was injured in the Battle of Smolensk when a bullet entered his left parieto-occipital area, resulting in a long coma. Following this, he developed a form of agnosia and became unable to perceive the right side of things.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Dense irregular connective tissue**
Dense irregular connective tissue:
Dense irregular connective tissue has fibers that are not arranged in parallel bundles as in dense regular connective tissue.
Dense irregular connective tissue consists of mostly collagen fibers. It has less ground substance than loose connective tissue. Fibroblasts are the predominant cell type, scattered sparsely across the tissue.
Function:
This type of connective tissue is found mostly in the reticular layer (or deep layer) of the dermis. It is also found in the sclera and in the deeper skin layers. Due to its high proportion of collagenous fibers, dense irregular connective tissue provides strength, making the skin resistant to tearing by stretching forces from different directions. Dense irregular connective tissue also makes up the submucosa of the digestive tract, lymph nodes, and some types of fascia. Other examples include the periosteum and perichondrium of bones, and the tunica albuginea of the testis. In the submucosa layer, the fiber bundles course in varying planes, allowing the organ to resist excessive stretching and distension.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Rapid casting**
Rapid casting:
Rapid casting is an integration of investment casting with rapid prototyping/3D printing. In this technique disposable patterns that are used for forming molds are created with 3D printing techniques like fused deposition modeling, stereolithography or any other 3D printing technique.
Advantages:
Cheap for batch production
Reduced turnaround time
Representative prototypes
Easier to make patterns
Possibility to make the part lighter by removing unwanted material and stiffer by adding rib features.
Advantages of pressure die casting:
Cheap at scale
Large parts
Good surface finish
High dimensional accuracy
High tensile strength
Procedure:
A disposable pattern is 3D printed (it can be of wax or any plastic used in 3D printing, e.g. PLA, PETG, etc.).
The pattern, if made of wax, undergoes wax infiltration and other procedures to increase its strength and dewaxing properties.
A mold is made by coating the printed pattern using a ceramic slurry.
The pattern is melted out of the ceramic mold.
Molten metal is poured into the mold.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**81 Cancri**
81 Cancri:
81 Cancri (Pi1 Cancri, π1 Cancri) is a stellar system that lies approximately 66 light-years away. The main component of the system is a close binary, while a brown dwarf binary is located at a wide separation.
Components:
81 Cancri has long been known to be a binary, both visually and spectroscopically (VBO=SB2O). Their orbit is an eccentric 2.7 year one, resolved by over 100 milli-arcseconds due to a modest separation and close distance. The two components have similar masses and temperatures, with the secondary being only ~0.04 M☉ lower in mass and a few hundred kelvin cooler.
Components:
A brown dwarf component in the system was detected in 2001. The source 2MASSW J0912145+145940 (2M0912+14) in the 2MASS catalogue was identified as having a common proper motion with the AB binary, and subsequent observations confirmed the brown dwarf nature of the companion. The new component, 81 Cancri C, was found to have a spectral type of L8, near to the L-T transition. Separated from the primary components by 43 arcseconds and at a distance of 20.4 parsecs, the brown dwarf has a minimum physical separation of approximately 880 AU.
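The quoted minimum physical separation follows from the small-angle relation between an angular separation in arcseconds and a distance in parsecs; a quick sketch of that arithmetic, using the 43 arcsecond separation and 20.4 parsec distance given above, is shown below.

```python
# Projected separation of 81 Cancri C from the AB pair.
# Small-angle relation: separation in AU = angle in arcsec * distance in parsec.
angle_arcsec = 43.0   # angular separation quoted above
distance_pc = 20.4    # distance quoted above

projected_sep_au = angle_arcsec * distance_pc
print(f"Projected separation: {projected_sep_au:.0f} AU")  # ~877 AU, i.e. roughly 880 AU
```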
Components:
The brown dwarf was found to be about half a magnitude brighter in the JHK bands than expected, compared to others of similar spectral type and known distance. The system was found, with some confidence, not to be particularly young, so it was possible that component C could itself be a close binary not resolved by 2MASS. This was confirmed in 2006, as the source was found to be slightly oblong, caused by two components of similar spectral types. These two brown dwarfs, components C and D, have a separation of approximately 11 AU, and their mutual orbit likely takes on the order of 150 years due to the small masses involved.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Orthogonal array testing**
Orthogonal array testing:
Orthogonal array testing is a black box testing technique that is a systematic, statistical way of software testing. It is used when the number of inputs to the system is relatively small, but too large to allow for exhaustive testing of every possible input to the systems. It is particularly effective in finding errors associated with faulty logic within computer software systems. Orthogonal arrays can be applied in user interface testing, system testing, regression testing, configuration testing and performance testing.
Orthogonal array testing:
The permutations of factor levels comprising a single treatment are so chosen that their responses are uncorrelated and therefore each treatment gives a unique piece of information. The net effects of organizing the experiment in such treatments is that the same piece of information is gathered in the minimum number of experiments.
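As a rough illustration of how such treatments cut down the number of experiments, the sketch below uses a standard L9 orthogonal array for four factors at three levels each, so that nine test cases stand in for the 81 exhaustive combinations; the factor names and levels are invented for the example and are not from any particular test plan.

```python
# Illustrative L9(3^4) orthogonal array: 9 runs cover 4 factors at 3 levels each,
# with every pair of levels for any two factors appearing together the same number of times.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

# Hypothetical factors and levels for a configuration-testing scenario.
factors = {
    "browser": ["Firefox", "Chrome", "Safari"],
    "os":      ["Windows", "Linux", "macOS"],
    "locale":  ["en", "de", "ja"],
    "network": ["wifi", "lte", "ethernet"],
}

names = list(factors)
for run, row in enumerate(L9, start=1):
    case = {names[i]: factors[names[i]][level] for i, level in enumerate(row)}
    print(f"test case {run}: {case}")
```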
Background:
Orthogonal vectors: Orthogonal vectors exhibit orthogonality and have the following properties: Each of the vectors conveys information different from that of any other vector in the sequence, i.e., each vector conveys unique information, therefore avoiding redundancy.
On a linear addition, the signals may be separated easily.
Each of the vectors is statistically independent of the others, i.e., the correlation between them is nil.
When linearly added, the resultant is the arithmetic sum of the individual components.
Benefits:
Testing cycle time is reduced and analysis is simpler.
Test cases are balanced, so it's straightforward to isolate defects and assess performance. This provides a significant cost savings over pair-wise testing.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Bronchitis kettle**
Bronchitis kettle:
The bronchitis kettle, typified by a long spout, was used in the nineteenth and twentieth centuries to moisten the air for a sufferer of bronchitis, and was considered to make it easier to breathe for the patient. Sometimes menthol was added to the water to relieve congestion. The water was boiled on the fireplace in the room, or above a spirit lamp, or, in the twentieth century, by electricity. Sometimes the kettle was boiled within a tent placed around the patient.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Neodymium bismuthide**
Neodymium bismuthide:
Neodymium bismuthide or Bismuth-Neodymium is a binary inorganic compound of neodymium and bismuth with the formula NdBi. It forms crystals.
Preparation:
Neodymium bismuthide can be prepared by reacting a stoichiometric amount of neodymium and bismuth at 1900°C: Nd + Bi → NdBi
Physical properties:
Neodymium bismuthide forms cubic crystals of the space group Fm3m, with cell parameter a = 0.64222 nm and Z = 4, in a structure like that of sodium chloride. The compound melts at 1900°C. At a temperature of 24 K, an antiferromagnetic transition occurs in the compound.
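From the cell parameter and Z value given above, the theoretical (X-ray) density of the compound can be sketched; the atomic masses used below are standard values and the result is only a back-of-the-envelope estimate, not a reported measurement.

```python
# Rough theoretical density of NdBi from the rock-salt cell given above:
# rho = Z * M / (N_A * a^3)
N_A = 6.02214076e23      # Avogadro constant, 1/mol
a_cm = 0.64222e-7        # cell parameter a = 0.64222 nm, expressed in cm
Z = 4                    # formula units per cell
M = 144.242 + 208.980    # molar mass of Nd + Bi in g/mol (standard atomic weights)

rho = Z * M / (N_A * a_cm ** 3)
print(f"Estimated density: {rho:.2f} g/cm^3")  # roughly 8.9 g/cm^3
```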
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Genitography**
Genitography:
Genitography is the radiography of the urogenital sinus and internal duct structures after injection of a contrast medium through the opening of the sinus.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Macintosh Font X encoding**
Macintosh Font X encoding:
Macintosh Font X is a character encoding which is used by Kermit to represent text on the Apple Macintosh (but not by standard Mac OS fonts). It is a modification of Mac OS Symbol to include all characters in DEC Special Graphics and the DEC Technical Character Set (unifying the ⎷ and √ from the Technical Character Set).
Characters at A4, A7, A9, D0, E1, and F1 do not have Unicode equivalents; these characters, along with the not sign at D8, are intended to assemble a 3x5 uppercase sigma.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Beck's cognitive triad**
Beck's cognitive triad:
Beck's cognitive triad, also known as the negative triad, is a cognitive-therapeutic view of the three key elements of a person's belief system present in depression. It was proposed by Aaron Beck in 1967. The triad forms part of his cognitive theory of depression and the concept is used as part of CBT, particularly in Beck's "Treatment of Negative Automatic Thoughts" (TNAT) approach.
Beck's cognitive triad:
The triad involves "automatic, spontaneous and seemingly uncontrollable negative thoughts" about:
The self
The world or environment
The future
Examples of this negative thinking include:
The self – "I'm worthless and ugly" or "I wish I was different"
The world – "No one values me" or "people ignore me all the time"
The future – "I'm hopeless because things will never change" or "things can only get worse!"
Beck's cognitive model of depression:
From a cognitive perspective, depressive disorders are characterized by people's dysfunctional negative views of themselves, their life experience (and the world in general), and their future—the cognitive triad.
Beck's cognitive model of depression:
People with depression often view themselves as unlovable, helpless, doomed or deficient. They tend to attribute their unpleasant experiences to their presumed physical, mental, and/or moral deficits. They tend to feel excessively guilty, believing that they are worthless, blameworthy, and rejected by self and others. They may have a very difficult time viewing themselves as people who could ever succeed, be accepted, or feel good about themselves and this may lead to withdrawal and isolation, which further worsens the mood.
Beck's cognitive model of depression:
Cognitive distortions Beck proposes that those with depression develop cognitive distortions, a type of cognitive bias sometimes also referred to as faulty or unhelpful thinking patterns. Beck referred to some of these biases as "automatic thoughts", suggesting they are not entirely under conscious control. People with depression will tend to quickly overlook their positive attributes and disqualify their accomplishments as being minor or meaningless. They may also misinterpret the care, good will, and concern of others as being based on pity or susceptible to being lost easily if those others knew the “real person" and this fuels further feelings of guilt. The main cognitive distortions according to Beck are summarised below: Arbitrary inference - drawing conclusions from insufficient or no evidence.
Beck's cognitive model of depression:
Selective abstraction - drawing conclusions on the basis of just one of many elements of a situation.
Overgeneralisation - making sweeping conclusions based on a single event.
Magnification - exaggerating the importance of an undesirable event.
Minimisation - underplaying the significance of a positive event.
Beck's cognitive model of depression:
Personalisation - attributing negative feelings of others to oneself.
Depressed people view their lives as devoid of pleasure or reward, presenting insuperable obstacles to achieving their important goals. This is often manifested as a lack of motivation and leads to further withdrawal and isolation, as the depressed person may be seen as lazy by others. Everything seems and feels “too hard to manage” and other people are seen as punishing (or potentially so). They believe that their troubles will continue indefinitely, and that the future will only bring further hardship, deprivation, and frustration. “Paralysis of the will” results from the depressed patients' pessimism and hopelessness. Expecting their efforts to end in failure, they are reluctant to commit themselves to growth-oriented goals, and their activity level drops. Believing that they cannot affect the outcome of various situations, they experience a desire to avoid such situations. Suicidal wishes are seen as an extreme expression of the desire to escape from problems that appear to be uncontrollable, interminable, and unbearable.
Beck's cognitive model of depression:
Negative self-schemata: Beck also believed that a depressed person will, often from childhood experiences, hold a negative self-schema. This schema may originate from negative early experiences, such as criticism, abuse or bullying. Beck suggests that people with negative self-schemata are liable to interpret information presented to them in a negative manner, leading to the cognitive distortions outlined above. The pessimistic explanatory style, which describes the way in which depressed or neurotic people react negatively to certain events, is an example of the effect of these schemata on self-image. This explanatory style involves blaming oneself for negative events outside of their control or the behaviour of others (personalisation), believing that such events will continue forever and letting these events significantly affect their emotional wellbeing.
Measuring aspects of the triad:
A number of instruments have been developed to attempt to measure negative cognition in the three areas of the triad. The Beck Depression Inventory (BDI) is a well-known questionnaire for scoring depression based on all three aspects of the triad. Other examples include the Beck Hopelessness Scale for measuring thoughts about the future and the Rosenberg Self-Esteem Scale for measuring views of the self. The Cognitive Triad Inventory (CTI) was developed by Beckham et al. to attempt to systematically measure the three aspects of Beck's triad. The CTI aims to quantify the relationship between "therapist behaviour in a single treatment session to changes in the cognitive triad" and "patterns of changes to the triad to changes in overall depressive mood". This inventory has since been adapted for use with children and adolescents in the CTI-C, developed by Kaslow et al.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**GPS2 (gene)**
GPS2 (gene):
G protein pathway suppressor 2 is a protein that in humans is encoded by the GPS2 gene.
Function:
This gene encodes a protein involved in G protein-mitogen-activated protein kinase (MAPK) signaling cascades. When overexpressed in mammalian cells, this gene could potently suppress a RAS- and MAPK-mediated signal and interfere with JNK activity, suggesting that the function of this gene may be signal repression. The encoded protein is an integral subunit of the NCOR1-HDAC3 (nuclear receptor corepressor 1-histone deacetylase 3) complex, and it was shown that the complex inhibits JNK activation through this subunit and thus could potentially provide an alternative mechanism for hormone-mediated antagonism of AP1 (activator protein 1) function.
Interactions:
GPS2 (gene) has been shown to interact with:
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Datanet**
Datanet:
DataNet, or Sustainable Digital Data Preservation and Access Network Partner was a research program of the U.S. National Science Foundation Office of Cyberinfrastructure. The office announced a request for proposals with this title on September 28, 2007. The lead paragraph of its synopsis describes the program as: Science and engineering research and education are increasingly digital and increasingly data-intensive. Digital data are not only the output of research but provide input to new hypotheses, enabling new scientific insights and driving innovation. Therein lies one of the major challenges of this scientific generation: how to develop the new methods, management structures and technologies to manage the diversity, size, and complexity of current and future data sets and data streams. This solicitation addresses that challenge by creating a set of exemplar national and global data research infrastructure organizations (dubbed DataNet Partners) that provide unique opportunities to communities of researchers to advance science and/or engineering research and learning.
Datanet:
The introduction in the solicitation goes on to say: Chapter 3 (Data, Data Analysis, and Visualization) of NSF’s Cyberinfrastructure Vision for 21st century Discovery presents a vision in which “science and engineering digital data are routinely deposited in well-documented form, are regularly and easily consulted and analyzed by specialists and non-specialists alike, are openly accessible while suitably protected, and are reliably preserved.” The goal of this solicitation is to catalyze the development of a system of science and engineering data collections that is open, extensible and evolvable.
Datanet:
The initial plan called for a $100 million initiative: five awards of $20 million each over five years with the possibility of continuing funding. Awards were given in two rounds. In the first round, for which full proposals were due on March 21, 2008, two DataNet proposals were awarded. DataONE, led by William Michener at the University of New Mexico covers ecology, evolutionary, and earth science. The Data Conservancy, led by Sayeed Choudhury of Johns Hopkins University, focuses on astronomy, earth science, life sciences, and social science.
Datanet:
For the second round, preliminary proposals were due on October 6, 2008, and full proposals on February 16, 2009. Awards from the second round were greatly delayed, and funding was reduced substantially from $20 million per project to $8 million. Funding for three second round projects began in Fall 2011. SEAD: Sustainable Environment through Actionable Data, led by Margaret Hedstrom of the University of Michigan, seeks to provide data curation software and services for the "long tail" of small- and medium-scale data producers in the domain of sustainability science. The DataNet Federation Consortium, led by Reagan Moore of the University of North Carolina, uses the integrated Rule-Oriented Data System (iRODS) to provide data grid infrastructure for science and engineering. Terra Populus, led by Steven Ruggles of the University of Minnesota focuses on tools for data integration across the domains of social science and environmental data, allowing interoperability of the three major data formats used in these domains: microdata, areal data, and raster data.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Hot and cold cognition**
Hot and cold cognition:
Hot cognition is a hypothesis on motivated reasoning in which a person's thinking is influenced by their emotional state. Put simply, hot cognition is cognition coloured by emotion. Hot cognition contrasts with cold cognition, which implies cognitive processing of information that is independent of emotional involvement. Hot cognition is proposed to be associated with cognitive and physiological arousal, in which a person is more responsive to environmental factors. As it is automatic, rapid and led by emotion, hot cognition may consequently cause biased decision making. Hot cognition may arise, with varying degrees of strength, in politics, religion, and other sociopolitical contexts because of moral issues, which are inevitably tied to emotion. Hot cognition was initially proposed in 1963 by Robert P. Abelson. The idea became popular in the 1960s and the 1970s. An example of a biased decision caused by hot cognition would be a juror disregarding evidence because of an attraction to the defendant. Decision making with cold cognition is more likely to involve logic and critical analysis. Therefore, when an individual engages in a task while using cold cognition, the stimulus is likely to be emotionally neutral and the "outcome of the test is not motivationally relevant" to the individual. An example of a critical decision using cold cognition would be concentrating on the evidence before drawing a conclusion.
Hot and cold cognition:
Hot and cold cognition form a dichotomy within executive functioning. Executive functioning has long been considered as a domain general cognitive function, but there has been support for separation into "hot" affective aspects and "cold" cognitive aspects. It is recognized that executive functioning spans across a number of cognitive tasks, including working memory, cognitive flexibility and reasoning in active goal pursuit. The distinction between hot and cool cognition implies that executive function may operate differently in different contexts. The distinction has been applied to research in cognitive psychology, developmental psychology, clinical psychology, social psychology, neuropsychology, and other areas of study in psychology.
Development and neuroanatomy:
Performance on hot and cold tasks improves most rapidly during the preschool years, but continues into adolescence. This co-occurs with both structural and functional development associated with the prefrontal cortex. Specific areas within the prefrontal cortex (PFC) are thought to be associated with both hot and cold cognition. Hot cognition is likely to be utilized during tasks that require the regulation of emotion or motivation, as well as the re-evaluation of the motivational significance of a stimulus. The ventral and medial areas of the prefrontal cortex (VM-PFC) are implicated during these tasks. Cold cognition is thought to be associated with executive functions elicited by abstract, deconceptualized tasks, such as card sorting. The area of the brain that is utilized for these tasks is the dorsolateral prefrontal cortex (DL-PFC). It is between the ages of 3 years and 5 years that the most significant change in task completion is seen. Age-related trends have been observed in tasks used to measure hot cognition, as well as cold cognition. However, the age at which children reach adult-like functioning varies. It appears as though children take longer to fully develop hot executive functioning than cold. This lends support to the idea that hot cognition may follow a separate, and perhaps delayed, developmental trajectory as opposed to cold cognition. Further research done on these neurological areas suggests there may be some plasticity during the development of both hot and cold cognition. While the preschool years are ones of extreme sensitivity to the development of prefrontal cortex, a similar period is found in the transition into adolescence. This gives rise to the idea that there may be a time window for intervention training, which would improve cognitive abilities and executive functioning in children and adolescents.
Marketing:
In marketing research, an audience's energy takes the form of psychological heat: hot cognition is an emotional thought process and cold cognition is a cognitive thought process.
Assessment:
This section explains the most common tasks that are used to measure hot and cold cognitive functioning. The cool tasks are neutrally affective and measure executive function abilities such as cognitive flexibility and working memory. In other words, there is nothing to be gained or lost by performing these tasks. The hot tasks also measure executive function, but these tasks result in emotionally significant consequences.
Assessment:
Hot function tasks
Iowa gambling task: In the Iowa gambling task participants are initially given $2,000 facsimile dollars and asked to win as much money as possible. They are presented with four decks of cards that represent either a gain or loss in money. One card from each deck is drawn at a time. Consistently choosing a card from the advantageous decks results in a net gain, whereas choosing from a disadvantageous deck results in a net loss. Each card from the disadvantageous deck offers a higher reward than the advantageous deck, but also a higher and more variable loss.
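A minimal simulation can make this gain/loss structure concrete; the payoff values and probabilities below are invented for illustration only and are not the published task parameters.

```python
import random

# Toy sketch of the Iowa gambling task payoff structure (illustrative values only).
# Disadvantageous decks: large rewards with large occasional losses -> negative expected value.
# Advantageous decks: small rewards with small occasional losses -> positive expected value.
def draw(deck: str) -> int:
    if deck in ("A", "B"):  # disadvantageous decks
        return 100 - (1250 if random.random() < 0.1 else 0)
    return 50 - (250 if random.random() < 0.1 else 0)  # advantageous decks ("C", "D")

random.seed(0)
bank = 2000
for _ in range(100):
    bank += draw("C")       # a player who sticks to an advantageous deck
print("net after 100 advantageous draws:", bank)  # tends to end above 2000
```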
Assessment:
Delay of gratification: Studies have been conducted on the concept of delay of gratification to test whether or not people are capable of waiting to receive a reward in order to increase the value of the reward. In these experiments, participants can choose either to take the reward they are immediately presented with or to wait a period of time and then receive a higher-valued reward. Hot cognition would motivate people to immediately satisfy their craving for the present reward rather than waiting for a better reward.
Assessment:
Neutral versus negative syllogism tasks: The influence that beliefs can have on logical reasoning may vary as a result of emotions during cognitive processing. Neutral content will typically lead to the exhibition of the belief-bias effect. In contrast, content that is emotionally charged will result in a diminished likelihood of beliefs having an influence. The impact of negative emotions demonstrates their capability to alter the process underlying logical reasoning. There is an interaction between emotions and beliefs that interferes with an individual's ability to reason.
Assessment:
Cold function tasks
The cool tasks are neutrally affective and measure executive function abilities such as cognitive flexibility and working memory; there is nothing to be gained or lost by performing these tasks.
Assessment:
Self-Ordered Pointing: In this task an array of items is presented to participants. The position of these items then randomly changes from trial to trial. Participants are instructed to point to one of these items, but are then asked not to point to that same item again. In order to perform well on this task, participants must remember what item they pointed to and use this information to decide on subsequent responses.
Assessment:
Wisconsin Card Sort Task (WCST): The Wisconsin Card Sort Task requires participants to sort stimulus cards that differ along several dimensions (shape, colour, or number). However, they are not told how to sort them. The only feedback they receive is whether or not a match is correct. Participants must discover the rule, i.e. the relevant dimension. Once the participant matches a certain number of cards correctly, the dimension changes and they must rediscover the new rule. This requires participants to remember the rule they were using and cognitively switch the rule by which they sort.
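The rule-switching logic of the task can be sketched as a small state machine; the ten-in-a-row criterion and the lose-shift participant model below are assumptions made for the example, not the exact clinical protocol.

```python
import random

# Minimal sketch of the Wisconsin Card Sort Task logic: the sorting rule is hidden,
# only correct/incorrect feedback is given, and the rule changes silently after a
# run of correct matches (criterion assumed to be 10 here).
DIMENSIONS = ["shape", "colour", "number"]

def run_wcst(trials: int = 128, criterion: int = 10) -> int:
    rule = random.choice(DIMENSIONS)   # hidden sorting rule
    guess = random.choice(DIMENSIONS)  # participant's current sorting dimension
    correct_run, completed_switches = 0, 0
    for _ in range(trials):
        if guess == rule:              # feedback: "correct" -> participant keeps the dimension
            correct_run += 1
        else:                          # feedback: "incorrect" -> participant shifts dimension
            correct_run = 0
            guess = random.choice([d for d in DIMENSIONS if d != guess])
        if correct_run == criterion:   # criterion met: rule changes without warning
            rule = random.choice([d for d in DIMENSIONS if d != rule])
            correct_run = 0
            completed_switches += 1
    return completed_switches

random.seed(1)
print("rule switches completed:", run_wcst())
```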
Assessment:
Dimensional Change Card Sort Task (DCCS): Participants are required to sort stimulus cards based on either shape or colour. They are first instructed to sort based on one dimension (colour) in a trial, and then it switches to the other (shape) in the following trial. "Switch" trials are also used where the participant must change back and forth between rules within a single trial. Unlike the WCST, the rule is explicitly stated and does not have to be inferred. The task measures how flexible participants are to changing rules. This requires participants to shift between dimensions of sorting.
Recent evidence:
Research has demonstrated the effects of emotional manipulations on decision-making processes. Participants who were induced with enthusiasm, anger or distress (different specific emotions) responded in different ways to risky-choice problems, demonstrating that hot cognition, as an automatic process, affects decision making differently. As another example, hot cognition is a better predictor of negative emotional arousal than cold cognition when people have a personal investment, such as wanting their team to win. In addition, hot cognition changes the way people use decision-making strategies, depending on the type of mood they are in, positive or negative. When people are in a positive mood, they tend to use compensatory, holistic strategies. This leads to a shallow and broad processing of information. In a negative mood, people employ non-compensatory, narrow strategies, which leads to a more detail-oriented and thorough processing of information. In one study, participants were shown movie clips in order to induce a mood of happiness, anger or sadness and were asked to complete a decision-making task. Researchers found that participants in the negative mood condition used more non-compensatory, specific decision-making techniques, focusing on the details of the situation. Participants in the positive mood condition used more compensatory, broad decision-making techniques, focusing on the bigger picture of the situation. Hot cognition has also been implicated in automatic processing and autobiographical memory. Furthermore, hot cognition extends outside the laboratory, as exhibited in political processes and criminal judgments. When police officers were induced with sadness they were more likely to think the suspect was guilty; however, if police officers were induced with anger there was no difference in judgments. There are also clinical implications for understanding certain disorders. Patients diagnosed with anorexia nervosa who went through intervention training, which included hot cognition as part of emotional processing development, did not show any improvement after this training. In another clinical population, those diagnosed with bipolar disorder exaggerated their perception of negative feedback and were less likely to adjust their decision-making process when faced with risky choices (gambling tasks).
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Non-conventional trademark**
Non-conventional trademark:
A non-conventional trademark, also known as a nontraditional trademark, is any new type of trademark which does not belong to a pre-existing, conventional category of trade mark, and which is often difficult to register, but which may nevertheless fulfill the essential trademark function of uniquely identifying the commercial origin of products or services.
The term is broadly inclusive as it encompasses marks which do not fall into the conventional set of marks (e.g. those consisting of letters, numerals, words, logos, pictures, symbols, or combinations of one or more of these elements), and therefore includes marks based on appearance, shape, sound, smell, taste and texture.
Non-conventional trademarks may therefore be visible signs (e.g. colors, shapes, moving images, holograms, positions), or non-visible signs (e.g. sounds, scents, tastes, textures).
Trends and issues:
Certain types of non-conventional trademarks have become more widely accepted in recent times as a result of legislative changes expanding the definition of "trademark". Such developments are the result of international treaties dealing with intellectual property, such as the Agreement on Trade-Related Aspects of Intellectual Property Rights, which sets down a standardised, inclusive legal definition. Single colour trademarks, motion trademarks, hologram trademarks, shape trademarks (also known as three-dimensional trademarks or 3D trademarks), and sound trademarks (also known as aural trademarks), are examples of such marks.
Trends and issues:
In the United Kingdom, colours have been granted trademark protection when used in specific, limited contexts such as packaging or marketing. The particular shade of turquoise used on cans of Heinz baked beans can only be used by the H. J. Heinz Company for that product. In another instance, BP claims the right to use green on signs for petrol stations.[1] In a widely disputed move, Cadbury's (confectioners) has been granted "the colour Purple".
Trends and issues:
In the United States, it is possible, in some cases, for color alone to function as a trademark. Originally, color was not considered a valid feature on which to register a trademark (Leshen & Sons Rope Co. v. Broderick & Bascom Rope Co., 201 U.S. 166 (1906)). Later, following the passage of the Lanham Act, the United States Supreme Court ruled in Qualitex Co. v. Jacobson Products Co., 514 U.S. 159, 165, 115 S.Ct. 1300, 1304, 131 L.Ed.2d 248 (1995) that under the Lanham Act, subject to the usual conditions, a color is registrable as a trademark.
Trends and issues:
The right to exclusive use of a specific color as a trademark on packaging has generally been mixed in U.S. court cases. Specific cases denying color protection include royal blue for ice cream packages (AmBrit Inc. v. Kraft, Inc., 812 F.2d 1531 (11th Cir. 1986), cert. denied, 481 U.S. 1041 (1987)); a series of stripes or multiple colors on candy packages (Life Savers v. Curtiss Candy Co., 82 F.2d 4 (7th Cir. 1950)); green for farm implements (Deere & Co. v. Farmhand Inc. (560 F. Supp. 85 (S.D. Iowa 1982) aff'd, 721 F.2d 253 (8th Cir. 1983)); black for motors (Brunswick Corp. v. British Seagull, Ltd., 35 F.3d 1527 (Fed. Cir.), cert. denied, 115 S. Ct. 1426 (1994)); and the use of red for one half of a soup can (Campbell Soup Co. v. Armour & Co., 175 F. 2d 795 (Court of App. 3d Cir., 1949)). A successful case granting color protection involved the use of the color red for cans of tile mastic Dap Products, Inc. v. Color Tile Mfg., Inc. 821 F. Supp. 488 (S.D. Ohio 1993), and a green-gold color for dry cleaning pads (Qualitex Co. v. Jacobson Products Co., 514 U.S. 159, 165, 115 S.Ct. 1300, 1304, 131 L.Ed.2d 248 (1995)).
Trends and issues:
Although scent trademarks (also known as olfactory trademarks or smell trademarks), are sometimes specifically mentioned in legislative definitions of "trademark", it is often difficult to register such marks if consistent, non-arbitrary and meaningful graphic representations of the marks cannot be produced. This tends to be an issue with all types of non-conventional trademarks, especially in Europe. United States practice is generally more liberal; a trademark for plumeria scent for sewing thread was registered in 1990. In Europe, a written description, with or without a deposited sample, is not sufficient to allow the mark to be registered, whereas such formalities are acceptable in the United States. However, even in the United States "functional" scents that are inherent in the product itself, such as smell for perfume, are not accepted for registration.
Trends and issues:
One example of a shape trademark recognized in Europe is the protection granted to Toblerone, a company which manufactures chocolate bars with a distinctive triangular shape. Presenting further difficulties are entirely new types of marks which, despite growing commercial adoption in the marketplace, are typically very difficult to register, often because they are not formally recognised as a "trademark". Examples of such marks are motion trademarks (also known as animated marks, moving marks, moving image marks or movement marks). Many web browsers feature a moving image mark in the top right hand corner of the browser screen which is visible when the browser is in the process of resolving a website.
Decisions on non-conventional trademarks:
Owens-Corning Owens-Corning was issued a trademark for the color pink used to color its fiberglass batting insulation product. The decision was based upon the fact that the company had been emphasizing the pink color of its insulation for decades, had licensed use of the Pink Panther cartoon character in its ads, the color was a non-functional aspect of the product (fiberglass is normally tan or yellow), and Owens Corning had spent over US$50 million advertising its insulation product. In re Owens-Corning Fiberglas Corp., 774 F.2d 1116 (Fed. Cir. 1985).
Decisions on non-conventional trademarks:
Sieckmann In Dr. Ralf Sieckmann vs Deutsches Patent- und Markenamt (case C-273/00), a judgement of the European Court of Justice issued on December 12, 2002, the ECJ held in relation to trademarks in the European Community that: Article 2 of Council Directive 89/104/EEC (of 21 December 1988 to approximate the laws of the Member States relating to trade marks) must be interpreted as meaning that a trade mark may consist of a sign which is not in itself capable of being perceived visually, provided that it can be represented graphically, particularly by means of images, lines or characters, and that the representation is clear, precise, self-contained, easily accessible, intelligible, durable and objective.
Decisions on non-conventional trademarks:
In respect of an olfactory sign, the requirements of graphic representability are not satisfied by a chemical formula, by a description in written words, by the deposit of an odour sample or by a combination of those elements.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Defeat device**
Defeat device:
A defeat device is any motor vehicle hardware, software, or design that interferes with or disables emissions controls under real-world driving conditions, even if the vehicle passes formal emissions testing. The term appears in the US Clean Air Act and European Union regulations to describe anything that prevents an emissions control system from working, and applies to power plants and other air pollution sources as well as to automobiles. The United States Environmental Protection Agency (EPA) has taken numerous enforcement actions against car makers and other companies that have used or installed defeat devices, whether deliberately or through error or negligence. Aftermarket parts or software, such as modified exhausts or chip tuning products and services, are considered defeat devices if they inhibit or bypass a vehicle's emissions controls.
Timeline:
1970s In 1973 the Big 3 Detroit automakers, Chrysler, Ford Motor Company and General Motors, along with import brand Toyota, were ordered by the EPA to stop using ambient temperature switches which disabled pollution controls at low temperatures. The automakers agreed to cease using the ambient temperature switches in the way the EPA said was in violation of the Clean Air Act, while insisting that the switches were not 'defeat devices' intended to evade rules. The auto companies said the devices improved engine efficiency and actually reduced pollution. The EPA order affected 2 million 1973 model year cars slated for production, but did not require a recall of cars already on the road.Also in 1973, Volkswagen agreed to a settlement with the EPA, in which they admitted no wrongdoing and paid a $120,000 fine, for failing to disclose the existence of two temperature sensing switches that affected emissions function. In their 1974 model year application to the EPA, VW disclosed the presence of the switches and the EPA rejected them, so they were removed.
Timeline:
1990s In 1995, General Motors was ordered to recall 470,000 model year 1991 through 1995 Cadillacs and pay an $11 million fine for programming the car's electronic control unit (ECU) to enrich the fuel mixture any time the car's air conditioning or cabin heat was operating, since the EPA tests are conducted with those systems turned off. The richer fuel mixture was needed to address an engine stalling problem, resulting in emissions of up to 10 grams per mile of carbon monoxide (CO), nearly three times the limit of 3.4 g/mi. While the EPA and Justice Department contended that GM intentionally violated emissions standards, GM said that was "a matter of interpretation." Besides the fine, the second largest Clean Air Act penalty to date in 1995, GM had to spend up to $34 million for anti-pollution programs and recall 470,000 Cadillac 4.9 liter Eldorados, Fleetwoods, DeVilles, and Sevilles. The largest civil penalty under the Clean Air Act was $11.1 million paid by Louisiana-Pacific lumber and paper company.In 1996, Honda reached an agreement with the EPA to extend the warranties and offer free services for 1.6 million 1995 Civics and 1996–1997 model year Acuras, Accords, Civics, Preludes, and Odysseys, because Honda had disabled an engine misfire monitoring device that would have otherwise directed drivers to seek repairs. Honda was required to spend a total of $254 million on the warranties, service, pollution reduction projects, and $12.6 million in civil penalties.Also in 1996, Ford reached a consent decree to spend $7.9 million to address a defeat device on 60,000 1997 model year Econoline vans which used a "sophisticated electronic control strategy designed to enhance fuel economy", disabling NOx emissions controls while the vans were driven at highway speeds, a circumstance not occurring during lab testing to verify emissions control compliance.In 1998, the EPA announced fines totaling $83.4 million against seven diesel engine manufacturers, the largest fine to date, which evaded testing by shutting down emissions controls during highway driving while appearing to comply with lab testing. The seven, Caterpillar, Cummins, Detroit Diesel, Mack Trucks, Navistar International, Renault Trucks, and Volvo Trucks, also agreed to spend more than $1 billion to correct the problem. The trucks used engine ECU software to engage pollution controls during the 20-minute lab tests to verify compliance with the Clean Air Act, but then disable the emissions controls during normal highway cruising, emitting up to three times the maximum allowed NOx pollution.
Timeline:
2000s In 2000 the German motorcycle magazine Motorrad reported on a defeat device delivered with the BMW F 650 GS. BMW responded by issuing an improved injection system as of 2001 and recalling the previous year's models.
Timeline:
2010s In late 2015, the EPA discovered that software used in millions of Volkswagen Group turbocharged direct injection (TDI) diesel engines included features intended to produce misleading results during laboratory emissions testing.On 10 October 2015, Consumer Reports tested a 2015 Jetta TDI and a 2011 Jetta Sportwagen TDI in what they presumed was the special emissions testing, or cheat mode. The 0 to 60 mph (0 to 97 km/h) acceleration time of the 2011 Jetta increased from 9.9 to 10.5 seconds, and the 2015 car's time went from 9.1 to 9.2 seconds. The fuel economy of the 2011 car decreased from 50 to 46 mpg‑US (4.7 to 5.1 L/100 km; 60 to 55 mpg‑imp) and the 2015 car's fuel economy decreased from 53 to 50 mpg‑US (4.4 to 4.7 L/100 km; 64 to 60 mpg‑imp). Consumer Reports's Director of Auto Testing said that while the added fuel costs, "may not be dramatic, these cars may no longer stand out among many very efficient competitors." The method the magazine used to engage cheat mode while driving required making assumptions about the ECU's operations. Because disabling electronic stability control is a necessary step for running a car on a dynamometer, the magazine assumed that this would put the car in cheat mode. In order to keep the electronic stability control from reactivating while driving, they disconnected the cars' rear wheel speed sensors, simulating the inputs the ECU receives while the car is on a stationary test rig, even though it was being driven on the road. Besides front and rear wheel speeds, the EPA had said that steering wheel movement, barometric pressure and duration of engine operation were factors in triggering cheat mode.Fiat Chrysler produced over 100,000 model year 2014 through 2016 Ram 1500 and Jeep Grand Cherokee vehicles for sale in the United States with EcoDiesel engines in which the US EPA and the California Air Resources Board alleged had a defeat device.
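One way to make the detection idea concrete is to compare certification-lab results with on-road measurements, which is effectively what regulators and independent testers did in the cases above; the threshold and numbers in the sketch below are illustrative assumptions, not any agency's actual screening criteria.

```python
# Illustrative screening check: flag vehicles whose real-world emissions are far
# above their certified lab values (the ratio threshold is chosen arbitrarily here).
def flag_suspect(lab_g_per_mile: float, road_g_per_mile: float, ratio_limit: float = 2.0) -> bool:
    """Return True if on-road emissions exceed the lab-certified value by more than ratio_limit."""
    return road_g_per_mile > ratio_limit * lab_g_per_mile

# Hypothetical numbers, loosely in the spirit of the CO case described above
# (a 3.4 g/mi limit met in the lab, roughly 10 g/mi observed on the road).
print(flag_suspect(lab_g_per_mile=3.4, road_g_per_mile=10.0))  # True -> worth investigating
```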
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Antonella Zanna**
Antonella Zanna:
Antonella Zanna Munthe-Kaas is an Italian applied mathematician and numerical analyst whose research includes work on numerical integration of differential equations and applications to medical imaging. She is a professor and head of the mathematics department at the University of Bergen in Norway.
Education:
Zanna was born in Molfetta, in southern Italy, and earned a degree in mathematics from the University of Bari.
She completed her PhD in the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge in 1998. Her dissertation, Numerical Solution of Isospectral Flows, was supervised by Arieh Iserles.
Recognition:
Zanna won the Second Prize in the Leslie Fox Prize for Numerical Analysis in 1997.
She is a member of the Norwegian Academy of Technological Sciences.
Personal life:
Zanna married Norwegian mathematician Hans Munthe-Kaas in 1997; they have two children and a dog.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Technical peer review**
Technical peer review:
In engineering, technical peer review is a well defined review process for finding and correcting defects conducted by a team of peers with assigned roles. Technical peer reviews are carried out by peers representing areas of life cycle affected by material being reviewed (usually limited to 6 or fewer people). Technical peer reviews are held within development phases, between milestone reviews, on completed products, or on completed portions of products. A technical peer review may also be called an engineering peer review, a product peer review, a peer review/inspection or an inspection.
Overview:
The purpose of a technical peer review is to remove defects as early as possible in the development process. By removing defects at their origin (e.g., requirements and design documents, test plans and procedures, software code, etc.), technical peer reviews prevent defects from propagating through multiple phases and work products and reduce the overall amount of rework necessary on projects. Improved team efficiency is a side effect (e.g., by improving team communication, integrating the viewpoints of various engineering specialty disciplines, more quickly bringing new members up to speed, and educating project members about effective development practices).
Overview:
In CMMI, peer reviews are used as a principal means of verification in the Verification process area and as an objective evaluation method in the Process and Product Quality Assurance process area. The results of technical peer reviews can be reported at milestone reviews.
Overview:
Peer reviews are distinct from management reviews, which are conducted by management representatives rather than by colleagues and for management and control purposes rather than for technical evaluation. This is especially true of line managers of the author or other participants in the review. A policy of encouraging management to stay out of peer reviews encourages the peer review team to concentrate on the product being reviewed and not on the people or personalities involved.
Overview:
They are also distinct from software audit reviews, which are conducted by personnel external to the project, to evaluate compliance with specifications, standards, contractual agreements, or other criteria. A software peer review is a type of technical peer review. The IEEE defines formal structures, roles, and processes for software peer reviews.
Roles of participants:
Moderator: Responsible for conducting the technical peer review process and collecting inspection data. The moderator plays a key role in all stages of the process except rework and is typically required to perform several duties during a technical peer review in addition to inspectors' tasks.
Inspectors: Responsible for finding defects in the work product from a general point of view, as well as defects that affect their area of expertise.
Author: Provides information about the work product during all stages of the process. The author is responsible for correcting all major defects and any minor and trivial defects that cost and schedule permit, as well as performing the duties of an inspector.
Reader: Guides the team through the work product during the technical peer review meeting. The reader reads or paraphrases the work product in detail and may also perform the duties of an inspector.
Recorder: Accurately records each defect found during the inspection meeting on the Inspection Defect List, and may also perform the duties of an inspector.
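As an illustration of what the recorder captures, the sketch below models an Inspection Defect List entry as a simple record; the field names and severity levels are assumptions made for the example, not a prescribed format.

```python
from dataclasses import dataclass, field
from typing import List

# Minimal sketch of an Inspection Defect List as maintained by the recorder.
@dataclass
class Defect:
    location: str       # e.g. document section, or source file and line
    description: str
    severity: str       # assumed scale: "major", "minor", "trivial"
    found_by: str       # inspector who raised the defect

@dataclass
class InspectionDefectList:
    work_product: str
    defects: List[Defect] = field(default_factory=list)

    def record(self, defect: Defect) -> None:
        self.defects.append(defect)

    def majors(self) -> List[Defect]:
        # The author must correct all major defects before the review can close.
        return [d for d in self.defects if d.severity == "major"]

idl = InspectionDefectList("requirements-spec-v2")
idl.record(Defect("section 3.1", "ambiguous timing requirement", "major", "inspector-A"))
print(len(idl.majors()))  # 1
```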
Vested interest of reviewers:
There are two philosophies about the vested interest of the inspectors in the product under review. On one hand, project personnel who have a vested interest in the work product under review have the most knowledge of the product and are motivated to find and fix defects. On the other hand, personnel from outside the project who do not have a vested interest in the work product bring objectivity and a fresh viewpoint to the technical peer review team. Each inspector is invited to disclose vested interests to the rest of the technical peer review panel so the moderator can exercise sound judgement in evaluating the inspector's inputs.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Cloud storage gateway**
Cloud storage gateway:
A cloud storage gateway is a hybrid cloud storage device, implemented in hardware or software, which resides at the customer premises and translates cloud storage APIs such as SOAP or REST to block-based storage protocols such as iSCSI or Fibre Channel or file-based interfaces such as NFS or SMB. According to a 2011 report by Gartner Group, cloud gateways were expected to increase the use of cloud storage by lowering monthly charges and eliminating the concern of data security.
Technology:
Features Modern applications (also known as "cloud native" applications) use network-attached storage by means of REST and SOAP, with the hypertext transfer protocol on the protocol layer; the related storage is provided by arrays that offer it as object storage. Classic applications use network-attached storage by means of the Network File System (NFS), iSCSI, or Server Message Block (SMB). To make use of all the advantages of object storage, existing applications would need to be rewritten and new applications would have to be made object-storage aware, which is not the case by default. Cloud storage gateways address this problem: they offer object storage via classic storage protocols such as NFS or SMB (and a very few offer iSCSI as well), so that classic applications can use cloud-native object storage through a cloud storage gateway.
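To make the protocol translation concrete, the following is a minimal sketch, assuming a hypothetical gateway component written in Python with the boto3 library: a file received over a classic file interface and held in the local cache is later destaged as an S3 object. The endpoint URL, bucket name, and key mapping are illustrative assumptions, not details taken from the description above.

```python
# Conceptual sketch only: how a gateway might forward a file, received over a
# classic file protocol (NFS/SMB) and cached locally, to an S3-compatible
# object store. Endpoint, bucket name, and key mapping are illustrative.
import boto3

def destage_file_to_object_store(local_path: str, share_relative_path: str) -> None:
    """Upload a locally cached file as an object whose key mirrors its share path."""
    s3 = boto3.client(
        "s3",
        endpoint_url="https://objectstore.example.com",  # any S3-compatible endpoint
    )
    with open(local_path, "rb") as f:
        s3.put_object(
            Bucket="gateway-bucket",      # hypothetical bucket
            Key=share_relative_path,      # e.g. "projects/report.docx"
            Body=f,
        )

# Example: a file written to the gateway's SMB share is later destaged as an object.
# destage_file_to_object_store("/cache/projects/report.docx", "projects/report.docx")
```

Real gateways wrap this basic translation with caching policies, retries, and metadata handling.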
Technology:
Functionality In enterprise infrastructures NFS is mainly used by Linux systems, whereas Windows systems use SMB. Object storage holds data in the form of objects rather than files, so every cloud storage gateway must cache incoming files and destage them to object storage in a later step. The timing of destaging is determined by the gateway, and a policy engine allows functions such as:
- pinning: bind specific files to the cache and destage them only for mirroring purposes
- content-based destaging: move only files with specific characteristics to object storage, e.g. all MP3 files
- multi-cloud mirroring: mirror all files to two different object stores
- least recently used (LRU): fill the local cache to its maximum, move all files to object storage, and delete files from the cache according to an LRU algorithm
- encryption prior to destaging: files are encrypted on the cloud storage gateway and destaged to object storage in encrypted form
- compression and/or deduplication prior to destaging: files are deduplicated and/or compressed before being destaged
- backup: data is backed up in a native backup format
Combinations of these functions are common.
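As a sketch of how a policy engine might implement the least-recently-used rule described above, the following hypothetical Python class tracks cached files by size and destages and evicts the least recently used entries once the cache exceeds a limit; all class, method, and parameter names are assumptions for illustration.

```python
# Minimal sketch of an LRU-based destaging policy for a hypothetical gateway cache.
# Files are cached locally; once the cache exceeds its capacity, the least recently
# used entries are destaged to object storage and evicted. All names are illustrative.
from collections import OrderedDict
from typing import Callable

class LruDestageCache:
    def __init__(self, capacity_bytes: int, destage: Callable[[str], None]):
        self.capacity_bytes = capacity_bytes
        self.destage = destage              # e.g. an upload-to-object-storage function
        self.entries: "OrderedDict[str, int]" = OrderedDict()  # path -> size in bytes
        self.used_bytes = 0

    def access(self, path: str, size: int) -> None:
        """Record a read/write of a cached file and evict LRU entries if needed."""
        if path in self.entries:
            self.used_bytes -= self.entries.pop(path)
        self.entries[path] = size           # most recently used moves to the end
        self.used_bytes += size
        while self.used_bytes > self.capacity_bytes and len(self.entries) > 1:
            old_path, old_size = self.entries.popitem(last=False)  # least recently used
            self.destage(old_path)          # upload before evicting from the cache
            self.used_bytes -= old_size

# Example: cache = LruDestageCache(10 * 2**30, destage=lambda p: print("destage", p))
```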
Technology:
Extensions Nearly all object storage gateways support the Amazon S3 protocol as a quasi-standard. Some also offer Microsoft Azure Blob, Google Storage, or OpenStack Swift. Most gateways support public cloud storage, e.g. from Amazon or Microsoft, as an object store and Dropbox as a file drive store; many vendors also support private cloud storage, both off-premises and on-premises.
Technology:
Deployment methods There are multiple variants for deploying such gateways, and some vendors support several of them across their product line:
- bare-metal hardware appliance
- software appliance supporting different hypervisors
- software on top of an operating system (aka FUSE based)
Software appliances as well as FUSE-based gateways can also be installed on public cloud infrastructures.
Advantages:
Cloud storage gateways avoid the need to change existing applications by providing a standard interface.
Additionally, IT users are accustomed to existing protocols such as SMB or NFS. They can make use of cloud storage while still using their existing infrastructure (including, for example, Active Directory, LDAP integration, and file share functions).
Advantages:
While cloud storage gateways initially served only a niche, they gained more traction with the rise of multi-cloud technologies. For example, it is possible to run a cloud storage gateway as a software appliance on top of a public or private cloud infrastructure and to offer Docker volume drivers that let containers provision the storage they use automatically and in a consistent form. Such gateways use the hypervisor's disks only as a cache and destage data to the underlying cloud storage according to a least-recently-used algorithm.
Advantages:
The de facto standard for object storage is Amazon S3, which has the most popularity and installed capacity among object stores. Every object storage vendor can (and most of them do) offer Amazon S3 storage, even though there is no real "standard" S3 API: every vendor implements the S3 API slightly differently (as seen from the different cloud storage gateway vendors supporting the "specific" APIs of the different object storage vendors). Since 2018, an increasing number of cloud storage gateways hide this complexity by offering S3 on the northbound side (in networking terminology, southbound refers to the storage used by a gateway, whereas northbound refers to the storage provided by the gateway). As such, the northbound side may offer a richer S3 implementation than the southbound storage supports.
Disadvantages:
By using cloud storage gateways the complexity of object storage is hidden, but so are some of its advantages:
- the ability to scale horizontally
- the ability to add highly efficient metadata to the data content
- the ability to use the extended WORM and archiving capabilities of object storage
As applications change into cloud-aware applications (also called cloud native applications), cloud storage gateways will change from multiprotocol gateways to multi-cloud gateways, providing access to multiple cloud providers as well as multiple southbound protocols and acting as relays between different clouds.
Market:
As of 2020 the cloud storage gateway market was valued at over USD 2 billion and was predicted to reach USD 11 billion by 2026, according to a report by the market research firm Mordor Intelligence.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Alphabet Synthesis Machine**
Alphabet Synthesis Machine:
The Alphabet Synthesis Machine (2002) is a work of interactive art which makes use of genetic algorithms to "evolve" a set of glyphs similar in appearance to a real-world alphabet. Users create initial glyphs and the program takes over. As the creators of the project put it, their goal was "to bring about the specific feeling of semi-sense one experiences when one recognizes—but cannot read—the unfamiliar writing of another culture." The project was developed by Golan Levin, a new-media artist, in collaboration with Cassidy Curtis and Jonathan Feinberg.
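The piece's actual glyph representation and fitness function are not described here, so the following is only a generic, minimal genetic-algorithm sketch in Python showing the mutate-and-select loop such a system could build on; every constant and function in it is an illustrative assumption (here a toy fitness rewards a target ink density in an 8×8 bitmap).

```python
# Toy genetic-algorithm sketch: "evolving" a small binary glyph from a seed.
# Not the project's real representation or fitness; purely illustrative.
import random

GLYPH_SIZE = 8          # 8x8 binary bitmap, stored as a flat list of 0/1
TARGET_INK = 20         # desired number of "on" pixels (illustrative fitness target)

def fitness(glyph):
    return -abs(sum(glyph) - TARGET_INK)          # closer to target ink -> higher fitness

def mutate(glyph, rate=0.05):
    return [1 - px if random.random() < rate else px for px in glyph]

def evolve(seed, generations=200, population_size=30):
    population = [mutate(seed, rate=0.2) for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

# Example: start from a user-drawn seed glyph (here: an empty bitmap).
# best = evolve([0] * (GLYPH_SIZE * GLYPH_SIZE))
```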
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Silicon on insulator**
Silicon on insulator:
In semiconductor manufacturing, silicon on insulator (SOI) technology is the fabrication of silicon semiconductor devices in a layered silicon–insulator–silicon substrate, to reduce parasitic capacitance within the device, thereby improving performance. SOI-based devices differ from conventional silicon-built devices in that the silicon junction is above an electrical insulator, typically silicon dioxide or sapphire (these types of devices are called silicon on sapphire, or SOS). The choice of insulator depends largely on intended application, with sapphire being used for high-performance radio frequency (RF) and radiation-sensitive applications, and silicon dioxide for diminished short-channel effects in other microelectronics devices. The insulating layer and topmost silicon layer also vary widely with application.
Industry need:
SOI technology is one of several manufacturing strategies to allow the continued miniaturization of microelectronic devices, colloquially referred to as "extending Moore's Law" (or "More Moore", abbreviated "MM"). Reported benefits of SOI relative to conventional silicon (bulk CMOS) processing include:
- Lower parasitic capacitance due to isolation from the bulk silicon, which improves power consumption at matched performance
- Resistance to latchup due to complete isolation of the n- and p-well structures
- Higher performance at equivalent VDD, and the ability to work at low VDD
- Reduced temperature dependency due to no doping
- Better yield due to high density and better wafer utilization
- Reduced antenna issues
- No body or well taps are needed
- Lower leakage currents due to isolation, and thus higher power efficiency
- Inherently radiation hardened (resistant to soft errors), reducing the need for redundancy
From a manufacturing perspective, SOI substrates are compatible with most conventional fabrication processes. In general, an SOI-based process may be implemented without special equipment or significant retooling of an existing factory. Among challenges unique to SOI are novel metrology requirements to account for the buried oxide layer and concerns about differential stress in the topmost silicon layer. The threshold voltage of the transistor depends on the history of operation and applied voltage to it, thus making modeling harder.
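One way to see why lower parasitic capacitance improves power consumption at matched performance is the standard expression for dynamic switching power in CMOS; this is textbook reasoning rather than a formula from the source above:

$$P_{\text{dyn}} = \alpha \, C \, V_{DD}^{2} \, f$$

where $\alpha$ is the switching activity factor, $C$ the total switched capacitance (including parasitics), $V_{DD}$ the supply voltage, and $f$ the clock frequency. At the same $V_{DD}$ and $f$, any reduction of the parasitic contribution to $C$ lowers $P_{\text{dyn}}$ proportionally.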
Industry need:
The primary barrier to SOI implementation is the drastic increase in substrate cost, which contributes an estimated 10–15% increase to total manufacturing costs.
SOI transistors:
An SOI MOSFET is a metal–oxide–semiconductor field-effect transistor (MOSFET) in which a semiconductor layer such as silicon or germanium is formed on an insulator layer, which may be a buried oxide (BOX) layer formed in a semiconductor substrate. SOI MOSFET devices are adapted for use by the computer industry, and the buried oxide layer can be used in SRAM designs. There are two types of SOI devices: PDSOI (partially depleted SOI) and FDSOI (fully depleted SOI) MOSFETs. In an n-type PDSOI MOSFET the n-type film sandwiched between the gate oxide (GOX) and the buried oxide (BOX) is thick, so the depletion region cannot cover the whole n region; to this extent a PDSOI device behaves like a bulk MOSFET, although it still has some advantages over bulk MOSFETs. In FDSOI devices the film is very thin, so the depletion region covers the whole channel region. In FDSOI the front gate (GOX) supports fewer depletion charges than in bulk, so more inversion charge is available, resulting in higher switching speeds. The limitation of the depletion charge by the BOX suppresses the depletion capacitance and therefore substantially reduces the subthreshold swing, allowing FDSOI MOSFETs to work at lower gate bias and hence at lower power. The subthreshold swing can reach the theoretical minimum for a MOSFET at 300 K, which is about 60 mV/decade; this ideal value was first demonstrated using numerical simulation. Other drawbacks of bulk MOSFETs, such as threshold voltage roll-off, are also reduced in FDSOI, since the source and drain electric fields cannot interfere through the BOX. The main problem in PDSOI is the floating body effect (FBE), which arises because the film is not connected to any of the supplies.
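The 60 mV/decade figure quoted above follows from the standard subthreshold swing expression for a MOSFET (textbook device physics, stated here for context):

$$SS = \ln(10)\,\frac{kT}{q}\left(1 + \frac{C_{\text{dep}}}{C_{\text{ox}}}\right)$$

At $T = 300\,\mathrm{K}$, $kT/q \approx 25.9\,\mathrm{mV}$, and in the ideal limit $C_{\text{dep}}/C_{\text{ox}} \rightarrow 0$, which fully depleted devices approach because the BOX suppresses the depletion capacitance, $SS \approx 2.303 \times 25.9\,\mathrm{mV} \approx 60\,\mathrm{mV}$ per decade of drain current.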
Manufacture of SOI wafers:
SiO2-based SOI wafers can be produced by several methods: SIMOX (Separation by IMplantation of OXygen) uses an oxygen ion beam implantation process followed by high-temperature annealing to create a buried SiO2 layer.
Wafer bonding – the insulating layer is formed by directly bonding oxidized silicon with a second substrate. The majority of the second substrate is subsequently removed, the remnants forming the topmost Si layer.
One prominent example of a wafer bonding process is the Smart Cut method developed by the French firm Soitec which uses ion implantation followed by controlled exfoliation to determine the thickness of the uppermost silicon layer.
NanoCleave is a technology developed by Silicon Genesis Corporation that separates the silicon via stress at the interface of silicon and silicon-germanium alloy.
ELTRAN is a technology developed by Canon which is based on porous silicon and water cut.
Seed methods - wherein the topmost Si layer is grown directly on the insulator. Seed methods require some sort of template for homoepitaxy, which may be achieved by chemical treatment of the insulator, an appropriately oriented crystalline insulator, or vias through the insulator from the underlying substrate. An exhaustive review of these various manufacturing processes may be found in reference
Use in the microelectronics industry:
IBM began to use SOI in the high-end RS64-IV "Istar" PowerPC-AS microprocessor in 2000. Other examples of microprocessors built on SOI technology include AMD's 130 nm, 90 nm, 65 nm, 45 nm and 32 nm single-, dual-, quad-, six- and eight-core processors since 2001. Freescale adopted SOI in its PowerPC 7455 CPU in late 2001 and ships SOI products in its 180 nm, 130 nm, 90 nm and 45 nm lines. The 90 nm PowerPC- and Power ISA-based processors used in the Xbox 360, PlayStation 3, and Wii use SOI technology as well. Competitive offerings from Intel, however, continue to use conventional bulk CMOS technology for each process node, focusing instead on other avenues such as HKMG and tri-gate transistors to improve transistor performance. In January 2005, Intel researchers reported on an experimental single-chip silicon rib waveguide Raman laser built using SOI. As for the traditional foundries, in July 2006 TSMC claimed no customer wanted SOI, but Chartered Semiconductor devoted a whole fab to SOI.
Use in high-performance radio frequency (RF) applications:
In 1990, Peregrine Semiconductor began development of an SOI process technology utilizing a standard 0.5 μm CMOS node and an enhanced sapphire substrate. Its patented silicon on sapphire (SOS) process is widely used in high-performance RF applications. The intrinsic benefits of the insulating sapphire substrate allow for high isolation, high linearity and electro-static discharge (ESD) tolerance. Multiple other companies have also applied SOI technology to successful RF applications in smartphones and cellular radios.
Use in photonics:
SOI wafers are widely used in silicon photonics. The crystalline silicon layer on insulator can be used to fabricate optical waveguides and other optical devices, either passive or active (e.g. through suitable implantations). The buried insulator enables propagation of infrared light in the silicon layer on the basis of total internal reflection. The top surface of the waveguides can be either left uncovered and exposed to air (e.g. for sensing applications), or covered with a cladding, typically made of silica.
Disadvantages:
The major disadvantage of SOI technology compared to conventional bulk silicon processing is the increased cost of manufacturing. As of 2012 only IBM and AMD used SOI as the basis for high-performance processors, while other manufacturers (Intel, TSMC, GlobalFoundries, etc.) used conventional silicon wafers to build their CMOS chips.
SOI market:
As of 2020 the market utilizing the SOI process was projected to grow by approximately 15% over the next five years, according to the Market Research Future group.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Neuropeptide S**
Neuropeptide S:
Neuropeptide S (NPS) is a neuropeptide found in human and mammalian brain, mainly produced by neurons in the amygdala and between Barrington's nucleus and the locus coeruleus, although NPS-responsive neurons extend projections into many other brain areas. NPS binds specifically to a G protein-coupled receptor, NPSR. Animal studies show that NPS suppresses anxiety and appetite, induces wakefulness and hyperactivity, including hyper-sexuality, and plays a significant role in the extinction of conditioned fear. It has also been shown to significantly enhance dopamine activity in the mesolimbic pathway, and inhibits motility and increases permeability in neurocrine fashion acting through NO in the myenteric plexus in rats and humans.
Synthetic ligands:
The non-peptide NPS receptor antagonist SHA-68 blocks the effects of NPS in animals and is anxiogenic. Several peptide derived NPS agonists and antagonists have also been developed.
Peptide sequence:
Mature neuropeptide S sequences have been determined in several representative species in which it is expressed. According to Pfam's HMM logo, there is a conserved "KR" cleavage site immediately N-terminal to the C-terminal mature peptide.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**VSTa**
VSTa:
Valencia's Simple Tasker (VSTa) is an operating system with a microkernel architecture, with all device drivers and file systems residing in user space. It mostly complies with the Portable Operating System Interface (POSIX), except where such compliance interferes with extensibility and modularity. It is conceptually inspired by QNX and Plan 9 from Bell Labs. It was written by Andy Valencia and released under the GNU General Public License (GPL). As of 2020, the licensing for VSTa is copyleft.
VSTa:
It was originally written to run on Intel 80386 hardware, and then was ported to several different platforms, e.g., Motorola 68030 based Amigas.
VSTa is no longer developed. A fork, named Flexible Microkernel Infrastructure/Operating System (FMI/OS), did not make a release.
User interface:
The default graphical user interface provided as a tar-ball with the system was ManaGeR (MGR).
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|