**Cooper (profession)**
A cooper is a person trained to make wooden casks, barrels, vats, buckets, tubs, troughs and other similar containers from timber staves that were usually heated or steamed to make them pliable.
Journeymen coopers also traditionally made wooden implements, such as rakes and wooden-bladed shovels. In addition to wood, other materials, such as iron, were used in the manufacturing process. The trade is the origin of the surname Cooper.
Etymology:
The word "cooper" is derived from Middle Dutch or Middle Low German kūper 'cooper' from kūpe 'cask', in turn from Latin cupa 'tun, barrel'. Everything a cooper produces is referred to collectively as cooperage. A cask is any piece of cooperage containing a bouge, bilge, or bulge in the middle of the container. A barrel is a type of cask, so the terms "barrel-maker" and "barrel-making" refer to just one aspect of a cooper's work. The facility in which casks are made is also referred to as a cooperage.
As a name: In much the same way as the trade or vocation of smithing produced the common English surname Smith and the German name Schmidt (see occupational surname), the cooper trade is also the origin of the English name Cooper.
It is also the origin of the French Tonnelier and Tonnellier; Greek Varelas (Βαρελάς); Danish Bødker; German Binder, Fassbender or Fassbinder (Faßbinder, literally 'cask-binder'), Böttcher ('tub-maker'), Scheffler, and Kübler; Dutch Kuiper and Cuypers; Lithuanian Kubilius; Latvian Mucenieks; Armenian Տակառագործյան; Hungarian Kádár, Bognár and Bodnár; Polish Bednarz, Bednarski, and Bednarczyk; Czech Bednář; Romanian Dogaru and Butnaru; Ukrainian Bondar, Bodnaruk, and Bodnarchuk, and Bondarenko (Бондаренко); Russian and Ukrainian Bondarev (Бондарев) and Bocharov (Бочаров); Yiddish Bodner; Portuguese Tanoeiro and Toneleiro; Spanish Cubero, Tonelero, and (via Greek) Varela; Bulgarian Bachvarov (Бъчваров); Macedonian Bacvarovski (Бачваровски); Croatian Bačvar; Slovene Pintar (from German Binder) and Italian Bottai (from botte).
History:
Traditionally, a cooper is someone who makes wooden, staved vessels, held together with wooden or metal hoops and possessing flat ends or heads. Examples of a cooper's work include casks, barrels, buckets, tubs, butter churns, vats, hogsheads, firkins, tierces, rundlets, puncheons, pipes, tuns, butts, troughs, pins and breakers. Traditionally, a hooper was the man who fitted the wooden or metal hoops around the barrels or buckets that the cooper had made, essentially an assistant to the cooper. The English name Hooper is derived from that profession. With time, many coopers took on the role of the hooper themselves.
Antiquity: An Egyptian wall-painting in the tomb of Hesy-Ra, dating to 2600 BC, shows a wooden tub made of staves, bound together with wooden hoops, and used for measuring. Another Egyptian tomb painting dating to 1900 BC shows a cooper and tubs made of staves in use at the grape harvest. Palm-wood casks were reported in use in ancient Babylon. In Europe, buckets and casks dating to 200 BC have been found preserved in the mud of lake villages. A lake village near Glastonbury dating to the late Iron Age has yielded one complete tub and a number of wooden staves.
The Roman historian Pliny the Elder reports that cooperage in Europe originated with the Gauls in Alpine villages where they stored their beverages in wooden casks bound with hoops. Pliny identified three types of coopers: ordinary coopers, wine coopers and coopers who made large casks. Large casks contained more and longer staves and were correspondingly more difficult to assemble. Roman coopers tended to be independent tradesmen, passing their skills on to their sons. The Greek geographer Strabo records wooden pithoi (casks) were lined with pitch to stop leakage and preserve the wine. Barrels were sometimes used for military purposes. Julius Caesar used catapults to hurl barrels of burning tar into towns under siege to start fires. Empty barrels were sometimes used to make pontoon bridges to cross rivers.
Empty casks were used to line the walls of shallow wells from at least Roman times. Such casks were found in 1897 during archaeological excavation of Roman Silchester in Britain. They were made of Pyrenean silver fir and the staves were one and a half inches thick and featured grooves where the heads fitted. They had Roman numerals scratched on the surface of each stave to help with reassembly.
Middle Ages to today: In Anglo-Saxon Britain wooden barrels were used to store ale, butter, honey and mead. Drinking vessels were also made from small staves of oak, yew or pine. These items required considerable craftsmanship to hold liquids and might be bound with finely worked precious metals. They were highly valued items and were sometimes buried with the dead as grave goods. Churns, buckets and tubs made from staves have been excavated from peat bogs and lake villages in Europe. A large keg and a bucket were found in the Viking Gokstad ship excavated near Oslo Fiord in 1880.
There were four divisions in the cooper's craft. The "dry" or "slack" cooper made containers that would be used to ship dry goods such as cereals, nails, tobacco, fruits, and vegetables. The "dry-tight" cooper made casks designed to keep dry goods in and moisture out. Gunpowder and flour casks are examples of a dry-tight cooper's work. The "white" cooper made straight-staved containers like washtubs, buckets, and butter churns, which would hold water and other liquids but did not allow shipping of the liquids. Usually there was no bending of wood involved in white cooperage. The "wet" or "tight" cooper made casks for long-term storage and transportation of liquids that could even be under pressure, as with beer. The "general" cooper worked on ships, on the docks, in breweries, wineries and distilleries, and in warehouses, and was responsible for cargo while in storage or transit.
Ships, in the age of sail, provided much work for coopers. They made water and provision casks, the contents of which sustained crew and passengers on long voyages. They also made barrels to contain high-value commodities, such as wine and sugar. The proper stowage of casks on ships about to sail was an important stevedoring skill. Casks of various sizes were used to accommodate the sloping walls of the hull and make maximum use of limited space. Casks also had to be tightly packed, to ensure they did not move during the voyage and endanger the ship, crew and cask contents. Whaling ships in particular, featuring long voyages and large crews, needed many casks – for salted meat, other provisions and water – and to store the whale oil. Sperm whale oil was a particularly difficult substance to contain, due to its highly viscous nature, and oil coopers were perhaps the most skilled tradesmen in pre-industrial cooperage. Whaling ships usually carried a cooper on board, to assemble shooks (disassembled barrels) and maintain casks. Coopers in Britain started to organise as early as 1298. The Worshipful Company of Coopers is one of the oldest Livery Companies in London. It still survives today although it is now largely a charitable organisation.
Many coopers worked for breweries. They made the large wooden vats in which beer was brewed. They also made the wooden kegs in which the beer was shipped to liquor retailers. Beer kegs had to be particularly strong in order to contain the pressure of the fermenting liquid, and the rough handling they received when transported, sometimes over long distances, to pubs where they were rolled into tap-rooms or were lowered into cellars.
Prior to the mid-20th century, the cooper's trade flourished in the United States; a dedicated trade journal was published, the National Cooper's Journal, with advertisements from firms that supplied everything from barrel staves to purpose-built machinery.
Plastics, stainless steel, pallets, and corrugated cardboard replaced most wooden containers during the last half of the 20th century, and largely made the cooperage trade obsolete.
In the 21st century, coopers mostly operate barrel-making machinery and assemble casks for the wine and spirits industry. Traditionally, the staves were heated to make them easier to bend. This is still done, but now because the slightly toasted interior of the staves imparts a certain flavour over time to the wine or spirit contents that is much admired by experts. In England, the trade of master cooper is dwindling; but in Scotland there are several active cooperages that provide barrels to the whisky industry. It is thought that the last remaining master cooper in England works for Theakston Brewery in Masham. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MKL1**
MKL/megakaryoblastic leukemia 1 (also termed MRTFA/myocardin related transcription factor A) is a protein that in humans is encoded by the MKL1 gene.
Function:
The protein encoded by this gene is regulated by the actin cytoskeleton and is shuttled between the cytoplasm and the nucleus as a result of actin dynamics. In the nucleus, it coactivates the transcription factor serum response factor, a key regulator of smooth muscle cell differentiation, in an interaction mediated by its Basic domain. It is closely related to MKL2 and myocardin, with which it shares five key conserved structural domains.
Clinical significance:
This gene is involved in a specific translocation event that creates a fusion of this gene and the RNA-binding motif protein-15 gene. This translocation has been associated with acute megakaryocytic leukemia. It also functions in the process of normal megakaryocyte maturation.
Research:
Elevated MKL1 expression is observed in breast cancer and can predict chemosensitivity and patient survival. MKL1 may be a promising biomarker of clinical value for breast cancer.
**Connected Device Configuration**
The Connected Device Configuration (CDC) is a specification of a framework for Java ME applications describing the basic set of libraries and virtual-machine features that must be present in an implementation. The CDC is combined with one or more profiles to give developers a platform for building applications on embedded devices ranging from pagers up to set-top boxes. The CDC was developed under the Java Community Process as JSR 36 (CDC 1.0.2) and JSR 218 (CDC 1.1.2).
Typical requirements:
Devices that support CDC typically include a 32-bit CPU with about 2 MB of RAM, and 2.5 MB of ROM available for the Java application environment. The reference implementations for CDC profiles are based on Linux running on an Intel-compatible PC, and optimized implementations are available for a variety of other CPUs and operating systems.
Profiles:
A profile is a set of APIs that support devices with different capabilities and resources within the CDC framework to provide a complete Java application environment. Three profiles are available, which build on each other incrementally and allow application developers to choose the appropriate programming model for a particular device.
Foundation Profile This is the most basic of the CDC family of profiles. Foundation Profile is a set of Java APIs tuned for low-footprint devices that have limited resources that do not need a graphical user interface system. It provides a complete Java ME application environment for consumer products and embedded devices but without a standards-based GUI system. Version 1.1.2 is specified in JSR 219 and implements a subset of Java SE 1.4.2, including a set of security-related optional packages, such as Java Authentication and Authorization Service (JAAS), Java Secure Socket Extension (JSSE), and Java Cryptography Extension (JCE).
Personal Basis Profile The Personal Basis Profile provides a superset of the Foundation Profile APIs and supports a similar set of devices, with lightweight graphical user interface requirements. A framework for building lightweight graphical user interface components is provided with support for some AWT classes. There are no heavyweight GUI components provided because these components assume the availability of a pointing device such as a mouse. The specification is described in JSR 217 and is used for products that require a standards-based graphical user interface but without full AWT compatibility. The Xlet application programming model is used for application development within this profile, including advanced content on Blu-ray discs conforming to the BD-J specification.
Personal Profile The Personal Profile extends the Personal Basis Profile with a GUI toolkit based on AWT. It provides a complete Java ME application environment with full AWT support and is intended for higher end devices, such as PDAs, smart communicators, set-top boxes, game consoles, automobile dashboard electronics, and so on. This is the recommended profile for porting of legacy PersonalJava-based applications. The specification is described in JSR 62 and uses the Applet programming model for application development.
Optional Packages:
CDC supports a number of optional packages that allow developers to access specific pieces of extra functionality within the restricted resource constraints of a Java ME device.
The RMI Optional Package provides a subset of Java SE RMI for distributed-application and network communication.
The JDBC Optional Package provides a subset of the JDBC 3.0 API for accessing data sources, including spreadsheets, flat files and relational databases.
**MMS22L**
Methyl methanesulfonate-sensitivity protein 22-like also known as MMS22-like, DNA repair protein is a protein that in humans is encoded by the MMS22L gene.
Model organisms:
Model organisms have been used in the study of MMS22L function. A conditional knockout mouse line, called Mms22ltm1a(EUCOMM)Wtsi, was generated as part of the International Knockout Mouse Consortium program, a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty-six tests were carried out on mutant mice and two significant abnormalities were observed. No homozygous mutant embryos were identified during gestation, and therefore none survived until weaning. The remaining tests were carried out on heterozygous mutant adult mice; no additional significant abnormalities were observed in these animals.
**Bovine alphaherpesvirus 2**
Bovine alphaherpesvirus 2 (BoHV2) is a virus of the family Herpesviridae that causes two diseases in cattle, bovine mammillitis and pseudo-lumpy skin disease. BoHV2 is similar in structure to human herpes simplex virus. Pseudo-lumpy skin disease was originally discovered in South Africa, where a similar but more serious disease caused by a poxvirus, lumpy skin disease, is also prevalent. Symptoms include fever and skin nodules on the face, back, and perineum. The disease heals within a few weeks. Bovine mammillitis is characterized by lesions restricted to the teats and udder. BoHV2 probably spreads through an arthropod vector, but can also be spread through milkers and milking machines. A review publication from 2011 presents a series of controversial but scientifically based conclusions concerning the pathogenesis and epidemiology of the infection, among these that spread among cattle is preferably by the respiratory route, and that skin lesions result from viremic spread to epidermal foci and inflammation due to complement activation by the classical pathway at sites of virus propagation after formation of early antibody to BoHV2. Lesions may be aggravated by low skin temperature (e.g. in edematous or hairless skin areas) causing reduced blood circulation and hampered removal of cell-toxic inflammatory substances.
**α-Aminobutyric acid**
α-Aminobutyric acid (AABA), also known as homoalanine in biochemistry, is a non-proteinogenic alpha amino acid with chemical formula C4H9NO2. Its straight two-carbon side chain is one carbon longer than that of alanine, hence the prefix homo-.
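As a quick arithmetic check of the formula C4H9NO2 stated above, the molar mass can be summed from standard atomic weights (the weight values below are approximations, not taken from this article):

```python
# Approximate standard atomic weights in g/mol (assumed values, not from the text).
weights = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
composition = {"C": 4, "H": 9, "N": 1, "O": 2}  # C4H9NO2

# Molar mass is the weight of each element times its count, summed.
molar_mass = sum(weights[el] * n for el, n in composition.items())
# molar_mass comes out to roughly 103.12 g/mol.
```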
Homoalanine is biosynthesised by transaminating oxobutyrate, a metabolite in isoleucine biosynthesis. It is used by nonribosomal peptide synthases. One example of a nonribosomal peptide containing homoalanine is ophthalmic acid, which was first isolated from calf lens.
α-Aminobutyric acid is one of the three isomers of aminobutyric acid. The other two are the neurotransmitter γ-aminobutyric acid (GABA) and β-aminobutyric acid (BABA), which is known for inducing plant disease resistance.
The conjugate base of α-aminobutyric acid is the carboxylate α-aminobutyrate.
**CJK Compatibility Ideographs**
CJK Compatibility Ideographs is a Unicode block created to contain Han characters that were encoded in multiple locations in other established character encodings, in addition to their CJK Unified Ideographs assignments, in order to retain round-trip compatibility between Unicode and those encodings. Such encodings include:
South Korean KS X 1001:1998 (U+F900–U+FA0B, 268 characters)
Taiwanese Big5 (U+FA0C–U+FA0D, 2 characters)
Japanese IBM 32 (CP932 variant; U+FA0E–U+FA2D, 32 characters, 12 of which are unified)
South Korean KS X 1001:2004 (U+FA2E–U+FA2F, 2 characters)
Japanese JIS X 0213 (U+FA30–U+FA6A, 59 characters)
Japanese ARIB STD-B24 (U+FA6B–U+FA6D, 3 characters)
North Korean KPS 10721-2000 (U+FA70–U+FAD9, 106 characters)
In ensuing versions of the standard, more characters have been added to the block. These even include a few regular ideographs (with the Unified_Ideograph property) that are not found in the CJK Unified Ideographs block. As of now, all the regular ideographs are from the IBM 32 source. The IBM 32 encoding already contained duplicates, warranting the need to encode the duplicates in this block, but it also included twelve rare kokuji characters. These are: U+FA0E–U+FA0F, U+FA11, U+FA13–U+FA14, U+FA1F, U+FA21, U+FA23–U+FA24, and U+FA27–U+FA29. The block has dozens of ideographic variation sequences registered in the Unicode Ideographic Variation Database (IVD).
These sequences specify the desired glyph variant for a given Unicode character.
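The round-trip nature of this block is easy to observe with Python's standard `unicodedata` module: a compatibility ideograph carries a singleton canonical decomposition to its unified counterpart, so every normalization form (even NFC) replaces it, which is one reason variation sequences are needed to request the original glyph. The twelve kokuji, being regular unified ideographs, have no decomposition at all.

```python
import unicodedata

# U+F900 duplicates the unified ideograph U+8C48 for KS X 1001 round-tripping.
# Its singleton canonical decomposition means every normalization form,
# including NFC, replaces it with U+8C48.
assert unicodedata.normalize("NFC", "\uF900") == "\u8C48"
assert unicodedata.decomposition("\uF900") == "8C48"

# U+FA0E is one of the twelve IBM 32 kokuji that are regular unified
# ideographs in their own right: it has no decomposition.
assert unicodedata.decomposition("\uFA0E") == ""
```

This is why text that merely passes through a normalizing pipeline loses the distinction between U+F900 and U+8C48, while U+FA0E survives unchanged.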
History:
The following Unicode-related documents record the purpose and process of defining specific characters in the CJK Compatibility Ideographs block:
**Co-occurrence matrix**
A co-occurrence matrix or co-occurrence distribution (also referred to as a gray-level co-occurrence matrix, GLCM) is a matrix that is defined over an image to be the distribution of co-occurring pixel values (grayscale values, or colors) at a given offset. It is used as an approach to texture analysis, with various applications especially in medical image analysis.
Method:
Given a grey-level image I, a co-occurrence matrix computes how often pairs of pixels with a specific value and offset occur in the image. The offset, (Δx, Δy), is a position operator that can be applied to any pixel in the image (ignoring edge effects): for instance, (1, 2) could indicate "one down, two right".
An image with p different pixel values will produce a p×p co-occurrence matrix, for the given offset.
The (i, j)-th value of the co-occurrence matrix gives the number of times in the image that the i-th and j-th pixel values occur in the relation given by the offset. For an image with p different pixel values, the p×p co-occurrence matrix C is defined over an n×m image I, parameterized by an offset (Δx, Δy), as:

C_{\Delta x,\Delta y}(i,j) = \sum_{x=1}^{n} \sum_{y=1}^{m} \begin{cases} 1, & \text{if } I(x,y)=i \text{ and } I(x+\Delta x,\, y+\Delta y)=j \\ 0, & \text{otherwise} \end{cases}

where: i and j are the pixel values; x and y are the spatial positions in the image I; the offset (Δx, Δy) defines the spatial relation for which this matrix is calculated; and I(x, y) indicates the pixel value at pixel (x, y). The 'value' of the image originally referred to the grayscale value of the specified pixel, but could be anything, from a binary on/off value to 32-bit color and beyond. (Note that 32-bit color will yield a 2^32 × 2^32 co-occurrence matrix!) Co-occurrence matrices can also be parameterized in terms of a distance, d, and an angle, θ, instead of an offset (Δx, Δy). Any matrix or pair of matrices can be used to generate a co-occurrence matrix, though their most common application has been in measuring texture in images, so the typical definition, as above, assumes that the matrix is an image.
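The definition can be sketched in a few lines of plain Python (the function name and the row/column offset convention are illustrative, not from the article):

```python
def cooccurrence_matrix(image, offset, levels):
    """Count, for each pixel-value pair (i, j), how often value j occurs at
    the given (row, column) offset from a pixel holding value i.

    `image` is a list of rows of integer pixel values in range(levels).
    Pairs whose second pixel falls outside the image are skipped, matching
    the "ignoring edge effects" caveat in the text.
    """
    dr, dc = offset
    C = [[0] * levels for _ in range(levels)]
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < len(image) and 0 <= c2 < len(image[0]):
                C[value][image[r2][c2]] += 1
    return C

# A 3x3 image with p = 3 pixel values, and the offset "same row, one right":
img = [[0, 0, 1],
       [1, 2, 2],
       [0, 1, 2]]
C = cooccurrence_matrix(img, (0, 1), 3)
# Six horizontal pairs exist: (0,0) once, (0,1) twice, (1,2) twice, (2,2) once.
```

Note that the matrix is p×p regardless of image size; only the counts grow with the image.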
It is also possible to define the matrix across two different images. Such a matrix can then be used for color mapping.
Aliases:
Co-occurrence matrices are also referred to as: GLCMs (gray-level co-occurrence matrices) GLCHs (gray-level co-occurrence histograms) spatial dependence matrices
Application to image analysis:
Whether considering the intensity or grayscale values of the image or various dimensions of color, the co-occurrence matrix can measure the texture of the image. Because co-occurrence matrices are typically large and sparse, various metrics of the matrix are often taken to get a more useful set of features. Features generated using this technique are usually called Haralick features, after Robert Haralick. Texture analysis is often concerned with detecting aspects of an image that are rotationally invariant. To approximate this, the co-occurrence matrices corresponding to the same relation, but rotated at various regular angles (e.g. 0, 45, 90, and 135 degrees), are often calculated and summed.
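As a minimal sketch of this step (function names are hypothetical), two common Haralick-style features, contrast and homogeneity, can be computed from a normalized co-occurrence matrix, and the rotation-invariance trick of summing the matrices for several angles reduces to element-wise addition:

```python
def haralick_features(C):
    """Return (contrast, homogeneity) for a square co-occurrence count matrix C."""
    total = float(sum(map(sum, C)))
    p = [[v / total for v in row] for row in C]  # normalize counts to probabilities
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))
    return contrast, homogeneity

def summed(matrices):
    """Element-wise sum of co-occurrence matrices computed at rotated offsets,
    a common approximation of rotational invariance."""
    return [[sum(vals) for vals in zip(*rows)] for rows in zip(*matrices)]

contrast, homogeneity = haralick_features([[2, 1], [0, 3]])
```

High contrast flags frequent large jumps between neighboring pixel values; high homogeneity flags the opposite, mass concentrated near the matrix diagonal.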
Application to image analysis:
Texture measures like the co-occurrence matrix, wavelet transforms, and model fitting have found application in medical image analysis in particular.
Other applications:
Co-occurrence matrices are also used to model word co-occurrence in natural language processing (NLP).
**Desmethylchlorotrianisene**
Desmethylchlorotrianisene (DMCTA) is a nonsteroidal estrogen which is thought to be the major active metabolite of chlorotrianisene (CTA; TACE). It is a 1:1 mixture of cis and trans isomers. DMCTA is produced from CTA via mono-O-demethylation catalyzed by cytochrome P450 enzymes in the liver. CTA is thought to act as a long-lasting prodrug of DMCTA.
**Janus kinase 3 inhibitor**
Janus kinase 3 inhibitors, also called JAK3 inhibitors, are a new class of immunomodulatory agents that inhibit Janus kinase 3. They are used for the treatment of autoimmune diseases. The Janus kinases are a family of four nonreceptor tyrosine-protein kinases, JAK1, JAK2, JAK3, and TYK2. They signal via the JAK/STAT pathway, which is important in regulating the immune system. Expression of JAK3 is largely restricted to lymphocytes (predominant expression is in the hematopoietic system), while the others are ubiquitously expressed, so selective targeting of JAK3 over the other JAK isozymes is attractive as a possible treatment of autoimmune diseases.
Mechanism of action:
Janus kinase 3 inhibitors work by inhibiting the action of the enzyme Janus kinase 3, so they interfere with the JAK-STAT signaling pathway. JAK3 is required for signaling by cytokines through the common γ chain of the interleukin receptors for IL-2, IL-4, IL-7, IL-9, IL-15, and IL-21. However, JAK1 is also required, as the two kinases cooperate for signaling. Signaling leads to phosphorylation and dimerization of the STAT proteins. When activated, they translocate into the nucleus, where they modulate gene transcription. By selectively inhibiting JAK3, downstream signaling can be blocked.
JAK3 is crucial in transmitting signals from cytokines that are responsible for either T-cell proliferation, differentiation, or development. It is also of high importance in the development of B-cells and NK-cells. Inhibition of JAK3, then, could prove to be a powerful immunosuppressant. Since JAK3 is restricted to the immune system, while the other JAKs such as JAK1 are much more broadly expressed, selective targeting of JAK3 could decrease possible adverse effects and improve tolerability. As an immunosuppressant, JAK3 inhibitors could aid in autoimmune diseases such as rheumatoid arthritis, psoriasis, or other diseases where the immune system fails to distinguish self from nonself and starts attacking self cells.
Discovery and development:
Discovery: One of the first JAKs to be targeted in drug development for medical use was JAK3. Immune system depression is observed in patients with JAK3 defects. The role of JAK3 is largely restricted to the immune system, so this enzyme was thought to be a good target for a selective immunosuppressant. Whether inhibiting JAK3 alone is sufficient to suppress cytokine signaling is uncertain, as signaling can also be driven by stimulation of JAK1. Whether inhibiting JAK3 is as efficient as pan-JAK inhibition is under study. Many compounds with high affinity and possible selectivity for JAK3 have been discovered with high-throughput screening.
Development: Currently, much attention is focused on developing Janus kinase inhibitors as drugs for immune diseases including inflammatory bowel diseases, rheumatoid arthritis, alopecia areata, and psoriasis.
The first JAK inhibitor approved for the treatment of rheumatoid arthritis was tofacitinib. It has also shown promising results in other autoimmune disorders. Initially, tofacitinib was thought to be a selective JAK3 inhibitor, but it was later found to be a potent inhibitor of JAK1 and JAK2. The value of developing selective JAK3 targeting over the other JAKs is that JAK3 expression is restricted to the immune system, while the other JAKs are much more broadly expressed. Since JAK3 is not as ubiquitously expressed, selective targeting could improve tolerability, and decrease possible adverse effects and safety concerns. For example, dual inhibition of JAK1 and JAK3 might increase bacterial and viral infections because of a broader immunosuppressive effect. Inhibition of JAK2 has been linked to adverse effects such as anaemia and generalised leukopenia. Developing sufficiently selective JAK3 inhibitors has been difficult. One of the reasons is the small variation in the ATP binding site of the different JAKs. Another problem is that JAK3 has a higher affinity for ATP than the other JAKs, which can be a reason for poor translation from in vitro enzymatic assay studies to cellular system studies. An example of this is decernotinib, which showed 41-fold selectivity for JAK3 vs JAK1 in in vitro enzyme assays, while the selectivity for JAK3 was not maintained in cellular assays, where it showed a slight preference for JAK1.
Structure-activity relationship: JAK3 inhibitors target the catalytic ATP-binding site of JAK3, and various moieties have been used to get stronger affinity and selectivity for the ATP-binding pocket. The base often seen in compounds with selectivity for JAK3 is pyrrolopyrimidine, as it binds to the same region of the JAKs as the purine of ATP. Another ring system that has been used in JAK3 inhibitor derivatives is 1H-pyrrolo[2,3-b]pyridine, as it mimics the pyrrolopyrimidine scaffold. Sequence alignment has shown that the ATP binding pockets of the JAKs are almost identical and only a few features distinguish JAK3 from the rest. One of these differences is the presence of a cysteine residue (Cys909) in the front region of the ATP binding pocket, where the other JAKs have serine at that same position. Only 10 other kinases possess a cysteine at that location, making the cysteine even more intriguing as a target for better selectivity. Structures that can react with this cysteine via an electrophilic acrylamide warhead have therefore been of interest, as they should ideally react only with the proximal cysteine. Covalent cysteine targeting can be tricky, as off-target reactions can lead to adverse effects, but as the JAKs resynthesize rapidly, covalent inhibition could be necessary to extend the pharmacodynamics. To compare inhibitors, the parameter of choice is IC50; by measuring IC50 for the different JAKs, selectivity can be determined. In the kinase family, JAK3 has the highest affinity for ATP, so measuring IC50 at high concentrations of ATP shows whether the inhibitor can compete with ATP for the binding site.
Medical use:
Several therapeutic options exist for the treatment of autoimmune diseases, but the search continues for safer, more effective, and more convenient treatments. Inhibition of JAK3 has been shown in research to be a good target for immunosuppression. At the moment the only approved indication for a JAK3 inhibitor, rheumatoid arthritis, is for the nonselective JAK1/JAK3 inhibitor tofacitinib. Other indications, such as psoriasis, alopecia areata, and ulcerative colitis, are in clinical trials. Cytokines have an important role in autoimmune diseases, and as the common γ chain cytokines IL-2, IL-4, IL-7, IL-9, IL-15, and IL-21 signal via JAK3, the inhibition of JAK3 and blocking of the signaling of these cytokines could affect many immune diseases and lead to the development of new effective immunosuppressive drugs.
List of JAK3 inhibitors:
Nonselective JAK3 inhibitor: Tofacitinib (CP-690,550), an inhibitor of JAK1/JAK3, was granted FDA approval in 2012 to treat rheumatoid arthritis.
JAK3 inhibitors in clinical trials: Decernotinib (VX-509), a JAK3 inhibitor, has shown efficacy in a phase IIa study in rheumatoid arthritis.
PF-06651600, an irreversible covalent JAK3-selective inhibitor, is in phase II trials at Pfizer for alopecia areata, rheumatoid arthritis, and ulcerative colitis.
**OffOn**
OffOn is an experimental film created by Scott Bartlett and released in 1968.
Summary:
It is most notable for being one of the first examples in which film and video technologies were combined. The nine-minute film combines a number of video loops which have been altered through re-photography or video colorization, and utilizes an electronic sound track to create its unique effect.
Legacy:
In 2004, the film was selected for preservation in the United States National Film Registry by the Library of Congress as being "culturally, historically, or aesthetically significant". It also appeared in the 1990 Oscar-nominated documentary film Berkeley in the Sixties. In 1980, Bartlett recreated the event in a video production class at UCLA called The Making of OffOn.
**Osteomyology**
Osteomyology (sometimes neurosteomyology) is a multi-disciplined form of alternative medicine found almost exclusively in the United Kingdom and is loosely based on aggregated ideas from other manipulation therapies, principally chiropractic and osteopathy. It is a results-based physical therapy tailored specifically to the needs of the individual patient. Osteomyologists have been trained in osteopathy and chiropractic, but are not required to register with the General Osteopathic Council (GOsC) or the General Chiropractic Council (GCC).
Origin and philosophy:
The term osteomyology was invented by an English-born doctor of osteopathy, Dr Sir Alan Clemens, in 1992. The name was created by joining osteon (bone), myo (muscle) and -ology (a study). This name was given to those who joined an informal group of qualified osteopaths and students. This group was formed to satisfy a need for 'Continuing Professional Development' (CPD) with masterclasses on technique. It was intended to allow students to learn and the qualified to improve upon basic as well as advanced techniques. Up to that time it was felt the existing official organizations of osteopathy and chiropractic did not organize such training well.
In 1993 the Osteopaths Act was passed, followed by the Chiropractic Act 1994, requiring all chiropractors and osteopaths to be registered with new governing bodies. The new acts were not universally welcomed by the grassroots of the professions. The acts restricted the titles of osteopath and chiropractor to those registered with the new organisations. The techniques used by osteopaths and chiropractors are not protected by the acts and may be used by osteomyologists as long as they do not describe themselves as osteopaths or chiropractors.
Objection on the basis of requalification:
Many osteomyologists were qualified under previous non-statutory schemes. The new General Osteopathic Council set a level playing field, allowing application from anyone who had been practicing as an osteopath. Previous qualification, experience and clinical reasoning were to be assessed via a professional portfolio of evidence. This process was not universally popular, and some osteopaths resented the requirement to re-prove their eligibility for registration. However, the portfolio was required of all osteopaths, including those graduating within the transitional period. Some chose not to register, and some failed to fulfill the requirements and, after interview and clinical assessment, were refused registration. Some of those declining or failing to register became osteomyologists.
Objection on the basis of non-representation:
Some osteomyologists objected to the scale of fees charged by the General Osteopathic Council, claimed this did not offer them good value for money, and gave this as a reason not to register. The primary purpose of a statutory registration body is to protect the public; non-registering osteopaths failed to see the value in this role. In its first creation the GOsC had the responsibility to represent and promote the profession, so this claim has some merit; however, the promotion role was removed by legislation after the Foster Report.
First General Osteopathic Council:
The first General Osteopathic Council was appointed by the Department of Health. It was considered by the osteomyologists, and by the Democratic Osteopathic Council, not to be representative or democratic, because it had been formed initially by invitation from only one existing training school of osteopathy. There had been serious differences between this school and the others for many years over the philosophy and practice that was taught. Only later did elections to the new council take place.
By taking on the title osteomyologist, practitioners can advertise their various spinal manipulation techniques without being in breach of the legislation, because they do not claim to be osteopaths. However, this means that their practice and behaviour are not subject to the Standards of Practice of either the GOsC or the GCC. The GOsC and GCC will not hear complaints about practitioners who are not registered with them, so the protection offered to the patients of osteomyologists is less than that offered to osteopathic and chiropractic patients.
Claimed differences from osteopathy:
The practice of osteomyology claims to differ from osteopathy in several respects:
It focuses technique on relaxing muscle rather than manipulating bone; practitioners believe this achieves the same objective as osteopathy in releasing stiff joints but is kinder to the patient.
It means the patient can be better brought in to take part in their own cure.
It more effectively recruits the so-called placebo effect, on which all treatments, whether orthodox or alternative, ultimately depend for much of their effect. Both osteopathy and osteomyology lack any clear definition of scope and application, so these distinctions are quite arbitrary. The main difference remains that osteopathy is a statutorily regulated health profession while osteomyology is a group of like-minded professionals operating outside a statutory regulatory framework.
Alan Clemens now runs the Association of Osteomyologists and provides professional insurance and marketing services for members. Members of the Association designate themselves with the letters MAO (Member of the Association of Osteomyologists) after their name. Members are expected to partake in continuing training programmes and can present evidence of ongoing training in any alternative medicine. The code of conduct is made public and there is a method by which members of the public can make concerns known about members. The organisation does not publish membership figures, but their site would suggest that there are several hundred members.
Efficacy:
There is no reliable evidence available regarding the effectiveness or risks of treatment given by osteomyologists as a distinct practice. However, there is a wide range of evidence regarding the efficacy of the various constituent manual therapies that osteomyology draws upon.
Effectiveness:
In 2006, Ernst and Canter published a systematic review of the evidence base for various spinal manipulation techniques, as used by "chiropractors, osteopaths, physiotherapists and other healthcare providers mostly (but not exclusively) to treat musculoskeletal problems". They concluded: "In conclusion, we have found no convincing evidence from systematic reviews to suggest that SM is a recommendable treatment option for any medical condition. In several areas, where there is a paucity of primary data, more rigorous clinical trials could advance our knowledge."
However, from other reviews, there is some evidence that chiropractic practices (when compared to sham treatments) show clinically significant improvements in short-term pain relief for acute low back pain; when compared with conventional treatments, though, there were no significant benefits. There is some evidence that osteopathic treatment is helpful for low back pain. For other conditions, the evidence is not compelling.
Associated risks:
Spinal manipulation is associated with frequent, mild and temporary adverse effects, including new or worsening pain or stiffness in the affected region. Rarely, spinal manipulation, particularly of the upper spine, can also result in complications that can lead to permanent disability or death. The incidence of these complications is unknown, owing to their rarity, high levels of under-reporting, and the difficulty of linking manipulation to adverse effects such as stroke; this uncertainty has been noted as a particular concern.
Controversies:
Legal status:
Osteomyology is not a statutorily regulated form of alternative medicine but, in response to government legislation, has opted for self-regulation. To become an osteomyologist, one must have a professional qualification in any of the physical/medical disciplines; applicants have to present their professional diplomas for scrutiny, abide by the code of practice and ethics, and register full insurance cover. Only then may they join the TAO and call themselves an osteomyologist. The newly formed UK voluntary regulation body, the Complementary and Natural Healthcare Council, will not play any role in the regulation of osteomyologists. The Association of Osteomyologists is currently working on a framework for voluntary self-regulation for its members. The Advertising Standards Authority concluded that the Association of Osteomyologists was not a statutory or recognised health and medical professional body and merely allowed osteomyologists to share knowledge.
Professional standards:
The WHO states that the safety and quality of chiropractic practice depend mainly on the quality of the practitioner's training. As osteomyologists are often practitioners who refuse to be subject to statutory regulation of training and practice, it is difficult to ensure that their standards meet minimum guidelines. The Association of Osteomyologists claims to allow membership to anyone who has "degree qualifications in one of the physical medical disciplines". This is a much broader and looser requirement than that of the statutorily regulated profession of chiropractic.
Regulatory offenses:
Osteomyologists have found themselves subject to various types of regulatory investigation. The Advertising Standards Authority has taken action against practitioners for such offenses as making untruthful and unsubstantiated claims in advertising about the extent of scientific support for the therapy, or referring to serious medical conditions in their advertising. In November 2008, the Committee of Advertising Practice issued advice about advertising by osteomyologists, warning that they should not mislead on their status or training, and that if they wanted to claim to offer manipulation or chiropractic techniques they must hold suitable, relevant qualifications to undertake such therapy and robust substantiation for the efficacy claims made for it. Several practitioners have been investigated by the General Osteopathic Council for advertising as osteopaths. The Times ran an investigation in 2004 into 'illegal chiropractors' and found many osteomyologists describing themselves as chiropractors to prospective customers. A chiropractor being investigated by the General Chiropractic Council (GCC) for multiple instances of unprofessional conduct was found by the council to have "endeavoured to evade the GCC's jurisdiction by denying that he is a chiropractor", calling himself instead an osteomyologist.
**Emotional flooding**
Emotional flooding:
Emotional flooding is a form of psychotherapy that involves attacking the unconscious and/or subconscious mind to release repressed feelings and fears. Many of the techniques used in modern emotional flooding practice have roots in history, some tracing as far back as early tribal societies. For more information on emotional flooding, see Flooding (psychology).
Tribal Societies:
Tribal communities often have a shaman, or medicine man, whose primary responsibilities include diagnosing illnesses, prescribing herbs, and suggesting other treatments to cure the afflicted of their ailments. Many ritual cures include free displays of emotion. In his book The Discovery of the Unconscious, Henri Ellenberger claims that shamans were historically primarily practitioners of psychosomatic medicine. These shamans did not consider the possibility of a split between mind and body, unlike the popular beliefs of the Western philosophical tradition. Dr. Paul Olsen said, "Implicit in the belief that any sort of illness contains emotional elements is an unverbalized acknowledgment of an unconscious process. It follows that liberation of these elements is a pathway to cure. In essence, the shamans were dealing with a crude but strikingly accurate concept of repression." The link between these methods and modern techniques is the emphasis upon working with the body. Psychiatrist Ari Kiev said, "[groups that] facilitate change by producing excessive cortical excitation, emotional exhaustion, and states of reduced resistance or hypersuggestibility, which in turn increases the patient's chances of being converted to new points of view [are consistent with modern-day modalities of primal therapy and encounter.]" According to some researchers, many tribal afflictions were more likely symptoms of disorders such as depression or schizophrenia. Similar to the treatments for these disorders practiced today, the treatments shamans practiced historically often required the patient to recall difficult experiences and to recreate a wide range of emotional accounts.
Early Renaissance:
Doctors from the Renaissance period also practiced treatments resembling emotional flooding on patients afflicted with demonic possession. Paul Olsen says, "Possession was truly a diagnostic category of its day, encompassing practically any form of religi-culturally determined psychopathology." Practitioners frequently attributed to Satan and other demons many ailments, as well as most odd behaviors now recognized as mental diseases. This was particularly true when the ravings, actions, or hallucinatory experiences could be considered blasphemous or heretical. Cures for possession by the devil focused on spiritual salvation and were aimed at getting at a person's unconscious and unacceptable impulses and wishes. Many people who confessed under the duress of torture may well have been releasing repressed material. In all likelihood, pain stimulated a flood of unconscious crimes, such as murderous rage against authority figures, incest wishes, or any number of socially determined offenses. Exorcism rituals aimed at rescuing the soul from Satan. The effects of the procedure may have also relieved some of the body's anguish through release of emotional pain. These techniques resembled modern emotional flooding techniques. The emphasis on emotion was strong in exorcism: the exorcist tried either to temper its expression or to liberate it.
Nineteenth Century:
Pierre Janet and hypnosis:
Pierre Janet was a French psychologist who used hypnosis to study the dissociative tendencies of the mind. Researcher John Ryan Haule studied Janet's work and observed that Janet referred to the hypnotic process as 'influence somnambulique'. Before 1900, Janet saw somnambulism as the essential condition, of which hysteria, hypnosis, multiple personality, and spiritualism were variations. Janet used the word somnambulism to refer to any kind of activity pursued while in a dissociated condition, not just to sleepwalking. He used hypnosis to manipulate the somnambulistic condition, and identified three phases:
1. Fatigue: The treated patient feels exhausted upon awaking from the hypnotic trance.
2. Health: When the fatigue is gone, the patient seems to be in perfect health. All symptoms of the disorder are gone, and the patient appears to be "back to normal". However, the patient is not cured and this phase is temporary. The only sign that something is odd is the patient's obsession with the hypnotist.
3. Obsession: Following the brief phase of apparent good health, all symptoms return. The patient has a strong desire to be put to sleep, almost like withdrawal symptoms, and wants to undergo hypnosis again. The patient also has a strange, almost sexual, obsession with the hypnotist.
Janet was not only a hypnotist. He would engage the patient, talk to him, address the "sick" forces within him, and attempt to use hypnosis to contact the unconscious. Like exorcism, hypnosis also attacked the unconscious.
Nineteenth Century:
Experts now refer to Janet's approach as the cathartic method. In A Critical Dictionary of Psychoanalysis, Charles Rycroft said that abreaction was the term applied to the expression of affect, with the subsequent alleviation of symptoms being the catharsis. Later, Sigmund Freud and his followers deemed the cathartic cure unsuccessful because it did not stimulate awareness of unconscious factors and did not result in insight, meaning there could be symptom substitution and hence no real cure.
Wilhelm Reich and the therapeutic approach:
Over time, psychiatrists abandoned hypnosis and the cathartic cure and adopted the therapeutic approach as the accepted practice. The therapeutic approach emphasized the expression of emotion as a by-product of the goal of making the unconscious conscious, rather than as the main event.
Wilhelm Reich was an Austrian-American psychiatrist who worked with Sigmund Freud. Reich focused on the body, trying to make body-mind duality a seamless concept. He believed that the body was the unconscious and that the psychologist must break through the body's armor to reach the subconscious; he called the body's defenses armoring. W. Edward Mann called attention to the body's visible displays of character armor, such as muscular tension, and stated that armoring was the character structure in its physical form. He explained that if one could break down the armoring, one would be able to change the neurotic character structure. Researchers now understand these displays as physical defenses: the body reacts in certain ways to defend the person against the expression of undesirable emotion. Mann explains the build-up of armoring as the body's physical response to create blocks against natural biological movements such as curiosity, play, sex, exploration, or defiance of authority. Reich's writings imply that there are no benefits in armoring, a belief that most modern-day experts do not accept. Essentially, the technique meant that to properly treat the problem, the therapist must break down the body's defenses to allow repressed emotion to come out.
Contemporary Practice:
Modern uses of emotional flooding include:
Gestalt therapy, developed by Frederick S. Perls.
Immersion therapy.
Sensory hypnoanalysis, used by Milton Kline.
**Dual electrification**
Dual electrification:
Dual electrification is a system whereby a railway line is supplied power both via overhead catenary and a third rail. This is done to enable trains that use either system of power to share the same railway line, for example in the case of mainline and suburban trains (as used at Hamburg S-Bahn between 1940 and 1955).
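The constraint this solves can be sketched as a toy model (not railway software): each segment of a line lists the power systems physically installed, and a train can only run a through service if every segment offers a system it supports, switching pickup at changeover points. All names and data below are illustrative, loosely inspired by the Boston example later in this article.

```javascript
// Toy model of dual electrification: each line segment lists the power
// systems physically installed; a train chooses a compatible pickup per
// segment, or cannot run the through service at all. Illustrative data only.

function planPickups(segments, trainSystems) {
  const plan = [];
  for (const seg of segments) {
    // choose the first installed system this train can use
    const pickup = seg.systems.find((sys) => trainSystems.includes(sys));
    if (!pickup) return null; // segment unusable by this train: no through service
    plan.push({ segment: seg.name, pickup });
  }
  return plan;
}

// Loosely modeled on the MBTA Blue Line: third rail in the tunnel,
// overhead catenary after the changeover at Airport station.
const line = [
  { name: "tunnel section", systems: ["third rail"] },
  { name: "surface section", systems: ["catenary"] },
];

console.log(planPickups(line, ["third rail", "catenary"])); // dual-system train: changes pickup mid-route
console.log(planPickups(line, ["catenary"])); // null: a catenary-only train cannot use the tunnel
```

The model captures why dual electrification (or dual-system rolling stock) is needed: a single-system train gets a null plan on a mixed line, while a dual-system train simply switches pickup at the boundary.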
Examples:
London, UK:
North London Line changeover at Acton Central.
Northern City Line changeover at Drayton Park.
Thameslink route changeover at City Thameslink (northbound) and Farringdon (southbound).
West London line changeover between Willesden Junction and Shepherd's Bush.
High Speed 1 changeover at platforms 5 and 6 of Ebbsfleet International.
New York, NY:
Penn Station.
East River Tunnels.
North River Tunnels.
Athens, GR:
Line 3 of the Athens Metro uses third rail for the underground part and overhead power supply on the surface for access to/from the Airport.
Boston, MA:
Like Athens, the MBTA Blue Line uses third rail for the underground part, and switches to overhead catenary power at Airport station.
Hamburg, DE:
Line S3 of the Hamburg S-Bahn changes over at Neugraben.
Variations:
Both systems live:
This arrangement is usually used only in exceptional cases, as it can lead to problems caused by interaction of the electric circuits; for example, where one system is powered with direct current and the other with alternating current (AC), premagnetisation of the substation transformers of the AC system can occur.
One system live:
A similar arrangement to dual electrification is one in which both means of powering a train are present, but not live simultaneously. Such arrangements can be found in frontier stations and in sections of railway used for running tests.
**Enyo (software)**
Enyo (software):
Enyo is an open source JavaScript framework for cross-platform mobile, desktop, TV and web applications emphasizing object-oriented encapsulation and modularity. Initially developed by Palm, it passed to Hewlett-Packard with HP's acquisition of Palm in 2010 and was then released under an Apache 2.0 license. It is sponsored by LG Electronics and Hewlett-Packard.
Bootplate:
Bootplate is a simplified way of creating an app, providing a skeleton of the program's folder tree. The Bootplate template provides a complete starter project that supports source control and cross-platform deployment out of the box. It can be used to facilitate both the creation of a new project and the preparation for its eventual deployment.
Libraries:
Layout: Fittables, scrollers, lists, drawers, panels.
Onyx: Based on the original styling of the webOS/TouchPad design but available for use on any platform.
Moonstone: Used by LG SmartTV apps but available for use on any platform.
Spotlight: To support key-based interactions and "point and click" events on remote controls and keyboards.
Mochi: Advanced user interface library. It has been maintained by the community since the team behind webOS released this abandoned interface from Palm/HP as open source. This library is not currently included in Bootplate, but has design documents.
enyo-iLib: Internationalization and localization library; it wraps iLib's functionality for Enyo apps. G11n was an earlier library that has been deprecated in newer versions of Enyo.
Canvas Extra.
enyo-cordova: Enyo-compatible library to automatically include the platform-specific Cordova library (WIP).
Use:
The following projects are built with Enyo:
LG Smart TV apps.
Openbravo Mobile and Web POS.
xTuple ERP Web and Mobile.
A partial list of Enyo apps can be found on Enyo Apps, and some developers on the Enyo Developer Directory.
Examples:
This is an example of a 'Hello world' program in Enyo:
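The code listing itself did not survive in this text. A minimal sketch of such an example, assuming the classic Enyo 2 enyo.kind API (the kind name HelloApp is illustrative, and enyo.js is assumed to be already loaded in the page):

```javascript
// Minimal Enyo 2 "Hello world" sketch. Assumes enyo.js is loaded in the
// page; the kind name "HelloApp" is illustrative.
enyo.kind({
  name: "HelloApp",
  components: [
    { content: "Hello, world!" }
  ]
});

// Instantiate the kind and render it into the document body.
new HelloApp().renderInto(document.body);
```

This illustrates the framework's object-oriented encapsulation mentioned above: UI is declared as reusable "kinds" composed of component objects rather than written as raw DOM manipulation.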
Supported platforms:
In general, Enyo can run across all relatively modern, standards-based web environments, but because of their variety there are three priority tiers. As of 2015, some supported platforms are:
Tier 1 (supported at high priority):
Packaged Apps: iOS7, iOS6 (PhoneGap), Android 4+ (PhoneGap), Windows 8.1 Store App and Windows Phone 8 (PhoneGap), BlackBerry 10 (PhoneGap), Chrome Web Store App, LG webOS.
Desktop Browsers: Chrome (latest), Safari (latest, Mac), Firefox (latest), IE11, IE10, IE9, IE8 (Windows).
Mobile Browsers: iOS7, iOS6, Android 4+ (Chrome), Kindle Fire and HD, BlackBerry 10, IE11 (Windows 8.1), IE10 (Windows Phone 8).
Tier 2 (supported):
Packaged Apps: iOS5, iOS4, Android 2.3, Firefox OS (pre-release), Tizen OS (pre-release), Windows 8 Store App, Windows (Intel AppUp).
Desktop Browsers: Opera, Chrome >10, Firefox >4, Safari >5.
Mobile Browsers: iOS5, iOS4, Android 4+ Firefox, webOS 3.0.5, webOS 2.2, BlackBerry 6-7, BlackBerry Playbook and others.
Tier 3 (partial support):
Mobile Browsers: Windows Phone 7.5.
No support:
Desktop Browsers: IE8.
Mobile Browsers: Windows Phone 7, BlackBerry 6, Symbian, Opera Mini.
**VAT1L**
VAT1L:
Vesicle amine transport protein 1 homolog (T. californica)-like is a protein in humans that is encoded by the VAT1L gene.
In humans, the VAT1L gene is located on chromosome 16, locus q23.1. According to SAGE data, it is expressed mainly in the brain.
**Nous**
Nous:
Nous (Greek: νοῦς), sometimes equated to intellect or intelligence, is a concept from classical philosophy for the faculty of the human mind necessary for understanding what is true or real. Alternative English terms used in philosophy include "understanding" and "mind"; or sometimes "thought" or "reason" (in the sense of that which reasons, not the activity of reasoning). It is also often described as something equivalent to perception, except that it works within the mind ("the mind's eye"). It has been suggested that the basic meaning is something like "awareness". In colloquial British English, nous also denotes "good sense", which is close to one everyday meaning it had in Ancient Greece. The nous performed a role comparable to the modern concept of intuition.
In Aristotle's influential works, which are the main source of later philosophical meanings, nous was carefully distinguished from sense perception, imagination, and reason, although these terms are closely inter-related. The term was apparently already singled out by earlier philosophers such as Parmenides, whose works are largely lost. In post-Aristotelian discussions, the exact boundaries between perception, understanding of perception, and reasoning have not always agreed with the definitions of Aristotle, even though his terminology remains influential.
In the Aristotelian scheme, nous is the basic understanding or awareness that allows human beings to think rationally. For Aristotle, this was distinct from the processing of sensory perception, including the use of imagination and memory, which other animals can do. For him then, discussion of nous is connected to discussion of how the human mind sets definitions in a consistent and communicable way, and whether people must be born with some innate potential to understand the same universal categories in the same logical ways. Derived from this it was also sometimes argued, in classical and medieval philosophy, that the individual nous must require help of a spiritual and divine type. By this type of account, it also came to be argued that the human understanding (nous) somehow stems from this cosmic nous, which is however not just a recipient of order, but a creator of it. Such explanations were influential in the development of medieval accounts of God, the immortality of the soul, and even the motions of the stars, in Europe, North Africa and the Middle East, amongst both eclectic philosophers and authors representing all the major faiths of their times.
Pre-Socratic usage:
In early Greek uses, Homer used nous to signify mental activities of both mortals and immortals, for example what they really have on their mind as opposed to what they say aloud. It was one of several words related to thought, thinking, and perceiving with the mind. In pre-Socratic philosophy, it became increasingly distinguished as a source of knowledge and reasoning opposed to mere sense perception or thinking influenced by the body such as emotion. For example, Heraclitus complained that "much learning does not teach nous". Among some Greek authors, a faculty of intelligence known as a "higher mind" came to be considered a property of the cosmos as a whole. The work of Parmenides set the scene for Greek philosophy to come, and the concept of nous was central to his radical proposals. He claimed that reality as perceived by the senses alone is not a world of truth at all, because sense perception is so unreliable, and what is perceived is so uncertain and changeable. Instead he argued for a dualism wherein nous and related words (the verb for thinking which describes its mental perceiving activity, noein, and the unchanging and eternal objects of this perception, noēta) describe another form of perception which is not physical, but intellectual only, distinct from sense perception and the objects of sense perception.
Anaxagoras, born about 500 BC, is the first person definitely known to have explained the concept of a nous (mind) which arranged all other things in the cosmos in their proper order, started them in a rotating motion, and continues to control them to some extent, having an especially strong connection with living things. (However, Aristotle reports an earlier philosopher, Hermotimus of Clazomenae, who had taken a similar position.) Other pre-Socratic philosophers before Anaxagoras had proposed a similar ordering, human-like principle causing life and the rotation of the heavens. For example, Empedocles, like Hesiod much earlier, described cosmic order and living things as caused by a cosmic version of love, while Pythagoras and Heraclitus attributed "reason" (logos) to the cosmos. According to Anaxagoras the cosmos is made of infinitely divisible matter, every bit of which can inherently become anything, except Mind (nous), which is also matter, but which can only be found separated from this general mixture, or else mixed into living things, or in other words, in the Greek terminology of the time, things with a soul (psychē). Anaxagoras wrote: "All other things partake in a portion of everything, while nous is infinite and self-ruled, and is mixed with nothing, but is alone, itself by itself. For if it were not by itself, but were mixed with anything else, it would partake in all things if it were mixed with any; for in everything there is a portion of everything, as has been said by me in what goes before, and the things mixed with it would hinder it, so that it would have power over nothing in the same way that it has now being alone by itself. For it is the thinnest of all things and the purest, and it has all knowledge about everything and the greatest strength; and nous has power over all things, both greater and smaller, that have soul [psychē]."
Concerning cosmology, Anaxagoras, like some Greek philosophers before him, believed the cosmos was revolving, and had formed into its visible order as a result of such revolving causing a separating and mixing of different types of chemical elements. Nous, in his system, originally caused this revolving motion to start, but does not necessarily continue to play a role once the mechanical motion has started. His description was, in other words (shockingly for the time), corporeal or mechanical, with the moon made of earth, the sun and stars made of red-hot metal (beliefs Socrates was later accused of holding during his trial), and nous itself being a physically fine type of matter which also gathered and concentrated with the development of the cosmos. This nous (mind) is not incorporeal; it is the thinnest of all things. The distinction between nous and other things nevertheless causes his scheme to sometimes be described as a peculiar kind of dualism. Anaxagoras' concept of nous was distinct from later Platonic and Neoplatonic cosmologies in many ways, which were also influenced by Eleatic, Pythagorean and other pre-Socratic ideas, as well as the Socratics themselves.
In some schools of Hindu philosophy, a "higher mind" came to be considered a property of the cosmos as a whole that exists within all matter (known as buddhi or mahat). In Samkhya, this faculty of intellect (buddhi) serves to differentiate matter (prakrti) from pure consciousness (purusha). The lower aspect of mind that corresponds to the senses is referred to as "manas".
Socratic philosophy:
Xenophon:
Xenophon, the less famous of the two students of Socrates whose written accounts of him have survived, recorded that he taught his students a kind of teleological justification of piety and respect for divine order in nature. This has been described as an "intelligent design" argument for the existence of God, in which nature has its own nous. For example, in his Memorabilia 1.4.8, he describes Socrates asking a friend sceptical of religion, "Are you, then, of the opinion that intelligence (nous) alone exists nowhere and that you by some good chance seized hold of it, while—as you think—those surpassingly large and infinitely numerous things [all the earth and water] are in such orderly condition through some senselessness?" Later in the same discussion he compares the nous, which directs each person's body, to the good sense (phronēsis) of the god, which is in everything, arranging things to its pleasure (1.4.17). Plato describes Socrates making the same argument in his Philebus 28d, using the same words nous and phronēsis.
Plato:
Plato used the word nous in many ways that were not unusual in the everyday Greek of the time, and often simply meant "good sense" or "awareness". On the other hand, in some of his Platonic dialogues it is described by key characters in a higher sense, which was apparently already common. In his Philebus 28c he has Socrates say that "all philosophers agree—whereby they really exalt themselves—that mind (nous) is king of heaven and earth. Perhaps they are right," and later states that the ensuing discussion "confirms the utterances of those who declared of old that mind (nous) always rules the universe". In his Cratylus, Plato gives the etymology of Athena's name, the goddess of wisdom, from Atheonóa (Ἀθεονόα), meaning "god's (theos) mind (nous)". In his Phaedo, Plato's teacher Socrates is made to say, just before dying, that his discovery of Anaxagoras' concept of a cosmic nous as the cause of the order of things was an important turning point for him. But he also expressed disagreement with Anaxagoras' understanding of the implications of his own doctrine, because of Anaxagoras' materialist understanding of causation. Socrates said that Anaxagoras would "give voice and air and hearing and countless other things of the sort as causes for our talking with each other, and should fail to mention the real causes, which are, that the Athenians decided that it was best to condemn me". On the other hand, Socrates seems to suggest that he also failed to develop a fully satisfactory teleological and dualistic understanding of a mind of nature, whose aims represent the Good, which all parts of nature aim at.
Concerning the nous that is the source of understanding of individuals, Plato is widely understood to have used ideas from Parmenides in addition to Anaxagoras. Like Parmenides, Plato argued that relying on sense perception can never lead to true knowledge, only opinion. Instead, Plato's more philosophical characters argue that nous must somehow perceive truth directly in the ways gods and daimons perceive. What our mind sees directly in order to really understand things must not be the constantly changing material things, but unchanging entities that exist in a different way, the so-called "forms" or "ideas". However, he knew that contemporary philosophers often argued (as in modern science) that nous and perception are just two aspects of one physical activity, and that perception is the source of knowledge and understanding (not the other way around).
Exactly how Plato believed that the nous of people lets them come to understand things in a way that improves upon sense perception and the kind of thinking which animals have is a subject of long-running discussion and debate. On the one hand, in the Republic, Plato's Socrates, in the analogy of the sun and the allegory of the cave, describes people as being able to perceive more clearly because of something from outside themselves, something like when the sun shines, helping eyesight. The source of this illumination for the intellect is referred to as the Form of the Good. On the other hand, in the Meno for example, Plato's Socrates explains the theory of anamnesis, whereby people are born with ideas already in their soul, which they somehow remember from previous lives. Both theories were to become highly influential.
As in Xenophon, Plato's Socrates frequently describes the soul in a political way, with ruling parts, and parts that are by nature meant to be ruled. Nous is associated with the rational (logistikon) part of the individual human soul, which by nature should rule. In his Republic, in the so-called "analogy of the divided line", it has a special function within this rational part. Plato tended to treat nous as the only immortal part of the soul.
Concerning the cosmos, in the Timaeus, the title character also tells a "likely story" in which nous is responsible for the creative work of the demiurge or maker who brought rational order to our universe. This craftsman imitated what he perceived in the world of eternal Forms. In the Philebus Socrates argues that nous in individual humans must share in a cosmic nous, in the same way that human bodies are made up of small parts of the elements found in the rest of the universe. And this nous must be in the genos of being a cause of all particular things as particular things.
Aristotle:
Like Plato, Aristotle saw the nous or intellect of an individual as somehow similar to sense perception but also distinct. Sense perception in action provides images to the nous, via the "sensus communis" and imagination, without which thought could not occur. Other animals have sensus communis and imagination, whereas none of them have nous. Aristotelians divide the perception of forms into the animal-like kind, which perceives species sensibilis or sensible forms, and the perception of species intelligibilis, which are perceived in a different way by the nous.
Like Plato, Aristotle linked nous to logos (reason) as uniquely human, but he also distinguished nous from logos, thereby distinguishing the faculty for setting definitions from the faculty that uses them to reason with. In his Nicomachean Ethics, Book VI, Aristotle divides the soul (psychē) into two parts, one which has reason and one which does not, but then divides the part which has reason into the reasoning (logistikos) part itself, which is lower, and the higher "knowing" (epistēmonikos) part which contemplates general principles (archai). Nous, he states, is the source of the first principles or sources (archai) of definitions, and it develops naturally as people gain experience. This he explains after first comparing the four other truth-revealing capacities of soul: technical know-how (technē), logically deduced knowledge (epistēmē, sometimes translated as "scientific knowledge"), practical wisdom (phronēsis), and lastly theoretical wisdom (sophia), which is defined by Aristotle as the combination of nous and epistēmē. All of these others apart from nous are types of reason (logos).
And intellect [nous] is directed at what is ultimate on both sides, since it is intellect and not reason [logos] that is directed at both the first terms [horoi] and the ultimate particulars, on the one side at the changeless first terms in demonstrations, and on the other side, in thinking about action, at the other sort of premise, the variable particular; for these particulars are the sources [archai] from which one discerns that for the sake of which an action is, since the universals are derived from the particulars. Hence intellect is both a beginning and an end, since the demonstrations that are derived from these particulars are also about these. And of these one must have perception, and this perception is intellect.
Aristotle's philosophical works continue many of the same Socratic themes as his teacher Plato. Amongst the new proposals he made was a way of explaining causality, and nous is an important part of his explanation. As mentioned above, Plato criticized Anaxagoras' materialism, in which the intellect of nature merely set the cosmos in motion and is not a continuing cause of physical events. Aristotle explained that the changes of things can be described in terms of four causes at the same time. Two of these four causes are similar to the materialist understanding: each thing has a material which causes it to be how it is, and some other thing which set in motion or initiated some process of change. But at the same time, according to Aristotle, each thing is also caused by the natural form it is tending to become and by the natural end or aim, which somehow exist in nature as causes, even in cases where human plans and aims are not involved. These latter two causes (the "formal" and "final") encompass the continuous effect of the intelligent ordering principle of nature itself. Aristotle's special description of causality is especially apparent in the natural development of living things. It leads to a method whereby Aristotle analyses causation and motion in terms of the potentialities and actualities of all things: all matter possesses various possibilities or potentialities of form and end, and these possibilities become more fully real as their potential forms become actual or active reality (something they will do on their own, by nature, unless stopped by other natural things happening). For example, a stone has in its nature the potentiality of falling to the earth, and it will do so, and actualize this natural tendency, if nothing is in the way.
Aristotle analyzed thinking in the same way. For him, the possibility of understanding rests on the relationship between intellect and sense perception. Aristotle's remarks on the concept of what came to be called the "active intellect" and "passive intellect" (along with various other terms) are amongst "the most intensely studied sentences in the history of philosophy". The terms are derived from a single passage in Aristotle's De Anima, Book III. Following is a translation of that passage, with some key Greek words shown in square brackets.
...since in nature one thing is the material [hulē] for each kind [genos] (this is what is in potency all the particular things of that kind) but it is something else that is the causal and productive thing by which all of them are formed, as is the case with an art in relation to its material, it is necessary in the soul [psychē] too that these distinct aspects be present; the one sort is intellect [nous] by becoming all things, the other sort by forming all things, in the way an active condition [hexis] like light too makes the colors that are in potency be at work as colors [to phōs poiei ta dunamei onta chrōmata energeiai chrōmata].
This sort of intellect [which is like light in the way it makes potential things work as what they are] is separate, as well as being without attributes and unmixed, since it is by its thinghood a being-at-work [energeia], for what acts is always distinguished in stature above what is acted upon, as a governing source is above the material it works on.
Knowledge [epistēmē], in its being-at-work, is the same as the thing it knows, and while knowledge in potency comes first in time in any one knower, in the whole of things it does not take precedence even in time.
This does not mean that at one time it thinks but at another time it does not think, but when separated it is just exactly what it is, and this alone is deathless and everlasting (though we have no memory, because this sort of intellect is not acted upon, while the sort that is acted upon is destructible), and without this nothing thinks.
The passage tries to explain "how the human intellect passes from its original state, in which it does not think, to a subsequent state, in which it does" according to his distinction between potentiality and actuality. Aristotle says that the passive intellect receives the intelligible forms of things, but that the active intellect is required to make the potential knowledge into actual knowledge, in the same way that light makes potential colours into actual colours. As Davidson remarks: Just what Aristotle meant by potential intellect and active intellect - terms not even explicit in the De anima and at best implied - and just how he understood the interaction between them remains moot. Students of the history of philosophy continue to debate Aristotle's intent, particularly the question whether he considered the active intellect to be an aspect of the human soul or an entity existing independently of man.
The passage is often read together with Metaphysics, Book XII, ch. 7-10, where Aristotle makes nous as an actuality a central subject within a discussion of the cause of being and the cosmos. In that book, Aristotle equates active nous, when people think and their nous becomes what they think about, with the "unmoved mover" of the universe, and God: "For the actuality of thought (nous) is life, and God is that actuality; and the essential actuality of God is life most good and eternal." Alexander of Aphrodisias, for example, equated this active intellect which is God with the one explained in De Anima, while Themistius thought they could not be simply equated. (See below.) Like Plato before him, Aristotle believes Anaxagoras' cosmic nous implies and requires the cosmos to have intentions or ends: "Anaxagoras makes the Good a principle as causing motion; for Mind (nous) moves things, but moves them for some end, and therefore there must be some other Good—unless it is as we say; for on our view the art of medicine is in a sense health." In the philosophy of Aristotle the soul (psyche) of a body is what makes it alive, and is its actualized form; thus, every living thing, including plant life, has a soul. The mind or intellect (nous) can be described variously as a power, faculty, part, or aspect of the human soul. For Aristotle, soul and nous are not the same. He did not rule out the possibility that nous might survive without the rest of the soul, as in Plato, but he specifically says that this immortal nous does not include any memories or anything else specific to an individual's life. In his Generation of Animals Aristotle specifically says that while other parts of the soul come from the parents, physically, the human nous must come from outside, into the body, because it is divine or godly, and it has nothing in common with the energeia of the body.
This was yet another passage which Alexander of Aphrodisias would link to those mentioned above from De Anima and the Metaphysics in order to understand Aristotle's intentions.
Post-Aristotelian classical theories:
Until the early modern era, much of the discussion which has survived today concerning nous or intellect, in Europe, Africa and the Middle East, concerned how to correctly interpret Aristotle and Plato. However, at least during the classical period, materialist philosophies more similar to modern science, such as Epicureanism, were still relatively common. The Epicureans believed that the bodily senses themselves were not the cause of error, but that interpretations of them can be. The term prolepsis was used by Epicureans to describe the way the mind forms general concepts from sense perceptions.
To the Stoics, more like Heraclitus than Anaxagoras, order in the cosmos comes from an entity called logos, the cosmic reason. But as in Anaxagoras this cosmic reason, like human reason but higher, is connected to the reason of individual humans. The Stoics, however, did not invoke incorporeal causation, but attempted to explain physics and human thinking in terms of matter and forces. As in Aristotelianism, they explained the interpretation of sense data as requiring the mind to be stamped or formed with ideas, and held that people have shared conceptions that help them make sense of things (koine ennoia). Nous for them is soul "somehow disposed" (pôs echon), the soul being somehow disposed pneuma, which is fire or air or a mixture. As in Plato, they treated nous as the ruling part of the soul. Plutarch criticized the Stoic idea of nous being corporeal, and agreed with Plato that the soul is more divine than the body while nous (mind) is more divine than the soul. The mix of soul and body produces pleasure and pain; the conjunction of mind and soul produces reason, which is the cause or the source of virtue and vice (from "On the Face in the Moon"). Albinus was one of the earliest authors to equate Aristotle's nous as prime mover of the Universe with Plato's Form of the Good.
Alexander of Aphrodisias:
Alexander of Aphrodisias was a Peripatetic (Aristotelian), and his On the Soul (referred to as De anima in its traditional Latin title) explained that, by his interpretation of Aristotle, the potential intellect in man, that which has no nature but receives one from the active intellect, is material, and is also called the "material intellect" (nous hulikos); it is inseparable from the body, being "only a disposition" of it. He argued strongly against the doctrine of the soul's immortality. On the other hand, he identified the active intellect (nous poietikos), through whose agency the potential intellect in man becomes actual, not with anything from within people, but with the divine creator itself. For him, the only possible human immortality is an immortality of a detached human thought, more specifically when the nous has as the object of its thought the active intellect itself, or another incorporeal intelligible form. In the early Renaissance his doctrine of the soul's mortality was adopted by Pietro Pomponazzi against the Thomists and the Averroists. Alexander was also responsible for influencing the development of several more technical terms concerning the intellect, which became very influential amongst the great Islamic philosophers, Al-Farabi, Avicenna, and Averroes.
The intellect in habitu is a stage in which the human intellect has taken possession of a repertoire of thoughts, and so is potentially able to think those thoughts, but is not yet thinking them.
The intellect from outside, which became the "acquired intellect" in Islamic philosophy, describes the incorporeal active intellect which comes from outside man and becomes an object of thought, making the material intellect actual and active. This term may have come from a particularly expressive translation of Alexander into Arabic. Plotinus also used such a term. In any case, in Al-Farabi and Avicenna the term took on a new meaning, distinguishing it from the active intellect in any simple sense: an ultimate stage of the human intellect where a kind of close relationship (a "conjunction") is made between a person's active intellect and the transcendental nous itself.
Themistius:
Themistius, another influential commentator on this matter, understood Aristotle differently, stating that the passive or material intellect does "not employ a bodily organ for its activity, is wholly unmixed with the body, impassive, and separate [from matter]". This means the human potential intellect, and not only the active intellect, is an incorporeal substance, or a disposition of incorporeal substance. For Themistius, the human soul becomes immortal "as soon as the active intellect intertwines with it at the outset of human thought". This understanding of the intellect was also very influential for Al-Farabi, Avicenna, and Averroes, and "virtually all Islamic and Jewish philosophers". On the other hand, concerning the active intellect, like Alexander and Plotinus, he saw this as a transcendent being existing above and outside man. Differently from Alexander, he did not equate this being with the first cause of the Universe itself, but with something lower. However, he equated it with Plato's Idea of the Good.
Plotinus and Neoplatonism:
Of the later Greek and Roman writers Plotinus, the initiator of neoplatonism, is particularly significant. Like Alexander of Aphrodisias and Themistius, he saw himself as a commentator explaining the doctrines of Plato and Aristotle. But in his Enneads he went further than those authors, often working from passages which had been presented more tentatively, possibly inspired partly by earlier authors such as the neopythagorean Numenius of Apamea. Neoplatonism provided a major inspiration to discussion concerning the intellect in late classical and medieval philosophy, theology and cosmology.
In neoplatonism there exist several levels or hypostases of being, including the natural and visible world as a lower part.
The Monad or "the One" sometimes also described as "the Good", based on the concept as it is found in Plato. This is the dunamis or possibility of existence. It causes the other levels by emanation.
The Nous (usually translated as "Intellect", or "Intelligence" in this context, or sometimes "mind" or "reason") is described as God, or more precisely an image of God, often referred to as the demiurge. It thinks its own contents, which are thoughts, equated to the Platonic ideas or forms (eide). The thinking of this Intellect is the highest activity of life. The actualization (energeia) of this thinking is the being of the forms. This Intellect is the first principle or foundation of existence. The One is prior to it, but not in the sense that a normal cause is prior to an effect, but instead Intellect is called an emanation of the One. The One is the possibility of this foundation of existence.
Soul (psychē). The soul is also an energeia: it acts upon or actualizes its own thoughts and creates "a separate, material cosmos that is the living image of the spiritual or noetic Cosmos contained as a unified thought within the Intelligence". So it is the soul which perceives things in nature physically, which it understands to be reality. Soul in Plotinus plays a role similar to the potential intellect in Aristotelian terminology.
Lowest is matter. This scheme was based largely upon Plotinus' reading of Plato, but also incorporated many Aristotelian concepts, including the unmoved mover as energeia. The neoplatonists also incorporated a theory of anamnesis, or knowledge coming from the past lives of our immortal souls, like that found in some of Plato's dialogues.
Later Platonists distinguished a hierarchy of three separate manifestations of nous, as Numenius of Apamea had. Notable later neoplatonists include Porphyry and Proclus.
Medieval nous in religion:
Greek philosophy had an influence on the major religions that defined the Middle Ages, and one aspect of this was the concept of nous.
Gnosticism:
Gnosticism was a late classical movement that incorporated ideas inspired by Neoplatonism and Neopythagoreanism, but which was more a syncretic religious movement than an accepted philosophical movement.
Valentinus:
In Valentinianism, Nous is the first male Aeon. Together with his conjugate female Aeon, Aletheia (truth), he emanates from the Propator Bythos (Προπάτωρ Βυθός "Forefather Depths") and his co-eternal Ennoia (Ἔννοια "Thought") or Sigē (Σιγή "Silence"); and these four form the primordial Tetrad. Like the other male Aeons he is sometimes regarded as androgynous, including in himself the female Aeon who is paired with him. He is the Only Begotten, and is styled the Father, the Beginning of All, inasmuch as from him are derived immediately or mediately the remaining Aeons who complete the Ogdoad (eight), thence the Decad (ten), and thence the Dodecad (twelve); in all, thirty Aeons constitute the Pleroma.
He alone is capable of knowing the Propator; but when he desired to impart like knowledge to the other Aeons, he was withheld from so doing by Sigē. When Sophia ("Wisdom"), youngest Aeon of the thirty, was brought into peril by her yearning after this knowledge, Nous was foremost of the Aeons in interceding for her. From him, or through him from the Propator, Horos was sent to restore her. After her restoration, Nous, according to the providence of the Propator, produced another pair, Christ and the Holy Spirit, "in order to give fixity and steadfastness (εις πήξιν και στηριγμόν) to the Pleroma." For this purpose, Christ teaches the Aeons to be content to know that the Propator is in himself incomprehensible, and can be perceived only through the Only Begotten (Nous).
Ophites:
The Ophites held that the demiurge Ialdabaoth, after coming into conflict with the archons he created, created a son, Ophiomorphus, who is called the serpent-formed Nous. This entity would become the serpent in the garden, who was compelled to act at the behest of Sophia.
Basilides:
A similar conception of Nous appears in the later teaching of the Basilideans, according to which he is the first begotten of the Unbegotten Father, and himself the parent of Logos, from whom emanate successively Phronesis, Sophia, and Dunamis. But in this teaching, Nous is identified with Christ, is named Jesus, is sent to save those that believe, and returns to Him who sent him, after a Passion which is apparent only, Simon of Cyrene being substituted for him on the cross. It is probable, however, that Nous had a place in the original system of Basilides himself; for his Ogdoad, "the great Archon of the universe, the ineffable", is apparently made up of the five members named by Irenaeus (as above), together with two whom we find in Clement of Alexandria, Dikaiosyne and Eirene, added to the originating Father.
Simon Magus:
The antecedent of these systems is that of Simon, of whose six "roots" emanating from the Unbegotten Fire, Nous is first. The correspondence of these "roots" with the first six Aeons that Valentinus derives from Bythos is noted by Hippolytus. Simon says in his Apophasis Megalē: There are two offshoots of the entire ages, having neither beginning nor end.... Of these the one appears from above, the great power, the Nous of the universe, administering all things, male; the other from beneath, the great Epinoia, female, bringing forth all things.
To Nous and Epinoia correspond Heaven and Earth, in the list given by Simon of the six material counterparts of his six emanations. The identity of this list with the six material objects alleged by Herodotus to be worshipped by the Persians, together with the supreme place given by Simon to Fire as the primordial power, leads us to look to Iran for the origin of these systems in one aspect. In another, they connect themselves with the teaching of Pythagoras and of Plato.
Gospel of Mary:
According to the Gospel of Mary, Jesus himself articulates the essence of Nous: "There where is the nous, lies the treasure." Then I said to him: "Lord, when someone meets you in a Moment of Vision, is it through the soul [psychē] that they see, or is it through the spirit [pneuma]?" The Teacher answered: "It is neither through the soul nor the spirit, but the nous between the two which sees the vision..."
Mandaeism:
In Mandaic, mana (ࡌࡀࡍࡀ) has been variously translated as "mind," "nous," or "treasure." The Mandaean formula "I am a mana of the Great Life" is a phrase often found in the numerous hymns of Book 2 of the Left Ginza.
Medieval Islamic philosophy:
During the Middle Ages, philosophy itself was in many places seen as opposed to the prevailing monotheistic religions, Islam, Christianity and Judaism. The strongest philosophical tradition for some centuries was amongst Islamic philosophers, who later came to strongly influence the late medieval philosophers of western Christendom, and the Jewish diaspora in the Mediterranean area. While there were earlier Muslim philosophers such as Al Kindi, chronologically the three most influential concerning the intellect were Al Farabi, Avicenna, and finally Averroes, a westerner who lived in Spain and was highly influential in the late Middle Ages amongst Jewish and Christian philosophers.
Al Farabi:
The exact precedents of Al Farabi's influential philosophical scheme, in which nous (Arabic ʿaql) plays an important role, are no longer perfectly clear because of the great loss of texts in the Middle Ages which he would have had access to. He was apparently innovative in at least some points. He was clearly influenced by the same late classical world as neoplatonism and neopythagoreanism, but exactly how is less clear. Plotinus, Themistius and Alexander of Aphrodisias are generally accepted to have been influences. However, while these three all placed the active intellect "at or near the top of the hierarchy of being", Al Farabi was clear in making it the lowest ranking in a series of distinct transcendental intelligences. He is the first known person to have done this in a clear way. He was also the first philosopher known to have assumed the existence of a causal hierarchy of celestial spheres, and of the incorporeal intelligences parallel to those spheres. Al Farabi also fitted an explanation of prophecy into this scheme, in two levels. According to Davidson (p. 59): The lower of the two levels, labeled specifically as "prophecy" (nubuwwa), is enjoyed by men who have not yet perfected their intellect, whereas the higher, which Alfarabi sometimes specifically names "revelation" (w-ḥ-y), comes exclusively to those who stand at the stage of acquired intellect.
This happens in the imagination (Arabic mutakhayyila; Greek phantasia), a faculty of the mind already described by Aristotle, which al Farabi described as serving the rational part of the soul (Arabic ʿaql; Greek nous). This faculty of imagination stores sense perceptions (maḥsūsāt), disassembles or recombines them, creates figurative or symbolic images (muḥākāt) of them which then appear in dreams, visualizes present and predicted events in a way different from conscious deliberation (rawiyya). This is under the influence, according to Al Farabi, of the active intellect. Theoretical truth can only be received by this faculty in a figurative or symbolic form, because the imagination is a physical capability and can not receive theoretical information in a proper abstract form. This rarely comes in a waking state, but more often in dreams. The lower type of prophecy is the best possible for the imaginative faculty, but the higher type of prophecy requires not only a receptive imagination, but also the condition of an "acquired intellect", where the human nous is in "conjunction" with the active intellect in the sense of God. Such a prophet is also a philosopher. When a philosopher-prophet has the necessary leadership qualities, he becomes philosopher-king.
Avicenna:
In terms of cosmology, according to Davidson (p. 82), "Avicenna's universe has a structure virtually identical with the structure of Alfarabi's", but there are differences in details. As in Al Farabi, there are several levels of intellect, intelligence or nous, each of the higher ones being associated with a celestial sphere. Avicenna, however, details three different types of effect which each of these higher intellects has: each "thinks" both the necessary existence and the possible being of the intelligence one level higher; and each "emanates" downwards the body and soul of its own celestial sphere, and also the intellect at the next lowest level. The active intellect, as in Alfarabi, is the last in the chain. Avicenna sees the active intellect as the cause not only of intelligible thought and of the forms in the "sublunar" world in which we live, but also of its matter (in other words, three effects). Concerning the workings of the human soul, Avicenna, like Al Farabi, sees the "material intellect" or potential intellect as something that is not material. He believed the soul was incorporeal, and the potential intellect was a disposition of it which was in the soul from birth. As in Al Farabi, there are two further stages of potential for thinking, which are not yet actual thinking: first the mind acquires the most basic intelligible thoughts which we can not think in any other way, such as "the whole is greater than the part"; then comes a second level of derivative intelligible thoughts which could be thought. Concerning the actualization of thought, Avicenna applies the term "to two different things, to actual human thought, irrespective of the intellectual progress a man has made, and to actual thought when human intellectual development is complete", as in Al Farabi. When reasoning in the sense of deriving conclusions from syllogisms, Avicenna says people are using a physical "cogitative" faculty (mufakkira, fikra) of the soul, which can err.
The human cogitative faculty is the same as the "compositive imaginative faculty (mutakhayyila) in reference to the animal soul". But some people can use "insight" to avoid this step and derive conclusions directly by conjoining with the active intellect. Once a thought has been learned in a soul, the physical faculties of sense perception and imagination become unnecessary, and as a person acquires more thoughts, their soul becomes less connected to their body. For Avicenna, differing from the normal Aristotelian position, all of the soul is by nature immortal. But the level of intellectual development does affect the type of afterlife that the soul can have. Only a soul which has reached the highest type of conjunction with the active intellect can form a perfect conjunction with it after the death of the body, and this is a supreme eudaimonia. Lesser intellectual achievement means a less happy or even painful afterlife. Concerning prophecy, Avicenna identifies a broader range of possibilities which fit into this model, which is still similar to that of Al Farabi.
Averroes:
Averroes came to be regarded even in Europe as "the Commentator" on "the Philosopher", Aristotle, and his study of the questions surrounding the nous was very influential amongst Jewish and Christian philosophers, with some aspects being quite controversial. According to Herbert Davidson, Averroes' doctrine concerning nous can be divided into two periods. In the first, neoplatonic emanationism, not found in the original works of Aristotle, was combined with a naturalistic explanation of the human material intellect. "It also insists on the material intellect's having an active intellect as a direct object of thought and conjoining with the active intellect, notions never expressed in the Aristotelian canon." It was this presentation which Jewish philosophers such as Moses Narboni and Gersonides understood to be Averroes'. In the later model of the universe, which was transmitted to Christian philosophers, Averroes "dismisses emanationism and explains the generation of living beings in the sublunar world naturalistically, all in the name of a more genuine Aristotelianism. Yet it abandons the earlier naturalistic conception of the human material intellect and transforms the material intellect into something wholly un-Aristotelian, a single transcendent entity serving all mankind. It nominally salvages human conjunction with the active intellect, but in words that have little content." This position, that humankind shares one active intellect, was taken up by Parisian philosophers such as Siger of Brabant, but also widely rejected by philosophers such as Albertus Magnus, Thomas Aquinas, Ramon Lull, and Duns Scotus. Despite being widely considered heretical, the position was later defended by many more European philosophers including John of Jandun, who was the primary link bringing this doctrine from Paris to Bologna. After him this position continued to be defended and also rejected by various writers in northern Italy.
In the 16th century it finally became a less common position after the renewal of an "Alexandrian" position based on that of Alexander of Aphrodisias, associated with Pietro Pomponazzi.
Christianity:
The Christian New Testament makes mention of the nous or noos, generally translated in modern English as "mind", but also showing a link to God's will or law:
Romans 7:23 refers to the law (nomos) of God which is the law in the writer's nous, as opposed to the law of sin which is in the body.
Romans 12:2, demands Christians should not conform to this world, but continuously be transformed by the renewing of their nous, so as to be able to determine what God’s will is.
1 Corinthians 14:14-14:19. Discusses "speaking in tongues" and says that a person who speaks in tongues that they cannot understand should prefer to also have understanding (nous), and it is better for the listeners also to be able to understand.
Ephesians 4:17-4:23. Discusses how non-Christians have a worthless nous, while Christians should seek to renew the spirit (pneuma) of their nous.
2 Thessalonians 2:2. Uses the term to refer to being troubled of mind.
Revelation 17:9: "here is the nous which has wisdom".

In the writings of the Christian fathers a sound or pure nous is considered essential to the cultivation of wisdom.
Philosophers influencing western Christianity:
While philosophical works were not commonly read or taught in the early Middle Ages in most of Europe, the works of authors like Boethius and Augustine of Hippo formed an important exception. Both were influenced by neoplatonism, and their works were amongst the older works still known at the time of the Carolingian Renaissance and the beginnings of Scholasticism.
In his early years Augustine was heavily influenced by Manichaeism and afterwards by the Neoplatonism of Plotinus. After his conversion to Christianity and baptism (387), he developed his own approach to philosophy and theology, accommodating a variety of methods and different perspectives. Augustine used Neoplatonism selectively. He used both the neoplatonic Nous and the Platonic Form of the Good (or "The Idea of the Good") as equivalent terms for the Christian God, or at least for one particular aspect of God. For example, God, as nous, can act directly upon matter, and not only through souls, and concerning the souls through which it works upon the world experienced by humanity, some are treated as angels.

Scholasticism becomes more clearly defined much later, as the peculiar native type of philosophy in medieval catholic Europe. In this period, Aristotle became "the Philosopher", and scholastic philosophers, like their Jewish and Muslim contemporaries, studied the concept of the intellectus on the basis not only of Aristotle, but also late classical interpreters like Augustine and Boethius. A European tradition of new and direct interpretations of Aristotle developed which was eventually strong enough to argue with partial success against some of the interpretations of Aristotle from the Islamic world, most notably Averroes' doctrine of there being one "active intellect" for all humanity. Notable "Catholic" (as opposed to Averroist) Aristotelians included Albertus Magnus and Thomas Aquinas, the founder of Thomism, which exists to this day in various forms. Concerning the nous, Thomism agrees with those Aristotelians who insist that the intellect is immaterial and separate from any bodily organs, but as per Christian doctrine, the whole of the human soul is immortal, not only the intellect.
Eastern Orthodox:
The human nous in Eastern Orthodox Christianity is the "eye of the heart or soul" or the "mind of the heart". The soul of man is created by God in His image; man's soul is intelligent and noetic. Saint Thalassius of Syria wrote that God created beings "with a capacity to receive the Spirit and to attain knowledge of Himself; He has brought into existence the senses and sensory perception to serve such beings". Eastern Orthodox Christians hold that God did this by creating mankind with intelligence and noetic faculties.

Human reasoning is not enough: there will always remain an "irrational residue" which escapes analysis and which cannot be expressed in concepts: it is this unknowable depth of things, that which constitutes their true, indefinable essence, that also reflects the origin of things in God. In Eastern Christianity it is by faith or intuitive truth that this component of an object's existence is grasped. Though God through his energies draws us to him, his essence remains inaccessible. The operation of faith is the means of free will by which mankind faces the future or unknown; these noetic operations are contained in the concept of insight or noesis. Faith (pistis) is therefore sometimes used interchangeably with noesis in Eastern Christianity.
Angels have intelligence and nous, whereas men have reason, both logos and dianoia, nous and sensory perception. This follows the idea that man is a microcosm and an expression of the whole creation or macrocosmos. The human nous was darkened after the Fall of Man (which was the result of the rebellion of reason against the nous), but after the purification (healing or correction) of the nous (achieved through ascetic practices like hesychasm), the human nous (the "eye of the heart") will see God's uncreated Light (and feel God's uncreated love and beauty, at which point the nous will start the unceasing prayer of the heart) and become illuminated, allowing the person to become an orthodox theologian.

In this belief, the soul is created in the image of God. Since God is Trinitarian, Mankind is Nous, reason, both logos and dianoia, and Spirit. The same is held true of the soul (or heart): it has nous, word and spirit. To understand this better, an understanding of Saint Gregory Palamas's teaching that man is a representation of the trinitarian mystery should first be addressed. This holds not that the Trinity should be understood anthropomorphically, but that man is to be understood in a triune way. Or, that the Trinitarian God is not to be interpreted from the point of view of individual man, but man is interpreted on the basis of the Trinitarian God. And this interpretation is revelatory, not merely psychological and human. This means that it is only when a person is within the revelation, as all the saints lived, that he can grasp this understanding completely (see theoria). The second presupposition is that mankind has, and is composed of, nous, word and spirit like the trinitarian mode of being. Man's nous, word and spirit are not hypostases or individual existences or realities, but activities or energies of the soul, whereas in the case of God or the Persons of the Holy Trinity each is indeed a hypostasis.
So these three components of each individual man are "inseparable from one another" but do not have a personal character when speaking of the being or ontology that is mankind. The nous as the eye of the soul, which some Fathers also call the heart, is the centre of man and is where true (spiritual) knowledge is validated. This is seen as true knowledge which is "implanted in the nous as always co-existing with it".
Early modern philosophy:
The so-called "early modern" philosophers of western Europe in the 17th and 18th centuries developed arguments which led to the establishment of modern science as a methodical approach to improve the welfare of humanity by learning to control nature. As such, speculation about metaphysics, which cannot be used for anything practical, and which can never be confirmed against the reality we experience, started to be deliberately avoided, especially according to the so-called "empiricist" arguments of philosophers such as Bacon, Hobbes, Locke and Hume. The Latin motto "nihil in intellectu nisi prius fuerit in sensu" (nothing in the intellect without first being in the senses) has been described as the "guiding principle of empiricism" in the Oxford Dictionary of Philosophy. (This was in fact an old Aristotelian doctrine, which they took up, but as discussed above Aristotelians still believed that the senses on their own were not enough to explain the mind.) These philosophers explain the intellect as something developed from experience of sensations, being interpreted by the brain in a physical way, and nothing else, which means that absolute knowledge is impossible. For Bacon, Hobbes and Locke, who wrote in both English and Latin, "intellectus" was translated as "understanding". Far from seeing it as a secure way to perceive the truth about reality, Bacon, for example, actually named the intellectus in his Novum Organum, and the proœmium to his Great Instauration, as a major source of wrong conclusions, because it is biased in many ways, for example towards over-generalizing. For this reason, modern science should be methodical, in order not to be misled by the weak human intellect.
He felt that lesser known Greek philosophers such as Democritus, "who did not suppose a mind or reason in the frame of things", have been arrogantly dismissed because of Aristotelianism, leading to a situation in his time wherein "the search of the physical causes hath been neglected, and passed in silence". The intellect or understanding was the subject of Locke's Essay Concerning Human Understanding.

These philosophers also tended not to emphasize the distinction between reason and intellect, describing the peculiar universal or abstract definitions of human understanding as being man-made and resulting from reason itself. Hume even questioned the distinctness or peculiarity of human understanding and reason, compared to other types of associative or imaginative thinking found in some other animals. In modern science during this time, Newton is sometimes described as more empiricist compared to Leibniz.
On the other hand, into modern times some philosophers have continued to propose that the human mind has an in-born ("a priori") ability to know the truth conclusively. These philosophers have needed to argue that the human mind has direct and intuitive ideas about nature, which means it cannot be limited entirely to what can be known from sense perception. Amongst the early modern philosophers, some such as Descartes, Spinoza, Leibniz, and Kant tend to be distinguished from the empiricists as rationalists, and to some extent at least some of them are called idealists. Their writings on the intellect or understanding present various doubts about empiricism, and in some cases they argued for positions which appear more similar to those of medieval and classical philosophers.
The first in this series of modern rationalists, Descartes, is credited with defining a "mind-body problem" which is a major subject of discussion for university philosophy courses. According to the presentation in his 2nd Meditation, the human mind and body are different in kind, and while Descartes agrees with Hobbes for example that the human body works like a clockwork mechanism, and its workings include memory and imagination, the real human is the thinking being, a soul, which is not part of that mechanism. Descartes explicitly refused to divide this soul into its traditional parts such as intellect and reason, saying that these things were indivisible aspects of the soul. Descartes was therefore a dualist, but very much in opposition to traditional Aristotelian dualism. In his 6th Meditation he deliberately uses traditional terms and states that his active faculty of giving ideas to his thought must be corporeal, because the things perceived are clearly external to his own thinking and corporeal, while his passive faculty must be incorporeal (unless God is deliberately deceiving us, and then in this case the active faculty would be from God). This is the opposite of the traditional explanation found for example in Alexander of Aphrodisias and discussed above, for whom the passive intellect is material, while the active intellect is not. One result is that in many Aristotelian conceptions of the nous, for example that of Thomas Aquinas, the senses are still a source of all the intellect's conceptions. However, with the strict separation of mind and body proposed by Descartes, it becomes possible to propose that there can be thought about objects never perceived with the body's senses, such as a thousand sided geometrical figure. Gassendi objected to this distinction between the imagination and the intellect in Descartes.
Hobbes also objected, and according to his own philosophical approach asserted that the "triangle in the mind comes from the triangle we have seen" and "essence in so far as it is distinguished from existence is nothing else than a union of names by means of the verb is". Descartes, in his reply to this objection, insisted that this traditional distinction between essence and existence is "known to all".

His contemporary Blaise Pascal criticised him in similar words to those used by Plato's Socrates concerning Anaxagoras, discussed above, saying that "I cannot forgive Descartes; in all his philosophy, Descartes did his best to dispense with God. But Descartes could not avoid prodding God to set the world in motion with a snap of his lordly fingers; after that, he had no more use for God."

Descartes argued that the intellect helps people interpret what they perceive not with the help of an intellect which enters from outside, but because each human mind comes into being with innate God-given ideas, more similar, then, to Plato's theory of anamnesis, only not requiring reincarnation. Apart from such examples as the geometrical definition of a triangle, another example is the idea of God. According to the 4th Meditation, error comes about because people make judgments about things which are not in the intellect or understanding. This is possible because the human will, being free, is not limited like the human intellect.
Spinoza, though considered a Cartesian and a rationalist, rejected Cartesian dualism and idealism. In his "pantheistic" approach, explained for example in his Ethics, God is the same as nature, and the human intellect is the same as the human will. The divine intellect of nature is quite different from the human intellect, because the human intellect is finite, but Spinoza does accept that the human intellect is a part of the infinite divine intellect.
Leibniz, in comparison to the guiding principle of the empiricists described above, added some words: nihil in intellectu nisi prius fuerit in sensu, nisi intellectus ipse ("nothing in the intellect without first being in the senses, except the intellect itself"). Despite being at the forefront of modern science and modernist philosophy, in his writings he still referred to the active and passive intellect, a divine intellect, and the immortality of the active intellect.
Berkeley, partly in reaction to Locke, also attempted to reintroduce an "immaterialism" into early modern philosophy (later referred to as "subjective idealism" by others). He argued that individuals can only know sensations and ideas of objects, not abstractions such as "matter", and that ideas depend on perceiving minds for their very existence. This belief later became immortalized in the dictum, esse est percipi ("to be is to be perceived"). As in classical and medieval philosophy, Berkeley believed understanding had to be explained by divine intervention, and that all our ideas are put in our mind by God.
Hume accepted some of Berkeley's corrections of Locke, but in answer insisted, as had Bacon and Hobbes, that absolute knowledge is not possible, and that all attempts to show how it could be possible have logical problems. Hume's writings remain highly influential on all philosophy afterwards, and are for example credited by Kant with awakening him from his dogmatic slumber.
Kant, a turning point in modern philosophy, agreed with some classical philosophers and Leibniz that the intellect itself, although it needed sensory experience for understanding to begin, needs something else in order to make sense of the incoming sense information. In his formulation the intellect (Verstand) has a priori or innate principles which it has before thinking even starts. Kant represents the starting point of German idealism and a new phase of modernity, while empiricist philosophy has also continued beyond Hume to the present day.
**Community Climate System Model**
Community Climate System Model:
The Community Climate System Model (CCSM) is a coupled general circulation model (GCM) developed by the University Corporation for Atmospheric Research (UCAR) with funding from the National Science Foundation (NSF), the Department of Energy (DoE), and the National Aeronautics and Space Administration (NASA). The coupled components include an atmospheric model (Community Atmosphere Model), a land-surface model (Community Land Model), an ocean model (Parallel Ocean Program), and a sea ice model (Community Sea Ice Model, CICE). CCSM is maintained by the National Center for Atmospheric Research (NCAR).
Its software design assumes a physical/dynamical component of the climate system and, as a freely available community model, is designed to work on a variety of machine architectures powerful enough to run the model. The CESM codebase is mostly public domain with some segregable components issued under open source and other licenses. The offline chemical transport model has been described as "very efficient".

The model includes four submodels (land, sea-ice, ocean and atmosphere) connected by a coupler that exchanges information with the submodels. NCAR suggested that because of this, CCSM cannot be considered a single climate model, but rather a framework for building and testing various climate models.
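The hub-and-spoke arrangement described above, in which the submodels exchange boundary fields only through a central coupler, can be sketched in a few lines of Python. This is an illustrative toy under stated assumptions, not actual CCSM/CESM code; all class and field names here are invented:

```python
class Submodel:
    """Toy stand-in for one component (atmosphere, land, ocean, sea ice)."""
    def __init__(self, name):
        self.name = name

    def step(self, forcing):
        # A real component would integrate its own physics using the
        # forcing fields; this stub just emits one labeled boundary field.
        return {self.name + "_flux": 1.0}

class Coupler:
    """Central hub: components never exchange fields directly."""
    def __init__(self, components):
        self.components = components
        self.fields = {}  # shared boundary state

    def advance(self):
        # Hand each component the current shared state, then merge its
        # outputs back in for the other components and the next step.
        for comp in self.components:
            self.fields.update(comp.step(dict(self.fields)))

coupler = Coupler([Submodel(n) for n in ("atm", "lnd", "ocn", "ice")])
coupler.advance()
print(sorted(coupler.fields))
```

Because each component only sees the coupler's interface, one submodel can be swapped out without touching the others, which is what makes the system a framework for building and testing many model configurations rather than a single fixed model.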
Submodels:
Ocean model (docn6):
The Climatological Data Ocean Model (docn) is currently at version 6.0. It must be run within the framework of CCSM rather than standalone. It takes two netCDF datasets as input and sends six outputs to the coupler, to be integrated with the output of the other submodels.
Atmosphere model (CAM):
The Community Atmosphere Model (CAM) can also be run as a standalone atmosphere model. Its most current version is 3.1, while 3.0 was the fifth generation. On May 17, 2002, its name was changed from the NCAR Community Climate Model to reflect its role in the new system. It shares the same horizontal grid as the land model of CCSM: a 256×128 regular longitude/latitude global horizontal grid (giving a 1.4 degree resolution). It has 26 levels in the vertical.
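The quoted 1.4 degree figure follows directly from the grid dimensions; a quick arithmetic check:

```python
# A 256x128 regular longitude/latitude grid spans 360 degrees of
# longitude and 180 degrees of latitude, so the cell spacing is:
nlon, nlat = 256, 128
dlon = 360.0 / nlon  # degrees of longitude per cell
dlat = 180.0 / nlat  # degrees of latitude per cell
print(round(dlon, 1), round(dlat, 1))  # both 1.40625, i.e. ~1.4 degrees
```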
Sea Ice Model (CICE):
The polar component of ocean-atmosphere coupling includes sea ice geophysics using the model formerly known as the Los Alamos Sea Ice Model, CICE, now often referred to as the CICE Consortium model, to which NCAR has contributed code and physical improvements through the Polar Climate Working Group. CICE simulates the growth, movement, deformation and melt of sea ice, which is critical for calculating energy and mass fluxes between the polar atmosphere and oceans in the earth system.
Development:
The first version of CCSM was created in 1983 as the Community Climate Model (CCM). Over the next two decades it was steadily improved and was renamed CCSM after the Climate System Model (CSM) components were introduced in May 1996. In June 2004 NCAR released the third version, which included new versions of all of the submodels. In 2007 this new version (commonly given the acronym CCSM3 or NCCCSM) was used in the IPCC Fourth Assessment Report, alongside many others. In May 2010 NCAR released CCSM version 4 (CCSM4). On June 25, 2010 NCAR released the successor to CCSM, called the Community Earth System Model (CESM), version 1.0 (CESM1), as a unified code release that included CCSM4 as the code base for its atmospheric component.
**Krishna Shenoy**
Krishna Shenoy:
Krishna Vaughn Shenoy (1968–2023) was an American neuroscientist and neuroengineer at Stanford University. Shenoy was the Hong Seh and Vivian W. M. Lim Professor in the Stanford University School of Engineering. He focused on neuroscience topics, including neurotechnology such as brain-computer interfaces. On 21 January 2023, he died after a long battle with pancreatic cancer. According to Google Scholar, he amassed an h-index of 79.
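For readers unfamiliar with the metric mentioned above: an h-index of 79 means at least 79 publications with at least 79 citations each. A minimal sketch of the computation follows; the citation counts in the example are hypothetical, not Shenoy's actual record:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Hypothetical citation counts: 4 papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))
```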
Research:
Shenoy obtained a B.S. in Electrical and Computer Engineering from UC Irvine (1987–1990) and a Ph.D. in Electrical Engineering and Computer Science from MIT (1990–1995). He was then a postdoctoral fellow in Neurobiology at Caltech (1995–2001). In 2001, Shenoy joined the Department of Electrical Engineering at Stanford University as an Assistant Professor, and was promoted to Associate Professor in 2008, and then to Full Professor in 2012. In 2017 he was appointed as the inaugural Hong Seh and Vivian W. M. Lim Professor (endowed chair). He also held courtesy appointments in the departments of Bioengineering, Neurobiology and Neurosurgery.

At Stanford, Shenoy was a member of the Wu Tsai Neurosciences Institute and the Bio-X Institute. He was the Director of Stanford's Neural Prosthetic Systems Laboratory and the co-director of the Neural Prosthetics Translational Laboratory at Stanford University. Within these positions, he aimed to restore motor function to paralyzed individuals. In 2015 Shenoy became an investigator with the Howard Hughes Medical Institute.

Shenoy and his team made fundamental discoveries about how the brain encodes and executes motor commands, applying those insights to improving brain-computer interfaces. To this end, they developed a mathematical framework for analyzing neural activity called 'computation through dynamics'.

In 2022 Shenoy was elected a member of the National Academy of Medicine "For making seminal contributions both to basic neuroscience and to translational and clinical research. His work has shown how networks of motor cortical neurons operate as dynamical systems, and he has developed new technologies to provide new means of restoring movement and communication to people with paralysis." In 2022 he was also elected as a Fellow of the IEEE "For contributions to cortical control of movement and brain-computer interfaces."
Patents:
US 9095455B2, "Brain machine interfaces incorporating neural population dynamics"
US 9373088B2, "Brain machine interface utilizing a discrete action state decoder in parallel with a continuous decoder for a neural prosthetic device"
US 8792976B2, "Brain machine interface"
US 20150245928A1, "Brain-Machine Interface Utilizing Interventions to Emphasize Aspects of Neural Variance and Decode Speed and Angle"
US 7058445B2, "Decoding of neural signals for movement control"
WO 2003005934A3, "Cognitive state machine for prosthetic systems"
US 20030023319A1, "Cognitive state machine for prosthetic systems"
US 6609017B1, "Processed neural signals and methods for generating and using them"
**Cheesemaking**
Cheesemaking:
Cheesemaking (or caseiculture) is the craft of making cheese. The production of cheese, like many other food preservation processes, allows the nutritional and economic value of a food material, in this case milk, to be preserved in concentrated form. Cheesemaking allows the production of cheese with diverse flavors and consistencies.
History:
Cheesemaking is documented in Egyptian tomb drawings and in ancient Greek literature.

Cheesemaking may have originated from nomadic herdsmen who stored milk in vessels made from sheep's and goats' stomachs. Because their stomach linings contain a mix of lactic acid, bacteria (as milk contaminants) and rennet, the milk would ferment and coagulate. A product reminiscent of yogurt would have been produced, which through gentle agitation and the separation of curds from whey would have resulted in the production of cheese; the cheese being essentially a concentration of the major milk protein, casein, and milk fat. The whey proteins, other major milk proteins, and lactose are all removed in the cheese whey. Another theory is offered by David Asher, who wrote that the origins actually lie within "the sloppy milk bucket in later European culture, it having gone unwashed and containing all of the necessary bacteria to facilitate the ecology of cheese".
Ancient cheesemaking:
One of the ancient cheesemakers' earliest tools, cheese molds or strainers, can be found throughout Europe, dating back to the Bronze Age. Baskets were used to separate the cheese curds, but as technology advanced, these cheese molds would be made of wood or pottery. The cheesemakers placed the cheese curds inside of the mold, secured the mold with a lid, then added pressure to separate the whey, which would drain out from the holes in the mold. The more whey that was drained, the less moisture was retained in the cheese. Less moisture meant that the cheese would be firmer. In Ireland, some cheeses ranged from a dry and hard cheese (mullahawn) to a semi-liquid cheese (millsén).

The designs and patterns were often used to decorate the cheeses and differentiate between them. Since many monastic establishments and abbeys owned their share of milk animals at the time, it was commonplace for the cheeses they produced to bear a cross in the middle.
Although the common perception is that cheese today is made from cow's milk, goat's milk was actually the preferred base of ancient cheesemakers, because goats are smaller animals than cows. This meant that goats required less food and were easier to transport and herd. Moreover, goats can breed at any time of the year, as opposed to sheep, which also produce milk but whose mating season comes only in fall and winter.
Before the age of pasteurization, cheesemakers knew that certain cheeses could cause constipation or kidney stones, so they advised their customers to offset these side effects by eating in moderation along with other foods and consuming walnuts, almonds, or horseradish.
Process:
The goal of cheese making is to control the spoiling of milk into cheese. The milk is traditionally from a cow, goat, sheep or buffalo, although, in theory, cheese could be made from the milk of any mammal. Cow's milk is most commonly used worldwide. The cheesemaker's goal is a consistent product with specific characteristics (appearance, aroma, taste, texture). The process used to make a Camembert will be similar to, but not quite the same as, that used to make Cheddar.
Some cheeses may be deliberately left to ferment from naturally airborne spores and bacteria; this approach generally leads to a less consistent product but one that is valuable in a niche market.
Culturing:
Cheese is made by bringing milk (possibly pasteurised) in the cheese vat to a temperature required to promote the growth of the bacteria that feed on lactose and thus ferment the lactose into lactic acid. These bacteria in the milk may be wild, as is the case with unpasteurised milk, or added from a culture, or a frozen or freeze-dried concentrate of starter bacteria. Bacteria which produce only lactic acid during fermentation are homofermentative; those that produce lactic acid as well as other compounds such as carbon dioxide, alcohol, aldehydes and ketones are heterofermentative. Fermentation using homofermentative bacteria is important in the production of cheeses such as Cheddar, where a clean, acid flavour is required. For cheeses such as Emmental the use of heterofermentative bacteria is necessary to produce the compounds that give characteristic fruity flavours and, importantly, the gas that results in the formation of bubbles in the cheese ('eye holes').
Starter cultures are chosen to give a cheese its specific characteristics. In the case of mould-ripened cheese such as Stilton, Roquefort or Camembert, mould spores (fungal spores) may be added to the milk in the cheese vat or can be added later to the cheese curd.
Coagulation:
During the fermentation process, once sufficient lactic acid has been developed, rennet is added to cause the casein to precipitate. Rennet contains the enzyme chymosin which converts κ-casein to para-κ-caseinate (the main component of cheese curd, which is a salt of one fragment of the casein) and glycomacropeptide, which is lost in the cheese whey. As the curd is formed, milk fat is trapped in a casein matrix. After adding the rennet, the cheese milk is left to form curds over a period of time.
Draining:
Once the cheese curd is judged to be ready, the cheese whey must be released. As with many foods, the presence of water and the bacteria in it encourages decomposition. To prevent such decomposition it is necessary to remove most of the water (whey) from the cheese milk, and hence the cheese curd, effecting a partial dehydration of the curd. There are several ways to separate the curd from the whey.
Scalding:
In making Cheddar (or many other hard cheeses) the curd is cut into small cubes and the temperature is raised to approximately 39 °C (102 °F) to 'scald' the curd particles. Syneresis occurs and cheese whey is expressed from the particles. The Cheddar curds and whey are often transferred from the cheese vat to a cooling table which contains screens that allow the whey to drain, but which trap the curd. The curd is cut using long, blunt knives and 'blocked' (stacked, cut and turned) by the cheesemaker to promote the release of cheese whey in a process known as 'cheddaring'. During this process the acidity of the curd increases to a desired level. The curd is then milled into ribbon shaped pieces and salt is mixed into it to arrest acid development. The salted green cheese curd is put into cheese moulds lined with cheesecloths and pressed overnight to allow the curd particles to bind together. The pressed blocks of cheese are then removed from the cheese moulds and are either bound with muslin-like cloth, or waxed or vacuum packed in plastic bags to be stored for maturation. Vacuum packing removes oxygen and prevents mould (fungal) growth during maturation, which depending on the wanted final product may be a desirable characteristic or not.
Mould-ripening:
In contrast to cheddaring, making cheeses like Camembert requires a gentler treatment of the curd. It is carefully transferred to cheese hoops and the whey is allowed to drain from the curd by gravity, generally overnight. The cheese curds are then removed from the hoops to be brined by immersion in a saturated salt solution. The salt absorption stops bacteria growing, as with Cheddar. If white mould spores have not been added to the cheese milk, they are applied to the cheese either by spraying the cheese with a suspension of mould spores in water or by immersing the cheese in a bath containing spores of, e.g., Penicillium candidum.
The cheese is then taken through a series of maturation stages where temperature and relative humidity are carefully controlled, allowing the surface mould to grow and the mould-ripening of the cheese by fungi to occur. Mould-ripened cheeses ripen very quickly compared to hard cheeses (weeks against months or years). This is because the fungi used are biochemically very active when compared with starter bacteria. Some cheeses are surface-ripened by moulds, such as Camembert and Brie; some are ripened internally, such as Stilton, which is pierced with stainless steel wires to admit air and promote mould spore germination and growth, as with Penicillium roqueforti. Surface ripening of some cheeses, such as Saint-Nectaire, may also be influenced by yeasts which contribute flavour and coat texture. Others are allowed to develop bacterial surface growths which give characteristic colours and appearances, e.g., the growth of Brevibacterium linens, which gives an orange coat to cheeses.
**UDP-glucuronate 5'-epimerase**
UDP-glucuronate 5'-epimerase:
In enzymology, a UDP-glucuronate 5'-epimerase (EC 5.1.3.12) is an enzyme that catalyzes the chemical reaction UDP-glucuronate ⇌ UDP-L-iduronate. Hence, this enzyme has one substrate, UDP-glucuronate, and one product, UDP-L-iduronate.
This enzyme belongs to the family of isomerases, specifically those racemases and epimerases acting on carbohydrates and derivatives. The systematic name of this enzyme class is UDP-glucuronate 5'-epimerase. Other names in common use include uridine diphosphoglucuronate 5'-epimerase, UDP-glucuronic acid 5'-epimerase, and C-5-uronosyl epimerase. This enzyme participates in nucleotide sugars metabolism. It employs one cofactor, NAD+.
**Pentanitroaniline**
Pentanitroaniline:
Pentanitroaniline, sometimes called hexyl, is an explosive organic compound. It is a relatively sensitive explosive (much more so than TNT) that can be used as a base charge for detonators, although it is uncommon in this application.
Pentanitroaniline can be reacted with ammonia in benzene, dichloromethane or another similar solvent to produce triaminotrinitrobenzene (TATB), an insensitive high explosive, used in nuclear bombs and other critical applications.
Pentanitroaniline is regulated by the United States Department of Transportation (DOT) as a "forbidden explosive" that is too dangerous to transport over public thoroughfares or by air.
**KCNV2**
KCNV2:
Potassium voltage-gated channel subfamily V member 2 is a protein that in humans is encoded by the KCNV2 gene. The protein encoded by this gene is a voltage-gated potassium channel subunit.
**Unthought known**
Unthought known:
Unthought known is a phrase coined by Christopher Bollas in the 1980s to represent those experiences in some way known to the individual, but about which the individual is unable to think. At its most compelling, the unthought known stands for those early schemata for interpreting the object world that preconsciously determine our subsequent life expectations. In this sense, the unthought known refers to preverbal, unschematised early experience/trauma that may determine one's behaviour unconsciously, barred to conscious thought.
Prehistory:
It has been suggested that behind Bollas's concept lay a comment reported by Freud from a patient, to the effect that he had always known something but had never thought of it. The term has also been linked to W. R. Bion's idea of beta-elements – psychic experiences which cannot yet be processed in any way by the mind.
Central elements:
Bollas saw several elements as making up the substance of the unthought known. Persistent moods can be considered to preserve elementary but preschematized states of mind into later life; the complex early interplay of self and (primary) object may also be preserved in the unthought known; and early aesthetic experience – pre-verbal – can again form part of the unthought known. Bollas also linked the concept to D. W. Winnicott's notion of the true self.
Systems theory:
In terms of systems-centered therapy, the concept refers to the boundary between apprehensive knowing (non-verbal) and comprehensive knowing – what we can allow ourselves to formulate in words.
Therapy:
In therapy, the unthought known can become the subtext of the therapeutic interchange – the therapist's role then becoming that of picking up and containing (through projective identification) what the patients themselves cannot yet think about.
**Parabidiminished rhombicosidodecahedron**
Parabidiminished rhombicosidodecahedron:
In geometry, the parabidiminished rhombicosidodecahedron is one of the Johnson solids (J80). It is also a canonical polyhedron.
A Johnson solid is one of 92 strictly convex polyhedra that are composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. It can be constructed as a rhombicosidodecahedron with two opposing pentagonal cupolae removed. Related Johnson solids are the diminished rhombicosidodecahedron (J76), where one cupola is removed; the metabidiminished rhombicosidodecahedron (J81), where two non-opposing cupolae are removed; and the tridiminished rhombicosidodecahedron (J83), where three cupolae are removed.
**Partition (number theory)**
Partition (number theory):
In number theory and combinatorics, a partition of a non-negative integer n, also called an integer partition, is a way of writing n as a sum of positive integers. Two sums that differ only in the order of their summands are considered the same partition. (If order matters, the sum becomes a composition.) For example, 4 can be partitioned in five distinct ways:

4
3 + 1
2 + 2
2 + 1 + 1
1 + 1 + 1 + 1

The only partition of zero is the empty sum, having no parts.
The order-dependent composition 1 + 3 is the same partition as 3 + 1, and the two distinct compositions 1 + 2 + 1 and 1 + 1 + 2 represent the same partition as 2 + 1 + 1.
An individual summand in a partition is called a part. The number of partitions of n is given by the partition function p(n). So p(4) = 5. The notation λ ⊢ n means that λ is a partition of n.
Partitions can be graphically visualized with Young diagrams or Ferrers diagrams. They occur in a number of branches of mathematics and physics, including the study of symmetric polynomials and of the symmetric group and in group representation theory in general.
Examples:
The seven partitions of 5 are:

5
4 + 1
3 + 2
3 + 1 + 1
2 + 2 + 1
2 + 1 + 1 + 1
1 + 1 + 1 + 1 + 1

Some authors treat a partition as a decreasing sequence of summands, rather than an expression with plus signs. For example, the partition 2 + 2 + 1 might instead be written as the tuple (2, 2, 1) or in the even more compact form (2², 1) where the superscript indicates the number of repetitions of a part.
This multiplicity notation for a partition can be written alternatively as 1^{m₁} 2^{m₂} 3^{m₃} ⋯, where m₁ is the number of 1's, m₂ is the number of 2's, etc. (Components with mᵢ = 0 may be omitted.) For example, in this notation, the partitions of 5 are written 5¹, 1¹4¹, 2¹3¹, 1²3¹, 1¹2², 1³2¹, and 1⁵.
Diagrammatic representations of partitions:
There are two common diagrammatic methods to represent partitions: as Ferrers diagrams, named after Norman Macleod Ferrers, and as Young diagrams, named after Alfred Young. Both have several possible conventions; here, we use English notation, with diagrams aligned in the upper-left corner.
Ferrers diagram The partition 6 + 4 + 3 + 1 of the number 14 can be represented by the following diagram: The 14 circles are lined up in 4 rows, each having the size of a part of the partition. The diagrams for the 5 partitions of the number 4 are shown below: Young diagram An alternative visual representation of an integer partition is its Young diagram (often also called a Ferrers diagram). Rather than representing a partition with dots, as in the Ferrers diagram, the Young diagram uses boxes or squares. Thus, the Young diagram for the partition 5 + 4 + 1 is while the Ferrers diagram for the same partition is While this seemingly trivial variation does not appear worthy of separate mention, Young diagrams turn out to be extremely useful in the study of symmetric functions and group representation theory: filling the boxes of Young diagrams with numbers (or sometimes more complicated objects) obeying various rules leads to a family of objects called Young tableaux, and these tableaux have combinatorial and representation-theoretic significance. As a type of shape made by adjacent squares joined together, Young diagrams are a special kind of polyomino.
Partition function:
The partition function p(n) counts the partitions of a non-negative integer n. For instance, p(4) = 5 because the integer 4 has the five partitions 1 + 1 + 1 + 1, 1 + 1 + 2, 1 + 3, 2 + 2, and 4. The values of this function for n = 0, 1, 2, … are:

1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176, 231, 297, 385, 490, 627, 792, 1002, 1255, 1575, 1958, 2436, 3010, 3718, 4565, 5604, ... (sequence A000041 in the OEIS).

The generating function of p is

∑_{n=0}^∞ p(n) q^n = ∏_{j=1}^∞ ∑_{i=0}^∞ q^{ji} = ∏_{j=1}^∞ (1 − q^j)^{−1}.
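The generating-function product can be read as a dynamic program: each factor (1 − q^j)^{−1} admits any number of parts of size j. A minimal sketch (the function name is ours):

```python
def partition_counts(n_max):
    """p(0), ..., p(n_max): coefficients of prod_{j>=1} 1/(1 - q^j)."""
    p = [1] + [0] * n_max          # p(0) = 1: the empty sum
    for j in range(1, n_max + 1):  # admit parts of size j
        for n in range(j, n_max + 1):
            p[n] += p[n - j]       # append one more part equal to j
    return p

print(partition_counts(10))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```

Processing part sizes in the outer loop is what makes this count unordered partitions rather than ordered compositions.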
No closed-form expression for the partition function is known, but it has both asymptotic expansions that accurately approximate it and recurrence relations by which it can be calculated exactly. It grows as an exponential function of the square root of its argument:

p(n) ∼ (1/(4n√3)) exp(π√(2n/3)) as n → ∞.

The multiplicative inverse of its generating function is the Euler function; by Euler's pentagonal number theorem this function is an alternating sum of pentagonal-number powers of its argument.
p(n) = p(n − 1) + p(n − 2) − p(n − 5) − p(n − 7) + ⋯

Srinivasa Ramanujan discovered that the partition function has nontrivial patterns in modular arithmetic, now known as Ramanujan's congruences. For instance, whenever the decimal representation of n ends in the digit 4 or 9, the number of partitions of n will be divisible by 5.
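This recurrence, whose offsets are the generalized pentagonal numbers k(3k ∓ 1)/2, can be used to tabulate p(n) quickly, and it makes the first Ramanujan congruence easy to spot-check. One way to organize the loop:

```python
def p_list(n_max):
    """p(0..n_max) via Euler's pentagonal number recurrence:
    p(n) = sum_{k>=1} (-1)^(k-1) [p(n - k(3k-1)/2) + p(n - k(3k+1)/2)]."""
    p = [1] + [0] * n_max
    for n in range(1, n_max + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            sign = (-1) ** (k - 1)
            total += sign * p[n - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= n:
                total += sign * p[n - k * (3 * k + 1) // 2]
            k += 1
        p[n] = total
    return p

p = p_list(60)
# p(n) divisible by 5 whenever n ends in 4 or 9:
print(all(p[n] % 5 == 0 for n in range(4, 61, 5)))  # True
```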
Restricted partitions:
In both combinatorics and number theory, families of partitions subject to various restrictions are often studied. This section surveys a few such restrictions.
Conjugate and self-conjugate partitions If we flip the diagram of the partition 6 + 4 + 3 + 1 along its main diagonal, we obtain another partition of 14: by turning the rows into columns, we obtain the partition 4 + 3 + 3 + 2 + 1 + 1 of the number 14. Such partitions are said to be conjugate of one another. In the case of the number 4, the partitions 4 and 1 + 1 + 1 + 1 are a conjugate pair, and the partitions 3 + 1 and 2 + 1 + 1 are conjugate of each other. Of particular interest are partitions, such as 2 + 2, which are their own conjugates. Such partitions are said to be self-conjugate.

Claim: The number of self-conjugate partitions is the same as the number of partitions with distinct odd parts.
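Conjugation is easy to compute from the list of parts: column i of the Ferrers diagram has one cell for every part larger than i. A short sketch reproducing the examples above (the function name is ours):

```python
def conjugate(part):
    """Conjugate of a partition given as a weakly decreasing list of parts:
    the column lengths of its Ferrers diagram."""
    if not part:
        return []
    return [sum(1 for p in part if p > i) for i in range(part[0])]

print(conjugate([6, 4, 3, 1]))      # [4, 3, 3, 2, 1, 1]
print(conjugate([2, 2]) == [2, 2])  # True: 2 + 2 is self-conjugate
```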
Proof (outline): The crucial observation is that every odd part can be "folded" in the middle to form a self-conjugate diagram. One can then obtain a bijection between the set of partitions with distinct odd parts and the set of self-conjugate partitions.

Odd parts and distinct parts Among the 22 partitions of the number 8, there are 6 that contain only odd parts:

7 + 1
5 + 3
5 + 1 + 1 + 1
3 + 3 + 1 + 1
3 + 1 + 1 + 1 + 1 + 1
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1

Alternatively, we could count partitions in which no number occurs more than once. Such a partition is called a partition with distinct parts. If we count the partitions of 8 with distinct parts, we also obtain 6:

8
7 + 1
6 + 2
5 + 3
5 + 2 + 1
4 + 3 + 1

This is a general property. For each positive number, the number of partitions with odd parts equals the number of partitions with distinct parts, denoted by q(n). This result was proved by Leonhard Euler in 1748 and later was generalized as Glaisher's theorem.
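Euler's theorem is easy to spot-check by brute force with a small partition generator (names are ours):

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

parts8 = list(partitions(8))
odd      = sum(1 for p in parts8 if all(x % 2 == 1 for x in p))
distinct = sum(1 for p in parts8 if len(set(p)) == len(p))
print(len(parts8), odd, distinct)  # 22 6 6
```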
For every type of restricted partition there is a corresponding function for the number of partitions satisfying the given restriction. An important example is q(n) (partitions into distinct parts). The first few values of q(n) are (starting with q(0) = 1):

1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10, ... (sequence A000009 in the OEIS).

The generating function for q(n) is given by

∑_{n=0}^∞ q(n) x^n = ∏_{k=1}^∞ (1 + x^k) = ∏_{k=1}^∞ 1/(1 − x^{2k−1}).
The pentagonal number theorem gives a recurrence for q:

q(k) = a_k + q(k − 1) + q(k − 2) − q(k − 5) − q(k − 7) + q(k − 12) + q(k − 15) − q(k − 22) − ...

where a_k is (−1)^m if k = 3m² − m for some integer m and is 0 otherwise.
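The recurrence as quoted can be implemented directly; one detail worth a comment is that m and −m give k = 3m² ∓ m and share the sign (−1)^m. A sketch:

```python
def q_list(n_max):
    """q(0..n_max) from the pentagonal-number recurrence quoted above."""
    def a(k):
        # a_k = (-1)^m if k = 3m^2 - m for some integer m, else 0;
        # m and -m give k = 3m^2 -+ m and share the sign (-1)^m.
        m = 0
        while 3 * m * m - m <= k:
            if k in (3 * m * m - m, 3 * m * m + m):
                return (-1) ** m
            m += 1
        return 0

    q = [1] + [0] * n_max
    for k in range(1, n_max + 1):
        total, m = a(k), 1
        while m * (3 * m - 1) // 2 <= k:
            sign = 1 if m % 2 == 1 else -1   # +, +, -, -, +, +, ...
            total += sign * q[k - m * (3 * m - 1) // 2]
            if m * (3 * m + 1) // 2 <= k:
                total += sign * q[k - m * (3 * m + 1) // 2]
            m += 1
        q[k] = total
    return q

print(q_list(10))  # [1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10]
```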
Restricted part size or number of parts By taking conjugates, the number p_k(n) of partitions of n into exactly k parts is equal to the number of partitions of n in which the largest part has size k. The function p_k(n) satisfies the recurrence

p_k(n) = p_k(n − k) + p_{k−1}(n − 1)

with initial values p_0(0) = 1 and p_k(n) = 0 if n ≤ 0 or k ≤ 0 and n and k are not both zero. One recovers the function p(n) by

p(n) = ∑_{k=0}^n p_k(n).
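The recurrence and its initial values translate directly into a memoized function (the name is ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_exact(k, n):
    """Number of partitions of n into exactly k parts:
    p_k(n) = p_k(n - k) + p_{k-1}(n - 1)."""
    if k == 0 and n == 0:
        return 1
    if k <= 0 or n <= 0:
        return 0
    return p_exact(k, n - k) + p_exact(k - 1, n - 1)

print([p_exact(k, 4) for k in range(5)])     # [0, 1, 2, 1, 1]
print(sum(p_exact(k, 4) for k in range(5)))  # 5 = p(4)
```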
One possible generating function for such partitions, taking k fixed and n variable, is

∑_{n≥0} p_k(n) x^n = x^k ∏_{i=1}^k 1/(1 − x^i).

More generally, if T is a set of positive integers then the number of partitions of n, all of whose parts belong to T, has generating function

∏_{t∈T} (1 − x^t)^{−1}.
This can be used to solve change-making problems (where the set T specifies the available coins). As two particular cases, one has that the number of partitions of n in which all parts are 1 or 2 (or, equivalently, the number of partitions of n into 1 or 2 parts) is ⌊n/2 + 1⌋, and the number of partitions of n in which all parts are 1, 2 or 3 (or, equivalently, the number of partitions of n into at most three parts) is the nearest integer to (n + 3)² / 12.
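The change-making view gives the same dynamic program as the unrestricted case, with the part sizes limited to T. Both closed-form special cases can be checked against it (the function name is ours):

```python
def count_with_parts(n_max, allowed):
    """Coefficients of prod_{t in T} 1/(1 - x^t): the number of partitions
    of each n <= n_max with all parts drawn from `allowed`."""
    c = [1] + [0] * n_max
    for t in allowed:
        for n in range(t, n_max + 1):
            c[n] += c[n - t]
    return c

print(count_with_parts(10, [1, 2])[7])     # 4 = floor(7/2 + 1)
print(count_with_parts(10, [1, 2, 3])[7])  # 8 = nearest integer to (7+3)^2/12
```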
Partitions in a rectangle and Gaussian binomial coefficients One may also simultaneously limit the number and size of the parts. Let p(N, M; n) denote the number of partitions of n with at most M parts, each of size at most N. Equivalently, these are the partitions whose Young diagram fits inside an M × N rectangle. There is a recurrence relation

p(N, M; n) = p(N, M − 1; n) + p(N − 1, M; n − M)

obtained by observing that p(N, M; n) − p(N, M − 1; n) counts the partitions of n into exactly M parts of size at most N, and subtracting 1 from each part of such a partition yields a partition of n − M into at most M parts, each of size at most N − 1. The Gaussian binomial coefficient is defined as

(M+N choose M)_q = ∏_{k=1}^{M} (1 − q^{N+k}) / (1 − q^k).

The Gaussian binomial coefficient is related to the generating function of p(N, M; n) by the equality

∑_{n=0}^{MN} p(N, M; n) q^n = (M+N choose M)_q.
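The recurrence for p(N, M; n) is easy to memoize, and setting q = 1 in the Gaussian binomial coefficient recovers the ordinary binomial coefficient, which gives a quick sanity check: summing p(N, M; n) over all n must equal C(M + N, M). A sketch:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def p_rect(N, M, n):
    """Partitions of n with at most M parts, each of size at most N
    (Young diagram inside an M x N rectangle)."""
    if n == 0:
        return 1
    if n < 0 or M == 0 or N == 0:
        return 0
    # second term: exactly M parts, with 1 subtracted from each part
    return p_rect(N, M - 1, n) + p_rect(N - 1, M, n - M)

# q = 1 check: partitions fitting in a 2 x 2 box total C(4, 2) = 6.
print(sum(p_rect(2, 2, n) for n in range(5)), comb(4, 2))  # 6 6
```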
Rank and Durfee square:
The rank of a partition is the largest number k such that the partition contains at least k parts of size at least k. For example, the partition 4 + 3 + 3 + 2 + 1 + 1 has rank 3 because it contains 3 parts that are ≥ 3, but does not contain 4 parts that are ≥ 4. In the Ferrers diagram or Young diagram of a partition of rank r, the r × r square of entries in the upper-left is known as the Durfee square: The Durfee square has applications within combinatorics in the proofs of various partition identities. It also has some practical significance in the form of the h-index.
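The rank, as defined here, can be read off directly from the part list (the function name is ours):

```python
def durfee_rank(part):
    """Side of the Durfee square: the largest k such that the partition
    has at least k parts of size at least k."""
    return max((k for k in range(1, len(part) + 1)
                if sum(1 for p in part if p >= k) >= k), default=0)

print(durfee_rank([4, 3, 3, 2, 1, 1]))  # 3, as in the example above
```

This is also exactly how the h-index is computed from a list of per-paper citation counts.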
A different statistic is also sometimes called the rank of a partition (or Dyson rank), namely, the difference λ_k − k for a partition of k parts with largest part λ_k. This statistic (which is unrelated to the one described above) appears in the study of Ramanujan congruences.
Young's lattice:
There is a natural partial order on partitions given by inclusion of Young diagrams. This partially ordered set is known as Young's lattice. The lattice was originally defined in the context of representation theory, where it is used to describe the irreducible representations of symmetric groups Sn for all n, together with their branching properties, in characteristic zero. It also has received significant study for its purely combinatorial properties; notably, it is the motivating example of a differential poset.
**Desert bloom**
Desert bloom:
A desert bloom is a climatic phenomenon that occurs in various deserts around the world. The phenomenon consists of the blossoming of a wide variety of flowers during early-mid spring in years when rainfall is unusually high. The blossoming occurs when the unusual level of rainfall reaches seeds and bulbs that have been in a latent or dormant state, and causes them to germinate and flower in early spring. It is accompanied by the proliferation of insects, birds and small species of lizards.
Around the world:
Chile In the Atacama Desert, a desert bloom (Spanish: desierto florido) occurs between the months of September and November in years when rainfall is unusually high. Normally, the Atacama Desert receives less than 12 mm (0.47 in) of rain a year.
At its height, the phenomenon can be seen from just south of the city of Vallenar to just north of the city of Copiapó throughout the coastal valleys and Chilean Coast Range from September to November.
Climatically, the event is related to the El Niño phenomenon, a band of anomalously warm ocean water temperatures that occasionally develops off the western coast of South America, which can lead to an increase in evaporation and therefore precipitation. The flowering desert is a popular tourist attraction, with visitors coming from various points around the southern Atacama, including Huasco, Vallenar, La Serena, Copiapó and Caldera.
Plant and animal life The flowering desert involves more than 200 species of flower, most of them endemic to the Atacama region. The different species germinate at different times through the flowering desert period. Some of the most common species include:

Garra de león (Bomarea ovallei)
Pata de guanaco (Cistanthe grandiflora)
Añañuca (Rhodolirium montanum)
Schizopetalon tenuifolium

The region is also home to cacti, succulents and other examples of xerophilous plants, as well as animal species including the tuco-tuco and the four-eyed frog.
Conservation In recent years, concerns have been raised by environmental organizations about the potentially damaging effects of large numbers of tourists visiting the flowering desert, the illegal trade of native flower species, and the development of motorsport. Environmental organizations have suggested that these activities limit the potential for regeneration of the existing species. In response, the Chilean Government has established a series of prohibitions and controls, in addition to informative campaigns aimed at the public, and especially at tourists, in order to limit the damage. The Comisión del Desierto Florido de la Región de Atacama was created in 1997, and re-launched in 2015, by the regional government of the Atacama Region as an agency aimed at finding ways to protect the desert bloom. In June 2022 Copiapó passed a municipal decree establishing fines for those who damage the desert bloom. On October 2, 2022, the establishment of the Desierto Florido National Park in 2023 was officially announced.
Flowering The phenomenon depends on above-average rainfall, but highly excessive rainfall can limit blooming. For example, in 1997 the region experienced very high total rainfall, with 129.4 mm (5.09 in) of rain in Copiapó (978% above average) and 168.5 mm (6.63 in) in Vallenar (433% above average), but there was only minimal desert flowering. In a single day in March 2015, parts of the area received 23 mm (0.91 in) of rain from El Niño, causing flowering in September and October 2015.
Peru To the south and north of Lima, a desert bloom occurs between the months of September and November. One particularity of the Lima desert bloom is that it extends all the way up into the highlands, as the clouds get "stuck" against the slopes and precipitate; another is the green moss that appears.
United States
**European Conference on the Dynamics of Molecular Systems**
European Conference on the Dynamics of Molecular Systems:
MOLEC, the European Conference on the Dynamics of Molecular Systems, is a biennial scientific conference, held every two years, usually in late summer. The first conference was held in Trento, Italy, in 1976.
Conference locations:
The conference has been held in the following locations:

Trento (Italy), 1976, organized by Peter Toennies and Franco Gianturco
Brandbjerg Hojskole (Denmark), 1978
Oxford (UK), 1980
Nijmegen (The Netherlands), 1982
Jerusalem (Israel), 1984
Aussois (France), 1986
Assisi (Italy), 1988
Bernkastel-Kues (Germany), 1990
Prague (Czech Republic), 1992
Salamanca (Spain), 1994
Nyborg Strand (Denmark), 1996
Bristol (UK), 1998
Jerusalem (Israel), 2000
Istanbul (Turkey), 2002
Nunspeet (The Netherlands), 2004
Trento (Italy), 2006
St. Petersburg (Russia), 2008
Curia (Portugal), 2010
Oxford (UK), 2012
Gothenburg (Sweden), 2014
Toledo (Spain), 2016, chair Alberto Garcia Vela
Dinard (France), 2018
MOLEC Prizes:
MOLEC Senior Prize

1996: Prof. Jan Peter Toennies
1998: Prof. Franco Gianturco
2004: Prof. Raphael Levine
2006: Prof. Zdenek Herman
2008: Prof. Gabriel Balint-Kurti
2016: Prof. Dieter Gerlich
2018: Prof. David Parker

Zdenek Herman MOLEC Young Scientist Prize

2016: Prof. Sebastiaan Y. T. van de Meerakker
2018: Prof. Francesca Calegari
**Serous membrane**
Serous membrane:
The serous membrane (or serosa) is a smooth tissue membrane of mesothelium lining the contents and inner walls of body cavities; it secretes serous fluid to allow lubricated sliding movements between opposing surfaces. The serous membrane that covers internal organs is called the visceral membrane, while the one that covers the cavity wall is called the parietal membrane. Between the two opposing serosal surfaces is often a potential space, mostly empty except for a small amount of serous fluid. The Latin anatomical name is tunica serosa. Serous membranes line and enclose several body cavities, also known as serous cavities, where they secrete a lubricating fluid which reduces friction from movements. The serosa is entirely different from the adventitia, a connective tissue layer which binds together structures rather than reducing friction between them. The serous membrane covering the heart and lining the mediastinum is referred to as the pericardium, the serous membrane lining the thoracic cavity and surrounding the lungs is referred to as the pleura, and that lining the abdominopelvic cavity and the viscera is referred to as the peritoneum.
Structure:
Serous membranes have two layers. The parietal layers of the membranes line the walls of the body cavity (pariet- refers to a cavity wall). The visceral layer of the membrane covers the organs (the viscera). Between the parietal and visceral layers is a very thin, fluid-filled serous space, or cavity.
Visceral and parietal layers Each serous membrane is composed of a secretory epithelial layer and a connective tissue layer underneath. The epithelial layer, known as mesothelium, consists of a single layer of avascular flat nucleated cells (simple squamous epithelium) which produce the lubricating serous fluid. This fluid has a consistency similar to thin mucus. These cells are bound tightly to the underlying connective tissue.
The connective tissue layer provides the blood vessels and nerves for the overlying secretory cells, and also serves as the binding layer which allows the whole serous membrane to adhere to organs and other structures. For the heart, the layers of the serous membrane are called the parietal pericardium and the visceral pericardium (sometimes called the epicardium). Other parts of the body may also have specific names for these structures. For example, the serosa of the uterus is called the perimetrium.
The pericardial cavity (surrounding the heart), pleural cavity (surrounding the lungs) and peritoneal cavity (surrounding most organs of the abdomen) are the three serous cavities within the human body. While serous membranes have a lubricative role to play in all three cavities, in the pleural cavity it has a greater role to play in the function of breathing.
The serous cavities are formed from the intraembryonic coelom and are basically an empty space within the body surrounded by serous membrane. Early in embryonic life visceral organs develop adjacent to a cavity and invaginate into the bag-like coelom. Therefore, each organ becomes surrounded by serous membrane - they do not lie within the serous cavity. The layer in contact with the organ is known as the visceral layer, while the parietal layer is in contact with the body wall.
Examples In the human body, there are three serous cavities with associated serous membranes:

A serous membrane lines the pericardial cavity of the heart and reflects back to cover the heart, much like an under-inflated balloon would form two layers surrounding a fist. Called the pericardium, this serous membrane is a two-layered sac that surrounds the entire heart except where blood vessels emerge on the heart's superior side.
The pleura is the serous membrane that surrounds the lungs in the pleural cavity.
The peritoneum is the serous membrane that surrounds several organs in the abdominopelvic cavity.
The tunica vaginalis is the serous membrane which surrounds the male gonad, the testis. The two layers of serous membranes are named parietal and visceral. Between the two layers is a thin, fluid-filled space. The fluid is produced by the serous membranes and stays between the two layers to reduce friction between the walls of the cavities and the internal organs when they move with respect to one another, such as when the lungs inflate or the heart beats. Such movement could otherwise lead to inflammation of the organs.
Development All serous membranes found in the human body are ultimately formed from the mesoderm of the trilaminar embryo. The trilaminar embryo consists of three relatively flat layers: ectoderm, endoderm (also known as "entoderm") and mesoderm.
As the embryo develops, the mesoderm starts to segment into three main regions: the paraxial mesoderm, the intermediate mesoderm and the lateral plate mesoderm.
The lateral plate mesoderm later splits in half to form two layers bounding a cavity known as the intraembryonic coelom. Individually, each layer is known as splanchnopleure and somatopleure.
The splanchnopleure is associated with the underlying endoderm with which it is in contact, and later becomes the serous membrane in contact with visceral organs within the body.
The somatopleure is associated with the overlying ectoderm and later becomes the serous membrane in contact with the body wall. The intraembryonic coelom can now be seen as a cavity within the body which is covered with serous membrane derived from the splanchnopleure. This cavity is divided and demarcated by the folding and development of the embryo, ultimately forming the serous cavities which house many different organs within the thorax and abdomen.
Diseases:
Mesotheliomas are neoplasias that are relatively specific for serous membranes. The modified Müllerian-derived serous membranes that surround the ovaries in females can give rise to serous tumors, a solid to papillary tumor type that may also arise within the uterus.
**Inferior cerebral veins**
Inferior cerebral veins:
The inferior cerebral veins are veins that drain the undersurface of the cerebral hemispheres and empty into the cavernous and transverse sinuses.
Those on the orbital surface of the frontal lobe join the superior cerebral veins, and through these open into the superior sagittal sinus.
Those of the temporal lobe anastomose with the middle cerebral and basal veins, and join the cavernous, sphenoparietal, and superior petrosal sinuses.
**Preference learning**
Preference learning:
Preference learning is a subfield of machine learning concerned with learning predictive models from observed preference information. In the view of supervised learning, preference learning trains on a set of items which have preferences toward labels or other items, and predicts the preferences for all items.
While the concept of preference learning has been around for some time in fields such as economics, it is a relatively new topic in artificial intelligence research. Several workshops have discussed preference learning and related topics over the past decade.
Tasks:
The main task in preference learning concerns problems in "learning to rank". According to the different types of preference information observed, the tasks are categorized as three main problems in the book Preference Learning:

Label ranking In label ranking, the model has an instance space X = {x_i} and a finite set of labels Y = {y_i | i = 1, 2, ⋯, k}. The preference information is given in the form y_i ≻_x y_j, indicating that instance x shows preference for y_i rather than y_j. A set of preference information is used as training data in the model. The task of this model is to find a preference ranking among the labels for any instance.
It has been observed that some conventional classification problems can be generalized in the framework of the label ranking problem: if a training instance x is labeled as class y_i, it implies that ∀ j ≠ i, y_i ≻_x y_j. In the multi-label case, x is associated with a set of labels L ⊆ Y, and thus the model can extract a set of preference information {y_i ≻_x y_j | y_i ∈ L, y_j ∈ Y ∖ L}. A preference model is trained on this preference information, and the classification result of an instance is simply the corresponding top-ranking label.
Instance ranking Instance ranking also has the instance space X and label set Y. In this task, labels are defined to have a fixed order y_1 ≻ y_2 ≻ ⋯ ≻ y_k, and each instance x_l is associated with a label y_l. Given a set of instances as training data, the goal of this task is to find the ranking order for a new set of instances.
Object ranking Object ranking is similar to instance ranking, except that no labels are associated with the instances. Given a set of pairwise preference information in the form x_i ≻ x_j, the model should find a ranking order among the instances.
Techniques:
There are two practical representations of the preference information A ≻ B. One is assigning A and B two real numbers a and b respectively, such that a > b. The other is assigning a binary value V(A, B) ∈ {0, 1} for all pairs (A, B), denoting whether A ≻ B or B ≻ A. Corresponding to these two different representations, two different techniques are applied to the learning process.
Utility function If we can find a mapping from data to real numbers, ranking the data can be solved by ranking the real numbers. This mapping is called a utility function. For label ranking, the mapping is a function f : X × Y → ℝ such that y_i ≻_x y_j ⇒ f(x, y_i) > f(x, y_j). For instance ranking and object ranking, the mapping is a function f : X → ℝ. Finding the utility function is a regression learning problem, which is well developed in machine learning.
Preference relations The binary representation of preference information is called a preference relation. For each pair of alternatives (instances or labels), a binary predicate can be learned by a conventional supervised learning approach. Fürnkranz and Hüllermeier proposed this approach for the label ranking problem. For object ranking, there is an early approach by Cohen et al. Predicting a ranking from preference relations is less straightforward: since a learned preference relation need not be transitive, a ranking satisfying all of the relations may be unreachable, or there may be more than one solution. A more common approach is to find a ranking solution which is maximally consistent with the preference relations. This approach is a natural extension of pairwise classification.
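As a toy illustration of turning a learned binary predicate into a ranking (this is not the algorithm from the papers cited above), one simple heuristic scores each alternative by its number of pairwise wins and sorts by that score, which tolerates an intransitive relation:

```python
from itertools import combinations

def rank_from_preferences(items, prefers):
    """Order `items` by pairwise wins under the learned binary predicate
    `prefers(a, b)` (True when a is preferred to b). Sorting by win count
    is one cheap way to approximate a maximally consistent ranking."""
    wins = {x: 0 for x in items}
    for a, b in combinations(items, 2):
        if prefers(a, b):
            wins[a] += 1
        else:
            wins[b] += 1
    return sorted(items, key=lambda x: -wins[x])

# Toy "learned" predicate: the smaller number is preferred.
print(rank_from_preferences([3, 1, 2], lambda a, b: a < b))  # [1, 2, 3]
```

When the predicate happens to be transitive, this recovers the unique consistent order; otherwise it still returns some total order.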
Uses:
Preference learning can be used in ranking search results according to feedback on user preference. Given a query and a set of documents, a learning model is used to find the ranking of documents corresponding to their relevance to the query. More discussion of research in this field can be found in Tie-Yan Liu's survey paper. Another application of preference learning is recommender systems. An online store may analyze a customer's purchase record to learn a preference model and then recommend similar products to the customer. Internet content providers can make use of users' ratings to provide content better matched to user preferences.
**Bus SCS**
SCS is an acronym for "Sistema Cablaggio Semplificato" ("Simplified Cable Solution"). It uses a fieldbus network protocol and has applications in the field of home automation and building automation. It is used mainly in bTicino and Legrand installations.
General features:
An SCS bus is based on a sheathed twisted pair of two flexible conductors; these are braided and unshielded, with 300/500 V insulation (SELV, as double insulation is required), according to the rules adopted by the CEI (International Electrotechnical Commission). The bus is unpolarized: devices are required to accept the DC power supply in either polarity.
Wiring. Two kinds of wiring are possible:
- free cabling, where a mix of bus and star topology is present (better for old houses);
- star wiring, where all devices are connected to the switch rack (better for new houses).

Communication. Four different types of signals are transmitted across the SCS bus in frequency modulation:
- electricity supply at 27 V DC;
- data with a clock frequency of 9600 Hz;
- sound;
- video.

The transmission protocol is CSMA/CA.
Functions. The SCS bus provides the following functions: light control, automation, sound diffusion, energy management, thermoregulation, video intercom, and alarm system. All the listed functions share the same technology and the same configuration and installation procedures.
Configuration. All devices connected to the SCS bus must be configured manually; no auto-learning is possible, apart from the alarm system and extenders. Configuration assigns an address and an operating mode. Two kinds of configuration are possible:
- physical, using numbered jumpers containing resistors of different values;
- virtual, using configuration software connected through an Ethernet gateway; in this case the address and operating mode are written to non-volatile memory in every device.

Applying a physical jumper overrides the virtual configuration, wiping that memory.
Addressing details. Device addressing uses three different 'digits', A|PL|GR: A means the room, PL is the point of load in the room, and GR is the group. Groups join loads in the same or different rooms in a logical manner; not all devices have group addressing. All devices must answer to the room broadcast, called AMB, and to the general broadcast, called GEN. Physical and virtual addressing have different limitations: physical addresses are written with 2 digits, while virtual addresses are written with 4 digits.
In big houses and buildings, SCS address extension is possible, where different address domains are connected via bridges. Only some kinds of messages can cross a bridge.
Here are the values of the physical configuration jumpers. Note: the configurator values appear to have been measured rather than taken from an official table, and the resistor values are not closely spaced. For example, configurator "4" measures about 471 kΩ in this table; with a 1% resistor this corresponds to roughly 470–479 kΩ. The original table from 1999/2000 gives: 0 = 4.7 MΩ, 1 = 825 kΩ, 2 = 681 kΩ, 3 = 562 kΩ, 4 = 475 kΩ, 5 = 392 kΩ, 6 = 332 kΩ, 7 = 274 kΩ, 8 = 221 kΩ and 9 = 182 kΩ. All these values appear in the standard E-series resistor table (EIA E96), though they are not uniformly spaced. The resistor values in this list are official; the measured values remain useful because they all fall inside the 1% tolerance band specified by E96, except value 3501/9, measured at 179 kΩ (a second example tested also gave a reading of 179 kΩ).
Certifications:
Devices connected to the SCS bus are IMQ-certified and comply with the following product standards: IEC EN 50428, IEC EN 60669-1/A1, IEC EN 60669-2-1, IEC EN 50090-2-2 and IEC EN 50090-2-3.
Integration:
You can interact with the SCS bus through a gateway and an open high-level protocol called OpenWebNet. Two kinds of gateway exist: an Ethernet gateway (Linux-based) and a USB/RS-232 gateway. These gateways are bidirectional; they translate SCS frames into OpenWebNet frames, and the other way round.
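For illustration, OpenWebNet frames are short ASCII strings of the form *WHO*WHAT*WHERE##; the sketch below builds and parses such frames locally (the example values are illustrative and not a substitute for the protocol documentation):

```python
# Frames have the form *WHO*WHAT*WHERE## (WHO=1 is the lighting system).
# The WHERE field carries the SCS room (A) and point of load (PL) digits.
# Values below are illustrative examples only.

def build_frame(who, what, where):
    return f"*{who}*{what}*{where}##"

def parse_frame(frame):
    if not (frame.startswith("*") and frame.endswith("##")):
        raise ValueError("not a well-formed OpenWebNet frame")
    who, what, where = frame[1:-2].split("*")
    return {"who": who, "what": what, "where": where}

# "Turn on the light at A=1, PL=1" is commonly written *1*1*11##.
frame = build_frame(1, 1, 11)
```

In a real installation these strings would be sent over a TCP session to the Ethernet gateway; only the frame formatting is shown here.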
The open OpenWebNet protocol, shared by the MyOpen community, lets anybody build software that interacts with SCS devices. The SCS protocol itself is proprietary to bTicino; interaction with other fieldbuses must happen only through software that uses OpenWebNet. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BitVault**
BitVault is a content-addressable distributed storage system, developed by Microsoft Research in China. BitVault uses peer-to-peer technology to distribute the tasks of storing and managing data. As such, there is no central authority responsible for management of the system. Rather, it is self-managing, provides high availability, reliability and scales up in a self-organizing manner, with low administrative overhead, which is almost constant irrespective of the size of the distributed overlay network.
A BitVault system is best suited for reference data: large amounts of data that change very infrequently. Such data include archives of out-of-date data, as well as multimedia data like music and video which, even though it might be frequently accessed, changes very rarely.
Technology:
Every participating peer node in the BitVault architecture is a Smart Brick, a trimmed-down PC with large disks. All Smart Bricks in a BitVault system are connected by a high-bandwidth, low-latency network. A BitVault system can be easily scaled up: any computer can be configured to act as a Smart Brick by simply installing the BitVault software and connecting it to the network, without any need to interrupt the already-working nodes.
BitVault stores immutable data objects, i.e., objects which cannot be changed. The physical location of an object is not fixed and can be on any of the bricks; its location changes depending on its frequency of access, and it can even be replicated at more than one brick. To get around this problem of changing locations, BitVault makes each object accessible by means of a 160-bit key, which is unique for each object. Using the key, the system dynamically determines the location from which the object can be retrieved most efficiently and makes the object available. The unique key is generated from a hash of the object's data, making the system content-addressable, as opposed to location-addressable. The hashes (keys) of the objects are mapped to physical addresses using hash tables, which are internally managed by the system and do not need any user intervention. Different sets of nodes maintain different sets of hash tables, each concerning only the data in that set of nodes, giving rise to an overlay network in which the location of the data is tracked by a distributed hash table (DHT) architecture.
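The content-addressing idea can be sketched in a few lines; SHA-1 is used below only because it yields 160-bit digests, since the source does not name BitVault's actual hash function:

```python
import hashlib

# Content addressing: an object's 160-bit key is a hash of its data, so
# identical content always maps to the same key, regardless of which
# brick happens to store it.

def content_key(data: bytes) -> int:
    return int.from_bytes(hashlib.sha1(data).digest(), "big")

key = content_key(b"some immutable reference data")
same = content_key(b"some immutable reference data")
other = content_key(b"different data")
```

Because the key is derived from the content, it stays valid even as replicas move between bricks, which is exactly the property the paragraph above relies on.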
Architecture. The BitVault architecture is composed of multiple bricks which constitute a logical 160-bit address space, each address associated with the hash of some data. The association is maintained in a distributed hash table (DHT). The DHT partitions the entire hash table into smaller hash tables: if there are n peers, the hash table is divided into n hash tables, each starting from the row after the one where its immediate predecessor ended. Each brick has its associated part of the DHT, and the extent of the logical address space a brick is responsible for is called its zone. The bricks communicate using peer-to-peer technology, over the Membership and Routing Layer (MRL). Lookup of any data object can be done by the n bricks in parallel, each in its own zone, giving an efficiency of O(log n).
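A toy sketch of the zone idea, assuming successor-style ownership of the 160-bit keyspace (the exact partitioning rule is an assumption for illustration, not BitVault's documented algorithm):

```python
import hashlib

# Each brick takes a 160-bit id; a brick's zone runs from its id up to
# (but not including) its successor's id, with wrap-around at the top of
# the keyspace.

def brick_id(name: str) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def responsible_brick(key, brick_ids):
    """Return the id of the brick whose zone contains `key`."""
    ids = sorted(brick_ids)
    owners = [b for b in ids if b <= key]
    # Keys below the smallest brick id wrap around to the largest brick.
    return owners[-1] if owners else ids[-1]

bricks = [brick_id(f"brick-{i}") for i in range(4)]
owner = responsible_brick(123456789, bricks)
```

Adding a brick only splits one existing zone, which is why the overlay can grow without reorganizing the whole keyspace.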
Multiple copies of a single object, called replicas, are stored in the BitVault system to provide redundancy. If an index is damaged, the nearest replica can be notified to start its repair; and if the index notices that a replica is damaged, it can initiate the repair of the replica. This method of error recovery is called the object-driven repair model. For this to work, a membership service must be running that gives a logical ordering to the peers; this is achieved using the MRL. The membership service guarantees that any addition or removal of a brick is eventually and reliably made known to every other live brick. The MRL is also responsible for routing messages to and from bricks and their associated DHTs.
The MRL uses a one-hop DHT to perform routing, i.e., it never takes more than one hop over a peer to route messages while the BitVault system is stable (no new bricks being added, and no load balancing or repair going on). The MRL is implemented using an XRing architecture, which maintains a distributed routing table that facilitates one-hop routing.
Single brick architecture. A brick registers itself with the MRL with a 160-bit key that forms its identifier; its zone in the DHT runs from its id to just before the id of its next logical successor. The brick architecture is divided into two parts, the index module (IM) and the data module. The index module keeps a list of all the replicas stored on the disk, mapped by their hashes. In addition, for each object that is stored, the IM keeps a list of the locations of all other replicas of the object. The IM listens to the MRL and updates itself according to membership changes, and according to data being entered into the BitVault system or retrieved from it. The IM is also responsible for initiating repair of replicas once it is informed of a damaged one, and for requesting repair of replicas in its own store. The IM is connected to a small access module, which serves as the gateway to external clients.
The data module stores replicas of objects on a local disk. Along with each object, its metadata, such as its hash key and its degree of replication in the BitVault system, is also kept.
Working:
Check In. Inserting data into the BitVault system is called check-in. A check-in requires the object, its key and an initial replication degree. The MRL routes the object and all its parameters to some brick. The brick then stores the data in its data module and starts replicating the object, publishing it to random bricks to achieve the specified replication degree. When the object has achieved the required replication degree, its index is said to be complete; otherwise it is partial. The brick must further replicate any object whose index is partial, and bricks also periodically verify that objects' indices are still complete.
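A toy in-memory sketch of check-in and the complete/partial index distinction; class and method names are illustrative, not BitVault's actual API:

```python
import random

# Toy model: check-in stores the object's key on `degree` randomly
# chosen bricks; the index is "complete" once the replication degree
# is met.

class ToyBitVault:
    def __init__(self, n_bricks):
        self.bricks = [set() for _ in range(n_bricks)]

    def check_in(self, key, degree):
        for b in random.sample(range(len(self.bricks)), degree):
            self.bricks[b].add(key)

    def replicas(self, key):
        return sum(key in b for b in self.bricks)

    def index_complete(self, key, degree):
        return self.replicas(key) >= degree

vault = ToyBitVault(n_bricks=8)
vault.check_in("obj-1", degree=3)
```

The periodic verification described above amounts to re-running the `index_complete` check and replicating further whenever it fails.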
Check Out. Check-out is the process of retrieving data from the BitVault system. The application that uses BitVault as its datastore supplies the hash key of the object to be retrieved, which the MRL sends to some brick. If the brick does not have the object, it passes the request on to other bricks in parallel; if it does have the object, the object is retrieved from its data module and routed to the requestor.
Fault tolerance. BitVault faults can be either transient or permanent. A transient failure occurs when a brick experiences a temporary fault, such as a software crash forcing a reboot; a permanent failure indicates errors such as hardware failure. Whenever a fault is detected, the other bricks holding a replica of an affected object mark the object's index entry as partial, thus triggering further replication. All the bricks containing replicas collaboratively send different parts of the object's data, in parallel, to a new brick which will hold the replica. This parallel replication speeds up the repair of a damaged index, returning it to the complete state.
Membership changes. Whenever a new brick is added to the BitVault system, it takes a random ID and contacts other bricks, which then include the new brick in their membership lists. The newly added brick receives responses from those bricks and adds the respondents to its own membership list. Background load balancing then kicks in to populate the new brick with live replicas.
Load balancing. Bricks periodically query other bricks about their load. A heavily loaded brick then transfers some replicas to low-load bricks to achieve a more or less balanced load on each brick, and issues messages to other bricks to update their indices to reflect the change. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Joint European Master in Space Science and Technology**
The Joint European Master in Space Science and Technology (SpaceMaster for short) is an Erasmus Mundus 120-ECTS master programme. The SpaceMaster programme started in 2005 and focuses on providing education in space science and technology to its students. The main objective of the course is to combine the great diversity of space expertise at multiple European universities into a common platform of competence within the guidelines of the Bologna process. The educational cooperation is supported by scientific and industrial organisations, thus providing direct contacts with professional research and industry.
It also provides students with a cross-disciplinary extension from laboratory and computer-simulation environments to hands-on work with stratospheric balloons, rockets, satellite and radar control, robotics, sensor data fusion, automatic control and multi-body dynamics.
The Course brings together students from around the world to share their existing competence in Space Science and Technology and to develop it with Europe's space industry and research community.
Partner Universities:
The partner universities throughout Europe are:
- Luleå University of Technology (in Kiruna, Sweden)
- Paul Sabatier University (in Toulouse, France)
- Cranfield University (in Cranfield, England)
- Czech Technical University (in Prague, Czech Republic)
- Aalto University (in Helsinki, Finland)
- previously a partner: University of Würzburg (in Würzburg, Germany)

The students receive two degrees issued by two universities of the consortium, usually one from Luleå University of Technology and another from the university selected for the second year. The European Space Agency supports the programme with ESA grants, ESA work placements, and ESA lectures. In 2010, both the University of Tokyo and Utah State University joined the consortium as full partners. Since 2018, the University of Würzburg has no longer been among the partner universities.
Programme Structure:
The first year of the programme is the same for all students, who begin their first semester at Luleå University of Technology and continue for their third semester at one of the partner universities of their choice. Current Programme Structure (since 2018): Original Programme Structure (before 2018):
Other Information:
This programme also takes part in the Erasmus Mundus Action 3 Program, which led to the creation of the SpaceMaster Global Partnership. This framework allows EU students to do part of their thesis or project work at one of the following partner universities:
- Shanghai Jiao Tong University (SJTU), China
- Stanford University (SU), USA
- University of Toronto (UT), Canada

An alumni programme is currently still in the planning stages.
The consortium also organizes outreach activities and events such as the Planetary rover symposium, held in 2009 at the Espoo campus of Aalto University in Finland, where leading space robotics scientists gave presentations on the history, status and technologies of planetary robotics. Invited guest speakers included Mr. Alexei Bogatchev (Russia), Mr. Gianfranco Visentin (ESA), Dr. Richard Volpe (JPL), and Dr. Juha Röning (University of Oulu). In 2008, the SpaceMaster Robotics Team was created by Juxi Leitner and David Leal to organize and participate in various student robotics competitions. Their first project was flown on a stratospheric research balloon in the US, and in 2009 they participated in the BEXUS campaign of the European Space Agency. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cotton maturity**
Cotton maturity is a physical testing parameter in cotton fiber property testing. It is quantified by the degree of cell wall thickening relative to the fiber's perimeter. The maturity of individual cotton fibers is an essential aspect of cotton classing with regard to aesthetics such as appearance, dye uptake, etc. A high volume instrument (HVI) can test cotton maturity along with many other fiber properties, including length, uniformity, micronaire/fineness, strength, and color.
Major impact:
Cotton fiber maturity largely depends upon the growing conditions. Maturity is measured as the relative wall thickness (i.e., the ratio of the area of the cell wall to that of a circle with the same perimeter as the fiber, or the ratio of the cell wall thickness to the overall 'diameter' of the fiber); hence the thickness of the wall indicates the extent of maturity of a cotton fiber. Cotton fibers are trichome cells composed primarily of cellulose; mature fibers have more cellulose and a greater degree of cell wall thickening. The most significant impact of immature fiber is on finished appearance. The micronaire (MIC) values of immature fibers affect the processing and performance of cotton. Defects commonly caused by immature cotton relate to yarn and fabric appearance, such as poor dye uptake, dead fibers, nep formation, and barré (if maturity differs from batch to batch).
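The definition above can be written as a formula: the degree of thickening is θ = 4πA_w/P² for wall area A_w and perimeter P, and a common fiber-testing convention (assumed here, since the source gives no formula) divides θ by 0.577 to obtain the maturity ratio:

```python
import math

# Degree of thickening: wall area A_w relative to the area of a circle
# with the same perimeter P, i.e. theta = 4*pi*A_w / P**2.
def degree_of_thickening(wall_area, perimeter):
    circle_area = perimeter ** 2 / (4 * math.pi)
    return wall_area / circle_area

# Assumed convention: maturity ratio = theta / 0.577, where 0.577 is the
# degree of thickening of a notionally mature reference fiber.
def maturity_ratio(wall_area, perimeter):
    return degree_of_thickening(wall_area, perimeter) / 0.577

# A completely solid circular cross-section has theta = 1 by construction.
theta_solid = degree_of_thickening(math.pi * 1.0 ** 2, 2 * math.pi * 1.0)
```

A collapsed, thin-walled (immature) fiber has a long perimeter relative to its wall area and therefore a low θ, matching the qualitative description above.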
Measurements:
Cotton classification, or classing, is the process of classifying cotton based on its grade, staple length, and micronaire; micronaire is a measure of cotton maturity. The maturity of cotton fibers is measured with a single-fiber measurement test or by a double-compression airflow test, and is expressed as a percentage or as a maturity ratio.
Micronaire. Cotton's micronaire value is determined by both the fineness of the fibres and their maturity. Micronaire readings represent the fineness of the cotton fiber; for example, a preferred micronaire range is 3.7 to 4.2. Upland cotton is coarser than Gossypium barbadense (Pima cotton). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Genetics of synesthesia**
The genetic mechanism of synesthesia has long been debated. Researchers previously claimed it was a single X-linked trait, owing to its seemingly higher prevalence in women and a lack of evidence for male-to-male transmission. Male-to-male transmission occurs when the only synesthetic parent is male and a male child also has synesthesia; observing it means the trait cannot be linked solely to the X chromosome.
The Mendelian nature of the trait was further disproven when case studies showed that the phenotype of synesthesia could be differentially expressed in monozygotic (genotypically identical) twins: while both twins had the same genome with the potential for phenotypic expression of synesthesia, only one had documented synesthesia. The condition is therefore now thought to be oligogenic, with locus heterogeneity and multiple forms of inheritance and expression, implying that synesthesia is determined by more than one gene, more than one location within those genes, and a complex mode of inheritance. Several full-genome linkage scans have identified particular areas of the genome whose inheritance seems to correlate with the inheritance of synesthesia.
Using the LOD score, which describes the likelihood that two genes are near each other on a chromosome and will thus be inherited together, areas of strong or suggestive linkage with the inheritance of synesthesia were found. The area with the highest LOD score in the genome of an individual with auditory-visual synesthesia has been shown to be linked with autism as well, another disorder with sensory and perceptual abnormalities. Other regions of linkage include genes related to the development of the cerebral cortex (TBR1), dyslexia, and apoptosis (EFHC1), the last of which could potentially relate to the retention of neonatal synesthetic pathways under the universal synesthesia/pruning hypothesis. This hypothesis posits that every person is born a synesthete and that the 'extra' connections are pruned during normal neurodevelopment in non-synesthetes but not in synesthetes. More potential support for that hypothesis comes from another region identified with strong linkage, which contains a gene (DPYSL3) involved in axonal growth, neuroplasticity, and neuronal differentiation. Additionally, this gene is not expressed in the adult brain but is highly expressed in the late-fetal and early postnatal brain and spinal cord, providing further support for a universal "neonatal synesthesia" that is pruned away through natural development. Another genome scan revealed a different area of linkage for an individual with colored sequence synesthesia, a form that associates days of the week with colors.
In that individual, the linked region contained genes that produce proteins important for intercellular communication (GABARAPL2), genes involved in brain development (NDRG4), genes linked to neuronal myelination (PLLP), genes that produce enzymes involved in neuronal pruning (KATNB1), genes that produce apoptosis inhibitors expressed in fetal brains (CIAPIN1), and genes that produce proteins differentially expressed in individuals with schizophrenia (GNAO1).
Due to the prevalence of synesthesia among the first-degree relatives of synesthetes, there is evidence that synesthesia might have a genetic basis; however, the monozygotic twin case studies indicate there is also an epigenetic component. Synesthesia might also be an oligogenic condition, with locus heterogeneity, multiple forms of inheritance (including Mendelian in some cases), and continuous variation in gene expression. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Viseme**
A viseme is any of several speech sounds that look the same, for example when lip reading (Fisher 1968).
Visemes and phonemes do not share a one-to-one correspondence. Often several phonemes correspond to a single viseme, as several phonemes look the same on the face when produced, such as /k, ɡ, ŋ/, (viseme: /k/), /t͡ʃ, ʃ, d͡ʒ, ʒ/ (viseme: /ch/), /t, d, n, l/ (viseme: /t/), and /p, b, m/ (viseme: /p/). Thus words such as pet, bell, and men are difficult for lip-readers to distinguish, as all look like /pet/. However, there may be differences in timing and duration during actual speech in terms of the visual "signature" of a given gesture that cannot be captured with a single photograph. Conversely, some sounds which are hard to distinguish acoustically are clearly distinguished by the face (Chen 2001). For example, acoustically speaking English /l/ and /r/ can be quite similar (especially in clusters, such as 'grass' vs. 'glass'), yet visual information can show a clear contrast. This is demonstrated by the more frequent mishearing of words on the telephone than in person. Some linguists have argued that speech is best understood as bimodal (aural and visual), and comprehension can be compromised if one of these two domains is absent (McGurk and MacDonald 1976).
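The many-to-one mapping can be written down directly as a lookup table built from the phoneme groups listed above (a sketch; the phoneme spellings are informal ASCII labels, and real systems use larger viseme inventories):

```python
# Many-to-one phoneme-to-viseme mapping, using the groups from the text;
# viseme class labels follow the text's naming.
PHONEME_TO_VISEME = {
    "k": "/k/", "g": "/k/", "ng": "/k/",
    "ch": "/ch/", "sh": "/ch/", "jh": "/ch/", "zh": "/ch/",
    "t": "/t/", "d": "/t/", "n": "/t/", "l": "/t/",
    "p": "/p/", "b": "/p/", "m": "/p/",
}

def viseme_string(phonemes):
    """Collapse a phoneme sequence to its visible (viseme) sequence."""
    return [PHONEME_TO_VISEME.get(ph, ph) for ph in phonemes]

# "pet", "bell" and "men" all begin with the same viseme /p/, which is
# why they are hard to tell apart by lip reading alone.
starts = {viseme_string([ph])[0] for ph in ("p", "b", "m")}
```

Collapsing phoneme strings this way is essentially what a lip-reading or facial-animation system does when it maps acoustic units onto mouth shapes.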
Visemes can often be humorous, as in the phrase "elephant juice", which when lip-read appears identical to "I love you".
Applications for the study of visemes include speech processing, speech recognition, and computer facial animation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cellular algebra**
In abstract algebra, a cellular algebra is a finite-dimensional associative algebra A with a distinguished cellular basis which is particularly well-adapted to studying the representation theory of A.
History:
The cellular algebras discussed in this article were introduced in a 1996 paper of Graham and Lehrer. However, the terminology had previously been used by Weisfeiler and Lehman in the Soviet Union in the 1960s, to describe what are also known as coherent algebras.
Definitions:
Let R be a fixed commutative ring with unit. In most applications this is a field, but this is not needed for the definitions. Let also A be an R -algebra.
The concrete definition. A cell datum for A is a tuple (Λ, i, M, C) consisting of:
- a finite partially ordered set Λ;
- an R-linear anti-automorphism i : A → A with i² = id_A;
- for every λ ∈ Λ, a non-empty finite set M(λ) of indices;
- an injective map C : ⋃̇_{λ∈Λ} M(λ) × M(λ) → A. The images under this map are notated with an upper index λ ∈ Λ and two lower indices s, t ∈ M(λ), so that the typical element of the image is written as C_st^λ;

satisfying the following conditions:
1. The image of C is an R-basis of A.
2. i(C_st^λ) = C_ts^λ for all elements of the basis.
3. For every λ ∈ Λ, s, t ∈ M(λ) and every a ∈ A, the equation a·C_st^λ ≡ ∑_{u∈M(λ)} r_a(u,s) C_ut^λ (mod A(<λ)) holds, with coefficients r_a(u,s) ∈ R depending only on a, u and s but not on t. Here A(<λ) denotes the R-span of all basis elements with upper index strictly smaller than λ.

This definition was originally given by Graham and Lehrer, who invented cellular algebras.
The more abstract definition. Let i : A → A be an anti-automorphism of R-algebras with i² = id (just called "involution" from now on).

A cell ideal of A w.r.t. i is a two-sided ideal J ⊆ A such that the following conditions hold:
- i(J) = J;
- there is a left ideal Δ ⊆ J that is free as an R-module, and an isomorphism α : Δ ⊗_R i(Δ) → J of A-A-bimodules such that α and i are compatible in the sense that ∀x, y ∈ Δ: i(α(x ⊗ i(y))) = α(y ⊗ i(x)).

A cell chain for A w.r.t. i is defined as a direct decomposition A = ⨁_{k=1}^m U_k into free R-submodules such that:
- i(U_k) = U_k;
- J_k := ⨁_{j=1}^k U_j is a two-sided ideal of A;
- J_k/J_{k−1} is a cell ideal of A/J_{k−1} w.r.t. the induced involution.

Now (A, i) is called a cellular algebra if it has a cell chain. One can show that the two definitions are equivalent. Every cellular basis gives rise to cell chains (one for each topological ordering of Λ), and by choosing a basis of the left ideal Δ ⊆ J_k/J_{k−1} for every k one can construct a corresponding cell basis for A.
Examples:
Polynomial examples. R[x]/(xⁿ) is cellular. A cell datum is given by:
- i := id;
- Λ := {0, …, n−1} with the reverse of the natural ordering;
- M(λ) := {1};
- C_11^λ := x^λ.

A cell chain in the sense of the second, abstract definition is given by 0 ⊆ (x^{n−1}) ⊆ (x^{n−2}) ⊆ … ⊆ (x^1) ⊆ (x^0) = R[x]/(xⁿ).

Matrix examples. R^{d×d} is cellular. A cell datum is given by:
- i(A) := A^T;
- Λ := {1};
- M(1) := {1, …, d};
- for the basis one chooses C_st^1 := E_st, the standard matrix units, i.e. C_st^1 is the matrix with all entries equal to zero except the (s,t)-th entry, which is equal to 1.

A cell chain (and in fact the only cell chain) is given by 0 ⊆ R^{d×d}. In some sense all cellular algebras "interpolate" between these two extremes by arranging matrix-algebra-like pieces according to the poset Λ.

Further examples. Modulo minor technicalities, all Iwahori–Hecke algebras of finite type are cellular w.r.t. the involution that maps the standard basis as T_w ↦ T_{w⁻¹}. This includes for example the integral group algebra of the symmetric groups as well as all other finite Weyl groups.
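As a quick numerical sanity check of the matrix example (using numpy; indices are 0-based here), one can verify that transposition realizes the involution on the basis and that left multiplication has coefficients r_a(u, s) = a[u, s] independent of t:

```python
import numpy as np

# Matrix example check: the basis C^1_st = E_st (standard matrix units),
# the involution i(A) = A^T sends E_st to E_ts, and left multiplication
# satisfies a E_st = sum_u a[u, s] E_ut, so the cell-datum coefficients
# r_a(u, s) = a[u, s] do not depend on t.
d = 3

def E(s, t):
    m = np.zeros((d, d))
    m[s, t] = 1.0
    return m

a = np.arange(d * d, dtype=float).reshape(d, d)
s, t = 1, 2
lhs = a @ E(s, t)
rhs = sum(a[u, s] * E(u, t) for u in range(d))
```

Since Λ has a single element, A(<λ) = 0 here and the congruence in the concrete definition becomes an exact equality.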
A basic Brauer tree algebra over a field is cellular if and only if the Brauer tree is a straight line (with an arbitrary number of exceptional vertices). Further examples include q-Schur algebras, the Brauer algebra, the Temperley–Lieb algebra, the Birman–Murakami–Wenzl algebra, and the blocks of the Bernstein–Gelfand–Gelfand category O of a semisimple Lie algebra.
Representations:
Cell modules and the invariant bilinear form. Assume A is cellular and (Λ, i, M, C) is a cell datum for A. Then one defines the cell module W(λ) as the free R-module with basis {C_s | s ∈ M(λ)} and multiplication a·C_s := ∑_u r_a(u, s) C_u, where the coefficients r_a(u, s) are the same as above. Then W(λ) becomes a left A-module.
These modules generalize the Specht modules for the symmetric group and the Hecke-algebras of type A.
There is a canonical bilinear form ϕ_λ : W(λ) × W(λ) → R which satisfies C_st^λ · C_uv^λ ≡ ϕ_λ(C_t, C_u) C_sv^λ (mod A(<λ)) for all indices s, t, u, v ∈ M(λ). One can check that ϕ_λ is symmetric, in the sense that ϕ_λ(x, y) = ϕ_λ(y, x) for all x, y ∈ W(λ), and also A-invariant, in the sense that ϕ_λ(i(a)x, y) = ϕ_λ(x, ay) for all a ∈ A and x, y ∈ W(λ).

Simple modules. Assume for the rest of this section that the ring R is a field. With the information contained in the invariant bilinear forms one can easily list all simple A-modules: let Λ₀ := {λ ∈ Λ | ϕ_λ ≠ 0} and define L(λ) := W(λ)/rad(ϕ_λ) for all λ ∈ Λ₀. Then all L(λ) are absolutely simple A-modules, and every simple A-module is one of these.
These theorems appear already in the original paper by Graham and Lehrer.
Properties of cellular algebras:
Persistence properties. Tensor products of finitely many cellular R-algebras are cellular.
An R-algebra A is cellular if and only if its opposite algebra A^op is.
If A is cellular with cell datum (Λ, i, M, C) and Φ ⊆ Λ is an ideal (a downward closed subset) of the poset Λ, then A(Φ) := ∑ R·C_st^λ (where the sum runs over λ ∈ Φ and s, t ∈ M(λ)) is a two-sided, i-invariant ideal of A, and the quotient A/A(Φ) is cellular with cell datum (Λ∖Φ, i, M, C) (where i denotes the induced involution and M, C denote the restricted mappings).
If A is a cellular R-algebra and R → S is a unitary homomorphism of commutative rings, then the extension of scalars S ⊗_R A is a cellular S-algebra.
Direct products of finitely many cellular R-algebras are cellular. If R is an integral domain, then there is a converse to this last point: if (A, i) is a finite-dimensional R-algebra with an involution and A = A₁ ⊕ A₂ is a decomposition into two-sided, i-invariant ideals, then the following are equivalent: (A, i) is cellular;
(A₁, i) and (A₂, i) are cellular. Since in particular all blocks of A are i-invariant if (A, i) is cellular, an immediate corollary is that a finite-dimensional R-algebra is cellular w.r.t. i if and only if all its blocks are i-invariant and cellular w.r.t. i.
Tits' deformation theorem for cellular algebras: let A be a cellular R-algebra, let R → k be a unitary homomorphism into a field k, and let K := Quot(R) be the quotient field of R. Then the following holds: if kA is semisimple, then KA is also semisimple. If one further assumes R to be a local domain, then additionally the following holds: if A is cellular w.r.t. i and e ∈ A is an idempotent such that i(e) = e, then the algebra eAe is cellular.
Other properties. Assume that R is a field (though a lot of this can be generalized to arbitrary rings, integral domains, local rings or at least discrete valuation rings) and that A is cellular w.r.t. the involution i. Then the following hold:
- A is split, i.e. all simple modules are absolutely irreducible.
- The following are equivalent: A is semisimple; A is split semisimple; W(λ) is simple for all λ ∈ Λ; ϕ_λ is nondegenerate for all λ ∈ Λ.
- The Cartan matrix C_A of A is symmetric and positive definite.
- The following are equivalent: A is quasi-hereditary (i.e. its module category is a highest-weight category); Λ = Λ₀; all cell chains of (A, i) have the same length; all cell chains of (A, j) have the same length, where j : A → A is an arbitrary involution w.r.t. which A is cellular; det(C_A) = 1.
- If A is Morita equivalent to B and the characteristic of R is not two, then B is also cellular w.r.t. a suitable involution. In particular, A is cellular (w.r.t. some involution) if and only if its basic algebra is.
Every idempotent e ∈ A is equivalent to i(e), i.e. Ae ≅ A·i(e). If char(R) ≠ 2, then in fact every equivalence class contains an i-invariant idempotent. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**3-Methyl-2-butanol**
3-Methyl-2-butanol (IUPAC name, commonly called sec-isoamyl alcohol) is an organic chemical compound. It is used as a solvent and an intermediate in the manufacture of other chemicals. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kendall's W**
Kendall's W:
Kendall's W (also known as Kendall's coefficient of concordance) is a non-parametric statistic for rank correlation. It is a normalization of the statistic of the Friedman test, and can be used for assessing agreement among raters and in particular inter-rater reliability. Kendall's W ranges from 0 (no agreement) to 1 (complete agreement).
Kendall's W:
Suppose, for instance, that a number of people have been asked to rank a list of political concerns, from the most important to the least important. Kendall's W can be calculated from these data. If the test statistic W is 1, then all the survey respondents have been unanimous, and each respondent has assigned the same order to the list of concerns. If W is 0, then there is no overall trend of agreement among the respondents, and their responses may be regarded as essentially random. Intermediate values of W indicate a greater or lesser degree of unanimity among the various responses.
Kendall's W:
While tests using the standard Pearson correlation coefficient assume normally distributed values and compare two sequences of outcomes simultaneously, Kendall's W makes no assumptions regarding the nature of the probability distribution and can handle any number of distinct outcomes.
Steps of Kendall's W:
Suppose that object i is given the rank ri,j by judge number j, where there are in total n objects and m judges. Then the total rank given to object i is Ri = ∑j=1..m ri,j, and the mean value of these total ranks is R̄ = (1/n) ∑i=1..n Ri.
The sum of squared deviations, S, is defined as S = ∑i=1..n (Ri − R̄)², and then Kendall's W is defined as W = 12S / (m²(n³ − n)).
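The computation above can be sketched in a few lines of Python (an illustrative implementation, not taken from any particular library):

```python
def kendall_w(ranks):
    """Kendall's W for complete rankings without ties.

    ranks: list of m rankings; each ranking is the list of ranks 1..n
    that one judge assigns to the n objects.
    """
    m, n = len(ranks), len(ranks[0])
    totals = [sum(judge[i] for judge in ranks) for i in range(n)]  # R_i
    mean_total = sum(totals) / n                                   # R-bar
    s = sum((t - mean_total) ** 2 for t in totals)                 # S
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

print(kendall_w([[1, 2, 3, 4]] * 3))      # unanimous judges -> 1.0
print(kendall_w([[1, 2, 3], [3, 2, 1]]))  # exactly opposite rankings -> 0.0
```

Three judges ranking four objects identically give W = 1, while two judges with exactly opposite rankings give W = 0, matching the interpretation above.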
Steps of Kendall's W:
If the test statistic W is 1, then all the judges or survey respondents have been unanimous, and each judge or respondent has assigned the same order to the list of objects or concerns. If W is 0, then there is no overall trend of agreement among the respondents, and their responses may be regarded as essentially random. Intermediate values of W indicate a greater or lesser degree of unanimity among the various judges or respondents.
Steps of Kendall's W:
Kendall and Gibbons (1990) also show that W is linearly related to the mean value of the Spearman's rank correlation coefficients between all m(m − 1)/2 possible pairs of rankings between judges:
r̄s = (mW − 1) / (m − 1)
Incomplete blocks: When the judges evaluate only some subset of the n objects, and the corresponding block design is a (n, m, r, p, λ)-design (note the different notation), each judge ranks the same number p of objects (for some p < n), every object is ranked exactly the same total number r of times, and each pair of objects is presented together to some judge a total of exactly λ times (λ ≥ 1, a constant for all pairs). Then Kendall's W is defined as
W = (12 ∑i=1..n Ri² − 3r²n(p + 1)²) / (λ²n(n² − 1)).
Steps of Kendall's W:
If p=n and λ=r=m so that each judge ranks all n objects, the formula above is equivalent to the original one.
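As a sanity check of this reduction, the incomplete-blocks formula can be evaluated directly from the object totals Ri (a small illustrative sketch; the helper name is hypothetical):

```python
def w_incomplete(totals, n, r, p, lam):
    """Kendall's W for a balanced incomplete block design:
    W = (12 * sum(R_i^2) - 3 * r^2 * n * (p+1)^2) / (lam^2 * n * (n^2 - 1))."""
    num = 12.0 * sum(t ** 2 for t in totals) - 3 * r ** 2 * n * (p + 1) ** 2
    return num / (lam ** 2 * n * (n ** 2 - 1))

# Complete case: three judges rank all four objects unanimously, so the
# totals are R_i = [3, 6, 9, 12] and p = n = 4, lam = r = m = 3.
print(w_incomplete([3, 6, 9, 12], n=4, r=3, p=4, lam=3))  # -> 1.0
```

With p = n and λ = r = m, the value agrees with the ordinary formula, as stated above.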
Steps of Kendall's W:
Correction for Ties When tied values occur, they are each given the average of the ranks that would have been given had no ties occurred. For example, the data set {80,76,34,80,73,80} has values of 80 tied for 4th, 5th, and 6th place; since the mean of {4,5,6} = 5, ranks would be assigned to the raw data values as follows: {5,3,1,5,2,5}.
Steps of Kendall's W:
The effect of ties is to reduce the value of W; however, this effect is small unless there are a large number of ties. To correct for ties, assign ranks to tied values as above and compute the correction factors Tj = ∑i=1..gj (ti³ − ti), where ti is the number of tied ranks in the ith group of tied ranks (a group being a set of values having constant (tied) rank) and gj is the number of groups of ties in the set of ranks (ranging from 1 to n) for judge j. Thus, Tj is the correction factor required for the set of ranks for judge j, i.e. the jth set of ranks. Note that if there are no tied ranks for judge j, Tj equals 0.
Steps of Kendall's W:
With the correction for ties, the formula for W becomes W = (12 ∑i=1..n Ri² − 3m²n(n + 1)²) / (m²n(n² − 1) − m ∑j=1..m Tj), where Ri is the sum of the ranks for object i, and ∑j=1..m Tj is the sum of the values of Tj over all m sets of ranks.
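The average-rank assignment and the tie-corrected W can be sketched as follows (illustrative Python, using the example data above):

```python
from collections import Counter

def rank_with_ties(values):
    """Assign ranks, giving tied values the average of the ranks they occupy;
    e.g. [80, 76, 34, 80, 73, 80] -> [5, 3, 1, 5, 2, 5]."""
    first = {}
    for pos, v in enumerate(sorted(values), start=1):
        first.setdefault(v, pos)          # first rank a tied group occupies
    counts = Counter(values)
    return [first[v] + (counts[v] - 1) / 2 for v in values]

def kendall_w_ties(raw_scores):
    """Tie-corrected Kendall's W from raw scores (m judges x n objects)."""
    m, n = len(raw_scores), len(raw_scores[0])
    ranks = [rank_with_ties(row) for row in raw_scores]
    totals = [sum(judge[i] for judge in ranks) for i in range(n)]   # R_i
    t_sum = sum(t ** 3 - t                                          # sum of T_j
                for row in raw_scores for t in Counter(row).values())
    num = 12.0 * sum(t ** 2 for t in totals) - 3 * m ** 2 * n * (n + 1) ** 2
    den = m ** 2 * (n ** 3 - n) - m * t_sum
    return num / den

print(rank_with_ties([80, 76, 34, 80, 73, 80]))        # [5.0, 3.0, 1.0, 5.0, 2.0, 5.0]
print(kendall_w_ties([[80, 76, 34, 80, 73, 80]] * 2))  # identical judges -> 1.0
```

Note that with the correction applied, two judges giving identical (tied) scores still reach W = 1.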
Steps of Weighted Kendall's W:
In some cases, the importance of the raters (experts) might not be the same. In this case, the weighted Kendall's W should be used. Suppose that object i is given the rank rij by judge number j, where there are in total n objects and m judges, and that the weight of judge j is ϑj (in real-world situations, the importance of each rater can differ). Then the total rank given to object i is Ri = ∑j=1..m ϑj rij, and the mean value of these total ranks is R̄ = (1/n) ∑i=1..n Ri. The sum of squared deviations, S, is defined as S = ∑i=1..n (Ri − R̄)², and then the weighted Kendall's W is defined as W = 12S / (n³ − n). The above formula is suitable when there are no tied ranks.
Steps of Weighted Kendall's W:
Correction for ties: In the case of tied ranks, the above formula must be adjusted. To correct for ties, compute the correction factors Tj = ∑i=1..n (tij³ − tij) for each judge j, where tij represents the number of tied ranks in judge j's ranking for object i; Tj is thus the total tie correction for judge j. With the correction for ties, the formula for the weighted Kendall's W becomes W = 12S / ((n³ − n) − ∑j=1..m ϑj Tj). If the weights of the raters are equal (the distribution of the weights is uniform), the weighted Kendall's W and Kendall's W coincide.
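The weighted statistic can be sketched as below (illustrative Python; the weights are normalized here to sum to 1, an assumption under which uniform weights reproduce the ordinary W):

```python
def weighted_kendall_w(ranks, weights):
    """Weighted Kendall's W for untied rankings.

    weights: one non-negative weight per judge; normalized to sum to 1 here
    (assumption), so equal weights give the ordinary Kendall's W.
    """
    total = sum(weights)
    w = [x / total for x in weights]
    n = len(ranks[0])
    totals = [sum(wj * judge[i] for wj, judge in zip(w, ranks))
              for i in range(n)]                       # weighted R_i
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12.0 * s / (n ** 3 - n)

# With equal weights this matches the ordinary W for the same rankings.
print(weighted_kendall_w([[1, 2, 3], [1, 3, 2]], [1, 1]))  # -> 0.75
```

Giving the first judge twice the weight of the second shifts the result toward the first judge's ranking, as expected.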
Significance Tests:
In the case of complete rankings, a commonly used significance test for W against a null hypothesis of no agreement (i.e. random rankings) is given by Kendall and Gibbons (1990): χ² = m(n − 1)W, where the test statistic follows a chi-squared distribution with df = n − 1 degrees of freedom.
In the case of incomplete rankings (see above), this becomes χ² = λ(n² − 1)W / (p + 1), where p is the number of objects ranked by each judge and again there are df = n − 1 degrees of freedom.
Significance Tests:
Legendre compared via simulation the power of the chi-square and permutation testing approaches to determining significance for Kendall's W. Results indicated that the chi-square method was overly conservative compared to a permutation test when m ≤ 20. Marozzi extended this by also considering the F test, as proposed in the original publication introducing the W statistic by Kendall & Babington Smith (1939): F = (m − 1)W / (1 − W), where the test statistic follows an F distribution with v1 = n − 1 − (2/m) and v2 = (m − 1)v1 degrees of freedom. Marozzi found that the F test performs approximately as well as the permutation test method, and may be preferred when m is small, as it is computationally simpler.
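The chi-squared statistic for complete rankings can be computed directly (illustrative Python); the resulting value is then compared against a χ² distribution with n − 1 degrees of freedom, e.g. via scipy.stats.chi2.sf:

```python
def kendall_w_significance(ranks):
    """Return (W, chi-squared statistic, degrees of freedom) for complete
    untied rankings, using chi2 = m * (n - 1) * W with df = n - 1."""
    m, n = len(ranks), len(ranks[0])
    totals = [sum(judge[i] for judge in ranks) for i in range(n)]
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)
    w = 12.0 * s / (m ** 2 * (n ** 3 - n))
    return w, m * (n - 1) * w, n - 1

print(kendall_w_significance([[1, 2, 3, 4]] * 3))  # (1.0, 9.0, 3)
```

For three unanimous judges ranking four objects, the statistic is m(n − 1)W = 3 × 3 × 1 = 9 on 3 degrees of freedom.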
Software:
Kendall's W and Weighted Kendall's W are implemented in MATLAB, SPSS, R, and other statistical software packages.
**HRDetect**
HRDetect:
HRDetect (Homologous Recombination Deficiency Detect) is a whole-genome sequencing (WGS)-based classifier designed to predict BRCA1 and BRCA2 deficiency based on six mutational signatures. Additionally, the classifier is able to identify similarities in mutational profiles of tumors to that of tumors with BRCA1 and BRCA2 defects, also known as BRCAness. This classifier can be applied to assess the implementation of PARP inhibitors in patients with BRCA1/BRCA2 deficiency. The final output is a probability of BRCA1/2 mutation.
Background:
BRCA1/BRCA2: BRCA1 and BRCA2 play crucial roles in maintaining genome integrity, mainly through homologous recombination (HR) repair of DNA double-strand breaks (DSBs). Mutations of BRCA1 and BRCA2 can reduce the capacity of the HR machinery, increase genomic instability, and elicit a predisposition to malignancies. People with BRCA1 and BRCA2 deficiency have higher risks of developing certain cancers, such as breast and ovarian cancers. Germline defects in the BRCA1/BRCA2 genes account for up to 5% of breast cancer cases.
Background:
PARP inhibitors: Poly(ADP-ribose) polymerase (PARP) inhibitors are designed to treat tumors with BRCA1 and BRCA2 defects, exploiting their homologous recombination deficiency. These drugs have mainly been implemented in breast and ovarian cancers, and their clinical efficacy in patients with other types of cancers, such as pancreatic cancer, is still being investigated. It is vital to identify suitable patients with BRCA1/BRCA2 deficiency so that PARP inhibitors can be utilized optimally. PARP inhibitors operate on the concept of synthetic lethality, selectively causing cell death in BRCA-mutant cells while sparing normal cells.
Background:
HRDetect: HRDetect was implemented to detect tumors with BRCA1/BRCA2 deficiency using data from whole-genome sequencing. The model quantitatively aggregates six HRD-associated signatures into a single score, called HRDetect, to accurately classify breast cancers by their BRCA1 and BRCA2 status. The machine-learning algorithm assigns weight values to these signatures prior to computing the final score. The six signatures, ranked by decreasing weight, are: microhomology-mediated indels, the HRD index, base-substitution signature 3, rearrangement signature 3, rearrangement signature 5, and base-substitution signature 8. Additionally, this weighted approach is able to identify BRCAness, which refers to mutational phenotypes displaying homologous recombination deficiency similar to that of tumors with BRCA1/BRCA2 germline defects.
Methodology:
Input: HRDetect requires four types of inputs:
counts of mutations associated with each signature of single-base substitutions;
proportions of indels with microhomology at the indel breakpoint junction, indels at polynucleotide-repeat tracts, and other complex indels;
counts of rearrangements associated with each rearrangement signature;
the HRD index (the arithmetic sum of the loss of heterozygosity (LOH), telomeric allelic imbalance (TAI), and large-scale state transition (LST) scores).
Statistical analysis: The classifier is based on a supervised learning method, using a lasso logistic regression model to distinguish samples with and without BRCA1/2 deficiency. Optimal coefficients are obtained by minimizing the objective function.
Methodology:
Log transformation: To account for high substitution counts in some samples, the genomic data is first log transformed: ln(x + 1).
Standardization: The transformed data is then standardized to make mutational class values comparable, giving each feature a mean of 0 and a standard deviation (sd) of 1: (x − mean(x)) / sd(x).
Lasso logistic regression modelling: To distinguish between samples affected and not affected by BRCA1/BRCA2 deficiency, a lasso logistic regression model is used:
min over (β0, β) of −(1/N) ∑i=1..N [ yi(β0 + xiTβ) − log(1 + e^(β0 + xiTβ)) ] + λ‖β‖1
where:
yi : BRCA status of a sample (yi = 1 for BRCA1/BRCA2-null samples; yi = 0 otherwise)
β0 : intercept, interpreted as the log-odds of yi = 1 when xiT = 0
β : vector of weights
p : number of features characterizing each sample
N : number of samples
xiT : vector of features characterizing the ith sample
λ : penalty promoting the sparseness of the weights
‖β‖1 : L1 norm of the vector of weights
The β weights are constrained to be positive to reflect the presence of mutational processes due to BRCA1/BRCA2 defects. Setting the constraint of nonnegative weights ensures that all samples are scored on the basis of the presence of relevant mutational signatures associated with BRCA1/BRCA2 deficiency, irrespective of whether these signatures are the dominant mutational process in the cancer.
Methodology:
HRDetect score: Lastly, the weights obtained from the lasso regression are used to give a new sample a probabilistic score, using the normalized mutational data xiT and the model parameters (β, β0):
P(Ci = BRCA) = 1 / (1 + e^−(β0 + xiTβ))
where:
Ci : variable encoding the status of the ith sample
β0 : intercept weight
xiT : vector encoding features of the ith sample
β : vector of weights
Interpretation: The probability value quantifies the degree of BRCA1/BRCA2 defectiveness. A cut-off probability value should be chosen while maintaining a high sensitivity. These scores can be utilized to guide therapy.
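The scoring step amounts to a logistic transform of a weighted sum of the log-transformed, standardized signature features. A minimal sketch in Python follows; the feature values and weights here are hypothetical (the real coefficients come from the published trained lasso model):

```python
import math

def preprocess(x, mean, sd):
    """ln(x + 1) transform followed by standardization against the
    training-set mean and standard deviation."""
    return (math.log(x + 1.0) - mean) / sd

def hrdetect_score(features, beta, beta0):
    """P(BRCA1/2-deficient) = 1 / (1 + exp(-(beta0 + x . beta)))."""
    linear = beta0 + sum(b * x for b, x in zip(beta, features))
    return 1.0 / (1.0 + math.exp(-linear))

# Hypothetical standardized values for the six signatures and illustrative
# non-negative weights (the positivity mirrors the model's constraint).
features = [1.2, 0.8, 0.5, 0.3, 0.2, 0.1]
beta = [0.9, 0.7, 0.5, 0.4, 0.3, 0.2]
print(hrdetect_score(features, beta, beta0=-2.0))  # a probability in (0, 1)
```

Because the weights are non-negative, a larger burden of any HRD-associated signature can only increase the score, which reflects the model's design choice described above.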
Applications:
Predicting chemotherapeutic outcomes: Mutations in genes responsible for HR are prevalent among human cancers. The BRCA1 and BRCA2 genes are centrally involved in HR, DNA damage repair, end resection, and checkpoint signaling. Mutational signatures of HRD have been identified in over 20% of breast cancers, as well as in pancreatic, ovarian, and gastric cancers. BRCA1/2 mutations confer sensitivity to platinum-based chemotherapies. HRDetect can be independently trained to predict BRCA1/2 status, and has the capacity to predict outcomes of platinum-based chemotherapies.
Applications:
Breast cancer: HRDetect was initially developed to detect tumors with BRCA1 and BRCA2 deficiency based on data from whole-genome sequencing of a cohort of 560 breast cancer samples. Within this cohort, 22 patients were known to carry germline BRCA1/BRCA2 mutations. BRCA1/BRCA2-deficiency mutational signatures were found in more breast cancer patients than previously known: the model identified 124 (22%) of the 560 breast cancer samples as showing BRCA1/2 mutational signatures. Apart from the 22 known cases, an additional 33 patients showed deficiency with germline BRCA1/2 mutations, 22 patients displayed somatic mutations of BRCA1/2, and 47 were recognized as showing a functional defect without a detected BRCA1/2 mutation. As a result, with a probabilistic cut-off of 0.7, HRDetect demonstrated 98.7% sensitivity in recognizing BRCA1/2-deficient cases.
Applications:
In contrast, germline mutations of BRCA1/2 are present in only 1–5% of breast cancer cases. These findings therefore suggest that far more breast cancer patients, as many as 1 in 5 (20%), may benefit from PARP inhibitors than the small percentage of patients currently given the treatment.
Cohort of 80 breast cancer patients; 6 out of 7 are above an HRDetect score of 0.7.
Cohort of 80 breast cancer samples: HRDetect was tested in 80 breast cancer cases, mainly ER-positive and HER2-negative. The tool identified samples exceeding an HRDetect score of 0.7, including one germline BRCA1 mutation carrier, four germline BRCA2 mutation carriers, and one somatic BRCA2 mutation carrier. The sensitivity of the tool reached 86%.
Compatibility Across Cancers HRDetect can be applied to other cancer types and yields adequate sensitivity.
Ovarian cancer: In a cohort of 73 patients with ovarian cancer, 30 patients were known to carry BRCA1/BRCA2 mutations, and 46 (63%) patients were assessed by HRDetect to have an HRDetect score over 0.7. The sensitivity of detecting BRCA1/2-deficient cancer was almost 100%, with an additional 16 cases identified.
Pancreatic cancer: In a cohort of 96 patients with pancreatic cancer, 6 cases were known to have a mutation or allele loss, and 11 (11.5%) patients were identified by HRDetect to exceed a cutoff of 0.7. The study observed a similar sensitivity approaching 100%, with five other cases identified.
Advantages and Limitations:
Advantages: The concordance in predictions between low-coverage and high-coverage sequencing is high.
Advantages and Limitations:
It can be trained on whole-exome sequencing (WES) data. It can be used with sequencing data from formalin-fixed paraffin-embedded (FFPE) samples. It can distinguish BRCA1 from BRCA2 tumors.
Limitations: While it can be used with WES data, the sensitivity of detection falls considerably when the model is not trained on such data. The sensitivity increases when training is performed with WES data; however, false positives are still identified.
**Karen Spärck Jones Award**
Karen Spärck Jones Award:
To commemorate the achievements of Karen Spärck Jones, the Karen Spärck Jones Award was created in 2008 by the British Computer Society (BCS) and its Information Retrieval Specialist Group (BCS IRSG); the award is sponsored by Microsoft Research. The winner of the award is invited to present a keynote talk at the European Conference on Information Retrieval (ECIR) the following year.
Chronological recipients and keynote talks:
2009: Mirella Lapata: “Image and Natural Language Processing for Multimedia Information Retrieval”
2010: Evgeniy Gabrilovich: “Ad Retrieval Systems in vitro and in vivo: Knowledge-Based Approaches to Computational Advertising”
2011: No award was made
2012: Diane Kelly: “Contours and Convergence”
2013: Eugene Agichtein: “Inferring Searcher Attention and Intention by Mining Behavior Data”
2014: Ryen White: “Mining and Modeling Online Health Search”
2015: Jordan Boyd-Graber: “Opening up the Black Box: Interactive Machine Learning for Understanding Large Document Collections, Characterizing Social Science, and Language-Based Games”; Emine Yilmaz: “A Task-Based Perspective to Information Retrieval”
2016: Jaime Teevan: “Search, Re-Search”
2017: Fernando Diaz: “The Harsh Reality of Production Information Access Systems”
2018: Krisztian Balog: “On Entities and Evaluation”
2019: Chirag Shah: “Task-Based Intelligent Retrieval and Recommendation”
2020: Ahmed H. Awadallah: “Learning with Limited Labeled Data: The Role of User Interactions”
2021: Ivan Vulić: “Towards Language Technology for a Truly Multilingual World?”
2022: William Yang Wang: “Large Language Models for Question Answering: Challenges and Opportunities”
**Sharkskin**
Sharkskin:
Sharkskin describes a specific woven or warp-knitted fabric with a distinctive sheen. Sharkskin is a twill-weave fabric; materials used in its construction include acetate, rayon, worsted wool, lycra, and other synthetic fibers. In sharkskin, the arrangement of darker and brighter threads in a twill weave creates a subtle pattern of lines that run diagonally across the fabric, and a two-tone, lustrous appearance. Primarily a suiting material, the fabric is sometimes seen in light jackets and in non-fashion items such as curtains, tablecloths, and liners in diving suits and wetsuits.
Composition:
Sharkskin has historically been made with different types of natural fibers, including mohair, wool, and silk. More expensive variations, often demarcated by fabric content labels bearing "Golden Fleece", "Royal" or the like, indicate an extremely rare and costly sharkskin of yesteryear. Those fabrics, produced in small quantities, were manufactured in South America (in Peru and Argentina, by transplanted German and Italian weavers) from the 1950s and 60s, and in some instances are known to include small percentages of vicuña, guanaco, or alpaca in the blend; inclusion of silk was even more common among these "natural sharkskins". By contrast, "artificial sharkskin" is a fabric variant more often found from that period, and can contain synthetic fibers developed contemporaneously.
Artificial variations:
Artificial sharkskin variants used for suiting first appeared in the 1950s. These variants made more significant use of wool and synthetic fibers in their construction. The addition of synthetics can create a heightened metallic-like sheen and/or added flexibility. Artificial sharkskin, in part because of its comparably low price point, gained traction as a clothing material in the early 1960s and the disco era of the late 1970s. Its popularity waned, but it enjoyed brief fashion resurgences in the mid-1980s, mid-1990s, and late 2000s.
Middle East:
British diplomat Sir Terence Clark served in Bahrain in the 1950s. He reminisces that the requisite winter evening wear for a diplomat was a white sharkskin dinner jacket. Lucette Lagnado, in her prize-winning memoir about her childhood, The Man in the White Sharkskin Suit: My Family's Exodus from Old Cairo to the New World, uses the imagery of the white sharkskin suit to evoke the glamorous evening life in Egypt in the 1950s. Early in Justine, Lawrence Durrell mentions the heroine sitting in front of a multi-panel mirror trying on a sharkskin dress; the book is set in the high society of diplomats and businessmen in Alexandria in the 1930s, a city where Durrell spent much time a few years later, during World War II.
**Generation ship**
Generation ship:
A generation ship, or generation starship, is a hypothetical type of interstellar ark starship that travels at sub-light speed. Since such a ship might require hundreds to thousands of years to reach nearby stars, the original occupants of a generation ship would grow old and die, leaving their descendants to continue traveling.
Origins:
Rocket pioneer Robert H. Goddard was the first to write about long-duration interstellar journeys, in "The Ultimate Migration" (1918). In it he described the death of the Sun and the necessity of an "interstellar ark". The crew would travel for centuries in suspended animation and be awakened when they reached another star system. He proposed using small moons or asteroids as ships, and speculated that the crew would endure psychological and genetic changes over the generations. Konstantin Tsiolkovsky, considered a father of astronautic theory, first described the need for multiple generations of passengers in his essay "The Future of Earth and Mankind" (1928): a space colony equipped with engines that travels for thousands of years, which he called "Noah's Ark". In the story, the crew had changed so much over the generations, and at so many levels, that they did not even acknowledge Earth as their home planet. Another early description of a generation ship appears in the 1929 essay "The World, The Flesh, & The Devil" by John Desmond Bernal. Bernal's essay was the first publication on the idea to reach the public and influence other writers. He wrote about human evolution and mankind's future in space through methods of living that we would now describe as a generation starship, which he discussed under the generic word "globes".
Definition:
According to Hein et al., a "generation ship" is a spacecraft on which a crew is living on-board for at least several decades, such that it comprises multiple generations. Several sub-categories of generation ships are distinguished: sprinter, slow boat, colony ship, world ship.
The Enzmann starship is categorised as a "slow boat" because of the Astronomy Magazine title "Slow Boat to Centauri" (1977). Gregory Matloff's concept is called a "colony ship", and Alan Bond called his concept a "world ship". These definitions are essentially based on the velocity of the ship and the population size.
Obstacles:
Biosphere Such a ship would have to be entirely self-sustaining, providing life support for everyone aboard. It must have extraordinarily reliable systems that could be maintained by the ship's inhabitants over long periods of time. This would require testing whether thousands of humans could survive on their own before sending them beyond the reach of help. Small artificial closed ecosystems, such as Biosphere 2, have been built in an attempt to examine the engineering challenges of such a system, with mixed results.
Obstacles:
Biology and society Generation ships would have to anticipate possible biological, social and morale problems, and would also need to deal with matters of self-worth and purpose for the various crews involved.
Obstacles:
Estimates of the minimum reasonable population for a generation ship vary. Anthropologist John Moore has estimated that, even without genetic testing of people before boarding the ship, without social control and/or social engineering (such as requiring people to wait until their thirties to have children), and without cryopreservation of eggs, sperm, or embryos (as is done in sperm banks), a minimum of 160 people boarding the ship would allow normal family life (with the average individual having ten potential marriage partners) throughout a 200-year space journey, with little loss of genetic diversity. If the people who board the ship are couples, presumably in their early twenties, and everybody who lives on the ship is required to wait until their mid-to-late thirties before having children, then the minimum would be just 80 people. However, many variables are not accounted for in this estimate, including the higher chance of health problems for both the pregnant woman and the fetus or baby because of the woman's age. In 2013, anthropologist Cameron Smith reviewed the existing literature and created a new computer model that estimates a minimum reasonable population in the tens of thousands.
Smith's numbers were much larger than previous estimates such as Moore's, in part because Smith takes the risk of accidents and disease into consideration, and assumes at least one severe population catastrophe over the course of a 150-year journey.
In light of the multiple generations that it could take to reach even our nearest neighboring star systems, such as Proxima Centauri, further issues bearing on the viability of such interstellar arks include:
the possibility of humans dramatically evolving in directions unacceptable to the sponsors;
the minimum population required to maintain in isolation a culture acceptable to the sponsors, including the ability to learn the scientific and technical skills needed to maintain, operate, and pilot the ship;
the ability to accomplish the purpose contemplated (planetary colonization, research, building new interstellar arks);
sharing the values of the sponsors, which may not be likely to be empirically demonstrated to be viable beyond the home planet unless, once the ship is away from Earth and on its way, survival of one's offspring until the ship reaches the target star is one motivation.
Obstacles:
Size For a spacecraft to maintain a stable environment for multiple generations, it would have to be large enough to support a community of humans and a fully recycling ecosystem. A spacecraft of such a size would require much energy to accelerate and decelerate. A smaller spacecraft, while able to accelerate more easily and thus make higher cruise velocities more practical, would reduce exposure to cosmic radiation and the time for malfunctions to develop in the craft, but would have challenges with resource metabolic flow and ecologic balance.
Obstacles:
Social breakdown Generation ships traveling for long periods of time may see breakdowns in social structures. Changes in society (for example, mutiny) could occur over such periods and may prevent the ship from reaching its destination.
Cosmic rays The radiation environment of deep space is very different from that on the Earth's surface, or in low earth orbit, due to the much larger influx of high-energy galactic cosmic rays (GCRs). Like other ionizing radiation, high-energy cosmic rays can damage DNA and increase the risk of cancer, cataracts, and neurological disorders.
Ethical considerations:
The success of a generation ship depends on children born aboard taking over the necessary duties, as well as having children themselves. Even if their quality of life might be better than, for example, that of people born into poverty on Earth, philosophy professor Neil Levy has raised the question of whether it is ethical to severely constrain life choices of individuals by locking them into a project they did not choose. A moral quandary exists regarding how intermediate generations, those destined to be born and die in transit without actually seeing tangible results of their efforts, might feel about their forced existence on such a ship.
Project Hyperion:
Project Hyperion, launched in December 2011 by Icarus Interstellar, was to perform a preliminary study that defines integrated concepts for a crewed interstellar generation ship. This was a two-year study mainly based out of the WARR student group at the Technical University of Munich. The study aimed to provide an assessment of the feasibility of crewed interstellar flight using current and near-future technologies. It also aimed to guide future research and technology development plans as well as to inform the public about crewed interstellar travel. Notable results of the project include an assessment of world ship system architectures and adequate population size. The core team members have transferred to the Initiative for Interstellar Studies's world ship project and a survey paper on generation ships has been presented at the ESA Interstellar Workshop in 2019 as well as in ESA's Acta Futura journal.
**Customer communications management**
Customer communications management:
Customer Communications Management (CCM) is software that enables companies to manage customer communications across a wide range of media. Originally, customer communications referred to printed documents, archived digital documents, email, and web pages. The field has grown to include SMS/MMS, in-app notifications, responsive-design mobile experiences, and messages over common social media platforms. It entails an automated process that involves not only the delivery of communication but also the segmentation of messages according to different customer profiles and contexts.
Concept:
CCM software allows organizations to deploy a new approach to information exchange, thereby improving their ability to maintain relationships with customers and other stakeholders. By using the software, messages disseminated are no longer generic but tailored according to customers' needs and specific platforms (Web, email, SMS, and print) and devices (mobile, laptop, tablet, and PC). For instance, if a customer interacts with an organization, the data or push messages provided cover not only the needed information but the entire context of the interaction which includes the customer profile (e.g. lifestyle and life-stage needs), history of online activity, and personal preferences. This process involves the utilization of high-volume data collected offline and online.
Concept:
Owing to the nature of CCMs, they are also referred to as "Intelligent Customer Communications Management" systems.
History:
Before the term CCM was used, this technology was referred to as Variable Data Printing (VDP) or Variable Data Publishing. The term "Trans Promo", short for "Trans Promotional", came into use as the term "VDP" gave way to "CCM" in industry-generated content.
History:
Some initial CCM concepts focused on the utilization of company transactional documents. Documents such as bank statements, statements of account, invoices, and other customer transactional documents were viewed as ideal media for promoting company products to customers. The rationale behind this was cited in analyst research by InfoTrends: "transactional documents are opened and read by more than 90% of consumers. Because the average consumer is bombarded with advertising, e-mail, direct mail and other forms of solicitation each day, Trans Promo can help you cut through the clutter and stand out".
History:
Other CCM concepts were shaped by marketing needs, and many CCM technologies improved their design, testing, analytic integration, and customer journey mapping capabilities to meet the needs of marketers, who became increasingly important in the technology buying process. The scope of CCM solutions has rapidly grown beyond management and data analysis. Many contemporary solutions offer "automatic generation of sales proposals, employment contracts, loan documents, service level agreements, product descriptions and pricing, and other transactional or legal documents where re-usable content can be applied to generate accurate, consistent and personalized documents for a range of business applications". This shift toward management flexibility becomes more evident as companies develop CCM solutions and products adaptable to the evolving technologies available to businesses. In recent years, this can be observed in businesses' introduction of tablets and tablet-friendly solutions into their standard scope of work.
Components:
The technology that supports customer communications management also allows sophistication in the content of the messages. Customer communications management technology usually includes or integrates with the following components:
- Data extraction, transform and load (ETL) software
- Data management, analysis and location intelligence software
- Data hygiene database software
- Document composition software
- Electronic document archive software, possibly with payment processing functionality
- Print stream engineering / post-processing software
- Mailing compliance database software
- Printer management software
- High- and medium-volume production printers
- Envelope inserter machines
- Email marketing software
- SMS communication software
- Mobile-media-based content distribution software
- More recently, social media distribution software
- Document production reporting software
- Portal technology
- Trans-promotional application software
- Customer journey mapping
- Customer journey orchestration
All CCM technologies feature design interfaces that primarily use visual layout software to define the structure of the communication. These design interfaces create a basic visual structure of a communication that is later populated by a production engine with data, variably created data, static content elements, rules-driven content elements, externally referenced content and other elements to create a finished customer communication.
Components:
There are varying degrees of sophistication that CCM design interfaces handle, depending on the business needs. Some design environments are simple cloud-based interfaces that create communications for quick and easy marketing communications. There are more comprehensive interfaces that can support complex applications like insurance policy generation that require the skills and expertise of many business experts.
Components:
Most CCM technologies offer data extraction capabilities that present marketers and businesses with an opportunity to combine data from multiple systems across their business to perform customer analysis before composing communications. This allows marketers to evaluate the marketing mix and position individual products to the customer according to their relevance to that customer, or according to the results of a purchase-propensity model, by applying rules to content elements within the design.
Components:
The process results in the creation of a data model, data acquisition and decision rules. These enable a document composition engine to follow its own set of document application rules, constructing individual documents on the basis of data items contained within an individual's data record. The document composition engine usually produces either a print stream or XML data.
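As a rough illustration of the rules-driven composition described above, the sketch below shows how a composition engine might merge one data record with static and conditional content. All names, fields and rules here are invented for the example; real CCM engines work from far richer templates and data models.

```python
# Toy "composition engine": build one personalized document from a data
# record plus a list of (condition, content) rules. Illustrative only.

def compose_statement(record, rules):
    """Merge static content, record data and rules-driven content."""
    blocks = [f"Dear {record['name']},",
              f"Your balance is {record['balance']:.2f}."]
    for condition, content in rules:
        if condition(record):          # rules-driven content element
            blocks.append(content)
    return "\n".join(blocks)

# Hypothetical "TransPromo"-style rule: only high-balance customers
# receive the promotional message.
rules = [(lambda r: r["balance"] > 1000,
          "You may qualify for our premium savings account.")]

doc = compose_statement({"name": "A. Smith", "balance": 2500.0}, rules)
print(doc)
```

In a real engine the same record would typically drive many more elements (externally referenced content, variably created data, and so on), but the select-by-rule pattern is the same.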
Components:
Post-processing can be utilized to prepare a print job for production and distribution. This may include tasks such as the application of barcodes that deliver individual mail-piece instructions to the inserters, varied according to the actual inserter being used. For example, one manufacturer's inserter may require different barcode instructions than another's to complete the same task.
Components:
Print Management software controls the routing and distribution of print jobs to either a single production printer or a fleet of production printers. Print management software also provides a mechanism for assured delivery (ensuring that all pages get printed) through communication and feedback from print devices. Analysis of resultant data provides insight useful for Document Production Managers.
Relevance of communication is seen as key in overcrowded, competitive markets where service differentiation can be difficult. Documents that add value to the customer relationship are a major factor in improving customer retention and acquisition. Employing a customer communications management solution can help organizations improve all of these customer experience efforts on a multi-channel communications level.
**Acer Chromebook Tab 10**
Acer Chromebook Tab 10:
The Acer Chromebook Tab 10 (D651N) is a tablet computer manufactured by Acer Inc. It was the first ChromeOS tablet to be released, and receives software updates until 2023. The tablet was announced in March 2018.
Specifications:
The SoC is a Rockchip OP1. It has 4 GiB of RAM and 32 GiB of storage, which can be extended with a microSD card. It has a 9.7-inch display with a resolution of 2048×1536 at 264 dpi. The code name of the device is scarlet. It is primarily designed for education.
Reception:
TechRadar praised the screen as excellent. PCMag noted that ChromeOS without a keyboard poses some problems.
**Serial Data Transport Interface**
Serial Data Transport Interface:
Serial Data Transport Interface is a way of transmitting data packets over a Serial Digital Interface datastream. This means that standard SDI infrastructure can be used.
Developed to address the needs of the growing number of compressed video standards (DV, DVCPRO, Betacam SX, MPEG-2), it allows lossless transfer of data between devices that share the same codec, for example DV to DV or SX to SX.
Using a standard SDI transport, the extra data is placed within the normal active video area, between the Start of Active Video (SAV) and End of Active Video (EAV) markers. This gives 1440 10-bit words of data per line at 270 Mbit/s (1920 words in the 8-bit 360 Mbit/s standard).
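As a back-of-envelope illustration of the capacity this payload area provides, the sketch below multiplies the per-line word count out to a per-second payload. The useful-bits-per-word figure, active-line count and frame rate are assumptions made for this example (a 625-line, 25 frames-per-second system), not values quoted from the standard.

```python
# Back-of-envelope payload estimate for a 270 Mbit/s SDTI stream.
# Assumptions for this example only: 8 useful bits per 10-bit data
# word, 576 active lines per frame, 25 frames per second.

words_per_line = 1440                   # data words between SAV and EAV
useful_bits_per_word = 8
lines_per_frame = 576
frames_per_second = 25

payload_bits_per_second = (words_per_line * useful_bits_per_word
                           * lines_per_frame * frames_per_second)
print(payload_bits_per_second / 1e6, "Mbit/s of payload")   # ~166 Mbit/s
```

The estimated payload (around 166 Mbit/s under these assumptions) is well below the 270 Mbit/s line rate, the difference being the blanking intervals and the 10-bit word overhead.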
If an SDTI stream is viewed on a standard SDI device, the raw data can be seen as a small strip along the left-hand side (usually in purple). The DVCAM SDTI stream carries video data at the top, control data (timecode, etc.) in the middle and audio at the bottom, just as it is organised on the tape.
Because SDTI is used for compressed data, the area used is less than a full screen; this allows for faster-than-realtime transfers.
SDTI is standardized as SMPTE 305M. A 1.5 Gbit/s version, using the high-definition serial digital interface, is standardized as SMPTE 348M.
**Shapeshifters (board game)**
Shapeshifters (board game):
Shapeshifters is a board game that was published by Fat Messiah Games in 1991.
Gameplay:
Shapeshifters is a board game about a duel involving two magicians who are both adept at changing form.
Reception:
Scott Haring reviewed Shapeshifters in Pyramid Number 5 (Jan., 1994), and stated that "Overall, this is an inventive game that is easy to learn (but by no means easy to master) and takes less than an hour to play. The components are simple (but not too cheap), and [...] the game is a bargain. What more could you ask for?"
**Landau–Squire jet**
Landau–Squire jet:
In fluid dynamics, Landau–Squire jet or Submerged Landau jet describes a round submerged jet issued from a point source of momentum into an infinite fluid medium of the same kind. This is an exact solution to the incompressible form of the Navier-Stokes equations, which was first discovered by Lev Landau in 1944 and later by Herbert Squire in 1951. The self-similar equation was in fact first derived by N. A. Slezkin in 1934, but never applied to the jet. Following Landau's work, V. I. Yatseyev obtained the general solution of the equation in 1950.
Mathematical description:
The problem is described in spherical coordinates $(r,\theta,\phi)$ with velocity components $(u,v,0)$. The flow is axisymmetric, i.e., independent of $\phi$. Then the continuity equation and the incompressible Navier–Stokes equations reduce to

$$\frac{1}{r^2}\frac{\partial (r^2 u)}{\partial r} + \frac{1}{r\sin\theta}\frac{\partial (v\sin\theta)}{\partial\theta} = 0,$$

$$u\frac{\partial u}{\partial r} + \frac{v}{r}\frac{\partial u}{\partial\theta} - \frac{v^2}{r} = -\frac{1}{\rho}\frac{\partial p}{\partial r} + \nu\left(\nabla^2 u - \frac{2u}{r^2} - \frac{2}{r^2}\frac{\partial v}{\partial\theta} - \frac{2v\cot\theta}{r^2}\right),$$

$$u\frac{\partial v}{\partial r} + \frac{v}{r}\frac{\partial v}{\partial\theta} + \frac{uv}{r} = -\frac{1}{\rho r}\frac{\partial p}{\partial\theta} + \nu\left(\nabla^2 v + \frac{2}{r^2}\frac{\partial u}{\partial\theta} - \frac{v}{r^2\sin^2\theta}\right),$$

where

$$\nabla^2 = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right).$$
A self-similar description is available for the solution in the following form,

$$u = \frac{\nu}{r\sin\theta}f'(\theta), \qquad v = -\frac{\nu}{r\sin\theta}f(\theta).$$
Substituting the above self-similar form into the governing equations and using the boundary conditions $u=v=p-p_\infty=0$ at infinity, one finds the form for the pressure as

$$\frac{p-p_\infty}{\rho} = -\frac{v^2}{2} + \frac{\nu u}{r} + \frac{c_1}{r^2},$$

where $c_1$ is a constant. Using this pressure, we find again from the momentum equation,

$$u\frac{\partial u}{\partial r} + \frac{v}{r}\frac{\partial u}{\partial\theta} = \nu\left[\frac{3u}{r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial u}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial u}{\partial\theta}\right)\right] + \frac{2c_1}{r^3}.$$
Replacing $\theta$ by $\mu=\cos\theta$ as the independent variable, the velocities become

$$u = -\frac{\nu}{r}f'(\mu), \qquad v = -\frac{\nu}{r}\frac{f(\mu)}{\sqrt{1-\mu^2}}$$

(for brevity, the same symbol is used for $f(\theta)$ and $f(\mu)$ even though they are functionally the same but take different numerical values) and the equation becomes

$$f'^2 + ff'' = 2f' + [(1-\mu^2)f'']' - 2c_1.$$
Mathematical description:
After two integrations, the equation reduces to

$$f^2 = 4\mu f + 2(1-\mu^2)f' - 2(c_1\mu^2 + c_2\mu + c_3),$$

where $c_2$ and $c_3$ are constants of integration. The above equation is a Riccati equation. After some calculation, the general solution can be shown to be

$$f = \alpha(1+\mu) + \beta(1-\mu) + 2(1-\mu^2)(1+\mu)^{\beta}(1-\mu)^{\alpha}\left[c - \int_1^\mu (1+\mu)^{\beta}(1-\mu)^{\alpha}\,d\mu\right]^{-1},$$

where $\alpha,\beta,c$ are constants. The physically relevant solution to the jet corresponds to the case $\alpha=\beta=0$ (equivalently, we say that $c_1=c_2=c_3=0$, so that the solution is free from singularities on the axis of symmetry, except at the origin). Therefore,

$$f = \frac{2\sin^2\theta}{1+c-\cos\theta}.$$
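The closed-form solution above can be spot-checked numerically. The sketch below (an illustration, using a simple central finite-difference derivative) verifies that $f = 2(1-\mu^2)/(1+c-\mu)$ satisfies the twice-integrated equation $f^2 = 4\mu f + 2(1-\mu^2)f'$ with $c_1=c_2=c_3=0$:

```python
# Numeric spot-check that f(mu) = 2(1-mu^2)/(1+c-mu) satisfies
# f^2 = 4*mu*f + 2*(1-mu^2)*f' (all integration constants zero).

def f(mu, c):
    return 2.0 * (1.0 - mu**2) / (1.0 + c - mu)

def residual(mu, c, h=1e-6):
    fp = (f(mu + h, c) - f(mu - h, c)) / (2.0 * h)   # f'(mu), central difference
    return f(mu, c)**2 - (4.0 * mu * f(mu, c) + 2.0 * (1.0 - mu**2) * fp)

# Check at several interior points mu and jet constants c.
checks = [residual(mu, c) for mu in (-0.5, 0.0, 0.7) for c in (0.1, 1.0, 10.0)]
print(max(abs(r) for r in checks))   # ~0, up to finite-difference error
```

The residual vanishes (to finite-difference accuracy) for every tested $\mu$ and $c$, consistent with the claim that this is the singularity-free member of the general solution family.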
Mathematical description:
The function $f$ is related to the stream function as $\psi=\nu r f$; thus contours of constant $\psi$ provide the streamlines. The constant $c$ describes the force at the origin acting in the direction of the jet (this force is equal to the rate of momentum transfer across any sphere around the origin plus the force in the jet direction exerted by the sphere due to pressure and viscous forces); the exact relation between the force and the constant is given by

$$\frac{F}{2\pi\rho\nu^2} = \frac{32(c+1)}{3c(c+2)} + 8(c+1) - 4(c+1)^2\ln\left(\frac{c+2}{c}\right).$$
Mathematical description:
The solution describes a jet of fluid moving away from the origin rapidly and entraining the slowly moving fluid outside of the jet. The edge of the jet can be defined as the location where the streamlines are at their minimum distance from the axis, i.e., the edge is given by

$$\theta_o = \cos^{-1}\left(\frac{1}{1+c}\right).$$
Therefore, the force can be expressed alternatively using this semi-angle of the conical boundary of the jet,

$$\frac{F}{2\pi\rho\nu^2} = \frac{32\cos\theta_o}{3\sin^2\theta_o} + \frac{8}{\cos\theta_o} - \frac{4}{\cos^2\theta_o}\ln\left(\frac{1+\cos\theta_o}{1-\cos\theta_o}\right).$$
Limiting behaviors:
When the force becomes large, the semi-angle of the jet becomes small, in which case

$$\frac{F}{2\pi\rho\nu^2} \approx \frac{32}{3\theta_o^2}, \qquad \theta_o \ll 1,$$

and the solutions inside and outside of the jet become

$$f \approx \frac{4\theta^2}{\theta_o^2+\theta^2}, \quad \theta \sim \theta_o; \qquad f \approx 2(1+\cos\theta), \quad \theta > \theta_o.$$
The jet in this limiting case is called the Schlichting jet. On the other extreme, when the force is small,

$$\frac{F}{2\pi\rho\nu^2} \sim \frac{8}{c}, \qquad c \gg 1,$$

the semi-angle approaches 90 degrees (there is no inside and outside region; the whole domain is considered as a single region), and the solution itself goes to

$$f \to \frac{2\sin^2\theta}{c}.$$
**Sky Broadband**
Sky Broadband:
Sky Broadband is a broadband service offered by Sky UK in the United Kingdom. With the introduction of Sky Fibre, Sky Broadband now refers to ADSL broadband products.
History:
In October 2005, Sky UK agreed to purchase the ISP EasyNet for £211 million. At the time, EasyNet was one of two companies in the UK that had made major investments in local-loop unbundling (LLU), providing Sky with access to 232 unbundled telephone exchanges. The purchased company was placed under a new Sky division, Sky Broadband. In October 2007, Sky reached the one million mark in customer numbers, and claimed to be adding one new customer every 40 seconds. By September 2009, it had 2.3 million customers. By July 2012, Sky had reached four million customers, with unbundled exchanges covering over 70% of the United Kingdom. By January 2017, Sky said it had 6.1 million customers. Sky agreed on 1 March 2013 to buy the fixed telephone line and broadband business of Telefónica UK, trading under the O2 and BE brands. The company agreed to pay £180 million initially, followed by a further £20 million after all customers had been transferred to Sky's existing broadband and telephone business; customers were transferred during 2014.
Networking:
Sky Broadband provides Sky customers with download speeds of up to 20 Mbit/s (ADSL2+ from Sky enabled exchanges, by means of LLU) and up to 76 Mbit/s from exchanges enabled for FTTC via an Openreach landline.
Networking:
In July 2006, Sky also introduced a free broadband and calls package for its digital TV subscribers within the Sky Broadband network area. This means anyone on Sky can get free broadband (subject to a 2 GB per month usage limit) and free evening and weekend telephone calls, as long as the line is in a Sky Broadband network area. For customers whose exchange has not been enabled for the above services, the Connect service is available using the BT Wholesale ADSL Max network.
Networking:
Sky launched Sky Broadband in the Republic of Ireland in February 2013.
Speeds:
As with all DSL connections, the further the customer site is from the DSLAM (usually located at the telephone exchange), the slower the line speed will be. Sky uses dynamic line management (DLM) over the first ten days of a new connection to set the line at an acceptable downstream and upstream speed so that the connection remains stable. Lines are initially connected at 4 Mbit/s and the speed is gradually increased over the ten-day "training period" until the line shows signs of instability; this allows Sky to determine what speeds the line can handle while remaining stable.
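The training-period behaviour described above can be caricatured as a simple ramp-up loop. The sketch below is purely illustrative and is not Sky's actual DLM algorithm; the stability threshold and step size are invented for the example.

```python
# Illustrative caricature of a DLM "training period": step the sync
# rate up from 4 Mbit/s until the next step would exceed the line's
# (simulated) stability limit, then settle at the last stable rate.

def train_line(max_stable_mbit, start=4.0, step=2.0):
    """Return the highest tried rate that stays within the stability limit."""
    rate = start
    while rate + step <= max_stable_mbit:   # next step would still be stable
        rate += step
    return rate

print(train_line(max_stable_mbit=17.5))   # settles at 16.0 Mbit/s
```

A real DLM system works from observed error counts and resynchronisations over days, not a known limit, but the settle-just-below-instability idea is the same.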
Speeds:
In April 2012, Sky Fibre was launched, almost two and a half years after British Telecom launched BT Infinity in January 2010. In April 2014, it was announced that Sky would roll out 1-gigabit fibre-to-the-premises connections in the city of York in partnership with rival TalkTalk.
Sky Wireless Hub:
The Sky Wireless Hub is a wireless router distributed to all Sky Broadband customers when they order their Sky Broadband packages.
Sky Wireless Hub:
During 2006, Netgear was the only manufacturer of Sky Broadband routers, which were made in white. From 2008, Netgear and Sagem were the manufacturers of the Sky Broadband routers, made in black and shaped to match the Sky+ HD box. Both routers are also distributed in smaller boxes (now the size of the routers themselves) as part of Sky UK's low-carbon scheme, in turn reducing postage costs. The Sagem router, unlike the Netgear router, has added restrictions on features such as the built-in inbound firewall settings and outbound and inbound VPN connections. However, a firmware upgrade is available upon request for users wishing to connect to an outbound VPN connection using Sky Broadband, while maintaining restrictions on the inbound firewall and inbound VPN connection.
Sky Wireless Hub:
Towards the end of 2010, D-Link started producing routers for Sky. The D-Link router is the DSL-2640S.
On Demand:
Sky have created On Demand, which combines Sky Broadband and Sky+ HD to offer a true on-demand service using the Ethernet socket of the Sky+ HD box and the Sky Broadband router. Sky customers are able to connect their Sky router to their Sky+ HD box via an Ethernet cable or Wi-Fi adapter and stream content directly to their television. Unlike other VOD services, On Demand video counts towards a user's data usage.
Now Broadband:
Now Broadband (stylised as NOW Broadband) is a brand name of contract-free pricing plans that offer broadband internet and telephone service on a budget. It was launched in Summer 2016 as Now TV Combo, and was rebranded in early 2018 as Now Broadband. It is a brand extension of Sky's Now TV, an over-the-top internet television service which offers multichannel television and video-on-demand content on a budget.
Controversy:
On 21 September 2010, the website of ACS:Law was subjected to a DDoS attack as part of Operation Payback. After the site came back online a 350MB file was uploaded containing spreadsheets listing more than 8,000 Sky broadband customers accused of making unauthorized downloads of adult films. This raised issues concerning Sky not following Data Protection Act guidelines.
Broadband Shield:
In March 2014 Samuel L. Jackson and the other stars of Captain America: The Winter Soldier appeared in advertisements for the 'Sky Broadband Shield' web blocking product.
Sky Talk Shield:
In June 2017 Sky launched a free nuisance call blocking service as an optional extra for their landline customers. The service screens calls automatically before the phone rings, preventing robot callers. Customers are played a recording of the caller's name and given the option to either accept the call, reject it or send to voicemail.
As was common for Sky Broadband marketing campaigns during the 2010s, the launch was promoted with an advert featuring a tie-in with a film franchise, in this case, Despicable Me 3.
**Chemical depilatory**
Chemical depilatory:
A chemical depilatory is a cosmetic preparation used to remove hair from the skin. Common active ingredients are salts of thioglycolic acid and thiolactic acid. These compounds break the disulfide bonds in keratin and also hydrolyze the hair so that it is easily removed. Formerly, sulfides such as strontium sulfide were used, but due to their unpleasant odor, they have been replaced by thiols. The main chemical reaction effected by the thioglycolate is:

2 HSCH₂CO₂H (thioglycolic acid) + R-S-S-R (cystine) → HO₂CCH₂-S-S-CH₂CO₂H (dithiodiglycolic acid) + 2 RSH (cysteine)

Chemical depilatories contain 5–6% calcium thioglycolate in a cream base (to avoid runoff). Calcium hydroxide or strontium hydroxide maintains a pH of about 12. Hair destruction requires about 10 minutes. Depilation is followed by careful rinsing with water, and various conditioners are applied to restore the skin's pH to normal. Depilation does not destroy the dermal papilla, so the hair grows back. Chemical depilatories are available in gel, cream, lotion, aerosol, roll-on, and powder forms. Common brands include Nair, Magic Shave, and Veet.
Chemical depilatory:
Chemical depilatories are indicated in the treatment of hirsutism in polycystic ovary syndrome.
Depilatory ointments, or plasters, were known to Greek and Roman authors as psilothrum. In Jewish lore, King Solomon is said to have discovered a chemical depilatory made from a mixture of lime, water, and orpiment (arsenic trisulfide).
**BibSonomy**
BibSonomy:
BibSonomy is a social bookmarking and publication-sharing system. It aims to integrate the features of bookmarking systems as well as team-oriented publication management. BibSonomy offers users the ability to store and organize their bookmarks and publication entries and supports the integration of different communities and people by offering a social platform for literature exchange.
Both bookmarks and publication entries can be tagged to help structure and re-find information. As the descriptive terms can be freely chosen, the assignment of tags from different users creates a spontaneous, uncontrolled vocabulary: a folksonomy. In BibSonomy, the folksonomy evolves from the participation of research groups, learning communities and individual users, organizing their information needs.
Publication posts in BibSonomy are stored in the BibTeX format. Export in other formats such as EndNote or HTML (e.g. for creating publication lists) is possible.
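For illustration, a minimal entry of the kind a BibSonomy publication post stores might look like the following (all field values here are invented):

```bibtex
@article{doe2008example,
  author  = {Doe, Jane and Roe, Richard},
  title   = {An Example Article Title},
  journal = {Journal of Examples},
  year    = {2008},
  volume  = {42},
  pages   = {1--10}
}
```

Because BibTeX is the native storage format, such entries can be imported and exported without loss, while conversions to EndNote or HTML are derived from them.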
BibSonomy:
The service was developed by a team of students and scientists from the Institute of Knowledge and Data Engineering, the DMIR group at the University of Würzburg and the L3S Learning Lab Lower Saxony in Hannover and is mainly hosted by the University of Kassel. As of 17 November 2008, the source code of BibSonomy is available under the GNU Lesser General Public License. As of 12 March 2014, the source code of the BibSonomy web application is available under the GNU Affero General Public License.
**AFm phase**
AFm phase:
An AFm phase is an "alumina, ferric oxide, monosubstituted" phase, or aluminate ferrite monosubstituted, or Al₂O₃, Fe₂O₃ mono, in cement chemist notation (CCN). AFm phases are important products of the hydration of Portland cements and hydraulic cements.
They are crystalline hydrates with the generic, simplified formula 3CaO·(Al,Fe)₂O₃·CaXy·nH₂O, where: CaO, Al₂O₃ and Fe₂O₃ represent calcium oxide, aluminium oxide and ferric oxide, respectively; CaX represents a calcium salt, where X replaces an oxide ion; X is the substituted anion in CaX: divalent (SO₄²⁻, CO₃²⁻, …) with y = 1, or monovalent (OH⁻, Cl⁻, …) with y = 2.
AFm phase:
n represents the number of water molecules in the hydrate and may range between 13 and 19. AFm forms inter alia when tricalcium aluminate, 3CaO·Al₂O₃ (C₃A in CCN), reacts with dissolved calcium sulfate (CaSO₄) or calcium carbonate (CaCO₃). As the sulfate form is the dominant one among AFm phases in the hardened cement paste (HCP) in concrete, AFm is often simply referred to as aluminate ferrite monosulfate or calcium aluminate monosulfate. However, carbonate-AFm phases also exist (monocarbonate and hemicarbonate) and are thermodynamically more stable than the sulfate-AFm phase. During carbonation of concrete by atmospheric CO₂, the sulfate-AFm phase is also slowly transformed into carbonate-AFm phases.
Different AFm phases:
AFm phases belong to the class of layered double hydroxides (LDH). LDHs are hydroxides with a double-layer structure. The main cation is divalent (M²⁺) and its electrical charge is compensated by two OH⁻ anions: M(OH)₂. Some M²⁺ cations are replaced by a trivalent one (N³⁺). This creates an excess of positive electrical charge which needs to be compensated by the same number of negative electrical charges borne by anions. These anions are located in the space present between adjacent hydroxide layers. The interlayers in LDHs are also occupied by water molecules accompanying the anions counterbalancing the excess of positive charge created by the cation isomorphic substitution in the hydroxide sheets.
Different AFm phases:
In the most studied class of LDHs, the positive layer, consisting of divalent M²⁺ and trivalent N³⁺ cations, can be represented by the generic formula [M²⁺₁₋ₓN³⁺ₓ(OH⁻)₂]ˣ⁺·[(Xⁿ⁻)ₓ/ₙ·yH₂O]ˣ⁻, where Xⁿ⁻ is the intercalating anion. In AFm, the divalent cation is a calcium ion (Ca²⁺), while the substituting trivalent cation is an aluminium ion (Al³⁺). The nature of the counterbalancing anion (Xⁿ⁻) can be very diverse: OH⁻, Cl⁻, SO₄²⁻, CO₃²⁻, NO₃⁻, NO₂⁻. The thickness of the interlayer is sufficient to host a variety of relatively large anions often present as impurities: B(OH)₄⁻, SeO₄²⁻, SeO₃²⁻... As other LDHs, AFm can incorporate in its structure toxic elements such as boron and selenium. Some AFm phases are presented in the table here below as a function of the nature of the anion counterbalancing the excess of positive charges in the Ca(OH)₂ hydroxide sheets. As in portlandite (Ca(OH)₂), the hydroxide sheets of AFm are made of hexa-coordinated octahedral cations located in a same plane, but due to the excess of positive electrical charges, the hydroxide sheets are distorted.
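The charge bookkeeping in the generic LDH formula can be spot-checked with simple arithmetic: x trivalent substitutions give the hydroxide layer a net charge of +x per formula unit, which must be balanced by x/n interlayer anions of charge n−. The sketch below (illustrative only) verifies electroneutrality for the AFm-like case of one Al³⁺ for every three cations (x = 1/3) and a divalent anion such as SO₄²⁻:

```python
# Arithmetic check of the LDH charge balance described above.

def layer_charge(x):
    """Net layer charge per formula unit [M(2+)_(1-x) N(3+)_x (OH-)_2]."""
    return 2 * (1 - x) + 3 * x - 2        # cation charges minus two OH-

def interlayer_charge(x, n):
    """Charge carried by x/n interlayer anions of valence n-."""
    return -(x / n) * n

x, n = 1 / 3, 2     # AFm-like: one Al3+ per 3 cations; divalent anion X(2-)
print(layer_charge(x) + interlayer_charge(x, n))   # ~0 -> electroneutral
```

The same balance holds for any substitution fraction x and anion valence n, which is why LDHs can exchange, say, two Cl⁻ for one CO₃²⁻ without disturbing the layer charge.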
Different AFm phases:
To convert the oxide notation into the LDH formula, the mass balance in the system has to respect the principle of conservation of matter. Oxide ions (O²⁻) and water are transformed into two hydroxide anions (OH⁻) according to the acid-base reaction between H₂O and O²⁻ (a strong base), as typically exemplified by the quicklime (CaO) slaking process:

H₂O + O²⁻ ⇌ OH⁻ + OH⁻ (acid₁ + base₂ ⇌ base₁ + acid₂), or simply, O²⁻ + H₂O ⇌ 2 OH⁻
AFm structure:
AFm phases encompass a class of calcium aluminate hydrates (C-A-H) whose structure derives from that of hydrocalumite, 4CaO·Al₂O₃·13–19H₂O, in which OH⁻ anions are partly replaced by SO₄²⁻ or CO₃²⁻ anions. The different mineral phases resulting from these anionic substitutions do not easily form solid solutions but behave as independent phases. The replacement of hydroxide ions by sulfate ions does not exceed 50 mol %. So, AFm does not refer to a single pure mineralogical phase but rather to a mix of several AFm phases co-existing in hardened cement paste (HCP). Considering a monovalent anion X, the chemical formula can be rearranged and expressed as 2[Ca₂(Al,Fe)(OH)₆]·X·nH₂O (or Ca₄(Al,Fe)₂(OH)₁₂·X·nH₂O, as presented in the table in the former section). The Me(OH)₆ octahedral ions are located in a plane, as for calcium or magnesium hydroxides in portlandite or brucite hexagonal sheets respectively. The replacement of one divalent Ca²⁺ cation by a trivalent Al³⁺ cation, or to a lesser extent by a Fe³⁺ cation, with a Ca:Al ratio of 2:1 (one Al substituted for every 3 cations), causes an excess of positive charge in the sheet, 2[2Ca(OH)₂·(Al,Fe)(OH)₂]⁺, to be compensated by 2 negative charges X⁻. The anions X⁻ counterbalancing the positive charge imbalance borne by the sheet are located in the interlayer, whose spacing is much larger than in the layered structure of brucite or portlandite. This allows the AFm structure to accommodate larger anionic species along with water molecules. The crystal structure of AFm phases is that of a layered double hydroxide (LDH), and AFm phases also exhibit the same anion-exchange properties. The carbonate anion (CO₃²⁻) occupies the interlayer space in a privileged way, with the highest selectivity coefficient, and is more strongly retained in the interlayer than other divalent or monovalent anions such as SO₄²⁻ or OH⁻.
AFm structure:
According to Miyata (1983), the equilibrium constant (selectivity coefficient) for anion exchange varies in the order CO₃²⁻ > HPO₄²⁻ > SO₄²⁻ for divalent anions, and OH⁻ > F⁻ > Cl⁻ > Br⁻ > NO₃⁻ > I⁻ for monovalent anions, but this order is not universal and varies with the nature of the LDH.
Thermodynamic stability:
The thermodynamic stability of AFm phases studied at 25 °C depends on the nature of the anion present in the interlayer: CO₃²⁻ stabilises AFm and displaces OH⁻ and SO₄²⁻ anions at the concentrations typically found in hardened cement paste (HCP). Different sources of carbonate can contribute to the carbonation of AFm phases: addition of finely ground limestone filler, atmospheric CO₂, carbonate present as an impurity in the gypsum interground with the clinker to avoid cement flash setting, and "alkali sulfates" condensed onto clinker during its cooling, or from added clinker kiln dust. Carbonation can occur rapidly within the fresh concrete during its setting and hardening (internal carbonate sources), or continue slowly in the long term in the hardened cement paste of concrete exposed to external sources of carbonate: CO₂ from the air, or the bicarbonate anion (HCO₃⁻) present in groundwater (immersed structures) or clay porewater (foundations and underground structures).
Thermodynamic stability:
When the carbonate concentration increases in the hardened cement paste (HCP), hydroxy-AFm phases are progressively replaced, first by hemicarboaluminate and then by monocarboaluminate. The stability of AFm phases increases with their carbonate content, as shown by Damidot and Glasser (1995) by means of their thermodynamic calculations of the CaO-Al₂O₃-SiO₂-H₂O system at 25 °C. When carbonate displaces sulfate from AFm, the sulfate released into the concrete pore water may react with portlandite (Ca(OH)₂) to form ettringite (3CaO·Al₂O₃·3CaSO₄·32H₂O), the main AFt phase present in the hydrated cement system. As stressed by Matschei et al. (2007), the impact of small amounts of carbonate on the nature and stability of the AFm phases is noteworthy. Divet (2000) also notes that micromolar amounts of carbonate can inhibit the formation of AFm sulfate, favouring the crystallisation of ettringite (AFt sulfate).
**Types of e-commerce**
Types of e-commerce:
There are many types of e-commerce models, based on market segmentation, that can be used to conduct business online. The six types of business models that can be used in e-commerce include: Business-to-Consumer (B2C), Consumer-to-Business (C2B), Business-to-Business (B2B), Consumer-to-Consumer (C2C), Business-to-Administration (B2A), and Consumer-to-Administration (C2A).
Business-to-business (B2B):
B2B e-commerce refers to the sale of goods or services between businesses via an online sales portal. While sometimes the buyer is the end user, often the buyer resells to the consumer. This type of e-commerce typically applies to the relationship between producers and wholesalers; it may additionally apply to the relationship between producers or wholesalers and the retailers themselves. However, the same relationship can also occur between service providers and business organizations. B2B typically requires more venture capital and a longer sales cycle, but results in higher order value and more recurring purchases.

As newer generations become decision makers in business, B2B e-commerce will become more important. In 2015, Google found that close to half of B2B buyers were millennials, nearly double the amount reported in 2012.

Examples of this model are ExxonMobil Corporation, the Chevron Corporation, Boeing, and Archer-Daniels-Midland. These businesses have custom, enterprise e-commerce platforms that work directly with other businesses in a closed environment.

The advantages of B2B e-commerce include:
- Convenience: While companies can sell through physical storefronts or take transactions by phone, B2B commerce often takes place online, where companies advertise their products and services, allow for demonstrations and make it easy to place bulk orders. Sellers also benefit from efficient order processing thanks to this digital transaction model.
- Higher profits: B2B companies often sell their items in wholesale quantities, allowing buyers to receive a good deal and restock less often. Larger order numbers lead to higher potential sales and additional profits for B2B sellers. At the same time, the ease of advertising to other businesses through B2B websites can help cut marketing costs and boost conversion rates.
- Huge market potential: From business software and consulting services to bulk materials and specialized machinery, B2B sellers can target a large market of companies across industries. At the same time, they have the flexibility of specializing in an area like technology to become a leader in the field.
- Improved security: Since contracts are a common part of B2B commerce, there is some security for both buyers and sellers in that there is less concern that one will pay and the other will deliver goods as promised. Since sales usually get tracked digitally, it is also more secure in that B2B sellers can track and monitor their financial results.

The disadvantages of B2B e-commerce include:
- More complex setup process: Getting started as a B2B retailer takes work to figure out how to get customers who stay dedicated and make large enough orders. This often requires thorough research to advertise to potential businesses, set up a custom ordering system and adapt quickly when sales are underwhelming.
- Limits to sales: While B2B companies can sell a lot, they do miss out on potential sales to individual customers. The smaller pool of business buyers and the need to negotiate contracts can put some limits on profits, especially when the company loses key buyers to other competitors.
- Need for B2B sellers to stand out: At the same time, the B2B market has many companies competing and selling similar products and services. Sellers often need to cut prices and find special ways to grab companies' attention to succeed in the market.
- Special ordering experience needed: B2B companies selling online need to put much effort into designing a website and ordering system that buyers find easy to use. This means presenting product and service information clearly, offering online demos or consultations and using order forms with appropriate options for quantities and any special customization needed.
Business-to-business (B2B):
Complex payment process: B2B online payment solutions are both time-consuming and expensive for both parties. The buyer has to be credit-checked, payment terms and trade discounts often have to be negotiated, and the business has to manually create a custom invoice.
Business-to-consumer (B2C):
Business-to-consumer (B2C), or direct-to-consumer, is the most common e-commerce model. It deals in electronic business relationships between businesses—both producers and service providers—and end consumers. Many people like this method of e-commerce because it allows them to shop around for the best prices, read customer reviews, and often find products that they would not otherwise be exposed to in the physical retail world. This e-commerce category also enables businesses to develop a more personalized relationship with their customers.
Anything one buys online as a consumer is part of a B2C transaction. The decision-making process for a B2C purchase is much shorter than for a business-to-business (B2B) purchase, especially for lower-value items, so the sales cycle is shorter. B2C businesses therefore typically spend fewer marketing dollars to make a sale, but they also have a lower average order value and fewer recurring orders than their B2B counterparts. B2C innovators have leveraged technologies such as mobile apps, native advertising and remarketing to market directly to their customers and make their lives easier in the process.
Examples of B2C businesses are everywhere: exclusively online retailers include Newegg, Overstock.com, Wish, and ModCloth, while major brick-and-mortar businesses using the B2C model include Staples, Walmart, Target, REI, and Gap.
The advantages of B2C e-commerce include:
Unlimited marketplace: The marketplace is unlimited, enabling customers to explore and shop at their convenience. Customers can browse for a desired product from home, the office and anywhere else without time restrictions, and products can be purchased from around the world.
It represents the breaking of international barriers, giving people the opportunity to purchase products virtually.
Lower costs of doing business: B2C has reduced several business costs, including employees, purchasing costs, mailing confirmations, phone calls, data entry and the need for physical stores. This has reduced transaction costs for customers.
Easier business administration: It is easier to record store inventory, shipments, logs and overall business transactions compared with traditional methods of business administration, since these calculations now occur automatically. Moreover, real-time updates can be provided, through which any issues can be flagged.
More efficient business relationships: Businesses can build new and improved associations with dealers and suppliers.
Workflow automation: This process enables the shipping of products in a timely manner. Furthermore, it automatically adjusts stock levels and determines location availability. It includes highly reliable security systems, with step-by-step verification and account entry and administration modes to safeguard business transactions. Third-party direct sales are backed by familiar banking and accounting features that enable businesses to reach out to vendors and perform internal business transactions accordingly.
The disadvantages of B2C e-commerce include:
Infrastructure: Even though the internet enables reaching a huge, international pool of customers, many people still do not have internet access.
Competition: Competition is severe. Certain companies have managed to maintain sizeable market shares, giving them a chance to survive in the long run, and new and improved products must be rolled out consistently to secure customers.
Limited product exposure: Despite rewarding customers with ease of access and a unique level of flexibility in choosing products, e-commerce restricts how fully buyers can examine products over the internet.
Most websites do not allow customers to go beyond glamorous product images and descriptions when purchasing a product. This gives consumers the sense that e-commerce offers 'limited product exposure', which is why some products disappoint customers at the time of shipment and are sent back to companies immediately.
Consumer-to-business (C2B):
Consumer-to-business (C2B) e-commerce is when a consumer makes their services or products available for companies to purchase.
The competitive edge of the C2B e-commerce model is in its pricing for goods and services. This approach includes reverse auctions, in which customers name the price for a product or service they wish to buy. Another form of C2B occurs when a consumer provides a business with a fee-based opportunity to market the business's products on the consumer's blog. For instance, food companies may ask food bloggers to include a new product in a recipe and review it for readers of their blogs. YouTube reviews may be incentivized by free products or direct payment. This could also include paid advertisement space on the consumer's website. Google AdWords/AdSense has enabled this kind of relationship by simplifying the process by which bloggers can be paid for ads, and services such as Amazon Affiliates allow website owners to earn money by linking to a product for sale on Amazon. Other examples of C2B include a graphic designer customizing a company logo, or a photographer taking photos for an e-commerce website.
The C2B model has flourished in the internet age because of ready access to consumers who are "plugged in" to brands. Where the business relationship was once strictly one-directional, with companies pushing services and goods to consumers, the new bi-directional network has allowed consumers to become their own businesses. Reductions in the cost of technologies such as video cameras, high-quality printers, and web development services give consumers access to tools for promotion and communication that were once limited to large companies. As a result, both consumers and businesses can benefit from the C2B model.
The disadvantages of C2B transactions are that one must be well-versed in web design to create such a website, and that the amount of money earned is far less than what could be earned by selling the product (such as a mortgage) directly to the consumer.
The advantages of C2B can be expressed through an example: The C2B website thefreemortgagecalculator.com offers a LendingTree advertisement at the top of the page. The advantage of this website is that the owner does not have to sell mortgages, meet with customers, or pay for everyday business operation expenses in order to make money. If the LendingTree advertisement is used by a visitor, the website owner gets paid a commission from LendingTree for the lead.
Consumer to consumer (C2C):
Consumer-to-consumer (C2C), or customer-to-customer, represents a market environment where one customer purchases goods from another customer using a third-party business or platform to facilitate the transaction.
In this case, the third-party platform typically earns its money by charging transaction or listing fees. These businesses benefit from self-propelled growth driven by motivated buyers and sellers, but face a key challenge in quality control and technology maintenance. Another benefit for customers is the competition for products: customers may often find items that are difficult to locate elsewhere. Margins can also be higher than with traditional pricing methods for sellers, as there are minimal costs due to the absence of retailers or wholesalers. Opening a C2C site takes careful planning. Examples of C2C include Craigslist and eBay, which pioneered this model in the early days of the internet.
Generally, transactions in this model occur via online platforms (such as PayPal), but they are often arranged through social-media networks (e.g., Facebook Marketplace) and websites (e.g., Craigslist).
The advantages of C2C include:
Availability: The marketplace is always available, so consumers can shop on demand.
Regular updates: Websites are updated regularly.
Higher profitability: Consumers selling products directly to other consumers can achieve higher profits.
Low transaction cost: Selling via online platforms is much cheaper than the costs incurred in having physical store space.
Direct relationship: Customers can contact sellers directly, without having to go through an intermediary.
The disadvantages of C2C include:
Less secure payment: Payment may be less secure.
Security issues: There can be theft due to scammers falsely impersonating well-known C2C sites.
Lack of quality control of products.
Business to administration (B2A):
Business-to-administration (B2A), also known as business-to-government (B2G), refers to all transactions between companies and public administrations or government agencies. Government agencies use central websites to trade and exchange information with various business organizations. This is an area that involves many services, particularly in areas such as social security, employment, and legal documents.
Businesses that are accustomed to interacting with other businesses or directly with consumers often encounter unexpected hurdles when working with government agencies. Layers of regulation can harm the overall efficiency of the contracting process, and thus governments tend to take more time than private companies to approve and begin work on a given project.
While businesses may find that government contracts involve additional paperwork, time, and vetting, there are advantages to providing goods and services to the public sector. Government contracts are often large and more stable than analogous private-sector work, and a company with a history of successful government contracting usually finds it easier to get the next contract. One example of a B2A model is Accela, a software company that provides government software solutions and public access to government services for permitting, planning, licensing, public health, and so on.
Consumer-to-administration (C2A):
Consumer-to-administration (C2A) e-commerce encompasses all electronic transactions between individuals and public administration. The C2A model lets consumers post queries and request information regarding the public sector directly from their local governments and authorities, providing an easy channel of communication between consumers and government. Examples of C2A include taxes (filing tax returns), health (scheduling an appointment using an online service), and paying tuition for higher education.
**Ponnuki**
Ponnuki:
Ponnuki (ポン抜き, ponnuki, traditional Chinese: 開花; simplified Chinese: 开花; pinyin: kāi huā; Korean: 빵따냄 ppang-ttanaem or 빵때림 ppang-ttaerim; "open flower") is a Japanese term in the game of Go that refers to capturing a single stone, leaving the four capturing stones in a diamond shape. The shape is considered to be very strong, due to its influence in all directions. A Go proverb says: "A ponnuki is worth 30 points".
A diamond shape (4 stones touching the same empty spot) is considered a ponnuki only when constructed by capturing the middle stone.
Depending on the context (other stones on the board), a ponnuki may be strong and thick but inefficient (overconcentrated).
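The shape condition above (an empty point whose four orthogonal neighbours are same-coloured stones) can be sketched as a simple board predicate. This is an illustrative sketch only: the board representation (a dict mapping coordinates to 'B'/'W') and the function name are assumptions, and the check covers only the shape, not whether it actually arose from capturing the middle stone.

```python
def is_diamond(board, point):
    """Return True if `point` is an empty intersection whose four
    orthogonal neighbours are all stones of the same colour -- the
    'diamond' shape left behind by a ponnuki capture.

    `board` maps (x, y) coordinates to 'B' or 'W'; empty (and
    off-board) points are simply absent from the dict. A true
    ponnuki additionally requires that the shape was created by
    capturing the middle stone, which this check cannot see.
    """
    if point in board:
        return False  # the centre must be empty
    x, y = point
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    colours = [board.get(p) for p in neighbours]
    # All four neighbours occupied, and all by the same colour.
    return None not in colours and len(set(colours)) == 1
```

For example, four black stones surrounding the empty point (1, 1) satisfy the predicate, while an empty board or a mixed-colour diamond does not.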
Etymology:
The word ponnuki (ポン抜き) breaks up into pon and nuki: pon is the sound of a cork being pulled from a bottle, while nuki means "taking out" (noun). The expression thus means 'to pop the cork', a poetic reference to taking the stone out of the centre.
**Choledochal cysts**
Choledochal cysts:
Choledochal cysts (a.k.a. bile duct cyst) are congenital conditions involving cystic dilatation of bile ducts. They are uncommon in western countries but not as rare in East Asian nations like Japan and China.
Signs and symptoms:
Most patients present with symptoms in the first year of life. It is rare for symptoms to go undetected until adulthood, and adults usually present with associated complications. The classic triad of intermittent abdominal pain, jaundice, and a right upper quadrant abdominal mass is found in only a minority of patients.
In infants, choledochal cysts usually lead to obstruction of the bile ducts and retention of bile. This leads to jaundice and an enlarged liver. If the obstruction is not relieved, permanent damage may occur to the liver—scarring and cirrhosis—with the signs of portal hypertension (obstruction to the flow of blood through the liver) and ascites (fluid accumulation in the abdomen). There is an increased risk of cancer in the wall of the cyst.
In older individuals, choledochal cysts are more likely to cause abdominal pain, intermittent episodes of jaundice, and occasionally cholangitis (inflammation within the bile ducts caused by the spread of bacteria from the intestine into the bile ducts). Inflammation of the pancreas may also occur. These complications may be related to either abnormal flow of bile within the ducts or the presence of gallstones.
Diagnosis:
Types
Choledochal cysts were classified into five types by Todani in 1977, based on the site of the cyst or dilatation. Types I to IV have been subtyped.
Type I: Most common variety (80-90%) involving saccular or fusiform dilatation of a portion or entire common bile duct (CBD) with normal intrahepatic duct.
Type II: These cysts are present as an isolated diverticulum protruding from the CBD.
Type III or Choledochocele: Arises from dilatation of the duodenal portion of the CBD, where the pancreatic duct joins it.
Type IVa: Characterized by multiple dilatations of the intrahepatic and extrahepatic biliary tree.
Type IVb: Multiple dilatations involving only the extrahepatic bile ducts.
Type V: Cystic dilatation of intrahepatic biliary ducts without extrahepatic duct disease. The presence of multiple saccular or cystic dilations of the intrahepatic ducts is known as Caroli's disease.
Type VI: An isolated cyst of the cystic duct is an extremely rare lesion. Only single case reports are documented in the literature. The most accepted classification system of biliary cysts, the Todani classification, does not include this lesion. Cholecystectomy with cystic duct ligation near the common bile duct is curative.
Treatment:
Choledochal cysts are treated by surgical excision of the cyst, with the formation of a Roux-en-Y hepaticojejunostomy or choledochojejunostomy to the biliary duct. Future complications include cholangitis and a 2% risk of malignancy, which may develop in any part of the biliary tree. A recent article published in the Journal of Surgery suggested that choledochal cysts could also be treated with single-incision laparoscopic hepaticojejunostomy, with comparable results and less scarring. In cases of the saccular type of cyst, excision and placement of a T-shaped tube is performed. Currently, there is no accepted indication for fetal intervention in the management of prenatally suspected choledochal cysts.
**ACIN1**
ACIN1:
Apoptotic chromatin condensation inducer in the nucleus is a protein that in humans is encoded by the ACIN1 gene.
**Loxapine**
Loxapine:
Loxapine, sold under the brand names Loxitane and Adasuve (inhalation only) among others, is a tricyclic antipsychotic medication used primarily in the treatment of schizophrenia. The medicine is a member of the dibenzoxazepine class and structurally very similar to clozapine. Several researchers have argued that loxapine, initially classified as a typical antipsychotic, behaves as an atypical antipsychotic. Loxapine may be metabolized by N-demethylation to amoxapine, a tricyclic antidepressant.
Medical uses:
The US Food and Drug Administration (FDA) has approved loxapine inhalation powder for the acute treatment of agitation associated with schizophrenia or bipolar I disorder in adults.
A brief review of loxapine found no conclusive evidence that it was particularly effective in patients with schizophrenia. A subsequent systematic review considered that the limited evidence did not indicate a clear difference in its effects from other antipsychotics.
Available forms
Loxapine can be taken by mouth. It is also available as an intramuscular injection and as a powder for inhalation.
Side effects:
Loxapine can cause side effects that are generally similar to those of other antipsychotic medications. These include gastrointestinal problems (such as constipation and abdominal pain), cardiovascular problems (such as tachycardia), a moderate likelihood of drowsiness (relative to other antipsychotics), and movement problems (i.e., extrapyramidal symptoms, or EPS). At lower dosages its propensity for causing EPS appears to be similar to that of atypical antipsychotics. Although it is structurally similar to clozapine, it has a much lower risk of agranulocytosis (which, even with clozapine, occurs in only 0.8% of patients); however, mild and temporary fluctuations in blood leukocyte levels can occur. Abuse of loxapine has been reported.
The inhaled formulation of loxapine carries a low risk of a type of airway adverse reaction called bronchospasm, which is not thought to occur when loxapine is taken by mouth.
Pharmacology:
Mechanism of action
Some scientists regard loxapine as a "mid-potency" typical antipsychotic. However, unlike most other typical antipsychotics, it has significant potency at the 5HT2A receptor (6.6 nM), which is similar to atypical antipsychotics like clozapine (5.35 nM). The higher likelihood of EPS with loxapine, compared to clozapine, may be due to its higher affinity for the D2 receptor; clozapine has one of the lowest D2 binding affinities of any antipsychotic.
Pharmacokinetics
Loxapine is metabolized to amoxapine, as well as to its 8-hydroxy metabolite (8-hydroxyloxapine). Amoxapine is further metabolized to its own 8-hydroxy metabolite (8-hydroxyamoxapine), which is also found in the blood of people taking loxapine. At steady state after taking loxapine by mouth, the relative amounts of loxapine and its metabolites in the blood are as follows: 8-hydroxyloxapine > 8-hydroxyamoxapine > loxapine.
The pharmacokinetics of loxapine change depending on how it is given. Intramuscular injections of loxapine lead to higher blood levels and a larger area under the curve than when it is taken by mouth.
Chemistry:
Loxapine is a dibenzoxazepine and is structurally very similar to clozapine, an atypical antipsychotic.
**Hydrothermal circulation**
Hydrothermal circulation:
Hydrothermal circulation in its most general sense is the circulation of hot water (Ancient Greek ὕδωρ, water, and θέρμη, heat). Hydrothermal circulation occurs most often in the vicinity of sources of heat within the Earth's crust. In general, this occurs near volcanic activity, but it can also occur in the shallow to mid crust along deeply penetrating fault irregularities, or in the deep crust in relation to the intrusion of granite, or as the result of orogeny or metamorphism. Hydrothermal circulation often results in hydrothermal mineral deposits.
Seafloor hydrothermal circulation:
Hydrothermal circulation in the oceans is the passage of the water through mid-oceanic ridge systems.
The term includes both the circulation of the well-known, high-temperature vent waters near the ridge crests, and the much-lower-temperature, diffuse flow of water through sediments and buried basalts further from the ridge crests. The former circulation type is sometimes termed "active", and the latter "passive". In both cases, the principle is the same: Cold, dense seawater sinks into the basalt of the seafloor and is heated at depth whereupon it rises back to the rock-ocean water interface due to its lesser density. The heat source for the active vents is the newly formed basalt, and, for the highest temperature vents, the underlying magma chamber. The heat source for the passive vents is the still-cooling older basalts. Heat flow studies of the seafloor suggest that basalts within the oceanic crust take millions of years to completely cool as they continue to support passive hydrothermal circulation systems.
Hydrothermal vents are locations on the seafloor where hydrothermal fluids mix into the overlying ocean. Perhaps the best-known vent forms are the naturally occurring chimneys referred to as black smokers.
Volcanic and magma related hydrothermal circulation:
Hydrothermal circulation is not limited to ocean ridge environments. Hydrothermal convection cells can exist in any place where an anomalous source of heat, such as an intruding magma or volcanic vent, comes into contact with the groundwater system and permeability allows flow. This convection can manifest as hydrothermal explosions, geysers, and hot springs, although this is not always the case. Hydrothermal circulation above magma bodies has been intensively studied in the context of geothermal projects, where many deep wells are drilled into the system to produce and subsequently re-inject the hydrothermal fluids. The detailed data sets available from this work show the long-term persistence of these systems and the development of fluid circulation patterns, with histories that can be influenced by renewed magmatism, fault movement, or changes associated with hydrothermal brecciation and eruption, sometimes followed by massive cold-water invasion. Less direct but equally intensive study has focused on the minerals deposited, especially in the upper parts of hydrothermal circulation systems.
Understanding volcanic and magma-related hydrothermal circulation means studying hydrothermal explosions, geysers, hot springs, and other related systems and their interactions with associated surface water and groundwater bodies. A good environment to observe this phenomenon is in volcanogenic lakes, where hot springs and geysers are commonly present. The convection systems in these lakes work through cold lake water percolating downward through the permeable lake bed, mixing with groundwater heated by magma or residual heat, and rising to form thermal springs at discharge points.
The existence of hydrothermal convection cells and hot springs or geysers in these environments depends not only on the presence of a colder water body and geothermal heat, but also strongly on a no-flow boundary at the water table. These systems can develop their own boundaries: for example, the water level represents a fluid pressure condition that leads to gas exsolution or boiling, which in turn causes intense mineralization that can seal cracks.
Deep crust:
Hydrothermal also refers to the transport and circulation of water within the deep crust, in general from areas of hot rocks to areas of cooler rocks. The causes of this convection can be:
Intrusion of magma into the crust
Radioactive heat generated by cooled masses of granite
Heat from the mantle
Hydraulic head from mountain ranges, for example, the Great Artesian Basin
Dewatering of metamorphic rocks, which liberates water
Dewatering of deeply buried sediments
Hydrothermal circulation, in particular in the deep crust, is a primary cause of mineral deposit formation and a cornerstone of most theories on ore genesis.
Hydrothermal ore deposits
During the early 1900s, various geologists worked to classify hydrothermal ore deposits that they assumed formed from upward-flowing aqueous solutions. Waldemar Lindgren (1860–1939) developed a classification based on interpreted decreasing temperature and pressure conditions of the depositing fluid. His terms "hypothermal", "mesothermal", "epithermal" and "teleothermal" expressed decreasing temperature and increasing distance from a deep source. Recent studies retain only the epithermal label. John Guilbert's 1985 revision of Lindgren's system for hydrothermal deposits includes the following:
Ascending hydrothermal fluids, magmatic or meteoric water:
Porphyry copper and other deposits, 200–800 °C, moderate pressure
Igneous metamorphic, 300–800 °C, low to moderate pressure
Cordilleran veins, intermediate to shallow depths
Epithermal, shallow to intermediate, 50–300 °C, low pressure
Circulating heated meteoric solutions:
Mississippi Valley-type deposits, 25–200 °C, low pressure
Western US uranium, 25–75 °C, low pressure
Circulating heated seawater:
Oceanic ridge deposits, 25–300 °C, low pressure
**Rowing cycle**
Rowing cycle:
A rowing cycle is a wheeled vehicle propelled by a rowing motion of the body. Steering, braking, and shifting are usually done by the handlebars. Feet are on symmetrical foot rests, as opposed to rotating pedals. Unlike many rowing boats, the rider faces forward. Rowing cycles exist in numerous designs, particularly with respect to frames and drive mechanisms. Commercial production numbers for rowing cycles are small compared to that of standard bicycles.
History:
The use of a rowing-like action to propel a land vehicle probably goes back to the 1870s, when George W. Lee used a sliding seat in a tricycle. Roadsculler races were held in Madison Square Garden in the 1880s. A toy catalog from FAO Schwarz in 1911 advertised a four-wheeled "Row-Cycle" for children, operated using two levers in a standing position and steered with the feet. In the 1920s, Manfred Curry in Germany designed and constructed the Landskiff ("land boat"), a four-wheeled vehicle that would become known as a Rowmobile in English-speaking countries. A newsreel from 1937 shows a rowed bicycle that is very similar to today's Craftsbury SS rowing bicycle, Rowbike and VogaBike.
Propulsion and steering:
Some rowed vehicles use a stroke similar to a boat, in that force is used only when straightening the body, the drive portion of the stroke, not the recovery. Other rowed vehicles, mostly those that use linkages and crankshafts in their drive trains, use force in both straightening and bending the body. On most, the handlebars move; most also have moving footrests and some have a moving seat.
The handlebars on some rowed vehicles travel on a semicircular path because they are mounted on a fixed-length lever pinned to the frame. Some designs attempt to simulate the more level stroke used in rowing a boat, for example the Streetrower and Vogabike. The September 2007 issue of Velovision magazine claimed that the Streetrower has "the most natural rowing action of any rowing vehicle to date". The Streetrower uses a steering system actuated by servos and controlled by the rider with a joystick.
Drive train:
Rowed vehicles generally have one of three drive trains: chain, linkages, or cable.
The Rowbike brand uses a standard bicycle chain, rear gears, and derailleur. The chain does not travel in a loop, as is the case with a standard bicycle; it moves back and forth over the rear cog in a reciprocating motion. The chain is connected at one end to the frame of the Rowbike and to a bungee cord at the other. As the rower pulls back, the chain engages the rear cog and the bungee cord is extended; when the rower returns forward, the bungee cord contracts, pulling the chain back and ensuring there is no slack in the chain. All Rowbikes have a rear derailleur, even single-speed models, due to the need to keep proper tension in the chain.
Rowing cycles that use linkages include the Champiot and Powerpumper. They use linkages connected to a crankshaft, similar to a pedal car.
The Thys Rowingbike and Streetrower use a cable which coils and uncoils about a spiral-shaped spool. Thys calls his version a snek drive, snek being the Dutch term for a fusee (see Fusee (horology)).
Tandem, three and four wheeled variants:
Balancing on a two-wheeled rowed vehicle while rowing requires some practice, even for a skilled bicyclist. Tricycle and quadracycle forms are usually heavier but do not share this problem. The Streetrower is a tricycle with two wheels at the front and one at the rear; the Vogatrike also has three wheels. An early quadracycle, the 'Irish Mail', was similar to railroad handcars used by railroad workers to inspect tracks. The four-wheeled Champiot is reminiscent of the 'Irish Mail' type machine in that it uses linkages, not a chain, in its drive train.
Thys has produced a tandem rowingbike.
**Conservation and restoration of vinyl discs**
Conservation and restoration of vinyl discs:
The conservation and restoration of vinyl discs refers to the preventive measures taken to defend against damage and slow degradation, and to maintain the fidelity of singles, 12" singles, EPs, and LPs in 45 or 33⅓ rpm 10" disc recordings. Vinyl LP preservation is generally considered separate from conservation, which refers to the repair and stabilization of individual discs. Commonly practiced in major sound archives and research libraries that house large collections of audio recordings, it is also frequently followed by audiophiles and home record collectors. Because vinyl—a virtually unbreakable light plastic made of polyvinyl chloride-acetate copolymer, or PVC—is considered the most stable of analog recording media, it is seen as less of a concern for deterioration than earlier sound recordings made from more fragile materials such as acetate, vulcanite, or shellac. This hardly means that vinyl recordings are indestructible, however, and research—both expert and evidential—has shown that the way in which discs are handled and cared for can have a profound effect on their longevity. Though some 45s (7"s) are also made from vinyl, many of them are actually polystyrene—a more fragile medium that is prone to fracturing from internal stress. Still, many of the recommendations for the care of vinyl LPs can be applied to 45s.
Historical development and standards:
In 1959—roughly a decade after vinyl LPs first became widely available to consumers—the Library of Congress published Preservation of Sound Recordings (A.G. Pickett and M.M. Lemcoe), the first and most extensive investigation of the deterioration of grooved discs and magnetic tape. Funded by a grant from the Rockefeller Foundation, the investigation was intended to establish suitable guidelines for the storage and preservation of sound recordings in libraries. Conducted at the Southwest Research Institute of San Antonio, the study involved subjecting sound recordings to a series of lab tests, from accelerated aging to fungal exposure. Though considered the definitive study in the field, the report focused primarily on the chemical makeup of plastics and how they perform under stress, whereas playback deterioration—a significant concern to sound archivists and record collectors—was excluded from the investigation.
The Preservation and Restoration of Sound Recordings (Jerry McWilliams), published in 1979 by the American Association of State and Local History, did include information about disc wear through playback, and is still a practical source of information on sound recording preservation. A comprehensive manual based on reports gathered from library professionals, sound archivists, audio engineers, and other experts, it included information on such topics as disc damage from frequency of use, stylus wear, and inferior or improperly adjusted equipment.
In 1986 the Association for Recorded Sound Collections (ARSC) Associated Audio Archives (AAA) Committee received a grant from the National Endowment for the Humanities to conduct an in-depth study to identify the problems of preservation and access for sound recordings. Their 860-page report, titled Audio Preservation: A Planning Study, was published in 1988.
Since the shift from analog to digital recording, research on the preservation of sound recordings has been in serious decline. Gerald L. Gibson, the head of the Motion Picture, Broadcasting, and Recorded Sound Division of the Library of Congress, expressed his concern on this issue in 1991, referencing an investigation into the effects of fire on sound and audiovisual recordings as some of the only new research being done on the topic: "Comparatively little is known about the preservation, conservation, aging problems, or properties of sound recordings…virtually no independent work is going on in these areas." (Gerald L. Gibson, Head of the Motion Picture, Broadcasting, and Recorded Sound Division of the Library of Congress, 1991)
Though guidelines and recommendations for the care, handling, and proper storage of vinyl LPs are available from resources such as the Library of Congress and the National Library of Canada, to this date there are no nationally agreed-upon standards for audio preservation. In January 2007, a five-page letter was sent to the National Recording Preservation Board at the Library of Congress on behalf of the Association of Research Libraries (ARL) in support of a study on the current state of recorded sound preservation in the United States, stating that "the lack of agreed upon standards and commonly accepted best practices presents a major barrier to effective audio preservation." (Prudence S. Adler, Associate Executive Director, and Karla L. Hahn, Director, Office of Scholarly Communication, Association of Research Libraries, Jan. 2007)
Recommendations:
Though recommendations for LP preservation differ among professionals, the majority are in agreement on some basic guidelines: discs need to be kept clean, stored in such a way as to prevent distortion, and maintained in a stable, climate-controlled environment. Routine maintenance of turntable equipment, including regular inspection of the weight, tracking, and condition of the stylus, is also advised.
Recommendations:
Cleaning Though proper methods are debated, cleaning is extremely important in maintaining the fidelity of LPs. As Gibson stated, "As with most things in the field, there is very little certainty regarding cleaning. What is known is based on trial-and-error, not upon controlled, scientific study…however, one thing is certain: playing a dirty recording, regardless of its format, is one of the most damaging things you can do to it." (Gibson, 1991) It is imperative that LPs be kept free from foreign matter deposits. Oils from fingerprints, adhesives, and soot are damaging, as are air pollutants like cigarette smoke. Even grease from cooking can deposit itself on LPs. Probably the number one contributor to damage, however, is ordinary household dust. Dust can become permanently embedded in the disc's grooves, causing distortion of the transmitted signal, ticks, pops, and inferior sound quality. Vinyl discs can become so dirty and scratched that they are virtually unlistenable.
Recommendations:
It is recommended that discs be cleaned before and after each playback; carbon-fibre brushes are quite effective. Records should be cleaned in a circular motion, in the direction of the grooves. Distilled water (not tap water, which will leave behind mineral deposits) and a soft, lint-free cloth are a common method of cleaning. Another method is to clean the LP on the turntable with a disc cleaning brush (the Discwasher system is frequently recommended by the audio press). A simple "cleaning bath" device called the Spin Clean gives good results, and there are also vacuum machines on the market such as the Nitty Gritty, Keith Monks, Clearaudio, and VPI, which are recommended for a more thorough cleaning. In recent years, ultrasonic cleaning machines from manufacturers such as Klaudio (Korea) and Audio Desk Systeme (Germany) have also been used with great success. The effectiveness of the ultrasonic machines, coupled with their premium price tags (both US$4,000 in January 2015), has opened the door for companies to offer professional ultrasonic cleaning at an affordable cost of just a few dollars per record. Another cleaning product recently released, called Record Revirginizer, uses a polymer that is applied to the record surface and left to dry; the polymer is then peeled from the surface, taking the microscopic contaminants with it. Though in the past using alcohol on vinyl LPs was considered safe, experts now caution against it unless absolutely necessary, as alcohol risks leaching out the plasticizer or stabilizer. As vinyl is often prone to electrostatic charges that attract dust and debris to its surface, anti-static products can be used if needed.
Recommendations:
Other recommendations for the care, handling, and storage of LPs include the following: Handling When possible, use clean, white, lint-free gloves for handling.
Handle by edge and label areas only, with the third and fourth fingers balancing the label and the thumb supporting the rim.
Remove from jacket by bowing the jacket open and holding it against the body and letting the LP with its inner sleeve slide out gently (following the same method for removing the inner sleeve).
Do not expose to air or light unnecessarily. Return LPs to their jackets immediately after playback.
Storing Store exactly vertically to prevent warping. Spacers are recommended every four to six inches.
Store LPs with other LPs. Avoid mixing with other sizes such as 10" and 7" discs. Never use bookends.
Store on metal shelves (as opposed to wood, which expands and contracts).
Do not allow LPs to hang over the edge of shelves.
Remove shrink wrap from dust jackets immediately after acquiring.
Use polyethylene inner sleeves. Never use PVC sleeves, as their chemical makeup is too similar to that of vinyl and they may leave imprints on or fuse to the LP. Replace paper sleeves, as paper deteriorates, leaving oil and paper residue.
Store in-use LPs at a temperature of 65 to 70 °F (18 to 21 °C). Those in long-term storage should be kept at 45 to 50 °F (7 to 10 °C). Though relative humidity (RH) is considered less an issue for vinyl than other recorded media, it is recommended that LPs be stored at 45–50% RH.
Playback equipment The stylus tip should be kept clean at all times. A soft, camelhair brush is recommended with a drop of Discwasher solution. Only clean from back to front.
The stylus should be periodically inspected as it is gradually worn by use. Never play LPs with a worn stylus.
Maintain proper tracking force. If too high, the stylus will bear down on the groove walls of the LP; if too low, the stylus will bounce in the groove.
Recommendations:
Reformatting As vinyl recordings are known to degrade after repeated playback, preservation copying is recommended for archival purposes. This is especially true for rare recordings or those that have special value. A general guideline is to digitise the recording using the appropriate stylus, tracking weight, equalisation curve and other playback parameters and use high-quality analogue-to-digital converters. A service copy of the recording can then be created (on CD or other format) from the preservation master. A second option is to create three copies, the second copy acting as a duplicating master and the third for public use.
**Fern sports**
Fern sports:
Fern sports are plants that show a marked change from the normal type or parent stock as a result of mutation. The term morphotype is also used for any of a group of different types of individuals of the same species in a population. Fronds of fern sports are typically altered in several ways, such as division of the frond apex and similar duplication of the pinnae.
Occurrence:
Soft Shield Fern Polystichum setiferum, Lady Fern Athyrium filix-femina and Hart's Tongue Fern Asplenium scolopendrium are known to have around three hundred varieties or sports. Scaly Male Fern Dryopteris affinis and Male Fern Dryopteris filix-mas have a number of commercially available and naturally occurring sports or subspecies. Examples are D. affinis polydactyla Dadds, A. filix-femina plumosum, A. filix-femina corymbiferum, and D. filix-mas Barnesii.
Characteristics:
The frond of a sport may be branched at the tip and at the tips of the pinnae, the colour may vary, and variegation may occur; fronds generally remain bilaterally symmetrical. Fern sports remain normal in certain respects, such as viability, with sori and indusia appearing normal. The frond stipe may be a different colour.
Characteristics:
Misidentification Galls on ferns and other physical damage to fern fronds can be mistaken for sports; however, such damage is usually asymmetric, ferns generally being bilaterally symmetrical. In Athyrium and Dryopteris species, white maggots of Chirosia betuleti create mop-head galls on fern frond tips that look somewhat like fern sports, but this is physical damage and not a growth form.
Rarity:
Fern sports particularly suffered during the Victorian-era Pteridomania ('Fern-Fever') craze, when over-collection of fern species extended to unusual fern varieties.
**LeDock**
LeDock:
LeDock is a proprietary flexible molecular docking program designed for docking ligands with protein targets. It is available for Linux, macOS, and Windows. It can be run as a standalone application or entirely from a Jupyter notebook. It supports only the Tripos Mol2 file format.
Introduction:
Methodology: LeDock employs a simulated annealing and genetic algorithm approach to facilitate the docking process of ligands with protein targets. The software utilizes a knowledge-based scoring scheme that has been derived from extensive prospective virtual screening campaigns. It is categorized as using a flexible docking method.
Performance:
Performance: In a comprehensive study involving 2002 protein-ligand complexes, LeDock demonstrated a notable level of accuracy in predicting molecular poses. Moreover, the Linux version offers command-line tools to run high-throughput virtual screening of large molecular libraries in the cloud.

In a computational study screening for inhibitors of Mycobacterium tuberculosis DNA gyrase B, LeDock performed better than AutoDock Vina at reproducing experimental binding affinity data. When benchmarked on a set of 140 known gyrase inhibitors, the predicted binding energies from LeDock docking experiments showed significantly higher correlation to experimental inhibition constant (pKi) values than those from Vina. Because docking software efficacy varies by target site, the authors advise running experimental benchmarks when choosing docking software.

A 2017 review evaluated the accuracy of different docking software on a diverse set of protein-ligand complexes. LeDock was able to effectively sample ligand conformational space and identify near-native binding poses for a significant proportion of the test cases. Its flexible docking protocol was cited as a key factor in its accuracy.
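As an illustration of the benchmarking approach described above, the following sketch computes the Pearson correlation between docking scores and experimental pKi values. All numbers are invented placeholders, not data from the cited studies; the point is only to show how "higher correlation to pKi" would be quantified.

```python
# Sketch of a docking-benchmark correlation check. Docking scores are binding
# energies (more negative = stronger predicted binding), so a well-performing
# program should give a strongly NEGATIVE Pearson r against pKi.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical experimental pKi values for six inhibitors...
pki = [4.2, 5.1, 5.8, 6.4, 7.0, 7.9]
# ...and hypothetical predicted binding energies from two docking programs.
scores_a = [-5.9, -6.6, -7.2, -7.7, -8.4, -9.1]  # tracks pKi closely
scores_b = [-6.5, -6.1, -7.9, -6.8, -7.3, -8.0]  # tracks pKi loosely

r_a = pearson_r(pki, scores_a)
r_b = pearson_r(pki, scores_b)
print(f"program A: r = {r_a:.3f}")  # near -1: good ranking of inhibitors
print(f"program B: r = {r_b:.3f}")  # weaker (less negative) correlation
```

In a real benchmark the score lists would come from docking runs against the known inhibitor set, and the program with the stronger (more negative) correlation would be preferred for that target.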
**Field kitchen**
Field kitchen:
A field kitchen (also known as a battlefield kitchen, expeditionary kitchen, flying kitchen, or goulash cannon) is a kitchen used primarily by militaries to provide hot food to troops near the front line or in temporary encampments. Designed to be easily and quickly moved, they are usually mobile kitchens or mobile canteens, though static and tent-based field kitchens exist and are widely used.
History:
The first field kitchens were carried in four-wheeled wagons (such as chuckwagons) by military units on campaigns throughout history, often as part of larger wagon trains, and were used as late as the 19th century. By the 20th century, smaller two-wheeled trailers became common, especially with the invention of the locomotive.
History:
Karl Rudolf Fissler of Idar-Oberstein invented a mobile field kitchen in 1892 that the Germans came to refer to as a Gulaschkanone ("goulash cannon"), because the chimney of the stove resembled ordnance pieces when disassembled and limbered for towing. As technology advanced, larger trailers evolved as horses were phased out in favor of motorized vehicles more capable of towing heavier loads. In World War II, the mobile canteen was used as a morale booster in the United Kingdom, fitting in with the culture of the tea break and in particular as a result of the successful wartime experiment of the tea lady on productivity and morale. The larger mobile kitchens (now commonly called "flying kitchens" because of the greater speed with which they can be deployed) can service entire battalions of troops.
History:
In the present, most field kitchens are either mobile canteens or deployable field kitchens. Many of these have facilities similar to actual kitchen facilities, and may be designed to serve either fresh meals or hot food rations intended to be prepared in a field kitchen.
Types:
Trailer kitchen A trailer kitchen, rolling kitchen, or chow wagon is a field kitchen that is or can be pulled by a vehicle, pack animal, or person in the form of a cart, wagon, or trailer. They typically have two or four wheels and may be a single unit or two separate units connected to each other. Such trailers may have wheels with the intent that they are pulled to their destinations, or they may be assembled at their destination with the wheels only intended to make it easier to move around if needed. Most trailer kitchens are open-air, though some vehicle-towed trailer kitchens may be enclosed.
Types:
Assault kitchen An assault kitchen or vehicle kitchen is a field kitchen that is installed in a vehicle. It is usually in the rear compartment of the vehicle, which may be a military light utility vehicle, a van, or a truck. They may function similar to a commercial food truck, or they may simply be a kitchen in the back of a vehicle without dedicated serving functions. Assault kitchens allow for meals to be prepared while moving and without the need to wait for a field kitchen to be set up, allowing for the quick preparation and serving of hot meals to troops and, if necessary, a quick extraction of the kitchen and food supplies from a dangerous area.
Types:
Deployable kitchen A deployable kitchen or camp kitchen is a field kitchen that is deployed as a static structure. Though they are not necessarily mobile kitchens, they are designed to be unpacked, assembled, and repacked with relative haste. They may be as small as a set of outdoor cooking equipment that can take only a few minutes to set up; as basic as folding tables with a portable kitchen range, ration heating unit, and food containers; or as large as a tent-based kitchen with a full set of appliances that may take up to an hour to fully set up for food preparation.
Types:
Containerized kitchen A containerized kitchen, modular kitchen, or configurable kitchen is a field kitchen that is enclosed within, or in a similar configuration to, a freight container, typically a shipping container or semi-trailer. They are very similar to deployable kitchens, but larger, usually not assembled by hand, and intended to feed more individuals or prepare more types of food than what is possible with other types of field kitchens. They are typically modular buildings that can be expanded if necessary.
Types:
Other facilities Some modern militaries use mobile facilities that are not field kitchens, but supplement them or are components of them, such as large tents for dining halls. The U.S. Defense Logistics Agency lists several such facilities used by the United States Armed Forces, including the Multi-Temperature Refrigerated Container System, a containerized freezer; the Food Sanitation Center, a dedicated dishwashing tent; and the Containerized Ice Making System, a containerized icemaker designed to mass-produce ice.
Non-military use:
Field kitchens are also used in non-military or non-combat roles. Field kitchens are deployed by militaries or aid agencies to feed groups of refugees, displaced persons, or first responders as part of humanitarian aid, disaster response, and emergency management operations. Field kitchens are also sometimes set up for historical reenactments, preferably with genuine field kitchen appliances or newer reproductions, though modern equivalents are sometimes used, especially if the field kitchen appliances fail.

Civilian versions of field kitchens have also been set up at events where dedicated food service facilities are unavailable, such as at protests; for example, several were set up in Maidan Nezalezhnosti during Euromaidan, and one was set up in Confederation Square during the Canada convoy protests.
**Diborane(4)**
Diborane(4):
Diborane(4) is a transient inorganic compound with the chemical formula B2H4. Stable derivatives are known. Diborane(4) has been produced by abstraction of two hydrogen atoms from diborane(6) using atomic fluorine, and detected by photoionization mass spectrometry. Computational studies predict a structure in which two hydrogen atoms bridge the two boron atoms via three-centre two-electron bonds, in addition to the two-centre two-electron bond between the two boron atoms and one terminal hydrogen atom bonded to each boron atom. Several stable derivatives of diborane(4) have been reported.
**Goodyear welt**
Goodyear welt:
A Goodyear welt is a strip of leather, rubber, or plastic that runs along the perimeter of a shoe outsole. The basic principle behind the Goodyear welt machine was invented in 1862 by August Destouy who designed a machine with a curved needle to stitch turned shoes. The machine was then improved in 1869 and later by Destouy and, more importantly, Daniel Mills, an English mechanic, both employed by Charles Goodyear Jr., the son of Charles Goodyear. It has been noted by historians that Goodyear was a frequent visitor to the shoe factory of William J. Dudley, founder of Johnston & Murphy, where early work on sole stitching equipment was performed.
Construction:
"Goodyear welt construction" involves stitching a welt to the upper and to a preformed canvas "rib" that runs all around the bottom of the insole, to which it is cemented (a technique known as "gemming"); this serves as an attach-point for the outsole or midsole (depending on the Goodyear welt variant). The space enclosed by the welt is then filled with cork or some other filler material such as foam (usually either porous or perforated, for breathability and cushioning), and the outsole is both cemented and stitched to the welt. Shoes with other types of construction may also have welts.
Process:
The Goodyear welt process is a machine-based alternative to the traditional hand-welted method (c. 1500) for the manufacture of men's shoes, allowing them to be resoled repeatedly.
Process:
The upper part of the shoe is shaped over the last and fastened on by sewing a leather, linen or synthetic strip (also known as the "welt") to the inner and upper sole. As well as using a welt, stitching holds the material firmly together. The welt forms a cavity which is then filled with a cork material. The final part of the shoe is the sole, which is attached to the welt by some combination of stitching and a high-strength adhesive like contact cement or hide glue. The result is highly valued for being relatively waterproof, by minimizing water penetration into the insole, and for the relative ease of resoling as long as the upper remains viable. Welted shoes are more expensive to manufacture than those mass-produced by automated machinery with molded soles.
**Autosomal dominant leukodystrophy with autonomic disease**
Autosomal dominant leukodystrophy with autonomic disease:
Autosomal dominant leukodystrophy with autonomic disease is a rare neurological condition of genetic origin which is characterized by gradual demyelination of the central nervous system which results in various impairments, including ataxia, mild cognitive disability and autonomic dysfunction. It is part of a group of disorders called "leukodystrophies".
Signs and symptoms:
Unlike other leukodystrophy syndromes, whose typical age of onset is during childhood, individuals with this condition typically start showing symptoms between their early 40s and late 50s; once symptoms appear, they slowly progress in severity and new symptoms start appearing.

These symptoms first start out as dysfunctions of the autonomic nervous system, which result in symptoms such as abnormal functioning of both the bladder and bowel, recurrent blood pressure drops whenever patients stand up, and male erectile dysfunction. Rarely, anhidrosis might also occur alongside these symptoms.

After these symptoms start, movement impairments develop; they begin in the legs but then progress to the arms and the face. These impairments include either muscular spasticity or weakness, intention tremors, ataxia, dysmetria, and dysdiadochokinesis. In some individuals, progressive dementia is present.
Complications:
There are various complications associated with the symptoms that ADLD causes.
Due to the ataxia and its associated coordination impairments, people might have difficulty with movements such as walking by themselves.
Treatment:
Treatment is focused on the symptoms themselves. The ataxic movement impairments can be treated with walking support systems such as canes or wheelchairs, physical therapy, and speech therapy.
Diagnosis:
This condition is diagnosed mainly through MRIs and genetic testing of the LMNB1 gene and the areas surrounding it, although symptom examination is also important for the diagnosis.
Causes:
This condition is caused by a duplication of the LMNB1 gene. This gene takes part in the production of the lamin B1 protein, which is essential for determining the shape of the nucleus within cells, the replication of DNA, and the way certain genes express themselves.

When the gene is duplicated (as seen in patients with ADLD), there is an excess of lamin B1 protein; this leads to the underexpression of genes that are important for the production of myelin and an increased hardening of the nuclear envelope, which results in a progressive reduction of myelin production and maintenance as one ages.

As the name of the condition implies, it is inherited following an autosomal dominant pattern, which means that only one copy of a certain mutation (in this case, the duplication of the LMNB1 gene) is needed for a trait or disorder to be expressed; in familial cases, offspring have a 1 in 2, or 50%, chance of inheriting a copy of said mutation from an affected parent.

Very rarely, this disorder can instead be caused by deletions near the LMNB1 gene; only one such family has been described in the medical literature, with a deletion upstream of the same gene.
Pathophysiology:
In patients with the condition, demyelination (that is, a loss of myelin) starts occurring in both the brain and the spinal cord years before symptoms show up; this abnormality has been identified as a contributing factor in the development of the first symptoms individuals with this condition show during its early stages.
Phenotype-genotype:
In a 2018 study done by Naomi Mezaki and 18 other colleagues, it was found that ADLD patients with a deletion near the LMNB1 gene (2 patients from a single family) started showing symptoms at an earlier age, had less autonomic dysfunctions and had more noticeable cognitive deficits than other ADLD patients with duplication of the LMNB1 gene (4 patients from 3 families).
Prognosis:
This condition is progressive and fatal. While the quality of life might be improved with treatment, the life expectancy cannot easily be improved: individuals diagnosed with ADLD typically live for another 10 to 20 years after their diagnosis.
Prevalence:
At least 70 cases from 35 families around the world have been described in medical literature, most of these were from families of Caucasian descent.
History:
This condition was first described in 1964 by E. Zerbin-Rüdin et al., who reported what they thought to be a familial autosomal dominant variant of Pelizaeus-Merzbacher disease with onset in adulthood.

In 2006, Padiath et al. found the LMNB1 duplication involved in ADLD in 4 families, of which 1 had previously been described in the medical literature. Haplotype studies revealed that this family and another Irish-American family shared a common ancestor. The lamin B1 protein was found to be overexpressed in brain tissue of family members affected with ADLD.
**History of virtual learning environments in the 1990s**
History of virtual learning environments in the 1990s:
In the history of virtual learning environments, the 1990s was a time of growth, primarily due to the advent of the affordable computer and of the Internet.
1980s:
1985 The Free Educational Mail (FrEdMail) network was created by San Diego educators, Al Rogers and Yvonne Marie Andres, in 1985. More than 150 schools and school districts were using the network for free international email access and curriculum services.
1990s:
1990 Formal Systems Inc. of Princeton, NJ, USA introduces a DOS-based Assessment Management System. An internet version was introduced in 1997. (In 2000, Formal Systems changed its name to Pedagogue Solutions.)
1990s:
The Athena Project at MIT, which started in 1983, has evolved into a system of "shared services" that look remarkably like many current VLEs or learning management systems. The network hosted software from multiple vendors, and made it all work together. Here is a list of the features of the system as of 1990: printing, electronic mail, electronic messaging (Zephyr), bulletin board conferencing (Discuss), on-line consulting (OLC), on-line teaching assistant (OLTA), on-line help (OLH), assignment exchange (Turn in/pick up), access to system libraries, authentication for system security (Kerberos), naming for linking system components together (Hesiod), and a service management system (Moira).
1990s:
Pavel Curtis created LambdaMOO, an early Multi-User Dungeon (MUD), at Xerox PARC.
1990s:
HyperCourseware, created by Kent Norman at the University of Maryland, College Park, was originally written for use in the AT&T Teaching Theater, a prototype electronic classroom. The original version was written in WinPlus, a HyperCard-like program, and ran on a local area network with one server and numerous client workstations. It included an online syllabus, online lecture notes and readings, synchronous chat rooms, asynchronous discussion boards, online student profiles with pictures, online assignments and exams, online grading, and a dynamic seating chart. A Web-based version was introduced in January 1996, which has continued to function up to the present.
1990s:
The US Navy's Naval Technical Training System was designed as a curriculum development system. It included course management tools for the storage, retrieval and dissemination of information.
An article in Electronic Learning by Therese Mageau describes Integrated Learning Systems (ILS) as "networked computers running broad-based curriculum software with a management system that tracks student progress." A report by George Mann and Joe Kitchens reviews the Curriculum Management System (CMS), a system that generated individualized learning plans every two weeks for each student.
FirstClass is launched by SoftArc, initially for the Macintosh platform.
1990s:
1991 Thousands of FrEdMail users gained access to the NSFNET via newly established gateways at two NSFNET mid-level network locations: Merit/MichNet in Ann Arbor, MI, and CERFnet (California Education and Research Federation Network) in San Diego, CA. FrEdMail subscribers began to exchange project-based learning electronic mail with the entire Internet community. The FrEdMail-NSFNET Gateway Software was available free of cost to any mid-level network, college, or university which had an interest in collaborating with local K-12 school districts to bring electronic networking to teachers and students. Through FrEdMail, educators were able to share classroom experiences, distribute curriculum ideas and teaching materials, as well as obtain information about workshops, job opportunities, and legislation affecting education. At its peak, FrEdMail was used by 12,000 schools and 350 nodes worldwide. When the World Wide Web became available to the public in 1993, the FrEdMail Foundation became the Global SchoolNet Foundation and launched its first website, GlobalSchoolhouse.org. The following year the National Science Foundation also awarded Global SchoolNet a grant to introduce a desktop video-conferencing program called CU-SeeMe. CU-SeeMe was used for many educational video-conferences and in 1995 by World News Now for the first television broadcast live on the Internet, which featured an interview by World News Now anchor Kevin Newman and Yvonne Andrés.
1990s:
iEARN (International Education and Resource Network) launched among schools in nine countries, using the IGC/APC system of "conferences/newsgroups" to better enable students to conduct theme-based online projects.
The history page of the TEDS company states that they developed the first Learning Management System.
1990s:
Jakob Ziv-El of Interactive Communication Systems, Inc. files for a patent for an Interactive Group Communication System (# 5,263,869) (similar to the prior art of the IBM 1500 system). A 1990 foreign patent and a 1972 patent by Jakob Zawels (# 3,641,685) are referenced. The patent is granted in 1993. The patent is referenced in a 2000 patent filing (# 6,988,138) by representatives of BlackBoard, Inc.
1990s:
Sydney, Australia, based Webster & Associates release the first of several graphical course-based systems with a Learning Management System included. Courses include logins, course structure, recording of results, reporting, etc., with the ability to store and retrieve results remotely. This system could be, and was, run as a client-server application.
1990s:
Murray Turoff, the guru of EIES, publishes "Computer-Mediated Communication Requirements for Group Support", Journal of Organizational Computing, 1, 85-113 (1991). This distills lessons from a research programme he ran over the preceding 16 years, from 1974. A collaboration of faith-based groups (http://www.ecunet.org) starts using a product called BizLink (which later became Convene) in teaching their missionaries and staff around the world using the internet.
1990s:
Gloria Gery publishes Electronic performance support systems: how and why to remake the workplace through the strategic application of technology, which influences thinking about technology and learning in the workplace.
1992 CAPA (Computer Assisted Personalized Approach) system was developed at Michigan State University. It was first used in a small (92 student) physics class in the Fall of 1992. Students accessed randomized (personalized) homework problems through telnet.
Convene International is founded by Jeffery Stein and Reda Athanasios to provide collaboration tools via the Internet.
Convene International acquires Bizlink of North Carolina's Larry Allen to facilitate a rapid entry in building Internet communities.
UNI-C, the Danish State Centre for Computing in Education (which became a Blackboard user in the 2000s) supports a wide range of online distance courses using PortaCOM, a conferencing platform, for example in the TUDIC project, funded under the EU's COMET Programme. Extensive theoretical work undertaken by, amongst others, Elsebeth Korsgaard Sorensen, whose web site has a detailed bibliography.
1990s:
Collaborative Learning Through Computer Conferencing, also known as the Najaden Papers, edited by Anthony Kaye in the NATO ASI Series, and published by Springer-Verlag (ISBN 3-540-55755-5). Provides several case studies of online learning in action, and an overview by Jacob Palme providing a comprehensive inventory of the functionalities available in computer conferencing systems, including SuperKOM. This last paper describes in detail the underlying functions of what would now be called a virtual learning environment, including, for example, roles, voting, expiration times, exams, moderation, deferred operations.
1990s:
Open University (UK) installs FirstClass on a Mac server (reputed to be server license number 3) after an extensive evaluation of tools suitable to deliver online learning across Europe for the just-started JANUS project funded by the European Commission under the DELTA programme. (FirstClass was then a product of SoftArc in Ontario, Canada.)
The New York University School of Continuing Education (SCE) introduces its Virtual College and develops a digital network to deliver courses to students. SCE uses Lotus Notes at least through 1997 for computer conferencing and to provide online computer laboratory access to student home PCs.
GeoMetrix Data Systems founded. They produce the learning management system called TrainingPartner.
LearnFrame of Draper, Utah founded. They initially produced online courseware and an authoring tool, and in 1995 developed Pinnacle Learning Manager, which accepted and managed courses from a wide variety of vendors.
Following several years of preparatory studies, the European Commission DELTA programme starts. (DELTA stands for Developing European Learning through Technological Advances.) Over 30 projects are funded, each lasting for around three years, many relevant to VLEs, perhaps the most relevant ones being MTS, JANUS and EAST. The DELTA programme built on preparatory studies going on since 1985 into portable educational tools environments (proto-VLEs), networked multimedia and hypermedia, satellite networks, and a Learning Systems Reference Model (in some ways a precursor of IMS). There seems to be almost no Web information now on the preparatory studies, except for an interview with Luis Rosello in DEOSNews.
Authorware Inc. merges with MacroMind/ParaComp to create Macromedia. MacroMind specialized in animation software (Director) and ParaComp specialized in 3D imagery (Swivel 3D). Macromedia goes public only months after the merger and remains the leading purveyor of multimedia tools.
Terry Hedegaard of UOP online picks Convene International's Internet collaboration tools to run a pilot for teaching UOP students online exclusively.
The MUD Institute (TMI/TMI-2) provides the TMI Mudlib and online environment for learning MUD programming, including e-mail, bulletin boards, shared file spaces, real time chat, and instant messaging.
Terry Anderson coordinates a net-based "virtual conference" in conjunction with the 16th World Congress of the International Council for Distance Education. This project used email lists and Usenet groups distributed on the early Internet, Usenet, BitNet, and NetNorth. Reference: Anderson, T. & Mason, R. (1993). The Bangkok Project: New Tool for Professional Development. American Journal of Distance Education, 7(2), 5-18.
Humber College's Digital Electronics program used a learning management system to support a set of online courses. The program featured individualized instruction and continuous intake.
University of Wales, Aberystwyth awarded internal funding to further develop its 'integrated project support environment for teaching software engineering'. Ratcliffe, M. B., Stotter-Brooks, T. J., Bott M. F. & Whittle, B. R. 'The TIPSE: An IPSE for Teaching', Software Engineering Journal, 7, (5), pp 347–356, September 1992.
1993:
Jakob Ziv-El of Discourse Technologies, Inc. files for a patent for a Remote Teaching System (# 5,437,555) (similar to the prior art of the PLATO system), referencing his 1991 patent. The patent is granted in 1995. The patent is referenced in a 2000 patent filing (# 6,988,138) by representatives of Blackboard, Inc.
XT001 Renewable energy, a "landmark" experimental course developing techniques for collaborative and resource-based online learning at a distance, was the first "real" course to use FirstClass as its core online tool at the Open University. There are many references, though most are forgotten now.
Convene International contracted to work with University of Phoenix to develop the first large-scale commercial product for use in virtual classrooms. Convene's unique characteristic enabled students to capture data and then work offline (at a time when people were often charged by the hour or minute for online time). University of Phoenix's Thomas Bishop brands the product "ALEX", for Apollo Learning Exchange. As Convene finishes the development of ALEX for University of Phoenix, the pilot enrollment grows to 600 students within the first few months of implementation.
Brandon Hall puts out the first issue of his Multimedia and Internet Training Newsletter, one of the first regular publications in the field.
Jisc (the Joint Information Systems Committee of the UK Higher Education Funding and Research Councils) is established on 1 April 1993, as a successor body to the Information Systems Committee. See https://web.archive.org/web/20050207072800/http://www.jisc.ac.uk/index.cfm?name=about_history Also in 1993, ALT - the Association for Learning Technology - was founded in the UK, initially with the assistance of a donation by BT.
Michael Hammer and James A. Champy publish "Reengineering the Corporation: A Manifesto for Business Revolution" (New York: HarperCollins, 1993). As usual with business theories, it took some time for Reengineering, or Business Process Reengineering (BPR) in full, to percolate to higher education; but in fact Reengineering spread (to a few) much faster than some other approaches (such as Activity Based Costing or Benchmarking). Already in the 1995-98 period a number of university e-learning experts in the UK, the Netherlands and Malaysia were using the language, in many cases to the dismay of their colleagues. It is a moot point whether BPR accelerated the development of e-learning or inhibited it; certainly at CEO level in some universities the ideas were for a while seductive. BPR has a sharp edge; the gentler but vaguer approach of Change Management seems to be more enduring.
Scott Gray, a mathematics graduate student at Ohio State, develops The Web Workshop, a system that allows users to create Web pages online while learning. The pedagogical technique called Useractive Learning was developed to emulate the teaching techniques used in the Calculus & Mathematica courses taught at Ohio State.
Bill Davis, Jerry Uhl, Bruce Carpenter, and Lee Wayand launch MathEverywhere, Inc. to market and sell the coursework used in Calculus & Mathematica courses.
1994:
In 1994, NKI Distance Education in Norway starts its second generation, online, distance education courses. The courses were provided on the Internet through EKKO, NKI's self-developed Learning Management System (LMS). The experiences are described in the article NKI Fjernundervisning: Two Decades of Online Sustainability in Morten Flate Paulsen's book Online Education and Learning Management Systems.
CALCampus launches an online-based school through which administration, real-time classroom instruction, and materials are provided (see Origins of CALCampus).
The Tarrson Family Endowed Chair in Periodontics at UCLA is established with a testamentary gift to design, develop and launch the UCLA Periodontics Information Center for sharing periodontal practices and concepts with the worldwide dental community via CD-ROM and the Internet.
Lotus Development Corporation acquires the Human Interest Group. The system evolves into the Lotus Learning Management System and Lotus Virtual Classroom, now owned by IBM. There are links to articles that describe how IBM previously implemented the "inventions" described in the Blackboard patent.
SUNY Learning Network begins in 1994. Traditional faculty were hired to create online courses for asynchronous delivery into the home via computer. Each faculty member worked with an instructional design partner to implement the course. From the fall of 1995 through spring of 1997, forty courses were developed and delivered. SLN now supports over 3,000 faculty and 100,000 enrollments on 40 of the State University of New York's campuses.
WEST 1.0 is released by WBT Systems. It eventually is renamed TopClass.
Bob Jensen and Petrea Sandlin publish "Electronic Teaching and Learning: Trends in Adapting to Hypertext, Hypermedia, and Networks in Higher Education" (republished 1997). Text available via hyperlink, including identification of ten leading LMS systems in 1994 (discussed in detail in chapter 3 of their book):
Quest from Allen Communication
Tourguide from American Training International (Tourguide is no longer listed as a product at Infotec)
Multimedia ToolBook from Asymetrix Corporation, bought by Click2Learn, bought by SumTotal Systems
Lesson Builder from the Center for Education Technology in Accounting (this product was never completed)
Tencore from Computer Teaching Corporation
Course Builder from Discovery Systems International, Inc.
Training Icon Environment (TIE) from Global Information Systems Technology, Inc.
tbtAuthor from HyperGraphics Corporation (HyperGraphics no longer lists tbtAuthor in its product line)
Authorware from Macromedia Corporation
Personal Education Authoring Kit (PEAK) from Major Educational Resources Corp. (PEAK is for Mac users only and has been discontinued; however, while they last you can get free copies at 800-989-5353)
Banking on the tremendous commercial success and rapid growth of the UOP program, Reda Athanasios of Convene International starts making the online virtual classroom suite, built in collaboration with UOP, available to all other schools aiming at success for their distance education programs.
The JANUS project led by the Open University releases in September 1994 Deliverable 45 describing the interim evaluations of the first three online courses delivered across Europe in conjunction with the JANUS project, including AD280 "What is Europe", DM863 "Lisp Programming" and D309 "Cognitive Psychology" Virtual Summer School. Later in the year the Open University releases a longer final report purely on the Virtual Summer School.
September 1994: The JANUS User Association holds its first AGM and conference at the Dutch Open University. It is one of the first Europe-wide associations focussed on e-learning. It later changed its name to LearnTel and continued until 1999. An online archive of the newsletter is still available via the support of pjb Associates.
Athabasca University (Canada) implements first on-line Executive MBA program using Lotus Notes.
TeleEducation NB introduces a DOS-based working LMS in 1993. In 1994 a more powerful system was proposed for the WWW. A description of the concept was published in 1995 with some of the principal features of an LMS. Reference: McGreal, R. (1995). A heterogeneous distributed database system for distance education networks. The American Journal of Distance Education, 9(1), 27–43. Retrieved 11 August 2006.
Taking advantage of Convene International's online virtual classroom and hoping for success similar to that of UOP online, several schools start working with Convene to wire their distance education programs and offer them online via the Internet.
Mark Lavenant and John Kruper present "The Phoenix Project at the University of Chicago: Developing a Secure, Distributed Hypermedia Authoring Environment Built on the World Wide Web" at the First International World-Wide Web Conference in Geneva, Switzerland. "The Phoenix Project" later became the Web-based learning environment within the Division of the Biological Sciences at the University of Chicago.
Swanton High School in Ohio used learning management systems to track student progress and testing results, alongside satellite courses, videodiscs, HyperCard, QuickTime video, and Internet connections.
Intralearn releases a Learning Management System for the mid-market. The system can deliver courses to students in different locations over the Internet, support interaction with them, send them mail, and conduct examinations.
Tufts University released (1994) the Health Sciences Database, which subsequently (2003) became known as TUSK, the Tufts University Sciences Knowledgebase. In 1997, version 3 (hsdb3) was created using MySQL. There has been steady development of features through versions hsdb4, hsdb45, TUSK 1.0 and now TUSK 2.0. From its inception its basis was the integration of clinical information with ubiquitous availability across space and time. Students and authors had specific permissions within the system. TUSK is a combined learning management system, content/knowledge management system and course management system. The system is used at the three health sciences schools at Tufts and now at 7 partner schools in the U.S., Africa and India.
July 1994: First international gathering of educators using online technologies to conduct classroom project-based learning was held by iEARN (International Education and Resource Network) in Puerto Madryn, Argentina. 120 educators from 20 countries gathered to share experiences. Out of this conference came the first international iEARN constitution and plans to expand school networking globally.
1995:
Jerrold Maddox, at Penn State University, taught a course, Commentary on Art, on the web starting in January 1995. It was the first course taught at a distance using the web.
By January 1995 there are dozens of MUDs and MOOs, including Diversity University, in use for educational purposes.
Elliott Masie and Rebekah Wolman publish the first edition of "The Computer Training Handbook" (Minneapolis: Lakewood Books).
Pardner Wynn introduces a free web-based interactive course at testprep.com for SAT test preparation, possibly the first interactive learning course on the internet. Over 1 million hits are registered within 3 months, encouraging the development of the first commercial web-based e-learning course authoring, publishing, and management system, IBTauthor (announced January 1996 in Brandon Hall's Multimedia Training Newsletter). This product became the basis for VC-backed Docent, Inc. (funded in 1997, IPO in 2000), now named SumTotal Systems.
European Commission establishes the European Multimedia Task Force, to analyse the status of educational media in Europe. The field covered by the Task Force includes all educational and cultural products and services that can be accessed by TVs and computers, whether via telematics networks or not, and used in the home, industry or educational contexts.
Lotus Notes used for course materials, syllabi, handouts, homework collection, teams, and multi-instructor, multi-team teaching in the MBA program. Results reported at several academic conferences (ICIS-17, AIS-2) in 1996.
Mallard web-based course management system developed at the University of Illinois. Mallard allows for multiple roles; for example, a graduate student can be an instructor in one course and a student in another.
WOLF (the Wolverhampton Online Learning Framework) is developed at the University of Wolverhampton's Broadnet project under the guidance of Stephen Molyneux to deliver training materials to local SMEs (Small to Medium Enterprises). In 1999, WOLF is both adopted as the university's VLE and sold for commercial distribution to Granada Learning, who rebrand the product in partnership with the university and market it to the UK FE and HE sectors under the name Learnwise. WOLF is still in use at the university today, and undergoing continual development to meet the ever-changing needs of education.
Nicenet ICA launched to the public.
Murray Goldberg begins development of WebCT at the University of British Columbia in Vancouver, Canada, with a $45,000 grant from UBC's Teaching and Learning Enhancement Fund. WebCT would go on to become the world's most widely used VLE, used by millions of students in 80 countries.
FirstClass is named the Best General Purpose Tool/School Program by Technology & Learning magazine.
Professors Michael Gage and Arnold Pizer develop the WeBWorK Online Homework Delivery System at the University of Rochester.
Virtual Science and Mathematics Fair used static HTML pages created by children and a threaded discussion for comment posts left by judges and visitors. PhD research reported by Kevin C Facemyer, 1996.
The Future of Networking Technologies for Learning Workshop held, sponsored by the US Department of Education. "In an attempt to answer the question, "What is the future of networking technologies for learning," the U.S. Department of Education's Office of Educational Technology commissioned a series of white papers on various aspects of educational networking and hosted a workshop to discuss the issues. The white papers and the workshop report are here."
The European Commission releases in May 1995 a 104-page report describing the 30 projects commissioned under the DELTA programme of Framework 3. Several of these are concerned with online learning using what many might today call a "virtual learning environment". (The phrase is not used as such, but the phrases "learning environment", "interactive learning environment" and "collaborative learning environment" are used quite frequently.) About the same time the JANUS project releases the JANUS Final Report describing the project over its 3-year lifetime and all the online courses it supported during 1993-1994 across Europe.
The report Telematics for Distance Education in North America is released in public form in November 1995 after wide dissemination within European research circles. It describes the situation as it pertains to e-learning at 20 organisations including universities and most major vendors, based on a 3-week study trip in summer 1995 by Bacsich and Mason.
A short article in the LIGIS newsletter for November 1995 on FirstClass confirms that at the time of its writing FirstClass did not have a Web interface. (It also notes that its then rival CAUCUS did have a Web interface and that WEST, later TopClass from WBTSystems, had been recently developed.)
WBTSystems develops TopClass, a web-based course management system. It allowed personalization in that the instructor could tailor a different version of a course for each student.
Northern Virginia Community College (NVCC)'s Extended Learning Institute develops and delivers four math, science, and engineering courses using Lotus Notes for computer conferencing/groupware functionality.
Edward Barrett at MIT received a grant to create a prototype "Electronic Multimedia Online Textbook in Engineering" (EMOTE) for use in classes taught through the new Writing Initiative.
WebTeach, a web-based asynchronous communication system using chronological threads in the "Confer style" originally developed in the mid 70s by Robert Parnes, was first used in 1995 in the Professional Development Centre at UNSW. It was written in Apple's HyperCard as a CGI script running behind WebStar by Dr. Chris Hughes and Dr. Lindsay Hewson at UNSW. The 1996 versions supported a Notice Board, a Seminar Room and a Coffee Shop for each class group, and added email notifications, a Quiz function, and a range of pre-programmed communication modes that emulated small-group teaching strategies including brainstorming, questioning, case studies and commitment exercises. The modes were characterised by changes in layout, font colours, and the options available to teachers and students. The software was refined in subsequent years, with additional modes, including a formal debate mode, being added. In 2002 it was completely rewritten in ColdFusion and refined to include many more features, including private groups, voting modes and fully functional web-based administration pages.
WebTeach supports an approach to teaching and learning on the web that is more akin to an asynchronous virtual classroom than to an instructionally designed and packaged educational experience. Communication forms the basis of the teaching (as opposed to content provision), and the teacher in a group can switch teaching strategies (modes) easily in order to respond to student contributions.
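The core data model behind "Confer style" chronological threading with switchable teaching modes can be illustrated with a minimal sketch. This is an assumption-laden toy, not WebTeach's actual HyperCard or ColdFusion code; all class and field names here are illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Post:
    author: str
    text: str
    stamp: datetime

@dataclass
class Thread:
    # "mode" stands in for a WebTeach-like teaching mode (seminar, debate, ...);
    # switching it changes how subsequent contributions are framed.
    title: str
    mode: str = "seminar"
    posts: list = field(default_factory=list)

    def add(self, author: str, text: str) -> None:
        # Contributions are simply appended: display order is arrival order,
        # which is the essence of chronological "Confer style" threading.
        self.posts.append(Post(author, text, datetime.now(timezone.utc)))

    def transcript(self) -> list:
        return [f"[{self.mode}] {p.author}: {p.text}" for p in self.posts]

t = Thread("Week 3 discussion")
t.add("teacher", "What did you make of the case study?")
t.add("student", "The incentives seemed misaligned.")
print(t.transcript())
```

The design point is that communication, not packaged content, is the unit of teaching: the transcript is just the ordered list of contributions, with the current mode supplying the framing.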
Many online schools appear on the educational scene after working with Convene International. Some emerge as leaders in Internet education, such as Baker College, Pacific Oaks College and UCLA Extension, to name a few.
The Stanford Center for Professional Development (SCPD, formerly SITN) launches Stanford Online, which "was the first university internet delivery system incorporating text and graphics with video and audio, using technology developed at Stanford."
"Constructing Educational Courseware using NCSA Mosaic and the World Wide Web" is presented by J.K. Campbell, S. Hurley, S.B. Jones, and N.M. Stephens at the 3rd International World-Wide Web Conference in Darmstadt, Germany.
Lee A. Newberg, Richard Rouse III, and John Kruper publish "Integrating the World-Wide Web and Multi-User Domains to Support Advanced Network-Based Learning Environments" in the Proceedings of the World Conference on Educational Multimedia and Hypermedia (1995), Association for the Advancement of Computing in Education, Graz, Austria.
From May to July 1995 Georg Fuellen, Robert Giegerich and others give the "BioComputing Course" using the electronic conferencing system BioMOO, later winning the "Multimedia Transfer 1997" award, presented during the Learntec 1997 exhibition.
Work began at University of Wales, Aberystwyth in developing its integrated Remote Advisory System, a system designed to provide students with remotely sited tutors, sharing workspaces, audio and video. Supported by Internal Outlook Enterprise Funding. Published in Ratcliffe, M. B., Parker, G. R. and Price, D. E. 'The Remote Advisory Service at Aberystwyth', IEEE Conference on Frontiers in Education, Utah, USA, 6 pages, November 1996.
Sue Polyson, Robert Godwin-Jones, and Steve Saltzberg of Virginia Commonwealth University (VCU), at a Fall 1995 meeting of the "Partnership for Distributed Learning" (a consortium of US schools organized by University of North Carolina, Chapel Hill) proposed the concept for developing a web-based course management system named "Web Course in a Box". They described the basic system features and proposed that interested schools work together to develop a working prototype of this system. The VCU group began work on the prototype with input from the consortium. Work continued through the Winter, 1995 and Spring 1996. A first beta of Web Course in a Box was presented to the group in Spring, 1996. The idea for Web Course in a Box grew out of work that Polyson had begun in 1994–1995 at VCU to develop a web-based interface for delivery of course materials to support VCU's Executive Masters in Health Administration, one of the first distance-delivered master's degree programs in the country. During this time, Godwin-Jones, also at VCU, had been working to develop web-based content for foreign language instruction. This work was described in two articles published by Syllabus Press, in the September 1995 issue of Syllabus (Volume 9, No.1) titled "Distributed Learning on the World Wide Web" and "Technology Across the Curriculum - Case Studies", both authored by Saltzberg and Polyson.
Question Mark (see QuestionMark) brings out the first web-based assessment management system, QM Web, following on from its DOS and Windows assessment systems.
Online Learning Circles move from the AT&T Learning Network to their current home on the International Education and Resources Network (iEARN).
1996:
The Project for OnLine Instructional Support is designed and developed at the University of Arizona. This tool provides innovative dialog-based lessons to students. To support use of these lessons, a method for providing online course context, course organization and course communications tools is created.
In 1996, NKI Distance Education in Norway starts its third generation online distance education courses. The courses were web-based and provided through EKKO (renamed to SESAM), NKI's self-developed Learning Management System (LMS). The experiences are described in the article NKI Fjernundervisning: Two Decades of Online Sustainability in Morten Flate Paulsen's book Online Education and Learning Management Systems.
In 1996, after hearing about the Virtual Office Hours Project developed by Prof. Craig Merlic and Matthew Walker in UCLA's Department of Biochemistry, UCLA Social Sciences reviewed it with some of the faculty and decided to try writing a custom version. The deciding factor was finding Jeff Carnahan's Upload.pl Perl CGI script (available at Misc CGI Scripts; click on FileUploader 6.0 for free, but registration required) that handled file uploads via a web browser. With that, Matt Wright's WWWBoard, a calendar script (later discarded), and a script written by Social Sciences Computing to edit files on the fly, there were enough tools to make something useful. Originally the plan was to have instructors fill out a web form to request a site, but due to problems getting the email to work, sites were created instantly instead. That turned out to be easier. A password was added and emailed to all the Social Sciences faculty. ClassWeb was first offered to UCLA Social Sciences faculty in the Spring Quarter of 1997. Eight instructors set up ClassWeb sites (see Spring 1997 sites).
Early 1996, Dan Cane, a sophomore student at Cornell University, begins working with Cindy van Es, a senior lecturer in Agricultural, Resource and Managerial Economics (ARME), as part of an independent study project to build course web pages. He develops automated scripts to provide basic interactive functionality for announcements and the beginnings of a suite of tools called The Teachers Toolbox. These ideas later become the foundation for CourseInfo.
The UCLA Periodontics Information Center was established in 1996 within the UCLA School of Dentistry with generous gifts from the Tarrson Family and Sun Microsystems. The initial thrust was to provide the most comprehensive website on Periodontics including Tutorials, Case Studies and Continuing Education Credits.
European Commission agrees to the European Council's 'Learning in the Information Society' action plan.
Webtester and ChiTester developed at Weber State University through a grant from the Utah Higher Education Technology Initiative (see ChiTester early history).
Sue Polyson and Robert Godwin-Jones, of Virginia Commonwealth University, released the first beta version of Web Course in a Box (WCB) in Spring 1996. (See this 1997 presentation.) This web-based system was designed to be an easy-to-use, template-based interface that allowed instructors to create an integrated set of web pages for presenting course material. The system featured logins for instructors and students, the ability for instructors to enroll students in their courses so that access to course materials could be controlled, the easy setup of web-based discussion forums for use by students within the class, document sharing through the upload of files to the discussion forum, schedule and announcement pages, content links, and personal home pages for both students and instructors. The WCB system was made available, free of charge, for use by any school that wished to use it. The source code was copyrighted by Virginia Commonwealth University, and Web Course in a Box was trademarked by VCU in 1997. Web Course in a Box was described in an article, "A Practical Guide to Teaching with the World Wide Web", by Polyson, Saltzberg, and Godwin-Jones, published in the September 1996 issue of Syllabus magazine, by Syllabus Press.
Doncaster College in South Yorkshire, England, submitted a bid under the "Further Education Competitiveness Fund" proposing to use the Fretwell Downing "Common Learning Environment" integrated into newsgroups, the WWW, and conferencing, all combined into an on-line learning environment. A diagram and a single paragraph from the bid are dated 4 March 1996. The full document is much more explicit, making reference to the use of email, conferencing and newsgroups for the delivery of National Vocational Qualifications and distance learning over the internet and the UK Joint Academic Network. Slides from a presentation, including a diagram of the learning environment, also survive.
8 May 1996 - Paris, France: Murray Goldberg presents a paper at the 5th WWW conference, introducing WebCT (see session PS10, paper P29; for the paper, see http://www.ra.ethz.ch/CDstore/www5/www156/overview.htm). The reaction to WebCT caused Goldberg to begin giving away free licenses to the software. Word spread very quickly and within 6 months approximately 100 institutions were using WebCT.
In January, Nat Kannan, Carl Tyson, and Michael Anderson form UOL Publishing (now VCampus) and release an Internet course delivery platform; the Java client accesses PLATO content on a CDC mainframe. In November, UOL releases a browser-based course authoring and delivery platform based on the Informix OO database. The UOL system supports multiple campuses (with "buildings" on each "campus" for the different academic functions) and enables multiple roles (admin/author/instructor/student) for every user on a course by course basis. UOL's virtual campus is adopted by Graybar Electric and the University of Texas TeleCampus (among others) in early 1997.
Paul McKey publishes the design specifications for an "Interactive on-line Tutorial Session Model" in his Masters Thesis "The Development of the On-line Educational Institute", SCU, Australia, July 1996, https://web.archive.org/web/20070804083810/http://www.redbean.com.au/articles/files/masters/06-Chapter6.html
An electronic, network-based assignment submission tool is in use at the Australian National University Department of Computer Science. Web-based course pages are also implemented at ANU DCS (both the submission tool and the course pages may have been in use prior to 1996).
The University of Michigan launches the UMIE project (the University of Michigan Instructional Environment), a combination of systems to enhance learning online and to create a Learning Management System for use by the campus.
University of Southern Queensland (USQ) offers its first fully online program, a Graduate Certificate in Open and Distance Learning, using a system that linked together course materials presented in web pages, online discussion via newsgroups (NNTP) and a purpose-built system for online submission of student work.
The development of COSE was funded from September 1996 to August 1999 by the JISC Technology Applications Programme (JTAP). COSE has continued to gain support from the Jisc in its work on interoperability.
The JTAP programme also funded the Toomol project which produced the Colloquia P2P VLE, developed by Liber, Olivier, Britain and Beauvoir, which has had a major influence in the more recent development of the Personal Learning Environment (PLE) concept.
Pitsco, Inc. ships an updated version of its Synergistic Systems modular education curriculum which includes computer-based assessment and network-based reporting and gathering of assessment results.
World Wide Satellite Broadcasting (WSB) Inc. develops a satellite-based distance learning system using synchronized video and audio courseware provided by UCLA. Content is delivered via Philips' CleverCast content distribution system to Windows PCs running Active Desktop via the Astro MEASAT Direct To Home (DTH) network, covering Malaysia, Thailand and India.
The TELSI (Telematic Environment for Language Simulations) VLE is developed at the University of Oulu in Finland. Development was headed by Eric Rouselle and continued into the present-day Discendum Optima.
Marine Corps Management and Simulation Office (MCMSO) adapts DOOM II into Marine Doom, a Virtual Learning Environment for training four-man fire teams.
KnowledgePlanet introduced the world's first Web-based Learning Management System in 1996. See https://web.archive.org/web/20070928043901/http://www.knowledgeplanet.com/inside/milestones.asp
Stephen Downes, Jeff McLaughlin and Terry Anderson demonstrate and document the MAUD (Multi-Academic User Domain), holding a Canadian Association for Distance Education seminar on the system, Online Teaching and Learning, 29 January 1996.
Michigan State University's Virtual University opened. By 1997, its fully online courses included registration, payment, quizzing, discussions, a dropbox, and, of course, course content. The system was created and developed by in-house programmers.
Garry Main and Kevan Gartland, University of Abertay Dundee, UK, developed and deployed a system (webtest) for use in testing students in the School of Molecular and Life Sciences. This was later extended to allow images to be labelled, self-testing and teaching. Also in use at the time was the Question Mark product. The work at Abertay was presented as a keynote talk at the BALANCE workshops in 1997/8.
Initial release of the ETUDES software at Foothill College, California.
Real Education founded (later changed to eCollege.com) as an LMS/CMS Application Service Provider company.
WEST (later WBTSystems) announces in early 1996 a new release of WEST (later renamed TopClass). Among the enhancements mentioned are support for multiple-choice tests and "fill in the blanks" questions, including choosing questions randomly from a list (a question bank?), and support for multiple classes with multiple content, with students able to take more than one class.
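The question-bank idea mentioned here, choosing quiz questions randomly from a list, is simple to sketch. The Python below is purely illustrative (invented data structures, no connection to TopClass itself):

```python
import random

def draw_quiz(bank, n, seed=None):
    """Draw n distinct questions at random from a question bank."""
    if n > len(bank):
        raise ValueError("question bank is smaller than the requested quiz")
    return random.Random(seed).sample(bank, n)

# Hypothetical bank entries; a real system would store these in a database.
bank = [
    {"id": 1, "type": "multiple-choice"},
    {"id": 2, "type": "fill in the blanks"},
    {"id": 3, "type": "multiple-choice"},
    {"id": 4, "type": "fill in the blanks"},
]
quiz = draw_quiz(bank, 2, seed=1)
```

Seeding the generator makes one student's paper reproducible for review while different seeds still give different students different question sets.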
The article Lotus Notes in the Telematic University, written for LIGIS in September 1996, confirms that several US universities are using Lotus Notes for e-learning, including via a Web interface. It goes on to observe that "Lotus Notes already has offered for a year or more several of the groupware and Internet features that other systems like FirstClass and Microsoft Exchange are only just now getting".
Another article in the same edition of LIGIS confirms that FirstClass, to the relief of many of its users, announced a Web interface in August. (http://www.pjb.co.uk/10/FirstClass.htm but see also http://www.pjb.co.uk/9/FirstClass.htm) The 304-page PDF manual for the FirstClass Intranet Client (Part Number SOF3122) is widely and freely distributed by SoftArc across many bulletin boards and web servers, and remains available at several universities (e.g. at the University of Maine, a long-standing user of FirstClass).
Not to be outdone by the UK Open University, the FernUniversität Hagen (the German OU) described its web-based virtual campus in a LIGIS article in October 1996, University of Hagen Online, by Schlageter and others. The project "goes beyond current approaches in that it integrates all functions of a university, thus producing a complete and homogeneous system. This does not only include all kinds of learning material delivered via electronic network (most "online university" approaches focus almost exclusively on this aspect) - but for a promising approach the following is absolutely essential: user-friendly and powerful communication, especially also between users themselves for collaborative learning (peer learning) and for social interconnecting, possibilities of group-work (cscw), seminar support, new forms of exercise and practical via net, easy access to library and administration, information and tutoring systems".
Microsoft announces MS Exchange at Networld+Interop. An article of the era speculates on its relevance to e-learning.
An article nominates 1996 as "the year of virtual universities". There were a large number of conferences; in particular, at Ed-Media Boston there was a packed session even though it was organised at short notice.
WebSeminar (Gary Brown, Eric Miraglia, Doug Winther, and Information Management Group) (now retired; news release here), an interactive web-based space for integrating discussion and media-rich modules.
The Virtual Classroom (Brown, Burke and Miraglia) (retired), a web-based threaded composition environment; a WSU Boeing grant award and a Microsoft/Information Management Group partnership.
Northern Virginia Community College (NVCC)'s Extended Learning Institute switches from Lotus Notes to FirstClass and uses FirstClass in over 35 courses during the Fall 1996 semester.
March 1996: Allaire releases Allaire Forums, "a Web conferencing application built entirely on the ColdFusion platform. Forums provided a feature-rich server application for creating Internet, Intranet and Interprise collaborative environments. Already in use by hundreds of leading companies worldwide, Forums was the first in a new line of end-user Web applications."
Bruce Landon makes a proposal to British Columbia to set up a comparison service for VLEs, which made its first report (on nine systems) in 1997. It was first called Landonline and later Edutools.
Hermann Maurer (Graz University of Technology, Austria) publishes "LATE: A Unified Concept for a Unified Teaching and Learning Environment" in Journal of Universal Computer Science, vol. 2, no. 8 (1996), 580–595. Based on the Hyper-G/HyperWave system developed by Maurer, LATE prefigures many of the features available in virtual learning environments, including content-authoring modules, digital libraries, asynchronous and synchronous discussion, and virtual whiteboards.
Technikon South Africa (TSA), now merged with the University of South Africa (Unisa), released the first version of its in-house developed online learning environment (TSA Online) in 1996. The subsequent versions (2 and 3) were renamed TSA COOL (Technikon SA CoOperative Online Learning). Version 4 was under construction when TSA and Unisa merged (see 2004). Version 3 served approximately 24,000 students at the time of the merger.
The University of Manitoba conducts an evaluation of course management systems that includes Learning Space (University of Washington), Top Class, WebCT and ToolBook.
Iowa State University develops Classnet, a web-based "tightly integrated, automated class management system". It was created to help with the administrative aspects of course management.
The Oracle Learning Architecture (OLA) is a course management system with over 75 training titles. It has the following features: Home page, bulletin board, Help, User Profile, My Courses, Course Catalog, and Reports. It served up web-based courses, downloadable courses, vendor demos and assessments.
Empower Corporation developed the Online Learning Infrastructure (OLI), a training management system that used a relational database as a central repository for courses and/or learning objects. It had built-in tools and templates for authoring learning objects. It also had a middleware layer called the Multimedia Learning Object Broker that mapped learning objects as they moved in and out of the database.
Learning Junction, from TeamSpace, a company founded by several ex-Oracle employees, is an Internet-based training management system developed in Java. The program displayed a graphical list of courses, certification plans and needed skills. Students registered online and were given an individualized learning plan.
The Jisc Technology Applications Programme (JTAP) coMentor VLE starts development at the University of Huddersfield, UK. The coMentor web site indicates that a further dissemination phase of the software started in 1998.
Work was funded at the University of Wales, Aberystwyth, to further develop its Integrated Project Support Environment for Teaching, started in 1992. See Ratcliffe, M. B., Stotter-Brooks, T. J., Bott, M. F. & Whittle, B. R., 'The TIPSE: An IPSE for Teaching', Software Engineering Journal, 7 (5), pp 347–356, September 1992.
Work is funded at the University of Wales, Aberystwyth, by the Joint Information Systems Committee Technology Applications Programme: £164,000 for NEAT (Networked Expertise, Advice and Tuition), a system for students to obtain help across the Internet from tutors, sharing workspace, audio and video. See Ratcliffe, M. B., Davies, T. P. & Price, G. M., 'Remote Advisory Services: A NEAT Approach', IEEE MultiMedia, Vol 6, Issue 1, 16 pages, Jan-March 1999.
Tufts University presents to the Special Libraries Association. An article is published in the Proceedings of the Contributed Paper Session of the Biological Sciences Division of the Special Libraries Association, 12 June 1996, describing the creation of a networked relational document database integrating text and multimedia, and of tools addressing the changing needs in medical education.
1997:
Digitalbrain plc, founded by David Clancy in 1997, quickly established itself as the most heavily used learning platform in the UK, which was still the case in April 2007. Digitalbrain was the first learning platform to be deployed using an on-demand software model and, as the name implies, the first designed around a user-centric approach ("a truly foresighted design", according to the heaviest users of the platform). The combination of the on-demand and user-centric approaches meant that a single, flexible learning platform could be rolled out easily, quickly and cheaply across a host of different school and institutional user groups, each with multiple but inter-related user hierarchies and with different software bundles and functional capabilities. At a time when users had little understanding of why they needed a learning platform, let alone what they would do with it, this approach encouraged user experimentation at an affordable price.
Early 1997: CourseInfo is founded by Dan Cane and Stephen Gilfus, an undergraduate student and a teaching assistant, and launches the Interactive Learning Network 1.5, based on scripts that Dan Cane began writing in 1996. The product is one of the first systems based on a relational database, with internet forms and scripts providing announcements, document uploading, and quiz and survey functionality.
In 1997, Instructional Design for New Media, an online course on how to develop online courses, was created using forums, interactive exercises and the notion of collaborative learning by a community of instructors and students. Developed by a Canadian consortium led by Christian Blanchette (Learn Ontario) and funded by the Canadian government, it was featured in May 1998.
Brandon Hall publishes the "Web-Based Training Cookbook: everything you need to know for online training" (New York: John Wiley). The book contains many examples of online training software and content already in commercial use. Brandon Hall also publishes the first of his annual reviews of Learning Management Systems, entitled "Training Management Systems: How to Choose a Program Your Company Can Live With." There are 27 learning management systems listed in this report.
Elliott Masie publishes the second edition of the "Computer Training Handbook" (the first edition was published in 1995 and co-authored by Rebekah Wolman). In this book Elliott describes teaching a pilot course via the Internet called "Training Skills for Teaching New Technology". The book also has a chapter entitled "On-line and Internet-Based Learning".
The Stanford Learning Lab, an applied research organization, was created to improve teaching and learning through effective use of information technologies. It carried out many projects that developed techniques and tools for large lecture, geographically distributed, and project-based courses. A study of a web-supported large lecture course, The Word and the World, tested online structured reading assignments, asynchronous forums, and student projects. Software developed included: panFora, an online discussion environment for the development of critical thinking skills; CourseWork, an online, rationale-based, problem set design and administration environment; E-Folio, ubiquitous, web-based, portable electronic knowledge databases that are private, personalized and sharable; Helix, web-based software developed to coordinate the iterative review of research papers; and RECALL, to capture, index, retrieve, and replay concept generation over time in the form of a sketch and the corresponding audio and video rationale annotation.
In June 1997, Gotham Writers' Workshop (www.writingclasses.com) launched its online division; classes feature blackboard lectures, class discussion bulletin boards, interactive chat, homework posting/individual teacher response, group assignment posting/group critique files.
Virginia Commonwealth University licensed Web Course in a Box (WCB) to madDuck Technologies in early 1997. madDuck Technologies was a company formed in early 1997 by Sue Polyson, Robert Godwin-Jones and Steve Saltzberg. The company was formed by the WCB developers in order to provide support and services to other educational institutions who were using WCB. WCB version 1 was released in February 1997 (beta versions were released in 1996, and the product was in use at VCU and several other institutions in 1996). WCB V2 was released in September 1997 and added web-based quizzing, as well as more course site customization, to the feature set.
The Oncourse Project at Indiana University utilizes the notion and design of a "template-based course management system". Other systems used a similar approach, including CourseInfo, WebCT, and other course management systems. See the old IUPUI WebLab site archived at Archive.org: https://web.archive.org/web/19990221151346/http://www.weblab.iupui.edu/projects/Oncourse.html
Lotus LearningSpace is deployed as the learning and student team environment for the Indiana University Accounting MBA program, as reported in the proceedings of HICSS-32.
Lotus LearningSpace presented at NERCOMP 24 March 1997: "Interactive Distributed Learning Solutions: Lotus Notes-Based LearningSpace" by Peter Rothstein, Director, Research and Development Programs, Lotus Institute.
Plateau released TMS 2, an enterprise-class learning management system. TMS 2 was adopted by both the U.S. Air Force and Bristol-Myers Squibb at the time of its release.
The Bodington VLE is deployed at the University of Leeds, UK (The Bodington System - Patently Previous). By 1997, the Bodington VLE included many of the features listed in Blackboard's US Patent #6,988,138, including the variable-role authentication/authorization system. A full record exists of all activity in the Bodington VLE at Leeds going back to October 1997.
First versions of COSE deployed at Staffordshire University. COSE includes facilities for the publication and reuse of content, for the creation and management of groups and sub-groups of learners by tutors, and for the assignment of learning opportunities to those groups and to individual learners. For an article (1997) see [1]; this article was republished in 1998 in Australia. For a fuller description of work on COSE to the end of 1997, see the account published in mid-1998.
Ziff Davis launches ZDNet University for $4.95/month, offering courses in programming, graphics and web management. See the Archive at Archive.
Cisco Systems: in 1993, Cisco embarked on an initiative to design practical, cost-effective networks for schools. It quickly became apparent that designing and installing the networks was not enough; schools also needed some way to maintain the networks after they were up and running. Cisco Senior Consulting Engineer George Ward developed training for teachers and staff on the maintenance of school networks. The students in particular were eager to learn, and demand was such that in 1997 it led to the creation of the Cisco Networking Academy Program (see Cisco Networking Academy). The program teaches students networking and other information technology-related skills, preparing them for jobs as well as for higher education in engineering, computer science and related fields. Since its launch, the program has grown to more than 10,000 Academies in 50 U.S. states and more than 150 countries, with a curriculum taught in nine languages. More than 400,000 students participate in Academies operating in high schools, colleges and universities, technical schools, community-based organizations, and other educational programs around the world. The Networking Academy program blends face-to-face teaching with web-based curriculum, hands-on lab exercises, and Internet-based assessment.
Fretwell Downing, based in Sheffield, England, is working on the development of a virtual learning environment under the auspices of the "LE Club", a partnership between the company and eleven English Further Education colleges, based on Dr Bob Banks's outline specification for a Learning Environment. The "LE" had arisen from a 1995-1997 EU ACTS project, Renaissance, in which Fretwell Downing was the prime contractor.
Convene International is recruited by Microsoft to become their first Education marketing partner. Convene helps Microsoft with establishing licensing parameters for the ASP companies.
Blackboard Inc. is founded as a consulting firm.
WebAssign developed by faculty at North Carolina State University for the online submission of student assignments and a mechanism for immediate assessment and feedback.
WebCT spins out of UBC forming independent company with several hundred university customers.
Release of TWEN (The West Education Network), a system which "connects you with the most useful and current legal information and news, while helping you to organize your course information and participate in class discussions". (See the archived homepage at archive.org.)
The Future Learning Environment (FLE) research and development project starts in Helsinki, Finland (see http://fle.uiah.fi).
Stephen Downes presents Web-Based Courses: The Assiniboine Model (http://www.westga.edu/~distance/downes22.html) at NAWeb 1997, describing the LMS in detail.
A collaborative writing project between junior high students and university pre-teachers, using FileMaker Pro to create collaborative writing spaces, runs January to March 1997; it is later described in Payne, J. Scott and N. S. Peterson. 2000. The Civil War project: project-based collaborative learning in a virtual space. Educational Technology & Society 3(3).
The Manhattan Project (now known as the Manhattan Virtual Classroom) is launched at Western New England College in Springfield, MA, as a supplement to classroom courses in February 1997. It is later released as an open source project. (The Manhattan Project: history and description.)
Delivery starts of the LETTOL course in South Yorkshire, England. Characteristics: delivery over the Internet; materials, tasks/assignments, discussion board and chat system all accessible by browser; browser-based amending of the materials; learners and tutors all over the world, with learners enrolled at several of the institutions in the (then) South Yorkshire Further Education Consortium and tutors employed by several different institutions.
An undergraduate software development course at the University of North Carolina at Chapel Hill included a team addressing the problem of Distance Education. The purpose was to allow interaction between students and instructors located in remote sites by utilizing a computer network, such as the internet. Included in the software requirements were a linked web-browser system, a synchronized blackboard application, and a student/instructor chat tool. There were two levels of access, separately for the instructor and for the students. The simple software suite was accomplished in the spring semester of 1997.
The Web Project at California State University, Northridge, adapted HyperNews from The Turing Institute, a shareware discussion board, to create course-specific discussion spaces for faculty and students. QuizMaker from the University of Hawaii and Internet Relay Chat (IRC) were shortly thereafter added to the shareware suite and indexed to faculty webpages. The "Virtual 7" were seven faculty who began to teach online in 1995 with this software.
University of Aberdeen starts a project to research and evaluate web-based course management and communication tools. Project notes are available, including the original administrator guides for TopClass v.1.2.2b, October 1997 (PDF). Aberdeen ultimately chooses WebCT, and rolls out a live system in 1998.
Pioneer, developed by MEDC (University of Paisley), was an online learning environment developed initially for colleges in Scotland. Pioneer was web-based and featured: online course materials (published by the lecturers themselves); integral email to allow communication between students and tutors; forum tools; chat tools; a timetable/calendar; and activities. The main driver for Pioneer was Jackie Galbraith. When MEDC was closed, the Pioneer development team moved to SCET in 1998, taking Pioneer with them, and it became SCETPioneer, used by Glasgow colleges and a number of other colleges and schools in Scotland. SCET merged with the SCCC and became Learning and Teaching Scotland.
Bob Jensen and Petrea Sandlin republish "Electronic Teaching and Learning: Trends in Adapting to Hypertext, Hypermedia, and Networks in Higher Education", first published in 1994; the text of both versions is available via hyperlink.
Speakeasy Studio and Café (Gary Brown, Travis Beard, Dennis Bennett, Eric Miraglia, and others) (now retired, but many references remain on WSU websites, e.g., these), a course delivery system hosted by Washington State University and used on multiple campuses for web-based discussion and collaborative writing. Speakeasy had a primitive portfolio view that allowed instructors and students to find all the writings of a given author within a course space, by discussion topic or in a calendar view.
The Cougar Crystal Ball (Gary Brown, Randy Lagier, Peg Collins, Greg Turner & Lori Eveleth-Baker and others), an online learning profile and corresponding university resource inventory, implements ideas related to selective release of material based on learner preparedness.
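The "selective release based on learner preparedness" idea can be illustrated with a short hypothetical sketch (all module and field names below are invented, not taken from the Crystal Ball):

```python
def releasable(modules, completed):
    """Return the names of modules whose prerequisites the learner
    has met: a minimal form of selective release by preparedness."""
    done = set(completed)
    return [m["name"] for m in modules if set(m["prereqs"]) <= done]

# Invented example data.
modules = [
    {"name": "intro", "prereqs": []},
    {"name": "essay-1", "prereqs": ["intro"]},
    {"name": "essay-2", "prereqs": ["intro", "essay-1"]},
]
```

As the learner's profile records more completed work, more of the course becomes visible, which is the essence of the selective-release approach.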
The WSU OWL (Online Writing Lab) (Gary Brown, Eric Miraglia, Greg Turner Rahman, Jessie Wolf, & Dennis Bennett) (still in use at WSU and by others), an interactive forum for peer tutoring in writing (WSU Boeing grant award), involving a simple threaded discussion. OWL retired in favor of eTutoring in March 2008.
The VIRTUS project at the University of Cologne, Germany, started development of the web-based ILIAS learning management system in 1997. A first version with an integrated web-based authoring environment went online on 2 November 1998. In 2000 ILIAS became open source software under the GPL.
Serf was invented at the University of Delaware by Dr. Fred Hofstetter during the summer of 1997. Initially used to deliver the U.S.'s first PBS TeleWEBcourse (on Internet Literacy), Serf has been used to deliver hundreds of courses. Serf "began as a self-paced multimedia learning environment that enabled students to navigate a syllabus, access instructional resources, communicate, and submit assignments over the Web," and the Serf feature set was expanded from 1997 to 1999 as described in this article (from College & University Media Review (Fall, 1999), 99-123), which includes a detailed table describing the history of Serf's feature development for versions 1 through 3.
University of Maryland University College (UMUC) offers its first classes using WebTycho, a customized "program developed by UMUC to facilitate course delivery via the World Wide Web".
Paul McKey launches BigTree Online, a commercial, integrated online learning environment for managing the Apple certification program in Asia Pacific, built with FileMaker Pro from a model first described in his Masters thesis in 1996: https://web.archive.org/web/20070804083810/http://www.redbean.com.au/articles/files/masters/06-Chapter6.html
Saba is founded; it is now one of the pre-eminent corporate learning management systems.
FutureMedia (established in 1982) commences the development of Solstra with BT Group PLC, launching the first version of the product in February 1998. (Annual report for 2001 to the SEC.)
(March 1997) Oleg Liber presents his paper "Viewdata and the World Wide Web: Information or Communication" at CAL 97 at the University of Exeter, England. In it he looks back to the use of videotex in education in the 1980s and forward to a more communications-oriented Web (what we would call Web 2.0 these days), but this was 9 years ago. The paper is worthy of note since Liber is still active in e-learning, and since it is one of the few papers dealing with the history of e-learning.
Formal Systems Inc. of Princeton, NJ, USA introduces an internet version of its Assessment Management System, which started as a DOS program in 1990. (In 2000, Formal Systems changed its name to Pedagogue Solutions).
Educom's IMS Design Requirements released in document dated 19 December 1997.
Teaching in the switched-on classroom: An introduction to electronic education and HyperCourseware is published online by Kent Norman at the University of Maryland, College Park, MD: Laboratory for Automation Psychology.
Bob Godwin-Jones and Sue Polyson give a presentation at EDUCOM '97 entitled "Tools for Creating and Managing Interactive Web-based Learning". The presentation compared the features of Web Course in a Box and TopClass. The slides for the presentation are still available online.
The MadDuck Technologies web site listed the many distinctive features of the Web Course in a Box course management system.
An online column by Tom Creed called "The Virtual Companion" lists a number of course management systems including Web Course in a Box, WebCT, Nicenet, and NetForum.
Virtual-U, a course management system for universities, is developed at Simon Fraser University (SFU) in British Columbia, Canada. A design paper, Virtual-U Development Plan: Issues and Process, dated 25 June 1997, gives a clear description including screen shots. By early 1998 the system was deployed in a number of universities and colleges across Canada, including SFU, Laval, Douglas College, McGill, the University of Winnipeg, the University of Guelph, the University of Waterloo, and Aurora College. (Source: The Peak, Simon Fraser University's student newspaper, Volume 98, Issue 6, 16 February 1998.)
A press release dated 10 March 1997 announces that "DLJ's Pershing Division Aligns with Princeton Learning Systems and KnowledgeSoft to Create On-line University". KnowledgeSoft's LOIS (Learning Organization Information System) was described by Brandon Hall, in his book The Web-Based Training Cookbook (New York: John Wiley, 1997), as an "innovative Web-based training administration tool". It had three core modules: a competency management system, an assessment system, and a training management system.
The University of Lincoln and Humberside (ULH) in the UK (later the University of Lincoln) begins development of its "Virtual Campus" software, which was later incorporated into a spin-out company called Teknical, which in 2003 was bought by Serco. Historical references seem fragmentary, but some indication of the date of origin is contained in the overview material on the joint SRHE/Lincoln conference on 'Managing Learning Innovation', which took place on 1 and 2 September 1997 at the university. Substantial funding came from BP, as noted in a web page of the former Learning Development Unit at ULH.
Two key papers on Role-Based Access Control (RBAC) are published: a Kuhn paper on separation of duty (necessary and sufficient conditions for separation safety) and an Osborn paper (in PostScript) on the relationship between RBAC and multilevel security mandatory access control (MLS/MAC) policy models, including a role lemma relating RBAC and multilevel security.
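For readers unfamiliar with RBAC, the core ideas in these papers (permissions attach to roles rather than users, and separation of duty forbids conflicting role combinations) can be sketched as follows; the roles and permissions below are invented LMS-flavoured examples, not taken from either paper:

```python
# Permissions attach to roles, not to individual users.
ROLE_PERMISSIONS = {
    "student": {"read_course"},
    "grader": {"read_course", "assign_grade"},
    "auditor": {"read_course", "review_grades"},
}

# Static separation of duty: no user may hold both roles of a pair.
MUTUALLY_EXCLUSIVE = [{"grader", "auditor"}]

def assign_roles(requested):
    """Grant a role set only if it violates no separation-of-duty pair."""
    roles = set(requested)
    for pair in MUTUALLY_EXCLUSIVE:
        if pair <= roles:
            raise ValueError("separation-of-duty violation: " + ", ".join(sorted(pair)))
    return roles

def permitted(roles, permission):
    """A user may act if any held role carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)
```

Systems such as Bodington's variable-role authorization (mentioned earlier in this timeline) follow the same pattern: checking permissions against roles rather than against individual user identities.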
Al Seagren and Britt Watwood present "The Virtual Classroom: What Works?" at the Annual International Conference of the Chair Academy. Reno, NV. See ERIC Document Reproduction Service No. ED407029. This presentation reviewed two years of the use of Lotus Notes as a learning management system in a masters and doctoral level education degree from the University of Nebraska.
July 1997: The Report of the National Committee of Enquiry into Higher Education, usually called the Dearing Report, is published in the UK. Many of its recommendations were influential not only in the development of e-learning but in the development of the national-level support structures for it, including leading eventually to the Higher Education Academy. The report web site is maintained by the University of Leeds.
April 1997: The project Kolibri (Kooperatives Lernen mittels Internet-basierter Informationstechniken; Cooperative Learning with Internet-based IT) is launched at the University of Dortmund and goes live in February 1998 with a course on fuzzy logic. The Kolibri system was a generic web-based application which supported multiple courses and several user groups (student administration, tutors, students). The application supported personal course histories, personal notes on content, automatic tests and interactive cooperative applets for teamwork in lessons. The system also contained a chat system and a blackboard for information exchange. A report in German is available as PDF [2].
In January 1997, Scott Gray, Tricia Gray, Kendell Welch, and Debra Woods launch Useractive, an online learning resource dedicated to the "useractive learning" pedagogical technique. This technique has its roots in constructivism, but with computer-aided guidance. The asynchronous system is enabled by embedding tutorials and learning management functions into development tools.
In October 1997, the French University of Technology at Compiègne (UTC) launches the first French fully online degree, Dicit, training documentation engineers, using the Lotus LearningSpace platform. The degree was created by Prof. Dominique Boullier and Prof. Jean-Paul Barthes. It offered 15 different courses, a serious game and several case studies on CD-ROM, as well as close coaching of the 20 to 25 students enrolled each year. The format was closer to blended learning, since the students met every two months for a face-to-face session. The degree ran for 10 years, until 2007. Papers were written on this successful experiment: Boullier, Dominique, "Les choix techniques sont des choix pédagogiques : les dimensions multiples d'une expérience de formation à distance" ("Technical choices are pedagogical choices: the multiple dimensions of a distance-learning experiment") [3], Sciences et Techniques Educatives, vol. 8, no. 3-4/2001, pp. 275–299.
1998:
On 11 August 1998, Indiana University's IUPUI campus issues a press release, "Prototype for Web-based Teaching and Learning Environment to be Tested at IUPUI This Year": https://web.archive.org/web/19990222013218/http://www.weblab.iupui.edu/projects/oncourseNR.html
Ucompass.com is founded on 23 July 1998 and begins marketing its Educator course management system.
CourseWork, a web-based problem set manager, is developed at Stanford University's Learning Lab. It formed the core of the CourseWork CMS. This version supported authoring, distribution, completion, and reviewing of automatically graded assignments by students and instructors.
Humboldt State University's Courseware Development Center builds the ExamMaker application for online testing. ExamMaker supports banks of questions, which may include audio and/or video segments, and which may be true/false, fill-in-the-blank, multiple choice, or essay. Essay questions are emailed to the teacher for grading, then sent back to ExamMaker to display the graded essays to the students. ExamMaker grades all other question types and gives the student immediate feedback as soon as the exam is completed, including an explanation of the correct answers, and automatically posts the grade.
On 1 June 1998, a paper describing a web-based peer review and assessment tool developed by the Courseware Development Center at Humboldt State University is presented at the 1998 ASEE Annual Conference & Exposition: Engineering Education Contributing to U.S. Competitiveness. The Peer Review tool was a set of web forms that enabled students to upload documents and review each other's work, and an instructor to review and grade students' uploaded work.
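The grading behaviour described here (objective questions scored immediately with explanations, essays routed to a human) can be approximated in a few lines. This is a hypothetical sketch of the general pattern, not ExamMaker code; all field names are invented:

```python
def grade_question(question, answer):
    """Return (correct, feedback); correct is None when the item
    needs human grading, as with essay questions."""
    if question["type"] == "essay":
        return None, "emailed to the teacher for grading"
    if answer == question["answer"]:
        return True, "correct"
    return False, question["explanation"]

def grade_exam(questions, answers):
    """Immediate feedback: score the objective items, count essays pending."""
    score, pending = 0, 0
    for q, a in zip(questions, answers):
        correct, _ = grade_question(q, a)
        if correct is None:
            pending += 1
        elif correct:
            score += 1
    return score, pending
```

Returning the stored explanation on a wrong answer is what makes the feedback immediate and instructive rather than a bare score.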
On 2 November 1998, the web-based learning management system ILIAS goes online at the University of Cologne. Within one year, more than 30 courses were created and published for blended learning in economics, business administration and the social sciences.
In the spring of 1998, TeleTOP, a set of fill-in forms on top of Lotus Domino, is launched at Twente University, The Netherlands. It was not the first ELO (electronic learning environment) used there, but it was the first in which teachers could create a course themselves without any ICT knowledge. The core of the product was, and is, the central task scheme ("The Roster"), in which the teacher could create a row of activities for each week. A demo course has been available online since 1998; you can still log in with UN: docent.test and PW: docent.test, though this is an old version of TeleTOP. Since 1998 the look and feel has changed completely, and the ELO has many more functions: modules such as Digital Portfolio and Assessment Centre were developed to measure pupils' competences and development, and open standards such as SCORM, IEEE LOM, Dublin Core and AICC were implemented from the start for reuse and research possibilities. Further information can be found at https://web.archive.org/web/20090502090958/http://www.teletop.nl/en/
On 14 May 1998, Indiana University ARTI receives a "Disclosure of Invention" for Oncourse (case #9853) describing the invention of a comprehensive course management system by Ali Jafari and his WebLab developers: a comprehensive CMS with message board, announcements, chat, syllabus, etc., including a dynamic method of creating courses for students and faculty based on data from the campus SIS system.
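The "dynamic method of creating courses based on data from the campus SIS system" amounts to provisioning course sites and rosters automatically from enrollment records instead of by hand. A hypothetical sketch of the pattern (the record fields are invented, not Oncourse's actual schema):

```python
def provision_from_sis(enrollments):
    """Build one course site per SIS course id, with instructor and
    student rosters filled in from the enrollment records."""
    sites = {}
    for rec in enrollments:
        site = sites.setdefault(rec["course_id"],
                                {"instructors": set(), "students": set()})
        key = "instructors" if rec["role"] == "instructor" else "students"
        site[key].add(rec["user"])
    return sites
```

Driving course creation from the registrar's data is what let such systems stand up a site for every section of every course without per-course setup work.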
The Cisco Networking Academy Management System (CNAMS) is released to facilitate communication and course management of the largest blended learning initiative of its time, the Cisco Networking Academy. It includes tools to maintain rosters, gradebooks, forums, as well as a scalable, robust assessment engine. Cisco Networking Academy Program.
The Advanced Information Technology Lab at Indiana University–Purdue University Indianapolis piloted Oncourse. (A description of the initial software was published in 1999 in The Journal.) Nicenet Internet Classroom Assistant (ICA2) is launched, bringing web-based conferencing, personal messaging, document sharing, scheduling and link/resource sharing to a variety of learning environments. DiscoverWare, Inc. builds and begins to deploy its "Nova" course management system, a client/server architecture that delivers rich interactive content in a desktop application and stores/shares information on content, users, courses, and quizzes on a central server. Nova was an adaptive LMS: quizzes were generated based on the user's progress through the content, and courses were generated based on the user's responses to a quiz. The playback engine evolved into a browser-based version that was SCORM Level 2 compliant, enabling deployment of DiscoverWare content in third-party LMSs such as Pathware.
Public release of EDUCOM/NLII Instructional Management Systems Specifications Document Version 0.5 (29 April 1998), produced by an IMS Technical Team including Steve Griffin (COLLEGIS Research Institute), Andy Doyle (International Thomson Publishers), Bob Alcorn (Blackboard), Brad Cox (George Mason University), Frank Farance (Farance Inc), John Barkley (NIST), Ken Schweller (Buena Vista University), Kirsten Boehner (COLLEGIS Research Institute), Mike Pettit (Blackboard), Neal Nored (IBM), Tom Rhodes (NIST), Tom Wason (UNC), Udo Schuermann (Blackboard). Available as DOC from http://aitel.hist.no/prosjekter/ekstern/compnet/Closed/IMS/spec7.doc.
Blackboard LLC merges with CourseInfo LLC to form Blackboard Inc and changes the CourseInfo product name to Blackboard's CourseInfo.
Web Course in a Box, Version 3 is released in 1998. This version added a WhiteBoard feature as well as Student Portfolios, Access Tracking, Course Copying between instructors, and batch account administration.
The Instructional Technology Group at Yale University http://www.yale.edu puts the "Classes" system into production for the Fall semester. (A copy of the original site is captured in the Internet Archive for Spring of '99.) WebTestr is built and deployed by Nicholas Crosby at SIAST.
Fretwell-Downing Education Ltd (now part of Tribal Group plc) builds a pilot web-based learning environment for use in delivering accredited courses in internet skills (information retrieval, web design and online collaboration) in the UK. (Partial details, dated 30/12/1997.) The learning environment is a contribution to the work of the Living IT consortium, which includes The Sheffield College and Manchester College of Arts and Technology as well as Fretwell-Downing Education Ltd, and which had been delivering these courses since 1997. (In 1999, the company demonstrates this learning environment as part of its successful tender to build a larger, more sophisticated learning environment for learndirect, which was subsequently used by hundreds of thousands of learners in England and Wales.) Teemu Leinonen and Hanni Muukkonen publish a paper on Future Learning Environment - Innovative Methods and Applications for Collaborative Learning.
The Future Learning Environment (FLE) research and development project releases the first version of the FLE software, afterwards known as Fle3.
The survey article "Embedding computer conferencing in university teaching" (Mason and Bacsich) is published in Computers and Education, Volume 30, Number 3, April 1998, pp. 249–258. This describes experiences with using CoSy and FirstClass in online learning at the Open University in the period up to 1995. (Article available online e.g. via Ingenta.) CU Online, the virtual campus of the University of Colorado, is described in an online article by Terri Taylor Straut first presented in 1997 at the FLISH97 conference in Sheffield, UK. CU Online uses the LMS from Real Education, later eCollege.com.
Virtual U, "a Web Based Environment Customised to Support Collaborative Learning and Knowledge Building", is described in an online article by Linda Harasim, Tom Calvert and others also first presented at FLISH97. The paper makes it clear that development of Virtual-U has been under way since 1994.
CTLSilhouette (Gary Brown, Randy Lagier, Peg Collins, Josh Yeidel, Greg Turner & Lori Eveleth-Baker), an online survey and automated response generator, allows authors to create custom question types in addition to questions made by a wizard, though it lacks the scoring and feedback features of an online test/quiz. CTLSilhouette powers The TLT Group's Flashlight Online system, which includes the Flashlight Current Student Inventory item bank, a useful tool for evaluation of Virtual Learning Environments and the scholarship of teaching and learning by instructors.
NextEd is founded by its CEO Terry Hilsberg in 1998 to deliver global e-learning from bases in Hong Kong and Australia. Its first prominent university client/partner was the University of Southern Queensland, a major Australian distance learning provider.
Paul McKey joins NextEd as a foundation employee and CTO and begins development of an online learning management system first described in his Masters thesis, "The Development of the On-line Educational Institute", SCU, Australia, July 1996, https://web.archive.org/web/20070804083810/http://www.redbean.com.au/articles/files/masters/06-Chapter6.html In September 1998 the Computer Science department at RMIT University, Australia began delivering its online courses with Serf. Over 10,000 Open University Australia student enrollments used Serf's comprehensive LMS features until 2004, when RMIT's corporate Blackboard was phased in. During this period, Serf versions 1 to 3 hosted 13 undergraduate CS courses, 5 postgraduate CS courses and 3 continuously repeating short IT courses.
September 1998: The EU SCHEMA project (the web site is still extant at http://www.schema.stir.ac.uk/ - full marks to Stirling University) releases via the Oulu team a "state of the art" review specification on CMC techniques applicable to open and distance learning (Deliverable D5.1). This includes a feature and architectural comparison of FirstClass, LearningSpace, TopClass and WebCT, and also describes a desired system, Proto. There is a full discussion of roles, and the diagrams are particularly informative.
In May 1998, Interlynx Multimedia, Inc. of Toronto received a contract to develop a learning management system for the Canadian Imperial Bank of Commerce. The LMS, designed by Dr. Gary Woodill and Dr. Karen Anderson, was built in Microsoft ASP. It included a rudimentary authoring system that allowed HTML pages and multiple choice questions to be built and posted online. The generic code for this LMS became the PROFIS LMS, which was then licensed to several other corporations. Later, Operitel Corporation of Peterborough acquired the rights to this LMS, which was then renamed LearnFlex. Operitel was sold to Open Text in 2012, and Gary Woodill is now CEO of i5 Research.
The Aviation Industry CBT Committee (AICC) certifies web-based Pathware 3 as its "First Instructional Management Product".
Asymetrix (later becoming Click2Learn and then SumTotal) buys Meliora Systems' software for learning management called Ingenium, and merges it with its own learning management product, Toolbook II Librarian, a training management and administration system used with an Oracle, MS SQL Server or other ODBC database. Authoring is done either through Asymetrix' Toolbook II Instructor, Toolbook II Assistant, or through Asymetrix IconAuthor.
In October 1998, CoursePackets.com is founded by Alan Blake, a University of Texas at Austin student, with the goal of posting course packs online.
By the end of 1998, Indiana University's Oncourse system had grown to support some 9,000 students.
December 1998: the School of Pharmacy at the University of Strathclyde launch their online learning environment SPIDER. WebDAV gave a standard method of uploading documents, and was already described in publications in 1998, e.g. "WEBDAV: IETF Standard for Collaborative Authoring on the Web", IEEE Internet Computing, September/October 1998, pages 34–40, and "Collaborative Authoring on the Web: Introducing WebDAV", Bulletin of the American Society for Information Science, Vol. 25, No. 1, October/November 1998, pages 25–29.
By May 1998, a number of course management systems and collaborative environments were available. These systems included CyberProf, a course management system from the University of Illinois; Mallard 3.0, a course management system from the University of Illinois; netLearningPlace, a collaborative environment for teaching and learning; PlaceWare, software for live presentations; POLIS, a system from the University of Arizona; The Learning Manager (TLM), from Campus America, Inc.; Toolbook II from Asymetrix Corporation; TopClass, from WBT Systems; Virtual Classroom Interface (VCI), from the University of Illinois; Virtual Object Interactive Classroom Environment (VOICE), a graphic MOO; Web Course in a Box, developed at Virginia Commonwealth University; WebCT, from the University of British Columbia; Web Instructional Services Headquarters (WISH), from Penn State University; and Web Lecture System (WLS), a web lecturing system from North Carolina State University. (Source: Distance Learning Environments Feature List, University of Iowa, last updated 13 May 1998.) Of these, WebCT is by far the most widely used, with licenses at roughly 500 institutions by year end.
1999: Fronter, a European software company, launches its environment for web-based collaboration. During 1999 to 2001 the system is implemented by the majority of Norwegian higher education institutions and used as their platform for learning and collaboration.
In January 1999 CoursePackets.com goes live, serving dozens of courses at the University of Texas at Austin. The service allowed course packs to be posted online at a substantial discount over the cost of printed materials. By May 1999, CoursePackets.com begins work on a courseware system for launch in January 2000. The courseware system is comparable to Blackboard and is actively marketed as "CourseNotes.com" beginning in the summer of '99.
February 1999: Ossidian Technologies is launched in Dublin, Ireland. Within 6 months the company has developed OLAS, its first web-based LMS. The company begins the process of developing a complete library of eLearning for wireless telecom (cellular, satellite, broadcast, personal and fixed wireless, operations).
September 1999: The IEEE magazine Web-based Learning and Collaboration publishes A Framework for Online Learning: The Virtual-U, describing the history of the Virtual-U system from its inception in 1993. There are screen shots and descriptions. In particular it has a "user interface that gives instructors or moderators the ability to easily set up collaborative groups and define structures, tasks, and objectives". Further, system administrators have tools to help in "creating and maintaining accounts, defining access privileges, and establishing courses on the system".
In October 1999, the UCLA School of Dentistry Media Center and Dr. Glenn Clark develop an Internet-based authoring tool, labeled Internet Courseware (iic), which provides DDS students with simulation modules for diagnosis and treatment planning of patients across a large breadth of possible medical conditions, as well as access to lecture notes, exam reviews, course supplements and faculty contact information. Users are presented access to virtual patients based on class, previous coursework and patient/dentist activity within the system. The project was described in the Journal of Dental Education in 1999 (Clark GT, Carnahan J, Masson P and Watanabe, T. Case-Based Courseware for Distance Learning. J. Dent Educ. 63:71 (#191) 1999).
In October 1999 Liber and Britain publish Framework for Pedagogical Evaluation of Virtual Learning Environments (MS Word file), a study for the United Kingdom Joint Information Systems Committee evaluating 12 different VLEs in detail. The report contains a schematic of a prototypical VLE, comprising 15 generic functionalities, and describes each of these functionalities in turn. There is a narrative description of each of the evaluated VLEs, and a comparative table summarising which features each provides.
The Oncourse Project invented and introduced the notion of an "enterprise course management system", in which data from the Student Information System (SIS) was used to automatically and dynamically create CMS course sites for all the courses offered at the IUPUI campus (more than 6,000 courses offered to more than 27,000 students). https://web.archive.org/web/20070927215408/http://www.aace.org/PUBS/webnet/v1no4/Vol._1_No._4_Jafari.pdf Martin Dougiamas trials early prototypes of Moodle at Curtin University of Technology, built during 1998 and 1999. The paper "Improving the effectiveness of tools for Internet-based education", published in January 2000, details one case study and includes screenshots.
The LON-CAPA project is started at Michigan State University.
Desire2Learn is founded.
The University of Michigan launches CourseTools, originally a product of the UMIE project (launched in 1996); it moved into its own development and production team due to the scale and scope of the LMS being created.
The Omnium Project, based at the College of Fine Arts at the University of New South Wales, ran its first global creative studio project online for 50 design students from 11 countries. See the references below: Outline, the CTIAD journal (ISSN 1365-4349), issue 9, Winter 1999/2000, pp. 17–24; ECi - Education Communication and Information (ISSN 1463-631X print / ISSN 1470-6725 online, DOI 10.1080/14636310120048074), Volume 1, Number 1, 1 May 2001, pp. 103 - online article; Monument (ISSN 1320-1115), Number 36, June/July 2000, pp. 54–57 and included CD-ROM - PDF copy of article; IdN - International Designers Network, Volume 7, Number 1, January 2000, pp. 49–51 - PDF copy of article; Omnium website - History. September 1999: The brand-new Technical University of British Columbia admits its first students. Their 'Course Management System' is a home-grown system with over two years of development behind it at this point.
Web Course in a Box, version 4 was released by madDuck Technologies in early 1999. WCB Version 4 added a gradebook and assignment manager. Companion products, Web Campus in a Box (for creating web pages for a department or program) and Web CourseBuilder Toolbox (for creating faculty web pages and forums, and course listings that were independent of the WCB system), were released in this same time period.
WebCT purchased by Universal Learning Technology. Roughly 1000 campuses using WebCT by end of year.
"Courseware Accessibility Study" published, evaluating 7 online courseware systems for their accessibility.
Stephen Downes publishes Web-Based Courses: The Assiniboine Model in the Online Journal of Distance Learning Administration.
The University of South Australia launches its web-based online learning platform, UniSAnet in March 1999. UniSAnet was developed over 9 months in 1998 and 1999, following a paper to its Academic Board in May 1998.
Wolfgang Appelt and Peter Mambrey publish a paper on using BSCW as a virtual learning environment.
ETUDES 2.3 is released; ETUDES 2.5 follows in December. The system is used at several community colleges in California, including Foothill, Las Positas, and MiraCosta.
"Practical Know How: Distance Education and Training over the Internet" (Jissen Nouhau Inta-netto de Enkaku Kyouiku/Kenshuu) by Douyama Shinichi is published in April 1999 by NTT Publishing (ISBN 4-7571-0016-7). "It would seem easy to begin distance learning and distance education over the Internet, as an extension of (conventional) distance learning. When it comes to teaching several hundred students in this way, however, there are a number of problems still to be resolved at this time. In this book we will consider the selection of teaching materials, the making of online contents, and management methods, and introduce concrete practical know-how with good cost performance and lots of practical advice." Chapter one details the trial of an Internet distance learning system, from sending out invitations to graduation.
Sheffield company Fretwell Downing is marketing its "LE" (Learning Environment) product. September 1999 product overview.
Washington State University publishes online a comparison of 24 VLEs, focusing on 8 that were considered candidates for adoption at WSU. (Note: only the final draft survives in the archives.) A thorough "Comparison of Online Course Delivery Software Products" is published by Marshall University - with a stated last update of 1 October 1999 - examining in detail the features and functionalities of 16 mainly US and Canadian systems (Marshall University web site version; Wayback Machine version). The Bridge (Gary Brown, Mathew Shirey, Dennis Bennett, Greg Turner-Rahman), now retired but available read-only, is a course management system with sub-spaces for teams that empowers students to create resource objects (threaded discussion, file upload, web links, notes, and quizzes) in the course. Bridge also had a "personal workspace" that provided the same collaborative and ePortfolio tools to individuals outside any course offering; the concept was not fully implemented, as there was no mechanism to authorize users into one's personal workspace.
Northern Virginia Community College (NVCC)'s Extended Learning Institute (ELI) begins using Allaire Forums for web-based conferencing in a variety of online/distance courses.
University of Maryland University College (UMUC) unveils Version 2.0 of its customized WebTycho program with a new interface design. Through Fall 1999, UMUC has installed WebTycho servers on three continents and served over 26,000 students and faculty in over 1,000 WebTycho courses.
In spring 1999 the development of the open-source LMS OLAT was initiated by Sabina Jeger, Franziska Schneider and Florian Gnägi to support a tutoring course with 900 students at the University of Zurich. The system was put into production in fall 1999, when the 900 students registered for 25 classes that were coached by older students. This first version of OLAT was built on LAMP technology; later, the system was completely rebuilt on Java EE technology to support the e-learning needs of a whole campus.
IBM's Lotus group buys Macromedia's Pathware 4 learning management system. This LMS is later merged into the Lotus Learning Space LMS. For article on the purchase, see here.
Isopia (actually founded in 1998) entered the e-learning landscape in 1999 with the launch of its Integrated Learning Management System (ILMS), its web-based infrastructure software. Built on Enterprise JavaBeans, Isopia claimed to be "a flexible, open system that allows for massive scalability and adapts to a variety of learning needs and rapidly-growing user communities". Isopia grew rapidly in clients and deals (e.g. see the industry testimonials to its feature list from 1999 and early 2000 at http://www.isopia.com/the_industry/sys.html) until being bought by Sun Microsystems in 2001. Knowledge Navigators International releases its third version of LearningEngine as MyLearningPlace, used by the United Nations Development Programme for several years for worldwide communities of practice and adopted by a large architectural firm in California. The company closed in 2001; a new incarnation of the software lives on as www.coachingplatform.com.
"First Annual WebCT Conference on Learning Technologies" takes place at the University of British Columbia in Vancouver, Canada from 17 to 18 June. Tim Barker presents a paper, "Community Based Virtual Learning: A WebCT Physics Course", comparing three VLEs (WebCT, TopClass and Learning Space) plus Eventware (web annotations & chat), Ceilidh & Tree of Knowledge (discussion boards), NetMeeting (whiteboard, chat etc.), Inspiration (concept mapping) & Composer/Writers Assistant (scaffolds the writing process). Additionally, Tim proposes integrating a Learning Companion. This conference represents a milestone as one of the first VLE user conferences. It is a significant success, with 700 in attendance, and poses a logistical exercise for the organisers, who were originally expecting between 50 and 100; registration had to be closed more than a month before the conference date due to the large numbers.
5 December 1999: Randy Graebner's proposal for his master's thesis, Online Education Through Shared Resources. The BENVIC project started in late 1999 and ran for two years. Its aim was to benchmark the various virtual campuses (i.e. university-level distance e-learning services) operating across Europe. The BENVIC web site contains several useful outcomes. The project became quiescent in early 2002. It represented a move beyond benchmarking VLEs to benchmarking e-learning at a higher level, i.e. the services which the VLEs underpinned.
Dennis Tsichritzis of the University of Geneva publishes "Reengineering the University" (Communications of the ACM Vol. 42, Issue 6, June 1999). One reviewer observes "This is a must-read article for academics" but later cautions that "most traditional college students, particularly in the US, do not have the self-discipline to adjust to the educational environment Tsichritzis describes." Scholastic Corporation publishes Read180, an application for Macs & PCs to improve reading skills in schools. Read180 shipped with sets of CD-ROMs on various topics, each with video presentations and interactive tests. Audio recording sessions by students were sent over the network to a teacher's workstation for evaluation.
**Voiced pharyngeal fricative**
The voiced pharyngeal approximant or fricative is a type of consonantal sound, used in some spoken languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨ʕ⟩, and the equivalent X-SAMPA symbol is ?\. Epiglottals and epiglotto-pharyngeals are often mistakenly taken to be pharyngeal.
Although traditionally placed in the fricative row of the IPA chart, [ʕ] is usually an approximant. The IPA symbol itself is ambiguous, but no language is known to make a phonemic distinction between fricatives and approximants at this place of articulation. The approximant is sometimes specified as [ʕ̞] or as [ɑ̯], because it is the semivocalic equivalent of [ɑ].
Features:
Features of the voiced pharyngeal approximant/fricative: Its manner of articulation varies between approximant and fricative, which means it is produced by narrowing the vocal tract at the place of articulation, but generally not enough to produce much turbulence in the airstream. Languages do not distinguish voiced fricatives from approximants produced in the throat.
Its place of articulation is pharyngeal, which means it is articulated with the tongue root against the back of the throat (the pharynx).
Its phonation is voiced, which means the vocal cords vibrate during the articulation.
It is an oral consonant, which means air is allowed to escape through the mouth only.
It is a central consonant, which means it is produced by directing the airstream along the center of the tongue, rather than to the sides.
The airstream mechanism is pulmonic, which means it is articulated by pushing air solely with the intercostal muscles and diaphragm, as in most sounds.
Occurrence:
Pharyngeal consonants are not widespread. Sometimes, a pharyngeal approximant develops from a uvular approximant. Many languages that have been described as having pharyngeal fricatives or approximants turn out on closer inspection to have epiglottal consonants instead. For example, the candidate /ʕ/ sound in Arabic and standard Hebrew (not modern Hebrew – Israelis generally pronounce this as a glottal stop) has been variously described as a voiced epiglottal fricative, an epiglottal approximant, or a pharyngealized glottal stop.
**Translate (Apple)**
Translate is a translation app developed by Apple for their iOS and iPadOS devices. Introduced on June 22, 2020, it functions as a service for translating text sentences or speech between several languages and was officially released on September 16, 2020, along with iOS 14. All translations are processed through the neural engine of the device, and as such the app can be used offline. On June 7, 2021, Apple announced that the app would be available on iPad models running iPadOS 15, as well as Macs running macOS Monterey. The app was officially released for iPad models on September 20, 2021, along with iPadOS 15. It was also released for Mac models on October 25, 2021, along with macOS Monterey.
On June 6, 2022, Apple announced six new languages, Turkish, Indonesian, Polish, Dutch, Thai and Vietnamese. The six new languages work on iPhone 8 or later, iPhone 8 Plus or later, iPhone X or later, iPhone SE (2nd generation) or later, iPad Air (3rd generation) or later, all iPad Pro models, iPad Mini (5th generation) or later and iPad (5th generation) or later. The Turkish, Indonesian, Polish, Dutch and Thai languages were added to the app on June 22, 2022, the second anniversary of the announcement of the app. The Vietnamese language was added to the app on July 27, 2022.
Languages:
Translate originally supported translation between the UK (British) and US (American) dialects of English, Arabic, Mandarin Chinese, French, German, the European dialect of Spanish, Italian, Japanese, Korean, the Brazilian dialect of Portuguese and Russian. This grew to 17 languages when six new languages (Turkish, Indonesian, Polish, Dutch, Thai and Vietnamese) were added in 2022. All languages support dictation and can be downloaded for offline use.
**G-fibration**
In algebraic topology, a G-fibration or principal fibration is a generalization of a principal G-bundle, just as a fibration is a generalization of a fiber bundle. By definition, given a topological monoid G, a G-fibration is a fibration p: P→B together with a continuous right monoid action P × G → P such that (1) p(xg)=p(x) for all x in P and g in G.
(2) For each x in P, the map G → p−1(p(x)), g ↦ xg, is a weak equivalence. A principal G-bundle is a prototypical example of a G-fibration. Another example is Moore's path space fibration: namely, let P′X be the space of paths of various lengths in a based space X. Then the fibration p: P′X → X that sends each path to its end-point is a G-fibration, with G the space of loops of various lengths in X.
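The two defining conditions above can be transcribed compactly in symbols (a restatement of the definition already given, nothing added):

```latex
% G-fibration: G a topological monoid, p : P \to B a fibration,
% with a continuous right action P \times G \to P.
\begin{align*}
\text{(1)}\quad & p(xg) = p(x) && \text{for all } x \in P,\ g \in G;\\
\text{(2)}\quad & G \longrightarrow p^{-1}(p(x)),\quad g \longmapsto xg,
  && \text{is a weak equivalence for each } x \in P.
\end{align*}
```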
**Testin**
Testin, also known as TESS, is a protein that in humans is encoded by the TES gene located on chromosome 7. TES is a 47 kDa protein composed of 421 amino acids, found at focal adhesions and thought to have a role in the regulation of cell motility. In addition, TES functions as a tumour suppressor. The TES gene is located within a fragile region of chromosome 7, and the promoter elements of the TES gene have been shown to be susceptible to methylation, which prevents expression of the TES protein. TES came to greater prominence towards the end of 2007, when a potential mechanism for its tumour suppressor function was published.
Domain organisation:
Tes is composed of an N-terminal cysteine-rich domain, a PET domain and three C-terminal LIM domains. The structures of the cysteine-rich domain and the PET domain are not known. LIM domains, however, are known as modulators of protein interactions; a LIM domain consists of 2 zinc fingers separated by 2 hydrophobic amino acids (generally a phenylalanine and then a leucine).
Binding partners:
TES does not appear to be an enzyme; rather, it is a protein that mediates/regulates cellular functions via protein–protein interactions. Pull-down experiments reveal that TES has putative interactions mediated by the indicated domains. Garvalov et al. showed that the interaction between TES and zyxin is direct, using recombinant proteins expressed in E. coli. Some of the potential binding partners (zyxin, Mena) can be found in focal adhesion complexes; the range of binding partners indicates a potential role for TES in between 'privileged' actin polymerisation and focal adhesion contacts to the extracellular matrix. This tallies with the observation that GFP-tagged TES can be seen at focal adhesions.
TES as a tumour suppressor:
In December 2007, Boeda, Briggs et al. showed that the third LIM domain of TES displaces Mena from its usual subcellular positions (focal adhesions or the cell's leading edge). The Ena/VASP protein family (of which Mena is a member) is anchored to specific proteins within the cell by a peptide motif consisting of a phenylalanine residue followed by four proline residues, known as an FPPPP motif. It is the EVH1 domain of Ena/VASP proteins that directly contacts the FPPPP motif. The precise architecture of TES:Mena binding was revealed by X-ray crystallography, which showed that the third LIM domain of TES covers up the FPPPP binding site within Mena's EVH1 domain. Isothermal titration calorimetry showed that TES has a greater affinity for Mena than its canonical FPPPP ligand, as presented in the focal adhesion protein zyxin. Using microscopy it was shown that over-expression of either GFP-tagged TES or just the tagged third LIM domain displaced Mena from focal adhesions and reduced mean cell velocity.
These findings were significant given that Mena is often over-expressed in cancer cells and is thought to be partly responsible for cancer cell motility, and therefore a factor in cancer metastasis; TES, conversely, is often not produced in cancer cells. It is possible that a drug designed to mimic TES's interaction with Mena could be used to prevent metastasis and thus the development of secondary tumours in cancer patients. The work was widely reported in the British press (it was carried out by Cancer Research UK) and also in the international press.
Conformational change:
Based on the observations that (1) mammalian-cell-derived TES binds zyxin, (2) E. coli-produced recombinant TES (rTES) does not bind zyxin, (3) an rTES construct composed of residues 201–421 (i.e., the linker and all 3 LIM domains) does bind zyxin, and (4) this rTES construct binds an N-terminal rTES construct consisting of the cysteine-rich and PET domains (i.e., the two halves of TES interact with each other), Garvalov et al. propose that TES exists in two conformational states: a 'closed' state in which the N- and C-terminal halves of TES interact, obscuring the zyxin binding site in LIM1, and an 'open' state in which the zyxin binding site is accessible and the two halves no longer interact in the same fashion, if at all. The regulatory mechanism switching between the two states is not presently fully understood.
Phenotype:
In RNAi experiments, cells that had impaired TES expression showed an inability to correctly organise their focal adhesions and actin stress fibres.
In gene knockout experiments, transgenic mice lacking both copies of the TES gene displayed an increased susceptibility to tumour formation when challenged with a carcinogen. Mice retaining the TES gene were less susceptible: thus, TES is a tumour suppressor gene.
**Three-dimensional space**
Three-dimensional space:
In geometry, a three-dimensional space (3D space, 3-space or, rarely, tri-dimensional space) is a mathematical space in which three values (coordinates) are required to determine the position of a point. Most commonly, it is the three-dimensional Euclidean space, the Euclidean n-space of dimension n=3 that models physical space. More general three-dimensional spaces are called 3-manifolds.
Technically, a tuple of n numbers can be understood as the Cartesian coordinates of a location in an n-dimensional Euclidean space. The set of these n-tuples is commonly denoted ℝn, and can be identified with the pair formed by an n-dimensional Euclidean space and a Cartesian coordinate system.
Three-dimensional space:
When n = 3, this space is called the three-dimensional Euclidean space (or simply "Euclidean space" when the context is clear). It serves as a model of the physical universe (when relativity theory is not considered), in which all known matter exists. While this space remains the most compelling and useful way to model the world as it is experienced, it is only one example of a large variety of spaces in three dimensions called 3-manifolds. In this classical example, when the three values refer to measurements in different directions (coordinates), any three directions can be chosen, provided that vectors in these directions do not all lie in the same 2-space (plane). Furthermore, in this case, these three values can be labeled by any combination of three chosen from the terms width/breadth, height/depth, and length.
History:
Books XI to XIII of Euclid's Elements dealt with three-dimensional geometry. Book XI develops notions of orthogonality and parallelism of lines and planes, and defines solids including parallelepipeds, pyramids, prisms, spheres, octahedra, icosahedra and dodecahedra. Book XII develops notions of similarity of solids. Book XIII describes the construction of the five regular Platonic solids in a sphere.
In the 17th century, three-dimensional space was described with Cartesian coordinates, with the advent of analytic geometry developed by René Descartes in his work La Géométrie and Pierre de Fermat in the manuscript Ad locos planos et solidos isagoge (Introduction to Plane and Solid Loci), which was unpublished during Fermat's lifetime. However, only Fermat's work dealt with three-dimensional space.
History:
In the 19th century, developments of the geometry of three-dimensional space came with William Rowan Hamilton's development of the quaternions. In fact, it was Hamilton who coined the terms scalar and vector, and they were first defined within his geometric framework for quaternions. Three-dimensional space could then be described by quaternions q = a + ui + vj + wk which had vanishing scalar component, that is, a = 0. While not explicitly studied by Hamilton, this indirectly introduced notions of basis, here given by the quaternion elements i, j, k, as well as the dot product and cross product, which correspond to (the negative of) the scalar part and the vector part of the product of two vector quaternions.
History:
It was not until Josiah Willard Gibbs that these two products were identified in their own right, and the modern notation for the dot and cross product were introduced in his classroom teaching notes, found also in the 1901 textbook Vector Analysis written by Edwin Bidwell Wilson based on Gibbs' lectures.
Also during the 19th century came developments in the abstract formalism of vector spaces, with the work of Hermann Grassmann and Giuseppe Peano, the latter of whom first gave the modern definition of vector spaces as an algebraic structure.
In Euclidean geometry:
Coordinate systems In mathematics, analytic geometry (also called Cartesian geometry) describes every point in three-dimensional space by means of three coordinates. Three coordinate axes are given, each perpendicular to the other two at the origin, the point at which they cross. They are usually labeled x, y, and z. Relative to these axes, the position of any point in three-dimensional space is given by an ordered triple of real numbers, each number giving the distance of that point from the origin measured along the given axis, which is equal to the distance of that point from the plane determined by the other two axes. Other popular methods of describing the location of a point in three-dimensional space include cylindrical coordinates and spherical coordinates, though there are an infinite number of possible methods. For more, see Euclidean space.
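To make the alternative coordinate systems mentioned above concrete, here is a minimal Python sketch (not part of the original article; the function names are invented for illustration) converting between Cartesian and spherical coordinates, with the polar angle measured from the z-axis:

```python
import math

def cartesian_to_spherical(x, y, z):
    """Convert Cartesian (x, y, z) to spherical (r, theta, phi),
    where theta is the polar angle from the +z axis and phi the azimuth."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0
    phi = math.atan2(y, x)
    return r, theta, phi

def spherical_to_cartesian(r, theta, phi):
    """Inverse conversion back to Cartesian coordinates."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))
```

Round-tripping any point through both functions returns the original coordinates, which is a quick way to check the conventions used.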
In Euclidean geometry:
Below are images of the above-mentioned systems.
Lines and planes Two distinct points always determine a (straight) line. Three distinct points are either collinear or determine a unique plane. On the other hand, four distinct points can either be collinear, coplanar, or determine the entire space.
Two distinct lines can either intersect, be parallel or be skew. Two parallel lines, or two intersecting lines, lie in a unique plane, so skew lines are lines that do not meet and do not lie in a common plane.
In Euclidean geometry:
Two distinct planes can either meet in a common line or are parallel (i.e., do not meet). Three distinct planes, no pair of which are parallel, can either meet in a common line, meet in a unique common point, or have no point in common. In the last case, the three lines of intersection of each pair of planes are mutually parallel.
In Euclidean geometry:
A line can lie in a given plane, intersect that plane in a unique point, or be parallel to the plane. In the last case, there will be lines in the plane that are parallel to the given line.
In Euclidean geometry:
A hyperplane is a subspace of one dimension less than the dimension of the full space. The hyperplanes of a three-dimensional space are the two-dimensional subspaces, that is, the planes. In terms of Cartesian coordinates, the points of a hyperplane satisfy a single linear equation, so planes in this 3-space are described by linear equations. A line can be described by a pair of independent linear equations—each representing a plane having this line as a common intersection.
In Euclidean geometry:
Varignon's theorem states that the midpoints of the sides of any quadrilateral in ℝ3 form a parallelogram, and hence are coplanar.
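As an illustrative check (a Python sketch under invented names, not part of the article), the theorem can be verified numerically for an arbitrary non-planar quadrilateral: the midpoint quadrilateral is a parallelogram exactly when its opposite sides are equal as vectors.

```python
def midpoint(p, q):
    return tuple((a + b) / 2 for a, b in zip(p, q))

def varignon_midpoints(quad):
    """Midpoints of the four sides of a quadrilateral given as 4 vertices."""
    a, b, c, d = quad
    return midpoint(a, b), midpoint(b, c), midpoint(c, d), midpoint(d, a)

def is_parallelogram(m1, m2, m3, m4):
    """A quadrilateral is a parallelogram iff opposite sides are equal vectors."""
    side1 = tuple(x - y for x, y in zip(m2, m1))
    side2 = tuple(x - y for x, y in zip(m3, m4))
    return all(abs(u - v) < 1e-9 for u, v in zip(side1, side2))
```

Any four points in ℝ3, even far from coplanar, pass this check.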
Spheres and balls A sphere in 3-space (also called a 2-sphere because it is a 2-dimensional object) consists of the set of all points in 3-space at a fixed distance r from a central point P. The solid enclosed by the sphere is called a ball (or, more precisely, a 3-ball).
In Euclidean geometry:
The volume of the ball is given by V = (4/3)πr³ and the surface area of the sphere is A = 4πr². Another type of sphere arises from a 4-ball, whose three-dimensional surface is the 3-sphere: the set of points equidistant from the origin of the Euclidean space ℝ4. If a point has coordinates P(x, y, z, w), then x² + y² + z² + w² = 1 characterizes those points on the unit 3-sphere centered at the origin.
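The two formulas above are related: the surface area is the derivative of the volume with respect to the radius. A short Python sketch (illustrative, with invented function names) makes both formulas and this relation checkable:

```python
import math

def ball_volume(r):
    """Volume of the 3-ball of radius r: (4/3) * pi * r**3."""
    return 4.0 / 3.0 * math.pi * r ** 3

def sphere_area(r):
    """Surface area of the 2-sphere of radius r: 4 * pi * r**2."""
    return 4.0 * math.pi * r ** 2
```

A numerical derivative of `ball_volume` at any radius agrees with `sphere_area`, reflecting that growing a ball by a thin shell adds volume equal to (area × thickness).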
In Euclidean geometry:
This 3-sphere is an example of a 3-manifold: a space which 'looks locally' like 3D space. In precise topological terms, each point of the 3-sphere has a neighborhood which is homeomorphic to an open subset of 3D space.
Polytopes In three dimensions, there are nine regular polytopes: the five convex Platonic solids and the four nonconvex Kepler-Poinsot polyhedra.
In Euclidean geometry:
Surfaces of revolution A surface generated by revolving a plane curve about a fixed line in its plane as an axis is called a surface of revolution. The plane curve is called the generatrix of the surface. A section of the surface, made by intersecting the surface with a plane that is perpendicular (orthogonal) to the axis, is a circle.
In Euclidean geometry:
Simple examples occur when the generatrix is a line. If the generatrix line intersects the axis line, the surface of revolution is a right circular cone with vertex (apex) the point of intersection. However, if the generatrix and axis are parallel, then the surface of revolution is a circular cylinder.
In Euclidean geometry:
Quadric surfaces In analogy with the conic sections, the set of points whose Cartesian coordinates satisfy the general equation of the second degree, namely Ax² + By² + Cz² + Fxy + Gyz + Hxz + Jx + Ky + Lz + M = 0, where A, B, C, F, G, H, J, K, L and M are real numbers and not all of A, B, C, F, G and H are zero, is called a quadric surface. There are six types of non-degenerate quadric surfaces: the ellipsoid, the hyperboloid of one sheet, the hyperboloid of two sheets, the elliptic cone, the elliptic paraboloid and the hyperbolic paraboloid. The degenerate quadric surfaces are the empty set, a single point, a single line, a single plane, a pair of planes or a quadratic cylinder (a surface consisting of a non-degenerate conic section in a plane π and all the lines of ℝ3 through that conic that are normal to π). Elliptic cones are sometimes considered to be degenerate quadric surfaces as well.
In Euclidean geometry:
Both the hyperboloid of one sheet and the hyperbolic paraboloid are ruled surfaces, meaning that they can be made up from a family of straight lines. In fact, each has two families of generating lines; the members of each family are disjoint, and each member of one family intersects, with just one exception, every member of the other family. Each family is called a regulus.
In linear algebra:
Another way of viewing three-dimensional space is found in linear algebra, where the idea of independence is crucial. Space has three dimensions because the length of a box is independent of its width or breadth. In the technical language of linear algebra, space is three-dimensional because every point in space can be described by a linear combination of three independent vectors.
In linear algebra:
Dot product, angle, and length A vector can be pictured as an arrow. The vector's magnitude is its length, and its direction is the direction the arrow points. A vector in ℝ3 can be represented by an ordered triple of real numbers. These numbers are called the components of the vector.
The dot product of two vectors A = [A1, A2, A3] and B = [B1, B2, B3] is defined as: A⋅B = A1B1 + A2B2 + A3B3 = ∑i AiBi (summing over i = 1, 2, 3).
The magnitude of a vector A is denoted by ‖A‖. The dot product of a vector A = [A1, A2, A3] with itself is A⋅A = ‖A‖² = A1² + A2² + A3², which gives ‖A‖ = √(A⋅A) = √(A1² + A2² + A3²), the formula for the Euclidean length of the vector.
Without reference to the components of the vectors, the dot product of two non-zero Euclidean vectors A and B is given by A⋅B = ‖A‖ ‖B‖ cos θ, where θ is the angle between A and B.
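The component formula, the length formula, and the angle formula above can be combined in a few lines of Python (an illustrative sketch, not part of the article; the function names are invented):

```python
import math

def dot(a, b):
    """Component formula: A.B = A1*B1 + A2*B2 + A3*B3."""
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    """Euclidean length: the square root of A.A."""
    return math.sqrt(dot(a, a))

def angle(a, b):
    """Angle between non-zero vectors, solved from A.B = |A||B| cos(theta)."""
    return math.acos(dot(a, b) / (norm(a) * norm(b)))
```

For example, the standard x- and y-axis directions are at a right angle, and the 3-4-5 triangle gives a vector of length exactly 5.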
In linear algebra:
Cross product The cross product or vector product is a binary operation on two vectors in three-dimensional space and is denoted by the symbol ×. The cross product A × B of the vectors A and B is a vector that is perpendicular to both and therefore normal to the plane containing them. It has many applications in mathematics, physics, and engineering.
In linear algebra:
In function language, the cross product is a function ×: ℝ3 × ℝ3 → ℝ3. The components of the cross product are A×B = [A2B3 − B2A3, A3B1 − B3A1, A1B2 − B1A2], and can also be written in components, using Einstein summation convention, as (A×B)i = ϵijk Aj Bk, where ϵijk is the Levi-Civita symbol. It has the property that A×B = −B×A. Its magnitude is related to the angle θ between A and B by the identity ‖A×B‖ = ‖A‖ ‖B‖ |sin θ|. The space and product form an algebra over a field, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket. Specifically, the space together with the product, (ℝ3, ×), is isomorphic to the Lie algebra of three-dimensional rotations, denoted so(3). In order to satisfy the axioms of a Lie algebra, instead of associativity the cross product satisfies the Jacobi identity: for any three vectors A, B and C, A×(B×C) + B×(C×A) + C×(A×B) = 0. One can in n dimensions take the product of n − 1 vectors to produce a vector perpendicular to all of them. But if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions.
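The component formula, the anticommutativity A×B = −B×A, and the Jacobi identity can all be checked directly in Python (a sketch for illustration; the helper names are invented):

```python
def cross(a, b):
    """Component formula for the cross product A x B."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def jacobi(a, b, c):
    """Ax(BxC) + Bx(CxA) + Cx(AxB); identically zero for the cross product."""
    return add(add(cross(a, cross(b, c)),
                   cross(b, cross(c, a))),
               cross(c, cross(a, b)))
```

With integer inputs the Jacobi sum comes out exactly (0, 0, 0), since no floating-point rounding is involved.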
In linear algebra:
Abstract description It can be useful to describe three-dimensional space as a three-dimensional vector space V over the real numbers. This differs from ℝ3 in a subtle way. By definition, there exists a basis B = {e1, e2, e3} for V. This corresponds to an isomorphism between V and ℝ3. However, there is no 'preferred' or 'canonical' basis for V. On the other hand, there is a preferred basis for ℝ3, which is due to its description as a Cartesian product of copies of ℝ, that is, ℝ3 = ℝ × ℝ × ℝ. This allows the definition of canonical projections πi: ℝ3 → ℝ, where 1 ≤ i ≤ 3. For example, π1(x1, x2, x3) = x1. This then allows the definition of the standard basis {E1, E2, E3}, defined by (Ei)j = δij, where δij is the Kronecker delta. Written out in full, the standard basis is E1 = (1, 0, 0), E2 = (0, 1, 0), E3 = (0, 0, 1). Therefore ℝ3 can be viewed as the abstract vector space, together with the additional structure of a choice of basis. Conversely, V can be obtained by starting with ℝ3 and 'forgetting' the Cartesian product structure, or equivalently the standard choice of basis.
In linear algebra:
As opposed to a general vector space V, the space ℝ3 is sometimes referred to as a coordinate space. Physically, it is conceptually desirable to use the abstract formalism in order to assume as little structure as possible if it is not given by the parameters of a particular problem. For example, in a problem with rotational symmetry, working with the more concrete description of three-dimensional space ℝ3 assumes a choice of basis, corresponding to a set of axes. But in rotational symmetry, there is no reason why one set of axes should be preferred over, say, the same set of axes rotated arbitrarily. Stated another way, a preferred choice of axes breaks the rotational symmetry of physical space.
In linear algebra:
Computationally, it is necessary to work with the more concrete description R3 in order to do concrete computations.
In linear algebra:
Affine description A more abstract description still is to model physical space as a three-dimensional affine space E(3) over the real numbers. This is unique up to affine isomorphism. It is sometimes referred to as three-dimensional Euclidean space. Just as the vector space description came from 'forgetting the preferred basis' of ℝ3, the affine space description comes from 'forgetting the origin' of the vector space. Euclidean spaces are sometimes called Euclidean affine spaces to distinguish them from Euclidean vector spaces. This is physically appealing as it makes the translation invariance of physical space manifest. A preferred origin breaks the translational invariance.
In linear algebra:
Inner product space The above discussion does not involve the dot product. The dot product is an example of an inner product. Physical space can be modelled as a vector space which additionally has the structure of an inner product. The inner product defines notions of length and angle (and therefore in particular the notion of orthogonality). For any inner product, there exist bases under which the inner product agrees with the dot product, but again, there are many different possible bases, none of which are preferred. They differ from one another by a rotation, an element of the group of rotations SO(3).
In calculus:
Gradient, divergence and curl In a rectangular coordinate system, the gradient of a (differentiable) function f: ℝ3 → ℝ is given by ∇f = (∂f/∂x)i + (∂f/∂y)j + (∂f/∂z)k, and in index notation is written (∇f)i = ∂if. The divergence of a (differentiable) vector field F = U i + V j + W k, that is, a function F: ℝ3 → ℝ3, is equal to the scalar-valued function: div F = ∇⋅F = ∂U/∂x + ∂V/∂y + ∂W/∂z.
In calculus:
In index notation, with Einstein summation convention, this is ∇⋅F = ∂iFi. Expanded in Cartesian coordinates (see Del in cylindrical and spherical coordinates for spherical and cylindrical coordinate representations), the curl ∇ × F is, for F composed of [Fx, Fy, Fz], the formal determinant with rows (i, j, k), (∂/∂x, ∂/∂y, ∂/∂z) and (Fx, Fy, Fz), where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. This expands as follows: (∂Fz/∂y − ∂Fy/∂z)i + (∂Fx/∂z − ∂Fz/∂x)j + (∂Fy/∂x − ∂Fx/∂y)k.
In calculus:
In index notation, with Einstein summation convention, this is (∇×F)i = ϵijk ∂jFk, where ϵijk is the totally antisymmetric symbol, the Levi-Civita symbol.
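The three operators can be approximated numerically with central differences, which gives a quick sanity check on the component formulas above. This is an illustrative Python sketch (not from the article; all names are invented):

```python
def partial(f, p, i, h=1e-6):
    """Central-difference approximation of the partial derivative of f
    at the point p along axis i."""
    q = list(p); q[i] += h
    r = list(p); r[i] -= h
    return (f(tuple(q)) - f(tuple(r))) / (2 * h)

def grad(f, p):
    """Gradient of a scalar field f: R^3 -> R."""
    return tuple(partial(f, p, i) for i in range(3))

def div(F, p):
    """Divergence of a vector field F: R^3 -> R^3 (sum of d_i F_i)."""
    return sum(partial(lambda q, i=i: F(q)[i], p, i) for i in range(3))

def curl(F, p):
    """Curl of F via the expanded component formula."""
    d = [[partial(lambda q, k=k: F(q)[k], p, j) for k in range(3)]
         for j in range(3)]          # d[j][k] approximates d_j F_k
    return (d[1][2] - d[2][1], d[2][0] - d[0][2], d[0][1] - d[1][0])
```

The rotational field F = (−y, x, 0) is a standard test case: its divergence vanishes and its curl is the constant vector (0, 0, 2).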
Line integrals, surface integrals, and volume integrals For some scalar field f : U ⊆ Rn → R, the line integral along a piecewise smooth curve C ⊂ U is defined as ∫Cfds=∫abf(r(t))|r′(t)|dt.
where r: [a, b] → C is an arbitrary bijective parametrization of the curve C such that r(a) and r(b) give the endpoints of C and a < b. For a vector field F : U ⊆ Rn → Rn, the line integral along a piecewise smooth curve C ⊂ U, in the direction of r, is defined as ∫C F(r)⋅dr = ∫ab F(r(t))⋅r′(t) dt.
where · is the dot product and r: [a, b] → C is a bijective parametrization of the curve C such that r(a) and r(b) give the endpoints of C.
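The scalar line integral definition above translates directly into a midpoint-rule approximation. The following Python sketch (illustrative only; names invented) integrates f over a parametrized curve:

```python
import math

def scalar_line_integral(f, r, rprime, a, b, n=2000):
    """Midpoint-rule approximation of the line integral of f over C,
    i.e. the integral of f(r(t)) |r'(t)| dt for t in [a, b]."""
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        speed = math.sqrt(sum(c * c for c in rprime(t)))  # |r'(t)|
        total += f(r(t)) * speed * h
    return total
```

Integrating f = 1 over the unit circle recovers the circumference 2π, and integrating f = x² over the same curve gives π, matching the exact values.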
In calculus:
A surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double integral analog of the line integral. To find an explicit formula for the surface integral, we need to parameterize the surface of interest, S, by considering a system of curvilinear coordinates on S, like the latitude and longitude on a sphere. Let such a parameterization be x(s, t), where (s, t) varies in some region T in the plane. Then, the surface integral is given by ∬S f dS = ∬T f(x(s, t)) ‖∂x/∂s × ∂x/∂t‖ ds dt, where the expression between bars on the right-hand side is the magnitude of the cross product of the partial derivatives of x(s, t), and is known as the surface element. Given a vector field v on S, that is a function that assigns to each x in S a vector v(x), the surface integral can be defined component-wise according to the definition of the surface integral of a scalar field; the result is a vector.
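The surface-element formula ‖∂x/∂s × ∂x/∂t‖ ds dt can be evaluated numerically for any parametrization, using finite-difference partial derivatives. A Python sketch (for illustration; function names are invented):

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def surface_integral(f, x, s_range, t_range, n=200, h=1e-5):
    """Midpoint approximation of the surface integral of f over a surface
    parametrized by x(s, t); the surface element is the magnitude of the
    cross product of central-difference partial derivatives."""
    (s0, s1), (t0, t1) = s_range, t_range
    ds, dt = (s1 - s0) / n, (t1 - t0) / n
    total = 0.0
    for i in range(n):
        s = s0 + (i + 0.5) * ds
        for j in range(n):
            t = t0 + (j + 0.5) * dt
            xs = tuple((a - b) / (2 * h) for a, b in zip(x(s + h, t), x(s - h, t)))
            xt = tuple((a - b) / (2 * h) for a, b in zip(x(s, t + h), x(s, t - h)))
            dS = math.sqrt(sum(c * c for c in cross(xs, xt)))
            total += f(x(s, t)) * dS * ds * dt
    return total
```

With the latitude-longitude parametrization of the unit sphere and f = 1, the result approximates the surface area 4π.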
In calculus:
A volume integral refers to an integral over a 3-dimensional domain.
It can also mean a triple integral within a region D in R3 of a function f(x,y,z), and is usually written as: ∭Df(x,y,z)dxdydz.
Fundamental theorem of line integrals The fundamental theorem of line integrals says that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve.
Let φ:U⊆Rn→R . Then φ(q)−φ(p)=∫γ[p,q]∇φ(r)⋅dr.
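The identity φ(q) − φ(p) = ∫γ ∇φ(r)⋅dr can be verified numerically for an arbitrary potential and path. A Python sketch (illustrative; names invented, derivatives by central differences):

```python
import math

def grad(phi, p, h=1e-6):
    """Central-difference gradient of a scalar field phi at the point p."""
    g = []
    for i in range(3):
        q = list(p); q[i] += h
        r = list(p); r[i] -= h
        g.append((phi(q) - phi(r)) / (2 * h))
    return g

def gradient_line_integral(phi, r, a, b, n=5000):
    """Midpoint-rule approximation of the line integral of grad(phi) along
    the path r(t), t in [a, b]; by the theorem it should equal
    phi(r(b)) - phi(r(a))."""
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        dp = [(x - y) / 2e-6 for x, y in zip(r(t + 1e-6), r(t - 1e-6))]  # r'(t)
        total += sum(g * d for g, d in zip(grad(phi, r(t)), dp)) * h
    return total
```

Notably, the computed value depends only on the endpoints, not on the particular path chosen between them.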
Stokes' theorem Stokes' theorem relates the surface integral of the curl of a vector field F over a surface Σ in Euclidean three-space to the line integral of the vector field over its boundary ∂Σ: ∬Σ∇×F⋅dΣ=∮∂ΣF⋅dr.
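Stokes' theorem can be illustrated with a hand-checkable example (a Python sketch, not from the article): for F = (−y, x, 0) the curl is the constant field (0, 0, 2), so its flux through the unit disk in the xy-plane is 2 · π · 1² = 2π, and the circulation around the boundary circle must match.

```python
import math

def circulation(F, n=10000):
    """Midpoint approximation of the circulation of F around the unit
    circle in the xy-plane, traversed counterclockwise."""
    h = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        p = (math.cos(t), math.sin(t), 0.0)       # point on the circle
        dp = (-math.sin(t), math.cos(t), 0.0)     # tangent dr/dt
        total += sum(f * d for f, d in zip(F(p), dp)) * h
    return total

F = lambda p: (-p[1], p[0], 0.0)
curl_flux = 2 * math.pi   # flux of curl F = (0, 0, 2) through the unit disk
```

For this field the integrand F⋅r′ is identically 1 on the circle, so the circulation is 2π up to rounding, in agreement with the surface side of the theorem.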
In calculus:
Divergence theorem Suppose V is a subset of Rn (in the case of n = 3, V represents a volume in 3D space) which is compact and has a piecewise smooth boundary S (also indicated with ∂V = S). If F is a continuously differentiable vector field defined on a neighborhood of V, then the divergence theorem says: ∭V (∇⋅F) dV = ∯S (F⋅n) dS.
In calculus:
The left side is a volume integral over the volume V, the right side is the surface integral over the boundary of the volume V. The closed manifold ∂V is quite generally the boundary of V oriented by outward-pointing normals, and n is the outward pointing unit normal field of the boundary ∂V. (dS may be used as a shorthand for ndS.)
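A concrete check of the theorem (a Python sketch, not from the article; names invented): for F = (x, y, z) the divergence is 3 everywhere, so the volume integral over the unit ball is 3 · (4π/3) = 4π, and the outward flux through the unit sphere should agree.

```python
import math

def sphere_flux(F, n=200):
    """Midpoint approximation of the outward flux of F through the unit
    sphere, parametrized by polar angle s and azimuth t; on the unit
    sphere the outward unit normal equals the point itself and the
    surface element is sin(s) ds dt."""
    ds, dt = math.pi / n, 2 * math.pi / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * ds
        for j in range(n):
            t = (j + 0.5) * dt
            p = (math.sin(s) * math.cos(t),
                 math.sin(s) * math.sin(t),
                 math.cos(s))
            total += sum(f * c for f, c in zip(F(p), p)) * math.sin(s) * ds * dt
    return total
```

On the unit sphere F⋅n = x² + y² + z² = 1 for this field, so the flux reduces to the surface area 4π, matching the volume integral.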
In topology:
Three-dimensional space has a number of topological properties that distinguish it from spaces of other dimensions. For example, at least three dimensions are required to tie a knot in a piece of string. In differential geometry the generic three-dimensional spaces are 3-manifolds, which locally resemble ℝ3.
In finite geometry:
Many ideas of dimension can be tested with finite geometry. The simplest instance is PG(3,2), which has Fano planes as its 2-dimensional subspaces. It is an instance of Galois geometry, a study of projective geometry using finite fields. Thus, for any Galois field GF(q), there is a projective space PG(3,q) of three dimensions. For example, any three skew lines in PG(3,q) are contained in exactly one regulus.
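The space PG(3,2) is small enough to enumerate completely. A Python sketch (illustrative, not from the article): over GF(2) the only nonzero scalar is 1, so the points of PG(3,2) are simply the nonzero vectors of GF(2)⁴, and each line is the set of three nonzero vectors in a 2-dimensional subspace.

```python
from itertools import product

# Points of PG(3,2): nonzero vectors of GF(2)^4 (each 1-dimensional
# subspace contains exactly one nonzero vector over GF(2)).
points = [v for v in product((0, 1), repeat=4) if any(v)]

def line_through(p, r):
    """The line of PG(3,2) through two distinct points: the three nonzero
    vectors of the 2-dimensional subspace they span (the third point is
    the mod-2 sum of the other two)."""
    s = tuple((a + b) % 2 for a, b in zip(p, r))
    return frozenset({p, r, s})

lines = {line_through(p, r) for p in points for r in points if p != r}
```

The enumeration recovers the standard counts: 15 points and 35 lines, with exactly 3 points on every line.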
**Ester pyrolysis**
Ester pyrolysis:
Ester pyrolysis in organic chemistry is a vacuum pyrolysis reaction converting esters containing a β-hydrogen atom into the corresponding carboxylic acid and the alkene. The reaction is an Ei elimination and operates in a syn fashion.
Examples include the synthesis of acrylic acid from ethyl acrylate at 590 °C, the synthesis of 1,4-pentadiene from 1,5-pentanediol diacetate at 575 °C, or the construction of a cyclobutene framework at 700 °C.
**Sherwood Applied Business Security Architecture**
Sherwood Applied Business Security Architecture:
SABSA (Sherwood Applied Business Security Architecture) is a model and methodology for developing a risk-driven enterprise information security architecture and service management, to support critical business processes. It was developed independently from the Zachman Framework, but has a similar structure. The primary characteristic of the SABSA model is that everything must be derived from an analysis of the business requirements for security, especially those in which security has an enabling function through which new business opportunities can be developed and exploited.
Sherwood Applied Business Security Architecture:
The process analyzes the business requirements at the outset, and creates a chain of traceability through the strategy and concept, design, implementation, and ongoing ‘manage and measure’ phases of the lifecycle to ensure that the business mandate is preserved. Framework tools created from practical experience further support the whole methodology. The model is layered, with the top layer being the business requirements definition stage. At each lower layer a new level of abstraction and detail is developed, going through the definition of the conceptual architecture, logical services architecture, physical infrastructure architecture and finally at the lowest layer, the selection of technologies and products (component architecture). The SABSA model itself is generic and can be the starting point for any organization, but by going through the process of analysis and decision-making implied by its structure, it becomes specific to the enterprise, and is finally highly customized to a unique business model. It becomes in reality the enterprise security architecture, and it is central to the success of a strategic program of information security management within the organization. SABSA is a particular example of a methodology that can be used both for IT (information technology) and OT (operational technology) environments.
SABSA matrix:
Note: The above is the original SABSA Matrix, which is still valid today, but it has been expanded by a comprehensive service management matrix and updated in some detail and terminology areas. In the words of David Lynas, SABSA author, "The SABSA Matrix and the SABSA Service Management Matrix have not been updated since the late 90s. We have redesigned them to deliver the improvements your feedback has requested over the years. We have not fundamentally changed the structure or principles of the matrices (very few elements have changed position) but have focussed on terminology update and consistency." The new versions can be downloaded (along with the 2009 revision of the SABSA White Paper and other important documents like the SABSA Certification Roadmap) at the SABSA Members' Web Site.
**Stage lighting accessories**
Stage lighting accessories:
Stage lighting accessories are components manufactured for conventional (non-automated) stage lighting instruments. Most conventional fixtures are designed to accept a number of different accessories designed to assist in the modification of the output. These accessories are intended either to provide relatively common functionality not originally provided in a fixture (such as beam shaping through barn doors), or to extend the versatility of a lighting instrument by introducing new features. Other accessories have been designed to overcome limitations or difficulties some fixtures present in specific applications.
Stage lighting accessories:
All stage lighting accessories fall into one of three distinct categories: components installed inside the fixture, components affixed to the front of the fixture (in front of the lens), or components mounted elsewhere on the exterior of a fixture (to the side, top or bottom).
External:
Barn doors Barn doors, or occasionally a set of barn doors, are an attachment fitted to the front of a Fresnel lantern, a type of lantern used in films, television, and theatres. The attachment has the appearance of a large set of barn doors, but in fact there are four leaves, two larger and widening on the outside, two smaller and getting narrower towards the outside. They facilitate shaping of the beam of light from the fixture, and prevent the distinctive scatter of light created by the Fresnel lens from spilling into areas where it is not wanted, such as the eyes of audience members.
External:
Barn doors are mounted with a ring that fits inside of the color gel slot on the instrument. Because of this, barn doors have a gel slot built into them, so the light can still be colored. Depending on the size and local practices, barn doors may be attached to the pipe or the instrument with their own safety cable.
External:
In some parts of the UK, barn doors are referred to as "Harris Flaps". Barn doors are generally not used with "profile" or "ellipsoidal reflector" spotlights because those have internal shutters which work more effectively. Barn doors are not effective at shaping the light of PAR lights; a narrower lens is a better way to do this.
External:
Top hat A top hat, also known as a stove pipe or snoot, is a device used in theatrical lighting to shield the audience's eyes from the direct source of the light. It is shaped like a top hat with a hole in the top, the brim being inserted into the gel frame holder on a lighting instrument. The cylinder allows light to pass through but takes away the glint halation of a lighting instrument facing the audience. It also reduces flare created by the light, which is often useful when the unit is hung near the proscenium or other objects that the designer does not want to light. There are also half-hats or "eyelashes", which function in a similar manner but have only half the cylinder, and short hats, which are shorter in length. A half hat splits the difference between using a full top hat and none at all, and is mounted with the extended side towards the audience to help cut down on the glare and distraction of light at the lens as it exits the fixture. Top hats are manufactured for most stationary lighting instruments with gel frames of varying sizes. While rare, they may be found on automated intelligent lighting instruments as well. They are used mainly in theatrical performance venues rather than with touring concert rigs.
External:
Gel extender Gel extenders are similar to top hats in appearance, being a tube placed over the end of a lighting fixture. Unlike top hats, however, gel extenders have a colour frame holder built into the end to allow color gel to be mounted. Gel extenders are also available in a conical shape which does not constrict the beam of light output from the fixture at all.
External:
Colour frame A colour frame or gel frame is a piece of folded material, made from either metal or cardboard, designed to hold colour media (gel). Colour frames are placed directly outside the fixture, immediately in front of lens assembly. Most fixtures include an integrated holder for the frame. Some accessories designed to mount in the gel frame holder, such as Barn Doors, occasionally include an integrated replacement slot for frames.
External:
Colour frames come in many different sizes for all types of lanterns, including profiles, Fresnels, floods and PAR cans. Doughnut A doughnut, or donut, is a thin metal or cardboard panel, similar in shape and appearance to a colour frame, but with a small diameter hole intended to reduce off-axis rays of light being projected from a fixture. This increases sharpness of the light by reducing the effect of imperfect lenses. Doughnuts are designed to fit into the colour frame holder directly outside the fixture, immediately in front of the lens assembly. Because they are typically thin, doughnuts can often be placed in the same slot as a gel frame. Doughnuts are typically used to sharpen the beam when a template is in place.
External:
Color scroller A color scroller, color changer, or "scroller" is a lighting accessory used to change color gels on stage lighting instruments without the need of a person to be in the vicinity of the light. It is attached in the gel frame holder on the outside of a lighting instrument, immediately in front of lens assembly. The "scroll" of colours inside the colour changer allows a single fixture to output several different colours, or no colour, and to rapidly change between colours on command. Most scrollers are controlled via DMX512 protocol, but some newer models also utilize the RDM protocol.
External:
Moving mirror attachment A moving mirror attachment is an ellipsoidal spotlight accessory that allows the beam of light to be remotely re-positioned, so that a single luminaire in a fixed position can be used for multiple "specials" in dozens of locations. Two of the most prominent models are the Elipscan by Meteor and the Rosco I-Cue.
Beam bender A beam bender is essentially a large adjustable mirror, mounted into the color slot on the front of a lighting fixture. It is designed to allow a fixture to be mounted at right angles to the desired direction to be lit and have the output reflected (bent) accordingly.
External:
Drop-in boomerang A boomerang, also known as a color magazine, is a series of colored filters on hinges. A drop-in boomerang is designed to mount into the color slot of a lighting fixture and provide the operator with several manually selected gels. Most often this accessory is seen in conjunction with the followspot yoke, when a fixture is being used as a small replacement followspot.
Internal:
Pattern holder A pattern holder, or gobo holder, is a metal frame designed to hold a gobo. Gobo holders are placed inside a fixture through a specifically designed opening, which places the pattern directly in the focal plane of the fixture. By placing the pattern inside the focal plane of a fixture, adjustments to the image (hard or soft edges) can easily be made. Larger pattern holders are also available, designed to mount into the accessory slot on some fixtures, allowing for the use of larger gobos, or the projection of two overlapping patterns from a single fixture.
Internal:
Gobo rotator Gobo rotators are metal frames designed to hold a gobo. They have a much larger cross-section (thicker) than a regular gobo holder due to the motors and gearing required to facilitate rotation. Because of their increased thickness, gobo rotators are not placed inside the fixture through the specifically designed opening (the gobo slot) but instead install into the accessory (iris) slot. Installing the rotator in the accessory slot still places the pattern inside the focal plane of the fixture, allowing adjustments to the image (creating hard or soft edges). All gobo rotators require an external power source, separate from the lighting fixture's power. Many models allow for remote DMX512 control of the motor, permitting fine control of rotation speed and orientation of the pattern. Features can also include uni- or bi-directional control of the rotation of a pattern, as well as indexing (tracking a pattern's position to return it to the same orientation repeatedly). Several models are available which can hold two patterns simultaneously, and may allow patterns to rotate separately or in opposite directions.
Internal:
Iris The iris is a metal frame housing an adjustable shutter assembly (an iris). It is placed inside the fixture through a specifically designed opening, the accessory (or iris) slot, which positions it in the focal plane of the fixture, before the lens assembly. An iris is designed to reduce the diameter of the beam emitted from the fixture. The iris assembly differs from the donut in that it adjusts the diameter of the beam, not the amount of off-axis light emitted.
Internal:
Effects loop Gam Products Inc. manufactures two different models of effect loop, the Film/FX and the SX4. Both of these devices use a ribbon punched with a pattern to project a continuous scrolling pattern. The Film/FX is designed to install into the accessory slot, while the SX4 is installed directly into the fixture, between the lamp assembly and barrel.
Color changers Colormerge High End Systems manufactured the Colormerge, a color mixing unit made to be used with the ETC Source Four ellipsoidal fixture. Unlike a traditional color scroller, it is installed inside the unit between the lamp assembly and shutters. It provides CMY color mixing via dichroic glass plates, controlled through DMX512. It was discontinued in 2004.
Internal:
SeaChanger The SeaChanger line by Ocean Thin Films is a color mixing unit intended to be installed into the ETC Source Four. The SeaChanger unit completely replaces the shutter assembly of a Source Four fixture, retaining only the lens barrel and lamp assembly. Variations of the SeaChanger line replace (alternately) the lens barrel or lamp assembly with integrated components, essentially creating a whole new fixture. The SeaChanger allows the user to customize the wheels for added color possibilities.
External chassis:
Followspot yoke The followspot yoke is an oversized replacement yoke intended to allow an ellipsoidal reflector spot to be installed into a followspot stand and be used as a small, short throw followspot. Generally these yokes allow a much wider range of tilt than a conventional yoke, and have had the hole for a c-clamp bolt replaced with a spigot for a spot stand.
External chassis:
City Theatrical AutoYoke City Theatrical manufactures a complete assembly which essentially turns a conventional fixture into an automated fixture. The AutoYoke is a DMX512-operated assembly which provides complete remote pan and tilt control of a fixture. The AutoYoke is also designed to control other accessories, including color changers and iris units.
External chassis:
Apollo Design RightArm The RightArm adds pan and tilt capabilities to a wide range of static theatrical and studio lighting fixtures, allowing a designer to expand the lighting rig without crowding fixtures or over-extending the budget. Video cameras and LCD projectors can also be blended into the lighting rig with minimal preparation, providing easy adjustment from the lighting console. The RightArm is used to reposition these devices in theatrical and church productions, corporate events, trade shows, and anywhere flexibility in the light plot is needed.
**Ammonium perrhenate**
Ammonium perrhenate:
Ammonium perrhenate (APR) is the ammonium salt of perrhenic acid, NH4ReO4. It is the most common form in which rhenium is traded. It is a white salt, soluble in ethanol and water and mildly soluble in aqueous NH4Cl. It was first described soon after the discovery of rhenium.
Structure:
The crystal structure of APR resembles that of scheelite, with the metal cation replaced by ammonium. The ammonium pertechnetate (NH4TcO4), periodate (NH4IO4), tetrachlorothallate (NH4TlCl4), and tetrachloroindate (NH4InCl4) salts follow this motif. On cooling, APR undergoes a molecular orientational ordering transition without a change of space group, but with a highly anisotropic change in the shape of the unit cell, resulting in the unusual property of a positive temperature and pressure coefficient of the Re NQR frequency. APR does not form hydrates.
Preparation:
Ammonium perrhenate may be prepared from virtually all common sources of rhenium. The metal, oxides, and sulfides can be oxidized with nitric acid and the resulting solution treated with aqueous ammonia. Alternatively, an aqueous solution of Re2O7 can be treated with ammonia, followed by crystallisation.
Reactions:
Ammonium perrhenate is a weak oxidizer. It slowly reacts with hydrochloric acid:
NH4ReO4 + 6 HCl → NH4[ReCl4O] + Cl2↑ + 3 H2O
It is reduced to metallic Re upon heating under hydrogen:
2 NH4ReO4 + 7 H2 → 2 Re + 8 H2O + 2 NH3
Ammonium perrhenate decomposes to volatile Re2O7 starting at 250 °C. When heated in a sealed tube at 500 °C, it decomposes to rhenium dioxide:
2 NH4ReO4 → 2 ReO2 + N2 + 4 H2O
The ammonium ion can be displaced by some concentrated nitrates, e.g. potassium nitrate, silver nitrate, etc.:
NH4ReO4 + KNO3 → KReO4↓ + NH4NO3
It can be reduced to nonahydridorhenate with sodium in ethanol:
NH4ReO4 + 18 Na + 13 C2H5OH → Na2[ReH9] + 13 NaC2H5O + 3 NaOH + NH3·H2O
**Superconductor Science and Technology**
Superconductor Science and Technology:
Superconductor Science and Technology is a peer-reviewed scientific journal covering research on all aspects of superconductivity, including theories of superconductivity, the basic physics of superconductors, the relation of microstructure and growth to superconducting properties, the theory of novel devices, and the fabrication and properties of thin films and devices. The editor-in-chief is Cathy P. Foley (CSIRO). It was established in 1988 and is published by IOP Publishing. According to the Journal Citation Reports, the journal has an impact factor of 3.219 for 2020.
Article types:
The journal publishes articles in the following categories:
Papers: regular articles reporting original research in superconductivity and its applications, without formal length restrictions
Letters: short articles reporting very substantial new advances, no longer than 5 journal pages or 4500 words including figures
Topical reviews: review papers commissioned by the editors
**CCT5 (gene)**
CCT5 (gene):
T-complex protein 1 subunit epsilon is a protein that in humans is encoded by the CCT5 gene.
Function:
This gene encodes a molecular chaperone that is a member of the TRiC complex. This complex consists of two identical stacked rings, each containing eight different proteins. Unfolded polypeptides enter the central cavity of the complex and are folded in an ATP-dependent manner. The complex folds various proteins, including actin and tubulin. Alternatively spliced transcript variants of this gene have been observed but have not been thoroughly characterized.
Interactions:
CCT5 (gene) has been shown to interact with PPP4C.
**Mucinous neoplasm**
Mucinous neoplasm:
A mucinous neoplasm (also called colloid neoplasm) is an abnormal and excessive growth of tissue (neoplasia) with associated mucin (a fluid that sometimes resembles thyroid colloid). It arises from epithelial cells that line certain internal organs and skin, and produce mucin (the main component of mucus). A malignant mucinous neoplasm is called a mucinous carcinoma. For example, for ovarian mucinous tumors, approximately 75% are benign, 10% are borderline and 15% are malignant.
Mucinous carcinoma:
Over 40 percent of all mucinous carcinomas are colorectal. When found within the skin, mucinous carcinoma is commonly a round, elevated, reddish, and sometimes ulcerated mass, usually located on the head and neck.
**Bond order**
Bond order:
In chemistry, bond order is a formal measure of the multiplicity of a covalent bond between two atoms. As introduced by Linus Pauling, bond order is defined as the difference between the numbers of electron pairs in bonding and antibonding molecular orbitals.
Bond order gives a rough indication of the stability of a bond. Isoelectronic species have the same bond order.
Examples:
The bond order itself is the number of electron pairs (covalent bonds) between two atoms. For example, in diatomic nitrogen N≡N, the bond order between the two nitrogen atoms is 3 (triple bond). In acetylene H–C≡C–H, the bond order between the two carbon atoms is also 3, and the C–H bond order is 1 (single bond). In carbon monoxide, −C≡O+, the bond order between carbon and oxygen is 3. In thiazyl trifluoride N≡SF3, the bond order between sulfur and nitrogen is 3, and between sulfur and fluorine is 1. In diatomic oxygen O=O the bond order is 2 (double bond). In ethylene H2C=CH2 the bond order between the two carbon atoms is also 2. The bond order between carbon and oxygen in carbon dioxide O=C=O is also 2. In phosgene O=CCl2, the bond order between carbon and oxygen is 2, and between carbon and chlorine is 1.
Examples:
In some molecules, bond orders can be 4 (quadruple bond), 5 (quintuple bond) or even 6 (sextuple bond). For example, the potassium octachlorodimolybdate salt (K4[Mo2Cl8]) contains the [Cl4Mo≣MoCl4]4− anion, in which the two Mo atoms are linked to each other by a bond with order of 4. Each Mo atom is linked to four Cl− ligands by a bond with order of 1. The compound (terphenyl)–CrCr–(terphenyl) contains two chromium atoms linked to each other by a bond with order of 5, and each chromium atom is linked to one terphenyl ligand by a single bond. A bond of order 6 is detected in ditungsten molecules W2, which exist only in the gas phase.
Examples:
Non-integer bond orders In molecules which have resonance or nonclassical bonding, bond order may not be an integer. In benzene, the delocalized molecular orbitals contain 6 pi electrons over six carbons, essentially yielding half a pi bond together with the sigma bond for each pair of carbon atoms, giving a calculated bond order of 1.5 (one and a half bond). Furthermore, bond orders of 1.1 (eleven tenths bond), 4/3 (four thirds bond, ≈1.33) or 0.5 (half bond), for example, can occur in some molecules and essentially refer to bond strength relative to bonds with order 1. In the nitrate anion (NO3−), the bond order for each bond between nitrogen and oxygen is 4/3. Bonding in the dihydrogen cation H2+ can be described as a covalent one-electron bond, so the bond between the two hydrogen atoms has a bond order of 0.5.
Bond order in molecular orbital theory:
In molecular orbital theory, bond order is defined as half the difference between the number of bonding electrons and the number of antibonding electrons as per the equation below. This often but not always yields similar results for bonds near their equilibrium lengths, but it does not work for stretched bonds. Bond order is also an index of bond strength and is also used extensively in valence bond theory.
Bond order in molecular orbital theory:
bond order = (number of bonding electrons − number of antibonding electrons) / 2
Generally, the higher the bond order, the stronger the bond. Bond orders of one-half may be stable, as shown by the stability of H2+ (bond length 106 pm, bond energy 269 kJ/mol) and He2+ (bond length 108 pm, bond energy 251 kJ/mol). Hückel molecular orbital theory offers another approach for defining bond orders based on molecular orbital coefficients, for planar molecules with delocalized π bonding. The theory divides bonding into a sigma framework and a pi system. The π-bond order between atoms r and s derived from Hückel theory was defined by Charles Coulson using the orbital coefficients of the Hückel MOs:
p_rs = ∑_i n_i c_ri c_si,
where the sum extends over π molecular orbitals only, and n_i is the number of electrons occupying orbital i with coefficients c_ri and c_si on atoms r and s respectively. Assuming a bond order contribution of 1 from the sigma component, this gives a total bond order (σ + π) of 1 + 2/3 = 5/3 ≈ 1.67 for benzene, rather than the commonly cited bond order of 1.5, showing some degree of ambiguity in how the concept of bond order is defined.
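As a numerical illustration of Coulson's definition, the benzene π-bond order quoted above can be reproduced from a simple Hückel calculation. The following is a sketch (not from the cited literature) that uses NumPy to diagonalize the ring adjacency matrix; since β < 0, the three bonding MOs correspond to the three largest adjacency eigenvalues.

```python
import numpy as np

# Hückel Hamiltonian for benzene: H = alpha*I + beta*A, where A is the
# adjacency matrix of the 6-membered carbon ring. With beta < 0, the three
# bonding pi MOs correspond to the three largest eigenvalues of A.
n = 6
A = np.zeros((n, n))
for r in range(n):
    A[r, (r + 1) % n] = A[(r + 1) % n, r] = 1.0

eigvals, C = np.linalg.eigh(A)   # eigenvalues ascending; columns of C are MOs
occupied = [3, 4, 5]             # bonding MOs (adjacency eigenvalues 1, 1, 2)

# Coulson pi-bond order between adjacent atoms r = 0 and s = 1:
# p_rs = sum_i n_i * c_ri * c_si, with n_i = 2 electrons per occupied MO
p_01 = sum(2.0 * C[0, i] * C[1, i] for i in occupied)
print(round(p_01, 4))            # 0.6667, i.e. total bond order 1 + 2/3 = 5/3
```

The degenerate pair of bonding orbitals may come out in any rotated basis, but the sum over the full occupied set is basis-independent, so the result is always 2/3.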
Bond order in molecular orbital theory:
For more elaborate forms of molecular orbital theory involving larger basis sets, still other definitions have been proposed. A standard quantum mechanical definition for bond order has been debated for a long time. A comprehensive method to compute bond orders from quantum chemistry calculations was published in 2017.
Other definitions:
The bond order concept is used in molecular dynamics and bond order potentials. The magnitude of the bond order is associated with the bond length. According to Linus Pauling in 1947, the bond order s_ij between atoms i and j is described experimentally as
s_ij = exp[(d_1 − d_ij)/b],
where d_1 is the single-bond length, d_ij is the experimentally measured bond length, and b is a constant that depends on the atoms. Pauling suggested a value of 0.353 Å for b for carbon-carbon bonds, i.e. d_1 − d_ij = 0.353 ln(s_ij) in the original equation. This definition of bond order is somewhat ad hoc and only easy to apply for diatomic molecules.
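A short sketch of how Pauling's relation is applied; the carbon-carbon bond lengths used below (1.54 Å for a single bond, 1.39 Å for benzene) are standard textbook values assumed for illustration, not taken from this article.

```python
import math

# Pauling's empirical bond order from bond length:
#   s_ij = exp((d1 - d_ij) / b)
# d1 = single-bond length, d_ij = measured bond length, and b = 0.353 angstrom
# for carbon-carbon bonds (per the text above).
def pauling_bond_order(d_ij, d1=1.54, b=0.353):
    return math.exp((d1 - d_ij) / b)

print(round(pauling_bond_order(1.54), 2))  # 1.0: a C-C single bond by construction
print(round(pauling_bond_order(1.39), 2))  # ~1.5 for benzene, matching the resonance value
```

Note that the formula recovers fractional orders such as benzene's 1.5 directly from geometry, which is why it is convenient in bond order potentials.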
**Ductility (Earth science)**
Ductility (Earth science):
In Earth science, ductility refers to the capacity of a rock to deform to large strains without macroscopic fracturing. Such behavior may occur in unlithified or poorly lithified sediments, in weak materials such as halite, or at greater depths in all rock types, where higher temperatures promote crystal plasticity and higher confining pressures suppress brittle fracture. In addition, a material behaving ductilely exhibits a linear stress versus strain relationship past the elastic limit. Ductile deformation is typically characterized by diffuse deformation (i.e. lacking a discrete fault plane); on a stress-strain plot it is accompanied by steady-state sliding at failure, compared with the sharp stress drop observed in experiments during brittle failure.
Brittle–Ductile Transition Zone:
The brittle–ductile transition zone is characterized by a change in rock failure mode, at an approximate average depth of 10–15 km (~ 6.2–9.3 miles) in continental crust, below which rock becomes less likely to fracture and more likely to deform ductilely. The zone exists because as depth increases confining pressure increases, and brittle strength increases with confining pressure whilst ductile strength decreases with increasing temperature. The transition zone occurs at the point where brittle strength equals ductile strength. In glacial ice this zone is at approximately 30 m (100 ft) depth.
Brittle–Ductile Transition Zone:
Not all materials, however, abide by this transition. It is possible, and not rare, for material above the transition zone to deform ductilely, and for material below it to deform in a brittle manner. The depth of the material does exert an influence on the mode of deformation, but some materials, such as loose soils in the upper crust, malleable rocks, and biological debris, are examples of substances that do not deform in accordance with the transition zone.
Brittle–Ductile Transition Zone:
The type of dominating deformation process also has a great impact on the types of rocks and structures found at certain depths within the Earth's crust. As evident from Fig. 1.1, different geological formations and rocks are found in accordance with the dominant deformation process. Gouge and breccia form in the uppermost, brittle regime, while cataclasite and pseudotachylite form in the lower parts of the brittle regime, edging upon the transition zone. Mylonite forms in the more ductile regime at greater depths, while blastomylonite forms well past the transition zone and well into the ductile regime, even deeper into the crust.
Quantification:
Ductility is a material property that can be expressed in a variety of ways. Mathematically, it is commonly expressed as the total elongation or the total change in cross-sectional area of a specific rock sample before macroscopic brittle behavior, such as fracturing, is observed. For accurate measurement, this must be done under several controlled conditions, including but not limited to pressure, temperature, moisture content, and sample size, all of which can impact the measured ductility. It is important to understand that even the same type of rock or mineral may exhibit different behavior and degrees of ductility due to internal heterogeneities: small-scale differences between individual samples. The two quantities are expressed in the form of a ratio or a percent.
% elongation of a rock = 100 × (lf − li) / li
where li = initial length of the rock and lf = final length of the rock.
% change in area of a rock = 100 × (Ai − Af) / Ai
where Ai = initial area and Af = final area.
For each of these methods of quantifying ductility, one must take measurements of both the initial and final dimensions of the rock sample. For elongation, the measurement is a one-dimensional initial and final length, the former measured before any stress is applied and the latter after fracture occurs. For area, it is strongly preferable to use a rock that has been cut into a cylindrical shape before stress is applied, so that the cross-sectional area of the sample can be taken.
Quantification:
Cross-sectional area of a cylinder = area of a circle: A = πr². Using this, the initial and final areas of the sample can be used to quantify the % change in the area of the rock.
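The two measures above can be sketched in code as follows; the sample dimensions used here are made-up illustrative numbers, not measured data.

```python
import math

def percent_elongation(l_i, l_f):
    """% elongation = 100 * (lf - li) / li."""
    return 100.0 * (l_f - l_i) / l_i

def percent_area_change(a_i, a_f):
    """% change in area = 100 * (Ai - Af) / Ai."""
    return 100.0 * (a_i - a_f) / a_i

def cylinder_cross_section(r):
    """Cross-sectional area of a cylindrical core sample, A = pi * r^2."""
    return math.pi * r ** 2

# Hypothetical sample: a 10.0 cm core stretched to 10.8 cm before fracture,
# with its radius necking from 2.0 cm down to 1.9 cm.
print(round(percent_elongation(10.0, 10.8), 2))  # 8.0 (% elongation)
print(round(percent_area_change(cylinder_cross_section(2.0),
                                cylinder_cross_section(1.9)), 2))  # 9.75 (% area reduction)
```

Because the area formulas share the factor π, the percent area change depends only on the squared radii, as the second printed value shows.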
Deformation:
Any material can deform either ductilely or brittlely; the type of deformation is governed by both the external conditions around the rock and the internal conditions of the sample. External conditions include temperature, confining pressure, the presence of fluids, etc., while internal conditions include the arrangement of the crystal lattice, the chemical composition of the rock sample, the grain size of the material, etc. Ductile deformation behavior can be grouped into three categories: elastic, viscous, and crystal-plastic deformation.
Deformation:
Elastic Deformation Elastic Deformation is deformation which exhibits a linear stress-strain relationship (quantified by Young's Modulus) and is derived from Hooke's Law of spring forces (see Fig. 1.2). In elastic deformation, objects show no permanent deformation after the stress has been removed from the system and return to their original state.
Deformation:
σ = Eϵ
where σ = stress (in pascals), E = Young's modulus (in pascals), and ϵ = strain (unitless).
Viscous Deformation Viscous deformation occurs when rocks behave and deform more like a fluid than a solid. This often occurs under great amounts of pressure and at very high temperatures. In viscous deformation, stress is proportional to the strain rate, and each rock sample has its own material property called its viscosity. Unlike elastic deformation, viscous deformation is permanent even after the stress has been removed.
Deformation:
σ = ηξ
where σ = stress (in pascals), η = viscosity (in pascal-seconds), and ξ = strain rate (in 1/seconds).
Crystal-Plastic Deformation Crystal-plastic deformation occurs at the atomic scale and is governed by its own set of specific mechanisms that deform crystals by the movement of atoms and atomic planes through the crystal lattice. Like viscous deformation, it is a permanent form of deformation. Mechanisms of crystal-plastic deformation include pressure solution, dislocation creep, and diffusion creep.
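A minimal numerical sketch of the two constitutive laws above; the modulus, viscosity, strain, and strain-rate values are assumed order-of-magnitude figures chosen for illustration, not values from the text.

```python
# Elastic deformation (Hooke's law): sigma = E * epsilon
E = 50e9          # Young's modulus in Pa (typical order for crustal rock, assumed)
strain = 1e-4     # dimensionless strain
sigma_elastic = E * strain
print(f"elastic stress = {sigma_elastic:.1e} Pa")   # ~5.0e+06 Pa

# Viscous deformation: sigma = eta * strain_rate
eta = 1e19           # viscosity in Pa*s (order of magnitude for ductile crust, assumed)
strain_rate = 1e-14  # geologic strain rate in 1/s
sigma_viscous = eta * strain_rate
print(f"viscous stress = {sigma_viscous:.1e} Pa")   # ~1.0e+05 Pa
```

The contrast illustrates the units: elastic stress scales with accumulated strain, while viscous stress scales with how fast the strain accumulates.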
Biological materials:
In addition to rocks, biological materials such as wood, lumber, and bone can be assessed for their ductility as well, for many behave in the same manner and possess the same characteristics as abiotic Earth materials. Such an assessment was done in Hiroshi Yoshihara's experiment, "Plasticity Analysis of the Strain in the Tangential Direction of Solid Wood Subjected to Compression Load in the Longitudinal Direction." The study aimed to analyze the rheology of two wood species, Sitka spruce and Japanese birch. Past work had shown that solid wood, when subjected to compressional stresses, initially has a linear stress-strain diagram (indicative of elastic deformation) and later, under greater load, demonstrates a non-linear diagram indicative of ductile objects. To analyze the rheology, the stress was restricted to uniaxial compression in the longitudinal direction and the post-linear behavior was analyzed using plasticity theory. Controls included the moisture content of the lumber, the absence of defects such as knots or grain distortions, a temperature of 20 °C, a relative humidity of 65%, and the size of the cut wood samples. Results obtained from the experiment exhibited a linear stress-strain relationship during elastic deformation but also an unexpected non-linear relationship between stress and strain after the elastic limit was reached, deviating from the model of plasticity theory. Multiple reasons were suggested for this. First, since wood is a biological material, it was suggested that the crushing of cells within the sample under great stress could have caused the deviation from perfectly plastic behavior. With greater destruction of cellular material, the stress-strain relationship is hypothesized to become increasingly nonlinear and non-ideal under greater stress.
Additionally, because the samples were inhomogeneous (non-uniform) materials, it was assumed that some bending or distortion may have occurred in the samples that could have kept the stress from being perfectly uniaxial. This may also have been induced by other factors, such as irregularities in the cellular density profile and distorted sample cutting. The conclusions of the research showed that although biological materials can behave like rocks undergoing deformation, there are many other factors and variables that must be considered, making it difficult to standardize the ductility and material properties of a biological substance.
Peak Ductility Demand:
Peak ductility demand is a quantity used particularly in the fields of architecture, geological engineering, and mechanical engineering. It is defined as the amount of ductile deformation a material must be able to withstand (when exposed to a stress) without brittle fracture or failure. This quantity is particularly useful in the analysis of the failure of structures in response to earthquakes and seismic waves. It has been shown that earthquake aftershocks can increase the peak ductility demand with respect to the mainshocks by up to 10%.
**Non-relativistic spacetime**
Non-relativistic spacetime:
In physics, a non-relativistic spacetime is any mathematical model that fuses n–dimensional space and m–dimensional time into a single continuum other than the (3+1) model used in relativity theory.
In the sense used in this article, a spacetime is deemed "non-relativistic" if (a) it deviates from (3+1) dimensionality, even if the postulates of special or general relativity are otherwise satisfied, or if (b) it does not obey the postulates of special or general relativity, regardless of the model's dimensionality.
Introduction:
There are many reasons why spacetimes may be studied that do not satisfy relativistic postulates and/or that deviate from the apparent (3+1) dimensionality of the known universe.
Introduction:
Galilean/Newtonian spacetime The classic example of a non-relativistic spacetime is the spacetime of Galileo and Newton. It is the spacetime of everyday "common sense". Galilean/Newtonian spacetime assumes that space is Euclidean (i.e. "flat"), and that time has a constant rate of passage that is independent of the state of motion of an observer, or indeed of anything external. Newtonian mechanics takes place within the context of Galilean/Newtonian spacetime. For a huge problem set, the results of computations using Newtonian mechanics are only imperceptibly different from computations using a relativistic model. Since computations using Newtonian mechanics are considerably simpler than those using relativistic mechanics, and correspond more closely to intuition, most everyday mechanics problems are solved using Newtonian mechanics.
Introduction:
Model systems Efforts since 1930 to develop a consistent quantum theory of gravity have not yet produced more than tentative results. The study of quantum gravity is difficult for multiple reasons. Technically, general relativity is a complex, nonlinear theory. Very few problems of significant interest admit of analytical solution, and numerical solutions in the strong-field realm can require immense amounts of supercomputer time.
Introduction:
Conceptual issues present an even greater difficulty, since general relativity states that gravity is a consequence of the geometry of spacetime. To produce a quantum theory of gravity would therefore require quantizing the basic units of measurement themselves: space and time. A completed theory of quantum gravity would undoubtedly present a visualization of the Universe unlike any that has hitherto been imagined.
Introduction:
One promising research approach is to explore the features of simplified models of quantum gravity that present fewer technical difficulties while retaining the fundamental conceptual features of the full-fledged model. In particular, general relativity in reduced dimensions (2+1) retains the same basic structure of the full (3+1) theory, but is technically far simpler. Multiple research groups have adopted this approach to studying quantum gravity.
Introduction:
"New physics" theories The idea that relativistic theory could be usefully extended with the introduction of extra dimensions originated with Nordström's 1914 modification of his previous 1912 and 1913 theories of gravitation. In this modification, he added an additional dimension, resulting in a 5-dimensional vector theory. Kaluza–Klein theory (1921) was an attempt to unify relativity theory with electromagnetism. Although at first enthusiastically welcomed by physicists such as Einstein, Kaluza–Klein theory was too beset with inconsistencies to be a viable theory. Various superstring theories have effective low-energy limits that correspond to classical spacetimes with dimensionalities other than the apparent dimensionality of the observed universe. It has been argued that all but the (3+1)-dimensional worlds represent dead worlds with no observers; therefore, on the basis of anthropic arguments, it would be predicted that the observed universe should be one of (3+1) spacetime. Space and time may not be fundamental properties, but rather may represent emergent phenomena whose origins lie in quantum entanglement. It has occasionally been wondered whether it is possible to derive sensible laws of physics in a universe with more than one time dimension. Early attempts at constructing spacetimes with extra timelike dimensions inevitably met with issues such as causality violation and so could be immediately rejected, but it is now known that viable frameworks of such spacetimes exist that can be correlated with general relativity and the Standard Model, and which make predictions of new phenomena within the range of experimental access.
Possible observational evidence Observed high values of the cosmological constant may imply kinematics significantly different from relativistic kinematics.
A deviation from relativistic kinematics would have significant cosmological implications in regard to such puzzles as the "missing mass" problem. To date, general relativity has satisfied all experimental tests. However, proposals that may lead to a quantum theory of gravity (such as string theory and loop quantum gravity) generically predict violations of the weak equivalence principle in the 10−13 to 10−18 range. Currently envisioned tests of the weak equivalence principle are approaching a degree of sensitivity such that non-discovery of a violation would be just as profound a result as discovery of a violation. Non-discovery of equivalence principle violation in this range would suggest that gravity is so fundamentally different from other forces as to require a major reevaluation of current attempts to unify gravity with the other forces of nature. A positive detection, on the other hand, would provide a major guidepost towards unification.
Introduction:
Condensed matter physics Research on condensed matter has spawned a two-way relationship between spacetime physics and condensed matter physics: On the one hand, spacetime approaches have been used to investigate certain condensed matter phenomena. For example, spacetimes with local non-relativistic symmetries have been investigated capable of supporting massive matter fields. This approach has been used to investigate the details of matter couplings, transport phenomena, and the thermodynamics of non-relativistic fluids.
Introduction:
On the other hand, condensed matter systems can be used to mimic certain aspects of general relativity. Although intrinsically non-relativistic, these systems provide models of curved-spacetime quantum field theory that are experimentally accessible. These include acoustical models in flowing fluids, Bose–Einstein condensate systems, and quasiparticles in moving superfluids, such as the quasiparticles and domain walls of the A-phase of superfluid He-3.
**Web User**
Web User:
Web User, branded as WebUser, was a fortnightly magazine published in the United Kingdom from 2001 until 2020. It covered topics relating to computing. Its sister magazine was ComputerActive.
Overview:
Web User was founded by IPC Media in 2001. The first issue appeared on 22 March. The bulk of the magazine's content consisted of internet news, website reviews and features on web-related topics. Additionally, it offered product evaluations, free apps and software, step-by-step workshops, and advice on how to use websites, computer hardware, and software. The magazine was complemented by a website, launched in tandem in 2001. It was sold in 2010 to Dennis Publishing. It ceased publication after 516 issues in December 2020.
Overview:
Topics covered included free software; PC security and maintenance; browser add-ons; the best Google tools; and the latest web trends and developments, such as Web 2.0 and social networking.
**Nemonapride**
Nemonapride:
Nemonapride (エミレース, Emilace (JP)) is an atypical antipsychotic approved in Japan for the treatment of schizophrenia. It was launched by Yamanouchi in May 1991. Nemonapride acts as a D2 and D3 receptor antagonist, and is also a potent 5-HT1A receptor agonist. It has affinity for sigma receptors.
**Spin gapless semiconductor**
Spin gapless semiconductor:
Spin gapless semiconductors are a novel class of materials with a unique electronic band structure for the different spin channels: there is no band gap (i.e., they are 'gapless') for one spin channel, while there is a finite gap in the other.
Spin gapless semiconductor:
In a spin-gapless semiconductor, conduction and valence band edges touch, so that no threshold energy is required to move electrons from occupied (valence) states to empty (conduction) states. This gives spin-gapless semiconductors unique properties: namely that their band structures are extremely sensitive to external influences (e.g., pressure or magnetic field). Because very little energy is needed to excite electrons in an SGS, charge concentrations are very easily ‘tuneable’. For example, this can be done by introducing a new element (doping) or by application of a magnetic or electric field (gating).
Spin gapless semiconductor:
A new type of SGS identified in 2017, known as Dirac-type linear spin-gapless semiconductors, has linear dispersion and is considered an ideal platform for massless and dissipationless spintronics. In these materials, spin-orbit coupling opens a gap in the fully spin-polarized conduction and valence bands; as a result, the interior of the sample becomes an insulator while an electrical current can flow without resistance at the sample edge. This effect, the quantum anomalous Hall effect, had previously been realised only in magnetically doped topological insulators. Besides Dirac/linear SGSs, the other major category of SGS is parabolic spin gapless semiconductors. Electron mobility in such materials is two to four orders of magnitude higher than in classical semiconductors. SGSs are topologically non-trivial.
Prediction and discovery:
The spin gapless semiconductor was first proposed as a new spintronics concept and a new class of candidate spintronic materials in 2008 in a paper by Xiaolin Wang of the University of Wollongong in Australia.
Properties and applications:
The dependence of bandgap on spin direction leads to high carrier spin polarization, and offers promising spin-controlled electronic and magnetic properties for spintronics applications. The spin gapless semiconductor is a promising candidate material for spintronics because its charge carriers can be fully spin-polarised, so that spin can be controlled with only a small applied external energy. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CoRoT-18b**
CoRoT-18b:
CoRoT-18b is a transiting Hot Jupiter exoplanet found by the CoRoT space telescope in 2011.
Host star:
CoRoT-18b orbits the star CoRoT-18 in the constellation of Monoceros. CoRoT-18 is a G9V star with Teff = 5440 K, M = 0.95 M☉, R = 1.00 R☉, and near-solar metallicity. Its age is unknown.
Orbit:
A 2012 study utilizing the Rossiter–McLaughlin effect determined that the planetary orbit is probably aligned with the rotational axis of the star, with a measured misalignment of −10±20°. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Quantum dimer models**
Quantum dimer models:
Quantum dimer models were introduced to model the physics of resonating valence bond (RVB) states in lattice spin systems. The only degrees of freedom retained from the motivating spin systems are the valence bonds, represented as dimers which live on the lattice bonds. In typical dimer models, the dimers do not overlap ("hardcore constraint").
Typical phases of quantum dimer models tend to be valence bond crystals. However, on non-bipartite lattices, RVB liquid phases possessing topological order and fractionalized spinons also appear. The discovery of topological order in quantum dimer models (more than a decade after the models were introduced) has led to new interest in these models.
Classical dimer models have been studied previously in statistical physics, in particular by P. W. Kasteleyn (1961) and M. E. Fisher (1961). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
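For the square lattice, the classical dimer enumeration studied by Kasteleyn admits a closed-form product formula for the number of dimer coverings (perfect matchings) of an m × n grid. The sketch below evaluates that formula; the function name is ours:

```python
import math

def grid_dimer_coverings(m, n):
    """Count dimer coverings (perfect matchings) of an m x n grid using
    Kasteleyn's 1961 closed-form product formula:
    Z = prod_{j=1..m} prod_{k=1..n}
        (4 cos^2(pi j/(m+1)) + 4 cos^2(pi k/(n+1)))^(1/4)."""
    if (m * n) % 2:
        return 0  # odd number of sites: no perfect matching exists
    prod = 1.0
    for j in range(1, m + 1):
        for k in range(1, n + 1):
            prod *= (4 * math.cos(math.pi * j / (m + 1)) ** 2
                     + 4 * math.cos(math.pi * k / (n + 1)) ** 2)
    return round(prod ** 0.25)
```

For example, a 2 × 2 grid has 2 coverings and the 8 × 8 grid (a chessboard tiled by dominoes) has 12,988,816. Quantum dimer models add dynamics on top of this classical configuration space.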
**Boomerang (cocktail)**
Boomerang (cocktail):
A Boomerang cocktail is a specific cocktail dating back to the early 20th century. In the 21st century, it may also be a reference to cocktails that bartenders illegally shuttle back and forth between bars as a way of sharing experimentation or building camaraderie.
Boomerang as a specific cocktail:
The Official Mixer's Manual lists the popularized version of the Boomerang Cocktail as calling for:
1/3 Rye Whiskey
1/3 Swedish Punsch
1/3 Dry Vermouth
1 dash Angostura bitters
1 dash lemon juice
To be stirred well with ice and strained into a glass.
Boomerang as a specific cocktail:
The Cafe Royal Cocktail Book lists the same recipe. The Savoy Cocktail Book lists the same recipe, but calls for "Canadian Club whisky" instead of rye. The Standard Cocktail Guide also employed rye whiskey but calls for different proportions: 1 oz rye, 3/4 oz Swedish punsch, 3/4 oz sweet vermouth, 2 dashes of lemon juice, and 1 dash of Angostura bitters. Trader Vic lists the same recipe in his 1947 Bartender's Guide as the Official Mixer's Manual, but substitutes bourbon for the rye.
Boomerang as a specific cocktail:
Prior to World War II, the original Boomerang Cocktail was associated with a South African origin, and likely referred to the boomerang as used for hunting. The drink reached its zenith for a period of time after World War II, when the early Atomic Age and Space Age began to influence Las Vegas and popular culture in terms of architecture, furniture, fabrics, and style, including boomerang shaped cocktail tables, barware, and so-called "atomic cocktails". Flying-themed cocktail names were also popular during this time.
Boomerang as a shuttled cocktail:
A Boomerang cocktail may also refer to alcoholic drinks that bartenders send back and forth to each other from competing bars. It is considered a friendly gesture within the industry, but is typically illegal.
In popular culture:
The Boomerang cocktail was the featured drink for episode #14 of the pioneering video podcast Tiki Bar TV. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Final Fantasy Record Keeper**
Final Fantasy Record Keeper:
Final Fantasy Record Keeper (Japanese: ファイナルファンタジーレコードキーパー, Hepburn: Fainaru Fantajī Rekōdo Kīpā) is a free-to-play role-playing gacha game developed and published by DeNA for iOS and Android. The game features original characters and stories interacting with characters, scenarios, and battles from other games in the Final Fantasy series. It was released in Japan on September 24, 2014, and worldwide on March 26, 2015. The gameplay primarily consists of Active Time Battles with 2D sprite graphics. It has reached over 10 million downloads worldwide and was available in Japanese, English, French and Spanish. In 2020 all languages other than English were permanently removed. The game's service for the global version ended on September 29, 2022.
Gameplay:
Players enter the world of each Final Fantasy title and fight to clear dungeons. After clearing a dungeon, they unlock new characters from that game. Players are able to combine parties of characters from different Final Fantasy titles. However, through a game mechanic called the "Synergy System", playing characters in the world they originally come from gives them a stat bonus, which also applies to weapons and gear. The game has players reenact many climactic moments from the Final Fantasy series, such as the Battle on the Big Bridge against Gilgamesh. Characters are given weapons or abilities that are collected along the way. The game is free-to-play and does not have microtransactions. In 2018, a new kind of dungeon, Record Dungeons, was launched, featuring full pixel-art remakes of classic Final Fantasy scenes and dungeons, and an all-new adventure and records to explore alongside heroes of past games in the series.
Plot:
Tyro is a researcher who works in the history department for Dr. Mog. Being his best student, Dr. Mog shares his magic so that Tyro can enter paintings and see memories of different worlds, which are previous Final Fantasy titles. Tyro is also occasionally joined by original characters such as Elarra and Shadowsmith as they relive and restore the records of the great tales.
Development:
Developer DeNA proposed doing a social role-playing game to Square Enix that would center around the Final Fantasy series, similar to a title the developer had worked on previously called Defender of Texel. That game used pixel graphics and characters battling in formation. Square Enix producer Ichiro Hazama decided that DeNA had the experience to build a successful mobile game for western markets and approved the game for development. Square Enix oversaw the game and its story, setting, and characters, while DeNA handled the backend and publishing. DeNA producer Yu Sasaki stated that an international release was expected to happen upon the initial launch of the game, but it was delayed and released after modifications were made to please western audiences. For the international release of the game, artwork from any remakes of earlier Final Fantasy games was used, as developers felt that American audiences were connected more to later Final Fantasy games than earlier ones. Cutscenes were also polished for the same reason. The game was designed not to be difficult, and character profiles were added for the international release so new players could easily start enjoying the game. Tetsuya Nomura designed player character Tyro and supporting cast members Dr. Mog, Cid, and Elarra. To draw in American audiences, the first world entered in the game is from Final Fantasy VII, and the next two are fan favorites in Japan: Final Fantasy IV and VI. When the game runs events, characters are chosen based on player popularity. Characters are not added from games if there are not enough worlds or events from those same games. The game's producers felt that characters should be winnable through battle, and that the game's currency of mithril should be given generously to encourage players to keep playing. Enemy bosses are animated, in a departure from the style of the older titles being referenced.
Activities such as logging in to the game earn players in-game currency, and more opportunities to obtain free game currency were added for the international release. Western and Japanese versions of the game are on different servers, so no player communication is possible between the two versions. Producer Ichiro Hazama stated that the core of the experience of the game is "reliving the past". Hazama also felt there was room for original story expansion in the game's plot and the new main character Tyro. A teaser site appeared in July 2014 with a timer counting down to the game's reveal. The global version of the game was shut down on September 29, 2022.
Reception:
Final Fantasy Record Keeper received positive reviews overall. IGN praised the game's use of nostalgia for previous Final Fantasy games and its fun combat and customization, but criticized its lack of character interaction and shallow story, making it hard for the game to hold players' interest. Kotaku voiced a similar sentiment, calling the game a "fun time waster" but noting the presence of the "much loathed stamina scheme" used to entice players to pay for more play time. VentureBeat said that the crafting and combining of items and weapons was "actually fun" and felt like a real Final Fantasy game, but condemned the gameplay as boring because it is primarily a "hands-off" experience. Within ten days of release, the game was downloaded over one million times. After a month, the game recorded three million downloads and one billion yen in revenue. The game reached five million downloads in six months in Japan and was in the top five highest-grossing games in the Apple App Store. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tcelna**
Tcelna:
Tcelna (formerly known as Tovaxin) is an anti-T cell vaccine being studied in multiple sclerosis (MS). As of 2016 it is in phase II trials.
History:
The company announced in late 2005 that the U.S. Food and Drug Administration had approved the protocol for the Phase IIb clinical trial of Tcelna.
The multicenter, randomized, double-blind, placebo-controlled Phase IIb clinical study of 150 patients was designed to evaluate the efficacy, safety and tolerability of the therapy in patients with clinically isolated syndrome (CIS) and early relapsing-remitting MS (RR-MS).
History:
The first phase of the trial finished in March 2008. All patients who completed the trial were to be eligible for an optional one-year extension study, OLTERMS, to receive Tcelna open-label without a placebo group; however, that program was terminated suddenly for lack of funding. After several financial troubles, the trials were restarted in 2011 and Opexa rebranded the therapy, previously called Tovaxin, under the new name Tcelna. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Constellation**
Constellation:
A constellation is an area on the celestial sphere in which a group of visible stars forms a perceived pattern or outline, typically representing an animal, mythological subject, or inanimate object. The origins of the earliest constellations likely go back to prehistory. People used them to relate stories of their beliefs, experiences, creation, or mythology. Different cultures and countries invented their own constellations, some of which lasted into the early 20th century before today's constellations were internationally recognized. The recognition of constellations has changed significantly over time. Many changed in size or shape. Some became popular, only to drop into obscurity. Some were limited to a single culture or nation. Naming constellations also helped astronomers and navigators identify stars more easily. Twelve (or thirteen) ancient constellations belong to the zodiac (straddling the ecliptic, which the Sun, Moon, and planets all traverse). The origins of the zodiac remain historically uncertain; its astrological divisions became prominent c. 400 BC in Babylonian or Chaldean astronomy. Constellations entered Western culture via Greece and are mentioned in the works of Hesiod, Eudoxus and Aratus. The traditional 48 constellations, consisting of the Zodiac and 36 more (now 38, following the division of Argo Navis into three constellations), are listed by Ptolemy, a Greco-Roman astronomer from Alexandria, Egypt, in his Almagest. The formation of constellations was the subject of extensive mythology, most notably in the Metamorphoses of the Latin poet Ovid. Constellations in the far southern sky were added from the 15th century until the mid-18th century, when European explorers began traveling to the Southern Hemisphere. Due to Roman and European transmission, each constellation has a Latin name.
Constellation:
In 1922, the International Astronomical Union (IAU) formally accepted the modern list of 88 constellations, and in 1928 adopted official constellation boundaries that together cover the entire celestial sphere. Any given point in a celestial coordinate system lies in one of the modern constellations. Some astronomical naming systems include the constellation where a given celestial object is found to convey its approximate location in the sky. The Flamsteed designation of a star, for example, consists of a number and the genitive form of the constellation's name (e.g., 61 Cygni).
Constellation:
Other star patterns or groups called asterisms are not constellations under the formal definition, but are also used by observers to navigate the night sky. Asterisms may be several stars within a constellation, or they may share stars with more than one constellation. Examples of asterisms include the teapot within the constellation Sagittarius, or the big dipper in the constellation of Ursa Major.
Terminology:
The word constellation comes from the Late Latin term cōnstellātiō, which can be translated as "set of stars"; it came into use in Middle English during the 14th century. The Ancient Greek word for constellation is ἄστρον (astron). These terms historically referred to any recognisable pattern of stars whose appearance was associated with mythological characters or creatures, earthbound animals, or objects. Over time, among European astronomers, the constellations became clearly defined and widely recognised. Today, there are 88 IAU-designated constellations. A constellation or star that never sets below the horizon when viewed from a particular latitude on Earth is termed circumpolar. From the North Pole or South Pole, all constellations south or north of the celestial equator are circumpolar. Depending on the definition, equatorial constellations may include those that lie between declinations 45° north and 45° south, or those that pass through the declination range of the ecliptic or zodiac, ranging between 23½° north, the celestial equator, and 23½° south. Stars in constellations can appear near each other in the sky, but they usually lie at a variety of distances from the Earth. Since each star has its own independent motion, all constellations will change slowly over time. After tens to hundreds of thousands of years, familiar outlines will become unrecognizable. Astronomers can predict the past or future constellation outlines by measuring individual stars' common proper motions (cpm) by accurate astrometry and their radial velocities by astronomical spectroscopy.
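The circumpolar condition mentioned above reduces to a simple declination test: from a northern latitude, a star never sets if its declination exceeds 90° minus the observer's latitude (with mirrored logic in the south). A minimal sketch, ignoring atmospheric refraction and horizon obstructions:

```python
def is_circumpolar(dec_deg, lat_deg):
    """True if a star at declination dec_deg (degrees) never sets for an
    observer at latitude lat_deg (degrees, positive north).
    Northern observers: circumpolar when dec > 90 - lat.
    Southern observers: circumpolar when dec < -90 - lat."""
    if lat_deg >= 0:
        return dec_deg > 90 - lat_deg
    return dec_deg < -90 - lat_deg
```

For example, Polaris (declination ≈ +89.3°) is circumpolar from London (latitude ≈ +51.5°), while a star on the celestial equator is circumpolar only from the poles themselves.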
Identification:
The 88 constellations recognized by the International Astronomical Union as well as those that cultures have recognized throughout history are imagined figures and shapes derived from the patterns of stars in the observable sky. Many officially recognized constellations are based on the imaginations of ancient, Near Eastern and Mediterranean mythologies. H.A. Rey, who wrote popular books on astronomy, pointed out the imaginative nature of the constellations and their mythological and artistic basis, and the practical use of identifying them through definite images, according to the classical names they were given.
History of the early constellations:
Lascaux Caves, Southern France: It has been suggested that the 17,000-year-old cave paintings in the Lascaux Caves in southern France depict star constellations such as Taurus, Orion's Belt, and the Pleiades. However, this view is not generally accepted among scientists.
History of the early constellations:
Mesopotamia: Inscribed stones and clay writing tablets from Mesopotamia (in modern Iraq) dating to 3000 BC provide the earliest generally accepted evidence for humankind's identification of constellations. It seems that the bulk of the Mesopotamian constellations were created within a relatively short interval from around 1300 to 1000 BC. Mesopotamian constellations appeared later in many of the classical Greek constellations.
History of the early constellations:
Ancient Near East: The oldest Babylonian catalogues of stars and constellations date back to the beginning of the Middle Bronze Age, most notably the Three Stars Each texts and the MUL.APIN, an expanded and revised version based on more accurate observation from around 1000 BC. However, the numerous Sumerian names in these catalogues suggest that they built on older, but otherwise unattested, Sumerian traditions of the Early Bronze Age. The classical Zodiac is a revision of Neo-Babylonian constellations from the 6th century BC. The Greeks adopted the Babylonian constellations in the 4th century BC. Twenty Ptolemaic constellations are from the Ancient Near East. Another ten have the same stars but different names. Biblical scholar E. W. Bullinger interpreted some of the creatures mentioned in the books of Ezekiel and Revelation as the middle signs of the four quarters of the Zodiac, with the Lion as Leo, the Bull as Taurus, the Man representing Aquarius, and the Eagle standing in for Scorpio. The biblical Book of Job also makes reference to a number of constellations, including עיש ‘Ayish "bier", כסיל chesil "fool" and כימה chimah "heap" (Job 9:9, 38:31–32), rendered as "Arcturus, Orion and Pleiades" by the KJV, though ‘Ayish "the bier" actually corresponds to Ursa Major. The term Mazzaroth מַזָּרוֹת, translated as a garland of crowns, is a hapax legomenon in Job 38:32 and might refer to the zodiacal constellations.
History of the early constellations:
Classical antiquity: There is only limited information on ancient Greek constellations, with some fragmentary evidence found in the Works and Days of the Greek poet Hesiod, who mentioned the "heavenly bodies". Greek astronomy essentially adopted the older Babylonian system in the Hellenistic era, first introduced to Greece by Eudoxus of Cnidus in the 4th century BC. The original work of Eudoxus is lost, but it survives as a versification by Aratus, dating to the 3rd century BC. The most complete existing works dealing with the mythical origins of the constellations are by the Hellenistic writer termed pseudo-Eratosthenes and an early Roman writer styled pseudo-Hyginus. The basis of Western astronomy as taught during Late Antiquity and until the Early Modern period is the Almagest by Ptolemy, written in the 2nd century.
History of the early constellations:
In the Ptolemaic Kingdom, a native Egyptian tradition of anthropomorphic figures represented the planets, stars, and various constellations. Some of these were combined with Greek and Babylonian astronomical systems, culminating in the Zodiac of Dendera; it remains unclear when this occurred, but most were placed during the Roman period between the 2nd and 4th centuries AD. It is the oldest known depiction of the zodiac showing all the now-familiar constellations, along with some original Egyptian constellations, decans, and planets. Ptolemy's Almagest remained the standard definition of constellations in the medieval period in both Europe and Islamic astronomy.
History of the early constellations:
Ancient China: Ancient China had a long tradition of observing celestial phenomena. Nonspecific Chinese star names, later categorized in the twenty-eight mansions, have been found on oracle bones from Anyang, dating back to the middle Shang dynasty. These constellations are some of the most important observations of the Chinese sky, attested from the 5th century BC. Parallels to the earliest Babylonian (Sumerian) star catalogues suggest that the ancient Chinese system did not arise independently. Three schools of classical Chinese astronomy in the Han period are attributed to astronomers of the earlier Warring States period. The constellations of the three schools were conflated into a single system by Chen Zhuo, an astronomer of the 3rd century (Three Kingdoms period). Chen Zhuo's work has been lost, but information on his system of constellations survives in Tang period records, notably by Qutan Xida. The oldest extant Chinese star chart dates to that period and was preserved as part of the Dunhuang Manuscripts. Native Chinese astronomy flourished during the Song dynasty, and during the Yuan dynasty became increasingly influenced by medieval Islamic astronomy (see Treatise on Astrology of the Kaiyuan Era). As maps were prepared during this period on more scientific lines, they were considered more reliable. A well-known map from the Song period is the Suzhou Astronomical Chart, which was prepared with carvings of stars on the planisphere of the Chinese sky on a stone plate; it is done accurately based on observations, and it shows the supernova of the year 1054 in Taurus. Influenced by European astronomy during the late Ming dynasty, charts depicted more stars but retained the traditional constellations. Newly observed stars were incorporated as supplements to the old constellations in the southern sky, which did not depict the traditional stars recorded by ancient Chinese astronomers.
Further improvements were made during the later part of the Ming dynasty by Xu Guangqi and Johann Adam Schall von Bell, a German Jesuit, and were recorded in the Chongzhen Lishu (Calendrical Treatise of the Chongzhen period, 1628). Traditional Chinese star maps incorporated 23 new constellations with 125 stars of the southern hemisphere of the sky based on the knowledge of Western star charts; with this improvement, the Chinese sky was integrated with world astronomy. Ancient Greece: Many well-known constellations also have histories that connect to ancient Greece.
Early modern astronomy:
Historically, the origins of the constellations of the northern and southern skies are distinctly different. Most northern constellations date to antiquity, with names based mostly on Classical Greek legends. Evidence of these constellations has survived in the form of star charts, whose oldest representation appears on the statue known as the Farnese Atlas, based perhaps on the star catalogue of the Greek astronomer Hipparchus. Southern constellations are more modern inventions, sometimes as substitutes for ancient constellations (e.g. Argo Navis). Some southern constellations had long names that were shortened to more usable forms; e.g. Musca Australis became simply Musca. Some of the early constellations were never universally adopted. Stars were often grouped into constellations differently by different observers, and the arbitrary constellation boundaries often led to confusion as to which constellation a celestial object belonged. Before astronomers delineated precise boundaries (starting in the 19th century), constellations generally appeared as ill-defined regions of the sky. Today they follow officially accepted designated lines of right ascension and declination based on those defined by Benjamin Gould in epoch 1875.0 in his star catalogue Uranometria Argentina. The 1603 star atlas Uranometria of Johann Bayer assigned stars to individual constellations and formalized the division by assigning a series of Greek and Latin letters to the stars within each constellation. These are known today as Bayer designations. Subsequent star atlases led to the development of today's accepted modern constellations.
Early modern astronomy:
Origin of the southern constellations: The southern sky, below about −65° declination, was only partially catalogued by the ancient Babylonian, Egyptian, Greek, Chinese, and Persian astronomers of the north. The knowledge that northern and southern star patterns differed goes back to classical writers, who describe, for example, the African circumnavigation expedition commissioned by the Egyptian pharaoh Necho II in c. 600 BC and those of Hanno the Navigator in c. 500 BC.
Early modern astronomy:
The history of southern constellations is not straightforward. Different groupings and different names were proposed by various observers, some reflecting national traditions or designed to promote various sponsors. Southern constellations were important from the 14th to 16th centuries, when sailors used the stars for celestial navigation. Italian explorers who recorded new southern constellations include Andrea Corsali, Antonio Pigafetta, and Amerigo Vespucci. Many of the 88 IAU-recognized constellations in this region first appeared on celestial globes developed in the late 16th century by Petrus Plancius, based mainly on observations of the Dutch navigators Pieter Dirkszoon Keyser and Frederick de Houtman. These became widely known through Johann Bayer's star atlas Uranometria of 1603. Fourteen more were created in 1763 by the French astronomer Nicolas Louis de Lacaille, who also split the ancient constellation Argo Navis into three; these new figures appeared in his star catalogue, published in 1756. Several modern proposals have not survived. The French astronomers Pierre Lemonnier and Joseph Lalande, for example, proposed constellations that were once popular but have since been dropped. The northern constellation Quadrans Muralis survived into the 19th century (when its name was attached to the Quadrantid meteor shower), but is now divided between Boötes and Draco.
Early modern astronomy:
88 modern constellations: A list of 88 constellations was produced for the International Astronomical Union in 1922. It is roughly based on the traditional Greek constellations listed by Ptolemy in his Almagest in the 2nd century and Aratus' work Phenomena, with early modern modifications and additions (most importantly introducing constellations covering the parts of the southern sky unknown to Ptolemy) by Petrus Plancius (1592, 1597/98 and 1613), Johannes Hevelius (1690) and Nicolas Louis de Lacaille (1763), who introduced fourteen new constellations. Lacaille studied the stars of the southern hemisphere from 1751 until 1752 from the Cape of Good Hope, when he was said to have observed more than 10,000 stars using a refracting telescope with an aperture of 0.5 inches (13 mm).
Early modern astronomy:
In 1922, Henry Norris Russell produced a list of 88 constellations with three-letter abbreviations for them. However, these constellations did not have clear borders between them. In 1928, the International Astronomical Union (IAU) formally accepted 88 modern constellations, with contiguous boundaries along vertical and horizontal lines of right ascension and declination developed by Eugene Delporte that, together, cover the entire celestial sphere; this list was finally published in 1930. Where possible, these modern constellations usually share the names of their Graeco-Roman predecessors, such as Orion, Leo or Scorpius. The aim of this system is area-mapping, i.e. the division of the celestial sphere into contiguous fields. Out of the 88 modern constellations, 36 lie predominantly in the northern sky, and the other 52 predominantly in the southern.
Early modern astronomy:
The boundaries developed by Delporte used data that originated back to epoch B1875.0, which was when Benjamin A. Gould first made his proposal to designate boundaries for the celestial sphere, a suggestion on which Delporte based his work. The consequence of this early date is that because of the precession of the equinoxes, the borders on a modern star map, such as epoch J2000, are already somewhat skewed and no longer perfectly vertical or horizontal. This effect will increase over the years and centuries to come.
Early modern astronomy:
Symbols: The constellations have no official symbols, though those of the ecliptic may take the signs of the zodiac. Symbols for the other modern constellations, as well as older ones that still occur in modern nomenclature, have occasionally been published.
Dark cloud constellations:
The Great Rift, a series of dark patches in the Milky Way, is more visible and striking in the southern hemisphere than in the northern. It vividly stands out when conditions are otherwise so dark that the Milky Way's central region casts shadows on the ground. Some cultures have discerned shapes in these patches and have given names to these "dark cloud constellations". Members of the Inca civilization identified various dark areas or dark nebulae in the Milky Way as animals and associated their appearance with the seasonal rains. Australian Aboriginal astronomy also describes dark cloud constellations, the most famous being the "emu in the sky" whose head is formed by the Coalsack, a dark nebula, instead of the stars.
Dark cloud constellations:
List of dark cloud constellations:
Great Rift (astronomy)
Emu in the sky
Cygnus Rift
Serpens–Aquila Rift
Dark Horse (astronomy)
Rho Ophiuchi cloud complex | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Selfie stick**
Selfie stick:
A selfie stick is used to take photographs or video by positioning a digital camera device, typically a smartphone, beyond the normal range of the arm. This allows for shots to be taken at angles and distances that would not have been possible with the human arm by itself. The sticks are typically extensible, with a handle on one end and an adjustable clamp on the other end to hold the device in place. As their name suggests, they are most commonly used for taking selfies with camera phones.
Selfie stick:
Some are connected to a smartphone via its jack plug, while others are tethered using Bluetooth controls. The connection between the device and the selfie stick lets the user decide when to take the picture or start recording a video by clicking a button located on the handle. Models designed for compact cameras have a mirror behind the viewscreen so that the shot can be lined up. In contrast to a monopod for stabilising a camera on the ground, a selfie stick's arm is thickest and strongest at the opposite end from the camera in order to provide better grip and balance when held aloft. Safety concerns and the inconvenience the product causes to others have resulted in them being banned at many venues, including all Disney Parks as well as Universal Orlando Resort and Universal Studios Hollywood.
History:
The history of homemade selfie sticks can be traced back to 1925. A photo from that year shows a man taking a picture of himself and his wife with a long out-of-frame stick pointed at the camera. Amateur box cameras of the period could not have captured an in-focus self-portrait when held at arm's length, requiring photographers to use remote shutter devices such as cables or sticks. A device which has been likened to the selfie stick appears in the 1969 Czechoslovak sci-fi film I Killed Einstein, Gentlemen: one character holds a silver stick in front of herself and another character, smiles at the end of the stick as it produces a camera flash, and immediately unfurls a printed photograph of the pair from the stick's handle. The 1983 Minolta Disc-7 camera had a convex mirror on its front to allow the composition of self-portraits, and its packaging showed the camera mounted on a stick for that purpose. A "telescopic extender" for compact handheld cameras was patented by Ueda Hiroshi and Mima Yujiro in 1983, and a Japanese selfie stick was featured in the 1995 book 101 Un-Useless Japanese Inventions. Though dismissed as a "useless invention" at the time, the selfie stick gained global popularity in the 21st century. Canadian inventor Wayne Fromm patented his Quik Pod in 2005, and it became commercially available in the United States the following year. In 2012, Yeong-Ming Wang filed a patent for a "multi-axis omni-directional shooting extender" capable of holding a smartphone, which won a silver medal at the 2013 Concours Lépine. The term "selfie stick" did not become widely used until 2014. Extended forms of selfie sticks can hold laptop computers to take selfies from a webcam. By the fall of 2015, technology news outlets noted the large variety of selfie sticks on the market; Molly McCugh of Wired magazine wrote in October 2015, "Some are very, very long; some aren't so long; some are bedazzled. Some look like hands. 
Some are spoons. But they are all, at the end of the day, one thing: A stick that takes selfies." The selfie stick was listed in Time magazine's 25 best inventions of 2014, while the New York Post named it the most controversial gift of 2014. At the end of December 2014, Bloomberg News noted that selfie sticks had ruled the 2014 holiday season as the "must-have" gift of the year. The selfie stick has been criticized for its association with the perceived narcissism and self-absorption of contemporary society, with commentators in 2015 dubbing the tool the "Narcisstick" or "Wand of Narcissus". In November 2015, The Atlantic conducted a survey of Silicon Valley insiders which named the selfie stick as one of the two technologies that tech leaders would most like to "un-invent", the other being nuclear weapons. Despite various bans, selfie sticks proved so popular that a selfie stick store opened in Times Square during the summer of 2015. In 2016 it was reported that Coca-Cola had created a "selfie bottle" with an attached camera that takes pictures when the bottle is tipped for drinking.
Usage:
The user attaches a device to the end of the selfie stick and extends it beyond the normal reach of the arm. Different models of stick are triggered in various ways: pressing a button on the handle which is connected to the device (usually through the jack plug), pressing a button on a wireless remote (often via Bluetooth), using the camera's built-in timer, or making a sound the device can detect to start recording a video or take a picture.
Wired selfie sticks reuse the smartphone's existing physical triggers, such as the sound volume controls, which are also replicated on headphones with on-cord controls. When a selfie stick is plugged into the jack plug, the device sees it as a pair of headphones, and the button on the handle sends the same signal as an on-cord control.
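The trigger paths described above can be sketched as a simple event dispatch. The event names and the mapping below are illustrative assumptions for the sketch, not a real mobile API:

```python
# Hypothetical dispatch table for the trigger mechanisms described above.
# Event names are invented for illustration; real platforms expose different APIs.
TRIGGER_ACTIONS = {
    "VOLUME_UP": "capture_photo",          # wired stick: seen as a headphone on-cord control
    "BLUETOOTH_SHUTTER": "capture_photo",  # wireless remote paired over Bluetooth
    "TIMER_EXPIRED": "capture_photo",      # camera's built-in timer
    "LOUD_SOUND": "start_recording",       # sound-activated video recording
}

def handle_event(event: str) -> str:
    """Return the camera action for an input event, or 'ignore' if unmapped."""
    return TRIGGER_ACTIONS.get(event, "ignore")
```

The point of the dispatch-table shape is that a wired stick needs no special-casing: it simply emits the same event as a volume button.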
The selfie stick is most useful in situations where a photo or video must be taken from an extended or elevated position beyond the arm's reach. It also allows the user to take photos and videos in otherwise dangerous or impossible situations, such as recording footage inside a very deep hole, over a cliff, or simply at an angle that is too far away from the user.
Bans and restrictions:
Although the selfie stick is one of the most popular items among tourists and families, bans and restrictions on its use have been imposed across a range of public venues, generally on the grounds of safety and inconvenience to others.
Concert venues and some music festivals in the United States, Australia and the United Kingdom have banned the use of selfie sticks. Organisers have cited their role in the illegal recording of bands' sets, and the inconvenience and safety issues for fellow audience members. Museums, galleries and historical sites such as the Palace of Versailles have banned the sticks because of concerns about possible damage to priceless artworks and other objects. Theme parks, including Disneyland Resort, Walt Disney World Resort, Tokyo Disney Resort, Disneyland Paris, Hong Kong Disneyland, Shanghai Disneyland, Six Flags, Universal Orlando, and Universal Studios Hollywood, have banned selfie sticks. The sticks had always been banned on rides at Disney World for safety reasons, but after a number of instances where rides had to be stopped because a guest pulled out a selfie stick mid-ride, such as incidents on California Screamin' and Big Thunder Mountain Railroad, Disney issued a park-wide ban on the accessories. Sporting events have banned selfie sticks both for their "nuisance value" and for interfering with other spectators' enjoyment or view. The Tour Down Under in Australia banned the devices, citing "harm to cyclists, officials and yourself". In 2014, South Korea's radio management agency issued regulations banning the sale of unregistered selfie sticks that use Bluetooth technology to trigger the camera, as any such device sold in South Korea is considered a "telecommunications device" and must be tested by and registered with the agency. In 2015, Apple banned selfie sticks from its WWDC developers conference, though no explicit reason was given.
**Tissue paper**
Tissue paper:
Tissue paper, or simply tissue, is a lightweight paper or light crêpe paper. Tissue can be made from recycled paper pulp on a paper machine.
Tissue paper is very versatile, and different kinds are made to best serve particular purposes: hygienic tissue paper, facial tissues, paper towels, and packing material, among other (sometimes creative) uses.
The use of tissue paper is common in developed nations (around 21 million tonnes a year in North America and 6 million in Europe) and is growing due to urbanization. As a result, the industry has often been scrutinized for deforestation. However, more companies are now using more recycled fibres in tissue paper.
Properties:
The key properties of tissues are absorbency, basis weight, thickness, bulk (specific volume), brightness, stretch, appearance and comfort.
Production:
Tissue paper is produced on a paper machine that has a single large steam-heated drying cylinder (the Yankee dryer) fitted with a hot air hood. The raw material is paper pulp. The Yankee cylinder is sprayed with adhesives to make the paper stick to its surface. Creping is done by the Yankee's doctor blade, which scrapes the dry paper off the cylinder surface. The crinkle (crêping) is controlled by the strength of the adhesive, the geometry of the doctor blade, the speed difference between the Yankee and the final section of the paper machine, and the characteristics of the paper pulp. The most absorbent grades are produced with a through-air drying (TAD) process. These papers contain high amounts of NBSK and CTMP, which gives a bulky paper with high wet tensile strength and good water-holding capacity. The TAD process uses about twice the energy of conventional paper drying.
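The speed difference between the Yankee and the final section is commonly expressed as a crepe ratio. A minimal sketch, assuming the standard definition (Yankee speed minus reel speed, divided by Yankee speed); the example speeds are illustrative, not from the text:

```python
def crepe_percent(yankee_speed: float, reel_speed: float) -> float:
    """Crepe ratio as a percentage: how much the sheet is foreshortened
    because the Yankee cylinder runs faster than the final (reel) section."""
    if not 0 < reel_speed <= yankee_speed:
        raise ValueError("expected 0 < reel_speed <= yankee_speed")
    return (yankee_speed - reel_speed) / yankee_speed * 100.0

# e.g. a Yankee at 2000 m/min with the reel at 1600 m/min gives 20% crepe
```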
The properties are controlled by pulp quality, crêping and additives (both in base paper and as coating). The wet strength is often an important parameter for tissue.
Applications:
Hygienic tissue paper:
Hygienic tissue paper is commonly for personal use as facial tissue (paper handkerchiefs), napkins, bathroom tissue and household towels. Paper has been used for hygiene purposes for centuries, but tissue paper as we know it today was not produced in the United States before the mid-1940s. In Western Europe, large-scale industrial production started at the beginning of the 1960s.
Facial tissues:
Facial tissue (paper handkerchiefs) refers to a class of soft, absorbent, disposable paper that is suitable for use on the face. The term is commonly used for the type of facial tissue, usually sold in boxes, that is designed to facilitate the expulsion of nasal mucus, although it may refer to other types of facial tissues including napkins and wipes.
The first tissue handkerchiefs were introduced in the 1920s. They have been refined over the years, especially for softness and strength, but their basic design has remained constant. Today each person in Western Europe uses about 200 tissue handkerchiefs a year, with a variety of 'alternative' functions including the treatment of minor wounds, the cleaning of face and hands, and the cleaning of spectacles. The importance of the paper tissue in minimising the spread of infection was highlighted in light of fears over a swine flu epidemic. In the UK, for example, the Government ran a campaign called "Catch it, Bin it, Kill it", which encouraged people to cover their mouth with a paper tissue when coughing or sneezing. Use of tissue papers has grown further in the wake of heightened hygiene concerns during the coronavirus pandemic.
Paper towels:
Paper towels are the second largest application for tissue paper in the consumer sector. This type of paper usually has a basis weight of 20 to 24 g/m2, and such towels are normally two-ply. This kind of tissue can be made from anywhere between 100% chemical pulp and 100% recycled fibre, or a combination of the two. Normally, some long-fibre chemical pulp is included to improve strength.
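The basis-weight figure allows a quick sanity check on sheet mass. A sketch, with the sheet dimensions chosen purely for illustration:

```python
def sheet_mass_grams(basis_weight_gsm: float, width_m: float,
                     height_m: float, plies: int = 2) -> float:
    """Mass of a towel sheet: per-ply basis weight (g/m^2) times area times plies."""
    return basis_weight_gsm * width_m * height_m * plies

# a two-ply sheet of 22 g/m^2 tissue measuring 0.28 m x 0.23 m weighs about 2.8 g
```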
Wrapping tissue:
Wrapping tissue is a type of thin, translucent tissue paper used for wrapping various articles and cushioning fragile items.
Custom-printed wrapping tissue is becoming a popular trend among boutique retail businesses, and various on-demand custom-printed wrapping tissues are available online. Sustainably printed custom wrapping tissue is printed on FSC-certified, acid-free paper using only soy-based inks.
Toilet paper:
Rolls of toilet paper have been available since the end of the 19th century. Today, more than 20 billion rolls of toilet tissue are used each year in Western Europe.
Table napkins:
Table napkins can be made of tissue paper. These are made from one to four plies and in a variety of qualities, sizes, folds, colours and patterns depending on intended use and prevailing fashions. The composition of raw materials varies widely, from deinked to chemical pulp, depending on quality.
Colored paper napkins can be a source of carcinogenic primary aromatic amines (PAAs) when used as a wrapper for food, as a result of the degradation of azo compounds used as paper dyes.
Acoustic disrupter:
In the late 1970s and early 1980s, the sound recording engineer Bob Clearmountain was said to have hung tissue paper over the tweeters of his pair of Yamaha NS-10 speakers to tame their over-bright treble. The practice became the subject of hot debate and an investigation into the sonic effects of many different types of tissue paper. The authors of a study for Studio Sound magazine suggested that had the speakers' grilles been used in studios, they would have had the same effect on the treble output as the improvised tissue-paper filter. Another study found inconsistent results with different papers, but reported that tissue paper generally produced an undesirable effect known as "comb filtering", in which high frequencies are reflected back into the tweeter instead of being absorbed. The author derided the tissue practice as "aberrant behavior", noting that engineers usually fear comb filtering and its associated cancellation effects, and suggesting that more controllable and less random electronic filtering would be preferable.
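Comb filtering from a reflecting sheet in front of a tweeter is easy to model: the direct wave sums with a delayed reflected copy, producing nulls at odd multiples of half the inverse delay. A sketch, where the 0.1 ms round-trip delay is an assumed example rather than a measured value:

```python
import math

def comb_response_db(f_hz: float, delay_s: float, reflection: float = 1.0) -> float:
    """Magnitude in dB of a signal summed with a reflected copy delayed by delay_s:
    |H(f)| = |1 + r * exp(-j*2*pi*f*delay)|."""
    phase = 2 * math.pi * f_hz * delay_s
    mag = math.hypot(1.0 + reflection * math.cos(phase), reflection * math.sin(phase))
    return 20 * math.log10(mag) if mag > 0 else float("-inf")

# with a 0.1 ms round-trip delay the first null falls at 1/(2*1e-4) = 5 kHz,
# while at 10 kHz the reflection adds constructively (about +6 dB)
```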
Road repair:
Tissue paper, in the form of standard single-ply toilet paper, is commonly used in road repair to protect crack sealants. The sealants otherwise require upwards of 40 minutes to cure enough not to stick to passing traffic. The application of toilet paper removes the stickiness and keeps the tar in place, allowing the road to be reopened immediately and increasing road repair crew productivity. The paper breaks down and disappears over the following days. The use has been credited to Minnesota Department of Transportation employee Fred Muellerleile, who came up with the idea in 1970 after initially trying standard office paper, which worked but did not disintegrate easily.
Packing industry:
Apart from the above, a range of speciality tissues are also manufactured for the packing industry. These are used for wrapping and packing various items, cushioning fragile items, stuffing shoes and bags to keep their shape intact, or interleaving folded garments to keep them wrinkle-free and safe. It is generally printed with the manufacturer's brand name or logo to enhance the look and aesthetic appeal of the product. It is a type of thin, translucent paper, generally in the range of 17 to 40 g/m2, that can be rough or shiny, hard or soft, depending upon the nature of use.
The industry:
In North America, people are consuming around three times as much tissue as in Europe.
Of the world's estimated production of 21 million tonnes (21,000,000 long tons; 23,000,000 short tons) of tissue, Europe produces approximately 6 million tonnes (5,900,000 long tons; 6,600,000 short tons). The European tissue market is worth approximately 10 billion euros annually and is growing at a rate of around 3%. The European market represents around 23% of the global market; of the total paper and board market, tissue accounts for 10%. According to market research, Germany was one of the top tissue-consuming countries in Western Europe, while Sweden led Western Europe in per-capita consumption. In Europe, the industry is represented by the European Tissue Symposium (ETS), a trade association. The members of ETS represent the majority of tissue paper producers throughout Europe and about 90% of total European tissue production. ETS was founded in 1971 and has been based in Brussels since 1992. In the U.S., the tissue industry is organized in the AF&PA. Tissue paper production and consumption are predicted to continue growing because of factors such as urbanization and increasing disposable incomes and consumer spending. In 2015, the global market for tissue paper was growing at annual rates between 8–9% (China, then 40% of the global market) and 2–3% (Europe).
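The headline figures above can be cross-checked in a couple of lines. Note that the tonnage share comes out higher than the quoted ~23% market share, presumably because the latter is measured by value (the ~10 billion euro figure) rather than by tonnage; that reading is an inference, not stated in the text:

```python
# Figures quoted in the passage above.
world_tonnes = 21e6   # estimated global tissue production, tonnes
europe_tonnes = 6e6   # European production, tonnes

europe_share_by_volume = europe_tonnes / world_tonnes  # about 0.286 (28.6%)
# The ~23% share of the global market quoted in the text is therefore
# likely a share by value, not by tonnage.
```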
During COVID-19, tissue demand on the consumer side boomed, while the away-from-home (AfH) business turned down as the majority of people stayed at home.
Companies:
The largest tissue-producing companies by capacity in 2015, some of them also global players, are (in descending order): Essity, Kimberly-Clark, Georgia-Pacific, Asia Pulp & Paper (APP)/Sinar Mas, Procter & Gamble, Sofidel Group, CMPC, WEPA Hygieneprodukte, Metsä Group and Cascades.
Sustainability:
The paper industry in general has a long history of accusations of responsibility for global deforestation through legal and illegal logging. The WWF has urged Asia Pulp & Paper (APP), "one of the world's most notorious deforesters", especially in Sumatran rain forests, to become an environmentally responsible company; in 2012, the WWF launched a campaign to remove a brand of toilet paper known to be made from APP fiber from grocery store shelves. According to the Worldwatch Institute, world per capita consumption of toilet paper was 3.8 kilograms in 2005. The WWF estimates that "every day, about 270,000 trees are flushed down the drain or end up as garbage all over the world", of which about 10% are attributable to toilet paper alone. Meanwhile, the paper tissue industry, along with the rest of the paper manufacturing sector, has worked to minimise its impact on the environment. Recovered fibres now represent some 46.5% of the paper industry's raw materials. The industry relies heavily on biofuels (about 50% of its primary energy). Its specific primary energy consumption has decreased by 16% and its specific electricity consumption by 11%, due to measures such as improved process technology and investment in combined heat and power (CHP). Specific carbon dioxide emissions from fossil fuels decreased by 25% due to process-related measures and the increased use of low-carbon and biomass fuels. Once consumed, most forest-based paper products start a new life as recycled material or biofuel. EDANA, the trade body for the non-woven absorbent hygiene products industry (which includes products such as household wipes for use in the home), has reported annually on the industry's environmental performance since 2005. Less than 1% of all commercial wood production ends up as wood pulp in absorbent hygiene products.
The industry contributes less than 0.5% of all solid waste and around 2% of municipal solid waste (MSW), compared with paper and board, garden waste and food waste, which each comprise between 18 and 20 percent of MSW. There has been a great deal of interest, in particular, in the use of recovered fibres to manufacture new tissue paper products. However, whether this is actually better for the environment than using new fibres is open to question. A life-cycle assessment study indicated that neither fibre type can be considered environmentally preferable: both new fibre and recovered fibre offer environmental benefits and shortcomings.
Total environmental impacts vary case by case, depending for example on the location of the tissue paper mill, the availability of fibres close to the mill, energy options and waste utilization possibilities. There are opportunities to minimise environmental impacts when using each fibre type.
When using recovered fibres, it is beneficial to: source fibres from integrated deinking operations to eliminate the need for thermal drying of fibre or long-distance transport of wet pulp; manage deinked sludge in order to maximise beneficial applications and minimise the waste burden on society; and select the recovered paper depending on the end-product requirements in a way that also allows the most efficient recycling process. When using new fibres, it is beneficial to: manage the raw material sources to maintain legal, sustainable forestry practices by implementing processes such as forest certification systems and chain-of-custody standards; and consider opportunities to introduce new and more renewable energy sources and increase the use of biomass fuels to reduce emissions of carbon dioxide. When using either fibre type, it is beneficial to: improve energy efficiency in tissue manufacturing; examine opportunities for changing to alternative, non-fossil sources of energy for tissue manufacturing operations; deliver products that maximise functionality and optimise consumption; and investigate opportunities for alternative product disposal systems that minimise the environmental impact of used products. The Confederation of European Paper Industries (CEPI) has published reports focusing on the industry's environmental credentials. In 2002, it noted that "a little over 60% of the pulp and paper produced in Europe comes from mills certified under one of the internationally recognised eco-management schemes". There are a number of 'eco-labels' designed to help consumers identify paper tissue products which meet such environmental standards. Eco-labelling entered mainstream environmental policy-making in the late seventies, first with national schemes such as the German Blue Angel programme, followed by the Nordic Swan (1989). In 1992, a European eco-labelling regulation, known as the EU Flower, was also adopted.
The stated objective is to support sustainable development, balancing environmental, social and economic criteria.
In 2019, the NRDC and Stand.earth released a report grading various brands of toilet paper, paper towels, and facial tissue; the report criticized major brands for lacking recycled material.
Types of eco-labels:
There are three types of eco-labels, each defined by the ISO (International Organization for Standardization).
Type I: ISO 14024
This type of eco-label is one where the criteria are set by third parties (not the manufacturer). They are in theory based on life-cycle impacts and are typically based on pass/fail criteria. The one with European application is the EU Flower.
Type II: ISO 14021
These are based on the manufacturer's or retailer's own declarations. Well known among these are claims of "100% recycled" in relation to tissue/paper.
Type III: ISO 14025
These claims give quantitative details of the impact of the product based on its life cycle. Sometimes known as EPDs (Environmental Product Declarations), these labels are based on an independent review of the life cycle of the product. The data supplied by the manufacturing companies are also independently reviewed.
The best-known example in the paper industry is the Paper Profile. A Paper Profile meets the Type III requirements when the verifier's logo is included on the document. An example of an organization that sets standards is the Forest Stewardship Council.
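The three label types reduce to a small lookup table. A sketch summarising the section; the ISO numbers and examples come from the text, while the dictionary shape and helper function are illustrative:

```python
# Eco-label types as described above, keyed to their defining ISO standards.
ECO_LABEL_TYPES = {
    "Type I": ("ISO 14024", "third-party criteria, pass/fail (e.g. EU Flower)"),
    "Type II": ("ISO 14021", "manufacturer/retailer self-declaration (e.g. '100% recycled')"),
    "Type III": ("ISO 14025", "independently reviewed life-cycle data (EPDs, e.g. Paper Profile)"),
}

def iso_standard(label_type: str) -> str:
    """Return the ISO standard that defines a given eco-label type."""
    return ECO_LABEL_TYPES[label_type][0]
```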
**Endoscopic ear surgery**
Endoscopic ear surgery:
Endoscopic ear surgery (EES) is a minimally invasive alternative to traditional ear surgery and is defined as the use of the rigid endoscope, as opposed to a surgical microscope, to visualize the middle and inner ear during otologic surgery. During endoscopic ear surgery the surgeon holds the endoscope in one hand while working in the ear with the other. To allow this kind of single-handed surgery, different surgical instruments have to be used. Endoscopic visualization has improved due to high-definition video imaging and wide-field endoscopy, and being less invasive, EES is gaining importance as an adjunct to microscopic ear surgery.
History:
Endoscopic ear surgery was first described in 1992 by Professor Ahmed El-Guindy and pioneered by Dr Muaaz Tarabichi in Dubai during the late 1990s. His contributions to the field have led to him being recognized globally as the father of endoscopic ear surgery, and he now lectures extensively on the topic worldwide. As with the early years of FESS (functional endoscopic sinus surgery), EES has been controversial since early descriptions in the 1960s. Tarabichi's initial dissertations were met with skepticism, in much the same way that Professor Heinz Stammberger faced a backlash when he introduced FESS. Tarabichi and Stammberger persisted in their advocacy of their respective techniques and developed a friendship which resulted in the founding of the Tarabichi Stammberger Ear and Sinus Institute (TSESI) to train and educate surgeons in endoscopic techniques. One of the benefits of an endoscope over the microscope is the wide-field view of the middle ear afforded by the location of the light source at the tip of the instrument and the availability of various types of angled lenses. Middle ear procedures that use a rigid endoscope for viewing may reduce the need to drill for enhanced exposure of the operative field. Traditional otologic operating microscopes typically require larger portals (e.g., postauricular approaches) to enable adequate passage of light for intraoperative viewing and follow-up surveillance in the clinic. One-handed dissection is cited as the main drawback to EES.
The indications for this relatively new technique are evolving. The use of rigid endoscopes to perform ear surgery (operative EES), rather than just to visualize the contents of the middle ear (observational EES), is increasing as optimized instrumentation and operative approaches become available. The number of citations published on this topic has risen sharply in recent years, with much of the interest focused on using the endoscope as the main workhorse in otologic surgery rather than for observation or as an adjunct to microscopic surgery.
Rationale:
Until the 1990s, ear surgery was performed with the microscope and through the mastoid cavity. The ability to see certain areas of the anatomy and to pursue disease was hampered by the straight-line access of the microscope. The endoscope allows the surgeon to look around corners and to reach otherwise inaccessible areas, such as the sinus tympani, through the ear canal. Endoscopic ear surgery uses the ear canal as the access point for removal of cholesteatoma and therefore represents a minimally invasive alternative to traditional surgery, which requires a large incision behind the ear. The reduction in postoperative pain and cost usually associated with minimally invasive techniques has been demonstrated in endoscopic ear surgery.
Classification:
Cohen and his colleagues at MEEI devised a classification system for the degree of use of the endoscope in otologic surgery:
Class 0: Microscopic-only case
Class 1: Inspection with endoscope
Class 2: Mixed dissection with endoscope and microscope
Class 3: Endoscopic-only case
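The classification maps naturally onto an ordered enumeration. A sketch; the class descriptions come from the text, while the enum member names and the operative/observational helper are illustrative:

```python
from enum import IntEnum

class EndoscopeUse(IntEnum):
    """Cohen et al. classification of endoscope use in otologic surgery."""
    MICROSCOPIC_ONLY = 0   # Class 0: microscopic-only case
    INSPECTION = 1         # Class 1: inspection with endoscope
    MIXED_DISSECTION = 2   # Class 2: mixed dissection with endoscope and microscope
    ENDOSCOPIC_ONLY = 3    # Class 3: endoscopic-only case

def endoscope_used_operatively(c: EndoscopeUse) -> bool:
    """Classes 2-3 use the endoscope for dissection, not just visualization."""
    return c >= EndoscopeUse.MIXED_DISSECTION
```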
Types of endoscopic ear surgery:
For cholesteatoma:
Surgery for cholesteatoma offers the most advantages for using the endoscope instead of the microscope. Failures in cholesteatoma surgery are most common in certain areas of the anatomy of the tympanic cavity, such as the facial recess, sinus tympani, anterior attic, and the protympanum, which are poorly accessed with the microscope. The endoscope, with its ability to see around corners, can visualize areas that are notorious for residual cholesteatoma, such as the sinus tympani.
For perforated eardrum:
Access to the whole perimeter of the perforation is essential for successful treatment of holes in the eardrum. To achieve that with the microscope, an incision is made behind the ear (the "postauricular approach"). The endoscope, with its ability to see around corners, increases the likelihood of closing perforations through the ear canal rather than making large incisions to access the whole perimeter of the perforation.
For otosclerosis:
Otosclerosis is a disease that results in fixation of the stapes, which conducts sound to the inner ear. Microscopic stapedectomy requires some removal of bone and, in some instances, an incision to facilitate access. The endoscope's ability to visualize around corners allows for better visualization of the stapes without any bone removal or incision.
For access into the Eustachian tube:
The Eustachian tube plays the primary role in the pathophysiology of disorders of the middle ear. Access to the proximal (ear-side) part of the Eustachian tube is limited, since most existing surgical access is posterior, through the mastoid cavity. The endoscope allows the surgeon to reach the protympanum, or bony Eustachian tube, and possibly to carry out interventions that keep the Eustachian tube open, such as inserting a dilatation balloon catheter into that area.
**Iodine compounds**
Iodine compounds:
Iodine can form compounds using multiple oxidation states. Iodine is quite reactive, but it is much less reactive than the other halogens. For example, while chlorine gas will halogenate carbon monoxide, nitric oxide, and sulfur dioxide (to phosgene, nitrosyl chloride, and sulfuryl chloride respectively), iodine will not do so. Furthermore, iodination of metals tends to result in lower oxidation states than chlorination or bromination; for example, rhenium metal reacts with chlorine to form rhenium hexachloride, but with bromine it forms only rhenium pentabromide and iodine can achieve only rhenium tetraiodide. By the same token, however, since iodine has the lowest ionisation energy among the halogens and is the most easily oxidised of them, it has a more significant cationic chemistry and its higher oxidation states are rather more stable than those of bromine and chlorine, for example in iodine heptafluoride.
Charge-transfer complexes:
The iodine molecule, I2, dissolves in CCl4 and aliphatic hydrocarbons to give bright violet solutions. In these solvents the absorption band maximum occurs in the 520–540 nm region and is assigned to a π* to σ* transition. When I2 reacts with Lewis bases in these solvents, a blue shift in the I2 peak is seen and a new peak (230–330 nm) arises, due to the formation of adducts, which are referred to as charge-transfer complexes.
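The two absorption bands correspond to quite different transition energies, which can be checked with E = hc/λ:

```python
# Convert the band maxima quoted above into photon energies, E = h*c / wavelength.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

# the violet I2 band at 520-540 nm sits near 2.3-2.4 eV, while the
# charge-transfer band at 230-330 nm corresponds to roughly 3.8-5.4 eV
```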
Hydrogen iodide:
The simplest compound of iodine is hydrogen iodide, HI. It is a colourless gas that reacts with oxygen to give water and iodine. Although it is useful in iodination reactions in the laboratory, it does not have large-scale industrial uses, unlike the other hydrogen halides. Commercially, it is usually made by reacting iodine with hydrogen sulfide or hydrazine:
2 I2 + N2H4 → 4 HI + N2 (in aqueous solution)
At room temperature, it is a colourless gas, like all of the hydrogen halides except hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the large and only mildly electronegative iodine atom. It melts at −51.0 °C and boils at −35.1 °C. It is an endothermic compound that can exothermically dissociate at room temperature, although the process is very slow unless a catalyst is present: the reaction between hydrogen and iodine at room temperature to give hydrogen iodide does not proceed to completion. The H–I bond dissociation energy is likewise the smallest of the hydrogen halides, at 295 kJ/mol. Aqueous hydrogen iodide is known as hydroiodic acid, which is a strong acid. Hydrogen iodide is exceptionally soluble in water: one litre of water will dissolve 425 litres of hydrogen iodide, and the saturated solution has only four water molecules per molecule of hydrogen iodide. Commercial so-called "concentrated" hydroiodic acid usually contains 48–57% HI by mass; the solution forms an azeotrope with boiling point 126.7 °C at 56.7 g HI per 100 g solution. 
Hence hydroiodic acid cannot be concentrated past this point by evaporation of water.Unlike hydrogen fluoride, anhydrous liquid hydrogen iodide is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into H2I+ and HI−2 ions – the latter, in any case, are much less stable than the bifluoride ions (HF−2) due to the very weak hydrogen bonding between hydrogen and iodine, though its salts with very large and weakly polarising cations such as Cs+ and NR+4 (R = Me, Et, Bun) may still be isolated. Anhydrous hydrogen iodide is a poor solvent, able to dissolve only small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides.
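The quoted solubility figures are self-consistent, as a short calculation shows: four water molecules per HI molecule corresponds to roughly 64% HI by mass, comfortably above the 48–57% of commercial "concentrated" acid:

```python
# Mass fraction of HI in a solution with four water molecules per HI molecule,
# using standard molar masses.
M_HI = 127.912   # g/mol (H: 1.008 + I: 126.904)
M_H2O = 18.015   # g/mol

sat_fraction = M_HI / (M_HI + 4 * M_H2O)  # ~0.64, i.e. about 64% HI by mass
```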
Other binary iodides:
Nearly all elements in the periodic table form binary iodides. The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than iodine's (oxygen, nitrogen, and the first three halogens), so that the resultant binary compounds are formally not iodides but rather oxides, nitrides, or halides of iodine. (Nonetheless, nitrogen triiodide is named as an iodide as it is analogous to the other nitrogen trihalides.) Given the large size of the iodide anion and iodine's weak oxidising power, high oxidation states are difficult to achieve in binary iodides, the maximum known being in the pentaiodides of niobium, tantalum, and protactinium. Iodides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydroiodic acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen iodide gas. These methods work best when the iodide product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative iodination of the element with iodine or hydrogen iodide, high-temperature iodination of a metal oxide or other halide by iodine, a volatile metal halide, carbon tetraiodide, or an organic iodide. For example, molybdenum(IV) oxide reacts with aluminium(III) iodide at 230 °C to give molybdenum(II) iodide. 
An example involving halogen exchange is the reaction of tantalum(V) chloride with excess aluminium(III) iodide at 400 °C to give tantalum(V) iodide:

3 TaCl5 + 5 AlI3 (excess) → 3 TaI5 + 5 AlCl3

Lower iodides may be produced either through thermal decomposition or disproportionation, or by reducing the higher iodide with hydrogen or a metal, for example:

14 TaI5 + 16 Ta → 5 Ta6I14 (thermal gradient, 630 °C → 575 °C)

Most metal iodides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular iodides, as do metals in high oxidation states from +3 and above. Both ionic and covalent iodides are known for metals in oxidation state +3 (e.g. scandium iodide is mostly ionic, but aluminium iodide is not). Ionic iodides MIn tend to have the lowest melting and boiling points among the halides MXn of the same element, because the electrostatic forces of attraction between the cations and anions are weakest for the large iodide anion. In contrast, covalent iodides tend to instead have the highest melting and boiling points among the halides of the same element, since iodine is the most polarisable of the halogens and, having the most electrons among them, can contribute the most to van der Waals forces. Naturally, exceptions abound in intermediate iodides where one trend gives way to the other. Similarly, solubilities in water of predominantly ionic iodides (e.g. potassium and calcium) are the greatest among ionic halides of that element, while those of covalent iodides (e.g. silver) are the lowest of that element. In particular, silver iodide is very insoluble in water and its formation is often used as a qualitative test for iodide.
Iodine halides:
The halogens form many binary, diamagnetic interhalogen compounds with stoichiometries XY, XY3, XY5, and XY7 (where X is heavier than Y), and iodine is no exception. Iodine forms all three possible diatomic interhalogens, a trifluoride and trichloride, as well as a pentafluoride and, exceptionally among the halogens, a heptafluoride. Numerous cationic and anionic derivatives are also characterised, such as the wine-red or bright orange compounds of ICl+2 and the dark brown or purplish black compounds of I2Cl+. Apart from these, some pseudohalides are also known, such as cyanogen iodide (ICN), iodine thiocyanate (ISCN), and iodine azide (IN3).
Iodine halides:
Iodine monofluoride (IF) is unstable at room temperature and disproportionates very readily and irreversibly to iodine and iodine pentafluoride, and thus cannot be obtained pure. It can be synthesised from the reaction of iodine with fluorine gas in trichlorofluoromethane at −45 °C, with iodine trifluoride in trichlorofluoromethane at −78 °C, or with silver(I) fluoride at 0 °C. Iodine monochloride (ICl) and iodine monobromide (IBr), on the other hand, are moderately stable. The former, a volatile red-brown compound, was discovered independently by Joseph Louis Gay-Lussac and Humphry Davy in 1813–1814 not long after the discoveries of chlorine and iodine, and it mimics the intermediate halogen bromine so well that Justus von Liebig was misled into mistaking bromine (which he had found) for iodine monochloride. Iodine monochloride and iodine monobromide may be prepared simply by reacting iodine with chlorine or bromine at room temperature and purified by fractional crystallisation. Both are quite reactive and attack even platinum and gold, though not boron, carbon, cadmium, lead, zirconium, niobium, molybdenum, and tungsten. Their reaction with organic compounds depends on conditions. Iodine chloride vapour tends to chlorinate phenol and salicylic acid, since when iodine chloride undergoes homolytic dissociation, chlorine and iodine are produced and the former is more reactive. However, iodine chloride in tetrachloromethane solution results in iodination being the main reaction, since now heterolytic fission of the I–Cl bond occurs and I+ attacks phenol as an electrophile. In contrast, iodine monobromide tends to brominate phenol even in tetrachloromethane solution because it tends to dissociate into its elements in solution, and bromine is more reactive than iodine.
When liquid, iodine monochloride and iodine monobromide dissociate into I2X+ cations and IX−2 anions (X = Cl, Br); thus they are significant conductors of electricity and can be used as ionising solvents.

Iodine trifluoride (IF3) is an unstable yellow solid that decomposes above −28 °C. It is thus little-known. It is difficult to produce because fluorine gas would tend to oxidise iodine all the way to the pentafluoride; reaction at low temperature with xenon difluoride is necessary. Iodine trichloride, which exists in the solid state as the planar dimer I2Cl6, is a bright yellow solid, synthesised by reacting iodine with liquid chlorine at −80 °C; caution is necessary during purification because it easily dissociates to iodine monochloride and chlorine and hence can act as a strong chlorinating agent. Liquid iodine trichloride conducts electricity, possibly indicating dissociation to ICl+2 and ICl−4 ions.

Iodine pentafluoride (IF5), a colourless, volatile liquid, is the most thermodynamically stable iodine fluoride, and can be made by reacting iodine with fluorine gas at room temperature. It is a fluorinating agent, but is mild enough to store in glass apparatus. Again, slight electrical conductivity is present in the liquid state because of dissociation to IF+4 and IF−6. The pentagonal bipyramidal iodine heptafluoride (IF7) is an extremely powerful fluorinating agent, behind only chlorine trifluoride, chlorine pentafluoride, and bromine pentafluoride among the interhalogens: it reacts with almost all the elements even at low temperatures, fluorinates Pyrex glass to form iodine(VII) oxyfluoride (IOF5), and sets carbon monoxide on fire.
Iodine oxides and oxoacids:
Iodine oxides are the most stable of all the halogen oxides, because of the strong I–O bonds resulting from the large electronegativity difference between iodine and oxygen, and they have been known for the longest time. The stable, white, hygroscopic iodine pentoxide (I2O5) has been known since its formation in 1813 by Gay-Lussac and Davy. It is most easily made by the dehydration of iodic acid (HIO3), of which it is the anhydride. It will quickly oxidise carbon monoxide completely to carbon dioxide at room temperature, and is thus a useful reagent in determining carbon monoxide concentration. It also oxidises nitrogen oxide, ethylene, and hydrogen sulfide. It reacts with sulfur trioxide and peroxydisulfuryl difluoride (S2O6F2) to form salts of the iodyl cation, [IO2]+, and is reduced by concentrated sulfuric acid to iodosyl salts involving [IO]+. It may be fluorinated by fluorine, bromine trifluoride, sulfur tetrafluoride, or chloryl fluoride, resulting in iodine pentafluoride, which also reacts with iodine pentoxide, giving iodine(V) oxyfluoride, IOF3. A few other less stable oxides are known, notably I4O9 and I2O4; their structures have not been determined, but reasonable guesses are IIII(IVO3)3 and [IO]+[IO3]− respectively.
Iodine oxides and oxoacids:
More important are the four oxoacids: hypoiodous acid (HIO), iodous acid (HIO2), iodic acid (HIO3), and periodic acid (HIO4 or H5IO6). When iodine dissolves in aqueous solution, the following reactions occur:

I2 + H2O ⇌ HIO + H+ + I−
I2 + 2 OH− ⇌ IO− + I− + H2O

Hypoiodous acid is unstable to disproportionation. The hypoiodite ions thus formed disproportionate immediately to give iodide and iodate:

3 IO− → 2 I− + IO−3

Iodous acid and iodite are even less stable and exist only as a fleeting intermediate in the oxidation of iodide to iodate, if at all. Iodates are by far the most important of these compounds; they can be made by oxidising alkali metal iodides with oxygen at 600 °C and high pressure, or by oxidising iodine with chlorates. Unlike chlorates, which disproportionate very slowly to form chloride and perchlorate, iodates are stable to disproportionation in both acidic and alkaline solutions. From these, salts of most metals can be obtained. Iodic acid is most easily made by oxidation of an aqueous iodine suspension by electrolysis or fuming nitric acid. Iodate has the weakest oxidising power of the halates, but reacts the quickest.

Many periodates are known, including not only the expected tetrahedral IO−4, but also square-pyramidal IO3−5, octahedral orthoperiodate IO5−6, [IO3(OH)3]2−, [I2O8(OH2)]4−, and I2O4−9. They are usually made by oxidising alkaline sodium iodate electrochemically (with lead(IV) oxide as the anode) or by chlorine gas:

IO−3 + 6 OH− → IO5−6 + 3 H2O + 2 e−
IO−3 + 6 OH− + Cl2 → IO5−6 + 2 Cl− + 3 H2O

They are thermodynamically and kinetically powerful oxidising agents, quickly oxidising Mn2+ to MnO−4, and cleaving glycols, α-diketones, α-ketols, α-aminoalcohols, and α-diamines. Orthoperiodate especially stabilises high oxidation states among metals because of its very high negative charge of −5. Orthoperiodic acid, H5IO6, is stable, and dehydrates at 100 °C in a vacuum to metaperiodic acid, HIO4.
Attempting to go further does not result in the nonexistent iodine heptoxide (I2O7), but rather iodine pentoxide and oxygen. Periodic acid may be protonated by sulfuric acid to give the I(OH)+6 cation, isoelectronic to Te(OH)6 and Sb(OH)−6, and giving salts with bisulfate and sulfate.
Polyiodine compounds:
When iodine dissolves in strong acids, such as fuming sulfuric acid, a bright blue paramagnetic solution including I+2 cations is formed. A solid salt of the diiodine cation may be obtained by oxidising iodine with antimony pentafluoride:

2 I2 + 5 SbF5 → 2 I2Sb2F11 + SbF3 (in SO2, 20 °C)

The salt I2Sb2F11 is dark blue, and the blue tantalum analogue I2Ta2F11 is also known. Whereas the I–I bond length in I2 is 267 pm, that in I+2 is only 256 pm as the missing electron in the latter has been removed from an antibonding orbital, making the bond stronger and hence shorter. In fluorosulfuric acid solution, deep-blue I+2 reversibly dimerises below −60 °C, forming red rectangular diamagnetic I2+4. Other polyiodine cations are not as well-characterised, including bent dark-brown or black I+3 and centrosymmetric C2h green or black I+5, known in the AsF−6 and AlCl−4 salts among others.

The only important polyiodide anion in aqueous solution is linear triiodide, I−3. Its formation explains why the solubility of iodine in water may be increased by the addition of potassium iodide solution:

I2 + I− ⇌ I−3 (Keq = ~700 at 20 °C)

Many other polyiodides may be found when solutions containing iodine and iodide crystallise, such as I−5, I−9, I2−4, and I2−8, whose salts with large, weakly polarising cations such as Cs+ may be isolated.
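The triiodide equilibrium above lends itself to a quick numerical check; a minimal sketch (the initial concentrations below are illustrative, not from the source):

```python
import math

K_EQ = 700.0  # I2 + I- <=> I3- at 20 degC (value quoted in the text)

def triiodide_conc(i2_0: float, iodide_0: float) -> float:
    """Equilibrium [I3-] (mol/L) from initial [I2] and [I-].

    K = x / ((i2_0 - x) * (iodide_0 - x)) rearranges to a quadratic in x.
    """
    a = K_EQ
    b = -(K_EQ * (i2_0 + iodide_0) + 1.0)
    c = K_EQ * i2_0 * iodide_0
    # the smaller root is the physical one (x cannot exceed either reactant)
    x = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return x

# 0.001 M iodine dissolved in 0.1 M potassium iodide
x = triiodide_conc(0.001, 0.1)
print(f"[I3-] = {x:.2e} M, {100 * x / 0.001:.0f}% of the iodine complexed")
```

With these numbers nearly all of the iodine ends up as triiodide, reproducing the qualitative point in the text: adding iodide sharply increases how much iodine the solution can hold.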
Organoiodine compounds:
Organoiodine compounds have been fundamental in the development of organic synthesis, such as in the Hofmann elimination of amines, the Williamson ether synthesis, the Wurtz coupling reaction, and in Grignard reagents.

The carbon–iodine bond is a common functional group that forms part of core organic chemistry; formally, these compounds may be thought of as organic derivatives of the iodide anion. The simplest organoiodine compounds, alkyl iodides, may be synthesised by the reaction of alcohols with phosphorus triiodide; these may then be used in nucleophilic substitution reactions, or for preparing Grignard reagents. The C–I bond is the weakest of all the carbon–halogen bonds due to the minuscule difference in electronegativity between carbon (2.55) and iodine (2.66). As such, iodide is the best leaving group among the halogens, to such an extent that many organoiodine compounds turn yellow when stored over time due to decomposition into elemental iodine; they are commonly used in organic synthesis because of the easy formation and cleavage of the C–I bond. They are also significantly denser than the other organohalogen compounds thanks to the high atomic weight of iodine. A few organic oxidising agents like the iodanes contain iodine in a higher oxidation state than −1, such as 2-iodoxybenzoic acid, a common reagent for the oxidation of alcohols to aldehydes, and iodobenzene dichloride (PhICl2), used for the selective chlorination of alkenes and alkynes. One of the more well-known uses of organoiodine compounds is the so-called iodoform test, where iodoform (CHI3) is produced by the exhaustive iodination of a methyl ketone (or another compound capable of being oxidised to a methyl ketone), as follows:

R−CO−CH3 + 3 I2 + 4 OH− → R−COO− + CHI3 + 3 I− + 3 H2O

A drawback of using organoiodine compounds as compared to organochlorine or organobromine compounds is their greater expense and toxicity, since iodine is expensive and organoiodine compounds are stronger alkylating agents.
For example, iodoacetamide and iodoacetic acid denature proteins by irreversibly alkylating cysteine residues and preventing the reformation of disulfide linkages.

Halogen exchange to produce iodoalkanes by the Finkelstein reaction is slightly complicated by the fact that iodide is a better leaving group than chloride or bromide. The difference is nevertheless small enough that the reaction can be driven to completion by exploiting the differential solubility of halide salts, or by using a large excess of the halide salt. In the classic Finkelstein reaction, an alkyl chloride or an alkyl bromide is converted to an alkyl iodide by treatment with a solution of sodium iodide in acetone. Sodium iodide is soluble in acetone, while sodium chloride and sodium bromide are not. The reaction is driven toward products by mass action due to the precipitation of the insoluble salt. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Latching switch**
Latching switch:
A latching switch is a switch that maintains its state after being activated. A push-to-make, push-to-break switch is therefore a latching switch: each actuation toggles the switch, and whichever state it is left in persists until it is actuated again.
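The latching behaviour described above amounts to a simple toggle with persistent state; a minimal, hypothetical model (the class and method names are ours):

```python
class LatchingSwitch:
    """Model of a push-to-make, push-to-break latching switch:
    each actuation toggles the state, which then persists."""

    def __init__(self, closed: bool = False):
        self.closed = closed  # True = contacts closed (circuit made)

    def actuate(self) -> bool:
        self.closed = not self.closed  # state flips and latches
        return self.closed

s = LatchingSwitch()
print(s.actuate())  # True  - first push makes the circuit
print(s.actuate())  # False - second push breaks it
```

Contrast this with a momentary switch, whose state would revert as soon as the actuator is released.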
**Tolylfluanid**
Tolylfluanid:
Tolylfluanid is an organic chemical compound that is used as an active ingredient in fungicides and wood preservatives.
Synthesis:
The synthesis of tolylfluanid begins with the reaction of dimethylamine and sulfuryl chloride. The product further reacts with p-toluidine and dichlorofluoromethanesulfenyl chloride to yield the final product.
Use:
Tolylfluanid is used on fruit and ornamental plants against gray mold (Botrytis), against late blight on tomatoes and against powdery mildew on cucumbers.
Environmental behavior:
Tolylfluanid hydrolyzes slowly in acidic conditions; the half-life is shorter at higher pH, and at pH 7 it is at least 2 days. In aerobic media (pH 7.7–8.0), tolylfluanid decomposes hydrolytically and microbially to N,N-dimethyl-N-(4-methylphenyl)sulfamide (DMST) and dimethylsulfamide. After 14 days, tolylfluanid is generally considered to have degraded. The half-life of DMST is 50–70 days.
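Half-lives like these translate directly into remaining fractions over time if one assumes simple first-order (exponential) decay; that kinetic model is our assumption for illustration, not stated in the source:

```python
import math

def remaining_fraction(t_days: float, half_life_days: float) -> float:
    """Fraction of a compound left after t days under first-order decay."""
    return math.exp(-math.log(2.0) * t_days / half_life_days)

# DMST half-life range of 50-70 days (from the text): fraction after 100 days
for t_half in (50.0, 70.0):
    frac = remaining_fraction(100.0, t_half)
    print(f"half-life {t_half:.0f} d -> {frac:.2f} of DMST remaining")
```

At the 50-day half-life, a quarter of the DMST remains after 100 days; at 70 days, over a third does, which is why the degradation product persists long after the parent compound is gone.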
Absorption, metabolism and excretion:
Tolylfluanid is rapidly and almost completely absorbed in the gastrointestinal tract. The highest concentrations are found in the blood, lungs, liver, kidneys, spleen and thyroid gland. 99% is excreted in the urine within two days, although there is some accumulation in the thyroid gland.
**Generalized erythema**
Generalized erythema:
Generalized erythema is a skin condition that may be caused by medications, bacterial toxins, or viral infections.
**Kite (geometry)**
Kite (geometry):
In Euclidean geometry, a kite is a quadrilateral with reflection symmetry across a diagonal. Because of this symmetry, a kite has two equal angles and two pairs of adjacent equal-length sides. Kites are also known as deltoids, but the word deltoid may also refer to a deltoid curve, an unrelated geometric object sometimes studied in connection with quadrilaterals. A kite may also be called a dart, particularly if it is not convex.

Every kite is an orthodiagonal quadrilateral (its diagonals are at right angles) and, when convex, a tangential quadrilateral (its sides are tangent to an inscribed circle). The convex kites are exactly the quadrilaterals that are both orthodiagonal and tangential. They include as special cases the right kites, with two opposite right angles; the rhombi, with two diagonal axes of symmetry; and the squares, which are also special cases of both right kites and rhombi.
Kite (geometry):
The quadrilateral with the greatest ratio of perimeter to diameter is a kite, with 60°, 75°, and 150° angles. Kites of two shapes (one convex and one non-convex) form the prototiles of one of the forms of the Penrose tiling. Kites also form the faces of several face-symmetric polyhedra and tessellations, and have been studied in connection with outer billiards, a problem in the advanced mathematics of dynamical systems.
Definition and classification:
A kite is a quadrilateral with reflection symmetry across one of its diagonals. Equivalently, it is a quadrilateral whose four sides can be grouped into two pairs of adjacent equal-length sides. A kite can be constructed from the centers and crossing points of any two intersecting circles. Kites as described here may be either convex or concave, although some sources restrict kite to mean only convex kites. A quadrilateral is a kite if and only if any one of the following conditions is true: The four sides can be split into two pairs of adjacent equal-length sides.
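The construction from the centers and crossing points of two intersecting circles can be sketched directly; a small example (the function name and coordinates are illustrative):

```python
import math

def kite_from_circles(c1, r1, c2, r2):
    """Kite vertices from two intersecting circles: the two centres plus
    the two intersection points, returned in cyclic order."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    a = (d * d + r1 * r1 - r2 * r2) / (2 * d)  # c1 -> chord midpoint
    h = math.sqrt(r1 * r1 - a * a)             # half-chord; real iff circles meet
    mx, my = c1[0] + a * dx / d, c1[1] + a * dy / d
    p = (mx + h * dy / d, my - h * dx / d)     # the two crossing points
    q = (mx - h * dy / d, my + h * dx / d)
    return [c1, p, c2, q]

# centres 1 apart, radii 1 and 0.8: adjacent side pairs of length 1 and 0.8
kite = kite_from_circles((0.0, 0.0), 1.0, (1.0, 0.0), 0.8)
```

The two sides meeting at the first centre both have length r1 and the two meeting at the second centre both have length r2, so the result always satisfies the two-pairs-of-adjacent-equal-sides definition.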
Definition and classification:
One diagonal crosses the midpoint of the other diagonal at a right angle, forming its perpendicular bisector. (In the concave case, the line through one of the diagonals bisects the other.) One diagonal is a line of symmetry. It divides the quadrilateral into two congruent triangles that are mirror images of each other.
Definition and classification:
One diagonal bisects both of the angles at its two ends.

Kite quadrilaterals are named for the wind-blown, flying kites, which often have this shape and which are in turn named for a hovering bird and the sound it makes. According to Olaus Henrici, the name "kite" was given to these shapes by James Joseph Sylvester.

Quadrilaterals can be classified hierarchically, meaning that some classes of quadrilaterals include other classes, or partitionally, meaning that each quadrilateral is in only one class. Classified hierarchically, kites include the rhombi (quadrilaterals with four equal sides) and squares. All equilateral kites are rhombi, and all equiangular kites are squares. When classified partitionally, rhombi and squares would not be kites, because they belong to a different class of quadrilaterals; similarly, the right kites discussed below would not be kites. The remainder of this article follows a hierarchical classification; rhombi, squares, and right kites are all considered kites. By avoiding the need to consider special cases, this classification can simplify some facts about kites.

Like kites, a parallelogram also has two pairs of equal-length sides, but they are opposite to each other rather than adjacent. Any non-self-crossing quadrilateral that has an axis of symmetry must be either a kite, with a diagonal axis of symmetry; or an isosceles trapezoid, with an axis of symmetry through the midpoints of two sides. These include as special cases the rhombus and the rectangle respectively, and the square, which is a special case of both. The self-crossing quadrilaterals include another class of symmetric quadrilaterals, the antiparallelograms.
Special cases:
The right kites have two opposite right angles. The right kites are exactly the kites that are cyclic quadrilaterals, meaning that there is a circle that passes through all their vertices. The cyclic quadrilaterals may equivalently be defined as the quadrilaterals in which two opposite angles are supplementary (they add to 180°); if one pair is supplementary the other is as well. Therefore, the right kites are the kites with two opposite supplementary angles, for either of the two opposite pairs of angles. Because right kites circumscribe one circle and are inscribed in another circle, they are bicentric quadrilaterals (actually tricentric, as they also have a third circle externally tangent to the extensions of their sides). If the sizes of an inscribed and a circumscribed circle are fixed, the right kite has the largest area of any quadrilateral trapped between them.

Among all quadrilaterals, the shape that has the greatest ratio of its perimeter to its diameter (maximum distance between any two points) is an equidiagonal kite with angles 60°, 75°, 150°, 75°. Its four vertices lie at the three corners and one of the side midpoints of the Reuleaux triangle. When an equidiagonal kite has side lengths less than or equal to its diagonals, like this one or the square, it is one of the quadrilaterals with the greatest ratio of area to diameter.

A kite with three 108° angles and one 36° angle forms the convex hull of the lute of Pythagoras, a fractal made of nested pentagrams. The four sides of this kite lie on four of the sides of a regular pentagon, with a golden triangle glued onto the fifth side.
Special cases:
There are only eight polygons that can tile the plane such that reflecting any tile across any one of its edges produces another tile; this arrangement is called an edge tessellation. One of them is a tiling by a right kite, with 60°, 90°, and 120° angles. It produces the deltoidal trihexagonal tiling (see § Tilings and polyhedra). A prototile made by eight of these kites tiles the plane only aperiodically, key to a claimed solution of the einstein problem.

In non-Euclidean geometry, a kite can have three right angles and one non-right angle, forming a special case of a Lambert quadrilateral. The fourth angle is acute in hyperbolic geometry and obtuse in spherical geometry.
Properties:
Diagonals, angles, and area

Every kite is an orthodiagonal quadrilateral, meaning that its two diagonals are at right angles to each other. Moreover, one of the two diagonals (the symmetry axis) is the perpendicular bisector of the other, and is also the angle bisector of the two angles it meets. Because of its symmetry, the other two angles of the kite must be equal. The diagonal symmetry axis of a convex kite divides it into two congruent triangles; the other diagonal divides it into two isosceles triangles.

As is true more generally for any orthodiagonal quadrilateral, the area A of a kite may be calculated as half the product of the lengths of the diagonals p and q:

A = pq/2

Alternatively, the area can be calculated by dividing the kite into two congruent triangles and applying the SAS formula for their area. If a and b are the lengths of two sides of the kite, and θ is the angle between them, then the area is

A = ab sin θ

Inscribed circle

Every convex kite is also a tangential quadrilateral, a quadrilateral that has an inscribed circle. That is, there exists a circle that is tangent to all four sides. Additionally, if a convex kite is not a rhombus, there is a circle outside the kite that is tangent to the extensions of the four sides; therefore, every convex kite that is not a rhombus is an ex-tangential quadrilateral. The convex kites that are not rhombi are exactly the quadrilaterals that are both tangential and ex-tangential.
For every concave kite there exist two circles tangent to two of the sides and the extensions of the other two: one is interior to the kite and touches the two sides opposite from the concave angle, while the other circle is exterior to the kite and touches the kite on the two edges incident to the concave angle.

For a convex kite with diagonal lengths p and q and side lengths a and b, the radius r of the inscribed circle is

r = pq / (2(a + b))

and the radius ρ of the ex-tangential circle is

ρ = pq / (2|a − b|)

A tangential quadrilateral is also a kite if and only if any one of the following conditions is true: The area is one half the product of the diagonals.
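Both area formulas, and the incircle radius obtained as area divided by the semiperimeter a + b, can be checked numerically; a short sketch with an arbitrarily chosen kite (coordinates are our own example):

```python
import math

# Kite with its symmetry diagonal on the y-axis: sides sqrt(2), sqrt(2), sqrt(5), sqrt(5)
top, left, bottom, right = (0, 1), (-1, 0), (0, -2), (1, 0)

p = math.dist(top, bottom)   # symmetry-diagonal length: 3
q = math.dist(left, right)   # cross-diagonal length: 2
a = math.dist(right, top)    # sqrt(2)
b = math.dist(right, bottom) # sqrt(5)

# area as half the product of the diagonals
area_diagonals = p * q / 2

# area via the SAS formula: theta is the angle between sides a and b at (1, 0)
u = (top[0] - right[0], top[1] - right[1])
v = (bottom[0] - right[0], bottom[1] - right[1])
theta = math.acos((u[0] * v[0] + u[1] * v[1]) / (a * b))
area_sas = a * b * math.sin(theta)

# incircle radius: area / semiperimeter
r_incircle = (p * q / 2) / (a + b)

print(area_diagonals, area_sas, r_incircle)
```

Both area computations give 3 for this kite, and the incircle radius agrees with pq/(2(a + b)).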
Properties:
The diagonals are perpendicular. (Thus the kites are exactly the quadrilaterals that are both tangential and orthodiagonal.) The two line segments connecting opposite points of tangency have equal length.
Properties:
The tangent lengths, distances from a point of tangency to an adjacent vertex of the quadrilateral, are equal at two opposite vertices of the quadrilateral. (At each vertex, there are two adjacent points of tangency, but they are the same distance as each other from the vertex, so each vertex has a single tangent length.) The two bimedians, line segments connecting midpoints of opposite edges, have equal length.
Properties:
The products of opposite side lengths are equal.
Properties:
The center of the incircle lies on a line of symmetry that is also a diagonal.

If the diagonals in a tangential quadrilateral ABCD intersect at P, and the incircles of triangles ABP, BCP, CDP, DAP have radii r1, r2, r3, and r4 respectively, then the quadrilateral is a kite if and only if r1 + r3 = r2 + r4. If the excircles to the same four triangles opposite the vertex P have radii R1, R2, R3, and R4 respectively, then the quadrilateral is a kite if and only if R1 + R3 = R2 + R4.

Duality

Kites and isosceles trapezoids are dual to each other, meaning that there is a correspondence between them that reverses the dimension of their parts, taking vertices to sides and sides to vertices. From any kite, the inscribed circle is tangent to its four sides at the four vertices of an isosceles trapezoid. For any isosceles trapezoid, tangent lines to the circumscribing circle at its four vertices form the four sides of a kite. This correspondence can also be seen as an example of polar reciprocation, a general method for corresponding points with lines and vice versa given a fixed circle. Although they do not touch the circle, the four vertices of the kite are reciprocal in this sense to the four sides of the isosceles trapezoid. The features of kites and isosceles trapezoids that correspond to each other under this duality are compared in the table below.
Properties:
Dissection

The equidissection problem concerns the subdivision of polygons into triangles that all have equal areas. In this context, the spectrum of a polygon is the set of numbers n such that the polygon has an equidissection into n equal-area triangles. Because of its symmetry, the spectrum of a kite contains all even integers. Certain special kites also contain some odd numbers in their spectra.

Every triangle can be subdivided into three right kites meeting at the center of its inscribed circle. More generally, a method based on circle packing can be used to subdivide any polygon with n sides into O(n) kites, meeting edge-to-edge.
Tilings and polyhedra:
All kites tile the plane by repeated point reflection around the midpoints of their edges, as do more generally all quadrilaterals. Kites and darts with angles 72°, 72°, 72°, 144° and 36°, 72°, 36°, 216°, respectively, form the prototiles of one version of the Penrose tiling, an aperiodic tiling of the plane discovered by mathematical physicist Roger Penrose. When a kite has angles that, at its apex and one side, sum to π(1 − 1/n) for some positive integer n, then scaled copies of that kite can be used to tile the plane in a fractal rosette in which successively larger rings of n kites surround a central point. These rosettes can be used to study the phenomenon of inelastic collapse, in which a system of moving particles meeting in inelastic collisions all coalesce at a common point.

A kite with angles 60°, 90°, 120°, 90° can also tile the plane by repeated reflection across its edges; the resulting tessellation, the deltoidal trihexagonal tiling, superposes a tessellation of the plane by regular hexagons and isosceles triangles. The deltoidal icositetrahedron, deltoidal hexecontahedron, and trapezohedron are polyhedra with congruent kite-shaped faces, which can alternatively be thought of as tilings of the sphere by congruent spherical kites. There are infinitely many face-symmetric tilings of the hyperbolic plane by kites. These polyhedra (equivalently, spherical tilings), the square and deltoidal trihexagonal tilings of the Euclidean plane, and some tilings of the hyperbolic plane are shown in the table below, labeled by face configuration (the numbers of neighbors of each of the four vertices of each tile). Some polyhedra and tilings appear twice, under two different face configurations.
Tilings and polyhedra:
The trapezohedra are another family of polyhedra that have congruent kite-shaped faces. In these polyhedra, the edges of one of the two side lengths of the kite meet at two "pole" vertices, while the edges of the other length form an equatorial zigzag path around the polyhedron. They are the dual polyhedra of the uniform antiprisms. A commonly seen example is the pentagonal trapezohedron, used for ten-sided dice.
Outer billiards:
Mathematician Richard Schwartz has studied outer billiards on kites. Outer billiards is a dynamical system in which, from a point outside a given compact convex set in the plane, one draws a tangent line to the convex set, travels from the starting point along this line to another point equally far from the point of tangency, and then repeats the same process. It had been open since the 1950s whether any system defined in this way could produce paths that get arbitrarily far from their starting point, and in a 2007 paper Schwartz solved this problem by finding unbounded billiards paths for the kite with angles 72°, 72°, 72°, 144°, the same as the one used in the Penrose tiling. He later wrote a monograph analyzing outer billiards for kite shapes more generally. For this problem, any affine transformation of a kite preserves the dynamical properties of outer billiards on it, and it is possible to transform any kite into a shape where three vertices are at the points (−1, 0) and (0, ±1), with the fourth at (α, 0) for some α in the open unit interval (0, 1). The behavior of outer billiards on any kite depends strongly on the parameter α and in particular whether it is rational. For the case of the Penrose kite, α = 1/φ³ is irrational, where φ = (1 + √5)/2 is the golden ratio.
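The tangent-and-reflect step of outer billiards can be sketched for a convex polygon such as the normalised kite described above; a minimal illustration (the orientation convention and function names are ours, not Schwartz's formulation):

```python
def outer_billiards_step(pt, vertices):
    """One outer-billiards step: find the supporting vertex v, i.e. the
    vertex such that the whole polygon lies (weakly) to the left of the
    ray pt -> v, then reflect pt through v so the image lands equally
    far beyond the point of tangency."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    for v in vertices:
        if all(cross(pt, v, w) >= -1e-12 for w in vertices):
            return (2 * v[0] - pt[0], 2 * v[1] - pt[1])
    raise ValueError("point must lie strictly outside the polygon")

# Penrose kite in the normalised form from the text: (-1,0), (0,+-1), (alpha,0)
phi = (1 + 5 ** 0.5) / 2
kite = [(-1.0, 0.0), (0.0, 1.0), (1 / phi ** 3, 0.0), (0.0, -1.0)]

p = (3.0, 0.5)
for _ in range(4):  # iterate the map; Schwartz studied such orbits' boundedness
    p = outer_billiards_step(p, kite)
```

Each step is a point reflection through a vertex, so the midpoint of a point and its image is always the vertex of tangency; Schwartz's result is that for this value of α some orbits of this map are unbounded.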