Synthetic biology:
Synthetic biology (SynBio) is a multidisciplinary field of science that focuses on living systems and organisms and applies engineering principles to develop new biological parts, devices, and systems or to redesign existing systems found in nature. It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as biotechnology, biomaterials, material science/engineering, genetic engineering, molecular biology, molecular engineering, systems biology, membrane science, biophysics, chemical and biological engineering, electrical and computer engineering, control engineering, and evolutionary biology.
It includes designing and constructing biological modules, biological systems, and biological machines, or re-designing existing biological systems for useful purposes. It is also the branch of science that focuses on engineering new abilities into existing organisms to redesign them for useful purposes. To produce predictable and robust systems with novel functionalities that do not already exist in nature, it is also necessary to apply the engineering paradigm of systems design to biological systems. According to the European Commission, this possibly involves a molecular assembler based on biomolecular systems such as the ribosome.
History:
1910: First identifiable use of the term "synthetic biology" in Stéphane Leduc's publication Théorie physico-chimique de la vie et générations spontanées. He also used the term in another publication, La Biologie Synthétique, in 1912.
1944: Canadian-American scientist Oswald Avery shows that DNA is the material of which genes and chromosomes are made. This becomes the bedrock on which all subsequent genetic research is built.
1953: Francis Crick and James Watson publish the structure of DNA in Nature.
1961: Jacob and Monod postulate cellular regulation by molecular networks from their study of the lac operon in E. coli and envision the ability to assemble new systems from molecular components.
1973: The first molecular cloning and amplification of DNA in a plasmid is published in P.N.A.S. by Cohen, Boyer et al., constituting the dawn of synthetic biology.
1978: Arber, Nathans and Smith win the Nobel Prize in Physiology or Medicine for the discovery of restriction enzymes, leading Szybalski to offer an editorial comment in the journal Gene: "The work on restriction nucleases not only permits us easily to construct recombinant DNA molecules and to analyze individual genes, but also has led us into the new era of synthetic biology where not only existing genes are described and analyzed but also new gene arrangements can be constructed and evaluated."
1988: First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in Science by Mullis et al. This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.
2000: Two papers in Nature report synthetic biological circuits, a genetic toggle switch and a biological clock, created by combining genes within E. coli cells.
2003: The most widely used standardized DNA parts, BioBrick plasmids, are invented by Tom Knight. These parts will become central to the International Genetically Engineered Machine (iGEM) competition founded at MIT the following year.
2003: Researchers engineer an artemisinin precursor pathway in E. coli.
2004: The first international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0), is held at MIT.
2005: Researchers develop a light-sensing circuit in E. coli. Another group designs circuits capable of multicellular pattern formation.
2006: Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.
2010: Researchers publish in Science the first synthetic bacterial genome, called M. mycoides JCVI-syn1.0. The genome is made from chemically synthesized DNA using yeast recombination.
2011: Functional synthetic chromosome arms are engineered in yeast.
2012: The Charpentier and Doudna labs publish in Science the programming of CRISPR-Cas9 bacterial immunity for targeted DNA cleavage. This technology greatly simplified and expanded eukaryotic gene editing.
2019: Scientists at ETH Zurich report the creation of the first bacterial genome designed entirely by a computer, named Caulobacter ethensis-2.0, although a viable form of C. ethensis-2.0 does not yet exist.
2019: Researchers report the production of a new synthetic (possibly artificial) form of viable life, a variant of the bacterium Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons that still encode all 20 amino acids.
2020: Scientists create the first xenobot, a programmable synthetic organism derived from frog cells and designed by AI.
2021: Scientists report that xenobots are able to self-replicate by gathering loose cells in the environment and forming new xenobots.
Perspectives:
It is a field whose scope is expanding in terms of systems integration, engineered organisms, and practical findings. Engineers view biology as technology (in other words, a given system includes biotechnology or its biological engineering). Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goal of being able to design and build engineered live biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health, as well as advance fundamental knowledge of biological systems (see Biomedical engineering) and our environment.

Researchers and companies working in synthetic biology are using nature's power to solve issues in agriculture, manufacturing, and medicine. Due to more powerful genetic engineering capabilities and decreased DNA synthesis and sequencing costs, the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; together, these companies had an estimated net worth of $3.9 billion in the global market.

Synthetic biology currently has no generally accepted definition. Here are a few examples: it is the science of combining genetic and physical engineering to produce new (and therefore synthetic) life forms. To develop organisms with novel or enhanced characteristics, this emerging field of study combines the knowledge and techniques of biology, engineering, and related disciplines to design chemically synthesized DNA. Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. Genetic engineering includes approaches to construct synthetic chromosomes or minimal organisms such as Mycoplasma laboratorium.
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches shares a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level. Optimizing these exogenous pathways in unnatural systems takes iterative fine-tuning of the individual biomolecular components to select the highest concentrations of the desired product. On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it can be simpler to rebuild natural systems of interest from the ground up, providing engineered surrogates that are easier to comprehend, control, and manipulate. Re-writers draw inspiration from refactoring, a process sometimes used to improve computer software.
Categories of synthetic biology:
Bioengineering, synthetic genomics, protocell synthetic biology, unconventional molecular biology, and in silico techniques are the five categories of synthetic biology. Reviewing the distinctions and analogies between these categories is necessary for the field's social and ethical assessment, to distinguish between issues affecting the whole field and those particular to a specific category.
Bioengineering:
The subfield of bioengineering concentrates on creating novel metabolic and regulatory pathways, and is currently the one that likely draws the attention of most researchers and funding. It is primarily motivated by the desire to establish biotechnology as a legitimate engineering discipline. When referring to this area of synthetic biology, the word "bioengineering" should not be confused with "traditional genetic engineering", which involves introducing a single transgene into the intended organism. Bioengineers adapted synthetic biology to provide a substantially more integrated perspective on how to alter organisms or metabolic systems.

A typical example of single-gene genetic engineering is the insertion of the human insulin gene into bacteria to create transgenic proteins. By contrast, the creation of entirely new signalling pathways containing numerous genes and regulatory components (such as an oscillator circuit to initiate the periodic production of green fluorescent protein (GFP) in mammalian cells) is known as bioengineering within synthetic biology.

By utilising simplified and abstracted metabolic and regulatory modules, as well as other standardized parts that may be freely combined to create new pathways or organisms, bioengineering aims to create innovative biological systems. In addition to creating countless opportunities for novel applications, this strategy is anticipated to make bioengineering more predictable and controllable than traditional biotechnology.
Synthetic genomics:
Another facet of synthetic biology is highlighted by synthetic genomics: the creation of organisms with a chemically synthesized (minimal) genome. This area of synthetic biology has been made possible by ongoing advancements in DNA synthesis technology, which now makes it feasible to produce DNA molecules with thousands of base pairs at a reasonable cost. The goal is to combine these molecules into complete genomes and transplant them into living cells, replacing the host cell's genome and reprogramming its metabolism to perform different functions.

Scientists have previously demonstrated the potential of this approach by creating infectious viruses through synthesis of the genomes of multiple viruses. These significant advances in science and technology triggered the initial public concerns about the risks associated with this technology.

A simple genome might also work as a "chassis genome" that could be enlarged quickly by the inclusion of genes designed for particular tasks. Such "chassis organisms" would be better suited for the insertion of new functions than wild organisms, since they would have fewer biological pathways that could potentially conflict with the new functionalities, in addition to having specific insertion sites. Much like the bioengineering approach, synthetic genomics strives to create organisms with novel "architectures" and adopts an integrative or holistic perspective of the organism. In this case, the objective is the creation of chassis genomes based on essential genes and other required DNA sequences rather than the design of metabolic or regulatory pathways based on abstract criteria.
Protocell synthetic biology:
The protocell branch of synthetic biology pursues the in vitro generation of synthetic cells. Lipid vesicles, which have all the necessary components to function as a complete system, can be used to create these artificial cells. Ultimately, such synthetic cells should meet the requirements for being deemed alive, namely the capacity for self-replication, self-maintenance, and evolution. This is the end goal of the protocell technique, although there are intermediary steps that fall short of meeting all the criteria for a living cell. To carry out a specific function, these lipid vesicles contain cell extracts or more defined sets of biological macromolecules and complex structures, such as enzymes, nucleic acids, or ribosomes. For instance, liposomes may carry out particular polymerase chain reactions or synthesize a particular protein.

In contrast to the synthetic genomics approach, which relies on coercing a natural cell to carry out the instructions encoded by an introduced synthetic genome, protocell synthetic biology takes artificial life one step closer to reality by eventually synthesizing not only the genome but every component of the cell in vitro. More than in any of the other approaches, synthetic biologists in this field view their work as basic research into the conditions necessary for life to exist and into its origin. The protocell technique, however, also lends itself well to applications; like other synthetic biology products, protocells could be employed for the manufacture of biopolymers and medicines.
Unconventional molecular biology:
The objective of the "unnatural molecular biology" strategy is to create new varieties of life based on a different kind of molecular biology, such as new types of nucleic acids or a new genetic code. New types of nucleotides that can be built into unique nucleic acids could be created by changing certain DNA or RNA constituents, such as the bases or the backbone sugars.

The normal genetic code is being altered by inserting quadruplet codons or changing some codons to encode new amino acids, which would subsequently permit the use of non-natural amino acids with unique features in protein production. For both approaches, adjusting the enzymatic machinery of the cell is a scientific and technological challenge.

Organisms with a genome built on synthetic nucleic acids, or on a totally new coding system for synthetic amino acids, would constitute a new sort of life, one with certain benefits but also new dangers. On release into the environment, there would be no horizontal gene transfer or outcrossing of genes with natural species. Furthermore, these kinds of synthetic organisms might be created to require non-natural materials for protein or nucleic acid synthesis, rendering them unable to thrive in the wild if they accidentally escaped. On the other hand, if such organisms ultimately were able to survive outside controlled spaces, they might have a particular advantage over natural organisms because they would be resistant to predatory organisms and natural viruses, which could lead to an unmanaged spread of the synthetic organisms.
In silico technique:
In silico synthetic biology is interconnected with the other strategies. The development of complex designs, whether metabolic pathways, fundamental cellular processes, or chassis genomes, is one of the major difficulties faced by the four synthetic-biology methods outlined above. Because of this, synthetic biology has a robust in silico branch, similar to systems biology, that aims to create computational models for the design of common biological components or synthetic circuits, which are essentially simulations of synthetic organisms.

The long-term goal of in silico synthetic biology is the practical application of these simulations and models through bioengineering or other fields of synthetic biology. Many of the computational simulations of synthetic organisms to date have little to no direct analogy to living things. For this reason, in silico synthetic biology is regarded as a separate category here.

It is sensible to integrate the five areas under the umbrella of synthetic biology as a unified field of study. Even though they focus on various facets of life, such as metabolic regulation, essential elements, or biochemical makeup, these five strategies all work toward the same end: creating new types of living organisms. Additionally, the varied methodologies begin from different methodological starting points, which produces the diversity of synthetic biology approaches. Synthetic biology is an interdisciplinary field that draws from and is inspired by many different scientific disciplines, not one single field or technique. Synthetic biologists all share the same underlying objective of designing and producing new forms of life, despite the fact that they may employ various methodologies, techniques, and research instruments. Any evaluation of synthetic biology, whether it examines ethical, legal, or safety considerations, must take into account the fact that while some questions, risks, and issues are unique to each technique, in other circumstances synthetic biology as a whole must be considered.
Four engineering approaches:
Synthetic biology has traditionally been divided into four different engineering approaches: top-down, parallel, orthogonal, and bottom-up. One uses unnatural chemicals to replicate emergent behaviours from natural biology and build artificial life. The other looks for interchangeable components from biological systems to assemble into systems that do not occur naturally. In either case, a synthetic objective compels researchers to venture into new territory in order to engage and resolve issues that cannot be readily resolved by analysis. Because of this, new paradigms arise in ways that analysis cannot easily drive. In addition to devices that oscillate, creep, and play tic-tac-toe, synthetic biology has produced diagnostic instruments that enhance the treatment of patients with infectious diseases.
Top-down approach:
This approach involves using metabolic and genetic engineering techniques to impart new functions to living cells. By comparing universal genes and eliminating non-essential ones to create a basic genome, this method seeks to lessen the complexity of existing cells. These initiatives are founded on the hypothesis of a single origin for cellular life, the so-called Last Universal Common Ancestor, which supports the existence of a universal minimal genome that gave rise to all living things. Recent studies, however, raise the possibility that the eukaryotic and prokaryotic cells that make up the tree of life may have evolved from a group of primordial cells rather than from a single cell. As a result, the Holy Grail-like pursuit of the "minimal genome" has grown elusive: cutting out too many non-essential functions impairs an organism's fitness and leads to "fragile" genomes.
Bottom-up approach:
This approach involves creating new biological systems in vitro by bringing together 'non-living' biomolecular components, often with the aim of constructing an artificial cell.
Reproduction, replication, and assembly are three crucial self-organizational principles taken into account to accomplish this. Cells, which are made up of a container and a metabolism, are considered "hardware" in the definition of reproduction, whereas replication occurs when a system duplicates a perfect copy of itself, as in the case of DNA, which is considered "software". Assembly occurs when vesicles or containers aggregate, whether formed of tiny droplets of organic molecules such as lipids (as in Oparin's coacervates) or of liposomes, membrane-like structures comprising phospholipids.

The study of protocells sits alongside other in vitro synthetic biology initiatives that seek to produce minimal cells, metabolic pathways, or "never-born proteins", as well as to mimic physiological functions such as cell division and growth. The in vitro enhancement of synthetic pathways has the potential to affect other synthetic biology sectors, such as metabolic engineering, even when it is not strictly classified as synthetic biology research. This research, which is primarily fundamental, deserves proper recognition as synthetic biology research.
Parallel approach:
Parallel engineering is also known as bioengineering. Parallel engineering research builds on the basic genetic code and uses conventional biomolecules, such as nucleic acids and the 20 standard amino acids, to construct biological systems. For a variety of applications in biocomputing, bioenergy, biofuels, bioremediation, optogenetics, and medicine, it involves the standardisation of DNA components and the engineering of switches, biosensors, genetic circuits, logic gates, and cellular communication operators. Most of these applications rely on one or more vectors (plasmids) to direct the expression of two or more genes and/or proteins. Plasmids are small, circular, double-stranded DNA units, found primarily in prokaryotic cells but occasionally also in eukaryotic cells, that can replicate autonomously of chromosomal DNA.
Orthogonal approach:
Also known as perpendicular engineering or "chemical synthetic biology", this strategy principally seeks to alter or enlarge the genetic codes of living systems using artificial DNA bases and/or amino acids. This subfield is also connected to xenobiology, a newly developed field that combines systems chemistry, synthetic biology, exobiology, and research into the origins of life. In recent decades, researchers have created compounds that are structurally similar to the canonical DNA bases to see whether these "alien" or xeno nucleic acid (XNA) molecules can be employed as genetic information carriers. Similarly, noncanonical moieties have taken the place of the DNA sugar (deoxyribose). To express information beyond the 20 conventional amino acids of proteins, the genetic code can be altered or enlarged. One method incorporates a specified unnatural, noncanonical, or xeno amino acid (XAA) into one or more proteins at one or more precise places, using orthogonal enzymes and a transfer RNA adaptor from another organism. Orthogonal enzymes are produced by "directed evolution", which entails repeated cycles of gene mutagenesis (generation of genotypic diversity), screening or selection (for a specific phenotypic trait), and amplification of a better variant for the next iterative round. Numerous XAAs have been effectively incorporated into proteins in bacteria, yeast, and human cell lines, as well as in more complex organisms like worms and flies. Through changes to canonical DNA sequences, directed evolution also enables the development of orthogonal ribosomes, which make it easier to incorporate XAAs into proteins, and of "mirror life": biological systems containing biomolecules made up of enantiomers with the opposite chiral orientation.
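The directed-evolution loop described above (mutagenesis, screening or selection, amplification) can be caricatured in a few lines of code. The following is a toy sketch, not any published protocol: fitness is simply similarity to an arbitrary target string, whereas a real campaign screens a phenotype in the lab, and every sequence and parameter here is hypothetical.

```python
# Toy sketch of a directed-evolution campaign: mutate, select the fittest
# variants, amplify them, and repeat. All values are illustrative.
import random

random.seed(0)

TARGET = "MKTAYIAKQR"            # hypothetical 10-residue "ideal" sequence
AA = "ACDEFGHIKLMNPQRSTVWY"      # the 20 canonical amino acids

def mutate(seq: str, rate: float = 0.1) -> str:
    """Gene mutagenesis: each residue has a chance of being substituted."""
    return "".join(random.choice(AA) if random.random() < rate else c for c in seq)

def fitness(seq: str) -> int:
    """Stand-in for a phenotypic screen: count positions matching the target."""
    return sum(a == b for a, b in zip(seq, TARGET))

population = ["A" * len(TARGET)] * 50
for generation in range(30):
    population = [mutate(s) for s in population]        # diversity generation
    population.sort(key=fitness, reverse=True)          # screening/selection
    population = population[:10] * 5                    # amplify the winners

best = max(population, key=fitness)
print(best, fitness(best))
```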
Enabling technologies:
Several novel enabling technologies were critical to the success of synthetic biology. Key concepts include standardization of biological parts and hierarchical abstraction to permit using those parts in synthetic systems. DNA serves as the guide for how biological processes should function, like the score to a complex symphony of life. Developments over the previous few decades in both reading (sequencing) and writing (synthesis) DNA sequences have significantly changed our ability to comprehend and design biological systems. These developments have produced ground-breaking techniques for designing, assembling, and modifying DNA-encoded genes, materials, circuits, and metabolic pathways, enabling ever-increasing control over biological systems and even entire organisms. Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and computer-aided design (CAD).
DNA and gene synthesis:
Driven by dramatic decreases in the cost of oligonucleotide ("oligo") synthesis and the advent of PCR, the sizes of DNA constructions built from oligos have increased to the genomic level. In 2000, researchers reported synthesis of the 9.6 kbp (kilobase pair) hepatitis C virus genome from chemically synthesized 60- to 80-mers. In 2002, researchers at Stony Brook University succeeded in synthesizing the 7,741 bp poliovirus genome from its published sequence, producing the second synthetic genome; the work spanned two years. In 2003, the 5,386 bp genome of the bacteriophage Phi X 174 was assembled in about two weeks. In 2006, the same team, at the J. Craig Venter Institute, constructed and patented a synthetic genome of a novel minimal bacterium, Mycoplasma laboratorium, and was working on getting it functioning in a living cell.

In 2007, it was reported that several companies were offering synthesis of genetic sequences up to 2,000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks. Oligonucleotides harvested from a photolithographic- or inkjet-manufactured DNA chip, combined with PCR and DNA mismatch error-correction, allow inexpensive large-scale changes of codons in genetic systems to improve gene expression or incorporate novel amino acids (see George M. Church's and Anthony Forster's synthetic cell projects). This favors a synthesis-from-scratch approach.
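To illustrate how short oligos become long constructs, here is a minimal sketch of overlap-based stitching. It assumes ordered fragments with exact end overlaps; real PCR-based assembly relies on hybridisation, polymerase extension, and error correction, all omitted here, and the fragment sequences are made up.

```python
# Toy sketch of overlap assembly: join fragments wherever the end of one
# exactly matches the start of the next (assumes ordered, error-free input).

def merge(a: str, b: str, min_overlap: int = 10) -> str | None:
    """Return a joined to b over their longest exact end/start overlap."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a[-k:] == b[:k]:
            return a + b[k:]
    return None

def assemble(fragments: list[str]) -> str:
    contig = fragments[0]
    for frag in fragments[1:]:
        merged = merge(contig, frag)
        if merged is None:
            raise ValueError("no overlap found; fragments out of order?")
        contig = merged
    return contig

# Hypothetical fragments sharing 10 bp overlaps
frags = [
    "ATGGCTAGCTAGGATCCGGTAC",
    "GATCCGGTACTTGACAGCTAGC",
    "GACAGCTAGCTACTAGAGTAA",
]
print(assemble(frags))
```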
Additionally, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years". While other methods take months or years to edit gene sequences, CRISPR cuts that time down to weeks. Due to its ease of use and accessibility, however, it has raised ethical concerns, especially surrounding its use in biohacking.
Sequencing:
DNA sequencing determines the order of nucleotide bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms.
Modularity:
Modularity is the ability of a system or component to operate without reference to its context. The most used standardized DNA parts are BioBrick plasmids, invented by Tom Knight in 2003. BioBricks are stored at the Registry of Standard Biological Parts in Cambridge, Massachusetts, and the BioBrick standard has been used by tens of thousands of students worldwide in the international Genetically Engineered Machine (iGEM) competition. BioBrick Assembly Standard 10 promotes modularity by allowing BioBrick coding sequences to be spliced out and exchanged using the restriction enzymes EcoRI or XbaI (BioBrick prefix) and SpeI and PstI (BioBrick suffix).

Sequence overlap between two genetic elements (genes or coding sequences), called overlapping genes, can prevent their individual manipulation. To increase genome modularity, the practice of genome refactoring, improving "the internal structure of an existing system for future use, while simultaneously maintaining external system function", has been adopted across synthetic biology disciplines. Notable examples of refactoring include the nitrogen fixation cluster and the type III secretion system, along with bacteriophages T7 and ΦX174.

While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools can send proteins to specific regions of the cell and link different proteins together. The interaction strength between protein partners should be tunable from a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilience to harsh conditions). Interactions such as coiled coils, SH3 domain-peptide binding, or SpyTag/SpyCatcher offer such control. In addition, it is necessary to regulate protein-protein interactions in cells, for example with light (using light-oxygen-voltage-sensing domains) or with cell-permeable small molecules by chemically induced dimerization.

In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components, and these components may alter the signaling capability of the module. In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity that the module sustains in isolation.
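Because RFC10 assembly leaves a fixed scar between joined parts, composing BioBricks can be sketched as string operations. This is an illustration only, not a lab protocol: the prefix, suffix, and TACTAGAG scar follow the standard described above, while the part names and sequences are hypothetical.

```python
# Illustrative sketch of BioBrick RFC10 composition (not a lab protocol).

PREFIX = "GAATTCGCGGCCGCTTCTAGAG"   # EcoRI .. NotI .. XbaI
SUFFIX = "TACTAGTAGCGGCCGCTGCAG"    # SpeI .. NotI .. PstI
SCAR   = "TACTAGAG"                 # mixed SpeI/XbaI site left between parts

def biobrick_join(upstream: str, downstream: str) -> str:
    """Simulate standard assembly: cutting the upstream part out with
    EcoRI+SpeI and ligating it into a plasmid opened with EcoRI+XbaI leaves
    an 8 bp scar that neither enzyme can re-cut."""
    return PREFIX + upstream + SCAR + downstream + SUFFIX

# Hypothetical part sequences, for illustration only
promoter = "TTGACAGCTAGCTCAGTCCTAGGTATAATGCTAGC"  # a promoter-like part
rbs      = "AAAGAGGAGAAA"                          # an RBS-like part
print(biobrick_join(promoter, rbs))
```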
Modeling:
Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell, and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in transcription, translation, regulation, and induction of gene regulatory networks.

Because of the numerous species involved and the intricacy of their relationships, only extensive modelling can enable the exploration of dynamic gene expression in a form suitable for research and design. Dynamic simulations of the entire set of biomolecular interactions involved in regulation, transport, transcription, induction, and translation enable molecular-level detailing of designs. This contrasts with modelling artificial networks a posteriori.
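As a concrete example of such a model, here is a minimal sketch of the two-gene mutually repressive toggle switch mentioned in the history section, integrated with forward Euler. The equations are the standard bistable form; the parameter values are illustrative, not fitted to any experiment.

```python
# Minimal sketch of dynamic modelling: a genetic toggle switch, two genes
# that repress each other, integrated with forward Euler. Illustrative only.

def simulate_toggle(u0, v0, alpha=10.0, beta=2.0, dt=0.01, steps=5000):
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v**beta) - u   # repressor 1: made unless repressed by 2
        dv = alpha / (1.0 + u**beta) - v   # repressor 2: made unless repressed by 1
        u, v = u + dt * du, v + dt * dv
    return u, v

# Two different initial conditions settle into the two stable states (bistability)
print(simulate_toggle(5.0, 0.1))   # -> high u, low v
print(simulate_toggle(0.1, 5.0))   # -> low u, high v
```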
Microfluidics:
Microfluidics, in particular droplet microfluidics, is an emerging tool used to construct new components and to analyze and characterize them. It is widely employed in screening assays.
Synthetic transcription factors:
Studies have considered the components of the DNA transcription mechanism. One desire of scientists creating synthetic biological circuits is to be able to control the transcription of synthetic DNA in unicellular organisms (prokaryotes) and in multicellular organisms (eukaryotes). One study tested the adjustability of synthetic transcription factors (sTFs) in terms of transcription output and cooperative ability among multiple transcription factor complexes. Researchers were able to mutate functional regions called zinc fingers, the DNA-specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs, mirroring eukaryotic transcription mechanisms.
Applications:
Synthetic biology initiatives frequently aim to redesign organisms so that they can create a material, such as a drug or fuel, or acquire a new function, such as the ability to sense something in the environment. Examples of what researchers are creating using synthetic biology include:
Microorganisms harnessed for bioremediation to remove contaminants from our water, soil, and air.
Rice modified to produce beta-carotene, a nutrient typically associated with carrots that prevents vitamin A deficiency. Every year, between 250,000 and 500,000 children lose their vision due to vitamin A deficiency, which also significantly raises their risk of dying from infectious diseases.
Yeast engineered to produce rose oil as a sustainable and environmentally benign alternative to the fresh roses that perfumers use to create expensive fragrances.
Biosensors:
A biosensor refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon such as the presence of heavy metals or toxins. One such system is the Lux operon of Aliivibrio fischeri, which codes for the enzyme that is the source of bacterial bioluminescence and can be placed downstream of a responsive promoter to express the luminescence genes in response to a specific environmental stimulus. One sensor thus created consisted of a bioluminescent bacterial coating on a photosensitive computer chip to detect certain petroleum pollutants: when the bacteria sense the pollutant, they luminesce. Another example of a similar mechanism is the detection of landmines by an engineered E. coli reporter strain capable of detecting TNT and its main degradation product DNT, and consequently producing a green fluorescent protein (GFP).

Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes; microbe cohorts have been used. Biosensors could also be used to detect pathogenic signatures, such as those of SARS-CoV-2, and can be wearable.

To detect and react to diverse and transient environmental factors, cells have evolved a wide range of regulatory circuits, from transcriptional to post-translational. These circuits consist of carefully designed sensitive parts that bind analytes and set signal-detection thresholds, and of transducer modules that filter the signals and activate a biological response. Modularity and selectivity are programmed into biosensor circuits at the transcriptional, translational, and post-translational levels to achieve a delicate balance between the two basic sensing modules.
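The signal-detection threshold of such a sensing module is commonly parameterised with a Hill function relating analyte concentration to reporter output. The sketch below is generic and assumes no particular sensor; every number is illustrative.

```python
# Sketch of a biosensor transfer function: reporter output (e.g. GFP or lux)
# as a Hill function of analyte concentration. Illustrative values only.

def reporter_output(analyte, basal=1.0, vmax=100.0, K=5.0, n=2.0):
    """K sets the threshold (half-maximal analyte level); n sets steepness."""
    return basal + vmax * analyte**n / (K**n + analyte**n)

for conc in (0.0, 1.0, 5.0, 25.0):
    print(f"analyte={conc:5.1f}  output={reporter_output(conc):7.2f}")
```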
Food and drink:
Not all synthetic nutrition products are animal food products; for instance, as of 2021 there are also synthetic coffee products that are reported to be close to commercialization. Related fields of research and production based on synthetic biology that can be used for the production of food and drink include genetically engineered microbial food cultures (e.g., for solar-energy-based protein powder) and cell-free artificial synthesis (e.g., synthetic starch; see Biobased economy#Agriculture).

Materials:
Photosynthetic microbial cells have been used as a step toward the synthetic production of spider silk.
Biological computers:
A biological computer refers to an engineered biological system that can perform computer-like operations, a dominant paradigm in synthetic biology. Researchers have built and characterized a variety of logic gates in a number of organisms and demonstrated both analog and digital computation in living cells; for example, bacteria can be engineered to perform analog and/or digital computation. In 2007, research demonstrated a universal logic evaluator that operates in mammalian cells. Subsequently, in 2011, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital computation to detect and kill human cancer cells. In 2016, another group of researchers demonstrated that principles of computer engineering can be used to automate digital circuit design in bacterial cells. In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells. In 2019, researchers implemented a perceptron in biological systems, opening the way for machine learning in these systems.
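The digital abstraction behind these circuits can be sketched numerically: each genetic gate is a repressor whose output promoter activity is a declining Hill function of its input, and gates compose by feeding one output level into the next. This is roughly the gate model used by automated design tools such as Cello, but the parameters below are illustrative assumptions, not measured gate characteristics.

```python
# Sketch of the digital abstraction in genetic circuit design: repressor
# gates as declining Hill functions, composed into an AND circuit from NORs.

def not_gate(x, ymax=10.0, ymin=0.1, K=1.0, n=2.0):
    """Repressor-based inverter: high input -> low promoter output."""
    return ymin + (ymax - ymin) * K**n / (K**n + x**n)

def nor_gate(a, b, **params):
    """Two input promoters drive one repressor, so NOR = NOT(a + b)."""
    return not_gate(a + b, **params)

LOW, HIGH = 0.1, 10.0
for a in (LOW, HIGH):
    for b in (LOW, HIGH):
        # AND from NOR gates: AND(a, b) = NOR(NOR(a, a), NOR(b, b))
        out = nor_gate(nor_gate(a, a), nor_gate(b, b))
        print(f"a={int(a == HIGH)} b={int(b == HIGH)} -> output {out:.2f}")
```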
Cell transformation:
Cells use interacting genes and proteins, called gene circuits, to implement diverse functions such as responding to environmental signals, decision making, and communication. Three key components are involved: DNA, RNA, and synthetic-biologist-designed gene circuits that can control gene expression at several levels, including the transcriptional, post-transcriptional, and translational levels.
Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering E. coli and yeast for commercial production of a precursor of the antimalarial drug artemisinin.

Entire organisms have yet to be created from scratch, although living cells can be transformed with new DNA. Several methods allow constructing synthetic DNA components and even entire synthetic genomes, but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or phenotypes while growing and thriving. Cell transformation is used to create biological circuits, which can be manipulated to yield desired outputs.

By integrating synthetic biology with materials science, it would be possible to use cells as microscopic molecular foundries to produce materials whose properties are genetically encoded. Re-engineering has produced Curli fibers, the amyloid component of the extracellular material of biofilms, as a platform for programmable nanomaterials. These nanofibers were genetically constructed for specific functions, including adhesion to substrates, nanoparticle templating, and protein immobilization.
Designed proteins:
Natural proteins can be engineered, for example by directed evolution, and novel protein structures that match or improve on the functionality of existing proteins can be produced. One group generated a helix bundle that was capable of binding oxygen with properties similar to hemoglobin, yet did not bind carbon monoxide. A similar protein structure was generated to support a variety of oxidoreductase activities, while another formed a structurally and sequentially novel ATPase. Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule clozapine N-oxide but were insensitive to the native ligand acetylcholine; these receptors are known as DREADDs. Novel functionalities or protein specificity can also be engineered using computational approaches. One study was able to use two different computational methods: a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100-fold specificity for production of longer-chain alcohols from sugar.

Another common investigation is expansion of the natural set of 20 amino acids. Excluding stop codons, 61 codons have been identified, but only 20 amino acids are generally coded in all organisms. Certain codons are engineered to code for alternative amino acids, including nonstandard amino acids such as O-methyl tyrosine, or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded nonsense suppressor tRNA/aminoacyl-tRNA synthetase pairs from other organisms, though in most cases substantial engineering is required.

Other researchers have investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins in which groups of amino acids are replaced by a single amino acid; for instance, several non-polar amino acids within a protein can all be replaced with a single non-polar amino acid. One project demonstrated that an engineered version of chorismate mutase still had catalytic activity when only nine amino acids were used.

Researchers and companies practice synthetic biology to synthesize industrial enzymes with high activity, optimal yields, and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost-effective. The improvement of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentative chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".
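The reduced-alphabet idea in the preceding paragraph can be sketched as a simple residue mapping that collapses chemically similar amino acids to one representative, shrinking a library's sequence space. The grouping below is one common coarse scheme chosen for illustration; it is not the grouping used in any particular study cited here.

```python
# Sketch of a reduced amino-acid alphabet: collapse groups of chemically
# similar residues to one representative. The grouping is illustrative.

GROUPS = {
    "AVLIMC": "L",   # non-polar / aliphatic -> leucine
    "FWY":    "F",   # aromatic -> phenylalanine
    "STNQ":   "S",   # polar -> serine
    "KRH":    "K",   # basic -> lysine
    "DE":     "D",   # acidic -> aspartate
    "GP":     "G",   # special backbone residues
}
REDUCE = {aa: rep for members, rep in GROUPS.items() for aa in members}

def reduce_alphabet(seq: str) -> str:
    return "".join(REDUCE.get(aa, aa) for aa in seq)

print(reduce_alphabet("MKTAYIAKQRQISFVKSHFSRQ"))  # hypothetical sequence
```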
Designed nucleic acid systems:
Scientists can encode digital information onto a single strand of synthetic DNA. In 2012, George M. Church encoded one of his books about synthetic biology in DNA. The 5.3 Mb of data was more than 1,000 times greater than the previous largest amount of information stored in synthesized DNA. A similar project encoded the complete sonnets of William Shakespeare in DNA. More generally, algorithms such as NUPACK, ViennaRNA, the Ribosome Binding Site Calculator, Cello, and the Non-Repetitive Parts Calculator enable the design of new genetic systems.
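A minimal sketch of the underlying idea, mapping each two-bit pair to one base: real DNA storage schemes, including Church's, add addressing, redundancy, and constraints against homopolymer runs, all of which this illustration omits.

```python
# Simplified sketch of storing digital data in DNA: two bits per base.

ENC = {"00": "A", "01": "C", "10": "G", "11": "T"}
DEC = {v: k for k, v in ENC.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(ENC[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    bits = "".join(DEC[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"synthetic biology"
strand = encode(message)
print(strand)
assert decode(strand) == message  # round-trip check
```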
Many technologies have been developed for incorporating unnatural nucleotides and amino acids into nucleic acids and proteins, both in vitro and in vivo. For example, in May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA. By including the individual artificial nucleotides in the culture media, they were able to passage the bacteria 24 times; the bacteria did not generate mRNA or proteins able to use the artificial nucleotides.
Space exploration:
Synthetic biology has raised NASA's interest, as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth. On Mars in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of crewed outposts with less dependence on Earth. Work has gone into developing plant strains that are able to cope with the harsh Martian environment, using techniques similar to those employed to increase resilience to certain environmental factors in agricultural crops.
Synthetic life:
One important topic in synthetic biology is synthetic life, which is concerned with hypothetical organisms created in vitro from biomolecules and/or chemical analogues thereof. Synthetic life experiments attempt to probe the origins of life, study some of the properties of life, or, more ambitiously, recreate life from non-living (abiotic) components. Synthetic life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water. In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools.

A living "artificial cell" has been defined as a completely synthetic cell that can capture energy, maintain ion gradients, contain macromolecules, store information, and have the ability to mutate. Nobody has yet been able to create such a cell.

A completely synthetic bacterial chromosome was produced in 2010 by Craig Venter, and his team introduced it into genomically emptied bacterial host cells. The host cells were able to grow and replicate. Mycoplasma laboratorium is the only living organism with a completely engineered genome.
The first living organism with an 'artificial' expanded DNA code was presented in 2014; the team used E. coli that had its genome extracted and replaced with a chromosome containing an expanded genetic code. The added nucleosides are d5SICS and dNaM. In May 2019, in a milestone effort, researchers reported the creation of a new synthetic (possibly artificial) form of viable life, a variant of the bacterium Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons that still encode all 20 amino acids. In 2017, the international Build-a-Cell large-scale open-source research collaboration for the construction of synthetic living cells was started, followed by national synthetic cell organizations in several countries, including FabriCell, MaxSynBio, and BaSyC. The European synthetic cell efforts were unified in 2019 as the SynCellEU initiative.
Drug delivery platforms:
In therapeutics, synthetic biology has achieved significant advancements in altering and simplifying the scope of therapeutics in a relatively short period of time. New therapeutic platforms, from the discovery of disease mechanisms and drug targets to the manufacture and transport of small molecules, are made possible by the rational, model-guided design and construction of biological components.

Synthetic biology devices have been designed to act as therapies. Fully engineered viruses and organisms can be controlled to target particular pathogens and diseased pathways. In two independent studies, researchers utilised genetically modified bacteriophages to fight antibiotic-resistant bacteria by giving them genetic features that specifically target and hinder bacterial defences against antibiotic activity. In cancer therapy, since conventional medicines frequently target tumours and normal tissues indiscriminately, engineered viruses and organisms that can identify pathological signals and couple their therapeutic action to them may be helpful. For example, p53 pathway activity in human cells was used to control the replication of adenoviruses.
Engineered bacteria-based platform:
Bacteria have long been used in cancer treatment: Bifidobacterium and Clostridium selectively colonize tumors and reduce their size. Recently, synthetic biologists have reprogrammed bacteria to sense and respond to a particular cancer state. Most often, bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, peptides that specifically recognize a tumor are expressed on the surfaces of the bacteria; peptides used include an affibody molecule that specifically targets human epidermal growth factor receptor 2 and a synthetic adhesin. The other approach is to allow bacteria to sense the tumor microenvironment, for example hypoxia, by building an AND logic gate into the bacteria. The bacteria then release the therapeutic molecules to the tumor only through either lysis or the bacterial secretion system. Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems, as well as other strategies, can be used. The system can be made inducible by external signals: inducers include chemicals and electromagnetic or light waves.
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are Salmonella typhimurium, Escherichia coli, Bifidobacteria, Streptococcus, Lactobacillus, Listeria, and Bacillus subtilis. Each of these species has its own properties and is unique in cancer therapy in terms of tissue colonization, interaction with the immune system, and ease of application.
Engineered yeast-based platform:
Synthetic biologists are developing genetically modified live yeast that can deliver therapeutic biologic medicines. When delivered orally, these live yeast act like micro-factories and make therapeutic molecules directly in the gastrointestinal tract. Because yeast are eukaryotic, a key benefit is that they can be administered together with antibiotics. Probiotic yeast expressing the human P2Y2 purinergic receptor suppressed intestinal inflammation in mouse models of inflammatory bowel disease. A live S. boulardii yeast has been developed that delivers a tetra-specific anti-toxin that potently neutralizes Toxin A and Toxin B of Clostridioides difficile. This therapeutic anti-toxin is a fusion of four single-domain antibodies (nanobodies) that potently and broadly neutralize the two major virulence factors of C. difficile at the site of infection in preclinical models. The first-in-human clinical trial of engineered live yeast for the treatment of C. difficile infection is anticipated in 2024 and will be sponsored by the developer, Fzata, Inc.
Cell-based platform:
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on immunotherapies, mostly by engineering T cells.
T cell receptors have been engineered and 'trained' to detect cancer epitopes. Chimeric antigen receptors (CARs) are composed of a fragment of an antibody fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. Second-generation CAR-based therapies have been approved by the FDA.

Gene switches have been designed to enhance the safety of the treatment. Kill switches have been developed to terminate the therapy should the patient show severe side effects; other mechanisms can control the system more finely, stopping and reactivating it. Since the number of T cells is important for therapy persistence and severity, the growth of T cells is also controlled to tune the effectiveness and safety of the therapeutics. Although several mechanisms can improve safety and control, limitations include the difficulty of introducing large DNA circuits into cells and the risks associated with introducing foreign components, especially proteins, into cells.
Biofuels, pharmaceuticals and biomaterials:
The most popular biofuel is ethanol produced from corn or sugar cane, but this method of producing biofuels is troublesome and constrained by high agricultural costs and the inadequate fuel characteristics of ethanol. An alternative and potential source of renewable energy is microbes whose metabolic pathways have been altered to be more efficient at converting biomass into biofuels. These techniques can only be expected to succeed if their production costs can be brought down to match those of present fuel production. Relatedly, there are several medicines whose expensive manufacturing procedures prevent them from having a larger therapeutic range. The creation of new materials and the microbial manufacturing of biomaterials would both benefit substantially from novel synthetic biology tools.
CRISPR/Cas9:
The clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) system is a powerful method for genome engineering in a range of organisms because of its simplicity, modularity, and scalability. In this technique, a guide RNA (gRNA) recruits the CRISPR nuclease Cas9 to a particular spot in the genome, causing a double-strand break. Several DNA repair processes, including homology-directed recombination and non-homologous end joining, can then be used to accomplish the desired genome change (i.e., gene deletion or insertion). Additionally, dCas9 (dead or nuclease-deficient Cas9), a Cas9 double mutant (H840A, D10A), has been utilised to control gene expression in bacteria or, when fused to activation or repression domains, in yeast.
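The targeting rule is easy to illustrate in code: Streptococcus pyogenes Cas9 requires a roughly 20 nt protospacer immediately 5' of an NGG PAM, with the blunt cut about 3 bp upstream of the PAM. The sketch below scans one strand of a made-up sequence; a real design tool would also scan the reverse complement and score off-target sites.

```python
# Sketch of SpCas9 target-site scanning: find every NGG PAM with 20 nt of
# upstream protospacer on the given strand (one strand only, illustrative).
import re

def find_cas9_targets(seq: str):
    """Yield (protospacer, pam, cut_index) for each candidate site."""
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        pam_start = m.start(1)
        if pam_start >= 20:
            protospacer = seq[pam_start - 20:pam_start]
            yield protospacer, m.group(1), pam_start - 3  # blunt cut ~3 bp 5' of PAM

demo = "ATGCATGCATGCATGCATGCATGCAGGTTTACGGA"  # made-up sequence
for spacer, pam, cut in find_cas9_targets(demo):
    print(spacer, pam, "cut at", cut)
```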
Regulatory elements:
To build and develop biological systems, regulatory components including promoters, ribosome-binding sites (RBSs), and terminators are crucial. After years of study, promoters and terminators of many varieties exist in large numbers for Escherichia coli and, to a lesser extent, for the well-researched model organism Saccharomyces cerevisiae, but for other organisms of interest these tools remain quite scarce. To overcome this constraint, numerous techniques have been developed for finding and characterizing promoters and terminators, including genome mining, random mutagenesis, hybrid engineering, biophysical modelling, combinatorial design, and rational design.
Organoids:
Synthetic biology has been used for organoids, which are lab-grown organs with application to medical research and transplantation.
Bioprinted organs:
There are many applications for 3D bioprinting in the medical field. An infant patient with a rare respiratory disease known as tracheobronchomalacia (TBM) was given a tracheal splint created with 3D printing. 3D bioprinting can be used to reconstruct tissue from various regions of the body. Patients with end-stage bladder disease can be treated by using engineered bladder tissues to rebuild the damaged organ. This technology can also potentially be applied to bone, skin, cartilage, and muscle tissue. Though one long-term goal of 3D bioprinting technology is to reconstruct an entire organ, there has been little success in printing fully functional organs. Unlike implantable stents, organs have complex shapes and are significantly harder to bioprint. A bioprinted heart, for example, must meet not only structural requirements but also requirements for vascularization, mechanical load, and electrical signal propagation. Israeli researchers constructed a rabbit-sized heart out of human cells in 2019. In 2022, the first success of a clinical trial for a 3D bioprinted transplant made from the patient's own cells, an external ear to treat microtia, was reported. For bioprinted food such as meat, see the Food and drink section above.
Other transplants and induced regeneration:
There is ongoing research and development into synthetic biology based methods for inducing regeneration in humans, as well as for creating transplantable artificial organs.
Nanoparticles, artificial cells and micro-droplets:
Synthetic biology can be used to create nanoparticles for drug delivery, as well as for other purposes. Complementary research and development has sought to create, and has created, synthetic cells that mimic functions of biological cells. Applications include medicine, such as designer nanoparticles that make blood cells eat away, from the inside out, portions of the atherosclerotic plaque that causes heart attacks. Synthetic micro-droplets for algal cells, or synergistic algal-bacterial multicellular spheroid microbial reactors, could for example be used to produce hydrogen for a hydrogen economy.
Ethics:
The creation of new life and the tampering with existing life have raised ethical concerns in the field of synthetic biology and are actively being discussed. Common ethical questions include:

Is it morally right to tamper with nature? Is one playing God when creating new life? What happens if a synthetic organism accidentally escapes? What if an individual misuses synthetic biology and creates a harmful entity (e.g., a biological weapon)? Who will have control of and access to the products of synthetic biology? Who will gain from these innovations: investors, medical patients, industrial farmers? Does the patent system allow patents on living organisms? What about parts of organisms, like HIV resistance genes in humans? What if a new creation is deserving of moral or legal status?

The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms. Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.

Ethical issues surfaced earlier for recombinant DNA and genetically modified organism (GMO) technologies, and extensive regulations of genetic engineering and pathogen research were already in place in many jurisdictions. Amy Gutmann, former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."
The "creation" of life One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is at small-scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies. Many advocates express the great potential value—to agriculture, medicine, and academic knowledge, among other fields—of creating artificial life forms. Creation of new entities could expand scientific knowledge well beyond what is currently known from studying natural phenomena. Yet there is concern that artificial life forms may reduce nature's "purity" (i.e., nature could be somehow corrupted by human intervention and manipulation) and potentially influence the adoption of more engineering-like principles instead of biodiversity- and nature-focused ideals. Some are also concerned that if an artificial life form were to be released into nature, it could hamper biodiversity by beating out natural species for resources (similar to how algal blooms kill marine species). Another concern involves the ethical treatment of newly created entities if they happen to sense pain, sentience, and self-perception. There is an ongoing debate as to whether such life forms should be granted moral or legal rights, though no consensus exists as to how these rights would be administered or enforced.
Ethics:
Ethical support for synthetic biology Ethics and moral rationales that support certain applications of synthetic biology include their potential to mitigate substantial global problems: the detrimental environmental impacts of conventional agriculture (including meat production), animal welfare, food security, and human health, as well as the potential to reduce human labor needs and, via therapies for disease, to reduce human suffering and prolong life.
Ethics:
Biosafety and biocontainment What is most ethically appropriate when considering biosafety measures? How can the accidental introduction of synthetic life into the natural environment be avoided? Much ethical consideration and critical thought have been given to these questions. Biosafety not only refers to biological containment; it also refers to strides taken to protect the public from potentially hazardous biological agents. Even though such concerns are important and remain unanswered, not all products of synthetic biology present concern for biological safety or negative consequences for the environment. It is argued that most synthetic technologies are benign and are incapable of flourishing in the outside world due to their "unnatural" characteristics, as there is yet to be an example of a transgenic microbe that has gained a fitness advantage in the wild.
Ethics:
In general, existing hazard controls, risk assessment methodologies, and regulations developed for traditional genetically modified organisms (GMOs) are considered to be sufficient for synthetic organisms. "Extrinsic" biocontainment methods in a laboratory context include physical containment through biosafety cabinets and gloveboxes, as well as personal protective equipment. In an agricultural context, they include isolation distances and pollen barriers, similar to methods for biocontainment of GMOs. Synthetic organisms may offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or prevent horizontal gene transfer to natural organisms. Examples of intrinsic biocontainment include auxotrophy, biological kill switches, inability of the organism to replicate or to pass modified or synthetic genes to offspring, and the use of xenobiological organisms using alternative biochemistry, for example using artificial xeno nucleic acids (XNA) instead of DNA. Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce histidine, an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas.
Ethics:
Biosecurity and bioterrorism Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical and biosecurity issues, humanity must consider and plan how to deal with potentially harmful creations, and what kinds of ethical measures could be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies, however, the issues are not seen as new, because they were raised during the earlier recombinant DNA and genetically modified organism (GMO) debates, and extensive regulations of genetic engineering and pathogen research are already in place in many jurisdictions. Additionally, the development of synthetic biology tools has made it easier for individuals with less education, training, and access to equipment to modify and use pathogenic organisms as bioweapons. This increases the threat of bioterrorism, especially as terrorist groups become aware of the significant social, economic, and political disruption caused by pandemics like COVID-19. As new techniques are developed in the field of synthetic biology, the risk of bioterrorism is likely to continue to grow. Juan Zarate, who served as Deputy National Security Advisor for Combating Terrorism from 2005 to 2009, noted that "the severity and extreme disruption of a novel coronavirus will likely spur the imagination of the most creative and dangerous groups and individuals to reconsider bioterrorist attacks." European Union The European Union-funded project SYNBIOSAFE has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics, and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists. The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the biohacking community of amateur biologists. Key ethical issues concerned the creation of new life forms.
Ethics:
A subsequent report focused on biosecurity, especially the so-called dual-use challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., smallpox). The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity. COSY, another European initiative, focuses on public perception and communication. To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published SYNBIOSAFE, a 38-minute documentary film, in October 2009. The International Association Synthetic Biology has proposed self-regulation, specifying measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".
Ethics:
United States In January 2009, the Alfred P. Sloan Foundation funded the Woodrow Wilson Center, the Hastings Center, and the J. Craig Venter Institute to examine the public perception, ethics and policy implications of synthetic biology. On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology". After the publication of the first synthetic genome and the accompanying media coverage about "life" being created, President Barack Obama established the Presidential Commission for the Study of Bioethical Issues to study synthetic biology. The commission convened a series of meetings and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies". The commission stated that while Venter's achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the "creation of life". It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education. Synthetic biology, as a major tool for biological advances, results in the "potential for developing biological weapons, possible unforeseen negative impacts on human health ... and any potential environmental impact". The proliferation of such technology could also make the production of biological and chemical weapons available to a wider array of state and non-state actors. These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. The President's Bioethics Commission, in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science but also for educating the public.
Ethics:
Opposition On March 13, 2012, over 100 environmental and civil society groups, including Friends of the Earth, the International Center for Technology Assessment, and the ETC Group, issued the manifesto The Principles for the Oversight of Synthetic Biology. This manifesto calls for a worldwide moratorium on the release and commercial use of synthetic organisms until more robust regulations and rigorous biosafety measures are established. The groups specifically call for an outright ban on the use of synthetic biology on the human genome or human microbiome. Richard Lewontin wrote that some of the safety tenets for oversight discussed in The Principles for the Oversight of Synthetic Biology are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".
Health and safety:
The hazards of synthetic biology include biosafety hazards to workers and the public, biosecurity hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks. For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for bioterrorism. Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals. Lastly, environmental hazards include adverse effects on biodiversity and ecosystem services, including potential changes to land use resulting from agricultural use of synthetic organisms. Synthetic biology is an example of a dual-use technology with the potential to be used in ways that could intentionally or unintentionally harm humans and/or damage the environment. Often "scientists, their host institutions and funding bodies" consider whether the planned research could be misused and sometimes implement measures to reduce the likelihood of misuse. Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences. Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Product feed**
Product feed:
A product feed or product data feed is a file made up of a list of products and attributes of those products, organized so that each product can be displayed, advertised or compared in a unique way. A product feed typically contains a product image, title, product identifier, marketing copy, and product attributes, but can also contain links to rich media assets such as videos, 3D animations, brochures, product stories, product relations, and reviews, as in the case of Open Icecat, the multilingual open content catalogue.
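To make the structure concrete, the sketch below serializes one product entry, with the typical attributes listed above, to a simple XML feed using Python's standard library. All field names and values here are invented placeholders; real channels each define their own required attribute sets.

```python
import xml.etree.ElementTree as ET

# One product entry with the typical attributes named above; every value
# here is an invented placeholder for illustration.
product = {
    "id": "SKU-12345",
    "title": "Stainless Steel Water Bottle 750 ml",
    "description": "Keeps drinks cold for 24 hours.",  # marketing copy
    "image_link": "https://example.com/img/sku-12345.jpg",
    "price": "19.99 USD",
    "brand": "ExampleBrand",
    "availability": "in stock",
}

feed = ET.Element("products")
item = ET.SubElement(feed, "product")
for name, value in product.items():
    ET.SubElement(item, name).text = value

# Each <product> element can now be displayed, advertised or compared
# by any consumer of the feed.
print(ET.tostring(feed, encoding="unicode"))
```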
Product feed:
Product feeds supply the content that is presented on many kinds of e-commerce websites such as webshops, search engines, price comparison websites, affiliate networks, and other similar aggregators of e-commerce information. Product data feeds are generated by manufacturers and online retailers and, in some cases, product information is extracted using web scraping or web harvesting from the online shop's website.
Applications:
While product feeds differ in content and structure, the goal remains the same – deliver high-quality (fresh, relevant, accurate, comprehensive) information so that shoppers can make a buying decision. Product data feeds are often delivered between manufacturers, distributors and retailers, and are also used within a variety of online marketing channels that help shoppers locate and understand the product they wish to purchase and drive traffic to the retailers' websites. These marketing channels include: Price comparison websites – feeds supply the product descriptive content needed to run sites that compare pricing (price comparison websites), attributes (mostly in vertical search portals) and availability.
Applications:
Paid search affiliates – PPC campaigns use APIs that receive a range of attributes within product feeds to determine campaign keywords and bidding.
Affiliate networks – affiliate networks funnel products through their platforms from merchants to affiliates.
Marketplaces – receive product feeds from their merchants (eBay and Amazon, for example).
Social networks – can accept product feeds from merchants to list products (Facebook, Instagram and Pinterest, for example).
Feed management tools, such as CTX Feed Pro – generate and maintain feeds so that product listings are approved faster, conditionally enhance product information, filter unoptimized products, and keep product information updated on multiple channels automatically.
Feed formats:
Google has stressed the importance of quality product data feeds and has updated its feed requirements accordingly.
Other product listing sites use proprietary formats that are either plain text or XML format.
Emerging RDF format: Semantic web standards such as RDF are taking root. It is expected that product feeds will soon adopt this new web standard. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
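As a rough illustration of what an RDF-based product entry could look like, the following sketch uses the third-party rdflib package with the schema.org vocabulary; the subject URI and literal values are invented for illustration, and actual feed vocabularies may differ.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("https://schema.org/")
# Hypothetical product URI, invented for this example.
product = URIRef("https://example.com/products/sku-12345")

g = Graph()
g.bind("schema", SCHEMA)
g.add((product, RDF.type, SCHEMA.Product))
g.add((product, SCHEMA.name, Literal("Stainless Steel Water Bottle 750 ml")))
g.add((product, SCHEMA.brand, Literal("ExampleBrand")))
g.add((product, SCHEMA.sku, Literal("SKU-12345")))

# Serialize as Turtle, one common RDF text format.
print(g.serialize(format="turtle"))
```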
**PowerDNS**
PowerDNS:
PowerDNS is a DNS server program, written in C++ and licensed under the GPL. It runs on most Unix derivatives. PowerDNS features a large number of different backends ranging from simple BIND style zonefiles to relational databases and load balancing/failover algorithms. A DNS recursor is provided as a separate program.
History:
PowerDNS development began in 1999 and was originally a commercial proprietary product. In November 2002, the source code was made public under the open-source GPL v2 license.
Features:
PowerDNS Authoritative Server (pdns_server) consists of a single core, and multiple dynamically loadable backends that run multi-threaded. The core handles all packet processing and DNS intelligence, while one or more backends deliver DNS records using arbitrary storage methods.
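As an illustration of how a backend can serve records from an arbitrary storage method, the sketch below mimics the style of PowerDNS's pipe backend, which talks to pdns_server over stdin/stdout using a tab-separated line protocol. The handshake and field layout shown are a best-effort approximation for illustration, not an authoritative statement of the pipe backend ABI.

```python
#!/usr/bin/env python3
"""Minimal pipe-backend-style resolver (illustrative sketch only)."""
import sys

# Hypothetical in-memory "zone" standing in for an arbitrary storage method.
RECORDS = {
    ("example.org", "A"): ("192.0.2.1", 3600),
    ("www.example.org", "A"): ("192.0.2.2", 3600),
}

def main() -> None:
    # Handshake: the server announces itself first (assumed "HELO" line).
    line = sys.stdin.readline()
    if not line.startswith("HELO"):
        print("FAIL", flush=True)
        return
    print("OK\tpython demo backend", flush=True)

    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if fields[0] != "Q" or len(fields) < 6:
            print("END", flush=True)
            continue
        _, qname, qclass, qtype, qid, remote = fields[:6]
        hit = RECORDS.get((qname.lower(), qtype))
        if hit:
            content, ttl = hit
            # Assumed layout: DATA qname qclass qtype ttl id content
            print("\t".join(["DATA", qname, qclass, qtype,
                             str(ttl), qid, content]), flush=True)
        print("END", flush=True)  # every query is terminated with END

if __name__ == "__main__":
    main()
```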
Features:
Zone transfers and update notifications are supported, and the processes can run unprivileged and chrooted. Various caches are maintained to speed up query processing. Run-time control is available through the pdns_control command, which allows reloading of individual zones, purging of caches, sending of zone notifications, and dumping of statistics in Multi Router Traffic Grapher / rrdtool format. Real-time information can also be obtained through the optional built-in web server.
Features:
There are many independent projects to create management interfaces for PowerDNS.
Features:
DNSSEC The PowerDNS Authoritative Server supports DNSSEC as of version 3.0. While pre-signed zones can be served, it is also possible to perform online signing and key management. This has the upside of being relatively easy, but the downside that the cryptographic keying material is present on the servers themselves (which is also true of any HTTPS server not used with an HSM, for example).
Recursor:
PowerDNS Recursor (pdns_recursor) is a resolving DNS server that runs as a separate process.
This part of PowerDNS uses a combination of native threads and user-space threads, through the use of Boost and MTasker, a simple cooperative multitasking library. It is also available as a standalone package.
A pdns_server process is not needed as a gatekeeper for pdns_recursor: if the goal is simply to provide caching/recursing/resolving name service, running pdns_recursor on its own is even more efficient than running it behind the authoritative component.
Support for DNSSEC validation was added to the pdns_recursor in version 4.0.
DNSdist:
PowerDNS DNSdist (dnsdist) is a caching DNS proxy, with many features including:
Load balancing of DNS queries.
DNS encryption support – DNS over HTTPS and DNS over TLS, both upstream and downstream (i.e. to clients and backends).
Lua policy engine – extensive capabilities for creating rules for processing DNS packets, such as changing the response, re-routing a query or blocking traffic over a maximum QPS from a subnet.
DNSdist:
Dynamic rule generation – used to create Dynamic Blocks, which are short-lived rules automatically inserted based on configurable thresholds and the analysis of recently received traffic, for example to deal with DoS attacks. DNSdist is available as a standalone package, and can be deployed with PowerDNS Authoritative Server or Recursor, or any other third-party DNS server. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
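dnsdist itself is configured in Lua, but the logic behind a max-QPS dynamic block can be sketched in a few lines of Python. The class and threshold names below are invented for illustration and are not the dnsdist API.

```python
import time
from collections import defaultdict, deque
from typing import Optional

class DynamicBlocker:
    """Toy model of a dnsdist-style dynamic block: a short-lived drop rule
    inserted automatically once a client exceeds a QPS threshold."""

    def __init__(self, max_qps: int = 100, window_s: float = 1.0, block_s: float = 60.0):
        self.max_qps = max_qps            # configurable threshold
        self.window_s = window_s          # length of the traffic-analysis window
        self.block_s = block_s            # lifetime of the short-lived rule
        self.recent = defaultdict(deque)  # client -> recent query timestamps
        self.blocked_until = {}           # client -> time the block expires

    def allow(self, client: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        if self.blocked_until.get(client, 0.0) > now:
            return False                  # an existing dynamic block applies
        q = self.recent[client]
        q.append(now)
        while q and q[0] < now - self.window_s:
            q.popleft()                   # slide the analysis window forward
        if len(q) > self.max_qps * self.window_s:
            self.blocked_until[client] = now + self.block_s
            return False                  # insert a new short-lived block
        return True

# A burst of 10 queries in one second against a 5 QPS limit:
blocker = DynamicBlocker(max_qps=5, window_s=1.0, block_s=10.0)
print([blocker.allow("198.51.100.7", now=i * 0.1) for i in range(10)])
# -> the first five queries pass, the rest are dropped
```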
**Methodological solipsism**
Methodological solipsism:
In epistemology and the philosophy of mind, methodological solipsism has at least two distinct definitions: Methodological solipsism is the epistemological thesis that the individual self and its states are the sole possible or proper starting point for philosophical construction (Wood, 295). A skeptical turn along these lines is Cartesian skepticism.
Methodological solipsism:
Methodological solipsism is the thesis that the mental properties or mental states of an organism can be individuated exclusively on the basis of that state or property's relations with other internal states of the organism itself, without any reference to the society or the physical world in which the organism is embedded. The second definition was promoted by Jerry Fodor (1980). He later went on to distinguish this thesis from another that he called methodological individualism. Fodor's motivation for introducing these concepts into the philosophical (and now psychological) lexicon was the need to defend some sort of internalist conception of the mental from the problems posed by the famous "Twin Earth" thought experiment of Hilary Putnam. Very briefly, the question is whether it is possible for two people, one living in the actual world where water is H2O and the other living in some possible world (Twin Earth) where water has all the same qualities of our water but is actually composed of XYZ, to have the same beliefs (or other propositional attitudes) about water. The externalist says that this is not possible, while the internalist insists that it is.
Methodological solipsism:
Fodor defines methodological solipsism as the extreme position that states that the content of someone's beliefs about, say, water has absolutely nothing to do with the substance water in the outside world, nor with the commonly accepted definition of the society in which that person lives. Everything is determined internally. Moreover, the only thing that other people have to go on in ascribing beliefs to someone else are the internal states of his or her physical brain.
Methodological solipsism:
In contrast, Fodor defines methodological individualism as the view that mental states have a semantically evaluable character—that is, they are relational states. The relation that provides semantic meaning can be a relation with the external world or with one's culture and, so long as the relation produces some change in the causal power of a mental state, it can be considered to be a partial determinant of that state. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Stream capture**
Stream capture:
Stream capture, river capture, river piracy or stream piracy is a geomorphological phenomenon occurring when a stream or river drainage system or watershed is diverted from its own bed, and flows instead down the bed of a neighbouring stream. This can happen for several reasons, including:
Tectonic earth movements, where the slope of the land changes, and the stream is tipped out of its former course
Natural damming, such as by a landslide or ice sheet
Erosion, either headward erosion of one stream valley upwards into another, or lateral erosion of a meander through the higher ground dividing the adjacent streams
Stream capture:
Within an area of karst topography, where streams may sink, or flow underground (a sinking or losing stream), and then reappear in a nearby stream valley
Glacier retreat
The additional water flowing down the capturing stream may accelerate erosion and encourage the development of a canyon (gorge).
The now-dry valley of the original stream is known as a wind gap.
Capture mechanisms:
Sea level rise The Kaituna and Pelorus rivers, New Zealand: About 8,000 years ago, a single river was divided by sea water to form two rivers.
Tectonic uplift Barmah Choke: About 25,000 years ago, an uplift of the plains near Moama on the Cadell Fault first dammed the Murray River and then forced it to take a new course. The new course dug its way through the so-called Barmah Choke and captured the lower course of the Goulburn River for 500 km (310 mi).
Capture mechanisms:
Indus-Sutlej-Sarasvati-Yamuna: The Yamuna earlier flowed into the Ghaggar-Hakra River (identified with the Sarasvati River) and later changed its course due to plate tectonics. The Sutlej River flowed into the current channel of the Ghaggar-Hakra River until the 13th century, after which it was captured by the Indus River due to plate tectonics. Barrier Range: It was theorised that the original course of the Murray River was to a mouth near Port Pirie, where a large delta is still visible protruding into the calm waters of Spencer Gulf. It was suggested that an uplift of the land blocked the river near the southern end of the Flinders Ranges, and the river eventually found its way to a new mouth near Lake Alexandrina. This has since been disproven in favour of findings that ancient Lake Bungunnia overflowed at Swan Reach and the current course is a result of northward erosion.
Capture mechanisms:
Glacial damming The River Thames in southern England originally entered the North Sea near Ipswich. About 450,000 years ago, an ice sheet expanding from the north pushed the course of the river southwards, forcing the Thames to cut a new mouth where the mouth of the River Blackwater, Essex now is, north of London. It later moved southwards again to its current position as a result of cutting through the Chiltern Hills at Goring-on-Thames, an event which created the Goring Gap.
Capture mechanisms:
Headward erosion The Teays River, captured by the Ohio River.
The Rio Grande which before capture flowed into a closed basin, Lake Cabeza de Vaca, but after capture flowed into the Gulf of Mexico.
The ancestral Niger River captured what is now the upper reaches of the Niger which once flowed into an endorheic basin to the east northeast of Timbuktu.
The River Stour, Kent, largely captured by the River Beult, River Teise and others.
The River Wey, in southern England, the western arm of which is the former upper waters of the River Blackwater (River Loddon).
The River Rheidol in Wales which has captured the headwaters of other streams and now runs for part of its length in a deep gorge.
The River Lyd in Devon, England.
The Black River, in Kings County, Nova Scotia, Canada, captured by the Gaspereau River.
The Casiquiare canal is a distributary of the Orinoco River that is currently in the process of capturing the upper reaches of the Orinoco.
Karst The Donauversickerung (Danube Sink), currently developing in Germany, in which a large portion of the upper part of the Danube river sinks into the limestone bedrock and resurfaces in the Aachtopf spring, the source of a tributary of the River Rhine.
Capture mechanisms:
Glacier retreat The Slims River was previously fed by meltwater from the Kaskawulsh Glacier in the St. Elias Mountains in the Yukon and its waters flowed into Kluane Lake and on to the Bering Sea. Because of climate change, the glacier has rapidly receded and the meltwater no longer feeds the Slims. The water instead now feeds the Kaskawulsh River which is a tributary to the Alsek River and drains into the Gulf of Alaska.
Effect on freshwater life:
River capture is a shaping force in the biogeography or distribution of many freshwater fish species.
New Zealand freshwater fish Geological uplift in the southern South Island led to the divergence of freshwater galaxiid populations isolated by river capture.
Effect on freshwater life:
Australian freshwater fish The formerly massive Great Dividing Range runs the length of the eastern coastline of Australia and has isolated native freshwater fish populations east and west of the range for millions of years. In the last two million years erosion has reduced the Great Dividing Range to a critical point where west-to-east river capture events have been possible. A number of native fish species that originated in the Murray–Darling river system to the west are (or were) found naturally occurring in a number of coastal systems spanning almost the entire length of the range.
Effect on freshwater life:
None of the river capture events that allowed native fish of the Murray-Darling system to cross into and colonise these East Coast river systems seem to have formed permanent linkages. The colonising Murray-Darling fish in these East Coast river systems have therefore become isolated from their parent species, and due to isolation, the founder effect, genetic drift and natural selection, have become separate species (see allopatric speciation).
Effect on freshwater life:
Examples include: Golden perch (Dawson–Fitzroy river system, central Queensland).
Eel-tailed catfish (several rivers, northern New South Wales). However, note recent genetic research which now indicates eel-tailed catfish colonised east coast drainages in multiple colonisation events relatively recently (by evolutionary standards) and may subsequently have colonised the Murray–Darling system via an east-to-west river capture event, contrary to usual west-to-east capture events listed here.
Macquarie perch (Hawkesbury-Nepean rivers, Shoalhaven River, southern New South Wales).
River blackfish (multiple rivers, Victoria).
Effect on freshwater life:
Murray cod, whose eastern species/subspecies are:
Eastern freshwater cod (Clarence River system, northern New South Wales; it was also found in the Richmond River system in New South Wales, but that population is now extinct).
Brisbane River cod (Brisbane River system, southern Queensland; that population is now extinct, and its exact taxonomic status is not known).
Mary River cod (Mary River, southern/central Queensland).
The mountain galaxias species complex (multiple rivers, southern Queensland, New South Wales, Victoria).
Olive perchlet (Ambassis agassizii), western carp gudgeon (Hypseleotris klungzingeri), pygmy perch (Nannoperca australis) and Australian smelt (Retropinna semoni) also appear to have made crossings into coastal systems, the last two species seemingly many times, as they are found in most or all coastal streams in south eastern Australia as well as the Murray-Darling system.
Effect on freshwater life:
Unfortunately, with the exception of eastern freshwater cod and Mary River cod, it has not been widely recognised that these coastal populations of Murray–Darling native fish are separate species and their classifications have not been updated to reflect this. Many are threatened and two, the Richmond River cod and the Brisbane River cod, have become extinct. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sotoportego**
Sotoportego:
Sotoportego (or sottoportego) is one of the characteristic elements of urban planning in the city of Venice. It is a passageway that goes underneath a building. The sotoportego's height typically equals that of the ground floor. Oftentimes, the sotoportego is the only access to a courtyard or a small square. Many sotoporteghi contain sacred images of the saints or the Madonna. The images can be bas-reliefs made of Istrian stone or white marble.
Types:
There are three basic types of sotoporteghi:
Sotoportego that connects a street (calle) or a campo with another street. This type is by far the most widespread in the city, since in many cases these are absolutely necessary urban elements to ensure an access otherwise prevented by construction.
Sotoportego that goes along a canal and provides a landing place for boats. This type represents a way to create covered banks for the loading and unloading of goods and passengers sheltered from the weather, and is also relatively widespread in the city.
Sotoportego that leads to a canal. This type creates a section of covered canal-side walkway (fondamenta). It is the least widespread type but architecturally very impressive, since it combines the peculiar aquatic, building, and road elements of the city. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Neptunium(IV) oxide**
Neptunium(IV) oxide:
Neptunium(IV) oxide, or neptunium dioxide, is a radioactive, olive green cubic crystalline solid with the formula NpO2. It emits both α particles and γ radiation.
Production:
Industrially, neptunium dioxide is formed by precipitation of neptunium(IV) oxalate, followed by calcination to neptunium dioxide. Production starts with a nitric acid feed solution containing neptunium ions in various oxidation states. First, a hydrazine inhibitor is added to slow any oxidation from standing in air. Then ascorbic acid reduces the feed solution to predominantly neptunium(IV):
2Np5+ + C6H8O6 → 2Np4+ + C6H6O6 + 2H+
Np6+ + C6H8O6 → Np4+ + C6H6O6 + 2H+
Addition of oxalic acid precipitates hydrated neptunium oxalate:
Np4+ + 2H2C2O4 + 6H2O → Np(C2O4)2·6H2O↓ + 4H+
which pyrolyzes when heated, first losing its water of hydration and then decomposing to the dioxide:
Np(C2O4)2·6H2O → Np(C2O4)2 → NpO2 + 2CO2(g) + 2CO(g)
Neptunium dioxide can also be formed from precipitation of neptunium(IV) peroxide, but the process is much more sensitive.
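As a quick sanity check of the stoichiometry in the calcination step as reconstructed above, the snippet below counts atoms on each side of Np(C2O4)2 → NpO2 + 2CO2 + 2CO (the oxalate is written expanded as NpC4O8):

```python
import re
from collections import Counter

def atoms(formula: str) -> Counter:
    """Count atoms in a flat formula such as 'NpO2' or 'C6H8O6'
    (no parentheses; Np(C2O4)2 is written expanded as NpC4O8)."""
    counts = Counter()
    for element, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:
            counts[element] += int(n) if n else 1
    return counts

def side(formulas):
    """Total atom counts for one side of an equation."""
    total = Counter()
    for f in formulas:
        total.update(atoms(f))
    return total

# Calcination step: Np(C2O4)2 -> NpO2 + 2 CO2 + 2 CO
reactants = side(["NpC4O8"])
products = side(["NpO2", "CO2", "CO2", "CO", "CO"])
assert reactants == products, (reactants, products)
print("balanced:", dict(reactants))  # {'Np': 1, 'C': 4, 'O': 8}
```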
Purification:
As a byproduct of nuclear waste, neptunium dioxide can be purified by fluorination, followed by reduction with excess calcium in the presence of iodine. However, the aforementioned synthesis yields a quite pure solid, with less than 0.3% mass fraction of impurities. Generally, further purification is unnecessary.
Other properties:
Neptunium dioxide contributes to the α-decay of 241Am, reducing its usual half-life by an untested but appreciable amount. The compound has a low specific heat capacity (900 K, compared with uranium dioxide's 1400 K), an abnormality theorized to stem from its 5f electron count. Another unique trait of neptunium dioxide is its "mysterious low-temperature ordered phase": an abnormal level of order for an actinide dioxide complex at low temperature. Further discussion of such topics could indicate useful physical trends in the actinoids.
Uses:
The neptunium dioxide complex is used as a means of stabilizing, and decreasing the "long term environmental burden" of, neptunium as a nuclear fission byproduct. Actinoid-containing nuclear waste will commonly be treated so that various AnO2 (where An = U, Np, Pu, Am, etc.) complexes form. In neptunium dioxide, neptunium is of reduced radiotoxicity compared with elemental neptunium and is thus more desirable for storage and disposal. Neptunium dioxide has also been shown to contribute to increased decay rates of radioactive elements, an application which is currently being explored. Neptunium dioxide is also used experimentally for research into nuclear chemistry and physics, and it is speculated that it could be used to make efficient nuclear weapons. In nuclear reactors, neptunium dioxide can also be used as a bombardment target for producing plutonium. Furthermore, a patent for a rocket powered by neptunium dioxide is held by Shirakawa Toshihisa, but there is little information available on research and production associated with such a product. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ivory**
Ivory:
Ivory is a hard, white material from the tusks (traditionally from elephants) and teeth of animals, that consists mainly of dentine, one of the physical structures of teeth and tusks. The chemical structure of the teeth and tusks of mammals is the same, regardless of the species of origin, but ivory contains structures of mineralised collagen. The trade in certain teeth and tusks other than elephant is well established and widespread; therefore, "ivory" can correctly be used to describe any mammalian teeth or tusks of commercial interest which are large enough to be carved or scrimshawed. Besides natural ivory, ivory can also be produced synthetically, hence (unlike natural ivory) not requiring the retrieval of the material from animals. Tagua nuts can also be carved like ivory. The trade of finished goods of ivory products has its origins in the Indus Valley. Ivory is a product seen in abundance in the Harappan civilization and was used for trade. Finished ivory products seen at Harappan sites include kohl sticks, pins, awls, hooks, toggles, combs, game pieces, dice, inlay and other personal ornaments.
Ivory:
Ivory has been valued since ancient times in art and manufacturing for making a range of items, from ivory carvings to false teeth, piano keys, fans, and dominoes. Elephant ivory is the most important source, but ivory from mammoth, walrus, hippopotamus, sperm whale, orca, narwhal and warthog is used as well. Elk also have two ivory teeth, which are believed to be the remnants of tusks from their ancestors. The national and international trade in natural ivory of threatened species such as African and Asian elephants is illegal. The word ivory ultimately derives from the ancient Egyptian âb, âbu ('elephant'), through the Latin ebor- or ebur.
Uses:
Both the Greek and Roman civilizations practiced ivory carving to make large quantities of high value works of art, precious religious objects, and decorative boxes for costly objects. Ivory was often used to form the white of the eyes of statues.
Uses:
There is some evidence of either whale or walrus ivory used by the ancient Irish. Solinus, a Roman writer in the 3rd century, claimed that the Celtic peoples in Ireland would decorate their sword-hilts with the 'teeth of beasts that swim in the sea'. Adomnan of Iona wrote a story about St Columba giving a sword decorated with carved ivory as a gift that a penitent would bring to his master so he could redeem himself from slavery. The Syrian and North African elephant populations were reduced to extinction, probably due to the demand for ivory in the Classical world. The Chinese have long valued ivory for both art and utilitarian objects. Early reference to the Chinese export of ivory is recorded after the Chinese explorer Zhang Qian ventured to the west to form alliances to enable the eventual free movement of Chinese goods to the west; as early as the first century BC, ivory was moved along the Northern Silk Road for consumption by western nations. Southeast Asian kingdoms included tusks of the Indian elephant in their annual tribute caravans to China. Chinese craftsmen carved ivory to make everything from images of deities to the pipe stems and end pieces of opium pipes. In Japan, ivory carvings became popular in the 17th century during the Edo period, and many netsuke and kiseru, on which animals and legendary creatures were carved, and inro, on which ivory was inlaid, were made. From the mid-1800s, the new Meiji government's policy of promoting and exporting arts and crafts led to the frequent display of elaborate ivory crafts at World's Fairs. Among them, the best works were admired and were purchased by Western museums, wealthy people, and the Japanese Imperial Family. The Buddhist cultures of Southeast Asia, including Myanmar, Thailand, Laos and Cambodia, traditionally harvested ivory from their domesticated elephants. Ivory was prized for containers due to its ability to keep an airtight seal. It was also commonly carved into elaborate seals utilized by officials to "sign" documents and decrees by stamping them with their unique official seal. In Southeast Asian countries where Muslim Malay peoples live, such as Malaysia, Indonesia and the Philippines, ivory was the material of choice for making the handles of kris daggers. In the Philippines, ivory was also used to craft the faces and hands of Catholic icons and images of saints prevalent in the Santero culture.
Uses:
Tooth and tusk ivory can be carved into a vast variety of shapes and objects. Examples of modern carved ivory objects are okimono, netsukes, jewelry, flatware handles, furniture inlays, and piano keys. Additionally, warthog tusks, and teeth from sperm whales, orcas and hippos can also be scrimshawed or superficially carved, thus retaining their morphologically recognizable shapes.
As trade with Africa expanded during the first part of the 1800s, ivory became readily available. Up to 90 percent of the ivory imported into the United States was processed, at one time, in Connecticut, where Deep River and Ivoryton became the centers of ivory milling in the 1860s, in particular due to the demand for ivory piano keys.
Uses:
Ivory usage in the last thirty years has moved towards mass production of souvenirs and jewelry. In Japan, the increase in wealth sparked consumption of solid ivory hanko – name seals – which before this time had been made of wood. These hanko can be carved out in a matter of seconds using machinery and were partly responsible for massive African elephant decline in the 1980s, when the African elephant population went from 1.3 million to around 600,000 in ten years.
Consumption before plastics:
Before plastics were introduced, ivory had many ornamental and practical uses, mainly because of the white color it presents when processed. It was formerly used to make cutlery handles, billiard balls, piano keys, Scottish bagpipes, buttons and a wide range of ornamental items.
Synthetic substitutes for ivory for most of these uses have been developed since 1800: the billiard industry challenged inventors to come up with an alternative material that could be manufactured; the piano industry abandoned ivory as a key covering material in the 1970s.
Consumption before plastics:
Ivory can be taken from dead animals; however, most ivory came from elephants that were killed for their tusks. For example, in 1930, acquiring 40 tons of ivory required the killing of approximately 700 elephants. Other animals which are now endangered were also preyed upon. For example, hippos have very hard white ivory, prized for making artificial teeth. In the first half of the 20th century, Kenyan elephant herds were devastated because of demand for ivory to be used for piano keys. During the Art Deco era from 1912 to 1940, dozens (if not hundreds) of European artists used ivory in the production of chryselephantine statues. Two of the most frequent users of ivory in their sculptured artworks were Ferdinand Preiss and Claire Colinet.
Mechanical characteristics:
While many uses of ivory are purely ornamental in nature, it often must be carved and manipulated into different shapes to achieve the desired form. Other applications, such as ivory piano keys, introduce repeated wear and surface handling of the material. It is therefore essential to consider the mechanical properties of ivory when designing alternatives.
Mechanical characteristics:
Elephant tusks are the animal's incisors, so the composition of ivory is unsurprisingly similar to that of teeth in several other mammals. It is composed of dentine, a biomineral composite constructed from collagen fibers mineralized with hydroxyapatite. This composite lends ivory the impressive mechanical properties (high stiffness, strength, hardness, and toughness) required for its use in the animal's day-to-day activities. Ivory has a measured hardness of 35 on the Vickers scale, exceeding that of bone. It also has a flexural modulus of 14 GPa, a flexural strength of 378 MPa, and a fracture toughness of 2.05 MPa·m^(1/2). These measured values indicate that ivory mechanically outperforms most of its most common alternatives, including celluloid plastic and polyethylene terephthalate. Ivory's mechanical properties result from the microstructure of the dentine tissue. It is thought that the structural arrangement of mineralized collagen fibers could contribute to the checkerboard-like Schreger pattern observed in polished ivory samples. This is often used as an attribute in ivory identification. As well as being an optical feature, the Schreger pattern could point towards a micropattern well designed to prevent crack propagation by dispersing stresses. Additionally, this intricate microstructure lends a strong anisotropy to ivory's mechanical characteristics. Separate hardness measurements on three orthogonal tusk directions indicated that circumferential planes of tusk had up to 25% greater hardness than radial planes of the same specimen. During hardness testing, inelastic and elastic recovery was observed on circumferential planes while the radial planes displayed plastic deformation. This implies that ivory has directional viscoelasticity. These anisotropic properties can be explained by the reinforcement of collagen fibers in the composite oriented along the circumference.
Availability:
Owing to the rapid decline in the populations of the animals that produce it, the importation and sale of ivory in many countries is banned or severely restricted. In the ten years preceding a decision in 1989 by CITES to ban international trade in African elephant ivory, the population of African elephants declined from 1.3 million to around 600,000. Investigators from the Environmental Investigation Agency (EIA) found that CITES sales of stockpiles from Singapore and Burundi (270 tonnes and 89.5 tonnes respectively) had created a system that increased the value of ivory on the international market, thus rewarding international smugglers and giving them the ability to control the trade and continue smuggling new ivory. Since the ivory ban, some Southern African countries have claimed their elephant populations are stable or increasing, and argued that ivory sales would support their conservation efforts. Other African countries oppose this position, stating that renewed ivory trading puts their own elephant populations under greater threat from poachers reacting to demand. CITES allowed the sale of 49 tonnes of ivory from Zimbabwe, Namibia and Botswana in 1997 to Japan. In 2007, under pressure from the International Fund for Animal Welfare, eBay banned all international sales of elephant-ivory products. The decision came after several mass slaughters of African elephants, most notably the 2006 Zakouma elephant slaughter in Chad. The IFAW found that up to 90% of the elephant-ivory transactions on eBay violated its own wildlife policies and could potentially be illegal. In October 2008, eBay expanded the ban, disallowing any sales of ivory on eBay. A more recent sale in 2008 of 108 tonnes from the three countries and South Africa took place to Japan and China. The inclusion of China as an "approved" importing country created enormous controversy, despite being supported by CITES, the World Wide Fund for Nature and Traffic. They argued that China had controls in place and the sale might depress prices. However, the price of ivory in China has skyrocketed. Some believe this may be due to deliberate price fixing by those who bought the stockpile, echoing the warnings from the Japan Wildlife Conservation Society on price-fixing after sales to Japan in 1997, and the monopoly given to traders who bought stockpiles from Burundi and Singapore in the 1980s.
Availability:
A 2019 peer-reviewed study reported that the rate of African elephant poaching was in decline, with the annual poaching mortality rate peaking at over 10% in 2011 and falling to below 4% by 2017. The study found that the "annual poaching rates in 53 sites strongly correlate with proxies of ivory demand in the main Chinese markets, whereas between-country and between-site variation is strongly associated with indicators of corruption and poverty." Based on these findings, the study authors recommended action both to reduce demand for ivory in China and other main markets and to decrease corruption and poverty in Africa. In 2006, nineteen African countries signed the "Accra Declaration" calling for a total ivory trade ban, and twenty range states attended a meeting in Kenya calling for a 20-year moratorium in 2007. Methods of obtaining ivory can be divided into: Shooting the elephant to take its tusks: this is the method of concern here.
Availability:
Taking tusks from an elephant which has died of natural causes.
Taking tusks from an elephant which has had to be put down for another reason, for example, severe arthritis, or if its last molar teeth are worn out and can no longer chew its food.
Among working elephants which use their tusks to carry logs, there is a best length for their tusks. In former times in India, often their tusks were cut back to this length (and often the shortened tusks’ ends were bound in copper). This periodically freed pieces of ivory for the carving trade.
Availability:
Controversy and conservation issues The use and trade of elephant ivory have become controversial because they have contributed to seriously declining elephant populations in many countries. It is estimated that consumption in Great Britain alone in 1831 amounted to the deaths of nearly 4,000 elephants. In 1975, the Asian elephant was placed on Appendix I of the Convention on International Trade in Endangered Species (CITES), which prevents international trade between member states of species that are threatened by trade. The African elephant was placed on Appendix I in January 1990. Since then, some southern African countries have had their populations of elephants "downlisted" to Appendix II, allowing the domestic trade of non-ivory items; there have also been two "one off" sales of ivory stockpiles. In June 2015, more than a ton of confiscated ivory was crushed in New York City's Times Square by the Wildlife Conservation Society to send a message that the illegal trade will not be tolerated. The ivory, confiscated in New York and Philadelphia, was sent up a conveyor belt into a rock crusher. The Wildlife Conservation Society has pointed out that the global ivory trade leads to the slaughter of up to 35,000 elephants a year in Africa. In June 2018, Conservative MEPs' Deputy Leader Jacqueline Foster MEP urged the EU to follow the UK's lead and introduce a tougher ivory ban across Europe. China was the biggest market for poached ivory but announced it would phase out the legal domestic manufacture and sale of ivory products in May 2015. In September of the same year, China and the U.S. announced they would "enact a nearly complete ban on the import and export of ivory." The Chinese market has a high degree of influence on the elephant population.
Availability:
Alternatives Fossil mammoth tusks Trade in the ivory from the tusks of dead woolly mammoths frozen in the tundra has occurred for 300 years and continues to be legal. Mammoth ivory is used today to make handcrafted knives and similar implements. Mammoth ivory is rare and costly because mammoths have been extinct for millennia, and scientists are hesitant to sell museum-worthy specimens in pieces. Some estimates suggest that 10 million mammoths are still buried in Siberia.
Availability:
Fossil walrus ivory Fossil walrus ivory from animals that died before 1972 is legal to buy and sell or possess in the United States, unlike many other types of ivory.
Synthetic ivory Ivory can also be produced synthetically.
Nuts A species of hard nut is gaining popularity as a replacement for ivory, although its size limits its usability. It is sometimes called vegetable ivory, or tagua, and is the seed endosperm of the ivory nut palm commonly found in coastal rainforests of Ecuador, Peru and Colombia. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lamb–Oseen vortex**
Lamb–Oseen vortex:
In fluid dynamics, the Lamb–Oseen vortex models a line vortex that decays due to viscosity. This vortex is named after Horace Lamb and Carl Wilhelm Oseen.
Mathematical description:
Oseen looked for a solution of the Navier–Stokes equations in cylindrical coordinates $(r,\theta,z)$ with velocity components $(v_r, v_\theta, v_z)$ of the form $v_r = 0$, $v_\theta = \frac{\Gamma}{2\pi r}\,g(r,t)$, $v_z = 0$,
Mathematical description:
where $\Gamma$ is the circulation of the vortex core. The Navier–Stokes equations lead to $\frac{\partial g}{\partial t} = \nu\left(\frac{\partial^2 g}{\partial r^2} - \frac{1}{r}\,\frac{\partial g}{\partial r}\right)$ which, subject to the conditions that it is regular at $r = 0$ and becomes unity as $r \to \infty$, leads to $g(r,t) = 1 - e^{-r^2/4\nu t}$, where $\nu$ is the kinematic viscosity of the fluid. At $t = 0$, we have a potential vortex with concentrated vorticity at the $z$ axis, and this vorticity diffuses away as time passes.
Mathematical description:
The only non-zero vorticity component is in the $z$ direction, given by $\omega_z(r,t) = \frac{\Gamma}{4\pi\nu t}\, e^{-r^2/4\nu t}$.
The pressure field simply ensures that the vortex rotates in the circumferential direction, providing the centripetal force: $\frac{\partial p}{\partial r} = \frac{\rho\, v_\theta^2}{r}$, where $\rho$ is the constant density.
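As a numerical illustration of the solution above, the sketch below evaluates the azimuthal velocity profile $v_\theta(r,t) = \frac{\Gamma}{2\pi r}\left(1 - e^{-r^2/4\nu t}\right)$ and locates the radius of peak swirl; the parameter values are arbitrary assumptions for the example.

```python
import numpy as np

# Illustrative evaluation of the Lamb-Oseen azimuthal velocity profile.
# Parameter values below are arbitrary choices for the sketch.
Gamma = 1.0    # circulation of the vortex core (m^2/s)
nu = 1.0e-6    # kinematic viscosity, roughly that of water (m^2/s)
t = 10.0       # time since the vorticity was concentrated on the axis (s)

r = np.linspace(1e-4, 0.05, 2000)  # radial coordinate (m), avoiding r = 0
g = 1.0 - np.exp(-r**2 / (4.0 * nu * t))
v_theta = Gamma / (2.0 * np.pi * r) * g

# The vorticity diffuses outward: the core radius grows like sqrt(4 nu t),
# and the swirl velocity peaks at a radius of the same order.
r_peak = r[np.argmax(v_theta)]
print(f"core scale sqrt(4 nu t) = {np.sqrt(4 * nu * t):.4e} m")
print(f"radius of peak swirl    = {r_peak:.4e} m")
```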
Generalized Oseen vortex:
The generalized Oseen vortex may be obtained by looking for solutions of the form $v_r = -\gamma(t)\,r$, $v_\theta = \frac{\Gamma}{2\pi r}\,g(r,t)$, $v_z = 2\gamma(t)\,z$, which leads to the equation $\frac{\partial g}{\partial t} - \gamma r\,\frac{\partial g}{\partial r} = \nu\left(\frac{\partial^2 g}{\partial r^2} - \frac{1}{r}\,\frac{\partial g}{\partial r}\right)$.
Generalized Oseen vortex:
A self-similar solution exists for the coordinate $\eta = r/\varphi(t)$, provided $\varphi\varphi' + \gamma\varphi^2 = a$, where $a$ is a constant, in which case $g = 1 - e^{-a\eta^2/2\nu}$. The solution for $\varphi(t)$ may be written according to Rott (1958) as $\varphi^2 = 2a\, e^{-2\int_0^t \gamma(s)\,ds} \int_c^t \exp\left(2\int_0^u \gamma(s)\,ds\right) du$, where $c$ is an arbitrary constant. For $\gamma = 0$, the classical Lamb–Oseen vortex is recovered. The case $\gamma = k$ corresponds to the axisymmetric stagnation point flow, where $k$ is a constant. When $c = -\infty$, $\varphi^2 = a/k$, and a Burgers vortex is obtained. For arbitrary $c$, the solution becomes $\varphi^2 = a(1 + \beta e^{-2kt})/k$, where $\beta$ is an arbitrary constant. As $t \to \infty$, the Burgers vortex is recovered. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
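The quoted form of Rott's solution follows because the equation for $\varphi$ is linear in $\varphi^2$; a short derivation consistent with the special cases above (treating the lower integration limit $c$ as the arbitrary constant, an assumption of this reconstruction):

```latex
\text{Set } \psi \equiv \varphi^{2}:\qquad
\varphi\varphi' + \gamma\varphi^{2} = a
\;\Longleftrightarrow\; \psi' + 2\gamma\psi = 2a .
\\[4pt]
\text{With integrating factor } \mu(t) = \exp\!\Big(2\!\int_{0}^{t}\!\gamma(s)\,ds\Big):\qquad
(\mu\psi)' = 2a\,\mu
\;\Longrightarrow\;
\varphi^{2} = \frac{2a}{\mu(t)}\int_{c}^{t}\mu(u)\,du .
\\[4pt]
\text{For } \gamma = k \text{ and } c \to -\infty:\qquad
\varphi^{2} = 2a\,e^{-2kt}\!\int_{-\infty}^{t}\!e^{2ku}\,du = \frac{a}{k}
\quad\text{(the Burgers vortex).}
```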
**Cragie tube**
Cragie tube:
The Cragie tube or Craigie tube is a method used in microbiology for determining bacterial motility.
Technique:
A hollow tube with some culture medium is placed in semi-solid agar inside a bottle. A sample of the bacterium to be tested is inoculated into the medium in the hollow tube and the setup is incubated at 37 °C overnight.
Observation:
On examining the areas where bacterial growth has occurred, there are several observations to be made:
The colonies of non-motile bacteria remain confined within the tube at the site of inoculation.
Motile bacteria swim out from the bottom of the tube and colonize the surrounding medium as well.
Confirmation may be obtained by subculture and retesting. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PAX9**
PAX9:
Paired box gene 9, also known as PAX9, is a protein which in humans is encoded by the PAX9 gene. It is also found in other mammals.
Expression and function:
This gene is a member of the paired box (PAX) family of transcription factors. During mouse embryogenesis Pax9 expression starts from embryonic day 8.5 and becomes more evident by E9.5; at this stage its expression is restricted to the pharyngeal endoderm. Later on, Pax9 is also expressed in the axial skeleton. Pax9 is required for craniofacial, tooth and limb development, and may more generally involve development of stratified squamous epithelia as well as various organs and skeletal elements. PAX9 plays a role in the absence of wisdom teeth in some human populations (possibly along with the less well studied AXIN2 and MSX1).
Clinical significance:
This gene was found amplified in lung cancer. The amplification covers three tissue developmental genes - TTF1, NKX2-8, and PAX9. It appears that certain lung cancer cells select for DNA copy number amplification and increased RNA/protein expression of these three coamplified genes for functional advantages.
Clinical significance:
Oligodontia Oligodontia is a genetic disorder caused by mutation of the PAX9 gene. This disorder results in the congenital absence of six or more permanent teeth, excluding the third molars. Also known as selective tooth agenesis (STHAG), it is the most common disorder of human dentition, affecting a little less than one fourth of the population. The PAX9 gene, which can be found on chromosome 14, encodes a group of transcription factors that play an important role in early tooth development. In humans, a frameshift mutation in the paired domain of PAX9 was discovered in those affected with oligodontia. Multiple mechanisms are possible by which the mutation may arise. Recently, a study involving a missense mutation of the PAX9 gene suggested that loss of function due to the absence of the DNA-binding domain is one mechanism that causes oligodontia. Those who express the PAX9 mutation and develop the disorder have a normal life expectancy. Along with mutations of the PAX9 gene, MSX1 gene mutations have also been shown to affect dental development in fetuses.
Interactions:
PAX9 has been shown to interact with JARID1B. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Glottis**
Glottis:
The glottis is the opening between the vocal folds (the rima glottidis). The glottis is crucial in producing vowels and voiced consonants.
Etymology:
From Ancient Greek γλωττίς (glōttís), derived from γλῶττα (glôtta), variant of γλῶσσα (glôssa, "tongue").
Function:
Phonation As the vocal folds vibrate, the resulting vibration produces a "buzzing" quality in the speech, called voice or voicing.
Function:
Sound production that involves moving the vocal folds close together is called glottal. English has a voiceless glottal transition spelled "h". This sound is produced by keeping the vocal folds spread somewhat, resulting in non-turbulent airflow through the glottis. In many accents of English the glottal stop (made by pressing the folds together) is used as a variant allophone of the phoneme /t/ (and in some dialects, occasionally of /k/ and /p/); in some languages, this sound is a phoneme of its own. This is the case in the Klingon language developed for the science fiction series Star Trek, which treats the glottal stop as its own letter, represented by the apostrophe. Skilled players of the Australian didgeridoo restrict their glottal opening in order to produce the full range of timbres available on the instrument. The vibration produced is an essential component of voiced consonants as well as vowels. If the vocal folds are drawn apart, air flows between them causing no vibration, as in the production of voiceless consonants. The glottis is also important in the Valsalva maneuver.
Function:
Voiced consonants include /v/, /z/, /ʒ/, /d͡ʒ/, /ð/, /b/, /d/, /ɡ/, /w/.
Voiceless consonants include /f/, /s/, /ʃ/, /t͡ʃ/, /θ/, /p/, /t/, /k/, /ʍ/, and /h/. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
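The two consonant lists above amount to a small lookup table; a minimal sketch (the helper name is invented for illustration):

```python
# Voicing classification of the English consonants listed above.
VOICED = {"v", "z", "ʒ", "d͡ʒ", "ð", "b", "d", "ɡ", "w"}
VOICELESS = {"f", "s", "ʃ", "t͡ʃ", "θ", "p", "t", "k", "ʍ", "h"}

def is_voiced(phoneme: str) -> bool:
    """True if the vocal folds vibrate for this consonant."""
    if phoneme in VOICED:
        return True
    if phoneme in VOICELESS:
        return False
    raise ValueError(f"not in the consonant lists above: {phoneme}")

print(is_voiced("z"), is_voiced("s"))  # True False
```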
**Monooxygenase DBH-like 1**
Monooxygenase DBH-like 1:
DBH-like monooxygenase protein 1, also known as monooxygenase X (MOX), is an enzyme that in humans is encoded by the MOXD1 gene. DBH-like 1 maintains many of the structural features of dopamine beta-monooxygenase (DBH). Because peptidylglycine alpha-hydroxylating monooxygenase (PHM; EC 1.14.17.3) is homologous to dopamine beta-monooxygenase (DBM; EC 1.14.17.1), the corresponding mouse homolog provides a structural basis for a new family of copper type II, ascorbate-dependent monooxygenases. MOX contains a putative catecholamine-binding domain like the enzymes of the catecholamine synthesis pathway, but the chemical pathway of monooxygenase X remains unresolved and its substrate is unknown. MOX is encoded without a signal sequence; exogenous MOX is not secreted, and it localizes throughout the endoplasmic reticulum in both endocrine and nonendocrine cells.
Deficiency:
DBH deficiency has been treated effectively with L-threo-3,4-dihydroxyphenylserine (DOPS). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Charles S. Peskin**
Charles S. Peskin:
Charles Samuel Peskin (born April 15, 1946) is an American mathematician known for his work in the mathematical modeling of blood flow in the heart. Such calculations are useful in the design of artificial heart valves. From this work has emerged an original computational method for fluid-structure interaction that is now called the "immersed boundary method", which allows the coupling between deformable immersed structures and fluid flows to be handled in a computationally tractable way. With his students and colleagues, Peskin also has worked on mathematical models of such systems as the inner ear, arterial pulse, blood clotting, congenital heart disease, light adaptation in the retina, control of ovulation number, control of plasmid replication, molecular dynamics, and molecular motors. Peskin received an A.B. (1968) from Harvard University and a Ph.D. (1972) from the Albert Einstein College of Medicine, Yeshiva University, and shortly thereafter joined the faculty of the Courant Institute of Mathematical Sciences, New York University. He has been a productive educator of applied mathematicians, and has advised more than fifty graduate students as of 2014. Peskin is a MacArthur Fellow and a member of the National Academy of Sciences, the Institute of Medicine and the American Academy of Arts and Sciences.
Charles S. Peskin:
In 1969 he married Lucille G. Bisesi. Their son Eric is the manager of High Performance Computing at New York University.
Awards and honors:
George David Birkhoff Prize in Applied Mathematics, AMS–SIAM, 2003
Invited speaker, International Congress of Mathematicians, 1998
Mayor's Award for Excellence in Science and Technology, 1994
Sidney Fernbach Award, Institute of Electrical and Electronics Engineers Computer Society, 1994
Cray Research Information Technology Leadership Award for Breakthrough Computational Science, 1994
Josiah Willard Gibbs Lecturer, American Mathematical Society, 1993
New York University Margaret and Herman Sokol Faculty Award in the Sciences, 1992
James H. Wilkinson Prize in Numerical Analysis and Scientific Computing, SIAM, 1985
MacArthur Fellowship, 1983–1988
He has also been a fellow of the American Academy of Arts and Sciences since 1994, a member of the National Academy of Sciences since 1995, and a member of the Institute of Medicine since 2000. He is also an inaugural fellow of the American Mathematical Society and the Society for Industrial and Applied Mathematics.
**Trigger transformer**
Trigger transformer:
A trigger transformer is a small, usually ferrite cored transformer used in applications requiring a high voltage pulse, typically to start ionization of a gas to allow a current to pass.
Uses:
Trigger transformer cores may be utilized in a unipolar (electromagnetic flux strictly positive or negative) or bipolar (swinging between positive and negative) manner. Applications also differ in whether or not they saturate the core (termed a saturating trigger transformer).
Uses:
Strobe lights, for instance, operate the core in a unipolar, unsaturated mode. Capacitors are charged to approximately 300 volts, at which point a second capacitor pulses voltage through the transformer, producing the roughly 2000–6000 volts (depending upon the characteristics of the specific flash tube) necessary to overcome the resistance of the inert gas (such as xenon) between the electrodes, ionizing it.
Uses:
Trigger transformers operate by means of a secondary coil with hundreds, even thousands, of turns of very fine copper wire, trading current for voltage. Much like in lightning, the resulting plasma has much lower resistance, and the capacitor can discharge rapidly across it. The capacitors then begin charging again, starting the cycle anew and giving rise to the characteristic periodic flashing. Capacitors alone would result in a continuous arc: having achieved the voltage necessary to cause dielectric breakdown, they would maintain it indefinitely.
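As a rough illustration of this current-for-voltage trade, an ideal-transformer estimate is sketched below; the 300 V primary pulse matches the strobe example above, while the turns counts are hypothetical.

```python
# Ideal-transformer approximation: V_secondary ~= V_primary * (N_s / N_p).
v_primary = 300.0       # volts delivered by the trigger capacitor (from the example above)
turns_primary = 12      # hypothetical: a few turns of heavier wire
turns_secondary = 240   # hypothetical: many turns of very fine wire

v_secondary = v_primary * (turns_secondary / turns_primary)
print(f"{v_secondary:.0f} V")   # 6000 V, in the range needed to ionize a xenon flash tube
```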
Uses:
So-called saturating trigger transformers find use in circuits that intentionally utilize core saturation and/or operate in a bipolar fashion, such as DC-to-AC power inverters.
Inductors are also commonly used in place of a trigger transformer; although similar in operation, they are not considered transformers themselves.
**Integrin**
Integrin:
Integrins are transmembrane receptors that help cell-cell and cell-extracellular matrix (ECM) adhesion. Upon ligand binding, integrins activate signal transduction pathways that mediate cellular signals such as regulation of the cell cycle, organization of the intracellular cytoskeleton, and movement of new receptors to the cell membrane. The presence of integrins allows rapid and flexible responses to events at the cell surface (e.g. signal platelets to initiate an interaction with coagulation factors).
Integrin:
Several types of integrins exist, and one cell generally has multiple different types on its surface. Integrins are found in all animals while integrin-like receptors are found in plant cells.
Integrins work alongside other proteins such as cadherins, the immunoglobulin superfamily cell adhesion molecules, selectins and syndecans, to mediate cell–cell and cell–matrix interaction. Ligands for integrins include fibronectin, vitronectin, collagen and laminin.
Structure:
Integrins are obligate heterodimers composed of α and β subunits. Several genes code for multiple isoforms of these subunits, which gives rise to an array of unique integrins with varied activity. In mammals, integrins are assembled from eighteen α and eight β subunits, in Drosophila five α and two β subunits, and in Caenorhabditis nematodes two α subunits and one β subunit. The α and β subunits are both class I transmembrane proteins, so each penetrates the plasma membrane once, and can possess several cytoplasmic domains.
Structure:
Variants of some subunits are formed by differential RNA splicing; for example, four variants of the beta-1 subunit exist. Through different combinations of the α and β subunits, 24 unique mammalian integrins are generated, excluding splice- and glycosylation variants.
Integrin subunits span the cell membrane and have short cytoplasmic domains of 40–70 amino acids. The exception is the beta-4 subunit, which has a cytoplasmic domain of 1,088 amino acids, one of the largest of any membrane protein. Outside the cell membrane, the α and β chains lie close together along a length of about 23 nm; the final 5 nm N-termini of each chain form a ligand-binding region for the ECM. They have been compared to lobster claws, although they don't actually "pinch" their ligand; rather, they chemically interact with it at the insides of the "tips" of their "pinchers".
Structure:
The molecular mass of the integrin subunits can vary from 90 kDa to 160 kDa. Beta subunits have four cysteine-rich repeated sequences. Both α and β subunits bind several divalent cations. The role of divalent cations in the α subunit is unknown, but may stabilize the folds of the protein. The cations in the β subunits are more interesting: they are directly involved in coordinating at least some of the ligands that integrins bind.
Structure:
Integrins can be categorized in multiple ways. For example, some α chains have an additional structural element (or "domain") inserted toward the N-terminal, the alpha-A domain (so called because it has a similar structure to the A-domains found in the protein von Willebrand factor; it is also termed the α-I domain). Integrins carrying this domain either bind to collagens (e.g. integrins α1 β1, and α2 β1), or act as cell-cell adhesion molecules (integrins of the β2 family). This α-I domain is the binding site for ligands of such integrins. Those integrins that don't carry this inserted domain also have an A-domain in their ligand binding site, but this A-domain is found on the β subunit.
Structure:
In both cases, the A-domains carry up to three divalent cation binding sites. One is permanently occupied in physiological concentrations of divalent cations, and carries either a calcium or magnesium ion, the principal divalent cations in blood at median concentrations of 1.4 mM (calcium) and 0.8 mM (magnesium). The other two sites become occupied by cations when ligands bind—at least for those ligands involving an acidic amino acid in their interaction sites. An acidic amino acid features in the integrin-interaction site of many ECM proteins, for example as part of the amino acid sequence Arginine-Glycine-Aspartic acid ("RGD" in the one-letter amino acid code).
Structure:
Despite many years of effort, discovering the high-resolution structure of integrins proved to be challenging, as membrane proteins are classically difficult to purify, and as integrins are large, complex and highly glycosylated with many sugar 'trees' attached to them. Low-resolution images of detergent extracts of intact integrin GPIIbIIIa, obtained using electron microscopy, and even data from indirect techniques that investigate the solution properties of integrins using ultracentrifugation and light scattering, were combined with fragmentary high-resolution crystallographic or NMR data from single or paired domains of single integrin chains, and molecular models were postulated for the rest of the chains.
Structure:
The X-ray crystal structure obtained for the complete extracellular region of one integrin, αvβ3, shows the molecule to be folded into an inverted V-shape that potentially brings the ligand-binding sites close to the cell membrane. Perhaps more importantly, the crystal structure was also obtained for the same integrin bound to a small ligand containing the RGD-sequence, the drug cilengitide. As detailed above, this finally revealed why divalent cations (in the A-domains) are critical for RGD-ligand binding to integrins. The interaction of such sequences with integrins is believed to be a primary switch by which ECM exerts its effects on cell behaviour.
Structure:
The structure poses many questions, especially regarding ligand binding and signal transduction. The ligand binding site is directed towards the C-terminal of the integrin, the region where the molecule emerges from the cell membrane. If it emerges orthogonally from the membrane, the ligand binding site would apparently be obstructed, especially as integrin ligands are typically massive and well cross-linked components of the ECM. In fact, little is known about the angle that membrane proteins subtend to the plane of the membrane; this is a problem difficult to address with available technologies. The default assumption is that they emerge rather like little lollipops, but there is little evidence for this. The integrin structure has drawn attention to this problem, which may have general implications for how membrane proteins work. It appears that the integrin transmembrane helices are tilted (see "Activation" below), which hints that the extracellular chains may also not be orthogonal with respect to the membrane surface.
Structure:
Although the crystal structure changed surprisingly little after binding to cilengitide, the current hypothesis is that integrin function involves changes in shape to move the ligand-binding site into a more accessible position, away from the cell surface, and this shape change also triggers intracellular signaling. There is a wide body of cell-biological and biochemical literature that supports this view. Perhaps the most convincing evidence involves the use of antibodies that only recognize integrins when they have bound to their ligands, or are activated. As the "footprint" that an antibody makes on its binding target is roughly a circle about 3 nm in diameter, the resolution of this technique is low. Nevertheless, these so-called LIBS (Ligand-Induced-Binding-Sites) antibodies unequivocally show that dramatic changes in integrin shape routinely occur. However, how the changes detected with antibodies look on the structure is still unknown.
Structure:
Activation
When released into the cell membrane, newly synthesized integrin dimers are speculated to be found in the same "bent" conformation revealed by the structural studies described above. One school of thought claims that this bent form prevents them from interacting with their ligands, although bent forms can predominate in high-resolution EM structures of integrin bound to an ECM ligand. Therefore, at least in biochemical experiments, integrin dimers apparently need not be 'unbent' in order to prime them and allow their binding to the ECM. In cells, the priming is accomplished by the protein talin, which binds to the β tail of the integrin dimer and changes its conformation. The α and β integrin chains are both class-I transmembrane proteins: they pass the plasma membrane as single transmembrane alpha-helices. However, the helices are too long, and recent studies suggest that, for integrin gpIIbIIIa, they are tilted with respect both to one another and to the plane of the membrane. Talin binding alters the angle of tilt of the β3 chain transmembrane helix in model systems, and this may reflect a stage in the process of inside-out signalling which primes integrins. Moreover, talin proteins are able to dimerize and thus are thought to intervene in the clustering of integrin dimers, which leads to the formation of a focal adhesion. Recently, the Kindlin-1 and Kindlin-2 proteins have also been found to interact with integrin and activate it.
Function:
Integrins have two main functions, attachment of the cells to the ECM and signal transduction from the ECM to the cells. They are also involved in a wide range of other biological activities, including extravasation, cell-to-cell adhesion, cell migration, and as receptors for certain viruses, such as adenovirus, echovirus, hantavirus, foot-and-mouth disease virus, poliovirus and other viruses. Recently, the importance of integrins in the progression of autoimmune disorders has also been gaining the attention of scientists. These mechanoreceptors seem to regulate autoimmunity by dictating various intracellular pathways to control immune cell adhesion to endothelial cell layers, followed by their trans-migration. This process may or may not depend on the shear force experienced by the extracellular parts of different integrins.
A prominent function of the integrins is seen in the molecule GpIIb/IIIa, an integrin on the surface of blood platelets (thrombocytes) responsible for attachment to fibrin within a developing blood clot. This molecule dramatically increases its binding affinity for fibrin/fibrinogen through association of platelets with exposed collagens in the wound site. Upon association of platelets with collagen, GPIIb/IIIa changes shape, allowing it to bind to fibrin and other blood components to form the clot matrix and stop blood loss.
Function:
Attachment of cell to the ECM
Integrins couple the cell-extracellular matrix (ECM) outside a cell to the cytoskeleton (in particular, the microfilaments) inside the cell. Which ligand in the ECM the integrin can bind to is defined by which α and β subunits the integrin is made of. Among the ligands of integrins are fibronectin, vitronectin, collagen, and laminin. The connection between the cell and the ECM may help the cell to endure pulling forces without being ripped out of the ECM. The ability of a cell to create this kind of bond is also of vital importance in ontogeny.
Function:
Cell attachment to the ECM is a basic requirement to build a multicellular organism. Integrins are not simply hooks, but give the cell critical signals about the nature of its surroundings. Together with signals arising from receptors for soluble growth factors like VEGF, EGF, and many others, they enforce a cellular decision on what biological action to take, be it attachment, movement, death, or differentiation. Thus integrins lie at the heart of many cellular biological processes. The attachment of the cell takes place through formation of cell adhesion complexes, which consist of integrins and many cytoplasmic proteins, such as talin, vinculin, paxillin, and alpha-actinin. These act by regulating kinases such as FAK (focal adhesion kinase) and Src kinase family members to phosphorylate substrates such as p130CAS, thereby recruiting signaling adaptors such as CRK. These adhesion complexes attach to the actin cytoskeleton. The integrins thus serve to link two networks across the plasma membrane: the extracellular ECM and the intracellular actin filamentous system. Integrin α6β4 is an exception: it links to the keratin intermediate filament system in epithelial cells.
Focal adhesions are large molecular complexes, which are generated following interaction of integrins with ECM and their subsequent clustering. The clusters likely provide sufficient intracellular binding sites to permit the formation of stable signaling complexes on the cytoplasmic side of the cell membrane. Thus the focal adhesions contain integrin ligand, integrin molecule, and associated plaque proteins. Binding is propelled by changes in free energy. As previously stated, these complexes connect the extracellular matrix to actin bundles. Cryo-electron tomography reveals that the adhesion contains particles on the cell membrane with a diameter of 25 ± 5 nm, spaced at approximately 45 nm. Treatment with the Rho-kinase inhibitor Y-27632 reduces the size of the particles, and the adhesion is extremely mechanosensitive.
One important function of integrins on cells in tissue culture is their role in cell migration. Cells adhere to a substrate through their integrins. During movement, the cell makes new attachments to the substrate at its front and concurrently releases those at its rear. When released from the substrate, integrin molecules are taken back into the cell by endocytosis; they are transported through the cell to its front by the endocytic cycle, where they are added back to the surface. In this way they are cycled for reuse, enabling the cell to make fresh attachments at its leading front. The cycle of integrin endocytosis and recycling back to the cell surface is also important for non-migrating cells and during animal development.
Function:
Signal transduction
Integrins play an important role in cell signaling by modulating the cell signaling pathways of transmembrane protein kinases such as receptor tyrosine kinases (RTK). While the interaction between integrin and receptor tyrosine kinases originally was thought of as uni-directional and supportive, recent studies indicate that integrins have additional, multi-faceted roles in cell signaling. Integrins can regulate the receptor tyrosine kinase signaling by recruiting specific adaptors to the plasma membrane. For example, β1c integrin recruits Gab1/Shp2 and presents Shp2 to IGF1R, resulting in dephosphorylation of the receptor. In a reverse direction, when a receptor tyrosine kinase is activated, integrins co-localise at focal adhesion with the receptor tyrosine kinases and their associated signaling molecules.
Function:
The repertoire of integrins expressed on a particular cell can specify the signaling pathway due to the differential binding affinity of ECM ligands for the integrins. The tissue stiffness and matrix composition can initiate specific signaling pathways regulating cell behavior. Clustering and activation of the integrins/actin complexes strengthen the focal adhesion interaction and initiate the framework for cell signaling through assembly of adhesomes.
Depending on the integrin's regulatory impact on specific receptor tyrosine kinases, the cell can experience:
cell growth
cell division
cell survival
cellular differentiation
apoptosis (programmed cell death)
Knowledge of the relationship between integrins and receptor tyrosine kinases has laid a foundation for new approaches to cancer therapy. Specifically, targeting integrins associated with RTKs is an emerging approach for inhibiting angiogenesis.
Integrins and nerve repair:
Integrins have an important function in neuroregeneration after injury of the peripheral nervous system (PNS). Integrins are present at the growth cone of damaged PNS neurons and attach to ligands in the ECM to promote axon regeneration. It is unclear whether integrins can promote axon regeneration in the adult central nervous system (CNS). There are two obstacles that prevent integrin-mediated regeneration in the CNS: 1) integrins are not localised in the axon of most adult CNS neurons and 2) integrins become inactivated by molecules in the scar tissue after injury.
Vertebrate integrins:
The following are 16 of the ~24 integrins found in vertebrates: Beta-1 integrins interact with many alpha integrin chains. Gene knockouts of integrins in mice are not always lethal, which suggests that during embryonal development, one integrin may substitute its function for another in order to allow survival. Some integrins are on the cell surface in an inactive state, and can be rapidly primed, or put into a state capable of binding their ligands, by cytokines. Integrins can assume several different well-defined shapes or "conformational states". Once primed, the conformational state changes to stimulate ligand binding, which then activates the receptors — also by inducing a shape change — to trigger outside-in signal transduction.
**Mill's Methods**
Mill's Methods:
Mill's Methods are five methods of induction described by philosopher John Stuart Mill in his 1843 book A System of Logic. They are intended to illuminate issues of causation.
The methods:
Direct method of agreement If two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree, is the cause (or effect) of the given phenomenon. For a property to be a necessary condition it must always be present if the effect is present. Since this is so, then we are interested in looking at cases where the effect is present and taking note of which properties, among those considered to be 'possible necessary conditions' are present and which are absent. Obviously, any properties which are absent when the effect is present cannot be necessary conditions for the effect. This method is also referred to more generally within comparative politics as the most different systems design.
The methods:
Symbolically, the method of agreement can be represented as:
A B C D occur together with w x y z
A E F G occur together with w t u v
——————————————————
Therefore A is the cause, or the effect, of w.
To further illustrate this concept, consider two structurally different countries. Country A is a former colony, has a centre-left government, and has a federal system with two levels of government. Country B has never been a colony, has a centre-left government and is a unitary state. One factor that both countries have in common, the dependent variable in this case, is that they have a system of universal health care. Comparing the factors known about the countries above, a comparative political scientist would conclude that the government sitting on the centre-left of the spectrum would be the independent variable which causes a system of universal health care, since it is the only one of the factors examined which holds constant between the two countries, and the theoretical backing for that relationship is sound; social democratic (centre-left) policies often include universal health care.
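The method of agreement is mechanical enough to state as code. The sketch below (an illustration, not part of Mill's text) represents each instance as a set of circumstances plus a flag for whether the phenomenon occurred, and intersects the circumstance sets of the positive instances:

```python
def method_of_agreement(instances):
    """Return the circumstances common to every instance where the phenomenon occurs."""
    present = [set(circs) for circs, occurred in instances if occurred]
    return set.intersection(*present) if present else set()

observations = [
    ({"A", "B", "C", "D"}, True),   # circumstances observed alongside phenomenon w
    ({"A", "E", "F", "G"}, True),
]
print(method_of_agreement(observations))   # {'A'}: the sole common circumstance
```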
The methods:
Method of difference If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance save one in common, that one occurring only in the former; the circumstance in which alone the two instances differ, is the effect, or cause, or an indispensable part of the cause, of the phenomenon. This method is also known more generally as the most similar systems design within comparative politics.
The methods:
A B C D occur together with w x y z
B C D occur together with x y z
——————————————————
Therefore A is the cause, or the effect, or a part of the cause of w.
As an example of the method of difference, consider two similar countries. Country A has a centre-right government, a unitary system and was a former colony. Country B has a centre-right government, a unitary system but was never a colony. The difference between the countries is that Country A readily supports anti-colonial initiatives, whereas Country B does not. The method of difference would identify the independent variable to be the status of each country as a former colony or not, with the dependent variable being support for anti-colonial initiatives. This is because, out of the two similar countries compared, the difference between the two is whether or not they were formerly a colony. This then explains the difference in the values of the dependent variable, with the former colony being more likely to support decolonization than the country with no history of being a colony.
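In the same illustrative style, the method of difference reduces to a set difference between a positive and a negative instance that are otherwise alike:

```python
def method_of_difference(positive, negative):
    """Circumstances present when the phenomenon occurs and absent when it does not."""
    return set(positive) - set(negative)

# The positive instance exhibits the phenomenon; the negative one does not.
print(method_of_difference({"A", "B", "C", "D"}, {"B", "C", "D"}))   # {'A'}
```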
The methods:
Indirect method of difference If two or more instances in which the phenomenon occurs have only one circumstance in common, while two or more instances in which it does not occur have nothing in common save the absence of that circumstance; the circumstance in which alone the two sets of instances differ, is the effect, or cause, or a necessary part of the cause, of the phenomenon. Also called the "Joint Method of Agreement and Difference", this principle is a combination of two methods of agreement. Despite the name, it is weaker than the direct method of difference and does not include it.
The methods:
Symbolically, the Joint method of agreement and difference can be represented as:
A B C occur together with x y z
A D E occur together with x v w
F G occur with y w
——————————————————
Therefore A is the cause, or the effect, or a part of the cause of x.
The methods:
Method of residue Subduct from any phenomenon such part as is known by previous inductions to be the effect of certain antecedents, and the residue of the phenomenon is the effect of the remaining antecedents. If a range of factors are believed to cause a range of phenomena, and we have matched all the factors, except one, with all the phenomena, except one, then the remaining phenomenon can be attributed to the remaining factor.
The methods:
Symbolically, the Method of Residue can be represented as:
A B C occur together with x y z
B is known to be the cause of y
C is known to be the cause of z
——————————————————
Therefore A is the cause or effect of x.
The methods:
Method of concomitant variations Whatever phenomenon varies in any manner whenever another phenomenon varies in some particular manner, is either a cause or an effect of that phenomenon, or is connected with it through some fact of causation. If across a range of circumstances leading to a phenomenon, some property of the phenomenon varies in tandem with some factor existing in the circumstances, then the phenomenon can be associated with that factor. For instance, suppose that various samples of water, each containing both salt and lead, were found to be toxic. If the level of toxicity varied in tandem with the level of lead, one could attribute the toxicity to the presence of lead.
The methods:
Symbolically, the method of concomitant variation can be represented as (with ± representing a shift):
A B C occur together with x y z
A± B C results in x± y z
—————————————————————
Therefore A and x are causally connected.
Unlike the preceding four inductive methods, the method of concomitant variation doesn't involve the elimination of any circumstance. Changing the magnitude of one factor results in a change in the magnitude of another factor.
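The lead-and-salt example lends itself to a small numeric illustration (the data below are invented for the purpose): a correlation coefficient is one simple way to quantify how closely one quantity varies in tandem with another.

```python
# Toy data: toxicity of water samples rises with lead level but not with salt.
lead = [1, 2, 3, 4, 5]
salt = [4, 1, 5, 2, 3]
toxicity = [2.1, 3.9, 6.2, 8.0, 9.8]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(pearson(lead, toxicity))   # ~1.0: toxicity varies concomitantly with lead
print(pearson(salt, toxicity))   # near 0: no concomitant variation with salt
```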
**Centrifugal casting (silversmithing)**
Centrifugal casting (silversmithing):
Centrifugal casting in silversmithing is a casting technique where a small mould is poured, then spun on the end of an arm. The centrifugal force thus generated encourages a successful pour.
Processes:
Centrifugal casting, or centrifuging, is used as a means of casting small, detailed parts or jewelry. An articulated arm is free to spin around a vertical axle, which is driven by an electric motor or a spring. The entire mechanism is enclosed in a tub or drum to contain hot metal should the mold break or an excess of metal be used. Single-use molds are prepared using the lost-wax method. A small amount of metal in a crucible (a sort of ceramic pan) next to the mold is heated with a torch. When the metal is molten, the arm is released, forcing (by centrifugal force) the metal into the mold. The high forces imposed on the metal overcome its viscosity, resulting in a finely detailed workpiece. A similar advantage may be obtained by vacuum casting or pressure casting.
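A back-of-the-envelope calculation shows why the spinning arm is so effective; the rotation speed and radius below are hypothetical, not taken from any particular machine.

```python
import math

# Centripetal acceleration at the mold: a = omega^2 * r.
rpm = 300.0    # hypothetical arm speed
r = 0.25       # hypothetical distance from axle to mold, in metres

omega = rpm * 2 * math.pi / 60.0        # angular velocity in rad/s
a = omega ** 2 * r
print(f"{a:.0f} m/s^2 = about {a / 9.81:.0f} g")   # ~247 m/s^2, roughly 25 g
```

Forces of this order, far exceeding gravity in a static pour, are what drive the molten metal into fine detail.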
Processes:
For casting of small parts using hot metal, a disk shaped mold is contained within a rotating drum, and molten metal is poured into the center.
Machinery:
Many machines are available which can perform centrifugal casting, and they are relatively simple to construct. All that is required is an arm which rotates with an adequate amount of centrifugal force, and a container on the end of that arm to hold both a mold and the material to be cast into the mold.
**Mach bands**
Mach bands:
Mach bands is an optical illusion named after the physicist Ernst Mach. It exaggerates the contrast between edges of the slightly differing shades of gray, as soon as they contact one another, by triggering edge-detection in the human visual system.
Explanation:
The Mach bands effect is due to the spatial high-boost filtering performed by the human visual system on the luminance channel of the image captured by the retina. Mach reported the effect in 1865, conjecturing that filtering is performed in the retina itself, by lateral inhibition among its neurons. This conjecture is supported by observations on other (non-visual) senses, as pointed out by von Békésy. The visual pattern is often found on curved surfaces subject to a particular, naturally-occurring illumination, so the occurrence of filtering can be explained as the result of learnt image statistics. The effect of filtering can be modeled as a convolution between a trapezoidal function that describes the illumination and one or more bandpass filters. A tight approximation is obtained by a model employing 9 even-symmetric filters scaled at octave intervals.
The effect is independent of the orientation of the boundary.
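The filtering account can be demonstrated with a minimal sketch. The code below is illustrative only; the difference-of-Gaussians kernel and the gain are arbitrary choices, not the nine-filter model mentioned above. It applies a crude center-surround, high-boost filter to a luminance ramp and shows the overshoot and undershoot that appear as Mach bands:

```python
import numpy as np

# Luminance profile: dark plateau, linear ramp, bright plateau.
x = np.concatenate([np.full(50, 0.2), np.linspace(0.2, 0.8, 30), np.full(50, 0.8)])

def gaussian(sigma, n=25):
    t = np.arange(n) - n // 2
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    return g / g.sum()

kernel = gaussian(1.0) - gaussian(4.0)       # center minus surround (sums to ~0)

pad = 30                                     # pad with edge values to avoid border artifacts
xp = np.pad(x, pad, mode="edge")
resp = xp + 2.0 * np.convolve(xp, kernel, mode="same")   # high-boost filtering
response = resp[pad:-pad]

# The response dips below the dark plateau and rises above the bright one
# exactly where the ramp meets the plateaus: the illusory dark and bright bands.
print(response.min() < x.min(), response.max() > x.max())   # True True
```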
In radiology:
This visual phenomenon is important to keep in mind when evaluating dental radiographs for evidence of decay, in which grayscale images of teeth and bone are analyzed for abnormal variances of density. A false-positive radiological diagnosis of dental caries can easily arise if the practitioner does not take into account the likelihood of this illusion. Mach bands manifest adjacent to metal restorations or appliances and the boundary between enamel and dentin. Mach bands may also result in the misdiagnosis of horizontal root fractures because of the differing radiographic intensities of tooth and bone.
The Mach effect can also lead to an erroneous diagnosis of pneumothorax by creating a dark line at the lung periphery (whereas a true pneumothorax will have a white pleural line).
In computer graphics:
Mach bands can also appear when there is a discontinuity in the derivative of a gradient, a visual effect common when intensities are linearly interpolated such as in Gouraud shading.
Computer image processing systems use edge-detection in a way analogous to the brain, using unsharp masking to clarify edges in photos for example.
**Exploration logging**
Exploration logging:
Exploration logging is the process of wireline logging, geophysical logging, geotechnical logging or geological logging of a drill hole, its core, or its rock cuttings for petrophysics or petrology. The practice is usually used in the mining, mineral exploration or oil and natural gas sectors. Note that logging in this context does not refer to logging trees.
**Aperture Desk Job**
Aperture Desk Job:
Aperture Desk Job is a 2022 action game by Valve. A spin-off of the Portal series, it was released concurrently with the Steam Deck as a tech demo showcasing the platform's controller functions.
Gameplay:
The gameplay consists of product inspection aided by Grady, an artificial intelligence "core" much like many others seen in the Portal franchise. The game serves as a tech demo for the Steam Deck, with most of the game taking place in front of a desk that acts as an in-world representation of the console. A few different scenarios are used to test different functions of the controls, such as a shooting segment making use of gyroscopic control, or a situation where the player must write their name making use of the Steam Deck's touchscreen. The game can also be played on standard computer hardware with the use of a game controller.
Synopsis:
The player, whose name is up to them to decide, starts work at Aperture as a product tester at a desk that from that point on they seem incapable of leaving. A core named Grady (Nate Bargatze) arrives and tasks the player with testing toilets. A defective toilet destroys a transport pipe filled with ammunition, filling the cistern with bullets which the toilet then fires off. Inspired, Grady decides to try and pitch a new idea to the heads of Aperture.
Synopsis:
Six months later, Grady introduces a turret cobbled together from weapon parts and the shell of a toilet. He urges the player to test it out and the player ends up destroying the warehouse. Grady leaves the player to take the fall while he attempts to improve the design of the turret. Eighteen months later the player is released from Aperture prison, and Grady has become a parole officer in order to monitor them. Grady enthusiastically urges them back to work to test his new and improved turret on appliances stolen from the Housewares Department. He claims that he has organized a meeting with CEO Cave Johnson (J. K. Simmons), and as they journey to Johnson's office on the 80th floor Grady fantasizes about spending the money they are going to make on paying back the loan sharks he used to fund the turret's development. They are suddenly attacked by appliances modelled into (far superior) turrets by Housewares engineers. The player battles through an onslaught of appliance turrets, before using the desk's inbuilt rocket propulsion system to speed them to the top floor.
Synopsis:
Upon reaching Johnson's office, Grady reveals that he lied about organizing the meeting and that he suspects Johnson is a recluse, given that no one has seen him in years. Upon entering the office, it is revealed that Johnson no longer exists as a physical being: he was stricken with a terrible disease years prior and had his consciousness uploaded into a supercomputer designed to look like a giant statue of his head. Having lived this way for years, he begs the player to kill him; the player and Grady oblige by using the turret, at first trying to destroy the head's clay shell before eventually shutting off the power supply. At first this seems to work, only for the backup power to turn Johnson back on, prompting him to fire the two. However, the weight of his head and the damage caused by the player cause him to fall through the floor, all the way to the bottom of the building.
Synopsis:
Months later, Grady and the player have entered into a witness protection program having informed on Grady's loan sharks, and now go by "Gary" and "Charlie". The two of them still 'work' in the damaged Aperture building, with the player 'testing' toilets that simply fall off the conveyor belt into a hole in the floor caused by Johnson's falling head. The head, along with several other toilet turrets, are briefly given power by an advanced device created by a colony of praying mantises that infest the building, and they perform a choir song over the credits together.
Development:
Aperture Desk Job was developed by Valve on the Source 2 engine as a tech demo for the Steam Deck. It was released for free on March 1, 2022.
Reception:
Rock Paper Shotgun praised the humor of the game.
**Birding (magazine)**
Birding (magazine):
Birding is the bimonthly magazine of the American Birding Association. Birding publishes articles on field identification, bird conservation, notable sightings, and other subjects of interest to the birding community. Each issue also includes critical reviews of new equipment and books. Ted Floyd and Frank Izaguirre are the magazine's editors.
A six-part history of birding in North America as reported in the pages of Birding appeared in 2006:
The History of Birding Part I. 1968–1974
The History of Birding Part II. 1975–1980
The History of Birding Part III. 1981–1987
The History of Birding Part IV. 1988–1993
The History of Birding Part V. 1994–2000
The History of Birding Part VI. 2001–2006
**ScicomP**
ScicomP:
The IBM HPC Systems Scientific Computing User Group (ScicomP) is an international organization open to all scientific and technical users of IBM systems. At yearly meetings application scientists and staff from HPC centers present talks about, and discuss, ways to develop efficient and scalable scientific applications. These meetings provide an opportunity to give feedback to IBM that will influence the design of future systems. ScicomP is a not-for-profit group and is not affiliated with IBM Corporation.
History:
ScicomP was formed in October 1999 when 285 researchers and engineers met at IBM's Advanced Computing Technology Center (ACTC) at IBM Research in Yorktown Heights, NY.
History:
The meeting was a user-oriented, planned three-day workshop to share information on scientific computing techniques among users of IBM SP supercomputers. The meeting was created on the recommendations of the attendees of the IBM SP Scientific Applications Development and Optimization Workshop held in March 1999 at the San Diego Supercomputer Center. The objective of that meeting was to help computational scientists and engineers develop applications that achieve maximum performance and scalability on the IBM SP systems.
History:
ScicomP has held annual and semi-annual meetings that bring together scientific domain experts, computational scientists, systems engineers, and IBM technical specialists to share experiences using IBM High Performance Computing Systems. The domain encompasses all IBM HPC Systems, including Power, Blue Gene, Cell, hybrid (e.g. the Los Alamos RoadRunner system), and blade architectures.
The meetings alternate between supercomputing sites in North America and Europe. The next meeting will be held in May 2009 at the Barcelona Supercomputing Center in Barcelona, Spain.
**Busy beaver**
Busy beaver:
In theoretical computer science, the busy beaver game aims at finding a terminating program of a given size that produces the most output possible. Since an endlessly looping program producing infinite output is easily conceived, such programs are excluded from the game.
Busy beaver:
More precisely, the busy beaver game consists of designing a halting Turing machine with alphabet {0,1} which writes the most 1s on the tape, using only a given set of states. The rules for the 2-state game are as follows: the machine must have at most two states in addition to the halting state, and the tape initially contains 0s only.
A player should conceive a transition table aiming for the longest output of 1s on the tape while making sure the machine will halt eventually.
Busy beaver:
An nth busy beaver, BB-n or simply "busy beaver" is a Turing machine that wins the n-state busy beaver game. That is, it attains the largest number of 1s among all possible competing n-state Turing machines. The BB-2 Turing machine, for instance, achieves four 1s in six steps.
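The BB-2 champion is small enough to run directly. The sketch below (a minimal simulator; the transition-table encoding is an illustrative choice) uses the standard BB-2 champion rules and reproduces the four 1s in six steps:

```python
# Transition table: (state, symbol) -> (symbol to write, head move, next state).
BB2 = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "H"),
}

def run(rules, start="A", halt="H", max_steps=10 ** 7):
    """Run a 2-symbol Turing machine on an all-0 tape; return (steps, count of 1s)."""
    tape, pos, state, steps = {}, 0, start, 0
    while state != halt and steps < max_steps:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

print(run(BB2))   # (6, 4): halts after six steps with four 1s on the tape
```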
Determining whether an arbitrary Turing machine is a busy beaver is undecidable. This has implications in computability theory, the halting problem, and complexity theory. The concept was first introduced by Tibor Radó in his 1962 paper, "On Non-Computable Functions".
The game:
The n-state busy beaver game (or BB-n game), introduced in Tibor Radó's 1962 paper, involves a class of Turing machines, each member of which is required to meet the following design specifications: The machine has n "operational" states plus a Halt state, where n is a positive integer, and one of the n states is distinguished as the starting state. (Typically, the states are labelled by 1, 2, ..., n, with state 1 as the starting state, or by A, B, C, ..., with state A as the starting state.) The machine uses a single two-way infinite (or unbounded) tape.
The game:
The tape alphabet is {0, 1}, with 0 serving as the blank symbol.
The game:
The machine's transition function takes two inputs: the current non-Halt state and the symbol in the current tape cell. It produces three outputs: a symbol to write over the symbol in the current tape cell (it may be the same symbol as the symbol overwritten), a direction to move (left or right; that is, shift to the tape cell one place to the left or right of the current cell), and a state to transition into (which may be the Halt state). There are thus (4n + 4)^(2n) n-state Turing machines meeting this definition, because the general form of the formula is (symbols × directions × (states + 1))^(symbols × states).
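As a quick check of this counting formula for 2-symbol machines (a one-off illustration):

```python
# machines = (symbols * directions * (states + 1)) ** (symbols * states)
count = lambda n: (2 * 2 * (n + 1)) ** (2 * n)
print(count(1), count(2))   # 64 one-state machines, 20736 two-state machines
```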
The game:
The transition function may be seen as a finite table of 5-tuples, each of the form (current state, current symbol, symbol to write, direction of shift, next state).
"Running" the machine consists of starting in the starting state, with the current tape cell being any cell of a blank (all-0) tape, and then iterating the transition function until the Halt state is entered (if ever). If, and only if, the machine eventually halts, then the number of 1s finally remaining on the tape is called the machine's score.
The game:
The n-state busy beaver (BB-n) game is a contest to find such an n-state Turing machine having the largest possible score — the largest number of 1s on its tape after halting. A machine that attains the largest possible score among all n-state Turing machines is called an n-state busy beaver, and a machine whose score is merely the highest so far attained (perhaps not the largest possible) is called a champion n-state machine.
The game:
Radó required that each machine entered in the contest be accompanied by a statement of the exact number of steps it takes to reach the Halt state, thus allowing the score of each entry to be verified (in principle) by running the machine for the stated number of steps. (If entries were to consist only of machine descriptions, then the problem of verifying every potential entry is undecidable, because it is equivalent to the well-known halting problem — there would be no effective way to decide whether an arbitrary machine eventually halts.)
Related functions:
The busy beaver function Σ
The busy beaver function quantifies the maximum score attainable by a busy beaver on a given measure. It is a noncomputable function, and it can be shown to grow faster asymptotically than any computable function.
The busy beaver function, Σ: ℕ → ℕ, is defined so that Σ(n) is the maximum attainable score (the maximum number of 1s finally on the tape) among all halting 2-symbol n-state Turing machines of the above-described type, when started on a blank tape.
Related functions:
It is clear that Σ is a well-defined function: for every n, there are at most finitely many n-state Turing machines as above, up to isomorphism, hence at most finitely many possible running times.
Related functions:
This infinite sequence Σ is the busy beaver function, and any n-state 2-symbol Turing machine M for which σ(M) = Σ(n) (i.e., which attains the maximum score) is called a busy beaver. Note that for each n, there exist at least four n-state busy beavers (because, given any n-state busy beaver, another is obtained by merely changing the shift direction in a halting transition, another by shifting all direction changes to their opposite, and the final by shifting the halt direction of the all-swapped busy beaver. Theoretically, there could be more than one kind of transition leading to the halting state, but in practice it would be wasteful, because there's only one sequence of state transitions producing the sought-after result).
Related functions:
Non-computability
Radó's 1962 paper proved that if f: ℕ → ℕ is any computable function, then Σ(n) > f(n) for all sufficiently large n, and hence that Σ is not a computable function.
Related functions:
Moreover, this implies that it is undecidable by a general algorithm whether an arbitrary Turing machine is a busy beaver. (Such an algorithm cannot exist, because its existence would allow Σ to be computed, which is a proven impossibility. In particular, such an algorithm could be used to construct another algorithm that would compute Σ as follows: for any given n, each of the finitely many n-state 2-symbol Turing machines would be tested until an n-state busy beaver is found; this busy beaver machine would then be simulated to determine its score, which is by definition Σ(n).) Even though Σ(n) is an uncomputable function, there are some small n for which it is possible to obtain its values and prove that they are correct. It is not hard to show that Σ(0) = 0, Σ(1) = 1, Σ(2) = 4, and with progressively more difficulty it can be shown that Σ(3) = 6 and Σ(4) = 13 (sequence A028444 in the OEIS). Σ(n) has not yet been determined for any instance of n > 4, although lower bounds have been established (see the Known values section below).
Related functions:
In 2016, Adam Yedidia and Scott Aaronson obtained the first (explicit) upper bound on the minimum n for which Σ(n) is unprovable in ZFC. To do so they constructed a 7910-state Turing machine whose behavior cannot be proven based on the usual axioms of set theory (Zermelo–Fraenkel set theory with the axiom of choice), under reasonable consistency hypotheses (stationary Ramsey property). This was later reduced to 1919 states, with the dependency on the stationary Ramsey property eliminated, and later to 748 states.
Related functions:
Complexity and unprovability of Σ
A variant of Kolmogorov complexity is defined as follows [cf. Boolos, Burgess & Jeffrey, 2007]: the complexity of a number n is the smallest number of states needed for a BB-class Turing machine that halts with a single block of n consecutive 1s on an initially blank tape. The corresponding variant of Chaitin's incompleteness theorem states that, in the context of a given axiomatic system for the natural numbers, there exists a number k such that no specific number can be proved to have complexity greater than k, and hence that no specific upper bound can be proven for Σ(k) (the latter is because "the complexity of n is greater than k" would be proved if "n > Σ(k)" were proved). As mentioned in the cited reference, for any axiomatic system of "ordinary mathematics" the least value k for which this is true is far less than 10 ↑↑ 10; consequently, in the context of ordinary mathematics, neither the value nor any upper bound of Σ(10 ↑↑ 10) can be proven. (Gödel's first incompleteness theorem is illustrated by this result: in an axiomatic system of ordinary mathematics, there is a true-but-unprovable sentence of the form "Σ(10 ↑↑ 10) = n", and there are infinitely many true-but-unprovable sentences of the form "Σ(10 ↑↑ 10) < n".)
Maximum shifts function S
In addition to the function Σ, Radó [1962] introduced another extreme function for Turing machines, the maximum shifts function, S, defined as follows:
s(M) = the number of shifts M makes before halting, for any M ∈ En,
S(n) = max{s(M) | M ∈ En} = the largest number of shifts made by any halting n-state 2-symbol Turing machine.
Because these Turing machines are required to have a shift in each and every transition or "step" (including any transition to a Halt state), the max-shifts function is at the same time a max-steps function.
Related functions:
Radó showed that S is noncomputable for the same reason that Σ is noncomputable — it grows faster than any computable function. He proved this simply by noting that for each n, S(n) ≥ Σ(n). Each shift may write a 0 or a 1 on the tape, while Σ counts a subset of the shifts that wrote a 1, namely the ones that hadn't been overwritten by the time the Turing machine halted; consequently, S grows at least as fast as Σ, which had already been proved to grow faster than any computable function.
Related functions:
The following connection between Σ and S was used by Lin & Radó [Computer Studies of Turing Machine Problems, 1965] to prove that Σ(3) = 6: For a given n, if S(n) is known then all n-state Turing machines can (in principle) be run for up to S(n) steps, at which point any machine that hasn't yet halted will never halt. At that point, by observing which machines have halted with the most 1s on the tape (i.e., the busy beavers), one obtains from their tapes the value of Σ(n). The approach used by Lin & Radó for the case of n = 3 was to conjecture that S(3) = 21, then to simulate all the essentially different 3-state machines for up to 21 steps. By analyzing the behavior of the machines that had not halted within 21 steps, they succeeded in showing that none of those machines would ever halt, thus proving the conjecture that S(3) = 21, and determining that Σ(3) = 6 by the procedure just described.
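The Lin and Radó procedure can be sketched in code, reusing the `run` simulator from the BB-2 example above (the enumeration below is an illustration and is practical only for very small n):

```python
from itertools import product

def sigma(n, step_bound):
    """Compute Sigma(n) by brute force, given a proven bound S(n) <= step_bound."""
    states = [chr(ord("A") + i) for i in range(n)]
    actions = [(w, m, s) for w in (0, 1) for m in (-1, +1) for s in states + ["H"]]
    keys = [(q, b) for q in states for b in (0, 1)]
    best = 0
    for table in product(actions, repeat=len(keys)):
        rules = dict(zip(keys, table))
        steps, ones = run(rules, max_steps=step_bound + 1)
        if steps <= step_bound:          # halted within the bound; score it
            best = max(best, ones)
    return best

print(sigma(1, 1))   # 1, i.e. Sigma(1) = 1, using the known bound S(1) = 1
```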
Related functions:
Inequalities relating Σ and S include the following (from [Ben-Amram, et al., 1996]), which are valid for all n ≥ 1:
S(n) ≥ Σ(n)
S(n) ≤ (2n − 1) · Σ(3n + 3)
S(n) < Σ(3n + 6)
and an asymptotically improved bound (from [Ben-Amram, Petersen, 2002]): there exists a constant c such that for all n ≥ 2, S(n) ≤ Σ(n + ⌈8n/log₂ n⌉ + c).
Related functions:
S(n) tends to be close to the square of Σ(n), and in fact many machines give S(n) less than Σ(n)².
Known values for Σ and S
As of 2016 the function values for Σ(n) and S(n) are only known exactly for n < 5. The current (as of 2023) 5-state busy beaver champion produces 4098 1s, using 47,176,870 steps (discovered by Heiner Marxen and Jürgen Buntrock in 1989), but there remain many machines with non-regular behavior which are believed to never halt, but which have not been proven to run infinitely. Various sources list different numbers of these holdouts; Skelet lists 42 or 43 unproven machines. At the moment the record 6-state champion produces over 10↑↑15 ones (found by Pavel Kropitz in 2022). As noted above, these are 2-symbol Turing machines.
Related functions:
Milton Green, in his 1964 paper "A Lower Bound on Rado's Sigma Function for Binary Turing Machines", constructed a set of Turing machines demonstrating that for k ≥ 2, Σ(2k) > 3 ↑^(k−2) 3, where ↑ is Knuth up-arrow notation; bounds of this form grow on the order of Ackermann's function A.
Thus Σ(10) > 3 ↑↑↑ 3 = 3 ↑↑ (3 ↑↑ 3) = 3 ↑↑ 7625597484987 (an exponential tower of 7625597484987 threes), and Σ(12) > 3 ↑↑↑↑ 3 = g1, where the number g1 is the enormous starting value in the sequence that defines Graham's number.
Related functions:
In 1964 Milton Green developed a lower bound for the busy beaver function that was published in the proceedings of the 1964 IEEE symposium on switching circuit theory and logical design. Heiner Marxen and Jürgen Buntrock described it as "a non-trivial (not primitive recursive) lower bound". This lower bound can be calculated but is too complex to state as a single expression in terms of n. When n = 8 the method gives Σ(8) ≥ 3 × (7 × 3^92 − 1) / 2 ≈ 8.248 × 10^44.
Related functions:
In contrast, the best current (as of 2023) lower bound on Σ(6) is 10↑↑15, which is far greater than the lower bound given by Green's formula, 3³ = 27 (which is tiny in comparison). It is also much greater than 3 ↑↑ 3 = 3^(3^3) = 7625597484987, which is Green's first lower bound for Σ(8), and much greater than the second lower bound for Σ(8), 3 × (7 × 3^92 − 1) / 2.
Related functions:
Proof for uncomputability of S(n) and Σ(n)
Suppose that S(n) is a computable function and let EvalS denote a TM evaluating S(n). Given a tape with n 1s it will produce S(n) 1s on the tape and then halt. Let Clean denote a Turing machine cleaning the sequence of 1s initially written on the tape. Let Double denote a Turing machine evaluating the function n + n. Given a tape with n 1s it will produce 2n 1s on the tape and then halt. Let us create the composition Double | EvalS | Clean and let n0 be the number of states of this machine. Let Create_n0 denote a Turing machine creating n0 1s on an initially blank tape. This machine may be constructed in a trivial manner to have n0 states (the state i writes 1, moves the head right and switches to state i + 1, except the state n0, which halts). Let N denote the sum n0 + n0.
Related functions:
Let BadS denote the composition Create_n0 | Double | EvalS | Clean. Notice that this machine has N states. Starting with an initially blank tape it first creates a sequence of n0 1s and then doubles it, producing a sequence of N 1s. Then BadS will produce S(N) 1s on tape, and at last it will clear all 1s and then halt. But the cleaning phase alone will continue for at least S(N) steps, so the running time of BadS is strictly greater than S(N), which contradicts the definition of the function S(n).
Related functions:
The uncomputability of Σ(n) may be proved in a similar way. In the above proof, one must exchange the machine EvalS with EvalΣ and Clean with Increment — a simple TM, searching for a first 0 on the tape and replacing it with 1.
Related functions:
The uncomputability of S(n) can also be established by reference to the blank tape halting problem. The blank tape halting problem is the problem of deciding for any Turing machine whether or not it will halt when started on an empty tape. The blank tape halting problem is equivalent to the standard halting problem and so it is also uncomputable. If S(n) was computable, then we could solve the blank tape halting problem simply by running any given Turing machine with n states for S(n) steps; if it has still not halted, it never will. So, since the blank tape halting problem is not computable, it follows that S(n) must likewise be uncomputable.
Related functions:
Generalizations
For any model of computation there exist simple analogs of the busy beaver. For example, the generalization to Turing machines with n states and m symbols defines the following generalized busy beaver functions:
Σ(n, m): the largest number of non-zeros printable by an n-state, m-symbol machine started on an initially blank tape before halting, and
S(n, m): the largest number of steps taken by an n-state, m-symbol machine started on an initially blank tape before halting.
For example, the longest-running 3-state 3-symbol machine found so far runs 119,112,334,170,342,540 steps before halting. The longest-running 6-state, 2-symbol machine which has the additional property of reversing the tape value at each step produces 6147 1s after 47,339,970 steps. So for the Reversal Turing Machine (RTM) class, SRTM(6) ≥ 47,339,970 and ΣRTM(6) ≥ 6147.
Related functions:
It is possible to further generalize the busy beaver function by extending to more than one dimension.
Likewise we could define an analog to the Σ function for register machines as the largest number which can be present in any register on halting, for a given number of instructions.
Related functions:
Exact values and lower bounds
The following table lists the exact values and some known lower bounds for S(n, m) and Σ(n, m) for the generalized busy beaver problems. Note: entries listed as "?" are bounded from below by the maximum of all entries to left and above. These machines either haven't been investigated or were subsequently surpassed by a smaller machine.
Related functions:
Nondeterministic Turing machines
The problem can be extended to nondeterministic Turing machines by looking for the system with the most states across all branches or the branch with the longest number of steps. The question of whether a given NDTM will halt is still computationally irreducible, and the computation required to find an NDTM busy beaver is significantly greater than the deterministic case, since there are multiple branches that need to be considered. For a 2-state, 2-color system with p cases or rules, the table to the right gives the maximum number of steps before halting and the maximum number of unique states created by the NDTM.
Applications:
In addition to posing a rather challenging mathematical game, the busy beaver functions offer an entirely new approach to solving pure mathematics problems. Many open problems in mathematics could in theory, but not in practice, be solved in a systematic way given the value of S(n) for a sufficiently large n.
Consider any conjecture that could be disproven via a counterexample among a countable number of cases (e.g. Goldbach's conjecture). Write a computer program that sequentially tests this conjecture for increasing values. In the case of Goldbach's conjecture, we would consider every even number ≥ 4 sequentially and test whether or not it is the sum of two prime numbers. Suppose this program is simulated on an n-state Turing machine. If it finds a counterexample (an even number ≥ 4 that is not the sum of two primes in our example), it halts and indicates that. However, if the conjecture is true, then our program will never halt. (This program halts only if it finds a counterexample.) Now, this program is simulated by an n-state Turing machine, so if we know S(n) we can decide (in a finite amount of time) whether or not it will ever halt by simply running the machine that many steps. And if, after S(n) steps, the machine does not halt, we know that it never will and thus that there are no counterexamples to the given conjecture (i.e., no even numbers that are not the sum of two primes). This would prove the conjecture to be true.
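The shape of such a program is easy to sketch. The Python below is an illustration, not an actual Turing-machine encoding; it halts if and only if it finds a Goldbach counterexample, and compiled down to an n-state Turing machine, knowledge of S(n) would settle the conjecture as described above:

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_search():
    """Halts, returning a counterexample, iff Goldbach's conjecture is false."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n        # an even number that is not a sum of two primes
        n += 2

# Run the equivalent n-state machine for S(n) steps; if it has not halted
# by then, it never will, and the conjecture is proved true.
```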
Applications:
Thus specific values (or upper bounds) for S(n) could be used to systematically solve many open problems in mathematics (in theory). However, current results on the busy beaver problem suggest that this will not be practical, for two reasons. First, it is extremely hard to prove values for the busy beaver function (and the max shift function): they have only been proven for extremely small machines with fewer than five states, while one would presumably need at least 20-50 states to make a useful machine. Furthermore, every known exact value of S(n) was proven by enumerating every n-state Turing machine and proving whether or not each halts. One would have to calculate S(n) by some less direct method for it to actually be useful.
Applications:
But even if one did find a better way to calculate S(n), the values of the busy beaver function (and max shift function) get very large, very fast. S(6) > 10^36534 already requires special pattern-based acceleration to be able to simulate to completion. Likewise, we know that S(10) > Σ(10) > 3 ↑↑↑ 3 is a gigantic number and S(17) > Σ(17) > G > g_1, where G is Graham's number, an enormous number. Thus, even if we knew, say, S(30), it is completely unreasonable to run any machine that number of steps. There is not enough computational capacity in the known part of the universe to have performed even S(6) operations directly.
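For a sense of scale, Knuth's up-arrow notation used above can be defined directly. This short Python sketch (ours, for illustration only) evaluates a ↑^n b recursively; only tiny inputs are feasible, which is rather the point:

```python
def knuth_arrow(a: int, n: int, b: int) -> int:
    """Evaluate a ↑^n b in Knuth's up-arrow notation, defined recursively."""
    if n == 1:
        return a ** b          # one arrow is ordinary exponentiation
    if b == 0:
        return 1               # base case of the recursion
    return knuth_arrow(a, n - 1, knuth_arrow(a, n, b - 1))

assert knuth_arrow(3, 1, 3) == 27        # 3↑3  = 3^3
assert knuth_arrow(2, 2, 3) == 16        # 2↑↑3 = 2^(2^2)
assert knuth_arrow(3, 2, 2) == 27        # 3↑↑2 = 3^3
# 3↑↑3 = 3^(3^3) = 7625597484987; 3↑↑↑3 is a tower of 7625597484987
# threes, far beyond anything that can be evaluated explicitly.
```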
Applications:
Notable instances:
A 748-state binary Turing machine has been constructed that halts iff ZFC is inconsistent. A 744-state Turing machine has been constructed that halts iff the Riemann hypothesis is false. A 43-state Turing machine has been constructed that halts iff Goldbach's conjecture is false, and a 27-state machine for that conjecture has been proposed but not yet verified. A 15-state Turing machine has been constructed that halts iff the following conjecture formulated by Paul Erdős in 1979 is false: for all n > 8 there is at least one digit 2 in the base-3 representation of 2^n.
Examples:
These are tables of rules for the Turing machines that generate Σ(1) and S(1), Σ(2) and S(2), Σ(3) (but not S(3)), Σ(4) and S(4), the best known lower bound for Σ(5) and S(5), and Σ(6) and S(6). In the tables, columns represent the current state and rows represent the current symbol read from the tape. Each table entry is a string of three characters, indicating the symbol to write onto the tape, the direction to move, and the new state (in that order). The halt state is shown as H.
Examples:
Each machine begins in state A with an infinite tape that contains all 0s. Thus, the initial symbol read from the tape is a 0.
Result key (starts at the position overlined, halts at the position underlined):
1-state machine result: 0 0 1 0 0 (1 step, one "1" total)
2-state machine result: 0 0 1 1 1 1 0 0 (6 steps, four "1"s total)
3-state machine result: 0 0 1 1 1 1 1 1 0 0 (14 steps, six "1"s total)
Unlike the previous machines, this one is a busy beaver only for Σ, but not for S (S(3) = 21).
4-state machine result: 0 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 (107 steps, thirteen "1"s total)
5-state machine result: 4098 "1"s with 8191 "0"s interspersed, in 47,176,870 steps
Qualitatively, the evolution of this machine resembles that of some cellular automata.
6-state machine result: 1 0 1 1 1 ... 1 1 1 ("10" followed by more than 10↑↑15 contiguous "1"s, in more than 10↑↑15 steps, where 10↑↑15 = 10^10^...^10 is an exponential tower of 15 tens).
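These small results can be checked mechanically. Below is a minimal Python Turing machine simulator (our own sketch, not from the original tables) running the 2-state, 2-symbol champion; it reproduces the 6 steps and four "1"s quoted above:

```python
# Transition table for the 2-state busy beaver champion.
# Each (state, symbol) maps to (symbol_to_write, head_move, next_state).
RULES = {
    ('A', 0): (1, +1, 'B'),   # 1RB
    ('A', 1): (1, -1, 'B'),   # 1LB
    ('B', 0): (1, -1, 'A'),   # 1LA
    ('B', 1): (1, +1, 'H'),   # 1RH (halt)
}

def run(rules, start='A', max_steps=10**6):
    """Simulate on an initially blank two-way-infinite tape (a dict)."""
    tape, pos, state, steps = {}, 0, start, 0
    while state != 'H':
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
        if steps > max_steps:
            raise RuntimeError("did not halt within max_steps")
    return steps, sum(tape.values())

print(run(RULES))  # (6, 4): S(2) = 6 steps, Σ(2) = 4 ones
```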
Visualizations:
In the following table, the rules for each busy beaver (maximizing Σ) are represented visually, with orange squares corresponding to a "1" on the tape, and white corresponding to "0". The position of the head is indicated by the black ovoid, with the orientation of the head representing the state. Individual tapes are laid out horizontally, with time progressing from top to bottom. The halt state is represented by a rule which maps one state to itself (head doesn't move).
**Hydrodissection**
Hydrodissection:
Hydrodissection is the use of a directed jet of water to surgically separate tissues. It is generally used to develop tissue planes or divide soft tissues with less trauma than dissection using a cutting instrument. At an appropriate pressure, the jet tends to follow the path of least resistance closest to its direction.
Applications:
In cataract surgery it is used to release the lens from its capsule by projecting a continuous flow of water from a cannula under the flap of the anterior capsule, which lifts the capsule membrane from the lens. By directing the flow, the surgeon lifts the membrane around the sides and back of the capsule until the lens is completely loose as a prelude to phacoemulsification or direct extracapsular removal. Hydrodissection is also used in general surgery to release a trapped nerve or to reduce intraoperative blood loss.
**Quicknet**
Quicknet:
Quicknet is an Ajax framework (using XMLHttpRequest in JavaScript) designed for developing web applications or websites that use passwords to authenticate users. With this framework, no cleartext password is sent over the network or stored on the server. Quicknet supports multiple languages, JavaScript cooperative multitasking, AJAX calls, session and password management, a modular structure, XML content, and JavaScript animation. It uses PHP on the server side, and JavaScript on the client side.
System requirements:
Server-side: Quicknet should run on any server with Apache 2.2+, MySQL 5.1+ and PHP 5+.
Client-side: Quicknet should be compatible with Internet Explorer 7+, Firefox 3+, Opera 9+, Safari 3+ and Google Chrome 1+.
Session and Password Management:
Quicknet is an AJAX framework that aims to protect users' passwords with a specially designed algorithm. This is achieved by using the same cryptographic hash function in JavaScript code on the client side and in PHP code on the server side to generate and compare hash results based on users' passwords and some random data. As a result, no cleartext password is sent over the network or stored on the server. The design aims to make it infeasible to steal a session or recover a user's original password, even if the data sent over the network and/or stored on the server is known.
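Quicknet's exact algorithm is not published here, so the following Python sketch only illustrates the general challenge-response pattern the description implies; all names are hypothetical, and Quicknet itself implements the two halves in JavaScript and PHP. The client hashes (a hash of) the password together with server-supplied random data, so only hash results ever cross the network:

```python
import hashlib
import os

def sha256_hex(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

# --- Server side (conceptually the PHP half) ---
stored_verifier = sha256_hex("correct horse battery staple")  # never the cleartext

def issue_challenge() -> str:
    """Fresh random nonce for each login attempt."""
    return os.urandom(16).hex()

def verify(challenge: str, response: str) -> bool:
    return response == sha256_hex(stored_verifier + challenge)

# --- Client side (conceptually the JavaScript half) ---
def client_response(password: str, challenge: str) -> str:
    return sha256_hex(sha256_hex(password) + challenge)

challenge = issue_challenge()
assert verify(challenge, client_response("correct horse battery staple", challenge))
```

Note that in this simplified sketch the stored verifier is itself password-equivalent, so it does not achieve everything the description above claims; it is meant only to show how a per-login nonce keeps the cleartext password off the wire.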
Secure Data Transmission:
Currently, Quicknet is possibly the only PHP AJAX framework that provides secure data transmission without SSL.
Multi-language:
Currently, Quicknet is possibly the only PHP AJAX framework with built-in support for multiple languages. Developers can easily add new languages to build their own systems.
**Countersteering**
Countersteering:
Countersteering is used by single-track vehicle operators, such as cyclists and motorcyclists, to initiate a turn toward a given direction by momentarily steering counter to the desired direction ("steer left to turn right"). To negotiate a turn successfully, the combined center of mass of the rider and the single-track vehicle must first be leaned in the direction of the turn, and steering briefly in the opposite direction causes that lean. The rider's action of countersteering is sometimes referred to as "giving a steering command". The scientific literature does not provide a clear and comprehensive definition of countersteering; in fact, "a proper distinction between steer torque and steer angle ... is not always made."
How it works:
When countersteering to turn right, the following is performed:
A torque on the handlebars to the left is applied.
The front wheel will then rotate about the steering axis to the left and the tire will generate forces in the contact patch to the left.
The machine as a whole steers to the left.
Because the forces in the contact patch are at ground level, this pulls the wheels "out from under" the bike to the left and causes it to lean to the right.
The rider, or in most cases the inherent stability of the bike, provides the steering torque necessary to rotate the front wheel back to the right and in the direction of the desired turn.
How it works:
The bike begins a turn to the right. While this appears to be a complex sequence of motions, it is performed by every child who rides a bicycle. The entire sequence goes largely unnoticed by most riders, which is why some assert that they do not do it. It is also important to distinguish the steering torque necessary to initiate the lean required for a given turn from the sustained steering torque and steering angle necessary to maintain a constant radius and lean angle until it is time to exit the turn.
How it works:
The initial steer torque and angle are both opposite the desired turn direction.
The sustained steer angle is in the same direction as the turn.
The sustained steer torque required to maintain that steer angle is either with or opposite the turn direction depending on forward speed, bike geometry, and combined bike and rider mass distribution.
How it works:
Need to lean to turn:
A bike can negotiate a curve only when the combined center of mass of bike and rider leans toward the inside of the turn at an angle appropriate for the velocity and the radius of the turn: arctan(v²/(gr)), where v is the forward speed, r is the radius of the turn and g is the acceleration of gravity. Higher speeds and tighter turns require greater lean angles. If the mass is not first leaned into the turn, the inertia of the rider and bike will cause them to continue in a straight line as the tires track out from under them along the curve. The transition of riding in a straight line to negotiating a turn is a process of leaning the bike into the turn, and the most practical way to cause that lean (of the combined center of mass of bike and rider) is to move the support points in the opposite direction first.
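The lean-angle formula is easy to evaluate. A minimal Python sketch (the sample speed and radius are our own illustrative values):

```python
import math

def lean_angle_deg(v: float, r: float, g: float = 9.81) -> float:
    """Ideal lean angle from vertical, in degrees: arctan(v^2 / (g r))."""
    return math.degrees(math.atan2(v ** 2, g * r))

# Example: 15 m/s (54 km/h) around a 50 m radius curve.
print(round(lean_angle_deg(15.0, 50.0), 1))  # ~24.6 degrees
```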
How it works:
Stable lean:
As the desired angle is approached, the front wheel must usually be steered into the turn to maintain that angle or the bike will continue to lean with gravity, increasing in rate, until the side contacts the ground. This process often requires little or no physical effort, because the geometry of the steering system of most bikes is designed in such a way that the front wheel has a strong tendency to steer in the direction of a lean.
How it works:
The actual torque the rider must apply to the handlebars to maintain a steady-state turn is a complex function of bike geometry, mass distribution, rider position, tire properties, turn radius, and forward speed. At low speeds, the steering torque necessary from the rider is usually negative, that is, opposite the direction of the turn, even when the steering angle is in the direction of the turn. At higher speeds, the direction of the necessary input torque often becomes positive, that is, in the same direction as the turn.
How it works:
At low speeds:
At low speeds countersteering is equally necessary, but the countersteering is then so subtle that it is hidden by the continuous corrections made in balancing the bike, often falling below a just-noticeable difference or threshold of perception of the rider. Countersteering at low speed may be further concealed by the much larger steering angle that follows in the direction of the turn.
How it works:
Gyroscopic effects:
One effect of turning the front wheel is a roll moment caused by gyroscopic precession. The magnitude of this moment is proportional to the moment of inertia of the front wheel, its spin rate (forward motion), the rate that the rider turns the front wheel by applying a torque to the handlebars, and the cosine of the angle between the steering axis and the vertical. For a sample motorcycle moving at 22 m/s (50 mph) that has a front wheel with a moment of inertia of 0.6 kg·m², turning the front wheel one degree in half a second generates a roll moment of 3.5 N·m. In comparison, the lateral force on the front tire as it tracks out from under the motorcycle reaches a maximum of 50 N. This, acting on the 0.6 m (2 ft) height of the center of mass, generates a roll moment of 30 N·m. While the moment from gyroscopic forces is only 12% of this, it can play a significant part because it begins to act as soon as the rider applies the torque, instead of building up more slowly as the wheel out-tracks. This can be especially helpful in motorcycle racing.
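The proportionality described above can be written out directly. This Python sketch is illustrative only: the wheel radius and steering-axis (rake) angle are our assumptions, since the text does not give them, so the result need not reproduce the quoted 3.5 N·m.

```python
import math

def gyro_roll_moment(I_wheel, v, r_wheel, steer_rate_deg_s, rake_deg):
    """Roll moment from gyroscopic precession of the steered front wheel:
    tau = I * omega_spin * omega_steer * cos(rake), omega_spin = v / r_wheel."""
    omega_spin = v / r_wheel
    omega_steer = math.radians(steer_rate_deg_s)
    return I_wheel * omega_spin * omega_steer * math.cos(math.radians(rake_deg))

# 22 m/s, 0.6 kg·m² front wheel, 1 degree of steer in 0.5 s (2 deg/s);
# 0.3 m wheel radius and 27° rake are assumed, not from the text.
print(round(gyro_roll_moment(0.6, 22.0, 0.3, 2.0, 27.0), 2))  # ~1.37 N·m
```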
Motorcycles:
Deliberately countersteering is essential for safe motorcycle riding, and as a result is generally a part of safe riding courses run by organisations such as the Motorcycle Safety Foundation, the Canada Safety Council, or Australian Q-Ride providers. Deliberately countersteering a motorcycle is a much more efficient way to steer than simply leaning. At higher speeds the self-balancing property of the bike gets stiffer, and a given input force applied to the handlebars produces smaller changes in lean angle.
Training:
Much of the art of motorcycle cornering is learning how to effectively push the grips into corners and how to maintain proper lean angles through the turn. When the need for a quick swerve to one side suddenly arises in an emergency, it is essential to know, through prior practice, that countersteering is the most efficient way to change the motorcycle's course. Many accidents result when otherwise experienced riders who have never carefully developed this skill encounter an unexpected obstacle.
Motorcycles:
To encourage an understanding of the phenomena around countersteering, the phrase positive steering is sometimes used. Other phrases are "PRESS – To turn, the motorcycle must lean", "To lean the motorcycle, press on the handgrip in the direction of the turn" or "Press left – lean left – go left".The Motorcycle Safety Foundation teaches countersteering to all students in all of its schools, as do all motorcycle racing schools. Countersteering is included in United States state motorcycle operator manuals and tests, such as Washington, New Jersey, California, and Missouri.
Motorcycles:
Safety:
According to the Hurt Report, most motorcycle riders in the United States would over-brake and skid the rear wheel and under-brake the front when trying hard to avoid a collision. The ability to countersteer and swerve was essentially absent in many motorcycle operators. The initial countersteering input required to get the bike to lean is often small and brief, perhaps lasting as little as 0.125 seconds, which keeps many riders unaware of the concept.
Multi-track vehicles:
Three-wheeled motorcycles without the ability to lean have no need to be countersteered, and an initial steer torque in one direction does not automatically result in a turn in the other direction. This includes sidecar rigs where the car is rigidly mounted on the bike. The three-wheeled BRP Can-Am Spyder Roadster uses two front wheels which do not lean, and so it steers like a car. Some sidecars allow the motorcycle to lean independent of the sidecar, and in some cases the sidecar even leans in parallel with the motorcycle. These vehicles must be countersteered the same way as a solo motorcycle. The three-wheeled Piaggio MP3 uses mechanical linkages to lean the two front wheels in parallel with the rear frame, and so it is countersteered in the same manner as a two-wheeled motorcycle. Free-leaning multi-track vehicles must be balanced by countersteering before turning. Multi-track leaning vehicles that are forced-tilted, such as the Carver, are tilted without countersteering the control and are not balanced by the operator. Later versions of the Carver introduced automatic countersteer to increase tilt speed and reduce the force required to tilt the vehicle. Other forced-tilted vehicles may incorporate automatic countersteering. A prototype tilting multi-track free-leaning vehicle was developed in 1984 that employs automatic countersteering and does not require any balancing skills.
Countersteering by weight shifting:
With a sufficiently light bike (especially a bicycle), the rider can initiate a lean and turn without using the handlebars by shifting body weight, called counter lean by some authors.
Documented physical experimentation shows that on heavy bikes (many motorcycles) shifting body weight is less effective at initiating leans. The following is done when countersteering using weight shifting to turn left:
The rider applies a momentary torque, either at the seat via the legs or in the torso, that causes the bike itself to lean to the right.
Countersteering by weight shifting:
The combined center of mass of the bike and rider is only lowered and not moved out, but if the front of the bike is free to swivel about its steering axis, the lean to the right will cause it to steer to the right by some combination of gyroscopic precession, ground reaction forces, gravitational force on an off-axis center of mass, or simply the inertia of an off-axis center of mass, depending on the exact geometry and mass distribution of the particular bike, the amount of torque, and the speed at which it is applied.
Countersteering by weight shifting:
This countersteering to the right causes the ground contact to move to the right of the center of mass, as the bike moves forward, thus generating a leftward lean. Finally the front end steers to the left and the bike enters the left turn. The amount of leftward steering necessary to balance the leftward lean appropriate for the forward speed and radius of the turn is controlled by the torque generated by the rider, again either at the seat or in the torso.
Countersteering by weight shifting:
To straighten back out of the turn, the rider simply reverses the procedure for entering it: cause the bike to lean farther to the left; this causes it to steer farther to the left, which moves the wheel contact patches farther to the left, eventually reducing the leftward lean and exiting the turn.
Countersteering by weight shifting:
A National Highway Traffic Safety Administration study showed that rider lean has a larger influence on a lighter motorcycle than a heavier one, which helps explain why no-hands steering is less effective on heavy motorcycles. Leaning the torso with respect to the bike does not cause the bike to lean far enough to generate anything but the shallowest turns. No-hands riders may be able to keep a heavy bike centered in a lane and negotiate shallow highway turns, but not much else.
Countersteering by weight shifting:
Complex maneuvers are not possible using weight shifting alone because even for a light machine there is insufficient control authority.
Although on a sufficiently light bike (especially a bicycle) the rider can initiate a lean and turn by shifting body weight, there is no evidence that complex maneuvers can be performed by body weight alone.
Other uses:
The term countersteering is also used by some authors to refer to the need on bikes to steer in the opposite direction of the turn (negative steering angle) to maintain control in response to significant rear wheel slippage. Motorcycle speedway racing takes place on an oval track with a loose surface of dirt, cinders or shale. Riders slide their machines sideways, powersliding or broadsiding into the turns, using an extreme form of this type of countersteering that is maintained throughout the turn. This also works, without power, for bicycles on loose or slippery surfaces, although it is an advanced technique.
Other uses:
The term is also used in the discussion of the automobile driving technique called drifting.
The Wright Brothers:
Wilbur Wright explained countersteering this way: I have asked dozens of bicycle riders how they turn to the left. I have never found a single person who stated all the facts correctly when first asked. They almost invariably said that to turn to the left, they turned the handlebar to the left and as a result made a turn to the left. But on further questioning them, some would agree that they first turned the handlebar a little to the right, and then as the machine inclined to the left, they turned the handlebar to the left and as a result made the circle, inclining inward.
**M squared**
M squared:
In laser science, the parameter M2, also known as the beam propagation ratio or beam quality factor, is a measure of laser beam quality. It represents the degree of variation of a beam from an ideal Gaussian beam. It is calculated from the ratio of the beam parameter product (BPP) of the beam to that of a Gaussian beam with the same wavelength. It relates the beam divergence of a laser beam to the minimum focused spot size that can be achieved. For a single-mode TEM00 (Gaussian) laser beam, M2 is exactly one. Unlike the beam parameter product, M2 is unitless and does not vary with wavelength.
M squared:
The M2 value for a laser beam is widely used in the laser industry as a specification, and its method of measurement is regulated as an ISO Standard.
Measurement:
There are several ways to define the width of a beam. When measuring the beam parameter product and M2, one uses the D4σ or "second moment" width of the beam to determine both the radius of the beam's waist and the divergence in the far field. M2 can be measured by placing an array detector or scanning-slit profiler at multiple positions within the beam after focusing it with a lens of high optical quality and known focal length. To properly obtain M2 the following steps must be followed:
Measure the D4σ widths at 5 axial positions near the beam waist (the location where the beam is narrowest).
Measurement:
Measure the D4σ widths at 5 axial positions at least one Rayleigh length away from the waist.
Measurement:
Fit the 10 measured data points to W²(z) = W₀² + M⁴·(λ/(πW₀))²·(z − z₀)², where W(z) is half of the D4σ(z) beam width and z₀ is the location of the beam waist with width W₀. Fitting the 10 data points yields M2, z₀, and W₀. Siegman showed that all beam profiles (Gaussian, flat top, TEMxy, or any shape) must follow the equation above provided that the beam radius uses the D4σ definition of the beam width; using other definitions of beam width does not work. In principle, one could use a single measurement at the waist to obtain the waist diameter, a single measurement in the far field to obtain the divergence, and then use these to calculate the M2. The procedure above gives a more accurate result in practice, however.
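As a sketch of the fitting step, the hyperbola above can be fit with scipy. The wavelength and "measurements" below are placeholders (we synthesize data from the model itself); this is illustrative, not a metrology-grade procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

wavelength = 1.064e-6  # m; assumed (Nd:YAG), not specified in the text

def width(z, w0, m2, z0):
    """W(z) from W^2(z) = W0^2 + (M^2)^2 (lambda/(pi W0))^2 (z - z0)^2."""
    return np.sqrt(w0**2 + (m2 * wavelength / (np.pi * w0))**2 * (z - z0)**2)

# Ten axial positions (m) and D4σ half-widths (m); real data would come
# from a beam profiler, per the 5-near-waist + 5-far-field procedure above.
z = np.linspace(-0.05, 0.05, 10)
W = width(z, 100e-6, 1.3, 0.0)          # synthetic "measurements"

(w0_fit, m2_fit, z0_fit), _ = curve_fit(width, z, W, p0=[80e-6, 1.0, 0.0])
print(w0_fit, m2_fit, z0_fit)           # recovers W0, M^2, z0
```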
Utility:
M2 is useful because it reflects how well a collimated laser beam can be focused to a small spot, or how well a divergent laser source can be collimated. It is a better guide to beam quality than Gaussian appearance because there are many cases in which a beam can look Gaussian, yet have an M2 value far from unity. Likewise, a beam intensity profile can appear very "un-Gaussian", yet have an M2 value close to unity. The quality of a beam is important for many applications. In fiber-optic communications beams with an M2 close to 1 are required for coupling to single-mode optical fiber. M2 determines how tightly a collimated beam of a given diameter can be focused: the diameter of the focal spot varies as M2, and the irradiance scales as 1/M4. For a given laser cavity, the output beam diameter (collimated or focused) scales as M, and the irradiance as 1/M2. This is very important in laser machining and laser welding, which depend on high fluence at the weld location. Generally, M2 increases as a laser's output power increases. It is difficult to obtain excellent beam quality and high average power at the same time due to thermal lensing in the laser gain medium.
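The focusing scaling can be illustrated with the common approximation d ≈ 4·M2·λ·f/(π·D) for the focused spot diameter of a collimated beam of diameter D through a lens of focal length f. A small Python sketch (the numbers are arbitrary illustrative values):

```python
import math

def focused_spot_diameter(m2, wavelength, focal_length, beam_diameter):
    """Approximate focused spot diameter: d = 4 M^2 * lambda * f / (pi * D)."""
    return 4 * m2 * wavelength * focal_length / (math.pi * beam_diameter)

# Example: 1.07 µm laser, 100 mm lens, 10 mm collimated input beam.
for m2 in (1.0, 1.3, 10.0):
    d = focused_spot_diameter(m2, 1.07e-6, 0.1, 0.01)
    print(m2, round(d * 1e6, 1), "µm")
# The spot diameter grows linearly with M2, so at fixed power the
# irradiance falls as 1/M4, matching the scaling stated above.
```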
Multi-mode beam propagation:
Real laser beams are often non-Gaussian, being multi-mode or mixed-mode. Multi-mode beam propagation is often modeled by considering a so-called "embedded" Gaussian, whose beam waist is M times smaller than that of the multimode beam. The diameter of the multimode beam is then M times that of the embedded Gaussian beam everywhere, and the divergence is M times greater, but the wavefront curvature is the same. The multimode beam has M2 times the beam area but 1/M2 less beam intensity than the embedded beam. This holds true for any given optical system, and thus the minimum (focused) spot size or beam waist of a multi-mode laser beam is M times the embedded Gaussian beam waist.
**Circumventricular organs**
Circumventricular organs:
Circumventricular organs (CVOs) (circum-: around; ventricular: of ventricle) are structures in the brain characterized by their extensive and highly permeable capillaries, unlike those in the rest of the brain where there exists a blood–brain barrier (BBB) at the capillary level. Although the term "circumventricular organs" was originally proposed in 1958 by Austrian anatomist Helmut O. Hofer concerning structures around the brain ventricular system, the penetration of blood-borne dyes into small specific CVO regions was discovered in the early 20th century. The permeable CVOs enabling rapid neurohumoral exchange include the subfornical organ (SFO), the area postrema (AP), the vascular organ of lamina terminalis (VOLT, also known as the organum vasculosum of the lamina terminalis (OVLT)), the median eminence, the pituitary neural lobe, and the pineal gland.

The circumventricular organs are midline structures around the third and fourth ventricles that are in contact with blood and cerebrospinal fluid, and they facilitate special types of communication between the central nervous system and peripheral blood. Additionally, they are an integral part of neuroendocrine function. Highly permeable capillaries allow the CVOs to act as an alternative route for peptides and hormones in the neural tissue to sample from and secrete to circulating blood. CVOs also have roles in body fluid regulation, cardiovascular functions, immune responses, thirst, feeding behavior and reproductive behavior.

CVOs can be classified as either sensory or secretory organs serving homeostatic functions and body water balance. The sensory organs include the area postrema, the subfornical organ, and the vascular organ of lamina terminalis, all having the ability to sense signals in blood and then pass that information neurally to other brain regions. Through their neural circuitry, they provide direct information to the autonomic nervous system from the systemic circulation. The secretory organs include the subcommissural organ (SCO), the pituitary gland, the median eminence, and the pineal gland. These organs are responsible for secreting hormones and glycoproteins into the peripheral blood using feedback from both the brain environment and external stimuli.

Circumventricular organs contain capillary networks that vary between one another and within individual organs both in density and permeability, with most CVO capillaries having a permeable endothelial cell layer, except for those in the subcommissural organ. Furthermore, all CVOs contain neural tissue, enabling a neuroendocrine role.
Circumventricular organs:
Although the choroid plexus also has permeable capillaries, it does not contain neural tissue; rather, its primary role is to produce cerebrospinal fluid (CSF), and therefore is typically not classified as a CVO.
Sensory organs:
Area postrema:
Anatomy:
The area postrema is located in the caudal medulla oblongata near the junction of the brainstem and the spinal cord. In humans and in most other mammals that have been studied, it consists of swellings on either wall of the fourth ventricle. In rodents and lagomorphs, however, the area postrema forms a midline structure dorsal to the obex. When viewed histologically for its capillary distribution and morphology, the area postrema has numerous subregions separated according to capillary permeability, rates of blood flow, and duration of blood transit through respective capillary beds.
Sensory organs:
Function:
Relatively little is known about the function of the area postrema in humans. However, there is strong evidence that the area postrema acts as the chemoreceptor trigger zone for vomiting, which is triggered by the presence of noxious stimulation from the blood. There is also evidence that the area postrema is the site at which angiotensin stimulates glucose metabolism, presumed efferent neural activity, blood pressure control, and thirst. The area postrema also has integrative capacities that enable it to send major and minor efferents to sections of the brain involved in the autonomic control of cardiovascular and respiratory activities.
Sensory organs:
Vascular organ of the lamina terminalis:
Anatomy:
Classified as a sensory circumventricular organ (along with the SFO and AP), the vascular organ of lamina terminalis (VOLT) is situated in the anterior wall of the third ventricle. Characteristically of the CVOs, it lacks the tight endothelial blood–brain barrier. The vascular organ is further characterized by afferent inputs from the subfornical organ (SFO), the median preoptic nucleus (MnPO) region, the brainstem, and even the hypothalamus. Conversely, the vascular organ of the lamina terminalis maintains efferent projections to the stria medullaris and basal ganglia.

As a major player in the maintenance of mammalian body fluid homeostasis, the VOLT features the primary neurons responsible for osmosensory balance. These neurons, in turn, feature angiotensin type I receptors, which are used by circulating angiotensin II to initiate water intake and sodium consumption. In addition to the angiotensin receptors, the neurons of the VOLT are also characterized by the presence of a nonselective cation channel deemed the transient receptor potential vanilloid 1, or TRPV1. Though there are other receptors within the TRPV family, a study by Ciura, Liedtke, and Bourque demonstrated that hypertonicity sensing operated via a mechanical mechanism of TRPV1 but not TRPV4. Despite a significant amount of data, the anatomy of the VOLT is not yet fully comprehended.
Sensory organs:
Function:
As previously mentioned, the vascular organ of lamina terminalis features neurons responsible for the homeostatic conservation of osmolarity. In addition, the fenestrated vasculature of the VOLT allows its astrocytes and neurons to perceive a wide variety of plasma molecules whose signals may be transduced into other regions of the brain, thereby eliciting autonomic and inflammatory reactions.

In experiments, mammalian VOLT neurons were shown to transduce hypertonicity by the activation of the TRPV1 nonselective cation channels. These channels are highly permeable to calcium and are responsible for membrane depolarization and increased action potential discharge. Stated simply, an increase in osmolarity results in a reversible depolarization of the VOLT neurons. This can be seen through the predominantly excitatory effects of ANG on the VOLT through the TRPV1 receptor. In this context, it is worth noting that VOLT neurons typically feature a resting membrane potential in the range of -50 to -67 mV, with input resistances ranging from 65 to 360 MΩ.

Despite a solid understanding of the VOLT's role in the maintenance of body fluid homeostasis, other functions are less understood. For example, it is thought that the VOLT may also play a role in the regulation of LH secretion via a negative feedback mechanism. It is also hypothesized that the VOLT may be the mechanism through which pyrogens function to initiate a febrile response in the CNS. Finally, VOLT neurons have been observed to respond to temperature changes, indicating that the organum vasculosum of the lamina terminalis is subject to different climates.
Sensory organs:
Subfornical organ (SFO):
Anatomy:
The subfornical organ is a sensory CVO situated on the underside of the fornix and lacking a BBB, the absence of which characterizes the circumventricular organs. Protruding into the third ventricle of the brain, the highly vascularized SFO can be divided into 3–4 anatomical zones, especially by its capillary density and structure. The central zone is composed exclusively of glial cells and neuronal cell bodies. Conversely, the rostral and caudal areas are mostly made of nerve fibers, while very few neurons and glial cells can be seen in this area. Functionally, however, the SFO may be viewed in two portions: the dorsolateral peripheral (pSFO) division and the ventromedial core segment.

As an important mechanism of both energy and osmotic homeostasis, the SFO has many efferent projections. In fact, SFO neurons have been experimentally shown to broadcast efferent projections to regions involved in cardiovascular regulation, including the lateral hypothalamus, with fibers terminating in the supraoptic (SON) and paraventricular (PVN) nuclei, and the anteroventral third ventricle (AV3V), with fibers terminating in the VOLT and the median preoptic area. It seems that the most essential of all these connections are the SFO's projections to the paraventricular hypothalamic nucleus. Based on their functional relevance, SFO neurons can be branded as either GE, featuring nonselective cation channels, or GI, featuring potassium channels. While the afferent projections of the SFO are considered less important than the various efferent connections, it is still notable that the subfornical organ receives synaptic input from the zona incerta and arcuate nucleus.

Study of subfornical organ anatomy is still ongoing, but evidence has demonstrated slow blood transit time, which may facilitate the sensory capability of the SFO by enabling increased contact time for blood-borne signals to penetrate its permeable capillaries and influence regulation of blood pressure and body fluids. This observation coincides with the fact that SFO neurons have been shown to be intrinsically osmosensitive. Finally, it has been established that SFO neurons maintain a resting membrane potential in the range of -57 to -65 mV.
Sensory organs:
Function:
The subfornical organ is active in many bodily processes including, but not limited to, osmoregulation, cardiovascular regulation, and energy homeostasis. In a study by Ferguson, both hyper- and hypotonic stimuli facilitated an osmotic response. This observation demonstrated that the SFO is involved in the maintenance of blood pressure. Featuring an AT1 receptor for ANG, SFO neurons demonstrate an excitatory response when activated by ANG, therefore increasing blood pressure. The induction of the drinking response via the SFO can be antagonized, however, by the peptide ANP. Additional research has demonstrated that the subfornical organ may be an important intermediary through which leptin acts to maintain blood pressure within normal physiological limits via descending autonomic pathways associated with cardiovascular control.

Recent research has focused on the subfornical organ as an area particularly important in the regulation of energy. The observation that subfornical neurons respond to a wide range of circulating energy balance signals, and that electrical stimulation of the SFO in rats resulted in food intake, supports the SFO's importance in energy homeostasis. Additionally, it is assumed that the SFO is the lone forebrain structure capable of constant monitoring of circulating concentrations of glucose. This responsiveness to glucose again serves to solidify the SFO's integral role as a regulator of energy homeostasis.
Secretory organs:
Subcommissural organ:
Anatomy:
The subcommissural organ (SCO) is a small secretory organ located on the ventral surface of the posterior commissure near the anterior entrance of the cerebral aqueduct. It differs from other CVOs in that it does not have highly permeable capillaries. Its role as a neuroendocrine structure associated with the ventricular system qualifies it for classification as a CVO. Related to its secretory function, the SCO is partially composed of ependymal cells. These ependymocytes are characterized by elongated cell bodies that contain secretory materials and are covered in cilia. The most prominent of these materials is the glycoprotein SCO-spondin.
Secretory organs:
Function:
One function of the SCO is the secretion of the glycoprotein SCO-spondin, which is released into the third ventricle, where it aggregates to form Reissner's fiber (RF). Reissner's fiber is a long fibrous projection that travels caudally through the Sylvian aqueduct and terminates in the spinal cord. This fiber is thought to contribute to the maintenance of the patency of the Sylvian aqueduct.

While the function of the subcommissural organ remains under investigation, it may be part of the mechanism of aldosterone secretion and CSF detoxification, along with osmoregulation. The SCO is innervated by many systems, the most common of which is the serotonergic system, which influences water and sodium intake. During water deprivation, the serotonergic system also reduces its innervation to the SCO. This reduction of input to the SCO causes a marked decrease in RF production. This finding implies that the subcommissural organ and its associated Reissner's fiber are integral parts of fluid electrolyte balance and water homeostasis.
Secretory organs:
Pituitary neural lobe:
The pituitary gland is subdivided into lobes: the anterior pituitary (adenohypophysis), the intermediate pituitary, and the posterior pituitary (neurohypophysis, or neural lobe). Each one functions as a separate endocrine organ. The pituitary neural lobe consists of axonal projections that directly extend from cell bodies in the hypothalamus through the infundibulum. Under neurohumoral control, it secretes oxytocin and vasopressin, thereby qualifying it as a circumventricular organ with both neural and secretory functions.

The anterior pituitary contains non-neural secretory cells derived from oral ectoderm which are indirectly controlled by "releasing hormones" from the median eminence of the hypothalamus, through the hypophyseal portal circulation.
Secretory organs:
The intermediate lobe (also called the pars intermedia) synthesizes and secretes a melanocyte-stimulating hormone under neural control by the hypothalamus. It is not commonly included among circumventricular organs. The pituitary gland is located in the sella turcica of the sphenoid bone at the base of the skull.
Secretory organs:
Median eminence:
The median eminence (ME) is located in the inferior portion of the hypothalamus, ventral to the third ventricle. While some publications do not list the ME as a CVO, when it is considered to be a circumventricular organ, it is classified as a secretory organ. The median eminence is rich in fenestrated capillaries, allowing for the passage of proteins and neurohormones. More specifically, the median eminence allows for the transport of neurohormones between the CSF and the peripheral blood supply. The major cell type that makes up the median eminence is the specialized ependymal cell known as the tanycyte. These contribute to the organ's ability to selectively allow macromolecules to pass from the central to the peripheral neuroendocrine systems. Ventromedial subregions of the bilateral hypothalamic arcuate nucleus display relatively high capillary permeability, indicating this nucleus may have moment-to-moment regulatory roles for sensing and neurally conveying hormonal signals.

Tanycytes line the floor of the third ventricle and can be characterized by a singular long projection that delves deep inside the hypothalamus. Tanycytes have been evolutionarily linked to radial glial cells of the central nervous system. The tanycytes of the median eminence are often found along the fenestrated peripheral capillaries. They are tightly packed on the capillaries, forming a seal between the third ventricle and the median eminence. This seal can be attributed to the tight junctions observed between tanycytes and functions to restrict the travel of molecules between the median eminence and the third ventricle. The median eminence is also closely linked to the transport of GnRH between the median eminence and the anterior pituitary. Neuronal projections of GnRH neurons actually end at the median eminence, allowing for its release into the portal blood system.
Secretory organs:
Pineal gland:
Gross anatomy:
The morphology of the pineal gland varies greatly among mammals. The most commonly used classification for this gland takes into account its location relative to the diencephalon and the third ventricle of the brain, as well as its size and shape. Under these conditions, the human pineal gland is classified as type A. A type A pineal gland rests proximally to the posterior section of the diencephalon. It is located within 1-2 mm of the midline of the brain.
Secretory organs:
The pineal gland starts to develop during the second month of gestation. In the average adult, its dimensions are as follows: 5-9 mm in length, 1-5 mm in width and 3-5 mm in thickness. Its average weight is 100–180 mg.
The pineal gland consists of a central core made up of small lobes and a cortex that possesses a diffuse distribution of neurons. The principal cell type of the pineal is the pinealocyte sensu stricto. This type of cell has a prominent nucleus and a granular appearance.
Secretory organs:
Vascularization and innervation:
The level of vascularization in the pineal gland is high. It receives a large supply of blood from branches of the posterior choroidal arteries that derive from cerebral arteries in the posterior mesencephalon. The pineal gland is innervated by fibers from the peripheral parasympathetic and sympathetic systems, in addition to fibers from the central nervous system. The most important set of fibers involved are the unmyelinated postganglionic sympathetic fibers from the superior cervical ganglia, which also form the bilateral nervi conarii. The second set of fibers enters the pineal gland anteriorly via the commissural peduncles. The third set of fibers is myelinated and forms the ventro-lateral pineal tract.
Secretory organs:
Function:
The pineal gland is considered a secretory organ, and its activity shows circadian oscillations. Its main function, secretion of the hormone melatonin, ceases when there is no input from the primary circadian pacemaker in the suprachiasmatic nuclei. Melatonin production is controlled by this circadian timing and is suppressed by light. Pineal tumors can affect sexual development, but the mechanism has yet to be established.
Secretory organs:
Other pineal substances:
Other peptides aside from melatonin have been detected in the pineal gland. They are most likely associated with a type of innervation deemed "pineal peptidergic innervation". These include vasopressin, oxytocin, VIP, NPY, peptide histidine isoleucine, calcitonin gene-related peptide, substance P and somatostatin.
**Impact crater lake**
Impact crater lake:
An impact crater lake is a lake inside a depression caused by the impact of a meteor. It is also known as an annular lake in cases where the water body is shaped like a ring, as many impact crater lakes are.
Examples:
One of the largest impact crater lakes is Lake Manicouagan in Canada; the crater is a multiple-ring structure about 100 km (60 mi) across, with its 70 km (40 mi) diameter inner ring its most prominent feature; it contains a 70 km (40 mi) diameter annular lake, surrounding an inner island plateau, René-Levasseur Island. It is Earth's sixth-largest confirmed impact crater according to rim-to-rim diameter.
**Robot-assisted surgery**
Robot-assisted surgery:
Robot-assisted surgery, or robotic surgery, refers to surgical procedures that are performed using robotic systems. Robotically assisted surgery was developed to try to overcome the limitations of pre-existing minimally invasive surgical procedures and to enhance the capabilities of surgeons performing open surgery.
In the case of robotically assisted minimally invasive surgery, instead of the surgeon directly moving the instruments, the surgeon uses one of two methods to perform dissection, hemostasis and resection: a direct telemanipulator or computer control.
Robot-assisted surgery:
A telemanipulator (e.g. the da Vinci Surgical System) is a system of remotely controlled manipulators that allows the surgeon to operate in real time under stereoscopic vision from a control console separate from the operating table. The robot is docked next to the patient, and robotic arms carry out endoscopy-like maneuvers via end-effectors inserted through specially designed trocars. A surgical assistant and a scrub nurse are often still needed, scrubbed in at the tableside, to help switch effector instruments or provide additional suction or temporary tissue retraction using endoscopic grasping instruments.
Robot-assisted surgery:
In computer-controlled systems, the surgeon uses a computer system to relay control data and direct the robotic arms and their end-effectors, though these systems can also still use telemanipulators for their input. One advantage of the computerized method is that the surgeon does not have to be present on site to perform the procedure, opening the possibility of remote surgery and even AI-assisted or automated procedures. Memory devices also play a role in robot-assisted surgery: on-board storage can hold patient-specific data and record information such as calibration offsets that indicate misalignment of the drive system.
Robot-assisted surgery:
Robotic surgery has been criticized for its expense, with average costs in 2007 ranging from $5,607 to $45,914 per patient. As of 2019, this technique had not been approved for cancer surgery, as its safety and usefulness remain unclear.
History:
The concept of using standard hand grips to control manipulators and cameras of various sizes down to sub-miniature was described in the Robert Heinlein story 'Waldo' in August 1942, which also mentioned brain surgery.
History:
The first robot to assist in surgery was the Arthrobot, which was developed and used for the first time in Vancouver in 1984. This robot assisted in manipulating and positioning the patient's leg on voice command. Intimately involved were biomedical engineer James McEwen, Geof Auchinleck, a UBC engineering physics grad, and Dr. Brian Day, as well as a team of engineering students. The robot was used in an orthopaedic surgical procedure on 12 March 1984, at the UBC Hospital in Vancouver. Over 60 arthroscopic surgical procedures were performed in the first 12 months, and a 1985 National Geographic video on industrial robots, The Robotics Revolution, featured the device. Other related robotic devices developed at the same time included a surgical scrub nurse robot, which handed operative instruments on voice command, and a medical laboratory robotic arm. A YouTube video entitled Arthrobot – the world's first surgical robot illustrates some of these in operation.

In 1985 a robot, the Unimation Puma 200, was used to orient a needle for a brain biopsy while under CT guidance during a neurological procedure. In the late 1980s, Imperial College in London developed PROBOT, which was then used to perform prostatic surgery. The advantages of this robot were its small size, accuracy and lack of fatigue for the surgeon. In the 1990s, computer-controlled surgical devices began to emerge, enabling greater precision and control in surgical procedures. One of the most significant advancements in this period was the da Vinci Surgical System, which was approved by the FDA for use in surgical procedures in 2000 (Intuitive Surgical, 2021). The da Vinci system uses robotic arms to manipulate surgical instruments, allowing surgeons to perform complex procedures with greater accuracy and control. In 1992, the ROBODOC was introduced and revolutionized orthopedic surgery by being able to assist with hip replacement surgeries; it received FDA approval in 2008. The ROBODOC from Integrated Surgical Systems (working closely with IBM) could mill out precise fittings in the femur for hip replacement. The purpose of the ROBODOC was to replace the previous method of carving out a femur for an implant, the use of a mallet and broach/rasp.
History:
Further development of robotic systems was carried out by SRI International and Intuitive Surgical with the introduction of the da Vinci Surgical System, and by Computer Motion with the AESOP and the ZEUS robotic surgical system. The first robotic surgery took place at The Ohio State University Medical Center in Columbus, Ohio under the direction of Robert E. Michler.

AESOP was a breakthrough in robotic surgery when introduced in 1994, as it was the first laparoscopic camera holder to be approved by the FDA. NASA initially funded the company that produced AESOP, Computer Motion, due to its goal of creating a robotic arm that could be used in space, but this project ended up becoming a camera used in laparoscopic procedures. Voice control was added in 1996 with the AESOP 2000, and seven degrees of freedom to mimic a human hand were added in 1998 with the AESOP 3000.

ZEUS was introduced commercially in 1998, and started the idea of telerobotics or telepresence surgery, where the surgeon operates on the patient from a console at a distance from the robot. ZEUS was first used during a gynecological surgery in 1997 to reconnect Fallopian tubes in Cleveland, Ohio; a beating-heart coronary artery bypass graft in October 1999; and the Lindbergh Operation, a cholecystectomy performed remotely in September 2001. In 2003, ZEUS made its most prominent mark in cardiac surgery after successfully harvesting the left internal mammary arteries in 19 patients, all of which had very successful clinical outcomes.

The original telesurgery robotic system that the da Vinci was based on was developed at Stanford Research Institute International in Menlo Park with grant support from DARPA and NASA. A demonstration of an open bowel anastomosis was given to the Association of Military Surgeons of the US. Although the telesurgical robot was originally intended to facilitate remotely performed surgery in the battlefield to reduce casualties and to be used in other remote environments, it turned out to be more useful for minimally invasive on-site surgery. The patents for the early prototype were sold to Intuitive Surgical in Mountain View, California. The da Vinci senses the surgeon's hand movements and translates them electronically into scaled-down micro-movements to manipulate the tiny proprietary instruments. It also detects and filters out any tremors in the surgeon's hand movements, so that they are not duplicated robotically. The camera used in the system provides a true stereoscopic picture transmitted to a surgeon's console. Compared to the ZEUS, the da Vinci robot is attached by trocars to the surgical table, and can imitate the human wrist. In 2000, the da Vinci obtained FDA approval for general laparoscopic procedures and became the first operative surgical robot in the US. Examples of uses of the da Vinci system include the first robotically assisted heart bypass (performed in Germany) in May 1998; the first performed in the United States, in September 1999; and the first all-robotic-assisted kidney transplant, performed in January 2009. The da Vinci Si was released in April 2009 and initially sold for $1.75 million.

In 2004, MAKO Surgical was founded by Rony Abovitz and other key members of its predecessor, Z-KAT, Inc. Z-KAT was founded in 1997 by Rony Abovitz, William Tapia, Michael Peshkin Ph.D., Julio Santos-Munne, and Wayne J. Kerness, M.D., and was developing a novel haptic robotic system for medical applications, amongst a wide variety of computer-assisted surgery technologies.
Z-KAT's initial haptic robotic arm technology, known as the Whole Arm Manipulator (or WAM Arm), was originally developed at MIT and then at Barrett Technology. Z-KAT's core technology team had adapted the WAM Arm for use as a testbed for surgical procedures.

In 2005, a surgical technique called transoral robotic surgery (TORS) was documented in canine and cadaveric models for the da Vinci robot surgical system, as it was the only FDA-approved robot to perform head and neck surgery. In 2006, three patients underwent resection of the tongue using this technique. The results were clearer visualization of the cranial nerves, lingual nerves, and lingual artery, and the patients had a faster recovery to normal swallowing. In May 2006, the first unassisted robotic surgery conducted by an artificial-intelligence system was performed on a 34-year-old male to correct heart arrhythmia. The results were rated as better than those of an above-average human surgeon. The machine had a database of 10,000 similar operations, and so, in the words of its designers, was "more than qualified to operate on any patient". In August 2007, Dr. Sijo Parekattil of the Robotics Institute and Center for Urology (Winter Haven Hospital and University of Florida) performed the first robotic-assisted microsurgery procedure, denervation of the spermatic cord for chronic testicular pain. In February 2008, Dr. Mohan S. Gundeti of the University of Chicago Comer Children's Hospital performed the first robotic pediatric neurogenic bladder reconstruction.

On 12 May 2008, the first image-guided MR-compatible robotic neurosurgical procedure was performed at the University of Calgary by Dr. Garnette Sutherland using the NeuroArm. In June 2008, the German Aerospace Centre (DLR) presented a robotic system for minimally invasive surgery, the MiroSurge. In September 2010, the Eindhoven University of Technology announced the development of the Sofie surgical system, the first surgical robot to employ force feedback. In September 2010, the first robotic operation at the femoral vasculature was performed at the University Medical Centre Ljubljana by a team led by Borut Geršak.

In 2019 the Versius Surgical Robotic System was launched; it is a rival of the da Vinci surgical system and claims to be more flexible and versatile, having independent modular arms which are "quick and easy to set up". The small-scale design means that it is suitable for virtually any operating room and can be operated from either a standing or a sitting position.
Uses:
Ophthalmology:
Ophthalmology is still part of the frontier for robot-assisted surgery; however, a couple of robotic systems are capable of successfully performing ophthalmic procedures.
Uses:
The PRECEYES Surgical System is being used for vitreoretinal surgeries. It is a single-arm robot that is telemanipulated by a surgeon. The system attaches to the head of the operating room table and provides surgeons with increased precision with the help of an intuitive motion controller. Preceyes is the only robotic instrument to be CE certified. Some other companies, such as Forsight Robotics, Acusurgical (France), which raised €5.75 million, and Horizon (US), are working in this field.
Uses:
The da Vinci Surgical System, though not specifically designed for ophthalmic procedures, uses telemanipulation to perform pterygium repairs and ex-vivo corneal surgeries.
Uses:
Heart:
Some examples of heart surgery being assisted by robotic surgery systems include: atrial septal defect repair (the repair of a hole between the two upper chambers of the heart), mitral valve repair (the repair of the valve that prevents blood from regurgitating back into the upper heart chambers during contractions of the heart), and coronary artery bypass (rerouting of blood supply by bypassing blocked arteries that provide blood to the heart).
Uses:
Thoracic:
Robotic surgery has become more widespread in thoracic surgery for mediastinal pathologies, pulmonary pathologies and, more recently, complex esophageal surgery. The da Vinci Xi system is used for lung and mediastinal mass resection. This minimally invasive approach is a comparable alternative to video-assisted thoracoscopic surgery (VATS) and standard open thoracic surgery. Although VATS is the less expensive option, the robotic-assisted approach offers benefits such as 3D visualization with seven degrees of freedom and improved dexterity, while having equivalent perioperative outcomes.
Uses:
ENT:
The first successful robot-assisted cochlear implantation in a person took place in Bern, Switzerland in 2017. Surgical robots have been developed for use at various stages of cochlear implantation, including drilling through the mastoid bone, accessing the inner ear and inserting the electrode into the cochlea.

Advantages of robot-assisted cochlear implantation include improved accuracy, resulting in fewer mistakes during electrode insertion and better hearing outcomes for patients. The surgeon uses image-guided surgical planning to program the robot based on the patient's individual anatomy. This helps the implant team to predict where the contacts of the electrode array will be located within the cochlea, which can assist with audio processor fitting post-surgery. The surgical robots also allow surgeons to reach the inner ear in a minimally invasive way. Challenges that still need to be addressed include safety, time, efficiency and cost. Surgical robots have also been shown to be useful for electrode insertion with pediatric patients.
Uses:
Gastrointestinal Multiple types of procedures have been performed with either the Zeus or da Vinci robot systems, including bariatric surgery and gastrectomy for cancer. Surgeons at various universities initially published case series demonstrating different techniques and the feasibility of GI surgery using the robotic devices. Specific procedures have been more fully evaluated, notably esophageal fundoplication for the treatment of gastroesophageal reflux and Heller myotomy for the treatment of achalasia. Robot-assisted pancreatectomies have been found to be associated with "longer operating time, lower estimated blood loss, a higher spleen-preservation rate, and shorter hospital stay[s]" than laparoscopic pancreatectomies; there was "no significant difference in transfusion, conversion to open surgery, overall complications, severe complications, pancreatic fistula, severe pancreatic fistula, ICU stay, total cost, and 30-day mortality between the two groups."
Uses:
Gynecology The first report of robotic surgery in gynecology was published in 1999 from the Cleveland Clinic. The adoption of robotic surgery has contributed to the increase in minimally invasive surgery for gynecologic disease. Gynecologic procedures may take longer with robot-assisted surgery and the rate of complications may be higher, but there are not enough high-quality studies to know at present. In the United States, robot-assisted hysterectomy for benign conditions was shown in 2015 to be more expensive than conventional laparoscopic hysterectomy, with no difference in overall rates of complications. Robotic surgery in gynecology includes the use of the da Vinci surgical system in benign gynecology and gynecologic oncology. Robotic surgery can be used to treat fibroids, abnormal periods, endometriosis, ovarian tumors, uterine prolapse, and female cancers. Using the robotic system, gynecologists can perform hysterectomies, myomectomies, and lymph node biopsies. The Hominis robotic system developed by Momentis Surgical aims to provide a robotic platform for natural orifice transluminal endoscopic surgery (NOTES), enabling myomectomy through the vagina. A 2017 review found that robotic and laparoscopic surgical removal of the uterus and cervix for early cervical cancer resulted in similar outcomes with respect to the cancer.
Uses:
Bone Robots are used in orthopedic surgery. ROBODOC was the first active robotic system to perform some of the surgical actions in a total hip arthroplasty (THA). It is programmed preoperatively using data from computed tomography (CT) scans, which allows the surgeon to choose the optimal size and design for the replacement hip. Acrobot and Rio are semi-active robotic systems used in THA. Each consists of a drill bit that is controlled by the surgeon; the robotic system, however, does not allow any movement outside predetermined boundaries. Mazor X is used in spinal surgeries to assist surgeons with placing pedicle screw instrumentation. Inaccuracy when placing a pedicle screw can result in neurovascular injury or construct failure. Mazor X uses preoperative templating images to locate itself relative to the target location where the pedicle screw is needed.
Uses:
Spine Robotic devices began to be used in minimally invasive spine surgery in the mid-2000s. As of 2014, there were too few randomized clinical trials to judge whether robotic spine surgery is more or less safe than other approaches. As of 2019, the application of robotics in spine surgery had mainly been limited to pedicle screw insertion for spinal fixation. In addition, the majority of studies on robot-assisted spine surgery have investigated lumbar or lumbosacral vertebrae only; studies on the use of robotics for placing screws in the cervical and thoracic vertebrae are limited.
Uses:
Transplant surgery The first fully robotic kidney transplantations were performed in the late 2000s. Robotic surgery may allow kidney transplantation in obese patients who could not otherwise have the procedure, although weight loss is the preferred initial approach.
General surgery Robotic surgery is currently best suited for single-quadrant procedures, in which the operation is confined to one of the four quadrants of the abdomen. Procedures such as cholecystectomy and fundoplication carry cost disadvantages with the robot, but offer suitable opportunities for surgeons to advance their robotic surgery skills.
Uses:
Urology Robotic surgery in the field of urology has become common, especially in the United States. There is inconsistent evidence that its benefits over standard surgery justify the increased costs. Some have found tentative evidence of more complete removal of cancer and fewer side effects from surgery for prostatectomy. In 2000, the first robot-assisted laparoscopic radical prostatectomy was performed. Robotic surgery has also been utilized in radical cystectomies; a 2013 review found fewer complications and better short-term outcomes compared to the open technique.
Uses:
Pediatrics Pediatric procedures also benefit from robotic surgical systems. The smaller abdominal size of pediatric patients limits the viewing field in most urologic procedures; robotic surgical systems help surgeons overcome these limitations. Robotic technology provides assistance in performing: Pyeloplasty – an alternative to the conventional open dismembered pyeloplasty (Anderson-Hynes), and the most common robot-assisted procedure in children.
Ureteral reimplantation – an alternative to open intravesical or extravesical surgery.
Ureteroureterostomy – an alternative to the transperitoneal approach.
Nephrectomy and heminephrectomy – traditionally done with laparoscopy; a robotic procedure is unlikely to offer a significant advantage because of its high cost.
Comparison to traditional methods:
Major advances aided by surgical robots have been remote surgery, minimally invasive surgery, and unmanned surgery. Robotic assistance allows surgery to be done with greater precision, miniaturized instruments, and smaller incisions, with decreased blood loss, less pain, and quicker healing time. Articulation beyond normal hand manipulation and three-dimensional magnification also improve ergonomics. Together, these techniques reduce the duration of hospital stays, blood loss, transfusions, and the use of pain medication.
Comparison to traditional methods:
The existing open surgery technique has many drawbacks, such as limited access to the surgical area, long recovery time, long hours of operation, blood loss, and surgical scars and marks. A robot costs from $1 million to $2.5 million per unit, and while its disposable supply cost is normally $1,500 per procedure, the overall cost of the procedure is higher. Additional surgical training is needed to operate the system. Numerous feasibility studies have been done to determine whether the purchase of such systems is worthwhile; as it stands, opinions differ dramatically. Surgeons report that, although the manufacturers of such systems provide training on this new technology, the learning phase is intensive and surgeons must perform 150 to 250 procedures to become adept in their use. During the training phase, minimally invasive operations can take up to twice as long as traditional surgery, leading to operating room tie-ups and surgical staffs keeping patients under anesthesia for longer periods. Patient surveys indicate they chose the procedure based on expectations of decreased morbidity, improved outcomes, reduced blood loss, and less pain. Higher expectations may explain higher rates of dissatisfaction and regret.
Compared with other minimally invasive surgery approaches, robot-assisted surgery gives the surgeon better control over the surgical instruments and a better view of the surgical site. In addition, surgeons no longer have to stand throughout the surgery and do not tire as quickly. Naturally occurring hand tremors are filtered out by the robot's computer software. Finally, the surgical robot can be used continuously by rotating surgery teams. Laparoscopic camera positioning is also significantly steadier, with fewer inadvertent movements, under robotic control than with human assistance.
There are some issues with current robotic surgery usage in clinical applications. Some robotic systems currently in clinical use lack haptics, meaning there is no force or touch feedback: no interaction between the instrument and the patient is felt. However, the Senhance robotic system by Asensus Surgical was recently developed with haptic feedback in order to improve the interaction between the surgeon and the tissue. The robots can also be very large, have instrumentation limitations, and there may be issues with multi-quadrant surgery, as current devices are used solely for single-quadrant applications.
Critics of the system, including the American Congress of Obstetricians and Gynecologists, say there is a steep learning curve for surgeons who adopt it and a lack of studies indicating that long-term results are superior to results following traditional laparoscopic surgery. Articles in the newly created Journal of Robotic Surgery tend to report on one surgeon's experience.
Complications related to robotic surgeries range from conversion of the surgery to open, re-operation, permanent injury, damage to viscera, and nerve damage. From 2000 to 2011, out of 75 hysterectomies done with robotic surgery, 34 had permanent injury and 49 had damage to the viscera. Prostatectomies were more prone to permanent injury, nerve damage, and visceral damage as well. Very few surgeries across a variety of specialties actually had to be converted to open or re-operated on, but most reported cases did sustain some kind of damage or injury. For example, out of seven coronary artery bypass graftings, one patient had to undergo re-operation.
It is important that complications are captured, reported, and evaluated to ensure the medical community is better educated on the safety of this new technology. If something goes wrong in a robot-assisted surgery, it is difficult to identify culpability, and the safety record of the practice will influence how quickly and how widely it is adopted. One drawback of robotic surgery is the risk of mechanical failure of the system and instruments. A study from July 2005 to December 2008 analyzed the mechanical failures of the da Vinci Surgical System at a single institute. During this period, a total of 1,797 robotic surgeries were performed using four da Vinci surgical systems. There were 43 cases (2.4%) of mechanical failure, including 24 (1.3%) cases of system failure or malfunction and 19 (1.1%) cases of instrument malfunction. Additionally, one open and two laparoscopic conversions (0.17%) were performed. The chance of mechanical failure or malfunction was therefore found to be rare, with the rate of conversion to an open or laparoscopic procedure very low. There are also current methods of robotic surgery being marketed and advertised online. Removal of a cancerous prostate has been a popular treatment promoted through internet marketing. Internet marketing of medical devices is more loosely regulated than pharmaceutical promotion. Many sites that claim the benefits of this type of procedure fail to mention risks and provide unsupported evidence; there is a need for government and medical societies to promote the production of balanced educational material. In the US alone, many websites promoting robotic surgery fail to mention any risks associated with these types of procedures, and hospitals providing materials largely ignore risks, overestimate benefits, and are strongly influenced by the manufacturer.
**SystemVerilog DPI**
SystemVerilog DPI:
SystemVerilog DPI (Direct Programming Interface) is an interface that can be used to connect SystemVerilog with foreign languages. These foreign languages can be C, C++, and SystemC, as well as others. The DPI consists of two layers: a SystemVerilog layer and a foreign-language layer, which are isolated from each other.
Explanation:
The Direct Programming Interface (DPI) allows direct inter-language function calls between SystemVerilog and a foreign language. Functions implemented in the foreign language can be called from SystemVerilog; such functions are called imported functions. Similarly, functions implemented in SystemVerilog can be called from the foreign language (C/C++ or SystemC); such functions are called exported functions. The DPI allows the transfer of data between the two domains through function arguments and return values.
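As a minimal sketch of the two directions (the function names here are hypothetical, not from the source), an import and an export declaration look like this:

```systemverilog
// Import: c_add is implemented in C and called from SystemVerilog.
import "DPI-C" function int c_add (input int a, input int b);

// Export: sv_log is implemented in SystemVerilog and callable from C.
export "DPI-C" function sv_log;

function void sv_log (input int code);
  $display("status code = %0d", code);
endfunction
```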
Function import and export:
Function import: A function implemented in a foreign language can be used in SystemVerilog by importing it. A foreign-language function used in SystemVerilog is called an imported function.
Properties of imported function and task:
An imported function shall complete its execution instantly and consume zero simulation time. An imported task can consume time.
An imported function can have input, output, and inout arguments.
The formal input arguments shall not be modified. If such arguments are changed within the function, the changes shall not be visible outside the function.
An imported function shall not assume anything about the initial values of formal output arguments. The initial value of output arguments is undetermined and implementation dependent.
An imported function can access the initial value of a formal inout argument. Changes that the imported function makes to a formal inout argument shall be visible outside the function.
An imported function shall not free memory allocated by SystemVerilog code, nor expect SystemVerilog code to free memory allocated by foreign code (or the foreign compiler).
A call to an imported task can result in the suspension of the currently executing thread. This occurs when an imported task calls an exported task, and the exported task executes a delay control, event control, or wait statement. Thus it is possible for an imported task to be simultaneously active in multiple execution threads.
An imported function or task can be equipped with special properties, called pure or context.
Pure and context tasks and functions:
Pure functions A function whose result depends solely on the values of its input arguments, and which has no side effects, is called a pure function.
Properties of pure functions Only non-void functions with no output or inout arguments can be specified as pure.
Functions specified as pure shall have no side effects; their results must depend solely on the values of their input arguments.
A pure function call can be safely eliminated if its result is not needed, or if its result for the same values of the input arguments is available for reuse without needing to recalculate.
A pure function is assumed not to do any of the following, directly or indirectly: perform any file operation;
read or write anything in an environment variable, shared memory, sockets, etc.;
access any persistent data, such as global or static variables.
An imported task can never be declared pure.
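The declaration syntax is a single qualifier; a minimal sketch (the math function and its C-side implementation are assumed here, not taken from the source):

```systemverilog
// A pure import: the result depends only on the input value, so the
// simulator is free to cache results or eliminate unused calls.
import "DPI-C" pure function real c_sine (input real angle);
```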
Context tasks and functions An imported task or function that calls exported tasks or functions, or accesses SystemVerilog data objects other than its actual arguments, is called a context task or function.
Properties of context tasks and functions A context imported task or function can access (read or write) any SystemVerilog data object by calling PLI/VPI routines or by calling an exported task or function. Therefore, a call to a context task or function is a barrier for SystemVerilog compiler optimizations.
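Context imports are declared analogously to pure ones (again a hypothetical name, shown only for the syntax):

```systemverilog
// A context import: c_monitor may call back into exported SystemVerilog
// tasks/functions, so the simulator must preserve the calling context.
import "DPI-C" context function void c_monitor (input int value);
```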
Calling Unix functions:
SystemVerilog code can call Unix functions directly by importing them, with no need for a wrapper.
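For instance (assuming the simulator links against the C runtime, as is typical), standard C library functions can be imported with matching signatures:

```systemverilog
// Direct imports of C library functions -- no wrapper code required.
import "DPI-C" function int abs (input int n);   // from <stdlib.h>
import "DPI-C" function int rand ();             // from <stdlib.h>

module tb;
  initial $display("abs(-5) = %0d, rand() = %0d", abs(-5), rand());
endmodule
```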
DPI example:
Calling C functions in SystemVerilog requires two files: a C code file containing the foreign function, and a SystemVerilog code file that imports and calls it.
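The original example did not survive extraction, so the following minimal pair of files is a sketch with hypothetical names (`add_and_print`, `tb.sv`), not the document's own listing:

```c
/* add_and_print.c -- the C side of the DPI example */
#include <stdio.h>
#include "svdpi.h"   /* standard DPI header shipped with simulators */

int add_and_print(int a, int b)
{
    int sum = a + b;
    printf("C side: %d + %d = %d\n", a, b, sum);
    return sum;
}
```

```systemverilog
// tb.sv -- the SystemVerilog side of the DPI example
import "DPI-C" function int add_and_print (input int a, input int b);

module tb;
  int result;
  initial begin
    result = add_and_print(2, 3);   // direct call into the C function
    $display("SV side: result = %0d", result);
  end
endmodule
```

Compiled together (for example, by passing both files to the simulator), the C function executes as if it were a native SystemVerilog function.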
**Gaiter (vehicle)**
Gaiter (vehicle):
On a vehicle, a gaiter or boot refers to a protective flexible sleeve covering a moving part, intended to keep the part clean.
On motorcycles and bicycles:
Gaiters are pleated rubber tubes enclosing the front suspension tubes of some motorcycles and mountain bikes with telescopic front forks. Gaiters protect the sliding parts of the front suspension from dirt and water.
On cars and other vehicles:
Similar gaiters to those described above find multiple uses on most vehicles. They are used at both ends of driveshafts, protecting constant-velocity joints from the ingress of dirt and retaining the grease. They also prevent the ingress of dirt where one component slides within another, for example on suspension struts or the ends of steering racks. Finally, they are also usually used to perform the same function on ball joints, which appear on suspension wishbones and steering tie rod ends. The gear stick gaiter resists dirt entering the ball joint at the bottom of the stick and keeps oil or grease from the joint away from passengers. Gaiters are commonly leather, faux leather, rubber, or a waterproof cloth.
**Graphical Network Simulator-3**
Graphical Network Simulator-3:
Graphical Network Simulator-3 (shortened to GNS3) is a network software emulator first released in 2008. It allows the combination of virtual and real devices, used to simulate complex networks. It uses Dynamips emulation software to simulate Cisco IOS. GNS3 is used by many large companies including Exxon, Walmart, AT&T and NASA, and is also popular for preparation of network professional certification exams. As of 2015, the software has been downloaded 11 million times.
**History of smart antennas**
History of smart antennas:
The first smart antennas were developed for military communications and intelligence gathering. The growth of cellular telephone in the 1980s attracted interest in commercial applications. The upgrade to digital radio technology in the mobile phone, indoor wireless network, and satellite broadcasting industries created new opportunities for smart antennas in the 1990s, culminating in the development of the MIMO (multiple-input multiple-output) technology used in 4G wireless networks.
Directional antennas:
The earliest success at tracking and controlling wireless signals relied on the antennas’ physical configuration and motion. The German inventor and physicist Karl F. Braun demonstrated beamforming for the first time in 1905. Braun created a phased array by positioning three antennas to reinforce radiation in one direction and diminish radiation in other directions. Guglielmo Marconi experimented with directional antennas in 1906.
Directional antennas:
Directional antennas were rotated to detect and track enemy forces during World War I. The British admiralty used goniometers (radio compasses) to track the German fleet. Edwin H. Armstrong invented the superheterodyne receiver to detect the high frequency noise generated by German warplanes’ ignition systems. The war ended before Armstrong's creation was ready to help direct antiaircraft fire.
Directional antennas:
Multiple elements (a fed dipole, a director, and reflectors) were assembled in the 1920s to create narrow transmit and receive antenna patterns. The Yagi-Uda array, better known as the Yagi antenna, is still widely used. Edmond Bruce and Harald T. Friis developed directional antennas for shortwave and microwave frequencies during the 1930s.
AT&T's decision to use microwave to carry inter-city telephone traffic led to the first large-scale commercial deployment of directional antennas (based on Friis' horn reflector design) in 1947. Directional antennas with alternating polarization enabled a single pair of frequencies to be reused over many consecutive hops. Microwave links are less expensive to deploy and maintain than coaxial cable links.
Phased array radar:
The first mechanically scanned phased array radar (using a rotating Yagi antenna) was demonstrated in the 1930s. The first electronically scanned radars used electromechanical devices (such as mechanical tuners or switches) to steer the antenna's beam.
Germany built the Wullenweber circular array for direction finding during the early years of World War II. The Wullenweber could electronically scan the horizon 360° and determine the direction of any signal with reasonably good accuracy. Circular arrays were enhanced during the Cold War for eavesdropping purposes.
Phased array radar:
The American physicist Luis Walter Alvarez developed the first ground-controlled approach (GCA) system for landing aircraft in bad weather, based on an electronically steered microwave phased array antenna. Alvarez tested and deployed the system in England in 1943. Near the end of the war, Germany's GEMA built an early warning phased array radar system (the PESA Mammut 1) to detect targets up to 300 km away. The polyrod fire control antenna was developed by Bell Laboratories in 1947, using cascaded phase shifters controlled by a rotary switch (spinning at ten revolutions per second) to create a continuously scanning beam.
A major push to meet national security response time and coverage requirements called for the development of an all-electronic steerable planar phased array radar. The USSR's launch of Sputnik in 1957 suggested the need for ground-based satellite surveillance systems. Bendix Corporation responded by building its Electronically Steerable Array Radar (ESAR) in 1960. Enhanced beamforming techniques, such as multiple-beam Butler matrices, were developed for detecting and tracking objects in space.
The launch of Explorer 1 by the United States in 1958 suggested another application: space-based radar systems for detecting and tracking aircraft, ships, armored vehicles, ballistic missiles, and cruise missiles. These systems required the development of special techniques for canceling the radar clutter seen from space, nulling ground-based jammers, and compensating for Doppler shifts experienced by fast-moving satellites.
Space-based radar systems spurred the development of smaller, lighter, and less costly components: monolithic microwave integrated circuits (MMICs) for operation at frequencies in the 1 GHz to 30 GHz (microwave) and 30 GHz to 300 GHz (millimeter wave) ranges. The high power levels needed for detection are easier to achieve at microwave frequencies, while the narrow beams required for high-resolution target tracking are best achieved at millimeter wave frequencies. Companies such as Texas Instruments, Raytheon, RCA, Westinghouse, General Electric, and Hughes Electronics participated in the early development of MMICs.
The first all-solid-state radar was built for the United States Marines in 1972 by General Electric. It was a mobile 3-D radar system with its array mounted on a rotating platform for scanning the horizon. The first all-solid-state phased array radar was the PAVE PAWS (precision acquisition vehicle entry - phased array warning system) UHF radar built in 1978 for the United States Air Force.
Phased array radar:
Phased array antennas are also used in radio astronomy. Karl Jansky, discoverer of the radio waves emanating from the Milky Way galaxy, used a Bruce array for experiments he conducted in 1931. Modern phased array radio telescopes typically consist of a number of small, interconnected antennas such as the Murchison Widefield Array in Australia, constructed in 2012.
Adaptive antenna arrays:
L. C. van Atta was first to describe a retrodirective antenna, which redirects (rather than reflects) a signal back in the direction from which it came, in his 1959 patent. The signal can be modulated by the redirecting host for purposes such as radio-frequency identification and traffic control (radar target echo enhancement).
Adaptive antenna arrays:
The first adaptive array, the side-lobe canceller, was developed by Paul Howells and Sid Applebaum at General Electric in 1959 to suppress radar jamming signals. Building on Norbert Wiener’s work with analog filters, in 1960 Stanford University professor Bernard Widrow and PhD student Ted Hoff developed the least mean squares (LMS) algorithm that automatically adjusts an antenna's directivity pattern to reinforce desired signals.
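The source does not spell the algorithm out, but in its standard textbook form (not quoted from Widrow and Hoff) LMS adjusts the array weight vector at each sample according to the output error:

$$ e(n) = d(n) - \mathbf{w}^{H}(n)\,\mathbf{x}(n), \qquad \mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,\mathbf{x}(n)\,e^{*}(n) $$

where $\mathbf{x}(n)$ is the vector of antenna element outputs, $d(n)$ a reference (desired) signal, and $\mu$ a small step size controlling the adaptation rate.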
Adaptive antenna arrays:
Ted Compton at Ohio State University developed an adaptive antenna technique for recovering direct sequence spread spectrum signals in the presence of narrowband co-channel interference. Compton's method, reported in 1974, only requires knowledge of the desired signal's pseudorandom noise (PN) code—not its direction of arrival. In the late 1970s, Kesh Bakhru and Don Torrieri developed the maximin algorithm for recovering frequency hopping signals in the presence of narrowband co-channel interference.
Adaptive antenna arrays:
A 1977 paper by Bell Labs researchers Douglas O. Reudink and Yu S. Yeh described the advantages of scanning spot beams for satellites. The authors estimated that scanning spot beams could save 20 dB in link budget which in turn could be used to reduce transmit power, increase communication capacity, and decrease the size of earth-station antennas. Satellite spot beams are used today by direct broadcast satellite systems such as DirecTV and Dish Network.
Adaptive antenna arrays:
The Strategic Defense Initiative (SDI), proposed in 1983, became a major source of funding for technology research in several areas. The algorithms developed to track intercontinental ballistic missiles and direct x-ray laser weapons were particularly relevant to smart antennas.
Digital antenna arrays:
These are antenna arrays with multichannel digital beamforming, usually implemented using the fast Fourier transform (FFT).
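As a brief illustration of why the FFT appears here (a standard identity, not a claim from the source): for an $N$-element uniform linear array with element samples $x_n$, an $N$-point DFT

$$ y_m = \sum_{n=0}^{N-1} x_n\, e^{-j 2\pi m n / N}, \qquad m = 0, \dots, N-1 $$

forms $N$ simultaneous beams, each output $y_m$ corresponding to a beam steered toward a distinct direction; the FFT computes all $N$ beam outputs in $O(N \log N)$ operations.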
Digital antenna arrays:
The theory of digital antenna arrays (DAA) began to emerge as a theory of multichannel estimation. Its origins go back to methods developed in the 1920s for determining the direction of arrival of radio signals with a set of two antennas, based on the phase difference or the amplitudes of their output voltages. Thus the direction of arrival of a single signal was estimated from pointer-type indicator readings or from the Lissajous curves drawn by the beam on an oscilloscope screen.
In the late 1940s this approach led to the theory of three-channel antenna analyzers, which solved the problem of separating the signal of an air target from its "antipode" reflected off the underlying surface by solving a system of equations formed from the complex voltages of the three-channel signal mixture.
The growing complexity of such radar problems, as well as the need for effective signal processing, had by the end of the 1950s predetermined the use of electronic computers in this field. For example, in 1957 Ben S. Melton and Leslie F. Bailey published an influential article proposing ways to implement algebraic operations for signal processing with electronic circuits or their equivalents, with the aim of developing a signal correlator on the basis of an analog computer.
Three years later, in 1960, analog computing gave way to digital technology in the form of high-speed computers applied to direction-finding problems, initially to locate earthquake epicenters. B. A. Bolt was one of the first to implement this idea in practice, developing a program for the IBM 704 for seismic direction finding based on the method of least squares. Almost simultaneously a similar approach was used by Flinn, a research fellow at the Australian National University.
Although in these experiments the interface between the sensors and the computer was implemented with data-input punch cards, this solution was a decisive step on the way to the DAA. It then remained only to feed the digital data from the sensing elements directly into the computer, eliminating the preparation of punch cards and operator assistance as a superfluous link.
In the former USSR, B. I. Polikarpov was apparently the first to draw attention to the potential of multichannel analyzers; he showed the principal possibility of resolving signal sources separated by an angular distance less than the aperture angle of the antenna system. However, a specific solution to the problem of super-Rayleigh resolution of emission sources was proposed only in 1962 by V. A. Varyukhin and M. A. Zablotskiy, who invented a corresponding method for measuring the directions to sources of an electromagnetic field. The method was based on processing the information contained in the distribution of complex voltage amplitudes at the outputs of amplitude, phase, and phase-amplitude multichannel analyzers, and it permitted the determination of the angular coordinates of sources within the width of the main lobe of the receiving antenna system.
Digital antenna arrays:
Varyukhin went on to develop a general theory of multichannel analyzers based on processing the information contained in the distribution of complex voltage amplitudes at the outputs of a digital antenna array. An important milestone in the recognition of his scientific results was the defence of his doctor-of-science dissertation in 1967.
A distinctive feature of the theoretical foundations he developed is the maximal automation of the process of estimating the coordinates and parameters of signals, whereas the alternative approach, based on generating the response function of a seismic multichannel analyzer and assessing its resolution capability by visual impression, was only just emerging at that time. The latter line of work led to the Capon method and later to the multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) methods, and to other projection methods of spectral estimation.
Digital antenna arrays:
It would be unfair to draw conclusions here about the priority and importance of the various alternative scientific approaches in the development of a general theory of the DAA, given the classified nature of most of the work and the limited access to the scientific heritage of that time, even with the Internet. This historical sketch only slightly lifts the veil of time over the true development of the research; its main aim is to indicate the general niche and time frame in which the theory of multichannel analysis was conceived. A detailed presentation of the historical stages of development of the DAA theory deserves standalone consideration.
Advanced processing techniques:
A 1979 paper by Ralph O. Schmidt of Electromagnetic Systems Laboratory (ESL, a supplier of strategic reconnaissance systems) described the multiple signal classification (MUSIC) algorithm for estimating signals’ angle of arrival. Schmidt used a signal subspace method based on geometric modeling to derive a solution assuming the absence of noise and then extended the method to provide a good approximation in the presence of noise. Schmidt's paper became the most cited and his signal subspace method became the focus of ongoing research.
Advanced processing techniques:
Jack Winters showed in 1984 that received signals from multiple antennas can be combined (using the optimum combining technique) to reduce co-channel interference in digital mobile networks. Up to this time, antenna diversity had only been used to mitigate multipath fading. However, digital mobile networks would not become common for another ten years.
Richard Roy developed the Estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm in 1987. ESPRIT is a more efficient and higher resolution algorithm than MUSIC for estimating signals’ angle of arrival.
Advanced processing techniques:
Brian Agee and John Treichler developed the constant modulus algorithm (CMA) for blind equalization of analog FM and telephone signals in 1983. CMA relies on knowledge of the signal's waveform rather than channel state information or training signals. Agee extended the CMA to adaptive antenna arrays over the next few years.
During the 1990s companies such as Applied Signal Technology (AST) developed airborne systems to intercept digital cellular phone calls and text messages for law enforcement and national security purposes. While an airborne system can eavesdrop on a mobile user anywhere in a cellular network, it will receive all mobile stations reusing the same user and control frequencies at roughly the same power level. Adaptive antenna beamforming and interference cancellation techniques are used to focus on the target user. AST was acquired by Raytheon in 2011.
Space-division multiple access (SDMA):
In 1947, Douglas H. Ring wrote a Bell Laboratories internal memorandum describing a new way to increase the capacity of metropolitan radio networks. Ring proposed dividing a city into geographic cells, using low power transmitters with omnidirectional antennas, and reusing frequencies in non-adjacent cells. Ring's cellular radio scheme did not become practical until the arrival of integrated circuits in the 1970s.
Space-division multiple access (SDMA):
As the number of mobile phone subscribers grew in the 1980s and 1990s researchers investigated new ways to increase mobile phone network capacity. Directional antennas were used to divide cells into sectors. In 1989, Simon Swales at Bristol University in the United Kingdom proposed methods for increasing the number of simultaneous users on the same frequency. Receive signals can be distinguished based on differences in their direction-of-arrival at the cell site antenna array. Transmit signals can be aimed at the intended recipient using beamforming. Soren Anderson in Sweden presented a similar scheme based on computer simulations the following year.
Space-division multiple access (SDMA):
Richard Roy and Björn Ottersten at ArrayComm patented a space-division multiple access method for wireless communication systems in the early 1990s. This technology was employed in ArrayComm's IntelliCell product line.
First commercial smart antennas:
Richard Roy and French entrepreneur Arnaud Saffari founded ArrayComm in 1992 and recruited Marty Cooper, who led the Motorola group that developed the first portable cell phone, to head the company. ArrayComm's smart antennas were designed to increase the capacity of wireless networks employing time-division duplex (TDD) such as the PHS (Personal Handy-phone System) networks that were deployed throughout Asia.
First commercial smart antennas:
Bell Labs researcher Douglas O. Reudink founded Metawave Communications, a maker of switched beam antennas for cellular telephone networks, in 1995. Metawave claimed that by focusing capacity on areas with the highest traffic it could boost cell capacity up to 75%. Though Metawave managed to sell switched beam antennas to at least one major carrier, the company went out of business in 2004.
First commercial smart antennas:
In 1997, AT&T Wireless Group announced plans to offer fixed wireless service at speeds up to 512 kbit/s. Project Angel promised non-line of sight (NLOS) coverage using beamforming and orthogonal frequency-division multiplexing (OFDM). Service was launched in ten cities in 2000. However, by 2002 AT&T sold its fixed wireless service business to Netro Corp.
Development of 4G MIMO:
Smart antenna research led to the development of 4G MIMO. Conventional smart antenna techniques (such as diversity and beamforming) deliver incremental gains in spectral efficiency. 4G MIMO exploits natural multipath propagation to multiply spectral efficiency.
Development of 4G MIMO:
Researchers studying the transmission of multiple signals over different wires in the same cable bundle helped create a theoretical foundation for 4G MIMO. Specifically, techniques for cancelling the effects of crosstalk using knowledge of the source signals were investigated. The “wireline MIMO” researchers included Lane H. Brandenburg and Aaron D. Wyner (1974), Wim van Etten (1970s), Jack Salz (1985), and Alexandra Duel-Hallen (1992). Though optimizing the transmission of multiple data streams over different wire pairs in the same bundle requires compensating for crosstalk, the transmission of multiple data streams over different wireless paths due to multipath propagation is a far greater challenge because the signals become mixed up in time, space, and frequency.
Development of 4G MIMO:
Greg Raleigh's 1996 paper was the first to propose a method for multiplying the capacity of point-to-point wireless links using multiple co-located antennas at each end of a link in the presence of multipath propagation. The paper provided a rigorous mathematical proof of MIMO capacity based on a precise channel model and identified OFDM as the most efficient air interface for use with MIMO. The paper was submitted to the IEEE in April 1996 and presented in November at the 1996 Global Communications Conference in London. Raleigh also filed two patent applications for MIMO in August of the same year.
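The capacity scaling at the heart of such proofs is commonly summarized (a standard formulation, not quoted from Raleigh's paper) as

$$ C = \log_2 \det\!\left( \mathbf{I}_{N_r} + \frac{\rho}{N_t}\, \mathbf{H}\mathbf{H}^{H} \right) \ \text{bits/s/Hz}, $$

where $\mathbf{H}$ is the $N_r \times N_t$ matrix of channel gains between transmit and receive antennas and $\rho$ is the signal-to-noise ratio; in rich multipath, capacity grows roughly linearly with $\min(N_t, N_r)$ rather than logarithmically with transmit power.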
Development of 4G MIMO:
Raleigh discovered that multipath propagation could be exploited to multiply link capacity after developing an improved channel model that showed how multipath propagation affects signal waveforms. The model took into account factors including radio propagation geometry (natural and man-made objects serving as “local reflectors” and “dominant reflectors”), antenna array steering, angle of arrival, and delay spread.
Development of 4G MIMO:
Bell Labs researcher Gerard J. Foschini's paper submitted in September 1996 and published in October of the same year also theorized that MIMO could be used to significantly increase the capacity of point-to-point wireless links. Bell Labs demonstrated a prototype MIMO system based on its BLAST (Bell Laboratories Layered Space-Time) technology in late 1998.
Space–time block code (also known as the Alamouti code) was developed by Siavash Alamouti and is widely used in MIMO-OFDM systems. Alamouti's 1998 paper showed that the benefits of receive diversity can also be achieved using a combination of transmit diversity and space-time block codes. A key advantage of transmit diversity is that it does not require multiple antennas and RF chains in handsets.
Orthogonal frequency-division multiplexing (OFDM):
OFDM emerged in the 1950s when engineers at Collins Radio Company found that a series of non-contiguous sub-channels are less vulnerable to inter-symbol interference (ISI). OFDM was studied more systematically by Robert W. Chang in 1966. Chang used Fourier transforms to ensure orthogonality. Sidney Darlington proposed the use of the discrete Fourier transform (DFT) in 1970. Stephen B. Weinstein and Paul M. Ebert used the DFT to perform baseband modulation and demodulation in 1971.
Orthogonal frequency-division multiplexing (OFDM):
Dial-up modems developed by Gandalf Technologies and Telebit in the 1970s and 1980s used OFDM to achieve higher speeds. Amati Communications Corp. used its discrete multi-tone (DMT) form of OFDM to transmit data at higher speeds over phone lines also carrying phone calls in digital subscriber line (DSL) applications. OFDM is part of the digital audio broadcasting (DAB) and digital video broadcasting (DVB) standards developed in Europe. OFDM is also used in the 802.11a and 802.11g wireless LAN standards.
Commercialization of 4G MIMO:
Greg Raleigh, V. K. Jones, and Michael Pollack founded Clarity Wireless in 1996. The company built a prototype MIMO-OFDM fixed wireless link running 100 Mbit/s in 20 MHz of spectrum in the 5.8 GHz band, and demonstrated error-free operation over six miles with one watt of transmit power. Cisco Systems acquired Clarity Wireless in 1998 for its non-line of sight, vector OFDM (VOFDM) technology. The Broadband Wireless Industry Forum (BWIF) was created in 1999 to develop a VOFDM standard.
Arogyaswami Paulraj founded Iospan Wireless in late 1998 to develop MIMO-OFDM products. Iospan was acquired by Intel in 2003. Neither Clarity Wireless nor Iospan Wireless shipped MIMO-OFDM products before being acquired.
Greg Raleigh and V. K. Jones founded Airgo Networks in 2001 to develop MIMO-OFDM chipsets for wireless LANs. In 2004, Airgo became the first company to ship MIMO-OFDM products. Qualcomm acquired Airgo Networks in late 2006.
Commercialization of 4G MIMO:
Surendra Babu Mandava and Arogyaswami Paulraj founded Beceem Communications in 2004 to produce MIMO-OFDM chipsets for WiMAX. The company was acquired by Broadcom in 2010.
Commercialization of 4G MIMO:
The Institute of Electrical and Electronics Engineers (IEEE) created a task group in late 2003 to develop a wireless LAN standard delivering at least 100 Mbit/s of user data throughput. There were two major competing proposals: TGn Sync was backed by companies including Intel and Philips, and WWiSE was supported by companies including Airgo Networks, Broadcom, and Texas Instruments. Both groups agreed that the 802.11n standard would be based on MIMO-OFDM with 20 MHz and 40 MHz channel options. TGn Sync, WWiSE, and a third proposal (MITMOT, backed by Motorola and Mitsubishi) were merged to create what was called the Joint Proposal. The final 802.11n standard supported speeds up to 600 Mbit/s (using four simultaneous data streams) and was published in late 2009.
WiMAX was developed as an alternative to cellular standards, is based on the 802.16e standard, and uses MIMO-OFDM to deliver speeds up to 138 Mbit/s. The more advanced 802.16m standard enabled download speeds up to 1 Gbit/s. A nationwide WiMAX network was built in the United States by Clearwire, a subsidiary of Sprint-Nextel, covering 130 million pops by mid-2012. Clearwire subsequently announced plans to deploy LTE (the cellular 4G standard) covering 31 cities by mid-2013.
Commercialization of 4G MIMO:
The first 4G cellular standard was proposed by NTT DoCoMo in 2004. Long term evolution (LTE) is based on MIMO-OFDM and continues to be developed by the 3rd Generation Partnership Project (3GPP). LTE specifies downlink rates up to 300 Mbit/s, uplink rates up to 75 Mbit/s, and quality of service parameters such as low latency. LTE Advanced adds support for picocells, femtocells, and multi-carrier channels up to 100 MHz wide. LTE has been embraced by both GSM/UMTS and CDMA operators.
The first LTE services were launched in Oslo and Stockholm by TeliaSonera in 2009. Deployment is most advanced in the United States, where all four Tier 1 operators have or are constructing nationwide LTE networks. More than 222 LTE networks are currently operational in 83 countries, with approximately 126 million connections (devices).
Emerging 5G MIMO-OFDM standards:
The 802.11ac wireless LAN standard was proposed to deliver speeds of 1 Gbit/s and faster. Development of the specification began in 2011 and is expected to be completed by 2014. 802.11ac uses the 5 GHz band, defines channels up to 160 MHz wide, supports up to 8 simultaneous MIMO data streams, and delivers raw data rates up to nearly 7 Gbit/s. A number of products based on 802.11ac draft specifications are now available.
Emerging 5G MIMO-OFDM standards:
Fifth generation (5G) mobile network concepts are in the exploratory stage. Commercialization is expected by the early 2020s. In March 2013, NTT DoCoMo tested a 10 Gbit/s uplink using 400 MHz in the 11 GHz band. In May 2013, Samsung announced that it is experimenting in the 28 GHz band using base stations with up to 64 antennas, and has achieved 1 Gbit/s at distances up to 2 kilometers. Samsung claims the technology could deliver tens of Gbit/s under favorable conditions.
Emerging 5G MIMO-OFDM standards:
Research papers suggest that 5G networks are likely to consist of small distributed cells operating at frequencies up to 90 GHz using "massive MIMO." According to Jakob Hoydis of Bell Laboratories, Alcatel-Lucent, Germany, "Network densification is the only solution to the capacity crunch." This could involve two-tier networks ("HetNets") using existing cellular base stations to ensure broad coverage and high mobility and interspersed small cells for capacity and indoor service. Massive MIMO would also be employed in high-speed backhaul links.
**Theograndin I**
Theograndin I:
Theograndin I is a sulfated flavone glucuronide found in Cupuaçu (Theobroma grandiflorum). It is a glucuronide of isoscutellarein (8-Hydroxyapigenin).
**Xed**
Xed:
Xed is a lightweight text editor forked from Pluma and is the default text editor in Linux Mint. Xed is a graphical application which supports editing multiple text files in one window via tabs. It fully supports international text through its use of the Unicode UTF-8 encoding. As a general-purpose text editor, Xed supports most standard editor features, and emphasizes simplicity and ease of use. Its core feature set includes syntax highlighting of source code, auto indentation, and printing support with print preview.
Features:
Optional vertical tab list in side pane
Complete support for UTF-8 text
Auto indentation and configurable indentation values
Document statistics for the file and within a selection (line count, word count, character count with and without spaces, byte count)
View CVS changelogs
Colored syntax highlighting
Remote file editing
Smart find and replace
Print preview and printing
File comparison
File history
Complete preferences interface
Support for plugin customization
Optional Python support
Prebundled plugins including a spell checker, case transform, file browser, sort, and insert date/time
Edit multiple files in one window using tabs
Indicate how long ago the file was last saved when closing an unsaved document
Features:
Ability to sort lines in alphanumerical order.
**Steel service center**
Steel service center:
A steel service center is a company that deals with the service-related activities of the steel industry. In a typical scenario, a steel manufacturer produces steel in bulk and then sells it to the customer, who uses the steel according to its respective business. Small and medium enterprise (SME) businesses often have a problem buying good-quality steel from a big manufacturer, for various reasons: for example, the minimum amount of steel the manufacturer will sell may be more than the SME requires, forcing the SME to maintain a huge warehouse and adding to its total cost.
Operation:
A steel service center buys steel from a manufacturer, stores it, and sells it to end users as required. This saves the SME from the trouble of buying extra steel or even maintaining an inventory. The steel service center may also process the steel, e.g. by cutting to a size or shape specified by the customer, before sale. The quantity and the methods of processing of steel vary from center to center. It mostly depends on the product mix and the customer mix of the center.
**High-voltage switchgear**
High-voltage switchgear:
High voltage switchgear is any switchgear used to connect or disconnect a part of a high-voltage power system. This equipment is essential for the protection and safe operation, without interruption, of a high voltage power system, and is important because it is directly linked to the quality of the electricity supply.
High-voltage switchgear:
The term "high voltage" covers both the former medium voltage (MV) and the former high voltage (HV), and therefore refers to equipment with a rated voltage of over 1,000 V for alternating current and over 1,500 V for direct current. The industrial applications of high-voltage circuit breakers are for the moment limited to alternating current, where they are more economical; there are, however, high-voltage disconnectors for direct-current connections.
High-voltage switchgear:
High voltage switchgear was invented at the end of the 19th century for operating motors and other electric machines. The technology has been improved over time and can now be used with voltages up to 1,100 kV.
Classification:
Functional Classification Disconnectors and earthing switches Disconnectors and earthing switches are safety devices used to open or to close a circuit when there is no current through them. They are used to isolate a part of a circuit, a machine, a part of an overhead line or an underground line so that maintenance can be safely conducted.
The opening of the line isolator or busbar section isolator is necessary for safety, but not sufficient. Grounding must be conducted at both the upstream and downstream sections of the device under maintenance. This is accomplished by earthing switches.
Classification:
In principle, disconnecting switches do not have to interrupt currents, as they are designed for use on de-energized circuits. In practice, some are capable of interrupting currents (as much as 1,600 A at 300 V, but only if the current is drawn via a same-circuit half-breaker bypass system), and some earthing switches must interrupt the induced currents that are generated in a non-current-carrying line by inductive and capacitive coupling with nearby lines (up to 160 A at 20 kV).
Classification:
High-current switching mechanism High-current switching mechanisms are used for energized circuits carrying a normal load. Some can be used as a disconnecting switch.
However, while they can close onto (establish) a short-circuit current, they cannot interrupt it.
Contactor Contactors are similar in function to high-current switching mechanisms, but can be operated at much higher rates. They have a high electrical endurance and a high mechanical endurance.
Fuses A fuse can automatically interrupt a circuit when an overcurrent flows in it for a set time. This is accomplished by the melting of a calibrated electrical conductor.
Fuses are mainly used to protect against short circuits. They limit the peak value of the fault current.
In three-phase electric power, fuses only eliminate the phases in which the fault current is flowing, which can pose a risk for both the malfunctioning devices and people. To alleviate this problem, fuses can be used in conjunction with high-current switches or contactors.
Like contactors, high-voltage fuses are used only in the band 30 kV to 100 kV.
Circuit Breaker A high voltage circuit breaker is capable of connecting, carrying and disconnecting currents under the rated voltage (the maximal voltage of the power system which it is protecting).
Under normal operational conditions, circuit breakers can be used to (dis)connect a line. Circuit breakers can also be used to interrupt current when anomalies are detected, such as a short-circuit.
Classification:
Circuit breakers are essential elements of high-voltage power systems because they are the only means to safely interrupt a short-circuit current. The international standard IEC 62271-100 defines the requirements linked to the characteristics of a high-voltage circuit breaker.
A circuit breaker can be equipped with electronic devices that report its state (such as wear or gas pressure) at any moment and detect faults from drifts in these characteristics; this also permits planned maintenance operations and helps avoid failures.
To operate on long lines, circuit breakers are equipped with a closing resistor to limit overvoltages. They can also be equipped with devices to synchronize closing and/or opening, to limit the overvoltages and inrush currents from lines, unloaded transformers, shunt reactances, and capacitor banks.
Some devices are designed to combine the characteristics of the circuit breaker and the disconnector, but their use is not widespread.
**The Nature of Order**
The Nature of Order:
The Nature of Order: An Essay on the Art of Building and the Nature of the Universe (ISBN 0-9726529-0-6) is a four-volume work by the architect Christopher Alexander published in 2002–2004. In his earlier work, Alexander attempted to formulate the principles that lead to a good built environment as patterns, or recurring design solutions. However, he came to believe that patterns themselves are not enough to generate life in buildings and cities, and that one needs a "morphogenetic" understanding of the formation of the built environment as well as a deep understanding of how the makers get in touch with the creative process.
The Phenomenon of Life:
Volume 1 attempts to define "life" in the built environment and determine why one built environment may have more life than another. Important to this idea is his notion of centers: "Centers are those particular identified sets, or systems, which appear within the larger whole as distinct and noticeable parts. They appear because they have noticeable distinctness, which makes them separate out from their surroundings and makes them cohere, and it is from the arrangements of these coherent parts that other coherent parts appear. The life or intensity of one center is increased or decreased according to the position and intensity of other nearby centers. Above all, centers become most intense when the centers which they are made of help each other."
Alexander argues that any entity (center), for example an ecosystem, landscape, garden, city, street, building, window, painting, animal or human being, has a certain degree of life. According to Alexander, a human being is able to sense this degree of life in an entity objectively, as a real empirical dimension. He tries to demonstrate this to the reader with experiments that he has conducted, as well as studies and experiments the reader can do. The ability to be aware of the degree of life of an entity lays the foundation for his theory and for the creative process of which human beings are, according to Alexander, capable.
The Phenomenon of Life:
The first volume contains an exposition of what the author calls the fundamental properties, which are those that are possessed by environments which have more life. He argues that processes that lead to a good built environment are those that tend to increase one or more of these properties. He identifies fifteen geometric properties which tend to accompany the presence of life in nature, and also in the buildings and cities we make. These properties are seen over and over in nature, and in cities and streets of the past, but have all but disappeared in the developments and buildings of the last one hundred years. The book shows that living structure depends on features which make a close connection with the human self, and that living structure has the capacity to support human well-being.
The Process of Creating Life:
The second book describes the process of creating "life", which is an evolutionary process. Complex systems do not spring into existence fully formed, but rather through a series of small, incremental changes. The process begins with a simple system and incrementally changes that system such that each change preserves the structure of the previous step. Alexander calls these increments "structure-preserving transformations," and they are essential to his process.
The Process of Creating Life:
Where book one introduces the reader to 15 geometric properties that make up living systems, Alexander reframes those geometric properties as structure-preserving transformations in and of themselves rather than being the results of other transformations. For example, Alexander claims that "Levels of Scale" will arise naturally as a result of structure-preserving transformations, but he notes that "Levels of Scale" can also be viewed as a transformation that introduces level of scale into a given structure. A skilled designer would use this transformation to add depth to a particular part of the system that was being built.
The Process of Creating Life:
Alexander contrasts structure-preserving transformations with "Structure-destroying transformations", which he feels are common in modern architecture. Alexander himself does express some sympathy for those who have used these processes to design buildings that he feels are devoid of "life": "I do not, directly, blame all the architects who have made these buildings in so many places on earth. I believe it is inappropriate to feel anger towards them... Rather, I believe that we must acknowledge that the architects (often our own colleagues) who drew these buildings, and then had them built by methods and processes far from their control, deserve our sympathy for being placed in an impossible position. What has caused the new tradition of structure-destroying forms of this era, are mainly the machine-like processes of planning, conceiving, budgeting, developing, construction contracting, construction labor, and so forth. The architects who fully accepted the modern machine have hardly been more than pawns in the game which is much larger than they are."
A Vision of a Living World:
Volume 3, the last of the four books to be published, is the least theoretical of the books and the most compelling from a practical point of view. In Book 3 Alexander presents hundreds of his own buildings and those of his contemporaries who have used similar methods consistent with the theory of living process. The projects include neighborhoods, housing built by people for themselves, public buildings, public urban space, ornament, colors, and details of construction innovation. Hundreds of color photographs offer concrete examples of the kind of spaces, things and buildings that can be achieved when Alexander's theories are put into practice. These photos of buildings, and the discussion of each, demonstrate exactly what Alexander means when he talks about living structure and about using life-creating processes to create beautiful places and buildings. These places are more than just pleasant to look at and be in: they reach an archetypal level of human experience, reaching across centuries, continents, cultures, technologies, building materials and climates. They connect to us all. They connect us to our own feelings.
A Vision of a Living World:
All four books of The Nature of Order present a new framework for perceiving and interacting with our world, a methodology for creating beautiful spaces, and a cosmology where art, architecture, science, religion and secular life all work comfortably together. The third book shows us, visually, technically, and artistically, what a world built in this cosmology and framework is likely to be like for us. Six hundred pages of projects built and planned over a thirty-year period, including many unbuilt experiments, illustrate the impact which is likely to follow from the use of living process in the world. The book provides the reader with an intuitive feel for the kind of world, its style and geometry, which is likely to follow, together with its ecological and natural character. It closes with an assessment of the archetypal character such a new, living world is likely to reveal.
A Vision of a Living World:
With these examples, lay people, architects, builders, scientists, artists, and students are able to make this new framework real for themselves, for their own lives, and for their own work. Alexander gives us a feast for the eyes, the mind, and the heart.
The Luminous Ground:
The foundations of modern scientific thought, four centuries old, are firmly rooted in the conception that the universe is a machinelike entity, a play of baubles, machines, trinkets. To this day, our real daily experience of ourselves has no clear place in science. It is little wonder that a machinelike world-view has supported the deadly architecture of the last century. This mechanistic thinking, and the consequent investment-oriented tracts of houses, condominiums and offices, have dehumanized our cities and our lives. How are spirit, soul, emotion and feeling to be introduced into a building, a street, or a development project in modern times? In addressing this question, Alexander approached religious questions from a scientific and philosophical rather than mystical direction, focusing on human feelings, well-being and interaction with nature rather than metaphysics.
The Luminous Ground:
The Luminous Ground, the fourth book of The Nature of Order, contains what is, perhaps, the deepest revelation in the four-volume work. Alexander addresses the cosmological implications of the theory he has presented. The book begins with a critique of current cosmological thinking, and its separation from personal feeling and value. The outline of a theory in which matter itself is more spirit-like, more personal in character, is sketched. Here is a geometrical view of space and matter seamlessly connected to our own private, personal, experience as sentient and knowing creatures. This is not merely an emotional appendix to the scientific theory of the other books. It is at the core of the entire work, and is rooted in the fact that our two sides - our analytical thinking selves, and our vulnerable emotional personalities as human beings - are coterminous, and must be harnessed at one and the same time, if we are ever to really make sense of what is around us, and be able to create a living world.
The Luminous Ground:
Alexander breaks away completely from the one-sided mechanical model of buildings or neighborhoods as mere assemblages of technically generated, interchangeable parts. He shows us conclusively that a spiritual, emotional, and personal basis must underlie every act of building or making. And then, in the middle of the book, comes the linchpin of the work - a one-hundred-page chapter on color, which dramatically conveys the way that consciousness and spirit are manifested in the world.
The Luminous Ground:
This is a new cosmology: consciousness inextricably joined to the substrate of matter, present in all matter. This view, though radical, conforms to our most ordinary, daily intuitions. It may provide a path for those contemporary scientists who are beginning to see consciousness as the underpinning of all matter, and thus as a proper object of scientific study. And it will change, forever, our conception of what buildings are.
**HD 109749**
HD 109749:
HD 109749 is a binary star about 206 light years away in the constellation of Centaurus.
Stellar system:
The primary star, HD 109749 A, is a G-type subgiant with a spectral type of G3IV, indicating it is an evolved star with a luminosity higher than that of a main sequence star. It has a mass of 1.14 M☉ and a radius of 1.21 R☉. The star shines with a luminosity of 1.55 L☉ and has an effective temperature of 5,860 K. Evolutionary models estimate an age of 4.1 billion years. HD 109749 A is chromospherically inactive and has a high metallicity, with an iron abundance 178% of the Sun's. The secondary star, HD 109749 B, is a K-type main sequence star with an apparent magnitude of 10.3. It has a mass of about 0.78 M☉ and is located at a separation of 8.4 arcseconds, which corresponds to a projected separation of 490 AU. This star has the same proper motion as the primary and appears to be at the same distance, confirming that the two form a physical binary system.
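As a rough consistency check (a sketch, not a value from the source; it assumes the Stefan–Boltzmann relation and a solar effective temperature of about 5,772 K), the quoted radius and temperature reproduce the quoted luminosity:

$$\frac{L}{L_\odot}=\left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T_{\mathrm{eff}}}{T_\odot}\right)^{4}\approx(1.21)^{2}\left(\frac{5860}{5772}\right)^{4}\approx 1.46\times 1.06\approx 1.55$$

Likewise, an iron abundance of 178% of the Sun's corresponds to a metallicity of $[\mathrm{Fe/H}]=\log_{10}1.78\approx +0.25$ dex.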
Planetary system:
In 2005, an exoplanet was discovered around HD 109749 A. It was detected by the radial velocity method as part of the N2K Consortium. It is a hot Jupiter with a minimum mass of 0.28 MJ and a semimajor axis of 0.06 AU.
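The source does not quote an orbital period, but one can be estimated from Kepler's third law in solar units (a rough sketch using only the semimajor axis and stellar mass given above):

$$P\approx\sqrt{\frac{a^{3}}{M_{\star}}}=\sqrt{\frac{(0.06)^{3}}{1.14}}\ \mathrm{yr}\approx 0.014\ \mathrm{yr}\approx 5\ \mathrm{days},$$

the few-day orbit characteristic of a hot Jupiter.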
**Orbit Downloader**
Orbit Downloader:
Orbit Downloader is a discontinued download manager for Microsoft Windows. Launched in 2006, it was abandoned by its developers in 2009. In 2013, Orbit Downloader was classified as malware by antivirus software after ESET discovered a botnet in the application.
Features:
One of the main features of the program is its ability to grab and download embedded Flash Video files from online video platforms. Orbit Downloader also accelerates downloads by acting as a peer-to-peer client, utilizing the bandwidth of other users.
Orbit Downloader supports downloading over the HTTP, HTTPS, FTP, Metalink, RTSP, MMS and RTMP protocols, and it supports the Internet Explorer, Maxthon, Mozilla Firefox and Opera web browsers.
Funding and malicious conduct:
Although Orbit Downloader is free, it is an advertising-supported product: it offers to change the web browser's homepage upon installation and also offers to install software that is not critical to its operation. It has also begun to display built-in ads in the program's main window and in the dialog that appears when a download finishes. On 21 August 2013, the WeLiveSecurity blog, published by the ESET security company, reported that since version 4.1.1.15, Orbit Downloader has included a botnet-like module which performs DDoS attacks without the user's knowledge or permission. Because of this dubious behavior, it is detected as malware. Following this report, the download websites BetaNews, Download.com, DownloadCrew, MajorGeeks, Softpedia and Softonic disabled its download. BetaNews attempted to contact the developers but discovered that their last blog activity had been in 2009 and that the Orbit community forum had since been left to a spammer.
**Paced Auditory Serial Addition Test**
Paced Auditory Serial Addition Test:
Paced Auditory Serial Addition Test (PASAT) is a neuropsychological test used to assess capacity and rate of information processing as well as sustained and divided attention. Originally the test was known as the Paced Auditory Serial Addition Task (PASAT). In the version used as part of the Multiple Sclerosis Functional Composite, subjects are given a number every 3 seconds and are asked to add the number they just heard to the number they heard before. This is a challenging task that involves working memory, attention and arithmetic capabilities. Versions with numbers presented every 2 seconds are also available. The original version presented the numbers every 2.4 seconds, with 0.4-second decrements for subsequent trials. The PASAT was originally developed for use in evaluating patients with head injury; its supposed advantage in this population was minimal practice effects. The test has since been widely used in conditions other than traumatic brain injury.
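The scoring rule is simple to state programmatically. The sketch below (in Python; the function names are hypothetical, not part of any standard test battery) computes the expected responses and a raw score: after the first number, each correct response is the sum of the two most recently presented numbers.

```python
def pasat_expected(stimuli):
    """Expected PASAT responses: each number added to the one heard before it.

    No response is expected to the very first number in the sequence.
    """
    return [a + b for a, b in zip(stimuli, stimuli[1:])]


def pasat_score(stimuli, responses):
    """Raw PASAT score: the number of correct responses.

    With 61 presented numbers this yields the commonly reported score out of 60.
    """
    return sum(1 for expected, given in zip(pasat_expected(stimuli), responses)
               if given == expected)


# Hearing 3, 5, 2, 7 the subject should answer 8 (3+5), then 7 (5+2), then 9 (2+7).
assert pasat_expected([3, 5, 2, 7]) == [8, 7, 9]
assert pasat_score([3, 5, 2, 7], [8, 7, 9]) == 3
```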
Multiple sclerosis:
It has become widely used in the testing of patients with multiple sclerosis, as patients with this disease frequently show impaired performance on this test. The PASAT was included in the Multiple Sclerosis Functional Composite as a cognitive measure. However, the use of the PASAT in clinical trials in MS has proven problematic, as there are significant practice effects over repeated administrations; typically the effect of treatment is reflected by a larger improvement on the test compared to the control group.
**Apocrine**
Apocrine:
Apocrine glands are a type of exocrine gland; exocrine glands are themselves a type of gland, i.e. a group of cells specialized for the release of secretions. Exocrine glands secrete by one of three means: holocrine, merocrine and apocrine. In apocrine secretion, secretory cells accumulate material at their apical ends, and this material then buds off from the cells, forming extracellular vesicles. The secretory cells therefore lose part of their cytoplasm in the process of secretion.
Apocrine:
An example of true apocrine glands is the mammary glands, responsible for secreting breast milk. Apocrine glands are also found in the anogenital region and axillae. Apocrine secretion is less damaging to the gland than holocrine secretion (which destroys a cell) but more damaging than merocrine secretion (exocytosis).
Apocrine metaplasia:
Apocrine metaplasia is a reversible transformation of cells to an apocrine phenotype. It is common in the breast in the context of fibrocystic change and is seen mostly in women over the age of 50 years. Metaplasia happens when there is an irritation to the breast, such as a breast cyst. Apocrine-like cells form in the lining of developing microcysts due to the pressure buildup within the lumen, which is caused by secretions. This type of metaplasia is an exception to the common rule that metaplasia increases the risk of developing cancer: apocrine metaplasia does not increase the possibility of developing breast cancer.
Apocrine ductal carcinoma in situ:
Apocrine ductal carcinoma in situ (ACDIS) is a very rare breast carcinoma which is regarded as a variant of the ductal carcinoma in situ breast tumors. ACDIS tumors have microscopic histopathology features that are similar to pure apocrine carcinoma of the breast tumors but differ from them in that they are completely localized, i.e. have not invaded nearby tissues or metastasized to distant tissues.
Apocrine carcinoma:
Apocrine carcinoma is a very rare form of female breast cancer, with a reported incidence of 0.5 to 4%. Cytologically, the cells of apocrine carcinoma are relatively large and granular, with prominent eosinophilic cytoplasm. When apocrine carcinoma tests as "triple negative", its cells do not express the estrogen receptor, progesterone receptor, or HER2 receptor.
**Oral and maxillofacial radiology**
Oral and maxillofacial radiology:
Oral and maxillofacial radiology, also known as dental and maxillofacial radiology, is the specialty of dentistry concerned with the performance and interpretation of diagnostic imaging used for examining the craniofacial, dental and adjacent structures. Oral and maxillofacial imaging includes cone beam computerized tomography, multislice computerized tomography, magnetic resonance imaging, positron emission tomography, ultrasound, panoramic radiography, cephalometric imaging and intra-oral imaging (e.g. bitewing, peri-apical and occlusal radiographs), in addition to special tests like sialographs. Other modalities, including optical coherence tomography, are also under development for dental imaging.
Training:
United States: Oral and maxillofacial radiology is one of nine dental specialties recognized by the American Dental Association. To become an oral and maxillofacial radiologist, one must first complete a dental degree and then apply for and complete a postgraduate course of training (usually between 2–4 years in length). Training includes all aspects of radiation physics, radiation biology, radiation safety, radiologic technique, the patho-physiology of disease and the interpretation of diagnostic images.
Training:
Programs accredited by the Commission on Dental Accreditation are a minimum of two years in length. Several accredited programs in oral maxillofacial radiology require the resident to complete a master's degree, whereas others allow the option of pursuing a concurrent PhD or master's degree. Following successful completion of this training, the oral and maxillofacial radiologist becomes board-eligible to challenge the American Board of Oral and Maxillofacial Radiology examination. Successful completion of board certification results in Diplomate status in the American Board of Oral and Maxillofacial Radiology.
Training:
Australia: Australian programs are accredited by the Australian Dental Council and are 3 years in length, culminating in either a master's degree (MDS or MPhil) or a Doctor of Clinical Dentistry degree (DClinDent). Currently, the only Australian institution offering specialist training in oral maxillofacial radiology is the University of Queensland. Programs are focused on clinical radiology and offer comprehensive training with registrars reporting plain film, cone beam computerized tomography, multislice computerized tomography and magnetic resonance imaging of the maxillofacial region.
Training:
Fellowship can then be acquired through the Royal Australia New Zealand College of Radiologists and/or the Royal Australasian College of Dental Surgeons. Oral and maxillofacial radiologists in Australia tend to work in the private sector, reporting in medical radiology practices alongside medical radiologists.
Canada: Canadian programs are accredited by the Canadian Dental Association, are a minimum of two years in length and usually culminate with a Master of Science degree. Graduates are then eligible to sit for the Fellowship exams with the Royal College of Dentists of Canada.
United Kingdom: Programs in the United Kingdom are 4 years in length and culminate in a Certificate of Completion of Specialty Training and often a Master of Science degree. Graduates are then eligible to sit for the Diploma of Dental Radiology from the Royal College of Radiologists.
**Glossary of classical algebraic geometry**
Glossary of classical algebraic geometry:
The terminology of algebraic geometry changed drastically during the twentieth century, with the introduction of general methods initiated by David Hilbert and the Italian school of algebraic geometry at the beginning of the century, and later formalized by André Weil, Jean-Pierre Serre and Alexander Grothendieck. Much of the classical terminology, mainly based on case study, was simply abandoned, with the result that books and papers written before this time can be hard to read. This article lists some of this classical terminology, and describes some of the changes in conventions.
Glossary of classical algebraic geometry:
Dolgachev (2012) translates many of the classical terms in algebraic geometry into scheme-theoretic terminology. Other books defining some of the classical terminology include Baker (1922a, 1922b, 1923, 1925, 1933a, 1933b), Coolidge (1931), Coxeter (1969), Hudson (1990), Salmon (1879), Semple & Roth (1949).
Conventions:
The change in terminology from around 1948 to 1960 is not the only difficulty in understanding classical algebraic geometry. There was also a lot of background knowledge and assumptions, much of which has now changed. This section lists some of these changes.
In classical algebraic geometry, adjectives were often used as nouns: for example, "quartic" could also be short for "quartic curve" or "quartic surface".
In classical algebraic geometry, all curves, surfaces, varieties, and so on came with fixed embeddings into projective space, whereas in scheme theory they are more often considered as abstract varieties. For example, a Veronese surface was not just a copy of the projective plane, but a copy of the projective plane together with an embedding into projective 5-space.
Varieties were often considered only up to birational isomorphism, whereas in scheme theory they are usually considered up to biregular isomorphism. (Semple & Roth 1949, p.20–21) Until circa 1950, many of the proofs in classical algebraic geometry were incomplete (or occasionally just wrong). In particular authors often did not bother to check degenerate cases.
Words (such as azygetic or bifid) were sometimes formed from Latin or Greek roots without further explanation, assuming that readers would use their classical education to figure out the meaning.
Conventions:
Definitions in classical algebraic geometry were often somewhat vague, and it is futile to try to find the precise meaning of some of the older terms because many of them never had a precise meaning. In practice this did not matter much when the terms were only used to describe particular examples, as in these cases their meaning was usually clear: for example, it was obvious what the 16 tropes of a Kummer surface were, even if "trope" was not precisely defined in general.
Conventions:
Algebraic geometry was often implicitly done over the complex numbers (or sometimes the real numbers).
Readers were often assumed to know classical (or synthetic) projective geometry, and in particular to have a thorough knowledge of conics, and authors would use terminology from this area without further explanation.
Conventions:
Several terms, such as "Abelian group", "complete", "complex", "flat", "harmonic", "homology", "monoid", "normal", "pole", "regular", now have meanings that are unrelated to their original meanings. Other terms, such as "circle", have their meanings tacitly changed to work in complex projective space; for example, a circle in complex algebraic geometry is a conic passing through the circular points at infinity and has underlying topological space a 2-sphere rather than a 1-sphere.
Conventions:
Sometimes capital letters are tacitly understood to stand for points, and small letters for lines or curves.
Symbols:
[1], [2], ..., [n] Projective space of dimension 1, 2, …, n. This notation was introduced by Schubert (1886).
∞¹, ∞², ... A family of dimension 1, 2, ...
{1}, {2}, ..., {n} A family or variety of dimension 1, 2, …, n. (Semple & Roth 1949, p.288)
A:
Abelian group 1. An archaic name for the symplectic group.
2. A commutative group.
aberrancy The deviation of a curve from circular form. See Salmon (1879, p. 356).
A:
absolute 1. A fixed choice of something in projective space, used to construct some other geometry from projective geometry. For example, choosing a plane, called the absolute plane, of projective space can be used to make its complement into a copy of affine space. Choosing a suitable conic or polarity, called the Cayley absolute, absolute conic or absolute polarity, in the absolute plane provides the means to put a metric on affine space so that it becomes a metric space.
A:
2. Absolute geometry is roughly Euclidean geometry without the parallel postulate.
accidental An accidental (or improper) double point of a surface in 4-dimensional projective space is a double point with two distinct tangent planes. (Baker 1933b, vol 6, p. 157) acnode An acnode is an isolated point of a real curve. See Salmon (1879, p.23).
A:
adjoint If C is a curve, an adjoint of C is a curve such that any point of C of multiplicity r has multiplicity at least r–1 on the adjoint. Sometimes the multiple points of C are required to be ordinary, and if this condition is not satisfied the term "sub-adjoint" is used. (Semple & Roth 1949, p.55, 231) affine 1. Affine space is roughly a vector space where one has forgotten which point is the origin.
A:
2. An affine variety is a variety in affine space.
affinity An automorphism of affine space.
aggregate A set.
ambient An ambient variety is a large variety containing all the points, curves, divisors, and so on that one is interested in.
anharmonic ratio Cross-ratio antipoint One of a pair of points constructed from two foci of a curve. See Salmon (1879, p.119).
apparent An apparent singularity is a singularity of a projection of a variety into a hyperplane. They are so called because they appear to be singularities to an observer at the point being projected from. (Semple & Roth 1949, p.55, 231) apolar Orthogonal under the polar pairing between the symmetric algebra of a vector space and its dual.
arithmetic genus The arithmetic genus of a variety is a variation of the Euler characteristic of the trivial line bundle; see Hodge number.
A:
Aronhold set One of the 288 sets of 7 of the 28 bitangents of a quartic curve corresponding to the 7 odd theta characteristics of a normal set. associated 1. An associated curve is the image of a projective curve in a Grassmannian, given by taking the tangent lines, or osculating planes, and so on. axial axis A special line or linear subspace associated with some family of geometric objects. For example, a special linear complex in 4-dimensional space consists of all lines meeting a given plane, that is called the axial plane of the complex. (Semple & Roth 1949, p.274) Similar to directrix.
A:
azygetic Unpaired. Opposite of syzygetic, meaning paired. Example: azygetic triad, azygetic tetrad, azygetic set.
B:
base 1. A base point is a point common to all members of a family.
2. The base number ρ is the rank of the Néron–Severi group.
bicircular Having nodes at the two circular points at infinity, as in bicircular curve. See Salmon (1879, p.231).
bicorn A bicorn is a curve with two cusps.
bicuspidal Having two cusps bidegree A pair of integers giving the degrees of a bihomogeneous polynomial in two sets of variables bielliptic 1. A bielliptic curve is a branched double cover of an elliptic curve.
2. A bielliptic surface is the same as a hyperelliptic surface.
B:
bifid 1. Split into two equal parts 2. A bifid map is an element of the vector space of dimension 2g over the field with 2 elements, consisting of the 2g+1-dimensional space of even-cardinality subsets of a set S of 2+2g elements, modulo the 1-dimensional space {0,S}. (Dolgachev 2012, p.215) 3. A bifid substitution is a permutation of the 28 bitangents of a quartic curve depending on one of the 35 decompositions of 8 symbols into two sets of 4 symbols. See Salmon (1879, p.223).
B:
biflecnode Same as fleflecnode. See Salmon (1879, p.210).
bigenus The second plurigenus P2 of a surface.
bihomogeneous Homogeneous in each of two sets of variables, as in bihomogeneous form.
binary Depending on two variables, as in binary form binodal Having two nodes binode A double point of a surface whose tangent cone consists of two different planes. See unode. (Semple & Roth 1949, p.424) bipartite Having two connected components. See Salmon (1879, p.165).
B:
bipunctual 1. Having two points 2. For a bipunctual conic with respect to 3 points see Baker (1922b, vol 2, p. 123). birational 1. Two varieties are birational if they are isomorphic off lower-dimensional subsets 2. A birational map is a rational map with rational "inverse" biregular 1. A biregular map is a regular map with regular inverse 2. Two varieties are biregular if there is a biregular map from one to the other, in other words if they are isomorphic as abstract varieties.
B:
biscribed Both circumscribed and inscribed, or in other words having vertices that lie on a curve and sides that are tangent to the curve, as in biscribed triangle. (Dolgachev 2012) bitangent A bitangent is a line that is tangent to a curve at two points. See Salmon (1879, p. 328).
bitangential Meeting a curve at the tangency points of its bitangents Brianchon hexagon A non-planar hexagon whose three diagonals meet. (Baker 1922a, vol 1, p. 47)
C:
canonical 1. The canonical series is the linear series of the canonical line bundle 2. The canonical bundle is the line bundle of differential forms of highest degree.
C:
3. The canonical map or canonical embedding is the map to the projective space of the sections of the canonical bundle 4. A canonical curve (or variety) is the image of a curve (or variety) under the canonical map 5. The canonical class is the divisor class of a canonical divisor 6. A canonical divisor is a divisor of a section of the canonical line bundle.
C:
catalecticant A catalecticant is an invariant of a binary form of degree 2n that vanishes when the form is a sum of powers of n linear forms.
C:
caustic A caustic is the envelope of light rays from a point reflected in a curve Cayley Cayleyan Named after Arthur Cayley 1. See Salmon (1879) 2. A Cayley octad is a set of 8 points in projective space given by the intersection of three quadrics. (Dolgachev 2012, 6.3.1) 3. The Cayley lines or Cayley–Salmon lines are the 20 lines passing through 3 Kirkman points.
C:
4. A Cayley absolute is a conic or quadric used to define a metric.
C:
center centre 1. A special point associated with some geometric object 2. The center of a perspectivity 3. The center of an isologue character characteristic 1. An integer associated with a projective variety, such as its degree, rank, order, class, type. (Semple & Roth 1949, p.189) In particular the Plücker characteristics of a curve are the order, class, number of nodes, number of bitangents, number of cusps, and number of inflections. (Coolidge 1931, p.99) 2. A characteristic exponent is an exponent of a power series with non-negative coefficient, that is not divisible by the highest common factor of preceding exponents with non-zero coefficients. (Coolidge 1931, p.220) 3. The characteristic series of a linear system of divisors on a surface is the linear system of 0-cycles on one of the divisors given by its intersections with the other divisors.
C:
chord A line joining two points of a variety chordal variety A chordal variety is the union of the chords and tangent spaces of a projective variety circle A plane conic passing through the circular points at infinity. For real projective geometry this is much the same as a circle in the usual sense, but for complex projective geometry it is different: for example, circles have underlying topological spaces given by a 2-sphere rather than a 1-sphere. circuit A component of a real algebraic curve. A circuit is called even or odd depending on whether it has an even or odd number of intersections with a generic line. (Coolidge 1931, p. 50) circular 1. A circular point is one of the two points at infinity (1: i: 0), (1: −i: 0) through which all circles pass 2. A circular algebraic curve is a curve passing through the two circular points at infinity. See also bicircular.
C:
circumscribed 1. Having edges tangent to some curve, as in circumscribed quadrilateral.
2. Passing through the vertices of something, as in circumscribed circle.
cissoid A cissoid is the curve generated from two curves and a point. See Salmon (1879).
C:
class 1. The class of a plane curve is the number of proper tangents passing through a generic point of the plane. (Semple & Roth 1949, p.28) 2. The class of a space curve is the number of osculating planes passing through a generic point of space. (Semple & Roth 1949, p.85) 3. The class of a surface in r-dimensional projective space is the number of tangent planes meeting a generic codimension 2 subspace in a line. (Semple & Roth 1949, p.28) 4. The degree of a contravariant or concomitant in the covariant variables.
C:
coaxal coaxial A pencil of circles is called coaxal if their centers all lie on a line (called the axis).
C:
A family of plane circles all passing through the same two points (other than the circular points at infinity). (Baker 1922b, vol 2, p. 66) coincidence 1. A coincidence quadric is a quadric associated to a correlation, given by the locus of points lying in the corresponding hyperplane. (Semple & Roth 1949, p.8) 2. A fixed point of a correspondence, in other words a point of a variety corresponding to itself under a correspondence. (Coolidge 1931, p. 126) collinear On the same line collineation A collineation is an isomorphism from one projective space to another, often to itself. (Semple & Roth 1949, p.6) See correlation.
C:
complete 1. A linear series of divisors is called complete if it is not contained in a larger linear series.(Semple & Roth 1949, p.351) 2. A scheme is called complete if the map to a point is proper 3. A complete quadrangle is 4 points and the 6 lines joining pairs 4. A complete quadrilateral is 4 lines meeting in pairs in 6 points 5. A complete conic in the plane is a (possibly degenerate) conic, together with a pair of (possibly equal) points on it if it is a double line complex 1. (Noun.) A line complex, a family of lines of codimension 1 in the family of all lines in some projective space, in particular a 3-dimensional family of lines in 3-dimensional projective space. (Semple & Roth 1949, p.236) See congruence.
C:
2. (Adjective.) Related to the complex numbers.
3. The (line) complex group is an old name for the symplectic group.
composite Reducible (meaning having more than one irreducible component).
conchoid A conchoid is the curve given by the cissoid of a circle and another curve. See Salmon (1879).
C:
concomitant A (mixed) concomitant is an invariant homogeneous polynomial in the coefficients of a form, a covariant variable, and a contravariant variable. In other words it is a (tri)homogeneous polynomial on SV⊕V⊕V* for some vector space V, where SV is some symmetric power of V and V* its dual, that is invariant under the special linear group of V. In practice V often has dimension 2. The degree, class, and order of a concomitant are its degrees in the three types of variable. Concomitants are generalizations of covariants, contravariants, and invariants. concurrent Meeting at a point cone 1. The union of the lines joining an algebraic set with a linear algebraic set. Called a point-cone, line-cone, ... if the linear set is a point, line, ...(Semple & Roth 1949, p.18) 2. A subset of a vector space closed under multiplication by scalars.
C:
configuration A configuration is a finite set of points and lines (and sometimes planes), generally with equal numbers of points per line and equal numbers of lines per point.
confocal Having the same foci congruence A family of lines in projective space such that there are a nonzero finite number of lines through a generic point (Semple & Roth 1949, p.238, 288). See complex.
conic A conic is a degree 2 curve. Short for "conic section", the intersection of a cone with a plane.
conjugate 1. A conjugate point is an acnode. (Salmon 1879, p.23) 2. A conjugate point is a point lying on the hyperplane corresponding to another point under a polarity.
3. A conjugate line is a line containing the point corresponding to another line under a polarity (or plane conic). (Baker 1922b, vol 2, p. 26) 4. For harmonic conjugate see harmonic.
connex A correspondence between a projective space and its dual.
consecutive Infinitesimally near. For example, a tangent line to a curve is a line through two consecutive points of the curve, and a focal point is the intersection of the normals of two consecutive points.
C:
contravariant 1. A bihomogeneous polynomial in dual variables of x, y, ... and the coefficients of some homogeneous form in x, y,... that is invariant under some group of linear transformations. In other words it is a bihomogeneous polynomial on SV⊕V for some vector space V, where SV is some symmetric power of V and V* its dual, that is invariant under the special linear group of V. In practice V often has dimension at least 3, because when it has dimension 2 these are more or less the same as covariants. The degree and class of a contravariant are its degrees in the two types of variable. Contravariants generalize invariants and are special cases of concomitants, and are in some sense dual to covariants.
C:
coplanar In the same plane correlation An isomorphism from a projective space to the dual of a projective space, often to the dual of itself. A correlation on the projective space of a vector space is essentially the same as a nonsingular bilinear form on the vector space, up to multiplication by constants. (Semple & Roth 1949, p.7) coresidual See Salmon (1879, p.131) correspondence A correspondence from X to Y is an algebraic subset of X×Y cosingular Having the same singularities couple An ordered pair covariant 1. A bihomogeneous polynomial in x, y, ... and the coefficients of some homogeneous form in x, y,... that is invariant under some group of linear transformations. In other words it is a bihomogeneous polynomial on SV⊕V* for some vector space V, where SV is some symmetric power of V and V* its dual, that is invariant under the special linear group of V. In practice V often has dimension 2. The degree and order of a covariant are its degrees in the two types of variable. Covariants generalize invariants and are special cases of concomitants, and are in some sense dual to contravariants 2. The variety defined by a covariant. In particular the curve defined by the Hessian or Steinerian covariants of a curve are called covariant curves. (Coolidge 1931, p.151) Cremona transformation A Cremona transformation is a birational map from a projective space to itself cross-ratio The cross-ratio is an invariant of 4 points on a projective line.
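For reference, in one common modern convention (classical authors vary in ordering and sign conventions), the cross-ratio of four collinear points with affine parameters $z_1, z_2, z_3, z_4$ is

$$(z_1,z_2;z_3,z_4)=\frac{(z_1-z_3)(z_2-z_4)}{(z_1-z_4)(z_2-z_3)},$$

and a harmonic set (see the entries under H below) is one for which this value is –1.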
C:
crunode Crunode is an archaic term for a node, a double point with distinct tangent directions.
cubic Degree 3, especially a degree 3 projective variety cubo-cubic A cubo-cubic transformation is a Cremona transformation such that the homaloids of the transformation and its inverse all have degree 3. Semple & Roth (1949, p.179) curve A curve together with an embedding into projective space.
cusp A cusp is a singular point of a curve whose tangent cone is a line.
cuspidal edge The locus of the focal points of a family of planes (Semple & Roth 1949, p.85, 87) cyclide A cyclide is a quartic surface passing doubly through the absolute conic. (Semple & Roth 1949, p.141)
D:
decic decimic 1. (Adjective) Degree 10 2. (Noun) A degree 10 projective variety deficiency 1. The deficiency of a linear system is its codimension in the corresponding complete linear system.
D:
2. The deficiency D of a plane curve is an approximation to its genus, equal to the genus when all singular points are ordinary, given by (n–1)(n–2)/2 – a(a–1)/2 – b(b–1)/2 – ..., where n is the degree of the curve and a, b, ... are the multiplicities of its singular points; for example, a plane cubic with a single node (n = 3, a = 2) has deficiency 1 – 1 = 0. (Semple & Roth 1949, p.30), (Salmon 1879, p. 28) degree 1. The number of intersection points of a projective variety with a generic linear subspace of complementary dimension 2. The number of points of a divisor on a curve Desargues The Desargues figure or configuration is a configuration of 10 lines and 10 points in Desargues' theorem.
D:
desmic system A desmic system is a configuration of three desmic tetrahedra.
developable 1. (Noun) A 1-dimensional family of planes in 3-dimensional projective space (Semple & Roth 1949, p.85).
2. (Noun) The envelope of the normals of a curve 3. (Noun) Short for a developable surface, one that can be unrolled to a plane 4. The tangent developable of a curve is the surface consisting of its tangent lines.
5. Flat, as in developable surface differential 1. A differential of the first kind is a holomorphic 1-form.
D:
2. A differential of the second kind is a meromorphic 1-form such that the residues of all poles are 0. Sometimes it is only allowed to have one pole, which must be of order 2. 3. A differential of the third kind is sometimes a meromorphic 1-form such that all poles are simple (order 1). Sometimes it is only allowed to have 2 poles.
D:
director The director circle of a conic is the locus of points where two orthogonal tangent lines to the conic meet. More generally the director conic of a conic in regard to two points is defined in a similar way. (Baker 1922b, vol 2, p. 26) directrix A straight line, or more generally a projective space, associated with some geometric configuration, such as the directrix of a conic section or the directrix of a rational normal scroll discriminant The invariant (on the vector space of forms of degree d in n variables) which vanishes exactly when the corresponding hypersurface in Pn-1 is singular.
D:
double curve A 1-dimensional singularity, usually of a surface, of multiplicity 2 double point 1. A 0-dimensional singularity of multiplicity 2, such as a node.
2. One of the two points fixed by an involution of a projective line. (Baker 1922b, vol 2, p.3) double six The Schläfli double six configuration duad A set of two points dual 1. The dual of a projective space is the set of hyperplanes, considered as another projective space.
2. The dual curve of a plane curve is the set of its tangent lines, considered as a curve in the dual projective plane.
3. A dual number is a number of the form a+εb where ε has square 0. Semple & Roth (1949, p.268)
E:
Eckardt point An Eckardt point is a point of intersection of 3 lines on a cubic surface.
effective An effective cycle or divisor is one with no negative coefficients elation A collineation that fixes all points on a line (called its axis) and all lines through a point on the axis (called its center).
eleven-point conic The eleven-point conic is a conic containing 11 special points associated to four points and a line. (Baker 1922b, vol 2, p. 49) embedded An embedded variety is one contained in a larger variety, sometimes called the ambient variety.
enneaedro A set of 9 tritangent planes to a cubic surface containing the 27 lines.
envelope A curve tangent to a family of curves. See Salmon (1879, p. 65).
epitrochoid An epitrochoid is the curve traced by a point of a disc rolling along another disc. Salmon (1879) equiaffine equiaffinity An equiaffinity is an equiaffine transformation, meaning an affine transformation preserving area.
equianharmonic 1. Four points whose cross ratio (or anharmonic ratio) is a primitive sixth root of 1 (equivalently, an imaginary cube root of –1) 2. An equianharmonic cubic is a cubic curve with j-invariant 0 equivalence In intersection theory, a positive-dimensional variety sometimes behaves formally as if it were a finite number of points; this number is called its equivalence.
evectant A contravariant defined by Sylvester depending on an invariant. See Salmon (1879, p. 184).
evolute An evolute is the envelope of the normal lines of a plane curve. See Salmon (1879, p. 40).
E:
exceptional 1. Corresponding to something of lower dimension under a birational correspondence, as in exceptional curve, exceptional divisor 2. An exceptional curve on a surface is one that corresponds to a simple point on another surface under a birational correspondence. It is called an exceptional curve of the first kind if it is transformed into a point of the other surface, and an exceptional curve of the second kind if it is transformed into a curve of the other surface.
F:
facultative A facultative point is one where a given function is positive. (Salmon 1885, p.243) first kind holomorphic or regular (when applied to differentials) flat 1. (Noun) A linear subspace of projective space, such as a point, line, plane, hyperplane.
2. (Adjective) Having curvature zero.
3. (Adjective) For the term "flat" in scheme theory see flat module, flat morphism.
flecnode A double point that is also a point of inflexion of one branch. (Cayley 1852). (Salmon 1879, p.210) fleflecnode A double point that is also a point of inflexion of both branches. (Cayley 1852).
F:
flex Short for point of inflection focal 1. A focal point, line, plane, ... is the intersection of several consecutive elements of a family of linear subspaces. (Semple & Roth 1949, p. 85, 252) 2. A focal curve, surface and so on is the locus of the focal points of a family of linear subspaces. (Semple & Roth 1949, p.252) focus A focal point. See Salmon (1879, p. 116), (Semple & Roth 1949, p. 85,251) foliate singularity See (Semple & Roth 1949, p.422) form 1. A homogeneous polynomial in several variables. Same as quantic.
F:
2. A differential form.
free intersection An intersection point of two members of a family that is not a base point.
freedom Dimension, as in degrees of freedom. (Semple & Roth 1949, p.26).
fundamental This term seems to be ambiguous and poorly defined: Zariski states: "I can find no clear-cut definition of a fundamental curve in the literature".
1. The fundamental set or fundamental locus of a birational correspondence appears to mean (roughly) either the set of points where it is not a bijection or the set of points where it is not defined.
2. A fundamental point, curve, or variety is a point, curve, or variety in the fundamental set of a birational correspondence.
G:
g^r_d, γ^r_d A linear or algebraic system of divisors of dimension r and degree d on a curve. The letter g is used for linear systems, and the letter γ is used for algebraic systems.
generator One of the lines of a ruled surface (Semple & Roth 1949, p.204) or more generally an element of some family of linear spaces.
generic 1. Not having some special properties, which are usually not stated explicitly.
2. A generic point is one having coordinates that are algebraically independent over the base field.
3. The generic point of a scheme.
genus 1. The dimension of the space of sections of the canonical bundle, as in the genus of a curve or the geometric genus of a surface 2. arithmetic genus of a surface 3. plurigenus geometric genus The geometric genus is the dimension of the space of holomorphic n-forms on an n-dimensional non-singular projective variety.
G:
grade The grade of a linear system of divisors on an n-dimensional variety is the number of free intersection points of n generic divisors. In particular the grade of a linear series of divisors on a curve is now called the degree and is the number of points in each divisor (Semple & Roth 1949, p.345), and the grade of a net of curves on a surface is the number of free intersections of two generic curves. (Semple & Roth 1949, p.45) (Semple & Roth 1949, p.159) Grassmannian A Grassmannian is a variety parameterizing linear subspaces of projective space group 1. A group or point-group is an archaic term for an effective divisor on a curve. This usage is particularly confusing, because some such divisors are called normal, with the result that there are "normal sub-groups" having nothing to do with the normal subgroups of group theory. (Coolidge 1931) 2. A group in the usual sense.
H:
harmonic 1. Two pairs of points on a line are harmonic if their cross ratio is –1. The 4 points are called a harmonic set, and the points of one pair are called harmonic conjugates with respect to the other pair. 2. A harmonic cubic is an elliptic curve with j-invariant 1728, given by a double cover of the projective line branched at 4 points with cross ratio –1.
H:
3. Satisfying some analogue of the Laplace equation, as in harmonic form.
H:
4. The harmonic polar line of an inflection point of a cubic curve is the component of the polar conic other than the tangent line. (Dolgachev 2012, 3.1.2) 5. A harmonic net is a set of points on a line containing the harmonic conjugate of any point with respect to any other two points. (Baker 1922a, vol 1, p. 133) 6. For harmonically conjugate conics see (Baker 1922b, vol 2, p. 122). Hesse Hessian Named after Otto Hesse.
H:
1. A Hessian matrix, or a variety associated with it. See Salmon (1879, p.55).
H:
2. The Hessian line is a line associated to 3 points A, B, C, of a conic, containing the three points given by the intersections of the tangents at A, B, C with the lines BC, CA, AB. 3. The Hessian point is a point associated to three lines tangent to a conic, whose construction is dual to that of a Hessian line.
H:
4. The Hessian pair or Hessian duad of three points on a projective line is the pair of points fixed by the projective transformations of order 3 permuting the 3 points. More generally the Hessian pair is also defined in a similar way for triples of points of a rational curve, or triples of elements of a pencil. 5. The Hesse configuration is the configuration of inflection points of a plane cubic.
H:
6. The Hesse group is the group of automorphisms of the Hesse configuration, of order 216.
hexad A set of 6 points homaloid An element of a homaloidal system, in particular the image of a hyperplane under a Cremona transformation.
homaloidal 1. A homaloidal linear system of divisors is a linear system of grade 1, such as the image of the linear system of hyperplanes of projective space under a Cremona transformation. (Semple & Roth 1949, p.45) (Coolidge 1931, p. 442) When the linear system has dimension 2 or 3 it is called a homaloidal net or homaloidal web.
2. Homaloidal means similar to a flat plane. homographic 1. Having the same invariants. See Salmon (1879, p.232).
2. A homographic transformation is an automorphism of projective space over a field, in other words an element of the projective general linear group. (Salmon 1879, p.283) homography 1. An isomorphism between projective spaces induced by an isomorphism of vector spaces.
2. An axis of homography is a line associated to two related ranges of a conic. (Baker 1922b, vol 2, p. 16) homology 1. As in homology group 2. A collineation fixing all lines through a point (the center) and all points through a line (the axis) not containing the center. See elation. This terminology was introduced by Lie.
3. An automorphism of projective space with a hyperplane of fixed points (called the axis). It is called a harmonic homology if it has order 2, in which case it has an isolated fixed point called its center.
Hurwitz curve Hurwitz surface A Hurwitz curve is a complex algebraic curve of genus g>1 with the maximum possible number 84(g–1) of automorphisms; for example, the Klein quartic has genus 3 and 168 automorphisms.
hyperbolism Essentially a blow-up of a curve at a point. See Salmon (1879, p.175).
hypercusp A singularity of a curve of some multiplicity r whose tangent cone is a single line meeting the curve with order r+1. (Coolidge 1931, p. 18) hyperelliptic A hyperelliptic curve is a curve with a degree 2 map to the projective line.
hyperflex Same as point of undulation: a point of a curve where the tangent line has contact of order at least 4.
hyperosculating point A point where the tangent space meets with order higher than normal. hyperplane A linear subspace of projective space of codimension 1. Same as prime.
I:
index of speciality The dimension of the first cohomology group of the line bundle of a divisor D; often denoted by i or i(D). Semple & Roth (1949, p.381) infinitely near point A point on a blow up of a variety inflection inflexion An inflection is a point where the curvature vanishes, or in other words where the tangent line meets with order at least 3. Differential geometry uses the slightly stricter condition that the curvature changes sign at the point. See Salmon (1879, p. 32) inpolar quadric See (Baker 1923, vol 3, p. 52, 88) inscribed 1. Having vertices on a curve, as in inscribed figure.
I:
2. Tangent to some lines, as in inscribed circle.
integral An integral is (more or less) what is now called a closed differential form, or sometimes the result of integrating such a form. 1. An integral of the first kind is a holomorphic closed differential form.
2. An integral of the second kind is a meromorphic closed differential form with no residues.
3. An integral of the third kind is a meromorphic closed differential form whose poles are all simple.
4. A simple integral is a closed 1-form, or the result of integrating a 1-form.
5. A double integral is a closed 2-form, or the result of integrating a 2-form.
invariant (Noun) A polynomial in the coefficients of a homogeneous form, invariant under some group of linear transformations. See also covariant, contravariant, concomitant.
inversion An inversion is a transformation of order 2 exchanging the inside and outside of a circle. See Salmon (1879, p.103).
involute An involute is a curve obtained by unrolling a string around a curve. See Salmon (1879, p. 278).
involution 1. A transformation whose square is the identity. Cremona transformations that are involutions include Bertini involutions, Geiser involutions, and De Jonquières involutions.
irregularity The irregularity of a surface is the dimension of the space of holomorphic 1-forms on a non-singular projective surface; see Hodge number.
isologue Given a Cremona transformation T, the isologue of a point p is the set of points x such that p, x, T(x) are collinear. The point p is called the center of the isologue.
J:
Jacobian 1. The Jacobian variety of a curve 2. A Jacobian curve; see below Jacobian curve The locus of double points of curves of a net. (Semple & Roth 1949, p.115) Jacobian set The set of free double points of a pencil of curves. (Semple & Roth 1949, p.119) Jacobian system The linear system generated by Jacobian curves. (Semple & Roth 1949, p.117) join The join of two linear spaces is the smallest linear space containing both of them.
K:
kenotheme An intersection of n hypersurfaces in n-dimensional projective space. (Sylvester 1853, Glossary p. 543–548) Archaic. keratoid Horn-like. A keratoid cusp is one whose two branches curve in opposite direction; see ramphoid cusp. Salmon (1879) Kirkman point One of the 60 points lying on 3 of the Plücker lines associated with 6 points on a conic.
Klein 1. Felix Klein 2. The Klein icosahedral surface is a certain cubic surface 3. The Klein quartic is the curve x³y + y³z + z³x = 0.
Kronecker index The intersection number of two curves on a surface Kummer surface A quartic surface with 16 nodes.
L:
Laguerre net A net V of plane curves of some degree d such that the base locus of a generic pencil of V is the base locus of V together with d–1 collinear points (Dolgachev 2012, theorem 7.3.5) (Coolidge 1931, p. 423) lemniscate A lemniscate is a curve resembling a figure 8. See Salmon (1879, p.42) limaçon A limaçon is a curve traced by a point on a circle rolling around a similar circle. See Salmon (1879, p.43) line A line in projective space; in other words a subvariety of degree 1 and dimension 1.
L:
line coordinates Projective coordinates. See Salmon (1879, p. 7) linear Degree 1 linear system A linear system of divisors, given by the zeros of elements of a vector space of sections of a line bundle locus 1. A subset of projective space given by points satisfying some condition.
M:
manifold An algebraic manifold is a cycle of projective space, in other words a formal linear combination of irreducible subvarieties. Algebraic manifolds may have singularities, so their underlying topological spaces need not be manifolds in the sense of differential topology. Semple & Roth (1949, p.14–15) meet The meet of two sets is their intersection.
M:
Möbius tetrads Two tetrads such that the plane containing any three points of one tetrad contains a point of the other. (Baker 1922a, vol 1, p. 62) model 1. A variety whose points (or sometimes hyperplane sections) correspond to elements of some family. Similar to what is now called a parameter space or moduli space. 2. A model for a field extension K of a field k is a projective variety over k together with an isomorphism between K and its field of rational functions.
M:
modulus A function of algebraic varieties depending only on the isomorphism type; in other words, a function on a moduli space Moebius tetrads See Möbius tetrads monoid A surface of degree n with a point of multiplicity n–1. (Semple & Roth 1949, p.187) monoidal transformation A Cremona transformation of projective space generated by a family of monoids with the same point of multiplicity n–1. More generally a blow-up along a subvariety, called the center of the monoidal transformation. (Semple & Roth 1949, p.187) multiple A multiple point is a singular point (one with a non-regular local ring).
M:
multiplicity The multiplicity of a point on a hypersurface is the degree of the first non-vanishing coefficient of the Taylor series at the point. More generally one can define the multiplicity of any point of a variety as the multiplicity of its local ring. A point has multiplicity 1 if and only if it is non-singular.
N:
Néron–Severi group The Néron–Severi group is the group of divisors modulo numerical equivalence.
nest Two components (circuits) of a real algebraic curve are said to nest if one is inside the other. (Coolidge 1931) net 1. A 2-dimensional linear system. See "pencil" and "web". See also Laguerre net.
2. A harmonic net is a set of points on a line containing the harmonic conjugate of any point with respect to any other two points. (Baker 1922a, vol 1, p. 133) Newton polygon The convex hull of the points with coordinates given by the exponents of the terms of a polynomial.
N:
nodal A nodal tangent to a singular point of a curve is one of the lines of its tangent cone. (Semple & Roth 1949, p.26) node A singular point p of a hypersurface f = 0, usually with the determinant of the Hessian of f not zero at p. (Cayley 1852) node cusp A singularity of a curve where a node and a cusp coincide at the same point. (Salmon 1879, p. 207) normal 1. A subvariety of projective space is linearly normal if the linear system defining the embedding is complete; see rational normal curve.
N:
2. Orthogonal to the tangent space, such as a line orthogonal to the tangent space or the normal bundle.
3. A normal intersection is an intersection with the "expected" codimension (given a sum of codimensions). (Semple & Roth 1949, p.16) 4. Local rings are integrally closed; see normal scheme.
null-polarity A correlation given by a skew symmetric matrix. A null-polarity of the projective space of a vector space is essentially a non-degenerate skew-symmetric bilinear form, up to multiplication by scalars. See also polarity. (Semple & Roth 1949, p.9)
O:
octad A set of 8 points octic 1. (Adjective) Degree 8 2. (Noun) A degree 8 projective variety ombilic The curve at infinity which is the intersection of any sphere with the plane at infinity. All points of the ombilic are non-real.
order 1. Now called degree of an algebraic variety: the number of intersection points with a generic linear subspace of complementary dimension. (Semple & Roth 1949, p.15) 2. The order of a covariant or concomitant: its degree in the contravariant variables.
3. The order of a Cremona transformation is the order (degree) of its homaloids. (Semple & Roth 1949, p.46) ordinary An ordinary point of multiplicity m of a curve is one with m distinct tangent lines.
oscnode A double point of a plane curve that is also a point of osculation; in other words the two branches meet to order at least 3. (Cayley 1852) osculate Kiss; to meet with high order. See Salmon (1879, p. 356).
osculating plane A tangent plane of a space curve having third order contact with it.
outpolar quadric See (Baker 1922b, vol 2, p. 33) and (Baker 1923, vol 3, p. 52)
P:
Pappus 1. Pappus of Alexandria.
2. The Pappus configuration is the configuration of 9 lines and 9 points that occurs in Pappus's hexagon theorem.
parabolic point A point of a variety that also lies in the Hessian.
parallel 1. Meeting at the line or plane at infinity, as in parallel lines 2. A parallel curve is the envelope of a circle of fixed radius moving along another curve. (Coolidge 1931, p.192) partitivity The number of connected components of a real algebraic curve. See Salmon (1879, p.165).
P:
Pascal Short for Pascal line, the line determined by 6 points of a conic in Pascal's theorem pedal The pedal curve of C with respect to a pedal point P is the locus of points X such that the line through X orthogonal to PX is tangent to C. (Salmon 1879, p.96) pencil A 1-dimensional linear system. See pencil (mathematics) and Lefschetz pencil.
P:
pentad A set of 5 points pentahedron A union of 5 planes, in particular the Sylvester pentahedron of a cubic surface.
period The integral of a differential form over a submanifold perspectivity An isomorphism between two projective lines (or ranges) of projective space such that the lines joining each point of one line to the corresponding point of the other line all pass through a fixed point, called the center of the perspectivity or the perspector.
P:
perspector The center of a perspectivity perspectrix The line in Desargues theorem on which the intersections of pairs of sides of two perspective triangles lie pinch A pinch point is a singular point of a surface, where the two tangent planes of a point on a double curve coincide in a double plane, called the pinch plane. (Semple & Roth 1949, p.175) pippian Introduced by Cayley (1857). Now called the Cayleyan. See also quippian.
P:
Plücker 1. For Plücker characteristic see characteristic 2. A Plücker line is one of the 15 lines containing 4 of the 20 Steiner points associated to 6 points on a conic. The Plücker lines meet in threes at the 60 Kirkman points. (Dolgachev 2012, p.124) plurigenus Plural plurigenera The dth plurigenus of a variety is the dimension of the space of sections of the dth power of the canonical line bundle.
P:
point-star A family of lines with a common point polar 1. (Adjective) Related by a polarity 2. The polar conic is the zero set of the quadratic form associated to a polarity, or equivalently the set of self-conjugate points of the polarity.
3. (Noun) The first polar, second polar, and so on are varieties of degrees n–1, n–2, ... formed from a point and a hypersurface of degree n by polarizing the equation of the hypersurface. (Semple & Roth 1949, p.11) 4. A polar or polar line is the line corresponding to a point under a polarity of the projective plane.
polarity A correlation given by a symmetrical matrix, or a correlation of period 2. A polarity of the projective space of a vector space is essentially a non-degenerate symmetric bilinear form, up to multiplication by scalars. See also null-polarity. (Semple & Roth 1949, p.9) pole 1. The point corresponding to a hyperplane under a polarity.
2. A singularity of a rational function.
P:
poloconic polocubic poloquartic The poloconic (also called conic polar) of a line in the plane with respect to a cubic curve is the locus of points whose first polar is tangent to the line. (Dolgachev 2012, p. 156–157) polygonal A polygonal (or k-gonal) curve is a curve together with a map (of degree k) to the projective line. The degree of the map is called the gonality of the curve. When the degree is 1, 2, or 3 the curve is called rational, hyperelliptic, or trigonal.
P:
porism 1. A porism is a corollary, especially in geometry, as in Poncelet's porism. The precise meaning seems to be controversial.
2. An arrangement of geometrical figures (such as lines or circles) that are inscribed in one curve and circumscribed around another, as in Poncelet's porism or Steiner's porism. There seems to be some confusion about whether "porism" refers to the geometrical configuration or to the statement of the result.
P:
poristic Having either no solutions or infinitely many (Semple & Roth 1949, p.186). For example, Poncelet's porism and Steiner's porism imply that if there is one way to arrange lines or circles then there are infinitely many ways. postulated A postulated object (point, line, and so on) is an object in some larger space. For example, a point at infinity of projective space is a postulated point of affine space. (Baker 1922a, vol 1) postulation The postulation of a variety for some family is the number of independent conditions needed to force an element of the family to contain the variety. (Semple & Roth 1949, p.440) power of a point Laguerre defined the power of a point with respect to an algebraic curve of degree n to be the product of the distances from the point to the intersections with a circle through it, divided by the nth power of the diameter. He showed that this is independent of the choice of circle through the point. (Coolidge 1931, p.176) prime An old term for a hyperplane in a projective space. (Semple & Roth 1949, p.1) primal An old term for a projective hypersurface. (Semple & Roth 1949, p.10) projectivity An isomorphism between two projective lines (or ranges). A projectivity is a product of at most three perspectivities.
P:
propinquity A number depending on two branches at a point, defined by Coolidge (1931, p. 224).
proximate For proximate points see (Zariski 1935, p.9). pure All components are of the same dimension. Now called equidimensional. (Semple & Roth 1949, p.15)
Q:
quadratic transformation 1. A Cremona transformation of degree 2. A standard quadratic transformation is one similar to the map taking each coordinate to its inverse.
2. A monomial transformation with center a point, or in other words a blowup at a point.
quadric Degree 2, especially a degree 2 projective variety. Not to be confused with quantic or quartic.
Q:
quadrisecant A quadrisecant is a line meeting something in four points quadro-cubic, quadro-quartic A quadro-cubic or quadro-quartic transformation is a Cremona transformation such that the homaloids of the transformation have degree 2 and those of its inverse have degree 3 or 4. (Semple & Roth 1949, p.180, 188) quantic A homogeneous polynomial in several variables, now usually called a form. Not to be confused with quartic or quadric.
Q:
quarto-quartic A quarto-quartic transformation is a Cremona transformation such that the homaloids of the transformation and its inverse all have degree 4. (Semple & Roth 1949, p.187) quaternary Depending on four variables, as in quaternary form.
quartic Degree 4, especially a degree 4 projective variety. Not to be confused with quantic or quadric.
quintic Degree 5, especially a degree 5 projective variety.
quippian A quippian is a degree 5 class 3 contravariant of a plane cubic introduced by Cayley (1857) and discussed by Dolgachev (2012, p.157). See also pippian.
quotient ring The quotient ring of a point (or more generally a subvariety) is what is now called its local ring, formed by adding inverses to all functions that do not vanish identically on it.
R:
ramphoid Beak-like. A ramphoid cusp is one whose two branches curve in the same direction; see keratoid cusp. Salmon (1879, p.46) rank 1. The rank of a projective curve is the number of tangents to the curve meeting a generic linear subspace of codimension 2. (Semple & Roth 1949, p.84) 2. The rank of a projective surface is the rank of a curve given by the intersection of the surface with a generic hyperplane. (Semple & Roth 1949, p.193) See order, class, type.
R:
range 1. The set of all points on a line. (Coxeter 1969, p.242) 2. A labeled or finite ordered set of points on a line.
rational 1. Birational to projective space.
2. Defined over the rational numbers.
ray A line, especially one in a family of lines regular 1. A regular surface is one whose irregularity is zero.
2. Having no singularities; see regular local ring.
3. Symmetrical, as in regular polygon, regular polyhedron.
4. Defined everywhere, as in regular (birational) map.
regulus One of the two pencils of lines on a product of two projective lines or a quadric surface.
related Two ranges (labeled sets) of points on a line are called related if there is a projectivity taking one range to the other.
representative manifold A parameter space or moduli space for some family of varieties residual The residual intersection of two varieties consists of the "non-obvious" part of their intersection.
R:
resultant 1. The resultant of two polynomials, given by the determinant of the Sylvester matrix of two binary forms, that vanishes if they have a common root. 2. A Cremona transformation formed from n correlations of n-dimensional projective space. (Semple & Roth 1949, p.180) reverse Inverse (of a function or birational map) ruled Covered by lines, as in ruled surface. See also scroll.
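Sense 1 of resultant is concrete enough to compute directly. The following sketch is an added illustration (not from the glossary's sources; the helper name sylvester_matrix is our own): it builds the Sylvester matrix of two univariate polynomials from their coefficient lists and takes its determinant, which vanishes exactly when the polynomials share a root.

```python
import numpy as np

def sylvester_matrix(f, g):
    """Sylvester matrix of two polynomials given as coefficient lists,
    highest degree first. For deg f = m and deg g = n, the matrix is
    (m+n) x (m+n): n shifted rows of f's coefficients, then m of g's."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                       # shifted copies of f
        S[i, i:i + m + 1] = f
    for i in range(m):                       # shifted copies of g
        S[n + i, i:i + n + 1] = g
    return S

# Res(x^2 + 1, x^2 - 1): no common root, so the resultant is non-zero (4).
print(np.linalg.det(sylvester_matrix([1, 0, 1], [1, 0, -1])))  # 4.0
# Res(x^2 - 1, x - 1): the shared root x = 1 makes the resultant vanish.
print(np.linalg.det(sylvester_matrix([1, 0, -1], [1, -1])))    # 0.0
```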
S:
Sn Projective space of dimension n.
S:
Salmon conic The Salmon conic of a pair of plane conics is the locus of points such that the pairs of tangents to the two conics are harmonically conjugate. (Dolgachev 2012, p. 119) satellite 1. If a line meets a cubic curve in 3 points, the residual intersections of the tangents of these points with the cubic all lie on a line, called the satellite line of the original line. See Salmon (1879, p. 127).
S:
2. A certain plane curve of degree (n–1)(n–2) constructed from a plane curve of degree n and a generic point. (Coolidge 1931, p. 159–161) 3. For satellite points see (Zariski 1935, p.8). Possibly something to do with base points. scroll A ruled surface with an embedding into projective space so that the lines of the ruled surface are also lines of projective space.
S:
secant 1. A line intersecting a variety in 2 points, or more generally an n-dimensional projective space meeting a variety in n+1 points.
2. A secant variety is the union of the secants of a variety.
second kind All residues at poles are zero secundum An intersection of two primes (hyperplanes) in projective space. (Semple & Roth 1949, p.2) Segre 1. Named after either Beniamino Segre or Corrado Segre 2. A Segre variety or Segre embedding is the product of two projective spaces, or an embedding of this into a larger projective space.
3. The Segre cubic is a cubic hypersurface in 4-dimensional projective space.
self-conjugate self-polar 1. Incident with its image under a polarity. In particular the self-conjugate points of a polarity form the polar conic.
2. A self-conjugate (or self-polar) triangle (or triad) is a triangle such that each vertex corresponds to the opposite edge under a polarity.
S:
3. A self-conjugate tetrad is a set of 4 points such that the pole of each side lies on the opposite side. (Dolgachev 2012, p.123) septic septimic 1. (Adjective) Degree 7 2. (Noun) A degree 7 projective variety 3. (Noun) A degree 7 form sextactic point One of the 27 points of an elliptic curve of order dividing 6 but not 3. (Salmon 1879, p.132) sextic Degree 6, especially a degree 6 projective variety simple A simple point of a variety is a non-singular point. More generally a simple subvariety W of a variety V is one with a regular local ring, which means roughly that most points of W are simple points of V.
S:
singular Special in some way, including but not limited to the current sense of having a singularity skew Intersecting in a set that is either empty or of the "expected" dimension. For example skew lines in projective 3-space do not intersect, while skew planes in projective 4-space intersect in a point.
solid A 3-dimensional linear subspace of projective space, or in other words the 3-dimensional analogue of a point, line, or plane. (Semple & Roth 1949, p.4) special divisor An effective divisor whose first cohomology group (of the associated invertible sheaf) is non-zero.
spinode A cusp. (Cayley 1852), Salmon (1879, p.23) star A collection of lines (and sometimes planes and so on) with a common point, called the center of the star. (Baker 1922a, vol 1, p. 109) stationary point A cusp. See Salmon (1879, p.23).
Steiner Steinerian 1. Named after Jakob Steiner 2. A Steinerian is the locus of the singular points of the polar quadrics of a hypersurface. Salmon (1879) 3. A Steiner surface is a certain embedding of the projective plane into projective 3-space.
4. A Steiner point is one of the 20 points lying on 3 of the Pascal lines associated with 6 points on a conic.
Steiner–Hessian One of Cayley's names for the Cayleyan. See Salmon (1879, p. 352).
superabundance The superabundance of a divisor on a surface is the dimension of the first cohomology group of the corresponding sheaf.
surface An abstract surface together with an embedding into projective space.
S:
symmetroid The zeros of the determinant of a symmetric matrix of linear forms syntheme A partition of a set of 6 elements into 3 pairs, or an element of the symmetric group on 6 points of cycle shape 222. (Dolgachev 2012) system A family of algebraic sets in projective space; for example, a line system is a family of lines.
S:
syzygetic Paired. Opposite of azygetic, meaning unpaired. Example: syzygetic triad, syzygetic tetrad, syzygetic set, syzygetic pencil.
syzygy 1. A point is in syzygy with some other points if it is in the linear subspace generated by them. (Baker 1922a, vol 1, p. 33) A syzygy is a linear relation between points in an affine space.
2. An algebraic relation between generators of a ring, especially a ring of invariants or covariants.
3. A linear relation between generators of a module, or more generally an element of the kernel of a homomorphism of modules.
4. A global syzygy is a resolution of a module or sheaf.
T:
tacnode A tacnode is a point of a curve where two branches meet in the same direction. (Cayley 1852) tacnode-cusp A singularity of a plane curve where a tacnode and a cusp are combined at the same point. (Salmon 1879, p.207) tact-invariant An invariant of two curves that vanishes if they touch each other. See Salmon (1879, p.76).
tangent cone A tangent cone is a cone defined by the non-zero terms of smallest degree in the Taylor series at a point of a hypersurface.
tangential equation The tangential equation of a plane curve is an equation giving the condition for a line to be tangent to the curve. In other words it is the equation of the dual curve. It is not the equation of a tangent to a curve.
ternary Depending on three variables, as in ternary form tetrad A set of 4 points tetragram Synonym for complete quadrilateral tetrahedroid A tetrahedroid is a special kind of Kummer surface.
tetrahedron A geometric configuration consisting of 4 points and the 6 lines joining pairs. This is similar to the lines and infinite edges of a polyhedral tetrahedron, but in algebraic geometry one sometimes does not include the faces of the tetrahedron.
tetrastigm Synonym for complete quadrangle third kind All poles are simple (order 1) threefold 1. (Adjective) Three-dimensional 2. (Noun) A 3-dimensional variety torsal generator A generator of a scroll (ruled surface) that meets its consecutive generator. See (Semple & Roth 1949, p.204).
torse Developable surface.
transvectant An invariant depending on two forms. transversal A line meeting several other lines. For example, 4 generic lines in projective 3-space have 2 transversals meeting all of them.
triad A set of 3 points tricircular A tricircular curve is one that passes through the circular points at infinity with order 3.
tricuspidal Having three cusps trigonal A trigonal curve is one with a degree three map to the projective line. See hyperelliptic.
trihedral A set of 3 planes. A Steiner trihedral is a set of three tritangent planes of a cubic surface whose intersection point is not on the surface. (Semple & Roth 1949, p.152) trilinear coordinates Coordinates based on distances from the sides of a triangle.
trinodal Having three nodes tripartite Having three connected components. Salmon (1879, p.165) trisecant A line meeting a variety in 3 points. See trisecant identity.
tritangent Meeting something in 3 tangent points, such as a tritangent conic to a cubic curve or a tritangent plane of a cubic surface.
trope A trope is a singular (meaning special) tangent space. (Cayley 1869, p.202) The word is mostly used for a tangent space of a Kummer surface touching it along a conic.
T:
twisted A twisted cubic is a degree 3 embedding of the projective line in projective 3-space total A set of 5 partitions of a 6-element set into three pairs, such that no two elements of the total have a pair in common. For example, {(12)(36)(45), (13)(24)(56), (14)(26)(35), (15)(23)(46), (16)(25)(34)} (Dolgachev 2012) type The type of a projective surface is the number of tangent planes meeting a generic linear subspace of codimension 4. (Semple & Roth 1949, p.193)
U:
undulation A point of undulation of a curve is where the tangent meets the curve to fourth order; also called a hyperflex. See inflection point. (Salmon 1879, p.35, 211) unibranch Having only one branch at a point. For example, a cusp of a plane curve is unibranch, while a node is not.
unicursal A unicursal curve is one that is rational, in other words birational to the projective line. See Salmon (1879, p. 29).
unipartite Connected. See Salmon (1879, p.165) unirational 1. A correspondence is called unirational if it is generically injective, in other words a rational map. (Semple & Roth 1949, p.20) 2. A variety is called unirational if it is finitely covered by a rational variety.
united point A point in the intersection of the diagonal and a correspondence from a set to itself.
unode A double point of a surface whose tangent cone consists of one double plane. See binode.
V:
valence valency The valence or valency of a correspondence T on a curve is a number k such that the divisors T(P)+kP are all linearly equivalent. A correspondence need not have a valency. (Semple & Roth 1949, p.368) Veronese surface An embedding of the projective plane in 5-dimensional projective space.
V:
virtual An estimate for something that is often but not always correct, such as virtual genus, virtual dimension, and so on. If some number is given by the dimension of a space of sections of some sheaf, the corresponding virtual number is sometimes given by the corresponding Euler characteristic, and equal to the dimension when all higher cohomology groups vanish. See superabundance.
W:
web A 3-dimensional linear system. See "net" and "pencil". (Semple & Roth 1949, p.160) Weddle surface A quartic surface in projective space given by the locus of the vertex of a cone passing through 6 points in general position.
Weierstrass point A point on a curve where the dimension of the space of rational functions whose only singularity is a pole of some order at the point is higher than normal. Wirtinger sextic A degree 6 genus 4 plane curve with nodes at the 6 points of a complete quadrangle.
XYZ:
Zeuthen–Segre invariant The Zeuthen–Segre invariant is 4 less than the Euler characteristic of a non-singular projective surface.
**ELOB**
ELOB:
Elongin B is a protein that in humans is encoded by the ELOB gene.
Function:
Elongin B is a subunit of the transcription factor B (SIII) complex. The SIII complex is composed of elongins A/A2, B and C. It activates elongation by RNA polymerase II by suppressing transient pausing of the polymerase at many sites within transcription units. Elongin A functions as the transcriptionally active component of the SIII complex, whereas elongins B and C are regulatory subunits. Elongin A2 is specifically expressed in the testis, and capable of forming a stable complex with elongins B and C. The von Hippel-Lindau tumor suppressor protein binds to elongins B and C, and thereby inhibits transcription elongation. Two alternatively spliced transcript variants encoding different isoforms have been described for this gene.
Interactions:
TCEB2 has been shown to interact with: CUL2, TCEB1, and Von Hippel-Lindau tumor suppressor.
**Process miniaturization**
Process miniaturization:
Chemical process miniaturization refers to a philosophical concept within the discipline of process design that challenges the notion of "economy of scale" or "bigger is better". In this context, process design refers to the discipline taught primarily to chemical engineers. However, the emerging discipline of process miniaturization will involve integrated knowledge from many areas; examples include systems engineering and design, remote measurement and control using intelligent sensors, biological process systems engineering, and advanced manufacturing robotics.
Process miniaturization:
One of the challenges of chemical engineering has been to design processes based on chemical laboratory-scale methods, and to scale up processes so that products can be manufactured that are economically affordable.
Process miniaturization:
As a process becomes larger, more product can be produced per unit time, so when a process technology becomes established or mature, and operates consistently without upsets or “downtime”, more economic efficiency can be gained from scale-up. Given a fixed price for the feedstock (e.g. the price per barrel of crude oil), the product cost can be decreased using a larger scale process because the capital investment and operational costs do not normally increase linearly with scale. For example, the capacity or volume of a cylindrical vessel of fixed height increases in proportion to the square of its radius, while the material in the vessel wall grows only linearly with the radius, so the cost of materials per unit volume decreases as the vessel is scaled up. But the costs to design and fabricate the vessel have traditionally been less sensitive to scale. In other words, one can design a small vessel and fabricate it for about the same cost as the larger vessel. In addition, the cost to control and operate a process (or a process unit component) does not change substantially with the scale. For example, if it takes one operator to operate a small process, that same operator can probably operate the larger process.
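To make the geometric argument concrete, here is a small added sketch (not from the original text; the fixed-height cost model, with material cost proportional to surface area, is a deliberately crude assumption):

```python
import math

def cost_per_unit_volume(radius_m, height_m=10.0):
    """Relative material cost per unit capacity of a cylindrical vessel.
    Material is taken as proportional to wall plus end area (illustrative)."""
    volume = math.pi * radius_m**2 * height_m
    area = 2 * math.pi * radius_m * height_m + 2 * math.pi * radius_m**2
    return area / volume  # equals 2/r + 2/h

for r in (1, 2, 4, 8):
    print(f"radius {r} m: relative cost per unit volume = {cost_per_unit_volume(r):.3f}")
# Doubling the radius roughly halves the material cost per unit of capacity.
```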
Process miniaturization:
The economy of scale concept, as taught to chemical engineers, has led to the notion that one of the objectives of process development and design is to achieve “economy of scale” by scaling-up to the largest possible size processing plant so that the product cost can be economically affordable. This disciplinary philosophy has been reinforced by example designs in the petroleum refining and petrochemical industries, where feedstocks have been transported as fluids in pipelines, large tanker ships, and railcars.
Process miniaturization:
Fluids, by definition, are materials that flow and can be transferred using pumps or gravity. Therefore, large pumps, valves, and pipelines exist to transfer large amounts of fluids in the process industries. Process miniaturization, in contrast, will involve processing of large amounts of solids from renewable biomass resources; therefore, new thinking towards process designs optimized for solids processing will be required.
Process miniaturization:
The concept of a microprocess has been defined by S. S. Sofer while a professor at the New Jersey Institute of Technology. A microprocess has the following characteristics: 1) Portability 2) Capable of being mass produced using advanced robotic manufacturing methods 3) Approaching total automation 4) A new technology
Miniaturization of Electronic Devices:
The microprocess design philosophy has been largely envisioned by historical analysis of the role that component miniaturization has played in the information technology industry. It is the evolution of the miniaturization of computer hardware that has enabled the thinking about process miniaturization, in the chemical engineering design context. Rather than the traditional design objective as “scale-up” of processing to one centralized large processing plant (e.g. the mainframe), one can envision achieving the economic objectives using a “scale-out” philosophy (e.g. multiple microcomputers).
Miniaturization of Electronic Devices:
Electrical and electronic devices have always played an important role in chemical process plant automation. However, initially, simple thermometers such as those containing mercury, and pressure gauges which were completely mechanical in nature were used to monitor process conditions (such as the temperature, pressure and level in a chemical reactor). Process conditions were adjusted based largely on a human operator's heuristic knowledge of the process behavior. Even with electronic automation installed, many processes still require substantial operator interaction, particularly during the start-up phase of the process, or during deployment of a new technology.
Miniaturization of Electronic Devices:
Process control of the future will involve the widespread utilization of intelligent sensors, and mass-produced intelligent miniaturized devices such as programmable logic controllers that communicate wirelessly to process actuators. Since these devices will be miniaturized to reduce manufacturing cost, this enables the devices to be embedded in structures so that they become invisible to the casual observer. The cost of such sensors will likely be reduced to a point where they either "function or don't function". When that cost threshold has been reached, the repair procedure will be to disable the sensor, and to actuate a redundant working sensor. In other words, entire complex control systems will become so low-cost that repair will not be economically viable.
Miniaturization of Electronic Devices:
The intelligence of the process will be developed using process simulation models based on scientific fundamentals. Heuristic rules will be programmed into the micro-controllers, which will largely eliminate the need for constant monitoring by human heuristic knowledge of the process behavior. Processes which can automatically self-optimize through advanced algorithms developed by microprocess engineers will be embedded, with the algorithms accessible only to the knowledge-owner. This will enable the construction of large networks of autonomous microprocesses.
Process Miniaturization for Knowledge-based Businesses:
Advanced process control systems for process miniaturization will increase the need for controlling the security and ownership of process intelligence in a knowledge-based business. It will become more difficult to control intellectual property through the traditional method of patents; therefore, trademarks, brand recognition, and copyright laws will play a more important role in value security for knowledge-based businesses of the future.
Process Miniaturization for Knowledge-based Businesses:
Techno-economic analysis, as taught in traditional chemical process design, will also dramatically shift from a conservative viewpoint of utilization of historical trend economics and cash flow analysis. Economic viability of a given enterprise will be more linked to acquisition of real-time economic information, which can rapidly change based on empirical observations created by an emerging discipline of microprocess development systems; therefore, the models will be more based on "what can be?" rather than "what has the past shown?"
Process Miniaturization for Future Societies Based on Renewable Materials:
Rather than one large central plant, that has to be fed a large amount of feedstock, such as a refinery that can unload a tanker shipment of petroleum if located next to an ocean, the discipline of process miniaturization envisions the distribution of the process technology to areas where the feedstock is not readily transportable in large quantities to a large centralized processing plant. The miniaturized process technology may simply involve transformation of solid biomass materials from multiple distributed microprocesses into more easily manageable fluids. The fluids can then be transported or distributed to larger-scale intelligent processing nodes using conventional fluid transport technology.
Process Miniaturization for Future Societies Based on Renewable Materials:
Historically, small processes or microprocesses per se have always existed. For example, small vineyards and breweries have produced feedstock, processed it, and stored product in what could be considered a “microprocess” when compared to processes designed based on the petrochemical industry model or, for example, large-scale production of beer. Small villages in India and other places in the world have learned to produce biogas from animal manure in what could be considered small-scale microprocesses for the production of energy. However, microprocesses and process miniaturization as a design philosophy include the notion of approaching total automation, and constitute a new technology which has been enabled by computer hardware miniaturization, for example, the microprocessor. It is easy to envision processes which can be mass-produced and transported. For example, many appliances such as air conditioners, domestic washing machines, and refrigerators could be considered microprocesses.
Process Miniaturization for Future Societies Based on Renewable Materials:
The design philosophy of process miniaturization envisions that “scale-down” of complex processes involving multiple process unit operations can be achieved, and that economy of scale will be more related to the size of a network of distributed autonomous microprocesses. Since failure of one autonomous microprocess does not cause shutdown of the entire network, microprocesses will lead to more economically efficient, robust, and stable production of products that have traditionally been produced for a petroleum-based society.
Process Miniaturization for Future Societies Based on Renewable Materials:
Since fossil fuels by definition are being consumed and are non-renewable, future fuel and materials will be based on renewable biomass.
Process Miniaturization for Microbial Fuel Cells:
The conversion of biomass into energy is perhaps more challenging to the technologist than energy from fossil fuels. Water, dissolved organic and inorganic compounds, and solid particulates of various sizes can be present in biomass processes. It is perhaps the development of microbial fuel cells where the philosophical thinking of process miniaturization will play a wider role. Distribution of knowledge, in a fashionable, intriguing style through miniaturized devices, can be substantially enhanced (accelerated) by low-power devices (such as smartphones). A rethinking of "what is a power plant?" can create enormous innovations, given recent advances in membrane materials of construction, immobilized whole cell methodologies, metabolic engineering, and nanotechnology.
Process Miniaturization for Microbial Fuel Cells:
The challenges of microbial fuel cells relate mainly to finding lower cost manufacturing methods, materials of construction, and systems design. Bruce Logan of Penn State University has described these challenges in several research articles and reviews.
However, even with existing designs which generate low power, there are applications in distributing electrical recharging systems to remote areas of Africa, where smartphones can enable access to the vast information of the internet and provide lighting. These systems can run on agricultural, animal and human waste streams using naturally occurring bacteria.
Process Miniaturization for mini Nuclear Reactors:
Nuclear power is considered "green technology" in that it does not produce carbon dioxide, a greenhouse gas, as do traditional natural gas or coal-fired power plants. The economics of the deployment of mini nuclear reactors has been discussed in an article in "The Economist".
Process Miniaturization for mini Nuclear Reactors:
The advantages of mini nuclear reactors have also been discussed by Secretary of Energy Steven Chu. As discussed by Chu, the reactors would be manufactured in a factory-like setting and then transported intact by rail or ship to different parts of the country or world. Economy of scale by size is replaced by economy of scale by number. Many companies are not willing to accept the risk of investing $8 billion to $9 billion in a single large reactor, so one of the most attractive features of process miniaturization is a reduction in the risk of capital investment, and the possibility of recovering investment by reselling and relocating a functional turn-key microprocess to a new owner, a major economic advantage of the portability of microprocesses.
**Living Reviews (journal series)**
Living Reviews (journal series):
Living Reviews is an open access journal series, which publishes regularly updated peer-reviewed review articles in various fields of science. Its concept of "living" articles takes advantage of web-based electronic publishing and allows authors to update their articles with the latest developments and research findings.
Living Reviews (journal series):
The concept of Living Reviews was developed by Bernard Schutz and Jennifer Wheary, who started the first journal, Living Reviews in Relativity, in 1998 at the Max Planck Institute for Gravitational Physics. The series now contains three entries: Living Reviews in Relativity, Living Reviews in Solar Physics, and Living Reviews in Computational Astrophysics. In June 2015, the series was sold to Springer International Publishing AG.
**Field lens**
Field lens:
In imaging optics, a field lens is a positive-powered lens or group of lenses that comes after the objective lens and before the image plane or the eyepiece, serving to change the size of the image or to provide image-space telecentricity. It is used to reduce detector size and, in instances needing a high optical gain factor, it can correct aberrations through its several elements. Optical systems that feature multiple image planes risk a particular problem: succeeding relay lenses may fail to capture the cone of light from the primary objective lens. The field lens, by behaving as a variably angled lens, solves this problem by bending or refracting the cone of light back into the succeeding relay lens.
Field lens:
In X-ray microscopy, the field lens is used to produce parallel and homogeneous illumination of the stencil.
**EaseUS Partition Master**
EaseUS Partition Master:
EaseUS Partition Master is a disk partition software that allows users to manage hard disk drives or solid-state drives on the 32-bit or 64-bit Windows PCs and Windows servers.
Overview:
Created by EaseUS Software, the program was initially released in 2006. It comes in three versions: Home, Professional, and Enterprise. It performs tasks that the Windows default device manager doesn't offer, including disk surface tests, hiding partitions, rebuilding the MBR, and creating a WinPE boot disc image. In 2021, version 16.5 of EaseUS Partition Master added an extend/shrink partition function.
**Signal velocity**
Signal velocity:
The signal velocity is the speed at which a wave carries information. It describes how quickly a message can be communicated (using any particular method) between two separated parties. No signal velocity can exceed the speed of a light pulse in a vacuum (by Special Relativity). Signal velocity is usually equal to group velocity (the speed of a short "pulse" or of a wave-packet's middle or "envelope"). However, in a few special cases (e.g., media designed to amplify the front-most parts of a pulse and then attenuate the back section of the pulse), group velocity can exceed the speed of light in vacuum, while the signal velocity will still be less than or equal to the speed of light in vacuum. In electronic circuits, signal velocity is one member of a group of five closely related parameters. In these circuits, signals are usually treated as operating in TEM (Transverse ElectroMagnetic) mode. That is, the fields are perpendicular to the direction of transmission and perpendicular to each other. Given this presumption, the quantities: signal velocity, the product of dielectric constant and magnetic permeability, characteristic impedance, inductance of a structure, and capacitance of that structure, are all related such that if you know any two, you can calculate the rest. In a uniform medium if the permeability is constant, then variation of the signal velocity will be dependent only on variation of the dielectric constant. In a transmission line, signal velocity is the reciprocal of the square root of the capacitance-inductance product, where inductance and capacitance are typically expressed as per-unit length. In circuit boards made of FR-4 material, the signal velocity is typically about six inches (15 cm) per nanosecond, or 6.562 ps/mm. In circuit boards made of Polyimide material, the signal velocity is typically about 16.3 cm per nanosecond or 6.146 ps/mm. In these boards, permeability is usually constant and dielectric constant often varies from location to location, causing variations in signal velocity. As data rates increase, these variations become a major concern for computer manufacturers.
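The numbers in this paragraph are easy to check. The following sketch is an added illustration (the εr value for FR-4 and the per-metre L and C figures are assumed typical values, since they vary by vendor and frequency); it computes signal velocity both from the dielectric constant and from the per-unit-length inductance-capacitance product:

```python
import math

C_MM_PER_NS = 299.792458  # speed of light in vacuum, in mm/ns

def v_from_dielectric(eps_r, mu_r=1.0):
    """Signal velocity (mm/ns) from relative permittivity and permeability."""
    return C_MM_PER_NS / math.sqrt(eps_r * mu_r)

def v_from_lc(l_per_m, c_per_m):
    """Signal velocity (m/s) as the reciprocal square root of the
    per-unit-length inductance-capacitance product."""
    return 1.0 / math.sqrt(l_per_m * c_per_m)

v = v_from_dielectric(3.9)  # assumed typical FR-4 permittivity
print(f"FR-4: {v:.1f} mm/ns, delay {1000.0 / v:.2f} ps/mm")  # ~6.6 ps/mm

# A 50-ohm line with these (assumed) per-metre values gives the same speed:
print(f"{v_from_lc(329e-9, 131.6e-12):.3e} m/s")  # ~1.52e8 m/s
```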
Signal velocity:
$v_s = \frac{c}{\sqrt{\varepsilon_r \mu_r}} \approx \frac{c}{\sqrt{\varepsilon_r}}$, where $\varepsilon_r$ is the relative permittivity of the medium, $\mu_r$ is the relative permeability of the medium, and $c$ is the speed of light in vacuum. The approximation shown is used in many practical contexts because for most common materials $\mu_r \approx 1$.
**Flyte (chocolate bar)**
Flyte (chocolate bar):
Flyte was a chocolate bar manufactured by Mars, Incorporated weighing 45 grams. The product was introduced in 1996. Each bar came wrapped in two individual halves. It consisted of a chocolatey, whipped nougat-style centre coated in milk chocolate. It was essentially the same as a UK Milky Way bar before the filling in Milky Way bars was changed from chocolate to vanilla flavour in 1993. The bar was discontinued in 2015.
**Industrial water treatment**
Industrial water treatment:
There are many uses of water in industry and, in most cases, the used water also needs treatment to render it fit for re-use or disposal. Raw water entering an industrial plant often needs treatment to meet tight quality specifications to be of use in specific industrial processes. Industrial water treatment encompasses all these aspects which include industrial wastewater treatment, boiler water treatment and cooling water treatment.
Overview:
Water treatment is used to optimize most water-based industrial processes, such as heating, cooling, processing, cleaning, and rinsing so that operating costs and risks are reduced. Poor water treatment lets water interact with the surfaces of pipes and vessels which contain it. Steam boilers can scale up or corrode, and these deposits will mean more fuel is needed to heat the same amount of water. Cooling towers can also scale up and corrode, but left untreated, the warm, dirty water they can contain will encourage bacteria to grow, and Legionnaires' disease can be the fatal consequence. Water treatment is also used to improve the quality of water contacting the manufactured product (e.g., semiconductors) and/or can be part of the product (e.g., beverages, pharmaceuticals). In these instances, poor water treatment can cause defective products. In many cases, effluent water from one process can be suitable for reuse in another process if given suitable treatment. This can reduce costs by lowering charges for water consumption, reduce the costs of effluent disposal because of reduced volume, and lower energy costs due to the recovery of heat in recycled wastewater.
Objectives:
Industrial water treatment seeks to manage four main problem areas: scaling, corrosion, microbiological activity and disposal of residual wastewater. Boilers do not have many problems with microbes as the high temperatures prevent their growth.
Objectives:
Scaling occurs when the chemistry and temperature conditions are such that the dissolved mineral salts in the water are caused to precipitate and form solid deposits. These can be mobile, like a fine silt, or can build up in layers on the metal surfaces of the systems. Scale is a problem because it insulates and heat exchange becomes less efficient as the scale thickens, which wastes energy. Scale also narrows pipe widths and therefore increases the energy used in pumping the water through the pipes.
Objectives:
Corrosion occurs when the parent metal oxidises (as iron rusts, for example) and gradually the integrity of the plant equipment is compromised. The corrosion products can cause similar problems to scale, but corrosion can also lead to leaks, which in a pressurised system can lead to catastrophic failures.
Objectives:
Microbes can thrive in untreated cooling water, which is warm and sometimes full of organic nutrients as wet cooling towers are very efficient air scrubbers. Dust, flies, grass, fungal spores, and others collect in the water and create a sort of "microbial soup" if not treated with biocides. Many outbreaks of the deadly Legionnaires' Disease have been traced to unmanaged cooling towers, and the UK has had stringent Health & Safety guidelines concerning cooling tower operations for many years, as have governmental agencies in other countries. Certain processes, such as tanning and paper making, use heavy metals such as chromium. Although most of the metal is consumed in the process, some remains and is carried away with the water. Heavy metals in drinking water are toxic when consumed, so even the smallest amount must be removed.
Disposal of residual industrial wastewaters:
Disposal of residual wastewaters from an industrial plant is a difficult and costly problem. Most petroleum refineries, chemical and petrochemical plants have onsite facilities to treat their wastewaters so that the pollutant concentrations in the treated wastewater comply with the local and/or national regulations regarding disposal of wastewaters into sewage treatment plants or into rivers, lakes or oceans.
Processes:
Two of the main processes of industrial water treatment are boiler water treatment and cooling water treatment. A lack of proper water treatment can lead to the reaction of solids and bacteria within pipe work and boiler housing. Steam boilers can suffer from scale or corrosion when left untreated. Scale deposits can lead to weak and dangerous machinery, while additional fuel is required to heat the same level of water because of the rise in thermal resistance. Poor quality dirty water can become a breeding ground for bacteria such as Legionella causing a risk to public health.
Processes:
Corrosion in low pressure boilers can be caused by dissolved oxygen, acidity and excessive alkalinity. Water treatment therefore should remove the dissolved oxygen and maintain the boiler water with the appropriate pH and alkalinity levels. Without effective water treatment, a cooling water system can suffer from scale formation, corrosion and fouling and may become a breeding ground for harmful bacteria. This reduces efficiency, shortens plant life and makes operations unreliable and unsafe.
Processes:
Boiler water treatment Boiler water treatment is a type of industrial water treatment focused on removal or chemical modification of substances potentially damaging to the boiler. Varying types of treatment are used at different locations to avoid scale, corrosion, or foaming. External treatment of raw water supplies intended for use within a boiler is focused on removal of impurities before they reach the boiler. Internal treatment within the boiler is focused on limiting the tendency of water to dissolve the boiler, and maintaining impurities in forms least likely to cause trouble before they can be removed from the boiler in boiler blowdown. A deaerator is used to reduce oxygen and nitrogen in boiler feed water applications.
Processes:
Cooling water treatment Water cooling is a method of heat removal from components of machinery and industrial equipment. Water may be a more efficient heat transfer fluid where air cooling is ineffective. In most occupied climates water offers the thermal conductivity advantages of a liquid with unusually high specific heat capacity and the option of evaporative cooling. Low cost often allows rejection as waste after a single use, but recycling coolant loops may be pressurized to eliminate evaporative loss and offer greater portability and improved cleanliness. Unpressurized recycling coolant loops using evaporative cooling require a blowdown waste stream to remove impurities concentrated by evaporation. Disadvantages of water cooling systems include accelerated corrosion and maintenance requirements to prevent heat transfer reductions from biofouling or scale formation. Chemical additives to reduce these disadvantages may introduce toxicity to wastewater. Water cooling is commonly used for cooling automobile internal combustion engines and large industrial facilities such as nuclear and steam electric power plants, hydroelectric generators, petroleum refineries and chemical plants.
Technologies:
Advancements in water treatment technology have affected all areas of industrial water treatment. Although mechanical filtration, such as reverse osmosis, is widely employed to filter contaminants, other technologies including the use of ozone generators, wastewater evaporation, electrodeionization and bioremediation are also able to address the challenges of industrial water treatment.
Ozone treatment is a process in which ozone gas is injected into waste streams as a means to reduce or eliminate the need for water treatment chemicals or sanitizers that may be hazardous, including chlorine.
Technologies:
Chemical treatment Chemical treatment utilizes the addition of chemicals to make industrial water suitable for use or discharge. This includes processes like chemical precipitation, chemical disinfection, advanced oxidation processes (AOPs), ion exchange, and chemical neutralization. AOPs are attractive in the treatment of hazardous wastewater due to their high oxidation potential and degradation performance. In AOPs, oxidants like Fenton's reagent, ozone or hydrogen peroxide are introduced into the wastewater to degrade harmful substances in industrial water before discharge.
Technologies:
Physical treatment Physical treatment involves the separation of solids from industrial wastewater, either through filtration or dissolved air flotation. Filtration involves the use of membranes or filters (such as mechanical sand filters) to achieve solid-liquid separation. In dissolved air flotation, pressurized air is pumped into the wastewater; the pressurized air forms small bubbles which adhere to the suspended matter, causing it to float to the surface of the water, where it can be removed by a skimming device or an overflow.
Technologies:
Biological treatment Biological treatment is needed to treat wastewater containing biodegradable elements. It is commonly used in municipal and industrial wastewater management facilities and usually consists of adding common bacteria and other microbes, mostly environmentally friendly, to treat the water. It is a sustainable practice that has been successful for over a century.
Technologies:
Slow sand filters use a biological process to purify raw water to produce potable water. They work by using a complex biological film that grows naturally on the surface of sand. This gelatinous biofilm called the hypogeal layer or Schmutzdecke is located in the upper few millimetres of the sand layer. The surface biofilm purifies the water as it flows through the layer, the underlying sand provides a support medium for the biological treatment layer. The Schmutzdecke consists of bacteria, fungi, protozoa, rotifera and a range of aquatic insect larvae. As the biofilm ages, more algae may develop and larger aquatic organisms including bryozoa, snails and Annelid worms may be present. As water passes through the hypogeal layer, particles of matter are trapped in the mucilaginous matrix and soluble organic material is adsorbed. The contaminants are metabolised by the bacteria, fungi and protozoa. Slow sand filters are typically 1–2 metres deep, and have a hydraulic loading rate of 0.2–0.4 cubic metres per square metre per hour. Filters lose their performance as the biofilm thickens and reduces the rate of flow. The filter is refurbished by removing the biofilm and a thin upper layer of sand. Water is decanted back into the filter and re-circulated to enable a new biofilm to develop. Alternatively wet harrowing involves stirring the sand and flushing the biolayer through for disposal.
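The quoted loading figures translate directly into sizing arithmetic. A minimal added sketch (the 50 m³/h demand is an assumed example value, not from the original text):

```python
def filter_bed_area_m2(flow_m3_per_h, loading_m3_per_m2_h):
    """Slow sand filter bed area required for a given flow and
    hydraulic loading rate (volume per unit area per hour)."""
    return flow_m3_per_h / loading_m3_per_m2_h

flow = 50.0  # m^3/h of raw water to be treated (assumed demand)
for loading in (0.2, 0.4):  # the loading range quoted above
    area = filter_bed_area_m2(flow, loading)
    print(f"loading {loading} m3/m2/h -> {area:.0f} m2 of filter bed")
# Across the quoted range, 50 m^3/h needs roughly 125-250 m^2 of bed.
```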
Technologies:
Ultraviolet irradiation Ultraviolet (UV) disinfection technology has become a common water treatment technology over the past two decades due to its ability to provide disinfected water without the use of harmful chemicals. The UV-C portion of the spectrum, wavelengths from 200 nm to 280 nm, is used for disinfection. UV-C photons penetrate cells and damage the nucleic acid, rendering them incapable of reproduction, or microbiologically inactive.
Technologies:
Process water treatment technology Process water is water that is used in a variety of manufacturing operations, such as: coating and plating; rinsing and spraying; washing, etc. Municipal and ground water often contain dissolved minerals which make the water unsuitable for these processes because they would affect product quality and/or increase manufacturing costs. A proper incoming water treatment system can remedy these issues and create the right water conditions for specific industrial processes.
**Facies (medical)**
Facies (medical):
In medical contexts, a facies is a distinctive facial expression or appearance associated with a specific medical condition. The term comes from Latin for "face". As a fifth declension noun, facies can be both singular and plural.
Types:
Examples include: Hippocratic facies – eyes are sunken, temples collapsed, nose is pinched with crusts on the lips, and the forehead is clammy Moon face (also known as "Cushingoid facies") – Cushing's syndrome Elfin facies – Williams syndrome Potter facies – oligohydramnios Mask like facies – parkinsonism Leonine facies – lepromatous leprosy or craniometaphyseal dysplasia Mitral facies – mitral stenosis Amiodarone facies (deep blue discoloration around malar area and nose) Acromegalic facies – acromegaly Flat facies – Down syndrome Marfanoid facies – Marfan's syndrome Snarling facies – myasthenia gravis Myotonic facies – myotonic dystrophy Torpid facies – myxoedema Mouse facies – chronic kidney failure Plethoric facies – Cushing's syndrome and polycythemia vera Bird facies – Pierre Robin sequence Ashen grey facies – myocardial infarction Gargoyle facies – Hurler's syndrome Monkey facies – marasmus Hatchet facies – myotonia atrophica Gorilla-like face – acromegaly Bovine facies (or cow face) – craniofacial dysostosis or crouzon syndrome Marshall halls facies – hydrocephalus Frog face – intranasal disease Coarse facies – many inborn errors of metabolism Adenoid facies – developmental facial traits caused by adenoid hypertrophy, nasal airway obstruction and mouthbreathing; really a form of long face syndrome.
Types:
Lion-like facies – involvement of craniofacial bones in Paget disease of Bone Chipmunk facies – beta thalassemia Treacher Collins syndrome – deformities of the ears, eyes, cheekbones, and chin
Other disorders associated with syndromic facies:
Pitt–Hopkins syndrome. Beta thalassemia is associated with distinctive facial features due to ineffective erythropoiesis, which causes marrow hyperplasia or expansion and bony changes, including in the bones of the face; this causes craniofacial protrusions.
Mowat–Wilson syndrome; Snijders Blok-Campeau syndrome.
**Voder**
Voder:
The Bell Telephone Laboratory's Voder (from Voice Operating Demonstrator) was the first attempt to electronically synthesize human speech by breaking it down into its acoustic components. It was invented by Homer Dudley in 1937–1938 and developed from his earlier work on the vocoder. The quality of the speech was limited; however, it demonstrated the synthesis of the human voice, which became one component of the vocoder used in voice communications for security and to save bandwidth.
Voder:
The Voder synthesized human speech by imitating the effects of the human vocal tract. The operator could select one of two basic sounds by using a wrist bar. A buzz tone generated by a relaxation oscillator produced the voiced vowels and nasal sounds, with the pitch controlled by a foot pedal. A hissing noise produced by a white noise tube created the sibilants (voiceless fricative sounds). These initial sounds were passed through a bank of 10 band-pass filters that were selected by keys; their outputs were combined, amplified and fed to a loudspeaker. The filters were controlled by a set of keys and a foot pedal to convert the hisses and tones into vowels, consonants, and inflections. Additional special keys were provided to make the plosive sounds such as "p" or "d", and the affricative sounds of the "j" in "jaw" and the "ch" in "cheese". This was a complex machine to operate. After months of practice, a trained operator could produce recognizable speech.
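The signal path described above maps naturally onto a few lines of modern DSP code. The sketch below is a loose added reconstruction, not Dudley's circuit: the sample rate, band edges, filter order, and key gains are all assumed for illustration (scipy supplies the band-pass filters).

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 16000  # sample rate in Hz (assumed)

def source(voiced, pitch_hz, n):
    """Buzz (a pulse train at the pedal-controlled pitch) for voiced
    sounds, or hiss (white noise) for the sibilants."""
    if voiced:
        period = int(FS / pitch_hz)
        return (np.arange(n) % period == 0).astype(float)
    return np.random.randn(n)

def voder_frame(voiced, pitch_hz, key_gains, dur=0.2):
    """Pass the source through ten band-pass filters and sum the outputs;
    key_gains plays the role of the operator's ten keys."""
    n = int(FS * dur)
    x = source(voiced, pitch_hz, n)
    edges = np.geomspace(200.0, 6000.0, 11)  # 10 adjacent bands (assumed)
    out = np.zeros(n)
    for gain, lo, hi in zip(key_gains, edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        out += gain * lfilter(b, a, x)
    return out

# A vowel-like frame: a 120 Hz buzz with two emphasized formant bands.
frame = voder_frame(True, 120.0, [0, 1.0, 0.8, 0, 0, 0.6, 0.3, 0, 0, 0])
```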
Voder:
Performances on the Voder were featured at the 1939 New York World's Fair and in San Francisco. Twenty operators were trained by Helen Harper, particularly noted for her skill with the machine. The machine said the words "Good afternoon, radio audience." The Voder was developed from research into compression schemes for transmission of voice on copper wires and for voice encryption. In 1948, Werner Meyer-Eppler recognized the capability of the Voder machine to generate electronic music, as described in Dudley's patent.
Voder:
Whereas the vocoder analyzes speech, transforms it into electronically transmitted information, and recreates it, the voder generates synthesized speech by means of a console with fifteen touch-sensitive keys and a pedal. It basically consists of the "second half" of the vocoder, but with manual filter controls, and requires a highly trained operator.
**Central core disease**
Central core disease:
Central core disease (CCD), also known as central core myopathy, is an autosomal dominantly inherited muscle disorder present from birth that negatively affects the skeletal muscles. It was first described by Shy and Magee in 1956. It is characterized by the appearance of the myofibril under the microscope.
Signs and symptoms:
The symptoms of CCD are variable, but usually involve hypotonia (decreased muscle tone) at birth, mild delay in child development (highly variable between cases), weakness of the facial muscles, and skeletal malformations such as scoliosis and hip dislocation. CCD is usually diagnosed in infancy or childhood, but some patients remain asymptomatic until adulthood to middle age. While generally not progressive, there appears to be a growing number of people who do experience a slow clinically significant progression of symptomatology. These cases may be due to the large number of mutations causing ryanodine receptor malfunction, and with continued research may be found to be clinical variants.
Pathophysiology:
Central core disease is inherited in an autosomal dominant fashion. Most cases have demonstrable mutations in the ryanodine receptor type 1 (RYR1) gene, which are often de novo (newly developed). People with CCD are at increased risk for developing malignant hyperthermia (MH) when receiving general anesthesia.
Diagnosis:
The diagnosis is made based on the combination of typical symptoms and the appearance on biopsy (tissue sample) from muscle. The name derives from the typical appearance of the biopsy on light microscopy, where the muscle cells have cores that are devoid of mitochondria and specific enzymes. Respiratory insufficiency develops in a small proportion of cases. Creatine kinase levels tend to be normal, and electromyography (EMG) shows short-duration, short-amplitude motor unit action potentials.
Treatment:
There is no specific treatment but triggering anesthetics are avoided and relatives are screened for RYR1 mutations as these may make them susceptible to MH.
**Cognate object**
Cognate object:
In linguistics, a cognate object (or cognate accusative) is a verb's object that is etymologically related to the verb. More specifically, the verb is one that is ordinarily intransitive (lacking any object), and the cognate object is simply the verb's noun form. For example, in the sentence He slept a troubled sleep, sleep is the cognate object of the verb slept. This construction also has a passive form. The passive is A troubled sleep was slept by him. Cognate objects exist in many languages, including various unrelated ones; for example, they exist in Arabic, Chichewa, English, German, Ancient Greek, Hebrew, Icelandic, Korean, Latin, and Russian.
Examples:
In English, the construction can occur with a number of intransitive verbs, which then become transitive:

He slept a troubled sleep. (He slept, and his sleep was troubled.)
He laughed a bitter laugh. (He laughed bitterly.)
He dreamed a strange dream. (He dreamed, and his dream was strange.)
He walked their walk and talked their talk. (He walked and talked as they did.)
He smiled a charming smile. (He smiled, and his smile was charming.)
He danced a cheerful dance. (He danced, and his dance was cheerful.)
He died a painful death. (He died painfully.)

In some of these cases, the cognate object allows for a simpler construction. In others, it may be chosen for idiomatic or rhetorical reasons. In general, the cognate object's modifiers are in some sense modifying the verb: for example, He slept a troubled sleep tells how he slept. Semantically, many of these verbs denote modes of nonverbal expression (laugh, smile) and bodily actions or motions (dance, walk, sleep), specifically including what Levin calls "waltz verbs," those that are zero-related (identical) to the names of dances.
**Hardy–Littlewood Tauberian theorem**
Hardy–Littlewood Tauberian theorem:
In mathematical analysis, the Hardy–Littlewood Tauberian theorem is a Tauberian theorem relating the asymptotics of the partial sums of a series with the asymptotics of its Abel summation. In this form, the theorem asserts that if the sequence $a_n \ge 0$ is such that there is an asymptotic equivalence
$$\sum_{n=0}^{\infty} a_n e^{-ny} \sim \frac{1}{y} \quad \text{as } y \downarrow 0,$$
then there is also an asymptotic equivalence $\sum_{k=0}^{n} a_k \sim n$ as $n \to \infty$. The integral formulation of the theorem relates in an analogous manner the asymptotics of the cumulative distribution function of a function with the asymptotics of its Laplace transform.
Hardy–Littlewood Tauberian theorem:
The theorem was proved in 1914 by G. H. Hardy and J. E. Littlewood.: 226 In 1930, Jovan Karamata gave a new and much simpler proof.: 226
Statement of the theorem:
Series formulation. This formulation is from Titchmarsh.: 226 Suppose $a_n \ge 0$ for all $n \in \mathbb{N}$, and that as $x \uparrow 1$ we have
$$\sum_{n=0}^{\infty} a_n x^n \sim \frac{1}{1-x}.$$
Then as $n \to \infty$ we have
$$\sum_{k=0}^{n} a_k \sim n.$$
The theorem is sometimes quoted in equivalent forms, where instead of requiring $a_n \ge 0$, we require $a_n = O(1)$, or we require $a_n \ge -K$ for some constant $K$.: 155 The theorem is sometimes quoted in another equivalent formulation (through the change of variable $x = e^{-y}$).: 155 If
$$\sum_{n=0}^{\infty} a_n e^{-ny} \sim \frac{1}{y} \quad \text{as } y \downarrow 0,$$
then $\sum_{k=0}^{n} a_k \sim n$.
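As a quick sanity check of the series formulation, consider the constant sequence $a_n = 1$ (a standard textbook illustration, not taken from this article):
$$\sum_{n=0}^{\infty} x^n = \frac{1}{1-x}, \qquad \sum_{k=0}^{n} a_k = n + 1 \sim n \quad (n \to \infty),$$
so both sides of the equivalence hold simultaneously, exactly as the theorem requires.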
Integral formulation. The following more general formulation is from Feller.: 445 Consider a real-valued function $F : [0,\infty) \to \mathbb{R}$ of bounded variation. The Laplace–Stieltjes transform of $F$ is defined by the Stieltjes integral
$$\omega(s) = \int_0^{\infty} e^{-st} \, dF(t).$$
The theorem relates the asymptotics of $\omega$ with those of $F$ in the following way. If $\rho$ is a non-negative real number, then the following statements are equivalent:
$$\omega(s) \sim C s^{-\rho} \quad \text{as } s \to 0,$$
$$F(t) \sim \frac{C}{\Gamma(\rho+1)} t^{\rho} \quad \text{as } t \to \infty.$$
Statement of the theorem:
Here $\Gamma$ denotes the Gamma function. One obtains the theorem for series as a special case by taking $\rho = 1$ and $F(t)$ to be a piecewise constant function with value $\sum_{k=0}^{n} a_k$ between $t = n$ and $t = n+1$. A slight improvement is possible. According to the definition of a slowly varying function, $L(x)$ is slowly varying at infinity iff
$$\frac{L(tx)}{L(x)} \to 1, \quad x \to \infty,$$
for every $t > 0$. Let $L$ be a function slowly varying at infinity and $\rho \ge 0$. Then the following statements are equivalent:
$$\omega(s) \sim s^{-\rho} L(1/s) \quad \text{as } s \to 0,$$
$$F(t) \sim \frac{1}{\Gamma(\rho+1)} t^{\rho} L(t) \quad \text{as } t \to \infty.$$
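A minimal worked instance of the integral formulation (our illustration, not from this article): take $F(t) = t$, so that $dF(t) = dt$. Then
$$\omega(s) = \int_0^{\infty} e^{-st} \, dt = \frac{1}{s},$$
which is $\omega(s) \sim C s^{-\rho}$ with $C = 1$ and $\rho = 1$; correspondingly $F(t) = t = \frac{C}{\Gamma(\rho+1)} t^{\rho}$, since $\Gamma(2) = 1$.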
Karamata's proof:
Karamata (1930) found a short proof of the theorem by considering the functions $g$ such that
$$\lim_{x \to 1} (1-x) \sum a_n x^n g(x^n) = \int_0^1 g(t) \, dt.$$
An easy calculation shows that all monomials $g(x) = x^k$ have this property, and therefore so do all polynomials $g$. This can be extended to a function $g$ with simple (step) discontinuities by approximating it by polynomials from above and below (using the Weierstrass approximation theorem and a little extra fudging) and using the fact that the coefficients $a_n$ are positive. In particular the function given by $g(t) = 1/t$ if $1/e < t < 1$ and $0$ otherwise has this property. But then for $x = e^{-1/N}$ the sum $\sum a_n x^n g(x^n)$ is $a_0 + \cdots + a_N$ and the integral of $g$ is $1$, from which the Hardy–Littlewood theorem follows immediately.
Examples:
Non-positive coefficients. The theorem can fail without the condition that the coefficients are non-negative. For example, the function
$$\frac{1}{(1+x)^2(1-x)} = 1 - x + 2x^2 - 2x^3 + 3x^4 - 3x^5 + \cdots$$
is asymptotic to $\frac{1}{4(1-x)}$ as $x \to 1$, but the partial sums of its coefficients are 1, 0, 2, 0, 3, 0, 4, ... and are not asymptotic to any linear function.
Littlewood's extension of Tauber's theorem. In 1911 Littlewood proved an extension of Tauber's converse of Abel's theorem. Littlewood showed the following: if $a_n = O(1/n)$, and as $x \uparrow 1$ we have
$$\sum a_n x^n \to s,$$
then $\sum a_n = s$.
Examples:
This came historically before the Hardy–Littlewood Tauberian theorem, but can be proved as a simple application of it.: 233–235 Prime number theorem. In 1915 Hardy and Littlewood developed a proof of the prime number theorem based on their Tauberian theorem; they proved
$$\sum_{n=2}^{\infty} \Lambda(n) e^{-ny} \sim \frac{1}{y},$$
where $\Lambda$ is the von Mangoldt function, and then concluded
$$\sum_{n \le x} \Lambda(n) \sim x,$$
an equivalent form of the prime number theorem.: 34–35 : 302–307 Littlewood developed a simpler proof, still based on this Tauberian theorem, in 1971.: 307–309
Notes:
Karamata, J. (December 1930). "Über die Hardy-Littlewoodschen Umkehrungen des Abelschen Stetigkeitssatzes". Mathematische Zeitschrift (in German). 32 (1): 319–320. doi:10.1007/BF01194636.
**Wind turbines on public display**
Wind turbines on public display:
The great majority of wind turbines around the world belong to individuals or corporations who use them to generate electric power or to perform mechanical work. As such, wind turbines are primarily designed to be working devices. However, the large size and height above surroundings of modern industrial wind turbines, combined with their moving rotors, often make them among the most conspicuous objects in their areas. A few localities have exploited the attention-getting nature of wind turbines by placing them on public display, either with visitor centers at their bases, or with viewing areas farther away. The wind turbines themselves are generally of conventional horizontal-axis, three-bladed design, and generate power to feed electrical grids, but they also serve the unconventional roles of technology demonstration, public relations, and education.
Notable wind turbines on public display:
Australia
Blayney Wind Farm, New South Wales, has a viewing area and interpretive centre. Wattle Point Wind Farm, South Australia, has an information centre. Albany Wind Farm has board walks, viewing towers, interpretive displays and picnic areas on and around the site; it is also traversed by the Bibbulmun Track.
Canada
The OPG 7 commemorative turbine is a Vestas V80-1.8MW wind turbine on the site of the Pickering Nuclear Generating Station. The ExPlace Wind Turbine is a Lagerwey Wind model LW 52 wind turbine at Exhibition Place in Toronto.
China
Inner Mongolia's Huitengxile Wind Farm has 14 visitor centers to accommodate wind power tourists to the remote region.
Hong Kong
Lamma Winds in Hong Kong has a single Nordex N50/800 kW model with a rotor diameter of 50 m and a nameplate capacity of 800 kW.
New Zealand
Brooklyn, Wellington, New Zealand has a 230 kW wind turbine.
United Kingdom
Green Britain Centre, Swaffham, Norfolk – the only wind turbine in the UK that is open for the public to climb. It also doubles as a visitor centre, cafe and education provider; it is now "permanently closed" for the third time.
Notable wind turbines on public display:
Green Park Business Park has an Enercon E-70 2 MW wind turbine adjacent to the M4 motorway, billed as the UK's most visible turbine. Renewable Energy Systems has a Vestas V29 225 kW wind turbine visible from the M25 motorway at its headquarters at Beaufort Court, Kings Langley, Hertfordshire. Scroby Sands wind farm has a visitor center at Great Yarmouth open during the tourist season (May–October). Scout Moor Wind Farm "has become a real tourist attraction" since its 2008 opening. Whitelee Wind Farm near Glasgow has become the first wind energy project in Scotland to join the Association of Scottish Visitor Attractions (ASVA).
United States
Brooklyn, New York – Sims Metal Management, a large recycling company which holds a 40-year contract with the City of New York, has a 160-foot 100 kW small wind turbine which sits on the north corner of the property. When it was activated in January 2015, it was the city's tallest. It produces about 4% of the facility's power. The Sunset Park Material Recovery Facility administrative building includes an education center with exhibits explaining how the plant operates for student and tour groups, and connects to the main processing building for public viewing via an elevated pedestrian walkway.
Notable wind turbines on public display:
Dorchester, Massachusetts – Local 103 of the International Brotherhood of Electrical Workers installed the first commercial-scale wind turbine within the City of Boston, a 100 kW unit from Fuhrlaender on a 35-meter tower with a rotor diameter of 21 meters, visible from the John F. Kennedy Library.
Ellensburg, Washington – Puget Sound Energy's Renewable Energy Center at the Wild Horse Wind and Solar Facility has a 5,000 sq. ft. visitor center, which features numerous exhibits, a conference room, and guided tours to the base of a wind turbine. The center sits on a ridge at 3,500 ft. in the middle of the 149-turbine facility (Vestas V80 turbines). The Wild Horse Wind Farm is open to visitors from 9:00–5:30 daily, from April through November.
Notable wind turbines on public display:
The Great Lakes Science Center in Cleveland, Ohio has a reconditioned Vestas V27 wind turbine with a nameplate capacity of 225 kW.
Great River Energy's headquarters in Maple Grove, Minnesota has a NEG Micon M700 wind turbine, visible from Interstate 94.
Laurel, New York has a Northern Power Systems 100 kW turbine at the Half Hollow Nursery; private tours of the operating turbine are provided by Eastern Energy Systems Inc. of Mattituck, New York.
Notable wind turbines on public display:
Lubbock, Texas has a Vestas V47 at the American Wind Power Center.
McKinney, Texas has a Wal-Mart store with several sustainability features, including two wind turbines manufactured by Bergey Windpower, of 1 kW and 50 kW nameplate capacity respectively.
Sweetwater, Texas has a 2 MW 60 Hz DeWind D8.2 prototype wind turbine for training students in the Texas State Technical College wind energy program.
Observation deck:
Some wind turbines on public display go a step further, with observation decks beneath their nacelles. The observation decks are accessed by stairs inside the tower.
Observation deck:
Austria
Wind turbine at Pesendorf, Lichtenegg, Lower Austria (type Enercon E-66). One turbine at the wind farm Energiepark near Bruck an der Leitha (type Enercon E-66).
Canada
Grouse Mountain Resorts in North Vancouver, British Columbia installed a Leitwind 1.5 MW wind turbine with an observation deck, atop a 65 m tower, at an elevation of 1,300 m, opening just before the 2010 Winter Olympics.
Germany
One wind turbine at Windpark Holtriem (type Enercon E-66). Visitor wind turbine "Windfang" (German for "wind catcher") near Aachen (type Enercon E-66). Wind turbine Südkronsberg on the Kronsberg hill near Hannover (type Enercon E-66).
Netherlands
The Siemens plant in Zoetermeer features a wind turbine with 40 m blade length and an observation deck (type Enron Wind (Tacke) 1.5s).
United Kingdom
Another Enercon E-66 wind turbine with an observation deck, belonging to Ecotricity, is in the English town of Swaffham.
**Isogeny**
Isogeny:
In mathematics, particularly in algebraic geometry, an isogeny is a morphism of algebraic groups (also known as group varieties) that is surjective and has a finite kernel.
Isogeny:
If the groups are abelian varieties, then any morphism f : A → B of the underlying algebraic varieties which is surjective with finite fibres is automatically an isogeny, provided that f(1A) = 1B. Such an isogeny f then provides a group homomorphism between the groups of k-valued points of A and B, for any field k over which f is defined.
Isogeny:
The terms "isogeny" and "isogenous" come from the Greek word ισογενη-ς, meaning "equal in kind or nature". The term "isogeny" was introduced by Weil; before this, the term "isomorphism" was somewhat confusingly used for what is now called an isogeny.
Case of abelian varieties:
For abelian varieties, such as elliptic curves, this notion can also be formulated as follows: Let E1 and E2 be abelian varieties of the same dimension over a field k. An isogeny between E1 and E2 is a dense morphism f : E1 → E2 of varieties that preserves basepoints (i.e. f maps the identity point on E1 to that on E2).
Case of abelian varieties:
This is equivalent to the above notion, as every dense morphism between two abelian varieties of the same dimension is automatically surjective with finite fibres, and if it preserves identities then it is a homomorphism of groups.
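A standard concrete instance (added here for illustration; it is not discussed in this article) is the multiplication-by-$n$ map on an elliptic curve $E$ over a field $k$:
$$[n] : E \to E, \qquad P \mapsto \underbrace{P + \cdots + P}_{n \text{ times}}.$$
When $n$ is coprime to the characteristic of $k$, this map is surjective with finite kernel $E[n] \cong (\mathbb{Z}/n\mathbb{Z})^2$, so it is an isogeny (of degree $n^2$) from $E$ to itself.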
Two abelian varieties E1 and E2 are called isogenous if there is an isogeny E1 → E2. This can be shown to be an equivalence relation; in the case of elliptic curves, symmetry is due to the existence of the dual isogeny. As above, every isogeny induces homomorphisms of the groups of the k-valued points of the abelian varieties.
**Apple rubbery wood**
Apple rubbery wood:
Apple rubbery wood virus, also known as apple rubodvirus, is a virus that causes apple rubbery wood in apple and pear cultivars. There are two varieties: ARWV 1 and ARWV 2. It gets its name from the distinctive effect it has on its host trees, which show unusual flexibility in the stems and branches after a few years of infection. This often results in the maturing fruits of the tree weighing down the branches until they lie on the ground. Apple rubbery wood, or ARW, occurs worldwide, affecting apple and pear cultivars in most developed countries.
Taxonomy:
Originally, ARW was assumed to be caused by phytoplasmas, but this could not be confirmed through multiple tests.
In 2019, it was suggested that both ARW 1 and 2 are given their own new genus, "Rubodvirus" (Rubbery wood virus), the name coming from Rub- in "Rubbery", and -od in "wood".
Symptoms:
Limbs of the host tree become abnormally flexible, becoming unable to stay upright in most cases. Tree growth is stunted, and new stems and limbs are unable to grow, are distorted, or are rosetted. The limbs of affected trees are distinctly "flat", caused by atrophy of the vascular tissue. On some trees, like Quince, bark necrosis and discolored leaves can occur.
Impact:
ARW rarely occurs by itself, and instead often occurs along with multiple other diseases, such as powdery mildew and scab. Its biggest effect is on fruit yield, which can be reduced by 10–30%, though it is not of much economic significance in countries where it is extant. It is transmitted from tree to tree through grafting of infected limbs. ARW is known to infect multiple cultivars, including: Cydonia oblonga (quince), Malus (ornamental apple species), Malus baccata (Siberian crab apple), Malus domestica (apple), Prunus avium (sweet cherry), Prunus cerasus (sour cherry), and Pyrus communis (European pear).
Treatment:
In Europe, heat treatment can be used to render trees disease-free. A period of 7 days of dry heat exposure (38 °C) is effective on young, infected trees.
**Buchholz's ordinal**
Buchholz's ordinal:
In mathematics, $\psi_0(\Omega_\omega)$, widely known as Buchholz's ordinal, is a large countable ordinal that is used to measure the proof-theoretic strength of some mathematical systems. In particular, it is the proof-theoretic ordinal of the subsystem $\Pi^1_1\text{-CA}_0$ of second-order arithmetic; this is one of the "big five" subsystems studied in reverse mathematics (Simpson 1999). It is also the proof-theoretic ordinal of $\mathsf{ID}_{<\omega}$, the theory of finitely iterated inductive definitions, and of $KP\ell_0$, a fragment of Kripke–Platek set theory extended by an axiom stating every set is contained in an admissible set. Buchholz's ordinal is also the order type of the segment bounded by $D_0 D_\omega 0$ in Buchholz's ordinal notation $(OT, <)$. Lastly, it can be expressed as the limit of the sequence: $\varepsilon_0 = \psi_0(\Omega)$, $\mathrm{BHO} = \psi_0(\Omega_2)$, $\psi_0(\Omega_3)$, ...
Definition:
$\Omega_0 = 1$, and $\Omega_n = \aleph_n$ for $n > 0$.
$C_i(\alpha)$ is the closure of $\Omega_i$ under addition and the $\psi_\eta(\mu)$ function itself (the latter of which only for $\mu < \alpha$ and $\eta \le \omega$).
$\psi_i(\alpha)$ is the smallest ordinal not in $C_i(\alpha)$. Thus, $\psi_0(\Omega_\omega)$ is the smallest ordinal not in the closure of $1$ under addition and the $\psi_\eta(\mu)$ function itself (the latter of which only for $\mu < \Omega_\omega$ and $\eta \le \omega$).
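Consolidating the clauses above into display form (a restatement of the same definition, arranged by us, not additional material):
$$\psi_i(\alpha) = \min\{\beta : \beta \notin C_i(\alpha)\}, \qquad \psi_0(\Omega_\omega) = \min\{\beta : \beta \notin C_0(\Omega_\omega)\},$$
where $C_0(\Omega_\omega)$ is the closure of $\{1\}$ under $+$ and all $\psi_\eta(\mu)$ with $\mu < \Omega_\omega$ and $\eta \le \omega$.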
**Deep litter**
Deep litter:
Deep litter is an animal housing system, based on the repeated spreading of straw or sawdust material in indoor booths. An initial layer of litter is spread for the animals to use for bedding material and to defecate in, and as the litter is soiled, new layers of litter are continuously added by the farmer. In this fashion, a deep litter bedding can build up to depths of 1–2 meters. "The usual procedure for built-up floor litter is to start with about 4 inches (100 mm) of fine litter material with additions of 1 to 2 inches (25 to 50 mm) later as needed without removal of the old. A depth of 6 to 12 inches (150 to 300 mm) is maintained by partial removals from time to time." Many consider this to be a natural means of disposing of animal feces. "The deep litter cultivation is a modern ecological breeding technique based on decomposing feces by microbiological methods, a post processing method for poultry Manure."
History:
The deep litter method was first used in 1946 by the Ohio Station Brooder House. Before the deep litter method, shavings were removed every one to two weeks, in order to avoid dampness and coccidia. Later, it was discovered that deep litter provides adequate protection from these naturally. The deeper litter provides extra insulation in colder temperatures, as well as extra heat from the decomposition of the litter. Another potential benefit is that when raised under conditions that don't provide adequate nutrition, deep litter poultry is healthier than poultry raised in the traditional method of housing. "By not removing the waste, good microbes come and make their homes in the litter. These microbes actually eat and break down the feces and consume unhealthy bacteria, leaving good bacteria behind."
Benefits:
Numerous benefits have been discovered with the use of the deep litter system, also called the "build-up method". One is the increased ability of poultry to fight off coccidia, common protozoan parasites responsible for an average death rate of 20% in poultry. New studies in Ohio have shown death rates from coccidia as low as 2.9%. As many as 6 successive broods of chicks have been raised on the same litter, each brood showing better results. Chickens raised in this environment have also been less inclined to show cannibalistic traits. Many studies have been done in order to research potential advantages and disadvantages of the deep litter system. This research covers multiple types of livestock including poultry, swine, duck, and cattle.
Benefits:
"The first experimental evidence with reference to the user of built-up litter as a sanitary procedure was secured by the Ohio Station in 1946 when it was first used in the brooder house. During the three years previous when the floor litter was removed and renewed at frequent intervals, the average mortality of 10 broods, or a total of 18,000 chicks, was 19 percent. During the succeeding three years with the use of built-up litter, the average mortality of 11 broods, or a total of 10,000 chicks, was 7 percent. Seldom did a brood escape an attack of coccidiosis before the use of built-up litter. Afterward there was no noticeable trouble from coccidiosis in 11 consecutive broods started and raised on the same old built-up floor litter. Old built-up litter is floor litter which has been used by two or more previous broods of chicks." To build immunity against coccidia, chicks are normally vaccinated. Experiments have shown that deep litter is an effective method of exposing chicks to the bacteria at a safe rate. Chicken feces produces ammonia which is known for killing coccidia. "A 10 percent solution of ammonia spray is considered effective for killing coccidia. Being unable to withstand such a spray, they may likewise be unable to withstand the constant ammoniacal atmosphere in built-up litter."Experiments have shown major potential benefits to utilizing the deep litter method, specifically within piggeries. Pigs raised in a deep litter system, do significantly better than pigs raised under similar conditions, on a concrete floor, which is the traditional method. Studies have shown that pigs raised in a deep litter system have a lower feed to gain ratio, produce a higher quality of pork, create a significantly lower amount of gaseous emissions, show improvements in odor nuisance reduction, and have better animal welfare. "Pigs in the deep litter system had greater color score and rate of cooking meat, while they had lower drip loss and cooking loss than loins from concrete-floor system housed pigs." (ZHOU et al. 426) "Results indicate that pigs raised in the deep-litter system had some animal welfare improvements and an odor nuisance reduction; in the meantime, pork quality also improved from the deep-litter system compared to the pigs housed in the concrete-floor system." (ZHOU, abstract) Gaseous emissions were also lower within the deep litter system when compared to traditional systems. "NH3 concentration in the deep-litter system was significantly lower than that in the concrete-floor system" (ZHOU et al. 425) "Deep litter and outdoor production avoids the large quantities of methane normally generated from effluent ponds in conventional piggeries". This study helped to prove numerous benefits not only to our atmosphere, but to the health and animal welfare of the pigs.
Negative effects:
A study was conducted to determine the effects on the reproductive system caused by different housing styles for poultry. The deep litter system provided lower efficiency in terms of reproduction, and an increase in feed intake. "Feed intake was lower (p < 0.05) in legumes and green pasture than deep litter suggesting economic benefit. It was concluded that access to legumes enhanced the performance of layers compared to deep litter and green pasture as indicated by the parameters measured." (Oke, Abstract) This particular study determined that the deep litter method was not beneficial in terms of egg-layer production in chickens.
Negative effects:
A study was conducted in three intensive duck farms in China that utilised routine prophylactic antibiotics. It attempted to determine the ability of antibiotic-resistant bacteria to accumulate in meat-duck deep litter, where the ducks would excrete the antibiotics and heavy metals from growth promoters and feeds into the litter. Levels were measured at 3 different stages of duck life, in 3 different barns. The litter contained high levels of antibiotics and heavy metals that corresponded to the antibiotics, feed and supplements that the ducks received throughout their growth cycle. "E. coli isolated from the 3 stages of sampling were highly resistant to ampicillin, tetracycline, florfenicol, and doxycycline. Increased resistance to ceftiofur, enrofloxacin, ofloxacin, and gentamicin were seen in the isolates from the final stage of deep litter." (Linn, Abstract) The study concluded that "deep litter could be suitable for the evolution of bacterial antibiotic-resistance under conditions of continuous usage or accumulation of antibiotics and heavy metals without proper management." (Linn, Abstract) This paper highlights the risk of introducing the routine use of antibiotics, growth-promoting supplements and pesticides, rather than a direct contribution of deep litter systems; it did not utilise a control environment that avoided growth promoters or routine antibiotics.

Problems may arise from the deep litter method, such as "rotten bed". This occurs mostly in piggeries, and is caused by high levels of water intake and discharge from the animals, as well as discharging in the same location within the pen. The build-up of moisture cannot be absorbed quickly enough to fully decompose, and causes rotting, unpleasant odors, and harmful gases. Experiments to solve this problem have taken place. One process is called the heat pulse method. This method refers to heating the bedding at a constant temperature, which causes a buildup of steam beneath the bedding; the steam is unable to release itself, so oxygen is then pulsed into the bedding, allowing the steam to escape. "The pulse method could promote the timely discharge of steam generated inside the bedding." (Li, 1412)
Innovations:
This type of farming has created a new market for sheds specifically designed to utilize the deep litter method. Companies are realizing that this method has multiple benefits and is being accepted by various governments as a greener method of farming. "It has won the support of the government and acceptance of market." (QIN, 1) One type of building being constructed is called the removable deep litter breeding shed. It consists of larger areas for the animals, space to let the litter build to heights not allowed by traditional housing, and economic costs compared to traditional sheds. "Successful exploiture of breeding supporting facilities will greatly promote the development of deep-litter breeding technology in local farms." (QIN, 1)
**Microsoft Dinosaurs**
Microsoft Dinosaurs:
Microsoft Dinosaurs is an educational interactive CD-ROM developed by Microsoft, themed around dinosaurs.
Production:
Microsoft invested in access to the entire library of writing and images of reference publishing house Dorling Kindersley. They used it to create content for the Microsoft Home software line, including Microsoft Dinosaurs.
Gameplay:
The game contains 400MB of dinosaur-related information, including full-motion video, audio, and a gallery of scanned artwork. The main program features 1000 illustrations, 200 hypertext articles, and 800 pop-up windows. Players can explore the content in four different ways: Atlas, Timeline, Families, and Index. There is also a guided tour, hosted by "Dino" Don Lessem. The game contains sequences featuring dinosaurs feeding, fighting and breeding, which had previously been broadcast in an American television series put out by the Public Broadcasting Service, and the Phil Tippett short Prehistoric Beast.
Critical reception:
The Obscuritory felt the title "demonstrates how thoughtfully crafted reference material can bring value to information", adding that guided learning experiences such as this have value even in the age of Wikipedia. Compute! magazine thought the title was "highly entertaining and educational". PC Magazine deemed it instantly usable and a "visual delight". PCWorld felt the title was aesthetically perfect and extremely rich in substance. The Seattle Times thought its chief asset was its consistently challenging, informative content. An article in the Journal of Accountancy recalled a four-year-old child who, a few months after using the program, correctly identified the species of a dinosaur toy and pulled up the information in the game. The game's sound effects won a multimedia industry award.
**Harvest jug**
Harvest jug:
A harvest jug is a type of jug made from slipware, with decoration carved through stained clay layers. They are named for their use to carry ale or cider at harvest time. The technique for carving the decoration is known as sgraffito, from the Italian for 'scratched'. They are traditional in the south-west of England, especially the ports of Barnstaple and Bideford in north Devon, and Donyatt in Somerset. They are still made.
**Wheel of fire**
Wheel of fire:
In a literary context, a wheel of fire may refer to the chain of tortuous or dire consequences that result from a single action.
In mythology:
The Wheel of Fire originates in Greek mythology as the punishment for Ixion, who was bound to a wheel of fire for lusting after Zeus's wife, Hera.
In literature:
The Wheel of Fire is part of the Aristotelian reading of a tragedy (e.g., plays), which includes the central flaw within a character. In Shakespeare's tragedy Othello, the flaw in Othello himself is his vulnerability to jealousy and his tendency to believe Iago, who is manipulating Othello into believing his wife is unfaithful. As a result of this flaw Othello loses a loyal friend, murders his wife, and is driven insane before eventually committing suicide. In this scenario the Wheel of Fire begins with the action of Othello trusting Iago and consequently the other events occur.
In literature:
The Wheel of Fire is most commonly applied to the protagonist within a tragedy (i.e. the hero) and may aim to provoke sympathy from the audience when the hero falls from grace (this purging of emotions is known as catharsis), though it also adds dramatic interest to the performance.
The Wheel of Fire is also the title of G. Wilson Knight's book on Shakespearean tragedy.
In Shakespeare's King Lear, Lear states: "But I am bound upon a wheel of fire, / That mine own tears do scald like molten lead".
In Tolkien. In J. R. R. Tolkien's The Lord of the Rings, the One Ring is described as a "wheel of fire".
In literature:
"Sam, and there is no veil between me and the wheel of fire. I begin to see it even with my waking eyes, and all else fades." -- Frodo Baggins to Samwise Gamgee (chapter Mount Doom in The Return of the King).Also: "A crouching shape, scarcely more than the shadow of a living thing, a creature now wholly ruined and defeated, yet filled with a hideous lust and rage; and before it stood stern, untouchable now by pity, a figure robed in white, but at its breast it held a wheel of fire." -- A description of Gollum and Frodo, respectively, as seen by Samwise. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ice navigation**
Ice navigation:
Ice navigation is a specialist area of navigation involving the use of maritime skills to determine and monitor the position of ships in cold waters, where ice is a hazard to the safety of navigation. The presence of sea ice requires a ship to exercise caution, for example by avoiding icebergs, slowly sailing through a lead, or by working with an icebreaker to follow a course through the ice to a destination. Additionally, ships must deal with the extreme cold of the climate in regions such as the poles; this involves removal of ice accumulation from the ship, as well as protecting the crew from the elements while working on deck. Ships and their crews operating in ice follow established rules of seamanship, as well as complying with national and international regulations such as the Polar Code.
Areas of ice navigation:
Ice navigation occurs wherever a waterborne vessel transits through sea ice. One of the more common regions for ice navigation is the Baltic Sea, where vessels visiting the Baltic States make their way through first-year ice in the winter months, often with an icebreaker, or with ice reports, charts and data provided by meteorological offices. Other areas include the Arctic Ocean, where increasing numbers of ships are transiting the region in the summer months for cruising and to transport cargo, as a result of oil and gas extraction in areas such as Yamal. Increased shipping in polar regions presents additional challenges, including maritime safety concerns in the event that ice navigation is not carried out carefully. Ships will also pass through ice when navigating in the Antarctic, although most ships there are either research vessels or cruise ships that have been specially ice-strengthened. Other significant maritime regions where ships navigate through ice include the Saint Lawrence Seaway, the waters around Greenland and the Canadian coast, the North Atlantic during iceberg season, and the Northwest Passage.
Icing of superstructure:
The accumulation of ice on the superstructure is a dangerous phenomenon. When the temperature is below −2.2 °C (28.0 °F) slight icing will occur at winds of 5 Bft, moderate icing at 7 Bft, and severe icing at 8 Bft. When sailing in fresh water, icing will occur from 0 °C (32 °F) and below. The more common causes of ice formation on the superstructure are from spray by wave crests and ship-generated spray. Other possibilities are snow fall, sea fog, a drastic fall in ambient temperature and also freezing raindrops in contact with the cold steel. The heading of the vessel relative to the wind and seas will determine which parts of the superstructure will ice first. Icing can immobilise equipment such as anchors or cause a dangerous list if the windward side of the vessel ices more heavily.
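The thresholds above amount to a simple decision rule. A minimal sketch in Python (our illustration only; the function name and structure are assumptions, not part of any navigation standard):

```python
def icing_severity(air_temp_c: float, wind_bft: int, fresh_water: bool = False) -> str:
    """Classify superstructure icing risk from air temperature (Celsius)
    and wind force (Beaufort), using the thresholds quoted above."""
    # Icing starts below -2.2 C at sea, from 0 C in fresh water.
    threshold_c = 0.0 if fresh_water else -2.2
    if air_temp_c >= threshold_c:
        return "none"
    if wind_bft >= 8:
        return "severe"
    if wind_bft >= 7:
        return "moderate"
    if wind_bft >= 5:
        return "slight"
    return "none"

print(icing_severity(-5.0, 7))  # -> "moderate"
```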
Ice detection by radar:
A radar can be useful in detecting ice, but the returning signal which bounces off ice (even icebergs) is very faint, much weaker than that from ships. Conventional marine radars are designed for target detection and avoidance. Enhanced marine radars provide a higher-definition image of the ice surrounding the vessel, resulting in a much clearer picture. This image can be used to identify the quantity and sort of ice that has to be dealt with. In standard radar, sea clutter affects the ability to see in the near vicinity of the vessel. An X-band radar set to a short pulse can give improved results.
**Fuzzy Control Language**
Fuzzy Control Language:
Fuzzy Control Language, or FCL, is a language for implementing fuzzy logic, especially fuzzy control. It was standardized by IEC 61131-7. It is a domain-specific programming language: it has no features unrelated to fuzzy logic, so it is impossible even to print "Hello, world!". Therefore, one does not write a whole program in FCL, but one may write parts of a program in FCL.
Example:
RULE 0: IF (temperature IS cold) THEN (output IS low)
RULE 1: IF (temperature IS very cold) THEN (output IS high)
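For context, rules like these live inside a larger FUNCTION_BLOCK that declares the variables, membership functions, and defuzzification method. A minimal sketch following the IEC 61131-7 grammar (the block name, term shapes, and breakpoint values here are illustrative assumptions, not taken from the standard):

```
FUNCTION_BLOCK heater_controller

    VAR_INPUT
        temperature : REAL;
    END_VAR

    VAR_OUTPUT
        output : REAL;
    END_VAR

    FUZZIFY temperature
        (* piecewise-linear membership functions, points are (value, degree) *)
        TERM cold := (0, 1) (10, 0);
        TERM warm := (5, 0) (15, 1) (25, 0);
    END_FUZZIFY

    DEFUZZIFY output
        TERM low  := (0, 1) (50, 0);
        TERM high := (50, 0) (100, 1);
        METHOD : COG;       (* centre-of-gravity defuzzification *)
        DEFAULT := 0;       (* output value if no rule activates *)
    END_DEFUZZIFY

    RULEBLOCK No1
        AND : MIN;          (* use minimum for rule conjunction *)
        RULE 1 : IF temperature IS cold THEN output IS high;
        RULE 2 : IF temperature IS warm THEN output IS low;
    END_RULEBLOCK

END_FUNCTION_BLOCK
```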
Limitations:
FCL is not an entirely complete fuzzy language; for instance, it does not support "hedges", which are adverbs that modify a set. For instance, the programmer cannot write: RULE 0: If (Temperature is VERY COLD) then (Output is VERY HIGH). However, the programmer can simply define new sets for "very cold" and "very high". FCL also lacks support for higher-order fuzzy sets, subsets, and so on. None of these features are essential to fuzzy control, although they may be nice to have.
**Internet Config**
Internet Config:
Internet Config was an Internet preferences manager and API for Mac OS Classic. It was originally developed by Quinn! The Eskimo, Peter N Lewis and Marcus Jager and released in 1994 into the public domain. It was later bundled by Apple Inc.
Internet Config:
Internet Config's purpose was to consolidate what was, at the time, an unwieldy number of options and settings related to Internet use that had not yet been integrated into the operating system's own control panel. Some settings were for a systemwide default web browser, home page, default FTP client, systemwide default download folder, and email settings. Internet Config represented an important ease of use advantage for the Macintosh platform on the early Internet.
Internet Config:
The software consisted of two pieces, the Internet Config control panel — which was actually just a normal application — and an 'appe' extension that launched at boot time but did not patch any system traps.
Internet Config:
Internet Config enabled applications to support command-clicking of URLs displayed anywhere onscreen and have the URLs sent to the user-selected application. For example, http: URLs would be sent to the selected web browser, ftp: URLs to the selected FTP client, mailto: URLs to the selected Email application, and so on. This functionality was made optional systemwide, as it did have to patch one trap, _TEClick. Internet Config also provided functionality to ease interoperability of the Macintosh type and creator code system with the file extensions used on the Internet and on other operating systems. API functions were provided to map file extensions to Mac type/creator information, and vice versa.
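The extension mapping is essentially a two-way lookup table. A rough sketch of the idea in Python (purely illustrative; the table entries and function name are our assumptions, not Internet Config's actual API):

```python
# Illustrative mapping between file extensions and classic Mac OS
# type/creator codes (four-character codes, shown here as strings).
EXTENSION_MAP = {
    ".txt": ("TEXT", "ttxt"),  # plain text, opened by SimpleText
    ".gif": ("GIFf", "ogle"),  # hypothetical image-viewer entry
}

def map_filename(name: str) -> tuple[str, str]:
    """Return a (type, creator) pair for a file name, defaulting to generic data."""
    for ext, codes in EXTENSION_MAP.items():
        if name.lower().endswith(ext):
            return codes
    return ("????", "????")

print(map_filename("README.txt"))  # -> ('TEXT', 'ttxt')
```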
Internet Config:
The public domain licensing of the project and the tight Macintosh Internet community in the late 1990s led to the rapid adoption of the facility, and then to Apple bundling it as part of Mac OS. The Internet Config calls ended up being part of Carbon and Mac OS X, with the header and library files now part of Universal Interfaces and Headers.
**American Association of Neuromuscular & Electrodiagnostic Medicine**
American Association of Neuromuscular & Electrodiagnostic Medicine:
The American Association of Neuromuscular & Electrodiagnostic Medicine (AANEM) is a medical society for the medical subspecialty of neuromuscular and electrodiagnostic medicine based in the United States. Members are primarily neurologists and physiatrists—as well as allied health professionals and PhD researchers.
History:
In 1951, a small group of physicians who practiced clinical electromyography and electrodiagnosis began working to organize a professional society dedicated to the growing medical specialty of electrodiagnostic medicine. Their goal was to create a professional network; to build a platform for discussion of research; and to establish standards and measure quality. On August 29, 1953, another gathering took place at the Palmer House in Chicago, IL, to formally organize the American Association of Electromyography and Electrodiagnosis (AAEE).
Membership:
AANEM has several membership categories depending upon an individual's credentials. Physicians who are board certified in Electrodiagnostic Medicine (as determined by the American Board of Electrodiagnostic Medicine) are “Fellow” members. There are also categories for physicians-in-training, international physicians, academic researchers, nerve conduction study technologists, and collaborators. AANEM Technologist members assist with studies such as nerve conduction studies (NCSs), electroencephalograms (EEGs), intraoperative monitoring (IOM), evoked potentials, polysomnography, and ultrasound.
Membership:
AANEM collaborator members are nonphysician providers who work in collaboration with a neurologist or PMR physician to treat patients with neuromuscular diseases. Collaborators do not perform or interpret needle electromyography (EMG) studies or interpret NCSs but are active in the field of neuromuscular medicine.
AANEM research members are currently active in neuromuscular or electrodiagnostic research and are PhD investigators, engineers, holders of a master's degree, or graduate students enrolled in a PhD degree program.

Affiliations. AANEM participates in the International Federation of Clinical Neurophysiology (IFCN). The IFCN consists of national member societies.
Education:
Since 1954, the AANEM has held an annual meeting. The meeting offers educational sessions updating physicians, technologists, and researchers working in the area of neuromuscular, musculoskeletal, and electrodiagnostic medicine. Attendees earn continuing medical education (CME) through hands-on workshops and educational sessions. In addition to the annual meeting, the association holds small-group workshops around the country covering ultrasound and nerve conduction studies.
Education:
To help members obtain continuing medical education (CME), AANEM is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide CME credit, with Accreditation with Commendation status. The AANEM offers many CME products including: course books, interactive cases, case studies, journal articles, and self-study materials.
AANEM offers podcasts by a diverse group of practicing physicians who describe their research and practical insights.
Advocacy:
AANEM tracks and responds to federal and state legislative and regulatory health policies related to neuromuscular and electrodiagnostic medicine. Activities include creating guidelines and position statements to educate lawmakers and insurance companies, partnering with governmental agencies and private insurers to fight fraud and abuse, and advocating for quality patient care. AANEM's guidelines are published through the US Department of Health & Human Services Agency for Healthcare Research and Quality National Guidelines Clearinghouse. AANEM also participated in the Choosing Wisely campaign established by the American Board of Internal Medicine (ABIM) Foundation by creating a list of five situations in which electrodiagnostic and other imaging studies are inappropriate, in an effort to educate patients and spur conversation between the provider and the patient. AANEM monitors private sector market trends and critical practice issues and provides members with updates, practice management resources, and tools to respond to the constantly evolving healthcare environment. The association also works closely with several other medical societies and patient advocacy groups to ensure that neuromuscular patients have adequate access to necessary, quality treatment.
Advocacy:
AANEM has a seat in the AMA House of Delegates and participates in both the Relative Value Scale Update Committee (RUC) and Current Procedural Terminology (CPT) processes. These groups give physicians a voice in establishing new and revised electrodiagnostic and neuromuscular codes, as well as being part of the process used to assign relative value units to new and established CPT codes.
American Board of Electrodiagnostic Medicine:
In 1963, the AANEM began discussing an examination as an educational experience for membership candidates. The first examination was given in 1967, and successful candidates were transferred to the association's active membership category. In 1987 the AANEM decided to separate its membership and examination functions. The American Board of Electrodiagnostic Medicine (ABEM) was created as an autonomous examining body that offered formal certification of competency in electrodiagnostic medicine. The ABEM examination is the only United States exam certifying physicians in EDX medicine, with more than 3,500 physicians currently certified. The ABEM certification process requires physicians to obtain specific academic training and clinical experience, then demonstrate competency in the EDX evaluation of the neuromuscular and musculoskeletal systems by passing an examination. Diplomates of the ABEM are required to document their continuing education in electrodiagnostic medicine over the course of the ABEM Maintenance of Certification Program (MOCP) and take a maintenance examination.
American Board of Electrodiagnostic Medicine:
The American Board of Psychiatry and Neurology provides certification examination in the related field of clinical neurophysiology. The American Board of Clinical Neurophysiology certifies in electroencephalography (EEG), Evoked Potentials (EP), Polysomnography (PSG), Epilepsy Monitoring, and Neurologic Intraoperative Monitoring (NIOM). In the US physicians typically specialize in EEG or EDX medicine but not both.
Laboratory Accreditation:
The AANEM developed the Electrodiagnostic Laboratory Accreditation Program in 2010 as a voluntary, peer-review process that identifies and acknowledges electrodiagnostic laboratories that achieve and maintain the highest level of quality, performance, and integrity based on professional standards.
AANEM Foundation:
In 1995 the AANEM established the AANEM Foundation for Research & Education as a charitable 501(c)(3) nonprofit organization to advance research and science related to muscle and nerve disorders. The AANEM Foundation helps fund numerous studies and research awards related to neuromuscular and electrodiagnostic medicine.
Publications:
In 1982, Muscle & Nerve, a monthly, peer-reviewed, scientific journal, became the AANEM's official journal. Published by Wiley, the journal publishes research and education directly related to neuromuscular and electrodiagnostic topics.The AANEM News is the official publication of AANEM. The print newsletter is distributed to all members biannually and includes updates on legislation, education, and products or programs related to neuromuscular and electrodiagnostic medicine and the association.The AANEM e-newsletter was launched in 2008. It provides information to AANEM members about coding and advocacy issues, new products, and other relevant information. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Glycolaldehyde dehydrogenase**
Glycolaldehyde dehydrogenase:
In enzymology, a glycolaldehyde dehydrogenase (EC 1.2.1.21) is an enzyme that catalyzes the chemical reaction

glycolaldehyde + NAD+ + H2O ⇌ glycolate + NADH + H+

The 3 substrates of this enzyme are glycolaldehyde, NAD+, and H2O, whereas its 3 products are glycolate, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the aldehyde or oxo group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is glycolaldehyde:NAD+ oxidoreductase. This enzyme is also called glycol aldehyde dehydrogenase. This enzyme participates in glyoxylate and dicarboxylate metabolism.
Structural studies:
As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 2HG2, 2ILU, and 2IMP.
**Butyrylcholine**
Butyrylcholine:
Butyrylcholine is a choline-based ester that can function as a neurotransmitter. It is similar to acetylcholine, activating some of the same receptors. Butyrylcholine is a synthetic compound and does not occur in the body naturally. It is used as a clinical laboratory tool to distinguish between the cholinesterases: acetylcholinesterase and butyrylcholinesterase preferentially hydrolyse acetylcholine and butyrylcholine, respectively. Butyrylcholinesterase is also known as pseudocholinesterase.
**Index register**
Index register:
An index register in a computer's CPU is a processor register (or an assigned memory location) used for pointing to operand addresses during the run of a program. It is useful for stepping through strings and arrays. It can also be used for holding loop iterations and counters. In some architectures it is used for reading/writing blocks of memory. Depending on the architecture it may be a dedicated index register or a general-purpose register. Some instruction sets allow more than one index register to be used; in that case additional instruction fields may specify which index registers to use. Generally, the contents of an index register are added to (or in some cases subtracted from) an immediate address (which can be part of the instruction itself or held in another register) to form the "effective" address of the actual data (operand). Special instructions are typically provided to test the index register and, if the test does not terminate the loop, to increment the index register by an immediate constant and branch, typically to the start of the loop. While processors that allow an instruction to specify multiple index registers normally add the contents together, IBM had a line of computers in which the contents were OR'd together. Index registers have proved useful for doing vector/array operations and in commercial data processing for navigating from field to field within records. In both uses index registers substantially reduced the amount of memory used and increased execution speed.
History:
In early computers without any form of indirect addressing, array operations had to be performed by modifying the instruction address, which required several additional program steps and used up more computer memory, a scarce resource in computer installations of the early era (as well as in early microcomputers two decades later).
History:
Index registers, commonly known as B-lines in early British computers, as B-registers on some machines and as X-registers on others, were first used in the British Manchester Mark 1 computer, in 1949. In general, index registers became a standard part of computers during the technology's second generation, roughly 1954–1966. Most machines in the IBM 700/7000 mainframe series had them, starting with the IBM 704 in 1954, though they were optional on some smaller machines such as the IBM 650 and IBM 1401.
History:
Early "small machines" with index registers include the AN/USQ-17, around 1960, and the 9 series of real-time computers from Scientific Data Systems, from the early 1960s.
The 1962 UNIVAC 1107 has 15 X-registers, four of which were also A-registers.
The 1964 GE-635 has 8 dedicated X-registers; however, it also allows indexing by the instruction counter or by either half of the A or Q register.
History:
The Digital Equipment Corporation (DEC) PDP-6, introduced in 1964, and the IBM System/360, announced in 1964, do not include dedicated index registers; instead, they have general-purpose registers (called "accumulators" in the PDP-6) that can contain either numerical values or addresses. The memory address of an operand is, in the PDP-6, the sum of the contents of a general-purpose register and an 18-bit offset and, on the System/360, the sum of the contents of two general-purpose registers and a 12-bit offset. The compatible PDP-10 line of successors to the PDP-6, and the IBM System/370 and later compatible successors to the System/360, including the current z/Architecture, work in the same fashion.
History:
The 1969 Data General Nova, its Eclipse successor, and the 1970 DEC PDP-11 minicomputers also provided general-purpose registers (called "accumulators" in the Nova and Eclipse) rather than separate accumulators and index registers, as did their Eclipse MV and VAX 32-bit superminicomputer successors. In the PDP-11 and VAX, any register could be used when calculating the memory address of an operand; in the Nova, Eclipse, and Eclipse MV, only registers 2 and 3 could be used. The 1971 CDC STAR-100 has a register file of 256 64-bit registers, 9 of which are reserved. Unlike most computers, the STAR-100 instructions only have register fields and operand fields, so the registers serve more as pointer registers than as traditional index registers.
History:
While the Intel 8080 allowed indirect addressing via register pairs, the first microprocessor with a true index register appears to have been the 1974 Motorola 6800.
History:
In 1975, the 8-bit MOS Technology 6502 processor had two index registers, 'X' and 'Y'. In 1978, the Intel 8086, the first x86 processor, had eight 16-bit registers, referred to as "general-purpose", all of which can be used as integer data registers in most operations; four of them, 'SI' (source index), 'DI' (destination index), 'BX' (base), and 'BP' (base pointer), can also be used when computing the memory address of an operand, which is the sum of one of those registers and a displacement, or the sum of one of 'BX' or 'BP', one of 'SI' or 'DI', and a displacement. The 1979 Intel 8088, and the 16-bit Intel 80186, Intel 80188, and Intel 80286 successors, work the same. In 1985, the i386, a 32-bit successor to those processors that introduced IA-32, the 32-bit version of the x86 architecture, extended the eight 16-bit registers to 32 bits, with "E" added to the beginning of each register name; in IA-32, the memory address of an operand is the sum of one of those eight registers, one of seven of those registers (the stack pointer is not allowed as the second register here) multiplied by 1, 2, 4, or 8, and a displacement.: 3-11–3-12, 3-22–3-23 The Advanced Micro Devices Opteron, the first model of which was released in 2003, introduced x86-64, the 64-bit version of the x86 instruction set; in x86-64, the general-purpose registers were extended to 64 bits, and eight additional general-purpose registers were added; the memory address of an operand is the sum of two of those 16 registers and a displacement.: 3–12, 3–24 The reduced instruction set computing (RISC) instruction sets introduced in the 1980s and 1990s all provide general-purpose registers that can contain either numerical values or address values. In most of those instruction sets there are 32 general-purpose registers (in some of those instruction sets, the value of one register is hardwired to zero), any of which can be used to calculate the operand address; they do not have dedicated index registers. In the 32-bit version of the ARM architecture, first developed in 1985, there are 16 registers designated as "general-purpose registers", but only 13 of them can be used for all purposes, with register R15 containing the program counter. The memory address of a load or store instruction is the sum of any of the 16 registers and either a displacement or another of the registers with the exception of R15 (possibly shifted left for scaling). In the 64-bit version of the ARM architecture, there are 31 64-bit general-purpose registers plus a stack pointer and a zero register; the memory address of a load or store instruction is the sum of any of the 31 registers and either a displacement or another of the registers.
Examples:
Here is a simple example of index register use in assembly language pseudo-code that sums a 100 entry array of 4-byte words:

   Clear_accumulator
   Load_index 400,index2                          //load 4*array size into index register 2 (index2)
loop_start:
   Add_word_to_accumulator array_start,index2     //add to AC the word at the address (array_start + index2)
   Branch_and_decrement_if_index_not_zero loop_start,4,index2   //loop, decrementing by 4, until index register is zero
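The same loop can be mirrored in ordinary code, with a variable standing in for the index register. A small Python sketch of the idea (illustrative only; real hardware does this in one or two instructions per iteration):

```python
# Sum a 100-entry array the way the pseudo-assembly above does:
# walk an "index register" down from 4*len(array) to 0 in 4-byte steps.
WORD_SIZE = 4
array = list(range(100))          # stand-in for 100 words in memory

accumulator = 0
index2 = WORD_SIZE * len(array)   # Load_index 400, index2
while index2 != 0:                # Branch_..._if_index_not_zero
    # effective address = array_start + index2, converted to a 0-based element
    accumulator += array[index2 // WORD_SIZE - 1]
    index2 -= WORD_SIZE

print(accumulator)                # -> 4950, the same as sum(array)
```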
**The State of the World's Animal Genetic Resources for Food and Agriculture**
The State of the World's Animal Genetic Resources for Food and Agriculture:
The State of the World's Animal Genetic Resources for Food and Agriculture is a major report on the genetic resources of breeds of farm livestock in the world. It was published by the Food and Agriculture Organization of the United Nations (FAO) in 2007. It covers mammalian and avian domestic livestock breeds, but does not include fish or honey bees and other invertebrates. It is based on information submitted to the FAO, in the form of reports of participating countries, thematic studies prepared by experts and data on individual breeds submitted to DAD-IS. An annex to the report, the List of breeds documented in the Global Databank for Animal Genetic Resources, gives an estimate of conservation status for all breeds for which sufficient data had been received. The report has been translated into Arabic, Chinese, French, Indonesian, Russian and Spanish. In 2015 a second edition of the report, The Second Report on the State of the World’s Animal Genetic Resources, was published. The second report emphasizes changes in the status and management of animal genetic resources for food and agriculture that occurred in the eight-year period following the publication of the original version. The second report has been translated into Chinese.
**Vertical queue**
Vertical queue:
The concept of a vertical queue is often used in traffic flow studies as a common assumption to simplify analysis problems. Its use enables many calculations to be simplified, allowing researchers to get to the core of their problem, while ignoring the effects of queue buildup on a roadway. Vertical queues can also be used in traffic signal analysis, with vertical queues occurring at the location of the stop bar.
Concept usage:
The vertical queue assumption presumes that vehicles on a roadway do not back up over the length of the roadway, which would be considered a horizontal queue, but rather stack up upon one another at the point where congestion begins or at the stop line of a traffic signal. The vertical queue is unitless, and is simply representative of the number of vehicles which are delayed at a given point in a system. This is clearly not possible in real life, but the assumption allows vehicles in an analysis to drive at the free flow speed until reaching the point of congestion. A vehicle does not have to travel at less than the free flow speed due to a road being congested because of a horizontal queue. This simplification is widely accepted by traffic flow theorists.
Concept usage:
Vehicles enter the vertical queue at the top of the stack and depart from the bottom. The first vehicle to arrive at the point of congestion is thus at the bottom of the vertical queue. Vehicles incur no delay traveling to the point of congestion, reaching the location of the vertical queue without hindrance; they incur delay only while in congestion or at the stop line. The time a vehicle spends within the vertical queue is the difference between its actual travel time and its undelayed travel time to the point of congestion, i.e., it equals the delay incurred before the point of congestion.
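To make this accounting concrete, here is a hypothetical sketch of a vertical queue at a signalized stop bar; all rates and timings are invented for illustration:

```python
# Hypothetical sketch (rates and timings invented): a vertical queue at a
# signalized stop bar under deterministic arrivals and departures.
ARRIVAL_RATE = 0.4   # vehicles per second arriving at free-flow speed
SERVICE_RATE = 1.0   # vehicles per second discharged while the signal is green
RED_END = 30.0       # the signal is red for the first 30 s of the cycle
CYCLE = 60.0         # cycle length in seconds
DT = 1.0             # simulation time step in seconds

queue = 0.0          # vehicles stacked at the stop bar (the vertical queue)
total_delay = 0.0    # vehicle-seconds of delay accumulated in the queue

t = 0.0
while t < CYCLE:
    queue += ARRIVAL_RATE * DT            # arrivals reach the stop bar undelayed
    if t >= RED_END:                      # green: the stack discharges
        queue = max(0.0, queue - SERVICE_RATE * DT)
    total_delay += queue * DT             # every queued vehicle accrues DT of delay
    t += DT

print(f"Delay over one cycle: {total_delay:.0f} vehicle-seconds")
```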
**Volume correction factor**
Volume correction factor:
In thermodynamics, the Volume Correction Factor (VCF), also known as Correction for the effect of Temperature on Liquid (CTL), is a standardized computed factor used to correct for the thermal expansion of fluids, primarily liquid hydrocarbons, at various temperatures and densities. It is typically a number between 0 and 2, rounded to five decimal places, which, when multiplied by the observed volume of a liquid, returns a "corrected" value standardized to a base temperature (usually 60 °F or 15 °C).
Conceptualization:
In general, VCF / CTL values have an inverse relationship with observed temperature relative to the base temperature. That is, observed temperatures above 60 °F (or whatever base temperature is used) typically correlate with a correction factor below 1, while temperatures below 60 °F correlate with a factor above 1. This behaviour follows from the kinetic theory of matter and the thermal expansion of matter: as the temperature of a substance rises, so does the average kinetic energy of its molecules, and the rise in kinetic energy requires more space between the particles of the substance, which leads to its physical expansion. Conceptually, this makes sense when applying the VCF to observed volumes. Observed temperatures below the base temperature generate a factor above 1, indicating the corrected volume must increase to account for the contraction of the substance relative to the base temperature. The opposite is true for observed temperatures above the base temperature, which generate factors below 1 to account for the expansion of the substance relative to the base temperature.
Conceptualization:
Exceptions:
While the VCF is primarily used for liquid hydrocarbons, the theory and principles behind it apply to most liquids, with some exceptions. As a general principle, most liquid substances contract in volume as temperature drops. However, certain substances, water for example, contain unique angular structures at the molecular level. When these substances reach temperatures just above their freezing point, they begin to expand, since the angle of the bonds prevents the molecules from fitting tightly together, resulting in more empty space between the molecules in the solid state. Other substances which exhibit similar properties include silicon, bismuth, antimony and germanium. While these are exceptions to the general principles of thermal expansion and contraction, they would seldom, if ever, be used in conjunction with VCF / CTL, as the correction factors depend on specific constants, which are in turn tied to liquid hydrocarbon classifications and densities.
Formula and usage:
The formula for the Volume Correction Factor is commonly defined as:

VCF = exp{ −αT · ΔT · [1 + 0.8 · αT · (ΔT + δT)] }

Where:

exp refers to the mathematical constant e raised to the power of the expression in braces.
ΔT refers to the observed temperature (t) minus the base temperature (T) in degrees Fahrenheit, i.e. ΔT = t − T. When computing VCF, T is commonly set to 60 °F.
Formula and usage:
δT refers to a small base temperature correction value; if correcting to 60 °F, δT = 0.
αT refers to the coefficient of thermal expansion at the base temperature. If a base temperature of 60 °F is used, αT is written as α60, where

α60 = K0/ρ*² + K1/ρ* + K2

ρ* refers to the density (kg/m³) at the base temperature T and 0 psig pressure, as correlated with α60 when the base temperature is 60 °F.
K0, K1, and K2 refer to a specific set of constants, dependent upon the liquid's classification and density at 60 °F; e.g., for crude oils, K0, K1, and K2 = 341.0957, 0, and 0, respectively. See table below for typical values used.
Formula and usage:
Usage:
In standard applications, computing the VCF or CTL requires the observed temperature of the product and its API gravity at 60 °F. Once calculated, the corrected volume is the product of the VCF and the observed volume.
Formula and usage:
V_corrected = VCF × V_observed

Since API gravity is an inverse measure of a liquid's density relative to that of water, it can be calculated by first dividing the liquid's density by the density of water at a base temperature (usually 60 °F) to compute Specific Gravity (SG), then converting the Specific Gravity to degrees API as follows:

°API = (141.5 / SG) − 131.5

Traditionally, VCF / CTL values are found by matching the observed temperature and API gravity within standardized books and tables published by the American Petroleum Institute. These methods are often more time-consuming than entering the values into an online VCF calculator; however, due to the variance in methodology and computation of constants, the tables published by the American Petroleum Institute are preferred when dealing with the purchase and sale of crude oil and residual fuels.
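The calculation can be sketched in code as follows. This is a hypothetical illustration (the function name and sample values are invented; the constants are the crude-oil values quoted above), not a substitute for the published API tables:

```python
import math

# Hedged sketch of the VCF/CTL computation reconstructed above; verify
# against the published API tables before any commercial use.
K0, K1, K2 = 341.0957, 0.0, 0.0  # crude-oil constants (from the text above)

def vcf(rho_base: float, t_obs: float, t_base: float = 60.0) -> float:
    """Correction factor to the base temperature; rho_base in kg/m^3 at the
    base temperature, temperatures in degrees F (deltaT term = 0 at 60 F)."""
    alpha = K0 / rho_base**2 + K1 / rho_base + K2  # thermal expansion coefficient
    dt = t_obs - t_base
    return math.exp(-alpha * dt * (1.0 + 0.8 * alpha * dt))

factor = vcf(850.0, 80.0)          # crude at 850 kg/m^3, observed at 80 F
print(round(factor, 5))            # below 1: the warm liquid has expanded
corrected = factor * 1000.0        # 1000 observed barrels -> barrels at 60 F
```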
Formulas for Reference:
Density of pure water at 60 °F: 999.016 kg/m³ (0.999016 g/cm³). Note: There is no universal agreement on the exact density of pure water at various temperatures, since each industry often uses a different standard. For example, the USGS gives 0.99907 g/cm³. While the relative variance between values may be low, it is best to use the agreed-upon standard for the industry you are working in.
**Rosiwal scale**
Rosiwal scale:
The Rosiwal scale is a hardness scale in mineralogy, named in memory of the Austrian geologist August Karl Rosiwal. The Rosiwal scale attempts to give more quantitative values of scratch hardness, unlike the Mohs scale, which is a qualitative measurement with relative values. The Rosiwal method (also called the Delesse-Rosiwal method) is a method of petrographic analysis and is performed by scratching a polished surface under a known load using a scratch-tip with a known geometry. The hardness is calculated by finding the volume of removed material, but this measurement can be difficult and must sample a large enough number of grains in order to have statistical significance.
Rosiwal scale values:
The scale measures the scratch hardness of a mineral, expressed on a quantitative scale. These measurements must be performed in a laboratory, since the surfaces must be flat and smooth. The base value of the Rosiwal scale is defined by corundum, set to 1000 (unitless).
**Secret brand**
Secret brand:
A secret brand is any design or manufacturing company that does not advertise or overtly label its products. The products are generally considered luxury goods intended for exclusive clientele.
Marketing strategy:
The idea of a secret brand rests on one or more of three economic principles. The first is scarcity value. The secret brand creates products with highly specific, unique qualities, usually very subtle and invisible to the casual observer. These qualities do not necessarily improve the performance of the product and can be inefficiencies (e.g., using heavy-weight denim for casual clothes). The customer purchases the product for its rarity: to have something that is very difficult to acquire.
Marketing strategy:
The second principle is branding. Large companies wanting to reach a new demographic may use a secret brand to establish clout among that demographic. This imprint brand may use the distribution network of the parent brand, but ship from a dummy company. Alternatively, the secret brand may employ individuals posing as independent agents to distribute goods to specialty stores. The theory behind these strategies is that they allow the parent company to expand its demographic without compromising the brand identity of its "open" brand.
Marketing strategy:
The third principle is experimentation. Openly announcing new products often affects stock values in publicly traded companies. If a company is unsure of the real world appeal of a product, it may employ a secret brand to experiment with market reaction on a small scale. This technique may be used after inconclusive focus group results (i.e. the majority somewhat dislike the product, while the minority is highly enthusiastic about it).
**Lisinopril/hydrochlorothiazide**
Lisinopril/hydrochlorothiazide:
Lisinopril/hydrochlorothiazide, sold under the brand name Zestoretic among others, is a fixed-dose combination medication used for the treatment of high blood pressure. It contains lisinopril, an ACE inhibitor, and hydrochlorothiazide, a diuretic. Typically, it becomes an option once a person is doing well on the individual components. It is taken by mouth. Common side effects include dizziness, headache, cough, and feeling tired. Severe side effects may include angioedema and low blood pressure. Use during pregnancy may harm the baby. The combination was approved for medical use in the United States in 1989. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2020, it was the 50th most commonly prescribed medication in the United States, with more than 13 million prescriptions.
**Infinite Dimensional Analysis, Quantum Probability and Related Topics**
Infinite Dimensional Analysis, Quantum Probability and Related Topics:
Infinite Dimensional Analysis, Quantum Probability and Related Topics is a quarterly peer-reviewed scientific journal published since 1998 by World Scientific. It covers the development of infinite dimensional analysis, quantum probability, and their applications to classical probability and other areas of physics.
Abstracting and indexing:
The journal is abstracted and indexed in CompuMath Citation Index, Current Contents/Physical, Chemical & Earth Sciences, Mathematical Reviews, Science Citation Index, Scopus, and Zentralblatt MATH. According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.793.
**Somatorelin**
Somatorelin:
Somatorelin is a diagnostic agent for determining growth hormone deficiency. It is a recombinant version of growth hormone-releasing hormone (GHRH). Somatorelin has been used to study hormone deficiency (particularly growth hormone deficiency), cognitive impairment, sleep disorders, and aging.
**Dimethyl fumarate**
Dimethyl fumarate:
Dimethyl fumarate (DMF) is the methyl ester of fumaric acid and is named after the earth smoke plant (Fumaria officinalis). Dimethyl fumarate combined with three other fumaric acid esters (FAEs) is solely licensed in Germany as an oral therapy for psoriasis (brand name Fumaderm). Since 2013, it has been approved by the U.S. Food and Drug Administration (FDA) as a treatment option for adults with relapsing multiple sclerosis (brand name Tecfidera). In 2017, an oral formulation of dimethyl fumarate (brand name Skilarence) was approved for medical use in the European Union as a treatment for moderate-to-severe plaque psoriasis. Dimethyl fumarate is thought to have immunomodulatory properties without causing significant immunosuppression. Dimethyl fumarate has also been applied as a biocide in furniture or shoes to prevent growth of mold during storage or transport in humid climates. However, due to cases of allergic reactions after skin contact, dimethyl fumarate-containing consumer products are no longer authorised to be manufactured (since 1998) or imported (since 2009) in the European Union. Dimethyl fumarate is available as a generic medication.
Medical uses:
In Germany, dimethyl fumarate is marketed for the treatment of psoriasis and is available as an oral formulation mixed with related compounds (Fumaderm); in the UK, it is available as a pure oral formulation (Skilarence). It is also available in the US as an oral formulation (Tecfidera) to treat adults with relapsing multiple sclerosis. A 2015 Cochrane systematic review found moderate-quality evidence of a reduction in the number of people with relapsing-remitting MS who had relapses over a two-year treatment period with dimethyl fumarate versus placebo, as well as low-quality evidence of a reduction in worsening disability, and an overall need for higher-quality studies with longer follow-up.
History:
The first medical use of fumaric acid was described in 1959 by Walter Schweckendiek, a German chemist, and was a topical formulation for psoriasis. The Swiss company Fumapharm eventually brought Fumaderm, an oral formulation of dimethyl fumarate (along with some monoesters), to market for psoriasis in Germany in 1994. Based on the efficacy and safety of this formulation, and evidence that dimethyl fumarate was the main active component, an oral formulation of dimethyl fumarate was developed by Almirall. This oral formulation, under the brand name Skilarence, was approved by the European Medicines Agency (EMA) in June 2017, for the treatment of moderate-to-severe plaque psoriasis in adults. Initial clinical research on the use of dimethyl fumarate for the treatment of multiple sclerosis was conducted by Fumapharm in collaboration with Biogen Idec; Fumapharm was subsequently acquired by Biogen Idec in 2006. Aditech Pharma in Sweden had also been researching oral formulations of dimethyl fumarate for MS, and in 2010 the Danish company Forward Pharma acquired Aditech's patents. Biogen continued developing its oral formulation of dimethyl fumarate from Fumapharm under the code name BG-12; it was approved, under the trade name Tecfidera, for the treatment of adults with relapsing forms of MS in March 2013. Biogen priced the drug at $54,000 per year in the US. It was approved in Europe in 2014. In the UK, NICE issued guidance recommending the drug as cost-effective, but only for patients who do not have highly active or rapidly evolving severe relapsing–remitting multiple sclerosis, and only if Biogen agreed to provide it at a discount. Forward and Biogen entered into patent litigation in many jurisdictions; in 2017, the companies settled the litigation, with Biogen paying Forward $1.25 billion, with the potential for up to 10% of royalties depending on what happened with the patents in various jurisdictions. In June 2020, in a case between Biogen and Mylan, the U.S. District Court in West Virginia declared invalid Biogen's so-called "514" patent protecting Tecfidera from generic competition. The ruling gave Mylan the right to launch its own version of Tecfidera.
Pharmacology:
Dimethyl fumarate is metabolized to monomethyl fumarate (MMF) prior to entering systemic distribution, and has accordingly been described as a prodrug of monomethyl fumarate. Other prodrugs that metabolize to monomethyl fumarate have been developed to treat relapsing-remitting multiple sclerosis, including diroximel fumarate, which was approved by the FDA in October 2019. The precise mechanism of action of dimethyl fumarate is not clear. Dimethyl fumarate and monomethyl fumarate can activate the transcription factor (nuclear factor erythroid-derived 2)-related factor 2 (Nrf2) pathway, and monomethyl fumarate has been identified as a nicotinic acid receptor agonist in vitro. In mice that lack Nrf2 expression, however, dimethyl fumarate is still able to modulate the immune system, which indicates that Nrf2 is not required for its immunomodulatory action. For psoriasis, the mechanism of action is believed to be due to the interaction of monomethyl fumarate with the intracellular reduced glutathione of cells directly involved in the pathogenesis of psoriasis. The interaction with glutathione leads to the inhibition of nuclear translocation and the transcriptional activity of the nuclear factor kappa-light-chain-enhancer of activated B-cells (NF-κB). Dimethyl fumarate and monomethyl fumarate have been shown to reduce the expression of micro-RNA-21, which is essential for the production of pathogenic cells in multiple sclerosis and psoriasis. This can be achieved because dimethyl fumarate and monomethyl fumarate, as cell-permeable metabolites, can epigenetically regulate the expression of micro-RNA-21 via the metabolic-epigenetic interplay in developing immune cells. The main activity of dimethyl fumarate and monomethyl fumarate is considered to be immunomodulatory, resulting in a shift in T helper cells (Th) from the Th1 and Th17 profile to a Th2 phenotype. Inflammatory cytokine production is reduced by the induction of proapoptotic events, inhibition of keratinocyte proliferation, reduced expression of adhesion molecules and diminished inflammatory infiltrate within psoriatic plaques. The primary route of elimination is via exhalation of CO2, with small amounts excreted through urine or faeces. There is no evidence for dimethyl fumarate interaction with cytochrome P450 enzymes or the most common efflux and uptake transporters, and therefore no interactions are expected with medicinal products metabolised or transported by these systems.
Synthesis and reactions:
Several methods exist for the laboratory synthesis of dimethyl fumarate, with reported methods including alkene isomerization of dimethyl maleate and Fischer esterification of fumaric acid. Dimethyl fumarate is an old compound used in industrial chemistry and can be purchased by the ton; as of 2012, one could purchase it for $1 to $50 per metric ton, with a two-ton minimum purchase. The compound undergoes electrohydrodimerization.
Adverse effects:
In the treatment of psoriasis, the most common adverse events are gastrointestinal events, flushing and lymphopenia, which are usually mild. Other adverse events include progressive multifocal leukoencephalopathy (PML) and Fanconi syndrome, which are considered rare. PML is probably caused by a combination of factors. A previous infection with the John Cunningham virus (JCV) is considered a prerequisite for the development of PML. In a PML review, all confirmed cases were of patients exposed to periods of varying lymphopenia. For multiple sclerosis, adverse effects include flushing and gastrointestinal events, such as diarrhoea, nausea and upper abdominal pain. The drug label includes warnings about the risk of anaphylaxis and angio-oedema, PML, lymphopenia and liver damage. There is no information on how dimethyl fumarate affects the fetus during pregnancy; in animal tests there was fetal harm at clinically relevant doses.
Consumer products:
There have been cases of severe contact dermatitis likely related to dimethyl fumarate contact allergy from newly acquired sofas and chairs. Dimethyl fumarate has been found to be an allergic sensitizer at very low concentrations, producing eczema by contact allergy that is difficult to treat. Concentrations as low as 1 ppm (parts per million) may produce allergic reactions in the most severe cases. There are only a handful of equally potent sensitisers. The sensitizing risk was brought to public attention by the "poison chair" incident, in which the Chinese manufacturer Linkwise produced two-seater sofas with dimethyl fumarate sachets inside to inhibit mould while they were in storage or transport. In Finland, where the chairs were sold from 2006 to 2007, 60 users sustained serious rashes. The cause was identified as a dimethyl fumarate-induced allergic reaction by Tapio Rantanen from Finland, and his original article became the cover story in the July 2008 issue of the British Journal of Dermatology. In the United Kingdom, sofas sold by Argos, Land of Leather and Walmsley Furnishing containing the chemical caused over a hundred injuries. Argos withdrew the sofas from stores and contacted buyers to collect those that had been sold; Land of Leather withdrew the sofas without notifying buyers, and Walmsley said they had removed the sachets from sofas they sold after the danger came to light. The danger came to public attention in 2008 when the BBC Watchdog programme alerted consumers to the sofas. In the European Union, the use of dimethyl fumarate in consumer product manufacturing has been forbidden since 1998, and in 2009 the importation of consumer products containing dimethyl fumarate was also forbidden. EU Commission Decision 2009/251 of 17 March 2009 required member states to ensure that consumer products containing dimethyl fumarate were not placed or made available on the market from 1 May 2009, definitively outlawing any marketing of such products in the European Union. The ban as laid down in Decision 2009/251 establishes a maximum dimethyl fumarate concentration in products of 0.1 ppm; products containing more than 0.1 ppm dimethyl fumarate were to be withdrawn from the market and recalled from consumers.
Research:
As of March 2021, dimethyl fumarate is being evaluated as a treatment for COVID-19 as part of the RECOVERY Trial in the UK.
**Khimiya i Zhizn – XXI Vek**
Khimiya i Zhizn – XXI Vek:
Khimiya i Zhizn – XXI Vek (Russian: "Химия и жизнь – XXI век", Chemistry and Life – 21st Century) is a Russian popular scientific monthly magazine, known as simply Khimiya i Zhizn during Soviet times.
The first issue of the magazine was published in April 1965, with a circulation of 12,500; this figure later reached 150,000. Since 1997 the magazine has been known as Khimiya i Zhizn – XXI Vek.
**EMT-7**
EMT-7:
EMT-7 is a Russian electromagnetic countermine system for clearing minefields and defense against magnetic mines and enemy armor. It projects an electromagnetic pulse to detonate antitank mines and disrupt electronics before the tank reaches them.
The EMT-7 system has been tested on the T-72 and T-90 main battle tanks.
**Combitube**
Combitube:
The Combitube—also known as the esophageal tracheal airway or esophageal tracheal double-lumen airway—is a blind insertion airway device (BIAD) used in the pre-hospital and emergency setting. It is designed to provide an airway to facilitate the mechanical ventilation of a patient in respiratory distress.
Description and use:
It consists of a cuffed, double-lumen tube that is inserted through the patient's mouth to secure an airway and enable ventilation. Generally, the distal tube (tube two, clear) enters the esophagus, where the cuff is inflated and ventilation is provided through the proximal tube (tube one, blue), which opens at the level of the larynx. In the rare instance where the distal tube intubates the trachea, ventilation is provided through the distal tube. Inflation of the cuff in the esophagus allows a level of protection against aspiration of gastric content similar to that found in the laryngeal mask. The simplicity of placement is the main advantage of the Combitube over endotracheal intubation. When intubating with a traditional endotracheal tube, care must be taken to visually ensure that the tube has been placed in the trachea, whereas the dual-lumen design of the Combitube allows ventilation to proceed regardless of esophageal or tracheal placement.
Description and use:
A device called the Positube, which allows for esophageal intubation detection, can be used on tube number two to rule out the intubation of the Combitube in the trachea. The Positube checks for air flow resistance on tube number two and is very helpful in checking proper Combitube placement when intubation is performed in noisy environments.
The Combitube's ease of use makes it an option for use in the pre-hospital, emergency setting when advanced level providers capable of placing an endotracheal tube are not immediately available. The drawbacks of Combitubes are evidenced by reports of serious complications such as aspiration, esophagus perforation and cranial nerve dysfunction associated with their use.
Description and use:
Although it has been suggested as an option by the American Heart Association and European Resuscitation Council since 2000 for situations where intubation attempts are unsuccessful, it is seldom used outside of the pre-hospital, emergency setting, as it does not allow for long-term airway control. Alternatives to the Combitube include the laryngeal mask airway, the endotracheal tube, and the laryngeal tube.
**Modula-2**
Modula-2:
Modula-2 is a structured, procedural programming language developed between 1977 and 1985/8 by Niklaus Wirth at ETH Zurich. It was created as the language for the operating system and application software of the Lilith personal workstation. It was later used for programming outside the context of the Lilith.
Modula-2:
Wirth viewed Modula-2 as a successor to his earlier programming languages Pascal and Modula. The main concepts are:

- The module as a compiling unit for separate compiling
- The coroutine as the basic building block for concurrent processes
- Types and procedures that allow access to machine-specific data

The language design was influenced by the Mesa language and the Xerox Alto, both from Xerox PARC, which Wirth saw during his 1976 sabbatical year there. The computer magazine Byte devoted its August 1984 issue to the language and its surrounding environment. Modula-2 was followed by Modula-3, and later by the Oberon series of languages.
Description:
Modula-2 is a general-purpose procedural language suitable for both systems programming and applications programming. The syntax is based on Wirth's earlier language, Pascal, with some elements and syntactic ambiguities removed. Added were the module concept, designed to support separate compilation and data abstraction, and direct language support for multiprogramming.
Description:
The language allows the use of one-pass compilers. Such a compiler by Gutknecht and Wirth was about four times faster than earlier multi-pass compilers. A Modula-2 module may be used to encapsulate a set of related subprograms and data structures, and restrict their visibility from other parts of the program. Modula-2 programs are composed of modules, each of which is made up of two parts: a definition module, the interface portion, which contains only those parts of the subsystem that are exported (visible to other modules), and an implementation module, which contains the working code that is internal to the module. Here is an example of the source code for the "Hello world" program:
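The following is a minimal PIM-style sketch, assuming the classic InOut library module:

```modula-2
MODULE HelloWorld;
FROM InOut IMPORT WriteString, WriteLn;
BEGIN
  WriteString("Hello world!");
  WriteLn
END HelloWorld.
```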
Description:
The language has strict scope control. Except for standard identifiers, no object from the outside is visible inside a module unless explicitly imported; no internal module object is visible from the outside unless explicitly exported.
Description:
Suppose module M1 exports objects a, b, c, and P by enumerating its identifiers in an explicit export list, as sketched below. Then the objects a, b, c, and P from module M1 are known outside module M1 as M1.a, M1.b, M1.c, and M1.P. They are exported in a qualified manner to the outside (assuming module M1 is global). The exporting module's name, i.e. M1, is used as a qualifier followed by the object's name.
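A minimal sketch of such a module, assuming a PIM-style local module with an explicit qualified export list; the declarations of a, b, c, and P are invented for illustration:

```modula-2
MODULE M1;
EXPORT QUALIFIED a, b, c, P;

VAR a, b, c: INTEGER;

PROCEDURE P;
BEGIN
  (* body of P *)
END P;

END M1;
```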
Description:
Suppose module M2 contains the IMPORT declaration sketched below. This means that the objects exported by module M1 to the outside of its enclosing program can now be used inside module M2. They are referenced in a qualified manner: M1.a, M1.b, M1.c, and M1.P, as shown in the example after this paragraph. Qualified export avoids name clashes: for example, if another module M3 exports an object called P, then the two objects can be distinguished, since M1.P differs from M3.P. It does not matter that both objects are called P inside their exporting modules M1 and M3.
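A minimal sketch of the import and of qualified references, under the same illustrative declarations as above:

```modula-2
MODULE M2;
IMPORT M1;

BEGIN
  M1.a := 1;   (* qualified reference to M1's exported variable *)
  M1.P;        (* qualified call of M1's exported procedure *)
END M2;
```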
Description:
An alternative method exists. Suppose module M4 is formulated as sketched below. This means that objects exported by module M1 to the outside can again be used inside module M4, but now by mere references to the exported identifiers in an unqualified manner: a, b, c, and P. This method of import is usable if there are no name clashes; it allows variables and other objects to be used outside their exporting module in the same unqualified manner as inside the exporting module.
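A minimal sketch using a FROM import, again with the illustrative declarations above:

```modula-2
MODULE M4;
FROM M1 IMPORT a, b, c, P;

BEGIN
  a := 1;   (* unqualified reference to M1's variable *)
  P;        (* unqualified call of M1's procedure *)
END M4;
```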
Description:
The export and import rules not only safeguard objects against unwanted access, but also allow a cross-reference of the definition of every identifier in a program to be created. This property helps with the maintenance of large programs containing many modules.
The language provides for single-processor concurrency (monitors, coroutines and explicit transfer of control) and for hardware access (absolute addresses, bit manipulation, and interrupts). It uses a nominal type system.
Dialects:
There are two major dialects of Modula-2. The first is PIM, named for the book Programming in Modula-2 by Niklaus Wirth. There were three major editions of PIM: the second, third (corrected), and fourth. Each describes slight variants of the language. The second major dialect is ISO, named for the standardization effort by the International Organization for Standardization. Here are a few of the differences among them.
Dialects:
PIM2 (1983):
- Required an explicit EXPORT clause in definition modules.
- The function SIZE needs to be imported from module SYSTEM.

PIM3 (1985):
- Removed the EXPORT clause from definition modules, following the observation that everything within a definition module defines the interface to that module, hence the EXPORT clause was redundant.
- The function SIZE is pervasive (visible in any scope without import).

PIM4 (1988):
- Specified the behaviour of the MOD operator when the operands are negative.
- Required all ARRAY OF CHAR strings to be terminated by ASCII NUL, even if the string fits exactly into its array.

ISO (1996, 1998):
- ISO Modula-2 resolved most of the ambiguities in PIM Modula-2. It added the data types COMPLEX and LONGCOMPLEX, exceptions, module termination (FINALLY clause), and a complete standard input/output (I/O) library. There are many minor differences and clarifications.
Supersets:
There are several supersets of Modula-2 with language extensions for specific application domains:

PIM supersets:
- Canterbury Modula-2, extended with Oberon-like extensible records (withdrawn and no longer available anywhere)
- Modula-2+, extended with preemptive threads and exceptions
- Modula-2*, parallel extension
- Modula-P, another parallel extension
- Modula–Prolog, adds a Prolog layer
- Modula/R, adds relational database extensions
- Modula-GM, adds embedded system extensions

ISO supersets:
- ISO10514-2, adds an object-oriented programming layer
- ISO10514-3, adds a generic programming (generics) layer

IEC supersets:
- Mod51, extended with IEC 1131 constructs for embedded development
Derivatives:
There are several derivative languages that resemble Modula-2 very closely but are new languages in their own right. Most are different languages with different purposes and with strengths and weaknesses of their own:

- Modula-3, developed by a team of ex-Xerox employees who had moved to DEC and Olivetti
- Oberon, developed at ETH Zürich for System Oberon
- Oberon-2, Oberon with object-oriented extensions
- Active Oberon, yet another object-oriented extension of Oberon, developed also at ETH with the main objective of supporting parallel programming on multiprocessor and multicore systems
- Parallaxis, a language for machine-independent data-parallel programming
- Umbriel, developed by Pat Terry as a teaching language
- YAFL, a research language by Darius Blasband

Many other current programming languages have adopted features of Modula-2.
Language elements:
Reserved words: PIM [2,3,4] defines 40 reserved words: AND, ARRAY, BEGIN, BY, CASE, CONST, DEFINITION, DIV, DO, ELSE, ELSIF, END, EXIT, EXPORT, FOR, FROM, IF, IMPLEMENTATION, IMPORT, IN, LOOP, MOD, MODULE, NOT, OF, OR, POINTER, PROCEDURE, QUALIFIED, RECORD, REPEAT, RETURN, SET, THEN, TO, TYPE, UNTIL, VAR, WHILE, WITH.

Built-in identifiers: PIM [3,4] defines 29 built-in identifiers: ABS, BITSET, BOOLEAN, CAP, CARDINAL, CHAR, CHR, DEC, EXCL, FALSE, FLOAT, HALT, HIGH, INC, INCL, INTEGER, LONGINT, LONGREAL, MAX, MIN, NIL, ODD, ORD, PROC, REAL, SIZE, TRUE, TRUNC, VAL.
Embedded system use:
Modula-2 is used to program many embedded systems.
Cambridge Modula-2
Cambridge Modula-2, by Cambridge Microprocessor Systems, is based on a subset of PIM4 with language extensions for embedded development. The compiler runs on DOS and generates code for Motorola 68000 series (M68k) based embedded microcontrollers running a MINOS operating system.
Mod51
Mod51, by Mandeno Granville Electronics, is based on ISO Modula-2 with language extensions for embedded development following IEC 1131, an industry standard for programmable logic controllers (PLC) closely related to Modula-2. The Mod51 compiler generates standalone code for 80C51-based microcontrollers.
Embedded system use:
Modula-GM
Delco Electronics, then a subsidiary of GM Hughes Electronics, developed a version of Modula-2 for embedded control systems starting in 1985, naming it Modula-GM. It was the first high-level programming language used to replace machine code for embedded systems in Delco's engine control units (ECUs). This was significant because Delco was producing over 28,000 ECUs per day in 1988, making it then the world's largest producer of ECUs. The first experimental use of Modula-GM in an embedded controller was in the 1985 antilock braking system controller, which was based on the Motorola 68xxx microprocessor, and in the 1993 Gen-4 ECU used by the Champ Car World Series Championship Auto Racing Teams (CART) and Indy Racing League (IRL) teams. The first production use of Modula-GM was in GM trucks, starting with the 1990 model year vehicle control module (VCM) used to manage GM Powertrain's Vortec engines. Modula-GM was also used on all ECUs for GM's 90° Buick V6 engine family 3800 Series II used in the 1997–2005 model year Buick Park Avenue. The Modula-GM compilers and associated software management tools were sourced by Delco from Intermetrics.
Embedded system use:
Modula-2 was selected as the basis for Delco's high level language because of its many strengths over other alternative language choices in 1986. After Delco Electronics was spun off from GM (with other component divisions) to form Delphi Automotive Systems in 1995, global sourcing required that a non-proprietary high-level software language be used. ECU embedded software now developed at Delphi is compiled with commercial compilers for the language C.
Embedded system use:
Russian radionavigation satellites
The satellites of the Russian radionavigation-satellite service GLONASS, similar to the United States Global Positioning System (GPS), are programmed in Modula-2.
Compilers:
- Amsterdam Compiler Kit (ACK) Modula-2 – for MINIX; freeware
- ADW Modula-2 – for Windows; ISO compliant: ISO/IEC 10514-1, ISO/IEC 10514-2 (OO extension), ISO/IEC 10514-3 (generic extension); freeware
- Aglet Modula-2 – for AmigaOS 4.0 for PowerPC; freeware
- Fitted Software Tools (FST) Modula-2 – for DOS; freeware
- Gardens Point Modula-2 (GPM) – for BSD, Linux, OS/2, Solaris; ISO compliant; freeware as of 30 July 2014
- Gardens Point Modula-2 (GPM/CLR) – for .NET Framework; freeware
- GNU Modula-2 – for GCC platforms; version 1.0 released 11 December 2010; compliance: PIM2, PIM3, PIM4, ISO; free software, GNU General Public License (GPL)
- Logitech SA Modula-2 – the company also had a "Real Time Kernel" for embedded usage (1987)
- M2Amiga – for Amiga; free software
- M2M – by N. Wirth and collaborators from ETH Zurich; cross-platform, generates M-code for a virtual machine; freeware
- MacMETH – by N. Wirth and collaborators from ETH Zurich; for Macintosh, Classic only; freeware
- Mod51 – for the Intel 80x51 microcontroller family; ISO compliant, IEC1132 extensions; proprietary software
- Megamax Modula-2 – for Atari ST, with documentation in German only; freeware
- Modula-2 R10 – reference compiler for this Modula-2 revision; open source, peer reviewed
- ModulaWare – for OpenVMS (VAX and Alpha); ISO compliant; proprietary software
- ORCA/Modula-2 – for Apple IIGS, by The Byte Works for the Apple Programmer's Workshop
- p1 Modula-2 – for Macintosh, Classic and macOS (PowerPC and Carbon (API) only); ISO compliant; proprietary software
- MOCKA – for various platforms; PIM compliant; commercial, with freeware Linux/BSD versions
- TDI Modula-2 – for Atari ST, by TDI Software
- Terra M2VMS – for OpenVMS (VAX and Alpha); PIM compliant; proprietary software
- m2c, Ulm Modula-2 System – for Solaris (Sun SPARC and Motorola 68k); free software, GNU General Public License (GPL)
- XDS – ISO compliant, with a TopSpeed-compatible library: Native XDS-x86 for x86 (Windows and Linux); XDS-C for Windows and Linux (16- and 32-bit versions), targets C (K&R and ANSI); freeware

Turbo Modula-2
Turbo Modula-2 was a compiler and an integrated development environment for MS-DOS developed, but not published, by Borland. Jensen and Partners, which included Borland cofounder Niels Jensen, bought the unreleased codebase and turned it into TopSpeed Modula-2. It was eventually sold to Clarion, now SoftVelocity, which then offered the Modula-2 compiler as part of its Clarion product line at that time. A Zilog Z80 CP/M version of Turbo Modula-2 was briefly marketed by Echelon under license from Borland. A companion release for the Hitachi HD64180 was sold by Micromint as a development tool for their SB-180 single-board computer.
Compilers:
IBM Modula-2
IBM had a Modula-2 compiler for internal use which ran on both OS/2 and AIX, and had first-class support in IBM's E2 editor. IBM Modula-2 was used for parts of the OS/400 Vertical Licensed Internal Code (effectively the kernel of OS/400). This code was mostly replaced with C++ when OS/400 was ported to the IBM RS64 processor family, although some remains in modern releases of the operating system. A Motorola 68000 backend also existed, which may have been used in embedded systems products.
Operating systems:
Modula-2 is used to program some operating systems (OSs). The Modula-2 module structure and support are used directly in two related OSs.
Operating systems:
The OS named Medos-2, for the Lilith workstation, was developed at ETH Zurich by Svend Erik Knudsen with advice from Wirth. It is a single-user, object-oriented operating system built from Modula-2 modules. The OS named Excelsior, for the Kronos workstation, was developed by the Academy of Sciences of the Soviet Union, Siberian branch, Novosibirsk Computing Center, Modular Asynchronous Developable Systems (MARS) project, Kronos Research Group (KRG). It is a single-user system based on Modula-2 modules.
Books:
Gleaves, Richard (1984). Modula-2 for Pascal Programmers. Springer Books on Professional Computing (1st ed.). Switzerland: Springer Nature. doi:10.1007/978-1-4613-8531-8. ISBN 978-0-387-96051-7. S2CID 346624.
King, K. N. (1 January 1988). Modula-2: A Complete Guide. Burlington, Massachusetts: Jones and Bartlett Publishers. ISBN 978-0669110913.
Wirth, Niklaus (1988). Programming in Modula-2 (4th ed.). Berlin Heidelberg: Springer-Verlag. doi:10.1007/978-3-642-83565-0. ISBN 978-0-387-96051-7. S2CID 41899609.
Cooper, Doug (1 September 1990). Oh My! Modula-2: An Introduction to Programming. New York City, New York: W. W. Norton & Company. ISBN 978-0393960099.
Helman, Paul (1 March 1998). Walls and Mirrors: Intermediate Problem Solving and Data Structures : Modula, 2 (Benjamin/Cummings Series in Structured Programming). Benjamin-Cummings. ISBN 978-0805389456.
Sutcliffe, Richard J. (2004–2005). Modula-2: Abstractions for Data and Programming Structures. Arjay Books. ISBN 978-0-669-11091-3. Uses ISO-standard Modula-2.
**X-bar chart**
X-bar chart:
In industrial statistics, the X-bar chart is a type of Shewhart control chart that is used to monitor the arithmetic means of successive samples of constant size, n. This type of control chart is used for characteristics that can be measured on a continuous scale, such as weight, temperature, thickness etc. For example, one might take a sample of 5 shafts from production every hour, measure the diameter of each, and then plot, for each sample, the average of the five diameter values on the chart.
X-bar chart:
For the purposes of control limit calculation, the sample means are assumed to be normally distributed, an assumption justified by the Central Limit Theorem.
X-bar chart:
The X-bar chart is always used in conjunction with a variation chart such as the x̄ and R chart or the x̄ and s chart. The R-chart shows sample ranges (the difference between the largest and the smallest values in the sample), while the s-chart shows the samples' standard deviations. The R-chart was preferred when calculations were performed manually, as the range is far easier to calculate than the standard deviation; with the advent of computers, ease of calculation ceased to be an issue, and the s-chart is now preferred, as it is statistically more meaningful and efficient. Depending on the type of variation chart used, the average sample range or the average sample standard deviation is used to derive the X-bar chart's control limits.
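As a sketch of how the limits are derived from the average range, here is a hypothetical illustration; the data are made up, and A2 = 0.577 is the standard tabulated factor for samples of size n = 5:

```python
import statistics

# Hypothetical data: three samples of n = 5 measurements each.
samples = [
    [10.2, 9.9, 10.1, 10.0, 10.3],
    [10.0, 10.1, 9.8, 10.2, 10.0],
    [9.9, 10.0, 10.2, 10.1, 9.8],
]

xbars = [statistics.mean(s) for s in samples]   # one plotted point per sample
ranges = [max(s) - min(s) for s in samples]     # companion R-chart statistics

grand_mean = statistics.mean(xbars)             # center line of the X-bar chart
rbar = statistics.mean(ranges)                  # average sample range
A2 = 0.577                                      # tabulated constant for n = 5

ucl = grand_mean + A2 * rbar                    # upper control limit
lcl = grand_mean - A2 * rbar                    # lower control limit
print(f"center={grand_mean:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")
```

A sample mean plotted outside the resulting limits would signal special-cause variation worth investigating.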
**RunScanner**
RunScanner:
RunScanner is a freeware Microsoft Windows system utility which scans a Windows system for all configured running programs and autostart locations.
History:
The program was created as a "best of both worlds" effort to combine the positive features of similar programs such as HijackThis, Autoruns and Silentrunners.
Unlike similar programs, RunScanner connects to an online database to rate the good and the bad items.
The main purpose of the database is to do whitelisting instead of blacklisting.
Usage:
RunScanner scans all Windows autostart locations and gives the user the ability to delete misconfigured and malware items.
Inexperienced users can post their log files to forums where specialist helpers can help them to solve their malware problems.
Advanced users can use all features that modern malware fighters have come to expect.
Unlike other similar software, RunScanner can also exchange binary files with other users.
Main features:
- Scanning of 100+ hijack locations
- Verification of file signatures
- MD5 hash calculation of files
- Online malware analysis of results
- Extended filters
- Item marking
- Powerful process killer
- Plain text logfile generation
- Binary .run logfile generation
- Hosts file editor
**Hair crimping**
Hair crimping:
Hair crimping is a method of styling usually straight, long hair so that it becomes wavy, often in a sawtooth / zig-zag fashion. In the Southern United States, it is usually referred to as crimping, but also can be called crinkles or deep waves.
Hair crimping is usually achieved by treating the hair with heat from a crimping iron (also referred to as hair crimper) or by braiding the hair, often in multiple strands, then undoing the braids after a couple of hours. A crimping iron has parallel heated plates designed with a flat S-shaped repeating groove.
Hair crimping:
In 1972, the modern crimping iron was invented by Geri Cusenza, the original founder of Sebastian, for Barbra Streisand's hair. Crimping peaked in mainstream popularity during the mid-1980s. In 2007, crimped hair was shown on a model at a Chanel runway show, and the style became more popular throughout late 2007 and 2008. Crimping's popularity tends to return in ten-year cycles, although it is often seen in fashion and hairstyle shows due to its visually striking effect.
**PF-05105679**
PF-05105679:
PF-05105679 is a drug which acts as a potent and selective blocker of the TRPM8 ion channel, which is the main receptor responsible for the sensation of cold. It was developed as a potential analgesic, and blocks the sensation of cold in both animals and human trials. It also lowers core body temperature in small mammals, but does not produce this effect in humans in the normal dosage range.
**NXPH1**
NXPH1:
Neurexophilin-1 is a protein that in humans is encoded by the NXPH1 gene. This gene is a member of the neurexophilin family and encodes a secreted protein with a variable N-terminal domain, a highly conserved, N-glycosylated central domain, a short linker region, and a cysteine-rich C-terminal domain. This protein forms a very tight complex with alpha neurexins, a group of proteins that promote adhesion between dendrites and axons.
**The Lightning Process**
The Lightning Process:
The Lightning Process (LP) is a three-day personal training programme developed and trademarked by British osteopath Phil Parker. It claims to be beneficial for various conditions, including chronic fatigue syndrome, depression and chronic pain.
Developed in the late 1990s, it aims to teach techniques for managing the acute stress response that the body experiences under threat. The course aims to help recognise the stress response, calm it and manage it in the long term. It also applies some ideas drawn from neurolinguistic programming (a pseudoscience), as well as elements of life coaching.
The Lightning Process:
A clinical trial in 2017 found that the Lightning Process was effective when added to treatment for chronic fatigue syndrome, but it is not recommended by the NHS. Two corrections of this article were published due to methodological weaknesses. The approach has raised some controversy due to its use of psychological techniques to cure a physical illness. The website was amended after the Advertising Standards Authority ruled that it was misleading. In 2021, after a review of the available evidence, the National Institute for Health and Care Excellence explicitly advised against the use of the Lightning Process among patients with chronic fatigue syndrome.
Description:
The Lightning Process comprises three group sessions conducted on three consecutive days, lasting about 12 hours altogether, conducted by trained practitioners. According to its developer, Phil Parker, the programme aims to teach participants about the acute stress response the body experiences under threat. It aims to help trainees spot when this response is happening and learn how to calm it. Techniques based on movement, postural awareness and personal coaching are intended to modify the production of stress hormones. Participants practise a learnt series of steps to habituate the calming method. The Lightning Process is based on the theory that the body can get stuck in a persistent stress response. The initial stressor may be a viral or bacterial infection, psychological stress, or trauma, which causes physical symptoms due to the body's stress response. These symptoms then act as a further stressor, resulting in overload of the central nervous system and chronic activation of the body's stress response. Neuroplasticity then causes this abnormal stress response to persist and be maintained. The Lightning Process suggests that while this disruption initially happens at an unconscious level, it is possible for the patient to exert conscious control and influence over the process, eventually breaking the cycle. The rationale for the programme draws on ideas of osteopaths Andrew Taylor Still and J M Littlejohn regarding nervous system dysregulation and addressing clients' needs in a holistic manner rather than focusing solely on symptoms. It also incorporates ideas drawn from neuro-linguistic programming and life coaching. A basic premise is that individuals can influence their own physiological responses in controlled and repeatable ways. Such learnt emotional self-regulation, it is suggested, could help overcome illness and improve well-being, if the method is practised consistently. Parker advocates attending the training course in order to gain a full understanding of the tools in a safe and supportive context. He also lays emphasis on the trainee playing an active role in recovery (the course is framed as a fully participatory 'training', not a passive 'treatment' or set of answers given to a 'patient'). He claims that the programme has helped to resolve various conditions including depression, panic attacks, insomnia, drug addictions, chronic pain and multiple sclerosis. The programme has also been used with chronic fatigue syndrome. The Lightning Process is trademarked.
Criticism and support:
There has been criticism of the cost of the three-day course. There has also been criticism of the claimed benefits (see also below). John Greensmith, of the British advocacy group ME Free For All, stated "We think their claims are extravagant... if patients get better, they claim the success of the treatment – but if they don't, they say the patient is responsible." Some chronic fatigue syndrome patient support groups have strongly objected to the perceived implication that the disease has psychological causes. However, the Lightning Process website states that it is a neuro-physiological approach and that it considers CFS/ME to be a physical illness. Nigel Hawkes, writing for The BMJ, describes the Lightning Process as being "secretive about its methods, lacks overall medical supervision, and has a cultish quality because many of the therapists are former sufferers who deliver the programme with great conviction" and notes that "Some children who do not benefit have said that they feel blamed for the failure". Some people have claimed rapid cures for longstanding illnesses. Prominent advocates of the process include British journalist Patrick Strudwick, French dancer Chris Marques, and singer Laura Mvula.
Criticism and support:
Advertising Standards Authority ruling
In 2011, Hampshire Trading Standards requested that the UK Advertising Standards Authority (ASA) give a ruling on the website www.lightningprocess.com, arguing that the information on the site was misleading in four areas. The ASA upheld two of the four challenges. It concluded that although there seemed to be some evidence of participant improvement during the trials conducted, the trials were not controlled, the evidence was not sufficient to draw robust conclusions, and more investigation was necessary; consequently, the website's claims at the time were deemed misleading, and the site was amended.
Research:
A registered clinical trial (UK SMILE pilot study) was conducted in England at Bristol University, with results published in 2017. The results did not change the stance of the National Health Service in the United Kingdom, which does not recommend the method. A peer-reviewed qualitative study on experiences of the course among a group of young people with chronic fatigue syndrome was published in 2012.
Research:
A Norwegian support-group patient survey showed mixed experiences. Patient surveys are considered low-quality evidence compared with peer-reviewed studies; for various reasons, they cannot answer questions about the results of an intervention.
Research:
A pilot intervention study exploring the efficacy of the Lightning Process training programme for reducing chronic fatigue and improving health-related quality of life in cancer survivors with significant reported fatigue was conducted at the National Cancer Hospital in Norway (Radiumhospitalet) and published in August 2021. The study involved 13 participants and did not include a control group, which prevents causal attribution of any effect a given treatment may have. Significant improvement was reported at 3 and 6 months compared to baseline. No adverse effects were reported, and the authors argued that a larger controlled clinical trial may be recommended. Given the limited clinical evidence, the National Institute for Health and Care Excellence (NICE) explicitly states "[d]o not offer the Lightning Process, or therapies based on it, to people with ME/CFS" in its guideline for the management of ME/CFS published in 2021.
Research:
Public reaction to research
Esther Crawley said that "I never expected it would work" and that "This is an important study as it provides another treatment approach that some may find helpful. However, while these results are promising, further research is needed to establish which aspects of the process are helpful, whether it is an effective treatment on its own, and whether it could be used to help more severely affected patients." Research into chronic fatigue syndrome is often a target of criticism. The SMILE study received some public criticism for recruiting children when adult subjects are available. The study was approved by the National Research Ethics Service. The paediatrician supervising the study, Esther Crawley, has commented "If the Lightning Process is dangerous, as they say, we need to find out. They should want to find it out, not prevent research." Results of the study by Crawley were publicized at the Science Media Centre in September 2017; an editorial on its own presentation of the results of the SMILE study stated: "If you had only read the headlines for the CFS/ME story you may conclude that the treatment tested at Bristol might be worth a try if you are blighted by the illness, when in truth the author said repeatedly that the findings would first have to be replicated in a bigger trial." Reactions to the briefing were stronger than expected: "it was the criticism from within the scientific community that we had not anticipated." The briefing invited four psychologists to comment on the study, who were mild in their reactions, while the commentary on the 28 September 2017 article evoked detailed, well-referenced but anonymous criticisms of the SMILE study and the Lightning Process in the comments section. Dorothy Bishop from Oxford University commented that "The gains for patients in this study seem solid. However, while the patient allocation and statistical analysis of the trial appear to be done to a high standard, the intervention that was assessed is commercial and associated with a number of warning signs. The Lightning Process appears based on neurolinguistic programming, which has long been recognised as pseudoscience".
**FGF1**
FGF1:
Fibroblast growth factor 1, (FGF-1) also known as acidic fibroblast growth factor (aFGF), is a growth factor and signaling protein encoded by the FGF1 gene. It is synthesized as a 155 amino acid polypeptide, whose mature form is a non-glycosylated 17-18 kDa protein. Fibroblast growth factor protein was first purified in 1975, but soon afterwards others using different conditions isolated acidic FGF, Heparin-binding growth factor-1, and Endothelial cell growth factor-1. Gene sequencing revealed that this group was actually the same growth factor and that FGF1 was a member of a family of FGF proteins.
FGF1:
FGF-1 has no definitive signal sequence and thus is not secreted through classical pathways, but it does appear to form a disulfide-linked dimer inside cells that associates with a complex of proteins at the cell membrane (including S100A13 and Syt1), which then helps flip it through the membrane to the exterior of the cell. Once in the reducing conditions of the surrounding tissue, the dimer dissociates into monomeric FGF1 that can enter systemic circulation or be sequestered in tissues by binding to heparan sulfate proteoglycans of the extracellular matrix. FGF1 can then bind to and exert its effects via specific fibroblast growth factor receptor (FGFR) proteins, which themselves constitute a family of closely related molecules. In addition to its extracellular activity, FGF1 can also function intracellularly. The protein has a nuclear localization sequence (NLS), but the route that FGF1 takes to get to the nucleus is unclear, and it appears that some sort of cell surface receptor binding is necessary, followed by its internalization and translocation to the nucleus, whereupon it can interact with nuclear isoforms of FGFRs. This is different from FGF2, which also can activate nuclear FGFRs but has splicing variants of the protein that never leave the cell and go directly to the nucleus.
Function:
FGF family members possess broad mitogenic and cell survival activities and are involved in a variety of biological processes, including embryonic development, cell growth, morphogenesis, tissue repair, and tumor growth and invasion. This protein functions as a modifier of endothelial cell migration and proliferation, as well as an angiogenic factor. It acts as a mitogen for a variety of mesoderm- and neuroectoderm-derived cells in vitro, and is thus thought to be involved in organogenesis. Three alternatively spliced variants encoding different isoforms have been described. FGF1 is multifunctional, with many reported effects. As one example, in mice with diet-induced diabetes, an experimental equivalent of type 2 diabetes in humans, a single injection of the FGF1 protein is enough to restore blood sugar levels to a healthy range for more than two days.
Interactions:
FGF1 has been shown to interact with:
**Drop table**
Drop table:
A drop table or wheel drop is a device used in railway engineering during maintenance jobs that require the removal of locomotive or rolling stock wheelsets. The machine is built in a drop pit allowing a locomotive or rolling stock to be rolled onto it, avoiding the need for heavy cranes or jacks to lift the vehicle off the rails.
Drop table:
The vehicle is placed over the drop table, and the connections attaching the wheelset to the vehicle are unfastened. This allows the wheelset to 'float' independently of the locomotive. The wheelset is lowered into the drop pit on a short section of rail, and a dummy rail, normally part of the drop table machinery, is then inserted in the gap over the lowered wheelset. This enables the vehicle to be moved clear of the drop table on its remaining wheels, so that the removed wheelset can be lifted out of the drop pit for maintenance work to be performed on it.
**James Franck Institute**
James Franck Institute:
The James Franck Institute of the University of Chicago conducts interdisciplinary research in physics, chemistry, and materials science. Scientists at the institute include those interested in condensed matter physics, physical chemistry, materials chemistry, atomic, molecular, and optical (AMO) physics, geophysics, and biophysics. Founded in 1945 by university President Robert Maynard Hutchins as the Institute for the Study of Metals, it was renamed for Nobel Prize-winning physicist James Franck in 1967. It had its beginnings in the Metallurgical Laboratory, the World War II project that initiated the first self-sustaining nuclear chain reaction, using the metal uranium. The Institute's founding director was Cyril Stanley Smith, former head of metallurgy at Los Alamos, and the institute made early advances in pseudopotential theory and the study of the Fermi surface. The Institute was an early pioneer in interdisciplinary research on wide-ranging subjects: as it was organized like a "benevolent anarchy", a large percentage of its early papers did not even deal with metals but with other aspects of chemistry and physics.
**Metalog distribution**
Metalog distribution:
The metalog distribution is a flexible continuous probability distribution designed for ease of use in practice. Together with its transforms, the metalog family of continuous distributions is unique because it embodies all of the following properties: virtually unlimited shape flexibility; a choice among unbounded, semi-bounded, and bounded distributions; ease of fitting to data with linear least squares; simple, closed-form quantile function (inverse CDF) equations that facilitate simulation; a simple, closed-form PDF; and Bayesian updating in closed form in light of new data. Moreover, like a Taylor series, metalog distributions may have any number of terms, depending on the degree of shape flexibility desired and other application needs.
Metalog distribution:
Applications where metalog distributions can be useful typically involve fitting empirical data, simulated data, or expert-elicited quantiles to smooth, continuous probability distributions. Fields of application are wide-ranging and include economics, science, engineering, and many others. The metalog distributions, also known as the Keelin distributions, were first published in 2016 by Tom Keelin.
History:
The history of probability distributions can be viewed, in part, as a progression of developments towards greater flexibility in shape and bounds when fitting to data. The normal distribution was first published in 1756, and Bayes’ theorem in 1763. The normal distribution laid the foundation for much of the development of classical statistics. In contrast, Bayes' theorem laid the foundation for the state-of-information, belief-based probability representations. Because belief-based probabilities can take on any shape and may have natural bounds, probability distributions flexible enough to accommodate both were needed. Moreover, many empirical and experimental data sets exhibited shapes that could not be well matched by the normal or other continuous distributions. So began the search for continuous probability distributions with flexible shapes and bounds.
History:
Early in the 20th century, the Pearson family of distributions, which includes the normal, beta, uniform, gamma, student-t, chi-square, F, and five others, emerged as a major advance in shape flexibility. These were followed by the Johnson distributions. Both families can represent the first four moments of data (mean, variance, skewness, and kurtosis) with smooth continuous curves. However, they have no ability to match fifth or higher-order moments. Moreover, for a given skewness and kurtosis, there is no choice of bounds. For example, matching the first four moments of a data set may yield a distribution with a negative lower bound, even though it might be known that the quantity in question cannot be negative. Finally, their equations include intractable integrals and complex statistical functions, so that fitting to data typically requires iterative methods.
History:
Early in the 21st century, decision analysts began working to develop continuous probability distributions that would exactly fit any specified three points on the cumulative distribution function for an uncertain quantity (e.g., expert-elicited 0.10, 0.50, and 0.90 quantiles). The Pearson and the Johnson family distributions were generally inadequate for this purpose. In addition, decision analysts also sought probability distributions that would be easy to parameterize with data (e.g., by using linear least squares, or equivalently, multiple linear regression). Introduced in 2011, the class of quantile-parameterized distributions (QPDs) accomplished both goals. While being a significant advance for this reason, the QPD originally used to illustrate this class of distributions, the Simple Q-Normal distribution, had less shape flexibility than the Pearson and Johnson families, and lacked the ability to represent semi-bounded and bounded distributions. Shortly thereafter, Keelin developed the family of metalog distributions, another instance of the QPD class, which is more shape-flexible than the Pearson and Johnson families, offers a choice of boundedness, has closed-form equations that can be fit to data with linear least squares, and has closed-form quantile functions, which facilitate Monte Carlo simulation.
Definition and quantile function:
The metalog distribution is a generalization of the logistic distribution, where the term "metalog" is short for "metalogistic". Starting with the logistic quantile function, $x = \mu + s \ln\frac{y}{1-y}$, Keelin substituted power series expansions in cumulative probability $y = F(x)$ for the $\mu$ and the $s$ parameters, which control location and scale, respectively.
Definition and quantile function:
Specifically, Keelin substituted the power series

$$\mu = a_1 + a_4(y-0.5) + a_5(y-0.5)^2 + a_7(y-0.5)^3 + a_9(y-0.5)^4 + \dots$$

$$s = a_2 + a_3(y-0.5) + a_6(y-0.5)^2 + a_8(y-0.5)^3 + a_{10}(y-0.5)^4 + \dots$$

Keelin's rationale for this substitution was fivefold. First, the resulting quantile function would have significant shape flexibility, governed by the coefficients $a_i$. Second, it would have a simple closed form that is linear in these coefficients, implying that they could easily be determined from CDF data by linear least squares. Third, the resulting quantile function would be smooth, differentiable, and analytic, ensuring that a smooth, closed-form PDF would be available. Fourth, simulation would be facilitated by the resulting closed-form inverse CDF. Fifth, like a Taylor series, any number of terms $k$ could be used, depending on the degree of shape flexibility desired and other application needs.
Definition and quantile function:
Note that the subscripts of the $a$-coefficients are such that $a_1$ and $a_4$ are in the $\mu$ expansion, $a_2$ and $a_3$ are in the $s$ expansion, and subscripts alternate thereafter. This ordering was chosen so that the first two terms in the resulting metalog quantile function correspond to the logistic distribution exactly; adding a third term with $a_3 \neq 0$ adjusts skewness; adding a fourth term with $a_4 \neq 0$ adjusts kurtosis primarily; and adding subsequent non-zero terms yields more nuanced shape refinements (p. 252). Rewriting the logistic quantile function to incorporate the above substitutions for $\mu$ and $s$ yields the metalog quantile function, for cumulative probability $0 < y < 1$:

$$M_k(y) = a_1 + a_2 \ln\frac{y}{1-y} \quad \text{for } k = 2;$$

$$M_k(y) = a_1 + a_2 \ln\frac{y}{1-y} + a_3 (y-0.5)\ln\frac{y}{1-y} \quad \text{for } k = 3;$$

$$M_k(y) = a_1 + a_2 \ln\frac{y}{1-y} + a_3 (y-0.5)\ln\frac{y}{1-y} + a_4 (y-0.5) \quad \text{for } k = 4;$$

$$M_k(y) = M_{k-1}(y) + a_k (y-0.5)^{(k-1)/2} \quad \text{for odd } k \ge 5;$$

$$M_k(y) = M_{k-1}(y) + a_k (y-0.5)^{k/2-1} \ln\frac{y}{1-y} \quad \text{for even } k \ge 6.$$

Equivalently, the metalog quantile function can be expressed in terms of basis functions: $M_k(y) = \sum_{i=1}^{k} a_i g_i(y)$, where the metalog basis functions are $g_1(y) = 1$, $g_2(y) = \ln\frac{y}{1-y}$, and each subsequent $g_i(y)$ is defined as the expression that is multiplied by $a_i$ in the equations for $M_k(y)$ above. Note that coefficient $a_1$ is the median, since all other terms equal zero when $y = 0.5$. Special cases of the metalog quantile function are the logistic distribution ($k = 2$) and the uniform distribution ($k \ge 4$, $a_1 = 0.5$, $a_4 = 1$, $a_i = 0$ otherwise).
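This recursive structure maps directly to code. The following is a minimal sketch in Python with NumPy (the function names are our own, not from any published metalog package) that builds the basis functions $g_i(y)$ and evaluates the quantile function $M_k(y)$:

```python
import numpy as np

def metalog_basis(y, k):
    """Metalog basis functions g_1(y)..g_k(y), stacked as columns.

    g1 = 1, g2 = ln(y/(1-y)), g3 = (y-0.5)*g2, g4 = (y-0.5); thereafter
    (y-0.5)^((j-1)/2) for odd j >= 5 and (y-0.5)^(j/2-1)*g2 for even j >= 6.
    """
    y = np.atleast_1d(np.asarray(y, dtype=float))
    L = np.log(y / (1.0 - y))  # the logit term
    cols = []
    for j in range(1, k + 1):
        if j == 1:
            cols.append(np.ones_like(y))
        elif j == 2:
            cols.append(L)
        elif j == 3:
            cols.append((y - 0.5) * L)
        elif j == 4:
            cols.append(y - 0.5)
        elif j % 2 == 1:          # odd j >= 5: pure power term
            cols.append((y - 0.5) ** ((j - 1) // 2))
        else:                     # even j >= 6: power times logit
            cols.append((y - 0.5) ** (j // 2 - 1) * L)
    return np.column_stack(cols)

def metalog_quantile(a, y):
    """M_k(y) = sum_i a_i * g_i(y): the metalog quantile function (inverse CDF)."""
    return metalog_basis(y, len(a)) @ np.asarray(a, dtype=float)
```

With a = [a1, a2] this reduces to the logistic quantile function, and with a = [0.5, 0, 0, 1] it returns $M_4(y) = y$, the uniform special case noted above.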
Probability density function:
Differentiating $x = M_k(y)$ with respect to $y$ yields the quantile density function $q(y) = dx/dy$. The reciprocal of this quantity, $(q(y))^{-1} = dy/dx = f(Q(y))$, is the probability density function expressed as a function of cumulative probability (a "p-PDF"):

$$m_k(y) = \left[\frac{a_2}{y(1-y)}\right]^{-1} \quad \text{for } k = 2;$$

$$m_k(y) = \left[\frac{a_2}{y(1-y)} + a_3\left(\frac{y-0.5}{y(1-y)} + \ln\frac{y}{1-y}\right)\right]^{-1} \quad \text{for } k = 3;$$

$$m_k(y) = \left[\frac{a_2}{y(1-y)} + a_3\left(\frac{y-0.5}{y(1-y)} + \ln\frac{y}{1-y}\right) + a_4\right]^{-1} \quad \text{for } k = 4;$$

$$m_k(y) = \left[\frac{1}{m_{k-1}(y)} + a_k\,\frac{k-1}{2}\,(y-0.5)^{(k-3)/2}\right]^{-1} \quad \text{for odd } k \ge 5;$$

$$m_k(y) = \left[\frac{1}{m_{k-1}(y)} + a_k\left(\frac{(y-0.5)^{k/2-1}}{y(1-y)} + \left(\frac{k}{2}-1\right)(y-0.5)^{k/2-2}\ln\frac{y}{1-y}\right)\right]^{-1} \quad \text{for even } k \ge 6;$$

which may be equivalently expressed in terms of basis functions as $m_k(y) = \left(\sum_{i=1}^{k} a_i \frac{dg_i(y)}{dy}\right)^{-1}$, where $0 < y < 1$. Note that this PDF is expressed as a function of cumulative probability, $y$, rather than the variable of interest, $x$. To plot the PDF (e.g., as shown in the figures on this page), one can vary $y \in (0,1)$ parametrically, and then plot $x = M_k(y)$ on the horizontal axis and $m_k(y)$ on the vertical axis.
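Since $m_k(y)$ is the reciprocal of the term-by-term derivative of the quantile function, it can be evaluated directly and plotted parametrically against $x = M_k(y)$. A sketch under the same assumptions as the previous block (the coefficient values are hypothetical; metalog_quantile is reused from above):

```python
import numpy as np
import matplotlib.pyplot as plt

def metalog_pdf_of_y(a, y):
    """m_k(y) = 1 / sum_i a_i * dg_i/dy: the PDF as a function of cumulative probability y."""
    y = np.atleast_1d(np.asarray(y, dtype=float))
    L = np.log(y / (1.0 - y))
    w = 1.0 / (y * (1.0 - y))          # d/dy of ln(y/(1-y))
    q = np.zeros_like(y)               # quantile density dx/dy, accumulated term by term
    for j, aj in enumerate(np.asarray(a, dtype=float), start=1):
        if j == 1:
            dg = np.zeros_like(y)
        elif j == 2:
            dg = w
        elif j == 3:
            dg = (y - 0.5) * w + L
        elif j == 4:
            dg = np.ones_like(y)
        elif j % 2 == 1:               # odd j >= 5
            p = (j - 1) // 2
            dg = p * (y - 0.5) ** (p - 1)
        else:                          # even j >= 6
            p = j // 2 - 1
            dg = (y - 0.5) ** p * w + p * (y - 0.5) ** (p - 1) * L
        q += aj * dg
    return 1.0 / q

a = [4.0, 1.0, 0.5]                    # hypothetical, feasible 3-term coefficients
ys = np.linspace(0.001, 0.999, 999)
plt.plot(metalog_quantile(a, ys), metalog_pdf_of_y(a, ys))
plt.xlabel("x"); plt.ylabel("density"); plt.show()
```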
Probability density function:
Based on the above equations and the following transformations that enable a choice of bounds, the family of metalog distributions is composed of unbounded, semibounded, and bounded metalogs, along with their symmetric-percentile triplet (SPT) special cases.
Unbounded, semi-bounded, and bounded metalog distributions:
As defined above, the metalog distribution is unbounded, except in the unusual special case where $a_i = 0$ for all terms that contain $\ln\frac{y}{1-y}$. However, many applications require flexible probability distributions that have a lower bound $b_l$, an upper bound $b_u$, or both. To meet this need, Keelin used transformations to derive semi-bounded and bounded metalog distributions. Such transformations are governed by a general property of quantile functions: for any quantile function $x = Q(y)$ and increasing function $z(x)$, $x = z^{-1}(Q(y))$ is also a quantile function. For example, the quantile function of the normal distribution is $x = \mu + \sigma\Phi^{-1}(y)$; since the natural logarithm $\ln(x - b_l)$ is an increasing function, $x = b_l + e^{\mu + \sigma\Phi^{-1}(y)}$ is the quantile function of the lognormal distribution. Analogously, applying this property to the metalog quantile function $M_k(y)$ using the transformations below yields the semi-bounded and bounded members of the metalog family. By considering $z(x)$ to be metalog-distributed, all members of the metalog family meet Keelin and Powley's definition of a quantile-parameterized distribution and thus possess the properties thereof.
Unbounded, semi-bounded, and bounded metalog distributions:
The resulting transformations, quantile functions, and PDFs are:

Metalog (unbounded): $z(x) = x$; quantile function $M_k(y)$; PDF $m_k(y)$; no bounds.

Log metalog (bounded below): $z(x) = \ln(x - b_l)$; quantile function $b_l + e^{M_k(y)}$; PDF $m_k(y)\,e^{-M_k(y)}$; $x = b_l$ at $y = 0$.

Negative-log metalog (bounded above): $z(x) = -\ln(b_u - x)$; quantile function $b_u - e^{-M_k(y)}$; PDF $m_k(y)\,e^{M_k(y)}$; $x = b_u$ at $y = 1$.

Logit metalog (bounded): $z(x) = \ln\frac{x - b_l}{b_u - x}$; quantile function $\frac{b_l + b_u e^{M_k(y)}}{1 + e^{M_k(y)}}$; PDF $m_k(y)\,\frac{(1 + e^{M_k(y)})^2}{(b_u - b_l)\,e^{M_k(y)}}$; $x = b_l$ at $y = 0$ and $x = b_u$ at $y = 1$.

Note that the number of shape parameters in the metalog family increases linearly with the number of terms $k$. Therefore, any of the above metalogs may have any number of shape parameters. By contrast, the Pearson and Johnson families of distributions are limited to two shape parameters.
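To illustrate, the bounded variants simply wrap $M_k(y)$ in the inverse of the corresponding $z(x)$. A brief sketch, reusing metalog_quantile from the earlier block (the function names are our own):

```python
import numpy as np

def log_metalog_quantile(a, y, bl):
    """Log metalog (bounded below at bl): x = bl + exp(M_k(y))."""
    return bl + np.exp(metalog_quantile(a, y))

def neglog_metalog_quantile(a, y, bu):
    """Negative-log metalog (bounded above at bu): x = bu - exp(-M_k(y))."""
    return bu - np.exp(-metalog_quantile(a, y))

def logit_metalog_quantile(a, y, bl, bu):
    """Logit metalog (bounded on [bl, bu])."""
    z = np.exp(metalog_quantile(a, y))
    return (bl + bu * z) / (1.0 + z)
```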
SPT metalog distributions:
The symmetric-percentile triplet (SPT) metalog distributions are a three-term (k=3) special case of the unbounded, semi-bounded, and bounded metalog distributions. These are parameterized by the three (x,y) points off the CDF curve, of the form (x1,α) , 0.5 ) , and (x3,1−α) , where 0.5 . SPT metalogs are useful when, for example, quantiles (x1,x2,x3) corresponding to the CDF probabilities (e.g. 0.1 0.5 0.9 ) are elicited from an expert and used to parameterize the three-term metalog distributions. As noted below, certain mathematical properties are simplified by the SPT parameterization.
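Because an SPT metalog has exactly three coefficients and three CDF constraints, parameterizing it amounts to solving a 3×3 linear system. A sketch reusing metalog_basis from above (the quantile values and $\alpha$ are hypothetical):

```python
import numpy as np

def fit_spt_metalog(x1, x2, x3, alpha=0.1):
    """Coefficients of the 3-term metalog whose CDF passes exactly through
    (x1, alpha), (x2, 0.5), and (x3, 1 - alpha)."""
    ys = np.array([alpha, 0.5, 1.0 - alpha])
    Y = metalog_basis(ys, 3)                   # 3x3 basis matrix
    return np.linalg.solve(Y, np.array([x1, x2, x3], dtype=float))

a = fit_spt_metalog(10.0, 25.0, 70.0)          # hypothetical 0.1/0.5/0.9 quantiles
```

At $y = 0.5$ the logit term vanishes, so the solve yields $a_1 = x_2$, consistent with the median property discussed below.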
Properties:
The metalog family of probability distributions has the following properties.
Feasibility A function of the form of $M_k(y)$ or any of its above transforms is a feasible probability distribution if and only if its PDF is greater than zero for all $y \in (0,1)$.
Properties:
This implies a feasibility constraint on the set of coefficients $a = (a_1, \dots, a_k) \in \mathbb{R}^k$: $m_k(y) > 0$ for all $y \in (0,1)$. In practical applications, feasibility must generally be checked rather than assumed. For $k = 2$, $a_2 > 0$ ensures feasibility. For $k = 3$ (including SPT metalogs), the feasibility condition is $a_2 > 0$ and $|a_3|/a_2 < 1.66711$. For $k = 4$, a similar closed form has been derived. For $k \ge 5$, feasibility is typically checked graphically or numerically.
Properties:
The unbounded metalog and its above transforms share the same set of feasible coefficients. Therefore, for a given set of coefficients, confirming that $m_k(y) > 0$ for all $y \in (0,1)$ is sufficient regardless of the transform in use.
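For $k \ge 5$, where no closed-form condition is available, a practical numerical check is simply to evaluate the PDF on a dense grid. A minimal sketch, reusing metalog_pdf_of_y from above (the grid density and tolerance are arbitrary choices):

```python
import numpy as np

def is_feasible(a, n_grid=100_000, eps=1e-6):
    """Approximate feasibility check: m_k(y) > 0 on a dense grid in (0, 1).

    This is a numerical screen, not a proof; a pathological coefficient
    vector could dip negative between grid points.
    """
    ys = np.linspace(eps, 1.0 - eps, n_grid)
    return bool(np.all(metalog_pdf_of_y(a, ys) > 0.0))
```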
Properties:
Convexity The set of feasible metalog coefficients $S_a = \{a \in \mathbb{R}^k \mid m_k(y) > 0 \text{ for all } y \in (0,1)\}$ is convex. Because convex optimization problems require convex feasible sets, this property can simplify optimization problems involving metalogs. Moreover, this property guarantees that any convex combination of the $a$ vectors of feasible metalogs is feasible, which is useful, for example, when combining the opinions of multiple experts or interpolating among feasible metalogs. By implication, any quantile-weighted (Vincentized) combination of metalog distributions is itself a feasible metalog.
Properties:
Fitting to data The coefficients $a$ can be determined from data by linear least squares. Given $n$ data points $(x_i, y_i)$ that are intended to characterize a metalog CDF, and an $n \times k$ matrix $Y$ whose elements consist of the basis functions $g_j(y_i)$, then, as long as $Y^T Y$ is invertible, the column vector $a$ of the coefficients $a_i$ is given by $a = (Y^T Y)^{-1} Y^T z$, where $n \ge k$ and the column vector is $z = (z(x_1), \dots, z(x_n))$. If $n = k$, this equation reduces to $a = Y^{-1} z$, and the resulting metalog CDF runs through all data points exactly. For SPT metalogs, it further reduces to expressions in terms of the three $(x_i, y_i)$ points directly. An alternate fitting method, implemented as a linear program, determines the coefficients by minimizing the sum of absolute distances between the CDF and the data, subject to feasibility constraints.
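In code, the least-squares fit is a one-liner once the basis matrix is built. The sketch below fits an unbounded metalog ($z(x) = x$); for a log metalog one would instead regress $z = \ln(x - b_l)$. It reuses metalog_basis from above and calls np.linalg.lstsq, which solves the same problem more stably than forming $(Y^T Y)^{-1}$ explicitly:

```python
import numpy as np

def fit_metalog_ols(xs, ys, k):
    """Least-squares metalog coefficients from CDF data pairs (x_i, y_i)."""
    Y = metalog_basis(np.asarray(ys, dtype=float), k)
    z = np.asarray(xs, dtype=float)     # z(x) = x for the unbounded metalog
    a, *_ = np.linalg.lstsq(Y, z, rcond=None)
    return a

# Example: exact fit (n = k) through three CDF points.
a = fit_metalog_ols([10.0, 25.0, 70.0], [0.1, 0.5, 0.9], k=3)
```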
Properties:
Shape flexibility According to the metalog flexibility theorem, any probability distribution with a continuous quantile function can be approximated arbitrarily closely by a metalog. Moreover, in the original paper, Keelin showed that ten-term metalog distributions parameterized by 105 CDF points from 30 traditional source distributions (including the normal, student-t, lognormal, gamma, beta, and extreme-value distributions) approximate each such source distribution within a K-S distance of 0.001 or less. Thus, metalog shape flexibility is virtually unlimited.
Properties:
The animated figure on the right illustrates this for the standard normal distribution, where metalogs with various numbers of terms are parameterized by the same set of 105 points from the standard normal CDF. The metalog PDF converges to the standard normal PDF as the number of terms increases. With two terms, the metalog approximates the normal with a logistic distribution. With each increment in number of terms, the fit gets closer. With 10 terms, the metalog PDF and standard normal PDF are visually indistinguishable.
Properties:
Similarly, nine-term semi-bounded metalog PDFs with $b_l = 0$ are visually indistinguishable from a range of Weibull distributions. The six cases shown to the right correspond to Weibull shape parameters 0.5, 0.8, 1.0, 1.5, 2, and 4. In each case, the metalog is parameterized by the nine $x$ points from the Weibull CDF that correspond to the cumulative probabilities (0.001, 0.02, 0.10, 0.25, 0.5, 0.75, 0.9, 0.98, 0.999). Such convergence is not unique to the normal and Weibull distributions. Keelin originally showed analogous results for a wide range of distributions and has since provided further illustrations.
Properties:
Median The median of any distribution in the metalog family has a simple closed form. Note that $y = 0.5$ defines the median, and $M_k(0.5) = a_1$ (since all terms after the first equal zero at $y = 0.5$). It follows that the medians of the unbounded metalog, log metalog, negative-log metalog, and logit metalog distributions are $a_1$, $b_l + e^{a_1}$, $b_u - e^{-a_1}$, and $\frac{b_l + b_u e^{a_1}}{1 + e^{a_1}}$, respectively.
Properties:
Moments The $m$th moment of the unbounded metalog distribution, $E[x^m] = \int_0^1 M_k(y)^m\,dy$, is a special case of the more general formula for QPDs. For the unbounded metalog, such integrals evaluate to closed-form moments that are $m$th-order polynomials in the coefficients $a_i$. The first two central moments of the four-term unbounded metalog are

$$\text{mean} = a_1 + \frac{a_3}{2}$$

$$\text{variance} = \pi^2\frac{a_2^2}{3} + \frac{a_3^2}{12} + \pi^2\frac{a_3^2}{36} + a_2 a_4 + \frac{a_4^2}{12}$$

The third and fourth central moments (skewness and kurtosis) are likewise closed-form polynomials in the coefficients, of third and fourth order respectively; the full expressions are given by Keelin (2016). Moments for fewer terms are subsumed in these equations. For example, moments of the three-term metalog can be obtained by setting $a_4$ to zero. Moments for metalogs with more terms, and higher-order moments ($m > 4$), are also available. Moments for semi-bounded and bounded metalogs are not available in closed form.
Properties:
Parameterization with moments Three-term unbounded metalogs can be parameterized in closed form with their first three central moments. Let $m$, $v$, and $s$ be the mean, variance, and skewness, and let $s_s$ be the standardized skewness, $s_s = s/v^{3/2}$. In terms of the coefficients, the first two of these moments are

$$m = a_1 + \frac{a_3}{2}, \qquad v = \pi^2\frac{a_2^2}{3} + \frac{a_3^2}{12} + \pi^2\frac{a_3^2}{36},$$

and the third central moment $s$ is a cubic polynomial in $a_2$ and $a_3$. Inverting these moment equations amounts to solving a cubic polynomial, whose unique relevant root gives $a_3$ in closed form as a trigonometric expression (involving $\cos$ and $\cos^{-1}$) in the standardized skewness, after which $a_2$ and $a_1$ follow directly; the explicit inverse expressions are given by Keelin (2016). Moreover, this solution is unique. In terms of moments, the feasibility condition is $|s_s| < 2.07093$, which can be shown to be equivalent to the feasibility condition in terms of the coefficients: $a_2 > 0$ and $|a_3|/a_2 < 1.66711$. This property can be used, for example, to represent the sum of independent, non-identically distributed random variables. Based on cumulants, it is known that for any set of independent random variables, the mean, variance, and skewness of the sum are the sums of the respective means, variances, and skewnesses. Parameterizing a three-term metalog with these central moments yields a continuous distribution that exactly preserves these three moments, and accordingly provides a reasonable approximation to the shape of the distribution of the sum of independent random variables.
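The closed-form mean and variance above are easy to sanity-check numerically, since $E[x^m] = \int_0^1 M_k(y)^m\,dy$. A small sketch with hypothetical three-term coefficients:

```python
import numpy as np

a1, a2, a3 = 4.0, 1.0, 0.5                      # hypothetical coefficients
ys = np.linspace(1e-7, 1.0 - 1e-7, 1_000_001)
L = np.log(ys / (1.0 - ys))
xs = a1 + a2 * L + a3 * (ys - 0.5) * L          # M_3(y)

mean_num = np.trapz(xs, ys)                     # E[x] by numerical integration over y
var_num = np.trapz((xs - mean_num) ** 2, ys)

mean_cf = a1 + a3 / 2.0
var_cf = np.pi**2 * a2**2 / 3 + a3**2 / 12 + np.pi**2 * a3**2 / 36

print(mean_num, mean_cf)                        # should agree to several decimal places
print(var_num, var_cf)
```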
Properties:
Simulation Since their quantile functions are expressed in closed form, metalogs facilitate Monte Carlo simulation. Substituting uniformly distributed random samples of y into the Metalog quantile function (inverse CDF) produces random samples of x in closed form, thereby eliminating the need to invert a CDF. See below for simulation applications.
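In code, simulation is two lines: draw uniform variates and push them through the quantile function. A sketch reusing metalog_quantile and the hypothetical coefficients from the earlier blocks:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
u = rng.random(100_000)                            # uniform samples of cumulative probability y
samples = metalog_quantile([4.0, 1.0, 0.5], u)     # closed-form inverse-CDF sampling
```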
Properties:
Eliciting and Combining Expert Opinion Due to their shape flexibility, metalog distributions can be an attractive choice for eliciting and representing expert opinion. Moreover, if the opinions of multiple experts are expressed as $k$-term metalogs, the consensus opinion may be calculated as a $k$-term metalog in closed form, where the $a$-coefficients of the consensus metalog are simply a weighted average of those of the individual experts. This result follows from Vincentization, where the consensus quantile function is a weighted average of individual quantile functions.
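Because the quantile function is linear in the coefficients, Vincentizing expert opinions reduces to averaging coefficient vectors, and by the convexity property above the result is automatically feasible if each expert's metalog is. A sketch with hypothetical expert coefficients and weights:

```python
import numpy as np

a_experts = np.array([[4.0, 1.0, 0.3],    # hypothetical expert 1 (k = 3)
                      [5.0, 0.8, -0.2],   # hypothetical expert 2
                      [4.5, 1.2, 0.0]])   # hypothetical expert 3
weights = np.array([0.5, 0.3, 0.2])       # nonnegative, summing to 1
a_consensus = weights @ a_experts          # coefficients of the consensus metalog
```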
Properties:
Bayesian Updating in Closed Form In a classic paper, Howard (1970) shows how the beta-binomial distribution can be used to update, according to Bayes rule in closed form, uncertainty over the long-run frequency ϕ of a coin toss coming up "heads" in light of new coin-toss data. In contrast, if the uncertainty of interest to be updated is defined not by a scalar probability over a discrete event (like the result of a coin toss) but by a probability density function over a continuous variable, metalog Bayesian updating may be used. Under certain conditions, metalog quantile parameters and a -coefficients may be updated in closed form in light of new data according to Bayes rule.
Applications:
Due to their shape and bounds flexibility, metalogs can be used to represent empirical or other data in virtually any field of human endeavor.
Astronomy. Metalogs were applied to assess the risks of asteroid impact.
Cybersecurity. Metalogs were used in cyber security risk assessment.
Eliciting and combining expert opinion. Statistics Canada elicited expert opinions on future Canadian fertility rates from 18 experts, which included the use of spreadsheet-based real-time PDF feedback based on five-term metalogs. The individual expert opinions were then weighted and combined into an overall metalog-based forecast.
Applications:
Empirical data exploration and visualization. In fish biology, a 10-term log metalog distribution (bounded below at 0) was fit to the weights of 3,474 steelhead trout caught and released on the Babine River in British Columbia during 2006–2010. The bimodality of the resulting distribution has been attributed to the presence of both first-time and second-time spawners in the river, the latter of which tend to weigh more.
Applications:
Hydrology. A 10-term semi-bounded metalog was used to model the probability distribution of annual river gauge heights.
Oil field production. Semi-bounded SPT metalogs were used to analyze biases in projections of oil-field production when compared to observed production after the fact.
Portfolio management. SPT metalogs have been used to model commercial value of new products and product portfolios.
Simulation input distributions. To support a bidding decision, uncertainty about the future value of each of 259 financial assets was represented as an SPT metalog. A simulation of total portfolio value was shown to yield more realistic results than a corresponding simulation based on discrete low, median, and high values for each asset.
Simulation output distributions. Metalogs have also been used to fit output data from simulations in order to represent those outputs as closed-form continuous distributions (both CDFs and PDFs). Used in this way, they are typically more stable and smoother than histograms.
Applications:
Sums of lognormals. Metalogs enable a closed-form representation of known distributions whose CDFs have no closed-form expression. Keelin et al. (2019) apply this to the sum of independent identically distributed lognormal distributions, where quantiles of the sum can be determined by a large number of simulations. Nine such quantiles are used to parameterize a semi-bounded metalog distribution that runs through each of these nine quantiles exactly. Quantile parameters are stored in a table, which can then be interpolated to yield in-between values; these values are guaranteed to be feasible by the convexity property above.
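The workflow just described can be sketched in a few lines: simulate the sum, extract quantiles, and solve for a nine-term log metalog that passes through them exactly. The nine probability levels below are borrowed from the Weibull illustration earlier and the lognormal parameters are arbitrary, so this is an illustration of the method rather than a reproduction of Keelin et al. (2019); it reuses metalog_basis and metalog_quantile from above:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
sums = rng.lognormal(mean=0.0, sigma=1.0, size=(200_000, 10)).sum(axis=1)

probs = np.array([0.001, 0.02, 0.10, 0.25, 0.5, 0.75, 0.9, 0.98, 0.999])
quantiles = np.quantile(sums, probs)

# Nine-term log metalog (bounded below at 0): regress z = ln(x) on the basis.
Y = metalog_basis(probs, 9)                 # 9x9, so the fit is exact
a = np.linalg.solve(Y, np.log(quantiles))

# Fitted quantile function of the sum: x = exp(M_9(y)), e.g. its median:
x_median = np.exp(metalog_quantile(a, 0.5))
```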
Choosing number of terms:
For a given application and data set, choosing the number of metalog terms $k$ depends on context and may require judgment. For expert elicitation, three to five terms is usually sufficient. For data exploration and matching other probability distributions, such as the sum of lognormals, eight to twelve terms is usually sufficient. A metalog panel, which displays the metalog PDFs corresponding to differing numbers of terms $k$ for a given data set, may aid this judgment. For example, in the steelhead weight metalog panel, using fewer than seven terms arguably underfits the data by obscuring the data's inherent bimodality. Using more than 11 terms is unnecessary and could, in principle, overfit the data. The case with 16 terms is infeasible for this data set, as indicated by the blank cell in the metalog panel. Other tools (such as regularization, the Akaike information criterion, and the Bayesian information criterion) may also be useful. For example, when applied to the steelhead weight data, the AIC ranking of metalog distributions with 2–16 terms, along with a wide range of classical distributions, identifies the 11-term log metalog as the best fit to this data; a similar BIC ranking identifies the 10-term log metalog as the best fit. Keelin (2016) offers further perspectives on distribution selection within the metalog family.
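One way to automate this choice is to score each candidate $k$ with an information criterion. The sketch below computes a rough AIC for unbounded metalog fits by inverting the fitted quantile function numerically at each data point; it reuses fit_metalog_ols, metalog_quantile, and metalog_pdf_of_y from the earlier blocks, and the grid-based inversion is a simplification of our own, not part of any published procedure:

```python
import numpy as np

def metalog_aic(xs, k, n_grid=100_000):
    """Rough AIC = 2k - 2*loglik for a k-term unbounded metalog fit to a sample."""
    xs = np.sort(np.asarray(xs, dtype=float))
    n = len(xs)
    ys_emp = (np.arange(1, n + 1) - 0.5) / n       # empirical CDF positions
    a = fit_metalog_ols(xs, ys_emp, k)
    grid = np.linspace(1e-6, 1.0 - 1e-6, n_grid)
    xg = metalog_quantile(a, grid)
    if np.any(np.diff(xg) <= 0.0):                 # non-monotone quantile => infeasible fit
        return np.inf
    y_at_x = np.interp(xs, xg, grid)               # numerically invert the fitted CDF
    loglik = np.sum(np.log(metalog_pdf_of_y(a, y_at_x)))
    return 2 * k - 2 * loglik

# best_k = min(range(2, 17), key=lambda k: metalog_aic(data, k))  # given a sample `data`
```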
Related distributions:
The metalog distributions belong to the group of distributions defined in terms of the quantile function, which include the quantile-parameterized distributions, the Tukey lambda distribution, its generalization GLD, the Govindarajulu distribution, and others. The following distributions are subsumed within the metalog family: The logistic distribution is a special case of the unbounded metalog where $a_i = 0$ for all $i > 2$. The uniform distribution is a special case of: 1) the unbounded metalog where $k \ge 4$, $a_1 = 0.5$, $a_4 = 1$, and $a_i = 0$ otherwise; and 2) the bounded metalog where $k \ge 2$, $b_l = 0$, $b_u = 1$, $a_2 = 1$, and $a_i = 0$ otherwise.
Related distributions:
The log-logistic distribution, also known as the Fisk distribution in economics, is a special case of the log metalog where $b_l = 0$ and $a_i = 0$ for all $i > 2$. The log-uniform distribution is a special case of the log metalog where $k \ge 4$, $a_1 = 0.5$, $a_4 = 1$, and $a_i = 0$ otherwise.
The logit-logistic distribution is a special case of the logit metalog where $a_i = 0$ for all $i > 2$.
Software:
Freely available software tools can be used to work with metalog distributions: Excel workbooks. By pasting or typing in CDF data, metalogs (with choice of bounds) are instantly displayed.
SPT metalogs workbook calculates 2–3 term metalogs determined by three $(x_i, y_i)$ CDF data points.
Metalogs workbook calculates 2–16 term metalogs (including the metalog panel) determined by 2–10,000 $(x_i, y_i)$ CDF data points.
ELD (equally likely data) metalog workbooks calculate 2–16 term metalogs determined by 2–10,000 $x_i$ CDF data points, where the $y_i$'s and the metalog panel are calculated automatically.
R. rmetalog (on the Comprehensive R Archive Network, CRAN).
Python. Pymetalog closely mirrors the R package. Metalogistic takes advantage of the SciPy platform.
Web browser. MakeDistribution.com facilitates experimentation with metalogs parameterized by several CDF data points. The SPT metalog calculator, metalog calculator and ELD metalog calculator are online versions of the Excel Workbooks.
SIPmath Modeler Tools support metalog distributions in an Excel add-in for simulation.
Lumina's Analytica Free 101 software for modeling and aiding difficult decisions.
Software:
BayesFusion's Metalog Builder allows for interactive building of metalog distributions. BayesFusion's GeNIe (the academic version of the software is free for academic research and teaching) implements the metalog distributions. Commercially available packages also support the use of metalog distributions: FrontLine Solvers: Analytic Solver, RASON, and Solver SDK, software for optimization. These automatically fit user data to the full range of (bounded and unbounded, multi-term) metalog distributions and provide an option to compare metalog distributions with classical distributions based on user-selected goodness-of-fit criteria.
Software:
Lone Star Analysis: TruNavigator and AnalyticsOS software for predictive and prescriptive analytics.
**Hog maw**
Hog maw:
Hog maw is the stomach of a pig prepared as food. More specifically, it is the exterior muscular wall of the stomach organ (with interior, lining mucosa removed) which contains no fat if cleaned properly. It can be found in American, soul food, Chinese, Pennsylvania Dutch, Mexican, Portuguese, Italian and Vietnamese dishes. In addition, it can be prepared in various ways including stewed, fried, baked, and broiled.
Ethnic dishes:
Pennsylvania Dutch Hog maw, sometimes called pig's stomach, Susquehanna turkey, or Pennsylvania Dutch goose, is a Pennsylvania Dutch dish. In the Pennsylvania German language, it is known as Seimaage (sigh-maw-guh), originating from its German name Saumagen. It is made from a cleaned pig's stomach traditionally stuffed with cubed potatoes and loose pork sausage meat. Other ingredients may include cabbage, onions, and spices. It was traditionally boiled in a large pot covered with water, not unlike Scottish haggis, but it can also be baked or broiled until browned or split; it is then often drizzled with butter, sometimes browned, before serving. It is usually served hot on a platter, cut into slices, and topped with horseradish or stewed tomatoes. It can also be served cold as a sandwich. Often served in the winter, it was made on hog-butchering days on the farms of Lancaster and Berks Counties and elsewhere in the Pennsylvania Dutch Country.
Ethnic dishes:
It remains a traditional New Year's Day side dish for many Pennsylvania German families; in fact, many families believe that it is bad luck if not even a small piece is consumed on New Year's Day, as is the case with pork and sauerkraut. The stomach is purchased at one of the many traditional butchers at local farmers' markets. The original recipe was most likely brought to Pennsylvania from the Palatinate area of Germany, where it is called Saumagen and served with sauerkraut, another Pennsylvania Dutch food. Indeed, Saumagen is reported to have been a favorite of former German Chancellor Helmut Kohl, a native of the Palatinate (Rheinland-Pfalz) Region.
Ethnic dishes:
Soul food As a soul food dish, hog maw has often been coupled with chitterlings, which are pig intestines. In the book Plantation Row Slave Cabin Cooking: The Roots of Soul Food hog maw is used in the Hog Maw Salad recipe. Hog maw is also traditionally prepared for New Year's Day for prosperity along with other traditional Southern New Year's Day dishes like collard greens and Hoppin' John.
Ethnic dishes:
Chinese cuisine In Chinese cuisine, hog maw is called 猪肚 (zhūdǔ, "pig stomach") and is often served stir-fried with vegetables. It can also be braised, chilled, and sliced as part of a cold cut tray.
Latin American cuisine Hog maws (called "buche") are a specialty in taco stands all over Mexico, mostly deep fried with the rest of the pork.
In Puerto Rico, hog maws are called Cuajos. Cuajitos is a popular street vendor food found around the island and is most often served with boiled green banana escabeche and morcilla (blood sausage).
**Thirst Buster (Shell Canada)**
Thirst Buster (Shell Canada):
Thirst Buster is a frosty, non-alcoholic, slushy beverage available at Shell Canada gas stations. It is available in various flavors.
Thirst Buster (Shell Canada):
Thirst Busters are made using a mixture of syrup, frozen water, and carbon dioxide. The ingredients are fed into the Slush Buster Machine at different pressures, blended, then pushed under pressure into a stainless steel chilling tank. The tank contains rotating scraper arms to prevent the mixture from freezing. When the customer opens the tap, the machine's internal pressure pushes the mixture out.
**Trachoma**
Trachoma:
Trachoma is an infectious disease caused by the bacterium Chlamydia trachomatis. The infection causes a roughening of the inner surface of the eyelids. This roughening can lead to pain in the eyes, breakdown of the outer surface or cornea of the eyes, and eventual blindness. Untreated, repeated trachoma infections can result in a form of permanent blindness when the eyelids turn inward. The bacteria that cause the disease can be spread by both direct and indirect contact with an affected person's eyes or nose. Indirect contact includes through clothing or flies that have come into contact with an affected person's eyes or nose. Children spread the disease more often than adults. Poor sanitation, crowded living conditions, and not enough clean water and toilets also increase spread. Efforts to prevent the disease include improving access to clean water and treatment with antibiotics to decrease the number of people infected with the bacterium. This may include treating, all at once, whole groups of people in whom the disease is known to be common. Washing, by itself, is not enough to prevent disease, but may be useful with other measures. Treatment options include oral azithromycin and topical tetracycline. Azithromycin is preferred because it can be used as a single oral dose. After scarring of the eyelid has occurred, surgery may be required to correct the position of the eyelashes and prevent blindness. Globally, about 80 million people have an active infection. In some areas, infections may be present in as many as 60–90% of children. Among adults, it more commonly affects women than men, likely due to their closer contact with children. The disease is the cause of decreased vision in 2.2 million people, of whom 1.2 million are completely blind. Trachoma is a public health problem in 42 countries across Africa, Asia, the Middle East, and Central and South America. There are 136.9 million people at risk. It results in US$8 billion of economic losses a year. It belongs to a group of diseases known as neglected tropical diseases.
Signs and symptoms:
The bacterium has an incubation period of 5 to 10 days, after which the affected individual experiences symptoms of conjunctivitis, or irritation similar to "pink eye". Blinding endemic trachoma results from multiple episodes of reinfection that maintain intense inflammation in the conjunctiva. Without reinfection, the inflammation gradually subsides. The conjunctival inflammation is called "active trachoma" and is usually seen in children, especially preschool children. It is characterized by white lumps on the undersurface of the upper eyelid (conjunctival follicles or lymphoid germinal centres) and by nonspecific inflammation and thickening, often associated with papillae. Follicles may also appear at the junction of the cornea and the sclera (limbal follicles). Active trachoma can often be irritating and produce a watery discharge. Bacterial secondary infection may occur and cause a purulent discharge. The later structural changes of trachoma are referred to as "cicatricial trachoma". These include scarring in the eyelid (tarsal conjunctiva) that leads to distortion of the eyelid with buckling of the lid (tarsus), so the lashes rub on the eye (trichiasis). These lashes can lead to corneal opacities and scarring, and then to blindness. Linear scars present in the sulcus subtarsalis are called Arlt's lines (named after Carl Ferdinand von Arlt). In addition, blood vessels and scar tissue can invade the upper cornea (pannus). Resolved limbal follicles may leave small gaps in the pannus (Herbert's pits). Most commonly, children with active trachoma do not present with any symptoms, as the low-grade irritation and ocular discharge are simply accepted as normal, but further symptoms may include eye discharge, swollen eyelids, trichiasis (misdirected eyelashes), swelling of lymph nodes in front of the ears, sensitivity to bright lights, increased heart rate, and further ear, nose, and throat complications. The most important complication is corneal ulcer, occurring due to rubbing by concretions or by trichiasis with superimposed bacterial infection.
Cause:
Trachoma is caused by Chlamydia trachomatis, serotypes (serovars) A, B, and C. It is spread by direct contact with eye, nose, and throat secretions from affected individuals, or by contact with fomites (inanimate objects that carry infectious agents), such as towels and washcloths that have had similar contact with these secretions. Flies can also be a route of mechanical transmission. Untreated, repeated trachoma infections result in entropion (the inward turning of the eyelids), which may result in blindness due to damage to the cornea. Children are the most susceptible to infection due to their tendency to get dirty easily, but the blinding effects or more severe symptoms are often not felt until adulthood. Blinding endemic trachoma occurs in areas with poor personal and family hygiene. Many factors are indirectly linked to the presence of trachoma, including lack of water, absence of latrines or toilets, poverty in general, flies, close proximity to cattle, and crowding. The final common pathway, though, seems to be the presence of dirty faces in children, facilitating the frequent exchange of infected ocular discharge from one child's face to another. Most transmission of trachoma occurs within the family.
Diagnosis:
McCallan's classification McCallan in 1908 divided the clinical course of trachoma into four stages. WHO classification The World Health Organization recommends a simplified grading system for trachoma, summarized as follows: Trachomatous inflammation, follicular (TF)—five or more follicles of >0.5 mm on the upper tarsal conjunctiva. Trachomatous inflammation, intense (TI)—papillary hypertrophy and inflammatory thickening of the upper tarsal conjunctiva obscuring more than half the deep tarsal vessels. Trachomatous scarring (TS)—presence of scarring in the tarsal conjunctiva.
Diagnosis:
Trachomatous trichiasis (TT)—At least one ingrown eyelash touching the globe, or evidence of epilation (eyelash removal) Corneal opacity (CO)—Corneal opacity blurring part of the pupil margin
Prevention:
Although trachoma was eliminated from much of the developed world in the 20th century (Australia being a notable exception), this disease persists in many parts of the developing world, particularly in communities without adequate access to water and sanitation.
Prevention:
Environmental measures Environmental improvement: modifications in water use, fly control, latrine use, health education, and proximity to domesticated animals have all been proposed to reduce transmission of C. trachomatis. These changes pose numerous challenges for implementation, and they are likely to affect the transmission of ocular infection chiefly by improving facial cleanliness. Particular attention is therefore required for environmental factors that limit clean faces.
Prevention:
A systematic review examining the effectiveness of environmental sanitary measures on the prevalence of active trachoma in endemic areas showed that use of insecticide spray resulted in significant reductions of trachoma and fly density in some studies. Health education also resulted in reductions of active trachoma when implemented. Improved water supply did not result in a reduction of trachoma incidence.
Prevention:
Antibiotics WHO Guidelines recommend that a region should receive community-based, mass antibiotic treatment when the prevalence of active trachoma among one- to nine-year-old children is greater than 10%. Subsequent annual treatment should be administered for three years, at which time the prevalence should be reassessed. Annual treatment should continue until the prevalence drops below 5%. At lower prevalences, antibiotic treatment should be family-based.
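The thresholds in this guideline amount to a simple decision rule, sketched below (the function name, return strings, and exact branch boundaries are our own reading of the guideline, not WHO wording):

```python
def antibiotic_strategy(tf_prevalence_1_to_9):
    """Rough sketch of the WHO treatment thresholds described above.

    tf_prevalence_1_to_9: fraction of one- to nine-year-old children with
    active trachoma (e.g., 0.12 for 12%).
    """
    if tf_prevalence_1_to_9 > 0.10:
        return "community-wide mass treatment, annually for 3 years, then reassess"
    if tf_prevalence_1_to_9 >= 0.05:
        return "continue annual treatment until prevalence falls below 5%"
    return "family-based antibiotic treatment"
```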
Management:
Antibiotics Azithromycin (single oral dose of 20 mg/kg) or topical tetracycline (1% eye ointment twice a day for six weeks). Azithromycin is preferred because it is used as a single oral dose. Although it is expensive, it is generally provided as part of the international donation program organized by Pfizer. Azithromycin can be used in children from the age of six months and in pregnancy. As a community-based antibiotic treatment, some evidence suggests that oral azithromycin was more effective than topical tetracycline, but the evidence for this comparison is not consistent. Antibiotic treatment reduces the risk of active trachoma in individuals infected with Chlamydia trachomatis.
Management:
Surgery For individuals with trichiasis, a bilamellar tarsal rotation procedure is warranted to direct the lashes away from the globe. Evidence suggests that use of a lid clamp and absorbable sutures results in reduced lid contour abnormalities and granuloma formation after surgery. Early intervention is beneficial, as the rate of recurrence is higher in more advanced disease.
Management:
Lifestyle measures The WHO-recommended SAFE strategy includes: Surgery to correct advanced stages of the disease; Antibiotics to treat active infection, using azithromycin; Facial cleanliness to reduce disease transmission; and Environmental change to increase access to clean water and improved sanitation. Children with visible nasal discharge, discharge from the eyes, or flies on their faces are at least twice as likely to have active trachoma as children with clean faces. Intensive community-based health education programs to promote face-washing can reduce the rates of active trachoma, especially intense trachoma. If an individual is already infected, washing the face, especially a child's, is encouraged to prevent reinfection. Some evidence shows that washing the face combined with topical tetracycline might be more effective in reducing severe trachoma compared to topical tetracycline alone. The same trial found no statistical benefit of eye washing alone or in combination with tetracycline eye drops in reducing follicular trachoma among children.
Prognosis:
If not treated properly with oral antibiotics, the symptoms may escalate and cause blindness, which is the result of ulceration and consequent scarring of the cornea. Surgery may also be necessary to fix eyelid deformities.
Without intervention, trachoma keeps families in a cycle of poverty, as the disease and its long-term effects are passed from one generation to the next.
Epidemiology:
As of 2011, about 21 million people are actively affected by trachoma, with around 2.2 million people permanently blind or severely visually impaired by the disease. An additional 7.3 million people are reported to have trichiasis. As of June 2022, 125 million individuals live in trachoma-endemic areas and are at risk of trachoma-related blindness, and the disease is a public health problem in 42 countries. Of these, Africa is considered the worst-affected area, with over 85% of all known active cases of trachoma. Within the continent, South Sudan and Ethiopia have the highest prevalence. In many of these communities, women are three times more likely than men to be blinded by the disease, likely due to their roles as caregivers in the family. Australia is the only developed country that has trachoma. In 2008, trachoma was found in half of Australia's very remote communities.
Epidemiology:
Elimination In 1996, the WHO launched its Alliance for the Global Elimination of Trachoma by 2020, and in 2006, the WHO officially set 2020 as the target to eliminate trachoma as a public-health problem. The International Coalition for Trachoma Control has produced maps and a strategic plan called 2020 INSight that lays out actions and milestones to achieve global elimination of blinding trachoma by 2020. The program recommends the SAFE protocol for blindness prevention: Surgery for trichiasis, Antibiotics to clear infection, Facial cleanliness, and Environmental improvement to reduce transmission. This includes sanitation infrastructure to reduce the open presence of human feces that can breed flies. As of 2018, Cambodia, Ghana, Iran, Laos, Mexico, Nepal, Morocco, and Oman have been certified as having eliminated trachoma as a public-health problem; China, Gambia, Iran, Iraq, and Myanmar make that claim, but have not sought certification. Eradication of the bacterium that causes the disease is seen as impractical; the WHO definition of "eliminated as a public-health problem" means less than 5% of children have any symptoms, and less than 0.1% of adults have vision loss. Having already donated more doses (about 700 million since 2002) of the drug than it has sold during the same time period, the drug company Pfizer has agreed to donate azithromycin until 2025, if necessary, for elimination of the disease. The campaign unexpectedly found that distribution of azithromycin to very poor children reduced their early death rate by up to 25%.
History:
The disease is one of the earliest known eye afflictions, having been identified in Egypt as early as 15 BCE. Its presence was also recorded in ancient China and Mesopotamia. Trachoma became a problem as people moved into crowded settlements or towns where hygiene was poor. It became a particular problem in Europe in the 19th century. After the Egyptian Campaign (1798–1802) and the Napoleonic Wars (1798–1815), trachoma was rampant in the army barracks of Europe and spread to those living in towns as troops returned home. Stringent control measures were introduced, and by the early 20th century, trachoma was essentially controlled in Europe, although cases were reported until the 1950s. Today, most victims of trachoma live in underdeveloped and poverty-stricken countries in Africa, the Middle East, and Asia. In the United States, the Centers for Disease Control says, "No national or international surveillance [for trachoma] exists. Blindness due to trachoma has been eliminated from the United States. The last cases were found among Native American populations and in Appalachia, and those in the boxing, wrestling, and sawmill industries (prolonged exposure to combinations of sweat and sawdust often led to the disease). In the late 19th and early 20th centuries, trachoma was the main reason for an immigrant coming through Ellis Island to be deported." In 1913, President Woodrow Wilson signed an act designating funds for the eradication of the disease. Immigrants who attempted to enter the U.S. through Ellis Island, New York, had to be checked for trachoma. During this time, treatment for the disease was by topical application of copper sulfate. By the late 1930s, a number of ophthalmologists reported success in treating trachoma with sulfonamide antibiotics. In 1948, Vincent Tabone (who was later to become the President of Malta) was entrusted with the supervision of a campaign in Malta to treat trachoma using sulfonamide tablets and drops. Due to improved sanitation and overall living conditions, trachoma virtually disappeared from the industrialized world by the 1950s, though it continues to plague the developing world to this day. Epidemiological studies were conducted in 1956–1963 by the Trachoma Control Pilot Project in India under the Indian Council for Medical Research. This potentially blinding disease remains endemic in the poorest regions of Africa, Asia, and the Middle East, and in some parts of Latin America and Australia. Currently, 8 million people are visually impaired as a result of trachoma, and 41 million have an active infection.
History:
Of the 54 countries that the WHO cited as still having blinding trachoma, Australia is the only developed country; Australian Aboriginal people who live in remote communities with inadequate sanitation are still blinded by this infectious eye disease. India's Health and Family Welfare Minister JP Nadda declared India free of infective trachoma in 2017.
Etymology:
The term is derived from Neo-Latin trāchōma, from Greek τράχωμα trākhōma, from τραχύς trākhus, "rough".
Economics:
The economic burden of trachoma is substantial, particularly the costs of treatment and the productivity losses that result from increased visual impairment and, in some cases, permanent blindness. The global cost of trachoma is estimated at between US$2.9 and 5.3 billion each year. Including the cost of trichiasis treatment, the estimated overall cost of the disease rises to about US$8 billion.