**White blood cell** White blood cell: White blood cells, also called leukocytes or leucocytes, are cells of the immune system that are involved in protecting the body against both infectious disease and foreign invaders. White blood cells include three main subtypes: granulocytes, lymphocytes, and monocytes. The term white cells is sometimes preferred over white blood cells because these cells spend most of their time in the lymph or plasma rather than in the blood. All white blood cells are produced and derived from multipotent cells in the bone marrow known as hematopoietic stem cells. Leukocytes are found throughout the body, including the blood and lymphatic system. All white blood cells have nuclei, which distinguishes them from the other blood cells, the anucleated red blood cells (RBCs) and platelets. They are white or colorless because they lack haemoglobin. They have no fixed shape and can be described as amoeboid. The different white blood cells are usually classified by cell lineage (myeloid cells or lymphoid cells).

White blood cells are part of the body's immune system. They help the body fight infection and other diseases. Types of white blood cells are granulocytes (neutrophils, eosinophils, and basophils) and agranulocytes (monocytes and lymphocytes (T cells and B cells)). Myeloid cells (myelocytes) include neutrophils, eosinophils, mast cells, basophils, and monocytes. Monocytes are further subdivided into dendritic cells and macrophages. Monocytes, macrophages, and neutrophils are phagocytic. Lymphoid cells (lymphocytes) include T cells (subdivided into helper T cells, memory T cells, and cytotoxic T cells), B cells (subdivided into plasma cells and memory B cells), and natural killer cells. Historically, white blood cells were classified by their physical characteristics (granulocytes and agranulocytes), but this classification system is less frequently used now. Produced in the bone marrow, white blood cells defend the body against infections and disease. An excess of white blood cells is usually due to infection or inflammation. Less commonly, a high white blood cell count could indicate certain blood cancers or bone marrow disorders.

White blood cell: The number of leukocytes in the blood is often an indicator of disease, and thus the white blood cell count is an important subset of the complete blood count. The normal white cell count is usually between 4 × 10⁹/L and 1.1 × 10¹⁰/L. In the US, this is usually expressed as 4,000 to 11,000 white blood cells per microliter of blood. White blood cells make up approximately 1% of the total blood volume in a healthy adult, making them substantially less numerous than the red blood cells at 40% to 45%. However, this 1% of the blood makes a large difference to health, because immunity depends on it. An increase in the number of leukocytes over the upper limit is called leukocytosis. It is normal when it is part of healthy immune responses, which happen frequently. It is occasionally abnormal, when it is neoplastic or autoimmune in origin. A decrease below the lower limit is called leukopenia, which indicates a weakened immune system.

Etymology: The name "white blood cell" derives from the physical appearance of a blood sample after centrifugation. White cells are found in the buffy coat, a thin, typically white layer of nucleated cells between the sedimented red blood cells and the blood plasma. The scientific term leukocyte directly reflects this description: it is derived from the Greek roots leuk- meaning "white" and cyt- meaning "cell".
The buffy coat may sometimes be green if there are large amounts of neutrophils in the sample, due to the heme-containing enzyme myeloperoxidase that they produce.

Types of white blood cells: Overview. All white blood cells are nucleated, which distinguishes them from the anucleated red blood cells and platelets. Types of leukocytes can be classified in standard ways. Two pairs of broadest categories classify them either by structure (granulocytes or agranulocytes) or by cell lineage (myeloid cells or lymphoid cells). These broadest categories can be further divided into the five main types: neutrophils, eosinophils, basophils, lymphocytes, and monocytes. These types are distinguished by their physical and functional characteristics. Monocytes and neutrophils are phagocytic. Further subtypes can be classified.

Types of white blood cells: Granulocytes are distinguished from agranulocytes by their nucleus shape (lobed versus round, that is, polymorphonuclear versus mononuclear) and by their cytoplasm granules (present or absent, or more precisely, visible on light microscopy or not). The other dichotomy is by lineage: myeloid cells (neutrophils, monocytes, eosinophils, and basophils) are distinguished from lymphoid cells (lymphocytes) by hematopoietic lineage (cellular differentiation lineage). Lymphocytes can be further classified as T cells, B cells, and natural killer cells.

Types of white blood cells: Neutrophil. Neutrophils are the most abundant white blood cell, constituting 60–70% of the circulating leukocytes. They defend against bacterial or fungal infection. They are usually first responders to microbial infection; their activity and death in large numbers form pus. They are commonly referred to as polymorphonuclear (PMN) leukocytes, although, in the technical sense, PMN refers to all granulocytes. They have a multi-lobed nucleus, which consists of three to five lobes connected by slender strands. This gives the neutrophils the appearance of having multiple nuclei, hence the name polymorphonuclear leukocyte. The cytoplasm may look transparent because of fine granules that are pale lilac when stained. Neutrophils are active in phagocytosing bacteria and are present in large amounts in the pus of wounds. These cells are not able to renew their lysosomes (used in digesting microbes) and die after having phagocytosed a few pathogens. Neutrophils are the most common cell type seen in the early stages of acute inflammation. The average lifespan of inactivated human neutrophils in the circulation has been reported by different approaches to be between 5 and 135 hours.

Types of white blood cells: Eosinophil. Eosinophils compose about 2–4% of white blood cells in circulating blood. This count fluctuates throughout the day, seasonally, and during menstruation. It rises in response to allergies, parasitic infections, collagen diseases, and diseases of the spleen and central nervous system. They are rare in the blood, but numerous in the mucous membranes of the respiratory, digestive, and lower urinary tracts. They primarily deal with parasitic infections. Eosinophils are also the predominant inflammatory cells in allergic reactions. The most important causes of eosinophilia include allergies such as asthma, hay fever, and hives, as well as parasitic infections. They secrete chemicals that destroy large parasites, such as hookworms and tapeworms, that are too big for any one white blood cell to phagocytize. In general, their nuclei are bi-lobed. The lobes are connected by a thin strand.
The cytoplasm is full of granules that assume a characteristic pink-orange color with eosin staining.

Types of white blood cells: Basophil. Basophils are chiefly responsible for allergic and antigen responses, releasing the chemical histamine, which causes the dilation of blood vessels. Because they are the rarest of the white blood cells (less than 0.5% of the total count) and share physicochemical properties with other blood cells, they are difficult to study. They can be recognized by several coarse, dark violet granules, giving them a blue hue. The nucleus is bi- or tri-lobed, but it is hard to see because of the number of coarse granules that hide it.

Types of white blood cells: Basophils secrete two chemicals that aid in the body's defenses: histamine and heparin. Histamine is responsible for widening blood vessels and increasing the flow of blood to injured tissue. It also makes blood vessels more permeable so neutrophils and clotting proteins can get into connective tissue more easily. Heparin is an anticoagulant that inhibits blood clotting and promotes the movement of white blood cells into an area. Basophils can also release chemical signals that attract eosinophils and neutrophils to an infection site.

Types of white blood cells: Lymphocyte. Lymphocytes are much more common in the lymphatic system than in blood. Lymphocytes are distinguished by a deeply staining nucleus that may be eccentric in location, and a relatively small amount of cytoplasm. Lymphocytes include B cells, which make antibodies that can bind to pathogens, block pathogen invasion, activate the complement system, and enhance pathogen destruction.

Types of white blood cells: T cells. CD4+ helper T cells: T cells displaying the co-receptor CD4 are known as CD4+ T cells. These cells have T-cell receptors and CD4 molecules that, in combination, bind antigenic peptides presented on major histocompatibility complex (MHC) class II molecules on antigen-presenting cells. Helper T cells make cytokines and perform other functions that help coordinate the immune response. In HIV infection, these T cells are the main index used to assess the integrity of the individual's immune system.

Types of white blood cells: CD8+ cytotoxic T cells: T cells displaying the co-receptor CD8 are known as CD8+ T cells. These cells bind antigens presented on the MHC class I complex of virus-infected or tumour cells and kill them. Nearly all nucleated cells display MHC class I. γδ T cells possess an alternative T-cell receptor (different from the αβ TCR found on conventional CD4+ and CD8+ T cells). Found in tissue more commonly than in blood, γδ T cells share characteristics of helper T cells, cytotoxic T cells, and natural killer cells. Natural killer cells are able to kill cells of the body that do not display MHC class I molecules, or that display stress markers such as MHC class I polypeptide-related sequence A (MIC-A). Decreased expression of MHC class I and up-regulation of MIC-A can happen when cells are infected by a virus or become cancerous.

Types of white blood cells: Monocyte. Monocytes, the largest type of white blood cell, share the "vacuum cleaner" (phagocytosis) function of neutrophils, but are much longer lived, as they have an extra role: they present pieces of pathogens to T cells so that the pathogens may be recognized again and killed. This causes an antibody response to be mounted. Monocytes eventually leave the bloodstream and become tissue macrophages, which remove dead cell debris as well as attack microorganisms.
Neither dead cell debris nor attacking microorganisms can be dealt with effectively by the neutrophils. Unlike neutrophils, monocytes are able to replace their lysosomal contents and are thought to have a much longer active life. They have a kidney-shaped nucleus, are typically not granulated, and possess abundant cytoplasm.

Fixed leucocytes: Some leucocytes migrate into the tissues of the body to take up a permanent residence at that location rather than remaining in the blood. Often these cells have specific names depending upon which tissue they settle in, such as fixed macrophages in the liver, which become known as Kupffer cells. These cells still serve a role in the immune system.

Fixed leucocytes include:
- Histiocytes
- Dendritic cells (although these will often migrate to local lymph nodes upon ingesting antigens)
- Mast cells
- Microglia

Disorders: The two commonly used categories of white blood cell disorders divide them quantitatively into those causing excessive numbers (proliferative disorders) and those causing insufficient numbers (leukopenias). Leukocytosis is usually healthy (e.g., fighting an infection), but it also may be dysfunctionally proliferative. Proliferative disorders of white blood cells can be classed as myeloproliferative and lymphoproliferative. Some are autoimmune, but many are neoplastic.

Disorders: Another way to categorize disorders of white blood cells is qualitatively. There are various disorders in which the number of white blood cells is normal but the cells do not function normally. Neoplasia of white blood cells can be benign but is often malignant. Of the various tumors of the blood and lymph, cancers of white blood cells can be broadly classified as leukemias and lymphomas, although those categories overlap and are often grouped together.

Disorders: Leukopenias. A range of disorders can cause decreases in white blood cells. The cell type most often decreased is the neutrophil, in which case the decrease may be called neutropenia or granulocytopenia. Less commonly, a decrease in lymphocytes (called lymphocytopenia or lymphopenia) may be seen.

Leukemia is a group of disorders in which the white blood cell count increases abnormally; it is a type of cancer. Because white blood cells are produced in the bone marrow, it can sometimes be cured by transplantation of bone marrow from a suitable donor.

Neutropenia can be acquired or intrinsic. A decrease in levels of neutrophils on lab tests is due to either decreased production of neutrophils or increased removal from the blood. The following list of causes is not complete:
- Medications - chemotherapy, sulfas or other antibiotics, phenothiazines, benzodiazepines, antithyroid medications, anticonvulsants, quinine, quinidine, indomethacin, procainamide, thiazides
- Radiation
- Toxins - alcohol, benzenes
- Intrinsic disorders - Fanconi's, Kostmann's, cyclic neutropenia, Chédiak–Higashi
- Immune dysfunction - disorders of collagen, AIDS, rheumatoid arthritis
- Blood cell dysfunction - megaloblastic anemia, myelodysplasia, marrow failure, marrow replacement, acute leukemia
- Any major infection
- Miscellaneous - starvation, hypersplenism

Symptoms of neutropenia are associated with the underlying cause of the decrease in neutrophils. For example, the most common cause of acquired neutropenia is drug-induced, so an individual may have symptoms of medication overdose or toxicity.
Disorders: Treatment is also aimed at the underlying cause of the neutropenia. One severe consequence of neutropenia is that it can increase the risk of infection.

Lymphocytopenia is defined as a total lymphocyte count below 1.0 × 10⁹/L; the cells most commonly affected are CD4+ T cells. Like neutropenia, lymphocytopenia may be acquired or intrinsic, and there are many causes. This is not a complete list:
- Inherited immune deficiency - severe combined immunodeficiency, common variable immune deficiency, ataxia-telangiectasia, Wiskott–Aldrich syndrome, immunodeficiency with short-limbed dwarfism, immunodeficiency with thymoma, purine nucleoside phosphorylase deficiency, genetic polymorphism
- Blood cell dysfunction - aplastic anemia
- Infectious diseases - viral (AIDS, SARS, West Nile encephalitis, hepatitis, herpes, measles, others), bacterial (TB, typhoid, pneumonia, rickettsiosis, ehrlichiosis, sepsis), parasitic (acute phase of malaria)
- Medications - chemotherapy (antilymphocyte globulin therapy, alemtuzumab, glucocorticoids)
- Radiation
- Major surgery
- Miscellaneous - ECMO, kidney or bone marrow transplant, hemodialysis, kidney failure, severe burns, celiac disease, severe acute pancreatitis, sarcoidosis, protein-losing enteropathy, strenuous exercise, carcinoma
- Immune dysfunction - arthritis, systemic lupus erythematosus, Sjögren syndrome, myasthenia gravis, systemic vasculitis, Behcet-like syndrome, dermatomyositis, granulomatosis with polyangiitis
- Nutritional/dietary - alcohol use disorder, zinc deficiency

Like neutropenia, symptoms and treatment of lymphocytopenia are directed at the underlying cause of the change in cell counts.

Disorders: Proliferative disorders. An increase in the number of white blood cells in circulation is called leukocytosis. This increase is most commonly caused by inflammation. There are four major causes: increased production in the bone marrow, increased release from storage in the bone marrow, decreased attachment to veins and arteries, and decreased uptake by tissues. Leukocytosis may affect one or more cell lines, presenting as neutrophilia, eosinophilia, basophilia, monocytosis, or lymphocytosis.

Disorders: Neutrophilia. Neutrophilia is an increase in the absolute neutrophil count in the peripheral circulation. Normal blood values vary by age. Neutrophilia can be caused by a direct problem with blood cells (primary disease). It can also occur as a consequence of an underlying disease (secondary). Most cases of neutrophilia are secondary to inflammation.

Primary causes:
- Conditions with normally functioning neutrophils - hereditary neutrophilia, chronic idiopathic neutrophilia
- Pelger–Huët anomaly
- Down syndrome
- Leukocyte adhesion deficiency
- Familial cold urticaria
- Leukemia (chronic myelogenous (CML)) and other myeloproliferative disorders
- Surgical removal of the spleen

Secondary causes:
- Infection
- Chronic inflammation - especially juvenile rheumatoid arthritis, rheumatoid arthritis, Still's disease, Crohn's disease, ulcerative colitis, granulomatous infections (for example, tuberculosis), and chronic hepatitis
- Cigarette smoking - occurs in 25–50% of chronic smokers and can last up to 5 years after quitting
- Stress - exercise, surgery, general stress
- Medication induced - corticosteroids (for example, prednisone), β-agonists, lithium
- Cancer - either by growth factors secreted by the tumor or invasion of the bone marrow by the cancer

Increased destruction of cells in the peripheral circulation can stimulate the bone marrow.
This can occur in hemolytic anemia and idiopathic thrombocytopenic purpura.

Eosinophilia: A normal eosinophil count is considered to be less than 0.65 × 10⁹/L. Eosinophil counts are higher in newborns and vary with age, time (lower in the morning and higher at night), exercise, environment, and exposure to allergens. Eosinophilia is never a normal lab finding. Efforts should always be made to discover the underlying cause, though the cause may not always be found.

Counting and reference ranges: The complete blood cell count is a blood panel that includes the overall white blood cell count and a differential count, a count of each type of white blood cell. Reference ranges for blood tests specify the typical counts in healthy people. The normal total leucocyte count in an adult is 4,000 to 11,000 per mm³ of blood. The differential leucocyte count gives the number and percentage of each type of leucocyte per cubic millimetre of blood, and reference ranges are defined for each type.
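As a worked illustration of how the total count, the differential percentages, and the SI units above relate, the following Python sketch converts a total count plus a differential into absolute counts. The percentages for neutrophils, eosinophils, and basophils are the approximate figures quoted earlier in this article; the lymphocyte and monocyte values are assumed for illustration. None of these are clinical reference values.

```python
# Convert a total white blood cell count plus a differential (percentages)
# into absolute counts. Figures are illustrative, not clinical references.

total_wbc_per_ul = 7_000  # within the normal range of 4,000-11,000 cells/uL

differential_pct = {
    "neutrophils": 65.0,  # ~60-70% of circulating leukocytes (from the text)
    "lymphocytes": 28.0,  # assumed for illustration
    "monocytes": 4.0,     # assumed for illustration
    "eosinophils": 2.5,   # ~2-4% (from the text)
    "basophils": 0.5,     # rarest type, <0.5% of the total (from the text)
}

for cell_type, pct in differential_pct.items():
    per_ul = total_wbc_per_ul * pct / 100   # absolute count, cells per uL
    per_liter = per_ul * 1e6                # 1 litre = 10^6 microlitres
    print(f"{cell_type:12s} {per_ul:7.0f} /uL  = {per_liter:.2e} /L")

# Sanity check: 4,000 cells/uL corresponds to 4 x 10^9/L, matching the
# two expressions of the normal range given in the text.
```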
**Bisexual lighting** Bisexual lighting: Bisexual lighting is the simultaneous use of pink, purple, and blue lighting to represent bisexual characters. It has been used in studio lighting for film and television and observed in the cinematography of various films. While not all films, television shows, photographs, and music videos that use this lighting intend to portray bisexuality, many queer artists have deliberately used this color palette in their work.

Bisexual lighting: Some commentators have pointed to the pink and blue color scheme as merely a reference to the 1980s aesthetic. It is reminiscent of neon lights and is also associated with retrowave.

Symbolism: George Pierpoint of BBC News writes that some social media users claim bisexual lighting has been used as an "empowering visual device" which counteracts perceived under-representation of bisexuality in the visual media. The colors may be a direct reference to the bisexual pride flag. The trend gained traction in the LGBT community in 2017, particularly on the social media sites Twitter, Reddit, and Pinterest. Sasha Geffen wrote at Vulture.com that it had become "solid in its meaning", while Nicky Idika of PopBuzz wrote that it has now "become an established part of bisexual storytelling in media". And while The Daily Dot questioned whether "the aesthetic or the cultural significance [came] first", it too concluded that the idea "has stuck". Pantone selected "Ultra Violet" as the color of 2018 in a move the BBC says reflected the growing use of the scheme.

Amelia Perrin has criticized the trend of using such lighting when bisexual characters appear in television and music videos, arguing in Cosmopolitan that this visual image "perpetuates bisexual stereotypes". Perrin argues that this kind of lighting is usually produced by neon lights, which suggest "clubs and dancefloors" to the viewer, implying that "bisexual hook-ups and relationships are merely 'experiments', and something that only happens when you're drunk on a night out." According to Jessica Mason of The Mary Sue, the color purple—being a combination of multiple pure, spectral colors—has historically been used to represent "royalty and the divine," as well as "magic, aliens and the unknown."

History: According to BOWIE Creators, the concept of bisexual lighting was invented in 2014 by a Tumblr fan of Sherlock who believed that the lighting was being used to signal that Dr. Watson was bisexual and would eventually be in a romantic relationship with Sherlock Holmes. This brief instance of bisexual lighting had no direct impact on other shows, movies, or music videos containing it, but it did put the idea into the world that bisexual themes could be expressed via this color scheme. Around 2017, left-wing YouTubers such as ContraPoints (who identified as bisexual at the time) began to light their videos with pink, purple, and blue neon lights. The use of bisexual lighting became a popular meme in 2018, with multiple Twitter threads showcasing instances of the lighting scheme going viral, as well as photographs of animals in bisexual lighting being shared widely on social media.

Examples: Bisexual lighting appears across mediums, often in scenes featuring bisexual characters or referencing bisexuality. The films The Neon Demon, Atomic Blonde, and Black Panther all feature the use of blue, pink, and purple lighting.
Similarly, the award-winning Black Mirror episode "San Junipero", as well as episodes from the Blumhouse holiday horror anthology Into the Dark, including "I'm Just F*cking with You", "Midnight Kiss", and "My Valentine", made use of the visual aesthetic. Later, the television series Riverdale, Moonbeam City, The Assassination of Gianni Versace: American Crime Story, Voltron: Legendary Defender, and The Owl House, as well as the 2020 film Birds of Prey, were also stated to be using it. The third episode of Loki, "Lamentis", features this lighting in a scene where the title character discloses his bisexuality. The video game Ultrakill features bisexual lighting across its dystopian representation of the second circle of hell.

Examples: Bisexual lighting also features in the music videos of Janelle Monáe's "Make Me Feel," Demi Lovato's "Cool for the Summer," and Ariana Grande's "7 Rings." The term was used to describe the "electric blue and magenta pink lights" that flash during Harry Styles' song "Medicine" when he plays it on tour, and in Lil Nas X's music video for "Panini". Cosmopolitan noted that some of Taylor Swift's fans cited the color palette's presence on her album cover for Lover as evidence for their long-refuted fan theories that she is bisexual and at one point dated Karlie Kloss.

Lara Thompson, a lecturer of film at Middlesex University, has argued that bisexual lighting is not well-known, stating: "I would have to see more examples before I see bisexual lighting as a wholly convincing phenomenon". According to Lillian Hochwender writing in Polygon, "Bi lighting often feels ubiquitous, even when there isn't a hint of bisexuality in sight ... These are the colors of magic in fantasy, alien landscapes in sci-fi, and the neon lighting of cyberpunk settings and nightclubs. Thus, while Twitter users and media critics have noted bi lighting in John Wick 3, Blade Runner 2049, Color Out of Space, Orphan: First Kill, Bingo Hell, Men in Black: International, Bullet Train and Spider-Man: Into the Spider-Verse, there's often a less gay logic for doing so." In 2022, bisexual lighting was noticed in Netflix's Heartstopper and HBO's Emmy Award-winning Euphoria. The 2022 bisexual leather film, Please Baby Please, employed bisexual lighting throughout the entire film.
**GAMA Platform** GAMA Platform: GAMA (GIS Agent-based Modeling Architecture) is a simulation platform with a complete modelling and simulation integrated development environment (IDE) for building spatially explicit agent-based simulations.

About: The GAMA Platform is agent-based modeling software that was originally (2007-2010) developed by the Vietnamese-French research team MSI (located at IFI, Hanoi, and part of the IRD - SU International Research Unit UMMISCO). It is now developed by an international consortium of academic and industrial partners led by UMMISCO, including INRAE, the University of Toulouse 1, the University of Rouen, the University of Orsay, the University of Can Tho, Vietnam, the National University of Hanoi, EDF R&D, CEA LISC, and the MIT Media Lab. GAMA was designed to allow domain experts without a programming background to model phenomena from their field of expertise. The GAMA environment enables exploration of emergent phenomena. It comes with a models library including examples from several domains, such as economics, biology, physics, chemistry, psychology, and system dynamics.

About: The GAMA simulation panel allows exploration by modifying switches, sliders, choosers, inputs, and other user interface elements that the modeler chooses to make available.

Technical foundation: GAMA Platform is free and open-source software, released under the GNU General Public License (GPLv3). It is written in Java and runs on the Java virtual machine (JVM). All core components and extensions are written in Java, but end users do not need to work in Java at all if they use a published build of the platform; instead, they write all models using GAML (described below).

Technical foundation: Multiple application domains. GAMA was developed with a very general approach and can be used for many application domains. GAMA is mostly present in application domains like transport, urban planning, disaster response, epidemiology, analysis of multirobot systems, and the environment, with special emphasis on analyses that use GIS data.

High-level agent-based language: GAML (GAma Modeling Language) is the dedicated language used in GAMA. It is an agent-based language that makes it possible to build a model using several modeling paradigms. This high-level language was inspired by Smalltalk and Java, and GAMA has been developed to be usable by non-computer scientists.

User interface: Modelers may use many visual representations for the same model, in order to highlight a certain aspect of a simulation. These include 2D/3D displays, with basic control of lighting, textures, and cameras. Standard charts such as series plots may also be constructed.

Project examples: The developers maintain a community-sourced list of scientific projects that use GAMA. Some of the larger efforts include:
- Hoan Kiem Air: agent-based modeling and simulation of urban management of traffic and air pollution through a tangible interface.
- Proxymix: a visualization tool about the influence of spatial configuration on human collaboration.
- CityScope Champs-Elysées: an interactive platform to improve decision-making related to the revitalization of the Champs Élysées.
- ESCAPE: a multi-modal urban traffic agent-based framework to study individual response to catastrophic events.
- COMOKIT: a generic model of public policies to contain the spread of COVID-19 epidemics in a city, validated on the basis of different case studies.
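For readers unfamiliar with the paradigm, the sketch below shows in plain Python what a minimal spatially explicit agent-based model looks like: agents on a grid move randomly and interact with co-located neighbors each tick. It is a hypothetical illustration of the concepts only; actual GAMA models are written in GAML, and none of the names below come from GAMA's API.

```python
import random

# Minimal, hypothetical sketch of a spatially explicit agent-based model,
# illustrating the paradigm GAMA supports (real GAMA models use GAML).

GRID = 20    # side length of the square environment
STEPS = 50   # number of simulation ticks

class Agent:
    def __init__(self):
        self.x = random.randrange(GRID)
        self.y = random.randrange(GRID)
        self.infected = random.random() < 0.1  # toy epidemiology state

    def step(self, agents):
        # Random walk on the grid, with toroidal wrap-around.
        self.x = (self.x + random.choice((-1, 0, 1))) % GRID
        self.y = (self.y + random.choice((-1, 0, 1))) % GRID
        # Local interaction: the state spreads between co-located agents.
        if not self.infected:
            self.infected = any(
                a.infected and (a.x, a.y) == (self.x, self.y)
                for a in agents if a is not self
            )

agents = [Agent() for _ in range(100)]
for _ in range(STEPS):
    for a in agents:
        a.step(agents)
print("infected agents after", STEPS, "steps:", sum(a.infected for a in agents))
```

An emergent quantity (here, how far the state spreads) falls out of purely local rules, which is the kind of phenomenon the platform is built to let modelers explore interactively.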
Users: Several academic institutions teach modeling and simulation courses based on GAMA. It is taught in the Urban Simulation class at the Potsdam University of Applied Sciences, and at the University of Salzburg. It is also used and taught annually at the Multi-platform International Summer School on Agent-Based Modelling & Simulation.
**Single transverse palmar crease** Single transverse palmar crease: In humans, a single transverse palmar crease is a single crease that extends across the palm of the hand, formed by the fusion of the two palmar creases (known in palmistry as the "heart line" and the "head line"). Although it is found more frequently in persons with several abnormal medical conditions, it is not predictive of any of these conditions, since it is also found in persons with no abnormal medical conditions. It is found in 1.5% of the world population in at least one hand.

Former name: Because it resembles the usual condition of non-human simians, it was, in the past, called the simian crease or simian line. These terms have widely fallen out of favor due to their pejorative connotation.

Medical significance: The presence of a single transverse palmar crease has no medical significance. It is found in 1.5% of all people, and though it is found at a higher frequency in people with abnormal medical conditions, in every one of these conditions many people do not have a single transverse palmar crease; thus it has low predictive value.

Medical significance: Males are twice as likely as females to have this characteristic, and it tends to run in families. In its non-symptomatic form, it is more common among Asians and Native Americans than among other populations, and in some families there is a tendency to inherit the condition unilaterally; that is, on one hand only. While it is often found in people with Down syndrome, many who have this syndrome do not have this crease, so it is not a diagnostic indicator of Down syndrome. The presence of a single transverse palmar crease has been associated with a number of abnormal medical conditions — that is, it is found at a frequency higher than 1.5%, but in all of these conditions many people do not have this crease. Examples of conditions with such an association are fetal alcohol syndrome; the genetic chromosomal abnormalities Down syndrome (chromosome 21), cri du chat syndrome (chromosome 5), Klinefelter syndrome, Wolf-Hirschhorn syndrome, Noonan syndrome (chromosome 12), Patau syndrome (chromosome 13), IDIC 15/Dup15q (chromosome 15), Edwards syndrome (chromosome 18), and Aarskog-Scott syndrome (X-linked recessive); and autosomal recessive disorders such as leukocyte adhesion deficiency-2 (LAD2). A unilateral single palmar crease was also reported in a case of a chromosome 9 mutation causing nevoid basal cell carcinoma syndrome and Robinow syndrome. It is also sometimes found on the hand of the affected side of patients with Poland syndrome, and in craniosynostosis.

Medical significance: A 1971 study refutes the hypothesis that the phenomenon is caused by fetal hand movement: the crease appears around the second month of gestation, before the phase of digital movement in the womb begins.
**Bitvise** Bitvise: Bitvise is proprietary secure remote access software developed for Windows and available as a client and server. The software is based on the Secure Shell (SSH) protocol, which provides a secure channel over an insecure network in a client-server architecture.

Technology: Bitvise software implements version 2 of the Secure Shell (SSH) protocol; SFTP versions 3, 4, and 6; as well as SCP and FTPS, according to publicly available standards.

Development: The software is developed and published by Bitvise Limited. The first released product, in 2001, was Bitvise SSH Server, then named WinSSHD; it was shortly followed by Tunnelier, now Bitvise SSH Client. There have been 8 major releases of the software so far.

Features: Both the server and client work with all desktop and server versions of Windows and allow for remote access using a tool like WinVNC. They provide a GUI as well as a command-line interface to support SFTP, SSH, SCP, and VPN using the TCP/IP tunneling feature. Among other features, the software supports GSSAPI-enabled Kerberos 5 exchange and NTLM and Kerberos 5 user authentication. It provides two-factor authentication and compatibility with RFC 6238 authenticator apps.
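RFC 6238 (TOTP) is the open standard behind such authenticator apps. As background, here is a minimal Python sketch of how a TOTP code is derived from a shared secret; it follows the RFC (HMAC-SHA1 over a 30-second time counter, with RFC 4226 dynamic truncation) but is purely illustrative and is not Bitvise's implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period            # moving time-based factor
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Example using the RFC test secret "12345678901234567890" in base32:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"))
```

Because both sides derive the code from the same secret and the current time, a server can verify a six-digit code without any network round trip to the authenticator app.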
**Costocervical trunk** Costocervical trunk: The costocervical trunk arises from the upper and back part of the second part of the subclavian artery, behind the scalenus anterior on the right side, and medial to that muscle on the left side. Passing backward, it splits into the deep cervical artery and the supreme intercostal artery (highest intercostal artery), which descends behind the pleura in front of the necks of the first and second ribs and anastomoses with the first aortic intercostal (3rd posterior intercostal artery). As it crosses the neck of the first rib it lies medial to the anterior division of the first thoracic nerve, and lateral to the first thoracic ganglion of the sympathetic trunk. In the first intercostal space, it gives off a branch which is distributed in a manner similar to the distribution of the aortic intercostals. The branch for the second intercostal space usually joins with one from the highest aortic intercostal artery. This branch is not constant, but is more commonly found on the right side; when absent, its place is supplied by an intercostal branch from the aorta. Each intercostal gives off a posterior branch which goes to the posterior vertebral muscles, and sends a small spinal branch through the corresponding intervertebral foramen to the medulla spinalis and its membranes.

Branches:
- Deep cervical artery
- Supreme intercostal artery
**Tabula Affinitatum** Tabula Affinitatum: The Tabula Affinitatum is a table of chemical affinities between substances. Commissioned around 1766 by the pharmacist Hubert Franz Hoefer for the apothecary's shop of the Grand Duke of Florence, this large table of chemical substances was designed to guide the preparer of pharmaceutical remedies in identifying the compounds most likely to combine with one another. The table is modeled on Étienne-François Geoffroy's Table des différents Rapports observés entre différentes substances (Paris, 1718), from which it differs by adding a seventeenth column. The substances are identified by traditional alchemical symbols and the symbolic language in use in the seventeenth and early eighteenth centuries. The Florentine table does not, however, include the symbol for air. This suggests that it was compiled in a period when there was not yet a full awareness of the function of air as a chemically active substance, hence capable of combining with solids and liquids. A similar table is found among the plates of Diderot and d'Alembert's Grande Encyclopédie.

Tabula Affinitatum: The oil painting measures 1540 × 1300 mm and is displayed in the Museo Galileo, Florence.
**Concreteness training** Concreteness training: Concreteness training (CNT) is the repeated practice of cognitive skills to create habitual behaviors, in order to help reduce anxiety and depressive symptoms in people with depression. People suffering from depression have a tendency towards unhelpful abstract thinking and negative thoughts, such as viewing a single mistake as evidence that they are useless at everything. As such, CNT involves switching cognitive focus from negative thoughts to positive thoughts so as to cut down on rumination—focused attention on the symptoms of one's distress—and self-criticism, which can cause feelings of inadequacy and raise anxiety.

This technique was developed at the University of Exeter, in Exeter, England, by Professor Edward Watkins and his team of researchers after they conducted a study to see if the CNT approach could reduce symptoms of depression and anxiety. In the 2009 study, twenty-one men and thirty-nine women were randomly assigned to one of three groups. The first group received CNT, the second group received bogus concreteness training (BGT), and the third group was a wait-list (WL) control condition that received no treatment. The concreteness training involved practicing thinking about the specific details of recent mild negative events: how the event happened, where it happened, who was there, and what they did. The goal was to form a mental picture of the event and its circumstances, and then to focus on the sequence of how it happened. Participants received their assigned treatment every day for a week. At the end of the week, participants were again assessed for depression levels and symptoms. Results indicated that CNT showed a trend toward a greater decrease in depressive symptoms than BGT or WL. Accordingly, Professor Watkins noted: "This is the first demonstration that just targeting thinking style can be an effective means of tackling depression. Concreteness training can be delivered with minimal face-to-face contact with a therapist and training could be accessed online, through CDs or through smartphone apps. This has the advantage of making it a relatively cheap form of treatment."

However, a study published by Springer Nature in 2013 concluded that the effectiveness of CNT may be limited: while concreteness of thinking increased, the results did not support that CNT was effective "as a standalone treatment for depression". In addition, contrary to previous findings, the study did not find a significant effect on rumination; the suggested reason for this lack of effect was that "the sample did not exhibit a significant decrease in depression". Yet CNT has been shown to be effective when delivered in a specific manner, such as in a therapeutic context, where the participant knows he or she is being treated for depressive symptoms by a credible authority. Moreover, results have also demonstrated that CNT is a valid technique for the reduction of self-criticism, especially where self-relevant events (autobiographical materials) have been used.
**Low birth weight** Low birth weight: Low birth weight (LBW) is defined by the World Health Organization as a birth weight of an infant of 2,499 g (5 lb 8.1 oz) or less, regardless of gestational age. Infants born with LBW have added health risks which require close management, often in a neonatal intensive care unit (NICU). They are also at increased risk for long-term health conditions which require follow-up over time.

Classification: Birth weight may be classified as:
- High birth weight (macrosomia): greater than 4,200 g (9 lb 4 oz)
- Normal weight (term delivery): 2,500–4,200 g (5 lb 8 oz – 9 lb 4 oz)
- Low birth weight: less than 2,500 g (5 lb 8 oz)
- Very low birth weight: less than 1,500 g (3 lb 5 oz)
- Extremely low birth weight: less than 1,000 g (2 lb 3 oz)

Causes: LBW is caused either by preterm birth (that is, a low gestational age at birth, commonly defined as younger than 37 weeks of gestation) or by the infant being small for gestational age (that is, a slow prenatal growth rate), or by a combination of both. In general, maternal risk factors that may contribute to low birth weight include young age, multiple pregnancies, previous LBW infants, poor nutrition, heart disease or hypertension, untreated celiac disease, substance use disorder, excessive alcohol use, and insufficient prenatal care. It can also be caused by prelabor rupture of membranes. Environmental risk factors include smoking, lead exposure, and other types of air pollution.

Causes: Preterm birth. The mechanism of preterm birth is heterogeneous and poorly understood. It may be tied to one or more of the following processes: premature fetal endocrine activation, intrauterine inflammation, over-distension of the uterus, and endometrial bleeding. A prominent risk factor for preterm birth is a prior history of preterm delivery. However, there is no reliable protocol for screening and prevention of preterm birth.

Causes: Small for gestational age. Infants born small for gestational age may be constitutionally small, with no associated pathologic process. Others have intrauterine growth restriction (IUGR) due to any of various pathologic processes. Babies with chromosomal abnormalities or other congenital anomalies may manifest IUGR as part of their syndrome. Problems with the placenta can prevent it from providing adequate oxygen and nutrients to the fetus, resulting in growth restriction. Infections during pregnancy that affect the fetus, such as rubella, cytomegalovirus, toxoplasmosis, and syphilis, may also affect the baby's weight.

Causes: Environmental factors. Maternal tobacco smoking doubles the risk of LBW for the infant. More recently, passive maternal smoking has been examined for possible effects on birth weight and has been shown to increase the risk of LBW by 16%.

Causes: Air pollutants. The combustion products of solid fuel in developing countries can cause many adverse health issues in people. Because a majority of pregnant women in developing countries, where rates of LBW are high, are heavily exposed to indoor air pollution, the increased relative risk translates into a substantial population attributable risk of 21% of LBW. Particulate matter, a component of ambient air pollution, is associated with increased risk of low birth weight. Because particulate matter is composed of extremely small particles, even nonvisible levels can be inhaled and present harm to the fetus.
Particulate matter exposure can cause inflammation, oxidative stress, endocrine disruption, and impaired oxygen transport to the placenta, all of which are mechanisms for heightening the risk of low birth weight. To reduce exposure to particulate matter, pregnant women can monitor the EPA's Air Quality Index and take personal precautionary measures such as reducing outdoor activity on low-quality days, avoiding high-traffic roads and intersections, and/or wearing personal protective equipment (i.e., a facial mask of industrial design). Indoor exposure to particulate matter can also be reduced through adequate ventilation, as well as the use of clean heating and cooking methods.

A correlation between maternal exposure to carbon monoxide (CO) and low birth weight has been reported; the effect of increased ambient CO on birth weight was as large as the effect of the mother smoking a pack of cigarettes per day during pregnancy.

Causes: Adverse reproductive effects (e.g., risk of LBW) have been found to correlate with maternal exposure to CO emissions in Eastern Europe and North America. Mercury is a known toxic heavy metal that can harm fetal growth and health, and there is evidence that exposure to mercury (via consumption of large oily fish) during pregnancy may be related to higher risk of LBW in the offspring.

Causes: Other exposures. Elevated blood lead levels in pregnant women, even those well below the US Centers for Disease Control and Prevention's 10 µg/dL "level of concern", can cause miscarriage, premature birth, and LBW in the offspring. Exposure of pregnant women to airplane noise was found to be associated with low birth weight via adverse effects on fetal growth. Prevalence of low birth weight in Japan is associated with radiation doses from the Fukushima accidents of March 2011.

Causes: Periodontal health. Low birth weight, preterm birth, and preeclampsia have been associated with maternal periodontal disease, though the strength of the observed associations is inconsistent and varies according to the population studied, the means of periodontal assessment, and the periodontal disease classification employed. The risk of low birth weight can be reduced with treatment of the periodontal disease. This therapy is safe during pregnancy and reduces the inflammatory burden, thus decreasing the risk for preterm birth and low birth weight.

Management: Temperature regulation. LBW newborns are at increased risk of hypothermia due to decreased brown fat stores. Plastic wraps, heated pads, and skin-to-skin contact decrease the risk of hypothermia immediately after delivery. One or more of these interventions may be employed, though combinations incur risk of hyperthermia. Warmed incubators in the NICU aid in thermoregulation for LBW infants.

Management: Fluid and electrolyte balance. Frequent clinical monitoring of volume status and checking of serum electrolytes (up to three times daily) is appropriate to prevent dehydration, fluid overload, and electrolyte imbalance. VLBW newborns have an increased body surface to weight ratio, increasing the risk for insensible fluid losses and dehydration. Humidified incubators and skin emollients can lessen insensible fluid loss in VLBW newborns. However, fluid overloading is not benign; it is associated with increased risk of congestive heart failure, necrotizing enterocolitis, and mortality.
A degree of fluid restriction mitigates these risks. VLBW newborns are at risk for electrolyte imbalances due to the relative immaturity of the nephrons in their kidneys. The kidneys are not equipped to handle large sodium loads; therefore, if normal saline is given, the sodium level may become elevated, which may prompt the clinician to give more fluids. Sodium restriction has been shown to prevent fluid overload. Potassium must also be monitored carefully, as immature aldosterone sensitivity and sodium-potassium pumping increase the risk for hyperkalemia and cardiac arrhythmias. VLBW newborns are frequently found to have a persistently patent ductus arteriosus (PDA). If present, it is important to evaluate whether the PDA is causing increased circulatory volume, thus posing a risk for heart failure. Signs of a clinically significant PDA include widened pulse pressure and bounding pulses. In newborns with a significant PDA, fluid restriction may avoid the need for surgical or medical therapy to close it.

Management: Approach to nutrition. As their gastrointestinal systems are typically unready for enteral feeds at the time of birth, VLBW infants require initial parenteral infusion of fluids, macronutrients, vitamins, and micronutrients.

Energy needs: Decreased activity compared to normal weight newborns may decrease energy requirements, while comorbidities such as bronchopulmonary dysplasia may increase them. Daily weight gain can reveal whether a VLBW newborn is receiving adequate calories. Growth of 21 g/kg/day, mirroring in utero growth, is a target for VLBW and ELBW neonates.

Management: Enteral sources. Upon transitioning to enteral nutrition, human milk is preferable to formula initially in VLBW newborns because it speeds up development of the intestinal barrier and thereby reduces the risk of necrotizing enterocolitis, with an absolute risk reduction of 4%. Donor human milk and maternal expressed breast milk are both associated with this benefit. One drawback of human milk is the imprecision in its calorie content. The fat content in human milk varies greatly among women; therefore, the energy content of human milk cannot be known as precisely as that of formula. Each time human milk is transferred between containers, some of the fat content may stick to the container, decreasing the energy content. Minimizing transfers of human milk between containers decreases the amount of energy lost. Formula is associated with greater linear growth and weight gain than donor breast milk in LBW infants.

Management: Individual nutrient considerations. VLBW newborns are at increased risk for hypoglycemia due to decreased energy reserves and a large brain mass to body mass ratio. Hypoglycemia may be prevented by intravenous infusion of glucose, amino acids, and lipids. These patients are also at risk of hyperglycemia due to immature insulin secretion and sensitivity. However, insulin supplementation is not recommended due to the possible adverse effect of hypoglycemia, which is more dangerous. VLBW newborns have an increased need for amino acids to mirror in utero nutrition. Daily protein intake above 3.0 g/kg is associated with improved weight gain for LBW infants. ELBW newborns may require as much as 4 g/kg/day of protein. Due to the limited solubility of calcium and phosphorus in parenteral infusions, VLBW infants receiving parenteral nutrition will be somewhat deficient in these elements and will require clinical monitoring for osteopenia.
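As a worked example of the growth and protein targets quoted above (21 g/kg/day weight gain; protein above 3.0 g/kg/day, up to about 4 g/kg/day for ELBW newborns), a small illustrative calculation follows. It is arithmetic only, not a clinical calculator.

```python
def daily_targets(weight_kg: float, elbw: bool = False) -> dict:
    # Figures from the text above: ~21 g/kg/day weight gain mirrors in utero
    # growth; daily protein above 3.0 g/kg (up to ~4 g/kg for ELBW newborns).
    # Illustrative arithmetic only, not a clinical tool.
    return {
        "target_weight_gain_g_per_day": 21 * weight_kg,
        "target_protein_g_per_day": (4.0 if elbw else 3.0) * weight_kg,
    }

print(daily_targets(1.2))                # a 1,200 g (VLBW) infant
print(daily_targets(0.9, elbw=True))     # a 900 g (ELBW) infant
```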
Management: Hematology. One Cochrane review showed that administration of erythropoietin (EPO) decreases the later need for blood transfusions and is also associated with protection against necrotizing enterocolitis and intraventricular hemorrhage. EPO is safe and does not increase the risk of mortality or retinopathy of prematurity.

Prognosis: Perinatal outcomes. LBW is closely associated with fetal and perinatal mortality and morbidity, inhibited growth and cognitive development, and chronic diseases later in life. At the population level, the proportion of babies with LBW is an indicator of a multifaceted public-health problem that includes long-term maternal malnutrition, ill health, hard work, and poor health care in pregnancy. On an individual basis, LBW is an important predictor of newborn health and survival and is associated with higher risk of infant and childhood mortality. Low birth weight accounts for 60 to 80 percent of the infant mortality rate in developing countries. Infant mortality due to low birth weight is usually direct, stemming from other medical complications such as preterm birth, PPROM, poor maternal nutritional status, lack of prenatal care, maternal sickness during pregnancy, and an unhygienic home environment.

Prognosis: Long term outcomes. Hyponatremia in the newborn period is associated with neurodevelopmental conditions such as spastic cerebral palsy and sensorineural hearing loss. Rapid correction of hyponatremia (faster than 0.4 mEq/L/hour) perinatally is also associated with neurodevelopmental adverse effects. Among VLBW children, the risk of cognitive impairment is increased with lower birth weight, male sex, nonwhite ethnicity, and lower parental education level. There is no clear association between brain injury in the neonatal period and later cognitive impairment. Additionally, low birth weight has associations with cardiovascular diseases later in life, especially in cases of large increases in weight during childhood. Low birth weight is also associated with schizoid personality disorder.

Epidemiology: The World Health Organization (WHO) estimates the worldwide prevalence of low birth weight at 15% as of 2014; it varies by region: Sub-Saharan Africa, 13%; South Asia, 28%; East Asia and the Pacific, 6%; Latin America and the Caribbean, 9%. Aggregate prevalence of LBW in United Nations-designated Least Developed Countries is 13%. The WHO has set a goal of reducing the worldwide prevalence of LBW by 30% through public health interventions, including improved prenatal care and women's education. In the United States, the Centers for Disease Control and Prevention (CDC) reported 313,752 LBW infants in 2018, for a prevalence of 8.28%. This is an increase from an estimated 6.1% prevalence in 2011 reported by the Agency for Healthcare Research and Quality (AHRQ). The CDC reported a VLBW prevalence of 1.38% in 2018, similar to the 2011 AHRQ estimate.
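The weight categories from the Classification section map directly onto threshold checks; the sketch below encodes them. Clinically, VLBW and ELBW are nested subsets of LBW, so the function reports the most specific tier.

```python
def classify_birth_weight(grams: float) -> str:
    # Cutoffs from the Classification section above. VLBW and ELBW are
    # subsets of LBW; the most specific applicable tier is returned.
    if grams > 4200:
        return "high birth weight (macrosomia)"
    if grams >= 2500:
        return "normal weight (term delivery)"
    if grams >= 1500:
        return "low birth weight (LBW)"
    if grams >= 1000:
        return "very low birth weight (VLBW)"
    return "extremely low birth weight (ELBW)"

for grams in (950, 1400, 2300, 3400, 4500):
    print(grams, "g ->", classify_birth_weight(grams))
```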
**Adit** Adit: An adit (from Latin aditus, entrance) or stulm is a horizontal or nearly horizontal passage to an underground mine. Miners can use adits for access, drainage, ventilation, and extracting minerals at the lowest convenient level. Adits are also used to explore for mineral veins.

Construction: Adits are driven into the side of a hill or mountain, and are often used when an ore body is located inside the mountain but above the adjacent valley floor or coastal plain. In cases where the mineral vein outcrops at the surface, the adit may follow the lode or vein until it is worked out, in which case the adit is rarely straight. The use of adits for the extraction of ore is generally called drift mining.

Construction: Adits can only be driven into a mine where the local topography permits. There will be no opportunity to drive an adit to a mine situated on a large flat plain, for instance. Also, if the ground is weak, the cost of shoring up a long adit may outweigh its possible advantages.

Access and ventilation: Access to a mine by adit has many advantages over the vertical access shafts used in shaft mining. Less energy is required to transport miners and heavy equipment into and out of the mine. It is also much easier to bring ore or coal out of the mine. Horizontal travel by means of a narrow gauge tramway or cable car is also much safer and can move more people and ore than vertical elevators. In the past, horses and pit ponies were used.

Access and ventilation: In combination with shafts, adits form an important element in the ventilation of a mine: in simple terms, cool air will enter through an adit, be warmed by the higher temperature underground, and will naturally exhaust from vertical shafts, some of which are sunk specifically for this purpose.

Drainage: Most adits are designed to slope slightly upwards from the entrance so that water will flow freely out of the mine. Mines that have adits can be at least partly drained of water by gravity alone or by power-assisted gravity. The depth to which a mine can be drained by gravity alone is defined by the deepest open adit, which is known as the "drainage adit". The term mine drainage tunnel is also common, at least in the United States. Workings above this level (known as "above adit") will remain unflooded as long as the adit does not become blocked. All mine workings below both the drainage adit ("below adit") and the water table will flood unless mechanical means are used for drainage. Until the invention of the steam engine, this was the main restriction on deep mining. Adits are useful for deeper mines, as water only needs to be raised to the drainage adit rather than to the surface.

Drainage: Because of the great reduction in ongoing costs that a drainage adit can provide, they have sometimes been driven for great distances for this purpose. One example is the Milwr tunnel in North Wales, which is about ten miles (16 km) long. Other examples are the Great County Adit in Cornwall, a 40-mile (64 km) long network of adits that used to drain the whole Gwennap mining area, and the 3.9-mile (6.3 km) Sutro Tunnel at the Comstock Lode in Virginia City, Nevada. A side benefit of driving such extensive adits is that previously unknown ore-bodies can be discovered, helping finance the enormous cost. Adits were used in Cornwall before 1500, and were important to the tin and copper mines in Cornwall and Devon because the ore-bearing veins are nearly vertical, thus acting as ingress channels for water.
Notable examples:
- Great County Adit, a system of over 40 miles (65 km) of adits used for dewatering more than 100 mines in the Gwennap area of Cornwall in the 18th and early 19th centuries.
- The Hollingwood Common Canal, a disused navigable coal mine adit which terminated at the Chesterfield Canal at Hollingwood, near Staveley, Derbyshire.
- Milwr tunnel, a 10-mile-long (16 km) drainage adit in North Wales. Started in 1897, it still discharges an average of 87 million litres (23 million US gallons) of water per day from the disused Halkyn District United Mines.
- The adits of the Snowy Mountains hydroelectric and irrigation scheme in the Australian Snowy Mountains, created during its construction. They are very large and were used to access the central points from which the hydro tunnels were constructed.
- Black Trout Adit in Tarnowskie Góry, Poland. Part of a former silver mine, the adit was used for removing water from the mine. It still carries water from old galleries to the nearest river. A part of it is open for tourists, who go 20 metres (66 ft) down the steps in one shaft, have a ride in a boat, and go up the stairs in another shaft.
- Blue Hawk Mine near Kelowna, British Columbia, Canada.
- NORCAT's Underground Mine Centre (Fecunis Adit), in Onaping, Ontario, Canada, used for underground training and mining technology development.
- Sutro Tunnel, for drainage and exploitation of the Comstock Lode in Nevada.

Similar terms: A "drift" is a more general term for any near-horizontal underground passage in a mine. Unlike an adit, a drift need not break out to the surface. Drift mining is the use of drifts to extract ore; in this case the drifts follow the vein.

Similar terms: A "level" is a horizontal passage that branches off from a shaft and is used for access to the parts of the mine where the ore is being removed. In mines where the lodes have significant vertical extent there can be many numbered levels, one below the other. They can be connected by short vertical shafts known as "winzes". A level that reaches the surface, on a hillside or in a valley, for instance, is called an "adit level". In the Worsley Navigable Levels in Greater Manchester, England, the levels were intentionally flooded and coal was transported on canal boats.

Similar terms: "Sough" is a term mainly used in the lead mining areas of Derbyshire. The main purpose of a sough is to drain water from the mine.
**Shunpiking** Shunpiking: Shunpiking is the act of deliberately avoiding roads that require payment of a fee or toll to travel on them, usually by traveling on alternative "free" roads which bypass the toll road. The term comes from the word shun, meaning "to avoid", and pike, a term referring to turnpikes, which is another name for toll roads. People who often avoid toll roads sometimes call themselves shunpikers. Historically, certain paths around tollbooths came to be so well known they were called "shun-pikes". Shunpiking has also come to mean an avoidance of major highways (regardless of tolls) in preference for bucolic and scenic interludes along lightly traveled country roads. Early shunpikes: Shunpikes were known in the United States soon after independence. In the mid-1700s, Samuel Rice built a road over the Hoosac Range in northwestern Massachusetts, near the present Hoosac Tunnel. Subsequently, a nearby road for stagecoaches was built around 1787, which became subject to control of the Turnpike Association incorporated in 1797. People desiring to avoid the turnpike fees took the Rice Road instead of the stage road, and so the Rice Road earned the sobriquet “shunpike”. Contributing to open free travel, in 1797 the thrifty travelers of the Mohawk Trail forded the Deerfield River rather than pay toll at the turnpike bridge; in 1810 they won the battle for free travel on all Massachusetts roads. A shunpike in Morris County, New Jersey, dates back to 1804; one near Mount Holly, Vermont, was in existence at least as early as 1809; and one in Hampton Falls, New Hampshire, was created circa 1810. A newspaper article in the New Jersey Journal of March 6, 1804 (p. 4), references a house for sale on Shunpike Road between Morristown and Elizabethtown (Elizabeth), New Jersey. This "Shunpike Road", parts of which are still extant, was in existence the same year that the turnpike it was used to avoid, the Morris Turnpike, was opened for business: 1804. It ran southwest of and parallel to the Morris Turnpike, now called "Old Turnpike Road". It was formed by the improvement and connection of sideroads to enable country people to avoid the expenses of the tolls. Shunpike Road ran through the towns of Bottle Hill (now Madison), Chatham, Summit and Springfield. When the Hampton Falls Turnpike was built in Hampton Falls, New Hampshire, around 1810 by the Hampton Causeway Turnpike Corporation, a toll was charged to cross it at the Taylor River. "Not content with the payment of a toll, some of the residents got together and built a slight bridge called the 'Shunpike' across the Taylor's River, some distance west of the Turnpike bridge, where travelers and teamsters could cross without charge. This continued on until April 12, 1826, when the toll on the Turnpike was discontinued and has remained a free road to this day." Historical boycott in Virginia: An example of shunpiking as a form of boycott occurred at the James River Bridge in eastern Virginia, United States. After years of lower than anticipated revenues on the narrow, privately funded structure built in 1928, the Commonwealth of Virginia finally purchased the facility in 1949. However, rather than announcing a long-expected decrease in tolls, the state officials increased the rates in 1955 without visibly improving the roadway, with the notable exception of building a new toll plaza. Historical boycott in Virginia: The increased toll rates incensed the public and business users alike. In a well-publicized example of shunpiking, Joseph W.
Luter Jr., head of Smithfield Packing Company (the producer of Smithfield Hams), ordered his truck drivers to take different routes and cross smaller and cheaper bridges. Despite the boycott by Luter and others, tolls continued for 20 more years. They were finally removed from the old bridge in 1975 when construction began on a toll-free replacement structure. United States: Connecticut Prior to the removal of tolls in 1985, the Connecticut Turnpike had eight mainline toll barriers instead of the ticket system typically used on the turnpikes of that era. While the Connecticut Turnpike was officially considered a toll road for its entire 129-mile length, the placement of mainline toll barriers and the lack of ramp tolls meant the only sections of the Turnpike that were truly tolled were between the interchanges immediately before and following each mainline barrier. Consequently, motorists familiar with the local area around each of the toll barriers could essentially travel the Turnpike toll-free by exiting before the toll plaza, using local streets to bypass the toll, and re-entering the Turnpike past the toll plaza. United States: Delaware There is a toll of $4 in each direction on the 11-mile (18 km) Delaware Turnpike, or I-95. It is the third most expensive turnpike in the United States when calculated per mile. Since the turnpike does not use ramp tolls, only imposing a toll on drivers passing through a toll plaza just east of the Maryland state line, the toll is easily avoided by using local roads. By taking the last exit of I-95 in Maryland, MD 279, one can continue northbound on MD 279, cross into Delaware on DE 279, turn right at Christiana Parkway (DE 4/DE 896), and make another right onto DE 896 and soon arrive once again at I-95. Large trucks cannot use this detour as DE 4/DE 896 have width and weight restrictions. On January 10, 2019, DelDOT opened the US 301 toll road bypassing Middletown. Now all traffic entering Delaware using US 301 must pay a minimum $4 toll at the state line, with access to the old alignment cut off until after the toll point via Exit 2. Several new shunpikes have emerged, the most common being the historical alignment of MD 299 through Warwick or Levels Road, but neither is viable for trucks. A longer distance route involves using MD 300 in Maryland into Delaware (becoming DE 300 across the line), then turning onto US 13 to the free ramp back to DE 1 at Port Penn Road. United States: Kentucky The Abraham Lincoln Bridge and John F. Kennedy Memorial Bridge are a pair of bridges that carry Interstate 65 across the Ohio River, connecting Jeffersonville, Indiana to downtown Louisville, Kentucky. On December 30, 2016, the Kentucky Department of Transportation implemented a toll to cross the bridges in either direction, ranging from $2 for vehicles with electronic transponders to $4 for vehicles paying by mail. The Clark Memorial Bridge, which makes the same crossing less than one mile west of the two I-65 bridges, remained free. This resulted in a 49% decrease in daily crossings on the Kennedy Bridge and a 75% increase in traffic on the Clark Memorial Bridge. United States: Pennsylvania Interstate 70 runs concurrently with the Pennsylvania Turnpike for 86 miles (138 km). Westbound travelers can exit I-70 in Maryland just south of the Pennsylvania border and enter Interstate 68, continuing along I-68's entire length through western Maryland and into West Virginia until arriving at Interstate 79, I-68's western terminus, in Morgantown.
After merging onto I-79 north, a traveler can enter Pennsylvania and merge back onto I-70 in Washington, Pennsylvania, where I-70 and I-79 are briefly concurrent. United States: Despite the added mileage, the relatively non-congested roadways in western Maryland (combined with the various tunnels and pre-Interstate quality of the Pennsylvania Turnpike) make the toll-free trip take nearly the same time as the toll route. (The Pennsylvania Turnpike was grandfathered from modern Interstate standards.) A statewide shunpike of the Pennsylvania Turnpike, from Philadelphia to Pittsburgh, can be accomplished via a toll-free route that is 4-lane nearly the entire distance and adds only about 45 minutes of travel time relative to the 5-hour drive using the Turnpike between the two cities. This toll-free route utilizes I-76 West to US 202 South to US 422 West to US 222 South to US 30 West to PA 283 West to I-283 North to I-83 North to I-81 South to US 322/22 West to I-99 South to US 22 West to I-376 West. Travelers who need to go further west than Pittsburgh (e.g. to Ohio) and/or reach I-80 can use the aforementioned toll-free route up to US 322 West, but then take I-99 north (instead of south), which connects to I-80 West. United States: Oklahoma In Oklahoma east of Oklahoma City, Interstate 44 replaced old U.S. Route 66 as the main route in the form of the Turner Turnpike between Oklahoma City and Tulsa, and the Will Rogers Turnpike between Tulsa and the Missouri state line. However, locals have kept old 66 alive by using it for shunpiking instead of the locally unpopular toll expressway. In Britain: In the early 1990s, the management of the Severn Bridge doubled the tolls in one direction (England to Wales) and made the other direction free of charge, presumably to save on staff costs. As a result, many lorry drivers used the Severn Bridge in the free direction, but when travelling from England to Wales, crossed the Severn at Gloucester, where there was no charge, and then drove through the Forest of Dean. Tolls on the Severn Crossings were abolished in 2018. The M6 Toll became the first motorway other than bridges to charge drivers. Drivers can avoid the toll by staying on the M6 motorway, which is shorter than the toll road, though usually more congested. In Hong Kong: In Hong Kong, when crossing Victoria Harbour between Hong Kong Island and Kowloon/New Kowloon, most drivers and businesses prefer the much cheaper, and older, Cross-Harbour Tunnel (XHT) to the Western Harbour Crossing. The toll differences are particularly significant for lorries, coaches and buses. The government has proposed a subsidy to users of a third tunnel, the Eastern Harbour Crossing, to relieve the congestion through the XHT and around both of its ends. The proposal of increasing the Cross-Harbour Tunnel's prices and lowering those of the Eastern Harbour Crossing has yet to be put into practice. In Hong Kong: A similar phenomenon exists with the Lion Rock Tunnel between Sha Tin New Town (and the rest of the eastern and northeastern New Territories) and New Kowloon. Most users prefer the Lion Rock Tunnel to the Tate's Cairn Tunnel, the Shing Mun Tunnels, or the Eagle's Nest and Sha Tin Heights Tunnels, as the newer tunnels are longer and more expensive. However, this problem is not as serious as with the tunnels connecting Hong Kong Island and Kowloon. In popular culture: The term "shunpiking" inspired the name of Stan Shunpike, the Knight Bus conductor in the Harry Potter stories.
**Infrared sauna** Infrared sauna: An infrared sauna uses infrared heaters to emit infrared light, experienced as radiant heat, which is absorbed by the surface of the skin. Infrared saunas are popular in alternative therapies, where they are claimed to help with a number of medical issues including autism, cancer, and COVID-19, but these claims are entirely pseudoscientific. Traditional saunas differ from infrared saunas in that they heat the body primarily by conduction and convection from the heated air and by radiation of the heated surfaces in the sauna room, whereas infrared saunas primarily use just radiation. Infrared saunas are also used in Infrared Therapy and Waon Therapy, and although there is a small amount of preliminary evidence that these therapies correlate with a number of effects, including reduced blood pressure, increased heart rate and increased left ventricular function, there are several problems with linking this evidence to alleged health benefits. History: John Harvey Kellogg invented the use of radiant heat saunas with his incandescent electric light bath in 1891. He claimed that it stimulated healing in the body and in 1893 displayed his invention at the Chicago World's Fair. In 1896 the Radiant Heat Bath was patented by Kellogg and described in the patent as not depending on the heat in the air to heat the body but able to more quickly produce a sweat than traditional Turkish or Russian baths at a lower ambient temperature. The idea became popular, particularly in Germany where "Light Institutes" were set up. King Edward VII of England and Kaiser Wilhelm II of Germany both had radiant heat baths set up in their various palaces. The modern concept of the infrared sauna was revived in the 1970s in Japan as Waon (Japanese: "soothing warmth") Therapy, and neonatal beds for newborns use infrared elements to keep the baby warm without being stifled. Description: Infrared saunas can be designed to look like traditional saunas, but cheaper models can be in the form of a tent with an infrared element inside. In recent years, infrared sauna mats have also been developed that claim some of the same benefits as an infrared sauna. Infrared saunas differ from other types of sauna, such as traditional Finnish saunas, mainly in the method of heat delivery. Far infrared light, which is emitted in an infrared sauna at a wavelength of around 10 μm, is felt directly by the body in the form of radiated heat without the need to heat the air around the body first. This results in a lower ambient air temperature, allowing for longer sustained stays in the sauna. Infrared light also penetrates the body deeply, resulting in a fast and vigorous sweat being produced. The average ambient temperature in an infrared sauna is usually 40–60 °C (104–140 °F), compared to 70–90 °C (158–194 °F) in traditional saunas. Effects: A 2009 literature review of research on far-infrared saunas (FIRS) concluded that there was limited moderate evidence supporting their efficacy in normalizing blood pressure and treating congestive heart failure. The review found fair evidence from a single study supporting FIRS therapy for chronic pain. They found fair evidence against claims that FIRS reduces cholesterol levels. They found weak evidence, from a single study, supporting FIRS therapy as a treatment for obesity. All of the studies in the review were limited: they had small sample sizes and short durations, used unvalidated symptom scales, and were conducted by the same core research group.
Effects: In February 2021, Steven Novella of Science-Based Medicine commented on the quality of such studies in an article entitled "Infrared Saunas for 'Detoxification'", in which he stated: Most of the mainstream attention is on the cardiovascular effects. Using a sauna does correlate with reduced blood pressure (in some, BP may also increase), increased heart rate, increased dermal perfusion with a reduction in organ perfusion, and increased left ventricular function and arterial flexibility. There are several problems with linking this evidence to alleged health benefits. First – these effects are all short term, during the sauna and for 30 minutes following. We don't know if there is any sustained change in cardiovascular function. Second, we don't know that these changes are improvements. This relates to the third issue, it is possible that at least most of these changes may simply be due to dehydration. Reduced blood volume from water loss (similar to a diuretic effect) will reduce the blood pressure and increase the heart rate, relaxing blood vessels to increase perfusion. So perhaps all we are seeing is a transient effect of the dehydration that accompanies using a sauna. Effects: A 2018 systematic review and meta-analysis of nine clinical trials found that five weekly conventional sauna sessions for 2 to 4 weeks were associated with a significant reduction in brain natriuretic peptide (BNP; a marker of heart failure progression) and cardiothoracic ratio (an indicator of heart enlargement), and improved left-ventricular ejection fraction, but no significant effect on left-ventricular end-diastolic diameter, left atrial diameter, systolic blood pressure, or diastolic blood pressure. The review also rated the quality of evidence for these findings as moderate to insufficient, citing a risk of bias and imprecision as the reasons for the low evidence rating. The evidence presented by the review supported a therapeutic effect of sauna bathing for heart failure patients but recommended that further studies were needed before definitive conclusions could be drawn. A 2019 scientific survey found that most people use both infrared and traditional saunas for relaxation and that their use, 5 to 15 times per month, was associated with higher mental well-being. Use in alternative therapies: There are a number of claims made about the health effects of infrared saunas that are entirely based in pseudoscience and have no evidence to support them. Use in alternative therapies: Claims of detoxification Proponents of infrared saunas may, without evidence, advertise benefits of detoxification, or that infrared saunas detoxify to a greater extent than traditional saunas. Proponents of infrared saunas will often claim that because infrared light penetrates the body so deeply, it must detoxify better than other means of sweat induction. Infrared saunas do induce body warmth and sweat much more vigorously and at lower ambient temperatures than traditional saunas or exercise; this does not mean that they detoxify more efficiently, or at all. Sweating removes an insignificant amount of toxins from the body and can be counterproductive to the function of the body's actual detoxification system, the liver and kidneys. Producing more sweat reduces the amount of urine produced by the body, which may actually reduce toxin excretion.
Use in alternative therapies: Applications Fire departments in Texas and Indiana have purchased infrared saunas under the premise that they will prevent cancer and that the firefighters will be able to sweat out inhaled pollutants. Alternative therapists such as naturopaths have advised the use of infrared saunas for the treatment of cancer and autism. Wellness clinics have recommended infrared saunas to remove radiation and heavy metals from the body, and as a preventative treatment for COVID-19. Gwyneth Paltrow has also been criticised by experts for recommending infrared saunas as a post-COVID-19 treatment.
**Brownian motion** Brownian motion: Brownian motion is the random motion of particles suspended in a medium (a liquid or a gas). This motion pattern typically consists of random fluctuations in a particle's position inside a fluid sub-domain, followed by a relocation to another sub-domain. Each relocation is followed by more fluctuations within the new closed volume. This pattern describes a fluid at thermal equilibrium, defined by a given temperature. Within such a fluid, there exists no preferential direction of flow (as in transport phenomena). More specifically, the fluid's overall linear and angular momenta remain null over time. The kinetic energies of the molecular Brownian motions, together with those of molecular rotations and vibrations, sum up to the caloric component of a fluid's internal energy (the equipartition theorem). Brownian motion: This motion is named after the botanist Robert Brown, who first described the phenomenon in 1827, while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water. In 1900, almost eighty years later, in his doctoral thesis, The Theory of Speculation (Théorie de la spéculation), prepared under the supervision of Henri Poincaré, the French mathematician Louis Bachelier modeled the stochastic process now called Brownian motion. Then, in 1905, theoretical physicist Albert Einstein published a paper where he modeled the motion of the pollen particles as being moved by individual water molecules, making one of his first major scientific contributions. The direction of the force of atomic bombardment is constantly changing, and at different times the particle is hit more on one side than another, leading to the seemingly random nature of the motion. This explanation of Brownian motion served as convincing evidence that atoms and molecules exist and was further verified experimentally by Jean Perrin in 1908. Perrin was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter". The many-body interactions that yield the Brownian pattern cannot be solved by a model accounting for every involved molecule. Consequently, only probabilistic models applied to molecular populations can be employed to describe it. Two such models from statistical mechanics, due to Einstein and Smoluchowski, are presented below. Another, purely probabilistic class of models is the class of stochastic process models. There exist sequences of both simpler and more complicated stochastic processes which converge (in the limit) to Brownian motion (see random walk and Donsker's theorem). History: The Roman philosopher-poet Lucretius' scientific poem "On the Nature of Things" (c. 60 BC) has a remarkable description of the motion of dust particles in verses 113–140 from Book II. He uses this as a proof of the existence of atoms: Observe what happens when sunbeams are admitted into a building and shed light on its shadowy places. You will see a multitude of tiny particles mingling in a multitude of ways... their dancing is an actual indication of underlying movements of matter that are hidden from our sight... It originates with the atoms which move of themselves [i.e., spontaneously]. Then those small compound bodies that are least removed from the impetus of the atoms are set in motion by the impact of their invisible blows and in turn cannon against slightly larger bodies.
So the movement mounts up from the atoms and gradually emerges to the level of our senses so that those bodies are in motion that we see in sunbeams, moved by blows that remain invisible. History: Although the mingling, tumbling motion of dust particles is caused largely by air currents, the glittering, jiggling motion of small dust particles is caused chiefly by true Brownian dynamics; Lucretius "perfectly describes and explains the Brownian movement by a wrong example". While Jan Ingenhousz described the irregular motion of coal dust particles on the surface of alcohol in 1785, the discovery of this phenomenon is often credited to the botanist Robert Brown in 1827. Brown was studying pollen grains of the plant Clarkia pulchella suspended in water under a microscope when he observed minute particles, ejected by the pollen grains, executing a jittery motion. By repeating the experiment with particles of inorganic matter he was able to rule out that the motion was life-related, although its origin was yet to be explained. History: The first person to describe the mathematics behind Brownian motion was Thorvald N. Thiele in a paper on the method of least squares published in 1880. This was followed independently by Louis Bachelier in 1900 in his PhD thesis "The theory of speculation", in which he presented a stochastic analysis of the stock and option markets. The Brownian motion model of the stock market is often cited, but Benoit Mandelbrot rejected its applicability to stock price movements in part because these are discontinuous. Albert Einstein (in one of his 1905 papers) and Marian Smoluchowski (1906) brought the solution of the problem to the attention of physicists, and presented it as a way to indirectly confirm the existence of atoms and molecules. Their equations describing Brownian motion were subsequently verified by the experimental work of Jean Baptiste Perrin in 1908. Statistical mechanics theories: Einstein's theory There are two parts to Einstein's theory: the first part consists in the formulation of a diffusion equation for Brownian particles, in which the diffusion coefficient is related to the mean squared displacement of a Brownian particle, while the second part consists in relating the diffusion coefficient to measurable physical quantities. In this way Einstein was able to determine the size of atoms, and how many atoms there are in a mole, or the molecular weight in grams, of a gas. In accordance with Avogadro's law, this volume is the same for all ideal gases, which is 22.414 liters at standard temperature and pressure. The number of atoms contained in this volume is referred to as the Avogadro number, and the determination of this number is tantamount to the knowledge of the mass of an atom, since the latter is obtained by dividing the molar mass of the gas by the Avogadro constant. Statistical mechanics theories: The first part of Einstein's argument was to determine how far a Brownian particle travels in a given time interval.
Classical mechanics is unable to determine this distance because of the enormous number of bombardments a Brownian particle will undergo, roughly of the order of 10¹⁴ collisions per second. He regarded the increment of particle positions in time τ in a one-dimensional (x) space (with the coordinates chosen so that the origin lies at the initial position of the particle) as a random variable Δ with some probability density function φ(Δ) (i.e., φ(Δ) is the probability density for a jump of magnitude Δ, that is, the probability density of the particle incrementing its position from x to x + Δ in the time interval τ). Further, assuming conservation of particle number, he expanded the number density ρ(x, t + τ) (number of particles per unit volume around x) at time t + τ in a Taylor series, $\rho(x, t+\tau) = \rho(x,t) + \tau \frac{\partial \rho}{\partial t} + \cdots = \int_{-\infty}^{\infty} \rho(x - \Delta, t)\,\varphi(\Delta)\,d\Delta = \rho(x,t)\int_{-\infty}^{\infty}\varphi(\Delta)\,d\Delta - \frac{\partial \rho}{\partial x}\int_{-\infty}^{\infty}\Delta\,\varphi(\Delta)\,d\Delta + \frac{\partial^2 \rho}{\partial x^2}\int_{-\infty}^{\infty}\frac{\Delta^2}{2}\,\varphi(\Delta)\,d\Delta + \cdots$, where the second equality is by definition of φ. The integral in the first term is equal to one by the definition of probability, and the second and other even terms (i.e., first and other odd moments) vanish because of space symmetry. What is left gives rise to the following relation: $\frac{\partial \rho}{\partial t} = \left(\int_{-\infty}^{\infty} \frac{\Delta^2}{2\tau}\,\varphi(\Delta)\,d\Delta\right)\frac{\partial^2 \rho}{\partial x^2}$, where the coefficient after the Laplacian, the second moment of probability of displacement Δ, is interpreted as the mass diffusivity D: $D = \int_{-\infty}^{\infty} \frac{\Delta^2}{2\tau}\,\varphi(\Delta)\,d\Delta$. Then the density of Brownian particles ρ at point x at time t satisfies the diffusion equation $\frac{\partial \rho}{\partial t} = D\,\frac{\partial^2 \rho}{\partial x^2}$. Assuming that N particles start from the origin at the initial time t = 0, the diffusion equation has the solution $\rho(x,t) = \frac{N}{\sqrt{4\pi D t}}\, e^{-x^2/(4Dt)}$. This expression (which is a normal distribution with the mean μ = 0 and variance σ² = 2Dt, usually called Brownian motion B_t) allowed Einstein to calculate the moments directly. The first moment is seen to vanish, meaning that the Brownian particle is equally likely to move to the left as it is to move to the right. The second moment is, however, non-vanishing, being given by $\overline{x^2} = 2Dt$. This equation expresses the mean squared displacement in terms of the time elapsed and the diffusivity. From this expression Einstein argued that the displacement of a Brownian particle is not proportional to the elapsed time, but rather to its square root. His argument is based on a conceptual switch from the "ensemble" of Brownian particles to the "single" Brownian particle: we can speak of the relative number of particles at a single instant just as well as of the time it takes a Brownian particle to reach a given point. The second part of Einstein's theory relates the diffusion constant to physically measurable quantities, such as the mean squared displacement of a particle in a given time interval. This result enables the experimental determination of the Avogadro number and therefore the size of molecules. Einstein analyzed a dynamic equilibrium being established between opposing forces. The beauty of his argument is that the final result does not depend upon which forces are involved in setting up the dynamic equilibrium. Statistical mechanics theories: In his original treatment, Einstein considered an osmotic pressure experiment, but the same conclusion can be reached in other ways. Statistical mechanics theories: Consider, for instance, particles suspended in a viscous fluid in a gravitational field. Gravity tends to make the particles settle, whereas diffusion acts to homogenize them, driving them into regions of smaller concentration. Under the action of gravity, a particle acquires a downward speed of v = μmg, where m is the mass of the particle, g is the acceleration due to gravity, and μ is the particle's mobility in the fluid.
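To make the second-moment result concrete, here is a minimal Monte Carlo sketch (not from the original text) that checks $\overline{x^2} = 2Dt$ on simulated trajectories. It assumes Gaussian jump increments, which satisfy Einstein's premises; the diffusivity, time step, and ensemble size are arbitrary illustrative choices.

```python
import numpy as np

# Minimal Monte Carlo check of Einstein's result  mean(x^2) = 2 D t  for
# one-dimensional Brownian motion, modeled as a Gaussian random walk.
# All parameter values are arbitrary illustrative choices.
rng = np.random.default_rng(0)

D = 0.5            # diffusivity (arbitrary units)
dt = 1e-3          # time step
n_steps = 2_000    # steps per trajectory
n_walkers = 10_000

# Each increment is drawn from N(0, 2*D*dt), consistent with the density
# rho(x, t) = N / sqrt(4 pi D t) * exp(-x^2 / (4 D t)).
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_walkers, n_steps))
x = np.cumsum(steps, axis=1)          # positions x(t) for every walker

t = dt * np.arange(1, n_steps + 1)
msd = np.mean(x**2, axis=0)           # ensemble mean squared displacement

# The ratio msd / (2 D t) should hover near 1 at all times; at the final
# time this prints approximately 2.0 vs 2.0.
print(msd[-1], 2 * D * t[-1])
```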
George Stokes had shown that the mobility for a spherical particle with radius r is $\mu = \frac{1}{6\pi\eta r}$, where η is the dynamic viscosity of the fluid. In a state of dynamic equilibrium, and under the hypothesis of isothermal fluid, the particles are distributed according to the barometric distribution $\rho = \rho_o\, e^{-\frac{mgh}{k_B T}}$, where ρ − ρo is the difference in density of particles separated by a height difference of h = z − zo, kB is the Boltzmann constant (the ratio of the universal gas constant, R, to the Avogadro constant, NA), and T is the absolute temperature. Statistical mechanics theories: Dynamic equilibrium is established because the more that particles are pulled down by gravity, the greater the tendency for the particles to migrate to regions of lower concentration. The flux is given by Fick's law, $J = -D\frac{d\rho}{dh}$, where J = ρv. Introducing the formula for ρ, we find that $v = \frac{Dmg}{k_B T}$. In a state of dynamical equilibrium, this speed must also be equal to v = μmg. Both expressions for v are proportional to mg, reflecting that the derivation is independent of the type of forces considered. Similarly, one can derive an equivalent formula for identical charged particles of charge q in a uniform electric field of magnitude E, where mg is replaced with the electrostatic force qE. Equating these two expressions yields the Einstein relation for the diffusivity, independent of mg or qE or other such forces: $\frac{\overline{x^2}}{2t} = D = \mu k_B T = \frac{\mu R T}{N_A} = \frac{RT}{6\pi\eta r N_A}$. Here the first equality follows from the first part of Einstein's theory, the third equality follows from the definition of the Boltzmann constant as kB = R / NA, and the fourth equality follows from Stokes's formula for the mobility. By measuring the mean squared displacement over a time interval along with the universal gas constant R, the temperature T, the viscosity η, and the particle radius r, the Avogadro constant NA can be determined. Statistical mechanics theories: The type of dynamical equilibrium proposed by Einstein was not new. It had been pointed out previously by J. J. Thomson in his series of lectures at Yale University in May 1903 that the dynamic equilibrium between the velocity generated by a concentration gradient given by Fick's law and the velocity due to the variation of the partial pressure caused when ions are set in motion "gives us a method of determining Avogadro's Constant which is independent of any hypothesis as to the shape or size of molecules, or of the way in which they act upon each other". An identical expression to Einstein's formula for the diffusion coefficient was also found by Walther Nernst in 1888, in which he expressed the diffusion coefficient as the ratio of the osmotic pressure to the ratio of the frictional force and the velocity to which it gives rise. The former was equated to the law of van 't Hoff while the latter was given by Stokes's law. He writes $k' = p_o/k$ for the diffusion coefficient k′, where po is the osmotic pressure and k is the ratio of the frictional force to the molecular viscosity, which he assumes is given by Stokes's formula for the viscosity. Introducing the ideal gas law per unit volume for the osmotic pressure, the formula becomes identical to that of Einstein's.
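As a rough, hedged illustration of that measurement procedure, the sketch below evaluates the Einstein relation in both directions with assumed round numbers (water-like viscosity, a half-micron particle, room temperature); these values are illustrative and are not Perrin's actual data.

```python
import math

# Perrin-style estimate of the Avogadro constant via the Einstein relation
#   mean(x^2) / (2 t) = R T / (6 pi eta r N_A).
# All numerical inputs are assumed, illustrative values.
R = 8.314        # universal gas constant, J/(mol K)
T = 298.0        # absolute temperature, K
eta = 1.0e-3     # dynamic viscosity of water, Pa s
r = 0.5e-6       # particle radius, m
t = 60.0         # observation time, s

# Forward direction: diffusivity from the accepted Avogadro constant.
N_A_true = 6.022e23
D = R * T / (6 * math.pi * eta * r * N_A_true)   # ~4.4e-13 m^2/s
x2_bar = 2 * D * t                               # the "measured" mean squared displacement

# Inverse direction: recover N_A from the measured displacement, since
#   mean(x^2) = t R T / (3 pi eta r N_A).
N_A_est = R * T * t / (3 * math.pi * eta * r * x2_bar)
print(D, x2_bar, N_A_est)                        # N_A_est reproduces 6.022e23
```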
The use of Stokes's law in Nernst's case, as well as in Einstein and Smoluchowski, is not strictly applicable since it does not apply to the case where the radius of the sphere is small in comparison with the mean free path. At first, the predictions of Einstein's formula were seemingly refuted by a series of experiments by Svedberg in 1906 and 1907, which gave displacements of the particles as 4 to 6 times the predicted value, and by Henri in 1908 who found displacements 3 times greater than Einstein's formula predicted. But Einstein's predictions were finally confirmed in a series of experiments carried out by Chaudesaigues in 1908 and Perrin in 1909. The confirmation of Einstein's theory constituted empirical progress for the kinetic theory of heat. In essence, Einstein showed that the motion can be predicted directly from the kinetic model of thermal equilibrium. The importance of the theory lay in the fact that it confirmed the kinetic theory's account of the second law of thermodynamics as being an essentially statistical law. Statistical mechanics theories: Smoluchowski model Smoluchowski's theory of Brownian motion starts from the same premise as that of Einstein and derives the same probability distribution ρ(x, t) for the displacement of a Brownian particle along the x-axis in time t. He therefore gets the same expression for the mean squared displacement, $\overline{(\Delta x)^2}$. However, when he relates it to a particle of mass m moving at a velocity u which is the result of a frictional force governed by Stokes's law, he finds $\overline{(\Delta x)^2} = \frac{32}{81}\,\frac{t\,mu^2}{\pi\mu a} = \frac{64}{27}\,\frac{t\,\frac{1}{2}mu^2}{3\pi\mu a}$, where μ is the viscosity coefficient, and a is the radius of the particle. Associating the kinetic energy mu²/2 with the thermal energy RT/N, the expression for the mean squared displacement is 64/27 times that found by Einstein. The fraction 27/64 was commented on by Arnold Sommerfeld in his necrology on Smoluchowski: "The numerical coefficient of Einstein, which differs from Smoluchowski by 27/64 can only be put in doubt." Smoluchowski attempts to answer the question of why a Brownian particle should be displaced by bombardments of smaller particles when the probabilities for striking it in the forward and rear directions are equal. Statistical mechanics theories: If the probability of m gains and n − m losses follows a binomial distribution, $P_{m,n} = \binom{n}{m} 2^{-n}$, with equal a priori probabilities of 1/2, the mean total gain (for even n) is $\overline{|2m-n|} = \sum_{m=0}^{n} |2m-n|\,P_{m,n} = \frac{n\,n!}{2^n\left[\left(\frac{n}{2}\right)!\right]^2}$. If n is large enough so that Stirling's approximation can be used in the form $n! \approx \left(\frac{n}{e}\right)^n \sqrt{2\pi n}$, then the expected total gain will be $\overline{|2m-n|} \approx \sqrt{\frac{2n}{\pi}}$, showing that it increases as the square root of the total population. Statistical mechanics theories: Suppose that a Brownian particle of mass M is surrounded by lighter particles of mass m which are traveling at a speed u. Then, reasons Smoluchowski, in any collision between a surrounding particle and the Brownian particle, the velocity transmitted to the latter will be mu/M. This ratio is of the order of 10⁻⁷ cm/s. But we also have to take into consideration that in a gas there will be more than 10¹⁶ collisions in a second, and even more in a liquid, where we expect 10²⁰ collisions in one second. Some of these collisions will tend to accelerate the Brownian particle; others will tend to decelerate it. If there is a mean excess of one kind of collision or the other of the order of 10⁸ to 10¹⁰ collisions in one second, then the velocity of the Brownian particle may be anywhere between 10 and 1000 cm/s.
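A quick numerical check of this asymptotic is sketched below (an addition, not from the source text): the exact expression $\frac{n\,n!}{2^n[(n/2)!]^2}$ is evaluated with integer arithmetic as $n\binom{n}{n/2}/2^n$ and compared against $\sqrt{2n/\pi}$, with arbitrary values of n. Note also that $\sqrt{2n/\pi}$ with n of order 10¹⁶ to 10²⁰ gives roughly 10⁸ to 10¹⁰, consistent with the mean excess of collisions quoted above.

```python
import math

# Compare Smoluchowski's exact mean total gain,
#   n * n! / (2^n * ((n/2)!)^2)  ==  n * C(n, n/2) / 2^n,
# with its Stirling-approximation limit sqrt(2 n / pi), for even n.
for n in (10, 100, 1_000, 10_000):
    exact = n * math.comb(n, n // 2) / 2**n
    stirling = math.sqrt(2 * n / math.pi)
    print(n, exact, stirling)   # the two columns converge as n grows
```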
Thus, even though there are equal probabilities for forward and backward collisions there will be a net tendency to keep the Brownian particle in motion, just as the ballot theorem predicts. Statistical mechanics theories: These orders of magnitude are not exact because they don't take into consideration the velocity of the Brownian particle, U, which depends on the collisions that tend to accelerate and decelerate it. The larger U is, the greater will be the collisions that will retard it, so that the velocity of a Brownian particle can never increase without limit. Could such a process occur, it would be tantamount to a perpetual motion of the second type. And since equipartition of energy applies, the kinetic energy of the Brownian particle, MU²/2, will be equal, on the average, to the kinetic energy of the surrounding fluid particle, mu²/2. In 1906 Smoluchowski published a one-dimensional model to describe a particle undergoing Brownian motion. The model assumes collisions with M ≫ m, where M is the test particle's mass and m the mass of one of the individual particles composing the fluid. It is assumed that the particle collisions are confined to one dimension and that it is equally probable for the test particle to be hit from the left as from the right. It is also assumed that every collision always imparts the same magnitude of ΔV. If NR is the number of collisions from the right and NL the number of collisions from the left, then after N collisions the particle's velocity will have changed by ΔV(2NR − N). The multiplicity is then simply given by $\binom{N}{N_R} = \frac{N!}{N_R!\,(N-N_R)!}$ and the total number of possible states is given by 2^N. Therefore, the probability of the particle being hit from the right NR times is $P_N(N_R) = \frac{N!}{2^N\,N_R!\,(N-N_R)!}$. As a result of its simplicity, Smoluchowski's 1D model can only qualitatively describe Brownian motion. For a realistic particle undergoing Brownian motion in a fluid, many of the assumptions don't apply. For example, the assumption that, on average, an equal number of collisions occurs from the right as from the left falls apart once the particle is in motion. Also, in a realistic situation there would be a distribution of different possible values of ΔV instead of always just one. (A short numerical sketch of this model is given after this subsection.) Statistical mechanics theories: Other physics models using partial differential equations The diffusion equation yields an approximation of the time evolution of the probability density function associated with the position of the particle going under a Brownian movement under the physical definition. The approximation is valid on timescales long compared with the particle's momentum relaxation time. Statistical mechanics theories: The time evolution of the position of the Brownian particle itself is best described using the Langevin equation, an equation that involves a random force field representing the effect of the thermal fluctuations of the solvent on the particle. In Langevin dynamics and Brownian dynamics, the Langevin equation is used to efficiently simulate the dynamics of molecular systems that exhibit a strong Brownian component. Statistical mechanics theories: The displacement of a particle undergoing Brownian motion is obtained by solving the diffusion equation under appropriate boundary conditions and finding the rms of the solution. This shows that the displacement varies as the square root of the time (not linearly), which explains why previous experimental results concerning the velocity of Brownian particles gave nonsensical results. A linear time dependence was incorrectly assumed.
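Here is the promised simulation of the 1D collision model (with an assumed per-collision velocity transfer and collision count, both purely illustrative); it shows the typical net speed scaling as $\Delta V\sqrt{2N/\pi}$ rather than averaging to zero.

```python
import numpy as np

# Smoluchowski's 1-D collision model: N collisions, each equally likely to
# come from the left or the right, every collision changing the test
# particle's velocity by the same fixed magnitude dV.
# Parameter values are illustrative only.
rng = np.random.default_rng(1)

N = 10**6        # collisions per trial
dV = 1.0e-7      # velocity transferred per collision, of order m*u/M (cm/s)
trials = 1_000

# N_R ~ Binomial(N, 1/2); the net velocity change is dV * (2*N_R - N).
N_R = rng.binomial(N, 0.5, size=trials)
v_net = dV * (2 * N_R - N)

# The signed mean is ~0, but the typical magnitude grows like
# dV * sqrt(2 N / pi), so the particle is kept in motion.
print(np.mean(v_net), np.mean(np.abs(v_net)), dV * np.sqrt(2 * N / np.pi))
```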
Statistical mechanics theories: At very short time scales, however, the motion of a particle is dominated by its inertia and its displacement will be linearly dependent on time: Δx = vΔt. So the instantaneous velocity of the Brownian motion can be measured as v = Δx/Δt, when Δt ≪ τ, where τ is the momentum relaxation time. In 2010, the instantaneous velocity of a Brownian particle (a glass microsphere trapped in air with optical tweezers) was measured successfully. The velocity data verified the Maxwell–Boltzmann velocity distribution, and the equipartition theorem for a Brownian particle. Statistical mechanics theories: Astrophysics: star motion within galaxies In stellar dynamics, a massive body (star, black hole, etc.) can experience Brownian motion as it responds to gravitational forces from surrounding stars. The rms velocity V of the massive object, of mass M, is related to the rms velocity $v_\star$ of the background stars by $MV^2 \approx m v_\star^2$, where m ≪ M is the mass of the background stars. The gravitational force from the massive object causes nearby stars to move faster than they otherwise would, increasing both $v_\star$ and V. The Brownian velocity of Sgr A*, the supermassive black hole at the center of the Milky Way galaxy, is predicted from this formula to be less than 1 km s⁻¹. Mathematics: In mathematics, Brownian motion is described by the Wiener process, a continuous-time stochastic process named in honor of Norbert Wiener. It is one of the best known Lévy processes (càdlàg stochastic processes with stationary independent increments) and occurs frequently in pure and applied mathematics, economics and physics. The Wiener process Wt is characterized by four facts: (1) W0 = 0; (2) Wt is almost surely continuous; (3) Wt has independent increments; (4) Wt − Ws ∼ N(0, t − s) for 0 ≤ s ≤ t. Mathematics: N(μ, σ²) denotes the normal distribution with expected value μ and variance σ². The condition that it has independent increments means that if 0 ≤ s1 < t1 ≤ s2 < t2 then Wt1 − Ws1 and Wt2 − Ws2 are independent random variables. In addition, for some filtration Ft, Wt is Ft-measurable for all t ≥ 0. An alternative characterisation of the Wiener process is the so-called Lévy characterisation, which says that the Wiener process is an almost surely continuous martingale with W0 = 0 and quadratic variation [Wt, Wt] = t. A third characterisation is that the Wiener process has a spectral representation as a sine series whose coefficients are independent N(0, 1) random variables. This representation can be obtained using the Kosambi–Karhunen–Loève theorem. Mathematics: The Wiener process can be constructed as the scaling limit of a random walk, or other discrete-time stochastic processes with stationary independent increments. This is known as Donsker's theorem. Like the random walk, the Wiener process is recurrent in one or two dimensions (meaning that it returns almost surely to any fixed neighborhood of the origin infinitely often) whereas it is not recurrent in dimensions three and higher. Unlike the random walk, it is scale invariant. Mathematics: The time evolution of the position of the Brownian particle itself can be described approximately by a Langevin equation, an equation which involves a random force field representing the effect of the thermal fluctuations of the solvent on the Brownian particle. On long timescales, the mathematical Brownian motion is well described by a Langevin equation. On small timescales, inertial effects are prevalent in the Langevin equation. However, the mathematical Brownian motion is exempt from such inertial effects.
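The scaling-limit construction can be illustrated numerically. The sketch below (an empirical check under assumed grid sizes, not a proof) rescales a simple ±1 random walk as in Donsker's theorem and tests two of the defining properties listed above: Var(W_t) ≈ t and independent increments.

```python
import numpy as np

# Donsker-style construction: rescale a +/-1 random walk,
#   W_n(t) = S_floor(n t) / sqrt(n),
# which converges in law to the Wiener process on [0, 1].
rng = np.random.default_rng(2)

n = 10_000                # walk steps mapped onto t in [0, 1]
paths = 1_000
S = np.cumsum(rng.choice([-1.0, 1.0], size=(paths, n)), axis=1)
W = S / np.sqrt(n)        # W[:, k] approximates W_t at t = (k + 1) / n

i, j = n // 4 - 1, n // 2 - 1          # grid indices for t = 0.25, t = 0.50
inc1, inc2 = W[:, i], W[:, j] - W[:, i]

print(np.var(W[:, j]))                 # ~0.50, matching Var(W_t) = t
print(np.var(inc2))                    # ~0.25, matching Var = t - s
print(np.corrcoef(inc1, inc2)[0, 1])   # ~0, consistent with independence
```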
Inertial effects have to be considered in the Langevin equation; otherwise the equation becomes singular, so that simply removing the inertia term from this equation would not yield an exact description, but rather a singular behavior in which the particle doesn't move at all. Mathematics: Statistics The Brownian motion can be modeled by a random walk. In the general case, Brownian motion is a Markov process and described by stochastic integral equations. Lévy characterisation The French mathematician Paul Lévy proved the following theorem, which gives a necessary and sufficient condition for a continuous Rⁿ-valued stochastic process X to actually be n-dimensional Brownian motion. Hence, Lévy's condition can actually be used as an alternative definition of Brownian motion. Mathematics: Let X = (X1, ..., Xn) be a continuous stochastic process on a probability space (Ω, Σ, P) taking values in Rⁿ. Then the following are equivalent: X is a Brownian motion with respect to P, i.e., the law of X with respect to P is the same as the law of an n-dimensional Brownian motion, i.e., the push-forward measure X∗(P) is classical Wiener measure on C0([0, +∞); Rⁿ). Mathematics: both X is a martingale with respect to P (and its own natural filtration); and for all 1 ≤ i, j ≤ n, Xi(t)Xj(t) − δij t is a martingale with respect to P (and its own natural filtration), where δij denotes the Kronecker delta. Mathematics: Spectral content The spectral content of a stochastic process Xt can be found from the power spectral density, formally defined as $S(\omega) = \lim_{T\to\infty} \frac{1}{T}\, E\left[\left|\int_0^T e^{i\omega t} X_t\,dt\right|^2\right]$, where E stands for the expected value. The power spectral density of Brownian motion is found to be $S_{BM}(\omega) = \frac{4D}{\omega^2}$, where D is the diffusion coefficient of Xt. For naturally occurring signals, the spectral content can be found from the power spectral density of a single realization, with finite available time, i.e., $S^{(1)}(\omega, T) = \frac{1}{T}\left|\int_0^T e^{i\omega t} X_t\,dt\right|^2$, which for an individual realization of a Brownian motion trajectory is found to have expected value μBM(ω, T) and variance σBM²(ω, T). For sufficiently long realization times, the expected value of the power spectrum of a single trajectory converges to the formally defined power spectral density S(ω), but its coefficient of variation γ = σ/μ tends to √5/2. This implies the distribution of S⁽¹⁾(ω, T) is broad even in the infinite time limit. Mathematics: Riemannian manifold The infinitesimal generator (and hence characteristic operator) of a Brownian motion on Rⁿ is easily calculated to be ½Δ, where Δ denotes the Laplace operator. In image processing and computer vision, the Laplacian operator has been used for various tasks such as blob and edge detection. This observation is useful in defining Brownian motion on an m-dimensional Riemannian manifold (M, g): a Brownian motion on M is defined to be a diffusion on M whose characteristic operator A in local coordinates xi, 1 ≤ i ≤ m, is given by ½ΔLB, where ΔLB is the Laplace–Beltrami operator given in local coordinates by $\Delta_{\mathrm{LB}} = \frac{1}{\sqrt{\det g}} \sum_{i=1}^m \frac{\partial}{\partial x^i}\!\left(\sqrt{\det g}\,\sum_{j=1}^m g^{ij}\,\frac{\partial}{\partial x^j}\right)$, where [g^{ij}] = [g_{ij}]^{−1} in the sense of the inverse of a square matrix. Narrow escape: The narrow escape problem is a ubiquitous problem in biology, biophysics and cellular biology which has the following formulation: a Brownian particle (ion, molecule, or protein) is confined to a bounded domain (a compartment or a cell) by a reflecting boundary, except for a small window through which it can escape. The narrow escape problem is that of calculating the mean escape time. This time diverges as the window shrinks, thus rendering the calculation a singular perturbation problem.
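The finite-time single-realization spectrum defined in the spectral content discussion above can be estimated directly from simulated trajectories. The sketch below (illustrative parameters, and a plain Riemann-sum discretization of the integral) averages $S^{(1)}(\omega,T)$ over an ensemble and compares it with $4D/\omega^2$; ωT is kept large so that finite-T corrections are small under these assumptions.

```python
import numpy as np

# Estimate S1(omega, T) = (1/T) | integral_0^T exp(i omega t) X_t dt |^2
# for simulated Brownian trajectories and compare the ensemble average
# with S(omega) = 4 D / omega^2. All parameters are illustrative.
rng = np.random.default_rng(3)

D, T, dt = 1.0, 10.0, 1e-3
n = int(T / dt)
t = dt * np.arange(n)
n_traj = 400

# Brownian paths: cumulative sums of N(0, 2 D dt) increments.
X = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_traj, n)), axis=1)

for w in 2 * np.pi * np.array([5.0, 10.0, 20.0]):          # angular frequencies
    integral = (X * np.exp(1j * w * t)).sum(axis=1) * dt    # Riemann sum
    S1 = np.abs(integral) ** 2 / T                          # one value per path
    print(w, S1.mean(), 4 * D / w**2)    # ensemble mean vs 4 D / omega^2
```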
**Oxoeicosanoid** Oxoeicosanoid: The oxoeicosanoids are nonclassic eicosanoids derived from arachidonic acid (AA). For example, a lipoxygenase produces 5-HETE from AA; a dehydrogenase then produces 5-oxo-eicosatetraenoic acid, an oxoeicosanoid, from 5-HETE. They are similar to the leukotrienes in their actions, but they act via a different receptor.
**Beta-Zearalenol** Beta-Zearalenol: β-Zearalenol is a nonsteroidal estrogen of the resorcylic acid lactone group, related to the mycoestrogens found in Fusarium spp. It is the β epimer of α-zearalenol, and along with α-zearalenol it is a major metabolite of zearalenone, formed mainly in the liver but also to a lesser extent in the intestines during first-pass metabolism. In humans, a relatively high proportion of α-zearalenol is formed from zearalenone compared to β-zearalenol. As an estrogen, β-zearalenol is about as potent as, or slightly less potent than, zearalenone.
**International Conference on Mobile Computing and Networking** International Conference on Mobile Computing and Networking: MobiCom, the International Conference on Mobile Computing and Networking, is a series of annual conferences sponsored by ACM SIGMOBILE dedicated to addressing the challenges in the areas of mobile computing and wireless and mobile networking. Although no formal rating system for computer networking conferences exists, MobiCom is generally considered to be the leading conference in these areas, and it has been ranked the fifth highest-impact venue in all of computer science. The acceptance rate of MobiCom is typically around 10%, meaning that only about one tenth of all submitted papers pass peer review. According to SIGMOBILE, "the MobiCom conference series serves as the premier international forum addressing networks, systems, algorithms, and applications that support the symbiosis of mobile computers and wireless networks. MobiCom is a highly selective conference focusing on all issues in mobile computing and wireless and mobile networking at the link layer and above." MobiCom Conferences have been held at the following locations: MobiCom 2020, London, UK, 14–18 September 2020 MobiCom 2019, Los Cabos, Mexico, 21–25 October 2019 MobiCom 2018, New Delhi, India, 29 October–2 November 2018 MobiCom 2017, Snowbird, United States, 16–20 October 2017 MobiCom 2016, New York City, United States, 3–7 October 2016 MobiCom 2015, Paris, France, 7–11 September 2015 MobiCom 2014, Maui, Hawaii, United States, 7–11 September 2014 MobiCom 2013, Miami, Florida, United States, 30 September–4 October 2013 MobiCom 2012, Istanbul, Turkey, 22–26 August 2012 MobiCom 2011, Las Vegas, Nevada, United States, 19–23 September 2011 MobiCom 2010, Chicago, Illinois, United States, 20–24 September 2010 MobiCom 2009, Beijing, China, 20–25 September 2009 MobiCom 2008, San Francisco, California, United States, 13–19 September 2008 MobiCom 2007, Montreal, Quebec, Canada, 9–14 September 2007 MobiCom 2006, Los Angeles, California, United States, 23–29 September 2006 MobiCom 2005, Cologne, Germany, 28 August–2 September 2005 MobiCom 2004, Philadelphia, Pennsylvania, United States, 26 September–1 October 2004 MobiCom 2003, San Diego, California, United States, 14–19 September 2003 MobiCom 2002, Atlanta, Georgia, United States, 23–26 September 2002 MobiCom 2001, Rome, Italy, 16–21 July 2001 MobiCom 2000, Boston, Massachusetts, United States, 6–11 August 2000 MobiCom '99, Seattle, Washington, United States, 15–20 August 1999 MobiCom '98, Dallas, Texas, United States, 25–30 October 1998 MobiCom '97, Budapest, Hungary, 26–30 September 1997 MobiCom '96, Rye, New York, United States, 10–12 November 1996 MobiCom '95, Berkeley, California, United States, 13–15 November 1995
**H3K56ac** H3K56ac: H3K56ac is an epigenetic modification to the DNA packaging protein Histone H3. It is a mark that indicates the acetylation at the 56th lysine residue of the histone H3 protein. It is a covalent modification known as a mark of newly replicated chromatin as well as of replication-independent histone replacement. H3K56ac is important for chromatin remodeling and serves as a marker of new nucleosomes during DNA replication, but its role in the cell cycle is debated. Lysine 56 is located at the amino-terminal αN-helix, close to the site where the DNA enters and exits the nucleosome. Findings from studies in yeast might not apply to mammals: mammalian cells do not express HATs with high specificity for K56. Sirtuins can catalyze the removal of the acetyl group from K56. H3K56ac levels are elevated in cancer and pluripotent cells. TRIM66 reads unmodified H3R2K4 and H3K56ac to respond to DNA damage. Lysine acetylation and deacetylation: Proteins are typically acetylated on lysine residues, and this reaction relies on acetyl-coenzyme A as the acetyl group donor. Lysine acetylation and deacetylation: In histone acetylation and deacetylation, histone proteins are acetylated and deacetylated on lysine residues in the N-terminal tail as part of gene regulation. Typically, these reactions are catalyzed by enzymes with histone acetyltransferase (HAT) or histone deacetylase (HDAC) activity, although HATs and HDACs can modify the acetylation status of non-histone proteins as well. The regulation of transcription factors, effector proteins, molecular chaperones, and cytoskeletal proteins by acetylation and deacetylation is a significant post-translational regulatory mechanism. These regulatory mechanisms are analogous to phosphorylation and dephosphorylation by the action of kinases and phosphatases. Not only can the acetylation state of a protein modify its activity, but there has been recent suggestion that this post-translational modification may also crosstalk with phosphorylation, methylation, ubiquitination, sumoylation, and others for dynamic control of cellular signaling. In the field of epigenetics, histone acetylation (and deacetylation) have been shown to be important mechanisms in the regulation of gene transcription. Histones, however, are not the only proteins regulated by posttranslational acetylation. Nomenclature: The name H3K56ac indicates acetylation of the lysine residue at position 56 of the histone H3 protein: H3 denotes the histone H3 family, K56 the lysine at the 56th position from the N-terminus, and ac the acetyl mark. Histone modifications: The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal ends of these histones contribute to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of post-translational modifications, such as the one seen in H3K56ac. Epigenetic implications: The post-translational modification of histone tails by either histone modifying complexes or chromatin remodelling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region.
The current understanding and interpretation of histones comes from two large-scale projects: ENCODE and the Epigenomic Roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping the interactions of different proteins and/or histone modifications together. Epigenetic implications: Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, and an emphasis was placed on histone modification relevance. A look into the data obtained led to the definition of chromatin states based on histone modifications. The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell-specific gene regulation. H3K56ac: H3K56ac sits on the lateral surface of the nucleosome, close to the DNA entry/exit site, where it interacts with the DNA. H3T45 phosphorylation promotes H3K56 acetylation. Phosphorylation of the nucleosome DNA entry-exit region improves access to DNA binding complexes, and the combination of phosphorylation and acetylation has the ability to alter DNA accessibility to transcription regulatory complexes dramatically. Methods: The histone mark acetylation can be detected in a variety of ways: 1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. 2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes are seen to have enrichment of sequences. 3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses hyperactive Tn5 transposase to highlight nucleosome localisation.
**Eutectic system** Eutectic system: A eutectic system or eutectic mixture ( yoo-TEK-tik) is a homogeneous mixture that has a melting point lower than those of the constituents. The lowest possible melting point over all of the mixing ratios of the constituents is called the eutectic temperature. On a phase diagram, the eutectic temperature is seen as the eutectic point. Non-eutectic mixture ratios would have different melting temperatures for their different constituents, since one component's lattice will melt at a lower temperature than the other's. Conversely, as a non-eutectic mixture cools down, each of its components would solidify (form a lattice) at a different temperature, until the entire mass is solid. Eutectic system: Not all binary alloys have eutectic points, since the valence electrons of the component species are not always compatible, in any mixing ratio, to form a new type of joint crystal lattice. For example, in the silver-gold system the melt temperature (liquidus) and freeze temperature (solidus) "meet at the pure element endpoints of the atomic ratio axis while slightly separating in the mixture region of this axis". The term eutectic was coined in 1884 by British physicist and chemist Frederick Guthrie (1833–1886). The word originates from Greek εὐ- (eû) 'well' and τῆξῐς (têxis) 'melting'. Eutectic phase transition: The eutectic solidification is defined as follows: $\text{Liquid} \xrightarrow[\text{cooling}]{\text{eutectic temperature}} \alpha \text{ solid solution} + \beta \text{ solid solution}.$ This type of reaction is an invariant reaction, because it is in thermal equilibrium; another way to define this is that the change in Gibbs free energy equals zero. Tangibly, this means the liquid and two solid solutions all coexist at the same time and are in chemical equilibrium. There is also a thermal arrest for the duration of the change of phase, during which the temperature of the system does not change. The resulting solid macrostructure from a eutectic reaction depends on a few factors, with the most important factor being how the two solid solutions nucleate and grow. The most common structure is a lamellar structure, but other possible structures include rodlike, globular, and acicular. Non-eutectic compositions: Compositions of eutectic systems that are not at the eutectic point can be classified as hypoeutectic or hypereutectic. Hypoeutectic compositions are those with a smaller percent composition of species β and a greater composition of species α than the eutectic composition (E), while hypereutectic solutions are characterized as those with a higher composition of species β and a lower composition of species α than the eutectic composition. As the temperature of a non-eutectic composition is lowered, the liquid mixture will precipitate one component of the mixture before the other. In a hypereutectic solution, there will be a proeutectic phase of species β, whereas a hypoeutectic solution will have a proeutectic α phase. Types: Alloys Eutectic alloys have two or more materials and have a eutectic composition. When a non-eutectic alloy solidifies, its components solidify at different temperatures, exhibiting a plastic melting range. Conversely, when a well-mixed, eutectic alloy melts, it does so at a single, sharp temperature. The various phase transformations that occur during the solidification of a particular alloy composition can be understood by drawing a vertical line from the liquid phase to the solid phase on the phase diagram for that alloy.
Types: Some uses include:
- NEMA eutectic alloy overload relays for electrical protection of 3-phase motors for pumps, fans, conveyors, and other factory process equipment.
- Eutectic alloys for soldering: both traditional alloys composed of lead (Pb) and tin (Sn), sometimes with additional silver (Ag) or gold (Au) (especially the Sn63Pb37 and Sn62Pb36Ag2 alloy formulas for electronics), and newer lead-free soldering alloys, in particular ones composed of tin (Sn), silver (Ag), and copper (Cu), such as Sn96.5Ag3.5.
- Casting alloys, such as aluminium-silicon and cast iron (at the composition of 4.3% carbon in iron, producing an austenite-cementite eutectic).
- Silicon chips are bonded to gold-plated substrates through a silicon-gold eutectic by the application of ultrasonic energy to the chip; see eutectic bonding.
- Brazing, where diffusion can remove alloying elements from the joint, so that eutectic melting is only possible early in the brazing process.
- Temperature response, e.g., Wood's metal and Field's metal for fire sprinklers.
- Non-toxic mercury replacements, such as galinstan.
- Experimental glassy metals, with extremely high strength and corrosion resistance.
- Eutectic alloys of sodium and potassium (NaK) that are liquid at room temperature and used as coolant in experimental fast-neutron nuclear reactors.
Types: Others
- Sodium chloride and water form a eutectic mixture whose eutectic point is −21.2 °C and 23.3% salt by mass. The eutectic nature of salt and water is exploited when salt is spread on roads to aid snow removal, or mixed with ice to produce low temperatures (for example, in traditional ice cream making).
- Ethanol–water has an unusually biased eutectic point, i.e. it is close to pure ethanol, which sets the maximum proof obtainable by fractional freezing.
- "Solar salt", 60% NaNO3 and 40% KNO3, forms a eutectic molten salt mixture which is used for thermal energy storage in concentrated solar power plants. To reduce the eutectic melting point in the solar molten salts, calcium nitrate is used in the following proportion: 42% Ca(NO3)2, 43% KNO3, and 15% NaNO3.
- Lidocaine and prilocaine—both are solids at room temperature—form a eutectic that is an oil with a 16 °C (61 °F) melting point that is used in eutectic mixture of local anesthetic (EMLA) preparations.
- Menthol and camphor, both solids at room temperature, form a eutectic that is a liquid at room temperature in the following proportions: 8:2, 7:3, 6:4, and 5:5. Both substances are common ingredients in pharmacy extemporaneous preparations.
- Minerals may form eutectic mixtures in igneous rocks, giving rise to characteristic intergrowth textures exhibited, for example, by granophyre.
- Some inks are eutectic mixtures, allowing inkjet printers to operate at lower temperatures.
- Choline chloride produces eutectic mixtures with many natural products such as citric acid, malic acid and sugars. These liquid mixtures can be used, for example, to obtain antioxidant and antidiabetic extracts from natural products.
Strengthening mechanisms: Alloys The primary strengthening mechanism of the eutectic structure in metals is composite strengthening (see strengthening mechanisms of materials). This deformation mechanism works through load transfer between the two constituent phases, where the more compliant phase transfers stress to the stiffer phase. By taking advantage of the strength of the stiff phase and the ductility of the compliant phase, the overall toughness of the material increases.
As the composition is varied to either hypoeutectic or hypereutectic formations, the load transfer mechanism becomes more complex, as there is now load transfer between the eutectic phase and the secondary phase as well as load transfer within the eutectic phase itself. Strengthening mechanisms: A second tunable strengthening mechanism of eutectic structures is the spacing of the secondary phase. By changing the spacing of the secondary phase, the fraction of contact between the two phases through shared phase boundaries is also changed. By decreasing the spacing of the eutectic phase, creating a fine eutectic structure, more surface area is shared between the two constituent phases, resulting in more effective load transfer. On the micro-scale, the additional boundary area acts as a barrier to dislocations, further strengthening the material. As a result of this strengthening mechanism, coarse eutectic structures tend to be less stiff but more ductile, while fine eutectic structures are stiffer but more brittle. The spacing of the eutectic phase can be controlled during processing, as it is directly related to the cooling rate during solidification of the eutectic structure. For example, for a simple lamellar eutectic structure, the minimal lamellae spacing is

$\lambda^* = \frac{2 \gamma V_m T_E}{\Delta H \, \Delta T_0}$

where γ is the surface energy of the two-phase boundary, $V_m$ is the molar volume of the eutectic phase, $T_E$ is the solidification temperature of the eutectic phase, ΔH is the enthalpy of formation of the eutectic phase, and $\Delta T_0$ is the undercooling of the material. So, by altering the undercooling, and by extension the cooling rate, the minimal achievable spacing of the secondary phase is controlled. Strengthening mechanisms: Strengthening metallic eutectic phases to resist deformation at high temperatures (see creep deformation) is more convoluted, as the primary deformation mechanism changes depending on the level of stress applied. At high temperatures, where deformation is dominated by dislocation movement, the strengthening from load transfer and secondary phase spacing remains, as these continue to resist dislocation motion. At lower stresses, where Nabarro–Herring creep is dominant, the shape and size of the eutectic phase structure play a significant role in material deformation, as they affect the available boundary area for vacancy diffusion to occur. Other critical points: Eutectoid When the solution above the transformation point is solid, rather than liquid, an analogous eutectoid transformation can occur. For instance, in the iron-carbon system, the austenite phase can undergo a eutectoid transformation to produce ferrite and cementite, often in lamellar structures such as pearlite and bainite. This eutectoid point occurs at 723 °C (1,333 °F) and 0.76 wt% carbon. Other critical points: Peritectoid A peritectoid transformation is a type of isothermal reversible reaction that has two solid phases reacting with each other upon cooling of a binary, ternary, ..., n-ary alloy to create a completely different and single solid phase. The reaction plays a key role in the order and decomposition of quasicrystalline phases in several alloy types. A similar structural transition is also predicted for rotating columnar crystals. Other critical points: Peritectic Peritectic transformations are also similar to eutectic reactions. Here, a liquid and solid phase of fixed proportions react at a fixed temperature to yield a single solid phase.
Since the solid product forms at the interface between the two reactants, it can form a diffusion barrier and generally causes such reactions to proceed much more slowly than eutectic or eutectoid transformations. Because of this, when a peritectic composition solidifies it does not show the lamellar structure that is found with eutectic solidification. Other critical points: Such a transformation exists in the iron-carbon system, as seen near the upper-left corner of the figure. It resembles an inverted eutectic, with the δ phase combining with the liquid to produce pure austenite at 1,495 °C (2,723 °F) and 0.17% carbon. At the peritectic decomposition temperature the compound, rather than melting, decomposes into another solid compound and a liquid. The proportion of each is determined by the lever rule. In the Al-Au phase diagram, for example, it can be seen that only two of the phases melt congruently, AuAl2 and Au2Al, while the rest peritectically decompose. Eutectic calculation: The composition and temperature of a eutectic can be calculated from the enthalpy and entropy of fusion of each component. The Gibbs free energy G depends on its own differential:

$G = H - TS \;\Rightarrow\; \begin{cases} H = G + TS \\ \left(\frac{\partial G}{\partial T}\right)_P = -S \end{cases} \;\Rightarrow\; H = G - T\left(\frac{\partial G}{\partial T}\right)_P.$

Thus, the derivative of G/T at constant pressure is calculated by the following equation:

$\left(\frac{\partial (G/T)}{\partial T}\right)_P = \frac{1}{T}\left(\frac{\partial G}{\partial T}\right)_P - \frac{G}{T^2} = -\frac{1}{T^2}\left(G - T\left(\frac{\partial G}{\partial T}\right)_P\right) = -\frac{H}{T^2}.$

The chemical potential $\mu_i$ is calculated if we assume that the activity is equal to the concentration:

$\mu_i = \mu_i^\circ + RT \ln a_i \approx \mu_i^\circ + RT \ln x_i.$

At equilibrium, $\mu_i = 0$, thus $\mu_i^\circ$ is obtained as

$\mu_i^\circ = -RT \ln x_i.$

Using the relation above and integrating gives

$\ln x_i = -\frac{H_i^\circ}{RT} + K.$

The integration constant K may be determined for a pure component with a melting temperature $T_i^\circ$ and an enthalpy of fusion $H_i^\circ$:

$x_i = 1 \;\Rightarrow\; T = T_i^\circ \;\Rightarrow\; K = \frac{H_i^\circ}{R T_i^\circ}.$

We obtain a relation that determines the molar fraction as a function of the temperature for each component:

$\ln x_i = -\frac{H_i^\circ}{RT} + \frac{H_i^\circ}{R T_i^\circ}.$

The mixture of n components is described by the system

$\ln x_i + \frac{H_i^\circ}{RT} - \frac{H_i^\circ}{R T_i^\circ} = 0, \quad i = 1, \dots, n-1,$
$\ln\left(1 - \sum_{i=1}^{n-1} x_i\right) + \frac{H_n^\circ}{RT} - \frac{H_n^\circ}{R T_n^\circ} = 0,$

which can be solved numerically, for example by iteration or Newton's method.
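As an illustration of how this system can be solved in the binary case (n = 2), the sketch below uses made-up enthalpies of fusion and melting points rather than data for any real alloy, and assumes ideal-solution behavior with a simple eutectic topology; it finds the temperature at which the two liquidus branches cross, i.e. where x_A + x_B = 1:

```python
import numpy as np
from scipy.optimize import brentq

R = 8.314  # gas constant, J/(mol*K)

def liquidus_x(T, H, Tm):
    """Mole fraction of a component on its liquidus at temperature T,
    from ln x = -H/(R T) + H/(R Tm) (ideal-solution approximation)."""
    return np.exp(-H / (R * T) + H / (R * Tm))

# Illustrative (made-up) enthalpies of fusion [J/mol] and melting points [K]
H_A, Tm_A = 10_000.0, 600.0
H_B, Tm_B = 12_000.0, 500.0

# At the eutectic temperature the two liquidus branches meet: x_A + x_B = 1
f = lambda T: liquidus_x(T, H_A, Tm_A) + liquidus_x(T, H_B, Tm_B) - 1.0
T_e = brentq(f, 100.0, min(Tm_A, Tm_B) - 1e-6)   # bracketed root search
x_A = liquidus_x(T_e, H_A, Tm_A)
print(f"eutectic temperature ~ {T_e:.1f} K, composition x_A ~ {x_A:.3f}")
```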
**Tversky index** Tversky index: The Tversky index, named after Amos Tversky, is an asymmetric similarity measure on sets that compares a variant to a prototype. The Tversky index can be seen as a generalization of the Sørensen–Dice coefficient and the Jaccard index. For sets X and Y the Tversky index is a number between 0 and 1 given by

$S(X,Y) = \frac{|X \cap Y|}{|X \cap Y| + \alpha|X \setminus Y| + \beta|Y \setminus X|}$

Here, $X \setminus Y$ denotes the relative complement of Y in X. Further, $\alpha, \beta \ge 0$ are parameters of the Tversky index. Setting $\alpha = \beta = 1$ produces the Tanimoto coefficient; setting $\alpha = \beta = 0.5$ produces the Sørensen–Dice coefficient. Tversky index: If we consider X to be the prototype and Y to be the variant, then α corresponds to the weight of the prototype and β corresponds to the weight of the variant. Tversky measures with $\alpha + \beta = 1$ are of special interest. Because of the inherent asymmetry, the Tversky index does not meet the criteria for a similarity metric. However, if symmetry is needed, a variant of the original formulation has been proposed using max and min functions:

$S(X,Y) = \frac{|X \cap Y|}{|X \cap Y| + \beta\left(\alpha a + (1-\alpha) b\right)}$, with $a = \min(|X \setminus Y|, |Y \setminus X|)$ and $b = \max(|X \setminus Y|, |Y \setminus X|)$.

This formulation also re-arranges the parameters α and β. Thus, α controls the balance between $|X \setminus Y|$ and $|Y \setminus X|$ in the denominator. Similarly, β controls the effect of the symmetric difference $|X \triangle Y|$ versus $|X \cap Y|$ in the denominator.
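A direct translation of the first formula into code may help; this is a small illustrative sketch (the return value for two empty sets is a convention chosen here, not part of the definition):

```python
def tversky_index(x: set, y: set, alpha: float = 0.5, beta: float = 0.5) -> float:
    """Tversky index S(X, Y) = |X∩Y| / (|X∩Y| + alpha*|X\\Y| + beta*|Y\\X|)."""
    inter = len(x & y)
    x_only = len(x - y)   # |X \ Y|, weighted by alpha (the prototype side)
    y_only = len(y - x)   # |Y \ X|, weighted by beta (the variant side)
    denom = inter + alpha * x_only + beta * y_only
    return inter / denom if denom else 1.0  # convention: two empty sets match

x = {"a", "b", "c", "d"}
y = {"c", "d", "e"}
print(tversky_index(x, y, 1, 1))      # alpha = beta = 1   -> Tanimoto/Jaccard: 2/5
print(tversky_index(x, y, 0.5, 0.5))  # alpha = beta = 0.5 -> Sørensen–Dice: 2/3.5
```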
**Geostrophic wind** Geostrophic wind: In atmospheric science, geostrophic flow is the theoretical wind that would result from an exact balance between the Coriolis force and the pressure gradient force. This condition is called geostrophic equilibrium or geostrophic balance (also known as geostrophy). The geostrophic wind is directed parallel to isobars (lines of constant pressure at a given height). This balance seldom holds exactly in nature. The true wind almost always differs from the geostrophic wind due to other forces such as friction from the ground. Thus, the actual wind would equal the geostrophic wind only if there were no friction (e.g. above the atmospheric boundary layer) and the isobars were perfectly straight. Despite this, much of the atmosphere outside the tropics is close to geostrophic flow much of the time and it is a valuable first approximation. Geostrophic flow in air or water is a zero-frequency inertial wave. Origin: A useful heuristic is to imagine air starting from rest, experiencing a force directed from areas of high pressure toward areas of low pressure, called the pressure gradient force. If the air began to move in response to that force, however, the Coriolis "force" would deflect it, to the right of the motion in the northern hemisphere or to the left in the southern hemisphere. As the air accelerated, the deflection would increase until the Coriolis force's strength and direction balanced the pressure gradient force, a state called geostrophic balance. At this point, the flow is no longer moving from high to low pressure, but instead moves along isobars. Geostrophic balance helps to explain why, in the northern hemisphere, low-pressure systems (or cyclones) spin counterclockwise and high-pressure systems (or anticyclones) spin clockwise, and the opposite in the southern hemisphere. Geostrophic currents: Flow of ocean water is also largely geostrophic. Just as multiple weather balloons that measure pressure as a function of height in the atmosphere are used to map the atmospheric pressure field and infer the geostrophic wind, measurements of density as a function of depth in the ocean are used to infer geostrophic currents. Satellite altimeters are also used to measure sea surface height anomaly, which permits a calculation of the geostrophic current at the surface. Limitations of the geostrophic approximation: The effect of friction, between the air and the land, breaks the geostrophic balance. Friction slows the flow, lessening the effect of the Coriolis force. As a result, the pressure gradient force has a greater effect and the air still moves from high pressure to low pressure, though with great deflection. This explains why high-pressure system winds radiate out from the center of the system, while low-pressure systems have winds that spiral inwards. Limitations of the geostrophic approximation: The geostrophic wind neglects frictional effects, which is usually a good approximation for the synoptic-scale instantaneous flow in the midlatitude mid-troposphere. Although ageostrophic terms are relatively small, they are essential for the time evolution of the flow and in particular are necessary for the growth and decay of storms. Quasi-geostrophic and semigeostrophic theory are used to model flows in the atmosphere more widely. These theories allow for divergence to take place and for weather systems to then develop.
Formulation: Newton's second law can be written as follows if only the pressure gradient, gravity, and friction act on an air parcel, where bold symbols are vectors:

$\frac{D\mathbf{U}}{Dt} = -2\boldsymbol{\Omega} \times \mathbf{U} - \frac{1}{\rho}\nabla P + \mathbf{g} + \mathbf{F}_r$

Here U is the velocity field of the air, Ω is the angular velocity vector of the planet, ρ is the density of the air, P is the air pressure, $\mathbf{F}_r$ is the friction, g is the acceleration vector due to gravity, and D/Dt is the material derivative. Formulation: Locally this can be expanded in Cartesian coordinates, with a positive u representing an eastward direction and a positive v representing a northward direction. Neglecting friction and vertical motion, as justified by the Taylor–Proudman theorem, we have:

$\frac{du}{dt} = -\frac{1}{\rho}\frac{\partial P}{\partial x} + fv, \qquad \frac{dv}{dt} = -\frac{1}{\rho}\frac{\partial P}{\partial y} - fu, \qquad 0 = -g - \frac{1}{\rho}\frac{\partial P}{\partial z}$

with f = 2Ω sin φ the Coriolis parameter (approximately 10⁻⁴ s⁻¹, varying with latitude). Formulation: Assuming geostrophic balance, the system is stationary and the first two equations become:

$fv = \frac{1}{\rho}\frac{\partial P}{\partial x}, \qquad fu = -\frac{1}{\rho}\frac{\partial P}{\partial y}$

By substituting using the third equation above, we have:

$fv = -g\,\frac{\partial P/\partial x}{\partial P/\partial z} = g\frac{\partial Z}{\partial x}, \qquad fu = g\,\frac{\partial P/\partial y}{\partial P/\partial z} = -g\frac{\partial Z}{\partial y}$

with Z the height of the constant-pressure surface (geopotential height), satisfying

$\frac{\partial P}{\partial x}\,dx + \frac{\partial P}{\partial y}\,dy + \frac{\partial P}{\partial z}\,dZ = 0$

This leads us to the following result for the geostrophic wind components $(u_g, v_g)$:

$u_g = -\frac{g}{f}\frac{\partial Z}{\partial y}, \qquad v_g = \frac{g}{f}\frac{\partial Z}{\partial x}$

The validity of this approximation depends on the local Rossby number. It is invalid at the equator, because f is equal to zero there, and therefore generally not used in the tropics. Formulation: Other variants of the equation are possible; for example, the geostrophic wind vector can be expressed in terms of the gradient of the geopotential Φ on a surface of constant pressure:

$\mathbf{V}_g = \frac{\hat{\mathbf{k}}}{f} \times \nabla_p \Phi$
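To make the final formulas concrete, here is a brief numerical sketch (the height field, grid spacing, and constants are synthetic, chosen only for illustration) that evaluates $u_g$ and $v_g$ on a grid:

```python
import numpy as np

# Minimal sketch: geostrophic wind from a gridded geopotential-height field Z.
g = 9.81      # gravitational acceleration, m/s^2
f = 1e-4      # Coriolis parameter, a typical midlatitude value, s^-1

ny, nx = 50, 60
dx = dy = 100e3                                  # 100 km grid spacing, metres
y, x = np.mgrid[0:ny, 0:nx]
Z = 5500.0 + 50.0 * np.sin(2 * np.pi * x / nx)   # synthetic 500 hPa heights, m

dZdy, dZdx = np.gradient(Z, dy, dx)   # finite-difference gradients (y, then x)
u_g = -(g / f) * dZdy                 # eastward geostrophic wind component
v_g =  (g / f) * dZdx                 # northward geostrophic wind component
print(f"max |v_g| = {np.abs(v_g).max():.1f} m/s")
```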
**D. D. Crew** D. D. Crew: D. D. Crew (D. D. クルー) is a 1991 2D beat 'em up developed and released into arcades by Sega. Gameplay: D. D. Crew is similar to Capcom's Final Fight, an archetypal side-scrolling beat 'em up game. Up to four player characters move from left to right through each level (most of which are split into three or more scenes), fighting the enemy characters who appear, until they reach a confrontation with a stronger boss character at the end of the level. Once that boss is beaten, the players automatically move on to the next stage. Enemies appear from both sides of the screen and from out of doorways or entrances set into the background, and the player(s) must defeat all of them to progress. If the players try to simply travel through the levels without fighting, the screen will stop scrolling until all current enemies have been defeated, before allowing the players to continue. Enemies may move outside the confines of the screen, but players may not. Players can pick up a few weapons along the way, such as knives and grenades, as well as other items such as extra lives and "MAX" health points. Players also can pick up and toss enemies, either toward the ground or against other enemies. One unique feature is a counter that tells how many enemies a player has knocked out. Another unique feature is that players can perform dash attacks by pressing the joystick toward the left or right twice, then pressing the attack button while dashing. Reception: In Japan, Game Machine listed D.D. Crew in their September 1, 1991 issue as being the sixth most-successful table arcade unit of the month. British gaming magazine The One reviewed D.D. Crew in 1991, reviewing it alongside Vendetta, stating that "one will probably fade into insignificance at the expense of the other. If it was up to me, Konami's Vendetta would be the one to take the prizes." The One praises D.D. Crew's sprite size and "well-crafted" graphics; however, they call the gameplay "fine" but "all a bit sterile", stating that Vendetta has "a lot more atmosphere", and that its graphics, while smaller, are "much more imaginatively drawn - and the animations are smooth and inventive." Sinclair User gave D.D. Crew an overall score of 71%, complimenting the game's graphics and stating that overall the game was "nicely done" but that it "never quite captures the imagination." They also recommended Vendetta from Konami instead.
**Dramatic monologue** Dramatic monologue: Dramatic monologue is a type of poetry written in the form of a speech of an individual character. M.H. Abrams notes the following three features of the dramatic monologue as it applies to poetry: The single person, who is patently not the poet, utters the speech that makes up the whole of the poem, in a specific situation at a critical moment […]. Dramatic monologue: This person addresses and interacts with one or more other people; but we know of the auditors' presence, and what they say and do, only from clues in the discourse of the single speaker. The main principle controlling the poet's choice and formulation of what the lyric speaker says is to reveal to the reader, in a way that enhances its interest, the speaker's temperament and character. Types of dramatic monologue: One of the most important influences on the development of the dramatic monologue is Romantic poetry. However, the long, personal lyrics typical of the Romantic period are not dramatic monologues, in the sense that they do not, for the most part, imply a concentrated narrative. Poems such as William Wordsworth's Tintern Abbey and Percy Bysshe Shelley's Mont Blanc, to name two famous examples, offered a model of close psychological observation and philosophical or pseudo-philosophical inquiry described in a specific setting. The conversation poems of Samuel Taylor Coleridge are perhaps a better precedent. The genre was also developed by Felicia Hemans and Letitia Elizabeth Landon, beginning in the latter's case with her long poem The Improvisatrice. The novel and plays have also been important influences on the dramatic monologue, particularly as a means of characterization. Dramatic monologues are a way of expressing the views of a character and offering the audience greater insight into that character's feelings. Dramatic monologues can also be used in novels to tell stories, as in Mary Shelley's Frankenstein, and to implicate the audience in moral judgements, as in Albert Camus' The Fall and Mohsin Hamid's The Reluctant Fundamentalist. Examples: The Victorian period represented the high point of the dramatic monologue in English poetry. Alfred, Lord Tennyson's Ulysses, published in 1842, has been called the first true dramatic monologue. After Ulysses, Tennyson's most famous efforts in this vein are Tithonus, The Lotos-Eaters, and St. Simeon Stylites, all from the 1842 Poems; later monologues appear in other volumes, notably Idylls of the King. Matthew Arnold's Dover Beach and Stanzas from the Grande Chartreuse are famous, semi-autobiographical monologues. The former, usually regarded as the supreme expression of the growing scepticism of the mid-Victorian period, was published along with the latter in 1867's New Poems. Examples: Robert Browning produced his most famous work in this form. While My Last Duchess is the most famous of his monologues, the form dominated his writing career. The Ring and the Book, Fra Lippo Lippi, Caliban upon Setebos, Soliloquy of the Spanish Cloister and Porphyria's Lover, as well as the other poems in Men and Women, are just a handful of Browning's monologues. Other Victorian poets also used the form. Dante Gabriel Rossetti wrote several, including Jenny and The Blessed Damozel; Christina Rossetti wrote a number, including The Convent Threshold.
Augusta Webster's A Castaway, Circe, and The Happiest Girl in the World, Amy Levy's Xantippe and A Minor Poet, and Felicia Hemans's Arabella Stuart and Properzia Rossi are all exemplars of this technique. Algernon Charles Swinburne's Hymn to Proserpine has been called a dramatic monologue vaguely reminiscent of Browning's work. Some American poets have also written poems in the genre—famous examples include Edgar Allan Poe's "The Raven". Examples: Post-Victorian examples include William Butler Yeats's The Gift of Harun al-Rashid, Elizabeth Bishop's Crusoe in England, and T.S. Eliot's The Love Song of J. Alfred Prufrock and Gerontion. Sources: Howe, Elisabeth A. (1996). The Dramatic Monologue. Boston: Twayne Publishers. 166 pages. ISBN 0-8057-0969-X. Byron, Glennis (2003). Dramatic Monologue. New York: Routledge. 208 pages. ISBN 0-415-22937-5. Arco Publishing (2002). Arco Master the AP English Language & Composition Test 2003. New York: Arco. 288 pages. ISBN 0-7689-0991-0.
**Torque vectoring** Torque vectoring: Torque vectoring is a technology employed in automobile differentials that has the ability to vary the torque to each half-shaft with an electronic system, or in rail vehicles, which achieve the same using individually motored wheels. This method of power transfer has recently become popular in all-wheel drive vehicles. Some newer front-wheel drive vehicles also have a basic torque vectoring differential. As technology in the automotive industry improves, more vehicles are equipped with torque vectoring differentials. This allows the wheels to grip the road for better launch and handling. History: In 1996, Honda and Mitsubishi released sporty vehicles with torque vectoring systems. The torque vectoring idea builds on the basic principles of a standard differential. A torque vectoring differential performs basic differential tasks while also transmitting torque independently between wheels. This torque transferring ability improves handling and traction in almost any situation. Torque vectoring differentials were originally used in racing. Mitsubishi rally cars were some of the earliest to use the technology. The technology has slowly developed and is now being implemented in a small variety of production vehicles. The most common use of torque vectoring in automobiles today is in all-wheel drive vehicles. History: The flagship 1996 fifth-generation Honda Prelude was equipped with an Active Torque Transfer System (ATTS) torque-vectoring differential driving the front wheels; it was known in different markets as the Type S (Japan), VTi-S (Europe), and Type SH (North America). In essence, ATTS is a small automatic transmission coupled to the differential, with an electronic control unit actuating clutches to vary the torque output between each driven wheel. ATTS effectively counteracted the natural tendency of the front-engine, front-wheel-drive Prelude to understeer. Honda later developed the system into their Super Handling all-wheel-drive (SH-AWD) system by 2004, which improved handling by increasing torque to the outside wheels. History: At about the same time, the Lancer Evolution IV GSR was equipped with a similar Active Yaw Control (AYC) system in 1996. AYC was fitted to the rear wheels and similarly works to counteract understeer through a series of electronically controlled clutches that control torque output. The phrase "Torque Vectoring" was first used by Ricardo in 2006 in relation to their driveline technologies. Functional description: The idea and implementation of torque vectoring are both complex. The main goal of torque vectoring is to independently vary torque to each wheel. Differentials generally consist of only mechanical components. A torque vectoring differential requires an electronic monitoring system in addition to standard mechanical components. This electronic system tells the differential when and how to vary the torque. Due to the number of wheels that receive power, a front or rear wheel drive differential is less complex than an all-wheel drive differential. Functional description: The impact of torque distribution is the generation of yaw moment arising from longitudinal forces and changes to the lateral resistance generated by each tire. Applying more longitudinal force reduces the lateral resistance that can be generated. The specific driving condition dictates what the trade-off should be to either damp or excite yaw acceleration.
The function is independent of technology and could be achieved by driveline devices for a conventional powertrain, or with electrical torque sources. Functional description: Then comes the practical element: integration with brake-based stability functions, for both fun and safety. Functional description: Front/rear wheel drive Torque vectoring differentials on front- or rear-wheel drive vehicles are less complex, yet share many of the same benefits as all-wheel drive differentials. The differential only varies torque between two wheels. The electronic monitoring system only monitors two wheels, making it less complex. A front-wheel drive differential must take into account several factors. It must monitor the rotational and steering angle of the wheels. As these factors vary during driving, different forces are exerted on the wheels. The differential monitors these forces, and adjusts torque accordingly. Many front-wheel drive differentials can increase or decrease the torque transmitted to a certain wheel. This ability improves a vehicle's capability to maintain traction in poor weather conditions. When one wheel begins to slip, the differential can reduce the torque to that wheel, effectively braking the wheel. The differential also increases torque to the opposite wheel, helping balance the power output and keep the vehicle stable. A rear-wheel drive torque vectoring differential works similarly to a front-wheel drive differential. Functional description: All-wheel drive Most torque vectoring differentials are on all-wheel drive vehicles. A basic torque vectoring differential varies torque between the front and rear wheels. This means that, under normal driving conditions, the front wheels receive a set percentage of the engine torque, and the rear wheels receive the rest. If needed, the differential can transfer more torque between the front and rear wheels to improve vehicle performance. Functional description: For example, a vehicle might have a standard torque distribution of 90% to the front wheels and 10% to the rear. When necessary, the differential changes the distribution to 50/50. This new distribution spreads the torque more evenly between all four wheels. Having more even torque distribution increases the vehicle's traction. There are more advanced torque vectoring differentials as well. These differentials build on basic torque transfer between front and rear wheels. They add the capability to transfer torque between individual wheels. This provides an even more effective method of improving handling characteristics. The differential monitors each wheel independently, and distributes available torque to match current conditions. Functional description: Electric vehicles In electric vehicles all-wheel drive is typically implemented with two independent electric motors, one for each axle. In this case the torque vectoring between the front and rear axles is just a matter of electronically controlling the power distribution between the two motors, which can be done on a millisecond scale. In the case of EVs with three or four motors, even more precise torque vectoring can be applied electronically: millisecond-scale per-wheel torque control in the quad-motor case, and per-wheel control on one axle plus per-axle control on the other in the tri-motor case; a toy sketch of the underlying control idea follows below.
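As a rough illustration of the per-wheel control idea (not any manufacturer's actual algorithm; the gain and torque limits are invented for the example), a left/right torque split on a twin-motor axle can be driven by yaw-rate error:

```python
def torque_split(total_torque: float, yaw_rate_target: float,
                 yaw_rate_measured: float, k_yaw: float = 200.0,
                 max_delta: float = 300.0) -> tuple[float, float]:
    """Toy proportional torque-vectoring law for a twin-motor axle.
    Shifts torque toward the outside wheel when the car under-rotates
    (positive yaw-rate error) and vice versa. All numbers are illustrative."""
    error = yaw_rate_target - yaw_rate_measured              # rad/s
    delta = max(-max_delta, min(max_delta, k_yaw * error))   # N*m, clamped
    left = total_torque / 2 - delta / 2
    right = total_torque / 2 + delta / 2
    return left, right

# e.g. mild understeer: demand 0.30 rad/s of yaw, measure only 0.25 rad/s
print(torque_split(1000.0, 0.30, 0.25))  # -> (495.0, 505.0)
```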
Functional description: Torque vectoring can be even more effective if it is actuated through two electric motor drives located on the same axle, as this configuration can be used for shaping the vehicle's understeer characteristic and improving the transient response of the vehicle. The Tesla Cybertruck (scheduled for 2022) tri-motor model has one axle with two motors, while the Rivian R1T (in production in 2021) has two motors on each axle, front and rear. A special transmission unit was used in the experimental 2014 car MUTE of the Technical University of Munich, where the bigger motor provides the driving power and the smaller one the torque vectoring functionality. The detailed control system of the torque vectoring is described in the doctoral thesis of Dr.-Ing. Michael Graf. In the case of electric vehicles with four electric motor drives, the same total wheel torque and yaw moment can be generated through a near-infinite number of wheel torque distributions. Energy efficiency can be used as a criterion for allocating torque across the wheels. This approach is used in the Rivian R1T light-duty truck introduced in 2021. Functional description: Rail vehicles Research is taking place into using torque vectoring to actively steer railway wheelsets on the track. Claimed benefits include a drastic reduction of wear on both track and wheel and the opportunity to simplify or even eliminate the mechanically complex, heavy and bulky bogie. Stored Energy Technology Limited has built and successfully demonstrated their torque vectoring Actiwheel system, which employs a wheel hub motor of their own design. The German Aerospace Center unveiled a full-scale mockup of torque vectoring running gear intended for their Next Generation Train at InnoTrans 2022.
**Valinomycin** Valinomycin: Valinomycin is a naturally occurring dodecadepsipeptide used in the transport of potassium and as an antibiotic. Valinomycin is obtained from the cells of several Streptomyces species, S. fulvissimus being a notable one. Valinomycin: It is a member of the group of natural neutral ionophores because it does not have a residual charge. It consists of D- and L-valine (Val), D-alpha-hydroxyisovaleric acid, and L-lactic acid, with the structural units alternately bound via amide and ester bridges. Valinomycin is highly selective for potassium ions over sodium ions within the cell membrane. It functions as a potassium-specific transporter and facilitates the movement of potassium ions through lipid membranes "down" the electrochemical potential gradient. The stability constant K for the potassium-valinomycin complex is nearly 100,000 times larger than that of the sodium-valinomycin complex. Valinomycin: This difference is important for maintaining the selectivity of valinomycin for the transport of potassium ions (and not sodium ions) in biological systems. It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. Structure: Valinomycin is a dodecadepsipeptide, that is, it is made of twelve alternating amino acids and esters forming a macrocyclic molecule. The twelve carbonyl groups are essential for the binding of metal ions, and also for solvation in polar solvents. The isopropyl and methyl groups are responsible for solvation in nonpolar solvents. Structure: Along with its shape and size, this molecular duality is the main reason for its binding properties. K+ ions must give up their water of hydration to pass through the pore. K+ ions are octahedrally coordinated in a square bipyramidal geometry by six carbonyl groups from Val. The cavity accommodates an ion of radius 1.33 Å (that of K+); Na+, with its 0.95 Å radius, is significantly smaller than the channel, meaning that Na+ cannot form ionic bonds with the amino acids of the pore at an energy equivalent to that of the water molecules it gives up. This leads to a 10,000x selectivity for K+ ions over Na+. For polar solvents, valinomycin will mainly expose the carbonyls to the solvent, and in nonpolar solvents the isopropyl groups are located predominantly on the exterior of the molecule. This conformation changes when valinomycin is bound to a potassium ion. The molecule is "locked" into a conformation with the isopropyl groups on the exterior. It is not actually locked into this configuration, because the size of the molecule makes it highly flexible, but the potassium ion gives some degree of coordination to the macromolecule. Applications: Valinomycin was recently reported to be the most potent agent against severe acute respiratory syndrome coronavirus (SARS-CoV) in infected Vero E6 cells. Valinomycin acts as a nonmetallic isoforming agent in potassium-selective electrodes. This ionophore is used to study membrane vesicles, where it may be selectively applied by experimental design to reduce or eliminate the electrochemical gradient across a membrane.
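As a rough back-of-the-envelope estimate (an illustration, not a figure from the literature), the roughly 100,000-fold ratio of stability constants quoted above corresponds, at T ≈ 298 K, to a binding free-energy difference of

```latex
\Delta\Delta G^\circ = -RT \ln\frac{K_{\mathrm{K^+}}}{K_{\mathrm{Na^+}}}
  \approx -(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})\,\ln 10^{5}
  \approx -28.5\ \mathrm{kJ\,mol^{-1}},
```

i.e. complexation of K+ is favored over Na+ by roughly 28–29 kJ/mol, which is why transport through the membrane is essentially exclusive to K+.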
**C+-probability** C+-probability: In statistics, a c+-probability is the probability that a contrast variable obtains a positive value. C+-probability: Operationally, the c+-probability is defined as follows: if we get a random draw from each group (or factor level) and calculate the sampled value of the contrast variable based on the random draws, then the c+-probability is the chance that the sampled value of the contrast variable is greater than 0 when the random drawing process is repeated infinitely many times. The c+-probability is a probabilistic index accounting for the distributions of the compared groups (or factor levels). The c+-probability and the standardized mean of a contrast variable (SMCV) are two characteristics of a contrast variable, and there is a link between them. The SMCV and c+-probability provide a consistent interpretation of the strength of comparisons in contrast analysis. When only two groups are involved in a comparison, the c+-probability becomes the d+-probability, which is the probability that the difference of values from the two groups is positive. To some extent, the d+-probability (especially in independent situations) is equivalent to the well-established probabilistic index P(X > Y). Historically, the index P(X > Y) has been studied and applied in many areas. The c+-probability and d+-probability have been used for data analysis in high-throughput experiments and biopharmaceutical research.
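To make the definition concrete, here is a small Monte Carlo sketch (the group distributions and contrast coefficients are illustrative choices, not from the literature) that estimates a c+-probability by repeated random draws:

```python
import numpy as np

rng = np.random.default_rng(0)

def cplus_probability(groups, coefficients, n_draws=100_000):
    """Monte Carlo estimate of the c+-probability: draw one value per group,
    form the contrast V = sum(c_i * draw_i), and count how often V > 0.
    Groups are given as (mean, sd) pairs of illustrative normal distributions."""
    samples = np.column_stack([
        rng.normal(mu, sigma, n_draws) for mu, sigma in groups
    ])
    contrast = samples @ np.asarray(coefficients, dtype=float)
    return float((contrast > 0).mean())

# Two groups (a d+-probability): P(X - Y > 0) with X ~ N(1, 1), Y ~ N(0, 1)
print(cplus_probability([(1.0, 1.0), (0.0, 1.0)], [1.0, -1.0]))
# ~0.76, matching Phi(1/sqrt(2)) for independent normals
```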
**Demushkin group** Demushkin group: In mathematical group theory, a Demushkin group (also written as Demuškin or Demuskin) is a pro-p group G having certain properties relating to duality in group cohomology. More precisely, G must be such that the first cohomology group with coefficients in $\mathbb{F}_p = \mathbf{Z}/p\mathbf{Z}$ has finite rank, the second cohomology group has rank 1, and the cup product induces a non-degenerate pairing $H^1(G,\mathbb{F}_p) \times H^1(G,\mathbb{F}_p) \to H^2(G,\mathbb{F}_p)$. Such groups were introduced by Demuškin (1959). Demushkin group: Demushkin groups occur as the Galois groups of the maximal p-extensions of local number fields containing all p-th roots of unity.
**Positive airway pressure** Positive airway pressure: Positive airway pressure (PAP) is a mode of respiratory ventilation used in the treatment of sleep apnea. PAP ventilation is also commonly used for those who are critically ill in hospital with respiratory failure, in newborn infants (neonates), and for the prevention and treatment of atelectasis in patients with difficulty taking deep breaths. In these patients, PAP ventilation can prevent the need for tracheal intubation, or allow earlier extubation. Sometimes patients with neuromuscular diseases use this variety of ventilation as well. CPAP is an acronym for "continuous positive airway pressure", which was developed by Dr. George Gregory and colleagues in the neonatal intensive care unit at the University of California, San Francisco. A variation of the PAP system was developed by Professor Colin Sullivan at Royal Prince Alfred Hospital in Sydney, Australia, in 1981. The main difference between BiPAP and CPAP machines is that BiPAP machines have two pressure settings: the prescribed pressure for inhalation (IPAP), and a lower pressure for exhalation (EPAP). The dual settings allow the patient to get more air in and out of their lungs. Medical uses: The main indications for positive airway pressure are congestive heart failure and chronic obstructive pulmonary disease. There is some evidence of benefit for those with hypoxia and community-acquired pneumonia. PAP ventilation is often used for patients who have acute type 1 or 2 respiratory failure. Usually, PAP ventilation will be reserved for the subset of patients for whom oxygen delivered via a face mask is deemed insufficient or deleterious to health (see CO2 retention). Usually, patients on PAP ventilation will be closely monitored in an intensive care unit, high-dependency unit, coronary care unit or specialist respiratory unit. Medical uses: The most common conditions for which PAP ventilation is used in hospital are congestive cardiac failure and acute exacerbation of obstructive airway disease, most notably exacerbations of COPD and asthma. It is not used in cases where the airway may be compromised, or consciousness is impaired. CPAP is also used to assist premature babies with breathing in the NICU setting. Medical uses: The mask required to deliver CPAP must have an effective seal, and be held on very securely. The "nasal pillow" mask maintains its seal by being inserted slightly into the nostrils and being held in place by various straps around the head. Some full-face masks "float" on the face like a hovercraft, with thin, soft, flexible "curtains" ensuring less skin abrasion and allowing coughing and yawning. Some people may find wearing a CPAP mask uncomfortable or constricting: eyeglass wearers and bearded men may prefer the nasal-pillow type of mask. Breathing out against the positive pressure resistance (the expiratory positive airway pressure component, or EPAP) may also feel unpleasant to some patients. These factors lead to inability to continue treatment due to patient intolerance in about 20% of cases where it is initiated. Some machines have pressure relief technologies that make sleep therapy more comfortable by reducing pressure at the beginning of exhalation and returning to therapeutic pressure just before inhalation. The level of pressure relief is varied based on the patient's expiratory flow, making breathing out against the pressure less difficult. Those who have an anxiety disorder or claustrophobia are less likely to tolerate PAP treatment.
Sometimes medication will be given to assist with the anxiety caused by PAP ventilation. Medical uses: Unlike PAP used at home to splint the tongue and pharynx, PAP is used in hospital to improve the ability of the lungs to exchange oxygen and carbon dioxide, and to decrease the work of breathing (the energy expended moving air into and out of the alveoli). This is because: During inspiration, the inspiratory positive airway pressure, or IPAP, forces air into the lungs—thus less work is required from the respiratory muscles. Medical uses: The bronchioles and alveoli are prevented from collapsing at the end of expiration. If these small airways and alveoli are allowed to collapse, significant pressures are required to re-expand them. This can be explained using the Young–Laplace equation (which also explains why the hardest part of blowing up a balloon is the first breath). Medical uses: Entire regions of the lung that would otherwise be collapsed are forced and held open. This process is called recruitment. Usually these collapsed regions of lung will have some blood flow (although reduced). Because these areas of lung are not being ventilated, the blood passing through them is not able to efficiently exchange oxygen and carbon dioxide. This is called ventilation–perfusion (or V/Q) mismatch. The recruitment reduces ventilation–perfusion mismatch. Medical uses: The amount of air remaining in the lungs at the end of a breath is greater (this is called the functional residual capacity). The chest and lungs are therefore more expanded. From this more expanded resting position, less work is required to inspire. This is due to the non-linear compliance–volume curve of the lung. Disadvantages: A major issue with CPAP is non-adherence. Studies have shown that some users either abandon the use of CPAP, or use CPAP for only a fraction of the nights. Prospective PAP candidates are often reluctant to use this therapy, since the nose mask and hose to the machine look uncomfortable and clumsy. The airflow required for some patients can be vigorous. Some patients will develop nasal congestion while others may experience rhinitis or a runny nose. Some patients adjust to the treatment within a few weeks, others struggle for longer periods, and some discontinue treatment entirely. However, studies show that cognitive behavioral therapy at the beginning of therapy dramatically increases adherence—by up to 148%. While common PAP side effects are merely nuisances, serious side effects such as eustachian tube infection, or pressure build-up behind the cochlea, are very uncommon. Furthermore, research has shown that PAP side effects are rarely the reason patients stop using PAP. There are reports of dizziness, sinus infections, bronchitis, dry eyes, dry mucosal tissue irritation, ear pain, and nasal congestion secondary to CPAP use. PAP manufacturers frequently offer different models at different price ranges, and PAP masks have many different sizes and shapes, so that some users need to try several masks before finding a good fit. These different machines may not be comfortable for all users, so proper selection of PAP models may be very important in furthering adherence to therapy. Disadvantages: Beards, mustaches, or facial irregularities may prevent an air-tight seal. Where the mask contacts the skin must be free from dirt and excess chemicals such as skin oils. Shaving before mask-fitting may be necessary in some cases.
However, facial irregularities of this nature frequently do not hinder the operation of the device or its positive airflow effects for sleep apnea patients. For many people, the only problem from an incomplete seal is a higher noise level near the face from escaping air. Disadvantages: The CPAP mask can act as an orthodontic headgear and move the teeth and the upper and/or lower jaw backward. This effect can increase over time and may or may not cause TMJ disorders in some patients. These facial changes have been dubbed "Smashed Face Syndrome". Mechanism of action: Continuous pressure devices Fixed-pressure CPAP A continuous positive airway pressure (CPAP) machine was initially used mainly by patients for the treatment of sleep apnea at home, but now is in widespread use across intensive care units as a form of ventilation. Obstructive sleep apnea occurs when the upper airway becomes narrow as the muscles relax naturally during sleep. This reduces oxygen in the blood and causes arousal from sleep. The CPAP machine stops this phenomenon by delivering a stream of compressed air via a hose to a nasal pillow, nose mask, full-face mask, or hybrid, splinting the airway (keeping it open under air pressure) so that unobstructed breathing becomes possible, therefore reducing and/or preventing apneas and hypopneas. It is important to understand, however, that it is the air pressure, and not the movement of the air, that prevents the apneas. When the machine is turned on, but prior to the mask being placed on the head, a flow of air comes through the mask. After the mask is placed on the head, it is sealed to the face and the air stops flowing. At this point, it is only the air pressure that accomplishes the desired result. This has the additional benefit of reducing or eliminating the extremely loud snoring that sometimes accompanies sleep apnea. The CPAP machine blows air at a prescribed pressure (also called the titrated pressure). The necessary pressure is usually determined by a sleep physician after review of a study supervised by a sleep technician during an overnight study (polysomnography) in a sleep laboratory. The titrated pressure is the pressure of air at which most (if not all) apneas and hypopneas have been prevented, and it is usually measured in centimetres of water (cmH2O). The pressure required by most patients with sleep apnea ranges between 6 and 14 cmH2O. A typical CPAP machine can deliver pressures between 4 and 20 cmH2O. More specialised units can deliver pressures up to 25 or 30 cmH2O. Mechanism of action: CPAP treatment can be highly effective in the treatment of obstructive sleep apnea. For some patients, the improvement in the quality of sleep and quality of life due to CPAP treatment will be noticed after a single night's use. Often, the patient's sleep partner also benefits from markedly improved sleep quality, due to the amelioration of the patient's loud snoring. Mechanism of action: Given that sleep apnea is a chronic health issue which commonly doesn't go away, ongoing care is usually needed to maintain CPAP therapy. Based on the study of cognitive behavioral therapy (referenced above), ongoing chronic care management is the best way to help patients continue therapy by educating them on the health risks of sleep apnea and providing motivation and support.
Mechanism of action: Automatic positive airway pressure An automatic positive airway pressure device (APAP, AutoPAP, AutoCPAP) automatically titrates, or tunes, the amount of pressure delivered to the patient to the minimum required to maintain an unobstructed airway on a breath-by-breath basis. It does so by measuring the resistance in the patient's breathing and detecting signs of airway blockage such as snoring and apnea, thereby giving the patient the precise pressure required at a given moment and avoiding the compromise of a fixed pressure. Mechanism of action: Bi-level pressure devices "VPAP" or "BPAP" (variable/bilevel positive airway pressure) provides two levels of pressure: inspiratory positive airway pressure (IPAP) and a lower expiratory positive airway pressure (EPAP) for easier exhalation. (Some people use the term BPAP to parallel the terms APAP and CPAP.) Often BPAP is incorrectly referred to as "BiPAP". However, BiPAP is the trademarked name of a BPAP machine manufactured by Respironics Corporation; it is just one of many ventilators that can deliver BPAP. Mechanism of action: Modes
- S (Spontaneous) – In spontaneous mode the device triggers IPAP when flow sensors detect spontaneous inspiratory effort and then cycles back to EPAP. The sensors' level of responsiveness may be adjusted if needed.
- T (Timed) – In timed mode the IPAP/EPAP cycling is purely machine-triggered, at a set rate, typically expressed in breaths per minute (BPM).
- S/T (Spontaneous/Timed) – Like spontaneous mode, the device triggers to IPAP on patient inspiratory effort. But in spontaneous/timed mode a "backup" rate is also set to ensure that patients still receive a minimum number of breaths per minute if they fail to breathe spontaneously.
Mechanism of action: Expiratory positive airway pressure devices Nasal expiratory positive airway pressure (nasal EPAP) is a treatment for obstructive sleep apnea (OSA) and snoring. Contemporary EPAP devices have two small valves that allow air to be drawn in through each nostril, but not exhaled; the valves are held in place by adhesive tabs on the outside of the nose. The mechanism by which EPAP may work is not clear; it may be that the resistance to nasal exhalation leads to a buildup of CO2 which in turn increases respiratory drive, or that resistance to exhalation generates pressure that forces the upper airway to open wider. Components:
- Flow generator (PAP machine) provides the airflow.
- Hose connects the flow generator (sometimes via an in-line humidifier) to the interface.
- Interface (nasal or full-face mask, nasal pillows, or less commonly a lip-seal mouthpiece) provides the connection to the user's airway.
Optional features:
- Humidifier adds moisture to low-humidity air. Heated: a heated water chamber that can increase patient comfort by eliminating the dryness of the compressed air. The temperature can usually be adjusted or turned off to act as a passive humidifier if desired. In general, a heated humidifier is either integrated into the unit or has a separate power source (i.e. plug). Passive: air is blown through an unheated water chamber and is dependent on ambient air temperature. It is not as effective as the heated humidifier described above, but still can increase patient comfort by eliminating the dryness of the compressed air. In general, a passive humidifier is a separate unit and does not have a power source.
- Mask liners: cloth-based mask liners may be used to prevent excess air leakage and to reduce skin irritation and dermatitis.
- Ramp may be used to temporarily lower the pressure if the user does not immediately sleep. The pressure gradually rises to the prescribed level over a period of time that can be adjusted by the patient and/or the DME provider.
- Exhalation pressure relief: gives a short drop in pressure during exhalation to reduce the effort required. This feature is known by the trade name C-Flex or A-Flex in some CPAPs made by Respironics, and EPR in ResMed machines.
- Flexible chin straps may be used to help keep the patient from breathing through the mouth (full-face masks avoid this problem), thereby keeping a closed pressure system. The straps are elastic enough that the patient can easily open their mouth if they feel that they need to. Modern straps use a quick-clip instant fit. Velcro-type adjustments allow quick sizing, before or after the machine is turned on.
- Data logging records basic compliance information or detailed event logging, allowing the sleep physician (or patient) to download and analyse data recorded by the machine to verify treatment effectiveness.
- Automatic altitude adjustment versus manual altitude adjustment.
- DC power source versus AC power source.
Such features generally increase the likelihood of PAP tolerance and compliance. Care and maintenance: As with all durable medical equipment, proper maintenance is essential for proper functioning, long unit life and patient comfort. The care and maintenance required for PAP machines varies with the type and conditions of use, and is typically spelled out in a detailed instruction manual specific to the make and model. Care and maintenance: Most manufacturers recommend that the end user perform daily and weekly maintenance. Units must be checked regularly for wear and tear and kept clean. Poorly connected, worn or frayed electrical connections may present a shock or fire hazard; worn hoses and masks may reduce the effectiveness of the unit. Most units employ some type of filtration, and the filters must be cleaned or replaced on a regular schedule. Sometimes HEPA filters may be purchased or modified for patients with asthma or other allergies. Hoses and masks accumulate exfoliated skin and particulate matter, and can even develop mold. Humidification units must be kept free of mold and algae. Because units use substantial electrical power, housings must be cleaned without immersion. Care and maintenance: For humidification units, cleaning of the water container is imperative for several reasons. First, the container may build up minerals from the local water supply which eventually may become part of the air breathed. Second, the container may eventually show signs of "sludge" coming from dust and other particles which make their way through the air filter, which must also be changed as it accumulates dirt. To help clean the unit, some patients have used a very small amount of hydrogen peroxide mixed with the water in the container, letting it stand for a few minutes before emptying and rinsing. If this procedure is used, it is imperative to rinse the unit with soap and water before reinstalling it onto the machine and breathing. Anti-bacterial soap is not recommended by sellers. To reduce the risk of contamination, distilled water is a good alternative to tap water. If traveling in areas where the mineral content or purity of the water is unknown or suspect, an alternative is to use water from a "purifier" such as a Brita filter. In cold climates, humidified air may require insulated and/or heated air hoses.
These may be bought ready-made, or built from commonly available materials. Care and maintenance: Automated activated-oxygen (ozone) cleaners are becoming more popular as a preferred maintenance method. However, using ozone as a PAP cleaning method has not been scientifically proven to provide a benefit to PAP users. Portability: Since continuous compliance is an important factor in the success of treatment, it is important that patients who travel have access to portable equipment. Progressively, PAP units are becoming lighter and more compact, and often come with carrying cases. Dual-voltage power supplies permit many units to be used internationally - these units only need a travel adapter for the different outlet. Portability: Long-distance travel or camping presents special considerations. Most airport security inspectors have seen the portable machines, so screening rarely presents a special problem. Increasingly, machines are capable of being powered by the 400 Hz power supply used on most commercial aircraft and include manual or automatic altitude adjustment. Machines may easily fit on a ventilator tray on the bottom or back of a power wheelchair with an external battery. Some machines allow power-inverter or car-battery powering. Portability: A limited study in Amsterdam in January 2016 stretched the pectoralis major frontal chest muscles of a patient on CPAP, both during induced sleep and while awake, to bring back the shoulders and expand the chest, and noted an increase in blood oxygen levels of over 6% during the manual therapy and 5% thereafter. The conclusion by Palmer was that the manual stretching of the pectoralis major, combined with the maximum inflation of CPAP, allowed the permanent increase in blood oxygen levels and reinflation of collapsed alveoli. Further studies are required. Portability: Some patients on PAP therapy also use supplementary oxygen. When provided in the form of bottled gas, this can present an increased risk of fire and is subject to restrictions. (Commercial airlines generally forbid passengers to bring their own oxygen.) As of November 2006, most airlines permit the use of oxygen concentrators. Availability: In many countries, PAP machines are only available by prescription. A sleep study at an accredited sleep lab is usually necessary before treatment can start. This is because the pressure settings on the PAP machine must be tailored to a patient's treatment needs. A sleep medicine doctor, who may also be trained in respiratory medicine, psychiatry, neurology, paediatrics, family practice or otolaryngology (ear, nose and throat), will interpret the results from the initial sleep study and recommend a pressure test. This may be done in one night (a split study, with the diagnostic testing done in the first part of the night and CPAP testing done in the later part of the night) or with a follow-up second sleep study during which the CPAP titration may be done over the entire night. With CPAP titration (split night or entire night), the patient wears the CPAP mask and pressure is adjusted up and down from the prescribed setting to find the optimal setting. Studies have shown that the split-night protocol is an effective protocol for diagnosing OSA and titrating CPAP. The CPAP compliance rate showed no difference between the split-night and the two-night protocols.
Availability: In the United States, PAP machines are often available at large discounts online, but a patient purchasing a PAP personally must handle the responsibility of securing reimbursement from his or her insurance or Medicare. Many of the internet providers that deal with insurance such as Medicare will provide upgraded equipment to a patient even if he or she only qualifies for a basic PAP. In some locations a government program, separate from Medicare, can be used to claim reimbursement for all or part of the cost of the PAP device. Availability: In the United Kingdom, PAP machines are available on National Health Service prescription after a diagnosis of sleep apnea, or privately from the internet provided a prescription is supplied. Availability: In Australia, PAP machines can be bought from the internet or physical stores. There is no general requirement for a doctor's prescription, but many suppliers will require a referral. Low-income earners who hold a Commonwealth Health Care Card should enquire with their state's health department about programmes that provide free or low-cost PAP machines. Those who have private health insurance may be eligible for a partial rebate on the cost of a CPAP machine and the mask. Superannuation may be released for the purchase of essential medical equipment such as PAP machines, on the provision of letters from two doctors, one of whom must be a specialist, and an application to the Australian Prudential Regulation Authority (APRA). Availability: In Canada, CPAP units are widely available in all provinces. Funding for the therapy varies from province to province. In the province of Ontario, the Ministry of Health and Long-Term Care's Assistive Devices Program will fund a portion of the cost of a CPAP unit, based on a sleep study in an approved sleep lab showing Obstructive Sleep Apnea Syndrome and the signature of an approved physician on the application form. This funding is available to all residents of Ontario with a valid health card.
**Hypochlorite** Hypochlorite: In chemistry, hypochlorite, or chloroxide, is an anion with the chemical formula ClO−. It combines with a number of cations to form hypochlorite salts. Common examples include sodium hypochlorite (household bleach) and calcium hypochlorite (a component of bleaching powder, swimming pool "chlorine"). The Cl-O distance in ClO− is 1.69 Å. The name can also refer to esters of hypochlorous acid, namely organic compounds with a ClO– group covalently bound to the rest of the molecule. The principal example is tert-butyl hypochlorite, which is a useful chlorinating agent. Most hypochlorite salts are handled as aqueous solutions. Their primary applications are as bleaching, disinfection, and water treatment agents. They are also used in chemistry for chlorination and oxidation reactions. Reactions: Acid reaction Acidification of hypochlorites generates hypochlorous acid, which exists in equilibrium with chlorine. A lowered pH (i.e., towards acid) drives the following reaction to the right, liberating chlorine gas, which can be dangerous: 2 H+ + ClO− + Cl− ⇌ Cl2 + H2O Stability Hypochlorites are generally unstable, and many compounds exist only in solution. Lithium hypochlorite LiOCl, calcium hypochlorite Ca(OCl)2 and barium hypochlorite Ba(ClO)2 have been isolated as pure anhydrous compounds. All are solids. A few more can be produced as aqueous solutions. In general, the greater the dilution, the greater their stability. It is not possible to determine trends for the alkaline earth metal salts, as many of them cannot be formed: beryllium hypochlorite has never been reported, and pure magnesium hypochlorite cannot be prepared, although solid Mg(OH)OCl is known. Calcium hypochlorite is produced on an industrial scale and has good stability. Strontium hypochlorite, Sr(OCl)2, is not well characterised and its stability has not yet been determined. Upon heating, hypochlorite degrades to a mixture of chloride, oxygen, and chlorate: 2 ClO− → 2 Cl− + O2 and 3 ClO− → 2 Cl− + ClO3−. This reaction is exothermic and, in the case of concentrated hypochlorites such as LiOCl and Ca(OCl)2, can lead to a dangerous thermal runaway and potentially explosions. The alkali metal hypochlorites decrease in stability down the group. Anhydrous lithium hypochlorite is stable at room temperature; however, sodium hypochlorite is explosive as an anhydrous solid. The pentahydrate (NaOCl·5H2O) is unstable above 0 °C, although the more dilute solutions encountered as household bleach possess better stability. Potassium hypochlorite (KOCl) is known only in solution. Lanthanide hypochlorites are also unstable; however, they have been reported as being more stable in their anhydrous forms than in the presence of water. Hypochlorite has been used to oxidise cerium from its +3 to +4 oxidation state. Hypochlorous acid itself is not stable in isolation, as it decomposes to form chlorine; its decomposition also releases oxygen. Reactions: Reactions with ammonia Hypochlorites react with ammonia, first giving monochloramine (NH2Cl), then dichloramine (NHCl2), and finally nitrogen trichloride (NCl3): NH3 + ClO− → HO− + NH2Cl; NH2Cl + ClO− → HO− + NHCl2; NHCl2 + ClO− → HO− + NCl3 Preparation: Hypochlorite salts Hypochlorite salts are formed by the reaction between chlorine and alkali and alkaline earth metal hydroxides. The reaction is performed at close to room temperature to suppress the formation of chlorates. 
This process is widely used for the industrial production of sodium hypochlorite (NaClO) and calcium hypochlorite (Ca(ClO)2). Preparation: Cl2 + 2 NaOH → NaCl + NaClO + H2O and 2 Cl2 + 2 Ca(OH)2 → CaCl2 + Ca(ClO)2 + 2 H2O. Large amounts of sodium hypochlorite are also produced electrochemically via an un-separated chloralkali process. In this process brine is electrolyzed to form Cl2, which disproportionates in water to form hypochlorite. This reaction must be conducted in non-acidic conditions to prevent release of chlorine: 2 Cl− → Cl2 + 2 e−; Cl2 + H2O ⇌ HClO + Cl− + H+. Some hypochlorites may also be obtained by a salt metathesis reaction between calcium hypochlorite and various metal sulfates. This reaction is performed in water and relies on the formation of insoluble calcium sulfate, which precipitates out of solution, driving the reaction to completion. Preparation: Ca(ClO)2 + MSO4 → M(ClO)2 + CaSO4 Organic hypochlorites Hypochlorite esters are in general formed from the corresponding alcohols, by treatment with any of a number of reagents (e.g. chlorine, hypochlorous acid, dichlorine monoxide and various acidified hypochlorite salts). Biochemistry: Biosynthesis of organochlorine compounds Chloroperoxidases are enzymes that catalyze the chlorination of organic compounds. Such an enzyme combines the inorganic substrates chloride and hydrogen peroxide to produce the equivalent of Cl+, which replaces a proton in the hydrocarbon substrate: R-H + Cl− + H2O2 + H+ → R-Cl + 2 H2O. The source of "Cl+" is hypochlorous acid (HOCl). Many organochlorine compounds are biosynthesized in this way. Biochemistry: Immune response In response to infection, the human immune system generates minute quantities of hypochlorite within special white blood cells, called neutrophil granulocytes. These granulocytes engulf viruses and bacteria in an intracellular vacuole called the phagosome, where they are digested. Biochemistry: Part of the digestion mechanism involves an enzyme-mediated respiratory burst, which produces reactive oxygen-derived compounds, including superoxide (which is produced by NADPH oxidase). Superoxide decays to oxygen and hydrogen peroxide, which is used in a myeloperoxidase-catalysed reaction to convert chloride to hypochlorite. Low concentrations of hypochlorite were also found to interact with a microbe's heat shock proteins, stimulating their role as intra-cellular chaperones and causing the bacteria to form into clumps (much like an egg that has been boiled) that will eventually die off. The same study found that low (micromolar) hypochlorite levels induce E. coli and Vibrio cholerae to activate a protective mechanism, although its implications were not clear. In some cases, the basicity of hypochlorite compromises a bacterium's lipid membrane, a reaction similar to popping a balloon. Industrial and domestic uses: Hypochlorites, especially of sodium ("liquid bleach", "Javel water") and calcium ("bleaching powder"), are widely used, industrially and domestically, to whiten clothes, lighten hair color and remove stains. They were the first commercial bleaching products, developed soon after that property was discovered in 1785 by French chemist Claude Berthollet. Hypochlorites are also widely used as broad-spectrum disinfectants and deodorizers. That application started soon after French chemist Labarraque discovered those properties, around 1820 (still before Pasteur formulated his germ theory of disease). 
Laboratory uses: As oxidizing agents Hypochlorite is the strongest oxidizing agent of the chlorine oxyanions. This can be seen by comparing the standard half-cell potentials across the series; the data also show that the chlorine oxyanions are stronger oxidizers in acidic conditions. Hypochlorite is a sufficiently strong oxidiser to convert Mn(III) to Mn(V) during the Jacobsen epoxidation reaction and to convert Ce3+ to Ce4+. This oxidising power is what makes hypochlorites effective bleaching agents and disinfectants. In organic chemistry, hypochlorites can be used to oxidise primary alcohols to carboxylic acids. As chlorinating agents Hypochlorite salts can also serve as chlorinating agents. For example, they convert phenols to chlorophenols, and calcium hypochlorite converts piperidine to N-chloropiperidine. Related oxyanions: Chlorine can be the nucleus of oxyanions with oxidation states of −1, +1, +3, +5, or +7. (The element can also assume an oxidation state of +4, as seen in the neutral compound chlorine dioxide, ClO2.)
**Menshutkin reaction** Menshutkin reaction: In organic chemistry, the Menshutkin reaction converts a tertiary amine into a quaternary ammonium salt by reaction with an alkyl halide. Similar reactions occur when tertiary phosphines are treated with alkyl halides. The reaction is the method of choice for the preparation of quaternary ammonium salts. Some phase transfer catalysts (PTC) can be prepared according to the Menshutkin reaction, for instance the synthesis of triethylbenzylammonium chloride (TEBA) from triethylamine and benzyl chloride. Scope: Reactions are typically conducted in polar solvents such as alcohols. Alkyl iodides are superior alkylating agents relative to the bromides, which in turn are superior to chlorides. As is typical for an SN2 process, benzylic, allylic, and α-carbonylated alkyl halides are excellent reactants. Even though alkyl chlorides are poor alkylating agents (gem-dichlorides especially so), amines should not be handled in chlorinated solvents such as dichloromethane and dichloroethane, especially at high temperatures, due to the possibility of a Menshutkin reaction. (Kinetically facile reactions like acylations are sometimes conducted in chlorinated solvents nonetheless.) Highly nucleophilic tertiary amines like DABCO will react with dichloromethane at room temperature overnight, and at reflux (39-40 °C) over several hours, to give the quaternized product (see the article on Selectfluor). Due to steric hindrance and unfavorable electronic properties, chloroform reacts very slowly with tertiary amines, over a period of several weeks to months. Even pyridines, which are considerably less nucleophilic than typical tertiary amines, react with dichloromethane at room temperature over a period of several days to weeks to give bis(pyridinium)methane salts. In addition to solvent and alkylating agent, other factors strongly influence the reaction: in one particular macrocycle system the reaction rate is not only accelerated (150,000-fold compared to quinuclidine) but the halide order is also changed. History: The reaction is named after its discoverer, Nikolai Menshutkin, who described the procedure in 1890. Depending on the source, his name (and the reaction named after him) is spelled Menšutkin, Menshutkin, or Menschutkin.
**LYVE1** LYVE1: Lymphatic vessel endothelial hyaluronan receptor 1 (LYVE1), also known as extracellular link domain containing 1 (XLKD1), is a Link domain-containing hyaladherin, a protein capable of binding to hyaluronic acid (HA), homologous to CD44, the main HA receptor. In humans it is encoded by the LYVE1 gene. LYVE1 is a type I integral membrane glycoprotein. It acts as a receptor and binds to both soluble and immobilized hyaluronan. This protein may function in lymphatic hyaluronan transport and may have a role in tumor metastasis. LYVE-1 is a cell surface receptor on lymphatic endothelial cells that can be used as a lymphatic endothelial cell marker, allowing for the isolation of these cells for experimental purposes. The physiological role for this receptor is still the subject of debate, but evolutionary conservation suggests an important role. LYVE1: Expression of LYVE1 is not restricted to lymph vessels but is also observed in normal liver blood sinusoids and embryonic blood vessels. LYVE1 expression is also observed in a subset of macrophages. LYVE1-positive macrophages in the meninges of rats are both lymphatic and alymphatic. In the brain dura, LYVE1+ macrophages were predominantly pleomorphic in morphology; in the spinal cord, the cells were pleomorphic in the cervical dura but mainly round in the thoracic dura. The cells in the brain dura were associated with the collagen network in the meninges, and some non-lymphatic LYVE1+ macrophages contained intracellular collagen. The exact function of these cells is as yet unknown.
**Harem effect (science)** Harem effect (science): In the sociology and history of science, the harem effect refers to a phenomenon whereby a male scientist, in a position of power, predominantly hires female subordinates for his research team. History: While there are numerous historical examples of this phenomenon and the practice may continue today, two examples stand out in the literature. Erwin Frink Smith, a USDA plant pathologist in the Bureau of Plant Industry, hired more than twenty female assistants at the agency to study various agricultural problems in the late 19th and early 20th centuries. Edward Charles Pickering, astrophysicist and director of the Harvard College Observatory, assembled what became known as "Pickering's Harem"—an all-female staff of a dozen or more to assist in his research program to gather and analyze stellar spectra. Possible reasons suggested for this effect include the significantly lower pay required (allowing many more assistants to be hired) and reduced competition from a "bevy of female subordinates, competent but less threatening than an equal number of bright young men." In Smith's case, a further factor may have been the USDA's structural exclusion of women from taking the examinations that would have allowed them to enter the higher-ranking jobs for which they were qualified.
**ILF2** ILF2: Interleukin enhancer-binding factor 2 is a protein that in humans is encoded by the ILF2 gene. Function: Nuclear factor of activated T-cells (NFAT) is a transcription factor required for T-cell expression of the interleukin 2 gene. NFAT binds to a sequence in the interleukin 2 gene enhancer known as the antigen receptor response element 2. In addition, NFAT can bind RNA and is an essential component for encapsidation and protein priming of hepatitis B viral polymerase. NFAT is a heterodimer of 45 kDa and 90 kDa proteins, the smaller of which is the product of this gene. The encoded protein binds strongly to the 90 kDa protein and stimulates its ability to enhance gene expression. Interactions: ILF2 has been shown to interact with CDC5L and DNA-PKcs. ILF2 and ILF3 have been identified as autoantigens in mice with induced lupus, in canine systemic rheumatic autoimmune disease, and as a rare finding in humans with autoimmune disease.
**Design to standards** Design to standards: "Design to Standards" means to design items with generally accepted and uniform procedures, dimensions or materials. Benefits: Product standardization is a technique in engineering design that aims to reduce the number of different parts within a product. The benefits are lower supply chain costs, product platforms, and faster product design. The supply chain savings are simple to explain: less variety of suppliers and fewer suppliers in number, fewer stock keeping units (SKUs), more economies of scale, and less variety of production operations. Product platforms are enabled because standardized parts can be re-used across a product family and across product generations. The product design process becomes faster because standardized parts can be pulled from an engineering database, and already standardized parts do not need to be designed again. Standards: Standardization can occur in two ways: through an industry standard setter, or through company standards.
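As a toy illustration of the SKU reduction just described (all part names here are invented, not real data), compare the distinct stock keeping units across a two-product family before and after standardizing on shared parts:

```python
# Hypothetical bills of materials before standardization: each product
# uses its own fastener and bracket variants.
boms_before = {
    "product_a": {"M3x8_zinc", "M3x10_zinc", "bracket_a"},
    "product_b": {"M3x8_black", "M3x12_zinc", "bracket_b"},
}
# After design-to-standards: both products share one standard screw
# and one standard bracket.
boms_after = {
    "product_a": {"M3x10_std", "bracket_std"},
    "product_b": {"M3x10_std", "bracket_std"},
}

def sku_count(boms):
    """Number of distinct stock keeping units across the product family."""
    return len(set().union(*boms.values()))

print(sku_count(boms_before), "->", sku_count(boms_after))  # 6 -> 2
```

The same counting logic scales to real part databases; the point is simply that every shared part removes one SKU from the supply chain.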
**Pupillary response** Pupillary response: Pupillary response is a physiological response that varies the size of the pupil via the optic and oculomotor cranial nerves. A constriction response (miosis) is the narrowing of the pupil, which may be caused by scleral buckles or drugs such as opiates/opioids or anti-hypertension medications. Constriction of the pupil occurs when the circular muscle, controlled by the parasympathetic nervous system (PSNS), contracts, and also to an extent when the radial muscle relaxes. Pupillary response: A dilation response (mydriasis) is the widening of the pupil and may be caused by adrenaline; anticholinergic agents; stimulant drugs such as MDMA, cocaine, and amphetamines; and some hallucinogenics (e.g. LSD). Dilation of the pupil occurs when the smooth cells of the radial muscle, controlled by the sympathetic nervous system (SNS), contract, and also when the cells of the iris sphincter muscle relax. Pupillary response: The responses can have a variety of causes, from an involuntary reflex reaction to light exposure or its absence—in low light conditions a dilated pupil lets more light into the eye—or they may indicate interest in the subject of attention or arousal, sexual stimulation, uncertainty, decision conflict, errors, physical activity, or increasing cognitive load or demand. The responses correlate strongly with activity in the locus coeruleus neurotransmitter system. The pupils contract immediately before REM sleep begins. A pupillary response can be intentionally conditioned as a Pavlovian response to some stimuli. The latency of pupillary response (the time it takes to occur) increases with age. In ophthalmology, intensive studies of pupillary response are conducted via videopupillometry. Anisocoria is the condition of one pupil being more dilated than the other.
**AFF1** AFF1: AF4/FMR2 family member 1 is a protein that in humans is encoded by the AFF1 gene. A record for a separate PBM1 gene at the same location has since been withdrawn and is now considered an alias. AFF1 was previously known as AF4 (ALL1-fused gene from chromosome 4). The gene is a member of the AF4/FMR2 (AFF) family, a group of nuclear transcriptional activators that promote RNA elongation. It is a component of the super elongation complex. It is recognized as a proto-oncogene: chromosomal translocations associated with leukemia can fuse this gene with others, like KMT2A, producing an uncontrolled activator protein.
**UTP4** UTP4: UTP4 is a gene that encodes the protein Cirhin; the gene is also known as CIRH1A and NAIC. This protein contains a WD40 repeat and is localized to the nucleolus, where it colocalizes with UTP15 and WDR43. Biallelic mutations of UTP4 have been associated with North American Indian childhood cirrhosis, a form of inherited cirrhosis of the liver occurring in American Indian children from the Abitibi region of northern Quebec.
**Suicide squad** Suicide squad: Suicide Squad is a fictional organization featured in DC Comics books. Suicide Squad may also refer to: Arts: Films: Suicide Squad (1935 film), an American film directed by Raymond K. Johnson; Suicide Squad (2016 film), a 2016 American superhero film written and directed by David Ayer; Suicide Squad (soundtrack), the soundtrack album for the 2016 film; The Suicide Squad (film), a 2021 American superhero film written and directed by James Gunn; Suicide Squadron, the American title for the 1941 British film Dangerous Moonlight. Television: Suicide Squad (Arrowverse), a fictional organization appearing in the Arrowverse television franchise; "Suicide Squad" (Arrow episode), the name of an episode of Arrow; "Suicide Squad" (Brooklyn Nine-Nine), the sixth season finale of Brooklyn Nine-Nine. Comics: "Suicide Squad" (Frew), a 1952 comic book published in Australia by Frew. Politics: Suicide squad (New Zealand), 25 politicians appointed in 1950 to help abolish their own legislative council; Suicide squad, the members of the former Queensland Legislative Council who voted for its abolition. Other: A nickname for certain members of the Guggenheim Aeronautical Laboratory; Suicide Squad (hooligan firm), an association football hooligan firm linked to Burnley F.C.
**Fleshtone** Fleshtone: Fleshtone is a 1994 film written and directed by Harry Hurwitz. Plot: A painter plays erotic games over the telephone with a woman. Her body is found mutilated but it may not be hers after all.
**Etifelmine** Etifelmine: Etifelmine (INN; also known as gilutensin) is a stimulant drug. It was used for the treatment of hypotension (low blood pressure). Synthesis: The base-catalyzed reaction between benzophenone [119-61-9] (1) and butyronitrile [109-74-0] (2) gives 2-[hydroxy(diphenyl)methyl]butanenitrile [22101-20-8] (3). Catalytic hydrogenation reduces the nitrile group to a primary amine, giving 1,1-diphenyl-2-ethyl-3-aminopropanol [22101-87-7] (4). The tertiary hydroxyl group is dehydrated by treatment with anhydrous hydrogen chloride gas, completing the synthesis of etifelmine (5).
**Prusa i3** Prusa i3: The Prusa i3 is a family of fused deposition modeling 3D printers, manufactured by Czech company Prusa Research under the trademarked name Original Prusa i3. Part of the RepRap project, Prusa i3 printers were called the most used 3D printer in the world in 2016. The first Prusa i3 was designed by Josef Průša in 2012, and was released as a commercial kit product in 2015. The latest model (Prusa MK4, on sale as of March 2023) is available in both kit and factory-assembled versions. The Prusa i3's comparatively low cost and ease of construction and modification made it popular in education and with hobbyists and professionals, with the Prusa i3 model MK2 printer receiving several awards in 2016. The i3 series is released under an open source license, so many other companies and individuals have made variants of the printer. Models: RepRap Mendel First conceived in 2009, RepRap Mendel 3D printers were designed to be assembled from 3D printed parts and commonly available off-the-shelf components (referred to as "vitamins," as they cannot be produced by the printer itself). These parts include threaded rods, leadscrews, smooth rods and bearings, screws, nuts, stepper motors, control circuit boards, and a "hot end" to melt and place thermoplastic materials. A Cartesian mechanism permits placement of material anywhere in a cubic volume; this design has continued throughout development of the i3 series. The flat "print bed" (the surface on which parts are printed) is movable in one axis (Y), while two horizontal and two vertical rods permit tool motion in two axes, designated X and Z. Models: Prusa Mendel Josef Průša, a core developer of the RepRap project who had previously developed a PCB heated "print bed", adapted and simplified the RepRap Mendel design, reducing the time to print 3D plastic parts from 20 to 10 hours, and including 3D printed bushings in place of regular bearings. First announced in September 2010, the printer was dubbed Prusa Mendel by Průša himself. According to the RepRap wiki, "Prusa Mendel is the Ford Model T of 3D printers." Prusa Mendel (Iteration 2) Průša streamlined his Mendel design, releasing "Prusa Iteration 2" in November 2011. Parts changes allowed for snap-fit assembly (no glue required); fewer tools were needed to construct and maintain this version. Although not required, fine-pitch manufactured pulleys and LM8UU linear bearings were recommended over printed equivalents for "professional" results. Models: Prusa i3 In May 2012, Průša released a major redesign, focused on ease of construction and use, and no longer structured around the simplest available common hardware as previous RepRap printers were. The Prusa i3 design replaced the threaded-rod, triangular Z-axis frame construction with a rigid, single-piece, water-jet-cut aluminium vertical frame. This improved printing speed and accuracy by eliminating the need to erect, align and tighten the upper supports on the Mendel, which were easily skewed or twisted out of alignment from the base. M10 threaded rods were still used to support the heated-bed Y axis. It used a single-piece, food-safe stainless steel hot end called the Prusa Nozzle, which printed with 3 mm filament, and used M5 threaded rods as lead screws instead of M8. In 2015, Průša released an i3 full kit under the brand name "Original Prusa i3". 
For about three months, the Prusa i3 was delivered set up for a proprietary 3 mm filament diameter (which has retrospectively been dubbed the "mark zero"), before the MK1 update, when it was switched to the more common filament diameter of 1.75 mm. Models: Prusa i3 MK2 and MK2S Průša released the Prusa i3 MK2 in May 2016. It was the first hobby 3D printer with mesh bed leveling and automatic geometry skew correction for all three axes. Features included a larger build volume, custom stepper motors with integrated lead screws, a non-contact inductive sensor for auto-leveling, and a rewritten version of the Marlin firmware. Other new features included a polyetherimide print surface, a Rambo controller board and an E3D V6 Full hotend. The Prusa MK2 became the first RepRap printer to be supported by a Windows 10 Plug-and-Play USB ID. In March 2017, Průša announced on his blog that the revised Prusa i3 MK2S would ship in place of the Prusa i3 MK2. Enhancements cited include U-bolts to hold the LM8UU bearings where cable ties had been used, higher quality bearings and rods, an improved mount for the inductance sensor, improved cable management, and a new electronics cover. An upgrade kit was offered to owners of the MK2 to add these improvements. Models: Prusa i3 MK3 and MK2.5 In September 2017, the Prusa i3 MK3 was released, marketed as "bloody smart." Starting with this model, the base and Y axis were assembled with aluminum extrusion, eliminating the last of the structural threaded rods from the Mendel design. Included were a new extruder with dual Bondtech drive gears, quieter fans with RPM monitoring, faster print speeds, an updated bed leveling sensor, a new electronics board named "Einsy", quieter stepper motors with 128-step microstepping drivers, and a magnetic heatbed with interchangeable PEI-coated steel sheets. Electrical components were updated to work with the new 24-volt power supply. The printer also offers dedicated sockets to connect a Raspberry Pi Zero W running a fork of the open source OctoPrint software for wireless printing. Models: Ease-of-use features included a filament detector, allowing the printer to load filament when it is inserted and to pause printing if the filament is jammed or runs out; error-correcting stepper motor drivers preventing layer shifts due to skipped steps; and recovery after power outages. The ambient temperature sensor both confirms a suitable environment temperature and detects overheated electrical connections on the main board. Models: Existing MK2 and MK2S users were offered a $199 partial upgrade named MK2.5, limited to features which are cheaper to upgrade. After negative feedback from the community, Prusa made available a more expensive $500 MK2S-to-MK3 full upgrade. Models: Prusa i3 MK3S and MK3S+ In February 2019, the Prusa i3 MK3S was released, along with the Multi Material Upgrade 2S (MMU2S), which allows selecting any of 5 different materials for printing together automatically. MK3S changes include a simplified opto-mechanical filament sensor, improved print cooling, and easier access to service the extruder. Starting in November 2020, Prusa made a running change to the Prusa i3 MK3S+. This model has a revised bed leveling sensor and minor parts changes. Models: Prusa i3 MK4 In March 2023, Prusa announced the i3 MK4 and the Multi Material Unit version 3 (MMU3). 
This model features a new i3 version of their "Nextruder" extruder system first seen on the Prusa XL, no-adjustment load-cell bed leveling, a modular replaceable all-metal hot end, a color touchscreen, and a die-cast aluminum frame, Y-carriage (heat bed support), and extruder frame. The 32-bit main processor board includes additional safety and monitoring circuits, a network connector, a port for the MMU3, and a Wi-Fi module. This is Prusa's first Mendel-based design to include support for local and cloud monitoring and support. Models: Switching to 0.9-degree stepper motors, and the addition of input shaping and pressure advance, allow the Mendel-style design to print faster while avoiding ringing artifacts and other undesirable patterns imposed on the object being made, even though it does not have the advantages of the box-like structure of CoreXY printers. However, Průša has stated that print quality, not maximum speed, is their design goal. There is a provision for an accelerometer, often used in 3D printing for self-tuning of input shaping, but that component is not included in the final design. Models: When announced, the software for input shaping, touchscreen operation, and sensor data collection was not finished, and the Multi Material Unit was not ready for release. Upgrade kits for earlier models likewise were not available for shipping. Other Prusa models: Following the MK3S, Prusa introduced unrelated models such as the Prusa SL1 (an SLA printer), the Prusa Mini (with a cantilever arm) and the Prusa XL (using a CoreXY mechanism inside a full-frame structure). These printers are not iterations of the Mendel frame design. Variants: With all aspects of the design freely available under open source and open hardware terms, companies and individuals around the world have produced Prusa i3 copies, variants, and upgrades in assembled and kit form, with thousands offered for sale as early as 2015. Rather than compete directly with these, Prusa Research's strategy is to pursue continual refinement of its designs. Recognition: In 2012, Josef Průša received honors from the governor of the Vysočina Region in the Czech Republic for his accomplishments in technology. In February 2014 he was featured on the cover of Czech Forbes magazine as part of its 30 Under 30 list. The MK2 and MK2S printers both won Best Overall 3D Printer awards from Make: Magazine. Deloitte placed Prusa Research at the top of the 2018 Deloitte Technology Fast 50 as the fastest growing company in Central Europe. The 3D Hubs Q3 2018 Trends report noted that the Prusa i3 MK2, MK2S and MK3 had been used to manufacture nearly 35% of all prints ordered through their fee-for-service business. The MK3 was named FFF 3D printer of the year for 2019 by 3D Printing Industry. Průša was again featured on the cover of the Czech edition of Forbes in 2019 for his leadership at the now billion-koruna company. All3DP named the MK3 Best 3D Printer of 2018, and the MK3S Best 3D Printer of 2020. Components and materials: Plastic parts All Prusa i3 models use 3D printing filament as feedstock to make parts. Components and materials: Like other RepRap printers, the Prusa i3 is capable of creating many of its own parts, with the designs freely available for repairs, replication, and redesign. Formerly these were printed in ABS plastic; Prusa Research now uses mostly PETG instead. Prusa Research maintains a "print farm" of 600 3D printers (as of October 2021) to manufacture the plastic parts for Original Prusa branded products. 
Components and materials: Control system When the Prusa i3 design was first introduced in 2012, RepRap printers frequently used open hardware controllers such as an Arduino Mega combined with an Arduino shield providing the remaining circuitry, such as the RAMPS board. All-in-one versions such as the RAMBo board were becoming available. As a commercial product, the Original Prusa i3 used the Mini-Rambo board up to the MK2. MK3 versions switched to Einsy Rambo boards to provide desired features such as quieter operation. The i3 MK4 uses xBuddy, the first 32-bit board used in the i3 series. All Original Prusa products use Marlin 3D printing firmware. Components and materials: First layer control and bed leveling When extruding the first layer, the print head must be a precise distance away from the print bed for proper adhesion. Many 3D printers rely on the user to complete this process by adjusting the height of the bed at several locations ("bed leveling"). To automate this process, Prusa i3 models from the MK2 in 2016 onward have a sensor to detect the height of the print bed at different locations, and then adjust for it when printing ("auto-leveling"). Components and materials: PINDA V1 - a non-contact inductive sensor used on the MK2/S and MINI. PINDA V2 - a thermally compensated inductive sensor used on the MK2.5, MK2.5S, MK3, and MK3S. SuperPINDA - a thermally insensitive sensor used with the MK2.5/S and MK3/S/+. Load cell sensor - a contact sensor used on the MK4. The PINDA series requires an electronic Z-height adjustment, which may vary for different heat bed surfaces or different nozzles. The load cell sensor automatically compensates for variations in nozzle size and for thickness and expansion of the heated bed surface, eliminating stored settings for the purpose. Components and materials: Frames The distinguishing feature of the i3 compared with its predecessors is the vertical frame, which can take many forms. These include single-sheet frames cut from steel or acrylic, box frames from plywood or medium-density fibreboard, and Lego. Inexpensive aluminum extrusion is commonly used, both by printer enthusiasts and by manufacturers of "clone" i3 printers. Some mass-market i3 variants, such as many Shenzhen Creality products, use rollers against the extruded frame itself instead of precision rods and bearings to reduce cost and complexity. Components and materials: Extruders Beyond the standard Prusa i3 filament extruders, others have created aftermarket extruders and enthusiast tool heads, including a MIG welder and a laser cutter. Průša offered a collection of functional cooking tools and programs under the name "MK3 Master Chef Upgrade" as an April Fools' Day gag in 2018.
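To make the auto-leveling idea concrete, here is a minimal Python sketch of mesh bed leveling: probe the bed at a grid of points, then bilinearly interpolate a Z correction anywhere on the bed. This is only an illustration of the principle; the actual Original Prusa firmware is Marlin-based C++, and every name below is invented.

```python
from bisect import bisect_right

def z_offset(x, y, xs, ys, grid):
    """Bilinearly interpolate probed bed heights.

    xs, ys: sorted probe coordinates along X and Y (mm).
    grid[i][j]: measured bed height (mm) at (xs[i], ys[j]).
    Returns the Z correction to apply at position (x, y).
    """
    # Find the grid cell containing (x, y), clamping at the edges.
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    # Interpolate along X on both bounding rows, then along Y.
    z0 = grid[i][j] * (1 - tx) + grid[i + 1][j] * tx
    z1 = grid[i][j + 1] * (1 - tx) + grid[i + 1][j + 1] * tx
    return z0 * (1 - ty) + z1 * ty

# 3x3 probe mesh over a 200 mm bed: this bed rises 0.2 mm at one corner.
xs = ys = [0.0, 100.0, 200.0]
grid = [[0.0, 0.0, 0.0], [0.0, 0.05, 0.1], [0.0, 0.1, 0.2]]
print(round(z_offset(150.0, 150.0, xs, ys, grid), 4))  # 0.1125
```

In a real printer the interpolated offset is added to every commanded Z move, so the nozzle tracks the measured bed surface instead of an ideal plane.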
**Email address** Email address: An email address identifies an email box to which messages are delivered. While early messaging systems used a variety of formats for addressing, today email addresses follow a set of specific rules originally standardized by the Internet Engineering Task Force (IETF) in the 1980s, and updated by RFC 5322 and 6854. The term email address in this article refers to just the addr-spec in Section 3.4 of RFC 5322. The RFC defines address more broadly as either a mailbox or group. A mailbox value can be either a name-addr, which contains a display-name and addr-spec, or the more common addr-spec alone. Email address: An email address, such as john.smith@example.com, is made up of a local-part, the symbol @, and a domain, which may be a domain name or an IP address enclosed in brackets. Although the standard requires the local-part to be case-sensitive, it also urges that receiving hosts deliver messages in a case-independent manner, e.g., that the mail system in the domain example.com treat John.Smith as equivalent to john.smith; some mail systems even treat them as equivalent to johnsmith. Mail systems often limit the users' choice of name to a subset of the technically permitted characters. Email address: With the introduction of internationalized domain names, efforts are progressing to permit non-ASCII characters in email addresses. Message transport: An email address consists of two parts, a local-part (sometimes a user name, but not always) and a domain; if the domain is a domain name rather than an IP address, then the SMTP client uses the domain name to look up the mail exchange IP address. The general format of an email address is local-part@domain, e.g. jsmith@[192.168.1.2], jsmith@example.com. The SMTP client transmits the message to the mail exchange, which may forward it to another mail exchange until it eventually arrives at the host of the recipient's mail system. Message transport: The transmission of electronic mail from the author's computer and between mail hosts on the Internet uses the Simple Mail Transfer Protocol (SMTP), defined in RFC 5321 and 5322, and extensions such as RFC 6531. The mailboxes may be accessed and managed by applications on personal computers, mobile devices or webmail sites, using the SMTP protocol and either the Post Office Protocol (POP) or the Internet Message Access Protocol (IMAP). Message transport: When transmitting email messages, mail user agents (MUAs) and mail transfer agents (MTAs) use the Domain Name System (DNS) to look up a resource record (RR) for the recipient's domain. A mail exchanger resource record (MX record) contains the name of the recipient's mail server. In the absence of an MX record, an address record (A or AAAA) directly specifies the mail host. Message transport: The local-part of an email address has no significance for intermediate mail relay systems other than the final mailbox host. Email senders and intermediate relay systems must not assume it to be case-insensitive, since the final mailbox host may or may not treat it as such. A single mailbox may receive mail for multiple email addresses, if configured by the administrator. Conversely, a single email address may be an alias for a distribution list to many mailboxes. Email aliases, electronic mailing lists, sub-addressing, and catch-all addresses, the latter being mailboxes that receive messages regardless of the local-part, are common patterns for achieving a variety of delivery goals. 
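As a concrete illustration of the local-part@domain structure described above, here is a small Python sketch (the helper name is invented, and this is not a full RFC 5322 parser) that extracts the domain an SMTP client would resolve for its MX lookup:

```python
def split_address(addr: str) -> tuple[str, str]:
    """Split an email address into (local-part, domain).

    Splits on the last '@', because a quoted local-part such as
    "a@b"@example.com may legally contain the symbol itself.
    """
    local_part, sep, domain = addr.rpartition("@")
    if not sep or not local_part or not domain:
        raise ValueError(f"not a valid addr-spec: {addr!r}")
    return local_part, domain

# The domain is resolved (via an MX lookup) to find the mail exchange;
# the local-part stays opaque until final delivery at the mailbox host.
print(split_address("jsmith@example.com"))      # ('jsmith', 'example.com')
print(split_address('"john@doe"@example.org'))  # ('"john@doe"', 'example.org')
```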
Message transport: The addresses found in the header fields of an email message are not directly used by mail exchanges to deliver the message. An email message also contains a message envelope that contains the information for mail routing. While envelope and header addresses may be equal, forged email addresses (also called spoofed email addresses) are often seen in spam, phishing, and many other Internet-based scams. This has led to several initiatives which aim to make such forgeries of fraudulent emails easier to spot. Syntax: The format of an email address is local-part@domain, where the local-part may be up to 64 octets long and the domain may have a maximum of 255 octets. The formal definitions are in RFC 5322 (sections 3.2.3 and 3.4.1) and RFC 5321—with a more readable form given in the informational RFC 3696 (written by J. Klensin, the author of RFC 5321) and the associated errata. Syntax: An email address may also have an associated "display-name" (Display Name) for the recipient, which precedes the address specification, the latter surrounded by angle brackets, for example: John Smith <john.smith@example.org>. Email spammers and phishers will often use "Display Name spoofing" to trick their victims, by using a false Display Name, or by using a different email address as the Display Name. Earlier forms of email addresses for networks other than the Internet included other notations, such as that required by X.400, and the UUCP bang path notation, in which the address was given in the form of a sequence of computers through which the message should be relayed. This was widely used for several years, but was superseded by the Internet standards promulgated by the Internet Engineering Task Force (IETF). Syntax: Local-part The local-part of the email address may be unquoted or may be enclosed in quotation marks. Syntax: If unquoted, it may use any of these ASCII characters: uppercase and lowercase Latin letters A to Z and a to z; digits 0 to 9; printable characters !#$%&'*+-/=?^_`{|}~; and dot ., provided that it is not the first or last character and provided also that it does not appear consecutively (e.g., John..Doe@example.com is not allowed). If quoted, it may contain Space, Horizontal Tab (HT), any ASCII graphic except Backslash and Quote, and a quoted-pair consisting of a Backslash followed by HT, Space or any ASCII graphic; it may also be split between lines anywhere that HT or Space appears. In contrast to unquoted local-parts, the addresses ".John.Doe"@example.com, "John.Doe."@example.com and "John..Doe"@example.com are allowed. Syntax: The maximum total length of the local-part of an email address is 64 octets. Syntax: Space and special characters "(),:;<>@[\] are allowed with restrictions (they are only allowed inside a quoted string, as described above, and in that quoted string, any backslash or double-quote must be preceded once by a backslash). Comments are allowed with parentheses at either end of the local-part; e.g., john.smith(comment)@example.com and (comment)john.smith@example.com are both equivalent to john.smith@example.com. In addition to the above ASCII characters, international characters above U+007F, encoded as UTF-8, are permitted by RFC 6531 when the EHLO specifies SMTPUTF8, though even mail systems that support SMTPUTF8 and 8BITMIME may restrict which characters may be used when assigning local-parts. Syntax: A local-part is either a Dot-string or a Quoted-string; it cannot be a combination. Quoted strings and characters, however, are not commonly used. 
RFC 5321 also warns that "a host that expects to receive mail SHOULD avoid defining mailboxes where the Local-part requires (or uses) the Quoted-string form". Syntax: The local-part postmaster is treated specially—it is case-insensitive, and should be forwarded to the domain email administrator. Technically all other local-parts are case-sensitive, therefore jsmith@example.com and JSmith@example.com specify different mailboxes; however, many organizations treat uppercase and lowercase letters as equivalent. Indeed, RFC 5321 warns that "a host that expects to receive mail SHOULD avoid defining mailboxes where ... the Local-part is case-sensitive". Syntax: Despite the wide range of special characters which are technically valid, organisations, mail services, mail servers and mail clients in practice often do not accept all of them. For example, Windows Live Hotmail only allows creation of email addresses using alphanumerics, dot (.), underscore (_) and hyphen (-). Common advice is to avoid using some special characters, to reduce the risk of rejected emails. According to RFC 5321 section 2.3.11 (Mailbox and Address), "the local-part MUST be interpreted and assigned semantics only by the host specified in the domain of the address". This means that no assumptions can be made about the meaning of the local-part of another mail server; it is entirely up to the configuration of that mail server. Syntax: Interpretation of the local-part is dependent on the conventions and policies implemented in the mail server. For example, case sensitivity may distinguish mailboxes differing only in capitalization of characters of the local-part, although this is not very common. Gmail ignores all dots in the local-part of a @gmail.com address for the purposes of determining account identity. Syntax: Sub-addressing Some mail services support a tag included in the local-part, such that the address is an alias to a prefix of the local-part. Typically the tag comprises the characters following a plus sign, or less often those following a minus sign, so fred+bah@domain and fred+foo@domain might end up in the same inbox as fred+@domain or even as fred@domain. For example, the address joeuser+tag@example.com denotes the same delivery address as joeuser@example.com. RFC 5233 refers to this convention as subaddressing, but it is also known as plus addressing, tagged addressing or mail extensions. This can be useful for tagging emails for sorting, and for spam control. Addresses of this form, using various separators between the base name and the tag, are supported by several email services, including Andrew Project (plus), Runbox (plus), Gmail (plus), Rackspace (plus), Yahoo! Mail Plus (hyphen), Apple's iCloud (plus), Outlook.com (plus), Proton Mail (plus), Fastmail (plus and Subdomain Addressing), postale.io (plus), Pobox (plus), MeMail (plus), MMDF (equals), Qmail and Courier Mail Server (hyphen). Postfix and Exim allow configuring an arbitrary separator from the legal character set. The text of the tag may be used to apply filtering, or to create single-use or disposable email addresses. 
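A minimal sketch of how a provider supporting the sub-addressing convention described above might fold a tagged address back to its base mailbox; the separator, the Gmail-style dot folding, and the function name are illustrative assumptions rather than standardized behaviour:

```python
def base_mailbox(local_part: str, domain: str, separator: str = "+") -> str:
    """Fold a sub-addressed local-part back to its base mailbox.

    joeuser+tag@example.com -> joeuser@example.com
    For gmail.com the dots in the local-part are also ignored,
    mirroring Gmail's documented dot-insensitivity.
    """
    base, _, _tag = local_part.partition(separator)
    if domain.lower() == "gmail.com":
        base = base.replace(".", "")
    return f"{base}@{domain.lower()}"

assert base_mailbox("joeuser+shopping", "example.com") == "joeuser@example.com"
assert base_mailbox("j.oe.user+news", "Gmail.com") == "joeuser@gmail.com"
```

Because the tag survives in the To: header even though delivery ignores it, filters can sort on it and leaked single-use tags can be revoked.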
Syntax: Domain The domain name part of an email address has to conform to strict guidelines: it must match the requirements for a hostname, a list of dot-separated DNS labels, each label limited to a length of 63 characters and consisting of: uppercase and lowercase Latin letters A to Z and a to z; digits 0 to 9, provided that top-level domain names are not all-numeric; and hyphen -, provided that it is not the first or last character. This rule is known as the LDH rule (letters, digits, hyphen). In addition, the domain may be an IP address literal, surrounded by square brackets [], such as jsmith@[192.168.2.1] or jsmith@[IPv6:2001:db8::1], although this is rarely seen except in email spam. Internationalized domain names (which are encoded to comply with the requirements for a hostname) allow for presentation of non-ASCII domains. In mail systems compliant with RFC 6531 and RFC 6532, an email address may be encoded as UTF-8, in both the local-part and the domain name. Syntax: Comments are allowed in the domain as well as in the local-part; for example, john.smith@(comment)example.com and john.smith@example.com(comment) are equivalent to john.smith@example.com. RFC 2606 specifies that certain domains, for example those intended for documentation and testing, should not be resolvable and that as a result mail addressed to mailboxes in them and their subdomains should be non-deliverable. Of note for e-mail are example, invalid, example.com, example.net, and example.org. Syntax: Examples Valid email addresses: simple@example.com; very.common@example.com; disposable.style.email.with+symbol@example.com; other.email-with-hyphen@and.subdomains.example.com; fully-qualified-domain@example.com; user.name+tag+sorting@example.com (may go to the user.name@example.com inbox depending on the mail server); x@example.com (one-letter local-part); example-indeed@strange-example.com; test/test@test.com (slashes are a printable character, and allowed); admin@mailserver1 (local domain name with no TLD, although ICANN highly discourages dotless email addresses); example@s.example (see the List of Internet top-level domains); " "@example.org (space between the quotes); "john..doe"@example.org (quoted double dot); mailhost!username@example.org (bangified host route used for UUCP mailers); "very.(),:;<>[]\".VERY.\"very@\\ \"very\".unusual"@strange.example.com (includes non-letter characters and multiple @ symbols, all but the last inside a quoted string); user%example.com@example.org (% escaped mail route to user@example.com via example.org); user-@example.org (local-part ending with a non-alphanumeric character from the list of allowed printable characters); postmaster@[123.123.123.123] (IP addresses are allowed instead of domains when in square brackets, but strongly discouraged); postmaster@[IPv6:2001:0db8:85a3:0000:0000:8a2e:0370:7334] (IPv6 uses a different syntax). Invalid email addresses: Abc.example.com (no @ character); A@b@c@example.com (only one @ is allowed outside quotation marks); a"b(c)d,e:f;g<h>i[j\k]l@example.com (none of the special characters in this local-part are allowed outside quotation marks); just"not"right@example.com (quoted strings must be dot-separated or the only element making up the local-part); this is"not\allowed@example.com (spaces, quotes, and backslashes may only exist within quoted strings and preceded by a backslash); this\ still\"not\\allowed@example.com (even if escaped (preceded by a backslash), spaces, quotes, and backslashes must still be contained by quotes); 
1234567890123456789012345678901234567890123456789012345678901234+x@example.com (local-part is longer than 64 characters); i.like.underscores@but_its_not_allowed_in_this_part (underscore is not allowed in the domain part); QA[icon]CHOCOLATE[icon]@test.com (icon characters). Validation and verification: Email addresses are often requested as input to websites as validation of user existence. Other validation methods are available, such as cell phone number validation, postal mail validation, and fax validation. Validation and verification: An email address is generally recognized as having two parts joined with an at-sign (@), although the technical specifications detailed in RFC 822 and subsequent RFCs are more extensive. Syntactically correct, verified email addresses do not guarantee that an email box exists. Thus many mail servers use other techniques to check that a mailbox exists, such as querying the Domain Name System for the domain or using callback verification. Callback verification is an imperfect solution, as it may be disabled to avoid a directory harvest attack, or callbacks may be reported as spam and lead to listing on a DNSBL. Validation and verification: Several validation techniques may be utilized to validate a user's email address. For example, Verification links: Email address validation is often accomplished for account creation on websites by sending an email to the user-provided email address with a special temporary hyperlink. On receipt, the user opens the link, immediately activating the account. Email addresses are also useful as a means of delivering messages from a website, e.g., user messages and user actions, to the email inbox. Validation and verification: Formal and informal standards: RFC 3696 provides specific advice for validating Internet identifiers, including email addresses. Some websites instead attempt to evaluate the validity of email addresses through arbitrary standards, such as by rejecting addresses containing valid characters, such as + and /, or enforcing arbitrary length limitations. Email address internationalization provides for a much larger range of characters than many current validation algorithms allow, such as all Unicode characters above U+0080, encoded as UTF-8. Validation and verification: Algorithmic tools: Large websites, bulk mailers and spammers require efficient tools to validate email addresses. Such tools depend upon heuristic algorithms and statistical models. Sender reputation: An email sender's reputation may be used to attempt to verify whether the sender is trustworthy or a potential spammer. Factors that may be incorporated into an assessment of sender reputation include the quality of past contact with, or content provided by, and engagement levels of, the sender's IP address or email address. Browser-based verification: HTML5 forms implemented in many browsers allow email address validation to be handled by the browser. Some companies offer services to validate an email address, often using an application programming interface, but there is no guarantee that they will provide accurate results. Internationalization: The IETF conducts a technical and standards working group devoted to internationalization issues of email addresses, entitled Email Address Internationalization (EAI, also known as IMA, Internationalized Mail Address). This group produced RFC 6530, 6531, 6532 and 6533, and continues to work on additional EAI-related RFCs. 
Internationalization: The IETF's EAI working group published RFC 6530, "Overview and Framework for Internationalized Email", which enabled non-ASCII characters to be used in both the local-part and the domain of an email address. RFC 6530 provides for email based on the UTF-8 encoding, which permits the full repertoire of Unicode. RFC 6531 provides a mechanism for SMTP servers to negotiate transmission of SMTPUTF8 content. Internationalization: The basic EAI concepts involve exchanging mail in UTF-8. Though the original proposal included a downgrading mechanism for legacy systems, this has now been dropped. The local servers are responsible for the local-part of the address, whereas the domain is restricted by the rules of internationalized domain names, though still transmitted in UTF-8. The mail server is also responsible for any mapping mechanism between the IMA form and any ASCII alias. Internationalization: EAI enables users to have a localized address in a native language script or character set, as well as an ASCII form for communicating with legacy systems or for script-independent use. Applications that recognize internationalized domain names and mail addresses must have facilities to convert these representations. Significant demand for such addresses is expected in China, Japan, Russia, and other markets that have large user bases in a non-Latin-based writing system. Internationalization: For example, in addition to the .in top-level domain, the government of India in 2011 got approval for ".bharat" (from Bhārat Gaṇarājya), written in seven different scripts for use by Gujarati, Marathi, Bengali, Tamil, Telugu, Punjabi and Urdu speakers. The Indian company XgenPlus.com claims to be the world's first EAI mailbox provider, and the Government of Rajasthan now supplies a free email account on the domain राजस्थान.भारत for every citizen of the state. The leading media house Rajasthan Patrika launched its IDN domain पत्रिका.भारत with contactable email. Internationalization: The example addresses below would not be handled by RFC 5322 based servers, but are permitted by RFC 6530; servers compliant with it will be able to handle them: Latin alphabet with diacritics: Pelé@example.com; Greek alphabet: δοκιμή@παράδειγμα.δοκιμή; Traditional Chinese characters: 我買@屋企.香港; Japanese characters: 二ノ宮@黒川.日本; Cyrillic characters: медведь@с-балалайкой.рф; Devanagari characters: संपर्क@डाटामेल.भारत. The Postfix mailer has supported internationalized mail since stable release 3.0.0 on 8 February 2015. Internationalization: Google supports sending emails to and from internationalized domains, but does not allow the registration of non-ASCII email addresses. Microsoft added similar functionality in Outlook 2016. DataMail launched internationalized email support for 8 Indian languages using the XgenPlus email platform in India.
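The asymmetry described above, where domains can be mapped to ASCII but non-ASCII local-parts cannot, can be sketched with Python's built-in idna codec. Note that this stdlib codec implements the older IDNA 2003 mapping, so treat the output as illustrative:

```python
# An internationalized domain has an ASCII-compatible (Punycode) form:
print("bücher.example".encode("idna"))  # b'xn--bcher-kva.example'

# A non-ASCII local-part such as δοκιμή has no such ASCII fallback:
# per RFC 6531/6532 it travels as raw UTF-8 octets, and only when the
# SMTP server advertises the SMTPUTF8 extension.
print("δοκιμή".encode("utf-8"))
```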
**Sepsis Six** Sepsis Six: The Sepsis Six is the name given to a bundle of medical therapies designed to reduce mortality in patients with sepsis. Drawn from international guidelines that emerged from the Surviving Sepsis Campaign, the Sepsis Six was developed by The UK Sepsis Trust (Daniels, Nutbeam, Laver) in 2006 as a practical tool to help healthcare professionals deliver the basics of care rapidly and reliably. Sepsis Six: In 2011, The UK Sepsis Trust published evidence that use of the Sepsis Six was associated with a 50% reduction in mortality, a decreased length of stay in hospital, and fewer intensive care days, though the authors urge caution in a causal interpretation of these findings. The Sepsis Six consists of three diagnostic and three therapeutic steps – all to be delivered within one hour of the initial diagnosis of sepsis: titrate oxygen to a saturation target of 94%; take blood cultures and consider source control; administer empiric intravenous antibiotics; measure serial serum lactates; start intravenous fluid resuscitation; and commence accurate urine output measurement.
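Purely as an illustration of the bundle's structure (this is not clinical software, and the names below are invented), the six steps and the one-hour delivery target can be modeled as a timed checklist:

```python
from datetime import datetime, timedelta

# The six steps, in the order given above.
SEPSIS_SIX = (
    "Titrate oxygen to a saturation target of 94%",
    "Take blood cultures and consider source control",
    "Administer empiric intravenous antibiotics",
    "Measure serial serum lactates",
    "Start intravenous fluid resuscitation",
    "Commence accurate urine output measurement",
)

def bundle_on_time(diagnosed_at: datetime, completed_at: dict) -> bool:
    """True iff every step was completed within one hour of diagnosis."""
    deadline = diagnosed_at + timedelta(hours=1)
    return all(step in completed_at and completed_at[step] <= deadline
               for step in SEPSIS_SIX)
```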
**Finitely generated module** Finitely generated module: In mathematics, a finitely generated module is a module that has a finite generating set. A finitely generated module over a ring R may also be called a finite R-module, finite over R, or a module of finite type. Related concepts include finitely cogenerated modules, finitely presented modules, finitely related modules and coherent modules, all of which are defined below. Over a Noetherian ring, the concepts of finitely generated, finitely presented and coherent modules coincide. A finitely generated module over a field is simply a finite-dimensional vector space, and a finitely generated module over the integers is simply a finitely generated abelian group. Definition: The left R-module M is finitely generated if there exist a1, a2, ..., an in M such that for any x in M, there exist r1, r2, ..., rn in R with x = r1a1 + r2a2 + ... + rnan. Definition: The set {a1, a2, ..., an} is referred to as a generating set of M in this case. A finite generating set need not be a basis, since it need not be linearly independent over R. What is true is: M is finitely generated if and only if there is a surjective R-linear map Rn → M for some n (M is a quotient of a free module of finite rank). Definition: If a set S generates a module that is finitely generated, then there is a finite generating set that is included in S, since only finitely many elements in S are needed to express any finite generating set, and these finitely many elements form a generating set. However, it may occur that S does not contain any finite generating set of minimal cardinality. For example, the set of prime numbers is a generating set of Z viewed as a Z-module, and a generating set formed from prime numbers has at least two elements, while the singleton {1} is also a generating set. Definition: In the case where the module M is a vector space over a field R, and the generating set is linearly independent, n is well-defined and is referred to as the dimension of M (well-defined means that any linearly independent generating set has n elements: this is the dimension theorem for vector spaces). Any module is the union of the directed set of its finitely generated submodules. Definition: A module M is finitely generated if and only if any increasing chain Mi of submodules with union M stabilizes: i.e., there is some i such that Mi = M. This fact, together with Zorn's lemma, implies that every nonzero finitely generated module admits maximal submodules. If any increasing chain of submodules stabilizes (i.e., any submodule is finitely generated), then the module M is called a Noetherian module. Examples: If a module is generated by one element, it is called a cyclic module. Examples: Let R be an integral domain with K its field of fractions. Then every finitely generated R-submodule I of K is a fractional ideal: that is, there is some nonzero r in R such that rI is contained in R. Indeed, one can take r to be the product of the denominators of the generators of I. If R is Noetherian, then every fractional ideal arises in this way. Examples: Finitely generated modules over the ring of integers Z coincide with the finitely generated abelian groups. These are completely classified by the structure theorem, taking Z as the principal ideal domain. Finitely generated (say left) modules over a division ring are precisely finite-dimensional vector spaces (over the division ring). Some facts: Every homomorphic image of a finitely generated module is finitely generated.
In general, submodules of finitely generated modules need not be finitely generated. As an example, consider the ring R = Z[X1, X2, ...] of all polynomials in countably many variables. R itself is a finitely generated R-module (with {1} as generating set). Consider the submodule K consisting of all those polynomials with zero constant term. Since every polynomial involves only finitely many variables, any finite subset of K involves only finitely many of the Xi, and the submodule it generates cannot contain a variable that occurs in none of the generators; hence the R-module K is not finitely generated. Some facts: In general, a module is said to be Noetherian if every submodule is finitely generated. A finitely generated module over a Noetherian ring is a Noetherian module (and indeed this property characterizes Noetherian rings): A module over a Noetherian ring is finitely generated if and only if it is a Noetherian module. This resembles, but is not exactly, Hilbert's basis theorem, which states that the polynomial ring R[X] over a Noetherian ring R is Noetherian. Both facts imply that a finitely generated commutative algebra over a Noetherian ring is again a Noetherian ring. Some facts: More generally, an algebra (e.g., ring) that is a finitely generated module is a finitely generated algebra. Conversely, if a finitely generated algebra is integral (over the coefficient ring), then it is a finitely generated module. (See integral element for more.) Let 0 → M′ → M → M′′ → 0 be an exact sequence of modules. Then M is finitely generated if M′, M′′ are finitely generated. There are some partial converses to this. If M is finitely generated and M′′ is finitely presented (which is stronger than finitely generated; see below), then M′ is finitely generated. Also, M is Noetherian (resp. Artinian) if and only if M′, M′′ are Noetherian (resp. Artinian). Some facts: Let B be a ring and A its subring such that B is a faithfully flat right A-module. Then a left A-module F is finitely generated (resp. finitely presented) if and only if the B-module B ⊗A F is finitely generated (resp. finitely presented). Finitely generated modules over a commutative ring: For finitely generated modules over a commutative ring R, Nakayama's lemma is fundamental. Sometimes, the lemma allows one to prove phenomena of finite-dimensional vector spaces for finitely generated modules. For example, if f : M → M is a surjective R-endomorphism of a finitely generated module M, then f is also injective, and hence is an automorphism of M. This says simply that M is a Hopfian module. Similarly, an Artinian module M is coHopfian: any injective endomorphism f is also a surjective endomorphism. Any R-module is an inductive limit of finitely generated R-submodules. This is useful for weakening an assumption to the finite case (e.g., the characterization of flatness with the Tor functor). Finitely generated modules over a commutative ring: An example of a link between finite generation and integral elements can be found in commutative algebras. To say that a commutative algebra A is a finitely generated ring over R means that there exists a set of elements G = {x1, ..., xn} of A such that the smallest subring of A containing G and R is A itself. Because the ring product may be used to combine elements, more than just R-linear combinations of elements of G are generated. For example, a polynomial ring R[x] is finitely generated by {1, x} as a ring, but not as a module. If A is a commutative algebra (with unity) over R, then the following two statements are equivalent: A is a finitely generated R-module.
Finitely generated modules over a commutative ring: A is both a finitely generated ring over R and an integral extension of R. Generic rank: Let M be a finitely generated module over an integral domain A with field of fractions K. Then the dimension $\dim_K (M \otimes_A K)$ is called the generic rank of M over A. This number is the same as the number of maximal A-linearly independent vectors in M, or equivalently the rank of a maximal free submodule F of M (cf. rank of an abelian group). Since $(M/F)_{(0)} = M_{(0)}/F_{(0)} = 0$, M/F is a torsion module. When A is Noetherian, by generic freeness, there is an element f (depending on M) such that $M[f^{-1}]$ is a free $A[f^{-1}]$-module. Then the rank of this free module is the generic rank of M. Generic rank: Now suppose the integral domain A is generated as an algebra over a field k by finitely many homogeneous elements of degrees $d_i$. Suppose M is graded as well, and let $P_M(t) = \sum_n (\dim_k M_n)\, t^n$ be the Poincaré series of M. Generic rank: By the Hilbert–Serre theorem, there is a polynomial F such that $P_M(t) = F(t) \prod_i (1 - t^{d_i})^{-1}$. Then $F(1)$ is the generic rank of M. A finitely generated module over a principal ideal domain is torsion-free if and only if it is free. This is a consequence of the structure theorem for finitely generated modules over a principal ideal domain, the basic form of which says a finitely generated module over a PID is a direct sum of a torsion module and a free module. But it can also be shown directly as follows: let M be a torsion-free finitely generated module over a PID A and F a maximal free submodule. Let f be a nonzero element of A such that $fM \subset F$. Then fM is free, since it is a submodule of a free module and A is a PID. But now $f : M \to fM$ is an isomorphism since M is torsion-free. Generic rank: By the same argument as above, a finitely generated module over a Dedekind domain A (or more generally a semi-hereditary ring) is torsion-free if and only if it is projective; consequently, a finitely generated module over A is a direct sum of a torsion module and a projective module. A finitely generated projective module over a Noetherian integral domain has constant rank, and so the generic rank of a finitely generated module over A is the rank of its projective part. Equivalent definitions and finitely cogenerated modules: The following conditions are equivalent to M being finitely generated (f.g.): For any family of submodules {Ni | i ∈ I} in M, if $\sum_{i \in I} N_i = M$, then $\sum_{i \in F} N_i = M$ for some finite subset F of I. For any chain of submodules {Ni | i ∈ I} in M, if $\bigcup_{i \in I} N_i = M$, then Ni = M for some i in I. Equivalent definitions and finitely cogenerated modules: If $\phi : \bigoplus_{i \in I} R \to M$ is an epimorphism, then the restriction $\phi : \bigoplus_{i \in F} R \to M$ is an epimorphism for some finite subset F of I. From these conditions it is easy to see that being finitely generated is a property preserved by Morita equivalence. The conditions are also convenient to define a dual notion of a finitely cogenerated module M. The following conditions are equivalent to a module being finitely cogenerated (f.cog.): For any family of submodules {Ni | i ∈ I} in M, if $\bigcap_{i \in I} N_i = \{0\}$, then $\bigcap_{i \in F} N_i = \{0\}$ for some finite subset F of I. Equivalent definitions and finitely cogenerated modules: For any chain of submodules {Ni | i ∈ I} in M, if $\bigcap_{i \in I} N_i = \{0\}$, then Ni = {0} for some i in I. Equivalent definitions and finitely cogenerated modules: If $\phi : M \to \prod_{i \in I} N_i$ is a monomorphism, where each Ni is an R-module, then $\phi : M \to \prod_{i \in F} N_i$ is a monomorphism for some finite subset F of I. Both f.g. modules and f.cog.
modules have interesting relationships to Noetherian and Artinian modules, and the Jacobson radical J(M) and socle soc(M) of a module. The following facts illustrate the duality between the two conditions. For a module M: M is Noetherian if and only if every submodule N of M is f.g. Equivalent definitions and finitely cogenerated modules: M is Artinian if and only if every quotient module M/N is f.cog. M is f.g. if and only if J(M) is a superfluous submodule of M, and M/J(M) is f.g. M is f.cog. if and only if soc(M) is an essential submodule of M, and soc(M) is f.g. If M is a semisimple module (such as soc(N) for any module N), it is f.g. if and only if f.cog. If M is f.g. and nonzero, then M has a maximal submodule and any quotient module M/N is f.g. If M is f.cog. and nonzero, then M has a minimal submodule, and any submodule N of M is f.cog. Equivalent definitions and finitely cogenerated modules: If N and M/N are f.g. then so is M. The same is true if "f.g." is replaced with "f.cog." Finitely cogenerated modules must have finite uniform dimension. This is easily seen by applying the characterization using the finitely generated essential socle. Somewhat asymmetrically, finitely generated modules do not necessarily have finite uniform dimension. For example, an infinite direct product of nonzero rings is a finitely generated (cyclic!) module over itself; however, it clearly contains an infinite direct sum of nonzero submodules. Finitely generated modules do not necessarily have finite co-uniform dimension either: any ring R with unity such that R/J(R) is not a semisimple ring is a counterexample. Finitely presented, finitely related, and coherent modules: Another formulation is this: a finitely generated module M is one for which there is an epimorphism mapping Rk onto M: f : Rk → M. Suppose now there is an epimorphism φ : F → M for a module M and a free module F. If the kernel of φ is finitely generated, then M is called a finitely related module. Since M is isomorphic to F/ker(φ), this basically expresses that M is obtained by taking a free module and introducing finitely many relations within F (the generators of ker(φ)). Finitely presented, finitely related, and coherent modules: If the kernel of φ is finitely generated and F has finite rank (i.e., F = Rk), then M is said to be a finitely presented module. Here, M is specified using finitely many generators (the images of the k generators of F = Rk) and finitely many relations (the generators of ker(φ)). See also: free presentation. Finitely presented modules can be characterized by an abstract property within the category of R-modules: they are precisely the compact objects in this category. Finitely presented, finitely related, and coherent modules: A coherent module M is a finitely generated module whose finitely generated submodules are finitely presented. Over any ring R, coherent modules are finitely presented, and finitely presented modules are both finitely generated and finitely related. For a Noetherian ring R, finitely generated, finitely presented, and coherent are equivalent conditions on a module. Some crossover occurs for projective or flat modules. A finitely generated projective module is finitely presented, and a finitely related flat module is projective. It is true also that the following conditions are equivalent for a ring R: R is a right coherent ring. The module RR is a coherent module.
Every finitely presented right R-module is coherent. Although coherence seems like a more cumbersome condition than finite generation or finite presentation, it is nicer than they are, since the category of coherent modules is an abelian category, while, in general, neither finitely generated nor finitely presented modules form an abelian category.
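As a concrete illustration of the structure theorem over Z mentioned in the examples above, the invariant factors of a finitely presented abelian group Z^n/K can be read off the Smith normal form of a relation matrix. Below is a minimal sketch using SymPy's documented smith_normal_form helper; the relation matrix is an arbitrary example chosen for illustration.

```python
# Sketch: classify the finitely presented Z-module Z^3 / (row span of R)
# via the Smith normal form of the relation matrix R.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

R = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, 4, 16]])

D = smith_normal_form(R, domain=ZZ)
print(D)
# The diagonal entries d1 | d2 | d3 (up to sign) are the invariant factors:
# Z^3 / R Z^3  is isomorphic to  Z/d1 + Z/d2 + Z/d3,
# where a zero entry contributes a free summand Z.
```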
**Hardware description language** Hardware description language: In computer engineering, a hardware description language (HDL) is a specialized computer language used to describe the structure and behavior of electronic circuits, and most commonly, digital logic circuits. Hardware description language: A hardware description language enables a precise, formal description of an electronic circuit that allows for the automated analysis and simulation of an electronic circuit. It also allows for the synthesis of an HDL description into a netlist (a specification of physical electronic components and how they are connected together), which can then be placed and routed to produce the set of masks used to create an integrated circuit. Hardware description language: A hardware description language looks much like a programming language such as C or ALGOL; it is a textual description consisting of expressions, statements and control structures. One important difference between most programming languages and HDLs is that HDLs explicitly include the notion of time. HDLs form an integral part of electronic design automation (EDA) systems, especially for complex circuits, such as application-specific integrated circuits, microprocessors, and programmable logic devices. Motivation: Due to the exploding complexity of digital electronic circuits since the 1970s (see Moore's law), circuit designers needed digital logic descriptions to be performed at a high level without being tied to a specific electronic technology, such as ECL, TTL or CMOS. HDLs were created to implement register-transfer level abstraction, a model of the data flow and timing of a circuit.There are two major hardware description languages: VHDL and Verilog. There are different types of description in them: "dataflow, behavioral and structural". Structure of HDL: HDLs are standard text-based expressions of the structure of electronic systems and their behaviour over time. Like concurrent programming languages, HDL syntax and semantics include explicit notations for expressing concurrency. However, in contrast to most software programming languages, HDLs also include an explicit notion of time, which is a primary attribute of hardware. Languages whose only characteristic is to express circuit connectivity between a hierarchy of blocks are properly classified as netlist languages used in electric computer-aided design. HDL can be used to express designs in structural, behavioral or register-transfer-level architectures for the same circuit functionality; in the latter two cases the synthesizer decides the architecture and logic gate layout. Structure of HDL: HDLs are used to write executable specifications for hardware. A program designed to implement the underlying semantics of the language statements and simulate the progress of time provides the hardware designer with the ability to model a piece of hardware before it is created physically. It is this executability that gives HDLs the illusion of being programming languages, when they are more precisely classified as specification languages or modeling languages. Simulators capable of supporting discrete-event (digital) and continuous-time (analog) modeling exist, and HDLs targeted for each are available. 
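To make the explicit notion of time concrete, the following minimal Python sketch illustrates the core mechanism that discrete-event simulators implement: a timestamp-ordered event queue driving a clocked process (here, a D flip-flop sampling on rising clock edges). All names are illustrative assumptions, not the API of any real simulator.

```python
# Minimal sketch of discrete-event simulation semantics: events are
# (time, action) pairs processed in timestamp order.
import heapq

events = []   # priority queue of (time_ns, sequence, action)
seq = 0

def schedule(time_ns, action):
    global seq
    heapq.heappush(events, (time_ns, seq, action))  # seq breaks timestamp ties
    seq += 1

signals = {"clk": 0, "d": 1, "q": 0}

def toggle_clock(t):
    signals["clk"] ^= 1
    if signals["clk"]:                 # rising edge: flip-flop samples D
        signals["q"] = signals["d"]
        print(f"{t} ns: q <= {signals['q']}")
    schedule(t + 5, lambda: toggle_clock(t + 5))   # 10 ns clock period

schedule(0, lambda: toggle_clock(0))
while events and events[0][0] <= 40:   # run 40 ns of simulated time
    t, _, action = heapq.heappop(events)
    action()
```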
Structure of HDL: Comparison with control-flow languages: It is certainly possible to represent hardware semantics using traditional programming languages such as C++, which operate on control flow semantics as opposed to data flow, although to function as such, programs must be augmented with extensive and unwieldy class libraries. Generally, however, software programming languages do not include any capability for explicitly expressing time, and thus cannot function as hardware description languages. Before the introduction of SystemVerilog in 2002, C++ integration with a logic simulator was one of the few ways to use object-oriented programming in hardware verification. SystemVerilog was the first major HDL to offer object orientation and garbage collection. Structure of HDL: Using the proper subset of a hardware description language, a program called a synthesizer, or logic synthesis tool, can infer hardware logic operations from the language statements and produce an equivalent netlist of generic hardware primitives to implement the specified behaviour. Synthesizers generally ignore the expression of any timing constructs in the text; digital logic synthesizers, for example, generally use clock edges as the way to time the circuit. The ability to have a synthesizable subset of the language does not itself make a hardware description language. History: The first hardware description languages appeared in the late 1960s, looking like more traditional languages. The first that had a lasting effect was described in 1971 in C. Gordon Bell and Allen Newell's text Computer Structures. This text introduced the concept of register transfer level, first used in the ISP language to describe the behavior of the Digital Equipment Corporation (DEC) PDP-8. The language became more widespread with the introduction of DEC's PDP-16 RT-Level Modules (RTMs) and a book describing their use. History: At least two implementations of the basic ISP language (ISPL and ISPS) followed. ISPS was well suited to describe relations between the inputs and the outputs of the design and was quickly adopted by commercial teams at DEC, as well as by a number of research teams both in the US and among its NATO allies. The RTM products never took off commercially and DEC stopped marketing them in the mid-1980s, as new techniques and in particular very-large-scale integration (VLSI) became more popular. History: Separate work done about 1979 at the University of Kaiserslautern produced a language called KARL ("KAiserslautern Register Transfer Language"), which included design calculus language features supporting VLSI chip floorplanning and structured hardware design. This work was also the basis of KARL's interactive graphic sister language ABL, whose name was an initialism for "A Block diagram Language". ABL was implemented in the early 1980s by the Centro Studi e Laboratori Telecomunicazioni (CSELT) in Torino, Italy, producing the ABLED graphic VLSI design editor. In the mid-1980s, a VLSI design framework was implemented around KARL and ABL by an international consortium funded by the Commission of the European Union. By the late 1970s, design using programmable logic devices (PLDs) became popular, although these designs were primarily limited to designing finite-state machines. The work at Data General in 1980 used these same devices to design the Data General Eclipse MV/8000, and commercial need began to grow for a language that could map well to them.
By 1983 Data I/O introduced ABEL to fill that need. History: In 1985, as design shifted to VLSI, Gateway Design Automation introduced Verilog, and Intermetrics released the first completed version of the VHSIC Hardware Description Language (VHDL). VHDL was developed at the behest of the United States Department of Defense's VHSIC program, and was based on the Ada programming language, as well as on the experience gained with the earlier development of ISPS. Initially, Verilog and VHDL were used to document and simulate circuit designs already captured and described in another form (such as schematic files). HDL simulation enabled engineers to work at a higher level of abstraction than simulation at the schematic level, and thus increased design capacity from hundreds of transistors to thousands. In 1986, with the support of the U.S. Department of Defense, VHDL was sponsored as an IEEE standard (IEEE Std 1076), and the first IEEE-standardized version of VHDL, IEEE Std 1076-1987, was approved in December 1987. Cadence Design Systems later acquired Gateway Design Automation for the rights to Verilog-XL, the HDL simulator that would become the de facto standard of Verilog simulators for the next decade. History: The introduction of logic synthesis for HDLs pushed HDLs from the background into the foreground of digital design. Synthesis tools compiled HDL source files (written in a constrained format called RTL) into a manufacturable netlist description in terms of gates and transistors. Writing synthesizable RTL files required practice and discipline on the part of the designer; compared to a traditional schematic layout, synthesized RTL netlists were almost always larger in area and slower in performance. A circuit design from a skilled engineer, using labor-intensive schematic-capture/hand-layout, would almost always outperform its logically synthesized equivalent, but the productivity advantage held by synthesis soon displaced digital schematic capture to exactly those areas that were problematic for RTL synthesis: extremely high-speed, low-power, or asynchronous circuitry. History: Within a few years, VHDL and Verilog emerged as the dominant HDLs in the electronics industry, while older and less capable HDLs gradually disappeared from use. However, VHDL and Verilog share many of the same limitations, such as being unsuitable for analog or mixed-signal circuit simulation. Specialized HDLs (such as Confluence) were introduced with the explicit goal of fixing specific limitations of Verilog and VHDL, though none were ever intended to replace them. History: Over the years, much effort has been invested in improving HDLs. The latest iteration of Verilog, formally known as IEEE 1800-2005 SystemVerilog, introduces many new features (classes, random variables, and properties/assertions) to address the growing need for better test bench randomization, design hierarchy, and reuse. A future revision of VHDL is also in development, and is expected to match SystemVerilog's improvements. Design using HDL: As a result of the efficiency gains realized using HDL, a majority of modern digital circuit design revolves around it. Most designs begin as a set of requirements or a high-level architectural diagram. Control and decision structures are often prototyped in flowchart applications, or entered in an editor. The process of writing the HDL description is highly dependent on the nature of the circuit and the designer's preference for coding style.
The HDL is merely the 'capture language', often beginning with a high-level algorithmic description such as a C++ mathematical model. Designers often use scripting languages such as Perl to automatically generate repetitive circuit structures in the HDL language. Special text editors offer features for automatic indentation, syntax-dependent coloration, and macro-based expansion of the entity/architecture/signal declaration. Design using HDL: The HDL code then undergoes a code review, or auditing. In preparation for synthesis, the HDL description is subject to an array of automated checkers. The checkers report deviations from standardized code guidelines, identify potential ambiguous code constructs before they can cause misinterpretation, and check for common logical coding errors, such as floating ports or shorted outputs. This process aids in resolving errors before the code is synthesized. Design using HDL: In industry parlance, HDL design generally ends at the synthesis stage. Once the synthesis tool has mapped the HDL description into a gate netlist, the netlist is passed off to the back-end stage. Depending on the physical technology (FPGA, ASIC gate array, ASIC standard cell), HDLs may or may not play a significant role in the back-end flow. In general, as the design flow progresses toward a physically realizable form, the design database becomes progressively more laden with technology-specific information, which cannot be stored in a generic HDL description. Finally, an integrated circuit is manufactured or programmed for use. Simulating and debugging HDL code: Essential to HDL design is the ability to simulate HDL programs. Simulation allows an HDL description of a design (called a model) to pass design verification, an important milestone that validates the design's intended function (specification) against the code implementation in the HDL description. It also permits architectural exploration. The engineer can experiment with design choices by writing multiple variations of a base design, then comparing their behavior in simulation. Thus, simulation is critical for successful HDL design. Simulating and debugging HDL code: To simulate an HDL model, an engineer writes a top-level simulation environment (called a test bench). At minimum, a testbench contains an instantiation of the model (called the device under test or DUT), pin/signal declarations for the model's I/O, and a clock waveform. The testbench code is event driven: the engineer writes HDL statements to implement the (testbench-generated) reset-signal, to model interface transactions (such as a host–bus read/write), and to monitor the DUT's output. An HDL simulator — the program that executes the testbench — maintains the simulator clock, which is the master reference for all events in the testbench simulation. Events occur only at the instants dictated by the testbench HDL (such as a reset-toggle coded into the testbench), or in reaction (by the model) to stimulus and triggering events. Modern HDL simulators have full-featured graphical user interfaces, complete with a suite of debug tools. These allow the user to stop and restart the simulation at any time, insert simulator breakpoints (independent of the HDL code), and monitor or modify any element in the HDL model hierarchy. Modern simulators can also link the HDL environment to user-compiled libraries, through a defined PLI/VHPI interface. Linking is system-dependent (x86, SPARC etc. 
running Windows/Linux/Solaris), as the HDL simulator and user libraries are compiled and linked outside the HDL environment. Simulating and debugging HDL code: Design verification is often the most time-consuming portion of the design process, due to the disconnect between a device's functional specification, the designer's interpretation of the specification, and the imprecision of the HDL language. The majority of the initial test/debug cycle is conducted in the HDL simulator environment, as the early stage of the design is subject to frequent and major circuit changes. An HDL description can also be prototyped and tested in hardware — programmable logic devices are often used for this purpose. Hardware prototyping is more expensive than HDL simulation, but offers a real-world view of the design. Prototyping is the best way to check interfacing against other hardware devices and hardware prototypes. Even prototypes running on slow FPGAs offer much shorter simulation times than pure HDL simulation. Design verification with HDLs: Historically, design verification was a laborious, repetitive loop of writing and running simulation test cases against the design under test. As chip designs have grown larger and more complex, the task of design verification has grown to the point where it now dominates the schedule of a design team. Looking for ways to improve design productivity, the electronic design automation industry developed the Property Specification Language. Design verification with HDLs: In formal verification terms, a property is a factual statement about the expected or assumed behavior of another object. Ideally, for a given HDL description, a property or properties can be proven true or false using formal mathematical methods. In practical terms, many properties cannot be proven because they occupy an unbounded solution space. However, if provided a set of operating assumptions or constraints, a property checker can prove (or disprove) certain properties by narrowing the solution space. Design verification with HDLs: The assertions do not model circuit activity, but capture and document the designer's intent in the HDL code. In a simulation environment, the simulator evaluates all specified assertions, reporting the location and severity of any violations. In a synthesis environment, the synthesis tool usually operates with the policy of halting synthesis upon any violation. Assertion-based verification is still in its infancy, but is expected to become an integral part of the HDL design toolset. HDL and programming languages: An HDL is grossly similar to a software programming language, but there are major differences. Most programming languages are inherently procedural (single-threaded), with limited syntactical and semantic support to handle concurrency. HDLs, on the other hand, resemble concurrent programming languages in their ability to model multiple parallel processes (such as flip-flops and adders) that automatically execute independently of one another. Any change to the process's input automatically triggers an update in the simulator's process stack. HDL and programming languages: Both programming languages and HDLs are processed by a compiler (often called a synthesizer in the HDL case), but with different goals. For HDLs, "compiling" refers to logic synthesis: the process of transforming the HDL code listing into a physically realizable gate netlist.
The netlist output can take any of many forms: a "simulation" netlist with gate-delay information, a "handoff" netlist for post-synthesis placement and routing on a semiconductor die, or a generic industry-standard Electronic Design Interchange Format (EDIF) (for subsequent conversion to a JEDEC-format file). HDL and programming languages: On the other hand, a software compiler converts the source-code listing into microprocessor-specific object code for execution on the target microprocessor. As HDLs and programming languages borrow concepts and features from each other, the boundary between them is becoming less distinct. However, pure HDLs are unsuitable for general-purpose application software development, just as general-purpose programming languages are undesirable for modeling hardware. HDL and programming languages: Yet as electronic systems grow increasingly complex, and reconfigurable systems become increasingly common, there is growing desire in the industry for a single language that can perform some tasks of both hardware design and software programming. SystemC is an example of such—embedded system hardware can be modeled as non-detailed architectural blocks (black boxes with modeled signal inputs and output drivers). The target application is written in C or C++ and natively compiled for the host-development system, as opposed to targeting the embedded CPU, which requires host-simulation of the embedded CPU or an emulated CPU. HDL and programming languages: The high level of abstraction of SystemC models is well suited to early architecture exploration, as architectural modifications can be easily evaluated with little concern for signal-level implementation issues. However, the threading model used in SystemC relies on shared memory, causing the language not to handle parallel execution or low-level models well. High-level synthesis: In their level of abstraction, HDLs have been compared to assembly languages. There are attempts to raise the abstraction level of hardware design in order to reduce the complexity of programming in HDLs, creating a sub-field called high-level synthesis. High-level synthesis: Companies such as Cadence, Synopsys and Agility Design Solutions are promoting SystemC as a way to combine high-level languages with concurrency models to allow faster design cycles for FPGAs than is possible using traditional HDLs. Approaches based on standard C or C++ (with libraries or other extensions allowing parallel programming) are found in the Catapult C tools from Mentor Graphics, and the Impulse C tools from Impulse Accelerated Technologies. High-level synthesis: A similar initiative from Intel is the use of Data Parallel C++, related to SYCL, as a high-level synthesis language. Annapolis Micro Systems, Inc.'s CoreFire Design Suite and National Instruments LabVIEW FPGA provide a graphical dataflow approach to high-level design entry, and languages such as SystemVerilog, SystemVHDL, and Handel-C seek to accomplish the same goal, but are aimed at making existing hardware engineers more productive, rather than making FPGAs more accessible to existing software engineers. It is also possible to design hardware modules using MATLAB and Simulink with the MathWorks HDL Coder tool, DSP Builder for Intel FPGAs, or Xilinx System Generator (XSG) from Xilinx. Examples of HDLs: HDLs for analog circuit design; HDLs for digital circuit design. The two most widely used and well-supported HDL varieties used in industry are Verilog and VHDL.
HDLs for printed circuit board design Several projects exist for defining printed circuit board connectivity using language-based, textual-entry methods.
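Because the synthesis flow described above centers on netlists, the following minimal Python sketch shows one way a gate-level (combinational, acyclic) netlist might be represented and evaluated. The structure and names are assumptions for illustration, not any tool's actual format.

```python
# Illustrative-only sketch of a gate-level netlist: each entry maps an
# output net to (primitive, input nets). Evaluation walks nets recursively;
# this handles combinational (acyclic) logic only.
netlist = {
    "n1":  ("AND", ["a", "b"]),
    "n2":  ("NOT", ["c"]),
    "out": ("OR",  ["n1", "n2"]),
}

PRIMITIVES = {
    "AND": lambda ins: all(ins),
    "OR":  lambda ins: any(ins),
    "NOT": lambda ins: not ins[0],
}

def evaluate(net, inputs):
    """Return the boolean value of a net given primary input values."""
    if net in inputs:                  # primary input
        return inputs[net]
    gate, fanin = netlist[net]
    return PRIMITIVES[gate]([evaluate(n, inputs) for n in fanin])

print(evaluate("out", {"a": True, "b": False, "c": False}))  # True
```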
**Torture in fiction** Torture in fiction: In fictional representations, torture is often portrayed as a method for obtaining information through interrogation. Unlike the real-world practice of torture, fictional representations of torture are often portrayed as professional and efficient methods of obtaining reliable information, and as selective rather than indiscriminate. Torture can be a convenient plot device to extract information, and when the hero is the torturer, it almost always works, usually quickly. Popular culture representations have an effect on how torture is practiced in the real world; United States Army interrogators as well as the staff at Guantanamo Bay have copied torture techniques that they learned from TV. Positive depictions of torture during the Algerian War of Independence helped shape the public perception of torture, a trend that continued with American media produced after the September 11 attacks. Background: Torture, defined as agents of the government inflicting severe pain or suffering on someone, is illegal under international law under all circumstances. Fictional depictions by conflict: Algerian War of Independence Fictional depictions of torture during the Algerian War of Independence, especially The Battle of Algiers, Lost Command, and The Centurions, were especially influential in shaping popular perceptions of torture, as they were much better known than the actual events. The Centurions introduced the ticking time bomb scenario, in which, under torture, a National Liberation Front (FLN) operative quickly exposed the location of fifteen bombs. Political scientist Darius Rejali argues that "The point of [The Centurions] is that failing to torture is the sissy's response; only a real man knows what to do." The Battle of Algiers misrepresents the history of the battle in order to imply that selective French use of torture against insurgents caused its victory (in fact, the torture was much more indiscriminate than portrayed). During the war in Afghanistan, United States servicemen were inspired by these fictional portrayals when interrogating enemies. In the twenty-first century, French films shifted to portraying torture negatively, including La Trahison (2005) and Mon colonel (2006). The 2007 French film Intimate Enemies explored perpetrator trauma resulting from the Algerian War. Fictional depictions by conflict: Cold War Torture in preparation for show trials in the Eastern Bloc was depicted in the 1970 French film The Confession, based on the memoirs of Slansky trial defendant Artur London. Scriptwriter Jorge Semprún said that the intent of the film was not to overdo torture scenes that would alienate the viewer, but rather to show "the slow erasure of a man through isolation, hunger, cold, exhaustion". Fictional depictions by conflict: War on terror After the September 11 attacks, the United States started a state-sanctioned torture program as part of the war on terror. The George W. Bush administration rejected the label "torture" for its practices, calling them "enhanced interrogation techniques". The effectiveness of the United States torture program was limited, with many detainees refusing to talk or providing false information. The amount of torture depicted on American television increased dramatically. The United States TV series 24 (2001–2010) was inspired by the earlier depictions of the Algerian War, and other 2000s TV shows such as Star Trek: Enterprise, The Shield, and LOST also portray the heroes as torturers.
The hero of 24, Jack Bauer, is regularly depicted torturing antagonists using a variety of torture methods; hardened terrorists are depicted as giving in quickly and revealing important information. David Danzig, the director of Human Rights First's campaign against torture, calls the TV series "an advertisement for torture" that targets both villains and torture opponents. The only person who is not successfully tortured is Bauer, who temporarily dies from torture in the second season. In order to combat the unrealistic portrayal of torture in American television and movies, Human Rights First went to Hollywood in 2006 with Stuart Herrington, a former intelligence officer during the Vietnam War, Patrick Finnegan, the dean of West Point, and FBI interrogation expert Joe Navarro. They met with LOST producer Jeff Pinkner, who told them that he had never considered that "what we came up with in our fevered minds might have any impact on the way these things were done in the real world". Kiefer Sutherland, the actor who played Bauer, explained that the TV show is just supposed to be entertainment, rather than influencing political debate. According to industry experts, one reason why torture scenes are preferred is that they can be done quickly, fitting into a short runtime. Adam Fierro, the producer of 24, realized that realistic depiction of torture was an unfilled market niche, one he decided to fill with his TV series The Shield, which features an innocent man who is tortured to death. Human Rights First created an educational film, Primetime Torture, which it distributed to military educators in order to help them explain that TV depictions of torture are not realistic. According to Rejali, the documentary Taxi to the Dark Side inaccurately portrays a CIA science of torture that did not exist and exonerates low-level soldiers for the killing of Dilawar "in nonemergency conditions and using ordinary military techniques". The CIA was involved in the filming of the 2012 film Zero Dark Thirty, which has been criticized for its portrayal of torture. Fictional depictions by conflict: Israeli–Palestinian conflict Israeli films set during the intifadas have also featured torture. Science fiction: Star Trek television shows have depicted torture in numerous episodes. The protagonists are only depicted as torturers in four out of 21 cases of torture; torture is depicted as effective in 11 out of the 21 cases. The most in-depth depiction of torture is in the episode "Chain of Command" from Star Trek: The Next Generation, in which Captain Jean-Luc Picard is captured by the Cardassians and tortured by Gul Madred, who repeatedly shows Picard four lights and tries to get him to say there are five. Although in Star Trek the torture victims usually recover at once, Picard requires rehabilitation after being rescued. Effects: A 2018 study found that viewing media that depicted torture as effective increased support for it, while a 2021 study did not find evidence that watching cinematic depictions of torture affected public opinion on torture. In 2003, the Pentagon screened The Battle of Algiers as an example of what tactics they might face during the United States invasion of Iraq. Celebrities such as Supreme Court justice Antonin Scalia, Bush administration officials John Yoo and Michael Chertoff, former president Bill Clinton, and Republican presidential candidate Tom Tancredo all cited 24 during debates on torture, often to excuse or normalize it.
Popular culture representations have an effect on how torture is practiced in the real world; United States Army interrogators as well as the staff at Guantanamo Bay have copied torture techniques that they learned from film. United States military instructors report that their trainees often cite 24 as a reason why torture is sometimes justified.
**Erythropoietic protoporphyria** Erythropoietic protoporphyria: Erythropoietic protoporphyria (commonly called EPP) is a form of porphyria, which varies in severity and can be very painful. It arises from a deficiency in the enzyme ferrochelatase, leading to abnormally high levels of protoporphyrin in the red blood cells (erythrocytes), plasma, skin, and liver. The severity varies significantly from individual to individual. A clinically similar form of porphyria, known as X-linked dominant protoporphyria, was identified in 2008. Presentation: EPP usually presents in childhood, most commonly as acute photosensitivity of the skin. It affects areas exposed to the sun and tends to be intractable. A few minutes of exposure to the sun induces pruritus, erythema, swelling and pain. Longer periods of exposure may induce second-degree burns. After repetitive exposure, patients may present with lichenification, hypopigmentation, hyperpigmentation and scarring of the skin. EPP usually first presents in childhood, and most often affects the face, the upper surfaces of the arms, hands, and feet, and the exposed surfaces of the legs. In less severe cases, most patients manifest symptoms at the onset of puberty, when male and female hormone levels rise during sexual development and maintenance. More severe EPP can manifest in infancy. EPP can be triggered by exposure to the sun even when the patient is behind glass; even the UV emissions from arc welding, with a full protective mask in use, have been known to trigger EPP. EPP can also manifest between the ages of 3 and 6. Prolonged exposure to the sun can lead to edema of the hands, face, and feet, rarely with blistering and petechiae. Skin thickening can sometimes occur over time. People with EPP are also at increased risk of developing gallstones, and one study has noted that EPP patients suffer from vitamin D deficiency. Presentation: Liver failure Protoporphyrin accumulates to toxic levels in the liver in 5–20% of EPP patients, leading to liver failure. The spectrum of hepatobiliary disease associated with EPP is wide. It includes cholelithiasis, mild parenchymal liver disease, progressive hepatocellular disease and end-stage liver disease. A lack of diagnostic markers for liver failure makes it difficult to predict which patients may experience liver failure, and the mechanism of liver failure is poorly understood. A retrospective European study identified 31 EPP patients receiving a liver transplant between 1983 and 2008, with phototoxic reactions in 25% of patients who were unprotected by surgical light filters. The same study noted a 69% recurrence of the disease in the grafted organ. Five UK liver transplants for EPP have been identified between 1987 and 2009. Frequent liver testing is recommended in EPP patients, as no effective therapy to manage liver failure has been identified to date. Presentation: Pregnancy EPP photosensitivity symptoms are reported to lessen in some female patients during pregnancy and menstruation, although this phenomenon is not consistent and its mechanism is not understood. Genetics: Most cases of EPP result from inborn errors of metabolism, but the metabolic defect in some patients may be acquired. Mutation of the gene that encodes ferrochelatase, on the long arm of chromosome 18, is found in the majority of cases. Ferrochelatase (FECH) catalyzes the insertion of ferrous iron into the protoporphyrin IX ring to form heme.
EPP exhibits both recessive and dominant patterns of inheritance and a high degree of allelic heterogeneity with incomplete penetrance. Most heterozygotes are asymptomatic. Symptoms do not occur unless FECH activity is less than 30% of normal, but such low levels are not present in the majority of patients. Pathophysiology: Cells which synthesize heme are predominantly erythroblasts/reticulocytes in the bone marrow (80%) and hepatocytes (20%). Deficiency of FECH results in increased release of protoporphyrin, which binds to albumin in plasma and subsequently undergoes hepatic extraction. Normally, most protoporphyrin in hepatocytes is secreted into bile; the remainder undergoes transformation into heme. Some protoporphyrin in bile is returned to the liver as a consequence of the enterohepatic circulation; the remaining protoporphyrin in the intestine undergoes fecal excretion. Protoporphyrin is insoluble and hence unavailable for renal excretion. In EPP, subnormal biotransformation of protoporphyrin into heme results in accumulation of protoporphyrin in hepatocytes. Since FECH deficiency is associated with increased concentrations of protoporphyrin in erythrocytes, plasma, skin and liver, retention of protoporphyrin in the skin predisposes to acute photosensitivity. As a result of absorption of ultraviolet and visible light (peak sensitivity at 400 nm, with lesser peaks between 500–625 nm) by protoporphyrin in plasma and erythrocytes when blood circulates through the dermal vessels, free radicals are formed, erythrocytes become unstable and injury to the skin is induced. A significant increase in the hepatobiliary excretion of protoporphyrin can damage the liver through both cholestatic phenomena and oxidative stress, predisposing to hepatobiliary disease of varying degrees of severity. Diagnosis: EPP is generally suspected by the presence of acute photosensitivity of the skin and can be confirmed by detection of a plasma fluorescence peak at 634 nm. It is also useful to find increased levels of protoporphyrin in feces and to demonstrate an excess of free protoporphyrin in erythrocytes. Screening for a FECH mutation on one allele or an aminolevulinic acid synthase 2 gain-of-function mutation in selected family members may be useful, especially in genetic counseling. Diagnosis: Liver biopsy confirms hepatic disease in EPP by the presence of protoporphyrin deposits in the hepatocytes, which can be observed as a brown pigment within the biliary canaliculi and the portal macrophages. Macroscopically, the cirrhotic liver can have a black color due to protoporphyrin deposits. Under polarized light, the characteristic Maltese-cross shape of birefringent crystalline pigment deposits is found. Examination of liver tissue under a Wood's lamp reveals a red fluorescence due to protoporphyrin. Liver biopsy is not helpful for estimating the prognosis of liver disease. Treatment: There is no cure for this disorder; however, symptoms can usually be managed by limiting exposure to daytime sun and some types of artificial lighting. Most types of artificial lighting emit light in the problematic wavelengths, with fluorescent lighting being the worst offender. Color temperature can be a good indicator of which light is most detrimental: the higher the color temperature, the more violet light (380–450 nm) is emitted. Incandescent and LED lighting in the soft white range (2700–3000 K) produce the least problematic light. Additionally, selecting lower-wattage bulbs can reduce the overall output of light.
Treatment: Since the photosensitivity results from light in the visible spectrum, most sunscreens are of little use (with the exception of non-nano zinc oxide, which provides uniform protection between 290–400 nm and some protection up to 700 nm). Sun-protective clothing can also be very helpful, although clothing UPF values are rated only on UV protection (up to 400 nm) and not on protection from the visible spectrum. Some sun-protective clothing manufacturers use zinc oxide in their fabrics, such as Coolibar's ZnO Suntect line, which offers protection from visible light. Some patients gradually build a protective layer of melanin by regularly exposing themselves for short times to ultraviolet radiation. Window films which block UV and visible light up to 450 nm can provide relief from symptoms if applied to the patient's automobile and home windows; an example is Madico Amber 81, which can protect through the 500 nm range. Treatment: Blue-blocking screen protectors can help provide relief from symptoms caused by televisions, phones, tablets and computer screens. EPP is considered one of the least severe of the porphyrias; unless there is liver failure, it is not a life-threatening disease. Approved therapies Afamelanotide, developed by the Australian-based Clinuvel Pharmaceuticals, was approved in Europe in December 2014 and in the United States in October 2019 for the treatment or prevention of phototoxicity in adults with EPP. Treatment: Off-label therapies Several drugs are used off-label by patients with EPP: Ursodeoxycholic acid is a bile acid that is administered to promote biliary secretion of protoporphyrin. Results of its use in EPP are controversial; however, it is known to alter the composition of bile, to protect hepatocytes from the cytotoxic effect of hydrophobic bile acids, and to stimulate biliary secretion by several distinct mechanisms. Treatment: Hematin appears to reduce excess protoporphyrin production in the bone marrow. It has been administered to patients with EPP (3–4 mg/kg iv) who develop a crisis after liver transplantation. Plasmapheresis can also decrease the levels of protoporphyrin in plasma; however, its use in treating acute episodes is controversial. Cholestyramine is an orally administered resin which reduces circulating levels of protoporphyrin by binding to protoporphyrin in the intestine and hence interrupting the enterohepatic circulation; it is usually used in combination with other treatment approaches. Activated carbon, like cholestyramine, binds to protoporphyrin in the intestine and prevents its absorption. It is cheap and readily available, and it seems to be effective in reducing circulating protoporphyrin levels. Bone marrow transplantation, liver transplantation, acetylcysteine, extracorporeal albumin dialysis, parenteral iron and transfusion of erythrocytes are alternative options for the treatment of EPP. Over-the-counter drugs Some over-the-counter drugs may help: Proferrin is an oral heme supplement which may work similarly to Hematin. B. subtilis (a gram-positive soil probiotic) produces ferrochelatase, which may be able to convert some of the protoporphyrin in the intestine into heme. Beta carotene may also help, though a recent meta-analysis of carotene treatment has called its effectiveness into question. Epidemiology: Case reports suggest that EPP is prevalent globally.
The prevalence has been estimated at somewhere between 1 in 75,000 and 1 in 200,000; however, it has been noted that the prevalence of EPP may be increasing due to a better understanding of the disease and improved diagnosis. An estimated 5,000–10,000 individuals worldwide have EPP. EPP is considered the most common form of porphyria in children. The prevalence in Sweden has been published as 1:180,000. History: Erythropoietic protoporphyria was first described in 1953 by Kosenow and Treibs, and the description was completed in 1960 by Magnus et al. at the St John's Institute of Dermatology in London.
**Clipping (computer graphics)** Clipping (computer graphics): Clipping, in the context of computer graphics, is a method to selectively enable or disable rendering operations within a defined region of interest. Mathematically, clipping can be described using the terminology of constructive geometry. A rendering algorithm only draws pixels in the intersection between the clip region and the scene model. Lines and surfaces outside the view volume (also known as the frustum) are removed. Clip regions are commonly specified to improve render performance. A well-chosen clip allows the renderer to save time and energy by skipping calculations related to pixels that the user cannot see. Pixels that will be drawn are said to be within the clip region. Pixels that will not be drawn are outside the clip region. More informally, pixels that will not be drawn are said to be "clipped." In 2D graphics: In two-dimensional graphics, a clip region may be defined so that pixels are only drawn within the boundaries of a window or frame. Clip regions can also be used to selectively control pixel rendering for aesthetic or artistic purposes. In many implementations, the final clip region is the composite (or intersection) of one or more application-defined shapes, as well as any system hardware constraints. In one example application, consider an image editing program. A user application may render the image into a viewport. As the user zooms and scrolls to view a smaller portion of the image, the application can set a clip boundary so that pixels outside the viewport are not rendered. In addition, GUI widgets, overlays, and other windows or frames may obscure some pixels from the original image. In this sense, the clip region is the composite of the application-defined "user clip" and the "device clip" enforced by the system's software and hardware implementation. Application software can take advantage of this clip information to save computation time, energy, and memory, avoiding work related to pixels that aren't visible. In 3D graphics: In three-dimensional graphics, the terminology of clipping can be used to describe many related features. Typically, "clipping" refers to operations in the plane that work with rectangular shapes, and "culling" refers to more general methods to selectively process scene model elements. This terminology is not rigid, and exact usage varies among many sources. In 3D graphics: Scene model elements include geometric primitives: points or vertices; line segments or edges; polygons or faces; and more abstract model objects such as curves, splines, surfaces, and even text. In complicated scene models, individual elements may be selectively disabled (clipped) for reasons including visibility within the viewport (frustum culling), orientation (backface culling), and obscuration by other scene or model elements (occlusion culling, depth- or "z"-clipping). Sophisticated algorithms exist to efficiently detect and perform such clipping. Many optimized clipping methods rely on specific hardware acceleration logic provided by a graphics processing unit (GPU). In 3D graphics: The concept of clipping can be extended to higher dimensionality using methods of abstract algebraic geometry. In 3D graphics: Near clipping Beyond the projection of vertices and 2D clipping, near clipping is required to correctly rasterise 3D primitives; this is because vertices may have been projected behind the eye. Near clipping ensures that all the vertices used have valid 2D coordinates.
Together with far clipping, it also helps prevent overflow of depth-buffer values. Some early texture mapping hardware (using forward texture mapping) in video games suffered from complications associated with near clipping and UV coordinates. In 3D graphics: Occlusion clipping (Z- or depth clipping): In 3D computer graphics, "Z" often refers to the depth axis in the system of coordinates centered at the viewport origin: "Z" is used interchangeably with "depth", and conceptually corresponds to the distance "into the virtual screen." In this coordinate system, "X" and "Y" therefore refer to a conventional cartesian coordinate system laid out on the user's screen or viewport. This viewport is defined by the geometry of the viewing frustum, and parameterizes the field of view. In 3D graphics: Z-clipping, or depth clipping, refers to techniques that selectively render certain scene objects based on their depth relative to the screen. Most graphics toolkits allow the programmer to specify a "near" and "far" clip depth, and only portions of objects between those two planes are displayed. A creative application programmer can use this method to render visualizations of the interior of a 3D object in the scene. For example, a medical imaging application could use this technique to render the organs inside a human body. A video game programmer can use clipping information to accelerate game logic. For example, a tall wall or building that occludes other game entities can save GPU time that would otherwise be spent transforming and texturing items in the rear areas of the scene; and a tightly integrated software program can use this same information to save CPU time by optimizing out game logic for objects that aren't seen by the player. Algorithms: Line clipping algorithms include Cohen–Sutherland, Liang–Barsky, Fast clipping, Cyrus–Beck, Nicholl–Lee–Nicholl, and the Skala O(lg N) algorithm. Polygon clipping algorithms include Greiner–Hormann, Sutherland–Hodgman, Weiler–Atherton, and Vatti. A related rendering methodology is the painter's algorithm.
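To make the line-clipping idea concrete, here is a minimal Python sketch of the Cohen–Sutherland algorithm named in the list above. The window bounds and the sample segment are hypothetical values chosen for illustration, and the function names are invented for this sketch.

```python
# Minimal Cohen–Sutherland line clipping against an axis-aligned window.
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Classify a point into one of the nine regions around the clip window."""
    code = INSIDE
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def clip_line(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Return the clipped segment, or None if it lies entirely outside."""
    c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    while True:
        if not (c0 | c1):      # both endpoints inside: trivially accept
            return (x0, y0, x1, y1)
        if c0 & c1:            # both in the same outside region: trivially reject
            return None
        c = c0 or c1           # pick an endpoint that is outside
        # Intersect the segment with the window edge indicated by the outcode.
        if c & TOP:
            x, y = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0), ymax
        elif c & BOTTOM:
            x, y = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0), ymin
        elif c & RIGHT:
            x, y = xmax, y0 + (y1 - y0) * (xmax - x0) / (x1 - x0)
        else:  # LEFT
            x, y = xmin, y0 + (y1 - y0) * (xmin - x0) / (x1 - x0)
        if c == c0:
            x0, y0, c0 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)
        else:
            x1, y1, c1 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)

# Example: a segment crossing the window [0,10] x [0,10] is clipped to (0,4)-(10,6).
print(clip_line(-5, 3, 15, 7, 0, 0, 10, 10))
```

The outcodes make the common cases cheap: segments entirely inside are accepted and segments entirely to one side are rejected without computing any intersections, which is exactly the performance rationale for clip regions described above.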
**Lectromec** Lectromec: Lectromec Design Co. is a Dulles, Virginia-based engineering firm specializing in aircraft electrical wiring interconnection system certification and testing. Lectromec's ISO 17025 accredited laboratory is equipped to test and analyze electrical systems of various types for a variety of industries. Lectromec's research focuses on understanding the electrical and physical properties of wiring insulation and the ill effects of damaged wiring. History: Lectromec was founded in 1984 by Dr. Armin Bruning and initially worked with the United States Navy to evaluate problems with the wiring on several of its aircraft. As the company has grown, Lectromec has worked with a diverse host of customers including foreign and domestic militaries, private sector businesses, wire manufacturers, and government agencies. As the aging wire issue became more visible and increasingly critical, Lectromec responded by offering solutions to help minimize wiring-related problems. History: Personnel from Lectromec were among the experts called before the United States Congress to testify about the fatal accident involving TWA Flight 800, where they brought to the representatives' attention how damaging flawed wiring on aircraft could be. Lectromec has consulted and tested wiring for the aerospace industry for over 30 years. Currently, Lectromec employs a diverse staff trained to perform tasks such as wire testing, evaluation, and risk assessment. Lectromec owns several wire-related patents and works on developing wire maintenance technology in the field of electrical wiring interconnection systems (EWIS) on aircraft.
**Convergent Linux Platform** Convergent Linux Platform: Convergent Linux Platform, or CLP for short, is an initiative of a la Mobile, inc. to bring to market an embedded-Linux mobile phone addressing heightened security concerns, as well as the first Linux-based smartphone operating system.
**IPhone accessories** IPhone accessories: The iPhone has a wide variety of accessories made by Apple available for it. EarPods: Apple EarPods (introduced on September 12, 2012) first shipped with the iPhone 5 and feature a remote control and microphone. They also ship with the fifth-generation iPod touch (without mic) and the seventh-generation iPod nano (without mic), and are sold independently as well. The Apple EarPods are assembled in Vietnam. EarPods: All but the basic earbuds have control capsules, located on the cable of the right earpiece, allowing users to adjust volume and control music and video playback; those "with Remote and Mic" also include a microphone for phone calls and voice control of certain devices. Users can adjust volume, control music and video playback (play/pause and next/previous), and record voice memos on supported iPod and iPhone models and Mac computers. There have been many reports of moisture problems with the remote/mic earbuds. The original iPhone and iPhone 3G came with the iPhone Stereo Headset, which has a push-button and microphone on the right side of the headphones (there is no volume control, and only limited control of calls). Dock: A series of docks for the iPhone 5, 5s, and 5c was announced and released on September 10, 2013. The docks have an identical design, with an audio-out and Lightning-in port on the back, and a Lightning connector on the top. One dock was released solely for the iPhone 5 and 5s, with another dock optimized for the iPhone 5c. The docks are not compatible with iPhones in cases. Case: To physically protect the phone, it is advisable to use cases that cover the entire back of the device. Many distributors offer alternatives in different models, materials, colors, and custom designs.
**Glycoside hydrolase family 4** Glycoside hydrolase family 4: In molecular biology, glycoside hydrolase family 4 is a family of glycoside hydrolases (EC 3.2.1.-), which are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of more than 100 different families. This classification is available on the CAZy web site, and is also discussed at CAZypedia, an online encyclopedia of carbohydrate-active enzymes. Glycoside hydrolase family 4 (CAZy GH_4) comprises enzymes with several known activities: 6-phospho-beta-glucosidase (EC 3.2.1.86); 6-phospho-alpha-glucosidase (EC 3.2.1.122); alpha-galactosidase (EC 3.2.1.22); and alpha-D-glucuronidase (EC 3.2.1.139). 6-phospho-alpha-glucosidase requires both NAD(H) and a divalent metal (Mn2+, Fe2+, Co2+, or Ni2+) for activity.
**Otenabant** Otenabant: Otenabant (CP-945,598) is a drug which acts as a potent and highly selective CB1 antagonist. It was developed by Pfizer for the treatment of obesity, but development for this application has been discontinued following the problems seen during clinical use of the similar drug rimonabant.
**Proof of work** Proof of work: Proof of work (PoW) is a form of cryptographic proof in which one party (the prover) proves to others (the verifiers) that a certain amount of a specific computational effort has been expended. Verifiers can subsequently confirm this expenditure with minimal effort on their part. The concept was invented by Moni Naor and Cynthia Dwork in 1993 as a way to deter denial-of-service attacks and other service abuses such as spam on a network by requiring some work from a service requester, usually meaning processing time by a computer. The term "proof of work" was first coined and formalized in a 1999 paper by Markus Jakobsson and Ari Juels. Proof of work was later popularized by Bitcoin as a foundation for consensus in a permissionless decentralized network, in which miners compete to append blocks and mine new currency, each miner experiencing a success probability proportional to the computational effort expended. PoW and PoS (proof of stake) remain the two best known Sybil deterrence mechanisms; in the context of cryptocurrencies they are the most common mechanisms. A key feature of proof-of-work schemes is their asymmetry: the work – the computation – must be moderately hard (yet feasible) on the prover or requester side but easy to check for the verifier or service provider. This idea is also known as a CPU cost function, client puzzle, computational puzzle, or CPU pricing function. Another common feature is built-in incentive structures that reward allocating computational capacity to the network with value in the form of cryptocurrency. The purpose of proof-of-work algorithms is not proving that certain work was carried out or that a computational puzzle was "solved", but deterring manipulation of data by establishing large energy and hardware-control requirements to be able to do so. Proof-of-work systems have been criticized by environmentalists for their energy consumption. Background: One popular system, used in Hashcash, uses partial hash inversions to prove that computation was done, as a goodwill token to send an e-mail. For instance, the following header represents about 2^52 hash computations to send a message to calvin@comics.net on January 19, 2038: X-Hashcash: 1:52:380119:calvin@comics.net:::9B760005E92F0DAE It is verified with a single computation by checking that the SHA-1 hash of the stamp (omitting the header name "X-Hashcash:" including the colon and any amount of whitespace following it up to the digit '1') begins with 52 binary zeros, that is, 13 hexadecimal zeros:[1] 0000000000000756af69e2ffbdb930261873cd71 Whether PoW systems can actually solve a particular denial-of-service issue such as the spam problem is subject to debate; the system must make sending spam emails obtrusively unproductive for the spammer, but should also not prevent legitimate users from sending their messages. In other words, a genuine user should not encounter any difficulties when sending an email, but an email spammer would have to expend a considerable amount of computing power to send out many emails at once. Proof-of-work systems are being used by other, more complex cryptographic systems such as bitcoin, which uses a system similar to Hashcash.
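The verification step described above amounts to one hash plus a count of leading zero bits. Here is a minimal Python sketch of that check, assuming the stamp format shown in the header, where the second colon-separated field declares the difficulty in bits. The helper names are invented for this sketch, and since the stamp and digest above are illustrative, this particular stamp may or may not actually meet its declared target.

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a byte string."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()  # zero bits before the top set bit
            break
    return bits

def verify_hashcash(stamp: str) -> bool:
    """Check a Hashcash-style stamp: one SHA-1 computation, then a bit count."""
    claimed_bits = int(stamp.split(":")[1])  # second field declares the difficulty
    digest = hashlib.sha1(stamp.encode()).digest()
    return leading_zero_bits(digest) >= claimed_bits

print(verify_hashcash("1:52:380119:calvin@comics.net:::9B760005E92F0DAE"))
```

The asymmetry discussed above is visible here: the verifier does a single SHA-1 computation, while the sender had to perform on the order of 2^52 of them to find a qualifying stamp.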
Variants: There are two classes of proof-of-work protocols. Challenge–response protocols assume a direct interactive link between the requester (client) and the provider (server). The provider chooses a challenge, say an item in a set with a property; the requester finds the relevant response in the set, which is sent back and checked by the provider. As the challenge is chosen on the spot by the provider, its difficulty can be adapted to its current load. The work on the requester side may be bounded if the challenge-response protocol has a known solution (chosen by the provider), or is known to exist within a bounded search space. Solution–verification protocols do not assume such a link: as a result, the problem must be self-imposed before a solution is sought by the requester, and the provider must check both the problem choice and the found solution. Most such schemes are unbounded probabilistic iterative procedures such as Hashcash. Known-solution protocols tend to have slightly lower variance than unbounded probabilistic protocols, because the variance of a rectangular distribution is lower than the variance of a Poisson distribution (with the same mean). A generic technique for reducing variance is to use multiple independent sub-challenges, as the average of multiple samples will have a lower variance. Variants: There are also fixed-cost functions such as the time-lock puzzle. Moreover, the underlying functions used by these schemes may be: CPU-bound, where the computation runs at the speed of the processor, which greatly varies in time, as well as from high-end server to low-end portable devices; memory-bound, where the computation speed is bound by main memory accesses (either latency or bandwidth), the performance of which is expected to be less sensitive to hardware evolution; or network-bound, if the client must perform few computations, but must collect some tokens from remote servers before querying the final service provider. In this last case, the work is not actually performed by the requester, but it incurs delays anyway because of the latency to get the required tokens. Finally, some PoW systems offer shortcut computations that allow participants who know a secret, typically a private key, to generate cheap PoWs. The rationale is that mailing-list holders may generate stamps for every recipient without incurring a high cost. Whether such a feature is desirable depends on the usage scenario. List of proof-of-work functions: Known proof-of-work functions include: integer square root modulo a large prime; weakened Fiat–Shamir signatures; the Ong–Schnorr–Shamir signature broken by Pollard; partial hash inversion (the paper formalizing this idea also introduced "the dependent idea of a bread pudding protocol", a "re-usable proof-of-work" (RPoW) system); hash sequences; puzzles; the Diffie–Hellman–based puzzle; Moderate; Mbound; Hokkaido; Cuckoo Cycle; Merkle tree–based schemes; and the guided tour puzzle protocol. Proof of useful work (PoUW): At the IACR conference Crypto 2022, researchers presented a paper describing Ofelimos, a blockchain protocol with a consensus mechanism based on "proof of useful work" (PoUW). Rather than miners consuming energy in solving complex, but essentially useless, puzzles to validate transactions, Ofelimos achieves consensus while simultaneously providing a decentralized optimization problem solver. The protocol is built around Doubly Parallel Local Search (DPLS), a local search algorithm that is used as the PoUW component. The paper gives an example that implements a variant of WalkSAT, a local search algorithm to solve Boolean problems. Bitcoin-type proof of work: In 2009, the Bitcoin network went online.
Bitcoin is a proof-of-work digital currency that, like Finney's RPoW, is also based on the Hashcash PoW. But in Bitcoin, double-spend protection is provided by a decentralized P2P protocol for tracking transfers of coins, rather than the hardware trusted computing function used by RPoW. Bitcoin has better trustworthiness because it is protected by computation. Bitcoins are "mined" using the Hashcash proof-of-work function by individual miners and verified by the decentralized nodes in the P2P bitcoin network. The difficulty is periodically adjusted to keep the block time around a target time. Bitcoin-type proof of work: Energy consumption: Since the creation of Bitcoin, proof-of-work has been the predominant design of peer-to-peer cryptocurrency. Studies have estimated the total energy consumption of cryptocurrency mining. The PoW mechanism requires a vast amount of computing resources, which consume a significant amount of electricity. 2018 estimates from the University of Cambridge equate Bitcoin's energy consumption to that of Switzerland. Bitcoin-type proof of work: History modification: Each block that is added to the blockchain, starting with the block containing a given transaction, is called a confirmation of that transaction. Ideally, merchants and services that receive payment in the cryptocurrency should wait for at least one confirmation to be distributed over the network before assuming that the payment was done. The more confirmations the merchant waits for, the more difficult it is for an attacker to successfully reverse the transaction in a blockchain – unless the attacker controls more than half the total network power, in which case it is called a 51% attack. Bitcoin-type proof of work: ASICs and mining pools: Within the Bitcoin community there are groups working together in mining pools. Some miners use application-specific integrated circuits (ASICs) for PoW. This trend toward mining pools and specialized ASICs has made mining some cryptocurrencies economically infeasible for most players without access to the latest ASICs, nearby sources of inexpensive energy, or other special advantages. Some PoWs claim to be ASIC-resistant, i.e. to limit the efficiency gain that an ASIC can have over commodity hardware, like a GPU, to well under an order of magnitude. ASIC resistance has the advantage of keeping mining economically feasible on commodity hardware, but also carries the corresponding risk that an attacker can briefly rent access to a large amount of unspecialized commodity processing power to launch a 51% attack against a cryptocurrency. Environmental concerns: These miners compete to solve crypto challenges on the Bitcoin blockchain, and their solutions must be agreed upon by all nodes and reach consensus. The solutions are then used to validate transactions, add blocks, and generate new bitcoins. Miners are rewarded for solving these puzzles and successfully adding new blocks. However, the Bitcoin-style mining process is very energy intensive because the proof of work is shaped like a lottery mechanism: the underlying computational work has no other use, and miners have to expend a lot of energy to add a new block containing a transaction to the blockchain.
Also, miners must invest in computer hardware that requires large amounts of space as a fixed cost. In January 2022, Vice-Chair of the European Securities and Markets Authority Erik Thedéen called on the EU to ban the proof of work model in favor of the proof of stake model due to its lower energy consumption. In November 2022, the state of New York enacted a two-year moratorium on cryptocurrency mining that does not completely use renewable energy as a power source. Existing mining companies will be grandfathered in to continue mining without the use of renewable energy, but they will not be allowed to expand or renew permits with the state, and no new mining companies that do not completely use renewable energy will be allowed to begin mining. Notes: ^ On most Unix systems this can be verified with: echo -n 1:52:380119:calvin@comics.net:::9B760005E92F0DAE | openssl sha1
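As a counterpart to the verification note above, the sketch below shows the prover side of a Hashcash-style partial hash inversion: it increments a counter until the stamp's SHA-1 digest has the required number of leading zero bits. This is a minimal illustration rather than the exact Hashcash v1 algorithm; the simplified stamp layout, the mint function name, and the 20-bit difficulty (about 2^20 hashes on average, so the search finishes quickly) are assumptions for the example.

```python
import hashlib
from itertools import count

def mint(resource: str, bits: int = 20) -> str:
    """Brute-force a Hashcash-style stamp whose SHA-1 digest starts
    with `bits` zero bits (a solution-verification proof of work)."""
    threshold = 1 << (160 - bits)  # SHA-1 digests are 160 bits wide
    for counter in count():
        stamp = f"1:{bits}:380119:{resource}:::{counter:x}"
        digest = hashlib.sha1(stamp.encode()).digest()
        # Leading `bits` zero bits <=> digest, read as an integer, is below threshold.
        if int.from_bytes(digest, "big") < threshold:
            return stamp

stamp = mint("calvin@comics.net", bits=20)  # ~2^20 hashes on average
print(stamp)
```

Raising `bits` by one doubles the expected work for the prover while leaving the verifier's single-hash check unchanged, which is how difficulty adjustment in Bitcoin-type systems keeps the block time near its target as total network hash rate grows.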
**Phablet** Phablet: A phablet is a mobile device combining or straddling the size formats of smartphones and tablets. The word is a portmanteau of phone and tablet. As of 2020, most budget and entry-level Android smartphones constitute the phablet form factor, as they utilize a minimum 6.5-inch (165.1 mm) screen size and a height of 6.3 inches (160 mm) or higher. This was first popularized by the Chinese brands Oppo and Infinix in 2019, which began producing larger-screen budget smartphones for developing markets such as Bangladesh, India, South Africa and Indonesia. Samsung also started producing large-screen budget smartphones in 2020, with the introduction of the Samsung Galaxy A21. Appearance: Phablets feature large displays that complement screen-intensive activity such as mobile web browsing and multimedia viewing. They may also include software optimized for an integral self-storing stylus to facilitate sketching, note-taking and annotation. Phablets were originally designed for the Asian market, where consumers could not afford both a smartphone and a tablet as in North America; phones for that market are known for having "budget-specs-big-battery" designs with large low-resolution screens and midrange processors, although other phablets have flagship specifications. Since then, phablets have also become successful in North America, for several reasons: Android 4.0 and subsequent releases of Android were suited to large as well as small screen sizes, while older consumers preferred larger screen sizes on smartphones due to deteriorating eyesight. Examples of earlier devices with similar form factors date to 1993. The term "phablet" was widespread in the industry from 2012 to 2014, although its usage has declined since average smartphone sizes eventually morphed into small tablet sizes, up to 6.9 inches. Definition: The definition of a phablet has changed in recent years due to the proliferation of larger displays on mainstream smartphones, and smartphones designed with thin bezels and/or curved screens to make them more compact than other devices with similar screen sizes. Thus, a device with a "phablet-sized" screen may not necessarily be considered one. Current phablets typically have a diagonal display measurement between 6.5 inches (170 mm) and 7 inches (180 mm). In comparison, most flagship smartphones released in 2021 have a screen size of over 6 in (150 mm), with larger versions of mainstream flagships (such as the iPhone 13 Pro Max, Pixel 6 Pro, and Samsung Galaxy S21 Ultra 5G) using over 6.6 in (170 mm) displays. PhoneArena argued that the S7 Edge was not a phablet, as it has a narrow and compact build with a physical footprint more in line with the smaller-screened Nexus 5X, due primarily to its use of a display with curved edges. In 2017, several manufacturers began to release smartphones with displays taller than the conventional 16:9 aspect ratio used by the majority of devices, and diagonal screen sizes often around 6 inches. However, in these cases, the sizes of the devices are more compact than 16:9 aspect ratio devices with equivalent diagonal screen sizes.
History: Origins: In tracing the 10 earliest devices in the history of the phablet concept, PC Magazine called the 1993 AT&T EO 440 "the first true phablet", followed by the 2007 HTC Advantage (5.0" screen); the 2007 Nokia N810 WiMAX Edition (4.13" screen); the 2009 Verizon Hub (7.0" screen); the 2010 LG GW990 (4.8" screen); the 2010 Dell Streak (5.0" screen); the 2011 Dell Streak 7 (7.0" screen); the 2011 Acer Iconia Smart (4.8" screen); the 2011 Samsung Galaxy Player 5 (5.0" screen); the 2011 Pantech Pocket; the 2011 Samsung Galaxy Note (5.3" screen); and the 2013 Nokia Lumia 1520 (16:9 6.0" screen). However, the form factor did not become popular until the arrival of the Galaxy Note in the 2010s. The Android-based Dell Streak included a 5-inch (130 mm), 800 × 480 display and a widescreen-optimized interface. Reviewers encountered issues with its outdated operating system, Android 1.6, which was not yet optimized for such a large screen size, and the device was commercially unsuccessful. History: Introduction of the Galaxy Note and its competitors: The Samsung Galaxy Note used a 5.3-inch (130 mm) screen. While some media outlets questioned the viability of the device, the Note received positive reception for its stylus functionality, the speed of its 1.5 GHz dual-core processor, and the advantages of its high resolution display. The Galaxy Note was a commercial success; Samsung announced in December 2011 that the Galaxy Note had sold 1 million units in two months. In February 2012, Samsung debuted a Note version with LTE support. By August 2012, the Note had sold 10 million units worldwide. History: In late 2012, Samsung introduced the Galaxy Note II, featuring a 1.6 GHz quad-core processor, a 5.55-inch (141 mm) screen and the ability to run two applications at once via a split-screen view. The Note II also incorporated a refreshed hardware design based on the Galaxy S III, with a narrower, smoother body. International sales of the Galaxy Note II reached 5 million in two months. The 2012 LG Optimus Vu used a 5-inch (130 mm) display with an unusual 4:3 aspect ratio – in contrast to the 16:9 aspect ratio used by most smartphones. Joining the Galaxy Note II on many carriers' lineups in 2013 was the nearly identically sized LG Optimus G Pro, released in April. In late 2012 and early 2013, companies began to release smartphones with 5-inch screens at 1080p resolution, such as the HTC Droid DNA and Samsung Galaxy S4. Despite the screen size approaching those of phablets, HTC's design director Jonah Becker said that the Droid DNA was not a phablet. HTC would release a proper phablet, the HTC One Max – a smartphone with a 5.9 in (150 mm) screen and a design based on its popular HTC One model – in October 2013. Examples of Android phablets with screens larger than 6 inches began appearing in 2013, with the Chinese company Huawei unveiling its 6.1 in (150 mm) Ascend Mate at the Consumer Electronics Show and Samsung introducing the Galaxy Mega, a phablet with a 6.3 in (160 mm) variant, which had midrange specs and lacked a stylus compared to the flagship Galaxy Note series. Sony Mobile also entered the phablet market with its 6.4 in (160 mm) Xperia Z Ultra. As a variation of the concept, Asus and Samsung also released otherwise small-sized tablets, the FonePad, Galaxy Note 8.0 and Galaxy Tab 3 8.0, with cellular connectivity and the ability to place voice calls. Later that year, Nokia also introduced Windows Phone 8 phablets, such as the 6-inch Lumia 1520.
History: Introduction of the iPhone 6 Plus: In September 2014, Apple released its first phablet, the 5.5 in (140 mm) iPhone 6 Plus; the introduction of the new model reversed a previous policy under late Apple CEO Steve Jobs not to produce a mid-sized device larger than the iPhone or smaller than the iPad, which were 3.5 inches and 9.7 inches, respectively, at the time of his death. While Apple's iPad heavily dominated the tablet market, the void in their lineup left an opening for intermediate-sized devices, with other handset manufacturers already jumping on the trend of producing larger screen sizes to suit all niches. In September 2018, Apple released the iPhone XS Max, the first phablet iPhone to feature the reduced-bezel form factor with a larger 6.5-inch display. It used the OLED screen introduced on its predecessor and replaced Touch ID with the facial recognition system called Face ID, enabled by the TrueDepth front-facing camera; the iPhone X, with its smaller 5.8-inch display (still larger than that of the 5.5-inch iPhone 8 Plus, the final phablet iPhone to feature Touch ID, introduced in 2017, and its predecessors), had not been offered in a larger variant. In October 2022, Apple released the iPhone 14 Plus, the first lower-priced phablet iPhone, albeit lacking the telephoto camera lens and LiDAR sensor; previously, the iPhone XR, iPhone 11, and the iPhone 12 and iPhone 13 lineups had not offered displays as large as the iPhone XS Max, iPhone 11 Pro Max, iPhone 12 Pro Max and iPhone 13 Pro Max, coming only in smaller 6.1-inch and 5.4-inch options. The iPhone 14 Pro Max remained the higher-priced phablet counterpart to the iPhone 14 Plus; with these models, the larger 6.7-inch display became available in both affordable and expensive price tiers for the first time. History: Spiritual successors to the Galaxy Note phones: In January 2021, Samsung Electronics announced the Galaxy S21 Ultra, the first phablet outside the Samsung Galaxy Note series to support the S Pen accessory, albeit sold separately and with limited functionality. It features a 6.8" 1440p "Dynamic AMOLED" curved display with HDR10+ support, "dynamic tone mapping" technology, and a variable 120 Hz refresh rate. However, no successor to the 2020 Galaxy Note20/Galaxy Note20 Ultra was unveiled at the 2021 launch event, which focused only on unveiling the new foldable phones (including the Galaxy Z Flip 3 and Galaxy Z Fold 3). In February 2022, the Galaxy S22 Ultra became the first Samsung Galaxy S phone to include a built-in S Pen, a major upgrade over the 2021 Galaxy S21 Ultra. Sales: Engadget identified falling screen prices, increasing screen power efficiency and battery life, and the evolving importance of multimedia viewing as critical factors in the popularity of the phablet. Phablets also satisfy a consumer need for an ideally sized device – smartphones may be too small for viewing, while tablets lose their portability – fuelling their global market growth. Phablets have also been popular with an older demographic of smartphone users, since their large screens provide a benefit to those with deteriorating eyesight. In April 2013, Doug Conklyn, vice president of global design for Dockers, told Fox News that the company reworked the size of its pants pockets "to accommodate the growing size of smartphones".
For women, a small handbag can easily accommodate a phablet, but not most tablets. In January 2013, IHS reported that 25.6 million phablet devices were sold in 2012 and estimated that these figures would grow to 60.4 million in 2013, and 146 million by 2016. Barclays projected sales of phablets rising from 27 million in 2012 to 230 million in 2015. In September 2013, International Data Corporation (IDC) reported that its research indicated that phablets "overtook shipments of both laptops and tablets in Asia in the second quarter of 2013". In 2014, Business Insider predicted phablets would outsell smartphones by 2017. Speaking with CNET in 2014, David Burke, Vice President of Engineering at Google, said "If you gave them a phablet for a week, 50 percent of [consumers] would say they like it and not go back". In Q1 2014, phablets made up 6% of US smartphones sold. In the first quarter of 2015, phablets accounted for 21% of all smartphones sold in the US, with the iPhone 6 Plus making up 44 percent of those phablets sold. By 2016, the majority of the smartphones sold were phablets, and by 2018 they had come to dominate the market to the extent that the term 'phablet' has largely fallen out of use.
**Electric resistance welding** Electric resistance welding: Electric resistance welding (ERW) is a welding process where metal parts in contact are permanently joined by heating them with an electric current, melting the metal at the joint. Electric resistance welding is widely used, for example, in manufacture of steel pipe and in assembly of bodies for automobiles. The electric current can be supplied to electrodes that also apply clamping pressure, or may be induced by an external magnetic field. The electric resistance welding process can be further classified by the geometry of the weld and the method of applying pressure to the joint: spot welding, seam welding, flash welding, projection welding, for example. Some factors influencing heat or welding temperatures are the proportions of the workpieces, the metal coating or the lack of coating, the electrode materials, electrode geometry, electrode pressing force, electric current and length of welding time. Small pools of molten metal are formed at the point of most electrical resistance (the connecting or "faying" surfaces) as an electric current (100–100,000 A) is passed through the metal. In general, resistance welding methods are efficient and cause little pollution, but their applications are limited to relatively thin materials. Spot welding: Spot welding is a resistance welding method used to join two or more overlapping metal sheets, studs, projections, electrical wiring hangers, some heat exchanger fins, and some tubing. Usually power sources and welding equipment are sized to the specific thickness and material being welded together. The thickness is limited by the output of the welding power source and thus the equipment range due to the current required for each application. Care is taken to eliminate contaminants between the faying surfaces. Usually, two copper electrodes are simultaneously used to clamp the metal sheets together and to pass current through the sheets. When the current is passed through the electrodes to the sheets, heat is generated due to the higher electrical resistance where the surfaces contact each other. As the electrical resistance of the material causes a heat buildup in the work pieces between the copper electrodes, the rising temperature causes a rising resistance, and results in a molten pool contained most of the time between the electrodes. As the heat dissipates throughout the workpiece in less than a second (resistance welding time is generally programmed as a quantity of AC cycles or milliseconds) the molten or plastic state grows to meet the welding tips. When the current is stopped the copper tips cool the spot weld, causing the metal to solidify under pressure. The water cooled copper electrodes remove the surface heat quickly, accelerating the solidification of the metal, since copper is an excellent conductor. Resistance spot welding typically employs electrical power in the form of direct current, alternating current, medium frequency half-wave direct current, or high-frequency half wave direct current. Spot welding: If excessive heat is applied or applied too quickly, or if the force between the base materials is too low, or the coating is too thick or too conductive, then the molten area may extend to the exterior of the work pieces, escaping the containment force of the electrodes (often up to 30,000 psi). This burst of molten metal is called expulsion, and when this occurs the metal will be thinner and have less strength than a weld with no expulsion. 
The common method of checking a weld's quality is a peel test. An alternative test is the restrained tensile test, which is much more difficult to perform, and requires calibrated equipment. Because both tests are destructive in nature (resulting in the loss of salable material), non-destructive methods such as ultrasound evaluation are in various states of early adoption by many OEMs. Spot welding: The advantages of the method include efficient energy use, limited workpiece deformation, high production rates, easy automation, and no required filler materials. When high strength in shear is needed, spot welding is used in preference to more costly mechanical fastening, such as riveting. While the shear strength of each weld is high, the fact that the weld spots do not form a continuous seam means that the overall strength is often significantly lower than with other welding methods, limiting the usefulness of the process. It is used extensively in the automotive industry – cars can have several thousand spot welds. A specialized process, called shot welding, can be used to spot weld stainless steel. Spot welding: There are three basic types of resistance welding bonds: solid state, fusion, and reflow braze. In a solid state bond, also called a thermo-compression bond, dissimilar materials with dissimilar grain structure, e.g. molybdenum to tungsten, are joined using a very short heating time, high weld energy, and high force. There is little melting and minimum grain growth, but a definite bond and grain interface. Thus the materials actually bond while still in the solid state. The bonded materials typically exhibit excellent shear and tensile strength, but poor peel strength. In a fusion bond, either similar or dissimilar materials with similar grain structures are heated to the melting point (liquid state) of both. The subsequent cooling and combination of the materials forms a "nugget" alloy of the two materials with larger grain growth. Typically, high weld energies at either short or long weld times, depending on physical characteristics, are used to produce fusion bonds. The bonded materials usually exhibit excellent tensile, peel and shear strengths. In a reflow braze bond, a resistance heating of a low temperature brazing material, such as gold or solder, is used to join either dissimilar materials or widely varied thick/thin material combinations. The brazing material must "wet" to each part and possess a lower melting point than the two workpieces. The resultant bond has definite interfaces with minimum grain growth. Typically the process requires a longer (2 to 100 ms) heating time at low weld energy. The resultant bond exhibits excellent tensile strength, but poor peel and shear strength. Seam welding: Resistance seam welding is a process that produces a weld at the faying surfaces of two similar metals. The seam may be a butt joint or an overlap joint and is usually an automated process. It differs from flash welding in that flash welding typically welds the entire joint at once and seam welding forms the weld progressively, starting at one end. Like spot welding, seam welding relies on two electrodes, usually made from copper, to apply pressure and current. The electrodes are often disc shaped and rotate as the material passes between them. This allows the electrodes to stay in constant contact with the material to make long continuous welds. The electrodes may also move or assist the movement of the material.
Seam welding: A transformer supplies energy to the weld joint in the form of low voltage, high current AC power. The joint of the work piece has high electrical resistance relative to the rest of the circuit and is heated to its melting point by the current. The semi-molten surfaces are pressed together by the welding pressure that creates a fusion bond, resulting in a uniformly welded structure. Most seam welders use water cooling through the electrode, transformer and controller assemblies due to the heat generated. Seam welding: Seam welding produces an extremely durable weld because the joint is forged due to the heat and pressure applied. A properly welded joint formed by resistance welding can easily be stronger than the material from which it is formed. A common use of seam welding is during the manufacture of round or rectangular steel tubing. Seam welding has been used to manufacture steel beverage cans but is no longer used for this, as modern beverage cans are seamless aluminum. There are two modes for seam welding: intermittent and continuous. In intermittent seam welding, the wheels advance to the desired position and stop to make each weld. This process continues until the desired length of the weld is reached. In continuous seam welding, the wheels continue to roll as each weld is made. Low-frequency electric resistance welding: Low-frequency electric resistance welding (LF-ERW) is an obsolete method of welding seams in oil and gas pipelines. It was phased out in the 1970s, but as of 2015 some pipelines built with this method remained in service. Electric resistance welded (ERW) pipe is manufactured by cold-forming a sheet of steel into a cylindrical shape. Current is then passed between the two edges of the steel to heat the steel to a point at which the edges are forced together to form a bond without the use of welding filler material. Initially this manufacturing process used low frequency AC current to heat the edges. The low frequency process was used from the 1920s until 1970, when it was superseded by a high frequency ERW process which produced a higher quality weld. Low-frequency electric resistance welding: Over time, the welds of low frequency ERW pipe were found to be susceptible to selective seam corrosion, hook cracks, and inadequate bonding of the seams, so low frequency ERW is no longer used to manufacture pipe. The high frequency process is still being used to manufacture pipe for use in new pipeline construction. Other methods: Other ERW methods include flash welding, resistance projection welding, and upset welding. Flash welding is a type of resistance welding that does not use any filler metals. The pieces of metal to be welded are set apart at a predetermined distance based on material thickness, material composition, and desired properties of the finished weld. Current is applied to the metal, and the gap between the two pieces creates resistance and produces the arc required to melt the metal. Once the pieces of metal reach the proper temperature, they are pressed together, effectively forge welding them together. Projection welding is a modification of spot welding in which the weld is localized by means of raised sections, or projections, on one or both of the workpieces to be joined. Heat is concentrated at the projections, which permits the welding of heavier sections or the closer spacing of welds. The projections can also serve as a means of positioning the workpieces.
Projection welding is often used to weld studs, nuts, and other threaded machine parts to metal plate. It is also frequently used to join crossed wires and bars. This is another high-production process, and multiple projection welds can be arranged by suitable designing and jigging.
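The resistance-heating principle underlying all of these processes is Joule heating: the heat delivered to the joint is roughly Q = I²·R·t, the product of the current squared, the contact resistance, and the weld time. The sketch below simply evaluates that relationship with hypothetical values for a small spot weld; real weld schedules also depend on electrode force, material, and coating, as discussed above, so this is an illustration of the arithmetic rather than a design tool.

```python
# Approximate Joule heat delivered to a spot-weld nugget: Q = I^2 * R * t.
# All numeric values below are hypothetical, chosen only for illustration.
current_a = 10_000        # welding current in amperes (the text cites 100-100,000 A)
resistance_ohm = 100e-6   # assumed contact ("faying") resistance in ohms
time_s = 0.2              # weld time in seconds (roughly 10 AC cycles at 50 Hz)

heat_j = current_a ** 2 * resistance_ohm * time_s
print(f"Approximate heat input: {heat_j:.0f} J")  # 10,000^2 * 1e-4 * 0.2 = 2,000 J
```

The quadratic dependence on current explains why resistance welding uses very high currents at low voltage: doubling the current quadruples the heat concentrated at the high-resistance faying surfaces.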
**Centralized mail delivery** Centralized mail delivery: Centralized mail delivery is a form of mail delivery in which a letter carrier provides delivery and collection services to a number of residences from a centrally located installation – whether in a single-family subdivision or multi-family structure. Business customers also receive delivery services from a convenient central location. Centralized mail delivery: Centralized mail delivery equipment can be in the form of any "clustered" type mailbox – including a free-standing, pedestal-mounted cluster box unit (CBU), or other cluster mailboxes mounted in a wall, kiosk, or shelter. The U.S. Postal Service prefers centralized mail delivery in all new construction because it is less expensive. The United States Postal Service aims to continue to review and modify its operations to provide universal service as efficiently and cost-effectively as possible. Therefore, there is pressure to establish centralized mail delivery, which is required in some communities. History: Mail delivery can be traced back to its system founder and first Postmaster, Benjamin Franklin. Since that time, the number of new mail delivery points has steadily increased from year to year to its present estimate of 1.4 million new delivery points a year. Part of the change in mail delivery can be traced to the changes in transportation over the years. Horseback, dog sled, train, car, plane, boat, and truck have all been used to deliver the mail. As the delivery territories have grown, so, too, has the need for more efficient delivery techniques. During the nineteenth and early twentieth centuries, letter carriers knocked on the door and waited patiently for someone to answer. Efficiency experts estimated that each carrier lost an hour and a half each day just waiting for patrons to come to the door. To gain back those precious hours, in 1923 the Post Office Department mandated that every household have a mailbox or letter slot to receive mail. By the 1930s, as a convenience to customers living on the margins of a city, letter carriers began delivering to customers with "suitable boxes at the curb line." Multiple receptacles appeared, but with no regulation. In the ensuing decades American suburbanization, which exploded in the 1950s, brought an increase in curbside mailboxes. The initial suggestion for the creation of the cluster box was submitted by Peter McHugh, a postal carrier in Los Angeles, California. The Post Office Department first introduced curbside cluster boxes in 1967. History: By 2001, the US Postal Service (USPS) was approving locking mailbox designs to help customers protect their mail. Neighborhood Delivery Collection Box Units (NDCBUs) were the predecessor to today's cluster box units. They had multiple compartments for the centralized delivery of mail to the residents of a building or an entire neighborhood, instead of door-to-door or curbside delivery. These NDCBUs evolved into the "E" series cluster box units (CBU). Cluster Box Unit: New cluster box unit (CBU) specifications were then developed in 2005 and became the standard for all manufacturers. Only manufacturers who are approved by the USPS may produce the new "F" series CBU. The USPS began to officially license this new standard in 2007 – now manufacturers must be approved and licensed in order to manufacture the CBU.
Cluster Box Unit: Like its predecessor the NDCBU, each CBU has multiple compartments for the centralized delivery of mail to the residents of an entire neighborhood, eliminating the need for door-to-door or curbside delivery. This new design also incorporates a parcel locker and an outgoing mail slot for resident convenience. At one time, some manufacturers even offered a high-security CBU option for those areas which require a bit more protection; however, the high-security CBU was discontinued in early 2016. While the CBU models have parcel lockers built into each box, individual outdoor parcel lockers (OPL) were developed to increase the total number of parcel lockers available within a single neighborhood installation. Cluster Box Unit: Just like the CBU, the OPL design has evolved over the years as well. The latest USPS-approved design includes taller parcel compartments to better accommodate the package sizes of today. In addition, to help "dress up" the CBU, some manufacturers have developed USPS-approved caps and pedestals. Available in various designs, these fashionable snap-together accessories place the final touches on the CBU so that it will complement the surrounding architecture. Only caps and pedestals which have been approved by the USPS can be added to the officially licensed CBU. Other Equipment Options: The USPS created guidelines to dictate that wall-mounted vertical or horizontal wall-type boxes are to be specified in these settings. To represent the various levels of "approval" by the USPS, these wall-mounted mailboxes have been "rated". Former approval standards were considered STD-4B+ and related to specific form factors and security levels of the mailbox. Today, STD-4B+ mailboxes are only USPS-approved for replacing existing STD-4B+ applications. New USPS regulations related to wall-mounted, clustered types of mailboxes were introduced in 2004. These were the first changes to "apartment style" mailboxes in more than 30 years. This new regulation, STD-4C, replaces all previous regulations for mailboxes such as these, which were previously approved under STD-4B and STD-4B+. USPS Approved Manufacturers: CBU: Florence Manufacturing Company; Postal Products Unlimited Inc.; Salsbury Industries. STD-4C: 2BGlobal; Florence Manufacturing Company; Jensen Mailboxes; Postal Products Unlimited Inc.; Salsbury Industries; Security Manufacturing. STD-4B+: American Device Manufacturing (company purchased by Florence Manufacturing; horizontal units only); American Eagle Mailboxes; Bommer Industries, Inc.; Florence Manufacturing Company; Jensen Mailboxes; Salsbury Industries; Security Manufacturing. Controversy: While centralized mail delivery is the preferred method of delivery by the United States Postal Service for new developments, some residents of these communities are opposed to centralized mail delivery. Those opposed are heavily in favor of conventional door-to-door delivery and/or do not like the idea of CBUs in their neighborhood.
**International Maj Lind Piano Competition** International Maj Lind Piano Competition: The International Maj Lind Piano Competition is organized by the Sibelius Academy and takes place in Helsinki, Finland. Originally a national competition that was first held in 1945, it was opened to international competitors in 2002 and has since then been held every five years. The competition is named after Maria (Maj) Lind, née Kopjeff (1876–1942). In 2022, prize money of over €100,000 was awarded, and the first prize was won by Piotr Pawlak. In 2017 the first prize was won by Mackenzie Melemed, in 2012 by Sergei Redkin, in 2007 by Sofya Gulyak, and in 2002 by Alberto Nosè.
**Technician** Technician: A technician is a worker in a field of technology who is proficient in the relevant skill and technique, with a relatively practical understanding of the theoretical principles. Specialisation: The term technician covers many different specialisations, including: electronics technician; information systems technician; laboratory technician; science technician; and work safety technician. Campaigns: In the UK, a shortage of skilled technicians in the science, engineering and technology sectors has led to various campaigns to encourage more people to become technicians and to promote the role of technician.
**Acta Materialia** Acta Materialia: Acta Materialia is a peer-reviewed scientific journal published twenty times per year on behalf of Acta Materialia Inc. The current publisher is Elsevier. The coordinating editor is Christopher A. Schuh. The journal covers research on all aspects of the structure and properties of materials and publishes original papers and commissioned reviews called Overviews. History: The journal was established in 1953 as Acta Metallurgica and renamed Acta Metallurgica et Materialia in 1990, before obtaining its current name in 1996. Since 1956, it has been published by Pergamon Press, with the imprint being retained for some time after the acquisition by Elsevier. It incorporates Nanostructured Materials, which was published independently from 1992 to 1999. Scripta Materialia was established in 1967 as a companion journal, publishing rapid communications as well as opinion articles called Viewpoints. Abstracting and indexing: The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, it has a 2022 impact factor of 9.4.
**Triangulated irregular network** Triangulated irregular network: In computer graphics, a triangulated irregular network (TIN) is a representation of a continuous surface consisting entirely of triangular facets (a triangle mesh), used mainly as a discrete global grid in primary elevation modeling. The vertices of these triangles are created from field-recorded spot elevations obtained through a variety of means, including surveying through conventional techniques, Global Positioning System Real-Time Kinematic (GPS RTK), photogrammetry, or some other means. Associated with three-dimensional (x,y,z) data and topography, TINs are useful for the description and analysis of general horizontal (x,y) distributions and relationships. Triangulated irregular network: Digital TIN data structures are used in a variety of applications, including geographic information systems (GIS) and computer-aided design (CAD), for the visual representation of a topographical surface. A TIN is a vector-based representation of the physical land surface or sea bottom, made up of irregularly distributed nodes and lines with three-dimensional coordinates (x,y,z) that are arranged in a network of non-overlapping triangles. Triangulated irregular network: A TIN comprises a triangular network of vertices, known as mass points, with associated coordinates in three dimensions connected by edges to form a triangular tessellation. Three-dimensional visualizations are readily created by rendering of the triangular facets. In regions where there is little variation in surface height, the points may be widely spaced, whereas in areas of more intense variation in height the point density is increased. Triangulated irregular network: A TIN used to represent terrain is often called a digital elevation model (DEM), which can be further used to produce digital surface models (DSM) or digital terrain models (DTM). An advantage of using a TIN over a rasterized digital elevation model (DEM) in mapping and analysis is that the points of a TIN are distributed variably based on an algorithm that determines which points are most necessary to create an accurate representation of the terrain. Data input is therefore flexible, and fewer points need to be stored than in a raster DEM with regularly distributed points. While a TIN may be considered less suited than a raster DEM for certain kinds of GIS applications, such as analysis of a surface's slope and aspect, it is often used in CAD to create contour lines. A DTM and DSM can be formed from a DEM. A DEM can be interpolated from a TIN. Triangulated irregular network: TINs are based on a Delaunay triangulation or a constrained Delaunay triangulation. Delaunay-conforming triangulations are recommended over constrained triangulations because the resulting TINs are likely to contain fewer long, skinny triangles, which are undesirable for surface analysis. Additionally, natural neighbor interpolation and Thiessen (Voronoi) polygon generation can only be performed on Delaunay-conforming triangulations. A constrained Delaunay triangulation can be considered when certain edges must be explicitly defined and guaranteed not to be modified (that is, split into multiple edges) by the triangulator. Constrained Delaunay triangulations are also useful for minimizing the size of a TIN, since they have fewer nodes and triangles where breaklines are not densified. Triangulated irregular network: The TIN model was developed in the early 1970s as a simple way to build a surface from a set of irregularly spaced points.
The first triangulated irregular network program for GIS was written by W. Randolph Franklin, under the direction of David Douglas and Thomas Peucker (Poiker), at Simon Fraser University in 1973. File formats: A variety of different file formats exist for saving TIN information, including Esri TIN, along with others such as AquaVeo and ICEM CFD.
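To make the construction concrete, here is a minimal Python sketch of building a TIN from scattered spot elevations using the Delaunay triangulation discussed above. It assumes SciPy is available; the point coordinates and elevations are hypothetical sample data.

```python
import numpy as np
from scipy.spatial import Delaunay  # assumes SciPy is installed

# Hypothetical spot elevations: (x, y) positions with associated z heights.
points = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0],
                   [0.0, 10.0], [5.0, 4.0], [2.0, 8.0]])
z = np.array([1.0, 2.0, 4.0, 3.0, 5.0, 2.5])

tin = Delaunay(points)  # builds the non-overlapping triangular facets of the TIN

# Each row of tin.simplices holds the three vertex indices of one facet.
for tri in tin.simplices:
    print("facet vertices:", tri, "elevations:", z[tri])

# Locate the facet containing a query point (-1 means outside the convex hull);
# the elevation there could then be interpolated from that facet's three vertices.
simplex = tin.find_simplex(np.array([[4.0, 4.0]]))
print("query point falls in facet", simplex[0])
```

Because the vertices are irregularly spaced, dense where the terrain varies and sparse where it is flat, far fewer points need to be stored than in a regularly gridded raster DEM, which is exactly the storage advantage described above.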
**Meanwhile, back at the ranch** Meanwhile, back at the ranch: "Meanwhile, back at the ranch..." is a catch phrase that appears in a variety of contexts. For example, it may be employed by narrators of American cowboy movies and TV shows to indicate a segue from one scene to another, though there is often more to it than a simple transition. The expression originated as a stock subtitle in silent movies, where the reference to the ranch was at first literal. Later, as the phrase became a cliché, it was used more and more loosely and with a growing sense of mockery or levity, often with a vague focus. In this form the phrase came into common use in unrelated contexts. "Meanwhile, back at the ranch" is the title of a children's book by Trinka Hakes Noble; a crime novel by Kinky Friedman; and the first album of the German country band Texas Lightning; and it is the root of the name of the English band Meanwhile, back in Communist Russia... (1999–2004). It is also the name of a song by Badfinger from the album Wish You Were Here (1974). Meanwhile, back at the ranch: "Meanwhile back at the ranch" was also the name that Alfred Hitchcock gave to a piece of storytelling advice for filmmakers, whereby the story is structured as two parallel storylines, cutting from the first to the second just as the first reaches its peak. Contemporary filmmaker John Sturges recalled Hitchcock's advice: "the name of making movies is meanwhile back at the ranch. He's absolutely right. You want to have two things going. You reach the peak of one, you go to the other. You pick the other up just where you want it. When it loses interest, drop it. Meanwhile, back at the ranch."
**Marble Drop** Marble Drop: Marble Drop is a puzzle video game published by Maxis on February 28, 1997. Gameplay: Players are given an initial set of marbles that are divided evenly into six colors: red, orange, yellow, green, blue, and purple, with two more colors available to purchase: black and silver (steel). These marbles are picked up and dropped by the players into funnels leading to a series of rails, switches, traps and other devices which grow more complex as the game progresses. The aim is to ensure that each marble arrives in the bin of the same color as the marble. Players must determine how the marble will travel through the puzzle, and how its journey will change the puzzle for the next marble. When a marble runs over certain sections of the puzzle, the paths may be rerouted or cut off, either temporarily or permanently. For example, if the marble runs over a button, it might trigger a diversion that sends the next marble down a different path. There are 50 puzzles in total, including five bonus puzzles which can only be accessed by solving a combination of locks which appear in certain puzzles. Each puzzle is decorated with da Vinci-style notes and sketches. These explanatory notes are a part of the background, informing the player of new pieces of equipment and their effects. At the end of each puzzle, the marbles that have been guided into their proper bins are returned to the player. Lost marbles must be purchased when they are needed to complete a puzzle. Steel (silver) balls cost 20 percent of the price of colored marbles and can be used as test marbles or to help release a catch instead of using a valuable colored marble; additionally, there are steel-coloured exit bins in the final puzzle. Black marbles are very expensive, but change to the correct color when they arrive in a bin. Reception: Marble Drop received a lukewarm reception upon release. GameSpot gave it a 5.2 out of 10, considering it dull, and Computer Games Magazine gave it a 2.5 out of 5.
**Purser** Purser: A purser is the person on a ship principally responsible for the handling of money on board. On modern merchant ships, the purser is the officer responsible for all administration (including the ship's cargo and passenger manifests) and supply. Frequently, the cooks and stewards answer to the purser as well. In British naval slang, the purser was also called a "pusser". History: The purser joined the warrant officer ranks of the Royal Navy in the early 14th century and existed as a naval rank until 1852. The development of the warrant officer system began in 1040, when five English ports began furnishing warships to King Edward the Confessor in exchange for certain privileges. They also furnished crews whose officers were the master, boatswain, carpenter and cook. Later these officers were "warranted" by the British Admiralty. Pursers received no pay but were entitled to profits made through their business activities. In the 18th century a purser would buy his warrant for £65 and was required to post sureties totalling £2,100 with the Admiralty. Warrant officers maintained and sailed the ships and were the standing officers of the navy, staying with the ships in port between voyages as caretakers supervising repairs and refitting. In charge of supplies such as food and drink, clothing, bedding and candles, the purser was originally known as "the clerk of burser". He would usually charge the supplier a 5% commission for making a purchase, and it is recorded that pursers charged a considerable markup when they resold the goods to the crew. The purser was not in charge of pay, but he had to track it closely, since the crew had to pay for all their supplies and it was the purser's job to deduct those expenses from their wages. The purser bought everything (except food and drink) on credit, acting as an unofficial private merchant. In addition to his official responsibilities, it was customary for the purser to act as a private merchant for luxuries such as tobacco, and to be the crew's banker. History: As a result, the purser could be at risk of losing money and being thrown into debtor's prison; conversely, the crew and officers habitually suspected the purser of making an illicit profit out of his complex dealings. The common practice of pursers forging pay tickets to claim wages for "phantom" crew members led the Navy to implement muster inspections to confirm who actually served on a vessel. The position, though unpaid, was much sought after because of the expectation of making a reasonable profit; there were wealthy pursers, but their wealth came from side businesses facilitated by their ships' travels. History: On modern-day passenger ships, the purser has evolved into a multiperson office that handles general administration, fees and charges, currency exchange, and any other money-related needs of the passengers and crew. Aircraft: On modern airliners, the cabin manager (chief flight attendant) is often called the purser. The purser oversees the flight attendants and makes sure airline passengers are safe and comfortable. A flight purser completes detailed reports and verifies that all safety procedures are followed.
**Diphenolic acid** Diphenolic acid: Diphenolic acid is a carboxylic acid with molecular formula C17H18O4. Its IUPAC name is 4,4-bis(4-hydroxyphenyl)pentanoic acid, and it can be prepared by the condensation reaction of phenol with levulinic acid in the presence of hydrochloric acid. The equation for this synthesis is: 2 C6H5OH + CH3C(O)CH2CH2COOH → CH3C(p-C6H4OH)2CH2CH2COOH + H2O. Diphenolic acid is a solid at room temperature, melting at 168–171 °C and boiling at 507 °C. According to its MSDS, diphenolic acid is soluble in ethanol, isopropanol, acetone, acetic acid, and methyl ethyl ketone, but insoluble in benzene, carbon tetrachloride, and xylene. Diphenolic acid: Diphenolic acid may be a suitable replacement for bisphenol A as a plasticizer. Diphenolate esters have been used to synthesize epoxy resins as a replacement for the diglycidyl ether of bisphenol A. The diglycidyl ethers of n-alkyl diphenolate esters have similar thermomechanical properties to the diglycidyl ether of bisphenol A when cured, but the viscosity and glass transition temperature vary as a function of the ester length. Diphenolate esters have also been used to synthesize polycarbonates with a potential for water solubility.
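As a quick arithmetic check on the molecular formula C17H18O4 quoted above, the molar mass can be totalled from standard atomic weights; this is an illustrative calculation, not a figure taken from the article:

```python
# Molar mass of diphenolic acid (C17H18O4) from standard atomic weights.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol
FORMULA = {"C": 17, "H": 18, "O": 4}

molar_mass = sum(ATOMIC_WEIGHTS[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.1f} g/mol")  # ≈ 286.3 g/mol
```

The same bookkeeping confirms that the synthesis equation balances: two phenols (2 × C6H6O) plus levulinic acid (C5H8O3) contain exactly the atoms of the product (C17H18O4) plus one molecule of water.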
**Golden billion** Golden billion: The golden billion (Russian: золотой миллиард, tr. zolotoy milliard) is a conspiracy theory that a cabal of global elites is pulling strings to amass wealth for the world's richest billion people at the expense of the rest of humanity. It is popular in the Russian-speaking world. The term was coined by Anatoly Tsikunov (writing as A. Kuzmich) in his 1990 book The Plot of World Government: Russia and the Golden Billion and used in his articles. The term was quickly popularized by Russian writer Sergey Kara-Murza and has become a staple of contemporary Russian conspiratorial thought. Details: The idea of a world with finite resources is not new; the early Christian theologian Tertullian, who lived in the 2nd–3rd centuries AD, wrote: "The strongest witness is the vast population of the earth to which we are a burden and she scarcely can provide for our needs; as our demands grow greater, our complaints against Nature's inadequacy are heard by all. The scourges of pestilence, famine, wars and earthquakes have come to be regarded as a blessing to overcrowded nations since they serve to prune away the luxuriant growth of the human race." Thomas Malthus took this idea further, predicting inevitable Malthusian catastrophes in which the exhaustion of natural resources would cause population growth to collapse. Details: According to Kara-Murza, the golden billion (the population of the developed countries) consumes the lion's share of all resources on the planet. If at least half of the global population began to consume resources to the same extent, these resources would not be sufficient. This is partly based on the ideas of Malthus, in that emphasis is placed on the scarcity of natural resources. However, whereas Malthus was mostly concerned with finite global crop yields, anti-globalists who advocate the idea of a "golden billion" are mostly concerned with finite natural resources such as fossil fuels and metal. According to Kara-Murza, the developed countries, while preserving for their nationals a high level of consumption, endorse political, military and economic measures designed to keep the rest of the world in an industrially undeveloped state, as a raw-material appendage, a dumping ground for hazardous waste, and a source of cheap labor. The theory, which holds that the wealth of the West, including that of the lower classes, is mostly based on exploitation of the former colonies in the third world, is not new in Russia, where it was first popularized by Vladimir Lenin in Imperialism, the Highest Stage of Capitalism. Lenin described the relationship between capitalism and imperialism, wherein the merging of banks and industrial cartels produces finance capital. The final, imperialist stage of capitalism originates in the financial function of generating greater profits than the home market can yield; thus, business exports (excess) capital, which, in due course, leads to the economic division of the world among international business monopolies, and to imperial European states colonising large portions of the world to generate investment profits. Details: Whereas Lenin and other Marxist anti-imperialists such as Immanuel Wallerstein called for an end to the domination of developed nations through international communism, Kara-Murza and his contemporaries in Russia believe that a restriction of free trade (especially with the West) and various methods of state intervention in the economy are the best solution.
This economic rationale for protectionism dates back to the early United States and is known as the infant industry argument. The crux of the argument is that nascent industries often do not have the economies of scale that their older competitors from other countries may have, and thus need to be protected until they can attain similar economies of scale. The argument was first explicated by Alexander Hamilton in his 1790 Report on Manufactures, was systematically developed by Daniel Raymond, and was later picked up by Friedrich List in his 1841 work The National System of Political Economy, following his exposure to the idea during his residence in the United States in the 1820s. Details: According to proponents of the theory, differences in incomes between first-world countries and third-world countries cannot be explained by differences in individual productivity. For example, the Caterpillar (CAT) factory in Tosno, Russia, has the highest productivity of all CAT factories in Europe, but the workers are paid about an order of magnitude less. The difference is even more startling when comparing the wages of textile workers in United States factories and in Chinese sweatshops. This means that the multinational corporations appropriate a disproportionately high share of the surplus value in "developing" countries. The argument usually holds that the continuation of this exploitation retards the development and prosperity of the developing nations. Hence, globalization and modern capitalism benefit mostly the golden billion, while people in the so-called "developing" countries are getting the short end of the stick. Counter-arguments: Opponents of the concept often invoke market efficiency to argue that free trade and capitalism will make everybody wealthy eventually. Proponents counter that the ongoing process of multinational corporations channeling wealth from poorer countries to richer ones dictates that the gap will not diminish. Counter-arguments: Available data indicate convergence of income for many developing countries. Some economists think that, using the latest data, it is possible to conclude that the world is now in a state of unconditional economic convergence. In his book The Ultimate Resource, Julian Simon offers the view that scarcity of physical resources can be overcome by the human mind. For example, the scarcity of oil could be overcome by energy development strategies such as the use of synthetic fuels. Counter-arguments: Modern estimates indicate that mineral shortages will not become a threat for many centuries. Resource usage trend analysis finds no imminent problems either. Concerning exploitation of the former colonies, Gregory Clark notes: "Yet generations of research by economic historians – David Landes, Deirdre McCloskey, and Joel Mokyr, among others – show that the wealth of the West was homegrown, the result of a stream of Western technological advances since the Industrial Revolution." Application to the Russian invasion of Ukraine: During Russia's 2022 war with Ukraine, the concept was used by leading Russian politicians to justify Russian policy and accuse the West of elitist colonialism. In May 2022, Nikolai Patrushev, secretary of the Security Council, accused "Anglo-Saxons" of "hiding their actions behind the human rights, freedom and democracy rhetoric," while pushing ahead "with the 'golden billion' doctrine, which implies that only select few are entitled to prosperity in this world."
In June 2022, speaking at the International Economic Forum, Vladimir Putin "reiterated his position that the Kremlin was 'forced' to initiate the invasion of Ukraine [...] 'Our colleagues do not simply deny reality,' Putin added. 'They are trying to resist the course of history. They think in terms of the last century. They are in captivity of their own delusions about countries outside of the so-called golden billion, they see everything else as the periphery, their backyard, they treat these places as their colonies, and they treat the peoples living there as second-class citizens, because they consider themselves to be exceptional.'"
**Succession planting** Succession planting: In agriculture, succession planting refers to several planting methods that increase crop availability during a growing season by making efficient use of space and timing. Succession planting: There are four basic approaches, which can also be combined: Two or more crops in succession: On the same field where one crop has just been harvested, another is planted. The duration of the growing season, the environment, and the choice of crop are important variables. A crop that prefers the chilly spring months can be followed by a crop that prefers the summer heat. Succession planting: Same crop, successive plantings: Several smaller plantings are made at timed intervals, rather than all at once. The plants mature at staggered dates, establishing a continuous harvest over an extended period. Lettuce and other salad greens are common crops for this approach. Within a small garden or home garden, this method avoids a single large initial yield and instead provides a steady, smaller yield that may be consumed in its entirety. This is also known as relay planting (a schedule sketch follows at the end of this entry). Succession planting: Two or more crops simultaneously: Non-competing crops, often with different maturity dates, are planted together in various patterns. Intercropping is one pattern approach; companion planting is a related, complementary practice. This method is also known as interplanting: the practice of growing two types of plants in the same space. Interplanting requires a certain amount of preplanning and knowledge of the maturity dates of different types of vegetables. It has been noted that successful interplanting and intensive gardening are done in raised beds within the planting areas. Planting two or more non-competing crops may raise issues with soil-borne diseases and insects that only affect one type of plant. Depending on how close the interplanting varieties are, crop failure is a possibility. Succession planting: Same crop, different maturity dates: Several varieties are selected, with different maturity dates: early, main season, late. Planted at the same time, the varieties mature one after the other over the season. These techniques can be used to design complex, highly productive cropping systems. The more involved the plan, the more detailed knowledge is required of the specific varieties and how they perform in a particular growing location. A number of tertiary institutions have written about the advantages of succession planting and outlined extensive guides to this biointensive style of small-scale crop farming. There are numerous differences among guides to succession planting, owing to the diverse climate and soil conditions experienced around the world. There are significant differences between cold-weather succession planting and warm-weather succession planting. The term "succession planting" usually appears in literature for home gardening and small-scale farming, although the techniques apply to any scale. Some definitions include one or more, but not all, of the four techniques described above. Succession planting: Succession planting is often used in organic farming. Multiple cropping describes essentially the same general method. A catch crop refers to a specific type of succession planting, where a fast-growing crop is grown simultaneously with, or between successive plantings of, a main crop. Succession planting has been touted as a way to minimize the risks of crop failure for small farmers.
These risks include adverse weather conditions, increased pest pressure, and seed failure.
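The "same crop, successive plantings" approach referenced above comes down to simple date arithmetic. Here is a minimal sketch; the crop, dates, interval, and days-to-maturity figures are illustrative assumptions, not recommendations:

```python
# Staggered sowing dates for a continuous harvest ("same crop, successive plantings").
from datetime import date, timedelta

def succession_schedule(first_sowing, days_to_maturity, interval_days, plantings):
    """Return (sow_date, first_harvest_date) pairs for staggered plantings."""
    schedule = []
    for i in range(plantings):
        sow = first_sowing + timedelta(days=i * interval_days)
        schedule.append((sow, sow + timedelta(days=days_to_maturity)))
    return schedule

# A lettuce-like crop: sow every 10 days, roughly 50 days to maturity, 4 plantings.
for sow, harvest in succession_schedule(date(2024, 4, 1), 50, 10, 4):
    print(f"sow {sow} -> harvest from {harvest}")
```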
**Motivated tactician** Motivated tactician: In social psychology, a motivated tactician is someone who shifts between quick-and-dirty, cognitively economical tactics and more thoughtful, thorough strategies when processing information, depending on the type and degree of motivation. Such behavior is a type of motivated reasoning. The idea has been used to explain why people use stereotyping, biases and categorization in some situations, and more analytical thinking in others. History: After much research on categorization and other cognitive shortcuts, psychologists began to describe human beings as cognitive misers: the view that a need to conserve mental resources causes people to take shortcuts in thinking about stimuli, rather than having motivations and urges shape the way they think about their world. Stereotypes and heuristics were used as evidence of the economical nature of human thinking. In recent years, the work of Fiske & Neuberg (1990), Higgins & Molden (2003), Molden & Higgins (2005) and others has led to the recognition of the importance of motivational thinking. This is due to contemporary research studying the importance of motivation in cognitive processes, instead of concentrating on cognition versus motivation. Current research does not deny that people will be cognitively miserly in certain situations, but it takes into account that thorough analytic thought does occur in other situations. History: Using this perspective, researchers have begun to describe human beings as "motivated tacticians" who are tactical about how much cognitive resource is used, depending on the individual's intent and level of motivation. Given the complex nature of the world and the occasional need for quick thinking, it would be detrimental for a person to be methodical about everything, while other situations require more focus and attention. Considering human beings as motivated tacticians has become popular because it takes both kinds of situation into account. This concept also takes into account, and continues to study, what motivates people to use more or fewer mental resources when processing information about the world. Research has found that intended outcome, relevance to the individual, culture, and affect can all influence the way a person processes information. Goal-oriented motivational thinking: The most prominent explanation of motivational thinking is that the person's desired outcome motivates him or her to use more or fewer cognitive resources while processing a situation or thing. Researchers have divided preferred outcomes into two broad categories: directional and non-directional outcomes. The preferred outcome provides the motivation for the level of processing involved. Goal-oriented motivational thinking: Individuals motivated by directional outcomes have the intention of accomplishing a specific goal. These goals can range from appearing smart, courageous or likeable, to affirming positive thoughts and feelings about something or someone to whom they are close or find likable. If someone is motivated by non-directional outcomes, he or she may wish to make the most logical and clear decision. Whether a person is motivated by directional or non-directional outcomes depends on the situation and the person's goals. Confirmation bias is an example of thought-processing motivated by directional outcomes. The goal is to affirm previously held beliefs, so one will use less thorough thinking in order to reach that goal.
A person motivated to get the best education, who researches colleges and visits schools, is motivated by a non-directional outcome. Evidence for outcome-influenced motivation is illustrated by research on self-serving bias. According to Miller (1976), "Independent of expectancies from prior success or failure, the more personally important a success is in any given situation, the stronger is the tendency to claim responsibility for this success but to deny responsibility for failure." Motivation based on strategy: Though outcome-based motivation is the most prominent approach to motivated thinking, there is evidence that a person can be motivated by their preferred strategy of processing information. However, rather than being an alternative, this idea is actually a complement to the outcome-based approach. Proponents of this approach feel that a person prefers a specific method of information-processing because it usually yields the results they wish to receive. This relates back to the intended outcome being the primary motivation. "Strategy of information processing" means whether a person makes a decision using bias, categories, or analytical thinking. Whether the method is best suited to the situation, or more thorough, matters less to the person than its likelihood of yielding the intended result. People feel that their preferred strategy just "feels right". What makes the heuristic or method feel "right" is that the strategy accomplishes the desired goal (i.e. affirming positive beliefs of self-efficacy). Other motivations and approaches: There has been limited research on motivated tactical thinking outside of Western countries. One theory experts have mentioned is that a person's culture could play a large role in their motivations. Nations like the United States are considered to be individualistic, while many Asian nations are considered to be collectivistic. An individualist places importance on the self and is motivated by individual reward and affirmation, while a collectivist sees the world as being more group- or culture-based. The difference between the two ways of thinking could affect motivation in information processing. For example, instead of being motivated by self-affirmation, a collectivist would be motivated by more group-affirming goals. Another theory is that emotions can affect the way a person processes information. Forgas (2000) has stated that current mood can determine how information is processed, as well as the thoroughness of thought. He also mentioned that achieving a desired emotion can influence the level to which information is processed.
**Questionnaire for User Interaction Satisfaction** Questionnaire for User Interaction Satisfaction: The Questionnaire For User Interaction Satisfaction (QUIS) is a tool developed to assess users' subjective satisfaction with specific aspects of the human-computer interface. It was developed in 1987 by a multi-disciplinary team of researchers at the University of Maryland Human-Computer Interaction Lab. The QUIS is currently at Version 7.0, with a demographic questionnaire, a measure of overall system satisfaction along 6 scales, and measures of 9 specific interface factors. These 9 factors are: screen factors, terminology and system feedback, learning factors, system capabilities, technical manuals, on-line tutorials, multimedia, teleconferencing, and software installation. It is currently available in German, Italian, Portuguese, and Spanish. Background: When the QUIS was developed, a large number of questionnaires concerning user subjective satisfaction had been developed. However, few of these focused exclusively on user evaluation of the interface itself. This was the motivation for the development of the QUIS. Version 1.0 In 1987, Ben Shneiderman presented a questionnaire that directed user attention to focus on their subjective rating of the human-computer interface. While this questionnaire was a strong step towards focusing on users' evaluations of an interface, no empirical work had been done to assess its reliability or validity. Background: Version 2.0 This original questionnaire consisted of 90 questions in total. Of these questions, 5 were concerned with rating a user's overall reaction to the system. The remaining 85 were organized into 20 groups which, in turn, consisted of a main component question followed by related subcomponent questions. Background: The reliability of the questionnaire was found to be high, with Cronbach's alpha = .94. Version 3.0 QUIS Version 2.0 was modified and expanded to three major sections. In the first section, there were three questions concerned with the type of system under evaluation and the amount of time spent on that system. In the second section, four questions focused on the user's past computer experiences. The last section, section III, included the modified version of QUIS Version 2.0, now containing 103 questions. These modifications included changing the 1–10 rating scale to 1–9, with 0 used for "not applicable". This also simplified future data entry for the questionnaire, since a maximum rating would no longer require two keystrokes (as it would have for "10"). This in turn would reduce response bias from subjects. Background: Version 4.0 Chin, Norman and Shneiderman (1987) administered the QUIS Version 3.0 and a subsequent revised Version 4.0 to an introductory computer science class learning to program in CF PASCAL. Participants were assigned to either a batch-run IBM mainframe or an interactive syntax-directed editor programming environment on an IBM PC. They evaluated the environment they had used during the first 6 weeks of the course (Version 3.0). Then, for the next 6 weeks, the participants switched programming environments and evaluated the new system with the QUIS Version 4.0. Background: Although Version 4.0 appeared to be reliable, there were limitations to the study due to sampling. The sample of users doing the evaluation was limited to those in an academic community.
There was a clear need to determine whether the reliability of the QUIS would generalize to other populations of users and products, such as a local PC User's Group. Background: Version 5.0 Another study using QUIS Version 5.0 was carried out with a local PC User's Group. In order to look at ratings across products, the participants were divided into 4 groups. Each group rated a different product. The products were: a product the rater liked, a product the rater disliked, a command-line system (CLS), and a menu-driven application (MDA). This investigation examined the reliability and discriminability of the questionnaire. In terms of discriminability, the researchers compared the ratings for software that was liked with the ratings for software that was disliked. Lastly, a comparison was made between a mandatory CLS and a voluntarily chosen MDA. The researchers found that the overall reliability of QUIS Version 5.0, using Cronbach's alpha, was .939. Background: Version 5.5 Even though the QUIS Version 5.0 was a powerful tool for interface evaluation, interface issues limited the utility of the on-line version. Previous versions of the QUIS had been laid out in a very linear fashion, with one question shown on each screen. However, this format was unable to capture the hierarchical nature of the question sets in the QUIS, which in turn limited the continuity between questions. QUIS 5.5 presented related sets of questions on the same screen. This helped improve question continuity within a set and reduced the amount of time subjects spent navigating between questions. Background: Users of the QUIS often avoided the on-line version because it failed to record specific user comments about the system. This was not acceptable, since these comments are often vital for usability testing. In response to this need, QUIS Version 5.5 collected and stored comments online for each set of questions. The output format of the QUIS data was also a source of frustration. The original format made analysis confusing and error-prone. QUIS Version 5.5 stored data in a format that could easily be imported into most popular spreadsheet and statistical analysis applications. Background: Overall, the most significant change to the QUIS in Version 5.5 was improved flexibility. Prior versions required experimenters to use all questions in all areas, even though, most often, only a subset of the 80 questions was actually applicable to the interface under evaluation. QUIS Version 5.5 allowed experimenters to select subsets of the QUIS questions to display. Overall, this saved subjects and experimenters time and effort. Background: Version 5.5 - Development of the Web Based QUIS Standard HTML forms were used to let users interact with the QUIS Version 5.5. The online version's style is very similar to the paper version of the questionnaire. The online version displayed multiple questions per page and comment areas at the end of each section. In order to ensure that users considered each question, a response was required for each question (users were able to answer "Not Applicable"). Client-side JavaScript was used to both validate and format the user's responses. The data for each section of the QUIS were time-stamped and recorded on the client computer. At the end of the questionnaire, the data from all sections of the QUIS were gathered together and sent as a single batch back to the server where the QUIS was deployed.
This method of data collection ensured that only completed questionnaires were entered, and prevented concurrency issues between users. Background: Version 5.5 Paper vs. Online Study This study compared responses from paper and on-line formats of the QUIS Version 5.5. Most earlier studies had been interested in assessing equivalence between computerized and paper forms of tests; overall, their results did not indicate significant differences. Twenty subjects evaluated WordPerfect© using both the paper and online formats of the QUIS Version 5.5. Each administration of the QUIS was preceded by a practice session to refamiliarize the subject with the interface. As the researchers expected, the format of the questionnaire did not affect users' ratings. However, it was of note that subjects using the online format wrote more in the comment sections than those who used the paper format. Also, the comments made by subjects using the online format provided better feedback in terms of problems, strengths, and examples. These results indicated that the online QUIS format provides more, and higher-quality, information to developers, researchers and human factors experts than the paper-and-pencil format. Background: Version 6.0 QUIS Version 5.5 was expanded into Version 6.0 and used for the study of the AVR "Guardian" system. Background: Version 7.0 The QUIS Version 7.0 is an updated and expanded version of the previously validated QUIS 5.5. It is arranged in a hierarchical format and contains: (1) a demographic questionnaire, (2) six scales that measure overall reaction ratings of the system, (3) four measures of specific interface factors (screen factors, terminology and system feedback, learning factors, and system capabilities), and (4) optional sections to evaluate specific components of the system. These specific components include: technical manuals and online help, on-line tutorials, multimedia, Internet access, and software installation. Additional space allowing the rater to make comments regarding the interface is also included within the questionnaire. The comment space is headed by a statement that prompts the rater to comment on each of the specific interface factors. Current: In addition to English, the QUIS 7.0 is currently available in the following languages: German, Italian, Portuguese (Brazilian), and Spanish. In Fall 2011, a group of University of Maryland students began work updating the QUIS Version 7.0. Competitors: The Software Usability Measurement Inventory (SUMI), which contains 50 items covering 5 aspects, and the System Usability Scale (SUS).
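The reliability figures quoted above (Cronbach's alpha of .94 for Version 2.0 and .939 for Version 5.0) use the standard internal-consistency statistic. A minimal sketch of the computation, on a made-up respondents-by-items rating matrix:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
import numpy as np

def cronbach_alpha(ratings):
    """ratings: 2-D array-like, rows = respondents, columns = questionnaire items."""
    r = np.asarray(ratings, dtype=float)
    k = r.shape[1]                         # number of items
    item_vars = r.var(axis=0, ddof=1)      # variance of each item across respondents
    total_var = r.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Five respondents rating four items on the 1-9 scale (0 = "not applicable" excluded).
data = [[7, 8, 6, 7], [5, 5, 6, 5], [9, 8, 9, 9], [4, 5, 4, 4], [6, 7, 6, 6]]
print(round(cronbach_alpha(data), 3))  # ≈ 0.969 for this toy data
```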
**Tsunami** Tsunami: A tsunami ( (t)soo-NAH-mee, (t)suu-; from Japanese: 津波, lit. 'harbour wave', pronounced [tsɯnami]) is a series of waves in a water body caused by the displacement of a large volume of water, generally in an ocean or a large lake. Earthquakes, volcanic eruptions and other underwater explosions (including detonations, landslides, glacier calvings, meteorite impacts and other disturbances) above or below water all have the potential to generate a tsunami. Unlike normal ocean waves, which are generated by wind, or tides, which are in turn generated by the gravitational pull of the Moon and the Sun, a tsunami is generated by the displacement of water from a large event. Tsunami waves do not resemble normal undersea currents or sea waves because their wavelength is far longer. Rather than appearing as a breaking wave, a tsunami may instead initially resemble a rapidly rising tide. For this reason, it is often referred to as a tidal wave, although this usage is not favoured by the scientific community because it might give the false impression of a causal relationship between tides and tsunamis. Tsunamis generally consist of a series of waves, with periods ranging from minutes to hours, arriving in a so-called "wave train". Wave heights of tens of metres can be generated by large events. Although the impact of tsunamis is limited to coastal areas, their destructive power can be enormous, and they can affect entire ocean basins. The 2004 Indian Ocean tsunami was among the deadliest natural disasters in human history, with at least 230,000 people killed or missing in 14 countries bordering the Indian Ocean. Tsunami: The Ancient Greek historian Thucydides suggested in his 5th century BC History of the Peloponnesian War that tsunamis were related to submarine earthquakes, but the understanding of tsunamis remained slim until the 20th century, and much remains unknown. Major areas of current research include determining why some large earthquakes do not generate tsunamis while other smaller ones do. This ongoing research is designed to help accurately forecast the passage of tsunamis across oceans as well as how tsunami waves interact with shorelines. Terminology: Tsunami The term "tsunami" is a borrowing from the Japanese tsunami 津波, meaning "harbour wave." For the plural, one can either follow ordinary English practice and add an s, or use an invariable plural as in the Japanese. Some English speakers alter the word's initial /ts/ to an /s/ by dropping the "t," since English does not natively permit /ts/ at the beginning of words, though the original Japanese pronunciation is /ts/. Terminology: Tidal wave Tsunamis are sometimes referred to as tidal waves. This once-popular term derives from the most common appearance of a tsunami, which is that of an extraordinarily high tidal bore. Tsunamis and tides both produce waves of water that move inland, but in the case of a tsunami, the inland movement of water may be much greater, giving the impression of an incredibly high and forceful tide. In recent years, the term "tidal wave" has fallen out of favour, especially in the scientific community, because the causes of tsunamis have nothing to do with those of tides, which are produced by the gravitational pull of the moon and sun rather than the displacement of water. Although the meanings of "tidal" include "resembling" or "having the form or character of" tides, use of the term tidal wave is discouraged by geologists and oceanographers. 
Terminology: A 1969 episode of the TV crime show Hawaii Five-O entitled "Forty Feet High and It Kills!" used the terms "tsunami" and "tidal wave" interchangeably. Terminology: Seismic sea wave The term seismic sea wave is also used to refer to the phenomenon, because the waves most often are generated by seismic activity such as earthquakes. Prior to the rise of the use of the term tsunami in English, scientists generally encouraged the use of the term seismic sea wave rather than tidal wave. However, like tsunami, seismic sea wave is not a completely accurate term, as forces other than earthquakes—including underwater landslides, volcanic eruptions, underwater explosions, land or ice slumping into the ocean, meteorite impacts, and the weather when the atmospheric pressure changes very rapidly—can generate such waves by displacing water. History: While Japan may have the longest recorded history of tsunamis, the sheer destruction caused by the 2004 Indian Ocean earthquake and tsunami marks that event as the most devastating of its kind in modern times, killing around 230,000 people. The Sumatran region is also accustomed to tsunamis, with earthquakes of varying magnitudes regularly occurring off the coast of the island. Tsunamis are an often underestimated hazard in the Mediterranean Sea and parts of Europe. Of historical and current (with regard to risk assumptions) importance are the 1755 Lisbon earthquake and tsunami (which was caused by the Azores–Gibraltar Transform Fault) and the 1783 Calabrian earthquakes, each causing several tens of thousands of deaths, and the 1908 Messina earthquake and tsunami. The tsunami claimed more than 123,000 lives in Sicily and Calabria and is among the most deadly natural disasters in modern Europe. The Storegga Slide in the Norwegian Sea and some examples of tsunamis affecting the British Isles were predominantly landslide-generated waves and meteotsunamis, rather than earthquake-induced waves. History: As early as 426 BC the Greek historian Thucydides inquired in his book History of the Peloponnesian War about the causes of tsunamis, and was the first to argue that ocean earthquakes must be the cause. The oldest human record of a tsunami dates back to 479 BC, in the Greek colony of Potidaea, where it is thought to have been triggered by an earthquake. The tsunami may have saved the colony from an invasion by the Achaemenid Empire. History: "The cause, in my opinion, of this phenomenon must be sought in the earthquake. At the point where its shock has been the most violent the sea is driven back, and suddenly recoiling with redoubled force, causes the inundation. Without an earthquake I do not see how such an accident could happen." The Roman historian Ammianus Marcellinus (Res Gestae 26.10.15–19) described the typical sequence of a tsunami, including an incipient earthquake, the sudden retreat of the sea and a following gigantic wave, after the 365 AD tsunami devastated Alexandria. Causes: The principal generation mechanism of a tsunami is the displacement of a substantial volume of water or perturbation of the sea. This displacement of water is usually caused by earthquakes, but can also be attributed to landslides, volcanic eruptions, glacier calvings or, more rarely, meteorites and nuclear tests. However, the possibility of a meteorite causing a tsunami is debated. Causes: Seismicity Tsunamis can be generated when the sea floor abruptly deforms and vertically displaces the overlying water.
Tectonic earthquakes are a particular kind of earthquake that are associated with the Earth's crustal deformation; when these earthquakes occur beneath the sea, the water above the deformed area is displaced from its equilibrium position. More specifically, a tsunami can be generated when thrust faults associated with convergent or destructive plate boundaries move abruptly, resulting in water displacement, owing to the vertical component of movement involved. Movement on normal (extensional) faults can also cause displacement of the seabed, but only the largest of such events (typically related to flexure in the outer trench swell) cause enough displacement to give rise to a significant tsunami, such as the 1977 Sumba and 1933 Sanriku events. Causes: Tsunamis have a small wave height offshore and a very long wavelength (often hundreds of kilometres long, whereas normal ocean waves have a wavelength of only 30 or 40 metres), which is why they generally pass unnoticed at sea, forming only a slight swell usually about 300 millimetres (12 in) above the normal sea surface. They grow in height when they reach shallower water, in a wave shoaling process described below. A tsunami can occur in any tidal state, and even at low tide it can still inundate coastal areas. Causes: On April 1, 1946, the 8.6 Mw Aleutian Islands earthquake occurred with a maximum Mercalli intensity of VI (Strong). It generated a tsunami which inundated Hilo on the island of Hawaii with a 14-metre-high (46 ft) surge. Between 165 and 173 people were killed. The area where the earthquake occurred is where the Pacific Ocean floor is subducting (or being pushed downwards) under Alaska. Causes: Examples of tsunamis originating at locations away from convergent boundaries include Storegga about 8,000 years ago, Grand Banks in 1929, and Papua New Guinea in 1998 (Tappin, 2001). The Grand Banks and Papua New Guinea tsunamis came from earthquakes which destabilised sediments, causing them to flow into the ocean and generate a tsunami. They dissipated before travelling transoceanic distances. The cause of the Storegga sediment failure is unknown. Possibilities include an overloading of the sediments, an earthquake or a release of gas hydrates (methane etc.). Causes: The 1960 Valdivia earthquake (Mw 9.5), 1964 Alaska earthquake (Mw 9.2), 2004 Indian Ocean earthquake (Mw 9.2), and 2011 Tōhoku earthquake (Mw 9.0) are recent examples of powerful megathrust earthquakes that generated tsunamis (known as teletsunamis) that can cross entire oceans. Smaller (Mw 4.2) earthquakes in Japan can trigger tsunamis (called local and regional tsunamis) that can devastate stretches of coastline, but can do so in only a few minutes at a time. Causes: Landslides The Tauredunum event was a large tsunami on Lake Geneva in 563 CE, caused by sedimentary deposits destabilized by a landslide. Causes: In the 1950s, it was discovered that tsunamis larger than had previously been believed possible can be caused by giant submarine landslides. These large volumes of rapidly displaced water transfer energy at a faster rate than the water can absorb. Their existence was confirmed in 1958, when a giant landslide in Lituya Bay, Alaska, caused the highest wave ever recorded, at a height of 524 metres (1,719 ft). The wave did not travel far, as it struck land almost immediately. The wave struck three boats—each with two people aboard—anchored in the bay.
One boat rode out the wave, but the wave sank the other two, killing both people aboard one of them. Another landslide-tsunami event occurred in 1963, when a massive landslide from Monte Toc entered the reservoir behind the Vajont Dam in Italy. The resulting wave surged over the 262-metre (860 ft) high dam by 250 metres (820 ft) and destroyed several towns. Around 2,000 people died. Scientists named these waves megatsunamis. Causes: Some geologists claim that large landslides from volcanic islands, e.g. Cumbre Vieja on La Palma (Cumbre Vieja tsunami hazard) in the Canary Islands, may be able to generate megatsunamis that can cross oceans, but this is disputed by many others. Causes: In general, landslides generate displacements mainly in the shallower parts of the coastline, and there is conjecture about the nature of large landslides that enter the water. This has been shown to subsequently affect water in enclosed bays and lakes, but a landslide large enough to cause a transoceanic tsunami has not occurred within recorded history. Susceptible locations are believed to be the Big Island of Hawaii, Fogo in the Cape Verde Islands, La Reunion in the Indian Ocean, and Cumbre Vieja on the island of La Palma in the Canary Islands, along with other volcanic ocean islands. This is because large masses of relatively unconsolidated volcanic material occur on the flanks, and in some cases detachment planes are believed to be developing. However, there is growing controversy about how dangerous these slopes actually are. Causes: Volcanic eruptions Other than by landslides or sector collapse, volcanoes may be able to generate waves by pyroclastic flow submergence, caldera collapse, or underwater explosions. Tsunamis have been triggered by a number of volcanic eruptions, including the 1883 eruption of Krakatoa and the 2022 Hunga Tonga–Hunga Ha'apai eruption. Over 20% of all fatalities caused by volcanism during the past 250 years are estimated to have been caused by volcanogenic tsunamis. Debate has persisted over the origins and source mechanisms of these types of tsunamis, such as those generated by Krakatoa in 1883, and they remain less well understood than their seismic relatives. This poses a large problem of awareness and preparedness, as exemplified by the eruption and collapse of Anak Krakatoa in 2018, which killed 426 people and injured thousands when no warning was available. Causes: Lateral landslides and ocean-entering pyroclastic currents are still regarded as the most likely to generate the largest and most hazardous waves from volcanism; however, field investigation of the Tongan event, as well as developments in numerical modelling methods, currently aim to expand the understanding of the other source mechanisms. Causes: Meteorological Some meteorological conditions, especially rapid changes in barometric pressure, as seen with the passing of a front, can displace bodies of water enough to cause trains of waves with wavelengths comparable to those of seismic tsunamis, but usually with lower energies. Essentially, they are dynamically equivalent to seismic tsunamis, the only differences being 1) that meteotsunamis lack the transoceanic reach of significant seismic tsunamis, and 2) that the force that displaces the water is sustained over some length of time, such that meteotsunamis cannot be modelled as having been caused instantaneously.
In spite of their lower energies, on shorelines where they can be amplified by resonance, they are sometimes powerful enough to cause localised damage and potential for loss of life. They have been documented in many places, including the Great Lakes, the Aegean Sea, the English Channel, and the Balearic Islands, where they are common enough to have a local name, rissaga. In Sicily they are called marubbio and in Nagasaki Bay, they are called abiki. Some examples of destructive meteotsunamis include 31 March 1979 at Nagasaki and 15 June 2006 at Menorca, the latter causing damage in the tens of millions of euros. Meteotsunamis should not be confused with storm surges, which are local increases in sea level associated with the low barometric pressure of passing tropical cyclones, nor should they be confused with setup, the temporary local raising of sea level caused by strong on-shore winds. Storm surges and setup are also dangerous causes of coastal flooding in severe weather, but their dynamics are completely unrelated to tsunami waves. Unlike waves, they are unable to propagate beyond their sources. Causes: Human-made or triggered tsunamis The accidental Halifax Explosion in 1917 triggered an 18-metre-high tsunami in the harbour. There have been studies of the potential to induce tsunami waves as a tectonic weapon, and at least one actual attempt to create them. Causes: In World War II, the New Zealand Military Forces initiated Project Seal, which attempted to create small tsunamis with explosives in the area of today's Shakespear Regional Park; the attempt failed. There has been considerable speculation on the possibility of using nuclear weapons to cause tsunamis near an enemy coastline. Even during World War II, the idea was explored using conventional explosives. Nuclear testing in the Pacific Proving Ground by the United States seemed to generate poor results. Operation Crossroads fired two 20 kilotonnes of TNT (84 TJ) bombs, one in the air and one underwater, above and below the shallow (50 m (160 ft)) waters of the Bikini Atoll lagoon. Fired about 6 km (3.7 mi) from the nearest island, the waves there were no higher than 3–4 m (9.8–13.1 ft) upon reaching the shoreline. Other underwater tests, mainly Hardtack I/Wahoo (deep water) and Hardtack I/Umbrella (shallow water), confirmed the results. Analysis of the effects of shallow and deep underwater explosions indicates that the energy of the explosions does not easily generate the kind of deep, all-ocean waveforms which are tsunamis; most of the energy creates steam, causes vertical fountains above the water, and creates compressional waveforms. Tsunamis are hallmarked by permanent large vertical displacements of very large volumes of water, which do not occur in explosions. Characteristics: Tsunamis are caused by earthquakes, landslides, volcanic explosions, glacier calvings, and bolides. They cause damage by two mechanisms: the smashing force of a wall of water travelling at high speed, and the destructive power of a large volume of water draining off the land and carrying a large amount of debris with it, even with waves that do not appear to be large. Characteristics: While everyday wind waves have a wavelength (from crest to crest) of about 100 metres (330 ft) and a height of roughly 2 metres (6.6 ft), a tsunami in the deep ocean has a much larger wavelength of up to 200 kilometres (120 mi).
Such a wave travels at well over 800 kilometres per hour (500 mph), but owing to the enormous wavelength the wave oscillation at any given point takes 20 or 30 minutes to complete a cycle and has an amplitude of only about 1 metre (3.3 ft). This makes tsunamis difficult to detect over deep water, where ships are unable to feel their passage. Characteristics: The velocity of a tsunami can be calculated by obtaining the square root of the depth of the water in metres multiplied by the acceleration due to gravity (approximated to 10 m/s2). For example, if the Pacific Ocean is considered to have a depth of 5000 metres, the velocity of a tsunami would be √(5000 × 10) = √50000 ≈ 224 metres per second (730 ft/s), which equates to a speed of about 806 kilometres per hour (501 mph). This is the formula used for calculating the velocity of shallow-water waves. Even the deep ocean is shallow in this sense, because a tsunami wave is so long (horizontally from crest to crest) by comparison. Characteristics: The reason for the Japanese name "harbour wave" is that sometimes a village's fishermen would sail out, encounter no unusual waves while out at sea fishing, and come back to land to find their village devastated by a huge wave. Characteristics: As the tsunami approaches the coast and the waters become shallow, wave shoaling compresses the wave and its speed decreases below 80 kilometres per hour (50 mph). Its wavelength diminishes to less than 20 kilometres (12 mi) and its amplitude grows enormously—in accord with Green's law. Since the wave still has the same very long period, the tsunami may take minutes to reach full height. Except for the very largest tsunamis, the approaching wave does not break, but rather appears like a fast-moving tidal bore. Open bays and coastlines adjacent to very deep water may shape the tsunami further into a step-like wave with a steep-breaking front. Characteristics: When the tsunami's wave peak reaches the shore, the resulting temporary rise in sea level is termed run-up. Run-up is measured in metres above a reference sea level. A large tsunami may feature multiple waves arriving over a period of hours, with significant time between the wave crests. The first wave to reach the shore may not have the highest run-up. About 80% of tsunamis occur in the Pacific Ocean, but they are possible wherever there are large bodies of water, including lakes. However, tsunami interactions with shorelines and the seafloor topography are extremely complex, which leaves some countries more vulnerable than others. For example, the Pacific coasts of the United States and Mexico lie adjacent to each other, but the United States has recorded ten tsunamis in the region since 1788, while Mexico has recorded twenty-five since 1732. Similarly, Japan has had more than a hundred tsunamis in recorded history, while the neighboring island of Taiwan has registered only two, in 1781 and 1867. Drawback: All waves have a positive and negative peak; that is, a ridge and a trough. In the case of a propagating wave like a tsunami, either may be the first to arrive. If the first part to arrive at the shore is the ridge, a massive breaking wave or sudden flooding will be the first effect noticed on land. However, if the first part to arrive is a trough, a drawback will occur as the shoreline recedes dramatically, exposing normally submerged areas.
The drawback can exceed hundreds of metres, and people unaware of the danger sometimes remain near the shore to satisfy their curiosity or to collect fish from the exposed seabed. Drawback: A typical wave period for a damaging tsunami is about twelve minutes. Thus, the sea recedes in the drawback phase, with areas well below sea level exposed after three minutes. For the next six minutes, the wave trough builds into a ridge which may flood the coast, and destruction ensues. During the next six minutes, the wave changes from a ridge to a trough, and the flood waters recede in a second drawback. Victims and debris may be swept into the ocean. The process repeats with succeeding waves. Scales of intensity and magnitude: As with earthquakes, several attempts have been made to set up scales of tsunami intensity or magnitude to allow comparison between different events. Scales of intensity and magnitude: Intensity scales The first scales used routinely to measure the intensity of tsunamis were the Sieberg-Ambraseys scale (1962), used in the Mediterranean Sea, and the Imamura-Iida intensity scale (1963), used in the Pacific Ocean. The latter scale was modified by Soloviev (1972), who calculated the tsunami intensity I according to the formula I = 1/2 + log₂ Hav, where Hav is the "tsunami height" in metres, averaged along the nearest coastline, with the tsunami height defined as the rise of the water level above the normal tidal level at the time of occurrence of the tsunami. This scale, known as the Soloviev-Imamura tsunami intensity scale, is used in the global tsunami catalogues compiled by the NGDC/NOAA and the Novosibirsk Tsunami Laboratory as the main parameter for the size of the tsunami. Scales of intensity and magnitude: This formula yields: I = 2 for Hav = 2.8 metres; I = 3 for Hav = 5.5 metres; I = 4 for Hav = 11 metres; I = 5 for Hav = 22.5 metres; etc. In 2013, following the intensively studied tsunamis in 2004 and 2011, a new 12-point scale was proposed, the Integrated Tsunami Intensity Scale (ITIS-2012), intended to match as closely as possible the modified ESI2007 and EMS earthquake intensity scales. Scales of intensity and magnitude: Magnitude scales The first scale that genuinely calculated a magnitude for a tsunami, rather than an intensity at a particular location, was the ML scale proposed by Murty & Loomis based on the potential energy. Difficulties in calculating the potential energy of the tsunami mean that this scale is rarely used. Abe introduced the tsunami magnitude scale Mt, calculated from Mt = a log h + b log R + D, where h is the maximum tsunami-wave amplitude (in m) measured by a tide gauge at a distance R from the epicentre, and a, b and D are constants chosen to make the Mt scale match as closely as possible with the moment magnitude scale. Tsunami heights: Several terms are used to describe the different characteristics of tsunami in terms of their height: Amplitude, Wave Height, or Tsunami Height: Refers to the height of a tsunami relative to the normal sea level at the time of the tsunami, which may be tidal high water or low water. It is different from the crest-to-trough height which is commonly used to measure other types of wave height. Tsunami heights: Run-up Height, or Inundation Height: The height reached by a tsunami on the ground above sea level. Maximum run-up height refers to the maximum height reached by water above sea level, which is sometimes reported as the maximum height reached by a tsunami.
Flow Depth: Refers to the height of the tsunami above the ground, regardless of the height of the location or sea level. (Maximum) Water Level: Maximum height above sea level as seen from a trace or watermark. It differs from the maximum run-up height in that it is not necessarily measured at the inundation line/limit. Warnings and predictions: Drawbacks can serve as a brief warning. People who observe drawback (many survivors report an accompanying sucking sound) can survive only if they immediately run for high ground or seek the upper floors of nearby buildings. In 2004, ten-year-old Tilly Smith of Surrey, England, was on Maikhao beach in Phuket, Thailand with her parents and sister, and having learned about tsunamis recently in school, told her family that a tsunami might be imminent. Her parents warned others minutes before the wave arrived, saving dozens of lives. She credited her geography teacher, Andrew Kearney. In the 2004 Indian Ocean tsunami, drawback was not reported on the African coast or any other east-facing coasts that the wave reached. This was because the initial wave moved downwards on the eastern side of the megathrust and upwards on the western side. The western pulse hit coastal Africa and other western areas. Warnings and predictions: A tsunami cannot be precisely predicted, even if the magnitude and location of an earthquake are known. Geologists, oceanographers, and seismologists analyse each earthquake and, based on many factors, may or may not issue a tsunami warning. However, there are some warning signs of an impending tsunami, and automated systems can provide warnings immediately after an earthquake, in time to save lives. One of the most successful systems uses bottom pressure sensors, attached to buoys, which constantly monitor the pressure of the overlying water column. Warnings and predictions: Regions with a high tsunami risk typically use tsunami warning systems to warn the population before the wave reaches land. On the west coast of the United States, which is prone to tsunamis from the Pacific Ocean, warning signs indicate evacuation routes. In Japan, the populace is well-educated about earthquakes and tsunamis, and along Japanese shorelines tsunami warning signs remind people of the natural hazards, complemented by a network of warning sirens, typically at the top of the cliffs of surrounding hills. The Pacific Tsunami Warning System is based in Honolulu, Hawaiʻi. It monitors Pacific Ocean seismic activity. A sufficiently large earthquake magnitude and other information triggers a tsunami warning. While the subduction zones around the Pacific are seismically active, not all earthquakes generate a tsunami. Computers assist in analysing the tsunami risk of every earthquake that occurs in the Pacific Ocean and the adjoining land masses. Warnings and predictions: As a direct result of the Indian Ocean tsunami, a re-appraisal of the tsunami threat for all coastal areas is being undertaken by national governments and the United Nations Disaster Mitigation Committee. A tsunami warning system is being installed in the Indian Ocean. Warnings and predictions: Computer models can predict tsunami arrival, usually within minutes of the arrival time. Bottom pressure sensors can relay information in real time. Based on these pressure readings and other seismic information, and on the seafloor's shape (bathymetry) and coastal topography, the models estimate the amplitude and surge height of the approaching tsunami.
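The velocity and intensity formulas quoted earlier can be checked numerically. The sketch below simply reproduces the worked examples from the text (the Pacific-depth speed estimate and the Soloviev-Imamura intensity values); it is an illustration, not a forecasting tool:

```python
import math

def tsunami_speed(depth_m, g=10.0):
    """Shallow-water wave speed v = sqrt(g * d), with g approximated as 10 m/s²."""
    return math.sqrt(g * depth_m)

def soloviev_intensity(h_av):
    """Soloviev-Imamura intensity I = 1/2 + log2(Hav), Hav = average coastal height (m)."""
    return 0.5 + math.log2(h_av)

v = tsunami_speed(5000)  # the 5000-metre Pacific example from the text
print(f"{v:.0f} m/s ≈ {v * 3.6:.0f} km/h")  # 224 m/s, about 800 km/h
for h in (2.8, 5.5, 11, 22.5):  # heights from the intensity examples above
    print(f"Hav = {h:>4} m -> I ≈ {soloviev_intensity(h):.2f}")  # ≈ 2, 3, 4, 5
```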
All Pacific Rim countries collaborate in the Tsunami Warning System and most regularly practise evacuation and other procedures. In Japan, such preparation is mandatory for government, local authorities, emergency services and the population. Warnings and predictions: Along the United States west coast, in addition to sirens, warnings are sent on television and radio via the National Weather Service, using the Emergency Alert System. Warnings and predictions: Possible animal reaction Some zoologists hypothesise that some animal species have an ability to sense subsonic Rayleigh waves from an earthquake or a tsunami. If correct, monitoring their behaviour could provide advance warning of earthquakes and tsunamis. However, the evidence is controversial and is not widely accepted. There are unsubstantiated claims about the Lisbon quake that some animals escaped to higher ground, while many other animals in the same areas drowned. The phenomenon was also noted by media sources in Sri Lanka in the 2004 Indian Ocean earthquake. It is possible that certain animals (e.g., elephants) may have heard the sounds of the tsunami as it approached the coast. The elephants' reaction was to move away from the approaching noise. By contrast, some humans went to the shore to investigate, and many drowned as a result. Mitigation: In some tsunami-prone countries, earthquake engineering measures have been taken to reduce the damage caused onshore. Mitigation: Japan, where tsunami science and response measures first began following a disaster in 1896, has produced ever-more elaborate countermeasures and response plans. The country has built many tsunami walls of up to 12 metres (39 ft) high to protect populated coastal areas. Other localities have built floodgates of up to 15.5 metres (51 ft) high and channels to redirect the water from an incoming tsunami. However, their effectiveness has been questioned, as tsunamis often overtop the barriers. Mitigation: The Fukushima Daiichi nuclear disaster was directly triggered by the 2011 Tōhoku earthquake and tsunami, when waves exceeded the height of the plant's sea wall. Iwate Prefecture, an area at high risk from tsunami, had tsunami barrier walls (the Taro sea wall) totalling 25 kilometres (16 mi) in length at coastal towns. The 2011 tsunami toppled more than 50% of the walls and caused catastrophic damage. The tsunami that struck Okushiri Island of Hokkaidō within two to five minutes of the earthquake on July 12, 1993, created waves as much as 30 metres (100 ft) tall, as high as a 10-storey building. The port town of Aonae was completely surrounded by a tsunami wall, but the waves washed right over the wall and destroyed all the wood-framed structures in the area. The wall may have succeeded in slowing down and moderating the height of the tsunami, but it did not prevent major destruction and loss of life.
**Notch signaling pathway** Notch signaling pathway: The Notch signaling pathway is a highly conserved cell signaling system present in most animals. Mammals possess four different notch receptors, referred to as NOTCH1, NOTCH2, NOTCH3, and NOTCH4. The notch receptor is a single-pass transmembrane receptor protein. It is a hetero-oligomer composed of a large extracellular portion, which associates in a calcium-dependent, non-covalent interaction with a smaller piece of the notch protein composed of a short extracellular region, a single transmembrane-pass, and a small intracellular region. Notch signaling promotes proliferative signaling during neurogenesis, and its activity is inhibited by Numb to promote neural differentiation. It plays a major role in the regulation of embryonic development. Notch signaling is dysregulated in many cancers, and faulty notch signaling is implicated in many diseases, including T-cell acute lymphoblastic leukemia (T-ALL), cerebral autosomal-dominant arteriopathy with sub-cortical infarcts and leukoencephalopathy (CADASIL), multiple sclerosis, Tetralogy of Fallot, and Alagille syndrome. Inhibition of notch signaling inhibits the proliferation of T-cell acute lymphoblastic leukemia in both cultured cells and a mouse model. Discovery: In 1914, John S. Dexter noticed the appearance of a notch in the wings of the fruit fly Drosophila melanogaster. The alleles of the gene were identified in 1917 by American evolutionary biologist Thomas Hunt Morgan. Its molecular analysis and sequencing was independently undertaken in the 1980s by Spyros Artavanis-Tsakonas and Michael W. Young. Alleles of the two C. elegans Notch genes were identified based on developmental phenotypes: lin-12 and glp-1. The cloning and partial sequence of lin-12 was reported at the same time as Drosophila Notch by Iva Greenwald. Mechanism: The Notch protein spans the cell membrane, with part of it inside and part outside. Ligand proteins binding to the extracellular domain induce proteolytic cleavage and release of the intracellular domain, which enters the cell nucleus to modify gene expression. The cleavage model was first proposed in 1993 based on work done with Drosophila Notch and C. elegans lin-12, informed by the first oncogenic mutation affecting a human Notch gene. Compelling evidence for this model was provided in 1998 by in vivo analysis in Drosophila by Gary Struhl and in cell culture by Raphael Kopan. Although this model was initially disputed, the evidence in favor of the model was irrefutable by 2001. The receptor is normally triggered via direct cell-to-cell contact, in which the transmembrane proteins of the cells in direct contact form the ligands that bind the notch receptor. Notch binding allows groups of cells to organize themselves such that, if one cell expresses a given trait, this may be switched off in neighbouring cells by the intercellular notch signal. In this way, groups of cells influence one another to make large structures. Thus, lateral inhibition mechanisms are key to Notch signaling. lin-12 and Notch mediate binary cell fate decisions, and lateral inhibition involves feedback mechanisms to amplify initial differences. The Notch cascade consists of Notch and Notch ligands, as well as intracellular proteins transmitting the notch signal to the cell's nucleus. The Notch/Lin-12/Glp-1 receptor family was found to be involved in the specification of cell fates during development in Drosophila and C.
elegans. The intracellular domain of Notch forms a complex with CBF1 and Mastermind to activate transcription of target genes. The structure of the complex has been determined. Mechanism: Pathway Maturation of the notch receptor involves cleavage at the prospective extracellular side during intracellular trafficking in the Golgi complex. This results in a bipartite protein, composed of a large extracellular domain linked to the smaller transmembrane and intracellular domain. Binding of ligand promotes two proteolytic processing events; as a result of proteolysis, the intracellular domain is liberated and can enter the nucleus to engage other DNA-binding proteins and regulate gene expression. Mechanism: Notch and most of its ligands are transmembrane proteins, so the cells expressing the ligands typically must be adjacent to the notch-expressing cell for signaling to occur. The notch ligands are also single-pass transmembrane proteins and are members of the DSL (Delta/Serrate/LAG-2) family of proteins. In Drosophila melanogaster (the fruit fly), there are two ligands named Delta and Serrate. In mammals, the corresponding names are Delta-like and Jagged. In mammals there are multiple Delta-like and Jagged ligands, as well as possibly a variety of other ligands, such as F3/contactin. In the nematode C. elegans, two genes encode homologous proteins, glp-1 and lin-12. There has been at least one report that suggests that some cells can send out processes that allow signaling to occur between cells that are as much as four or five cell diameters apart. The notch extracellular domain is composed primarily of small cysteine-rich motifs called EGF-like repeats. Notch 1, for example, has 36 of these repeats. Each EGF-like repeat is composed of approximately 40 amino acids, and its structure is defined largely by six conserved cysteine residues that form three conserved disulfide bonds. Each EGF-like repeat can be modified by O-linked glycans at specific sites. An O-glucose sugar may be added between the first and second conserved cysteines, and an O-fucose may be added between the second and third conserved cysteines. These sugars are added by the O-glucosyltransferase Rumi and by GDP-fucose protein O-fucosyltransferase 1 (POFUT1), respectively. The addition of O-fucose by POFUT1 is absolutely necessary for notch function, and, without the enzyme to add O-fucose, all notch proteins fail to function properly. As yet, the manner by which the glycosylation of notch affects function is not completely understood. Mechanism: The O-glucose on notch can be further elongated to a trisaccharide with the addition of two xylose sugars by xylosyltransferases, and the O-fucose can be elongated to a tetrasaccharide by the ordered addition of an N-acetylglucosamine (GlcNAc) sugar by an N-acetylglucosaminyltransferase called Fringe, the addition of a galactose by a galactosyltransferase, and the addition of a sialic acid by a sialyltransferase. To add another level of complexity, in mammals there are three Fringe GlcNAc-transferases, named lunatic fringe, manic fringe, and radical fringe. These enzymes are responsible for something called a "fringe effect" on notch signaling. If Fringe adds a GlcNAc to the O-fucose sugar, then the subsequent addition of a galactose and sialic acid will occur. In the presence of this tetrasaccharide, notch signals strongly when it interacts with the Delta ligand, but has markedly inhibited signaling when interacting with the Jagged ligand.
The means by which this addition of sugar inhibits signaling through one ligand and potentiates signaling through another is not clearly understood. Mechanism: Once the notch extracellular domain interacts with a ligand, an ADAM-family metalloprotease called ADAM10 cleaves the notch protein just outside the membrane. This releases the extracellular portion of notch (NECD), which continues to interact with the ligand. The ligand plus the notch extracellular domain is then endocytosed by the ligand-expressing cell. There may be signaling effects in the ligand-expressing cell after endocytosis; this part of notch signaling is a topic of active research. After this first cleavage, an enzyme called γ-secretase (which is implicated in Alzheimer's disease) cleaves the remaining part of the notch protein just inside the inner leaflet of the cell membrane of the notch-expressing cell. This releases the intracellular domain of the notch protein (NICD), which then moves to the nucleus, where it can regulate gene expression by activating the transcription factor CSL. It was originally thought that these CSL proteins suppressed Notch target transcription. However, further research showed that, when the intracellular domain binds to the complex, it switches from a repressor to an activator of transcription. Other proteins also participate in the intracellular portion of the notch signaling cascade. Mechanism: Ligand interactions Notch signaling is initiated when Notch receptors on the cell surface engage ligands presented in trans on opposing cells. Despite the expansive size of the Notch extracellular domain, it has been demonstrated that EGF domains 11 and 12 are the critical determinants for interactions with Delta. Additional studies have implicated regions outside of Notch EGF11-12 in ligand binding. For example, Notch EGF domain 8 plays a role in selective recognition of Serrate/Jagged, and EGF domains 6-15 are required for maximal signaling upon ligand stimulation. A crystal structure of the interacting regions of Notch1 and Delta-like 4 (Dll4) provided a molecular-level visualization of Notch-ligand interactions, and revealed that the N-terminal MNNL (or C2) and DSL domains of ligands bind to Notch EGF domains 12 and 11, respectively. The Notch1-Dll4 structure also illuminated a direct role for Notch O-linked fucose and glucose moieties in ligand recognition, and rationalized a structural mechanism for the glycan-mediated tuning of Notch signaling. Mechanism: Synthetic Notch signaling It is possible to engineer synthetic Notch receptors by replacing the extracellular receptor and intracellular transcriptional domains with other domains of choice. This allows researchers to select which ligands are detected, and which genes are upregulated in response. Using this technology, cells can report or change their behavior in response to contact with user-specified signals, facilitating new avenues of both basic and applied research into cell-cell signaling. Notably, this system allows multiple synthetic pathways to be engineered into a cell in parallel. Function: The Notch signaling pathway is important for cell-cell communication, which involves gene regulation mechanisms that control multiple cell differentiation processes during embryonic and adult life.
Function: Notch signaling also has a role in the following processes: neuronal function and development; stabilization of arterial endothelial fate and angiogenesis; regulation of crucial cell communication events between endocardium and myocardium during both the formation of the valve primordia and ventricular development and differentiation; cardiac valve homeostasis, as well as implications in other human disorders involving the cardiovascular system; timely cell lineage specification of both endocrine and exocrine pancreas; influencing of binary fate decisions of cells that must choose between the secretory and absorptive lineages in the gut; expansion of the hematopoietic stem cell compartment during bone development and participation in commitment to the osteoblastic lineage, suggesting a potential therapeutic role for notch in bone regeneration and osteoporosis; expansion of the hemogenic endothelial cells along with a signaling axis involving Hedgehog signaling and Scl; T cell lineage commitment from the common lymphoid precursor; regulation of cell-fate decisions in mammary glands at several distinct development stages; possibly some non-nuclear mechanisms, such as control of the actin cytoskeleton through the tyrosine kinase Abl; regulation of the mitotic/meiotic decision in the C. elegans germline; and development of alveoli in the lung. Function: It has also been found that Rex1 has inhibitory effects on the expression of notch in mesenchymal stem cells, preventing differentiation. Role in embryogenesis: The Notch signaling pathway plays an important role in cell-cell communication, and further regulates embryonic development. Role in embryogenesis: Embryo polarity Notch signaling is required in the regulation of polarity. For example, mutation experiments have shown that loss of Notch signaling causes abnormal anterior-posterior polarity in somites. Also, Notch signaling is required during left-right asymmetry determination in vertebrates. Early studies in the nematode model organism C. elegans indicate that Notch signaling has a major role in the induction of mesoderm and cell fate determination. As mentioned previously, C. elegans has two genes that encode partially functionally redundant Notch homologs, glp-1 and lin-12. During early C. elegans embryogenesis, GLP-1, the C. elegans Notch homolog, interacts with APX-1, the C. elegans Delta homolog. This signaling between particular blastomeres induces differentiation of cell fates and establishes the dorsal-ventral axis. Role in embryogenesis: Role in somitogenesis Notch signaling is central to somitogenesis. In 1995, Notch1 was shown to be important for coordinating the segmentation of somites in mice. Further studies identified the role of Notch signaling in the segmentation clock. These studies hypothesized that the primary function of Notch signaling is not to act on an individual cell, but to coordinate cell clocks and keep them synchronized. This hypothesis explained the role of Notch signaling in the development of segmentation and has been supported by experiments in mice and zebrafish. Experiments with Delta1 mutant mice that show abnormal somitogenesis with loss of anterior/posterior polarity suggest that Notch signaling is also necessary for the maintenance of somite borders. During somitogenesis, a molecular oscillator in paraxial mesoderm cells dictates the precise rate of somite formation. A clock and wavefront model has been proposed in order to spatially determine the location and boundaries between somites.
This process is highly regulated, as somites must have the correct size and spacing in order to avoid malformations within the axial skeleton that may potentially lead to spondylocostal dysostosis. Several key components of the Notch signaling pathway help coordinate key steps in this process. In mice, mutations in Notch1, Dll1 or Dll3, Lfng, or Hes7 result in abnormal somite formation. Similarly, in humans, mutations in DLL3, LFNG, or HES7 have been seen to lead to the development of spondylocostal dysostosis. Role in embryogenesis: Role in epidermal differentiation Notch signaling is known to occur inside ciliated, differentiating cells found in the first epidermal layers during early skin development. Furthermore, it has been found that presenilin-2 works in conjunction with ARF4 to regulate Notch signaling during this development. However, it remains to be determined whether gamma-secretase has a direct or indirect role in modulating Notch signaling. Role in central nervous system development and function: Early findings on Notch signaling in central nervous system (CNS) development were performed mainly in Drosophila with mutagenesis experiments. For example, the finding that an embryonic lethal phenotype in Drosophila was associated with Notch dysfunction indicated that Notch mutations can lead to the failure of neural and epidermal cell segregation in early Drosophila embryos. In the past decade, advances in mutation and knockout techniques allowed research on the Notch signaling pathway in mammalian models, especially rodents. Role in central nervous system development and function: The Notch signaling pathway was found to be critical mainly for neural progenitor cell (NPC) maintenance and self-renewal. In recent years, other functions of the Notch pathway have also been found, including glial cell specification, neurite development, as well as learning and memory. Role in central nervous system development and function: Neuron cell differentiation The Notch pathway is essential for maintaining NPCs in the developing brain. Activation of the pathway is sufficient to maintain NPCs in a proliferating state, whereas loss-of-function mutations in the critical components of the pathway cause precocious neuronal differentiation and NPC depletion. Modulators of the Notch signal, e.g., the Numb protein, are able to antagonize Notch effects, resulting in the halting of the cell cycle and differentiation of NPCs. Conversely, the fibroblast growth factor pathway promotes Notch signaling to keep stem cells of the cerebral cortex in the proliferative state, amounting to a mechanism regulating cortical surface area growth and, potentially, gyrification. In this way, Notch signaling controls NPC self-renewal as well as cell fate specification. Role in central nervous system development and function: A non-canonical branch of the Notch signaling pathway that involves the phosphorylation of STAT3 on the serine residue at amino acid position 727 and a subsequent increase in Hes3 expression (STAT3-Ser/Hes3 signaling axis) has been shown to regulate the number of NPCs in culture and in the adult rodent brain. In adult rodents and in cell culture, Notch3 promotes neuronal differentiation, having a role opposite to Notch1/2. This indicates that individual Notch receptors can have divergent functions, depending on cellular context. Role in central nervous system development and function: Neurite development In vitro studies show that Notch can influence neurite development.
In vivo, deletion of the Notch signaling modulator, Numb, disrupts neuronal maturation in the developing cerebellum, whereas deletion of Numb disrupts axonal arborization in sensory ganglia. Although the mechanism underlying this phenomenon is not clear, together these findings suggest Notch signaling might be crucial in neuronal maturation. Role in central nervous system development and function: Gliogenesis In gliogenesis, Notch appears to have an instructive role that can directly promote the differentiation of many glial cell subtypes. For example, activation of Notch signaling in the retina favors the generation of Müller glia cells at the expense of neurons, whereas reduced Notch signaling induces production of ganglion cells, causing a reduction in the number of Müller glia. Role in central nervous system development and function: Adult brain function Apart from its role in development, evidence shows that Notch signaling is also involved in neuronal apoptosis, neurite retraction, and the neurodegeneration of ischemic stroke in the brain. In addition to developmental functions, Notch proteins and ligands are expressed in cells of the adult nervous system, suggesting a role in CNS plasticity throughout life. Adult mice heterozygous for mutations in either Notch1 or Cbf1 have deficits in spatial learning and memory. Similar results are seen in experiments with presenilins 1 and 2, which mediate the Notch intramembranous cleavage. To be specific, conditional deletion of presenilins at 3 weeks after birth in excitatory neurons causes learning and memory deficits, neuronal dysfunction, and gradual neurodegeneration. Several gamma-secretase inhibitors that underwent human clinical trials in Alzheimer's disease and MCI patients resulted in statistically significant worsening of cognition relative to controls, which is thought to be due to their incidental effect on Notch signalling. Role in cardiovascular development: The Notch signaling pathway is a critical component of cardiovascular formation and morphogenesis in both development and disease. It is required for the selection of endothelial tip and stalk cells during sprouting angiogenesis. Cardiac development The Notch signaling pathway plays a crucial role in at least three cardiac development processes: atrioventricular canal development, myocardial development, and cardiac outflow tract (OFT) development. Role in cardiovascular development: Atrioventricular (AV) canal development AV boundary formation Notch signaling can regulate the atrioventricular boundary formation between the AV canal and the chamber myocardium. Studies have revealed that both loss- and gain-of-function of the Notch pathway result in defects in AV canal development. In addition, the Notch target genes HEY1 and HEY2 are involved in restricting the expression of two critical developmental regulator proteins, BMP2 and Tbx2, to the AV canal. AV epithelial-mesenchymal transition (EMT) Notch signaling is also important for the process of AV EMT, which is required for AV canal maturation. After the AV canal boundary formation, a subset of endocardial cells lining the AV canal are activated by signals emanating from the myocardium and by interendocardial signaling pathways to undergo EMT. Notch1 deficiency results in defective induction of EMT. Very few migrating cells are seen, and these lack mesenchymal morphology.
Notch may regulate this process by activating matrix metalloproteinase 2 (MMP2) expression, or by inhibiting vascular endothelial (VE)-cadherin expression in the AV canal endocardium while suppressing the VEGF pathway via VEGFR2. In RBPJk/CBF1-targeted mutants, heart valve development is severely disrupted, presumably because of defective endocardial maturation and signaling. Role in cardiovascular development: Ventricular development Some studies in Xenopus and in mouse embryonic stem cells indicate that cardiomyogenic commitment and differentiation require Notch signaling inhibition. Active Notch signaling is required in the ventricular endocardium for proper trabeculae development subsequent to myocardial specification, by regulating BMP10, NRG1, and EphrinB2 expression. Notch signaling sustains immature cardiomyocyte proliferation in mammals and zebrafish. A regulatory correspondence likely exists between Notch signaling and Wnt signaling, whereby upregulated Wnt expression downregulates Notch signaling, resulting in a subsequent inhibition of ventricular cardiomyocyte proliferation. This proliferative arrest can be rescued using Wnt inhibitors. The downstream effector of Notch signaling, HEY2, was also demonstrated to be important in regulating ventricular development through its expression in the interventricular septum and the endocardial cells of the cardiac cushions. Cardiomyocyte- and smooth muscle cell-specific deletion of HEY2 results in impaired cardiac contractility, a malformed right ventricle, and ventricular septal defects. Role in cardiovascular development: Ventricular outflow tract development During development of the aortic arch and the aortic arch arteries, the Notch receptors, ligands, and target genes display a unique expression pattern. When the Notch pathway was blocked, the induction of vascular smooth muscle cell marker expression failed to occur, suggesting that Notch is involved in the differentiation of cardiac neural crest cells into vascular cells during outflow tract development. Role in cardiovascular development: Angiogenesis Endothelial cells use the Notch signaling pathway to coordinate cellular behaviors during the blood vessel sprouting that occurs in sprouting angiogenesis. Activation of Notch takes place primarily in "connector" cells and cells that line patent stable blood vessels through direct interaction with the Notch ligand, Delta-like ligand 4 (Dll4), which is expressed in the endothelial tip cells. VEGF signaling, which is an important factor for migration and proliferation of endothelial cells, can be downregulated in cells with activated Notch signaling by lowering the levels of VEGF receptor transcript. Zebrafish embryos lacking Notch signaling exhibit ectopic and persistent expression of flt4, the zebrafish ortholog of VEGF receptor 3, within all endothelial cells, while Notch activation completely represses its expression. Notch signaling may be used to control the sprouting pattern of blood vessels during angiogenesis. When cells within a patent vessel are exposed to VEGF signaling, only a restricted number of them initiate the angiogenic process. VEGF is able to induce DLL4 expression. In turn, DLL4-expressing cells down-regulate VEGF receptors in neighboring cells through activation of Notch, thereby preventing their migration into the developing sprout. Likewise, during the sprouting process itself, the migratory behavior of connector cells must be limited to retain a patent connection to the original blood vessel.
Role in endocrine development: During development, definitive endoderm and ectoderm differentiate into several gastrointestinal epithelial lineages, including endocrine cells. Many studies have indicated that Notch signaling has a major role in endocrine development. Role in endocrine development: Pancreatic development The formation of the pancreas from endoderm begins in early development. The expression of elements of the Notch signaling pathway has been found in the developing pancreas, suggesting that Notch signaling is important in pancreatic development. Evidence suggests Notch signaling regulates the progressive recruitment of endocrine cell types from a common precursor, acting through two possible mechanisms. One is "lateral inhibition", which specifies some cells for a primary fate but others for a secondary fate among cells that have the potential to adopt the same fate. Lateral inhibition is required for many types of cell fate determination. Here, it could explain the dispersed distribution of endocrine cells within the pancreatic epithelium. A second mechanism is "suppressive maintenance", which explains the role of Notch signaling in pancreas differentiation. Fibroblast growth factor 10 is thought to be important in this activity, but the details are unclear. Role in endocrine development: Intestinal development The role of Notch signaling in the regulation of gut development has been indicated in several reports. Mutations in elements of the Notch signaling pathway affect the earliest intestinal cell fate decisions during zebrafish development. Transcriptional analysis and gain-of-function experiments revealed that Notch signaling targets Hes1 in the intestine and regulates a binary cell fate decision between absorptive and secretory cell fates. Role in endocrine development: Bone development Early in vitro studies have found that the Notch signaling pathway functions as a down-regulator in osteoclastogenesis and osteoblastogenesis. Notch1 is expressed in the mesenchymal condensation area and subsequently in the hypertrophic chondrocytes during chondrogenesis. Overexpression of Notch signaling inhibits bone morphogenetic protein 2-induced osteoblast differentiation. Overall, Notch signaling has a major role in the commitment of mesenchymal cells to the osteoblastic lineage and provides a possible therapeutic approach to bone regeneration. Role in cancer: Leukemia Aberrant Notch signaling is a driver of T-cell acute lymphoblastic leukemia (T-ALL), and Notch pathway components are mutated in at least 65% of all T-ALL cases. Notch signaling can be activated by mutations in Notch itself, by inactivating mutations in FBXW7 (a negative regulator of Notch1), or, rarely, by the t(7;9)(q34;q34.3) translocation. In the context of T-ALL, Notch activity cooperates with additional oncogenic lesions such as c-MYC to activate anabolic pathways such as ribosome and protein biosynthesis, thereby promoting leukemia cell growth. Role in cancer: Urothelial bladder cancer Loss of Notch activity is a driving event in urothelial cancer. A study identified inactivating mutations in components of the Notch pathway in over 40% of examined human bladder carcinomas. In mouse models, genetic inactivation of Notch signaling results in Erk1/2 phosphorylation, leading to tumorigenesis in the urinary tract. Not all NOTCH receptors are equally involved in urothelial bladder cancer: 90% of samples in one study had some level of NOTCH3 expression, suggesting that NOTCH3 plays an important role in urothelial bladder cancer.
A higher level of NOTCH3 expression was observed in high-grade tumors, and a higher level of positivity was associated with a higher mortality risk. NOTCH3 was identified as an independent predictor of poor outcome. Therefore, it is suggested that NOTCH3 could be used as a marker for urothelial bladder cancer-specific mortality risk. It was also shown that NOTCH3 expression could be a prognostic immunohistochemical marker for clinical follow-up of urothelial bladder cancer patients, contributing to a more individualized approach by selecting patients to undergo control cystoscopy after a shorter time interval. Notch inhibitors: The involvement of Notch signaling in many cancers has led to investigation of notch inhibitors (especially gamma-secretase inhibitors) as cancer treatments which are in different phases of clinical trials. As of 2013 at least 7 notch inhibitors were in clinical trials. MK-0752 has given promising results in an early clinical trial for breast cancer. Preclinical studies showed beneficial effects of gamma-secretase inhibitors in endometriosis, a disease characterised by increased expression of notch pathway constituents. Several notch inhibitors, including the gamma-secretase inhibitor LY3056480, are being studied for their potential ability to regenerate hair cells in the cochlea, which could lead to treatments for hearing loss and tinnitus.
**Uncrewed vehicle** Uncrewed vehicle: An uncrewed vehicle or unmanned vehicle is a vehicle without a person on board. Uncrewed vehicles can either be under telerobotic control—remote controlled or remote guided vehicles—or they can be autonomously controlled—autonomous vehicles—which are capable of sensing their environment and navigating on their own. Types: There are different types of uncrewed vehicles:
- Remote control vehicle (RC), such as radio-controlled cars or radio-controlled aircraft
- Unmanned ground vehicle (UGV), such as autonomous cars or unmanned combat ground vehicles (UCGV)
  - Self-driving truck
  - Driverless tractor
- Unmanned ground and aerial vehicle (UGAV), an unmanned vehicle with hybrid locomotion methods
- Unmanned aerial vehicle (UAV), unmanned aircraft commonly known as a "drone"
  - Unmanned combat aerial vehicle (UCAV)
  - Medium-altitude long-endurance unmanned aerial vehicle (MALE)
  - Miniature UAV (SUAV)
  - Delivery drone
  - Micro air vehicle (MAV)
  - Target drone
- Autonomous spaceport drone ship
- Unmanned surface vehicle (USV), also known as a "surface drone", for operation on the surface of the water
- Unmanned underwater vehicle (UUV), also known as an "underwater drone", for operation underwater
  - Remotely operated underwater vehicle (ROUV)
  - Autonomous underwater vehicle (AUV)
    - Intervention AUV (IAUV)
  - Underwater glider
- Uncrewed spacecraft, both remote controlled ("uncrewed space mission") and autonomous ("robotic spacecraft" or "space probe")
**Bijective proof** Bijective proof: In combinatorics, bijective proof is a proof technique for proving that two sets have equally many elements, or that the sets in two combinatorial classes have equal size, by finding a bijective function that maps one set one-to-one onto the other. This technique can be useful as a way of finding a formula for the number of elements of certain sets, by corresponding them with other sets that are easier to count. Additionally, the nature of the bijection itself often provides powerful insights into each or both of the sets. Basic examples: Proving the symmetry of the binomial coefficients The symmetry of the binomial coefficients states that

$$\binom{n}{k} = \binom{n}{n-k}.$$

This means that there are exactly as many combinations of k things in a set of size n as there are combinations of n − k things in a set of size n. Basic examples: A bijective proof The key idea of the proof may be understood from a simple example: selecting k children to be rewarded with ice cream cones, out of a group of n children, has exactly the same effect as choosing instead the n − k children to be denied ice cream cones. More abstractly and generally, the two quantities asserted to be equal count the subsets of size k and n − k, respectively, of any n-element set S. Let A be the set of all k-element subsets of S; the set A has size $\binom{n}{k}$. Basic examples: Let B be the set of all (n − k)-element subsets of S; the set B has size $\binom{n}{n-k}$. There is a simple bijection between the two sets A and B: it associates every k-element subset (that is, a member of A) with its complement, which contains precisely the remaining n − k elements of S, and hence is a member of B. More formally, this can be written using functional notation as $f : A \to B$ defined by $f(X) = X^c$ for X any k-element subset of S, with the complement taken in S. To show that f is a bijection, first assume that $f(X_1) = f(X_2)$, that is to say, $X_1^c = X_2^c$. Take the complements of each side (in S), using the fact that the complement of a complement of a set is the original set, to obtain $X_1 = X_2$. This shows that f is one-to-one. Now take any (n − k)-element subset of S in B, say Y. Its complement in S, $Y^c$, is a k-element subset, and so, an element of A. Since $f(Y^c) = (Y^c)^c = Y$, f is also onto and thus a bijection. The result now follows since the existence of a bijection between these finite sets shows that they have the same size, that is, $\binom{n}{k} = \binom{n}{n-k}$. Other examples: Problems that admit bijective proofs are not limited to binomial coefficient identities. As the complexity of the problem increases, a bijective proof can become very sophisticated. This technique is particularly useful in areas of discrete mathematics such as combinatorics, graph theory, and number theory. The most classical examples of bijective proofs in combinatorics include: Prüfer sequence, giving a proof of Cayley's formula for the number of labeled trees. Robinson-Schensted algorithm, giving a proof of Burnside's formula for the symmetric group. Conjugation of Young diagrams, giving a proof of a classical result on the number of certain integer partitions. Bijective proofs of the pentagonal number theorem. Bijective proofs of the formula for the Catalan numbers.
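The complement map is concrete enough to check by brute force. Below is a small Python sketch (illustrative only; the 5-element set and the choice k = 2 are arbitrary) that builds f on all k-element subsets and verifies it is one-to-one and onto the (n − k)-element subsets, so both sides have the same count:

```python
from itertools import combinations

def complement_bijection(s: frozenset, k: int) -> dict:
    """Map each k-element subset X of s to its complement s \\ X,
    realizing the bijection behind C(n, k) = C(n, n-k)."""
    return {frozenset(x): s - frozenset(x) for x in combinations(s, k)}

s = frozenset(range(5))          # a 5-element set
f = complement_bijection(s, 2)   # k = 2

images = set(f.values())
assert len(images) == len(f)               # one-to-one: distinct subsets have distinct complements
assert all(len(y) == 3 for y in images)    # onto: every image is a 3-element subset
print(len(f), "subsets on each side")      # prints 10, since C(5,2) = C(5,3) = 10
```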
**Virtual reality in primary education** Virtual reality in primary education: Virtual reality (VR) is a computer application which allows users to experience immersive, three-dimensional visual and audio simulations. According to Pinho (2004), virtual reality is characterized by immersion in the 3D world, interaction with virtual objects, and involvement in exploring the virtual environment. The feasibility of virtual reality in education has been debated due to several obstacles, such as the affordability of VR software and hardware. The psychological effects of virtual reality are also a negative consideration. However, recent technological progress has made VR more viable and promises new learning models and styles for students. These facets of virtual reality have found applications within the primary education (K-8th grade) sphere in enhancing student learning, increasing engagement, and creating new opportunities for addressing learning preferences. General education: Virtual reality (VR) can be used in numerous ways in an educational setting. Seeing virtual reality as a continued improvement from PC-based simulation systems, researchers recognize its potential to provide special learning experiences which traditional education methods cannot. Although studies agree that restrictions still exist for classroom applications of virtual reality systems, researchers have been experimenting with using VR as part of the teaching method in many aspects of general education. General education: Following are example attempts at applying virtual reality in classrooms. General education: Augmented Reality Augmented reality (AR) is a technology which superimposes virtually generated images on the real world. The coexistence of virtual objects and real environments has encouraged experimentation and developments in educational settings which are not possible in the real world. A study done by Antonietti et al. (2000) found that giving children an in-depth virtual tour of a painting and letting them examine all aspects of the painting helped with their description and interpretation of the painting, when compared to a control group that studied the painting without the usage of VR. Another experiment was carried out on 91 sixth-grade primary students who used an augmented reality application, "WallaMe", which taught a didactic unit in art education. After analyzing the results, the study found statistically significant improvements in academic performance, motivation, analysis of information, and collaboration. Augmented reality has also had developments into more mainstream academic settings. 3D renditions of textbooks provide students with a more synergetic way of learning. The Institute for the Promotion of Teaching Science and Technology has launched a geology textbook which allows students to learn traditional information while virtually interacting with the different layers of the earth's core. Another benefit of augmented reality is capitalizing on different learning styles. While virtual reality provides a more immersive experience, augmented reality learning technologies favor auditory learners. A study done on science information retention in college students showed AR to be a more effective medium for conveying auditory information through spatial presence. General education: Virtual field trips In virtual field trips, students visit real-world places or educational simulations to experience different lessons.
Google Expeditions allows students to take a shared field trip using smartphone headset technology under the control of a teacher's app. Nearpod's VR provides lesson plans in all core subjects for primary grades, and has been shown to increase student engagement in lessons. Virtual field trips can also enable primary school students in rural areas to engage in career exploration opportunities not typically available. Field trip experiences are linked to an increase in interest and motivation to pursue those careers. One program, zipTrips, was designed to simulate the benefits of a life science career exploration field trip for middle school students. By harnessing the power of virtual reality, zipTrips allowed students to engage in live 45-minute field trips with scientists and their work. Students are shown to have an enhanced perception of science and scientific careers. General education: Individualized learning Although VR can be used cooperatively, learning has been shown to be especially effective when VR is utilized for independent learning. Merchant et al. (2014) found that “students performed [significantly] better when they worked individually rather than collaboratively when learning through [VR based collaborative learning environments]”. Some VR applications provide independent learning opportunities when combined with individual lesson plans. For example, students might fill out a worksheet in correspondence with a specific virtual reality simulation. General education: Virtual World Virtual worlds, or three-dimensional immersive virtual worlds in full, are interactive online environments where people use avatars as their representations. The environment can be designed in any context, and users control their avatars to accomplish tasks in virtual worlds. An academic review of past empirical research identified three main areas in which virtual worlds are used in school settings: (1) communication spaces, (2) simulation of space, and (3) experiential spaces. Communication spaces refers to the communication between users, possibly between teachers and students. Communication takes both verbal and nonverbal forms, using the chat function and avatar movements respectively. The second use of virtual worlds is simulation of space. Space is one of the most important elements in virtual worlds in terms of its scalability and authenticity, with great feasibility of simulating any environment. In an educational initiative, the environment can be built in a school setting to resonate with students as if they are actually in school. The Nanyang Technological University in Singapore developed a virtual campus tour for its prospective students. The virtual campus displays general information but also familiarizes students with the campus before they are physically there. The third main feature of virtual worlds is its experiential spaces, which allow students to “learn by doing” instead of learning by reading or listening. With virtual worlds, students can directly act on the subject, “observe the outcomes of their actions” and further reflect on the observable outcomes. General education: Music education Because of budget cuts and restrictions such as disabilities, music education in K-12 is facing challenges, and researchers are looking to virtual reality technology for help. Virtual interfaces with interactive visualization and audio feedback are being experimented with to improve the experience of learning a musical instrument for students.
Other attempts include offering simulated experiences of playing musical instruments through head-mounted display devices. A study shows that a mix of virtual and traditional education can effectively improve music learning results, despite concerns about physical and pedagogical problems including virtual sickness and isolation. The usage of virtual reality in K-12 music education is still widely in experimentation, while research has presented promising results. Some researchers suggest that although attempts with VR showed effectiveness, augmented reality may be preferable in practice because of its support of interaction with real instruments or objects. General education: History education With its established ability to create immersive simulated experiences, virtual reality is being evaluated for enhancing the teaching methods for history classes. Research on teaching the history of the Roman Empire with a virtual reconstruction of a Roman city shows significant improvement in the learning experiences and academic results of the students. Researchers suggest that the increase in motivation for learning, enhanced interactivity, and the immersive experience are likely key to the success of the experiment, and hold interest in conducting larger-scale studies on teaching history with virtual reality. Social skills and collaboration: VR also has uses within primary education for social-emotional development. Social skills and collaboration: Collaboration VR has applications for the development of social skills and multi-user cooperation. It can provide opportunities for students to collaborate through cooperative simulations, and has been shown to support introverted students in their group interactions. One study found VR-based collaboration to create "superior collaboration and interaction in the development of outcomes, as compared with other situations where group structures were used." Autism Autism, also known as Autism Spectrum Disorder, is a series of developmental disorders that impair the abilities of communicating and interacting with other people. While autism typically appears during early childhood, around 1 in 59 children is identified with the autistic condition according to data published by the CDC's Autism and Developmental Disabilities Monitoring Network. To combat the negative impacts of autism on learning and socializing in school settings, attempts to use VR to increase students' adaptation are on the rise. Social skills and collaboration: VR simulations have been shown to help children with autism by providing a virtual world in which they can learn to handle real-life scenarios within safe and controlled virtual environments. A study by Strickland et al. (2007) found that children with autism could successfully use virtual worlds to learn skills in fire and street safety, and could apply those skills to real-life situations. One method to facilitate the learning experience of autistic students is using virtual reality head-mounted displays (HMDs). According to a study that examined the coping behaviors of using VR headsets in school settings among 32 autistic students between age 6 and age 16, a general preference for “costly and technologically advanced HMDs” and positive attitudes towards the use of VR technologies, such as enjoyment and excitement, are found among students. “Developing learning opportunities” and “going places virtually and seeing what the world looks like” are the two primary areas in which autistic students expect to use HMDs in school.
HMDs also show great potential for the future of learning, including relaxing students and creating more learning opportunities at school. Another method is immersing students in virtual scenarios that are common in school settings. Using “a 4-side fully immersive CAVE™ VR installation”, researchers simulate an environment that is “authentic, safe, controllable and manipulable” to train autistic students to become adaptive in social situations. An example of the scenario is a series of the preparation steps that students normally take before going to school, including brushing teeth, having breakfast and catching the school bus. In a study that examined 100 students' behavior after receiving the training, noticeable changes are shown in “emotion recognition, affective expression and social reciprocity”. Business and academic reception: The use of virtual reality in primary education has been supported by grants from foundations and venture capital firms. The IEEE held workshops on "K-12 Embodied Learning through Virtual & Augmented Reality (KELVAR)" in 2016 and 2017. Despite the interest in virtual reality for K-12 education within business and academia, skepticism of its usefulness for K-12 learners has also been expressed. A 2009 review of the literature concluded that only the most independent, intrinsically motivated, and highly skilled K-12 students succeeded with VR. This review traced the problem to a lack of experience with gearing virtual reality to K-12 specifically; most of the experience had been with VR software designed for adults. Challenges and Concerns: Even though Virtual Reality may be a good supporting tool for students in their studies, there are still certain concerns and challenges that Virtual Reality faces in Primary Education. Challenges and Concerns: Detrimental Effects There are potential physical, physiological, and psychological problems for users associated with the Virtual Reality systems of today. Since Virtual Reality is a simulated environment, simulator sickness is a concern for the user. Wearing a Virtual Reality headset for a long period of time could cause discomfort and poor depth perception for students. This is potentially caused by the short distance between the electronic screen and the eyes of the user. Other potential symptoms include nausea, fatigue, dizziness, headache, and sweating. Challenges and Concerns: User safety One downside of the fully immersive environment is that the user is not able to sense the real-world objects around them once he or she enters the virtual world. Hence, with some amount of required movement during Virtual Reality immersion, collision with real-world objects becomes a concern, because users may easily run into an obstacle and get hurt. In addition, many Virtual Reality equipment sets also include sound cues, which may block the sense of hearing for sounds in the real world. Without real-world sound input, users may not hear warnings from others during an accident. Challenges and Concerns: Distinguishing Reality Similar to video games, a user may become addicted to the world that Virtual Reality technology provides. Virtual Reality immersion can cause a situation where students cannot distinguish reality from virtual reality. This confusion about the real world may negatively affect a student's physical safety, as they might not recognize dangerous situations happening around them.
Also, students could become confused as they are overloaded by the virtual information they need to learn, the complex equipment they need to master, and the tasks they need to finish. Challenges and Concerns: Culture Virtual Reality is still not a technology that is taken seriously and accepted by some people, because they consider it a game. The attitudes of students can change depending on whether they see their task as playing a game or as thinking critically to obtain knowledge. Hence, time and effort are needed to spread knowledge of the potential and helpfulness of Virtual Reality in education. Challenges and Concerns: Price In order to become a primary educational tool, Virtual Reality equipment has to be accessible to every student in the class; sharing one Virtual Reality headset is inefficient and takes up valuable learning time. With low-end equipment, users will get low-end experiences, while high-end Virtual Reality equipment can cost hundreds or even thousands of dollars. In order to provide the best Virtual Reality educational environment for students, the use and the affordability of Virtual Reality equipment need to be considered. Challenges and Concerns: Privacy As the equipment gets smaller in size, the infrastructure that stores the data behind it gets larger. If Virtual Reality technology becomes widely used in the same environment, the individual systems and the immersive perceptions of users will be networked together. A large network allows the collection of data from users, and this can lead to a potential surveillance situation where the individual privacy of users is tracked and exposed to others.
**Label (philately)** Label (philately): In philately, a label, coupon, or tab is a part of a sheet of stamps separated from the stamps by perforation (or by a narrow white margin in imperforate stamps). It cannot be used for postage because it has no face value and no indication of the postal administration that issued the stamps. The notion of a label should not be confused with the term "gutter" or with the margin of a stamp sheet. Label (philately): Sometimes, a label is also a stamp-like adhesive of no postal value, often used for promotional purposes.
**Classification rule** Classification rule: Given a population whose members each belong to one of a number of different sets or classes, a classification rule or classifier is a procedure by which the elements of the population set are each predicted to belong to one of the classes. A perfect classification is one for which every element in the population is assigned to the class it really belongs to. An imperfect classification is one in which some errors appear, and then statistical analysis must be applied to analyse the classification. Classification rule: A special kind of classification rule is binary classification, for problems in which there are only two classes. Testing classification rules: Given a data set consisting of pairs x and y, where x denotes an element of the population and y the class it belongs to, a classification rule h(x) is a function that assigns each element x to a predicted class $\hat{y} = h(x)$. A binary classification is such that the label y can take only one of two values. The true labels $y_i$ can be known but will not necessarily match their approximations $\hat{y}_i = h(x_i)$. In a binary classification, the elements that are not correctly classified are named false positives and false negatives. Testing classification rules: Some classification rules are static functions. Others can be computer programs. A computer classifier can be able to learn or can implement static classification rules. For observations outside the training data set, the true labels $y_j$ are unknown, and it is a prime target of the classification procedure that the approximation $\hat{y}_j = h(x_j) \approx y_j$ hold as well as possible, where the quality of this approximation needs to be judged on the basis of the statistical or probabilistic properties of the overall population from which future observations will be drawn. Testing classification rules: Given a classification rule, a classification test is the result of applying the rule to a finite sample of the initial data set. Binary and multiclass classification: Classification can be thought of as two separate problems – binary classification and multiclass classification. In binary classification, a better understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes. Since many classification methods have been developed specifically for binary classification, multiclass classification often requires the combined use of multiple binary classifiers. An important point is that in many practical binary classification problems, the two groups are not symmetric – rather than overall accuracy, the relative proportion of different types of errors is of interest. For example, in medical testing, a false positive (detecting a disease when it is not present) is considered differently from a false negative (not detecting a disease when it is present). In multiclass classifications, the classes may be considered symmetrically (all errors are equivalent), or asymmetrically, which is considerably more complicated. Binary and multiclass classification: Binary classification methods include probit regression and logistic regression. Multiclass classification methods include multinomial probit and multinomial logit. Confusion Matrix and Classifiers: When the classification function is not perfect, false results will appear. As an example (originally accompanied by an illustration of points separated by a line), suppose there are 20 dots on the left side of the line (the predicted-true side), while only 8 of those 20 are actually true.
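The dot example maps directly onto the four confusion-matrix cells. The following Python sketch (the label arrays are reconstructed here purely to match the counts in the example) tallies the four outcomes:

```python
def confusion_matrix(y_true, y_pred):
    """Count (TP, FP, FN, TN) for a binary classification."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))          # predicted true, actually true
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))    # predicted true, actually false
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))    # predicted false, actually true
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

# Reconstruct the dot example: 20 predicted true (8 actually true),
# 16 predicted false (4 actually true).
y_pred = [True] * 20 + [False] * 16
y_true = [True] * 8 + [False] * 12 + [True] * 4 + [False] * 12

print(confusion_matrix(y_true, y_pred))  # prints (8, 12, 4, 12)
```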
In a similar situation on the right side of the line (the predicted-false side), there are 16 dots, and 4 of those 16 are actually true, so they were inaccurately classified. Using the dot locations, we can build a confusion matrix to express the values. We can use 4 different metrics to express the 4 different possible outcomes: true positive (TP), false positive (FP), false negative (FN), and true negative (TN). Confusion Matrix and Classifiers: False positives False positives result when a test falsely (incorrectly) reports a positive result. For example, a medical test for a disease may return a positive result indicating that the patient has the disease even if the patient does not have the disease. False positive is commonly denoted as the top right (condition negative × test outcome positive) unit in a confusion matrix. Confusion Matrix and Classifiers: False negatives On the other hand, false negatives result when a test falsely or incorrectly reports a negative result. For example, a medical test for a disease may return a negative result indicating that the patient does not have the disease even though the patient actually has the disease. False negative is commonly denoted as the bottom left (condition positive × test outcome negative) unit in a confusion matrix. Confusion Matrix and Classifiers: True positives True positives result when a test correctly reports a positive result. As an example, a medical test for a disease may return a positive result indicating that the patient has the disease. This is correct when the patient actually has the disease. True positive is commonly denoted as the top left (condition positive × test outcome positive) unit in a confusion matrix. Confusion Matrix and Classifiers: True negatives True negatives result when a test correctly reports a negative result. As an example, a medical test for a disease may return a negative result indicating that the patient does not have the disease. This is correct when the patient actually does not have the disease. True negative is commonly denoted as the bottom right (condition negative × test outcome negative) unit in a confusion matrix. Application with Bayes’ Theorem: We can also calculate true positives, false positives, true negatives, and false negatives using Bayes' theorem. Bayes' theorem describes the probability of an event based on prior knowledge of conditions that might be related to the event. The four classifications are expressed using the example below. If a tested patient does not have the disease, the test returns a positive result 5% of the time, or with a probability of 0.05. Suppose that only 0.1% of the population has that disease, so that a randomly selected patient has a 0.001 prior probability of having the disease. Let $A$ represent the condition in which the patient has the disease. Let $\neg A$ represent the condition in which the patient does not have the disease. Let $B$ represent the evidence of a positive test result.
Application with Bayes’ Theorem: We can also calculate true positives, false positives, false negatives, and true negatives using Bayes' theorem. Bayes' theorem describes the probability of an event based on prior knowledge of conditions that might be related to the event. The four classifications are expressed using the example below. If a tested patient has the disease, the test returns a positive result 99% of the time, or with a probability of 0.99. If a tested patient does not have the disease, the test returns a positive result 5% of the time, or with a probability of 0.05. Suppose that only 0.1% of the population has that disease, so that a randomly selected patient has a 0.001 prior probability of having the disease. Let $A$ represent the condition in which the patient has the disease, let $\neg A$ represent the condition in which the patient does not have the disease, let $B$ represent the evidence of a positive test result, and let $\neg B$ represent the evidence of a negative test result. In terms of the four classifications: a false positive is the probability $P(\neg A \mid B)$ that the patient does not have the disease given a positive test result; a false negative is the probability $P(A \mid \neg B)$ that the patient has the disease given a negative test result; a true positive is the probability $P(A \mid B)$ that the patient has the disease given a positive test result; and a true negative is the probability $P(\neg A \mid \neg B)$ that the patient does not have the disease given a negative test result. False positives: We can use Bayes' theorem to determine the probability that a positive result is in fact a false positive. We find that if a disease is rare, then the majority of positive results may be false positives, even if the test is relatively accurate. Naively, one might think that only 5% of positive test results are false, but that is quite wrong, as we shall see. We first calculate the probability that a positive test result is a true positive:

$$P(A \mid B) = \frac{0.99 \times 0.001}{0.99 \times 0.001 + 0.05 \times 0.999} \approx 0.019,$$

and hence the probability that a positive result is a false positive is about $1 - 0.019 = 0.981$, or 98%. Application with Bayes’ Theorem: Despite the apparent high accuracy of the test, the incidence of the disease is so low that the vast majority of patients who test positive do not have the disease. Nonetheless, the fraction of patients who test positive who do have the disease (0.019) is 19 times the fraction of people who have not yet taken the test who have the disease (0.001). Thus the test is not useless, and re-testing may improve the reliability of the result. Application with Bayes’ Theorem: In order to reduce the problem of false positives, a test should be very accurate in reporting a negative result when the patient does not have the disease. If the test reported a negative result in patients without the disease with probability 0.999, then

$$P(A \mid B) = \frac{0.99 \times 0.001}{0.99 \times 0.001 + 0.001 \times 0.999} \approx 0.5,$$

so that $1 - 0.5 = 0.5$ is now the probability of a false positive. Application with Bayes’ Theorem: False negatives: We can use Bayes' theorem to determine the probability that a negative result is in fact a false negative, using the example from above:

$$P(A \mid \neg B) = \frac{0.01 \times 0.001}{0.01 \times 0.001 + 0.95 \times 0.999} \approx 0.0000105.$$

The probability that a negative result is a false negative is about 0.0000105, or 0.00105%. When a disease is rare, false negatives will not be a major problem with the test. But if 60% of the population had the disease, then the probability of a false negative would be greater. With the above test, the probability of a false negative would be

$$P(A \mid \neg B) = \frac{0.01 \times 0.6}{0.01 \times 0.6 + 0.95 \times 0.4} \approx 0.0155.$$

The probability that a negative result is a false negative rises to 0.0155, or 1.55%.
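The arithmetic in these worked examples is easy to check programmatically; a small sketch using the same numbers (sensitivity 0.99, false-positive rate 0.05, prevalence 0.001):

```python
# Bayes' theorem: P(H|E) = P(E|H) P(H) / (P(E|H) P(H) + P(E|~H) P(~H)).

def posterior(lik, prior, lik_alt, prior_alt):
    return (lik * prior) / (lik * prior + lik_alt * prior_alt)

prevalence = 0.001
p_true_pos = posterior(0.99, prevalence, 0.05, 1 - prevalence)   # P(A|B)
p_false_neg = posterior(0.01, prevalence, 0.95, 1 - prevalence)  # P(A|~B)
print(round(p_true_pos, 3))    # 0.019 -> ~98% of positives are false
print(round(p_false_neg, 7))   # ~1.05e-05
```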
True positives: We can use Bayes' theorem to determine the probability that a positive result is in fact a true positive, using the example from above. If a tested patient has the disease, the test returns a positive result 99% of the time, or with a probability of 0.99. If a tested patient does not have the disease, the test returns a positive result 5% of the time, or with a probability of 0.05. Application with Bayes’ Theorem: Suppose that only 0.1% of the population has that disease, so that a randomly selected patient has a 0.001 prior probability of having the disease. Let $A$ represent the condition in which the patient has the disease, and $B$ represent the evidence of a positive test result. Then the probability that the patient actually has the disease given a positive test result is

$$P(A \mid B) = \frac{0.99 \times 0.001}{0.99 \times 0.001 + 0.05 \times 0.999} \approx 0.019.$$

Application with Bayes’ Theorem: The probability that a positive result is a true positive is therefore about 0.019, or 1.9%. True negatives: We can also use Bayes' theorem to calculate the probability of a true negative. Using the examples above, the test returns a negative result in patients without the disease with probability 0.95, and returns a negative result in patients with the disease with probability 0.01, so

$$P(\neg A \mid \neg B) = \frac{0.95 \times 0.999}{0.95 \times 0.999 + 0.01 \times 0.001} \approx 0.99999.$$

The probability that a negative result is a true negative is about 0.99999, or 99.999%. Since the disease is rare, and the test's true-positive and true-negative rates are both high, the true-negative rate is very large. Measuring a classifier with sensitivity and specificity: In training a classifier, one may wish to measure its performance using the well-accepted metrics of sensitivity and specificity. It may be instructive to compare the classifier to a random classifier that flips a coin based on the prevalence of a disease. Suppose that the probability a person has the disease is $p$ and the probability that they do not is $q = 1 - p$. Suppose then that we have a random classifier that guesses that the patient has the disease with that same probability $p$ and guesses that they do not with probability $q$. The probability of a true positive is the probability that the patient has the disease times the probability that the random classifier guesses this correctly, or $p^2$. With similar reasoning, the probability of a false negative is $pq$. From the definitions above, the sensitivity of this classifier is $p^2/(p^2 + pq) = p$. With similar reasoning, we can calculate the specificity as $q^2/(q^2 + pq) = q$. So, while the measure itself is independent of disease prevalence, the performance of this random classifier depends on disease prevalence. The classifier may have performance that is like this random classifier, but with a better-weighted coin (higher sensitivity and specificity). So, these measures may be influenced by disease prevalence. An alternative measure of performance is the Matthews correlation coefficient, for which any random classifier will get an average score of 0. Measuring a classifier with sensitivity and specificity: The extension of this concept to non-binary classifications yields the confusion matrix.
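The claim that the prevalence-matched random classifier has sensitivity $p$ and specificity $q$ can be checked by simulation; a minimal sketch (the prevalence 0.3 and sample size are arbitrary illustration values):

```python
# Simulate the random classifier described above: it guesses "disease"
# with probability p, independently of the patient's true status.
import random

def random_classifier_stats(p, n=200_000, seed=1):
    rng = random.Random(seed)
    tp = fn = tn = fp = 0
    for _ in range(n):
        has_disease = rng.random() < p
        guess = rng.random() < p
        if has_disease and guess:
            tp += 1
        elif has_disease:
            fn += 1
        elif guess:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)  # (sensitivity, specificity)

print(random_classifier_stats(0.3))  # roughly (0.3, 0.7)
```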
**DBEdit** DBEdit: DBEdit 2 is a database editor which can connect to Oracle, IBM Db2, MySQL, and any other database that provides a JDBC driver. It runs on Windows, Linux and Solaris. Open source: DBEdit is free and open source software, distributed under the GNU General Public License. The source code is hosted on SourceForge. History: DBEdit is developed by Jef Van Den Ouweland. The first version ran on Windows only and was used to edit an Oracle or IBM Db2 database. It is written in Java. Later on, generic JDBC support was added so that the application could connect to basically any type of database that provides a JDBC driver. One year after the first release, support for other operating systems, such as Linux and Solaris, was added. The last version of DBEdit was released in May 2012.
**Bedroom** Bedroom: A bedroom or bedchamber is a room situated within a residential or accommodation unit characterised by its usage for sleeping and sexual activity. A typical western bedroom contains as bedroom furniture one or two beds (ranging from a crib for an infant, a single or twin bed for a toddler, child, teenager, or single adult to bigger sizes like a full, double, queen, king or California king [eastern or waterbed size for a couple]), a clothes closet, and bedside table and dressing table, both of which usually contain drawers. Except in bungalows, ranch style homes, ground floor apartments, or one-storey motels, bedrooms are usually on one of the floors of a dwelling that is above ground level. History: In larger Victorian houses it was common to have accessible from the bedroom a boudoir for the lady of the house and a dressing room for the gentleman. Attic bedrooms exist in some houses; since they are only separated from the outside air by the roof they are typically cold in winter and may be too hot in summer. The slope of the rafters supporting a pitched roof also makes them inconvenient. In houses with live-in servants, the servants often used attic bedrooms. History: In the 14th century the lower class slept on mattresses that were stuffed with hay and broom straws. During the 16th century mattresses stuffed with feathers started to gain popularity among those who could afford them. The common person was doing well if he could buy a mattress after seven years of marriage. In the 18th century cotton and wool started to become more common. The first coil spring mattress was not invented until 1871. The most common and most purchased mattress is the innerspring mattress, though a wide variety of alternative materials are available, including foam, latex, wool, and even silk. Firmness choices range from relatively soft to rather firm. A bedroom may have bunk beds if two or more people share a room. A chamber pot kept under the bed or in a nightstand was usual in the period before modern domestic plumbing and bathrooms in dwellings. Furnishings: Furniture and other items in bedrooms vary greatly, depending on taste, local traditions and the socioeconomic status of an individual. For instance, a master bedroom (primary bedroom) (also referred to as a "masters bedroom" in the Philippines) may include a bed of a specific size (double, king or queen-sized); one or more dressers (or perhaps, a wardrobe armoire); a nightstand; one or more closets; and carpeting. Built-in closets are less common in Europe than in North America; thus there is greater use of freestanding wardrobes or armoires in Europe. Furnishings: An individual's bedroom is a reflection of their personality, as well as social class and socioeconomic status, and is unique to each person. However, there are certain items that are common in most bedrooms. Mattresses usually sit on a bed frame that raises the mattress off the floor, and the bed often provides some decoration. There are many different types of mattresses. Furnishings: Night stands are also popular. They are used to put various items on, such as an alarm clock or a small lamp. In the times before bathrooms existed in dwellings, bedrooms often contained a washstand for tasks of personal hygiene. In the 2010s, having a television set in a bedroom became fairly common as well. 43% of American children from ages 3 to 4 have a television in their bedrooms.
Along with television sets, many bedrooms also have computers, video game consoles, and a desk to do work. In the late 20th century and early 21st century the bedroom became a more social environment and people started to spend a lot more time in their bedrooms than in the past. Furnishings: Bedding used in northern Europe (especially in Scandinavia) is significantly different from that used in North America and other parts of Europe. In Japan, futons are common. In addition to a bed (or, if shared by two or more children, a bunk bed), a child's bedroom may include a small closet or dressers, a toy box or computer game console, a bookcase, or other items. Modern bedrooms: Many houses in North America have at least two bedrooms, usually a master bedroom and one or more bedrooms for either children or guests. In some jurisdictions there are basic features (such as a closet and a "means of egress") that a room must have in order to legally qualify as a bedroom. In many states, such as Alaska, bedrooms are not required to have closets and must instead meet minimum size requirements. Modern bedrooms: A closet by definition is a small space used to store things. In a bedroom, a closet is most commonly used for clothes and other small personal items. Walk-in closets are more popular today and vary in size. In the past, however, wardrobes were the most prominent. A wardrobe is a tall, rectangular cabinet in which clothes can be stored or hung. Clothes are also kept in a dresser. Typically nicer clothes are kept in the closet because they can be hung up, while leisure clothing and undergarments are stored in the dresser. Modern bedrooms: In buildings with multiple self-contained housing units (e.g., apartments), the number of bedrooms varies widely. While many such units have at least one bedroom (frequently at least two), some of these units may not have a specific room dedicated for use as a bedroom. (These units may be known by various names, including studio, efficiency, bedsit, and others.) Sometimes, a master bedroom is connected to a dedicated bathroom, often called an ensuite or master bathroom. Culture: Bedrooms typically have a door for privacy (in some cases lockable from inside) and a window for ventilation. In larger bedrooms, a small desk and chair or an upholstered chair and a chest of drawers may also be used. In Western countries, some large bedrooms, called master bedrooms, may also contain a bathroom. Where space allows, bedrooms may also have televisions and/or video players, and in some cases a personal computer. Around the world: Japan In Japan, the notion of having a bedroom is much less important than it is in the west, especially as it pertains to having a private space for personal use. Indeed, having a unified house corresponds to having a unified family, a concept so important that areas are seldom personalized, even those pertaining to relationships. Everything is subject to the concept of primitive cohesion. This makes for flexibility in terms of the way various spaces are utilized: each evening, the Japanese unroll their futon directly on their tatami mats, typically close to one another. They then put them away come morning in the oshiire. The unity of the household is also reinforced by the use of sliding partitions (shoji) lined with rice paper, which are in no way insulating.
Around the world: Materially, the Japanese tatami room, as opposed to its western counterpart (deemed "the Western Room"), has no door, bed, or even wall, making it barely detectable in space. This room is typically situated towards the back of the home, close to the place dedicated to the family ancestors and opposite the southern façade, the gardens, and the general exterior. The second half of the twentieth century saw a considerable change in bedroom style. Almost non-existent before World War Two, the Western Room continued to gain traction in new constructions, to the point where there is a clear relationship between the age of a building and the presence of western-style bedrooms. Cultural habits, however, have not shifted as rapidly. In the most densely populated cities, there exists a type of hotel essentially consisting of stacks of individual rooms so cramped they hardly allow one to do more than lie down and sleep. These are called capsule hotels, and they have spread to areas such as Singapore and Taiwan.
**Hood (car)** Hood (car): The hood (American English) or bonnet (Commonwealth English) is the hinged cover over the engine of motor vehicles. Hoods can open to allow access to the engine compartment (or the trunk, called the boot in Commonwealth English, on rear-engine and some mid-engine vehicles) for maintenance and repair. Terminology: In British terminology, hood refers to a fabric cover over the passenger compartment of the car (known as the 'roof' or 'top' in the US). In many motor vehicles built in the 1930s and 1940s, the resemblance to an actual hood or bonnet is clear when open and viewed head-on. In modern vehicles it continues to serve the same purpose but no longer resembles a head covering. Styles and materials: On front-engined cars, the hood may be hinged at either the front or the rear edge, or in earlier models (e.g. the Ford Model T) it may be split into two sections, one each side, each hinged along the centre line. Another variant combines the bonnet and wheelarches into one section which allows the entire front bodywork to tilt forwards around a pivot near the front of the vehicle (e.g. that of the Triumph Herald). Hoods are typically made of the same material as the rest of the bodywork. This may include steel, aluminum, fiberglass or carbon fiber. Some aftermarket companies produce replacements for steel hoods in fiberglass or carbon fiber to reduce vehicle weight. Release, safety and security mechanisms: The hood release system is common on most vehicles and usually consists of an interior hood latch handle, hood release cable and hood latch assembly. The hood latch handle is usually located below the steering wheel, beside the driver's seat or set into the door frame. On race cars or cars with aftermarket hoods (that do not use the factory latch system) the hood may be held down by hood pins. Some aftermarket hoods that have a latch system are still equipped with hood pins to hold the hood down if the latch fails. Features: A hood may contain a hood ornament, hood scoop, and/or wiper jets. A portion of the hood may be raised in a power bulge, to fit a large engine or air filters. Pedestrian safety: In Japan and Europe, regulations have come into effect that place a limit on the severity of pedestrian head injury when struck by a motor vehicle. This is leading to more advanced hood designs, as evidenced by multicone hood inner panel designs as found on the Mazda RX-8 and other vehicles. Other changes are being made to use the hood as an active structure and push its surface several centimeters away from the hard motor components during a pedestrian crash. This may be achieved by mechanical (spring force) or pyrotechnic devices.
**Loan closet** Loan closet: A loan closet is a program that allows people to borrow durable medical equipment and home medical equipment at no cost or at low cost. The loan closet may be offered through an organization, an individual, or some other entity, often a non-profit organization. Because medical equipment is expensive and often needed for only a short time, loan closets help people receive equipment that they may not otherwise be able to afford. Process: Typically, a loan closet receives donated equipment from people who no longer need it. The loan closet will then clean and check the equipment and make it available to another person. Depending on the particular loan closet, the equipment may be loaned out for a set period of time or for as long as the person needs it; in some cases, it may be given away. When the equipment is returned, it is cleaned again and checked for safety and is then made available for the next person. The medical equipment available varies from one loan closet to another. Clients: The requirements to use a loan closet vary. Some loan closets are only available to certain groups of people such as senior citizens or veterans. Others may only serve people with a particular disease or condition, such as ALS, multiple sclerosis, cancer, or cerebral palsy. A person may also need to reside in a particular geographical area, belong to a certain organization, or have an income below a certain level. Sponsors: Loan closets may be sponsored by churches, synagogues, mosques, temples, senior centers, or fire stations. An individual may also loan out medical equipment on an informal basis. There are many non-profit organizations whose sole purpose is to loan out home medical equipment to those in need. Contributions: Loan closets are dependent on contributors for the equipment which they lend out. Loan closets can be found through social workers, hospital discharge staff, and physicians' offices. Not all loan closets accept all donations, and arrangements may need to be made in advance before donations are accepted.
**Hairpin lace** Hairpin lace: Hairpin lace is a lace-making technique that uses a crochet hook and two parallel metal rods held at the top and the bottom by removable bars. Historically, a metal U-shaped eponymous hairpin was used. Hairpin lace is formed by wrapping yarn around the prongs of the hairpin lace loom to form loops, which are held together by a row of crochet stitches worked in the center, called the spine. The resulting piece of lace can be worked to any length desired by removing the bottom bar of the hairpin and slipping the loops off the end. The strips produced by this process can be joined together to create an airy and lightweight fabric. Various types of yarns and threads can be used to achieve different color, texture and design effects. Examples of items made with hairpin lace include scarves, shawls, hats, baby blankets, afghans, and clothing. Hairpin lace can also be added to sewn, knitted, and crocheted works as a decorative accent.
**Tick-borne encephalitis virus** Tick-borne encephalitis virus: Tick-borne encephalitis virus (TBEV) is a positive-strand RNA virus associated with tick-borne encephalitis in the genus Flavivirus. Classification: Taxonomy TBEV is a member of the genus Flavivirus. Other close relatives, members of the TBEV serocomplex, include Omsk hemorrhagic fever virus, Kyasanur Forest disease virus, Alkhurma virus, Louping ill virus and Langat virus. Subtypes TBEV has three subtypes: Western European subtype (formerly Central European encephalitis virus, CEEV; principal tick vector: Ixodes ricinus); Siberian subtype (formerly West Siberian virus; principal tick vector: Ixodes persulcatus); Far Eastern subtype (formerly Russian Spring Summer encephalitis virus, RSSEV; principal tick vector: Ixodes persulcatus). The reference strain is the Sofjin strain. Virology: Structure TBEV is a positive-sense single-stranded RNA virus, contained in a 40–60 nm spherical, enveloped capsid. The TBEV genome is approximately 11 kb in size; it contains a 5' cap and a single open reading frame with 3' and 5' UTRs, and is without polyadenylation. Like other flaviviruses, the TBEV genome codes for ten viral proteins, three structural and seven nonstructural (NS). The structural proteins are C (capsid), PrM (premembrane), which is cleaved to produce the final membrane protein (M), and the envelope protein (E). The seven nonstructural proteins are NS1, NS2A, NS2B, NS3, NS4A, NS4B, and NS5. The role of some nonstructural proteins is known: NS5 serves as the RNA-dependent RNA polymerase, and NS3 has protease (in complex with NS2B) and helicase activity. Structural and nonstructural proteins are not required for the genome to be infectious. All viral proteins are expressed as a single large polyprotein, in the order C, PrM, E, NS1, NS2A, NS2B, NS3, NS4A, NS4B, NS5. Virology: Viral genetic determinants for pathogenicity The envelope protein is involved in receptor binding and neurovirulence, where increased glycosaminoglycan-binding affinity attenuates neuroinvasiveness. The NS5 protein has interferon antagonist activity, as it downregulates the expression of the IFN receptor subunit. Nonstructural protein 5 (NS5) affects neuropathogenesis by attenuation of neurite outgrowth. The 3' and 5' untranslated regions (UTR3 and UTR5) affect genomic RNA cyclization and replication, and viral RNA transport in dendrites, which impacts neurogenesis and synaptic communication. Virology: Life cycle Transmission Infection of the vector begins when a tick takes a blood meal from an infected host. This can occur at any stage of the tick's life cycle, but horizontal transmission between infected nymphs and uninfected larvae co-feeding on the same host is thought to be key in maintaining the circulation of TBEV. TBEV in the blood of the host infects the tick through the midgut, from where it can pass to the salivary glands to be passed to the next host. In non-adult ticks, TBEV is transmitted transstadially by infecting cells that are not destroyed during molting; thus the tick remains infectious throughout its life. Infected adult ticks may be able to lay eggs that are infected, transmitting the virus transovarially. Virology: Replication In humans, the infection begins in the skin (with the exception of food-borne cases, about 1% of infections) at the site of the bite of an infected tick, where Langerhans cells and macrophages in the skin are preferentially targeted.
TBEV envelope (E) proteins recognize heparan sulfate (and likely other receptors) on the host cell surface, and the virus is endocytosed via the clathrin-mediated pathway. Acidification of the late endosome triggers a conformational change in the E proteins, resulting in fusion, followed by uncoating and release of the single-stranded RNA genome into the cytoplasm. The viral polyprotein is translated and inserts into the ER membrane, where it is processed on the cytosolic side by host peptidases and in the lumen by viral enzyme action. The viral proteins C, NS3, and NS5 are cleaved into the cytosol (though NS3 can complex with NS2B or NS4A to perform proteolytic or helicase activity), while the remaining nonstructural proteins alter the structure of the ER membrane. This altered membrane permits the assembly of replication complexes, where the viral genome is replicated by the viral RNA-dependent RNA polymerase, NS5. Newly replicated viral RNA genomes are then packaged by the C proteins while on the cytosolic side of the ER membrane, forming the immature nucleocapsid, and gain E and PrM proteins, arranged as heterodimers, during budding into the lumen of the ER. The immature virion is spiky and geometric in comparison to the mature particle. The particle passes through the Golgi apparatus and trans-Golgi network, under increasingly acidic conditions, by which the virion matures with cleavage of the Pr segment from the M protein and formation of fusion-competent E protein homodimers, though the cleaved Pr segment remains associated with the protein complex until exit. The virus is released from the host cell upon fusion of the transport vesicle with the host cell membrane; the cleaved Pr segments now dissociate, resulting in a fully mature, infectious virus. However, partially mature and immature viruses are sometimes released as well; immature viruses are noninfectious as their E proteins are not fusion-competent, while partially mature viruses are still capable of infection. Pathogenesis and immune response: With the exception of food-borne cases, infection begins in the skin at the site of the tick bite. Skin dendritic (or Langerhans) cells (DCs) are preferentially targeted. Initially, the virus replicates locally, and an immune response is triggered when viral components are recognized by cytosolic pattern recognition receptors (PRRs), such as Toll-like receptors (TLRs). Recognition causes the release of cytokines, including interferons (IFN) α, β, and γ, and chemokines, attracting migratory immune cells to the site of the bite. The infection may be halted at this stage and cleared before the onset of noticeable symptoms. Notably, tick saliva enhances infection by modulating the host immune response, dampening apoptotic signals. If the infection continues, migratory DCs and macrophages become infected and travel to the local draining lymph node, where polymorphonuclear leukocytes, monocytes, and the complement system are activated. The draining lymph node can also serve as a viral amplification site, from where TBEV gains systemic access. This viremic stage corresponds to the first symptomatic phase in the prototypical biphasic pattern of tick-borne encephalitis. TBEV has a strong preference for neuronal tissue and is neuroinvasive. The initial viremic stage allows access to a number of the preferential tissues. However, the exact mechanism by which TBEV crosses into the central nervous system (CNS) is unclear.
There are several proposed mechanisms for TBEV breaching the blood-brain barrier (BBB): 1) the "Trojan horse" mechanism, whereby TBEV gains access to the CNS while infecting an immune cell that passes through the BBB; 2) disruption and increased permeability of the BBB caused by immune cytokines; 3) infection of the olfactory neurons; 4) retrograde transport along peripheral nerves to the CNS; and 5) infection of the cells that make up part of the BBB. CNS infection brings on the second phase in the classic biphasic infection pattern associated with the European subtype. CNS disease is immunopathological; the release of inflammatory cytokines, coupled with the action of cytotoxic CD8+ T cells and possibly NK cells, results in inflammation and apoptosis of infected cells that is responsible for many of the CNS symptoms. Pathogenesis and immune response: Humoral response TBEV-specific IgM and IgG antibodies are produced in response to infection. IgM antibodies appear and peak first, reach higher levels, and typically dissipate in about 1.5 months post infection, though there exists considerable variation from patient to patient. IgG levels peak at about 6 weeks after the appearance of CNS symptoms, then decline slightly but do not dissipate, likely conferring lifelong immunity to the patient. Evolution: The ancestor of the extant strains appears to have separated into several clades approximately 2750 years ago. The Siberian and Far Eastern subtypes diverged about 2250 years ago. A second analysis suggests an earlier date of evolution (3300 years ago) with a rapid increase in the number of strains starting around 300 years ago. Different strains of the virus have been introduced into Japan at least three times, between 260 and 430 years ago. The strains circulating in Latvia appear to have originated from both Russia and Western Europe, while those in Estonia appear to have originated in Russia. The Lithuanian strains appear to be related to those from Western Europe. Phylogenetic analysis indicates that the European and Siberian TBEV subtypes are closely related, while the Far Eastern subtype is closer to Louping ill virus. However, in antigenic relatedness, based on the E, NS3, and NS5 proteins, all three subtypes are highly similar, and Louping ill virus is the closest relative outside the collective TBEV group. History: Though the first description of what may have been TBE appears in records from the 1700s in Scandinavia, identification of TBEV occurred in the Soviet Union in the 1930s. The investigation began due to an outbreak of what was believed to be Japanese encephalitis ("summer encephalitis") among Soviet troops stationed along the border with the Japanese empire (in the present-day People's Republic of China), near the Far Eastern city of Khabarovsk. The expedition was led by virologist Lev A. Zilber, who assembled a team of twenty young scientists in a number of related fields such as acarology, microbiology, neurology, and epidemiology. The expedition arrived in Khabarovsk on May 15, 1937, and divided into two squads: Northern, led by Elizabeth N. Levkovich, working in Khabarovski Krai, and Southern, led by Alexandra D. Sheboldaeva, working in Primorski Krai. Within the month of May, the expedition had identified ticks as the likely vector; entomologist Alexander V. Gutsevich collected I. persulcatus ticks by exposing bare skin, and virologist Mikhail P. Chumakov isolated the virus from ticks feeding on intentionally infected mice.
During the summer, five expedition members became infected with TBEV, and while there were no fatalities, three of the five suffered damaging sequelae. The expedition returned in mid-August, and in October 1937 Zilber and Sheboldaeva were arrested, falsely accused of spreading Japanese encephalitis. Expedition epidemiologist Tamara M. Safonova was arrested the following January for protesting the charges against Zilber and Sheboldaeva. As a consequence of the arrests, one of the important initial works was published under the authorship of expedition acarologist Vasily S. Mironov. Zilber was released in 1939 and managed to restore, along with Sheboldaeva, co-authorship of this initial work; however, Safonova and Sheboldaeva (who was not released) spent 18 years in labor camps.
**Singular (software)** Singular (software): Singular (typeset SINGULAR) is a computer algebra system for polynomial computations with special emphasis on the needs of commutative and non-commutative algebra, algebraic geometry, and singularity theory. Singular has been released under the terms of the GNU General Public License. Problems in non-commutative algebra can be tackled with the Singular offspring Plural. Singular is developed under the direction of Wolfram Decker, Gert-Martin Greuel, Gerhard Pfister, and Hans Schönemann, who head Singular's core development team within the Department of Mathematics of the Technische Universität Kaiserslautern. Singular (software): In the DFG Priority Program 1489, interfaces to GAP, Polymake and Gfan are being developed in order to cover recently established areas of mathematics involving convex and algebraic geometry, such as toric and tropical geometry.
**Three-Pawn Sacrifice Rapid Attack** Three-Pawn Sacrifice Rapid Attack: Three-Pawn Sacrifice Rapid Attack (三歩突き捨て急戦 san-fu tsukisute kyuusen) is a fast attacking strategy in shogi. It is a Static Rook Boat Rapid Attack strategy used against Black (sente) blocking the bishop's diagonal in a Third File Rook. It was created by Hifumi Katō. Overview: In this fast attack strategy, the aim becomes evident when, in the accompanying diagram, following gote's P-94, sente defends with P-96. From P-96, it follows ...S-62-53, G-47 (although P-36 might be better if gote was aiming for a fast attack with P-45), and then three sacrificial pawn pushes with P-86, Px86 (since Bx86 will be followed by Bx66, S-77, B-2b, and gote is better), P-95, Px95, and P-75, Px75 (Diagram 2), which, if taken by sente, will leave gote in a better position following Lx95, Lx95, P*76, B-99, Rx86. After that, R-88, Rx88+, Bx88, R*87 is a possibility.
**Tetrahedral hypothesis** Tetrahedral hypothesis: The tetrahedral hypothesis is an obsolete scientific theory attempting to explain the arrangement of the Earth's continents and oceans by referring to the geometry of a tetrahedron. Although it was a historically interesting theory in the late 19th and early 20th century, it was superseded by the concepts of continental drift and modern plate tectonics. The theory was first proposed by William Lowthian Green in 1875. Theory: This idea, described as "ingenious" by geologist Arthur Holmes, is now of historical interest only, being finally refuted by that same Holmes (see reference 7). It attempted to explain apparent anomalies in the distribution of land and water on the Earth's surface:
- More than 75% of the Earth's land area is in the northern hemisphere.
- Continents are roughly triangular.
- Oceans are roughly triangular.
- The north pole is surrounded by water, the south pole by land.
- Exactly opposite the Earth from land is almost always water.
- The Pacific Ocean occupies about one third of the Earth's surface.
Theory: To understand its appeal, consider the "regular solids": the sphere and the 5-member set of Platonic solids. The solid with the lowest number of sides is the tetrahedron (four equilateral triangles); progressing through the hexahedron or cube, the octahedron, the dodecahedron and the icosahedron (20 sides), the sphere can be considered to have an infinite number of sides. All six regular solids share many symmetries. Theory: Now, for each regular solid, we may relate its surface area $A$ and volume $V$ by the equation $V = k \cdot A^{3/2}$, where $k$ is a characteristic of each solid. As we traverse the set in order of increasing number of faces, we find that $k$ increases for each member; it is approximately 0.052 for a tetrahedron and 0.0940 for a sphere. Thus the tetrahedron is the regular solid with the largest surface area for a given volume, and makes a reasonable endpoint for a shrinking spherical Earth. History: The theory was first proposed by William Lowthian Green in 1875. History: It was still popular in 1917 when summarized as: "The law of least action … demands that the somewhat rigid crustal portion of the earth keep in contact with the lessening interior with the least possible readjustment of its surface. … a shrinking sphere tends by the law of least action to collapse into a tetrahedron, or a tetra-hedroid, a sphere marked by four equal and equidistant triangular projections; and the earth with its three about equal and equidistant double continental masses triangular southward with three intervening depressed oceans triangular northward, its northern ocean and southern continent, with land everywhere antipodal to water, realizes the tetrahedroid status remarkably." This is suggesting that a cooling spherical Earth might have shrunk to form a tetrahedron, with its vertices and edges forming the continents, and four oceans (Pacific Ocean, Atlantic Ocean, Indian Ocean and Arctic Ocean) on its faces. History: By 1915 the German scientist Alfred Wegener (1880–1930) had proposed in his continental drift theory that land masses moved great distances over the Earth's history. Wegener was also at first met with hostile reactions. By the mid-1920s Holmes had developed theories on what could cause the drift. The plate tectonics theory is now generally accepted to explain the dynamic nature of the Earth's surface; the tetrahedral shape plays no special role in modern theories.
Explanations of details such as water-to-land ratios and the precise shapes and sizes of continents continue to be developed.
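As a check on the constants quoted in the Theory section, a short derivation (assuming $k$ is defined by $V = k\,A^{3/2}$, as the sphere's value confirms):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sphere of radius r:
\[
V = \tfrac{4}{3}\pi r^{3}, \quad A = 4\pi r^{2}, \quad
k = \frac{V}{A^{3/2}} = \frac{\tfrac{4}{3}\pi r^{3}}{(4\pi r^{2})^{3/2}}
  = \frac{1}{6\sqrt{\pi}} \approx 0.0940 .
\]
% Regular tetrahedron of edge a:
\[
V = \frac{a^{3}}{6\sqrt{2}}, \quad A = \sqrt{3}\,a^{2}, \quad
k = \frac{V}{A^{3/2}} = \frac{1}{6\sqrt{2}\cdot 3^{3/4}} \approx 0.052 ,
\]
% the smallest k (largest area per volume) among the regular solids.
\end{document}
```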
**Song poem** Song poem: Song poems are songs with lyrics by usually non-professional writers that have been set to music by commercial companies for a fee. This practice, which has long been disparaged in the established music industry, was also known as song sharking and was conducted by several businesses throughout the 20th century in North America. Production and promotion: From the early 20th century, the business of recording song poems was promoted through small display ads in popular magazines, comic books, tabloids, men's adventure journals and similar publications, with a headline reading (essentially) Send in Your Poems - Songwriters Make Thousands of Dollars - Free Evaluation. The term lyrics was avoided because it was assumed potential customers would not understand what the term meant. Those who sent their poetry to one of the production companies usually received notice by mail that their work was worthy of recording by professional musicians, along with a proposal to do so in exchange for a fee. The early 20th century versions of this business involved setting the words to music and printing up sheet music from inexpensively engraved plates. Production and promotion: In producing the recordings, musicians often recorded dozens of songs per recording session using minimal resources. Using a method called "sight-singing," they wrote the music as they read the lyrics and played along, sometimes finishing a song in just one take. Some of the companies recorded new vocals over pre-recorded music backing tracks, using the same music tracks hundreds of times. The recordings were then duplicated on 45 RPM vinyl singles or on individual cassette tapes, or they were released on compilation LPs with dozens of other songs by amateur lyric writers. Copies were sent to the customer. Promises that they would also be sent to radio stations or music industry executives were rarely if ever kept, partly because the recordings would not have been taken seriously by professionals. Production and promotion: Many of the lyrics involve subject matter relating to the passing fads of the day, and thus provide a window into a past pop culture. Song poem writers: Noted examples of those who have used the song poem approach include: Rodd Keith (born Rodney Keith Eskelin, 1937–1974), who has been described as the "Mozart" of the song poem genre. Several compilations of his made-for-hire song poem recordings have been released on CD with comments by his son, avant-garde saxophonist Ellery Eskelin. Eskelin never really knew his father, but was often told that his father was some kind of musical genius. Caglar Juan Singletary (born 1972), who was featured in the 2003 documentary Off The Charts: The Song-Poem Story; his most famous composition is "Annie Oakley", with music written by artist David Fox. Norridge Mayhams (1903–1988), also known as Norris the Troubadour, who issued a succession of records between the 1930s and 1980s, many produced by song-poem professionals. Thomas J. Guygax Sr. (1921–1999), of Springfield, Missouri, a lyricist noted for his unconventional approach to word order and syntax. John Trubee (born 1957), whose song "Peace & Love" (commonly known as "Blind Man's Penis"), written to test whether or not a song-poem firm would accept "the most ridiculous, stupid, vile, obscene" lyrics he could write, was recorded by country singer Ramsey Kearney; it has been described as "the most famous song-poem recording of all time".
The original lyrics referred to "Stevie Wonder's penis", but Kearney replaced all references to Wonder with the generic phrase "a blind man". In media: In 2003, the documentary Off The Charts: The Song-Poem Story was aired on PBS. Gene Merlino, who claims to have sung on more than 10,000 song poems, was featured in the documentary. It has since been released on DVD, and the soundtrack was released on CD. The 2007 Craig Zobel drama Great World of Sound depicts a modern-day version of "song sharking," and featured scenes where real unsigned musicians audition for the actors portraying the ersatz music producers; these artists ultimately had their songs properly licensed and featured in the finished film. In media: Tom Ardolino, former drummer for the band NRBQ, curated an LP and several compilation CDs of the material taken from his personal collection (The Beat of The Traps, The Makers of Smooth Music, The Human Breakdown of Absurdity, & I'm Just The Other Woman). His work, along with the efforts of others such as Phil Milstein, musicologist Irwin Chusid of WFMU radio, Mark Mothersbaugh of Devo, Bob Purse, James Lindbloom, and magician Penn Jillette, has allowed these scraps to reach a level of notoriety unthinkable in their own time. Discography:
- Hollywood Gold, various artists (Rainbow Records) (one single, one cassette, 22 LPs)[1]
- MSR Madness series: The Beat of The Traps, various artists (Carnage Press, LP only); The Makers of Smooth Music, various artists (Carnage Press); The Human Breakdown of Absurdity, various artists (Carnage Press); I'm Just The Other Woman, various artists (Carnage Press)
- The American Song Poem Anthology: Do You Know The Difference Between Big Wood and Brush?, various artists (Bar/None)
- The American Song Poem Christmas: Daddy, Is Santa Really Six Foot Four?, various artists (Bar/None)
- I Died Today, Rodd Keith (Tzadik)
- Ecstacy To Frenzy, Rodd Keith (Tzadik)
- Saucers in the Sky, Rodd Keith (Roaratorio)
- My Pipe-Yellow Dream, Rodd Keith (Roaratorio)
- Black Phoenix Blues, Rodd Keith (Roaratorio)
- Off The Charts: The Song Poem Story, various artists (Red Rock Records - film soundtrack)
- Song Poem Hits of 2007, David Dubowski One Man Band (Crazy Dave Records)
- Song Poem Hits of 2007 Vol. 2, David Dubowski One Man Band (Crazy Dave Records)
- Song Poem Hits of 2009, David Dubowski One Man Band (Crazy Dave Records)
Documentary: Off The Charts: The Song-Poem Story at IMDb
**Outrunner** Outrunner: An outrunner is an electric motor having the rotor outside the stator, as though the motor were turned inside out. They are often used in radio-controlled model aircraft. This type of motor spins its outer shell around its windings, much like motors found in ordinary CD-ROM computer drives. In fact, CD-ROM motors are frequently rewound into brushless outrunner motors for small park flyer aircraft. Parts to aid in converting CD-ROM motors to aircraft use are commercially available. Outrunner: Usually, outrunners have more poles, so they spin much slower than their inrunner counterparts with their more traditional layout (though outrunners with neodymium magnets still spin considerably faster than ferrite motors) while producing far more torque. This makes an outrunner an excellent choice for directly driving electric aircraft propellers, since they eliminate the extra weight, complexity, inefficiency and noise of a gearbox. Some front-loading direct-drive washing machines use an outrunner motor. Outrunner motors have quickly become popular and are now available in many sizes. They have also become popular in personal, electric transportation applications such as electric bikes and scooters due to their compact size and high efficiency. Outrunner: 240 sailplanes of eleven different types from seven manufacturers are equipped with the FES propulsion system from LZ Design d.o.o. of Slovenia. The 22 kW motor provides enough power for lighter 13.5–15 m gliders to self-launch, and allows heavier gliders enough power to climb and then maintain height, so avoiding an unscheduled out-landing. Its synchronous permanent magnet motor has an electronically-controlled commutation system. Stator and magnetic pole count: The stationary (stator) windings of an outrunner motor are excited by conventional DC brushless motor controllers. A direct current (switched on and off at high frequency for voltage modulation) is typically passed through three or more non-adjacent windings together, and the group so energized is alternated electronically based upon rotor position feedback. The number of permanent magnets in the rotor does not match the number of stator poles, however. This is to reduce cogging torque and create a sinusoidal back EMF. The number of magnet poles divided by 2 gives the ratio of magnetic field frequency to motor rotation frequency (a relation sketched in the example below). Stator and magnetic pole count: Common stator pole/magnet pole configurations (N denotes the number of stator "wire wound" poles, P the number of rotor "permanent magnet" poles):
- 9N, 12P - very common for many small outrunners; this is also the most common CD-ROM motor configuration. Winding pattern is ABCABCABC.
- 9N, 6P - common for helicopter motors, EDFs, and other high-speed applications. Winding pattern is ABCABCABC.
- 12N, 14P (DLRK) - common for higher-torque applications, noted for its smooth and quiet operation. Winding pattern is AabBCcaABbcC (lowercase implies reversed winding direction).
Stator and magnetic pole count: Other configurations:
- 9N, 8P - magnetically imbalanced motor configuration occasionally found in high-speed applications. This configuration is best terminated as wye to minimize vibration.
- 9N, 10P - highly magnetically imbalanced motor that often makes for noisy running. This configuration is usually only built by do-it-yourself motor builders. This motor is best terminated wye. Winding pattern is AaABbBCcC.
- 12N, 16P - a not-so-common but still used style; it has been overshadowed by the 12N, 14P. Winding pattern is ABCABCABCABC.
- 12N, 10P - higher-speed variant of the DLRK motor, occasionally found in helicopter motors. Winding pattern is AabBCcaABbcC (lowercase implies reversed winding direction).
- 12N, 8P - even higher speed than the 12N, 10P. Winding pattern is ABCABCABCABC.
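To make the pole-count relation above concrete, here is a small illustrative sketch (the 12N, 14P motor and the 8,000 rpm figure are example values, not data from this article):

```python
# Field (commutation) frequency = rotation frequency x (magnet poles / 2),
# per the pole-count relation described above.

def electrical_frequency_hz(rpm: float, magnet_poles: int) -> float:
    mechanical_hz = rpm / 60.0
    return mechanical_hz * (magnet_poles / 2)

# A 14-magnet-pole (12N, 14P) outrunner at 8,000 rpm:
print(round(electrical_frequency_hz(8000, 14)))  # ~933 Hz
```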
**Page (assistance occupation)** Page (assistance occupation): A page is a person employed in an assistance role in some professional capacity. Unlike traditional pages, who were normally younger males, these pages tend to be older and can be either male or female. Workplace: Pages are present in some modern workforces. American television network NBC's page program is a notable example of contemporary workplace pages. Libraries: Some large libraries use the term 'page' for employees or volunteers who retrieve books from the stacks, which are often closed to the public, and return books to shelves. This relieves some of the tedium from the librarians, who may occupy themselves with duties requiring their more advanced training and education. Legislative pages: Many legislative bodies employ student pages as assistants to members of the legislature during session. Legislative pages are secondary school or university students who are unpaid or receive modest stipends. They serve for periods of time ranging from one week to one year, depending on the program. They typically perform small tasks such as running errands, delivering coffee, answering telephones, or assisting a speaker with visual aids. Students typically participate primarily for the work-experience benefits. Legislative pages: The following examples illustrate the range of legislative page programs: Canada: The Canadian House of Commons Page Programme employs part-time first-year university students who work roughly 15 hours a week and are paid approximately $12,000 (CDN) for a one-year term. They perform both ceremonial and administrative duties and participate in enrichment activities such as meetings with MPs and government leaders. They also meet with student groups to explain the workings of the House of Commons and their duties as Pages. The Canadian Senate Page Program is similar. Legislative pages: The Legislative Assembly of Ontario employs 7th and 8th grade students for periods of two to six weeks during the legislative session. Participants must be high-achieving students who take leaves of absence from their schools while they serve as pages. Duties of pages include acting as messengers in the legislative chamber, taking water to MPPs, and picking up key documents (bills, petitions, motions, reports by committee). They also have opportunities to learn about provincial government and the lawmaking process. Legislative pages: The Legislative Assembly of Alberta employs high school and first-year university students as a part-time job. Pages must demonstrate strong academic standing, have work experience, participate in extracurricular activities, and be able to commit to a job. Some duties of pages include distributing materials within the legislature, supporting public events such as Family Day and Canada Day, and participating in development seminars. United States: Both houses of the United States Congress have or had formal page programs. The House program has ended but the Senate program continues; pages are high school juniors from throughout the country. The application process is very competitive. Pages serve for periods of several weeks during the summer or for a full school semester during term. They live in dormitories near the Capitol and attend special schools for pages, but are always present on the Senate and House floor during session to assist the proceedings as needed. Legislative pages: In the Virginia General Assembly the pages are young males and females ranging in age from 13 to 15.
They assist Senators and Delegates with deliveries and errands. The Nebraska Legislature has a page program, in which college students assist Senators and legislative staff with various deliveries and errands. The selection process includes an application and interview, with competitive candidates presenting a letter of recommendation from a member of the Legislature. Pages are employed by the Clerk of the Legislature, and often receive course credit for participating.
**Saliva testing** Saliva testing: Saliva testing or salivaomics is a diagnostic technique that involves laboratory analysis of saliva to identify markers of endocrine, immunologic, inflammatory, infectious, and other types of conditions. Saliva is a useful biological fluid for assaying steroid hormones such as cortisol, genetic material like RNA, proteins such as enzymes and antibodies, and a variety of other substances, including natural metabolites such as saliva nitrite, a biomarker for nitric oxide status (see below for Cardiovascular Disease, Nitric Oxide: a salivary biomarker for cardio-protection). Saliva testing is used to screen for or diagnose numerous conditions and disease states, including Cushing's disease, anovulation, HIV, cancer, parasites, hypogonadism, and allergies. Salivary testing has even been used by the U.S. government to assess circadian rhythm shifts in astronauts before flight and to evaluate hormonal profiles of soldiers undergoing military survival training. Proponents of saliva testing cite its ease of collection, safety, non-invasiveness, affordability, accuracy, and capacity to circumvent venipuncture as the primary advantages when compared to blood testing and other types of diagnostic testing. Additionally, since multiple samples can be readily obtained, saliva testing is particularly useful for performing chronobiological assessments that span hours, days, or weeks. Collecting whole saliva by passive drool has many advantages. Passive drool collection facilitates large sample sizes, which allows the sample to be tested for more than one biomarker. It also gives the researcher the ability to freeze the leftover specimen to be used at a later time. Additionally, it lessens the possibility of contamination by eliminating extra collection devices and the need to induce saliva flow. The testing of salivation by the use of mercury was performed at least as early as 1685. Testing the acidity of saliva occurred at least as early as 1808. The clinical use of saliva testing occurred at least as early as 1836 in patients with bronchitis. In 1959, scientists in the journal Cancer raised the possibility of using biochemical changes in acid phosphatases in saliva as an indicator of the presence of prostate cancer. More recent studies have focused on detection of steroid hormones and antibodies in the saliva. Recent applications emphasize the development of increasingly sophisticated techniques to detect additional proteins, genetic material, and markers of nutritional status. According to Wong, scientists are now viewing saliva as "a valuable biofluid…with the potential to extract more data than is possible currently with other diagnostic methods." Technique: Most saliva testing is performed using enzyme-linked immunosorbent assay (ELISA), polymerase chain reaction (PCR), high-resolution mass spectrometry (HRMS), or any number of newer technologies such as fiber-optic-based detection. All of these methods enable detection of specific molecules like cortisol, C-reactive protein (CRP), or secretory IgA. This type of testing typically involves collection of a small amount of saliva into a sterile tube followed by processing at a remote laboratory. Some methods of testing involve collecting saliva using an absorbent pad, applying a chemical solution, and monitoring for a color change to indicate a positive or negative result. This method is commonly used as a point-of-care (POC) technique to screen for HIV.
However, using absorbent pads and chemical solutions could very easily skew the results of immunoassays. Research by Dr. Douglas A. Granger and colleagues shows that outcomes for testosterone, DHEA, progesterone, and estradiol biomarkers are elevated when cotton-based collection materials are used, as opposed to samples collected by other methods (i.e. passive drool). Researchers are currently examining the expanding role of saliva testing as part of routine dental or medical office examinations, where saliva collection is simple to perform. Physiologic basis: Humans have three major salivary glands: parotid, submandibular, and sublingual. These glands, along with additional minor salivary glands, secrete a rich mixture of biological chemicals, electrolytes, proteins, genetic material, polysaccharides, and other molecules. Most of these substances enter the salivary gland acinus and duct system from the surrounding capillaries via the intervening tissue fluid, although some substances are produced within the glands themselves. The level of each salivary component varies considerably depending on the health status of the individual and the presence of disease (oral or systemic). By measuring these components in the saliva, it is possible to screen for a variety of infections, allergies, hormonal disturbances, and neoplasms. Clinical use: The following conditions are among those that can be detected through saliva testing (list not comprehensive): adrenal conditions (such as Cushing's disease/syndrome and Addison's disease), altered female hormone states (such as polycystic ovary syndrome [PCOS], menopause, anovulation, and hormonal alterations in cycling women), altered male hormone states (such as hypogonadism/andropause and hyperestrogenic states), metabolic disturbances (such as insulin resistance, diabetes, and metabolic syndrome), benign and metastatic neoplasms (such as breast cancer, pancreatic cancer, and oral cancer), infectious conditions (such as HIV, viral hepatitis, amoebiasis, and Helicobacter pylori infection), and allergic conditions (such as food allergy). Uses in behavioral research: Saliva testing also has specific uses in clinical and experimental psychological settings. Due to its ability to provide insight into human behavior, emotions, and development, it has been used to investigate psychological phenomena such as anxiety, depression, PTSD, and other behavioral disorders. Its primary purpose is to test cortisol and alpha amylase levels, which are indicative of stress levels. Salivary cortisol is a good stress index, with increased levels of cortisol correlating positively with increased levels of stress. Cortisol levels rise slowly over time and take a while to return to base level, indicating that cortisol is more associated with chronic stress levels. Alpha amylase, on the other hand, spikes quickly when confronted with a stressor and returns to baseline soon after the stress has passed, making salivary amylase measurement a powerful tool for psychological research studying acute stress responses. Samples are usually collected from participants by having them drool through a straw into a collection tube while experiencing a stimulus, with samples taken every few minutes to record the gradual change in stress hormone levels.
Because the collection of saliva samples is non-invasive, it has the advantage of not introducing further stress on the participant that might otherwise distort results. In more specific studies looking at the link between cortisol levels and psychological phenomena, it has been found that chronic stressors such as life-threatening situations (for example, disease), depression, and social or economic hardship correlate with significantly higher cortisol levels. In situations where a subject undergoes induced anxiety, high cortisol levels correspond with more physiological symptoms of nervousness, such as increased heart rate, sweating, and skin conductance. Additionally, a negative correlation has been found between baseline levels of cortisol and aggression. Salivary cortisol levels can thus provide insight into a number of other psychological processes. Uses in behavioral research: Alpha-amylase levels in saliva provide a non-invasive way to examine sympathoadrenal medullary (SAM) activity, which can otherwise be measured only with electrophysiological equipment or blood plasma readings. Salivary alpha-amylase levels have been found to correlate with heightened autonomic nervous system activity, reacting in similar ways to the hormone norepinephrine. Subsequent findings reveal a relationship between α-amylase and competition: alpha-amylase levels changed when reacting to competition, but not when anticipating it. Furthermore, by testing alpha-amylase levels, scientists noticed a difference in reactivity among individuals with previous experience of a similar situation. While saliva testing has the promise of becoming a valuable and more widely used tool in psychological research, there are also some disadvantages to the method that must be kept in mind, including the cost of collecting and processing the samples and the reliability of the measure itself. There is a substantial amount of both within-person and between-person variability in cortisol levels that must be taken into account when drawing conclusions from studies. Many studies have been performed to further examine the variables that contribute to these within-person and between-person variances. Analyses of the variables that affect cortisol levels have yielded an extensive list of confounding variables. Uses in behavioral research: Diurnal variation is a major factor in within-person variance because baseline cortisol levels are known to differ based on the time of day. For normally developing individuals who follow a typical day–night schedule, cortisol production peaks during the last few hours of sleep. This peak is thought to aid in preparing the body for action and to stimulate the appetite upon waking. Diurnal variation is also affected by psychological conditions. For example, early morning cortisol levels have been found to be elevated in shy children, and elevated levels have been found in depressed adolescents, particularly between the hours of two and four PM. This might be important for understanding emotions and depressive symptoms. Uses in behavioral research: Other variables that affect within- and between-person variation are listed below. The list is not meant to be comprehensive, and the impact of many of these variables could benefit from further research and discussion. Age is one of the major factors in between-person variance. Some studies indicate that children and adolescents exhibit greater cortisol activity, potentially related to development.
Gender has been found to impact baseline levels of cortisol, contributing to between-person variance. In generally stressful situations, cortisol levels in males have been found to increase to nearly double those in females. In stressful social situations (i.e. a social rejection challenge), however, women but not men tend to show significantly higher levels of cortisol. Uses in behavioral research: The menstrual cycle has been found to impact levels of cortisol in the body, affecting both within- and between-person variance. Women in the luteal phase reportedly have levels of cortisol equal to men's, suggesting no sex differences in base levels of cortisol when women are not ovulating. Women in the follicular phase and women taking oral contraceptives reportedly have significantly lower levels of cortisol when compared to men and to women in the luteal phase. Uses in behavioral research: Pregnancy has been found to increase levels of cortisol in the body. By contrast, breast-feeding has been found to decrease levels of cortisol in the short term, even if a mother is exposed to a psychosocial stressor. Nicotine is known to increase levels of cortisol in the body, since it stimulates the HPA axis. After at least two cigarettes, smokers show significant elevations of salivary cortisol levels. Furthermore, habitual smokers show blunted salivary cortisol responses to psychological stressors. Food has been found to affect levels of cortisol. The presence of proteins has been found to increase cortisol. This variable is often affected by diurnal variation, with cortisol being notably higher at lunchtime than at dinnertime, and by gender, with women having higher levels of cortisol after eating than men. While some studies examining the effects of alcohol consumption and caffeine intake on base levels of cortisol have found positive correlations, the results are mixed and would benefit from further examination. Intense or prolonged exercise can result in increased levels of cortisol, while short-term and low-level exercise only mildly increases them. Repeated exposure to initially stressful stimuli has been found to result in a leveling off of cortisol in the body. Birth weight has been shown to be inversely related to base levels of cortisol; low birth weight is correlated with high levels of cortisol. Position within a social hierarchy has been found to affect levels of cortisol. One study looked at a sample of 63 army recruits and found that socially dominant subjects showed high salivary cortisol increases, compared to only modest elevations in subordinate men, after stress exposure and physical exercise. Some medications (i.e. glucocorticoids, psychoactive drugs, antidepressants) have been found to affect levels of cortisol in the body, but the results from studies examining these effects have been mixed. The impact of medications on cortisol levels could benefit from further research. Evidence and current research: Cortisol and melatonin aberrations In 2008 the Endocrine Society published diagnostic guidelines for Cushing's syndrome, wherein they recommended midnight salivary cortisol testing on two consecutive days as one possible initial screening tool. A 2009 review concluded that late-night salivary cortisol testing is a suitable alternative to serum cortisol testing for diagnosing Cushing's syndrome, reporting that both sensitivity and specificity exceeded ninety percent.
In 2010 Sakihara, et al., evaluated the usefulness and accuracy of salivary, plasma, and urinary cortisol levels and determined salivary cortisol to be the "method of choice" for Cushing's syndrome screening. In 2008 Restituto, et al., found early morning salivary cortisol to be "as good as serum" as an Addison's disease screening technique. In 2010 Bagcim, et al., determined that saliva melatonin levels "reflect those in serum at any time of the day" and are a reliable alternative to serum melatonin for studying pineal physiology in newborns. A 2008 review article described saliva melatonin testing as a "practical and reliable method for field, clinical, and research trials". Evidence and current research: Reproductive hormone irregularities A 2009 study examined the use of saliva testing to measure estradiol, progesterone, dehydroepiandrosterone (DHEA), and testosterone levels in 2,722 individuals (male and female). The researchers confirmed the "good validity of [salivary] sex hormone measurements" and concluded that salivary testing was a good method for testing older adults due to the ease of in-home collection. However, other studies suggest that such tests do not represent either the amount of hormones in the blood or their biological activity. Saliva testing is often used as part of bioidentical hormone replacement therapy, though it has been criticized for being expensive, unnecessary, and meaningless. Evidence and current research: Female In 2010 a study identified luteinizing hormone (LH) as an accurate salivary biomarker of ovulation in females. Researchers measured various hormones in the saliva throughout the menstrual cycle and found that salivary luteinizing hormone was reliably elevated during the ovulatory period and, for that reason, "salivary LH level is a reliable way to determine ovulation." A 1983 study of various salivary steroid assays showed that daily salivary progesterone measurement "provides a valuable means of assessing ovarian function". A 2001 study involved daily saliva collection from healthy subjects, with hormone levels plotted over the entire menstrual cycle. The researchers determined that salivary estradiol and progesterone curves corresponded to the daily profiles normally observed in blood, although of lesser amplitude. In 1999 researchers determined that ELISA-based saliva testing "can serve as a reliable [method] for estriol determination." A 2007 article reported that the free testosterone measurement, including via saliva assay, represents "the most sensitive biochemical marker supporting the diagnosis of PCOS." In 1990 Vuorento, et al., found that luteal phase defects, wherein progesterone levels decline prematurely within the menstrual cycle, were identified with high frequency using salivary progesterone testing among women with unexplained infertility. Evidence and current research: Male In 2009 Shibayama, et al., examined the accuracy of salivary androgen measurement for diagnosing late-onset hypogonadism (age-related decline in androgens, often called "andropause"). Researchers determined that the accuracy of saliva testosterone and DHEA measurement exceeded 98.5% and that this method "has satisfactory applicability" in the diagnosis of late-onset hypogonadism. A 2007 study reported a sensitivity and specificity of 100% for salivary testosterone in ruling out hypogonadism and concluded that salivary testosterone is a useful biomarker in the diagnosis of male androgen deficiency.
The use of salivary testosterone to screen for hypogonadism has been validated by other studies, including one involving 1,454 individuals. Those researchers concluded that salivary testosterone is "an acceptable assay for screening for hypogonadism." Neoplastic conditions Pancreatic cancer A 2010 study by Zhang, et al., demonstrated that researchers were able to detect pancreatic cancer with high sensitivity and specificity (90.0% and 95.0%, respectively) by screening saliva for four specific mRNA biomarkers. In a 2011 review article that examined pancreatic cancer biomarkers, Hamade and Shimosegawa concluded that clinical application of saliva biomarker testing is "beneficial for the screening and early detection of pancreatic cancer." Breast cancer In 2008 Emekli-Alturfan, et al., compared saliva from breast cancer patients to that from healthy individuals and observed, notably, that breast cancer patients' samples contained dysplastic cells and reduced lipid peroxides. A 2000 study compared the salivary levels of a breast cancer marker (HER2/neu) in healthy women, women with benign breast lesions, and women with breast cancer. Researchers found that the salivary (as well as serum) level of this marker was significantly higher in women with breast cancer than in healthy women and women with benign breast lesions; they went on to state that the marker may have potential as a tool for diagnosing breast cancer or detecting its recurrence. A separate study corroborated these findings and further demonstrated that another breast cancer marker (CA15-3) was elevated while the tumor suppressor protein p53 was reduced in the saliva of women with breast cancer compared to healthy controls and women with benign breast lesions. Evidence and current research: Oral cancer In 2010 Jou, et al., found that patients diagnosed with oral squamous cell carcinoma had elevated levels of transferrin in saliva compared to healthy controls and, moreover, that salivary transferrin measurement using the ELISA technique was "highly specific, sensitive, and accurate for the early detection of oral cancer." A 2009 study reported that the levels of two biomarkers, Cyclin D1 (increased compared to controls) and Maspin (decreased compared to controls), had sensitivities and specificities of 100% for oral cancer detection when measured in saliva. Saliva testing for specific mRNAs has been found to possess significant potential for oral cancer diagnosis. In fact, there is evidence to suggest that saliva RNA diagnostics are slightly superior to serum RNA diagnostics, with the comparative receiver operating characteristic (ROC) value being 95% for saliva but only 88% for serum. Evidence and current research: Glucose dysregulation A 2009 study compared the saliva glucose levels of diabetic patients to those of non-diabetic controls. The authors reported that "salivary [glucose] concentration and excretion were much higher in diabetic patients than in control subjects." In 2009 Rao, et al., investigated salivary biomarkers that could aid identification of type-2 diabetic individuals. Researchers found that sixty-five proteins, the majority of which are involved in regulating metabolism and immune response, were significantly altered in type-2 diabetics. They further observed that the relative increase of these specific proteins was directly proportional to the severity of disease (i.e., they were somewhat elevated in pre-diabetics and significantly elevated in diabetics).
In 2010 Soell, et al., determined that one particular salivary biomarker (chromogranin A) was over-expressed in 100% of diabetic patients when compared to controls. In 2010 Qvarnstrom, et al., conducted a cross-sectional analysis of 500 individuals and found that an increase in salivary lysozyme was "significantly associated with metabolic syndrome." Infectious conditions Human immunodeficiency virus The accuracy of saliva anti-HIV antibody testing has been demonstrated in numerous studies; two recent large-scale studies found both sensitivity and specificity to be 100%. The first of these was published in 2008 by Zelin, et al., and compared saliva antibody testing and serum antibody testing using the ELISA technique in 820 individuals. The second study, conducted by Pascoe, et al., compared saliva antibody testing to serum antibody testing using ELISA followed by confirmatory Western blot analysis in 591 individuals. The accuracy of saliva anti-HIV antibody testing has been confirmed by many additional studies, leading to approval of this method by the U.S. Food & Drug Administration in 2004. Evidence and current research: Viral hepatitis Several studies have demonstrated diagnostic potential for salivary hepatitis testing. A 2011 study demonstrated that HBV surface antigen saliva testing using ELISA had a sensitivity and specificity of 93.6% and 92.6%, respectively. Other studies found that saliva assay for anti-HAV antibodies (IgM and IgG) was an effective method to identify HAV-infected individuals. Hepatitis C has also been identified using salivary detection methods. Yaari, et al., reported in 2006 that saliva testing for anti-HCV antibodies yielded a sensitivity of 100% and a specificity that was "similar or better" when compared to serum testing. Evidence and current research: Parasitic infection A 2010 study found that saliva-based detection of the parasite Entamoeba histolytica was superior to existing fecal detection methods for patients with E. histolytica-associated liver abscess. In 2004 El Hamshary and Arafa found that salivary anti-E. histolytica IgA concentration had "predictive diagnostic value of intestinal amoebiasis…as well as in tissue amoebiasis." A 1990 study that involved saliva testing for E. histolytica in 223 school children demonstrated a sensitivity and specificity of 85% and 98%, respectively. In 2005 Stroehle, et al., determined that saliva detection of IgG antibodies against Toxoplasma gondii had a sensitivity and specificity of 98.5% and 100%, respectively. A study published in 1990 demonstrated the diagnostic utility of saliva IgG testing in identifying neurocysticercosis secondary to Taenia solium. Evidence and current research: Helicobacter pylori infection In a 2005 study, researchers investigated the accuracy of Helicobacter pylori diagnosis in dyspeptic patients using salivary anti-H. pylori IgG levels. They determined that saliva testing for H. pylori antibodies "could be used reliably for screening dyspeptic patients in general practice." That same year Tiwari, et al., examined the accuracy of testing saliva for H. pylori DNA and how well this correlated with the presence of H. pylori detected via gastric biopsy. Based on their results, the researchers concluded that saliva testing could serve as a reliable non-invasive detection method for H. pylori infection.
Evidence and current research: Periodontitis A 2009 study by Koss, et al., examined salivary biomarkers of periodontal disease; their findings revealed that three substances (peroxidase, hydroxyproline and calcium) were significantly increased in the saliva of patients with periodontitis. A 2010 study found that elevation of three saliva biomarkers (MMP-8, TIMP-1, and ICTP), particularly when analyzed using time-resolved immunofluorometric assay, was suggestive of periodontitis. Evidence and current research: Cardiovascular disease CRP: a salivary biomarker for cardiovascular risk In 2011 Punyadeera, et al., studied "the clinical utility of salivary C-reactive protein levels in assessing coronary events such as myocardial infarction in a primary health care setting." Researchers found that saliva CRP levels in cardiac patients were significantly higher when compared to healthy controls. Furthermore, they found that saliva CRP correlated with serum CRP in cardiac patients and, thus, could be a useful tool for "large patient screening studies for risk assessment of coronary events." Nitric Oxide: a salivary biomarker for cardio-protection Cardio-protective nitric oxide is generated in the body by a family of specific enzymes, the nitric oxide synthases. An alternative pathway for the generation of nitric oxide is the nitrate-nitrite-nitric oxide pathway, in which dietary inorganic nitrate is sequentially reduced to nitric oxide. An obligatory step in the generation of nitric oxide by this non-nitric oxide synthase pathway involves the uptake of nitrate by the salivary gland, excretion in saliva, and subsequent reduction to nitrite by oral commensal bacteria in the mouth. Salivary nitrite is then further chemically reduced in blood and tissue to nitric oxide, lowering blood pressure, inhibiting platelet aggregation, increasing cerebral blood flow and flow-mediated dilation, and decreasing oxygen cost during exercise. A principal source of dietary inorganic nitrate, which is reduced to nitric oxide in the body, is leafy green vegetables. Leafy green vegetables, in particular spinach and arugula, are abundant in anti-hypertensive diets such as the DASH diet, accounting for part of their blood pressure lowering effects. Several papers have shown that saliva nitrite levels correlate with blood nitrite levels, and that both serve as meaningful surrogates for blood pressure lowering effects. Evidence and current research: Sobko et al. showed that Japanese traditional diets rich in leafy vegetables elevated both plasma and saliva nitrite levels with a corresponding decrease in blood pressure. Webb et al. in 2008 reinforced the obligatory role of saliva in humans in generating nitric oxide. They showed that ingestion of beet juice, a nitrate-rich food, by healthy volunteers markedly reduced blood pressure, and that disrupting this salivary pathway, either by spitting out saliva or by interrupting the bioconversion of dietary nitrate to nitrite in the mouth with anti-bacterial mouthwash, abated the chemical reduction of nitrate to nitrite to nitric oxide and the associated decrease in blood pressure.
Blocking saliva from recirculating, or preventing salivary nitrate from being chemically reduced to nitrite, prevented a rise in plasma nitrite levels, blocked the decrease in blood pressure, and abolished the nitric oxide-mediated inhibition of platelet aggregation, confirming that the cardio-protective effects were attributable to nitric oxide generated via the conversion of nitrate to nitrite in saliva. In a series of reports, Ahluwalia and colleagues showed, in a crossover protocol with 14 volunteers who ingested inorganic nitrate, that plasma and saliva nitrite levels increased 3 hours post-ingestion with a significant reduction of blood pressure. Nitrate extracted from blood by the salivary gland accumulates in saliva and is then reduced to nitrite, and ultimately to nitric oxide, which has a direct blood pressure lowering effect. When saliva nitrite was decreased in volunteers who already had elevated levels, a rise in systolic and diastolic blood pressure resulted. Furthermore, pre-hypertensives may be more sensitive to the blood pressure lowering effects of the dietary nitrate-nitrite-nitric oxide pathway. Monitoring the bioconversion of plant-derived nitrate into salivary nitrite serves as a surrogate biomarker for total body nitric oxide status. Evidence and current research: Allergic states A 2002 study explored the relationship between allergies and salivary immunoglobulin levels in eighty subjects. Researchers demonstrated an association between the development of allergies and disturbances in saliva allergen-specific IgA levels (elevated compared to controls) and total secretory IgA (reduced compared to controls). In 2011 Peeters, et al., identified characteristic aberrations in certain salivary metabolites that were associated with peanut-allergic individuals when compared to peanut-tolerant controls. In 2003 Vojdani, et al., found that individuals exposed to various allergenic molds and mycotoxins showed "significantly higher levels of salivary IgA antibodies against one or more mold species." Chemical substances In 2009 Pink, et al., reported that saliva testing had become so widespread that it had begun to replace urine testing as the standard for detecting illicit drugs and prescription medications. Shin, et al., reported in 2008 that salivary detection of ethanol and three other alcohols (methanol, ethylene glycol, and diethylene glycol) had "relatively high sensitivity and specificity" and that such testing facilitates rapid diagnosis of alcohol intoxication. A 2002 study demonstrated that there was good agreement between saliva and breath ethanol analysis, and that chromatographic saliva ethanol assay is "specific…[and] shows good accuracy and precision." In 2011 Vindenes, et al., investigated the viability of drug abuse monitoring using saliva, comparing this method to urine drug detection. Researchers found that several drug metabolites were detected more frequently in saliva than in urine; this was true for 6-monoacetylmorphine, amphetamine, methamphetamine, and N-desmethyldiazepam. The same study showed that saliva testing could detect other drug metabolites as well, although not as frequently as urine testing; this was the case for morphine, other benzodiazepines, cannabis, and cocaine. Selected criticism: Sensitivity and specificity One often-cited criticism of using saliva as a diagnostic fluid is that biomarkers are present in amounts that are too low to be detected reliably.
As Wong points out, however, this "is no longer a limitation" due to the development of increasingly sensitive detection techniques. Advances in ELISA and mass spectrometry, in addition to the emergence of novel detection methods that take advantage of nanotechnology and other technologies, are enabling scientists and practitioners to achieve high analyte sensitivity. Selected criticism: Biomarker specificity is another consideration with saliva testing, much as it is with blood or urine testing. Many biomarkers are nonspecific (for example, CRP is a nonspecific inflammatory marker), and thus they cannot be used alone to diagnose any particular disease. This issue is currently being addressed through identification of multiple biomarkers that correlate with a disease; these can then be screened concomitantly to create a comprehensive panel of tests that significantly increases diagnostic specificity. Of note, certain types of saliva testing are considered by many to be more specific than blood testing; this is particularly true for steroid hormones. Since salivary hormone tests measure only those hormones that are not bound to sex hormone-binding globulin (SHBG) or albumin, they are regarded as reflecting only the bioactive ("free") fraction. With continued research into the field of salivary testing, accuracy parameters such as sensitivity and specificity will continue to improve. Selected criticism: Standardization As with other diagnostic testing methods, one drawback of saliva testing is the variability that exists among diagnostic devices and laboratory analysis techniques, especially for measuring hormones. Consequently, although a test result may be accurate and reliable within a particular assay method or laboratory, it may not be comparable to a test result obtained using a different method or laboratory. As the research community continues to validate and refine test methods and establish standard diagnostic ranges for various saliva biomarkers, this issue should be resolved. Recently, the U.S. National Institutes of Health and the Public Health Service each granted significant funding to further advancements in salivary testing, including the continued development of diagnostic standards.
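Because the studies surveyed above report their accuracy almost entirely as sensitivity and specificity, a brief illustration of how these two figures are computed from a diagnostic study's outcome counts may be helpful. This is a minimal sketch with hypothetical numbers, not data from any cited study:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = true positives / all diseased subjects;
    specificity = true negatives / all healthy subjects."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical screening study: 90 of 100 diseased and 95 of 100 healthy
# subjects classified correctly -> 90% sensitivity, 95% specificity.
print(sensitivity_specificity(tp=90, fn=10, tn=95, fp=5))
```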
**Matchstick TV** Matchstick TV: Matchstick TV was a project started on Kickstarter in September 2014 with the tag line "The Streaming Stick Built on Firefox OS". It was described as an "Open hardware and software platform" device built on Firefox OS and OpenFlint. The Matchstick was to work similarly to the Chromecast, letting the user "fling" content from a smartphone to a Wi-Fi connected Matchstick to show the content on a TV. Matchstick TV: The project's original completion date was February 2015, but Matchstick's controversial 2015 decision to add digital rights management (DRM) delayed and ultimately killed the product. On February 6, 2015 Matchstick announced the devices would not ship that month and would be delayed until August 2015, citing the addition of DRM as the reason for the delay. On August 3, 2015, the project officially ended due to issues implementing DRM into Firefox OS, and the Matchstick team offered refunds to the backers. Boing Boing called the decision "suicide-by-DRM," citing backers who wanted the product as originally specified without DRM. The commercial company behind the product appears to have been Purplecomm, Inc., according to payment and refund information from the Amazon payment system.
**Apotheosis** Apotheosis: Apotheosis (from Ancient Greek ἀποθέωσις (apothéōsis), from ἀποθεόω/ἀποθεῶ (apotheóō/apotheô) 'to deify'), also called divinization or deification (from Latin deificatio 'making divine'), is the glorification of a subject to divine levels and, commonly, the treatment of a human being, any other living thing, or an abstract idea in the likeness of a deity. Ancient Near East: Before the Hellenistic period, imperial cults were known in Ancient Egypt (pharaohs) and Mesopotamia (from Naram-Sin through Hammurabi). In the New Kingdom of Egypt, all deceased pharaohs were deified as the god Osiris. The architect Imhotep was deified after his death. Ancient Greece: From at least the Geometric period of the ninth century BC, the long-deceased heroes linked with founding myths of Greek sites were accorded chthonic rites in their heroon, or "hero-temple". Ancient Greece: In the Greek world, the first leader who accorded himself divine honours was Philip II of Macedon. At his wedding to his sixth wife, Philip's enthroned image was carried in procession among the Olympian gods; "his example at Aigai became a custom, passing to the Macedonian kings who were later worshipped in Greek Asia, from them to Julius Caesar and so to the emperors of Rome". Such Hellenistic state leaders might be raised to a status equal to the gods before death (e.g., Alexander the Great) or afterwards (e.g., members of the Ptolemaic dynasty). A heroic cult status similar to apotheosis was also an honour given to a few revered artists of the distant past, notably Homer. Ancient Greece: Archaic and Classical Greek hero-cults became primarily civic, extended from their familial origins, in the sixth century; by the fifth century none of the worshipers based their authority on tracing descent back to the hero, with the exception of some families who inherited particular priestly cults, such as the Eumolpides (descended from Eumolpus) of the Eleusinian mysteries, and some inherited priesthoods at oracle sites. The Greek hero cults can be distinguished, on the other hand, from the Roman cult of dead emperors, because the hero was not thought of as having ascended to Olympus or become a god: he was beneath the earth, and his power purely local. For this reason, hero cults were chthonic in nature, and their rituals more closely resembled those for Hecate and Persephone than those for Zeus and Apollo. Two exceptions were Heracles and Asclepius, who might be honoured as either gods or heroes, sometimes by chthonic night-time rites and sacrifice on the following day. One god considered a hero to mankind is Prometheus, who secretly stole fire from Mount Olympus and introduced it to mankind. Ancient Rome: Up to the end of the Republic, the god Quirinus was the only one the Romans accepted as having undergone apotheosis, through his identification/syncretism with Romulus (see Euhemerism). Subsequently, apotheosis in ancient Rome was a process whereby a deceased ruler was recognized as having been divine by his successor, usually also by a decree of the Senate and popular consent. The first of these cases was the deification of the last Roman dictator, Julius Caesar, in 42 BC at the instigation of his adopted son, the triumvir Caesar Octavian. In addition to showing respect, the present ruler often deified a popular predecessor to legitimize himself and gain popularity with the people.
The upper class did not always take part in the imperial cult, and some privately ridiculed the apotheosis of inept and feeble emperors, as in the satire The Pumpkinification of (the Divine) Claudius, usually attributed to Seneca. Ancient Rome: At the height of the imperial cult during the Roman Empire, sometimes the emperor's deceased loved ones—heirs, empresses, or lovers, as Hadrian's Antinous—were deified as well. Deified people were posthumously awarded the title Divus (Diva for women) added to their names to signify their divinity. Traditional Roman religion distinguished between a deus (god) and a divus (a mortal who became divine or deified), though not consistently. Temples and columns were erected to provide a space for worship. Ancient Rome: In the Roman story Cupid and Psyche, Zeus gives the ambrosia of the gods to the mortal Psyche, transforming her into a goddess herself. Ancient China: The Ming dynasty epic Investiture of the Gods deals heavily with deification legends. Numerous mortals have been deified into the Taoist pantheon, such as Guan Yu, Iron-crutch Li and Fan Kuai. Song dynasty general Yue Fei was deified during the Ming dynasty and is considered by some practitioners to be one of the three highest-ranking heavenly generals. Ancient India, Southeast Asia and North Korea: Various Hindu and Buddhist rulers in the past have been represented as deities, especially after death, from India to Indonesia. The founder of North Korea, Kim Il-Sung, instituted worship of himself amongst the citizens, and North Korea is considered the only current country to deify its ruler. Christianity: Instead of the word "apotheosis", Christian theology uses in English the words "deification" or "divinization" or the Greek word "theosis". Pre-Reformation and mainstream theology, in both East and West, views Jesus Christ as the preexisting God who undertook mortal existence, not as a mortal being who attained divinity. It holds that he has made it possible for human beings to be raised to the level of sharing the divine nature: he became human to make humans "partakers of the divine nature" (2 Peter 1:4). In John 10:34, Jesus referenced Psalm 82:6 when he stated "Is it not written in your Law, 'I have said you are gods'?" Other authors stated: "For this is why the Word became man, and the Son of God became the Son of man: so that man, by entering into communion with the Word and thus receiving divine sonship, might become a son of God." "For He was made man that we might be made God." "The only-begotten Son of God, wanting to make us sharers in his divinity, assumed our nature, so that he, made man, might make men gods." Accusations of self-deification may to some degree have been placed upon heretical groups such as the Waldensians. The Westminster Dictionary of Christian Theology, authored by Anglican priest Alan Richardson, contains the following in an article titled "Deification": Deification (Greek theosis) is for Orthodoxy the goal of every Christian. Man, according to the Bible, is 'made in the image and likeness of God.'. . . It is possible for man to become like God, to become deified, to become god by grace. This doctrine is based on many passages of both OT and NT (e.g. Ps. 82 (81).6; II Peter 1.4), and it is essentially the teaching both of St Paul, though he tends to use the language of filial adoption (cf. Rom. 8.9–17; Gal. 4.5–7), and the Fourth Gospel (cf. 17.21–23).
Christianity: The language of II Peter is taken up by St Irenaeus, in his famous phrase, 'if the Word has been made man, it is so that men may be made gods' (Adv. Haer V, Pref.), and becomes the standard in Greek theology. In the fourth century, St. Athanasius repeats Irenaeus almost word for word, and in the fifth century, St. Cyril of Alexandria says that we shall become sons 'by participation' (Greek methexis). Deification is the central idea in the spirituality of St. Maximus the Confessor, for whom the doctrine is the corollary of the Incarnation: 'Deification, briefly, is the encompassing and fulfillment of all times and ages,' … and St. Symeon the New Theologian at the end of the tenth century writes, 'He who is God by nature converses with those whom he has made gods by grace, as a friend converses with his friends, face to face.' Roman Catholic Church The Roman Catholic Church does not use the term "apotheosis". Christianity: Corresponding to the Greek word theosis are the Latin-derived words "divinization" and "deification" used in the parts of the Catholic Church that are of Latin tradition. The concept has been given less prominence in Western theology than in that of the Eastern Catholic Churches, but is present in the Latin Church's liturgical prayers, such as that of the deacon or priest when pouring wine and a little water into the chalice: "By the mystery of this water and wine may we come to share in the divinity of Christ who humbled himself to share in our humanity." Catholic theology stresses the concept of supernatural life, "a new creation and elevation, a rebirth, it is a participation in and partaking of the divine nature" (cf. 2 Peter 1:4). In Catholic teaching there is a vital distinction between natural life and supernatural life, the latter being "the life that God, in an act of love, freely gives to human beings to elevate them above their natural lives" and which they receive through prayer and the sacraments; indeed the Catholic Church sees human existence as having as its whole purpose the acquisition, preservation and intensification of this supernatural life. Christianity: The Church of Jesus Christ of Latter-day Saints The Church of Jesus Christ of Latter-day Saints (Mormons) believes in apotheosis along the lines of the Christian tradition of divinization or deification but refers to it as exaltation, or eternal life, and considers it to be accomplished by "sanctification". Mormons believe that people may live with God throughout eternity in families and eventually become gods themselves but remain subordinate to God the Father, Jesus Christ, and the Holy Spirit. While the primary focus of the LDS Church is on Jesus of Nazareth and his atoning sacrifice for man, Latter-day Saints believe that one purpose for Christ's mission and for his atonement is the exaltation or Christian deification of man. The third Article of Faith of The Church of Jesus Christ of Latter-day Saints states that all men may be saved from sin by the atonement of Jesus Christ, and LDS Gospel Doctrine (as published) states that all men will be saved and will be resurrected from death. However, only those who are sufficiently obedient and accept the atonement and the grace and mercy of Jesus Christ before the resurrection and final judgment will be "exalted" and receive a literal Christian deification. Christianity: A quote often attributed to the early Church leader Lorenzo Snow in 1837 is "As man now is, God once was: As God now is, man may be."
The teaching was first taught by Joseph Smith while he was pointing to John 5:19 in the New Testament; he said that "God himself, the Father of us all, dwelt on an earth, the same as Jesus Christ himself did." Many scholars also have discussed the correlation between the Latter-day Saint belief in exaltation and the ancient Christian theosis, or deification, as set forth by early Church Fathers. Several Latter-day Saint and non-Mormon historians specializing in studies of the early Christian Church also claim that the Latter-day Saint belief in eternal progression is more similar to the ancient Christian deification as set forth in numerous patristic writings of the 1st to 4th centuries AD than the beliefs of any other modern faith group of the Christian tradition. Members of the Church believe that the original Christian belief in man's divine potential gradually lost its meaning and importance in the centuries after the death of the apostles, as doctrinal changes by post-apostolic theologians caused Christians to lose sight of the true nature of God and his purpose for creating humanity. The concept of God's nature that was eventually accepted as Christian doctrine in the 4th century set divinity apart from humanity by defining the Godhead as three persons sharing a common divine substance. That classification of God in terms of a substance is not found in scripture but, in many aspects, mirrored the Greek metaphysical philosophies that are known to have influenced the thinking of Church Fathers. Latter-day Saints teach that by modern revelation, God restored the knowledge that he is the literal father of our spirits (Hebrews 12:9) and that the Biblical references to God creating mankind in his image and likeness are in no way allegorical. As such, Mormons assert that as the literal offspring of God the Father (Acts 17:28–29), humans have the potential to be heirs of his glory and co-heirs with Christ (Romans 8:16–17). The glory, Mormons believe, lies not in God's substance but in his intelligence: in other words, light and truth (Doctrine and Covenants 93:36). Thus, the purpose of humans is to grow and progress to become like the Father in Heaven. Mortality is seen as a crucial step in the process in which God's spirit children gain a body, which, though formed in the image of the Father's body, is subject to pain, illness, temptation, and death. The purpose of this earth life is to learn to choose the right in the face of that opposition, thereby gaining essential experience and wisdom. The level of intelligence we attain in this life will rise in the Resurrection (Doctrine and Covenants 130:18–19). Bodies will then be immortal like those of the Father and the Son (Philippians 3:21), but the degree of glory to which each person will resurrect is contingent upon the Final Judgment (Revelation 20:13, 1 Corinthians 15:40–41). Those who are worthy to return to God's presence can continue to progress towards a fullness of God's glory, which Mormons refer to as eternal life, or exaltation (Doctrine and Covenants 76).
Christianity: The Latter-day Saint concept of apotheosis/exaltation is expressed in Latter-day scriptures (Mosiah 3:19, Alma 13:12, D&C 78:7, D&C 78:22, D&C 84:4, D&C 84:23, D&C 88:68, D&C 93:28) and was expressed by a member of the Quorum of the Twelve Apostles: "Though stretched by our challenges, by living righteously and enduring well we can eventually become sufficiently more like Jesus in our traits and attributes, that one day we can dwell in the Father's presence forever and ever" (Neal Maxwell, October 1997). Christianity: In early 2014, the Church of Jesus Christ of Latter-day Saints published an essay on the official church website specifically addressing the foundations, history, and official beliefs regarding apotheosis. The essay addresses the scriptural foundations of this belief, teachings of the early Church Fathers on the subject of deification, and the teachings of modern Church leaders, starting with Joseph Smith. Christianity: Wesleyan Protestantism Distinctively, in Wesleyan Protestantism theosis sometimes implies the doctrine of entire sanctification, which teaches, in summary, that it is the Christian's goal, in principle possible to achieve, to live without any (voluntary) sin (Christian perfection). Wesleyan theologians detect in Wesley the influence of the Eastern Fathers, who saw the drama of salvation leading to the deification (apotheosis) of the human, in order that the perfection that was originally part of human nature in creation, but was distorted by the fall, might bring fellowship with the divine. In poetry: Samuel Menashe (1925–2011) wrote a poem entitled Apotheosis, as did Barbara Kingsolver. Emily Dickinson (1830–1886) wrote Love, Poem 18: Apotheosis. The poet Dejan Stojanović's Dancing of Sounds contains the line, "Art is apotheosis." Paul Laurence Dunbar wrote a poem entitled Love's Apotheosis. Samuel Taylor Coleridge wrote a poem entitled "The Apotheosis, or the Snow-Drop" in 1787. In science: In an essay entitled The Limitless Power of Science, Peter Atkins described science as an apotheosis, writing: Science, above all, respects the power of the human intellect. Science is the apotheosis of the intellect and the consummation of the Renaissance. Science respects more deeply the potential of humanity than religion ever can. Anthropolatry: Anthropolatry is the deification and worship of humans. It was practiced in ancient Japan towards the emperors. Followers of Socinianism were later accused of practicing anthropolatry. The philosopher Ludwig Feuerbach professed a religion that worshipped all human beings, while Auguste Comte venerated only individuals who made positive contributions and excluded those who did not.
**Qrpff** Qrpff: qrpff is a Perl script created by Keith Winstein and Marc Horowitz of the MIT SIPB. It performs DeCSS in six or seven lines. The name itself is an encoding of "decss" in rot-13. The algorithm was rewritten 77 times to condense it to six lines. Two versions of qrpff exist: a short version (6 lines) and a fast version (7 lines). The fast version is fast enough to decode a movie in real time. qrpff and related memorabilia were sold for $2,500 at The Algorithm Auction, the world's first auction of computer algorithms.
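The rot-13 relationship between the script's name and "decss" can be verified directly. A minimal Python sketch (illustrative only; it is unrelated to the Perl source itself):

```python
import codecs

# rot-13 shifts each letter 13 places, so applying it twice is the identity.
assert codecs.encode("decss", "rot_13") == "qrpff"
assert codecs.encode("qrpff", "rot_13") == "decss"
print(codecs.encode("decss", "rot_13"))  # prints: qrpff
```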
**Local Area Transport** Local Area Transport: Local Area Transport (LAT) is a non-routable (data link layer) networking technology developed by Digital Equipment Corporation to provide connection between the DECserver terminal servers and Digital's VAX, Alpha, and MIPS host computers via Ethernet, giving communication between those hosts and serial devices such as video terminals and printers. The protocol itself was designed to maximize packet efficiency over Ethernet by bundling multiple characters from multiple ports into a single packet for Ethernet transport. One LAT strength was efficiently handling time-sensitive data transmission. Over time, other host implementations of the LAT protocol appeared, allowing communications to a wide range of Unix and other non-Digital operating systems using the LAT protocol. History: In 1984, the first implementation of the LAT protocol connected a terminal server to a VMS VAX-cluster at Spit Brook Road, Nashua, NH. By "virtualizing" the terminal port at the host end, a very large number of plug-and-play VT100-class terminals could connect to each host computer system. Additionally, a single physical terminal could connect via multiple sessions to multiple hosts simultaneously. Later generations of terminal servers included both LAT and Telnet, one of the earliest protocols created to run on the burgeoning TCP/IP-based Internet. Additionally, the ability to create reverse-direction pathways from users to non-traditional RS232 devices (i.e. UNIX host TTYS1 operator ports) created an entirely new market for terminal servers, which from the mid-to-late 1990s onward became known as console servers. History: LAT and VMS drove the initial surge of adoption of thick Ethernet by the computer industry. By 1986, terminal server networks accounted for 10% of Digital's $10 billion revenue. These early Ethernet LANs scaled using Ethernet bridges (another DEC invention) as well as DECnet routers. Subsequently, Cisco routers, which implemented TCP/IP and DECnet, emerged as a global connection between these packet-based Ethernet LANs. History: Over time, as terminals became less popular, terminal emulators commonly included a built-in LAT client. LAT features: If a computer communicating via LAT does not receive an acknowledgment within 80 milliseconds for a packet it transmitted, it resends that packet; this can clog a network. No data is sent if no data is offered, and under heavy load LAT limits the number of packets sent per second to twenty-four: twelve transmits and twelve receives. As more characters are sent, the packets get bigger but not more numerous. Above 80 milliseconds of latency, a touch typist will notice sluggish character echo. The LAT 80-millisecond delay offloads both the network, by sending fewer, larger packets, and the attached systems, by reducing interrupts at each host; see the sketch after the vendor list below. Early terminal server vendors: Digital Equipment Corporation - primarily via their DECserver systems. Able Computer - An early provider of Terminal Server products. Chase Research - A Europe-based early provider of Terminal Server products. Cisco Systems - Provided LAT on Terminal Servers as early as 1990. Emulex Corporation - A California-based early provider of Terminal Server products that included X.25 and 3270 features; they sold through Datability and other OEMs. Equinox Systems - A Florida-based early provider of Terminal Server products. Hughes LAN Systems - Provided LAT capability in 1989. Interlink Computer Sciences, an early 1990 LAT vendor.
Xyplex Corporation - An early provider of Terminal Server products based in Massachusetts. Open Source solutions: Most Linux distributions offer client and server LAT packages that can easily be installed via a package manager. This allows, for example, access to a local area network server over LAT while connected to a corporate VPN that would otherwise block local TCP/IP traffic.
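The character-bundling and pacing behavior described under LAT features above can be illustrated with a toy model. This is a minimal sketch under stated assumptions (the class and method names are invented for illustration and do not reflect DEC's actual implementation):

```python
CIRCUIT_TIMER_MS = 80  # LAT gathers traffic on a fixed circuit timer

class LatCircuitSketch:
    """Toy model: characters from many serial ports share one Ethernet packet."""

    def __init__(self):
        self.pending = {}  # port id -> list of buffered characters

    def queue_char(self, port: int, ch: str) -> None:
        """Buffer a character instead of sending it immediately."""
        self.pending.setdefault(port, []).append(ch)

    def tick(self):
        """Called every CIRCUIT_TIMER_MS: emit one packet with all buffered data."""
        if not self.pending:
            return None  # no data offered -> no packet sent
        packet = {port: "".join(chars) for port, chars in self.pending.items()}
        self.pending.clear()
        return packet  # one larger packet instead of many small ones

circuit = LatCircuitSketch()
for port, text in [(1, "ls"), (2, "dir"), (1, "\r")]:
    for ch in text:
        circuit.queue_char(port, ch)
print(circuit.tick())  # three bursts from two ports, carried in a single packet
```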
**Selective heat sintering** Selective heat sintering: Selective heat sintering (SHS) is a type of additive manufacturing process. It works by using a thermal printhead to apply heat to layers of powdered thermoplastic. When a layer is finished, the powder bed moves down, and an automated roller adds a new layer of material, which is sintered to form the next cross-section of the model. SHS is best suited to manufacturing inexpensive prototypes for concept evaluation, fit/form testing, and functional testing. SHS is a plastics additive manufacturing technique similar to selective laser sintering (SLS), the main difference being that SHS employs a less intense thermal printhead instead of a laser, making it a cheaper solution and one that can be scaled down to desktop sizes.
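The layer cycle described above can be summarized as a simple loop. A minimal Python sketch (the helper functions are illustrative placeholders, not a real printer API):

```python
def spread_powder_layer(thickness_mm: float) -> None:
    print(f"roller spreads {thickness_mm} mm of fresh thermoplastic powder")

def sinter_cross_section(cross_section: str) -> None:
    print(f"thermal printhead sinters cross-section: {cross_section}")

def lower_powder_bed(thickness_mm: float) -> None:
    print(f"powder bed lowers by {thickness_mm} mm")

def build_part_shs(sliced_model: list[str], layer_height_mm: float = 0.1) -> None:
    """Illustrative SHS build loop: one sintered cross-section per powder layer."""
    for cross_section in sliced_model:
        spread_powder_layer(layer_height_mm)
        sinter_cross_section(cross_section)
        lower_powder_bed(layer_height_mm)

build_part_shs(["base plate", "walls", "top cap"])
```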
**Guillotine partition** Guillotine partition: Guillotine partition is the process of partitioning a rectilinear polygon, possibly containing some holes, into rectangles, using only guillotine cuts. A guillotine cut (also called an edge-to-edge cut) is a straight bisecting line going from one edge of an existing polygon to the opposite edge, similarly to a paper guillotine. Guillotine partition: Guillotine partition is particularly common in designing floorplans in microelectronics. An alternative term for a guillotine partition in this context is a slicing partition or a slicing floorplan. Guillotine partitions are also the underlying structure of binary space partitions. There are various optimization problems related to guillotine partition, such as minimizing the number of rectangles or the total length of the cuts. These are variants of polygon partitioning problems in which the cuts are constrained to be guillotine cuts. Guillotine partition: A related but different problem is guillotine cutting. In that problem, the original sheet is a plain rectangle without holes. The challenge comes from the fact that the dimensions of the small rectangles are fixed in advance. The optimization goals are usually to maximize the area or value of the produced rectangles, or to minimize the waste or the number of required sheets. Computing a guillotine partition with a smallest edge-length: In the minimum edge-length rectangular-partition problem, the goal is to partition the original rectilinear polygon into rectangles such that the total edge length is a minimum.: 166–167 This problem can be solved in time O(n^5) even if the raw polygon has holes. The algorithm uses dynamic programming based on the following observation: there exists a minimum-length guillotine rectangular partition in which every maximal line segment contains a vertex of the boundary. Therefore, in each iteration, there are O(n) possible choices for the next guillotine cut, and there are altogether O(n^4) subproblems. Computing a guillotine partition with a smallest edge-length: In the special case in which all holes are degenerate (single points), the minimum-length guillotine rectangular partition is at most 2 times the minimum-length rectangular partition.: 167–170  By a more careful analysis, it can be proved that the approximation factor is in fact at most 1.75. It is not known whether the 1.75 factor is tight, but there is an instance in which the approximation factor is 1.5. Therefore, the guillotine partition provides a constant-factor approximation to the general problem, which is NP-hard. Computing a guillotine partition with a smallest edge-length: These results can be extended to a d-dimensional box: a guillotine partition with minimum edge-length can be found in time O(d·n^(2d+1)), and the total (d−1)-volume in the optimal guillotine partition is at most 2d−4+4/d times that of an optimal d-box partition. Arora and Mitchell used the guillotine-partitioning technique to develop polynomial-time approximation schemes for various geometric optimization problems. Number of guillotine partitions: Besides the computational problems, guillotine partitions have also been studied from a combinatorial perspective. Suppose a given rectangle should be partitioned into smaller rectangles using guillotine cuts only. Obviously, there are infinitely many ways to do this, since even a single cut can take infinitely many values. However, the number of structurally different guillotine partitions is bounded.
In two dimensions, there is an upper bound of O(n!·2^(5n−3)/n^(3/2)) attributed to Knuth; the exact number is the Schröder number. In d dimensions, Ackerman, Barequet, Pinter and Romik give an exact summation formula, and prove that it is in Θ((2d−1+2√(d(d−1)))^n/n^(3/2)). When d=2 this bound becomes Θ((3+2√2)^n/n^(3/2)); a sketch of the underlying count follows this section. Asinowski, Barequet, Mansour and Pinter also study the number of cut-equivalence classes of guillotine partitions. Coloring guillotine partitions: A polychromatic coloring of a planar graph is a coloring of its vertices such that, in each face of the graph, each color appears at least once. Several researchers have tried to find the largest k such that a polychromatic k-coloring always exists. An important special case is when the graph represents a partition of a rectangle into rectangles. Dinitz, Katz and Krakovski proved that there always exists a polychromatic 3-coloring. Aigner-Horev, Katz, Krakovski and Loffler proved that, in the special sub-case in which the graph represents a guillotine partition, a strong polychromatic 4-coloring always exists. Keszegh extended this result to d-dimensional guillotine partitions, and provided an efficient coloring algorithm. Dimitrov, Aigner-Horev and Krakovski finally proved that there always exists a strong polychromatic 4-coloring.
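The growth rate Θ((3+2√2)^n/n^(3/2)) quoted above is the familiar asymptotic of the large Schröder numbers. As a small illustration, a Python sketch computing them from the standard three-term recurrence (a textbook recurrence, not the Ackerman et al. summation formula):

```python
def schroder_numbers(count: int) -> list[int]:
    """Large Schroder numbers S_0, S_1, ... computed via
    (n + 1) * S_n = 3 * (2n - 1) * S_{n-1} - (n - 2) * S_{n-2}."""
    s = [1, 2]
    for n in range(2, count):
        s.append((3 * (2 * n - 1) * s[n - 1] - (n - 2) * s[n - 2]) // (n + 1))
    return s[:count]

# 1, 2, 6, 22, 90, 394, 1806, 8558; successive ratios approach 3 + 2*sqrt(2).
print(schroder_numbers(8))
```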
**Robert H. Cox** Robert H. Cox: Robert Cox is Professor Emeritus at the Lankenau Institute for Medical Research. He is also Emeritus Professor of Physiology at the University of Pennsylvania School of Medicine. Cox earned his B.S. in Electrical Engineering and his M.S. in Biomedical Engineering, both from Drexel University, Philadelphia; he earned his PhD in Biomedical Engineering from the University of Pennsylvania. His research is focused on how ion channels regulate blood pressure and the electrical signals that drive the heartbeat, specifically a type of calcium channel that is present only in arteries. Robert H. Cox: Books edited: Recent Advances in Arterial Diseases: Atherosclerosis, Hypertension and Vasospasm, ed. Thomas N. Tulenko and Robert H. Cox (New York: Alan R. Liss, Inc.), 1986. Acute Myocardial Infarction: Emerging Concepts of Pathogenesis and Treatment, ed. Robert H. Cox (New York: Praeger), 1989. Cellular and Molecular Mechanisms in Hypertension, ed. Robert H. Cox (New York: Plenum Press), 1989.
**Golf mirror** Golf mirror: A periscope is an instrument for observation over, around or through an object, obstacle or condition that prevents direct line-of-sight observation from an observer's current position. In its simplest form, it consists of an outer case with mirrors at each end set parallel to each other at a 45° angle. This form of periscope, with the addition of two simple lenses, served for observation purposes in the trenches during World War I. Military personnel also use periscopes in some gun turrets and in armoured vehicles. More complex periscopes using prisms or advanced fiber optics instead of mirrors, and providing magnification, operate on submarines and in various fields of science. The overall design of the classical submarine periscope is very simple: two telescopes pointed into each other. If the two telescopes have different individual magnifications, the difference between them causes an overall magnification or reduction. Early examples: Johannes Hevelius described an early periscope (which he called a "polemoscope") with lenses in 1647 in his work Selenographia, sive Lunae descriptio [Selenography, or an account of the Moon]. Hevelius saw military applications for his invention. Early examples: In 1854, Hippolyte Marié-Davy invented the first naval periscope, consisting of a vertical tube with two small mirrors fixed at each end at 45°. Simon Lake used periscopes in his submarines in 1902. Sir Howard Grubb perfected the device in World War I. Morgan Robertson (1861–1915) claimed to have tried to patent the periscope: he described a submarine using a periscope in his fictional works. Early examples: Periscopes, in some cases fixed to rifles, served in World War I (1914–1918) to enable soldiers to see over the tops of trenches, thus avoiding exposure to enemy fire (especially from snipers). The periscope rifle also saw use during the war – this was an infantry rifle sighted by means of a periscope, so the shooter could aim and fire the weapon from a safe position below the trench parapet. Early examples: During World War II (1939–1945), artillery observers and officers used specially manufactured periscope binoculars with different mountings. Some of them also allowed estimating the distance to a target, as they were designed as stereoscopic rangefinders. Armored vehicle periscopes: Tanks and armoured vehicles use periscopes: they enable drivers, tank commanders, and other vehicle occupants to inspect their situation through the vehicle roof. Prior to periscopes, direct vision slits were cut in the armour for occupants to see out. Periscopes permit viewing outside the vehicle without the need to cut these weaker vision openings in the front and side armour, better protecting the vehicle and occupants. Armored vehicle periscopes: A protectoscope is a related periscopic vision device designed to provide a window in armoured plate, similar to a direct vision slit. A compact periscope inside the protectoscope allows the vision slit to be blanked off with spaced armoured plate. This prevents a potential ingress point for small arms fire, with only a small difference in vision height, but still requires the armour to be cut. Armored vehicle periscopes: In the context of armoured fighting vehicles, such as tanks, a periscopic vision device may also be referred to as an episcope. In this context a periscope refers to a device that can rotate to provide a wider field of view (or is fixed into an assembly that can), while an episcope is fixed in position.
Periscopes may also be referred to by slang, e.g. "shufti-scope".

Gundlach and Vickers 360-degree periscopes: An important development, the Gundlach rotary periscope, incorporated a rotating top with a selectable additional prism which reversed the view. This allowed a tank commander to obtain a 360-degree field of view without moving his seat, including rear vision by engaging the extra prism. The design, patented by Rudolf Gundlach in 1936, first saw use in the Polish 7-TP light tank (produced from 1935 to 1939).

As part of Polish–British pre-World War II military cooperation, the patent was sold to Vickers-Armstrong, where it saw further development for use in British tanks, including the Crusader, Churchill, Valentine, and Cromwell models, as the Vickers Tank Periscope MK.IV.

The Gundlach-Vickers technology was shared with the American Army for use in its tanks, including the Sherman, built to meet joint British and US requirements. This saw post-war controversy through legal action: "After the Second World War and a long court battle, in 1947 he, Rudolf Gundlach, received a large payment for his periscope patent from some of its producers." The USSR also copied the design and used it extensively in its tanks, including the T-34 and T-70. The copies were based on Lend-Lease British vehicles, and many parts remain interchangeable. Germany also made and used copies.

Periscopic gun-sights: Periscopic sights were also introduced during the Second World War. In British use, the Vickers periscope was provided with sighting lines, enabling front and rear prisms to be directly aligned to gain an accurate direction. On later tanks such as the Churchill and Cromwell, a similarly marked episcope provided a backup sighting mechanism aligned with a vane sight on the turret roof.

Later, US-built Sherman tanks and British Centurion and Charioteer tanks replaced the main telescopic sight with a true periscopic sight in the primary role. The periscopic sight was linked to the gun itself, allowing elevation to be captured (rotation being fixed as part of the rotating turret). The sights formed part of the overall periscope, providing the gunner with greater overall vision than was previously possible with the telescopic sight. The FV4201 Chieftain used the TESS (TElescopic Sighting System), developed in the early 1980s, which was later sold as surplus for use on RAF Phantom aircraft.

Modern specialised AFV periscopes: In modern use, specialised periscopes can also provide night vision. The Embedded Image Periscope (EIP), designed and patented by Kent Periscopes, provides standard unity-vision periscope functionality for normal daytime viewing of the vehicle surroundings, plus the ability to display digital images from a range of on-vehicle sensors and cameras (including thermal and low light), such that the resulting image appears "embedded" within the unit and projected at a comfortable viewing position.

Naval use: Periscopes allow a submarine, when submerged at a relatively shallow depth, to search visually for nearby targets and threats on the surface of the water and in the air. When not in use, a submarine's periscope retracts into the hull.
A submarine commander in tactical conditions must exercise discretion when using his periscope, since it creates a visible wake (and may also become detectable by radar), giving away the submarine's position.

Marié-Davy built a simple, fixed naval periscope using mirrors in 1854. Thomas H. Doughty of the United States Navy later invented a prismatic version for use in the American Civil War of 1861–1865.

Submarines adopted periscopes early. Captain Arthur Krebs adapted two on the experimental French submarine Gymnote in 1888 and 1889. The Spanish inventor Isaac Peral equipped his submarine Peral (developed in 1886 but launched on September 8, 1888) with a fixed, non-retractable periscope that used a combination of prisms to relay the image to the submariner. (Peral also developed a primitive gyroscope for submarine navigation and pioneered the ability to fire live torpedoes while submerged.) The invention of the collapsible periscope for use in submarine warfare is usually credited to Simon Lake in 1902. Lake called his device the "omniscope" or "skalomniscope". As of 2009, modern submarine periscopes incorporate lenses for magnification and function as telescopes. They typically employ prisms and total internal reflection instead of mirrors, because prisms, which do not require coatings on the reflecting surface, are much more rugged than mirrors. They may have additional optical capabilities such as range-finding and targeting. The mechanical systems of submarine periscopes typically use hydraulics and need to be quite sturdy to withstand the drag through water. The periscope chassis may also support a radio or radar antenna.

Submarines traditionally had two periscopes: a navigation or observation periscope and a targeting, or commander's, periscope. Navies originally mounted these periscopes in the conning tower, one forward of the other in the narrow hulls of diesel-electric submarines. In the much wider hulls of recent US Navy submarines the two operate side by side. The observation scope, used to scan the sea surface and sky, typically had a wide field of view and no magnification or low-power magnification. The targeting or "attack" periscope, by comparison, had a narrower field of view and higher magnification. In World War II and earlier submarines it was the only means of gathering target data to accurately fire a torpedo, since sonar was not yet sufficiently advanced for this purpose (ranging with sonar required emission of an acoustic "ping" that gave away the location of the submarine) and most torpedoes were unguided.

Twenty-first-century submarines do not necessarily have periscopes. The United States Navy's Virginia-class submarines and the Royal Navy's Astute-class submarines instead use photonics masts, pioneered by the Royal Navy's HMS Trenchant, which lift an electronic imaging sensor-set above the water. Signals from the sensor-set travel electronically to workstations in the submarine's control center. While the cables carrying the signal must penetrate the submarine's hull, they use a much smaller and more easily sealed (and therefore less expensive and safer) hull opening than those required by periscopes. Eliminating the telescoping tube running through the conning tower also allows greater freedom in designing the pressure hull and in placing internal equipment.

Aircraft use: Periscopes have also been used on aircraft to provide vision from positions with a limited view. The first known use of an aircraft periscope was on the Spirit of St. Louis.
The Vickers VC10 had a periscope that could be used at four locations on the aircraft fuselage; V-bombers such as the Avro Vulcan and Handley Page Victor, and the Nimrod MR1, used one as the "on top sight". Various US bomber aircraft such as the B-52 used sextant periscopes for celestial navigation before the introduction of GPS, which also allowed the aircrew to navigate without the use of an astrodome in the fuselage. An emergency periscope, found under "Seat D" behind the over-wing exit row, was fitted to all Boeing 737 models manufactured before 1997 and used to visually check the position of the landing gear. High-speed and hypersonic aircraft such as the North American X-15 used a periscope.
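Since this article opened by describing the classical submarine periscope as two telescopes pointed into each other, here is the promised worked example of the combined magnification. It is only a sketch using the standard geometric-optics relation for afocal telescopes; the focal lengths and magnifications are illustrative numbers, not values from the article.

```latex
% Angular magnification of a simple telescope:
%   M_i = f_{obj,i} / f_{eye,i}  (objective over eyepiece focal length)
% For two telescopes in series, the magnifications multiply:
M_i = \frac{f_{\mathrm{obj},i}}{f_{\mathrm{eye},i}}, \qquad
M_{\mathrm{total}} = M_1 \times M_2
% Example: M_1 = 1.5 and M_2 = 4 give a 6x periscope;
% M_1 = 1.5 and M_2 = 2/3 cancel out to unity (1x) vision.
```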
**Expected goals** Expected goals: In association football, expected goals (xG) is a performance metric used to evaluate football team and player performance. It represents the probability that a given scoring opportunity will result in a goal. It is also used in ice hockey.

Metric (association football): There is some debate about the origin of the term expected goals. Vic Barnett and his colleague Sarah Hilditch referred to "expected goals" in their 1993 paper that investigated the effects of artificial pitch (AP) surfaces on home team performance in association football in England. Their paper included this observation: Quantitatively we find for the AP group about 0.15 more goals per home match than expected and, allowing for the lower than expected goals against in home matches, an excess goal difference (for home matches) of about 0.31 goals per home match. Over a season this yields about 3 more goals for, an improved goal difference of about 6 goals.

Jake Ensum, Richard Pollard and Samuel Taylor (2004) reported their study of data from 37 matches in the 2002 World Cup in which 930 shots and 93 goals were recorded. Their research sought "to investigate and quantify 12 factors that might affect the success of a shot". Their logistic regression identified five factors that had a significant effect on determining the success of a kicked shot: distance from the goal; angle from the goal; whether or not the player taking the shot was at least 1 m away from the nearest defender; whether or not the shot was immediately preceded by a cross; and the number of outfield players between the shot-taker and goal. They concluded "the calculation of shot probabilities allows a greater depth of analysis of shooting opportunities in comparison to recording only the number of shots". In a subsequent paper (2004), Ensum, Pollard and Taylor combined data from the 1986 and 2002 World Cup competitions to identify three significant factors that determined the success of a kicked shot: distance from the goal; angle from the goal; and whether or not the player taking the shot was at least 1 m away from the nearest defender. More recent studies have identified similar factors as relevant for xG metrics.

Howard Hamilton (2009) proposed "a useful statistic in soccer" that "will ultimately contribute to what I call an 'expected goal value' — for any action on the field in the course of a game, the probability that said action will create a goal".

Sander Itjsma (2011) discussed "a method to assign different value to different chances created during a football match" and in doing so concluded: we now have a system in place in order to estimate the overall value of the chances created by either team during the match. Knowing how many goals a team is expected to score from its chances is of much more value than just knowing how many attempts to score a goal were made. Other applications of this method of evaluation would be to distinguish a lack of quality attempts created from a finishing problem or to evaluate defensive and goalkeeping performances. And a third option would be to plot the balance of play during the match in terms of the quality of chances created in order to graphically represent how the balance of play evolved during the match.

Sarah Rudd (2011) discussed probable goal scoring patterns (P(Goal)) in her use of Markov chains for tactical analysis (including the proximity of defenders) from 123 games in the 2010–2011 English Premier League season.
In a video presentation of her paper at the 2011 New England Symposium of Statistics in Sport, Rudd reported her use of analysis methods to compare "expected goals" with actual goals and her process of applying weightings to incremental actions for P(goal) outcomes.

In April 2012, Sam Green wrote about 'expected goals' in his assessment of Premier League goalscorers. He asked: "So how do we quantify which areas of the pitch are the most likely to result in a goal and therefore, which shots have the highest probability of resulting in a goal?" He added: If we can establish this metric, we can then accurately and effectively increase our chances of scoring and therefore winning matches. Similarly, we can use this data from a defensive perspective to limit the better chances by defending key areas of the pitch. Green proposed a model to determine "a shot's probability of being on target and/or scored". With this model "we can look at each player's shots and tally up the probability of each of them being a goal to give an expected goal (xG) value".

Metric (ice hockey): In 2004, Alan Ryder shared a methodology for the study of the quality of an ice hockey shot on goal. His discussion started with this sentence: "Not all shots on goal are created equal". Ryder's model for the measurement of shot quality was:
1. Collect the data and analyze goal probabilities for each shooting circumstance.
2. Build a model of goal probabilities that relies on the measured circumstances.
3. For each shot, determine its goal probability.
4. Calculate Expected Goals: EG = the sum of the goal probabilities for each shot.
5. Neutralize the variation in shots on goal by calculating Normalized Expected Goals.
6. Apply the same method to shots allowed to measure Shot Quality Against.

Ryder concluded: The model to get to expected goals given the shot quality factors is simply based on the data. There are no meaningful assumptions made. The analytic methods are the classics from statistics and actuarial science. The results are therefore very credible.

In 2007, Ryder issued a product recall notice for his shot quality model. He presented "a cautionary note on the calculation of shot quality" and pointed to "data quality problems with the measurement of the quality of a hockey team's shots taken and allowed". He reported: I have been worried that there is a systemic bias in the data. Random errors don't concern me. They even out over large volumes of data. But I do think that ... the scoring in certain rinks has a bias towards longer or shorter shots, the most dominant factor in a shot quality model. And I set out to investigate that possibility.

The term 'expected goals' appeared in a paper about ice hockey performance presented by Brian Macdonald at the MIT Sloan Sports Analytics Conference in 2012. Macdonald's method for calculating expected goals was reported in the paper: We used data from the last four full NHL seasons. For each team, the season was split into two halves. Since midseason trades and injuries can have an impact on a team's performance, we did not use statistics from the first half of the season to predict goals in the second half. Instead, we split the season into odd and even games, and used statistics from odd games to predict goals in even games. Data from 2007-08, 2008-09, and 2009-10 was used as the training data to estimate the parameters in the model, and data from the entire 2010-11 was set aside for validating the model. The model was also validated using 10-fold cross-validation.
Mean squared error (MSE) of actual goals and predicted goals was our choice for measuring the performance of our models.
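The studies above share a common recipe: fit a probability model on per-shot features, sum the per-shot probabilities to get expected goals, and judge the model by comparing predicted against actual goals (e.g., with MSE). A minimal sketch of that recipe follows, in Python with scikit-learn; the shot data is invented for illustration, and the three features echo the factors Ensum, Pollard and Taylor found significant (distance, angle, defender at least 1 m away).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import mean_squared_error

# Invented shots: [distance to goal (m), angle from goal (deg), defender >= 1 m away]
X = np.array([
    [6, 10, 1], [11, 25, 0], [18, 30, 0], [8, 15, 1],
    [25, 40, 0], [5, 5, 1], [16, 35, 0], [9, 20, 1],
])
y = np.array([1, 0, 0, 1, 0, 1, 0, 0])  # 1 = goal, 0 = no goal

# Logistic regression over shot features, as in the football studies above.
model = LogisticRegression().fit(X, y)

# Each shot's goal probability; a team's or player's xG is their sum.
p = model.predict_proba(X)[:, 1]
print("xG:", p.sum(), "actual goals:", y.sum())

# Macdonald-style check: compare predicted and actual outcomes with MSE.
print("MSE:", mean_squared_error(y, p))
```

In practice the model would be fit on one sample of shots and evaluated on another (Macdonald's odd/even game split is one such scheme); fitting and scoring the same shots here is purely to keep the sketch short.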
**Two-phase commit protocol** Two-phase commit protocol: In transaction processing, databases, and computer networking, the two-phase commit protocol (2PC, tupac) is a type of atomic commitment protocol (ACP). It is a distributed algorithm that coordinates all the processes that participate in a distributed atomic transaction on whether to commit or abort (roll back) the transaction. This protocol (a specialised type of consensus protocol) achieves its goal even in many cases of temporary system failure (involving process, network node, or communication failures, among others), and is thus widely used. However, it is not resilient to all possible failure configurations, and in rare cases manual intervention is needed to remedy an outcome. To accommodate recovery from failure (automatic in most cases) the protocol's participants log the protocol's states. Log records, which are typically slow to generate but survive failures, are used by the protocol's recovery procedures. Many protocol variants exist that differ primarily in logging strategies and recovery mechanisms. Though usually invoked infrequently, recovery procedures compose a substantial portion of the protocol, due to the many possible failure scenarios that must be considered and supported by the protocol.

In a "normal execution" of any single distributed transaction (i.e., when no failure occurs, which is typically the most frequent situation), the protocol consists of two phases:
1. The commit-request phase (or voting phase), in which a coordinator process attempts to prepare all the transaction's participating processes (named participants, cohorts, or workers) to take the necessary steps for either committing or aborting the transaction, and to vote either "Yes" (commit, if the participant's local portion of the transaction has executed properly) or "No" (abort, if a problem has been detected with the local portion).
2. The commit phase, in which, based on the voting of the participants, the coordinator decides whether to commit (only if all have voted "Yes") or abort the transaction (otherwise), and notifies the result to all the participants. The participants then follow with the needed actions (commit or abort) with their local transactional resources (also called recoverable resources; e.g., database data) and their respective portions in the transaction's other output (if applicable).

The two-phase commit (2PC) protocol should not be confused with the two-phase locking (2PL) protocol, a concurrency control protocol.

Assumptions: The protocol works in the following manner: one node is a designated coordinator, which is the master site, and the rest of the nodes in the network are designated the participants. The protocol assumes that there is stable storage at each node with a write-ahead log, that no node crashes forever, that the data in the write-ahead log is never lost or corrupted in a crash, and that any two nodes can communicate with each other. The last assumption is not too restrictive, as network communication can typically be rerouted. The first two assumptions are much stronger; if a node is totally destroyed then data can be lost. The protocol is initiated by the coordinator after the last step of the transaction has been reached. The participants then respond with an agreement message or an abort message depending on whether the transaction has been processed successfully at the participant.
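Before the detailed algorithm, a minimal single-process sketch may help fix the shape of the protocol. This is Python with invented class names, and it deliberately omits what makes real 2PC robust: forcing log records to stable storage, timeouts, and recovery.

```python
from enum import Enum

class Vote(Enum):
    YES = "yes"  # local portion executed properly; ready to commit
    NO = "no"    # a problem was detected; the transaction must abort

class Participant:
    def __init__(self, name):
        self.name = name
        self.log = []  # stand-in for a write-ahead log on stable storage

    def prepare(self):
        # Execute the local portion up to the commit point, write undo/redo
        # entries, then vote. This sketch always votes YES.
        self.log.append("prepared")
        return Vote.YES

    def commit(self):
        self.log.append("commit")

    def abort(self):
        self.log.append("abort")

class Coordinator:
    def __init__(self, participants):
        self.participants = participants
        self.log = []

    def run(self):
        # Phase 1: commit-request (voting) phase.
        votes = [p.prepare() for p in self.participants]
        # Phase 2: commit only if every participant voted YES, else abort.
        if all(v is Vote.YES for v in votes):
            self.log.append("commit")  # decision is logged before notifying
            for p in self.participants:
                p.commit()
            return "committed"
        self.log.append("abort")
        for p in self.participants:
            p.abort()
        return "aborted"

print(Coordinator([Participant("db1"), Participant("db2")]).run())  # committed
```

The detailed algorithm below spells out the same two phases, including the failure path and the logging discipline the sketch glosses over.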
Basic algorithm: Commit request (or voting) phase
1. The coordinator sends a query to commit message to all participants and waits until it has received a reply from all participants.
2. The participants execute the transaction up to the point where they will be asked to commit. They each write an entry to their undo log and an entry to their redo log.
3. Each participant replies with an agreement message (participant votes Yes to commit) if the participant's actions succeeded, or an abort message (participant votes No to commit) if the participant experiences a failure that will make it impossible to commit.

Commit (or completion) phase
Success. If the coordinator received an agreement message from all participants during the commit-request phase:
1. The coordinator sends a commit message to all the participants.
2. Each participant completes the operation, and releases all the locks and resources held during the transaction.
3. Each participant sends an acknowledgement to the coordinator.
4. The coordinator completes the transaction when all acknowledgements have been received.

Failure. If any participant votes No during the commit-request phase (or the coordinator's timeout expires):
1. The coordinator sends a rollback message to all the participants.
2. Each participant undoes the transaction using the undo log, and releases the resources and locks held during the transaction.
3. Each participant sends an acknowledgement to the coordinator.
4. The coordinator undoes the transaction when all acknowledgements have been received.

Message flow:

Coordinator                                        Participant
                  QUERY TO COMMIT
              -------------------------------->
                  VOTE YES/NO
              <--------------------------------   prepare*/abort*
commit*/abort*    COMMIT/ROLLBACK
              -------------------------------->
                  ACKNOWLEDGEMENT
              <--------------------------------   commit*/abort*
end

An * next to the record type means that the record is forced to stable storage.

Disadvantages: The greatest disadvantage of the two-phase commit protocol is that it is a blocking protocol. If the coordinator fails permanently, some participants will never resolve their transactions: after a participant has sent an agreement message to the coordinator, it will block until a commit or rollback is received.

Implementing the two-phase commit protocol: Common architecture In many cases the 2PC protocol is distributed in a computer network. It is easily distributed by implementing multiple dedicated 2PC components similar to each other, typically named transaction managers (TMs; also referred to as 2PC agents or Transaction Processing Monitors), that carry out the protocol's execution for each transaction (e.g., The Open Group's X/Open XA). The databases involved with a distributed transaction, the participants (both the coordinator and cohorts), register with nearby TMs (typically residing on the same network nodes as the participants) for terminating that transaction using 2PC. Each distributed transaction has an ad hoc set of TMs, the TMs to which the transaction participants register. A leader, the coordinator TM, exists for each transaction to coordinate 2PC for it, typically the TM of the coordinator database. However, the coordinator role can be transferred to another TM for performance or reliability reasons. Rather than exchanging 2PC messages among themselves, the participants exchange the messages with their respective TMs. The relevant TMs communicate among themselves to execute the 2PC protocol schema above, "representing" the respective participants, for terminating that transaction.
With this architecture the protocol is fully distributed (it needs no central processing component or data structure) and scales effectively with the number of network nodes (network size). This common architecture is also effective for the distribution of other atomic commitment protocols besides 2PC, since all such protocols use the same voting mechanism and outcome propagation to protocol participants.

Protocol optimizations: Database research has explored ways to obtain most of the benefits of the two-phase commit protocol while reducing costs, through protocol optimizations and savings in protocol operations, under certain assumptions about the system's behavior.

Presumed abort and presumed commit: Presumed abort and presumed commit are common such optimizations. An assumption about the outcome of transactions, either commit or abort, can save both messages and logging operations by the participants during the 2PC protocol's execution. For example, under presumed abort, if during system recovery from failure the recovery procedure finds no logged evidence that some transaction committed, it assumes that the transaction was aborted and acts accordingly. This means it does not matter whether aborts are logged at all; such logging can be saved under this assumption. Typically a penalty of additional operations is paid during recovery from failure, depending on the optimization type, so the best variant of optimization, if any, is chosen according to failure and transaction outcome statistics.

Tree two-phase commit protocol: The Tree 2PC protocol (also called Nested 2PC, or Recursive 2PC) is a common variant of 2PC in a computer network, which better utilizes the underlying communication infrastructure. The participants in a distributed transaction are typically invoked in an order which defines a tree structure, the invocation tree, where the participants are the nodes and the edges are the invocations (communication links). The same tree is commonly utilized to complete the transaction by a 2PC protocol, although in principle another communication tree can be utilized for this. In a tree 2PC the coordinator is considered the root ("top") of a communication tree (inverted tree), while the participants are the other nodes. The coordinator can be the node that originated the transaction (invoking recursively (transitively) the other participants), but another node in the same tree can take the coordinator role instead. 2PC messages from the coordinator are propagated "down" the tree, while messages to the coordinator are "collected" by a participant from all the participants below it before it sends the appropriate message "up" the tree (except an abort message, which is propagated "up" immediately upon receiving it or if the current participant initiates the abort).

The Dynamic two-phase commit (Dynamic two-phase commitment, D2PC) protocol is a variant of Tree 2PC with no predetermined coordinator. It subsumes several optimizations that have been proposed earlier. Agreement messages (Yes votes) start to propagate from all the leaves, each leaf sending its message when it completes its tasks on behalf of the transaction (becomes ready). An intermediate (non-leaf) node sends its agreement message to the last (single) neighboring node from which an agreement message has not yet been received, once it is ready itself and has received agreement messages from all its other neighbors.
The coordinator is determined dynamically by racing agreement messages over the transaction tree, at the place where they collide. They collide either at a transaction tree node, which becomes the coordinator, or on a tree edge; in the latter case one of the edge's two nodes is elected coordinator (either may be chosen). D2PC is time-optimal (among all instances of a specific transaction tree, with any specific Tree 2PC protocol implementation; all instances have the same tree, and each instance has a different node as coordinator): by choosing an optimal coordinator, D2PC commits both the coordinator and each participant in the minimum possible time, allowing the earliest possible release of locked resources in each transaction participant (tree node).
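Because the election depends on message timing, a fully faithful example would need asynchrony. Instead, here is a toy synchronous-round simulation in Python (hypothetical names throughout) that assumes every node becomes ready at the same instant, so the "race" is resolved by tree shape alone; real D2PC is asynchronous and the winner depends on actual timing.

```python
from collections import defaultdict

def d2pc_coordinator(edges):
    """Toy D2PC election on a transaction tree with at least two nodes:
    agreement messages race over the tree, and the node where they all meet
    (or one endpoint of the edge where two collide) becomes coordinator."""
    nbrs = defaultdict(set)
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    received = defaultdict(set)  # node -> neighbors whose agreement arrived
    sent = {}                    # node -> neighbor it sent its agreement to
    while True:
        outgoing = []
        for node in nbrs:
            if node in sent:
                continue  # each node sends its agreement message only once
            missing = nbrs[node] - received[node]
            if not missing:
                return node  # agreement from every neighbor: coordinator
            if len(missing) == 1:
                outgoing.append((node, next(iter(missing))))
        for src, dst in outgoing:
            sent[src] = dst
            received[dst].add(src)
            if sent.get(dst) == src:
                return dst  # two agreements collided on edge (src, dst)

# On the path a-b-c-d, the agreements collide on the middle edge (b, c),
# and one of its two endpoints is elected.
print(d2pc_coordinator([("a", "b"), ("b", "c"), ("c", "d")]))
```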
**Presbyphagia** Presbyphagia: Presbyphagia refers to characteristic changes in the swallowing mechanism of otherwise healthy older adults. Although age-related changes place older adults at risk for swallowing disorders, an older adult's swallow is not necessarily an impaired swallow. Clinicians are becoming more aware of the need to distinguish among swallowing disorders, presbyphagia (an old yet healthy swallow) and other related diagnoses in order to avoid overdiagnosing and overtreating presbyphagia. Older adults are more vulnerable, and with the added threat of acute illnesses, medications and any number of age-related conditions, they can cross the line from having a healthy older swallow to being dysphagic.

Work focused primarily on the anatomy and physiology of the oropharyngeal swallowing mechanism indicates a progression of change that may put the older population at increased risk for swallowing disorders. Such changes, combined with naturally diminished functional reserve (the resilient ability to adapt to physiological stress), make the older population more susceptible to dysphagia.

Age-Associated Changes in Oropharyngeal Swallowing: A major characteristic of older healthy swallowing is that it occurs more slowly. The longer duration is found to occur largely before the more automatic pharyngeal phase of the swallow is initiated. In those over age 65, the initiation of laryngeal and pharyngeal events, including laryngeal vestibule closure, is delayed significantly longer than in adults younger than 45 years of age. Although the specific neural underpinning is not confirmed, it might be hypothesized that oral events become "uncoupled" from the pharyngeal response, which includes airway protection. Thus, in older healthy adults it is not uncommon for the bolus to be adjacent to an open airway, by pooling or pocketing in the pharyngeal recesses, for more time than in younger adults.

Whereas older adults demonstrate a delay in the onset of specific pharyngeal events, such as opening of the upper esophageal sphincter (UES) to permit bolus passage from the pharynx into the esophagus, an equally critical finding is that the range of UES opening is diminished. A scintigraphic study revealed increased pharyngeal residue with age, possibly related to the limited UES opening. Again, these findings indicate exposure of an open airway to material retained in the pharynx, increasing the risk for aspiration in older individuals.

Aspiration (defined as entry of material into the airway [trachea], thus passing below the vocal folds) and airway penetration (defined as entry of material into the laryngeal vestibule but not below the level of the vocal folds) are believed to be the most significant adverse clinical outcomes of misdirected bolus flow. In older adults, penetration of the bolus into the airway occurs more often and to a deeper and more severe level than in younger adults. When the swallowing mechanism is functionally altered or perturbed in older people, such as with the placement of a nasogastric tube, airway penetration can be even more pronounced. A study examining this issue found that liquid penetrated the airway significantly more frequently when a nasogastric tube was in place in men and women older than 70 years.
That study and additional evidence indicate that under stressful conditions or system perturbations, older individuals are less able to compensate, owing to the age-related reduction in reserve capacity, and are more at risk of experiencing airway penetration or aspiration.

Age-related Change in Lingual Pressure Generation: The tongue is the primary propulsive agent for pumping food through the mouth into the pharynx, bypassing the airway, and through to the esophagus. Recent findings clearly reveal that an age-related change in lingual pressures is another contributing factor to presbyphagia. Healthy older individuals demonstrate significantly reduced isometric (i.e., static) tongue pressures compared with younger counterparts. In contrast, maximal tongue pressures generated during swallowing (i.e., dynamic) remain normal in magnitude because, fortunately, swallowing is a submaximal pressure-demanding activity: peak tongue pressures used in swallowing are lower than those generated isometrically. Although older individuals manage to achieve the pressures necessary to effect a successful swallow, despite a reduction in overall maximum tongue strength, they achieve these pressures more slowly than young swallowers. It has been suggested that the slowness that characterizes senescent swallowing may reflect the increased time needed to recruit sufficient motor units to generate the pressures required for an effective, safe swallow.
**Mountain bike orienteering** Mountain bike orienteering: Mountain bike orienteering (MTB-O or MTBO) is an orienteering endurance racing sport on a mountain bike where navigation is done along trails and tracks. Compared with foot orienteering, competitors usually are not permitted to leave the trail and track network. Navigation tactics are similar to ski orienteering, where the major focus is route choice while navigating. The main difference compared to ski orienteering is that navigation is done at a higher pace, because the bike can reach higher speeds, and as the rider reaches higher speeds, map reading becomes more challenging.

Equipment: The preferred bike type is a robust mountain bike meant for cross-country cycling, but any type of bike can be used. Depending on the terrain, either hardtail or full-suspension mountain bikes are more appropriate. Clipless pedals with a special cycling shoe are mostly used by serious cyclists to enable maximum power output and to keep feet secure on the pedals. Bicycle helmets are usually a requirement in competitions.

Special equipment: A map holder attached to the handlebar of the bike is an essential piece of equipment in mountain bike orienteering, and most holders allow the map to be rotated. Known brands of map holders are Orifix, Mapdec, Miry, Devotech, Nordenmark, Autopilot and Windchill. Compasses may be used, but electronic navigational aids (such as GPS-based watches) are not permitted. Competitors may carry repair tools and spare parts during races.

Map: Maps are usually smaller scale (1:5 000 – 1:30 000) and less detailed than standard orienteering maps. Trails and tracks are marked on mountain bike orienteering maps based on their riding difficulty, with four classifications: easy, slow, difficult and impossible to ride. Obstacles that require a dismount are also usually marked on the map.

Organization and events: MTB-O is one of four orienteering sports governed by the International Orienteering Federation. The first World Championship event was held in 2002 in Fontainebleau, France. Since 2004 the World Championships have been held annually, and European Championships have been held annually since 2006. Mountain bike orienteering is most popular in European countries and Australia. M17 and W17 (Youth) are for competitors who reach the age of 17, or younger, in the year in which the event is held. M20 and W20 (Junior) are for competitors who reach the age of 20, or younger, in the year in which the event is held. M21 and W21 (Elite) are for competitors who reach the age of 21, or older, in the year in which the event is held; any competitor, regardless of age, may however compete in the elite classes. There are annual World Championships in the elite and junior classes, as well as world championships for masters, open to competitors aged 35 and up. There are annual European Championships in the elite, junior and youth classes.

Mountain bike orienteers: The most successful mountain bike orienteer is Anton Foliforov from Russia, who has taken 31 World Championship and 11 European Championship medals. Other successful mountain bike orienteers are Michaela Gigon, Ruslan Gritsan, Adrian Jackson, Christine Schaffner and Päivi Tommola. For a full list of all medals taken by mountain bike orienteers at World and European Championships, visit MTBO Info.

Time-Keeping: In order to keep track of the competitors' riding times, Sportident is typically used.
Each rider carries a "card" (chip) on their finger and "punches" it at each control point. The card registers when the punch was made, which can be used for keeping track of riding times and split times for each control point a rider has punched (a small illustration appears at the end of this article). In recent years, time-keeping has become more modern, and mountain bike orienteering events typically use touch-free punching, meaning that competitors can maintain their speed while punching the control points: competitors can ride past a control point at up to 180 cm range and still register the punch. Another time-keeping system is Emit, which works in a similar fashion to Sportident.

Disciplines: In mountain bike orienteering there are 5 main disciplines which can be competed in at the world championships. Generally, all disciplines have around 25 control points along the way.

Sprint: The sprint is the shortest discipline, with estimated winning times of 20–25 minutes for M21 and W21 (elite classes), and 16–20 minutes for M20 and W20 (junior classes). Sprints often take place in cities, towns or industrial districts. Competitors race individually, typically starting with 1–2 minute gaps between the competitors. The fastest time to punch all the controls in the right order and cross the finish line wins.

Middle Distance: The middle distance is somewhere between the sprint and the long distance. Winning times are 50–55 minutes for M21 and W21, and 40–45 minutes for M20 and W20. Middle distances often take place in forests. Competitors race individually, typically starting with 2 minute gaps between the competitors. The fastest time to punch all the controls in the right order and cross the finish line wins.

Long Distance: The long distance is the longest discipline. Winning times are 105–115 minutes for M21 and W21, and 84–92 minutes for M20 and W20. Long distances often take place in forests. Competitors race individually, typically starting with 3 minute gaps between the competitors. The fastest time to punch all the controls in the right order and cross the finish line wins.

Mass Start: The mass start is known as the most chaotic discipline. Winning times are 75–85 minutes for M21 and W21, and 60–68 minutes for M20 and W20. Mass starts often take place in forests. Competitors all start at the same time, hence the name. To keep competitors from simply following each other and to ensure they have to orienteer for themselves, mass starts use "forkings": not all competitors have to ride to the control points in the same order. For example, there could be two loops called A and B, where half the competitors ride A first and then B, and the other half ride B first and then A. All competitors end up riding the exact same course in the end, but have to split up during the race. The first competitor to punch all controls in the right order and cross the finish line wins.

Relay: The relay is a team discipline. There are 3 competitors on each team, taking turns to ride their course. Winning times are 120–135 minutes (total) for M21 and W21, and 90–105 minutes for M20 and W20. Relays often take place in forests. The first competitors on all teams start at the same time, similar to the mass start. When they cross the finish line and touch their team's next competitor, the next competitor continues the race. Similarly to the mass start, there are forkings on the relay, so competitors can't just follow other riders.
All teams end up riding the exact same courses in the end, but in different orders. The first team to have all members finish their courses and cross the finish line wins.

Rules: There is a set of rules which must be followed when competing at events; breaking them can lead to disqualification. The most prominent rules are:
- Competitors may not leave their bike; it has to be ridden, carried or pushed at all times.
- Competitors may only ride on paths and roads that are on the map, unless otherwise described. In some countries or areas riders are allowed to ride off-track, which will be specified in the event bulletin.
- Competitors may not use GPS devices while competing in a race.
For a full list of the rules, see the MTBO Competition Rules.

Most recent World Championships: The most recent World Championships were held in Kuortane, Finland, from 9 to 18 June 2021. The winners and World Champions in each discipline were as follows:

Mass Start
M21: Samuel Pökälä
W21: Svetlana Foliforova
M20: Morten Örnhagen Jørgensen
W20: Kaarina Nurminen

Sprint Distance
M21: Krystof Bogar
W21: Marika Hara
M20: Mikkel Brunstedt Nørgaard
W20: Kaarina Nurminen

Middle Distance
M21: Samuel Pökälä
W21: Svetlana Foliforova
M20: Morten Örnhagen Jørgensen
W20: Kaarina Nurminen

Long Distance
M21: Andre Haga
W21: Camilla Søgaard
M20: Morten Örnhagen Jørgensen
W20: Lucie Nedomlelova

Relay
M21: Andre Haga, Pekka Niemi, Samuel Pökälä
W21: Cæcilie Christoffersen, Nikoline Splittorff, Camilla Søgaard
M20: Noah Tristan Hoffmann, Mikkel Brunstedt Nørgaard, Morten Örnhagen Jørgensen
W20: Ekaterina Landgraf, Daria Toporova, Alena Aksenova
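As promised in the Time-Keeping section, here is a minimal sketch of how punch records translate into split and elapsed times. It is Python with invented data; the record layout is purely illustrative and is not taken from the Sportident or Emit systems.

```python
from datetime import datetime

# Invented punch records: (control code, punch time) in course order,
# roughly what a timing card stores for a single rider.
start = datetime(2021, 6, 12, 10, 0, 0)
punches = [
    (31, datetime(2021, 6, 12, 10, 2, 45)),
    (32, datetime(2021, 6, 12, 10, 9, 10)),
    (33, datetime(2021, 6, 12, 10, 15, 30)),
]

# Split time per control = time since the previous punch (or the start);
# elapsed time = time since the start.
prev = start
for code, t in punches:
    print(f"control {code}: split {t - prev}, elapsed {t - start}")
    prev = t
```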