**Amchoor** Amchoor: Amchoor, also spelled aamchur or amchur and also referred to as mango powder, is a fruity spice powder made from dried unripe green mangoes and is used as a citrusy seasoning. It is mostly produced in India, and is used to flavor foods and add the nutritional benefits of mangoes when the fresh fruit is out of season. Preparation: To make amchoor, early-season mangoes are harvested while still green and unripe. Once harvested, the green mangoes are peeled, thinly sliced, and sun-dried. The dried slices, which are light brown and resemble strips of woody bark, can be purchased whole and ground at home, but the majority of the slices processed in this way are ground into fine powder and sold as ready-made amchoor. Use: Amchoor is a tart, pale-beige-to-brownish powder with a honey-like fragrance and a sour, fruity flavor. A predominant flavoring agent in Indian dishes, it is used wherever acidity is required, adding a sour, tangy fruit flavor without adding moisture. It flavors samosa and pakora fillings, stir-fried vegetable dishes, soups and stews, fruit salads and pastries, curries, chutneys, pickles, and dals, and it is added to marinades for meat, poultry, and fish as an enzymatic tenderizer. Amchoor is also a primary component of chaat masala, an Indian spice mix.
**Lymphedema** Lymphedema: Lymphedema, also known as lymphoedema and lymphatic edema, is a condition of localized swelling caused by a compromised lymphatic system. The lymphatic system functions as a critical portion of the body's immune system and returns interstitial fluid to the bloodstream. Lymphedema is most frequently a complication of cancer treatment or parasitic infections, but it can also be seen in a number of genetic disorders. Though incurable and progressive, a number of treatments may improve symptoms. Tissues with lymphedema are at high risk of infection because the lymphatic system has been compromised. While there is no cure, treatment may improve outcomes. Treatment commonly includes compression therapy, good skin care, exercise, and manual lymphatic drainage (MLD), which together are known as combined decongestive therapy. Diuretics are not useful. Signs and symptoms: The most common manifestation of lymphedema is soft tissue swelling (edema). As the disorder progresses, worsening edema and skin changes including discoloration, verrucous (wart-like) hyperplasia, hyperkeratosis, papillomatosis, dermal thickening, and ulcers may be seen. Additionally, there is increased risk of infection of the skin, known as erysipelas. Signs and symptoms: Complications: When lymphatic impairment becomes so great that the lymph fluid exceeds the lymphatic system's ability to transport it, an abnormal amount of protein-rich fluid collects in the tissues. Left untreated, this stagnant, protein-rich fluid causes tissue channels to increase in size and number, reducing oxygen availability. This interferes with wound healing and provides a rich culture medium for bacterial growth that can result in skin infections, lymphangitis, lymphadenitis, and, in severe cases, skin ulcers. It is vital for lymphedema patients to be aware of the symptoms of infection and to seek immediate treatment, since recurrent infections or cellulitis, in addition to their inherent danger, further damage the lymphatic system and set up a vicious circle. In rare cases, lymphedema may lead to a form of cancer called lymphangiosarcoma, although the mechanism of carcinogenesis is not understood. Lymphedema-associated lymphangiosarcoma is called Stewart–Treves syndrome. Lymphangiosarcoma most frequently occurs in cases of long-standing lymphedema. The incidence of angiosarcoma is estimated to be 0.45% in patients living five years after radical mastectomy. Lymphedema is also associated with a low-grade form of cancer called retiform hemangioendothelioma (a low-grade angiosarcoma). Lymphedema can be disfiguring, and may result in a poor body image, which can cause psychological distress. Complications of lymphedema can cause difficulties in activities of daily living. Causes & Risk Factors: Lymphedema may be inherited (primary) or caused by injury to the lymphatic vessels (secondary). Risk factors that may increase the chance of developing lymphedema include old age, being overweight or obese, and having rheumatic or psoriatic arthritis. Causes & Risk Factors: Lymph node damage: Lymphedema is most frequently seen after lymph node dissection, surgery, and/or radiation therapy, in which damage to the lymphatic system is caused during the treatment of cancer, most notably breast cancer. In many patients with cancer, this condition does not develop until months or even years after therapy has concluded.
Lymphedema may also be associated with accidents or certain diseases or problems that may inhibit the lymphatic system from functioning properly. In tropical endemic areas of the world, a common cause of secondary lymphedema is filariasis, a parasitic infection. It can also be caused by damage to the lymphatic system from infections such as cellulitis. Primary lymphedema may be congenital or arise sporadically. Multiple syndromes are associated with primary lymphedema, including Turner syndrome, Milroy's disease, and Klippel–Trénaunay syndrome. It is generally thought to occur as a result of absent or malformed lymph nodes and/or lymphatic channels. Lymphedema may be present at birth, develop at the onset of puberty (praecox), or not become apparent for many years into adulthood (tarda). In men, lower-limb primary lymphedema is most common, occurring in one or both legs. Some cases of lymphedema may be associated with other vascular abnormalities. Secondary lymphedema affects both men and women. In women, it is most prevalent in the upper limbs after breast cancer surgery, in particular after axillary lymph node dissection, occurring in the arm on the side of the body on which the surgery was performed. Breast and trunk lymphedema can also occur but go unrecognised, as there is swelling in the area after surgery and its symptoms (peau d'orange and/or an inverted nipple) can be confused with post-surgery fat necrosis. In Western countries, secondary lymphedema is most commonly due to cancer treatment. Between 38 and 89% of breast cancer patients have lymphedema due to axillary lymph node dissection and/or radiation. Unilateral lymphedema occurs in up to 41% of patients after gynecologic cancer. For men, a 5-66% incidence of lymphedema has been reported, with incidence depending on whether staging or radical removal of lymph glands was done in addition to radiotherapy. Head and neck lymphedema can be caused by surgery or radiation therapy for tongue or throat cancer. It may also occur in the lower limbs or groin after surgery for colon, ovarian or uterine cancer, in which removal of lymph nodes or radiation therapy is required. Surgery or treatment for prostate, colon and testicular cancers may result in secondary lymphedema, particularly when lymph nodes have been removed or damaged. The onset of secondary lymphedema in patients who have had cancer surgery has also been linked to aircraft flight (likely due to decreased cabin pressure or relative immobility). For cancer survivors, therefore, wearing a prescribed and properly fitted compression garment may help decrease swelling during air travel. Some cases of lower-limb lymphedema have been associated with the use of tamoxifen, due to the blood clots and deep vein thrombosis (DVT) that can be associated with this medication. Resolution of the blood clots or DVT is needed before lymphedema treatment can be initiated. Infectious causes include lymphatic filariasis. Causes & Risk Factors: At birth: Hereditary lymphedema is a primary lymphedema – swelling that results from abnormalities in the lymphatic system that are present from birth. Swelling may be present in a single affected limb, several limbs, genitalia, or the face. It is sometimes diagnosed prenatally by a nuchal scan or postnatally by lymphoscintigraphy. The most common form is Meige disease, which usually presents at puberty. Another form of hereditary lymphedema is Milroy's disease, caused by mutations in the VEGFR3 gene.
Hereditary lymphedema is frequently syndromic and is associated with Turner syndrome, lymphedema–distichiasis syndrome, yellow nail syndrome, and Klippel–Trénaunay syndrome. One defined genetic cause for hereditary lymphedema is GATA2 deficiency. This deficiency is a grouping of several disorders caused by a common defect, viz., familial or sporadic inactivating mutations in one of the two parental GATA2 genes. These autosomal dominant mutations cause a reduction, i.e. a haploinsufficiency, in the cellular levels of the gene's product, GATA2. The GATA2 protein is a transcription factor critical for the embryonic development, maintenance, and functionality of blood-forming, lymphatic-forming, and other tissue-forming stem cells. In consequence of these mutations, cellular levels of GATA2 are deficient and individuals develop over time hematological, immunological, lymphatic, and/or other disorders. GATA2 deficiency-induced defects in the lymphatic vessels and valves underlie the development of lymphedema, which is primarily located in the lower extremities but may also occur in other places such as the face or testes (i.e. hydrocele). This form of the deficiency, when coupled with sensorineural hearing loss, which may also be due to faulty development of the lymphatic system, is sometimes termed Emberger syndrome. Primary lymphedema has a quoted incidence of approximately 1-3 per 10,000 births, with a female-to-male ratio of 3.5:1. In North America, the incidence of primary lymphedema is approximately 1.15 per 100,000 births. Compared to secondary lymphedema, primary lymphedema is relatively rare. Causes & Risk Factors: Inflammatory lymphedema: Bilateral lower extremity inflammatory lymphedema (BLEIL) is a distinct type of lymphedema occurring in a setting of acute and prolonged standing, such as in new recruits during basic training. The possible underlying mechanisms are thought to be venous congestion and inflammatory vasculitis. Physiology: Lymph is formed from the fluid that filters out of the blood circulation and contains proteins, cellular debris, bacteria, etc. This fluid is collected by the initial lymph collectors, blind-ended endothelial-lined vessels with fenestrated openings that allow fluids and particles as large as cells to enter. Once inside the lumen of the lymphatic vessels, the fluid is guided along increasingly larger vessels, first with rudimentary valves to prevent backflow, which later develop into complete valves similar to venous valves. Once the lymph enters the fully valved lymphatic vessels, it is pumped by a rhythmic peristaltic-like action of smooth muscle cells within the lymphatic vessel walls. This peristaltic action is the primary driving force moving lymph within its vessel walls; the frequency and power of contraction are regulated by the sympathetic nervous system. Lymph movement can be influenced by the pressure of nearby muscle contraction, arterial pulse pressure, and the vacuum created in the chest cavity during respiration, but these passive forces contribute only a minor percentage of lymph transport. The fluids collected are pumped into continually larger vessels and through lymph nodes, which remove debris and police the fluid for dangerous microbes. The lymph ends its journey in the thoracic duct or right lymphatic duct, which drain into the blood circulation.
Diagnosis: Diagnosis is generally based on signs and symptoms, with testing used to rule out other potential causes. An accurate diagnosis and staging may help with management. A swollen limb can result from different conditions that require different treatments. Diagnosis of lymphedema is currently based on history, physical exam, and limb measurements. Imaging studies such as lymphoscintigraphy and indocyanine green lymphography are only required when surgery is being considered. However, the ideal method for lymphedema staging to guide the most appropriate treatment is controversial because of several different proposed protocols. Lymphedema can occur in both the upper and lower extremities, and in some cases the head and neck. Assessment of the extremities begins with a visual inspection. Color, presence of hair, visible veins, size, and any sores or ulcerations are noted. Lack of hair may indicate an arterial circulation problem. Given swelling, the extremities' circumference is measured for reference as time continues. In early stages of lymphedema, elevating the limb may reduce or eliminate the swelling. Palpation of the wrist or ankle can determine the degree of swelling; assessment includes a check of the pulses. The axillary or inguinal nodes may be enlarged due to the swelling. Enlargement of the nodes lasting more than three weeks may indicate infection or other illnesses, such as sequelae from breast cancer surgery, requiring further medical attention. Diagnosis or early detection of lymphedema is difficult. The first signs may be subjective observations such as a feeling of heaviness in the affected extremity. These may be symptomatic of early-stage lymphedema, where accumulation of lymph is mild and not detectable by changes in volume or circumference. As lymphedema progresses, definitive diagnosis is commonly based upon an objective measurement of differences between the affected or at-risk limb and the opposite unaffected limb, e.g. in volume or circumference. No generally accepted criterion is definitively diagnostic, although a volume difference of 200 ml between limbs or a 4-cm difference (at a single measurement site or at set intervals along the limb) is often used. Bioimpedance measurement (which measures the amount of fluid in a limb) offers greater sensitivity than existing methods.
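Since these measurement-based criteria are simple arithmetic, they can be made concrete in code. The following is a minimal illustrative sketch in Python, not a clinical tool: the function names are ours, the 4-cm spacing between tape measurements is an assumed convention, and the truncated-cone (frustum) formula is one commonly used way of converting a series of circumferences to a volume, not something mandated by the text.

```python
import math

def limb_volume_ml(circumferences_cm, segment_length_cm=4.0):
    """Estimate limb volume (ml) from circumferences measured at fixed
    intervals along the limb, summing truncated-cone (frustum) segments."""
    volume = 0.0
    for c1, c2 in zip(circumferences_cm, circumferences_cm[1:]):
        # Frustum volume from its two end circumferences:
        # V = h * (C1^2 + C1*C2 + C2^2) / (12 * pi); 1 cm^3 == 1 ml.
        volume += segment_length_cm * (c1 ** 2 + c1 * c2 + c2 ** 2) / (12 * math.pi)
    return volume

def meets_common_criteria(affected_cm, unaffected_cm):
    """Apply the often-cited thresholds from the text: a 200 ml volume
    difference between limbs, or a 4 cm circumference difference at any
    single measurement site."""
    volume_diff = limb_volume_ml(affected_cm) - limb_volume_ml(unaffected_cm)
    max_circ_diff = max(a - u for a, u in zip(affected_cm, unaffected_cm))
    return volume_diff >= 200 or max_circ_diff >= 4
```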
Chronic venous stasis changes can mimic early lymphedema, but the changes in venous stasis are more often bilateral and symmetric. Lipedema can also mimic lymphedema; however, lipedema characteristically spares the feet, beginning abruptly at the medial malleoli (ankle level). As a part of the initial work-up before diagnosing lymphedema, it may be necessary to exclude other potential causes of lower extremity swelling such as kidney failure, hypoalbuminemia, congestive heart failure, protein-losing nephropathy, pulmonary hypertension, obesity, pregnancy, and drug-induced edema. Diagnosis: Classification: According to the Fifth WHO Expert Committee on Filariasis, the most common method of classification of lymphedema is as follows (the same classification method can be used for both primary and secondary lymphedema). The International Society of Lymphology (ISL) Staging System is based solely on subjective symptoms, making it prone to substantial observer bias. Imaging modalities have been suggested as useful adjuncts to the ISL staging to clarify the diagnosis. The lymphedema expert Dr. Ming-Huei Cheng developed Cheng's Lymphedema Grading tool to assess the severity of extremity lymphedema based on objective limb measurements, providing appropriate options for management. Diagnosis: I. Grading:
Grade 1: Spontaneously reversible on elevation. Mostly pitting edema.
Grade 2: Not spontaneously reversible on elevation. Mostly non-pitting edema.
Grade 3: Gross increase in volume and circumference of Grade 2 lymphedema, with eight stages of severity given below based on clinical assessments.
Diagnosis: II. Staging: As described by the Fifth WHO Expert Committee on Filariasis, and endorsed by the American Society of Lymphology, the staging system helps to identify the severity of lymphedema. With the assistance of medical imaging apparatus, such as MRI or CT, staging can be established by the physician, and therapeutic or medical interventions may be applied:
Stage 0: The lymphatic vessels have sustained some damage that is not yet apparent. Transport capacity is sufficient for the amount of lymph being removed. Lymphedema is not present.
Stage 1: Swelling increases during the day and disappears overnight as the patient lies flat in bed. Tissue is still at the pitting stage: when pressed by the fingertips, the affected area indents and reverses with elevation. Usually, upon waking in the morning, the limb or affected area is normal or almost normal in size. Treatment is not necessarily required at this point.
Stage 2: Swelling is not reversible overnight and does not disappear without proper management. The tissue now has a spongy consistency and is considered non-pitting: when pressed by the fingertips, the affected area bounces back without indentation. Fibrosis found in Stage 2 lymphedema marks the beginning of the hardening of the limbs and increasing size.
Stage 3: Swelling is irreversible and usually the limb(s) or affected area becomes increasingly large. The tissue is hard (fibrotic) and unresponsive; some patients consider undergoing reconstructive surgery, called "debulking". This remains controversial, however, since the risks may outweigh the benefits and the further damage done to the lymphatic system may in fact make the lymphedema worse.
Stage 4: The size and circumference of the affected limb(s) become noticeably large. Bumps, lumps, or protrusions (also called knobs) on the skin begin to appear.
Stage 5: The affected limb(s) become grossly large; one or more deep skin folds is prevalent among patients in this stage.
Stage 6: Knobs of small elongated or small rounded sizes cluster together, giving mossy-like shapes on the limb. Mobility of the patient becomes increasingly difficult.
Stage 7: The person becomes "handicapped", and is unable to independently perform daily routine activities such as walking, bathing, and cooking. Assistance from the family and health care system is needed.
Grades: Lymphedema can also be categorized by its severity (usually referenced against a healthy extremity):
Grade 1 (mild edema): Involves the distal parts such as a forearm and hand or a lower leg and foot. The difference in circumference is less than 4 cm and other tissue changes are not yet present.
Grade 2 (moderate edema): Involves an entire limb or corresponding quadrant of the trunk. Difference in circumference is 4–6 cm. Tissue changes, such as pitting, are apparent. The patient may experience erysipelas.
Grade 3a (severe edema): Lymphedema is present in one limb and its associated trunk quadrant. Circumferential difference is greater than 6 centimeters. Significant skin alterations, such as cornification or keratosis, cysts and/or fistulae, are present. Additionally, the patient may experience repeated attacks of erysipelas.
Grade 3b (massive edema): The same symptoms as grade 3a, except that two or more extremities are affected.
Grade 4 (gigantic edema): In this stage of lymphedema, the affected extremities are huge, due to almost complete blockage of the lymph channels.
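The severity grades above key off the circumferential difference against the healthy extremity, which lends itself to a simple lookup. The sketch below is our own illustrative helper under those definitions; it distinguishes grade 3a from 3b only if the caller supplies the number of affected extremities.

```python
def severity_grade(circ_diff_cm, extremities_affected=1):
    """Map circumferential difference vs. the healthy extremity to the
    severity grades described above. Grade 4 (gigantic edema) is judged
    clinically (near-complete blockage of lymph channels) rather than by
    tape measure, so it is not assigned here."""
    if circ_diff_cm < 4:
        return "Grade 1 (mild edema)"
    if circ_diff_cm <= 6:
        return "Grade 2 (moderate edema)"
    if extremities_affected >= 2:
        return "Grade 3b (massive edema)"
    return "Grade 3a (severe edema)"
```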
Differential: Lymphedema should not be confused with edema arising from venous insufficiency, which is caused by compromise of the venous drainage rather than lymphatic drainage. However, untreated venous insufficiency can progress into a combined venous/lymphatic disorder. Treatment: While there is no cure, treatment may improve outcomes. Treatment commonly includes compression therapy, good skin care, exercise, and manual lymphatic drainage (MLD), which together are known as combined decongestive therapy. MLD is most effective in mild to moderate disease. In breast cancer-related lymphedema, MLD is safe and may offer added benefit to compression bandages for reducing swelling. Most people with lymphedema can be medically managed with conservative treatment. Diuretics are not useful. Surgery is generally only used in those who are not improved with other measures. Treatment: Compression Garments: Once a person is diagnosed with lymphedema, compression becomes imperative in the management of the condition. Garments are often intended to be worn all day but may be taken off for sleeping unless otherwise prescribed. Elastic compression garments are worn on the affected limb following complete decongestive therapy to maintain edema reduction. Inelastic garments provide containment and reduction. Available styles, options, and prices vary widely. A professional garment fitter or certified lymphedema therapist can help determine the best option for the patient. Treatment: Bandaging: Compression bandaging, also called wrapping, is the application of layers of padding and short-stretch bandages to the involved areas. Short-stretch bandages are preferred over long-stretch bandages (such as those normally used to treat sprains), as the long-stretch bandages cannot produce the proper therapeutic tension necessary to safely reduce lymphedema and may in fact end up producing a tourniquet effect. Compression bandages provide resistance that assists in pumping fluid out of the affected area during exercise. This counter-force results in increased lymphatic drainage and therefore a decrease in size of the swollen area. Treatment: Intermittent pneumatic compression therapy: Intermittent pneumatic compression therapy (IPC) utilizes a multi-chambered pneumatic sleeve with overlapping cells to promote movement of lymph fluid. Pump therapy should be used in addition to other treatments such as compression bandaging and manual lymph drainage. Pump therapy has been widely used in the past to help with controlling lymphedema. In some cases, pump therapy helps soften fibrotic tissue and therefore potentially enables more efficient lymphatic drainage. However, reports link pump therapy to increased incidence of edema proximal to the affected limb, such as genital edema arising after pump therapy in the lower limb.
Current literature suggests that IPC treatment in conjunction with Kinesiotape (KT) is more effective in the overall reduction of lymphedema, as well as in increasing shoulder range of motion (ROM), than the traditional pairing of IPC with complete decongestive therapy. Kinesiotape is an elastic cotton strip with an acrylic adhesive that is commonly used to relieve the discomfort and disability associated with sports injuries; in the context of lymphedema, it increases the space between the dermis and the muscle, which increases the opportunity for lymphatic fluid to flow out naturally. The use of IPC treatments with KT tape, together with subsequent lymphatic drainage, has been shown to significantly reduce the circumference of affected limbs in patients experiencing lymphedema secondary to breast cancer after mastectomy. Treatment: Exercise: In those with lymphedema, or at risk of developing lymphedema following breast cancer treatment, resistance training did not increase swelling, and even decreased it in some, in addition to other potential beneficial effects on cardiovascular health. Moreover, resistance training and other forms of exercise were not associated with an increased risk of developing lymphedema in people who previously received breast cancer-related treatment. Compression garments should be worn during exercise (with the possible exception of swimming). Physical therapy treatment of patients with lymphedema may include trigger point release, soft tissue massage, postural improvement, patient education on condition management, strengthening, and stretching exercises. Exercises may increase in intensity and difficulty over time, beginning with passive movements to increase range of motion and progressing towards using external weights and resistance in various postures. Treatment: Surgery: The treatment of lymphedema is usually conservative; however, surgery is proposed for some cases. Suction-assisted lipectomy (SAL), also known as liposuction for lymphedema, may help improve chronic non-pitting edema if present. The procedure removes fat and protein and is done along with continued compression therapy. Vascularized lymph node transfers (VLNT) and lymphovenous bypass are supported by tentative evidence as of 2017 but are associated with a number of complications. Treatment: Laser therapy: Low-level laser therapy (LLLT) was cleared by the US Food and Drug Administration (FDA) for the treatment of lymphedema in November 2006. According to the US National Cancer Institute, LLLT may be effective in reducing lymphedema in some women. Two cycles of laser treatment were found to reduce the volume of the affected arm in approximately one-third of people with postmastectomy lymphedema at three months post-treatment. Epidemiology: Lymphedema affects approximately 200 million people worldwide.
**Ankle fracture** Ankle fracture: An ankle fracture is a break of one or more of the bones that make up the ankle joint. Symptoms may include pain, swelling, bruising, and an inability to walk on the injured leg. Complications may include an associated high ankle sprain, compartment syndrome, stiffness, malunion, and post-traumatic arthritis. Ankle fractures may result from excessive stress on the joint, such as from rolling an ankle or from blunt trauma. Types of ankle fractures include lateral malleolus, medial malleolus, posterior malleolus, bimalleolar, and trimalleolar fractures. The Ottawa ankle rule can help determine the need for X-rays. Special X-ray views called stress views help determine whether an ankle fracture is unstable. Ankle fracture: Treatment depends on the fracture type. Ankle stability largely dictates non-operative vs. operative treatment. Non-operative treatment includes splinting or casting, while operative treatment includes fixing the fracture with metal implants through an open reduction internal fixation (ORIF). Significant recovery generally occurs within four months, while complete recovery usually takes up to one year. Ankle fractures are common, occurring in over 1.8 per 1000 adults and 1 per 1000 children per year. They occur most commonly in young males and older females. Functional anatomy: The ankle region refers to where the leg meets the foot (talocrural region). The ankle joint is a highly constrained, complex hinge joint composed of three bones: the tibia, the fibula, and the talus. The weight-bearing aspect of the tibia closest to the foot (known as the plafond) connects with the talus. This articulation (where two bones meet) is primarily responsible for plantarflexion (moving the foot down) and dorsiflexion (moving the foot up). Together the tibia and fibula form a bracket-shaped socket known as the mortise, into which the dome-shaped talus fits. The talus and the fibula are connected by a strong group of ligaments, which provide support for the lateral aspect of the ankle. These ligaments include the anterior talofibular ligament (ATFL) and the posterior talofibular ligament (PTFL). The calcaneofibular ligament (CFL), which connects the fibula to the calcaneus, or heel bone, also provides lateral support. The deltoid ligament provides support to the medial part of the ankle (closest to the midline). It prevents the foot from excessively everting, or turning outwards, while also preventing the talus from externally rotating. The distal parts of the tibia and fibula are connected by a connective tissue network referred to as the syndesmosis, which consists of four ligaments and the interosseous membrane. Signs and symptoms: Symptoms of an ankle fracture can be similar to those of ankle sprains (pain, swelling, limited range of motion), though typically they are more severe by comparison. It is exceedingly rare for the ankle joint to dislocate in the presence of ligamentous injury alone. However, in the setting of an ankle fracture, the talus can become unstable and subluxate or dislocate. Patients may notice ecchymosis ("black and blue" coloration from bleeding under the skin), or there may be an abnormal position, alignment, gross instability, or lack of normal motion secondary to pain. In a displaced fracture, the skin is sometimes tented over a sharp edge of broken bone. The sharp fragments of broken bone sometimes tear the skin and form a laceration that communicates with the broken bone or joint space. This is known as an open fracture and has a high incidence of infection if not promptly treated. Diagnosis: Physical Examination: Patients with ankle fractures may have variable findings on physical examination. Generally, the injured side should be compared to the non-injured side. The skin should be carefully examined, paying particular attention to any openings or breaks in the skin that could be due to an open fracture. It is important to evaluate the exact location of the pain, the range of motion of the ankle, and the condition of the nerves and blood vessels. It is also important to palpate the calf proximally (near the knee) because there may be an associated high fibula fracture (Maisonneuve fracture). Diagnosis: Imaging: Imaging for evaluation of ankle fractures can include x-rays, CT scans, and MRIs. Typically, evaluation begins with x-rays, which can provide information about the mechanism of injury, severity of injury, and stability of the fracture. The Ottawa ankle rules determine the necessity of obtaining x-rays in patients with acute ankle injuries. These guidelines were created to minimize the expense of unnecessary x-rays.
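In practice, the Ottawa ankle rules for the ankle x-ray series reduce to a short boolean check: malleolar-zone pain plus either bone tenderness at the posterior edge or tip of a malleolus, or an inability to bear weight both immediately and for four steps at examination. The sketch below is a minimal illustrative Python rendering of that logic (the function and parameter names are ours, not from any clinical library); the rules' parallel criteria for the foot x-ray series are omitted.

```python
def ankle_xray_indicated(
    malleolar_zone_pain: bool,
    lateral_malleolus_tenderness: bool,   # posterior edge or tip
    medial_malleolus_tenderness: bool,    # posterior edge or tip
    can_bear_weight_four_steps: bool,     # both immediately and at examination
) -> bool:
    """Illustrative application of the Ottawa ankle rule (ankle series only)."""
    if not malleolar_zone_pain:
        return False
    return (
        lateral_malleolus_tenderness
        or medial_malleolus_tenderness
        or not can_bear_weight_four_steps
    )
```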
Diagnosis: X-ray Views: There are three x-ray views in a complete ankle series: anteroposterior (AP), lateral, and oblique (or "mortise view"). The mortise view is an AP x-ray taken with the ankle internally rotated 15-20 degrees, since the foot is naturally externally rotated relative to the ankle. In addition to these views, a full-length view of the tibia and fibula may be necessary to evaluate for injuries to the proximal fibula associated with Maisonneuve fractures. A specialized AP stress view of the ankle is performed when there is concern for an unstable ankle injury. There are two types of stress views: gravity and mechanical. In the gravity stress view, the patient lies in the lateral decubitus position with the ankle dangling over the edge of the table to mimic the mechanical stress view. Findings: On x-rays, there can be a fracture of the medial malleolus, the lateral malleolus, and/or of the anterior/posterior margin of the distal tibia. The posterior margin (known as the posterior malleolus) is much more frequently injured than the anterior aspect of the distal tibia. If both the lateral and medial malleoli are broken, this is called a bimalleolar fracture (some of them are called Pott's fractures). If the posterior malleolus is also fractured, this is called a trimalleolar fracture. Diagnosis: CT: CT scans may be indicated when there is concern for a highly comminuted fracture or a fracture involving the joint surface. This imaging may be used for surgical planning. MRI: MRI is less commonly used to diagnose ankle fractures but may be used to show problems involving the soft tissues (ligaments and tendons) and articular cartilage. Classification: There are several classification schemes for ankle fractures. Out of the following, the Lauge-Hansen and Danis-Weber classification systems are most commonly used. The Lauge-Hansen classification categorizes fractures based on the mechanism of the injury as it relates to the position of the foot and the deforming force (the most common type is supination-external rotation). The Danis-Weber classification categorizes ankle fractures by the level of the fracture of the distal fibula (type A = below the syndesmotic ligament, type B = at its level, type C = above the ligament), and is useful in assessing injury to the syndesmosis and the interosseous membrane.
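Because the Danis-Weber scheme is a direct mapping from fracture level to type, it can be stated compactly. The following sketch is purely illustrative (the names are ours); in reality the level is read from the radiograph, not from a string.

```python
from enum import Enum

class Weber(Enum):
    A = "distal fibula fracture below the syndesmosis"
    B = "distal fibula fracture at the level of the syndesmosis"
    C = "distal fibula fracture above the syndesmosis"

def danis_weber(level: str) -> Weber:
    """Map the fracture level relative to the syndesmotic ligament
    ('below', 'at', or 'above') to its Danis-Weber type."""
    return {"below": Weber.A, "at": Weber.B, "above": Weber.C}[level]
```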
Other classification schemes: The Herscovici classification categorizes medial malleolus fractures of the distal tibia based on level. The Ruedi-Allgower classification categorizes pilon fractures of the distal tibia. Pediatric fracture types: Wagstaffe-Le Fort avulsion fracture, a vertical fracture of the antero-medial part of the distal fibula with avulsion of the anterior tibiofibular ligament. Tillaux fracture, a Salter–Harris type III fracture through the anterolateral aspect of the distal tibial epiphysis. Triplane fractures, a special type of fracture involving the immature skeleton, with a coronal plane in the metaphysis, an axial plane in the physis, and a sagittal plane in the epiphysis. Treatment: The broad goals of treating ankle fractures are restoring the ankle joint to normal alignment, healing the fracture, and preventing arthritis. The stability of the ankle joint often dictates treatment. Certain fracture patterns are stable and are thus treated without surgery, similarly to ankle sprains. Unstable fractures require surgery, most often an open reduction and internal fixation (ORIF), which is usually performed with permanently implanted metal hardware that holds the bones in place while the natural healing process occurs. A cast or splint will be required to immobilize the ankle following surgery. Stable ankle fractures with preserved joint alignment may be treated with non-operative measures (splinting, casting, and/or a walking boot). Complications: General complications associated with surgical treatment include infection, bleeding, blood clots, wound healing problems, and damage to surrounding nerves and blood vessels. Specific complications associated with surgical treatment of ankle fractures include fracture healing in an abnormal position (malunion), post-traumatic arthritis, failed fracture healing after a prolonged period of time (nonunion), and decreased range of motion (post-operative stiffness). If post-operative x-rays are concerning for malunion, then patients may need an additional procedure to restore proper ankle anatomy. The ultimate goal is to prevent or delay the development of post-traumatic arthritis. Post-traumatic arthritis can initially be managed with conservative options like activity modification, non-steroidal anti-inflammatory medication (NSAIDs), specialized footwear, and cortisone injections. If patients still have pain and impaired ankle function after these measures, then other procedures such as ankle arthrodesis and ankle arthroplasty can be considered. Nonunion is rare following surgical fixation of ankle fractures but can be managed with bone grafts and stable internal fixation. Patients can also experience pain or discomfort from the metal hardware used to fix the fracture. As a result, some patients decide to have the hardware removed through an additional procedure after the fracture has healed.
Epidemiology: Several large studies have suggested that the incidence of ankle fractures has increased since the 1960s. The incidence is highest in elderly women over the age of 65, but importantly, ankle fractures are not considered fragility fractures. In terms of fracture type, isolated malleolar fractures are most common (two-thirds of fractures); bimalleolar fractures occur in roughly 25% of patients, while trimalleolar fractures occur in 5-10%. Open fractures are rare, comprising 2% of all ankle fractures. In children, ankle fractures occur in about 1 per 1000 per year.
**Mars Black (pigment)** Mars Black (pigment): Mars Black is an iron oxide pigment developed in the 20th century. Also known under the names of black iron oxide, magnetic oxide, Pigment Black 11, and ferrous ferric oxide (Fe3O4), it has no known health hazards and is considered non-toxic, with an ASTM lightfastness rating of I. It is more opaque and less toxic than other black pigments. Artists' paint manufacturers have rated it one of the most satisfactory black pigments for acrylic paints with regard to opacity, lightfastness, and permanence. It takes its name from Mars, the god of war and patron of iron.
**Nail–patella syndrome** Nail–patella syndrome: Nail–patella syndrome is a genetic disorder that results in small, poorly developed nails and kneecaps, but can also affect many other areas of the body, such as the elbows, chest, and hips. The name "nail–patella" can be very misleading because the syndrome often affects many other areas of the body, including even the production of certain proteins. Those affected by NPS may have one or more affected areas of the body, and its severity varies depending on the individual. It is also referred to as iliac horn syndrome, hereditary onychoosteodysplasia (HOOD syndrome), Fong disease or Turner–Kieser syndrome. Diagnosis of NPS can be made at birth, but it is common for it to remain undiagnosed for several generations. While there is no cure available for NPS, treatment is available and recommended. Signs and symptoms: The skeletal structures of individuals who have this disorder may have pronounced deformities. As reported by several medical doctors, the following features are commonly found in people with nail–patella syndrome: Bones and joints: Patellar involvement is present in approximately 90% of patients; however, patellar aplasia occurs in only 20%. In instances in which the patellae are smaller or luxated, the knees may be unstable. The elbows may have limited motion (e.g., limited pronation, supination, extension). Subluxation of the radial head may occur. Arthrodysplasia of the elbows is reported in approximately 90% of patients. General hyperextension of the joints can be present. Exostoses arising from the posterior aspect of the iliac bones ("iliac horns") are present in as many as 80% of patients; this finding is considered pathognomonic for the syndrome. Other reported bone changes include scoliosis, scapular hypoplasia, and the presence of cervical ribs. Signs and symptoms: Glaucoma is also closely associated with nail–patella syndrome, specifically open-angle glaucoma (OAG). Side effects may include frequent headaches, blurred vision, or total vision loss. This occurs gradually over time, and symptoms may not be evident in children. Kidney issues may arise, such as deposition of protein in the urine and nephritis. Proteinuria is usually the first sign of kidney involvement; it can reveal itself either rapidly or years after asymptomatic deposition of protein in the urine. Kidney failure occurs in around 5% of NPS patients. Hypothyroidism, irritable bowel syndrome, attention deficit hyperactivity disorder (ADHD), and thin tooth enamel are associated with NPS, but whether these are related or simply coincidental is unclear. Genetics: Nail–patella syndrome is inherited in an autosomal dominant manner, linked to an aberration on the long (q) arm of human chromosome 9, at 9q34. Autosomal dominance means that a single mutated copy, rather than both, is sufficient for the disorder to be expressed in the offspring, so the chance of inheriting the disorder from an affected heterozygous parent is 50%. The frequency of occurrence is 1 in 50,000. The disorder is linked to the ABO blood group locus. It is associated with random mutations in the LMX1B gene; studies have identified 83 mutations of this gene. Diagnosis: The hallmark features of this syndrome are poorly developed fingernails, toenails, and patellae (kneecaps). Sometimes, this disease causes the affected person to have either no thumbnails or a small piece of a thumbnail on the edge of the thumb.
The lack of development or complete absence of fingernails results from loss-of-function mutations in the LMX1B gene. This mutation may cause a reduction in dorsalising signals, which then results in the failure to normally develop dorsal-specific structures such as nails and patellae. Other common abnormalities include elbow deformities, abnormally shaped pelvic (hip) bones, and kidney disease. Treatment: Treatment for NPS varies depending on the symptoms observed. It may include screening for kidney disease and glaucoma, surgery, intensive physiotherapy, and genetic counseling. ACE inhibitors are taken to treat proteinuria and hypertension in NPS patients; dialysis and kidney transplantation may be needed in cases of kidney failure, and physical therapy, bracing, and analgesics can help manage joint pain.
**Triacetic acid lactone** Triacetic acid lactone: Triacetic acid lactone (TAL; 4-hydroxy-6-methyl-2-pyrone) is an organic compound derived enzymatically from glucose. It is a light yellow solid that is soluble in organic solvents. Structure: Triacetic acid lactone consists of two main tautomers, of which the one featuring a hydroxy group at the C4 carbon is dominant. Triacetic acid lactone is classified as a 2-pyrone compound owing to the ketone group on the C2 carbon in its dominant form. Synthesis: Triacetic acid lactone is synthesized either from dehydroacetic acid, another 2-pyrone derivative, or from glucose by enzymatic catalysis. In its original synthesis, triacetic acid lactone was obtained by treatment of dehydroacetic acid with sulfuric acid at 135 °C. Dehydroacetic acid undergoes ring-opening and hydration to form "tetracetic acid". Upon cooling, triacetic acid reverts to a lactone ring similar to the dehydroacetic acid structure, and the triacetic acid lactone is recovered by crystallization in cold water. Biosynthesis: The microbial synthesis of triacetic acid lactone requires the enzyme 2-pyrone synthase (2-PS). This enzyme has been examined in two hosts: Escherichia coli and Saccharomyces cerevisiae. The Saccharomyces cerevisiae host gives a higher yield of triacetic acid lactone (70%) than the Escherichia coli host (40%). The enzyme catalyzes the synthesis of triacetic acid lactone from acetyl-CoA via two subsequent condensations with malonyl-CoA. This produces a 3,5-diketohexanoate thioester intermediate, which undergoes ring closure to produce triacetic acid lactone. Reactivity: The lactone is a versatile intermediate in organic synthesis. It has also been described as a platform chemical, meaning that it could be the precursor to other fine chemicals. The lactone undergoes decarboxylation to acetylacetone. It is also a precursor to sorbic acid, dienoic acid, and hexenoic acid. Dienoic acid is used to inhibit the growth of various molds, and hexenoic acid is used as a flavoring agent. Acetylacetone is used for metal extraction and plating, and as a food additive.
**Foldforming** Foldforming: Foldforming is a technique of metalworking whereby metal is folded, repeatedly forged and annealed, and unfolded, at which stage it generally has a dramatic new three-dimensional form. While alternate spellings abound (e.g., fold-forming, fold forming, Foldforming, and even form-folding), the definitive book "Foldforming" by Charles Lewton-Brain consistently uses the spelling foldforming as one lowercase word. Origins: The technique of foldforming was originated and developed in the late 1980s by Charles Lewton-Brain, an English-born goldsmith who lived and studied in Tanzania, the United States, and Germany before moving to Canada. Outside of the Industrial Revolution, the method has been described as the first major innovation in metalworking in thousands of years. Lewton-Brain was interested in art from a young age and was inspired to pursue his interest in jewelry by his girlfriend's mother. In 1974 he went to the Nova Scotia College of Art and Design (NSCAD), where he studied jewelry-making and metalsmithing. After his college career, Christian Gaudernack, one of the NSCAD professors and a Norwegian goldsmith, inspired Lewton-Brain to continue his education, and he went on to attend the Fachhochschule für Gestaltung, an art and design university in Pforzheim, Germany, while working as a part-time goldsmith. During his time in the metals program, he was instructed by Klaus Ullrich, a postwar metalsmith, who helped Lewton-Brain develop the foldforming technique. Ullrich emphasized to his students the importance of comprehending the properties of metal in order to understand how metal forms. Lewton-Brain developed his foldforming technique by observing the characteristics of the metal as it is folded, unfolded, forged, rolled, annealed, and worked on. He brought about a new style of metalworking with a connection to nature: his technique focused on the metal's natural reaction to being hammered and heated, based on his understanding of the metal's elastic and ductile characteristics that were part of his instruction by Klaus Ullrich. Lewton-Brain continued to teach the foldforming technique at workshops and at the Alberta College of Art and Design as Head of Metals and Jewelry, having been part of that institute since 1986. By 1991, Lewton-Brain was winning awards for the technique, and in 1997 workshops demonstrating it were at the core of the "Touch the Future" portion of the JCK International Jewelry Show in Orlando, Florida. Applications: When foldforming was first developed by Charles Lewton-Brain, it was mostly used in creative artwork or jewelry. Metalsmiths and artists turn a 2-dimensional sheet into a 3-dimensional figure. The outcome is determined by how many times the sheet metal is folded, unfolded, annealed, and forged (hammered on an anvil). Artists like Charles Lewton-Brain have incorporated these natural figures into their art and jewelry. Jewelry, such as earrings or necklaces, can be made with foldforming. For some artists, or students training to become artists, like Ball State University graduate student Rachael Jobst, the technique can be helpful when making leaves or flowers for an art piece. Applications: There are other applications of foldforming. Manufacturers have been able to apply this process to help them produce cheaper automobiles.
When processing some parts of the vehicle, like the frame and body, the metal used goes through press-based stamping, a comparatively more complicated method of producing the car's body. With foldforming, manufacturers are able to cut manufacturing costs and time because of the reduced need for tools and the additional operations required by press-based stamping. Also, with foldforming, the metal sheets take advantage of the flexibility of the material, reducing the chance of cracks and wear. Resemblance: Many of the shapes and forms that come out of foldforming resemble things seen in nature, and the process utilizes laws of nature in the creation. The most common shapes created using foldforming are flowers, leaves, or the horns of a ram, as these require the repeated process of folding, annealing, unfolding, and hammering of sheet metal that foldforming involves. The process of a flower unfolding or a leaf forming is similar between natural occurrences and foldforming. With this, artists are able to obtain a better understanding of how to incorporate nature's natural beauty into their artwork. For metalsmiths, the technique requires pushing their material to the limit so that they gain a better understanding of what they can make based on the material's ductility and elasticity. Resemblance: Foldforming also resembles the paper-folding technique known as origami. The process of folding and unfolding a flat material is seen in both metal foldforming and origami, and many of the principles and issues that come with folding and unfolding appear in both. Because of this similarity, some artists create a paper origami model of their project before working with sheet metal. The difficulty is that paper and sheet metal are materials with very different properties, so artists are still limited by the material's malleability. Paper bends more freely but is incapable of sustaining a folded form as easily as sheet metal, and sheet metal is a thicker and tougher material to work with. Folding technique: Hundreds of folds have now been categorized. Charles Lewton-Brain distilled foldforming into four basic steps. Step one: fold the sheet metal over itself, creating the bent shape in the sheet metal. Step two: forge (hammer) or roll the metal; by doing this, metalsmiths either create the main form of the figure or make the area where the metal is folded more distinctive. Step three: anneal the metal, heating the sheet metal enough for it to become easier to work with. Step four: unfold the sheet metal, revealing its form. All four steps act upon the characteristics of metals. Tools: Techniques now include the use of traditional forging tools such as various types of hammers, mallets, and anvils. Other tools include rolling mills, vice grips, pliers, wire or other objects embedded into the folds, and a heat source. The heat source can be some kind of forge, a blowtorch, or anything hot enough to anneal the metal.
**Bot herder** Bot herder: Bot herders are hackers who use automated techniques to scan specific network ranges and find vulnerable systems, such as machines without current security patches, on which to install their bot program. The infected machine then becomes one of many zombies in a botnet and responds to commands given by the bot herder, usually via an Internet Relay Chat channel. Bot herder: One notable bot herder is the controller of Conficker. A bot herder usually uses a pseudonym to remain anonymous, and may use proxy servers, shell accounts, and bouncers to conceal their IP address, thus maintaining anonymity.
**Plasticulture** Plasticulture: The term plasticulture refers to the practice of using plastic materials in agricultural applications. The plastic materials themselves are often and broadly referred to as "ag plastics". Plasticulture ag plastics include soil fumigation film, irrigation drip tape/tubing, plastic plant packaging cord, and nursery pots and bales, but the term is most often used to describe all kinds of plastic plant/soil coverings. Such coverings range from plastic mulch film, row coverings, and high and low tunnels (polytunnels) to plastic greenhouses. Plasticulture: Plastic used in agriculture was expected to amount to 6.7 million tons in 2019, or 2% of global plastic production. Plastic used in agriculture is hard to recycle because of contamination by agricultural chemicals. Moreover, plastic degradation into microplastics is damaging to soil health, microorganisms, and beneficial organisms like earthworms. Current science is not clear on whether there are negative impacts on food grown in plasticulture, or on the humans who eat it. Because of these impacts, some governments, like the European Union under the Circular Economy Action Plan, are beginning to regulate its use and the plastic waste produced on farms. Types of plastics used: Polyethylene (PE) is the plastic film used by the majority of growers because of its affordability, flexibility, and easy manufacturing. It comes in a variety of thicknesses, in a low-density form (LDPE) as well as a linear low-density form (LLDPE). These can be modified by the addition of certain elements that give the plastic properties beneficial to plant growth, such as reduced water loss, UV stabilization to cool soil and deter insects, elimination of photosynthetically active radiation to prevent weed growth, IR opacity, antidrip/antifog behavior, and fluorescence. Polypropylene (PP) is often used for agricultural plant packaging cord. Applications: Greenhouses and walk-in tunnel covers: A greenhouse is a large structure in which it is possible to stand and work, with automated ventilation. High tunnels are hoop houses, manually ventilated by rolling up the sides. Greenhouse and high tunnel films are usually 80-220μm thick and 20m wide, and have a life span between 6 and 45 months, depending on several factors. Monolayer polyethylene films are better suited to less extreme environmental conditions, while multilayer covers made of three layers, with one EVA19 layer inserted between two low-density polyethylene layers, have been shown to perform better under harsh conditions. Applications: Small tunnel covers: Small tunnel covers are about 1m wide and 1m high, and have a thinner polyethylene film than the large tunnel covers, usually below 80μm. Their lifetime is also shorter than that of the larger versions; they usually have a usable life span of 6–8 months. Use of small tunnels is less popular than both the more expensive but durable greenhouses/walk-in tunnels and the cheaper plastic mulch. Applications: Plastic mulch: Plastic mulching is the placement of a thin plastic film over the ground, with holes poked at regular intervals for seeds to be planted in, or with the film placed directly over plants in the beginning stages of growth. The films remain in place for the duration of the cultivation (usually 2–4 months) and usually have a thickness of 12-80μm.
The main functions of plastic mulch are to insulate the soil and maintain a consistent temperature and humidity, to prevent evaporation of moisture from the soil, to shorten the time from seeding to harvest, to prevent weed growth, and to prevent erosion. Pigmented or colourless films can be used, each with specific advantages and disadvantages over the other. Black films prevent weed growth, but do not transmit light to heat up the soil; clear films transmit light and heat the soil, but promote weed growth. Photosensitive films have been developed that are pigmented to prevent weed growth, but still transmit light to heat the soil. These photosensitive films are more costly than either the clear or black polyethylene sheeting. Black plastic mulch controls evaporation from the soil and improves soil water retention. Plastic mulching has been shown to reduce irrigation requirements in pepper by 14-29% because of the elimination of soil evaporation. Flowering time was also reduced in okra when black plastic mulch was used; the plants reached 50% flowering 3–6 days earlier than in un-mulched plots. Plant height in okra was significantly increased with black plastic mulch compared to plants grown in bare soil. Evaporation from soil accounts for 25-50% of the water used in irrigation; plastic mulch prevents much of this evaporation and thus reduces the amount of water needed to grow the crop. This conservation of water makes plastic mulch favourable for farmers in dry and arid climates where water is a limited resource. As the second most used ag plastic in the world, the volume of plastic mulch used every year is estimated at 700,000t. Origins and development around the world: The first use of plastic film in agriculture was in an effort to make a cheaper version of a glasshouse. In 1948 Professor E.M. Emmert built the first plastic greenhouse, a wooden structure covered with cellulose acetate film. He later switched to a more effective polyethylene film. After this introduction of plastic film to agriculture, it began being used at a larger scale around the world by the early 1950s to replace paper for mulching vegetables. By 1999 almost 30 million acres worldwide were covered in plastic mulch. Only a small percentage of this was in the United States (185,000 acres); the majority of this growth was happening in economically poor areas of the world and previously unproductive desert regions, such as Almería in southern Spain. The largest concentrations of greenhouses around the world are mainly found in two areas, with 80% throughout the Far East (China, Japan, Korea) and 15% in the Mediterranean basin. The area under greenhouse cover is still increasing at a fast rate; during the last decade it is estimated to have grown by 20% every year. Areas such as the Middle East and Africa are growing in their use of plastic greenhouses by 15-20% per year, compared to the weak growth in more developed and economically stable areas such as Europe. China leads the world's growth at 30% per year, translating into a volume of plastic film reaching 1,000,000 t/year. As of 2006, 80% of the area covered by plastic mulch was in China, where use grows at 25% per year, the highest rate in the world. Since its introduction in the 1950s, plastic film has been designed and developed to increase produce yield, increase produce size, and shorten growth time.
Developments in plastic film include durability, optical (ultraviolet, visible, near-infrared, and middle-infrared) properties, and the antidrip or antifog effect. Recent developments in this area include UV-blocking, NIR-blocking, fluorescent, and ultrathermic films. Origins and development around the world: Large-scale usage in southern Spain The use of plasticulture in agriculture is growing rapidly, perhaps nowhere more visibly than around Almería in southern Spain. The eastern approaches to Almería, north of the airport, are densely covered, as is a large area further northeast, surrounding the towns of Campohermoso, Los Pipaces and Los Grillos (close to Níjar). Origins and development around the world: The densest concentration lies about 20 km southwest of Almería, where almost the entire Campo de Dalías, a low-lying cape, is now under plastic (an estimated area of 20,000 hectares). Further west, a similar but smaller coastal plain around Carchuna, southeast of Motril, is similarly enveloped. The technique is not restricted to the plains; it is also applied to wide terraces on the sides of shallow valleys, as the valley north of Castell de Ferro shows. Origins and development around the world: Elsewhere along the Costa Tropical and the Costa del Sol, particularly between Almería and Málaga, fruit trees growing on terraces in steeper valleys may be covered with vast tents of plastic netting. Environmental aspects: As (non-biodegradable) plastics are used in agriculture, there is a risk of them ending up in the soil, polluting it in the process. Environmental aspects: Recycling One significant component of plasticulture is the disposal of used ag plastics. Technologies exist which allow ag plastics to be recycled into plastic resins for reuse in the plastics manufacturing industry. Recycling of plastic mulch is difficult because the mulch is often wet or dirty. Thin mulch breaks down quickly, and can be impossible to pick up for recycling once degraded. Legislation on plastic use in agriculture: In the European Union, Directive 2008/98/EC on waste management is in place, of which article 8 states that "each member state may introduce the EPR [extended producer responsibility] concept into its own legal framework in addition to deciding how to encourage manufacturers to participate in the prevention, re-use, recycling and recovery of used plastic products." In addition, in 2018, the European Commission published a communication laying out a strategy for plastics in a circular economy. It mentioned curbing plastic waste and littering, for instance by reducing single-use plastics, tackling sources of marine litter at sea, restricting the use of oxo-degradable plastics and curbing micro-plastics pollution. In 2020, the EU finally released its Circular Economy Action Plan. It included a set of measures to reduce plastic litter and address the presence of microplastics in the environment. It also committed to addressing sustainability issues by developing a policy framework on biodegradable and compostable plastics.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Intelligent Home Control** Intelligent Home Control: Intelligent House Concept is a building automation system using a star-configured topology with wires to each device. It was originally made by LK, but is now owned by Schneider Electric and sold as "IHC Intelligent House Concept". The system is made up of a central controller and up to 8 input modules and 16 output modules. Each input module can have 16 digital (on/off) inputs and each output module 8 digital (on/off) outputs, resulting in a total of 128 inputs and 128 outputs per controller. Module control protocol: The central controller has one point-to-point data communication wire connected to each module. The protocol between the central controller and the modules uses a 5 V pulse-width encoding as follows: a header that is 4100 µs high and 300 µs low; one pulse per I/O port, i.e. 16 pulses for input modules and 8 pulses for output modules; and one additional parity pulse, where an even number of 1-pulses gives parity 0 and an odd number gives parity 1. Each pulse is 600 µs wide: a 0 (input or output off, or even parity) is encoded as 300 µs high and 300 µs low, and a 1 (input or output on, or odd parity) is encoded as 150 µs high and 450 µs low. The above signal repeats continuously.
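The frame format just described maps naturally to a small encoder. The sketch below is our own illustration of the timings given in the text (the function name and the (level, duration) representation are assumptions, not anything published by Schneider Electric):

```python
# Minimal sketch of the IHC module frame encoding described above.
# Only the timings (4100/300 us header, 600 us pulses, parity rule)
# come from the text; the representation is our own.

def encode_frame(bits):
    """Return (level, microseconds) pairs for one module frame.

    bits -- list of 0/1 port states: 16 for an input module,
            8 for an output module.
    """
    frame = [(1, 4100), (0, 300)]      # header: 4100 us high, 300 us low
    parity = sum(bits) % 2             # odd number of 1-pulses -> parity 1
    for b in bits + [parity]:
        if b:
            frame += [(1, 150), (0, 450)]   # '1': 150 us high, 450 us low
        else:
            frame += [(1, 300), (0, 300)]   # '0': 300 us high, 300 us low
    return frame

# Example: an output module frame with only port 3 switched on.
states = [0] * 8
states[3] = 1
for level, microseconds in encode_frame(states):
    print(level, microseconds)
```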
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ARHGEF35** ARHGEF35: Rho guanine nucleotide exchange factor (GEF) 35 is a protein in humans that is encoded by the ARHGEF35 gene.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Iso (American football)** Iso (American football): An Iso, short for isolation and also known as a Halfback Lead, is a simple run play in American football designed to isolate the fullback on a lead block against a linebacker, giving the halfback an easy 5 yards. Meanwhile, the other linebackers are handled by combo blocks from the offensive line. It is one of the most commonly run concepts in football, and is useful in most situations. Types/variants: Iso plays are almost always run out of I-formations because they require a fullback; however, coaches have developed many ways to run them. Types/variants: Quarterback Iso The quarterback iso is a recently developed variant of the iso that is run out of shotgun and pistol formations. Often seen at the youth and high school levels of football, it takes advantage of mobile quarterbacks and uses the halfback as the lead blocker rather than a fullback. It is successful against defenses that put a limited number of players in the box. Requirements: The play requires a fullback to be successful, and is therefore almost always run out of I-formations. It helps if the defense is spread across the field, as that makes the combo blocks from the offensive line easier to execute.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jump rings** Jump rings: Jump rings are (usually metal) rings used to make chains, jewellery and chain mail. They are made by wrapping wire around a mandrel to make a coil, then cutting the coil with wire cutters to make individual rings. The rings can be assembled one by one into chains, earrings, objects such as bowls or ornaments, and chain mail clothing. The making of items from jump rings is called chain maille ("maille" is French for "mesh"). Jump rings can be described by the following qualities:
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Horse harness** Horse harness: Horse harness is a device that connects a horse to a vehicle or another type of load. Horse harness: There are two main categories of horse harness: (1) the "breaststrap" or "breastcollar" design, and (2) the collar and hames design. For light work, such as horse show competition where light carts are used, a harness needs only a breastcollar. It can only be used for lighter hauling, since it places the weight of the load on the sternum of the horse and the nearby windpipe. This is not the strongest skeletal area, and heavy loads can constrict the windpipe and reduce a horse's air supply. Horse harness: By contrast, the collar and hames design places the weight of the load onto the horse's shoulders, without any restriction of the air supply. For heavy hauling, the harness must include a horse collar to allow the animal to use its full weight and strength. Harness components designed for other animals (such as the yoke used with oxen) are not suitable for horses and will not allow the horse to work efficiently. Putting harness on a horse is called harnessing or harnessing up. Attaching the harness to the load is called putting to (British Isles) or hitching (North America). The order of putting on harness components varies by discipline, but when a horse collar is used, it is usually put on first. History: Throughout the ancient world, the "throat-and-girth" harness was used for harnessing horses that pulled carts; this greatly limited a horse's ability to exert itself, as it was constantly choked at the neck. A painting on a lacquerware box from the State of Chu, dated to the 4th century BC, shows the first known use of a yoke placed across a horse's chest, with traces connecting to the chariot shaft. The hard yoke across the horse's chest was gradually replaced by a breast strap, which was often depicted in carved reliefs and stamped bricks of tombs from the Han Dynasty (202 BC – 220 AD). Eventually, the horse collar was invented in China, at least by the 5th century. Parts: Parts of the harness include: A collar to allow the horse to push against the harness with its shoulders and chest. Two main alternative arrangements (with some intermediate types): A horse collar (or full collar). A padded loop fitting closely around the horse's neck, pointed at the top to fit the crest of the neck. Used for heavier pulling, especially when used without a swingletree or whippletree. Parts: A breastcollar. A padded strap running around the chest from side to side. Used for light work; for somewhat heavier work it is used together with a swingletree, which evens the pull on each step without rubbing. Hames (if a full collar is used). Two metal or wooden strips which take the full force of the pull, padded by the collar. Parts: Breeching. A strap around the horse's haunches allowing it to set back and slow a vehicle, usually hooked to the shafts or pole of the vehicle (also known as thill). Used for a single horse, a pair, or in a larger team, only for the wheelers (the animal or pair closest to the vehicle). The leaders in a team do not have breeching, as they are in front of the shafts or pole and so cannot slow the vehicle. Breeching may also be omitted in fine harness, or when the cart is very light or has efficient brakes on the wheels. Parts: Traces. The straps or chains which take the pull from the breastcollar or hames to the load. Harness saddle or "pad". A small supportive piece of the harness that lies on the horse's back, not the same as a riding saddle. Girth.
A strap that goes firmly around the girth of the horse to attach the harness saddle. Belly-band. A strap that goes more loosely under the belly of the horse, outside the girth. Prevents the shafts rising up, especially on a two-wheeled vehicle (where weight on the rear of the cart may tip the front up). Back band. A strap going through the harness saddle to join the belly band on either side. Takes the weight of the shafts or pole. In cart harness it is replaced by a chain running in a groove in the harness saddle, hooked to the shafts on either side. Sliding back band. In a two-wheeled vehicle, the shafts are fixed to the vehicle to hold it level. On a side-slope, one shaft will be higher than the other, and in this case the back band is normally allowed to slide sideways through the harness saddle, so the horse can walk upright without strain on the harness. Parts: Fixed back-band. In a four-wheeled vehicle, the shafts or pole must be allowed to hinge up and down, to allow the horse and vehicle to pass over hillocks and dips. Often the shafts are independently hinged, and on a side-slope these will each hinge to follow the horse, and a sliding back band is not needed. However, if a sliding back band were used with independent shafts it might allow one shaft to ride up higher than the other, and so for such shafts the back-band is normally fixed to the harness saddle. On other four-wheeled vehicles, the two shafts hinge together, and a sliding back band is needed as for two-wheeled vehicles. Parts: Surcingle. A term used within certain light fine harness designs to describe the combination of a light girth and harness saddle. False martingale. A strap passing between the front legs, from the centre of the collar to the belly band, to hold the collar in position. Called "false" because, unlike a true martingale, it does not attach to the bridle or have any influence on the horse's action. Crupper. A soft padded loop under the base of the tail, to keep the harness from slipping forward. Parts: Back strap. A strap attached by looping through the crupper dee at the rear of the saddle/pad or surcingle, used to attach the crupper. Shaft tugs, or just tugs. Loops attached to the back band to hold up the shafts of a vehicle in van or fine harness (not needed in cart harness, which attaches to hooks on the shafts). Two types: For two-wheeled vehicles the tugs are stiff leather loops, fitting fairly loosely around the shafts (which are rigidly attached to the vehicle), to allow flexibility as the animal and the vehicle move against each other. Parts: For four-wheeled vehicles with independently hinged shafts, the tugs (Tilbury tugs) are leather straps buckled tightly around the shafts so they move with the animal. Terrets. Metal loops on the saddle and collar to support the reins. The bridles of the rear animals of a large team may also have terrets to take the lines of the animals in front of them. Parts: Reins or lines. Long leather straps (occasionally ropes) running from the bit to the driver's hands, used to guide the horses. In teams of several animals these may be joined together so the driver need hold only one pair. Bridle: When working in harness, most horses wear a specialised bridle that includes features not seen in bridles used for riding. These usually include blinders, also called blinkers or winkers, behind and to the side of the horse's eyes, to prevent it from being distracted by the cart and other activity behind it.
Harness racing horses sometimes have a shadow roll on the noseband of the bridle for the same purpose. Parts: Bits for harness (often a Liverpool bit, but the Wilson snaffle is also popular) may be similar to those used for riding, particularly in the mouthpiece, usually operating as a curb bit with adjustable leverage to help balance the effect of the reins on different horses in a team. The bridles of the rearward horses in a team (the wheelers in a four-horse team, and both wheelers and centre horses in a six-horse team) often have rings at each end of the browband, through which the lines of the forward horses pass. Parts: Some horses pulling lighter vehicles, particularly at horse shows and other public exhibitions, may have an overcheck to assist them in holding a desired head position, and for safety reasons (to avoid the horse's head and neck going under the shaft in a stumble). In some cases a specially designed running martingale may also be added. A looser overcheck may also be used in a working harness to prevent the horse grazing. The overcheck hooks to a pedestal on the harness saddle. Parts: Horse brasses. Brass plaques mounted on leather straps, used for decoration, especially on working harness. Made in a very wide range of designs. Types: Show harness Show harnesses for light cart driving have a breastcollar instead of a horse collar and are made with strong but refined-looking leather throughout, usually black and highly polished. In draft horse showing and combined driving, horse collars are seen, but harness leather is still highly polished and well-finished. Carriage or van harness Lighter-weight but strong harness similar to show harness, used for pulling passenger vehicles such as buggies or carts, or other lighter loads. The traces attach either to the shafts of the vehicle or to the vehicle itself, and the harness may have either a horse collar or a breastcollar. Types: Racing harness The racing harness, like the show harness, is a breastcollar harness. Horses are hitched to a very lightweight two-wheeled cart, called a sulky. Most race harnesses incorporate a standing martingale and an overcheck. Horses may be raced in a "blind" bridle, which restricts the horse from seeing beside and behind to various degrees by use of blinkers, or may be raced with an "open" bridle, one that does not have blinkers. Specialized equipment, called "hobbles" or "hopples", is added to the harness of race horses who pace (and sometimes to the harness of those who trot) in order to help them maintain their gait. Types: Cart or wagon harness Harness for pulling heavier vehicles always has a horse collar. The traces are often made of chain and attach to loops on the shafts of the vehicle. A chain attached to the shafts may be passed over the saddle to carry their weight. Reins are of rope or leather, depending on the region of the world. Types: Plow harness Similar to cart harness but without breeching, used for dragged loads such as plows, harrows, canal boats or logs. This style is also used on the leaders in a team of animals pulling a vehicle. The traces attach to a whippletree behind the horse, and this then pulls the load (or in larger teams may attach to further whippletrees). Types: There are two main plow harness types: the New England D-ring and the Western harness.
The New England D-ring makes use of a metal D-shaped ring that allows a ninety-degree angle to be maintained at the junction of the front trace and the hames, regardless of the height of the implement being pulled. The Western harness does not provide this flexibility, but has other useful characteristics, such as a strap running from the breeching to the collar that stops the pull from riding up and hitting the horse in the face when descending a steep incline.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Energy park** Energy park: An energy park is a separate area used and planned for the purpose of clean energy development, such as wind and solar generation facilities. Energy park: Energy parks also create other economic development benefits. In Ohio, energy parks are creating thousands of green jobs. In Minnesota, community wind parks are also popular. In England, wind parks are commonly known as wind farms. A more "lightweight" version of an energy park is a wind park or solar park; these have one type of clean energy generation, rather than two or more technologies as in an energy park. Some energy parks offer features beyond clean energy generation, including green job creation, smart grid connections, and new recreational, technology innovation and agricultural opportunities. The Stamford Energy Park in Vermont, United States, is one example of an integrated energy park.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**World War II cryptography** World War II cryptography: Cryptography was used extensively during World War II because of the importance of radio communication and the ease of radio interception. The nations involved fielded a plethora of code and cipher systems, many of the latter using rotor machines. As a result, the theoretical and practical aspects of cryptanalysis, or codebreaking, were much advanced. World War II cryptography: Possibly the most important codebreaking event of the war was the successful decryption by the Allies of the German "Enigma" cipher. The first break into Enigma was accomplished by the Polish Cipher Bureau around 1932; the techniques and insights used were passed to the French and British Allies just before the outbreak of the war in 1939, and were substantially improved by British efforts at Bletchley Park during the war. Decryption of the Enigma cipher allowed the Allies to read significant parts of German radio traffic on key networks and was an invaluable source of military intelligence throughout the war. Intelligence from this source and other high-level sources, such as cryptanalysis of the Lorenz cipher, was eventually called Ultra. A similar break into the most secure Japanese diplomatic cipher, designated Purple by the US Army Signals Intelligence Service, started before the US entered the war. Product from this source was called Magic. World War II cryptography: On the other side, German codebreaking in World War II achieved some notable successes cracking British naval and other ciphers. Notable organizations, machines, and people by country include: Australia: Central Bureau; FRUMEL (Fleet Radio Unit, Melbourne); Secret Intelligence Australia. Germany: the Enigma machine; Fish (the British codename for German teleprinter ciphers), including the Lorenz cipher (a Fish cipher codenamed Tunny by the British) and the Siemens and Halske T52 Geheimfernschreiber (a Fish cipher codenamed Sturgeon by the British); the Short Weather Cipher; B-Dienst; Reservehandverfahren; OKW/CHI; Gisbert Hasenjaeger. Japan: Japanese army and diplomatic codes; Japanese naval codes; PURPLE; JN-25. Poland: cryptanalysis of the Enigma; the Biuro Szyfrów (Cipher Bureau); Marian Rejewski; Jerzy Różycki; Henryk Zygalski; the bomba; the Lacida machine. United Kingdom: Bletchley Park; cryptanalysis of the Enigma; cryptanalysis of the Lorenz cipher; the Far East Combined Bureau (FECB); the Naval Intelligence Division (NID); the Wireless Experimental Centre (WEC); the Bombe; the Colossus computer; Typex; SYKO; Ultra; Alan Turing; W. T. Tutte; John Tiltman; Max Newman; Tommy Flowers; I. J. Good; John Herivel; Leo Marks; Gordon Welchman; the poem code. United States: Magic; the Signals Intelligence Service (US Army; see also Arlington Hall); OP-20-G (the US Navy signals intelligence group); Elizebeth Smith Friedman; William Friedman; Frank Rowlett; Abraham Sinkov; Genevieve Grotjan Feinstein; Leo Rosen; Joseph Rochefort, leader of the effort to crack Japanese naval codes; Joseph Mauborgne; Agnes Meyer Driscoll; the SIGABA cipher machine; SIGSALY voice encryption; the SIGTOT one-time tape system; the M-209 cipher machine; and the cryptanalysis groups at Station HYPO, Station CAST, and Station NEGAT.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cutans** Cutans: Cutans are the modification of the soil texture, or soil structure, at natural surfaces (particle, pore, or ped) in soil materials due to illuviation. Cutans are oriented deposits which can be composed of any of the component substances of the soil material. Cutans are common features in soil and represent focuses of chemical and biological reactions. Cutans may include clay skins or coatings of silica, sesquioxide, manganese, ferromanganese, soil organic matter or carbonate. Clay skins are also called argillans, and soil horizons with sufficient clay illuviation are termed argillic horizons. Significance: Cutans provide physical evidence, observable in the field, as to the degree and nature of pedogenesis. The ability to assess cutans is a core skill in soil morphology and paleopedology.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**XtreemFS** XtreemFS: XtreemFS is an object-based, distributed file system for wide area networks. XtreemFS's distinguishing feature is full (all components) and real (all failure scenarios, including network partitions) fault tolerance while maintaining POSIX file system semantics. Fault tolerance is achieved using Paxos-based lease negotiation algorithms, which are used to replicate files and metadata. Support for SSL and X.509 certificates makes XtreemFS usable over public networks. XtreemFS: XtreemFS has been under development since early 2007. A first public release was made in August 2008. XtreemFS 1.0 was released in August 2009. The 1.0 release includes support for read-only replication with failover, data center replica maps, parallel reads and writes, and a native Windows client. Release 1.1 added automatic on-close replication and POSIX advisory locks. In mid-2011, release 1.3 added read/write replication for files. Version 1.4 underwent extensive testing and is considered production-quality. Improved Hadoop integration and support for SSDs were added in version 1.5. XtreemFS: XtreemFS is funded by the European Commission's IST programme. The original XtreemFS team founded Quobyte Inc. in 2013. Quobyte offers a professional storage system as a commercial product. Features: Secure connections to Contrail (software); clients for Linux, Windows and OS X; open source (New BSD License since release 1.3); cross-site file replication with auto-failover; partial replicas, with objects fetched on demand; POSIX compatibility; plugins for authentication policies and replica selection; RAID0 (striping) with parallel I/O over stripes; read-only replication; security (SSL, X.509 certificates); servers for Linux and Solaris natively, and for Windows via a non-native, Java- and Ant-based server. Features: An experimental file system driver for Hadoop (added in version 1.2). Use cases: as a filer replacement (home directories and group shares), in HPC clusters, in Hadoop clusters, for VM block storage, for cross-branch data sharing, and many more use cases, all in a single system.
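As a rough illustration of the RAID0-style striping listed among the features, the sketch below shows how a file byte offset maps to a stripe object and storage server. The stripe size, OSD count, and function name are assumptions for the example, not XtreemFS's actual defaults or API:

```python
# Generic sketch of RAID0-style striping as used by object-based file
# systems such as XtreemFS. Parameters and names here are assumed for
# illustration only.
STRIPE_SIZE = 128 * 1024   # assumed stripe size: 128 KiB
NUM_OSDS = 4               # assumed number of object storage devices

def locate(offset):
    """Map a file byte offset to (osd_index, object_number, offset_in_object)."""
    obj = offset // STRIPE_SIZE          # which stripe object the byte falls in
    return obj % NUM_OSDS, obj, offset % STRIPE_SIZE

print(locate(0))         # (0, 0, 0): first stripe, first OSD
print(locate(300_000))   # a later stripe, stored on a different OSD
```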
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ftx (gene)** Ftx (gene): In molecular biology, FTX transcript, XIST regulator (non-protein coding), also known as FTX (Five prime to Xist), is a long non-coding RNA. In humans, it is located on the X chromosome. It was identified during sequence analysis of the X inactivation centre, surrounding the XIST gene. FTX contains several microRNAs within its introns. It upregulates expression of XIST, and inhibits DNA methylation of the XIST promoter.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Froebel gifts** Froebel gifts: The Froebel gifts (German: Fröbelgaben) are educational play materials for young children, originally designed by Friedrich Fröbel for the first kindergarten at Bad Blankenburg. Playing with Froebel gifts, singing, dancing, and growing plants were each important aspects of this child-centered approach to education. The series was later extended from the original six to at least ten sets of gifts. Description: The Sunday Papers (Sonntagsblatt) published by Fröbel between 1838 and 1840 explained the meaning and described the use of each of his six initial "play gifts" (Spielgabe): "The active and creative, living and life producing being of each person, reveals itself in the creative instinct of the child. All human education is bound up in the quiet and conscientious nurture of this instinct of activity; and in the ability of the child, true to this instinct, to be active." Between May 1837 and 1850, the Froebel gifts were made in Bad Blankenburg in the principality of Schwarzburg-Rudolstadt by master carpenter Löhn, assisted by artisans and women of the village. In 1850, production was moved to the Erzgebirge region of the Kingdom of Saxony, in a factory established for this purpose by S. F. Fischer. Fröbel also developed a series of activities ("occupations") such as sewing, weaving, and modeling with clay, for children to extend their experiences through play. Ottilie de Liagre, in a letter to Fröbel in 1844, observed that playing with the Froebel gifts empowers children to be lively and free, but that people can degrade it into a mechanical routine. Description: Each of the first five gifts was assigned a number by Fröbel in the Sunday Papers, which indicated the sequence in which each gift was to be given to the child. Description: Gift 1 (infant) The first gift is a soft ball or yarn ball in a solid color, which is the right size for the hand of a small child. When attached to a matching string, the ball can be moved by a mother in various ways as she sings to the child. Although Fröbel sold single balls, they are now usually supplied in sets of six balls consisting of the primary colors: red, yellow, and blue; as well as the secondary colors: purple, green, and orange. These soft balls can be squashed in the hand, and they revert to their original shapes. The first gift was intended by Fröbel to be given to very young children. His intention was that, through holding, dropping, rolling, swinging, hiding, and revealing the balls, the child may acquire knowledge of objects and spatial relationships, movement, speed and time, color and contrast, and weights and gravity. Gift 2 (1–2 years) The second gift originally consisted of two wooden objects, a sphere and a cube. Fröbel called this gift "the child's delight", since he observed the joy of each child discovering the differences between the sphere and cube. Description: The child is already familiar with the shape of the wooden sphere, which is the same as the ball of the first gift. The wooden sphere always looks the same when viewed from any direction. Like the child, the wooden sphere is always on the move. When rolled on a hard surface, the wooden sphere produces sounds. In contrast, the wooden cube is the surprise of the second gift. It remains where it is placed, and from each direction presents a different appearance. Description: The second gift was developed to enable a child to explore and enjoy the differences between shapes.
By attaching a string or inserting a rod in a hole drilled through these wooden geometric shapes, they can be spun by a child. Although the sphere always appears the same, the spinning cube reveals many shapes when spun in different ways. This led Fröbel to later include a wooden cylinder in the second gift, which may also be spun in many different ways. Description: Gift 3 (2–3 years) The familiar shape of the cube is now divided into eight identical beechwood cubes, about one inch along each edge, which is a convenient size for the hand of a small child. A child delights in pulling apart this gift, rearranging the eight cubes in many ways, and then reassembling them in the form of a cube. This is the first building gift. Description: Gift 4 (2–3 years) This second building gift at first appears the same as Gift 3. But a surprise awaits the child when the pieces are pulled apart: each of these eight identical beechwood blocks is a rectangular plank, twice as long and half the width of the cubes of the previous gift. Many new possibilities for play and construction arise from these differences. Description: Gift 5 (3–4 years) This building gift consists of more cubes, some of which are divided into halves or quarters. Gift 6 (4–5 years) A set of more complex wooden blocks that includes cubes, planks, and triangular prisms. Influence: Froebel's gifts were adapted by Caroline Pratt for the school she founded in 1913 in the Greenwich Village district of New York City. This school embodied a child-centered approach to education. Children worked together to reconstruct their experiences through play. Based on the ideas of Friedrich Fröbel, the curriculum was drawn from the environment of the child; observations about the neighborhood inspired each child to reflect on their world directly so that they could make sense of their experiences. Joachim Liebschner commented in his book A Child's Work: Freedom and Guidance in Froebel's Educational Theory and Practice: "Realising how the Gifts were eventually misused by Kindergarten teachers who followed after Fröbel, it is important to consider what Fröbel expected the gifts to achieve. He envisaged that the Gifts will teach the child to use his (or her) environment as an educational aid; secondly, that they will give the child an indication of the connection between human life and life in nature; and finally, that they will create a bond between the adult and the child who play with them". Influence: Fröbel's building forms and movement games were forerunners of abstract art as well as a source of inspiration to the Bauhaus movement. Many modernist architects were exposed as children to Fröbel's ideas about geometry, including Frank Lloyd Wright, Le Corbusier, and Buckminster Fuller. Wright was given a set of the Froebel blocks at about age nine, and in his autobiography he cited them indirectly in explaining that he learned the geometry of architecture in kindergarten play: For several years I sat at the little kindergarten table-top ruled by lines about four inches apart each way making four-inch squares; and, among other things, played upon these 'unit-lines' with the square (cube), the circle (sphere) and the triangle (tetrahedron or tripod)—these were smooth maple-wood blocks.
All are in my fingers to this day. Wright later wrote: "The virtue of all this lay in the awakening of the child-mind to rhythmic structures in Nature… I soon became susceptible to constructive pattern evolving in everything I saw." Current availability: Froebel gifts continue to be used in early childhood education in Korea and Japan, where they are made from local timber. Reproduction sets can be ordered via the Internet.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**LizaMoon** LizaMoon: LizaMoon is a piece of malware that infected thousands of websites beginning in September 2010. It is an SQL injection attack that spreads scareware encouraging users to install needless and rogue "anti-virus software". Although it does not use new infection techniques, it was initially thought to be notable for the scale and speed at which it spread, and because it affected some of Apple's iTunes service. LizaMoon was initially reported to the general public by Websense Security Lab. Overview: Initial press statements reported that hundreds of thousands, or even millions, of sites were infected. McAfee estimated approximately 1.5 million hosts were affected between March and April 2011. However, subsequent research has shown a much lower infection rate. Although initial estimates based on Google search data were thought to show hundreds of thousands of infected sites, the true number appears to be only in the thousands: according to Niels Provos, a security researcher at Google, Google's safe browsing database indicates the LizaMoon attacks began around September 2010 and peaked in October 2010, with approximately 5,600 infected sites. Cisco researcher Mary Landesman has confirmed that the infection rate appears quite low. How the web sites spreading the infection were attacked remains a mystery; however, hackers may have injected vulnerable and popular websites with malicious code in order to spread the infection once users visited those sites. Users should never permit installs of software of unknown provenance from the Internet under any circumstances; those who follow this policy cannot be infected by LizaMoon. These types of malware, known as rogue antivirus software, come under different names and logos such as "XP Security 2011", "Malware Scanner" or similar. After the initial installation, the software runs a fake scan showing non-existent malware on the system and in many cases requires the user to pay in order to remove the alleged malware. Effects: As with all malware, LizaMoon is easier for a user to deal with by avoiding it than by attempting to repair the damage it causes after the fact. Fortunately, LizaMoon is easy for most users to avoid: the software requires the user to actively participate in downloading and installing it. Indeed, to become infected, a user must give permission to the software four times. LizaMoon asks the user to install a piece of rogue antivirus software to remove various non-existent "viruses" from the PC. The rogue AV software that is installed is called Windows Stability Center. As of April 1, 2011, the downloaded file was detected by only 13 of 43 anti-virus engines, according to VirusTotal.
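LizaMoon's delivery mechanism was SQL injection. The snippet below is a generic, self-contained illustration of that defect class and of the standard mitigation (parameterized queries); the table, column, and input are invented for the example and are not LizaMoon's actual payload or target schema:

```python
# Generic illustration of the SQL-injection defect class that attacks
# like LizaMoon exploit, plus the standard mitigation. The schema and
# input below are invented; this is not LizaMoon's actual payload.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER, body TEXT)")

# Untrusted input that tries to smuggle a second SQL statement.
user_input = "1; UPDATE pages SET body = body || '<script>...</script>'"

# Vulnerable pattern: splicing untrusted input into the SQL text itself.
unsafe_sql = "SELECT body FROM pages WHERE id = " + user_input

# Mitigation: a parameterized query keeps the input out of the SQL
# grammar, so it is treated as a value, never as executable SQL.
rows = conn.execute("SELECT body FROM pages WHERE id = ?", (user_input,))
print(rows.fetchall())   # [] -- the input matched nothing and ran nothing
```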
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Radical 58** Radical 58: Radical 58 or radical snout (彐部), meaning "pig snout", is one of the 31 Kangxi radicals (214 radicals in total) composed of three strokes. In the Kangxi Dictionary, there are 25 characters (out of 49,030) to be found under this radical. 彐 is also the 50th indexing component in the Table of Indexing Chinese Character Components predominantly adopted by Simplified Chinese dictionaries published in mainland China. Two associated indexing components, ⺕ and 彑, are affiliated to the principal indexing component 彐.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Plastic limit theorems** Plastic limit theorems: Plastic limit theorems in continuum mechanics provide two bounds that can be used to determine whether material failure is possible by means of plastic deformation for a given external loading scenario. According to the theorems, to find the range within which the true solution must lie, it is necessary to find both a stress field that balances the external forces and a velocity field or flow pattern that corresponds to those stresses. If the upper and lower bounds provided by the velocity field and stress field coincide, the exact value of the collapse load is determined. Limit theorems: The two plastic limit theorems apply to any elastic-perfectly plastic body or assemblage of bodies. Lower limit theorem: If an equilibrium distribution of stress can be found which balances the applied load and nowhere violates the yield criterion, the body (or bodies) will not fail, or will be just at the point of failure. Upper limit theorem: The body (or bodies) will collapse if there is any compatible pattern of plastic deformation for which the rate of work done by the external loads exceeds the internal plastic dissipation.
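Stated a little more formally (a standard formulation; the notation below is ours, since the article gives only the verbal version), the two theorems bracket the true collapse load multiplier:

```latex
% Bound structure implied by the two theorems above; the symbols
% (\lambda, F_i, \sigma_{ij}, v_i, D) are our own notation.
\[
  \lambda^{-} \;\le\; \lambda_{c} \;\le\; \lambda^{+}
\]
% Lower bound: any stress field \sigma_{ij} in equilibrium with the
% loads \lambda^{-} F_i and satisfying the yield criterion
% f(\sigma_{ij}) \le 0 everywhere gives a safe multiplier \lambda^{-}.
% Upper bound: any compatible plastic velocity field v_i gives
% \lambda^{+} by equating external work rate to internal dissipation:
\[
  \lambda^{+} \int_{S} F_i\, v_i \,\mathrm{d}S
  \;=\; \int_{V} D\!\left(\dot{\varepsilon}_{ij}\right) \mathrm{d}V
\]
```

If a stress field and a velocity field can be found whose multipliers coincide, that common value is the exact collapse load, as the article notes.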
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Epi-isozizaene synthase** Epi-isozizaene synthase: epi-Isozizaene synthase (EC 4.2.3.37, SCO5222 protein) is an enzyme with the systematic name (2E,6E)-farnesyl-diphosphate diphosphate-lyase ((+)-epi-isozizaene-forming). This enzyme catalyses the following chemical reaction: (2E,6E)-farnesyl diphosphate ⇌ (+)-epi-isozizaene + diphosphate. This enzyme requires Mg2+ for activity.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DICOMweb** DICOMweb: DICOMweb is a term applied to the family of RESTful DICOM services defined for sending, retrieving and querying for medical images and related information. The intent is to provide a lightweight, mobile-device- and web-browser-friendly mechanism for accessing images, which can be implemented by developers who have minimal familiarity with the DICOM standard and which uses consumer-application-friendly mechanisms like HTTP, JSON and media types (like "image/jpeg") to the maximum extent possible. The standard is formally defined in DICOM PS3.18 Web Services. The DICOMweb services are distinguished from other DICOM web services by the suffix "-RS", indicating their RESTful nature. DICOMweb: The family consists primarily of: WADO-RS for retrieval of DICOM PS3.10 files, meta data in XML or JSON forms, bulk data separated from the meta data, and rendered consumer-format images; STOW-RS for storage (sending) of DICOM PS3.10 files or separated meta data and bulk data; and QIDO-RS for querying collections (databases, registries) of DICOM objects. A key feature of the WADO-RS services is the ability to retrieve entire studies and series rather than needing repeated requests for individual instances. DICOMweb: Other services include support for work lists (UPS-RS) and retrieval of server capabilities. Examples: Some very simple examples of URL syntax and meta data encoding are described in the DICOMweb Cheatsheet. Comparison with conventional DICOM services: Roughly speaking, the DICOMweb services correspond to the conventional DIMSE DICOM services: QIDO-RS to C-FIND, WADO-RS to the C-GET/C-MOVE retrieval services, and STOW-RS to C-STORE. Indeed, apart from the different encoding of the request, packaging of the response and protocol used, the services are sufficiently similar that a DICOMweb proxy to a conventional implementation of DICOM DIMSE services can be implemented (this is by design). The conventional DIMSE DICOM services do actually have capabilities that correspond to the instance- and frame-level retrieval (Instance and Frame Level Retrieve) and separate meta data retrieval capabilities (Composite Instance Retrieve Without Bulk Data) of DICOMweb, though these are not nearly as widely implemented as the traditional study-root study, series and image retrieval services. History: Earlier DICOM web services used either URL parameters (WADO-URI) or SOAP-based web services (WADO-WS) to retrieve DICOM objects. History: The original Web Access to DICOM Persistent Objects (WADO) standard was a joint effort by DICOM and an ISO working group and was released in 2003 as DICOM Supplement 85 and ISO 17432. The word "persistent" in the name was later dropped. The ISO standard has not been maintained as DICOM PS3.18 has evolved over time. The suffix "-URI" was later added to distinguish what is now called WADO-URI from the newer services. WADO-URI became popular for providing access to both original DICOM files and server-side rendered versions of them, and accordingly was included in the IHE XDS-I.b profile as one of its required transport mechanisms. After IHE had gone through several revisions of the XDS-I profile, it defined a SOAP-based mechanism for transferring images (the RAD-69 transaction), and this was added to DICOM retrospectively, extended, and became WADO-WS, which was subsequently retired since it was incomplete and not being maintained.
History: As part of DICOM Supplement 118 - Application Hosting, finalized in 2010, an XML "native DICOM model" was introduced that defined bi-directional transcoding of DICOM datasets between the conventional binary representation and an XML representation. History: An independent group of developers defined an alternative transport mechanism, Medical Imaging Network Transport (MINT), and proposed it as an extension to DICOM. Though MINT was not adopted in its entirety, the developers were assimilated by DICOM WG 27, and several key features of MINT were defined as extensions to DICOM PS3.18. Historical information about MINT can be found at the original MINT Google Code site. History: The current set of DICOM web services in DICOM PS3.18, which include DICOMweb, have evolved (or are being extended) through the following supplements: DICOM Supplement 85 - Web Access to DICOM Objects (WADO); DICOM Supplement 148 - Web Access to DICOM Persistent Objects by Means of Web Services Extension of the Retrieve Service (WADO Web Service); DICOM Supplement 161 - WADO by means of RESTful Services; DICOM Supplement 163 - Store Over the Web by RESTful Services (STOW-RS); DICOM Supplement 166 - Query based on ID for DICOM Objects by RESTful Services (QIDO-RS); DICOM Supplement 170 - Server Options RESTful Services; DICOM Supplement 171 - Unified Procedure Step by REpresentational State Transfer (REST) Services; DICOM Supplement 174 - RESTful Rendering; DICOM Supplement 183 - PS3.18 Web Services Re-Documentation; DICOM Supplement 193 - REST Notifications; DICOM Supplement 194 - RESTful Services for Non-Patient Instances; DICOM Supplement 198 - Retirement of WADO-WS; DICOM Supplement 203 - Thumbnail Resources for DICOMweb; DICOM Supplement 211 - DICOMweb Support for Retrieve via application/zip; DICOM Supplement 228 - DICOMweb API for Server-Side Volumetric Rendering. Take care to always use the current DICOM standard, though, rather than implementing from any supplement, since corrections and additions have been incorporated over time. Implementations: Servers include DCM4CHEE Archive, Orthanc, Medical Connections, and OsiriX 8.5 or higher; clients include the OHIF Viewer, OsiriX Viewer 8.5 or higher, and Orthanc.
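As a concrete illustration of the QIDO-RS and WADO-RS interaction style described above, the sketch below issues the two requests against a hypothetical DICOMweb root URL; the base URL, patient ID, and absence of authentication are assumptions for the example:

```python
# Hedged sketch of a QIDO-RS search followed by a WADO-RS retrieve.
# The server URL and PatientID are invented; a real deployment will
# typically also require authentication.
import requests

BASE = "https://pacs.example.com/dicomweb"   # assumed DICOMweb root

# QIDO-RS: search for the studies of one patient, as DICOM JSON.
studies = requests.get(
    f"{BASE}/studies",
    params={"PatientID": "12345"},
    headers={"Accept": "application/dicom+json"},
).json()

# WADO-RS: retrieve the whole first study as a multipart DICOM response
# (no per-instance requests needed, as the text notes).
study_uid = studies[0]["0020000D"]["Value"][0]   # StudyInstanceUID (0020,000D)
resp = requests.get(
    f"{BASE}/studies/{study_uid}",
    headers={"Accept": 'multipart/related; type="application/dicom"'},
)
print(resp.status_code, resp.headers.get("Content-Type"))
```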
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Offshore custom software development** Offshore custom software development: In software engineering, offshore custom software development consists of offshoring the software development process to a country where production costs are lower, thus decreasing budget spending. Background: Early days Since the 1960s and the early days of Silicon Valley, technology pioneers have developed offshoring centers in the state of Jalisco, Mexico. In 1996, General Electric offshored its IT for the first time when it opened its own center in India. Given the rapid growth of this sector, several companies started to use offshore development in China, India and other countries with a lower cost per developer. In the early 2000s, the leading countries in offshore custom software development were Russia, India, Ukraine and China. For the Western world, the time difference when working with India and China allowed work to be done around the clock, adding a competitive advantage. Background: 2008 Recession During the Great Recession, offshore software development spending fell. During his 2008 presidential campaign, Barack Obama stated, "I will stop giving tax breaks to companies that ship jobs overseas and I will start giving them to companies that create good jobs right here in America." This led to a $3,000 tax break for US companies for each hire made onshore instead of offshore. In 2010, the market picked up again. In 2011, General Electric, whose CEO had a seat on the President's Council on Jobs and Competitiveness, announced the creation of 11,000 onshore IT jobs. Background: Globalization By the mid-2010s, the debate over onshore versus offshore was becoming irrelevant, as all major software outsourcing providers had shifted to worldwide operations and integrated offshoring into a seamless offer for their clients. With a global appetite for software, demand for developers far outstrips supply, especially as more companies stake their futures on digital transformation. Background: New agile and DevOps development models called for a tighter relationship between the client and the offshoring provider, making major long-distance offshoring destinations (Russia, India, China) unfit for the job. Nearshoring, offshoring to a very nearby country, has gained increasing popularity among the CIO and CTO community. The USA is increasing its IT purchasing in Latin American countries, and Europe in Poland and other small Eastern European countries such as Lithuania. North Korea also appeared on the map of IT offshoring destinations, having considerable engineering resources and a low price for the quality offered. By 2010, India had started to consider China a threatening competitor. In September 2010, the French company Capgemini bought the Brazilian software developer CPM Braxis for $330 million to significantly grow its offshore capacity. In November 2010, Hewlett-Packard confirmed a $1 billion investment to develop 6 major offshore centers in Bulgaria, China, Costa Rica, India, Malaysia and the Philippines. In 2013, China's offshore software market reached $5.05 billion. By 2015, India was considering repatriating most of its outsourcing activities to move to a new generation of automated software development. In February 2016, Apple Inc. opened its first offshore software development center in India.
Description: Offshore software development can include the following services: product design and architecture, coding and testing; and development of SaaS, Internet/intranet solutions, e-commerce, CRM, project management and other special web services (including Web 2.0 solutions). Several new Web 2.0 platforms and sites are developed offshore while the entrepreneurs and management are located in Western countries such as the US, UK and EU. The advantages mostly revolve around better cost control over the process, which means lower cash outflow (often the biggest struggle for startups). Industry: International consulting firms include Accenture, Atos, Capgemini, Cognizant, IBM Global Services, Infosys and Tata Consultancy Services.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**JEDEC memory standards** JEDEC memory standards: The JEDEC memory standards are the specifications for semiconductor memory circuits and similar storage devices promulgated by the Joint Electron Device Engineering Council (JEDEC) Solid State Technology Association, a semiconductor trade and engineering standardization organization. JEDEC Standard 100B.01 specifies common terms, units, and other definitions in use in the semiconductor industry. JESD21-C specifies semiconductor memories, from the 256-bit static RAM to DDR4 SDRAM modules. JEDEC standardization goals: The Joint Electron Device Engineering Council characterizes its standardization efforts as follows: JEDEC standards and publications are designed to serve the public interest through eliminating misunderstandings between manufacturers and purchasers, facilitating interchangeability and improvement of products, and assisting the purchaser in selecting and obtaining with minimum delay the proper product for use by those other than JEDEC members, whether the standard is to be used either domestically or internationally. JEDEC Standard 100B.01: The December 2002 JEDEC Standard 100B.01 is entitled Terms, Definitions, and Letter Symbols for Microcomputers, Microprocessors, and Memory Integrated Circuits. The purpose of the standard is to promote the uniform use of symbols, abbreviations, terms, and definitions throughout the semiconductor industry. Units of information The specification defines the two common units of information: The bit (b) is the smallest unit of information in the binary numeration system and is represented by the digits 0 and 1. The byte (B) is a binary character string typically operated upon as one unit. It is usually shorter than a computer word. Unit prefixes for semiconductor storage capacity The specification contains citations of the commonly used prefixes kilo, mega, and giga "as a prefix to units of semiconductor storage capacity" to designate multiples of the units. The specification cites three prefixes as follows, with the note that these prefixes are included in the document only to reflect common usage: kilo (K): a multiplier equal to 1024 (2^10); mega (M): a multiplier equal to 1,048,576 (2^20, or K^2 where K = 1024); JEDEC Standard 100B.01: giga (G): a multiplier equal to 1,073,741,824 (2^30, or K^3 where K = 1024). It refers to the IEEE/ASTM SI 10-1997 standard as stating that "this practice frequently leads to confusion and is deprecated". The document further refers to the description of the IEC binary prefixes in Amendment 2 of IEC 60027-2, "Letter symbols to be used in electrical technology", for an alternate system of prefixes and includes a table of the IEC prefixes in the note. However, the JEDEC specification does not explicitly include the IEC prefixes in the list of general terms and definitions. JEDEC Standard 100B.01: The document notes that these prefixes are used in their decimal sense for serial communication data rates measured in bits. JEDEC Standard 100B.01: The JEDEC terms dictionary further includes definitions for the binary prefixes kibi (Ki), mebi (Mi), gibi (Gi), and tebi (Ti) as powers of 2, and kilo, mega, giga, and tera as powers of 10. For example, tebi (Ti, from "tera" + "binary") denotes 2^40 = (2^10)^4 = 1,099,511,627,776, whereas tera denotes (10^3)^4 = 10^12. The JEDEC DDR3 SDRAM standard JESD-79-3f uses Mb and Gb to specify binary memory capacity: "The purpose of this Standard is to define the minimum set of requirements for JEDEC compliant 512 Mb through 8 Gb for x4, x8, and x16 DDR3 SDRAM devices."
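The numerical difference between the three prefix systems named above can be made concrete in a few lines. The sketch below is our own encoding of the cited definitions:

```python
# The three prefix systems cited in JEDEC Standard 100B.01, encoded
# directly from the definitions above.
JEDEC_COMMON = {"K": 2**10, "M": 2**20, "G": 2**30}               # binary, common usage
IEC_BINARY   = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}
SI_DECIMAL   = {"k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}

# A "512 Mb" DDR3 device, reading Mb in the JEDEC (binary) sense:
bits = 512 * JEDEC_COMMON["M"]
print(bits)                          # 536870912 bits

# The same quantity read with the decimal (SI) prefix instead:
print(bits / SI_DECIMAL["M"], "Mb")  # 536.870912 Mb, a ~4.9% discrepancy
```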
JESD21-C: The standard JESD21-C: Configurations for Solid State Memories is maintained by JEDEC committee JC41. This committee consists of members from manufacturers of microprocessors, memory ICs, memory modules, and other components, as well as component integrators, such as video card and personal computer makers. Standard 21 is published in loose-leaf binder format to accommodate frequent updates. JESD21-C: The documentation of modern memory modules, such as the standards for the memory ICs and a reference design of the module, requires over one hundred pages. The standards specify the physical and electrical characteristics of the modules, and include the data for computer simulations of the memory module operating in a system. Memory modules of the DDR2 SDRAM type are available for laptop, desktop, and server computers in a wide selection of capacities and access speeds. The standards specify memory module label formats for end-user markets. For example: 1GB 2Rx4 PC2-3200P-333-11-D2 is a 1 GB DDR2 registered DIMM, with the address/command parity function, using 2 ranks of x4 SDRAMs, operational to PC2-3200 performance with CAS latency = 3, tRCD = 3, tRP = 3, using JEDEC SPD revision 1.1, with raw card reference design file D revision 2 used for the assembly.
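The label format just described can be unpacked mechanically. The following sketch uses our own regular expression, covering only the single example label decoded above (real-world labels have more variants than this pattern handles):

```python
# Hedged sketch: splitting the example DDR2 module label into the fields
# explained in the text. The regex is ours and handles only this shape.
import re

LABEL = re.compile(
    r"(?P<capacity>\S+)\s+"
    r"(?P<ranks>\d+)Rx(?P<width>\d+)\s+"
    r"PC2-(?P<bandwidth>\d+)(?P<grade>[A-Z]?)-"
    r"(?P<cl>\d)(?P<trcd>\d)(?P<trp>\d)-"
    r"(?P<spd>\d\d)-"
    r"(?P<card>[A-Z])(?P<cardrev>\d)"
)

f = LABEL.match("1GB 2Rx4 PC2-3200P-333-11-D2").groupdict()
print(f"{f['capacity']} module, {f['ranks']} ranks of x{f['width']} SDRAMs")
print(f"PC2-{f['bandwidth']}{f['grade']}: CL={f['cl']}, tRCD={f['trcd']}, tRP={f['trp']}")
print(f"SPD revision {f['spd'][0]}.{f['spd'][1]}, raw card {f['card']} rev {f['cardrev']}")
```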
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Katherine Albrecht** Katherine Albrecht: Katherine Albrecht is a consumer privacy advocate, Vice President (VP) of Startpage.com, and a spokesperson against radio-frequency identification (RFID). Albrecht devised the term "spy chips" to describe RFID tags such as those embedded in passport cards and certain enhanced United States driver's licenses. Katherine Albrecht holds a Doctor of Education degree from Harvard University. She is a resident of Nashua, New Hampshire. Katherine Albrecht: Albrecht was interviewed about RFID chips in Aaron Russo's 2006 documentary America: From Freedom to Fascism. Publications: Books Albrecht and Liz McIntyre (CASPIAN's communications director) co-authored the book Spychips: How Major Corporations and Government Plan to Track Your Every Move, which won the November 2005 Lysander Spooner Award for advancing the literature of liberty. The book lays out the potential implications of RFID on privacy and civil liberties. RFID industry representatives have criticized it, claiming the authors exaggerate some RFID privacy threats. In a lengthy rebuttal, Albrecht asked why critics don't "mention sworn patent documents from IBM describing ways to secretly follow innocent people in libraries, theaters, and public restrooms through the RFID tags in their clothes and belongings? Where is […] outrage over BellSouth's patent-pending plans to pick through our garbage and skim the data contained in the RFID tags we discard?" Articles and papers Albrecht, Katherine. "Supermarket Cards: The Tip of the Retail Surveillance Iceberg." Denver University Law Review, Volume 79, Issue 4, Summer 2002, pp. 534–539 and 558–565. Publications: Position Paper on the Use of RFID in Consumer Products. Co-authored with Liz McIntyre and Beth Givens. November 14, 2003. "RFID: The Doomsday Scenario." In: RFID: Applications, Security, and Privacy, eds. S. Garfinkel and B. Rosenberg. New Jersey: Addison Wesley, 2006, pp. 259–273. "RFID: The Big Brother Bar Code." (Co-authored with Liz McIntyre.) ALEC Policy Forum, Winter 2004, Volume 6, Number 3, pp. 49–54. Radio talk show host: She hosted a two-hour daily program called Uncovering the Truth with Katherine Albrecht on the We The People Radio Network (WTPRN) from April 2007 until the network ceased all programming in October 2008. Albrecht later broadcast The Dr. Katherine Albrecht Show on the GCN Radio network until 2016. Religious beliefs: Albrecht believes that RFID chips and other emerging technologies could lead to the Mark of the Beast. She has written a children's book called I Won't Take the Mark: A Bible Book and Contract for Children.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ACOX1** ACOX1: Peroxisomal acyl-coenzyme A oxidase 1 is an enzyme that in humans is encoded by the ACOX1 gene. The protein encoded by this gene is the first enzyme of the fatty acid beta-oxidation pathway, which catalyzes the desaturation of acyl-CoAs to 2-trans-enoyl-CoAs. It donates electrons directly to molecular oxygen, thereby producing hydrogen peroxide. Defects in this gene result in pseudoneonatal adrenoleukodystrophy, a disease that is characterized by accumulation of very long chain fatty acids. Alternatively spliced transcript variants encoding different isoforms have been identified.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Caspase 6** Caspase 6: Caspase-6 is an enzyme that in humans is encoded by the CASP6 gene. CASP6 orthologs have been identified in numerous mammals for which complete genome data are available. Unique orthologs are also present in birds, lizards, lissamphibians, and teleosts. Caspase-6 has known functions in apoptosis, the early immune response, and neurodegeneration in Huntington's and Alzheimer's disease. Function: This gene encodes a protein that is a member of the cysteine-aspartic acid protease (caspase) family. Sequential activation of caspases plays a central role in the execution phase of cell apoptosis. Caspases exist as inactive proenzymes that undergo proteolytic processing at conserved aspartic residues to produce two subunits, large and small, that dimerize to form the active enzyme. This protein is processed by caspases 7, 8 and 10, and is thought to function as a downstream enzyme in the caspase activation cascade. Caspase-6 can also undergo self-processing without other members of the caspase family. Alternative splicing of this gene results in two transcript variants that encode different isoforms. Caspase-6 plays a role in the early immune response via de-repression: it reduces the expression of the immunosuppressant cytokine interleukin-10 and cleaves the macrophage-suppressing IRAK-M. With respect to neurodegeneration, caspase-6 cleaves HTT in Huntington's disease and APP in Alzheimer's disease, resulting in both cases in aggregation of the protein fragments. Interactions: Caspase 6 has been shown to interact with Caspase 8.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tonofibril** Tonofibril: Tonofibrils are cytoplasmic protein structures in epithelial tissues that converge at desmosomes and hemidesmosomes. They consist of fine fibrils in epithelial cells that are anchored to the cytoskeleton. They were discovered by Rudolf Heidenhain, and first described in detail by Louis-Antoine Ranvier in 1897. Composition: Tonofilaments are the keratin intermediate filaments that make up tonofibrils in epithelial tissue. In epithelial cells, tonofilaments loop through desmosomes. Advances in electron microscopy now allow tonofilaments to be visualized more clearly. The protein filaggrin is believed to be synthesized as a giant precursor protein, profilaggrin (>400 kDa in humans). When filaggrin binds to keratin intermediate filaments, the keratin aggregates into macrofibrils.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Comedown (drugs)** Comedown (drugs): The comedown, or crashing (also "down", "low", or sometimes "crash"), is a phase of drug withdrawal that involves the deterioration in mood and energy that occurs when a psychoactive drug, typically a stimulant, clears from the bloodstream. The improvement and deterioration of mood (euphoria and dysphoria) are represented in the cognitive schema as high and low elevations; thus, after the drug has elevated the mood (a state known as a high), there follows a period of coming back down, which for stimulants often has a character distinct from withdrawal proper. Generally, a comedown can happen to anyone as a transient symptom, but in people who are dependent on the drug (especially those addicted to it), it is an early symptom of withdrawal and thus can be followed by others. Comedown (drugs): Various drug classes, most especially stimulants and, to a lesser degree, opioids and sedatives, are subject to comedowns. A milder analogous mood cycle can happen even with blood sugar levels (thus sugar highs and sugar lows), which is especially relevant to people with diabetes mellitus and to parents and teachers managing children's behavior, as well as in adults with ADHD, although the notion of a "sugar high" has not been verified in scientific studies and appears to be a form of confirmation bias or placebo effect. The use of caffeine may also be followed by periods of low energy and mood as its effects wear off. Stimulant comedowns are unique in that they often appear very abruptly after a period of focus or high, and are typically more intensely dysphoric than the phase of withdrawal that follows complete elimination of the drug from the bloodstream. Besides general dysphoria, this phase can be marked by frustration, anger, anhedonia, social withdrawal, and other symptoms characteristic of a milder mixed episode in bipolar disorder. Alertness and other general stimulant effects are still present. MDMA: For example, in an MDMA ("ecstasy" and "molly") comedown, if the user experiences severe, persisting emotional distress, such as panic attacks, severe generalized anxiety, or insomnia following an MDMA session, a physician may prescribe a benzodiazepine (specifically, lorazepam) and/or a sleep aid (e.g., zolpidem) to alleviate those effects.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Entrée** Entrée: An entrée (French: [ɑ̃tʁe]), in modern French table service and that of much of the English-speaking world, is a dish served before the main course of a meal. Outside North America, it is generally synonymous with the terms hors d'oeuvre, appetizer, or starter. It may be the first dish served, or it may follow a soup or other small dish or dishes. Entrée: In the United States and parts of Canada, the term entrée instead refers to the main dish or the only dish of a meal. Early use of the term: The word entrée as a culinary term first appears in print around 1536, in the Petit traicté auquel verrez la maniere de faire cuisine, in a collection of menus at the end of the book. There, the first stage of each meal is called the entree de table (entrance to the table); the second stage consists of potaiges (foods boiled or simmered "in pots"); the third consists of one or more services de rost (meat or fowl "roasted" in dry heat); and the last is the issue de table (departure from the table). These four stages of the meal appear consistently in this order in all the books that derive from the Petit traicté. The terms entree de table and issue de table are organizing words, "describing the structure of a meal rather than the food itself". The terms potaiges and rost indicate cooking methods but not ingredients. The menus, though, give some idea of both the ingredients and the cooking methods that were characteristic of each stage of the meal. Early use of the term: Sausages, offal, and raw "watery" fruits (oranges, plums, peaches, apricots, and grapes) were apparently considered uniquely appropriate for starting the meal, as those foods appear only in the entree de table. Other dishes considered appropriate for the entree stage may also appear in later stages of the meal, such as venison cooked in various ways (in the entree, potaiges, and rost services) and savory pies and sauced meats (in the entree and rost services). The distribution of dishes is very similar to that of the menus in the Ménagier de Paris, written 150 years before the Petit traicté. "Classical Order" of service: The stages of the meal underwent several significant changes between the mid-16th and mid-17th century. Notably, the entrée became the second stage of the meal and potage became the first. At this point, the term "entrée" had lost its literal meaning and had come to refer to a certain type of dish, unrelated to its place in the meal. The cookbooks and dictionaries of the 17th and 18th centuries rarely discuss directly the composition of the dishes for each stage of the meal, but they routinely designate recipes or include lists of dishes appropriate to each stage. Nevertheless, entrées and the dishes of the other stages of the meal can be distinguished from each other by certain characteristics, such as their ingredients, cooking methods, and serving temperatures. The distinct characteristics of the entrée were at first loosely observed, or perhaps more accurately, the "rules" were in a formative stage for several decades. By the early 18th century, though, certain ingredients and cooking methods were increasingly confined to the entrée stage of the meal. "Classical Order" of service: In the 17th, 18th, and 19th centuries, entrées, on meat days, included most butchers' meats (but not ham), suckling pig, fowl, furred and feathered game, and offal. Eggs, on meat days, were never served as entrées; they were served only as entremets.
Vegetables often made up part of the sauce or garnish, but entrées were always meat dishes; vegetable dishes were served only as entremets. On lean days, fish replaced meat and fowl in every stage of the meal. Even on lean days, few entrées were composed only of vegetables. During Lent, though, vegetable entrées ("entrées en racines", encompassing all vegetables, not just "roots") were sometimes served. Moist cooking methods were characteristic of the entrée stage of the meal, typical preparations being sautés, ragoûts, and fricassées. Meat or fowl (but not fish) might be roasted, but it was first wrapped in paper, or stuffed with a forcemeat, or barded with herbs or anchovies, or finished in a sauce, or prepared in some other way to keep the dish from browning and crisping like a true roast. Savory pies and pastries were baked in dry heat, but the enclosed meat cooked in its own steam and juices. All entrées were served hot, and this was a salient feature of entrées until the 19th century. In the mid-18th century, entrées were increasingly divided into new categories based on the content and preparation of the dish. "Hors d'œuvres", which, in the late 17th century, were served at several points during the meal, were considered a type of entrée in the 18th century, but by the 19th century, they had become a distinct stage of the meal. Large, whole joints of meat (usually beef or veal) and very large fowl (turkey and geese) were categorized as grandes or grosses entrées. When roasted, those whole joints and fowl were called "spit-roasted entrées" (entrées de broche); they were always served with a sauce to distinguish them from true roasts. When boiled, a joint of beef was called "le bouilli"; this was generally the first of the entrées consumed in the meal, after the potages and hors d'œuvres. In the late 18th century, the practice arose of removing the empty soup tureens and replacing them with additional grosses entrées or entrées de broche; these replacement dishes were commonly called "relevés"; they were the last of the entrées consumed at the meal, though they often appear on menus right after the potages. Taken together, all these bulky dishes were called "substantial entrées" (fortes entrées). The most numerous of the entrées at any meal were the "ordinary entrées" (entrées ordinaires), consumed between the bouilli and the relevés. In composition, they were distinguished from the fortes entrées by the relatively small size of their ingredients. Small fowl could be served whole; but large fowl and large joints of meat were cut into pieces or fillets. Despite the designation "ordinary", these entrées were much more elaborate and refined than fortes entrées. Changes in the 19th century: In the 19th century, due at least in part to the collapse of the church's authority in France, rules governing meat and lean days were followed irregularly. In particular, fish was commonly served on meat days, providing even more variety to the meal. Fish came to be considered a classic relevé, and in some cases was served as a separate "fish course". After the 1820s, the bouilli was no longer routinely served at fine dinners. In addition, cold entrées became increasingly common over the course of the 19th century, a marked change from earlier practices. Following the widespread adoption of service à la russe in the 1860s, dishes were presented one after another rather than being placed on the table for guests to select what they wanted.
In this new type of service, the ordinary entrées were often served after the relevés, particularly in France; in England, the ordinary entrées more commonly preceded the relevés, as they had in the 18th century. At this point, the two terms had completely lost their literal meanings. "Entrée" referred to those entrées served in slices, fillets, or small pieces; "relevé" referred to those entrées served as large joints, whole birds, or whole fish. Distinctions between the various types of entrées (grosses, grandes, de broche, relevé) had fallen out of use by the end of the 19th century. The entrée as a stage of a multi-course meal persisted in some circles after the Great War; but with the broad cultural transformations of the 20th century, the word lost its connection to its traditional meaning. Modern French cuisine: In France, the modern meaning of "entrée" on a restaurant menu is the small course that precedes the main course in a three-course meal, i.e., the course which in British usage is often called the "starter" and in American usage the "appetizer". Thus a typical modern French three-course meal in a restaurant consists of "entrée" (first course or starter (UK); appetizer (U.S.)), followed by the "plat" or "plat principal" (the main course), and then dessert or cheese. This sequence is commonly found in prix fixe menus.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cloning** Cloning: Cloning is the process of producing individual organisms with identical genomes, either by natural or artificial means. In nature, some organisms produce clones through asexual reproduction; this reproduction of an organism by itself without a mate is known as parthenogenesis. In the field of biotechnology, cloning is the process of creating clones of organisms, of cells, and of DNA fragments. Cloning: The artificial cloning of organisms, sometimes known as reproductive cloning, is often accomplished via somatic-cell nuclear transfer (SCNT), a cloning method in which a viable embryo is created from a somatic cell and an egg cell. In 1996, Dolly the sheep achieved notoriety for being the first mammal cloned from a somatic cell. Another example of artificial cloning is molecular cloning, a technique in molecular biology in which a single living cell is used to clone a large population of cells that contain identical DNA molecules. Cloning: In bioethics, there are a variety of ethical positions regarding the practice and possibilities of cloning. The use of embryonic stem cells, which can be produced through SCNT, in some stem cell research has attracted controversy. Cloning has been proposed as a means of reviving extinct species. In popular culture, the concept of cloning—particularly human cloning—is often depicted in science fiction; depictions commonly involve themes related to identity, the recreation of historical figures or extinct species, or cloning for exploitation (e.g. cloning soldiers for warfare). Etymology: Coined by Herbert J. Webber, the term clone derives from the Ancient Greek word κλών (klōn), twig, referring to the process whereby a new plant can be created from a twig. In botany, the term lusus was used. In horticulture, the spelling clon was used until the early twentieth century; the final e came into use to indicate the vowel is a "long o" instead of a "short o". Since the term entered the popular lexicon in a more general context, the spelling clone has been used exclusively. Natural cloning: Natural cloning is the production of clones without the involvement of genetic engineering techniques. It may occur accidentally in the case of identical twins, which are formed when a fertilized egg splits, creating two or more embryos that carry almost identical DNA. It may also be part of asexual reproduction, which is a process where a single parent organism produces genetically identical offspring by itself. Cloning is a natural form of reproduction that has allowed life forms to spread for hundreds of millions of years. It is a reproduction method used by plants, fungi, and bacteria, and is also the way that clonal colonies reproduce themselves. Examples of these organisms include blueberry plants, hazel trees, the Pando trees, the Kentucky coffeetree, Myrica, and the American sweetgum. Molecular cloning: Molecular cloning refers to the process of making multiple copies of a molecule, most often DNA. Cloning is commonly used to amplify DNA fragments containing whole genes, but it can also be used to amplify any DNA sequence such as promoters, non-coding sequences and randomly fragmented DNA. It is used in a wide array of biological experiments and practical applications ranging from genetic fingerprinting to large scale protein production. Occasionally, the term cloning is misleadingly used to refer to the identification of the chromosomal location of a gene associated with a particular phenotype of interest, such as in positional cloning.
In practice, localization of the gene to a chromosome or genomic region does not necessarily enable one to isolate or amplify the relevant genomic sequence. To amplify any DNA sequence in a living organism, that sequence must be linked to an origin of replication, which is a sequence of DNA capable of directing the propagation of itself and any linked sequence. However, a number of other features are needed, and a variety of specialised cloning vectors (small pieces of DNA into which a foreign DNA fragment can be inserted) exist that allow protein production, affinity tagging, single-stranded RNA or DNA production and a host of other molecular biology tools. Molecular cloning: Cloning of any DNA fragment essentially involves four steps: fragmentation (breaking apart a strand of DNA); ligation (gluing together pieces of DNA in a desired sequence); transfection (inserting the newly formed pieces of DNA into cells); and screening/selection (selecting out the cells that were successfully transfected with the new DNA). Although these steps are invariable among cloning procedures, a number of alternative routes can be selected; these are summarized as a cloning strategy. Molecular cloning: Initially, the DNA of interest needs to be isolated to provide a DNA segment of suitable size. Subsequently, a ligation procedure is used where the amplified fragment is inserted into a vector (piece of DNA). The vector (which is frequently circular) is linearised using restriction enzymes, and incubated with the fragment of interest under appropriate conditions with an enzyme called DNA ligase. Following ligation, the vector with the insert of interest is transfected into cells. A number of alternative techniques are available, such as chemical sensitisation of cells, electroporation, optical injection and biolistics. Finally, the transfected cells are cultured. As the aforementioned procedures are of particularly low efficiency, there is a need to identify the cells that have been successfully transfected with the vector construct containing the desired insertion sequence in the required orientation. Modern cloning vectors include selectable antibiotic resistance markers, which allow only cells in which the vector has been transfected to grow. Additionally, the cloning vectors may contain colour selection markers, which provide blue/white screening (alpha-factor complementation) on X-gal medium. Nevertheless, these selection steps do not absolutely guarantee that the DNA insert is present in the cells obtained. Further investigation of the resulting colonies is required to confirm that cloning was successful. This may be accomplished by means of PCR, restriction fragment analysis and/or DNA sequencing. Cell cloning: Cloning unicellular organisms Cloning a cell means to derive a population of cells from a single cell. In the case of unicellular organisms such as bacteria and yeast, this process is remarkably simple and essentially only requires the inoculation of the appropriate medium. However, in the case of cell cultures from multi-cellular organisms, cell cloning is an arduous task as these cells will not readily grow in standard media. Cell cloning: A useful tissue culture technique used to clone distinct lineages of cell lines involves the use of cloning rings (cylinders).
In this technique, a single-cell suspension of cells that have been exposed to a mutagenic agent or to a drug used to drive selection is plated at high dilution to create isolated colonies, each arising from a single, potentially clonally distinct cell. At an early growth stage when colonies consist of only a few cells, sterile polystyrene rings (cloning rings), which have been dipped in grease, are placed over an individual colony and a small amount of trypsin is added. Cloned cells are collected from inside the ring and transferred to a new vessel for further growth. Cell cloning: Cloning stem cells Somatic-cell nuclear transfer, popularly known as SCNT, can also be used to create embryos for research or therapeutic purposes. The most likely purpose for this is to produce embryos for use in stem cell research. This process is also called "research cloning" or "therapeutic cloning". The goal is not to create cloned human beings (called "reproductive cloning"), but rather to harvest stem cells that can be used to study human development and to potentially treat disease. While a clonal human blastocyst has been created, stem cell lines are yet to be isolated from a clonal source. Therapeutic cloning is achieved by creating embryonic stem cells in the hopes of treating diseases such as diabetes and Alzheimer's. The process begins by removing the nucleus (containing the DNA) from an egg cell and inserting a nucleus from the adult cell to be cloned. In the case of someone with Alzheimer's disease, the nucleus from a skin cell of that patient is placed into an empty egg. The reprogrammed cell begins to develop into an embryo because the egg reacts with the transferred nucleus. The embryo will become genetically identical to the patient. The embryo will then form a blastocyst, which has the potential to form/become any cell in the body. The reason SCNT is used for cloning is that somatic cells can be easily acquired and cultured in the lab. This process can also be used to add or delete specific genes in farm animals. A key point to remember is that cloning is achieved when the oocyte maintains its normal functions and, instead of using sperm and egg genomes to replicate, the donor's somatic cell nucleus is inserted into the oocyte. The oocyte will react to the somatic cell nucleus the same way it would to a sperm cell's nucleus. The process of cloning a particular farm animal using SCNT is relatively the same for all animals. The first step is to collect the somatic cells from the animal that will be cloned. The somatic cells could be used immediately or stored in the laboratory for later use. The hardest part of SCNT is removing maternal DNA from an oocyte at metaphase II. Once this has been done, the somatic nucleus can be inserted into an egg cytoplasm. This creates a one-cell embryo. The grouped somatic cell and egg cytoplasm are then introduced to an electrical current. This energy will hopefully allow the cloned embryo to begin development. The successfully developed embryos are then placed in surrogate recipients, such as a cow or sheep in the case of farm animals. SCNT is seen as a good method for producing agricultural animals for food consumption. It has successfully been used to clone sheep, cattle, goats, and pigs. SCNT is also seen as a possible solution for cloning endangered species that are on the verge of extinction. However, stresses placed on both the egg cell and the introduced nucleus can be enormous, which led to a high loss of resulting cells in early research.
For example, the cloned sheep Dolly was born after 277 eggs were used for SCNT, which created 29 viable embryos. Only three of these embryos survived until birth, and only one survived to adulthood. As the procedure could not be automated, and had to be performed manually under a microscope, SCNT was very resource intensive. The biochemistry involved in reprogramming the differentiated somatic cell nucleus and activating the recipient egg was also far from being well understood. However, by 2014 researchers were reporting cloning success rates of seven to eight out of ten, and in 2016 the Korean company Sooam Biotech was reported to be producing 500 cloned embryos per day. In SCNT, not all of the donor cell's genetic information is transferred, as the donor cell's mitochondria that contain their own mitochondrial DNA are left behind. The resulting hybrid cells retain those mitochondrial structures which originally belonged to the egg. As a consequence, clones such as Dolly that are born from SCNT are not perfect copies of the donor of the nucleus. Organism cloning: Organism cloning (also called reproductive cloning) refers to the procedure of creating a new multicellular organism, genetically identical to another. In essence this form of cloning is an asexual method of reproduction, where fertilization or inter-gamete contact does not take place. Asexual reproduction is a naturally occurring phenomenon in many species, including most plants and some insects. Scientists have made some major achievements with cloning, including the asexual reproduction of sheep and cows. There is a lot of ethical debate over whether or not cloning should be used. However, cloning, or asexual propagation, has been common practice in the horticultural world for hundreds of years. Organism cloning: Horticultural The term clone is used in horticulture to refer to descendants of a single plant which were produced by vegetative reproduction or apomixis. Many horticultural plant cultivars are clones, having been derived from a single individual, multiplied by some process other than sexual reproduction. As an example, some European cultivars of grapes represent clones that have been propagated for over two millennia. Other examples are potato and banana. Grafting can be regarded as cloning, since all the shoots and branches coming from the graft are genetically a clone of a single individual, but this particular kind of cloning has not come under ethical scrutiny and is generally treated as an entirely different kind of operation. Organism cloning: Many trees, shrubs, vines, ferns and other herbaceous perennials form clonal colonies naturally. Parts of an individual plant may become detached by fragmentation and grow on to become separate clonal individuals. A common example is in the vegetative reproduction of moss and liverwort gametophyte clones by means of gemmae. Some vascular plants e.g. dandelion and certain viviparous grasses also form seeds asexually, termed apomixis, resulting in clonal populations of genetically identical individuals. Organism cloning: Parthenogenesis Clonal derivation exists in nature in some animal species and is referred to as parthenogenesis (reproduction of an organism by itself without a mate). This is an asexual form of reproduction that is only found in females of some insects, crustaceans, nematodes, fish (for example the hammerhead shark), Cape honeybees, and lizards including the Komodo dragon and several whiptails.
The growth and development occur without fertilization by a male. In plants, parthenogenesis means the development of an embryo from an unfertilized egg cell, and is a component process of apomixis. In species that use the XY sex-determination system, the offspring will always be female. An example is the little fire ant (Wasmannia auropunctata), which is native to Central and South America but has spread throughout many tropical environments. Organism cloning: Artificial cloning of organisms Artificial cloning of organisms may also be called reproductive cloning. Organism cloning: First steps Hans Spemann, a German embryologist, was awarded a Nobel Prize in Physiology or Medicine in 1935 for his discovery of the effect now known as embryonic induction, exercised by various parts of the embryo, that directs the development of groups of cells into particular tissues and organs. In 1924 he and his student, Hilde Mangold, were the first to perform somatic-cell nuclear transfer using amphibian embryos – one of the first steps towards cloning. Organism cloning: Methods Reproductive cloning generally uses "somatic cell nuclear transfer" (SCNT) to create animals that are genetically identical. This process entails the transfer of a nucleus from a donor adult cell (somatic cell) to an egg from which the nucleus has been removed, or to a cell from a blastocyst from which the nucleus has been removed. If the egg begins to divide normally, it is transferred into the uterus of the surrogate mother. Such clones are not strictly identical, since the somatic cells may contain mutations in their nuclear DNA. Additionally, the mitochondria in the cytoplasm also contain DNA, and during SCNT this mitochondrial DNA is wholly from the cytoplasmic donor's egg; thus the mitochondrial genome is not the same as that of the nucleus donor cell from which it was produced. This may have important implications for cross-species nuclear transfer, in which nuclear-mitochondrial incompatibilities may lead to death. Organism cloning: Artificial embryo splitting or embryo twinning, a technique that creates monozygotic twins from a single embryo, is not considered in the same fashion as other methods of cloning. During that procedure, a donor embryo is split into two distinct embryos, which can then be transferred via embryo transfer. It is optimally performed at the 6- to 8-cell stage, where it can be used as an expansion of IVF to increase the number of available embryos. If both embryos are successful, it gives rise to monozygotic (identical) twins. Organism cloning: Dolly the sheep Dolly, a Finn-Dorset ewe, was the first mammal to have been successfully cloned from an adult somatic cell. Dolly was formed by taking a cell from the udder of her 6-year-old biological mother. Dolly's embryo was created by taking the cell and inserting it into a sheep ovum. It took 435 attempts before an embryo was successful. The embryo was then placed inside a female sheep that went through a normal pregnancy. She was cloned at the Roslin Institute in Scotland by British scientists Sir Ian Wilmut and Keith Campbell and lived there from her birth in 1996 until her death in 2003 when she was six. She was born on 5 July 1996 but not announced to the world until 22 February 1997.
Her stuffed remains were placed at Edinburgh's Royal Museum, part of the National Museums of Scotland. Dolly was publicly significant because the effort showed that genetic material from a specific adult cell, programmed to express only a distinct subset of its genes, can be reprogrammed to grow an entirely new organism. Before this demonstration, it had been shown by John Gurdon that nuclei from differentiated cells could give rise to an entire organism after transplantation into an enucleated egg. However, this concept was not yet demonstrated in a mammalian system. Organism cloning: The first mammalian cloning (resulting in Dolly) had a success rate of 29 embryos per 277 fertilized eggs, which produced three lambs at birth, one of which lived. In a bovine experiment involving 70 cloned calves, one-third of the calves died quite young. The first successfully cloned horse, Prometea, took 814 attempts. Notably, although the first clones were frogs, no adult cloned frog has yet been produced from a somatic adult nucleus donor cell. There were early claims that Dolly had pathologies resembling accelerated aging. Scientists speculated that Dolly's death in 2003 was related to the shortening of telomeres, DNA-protein complexes that protect the end of linear chromosomes. However, other researchers, including Ian Wilmut who led the team that successfully cloned Dolly, argue that Dolly's early death due to respiratory infection was unrelated to problems with the cloning process. This idea that the nuclei have not irreversibly aged was shown in 2013 to be true for mice. Dolly was named after performer Dolly Parton because the cells cloned to make her were from a mammary gland cell, and Parton is known for her ample cleavage. Organism cloning: Species cloned and applications The modern cloning techniques involving nuclear transfer have been successfully performed on several species. Notable experiments include: Tadpole: (1952) Robert Briggs and Thomas J. King successfully cloned northern leopard frogs: thirty-five complete embryos and twenty-seven tadpoles from one-hundred and four successful nuclear transfers. Carp: (1963) In China, embryologist Tong Dizhou produced the world's first cloned fish by inserting the DNA from a cell of a male carp into an egg from a female carp. He published the findings in a Chinese science journal. Zebrafish: The first vertebrate cloned (1981) by George Streisinger. Sheep: (1984) The first mammal cloned, from early embryonic cells, by Steen Willadsen; Megan and Morag were cloned from differentiated embryonic cells in June 1995, and Dolly from a somatic cell in 1996. Mice: (1986) A mouse was successfully cloned from an early embryonic cell. Soviet scientists Chaylakhyan, Veprencev, Sviridova, and Nikitin cloned the mouse "Masha". Research was published in the magazine Biofizika, volume XXXII, issue 5 of 1987. Rhesus monkey: Tetra (January 2000), from embryo splitting and not nuclear transfer – more akin to artificial formation of twins. Pig: the first cloned pigs (March 2000). By 2014, BGI in China was producing 500 cloned pigs a year to test new medicines. Gaur: (2001) The first endangered species cloned. Organism cloning: Cattle: Alpha and Beta (males, 2001) and (2005), Brazil. In 2023, Chinese scientists reported the cloning of three supercows with a milk productivity "nearly 1.7 times the amount of milk an average cow in the United States produced in 2021" and a plan for 1,000 of such super cows in the near-term.
According to a news report, "[i]n many countries, including the United States, farmers breed clones with conventional animals to add desirable traits, such as high milk production or disease resistance, into the gene pool". Organism cloning: Cat: CopyCat "CC" (female, late 2001); Little Nicky (2004) was the first cat cloned for commercial reasons. Rat: Ralph, the first cloned rat (2003). Mule: Idaho Gem, a john mule born 4 May 2003, was the first horse-family clone. Horse: Prometea, a Haflinger female born 28 May 2003, was the first horse clone. Przewalski's Horse: An ongoing cloning program by the San Diego Zoo Wildlife Alliance and Revive & Restore attempts to reintroduce genetic diversity to this endangered species. Kurt, the first cloned Przewalski's horse, was born in 2020. He was cloned from the skin tissue of a stallion which was preserved in 1980. "Trey" was born in 2023. He was cloned from the same stallion's tissue as Kurt. Dog: Snuppy, a male Afghan hound, was the first cloned dog (2005). In 2017, the world's first gene-edited clone dog, Apple, was created by Sinogene Biotechnology. Sooam Biotech, South Korea, was reported in 2015 to have cloned 700 dogs to date for their owners, including two Yakutian Laika hunting dogs, which are seriously endangered due to crossbreeding. Organism cloning: Cloning of super sniffer dogs was reported in 2011, four years after the dogs started working. Cloning of a successful rescue dog was also reported in 2009, and of a similar police dog in 2019. Cancer-sniffing dogs have also been cloned. A review concluded that "qualified elite working dogs can be produced by cloning a working dog that exhibits both an appropriate temperament and good health." Wolf: Snuwolf and Snuwolffy, the first two cloned female wolves (2005). Organism cloning: Water buffalo: Samrupa was the first cloned water buffalo. It was born on 6 February 2009, at India's Karnal National Dairy Research Institute, but died five days later due to lung infection. Pyrenean ibex (2009) was the first extinct animal to be cloned back to life; the clone lived for seven minutes before dying of lung defects. Camel: (2009) Injaz was the first cloned camel. Pashmina goat: (2012) Noori is the first cloned pashmina goat. Scientists at the faculty of veterinary sciences and animal husbandry of Sher-e-Kashmir University of Agricultural Sciences and Technology of Kashmir successfully cloned the first pashmina goat (Noori) using advanced reproductive techniques under the leadership of Riaz Ahmad Shah. Goat: (2001) Scientists of Northwest A&F University successfully cloned the first goat using an adult female cell. Gastric brooding frog: (2013) The gastric brooding frog, Rheobatrachus silus, thought to have been extinct since 1983, was cloned in Australia, although the embryos died after a few days. Organism cloning: Macaque monkey: (2017) First successful cloning of a primate species using nuclear transfer, with the birth of two live clones named Zhong Zhong and Hua Hua. Conducted in China in 2017, and reported in January 2018. In January 2019, scientists in China reported the creation of five identical cloned gene-edited monkeys, using the same cloning technique that was used with Zhong Zhong and Hua Hua and Dolly the sheep, and the gene-editing Crispr-Cas9 technique allegedly used by He Jiankui in creating the first ever gene-modified human babies Lulu and Nana. The monkey clones were made to study several medical diseases.
Organism cloning: Black-footed ferret: (2020) A team of scientists cloned a female named Willa, who died in the mid-1980s and left no living descendants. Her clone, a female named Elizabeth Ann, was born on 10 December. Scientists hope that the contribution of this individual will alleviate the effects of inbreeding and help black-footed ferrets better cope with plague. Experts estimate that this female's genome contains three times as much genetic diversity as any of the modern black-footed ferrets. Organism cloning: First artificial parthenogenesis in mammals: (2022) Viable mouse offspring were born from unfertilized eggs via targeted DNA methylation editing of seven imprinting control regions. Organism cloning: Human cloning Human cloning is the creation of a genetically identical copy of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissues. It does not refer to the natural conception and delivery of identical twins. The possibility of human cloning has raised controversies. These ethical concerns have prompted several nations to pass legislation regarding human cloning and its legality. At present, scientists have no intention of trying to clone people, and they believe their results should spark a wider discussion about the laws and regulations the world needs to regulate cloning. Two commonly discussed types of theoretical human cloning are therapeutic cloning and reproductive cloning. Therapeutic cloning would involve cloning cells from a human for use in medicine and transplants; it is an active area of research, but is not in medical practice anywhere in the world, as of 2021. Two common methods of therapeutic cloning that are being researched are somatic-cell nuclear transfer and, more recently, pluripotent stem cell induction. Reproductive cloning would involve making an entire cloned human, instead of just specific cells or tissues. Organism cloning: Ethical issues of cloning There are a variety of ethical positions regarding the possibilities of cloning, especially human cloning. While many of these views are religious in origin, the questions raised by cloning are faced by secular perspectives as well. Perspectives on human cloning are theoretical, as human therapeutic and reproductive cloning are not commercially used; animals are currently cloned in laboratories and in livestock production. Organism cloning: Advocates support development of therapeutic cloning to generate tissues and whole organs to treat patients who otherwise cannot obtain transplants, to avoid the need for immunosuppressive drugs, and to stave off the effects of aging. Advocates for reproductive cloning believe that parents who cannot otherwise procreate should have access to the technology. Opponents of cloning have concerns that the technology is not yet developed enough to be safe and that it could be prone to abuse (leading to the generation of humans from whom organs and tissues would be harvested), as well as concerns about how cloned individuals could integrate with families and with society at large. Cloning humans could lead to serious violations of human rights. Religious groups are divided, with some opposing the technology as usurping "God's place" and, to the extent embryos are used, destroying a human life; others support therapeutic cloning's potential life-saving benefits.
There is at least one religion, Raëlism, in which cloning plays a major role. Contemporary work on this topic is concerned with the ethics, adequate regulation, and issues of cloning carried out by humans (not, hypothetically, by extraterrestrials), and largely also not with replication – also described as mind cloning – of potential whole brain emulations. Organism cloning: Cloning of animals is opposed by animal rights groups due to the number of cloned animals that suffer from malformations before they die, and while food from cloned animals has been approved as safe by the US FDA, its use is opposed by groups concerned about food safety. In practical terms, the inclusion of "licensing requirements for embryo research projects and fertility clinics, restrictions on the commodification of eggs and sperm, and measures to prevent proprietary interests from monopolizing access to stem cell lines" in international cloning regulations has been proposed, although, for example, effective oversight mechanisms and specific cloning requirements have not been described. Organism cloning: Cloning extinct and endangered species Cloning, or more precisely, the reconstruction of functional DNA from extinct species, has for decades been a dream. Possible implications of this were dramatized in the 1984 novel Carnosaur and the 1990 novel Jurassic Park. The best current cloning techniques have an average success rate of 9.4 percent (and as high as 25 percent) when working with familiar species such as mice, while cloning wild animals is usually less than 1 percent successful. Organism cloning: Conservation cloning Several tissue banks have come into existence, including the "Frozen zoo" at the San Diego Zoo, to store frozen tissue from the world's rarest and most endangered species. This is also referred to as "conservation cloning". In 2021, engineers proposed a "lunar ark" – storing millions of seed, spore, sperm and egg samples from Earth's contemporary species in a network of lava tubes on the Moon as a genetic backup. Similar proposals have been made since at least 2008. These also include sending human customers' DNA, and a proposal by Avi Loeb et al. for "a lunar backup record of humanity" that includes genetic information. Scientists at the University of Newcastle and University of New South Wales announced in March 2013 that the very recently extinct gastric-brooding frog would be the subject of a cloning attempt to resurrect the species. Many such "de-extinction" projects are being championed by the non-profit Revive & Restore. Organism cloning: De-extinction One of the most anticipated targets for cloning was once the woolly mammoth, but attempts to extract DNA from frozen mammoths have been unsuccessful, though a joint Russo-Japanese team is currently working toward this goal. In January 2011, it was reported by Yomiuri Shimbun that a team of scientists headed by Akira Iritani of Kyoto University had built upon research by Dr. Wakayama, saying that they will extract DNA from a mammoth carcass that had been preserved in a Russian laboratory and insert it into the egg cells of an Asian elephant in hopes of producing a mammoth embryo. The researchers said they hoped to produce a baby mammoth within six years. It was noted, however, that the result, if possible, would be an elephant-mammoth hybrid rather than a true mammoth.
Another problem is the survival of the reconstructed mammoth: ruminants rely on a symbiosis with specific microbiota in their stomachs for digestion. In 2022, scientists showed major limitations and the scale of the challenge of genetic-editing-based de-extinction, suggesting that resources spent on more comprehensive de-extinction projects, such as that of the woolly mammoth, may currently not be well allocated and that such projects remain substantially limited. Their analyses "show that even when the extremely high-quality Norway brown rat (R. norvegicus) is used as a reference, nearly 5% of the genome sequence is unrecoverable, with 1,661 genes recovered at lower than 90% completeness, and 26 completely absent", complicated further in that "distribution of regions affected is not random, but for example, if 90% completeness is used as the cutoff, genes related to immune response and olfaction are excessively affected", due to which "a reconstructed Christmas Island rat would lack attributes likely critical to surviving in its natural or natural-like environment". In a 2021 online session of the Russian Geographical Society, Russia's defense minister Sergei Shoigu mentioned using the DNA of 3,000-year-old Scythian warriors to potentially bring them back to life. The idea was described in news reports as absurd, at least at this point, and it was noted that Scythians likely weren't skilled warriors by default. The idea of cloning Neanderthals or bringing them back to life in general is controversial, but some scientists have stated that it may be possible in the future and have outlined several issues or problems with such an effort, as well as broad rationales for doing so. Organism cloning: Unsuccessful attempts In 2001, a cow named Bessie gave birth to a cloned Asian gaur, an endangered species, but the calf died after two days. In 2003, a banteng was successfully cloned, followed by three African wildcats from a thawed frozen embryo. These successes provided hope that similar techniques (using surrogate mothers of another species) might be used to clone extinct species. Anticipating this possibility, tissue samples from the last bucardo (Pyrenean ibex) were frozen in liquid nitrogen immediately after it died in 2000. Researchers are also considering cloning endangered species such as the giant panda and cheetah. In 2002, geneticists at the Australian Museum announced that they had replicated DNA of the thylacine (Tasmanian tiger), at the time extinct for about 65 years, using polymerase chain reaction. However, on 15 February 2005 the museum announced that it was stopping the project after tests showed the specimens' DNA had been too badly degraded by the (ethanol) preservative. On 15 May 2005 it was announced that the thylacine project would be revived, with new participation from researchers in New South Wales and Victoria. In 2003, for the first time, an extinct animal, the Pyrenean ibex mentioned above, was cloned, at the Centre of Food Technology and Research of Aragon, using the preserved frozen cell nucleus of the skin samples from 2001 and domestic goat egg-cells. The ibex died shortly after birth due to physical defects in its lungs. Organism cloning: Lifespan After an eight-year project involving the use of a pioneering cloning technique, Japanese researchers created 25 generations of healthy cloned mice with normal lifespans, demonstrating that clones are not intrinsically shorter-lived than naturally born animals.
Other sources have noted that the offspring of clones tend to be healthier than the original clones and indistinguishable from animals produced naturally. Some posited that Dolly the sheep may have aged more quickly than naturally born animals, as she died relatively early for a sheep, at the age of six. Ultimately, her death was attributed to a respiratory illness, and the "advanced aging" theory is disputed. A detailed study released in 2016 and less detailed studies by others suggest that once cloned animals get past the first month or two of life they are generally healthy. However, early pregnancy loss and neonatal losses are still greater with cloning than natural conception or assisted reproduction (IVF). Current research is attempting to overcome these problems. In popular culture: Discussion of cloning in the popular media often presents the subject negatively. In an article in the 8 November 1993 issue of Time, cloning was portrayed in a negative way, modifying Michelangelo's Creation of Adam to depict Adam with five identical hands. Newsweek's 10 March 1997 issue also critiqued the ethics of human cloning, and included a graphic depicting identical babies in beakers. The concept of cloning, particularly human cloning, has featured in a wide variety of science fiction works. An early fictional depiction of cloning is Bokanovsky's Process, which features in Aldous Huxley's 1931 dystopian novel Brave New World. The process is applied to fertilized human eggs in vitro, causing them to split into identical genetic copies of the original. Following renewed interest in cloning in the 1950s, the subject was explored further in works such as Poul Anderson's 1953 story UN-Man, which describes a technology called "exogenesis", and Gordon Rattray Taylor's book The Biological Time Bomb, which popularised the term "cloning" in 1963. Cloning is a recurring theme in a number of contemporary science fiction films, ranging from action films such as Anna to the Infinite Power, The Boys from Brazil, Jurassic Park (1993), Alien Resurrection (1997), The 6th Day (2000), Resident Evil (2002), Star Wars: Episode II – Attack of the Clones (2002), The Island (2005) and Moon (2009) to comedies such as Woody Allen's 1973 film Sleeper. The process of cloning is represented variously in fiction. Many works depict the artificial creation of humans by a method of growing cells from a tissue or DNA sample; the replication may be instantaneous, or take place through slow growth of human embryos in artificial wombs. In the long-running British television series Doctor Who, the Fourth Doctor and his companion Leela were cloned in a matter of seconds from DNA samples ("The Invisible Enemy", 1977) and then – in an apparent homage to the 1966 film Fantastic Voyage – shrunk to microscopic size to enter the Doctor's body to combat an alien virus. The clones in this story are short-lived, and can only survive a matter of minutes before they expire. Science fiction films such as The Matrix and Star Wars: Episode II – Attack of the Clones have featured scenes of human foetuses being cultured on an industrial scale in mechanical tanks. Cloning humans from body parts is also a common theme in science fiction. Cloning features strongly among the science fiction conventions parodied in Woody Allen's Sleeper, the plot of which centres around an attempt to clone an assassinated dictator from his disembodied nose.
In the 2008 Doctor Who story "Journey's End", a duplicate version of the Tenth Doctor spontaneously grows from his severed hand, which had been cut off in a sword fight during an earlier episode. After the death of her beloved 14-year-old Coton de Tulear named Samantha in late 2017, Barbra Streisand announced that she had cloned the dog, and was now "waiting for [the two cloned pups] to get older so [she] can see if they have [Samantha's] brown eyes and her seriousness". The operation cost $50,000 through the pet cloning company ViaGen. In popular culture: Cloning and identity Science fiction has used cloning, most commonly and specifically human cloning, to raise the controversial questions of identity. A Number is a 2002 play by English playwright Caryl Churchill which addresses the subject of human cloning and identity, especially nature and nurture. The story, set in the near future, is structured around the conflict between a father (Salter) and his sons (Bernard 1, Bernard 2, and Michael Black) – two of whom are clones of the first one. A Number was adapted by Caryl Churchill for television, in a co-production between the BBC and HBO Films. In 2012, a Japanese television series named "Bunshin" was created. The story's main character, Mariko, is a woman studying child welfare in Hokkaido. She grew up always doubtful about the love from her mother, who looked nothing like her and who died nine years before. One day, she finds some of her mother's belongings at a relative's house, and heads to Tokyo to seek out the truth behind her birth. She later discovers that she was a clone. In the 2013 television series Orphan Black, cloning is used as a scientific study on the behavioral adaptation of the clones. In a similar vein, the book The Double by Nobel Prize winner José Saramago explores the emotional experience of a man who discovers that he is a clone. In popular culture: Cloning as resurrection Cloning has been used in fiction as a way of recreating historical figures. In the 1976 Ira Levin novel The Boys from Brazil and its 1978 film adaptation, Josef Mengele uses cloning to create copies of Adolf Hitler. In Michael Crichton's 1990 novel Jurassic Park, which spawned a series of Jurassic Park feature films, the bioengineering company InGen develops a technique to resurrect extinct species of dinosaurs by creating cloned creatures using DNA extracted from fossils. The cloned dinosaurs are used to populate the Jurassic Park wildlife park for the entertainment of visitors. The scheme goes disastrously wrong when the dinosaurs escape their enclosures. Despite being selectively cloned as females to prevent them from breeding, the dinosaurs develop the ability to reproduce through parthenogenesis. In popular culture: Cloning for warfare The use of cloning for military purposes has also been explored in several fictional works. In Doctor Who, an alien race of armour-clad, warlike beings called Sontarans was introduced in the 1973 serial "The Time Warrior". Sontarans are depicted as squat, bald creatures who have been genetically engineered for combat. Their weak spot is a "probic vent", a small socket at the back of their neck which is associated with the cloning process. The concept of cloned soldiers being bred for combat was revisited in "The Doctor's Daughter" (2008), when the Doctor's DNA is used to create a female warrior called Jenny. The 1977 film Star Wars was set against the backdrop of a historical conflict called the Clone Wars.
The events of this war were not fully explored until the prequel films Attack of the Clones (2002) and Revenge of the Sith (2005), which depict a space war waged by a massive army of heavily armoured clone troopers that leads to the foundation of the Galactic Empire. Cloned soldiers are "manufactured" on an industrial scale, genetically conditioned for obedience and combat effectiveness. It is also revealed that the popular character Boba Fett originated as a clone of Jango Fett, a mercenary who served as the genetic template for the clone troopers. In popular culture: Cloning for exploitation A recurring sub-theme of cloning fiction is the use of clones as a supply of organs for transplantation. The 2005 Kazuo Ishiguro novel Never Let Me Go and the 2010 film adaptation are set in an alternate history in which cloned humans are created for the sole purpose of providing organ donations to naturally born humans, despite the fact that they are fully sentient and self-aware. The 2005 film The Island revolves around a similar plot, with the exception that the clones are unaware of the reason for their existence. In popular culture: The exploitation of human clones for dangerous and undesirable work was examined in the 2009 British science fiction film Moon. In the futuristic novel Cloud Atlas and subsequent film, one of the story lines focuses on a genetically engineered fabricant clone named Sonmi~451, one of millions raised in an artificial "wombtank", destined to serve from birth. She is one of thousands created for manual and emotional labor; Sonmi herself works as a server in a restaurant. She later discovers that the sole source of food for clones, called 'Soap', is manufactured from the clones themselves. In the film Us, at some point prior to the 1980s, the US Government creates clones of every citizen of the United States with the intention of using them to control their original counterparts, akin to voodoo dolls. This fails, as they were able to copy bodies, but unable to copy the souls of those they cloned. The project is abandoned and the clones are trapped exactly mirroring their above-ground counterparts' actions for generations. In the present day, the clones launch a surprise attack and manage to complete a mass genocide of their unaware counterparts.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Comma operator** Comma operator: In the C and C++ programming languages, the comma operator (represented by the token ,) is a binary operator that evaluates its first operand and discards the result, and then evaluates the second operand and returns this value (and type); there is a sequence point between these evaluations. The use of the comma token as an operator is distinct from its use in function calls and definitions, variable declarations, enum declarations, and similar constructs, where it acts as a separator. Syntax: The comma operator separates expressions (which have value) in a way analogous to how the semicolon terminates statements, and sequences of expressions are enclosed in parentheses analogously to how sequences of statements are enclosed in braces: (a, b, c) is a sequence of expressions, separated by commas, which evaluates to the last expression c, while {a; b; c;} is a sequence of statements, and does not evaluate to any value. A comma can only occur between two expressions – commas separate expressions – unlike the semicolon, which occurs at the end of a (non-block) statement – semicolons terminate statements. Syntax: The comma operator has the lowest precedence of any C operator, and acts as a sequence point. In a combination of commas and semicolons, semicolons have lower precedence than commas, as semicolons separate statements but commas occur within statements, which accords with their use as ordinary punctuation: a, b; c, d is grouped as (a, b); (c, d) because these are two separate statements. Syntax: The comma operator has been deprecated in subscripting expressions (as of C++20) to reduce confusion and to open up the future possibility of repurposing the syntax for multidimensional array indexing. In C++23, the ability to overload operator[] with multiple arguments was added, making unparenthesised comma expressions unusable in subscripts. The comma operator is still usable and not deprecated in this context if the comma expression is surrounded by parentheses (as in a[(b,c)]). Examples: In a statement such as i = a, b;, the comma operator has lower precedence than assignment, so the statement is grouped as (i = a), b and assigns a to i; by contrast, i = (a, b); assigns the value of b. A return statement such as return a, b; differs as well, since the return expression must be fully evaluated before the function can return; the function returns the value of b. Uses: The comma operator has relatively limited use cases. Because it discards its first operand, it is generally only useful where the first operand has desirable side effects that must be sequenced before the second operand. Further, because it is rarely used outside of specific idioms, and easily mistaken for other commas or the semicolon, it is potentially confusing and error-prone. Nevertheless, there are certain circumstances where it is commonly used, notably in for loops and in SFINAE. For embedded systems which may have limited debugging capabilities, the comma operator can be used in combination with a macro to seamlessly override a function call, to insert code just before the function call. Uses: For loops The most common use is to allow multiple assignment statements without using a block statement, primarily in the initialization and the increment expressions of a for loop; this is the only idiomatic use in elementary C programming, and a minimal sketch of it follows.
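A minimal sketch of the for-loop idiom (the variable names i and j and the loop bounds are hypothetical, since the article's original listing is not preserved here):

```c
#include <stdio.h>

int main(void)
{
    int i, j;
    /* Both commas below are comma operators: the first sequences the two
       initializing assignments, the second sequences the two increments.
       The order of the initializers matters, because j is computed from
       the value just assigned to i. */
    for (i = 0, j = i + 10; i < 5; i++, j -= 2)
        printf("i = %d, j = %d\n", i, j);
    return 0;
}
```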
In the rev() sketch above, the order of the loop's initializers is significant: first must be set to the start of the string before s is moved to its end. An alternative solution to this problem in other languages is parallel assignment, which allows multiple assignments to occur within a single statement, and also uses a comma, though with different syntax and semantics. This is used in Go in its analogous for loop. Outside of for loop initializers (which have a special use of semicolons), the comma might be used instead of a semicolon, particularly when the statements in question function similarly to a loop increment (e.g. at the end of a while loop). Macros The comma can be used in preprocessor macros to perform multiple operations in the space of a single syntactic expression. Uses: One common use is to provide custom error messages in failed assertions. This is done by passing a parenthesized expression list to the assert macro, where the first expression is an error string and the second expression is the condition being asserted. The assert macro outputs its argument verbatim on an assertion failure. An example of this idiom is sketched at the end of this section; its output is: i = 0 i = 1 i = 2 i = 3 i = 4 assert: assert.c:6: test_assert: Assertion `( "i is too big!", i <= 4 )' failed. Aborted Uses: However, the assert macro is usually disabled in production code, so this technique is useful only for debugging. Condition The comma can be used within a condition (of an if, while, do while, or for) to allow auxiliary computations, particularly calling a function and using the result, with block scoping. A similar idiom exists in Go, where the syntax of the if statement explicitly allows an optional statement. Uses: Complex return The comma can be used in return statements, to assign to a global variable or out parameter (passed by reference). This idiom suggests that the assignments are part of the return, rather than auxiliary assignments in a block that terminates with the actual return. For example, a function can set a global error number and return its failure value in a single statement (a sketch also follows at the end of this section); the same effect can be written more verbosely as a separate assignment followed by a return. Avoid a block For brevity, the comma can be used to avoid a block and its associated braces, joining two short statements into a single comma expression. Other languages: In the OCaml and Ruby programming languages, the semicolon (";") is used for this purpose. JavaScript and Perl utilize the comma operator in the same way C/C++ does. In Java, the comma is a separator used to separate elements in a list in various contexts. It is not an operator and does not evaluate to the last element in the list.
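The assert idiom described above, reconstructed as a sketch: the file name assert.c, the function layout, and the loop bound are assumptions inferred from the quoted output (the assert call must fall on line 6 for the message to match):

```c
#include <assert.h>
#include <stdio.h>

void test_assert(int i)
{
    assert(( "i is too big!", i <= 4 ));  /* the string is evaluated and
                                             discarded; only i <= 4 is
                                             tested, but the whole argument
                                             is printed on failure */
    printf("i = %i\n", i);
}

int main(void)
{
    for (int i = 0; i <= 5; i++)
        test_assert(i);
    return 0;
}
```

And a sketch of the complex-return idiom with a global error number; the function and its names are illustrative, not from the original article:

```c
#include <errno.h>

/* On failure, set the global error number and return -1 in one statement:
 * the comma operator sequences the assignment before the return value. */
int checked_divide(int num, int den, int *out)
{
    if (den == 0)
        return errno = EDOM, -1;

    *out = num / den;
    return 0;
}
```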
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**6-phosphogluconolactonase** 6-phosphogluconolactonase: 6-Phosphogluconolactonase (EC 3.1.1.31, 6PGL, PGLS, systematic name 6-phospho-D-glucono-1,5-lactone lactonohydrolase) is a cytosolic enzyme found in all organisms that catalyzes the hydrolysis of 6-phosphogluconolactone to 6-phosphogluconic acid in the oxidative phase of the pentose phosphate pathway: 6-phospho-D-glucono-1,5-lactone + H2O = 6-phospho-D-gluconate. The tertiary structure of 6PGL employs an α/β hydrolase fold, with active site residues clustered on the loops of the α-helices. Based on the crystal structure of the enzyme, the mechanism is proposed to be dependent on proton transfer by a histidine residue in the active site. 6PGL selectively catalyzes the hydrolysis of δ-6-phosphogluconolactone and has no activity on the γ isomer. Enzyme Mechanism: 6PGL hydrolysis of 6-phosphogluconolactone to 6-phosphogluconic acid has been proposed to proceed via proton transfer to the O5 ring oxygen atom, similar to xylose isomerase and ribose-5-phosphate isomerase. The reaction initiates via attack of a hydroxide ion at the C5 ester. A tetrahedral intermediate forms, and elimination of the ester linkage follows, aided by donation of a proton from an active site histidine residue. The specific residue that participates in the proton transfer eluded researchers until 2009, as previous structural studies demonstrated two possible conformations of the substrate in the active site, which position the O5 ring oxygen proximal to either an arginine or a histidine residue. Molecular dynamics simulations were employed to discover that the residue that donates a proton is histidine, and that the arginine residues are involved only in electrostatic stabilization of the negatively charged phosphate group. Electrostatic stabilization of the enzyme-substrate complex also occurs between the product carboxylate and backbone amines of surrounding glycine residues. Enzyme Structure: 6PGL in Homo sapiens exists as a monomer under cytosolic physiological conditions, and is composed of 258 amino acid residues with a total molecular mass of ~30 kDa. The tertiary structure of the enzyme utilizes an α/β hydrolase fold, with both parallel and anti-parallel β-sheets surrounded by eight α-helices and five 3₁₀ helices. Stability of the tertiary structure of the protein is reinforced through salt bridges between aspartic acid and arginine residues and through aromatic side-chain stacking interactions. 6PGL isolated from Trypanosoma brucei was found to bind a Zn2+ ion in a non-catalytic role, but this has not been observed in other organisms, including Thermotoga maritima and Vibrio cholerae. Biological Function: 6-phosphogluconolactonase catalyzes the conversion of 6-phosphogluconolactone to 6-phosphogluconic acid, both intermediates in the oxidative phase of the pentose phosphate pathway, in which glucose is converted into ribulose 5-phosphate. The oxidative phase of the pentose phosphate pathway releases CO2 and results in the generation of two equivalents of NADPH from NADP+. The final product, ribulose 5-phosphate, is further processed by the organism during the non-oxidative phase of the pentose phosphate pathway to synthesize biomolecules including nucleotides, ATP, and Coenzyme A. The enzyme that precedes 6PGL in the pentose phosphate pathway, glucose-6-phosphate dehydrogenase, exclusively forms the δ-isomer of 6-phosphogluconolactone.
However, if it accumulates, this compound can undergo intramolecular rearrangement to isomerize to the more stable γ-form, which cannot be hydrolyzed by 6PGL and so cannot continue into the non-oxidative phase of the pentose phosphate pathway. By quickly hydrolyzing the δ-isomer of 6-phosphogluconolactone, 6PGL prevents its accumulation and the subsequent formation of the γ-isomer, which would waste the glucose resources available to the cell. 6-phosphogluconolactone is also susceptible to attack from intracellular nucleophiles, as evidenced by α-N-6-phosphogluconoylation of His-tagged proteins expressed in E. coli; efficient hydrolysis of 6-phosphogluconolactone by 6PGL prevents lactone accumulation and the toxic reactions that would otherwise occur between the lactone intermediate and the cell. Disease Relevance: The malarial parasites Plasmodium berghei and Plasmodium falciparum have been shown to express a bi-functional enzyme that exhibits both glucose-6-phosphate dehydrogenase and 6-phosphogluconolactonase activity, enabling it to catalyze the first two steps of the pentose phosphate pathway. This bifunctional enzyme has been identified as a druggable target in malarial parasites, and high-throughput screening of small molecule inhibitors has resulted in the discovery of novel compounds that can potentially be translated into potent antimalarials.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Output impedance** Output impedance: The output impedance of an electrical network is the measure of the opposition to current flow (impedance), both static (resistance) and dynamic (reactance), that the source presents to the load network being connected; it is internal to the electrical source. The output impedance is a measure of the source's propensity to drop in voltage when the load draws current, the source network being the portion of the network that transmits power and the load network being the portion of the network that consumes it. Output impedance: Because of this, the output impedance is sometimes referred to as the source impedance or internal impedance. Description: All devices and connections have non-zero resistance and reactance, and therefore no device can be a perfect source. The output impedance is often used to model the source's response to current flow. Some portion of the device's measured output impedance may not physically exist within the device; some of it is an artifact of the chemical, thermodynamic, or mechanical properties of the source. This impedance can be imagined as an impedance in series with an ideal voltage source, or in parallel with an ideal current source (see: Series and parallel circuits). Sources are modeled as ideal sources (ideal meaning sources that always keep the desired value) combined with their output impedance. The output impedance is defined as this modeled and/or real impedance in series with an ideal voltage source. Mathematically, current and voltage sources can be converted to each other using Thévenin's theorem and Norton's theorem. Description: In the case of a nonlinear device, such as a transistor, the term "output impedance" usually refers to the effect upon a small-amplitude signal, and will vary with the bias point of the transistor, that is, with the direct current (DC) and voltage applied to the device. Measurement: The source resistance of a purely resistive device can be experimentally determined by increasingly loading the device until the voltage across the load (AC or DC) is one half of the open circuit voltage. At this point, the load resistance and internal resistance are equal. It can more accurately be described by keeping track of the voltage vs current curves for various loads, and calculating the resistance from Ohm's law. (The internal resistance may not be the same for different types of loading or at different frequencies, especially in devices like chemical batteries.) The generalized source impedance for a reactive (inductive or capacitive) source device is more complicated to determine, and is usually measured with specialized instruments, rather than taking many measurements by hand. Audio amplifiers: The real output impedance ($Z_S$) of a power amplifier is usually less than 0.1 Ω, but this is rarely specified. Instead it is "hidden" within the damping factor parameter, which is: $DF = Z_L / Z_S$. Solving for $Z_S$, $Z_S = Z_L / DF$ gives the small source impedance (output impedance) of the power amplifier. This can be calculated from the $Z_L$ of the loudspeaker (typically 2, 4, or 8 ohms) and the given value of the damping factor. Audio amplifiers: Generally, in audio and hi-fi, the input impedance of components is several times (technically, more than 10 times) the output impedance of the device connected to them. This is called impedance bridging or voltage bridging. In this case, $Z_L \gg Z_S$ (in practice, $DF > 10$). In video, RF, and other systems, impedances of inputs and outputs are the same. This is called impedance matching or a matched connection; in this case, $Z_S = Z_L$ and $DF = 1$.
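As a worked illustration (the figures are assumed for the example, not taken from any particular amplifier): a bridging amplifier specified with a damping factor of 200 and driving an 8 Ω loudspeaker has an output impedance of

$$Z_S = \frac{Z_L}{DF} = \frac{8\ \Omega}{200} = 0.04\ \Omega,$$

comfortably below the 0.1 Ω figure quoted above, whereas a matched connection ($DF = 1$) would place the full 8 Ω inside the source.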
The actual output impedance for most devices is not the same as the rated output impedance. A power amplifier may have a rated impedance of 8 ohms, but the actual output impedance will vary depending on circuit conditions. The rated output impedance is the impedance into which the amplifier can deliver its maximum amount of power without failing. Batteries: Internal resistance is a concept that helps model the electrical consequences of the complex chemical reactions inside a battery. It is impossible to directly measure the internal resistance of a battery, but it can be calculated from current and voltage data measured from a circuit. When a load is applied to a battery, the internal resistance can be calculated from the following equation: $R_B = \frac{V_S}{I} - R_L = \frac{V_S - V_L}{I}$, where $R_B$ is the internal resistance of the battery, $V_S$ is the battery voltage without a load, $V_L$ is the battery voltage with a load, $R_L$ is the total resistance of the circuit, and $I$ is the total current supplied by the battery. Internal resistance varies with the age of a battery, but for most commercial batteries the internal resistance is on the order of 1 ohm. Batteries: When there is a current through a cell, the measured e.m.f. is lower than when there is no current delivered by the cell. The reason for this is that part of the available energy of the cell is used up to drive charges through the cell. This energy is wasted by the so-called "internal resistance" of that cell. This wasted energy shows up as lost voltage. The internal resistance is $r = \frac{E - V_L}{I}$.
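A quick worked example with assumed, illustrative figures: a battery that reads 12.6 V open-circuit but sags to 12.0 V while supplying 3 A has an internal resistance of

$$R_B = \frac{V_S - V_L}{I} = \frac{12.6\ \text{V} - 12.0\ \text{V}}{3\ \text{A}} = 0.2\ \Omega.$$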
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Diatonic and chromatic** Diatonic and chromatic: Diatonic and chromatic are terms in music theory that are most often used to characterize scales, and are also applied to musical instruments, intervals, chords, notes, musical styles, and kinds of harmony. They are very often used as a pair, especially when applied to contrasting features of the common practice music of the period 1600–1900. These terms may mean different things in different contexts. Very often, diatonic refers to musical elements derived from the modes and transpositions of the "white note scale" C–D–E–F–G–A–B. In some usages it includes all forms of heptatonic scale that are in common use in Western music (the major, and all forms of the minor). Chromatic most often refers to structures derived from the twelve-note chromatic scale, which consists of all semitones. Historically, however, it had other senses, referring in Ancient Greek music theory to a particular tuning of the tetrachord, and to a rhythmic notational convention in mensural music of the 14th to 16th centuries. History: Greek genera In ancient Greece there were three standard tunings (known by the Latin word genus, plural genera) of a lyre. These three tunings were called diatonic, chromatic, and enharmonic, and the sequences of four notes that they produced were called tetrachords ("four strings"). A diatonic tetrachord comprised, in descending order, two whole tones and a semitone, such as A G F E (roughly). In the chromatic tetrachord the second string of the lyre was lowered from G to G♭, so that the two lower intervals in the tetrachord were semitones, making the pitches A G♭ F E. In the enharmonic tetrachord the second string of the lyre was lowered further, to G𝄫, so that the two lower intervals in the tetrachord were quarter tones, making the pitches A, G𝄫, F half-flat, and E (where F half-flat is F♮ lowered by a quarter tone). For all three tetrachords, only the middle two strings varied in their pitch. History: Medieval coloration The term cromatico (Italian) was occasionally used in the Medieval and Renaissance periods to refer to the coloration (Latin coloratio) of certain notes. The details vary widely by period and place, but generally the addition of a colour (often red) to an empty or filled head of a note, or the "colouring in" of an otherwise empty head of a note, shortens the duration of the note. In works of the Ars Nova from the 14th century, this was used to indicate a temporary change in metre from triple to duple, or vice versa. This usage became less common in the 15th century as open white noteheads became the standard notational form for minims (half-notes) and longer notes, in what is called white mensural notation. Similarly, in the 16th century, a form of notating secular music, especially madrigals, was referred to as "chromatic" because of its abundance of "coloured in" black notes, that is, semiminims (crotchets or quarter notes) and shorter notes, as opposed to the open white notes commonly used for the notation of sacred music. These uses for the word have no relationship to the modern meaning of chromatic, but the sense survives in the current term coloratura. History: Renaissance chromaticism The term chromatic began to approach its modern usage in the 16th century.
For instance, Orlando Lasso's Prophetiae Sibyllarum opens with a prologue proclaiming, "these chromatic songs, heard in modulation, are those in which the mysteries of the Sibyls are sung, intrepidly," which here takes its modern meaning referring to the frequent change of key and use of chromatic intervals in the work. (The Prophetiae belonged to an experimental musical movement of the time, called musica reservata). This usage comes from a renewed interest in the Greek genera, especially its chromatic tetrachord, notably by the influential theorist Nicola Vicentino in his treatise on ancient and modern practice, 1555. Diatonic scales: Medieval theorists defined scales in terms of the Greek tetrachords. The gamut was the series of pitches from which all the Medieval "scales" (or modes, strictly) notionally derive, and it may be thought of as constructed in a certain way from diatonic tetrachords. The origin of the word gamut is explained in the article Guidonian hand; here the word is used in one of the available senses: the all-encompassing gamut as described by Guido d'Arezzo (which includes all of the modes). Diatonic scales: The intervals from one note to the next in this Medieval gamut are all tones or semitones, recurring in a certain pattern with five tones (T) and two semitones (S) in any given octave. The semitones are separated as much as they can be, between alternating groups of three tones and two tones. Here are the intervals for a string of ascending notes (starting with F) from the gamut: ... –T–T–T–S–T–T–S–T–T–T–S–T– ... And here are the intervals for an ascending octave (the seven intervals separating the eight notes A–B–C–D–E–F–G–A) from the gamut: T–S–T–T–S–T–T (five tones and two semitones). The white keys are the modern analog of the gamut. In its most strict definition, therefore, a diatonic scale is one that may be derived from the pitches represented in successive white keys of the piano (or a transposition thereof). This would include the major scale and the natural minor scale (the same as the descending form of the melodic minor), but not the old ecclesiastical church modes, most of which included both versions of the "variable" note B♮/B♭. Diatonic scales: Modern meanings There are specific applications in the music of the Common Practice Period, and in later music that shares its core features. Diatonic scales: Most, but not all, writers accept the natural minor as diatonic. As for other forms of the minor: "Exclusive" usage: Some writers consistently classify the other variants of the minor scale – the melodic minor (ascending form) and the harmonic minor – as non-diatonic, since they are not transpositions of the white-note pitches of the piano. Among such theorists there is no agreed general term that encompasses the major and all forms of the minor scale. "Inclusive" usage: Some writers consistently include the melodic and harmonic minor scales as diatonic also. For this group, every scale standardly used in common practice music and much similar later music is either diatonic (the major, and all forms of the minor) or chromatic. "Mixed" usage: Still other writers mix these two meanings of diatonic (and conversely for chromatic), and this can lead to confusion and misconceptions.
Sometimes context makes the intended meaning clear. Some other meanings of the term diatonic scale take the extension to harmonic and melodic minor even further, to be even more inclusive. In general, diatonic is most often used inclusively with respect to music that restricts itself to standard uses of traditional major and minor scales. When discussing music that uses a larger variety of scales and modes (including much jazz, rock, and some tonal 20th-century concert music), writers often adopt the exclusive use to prevent confusion. Chromatic scale: [Image: chromatic scale on C, a full octave ascending and descending.] A chromatic scale consists of an ascending or descending sequence of pitches, always proceeding by semitones. Such a sequence of pitches is produced, for example, by playing all the black and white keys of a piano in order. The structure of a chromatic scale is therefore uniform throughout, unlike major and minor scales, which have tones and semitones in particular arrangements (and an augmented second, in the harmonic minor). Musical instruments: Some instruments, such as the violin, can play any scale; others, such as the glockenspiel, are restricted to the scale to which they are tuned. Among this latter class, some instruments, such as the piano, are always tuned to a chromatic scale, and can be played in any key, while others are restricted to a diatonic scale, and therefore to a particular key. Some instruments, such as the harmonica, harp, and glockenspiel, are available in both diatonic and chromatic versions (although it is possible to play chromatic notes on a diatonic harmonica, they require extended embouchure techniques, and some chromatic notes are only usable by advanced players). Intervals: When one note of an interval is chromatic or when both notes are chromatic, the entire interval is called chromatic. Chromatic intervals arise by raising or lowering one or both notes of a diatonic interval, so that the interval is made larger or smaller by the interval of a half step ["altered diatonic intervals"]. Intervals: Because the term diatonic scale is itself ambiguous, the classification of intervals is also ambiguous. For example, the interval B♮–E♭ (a diminished fourth, occurring in C harmonic minor) is considered diatonic if the harmonic minor scale is considered diatonic, but chromatic if the harmonic minor scale is not considered diatonic. Forte lists the chromatic intervals in major and natural minor as the augmented unison, diminished octave, augmented fifth, diminished fourth, augmented third, diminished sixth, diminished third, augmented sixth, minor second, major seventh, major second, minor seventh, doubly diminished fifth, and doubly augmented fourth. Additionally, the label chromatic or diatonic for an interval may depend on context. For instance, in C major, the interval C–E♭ could be considered a chromatic interval because it does not appear in the prevailing diatonic key; conversely, in C minor it would be diatonic. This usage is still subject to the categorization of scales above, e.g. in the B♮–E♭ example above, classification would still depend on whether the harmonic minor scale is considered diatonic. Intervals: In different systems of tuning [Image: Pythagorean diatonic and chromatic intervals, E♮–F♮ and E♮–E♯.] In equal temperament, there is no difference in tuning (and therefore in sound) between intervals that are enharmonically equivalent.
For example, the notes F and E♯ represent the same pitch, so the diatonic interval C–F (a perfect fourth) sounds the same as its enharmonic equivalent, the chromatic interval C–E♯ (an augmented third). Intervals: But in systems other than equal temperament, there is often a difference in tuning between enharmonically equivalent intervals. In systems based on a cycle of fifths, such as Pythagorean tuning and meantone temperament, these alternatives are labelled diatonic or chromatic intervals. Under these systems the cycle of fifths is not circular in the sense that a pitch at one end of the cycle (e.g., G♯) is not tuned the same as the enharmonic equivalent at its other end (A♭); they are different by an amount known as a comma. Intervals: This broken cycle causes intervals that cross the break to be written as augmented or diminished chromatic intervals. In meantone temperament, for instance, chromatic semitones (E–E♯) are smaller than diatonic semitones (E–F), and with consonant intervals such as the major third the enharmonic equivalent is generally less consonant. If the tritone is assumed diatonic, the classification of written intervals by this definition is not significantly different from the "drawn from the same diatonic scale" definition above, as long as the harmonic minor and ascending melodic minor scale variants are not included. Chords: By chromatic linear chord is meant simply a chord entirely of linear origin which contains one or more chromatic notes. A great many of these chords are to be found in the literature. Chords: Diatonic chords are generally understood as those that are built using only notes from the same diatonic scale; all other chords are considered chromatic. However, given the ambiguity of diatonic scale, this definition, too, is ambiguous. And for some theorists, chords are only ever diatonic in a relative sense: the augmented triad E♭–G–B♮ is diatonic "to" or "in" C minor. On this understanding, the diminished seventh chord built on the leading note is accepted as diatonic in minor keys. If the strictest understanding of the term diatonic scale is adhered to – whereby only transposed 'white note scales' are considered diatonic – even a major triad on the dominant scale degree in C minor (G–B♮–D) would be chromatic or altered in C minor. Some writers use the phrase "diatonic to" as a synonym for "belonging to". Therefore a chord is not said to be "diatonic" in isolation, but can be said to be "diatonic to" a particular key if its notes belong to the underlying diatonic scale of the key. Harmony: The chromatic expansion of tonality which characterizes much of nineteenth century music is illustrated in miniature by the substitution of a chromatic harmony for an expected diatonic harmony. This technique resembles the deceptive cadence, which involves the substitution of another diatonic chord for the expected diatonic goal harmony. ... In the major mode a substitute chromatic consonance often proves to be a triad which has been taken from the parallel minor mode. This process ["assimilation"]...is called mixture of mode or simply mixture....Four consonant triads from the minor mode may replace their counterparts in the major mode. These we call chromatic triads by mixture. The words diatonic and chromatic are also applied inconsistently to harmony: Often musicians call diatonic harmony any kind of harmony inside the major–minor system of common practice.
When diatonic harmony is understood in this sense, the supposed term chromatic harmony means little, because chromatic chords are also used in that same system. At other times, especially in textbooks and syllabuses for musical composition or music theory, diatonic harmony means harmony that uses only "diatonic chords". According to this usage, chromatic harmony is then harmony that extends the available resources to include chromatic chords: the augmented sixth chords, the Neapolitan sixth, chromatic seventh chords, etc. Harmony: Since the word harmony can be used of single classes of chords (dominant harmony, E minor harmony, for example), diatonic harmony and chromatic harmony can be used in this distinct way also. However, chromatic harmony may be defined as the use of successive chords that are from two different keys and therefore contain tones represented by the same note symbols but with different accidentals. Four basic techniques produce chromatic harmony under this definition: modal interchange, secondary dominants, melodic tension, and chromatic mediants. Harmony: Instrumental compositions of the late Renaissance and early Baroque periods also began experimenting with the expressive possibilities of contrasting diatonic passages of music with chromatic ones. Here, for example, is part of the virginal piece 'His Humour' by Giles Farnaby. (The title 'Humour' should be interpreted here as meaning 'mood'.) The first four bars are largely diatonic. These are followed by a passage exploiting chromatic harmony, with the upper part forming an ascending and then a descending chromatic scale. In the following passage from the slow movement of Beethoven's Piano Concerto No. 4, Op. 58, the long, flowing melody of the first five bars is almost entirely diatonic, consisting of notes within the scale of E minor, the movement's home key. The only exception is the G sharp in the left hand in the third bar. By contrast, the remaining bars are highly chromatic, using all the notes available to convey a sense of growing intensity as the music builds towards its expressive climax. A further example may be found in this extract from act 3 of Richard Wagner's opera Die Walküre. The first four bars harmonize a descending chromatic scale with a rich, intoxicating chord progression. In contrast, the bars that follow are entirely diatonic, using notes only within the scale of E major. The passage is intended to convey the god Wotan putting his daughter Brünnhilde into a deep sleep. Miscellaneous usages: Tones Notes which do not belong to the key [those "that lie within the major 2nds" of the diatonic scale] are called chromatic notes. In modern usage, the meanings of the terms diatonic note/tone and chromatic note/tone vary according to the meaning of the term diatonic scale. Generally – not universally – a note is understood as diatonic in a context if it belongs to the diatonic scale that is used in that context; otherwise it is chromatic. Inflection The term chromatic inflection (alternatively spelt inflexion) is used in two senses: (1) alteration of a note that makes it (or the harmony that includes it) chromatic rather than diatonic; and (2) melodic movement between a diatonic note and a chromatically altered variant (from C to C♯ in G major, or vice versa, for example). Progression The term chromatic progression is used in three senses: (1) movement between harmonies that are not elements of any common diatonic system (that is, not of the same diatonic scale: movement from D–F–A to D♯–F♯–A, for example);
(2) the same as the second sense of chromatic inflection, above; and (3) in musica ficta and similar contexts, a melodic fragment that includes a chromatic semitone, and therefore includes a chromatic inflection in the second sense, above. The term diatonic progression is used in two senses: (1) movement between harmonies that both belong to at least one shared diatonic system (from F–A–C to G–B–E, for example, since both occur in C major); and (2) in musica ficta and similar contexts, a melodic fragment that does not include a chromatic semitone, even if two semitones occur contiguously, as in F♯–G–A♭. Modulation Diatonic modulation is modulation via a diatonic progression. Chromatic modulation is modulation via a chromatic progression, in the first sense given above. Pentatonic scale One very common kind of pentatonic scale that draws its notes from the diatonic scale (in the exclusive sense, above) is sometimes called the diatonic pentatonic scale: C–D–E–G–A[–C], or some other modal arrangement of those notes. Other pentatonic scales (such as the pelog scales) may also be construed as reduced forms of a diatonic scale but are not labelled diatonic. Modern extensions: Traditionally, and in all uses discussed above, the term diatonic has been confined to the domain of pitch, and in a fairly restricted way. Exactly which scales (and even which modes of those scales) should count as diatonic is unsettled, as shown above. But the broad selection principle itself is not disputed, at least as a theoretical convenience. Modern extensions: Extended pitch selections The selection of pitch classes can be generalised to encompass formation of non-traditional scales from the underlying twelve chromatic pitch classes. Or a larger set of underlying pitch classes may be used instead. For example, the octave may be divided into varying numbers of equally spaced pitch classes. The usual number is twelve, giving the conventional set used in Western music. But Paul Zweifel uses a group-theoretic approach to analyse different sets, concluding especially that a set of twenty divisions of the octave is another viable option for retaining certain properties associated with the conventional "diatonic" selections from twelve pitch classes. Modern extensions: Rhythms It is possible to generalise this selection principle even beyond the domain of pitch. The diatonic idea has been applied in analysis of some traditional African rhythms, for example. Some selection or other is made from an underlying superset of metrical beats, to produce a "diatonic" rhythmic "scale" embedded in an underlying metrical "matrix". Some of these selections are diatonic in a way similar to the traditional diatonic selections of pitch classes (that is, a selection of seven beats from a matrix of twelve beats – perhaps even in groupings that match the tone-and-semitone groupings of diatonic scales). But the principle may also be applied with even more generality (including even any selection from a matrix of beats of any size).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Norethandrolone** Norethandrolone: Norethandrolone, sold under the brand names Nilevar and Pronabol among others, is an androgen and anabolic steroid (AAS) medication which has been used to promote muscle growth and to treat severe burns, physical trauma, and aplastic anemia but has mostly been discontinued; it remains available in France, however. It is taken by mouth. Side effects of norethandrolone include symptoms of masculinization like acne, increased hair growth, voice changes, and increased sexual desire. It can also cause estrogenic effects like fluid retention, breast tenderness, and breast enlargement in men, as well as liver damage. The drug is a synthetic androgen and anabolic steroid and hence is an agonist of the androgen receptor (AR), the biological target of androgens like testosterone and dihydrotestosterone (DHT). It has strong anabolic effects relative to its androgenic effects. The drug also has strong progestogenic effects. Norethandrolone was discovered in 1953 and was introduced for medical use in 1956. It was the first AAS with a favorable separation of anabolic and androgenic effect to be marketed. The drug was mostly withdrawn in the 1980s due to concerns of liver damage. In addition to its medical use, norethandrolone has been used to improve physique and performance. The drug is a controlled substance in many countries, and so non-medical use is generally illicit. Medical uses: Norethandrolone has been used in the treatment of muscle wasting, severe burns, and severe trauma, and for certain forms of aplastic anemia, among other indications. Side effects: Side effects of norethandrolone include virilization among others. It has estrogenic effects and can cause gynecomastia and fluid retention. As with all 17α-alkylated AAS, long-term use of norethandrolone in high doses may result in hepatotoxicity, including elevated liver enzymes and cirrhosis. Pharmacology: Pharmacodynamics Norethandrolone is an androgen and anabolic steroid and hence is an agonist of the androgen receptor, the biological target of androgens like testosterone and dihydrotestosterone. It has a high ratio of anabolic to androgenic activity. Analogously to the case of nandrolone and 5α-dihydronandrolone, 5α-dihydronorethandrolone, the 5α-reduced metabolite of norethandrolone, shows diminished affinity for the androgen receptor relative to norethandrolone. This is likely related to the high ratio of anabolic to androgenic activity observed with norethandrolone. Norethandrolone has relatively high estrogenic activity via transformation by aromatase into the potent estrogen ethylestradiol. It also has strong progestogenic activity. The progestogenic potency of norethandrolone is similar to that of norethisterone in terms of endometrial changes in women. In addition, norethandrolone is hepatotoxic. Pharmacology: Pharmacokinetics The pharmacokinetics of norethandrolone have been reviewed. Chemistry: Norethandrolone, also known as 17α-ethyl-19-nortestosterone or as 17α-ethylestr-4-en-17β-ol-3-one, is a synthetic estrane steroid and a 17α-alkylated derivative of testosterone and 19-nortestosterone (nandrolone). It is closely related to normethandrone (17α-methyl-19-nortestosterone) and to ethylestrenol (3-deketo-17α-ethyl-19-nortestosterone). Synthesis Chemical syntheses of norethandrolone have been published. History: Norethandrolone was synthesized at G. D.
Searle & Company in 1953 and was originally studied as a progestin, along with norethisterone and noretynodrel, but ultimately was not marketed as such. In 1955, it was re-examined for testosterone-like activity and was found to have similar anabolic activity to testosterone but only one-sixteenth the androgenic potency. Norethandrolone was introduced for medical use as an AAS in 1956 and was the first so-called "anabolic steroid", or AAS with a favorable separation of anabolic and androgenic effect, to be marketed. It was followed by normethandrone as a progestin in 1957 and by the better-known AAS nandrolone phenylpropionate in 1959. Norethandrolone was introduced in the United States in the late 1950s under the brand name Nilevar but was discontinued there in the 1960s due to limited sales. Although it was also introduced into Europe and certain other markets, it was withdrawn in many countries in the 1980s due to concerns about cholestatic jaundice. Today, the drug remains available only in France. Society and culture: Generic names Norethandrolone is the generic name of the drug and its INN and BAN. It has also been referred to as noretandrolone, ethylnandrolone, and ethylnortestosterone, as well as by its developmental code name CB-8022. Brand names Norethandrolone is marketed under the brand names Nilevar and Pronabol. Availability Norethandrolone is available today only in France. Research: Norethandrolone has been studied for use in male hormonal contraception.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Multi-function structure** Multi-function structure: A multi-function material is a composite material. The traditional approach to the development of structures is to address the load-carrying function and other functional requirements separately. Recently, however, there has been increased interest in the development of load-bearing materials and structures which have integral non-load-bearing functions, guided by recent discoveries about how multifunctional biological systems work. Introduction: With conventional structural materials, it has been difficult to achieve simultaneous improvement in multiple structural functions, but the increasing use of composite materials has been driven in part by the potential for such improvements. The functions can range from mechanical to electrical and thermal. The most widely used composites have polymer matrix materials, which are typically poor conductors. Enhanced conductivity can be achieved by reinforcing the composite with carbon nanotubes, for instance. Functions: Among the many functions that can be attained are electrical/thermal conductivity, sensing and actuation, energy harvesting/storage, self-healing capability, electromagnetic interference (EMI) shielding, recyclability, and biodegradability. See also functionally graded materials, which are composite materials in which the composition or the microstructure is locally varied so that a certain variation of the local material properties is achieved. However, functionally graded materials can be designed for specific functions and applications. Functions: Many applications are possible, such as re-configurable aircraft wings, shape-changing aerodynamic panels for flow control, variable-geometry engine exhausts, turbine blades, wind turbine reconfiguration at different wind speeds, microelectromechanical systems (micro-switches), mechanical memory cells, valves, micropumps, flexible panel positioning in solar cells, innovative architecture (adaptive shape panels for roofs and windows), and flexible and foldable electronic devices and optics (shape-changing mirrors for active focusing in adaptive optical systems).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Superbit** Superbit: Superbit was a brand of premium DVD-Video versions of motion pictures from Sony Pictures Home Entertainment, a division of Sony Pictures Entertainment. Superbit DVDs aimed to improve picture quality over a standard DVD edition of a feature by increasing the bit rate of the encoded video. Audio quality was also improved by the mandatory inclusion of both Dolby Digital and DTS 5.1 surround audio tracks. Technical details: Superbit discs can be read by all regular DVD video players, but their film files were encoded at a bit rate that is, according to Sony, approximately 1.5 times as high (6-7 Mbit/s vs 4-5 Mbit/s) as that of standard DVDs, which helps minimize artifacts caused by video compression and allows the image to be pre-filtered less prior to compression, resulting in more detail. Superbit should not be confused with either Blu-ray or HD DVD discs, both of which are different media formats of much higher bit rate and resolution, and are incompatible with standard DVD video players. Technical details: To maximize space for the main feature, static menus are used and commentary tracks are removed (a rough capacity calculation is sketched at the end of this section). To further improve the size and therefore quality of the film on the disc, Superbit discs contained a reduced amount of bonus material, such as documentaries or interviews, which can be found on regular DVDs; usually they were completely devoid of it. All Superbit releases present a film in its theatrical aspect ratio. Technical details: In addition to maximizing the bitrate for improved audio and video, the Superbit line introduced seamless layer changes. Prior to this line of Sony DVDs, all dual-layer DVDs caused a slight pause during playback when the layer change occurred. Some standard DVDs had their layer changes placed better than others, making some almost imperceptible. Superbit DVDs were the only DVDs produced that truly had seamless layer changes. When Blu-ray was introduced, seamless layer changes were standard on the improved disc format. History: The Superbit line launched in October 2001 with five titles: The Fifth Element, Crouching Tiger, Hidden Dragon, Air Force One, Desperado and Johnny Mnemonic. Following the initial release of the Superbit line, Superbit Deluxe was introduced, which bundled a Superbit-quality feature with a second disc containing the special features. In January 2007, Sony Pictures Home Entertainment discontinued its Superbit line in order to promote its Blu-ray Disc format. Some of the most popular Superbit releases were the Sam Raimi films Spider-Man and Spider-Man 2. The multi-disc Superbit titles (meaning the film spanning more than one disc) included Das Boot as well as David Lean's Lawrence of Arabia, but in order to maximize the bitrate for AV quality, the latter title was not split where enthusiasts were expecting, the intermission interlude. This led many fans of the film to ignore the release. Lawrence of Arabia was also limited to a single movie disc in many regions when the film debuted on Blu-ray in Sony's 'Mastered in 4K' line, the Blu-ray counterpart of Superbit. Only Japan got a 'Mastered in 4K' release in which the film spanned multiple discs, with the disc split finally occurring at the more appropriate intermission interlude. The UHD Blu-ray of Lawrence was again split across two triple-layer BD100s at the appropriate spot when it debuted on that format in the Columbia Classics Vol. 1 UHD Blu-ray box set.
History: David Fincher's Panic Room was released exclusively on Superbit DVD, and as of November 2021 the title has not been released on any of the newer HD/UHD formats.
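For a rough sense of the space arithmetic (the audio bitrates below are typical assumed values, not published Sony specifications): a two-hour feature at the top of the Superbit video range, carrying both mandated audio tracks, nearly fills a dual-layer DVD-9.

$$7\ \text{Mbit/s} \times 7200\ \text{s} \approx 50.4\ \text{Gbit} \approx 6.3\ \text{GB (video)}$$

$$(1.536 + 0.448)\ \text{Mbit/s} \times 7200\ \text{s} \approx 14.3\ \text{Gbit} \approx 1.8\ \text{GB (DTS and Dolby Digital)}$$

That totals roughly 8.1 GB of the 8.5 GB on a DVD-9, which illustrates why menus were static and extras were dropped.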
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Opiorphin** Opiorphin: Opiorphin is an endogenous chemical compound first isolated from human saliva. Initial research with mice shows the compound has a painkilling effect greater than that of morphine. It works by stopping the normal breakup of enkephalins, natural pain-killing opioids in the spinal cord. It is a relatively simple molecule consisting of a five-amino-acid polypeptide, Gln-Arg-Phe-Ser-Arg (QRFSR). The opiorphin pentapeptide originates from the N-terminal region of the protein PROL1 (proline-rich, lacrimal 1). Opiorphin inhibits three proteases: neutral ecto-endopeptidase (MME), ecto-aminopeptidase N (ANPEP), and perhaps also the dipeptidyl peptidase DPP3. Opiorphin: Such action extends the duration of the enkephalins' effect where the natural painkillers are released physiologically in response to specific, potentially painful stimuli; this contrasts with the administration of narcotics, which flood the entire body and cause many undesirable adverse reactions, including addiction liability and constipation. Opiorphin: In addition, opiorphin may exert anti-depressive and antipanic action. Therapeutic application of opiorphin in humans would require modifying the molecule to avoid its rapid degradation in the intestine and its poor penetration of the blood–brain barrier. This modification is done in the body by transformation of the N-terminal glutamine into pyroglutamate. This form preserves the analgesic properties of opiorphin but with increased pharmaceutical stability.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bombsight** Bombsight: A bombsight is a device used by military aircraft to drop bombs accurately. Bombsights, a feature of combat aircraft since World War I, were first found on purpose-designed bomber aircraft and then moved to fighter-bombers and modern tactical aircraft as those aircraft took up the brunt of the bombing role. Bombsight: A bombsight has to estimate the path the bomb will take after release from the aircraft. The two primary forces during its fall are gravity and air drag, which make the path of the bomb through the air roughly parabolic. There are additional factors such as changes in air density and wind that may be considered, but they are concerns only for bombs that spend a significant portion of a minute falling through the air. Those effects can be minimized by reducing the fall time through low-level bombing or by increasing the speed of the bombs. Those effects are combined in the dive bomber. Bombsight: However, low-level bombing also increases the danger to the bomber from ground-based defences, so accurate bombing from higher altitudes has always been desired. That has led to a series of increasingly sophisticated bombsight designs dedicated to high-altitude level bombing. Bombsight: Bombsights were first used before World War I and have since gone through several major revisions. The earliest systems were iron sights, which were pre-set to an estimated fall angle. In some cases, they consisted of nothing more than a series of nails hammered into a convenient spar, lines drawn on the aircraft, or visual alignments of certain parts of the structure. They were replaced by the earliest custom-designed systems, normally iron sights that could be set based on the aircraft's airspeed and altitude. These early systems were replaced by the vector bombsights, which added the ability to measure and adjust for winds. Vector bombsights were useful for altitudes up to about 3,000 m and speeds up to about 300 km/h. Bombsight: In the 1930s, mechanical computers with the performance needed to "solve" the equations of motion started to be incorporated into the new tachometric bombsights, the most famous of which is the Norden. Then, in World War II, tachometric bombsights were often combined with radar systems to allow accurate bombing through clouds or at night. When postwar studies demonstrated that bomb accuracy was roughly equal with either optical or radar guidance, optical bombsights were generally removed and the role passed to dedicated radar bombsights. Bombsight: Finally, especially since the 1960s, fully computerized bombsights were introduced, which combined bombing with long-range navigation and mapping. Bombsight: Modern aircraft do not have a bombsight but use highly computerized systems that combine bombing, gunnery, missile fire and navigation into a single head-up display. The systems have the performance to calculate the bomb trajectory in real time, as the aircraft manoeuvres, and add the ability to adjust for weather, relative altitude, relative speeds for moving targets and climb or dive angle. That makes them useful both for level bombing, as in earlier generations, and for tactical missions, which previously bombed by eye. Theory: Forces on a bomb The drag on a bomb for a given air density and angle of attack is proportional to the relative air speed squared.
If the vertical component of the velocity is denoted by $v_v$ and the horizontal component by $v_h$, then the speed is $\sqrt{v_v^2 + v_h^2}$ and the vertical and horizontal components of the drag are: $d_v = CA\rho \frac{v_v}{\sqrt{v_v^2 + v_h^2}} (v_v^2 + v_h^2) = CA\rho\, v_v \sqrt{v_v^2 + v_h^2}$ and $d_h = CA\rho \frac{v_h}{\sqrt{v_v^2 + v_h^2}} (v_v^2 + v_h^2) = CA\rho\, v_h \sqrt{v_v^2 + v_h^2}$, where $C$ is the coefficient of drag, $A$ is the cross-sectional area, and $\rho$ is the air density. These equations show that horizontal velocity increases vertical drag and vertical velocity increases horizontal drag. These effects are ignored in the following discussion. Theory: To start with, consider only the vertical motion of a bomb. In this direction, the bomb will be subject to two primary forces, gravity and drag, the first constant, and the second varying with the square of velocity. For an aircraft flying straight and level, the initial vertical velocity of the bomb will be zero, which means it will also have zero vertical drag. Gravity will accelerate the bomb downwards, and as its velocity increases so does the drag force. At some point (as speed and air density increase), the force of drag will become equal to the force of gravity, and the bomb will reach terminal velocity. As the air drag varies with air density, and thus altitude, the terminal velocity will decrease as the bomb falls. Generally, the bomb will slow as it reaches lower altitudes where the air is denser, but the relationship is complex. Theory: Now consider the horizontal motion. At the instant it leaves the shackles, the bomb carries the forward speed of the aircraft with it. This momentum is countered solely by drag, which starts to slow the forward motion. As the forward motion slows, the drag force drops and this deceleration diminishes. The forward speed is never reduced entirely to zero. If the bomb were not subject to drag, its path would be purely ballistic and it would impact at an easily calculable point, the vacuum range. In practice, drag means that the impact point is short of the vacuum range, and this real-world distance between dropping and impact is known simply as the range. The difference between the vacuum range and actual range is known as the trail because the bomb appears to trail behind the aircraft as it falls. The trail and range differ for different bombs due to their individual aerodynamics and typically have to be measured on a bombing range. The main problem in completely separating the motion into vertical and horizontal components is the terminal velocity. Bombs are designed to fly with the nose pointed forward into the relative wind, normally through the use of fins at the back of the bomb. The drag depends on the angle of attack of the bomb at any given instant. If the bomb is released at low altitudes and speeds, the bomb will not reach terminal velocity and its speed will be defined largely by how long the bomb has been falling. Theory: Finally, consider the effects of wind. The wind acts on the bomb through drag and is thus a function of the wind speed. This is typically only a fraction of the speed of the bomber or the terminal velocity, so it only becomes a factor if the bomb is dropped from altitudes high enough for this small influence to noticeably affect the bomb's path. The difference between the impact point and where it would have fallen if there had been no wind is known as drift, or cross trail.
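The fall just described is easy to explore numerically. The following sketch is illustrative only, not any historical bombsight computation: it integrates the motion under gravity and the quadratic drag decomposition given above, with an assumed drag constant and a fixed air density, so the printed figures only loosely resemble the AN-M64 numbers quoted later in this section.

```c
#include <stdio.h>
#include <math.h>

/* Sketch: integrate a bomb's planar fall with quadratic drag.
 * k stands in for C*A*rho/m and is an assumed round value;
 * air density (and hence k) is held constant with altitude. */
int main(void)
{
    const double g    = 9.81;    /* gravity, m/s^2                       */
    const double k    = 2.0e-5;  /* assumed drag constant, 1/m           */
    const double alt0 = 6096.0;  /* release altitude, m (20,000 ft)      */
    const double vh0  = 89.4;    /* forward speed at release, m/s        */
    const double dt   = 0.01;    /* time step, s                         */

    double alt = alt0, vh = vh0, vv = 0.0;   /* vv is downward-positive  */
    double x = 0.0, t = 0.0;                 /* forward travel, time     */

    while (alt > 0.0) {
        double speed = sqrt(vv * vv + vh * vh);
        vv += (g - k * vv * speed) * dt;     /* gravity minus vert. drag */
        vh -= k * vh * speed * dt;           /* horizontal drag only     */
        alt -= vv * dt;
        x   += vh * dt;
        t   += dt;
    }

    /* Vacuum range: forward travel if there were no drag at all. */
    double vac = vh0 * sqrt(2.0 * alt0 / g);

    printf("fall time %.1f s, range %.0f m, trail %.0f m\n", t, x, vac - x);
    printf("impact speed %.0f m/s, range angle %.1f deg from vertical\n",
           sqrt(vv * vv + vh * vh),
           atan(x / alt0) / (3.14159265358979 / 180.0));
    return 0;
}
```

With these assumed inputs the integration lands in the same ballpark as the worked AN-M64 example below: a fall of a little over half a minute, a range somewhat short of the vacuum range, and an impact speed of a few hundred metres per second.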
Theory: The bombsight problem In ballistics terms, it is traditional to talk of the calculation of aiming of ordnance as the solution. The bombsight problem is the calculation of the location in space where the bombs should be dropped in order to hit the target when all of the effects noted above are taken into account. In the absence of wind, the bombsight problem is fairly simple. The impact point is a function of three factors, the aircraft's altitude, its forward speed, and the terminal velocity of the bomb. In many early bombsights, the first two inputs were adjusted by separately setting the front and back sights of an iron sight, one for the altitude and the other for the speed. Terminal velocity, which extends the fall time, can be accounted for by raising the effective altitude by an amount that is based on the bomb's measured ballistics. When windage is accounted for, the calculations become more complex. As the wind can operate in any direction, bombsights generally break the windage into the portions that act along the flight path and across it. In practice, it was generally simpler to have the aircraft fly in such a way as to zero out any sideways motion before the drop, and thereby eliminate this factor. This is normally accomplished using common flying techniques known as crabbing or sideslipping. Theory: Bombsights are sighting devices that are pointed in a particular direction, or aimed. Although the solution outlined above returns a point in space, simple trigonometry can be used to convert this point into an angle relative to the ground. The bombsight is then set to indicate that angle. The bombs are dropped when the target passes through the sights. The distance between the aircraft and target at that moment is the range, so this angle is often referred to as the range angle, although dropping angle, aiming angle, bombing angle and similar terms are often used as well (measured from the vertical, the range angle $\theta$ satisfies $\tan\theta = \text{range} / \text{altitude}$). In practice, some or all of these calculations are carried out using angles and not points in space, skipping the final conversion. Theory: Accuracy The accuracy of the drop is affected both by inherent problems, like the randomness of the atmosphere or bomb manufacture, and by more practical problems, like how close to flat and level the aircraft is flying or the accuracy of its instruments. These inaccuracies compound over time, so increasing the altitude of the bomb run, thereby increasing the fall time, has a significant impact on the final accuracy of the drop. Theory: It is useful to consider a single example of a bomb being dropped on a typical mission. In this case we will consider the AN-M64 500 lb General-Purpose Bomb, widely used by the USAAF and RAF during World War II, with direct counterparts in the armouries of most forces involved. Ballistic data on this bomb can be found in "Terminal Ballistic Data, Volume 1: Bombing". Against men standing in the open, the 500 lb bomb has a lethal radius of about 107 m (351 ft), but much less than that against buildings, perhaps 27 m (89 ft). The M64 will be dropped from a Boeing B-17 flying at 322 km/h (200 mph) at an altitude of 20,000 feet in a 42 km/h (26 mph) wind. Given these conditions, the M64 would travel approximately 10,000 feet (3,000 m) forward from the drop point before impact, for a trail of about 305 m (1,001 ft) from the vacuum range, and impact with a velocity of 351 m/s (1150 fps) at an angle of about 77 degrees from horizontal. A 42 km/h (26 mph) wind would be expected to move the bomb about 91 m (299 ft) during that time.
The time to fall is about 37 seconds. Assuming errors of 5% in every major measurement, one can estimate those effects on accuracy based on the methodology and tables in the guide. A 5% error in altitude at 20,000 feet would be 1,000 feet, so the aircraft might be anywhere from 19,000 to 21,000 feet. According to the table, this would result in an error of around 10 to 15 feet. A 5% error in airspeed, 10 mph, would cause an error of about 15 to 20 feet. In terms of drop timing, errors on the order of one-tenth of a second might be considered the best possible. In this case, the error is simply the ground speed of the aircraft over this time, or about 30 feet. All of these are well within the lethal radius of the bomb. Theory: The wind affects the accuracy of the bomb in two ways, pushing directly on the bomb while it falls, as well as changing the ground speed of the aircraft before the drop. In the case of the direct effects on the bomb, a measurement with a 5% error, 1.25 mph, would cause a 5% error in the drift, which would be 17.5 feet. However, that 1.25 mph error, or 1.8 fps, would also be added to the aircraft's velocity. Over the time of the fall, 37 seconds, that would result in an error of 68 feet, which is at the outside limit of the bomb's performance. The measurement of the wind speed is a more serious concern. Early navigation systems generally measured it using a dead reckoning procedure that compares measured movement over the ground with the calculated movement using the aircraft instruments. The Federal Aviation Administration's FAR Part 63 suggests 5 to 10% accuracy of these calculations, the US Air Force's AFM 51-40 gives 10%, and the US Navy's H.O. 216 a fixed 20 miles or greater. Compounding this inaccuracy is that it is made using the instrument's airspeed indication, and as the airspeed in this example is about 10 times that of the wind speed, its 5% error can lead to great inaccuracies in wind speed calculations. Eliminating this error through the direct measurement of ground speed (instead of calculating it) was a major advance in the tachometric bombsights of the 1930s and 40s. Theory: Finally, consider errors of the same 5% in the equipment itself, that is, an error of 5% in the setting of the range angle, or a similar 5% error in the levelling of the aircraft or bombsight. For simplicity, consider that 5% to be a 5 degree angle. Using simple trigonometry, 5 degrees at 20,000 feet is approximately 1,750 feet (20,000 ft × tan 5° ≈ 1,750 ft), an error that would place the bombs far outside their lethal radius. In tests, accuracies of 3 to 4 degrees were considered standard, and angles as high as 15 degrees were not uncommon. Given the seriousness of the problem, systems for the automatic levelling of bombsights were a major area of study before World War II, especially in the US. Early systems: All of the calculations needed to predict the path of a bomb can be carried out by hand, with the aid of calculated tables of the bomb ballistics. However, the time to carry out these calculations is not trivial. Using visual sighting, the range at which the target is first sighted remains fixed, based on eyesight. As aircraft speeds increase, there is less time available after the initial spotting to carry out the calculations and correct the aircraft's flight path to bring it over the proper drop point. During the early stages of bombsight development, the problem was addressed by reducing the allowable engagement envelope, thereby reducing the need to calculate marginal effects.
When dropped from very low altitudes, for instance, the effects of drag and wind during the fall are so small that they can be ignored. In this case only the forward speed and altitude have any measurable effect. One of the earliest recorded examples of such a bombsight was built in 1911 by Lieutenant Riley E. Scott, of the U.S. Army Coast Artillery Corps. This was a simple device with inputs for airspeed and altitude which was hand-held while lying prone on the wing of the aircraft. After considerable testing, he was able to build a table of settings to use with these inputs. In testing at College Park, Maryland, Scott was able to place two 18 pound bombs within 10 feet of a 4-by-5 foot target from a height of 400 feet. In January 1912, Scott won $5,000 for first place in the Michelin bombing competition at Villacoublay Airfield in France, scoring 12 hits on a 125-by-375 foot target with 15 bombs dropped from 800 meters. In spite of early examples like Scott's prior to the war, during the opening stages of the First World War bombing was almost always carried out by eye, dropping the small bombs by hand when the conditions looked right. As the use and roles of aircraft increased during the war, the need for better accuracy became pressing. At first this was accomplished by sighting off parts of the aircraft, such as struts and engine cylinders, or by drawing lines on the side of the aircraft after test drops on a bombing range. These were useful for low altitudes and stationary targets, but as the nature of the air war expanded, the needs quickly outgrew these solutions as well. For higher altitude drops, the effect of wind and bomb trajectory could no longer be ignored. One important simplification was to ignore the terminal velocity of the bomb, and calculate its average speed as four times the square root of the altitude measured in feet; for a drag-free fall the average speed is the square root of gh/2, which with g of about 32.2 ft/s² works out to almost exactly 4√h ft/s. For instance, a bomb dropped from 10,000 feet would fall at an average rate of 400 fps, allowing easy calculation of the time to fall. Now all that remained was a measurement of the wind speed, or more generally the ground speed. Normally this was accomplished by flying the aircraft in the general direction of the wind and then observing the motion of objects on the ground, adjusting the flight path side to side until any remaining sideways drift due to wind was eliminated. The speed over the ground was then measured by timing the motion of objects between two given angles as seen through the sight. One of the most fully developed examples of such a sight to see combat was the German Görtz bombsight, developed for the Gotha heavy bombers. The Görtz used a telescope with a rotating prism at the bottom that allowed the sight to be rotated fore and aft. After zeroing out sideways motion, the sight was set to a pre-set angle and an object was then timed with a stopwatch until it was directly below the aircraft. This revealed the ground speed, which was multiplied by the time the bomb would take to hit the ground, and a pointer in the sight was then set to an angle looked up on a table. The bomb aimer then watched the target in the sight until it crossed the pointer, and dropped the bombs. Similar bombsights were developed in France and England, notably the Michelin and the Central Flying School Number Seven bombsights. While useful, these sights required a time-consuming setup period while the movement was timed. A great upgrade to the basic concept was introduced by Harry Wimperis, better known for his later role in the development of radar in England.
In 1916 he introduced the Drift Sight, which added a simple system for directly measuring the wind speed. The bomb aimer would first dial in the altitude and airspeed of the aircraft. Doing so rotated a metal bar on the right side of the bombsight so it pointed out from the fuselage. Prior to the bomb run, the bomber would fly at right angles to the bomb line, and the bomb aimer would look past the rod to watch the motion of objects on the ground. He would then adjust the wind speed setting until the motion was directly along the rod. This action measured the wind speed, and moved the sights to the proper angle to account for it, eliminating the need for separate calculations. A later modification was added to calculate the difference between true and indicated airspeed, which grows with altitude. This version was the Drift Sight Mk. 1A, introduced on the Handley Page O/400 heavy bomber. Variations on the design were common, like the US Estoppey bombsight. Early systems: All of these bombsights shared the problem that they were unable to deal with wind in any direction other than along the path of travel. That made them effectively useless against moving targets, like submarines and ships. Unless the target just happened to be travelling directly in line with the wind, its motion would carry the bomber away from the wind line as it approached. Additionally, as anti-aircraft artillery grew more effective, gun crews would often pre-sight their guns along the wind line of the targets they were protecting, knowing that attacks would come from those directions. A solution for attacking cross-wind was sorely needed. Vector bombsights: Calculating the effects of an arbitrary wind on the path of an aircraft was already a well-understood problem in air navigation, one requiring basic vector mathematics. Wimperis was very familiar with these techniques, and would go on to write a seminal introductory text on the topic. The same calculations would work just as well for bomb trajectories, with some minor adjustments to account for the changing velocities as the bombs fell. Even as the Drift Sight was being introduced, Wimperis was working on a new bombsight that helped solve these calculations and allowed the effects of wind to be considered no matter the direction of the wind or the bomb run. The result was the Course Setting Bomb Sight (CSBS), called "the most important bomb sight of the war". Dialling in the values for altitude, airspeed and the speed and direction of the wind rotated and slid various mechanical devices that solved the vector problem. Once set up, the bomb aimer would watch objects on the ground and compare their path to thin wires on either side of the sight. If there was any sideways motion, the pilot could slip-turn to a new heading in an effort to cancel out the drift. A few attempts were typically all that was needed, at which point the aircraft was flying in the right direction to take it directly over the drop point, with zero sideways velocity. The bomb aimer (or pilot in some aircraft) then sighted through the attached iron sights to time the drop. The CSBS was introduced into service in 1917 and quickly replaced earlier sights on aircraft that had enough room – the CSBS was fairly large. Versions for different speeds, altitudes and bomb types were introduced as the war progressed. After the war, the CSBS continued to be the main bombsight in British use. Thousands were sold to foreign air forces and numerous versions were created for production around the world.
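The vector problem the CSBS solved mechanically is the standard wind-triangle calculation of air navigation. Below is a minimal sketch of that mathematics with our own variable names and invented example numbers; the CSBS itself solved the same triangle with slides and rotating scales, not formulas.

```python
import math

def wind_correction(tas, track_deg, wind_speed, wind_from_deg):
    """Heading that cancels crosswind drift, plus the resulting ground speed.

    Angles are compass bearings in degrees; speeds in any consistent unit.
    tas is true airspeed; wind_from_deg is the direction the wind blows FROM.
    """
    track = math.radians(track_deg)
    wind_to = math.radians((wind_from_deg + 180) % 360)  # direction blown toward
    # Crab into the wind just enough to cancel the crosswind component:
    wca = -math.asin((wind_speed / tas) * math.sin(wind_to - track))
    heading = math.degrees(track + wca) % 360
    ground_speed = tas * math.cos(wca) + wind_speed * math.cos(wind_to - track)
    return heading, ground_speed

# A 120 kn bomber on a due-north bomb run in a 26 kn wind from the west:
# heading ~347.5 degrees (crabbed into the wind), ground speed ~117 kn.
print(wind_correction(120, 0, 26, 270))
```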
A number of experimental devices based on variations of the CSBS were also developed, notably the US's Estoppey D-1 sight, developed shortly after the war, and similar versions from many other nations. These "vector bombsights" all shared the basic vector calculator system and drift wires, differing primarily in form and optics. Vector bombsights: As bombers grew and multi-place aircraft became common, it was no longer possible for the pilot and bombardier to share the same instrument, and hand signals were no longer visible if the bombardier was below the pilot in the nose. A variety of solutions using dual optics or similar systems were suggested in the post-war era, but none of these became widely used. This led to the introduction of the pilot direction indicator, an electrically driven pointer which the bomb aimer used to indicate corrections from a remote location in the aircraft. Vector bombsights remained the standard for most forces well into the Second World War, and were the main sight in British service until 1942. This was in spite of the introduction of newer sighting systems with great advantages over the CSBS, and even of newer versions of the CSBS that, for a variety of reasons, failed to enter widespread use. The later versions of the CSBS, eventually reaching the Mark X, included adjustments for different bombs, ways to attack moving targets, systems for more easily measuring winds, and a host of other options. Tachometric bombsights: One of the main problems in using vector bombsights was the long straight run needed before dropping the bombs. This was needed so the pilot would have enough time to accurately account for the effects of wind, and to get the proper flight angle set up with some level of accuracy. If anything changed during the bomb run, especially if the aircraft had to maneuver in order to avoid defences, everything had to be set up again. Additionally, the introduction of monoplane bombers made the adjustment of the angles more difficult, because they were not able to slip-turn as easily as their earlier biplane counterparts. They suffered from an effect known as "Dutch roll" that made them more difficult to turn and caused them to oscillate after levelling. This further reduced the time the bomb aimer had to adjust the path. Tachometric bombsights: One solution to this latter problem had already been in use for some time: some sort of gimbal system to keep the bombsight pointed roughly downward while the aircraft was maneuvering or being blown around by wind gusts. Experiments as early as the 1920s had demonstrated that this could roughly double the accuracy of bombing. The US carried out an active program in this area, including Estoppey sights mounted on weighted gimbals and Sperry Gyroscope's experiments with US versions of the CSBS mounted on what would today be called an inertial platform. These same developments led to the introduction of the first useful autopilots, which could be used to directly dial in the required path and have the aircraft fly to that heading with no further input. A variety of bombing systems using one or both of these systems were considered throughout the 1920s and 30s. During the same period, a separate line of development was leading to the first reliable mechanical computers. These could replace a complex table of numbers with a carefully shaped cam-like device, and the manual calculation with a series of gears or slip wheels.
Originally limited to fairly simple calculations consisting of additions and subtractions, by the 1930s these computers had progressed to the point where they were being used to solve differential equations. For bombsight use, such a calculator would allow the bomb aimer to dial in the basic aircraft parameters – speed, altitude, direction, and known atmospheric conditions – and the bombsight would automatically calculate the proper aim point in a few moments. Some of the traditional inputs, like airspeed and altitude, could even be taken directly from the aircraft instruments, eliminating operational errors. Tachometric bombsights: Although these developments were well known within the industry, only the US Army Air Corps and US Navy put any concerted effort into development. During the 1920s, the Navy funded development of the Norden bombsight while the Army funded development of the Sperry O-1. Both systems were generally similar: a bombsight consisting of a small telescope was mounted on a stabilizing platform to keep the sighting head stable. A separate mechanical computer was used to calculate the aim point. The aim point was fed back to the sight, which automatically rotated the telescope to the correct angle to account for drift and aircraft movement, keeping the target still in the view. When the bomb aimer sighted through the telescope, he could see any residual drift and relay this to the pilot, or later, feed that information directly into the autopilot. Simply moving the telescope to keep the target in view had the side effect of continuously fine-tuning the windage calculations, thereby greatly increasing their accuracy. For a variety of reasons, the Army dropped its interest in the Sperry, and features from the Sperry and Norden bombsights were folded into new models of the Norden. The Norden then equipped almost all US high-level bombers, most notably the B-17 Flying Fortress. In tests, these bombsights were able to generate fantastic accuracy. In practice, however, operational factors seriously degraded their performance, to the point that pinpoint bombing using the Norden was eventually abandoned. Although the US put the most effort into development of the tachometric concept, it was also being studied elsewhere. In the UK, work on the Automatic Bomb Sight (ABS) had been carried on since the mid-1930s in an effort to replace the CSBS. However, the ABS did not include stabilization of the sighting system, nor the Norden's autopilot system. In testing, the ABS proved to be too difficult to use, requiring long bomb runs to allow the computer time to solve the aim point. When RAF Bomber Command complained that even the CSBS had too long a run-in to the target, efforts to deploy the ABS ended. For their needs they developed a new vector bombsight, the Mk. XIV. The Mk. XIV featured a stabilizing platform and an aiming computer, but worked more like the CSBS in overall functionality – the bomb aimer would set the computer to move the sighting system to the proper angle, but the bombsight did not track the target or attempt to correct the aircraft path. The advantage of this system was that it was dramatically faster to use, and could be used even while the aircraft was manoeuvring; only a few seconds of straight-line flying were needed before the drop. As Britain lacked production capacity, Sperry was contracted to produce the Mk. XIV in the US, where it was called the Sperry T-1. Both the British and Germans would later introduce Norden-like sights of their own.
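The essence of the tachometric approach is that the rate at which the telescope must turn to hold the target encodes the ground speed directly. The following is our own simplified illustration of that geometry, not the Norden's actual mechanization: with the sight angle measured from the vertical, tan(angle) = x/h, so differentiating gives v = h · d(angle)/dt / cos²(angle).

```python
import math

def ground_speed_from_tracking(altitude_ft, sight_angle_deg, rate_deg_per_s):
    """Ground speed implied by the telescope's tracking rate.

    sight_angle_deg is measured from the vertical; tan(angle) = x / h,
    so v = h * d(angle)/dt / cos^2(angle). Illustration only.
    """
    a = math.radians(sight_angle_deg)
    return altitude_ft * math.radians(rate_deg_per_s) / math.cos(a) ** 2

# Holding a target 30 degrees off vertical from 20,000 ft, a tracking
# rate of ~0.63 deg/s corresponds to the example's ~293 fps ground speed:
print(ground_speed_from_tracking(20_000, 30, 0.63))  # ~293
```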
Based at least partially on information about the Norden passed to them through the Duquesne Spy Ring, the Luftwaffe developed the Lotfernrohr 7. The basic mechanism was almost identical to the Norden's, but much smaller. In certain applications the Lotfernrohr 7 could be used by a single-crew aircraft, as was the case for the Arado Ar 234, the world's first operational jet bomber. During the war the RAF needed accurate high-altitude bombing, and in 1943 it introduced a stabilized version of the earlier ABS, the hand-built Stabilized Automatic Bomb Sight (SABS). It was produced in such limited numbers that it was at first used only by the famed No. 617 Squadron RAF, the Dambusters. All of these designs collectively became known as tachometric sights, "tachometric" referring to the timing mechanisms which counted the rotations of a screw or gear that ran at a specified speed. Radar bombing and integrated systems: In the pre-World War II era there had been a long debate about the relative merits of daylight versus night-time bombing. At night the bomber was virtually invulnerable (until the introduction of radar), but finding its target was a major problem. In practice, only large targets such as cities could be attacked. During the day the bomber could use its bombsights to attack point targets, but only at the risk of being attacked by enemy fighters and anti-aircraft artillery. Radar bombing and integrated systems: During the early 1930s the debate had been won by the night-bombing supporters, and the RAF and Luftwaffe started construction of large fleets of aircraft dedicated to the night mission. As "the bomber will always get through", these forces were strategic in nature, largely a deterrent to the other force's own bombers. However, new engines introduced in the mid-1930s led to much larger bombers that were able to carry greatly improved defensive suites, while their higher operational altitudes and speeds would render them less vulnerable to the defences on the ground. Policy once again changed in favour of daylight attacks against military targets and factories, abandoning what was considered a cowardly and defeatist night-bombing policy. Radar bombing and integrated systems: In spite of this change, the Luftwaffe continued to put some effort into solving the problem of accurate navigation at night. This led to the Battle of the Beams during the opening stages of the war. The RAF returned in force in early 1942 with similar systems of their own, and from that point on, radio navigation systems of increasing accuracy allowed bombing in any weather or operational conditions. The Oboe system, first used operationally in early 1943, offered real-world accuracies on the order of 35 yards, much better than any optical bombsight. The introduction of the British H2S radar further improved the bomber's abilities, allowing direct attack of targets without the need for remote radio transmitters, whose range was limited to the line of sight. By 1943 these techniques were in widespread use by both the RAF and USAAF, leading to the H2X and then a series of improved versions like the AN/APQ-13 and AN/APQ-7 used on the Boeing B-29 Superfortress. Radar bombing and integrated systems: These early systems operated independently of any existing optical bombsight, but this presented the problem of having to separately calculate the trajectory of the bomb. In the case of Oboe, these calculations were carried out before the mission at the ground bases.
But as daylight visual bombing was still widely used, conversions and adaptations were quickly made to repeat the radar signal in the existing bombsights, allowing the bombsight calculator to solve the radar bombing problem. For instance, the AN/APA-47 was used to combine the output from the AN/APQ-7 with the Norden, allowing the bomb aimer to easily check both images to compare the aim point. Analysis of the results of bombing attacks carried out using radio navigation or radar techniques demonstrated that accuracy was essentially equal for the two systems – night-time attacks with Oboe were able to hit targets that the Norden could not during the day. With the exception of operational considerations – limited resolution of the radar and limited range of the navigation systems – the need for visual bombsights quickly disappeared. Designs of the late-war era, like the Boeing B-47 Stratojet and English Electric Canberra, retained their optical systems, but these were often considered secondary to the radar and radio systems. In the case of the Canberra, the optical system existed only due to delays in the radar system becoming available. Postwar developments: The strategic bombing role evolved over time toward ever-higher, ever-faster, ever-longer-ranged missions with ever-more-powerful weapons. Although the tachometric bombsights provided most of the features needed for accurate bombing, they were complex, slow, and limited to straight-line, level attacks. In 1946 the US Army Air Forces asked the Army Air Forces Scientific Advisory Group to study the problem of bombing from the jet aircraft that would soon be entering service. They concluded that at speeds over 1,000 knots (1,900 km/h), optical systems would be useless – the visual range to the target would be less than the range of a bomb being dropped at high altitudes and speeds. At the attack ranges being considered, thousands of miles, radio navigation systems would not be able to offer both the range and the accuracy needed. This demanded radar bombing systems, but existing examples did not offer anywhere near the required performance. At the stratospheric altitudes and long "sighting" ranges being considered, the radar antenna would need to be very large to offer the required resolution, yet this ran counter to the need to develop an antenna that was as small as possible in order to reduce drag. They also pointed out that many targets would not show up directly on the radar, so the bombsight would need the ability to drop at points relative to some landmark that did appear, the so-called "offset aiming points". Finally, the group noted that many of the functions in such a system would overlap formerly separate tools like the navigation systems. They proposed a single system that would offer mapping, navigation, autopilot and bomb aiming, thereby reducing complexity and, especially, the needed space. Such a machine first emerged in the form of the AN/APQ-24, and later the "K-System", the AN/APA-59. Through the 1950s and 1960s, radar bombing of this sort was common, and the accuracy of the systems was limited to what was needed to support attacks by nuclear weapons – a circular error probable (CEP) of about 3,000 feet (910 m) was considered adequate. As mission ranges extended to thousands of miles, bombers started incorporating inertial guidance and star trackers to allow accurate navigation when far from land.
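For scatter that is circularly normal, the CEP relates to the standard deviation by CEP ≈ 1.1774σ, and the fraction of weapons falling inside a given radius follows the Rayleigh distribution. The small sketch below (our own, to put the 3,000 ft figure in perspective) is an illustration of that textbook relation only.

```python
import math

def hit_fraction(radius_ft, cep_ft):
    """Fraction of impacts within radius_ft for a circular normal
    scatter with the given CEP (Rayleigh model; illustrative only)."""
    sigma = cep_ft / math.sqrt(2 * math.log(2))  # CEP ~= 1.1774 * sigma
    return 1 - math.exp(-(radius_ft ** 2) / (2 * sigma ** 2))

# With a 3,000 ft CEP, half the bombs land within 3,000 ft by definition,
# but only about 7% land within 1,000 ft:
print(hit_fraction(3_000, 3_000), hit_fraction(1_000, 3_000))
```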
These navigation systems quickly improved in accuracy, and eventually became accurate enough to handle the bomb dropping without the need for a separate bombsight. This was the case for the 1,500 feet (460 m) accuracy demanded of the B-70 Valkyrie, which lacked any sort of conventional bombsight. Modern systems: During the Cold War the weapon of choice was a nuclear one, and accuracy needs were limited. Development of tactical bombing systems, notably the ability to attack point targets with conventional weapons, which had been the original goal of the Norden, was not considered seriously. Thus when the US entered the Vietnam War, its weapon of choice was the Douglas A-26 Invader equipped with the Norden. Such a solution proved inadequate. Modern systems: At the same time, the ever-increasing power of new jet engines led to fighter aircraft with bomb loads similar to those of heavy bombers of a generation earlier. This generated demand for a new generation of greatly improved bombsights that could be used by a single-crew aircraft and employed in fighter-like tactics, whether high-level, low-level, in a dive towards the target, or during hard maneuvering. A specialist capability for toss bombing was also developed, in order to allow aircraft to escape the blast radius of their own nuclear weapons; this required only middling accuracy but a very different trajectory, which initially required a dedicated bombsight. Modern systems: As electronics improved, these systems could be combined with one another, and eventually with systems for aiming other weapons. They may be controlled by the pilot directly and provide information through the head-up display or a video display on the instrument panel. The definition of a bombsight is becoming blurred as "smart" bombs with in-flight guidance, such as laser-guided bombs or those using GPS, replace "dumb" gravity bombs.
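Ignoring drag, the toss-bombing trajectory mentioned above is ordinary projectile motion: the bomb is released in a climb and thrown well ahead while the aircraft turns away. The sketch below uses invented example numbers and is an illustration of the physics only, not an operational release calculation.

```python
import math

def loft_range(speed_fps, climb_angle_deg, altitude_ft, g=32.2):
    """Drag-free forward travel of a bomb released in a climb.

    Solves h + v*sin(a)*t - g*t^2/2 = 0 for the fall time, then
    multiplies by the horizontal velocity component.
    """
    a = math.radians(climb_angle_deg)
    vz, vx = speed_fps * math.sin(a), speed_fps * math.cos(a)
    t = (vz + math.sqrt(vz ** 2 + 2 * g * altitude_ft)) / g
    return vx * t

# Released at 600 fps in a 45 degree climb from 1,000 ft, the bomb is
# lofted roughly 12,000 ft ahead, letting the aircraft turn away:
print(round(loft_range(600, 45, 1_000)))
```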
**NOVA1** NOVA1: RNA-binding protein Nova-1 is a protein that in humans is encoded by the NOVA1 gene. This gene encodes a neuron-specific RNA-binding protein, a member of the Nova family of paraneoplastic disease antigens, which is recognized and inhibited by paraneoplastic antibodies. These antibodies are found in the sera of patients with paraneoplastic opsoclonus-ataxia, breast cancer, and small cell lung cancer. Alternatively spliced transcripts encoding distinct isoforms have been described. Both Neanderthals and Denisovans had one version of the gene while nearly all modern humans have another, suggesting positive selection. Insertion of the Neanderthal variant of the neuro-oncological ventral antigen 1 (NOVA1) gene into human cortical organoids might promote slower development and higher surface complexity in the brain models, but this may be an artefact of a CRISPR side effect, as it could not be replicated in a subsequent study.
**Mushroom festival** Mushroom festival: A mushroom festival is a food festival in which mushrooms are featured. Numerous mushroom festivals are held annually, including:
- Telluride Mushroom Festival in Telluride, Colorado
- Mushroom Festival at Mount Pisgah Arboretum in Eugene, Oregon
- Mushroom Festival in Kennett Square, Pennsylvania
- Morel Mushroom Festival, held in Harrison, Michigan; Mesick, Michigan; and Boyne City, Michigan
- Fantastic Forage Mushroom Festival, held in Laconia, New Hampshire
- Mushroom Mardi Gras Festival, held in Morgan Hill, California
- Pacific Northwest Mushroom Festival in Lacey, Washington
**Google Japanese Input** Google Japanese Input: Google Japanese Input (Google 日本語入力, Gūguru Nihongo Nyūryoku) is an input method published by Google for the entry of Japanese text on a computer. Since its dictionaries are generated automatically from the Internet, it supports typing of personal names, Internet slang, neologisms and related terms. Google Japanese Input can be used on Windows, macOS, and ChromeOS. Google also releases an open-source version under the name mozc. Mozc can be used on Linux, Windows, macOS, Android, and ChromeOS, but it does not use Google's closed-source algorithms for generating dictionary data from online sources.
**Solid-phase microextraction** Solid-phase microextraction: Solid-phase microextraction, or SPME, is a solid-phase extraction sampling technique that involves the use of a fiber coated with an extracting phase, which can be a liquid (polymer) or a solid (sorbent), to extract different kinds of analytes (both volatile and non-volatile) from media in either the liquid or the gas phase. The quantity of analyte extracted by the fibre is proportional to its concentration in the sample as long as equilibrium is reached or, in the case of short pre-equilibrium sampling times, as long as convection or agitation is held constant. Analysis: After extraction, the SPME fiber is transferred to the injection port of a separating instrument, such as a gas chromatograph–mass spectrometer, where desorption of the analyte takes place and analysis is carried out. Advantages: The attraction of SPME is that the extraction is fast and simple, can usually be done without solvents, and detection limits can reach parts-per-trillion (ppt) levels for certain compounds. SPME also has great potential for field applications; on-site sampling can be done even by nonscientists, without the need to have gas chromatography–mass spectrometry equipment at each location. When properly stored, samples can be analyzed days later in the laboratory without significant loss of volatiles. Fiber Coatings: The coating on the SPME fiber can be selected to improve sensitivity for specific analytes of interest; ideally, the sorbent layer will have a high affinity for the target analytes. There are many commercially available SPME fiber coatings that are combinations of polydimethylsiloxane, divinylbenzene, Carboxen, polyacrylate, and polyethylene glycol. However, one downside to many of the commercially available SPME fibers is that they tend to be physically brittle due to their composition. Depending on the characteristics of the target analytes, certain properties of the coating, such as polarity, thickness, and surface area, improve extraction. The sample matrix can also influence the fiber coating selection. Based on the sample and analytes of interest, the fiber may need to tolerate direct immersion as opposed to a headspace extraction.
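The proportionality described above follows from the partition-equilibrium relation commonly quoted for SPME, n = K·Vf·Vs·C0/(K·Vf + Vs), which tends to K·Vf·C0 when the sample volume is large. The sketch below uses our own notation and hypothetical numbers.

```python
def spme_amount_extracted(K, V_fiber, V_sample, C0):
    """Amount of analyte on the fiber at equilibrium (partition model).

    K: fiber/sample distribution constant; V_*: volumes; C0: initial
    concentration. When V_sample >> K * V_fiber, this tends to
    K * V_fiber * C0, i.e. directly proportional to C0.
    """
    return K * V_fiber * V_sample * C0 / (K * V_fiber + V_sample)

# Hypothetical numbers: K = 1000, 0.5 uL coating, 5 mL sample, 10 ng/mL.
n = spme_amount_extracted(1000, 0.5e-3, 5.0, 10.0)  # volumes in mL
print(f"{n:.2f} ng on the fiber")  # ~4.5 ng of the 50 ng present
```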
**File (formation)** File (formation): A file is a military term for a number of troops drawn up in line ahead, i.e. one behind the other in a column. The number of files is the measure of the width of a column of troops in several ranks one behind the other. Usage: Files are useful when troops don't know where the enemy is, since there are overlapping fields of fire from each soldier, and cover from a possible flanking attack. Files are at a disadvantage when there are heavy weapons nearby, supported by infantry, especially machine guns and tanks. Ancient Greek use: A file of men in the Greek phalanx was called a lochos (Greek: λόχος) and usually ranged from eight to sixteen men.
**Non-autonomous mechanics** Non-autonomous mechanics: Non-autonomous mechanics describes non-relativistic mechanical systems subject to time-dependent transformations. In particular, this is the case for mechanical systems whose Lagrangians and Hamiltonians depend on time. The configuration space of non-autonomous mechanics is a fiber bundle $Q \to \mathbb{R}$ over the time axis $\mathbb{R}$ coordinated by $(t, q^i)$. This bundle is trivial, but its different trivializations $Q = \mathbb{R} \times M$ correspond to the choice of different non-relativistic reference frames. Such a reference frame is also represented by a connection $\Gamma$ on $Q \to \mathbb{R}$ which takes the form $\Gamma^i = 0$ with respect to this trivialization. The corresponding covariant differential $(q^i_t - \Gamma^i)\partial_i$ determines the relative velocity with respect to a reference frame $\Gamma$. As a consequence, non-autonomous mechanics (in particular, non-autonomous Hamiltonian mechanics) can be formulated as a covariant classical field theory (in particular, covariant Hamiltonian field theory) on $X = \mathbb{R}$. Accordingly, the velocity phase space of non-autonomous mechanics is the jet manifold $J^1Q$ of $Q \to \mathbb{R}$ provided with the coordinates $(t, q^i, q^i_t)$. Its momentum phase space is the vertical cotangent bundle $V^*Q$ of $Q \to \mathbb{R}$ coordinated by $(t, q^i, p_i)$ and endowed with the canonical Poisson structure. The dynamics of Hamiltonian non-autonomous mechanics is defined by a Hamiltonian form $p_i\,dq^i - H(t, q^i, p_i)\,dt$. One can associate to any Hamiltonian non-autonomous system an equivalent autonomous Hamiltonian system on the cotangent bundle $T^*Q$ of $Q$, coordinated by $(t, q^i, p, p_i)$ and provided with the canonical symplectic form; its Hamiltonian is $p - H$.
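As a concrete illustration (our own example, not from the source), a harmonic oscillator with time-dependent stiffness $k(t)$ fits this framework directly:

```latex
% Time-dependent harmonic oscillator as a non-autonomous system
% (illustrative example). On V^*Q with coordinates (t, q, p):
H(t,q,p) = \frac{p^2}{2m} + \frac{k(t)\,q^2}{2},
\qquad p\,dq - H(t,q,p)\,dt \ \text{(the Hamiltonian form)} .
% The equivalent autonomous system on T^*Q, with p conjugate to t as in
% the text, has Hamiltonian p - H.
```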
**Sony PMW-EX1** Sony PMW-EX1: The PMW-EX1 is a high-definition camcorder made by Sony with an MSRP of $7,790. The Sony EX1 is popular among independent filmmakers due to its 1/2" TrueHD sensors, better depth-of-field control, and better low-light capabilities. Other cameras of a comparable class use 1/3" sensors and pixel shifting or other schemes to simulate resolution. The PMW-EX1 utilizes Sony's three 1/2-inch type "Exmor" CMOS sensors, each with an effective pixel count of 1920 x 1080. Coupled with a signal-processing LSI, the PMW-EX1 produces images in 1080p (30 and 24 frame/s), 720p (up to 60 frame/s) and 1080i (up to 60 frame/s) HD. The Sony EX1 records internally to SxS (S-by-S) cards and does not record internally to tape (an external tape device would be required). The SxS-1 card was introduced in December 2009 as a more affordable option with a shorter operational life than SxS Pro cards. The development of ExpressCard adapters such as MxR, MxM and KxT has allowed the use of selected consumer-level SDHC cards at standard frame rates and at 720p rates up to 42 frame/s. For 4:2:2 color, an external recording device utilizing the EX1's HD-SDI output is required. External recording storage devices include the PHU-60K, a 60 GB portable XDCAM EX storage unit with approximately 200 minutes of recording time. Films that have used the Sony EX1 include District 9, Public Enemies, Crank: High Voltage, and The Act of Killing.
**HUH-tag** HUH-tag: HUH endonucleases (HUH-tags) are sequence-specific single-stranded DNA (ssDNA) binding proteins originating from numerous species of bacteria and viruses. Viral HUH endonucleases are involved in initiating rolling circle replication, while those of bacterial origin initiate bacterial conjugation. In biotechnology, they can be used to create protein-DNA linkages, akin to other methods such as SNAP-tag. In doing so, they create a 5' covalent bond between the ssDNA and the protein. HUH endonucleases can be fused with other proteins or used as protein tags. HUH-tag: The name HUH stands for "histidine-hydrophobic-histidine," referring to the three amino acids at the active site of the endonuclease. Some DNA viruses code for an HUH endonuclease which initiates rolling circle replication of the viral genome, and this process defines the realm Monodnaviria. Types of HUH endonucleases: HUH endonucleases are broadly split into two categories of enzymes: replication initiator proteins (Rep) and relaxase/mobilization proteins. Both contain small protein domains that recognize sequence-specific origins of replication or origins of transfer, at which they nick DNA. The nicking domains of Reps tend to be smaller, on the order of 10-20 kDa, while nicking domains from relaxases are larger, roughly 20-40 kDa in size. Mode of action: HUH endonucleases generally have two histidine (H) residues in the active site coordinating a metal cation (Mg2+ or Mn2+) that interacts with the phosphate backbone of DNA. These residues allow for a nucleophilic attack, most commonly by an activated tyrosine, on the scissile phosphate in the DNA backbone, generating a 5' covalent bond with the ssDNA. In contrast to other DNA-protein linkage approaches, this reaction occurs at ambient conditions and does not require any additional modifications. X-ray crystallography and NMR structures have provided insight into the sequence specificity of DNA binding. Applications:
- MobA relaxase incorporated into the viral capsid of adeno-associated virus to link a DNA-antibody conjugate to target the virus to specific cell types
- PCV2 Rep protein fused to Cas9 to covalently link a DNA repair template to Cas9, resulting in increased homology-directed repair in human cells
- Similarly, Agrobacterium VirD2 relaxase fused to Cas9, allowing linkage of a DNA repair template to increase homology-directed repair in plants
- PCV2 Rep protein fused to elastin-like particles (ELPs) linked to a Mucin-1 DNA aptamer to deliver drugs to cancer cells
- TraI, MobA, and TrwC relaxases used in orthogonal assembly on DNA nanostructures
- PCV2 Rep protein fused to luciferase linked to DNA aptamers that detected thrombin levels in a sample
**Mammoth plate** Mammoth plate: A mammoth plate is a photographic plate that is usually 18 x 21 inches, but may vary in size from 15 by 18 inches to 22 by 25 inches. There is no official sizing or naming production chart for these glass negatives, only historical record. Before photographic enlargers were developed, photographers used mammoth plates to make large prints that were precisely the same size as the negative from which they were made. Notably, American landscape photographer Carleton Watkins derived his detailed images of the American West, namely his views of Yosemite commissioned by the California State Geological Survey, from mammoth plates.There is a proposed naming for very large plates under the listing "Mega Mammoth".
**Lefschetz hyperplane theorem** Lefschetz hyperplane theorem: In mathematics, specifically in algebraic geometry and algebraic topology, the Lefschetz hyperplane theorem is a precise statement of certain relations between the shape of an algebraic variety and the shape of its subvarieties. More precisely, the theorem says that for a variety $X$ embedded in projective space and a hyperplane section $Y$, the homology, cohomology, and homotopy groups of $X$ determine those of $Y$. A result of this kind was first stated by Solomon Lefschetz for homology groups of complex algebraic varieties. Similar results have since been found for homotopy groups, in positive characteristic, and in other homology and cohomology theories. Lefschetz hyperplane theorem: A far-reaching generalization of the hard Lefschetz theorem is given by the decomposition theorem. The Lefschetz hyperplane theorem for complex projective varieties: Let $X$ be an $n$-dimensional complex projective algebraic variety in $\mathbb{CP}^N$, and let $Y$ be a hyperplane section of $X$ such that $U = X \setminus Y$ is smooth. The Lefschetz theorem refers to any of the following statements: The natural map $H_k(Y, \mathbb{Z}) \to H_k(X, \mathbb{Z})$ in singular homology is an isomorphism for $k < n - 1$ and is surjective for $k = n - 1$. The natural map $H^k(X, \mathbb{Z}) \to H^k(Y, \mathbb{Z})$ in singular cohomology is an isomorphism for $k < n - 1$ and is injective for $k = n - 1$. The natural map $\pi_k(Y) \to \pi_k(X)$ is an isomorphism for $k < n - 1$ and is surjective for $k = n - 1$. Using a long exact sequence, one can show that each of these statements is equivalent to a vanishing theorem for certain relative topological invariants. In order, these are: The relative singular homology groups $H_k(X, Y; \mathbb{Z})$ are zero for $k \le n - 1$. The relative singular cohomology groups $H^k(X, Y; \mathbb{Z})$ are zero for $k \le n - 1$. The relative homotopy groups $\pi_k(X, Y)$ are zero for $k \le n - 1$. Lefschetz's proof: Solomon Lefschetz used his idea of a Lefschetz pencil to prove the theorem. Rather than considering the hyperplane section $Y$ alone, he put it into a family of hyperplane sections $Y_t$, where $Y = Y_0$. Because a generic hyperplane section is smooth, all but a finite number of $Y_t$ are smooth varieties. After removing these points from the $t$-plane and making an additional finite number of slits, the resulting family of hyperplane sections is topologically trivial. That is, it is a product of a generic $Y_t$ with an open subset of the $t$-plane. $X$, therefore, can be understood if one understands how hyperplane sections are identified across the slits and at the singular points. Away from the singular points, the identification can be described inductively. At the singular points, the Morse lemma implies that there is a choice of coordinate system for $X$ of a particularly simple form. This coordinate system can be used to prove the theorem directly. The Lefschetz hyperplane theorem for complex projective varieties: Andreotti and Frankel's proof: Aldo Andreotti and Theodore Frankel recognized that Lefschetz's theorem could be recast using Morse theory. Here the parameter $t$ plays the role of a Morse function. The basic tool in this approach is the Andreotti–Frankel theorem, which states that a complex affine variety of complex dimension $n$ (and thus real dimension $2n$) has the homotopy type of a CW-complex of (real) dimension $n$. This implies that the relative homology groups of $Y$ in $X$ are trivial in degree less than $n$.
The long exact sequence of relative homology then gives the theorem. The Lefschetz hyperplane theorem for complex projective varieties: Thom's and Bott's proofs: Neither Lefschetz's proof nor Andreotti and Frankel's proof directly implies the Lefschetz hyperplane theorem for homotopy groups. An approach that does was found by René Thom no later than 1957 and was simplified and published by Raoul Bott in 1959. Thom and Bott interpret $Y$ as the vanishing locus in $X$ of a section of a line bundle. An application of Morse theory to this section implies that $X$ can be constructed from $Y$ by adjoining cells of dimension $n$ or more. From this, it follows that the relative homology and homotopy groups of $Y$ in $X$ are concentrated in degrees $n$ and higher, which yields the theorem. The Lefschetz hyperplane theorem for complex projective varieties: Kodaira and Spencer's proof for Hodge groups: Kunihiko Kodaira and Donald C. Spencer found that under certain restrictions, it is possible to prove a Lefschetz-type theorem for the Hodge groups $H^{p,q}$. Specifically, assume that $Y$ is smooth and that the line bundle $\mathcal{O}_X(Y)$ is ample. Then the restriction map $H^{p,q}(X) \to H^{p,q}(Y)$ is an isomorphism if $p + q < n - 1$ and is injective if $p + q = n - 1$. By Hodge theory, these cohomology groups are equal to the sheaf cohomology groups $H^q(X, \bigwedge^p \Omega_X)$ and $H^q(Y, \bigwedge^p \Omega_Y)$. Therefore, the theorem follows from applying the Akizuki–Nakano vanishing theorem to $H^q(X, \bigwedge^p \Omega_X|_Y)$ and using a long exact sequence. The Lefschetz hyperplane theorem for complex projective varieties: Combining this proof with the universal coefficient theorem nearly yields the usual Lefschetz theorem for cohomology with coefficients in any field of characteristic zero. It is, however, slightly weaker because of the additional assumptions on $Y$. The Lefschetz hyperplane theorem for complex projective varieties: Artin and Grothendieck's proof for constructible sheaves: Michael Artin and Alexander Grothendieck found a generalization of the Lefschetz hyperplane theorem to the case where the coefficients of the cohomology lie not in a field but instead in a constructible sheaf. They prove that for a constructible sheaf $\mathcal{F}$ on an affine variety $U$, the cohomology groups $H^k(U, \mathcal{F})$ vanish whenever $k > n$. The Lefschetz theorem in other cohomology theories: The motivation behind Artin and Grothendieck's proof for constructible sheaves was to give a proof that could be adapted to the setting of étale and $\ell$-adic cohomology. Up to some restrictions on the constructible sheaf, the Lefschetz theorem remains true for constructible sheaves in positive characteristic. The theorem can also be generalized to intersection homology. In this setting, the theorem holds for highly singular spaces. A Lefschetz-type theorem also holds for Picard groups. Hard Lefschetz theorem: Let $X$ be an $n$-dimensional non-singular complex projective variety in $\mathbb{CP}^N$. Then in the cohomology ring of $X$, the $k$-fold product with the cohomology class of a hyperplane gives an isomorphism between $H^{n-k}(X)$ and $H^{n+k}(X)$. This is the hard Lefschetz theorem, christened in French by Grothendieck more colloquially as the Théorème de Lefschetz vache. It immediately implies the injectivity part of the Lefschetz hyperplane theorem. Hard Lefschetz theorem: The hard Lefschetz theorem in fact holds for any compact Kähler manifold, with the isomorphism in de Rham cohomology given by multiplication by a power of the class of the Kähler form.
It can fail for non-Kähler manifolds: for example, Hopf surfaces have vanishing second cohomology groups, so there is no analogue of the second cohomology class of a hyperplane section. Hard Lefschetz theorem: The hard Lefschetz theorem was proven for $\ell$-adic cohomology of smooth projective varieties over algebraically closed fields of positive characteristic by Pierre Deligne (1980).
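As a quick sanity check of the statement (our own worked example), take $X = \mathbb{CP}^2$, so $n = 2$ and the hyperplane class $h$ generates the cohomology ring:

```latex
% Hard Lefschetz for X = CP^2 (n = 2), our illustrative example.
% H^*(CP^2; Q) = Q[h]/(h^3), with h the hyperplane class in degree 2.
%   k = 1:  cup with h   : H^1(X) -> H^3(X)  (both zero, trivially iso)
%   k = 2:  cup with h^2 : H^0(X) -> H^4(X),  1 |-> h^2,  an isomorphism.
L^k = \smile\, h^k : H^{\,n-k}(X; \mathbb{Q}) \xrightarrow{\ \sim\ } H^{\,n+k}(X; \mathbb{Q})
```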
**Gliese 710** Gliese 710: Gliese 710, or HIP 89825, is an orange 0.6 M☉ star in the constellation Serpens Cauda. It is projected to pass near the Sun in about 1.29 million years, at a predicted minimum distance of 0.051 parsecs (0.1663 light-years; 10,520 astronomical units; about 1.60 trillion km), roughly 1/25th of the current distance to Proxima Centauri. Such a distance would give it a brightness similar to the brightest planets, optimally reaching an apparent visual magnitude of about −2.7. The star's proper motion will peak around one arcminute per year, a rate of apparent motion that would be noticeable over a human lifespan. This projection, based on data from Gaia DR3, is well within the parameters of current models, which cover the next 15 million years. Description: Gliese 710 is currently 62.3 light-years (19.1 parsecs) from Earth in the constellation Serpens and has a below-naked-eye visual magnitude of 9.69. A stellar classification of K7 Vk means it is a small main-sequence star mostly generating energy through the thermonuclear fusion of hydrogen at its core. (The suffix 'k' indicates that the spectrum shows absorption lines from interstellar matter.) Its stellar mass is about 57% of the Sun's, with an estimated 58% of the Sun's radius. It is suspected to be a variable star that may vary in magnitude from 9.65 to 9.69. As of 2020, no planets have been detected orbiting it. Computing and details of the closest approach: In their 2010 work, Bobylev et al. suggested Gliese 710 has an 86% chance of passing through the Oort cloud, assuming the Oort cloud to be a spheroid around the Sun with semiminor and semimajor axes of 80,000 and 100,000 AU respectively. The distance of closest approach of Gliese 710 is generally difficult to compute precisely, as it depends sensitively on the star's current position and velocity; Bobylev et al. estimated that Gliese 710 would pass within 0.311±0.167 parsecs (1.014±0.545 light-years) of the Sun. At the time, there was even a 1 in 10,000 chance of the star penetrating into the region (d < 1,000 AU) where its influence on Kuiper belt objects would be significant. Computing and details of the closest approach: Results from new calculations that include input data from Gaia DR3 indicate that the flyby of Gliese 710 will on average be slightly closer, at 0.051±0.003 pc (10,635±500 AU) in 1.29±0.04 million years, but with considerably less uncertainty. The effects of such an encounter on the orbit of the Pluto–Charon system (and therefore on the classical trans-Neptunian belt) are negligible, but Gliese 710 will traverse the outer Oort cloud (inside 100,000 AU or 0.48 pc) and reach the outskirts of the inner Oort cloud (inward of 20,000 AU). Computing and details of the closest approach: Gliese 710 has the potential to perturb the Oort cloud in the outer Solar System, exerting enough force to send showers of comets into the inner Solar System for millions of years, triggering visibility of about ten naked-eye comets per year, and possibly causing an impact event. According to Filip Berski and Piotr Dybczyński, this event will be "the strongest disrupting encounter in the future and history of the Solar System". Earlier dynamic models indicated that the net increase in cratering rate due to the passage of Gliese 710 would be no more than 5%. They had originally estimated that the closest approach would happen in 1,360,000 years, when the star would approach within 0.337±0.177 parsecs (1.100±0.577 light-years) of the Sun.
Gaia DR2 later found the minimum perihelion distance to be 0.0676±0.0157 parsecs (13,900±3,200 AU), about 1.281 million years from now.
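The quoted peak brightness can be roughly sanity-checked with the inverse-square law in magnitude form, Δm = 5·log10(d_new/d_now); the exact result depends on which flyby-distance estimate is used. The sketch below is our own, using the figures quoted above.

```python
import math

def magnitude_at_distance(m_now, d_now_pc, d_new_pc):
    """Apparent magnitude after moving a star from d_now to d_new,
    via the distance modulus (inverse-square law in magnitudes)."""
    return m_now + 5 * math.log10(d_new_pc / d_now_pc)

# Gliese 710: V = 9.69 at 19.1 pc today.
print(magnitude_at_distance(9.69, 19.1, 0.051))   # ~ -3.2 (Gaia DR3 distance)
print(magnitude_at_distance(9.69, 19.1, 0.0676))  # ~ -2.6 (Gaia DR2 distance)
```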
**Incertae sedis** Incertae sedis: Incertae sedis (Latin for 'of uncertain placement') or problematica is a term used for a taxonomic group whose broader relationships are unknown or undefined. Alternatively, such groups are frequently referred to as "enigmatic taxa". In the system of open nomenclature, uncertainty at specific taxonomic levels is indicated by incertae familiae (of uncertain family), incerti subordinis (of uncertain suborder), incerti ordinis (of uncertain order) and similar terms. Examples: The fossil plant Paradinandra suecica could not be assigned to any family, but was placed incertae sedis within the order Ericales when described in 2001. The fossil Gluteus minimus, described in 1975, could not be assigned to any known animal phylum. The genus is therefore incertae sedis within the kingdom Animalia. While it was unclear to which order the New World vultures (family Cathartidae) should be assigned, they were placed in Aves incertae sedis. It was later agreed to place them in a separate order, Cathartiformes. Bocage's longbill, Motacilla bocagii, previously known as Amaurocichla bocagii, is a species of passerine bird that belongs to the superfamily Passeroidea. Since it was unclear to which family it belongs, it was classified as Passeroidea incertae sedis, until a 2015 phylogenetic study placed it in Motacilla of Motacillidae. Parakaryon myojinensis is a single-celled organism that is apparently distinct from both prokaryotes and eukaryotes. In formal nomenclature: When formally naming a taxon, uncertainty about its taxonomic classification can be problematic. The International Code of Nomenclature for algae, fungi, and plants stipulates that "species and subdivisions of genera must be assigned to genera, and infraspecific taxa must be assigned to species, because their names are combinations", but ranks higher than the genus may be assigned incertae sedis. Reason for use: Poor description This excerpt from a 2007 scientific paper about crustaceans of the Kuril–Kamchatka Trench and the Japan Trench describes typical circumstances in which this category is applied: "...the removal of many genera from new and existing families into a state of incertae sedis. Their reduced status was attributed largely to poor or inadequate descriptions but it was accepted that some of the vagueness in the analysis was due to insufficient character states. It is also evident that a proportion of the characters used in the analysis, or their given states for particular taxa, were inappropriate or invalid. Additional complexity, and factors that have misled earlier authorities, are intrusion by extensive homoplasies, apparent character state reversals and convergent evolution." Reason for use: Not included in an analysis If a formal phylogenetic analysis is conducted that does not include a certain taxon, the authors might choose to label the taxon incertae sedis instead of guessing its placement. This is particularly common when molecular phylogenies are generated, since tissue for many rare organisms is hard to obtain. It is also a common scenario when fossil taxa are included, since many fossils are defined based on partial information. For example, if the phylogeny was constructed using soft tissue and vertebrae as principal characters and the taxon in question is only known from a single tooth, it would be necessary to label it incertae sedis.
Reason for use: Controversy If conflicting results exist or if there is not a consensus among researchers as to how a taxon relates to other organisms, it may be listed as incertae sedis until the conflict is resolved. Other ways of denoting uncertainty: Uncertain taxonomic assignments of other degrees may be denoted using the 'cf.' (before a taxon name) and '?' (after a taxon name) specifiers. In zoological nomenclature: In zoological nomenclature, "incertae sedis" is not a nomenclatural term per se, but is used by taxonomists in their classifications to mean "of uncertain taxonomic position" (Glossary). In botany, a name is not validly published if it is not accepted by the author in the same publication (Article 36.1). In zoology, a name proposed conditionally may be available under certain conditions (Articles 11 and 15). For uncertainties at lower levels, some authors have proposed a system of "open nomenclature", suggesting that question marks be used to denote a questionable assignment. For example, if a new species was given the specific epithet album by Anton and attributed with uncertainty to Agenus, it could be denoted "Agenus? album Anton (?Anton)"; the "(?Anton)" indicates the author that assigned the question mark. So if Anton described Agenus album, and Bruno called the assignment into doubt, this could be denoted "Agenus? album (Anton) (?Bruno)", with the parentheses around Anton because the original assignment (to Agenus) was modified (to Agenus?) by Bruno. This practice is not included in the International Code of Zoological Nomenclature, and is used only by paleontologists.
**Rectal foreign body** Rectal foreign body: Rectal foreign bodies are large foreign items found in the rectum that can be assumed to have been inserted through the anus, rather than reaching the rectum via the mouth and gastrointestinal tract. They can be of clinical relevance if the patient cannot remove them the way they intended. Smaller, ingested foreign bodies, such as bones eaten with food, can sometimes be found stuck in the rectum upon x-ray and are rarely of clinical relevance. Rectal foreign body: Rectal foreign bodies are a subgroup of foreign bodies in the alimentary tract. Signs and symptoms: If the foreign body is too big to allow feces from the colon to pass, a mechanical ileus may occur. The distension of the rectum and the disruption of peristalsis reinforce this effect. The foreign body may cause infections that destroy the intestinal wall. Depending on the location of the perforation, this may lead to peritonitis due to the feces, or to an abscess in the retroperitoneal space. Smaller objects that injure the intestinal wall, but do not perforate it, may be encapsulated by a foreign body granuloma. They may remain in the rectum as a pseudotumor without any further effects. Signs and symptoms: Complications The most common – but still rare – complication is a perforation of the rectum caused by the foreign object itself or by attempts to remove it. Diagnosed perforations are operated on immediately, by opening the abdomen and removing or suturing the perforated area. In order to suppress infections, antibiotics are usually prescribed. Often, a temporary ileostomy is necessary to protect the stitches. After a contrast medium applied by an enema proves the complete healing of the perforated area, the ileostomy is reversed. This usually takes between three and six months. Average hospitalization is 19 days. Medical literature describes some deaths due to rectal foreign bodies, but they are very rare and usually classified as autoerotic fatalities. A 75-year-old patient died due to a rectal perforation caused by a mentally ill person using a cane. Another middle-aged patient died due to a rectal perforation by a vibrator. The perforation was sutured and the patient received intensive medical care, but he contracted acute respiratory distress syndrome (ARDS) and systemic inflammatory response syndrome (SIRS) due to the trauma, resulting in multiple organ dysfunction syndrome (MODS) and death. There is a paper describing a death after a perforation with a shoehorn. The rectum has to be nursed after a surgical procedure until healing is complete. A 54-year-old man, who had been operated on twice in order to remove a foreign body (a cucumber and a parsnip), died due to peritonitis after he inserted two apples into his rectum before the wound had healed. Causes: Reasons for foreign rectal bodies vary widely, but in most cases they are of sexual or criminal motivation. The foreign body was inserted voluntarily in the vast majority of cases. This especially includes sexually motivated behaviour, which encompasses the majority of cases. Bodypacking, i.e. the illegal transport of drugs within a body orifice (here: inside the rectum), is another – potentially – voluntary reason for the insertion of foreign rectal bodies. This includes attempts to transport objects like weapons, including knives, or ammunition. According to one study, sexual stimulation was responsible for 80% of clinically relevant foreign rectal bodies.
About 10% of the cases were due to sexual assault. In rare cases, the patient inserted the object into the rectum without a way to remove it, intending to receive attention and pity from doctors and nurses. This behaviour is categorized as Munchausen's syndrome. Another cause may be attempted self-treatment of diseases. One patient attempted to treat his chronic diarrhea by inserting an ear of maize into his rectum. Another patient tried to soothe the itching due to his hemorrhoids (pruritus ani) with a toothbrush. The toothbrush went out of control and disappeared inside his anus. Accidents or torture may cause an involuntary insertion of a foreign body. A mercury medical thermometer that broke off inside the anus while being used to measure the temperature is an example of a foreign rectal body due to an accident. Ancient Greece knew rhaphanidosis, the insertion of a radish into the anus, as a punishment for male adulterers. Many self-inserted rectal bodies are described as accidental by the patients due to feelings of shame. Causes: Several factors contribute to the jamming of rectal bodies inside the rectum. Many of the objects used for sexual stimulation have a conical tip in order to facilitate penetration, while the base is flat. Extraction by the user may be impossible if the base of the object has passed the anus into the rectum. In order to receive a stronger stimulation, the object may be inserted deeper than intended. In this case, the sphincter prevents, by mechanical means, the extraction of the foreign body. Causes: By mouth The other way for a foreign body to travel through the digestive system (after oral intake and passage through the entire intestines) happens very often, but is only rarely medically relevant. Other constrictions, such as the esophagus, cardia, pylorus or ileocecal valve, tend to cause issues with other organs, provided a foreign body is large enough to be an issue. Some foreign bodies, e.g. toothpicks and bones, may still pass those narrows and may cause medically relevant issues. Bones especially, e.g. from chickens, cause about two thirds of all intestinal perforations. Plant-based food, especially seeds like popcorn, watermelon, sunflower and pumpkin seeds, may clump together inside the lower intestines to form bezoars. Those may grow too big for normal anal passage, thus becoming clinically relevant. This kind of rectal foreign body occurs chiefly in children, especially in Northern Africa and the Middle East, where those seeds form an elemental part of the diet. In very rare cases, seeds inside a bezoar may germinate inside the lower intestines or the rectum, causing a blockade. Causes: Objects The type and size of foreign rectal bodies are diverse and may exceed what one would imagine anatomically possible. Objects documented in the literature include: razors, screws, screwdrivers, a small rolled tool bag (15×12 cm, 620 g including tools), hairpins, a milk can opener, drill bits; short staffs, such as a 27 cm long chair leg, a 19 cm long spade handle and a broken-off broom handle; extension parts for a vacuum cleaner; containers, sometimes exceeding 0.5 L in volume, e.g.
sparkling wine bottles, bottles of Coca-Cola, jam pots, small beer glasses and cups; spray cans, light bulbs, vacuum tubes and candles; a WWII artillery shell requiring attention from a bomb squad; table tennis and boccia balls; ammunition and firecrackers; vibrators, rubber rods and dildos; a toy car; spectacles, a suitcase key, a tobacco pouch and a magazine all at the same time; and a plastic toothbrush case. Not all objects are solid. In 1987, a case was documented of a patient who administered a cement enema. After it solidified and impacted, the resulting block had to be surgically extracted. Another extreme case occurred in November 1953. A depressed man inserted a 15 cm long cardboard tube into his rectum and tossed a lighted firecracker into the tube's opening, resulting in a large hole in his rectum. Diagnosis: Many patients feel ashamed during the anamnesis (the taking of the medical history) and provide information only reluctantly. This may lead to missing information that may be important for therapy. For the same reason, patients may not visit a doctor until very late. Trusting and sensitive care for the ashamed and uncomfortable patient is paramount for successful therapy and may be life-saving. Usually, several radiological images are recorded in order to pinpoint the precise position and depth of the foreign body. This is usually done by x-ray. Foreign bodies made from low-contrast material (e.g. plastics) may necessitate medical ultrasound or a CT scan. Magnetic resonance imaging is contraindicated, especially if the composition of the foreign body is unknown. Rectal foreign bodies may penetrate deep into the colon, in certain circumstances up to the right colic flexure. An endoscopy, which may also be of use during therapy, facilitates the identification and localisation of the object inside the rectum. Information about the foreign body obtained in these ways is of high importance during therapy, as a perforation of the rectum or the anus must be absolutely avoided. Treatment: The therapeutic measures to remove the foreign body can be as diverse as the objects inside the rectum. In many instances, the foreign bodies consist of fragile materials, such as glass. Most patients wait for several hours or even days before they visit a doctor. Before they do, they often repeatedly try to remove the object themselves or with the help of a layperson, which often worsens the prospects for a successful extraction. Treatment: In most cases, the foreign body can be removed endoscopically. Vibrators, for example, can often be removed using a large snare of the kind usually used to remove polyps during colonoscopy. A flexible endoscope may be of no help with large and jammed objects; it may be preferable to use rigid instruments in those cases. There have been several cases where instruments used in childbirth, such as forceps and suction cups, have proven their worth for the removal of such foreign bodies. Treatment: Wooden objects have been retrieved with corkscrews; drinking glasses have been retrieved after filling them with plaster. A spoon can be used as an "anchor" by leaving it inside the glass during the plaster filling and removing it together with the glass. Light bulbs are encased in a gauze shroud, shattered inside the rectum and extracted. There has also been a successful case using argon plasma coagulation: the object in question was a green apple wrapped in cellophane inside the rectum of a 44-year-old patient, and the argon beam coagulation shrank the apple by more than 50%, enabling its removal.
Previous extraction attempts using endoscopic tools had failed due to the flat surface of the object. If the object is too far up, in the area of the sigmoid colon, and cannot be removed using one of the above methods, bed rest and sedation can cause the object to descend back into the rectum, where retrieval and extraction are easier. In difficult cases, a laparotomy may be necessary; statistically, this is the case in about 10 percent of patients. The large intestine can be manipulated inside the abdominal cavity, allowing the object to be moved in the direction of the anus and grasped there. A surgical opening of the large intestine may be indicated in very difficult cases, especially if manipulation of the object may pose a serious health risk, as with a jammed drug condom. Treatment: Anaesthesia Mild cases may require sedation at most. Local and spinal anaesthesia are commonly used. Difficult interventions may need general anaesthesia; surgical opening of the abdominal cavity or the colon requires it. General anaesthesia can be beneficial for the relaxation of the sphincter. Aftercare After the surgery, a sigmoidoscopy (a colonoscopy limited to the first 60 cm of the colon) is good practice in order to rule out possible perforation and injury of the rectum and the sigmoid colon. Inpatient aftercare may be indicated. Epidemiology: There are no reliable data about the incidence of clinically meaningful rectal foreign bodies. The incidence may have increased over the long term, as cases have been observed more often in recent times. The incidence is significantly higher for men than for women, with a gender ratio in the area of 28:1; a 2010 meta-study found a ratio of 37:1. The average age of the patients was 44.1 years, with a standard deviation of 16.6 years. Rectal foreign bodies are not an unusual occurrence in hospital emergency rooms. The first documented case dates from the 16th century. Other animals: Rectal foreign bodies are rare in veterinary medicine. Passage through the entire intestine followed by retention in the rectum is, as with humans, rare. Animals may develop bezoars from various materials, which may migrate to the rectum and cause problems. Ig Nobel Prize: The Ig Nobel Prize was awarded in 1995 to David B. Busch and James R. Starling from Madison, Wisconsin, for their 1986 article Rectal foreign bodies: Case Reports and a Comprehensive Review of the World's Literature (see List of Ig Nobel Prize winners).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Catenation** Catenation: In chemistry, catenation is the bonding of atoms of the same element into a series, called a chain. A chain or a ring shape may be open if its ends are not bonded to each other (an open-chain compound), or closed if they are bonded in a ring (a cyclic compound). The words to catenate and catenation reflect the Latin root catena, "chain". Carbon: Catenation occurs most readily with carbon, which forms covalent bonds with other carbon atoms to form longer chains and structures. This is the reason for the presence of the vast number of organic compounds in nature. Carbon is best known for its catenation, with organic chemistry essentially being the study of catenated carbon structures, known as catenae. In biochemistry, carbon chains combine various other elements, such as hydrogen, oxygen, and biometals, onto the backbone of carbon. Carbon: However, carbon is by no means the only element capable of forming such catenae; several other main-group elements form an expansive range of catenae, including hydrogen, boron, silicon, phosphorus, sulfur and the halogens. Carbon: The ability of an element to catenate is primarily based on the bond energy of the element to itself, which decreases with more diffuse orbitals (those with higher azimuthal quantum number) overlapping to form the bond. Hence, carbon, with the least diffuse valence shell p orbital, is capable of forming longer p-p sigma-bonded chains of atoms than heavier elements, which bond via higher valence shell orbitals. Catenation ability is also influenced by a range of steric and electronic factors, including the electronegativity of the element in question, the molecular orbital hybridisation and the ability to form different kinds of covalent bonds. For carbon, the sigma overlap between adjacent atoms is sufficiently strong that perfectly stable chains can be formed. With other elements this was once thought to be extremely difficult, in spite of plenty of evidence to the contrary. Hydrogen: Theories of the structure of water involve three-dimensional networks of tetrahedra and chains and rings, linked via hydrogen bonding. A polycatenated network, with rings formed from metal-templated hemispheres linked by hydrogen bonds, was reported in 2008. In organic chemistry, hydrogen bonding is known to facilitate the formation of chain structures. 4-Tricyclanol C10H16O, for example, shows catenated hydrogen bonding between the hydroxyl groups, leading to the formation of helical chains; crystalline isophthalic acid C8H6O4 is built up from molecules connected by hydrogen bonds, forming infinite chains. In unusual conditions, a one-dimensional series of hydrogen molecules confined within a single-wall carbon nanotube is expected to become metallic at a relatively low pressure of 163.5 GPa. This is about 40% of the ~400 GPa thought to be required to metallize ordinary hydrogen, a pressure which is difficult to access experimentally. Silicon: Silicon can form sigma bonds to other silicon atoms (and disilane is the parent of this class of compounds). However, it is difficult to prepare and isolate SinH2n+2 (analogous to the saturated alkane hydrocarbons) with n greater than about 8, as their thermal stability decreases with increases in the number of silicon atoms. Silanes higher in molecular weight than disilane decompose to polymeric polysilicon hydride and hydrogen.
But with a suitable pair of organic substituents in place of hydrogen on each silicon, it is possible to prepare polysilanes (sometimes erroneously called polysilenes) that are analogues of alkanes. These long-chain compounds have surprising electronic properties, such as high electrical conductivity, arising from sigma delocalization of the electrons in the chain. Even silicon–silicon pi bonds are possible. However, these bonds are less stable than the carbon analogues. Disilane is quite reactive compared to ethane. Disilenes and disilynes are quite rare, unlike alkenes and alkynes. Examples of disilynes, long thought to be too unstable to be isolated, were reported in 2004. Boron: In the dodecaborate(12) anion, twelve boron atoms covalently link to each other to form an icosahedral structure. Various other similar motifs are also well studied, such as boranes, carboranes and metal dicarbollides. Nitrogen: Nitrogen, unlike its neighbor carbon, is much less likely to form chains that are stable at room temperature; examples include solid nitrogen, triazane, the azide anion and triazoles. Even longer series with eight nitrogen atoms or more, such as 1,1'-azobis-1,2,3-triazole, have been synthesized. These compounds have potential use as a convenient way to store large amounts of energy. Phosphorus: Phosphorus chains (with organic substituents) have been prepared, although these tend to be quite fragile. Small rings or clusters are more common. Sulfur: The versatile chemistry of elemental sulfur is largely due to catenation. In the native state, sulfur exists as S8 molecules. On heating, these rings open and link together, giving rise to increasingly long chains, as evidenced by the progressive increase in viscosity as the chains lengthen. Also, sulfur polycations, sulfur polyanions (polysulfides) and lower sulfur oxides are all known. Furthermore, selenium and tellurium show variants of these structural motifs. Semimetallic elements: In recent years a variety of double and triple bonds between the semi-metallic elements have been reported, including silicon, germanium, arsenic and bismuth. The ability of certain main group elements to catenate is currently the subject of research into inorganic polymers. Halogen elements: Except for fluorine, which can only form unstable polyfluorides at low temperature, all the other stable halogens (Cl, Br, I) can form several isopolyhalogen anions that are stable at room temperature, the most prominent example being triiodide. In all these anions, the halogen atoms of the same element bond to each other.
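To make the bond-energy argument above concrete, the following minimal sketch ranks a few elements by the approximate strength of their homonuclear single bonds. The energies are rounded textbook values that vary somewhat between sources; they are illustrative assumptions, not figures taken from this article.

```python
# Approximate homonuclear single-bond energies in kJ/mol.
# Rounded textbook values; exact figures vary between sources.
BOND_ENERGIES = {
    "C-C": 346,
    "S-S": 226,
    "Si-Si": 222,
    "P-P": 201,
    "Ge-Ge": 188,
    "N-N": 167,
    "O-O": 142,
}

def rank_by_catenation_tendency(energies: dict) -> list:
    """Sort element-element bonds from strongest to weakest.

    A stronger homonuclear single bond loosely predicts a greater
    ability to form long, stable chains (catenation).
    """
    return sorted(energies.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for bond, energy in rank_by_catenation_tendency(BOND_ENERGIES):
        print(f"{bond}: ~{energy} kJ/mol")
    # C-C comes out on top, consistent with carbon's unmatched catenation,
    # while N-N and O-O sit near the bottom, consistent with nitrogen's
    # reluctance to form chains that are stable at room temperature.
```

The resulting ordering reproduces the qualitative trend described in the text: carbon first, sulfur and silicon next, and nitrogen and oxygen near the bottom.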
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Observe. Hack. Make.** Observe. Hack. Make.: Observe. Hack. Make., also known as OHM, was an outdoor hacker conference that took place in the Netherlands from July 31 to August 4, 2013. This conference was part of a sequence that began with the Galactic Hacker Party in 1989, followed by Hacking at the End of the Universe in 1993, Hacking In Progress in 1997, Hackers At Large in 2001, What the Hack in 2005, and Hacking at Random in 2009. The tradition continued in 2017 with Still Hacking Anyway and May Contain Hackers in 2022. Observe. Hack. Make.: With 3000 tickets sold, the camp was completely sold out weeks before it started. With 700 more tickets sold than at the previous camp, it was the biggest Dutch hacker camp to date. Julian Assange: One of the highlights of the event was a live video connection with Julian Assange. After his appearance at Hacking at Random in 2009, the organizers invited Assange again. This time he spoke to the audience over a live video connection from the Ecuadorian embassy in London.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SRC1** SRC1: SRC1 (systematic name YML034W or YML034w) is a yeast inner nuclear membrane protein which regulates subtelomeric genes and is linked to TREX (transcription export) factors. SRC1 produces two splice-variant proteins with different functions; alternative splicing of SRC1 pre-mRNA is promoted by Hub1p, and the mutant exhibits aneuploidy tolerance.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GURPS Fantasy Bestiary** GURPS Fantasy Bestiary: GURPS Fantasy Bestiary is a sourcebook for GURPS. Contents: GURPS Fantasy Bestiary is a supplement for GURPS Magic containing original monsters and creatures commonly found in fantasy settings. Publication history: GURPS Fantasy Bestiary was written by Steffan O'Sullivan with Steve Jackson and Warren Spector, with a cover by Carol Heyer and illustrations by Tom Baxa, and published as a 128-page book by Steve Jackson Games in 1990. Reception: Lawrence Schick cited the "Animals with Multiple or Unusual Heads" section as his favorite.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Basal area** Basal area: Basal area is the cross-sectional area of trees at breast height (1.3 m or 4.5 ft above ground). It is a common way to describe stand density. In forest management, basal area usually refers to merchantable timber and is given on a per hectare or per acre basis. If you cut down all the merchantable trees on an acre at 4.5 feet off the ground, measured the area in square inches on top of each stump (π × r²), added them all together and divided by 144 (the number of square inches in a square foot), that would be the basal area on that acre in square feet. In forest ecology, basal area is used as a relatively easily measured surrogate of total forest biomass and structural complexity, and change in basal area over time is an important indicator of forest recovery during succession. Estimation from diameter at breast height: The basal area (BA) of a tree can be estimated from its diameter at breast height (DBH), the diameter of the trunk as measured 1.3 m (4.5 ft) above the ground. DBH is converted to BA based on the formula for the area of a circle: BA = π × (DBH/2)². If DBH is measured in cm, BA will be in cm²; divide by 10,000 to convert to m². If DBH is in inches, BA will be in square inches; divide by 144 to convert to ft². The formula may therefore be simplified to BA (ft²) = 0.005454 × DBH (in)² in the English system, or BA (m²) = 0.00007854 × DBH (cm)² in the metric system. The basal area of a forest can be found by adding the basal areas (as calculated above) of all of the trees in an area and dividing by the area of land in which the trees were measured. Basal area is generally calculated for a plot and then scaled to m²/ha or ft²/acre to compare forest productivity and growth rate among multiple sites. Estimation using a wedge prism: A wedge prism can be used to quickly estimate basal area per hectare. To find basal area using this method, simply multiply your BAF (Basal Area Factor) by the number of "in" trees in your variable radius plot. The BAF varies with the prism used (common BAFs include 5, 8 and 10), and an "in" tree is one whose image, viewed through the prism from plot centre, still overlaps the stem seen outside the prism. Worked example: Suppose you carried out a survey using a variable radius plot with angle count sampling (wedge prism) and you selected a Basal Area Factor (BAF) of 4. If your first tree had a diameter at breast height (DBH) of 14 cm, then the standard way of scaling up from that tree to the hectare is: (BAF / ((DBH + 0.5)² × π/4)) × 10,000. Here BAF, in this case 4, is the factor selected for the sampling technique; DBH, in this case 14, is the diameter in cm (in practice measured perpendicular to the line of sight from plot centre); the + 0.5 allows under- and over-measurement to be accounted for; the × π/4 converts the squared diameter into a circular area in cm²; and, because that area sits in the denominator, the × 10,000 converts it from cm² to m². In this case the result is about 242, meaning that this sampled tree, taken as representative of all the unmeasured trees, stands for roughly 242 such trees per hectare, which together carry the 4 m²/ha of basal area implied by the BAF. Fixed area plot: It would also be possible to survey the trees in a Fixed Area Plot (FAP), also called a Fixed Radius Plot. If, for example, the plot covered 100 m², each tree's basal area would be calculated as (DBH + 0.5)² × π/4, and the plot total would then be multiplied by 100 (the number of 100 m² plots in a hectare) to give a per-hectare figure.
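The calculations above can be summarized in a short, self-contained Python sketch. The function names and the sample values in the demonstration are invented for this example; the constants come directly from the formulas in the text (π/4 for the area of a circle from its diameter, 10,000 cm² per m², 10,000 m² per hectare, and the +0.5 cm measurement allowance used in the worked example).

```python
import math

def basal_area_m2(dbh_cm: float) -> float:
    """Basal area of one tree in m^2 from DBH in cm: pi*(DBH/2)^2 / 10,000."""
    return math.pi * (dbh_cm / 2.0) ** 2 / 10_000

def plot_basal_area_per_ha(dbh_list_cm, plot_area_m2: float) -> float:
    """Fixed-area plot: sum per-tree basal areas, then scale to one hectare."""
    total_m2 = sum(basal_area_m2(d) for d in dbh_list_cm)
    return total_m2 * (10_000 / plot_area_m2)

def prism_basal_area_per_ha(baf: float, in_tree_count: int) -> float:
    """Wedge prism: basal area per hectare = BAF x number of 'in' trees."""
    return baf * in_tree_count

def trees_per_ha_for_in_tree(baf: float, dbh_cm: float) -> float:
    """Worked example: trees/ha represented by one 'in' tree.

    (BAF / ((DBH + 0.5)^2 * pi/4)) * 10,000, with DBH in cm; the +0.5 cm
    is the measurement allowance described in the text.
    """
    tree_ba_cm2 = (dbh_cm + 0.5) ** 2 * math.pi / 4.0
    return baf / tree_ba_cm2 * 10_000

if __name__ == "__main__":
    # The article's worked example: BAF 4, DBH 14 cm -> about 242 trees/ha.
    print(round(trees_per_ha_for_in_tree(baf=4, dbh_cm=14)))   # 242
    # A hypothetical 100 m^2 plot holding ten 14 cm DBH trees.
    print(round(plot_basal_area_per_ha([14] * 10, 100.0), 1))  # ~15.4 m^2/ha
```

Running the sketch reproduces the worked example: an "in" tree of 14 cm DBH under a BAF of 4 corresponds to roughly 242 trees per hectare.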
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Infant formula** Infant formula: Infant formula, also called baby formula, simply formula (American English), baby milk or infant milk (British English), is a manufactured food designed and marketed for feeding to babies and infants under 12 months of age, usually prepared for bottle-feeding or cup-feeding from powder (mixed with water) or liquid (with or without additional water). The U.S. Federal Food, Drug, and Cosmetic Act (FFDCA) defines infant formula as "a food which purports to be or is represented for special dietary use solely as a food for infants by reason of its simulation of human milk or its suitability as a complete or partial substitute for human milk". Manufacturers state that the composition of infant formula is designed to be roughly based on a human mother's milk at approximately one to three months postpartum; however, there are significant differences in the nutrient content of these products. The most commonly used infant formulas contain purified cow's milk whey and casein as a protein source, a blend of vegetable oils as a fat source, lactose as a carbohydrate source, a vitamin-mineral mix, and other ingredients depending on the manufacturer. Modern infant formulas also contain human milk oligosaccharides, which are beneficial for immune development and a healthy gut microbiota in babies. In addition, there are infant formulas using soybean as a protein source in place of cow's milk (mostly in the United States and Great Britain) and formulas using protein hydrolysed into its component amino acids for infants who are allergic to other proteins. An upswing in breastfeeding in many countries has been accompanied by a deferment in the average age of introduction of baby foods (including cow's milk), resulting in both increased breastfeeding and increased use of infant formula between the ages of 3 and 12 months. A 2001 World Health Organization (WHO) report found that infant formula prepared in accordance with applicable Codex Alimentarius standards was a safe complementary food and a suitable breast milk substitute. In 2003, the WHO and UNICEF published their Global Strategy for Infant and Young Child Feeding, which restated that "processed-food products for...young children should, when sold or otherwise distributed, meet applicable standards recommended by the Codex Alimentarius Commission", and also warned that "lack of breastfeeding—and especially lack of exclusive breastfeeding during the first half-year of life—are important risk factors for infant and childhood morbidity and mortality". Infant formula: In particular, the use of infant formula in less economically developed countries is linked to poorer health outcomes because of the prevalence of unsanitary preparation conditions, including lack of clean water and lack of sanitizing equipment. A formula-fed child living in unclean conditions is between 6 and 25 times more likely to die of diarrhea and four times more likely to die of pneumonia than a breastfed child. Rarely, use of powdered infant formula (PIF) has been associated with serious illness, and even death, due to infection with Cronobacter sakazakii and other microorganisms that can be introduced to PIF during its production. Although C. sakazakii can cause illness in all age groups, infants are believed to be at greatest risk of infection. Between 1958 and 2006, several dozen cases of C. sakazakii infection were reported worldwide. The WHO believes that such infections are under-reported.
Uses, risks and controversies: The use and marketing of infant formula has come under scrutiny. Breastfeeding, including exclusive breastfeeding for the first 6 months of life, is widely advocated as "ideal" for babies and infants, both by health authorities and, accordingly, in the ethical advertising of infant formula manufacturers. Despite the recommendation that babies be exclusively breastfed for the first 6 months, less than 40% of infants below this age are exclusively breastfed worldwide. The overwhelming majority of American babies are not exclusively breastfed for this period: in 2005, under 12% of babies were breastfed exclusively for the first 6 months, with over 60% of babies of 2 months of age being fed formula, and approximately one in four breastfed infants having infant formula feeding within two days of birth. Some studies have shown that use of formula can vary according to the parents' socio-economic status, ethnicity or other characteristics. For example, according to research conducted in Vancouver, Canada, 82.9% of mothers breastfeed their babies at birth, but the number differed between Caucasians (91.6%) and non-Caucasians (56.8%), with the difference essentially attributed to marital status, education and family income. In the United States, mothers of lower socio-economic status have been found less likely to breastfeed, although this may be partly related to adverse effects of government nutrition supplementation programs that provide subsidies for infant formula. The use of hydrolysed cow milk baby formula versus standard milk baby formula does not appear to change the risk of allergies or autoimmune diseases. Uses, risks and controversies: Use of infant formula In some cases, breastfeeding is medically contraindicated. These include: Mother's health: The mother is infected with HIV or has active tuberculosis. She is extremely ill or has had certain kinds of breast surgery, which may have removed or disconnected all milk-producing parts of the breast. She is taking any kind of drug that could harm the baby, including both prescription drugs such as cytotoxic chemotherapy for cancer treatment and illicit drugs. One of the main global risks posed by breast milk specifically is the transmission of HIV and other infectious diseases. Breastfeeding by an HIV-infected mother poses a 5–20% chance of transmitting HIV to the baby. However, if a mother has HIV, she is more likely to transmit it to her child during the pregnancy or birth than during breastfeeding. A 2012 study conducted by researchers from the University of North Carolina School of Medicine showed reduced HIV-1 transmission in humanized mice, due to components in the breast milk. Cytomegalovirus infection poses potentially dangerous consequences for pre-term babies. Other risks include the mother's infection with HTLV-1 or HTLV-2 (viruses that could cause T-cell leukemia in the baby), herpes simplex when lesions are present on the breasts, and chickenpox in the newborn when the disease manifested in the mother within a few days of birth. In some cases these risks can be mitigated by using heat-treated milk and nursing for a briefer time (e.g. 6 months, rather than 18–24 months), and can be avoided by using an uninfected woman's milk, as via a wet-nurse or milk bank, or by using infant formula and/or treated milk.
Uses, risks and controversies: In balancing the risks, such as in cases where the mother is infected with HIV, a decision to use infant formula versus exclusive breastfeeding may be made based on alternatives that satisfy the "AFASS" (Acceptable, Feasible, Affordable, Sustainable and Safe) principles. Baby is unable to breastfeed: The child has a birth defect or inborn error of metabolism, such as galactosemia, that makes breastfeeding difficult or impossible. Uses, risks and controversies: Baby is considered at risk for malnutrition: In certain circumstances infants may be at risk for malnutrition, for example due to iron deficiency, vitamin deficiencies (e.g. vitamin D, of which breast milk may contain less than is needed at high latitudes, where there is less sun exposure), or inadequate nutrition during the transition to solid foods. Risks can often be mitigated with improved diet and education of mothers and caregivers, including availability of macro- and micronutrients. For example, in Canada, marketed infant formulas are fortified with vitamin D, but Health Canada also recommends breastfed infants receive extra vitamin D in the form of a supplement. Uses, risks and controversies: Personal preferences, beliefs, and experiences: The mother may dislike breastfeeding or find it inconvenient. In addition, breastfeeding can be difficult for victims of rape or sexual abuse; for example, it may be a trigger for posttraumatic stress disorder. Many families bottle-feed to increase the father's role in parenting his child. Uses, risks and controversies: Mental health: The pressure to breastfeed in many cultures can be so great that the mother's mental health may decline sharply. This can have physical effects such as poor latching, as well as milk depletion and a lack of connection to the child. In some cases it may be better for the child to be formula-fed, allowing a good bond to form between mother and child, than for the 'special bond' attributed to breastfeeding to be tainted by negative breastfeeding experiences. The pressure to breastfeed in many cultures can increase the likelihood of postpartum depression. Uses, risks and controversies: Absence of the mother: The child is adopted, orphaned, abandoned, or in the sole custody of a man or male same-sex couple. The mother is separated from her child by being in prison or a mental hospital. The mother has left the child in the care of another person for an extended period of time, such as while traveling or working abroad. Uses, risks and controversies: Food allergies: The mother eats foods that may provoke an allergic reaction in the infant. Financial pressures: Maternity leave is unpaid, insufficient, or lacking. The mother's employment interferes with breastfeeding. Mothers who breastfeed may experience a loss of earning power. Societal structure: Breastfeeding may be forbidden, discouraged or difficult at the mother's job, school, place of worship or in other public places, or the mother may feel that breastfeeding in these places or around other people is immodest, unsanitary, or inappropriate. Uses, risks and controversies: Social pressures: Family members, such as the mother's husband or boyfriend, or friends or other members of society may encourage the use of infant formula. For example, they may believe that breastfeeding will decrease the mother's energy, health, or attractiveness. Conversely, societal pressures to breastfeed can also lead to mental health issues.
A sense of shame from being unable to breastfeed, or from struggling to do so, and the accompanying feeling of failure has been connected to postpartum depression. Lack of training and education: The mother lacks education and training from medical providers or community members. Lactation insufficiency: The mother is unable to produce sufficient milk. In studies that do not account for lactation failure with obvious causes (such as use of formula and/or breast pumps), chronic lactation insufficiency affects around 10–15% of women. For about 5–8% of women, milk coming in (i.e., lactogenesis II) may not occur at all, and only drops are produced. Alternatively, despite a healthy supply, the woman or her family may incorrectly believe that her breast milk is of low quality or in low supply. These women may choose infant formula either exclusively or as a supplement to breastfeeding. New research is showing that mothers who report problems with milk production have physical markers indicating low milk production, calling into question the assumption (called "perceived insufficient milk supply" or PIMS) that mothers are incorrect about the quantity of milk they are producing. Uses, risks and controversies: Fear of exposure to environmental contaminants: Certain environmental pollutants, such as polychlorinated biphenyls, can bioaccumulate in the food chain and may be found in humans, including in mothers' breast milk. However, studies have shown that the greatest risk period for adverse effects from environmental exposures is prenatal. Other studies have further found that the levels of most persistent organohalogen compounds in human milk decreased significantly over the past three decades, and so did infants' exposure to them through breastfeeding. Uses, risks and controversies: Research on risks from chemical pollution is generally inconclusive as to whether they outweigh the benefits of breastfeeding. Studies supported by the WHO and others have found that neurological benefits of breast milk remain, regardless of dioxin exposure. In developing countries, environmental contaminants associated with increased health risks from use of infant formula, particularly diarrhea due to unclean water and lack of sterile conditions – both prerequisites to the safe use of formula – often outweigh any risks from breastfeeding. Uses, risks and controversies: Lack of other sources of breast milk: Lack of wet nurses: Wet nursing is illegal and stigmatized in some countries, and may not be available. It may also be socially unsupported or expensive, or health screening of wet nurses may not be available. The mother, her doctor, or family may not know that wet nursing is possible, or may believe that nursing by a relative or paid wet-nurse is unhygienic. Uses, risks and controversies: Lack of milk banks: Human-milk banks may not be available, as few exist, and many countries cannot provide the necessary screening for diseases and refrigeration. Uses, risks and controversies: Health risks Use of infant formula has been associated with numerous increased health risks. Studies have found infants in developed countries who consume formula are at increased risk for acute otitis media, gastroenteritis, severe lower respiratory tract infections, atopic dermatitis, asthma, obesity, type 1 and 2 diabetes, sudden infant death syndrome (SIDS), eczema and necrotizing enterocolitis when compared to infants who are breastfed.
Some studies have found an association between infant formula and lower cognitive development, including iron supplementation in baby formula being linked to lowered I.Q. and other neurodevelopmental delays; however, other studies have found no correlation. Causation, however, has not been established for negative long-term health effects of infant formula; studies analyzing health outcomes for breastfed versus formula-fed babies are primarily observational in nature and are plagued with confounding factors such as socioeconomic status, education level, and maternal preexisting conditions (such as obesity, which is associated with both low milk production and childhood obesity). When confounding factors are controlled for, differences between the long-term health of breastfed and formula-fed infants decrease. Uses, risks and controversies: Melamine contamination In 2008, a case of melamine poisoning of infant formula was discovered in China, where milk was deliberately adulterated with the chemical, leading to the deaths of six babies and illnesses in more than 300,000 infants, including cases of acute kidney failure. Large quantities of melamine were added to watered-down milk to give it the appearance of having adequate protein levels. Some of those responsible for the poisoning were sentenced to death. In November 2008, traces of melamine were reported to have been found by the U.S. Food and Drug Administration in infant formula sold in the United States made by the three main American firms (Abbott Laboratories, Nestlé and Mead Johnson) responsible for 90–99% of the infant formula market in that country. The levels were much less than those reported in China, where levels of melamine contamination had reached as much as 2,500 parts per million, about 10,000 times higher than the recorded US levels. The safety data sheet for melamine (CAS registry number 108-78-1; C3H6N6) recorded the acute oral toxicity (median lethal dose) at 3161 mg/kg in rats. Uses, risks and controversies: Health Canada conducted a separate test and also detected traces of melamine in infant formula available in Canada. The melamine levels were well below Health Canada's safety limits, although concerns remain about the safety of manufactured food for infants and the monitoring of potentially dangerous substances. Uses, risks and controversies: Other health controversies In 1985, Syntex Corporation was ordered to pay $27 million in compensation for the deaths of two American infants who suffered brain damage after drinking the company's baby formula, called Neo-mull-soy. Formulas produced by Syntex had previously been subject to a major recall as they were found to have insufficient chloride to support normal infant growth and development. Uses, risks and controversies: In 2003, a plant-based baby formula manufactured by the German company Humana and sold in Israel under the brand Remedia caused severe vitamin deficiencies in babies. Babies who consumed the formula were hospitalized with cardiac and neurological symptoms. Three of them died, and at least twenty others were left with severe disabilities. An investigation revealed that, because of a manufacturing error, the formula contained a much lower quantity of thiamine than is needed for healthy infant development. Humana's chief food technologist received a 30-month prison sentence for negligent manslaughter in February 2013 over the case.
Uses, risks and controversies: In 2010, Abbott Laboratories issued a voluntary recall of about five million units of Similac brand powdered infant formula that were sold in the United States, Guam, Puerto Rico and some Caribbean countries. The recall was issued after the presence of a 'small common beetle' was detected in the product. In Canada, New Zealand and elsewhere, public concerns have been raised over the continued sale and marketing of soy-based formulae potentially containing high levels of phytoestrogens, which have been linked to abnormal child development, including damage to babies' thyroid glands. Uses, risks and controversies: In December 2011, Wal-Mart recalled a quantity of infant formula after a baby died in Missouri. "We extend our deepest condolences to this baby boy's family as they try to come to grips with their loss," said Dianna Gee, a Wal-Mart spokeswoman. "As soon as we heard what happened, we immediately reached out to the manufacturer of the formula and to the Department of Health and Senior Services to provide any information we may have to help with the investigation." Wal-Mart pulled a batch of Enfamil from its stores nationwide that matched the size and lot number (ZP1k7G) of the formula that may have sickened the baby in Missouri, Gee said. The baby formula was purchased from a Wal-Mart in Lebanon, Missouri. After the purchase, a 10-day-old infant died from a rare bacterial infection, CNN affiliate KYTV reported. Authorities ran tests to determine if the death came from the formula, the water used to make the formula or any other factor, said Mead Johnson Nutrition, the company that makes Enfamil. "We are highly confident in the safety and quality of our products – and the rigorous testing we put them through," said Chris Perille, a Mead Johnson Nutrition spokesman. [Source: CNN] Preparation and content: Variations Infant formulas come in powder, liquid concentrate, and ready-to-feed forms. They are designed to be prepared by the parent or caregiver in small batches and fed to the infant, usually with either a cup or a baby bottle. Infant formulas come in a variety of types: Cow's milk formula is the most commonly used type. The milk has been altered to resemble breast milk. Preparation and content: Soy protein-based formulas are frequently used for infants allergic to cow's milk or lactose. Soy-based formulas can also be useful if the parent wants to exclude animal proteins from the child's diet. Protein hydrolysate formulas contain protein that has been broken down into smaller sizes than those in cow's milk and soy-based formulas. Protein hydrolysate formulas are meant for babies who do not tolerate cow's milk or soy-based formulas. Preparation and content: Specialized formulas are also available for premature infants and those with specific medical conditions. Manufacturers and health officials advise that it is very important to measure powders or concentrates accurately to achieve the intended final product concentration; otherwise, the child will be malnourished. It is advisable that all equipment that comes into contact with the infant formula be cleaned and sterilized before each use. Proper refrigeration is essential for any infant formula which is prepared in advance. Preparation and content: In developing countries, formula is frequently prepared improperly, resulting in high infant mortality due to malnutrition and diseases such as diarrhea and pneumonia.
This is due to lack of clean water, lack of sterile conditions, lack of refrigeration, illiteracy (so written instructions cannot be followed), poverty (diluting formula so that it lasts longer), and lack of education of mothers by formula distributors. These problems and the resulting disease and death are a key factor in opposition to the marketing and distribution of infant formula in developing countries by numerous public health agencies and NGOs (discussed in more detail at Nestlé boycott and International Code of Marketing of Breast-milk Substitutes). Preparation and content: Nutritional content Besides breast milk, infant formula is the only other milk product which the medical community considers nutritionally acceptable for infants under the age of one year (as opposed to cow's milk, goat's milk, or follow-on formula). Supplementing with solid food in addition to breast milk or formula begins during weaning, and most babies begin supplementing about the time their first teeth appear, usually around the age of six months. Preparation and content: Although cow's milk is the basis of almost all infant formula, plain cow's milk is unsuited for infants because of its high casein content and low whey content, and untreated cow's milk is not recommended before the age of 12 months. The infant intestine is not properly equipped to digest non-human milk, and this may often result in diarrhea, intestinal bleeding and malnutrition. To reduce the negative effect on the infant's digestive system, cow's milk used for formula undergoes processing to be made into infant formula. This includes steps to make protein more easily digestible and alter the whey-to-casein protein balance to one closer to human milk, the addition of several essential ingredients (often called "fortification", see below), the partial or total replacement of dairy fat with fats of vegetable or marine origin, etc. Preparation and content: The nutrient content of infant formula for sale in the United States is regulated by the Food and Drug Administration (FDA) based on recommendations by the American Academy of Pediatrics Committee on Nutrition. The following must be included in all formulas produced in the U.S.: protein; fat; linoleic acid; the vitamins A, C, D, E and K, thiamin (B1), riboflavin (B2), B6 and B12; niacin; folic acid; pantothenic acid; calcium; the minerals magnesium, iron, zinc, manganese and copper; phosphorus; iodine; sodium chloride; potassium chloride; and carbohydrates. Carbohydrates are an important source of energy for growing infants, as they account for 35 to 42% of their daily energy intake. In most cow's milk-based formulas, lactose is the main source of carbohydrates present, but lactose is not present in cow's milk-based lactose-free formulas, nor in specialized non-milk protein formulas or hydrolyzed protein formulas for infants with milk protein sensitivity. Lactose is also not present in soy-based formulas. Therefore, those formulas without lactose will use other sources of carbohydrates, such as sucrose and glucose, dextrins, and natural and modified starches. Lactose is not only a good source of energy; it also aids in the absorption of the minerals magnesium, calcium, zinc and iron. Preparation and content: Human milk oligosaccharides (HMOs) HMOs are naturally occurring sugars found in human breast milk; they improve the immune system and act as nutrients for beneficial gut bacteria.
Some manufacturers also use human milk oligosaccharides as a modern infant formula supplement to give additional health benefits to their products; however, they are not found in all types of formula. Preparation and content: Nucleotides Nucleotides are compounds found naturally in human breast milk. They are involved in critical metabolic processes, such as energy metabolism and enzymatic reactions. Also, as the building blocks of deoxyribonucleic acid (DNA) and ribonucleic acid (RNA), they are essential for normal body functions. Compared to human breast milk, cow's milk has lower levels of the nucleotides uridine, inosine, and cytidine. Therefore, several companies that produce infant formula have added nucleotides to their infant formulas. Other commonly used ingredients: Emulsifiers and stabilizers: Ingredients added to prevent the separation of the oil from the water (and its soluble components) in the infant formula. Some commonly used emulsifiers include monoglycerides, diglycerides, and gums. Preparation and content: Diluents: Skim milk is commonly used as the primary diluent in milk-based liquid formula to provide the bulk of the volume. In contrast, purified water is the most commonly used diluent in milk-free formulations. Policy, industry and marketing: The policy, regulatory and industry environments surrounding the infant formula market vary tremendously between countries. Policy, industry and marketing: International The International Code of Marketing of Breast-milk Substitutes is an international health policy framework adopted by the World Health Assembly of the WHO in 1981 regarding infant formula marketing, including strict restrictions on advertising. Its implementation depends on the laws of different countries and the behavior of infant formula manufacturers; the code itself has no enforcement power. Legislation and corporate behavior vary significantly between countries: at least 84 countries have enacted national legislation implementing all or many of the provisions of the Code and 14 countries have draft laws awaiting adoption, whereas elsewhere neither the Code nor its principles are followed by governments or formula manufacturers. Policy, industry and marketing: Practices that are banned in the Code include most advertising, claiming health benefits for formula, and giving free samples to women able to breastfeed. The latter practice is particularly criticized because it can interfere with lactation and create dependence on formula when mothers are not properly educated on ensuring continued breast stimulation while formula is being used. In many countries free samples of infant formula have been provided to hospitals for decades; infant formula is often the only product routinely provided free of charge to hospitals. The Baby Friendly Hospital Initiative (BFHI) aims to reduce and eliminate this controversial practice; however, there is increasing criticism of the BFHI's rigidity in limiting use of infant formula, which can be an appropriate treatment for common conditions such as suboptimal intake jaundice, and this rigidity may cause mothers to feel pressured or guilted into breastfeeding. Policy, industry and marketing: By country Philippines Infant formula is one of the top three consumer commodities in the Philippines, and among the most imported products. Annual sales amount to some US$469 million.
US$88 million is spent on advertising the product. Infant formula marketing has been regulated since the 1987 Executive Order 51, or "Milk Code", which regulated, but did not ban, practices such as advertising and providing free samples. Shortly after it was enacted, Wyeth introduced "follow-on formula", which was not within the purview of the Milk Code, as the Code predated its market entry. Policy, industry and marketing: In 2006, the Department of Health banned the advertising of infant formula and the practice of providing free samples, regardless of intended age group (in the Revised Implementing Rules and Regulations of Executive Order 51, or RIRR). The new regulation was challenged by the infant formula industry in the Supreme Court. Initially the challenge was dismissed, but this decision was reversed following industry pressure and a controversial letter by American business leader Thomas Donahue, then President and CEO of the US Chamber of Commerce, resulting in the regulation being suspended and advertising continuing. The Guardian newspaper reports widespread illegal advertising and marketing of formula milk contrary to World Health Organization guidelines. Doctors and midwives are encouraged to promote feeding babies formula milk, and advertising also targets mothers directly. Babies get sick and sometimes die because poor mothers cannot sterilize bottles. Policy, industry and marketing: South Africa In South Africa, there is a move towards plain packaging of infant formula under R 991 of the Foodstuffs, Cosmetics and Disinfectants Act; as of 6 December 2013, Regulation 7 (Sale and Promotion) is in force, whereas Regulations 2-6 (primarily with respect to labelling) are scheduled to come into force on 6 December 2014. One of the key requirements as per Regulation 3.1.A.iii is a conspicuous message stating "[t]his product shall only be used on the advice of a health professional". United Kingdom In the United Kingdom, infant formula advertising has been illegal since 1995; advertising for "follow-on formula" is legal, which has been cited as a loophole allowing advertising of similarly packaged formula. Policy, industry and marketing: United States In the United States, infant formula is both heavily marketed (the country has not adopted the Code, nor is it being systematically implemented by manufacturers for domestic marketing) and heavily subsidized by the government: at least one third of the American market is supported by the government, with over half of infant formula sold in the country provided through the Special Supplemental Nutrition Program for Women, Infants, and Children (known as WIC). According to surveys, over 70% of large U.S. hospitals dispense infant formula to all infants, a practice opposed by the American Academy of Pediatrics and in violation of the Code. The Gerber Products Company began marketing its brand of infant formula directly to the public in October 1989, while the Carnation Company began marketing Good Start infant formula directly to the public in January 1991. Infant formula costs are a significant fraction of the WIC program costs: 21% post-rebate and 46% pre-rebate. Formula manufacturers are granted a WIC monopoly in individual states. Meanwhile, breastfeeding rates are substantially lower for WIC recipients; this is partly attributed to formula being free of charge to mothers in the WIC program, who are of lower socio-economic status.
Violations of federal policy have also been found in which infant formula company advertising used the WIC trademark to reach both WIC and non-WIC participants. In recent years WIC has been expanding its breastfeeding promotion strategies, including providing subsidies for clients who use milk banks. Policy, industry and marketing: 2022 United States Baby Formula Shortages Supply chain disruptions related to the government response to the COVID-19 pandemic in the United States have been reported as responsible for widespread shortages of infant formula in the United States as of May 2022. This contrasts with far less severe shortages of infant formula around the globe. Reason magazine reported that this was largely the result of Food and Drug Administration (FDA) processes delaying approval of otherwise safe infant formula from Europe or other sources abroad, imports which might otherwise have eased the supply tensions in the United States. As a result of the shortages, on May 16, 2022, the FDA announced that it would temporarily ease enforcement of some labeling rules to allow the importation of foreign formulas. FDA Commissioner Robert Califf stated, "Today's action paves the way for companies who don't normally distribute their infant formula products in the U.S. to do so efficiently and safely. We anticipate that those products that can quickly meet safety and nutrition standards could hit U.S. stores in a matter of weeks." Former FDA associate commissioner Peter Pitts asserts that the FDA's regulatory scheme is at least partially to blame for the shortage. Pitts states, "The difference between European baby formula and American baby formula, more or less, is that the labeling is different. The knot in getting that product into the U.S. isn't safety, it's a regulatory issue. I don't want to say it's a nitty issue, but it's certainly something the FDA could have jumped on a lot quicker." Amid and prior to the formula shortages, Women, Infants, and Children (WIC) centers in Georgia and North Carolina were disposing of infant formula. This was done under the USDA's recommendation that unused, returned WIC infant formula be disposed of upon return. Despite an attempt by the USDA to walk back this recommendation by stating that it is a recommendation rather than a requirement, the USDA confirms that it will not reverse it, even amid the formula shortage. As a result, from October 2021 through May 2022, 16,459 cans of baby formula were destroyed by WIC clinics in Georgia, and an unknown number of cans were destroyed in North Carolina and other US states. Policy, industry and marketing: On July 6, 2022, the FDA announced that it would change its rules to allow foreign formula manufacturers to permanently import their goods into the U.S., potentially reducing the severity of the shortage. Critics of the FDA note that this does not remove the regulations entirely and that this shortage has been self-imposed by the FDA from the start. Additionally, critics note that if a formula maker passes EU regulations, this should be good enough for the FDA to allow importation of that formula. Policy, industry and marketing: Critics of the FDA's regulatory policy note that the regulatory scheme surrounding European formulas is not born of a science-based desire to protect children, but rather of the influence that the US dairy industry has on the agency.
Critics also note that if there were an issue with European formulas, the issue would be widespread among the European babies that regularly consume the formula. The FORMULA Act is set to expire at the end of 2022, which will subsequently reinstate tariffs on foreign-made formula. Experts worry that this will result in a repeat formula shortage in 2023. The CEO of the National Milk Producers Federation, a lobbying organization for dairy producers, wrote a letter to Congress and the Biden administration urging that the reinstatement of tariffs on foreign baby formula be allowed to proceed. History: The Wabanaki and other Native American tribal nations of North America made an infant formula from nuts and cornmeal. Elizabeth Hanson was captured by the Wabanaki in 1725; a Native American woman showed Hanson how to make this infant formula, and Hanson included it in her captivity narrative. Early infant foods In 1865, the first infant food was invented. History: Throughout history, mothers who could not breastfeed their babies either employed a wet nurse or, less frequently, prepared food for their babies, a process known as "dry nursing". Baby food composition varied according to region and economic status. In Europe and North America during the early 19th century, the prevalence of wet nursing began to decrease, while the practice of feeding babies mixtures based on animal milk rose in popularity. This trend was driven by cultural changes as well as increased sanitation measures, and it continued throughout the 19th and much of the 20th century, with a notable increase after Elijah Pratt invented and patented the India-rubber nipple in 1845. As early as 1846, scientists and nutritionists noted that dry nursing was associated with an increase in medical problems and infant mortality. In an attempt to improve the quality of manufactured baby foods, in 1867, Justus von Liebig developed the world's first commercial infant formula, Liebig's Soluble Food for Babies. The success of this product quickly gave rise to competitors such as Mellin's Food, Ridge's Food for Infants and Nestlé's Milk. History: Raw milk formulas As physicians became increasingly concerned about the quality of such foods, medical recommendations such as Thomas Morgan Rotch's "percentage method" (published in 1890) began to be distributed, and gained widespread popularity by 1907. These complex formulas recommended that parents mix cow's milk, water, cream, and sugar or honey in specific ratios to achieve a nutritional balance believed to approximate human milk, adjusted to accommodate the presumed digestive capability of the infant. History: At the dawn of the 20th century in the United States, most infants were breastfed, although many received some formula feeding as well. Home-made "percentage method" formulas were more commonly used than commercial formulas in both Europe and the United States. They were less expensive and were widely believed to be healthier. However, formula-fed babies exhibited more diet-associated medical problems, such as scurvy, rickets and bacterial infections, than breastfed babies. By 1920, the incidence of scurvy and rickets in formula-fed babies had greatly decreased through the addition of orange juice and cod liver oil to home-made formulas. Bacterial infections associated with formula remained a problem more prevalent in the United States than in Europe, where milk was usually boiled prior to use in formulas.
History: Evaporated milk formulas In the 1920s and 1930s, evaporated milk began to be widely commercially available at low prices, and several clinical studies in the period suggested that babies fed evaporated milk thrived as well as breastfed babies. These studies, together with the affordable price of evaporated milk and the availability of the home icebox, initiated a tremendous rise in the use of evaporated milk formulas. By the late 1930s, the use of evaporated milk formulas in the United States surpassed all commercial formulas, and by 1950 over half of all babies in the United States were reared on such formulas. History: Commercial formulas In parallel with the enormous shift (in industrialized nations) away from breastfeeding to home-made formulas, nutrition scientists continued to analyze human milk and attempted to make infant formulas that more closely matched its composition. Maltose and dextrins were believed to be nutritionally important, and in 1912, the Mead Johnson Company released a milk additive called Dextri-Maltose. This formula was made available to mothers only by physicians. In 1919, milk fats were replaced with a blend of animal and vegetable fats as part of the continued drive to more closely simulate human milk. This formula was called SMA, for "simulated milk adapted." In the late 1920s, Alfred Bosworth released Similac (for "similar to lactation"), and Mead Johnson released Sobee. Several other formulas were released over the next few decades, but commercial formulas did not begin to seriously compete with evaporated milk formulas until the 1950s. The reformulation and concentration of Similac in 1951, and the introduction (by Mead Johnson) of Enfamil (for "infant milk") in 1959, were accompanied by marketing campaigns that provided inexpensive formula to hospitals and pediatricians. By the early 1960s, commercial formulas were more commonly used than evaporated milk formulas in the United States; the latter all but vanished in the 1970s. By the early 1970s, over 75% of American babies were fed on formulas, almost entirely commercially produced. When birth rates in industrial nations tapered off during the 1960s, infant formula companies heightened marketing campaigns in non-industrialized countries. Unfortunately, poor sanitation led to steeply increased mortality rates among infants fed formula prepared with contaminated water. Additionally, the WHO has cited over-dilution of formula preparations as resulting in infant malnourishment. Organized protests, the most famous of which was the Nestlé boycott of 1977, called for an end to unethical marketing. This boycott is ongoing, as the current coordinators maintain that Nestlé engages in marketing practices which violate the International Code of Marketing of Breast-milk Substitutes. History: Generic brand formulas In addition to commercially marketed brands, generic brands (or store brands) of infant formula were introduced in the United States in 1997, first by PBM Products. These private label formulas are sold by many leading food and drug retailers such as Wal-Mart, Target, Kroger, Loblaws, and Walgreens. All infant formula brands in the United States are required to adhere to the Food and Drug Administration (FDA) guidelines. As reported by the Mayo Clinic: "as with most consumer products, brand-name infant formulas cost more than generic brands. But that doesn't mean that brand-name [Similac, Nestle, Enfamil] formulas are better.
Although manufacturers may vary somewhat in their formula recipes, the FDA requires that all formulas contain the same nutrient density.” Similarly, in Canada all infant formulas, regardless of brand, are required to meet standards set by Health Canada. History: Follow-on and toddler formulas Follow-on or toddler formulas are sold for ages 6 months to 3 years (ages at which infants are typically breastfed). In the US, a transition formula is marketed for children from age 9 to 24 months, and a toddler milk is sold for children age 12 to 26 months. In both cases, the ingredients are powdered milk, corn syrup and other added sugars, vegetable oil, and salt. Toddler formulas are not nutritionally complete, nor are they subject to the same regulations or food labeling laws as infant formula. Critics have argued that follow-on and toddler formulas were introduced to circumvent the regulations regarding infant formula and have resulted in confusing advertising. An early example of follow-on formula was introduced by Wyeth in the Philippines in 1987, following the introduction in that country of regulations on infant formula advertising which did not address follow-on formulas (products that did not exist at the time of their drafting). Similarly, while infant formula advertising is illegal in the United Kingdom, follow-on formula advertising is legal, and the similar packaging and marketing result in follow-on advertisements frequently being interpreted as advertisements for infant formula. (See also industry and marketing, below.) These products have also recently fallen under criticism for contributing to the childhood obesity epidemic in some developed countries due to their marketing and flavoring practices. The drinks are also expensive. Although usually not quite as expensive as infant formula, they can cost four times the price of cow's milk. History: Usage since 1970s Since the early 1970s, industrial countries have witnessed a resurgence in breastfeeding among newborns and infants to 6 months of age. This upswing in breastfeeding has been accompanied by a delay in the average age of introduction of other foods (such as cow's milk), resulting in increased use of both breastfeeding and infant formula between the ages of 3 and 12 months. The global infant formula market has been estimated at $7.9 billion, with North America and Western Europe accounting for 33% of the market and considered largely saturated, and Asia representing 53% of the market. South East Asia is a particularly large fraction of the world market relative to its population. Infant formula is the largest segment of the baby food market, with the fraction given as between 40% and 70%. Leading health organizations (e.g. WHO, U.S. Centers for Disease Control and Department of Health and Human Services) are attempting to reduce the use of infant formula and increase the prevalence of breastfeeding from birth through 12 to 24 months of age through public health awareness campaigns. The specific goals and approaches of these breastfeeding promotion programs, and the policy environment surrounding their implementation, vary by country. As a basic policy framework, the International Code of Marketing of Breast-milk Substitutes, adopted by the WHO's World Health Assembly in 1981, requires infant formula companies to preface their product information with statements that breastfeeding is the best way of feeding babies and that a substitute should only be used after consultation with health professionals. 
The Baby Friendly Hospital Initiative (BFHI) also restricts use by hospitals of free formula or other infant care aids provided by formula companies. (See also Policy section below.) While the Code was intended to restrict inappropriate marketing of infant formula, not access to it, parents in BFHI hospitals have complained of being lectured or made to sign waivers implying that formula would harm their babies. Infant formula processing: Current general procedure The manufacturing process may differ for different types of formula; the following is the general procedure for liquid-milk-based formulas: Mixing ingredients Primary ingredients are blended in large stainless steel tanks, and skim milk is added and adjusted to 60 °C. Then, fats, oils and emulsifiers are added. Additional heating and mixing may be required to get proper consistency. Next, minerals, vitamins, and stabilizing gums are added at various points, depending on their sensitivity to heat. The batch is temporarily stored and then transported by pipelines to pasteurization equipment when mixing is complete. Infant formula processing: Pasteurization This is a process that protects against spoilage by eliminating bacteria, yeasts and molds. It involves quickly heating and then cooling the product under controlled conditions that micro-organisms cannot survive. The batch is held at around 85–94 °C for approximately 30 seconds, which is necessary to adequately reduce micro-organisms and prepare the formula for filling. Homogenization This is a process which increases emulsion uniformity and stability by reducing the size of fat and oil particles in the formula. It is done with a variety of mixing equipment that applies shear to the product; this mixing breaks fat and oil particles into very small droplets. Standardization Standardization is used to ensure that key parameters such as pH, fat concentration, and vitamin and mineral content are correct. If insufficient levels of these are found, the batch is reworked to achieve appropriate levels. After this step, the batch is ready to be packaged. Packaging Packaging depends on the manufacturer and type of equipment used, but in general, liquid formula is filled into metal cans with lids crimped into place. Infant formula processing: Heat treatment or sterilization Finally, infant formulas are heat treated to maintain the bacteriologic quality of the product. This can be done traditionally by either retort sterilization or high-temperature short-time (HTST) treatment. Recently, ultrahigh-temperature treated formula has become more commonly used. If powdered formula is made, spray drying is also required. Retort sterilization is a traditional method that uses a 10–15 minute treatment at 118 °C. Ultrahigh-temperature (UHT) treatment is a method that uses a brief (2–3 seconds) treatment at 142 °C. Because of the short time used, there is little protein denaturation, but the process still ensures sterility of the final product. Infant formula processing: Recent and future potential new ingredients Probiotics Randomized, controlled trials completed in the 2000s have shown limited and short-term clinical benefits for the use of probiotics in infants’ diets. A 2018 clinical study using the multistrain De Simone Formulation probiotic showed it helped some infants reduce symptoms of infant colic. The safety of probiotics in general and in infants, especially preterm infants, has been investigated in a limited number of controlled trials. 
The findings thus far suggest probiotics are generally safe, though the research is preliminary and has yet to provide definitive conclusions. Infant formula processing: Prebiotics Prebiotics are indigestible carbohydrates that promote the growth of probiotic bacteria in the gut. Human milk contains a variety of oligosaccharides believed to be an important factor in the pattern of microflora colonization of breastfed infants. Because of the variety, variability, complexity and polymorphism of the oligosaccharide composition and structure, it is currently not feasible to reproduce the oligosaccharide components of human milk in a strictly structural fashion. The European Society of Pediatric Gastroenterology, Hepatology, and Nutrition Committee on Nutrition found evidence to support short-term effects of ingesting prebiotics on the stool microflora of infants, with an increase in the number of bifidobacteria. Babies can be at risk of dehydration from the induction of softer stools if they have kidney immaturity and/or a poor ability to concentrate urine. A reduction of pathogens has been associated with the consumption of prebiotics. However, there was no evidence to support major clinical or long-term benefits. Therefore, there is little evidence of beneficial effects of prebiotics in dietary products. Infant formula processing: Lysozyme and lactoferrin Lysozyme is an enzyme that is responsible for protecting the body by damaging bacterial cell walls. Lactoferrin is a globular, multifunctional protein that has antimicrobial activity. Compared to human milk, cow's milk has significantly lower levels of lysozyme and lactoferrin; therefore, the industry has an increasing interest in adding them to infant formulas. Long chain polyunsaturated fatty acid supplementation Some manufacturers have begun supplementing formula milk with long-chain polyunsaturated fatty acids (LCPUFA). The current evidence suggests that there may be little or no difference between formula milk with and without LCPUFA supplementation in terms of babies' visual function, physical growth or neurodevelopment.
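For readers who want the heat-treatment steps from the processing section above in one place, here is a compact restatement as a small lookup table in Python. The temperature and time windows are the ones quoted in this article; the data structure itself is only an illustration, not part of any real manufacturing control system.

```python
# Heat-treatment windows quoted in the processing section above,
# restated as a small lookup table. Illustrative only.
HEAT_TREATMENTS = {
    # step: (min temp degC, max temp degC, hold time)
    "pasteurization": (85, 94, "about 30 seconds"),
    "retort sterilization": (118, 118, "10-15 minutes"),
    "UHT treatment": (142, 142, "2-3 seconds"),
}

for step, (t_lo, t_hi, hold) in HEAT_TREATMENTS.items():
    temp = f"{t_lo} degC" if t_lo == t_hi else f"{t_lo}-{t_hi} degC"
    print(f"{step}: hold at {temp} for {hold}")
```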
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Y-DNA haplogroups in Kazakh tribes** Y-DNA haplogroups in Kazakh tribes: The frequency of Y haplogroups, in percent, among Kazakh tribes. Haplogroups of the tribes of the Senior zhuz: The Senior zhuz is formed by a combination of not only genetically related tribal groups but also genetically remote ones. Y haplogroups of the tribes of the Senior zhuz are given in percentages. Western/European Kazakhs: A study analyzing the haplogroups of Western Kazakhs, in the European part of Kazakhstan, found that the majority (2/3) of Kazakh samples belong to the paternal haplogroup C2a1a2-M48, which, according to the authors, supports the traditional genealogical claims that the Alimuly and Baiuly clans descend from Emir Alau (and his paternal relatives).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Uranium zirconium hydride** Uranium zirconium hydride: Uranium zirconium hydride (UZrH), a combination of uranium hydride and zirconium(II) hydride, is used as the fuel in TRIGA reactors. UZrH fuel is used in most research reactors at universities and has a large, prompt negative fuel temperature coefficient of reactivity, meaning that as the temperature of the core increases, the reactivity rapidly decreases. Franco-Belge de Fabrication du Combustible, in Romans-sur-Isère, France, is the only manufacturer of this fuel.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Le Lisp** Le Lisp: Le Lisp (also Le_Lisp and Le-Lisp) is a programming language, a dialect of the language Lisp. It was developed at the French Institute for Research in Computer Science and Automation (INRIA), to be an implementation language for a very large scale integration (VLSI) workstation being designed under the direction of Jean Vuillemin. Le Lisp also had to run on the various incompatible platforms (mostly running Unix operating systems) that were used by the project. The main goals for the language were to be a powerful post-Maclisp version of Lisp that would be portable, compatible, extensible, and efficient. Jérôme Chailloux led the Le Lisp team, working with Emmanuel St. James, Matthieu Devin, and Jean-Marie Hullot in 1980. The dialect is historically noteworthy as one of the first Lisp implementations to be available on both the Apple II and the IBM PC. On 2020-01-08, INRIA agreed to migrate the source code to the 2-clause BSD License, which allowed a few native ports from ILOG and Eligis to adopt this license model.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**FICO Xpress** FICO Xpress: The FICO Xpress optimizer is a commercial optimization solver for linear programming (LP), mixed integer linear programming (MILP), convex quadratic programming (QP), convex quadratically constrained quadratic programming (QCQP), second-order cone programming (SOCP) and their mixed integer counterparts. Xpress includes a general-purpose nonlinear solver, Xpress NonLinear, comprising a successive linear programming algorithm (SLP, a first-order method) and Artelys Knitro (second-order methods). FICO Xpress: Xpress was originally developed by Dash Optimization and was acquired by FICO in 2008. Its initial authors were Bob Daniel and Robert Ashford. The first version of Xpress could only solve LPs; support for MIPs was added in 1986. Released in 1983, Xpress was the first commercial LP and MIP solver running on PCs. In 1992, an Xpress version for parallel computing was published, which was extended to distributed computing five years later. Xpress was the first MIP solver to cross the billion matrix non-zero threshold by introducing 64-bit indexing in 2010. Since 2014, Xpress has featured the first commercial implementation of a parallel dual simplex method. Technology: Linear and quadratic programs can be solved via the primal simplex method, the dual simplex method or the barrier interior point method. All mixed integer programming variants are solved by a combination of the branch and bound method and the cutting-plane method. Infeasible problems can be analyzed via the IIS (irreducible infeasible subset) method. Xpress provides a built-in tuner for automatic tuning of control settings. Technology: Xpress includes its modelling language Xpress Mosel and the integrated development environment Xpress Workbench. Technology: Mosel includes distributed computing features to solve multiple scenarios of an optimization problem in parallel. Uncertainty in the input data can be handled via robust optimization methods. Xpress has a modeling module called BCL (Builder Component Library) that interfaces to the C, C++, and Java programming languages, and to the .NET Framework. Independent of BCL, there are Python and MATLAB interfaces. Besides Mosel, Xpress connects to other standard modeling languages, such as AIMMS, AMPL, and GAMS. Technology: The FICO Xpress Executor executes and deploys Mosel models, using SOAP or REST interfaces. It can be used from external applications or from the FICO Decision Management Platform.
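To make the solver and interface landscape above concrete, here is a minimal sketch of posing a tiny mixed integer program through the Python interface mentioned in the Technology section. The entry points shown (xpress.problem, addVariable, addConstraint, setObjective) follow the vendor's published examples, but the package requires a licensed Xpress installation, and the exact signatures should be treated as assumptions to check against the current API reference.

```python
# A tiny MILP posed through the (assumed) xpress Python API:
#   maximize x + y  subject to  x + 2y <= 14,  3x - y >= 0,  y integer.
import xpress as xp

x = xp.var(name="x")                       # continuous variable
y = xp.var(name="y", vartype=xp.integer)   # integer variable

prob = xp.problem(name="tiny_milp")
prob.addVariable(x, y)
prob.addConstraint(x + 2 * y <= 14)
prob.addConstraint(3 * x - y >= 0)
prob.setObjective(x + y, sense=xp.maximize)

prob.solve()                               # branch and bound + cutting planes
print(prob.getSolution(x), prob.getSolution(y))
```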
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**KRACK** KRACK: KRACK ("Key Reinstallation Attack") is a replay attack (a type of exploitable flaw) on the Wi-Fi Protected Access protocol that secures Wi-Fi connections. It was discovered in 2016 by the Belgian researchers Mathy Vanhoef and Frank Piessens of the University of Leuven. Vanhoef's research group published details of the attack in October 2017. By repeatedly resetting the nonce transmitted in the third step of the WPA2 handshake, an attacker can gradually match encrypted packets seen before and learn the full keychain used to encrypt the traffic. KRACK: The weakness lies in the Wi-Fi standard itself, not in errors made by individual products or implementations of an otherwise sound standard. Therefore, any correct implementation of WPA2 is likely to be vulnerable. The vulnerability affects all major software platforms, including Microsoft Windows, macOS, iOS, Android, Linux, OpenBSD and others. The widely used open-source implementation wpa_supplicant, utilized by Linux and Android, was especially susceptible, as it can be manipulated to install an all-zeros encryption key, effectively nullifying WPA2 protection in a man-in-the-middle attack. Version 2.7 fixed this vulnerability. The security protocol protecting many Wi-Fi devices can essentially be bypassed, potentially allowing an attacker to intercept sent and received data. Details: The attack targets the four-way handshake used to establish a nonce (a kind of "shared secret") in the WPA2 protocol. The standard for WPA2 anticipates occasional Wi-Fi disconnections, and allows reconnection using the same value for the third handshake (for quick reconnection and continuity). Because the standard does not require a different key to be used in this type of reconnection, which could be needed at any time, a replay attack is possible. Details: An attacker can repeatedly re-send the third handshake of another device's communication to manipulate or reset the WPA2 encryption key. Each reset causes data to be encrypted using the same values, so blocks with the same content can be seen and matched, working backwards to identify parts of the keychain which were used to encrypt that block of data. Repeated resets gradually expose more of the keychain until eventually the whole key is known, and the attacker can read the target's entire traffic on that connection. Details: According to US-CERT: "US-CERT has become aware of several key management vulnerabilities in the 4-way handshake of the Wi-Fi Protected Access II (WPA2) security protocol. The impact of exploiting these vulnerabilities includes decryption, packet replay, TCP connection hijacking, HTTP content injection, and others. Note that as protocol-level issues, most or all correct implementations of the standard will be affected. The CERT/CC and the reporting researcher KU Leuven, will be publicly disclosing these vulnerabilities on 16 October 2017." The paper describing the vulnerability is available online, and was formally presented at the ACM Conference on Computer and Communications Security on 1 November 2017. US-CERT is tracking this vulnerability, listed as VU#228519, across multiple platforms. 
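The cryptographic consequence described above — that nonce reuse makes packets encrypted "using the same values" comparable — comes down to a generic stream-cipher property: two ciphertexts produced with the same keystream XOR together to the XOR of their plaintexts. The sketch below demonstrates this with a random stand-in keystream rather than real WPA2 (CCMP) encryption, so it illustrates the principle only.

```python
# Keystream reuse demo: same nonce -> same keystream -> plaintext leakage.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"attack at dawn!!"
p2 = b"defend at dusk!!"
keystream = os.urandom(len(p1))   # stands in for the reinstalled keystream

c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# An eavesdropper holding c1 and c2 learns p1 XOR p2 without any key;
# if either plaintext is known or guessable, the other follows at once.
assert xor(c1, c2) == xor(p1, p2)
print(xor(xor(c1, c2), p1))       # prints b'defend at dusk!!'
```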
The following CVE identifiers relate to the KRACK vulnerability: CVE-2017-13077, CVE-2017-13078, CVE-2017-13079, CVE-2017-13080, CVE-2017-13081, CVE-2017-13082, CVE-2017-13084, CVE-2017-13086, CVE-2017-13087 and CVE-2017-13088. Some WPA2 users may counter the attack by updating Wi-Fi client and access point device software, if they have devices for which vendor patches are available. However, vendors may be slow to offer a patch, or may not provide patches at all in the case of many older devices. Patches: Patches protecting against KRACK have been released for a range of devices and operating systems. Workarounds: In order to mitigate risk on vulnerable clients, some WPA2-enabled Wi-Fi access points have configuration options that can disable EAPOL-Key frame re-transmission during key installation. With re-transmission disabled, attackers cannot force key reinstallations by delaying frame transmission, closing off the attack, provided TDLS is not enabled. One disadvantage of this method is that, with poor connectivity, key reinstallation failure may cause failure of the Wi-Fi link. Continued vulnerability: In October 2018, reports emerged that the KRACK vulnerability was still exploitable in spite of vendor patches, through a variety of workarounds for the techniques used by vendors to close off the original attack.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gastric dilatation volvulus** Gastric dilatation volvulus: Gastric dilatation volvulus (GDV), also known as gastric dilation, twisted stomach, or gastric torsion, is a medical condition that affects dogs in which the stomach becomes overstretched and rotated by excessive gas content. The word bloat is often used as a general term to mean gas distension without stomach torsion (a normal change after eating), or to refer to GDV. Gastric dilatation volvulus: GDV is a life-threatening condition in dogs that requires prompt treatment. It is common in certain breeds; deep-chested breeds are especially at risk. Mortality rates in dogs range from 10 to 60%, even with treatment. With surgery, the mortality rate is 15 to 33%. Symptoms: Symptoms are not necessarily distinguishable from other kinds of distress. A dog might stand uncomfortably and seem to be in extreme discomfort for no apparent reason. Other possible symptoms include firm distension of the abdomen, weakness, depression, difficulty breathing, hypersalivation, and retching without producing any vomitus (nonproductive vomiting). Many dogs with GDV have cardiac arrhythmias (40% in one study). Chronic GDV in dogs produces symptoms such as loss of appetite, vomiting, and weight loss. Causes: Gastric dilatation volvulus in dogs is likely caused by a multitude of factors, but in all cases the immediate prerequisite is a dysfunction of the sphincter between the esophagus and stomach and an obstruction of outflow through the pylorus. Some of the more widely acknowledged factors for developing GDV include increased age, breed, having a deep and narrow chest, eating foods, such as kibble, that expand in the stomach, overfeeding, too much water consumption in a small period of time before or after exercise, and other causes of gastrointestinal disease and distress. The risk of bloat is decreased in dogs perceived as happy by their owners and increased in dogs perceived as fearful. This may be owing to the physiological effects of the dog's personality on the function and motility of the gastrointestinal system. Alternatively, the dogs may become unhappy or uncomfortable as a consequence of the conditions that lead up to exhibiting bloat. Dogs with inflammatory bowel disease may be at an increased risk for bloat. Causes: Dietary factors One common recommendation in the past has been to raise the food bowl of dogs when they eat, but this may actually increase the risk of GDV. Eating only once daily and eating food consisting of particles less than 30 mm (1.2 in) in size may also increase the risk of GDV. One study looking at the ingredients of dry dog food found that while neither grains, soy, nor animal proteins increased the risk of bloat, foods containing an increased amount of added oils or fats do increase the risk, possibly owing to delayed emptying of the stomach. Pathophysiology: The stomach twists around the longitudinal axis of the digestive tract, a condition known as volvulus. Gas distension may occur prior to or after the stomach twists. The most common direction for rotation is clockwise, viewing the animal from behind. The stomach can rotate up to 360° in this direction and 90° counterclockwise. If the volvulus is greater than 180°, the esophagus is closed off, thereby preventing the animal from relieving the condition by belching or vomiting. 
The results of this distortion of normal anatomy and gas distension include hypotension (low blood pressure), decreased return of blood to the heart, ischemia (loss of blood supply) of the stomach, and shock. Pressure on the portal vein decreases blood flow to the liver and decreases the ability of that organ to remove toxins and absorbed bacteria from the blood. At the other end of the stomach, the spleen may be damaged if the twisting interrupts its blood supply. If not quickly treated, bloat can lead to blood poisoning, peritonitis, and death by toxic shock. Diagnosis: A diagnosis of GDV is made by several factors. The breed and history often give a significant suspicion of the condition, and a physical examination often reveals the telltale sign of a distended abdomen with abdominal tympany. Shock is diagnosed by the presence of pale mucous membranes with poor capillary refill, increased heart rate, and poor pulse quality. Radiographs (X-rays), usually taken after decompression of the stomach if the dog is unstable, show a stomach distended with gas. The pylorus, which normally is ventral and to the right of the body of the stomach, is cranial to the body of the stomach and left of the midline, often separated on the X-ray by soft tissue and giving the appearance of a separate gas-filled pocket (double-bubble sign). Treatment: Gastric dilatation volvulus is an emergency medical condition; having the animal examined by a veterinarian is imperative. GDV can become fatal within a matter of minutes. Treatment: Treatment usually involves resuscitation with intravenous fluid therapy, usually a combination of isotonic fluids and hypertonic saline or a colloidal solution such as hetastarch, and emergency surgery. The stomach is initially decompressed by passing a stomach tube, or if that is not possible, trocars can be passed through the skin into the stomach to remove the gas; alternatively, the trocars may be inserted directly into the stomach following anaesthesia to reduce the chances of infection. During surgery, the stomach is placed back into its correct position, and the abdomen is examined for any devitalized tissue (especially the stomach and spleen). A partial gastrectomy may be necessary if any necrosis of the stomach wall occurs. Prevention: Recurrence of GDV attacks can be a problem, occurring in up to 80% of dogs treated medically only (without surgery). To prevent recurrence, at the same time the bloat is treated surgically, a right-side gastropexy is often performed, which by a variety of methods firmly attaches the stomach wall to the body wall, to prevent it from twisting inside the abdominal cavity in the future. While dogs that have had gastropexies still may develop gas distension of the stomach, a significant reduction in recurrence of gastric volvulus is seen. In one study of 136 dogs that had surgery for gastric dilatation-volvulus, six of those that had gastropexies had a recurrence, while 74 (54.5%) of those without the additional surgery recurred. Gastropexies are also performed prophylactically in dogs considered to be at high risk of GDV, including dogs with previous episodes or with gastrointestinal disease predisposing to GDV, and dogs with a first-order relative (parent or sibling) with a history of it. Precautions that are likely to help prevent gastric dilatation-volvulus include feeding small meals throughout the day instead of one big meal, and not exercising immediately before or after a meal. 
Prognosis: Immediate treatment is the most important factor in a favorable prognosis. A delay in treatment greater than 6 hours or the presence of peritonitis, sepsis, hypotension, or disseminated intravascular coagulation are negative prognostic indicators. Historically, GDV has held a guarded prognosis. Although "early studies showed mortality rates between 33 and 68% for dogs with GDV," studies from 2007 to 2012 "reported mortality rates between 10 and 26.8%". Mortality rates approach 10 to 40% even with treatment. With prompt treatment and good preoperative stabilization of the patient, mortality is significantly lessened, to 10% overall (in a referral setting). Negative prognostic indicators following surgical intervention include postoperative cardiac arrhythmia, splenectomy, or splenectomy with partial gastric resection. A longer time from presentation to surgery was associated with a lower mortality, presumably because these dogs had received more complete preoperative fluid resuscitation and were thus better stabilized cardiovascularly prior to the procedure. Epidemiology: As a general rule, GDV is of greatest risk to deep-chested dogs. The five breeds at greatest risk are Great Danes, Weimaraners, St. Bernards, Gordon Setters, and Irish Setters. In fact, the lifetime risk for a Great Dane to develop GDV has been estimated to be close to 37%. Standard Poodles are also at risk for this health problem, as are Irish Wolfhounds, German Shorthaired Pointers, German Shepherds, and Rhodesian Ridgebacks. Basset Hounds and Dachshunds have the greatest risk among dogs less than 50 lb (23 kg). Society and culture: In the novel and film Marley & Me, Marley develops and ultimately dies of "bloat". In "Dog of Death," an episode of the animated TV series The Simpsons, the family dog Santa's Little Helper develops a "twisted stomach", necessitating surgery.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**NTAP** NTAP: NTAP is an acronym for nonprofit technology assistance provider. The term generally refers to organizations and individuals that specialize in providing information and communication technology support to nonprofit organizations, without regard for whether the provider itself is formally incorporated as a nonprofit entity or a for-profit business. Nonprofit technology assistance provider is distinguished from a "nonprofit management assistance provider." The latter focuses on building organizational capacity in all areas of nonprofit management, some of which may include technology assistance. Readers should also understand that the term "technical assistance" has historically covered any form of capacity building assistance, technological or otherwise.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Anthanthrene** Anthanthrene: Anthanthrene is a polycyclic aromatic hydrocarbon. According to the International Agency for Research on Cancer, as of 2006 there was "limited evidence in experimental animals" that it is a carcinogen.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Abacavir/lamivudine** Abacavir/lamivudine: Abacavir/lamivudine, sold under the brand name Kivexa among others, is a fixed-dose combination antiretroviral medication used to treat HIV/AIDS. It contains abacavir and lamivudine. It is generally recommended for use with other antiretrovirals. It is commonly used as part of the preferred treatment in children. It is taken by mouth as a tablet. Common side effects include trouble sleeping, headache, depression, feeling tired, nausea, rash, and fever. Serious side effects may include high blood lactate levels, allergic reactions, and enlargement of the liver. It is not recommended in people with a specific gene variant known as HLA-B*5701. Safety in pregnancy has not been well studied, but it appears to be safe. Lamivudine and abacavir are both nucleoside reverse transcriptase inhibitors (NRTIs). Abacavir/lamivudine was approved for medical use in the United States in 2004. It is on the World Health Organization's List of Essential Medicines. Society and culture: Names It is marketed as Kivexa in most countries except for the United States, where it is branded as Epzicom. It is marketed by ViiV Healthcare. Society and culture: Legal challenges Teva Pharmaceuticals and Lupin Ltd both filed abbreviated new drug applications (ANDAs) relating to the treatments of HIV using various combinations of abacavir, lamivudine and AZT, and challenging various patents. In 2013 the US District Court for the District of Delaware upheld the validity of a patent covering Epzicom and Trizivir. Other matters were subject to appeal or litigation as of 20 November 2014.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Intrinsic factor** Intrinsic factor: Intrinsic factor (IF), cobalamin binding intrinsic factor, also known as gastric intrinsic factor (GIF), is a glycoprotein produced by the parietal cells (in humans) or chief cells (in rodents) of the stomach. It is necessary for the absorption of vitamin B12 later on in the distal ileum of the small intestine. In humans, the gastric intrinsic factor protein is encoded by the CBLIF gene. Haptocorrin (transcobalamin I) is another glycoprotein secreted by the salivary glands which binds to vitamin B12. Vitamin B12 is acid-sensitive, and in binding to haptocorrin it can safely pass through the acidic stomach to the duodenum. In the less acidic environment of the small intestine, pancreatic enzymes digest the glycoprotein carrier and vitamin B12 can then bind to intrinsic factor. This new complex is then absorbed by the epithelial cells (enterocytes) of the ileum. Inside the cells, vitamin B12 dissociates once again and binds to another protein, transcobalamin II; the new complex can then exit the epithelial cells to be carried to the liver. Site of secretion: Intrinsic factor is secreted by parietal cells within the stomach, and so is present in the gastric juice as well as in the gastric mucous membrane. The optimum pH for its action is approximately 7. Its concentration does not correlate with the amount of HCl or pepsin in the gastric juice; e.g., intrinsic factor may be present even when pepsin is largely absent. The site of formation of the intrinsic factor varies in different species. In pigs it is obtained from the pylorus and beginning of the duodenum; in human beings it is present in the fundus and body of the stomach. The limited amount of normal human gastric intrinsic factor limits normal efficient absorption of B12 to about 2 μg per meal, a nominally adequate intake of B12. Insufficiency: In pernicious anemia, which is usually an autoimmune disease, autoantibodies directed against intrinsic factor or parietal cells themselves lead to an intrinsic factor deficiency, malabsorption of vitamin B12, and subsequent megaloblastic anemia. Atrophic gastritis can also cause intrinsic factor deficiency and anemia through damage to the parietal cells of the stomach wall. Pancreatic exocrine insufficiency can interfere with normal dissociation of vitamin B12 from its binding proteins in the small intestine, preventing its absorption via the intrinsic factor complex. Other risk factors contributing to pernicious anemia are anything that damages or removes a portion of the stomach's parietal cells, including bariatric surgery, gastric tumors, gastric ulcers, and excessive consumption of alcohol. Mutations in the GIF gene are responsible for a rare inheritable disease called intrinsic factor deficiency, which results in malabsorption of vitamin B12. Insufficiency: Treatment In most countries, intramuscular injections of vitamin B12 are used to treat pernicious anemia. Orally administered vitamin B12 is absorbed without intrinsic factor, but at levels of less than one percent of those achieved when intrinsic factor is present. 
Despite the low amounts absorbed, oral vitamin B12 therapy is effective at reducing symptoms of pernicious anemia. Vitamin B12 can also be given sublingually, but there is no evidence that this route of administration is superior to the oral route, and only Canada and Sweden routinely prescribe this route of administration. Vitamin B12 absorption is a multistep process involving the stomach, pancreas, and small intestine, and it is mediated by two carriers, haptocorrin and intrinsic factor. Because vitamin B12 is acid-sensitive, binding to haptocorrin (transcobalamin I) allows it to pass safely through the acidic stomach to the duodenum, provided the vitamin has had time in the mouth to bind haptocorrin in the saliva.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Depolarizing prepulse** Depolarizing prepulse: A depolarizing prepulse (DPP) is an electrical stimulus that causes the potential difference measured across a neuronal membrane to become more positive or less negative, and precedes another electrical stimulus. DPPs may be of either the voltage or current stimulus variety and have been used to inhibit neural activity, selectively excite neurons, and increase the pain threshold associated with electrocutaneous stimulation. Biophysical mechanisms: Hodgkin–Huxley model Typical action potentials are initiated by voltage-gated sodium channels. As the transmembrane voltage is increased, the probability that a given voltage-gated sodium channel is open increases, thus enabling an influx of Na+ ions. Once the sodium inflow becomes greater than the potassium outflow, a positive feedback loop of sodium entry is closed and thus an action potential is fired. Biophysical mechanisms: In the early 1950s Drs. Hodgkin and Huxley performed experiments on the squid giant axon, and in the process developed a model (the Hodgkin–Huxley model) for sodium channel conductance. It was found that the conductance may be expressed as $g_{\text{Na}^+} = \bar{g}_{\text{Na}^+} m^3 h$, where $\bar{g}_{\text{Na}^+}$ is the maximum sodium conductance, $m$ is the activation gate, and $h$ is the inactivation gate (both gates are shown in the adjacent image). The values of $m$ and $h$ vary between 0 and 1, depending upon the transmembrane potential. Biophysical mechanisms: As the transmembrane potential rises, the value of $m$ increases, thus increasing the probability that the activation gate will be open. And as the transmembrane potential drops, the value of $h$ increases, along with the probability that the inactivation gate will be open. The rate of change for an $h$ gate is much slower than that of an $m$ gate; therefore, if one precedes a sub-threshold voltage stimulation with a hyperpolarizing prepulse, the value of $h$ may be temporarily increased, enabling the neuron to fire an action potential. Biophysical mechanisms: Vice versa, if one precedes a supra-threshold voltage stimulation with a depolarizing prepulse, the value of $h$ may be temporarily reduced, enabling the inhibition of the neuron. An illustration of how the transmembrane voltage response to a supra-threshold stimulus may differ, based upon the presence of a depolarizing prepulse, may be observed in the adjacent image. Biophysical mechanisms: The Hodgkin–Huxley model is slightly inaccurate, as it glosses over some dependencies; for example, the inactivation gate should not be able to close unless the activation gate is open, and the inactivation gate, once closed, is located inside the cell membrane where it cannot be directly affected by the transmembrane potential. However, this model is useful for gaining a high-level understanding of hyperpolarizing and depolarizing prepulses. Depolarizing a neuron generally makes it more likely to fire. Biophysical mechanisms: Voltage-gated sodium channel Since the Hodgkin–Huxley model was first proposed in the 1950s, much has been learned concerning the structure and functionality of voltage-gated sodium channels. Although the exact three-dimensional structure of the sodium channel remains unknown, its composition and the functionality of individual components have been determined. Voltage-gated sodium channels are large, multimeric complexes, composed of a single α subunit and one or more β subunits, an illustration of which may be observed in the adjacent image. 
The α subunit folds into four homologous domains, each of which contains six α-helical transmembrane segments. The S4 segments of each domain serve as voltage sensors for activation. Each S4 segment consists of a repeating structure of one positively charged residue and two hydrophobic residues, and these combine to form a helical arrangement. When the channel is depolarized, these S4 segments undergo a conformational change that widens the helical arrangement and opens the sodium-channel pore. Within milliseconds after the pore's opening, the intracellular loop that connects domains III and IV binds to the channel's intracellular pore, inactivating the channel. Thus, by providing a depolarizing prepulse before a stimulus, there is a greater probability that the inactivating domains of the sodium channels have bound to their respective pores, reducing the stimulus-induced sodium influx and the influence of the stimulus. Depolarizing prepulse properties: DPP duration The relationship between the DPP duration and neuronal recruitment is as follows. If the duration of the DPP is relatively short, i.e. much less than 100 μs, then the threshold of excitation for the surrounding nerves will be decreased rather than increased, possibly resulting from the depolarization of the S4 segments and the little time allowed for inactivation. For long-duration DPPs, the III and IV domains of the sodium channels (discussed above) are given more time to bind with their respective channel pores; thus, the threshold current is observed to increase with an increasing DPP duration. Depolarizing prepulse properties: DPP amplitude As the DPP amplitude is increased from zero to near threshold, the resulting increase in threshold current will grow as well. This is because the higher amplitude activates more sodium channels, thus allowing more channels to become inactivated by their III and IV domains. DPP inter-phase delay An increase in the delay between the DPP and the stimulus provides more time during which the sodium channel S4 segments may close and the III and IV domains may detach themselves from their respective pores. Thus, an increase in the DPP inter-phase delay will reduce the effective increase in threshold current induced by the DPP. Depolarizing prepulse applications: Elevating pain thresholds One immediate application for depolarizing prepulses, explored by Drs. Poletto and Van Doren, is to elevate the pain thresholds associated with electrocutaneous stimulation. Electrocutaneous stimulation possesses a great deal of potential as a mechanism for the conveyance of additional sensory information. Hence, this method of stimulation may be directly applied to fields such as virtual reality, sensory substitution, and sensory augmentation. However, many of these applications require the use of small electrode arrays, stimulation through which is often painful, thus limiting the usefulness of this technology. The experimental setup, constructed by Drs. 
Poletto and Van Doren, was as follows: four human subjects, each of whom had demonstrated the ability to provide reliable pain judgments in previous studies, rested the left middle finger on 1 mm diameter polished stainless steel disk electrodes; a single stimulus consisted of a burst of three identical prepulse and stim-pulse pairs, presented at the beginning, middle, and end of a 1-second interval; the prepulse and stim-pulse widths were matched at a duration of 10 milliseconds so that the thresholds would be the same for both; varying prepulse amplitudes of 0%, 79%, 63%, 50%, 40%, and 32% of the stimulus amplitude were used so as to study their influence over the pain experienced; and the experiments were conducted in such a way that the stimulus without a prepulse was painful about half of the time, achieved by stepping the stim-pulse amplitude up or down for the next trial based upon whether it was reported as painful. Their results demonstrated that a prepulse before a stimulus pulse effectively reduces the probability that pain will be experienced due to electrocutaneous stimulation. Surprisingly enough, a prepulse of 32% of the amplitude of the stimulus pulse was able to nearly halve the probability of experiencing pain. Therefore, in environments in which the pain threshold is difficult to discern, it may be sufficient to deliver a relatively low amplitude prepulse before the stimulus to achieve the desired effects. Depolarizing prepulse applications: Nerve fiber recruitment order In addition to inhibiting neural excitability, it has been observed that preceding an electrical stimulus with a depolarizing prepulse allows one to invert the current-distance relationship controlling nerve fiber recruitment, where the current-distance relationship describes how the threshold current for nerve fiber excitation is proportional to the square of the distance between the nerve fiber and the electrode. Therefore, if the region of influence for the depolarizing prepulse is less than that for the stimulus, the nerve fibers closer to the electrode will experience a greater increase in their threshold current for excitation. Thus, provided such a stimulus, the nerve fibers closest to the electrode may be inhibited, while those further away may be excited. A simulation of this stimulation, constructed by Drs. Warren Grill and J. Thomas Mortimer, may be observed in the adjacent image. Building upon this, a stimulus with two depolarizing prepulses, each of an amplitude slightly below the threshold current (at the time of delivery), should increase the radii of influence for nearby nerve fiber inactivation and distant nerve fiber excitation. Depolarizing prepulse applications: Typically, nerve fibers of a larger diameter may be activated by single-pulse stimuli of a lower intensity, and thus may be recruited more readily. However, DPPs have demonstrated the additional capability to invert this recruitment order. As electrical stimuli have a greater effect on nerve fibers of a larger diameter, DPPs will in turn cause a larger degree of sodium conductance inactivation within such nerve fibers; thus, nerve fibers of a smaller diameter will have a lower threshold current.
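The inactivation dynamics described above can be made concrete with a short numerical sketch. Using the standard Hodgkin–Huxley squid-axon rate functions for the h gate (membrane potential in mV, rest near -65 mV), holding the membrane at a depolarized level for a few milliseconds before a stimulus visibly lowers h, and therefore the sodium conductance available to that stimulus. The prepulse amplitude and timings below are arbitrary illustrative choices, not values from the studies cited in this article.

```python
# Forward-Euler integration of the HH inactivation gate h:
#   dh/dt = alpha_h(V) * (1 - h) - beta_h(V) * h
import math

def alpha_h(v: float) -> float:
    return 0.07 * math.exp(-(v + 65.0) / 20.0)

def beta_h(v: float) -> float:
    return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))

def final_h(v_of_t, t_end_ms: float, dt: float = 0.01) -> float:
    h = 0.6  # approximate resting value of the h gate
    t = 0.0
    while t < t_end_ms:
        v = v_of_t(t)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        t += dt
    return h

rest = lambda t: -65.0                            # membrane held at rest
with_dpp = lambda t: -45.0 if t < 5.0 else -65.0  # 5 ms depolarizing prepulse

print("h at stimulus time, no prepulse:   ", round(final_h(rest, 6.0), 3))
print("h at stimulus time, after prepulse:", round(final_h(with_dpp, 6.0), 3))
```

Running this shows h near its resting value of about 0.6 without a prepulse, but reduced several-fold after the prepulse, mirroring the reduced sodium influx the text attributes to DPPs.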
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Membrane topology** Membrane topology: The topology of a transmembrane protein refers to the locations of the N- and C-termini of the membrane-spanning polypeptide chain with respect to the inner or outer sides of the biological membrane occupied by the protein. Membrane topology: Several databases provide experimentally determined topologies of membrane proteins. They include Uniprot, TOPDB, OPM, and ExTopoDB. There is also a database of domains located conservatively on a certain side of membranes, TOPDOM. Several computational methods were developed, with limited success, for predicting transmembrane alpha-helices and their topology. Pioneering methods utilized the fact that membrane-spanning regions contain more hydrophobic residues than other parts of the protein; however, applying different hydrophobicity scales altered the prediction results. Later, several statistical methods were developed to improve the topography prediction and a special alignment method was introduced. According to the positive-inside rule, cytosolic loops near the lipid bilayer contain more positively charged amino acids. Applying this rule resulted in the first topology prediction methods. There is also a negative-outside rule in transmembrane alpha-helices from single-pass proteins, although negatively charged residues are rarer than positively charged residues in transmembrane segments of proteins. As more structures were determined, machine learning algorithms appeared. Supervised learning methods are trained on a set of experimentally determined structures; however, these methods highly depend on the training set. Unsupervised learning methods are based on the principle that topology depends on the maximum divergence of the amino acid distributions in different structural parts. It was also shown that locking a segment location based on prior knowledge about the structure improves the prediction accuracy. This feature has been added to some of the existing prediction methods. The most recent methods use consensus prediction (i.e. they use several algorithms to determine the final topology) and automatically incorporate previously determined experimental information. The HTP database provides a collection of topologies that are computationally predicted for human transmembrane proteins. Membrane topology: Discrimination of signal peptides and transmembrane segments is an additional problem in topology prediction, treated with limited success by different methods. Both signal peptides and transmembrane segments contain hydrophobic regions which form α-helices. This causes cross-prediction between them, which is a weakness of many transmembrane topology predictors. By predicting signal peptides and transmembrane helices simultaneously (Phobius), the errors caused by cross-prediction are reduced and the performance is substantially increased. Another feature used to increase the accuracy of the prediction is homology (PolyPhobius). It is also possible to predict the topology of beta-barrel membrane proteins.
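As a toy illustration of the positive-inside rule mentioned above, the sketch below orients a single-pass protein by comparing the lysine/arginine content of the two loops flanking a presumed transmembrane helix; whichever side is richer in positive charge is assigned to the cytosol. Real predictors combine this signal with hydrophobicity profiles, statistics, and machine learning; the sequence and helix bounds here are invented for the example.

```python
# Toy positive-inside rule: the flanking loop with more K/R residues
# is predicted to face the cytosol ("inside").
def positive_inside_orientation(seq: str, tm_start: int, tm_end: int) -> str:
    """Return 'N-in' if the N-terminus is predicted cytosolic, else 'N-out'."""
    def positives(segment: str) -> int:
        return sum(segment.count(aa) for aa in "KR")

    n_loop = seq[:tm_start]   # loop on the N-terminal side of the helix
    c_loop = seq[tm_end:]     # loop on the C-terminal side of the helix
    return "N-in" if positives(n_loop) >= positives(c_loop) else "N-out"

# Hypothetical single-pass sequence: K/R-rich N-terminal loop,
# hydrophobic core at positions 10..29, neutral C-terminal loop.
example = "MKRKSRQTLL" + "AVILLLVVAGILLLWVILAV" + "QSTNGDSAQE"
print(positive_inside_orientation(example, 10, 30))  # -> N-in
```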
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GrIDsure** GrIDsure: GrIDsure was a personal identification system which extended the standard ‘shared-secret’ authentication model to create a secure methodology whereby a dynamic ‘one-time’ password or PIN can be generated by a user. It was invented by Jonathan Craymer and Stephen Howes in November 2005. It received positive media reception. GrIDsure went into liquidation in October 2011 after investor funding dried up. On 18 November 2011 Cryptocard announced it had acquired the intellectual property of GrIDsure, which includes eight patents that have been granted and a further 16 pending. Cryptocard was already a GrIDsure OEM partner and uses the product in its portfolio. Authentication method: In order to authenticate, the user is asked to input a series of numbers based on a preregistered pattern on a grid (which the user knows) and a grid of pseudo-random numbers generated by the authenticator. This results in a different series of numbers each time the user authenticates. Academic reception: A study of the statistical security of GrIDsure was carried out by Richard Weber in the Statistical Laboratory of the University of Cambridge. He concluded, "This is one of the most beautiful ideas I have seen in many years of looking at algorithms and optimisation problems." In March 2008, an independent security researcher, Mike Bond, identified flaws in the GrIDsure authentication scheme, specifically commenting on Weber's analysis, and concluded: "The Gridsure authentication mechanism remains largely unproven. Studies so far are flawed or taken out of context; my own initial studies indicate further weaknesses." The introduction to Dr Bond's paper states, "This document is not intended to be a fully representative or balanced appraisal of the scheme." University College London conducted a usability trial. Academic reception: In a covering letter to the study report, Professor Sasse states: "Having looked at many mechanisms which have been proposed in recent years to overcome users' problems with PINs and passwords, this is the first one that has the potential to offer good usability and increased security at the same time."
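A minimal sketch of the challenge-response idea described in the Authentication method section: the authenticator shows a fresh grid of pseudo-random digits, and the one-time PIN is simply the digits sitting at the user's secret pattern positions. Grid size, pattern length, and the flow below are assumptions for illustration; the commercial product's details differed.

```python
# Toy grid-based one-time PIN in the style described above.
import secrets

GRID_SIZE = 5  # 5x5 grid of digits, assumed for illustration

def new_challenge_grid() -> list[int]:
    """Authenticator generates a fresh grid of pseudo-random digits."""
    return [secrets.randbelow(10) for _ in range(GRID_SIZE * GRID_SIZE)]

def expected_pin(grid: list[int], pattern: list[int]) -> str:
    """The one-time PIN is the digits at the user's secret positions."""
    return "".join(str(grid[pos]) for pos in pattern)

pattern = [0, 6, 12, 18]        # secret pattern: a diagonal of cells
grid = new_challenge_grid()     # different digits every authentication
pin = expected_pin(grid, pattern)
# The user reads the digits off their remembered pattern and submits them;
# because the grid changes each time, the submitted PIN is one-time.
print(grid, pin)
```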
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**H1F0** H1F0: H1 histone family, member 0 is a member of the histone family of nuclear proteins which are a component of chromatin. In humans, this protein is encoded by the H1F0 gene.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hybrid-propellant rocket** Hybrid-propellant rocket: A hybrid-propellant rocket is a rocket with a rocket motor that uses rocket propellants in two different phases: one solid and the other either gas or liquid. The hybrid rocket concept can be traced back to the early 1930s. Hybrid-propellant rocket: Hybrid rockets avoid some of the disadvantages of solid rockets, like the dangers of propellant handling, while also avoiding some disadvantages of liquid rockets, like their mechanical complexity. Because it is difficult for the fuel and oxidizer to be mixed intimately (being different states of matter), hybrid rockets tend to fail more benignly than liquids or solids. Like liquid rocket engines, hybrid rocket motors can be shut down easily and the thrust is throttleable. The theoretical specific impulse (Isp) performance of hybrids is generally higher than that of solid motors and lower than that of liquid engines. Isp values as high as 400 s have been measured in a hybrid rocket using metalized fuels. Hybrid systems are more complex than solid ones, but they avoid significant hazards of manufacturing, shipping and handling solid rocket motors by storing the oxidizer and the fuel separately. History: The first work on hybrid rockets was performed in the early 1930s at the Soviet Group for the Study of Reactive Motion. Mikhail Klavdievich Tikhonravov, who would later supervise the design of Sputnik I and the Luna programme, was responsible for the first hybrid-propelled rocket launch, the GIRD-9, on 17 August 1933, which reached an altitude of 400 metres (1,300 ft). Further work occurred in the late 1930s at IG Farben in Germany and concurrently at the California Rocket Society in the United States. Leonid Andrussow, working in Germany, theorized hybrid-propellant rockets. O. Lutz, W. Noeggerath, and Andrussow tested a 10 kilonewtons (2,200 lbf) hybrid rocket motor using coal and gaseous N2O as the propellants. Hermann Oberth also worked on a hybrid rocket motor using LOX as the oxidizer and graphite as the fuel. The high heat of sublimation of carbon prevented these rocket motors from operating efficiently, as it resulted in a negligible burning rate. History: In the 1940s, the California Pacific Rocket Society used LOX in combination with several different fuel types, including wood, wax, and rubber. The most successful of these tests was with the rubber fuel, which is still the dominant fuel in use today. In June 1951, a LOX/rubber rocket was flown to an altitude of 9 kilometres (5.6 mi). Two major efforts occurred in the 1950s. One of these efforts was by G. Moore and K. Berman at General Electric. The duo used 90% high-test peroxide (HTP, or H2O2) and polyethylene (PE) in a rod-and-tube grain design. They drew several significant conclusions from their work. The fuel grain had uniform burning. Grain cracks did not affect combustion, as they do with solid rocket motors. No hard starts were observed (a hard start is a pressure spike seen close to the time of ignition, typical of liquid rocket engines). The fuel surface acted as a flame holder, which encouraged stable combustion. The oxidizer could be throttled with one valve, and a high oxidizer-to-fuel ratio helped simplify combustion. The negative observations were low burning rates and that the thermal instability of peroxide was problematic for safety reasons. Another effort that occurred in the 1950s was the development of a reverse hybrid. In a standard hybrid rocket motor, the solid material is the fuel. In a reverse hybrid rocket motor, the oxidizer is solid. 
William Avery of the Applied Physics Laboratory used jet fuel and ammonium nitrate, selected for their low cost. His O/F ratio was 0.035, which was 200 times smaller than the ratio used by Moore and Berman. In 1953, the Pacific Rocket Society (est. 1943) was developing the XDF-23, a 4 inches (10 cm) × 72 inches (180 cm) hybrid rocket, designed by Jim Nuding, using LOX and a rubber polymer called "Thiokol". They had already tried other fuels in prior iterations, including cotton, paraffin wax and wood. The XDF name itself comes from "experimental Douglas fir", from one of the first units. In the 1960s, European organizations also began work on hybrid rockets. ONERA, based in France, and Volvo Flygmotor, based in Sweden, developed sounding rockets using hybrid rocket motor technology. The ONERA group focused on a hypergolic rocket motor, using nitric acid and an amine fuel. The company flew eight rockets: once in April 1964, three times in June 1965, and four times in 1967. The maximum altitude the flights achieved was over 100 kilometres (62 mi). The Volvo Flygmotor group also used a hypergolic propellant combination. They also used nitric acid for their oxidizer, but used Tagaform (polybutadiene with an aromatic amine) as their fuel. Their flight was in 1969, lofting a 20 kilograms (44 lb) payload to 80 kilometres (50 mi). Meanwhile, in the United States, United Technologies Center (Chemical Systems Division) and Beech Aircraft were working on a supersonic target drone, known as Sandpiper. It used MON-25 (mixed 25% NO, 75% N2O4) as the oxidizer and polymethyl methacrylate (PMM) and Mg for the fuel. The drone flew six times in 1968, for more than 300 seconds and to an altitude greater than 160 kilometres (99 mi). The second iteration of the rocket, known as the HAST, had IRFNA-PB/PMM for its propellants and was throttleable over a 10/1 range. HAST could carry a heavier payload than the Sandpiper. Another iteration, which used the same propellant combination as the HAST, was developed by Chemical Systems Division and Teledyne Aircraft. Development for this program ended in the mid-1980s. Chemical Systems Division also worked on a propellant combination of lithium and FLOx (mixed F2 and O2). This was an efficient hypergolic rocket that was throttleable. The vacuum specific impulse was 380 seconds at 93% combustion efficiency. American Rocket Company (AMROC) developed the largest hybrid rockets ever created in the late 1980s and early 1990s. The first version of their engine, fired at the Air Force Phillips Laboratory, produced 312,000 newtons (70,000 lbf) of thrust for 70 seconds with a propellant combination of LOX and hydroxyl-terminated polybutadiene (HTPB) rubber. The second version of the motor, known as the H-250F, produced more than 1,000,000 newtons (220,000 lbf) of thrust. Korey Kline of Environmental Aeroscience Corporation (eAc) first fired a gaseous oxygen and rubber hybrid in 1982 at Lucerne Dry Lake, CA, after discussions on the technology with Bill Wood, formerly with Westinghouse. The first SpaceShipOne hybrid tests were successfully conducted by Kline and eAc at Mojave, CA. In 1994, the U.S. Air Force Academy flew a hybrid sounding rocket to an altitude of 5 kilometres (3.1 mi). The 6.4 metres (21 ft) rocket used HTPB and LOX for its propellant, and reached a peak thrust of 4,400 newtons (990 lbf) and had a thrust duration of 16 seconds. 
Basic concepts: In its simplest form, a hybrid rocket consists of a pressure vessel (tank) containing the liquid oxidiser, the combustion chamber containing the solid propellant, and a mechanical device separating the two. When thrust is desired, a suitable ignition source is introduced in the combustion chamber and the valve is opened. The liquid oxidiser (or gas) flows into the combustion chamber where it is vaporized and then reacted with the solid propellant. Combustion occurs in a boundary layer diffusion flame adjacent to the surface of the solid propellant. Basic concepts: Generally, the liquid propellant is the oxidizer and the solid propellant is the fuel, because solid oxidizers are extremely dangerous and lower performing than liquid oxidizers. Furthermore, using a solid fuel such as hydroxyl-terminated polybutadiene (HTPB) or paraffin wax allows for the incorporation of high-energy fuel additives such as aluminium, lithium, or metal hydrides. Combustion: The governing equation for hybrid rocket combustion shows that the regression rate is dependent on the oxidizer mass flux rate, which means the rate that the fuel will burn is proportional to the amount of oxidizer flowing through the port. This differs from a solid rocket motor, in which the regression rate is proportional to the chamber pressure of the motor. Combustion: $\dot{r} = a_o G_o^n$, where $\dot{r}$ is the regression rate, $a_o$ is the regression rate coefficient (incorporating the grain length), $G_o$ is the oxidizer mass flux rate, and $n$ is the regression rate exponent. As the motor burns, the increase in diameter of the fuel port results in an increased fuel mass flow rate. This phenomenon makes the oxidizer-to-fuel ratio (O/F) shift during the burn. The increased fuel mass flow rate can be compensated for by also increasing the oxidizer mass flow rate. In addition to the O/F varying as a function of time, it also varies based on the position down the fuel grain. The closer the position is to the top of the fuel grain, the higher the O/F ratio. Since the O/F varies down the port, a point called the stoichiometric point may exist at some point down the grain. Properties: Hybrid rocket motors exhibit some obvious as well as some subtle advantages over liquid-fuel rockets and solid-fuel rockets. A brief summary of some of these is given below: Advantages compared with liquid rockets Mechanically simpler – requires only a single liquid propellant, resulting in less plumbing, fewer valves, and simpler operations. Denser fuel – fuels in the solid phase generally have higher density than those in the liquid phase, reducing overall system volume. Metal additives – reactive metals such as aluminium, magnesium, lithium or beryllium can be easily included in the fuel grain, increasing specific impulse (Isp), density, or both. Combustion instabilities – Hybrid rockets do not typically exhibit the high frequency combustion instabilities that plague liquid rockets, due to the solid fuel grain breaking up acoustic waves that would otherwise reflect in an open liquid engine combustion chamber. Properties: Propellant pressurization – One of the most difficult to design portions of a liquid rocket system are the turbopumps. Turbopump design is complex, as it has to precisely and efficiently pump and keep separated two fluids of different properties in precise ratios at very high volumetric flow rates, often at cryogenic temperatures, and with highly volatile chemicals, while combusting those same fluids in order to power itself. 
For pressurization, by contrast, hybrids have far less fluid to move and can often be pressurized by a blow-down system (which would be prohibitively heavy in a liquid rocket) or by self-pressurizing oxidizers (such as N2O). Properties: Cooling – liquid rockets often depend on one of the propellants, typically the fuel, to cool the combustion chamber and nozzle, due to the very high heat fluxes and the vulnerability of the metal walls to oxidation and stress cracking. Hybrid rockets have combustion chambers that are lined with the solid propellant, which shields the chamber walls from the product gases. Their nozzles are often graphite or coated in ablative materials, similar to those of solid rocket motors. The design, construction, and testing of liquid cooling flows is complex, making such a system more prone to failure. Properties: Advantages compared with solid rockets Higher theoretical Isp – possible because known solid oxidizers are lower-performing than commonly used liquid oxidizers. Properties: Less explosion hazard – the propellant grain is more tolerant of processing errors such as cracks, since the burn rate is dependent on oxidizer mass flux rate. The propellant grain cannot be ignited by stray electrical charge and is very resistant to auto-ignition from heat. Hybrid rocket motors can be transported to the launch site with the oxidizer and fuel stored separately, improving safety. Properties: Fewer handling and storage issues – ingredients in solid rockets are often incompatible chemically and thermally, and repeated changes in temperature can cause distortion of the grain; antioxidants and coatings are needed to keep the grain from breaking down or decomposing. More controllable – stop/restart and throttling are easily incorporated into most designs, whereas solid rockets rarely can be shut down easily and almost never have throttling or restart capabilities. Properties: Disadvantages of hybrid rockets Hybrid rockets also exhibit some disadvantages when compared with liquid and solid rockets. These include: Oxidizer-to-fuel ratio shift ("O/F shift") – with a constant oxidizer flow rate, the ratio of fuel production rate to oxidizer flow rate will change as the grain regresses. This leads to off-peak operation from a chemical performance point of view. However, for a well-designed hybrid, O/F shift has a very small impact on performance, because Isp is insensitive to O/F near the peak. Properties: Poor regression characteristics often drive multi-port fuel grains. Multi-port fuel grains have poor volumetric efficiency and, often, structural deficiencies. High-regression-rate liquefying fuels developed in the late 1990s offer a potential solution to this problem. Properties: Compared with liquid-based propulsion, re-fueling a partially or totally depleted hybrid rocket would present significant challenges, as the solid propellant cannot simply be pumped into a fuel tank. This may or may not be an issue, depending upon how the rocket is planned to be used. In general, much less development work has been completed with hybrids than with liquids or solids, and it is likely that some of these disadvantages could be rectified through further investment in research and development. Properties: One problem in designing large hybrid orbital rockets is that turbopumps become necessary to achieve high flow rates and pressurization of the oxidizer. This turbopump must be powered by something. In a traditional liquid-propellant rocket, the turbopump uses the same fuel and oxidizer as the rocket, since both are liquid and can be fed to the pre-burner.
But in a hybrid, the fuel is solid and cannot be fed to a turbopump's engine. Some hybrids use an oxidizer that can also be used as a monopropellant, such as nitromethane or hydrogen peroxide, so that a turbopump can run on it alone. But nitromethane and hydrogen peroxide are significantly less efficient than liquid oxygen, which cannot be used alone to run a turbopump. Another fuel would be needed, requiring its own tank and decreasing rocket performance. Fuel: Common fuel choices A reverse-hybrid rocket, which is not very common, is one where the engine uses a solid oxidizer and a liquid fuel. Some liquid fuel options are kerosene, hydrazine, and LH2. Common fuels for a typical hybrid rocket engine include polymers such as acrylics, polyethylene (PE), cross-linked rubber such as HTPB, or liquefying fuels such as paraffin wax. Plexiglass was a common fuel, since the combustion could be seen through the transparent combustion chamber. Hydroxyl-terminated polybutadiene (HTPB) synthetic rubber is currently the most popular fuel for hybrid rocket engines, due to its energy content and to how safe it is to handle. Tests have been performed in which HTPB was soaked in liquid oxygen, and it still did not become explosive. These fuels are generally not as dense as solid rocket motors, so they are often doped with aluminum to increase the density and therefore the rocket performance. Grain manufacturing methods Cast Hybrid rocket fuel grains can be manufactured via casting techniques, since they are typically a plastic or a rubber. The complex geometries driven by the need for higher fuel mass flow rates make casting fuel grains for hybrid rockets expensive and time-consuming, due in part to equipment costs. On a larger scale, cast grains must be supported by internal webbing so that large chunks of fuel do not impact or even potentially block the nozzle. Grain defects are also an issue in larger grains. Traditional fuels that are cast are hydroxyl-terminated polybutadiene (HTPB) and paraffin waxes. Fuel: Additive manufacturing Additive manufacturing is currently being used to create grain structures that were otherwise not possible to manufacture. Helical ports have been shown to increase fuel regression rates while also increasing volumetric efficiency. An example of a material used for a hybrid rocket fuel is acrylonitrile butadiene styrene (ABS). The printed material is also typically enhanced with additives to improve rocket performance. Recent work at the University of Tennessee Knoxville has shown that, due to the increased surface area, the use of powdered fuels (e.g., graphite, coal, aluminum) encased in a 3D-printed ABS matrix can significantly increase the fuel burn rate and thrust level as compared to traditional polymer grains. Oxidizer: Common oxidizer choices Common oxidizers include gaseous or liquid oxygen, nitrous oxide, and hydrogen peroxide. For a reverse hybrid, oxidizers such as frozen oxygen and ammonium perchlorate are used. Proper oxidizer vaporization is important for the rocket to perform efficiently; improper vaporization can lead to very large regression rate differences between the head end of the motor and the aft end. One method is to use a hot-gas generator to heat the oxidizer in a pre-combustion chamber. Another method is to use an oxidizer that can also be used as a monopropellant. A good example is hydrogen peroxide, which can be catalytically decomposed over a silver bed into hot oxygen and steam.
A third method is to inject a propellant that is hypergolic with the oxidizer into the flow; some of the oxidizer will decompose, heating up the rest of the oxidizer in the flow. Hybrid safety: Generally, well-designed and carefully constructed hybrids are very safe. The primary hazards associated with hybrids are: Pressure vessel failures – chamber insulation failure may allow hot combustion gases near the chamber walls, leading to a "burn-through" in which the vessel ruptures. Hybrid safety: Blow-back – for oxidizers that decompose exothermically, such as nitrous oxide or hydrogen peroxide, flame or hot gases from the combustion chamber can propagate back through the injector, vaporizing the oxidizer and mixing it with hot fuel-rich gases, leading to a tank explosion. Blow-back requires gases to flow back through the injector due to insufficient pressure drop, which can occur during periods of unstable combustion. Blow-back is inherent to specific oxidizers and is not possible with oxidizers such as oxygen or nitrogen tetroxide, unless fuel is present in the oxidizer tank. Hybrid safety: Hard starts – an excess of oxidizer in the combustion chamber prior to ignition, particularly for monopropellants such as nitrous oxide, can result in a temporary over-pressure or "spike" at ignition. Because the fuel in a hybrid does not contain an oxidizer, it will not combust explosively on its own. For this reason, hybrids are classified as having no TNT equivalent explosive power. In contrast, solid rockets often have TNT equivalencies similar in magnitude to the mass of the propellant grain. Liquid-fuel rockets typically have a TNT equivalence calculated based on the amount of fuel and oxidizer which could realistically intimately combine before igniting explosively; this is often taken to be 10–20% of the total propellant mass. For hybrids, even filling the combustion chamber with oxidizer prior to ignition will not generally create an explosion with the solid fuel, so the explosive equivalence is often quoted as 0%. Organizations working on hybrids: Commercial companies In 1998 SpaceDev acquired all of the intellectual property, designs, and test results generated by over 200 hybrid rocket motor firings by the American Rocket Company over its eight-year life. SpaceShipOne, the first private crewed spacecraft, was powered by SpaceDev's hybrid rocket motor burning HTPB with nitrous oxide. However, nitrous oxide was the prime substance responsible for the explosion that killed three people during the development of SpaceShipOne's successor at Scaled Composites in 2007. The Virgin Galactic SpaceShipTwo follow-on commercial suborbital spaceplane uses a scaled-up hybrid motor. Organizations working on hybrids: SpaceDev was developing the SpaceDev Streaker, an expendable small launch vehicle, and SpaceDev Dream Chaser, capable of both suborbital and orbital human space flight. Both Streaker and Dream Chaser use hybrid rocket motors that burn nitrous oxide and the synthetic rubber HTPB. SpaceDev was acquired by Sierra Nevada Corporation in 2009, becoming its Space Systems division, which continues to develop Dream Chaser for NASA's Commercial Crew Development contract. Sierra Nevada also developed RocketMotorTwo, the hybrid engine for SpaceShipTwo. On October 31, 2014, when SpaceShipTwo was lost, initial speculation suggested that its hybrid engine had exploded, killing one test pilot and seriously injuring the other.
However, investigation data now indicates that an early deployment of the SpaceShipTwo feather system was the cause of the aerodynamic breakup of the vehicle. U.S. Rockets manufactured and deployed hybrids using self-pressurizing nitrous oxide (N2O) and hydroxyl-terminated polybutadiene (HTPB), as well as mixed high-test peroxide (HTP) and HTPB. The 86% high-test peroxide (H2O2), HTPB, and aluminum hybrids developed by U.S. Rockets produced a sea-level delivered specific impulse (Isp) of 240 s, well above the typical 180 s of N2O–HTPB hybrids. In addition, they were self-starting and restartable and had considerably lower combustion instability, making them suitable for fragile or crewed missions such as Bloodhound SSC, SpaceShipTwo or SpaceShipThree. The company had successfully tested and deployed both pressure-fed and pump-fed versions of the latter HTP–HTPB style. Deliverables to date have ranged from 6-inch to 18-inch diameter, with developed units up to 54-inch diameter. The vendor claimed scalability to over 5 meters diameter with regression rates approaching solids, according to literature distributed at the November 2013 Defense Advanced Research Projects Agency (DARPA) meeting for XS-1. U.S. Rockets is no longer manufacturing large-scale rockets. Gilmour Space Technologies began testing hybrid rocket engines in 2015, with both N2O and HP combined with HDPE and HDPE+wax blends. Testing in 2016 included a 5,000 lb HP/PE engine. The company is planning to use hybrids for both sounding and orbital rockets. Organizations working on hybrids: Orbital Technologies Corporation (Orbitec) has been involved in some U.S. government-funded research on hybrid rockets, including the "Vortex Hybrid" concept. Environmental Aeroscience Corporation (eAc) was incorporated in 1994 to develop hybrid rocket propulsion systems. It was included in the design competition for the SpaceShipOne motor but lost the contract to SpaceDev. Environmental Aeroscience Corporation still supplied parts to SpaceDev for the oxidizer fill, vent, and dump system. Rocket Lab formerly sold hybrid sounding rockets and related technology. Organizations working on hybrids: The Reaction Research Society (RRS), although known primarily for its work with liquid rocket propulsion, has a long history of research and development with hybrid rocket propulsion. Copenhagen Suborbitals, a Danish rocket group, has designed and test-fired several hybrids, using N2O at first and currently LOX. Their fuel is epoxy, paraffin wax, or polyurethane. The group eventually moved away from hybrids because of thrust instabilities, and now uses a motor similar to that of the V-2 rocket. Organizations working on hybrids: TiSPACE is a Taiwanese company that is developing a family of hybrid-propellant rockets. bluShift Aerospace in Brunswick, Maine, won a NASA SBIR grant to develop a modular hybrid rocket engine for its proprietary bio-derived fuel in June 2019. Having completed the grant, bluShift launched its first sounding rocket using the technology. Vaya Space, based in Cocoa, Florida, is expected to launch its hybrid-fuel rocket Dauntless in 2023. Reaction Dynamics, based in Saint-Jean-sur-Richelieu, Quebec, began developing a hybrid rocket engine in 2017 capable of producing 21.6 kN of thrust. Their Aurora rocket will use nine engines on the first stage and one engine on the second stage, and will be capable of delivering a payload of 50–150 kg to LEO.
In May 2022, Reaction Dynamics announced that it was partnering with Maritime Launch Services to launch the Aurora rocket from their launch site under construction in Canso, Nova Scotia, beginning with suborbital test flights in summer 2023 and with a target of 2024 for the first orbital launch. Organizations working on hybrids: Universities Space Propulsion Group was founded in 1999 by Arif Karabeyoglu, Brian Cantwell, and others from Stanford University to develop high-regression-rate liquefying hybrid rocket fuels. They have successfully fired motors as large as 12.5 in diameter producing 13,000 lbf using the technology, and are currently developing a 24 in diameter, 25,000 lbf motor to be initially fired in 2010. Stanford University is the institution where liquid-layer combustion theory for hybrid rockets was developed. The SPaSE group at Stanford is currently working with NASA Ames Research Center developing the Peregrine sounding rocket, which will be capable of reaching 100 km altitude. Engineering challenges include various types of combustion instabilities. Although the proposed motor was test-fired in 2013, the Peregrine program eventually switched to a standard solid rocket for its 2016 debut. Organizations working on hybrids: The University of Tennessee Knoxville has carried out hybrid rocket research since 1999, working in collaboration with NASA Marshall Space Flight Center and private industry. This work has included the integration of a water-cooled calorimeter nozzle, one of the first 3D-printed hot-section components successfully used in a rocket motor. Other work at the university has focused on the use of helical oxidizer injection, bio-derived fuels, and powdered fuels encased in a 3D-printed ABS matrix, including the successful launch of a coal-fired hybrid at the 2019 Spaceport America Cup. At the Delft University of Technology, the student team Delft Aerospace Rocket Engineering (DARE) is very active in the design and building of hybrid rockets. In October 2015, DARE broke the European student altitude record with the Stratos II+ sounding rocket. Stratos II+ was propelled by the DHX-200 hybrid rocket engine, using a nitrous oxide oxidizer and a fuel blend of paraffin, sorbitol and aluminium powder. On July 26, 2018, DARE attempted to launch the Stratos III hybrid rocket. This rocket used the same fuel/oxidizer combination as its predecessor, but with an increased impulse of around 360 kN·s. At the time of development, this was the most powerful hybrid rocket engine ever developed by a student team in terms of total impulse. The Stratos III vehicle was lost 20 seconds into the flight. Florida Institute of Technology has successfully tested and evaluated hybrid technologies with its Panther Project. The WARR student team at the Technical University of Munich has been developing hybrid engines and rockets since the early 1970s, using acids, oxygen, or nitrous oxide in combination with polyethylene or HTPB. The development includes test-stand engines as well as airborne versions, like the first German hybrid rocket, Barbarella. They are currently working on a hybrid rocket with liquid oxygen as its oxidizer, to break the European altitude record for amateur rockets. They are also working with Rocket Crafters and testing their hybrid rockets.
Organizations working on hybrids: Boston University's student-run "Rocket Propulsion Group", which in the past has launched only solid-motor rockets, is attempting to design and build a single-stage hybrid sounding rocket to launch into sub-orbital space by July 2015. Brigham Young University (BYU), the University of Utah, and Utah State University launched a student-designed rocket called Unity IV in 1995, which burned the solid fuel hydroxyl-terminated polybutadiene (HTPB) with an oxidizer of gaseous oxygen, and in 2003 launched a larger version which burned HTPB with nitrous oxide. Organizations working on hybrids: The University of Brasilia's Hybrid Team has done extensive research on paraffin wax/N2O hybrids, having already conducted more than 50 test fires. Hybrid Team is currently working on liquefying propellants, numerical optimization, and rocket design. The rocket design team, called Capital Rocket Team, is now developing high-power hybrid rockets and researching additives. The Chemical Propulsion Laboratory has already carried out some research and is developing the motor for the SARA platform. The University of California, Los Angeles's student-run "Rocket Project at UCLA" launches hybrid-propulsion rockets using nitrous oxide as the oxidizer and HTPB as the fuel. They are currently developing their fifth student-built hybrid rocket engine. The University of Toronto's student-run "University of Toronto Aerospace Team" designs and builds rockets powered by hybrid engines. They are currently constructing a new engine testing facility at the University of Toronto Institute for Aerospace Studies, and are working towards breaking the Canadian amateur rocketry altitude record with their new rocket, Defiance MKIII, currently under rigorous testing. Defiance MKIII's engine, QUASAR, is a nitrous–paraffin hybrid engine capable of producing 7 kN of thrust for a period of 9 seconds. In 2016, Pakistan's DHA Suffa University successfully developed Raheel-1, a 1 kN-class hybrid rocket engine using paraffin wax and liquid oxygen, thereby becoming the first university-run rocket research program in the country. In India, the Department of Space Engineering and Rocketry at the Birla Institute of Technology, Mesra, has been working on hybrid projects with various fuels and oxidizers. Organizations working on hybrids: The Pars Rocketry Group from Istanbul Technical University has designed and built Turkey's first hybrid rocket engine, which was extensively tested in May 2015. A United Kingdom-based team (laffin-gas) is using four N2O hybrid rockets in a drag-racing style car. Each rocket has an outer diameter of 150 mm and is 1.4 m long. They use a fuel grain of high-density wound paper soaked in cooking oil. The N2O supply is provided by nitrogen-pressurised piston accumulators, which provide a higher rate of delivery than N2O gas alone and also damp any reverse shock. In Italy, one of the leading centers for research in hybrid-propellant rockets is CISAS (Center of Studies and Activities for Space) "G. Colombo" at the University of Padua. The activities cover all stages of development: from theoretical analysis of the combustion process to numerical simulation using CFD codes, and then ground tests of small-scale and large-scale rockets (up to 20 kN, N2O–paraffin wax based motors). One of these engines flew successfully in 2009.
Since 2014, the research group has focused on the use of high-test peroxide as the oxidizer, in partnership with "Technology for Propulsion and Innovation", a University of Padua spin-off company. In Taiwan, hybrid rocket system development began in 2009 through R&D projects of NSPO with two university teams. Both teams employed a nitrous oxide/HTPB propellant system with different improvement schemes. Several hybrid rockets have been successfully launched by the NCKU and NCTU teams so far, reaching altitudes of 10–20 km. Their plans include attempting a 100–200 km altitude launch to test nanosatellites, and developing orbital launch capabilities for nanosatellites in the long run. A sub-scale N2O/PE dual-vortical-flow (DVF) hybrid engine hot-fire test in 2014 delivered an average Isp of 280 s, which indicates that the system has reached around 97% combustion efficiency. In Germany, the University of Stuttgart's student team HyEnd is the current world record holder for the highest-flying student-built hybrid rocket with their HEROS rockets. The Aerospace Team of TU Graz, Austria, is also developing a hybrid-propellant rocket. The Polish student team PWr in Space at Wrocław University of Science and Technology has developed three hybrid rockets: R2 "Setka", R3 "Dziewięćdziesiątka dziewiątka" and, the most powerful of all, R4 "Lynx", which was successfully tested at their test stand. Many other universities, such as Embry-Riddle Aeronautical University, the University of Washington, Purdue University, the University of Michigan at Ann Arbor, the University of Arkansas at Little Rock, Hendrix College, the University of Illinois, Portland State University, the University of KwaZulu-Natal, Texas A&M University, Aarhus University, Rice University, and AGH University of Science and Technology, have hybrid motor test stands that allow for student research with hybrid rockets. Organizations working on hybrids: High power rocketry There are a number of hybrid rocket motor systems available for amateur/hobbyist use in high-powered model rocketry. These include the popular HyperTek systems and a number of 'Urbanski-Colburn Valved' (U/C) systems such as RATTWorks, Contrail Rockets, and Propulsion Polymers. All of these systems use nitrous oxide as the oxidizer and a plastic fuel (such as polyvinyl chloride (PVC) or polypropylene) or a polymer-based fuel such as HTPB. This reduces the cost per flight compared to solid rocket motors, although there is generally more ground-support equipment required with hybrids. In popular culture: An October 26, 2005 episode of the television show MythBusters, entitled "Confederate Rocket", featured a hybrid rocket motor using liquid nitrous oxide and paraffin wax. The myth purported that during the American Civil War, the Confederate Army was able to construct a rocket of this type. The myth was revisited in a later episode, entitled "Salami Rocket", using hollowed-out dry salami as the solid fuel. In popular culture: In the February 18, 2007, episode of Top Gear, a Reliant Robin was used by Richard Hammond and James May in an attempt to modify a normal K-reg Robin into a reusable space shuttle. Steve Holland, a professional radio-controlled aircraft pilot, helped Hammond work out how to land a Robin safely. The craft was built by senior members of the United Kingdom Rocketry Association (UKRA) and achieved a successful launch, flew for several seconds into the air, and managed to successfully jettison the solid-fuel rocket boosters on time.
This was the largest rocket launched by a non-government organisation in Europe. It used six 40,960 N·s O-class motors by Contrail Rockets, giving a maximum thrust of 8 tonnes. However, the car failed to separate from the large external fuel tank due to faulty explosive bolts between the Robin and the external tank, and the Robin subsequently crashed into the ground, seeming to explode soon after. This explosion was added for dramatic effect, as neither Reliant Robins nor hybrid rocket motors explode in the way depicted.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Olfactory mucosa** Olfactory mucosa: The olfactory mucosa is the neuroepithelial mucosa lining the roof and upper parts of the septum and lateral wall of the nasal cavity. It contains the bipolar primary receptor neurons of the olfactory pathway, as well as supporting cells. The neurons' dendrites project towards the nasal cavity while their axons ascend through the cribriform plate as the olfactory nerves. The part of the nasal cavity that is lined with olfactory mucosa is known as the olfactory region (pars olfactoria tunicae mucosae nasi), while the rest of the nasal cavity, lined by ordinary respiratory mucosa, is known as the respiratory region. Structure: Olfactory mucosa lines about 5 cm² of the posterosuperior parts of the lateral nasal wall. Parts of the nasal cavity lined by olfactory mucosa include: parts of the roof of the nasal cavity, the superior nasal concha and some upper parts of the middle nasal concha, parts of the nasal septum, and the sphenoethmoidal recess. The olfactory mucosa is thicker and lighter in colour (yellowish-brown) in comparison to the (pinkish) respiratory mucosa lining the rest of the nasal cavity. Structure: Glands of the olfactory mucosa secrete a mostly serous fluid. Structure: Histology The olfactory mucosa consists of the olfactory epithelium and the underlying lamina propria, connective tissue containing fibroblasts, blood vessels, Bowman's glands and bundles of fine axons from the olfactory neurons. In vertebrates, the olfactory epithelium consists of three basic cell types: bipolar olfactory receptor neurons; sustentacular cells, a type of supporting cell; and basal cells, the stem cells that continuously give rise to new olfactory receptor neurons and sustentacular cells. Electron microscopy studies show that Bowman's glands contain cells with large secretory vesicles. The exact composition of the secretions from Bowman's glands is unclear, but there is evidence that they produce odorant-binding protein. Physiology: The mucus protects the olfactory epithelium and allows odors to dissolve so that they can be detected by olfactory receptor neurons. Research: Adult stem cell harvesting Cells in the olfactory mucosa have been used in clinical trials for adult stem cell therapeutic treatments and successfully harvested for future applications. Research: CB1 receptors and obesity Type 1 cannabinoid receptors (CB1 receptors) are present in the sustentacular cells of the olfactory mucosa, in the periglomerular cells of the olfactory bulb, and in the anterior olfactory nucleus and olfactory cortices. A 2008 study in mice showed that the level of CB1 expression in various brain regions, including the olfactory nucleus, is modulated by diet-induced obesity.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Microsoft ION** Microsoft ION: Microsoft ION is Microsoft's self-sovereign identity system. It builds on the Bitcoin blockchain and IPFS through a Sidetree-based decentralized identifier (DID) network.
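As a rough illustration of what such a network manages: an ION identifier uses the did:ion: method prefix and resolves to a W3C-style DID document. The sketch below shows only the general shape of such a document; the identifier suffix, key names, and key material are hypothetical placeholders, not output from the real ION network:

```python
# Hypothetical sketch of an ION DID and the DID document it resolves to.
# The "did:ion:" method prefix is real; everything after it here is a
# placeholder, as are the key identifiers and coordinates below.
import json

did = "did:ion:EiAbCdEf..."  # Sidetree-derived identifier (placeholder suffix)

did_document = {
    "@context": "https://www.w3.org/ns/did/v1",   # W3C DID Core context
    "id": did,
    "verificationMethod": [{
        "id": f"{did}#key-1",
        "type": "EcdsaSecp256k1VerificationKey2019",
        "controller": did,
        "publicKeyJwk": {"kty": "EC", "crv": "secp256k1", "x": "...", "y": "..."},
    }],
    "authentication": [f"{did}#key-1"],
}

print(json.dumps(did_document, indent=2))
```

In Sidetree-based systems such as ION, the document itself is not stored on the blockchain; batches of signed DID operations are kept in IPFS, and only their hashes are anchored in Bitcoin transactions.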
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nokia Communicator** Nokia Communicator: The Nokia Communicator is a brand name for a series of business-optimized mobile phones marketed by Nokia Corporation, all of which appear as normal (if large) phones on the outside, and open in clamshell format to access a QWERTY keyboard and an LCD screen nearly the size of the device footprint. Nokia Communicators have Internet connectivity and clients for Internet and non-Internet communication services. The earlier 9000 series Communicators introduced features which later developed into smartphones. The latest Communicator model, the Nokia E90 Communicator, is part of the Nokia Eseries. Models: The Nokia 9300 and 9300i (running Symbian OS version 7.0s and Series 80 v2.0) are very similar to the Nokia 9500 but were not marketed under the Communicator name by Nokia. Likewise, the Nokia N97 and Nokia E7 (running Symbian^3), from 2009 and 2011 respectively, are also similar to the Communicator series but were not marketed as such. Movies: The first Nokia smartphone in the movies was the Nokia Communicator 9000: Val Kilmer as The Saint used the device to foil the plans of a villainous Russian oligarch in 1997. In Terminator 3: Rise of the Machines, the Terminatrix (T-X), played by Kristanna Loken, hijacks a silver Lexus SC 430 and uses a Nokia 9210 inside the car to dial up a remote link to a local phone systems server. In Bad Company (2002), the special phone used by Chris Rock is the Nokia 9210.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stinkor** Stinkor: Stinkor is a fictional character, a villain from He-Man and the Masters of the Universe. Labeled the "Evil Master of Odors," Stinkor is essentially a humanoid skunk whose superpower is the ability to release a toxic odor from his body that renders foes immobile. Conception: 1985 Stinkor was first introduced in 1985 as an action figure in the He-Man and the Masters of the Universe toyline and came packaged with a mini-comic entitled The Stench of Evil!. The Stinkor action figure had a semi-foul scent, giving it the distinction of being one of the few toys whose "action feature" was an odor. The Stinkor action figure was created by Mattel by re-using the mold of another villain in the Masters of the Universe line, Mer-Man. The only differences between the Mer-Man and Stinkor action figures were that Stinkor was painted black and white, had different chest armor, and was chemically treated with patchouli oil to smell musky. Stinkor was presented to Lou Scheimer and other staff at Filmation for inclusion in the original Masters of the Universe cartoon series, but his questionable superpower kept him from ever making an appearance on television. According to Filmation staff, when the description of Stinkor was read out at a meeting of the story editors, all of them burst out laughing and vowed never to use Stinkor in any episode script. Conception: 2002 In the 2002 version of the He-Man cartoon, Stinkor's origin was finally revealed in the episode "The Sweet Smell of Victory", marking the first time the character had appeared on television. Stinkor was originally a common thief named Odiphus, who resembled a large house cat or mogwai. Odiphus was first seen witnessing the escape of Kobra Khan. Later on, Odiphus sought to join Skeletor's group. A chemical accident in Tri-Klops' lab mutated Odiphus into Stinkor and gave him his horrible stench. Stinkor is not immune to his own stench and must wear an oxygen mask to breathe properly. Stinkor eventually incorporated into his breathing apparatus a way to control his stench into focused blasts, and teamed up with Skeletor against He-Man and the other Masters of the Universe. As it turns out, as difficult as Stinkor is to be around, Skeletor eventually holds him in relatively high favor as a minion who has proved himself agreeably useful. Conception: In the show's second season, the episode "Out of the Past" revealed further background to Stinkor's character. In a flashback sequence, Odiphus was shown as a young boy, and it was revealed that he was from a race of creatures called the Pelezeans, who populate a small village called Pelezea. Odiphus had desired to be a criminal ever since his childhood, and as a child he betrayed his people by telling the invading warlord Prahvus where the Pelezeans kept their weapons. Odiphus was punished for this after a disguised Sorceress of Castle Grayskull repelled Prahvus' forces. In the present, Stinkor suggested that Skeletor send his skeleton warriors to Pelezea. Reception: Stinkor has had a mixed reception from critics and fans. Stinkor was voted No. 30 in "The 36 Worst Action Figures From Iconic Toy Lines" by Cracked. Stinkor was voted No. 7 in "The 12 Coolest Masters of the Universe Action Features" by Topless Robot. CBR voted Stinkor the 6th-worst He-Man toy.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Unmanned aircraft in Singapore** Unmanned aircraft in Singapore: According to the Civil Aviation Authority of Singapore (CAAS), an unmanned aircraft (UA), commonly known as a drone, is operated without a pilot on board. An unmanned aircraft system (UAS) comprises the UA and associated elements such as the remote control equipment. Due to Singapore's busy airspace and densely populated urban environment, the UA laws in Singapore are restrictive. UAs must be operated safely and responsibly to avoid risks to aviation and public safety. The CAAS requires operators to understand and abide by the regulations, including for recreational or research uses of UAs. More information on the regulations can be found in the Air Navigation Order, paragraph 80. Regulations of Unmanned Aircraft: Unmanned Aircraft (Public Safety and Security) Act The Unmanned Aircraft (Public Safety and Security) Act provides clear guidelines for the safe use of unmanned aircraft. Laws were passed in Parliament on 11 May 2015 to allay concerns over safety, security and privacy surrounding unmanned aerial vehicles (UAVs), taking effect on 1 June the same year. The Unmanned Aircraft (Public Safety and Security) Act outlines regulations for the safe flying of drones and enforcement action against errant users. For instance, permits are required to fly drones above 7 kg, or within a 5 km radius of an aerodrome. Before conducting any outdoor activities, operators should ensure that the UA is flown within the permitted areas. The CAAS website provides a map delineating prohibited areas, danger and restricted areas, areas within 5 km of an airport or an airbase, and protected areas. Regulations of Unmanned Aircraft: Public Order Act In 2017, the Singapore National Day Parade was gazetted as a "special event" under the Public Order Act. The order, which was in effect for 24 hours on 9 August 2017, prohibited the unauthorised flying of unmanned aerial vehicles (UAVs) such as drones in the area without a permit. The boundaries of the special event area included Marina Boulevard, Victoria Street, Middle Road, Beach Road and the Marina Barrage carpark. It is also an offence to fly a UAV outside of the special event area in a manner that "disrupts, interferes with, delays or obstructs" the National Day Parade. Offenders may be arrested and, upon conviction, be liable to an imprisonment term of up to 12 months, a fine of up to $20,000, or both. The UAV will be seized. The 32nd ASEAN summit, held at the Istana on 27 April 2018 and the Shangri-La Hotel on 28 April, was declared an enhanced-security special event under the Public Order Act by the Ministry of Home Affairs. It is an offence to bring or fly drones in the area, or outside of the area in a manner that disrupts, interferes with, delays or obstructs the conduct of the event. Regulations of Unmanned Aircraft: Personal Data Protection Act The Data Protection Provisions do not impose any obligation on an individual acting in his personal or domestic capacity. Organisations will need to consider whether the drones they deploy are likely to capture personal data of individuals, and may wish to evaluate whether any exception under the Personal Data Protection Act (PDPA) applies in respect of their particular circumstances. Organisations using UAs for photography, video or audio recording activities that capture personal data should refer to the Personal Data Protection Commission's (PDPC) advisory guidelines.
Among other obligations, the Data Protection Provisions require organisations to inform individuals of the purposes for which their personal data will be collected, used and disclosed in order to obtain their consent. An organisation must therefore provide notification of the purposes for the collection, use or disclosure of personal data captured by its drones, in order to fulfil the obligation to obtain consent. The notifications should specify if photography, video and/or audio recording is occurring and should generally be placed so as to enable individuals to have sufficient awareness that drones are in operation in the general locale. For example, it may be appropriate to place a notice at points of entry to the area of operation, where individuals are able to read the notice prior to entry. Regulations of Unmanned Aircraft: Telecommunications Act Users with UAs that contain short-range devices conforming to the approved operating radio frequencies and corresponding power limits will be exempted from the Info-communications Media Development Authority's (IMDA) licensing requirements. For UAs with radio frequencies or power limits that are not in the IMDA's guidelines, equipment dealers have to apply for the relevant licence and register their equipment with the IMDA. Under the Telecommunications (Dealers) Regulations, an equipment dealer may only sell IMDA-registered telecommunication equipment for use in Singapore. Types of UA Usage: In general, permits are not required for recreational or research uses of UA in Singapore, as long as the operation of the UA is in line with CAAS' operating conditions. Types of UA Usage: Recreational or research uses of a UA require a permit where: The total mass of the UA including payload exceeds 7 kg (Operator Permit and Class 1 Activity Permit required) The UA is flown higher than 200 feet above mean sea level (Class 2 Activity Permit required) The UA is flown within restricted, danger, protected or prohibited areas, or within 5 km of an aerodrome / airbase (Class 2 Activity Permit required) Recreational Uses CAAS defines recreational activities as "any pursuit or activity engaged in for enjoyment, relaxation or leisure". Types of UA Usage: Activities that are not considered recreational uses include: A sporting activity that forms part of an organised group activity or organised competition or tournament (such as a flying display); A recreational activity provided by a business, or in the course of business. Types of UA Usage: Research Uses According to CAAS, any activity falling within the following categories is considered research in nature: Any lecture, tutorial, seminar, demonstration, class or similar activity on unmanned aircraft provided by an educational institution, referred to in section 72 of the Private Education Act; or Any research and development activity carried on by an educational institution, referred to in section 72 of the Private Education Act, with the object of acquiring knowledge that may be of use for the purpose of devising or developing a new or substantially improved product that is an unmanned aircraft. Types of UA Usage: Other Uses Regardless of UA weight or location of UA operations, an Operator Permit and Class 1 Activity Permit are required for operations that are non-recreational or non-research in nature.
Examples of applicable uses include: a business providing aerial surveying or photography services; a company's public communications department using a UA to take event photographs for its own internal newsletter; competitive UA races by a private organiser; and training courses with a practical element, that is, where a UA is deployed as part of the course syllabus. UA Permits: Operator Permit CAAS grants an operator permit to an applicant who is able to ensure safe operation of the UA, taking into account the applicant's organisational set-up, the competency of the personnel, especially those flying the UA, procedures to manage safety including the conduct of safety risk assessments, and the airworthiness of each of the aircraft. The permit is valid for up to one year. UA Permits: Activity Permit An activity permit is granted by CAAS to an applicant for a single activity, or a block of repeated activities, to be carried out by a UA at a specific area of operation and under specific operational profiles and conditions. A Class 1 Activity Permit is required for purposes that are not recreational or research in nature, or if the UA used is over 7 kg in total mass (including payload). A Class 1 Activity Permit is not valid without a UA Operator Permit. UA Permits: A Class 2 Activity Permit is required for UA activities for recreational or research purposes which meet any of these conditions: operating altitude higher than 200 ft (approx. 60 m) above mean sea level (AMSL); within 5 km of a civil/military aerodrome; or within any Restricted Area, Danger Area or Protected Area. (These permit rules are summarised schematically at the end of this article.) Other Permits Besides permits from the CAAS, other permits may be required from various agencies depending on the nature of the usage. This includes: the Singapore Police Force (SPF) for aerial photography and/or overflight of security-sensitive locations; and the Info-communications Media Development Authority of Singapore (IMDA) for use of radio frequencies and power limits that do not follow those in IMDA's guidelines for short-range devices. Issues Surrounding UAs: Accidents involving UAs Before the Act was introduced in 2015, then Transport Minister Lui Tuck Yew said in Parliament that there had been more than 20 reported incidents involving drones between April 2014 and May 2015, two of which involved drones falling onto MRT tracks. In 2016, the Singapore government received a report of a remote-controlled aeroplane that damaged the roof of a housing block in Bishan. However, the operator had yet to be located, according to Transport Minister Khaw Boon Wan in a written parliamentary response. Issues Surrounding UAs: Breaches of regulations In 2015, a 27-year-old man who flew a drone at the War Memorial Park on National Day was given a stern warning by the police. He was believed to have been trying to take photographs of the NDP fireworks. Between June 2015 and May 2017, the CAAS recorded 103 violations. Mr Tan Kah Han, senior director for safety regulation and director for airworthiness and flight operations at the CAAS, said that such incidents typically involve flying within 5 km of aerodromes, which is not allowed, and flying within restricted and security-sensitive areas without a permit. In 2017, there were 12 breaches involving UAs during the National Education and preview shows, according to a police statement. On 9 August 2017, a 53-year-old man was arrested on National Day for flying a drone at Marina Barrage, which was within the marked Special Events Area.
Unauthorised flying of unmanned aerial vehicles is not allowed for 24 hours on National Day. Issues Surrounding UAs: Opinions by industry players Experts highlight the risks of increasingly sophisticated drone technology being "accessible to the man on the street at brick-and-mortar shops or online", such as high-definition video capabilities making it easier for surveillance to be carried out covertly. In 2015, The Workers' Party's Gerald Giam, a Non-Constituency Member of Parliament, proposed fitting drones above a certain weight and size with geo-fencing capabilities, to prevent them from entering prohibited spaces. Government Parliamentary Committee (Home Affairs and Law) member Desmond Choo felt that it was important to conduct campaigns to educate both users and companies that bring in drones. Dr Foong Shaohui, a Singapore University of Technology and Design assistant professor with an interest in robotics and unmanned systems, highlighted how, without proper training, even a drone weighing just 1 kilogram can cause property damage and serious injuries. Mr Mohamed Faisal Mohamed Salleh, deputy director of Nanyang Technological University's Air Traffic Management Research Institute, said that it can be "almost impossible" to find the pilot after UA-related accidents. To counter this issue, he suggested the use of aircraft surveillance technology to trace the positions of all UAs, which could potentially involve the use of telco or wireless networks, or the creation of an "Electronic Road Pricing system in the sky". While Mr Yue Keng Mun, from Temasek Polytechnic's School of Engineering, proposed having indoor or fenced-up outdoor spaces for operators to practise their flying skills, Mr Khaw responded in his parliamentary reply that "promoting shared use of space in land-scarce Singapore was preferable to setting aside special flying parks." According to an article by The Straits Times in January 2018, more private condominiums are banning the use of drones within their estates, citing concerns over privacy and safety. Developments in UA regulations and initiatives: In 2017, Singapore joined the Unmanned Aircraft Systems (UAS) Advisory Group, a 15-member group set up by the United Nations' civil aviation arm to draw up global rules and regulations for the safe use of UAs. In 2018, one-north was designated as Singapore's first drone estate, to provide companies and research institutions with an urban environment for test-bedding innovative unmanned aircraft systems. Under the drone estate initiative, approved operators and research users can carry out their trials and operations at one-north without compromising safety and security. The Singapore Armed Forces have invested in new technologies such as drones that can home in on and catch errant drones. Counter-drone systems are among the features of smart airbases of the future.
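As flagged above, the permit conditions described in this article can be summarised schematically. The sketch below is a simplified illustration of those rules only; the function and parameter names are invented, finer conditions of the Act are omitted, and it is of course not legal advice:

```python
# A minimal sketch encoding the CAAS permit rules summarised above.
# Illustrative only: real applications involve further conditions.
def permits_needed(purpose: str, mass_kg: float, altitude_ft_amsl: float,
                   within_5km_of_aerodrome: bool, special_area: bool) -> set:
    """Return the permits required under the rules described in this article.

    purpose: "recreational", "research", or anything else (e.g. "commercial")
    special_area: restricted, danger, protected, or prohibited area
    """
    permits = set()
    if purpose not in ("recreational", "research"):
        # Non-recreational, non-research use always needs both permits.
        permits |= {"Operator Permit", "Class 1 Activity Permit"}
    else:
        if mass_kg > 7:
            permits |= {"Operator Permit", "Class 1 Activity Permit"}
        if altitude_ft_amsl > 200 or within_5km_of_aerodrome or special_area:
            permits.add("Class 2 Activity Permit")
    return permits

# A 2 kg recreational drone flown at 300 ft AMSL, away from aerodromes,
# needs only a Class 2 Activity Permit under these rules.
print(permits_needed("recreational", 2.0, 300.0, False, False))
```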
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Osteopenia** Osteopenia: Osteopenia, known as "low bone mass" or "low bone density", is a condition in which bone mineral density is low. Because their bones are weaker, people with osteopenia may have a higher risk of fractures, and some people may go on to develop osteoporosis. In 2010, 43 million older adults in the US had osteopenia. Unlike osteoporosis, osteopenia does not usually cause symptoms, and losing bone density in itself does not cause pain. Osteopenia: There is no single cause for osteopenia, although there are several risk factors, including modifiable (behavioral, including dietary and use of certain drugs) and non-modifiable (for instance, loss of bone mass with age). For people with risk factors, screening via a DXA scanner may help to detect the development and progression of low bone density. Prevention of low bone density may begin early in life and includes a healthy diet and weight-bearing exercise, as well as avoidance of tobacco and alcohol. The treatment of osteopenia is controversial: non-pharmaceutical treatment involves preserving existing bone mass via healthy behaviors (dietary modification, weight-bearing exercise, avoidance or cessation of smoking or heavy alcohol use). Pharmaceutical treatment for osteopenia, including bisphosphonates and other medications, may be considered in certain cases but is not without risks. Overall, treatment decisions should be guided by considering each patient's constellation of risk factors for fractures. Risk factors: Many divide risk factors for osteopenia into fixed (non-changeable) and modifiable factors. Osteopenia can also be secondary to other diseases. An incomplete list of risk factors: Fixed Age: bone density peaks at age 35 and then decreases; bone density loss occurs in both men and women. Ethnicity: European and Asian people have increased risk. Sex: women are at higher risk, particularly those with early menopause. Family history: low bone mass in the family increases risk. Modifiable / behavioral Tobacco use. Alcohol use. Inactivity – particularly lack of weight-bearing or resistance activities. Insufficient caloric intake – osteopenia can be connected to female athlete triad syndrome, which occurs in female athletes as a combination of energy deficiency, menstrual irregularities, and low bone mineral density. Risk factors: Low-nutrient diet (particularly low in calcium and vitamin D). Other diseases Celiac disease, via poor absorption of calcium and vitamin D. Hyperthyroidism. Anorexia nervosa. Medications Steroids. Anticonvulsants. Screening and diagnosis: The ISCD (International Society for Clinical Densitometry) and the National Osteoporosis Foundation recommend that older adults (women over 65 and men over 70) and adults with risk factors for low bone mass, or previous fragility fractures, undergo DXA testing. The DXA (dual-energy X-ray absorptiometry) scan uses a form of X-ray technology and offers accurate bone mineral density results with low radiation exposure. Screening and diagnosis: The United States Preventive Services Task Force recommends osteoporosis screening for women over 65 and for younger women with increased risk, and states that there is insufficient evidence to support screening men. The main purpose of screening is to prevent fractures. Of note, USPSTF screening guidelines are for osteoporosis, not specifically osteopenia.
The National Osteoporosis Foundation recommends the use of central (hip and spine) DXA testing for accurate measurement of bone density, emphasizing that peripheral or "screening" scanners should not be used to make clinically meaningful diagnoses and that peripheral and central DXA scans cannot be compared to each other. DXA scanners can be used to diagnose osteopenia or osteoporosis as well as to measure bone density over time as people age or undergo medical treatment or lifestyle changes. Information from the DXA scanner creates a bone mineral density T-score by comparing a patient's density to the bone density of a healthy young person. Bone density between 1 and 2.5 standard deviations below the reference, or a T-score between −1.0 and −2.5, indicates osteopenia (a T-score smaller than or equal to −2.5 indicates osteoporosis). Calculation of the T-score itself may not be standardized: the ISCD recommends using Caucasian women between 20 and 29 years old as the baseline for bone density for all patients, but not all facilities follow this recommendation. The ISCD recommends that Z-scores, not T-scores, be used to classify bone density in premenopausal women and men under 50. Prevention: Prevention of low bone density can start early in life by maximizing peak bone density. Once a person loses bone density, the loss is usually irreversible, so preventing (greater than normal) bone loss is important. Actions to maximize bone density and stabilize loss include: exercise, particularly weight-bearing exercise, resistance exercises and balance exercises, through mechanical loading that promotes increased bone mass and reduced fall risk; adequate caloric intake; sufficient calcium in the diet (older adults may have increased calcium needs; of note, medical conditions such as celiac disease and hyperthyroidism can affect absorption of calcium); sufficient vitamin D in the diet; estrogen replacement; avoidance of steroid medications; and limiting alcohol use and smoking. Treatment: The pharmaceutical treatment of osteopenia is controversial and more nuanced than the well-supported recommendations for improved nutrition and weight-bearing exercise. The diagnosis of osteopenia in and of itself does not always warrant pharmaceutical treatment; many people with osteopenia may be advised to follow risk prevention measures (as above). Treatment: Risk of fracture guides clinical treatment decisions: the World Health Organization (WHO) Fracture Risk Assessment Tool (FRAX) estimates the probability of hip fracture and the probability of a major osteoporotic fracture (MOF), which could occur in a bone other than the hip. In addition to bone density (T-score), calculation of the FRAX score involves age, body characteristics, health behaviors, and other medical history. As of 2014, the National Osteoporosis Foundation (NOF) recommends pharmaceutical treatment for osteopenic postmenopausal women and men over 50 with a FRAX hip fracture probability of >3% or a FRAX MOF probability of >20%. As of 2016, the American Association of Clinical Endocrinologists and the American College of Endocrinology agree. In 2017, the American College of Physicians recommended that clinicians use individual judgment and knowledge of patients' particular risk factors for fractures, as well as patient preferences, to decide whether to pursue pharmaceutical treatment for women with osteopenia over 65. Pharmaceutical treatment for low bone density includes a range of medications.
Commonly used drugs include bisphosphonates (alendronate, risedronate, and ibandronate); some studies show decreased fracture risk and increased bone density after bisphosphonate treatment for osteopenia. Other medications include selective estrogen receptor modulators (SERMs) (e.g., raloxifene), estrogens (e.g., estradiol), calcitonin, and parathyroid hormone-related protein analogues (e.g., abaloparatide, teriparatide). These drugs are not without risks. In this complex landscape, many argue that clinicians must consider a patient's individual risk of fracture, not simply treat everyone with osteopenia as equally at risk. A 2005 editorial in the Annals of Internal Medicine states: "The objective of using osteoporosis drugs is to prevent fractures. This can be accomplished only by treating patients who are likely to have a fracture, not by simply treating T-scores." History: Osteopenia, from Greek ὀστέον (ostéon), "bone", and πενία (penía), "poverty", is a condition of sub-normally mineralized bone, usually the result of a rate of bone lysis that exceeds the rate of bone matrix synthesis. See also osteoporosis. History: In June 1992, the World Health Organization defined osteopenia. An osteoporosis epidemiologist at the Mayo Clinic who participated in setting the criterion in 1992 said, "It was just meant to indicate the emergence of a problem", and noted that "It didn't have any particular diagnostic or therapeutic significance. It was just meant to show a huge group who looked like they might be at risk."
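The T-score arithmetic and the cut-offs described above can be shown in a short sketch. The reference mean and standard deviation below are illustrative placeholders; real DXA software uses standardized young-adult reference databases:

```python
# A small sketch of the T-score calculation and the classification
# thresholds described above. Reference values here are assumptions.
def t_score(patient_bmd: float, young_mean_bmd: float, young_sd: float) -> float:
    """T-score: patient BMD in standard deviations from the young-adult mean."""
    return (patient_bmd - young_mean_bmd) / young_sd

def classify(t: float) -> str:
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal"

# Example with illustrative reference values (g/cm^2)
t = t_score(patient_bmd=0.78, young_mean_bmd=0.94, young_sd=0.10)
print(f"T-score = {t:.1f} -> {classify(t)}")  # T-score = -1.6 -> osteopenia
```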
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Creation Engine** Creation Engine: Creation Engine is a 3D video game engine created by Bethesda Game Studios, based on the Gamebryo engine. The Creation Engine has been used to create role-playing video games such as The Elder Scrolls V: Skyrim, Fallout 4, and Fallout 76. Development: After using the Gamebryo engine to create The Elder Scrolls III: Morrowind, The Elder Scrolls IV: Oblivion, and Fallout 3, Bethesda decided that Gamebryo's capabilities were becoming too outdated and began work on the Creation Engine for their next game, The Elder Scrolls V: Skyrim, by forking the codebase used for Fallout 3. Development: Following the completion of Skyrim, Bethesda set out to enhance the graphical core of the Creation Engine, first adding a physically based deferred renderer to allow for more dynamic lighting and to paint object surfaces with realistic materials. Bethesda worked with technology company Nvidia to implement volumetric lighting through a technique that makes use of hardware tessellation. Additionally, the updated version of the Creation Engine powering Bethesda's Fallout 4 offers more advanced character generation. Bethesda Game Studios Austin (at the time BattleCry Studios) was tasked with modifying the Creation Engine to support multiplayer content in preparation for the development of Fallout 76 shortly before the release of Fallout 4, while Bethesda Game Studios began development of Starfield and downloadable content for Fallout 4. In conjunction with id Software (a fellow ZeniMax subsidiary), BattleCry attempted to integrate id's Quake netcode into Fallout 4's engine. This was considered a challenge by experts in the online game industry. A primary issue facing the developers was that components of the core engine (dating back to the Gamebryo used in The Elder Scrolls III: Morrowind), such as quests or world loading, were designed centering on a single player (dubbed "Atlas" by the developers for its role in holding up the fabric of the loaded game world), a paradigm that would need to change fundamentally to allow multiple players spanning multiple worlds. In addition to the network changes to the engine used in Fallout 4, the Fallout 76 implementation of the engine was described at the game's E3 reveal as having "all new rendering, lighting, and landscape technology". Bethesda Game Studios claims the improvements also allow for a 16× increase in detail and the ability to view unique weather systems occurring at a distance. Development: Creation Engine 2 Bethesda revealed in June 2021 that they were working on a new iteration of the engine called Creation Engine 2, and that it would power their upcoming games Starfield and The Elder Scrolls VI. Creation Engine 2 features real-time global illumination and advanced volumetric lighting. Features: Havok Behavior is a flexible animation tool that allows the developers to blend animations together in a few clicks. This means that animations such as walking and running can be blended together seamlessly to make the animations look much more realistic. This important addition enabled Bethesda to improve character animations in their games. Features: An upgraded version of Radiant AI allows non-player characters (NPCs) to dynamically react and interact with the world around them. The player can observe an NPC eat breakfast, go to work, go to the pub, and then go to sleep. The improved AI allows NPCs to react to the player's actions, and they can become friendly or hostile to the player because of those actions.
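As a conceptual illustration of the schedule-driven, reactive NPC behavior described for Radiant AI, consider the toy sketch below. It is illustrative only; the function names, schedule data, and disposition values are invented and bear no relation to Bethesda's actual code or data formats:

```python
# Toy sketch of schedule-driven NPC behavior in the spirit of Radiant AI.
# Illustrative only; not Bethesda's implementation.
SCHEDULE = [           # (start hour, activity)
    (6, "eat breakfast"),
    (8, "work at the mill"),
    (18, "drink at the pub"),
    (22, "sleep"),
]

def activity_at(hour: int) -> str:
    """Pick the latest scheduled activity whose start time has passed."""
    current = SCHEDULE[-1][1]          # before 6:00 the NPC is still asleep
    for start, activity in SCHEDULE:
        if hour >= start:
            current = activity
    return current

def react(disposition: int, player_action: str) -> int:
    """Shift an NPC's disposition toward the player based on the player's actions."""
    return disposition + {"helped": 10, "stole": -20}.get(player_action, 0)

print(activity_at(9))       # -> "work at the mill"
print(react(50, "stole"))   # -> 30, drifting toward hostility
```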
Features: Radiant Story allows NPCs to dynamically create new quests for the player in unexplored places. In previous games, Bethesda licensed SpeedTree for trees and foliage, but when making Skyrim with the Creation Engine, the Bethesda team built their own foliage rendering system. The new system is capable of rendering larger amounts of foliage at one time and allows for more freedom with animations. Features: Creation Kit The Creation Kit is a modding tool for Creation Engine games that takes advantage of the engine's modular nature. It was created by Bethesda Game Studios for the modding community of The Elder Scrolls series. The tool can be used to create worlds, races, NPCs, and weapons, to update textures, and to fix bugs. Mods created using this tool are hosted on the Steam Workshop, Nexus Mods, Bethesda.net, and various other sites. Features: A Fallout 4–compatible Creation Kit was released in April 2016. The Creation Kit is a new version of Bethesda's editor developed for Gamebryo, known as The Elder Scrolls Construction Set for The Elder Scrolls III: Morrowind and The Elder Scrolls IV: Oblivion, and as the Garden of Eden Creation Kit for Fallout 3 (referencing an in-game item of the same name). Games using Creation Engine:
Creation Engine
The Elder Scrolls V: Skyrim (2011)
The Elder Scrolls V: Skyrim – Special Edition (2016)
The Elder Scrolls V: Skyrim VR (2017)
The Elder Scrolls V: Skyrim – Anniversary Edition (2021)
Fallout 4 (2015)
Fallout 4 VR (2017)
Fallout 76 (2018)
Creation Engine 2
Starfield (2023)
The Elder Scrolls VI (TBA)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**AdGuard** AdGuard: Developed by AdGuard Software Limited, AdGuard offers open source, free, and shareware products. AdGuard's DNS app supports Microsoft Windows, Linux, macOS, Android, and iOS. AdGuard is also available as a browser extension. AdGuard Software Limited was founded in 2009 in Moscow. In 2014, the company's products became available in Cyprus, to which its headquarters were subsequently moved. Features: AdGuard features include: AdGuard Home AdGuard Home acts as a recursive DNS resolver, which prevents most advertisements from displaying by responding with an invalid address for domains that appear in its filter lists. It is similar to Pi-hole but offers some unique features, including out-of-the-box support for DNS-over-HTTPS. Features: AdGuard Browser extensions The browser extension blocks video ads, interstitial ads, floating ads, pop-ups, banners, and text ads. It is also able to handle anti-adblock scripts. The product blocks spyware and warns users of malicious websites. AdGuard Content Blocker is an additional extension for Yandex Browser and Samsung Internet that uses the Content Blocker API. It downloads filter list updates and asks the browser to enforce them via the Content Blocker API. Features: AdGuard applications AdGuard has Windows and Mac versions, as well as native mobile versions for Android and iOS. The application sets up a local VPN, which filters all traffic on the mobile device. AdGuard DNS AdGuard operates recursive name servers for public use. AdGuard DNS supports encryption technologies including DNSCrypt, DNS-over-HTTPS, DNS-over-TLS, and DNS-over-QUIC. AdGuard began testing its DNS service in 2016 and officially launched it in 2018. Reception: While the company's products have earned positive feedback in industry publications, a series of policies implemented by Google and the Apple App Store between 2014 and 2018 impeded user access to AdGuard's mobile applications. Macworld mentioned AdGuard for iOS in a list of five "best adblockers for iOS". In April 2020, Android Central stated that AdGuard uses "a little more processing power to do its thing than uBlock Origin", but that it is "the best all-in-one blocking tool for someone who doesn't want to use more than one extension" because it blocks crypto mining. However, Android Central recommended uBlock Origin with a dedicated crypto-mining blocker over AdGuard. Incidents: Distribution of AdGuard's app for Android was removed from Google Play at the end of 2014. It is nevertheless still being updated and has been made available for download from the developers' own website. AdGuard for iOS received no updates from the summer of 2018 until summer 2019 due to Apple policies at the time against ad blocking via the iOS VPN APIs. Incidents: In September 2018, AdGuard was hit by a credential stuffing attack. AdGuard claims that its servers were not compromised and that the attackers instead used credential pairs reused by victims on other sites and stolen from those sites. According to a company spokesperson, they "do not know what accounts exactly were accessed by the attackers", so the company reset passwords for all accounts "as a precautionary measure". AdGuard also pledged to use the "Have I Been Pwned?" API to check all new passwords for appearance in known public data leaks.
Furthermore, it implemented stricter password security requirements. In November 2020, the Microsoft Edge and Chrome web stores were infiltrated with fraudulent add-ons posing as various legitimate VPN browser add-ons, including NordVPN's and AdGuard's VPN add-ons. Microsoft and Google were subsequently alerted, and actions were taken to remove the imposter add-ons from the various browser stores. Research: AdGuard developers have taken up research in order to inform wider audiences about user privacy, cybersecurity, and data protection. The following issues are notable cases involving the developers:
Top-ranked websites involved in cryptojacking
Facebook Ad Network widespread distribution
Alerting on or reporting of fake ad blockers
Popular Android and iOS app privacy issues
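The sinkholing behaviour described above for AdGuard Home (answering DNS queries for filtered domains with an invalid address while resolving everything else normally) can be sketched in a few lines. This is only an illustration of the concept, not AdGuard's code; the filter list and the 0.0.0.0 sinkhole address are assumptions for the example.

```python
# Illustrative sketch of DNS sinkholing as done by filtering resolvers;
# not AdGuard Home's actual implementation.
import socket

FILTER_LIST = {"ads.example.com", "tracker.example.net"}  # hypothetical filter list
SINKHOLE = "0.0.0.0"  # an invalid address returned for blocked domains

def resolve(domain: str) -> str:
    """Return a sinkhole address for filtered domains, else resolve normally."""
    if domain.lower().rstrip(".") in FILTER_LIST:
        return SINKHOLE
    return socket.gethostbyname(domain)  # ordinary recursive lookup via the OS

if __name__ == "__main__":
    print(resolve("ads.example.com"))  # blocked -> 0.0.0.0
    print(resolve("example.org"))      # allowed -> real address
```

A real resolver such as AdGuard Home additionally speaks the DNS wire protocol and encrypted transports like DNS-over-HTTPS, but the filtering decision reduces to this lookup.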
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sewage sludge** Sewage sludge: Sewage sludge is the residual, semi-solid material that is produced as a by-product during sewage treatment of industrial or municipal wastewater. The term "septage" also refers to sludge from simple wastewater treatment, but is connected to simple on-site sanitation systems such as septic tanks. Sewage sludge: When fresh sewage or wastewater enters a primary settling tank, approximately 50% of the suspended solid matter will settle out in an hour and a half. This collection of solids is known as raw sludge or primary solids and is said to be "fresh" before anaerobic processes become active. The sludge will become putrescent in a short time once anaerobic bacteria take over, and must be removed from the sedimentation tank before this happens. Sewage sludge: This is accomplished in one of two ways. Most commonly, the fresh sludge is continuously extracted from the bottom of a hopper-shaped tank by mechanical scrapers and passed to separate sludge-digestion tanks. In some treatment plants an Imhoff tank is used: sludge settles through a slot into the lower story or digestion chamber, where it is decomposed by anaerobic bacteria, resulting in liquefaction and reduced volume of the sludge. Sewage sludge: The secondary treatment process also generates a sludge, largely composed of bacteria and protozoa with entrained fine solids, which is removed by settlement in secondary settlement tanks. Both sludge streams are typically combined and processed by anaerobic or aerobic treatment at either elevated or ambient temperatures. After digesting for an extended period, the result is called "digested" sludge and may be disposed of by drying and then landfilling. Sewage sludge: "Biosolids" is a term often used in conjunction with reuse of sewage solids after sewage sludge treatment. Biosolids can be defined as organic wastewater solids that can be reused after stabilization processes such as anaerobic digestion and composting. Opponents of sewage sludge reuse reject this term as a public relations term. Quantities produced: The amount of sewage sludge produced is proportional to the amount and concentration of wastewater treated, and it also depends on the type of wastewater treatment process used. It can be expressed as kg of dry solids per cubic metre of wastewater treated; in the figures below it is expressed per megalitre (ML), where one megalitre is 10³ m³. The total sludge production from a wastewater treatment process is the sum of sludge from primary settling tanks (if they are part of the process configuration) plus excess sludge from the biological treatment step. For example, primary sedimentation produces about 110–170 kg/ML of so-called primary sludge, with a value of 150 kg/ML regarded as typical for municipal wastewater in the U.S. or Europe. Of the biological treatment processes, the activated sludge process produces about 70–100 kg/ML of waste activated sludge, and a trickling filter process produces slightly less sludge from the biological part of the process: 60–100 kg/ML. This means that the total sludge production of an activated sludge process that uses primary sedimentation tanks is in the range of 180–270 kg/ML, being the sum of primary sludge and waste activated sludge. Quantities produced: United States municipal wastewater treatment plants produced about 7.7 million dry tons of sewage sludge in 1997 and about 6.8 million dry tons in 1998, according to EPA estimates.
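As a worked example of the figures above (the 50 ML/day plant flow is an illustrative assumption; the per-volume yields are the typical values quoted in the text):

```python
# Total sludge production for an activated sludge plant with primary sedimentation,
# using the per-volume yields quoted above. The daily flow is a made-up example.
PRIMARY_KG_PER_ML = 150        # typical primary sludge, kg dry solids per ML
WAS_KG_PER_ML = (70, 100)      # waste activated sludge range, kg dry solids per ML

flow_ml_per_day = 50.0         # hypothetical municipal plant flow, ML/day

low = flow_ml_per_day * (PRIMARY_KG_PER_ML + WAS_KG_PER_ML[0])
high = flow_ml_per_day * (PRIMARY_KG_PER_ML + WAS_KG_PER_ML[1])
print(f"{low:.0f}-{high:.0f} kg dry solids/day")
# -> 11000-12500 kg/day, i.e. 220-250 kg/ML, inside the quoted 180-270 kg/ML range
```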
As of 2004, about 60% of all sewage sludge was applied to land as a soil amendment and fertilizer for growing crops. A review article published in 2012 reported that a total of 10.1 million tonnes of dry solids per year was produced in the EU-27 countries. Production of sewage sludge can be reduced by conversion from flush toilets to dry toilets such as urine-diverting dry toilets and composting toilets. Contaminants: Pathogens Bacteria in Class A sludge products can regrow under certain environmental conditions, and pathogens could easily remain undetected in untreated sewage sludge. Pathogens are not a significant health issue if sewage sludge is properly treated and site-specific management practices are followed. Contaminants: Micro-pollutants Micro-pollutants are compounds normally found at concentrations of up to micrograms per liter in the aquatic environment and milligrams per kilogram in the terrestrial environment, and they are considered potential threats to environmental ecosystems. They can become concentrated in sewage sludge. Each of the disposal options comes with myriad potential—and in some cases proven—human health and environmental impacts. Contaminants: Several organic micro-pollutants, such as endocrine disrupting compounds, pharmaceuticals, and per-fluorinated compounds, have been detected in sewage sludge samples around the world at concentrations ranging up to several hundred mg/kg of dried sludge. Sterols and other hormones have also been detected. Contaminants: Heavy metals One of the main concerns with treated sludge is the concentrated metal content (lead, arsenic, cadmium, thallium, etc.); certain metals are regulated while others are not. Leaching methods can be used to reduce the metal content and meet the regulatory limit. In 2009 the EPA released the Targeted National Sewage Sludge Study, which reports on the levels of metals, chemicals, hormones, and other materials present in a statistical sample of sewage sludges. Some highlights: lead, arsenic, chromium, and cadmium are estimated by the EPA to be present in detectable quantities in 100% of national sewage sludges in the US, while thallium is estimated to be present in only 94.1% of sludges. Contaminants: Silver is present at about 20 mg/kg of sludge on average, while some sludges contain up to 200 milligrams of silver per kilogram of sludge; one outlier contained 800–900 mg of silver per kg of sludge. Barium is present at a rate of 500 mg/kg, while manganese is present at a rate of 1 g/kg of sludge. Contaminants: Other hazardous substances Sewage treatment plants receive various forms of hazardous waste from hospitals, nursing homes, industry, and households. Low levels of constituents such as PCBs, dioxins, and brominated flame retardants may remain in treated sludge. There are potentially thousands of other components of sludge, disposed of by modern society, that remain untested or undetected (pharmaceuticals, nanoparticles, etc.), some of which have been shown to be hazardous to both human and ecological health. In 2013 in South Carolina, PCBs were discovered at very high levels in wastewater sludge. The problem was not discovered until thousands of acres of farmland in South Carolina were found to be contaminated by this hazardous material.
SCDHEC issued an emergency regulatory order banning all PCB-laden sewage sludge from being land-applied on farm fields or deposited into landfills in South Carolina. Also in 2013, at DHEC's request, the city of Charlotte decided to stop land-applying sewage sludge in South Carolina while authorities investigated the source of the PCB contamination. In February 2014, the city of Charlotte admitted PCBs had entered its sewage treatment centers as well. Contaminants of concern in sewage sludge are plasticizers, PBDEs, PFASs ("forever chemicals"), and others generated by human activities, including personal care products and medicines. Synthetic fibers from fabrics persist in treated sewage sludge as well as in biosolids-treated soils and may thus serve as an indicator of past biosolids application. Contaminants: Pollutant ceiling concentration The term "pollutant" is defined as part of the EPA 503 rule. The components of sludge have pollutant limits defined by the EPA. "A Pollutant is an organic substance, an inorganic substance, a combination of organic and inorganic substances, or a pathogenic organism that, after discharge and upon exposure, ingestion, inhalation, or assimilation into an organism either directly from the environment or indirectly by ingestion through the food chain, could, on the basis of information available to the Administrator of EPA, cause death, disease, behavioral abnormalities, cancer, genetic mutations, physiological malfunctions (including malfunction in reproduction), or physical deformations in either organisms or offspring of the organisms." The maximum component pollutant limits set by the US EPA are: Treatment: Sewage sludge treatment covers the processes used to manage and dispose of the sludge produced during wastewater treatment. Sewage sludge is produced from the treatment of wastewater in sewage treatment plants and consists of two basic forms — raw primary sludge and secondary sludge, the latter also known as activated sludge in the case of the activated sludge process. Treatment: Sewage sludge is usually treated by one or several of the following treatment steps: lime stabilization, thickening, dewatering, drying, anaerobic digestion, or composting. Some treatment processes that involve significant amendments, such as composting and alkaline stabilization, may affect contaminant strength and concentration: depending on the process and the contaminant in question, treatment may decrease or in some cases increase the bioavailability and/or solubility of contaminants. Among sludge stabilization processes, anaerobic and aerobic digestion appear to be the most commonly used methods in the EU-27. Following treatment, sewage sludge is either landfilled, dumped in the ocean, incinerated, applied on agricultural land or, in some cases, retailed or given away for free to the general public. According to a review article published in 2012, sludge reuse (including direct agricultural application and composting) was the predominant choice for sludge management in the EU-15 (53% of produced sludge), followed by incineration (21% of produced sludge). On the other hand, the most common disposal method in the EU-12 countries was landfilling. Treatment: Classes of sewage sludge after treatment (United States) In the United States, the following classes of sewage sludge after treatment are defined: Class A sludge is typically dried and pasteurized, and is also known as "exceptional" quality. Class B includes all sludge not classified as Class A.
Class B sludge is typically "undigested" and is volatile. Both classes of sludge may still contain radioactive or pharmaceutical wastes. Disposal: After treatment, and depending upon the quality of sludge produced (for example, with regard to heavy metal content), sewage sludge is most commonly either disposed of in landfills, dumped in the ocean, or applied to land for its fertilizing properties, as pioneered by the product Milorganite. Landfill Sewage sludge deposition in landfills can circulate human-virulent species of Cryptosporidium and Giardia pathogens. Sonication and quicklime stabilization are most effective in inactivating these pathogens; microwave-energy disintegration and topsoil stabilization were less effective. Ocean dumping It used to be common practice to dump sewage sludge into the ocean; however, this practice has stopped in many nations due to environmental concerns as well as domestic and international laws and treaties. Ronald Reagan signed the law that prohibited ocean dumping as a means of disposal of sewage sludge in the US in 1988. Land application Biosolids is a term widely used to denote the byproduct of domestic and commercial sewage and wastewater treatment that is to be used in agriculture. National regulations that dictate the practice of land application of treated sewage sludge differ widely; in the US, for example, there are widespread disputes about the practice. Disposal: Depending on their level of treatment and resultant pollutant content, biosolids can be used in regulated applications for non-food agriculture, food agriculture, or distribution for unlimited use. Treated biosolids can be produced in cake, granular, pellet, or liquid form and are spread over land before being incorporated into the soil, or injected directly into the soil by specialist contractors. Such use was pioneered by the production of Milorganite in 1926. Use of sewage sludge has been shown to increase the level of soil-available phosphorus and soil salinity. A 20-year field study of air, land, and water in Arizona concluded that the use of biosolids is sustainable and improves the soil and crops. Other studies report that plants take up large quantities of heavy metals and toxic pollutants that are retained by produce, which is then consumed by humans. A PhD thesis studying the addition of sludge to neutralize soil acidity concluded that the practice was not recommended if large amounts are used, because the sludge produces acids as it oxidizes. Studies have indicated that pharmaceuticals and personal care products, which often adsorb to sludge during wastewater treatment, can persist in agricultural soils following biosolid application. Some of these chemicals, including the potential endocrine disruptor triclosan, can also travel through the soil column and leach into agricultural tile drainage at detectable levels. Other studies, however, have shown that these chemicals remain adsorbed to surface soil particles, making them more susceptible to surface erosion than to infiltration. These studies are also mixed in their findings regarding the persistence of chemicals such as triclosan, triclocarban, and other pharmaceuticals. The impact of this persistence in soils is unknown, but the link to human and land-animal health is likely tied to the capacity of plants to absorb and accumulate these chemicals in their consumed tissues. Studies of this kind are in early stages, but evidence of root uptake and translocation to leaves did occur for both triclosan and triclocarban in soybeans.
This effect was not present in corn when tested in a different study. A cautionary approach to land application of biosolids has been advocated by some for regions where soils have lower capacities for absorbing toxics, or due to the presence of unknowns in sewage biosolids. In 2007 the Northeast Regional Multi-State Research Committee (NEC 1001) issued conservative guidelines tailored to the soils and conditions typical of the northeastern US. Use of sewage sludge is prohibited for produce to be labeled USDA-certified organic, and in 2014 the United States grocery chain Whole Foods banned produce grown in sewage sludge. Treated sewage sludge has been used agriculturally in the UK, Europe, and China for more than 80 years, though there is increasing pressure in some countries to stop the practice of land application due to farmland contamination and negative public opinion. In the 1990s there was pressure in some European countries to ban the use of sewage sludge as a fertilizer; Switzerland, Sweden, Austria, and others introduced a ban. Since the 1960s there has been cooperative activity with industry to reduce the inputs of persistent substances from factories. This has been very successful: for example, the content of cadmium in sewage sludge in major European cities is now only 1% of what it was in 1970. Disposal: Incineration Sludge can also be incinerated in sludge incineration plants, which comes with its own set of environmental concerns (air pollution, disposal of the ash). Pyrolysis of the sludge to create syngas and potentially biochar is possible, as is combustion of biofuel produced from drying sewage sludge, or incineration in a waste-to-energy facility for direct production of electricity and steam for district heating or industrial uses. Disposal: Thermal processes can greatly reduce the volume of the sludge, as well as remediate all or some of the biological concerns. Direct waste-to-energy incineration and complete combustion systems (such as the Gate 5 Energy System) require multi-step cleaning of the exhaust gas to ensure no hazardous substances are released. In addition, the ash produced by incineration or incomplete combustion processes (such as fluidized-bed dryers) may be difficult to use without subsequent treatment due to high heavy metal content; solutions include leaching of the ashes to remove heavy metals, or, in the case of ash produced in a complete-combustion process or biochar produced from a pyrolytic process, the heavy metals may be fixed in place and the material readily usable as a LEED-preferred additive to concrete or asphalt. Disposal: Examples of other ways to use dried sewage sludge as an energy resource include the Gate 5 Energy System, a process to power a steam turbine using heat from burning milled and dried sewage sludge, or combining dried sewage sludge with coal in coal-fired power stations. In both cases this allows for production of electricity with lower carbon-dioxide emissions than conventional coal-fired power stations. Health risks: In 2011, the EPA commissioned a study at the United States National Research Council (NRC) to determine the health risks of sludge. In this document the NRC pointed out that many of the dangers of sludge are unknown and unassessed. Health risks: The NRC published "Biosolids Applied to Land: Advancing Standards and Practices" in July 2002.
The NRC concluded that while there is no documented scientific evidence that sewage sludge regulations have failed to protect public health, there is persistent uncertainty about possible adverse health effects. The NRC noted that further research is needed and made about 60 recommendations for addressing public health concerns, scientific uncertainties, and data gaps in the science underlying the sewage sludge standards. The EPA responded with a commitment to conduct research addressing the NRC recommendations. Residents living near Class B sludge processing sites may experience asthma or pulmonary distress due to bioaerosols released from sludge fields. A 2004 survey of 48 individuals near affected sites found that most reported irritation symptoms, about half reported an infection within a month of the application, and about a fourth were affected by Staphylococcus aureus, including two deaths. The number of reported S. aureus infections was 25 times as high as in hospitalized patients, a high-risk group. The authors point out that regulations call for protective gear when handling Class B biosolids, and that similar protections could be considered for residents of nearby areas, given the wind conditions. Health risks: In 2007, a health survey of persons living in close proximity to land treated with Class B sludge was conducted. A sample of 437 people exposed to Class B sludge (living within 1 mile (1.6 km) of sludged land), compared with a control group of 176 people not exposed to sludge (not living within 1 mile (1.6 km) of sludged land), reported the following: "Results revealed that some reported health-related symptoms were statistically significantly elevated among the exposed residents, including excessive secretion of tears, abdominal bloating, jaundice, skin ulcer, dehydration, weight loss, and general weakness. The frequency of reported occurrence of bronchitis, upper respiratory infection, and giardiasis were also statistically significantly elevated. The findings suggest an increased risk for certain respiratory, gastrointestinal, and other diseases among residents living near farm fields on which the use of biosolids was permitted." Although correlation does not imply causation, such extensive correlations may lead reasonable people to conclude that precaution is necessary in dealing with sludge and sludged farmlands. Health risks: Harrison and Oakes suggest that, in particular, "until investigations are carried out that answer these questions (...about the safety of Class B sludge...), land application of Class B sludges should be viewed as a practice that subjects neighbors and workers to substantial risk of disease." They further suggest that even Class A treated sludge may have chemical contaminants (including heavy metals, such as lead) or endotoxins present, and that a precautionary approach may be justified on this basis, though the vast majority of incidents reported by Lewis, et al. were correlated with exposure to Class B untreated sludge and not Class A treated sludge. Health risks: A 2005 report by the state of North Carolina concluded that "a surveillance program of humans living near application sites should be developed to determine if there are adverse health effects in humans and animals as a result of biosolids application." The chain from sewage sludge to biosolids to fertilizers has resulted in PFAS ("forever chemicals") contamination of farm produce in Maine in 2021 and of beef raised in Michigan in 2022.
The EPA PFAS Strategic Roadmap initiative, running from 2021 to 2024, will consider the full lifecycle of PFAS, including the health risks of PFAS in wastewater sludge. Regulation and guidelines: European Union European legislation on dangerous substances has eliminated the production and marketing of some substances of historic concern, such as persistent organic micro-pollutants. The European Commission has said repeatedly that the "Directive on the protection of the environment, and in particular of the soil, when sewage sludge is used in agriculture" (86/278/EEC) has been very successful, in that there have been no cases of adverse effect where it has been applied. Regulation and guidelines: The EC encourages the use of sewage sludge in agriculture because it conserves organic matter and completes nutrient cycles. Recycling of phosphate is regarded as especially important because the phosphate industry predicts that, at the current rate of extraction, the economic reserves will be exhausted in 100 or at most 250 years. Phosphate can be recovered with minimal capital expenditure as the technology currently exists, but municipalities have little political will to attempt nutrient extraction, instead opting for a "take all the other stuff" mentality. European countries that joined the EU after 2004 favor landfills as a means of disposal for sewage sludge. In 2006, the predicted sewage sludge growth rate was 10 million tons per year; this increase in sewage sludge accumulation in the EU can be attributed to the increase in the number of households connected to the sewage system. The EU has directives in place to encourage the use of sewage sludge in agriculture in a way that does not harm the soil, humans, or the environment. One guideline the EU has put into place is that sewage sludge should not be added to fruit and vegetable crops that are in season. In Austria, in order to dispose of sewage sludge in a landfill, it must first be treated in a way that reduces its biological reactivity. Sweden no longer allows sewage sludge to be disposed of in landfills. Across the EU, regulations regarding sewage sludge disposal differ because legislation on landfill disposal is not harmonized across national regulations. Regulation and guidelines: United States According to the EPA, biosolids that meet the treatment and pollutant content criteria of Part 503.13 "can be safely recycled and applied as fertilizer to sustainably improve and maintain productive soils and stimulate plant growth." However, they cannot be disposed of in a sludge-only landfill under Part 503.23 because of high chromium levels and boundary restrictions. Regulation and guidelines: Biosolids that meet the Class B pathogen treatment and pollutant criteria, in accordance with the EPA "Standards for the use or disposal of sewage sludge" (40 CFR Part 503), can be land-applied with formal site restrictions and strict record keeping. Biosolids that meet Class A pathogen reduction requirements, or equivalent treatment by a "Process to Further Reduce Pathogens" (PFRP), have the fewest restrictions on use. PFRPs include pasteurization, heat drying, thermophilic composting (aerobic digestion, the most common method), and beta or gamma ray irradiation. The EPA Office of the Inspector General (OIG) completed two assessments of the EPA sewage sludge program, in 2000 and 2002.
The follow-up report in 2002 documented that "the EPA cannot assure the public that current land application practices are protective of human health and the environment." The report also documented that there had been an almost 100% reduction in EPA enforcement resources since the earlier assessment. This is probably the greatest issue with the practice: under both the federal program operated by the EPA and those of the several states, there is limited inspection and oversight by the agencies charged with regulating these practices. To some degree, this lack of oversight is a function of the perceived (by the regulatory agencies) benign nature of the practice. However, a greater underlying issue is funding: few states and the US EPA have the discretionary funds necessary to establish and implement a full enforcement program for biosolids. As detailed in the 1995 Plain English Guide to the Part 503 Risk Assessment, the risk assessment completed for biosolids was the EPA's most comprehensive. Regulation and guidelines: Prior to 1991 Since 1884, when sewage was first treated, the amount of sludge has increased along with population and more advanced treatment technology (secondary treatment in addition to primary treatment). In the case of New York City, at first the sludge was discharged directly along the banks of the rivers surrounding the city, then later piped further into the rivers, and then further still out into the harbor. In 1924, to relieve a dismal condition in New York Harbor, New York City began dumping sludge at sea at a location in the New York Bight called the 12-Mile Site. This was deemed a successful public health measure, and not until the late 1960s was there any examination of its consequences for marine life or for humans. There was accumulation of sludge particles on the seafloor and consequent changes in the numbers and types of benthic organisms. In 1970 a large area around the site was closed to shellfishing. From then until 1986, the practice of dumping at the 12-Mile Site came under increasing pressure stemming from a series of untoward environmental crises in the New York Bight that were attributed partly to sludge dumping. In 1986, sludge dumping was moved still further seaward to a site over the deep ocean called the 106-Mile Site. Then, again in response to political pressure arising from events unrelated to ocean dumping, the practice ended entirely in 1992. Since 1992, New York City sludge has been applied to land (outside of New York state). The wider question is whether the changes on the sea floor caused by the portion of sludge that settles are severe enough to justify the added operational cost and human health concerns of applying sludge to land. Regulation and guidelines: Since 1991 After the 1991 Congressional ban on ocean dumping, the U.S. Environmental Protection Agency (EPA) instituted a policy of digested sludge reuse on agricultural land. The US EPA promulgated regulations – 40 CFR Part 503 – that continued to allow the use of biosolids on land as fertilizers and soil amendments, which had been previously allowed under Part 257. The EPA promoted biosolids recycling throughout the 1990s. The EPA's Part 503 regulations were developed with input from university, EPA, and USDA researchers from around the country and involved an extensive review of the scientific literature and the largest risk assessment the agency had conducted to that time. The Part 503 regulations became effective in 1993.
Society and culture: Court cases in the United States In 2009, James Rosendall of Grand Rapids, MI was sentenced by United States District Judge Avern Cohn to 11 months in prison followed by three years of supervised release for conspiring to commit bribery. Rosendall was the former president of Synagro of Michigan, a subsidiary of Synagro Technologies. His duties included obtaining the approval of the City of Detroit to process and dispose of the city's wastewater. Society and culture: In 2011, Travis County Commissioners declared that Synagro's solid waste disposal activities would be an inappropriate and prohibited land use under already-established local ordinances. Society and culture: A battle between the home rule of local government and states' rights/commerce rights has been waged between Kern County, California, and Los Angeles, California. Kern County passed the "Keep Kern Clean" ballot initiative, an ordinance which banned sludge from being applied in Kern County. Los Angeles sued and, after protracted litigation, won the case in 2016. Society and culture: In 2012, two families won a $225,000 tort lawsuit against a sludge company that contaminated their properties. In 2013 in Pennsylvania, in the case Gilbert v. Synagro, a judge barred a nuisance, negligence, and trespass lawsuit under Pennsylvania's Right to Farm Act. Scientists testing the potential of sewage sludge to protect against lead-poisoned soil did not inform test participants of possible dangers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pseudotachylyte** Pseudotachylyte: Pseudotachylyte (sometimes written as pseudotachylite) is an extremely fine-grained to glassy, dark, cohesive rock occurring as veins that form through frictional melting and subsequent quenching during earthquakes, large-scale landslides, and impact events. The chemical composition of pseudotachylyte generally reflects the local bulk chemistry, though it may skew to slightly more mafic compositions due to the preferential incorporation of hydrous and ferro-magnesian minerals (mica and amphibole, respectively) into the melt phase. Pseudotachylyte was first documented by Shand in the Vredefort Impact Structure and was named for its close resemblance to tachylyte, a basaltic glass. Though pseudotachylyte is reported to have a glassy appearance, it is extremely susceptible to alteration and is thus rarely found to be entirely composed of glass. Typically, it is completely devitrified into a very fine-grained material with quench textures such as chilled margins, radial and concentric clusters of microcrystallites (spherulites) or radial overgrowths of microcrystallites on clasts, as well as skeletal and spinifex microcrystallites. Formation: Seismic faulting Pseudotachylytes have been referred to as "fossil earthquakes", as they represent definitive evidence of seismic slip. During seismic faulting (earthquakes), pseudotachylyte forms through an extreme concentration of frictional sliding onto a thin surface of a fault. The friction creates heat, and because rocks are insulators, the temperature increases on this surface, allowing the rock to melt. This generates a "fault vein", which is often accompanied by "injection veins" that open from the fault vein as opening-mode cracks. A melt origin for pseudotachylyte was controversial for some time, with some researchers favouring extreme comminution for its generation (a crush origin). Ample evidence of direct crystallisation from a melt, though, has more or less put this argument to rest, with most researchers defining pseudotachylyte as having a melt origin. Formation: Laboratory experiments investigating how pseudotachylytes form have shown that the initial phase of formation involves the flash melting of asperities that eventually grow and join together into larger patches of a high-viscosity melt. The high viscosity of these melt patches raises the fault's coefficient of friction, hindering sliding. As the patches of melt continue to grow and join together, they form a continuous melt layer with a lower viscosity, which reduces the fault's coefficient of friction, effectively lubricating the fault and allowing sliding to occur more easily. Once the melt layer has reached some critical thickness, frictional heat can no longer be generated, and the melt begins to quench and crystallise, again increasing the melt's viscosity and acting as a viscous brake on sliding. Once sliding has stopped, the quenching of the melt layer welds the fault shut and restores its strength to that of the unfaulted surrounding rock. Formation: Abundance of seismic pseudotachylyte in nature There is an apparent lack of pseudotachylyte in the geologic record relative to the observed seismicity of today, which raises the question of whether this reflects the rarity of its production, a lack of recognition in the field, or poor preservation. It was once thought that pseudotachylyte could only be produced in dry, crystalline rock; this, however, has been shown to be incorrect.
Therefore, its production is likely not as rare as originally thought. Pseudotachylyte is often closely associated with other extremely fine-grained rocks (e.g. mylonite and cataclasite), and is extremely prone to alteration that often renders it unrecognisable, which supports arguments that pseudotachylyte production is not rare, but rather is likely to go unrecognised, and thus unreported. Formation: Landslides Pseudotachylytes have been observed at the base of some large-scale landslide deposits. The formation of pseudotachylyte along the base of a landslide occurs through the same processes as earthquake-generated pseudotachylyte: frictional heating during gliding along the base of the detachment melts the surrounding rock. They are similar in appearance to earthquake-generated pseudotachylyte. Notable examples of landslide-generated pseudotachylyte in the geologic record are the Arequipa volcanic landslide deposit in Peru, from approximately 2.4 million years ago, and the Langtang landslide deposit in Nepal, which occurred between 30,000 and 25,000 years ago. Pseudotachylyte has also been found along the base of more modern landslides, such as the landslide generated by the 1999 Taiwan earthquake. Formation: Impact structures Pseudotachylyte has also been associated with impact structures. Pseudotachylyte in impact craters typically occurs as abundant irregular, anastomosing, and dike-like bodies that contain several large and small rounded inclusions of the impacted, or target, rock in a dense fine-grained to glassy, black to greenish matrix. Individual pseudotachylyte bodies within impact craters are not uniform over long distances, and may change in size and shape drastically within meters or tens of meters. The most extensive examples of impact-related pseudotachylytes come from impact structures that have been deeply eroded below the floor of the crater, such as in the case of the Vredefort impact structure in South Africa and the Sudbury impact structure in Canada. Impact-generated pseudotachylytes are classified into two types depending on their method of formation. S-type pseudotachylytes, also known as "shock veins", are found as small (<1 cm, typically <1 mm) glassy veins that contain high-pressure mineral polymorphs such as coesite and stishovite. These shock veins are thought to form via frictional and shock melting during the higher-pressure compressive stages of the shockwave expansion. E-type (endogenic) pseudotachylytes are formed via frictional melting of the target rock due to high-speed slip caused by the collapse of the crater margin. Formation: Pseudotachylyte vs. impact melt in impact structures Though pseudotachylyte and impact melt within impact structures are visually similar, both occurring as dike-like bodies, they are chemically different. Since pseudotachylyte is derived locally, it will reflect the composition of the wall-rock from which it formed. Impact melts are generated from a much larger volume of rock by instantaneous shock melting, so their chemical compositions will be more reflective of regional-scale mixing and homogenization during melting, particularly in heterogeneous terranes. In the Sudbury impact structure, researchers have been able to distinguish dikes of pseudotachylyte from dikes of impact melt based on their chemical compositions.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Asterisk Gateway Interface** Asterisk Gateway Interface: Asterisk Gateway Interface (AGI) is a software interface and communications protocol for application-level control of selected features of the Asterisk PBX. AGI allows an external, user-written program, launched from the Asterisk dial plan and connected via pipes, to control telephony operations on its associated control and voice channels. It is similar to the CGI feature of web servers in that any language may be used to write the external program, which communicates with Asterisk via stdin and stdout. While the initial feature set of AGI included only procedural control of Asterisk operations via a command-and-response handshake, Enhanced AGI (EAGI) also provides out-of-band access to the incoming audio stream. Asterisk Gateway Interface: FastAGI is an extension to AGI which allows the external program to run on a separate network host, avoiding the overhead of creating a new process for every call on the Asterisk server. It uses a TCP socket for communication with the external host, which provides the function of an AGI service in the manner of the client–server model. The default TCP port for FastAGI is 4573. Similar to HTTP uniform resource identifiers (URIs), FastAGI employs a URI format of agi://hostname[:port][/program/path]. Asterisk Gateway Interface: The AGI feature set of Asterisk is implemented as an Asterisk loadable module (res_agi). The features may be accessed through a variety of application programming interfaces in various languages, such as phpagi, Perl AGI Library, CAGI, NanoAGI, and PyST.
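To make the stdin/stdout handshake concrete, the following is a minimal sketch of an AGI script in Python. ANSWER, SAY NUMBER, and HANGUP are standard AGI commands; the surrounding structure is an illustrative assumption rather than code taken from the Asterisk documentation.

```python
#!/usr/bin/env python3
# Minimal AGI script sketch: Asterisk launches it from the dial plan and
# talks to it over stdin/stdout.
import sys

def read_env():
    """Read the agi_* variable header Asterisk sends at startup."""
    env = {}
    while True:
        line = sys.stdin.readline().strip()
        if not line:               # a blank line terminates the header
            break
        key, _, value = line.partition(": ")
        env[key] = value
    return env

def send(cmd):
    """Send one AGI command and return Asterisk's one-line response."""
    sys.stdout.write(cmd + "\n")
    sys.stdout.flush()
    return sys.stdin.readline().strip()  # e.g. "200 result=0"

if __name__ == "__main__":
    env = read_env()            # agi_request, agi_channel, ...
    send("ANSWER")              # answer the channel
    send('SAY NUMBER 42 ""')    # speak a number ("" = no escape digits)
    send("HANGUP")              # end the call
```

A FastAGI service would accept the same dialogue over a TCP connection on port 4573 instead of pipes.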
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cryo bio-crystallography** Cryo bio-crystallography: Cryo bio-crystallography is the application of crystallography to biological macromolecules at cryogenic temperatures. Basic principles: Cryo crystallography enables X-ray data collection at cryogenic temperatures, typically 100 K. Crystals are transferred from the solution they have grown in (called the mother liquor) to a solution containing a cryo-protectant to prevent ice formation. Crystals are mounted on a glass fiber (as opposed to in a capillary), cooled by dipping directly into liquid nitrogen, and then placed in a cryo cold stream. Cryo-cooled macromolecular crystals show more than a 70-fold reduction in radiation damage compared with room temperature. Advantages:
Significant improvement of resolution in data collection
Reduced or eliminated radiation damage in crystals
Usefulness and applications: Crystallography of large biological macromolecules can be achieved while maintaining their solution state. The best-known example is the ribosome. Usefulness and applications: Today, liquid-nitrogen cryo cooling is used for protein crystallography at every synchrotron around the world. Radiation damage is reduced more than 70-fold at cryogenic temperatures. A recent review paper explains the development of reduced radiation damage in macromolecular crystals at synchrotrons and describes how more than 90% of all structures deposited in the Protein Data Bank used cryo cooling in their determination. References: Haas, D.J. The early history of cryo-cooling for macromolecular crystallography. IUCrJ (2020), 7, 148–157. https://journals.iucr.org/m/issues/2020/02/00/be5283/be5283.pdf; Haas, D.J., and Rossmann, M.G. Crystallographic studies on lactate dehydrogenase at −75 °C. Acta Crystallogr. (1970), B26, 998.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Susan Marqusee** Susan Marqusee: Susan Marqusee is the Eveland Warren Endowed Chair Professor of Biochemistry, Biophysics, and Structural Biology at the University of California, Berkeley, and the Berkeley campus director of the California Institute for Quantitative Biosciences. Her research concerns the structure and dynamics of protein molecules. She received her A.B. in Physics and Chemistry from Cornell University in 1982, and her Ph.D. (Biochemistry) and M.D. degrees from Stanford University in 1990, where she trained with Robert Baldwin on the intrinsic helical properties of amino acids in model peptides. She was one of the 1995 winners of the Beckman Young Investigators Award, the 1996–1997 winner of the Margaret Oakley Dayhoff Award, and the 2012 winner of the William C. Rose Award. Susan Marqusee: In 2016 she was elected to the National Academy of Sciences. Her recent research concerns protein energetics, folding, and turnover as altered by protein ubiquitylation. As of June 2023, she is serving as an assistant director at the National Science Foundation, leading the Directorate for Biological Sciences.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Endolymph** Endolymph: Endolymph is the fluid contained in the membranous labyrinth of the inner ear. The major cation in endolymph is potassium; the concentrations of sodium and potassium in endolymph are 0.91 mM and 154 mM, respectively. It is also called Scarpa's fluid, after Antonio Scarpa. Structure: The inner ear has two parts: the bony labyrinth and the membranous labyrinth. The membranous labyrinth is contained within the bony labyrinth, and within the membranous labyrinth is a fluid called endolymph. Between the outer wall of the membranous labyrinth and the wall of the bony labyrinth lies the perilymph. Structure: Composition Perilymph and endolymph have unique ionic compositions suited to their functions in regulating electrochemical impulses of hair cells. The electric potential of endolymph is ~80–90 mV more positive than that of perilymph, due to a higher concentration of K+ compared to Na+. The main component of this unique extracellular fluid is potassium, which is secreted by the stria vascularis. The high potassium content of the endolymph means that potassium, not sodium, is carried as the depolarizing electric current in the hair cells. This is known as the mechano-electric transduction (MET) current. Structure: Endolymph has a high positive potential (80–120 mV in the cochlea) relative to other nearby fluids such as perilymph, due to its high concentration of positively charged ions. It is mainly this electrical potential difference that allows potassium ions to flow into the hair cells during mechanical stimulation of the hair bundle. Because the hair cells are at a negative potential of about −50 mV, the potential difference from endolymph to hair cell is on the order of 150 mV, which is the largest electrical potential difference found in the body. Function: Hearing: Cochlear duct: fluid waves in the endolymph of the cochlear duct stimulate the receptor cells, which in turn translate their movement into nerve impulses that the brain perceives as sound. Balance: Semicircular canals: angular acceleration of the endolymph in the semicircular canals stimulates the vestibular receptors of the endolymph. The semicircular canals of both inner ears act in concert to coordinate balance. Clinical significance: Disruption of the endolymph due to jerky movements (like spinning around or driving over bumps while riding in a car) can cause motion sickness. A condition in which the volume of the endolymph is greatly enlarged is called endolymphatic hydrops and has been linked to Ménière's disease.
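As a quick worked calculation of that transduction driving force, taking a representative endocochlear potential of +100 mV from the 80–120 mV range above:

$$\Delta V = V_{\text{endolymph}} - V_{\text{hair cell}} \approx (+100\ \text{mV}) - (-50\ \text{mV}) = 150\ \text{mV}$$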
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Elaine G. Toms** Elaine G. Toms: Elaine G. Toms is a Canadian information scientist working in human–computer interaction, known for her research on information retrieval, the usability of web sites, and the measurement of user engagement. She is Professor of Information Innovation & Management at the Sheffield University Management School, part of the University of Sheffield in England. Education and career: Toms was a student at Dalhousie University and completed a Ph.D. in 1997 at the University of Western Ontario. Toms was president of the Canadian Association for Information Science for 1998–1999. After four years in the Faculty of Information Studies at the University of Toronto, she returned to Dalhousie in 2004 as associate professor and Canada Research Chair in Management Informatics. She moved from Dalhousie to Sheffield in 2011.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jalview** Jalview: Jalview is bioinformatics software used to view and edit multiple sequence alignments. The program was originally written by Michele Clamp whilst working in Geoff Barton's group at the University of Oxford and the European Bioinformatics Institute (EBI). Jalview 2, a re-engineered version produced by Andrew Waterhouse and Jim Procter whilst working in Geoff Barton's group at the School of Life Sciences, University of Dundee, was released in 2005, and its development is supported by the Biotechnology and Biological Sciences Research Council (BBSRC) and the Wellcome Trust. Jalview: It is used widely by a variety of web servers (e.g. the EBI ClustalW server and the Pfam protein domain database) but is also available as a general-purpose alignment editor. Jalview: Jalview has a wide range of functions in addition to multiple sequence alignment generation, viewing, and editing, including calculating phylogenetic trees and viewing molecular structures. Recent versions of Jalview include features for the analysis of genetic variation from public databases or local Variant Call Format (VCF) files. Jalview connects to many external web services to import data and perform calculations.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aggregative game** Aggregative game: In game theory, an aggregative game is a game in which every player's payoff is a function of the player's own strategy and the aggregate of all players' strategies. The concept was first proposed by Nobel laureate Reinhard Selten in 1970, who considered the case where the aggregate is the sum of the players' strategies. Definition: Consider a standard non-cooperative game with $n$ players, where $S_i \subseteq \mathbb{R}$ is the strategy set of player $i$, $S = S_1 \times S_2 \times \dots \times S_n$ is the joint strategy set, and $f_i : S \to \mathbb{R}$ is the payoff function of player $i$. The game is then called an aggregative game if for each player $i$ there exists a function $\tilde f_i : S_i \times \mathbb{R} \to \mathbb{R}$ such that for all $s \in S$: $f_i(s) = \tilde f_i\left(s_i, \sum_{j=1}^{n} s_j\right)$. In words, payoff functions in aggregative games depend on a player's own strategy and the aggregate $\sum_j s_j$. As an example, consider the Cournot model, where firm $i$ has payoff/profit function $f_i(s) = s_i P(\sum_j s_j) - C_i(s_i)$ (here $P$ and $C_i$ are, respectively, the inverse demand function and the cost function of firm $i$). This is an aggregative game since $f_i(s) = \tilde f_i(s_i, \sum_j s_j)$ where $\tilde f_i(s_i, X) = s_i P(X) - C_i(s_i)$. Generalizations: A number of generalizations of the standard definition of an aggregative game have appeared in the literature. A game is generalized aggregative if there exists an additively separable function $g : S \to \mathbb{R}$ (i.e., if there exist increasing functions $h_0, h_1, \dots, h_n : \mathbb{R} \to \mathbb{R}$ such that $g(s) = h_0(\sum_i h_i(s_i))$) such that for each player $i$ there exists a function $\tilde f_i : S_i \times \mathbb{R} \to \mathbb{R}$ with $f_i(s) = \tilde f_i(s_i, g(s_1, \dots, s_n))$ for all $s \in S$. Any aggregative game is generalized aggregative, as seen by taking $g(s_1, \dots, s_n) = \sum_i s_i$. A more general definition still is that of quasi-aggregative games, where agents' payoff functions are allowed to depend on different functions of opponents' strategies. Aggregative games can also be generalized to allow for infinitely many players, in which case the aggregator will typically be an integral rather than a linear sum. Aggregative games with a continuum of players are frequently studied in mean field game theory. Properties: Generalized aggregative games (hence aggregative games) admit backward reply correspondences and are, in fact, the most general class to do so. Backward reply correspondences, as well as the closely related share correspondences, are powerful analytical tools in game theory. For example, backward reply correspondences were used to give the first general proof of the existence of a Nash equilibrium in the Cournot model without assuming quasiconcavity of firms' profit functions. Backward reply correspondences also play a crucial role in comparative statics analysis (see below). Properties: Quasi-aggregative games (hence generalized aggregative games, hence aggregative games) are best-response potential games if best-response correspondences are either increasing or decreasing. Precisely as in games with strategic complementarities, such games therefore have a pure-strategy Nash equilibrium regardless of whether payoff functions are quasiconcave and/or strategy sets are convex; this existence proof is a special case of such more general existence results. Properties: Aggregative games have strong comparative statics properties. Under very general conditions one can predict how a change in exogenous parameters will affect the Nash equilibria.
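The Cournot example can be checked numerically. In the sketch below, the linear inverse demand P(X) = a − bX and the linear costs are illustrative assumptions, not part of the definition; the point is only that each firm's payoff can be evaluated through the aggregate alone.

```python
# Cournot oligopoly written in aggregative form: f_i(s) = f~_i(s_i, sum(s)).
a, b = 100.0, 1.0                 # inverse demand P(X) = max(a - b*X, 0), assumed
c = [10.0, 15.0, 20.0]            # hypothetical per-firm marginal costs

def P(X):
    return max(a - b * X, 0.0)

def f_tilde(i, s_i, X):
    """Payoff as a function of own strategy s_i and the aggregate X."""
    return s_i * P(X) - c[i] * s_i

def f(i, s):
    """Standard payoff f_i(s); equals f_tilde(s_i, sum(s)) by construction."""
    return f_tilde(i, s[i], sum(s))

s = [20.0, 18.0, 15.0]            # an arbitrary joint strategy profile
assert all(abs(f(i, s) - f_tilde(i, s[i], sum(s))) < 1e-12 for i in range(3))
print([round(f(i, s), 1) for i in range(3)])  # [740.0, 576.0, 405.0]
```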
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Potocki–Lupski syndrome** Potocki–Lupski syndrome: Potocki–Lupski syndrome (PTLS), also known as dup(17)p11.2p11.2 syndrome, trisomy 17p11.2, or duplication 17p11.2 syndrome, is a contiguous gene syndrome involving the microduplication of band 11.2 on the short arm of human chromosome 17 (17p11.2). The duplication was first described as a case study in 1996. In 2000, the first study of the disease was released, and in 2007, enough patients had been gathered to complete a comprehensive study and give it a detailed clinical description. PTLS is named for two researchers involved in the latter phases, Drs. Lorraine Potocki and James R. Lupski of Baylor College of Medicine. PTLS was the first predicted reciprocal of a homologous recombination (microdeletion or microduplication) where both reciprocal recombinations result in a contiguous gene syndrome. Its reciprocal disease is Smith–Magenis syndrome (SMS), in which the chromosome portion duplicated in PTLS is deleted altogether. Potocki–Lupski syndrome is considered a rare disease, predicted to appear in at least 1 in 20,000 humans. Symptoms of the syndrome include intellectual disability and autism, among a range of other features. Presentation: Clinically, PTLS presents as a milder syndrome than SMS, with distinct characteristics, though PTLS can be mistaken for SMS. Both syndromes are characterized by multiple congenital abnormalities and intellectual disability. A key feature, which appears in 80% of cases, is autism spectrum disorder. Other features of Potocki–Lupski syndrome include infantile hypotonia, sleep apnea, structural cardiovascular anomalies, cognitive deficits, abnormal social behaviors, learning disabilities, attention-deficit disorder, obsessive-compulsive behaviours, malocclusions, short stature, and failure to thrive. After noting that autism is commonly associated with PTLS, researchers at the Centro de Estudios Científicos and the Austral University of Chile genetically engineered a PTLS "model mouse" in which the syntenic chromosome segment was duplicated, and examined the social behaviours of these mice versus those without the anomaly (the "wild type"). One human autism-related symptom is abnormal reciprocal social interaction. The researchers observed that the genetically engineered mice of both sexes had a slight (statistically insignificant) impairment of their preference for a social target (i.e., a living, breathing mouse) over an inanimate one — the average human will prefer the social target — and preferred to explore newly introduced mice instead of familiar ones, unlike the typical human and mouse preference for a friend over a stranger, demonstrating a change in their liking of social novelty. They also found that male mice, in some scenarios, showed more anxiety and more dominant behaviour than the control group. Anatomically, the engineered mice had a decreased brain-to-body mass ratio and altered expression of several genes in the hippocampus. Molecular genetics: Both Potocki–Lupski and Smith–Magenis syndromes arise through a faulty non-allelic homologous recombination mechanism. Both appear to involve a 1.3–3.7 Mb chromosome section in 17p11.2 that includes the retinoic acid inducible 1 (RAI1) gene.
Other candidate genes have been identified within the duplicated section, including SREBF1, DRG2, LLGL1, SHMT1 and ZFP179. In mice of the subfamily Murinae, a 32–34 cM region of chromosome 11 is syntenic to 17p11.2, meaning that the two regions contain the same genes in the same order and orientation. This conserved sequence has been exploited to learn more about SMS and PTLS. Through genetic studies on both laboratory mice and humans, it has been discovered that RAI1 is likely the gene responsible for these syndromes. For example, one study showed that mice with two copies of the RAI1 gene and three copies of each of the other 18 genes in the corresponding region of chromosome 11 appeared and behaved like control mice with the region intact. In other words, RAI1 is dosage-sensitive. This provides evidence that it is the number of RAI1 copies present that determines the symptoms of PTLS and SMS. RAI1 is therefore believed to be the critical gene involved in these disorders; however, since no cases of duplication of RAI1 alone have been identified, this has not been conclusively established. One group has noted that, in a mouse model, the flanking genes in the duplicated segment were also overexpressed, suggesting some new candidates for analysis, including MFAP4, TTC19 and GJA12.

Diagnosis: The duplication involved in PTLS is usually large enough to be detected through G-banding alone, though there is a high false-negative rate. To ascertain the diagnosis when karyotyping results are unclear or negative, more sophisticated techniques such as subtelomeric fluorescence in situ hybridization (FISH) analysis and array comparative genomic hybridization (aCGH) may be used.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Troland Research Awards** Troland Research Awards: The Troland Research Awards are an annual prize given by the United States National Academy of Sciences to two researchers (preferably 45 years of age or younger) in recognition of psychological research on the relationship between consciousness and the physical world. The award funds may be spent in areas including, but not limited to, experimental psychology and the topics of sensation, perception, motivation, emotion, learning, memory, cognition, language, and action. Preference is given to experimental work with a quantitative approach or to experimental research seeking physiological explanations.

Recipients (source: NAS):
Tim Buschman (2023) – For his groundbreaking insights into the neural mechanisms of cognitive control.
Catherine Hartley (2023) – For her pioneering contributions to the understanding of the adolescent mind and brain.
Roozbeh Kiani (2022) – For his seminal contributions to understanding visually-based decisions.
Leah Somerville (2022) – For her pioneering research on how brain development shapes psychological functioning from childhood to adulthood.
Michael J. Frank (2021) – For his groundbreaking discoveries in our understanding of learning, valuation, and cognitive control.
Nicole C. Rust (2021) – For her fundamental contributions to the understanding of how the cortex makes use of complex visual information to guide intelligent behavior.
Michael C. Frank (2020) – For his work revolutionizing our understanding of language acquisition by placing it in its social context.
Nim Tottenham (2020) – For her innovative discoveries of critical windows of affective development during childhood and adolescence, their underlying neural basis at the circuit level and their disruption following early life stress.
Adriana Galvan (2019) – For her experimental advances characterizing the neurobiological mechanisms underlying adaptive and risky adolescent behavior, elucidating theoretical models of adolescence and the impact of this research on decisions of juvenile justice.
Tom Griffiths (2019) – For his pioneering work bringing the methods of Bayesian inference to bear on understanding a broad range of cognitive functions, from perception to language, decision making, reasoning, and cognitive control, and for bringing formal rigor to the notion of bounded rationality, explaining apparent irrationalities of behavior in rational terms.
Josh McDermott (2018) – For groundbreaking research into how humans hear and interpret sound.
Marlene Cohen (2018) – For her pioneering studies of how neurons in the brain process visual information.
Tim Behrens (2017) – For his outstanding research into the neuroanatomical systems mediating learning and decision making.
Sian Leah Beilock (2017) – For her fundamental contributions to our understanding of human skill learning and performance breakdowns in high-pressure and anxiety-provoking situations.
David J. Freedman (2016) – For his innovative experimental and computational work revealing how neurons in cerebral cortex support the representation of visual categories.
Geoffrey F. Woodman (2016) – For his artful blending of behavioral, electrophysiological and neurophysiological techniques in humans and nonhuman primates to reveal neural concomitants of attention, cognitive control and memory. His most recent work pairing transcranial electrical stimulation with error monitoring introduces an important new strategy for improving cognitive function in people with brain disorders.
Lisa Feigenson (2015) – For her meticulous investigations of the origins and early development of representations of objects and numbers. Her research on cognition in infancy illuminates the foundations of young children's mathematical reasoning and learning.
Yael Niv (2015) – For her studies at the confluence of theory and experiment that illuminate the behavioral and biological foundations of learning and decision-making.
Ueli Rutishauser (2014) – For his innovative experimental and computational studies to understand human perception and memory.
Rebecca Saxe (2014) – For discovering the part of the human brain specialized for understanding what other people are thinking.
Asif A. Ghazanfar (2013) – For studies on the development and neural basis of primate communication that advance our understanding of human communication.
Lori L. Holt (2013) – For studies advancing our understanding of the sensory and cognitive processes that are fundamental to the perception of speech.
Jason P. Mitchell (2012) – For his insightful use of neuroimaging and behavioral methods to enrich our understanding of how people infer the thoughts, feelings, and opinions of others.
Laura Schulz (2012) – For her fundamental contributions to our understanding of how children develop knowledge of the physical and social world.
Elizabeth A. Buffalo (2011) – For innovative, multidisciplinary study of the hippocampus and the neural basis of memory.
Joshua B. Tenenbaum (2011) – For formulating a groundbreaking new Bayesian model of human inductive learning and for using this model to generate innovative empirical studies of human perception, language, and reasoning.
Michael J. Kahana (2010) – For innovative experimental, theoretical, and computational work leading to new insights regarding the dynamics of human episodic memory.
Frank Tong (2010) – For pioneering the use of neural decoding techniques to explore mechanisms in the human brain mediating perception, attention, and object recognition.
Tirin Moore (2009) – For fundamental and insightful contributions to our understanding of the neuronal mechanisms that control directed visual attention.
Andrew J. Oxenham (2009) – For profound and rigorous contributions to our understanding of the relationship between auditory perception and its underlying physiological mechanisms.
Miguel P. Eckstein (2008) – For sophisticated theoretical analysis and modeling that address fundamental issues in perception and cognition and their application to the practical problems of medical imaging.
Isabel Gauthier (2008) – For seminal experiments on the role of visual expertise in the recognition of complex objects including faces and for exploration of brain areas activated by this recognition.
Randy Buckner (2007) – For substantive contributions to understanding of the neural mechanisms of memory formation and retrieval.
Pawan Sinha (2007) – For elucidating how humans learn to recognize visual objects, and for developing computational models of the mechanisms that mediate this learning.
Marvin M. Chun (2006) – For creative use of behavioral, brain-imaging, and neuropsychological evidence to elucidate the interplay of conscious and unconscious processes in perception, memory, and learning.
Frederick M. Rieke (2006) – For experimental and theoretical analyses of information coding in the central nervous system and its relation to perception.
Gregory C. DeAngelis (2005) – For his fundamental contributions to understanding the neural mechanisms underlying stereoscopic vision: the discovery of a disparity mechanism and how it contributes to depth perception.
Jacob Feldman (2005) – For his advancement of mathematical and computational approaches to perceptual organization in human vision and human concept learning.
Robert L. Goldstone (2004) – For novel experimental analyses and elegant modeling that show how perceptual learning dynamically adjusts dimensions and boundaries of categories and concepts in human thought.
Wendy A. Suzuki (2004) – For her fundamental work on the neuroanatomy, physiology, and function of brain structures important for memory.
David C. Plaut (2003) – For his penetrating computational analyses of reading, language, and other aspects of cognition, which elucidate normal function and the consequences of brain injury.
Michael J. Tarr (2003) – For his empirical and theoretical investigations of object recognition and for demonstrating the importance of expertise in organizing brain areas for faces and other objects.
David J. Heeger (2002) – For his groundbreaking contributions to our understanding of the relation between perceptual experience and neural activity in visual cortex, using neuroimaging and computational methods.
John K. Kruschke (2002) – For deep insights and empirical evaluations concerning concept formation and attention in learning and rigorous formalization of the underlying psychological principles in connectionist frameworks.
Steven J. Luck (2001) – For his pathbreaking behavioral, psychophysical, and physiological studies of attention and visual memory.
Karen Wynn (2001) – For her pioneering research on the foundations of quantitative and mathematical thinking in infants and young children.
Elizabeth Gould (2000) – For her discoveries about neurogenesis in adult mammals and its modulation by hormones, neurotransmitters, and experience, which have radically altered our ideas regarding brain function.
Earl K. Miller (2000) – For his pioneering research on working memory and its neurobiological basis in the prefrontal cortex.
Nancy G. Kanwisher (1999) – For her innovative research on visual attention, awareness, and imagery, including the characterization of a face perception module and discovery of a place encoding module.
Harold E. Pashler (1999) – For his many experimental breakthroughs in the study of spatial attention and central executive control and for his insightful theoretical analysis of human cognitive architecture.
Virginia M. Richards (1998) – For her contributions to auditory perception, especially to the understanding of the envelope and energy cues that contribute to detecting signals in noise.
Jeffrey D. Schall (1998) – For his contributions to our understanding of neural mechanisms of visual selective attention, the control of voluntary movements, and response time.
Richard Ivry (1997) – For his innovative work with normal humans and neurological patients, showing the importance of the cerebellum for computations related to sensory and motor timing.
Keith R. Kluender (1997) – For his empirical and theoretical contributions to our understanding of the perception of speech.
Joseph E. Steinmetz (1996) – For his pioneering anatomical, physiological, and behavioral studies that identify pathways in the brain, subserving a basic form of learning, associating conditioned and unconditioned stimuli.
Steven G. Yantis (1996) – For his insightful research on visual attention and perception, especially his studies on attentional capture and the interactions between attention and perceptual organization.
Michael S. Fanselow (1995) – For his ingenious analysis of the mechanisms that enable stimuli to acquire the ability to produce fear and how fear translates into specific behaviors.
Robert M. Nosofsky (1995) – For his creative synthesis of information processing models and techniques of psychological measurement in the combined theoretical and experimental analysis of human categorization learning.
Donald D. Hoffman (1994) – For advancing the formal and empirical study of human visual perception and for developing a general theoretical framework for the analysis of perceptual inference.
David G. Lavond (1994) – For his pioneering application of the method of reversible cooling inactivation of regions of neural tissue to localize a memory trace in the mammalian brain.
Steven Pinker (1993) – For his significant contributions to the fields of visual perception and the acquisition and evolutionary basis of language.
Martha Farah (1992) – For her rigorous empirical and theoretical analysis of visual cognition, in which understanding of normal function and analysis of neurological deficits illuminate and strengthen one another.
Daniel L. Schacter (1991) – For his investigations of the amnesic syndrome and of explicit versus implicit memory, major steps toward a neuro-psychological analysis of the functions of consciousness.
Robert Desimone (1990) – For his outstanding work on the cortical and subcortical neuronal mechanisms underlying visual perception and attention.
John T. Cacioppo (1989) – For his contributions to understanding the psychophysiological correlates of attitudes, cognition, and emotions, and for his use of non-invasive physiological measures to resolve basic theoretical questions regarding psychological processes.
Eric I. Knudsen (1988) – For his elegant analysis (integrating behavior, neurophysiology, and neuroanatomy) which elucidated the role of experience in the development of directional hearing in the owl.
Laurence T. Maloney and Brian A. Wandell (1987) – For their elegant account of how we preserve the inherent colors of surfaces despite wide variations in illumination, and for Wandell's other fundamental investigations of color vision.
Roger Ratcliff (1986) – For his effectively combined mathematical and experimental analyses of human information processing.
Keith D. White (1985) – For his psychophysical studies on visual guidance and discrimination through the selective perception of pattern, color, and movement.
Edward N. Pugh (1984) – For his distinguished, quantitative psychophysical work on mechanisms of color adaptation and to encourage his physiological work on mechanisms of receptor transduction and sensitivity control.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Middle rectal veins** Middle rectal veins: The middle rectal veins (or middle hemorrhoidal veins) originate in the hemorrhoidal plexus and receive tributaries from the bladder, prostate, and seminal vesicle. They run lateralward on the pelvic surface of the levator ani to end in the internal iliac vein. Veins superior to the middle rectal veins in the colon and rectum drain via the portal system to the liver. Veins inferior to, and including, the middle rectal veins drain into the systemic circulation and are returned to the heart, bypassing the liver.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Archives of Computational Methods in Engineering** Archives of Computational Methods in Engineering: Archives of Computational Methods in Engineering is a scholarly journal that provides a forum for spreading the results of research and advanced industrial practice in computational engineering, with particular emphasis on mechanics and its related areas. It publishes reviews presenting developments in computational engineering.

Subjects covered: Areas of research published in the journal include modeling; solution techniques and applications of computational methods in areas including liquid and gas dynamics, solid and structural mechanics, and biomechanics; variational formulations and numerical algorithms related to the implementation of the finite and boundary element methods; and finite difference, finite volume, and other computational methods.

Impact factor: The journal has a 2020 impact factor of 7.302.

Indexing: Among others, the journal is abstracted and indexed in Google Scholar, Index to Scientific Reviews, Journal Citation Reports/Science Edition, OCLC, Science Citation Index Expanded (SciSearch), Scopus, Summon by Serial Solutions, VINITI (Russian Academy of Sciences) and Zentralblatt MATH.

Editorial board: The editors-in-chief of the journal are Michael Kleiber (Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw) and Eugenio Oñate (School of Civil Engineering and CIMNE, Technical University of Catalonia (UPC), Barcelona, Spain).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Optical material** Optical material: Optical materials are transparent materials from which optical lenses, prisms, windows, waveguides, and second-surface mirrors can be made. They are required in most optical instruments. Most optical materials are rigid solids, but flexible and elastic materials are used for special functions. Contained liquids can also be used as optical materials.

Known optical materials include amorphous materials and crystalline materials:
Glass
Plastics, including polycarbonate and poly(methyl methacrylate)
Sodium chloride
Strontium fluoride
Synthetic diamond
Zinc sulfide

Optical materials useful with infrared light include:
Silicon (1.2–7 µm)
Zinc selenide

Non-linear optical materials or nonlinear media transform light in various ways in nonlinear optics. Non-linear optical materials include:
Barium borate

Some materials (optical and non-optical) can be made into first-surface mirrors, by silvering them or plating them with metal. Some metals can be highly polished, providing both support and the reflective surface of first-surface mirrors.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Morphogenesis** Morphogenesis: Morphogenesis (from the Greek morphê shape and genesis creation, literally "the generation of form") is the biological process that causes a cell, tissue or organism to develop its shape. It is one of three fundamental aspects of developmental biology along with the control of tissue growth and the patterning of cellular differentiation. The process controls the organized spatial distribution of cells during the embryonic development of an organism. Morphogenesis can also take place in a mature organism, such as in the normal maintenance of tissue by stem cells or in the regeneration of tissues after damage. Cancer is an example of highly abnormal and pathological tissue morphogenesis. Morphogenesis also describes the development of unicellular life forms that do not have an embryonic stage in their life cycle. Morphogenesis is essential for the evolution of new forms.

Morphogenesis: Morphogenesis is a mechanical process involving forces that generate mechanical stress, strain, and movement of cells, and can be induced by genetic programs according to the spatial patterning of cells within tissues. Abnormal morphogenesis is called dysmorphogenesis.

History: Some of the earliest ideas and mathematical descriptions of how physical processes and constraints affect biological growth, and hence natural patterns such as the spirals of phyllotaxis, were written by D'Arcy Wentworth Thompson in his 1917 book On Growth and Form and by Alan Turing in his The Chemical Basis of Morphogenesis (1952). Where Thompson explained animal body shapes as being created by varying rates of growth in different directions, for instance to create the spiral shell of a snail, Turing correctly predicted a mechanism of morphogenesis, the diffusion of two different chemical signals, one activating and one deactivating growth, to set up patterns of development, decades before the formation of such patterns was observed. The fuller understanding of the mechanisms involved in actual organisms required the discovery of the structure of DNA in 1953, and the development of molecular biology and biochemistry.

Genetic and molecular basis: Several types of molecules are important in morphogenesis. Morphogens are soluble molecules that can diffuse and carry signals that control cell differentiation via concentration gradients. Morphogens typically act through binding to specific protein receptors. An important class of molecules involved in morphogenesis are transcription factor proteins that determine the fate of cells by interacting with DNA. These can be coded for by master regulatory genes, and either activate or deactivate the transcription of other genes; in turn, these secondary gene products can regulate the expression of still other genes in a regulatory cascade of gene regulatory networks. At the end of this cascade are classes of molecules that control cellular behaviors such as cell migration, or, more generally, their properties, such as cell adhesion or cell contractility. For example, during gastrulation, clumps of stem cells switch off their cell-to-cell adhesion, become migratory, and take up new positions within an embryo where they again activate specific cell adhesion proteins and form new tissues and organs. Developmental signaling pathways implicated in morphogenesis include Wnt, Hedgehog, and ephrins.

Cellular basis: At a tissue level, ignoring the means of control, morphogenesis arises because of cellular proliferation and motility.
Morphogenesis also involves changes in the cellular structure or how cells interact in tissues. These changes can result in tissue elongation, thinning, folding, invasion or separation of one tissue into distinct layers. The latter case is often referred to as cell sorting. Cell "sorting out" consists of cells moving so as to sort into clusters that maximize contact between cells of the same type. The ability of cells to do this has been proposed to arise from differential cell adhesion by Malcolm Steinberg through his differential adhesion hypothesis. Tissue separation can also occur via more dramatic cellular differentiation events during which epithelial cells become mesenchymal (see Epithelial–mesenchymal transition). Mesenchymal cells typically leave the epithelial tissue as a consequence of changes in cell adhesive and contractile properties. Following epithelial–mesenchymal transition, cells can migrate away from an epithelium and then associate with other similar cells in a new location. In plants, cellular morphogenesis is tightly linked to the chemical composition and the mechanical properties of the cell wall.

Cellular basis: Cell-to-cell adhesion During embryonic development, cells are restricted to different layers due to differential affinities. One of the ways this can occur is when cells share the same cell-to-cell adhesion molecules. For instance, homotypic cell adhesion can maintain boundaries between groups of cells that have different adhesion molecules. Furthermore, cells can sort based upon differences in adhesion between the cells, so even two populations of cells with different levels of the same adhesion molecule can sort out. In cell culture, cells that have the strongest adhesion move to the center of a mixed aggregate of cells. Moreover, cell-cell adhesion is often modulated by cell contractility, which can exert forces on the cell-cell contacts so that two cell populations with equal levels of the same adhesion molecule can sort out. The molecules responsible for adhesion are called cell adhesion molecules (CAMs). Several types of cell adhesion molecules are known and one major class of these molecules are cadherins. There are dozens of different cadherins that are expressed on different cell types. Cadherins bind to other cadherins in a like-to-like manner: E-cadherin (found on many epithelial cells) binds preferentially to other E-cadherin molecules. Mesenchymal cells usually express other cadherin types, such as N-cadherin.

Cellular basis: Extracellular matrix The extracellular matrix (ECM) is involved in keeping tissues separated, providing structural support or providing a structure for cells to migrate on. Collagen, laminin, and fibronectin are major ECM molecules that are secreted and assembled into sheets, fibers, and gels. Multisubunit transmembrane receptors called integrins are used to bind to the ECM. Integrins bind extracellularly to fibronectin, laminin, or other ECM components, and intracellularly to the microfilament-binding proteins α-actinin and talin to link the cytoskeleton with the outside. Integrins also serve as receptors to trigger signal transduction cascades when binding to the ECM. A well-studied example of morphogenesis that involves ECM is mammary gland ductal branching.

Cellular basis: Cell contractility Tissues can change their shape and separate into distinct layers via cell contractility. Just as in muscle cells, myosin can contract different parts of the cytoplasm to change its shape or structure.
Myosin-driven contractility in embryonic tissue morphogenesis is seen during the separation of germ layers in the model organisms Caenorhabditis elegans, Drosophila and zebrafish. There are often periodic pulses of contraction in embryonic morphogenesis. A model called the cell state splitter involves alternating cell contraction and expansion, initiated by a bistable organelle at the apical end of each cell. The organelle consists of microtubules and microfilaments in mechanical opposition. It responds to local mechanical perturbations caused by morphogenetic movements. These then trigger traveling embryonic differentiation waves of contraction or expansion over presumptive tissues that determine cell type and are followed by cell differentiation. The cell state splitter was first proposed to explain neural plate morphogenesis during gastrulation of the axolotl, and the model was later generalized to all of morphogenesis.

Cellular basis: Branching morphogenesis In the development of the lung, a bronchus branches into bronchioles forming the respiratory tree. The branching is a result of the tip of each bronchiolar tube bifurcating, and the process of branching morphogenesis forms the bronchi, bronchioles, and ultimately the alveoli. Branching morphogenesis is also evident in the ductal formation of the mammary gland. Primitive duct formation begins in development, but the branching formation of the duct system begins later in response to estrogen during puberty and is further refined in line with mammary gland development.

Cancer morphogenesis: Cancer can result from disruption of normal morphogenesis, including both tumor formation and tumor metastasis. Mitochondrial dysfunction can result in increased cancer risk due to disturbed morphogen signaling.

Virus morphogenesis: During assembly of the bacteriophage (phage) T4 virion, the morphogenetic proteins encoded by the phage genes interact with each other in a characteristic sequence. Maintaining an appropriate balance in the amounts of each of these proteins produced during viral infection appears to be critical for normal phage T4 morphogenesis. Phage T4 encoded proteins that determine virion structure include major structural components, minor structural components and non-structural proteins that catalyze specific steps in the morphogenesis sequence. Phage T4 morphogenesis is divided into three independent pathways: the head, the tail and the long tail fibres, as detailed by Yap and Rossman.

Computer models: An approach to modeling morphogenesis in computer science or mathematics can be traced to Alan Turing's 1952 paper, "The chemical basis of morphogenesis", a model now known as the Turing pattern.

Computer models: Another famous model is the so-called French flag model, developed in the sixties. Improvements in computer performance in the twenty-first century enabled the simulation of relatively complex morphogenesis models. In 2020, such a model was proposed in which cell growth and differentiation are governed by a cellular automaton with parametrized rules. As the rules' parameters are differentiable, they can be trained with gradient descent, a technique which has been highly optimized in recent years due to its use in machine learning. This model was limited to the generation of pictures, and is thus two-dimensional.
Computer models: A model similar to the one described above was subsequently extended to generate three-dimensional structures, and was demonstrated in the video game Minecraft, whose block-based nature made it particularly expedient for the simulation of 3D cellular automata.
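Since the Turing pattern underlies much of this section, here is a minimal Python/NumPy sketch of a Turing-type reaction-diffusion simulation, using the Gray–Scott model as a concrete stand-in for Turing's two-chemical (activator/inhibitor) mechanism. The grid size, parameter values, and seeding are illustrative assumptions, not taken from Turing's paper or from the 2020 cellular-automaton model.

```python
# Minimal sketch of a Turing-type reaction-diffusion system
# (Gray-Scott variant). Two chemicals U and V diffuse at different
# rates and react; the resulting instability produces spatial patterns.
import numpy as np

n = 128
U = np.ones((n, n))
V = np.zeros((n, n))
# Seed a small square of chemical V in the centre of the grid.
m = n // 2
U[m-5:m+5, m-5:m+5] = 0.50
V[m-5:m+5, m-5:m+5] = 0.25

Du, Dv = 0.16, 0.08   # diffusion rates (V diffuses more slowly)
F, k = 0.035, 0.065   # feed and kill rates (illustrative values)

def laplacian(Z):
    """Discrete Laplacian with periodic boundary conditions."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

for _ in range(5000):  # explicit Euler steps with dt = 1
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1.0 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

# V now holds a spotted/striped Turing-type pattern; plotting it
# (e.g. with matplotlib's imshow) makes the pattern visible.
print(V.min(), V.max())
```

The essential ingredients are exactly those Turing identified: one substance promotes its own production while a faster-diffusing antagonist suppresses it, and the difference in diffusion rates destabilizes the uniform state into a pattern.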
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded