**Fuzzy math (politics)** Fuzzy math (politics): Fuzzy math is a catch phrase used often by American politicians to describe numbers, particularly in regard to government spending, that they claim do not add up correctly. It is frequently used by politicians who are dismissing another politician's numbers as doubtful or otherwise inaccurate. Origin: The term "fuzzy math" was first heard during the debates prior to the 2000 U.S. presidential election. It was used by George W. Bush, who dismissed the figures used by his opponent Al Gore. Others later turned the term against Bush. The term has since been used by many other politicians in attacks against opponents or various stances, such as concern over global warming.
**Java 4K Game Programming Contest** Java 4K Game Programming Contest: The Java 4K Game Programming Contest, also known as Java 4K and J4K, is an informal contest that was started by the Java Game Programming community to challenge their software development abilities. Concept: The goal of the contest is to develop the best game possible within four kibibytes (4096 bytes) of data. While the rules originally allowed for nearly any distribution method, recent years have required that the games be packaged as either an executable JAR file, a Java Webstart application, or a Java Applet, and now only an applet. Concept: Because the Java class file format incurs quite a bit of overhead, creating a complete game in 4K can be quite a challenge. As a result, contestants must choose how much of their byte budget they wish to spend on graphics, sound, and gameplay. Finding the best mix of these factors can be extremely difficult. Many new entrants believe that impressive graphics alone are enough to carry a game. However, entries with more modest graphics and focus on gameplay have regularly scored higher than such technology demonstrations. Concept: Prizes When first conceived, the "prize" for winning the contest was a bundle of "Duke Dollars", a virtual currency used on Sun Microsystems' Java forums. This currency could theoretically be redeemed for physical prizes such as watches and pens. The artificial currency was being phased out around the time the 4K contest was introduced, thus leaving no real prize at all. While there has been some discussion of providing prizes for the contest, it has continued to thrive without them. Concept: Spin-offs Following the creation of the Java4K contest, spin-offs targeting 8K, 16K, or a specific API like LWJGL have been launched, usually without success. While there has been a great deal of debate on why the Java 4K contest is so successful, the consensus from the contestants seems to be that it provides a very appealing challenge: not only do the entrants get the chance to show off how much they know about Java programming, but the 4K size helps "even the odds" compared to other competitions where the use of artists and musicians can easily place an entry far ahead of the others. The contestants seem to believe that 4K is the "sweet spot" that balances what an individual can do. Because of the tricks developed for the 4K contest, it's believed that adding even a single kilobyte would open the doors to far more complex games that are beyond the ability of a single developer. History: Contest creation The Java 4K Game Programming Contest came into being on August 28, 2002, when a user by the handle of codymanix posted the suggestion to the Sun Microsystems Java forums. After a bit of argument over how feasible a game would be in 4K, a user by the handle of mlk officially organized the contest on August 29, 2002. History: Slowly but surely, entries began to trickle in for the contest. The majority of these entries were Applets, as it was believed that separating the images from the class files would help reduce the size of the file. Future contests would see a reversal of this as game creators utilized compressed JAR files to reduce the size of their code. History: One of the most interesting points about the first contest was that non-game applications were allowed. One contestant produced a telnet server in 4K of Java. However, this artifact of the first competition did not survive, and was most likely allowed because of the loose handling of the first contest.
While no winner was officially declared the first year, the 4K Racing game submitted by Robin Chaddock (aka Abuse/AbU5e) was generally agreed upon to have "won". History: Successive competitions became more and more organized, with many of the contestants pitching in to handle administration and promotion of the contest. All contests received official judging, with the method of judging being refined each year. By the third year, the contest was officially transitioned over to the JavaGaming.org forums. The fourth year saw the introduction of the JavaUnlimited website as the official repository for the contest. The site had been used the previous year to track entries that had been posted to the official threads on JavaGaming.org and forum.java.sun.com. History: Evolution throughout the years Year 2 (2004): Heavy use of pre-rendered sprites, transparency, and sound effects defined this year's entries. The strongest contenders were Defender 4000, Abuse's Shooty-Transparenty Game, and Space Invaders. However, Space Invaders' lack of sound caused it to fall behind the other two entries, which were competing hard to pack in the most technology and gameplay. History: Of particular interest were the different tactics used by the two entries. For graphics, Abuse used a precious few high-color images to which he applied transparency and rotation at runtime. Jbanes, on the other hand, developed an image-packing technique that allowed him to store twenty-one single-color images. Rather than applying rotation and transparency, he chose to use his larger number of images to produce pre-rendered animations. For sound, Abuse used clear chimes and other instruments from the MIDI soundbank. Jbanes chose to use runtime-generated PCM sound that sounded more like video games of the late 1970s and early 1980s. History: Both approaches had their merit, so it's difficult to say what finally swayed the judge's opinion. What is known is that Year 2 was the last year that sound would be a deciding factor in the games. In future years, the bytes allocated to sound were reallocated to other functions such as 3D graphics, levels, and bosses. History: Year 2 was the first year that official judging took place. Unlike subsequent years, the only judge was the contest organizer, mlk. After careful consideration, the judge decided to award Prong the Best Technical Achievement Award, and declared Defender 4000 the overall winner. He scored each game but did not use this score in determining the winner. Abuse's Shooty-Transparenty Game actually scored one point higher than Defender 4000. History: Year 3 (2005): Year 3 was defined by a major influx of professional Java developers, 3D graphics in the games, and a gradual transition to the JavaGaming.org forums. JavaUnlimited also began mirroring the competition entries in a permanent archive. While the mirror started as a manually edited HTML page, it eventually grew into a complete content management site with a database back-end. History: Judging this year was handled by a panel of three volunteers, professional developers who were not participating in the contest. One of the volunteer judges was Chris Melissinos, Sun's Chief Gaming Officer. The scoring method used was based on the method that mlk had applied the previous year, but was updated to allow the judges to give awards for exceptional gameplay or technological achievements. History: While most of the entries were of exceptional quality, T4XI by Kevin Glass (aka kevglass) was chosen as the winner.
Besides having extremely original gameplay, it provided exceptional graphics through a pseudo-3D effect that gave perspective to the buildings. History: A minor amount of controversy erupted due to entries that judges had failed to score. Entries like JM4K and IsOlation Net were either too complex for the judges to launch, or contained networking components that they couldn't test. After this year's competition, the rules were changed to require that games be self-executable. In addition, contestants were warned in advance about the difficulties in judging networked games. History: Year 4 (2006): Year 4 marked a period of transition toward making gameplay a priority over graphics and technical accomplishment. Many of the games were fairly simple in design, but aimed to make up for it with engrossing or addictive gameplay. History: For the first time in the contest's history, a special forum was set up on JavaGaming.org to host the contest. In addition, the JavaUnlimited.net site became the official site for entries and judging. While judging was originally going to be handled through JavaUnlimited by the Javagaming.org community, pushback from several members resulted in falling back on a more traditional judging system. History: After the results came back, Miners4K by Markus Persson was declared the winner. Second place was given to Kevin Glass's Roll4K, and third place was given to Goomba4K by Woogley. History: The results of Year 4's judging were significantly better than those of Year 3, in part due to the rule changes which forced the entries to conform to easily executable formats. However, this did not eliminate judging issues. Some controversy erupted when two entries (Xero and JSquares) were given lower scores due to technical glitches. Several recommendations were proposed to prevent this from happening in future contests, including trimmed mean scoring and verification of judges' scoring before acceptance. History: Year 5 (2007): Year 5 launched in December 2006 and lasted until March 1, 2007. It saw some great games, with much less focus on 3D and pseudo-3D graphics. Most games were 2D, with Pipe Extreme and Trailblazer being the only notable exceptions (one could argue that a few others are 3D as well, but distinctly less so). History: Just like year 4, a dedicated forum on JavaGaming.org was used to host the contest. JavaUnlimited's system was used for hosting the games again, being considered the official site for the entries. A site update was planned for JavaUnlimited, but did not occur. Originally, the plan was to have a public vote and a judging panel. One month after the contest closing date, the organizer dropped the judging panel without further explanation, which caused some unrest in the forums, accusations of censorship, locked threads and two participants withdrawing their entries from the contest (bringing the total down from 65 to 58). Voting was limited to javagaming.org forum participants, and within the allotted time, 25 people voted. About two months after the contest closing date, the official results were announced. History: The winner was Metro4k by Blaine Hodge, followed by Jojoh's Roadfourk and Ulf Ochsenfahrt's aichess4k. Metro4k is a Sim City-like city simulation game, Roadfourk a racing game, and aichess4k a chess game featuring an AI opponent. Unlike previous years, year 5 saw no game take the "last place", because the approval voting system used only gave votes to around half the games. Year 6 (2008): Year 6 launched in December 2007 and lasted until March 1, 2008.
Notably fewer games were submitted than in 2006 and 2007 – only 21 in total. Most of the games were 2D, with a total of 3 games using 3D or pseudo-3D graphics. History: The competition was hosted on a new website, Java4k.com. Games from previous years can also be found on the new website. Before the launch of the contest, woogley had announced his withdrawal from arranging the contest. The task of administrating the contest and hosting the site was therefore taken over by Arni Arent (appel) and Joakim Johnsson (jojoh). Just like previous years, there was also a dedicated forum at Java-Gaming.org. History: The games were then thoroughly reviewed by five judges: Arni Arent, Joakim Johnsson, Kevin Glass, Matt Hicks and Chris Melissinos. They reviewed each game in three categories: Overall, Technical and Presentation. The results were announced on March 28, 2008. History: Year 7 (2009): Year 7 launched in December 2008 and lasted until February 28, 2009 (extended from an original closing date of January 31). The number of games submitted returned to previous levels, with 67. This year introduced a requirement (later relaxed, but still followed by most games) to use JNLP deployment, and as a result had a mix of applications and applets. History: Other technical firsts for this year were the submission of word games and a game which used the microphone. Word Twister used built-in levels, and Scr4mble used reflection to grab class names from the J2SE API and split them into words to build a dictionary. Frequent Flier was controlled by the pitch sung into the mic. The games were reviewed by five judges: Arni Arent, Chris Melissinos, Matt Hicks, Eli Delventhal, and Mark DeLoura. As previously, they reviewed in the three categories of Overall, Technical, and Presentation. There was minor controversy over the scoring because some judges were unable to play some games. Their scores for those games were initially 0 and counted against those games when the scores were first released on April 1, but the averages were changed to discount these 0 scores three hours later. History: Year 8 (2010) to Year 12 (2014): Following problems with Webstart in 2009, the 2010 and later contests were applets-only, but the 2010 contest did introduce the option of using pack200 compression. Since 2010, judges have given only an overall score, which was normalised before averaging. There was also a separate community voting system where each voter had 50 points (25 before 2013) to allocate between the games, with a limit of 5 points to any game. Since 2013, there has been the option for voters to add a short sentence of feedback.
**Oncology nursing** Oncology nursing: An oncology nurse is a specialized nurse who cares for cancer patients. These nurses require advanced certifications and clinical experience in oncology beyond what the typical baccalaureate nursing program provides. Oncology nursing care can be defined as meeting the various needs of oncology patients during the course of their disease, including appropriate screenings and other preventive practices, symptom management, care to retain as much normal functioning as possible, and supportive measures at the end of life. Certification in the United States: The Oncology Nursing Certification Corporation (ONCC) offers several different options for board certification in oncological nursing. Certification is a voluntary process and ensures that a nurse has proper qualifications and knowledge of a specialty area and has kept up-to-date in his or her education. Certification in the United States: The ONCC offers eight options for certification. Basic: OCN (Oncology Certified Nurse), CPON (Certified Pediatric Oncology Nurse), and CPHON (Certified Pediatric Hematology Oncology Nurse); Specialty: BMTCN (Blood and Marrow Transplant Certified Nurse) and CBCN (Certified Breast Care Nurse); Advanced: AOCN (Advanced Oncology Certified Nurse), AOCNP (Advanced Oncology Certified Nurse Practitioner), and AOCNS (Advanced Oncology Certified Clinical Nurse Specialist). Certification is granted for four years, after which it must be renewed by taking a recertification test or by earning a certain number of continuing education credits. Certification in the United States: To become certified, nurses must have an RN license, meet specific eligibility criteria for nursing experience and specialty practice, and pass a multiple-choice test. For the advanced AOCNP and AOCNS certifications, a nurse must have a master's degree or higher in nursing and a minimum of 500 hours of supervised clinical practice of oncology nursing. The AOCNP certification also requires successful completion of an accredited nurse practitioner program. Oncology Nursing in Morocco: Demand The demand for oncology nurses is enormous in Morocco. Statistics of the Moroccan Ministry of Health indicate that the death toll from malignant neoplasms amounts to 17 thousand a year. The number of patients with cancer is believed to be three times the number of annual deaths. A recent study of the European Institute of Health Sciences (Institut Européen des Sciences de la Santé) projected the need for oncology nurses in 2025 at 5 thousand nurses. Yet, the number of qualified oncology nurses in the country is nil. The reason is obviously the absence of a formal educational program in oncology nursing. Oncology Nursing in Morocco: Oncology nursing training in Morocco There currently exists only one educational program in oncology nursing, offered by the European Institute of Health Sciences. It was approved by the Ministry of Higher Education as well as the Ministry of Health in 2014. The duration of this Bachelor of Science program in Oncology Nursing is 3 years, and it encompasses a total of 6 thousand hours, equivalent to 120 semester credits in the US educational system and 180 ECTS in the European system. The program attracts a large number of students from African countries. Oncology Nursing in Morocco: Certification requirements in Morocco In Morocco, there exists no system for certification of oncology nurses.
However, graduates of the oncology nursing program of the European Institute of Health Sciences can sit for certification exams abroad, particularly in European countries. Roles: Oncology nurses, like any Registered Nurse, have a large variety of settings they can work in. Oncology nurses can work in inpatient settings such as hospitals, in outpatient settings, in hospice services, or in physician offices. There are a variety of specialties such as radiation, surgery, pediatric, or gynecologic. Oncology nurses have advanced knowledge of assessing the client's status, and from this assessment will help the multi-disciplinary medical team to develop a treatment plan. Roles: Education The nurse must also educate the patient on their condition, its side effects, its treatment plan, and how to prevent possible complications. This education should be done effectively throughout the treatment of the disease, according to the teaching style that best suits the particular patient. According to the Oncology Nursing Standards, the patient or caregivers for the patient should understand the state of the disease and the therapy used at their education level, understand the therapy schedule and when it is being used, be involved in decisions regarding their own care, and state interventions for serious side effects and complications of the disease and intervention. Roles: Treatment Nurses must be able to manage the many side effects associated with cancer and its treatment. Nurses must have extensive knowledge of pharmacological and nonpharmacological nursing interventions, and when they are appropriate to use. Roles: Chemotherapy Oncology nurses must have appropriate training in the administration, handling, side effects, and dosing of chemotherapy. Each institution will have its own policies for various chemotherapy drugs to ensure adequate training and for prevention of errors. The Oncology Nursing Society (ONS) and Oncology Nursing Certification Corporation (ONCC) offer a Chemotherapy/Biotherapy training course available to any oncology nurse to ensure the safe administration and management of side effects of chemotherapy and biotherapy agents. This course consists of 16 contact hours. This certification needs to be renewed after two years.
**NGC 4194** NGC 4194: NGC 4194, the Medusa merger, is a pair of interacting galaxies in the constellation Ursa Major. A region of extreme star formation 500 ly (150 pc) across exists in the center of the Eye of Medusa, the central gas-rich region.
**Dualite** Dualite: Dualite is a very rare and complex mineral of the eudialyte group, its complexity being expressed in its formula Na₃₀(Ca,Na,Ce,Sr)₁₂(Na,Mn,Fe,Ti)₆Zr₃Ti₃MnSi₅₁O₁₄₄(OH,H₂O,Cl)₉. The formula is simplified as it does not show the presence of cyclic silicate groups. The name of the mineral comes from its dual nature: zircono- and titanosilicate at once. Dualite has two modules in its structure: an alluaivite module and a eudialyte module. After alluaivite and labyrinthite, it is the third representative of the eudialyte group with essential titanium. Occurrence and association: Dualite was found in peralkaline pegmatoid rock at Mt Alluaiv, Lovozero massif, Kola Peninsula, Russia. It associates with aegirine, alkaline amphibole, cancrinite, eudialyte, K-Na feldspar, lamprophyllite, lomonosovite, lovozerite, nepheline, sodalite, sphalerite, villiaumite, and vuonnemite. Notes on chemistry: Admixtures not shown in the formula are chiefly niobium, with lesser amounts of aluminium, barium, potassium, neodymium and lanthanum. Dualite is chemically similar to labyrinthite and rastsvetaevite. Notes on crystal structure: Dualite has a doubled c value compared to ordinary eudialyte. Its structural framework has 24 layers.
**Laser Doppler velocimetry** Laser Doppler velocimetry: Laser Doppler velocimetry, also known as laser Doppler anemometry, is the technique of using the Doppler shift in a laser beam to measure the velocity in transparent or semi-transparent fluid flows or the linear or vibratory motion of opaque, reflecting surfaces. The measurement with laser Doppler anemometry is absolute and linear with velocity and requires no pre-calibration. Technology origin: The development of the helium–neon laser (He-Ne) in 1962 at the Bell Telephone Laboratories provided the optics community with a continuous wave electromagnetic radiation source that was highly concentrated at a wavelength of 632.8 nanometers (nm) in the red portion of the visible spectrum. It was discovered that fluid flow measurements could be made using the Doppler effect on a He-Ne beam scattered by small polystyrene spheres in the fluid. At the Research Laboratories of Brown Engineering Company (later Teledyne Brown Engineering), this phenomenon was used to develop the first laser Doppler flowmeter using heterodyne signal processing. This instrument became known as the laser Doppler velocimeter and the technique was called laser Doppler velocimetry. It is also referred to as laser Doppler anemometry. Technology origin: Early laser Doppler velocimetry applications included measuring and mapping the exhaust from rocket engines with speeds up to 1000 m/s, as well as determining flow in a near-surface blood artery. Similar instruments were also developed for solid surface monitoring, with applications ranging from measuring product speeds in production lines of paper and steel mills to measuring vibration frequency and amplitude of surfaces. Operating principles: In its simplest and most presently used form, laser Doppler velocimetry crosses two beams of collimated, monochromatic, and coherent laser light in the flow of the fluid being measured. The two beams are usually obtained by splitting a single beam, thus ensuring coherence between the two. Lasers with wavelengths in the visible spectrum (390–750 nm) are commonly used; these are typically He-Ne, Argon ion, or laser diode, allowing the beam path to be observed. A transmitting optics system focuses the beams to intersect at their waists (the focal point of a laser beam), where they interfere and generate a set of straight fringes. As particles (either naturally occurring or induced) entrained in the fluid pass through the fringes, they reflect light that is then collected by a receiving optics and focused on a photodetector (typically an avalanche photodiode). Operating principles: The reflected light fluctuates in intensity, the frequency of which is equivalent to the Doppler shift between the incident and scattered light, and is thus proportional to the component of particle velocity which lies in the plane of two laser beams. If the sensor is aligned to the flow such that the fringes are perpendicular to the flow direction, the electrical signal from the photodetector will then be proportional to the full particle velocity. By combining three devices (e.g., He-Ne, Argon ion, and laser diode) with different wavelengths, all three flow velocity components can be simultaneously measured. Another form of laser Doppler velocimetry, particularly used in early device developments, has a completely different approach akin to an interferometer.
The sensor also splits the laser beam into two parts; one (the measurement beam) is focused into the flow and the second (the reference beam) passes outside the flow. A receiving optics provides a path that intersects the measurement beam, forming a small volume. Particles passing through this volume will scatter light from the measurement beam with a Doppler shift; a portion of this light is collected by the receiving optics and transferred to the photodetector. The reference beam is also sent to the photodetector where optical heterodyne detection produces an electrical signal proportional to the Doppler shift, by which the particle velocity component perpendicular to the plane of the beams can be determined. The signal detection scheme of the instrument uses the principle of optical heterodyne detection. This principle is similar to that of other laser Doppler-based instruments such as the laser Doppler vibrometer or the laser surface velocimeter. It is possible to apply digital techniques to the signal to obtain the velocity as a measured fraction of the speed of light, and therefore in one sense laser Doppler velocimetry is a particularly fundamental measurement traceable to the S.I. system of measurement. Applications: In the decades since laser Doppler velocimetry was first introduced, a wide variety of laser Doppler sensors have been developed and applied. Applications: Flow research Laser Doppler velocimetry is often chosen over other forms of flow measurement because the equipment can be outside of the flow being measured and therefore has no effect on the flow. Some typical applications include the following: wind tunnel velocity experiments for testing the aerodynamics of aircraft, missiles, cars, trucks, trains, and buildings and other structures; velocity measurements in water flows (research in general hydrodynamics, ship hull design, rotating machinery, pipe flows, channel flow, etc.); fuel injection and spray research, where there is a need to measure velocities inside engines or through nozzles; and environmental research (combustion research, wave dynamics, coastal engineering, tidal modeling, river hydrology, etc.). One disadvantage has been that laser Doppler velocimetry sensors are range-dependent; they have to be calibrated minutely and the distances at which they measure have to be precisely defined. This distance restriction has recently been at least partially overcome with a new sensor that is range independent. Applications: Automation Laser Doppler velocimetry can be useful in automation, which includes the flow examples above. It can also be used to measure the speed of solid objects, like conveyor belts. This can be useful in situations where attaching a rotary encoder (or a different mechanical speed measurement device) to the conveyor belt is impossible or impractical. Applications: Medical applications Laser Doppler velocimetry is used in hemodynamics research as a technique to partially quantify blood flow in human tissues such as skin or the eye fundus. Within the clinical environment, the technology is often referred to as laser Doppler flowmetry; when images are made, it is referred to as laser Doppler imaging. The beam from a low-power laser (usually a laser diode) penetrates the skin sufficiently to be scattered with a Doppler shift by the red blood cells and return to be concentrated on a detector.
These measurements are useful to monitor the effect of exercise, drug treatments, and environmental or physical manipulations on targeted micro-sized vascular areas. The laser Doppler vibrometer is being used in clinical otology for the measurement of tympanic membrane (eardrum), malleus (hammer), and prosthesis head displacement in response to sound inputs of 80- to 100-dB sound-pressure level. It also has potential use in the operating room to perform measurements of prosthesis and stapes (stirrup) displacement. Applications: Navigation The Autonomous Landing Hazard Avoidance Technology used in NASA's Project Morpheus lunar lander to automatically find a safe landing place contains a lidar Doppler velocimeter that measures the vehicle's altitude and velocity. The AGM-129 ACM cruise missile uses a laser Doppler velocimeter for precise terminal guidance. Applications: Calibration and measurement Laser Doppler velocimetry is used in the analysis of vibration of MEMS devices, often to compare the performance of devices such as accelerometers-on-a-chip with their theoretical (calculated) modes of vibration. As a specific example in which the unique features of laser Doppler velocimetry are important, the measurement of velocity of a MEMS watt balance device has allowed greater accuracy in the measurement of small forces than previously possible, through directly measuring the ratio of this velocity to the speed of light. This is a fundamental, traceable measurement that now allows traceability of small forces to the S.I. system.
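To make the dual-beam arrangement described under Operating principles concrete, here is a minimal Python sketch; the function names and the example numbers are illustrative assumptions, not taken from the article. The fringe spacing of the measurement volume follows from the laser wavelength and the beam-crossing angle, and a particle's velocity component perpendicular to the fringes is the measured Doppler burst frequency multiplied by that spacing.

```python
import math

def fringe_spacing(wavelength_m, full_crossing_angle_rad):
    """Fringe spacing d = lambda / (2 sin(theta/2)) for two beams crossing at angle theta."""
    return wavelength_m / (2.0 * math.sin(full_crossing_angle_rad / 2.0))

def velocity_from_doppler(doppler_freq_hz, wavelength_m, full_crossing_angle_rad):
    """Velocity component perpendicular to the fringes: v = f_D * d."""
    return doppler_freq_hz * fringe_spacing(wavelength_m, full_crossing_angle_rad)

# Hypothetical example: He-Ne laser (632.8 nm), beams crossing at a full angle of
# 10 degrees, and a photodetector burst frequency of 1 MHz.
v = velocity_from_doppler(1.0e6, 632.8e-9, math.radians(10))
print(f"{v:.2f} m/s")  # about 3.63 m/s
```

As the article notes, repeating the measurement with additional beam pairs at different wavelengths would give the other two velocity components.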
**Intracranial hypertension syndrome** Intracranial hypertension syndrome: Intracranial hypertension syndrome is characterized by an elevated intracranial pressure, papilledema, and headache with occasional abducens nerve paresis, absence of a space-occupying lesion or ventricular enlargement, and normal cerebrospinal fluid chemical and hematological constituents.
**Enterprise Privacy Authorization Language** Enterprise Privacy Authorization Language: Enterprise Privacy Authorization Language (EPAL) is a formal language for writing enterprise privacy policies to govern data handling practices in IT systems according to fine-grained positive and negative authorization rights. It was submitted by IBM to the World Wide Web Consortium (W3C) in 2003 to be considered for recommendation. In 2004, a lawsuit was filed by Zero-Knowledge Systems claiming that IBM breached a copyright agreement from when they worked together in 2001–2002 to create Privacy Rights Markup Language (PRML). EPAL is based on PRML, which is why Zero-Knowledge argued it should be a co-owner of the standard.
**Steroid 11beta-monooxygenase** Steroid 11beta-monooxygenase: In enzymology, a steroid 11beta-monooxygenase (EC 1.14.15.4) is an enzyme that catalyzes the chemical reaction: a steroid + reduced adrenal ferredoxin + O₂ ⇌ an 11beta-hydroxysteroid + oxidized adrenal ferredoxin + H₂O. The 3 substrates of this enzyme are steroid, reduced adrenal ferredoxin, and O₂, whereas its 3 products are 11beta-hydroxysteroid, oxidized adrenal ferredoxin, and H₂O. Steroid 11beta-monooxygenase: This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O₂ as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O₂, with reduced iron-sulfur protein as one donor, and incorporation of one atom of oxygen into the other donor. The systematic name of this enzyme class is steroid,reduced-adrenal-ferredoxin:oxygen oxidoreductase (11beta-hydroxylating). Other names in common use include steroid 11beta-hydroxylase, steroid 11beta/18-hydroxylase, and oxygenase, steroid 11beta-mono-. This enzyme participates in C21-steroid hormone metabolism and androgen and estrogen metabolism. It employs one cofactor, heme.
**Boreout** Boreout: Boredom boreout syndrome is a psychological disorder that causes physical illness, mainly caused by mental underload at the workplace due to lack of either adequate quantitative or qualitative workload. One reason for boreout could be that the initial job description does not match the actual work. This theory was first expounded in 2007 in Diagnose Boreout, a book by Peter Werder and Philippe Rothlin, two Swiss business consultants. Symptoms and consequences: Symptoms of the bore-out syndrome are described by the Frankfurt psychotherapist Wolfgang Merkle as similar to the burnout syndrome. These include depression, listlessness and insomnia, but also tinnitus, susceptibility to infection, stomach upset, headache and dizziness. The consequences of boreout for employees are numerous, both psychological and physical, and more or less serious. On the psychological level, boredom, dissatisfaction, and permanent frustration gradually lead the victim of a boreout into a vicious circle. They gradually lose the will to act at the professional level and at the personal level. To the loss of self-esteem is added the constant anxiety of being discovered. The boreout victim lives with the constant fear that their supervisor, colleagues, or friends will discover their inactivity and duplicity. Confronting and enduring the unsatisfactory situation leads to further stress that paralyzes and strains. Being constantly confronted with the emptiness of their professional life and their apparent uselessness in society, the employee may experience significant stress. The suffering is all the more accentuated because it cannot be shared, and if it is, it is not understood. This is also the reason that this syndrome is relatively unknown: it has to do with the fact that everyone prefers to have disorders that are socially accepted. Someone who says, 'I have so much to do, my God, the work is piling up on me', is much more respected than someone who says he's bored, has no responsibilities, and that's what gets him down. Everyone says: 'I want to trade with you, that's great!' This can lead to serious mental disorders such as personality destruction or even depression or suicide. Boreout is also a trigger for physical diseases such as certain types of epilepsy caused by stress or exhaustion, severe sleep disorders, hand and voice tremors, shingles, and ulcers. Symptoms and consequences: On the physical side, according to the British "Bored to death" study, employees who are bored at work are two to three times more likely to be victims of cardiovascular events than those whose employment is stimulating. The permanent anxiety in which the employee lives exhausts him/her physically. Fatigue is constant despite physical inactivity. Boreout can lead to eating disorders such as untimely nibbling or loss of appetite. Some people may use alcohol or drugs to overcome their discomfort and thus develop a harmful addiction. Elements: According to Peter Werder and Philippe Rothlin, the absence of meaningful tasks, rather than the presence of stress, is many workers' chief problem. Ruth Stock-Homburg defines boreout as a negative psychological state with low work-related arousal. Boreout has been studied in terms of its key dimensions. In their practitioners' book, Werder and Rothlin suggest three elements: boredom, lack of challenge, and lack of interest.
These authors disagree with the common perception that a demotivated employee is lazy; instead, they claim that the employee has lost interest in work tasks. Those suffering from boreout are "dissatisfied with their professional situation" in that they are frustrated at being prevented, by institutional mechanisms or obstacles as opposed to by their own lack of aptitude, from fulfilling their potential (as by using their skills, knowledge, and abilities to contribute to their company's development) and/or from receiving official recognition for their efforts. Elements: Relying on empirical data from service employees, Stock-Homburg identifies three components of boreout: job boredom, crisis of meaning and crisis of growth, which arise from a loss of resources due to a lack of challenges. Peter Werder and Philippe Rothlin suggest that the reason for researchers' and employers' overlooking the magnitude of boreout-related problems is that they are underreported because revealing them exposes a worker to the risk of social stigma and adverse economic effects. (By the same token, many managers and co-workers consider an employee's level of workplace stress to be indicative of that employee's status in the workplace.) There are several reasons boreout might occur. The authors note that boreout is unlikely to occur in many non-office jobs where the employee must focus on finishing a specific task (e.g., a surgeon) or helping people in need (e.g., a childcare worker or nanny). In terms of group processes, it may well be that the boss or certain forceful or ambitious individuals within the team take all the interesting work, leaving only the most boring tasks for the others. Alternatively, the structure of the organization may simply promote this inefficiency. Of course, few if any employees (even among those who would prefer to leave) want to be fired or laid off, so the vast majority are unwilling and unlikely to call attention to the dispensable nature of their role. Elements: As such, even if an employee has very little work to do or would only expect to be given qualitatively inadequate work, they give the appearance of "looking busy" (e.g., ensuring that a work-related document is open on one's computer, covering one's desk with file folders, and carrying briefcases (whether empty or loaded) from work to one's home and vice versa). Coping strategies: The symptoms of boreout lead employees to adopt coping or work-avoidance strategies that create the appearance that they are already under stress, suggesting to management both that they are heavily "in demand" as workers and that they should not be given additional work: "The boreout sufferer's aim is to look busy, to not be given any new work by the boss and, certainly, not to lose the job." Boreout strategies include: Stretching work strategy: This involves drawing out tasks so they take much longer than necessary. For example, if an employee's sole assignment during a work week is a report that takes three work days, the employee will "stretch" these three days of work over the entire work week. Stretching strategies vary from employee to employee. Some employees may do the entire report in the first three days, and then spend the remaining days surfing the Internet, planning their holiday, browsing online shopping websites, sending personal e-mails, and so on (all the while ensuring that their workstation is filled with the evidence of "hard work", by having work documents ready to be switched to on the screen).
Alternatively, some employees may "stretch" the work over the entire work week by breaking up the process with a number of pauses to send personal e-mails, go outside for a cigarette, get a coffee, chat with friends in other parts of the company, or even go to the washroom for a 10-minute nap. Coping strategies: Pseudo-commitment strategy: The pretence of commitment to the job by attending work and sitting at the desk, sometimes after work hours. As well, demotivated employees may stay at their desks to eat their lunch to give the impression that they are working through the lunch hour; in fact, they may be sending personal e-mails or reading online articles unrelated to work. An employee who spends the afternoon on personal phone calls may learn how to mask this by sounding serious and professional during their responses, to give the impression that it is a work-related call. For example, if a bureaucrat is chatting with a friend to set up a dinner date, when the friend suggests a time, the bureaucrat can respond that "we can probably fit that meeting time in." Consequences for employees: Consequences of boreout for employees include dissatisfaction and fatigue, as well as ennui and low self-esteem. The paradox of boreout is that despite hating the situation, employees feel unable to ask for more challenging tasks, to raise the situation with superiors or even look for a new job. The authors do, however, propose a solution: first, one must analyse one's personal job situation, then look for a solution within the company, and finally, if that does not help, look for a new job. If all else fails, turning to friends, family, or other co-workers for support can be extremely beneficial until any of the previously listed options become viable. Consequences for businesses: Stock-Homburg empirically investigated the impact of the three boreout dimensions among service employees - showing that a crisis of meaning as well as a crisis of growth had a negative impact on innovative work behavior. Another study showed that boreout negatively affects customer orientation of service employees. Prammer studied a variety of boreout effects on businesses: The continued presence of dissatisfied employees, who do not work because they have internally resigned, costs the company money. Consequences for businesses: If employees actively quit internally, they can damage the operation by demonstrating their ability to mentally restore the employment contract. The qualification of the employee is not recognized (the company cannot use their potential). The qualified employee changes jobs (and takes their experience), which can endanger entire business locations. As long as a recession continues, the affected employee remains in the company and leaves at the first suitable opportunity. In-house, a problem of distribution of work orders arises. Treating the subject as taboo causes real problems to go undetected. Whole generations of employees are lost (because they have no opportunity to fully realize their potential).
**G run** G run: In bluegrass and other music, the G run (G-run), or Flatt run (presumably after Lester Flatt), is a stereotypical ending used as a basis for improvisation on the guitar. It is the most popular run in bluegrass, the second being "Shave and a Haircut". The best-known version is a slight elaboration of the simplest form.
**NCAPG2** NCAPG2: Condensin-2 complex subunit G2 (CAP-G2), also known as chromosome-associated protein G2 or leucine zipper protein 5 (LUZP5), is a protein that in humans is encoded by the NCAPG2 gene. CAP-G2 is a subunit of condensin II, a large protein complex involved in chromosome condensation. It interacts with PLK1 through its C-terminal region during mitosis. Clinical importance: Mutations in this gene in humans have been associated with severe neurodevelopmental defects, failure to thrive, ocular abnormalities, and defects in urogenital and limb morphogenesis.
**Metric prefix** Metric prefix: A metric prefix is a unit prefix that precedes a basic unit of measure to indicate a multiple or submultiple of the unit. All metric prefixes used today are decadic. Each prefix has a unique symbol that is prepended to any unit symbol. The prefix kilo-, for example, may be added to gram to indicate multiplication by one thousand: one kilogram is equal to one thousand grams. The prefix milli-, likewise, may be added to metre to indicate division by one thousand; one millimetre is equal to one thousandth of a metre. Metric prefix: Decimal multiplicative prefixes have been a feature of all forms of the metric system, with six of these dating back to the system's introduction in the 1790s. Metric prefixes have also been used with some non-metric units. The SI prefixes are metric prefixes that were standardised for use in the International System of Units (SI) by the International Bureau of Weights and Measures (BIPM) in resolutions dating from 1960 to 2022. Since 2009, they have formed part of the ISO/IEC 80000 standard. They are also used in the Unified Code for Units of Measure (UCUM). List of SI prefixes: The BIPM specifies twenty-four prefixes for the International System of Units (SI). List of SI prefixes: The first uses of prefixes in SI date back to the definition of the kilogram after the French Revolution at the end of the 18th century. Several more prefixes came into use, and were recognised by the 1947 IUPAC 14th International Conference of Chemistry before being officially adopted for the first time in 1960. The most recent prefixes adopted were ronna-, quetta-, ronto-, and quecto- in 2022, after a proposal from British metrologist Richard J. C. Brown. The large prefixes ronna- and quetta- were adopted in anticipation of needs for use in data science, and because unofficial prefixes that did not meet SI requirements were already circulating. The small prefixes were also added, even without such a driver, in order to maintain symmetry. List of SI prefixes: Rules The symbols for the units of measure are combined with the symbols for each prefix name. The SI symbols for kilometre, kilogram, and kilowatt, for instance, are km, kg, and kW, respectively. (The symbol for kilo- is k.) Except for the early prefixes of kilo-, hecto-, and deca-, the symbols for the prefixes for multiples are uppercase letters, and those for the prefixes for submultiples are lowercase letters. List of SI prefixes: All of the metric prefix symbols are made from upper- and lower-case Latin letters except for the symbol for micro, which is uniquely a Greek letter "μ". List of SI prefixes: Like the numbers they combine with, SI units and unit symbols are never shown in italics. The prefixes and their symbols are always prefixed to the symbol for the unit without any intervening space or punctuation. This distinguishes a prefixed unit symbol from the product of unit symbols, for which a space or mid-height dot as separator is required. So, for instance, while 'ms' means millisecond, 'm s' or 'm·s' means metre second. List of SI prefixes: Prefixes corresponding to an integer power of one thousand are generally preferred, and the prefixes for a tenth and ten (deci-, deca-) and for a hundredth and a hundred (centi-, hecto-) are disfavoured. Hence 100 m is preferred over 1 hm (hectometre) or 10 dam (decametres). The prefixes deci- and centi-, and less frequently hecto- and deca-, are commonly used for everyday purposes; the centimetre (cm) is especially common.
Some modern building codes require that the millimetre be used in preference to the centimetre, because "use of centimetres leads to extensive usage of decimal points and confusion". Deprecated prefixes are also used to create metric units corresponding to older conventional units, for example hectares and hectopascals. List of SI prefixes: Prefixes may not be used in combination on a single symbol. This includes the case of the base unit kilogram, which already contains a prefix. For example, milligram (mg) is used instead of microkilogram (μkg). In the arithmetic of measurements having units, the units are treated as multiplicative factors to values. In the product of multiple units, each individual unit prefix must be evaluated as a separate numeric multiplier and then combined with the others. A prefix symbol attached to a unit symbol is included when the unit is raised to a power. For example, km² is km × km, not k(m²). Usage: Examples The mass of an electron is about 1 rg (rontogram). The mass of 1 litre of water is about 1 kg (kilogram). The mass of the Earth is about 6 Rg (ronnagrams). The mass of Jupiter is about 2 Qg (quettagrams). Examples of powers of units with metric prefixes 1 km² means one square kilometre, or the area of a square of 1000 m by 1000 m. In other words, an area of 1000000 square metres and not 1000 square metres. 2 Mm³ means two cubic megametres, or the volume of two cubes of 1000000 m by 1000000 m by 1000000 m, i.e. 2×10¹⁸ m³, and not 2000000 cubic metres (2×10⁶ m³). Examples with prefixes and powers 5 mV × 5 mA = 5×10⁻³ V × 5×10⁻³ A = 25×10⁻⁶ V⋅A = 25 μW. 5.00 mV + 10 μV = 5.00 mV + 0.01 mV = 5.01 mV. 5 cm = 5×10⁻² m = 5 × 0.01 m = 0.05 m. 9 km² = 9 × (10³ m)² = 9 × (10³)² × m² = 9×10⁶ m² = 9 × 1000000 m² = 9000000 m². 3 MW = 3×10⁶ W = 3 × 1000000 W = 3000000 W. Micro symbol When mega and micro were adopted in 1873, there were then three prefixes starting with "m", so it was necessary to use some other symbol besides upper and lowercase 'm'. Eventually the Greek letter "µ" was adopted. However, with the lack of a "µ" key on most typewriters, as well as computer keyboards, various other abbreviations remained common, including "mcg", "mic", "mm", and "u". From about 1960 onwards, "u" prevailed in type-written documents. Because ASCII, EBCDIC, and other common encodings lacked code-points for "µ", this tradition remained even as computers replaced typewriters. When ISO 8859-1 was created, it included the "µ" symbol for micro at codepoint 0xB5. The whole of ISO 8859-1 was incorporated into the initial version of Unicode, but subsequently Unicode version 6 deprecated the micro symbol on codepoint U+00b5 in favour of the Greek letter "μ" on codepoint U+03bc. Keyboard entry Most keyboards do not have a "µ" key, so it is necessary to use a key-code; this varies depending on the operating system, physical keyboard layout, and user's language.
Usage: For all keyboard layouts: On Microsoft Windows systems, arbitrary Unicode codepoints can be entered in decimal as Alt+0181; note that the leading "0" is required (181 in decimal corresponds to the Unicode hexadecimal code-point 0xB5). Arbitrary Unicode codepoints can also be entered in hexadecimal as Alt++b5 (up to 5 hexadecimal characters, not counting the leading ‘+’, upper or lower case), or, in the tradition of MS-DOS and IBM code page 437, one can also enter old code-points in decimal: Alt+230 (the leading zero must be omitted). On Linux systems, arbitrary Unicode codepoints can be entered in hexadecimal as Ctrl+⇧ Shift+u, then b5, then space. For QWERTY keyboard layouts: On Linux systems, code-point U+00b5 can be entered as right-Alt+m (provided the right Alt key is configured to act as AltGr). Usage: On macOS systems, code-point U+00b5 can be entered as either ⌥ Opt+m or ⌥ Opt+Y. Typesetting in LaTeX: The LaTeX typesetting system features the siunitx package, in which the units of measurement are spelled out; for example, \qty{3}{\tera\hertz} formats as "3 THz". Application to units of measurement: The use of prefixes can be traced back to the introduction of the metric system in the 1790s, long before the 1960 introduction of the SI. The prefixes, including those introduced after 1960, are used with any metric unit, whether officially included in the SI or not (e.g., millidyne and milligauss). Metric prefixes may also be used with some non-metric units, but not, for example, with the non-SI units of time. Application to units of measurement: Metric units Mass The units kilogram, gram, milligram, microgram, and smaller are commonly used for measurement of mass. However, megagram, gigagram, and larger are rarely used; tonnes (and kilotonnes, megatonnes, etc.) or scientific notation are used instead. The megagram does not share the risk of confusion that the tonne has with other units with the name "ton". The kilogram is the only coherent unit of the International System of Units that includes a metric prefix. Volume The litre (equal to a cubic decimetre), millilitre (equal to a cubic centimetre), microlitre, and smaller are common. In Europe, the centilitre is often used for liquids, and the decilitre is used less frequently. Bulk agricultural products, such as grain, beer and wine, often use the hectolitre (100 litres). Larger volumes are usually denoted in kilolitres, megalitres or gigalitres, or else in cubic metres (1 cubic metre = 1 kilolitre) or cubic kilometres (1 cubic kilometre = 1 teralitre). For scientific purposes, the cubic metre is usually used. Application to units of measurement: Length The kilometre, metre, centimetre, millimetre, and smaller units are common. The decimetre is rarely used. The micrometre is often referred to by the older non-SI name micron. In some fields, such as chemistry, the ångström (0.1 nm) has been used commonly instead of the nanometre. The femtometre, used mainly in particle physics, is sometimes called a fermi. For large scales, megametre, gigametre, and larger are rarely used. Instead, ad hoc non-metric units are used, such as the solar radius, astronomical units, light years, and parsecs; the astronomical unit is mentioned in the SI standards as an accepted non-SI unit. Application to units of measurement: Time Prefixes for the SI standard unit second are most commonly encountered for quantities less than one second.
For larger quantities, the system of minutes (60 seconds), hours (60 minutes) and days (24 hours) is accepted for use with the SI and more commonly used. When speaking of spans of time, the length of the day is usually standardised to 86400 seconds so as not to create issues with the irregular leap second. Larger multiples of the second such as kiloseconds and megaseconds are occasionally encountered in scientific contexts, but are seldom used in common parlance. For long-scale scientific work, particularly in astronomy, the Julian year or annum (a) is a standardised variant of the year, equal to exactly 31557600 seconds (365.25 days). The unit is so named because it was the average length of a year in the Julian calendar. Long time periods are then expressed by using metric prefixes with the annum, such as megaannum (Ma) or gigaannum (Ga). Application to units of measurement: Angle The SI unit of angle is the radian, but degrees, as well as arc-minutes and arc-seconds, see some scientific use. Application to units of measurement: Temperature Common practice does not typically use the flexibility allowed by official policy in the case of the degree Celsius (°C). NIST states: "Prefix symbols may be used with the unit symbol °C and prefix names may be used with the unit name degree Celsius. For example, 12 m°C (12 millidegrees Celsius) is acceptable." In practice, it is more common for prefixes to be used with the kelvin when it is desirable to denote extremely large or small absolute temperatures or temperature differences. Thus, temperatures of star interiors may be given in units of MK (megakelvins), and molecular cooling may be described in mK (millikelvins). Application to units of measurement: Energy In use, the joule and kilojoule are common, with larger multiples seen in limited contexts. In addition, the kilowatt-hour, a composite unit formed from the kilowatt and hour, is often used for electrical energy; other multiples can be formed by modifying the prefix of watt (e.g. terawatt-hour). There exist a number of definitions for the non-SI unit, the calorie. There are gram calories and kilogram calories. One kilogram calorie, which equals one thousand gram calories, often appears capitalised and without a prefix (i.e. Cal) when referring to "dietary calories" in food. It is common to apply metric prefixes to the gram calorie, but not to the kilogram calorie: thus, 1 kcal = 1000 cal = 1 Cal. Application to units of measurement: Non-metric units Metric prefixes are widely used outside the metric SI system. Common examples include the megabyte and the decibel. Metric prefixes rarely appear with imperial or US units except in some special cases (e.g., microinch, kilofoot, kilopound). They are also used with other specialised units used in particular fields (e.g., megaelectronvolt, gigaparsec, millibarn, kilodalton). In astronomy, geology, and palaeontology, the year, with symbol ‘a’ (from the Latin annus), is commonly used with metric prefixes: ka, Ma, and Ga. Official policies about the use of SI prefixes with non-SI units vary slightly between the International Bureau of Weights and Measures (BIPM) and the American National Institute of Standards and Technology (NIST).
For instance, the NIST advises that "to avoid confusion, prefix symbols (and prefix names) are not used with the time-related unit symbols (names) min (minute), h (hour), d (day); nor with the angle-related symbols (names) ° (degree), ′ (minute), and ″ (second)", whereas the BIPM adds information about the use of prefixes with the symbol "as" for arcsecond when it states: "However astronomers use milliarcsecond, which they denote mas, and microarcsecond, μas, which they use as units for measuring very small angles." Non-standard prefixes: Obsolete metric prefixes Some of the prefixes formerly used in the metric system have fallen into disuse and were not adopted into the SI. The decimal prefix for ten thousand, myria- (sometimes spelled myrio-), and the early binary prefixes double- (2×) and demi- (1/2×) were parts of the original metric system adopted by France in 1795, but were not retained when the SI prefixes were internationally adopted by the 11th CGPM conference in 1960. Non-standard prefixes: Other metric prefixes used historically include hebdo- (10⁷) and micri- (10⁻¹⁴). Double prefixes Double prefixes have been used in the past, such as micromillimetres or millimicrons (now nanometres), micromicrofarads (μμF; now picofarads, pF), kilomegatonnes (now gigatonnes), hectokilometres (now 100 kilometres) and the derived adjective hectokilometric (typically used for qualifying fuel consumption measures). These are not compatible with the SI. Other obsolete double prefixes included "decimilli-" (10⁻⁴), which was contracted to "dimi-" and standardised in France up to 1961. Non-standard prefixes: There are no more letters of the Latin alphabet available for new prefixes (all the unused letters are already used for units). As such, Richard J.C. Brown (who proposed the prefixes adopted for 10±²⁷ and 10±³⁰) has proposed a reintroduction of compound prefixes (e.g. kiloquetta- for 10³³) if a driver for prefixes at such scales ever materialises, with a restriction that the last prefix must always be quetta- or quecto-. This usage has not been approved by the BIPM. Similar symbols and abbreviations: In written English, the symbol K is often used informally to indicate a multiple of thousand in many contexts. For example, one may talk of a 40K salary (40000), or call the Year 2000 problem the Y2K problem. In these cases, an uppercase K is often used with an implied unit (although it could then be confused with the symbol for the kelvin temperature unit if the context is unclear). This informal postfix is read or spoken as "thousand" or "grand", or just "k". Similar symbols and abbreviations: The financial and general news media mostly use m or M, b or B, and t or T as abbreviations for million, billion (10⁹) and trillion (10¹²), respectively, for large quantities, typically currency and population. The medical and automotive fields in the United States use the abbreviations cc or ccm for cubic centimetres. One cubic centimetre is equal to one millilitre. Similar symbols and abbreviations: For nearly a century, engineers used the abbreviation MCM to designate a "thousand circular mils" in specifying the cross-sectional area of large electrical cables. Since the mid-1990s, kcmil has been adopted as the official designation of a thousand circular mils, but the designation MCM still remains in wide use.
A similar system is used in natural gas sales in the United States: m (or M) for thousands and mm (or MM) for millions of British thermal units or therms, and in the oil industry, where MMbbl is the symbol for "millions of barrels". This usage of the capital letter M for "thousand" is from Roman numerals, in which M means 1000. Similar symbols and abbreviations: Binary prefixes The original metric system adopted by France in 1795 included the two binary prefixes double- (2×) and demi- (1/2×). However, they were not retained when the SI prefixes were internationally adopted by the 11th CGPM conference in 1960. Similar symbols and abbreviations: In some fields of information technology, it has been common to designate non-decimal multiples based on powers of 1024, rather than 1000, for some SI prefixes (kilo-, mega-, giga-), contrary to the definitions in the International System of Units (SI). (The SI does not permit the metric prefixes to be used in this conflicting sense.) This practice was once sanctioned by some industry associations, including JEDEC, despite the ongoing conflict of measuring addressable units in binary, while measuring transmitted units per second in decimal. The International Electrotechnical Commission (IEC) standardised the system of binary prefixes (kibi-, mebi-, gibi-, etc.) for this purpose.
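To make the difference concrete, the short Python sketch below (illustrative only; the prefix tables and the format_bytes helper are made up for this example, not taken from any standard or library) renders the same byte count with decimal SI prefixes and with IEC binary prefixes.

```python
# Minimal sketch (illustrative only): the same byte count expressed with
# decimal SI prefixes (powers of 1000) and IEC binary prefixes (powers of 1024).

SI_PREFIXES  = ["", "k", "M", "G", "T"]      # kilo, mega, giga, tera (x1000 steps)
IEC_PREFIXES = ["", "Ki", "Mi", "Gi", "Ti"]  # kibi, mebi, gibi, tebi (x1024 steps)

def format_bytes(n, base, prefixes):
    """Scale n down by `base` until it fits, returning a value with its prefix."""
    i = 0
    while n >= base and i < len(prefixes) - 1:
        n /= base
        i += 1
    return f"{n:.2f} {prefixes[i]}B"

size = 256 * 1024**3            # a capacity that is a round number in binary terms
print(format_bytes(size, 1000, SI_PREFIXES))   # 274.88 GB  (decimal, SI)
print(format_bytes(size, 1024, IEC_PREFIXES))  # 256.00 GiB (binary, IEC)
```

At the giga scale the two conventions already differ by roughly 7%, which is why a capacity that is a round number of binary units appears as a larger-looking figure when reported in decimal gigabytes.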
**DAT (chemotherapy)** DAT (chemotherapy): DAT in the context of chemotherapy is an acronym for a chemotherapy regimen most often used as an induction regimen in acute myelogenous leukemia, usually for those who are refractory to the standard "7+3" induction regimen or who have relapsed. This regimen can also be used as primary, first-line induction therapy. The DAT regimen consists of: (D)aunorubicin - an anthracycline antibiotic that intercalates DNA, thus disrupting cell division and preventing mitosis; (A)ra-C (cytarabine) - an antimetabolite; (T)hioguanine - another antimetabolite.
**Greiner–Hormann clipping algorithm** Greiner–Hormann clipping algorithm: The Greiner–Hormann algorithm is used in computer graphics for polygon clipping. It performs better than the Vatti clipping algorithm, but cannot handle degeneracies. It can process both self-intersecting and non-convex polygons. It can be trivially generalized to compute other Boolean operations on polygons, such as union and difference. The algorithm is based on the definition of the "inside" of a polygon based on the winding number. It considers regions with odd winding number to be inside the polygon; this is known as the even–odd rule. It takes two lists of polygons as input. In its original form, the algorithm is divided into three phases: In the first phase, pairwise intersections between edges of the polygons are computed. Additional vertices are inserted into both polygons at the points of intersection; an intersection vertex holds a pointer to its counterpart in the other polygon. Greiner–Hormann clipping algorithm: In the second phase, each intersection is marked as either an entry intersection or an exit intersection. This is accomplished by evaluating the even–odd rule at the first vertex, which establishes whether the first vertex is inside or outside the other polygon. Then, following the polygon's borders, the intersections are marked with alternating flags (the next intersection after an entry intersection must be an exit intersection). Greiner–Hormann clipping algorithm: In the third phase, the result is generated. The algorithm starts at an unprocessed intersection and picks the direction of traversal based on the entry/exit flag: for an entry intersection it traverses forward, and for an exit intersection it traverses in reverse. Vertices are added to the result until the next intersection is found; the algorithm then switches to the corresponding intersection vertex in the other polygon and picks the traversal direction again using the same rule. If the next intersection has already been processed, the algorithm finishes the current component of the output and starts again from an unprocessed intersection. The output is complete when there are no more unprocessed intersections. The algorithm is not restricted to polygons and can handle arbitrary parametric curves as segments, as long as there is a suitable pairwise intersection procedure. Greiner–Hormann clipping algorithm: A major shortcoming of the original Greiner–Hormann algorithm is the fact that it cannot handle degeneracies, such as common edges or intersections exactly at a vertex. The original paper suggests perturbing the vertices to remove them.
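As a rough illustration (a minimal Python sketch with hypothetical helper names and sample polygons, not code from the original paper), the two geometric primitives that the first two phases rely on are pairwise segment intersection and the even-odd inside test used to seed the entry/exit classification.

```python
# Minimal sketch: two primitives the Greiner-Hormann algorithm relies on --
# strict pairwise edge intersection (phase 1) and the even-odd inside test
# used to classify the first intersection as entry or exit (phase 2).

def segment_intersection(p1, p2, q1, q2):
    """Return the crossing point of segments p1-p2 and q1-q2, or None."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if denom == 0:                      # parallel or collinear (a degeneracy)
        return None
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom
    if 0 < t < 1 and 0 < u < 1:         # strictly interior crossings only
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def inside_even_odd(point, polygon):
    """Even-odd rule: cast a ray to the right and count edge crossings."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

# Example: classify the subject polygon's first vertex against the clip polygon,
# which tells the algorithm whether the first intersection met while walking the
# subject's border is an entry or an exit.
subject = [(0, 0), (4, 0), (4, 4), (0, 4)]
clip    = [(2, 2), (6, 2), (6, 6), (2, 6)]
print(inside_even_odd(subject[0], clip))                     # False -> outside
print(segment_intersection((4, 0), (4, 4), (2, 2), (6, 2)))  # (4.0, 2.0)
```

A full implementation would also maintain doubly linked vertex lists with cross-pointers between the two polygons, which is where most of the bookkeeping in phases 1 and 3 lives.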
**MMADHC** MMADHC: Methylmalonic aciduria and homocystinuria type D protein, mitochondrial also known as MMADHC is a protein that in humans is encoded by the MMADHC gene. Function: This gene encodes a protein localized in cytosol and mitochondria that is involved in an early step of vitamin B12 metabolism. Vitamin B12 (cobalamin) is essential for normal development and survival in humans. Clinical significance: Mutations in this gene cause methylmalonic aciduria and homocystinuria type cblD (MMADHC), a disorder of cobalamin metabolism that is characterized by decreased levels of the coenzymes adenosylcobalamin and methylcobalamin.
**Cruiser bicycle** Cruiser bicycle: A cruiser bicycle, also known as a beach cruiser or (formerly) motobike, is a bicycle that usually combines balloon tires, an upright seating posture, a single-speed drivetrain, and straightforward steel construction with expressive styling. Cruisers are popular among casual bicyclists and vacationers because they are very stable and easy to ride, but their heavy weight and balloon tires tend to make them rather slow. Another common feature is their ability to be customized with accessories including fenders, lights and saddle bags. They are designed for use primarily on paved roads, low speeds/distances, and are included in the non-racing/non-touring class and heavyweight or middleweight styles of the road bicycle type. Cruiser bicycle: The bikes, noted for their durability and heavy weight, were the most popular bicycle in the United States from the early 1930s through the 1950s, and have enjoyed renewed popularity since the late 1990s. Etymology: One of the first uses of the term “cruiser” for motobikes may have been in the WW2 era, by Mead Cycle Co., who sold via mail-order bicycles of the brand names Ranger, Pathfinder and Crusader. The Crusader “Cruiser” model was the high-end men/boy’s bicycle, and included additional features, such as front headlight, rear rack, and most importantly, the motorbike tank. The low-end model, (also described in ads' fine print as a cruiser), was the crusader “chaser,” and the ladies’ the crusader “clipper” and ‘cutter” models to complete the nautical theme in the product naming scheme. Art work of U.S. Navy Cruiser ships were depicted in the Mead Cycle Co. ads. Etymology: So "cruiser" may have originated as one model name used by one distributor of the motorbike style of bicycles. The term beach-ranger never really caught on. In an old catalog from Sears, Roebuck and Company, the Elgin Motor Bike was advertised, and the term motor-bike was explained as follows, "The term "Moto-Bike" has reference only to the type of frame, meaning that it is built on the order of a motorcycle". History: Development Schwinn was one of many manufacturers who contributed to the development of the cruiser at a time when U.S. bicycle sales had declined sharply due to the Great Depression; adults purchased few bicycles, which were seen as luxury products intended largely for sport or recreation. In response to other manufacturers' innovations, Schwinn conceived their own sturdy, affordable bicycle designed for the more resilient youth market—originally marketing the Schwinn B-10E Motorbike—which resembled a motorcycle but carried no motor—in 1933. Mr. Schwinn adapted features from the Henderson and Excelsior motorcycles that his (formerly purchased) bankrupt company had built during the 1920s, including a heavy "cantilevered" frame with two top tubes and 2.125-inch-wide (54.0 mm) "balloon" tires from Germany. Schwinn, like others, copied what they saw going on in Europe. Both Sears and Montgomery Ward had bicycles in 1932 that had balloon tires in the USA, a full year before Schwinn. And the streamline movement in bicycles was really pioneered by Sears and Huffman. The resulting bicycles could endure abuse that could damage others.In 1934, Schwinn successfully re-styled the B-10E, renaming it the Aero Cycle. While the Aero Cycle featured no technical improvements over the original B-10E, its streamlined frame, faux gas tank, and battery-powered headlight came to define the cruiser 'look'. 
Modern cruiser bicycles retain these design elements, (except sometimes for the tank and/or light accessories). Schwinn is credited with kicking off the balloon tire craze when they introduced their offerings for 1933. By 1954, the balloon tire was tired and on its way out. That's when Schwinn introduced the middleweight. All other manufacturers followed suit in quick succession. The middleweights would last well into the 1970s. History: 1950s heyday Cruisers were popular throughout the 1930s and 40s and gained greater postwar success. Their combination of substantial weight (some models weighing over 70 pounds), single speed mechanicals (often New Departure Model D oil coaster hubs), and wide tires (26 × 2.25″ 559) made the bicycles primarily suited to flat terrain. They were popular with paperboys and bicycle couriers.Competing firms including the Cleveland Welding Corporation (CWC) which made many of the Ward's Hawthornes and Shelby Flyers, later American Machine and Foundry AMF (Roadmaster) after the merger that took place in 1954, Westfield (Columbia), Monark-Silver King (bought out by Huffy in 1957), Snyder Rollfast, Evan's Colson, Murray (Elgin, JC Higgins and later Sears), and Huffman (Huffy) used styling features and distinctive models to attract buyers—including a Donald Duck bike (Shelby Flyer) with quacking horn, "cowboy" models named after Gene Autry or Hopalong Cassidy (Snyder Rollfast), and details such as fringed saddlebags, capgun holsters, proprietary springer fork suspensions, motorcycle-style horn tanks, and extensive chrome plating. The Huffy "RadioBike"® (one word) featured an electron-tube radio built into the tank and an antenna and battery pack on the rear carrier. History: Decline of the cruiser During the late 1950s and early 1960s, bicycles imported from Great Britain and Continental Europe became popular, especially lighter and more nimble sports roadster models or "English racer". These models featured three-speed gearing, taller wheels, narrower tires and lighter weight and greater hill-climbing ability. By the late 1950s, U.S. manufacturers such as Schwinn ramped up production of the English racer. Schwinn was no stranger to this style. Between the 28 inch wheeled track bikes that they built between the turn of the 20th century and the 1920s and the lightweight offerings they introduced in the 30s such as the Continental, Varsity and Superior, they knew their way around. These prewar bikes could be had with imported half inch pitch drivetrains with freewheels and hand brakes. In postwar production, Schwinn began producing lightweights again in the mid 40s with models such as the New World. These bikes could be had with Sturmey Archer 3 speeds from England and had chromoly tubing. To popularize these bicycles they enlisted the help of Hollywood celebrities. Ronald Reagan is seen riding one in the 1947 Schwinn catalog. History: The cruiser also ceded market share to muscle and lowrider bikes, which Schwinn introduced in 1963, featuring banana seats, oversized shift levers, and ape-hanger bars inspired by West coast motorcycle customizers—which in turn gave birth to the modern BMX bike, while the cruiser went into a steep sales decline. History: By 1972, a new wave of lightweight derailleur-equipped bicycles led a wave of new consumer interest in recreational bicycling, resulting in the bike boom. Derailleur-equipped sport bikes or ten speeds inspired by European racing bicycles soon dominated the adult market. 
Schwinn introduced their 10 speeds in the early 60s starting first with the Continental, a name they resurrected, and later the Varsity. The Varsity was offered between 1960 and 1962 as an 8 speed and as a 10 speed between 1963 and 1982. History: While largely obsolete by the late 1960s, the cruiser remained popular for utility and recreational use at the beach, where these bikes soon earned the title of "beach cruisers". The term "beach cruiser" started in 1976 at Recycled Cycles in Newport Beach when Larry McNeely coined the phrase and used it as their trademark for the production of the modern Beach Cruiser. Secondhand cruisers found new life on America's coastlines as practical transportation for beach bums and surfers. By the late 70s, Schwinn reintroduced heavyweights, but this time with a blend of BMX parts. The Spitfire was reintroduced as a heavyweight for 1977 and was sold through 1979 with reissue S2s that were made in Hungary. In 1980, Schwinn introduced the Cruiser Series which has survived multiple iterations and has been offered more or less continuously through the present. The most desirable Cruisers are the Cruiser, Deluxe Cruiser and Cruiser 5. The Cruiser Six was made by Giant for Schwinn. The 4, 7 and Alloy as well as the other models were made as part of the Signature Series by Schwinn's current parent, in China. History: Schwinn registered the trademark "Schwinn Cruiser"® with the U.S. Patent and Trademark Office in November 1979. TRAC International Corporation of Atlanta, Georgia, registered the trademarks "Beach Cruiser"® and "Street Cruiser"® with the USPTO in December 1983, for their Taiwanese-made bicycles. The release of the 1985 film Pee-wee's Big Adventure highlighted the main character's cross-country search for his lost custom-built heavily-accessorized horn-tank bicycle. History: As inspiration for the mountain bike In the early Seventies, two groups of enthusiasts, the Larkspur Canyon Gang from Larkspur (long-time speed-riders down Mount Tamalpais) and, later, members of Velo-Club Tamalpais from Fairfax and San Anselmo in Marin County, California, began group rides in the canyons and over ridges, up and down the fireroads around Mount Tamalpais, later racing bikes downhill in a race they called "Repack" because the ride was so grueling that riders had to repack their coaster brakes with grease after each run. The off-road terrain was rocky and the steep mountainside helped riders attain high speeds as they bounced and slammed over rocks and mud. Such harsh treatment caused regular road bikes to crumble, so the racers searched for a more durable and affordable alternative. They soon discovered that old balloon-tired "clunkers" (as they called them) could be had for $5.00 at a garage sale and would endure tremendous punishment. Soon, riders were snapping up these old cruisers, stripping off the heavy fenders and trim, and souping them up to improve downhill performance. Derailleur gears were added by Russ Mahon of The Morrow Dirt Club in Cupertino at the 1974 Marin County cyclo-cross, and in 1975 Gary Fisher fitted his old Schwinn Excelsior with a tandem rear hub (from a flea market) that had an internal steel drum brake and was threaded for a freewheel derailleur cluster, enabling him to ride up the mountain as well as down. About the same time, another rider named Joe Breeze began tinkering with his own Schwinn Excelsior, making it more suited to the "Repack" course. 
Soon, both of them began to build and sell custom mountain bikes to fellow enthusiasts, launching a worldwide cycling phenomenon. History: In the late 1970s and early 1980s, cruiser frames formed the basis of the newly developed mountain bike. History: The late 1970s and early 1980s saw the emergence of interest in collecting old bicycles, and prices for balloon-tired classics climbed. A bicycle collecting community has developed, with newsletters and specialty shops focused on bicycle collectors. Gary Fisher was one of the main men behind the mountain bike. He was one of the original Tam Bombers and essentially commercialized the mountain bike. Cruiser bikes today: Cruisers' comfort, style, and affordability (compared to mountain and racing bikes) have led to renewed popularity in recent years In late 1979, Schwinn produced the "Schwinn Cruiser" model. In the 1980s Huffy built the "Good Vibrations" beach cruiser, and Murray built the "Monterey" beach cruiser, both using product names, like beaches, with an association to the west coast of California. Then in the early to mid-1990s, Schwinn produced a series of cruiser models, including the "Cruiser Deluxe" (which featured a Phantom-style tank with horn, chrome fenders, white-wall balloon tires, rear rack, a springer fork, and two-tone blue or green frames). The cruiser resurgence continued in 1995, when Schwinn reissued the Black Phantom to celebrate the company's 100th birthday. During that same time frame, similar offerings appeared from Columbia (a limited reissue of the classic 1950's 5-Star was produced in the early 1990s), and Roadmaster. Harley-Davidson even licensed a cruiser bike with their logo and trademark styling. These helped stir up interest in cruisers, which brought them to the attention of aging Baby Boomers, who remembered the originals from their youth and now were reaching an age where a comfortable bike was more exciting than a fast bike, and who also had the money to buy whatever they wanted. The classic "retro" looks, reliable mechanical performance, comfortable ride, and relatively low price of cruisers (compared to mountain bikes or road racers) also appealed to young Gen Xers. Nearly every major bike manufacturer now offers at least one cruiser model, if not an entire line. Some notable contemporary manufactures include Electra Bicycle Company and Felt Bicycles. Cruiser sales have continued to rise over the past decade and today many towns have clubs sponsoring regular cruiser rides as a way to promote the low-tech, high fun aspect of cycling.Three other contemporary bike trends are related to cruisers. For decades, Latino car enthusiasts have been lowering the suspension on older American cars to build "lowriders". Their younger siblings have begun building their own custom "lowrider bikes". Lowrider bicycles are usually built on old Schwinn Sting-Ray or other "muscle bike" frames, but the entire lowrider look of "old school" accessories such as springer forks and bullet headlights is in the cruiser tradition. Lowrider bike magazines and catalogs also feature cruisers and are a great source of accessories for cruiser owners. A similar trend is the sudden appearance of "chopper" bicycles over the past couple of years, in response to the surge of interest in custom motorcycles. Several manufacturers offer "chopper" style bikes in their cruiser range. These bikes usually feature a lower center of gravity, suspension forks, hot rod paint jobs, and large rear tires. 
Cruiser bikes today: Finally, manufacturers have also introduced the "comfort bike" category, to combine the soft ride and upright posture of cruisers with a more conventionally styled bike. Comfort bikes have such features as fenders, suspension seatposts and forks, and large padded saddles with giant springs. All of these features are copied from cruisers, but redesigned to look more like regular road or hybrid bikes.
**Passive solar building design** Passive solar building design: In passive solar building design, windows, walls, and floors are made to collect, store, reflect, and distribute solar energy, in the form of heat in the winter and reject solar heat in the summer. This is called passive solar design because, unlike active solar heating systems, it does not involve the use of mechanical and electrical devices. Passive solar building design: The key to designing a passive solar building is to best take advantage of the local climate performing an accurate site analysis. Elements to be considered include window placement and size, and glazing type, thermal insulation, thermal mass, and shading. Passive solar design techniques can be applied most easily to new buildings, but existing buildings can be adapted or "retrofitted". Passive energy gain: Passive solar technologies use sunlight without active mechanical systems (as contrasted to active solar, which uses thermal collectors). Such technologies convert sunlight into usable heat (in water, air, and thermal mass), cause air-movement for ventilating, or future use, with little use of other energy sources. A common example is a solarium on the equator-side of a building. Passive cooling is the use of similar design principles to reduce summer cooling requirements. Passive energy gain: Some passive systems use a small amount of conventional energy to control dampers, shutters, night insulation, and other devices that enhance solar energy collection, storage, and use, and reduce undesirable heat transfer. Passive solar technologies include direct and indirect solar gain for space heating, solar water heating systems based on the thermosiphon, use of thermal mass and phase-change materials for slowing indoor air temperature swings, solar cookers, the solar chimney for enhancing natural ventilation, and earth sheltering. More widely, solar technologies include the solar furnace, but this typically requires some external energy for aligning their concentrating mirrors or receivers, and historically have not proven to be practical or cost effective for widespread use. 'Low-grade' energy needs, such as space and water heating, have proven over time to be better applications for passive use of solar energy. As a science: The scientific basis for passive solar building design has been developed from a combination of climatology, thermodynamics (particularly heat transfer: conduction (heat), convection, and electromagnetic radiation), fluid mechanics/natural convection (passive movement of air and water without the use of electricity, fans or pumps), and human thermal comfort based on heat index, psychrometrics and enthalpy control for buildings to be inhabited by humans or animals, sunrooms, solariums, and greenhouses for raising plants. As a science: Specific attention is divided into: the site, location and solar orientation of the building, local sun path, the prevailing level of insolation (latitude/sunshine/clouds/precipitation), design and construction quality/materials, placement/size/type of windows and walls, and incorporation of solar-energy-storing thermal mass with heat capacity.While these considerations may be directed toward any building, achieving an ideal optimized cost/performance solution requires careful, holistic, system integration engineering of these scientific principles. Modern refinements through computer modeling (such as the comprehensive U.S. 
Department of Energy "Energy Plus" building energy simulation software), and application of decades of lessons learned (since the 1970s energy crisis) can achieve significant energy savings and reduction of environmental damage, without sacrificing functionality or aesthetics. In fact, passive-solar design features such as a greenhouse/sunroom/solarium can greatly enhance the livability, daylight, views, and value of a home, at a low cost per unit of space. As a science: Much has been learned about passive solar building design since the 1970s energy crisis. Many unscientific, intuition-based expensive construction experiments have attempted and failed to achieve zero energy – the total elimination of heating-and-cooling energy bills. Passive solar building construction may not be difficult or expensive (using off-the-shelf existing materials and technology), but the scientific passive solar building design is a non-trivial engineering effort that requires significant study of previous counter-intuitive lessons learned, and time to enter, evaluate, and iteratively refine the simulation input and output. One of the most useful post-construction evaluation tools has been the use of thermography using digital thermal imaging cameras for a formal quantitative scientific energy audit. Thermal imaging can be used to document areas of poor thermal performance such as the negative thermal impact of roof-angled glass or a skylight on a cold winter night or hot summer day. The scientific lessons learned over the last three decades have been captured in sophisticated comprehensive building energy simulation computer software systems (like U.S. DOE Energy Plus). As a science: Scientific passive solar building design with quantitative cost benefit product optimization is not easy for a novice. The level of complexity has resulted in ongoing bad-architecture, and many intuition-based, unscientific construction experiments that disappoint their designers and waste a significant portion of their construction budget on inappropriate ideas.The economic motivation for scientific design and engineering is significant. If it had been applied comprehensively to new building construction beginning in 1980 (based on 1970s lessons learned), The United States could be saving over $250,000,000 per year on expensive energy and related pollution today.Since 1979, Passive Solar Building Design has been a critical element of achieving zero energy by educational institution experiments, and governments around the world, including the U.S. Department of Energy, and the energy research scientists that they have supported for decades. The cost effective proof of concept was established decades ago, but cultural change in architecture, the construction trades, and building-owner decision making has been very slow and difficult.The new subjects such as architectural science and architectural technology are being added to some schools of architecture, with a future goal of teaching the above scientific and energy-engineering principles. The solar path in passive design: The ability to achieve these goals simultaneously is fundamentally dependent on the seasonal variations in the sun's path throughout the day. This occurs as a result of the inclination of the Earth's axis of rotation in relation to its orbit. The sun path is unique for any given latitude. 
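A small numeric sketch (Python; the 40 degree latitude is an arbitrary example, and the simplified noon-altitude relation is standard geometry rather than a formula stated in this article) makes the seasonal swing concrete; the roughly 47 degree difference it produces is discussed below.

```python
# Rough illustration (simplified geometry): solar noon altitude at the two
# solstices for a temperate Northern Hemisphere latitude, where the noon sun
# lies toward the equator. altitude = 90 - latitude + declination, with
# declination = +23.5 deg at the summer solstice and -23.5 deg at the winter solstice.

AXIAL_TILT = 23.5  # degrees

def noon_altitude(latitude_deg, declination_deg):
    return 90.0 - latitude_deg + declination_deg

lat = 40.0  # example latitude
summer = noon_altitude(lat, +AXIAL_TILT)   # 73.5 deg
winter = noon_altitude(lat, -AXIAL_TILT)   # 26.5 deg
print(summer, winter, summer - winter)     # the ~47 deg seasonal swing
```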
The solar path in passive design: In Northern Hemisphere non-tropical latitudes farther than 23.5 degrees from the equator: the sun will reach its highest point toward the south (in the direction of the equator); as the winter solstice approaches, the angle at which the sun rises and sets moves progressively further toward the south and the daylight hours become shorter; the opposite is noted in summer, when the sun rises and sets further toward the north and the daylight hours lengthen. The converse is observed in the Southern Hemisphere, but the sun rises to the east and sets toward the west regardless of which hemisphere you are in. The solar path in passive design: In equatorial regions at less than 23.5 degrees, the position of the sun at solar noon will oscillate from north to south and back again during the year. In regions closer than 23.5 degrees from either the north or the south pole, during summer the sun will trace a complete circle in the sky without setting whilst it will never appear above the horizon six months later, during the height of winter. The solar path in passive design: The 47-degree difference in the altitude of the sun at solar noon between winter and summer forms the basis of passive solar design. This information is combined with local climatic data (degree day) heating and cooling requirements to determine at what time of the year solar gain will be beneficial for thermal comfort, and when it should be blocked with shading. By strategic placement of items such as glazing and shading devices, the percentage of solar gain entering a building can be controlled throughout the year. The solar path in passive design: One passive solar sun path design problem is that although the sun is in the same relative position six weeks before, and six weeks after, the solstice, due to "thermal lag" from the thermal mass of the Earth, the temperature and solar gain requirements are quite different before and after the summer or winter solstice. Movable shutters, shades, shade screens, or window quilts can accommodate day-to-day and hour-to-hour solar gain and insulation requirements. The solar path in passive design: Careful arrangement of rooms completes the passive solar design. A common recommendation for residential dwellings is to place living areas facing solar noon and sleeping quarters on the opposite side. A heliodon is a traditional movable light device used by architects and designers to help model sun path effects. In modern times, 3D computer graphics can visually simulate this data, and calculate performance predictions. Passive solar heat transfer principles: Personal thermal comfort is a function of personal health factors (medical, psychological, sociological and situational), ambient air temperature, mean radiant temperature, air movement (wind chill, turbulence) and relative humidity (affecting human evaporative cooling). Heat transfer in buildings occurs through convection, conduction, and thermal radiation through roof, walls, floor and windows. Passive solar heat transfer principles: Convective heat transfer Convective heat transfer can be beneficial or detrimental. Uncontrolled air infiltration from poor weatherization / weatherstripping / draft-proofing can contribute up to 40% of heat loss during winter; however, strategic placement of operable windows or vents can enhance convection, cross-ventilation, and summer cooling when the outside air is of a comfortable temperature and relative humidity. 
Filtered energy recovery ventilation systems may be useful to eliminate undesirable humidity, dust, pollen, and microorganisms in unfiltered ventilation air. Passive solar heat transfer principles: Natural convection causing rising warm air and falling cooler air can result in an uneven stratification of heat. This may cause uncomfortable variations in temperature in the upper and lower conditioned space, serve as a method of venting hot air, or be designed in as a natural-convection air-flow loop for passive solar heat distribution and temperature equalization. Natural human cooling by perspiration and evaporation may be facilitated through natural or forced convective air movement by fans, but ceiling fans can disturb the stratified insulating air layers at the top of a room, and accelerate heat transfer from a hot attic, or through nearby windows. In addition, high relative humidity inhibits evaporative cooling by humans. Passive solar heat transfer principles: Radiative heat transfer The main source of heat transfer is radiant energy, and the primary source is the sun. Solar radiation occurs predominantly through the roof and windows (but also through walls). Thermal radiation moves from a warmer surface to a cooler one. Roofs receive the majority of the solar radiation delivered to a house. A cool roof, or green roof in addition to a radiant barrier can help prevent your attic from becoming hotter than the peak summer outdoor air temperature (see albedo, absorptivity, emissivity, and reflectivity). Passive solar heat transfer principles: Windows are a ready and predictable site for thermal radiation. Passive solar heat transfer principles: Energy from radiation can move into a window in the day time, and out of the same window at night. Radiation uses photons to transmit electromagnetic waves through a vacuum, or translucent medium. Solar heat gain can be significant even on cold clear days. Solar heat gain through windows can be reduced by insulated glazing, shading, and orientation. Windows are particularly difficult to insulate compared to roof and walls. Convective heat transfer through and around window coverings also degrade its insulation properties. When shading windows, external shading is more effective at reducing heat gain than internal window coverings.Western and eastern sun can provide warmth and lighting, but are vulnerable to overheating in summer if not shaded. In contrast, the low midday sun readily admits light and warmth during the winter, but can be easily shaded with appropriate length overhangs or angled louvres during summer and leaf bearing summer shade trees which shed their leaves in the fall. The amount of radiant heat received is related to the location latitude, altitude, cloud cover, and seasonal / hourly angle of incidence (see Sun path and Lambert's cosine law). Passive solar heat transfer principles: Another passive solar design principle is that thermal energy can be stored in certain building materials and released again when heat gain eases to stabilize diurnal (day/night) temperature variations. The complex interaction of thermodynamic principles can be counterintuitive for first-time designers. Precise computer modeling can help avoid costly construction experiments. Site specific considerations during design: Latitude, sun path, and insolation (sunshine) Seasonal variations in solar gain e.g. 
cooling or heating degree days, solar insolation, humidity Diurnal variations in temperature Micro-climate details related to breezes, humidity, vegetation and land contour Obstructions / Over-shadowing – to solar gain or local cross-winds Design elements for residential buildings in temperate climates: Placement of room-types, internal doors and walls, and equipment in the house. Orienting the building to face the equator (or a few degrees to the East to capture the morning sun) Extending the building dimension along the east–west axis Adequately sizing windows to face the midday sun in the winter, and be shaded in the summer. Design elements for residential buildings in temperate climates: Minimising windows on other sides, especially western windows Erecting correctly sized, latitude-specific roof overhangs, or shading elements (shrubbery, trees, trellises, fences, shutters, etc.) Using the appropriate amount and type of insulation including radiant barriers and bulk insulation to minimise seasonal excessive heat gain or loss Using thermal mass to store excess solar energy during the winter day (which is then re-radiated during the night)The precise amount of equator-facing glass and thermal mass should be based on careful consideration of latitude, altitude, climatic conditions, and heating/cooling degree day requirements. Design elements for residential buildings in temperate climates: Factors that can degrade thermal performance: Deviation from ideal orientation and north–south/east/west aspect ratio Excessive glass area ("over-glazing") resulting in overheating (also resulting in glare and fading of soft furnishings) and heat loss when ambient air temperatures fall Installing glazing where solar gain during the day and thermal losses during the night cannot be controlled easily e.g. West-facing, angled glazing, skylights Thermal losses through non-insulated or unprotected glazing Lack of adequate shading during seasonal periods of high solar gain (especially on the West wall) Incorrect application of thermal mass to modulate daily temperature variations Open staircases leading to unequal distribution of warm air between upper and lower floors as warm air rises High building surface area to volume – Too many corners Inadequate weatherization leading to high air infiltration Lack of, or incorrectly installed, radiant barriers during the hot season. (See also cool roof and green roof) Insulation materials that are not matched to the main mode of heat transfer (e.g. undesirable convective/conductive/radiant heat transfer) Efficiency and economics of passive solar heating: Technically, PSH is highly efficient. Direct-gain systems can utilize (i.e. convert into "useful" heat) 65–70% of the energy of solar radiation that strikes the aperture or collector. Passive solar fraction (PSF) is the percentage of the required heat load met by PSH and hence represents potential reduction in heating costs. RETScreen International has reported a PSF of 20–50%. Within the field of sustainability, energy conservation even of the order of 15% is considered substantial. 
Other sources report the following PSFs: 5–25% for modest systems, 40% for "highly optimized" systems, and up to 75% for "very intense" systems. In favorable climates such as the southwest United States, highly optimized systems can exceed 75% PSF. For more information see Solar Air Heat. Key passive solar building configurations: There are three distinct passive solar energy configurations, and at least one noteworthy hybrid of these basic configurations: direct solar systems, indirect solar systems, hybrid direct/indirect solar systems, and isolated solar systems. Direct solar system In a direct-gain passive solar system, the indoor space acts as a solar collector, heat absorber, and distribution system. South-facing glass in the northern hemisphere (north-facing in the southern hemisphere) admits solar energy into the building interior where it directly heats (radiant energy absorption) or indirectly heats (through convection) thermal mass in the building such as concrete or masonry floors and walls. The floors and walls acting as thermal mass are incorporated as functional parts of the building and temper the intensity of heating during the day. At night, the heated thermal mass radiates heat into the indoor space. In cold climates, a sun-tempered building is the most basic type of direct gain passive solar configuration that simply involves increasing (slightly) the south-facing glazing area, without adding additional thermal mass. It is a type of direct-gain system in which the building envelope is well insulated, is elongated in an east–west direction, and has a large fraction (~80% or more) of the windows on the south side. It has little added thermal mass beyond what is already in the building (i.e., just framing, wall board, and so forth). In a sun-tempered building, the south-facing window area should be limited to about 5 to 7% of the total floor area, less in a sunny climate, to prevent overheating. Additional south-facing glazing can be included only if more thermal mass is added. Energy savings are modest with this system, and sun tempering is very low cost. In genuine direct gain passive solar systems, sufficient thermal mass is required to prevent large temperature fluctuations in indoor air; more thermal mass is required than in a sun tempered building. Overheating of the building interior can result with insufficient or poorly designed thermal mass. About one-half to two-thirds of the interior surface area of the floors, walls and ceilings must be constructed of thermal storage materials. Thermal storage materials can be concrete, adobe, brick, and water. Thermal mass in floors and walls should be kept as bare as is functionally and aesthetically possible; thermal mass needs to be exposed to direct sunlight. Wall-to-wall carpeting, large throw rugs, expansive furniture, and large wall hangings should be avoided. Key passive solar building configurations: Typically, for about every 1 ft² of south-facing glass, about 5 to 10 ft³ of thermal mass is required (1 m³ per 5 to 10 m²). When accounting for minimal-to-average wall and floor coverings and furniture, this typically equates to about 5 to 10 ft² per ft² (5 to 10 m² per m²) of south-facing glass, depending upon whether the sunlight strikes the surface directly. The simplest rule of thumb is that the thermal mass surface area should be 5 to 10 times the surface area of the direct-gain collector (glass) area. Solid thermal mass (e.g., concrete, masonry, stone, etc.) 
should be relatively thin, no more than about 4 in (100 mm) thick. Thermal masses with large exposed areas and those in direct sunlight for at least part of the day (2 hour minimum) perform best. Medium-to-dark, colors with high absorptivity, should be used on surfaces of thermal mass elements that will be in direct sunlight. Thermal mass that is not in contact with sunlight can be any color. Lightweight elements (e.g., drywall walls and ceilings) can be any color. Covering the glazing with tight-fitting, moveable insulation panels during dark, cloudy periods and nighttime hours will greatly enhance performance of a direct-gain system. Water contained within plastic or metal containment and placed in direct sunlight heats more rapidly and more evenly than solid mass due to natural convection heat transfer. The convection process also prevents surface temperatures from becoming too extreme as they sometimes do when dark colored solid mass surfaces receive direct sunlight. Key passive solar building configurations: Depending on climate and with adequate thermal mass, south-facing glass area in a direct gain system should be limited to about 10 to 20% of the floor area (e.g., 10 to 20 ft2 of glass for a 100 ft2 floor area). This should be based on the net glass or glazing area. Note that most windows have a net glass/glazing area that is 75 to 85% of the overall window unit area. Above this level, problems with overheating, glare and fading of fabrics are likely. Key passive solar building configurations: Indirect solar system In an indirect-gain passive solar system, the thermal mass (concrete, masonry, or water) is located directly behind the south-facing glass and in front of the heated indoor space and so there is no direct heating. The position of the mass prevents sunlight from entering the indoor space and can also obstruct the view through the glass. There are two types of indirect gain systems: thermal storage wall systems and roof pond systems. Key passive solar building configurations: Thermal Storage (Trombe) Walls In a thermal storage wall system, often called a Trombe wall, a massive wall is located directly behind south-facing glass, which absorbs solar energy and releases it selectively towards the building interior at night. The wall can be constructed of cast-in-place concrete, brick, adobe, stone, or solid (or filled) concrete masonry units. Sunlight enters through the glass and is immediately absorbed at the surface of the mass wall and either stored or conducted through the material mass to the inside space. The thermal mass cannot absorb solar energy as fast as it enters the space between the mass and the window area. Temperatures of the air in this space can easily exceed 120 °F (49 °C). This hot air can be introduced into interior spaces behind the wall by incorporating heat-distributing vents at the top of the wall. This wall system was first envisioned and patented in 1881 by its inventor, Edward Morse. Felix Trombe, for whom this system is sometimes named, was a French engineer who built several homes using this design in the French Pyrenees in the 1960s. Key passive solar building configurations: A thermal storage wall typically consists of a 4 to 16 in (100 to 400 mm) thick masonry wall coated with a dark, heat-absorbing finish (or a selective surface) and covered with a single or double layer of high transmissivity glass. The glass is typically placed from ¾ in to 2 in from the wall to create a small airspace. 
In some designs, the mass is located 1 to 2 ft (0.6 m) away from the glass, but the space is still not usable. The surface of the thermal mass absorbs the solar radiation that strikes it and stores it for nighttime use. Unlike a direct gain system, the thermal storage wall system provides passive solar heating without excessive window area and glare in interior spaces. However, the ability to take advantage of views and daylighting is eliminated. The performance of Trombe walls is diminished if the wall interior is not open to the interior spaces. Furniture, bookshelves and wall cabinets installed on the interior surface of the wall will reduce its performance. Key passive solar building configurations: A classical Trombe wall, also generically called a vented thermal storage wall, has operable vents near the ceiling and floor levels of the mass wall that allow indoor air to flow through them by natural convection. As solar radiation heats the air trapped between the glass and wall, it begins to rise. Air is drawn into the lower vent, then into the space between the glass and wall to get heated by solar radiation, increasing its temperature and causing it to rise, and then exits through the top (ceiling) vent back into the indoor space. This allows the wall to directly introduce heated air into the space, usually at a temperature of about 90 °F (32 °C). Key passive solar building configurations: If vents are left open at night (or on cloudy days), a reversal of convective airflow will occur, wasting heat by dissipating it outdoors. Vents must be closed at night so radiant heat from the interior surface of the storage wall heats the indoor space. Generally, vents are also closed during summer months when heat gain is not needed. During the summer, an exterior exhaust vent installed at the top of the wall can be opened to vent to the outside. Such venting makes the system act as a solar chimney driving air through the building during the day. Key passive solar building configurations: Vented thermal storage walls vented to the interior have proven somewhat ineffective, mostly because they deliver too much heat during the day in mild weather and during summer months; they simply overheat and create comfort issues. Most solar experts recommend that thermal storage walls should not be vented to the interior. Key passive solar building configurations: There are many variations of the Trombe wall system. An unvented thermal storage wall (technically not a Trombe wall) captures solar energy on the exterior surface, heats up, and conducts heat to the interior surface, where it radiates from the interior wall surface to the indoor space later in the day. A water wall uses tanks or tubes of water as the thermal mass. Key passive solar building configurations: A typical unvented thermal storage wall consists of a south facing masonry or concrete wall with a dark, heat-absorbing material on the exterior surface and faced with a single or double layer of glass. High transmission glass maximizes solar gains to the mass wall. The glass is placed from ¾ to 6 in. (20 to 150 mm) from the wall to create a small airspace. Glass framing is typically metal (e.g., aluminum) because vinyl will soften and wood will become over-dried at the 180 °F (82 °C) temperature that can exist behind the glass in the wall. Heat from sunlight passing through the glass is absorbed by the dark surface, stored in the wall, and conducted slowly inward through the masonry. 
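The conduction delay through a masonry storage wall can be estimated with the one-inch-per-hour rule of thumb quoted just below; the following minimal sketch (Python; the wall thickness and absorption hour are hypothetical example values, not design guidance) turns that rule into an approximate arrival time for the heat at the room-side surface.

```python
# Back-of-the-envelope sketch using the ~1 inch/hour conduction rule of thumb
# for concrete/masonry cited in this article (example numbers only).

INCHES_PER_HOUR = 1.0   # approximate speed of the heat "wave" through the wall

def interior_arrival_hour(wall_thickness_in, absorption_hour):
    """Clock hour at which heat absorbed at `absorption_hour` reaches the room side."""
    return (absorption_hour + wall_thickness_in / INCHES_PER_HOUR) % 24

# A 10 in thick storage wall absorbing strongly around solar noon (12:00)
print(interior_arrival_hour(10, 12))   # 22.0 -> roughly 10 pm, i.e. evening delivery
```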
As an architectural detail, patterned glass can limit the exterior visibility of the wall without sacrificing solar transmissivity. Key passive solar building configurations: A water wall uses containers of water for thermal mass instead of a solid mass wall. Water walls are typically slightly more effective than solid mass walls because they absorb heat more efficiently due to the development of convective currents in the liquid water as it is heated. These currents cause rapid mixing and quicker transfer of heat into the building than can be provided by the solid mass walls. Key passive solar building configurations: Temperature variations between the exterior and interior wall surfaces drive heat through the mass wall. Inside the building, however, daytime heat gain is delayed, only becoming available at the interior surface of the thermal mass during the evening when it is needed because the sun has set. The time lag is the time difference between when sunlight first strikes the wall and when the heat enters the building interior. Time lag is contingent upon the type of material used in the wall and the wall thickness; a greater thickness yields a greater time lag. The time lag characteristic of thermal mass, combined with dampening of temperature fluctuations, allows the use of varying daytime solar energy as a more uniform night-time heat source. Windows can be placed in the wall for natural lighting or aesthetic reasons, but this tends to lower the efficiency somewhat. Key passive solar building configurations: The thickness of a thermal storage wall should be approximately 10 to 14 in (250 to 350 mm) for brick, 12 to 18 in (300 to 450 mm) for concrete, 8 to 12 in (200 to 300 mm) for earth/adobe, and at least 6 in (150 mm) for water. These thicknesses delay movement of heat such that indoor surface temperatures peak during late evening hours. Heat will take about 8 to 10 hours to reach the interior of the building (heat travels through a concrete wall at a rate of about one inch per hour). A good thermal connection between the inside wall finishes (e.g., drywall) and the thermal mass wall is necessary to maximize heat transfer to the interior space. Key passive solar building configurations: Although the position of a thermal storage wall minimizes daytime overheating of the indoor space, a well-insulated building should be limited to approximately 0.2 to 0.3 ft² of thermal mass wall surface per ft² of floor area being heated (0.2 to 0.3 m² per m² of floor area), depending upon climate. A water wall should have about 0.15 to 0.2 ft² of water wall surface per ft² (0.15 to 0.2 m² per m²) of floor area. Key passive solar building configurations: Thermal mass walls are best-suited to sunny winter climates that have high diurnal (day-night) temperature swings (e.g., southwest, mountain-west). They do not perform as well in cloudy or extremely cold climates or in climates where there is not a large diurnal temperature swing. Nighttime thermal losses through the thermal mass of the wall can still be significant in cloudy and cold climates; the wall loses stored heat in less than a day and then leaks heat, which dramatically raises backup heating requirements. Covering the glazing with tight-fitting, moveable insulation panels during lengthy cloudy periods and nighttime hours will enhance performance of a thermal storage system. Key passive solar building configurations: The main drawback of thermal storage walls is their heat loss to the outside. 
Double glass (glass or any of the plastics) is necessary for reducing heat loss in most climates. In mild climates, single glass is acceptable. A selective surface (high-absorbing/low-emitting surface) applied to the exterior surface of the thermal storage wall improves performance by reducing the amount of infrared energy radiated back through the glass; typically, it achieves a similar improvement in performance without the need for daily installation and removal of insulating panels. A selective surface consists of a sheet of metal foil glued to the outside surface of the wall. It absorbs almost all the radiation in the visible portion of the solar spectrum and emits very little in the infrared range. High absorbency turns the light into heat at the wall's surface, and low emittance prevents the heat from radiating back towards the glass. Key passive solar building configurations: Roof Pond System A roof pond passive solar system, sometimes called a solar roof, uses water stored on the roof to temper hot and cold internal temperatures, usually in desert environments. It typically is constructed of containers holding 6 to 12 in (150 to 300 mm) of water on a flat roof. Water is stored in large plastic bags or fiberglass containers to maximize radiant emissions and minimize evaporation. It can be left unglazed or can be covered by glazing. Solar radiation heats the water, which acts as a thermal storage medium. At night or during cloudy weather, the containers can be covered with insulating panels. The indoor space below the roof pond is heated by thermal energy emitted by the roof pond storage above. These systems require good drainage systems, movable insulation, and an enhanced structural system to support a 35 to 70 lb/ft2 (1.7 to 3.3 kN/m2) dead load. Key passive solar building configurations: With the angles of incidence of sunlight during the day, roof ponds are only effective for heating at lower and mid-latitudes, in hot to temperate climates. Roof pond systems perform better for cooling in hot, low humidity climates. Not many solar roofs have been built, and there is limited information on the design, cost, performance, and construction details of thermal storage roofs. Key passive solar building configurations: Hybrid direct/indirect solar system Kachadorian demonstrated that the drawbacks of thermal storage walls can be overcome by orienting the Trombe wall horizontally instead of vertically. If the thermal storage mass is constructed as a ventilated concrete slab floor instead of as a wall, it does not block sunlight from entering the home (the Trombe wall's most obvious disadvantage) but it can still be exposed to direct sunlight through double-glazed equator-facing windows, which can be further insulated by thermal shutters or shades at night. The Trombe wall's problematic delay in daytime heat capture is eliminated, because heat does not have to be driven through the wall to reach the interior air space: some of it reflects or re-radiates immediately from the floor. Provided the slab has air channels like the Trombe wall, which run through it in the north-south direction and are vented to the interior air space through the concrete slab floor just inside the north and south walls, vigorous air thermosiphoning through the slab still occurs as in the vertical Trombe wall, distributing the impounded heat throughout the house (and cooling the house in summer by the reverse process). 
Key passive solar building configurations: The ventilated horizontal slab is less expensive to construct than vertical Trombe walls, as it forms the foundation of the house which is a necessary expense in any building. Slab-on-grade foundations are a common, well-understood and cost-effective building component (modified only slightly by the inclusion of a layer of concrete-brick air channels), rather than an exotic Trombe wall construct. The only remaining drawback to this kind of thermal mass solar architecture is the absence of a basement, as in any slab-on grade design. Key passive solar building configurations: The Kachadorian floor design is a direct-gain passive solar system, but its thermal mass also acts as an indirect heating (or cooling) element, giving up its heat at night. It is an alternating cycle hybrid energy system, like a hybrid electric vehicle. Key passive solar building configurations: Isolated solar system In an isolated gain passive solar system, the components (e.g., collector and thermal storage) are isolated from the indoor area of the building.An attached sunspace, also sometimes called a solar room or solarium, is a type of isolated gain solar system with a glazed interior space or room that is part of or attached to a building but which can be completely closed off from the main occupied areas. It functions like an attached greenhouse that makes use of a combination of direct-gain and indirect-gain system characteristics. A sunspace may be called and appear like a greenhouse, but a greenhouse is designed to grow plants whereas a sunspace is designed to provide heat and aesthetics to a building. Sunspaces are very popular passive design elements because they expand the living areas of a building and offer a room to grow plants and other vegetation. In moderate and cold climates, however, supplemental space heating is required to keep plants from freezing during extremely cold weather. Key passive solar building configurations: An attached sunspace's south-facing glass collects solar energy as in a direct-gain system. The simplest sunspace design is to install vertical windows with no overhead glazing. Sunspaces may experience high heat gain and high heat loss through their abundance of glazing. Although horizontal and sloped glazing collects more heat in the winter, it is minimized to prevent overheating during summer months. Although overhead glazing can be aesthetically pleasing, an insulated roof provides better thermal performance. Skylights can be used to provide some daylighting potential. Vertical glazing can maximize gain in winter, when the angle of the sun is low, and yield less heat gain during the summer. Vertical glass is less expensive, easier to install and insulate, and not as prone to leaking, fogging, breaking, and other glass failures. A combination of vertical glazing and some sloped glazing is acceptable if summer shading is provided. A well-designed overhang may be all that is necessary to shade the glazing in the summer. Key passive solar building configurations: The temperature variations caused by the heat losses and gains can be moderated by thermal mass and low-emissivity windows. Thermal mass can include a masonry floor, a masonry wall bordering the house, or water containers. Distribution of heat to the building can be accomplished through ceiling and floor level vents, windows, doors, or fans. 
In a common design, a thermal mass wall situated at the back of the sunspace, adjacent to the living space, functions like an indirect-gain thermal mass wall. Solar energy entering the sunspace is retained in the thermal mass. Solar heat is conveyed into the building by conduction through the shared mass wall at the rear of the sunspace (as in an unvented thermal storage wall), or through vents and openings in the wall that permit airflow from the sunspace to the indoor space by convection (as in a vented thermal storage wall). Key passive solar building configurations: In cold climates, double glazing should be used to reduce conductive losses through the glass to the outside. Night-time heat loss, although significant during winter months, is not as critical in the sunspace as it is with direct-gain systems, since the sunspace can be closed off from the rest of the building. In temperate and cold climates, thermally isolating the sunspace from the building at night is important. Large glass panels, French doors, or sliding glass doors between the building and attached sunspace will maintain an open feeling without the heat loss associated with an open space. Key passive solar building configurations: A sunspace with a masonry thermal wall will need approximately 0.3 ft2 of thermal mass wall surface per ft2 of floor area being heated (0.3 m2 per m2 of floor area), depending on climate. Wall thicknesses should be similar to a thermal storage wall. If a water wall is used between the sunspace and living space, about 0.20 ft2 of thermal mass wall surface per ft2 of floor area being heated (0.2 m2 per m2 of floor area) is appropriate (a rough sizing sketch is given below). In most climates, a ventilation system is required in summer months to prevent overheating. Generally, vast overhead (horizontal) and east- and west-facing glass areas should not be used in a sunspace without special precautions against summer overheating, such as using heat-reflecting glass and providing summer-shading systems. Key passive solar building configurations: The internal surfaces of the thermal mass should be dark in color. Movable insulation (e.g., window coverings, shades, shutters) can be used to help trap the warm air in the sunspace both after the sun has set and during cloudy weather. When closed during extremely hot days, window coverings can help keep the sunspace from overheating. To maximize comfort and efficiency, the non-glass sunspace walls, ceiling and foundation should be well insulated. The perimeter of the foundation wall or slab should be insulated to the frost line or around the slab perimeter. In a temperate or cold climate, the east and west walls of the sunspace should be insulated (no glass). Additional measures: Measures should be taken to reduce heat loss at night, e.g. window coverings or movable window insulation. Heat storage The sun doesn't shine all the time. Heat storage, or thermal mass, keeps the building warm when the sun can't heat it. Additional measures: In diurnal solar houses, the storage is designed for one or a few days. The usual method is a custom-constructed thermal mass. This includes a Trombe wall, a ventilated concrete floor, a cistern, water wall or roof pond. It is also feasible to use the thermal mass of the earth itself, either as-is or by incorporation into the structure by banking or using rammed earth as a structural medium. In subarctic areas, or areas that have long periods without solar gain (e.g. weeks of freezing fog), purpose-built thermal mass is very expensive.
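The sunspace sizing rules of thumb above amount to a simple ratio of thermal-mass wall area to heated floor area. The following minimal Python sketch is illustrative only: the 0.3 (masonry) and 0.2 (water wall) ratios come from the text, while the function and variable names are assumptions.

```python
def sunspace_mass_wall_area(floor_area_m2: float, wall_type: str = "masonry") -> float:
    """Rough thermal-mass wall area for an attached sunspace.

    Ratios (wall area per unit of heated floor area) follow the rule of thumb
    quoted above: ~0.3 for a masonry wall, ~0.2 for a water wall.
    Actual requirements depend on climate.
    """
    ratios = {"masonry": 0.3, "water": 0.2}
    return ratios[wall_type] * floor_area_m2

# Example: a 40 m2 living area behind the sunspace
print(sunspace_mass_wall_area(40.0, "masonry"))  # 12.0 m2 of masonry wall
print(sunspace_mass_wall_area(40.0, "water"))    # 8.0 m2 of water wall
```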
Don Stephens pioneered an experimental technique to use the ground as thermal mass large enough for annualized heat storage. His designs run an isolated thermosiphon 3 m under a house, and insulate the ground with a 6 m waterproof skirt. Additional measures: Insulation Thermal insulation or superinsulation (type, placement and amount) reduces unwanted leakage of heat. Some passive buildings are actually constructed of insulation. Special glazing systems and window coverings The effectiveness of direct solar gain systems is significantly enhanced by insulative (e.g. double glazing), spectrally selective glazing (low-e), or movable window insulation (window quilts, bifold interior insulation shutters, shades, etc.). Generally, equator-facing windows should not employ glazing coatings that inhibit solar gain. There is extensive use of super-insulated windows in the German Passive House standard. Selection of a spectrally selective window coating depends on the ratio of heating versus cooling degree days for the design location. Additional measures: Glazing selection Equator-facing glass The requirement for vertical equator-facing glass is different from the other three sides of a building. Reflective window coatings and multiple panes of glass can reduce useful solar gain. However, direct-gain systems are more dependent on double or triple glazing or even quadruple glazing in higher geographic latitudes to reduce heat loss. Indirect-gain and isolated-gain configurations may still be able to function effectively with only single-pane glazing. Nevertheless, the optimal cost-effective solution is both location and system dependent. Additional measures: Roof-angle glass and skylights Skylights, whether mounted horizontally (on a flat roof) or pitched at the same angle as the roof slope, admit harsh direct overhead sunlight and glare. In some cases, horizontal skylights are used with reflectors to increase the intensity of solar radiation (and harsh glare), depending on the roof angle of incidence. When the winter sun is low on the horizon, most solar radiation reflects off roof-angled glass (the angle of incidence is nearly parallel to roof-angled glass in the morning and afternoon). When the summer sun is high, it is nearly perpendicular to roof-angled glass, which maximizes solar gain at the wrong time of year, and acts like a solar furnace. Skylights should be covered and well-insulated to reduce natural convection (warm air rising) heat loss on cold winter nights, and intense solar heat gain during hot spring/summer/fall days. Additional measures: The equator-facing side of a building is south in the northern hemisphere, and north in the southern hemisphere. Skylights on roofs that face away from the equator provide mostly indirect illumination, except for summer days when the sun may rise on the non-equator side of the building (at some latitudes). Skylights on east-facing roofs provide maximum direct light and solar heat gain in the summer morning. West-facing skylights provide afternoon sunlight and heat gain during the hottest part of the day. Additional measures: Some skylights have expensive glazing that partially reduces summer solar heat gain, while still allowing some visible light transmission. However, if visible light can pass through it, so can some radiant heat gain (both are electromagnetic radiation).
Additional measures: You can partially reduce some of the unwanted roof-angled-glazing summer solar heat gain by installing a skylight in the shade of deciduous (leaf-shedding) trees, or by adding a movable insulated opaque window covering on the inside or outside of the skylight. This would, however, eliminate the daylight benefit in the summer. If tree limbs hang over a roof, they will increase problems with leaves in rain gutters, possibly cause roof-damaging ice dams, shorten roof life, and provide an easier path for pests to enter your attic. Leaves and twigs on skylights are unappealing, difficult to clean, and can increase the glazing breakage risk in wind storms. Additional measures: "Sawtooth roof glazing" with vertical-glass-only can bring some of the passive solar building design benefits into the core of a commercial or industrial building, without the need for any roof-angled glass or skylights. Skylights provide daylight. The only view they provide is essentially straight up in most applications. Well-insulated light tubes can bring daylight into northern rooms, without using a skylight. A passive-solar greenhouse provides abundant daylight for the equator-side of the building. Infrared thermography (color thermal imaging cameras, used in formal energy audits) can quickly document the negative thermal impact of roof-angled glass or a skylight on a cold winter night or hot summer day. The U.S. Department of Energy states: "vertical glazing is the overall best option for sunspaces." Roof-angled glass and sidewall glass are not recommended for passive solar sunspaces. Additional measures: The U.S. DOE explains drawbacks to roof-angled glazing: Glass and plastic have little structural strength. When installed vertically, glass (or plastic) bears its own weight because only a small area (the top edge of the glazing) is subject to gravity. As the glass tilts off the vertical axis, however, an increased area (now the sloped cross-section) of the glazing has to bear the force of gravity. Glass is also brittle; it does not flex much before breaking. To counteract this, you usually must increase the thickness of the glazing or increase the number of structural supports to hold the glazing. Both increase overall cost, and the latter will reduce the amount of solar gain into the sunspace. Additional measures: Another common problem with sloped glazing is its increased exposure to the weather. It is difficult to maintain a good seal on roof-angled glass in intense sunlight. Hail, sleet, snow, and wind may cause material failure. For occupant safety, regulatory agencies usually require sloped glass to be made of safety glass, laminated, or a combination thereof, which reduces solar gain potential. Most of the roof-angled glass on the Crowne Plaza Hotel Orlando Airport sunspace was destroyed in a single windstorm. Roof-angled glass increases construction cost, and can increase insurance premiums. Vertical glass is less susceptible to weather damage than roof-angled glass. Additional measures: It is difficult to control solar heat gain in a sunspace with sloped glazing during the summer and even during the middle of a mild and sunny winter day. In climates with an air conditioning requirement, skylights work against the passive solar cooling goals of a zero energy building. Additional measures: Angle of incident radiation The amount of solar gain transmitted through glass is also affected by the angle of the incident solar radiation.
Sunlight striking a single sheet of glass within 45 degrees of perpendicular is mostly transmitted (less than 10% is reflected), whereas for sunlight striking at 70 degrees from perpendicular over 20% of the light is reflected, and above 70 degrees the percentage reflected rises sharply (an illustrative calculation is sketched below). All of these factors can be modeled more precisely with a photographic light meter and a heliodon or optical bench, which can quantify the ratio of reflectivity to transmissivity, based on angle of incidence. Additional measures: Alternatively, passive solar computer software can determine the impact of sun path and cooling-and-heating degree days on energy performance. Operable shading and insulation devices A design with too much equator-facing glass can result in excessive winter, spring, or fall day heating, uncomfortably bright living spaces at certain times of the year, and excessive heat transfer on winter nights and summer days. Additional measures: Although the sun is at the same altitude 6 weeks before and after the solstice, the heating and cooling requirements before and after the solstice are significantly different. Heat storage on the Earth's surface causes "thermal lag." Variable cloud cover influences solar gain potential. This means that latitude-specific fixed window overhangs, while important, are not a complete seasonal solar gain control solution. Additional measures: Control mechanisms (such as manual or motorized interior insulated drapes, shutters, exterior roll-down shade screens, or retractable awnings) can compensate for differences caused by thermal lag or cloud cover, and help control daily and hourly variations in solar gain requirements. Home automation systems that monitor temperature, sunlight, time of day, and room occupancy can precisely control motorized window-shading-and-insulation devices. Additional measures: Exterior colors: reflecting and absorbing Materials and colors can be chosen to reflect or absorb solar thermal energy. Information on the relationship between color and electromagnetic radiation, and hence on the thermal radiation properties of reflection and absorption, can assist the choices. See Lawrence Berkeley National Laboratory and Oak Ridge National Laboratory: "Cool Colors". In cold climates with short winter days, direct-gain systems utilizing equator-facing windows may actually perform better when snow covers the ground, since reflected as well as direct sunlight will enter the house and be captured as heat. Landscaping and gardens: Energy-efficient landscaping materials for careful passive solar choices include hardscape building material and "softscape" plants. Landscape design principles guide the selection of trees, hedges, and trellis-pergola features with vines, all of which can be used to create summer shading. For winter solar gain it is desirable to use deciduous plants that drop their leaves in the autumn, giving year-round passive solar benefits. Non-deciduous evergreen shrubs and trees can serve as windbreaks, at variable heights and distances, to create protection and shelter from winter wind chill. Xeriscaping with 'mature size appropriate' native and drought-tolerant plant species, drip irrigation, mulching, and organic gardening practices reduces or eliminates the need for energy- and water-intensive irrigation and gas-powered garden equipment, and reduces the landfill waste footprint. Solar-powered landscape lighting and fountain pumps, and covered swimming pools and plunge pools with solar water heaters, can reduce the impact of such amenities.
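The relationship between the angle of incidence and reflection described above can be reproduced with a short Fresnel-equation calculation. The Python sketch below is illustrative only: it assumes a single uncoated pane with a refractive index of about 1.5 and two air-glass surfaces, and it ignores absorption and multiple internal reflections.

```python
import math

def surface_reflectance(theta_deg: float, n_glass: float = 1.5) -> float:
    """Fraction of unpolarized light reflected by one air-glass surface
    at a given angle of incidence (Fresnel equations)."""
    ti = math.radians(theta_deg)
    tt = math.asin(math.sin(ti) / n_glass)  # Snell's law for the refracted angle
    rs = (math.cos(ti) - n_glass * math.cos(tt)) / (math.cos(ti) + n_glass * math.cos(tt))
    rp = (math.cos(tt) - n_glass * math.cos(ti)) / (math.cos(tt) + n_glass * math.cos(ti))
    return 0.5 * (rs ** 2 + rp ** 2)

def pane_reflectance(theta_deg: float, n_glass: float = 1.5) -> float:
    """Approximate reflectance of a single pane (two surfaces),
    ignoring absorption and multiple internal reflections."""
    r = surface_reflectance(theta_deg, n_glass)
    return 1.0 - (1.0 - r) ** 2

for angle in (0, 30, 45, 60, 70, 80):
    print(f"{angle:2d} deg -> {pane_reflectance(angle):5.1%} reflected")
# Reflection stays below roughly 10% out to about 45 degrees, exceeds 20%
# near 70 degrees, and rises sharply beyond that.
```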
Landscaping and gardens: Related topics include sustainable gardening, sustainable landscaping and sustainable landscape architecture. Other passive solar principles: Passive solar lighting Passive solar lighting techniques take advantage of natural illumination for interiors, and so reduce reliance on artificial lighting systems. Other passive solar principles: This can be achieved by careful building design, orientation, and placement of window sections to collect light. Other creative solutions involve the use of reflecting surfaces to admit daylight into the interior of a building. Window sections should be adequately sized, and to avoid over-illumination can be shielded with a brise soleil, awnings, well-placed trees, glass coatings, and other passive and active devices. Another major issue for many window systems is that they can be potentially vulnerable sites of excessive thermal gain or heat loss. Whilst high-mounted clerestory windows and traditional skylights can introduce daylight into poorly oriented sections of a building, unwanted heat transfer may be hard to control. Thus, energy that is saved by reducing artificial lighting is often more than offset by the energy required for operating HVAC systems to maintain thermal comfort. Other passive solar principles: Various methods can be employed to address this, including but not limited to window coverings, insulated glazing and novel materials such as aerogel semi-transparent insulation, optical fiber embedded in walls or roof, or hybrid solar lighting at Oak Ridge National Laboratory. Other passive solar principles: Reflecting elements, from active and passive daylighting collectors, such as light shelves, lighter wall and floor colors, mirrored wall sections, interior walls with upper glass panels, and clear or translucent glazed hinged doors and sliding glass doors, take the captured light and passively reflect it further inside. The light can be from passive windows or skylights and solar light tubes, or from active daylighting sources. In traditional Japanese architecture, the Shōji sliding panel doors, with translucent Washi screens, are an original precedent. International style, Modernist and Mid-century modern architecture were earlier innovators of this passive penetration and reflection in industrial, commercial, and residential applications. Other passive solar principles: Passive solar water heating There are many ways to use solar thermal energy to heat water for domestic use. Different active and passive solar hot water technologies have different location-specific cost-benefit implications. Fundamental passive solar hot water heating involves no pumps or anything electrical. It is very cost-effective in climates that do not have lengthy sub-freezing or very cloudy weather conditions. Other active solar water heating technologies may be more appropriate for some locations. It is possible to have active solar hot water which is also capable of being "off grid" and qualifies as sustainable. This is done by the use of a photovoltaic cell which uses energy from the sun to power the pumps. Comparison to the Passive House standard in Europe: There is growing momentum in Europe for the approach espoused by the Passive House (Passivhaus in German) Institute in Germany.
Rather than relying solely on traditional passive solar design techniques, this approach seeks to make use of all passive sources of heat, minimises energy usage, and emphasises the need for high levels of insulation reinforced by meticulous attention to detail in order to address thermal bridging and cold air infiltration. Most of the buildings built to the Passive House standard also incorporate an active heat recovery ventilation unit with or without a small (typically 1 kW) incorporated heating component. Comparison to the Passive House standard in Europe: The energy design of Passive House buildings is developed using a spreadsheet-based modeling tool called the Passive House Planning Package (PHPP) which is updated periodically. The current version is PHPP 9.6 (2018). A building may be certified as a "Passive House" when it can be shown that it meets certain criteria, the most important being that the annual specific heat demand for the house should not exceed 15 kWh/m2a. Comparison to the Zero heating building: With advances in ultra-low U-value glazing, a Passive House-based (nearly) zero-heating building has been proposed to supersede the apparently failed nearly-zero-energy buildings in the EU. The zero-heating building relies less on passive solar design and is more open to conventional architectural design. The annual specific heat demand for the zero-heating house should not exceed 3 kWh/m2a. A zero-heating building is simpler to design and to operate; for example, there is no need for modulated sun shading in zero-heating houses. Design tools: Traditionally a heliodon was used to simulate the altitude and azimuth of the sun shining on a model building at any time of any day of the year. In modern times, computer programs can model this phenomenon and integrate local climate data (including site impacts such as overshadowing and physical obstructions) to predict the solar gain potential for a particular building design over the course of a year. GPS-based smartphone applications can now do this inexpensively on a handheld device. These design tools provide the passive solar designer the ability to evaluate local conditions, design elements and orientation prior to construction. Energy performance optimization normally requires an iterative-refinement design-and-evaluate process. There is no such thing as a "one-size-fits-all" universal passive solar building design that would work well in all locations. Levels of application: Many detached suburban houses can achieve reductions in heating expense without obvious changes to their appearance, comfort or usability. This is done using good siting and window positioning, small amounts of thermal mass, with good-but-conventional insulation, weatherization, and an occasional supplementary heat source, such as a central radiator connected to a (solar) water heater. Sunrays may fall on a wall during the daytime and raise the temperature of its thermal mass. This will then radiate heat into the building in the evening. External shading, or a radiant barrier plus air gap, may be used to reduce undesirable summer solar gain. Levels of application: An extension of the "passive solar" approach involves seasonal solar capture and storage of heat and cooling. These designs attempt to capture warm-season solar heat and convey it to a seasonal thermal store for use months later during the cold season ("annualised passive solar"). Increased storage is achieved by employing large amounts of thermal mass or earth coupling.
Anecdotal reports suggest they can be effective, but no formal study has been conducted to demonstrate their superiority. The approach can also move cooling into the warm season. Examples include Passive Annual Heat Storage (PAHS), by John Hait; Annualized Geothermal Solar (AGS) heating, by Don Stephens; and earthed roofs. A "purely passive" solar-heated house would have no mechanical furnace unit, relying instead on energy captured from sunshine, supplemented only by "incidental" heat energy given off by lights, computers, and other task-specific appliances (such as those for cooking, entertainment, etc.), showering, people and pets. The use of natural convection air currents (rather than mechanical devices such as fans) to circulate air is related, though not strictly solar design. Passive solar building design sometimes uses limited electrical and mechanical controls to operate dampers, insulating shutters, shades, awnings, or reflectors. Some systems enlist small fans or solar-heated chimneys to improve convective air-flow. A reasonable way to analyse these systems is by measuring their coefficient of performance. A heat pump might use 1 J for every 4 J it delivers, giving a COP of 4. A system that uses only a 30 W fan to distribute 10 kW of solar heat more evenly through an entire house would have a COP of over 300 (10,000 W / 30 W ≈ 333). Levels of application: Passive solar building design is often a foundational element of a cost-effective zero energy building. Although a ZEB uses multiple passive solar building design concepts, a ZEB is usually not purely passive, having active mechanical renewable energy generation systems such as wind turbines, photovoltaics, micro hydro, geothermal, and other emerging alternative energy sources. Passive solar is also a core building design strategy for passive survivability, along with other passive strategies. Levels of application: Passive solar design on skyscrapers There has been recent interest in the utilization of the large amounts of surface area on skyscrapers to improve their overall energy efficiency. Because skyscrapers are increasingly ubiquitous in urban environments, yet require large amounts of energy to operate, there is potential for large energy savings from employing passive solar design techniques. One study, which analyzed the proposed 22 Bishopsgate tower in London, found that a 35% decrease in energy demand can theoretically be achieved through indirect solar gains, by rotating the building to achieve optimum ventilation and daylight penetration, using high-thermal-mass flooring material to decrease temperature fluctuation inside the building, and using double- or triple-glazed low-emissivity window glass for direct solar gain. Indirect solar gain techniques included moderating wall heat flow by variations of wall thickness (from 20 to 30 cm), using window glazing on the outdoor space to prevent heat loss, dedicating 15–20% of floor area for thermal storage, and implementing a Trombe wall to absorb heat entering the space. Overhangs are used to block direct sunlight in the summer and allow it in the winter, and heat-reflecting blinds are inserted between the thermal wall and the glazing to limit heat build-up in the summer months. Levels of application: Another study analyzed double-green skin facades (DGSF) on the outside of high-rise buildings in Hong Kong. Such a green facade, or vegetation covering the outer walls, can greatly reduce the use of air conditioning, by as much as 80% according to the researchers.
In more temperate climates, strategies such as glazing selection, adjustment of the window-to-wall ratio, sun shading and roof treatments can offer considerable energy savings, in the 30% to 60% range.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cryptic Masonry** Cryptic Masonry: Cryptic Masonry is the second part of the York Rite system of Masonic degrees, and the last found within the Rite that deals specifically with the Hiramic Legend. These degrees are the gateway to Temple restoration rituals or the Second Temple Legend. The body itself is known as either the Council of Royal & Select Masters or Council of Cryptic Masons depending on the jurisdiction. Constituent degrees: Within the York Rite, members of Cryptic Masonry meet as a Council, and the Council confers three degrees: Royal Master, Select Master, and Super Excellent Master. Outside the United States, Grand Councils have the right to confer other degrees such as the Royal Ark Mariner degree in Canada and the Excellent Master degree in Scotland. In England and Wales, the York Rite degrees of Cryptic Masonry are part of the Order of Royal and Select Masters. Organization: Local Council of Royal and Select Masters A Council is similar in many ways to a Masonic Lodge; it has officers and a ritual degree system, which in this case consists of three degrees: Royal Master, Select Master, and Super Excellent Master. The Super Excellent Master degree is optional in some jurisdictions. The various positions in the lodge are modeled directly after Craft Masonry, and though the names are often different, the duties are largely the same. Their seating is a bit different, however, in that all three principals of a council sit on the east dais, while the captain of the guard and conductor of the council sit in the west and south. Organization: Councils in some jurisdictions have more than one steward. Organist/musician is an optional office in either body, and is quite often vacant. The council office of marshal is optional in some jurisdictions. Organization: Grand Councils Every US state has its own Grand Council, which performs the same administrative functions for its subordinate councils as a Grand Lodge does for its subordinate lodges. In other countries, there are either national or state Grand Councils. The council also has its own equivalents of Grand Lodge Officers, modified from the titles of the officers of a council: Most Illustrious Grand Master; Right Illustrious Deputy Grand Master; Right Illustrious Grand Principal Conductor of the Work; Right Illustrious Grand Treasurer; Right Illustrious Grand Recorder; Very Illustrious Grand Chaplain; Very Illustrious Grand Captain of the Guard; Very Illustrious Grand Conductor of the Council; Very Illustrious Grand Marshal; Very Illustrious Grand Steward; Very Illustrious Grand Lecturer; and Very Illustrious Grand Sentinel. Jurisdictions that are not members of the General Grand Council may use different titles than those presented here. For instance, in Pennsylvania, the title "Most Puissant Grand Master" is used in place of "Most Illustrious Grand Master." Many Prince Hall grand councils instead use the title "Grand Thrice Illustrious Master". Organization: In jurisdictions that have them, there are also Regional Deputy Grand Masters or District Inspectors appointed by the Most Illustrious Grand Master to oversee the districts of the jurisdiction as the representative of the Most Illustrious Grand Master. In other jurisdictions these duties are performed by a Master of the Arch. Grand Representatives are appointed to keep in contact with their counterparts in other jurisdictions. Organization: Grand Councils also contribute to specific charities which differ from state to state.
General Grand Council Many of the Grand Councils around the world are members of an umbrella group called the General Grand Council of Cryptic Masons International, founded 25 August 1880. It publishes a quarterly magazine called The Cryptic Freemason and supports the Cryptic Masons Medical Research Foundation, Inc. History and development of the Cryptic Degrees: The degrees of Royal and Select Master were not originally combined into one system, each having been conferred by separate parties and initially controlled by separate Councils. As near as may be determined from conflicting claims, the Select degree is the oldest of the Rite. It was customary to confer the Royal degree on Master Masons prior to the Royal Arch, and the Select degree after exaltation to the sublime degree. This accounts for the fact that control of the Cryptic degrees vacillated back and forth in many jurisdictions, even after the formation of Grand Councils. To this day, the Royal and Select degrees are controlled by Grand Chapter in Virginia and West Virginia, and conferred by subordinate Chapters in those jurisdictions. History and development of the Cryptic Degrees: The Royal degree appears to have been developed primarily in New York under the direction of Thomas Lownds, whereas the Select was vigorously promulgated by Philip Eckel in Baltimore. It was claimed by Eckel that a Grand Council of Select Masters was formed in Baltimore in 1792, while it is definitely known that a Grand Council of Royal Masters (Columbian No. 1) was organized in 1810 in New York. It remained for Jeremy Cross to combine the two degrees under one system, which occurred about 1818, and this pattern was adopted in most jurisdictions as the degrees became dispersed beyond the eastern seaboard. History and development of the Cryptic Degrees: The degree of Super Excellent Master is not allied to the other two degrees of the Cryptic Rite, so far as its teachings and traditions are concerned. The records of St. Andrews Chapter in Boston indicate that a degree of this name was conferred during the latter part of the eighteenth century. The earliest positive reference to the Super Excellent in connection with the Cryptic Rite is 22 December 1817, when a "Lodge" of Super Excellent Masters was organized by Columbian Council of Royal Masters in New York. The incidents, teachings, and ritualistic format of the Super Excellent degree bear no resemblance to any former degrees so named, which appears to justify the claim that it is American in origin. This degree has been, and to some extent still is, a rather controversial subject. It is conferred as one of the regular Cryptic Rite degrees in some jurisdictions, whereas others confer it as an honorary degree only; in some instances, separate Grand Councils of Super Excellent Masters have been formed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Braking distance** Braking distance: Braking distance refers to the distance a vehicle will travel from the point when its brakes are fully applied to when it comes to a complete stop. It is primarily affected by the original speed of the vehicle and the coefficient of friction between the tires and the road surface, and negligibly by the tires' rolling resistance and the vehicle's air drag. The type of brake system in use only affects trucks and large mass vehicles, which cannot supply enough force to match the static frictional force. The braking distance is one of two principal components of the total stopping distance. The other component is the reaction distance, which is the product of the speed and the perception-reaction time of the driver/rider. A perception-reaction time of 1.5 seconds and a coefficient of kinetic friction of 0.7 are standard for the purpose of determining a bare baseline for accident reconstruction and judicial notice; most people can stop slightly sooner under ideal conditions. Braking distance: Braking distance is not to be confused with stopping sight distance. The latter is a road alignment visibility standard that provides motorists driving at or below the design speed an assured clear distance ahead (ACDA) which exceeds a safety factor distance that would be required by a slightly or nearly negligent driver to stop under a worst likely case scenario: typically slippery conditions (deceleration 0.35g) and a slow responding driver (2.5 seconds). Because the stopping sight distance far exceeds the actual stopping distance under most conditions, an otherwise capable driver who uses the full stopping sight distance, which results in injury, may be negligent for not stopping sooner. Derivation: Energy equation The theoretical braking distance can be found by determining the work required to dissipate the vehicle's kinetic energy. The kinetic energy E is given by $E = \frac{1}{2}mv^2$, where m is the vehicle's mass and v is the speed at the start of braking. The work W done by braking is given by $W = \mu m g d$, where $\mu$ is the coefficient of friction between the road surface and the tires, g is the gravity of Earth, and d is the distance travelled. Derivation: The braking distance (which is commonly measured as the skid length) given an initial driving speed v is then found by putting W = E, from which it follows that $d = \frac{v^2}{2\mu g}$. The maximum speed given an available braking distance d is given by $v = \sqrt{2\mu g d}$. Newton's law and equation of motion: From Newton's second law, $F = ma$. For a level surface, the frictional force resulting from the coefficient of friction $\mu$ is $F_{frict} = -\mu m g$. Equating the two yields the deceleration $a = -\mu g$. The $d_f(d_i, v_i, v_f)$ form of the formula for constant acceleration is $d_f = d_i + \frac{v_f^2 - v_i^2}{2a}$. Setting $d_i = 0$ and $v_f = 0$, and then substituting $a$ into the equation, yields the braking distance $d_f = \frac{-v_i^2}{2a} = \frac{v_i^2}{2\mu g}$. Total stopping distance: The total stopping distance is the sum of the perception-reaction distance and the braking distance. Total stopping distance: $D_{total} = D_{p-r} + D_{braking} = v\,t_{p-r} + \frac{v^2}{2\mu g}$. A common baseline of 1.5 seconds for the perception-reaction time and 0.7 for the coefficient of friction is used in stopping distance charts.
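As a quick check of the formulas above, the following minimal Python sketch computes braking distance and total stopping distance. It is illustrative only: the function names are assumptions, while the default values of 0.7 for the friction coefficient and 1.5 s for the perception-reaction time are the baseline figures quoted in the text.

```python
G = 9.81  # gravitational acceleration, m/s^2

def braking_distance(speed_ms: float, mu: float = 0.7) -> float:
    """d = v^2 / (2 * mu * g), in metres."""
    return speed_ms ** 2 / (2 * mu * G)

def total_stopping_distance(speed_ms: float, mu: float = 0.7,
                            reaction_time_s: float = 1.5) -> float:
    """Perception-reaction distance plus braking distance, in metres."""
    return speed_ms * reaction_time_s + braking_distance(speed_ms, mu)

# Example: 100 km/h (about 27.8 m/s) on dry pavement with the baseline values
v = 100 / 3.6
print(round(braking_distance(v), 1))         # ~56.2 m of skid
print(round(total_stopping_distance(v), 1))  # ~97.9 m including reaction
```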
These values incorporate the ability of the vast majority of drivers under normal road conditions. However, a keen and alert driver may have perception-reaction times well below 1 second, and a modern car with computerized anti-skid brakes may have a friction coefficient of 0.9, or even far above 1.0 with sticky tires. Experts historically used a reaction time of 0.75 seconds, but now incorporate perception, resulting in an average perception-reaction time of 1 second for the population as a whole; occasionally a two-second value to simulate the elderly or neophyte; or even a 2.5-second reaction time to specifically accommodate very elderly, debilitated, intoxicated, or distracted drivers. The coefficient of friction may be 0.25 or lower on wet or frozen asphalt, and anti-skid brakes and season-specific performance tires may somewhat compensate for driver error and conditions. In legal contexts, conservative values suggestive of greater minimum stopping distances are often used so as to be sure to exceed the pertinent legal burden of proof, with care not to go as far as to condone negligence. Thus the reaction time chosen can be related to the burden's corresponding population percentile; generally a reaction time of 1 second corresponds to a preponderance of the evidence (more probable than not), 1.5 seconds to clear and convincing evidence, and 2.5 seconds to beyond reasonable doubt. The same principle applies to the friction coefficient values. Total stopping distance: Actual total stopping distance The actual total stopping distance may differ from the baseline value when the road or tire conditions are substantially different from the baseline conditions or when the driver's cognitive function is superior or deficient. To determine actual total stopping distance, one would typically empirically obtain the coefficient of friction between the tire material and the exact road spot under the same road conditions and temperature. They would also measure the person's perception and reaction times. A driver whose innate reflexes, and thus stopping distances, fall far outside the safety margins provided in the road design or expected by other users may not be safe to drive. Most old roads were not engineered with the deficient driver in mind, and often used a defunct 3/4 second reaction time standard. There have been recent road standard changes to make modern roadways more accessible to an increasingly aging population of drivers. For rubber tyres on cars, the coefficient of friction (μ) decreases as the mass of the car increases. Additionally, μ depends on whether the wheels are locked or rolling during the braking, and a few more parameters such as rubber temperature (which increases during braking) and speed. Total stopping distance: Rules of thumb In a non-metric country, the stopping distance in feet given a velocity in mph can be approximated as follows: take the first digit of the velocity and square it, add a zero to the result, divide by 2, then add double the velocity. Example: velocity = 50 mph; 5 squared = 25, add a zero = 250, divide by 2 = 125, add 2 × 50 = 225 feet (the exact value can be calculated using the formulas given above; a verification sketch follows at the end of this section). Total stopping distance: In Germany the rule of thumb for the stopping distance in a city in good conditions is the 1-second rule, i.e. the distance covered in 1 second should at most be the distance to the vehicle ahead. At 50 km/h this corresponds to about 15 m.
For higher speeds up to about 100 km/h outside built-up areas, a similarly defined 2-second rule applies, which for 100 km/h translates to about 50 m. For speeds on the order of 100 km/h there is also the more or less equivalent rule that the stopping distance in metres be the speed in km/h divided by 2, referred to as the halber tacho (half the speedometer) rule; e.g. for 100 km/h the stopping distance should be about 50 m. Additionally, German driving schools teach their pupils that the total stopping distance is typically (speed ÷ 10) × 3 + (speed ÷ 10)², with the speed in km/h and the distance in metres. In the UK, the typical total stopping distances (thinking distance plus braking distance) used in The Highway Code are quoted in Rule 126 as: 20 mph: 40 feet (12 metres); 30 mph: 75 feet (23 metres); 40 mph: 118 feet (36 metres); 50 mph: 175 feet (53 metres); 60 mph: 240 feet (73 metres); 70 mph: 315 feet (96 metres).
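The rules of thumb above are easy to sanity-check against the physics formula. The short Python sketch below is illustrative only; it uses the non-metric mph rule as stated, the German driving-school formula as reconstructed above, and the baseline 0.7 friction coefficient and 1.5 s reaction time quoted earlier.

```python
G = 9.81  # m/s^2

def exact_stopping_distance_m(speed_kmh: float, mu: float = 0.7,
                              reaction_s: float = 1.5) -> float:
    """Reaction distance plus braking distance, in metres."""
    v = speed_kmh / 3.6
    return v * reaction_s + v ** 2 / (2 * mu * G)

def mph_rule_feet(speed_mph: int) -> float:
    """Non-metric rule: square the first digit, append a zero, halve, add double the speed."""
    first_digit = int(str(speed_mph)[0])
    return (first_digit ** 2 * 10) / 2 + 2 * speed_mph

def german_rule_m(speed_kmh: float) -> float:
    """German driving-school rule: (v/10)*3 reaction plus (v/10)^2 braking, in metres."""
    return (speed_kmh / 10) * 3 + (speed_kmh / 10) ** 2

print(mph_rule_feet(50))                         # 225 feet, matching the worked example
print(round(german_rule_m(100), 1))              # 130 m for 100 km/h
print(round(exact_stopping_distance_m(100), 1))  # ~98 m with the baseline values
```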
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sguil** Sguil: Sguil (pronounced sgweel or squeal) is a collection of free software components for Network Security Monitoring (NSM) and event driven analysis of IDS alerts. The sguil client is written in Tcl/Tk and can be run on any operating system that supports these. Sguil integrates alert data from Snort, session data from SANCP, and full content data from a second instance of Snort running in packet logger mode. Sguil: Sguil is an implementation of a Network Security Monitoring system. NSM is defined as "collection, analysis, and escalation of indications and warnings to detect and respond to intrusions." Sguil is released under the GPL 3.0.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**KLC2** KLC2: Kinesin light chain 2 is a protein that in humans is encoded by the KLC2 gene. Interactions: KLC2 has been shown to interact with MAPK8IP3 and KIF5B. Model organisms: Model organisms have been used in the study of KLC2 function. A conditional knockout mouse line called Klc2tm1e(EUCOMM)Wtsi was generated at the Wellcome Trust Sanger Institute. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Additional screens performed: - In-depth immunological phenotyping
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Methade** Methade: Methade, or 6-(dimethylamino)-4,4-diphenylheptane, is the parent compound of the methadone and methadol series of opioid analgesics. Derived from the chemical structure of methadone, various analogs and derivatives have been synthesized and developed to enhance its therapeutic properties and minimize potential side effects. Methadone itself is a synthetic opioid that exhibits potent analgesic properties, making it effective in relieving moderate to severe pain. It acts on the central nervous system, specifically targeting opioid receptors in the brain and spinal cord to alleviate pain signals. Methade: One of the notable applications of methadone is in the treatment of opioid addiction. It has been widely used as a substitution therapy for individuals addicted to opioids, such as heroin or prescription painkillers. Methadone treatment helps to reduce withdrawal symptoms and cravings, allowing individuals to stabilize their lives and gradually taper off opioids under medical supervision. The methadone series of opioids, which share a structural similarity to methadone, have been developed with the aim of improving therapeutic efficacy and safety profiles. These analogs and derivatives undergo rigorous testing and clinical trials to evaluate their effectiveness, tolerability, and potential for abuse. It is important to note that methadone and its derivatives are potent opioids with the potential for addiction and misuse. Therefore, their use is strictly regulated and monitored by healthcare professionals to ensure safe and appropriate administration. Chemical derivatives: The methade series includes the following compounds: Acetylmethadol, Alphacetylmethadol, Levacetylmethadol, Betacetylmethadol, Dimepheptanol (or methadol), Alphamethadol, Betamethadol, Methadone, Levomethadone, Noracymethadol, and Normethadone. Related compounds: Some related compounds include: Alimadol, Dextromoramide, Dextropropoxyphene, Dimenoxadol, Dioxaphetyl butyrate, Dipipanone, Isomethadone, Normethadone, Norpipanone, and Phenadoxone.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CALL/360:BASIC** CALL/360:BASIC: CALL/360:BASIC was an IBM dialect of the BASIC programming language for the System/360 and later platforms. It was based on mid-1960s versions of Dartmouth BASIC but added a number of extensions. Most of these were related to file handling, which, at that time, Dartmouth lacked. It also added support for the mathematical symbols found on some IBM terminals, so that <= could be entered directly as ≤. Differences are otherwise minor. History: CALL/360:BASIC was announced in 1968, along with several other languages for the system including APL and FORTRAN, and the Datatext markup-language-based text editor. Early advertising for the system boasted that one could "Start learning CALL / 360 : BASIC after breakfast and you can share our computer before lunch". The CALL/360 suite was developed within IBM's Information Marketing department. Initially, the products were considered proprietary and could only be accessed via the online service. Customer demand forced them to offer these products to other System/360 users, which they did by releasing them on an "as is" basis with no support. Later the same year, IBM transferred this department, along with the rest of its timesharing services, to the Service Bureau Corporation (SBC), including the CALL/360 operating system and CALL/360:BASIC. Manuals after that date refer to the language as an SBC product. In 1973, SBC was itself transferred to Control Data Corporation as part of a long-running anti-trust lawsuit. Description: CALL/360:BASIC is almost identical to the fourth version of Dartmouth BASIC, including support for the advanced MATrix math features. It differs primarily in its support of file handling. Description: Basics The language included the commands LET, PRINT, END, FOR...NEXT with an optional STEP, GOTO, GOSUB...RETURN, IF...THEN, IF...GOTO, DEF, READ, DATA, RESTORE, DIM, and REM. To this list, it added computed GOTO of the form GOTO 100,200,300 ON X. Note that the THEN in an IF statement can only be followed by a line number; the idea of allowing arbitrary statements after THEN did not appear until later. REMarks are always shown with a colon in the manual, REM: or REMARK:, but it is not clear if these were required. The RESTORE, END and STOP commands could also be followed by a comment string, where a colon was not required. PRINT was expanded with PRINT USING followed by a line number. The line referred to started with a colon and then a series of formatting strings. This series of strings was known as an "image". Items to be printed could be separated by commas or semicolons, with commas having "print zones" 18 characters wide. A new command, PAUSE, stopped the program with a message such as PAUSE AT LINE 35 and then waited for the user to enter text, which was ignored. The end-of-line character caused the program to continue. It could also be followed by a comment in the source. It also included the same basic set of math instructions as Dartmouth, +, -, * and /, as well as the up-arrow for exponents, adding the two-asterisk form, 10**9. Logical operators included the standard set of =, >, =>, <, <= and <>, as well as the special character versions, ≥, ≤, ≠. It included the standard set of mathematical functions from Dartmouth, adding COT, SEC, CSC, ASN, ACS, HSN, HCS, HTN, LTW for base-2 logs, and LGT for base-10.
It also included DEG and RAD functions to convert between degrees and radians, and three pre-defined internal constants, &PI, $E and $SQR2, which could be used instead of typing in the actual numbers. CALL/360 included string variables, only recently introduced to Dartmouth, using the same dollar-sign notation. It added the ability to delimit string constants with either single or double quotes, as well as the ability to type two of either character within a string to include a single character of that type. For instance, "ABC""DE" represents ABC"DE. Strings were broken into 18-character lengths internally, and strings that did not use up an entire 18-character record were padded with blanks, meaning "" would be interpreted as 18 spaces. Description: Arrays and matrix math Like early versions of Dartmouth, CALL/360:BASIC supported one and two dimensional arrays, with the lower index always being 1. Thus an array defined using DIM A(3) contains three values, A(1) through A(3). CALL/360 also added the ability to define string arrays, with each entry being a single 18-character string. In contrast to Dartmouth, it appears variables may not always have been automatically DIMed; in Dartmouth one could refer to A(5) without dimensioning A, in which case it had a default behaviour of being DIM A(10). The manual does not explicitly say CALL/360 lacks this behaviour, but it does state that variables cannot be used in matrix operations without being dimensioned. A maximum of 29 numeric arrays were allowed in a program, with the total sum of the elements across all arrays being no more than 7167. CALL/360:BASIC included most of the matrix commands from Dartmouth, including the ability to perform basic math on a matrix as a single operation, like MAT A = A * 10 where A is an array that will then have all of its elements multiplied by 10. It also included the functions CON, IDN, ZER, INV and TRN. Data could be loaded into a matrix with MAT READ and output with MAT PRINT. To these original commands they also added GET and PUT, which were used to read or write all the elements in a matrix to or from a file. Description: Files The major addition to CALL/360:BASIC was a usable file handling system. This started with OPEN 10, 'filename', which opened a file and assigned it to the provided file number, 10 in this case, which could be an expression. Reading from the file was accomplished with GET 10: A, B, C in the same general fashion as the READ statement. Writing was handled by the otherwise identical PUT. The file pointer could be moved back to the start of the file with RESET followed by one or more file numbers. There was no way to specify a position within a file. CLOSE with a similar list of one or more file numbers freed the file handles. Example: The following program opens the file TEMPFILE for input as file handle 10, and then reads lines of data containing a product name and four sales prices in a loop. Notice that the loop is not terminated; instead, the program ends when it runs out of data and causes an END OF FILE error. How control is passed to line 70 at that point is not explained in the manual. The output to the screen is formatted using the image on line 50.
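The manual's BASIC listing itself is not reproduced in the text above. As a rough, purely illustrative analogue of the flow it describes (open a file, read product records in an unterminated loop, print each record with a fixed-width formatting "image", and stop at end-of-file), here is a hypothetical sketch in Python; the file name TEMPFILE comes from the description, while the record layout and output format are assumptions.

```python
# Rough modern analogue of the described CALL/360:BASIC example:
# read a product name and four sales prices per line from TEMPFILE,
# print each record with a fixed-width "image", and stop at end-of-file.
def main() -> None:
    with open("TEMPFILE") as handle:          # plays the role of OPEN 10,'TEMPFILE'
        for line in handle:                   # the BASIC loop ends via END OF FILE
            fields = line.split()
            name, prices = fields[0], [float(p) for p in fields[1:5]]
            # analogue of PRINT USING with a formatting image
            print(f"{name:<18}" + "".join(f"{p:>10.2f}" for p in prices))

if __name__ == "__main__":
    main()
```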
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lanthanum manganite** Lanthanum manganite: Lanthanum manganite is an inorganic compound with the formula LaMnO3, often abbreviated as LMO. Lanthanum manganite is formed in the perovskite structure, consisting of oxygen octahedra with a central Mn atom. The cubic perovskite structure is distorted into an orthorhombic structure by a strong Jahn–Teller distortion of the oxygen octahedra. LaMnO3 often has lanthanum vacancies, as evidenced by neutron scattering. For this reason, this material is usually referred to as LaMnO3+δ. These vacancies generate a structure with a rhombohedral unit cell in this perovskite. At temperatures below 140 K, this LaMnO3+δ semiconductor exhibits ferromagnetic order. Synthesis: Lanthanum manganite can be prepared via solid-state reactions at high temperatures, using the constituent oxides or carbonates (an illustrative overall reaction is given at the end of this entry). An alternative method is to use lanthanum nitrate and manganese nitrate as raw materials. The reaction occurs at high temperature after the solvents are vaporized. Lanthanum manganite alloys: Lanthanum manganite is an electrical insulator and an A-type antiferromagnet. It is the parent compound of several important alloys, often termed rare-earth manganites or colossal magnetoresistance oxides. These families include lanthanum strontium manganite, lanthanum calcium manganite and others. Lanthanum manganite alloys: In lanthanum manganite, both the La and the Mn are in the +3 oxidation state. Substitution of some of the La atoms by divalent atoms such as Sr or Ca induces a similar amount of tetravalent Mn4+ ions. Such substitution, or doping, can induce various electronic effects, which form the basis of rich and complex electron correlation phenomena that yield diverse electronic phase diagrams in these alloys.
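As an illustration of the solid-state route mentioned above, one plausible overall reaction, written here with the sesquioxides as precursors (the actual starting materials and stoichiometry depend on the reagents chosen, so this equation is an assumption rather than a quoted preparation), is:

$$\mathrm{La_2O_3 + Mn_2O_3 \xrightarrow{\ \text{high }T\ } 2\,LaMnO_3}$$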
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Esophagectomy** Esophagectomy: Esophagectomy or oesophagectomy is the surgical removal of all or parts of the esophagus. Medical uses: The principal objective is to remove the esophagus, a part of the gastrointestinal tract. This procedure is usually done for patients with esophageal cancer. It is normally done when esophageal cancer is detected early, before it has spread to other parts of the body. Esophagectomy of early-stage cancer represents the best chance of a cure. Despite significant improvements in technique and postoperative care, the long-term survival for esophageal cancer is still poor. Multimodality treatment (chemotherapy and radiation therapy) is needed for advanced tumors. Esophagectomy is also occasionally performed for benign disease such as esophageal atresia in children, achalasia, or caustic injury.In those who have had an esophagectomy for cancer, omentoplasty (a procedure in which part of the greater omentum is used to cover or fill a defect, augment arterial or portal venous circulation, absorb effusions, or increase lymphatic drainage) appears to improve outcomes. Classification: There are two main types of esophagectomy. A transhiatal esophagectomy (THE) is performed on the neck and abdomen simultaneously. A transthoracic esophagectomy (TTE) involves opening the thorax (chest).In most cases, the stomach is transplanted into the neck and the stomach takes the place originally occupied by the esophagus. In some cases, the removed esophagus is replaced by another hollow structure, such as the patient's colon. Another option that is slowly becoming available is minimally invasive surgery (MIS) which is performed laparoscopically and thoracoscopically. Classification: After surgery, patients may have trouble with a regular diet and may have to consume softer foods, avoid liquids at meals, and stay upright for 1–3 hours after eating. Dysphagia is common and patients are encouraged to chew foods very well or grind their food. Patients may complain of substernal pain that resolves by sipping fluids or regurgitating food. Reflux-type symptoms can be severe, including intolerance to acidic foods and large, fatty meals. Jejunal feeding tubes may be placed during surgery to provide a temporary route of nutrition until oral eating resumes. Process: Esophagectomy is a very complex operation that can take between 4 and 8 hours to perform. It is best done exclusively by doctors who specialise in thoracic surgery or upper gastrointestinal surgery. Anesthesia for an esophagectomy is also complex, owing to the problems with managing the patient's airway and lung function during the operation. Lung collapse is highly probable, as well as loss of diaphragmatic function, and possible injury to the spleen. Process: Average mortality rates (deaths either in hospital or within 30 days of surgery) for the operation are around 10% in US hospitals. Recognized major cancer hospitals typically report mortality rates under 5%. Major complications occur in 10–20% of patients, and some sort of complication (major and minor) occurs in 40%. Time in hospital is usually 1–2 weeks and recovery time 3–6 months. It is possible for the recovery time to take up to a year.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pegsunercept** Pegsunercept: Pegsunercept is a drug for the treatment of rheumatoid arthritis. As of January 2010, Phase II clinical trials have been completed. It is being developed by Amgen. Similarly to etanercept, pegsunercept is a soluble tumor necrosis factor receptor. Pegsunercept is a PEGylated protein.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aromatic L-amino acid decarboxylase deficiency** Aromatic L-amino acid decarboxylase deficiency: Aromatic L-amino acid decarboxylase deficiency is a rare genetic disorder caused by mutations in the DDC gene, which encodes an enzyme called aromatic L-amino acid decarboxylase. Signs and symptoms: Babies with severe aromatic L-amino acid decarboxylase deficiency usually present during the first few months of life. Symptoms can include: hypotonia (floppiness); developmental delay; oculogyric crises; difficulty with initiating and controlling movements; dystonia and dyskinesia; gastrointestinal dysmotility, which can present as vomiting, gastro-oesophageal reflux, diarrhoea and/or constipation; and autonomic symptoms, including difficulties controlling temperature and blood sugar, excessive sweating and nasal congestion. Some people may develop cerebral folate deficiency, because O-methylation of the excessive amounts of L-Dopa can deplete methyl donors such as S-adenosyl methionine and levomefolic acid. This deficiency can be detected by measuring the levels of levomefolic acid in the cerebrospinal fluid, and can be corrected by folinic acid. Genetics: Aromatic L-amino acid decarboxylase deficiency is an autosomal recessive condition, meaning an individual needs to have two faulty copies of the DDC gene in order to be affected. Usually, one copy is inherited from each parent. Pathophysiology: The aromatic L-amino acid decarboxylase enzyme is involved in the synthesis of dopamine and serotonin, both of which are important neurotransmitters. Diagnosis: Once there is a clinical suspicion of the diagnosis, neurotransmitters can be analysed in cerebrospinal fluid from a lumbar puncture. If these show the pattern of abnormalities typical for aromatic L-amino acid decarboxylase deficiency, the diagnosis can be confirmed by genetic testing and/or measurement of enzyme activity. Treatment: There is no cure for aromatic L-amino acid decarboxylase deficiency, but medical and multidisciplinary treatment can relieve some of the symptoms. Individuals will require physiotherapy, occupational therapy, and speech and language therapy. Some will need enteral feeding (for example, a gastrostomy or jejunostomy) due to difficulties with chewing and swallowing. Various medications can help compensate for the missing neurotransmitters. Dopamine agonists such as rotigotine or pramipexole and monoamine oxidase inhibitors such as selegiline are commonly used. Individuals may also need to take a range of other medications to control dyskinesia, constipation and other symptoms. In July 2021, results of a small phase I gene therapy study reported restoration of dopamine production in seven participants between 4 and 9 years old. In July 2022, the gene therapy product eladocagene exuparvovec was approved for use on adults in the European Union.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Caramelized pork and eggs** Caramelized pork and eggs: Caramelized pork and eggs (Vietnamese: thịt kho hột vịt or thịt kho tàu) is a Vietnamese dish traditionally consisting of small pieces of marinated pork and boiled eggs braised in coconut juice. Caramelized pork and eggs: In the Vietnamese language, Thịt means "meat" and Kho is a Vietnamese cooking technique.Although it is a familiar part of an everyday meal amongst the Vietnamese in Southern Vietnam, it is also one of the traditional dishes during Vietnamese New Year. Before it is served for general consumption, the food is offered to deceased ancestors or family members on altars. In Vietnam, rice is commonly served alongside this dish. It is similar to tau yu bak (豆油肉), a traditional Hokkien dish.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TAS2R19** TAS2R19: Taste receptor type 2 member 19 is a protein that in humans is encoded by the TAS2R19 gene. It seems to be involved in the perception of salt and bitter tastes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fournier gangrene** Fournier gangrene: Fournier gangrene is a type of necrotizing fasciitis or gangrene affecting the external genitalia or perineum. It commonly occurs in older men, but it can also occur in women and children. It is more likely to occur in people with diabetes or alcoholism or those who are immunocompromised. About one per 62,500 males are affected per year. Males are affected about 40 times more often than females. It was first described by Baurienne in 1764 and is named after a French venereologist, Jean Alfred Fournier, following five cases he presented in clinical lectures in 1883. Signs and symptoms: Initial symptoms of Fournier gangrene include swelling or sudden pain in the scrotum, fever, pallor, and generalized weakness. It is characterized by pain that extends beyond the border of the demarcated erythema. Most cases present mildly but can progress within hours. Subcutaneous air is one of the more specific clinical signs, but it is absent in more than 50% of presenting cases. More marked cases are characterized by a foul odor and necrotic infected tissue. Crepitus has been reported. It begins as a subcutaneous infection; however, necrotic patches soon appear in the overlying skin and later progress to necrosis. Cause: Most cases of Fournier gangrene are infected with both aerobic and anaerobic bacteria such as Clostridium perfringens. It can also result from infections caused by group A streptococcus (GAS), as well as other pathogens such as Staphylococcus aureus and Vibrio vulnificus. Lack of access to sanitation, medical care, and psychosocial resources has been linked to increased mortality. A 2006 Turkish study reported that blood sugar levels were elevated in 46 percent of patients diagnosed with Fournier gangrene. Another study reported that about one third of patients were alcoholic, diabetic, and malnourished, while another ten percent had been immunosuppressed through chemotherapy, steroids, or malignancy. Fournier gangrene is a rare side effect of SGLT2 inhibitors (canagliflozin, dapagliflozin, and empagliflozin), which increase the excretion of glucose in the urine. Diagnosis: Fournier gangrene is usually diagnosed clinically, but laboratory tests and imaging studies are used to confirm diagnosis, determine severity, and predict outcomes. X-rays and ultrasounds may show the presence of gas below the surface of the skin. A CT scan can be useful in determining the site of origin and extent of spread. Treatment: Fournier gangrene is a urological emergency requiring intravenous antibiotics and debridement (surgical removal) of dead tissue. Formation of a colostomy may be required to divert bowel motions away from the area. In addition to surgery and antibiotics, hyperbaric oxygen therapy may be useful and acts to inhibit the growth of and kill the anaerobic bacteria. Multiple wound debridements may be required in cases with extensive tissue involvement. Simple reconstructive procedures following wound debridement yield satisfactory outcomes in the majority of cases. Prognosis: While recent case series (n=980) have found a mortality rate of 20–40%, a large (n=1641) 2009 study reported a mortality rate of 7.5%. Epidemiology: A 2009 epidemiological study found the incidence of Fournier gangrene to be 1.6 cases per 100,000 males in the United States. Males 50 to 79 years old had the highest rate, at 3.3 per 100,000. Of 1,680 cases identified in the study, 39 were women.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bifrost (Trojan horse)** Bifrost (Trojan horse): Bifrost is a backdoor trojan horse family of more than 10 variants which can infect Windows 95 through Windows 10 (although on modern Windows systems, after Windows XP, its functionality is limited). Bifrost uses the typical server, server builder, and client backdoor program configuration to allow a remote attacker, who uses the client, to execute arbitrary code on the compromised machine (which runs the server, whose behavior can be configured by the server builder). Bifrost (Trojan horse): The server component (sized around 20–50 kilobytes, depending on variant) is dropped to C:\Program Files\Bifrost\server.exe with default settings and, when running, connects to a predefined IP address on TCP port 81, awaiting commands from the remote user who uses the client component. However, both the installation directory and the TCP port can be changed. The TCP connection is encrypted with a password (default: "pass"), but this can be changed as well. Bifrost (Trojan horse): It can be assumed that once all three components are operational, the remote user can execute arbitrary code at will on the compromised machine. The server components can also be dropped to C:\Windows and the file attributes changed to "Read Only" and "Hidden". Casual users may not see the directories by default due to the "hidden" attributes set on the directory. Some anti-virus programs (for example, AVG as of 17 February 2010) have been reported to miss the file entirely. Bifrost (Trojan horse): The server builder component has the following capabilities: create the server component; change the server component's port number and/or IP address; change the server component's executable name; change the name of the Windows registry startup entry; include a rootkit to hide server processes; include extensions to add features (adds 22,759 bytes to the server); and use persistence (makes the server harder to remove from the infected system). The client component has the following capabilities: process manager (browse or kill running processes); file manager (browse, upload, download, or delete files); window manager (browse, close, maximize/minimize, or rename windows); get system information; extract passwords from the machine; keystroke logging; screen capture; webcam capture; desktop logoff, reboot or shutdown; registry editor; and remote shell. On December 28, 2005, the Windows WMF exploit was used to drop new variants of Bifrost onto machines. Some workarounds and unofficial patches were published before Microsoft announced and issued an official patch on January 5, 2006. The WMF exploit was considered extremely dangerous. Bifrost (Trojan horse): Older variants of Bifrost used different ports, e.g. 1971, 1999; had a different payload, e.g. C:\Winnt\system32\system.exe; and/or wrote different Windows registry keys. Bifrost was designed before the introduction of UAC (User Account Control); thus, Bifrost cannot install itself on modern Windows systems unless it is launched with administrator privileges.
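Because the entry above documents Bifrost's default configuration (install path, port 81, default password), a defender could use those defaults as crude indicators of compromise. The following Go sketch is purely illustrative and is not taken from the article: it only checks for the documented default file paths, and since real variants can change every one of these settings, absence of the files proves nothing.

```go
package main

// Illustrative, defensive-only sketch: look for the *default* Bifrost artifacts
// described in the text. All paths are the documented defaults and are fully
// configurable by an attacker, so this is not a reliable detector.
import (
	"fmt"
	"os"
)

func main() {
	candidates := []string{
		`C:\Program Files\Bifrost\server.exe`,  // default drop location
		`C:\Winnt\system32\system.exe`,         // payload path used by some older variants
	}
	for _, path := range candidates {
		if _, err := os.Stat(path); err == nil {
			fmt.Printf("WARNING: default Bifrost artifact present: %s\n", path)
		} else {
			fmt.Printf("not found: %s\n", path)
		}
	}
	// A fuller check would also inspect registry startup entries and outbound
	// connections to the configured port (81 by default).
}
```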
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HES1** HES1: Transcription factor HES1 (hairy and enhancer of split-1) is a protein that is encoded by the Hes1 gene, and is the mammalian homolog of the hairy gene in Drosophila. HES1 is one of the seven members of the Hes gene family (HES1-7). Hes genes encode nuclear proteins that suppress transcription. This protein belongs to the basic helix-loop-helix (bHLH) family of transcription factors. It is a transcriptional repressor of genes that require a bHLH protein for their transcription. The protein has a particular type of basic domain that contains a helix-interrupting proline and binds to the N-box promoter region rather than the canonical enhancer box (E-box). As a member of the bHLH family, it is a transcriptional repressor that influences cell proliferation and differentiation in embryogenesis. HES1 regulates its own expression via a negative feedback loop, and oscillates with approximately 2-hour periodicity. Structure: There are three conserved domains in Hes proteins that impart transcriptional functions: the bHLH domain, the Orange domain, and the WRPW motif. Hes proteins differ from other bHLH factors in that they have a proline residue in the middle of the basic DNA binding region. This proline has been proposed to give Hes proteins unique DNA binding capacity. While most bHLH factors bind to the E-box consensus sequence (CANNTG) that is present in the promoter region of target genes, Hes factors bind preferentially to the Class C site or N box (CACNAG). The Orange domain serves to regulate the choice of bHLH heterodimer partners. The C-terminal WRPW domain inhibits transcription. Interactions: Like other HES proteins, Hes1 has been shown to interact with the co-repressors encoded by the Transducin-like E(spl) (TLE) genes and the Groucho-related gene (Grg), both homologs of the Drosophila groucho. Because Groucho in Drosophila inhibits transcription by recruiting histone deacetylase, it is likely that a Hes-Groucho complex actively blocks transcription by modifying chromatin. Hes proteins also heterodimerize with bHLH repressors such as Hey1 and Hey2, a process which also blocks transcription. Hes factors also heterodimerize with bHLH activators such as E47, also known as Tcfe2a, and Mash1, also known as Ascl1, both of which are mammalian homologs of proneural genes in Drosophila. The E47-Hes and Mash1-Hes heterodimer complexes cannot bind DNA; sequestration of these activators therefore represses transcription of their target genes. Interactions: Hes1 also interacts with TLE2 and Sirtuin 1. HES1 and stem cells: HES1 influences the maintenance of certain stem cells and progenitor cells. Specifically, HES1 influences the timing of differentiation by repressing bHLH activators, and determines binary cell fate. HES1 has been shown to play a large role in both the nervous and digestive systems. HES1 has been shown to influence these two systems partially through the Notch signaling pathway. HES1 and stem cells: Neural development HES1 is expressed in both neuroepithelial cells and radial glial cells, both of which are neural stem cells. Hes1 expression, along with that of Hes5, covers the majority of the developing embryo at embryonic day 10.5. After this point, expression of Hes1 is limited to the subventricular zone. In HES1 knockout (KO) mice, Mash1 is compensatorily upregulated, and neurogenesis is accelerated. Indeed, if the expression of Hes1, Hes3, and Hes5 genes is inhibited, the expression of proneural genes increases, and while neurogenesis is accelerated, neural stem cells become prematurely depleted.
Conversely, if these HES genes are overexpressed, neurogenesis is inhibited. Thus these Hes genes are only involved in maintaining, not creating, neural stem cells. HES1 and stem cells: Additionally, HES1 can guide neural stem cells down one of two paths of differentiation. HES1 can maintain neural stem cells expressing Pax6, but leads cells that are Pax6-negative to an astrocyte differentiation fate. Epigenetic modifications such as DNA methylation also influence HES1's ability to direct differentiation. Demethylation of HES1 target sites in the promoter region of astrocyte-specific genes hastens astrocyte differentiation. The oscillatory nature of Hes1 expression has a role in determining differentiation fate as well. HES1-high embryonic stem cells that received a differentiation signal often adopted a mesodermal fate, while HES1-low cells that received a differentiation signal differentiated into neuronal cells. These results were confirmed using quantitative PCR, which showed that HES1-high cells showed high levels of Brachyury and Fgf5 expression (both of which are expressed highly in mesodermal cell types) with comparatively low levels of genes expressed in neural cells, such as Nestin. By contrast, HES1-low cells showed high levels of expression of genes involved in neural induction and low levels of expression of genes involved in mesodermal differentiation. Cycling HES1 levels also contribute to the maintenance of neural progenitor cells by regulating Neurogenin2 (Ngn2) and Dll1 oscillations. Hes1 levels fluctuate at different frequencies in different parts of the central nervous system: HES1 is continuously expressed at high levels in the boundaries, but oscillates in the compartments. This suggests that alternating HES1 levels may prompt differences in characteristics between anatomical elements of the central nervous system. HES1 and stem cells: Interactions with the Notch pathway HES1 also plays an important role in the Notch signaling pathway. In the absence of Notch signaling, RBPJ inhibits the expression of HES1. After Notch signals have been processed within the cell, however, the intracellular domain of Notch is released from the plasma membrane and moves to the nucleus, where it associates with RBPJ. The binding causes a conformational change which leads co-repressors to disassociate and allows co-activators to bind. The new activating complex then prompts HES1 expression. Notch signaling thereby activates HES1 expression. HES1 has been shown to target the Notch ligands Dll1 and Jagged1 (Jag1), as well as Neurogenin-2. Dll1, as with other Notch ligands, has been shown to induce neural differentiation, and HES1 repression of Dll1 blocks neural differentiation and leads to the maintenance of neural stem cells and neural progenitor cells. Notch signaling also occurs in the intestinal crypt cells. Hyperactivated Notch causes a reduction in the number of secretory cell types (i.e. goblet cells, enteroendocrine cells, and Paneth cells). Deletion of the Notch pathway, by removing Rbpsuh, the transcription factor that mediates Notch signaling, causes almost exclusively goblet cells to be produced. HES1 and stem cells: Digestive system HES1 has been shown to influence the differentiation decision of cells in the gastrointestinal tract. In pancreatic progenitor cells, HES1 expression inhibits the expression of Ptf1a, which controls exocrine cell differentiation, and Ngn3, which drives differentiation of endocrine cell types that will form the islets of Langerhans.
The absence of Hes1 in the developing intestine of mice promotes an increase in Math1 (a protein required for the production of intestinal secretory cell types), which leads to an increase in goblet, enteroendocrine, and Paneth cells. When Hes1 is deleted in mice and zebrafish, surplus goblet cells and enteroendocrine cells are made while few enterocytes are made. Liver progenitor cells differentiate into two different cell types: hepatocytes and biliary epithelial cells. When Hes1 expression is low, hepatocytes form normally, but bile ducts are completely absent. This phenotype resembles Alagille syndrome, a hallmark of which is mutations in Jagged1. Therefore, Hes-Notch interactions also play a role in digestive organ development.
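The roughly two-hour Hes1 oscillations noted at the start of this entry arise from delayed negative autoregulation: HES1 protein represses transcription of its own mRNA, and the lag between transcription and repression makes levels overshoot and cycle. The Go sketch below simulates a generic delayed negative-feedback loop to illustrate the idea; the equations and all parameter values are illustrative placeholders, not the published Hes1 model or its fitted constants.

```go
package main

// Toy delayed negative-feedback oscillator (illustrative only).
// dm/dt = alpha / (1 + (p(t - tau)/p0)^n) - mu*m   (mRNA, repressed by delayed protein)
// dp/dt = beta*m - nu*p                            (protein)
import (
	"fmt"
	"math"
)

func main() {
	const (
		dt    = 0.1  // minutes per Euler step
		tau   = 20.0 // transcription/translation delay in minutes (hypothetical)
		alpha = 1.0  // maximal transcription rate
		beta  = 1.0  // translation rate
		mu    = 0.03 // mRNA degradation rate
		nu    = 0.03 // protein degradation rate
		p0    = 10.0 // repression threshold
		n     = 4.0  // Hill coefficient
		steps = 6000 // simulate 600 minutes
	)
	delaySteps := int(tau / dt)
	pHist := make([]float64, steps+1) // protein history used for the delay term
	m, p := 1.0, 1.0
	pHist[0] = p
	for i := 0; i < steps; i++ {
		pDelayed := pHist[0]
		if i >= delaySteps {
			pDelayed = pHist[i-delaySteps]
		}
		dm := alpha/(1+math.Pow(pDelayed/p0, n)) - mu*m
		dp := beta*m - nu*p
		m += dm * dt
		p += dp * dt
		pHist[i+1] = p
		if i%500 == 0 {
			fmt.Printf("t=%5.0f min  mRNA=%6.2f  protein=%6.2f\n", float64(i)*dt, m, p)
		}
	}
}
```

With a delay and half-lives on the order of tens of minutes, such a loop produces sustained oscillations with a period of a couple of hours, which is the qualitative behavior the article describes for Hes1.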
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Interbol** Interbol: Interbol (Russian: Интербол) is an international space project under the leadership of the Russian Space Agency and the Space Research Institute of the Russian Academy of Sciences. The list of participants includes the Institute of Atmospheric Research of the Czech Academy of Sciences, NASA, European Space Agency, Japan Aerospace Exploration Agency, and the Canadian Space Agency. The goal of the project is to study the correlations between plasma processes in the tail of the magnetosphere and in the Van Allen radiation belt (auroral particle acceleration region) with high time-space resolution. Two space probes have been launched into high-altitude elliptical orbits: the auroral probe, launched on August 29, 1996 into an orbit with an apogee of 20 000 km (in the same month as the FAST spacecraft, which studies aurora at both poles), and the tail probe, launched on August 3, 1995 into an orbit with an apogee of 200 000 km. Both orbits are almost parallel to the ecliptic. Each probe consists of a satellite–subsatellite pair. The subsatellites “Magion-4” (tail) and “Magion-5” (auroral) were procured by the Institute of Atmospheric Research of the Czech Academy of Sciences. Communication with “Magion-5” was interrupted on August 30, 1996, and was restored on May 7, 1998. The life expectancy of the space vehicles is 12 years. Related projects: GEOTAIL, WIND, POLAR, SOHO, FAST, RELICT-1, RELICT-2
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**University of Pennsylvania Smell Identification Test** University of Pennsylvania Smell Identification Test: The University of Pennsylvania Smell Identification Test (UPSIT) is a commercially available test of smell identification used to test the function of an individual's olfactory system. Known for its accuracy among smell identification tests, it is considered to be one of the most reliable (r = .94) and trusted. The UPSIT was created by University of Pennsylvania physician and professor of psychology and otorhinolaryngology Richard Doty. Doty is also the director of the University of Pennsylvania’s Smell and Taste Center. University of Pennsylvania Smell Identification Test: The test has a secondary purpose as a self-examination test in the diagnosis of many diseases including Parkinson's disease and Alzheimer's. The original test has been altered in several ways to be useful in numerous languages and cultures. There are also several trends that are found when the UPSIT is administered based on demographics such as age, gender, history of smoking and other characteristics. Format: The UPSIT is a measurement of the individual's ability to detect odors at a suprathreshold level. The test is usually administered in a waiting room and takes only a few minutes. The test has a total of 40 questions and consists of 4 different 10-page booklets. On each page, there is a different scratch-and-sniff strip embedded with a microencapsulated odorant. There is also a four-choice multiple-choice question on each page. The scents are released using a pencil. After each scent is released, the patient smells the strip and identifies the odor from the four choices. There is an answer column on the back of the test booklet, and the test is scored out of 40 items. The score is compared to scores in a normative database of 4,000 normal individuals, which indicates the level of absolute smell function. The score also indicates how the patient performs relative to their age group and gender. Format: The test is occasionally judged to have an American cultural bias. British, Chinese, French, German, Italian, Korean and Spanish versions of the UPSIT have been made. There are also the Brief (Cross-Cultural) Smell Identification Test and the Scandinavian Odor Identification Test. Demographics: In general, women have a better sense of smell than men do. This advantage can be observed as early as 4 years of age. This has been observed across several cultures. This superiority in women also increases with age. Overall, women have a higher functioning olfactory system than men do starting from a young age. Demographics: With increasing age, there is increasing loss of olfactory function. On average, individuals begin to lose function of their olfactory system by the age of 65. About half of individuals between the ages of 65 and 80 have some loss of olfactory function, and roughly three quarters of those over the age of 80 do. This plays a role in diagnosing Alzheimer's. Demographics: Genetics has also been found to play a significant role in olfactory ability. If an individual does suffer from olfactory dysfunction, it is five times more likely that their first-degree relatives will also suffer from olfactory dysfunction. Another major factor in a decrease of olfactory function is smoking. It can take years for past smokers to regain their pre-smoking olfactory function. Occasionally it is even impossible for individuals to regain this level in its entirety.
The length of time it can take for smokers to regain this level depends on the duration and intensity of their smoking habits. The olfactory system can be compromised in several environments. These include large cities and certain industries, for example paper and chemical manufacturing. Diagnosis: There are many central nervous system disorders that are associated with olfactory dysfunction. Most of these disorders are classified as degenerative neuropsychiatric disorders. Some of these diseases are: Alzheimer's disease, Parkinson's disease, Huntington's disease, Korsakoff's psychosis, schizophrenia, congenital anosmia, head trauma, brain tumors, acquired immunodeficiency syndrome (AIDS), and multiple sclerosis. Diagnosis: Alzheimer's The UPSIT has been used to detect Alzheimer's disease (AD). Smell loss can be a very early sign of AD. It has been suggested that AD affects odor identification and odor detection, showing that AD patients have more trouble performing higher olfactory tasks that involve specific cognitive processes. In a functional magnetic resonance imaging (fMRI) study, the blood-oxygen-level-dependent (BOLD) signal was found to be stronger in control subjects than in AD patients, who showed a weaker signal. It has also been found through several studies that olfactory function and cognition correlate with the severity of AD. Therefore, the UPSIT is a very good clinical test for determining the severity of AD. During AD, a patient's olfactory bulb, amygdala and temporal cortices are affected. There is also severe nerve cell loss. Diagnosis: Parkinson's disease The UPSIT is also used to diagnose Parkinson's disease (PD). Smell dysfunction occurs in 90% of PD cases. After the commercial release of the UPSIT, many studies were published showing olfactory dysfunction in patients with PD. After it was discovered that smell tests can differentiate PD from progressive supranuclear palsy, essential tremor, and parkinsonism induced by MPTP, many studies were undertaken. It has been shown that the olfactory bulb is one of the two main regions where PD seems to begin. In families where there are individuals with PD, the UPSIT can be used to predict whether other first-degree relatives will also develop PD. It has been discovered that multiple factors contribute to the development of PD-related olfactory dysfunction. As with AD, the UPSIT score can also reflect the severity of PD, although people develop varying levels of olfactory dysfunction. The disorders with the most olfactory dysfunction are those with the most pathology, such as PD and AD.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vegeta (software)** Vegeta (software): Vegeta is an HTTP load testing tool written in Go that can be used as a command in a command-line interface or as a library. The program tests how an HTTP-based application behaves when multiple users access it at the same time by generating a background load of GET requests. Vegeta is used to generate a sustained, constant number of requests per second in order to discover how long a service can sustain a peak load before dropping in performance. In addition to preemptive load testing, the program can also be used for shadow testing, where traffic from a live version of an application is mirrored onto a test version to determine how it handles the same traffic load, without causing potential disruption to the live version of the application. Shadow testing is done in this way in order to analyze anticipated server performance. Vegeta is made available by web hosting services such as Scaleway so that clients can stress test their HTTP services with varied and multiple requests. It is also used with dedicated load-testing platform services such as BlazeMeter. Usage: The command-line usage is in the format of vegeta [global flags] <command> [command flags]. The three global flags are -cpus (an integer), which specifies the number of CPUs to use; -profile (a string), which enables profiling; and -version, which prints the software version and then terminates the program. The commands available are attack, encode, plot, and report, each with its own various command flag options, and both attack input and report output can be done in an optional JSON format when specified with the appropriate flag. Vegeta can specify targets as URLs in a separate file with optional custom headers and requests, which can then be used as an input option on the command line. Usage: Example An example usage would be to issue echo "GET http://localhost/" | vegeta attack -duration=5s | tee results.bin | vegeta report from the command line. This example uses the echo command to output GET http://localhost/, and then executes the attack command for that output for five seconds. After that, it uses the tee command to write results to a file called results.bin, and runs the report command to display the output of the attack results.
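Because Vegeta is also usable as a Go library, the same five-second attack shown in the CLI example above can be driven programmatically. The sketch below follows the library's commonly published usage (the module path github.com/tsenart/vegeta/v12/lib and the Rate, Targeter, Attacker, and Metrics types); treat it as an illustrative example rather than authoritative documentation, and check the project README for the exact API of the version you install.

```go
package main

import (
	"fmt"
	"time"

	vegeta "github.com/tsenart/vegeta/v12/lib"
)

func main() {
	// Constant request rate: 50 requests per second for 5 seconds,
	// mirroring the CLI example's -duration=5s attack.
	rate := vegeta.Rate{Freq: 50, Per: time.Second}
	duration := 5 * time.Second
	targeter := vegeta.NewStaticTargeter(vegeta.Target{
		Method: "GET",
		URL:    "http://localhost/",
	})
	attacker := vegeta.NewAttacker()

	// Collect results into Metrics, the programmatic equivalent of `vegeta report`.
	var metrics vegeta.Metrics
	for res := range attacker.Attack(targeter, rate, duration, "localhost load test") {
		metrics.Add(res)
	}
	metrics.Close()

	fmt.Printf("requests: %d\n", metrics.Requests)
	fmt.Printf("success ratio: %.2f%%\n", metrics.Success*100)
	fmt.Printf("99th percentile latency: %s\n", metrics.Latencies.P99)
}
```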
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Electron microprobe** Electron microprobe: An electron microprobe (EMP), also known as an electron probe microanalyzer (EPMA) or electron micro probe analyzer (EMPA), is an analytical tool used to non-destructively determine the chemical composition of small volumes of solid materials. It works similarly to a scanning electron microscope: the sample is bombarded with an electron beam, emitting x-rays at wavelengths characteristic of the elements being analyzed. This enables the abundances of elements present within small sample volumes (typically 10-30 cubic micrometers or less) to be determined, when a conventional accelerating voltage of 15-20 kV is used. The concentrations of elements from lithium to plutonium may be measured at levels as low as 100 parts per million (ppm), material dependent, although with care, levels below 10 ppm are possible. The ability to quantify lithium by EPMA became a reality in 2008. History: The electron microprobe, also known as the electron probe microanalyzer, developed utilizing two technologies: electron microscopy — the use of a focused high energy electron beam to interact with a target material, and X-ray spectroscopy — identification of the photons resulting from electron beam interaction with the target, with the energy/wavelength of the photons being characteristic of the atoms excited by the incident electrons. The names of Ernst Ruska and Max Knoll are associated with the first prototype electron microscope in 1931. The name of Henry Moseley is associated with the discovery of the direct relationship between the wavelength of X-rays and the identity of the atom from which it originated. There have been several historical threads in the development of the electron beam microanalysis technique. One was developed by James Hillier and Richard Baker at RCA. In the early 1940s, they built an electron microprobe, combining an electron microscope and an energy loss spectrometer. A patent application was filed in 1944. Electron energy loss spectroscopy is very good for light element analysis, and they obtained spectra of C-Kα, N-Kα and O-Kα radiation. In 1947, Hillier patented the idea of using an electron beam to produce analytical X-rays, but never constructed a working model. His design proposed using Bragg diffraction from a flat crystal to select specific X-ray wavelengths and a photographic plate as a detector. However, RCA had no interest in pursuing commercialization of this invention. History: A second thread developed in France in the late 1940s. In 1948–1950, Raimond Castaing, supervised by André Guinier, built the first electron “microsonde électronique” (electron microprobe) at ONERA. This microprobe produced an electron beam diameter of 1-3 μm with a beam current of ~10 nanoamperes (nA) and used a Geiger counter to detect the X-rays produced from the sample. However, the Geiger counter could not distinguish X-rays produced from specific elements, and in 1950 Castaing added a quartz crystal between the sample and the detector to permit wavelength discrimination. He also added an optical microscope to view the point of beam impact. The resulting microprobe was described in Castaing's 1951 PhD Thesis, translated into English by Pol Duwez and David Wittry, in which he laid the foundations of the theory and application of quantitative analysis by electron microprobe, establishing the theoretical framework for the matrix corrections of absorption and fluorescence effects. Castaing (1921-1999) is considered the "father" of electron microprobe analysis.
History: The 1950s was a decade of great interest in electron beam X-ray microanalysis, following Castaing's presentations at the First European Microscopy Conference in Delft in 1949 and then at the National Bureau of Standards conference on Electron Physics in Washington, DC, in 1951, as well as at other conferences in the early to mid-1950s. Many researchers, mainly material scientists, began to develop their own experimental electron microprobes, sometimes starting from scratch, but many times utilizing surplus electron microscopes. History: One of the organizers of the Delft 1949 Electron Microscopy conference was Vernon Ellis Cosslett at the Cavendish Laboratory at Cambridge University, a center of research on electron microscopy, as well as scanning electron microscopy with Charles Oatley and X-ray microscopy with Bill Nixon. Peter Duncumb combined all three technologies and developed a scanning electron X-ray microanalyzer as his PhD thesis project (published 1957), which was commercialized as the Cambridge MicroScan instrument. History: Pol Duwez, a Belgian material scientist who fled the Nazis and settled at the California Institute of Technology and collaborated with Jesse DuMond, encountered André Guinier on a train in Europe in 1952, where he learned of Castaing's new instrument and of the suggestion that Caltech build a similar instrument. David Wittry was hired to build such an instrument as his PhD thesis, which he completed in 1957. It became the prototype for the ARL EMX electron microprobe. History: During the late 1950s and early 1960s there were over a dozen other laboratories in North America, the United Kingdom, Europe, Japan and the USSR developing electron beam X-ray microanalyzers. History: The first commercial electron microprobe, the "MS85", was produced by CAMECA (France) in 1956. It was soon followed in the early to mid-1960s by many microprobes from other companies; however, all companies except CAMECA, JEOL and Shimadzu Corporation are now out of business. In addition, many researchers built electron microprobes in their own labs. Significant subsequent improvements and modifications to microprobes included scanning the electron beam to make X-ray maps (1960), the addition of solid-state EDS detectors (1968) and the development of synthetic multilayer diffracting crystals for analysis of light elements (1984). Later, CAMECA also pioneered the manufacture of a shielded version of the electron microprobe for nuclear applications. Several new advances in CAMECA instruments in the last decades allowed them to expand their range of applications in metallurgy, electronics, geology, mineralogy, nuclear plants, trace elements, dentistry, etc. Working: A beam of electrons is fired at a sample. The beam causes each element in the sample to emit X-rays at a characteristic frequency; the X-rays can then be detected by the electron microprobe. The size and current density of the electron beam determine the trade-off between resolution and scan time and/or analysis time. Working: Detailed description Low-energy electrons are produced from a tungsten filament, a lanthanum hexaboride crystal cathode or a field emission electron source and accelerated by a positively biased anode plate to 3 to 30 thousand electron volts (keV). The anode plate has a central aperture, and electrons that pass through it are collimated and focused by a series of magnetic lenses and apertures.
The resulting electron beam (approximately 5 nm to 10 μm diameter) may be rastered across the sample or used in spot mode to produce excitation of various effects in the sample. Among these effects are: phonon excitation (heat), cathodoluminescence (visible light fluorescence), continuum X-ray radiation (bremsstrahlung), characteristic X-ray radiation, secondary electrons (plasmon production), backscattered electron production, and Auger electron production. Working: When the beam electrons (and scattered electrons from the sample) interact with bound electrons in the innermost electron shells of the atoms of the various elements in the sample, they can scatter the bound electrons from the electron shell, producing a vacancy in that shell (ionization of the atom). This vacancy is unstable and must be filled either by an electron from a higher-energy bound shell in the atom (producing another vacancy, which is in turn filled by electrons from yet higher-energy bound shells) or by unbound electrons of low energy. The difference in binding energy between the electron shell in which the vacancy was produced and the shell from which the electron comes to fill the vacancy is emitted as a photon. The energy of the photon is in the X-ray region of the electromagnetic spectrum. As the electron structure of each element is unique, the series of X-ray line energies produced by vacancies in the innermost shells is characteristic of that element, although lines from different elements may overlap. As the innermost shells are involved, the X-ray line energies are generally not affected by chemical effects produced by bonding between elements in compounds, except in low atomic number (Z) elements (B, C, N, O and F for Kα, and Al to Cl for Kβ), where line energies may be shifted as a result of the involvement of the electron shell from which vacancies are filled in chemical bonding. Working: The characteristic X-rays are used for chemical analysis. Specific X-ray wavelengths or energies are selected and counted, either by wavelength dispersive X-ray spectroscopy (WDS) or energy dispersive X-ray spectroscopy (EDS). WDS utilizes Bragg diffraction from crystals to select X-ray wavelengths of interest and direct them to gas-flow or sealed proportional detectors. In contrast, EDS uses a solid-state semiconductor detector to accumulate X-rays of all wavelengths produced from the sample. While EDS yields more information and typically requires a much shorter counting time, WDS is generally a more precise technique with lower limits of detection because of its superior X-ray peak resolution and greater peak-to-background ratio. Working: Chemical composition is determined by comparing the intensities of characteristic X-rays from the sample material with intensities from materials of known composition (standards). Counts from the sample must be corrected for matrix effects (depth of production of the X-rays, absorption and secondary fluorescence) to yield quantitative chemical compositions. The resulting chemical information is gathered in textural context. Variations in chemical composition within a material (zoning), such as a mineral grain or metal, can be readily determined. Working: The volume from which chemical information is gathered (the volume of X-ray generation) is 0.3–3 cubic micrometers. Limitations WDS is useful for higher atomic numbers; therefore, WDS cannot determine elements below atomic number 3 (lithium). This limitation restricts WDS when analyzing geologically important elements such as H, Li, and Be.
Despite the improved spectral resolution of elemental peaks, some peaks exhibit significant overlaps that result in analytical challenges (e.g., VKα and TiKβ). WDS analyses are not able to distinguish among the valence states of elements (e.g. Fe2+ vs. Fe3+), so this information must be obtained by other techniques (e.g. Mössbauer spectroscopy or electron energy loss spectroscopy). The multiple masses of an element (i.e. isotopes) cannot be determined by WDS, but rather are most commonly obtained with a mass spectrometer. Uses: Materials science and engineering The technique is commonly used for analyzing the chemical composition of metals, alloys, ceramics, and glasses. It is particularly useful for assessing the composition of individual particles or grains and chemical changes on the scale of a few micrometers to millimeters. The electron microprobe is widely used for research, quality control, and failure analysis. Uses: Mineralogy and petrology This technique is most commonly used by mineralogists and petrologists. Most rocks are aggregates of small mineral grains. These grains may preserve chemical information acquired during their formation and subsequent alteration. This information may illuminate geologic processes, such as crystallization, lithification, volcanism, metamorphism, orogenic events (mountain building), and plate tectonics. This technique is also used for the study of extraterrestrial rocks (i.e. meteorites), and provides chemical data which is vital to understanding the evolution of the planets, asteroids, and comets. Uses: The change in elemental composition from the center (also known as core) to the edge (or rim) of a mineral can yield information about the history of the crystal's formation, including the temperature, pressure, and chemistry of the surrounding medium. Quartz crystals, for example, incorporate a small but measurable amount of titanium into their structure as a function of temperature, pressure, and the amount of titanium available in their environment. Changes in these parameters are recorded by titanium as the crystal grows. Uses: Paleontology In exceptionally preserved fossils, such as those of the Burgess Shale, soft parts of organisms may be preserved. Since these fossils are often compressed into a 2D film, it can be difficult to deduce what features were what: a famous example is that of triangular extensions in Opabinia, which were interpreted as either legs or extensions of the gut. Elemental mapping showed that they had a similar composition to the gut, favouring the second interpretation. Because of the thin nature of the carbon films, only low voltages (5-15 kV) can be used on such specimens. Uses: Meteorite analysis The chemical composition of meteorites can be analysed quite accurately using the EPMA technique. This can reveal a lot of information about the conditions that existed early in the Solar System's history. Online tutorials: Jim Wittke's class notes at Northern Arizona University; John Fournelle's class notes at the University of Wisconsin–Madison; John Donovan's class notes at the University of Oregon
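As a compact summary of the quantification step described in the Working section above, measured intensities are commonly reduced to k-ratios against standards of known composition and then corrected for matrix effects, often written as the "ZAF" factors for atomic number, absorption, and fluorescence. The formulation below is the standard textbook form rather than a quotation from this entry.

```latex
% k-ratio of the measured characteristic intensity for element i against a
% standard of known composition:
k_i \;=\; \frac{I_i^{\mathrm{sample}}}{I_i^{\mathrm{standard}}}

% Castaing's first approximation, refined by the matrix ("ZAF") corrections for
% atomic number (Z), absorption (A) and secondary fluorescence (F):
C_i^{\mathrm{sample}} \;\approx\; k_i \, C_i^{\mathrm{standard}} \cdot \bigl[\, Z \, A \, F \,\bigr]_i
```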
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Animal repellent** Animal repellent: An animal repellent consists of any object or method made with the intention of keeping animals away from personal items, food, plants, or people. Plants and other living organisms naturally emit chemicals known as semiochemicals to defend themselves from predators. Humans deliberately make use of some of these chemicals, along with other methods, to repel animals. Overview: Repellents generally work by taking advantage of an animal's natural aversion to something, and often the thing chosen is something that the animal has learned to avoid (or instinctively avoids) in its natural environment. Overview: Chemical repellents fall into two main categories: odor and taste. The former work better in the warm season and the latter, which ward off an animal only after it eats, in the cold season. (For example, the smell of the lawn fertilizer Milorganite is claimed to make it an effective repellent.) Such repellents mimic natural substances that deter animals and/or are designed to be so irritating to a specific animal or type of animal that it will avoid the protected object or area. Contact plant-origin repellents such as pepper, peppermint, tarragon, garlic, various essential oils, and castor oil, as well as diatomaceous earth and putrescent egg solids, are examples. Overview: Further, some repellents function by inducing fear in the target animal. Such a repellent may contain animal urine, dried blood, or hair. Some animals will avoid anything that has the odor of the urine of their predators. Tiger urine is thus very effective at keeping away animals. Coyote urine has gained currency as a deer repellent. Fox urine is used to repel rabbits, groundhogs, woodchucks, squirrels and chipmunks. Bobcat urine repels moles, mice, voles and other rodents. Wolf urine is used to repel moose. Used cat litter is also effective. Domestic dogs can be repelled by vinegar. Other repellents are not chemical. A simple electrified or barbed-wire fence can mechanically repel livestock or predator animals. Some electrical repellent systems have been tested against sharks. High-frequency whistles are used on vehicles to drive deer away from highways, and similar devices are used to deter certain types of insects or rodents. Repellents of this kind for domestic cats and dogs include ultrasonic devices which emit a high-frequency noise that does not affect humans. These types of non-chemical repellents are controversial, both because their effectiveness varies from animal to animal and because there have been few scientific studies conducted to prove that they work. They are, however, safe and humane, as are motion-activated sprinklers and electronic pet barriers, the latter of which are used by pet owners to confine their own pets to designated areas. Overview: Flashing lights are used to repel lions in Kenya. The ideal repellent is completely specific for the target animal; that is, it drives away the animal that one wishes to repel without affecting or harming any other animals or people. One type of animal repellent may be effective for raccoons, while another animal repellent may be more effective for skunks. It can be difficult to design a repellent method that drives away only undesirable animals while having no effect on people or other creatures. Snake repellents: Research has shown that cinnamon oil, clove oil, and eugenol are effective snake repellents.
Snakes will retreat when sprayed directly with these oils and will exit cargo or other confined spaces when these oils are introduced to the area. In ancient times, the Greek historian Herodotus noted that Arabian harvesters of frankincense used burning resin from Styrax trees to repel poisonous snakes that lived in the trees. Camphor and moth balls are also used as snake repellents. The roots and other parts of Acacia polyacantha subsp. campylacantha emit chemical compounds that repel animals, including rats, snakes and crocodiles. To repel snakes, the roots are placed in the rafters of houses.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Common Data Representation** Common Data Representation: Common Data Representation (CDR) is used to represent structured or primitive data types passed as arguments or results during remote invocations on Common Object Request Broker Architecture (CORBA) distributed objects. It enables clients and servers written in different programming languages to work together. For example, it translates little-endian to big-endian. It assumes prior agreement on type, so no information is given with data representation in messages.
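To make the byte-order point above concrete: CDR follows a "receiver-makes-right" convention, in which the sender marshals values in its own native byte order and a flag in the message header tells the receiver which order was used, so only a mismatched receiver has to swap. The Go sketch below is a simplified illustration of that idea; it is not a real CORBA/CDR marshaller, and the one-byte flag layout is invented for the example.

```go
package main

// Simplified "receiver-makes-right" byte-order handling: the first byte flags
// the sender's byte order (1 = little-endian, 0 = big-endian) and the receiver
// decodes accordingly. Real CDR/GIOP headers carry this flag differently.
import (
	"encoding/binary"
	"fmt"
)

func encodeUint32(v uint32, littleEndian bool) []byte {
	buf := make([]byte, 5)
	if littleEndian {
		buf[0] = 1
		binary.LittleEndian.PutUint32(buf[1:], v)
	} else {
		buf[0] = 0
		binary.BigEndian.PutUint32(buf[1:], v)
	}
	return buf
}

func decodeUint32(msg []byte) uint32 {
	if msg[0] == 1 {
		return binary.LittleEndian.Uint32(msg[1:])
	}
	return binary.BigEndian.Uint32(msg[1:])
}

func main() {
	msg := encodeUint32(0xDEADBEEF, true) // sender happens to be little-endian
	fmt.Printf("wire bytes: % x\n", msg)
	fmt.Printf("decoded:    %#x\n", decodeUint32(msg))
}
```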
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jagarico** Jagarico: Jagarico is a family of salty snack products made primarily from processed fried potatoes. Introduced in 1995 by Calbee, Jagarico could be described as rod-shaped potato chips. According to Barabara Zec, they "have a similar appearance to French fries." Product description: This product is sold in packages of various sizes ranging from 38 to 108 grams (1.3 to 3.8 oz). The standard size of this product is 7 cm (2.8 in) in length and roughly 7 mm (0.28 in) in diameter. Since 2012, 8.5 cm (3.3 in) "long" size products have also been marketed. Moreover, thinner versions with a diameter of about 5.25 mm (0.207 in) began to hit store shelves in 2022. These pre-packaged potato sticks are available in many flavors throughout Japan and in at least ten other countries. To boost sales, Calbee regularly introduces new flavors while taking those items with sluggish sales out of production. For example, a "Salt and Sesame Oil" incarnation of this product was launched in 2016. At roughly the same time, their "Jurassic Salt" and "Cheese Curry" flavors were discontinued due to tepid sales. Product description: Moreover, in 2020 a shorter garlic-flavored version of this product that targets older consumers hit the market. According to a 2018 report, about 14.5% of Calbee's total 2017 Q1 sales were derived from Jagarico products. During the third financial quarter of 2021, Jagarico sales in Japan amounted to about 34.5 billion yen, and overseas sales are an increasingly important revenue source. According to a 2017 survey by Keio Group, Jagarico was ranked as the fourth most popular snack in the Tokyo area. However, the sample size for that survey was not specified. Similar products: Although the ingredients of Jagarico are similar to those of many mass-marketed potato chips, their shape resembles a traditional Japanese candied sweet potato snack known as kempi. According to a 1999 US Department of Agriculture report, this product is classified as a fabricated potato snack. However, it resembles other shoestring potato snacks such as Koikeya's "Stick Karamucho", Morinaga's "Potelong", and Seijō Ishii's "Miraku Nori". Jagarico is also related to another Calbee product known as Jagabee. Whereas Jagabee are somewhat thick and made from whole, unpeeled potatoes, Jagarico are usually thinner and made from peeled potatoes. According to Onishi, Calbee tailors its products to specific audiences, and Jagarico was a product designed primarily for teenage women, whereas Jagabee targets older consumers. The success of many Jagarico products has spawned a number of derivative snacks. For example, around 1998 Calbee began marketing a sweet potato version of Jagarico known as "Satsumariko". Moreover, a corn version of this product known as "Tomorico" has been sold nationwide since 2018. That same year, a soybean incarnation of Jagarico known as "Edamarico" hit store shelves in Japan. Furthermore, a thicker processed potato product known as "Poteriko" was launched in 2021.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dan Klitsner** Dan Klitsner: Dan Klitsner is the founder and creative director of toy inventing and licensing firm KID Group LLC. Klitsner has invented and licensed several number-one hit toys, including Bop It, Perplexus and Hyper Dash. His inventions have been the recipients of over two dozen award nominations, 2 gold IDEA Awards and 4 Toy Association Toy of the Year Awards. He also co-founded QiGo Inc., which utilized USB Key technology to support child-safe internet connections for toys. Klitsner has served as a judge in a number of design and innovation competitions, including the IDSA international design competition, the Consumer Electronics Show, and the Toy of the Year Awards. Klitsner recently honored the 25th anniversary of Bop It! with its latest iteration, “Bop It! Button”, where a single-button version of the toy calls players to “Bop It,” “Don’t Bop It,” “Do Bop It,” “Do Not Bop It.” Awards: 2021 Toy and Game International Excellence Awards, Innovative Art and Design Visuals of the Year, Perplexus Snitch; 2015 Toy Association Toy of the Year Awards, Game of the Year, Simon Swipe; 2013 Toy Association Toy of the Year Awards, Game of the Year, Perplexus Epic; 2009 Creative Child Creative Toy Awards, Top Toy of the Year, Hyper Dash; 2008 Toy Association Toy of the Year Awards, Electronic Entertainment Toy of the Year, Power Tour Guitar; 2003 Toy Association Toy of the Year Awards, Vehicle of the Year, Air Hogs Regenerator RC; 1999 IDEA Award, Keytop Toys; 1997 Duracell Toy Award, Bop It
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Baire set** Baire set: In mathematics, more specifically in measure theory, the Baire sets form a σ-algebra of a topological space that avoids some of the pathological properties of Borel sets. Baire set: There are several inequivalent definitions of Baire sets, but in the most widely used, the Baire sets of a locally compact Hausdorff space form the smallest σ-algebra such that all compactly supported continuous functions are measurable. Thus, measures defined on this σ-algebra, called Baire measures, are a convenient framework for integration on locally compact Hausdorff spaces. In particular, any compactly supported continuous function on such a space is integrable with respect to any finite Baire measure. Baire set: Every Baire set is a Borel set. The converse holds in many, but not all, topological spaces. Baire sets avoid some pathological properties of Borel sets on spaces without a countable base for the topology. In practice, the use of Baire measures on Baire sets can often be replaced by the use of regular Borel measures on Borel sets. Baire sets were introduced by Kunihiko Kodaira (1941, Definition 4), Shizuo Kakutani and Kunihiko Kodaira (1944) and Halmos (1950, page 220), who named them after Baire functions, which are in turn named after René-Louis Baire. Basic definitions: There are at least three inequivalent definitions of Baire sets on locally compact Hausdorff spaces, and even more definitions for general topological spaces, though all these definitions are equivalent for locally compact σ-compact Hausdorff spaces. Moreover, some authors add restrictions on the topological space that Baire sets are defined on, and only define Baire sets on spaces that are compact Hausdorff, or locally compact Hausdorff, or σ-compact. Basic definitions: First definition Kunihiko Kodaira defined what we call Baire sets (although he confusingly calls them "Borel sets") of certain topological spaces to be the sets whose characteristic function is a Baire function (the smallest class of functions containing all continuous real-valued functions and closed under pointwise limits of sequences). Dudley (1989, Sect. 7.1) gives an equivalent definition and defines Baire sets of a topological space to be elements of the smallest σ-algebra such that all continuous real-valued functions are measurable. For locally compact σ-compact Hausdorff spaces this is equivalent to the following definitions, but in general the definitions are not equivalent. Conversely, the Baire functions are exactly the real-valued functions that are Baire measurable. For metric spaces, the Baire sets coincide with the Borel sets. Second definition Halmos (1950, page 220) defined Baire sets of a locally compact Hausdorff space to be the elements of the σ-ring generated by the compact Gδ sets. This definition is no longer used much, as σ-rings are somewhat out of fashion. When the space is σ-compact, this definition is equivalent to the next definition. One reason for working with compact Gδ sets rather than closed Gδ sets is that Baire measures are then automatically regular (Halmos 1950, theorem G page 228). Third definition The third and most widely used definition is similar to Halmos's definition, modified so that the Baire sets form a σ-algebra rather than just a σ-ring. Basic definitions: A subset of a locally compact Hausdorff topological space is called a Baire set if it is a member of the smallest σ–algebra containing all compact Gδ sets. 
In other words, the σ–algebra of Baire sets is the σ–algebra generated by all compact Gδ sets. Alternatively, Baire sets form the smallest σ-algebra such that all continuous functions of compact support are measurable (at least on locally compact Hausdorff spaces; on general topological spaces these two conditions need not be equivalent). Basic definitions: For σ-compact spaces this is equivalent to Halmos's definition. For spaces that are not σ-compact the Baire sets under this definition are those under Halmos's definition together with their complements. However, in this case it is no longer true that a finite Baire measure is necessarily regular: for example, the Baire probability measure that assigns measure 0 to every countable subset of an uncountable discrete space and measure 1 to every co-countable subset is a Baire probability measure that is not regular. Examples: The different definitions of Baire sets are not equivalent For locally compact Hausdorff topological spaces that are not σ-compact the three definitions above need not be equivalent. Examples: A discrete topological space is locally compact and Hausdorff. Any function defined on a discrete space is continuous, and therefore, according to the first definition, all subsets of a discrete space are Baire. However, since the compact subspaces of a discrete space are precisely the finite subspaces, the Baire sets, according to the second definition, are precisely the at most countable sets, while according to the third definition the Baire sets are the at most countable sets and their complements. Thus, the three definitions are non-equivalent on an uncountable discrete space. Examples: For non-Hausdorff spaces the definitions of Baire sets in terms of continuous functions need not be equivalent to definitions involving Gδ compact sets. For example, if X is an infinite countable set whose closed sets are the finite sets and the whole space, then the only continuous real functions on X are constant, but all subsets of X are in the σ-algebra generated by compact closed Gδ sets. Examples: A Borel set that is not a Baire set In a Cartesian product of uncountably many compact Hausdorff spaces with more than one point, a point is never a Baire set, in spite of the fact that it is closed, and therefore a Borel set. Properties: Baire sets coincide with Borel sets in Euclidean spaces. Properties: For every compact Hausdorff space, every finite Baire measure (that is, a measure on the σ-algebra of all Baire sets) is regular.For every compact Hausdorff space, every finite Baire measure has a unique extension to a regular Borel measure.The Kolmogorov extension theorem states that every consistent collection of finite-dimensional probability distributions leads to a Baire measure on the space of functions. Assuming compactness (of the given space, and therefore also the function space) one may extend it to a regular Borel measure. After completion one gets a probability space that is not necessarily standard.
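For readers who prefer symbols, the third (most widely used) definition above can be written compactly as follows for a locally compact Hausdorff space X, using the equivalence with compactly supported continuous functions that the article states for such spaces:

```latex
\mathrm{Ba}(X)
  \;=\; \sigma\!\left(\{\, K \subseteq X : K \text{ is compact and } G_\delta \,\}\right)
  \;=\; \sigma\!\left(\{\, f^{-1}(B) : f \in C_c(X),\ B \subseteq \mathbb{R} \text{ Borel} \,\}\right).
```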
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dibenzo-1,4-dioxin** Dibenzo-1,4-dioxin: Dibenzo-1,4-dioxin, also dibenzodioxin or dibenzo-p-dioxin (dibenzo-para-dioxin), is a polycyclic heterocyclic organic compound in which two benzene rings are connected by a 1,4-dioxin ring. Its molecular formula is C12H8O2. The two oxygen atoms occupy opposite (para-) positions in the six-membered dioxin ring. Dibenzodioxin is the carbon skeleton of the poisonous polychlorinated dibenzodioxins (PCDDs), often called dioxins. The most harmful PCDD is 2,3,7,8-tetrachlorodibenzodioxin (TCDD). Dioxins and dioxin-like compounds form a category of pollutants that includes PCDDs and other compounds that have similar structure, toxicity, and persistence. Dibenzodioxin is also the skeleton of the polybrominated dibenzodioxins. Isomer: The general name dibenzodioxin usually refers to dibenzo-p-dioxin. The isomeric compound dibenzo-o-dioxin (dibenzo-ortho-dioxin) or dibenzo-1,2-dioxin, like the unstable 1,2-dioxin, has two adjacent oxygen atoms (ortho-). No detailed information is available on this isomer, but it is expected to be highly unstable, with peroxide-like characteristics.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Myers–Steenrod theorem** Myers–Steenrod theorem: Two theorems in the mathematical field of Riemannian geometry bear the name Myers–Steenrod theorem, both from a 1939 paper by Myers and Steenrod. The first states that every distance-preserving map (that is, an isometry of metric spaces) between two connected Riemannian manifolds is a smooth isometry of Riemannian manifolds. A simpler proof was subsequently given by Richard Palais in 1957. The main difficulty lies in showing that a distance-preserving map, which is a priori only continuous, is actually differentiable. Myers–Steenrod theorem: The second theorem, which is much more difficult to prove, states that the isometry group of a Riemannian manifold is a Lie group. For instance, the group of isometries of the two-dimensional unit sphere is the orthogonal group O(3).
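In symbols, the first theorem can be stated as follows (one standard formulation, not quoted verbatim from the entry above): let (M, g) and (N, h) be connected Riemannian manifolds with induced distance functions d_M and d_N, and let f : M → N be a surjective map satisfying

```latex
d_N\bigl(f(p),\, f(q)\bigr) \;=\; d_M(p,\, q) \qquad \text{for all } p, q \in M.
```

Then f is a smooth diffeomorphism with f*h = g, that is, an isometry of Riemannian manifolds; the nontrivial content is precisely that the a priori merely continuous map f is smooth.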
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Institute of Applied Physics and Computational Mathematics** Institute of Applied Physics and Computational Mathematics: The Institute of Applied Physics and Computational Mathematics (IAPCM) was established in 1958 in Beijing in the People's Republic of China. The institution conducts research on nuclear warhead design computations for the Chinese Academy of Engineering Physics (CAEP) in Mianyang, Sichuan, and focuses on applied theoretical research and on the study of fundamental theories. Its main research fields include: theoretical physics, nuclear fusion, plasma physics, nuclear physics, atomic and molecular physics, laser physics, fluid dynamics, applied mathematics, and arms control science and technology. Institute of Applied Physics and Computational Mathematics: The Federal Bureau of Investigation has stated that IAPCM has targeted U.S. defense labs for industrial espionage. From August 2012, the director of the institute was Li Hua.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Planning Domain Definition Language** Planning Domain Definition Language: The Planning Domain Definition Language (PDDL) is an attempt to standardize Artificial Intelligence (AI) planning languages. It was first developed by Drew McDermott and his colleagues in 1998 (inspired by STRIPS and ADL among others) mainly to make the 1998/2000 International Planning Competition (IPC) possible, and then evolved with each competition. The standardization provided by PDDL has the benefit of making research more reusable and easily comparable, though at the cost of some expressive power, compared to domain-specific systems. De facto official versions of PDDL: PDDL1.2 This was the official language of the 1st and 2nd IPC in 1998 and 2000 respectively. De facto official versions of PDDL: It separated the model of the planning problem into two major parts: (1) the domain description and (2) the related problem description. Such a division of the model allows for an intuitive separation of those elements which are (1) present in every specific problem of the problem-domain (these elements are contained in the domain-description), and those elements which (2) determine the specific planning-problem (these elements are contained in the problem-description). Thus several problem-descriptions may be connected to the same domain-description (just as several instances may exist of a class in OOP (Object Oriented Programming) or in OWL (Web Ontology Language), for example). Thus a domain and a connecting problem description form the PDDL-model of a planning-problem, and eventually this is the input of a planner (usually a domain-independent AI planner) software, which aims to solve the given planning-problem via some appropriate planning algorithm. The output of the planner is not specified by PDDL, but it is usually a totally or partially ordered plan (a sequence of actions, some of which may sometimes be executed in parallel). In general, the contents of a PDDL1.2 domain and problem description are as follows. (1) The domain description consisted of a domain-name definition, definition of requirements (to declare those model-elements to the planner which the PDDL-model is actually using), definition of the object-type hierarchy (just like a class-hierarchy in OOP), definition of constant objects (which are present in every problem in the domain), definition of predicates (templates for logical facts), and also the definition of possible actions (operator-schemas with parameters, which should be grounded/instantiated during execution). Actions had parameters (variables that may be instantiated with objects), preconditions and effects. The effects of actions could also be conditional (when-effects). (2) The problem description consisted of a problem-name definition, the definition of the related domain-name, the definition of all the possible objects (atoms in the logical universe), initial conditions (the initial state of the planning environment, a conjunction of true/false facts), and the definition of goal-states (a logical expression over facts that should be true/false in a goal-state of the planning environment). Thus eventually PDDL1.2 captured the "physics" of a deterministic, single-agent, discrete, fully accessible planning environment. De facto official versions of PDDL: PDDL2.1 This was the official language of the 3rd IPC in 2002. De facto official versions of PDDL: It introduced numeric fluents (e.g.
to model non-binary resources such as fuel-level, time, energy, distance, weight, ...), plan-metrics (to allow quantitative evaluation of plans, and not just goal-driven, but utility-driven planning, i.e. optimization, metric-minimization/maximization), and durative/continuous actions (which could have variable, non-discrete length, conditions and effects). Eventually PDDL2.1 allowed the representation and solution of many more real-world problems than the original version of the language. De facto official versions of PDDL: PDDL2.2 This was the official language of the deterministic track of the 4th IPC in 2004. De facto official versions of PDDL: It introduced derived predicates (to model the dependency of given facts on other facts, e.g. if A is reachable from B, and B is reachable from C, then A is reachable from C (transitivity)), and timed initial literals (to model exogenous events occurring at a given time independently of plan-execution). Eventually PDDL2.2 extended the language with a few important elements, but was not as radical an evolution as PDDL2.1 had been after PDDL1.2. De facto official versions of PDDL: PDDL3.0 This was the official language of the deterministic track of the 5th IPC in 2006. De facto official versions of PDDL: It introduced state-trajectory constraints (hard-constraints in the form of modal-logic expressions, which should be true for the state-trajectory produced during the execution of a plan, which is a solution of the given planning problem) and preferences (soft-constraints in the form of logical expressions, similar to hard-constraints, but their satisfaction was not necessary, although it could be incorporated into the plan-metric, e.g. to maximize the number of satisfied preferences, or just to measure the quality of a plan) to enable preference-based planning. Eventually PDDL3.0 updated the expressiveness of the language to be able to cope with recent, important developments in planning. De facto official versions of PDDL: PDDL3.1 This was the official language of the deterministic track of the 6th and 7th IPC in 2008 and 2011 respectively. It introduced object-fluents (i.e. the range of a function could now be any object-type, not only numerical (integer or real)). Thus PDDL3.1 adapted the language even more to modern expectations with a syntactically seemingly small, but semantically quite significant change in expressiveness. Current situation: The latest version of the language is PDDL3.1. The BNF (Backus–Naur Form) syntax definition of PDDL3.1 can be found among the resources of the IPC-2011 homepage or the IPC-2014 homepage. Successors/variants/extensions of PDDL: PDDL+ This extension of PDDL2.1 from around 2002–2006 provides a more flexible model of continuous change through the use of autonomous processes and events. Successors/variants/extensions of PDDL: The key feature this extension provides is the ability to model the interaction between the agent's behaviour and changes that are initiated by the agent's environment. Processes run over time and have a continuous effect on numeric values. They are initiated and terminated either by the direct action of the agent or by events triggered in the environment. This 3-part structure is referred to as the start-process-stop model. Distinctions are made between logical and numeric states: transitions between logical states are assumed to be instantaneous whilst occupation of a given logical state can endure over time.
Thus in PDDL+ continuous update expressions are restricted to occur only in process effects. Actions and events, which are instantaneous, are restricted to the expression of discrete change. This yields the aforementioned 3-part modelling of periods of continuous change: (1) an action or event starts a period of continuous change on a numeric variable expressed by means of a process; (2) the process realizes the continuous change of the numeric variable; (3) an action or event finally stops the execution of the process and terminates its effect on the numeric variable. Comment: the goals of the plan might be achieved before an active process is stopped. Successors/variants/extensions of PDDL: NDDL NDDL (New Domain Definition Language) is NASA's response to PDDL from around 2002. Successors/variants/extensions of PDDL: Its representation differs from PDDL in several respects: 1) it uses a variable/value representation (timelines/activities) rather than a propositional/first-order logic, and 2) there is no concept of states or actions, only of intervals (activities) and constraints between those activities. In this respect, models in NDDL look more like schemas for SAT encodings of planning problems than like PDDL models. Because of these differences, planning and the execution of plans (e.g. during critical space missions) may be more robust when using NDDL, but the correspondence to standard planning-problem representations other than PDDL may be much less intuitive than in the case of PDDL. Successors/variants/extensions of PDDL: MAPL MAPL (Multi-Agent Planning Language, pronounced "maple") is an extension of PDDL2.1 from around 2003. Successors/variants/extensions of PDDL: It is a substantial modification of the original language. It introduces non-propositional, multi-valued state-variables (which may take the values true, false, unknown, or anything else). It introduces a temporal model given with modal operators (before, after, etc.). Nonetheless, in PDDL3.0 a more thorough temporal model was given, which is also compatible with the original PDDL syntax (and it is just an optional addition). MAPL also introduces actions whose duration is determined at runtime, and explicit plan synchronization, which is realized through speech-act-based communication among agents. This assumption may be artificial, since agents executing concurrent plans need not necessarily communicate in order to function in a multi-agent environment. Finally, MAPL introduces events (endogenous and exogenous) for the sake of handling concurrency of actions. Thus events become part of plans explicitly, and are assigned to agents by a control function, which is also part of the plan. Successors/variants/extensions of PDDL: OPT OPT (Ontology with Polymorphic Types) was a profound extension of PDDL2.1 by Drew McDermott from around 2003–2005 (with some similarities to PDDL+). Successors/variants/extensions of PDDL: It was an attempt to create a general-purpose notation for creating ontologies, defined as formalized conceptual frameworks for planning domains about which planning applications are to reason. Its syntax was based on PDDL, but it had a much more elaborate type system, which allowed users to make use of higher-order constructs such as explicit λ-expressions allowing for efficient type inference (i.e.
not only did domain objects have types (level 0 types), but the functions/fluents defined over these objects also had types in the form of arbitrary mappings (level 1 types); these mappings could be generic, so their parameters (the domain and range of the generic mapping) could be defined with variables, which in turn could have an even higher-level type (level 2 type); moreover, the mappings themselves could be arbitrary, i.e. the domain or range of a function (e.g. a predicate or numeric fluent) could be any level 0/1/2 type: for example, functions could map from arbitrary functions to arbitrary functions). OPT was basically intended to be (almost) upwardly compatible with PDDL2.1. The notation for processes and durative actions was borrowed mainly from PDDL+ and PDDL2.1, but beyond that OPT offered many other significant extensions (e.g. data-structures, non-Boolean fluents, return-values for actions, links between actions, hierarchical action expansion, hierarchy of domain definitions, the use of namespaces for compatibility with the semantic web). Successors/variants/extensions of PDDL: PPDDL PPDDL (Probabilistic PDDL) 1.0 was the official language of the probabilistic track of the 4th and 5th IPC in 2004 and 2006 respectively. Successors/variants/extensions of PDDL: It extended PDDL2.1 with probabilistic effects (discrete, general probability distributions over the possible effects of an action; see the sketch below), reward fluents (for incrementing or decrementing the total reward of a plan in the effects of actions), goal rewards (for rewarding a state-trajectory that incorporates at least one goal-state), and goal-achieved fluents (which were true if the state-trajectory incorporated at least one goal-state). Eventually these changes allowed PPDDL1.0 to realize Markov Decision Process (MDP) planning, where there may be uncertainty in the state-transitions, but the environment is fully observable for the planner/agent. Successors/variants/extensions of PDDL: APPL APPL (Abstract Plan Preparation Language) is a newer variant of NDDL from 2006, which is more abstract than most existing planning languages such as PDDL or NDDL. Successors/variants/extensions of PDDL: The goal of this language was to simplify the formal analysis and specification of planning problems that are intended for safety-critical applications such as power management or automated rendezvous in future manned spacecraft. APPL used the same concepts as NDDL, extended with actions and some other concepts, but its expressive power is much less than PDDL's (in the hope of remaining robust and formally verifiable). Successors/variants/extensions of PDDL: RDDL RDDL (Relational Dynamic influence Diagram Language) was the official language of the uncertainty track of the 7th IPC in 2011. Successors/variants/extensions of PDDL: Conceptually it is based on PPDDL1.0 and PDDL3.0, but practically it is a completely different language both syntactically and semantically. The introduction of partial observability is one of the most important changes in RDDL compared to PPDDL1.0. It allows efficient description of Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs) by representing everything (state-fluents, observations, actions, ...) with variables. In this way RDDL departs from PDDL significantly. Grounded RDDL corresponds to Dynamic Bayesian Networks (DBNs), similarly to PPDDL1.0, but RDDL is more expressive than PPDDL1.0.
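To make the probabilistic-effect and reward syntax of PPDDL1.0 described above more concrete, here is a minimal, hedged sketch; the domain, predicates, action name and probabilities are invented for illustration and are not taken from the article or from any particular competition benchmark.

```
;; Sketch of a PPDDL1.0 action with a probabilistic effect and reward updates
;; (illustrative names only; probabilities chosen arbitrarily).
(define (domain delivery-robot)
  (:requirements :probabilistic-effects :rewards)
  (:predicates (holding-cup) (coffee-delivered) (cup-dropped))
  (:action deliver-coffee
    :parameters ()
    :precondition (holding-cup)
    :effect (probabilistic
              0.9 (and (coffee-delivered)
                       (not (holding-cup))
                       (increase (reward) 10))
              0.1 (and (cup-dropped)
                       (not (holding-cup))
                       (decrease (reward) 5)))))
```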
Successors/variants/extensions of PDDL: MA-PDDL MA-PDDL (Multi-Agent PDDL) is a minimalistic, modular extension of PDDL3.1 introduced in 2012 (i.e. a new :multi-agent requirement) that allows planning by and for multiple agents. The addition is compatible with all the features of PDDL3.1 and addresses most of the issues of MAPL. It adds the possibility of distinguishing between the possibly different actions of different agents (i.e. different capabilities). Similarly, different agents may have different goals and/or metrics. The preconditions of actions may now directly refer to concurrent actions (e.g. the actions of other agents), and thus actions with interacting effects can be represented in a general, flexible way (e.g. suppose that at least 2 agents are needed to execute a lift action to lift a heavy table into the air, since otherwise the table would remain on the ground; this is an example of constructive synergy, but destructive synergy can also easily be represented in MA-PDDL). Moreover, as a kind of syntactic sugar, a simple mechanism for the inheritance and polymorphism of actions, goals and metrics was also introduced in MA-PDDL (assuming :typing is declared). Since PDDL3.1 assumes that the environment is deterministic and fully observable, the same holds for MA-PDDL, i.e. every agent can access the value of every state fluent at every time-instant and observe every previously executed action of each agent, and the concurrent actions of agents unambiguously determine the next state of the environment. This was improved later by the addition of partial-observability and probabilistic effects (again, in the form of two new modular requirements, :partial-observability and :probabilistic-effects, respectively, the latter being inspired by PPDDL1.0, and both being compatible with all the previous features of the language, including :multi-agent). Example: Below is the domain definition of a STRIPS instance for the automated planning of a robot with two gripper arms, followed by the problem definition that instantiates the domain definition with a concrete environment containing two rooms and two balls.
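The two definitions referenced above are reproduced here as a sketch of the classic "gripper" domain traditionally used for this example; identifiers such as rooma, roomb, ball1, ball2, left and right follow the usual convention, and the listing should be read as an illustrative reconstruction rather than a verbatim copy of the article's original code.

```
;; Domain definition: a robot with two gripper arms moving balls between rooms.
(define (domain gripper-strips)
  (:predicates (room ?r) (ball ?b) (gripper ?g) (at-robby ?r)
               (at ?b ?r) (free ?g) (carry ?o ?g))
  (:action move
    :parameters (?from ?to)
    :precondition (and (room ?from) (room ?to) (at-robby ?from))
    :effect (and (at-robby ?to) (not (at-robby ?from))))
  (:action pick
    :parameters (?obj ?room ?gripper)
    :precondition (and (ball ?obj) (room ?room) (gripper ?gripper)
                       (at ?obj ?room) (at-robby ?room) (free ?gripper))
    :effect (and (carry ?obj ?gripper)
                 (not (at ?obj ?room))
                 (not (free ?gripper))))
  (:action drop
    :parameters (?obj ?room ?gripper)
    :precondition (and (ball ?obj) (room ?room) (gripper ?gripper)
                       (carry ?obj ?gripper) (at-robby ?room))
    :effect (and (at ?obj ?room)
                 (free ?gripper)
                 (not (carry ?obj ?gripper)))))

;; Problem definition: two rooms and two balls; both balls start in room A
;; and the goal is to have them in room B.
(define (problem gripper-two-balls)
  (:domain gripper-strips)
  (:objects rooma roomb ball1 ball2 left right)
  (:init (room rooma) (room roomb)
         (ball ball1) (ball ball2)
         (gripper left) (gripper right)
         (free left) (free right)
         (at ball1 rooma) (at ball2 rooma)
         (at-robby rooma))
  (:goal (and (at ball1 roomb) (at ball2 roomb))))
```

Given this pair of definitions, a typical planner would return a plan along the lines of picking up both balls in rooma with the left and right grippers, moving to roomb, and dropping them there.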
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**International English** International English: International English is the concept of using the English language as a global means of communication similar to an international auxiliary language, and often refers to the movement towards an international standard for the language. Related and sometimes synonymous terms include: Global English, World English, Common English, Continental English, General English, and Engas (English as associate language). Sometimes, these terms refer to the actuality of the situation, where English is spoken and used in numerous dialects around the world. These terms may acknowledge the diversity and varieties of English spoken throughout the world. International English: Sometimes however, these related terms refer to a desired standardisation (i.e., Standard English), but there is no consensus on the path to this goal. There have been many proposals for making International English more accessible to people from different nationalities; Basic English is an example, but it failed to make progress. More recently, there have been proposals for English as a lingua franca (ELF) in which non-native speakers take a highly active role in the development of the language. It has also been argued that International English is held back by its traditional spelling. There has been slow progress in adopting alternative spellings. Historical context: The modern concept of "International English" does not exist in isolation, but is the product of centuries of development of the English language. Historical context: The English language evolved in England, from a set of West Germanic dialects spoken by the Angles and Saxons, who arrived from continental Europe in the 5th century. Those dialects became known as Englisc (literally "Anglish"), the language today referred to as Anglo-Saxon or Old English (the language of the poem Beowulf). However, less than a quarter of the vocabulary of Modern English is derived from the shared ancestry with other West Germanic languages because of extensive borrowings from Norse, Norman, Latin, and other languages. It was during the Viking invasions of the Anglo-Saxon period that Old English was influenced by contact with Norse, a group of North Germanic dialects spoken by the Vikings, who came to control a large region in the North of England known as the Danelaw. Vocabulary items entering English from Norse (including the pronouns they and them) are thus attributable to the on-again-off-again Viking occupation of Northern England during the centuries prior to the Norman Conquest (see, e.g., Canute the Great). Soon after the Norman Conquest of 1066, the Englisc language ceased being a literary language (see, e.g., Ormulum) and was replaced by Anglo-Norman as the written language of England. During the Norman Period, English absorbed a significant component of French vocabulary (approximately one-third of the vocabulary of Modern English). With this new vocabulary, additional vocabulary borrowed from Latin (with Greek, another approximately one-third of Modern English vocabulary, though some borrowings from Latin and Greek date from later periods), a simplified grammar, and use of the orthographic conventions of French instead of Old English orthography, the language became Middle English (the language of Chaucer). 
The "difficulty" of English as a written language thus began in the High Middle Ages, when French orthographic conventions were used to spell a language whose original, more suitable orthography had been forgotten after centuries of nonuse. During the late medieval period, King Henry V of England (lived 1387–1422) ordered the use of the English of his day in proceedings before him and before the government bureaucracies. That led to the development of Chancery English, a standardised form used in the government bureaucracy. (The use of so-called Law French in English courts continued through the Renaissance, however.) The emergence of English as a language of Wales results from the incorporation of Wales into England and also dates from approximately this time period. Soon afterward, the development of printing by Caxton and others accelerated the development of a standardised form of English. Following a change in vowel pronunciation that marks the transition of English from the medieval to the Renaissance period, the language of the Chancery and Caxton became Early Modern English (the language of Shakespeare's day) and with relatively moderate changes eventually developed into the English language of today. Scots, as spoken in the lowlands and along the east coast of Scotland, developed largely independent of Modern English, and is based on the Northern dialects of Anglo-Saxon, particularly Northumbrian, which also serve as the basis of Northern English dialects such as those of Yorkshire and Newcastle upon Tyne. Northumbria was within the Danelaw and therefore experienced greater influence from Norse than did the Southern dialects. As the political influence of London grew, the Chancery version of the language developed into a written standard across Great Britain, further progressing in the modern period as Scotland became united with England as a result of the Acts of Union of 1707. Historical context: English was introduced to Ireland twice—a medieval introduction that led to the development of the now-extinct Yola dialect, and a modern introduction in which Hiberno-English largely replaced Irish as the most widely spoken language during the 19th century, following the Act of Union of 1800. Received Pronunciation (RP) is generally viewed as a 19th-century development and is not reflected in North American English dialects (except the affected Transatlantic accent), which are based on 18th-century English. Historical context: The establishment of the first permanent English-speaking colony in North America in 1607 was a major step towards the globalisation of the language. British English was only partially standardised when the American colonies were established. Isolated from each other by the Atlantic Ocean, the dialects in England and the colonies began evolving independently. Historical context: The British colonisation of Australia starting in 1788 brought the English language to Oceania. By the 19th century, the standardisation of British English was more settled than it had been in the previous century, and this relatively well-established English was brought to Africa, Asia and New Zealand. It developed both as the language of English-speaking settlers from Britain and Ireland, and as the administrative language imposed on speakers of other languages in the various parts of the British Empire. The first form can be seen in New Zealand English, and the latter in Indian English. 
In Europe, English has received a more central role particularly since 1919, when the Treaty of Versailles was composed not only in French, the common language of diplomacy at the time, but, at the special request of American president Woodrow Wilson, also in English – a major milestone in the globalisation of English. The English-speaking regions of Canada and the Caribbean are caught between historical connections with the UK and the Commonwealth and geographical and economic connections with the U.S. In some things they tend to follow British standards, whereas in others, especially commercial ones, they follow the U.S. standard. English as a global language: Braj Kachru divides the use of English into three concentric circles. The inner circle is the traditional base of English and includes countries such as the United Kingdom and Ireland and the anglophone populations of the former British colonies of the United States, Australia, New Zealand, South Africa, Canada, and various islands of the Caribbean, Indian Ocean, and Pacific Ocean. English as a global language: In the outer circle are those countries where English has official or historical importance ("special significance"). This includes most of the countries of the Commonwealth of Nations (the former British Empire), including populous countries such as India, Pakistan, and Nigeria; and others, such as the Philippines, under the sphere of influence of English-speaking countries. English in this circle is used for official purposes such as in business, news broadcasts, schools, and air traffic. Some countries in this circle have made English their national language. Here English may serve as a useful lingua franca between ethnic and language groups. Higher education, the legislature and judiciary, national commerce, and so on, may all be carried out predominantly in English. English as a global language: The expanding circle refers to those countries where English has no official role, but is nonetheless important for certain functions, e.g., international business and tourism. By the twenty-first century, non-native English speakers have come to outnumber native speakers by a factor of three, according to the British Council. Darius Degher, a professor at Malmö University in Sweden, uses the term decentered English to describe this shift, along with attendant changes in what is considered important to English users and learners. The Scandinavian language area as well as the Netherlands have near-complete bilingualism between their native languages and English as a foreign second language. Elsewhere in Europe, although not universally, English knowledge is still rather common among non-native speakers. In many cases this leads to accents derived from the native languages altering the pronunciation of the spoken English in these countries. English as a global language: Research on English as a lingua franca in the sense of "English in the Expanding Circle" is comparatively recent. Linguists who have been active in this field are Jennifer Jenkins, Barbara Seidlhofer, Christiane Meierkord and Joachim Grzega. English as a lingua franca in foreign language teaching: English as an additional language (EAL) is usually based on the standards of either American English or British English as well as incorporating foreign terms. English as an international language (EIL) is EAL with emphasis on learning different major dialect forms; in particular, it aims to equip students with the linguistic tools to communicate internationally.
Roger Nunn considers different types of competence in relation to the teaching of English as an International Language, arguing that linguistic competence has yet to be adequately addressed in recent considerations of EIL. Several models of "simplified English" have been suggested for teaching English as a foreign language: Basic English, developed by Charles Kay Ogden (and later also I. A. Richards) in the 1930s, with a recent revival initiated by Bill Templer; Threshold Level English, developed by van Ek and Alexander; Globish, developed by Jean-Paul Nerrière; and Basic Global English, developed by Joachim Grzega. Furthermore, Randolph Quirk and Gabriele Stein thought about a Nuclear English, which, however, has never been fully developed. English as a lingua franca in foreign language teaching: With reference to the term "Globish", Robert McCrum has used this to mean "English as global language". Jean-Paul Nerrière uses it for a constructed language. English as a lingua franca in foreign language teaching: Basic Global English Basic Global English, or BGE, is a concept of global English initiated by German linguist Joachim Grzega. It evolved from the idea of creating a type of English that can be learned more easily than regular British or American English and that serves as a tool for successful global communication. BGE is guided by creating "empathy and tolerance" between speakers in a global context. This applies to the context of global communication, where different speakers with different mother tongues come together. BGE aims to develop this competence as quickly as possible. English as a lingua franca in foreign language teaching: English language teaching is almost always related to a corresponding culture, e.g., learners either deal with American English and therefore with American culture, or British English and therefore with British culture. Basic Global English seeks to solve this problem by creating one collective version of English. Additionally, its advocates promote it as a system suited for self-teaching as well as classroom teaching. English as a lingua franca in foreign language teaching: BGE is based on 20 elementary grammar rules that provide a certain degree of variation. For example, regularly as well as irregularly formed verbs are accepted. Pronunciation rules are not as strict as in British or American English, so there is a certain degree of variation for the learners. Exceptions that cannot be used are pronunciations that would be harmful to mutual understanding and therefore minimize the success of communication. English as a lingua franca in foreign language teaching: Basic Global English is based on a 750-word vocabulary. Additionally, every learner has to acquire the knowledge of 250 additional words. These words can be chosen freely, according to the specific needs and interests of the learner. BGE provides not only basic language skills, but also so-called "Basic Politeness Strategies". These include creating a positive atmosphere, accepting an offer with "Yes, please" or refusing with "No, thank you", and small talk topics to choose and to avoid. English as a lingua franca in foreign language teaching: Basic Global English has been tested in two elementary schools in Germany. For the practical test of BGE, 12 lessons covered half of a school year. After the BGE teaching, students could answer questions about themselves, their family, their hobbies, etc. Additionally, they could form questions themselves about the same topics.
Besides that, they also learned the numbers from 1 to 31 and vocabulary including things in their school bag and in their classroom. The students as well as the parents had a positive impression of the project. Varying concepts: Universality and flexibility International English sometimes refers to English as it is actually being used and developed in the world; as a language owned not just by native speakers, but by all those who come to use it. Varying concepts: Basically, it covers the English language at large, often (but not always or necessarily) implicitly seen as standard. It is certainly also commonly used in connection with the acquisition, use, and study of English as the world's lingua franca ('TEIL: Teaching English as an International Language'), and especially when the language is considered as a whole in contrast with British English, American English, South African English, and the like. — McArthur (2002, p. 444–445) It especially means English words and phrases generally understood throughout the English-speaking world as opposed to localisms. The importance of non-native English language skills can be recognized behind the long-standing joke that the international language of science and technology is broken English. Varying concepts: Neutrality International English reaches toward cultural neutrality. This has a practical use: What could be better than a type of English that saves you from having to re-edit publications for individual regional markets! Teachers and learners of English as a second language also find it an attractive idea—both often concerned that their English should be neutral, without American or British or Canadian or Australian coloring. Any regional variety of English has a set of political, social and cultural connotations attached to it, even the so-called 'standard' forms. Varying concepts: According to this viewpoint, International English is a concept of English that minimises the aspects defined by either the colonial imperialism of Victorian Britain or the cultural imperialism of the 20th century United States. While British colonialism laid the foundation for English over much of the world, International English is a product of an emerging world culture, very much attributable to the influence of the United States as well, but conceptually based on a far greater degree of cross-talk and linguistic transculturation, which tends to mitigate both U.S. influence and British colonial influence. Varying concepts: The development of International English often centres on academic and scientific communities, where formal English usage is prevalent, and creative use of the language is at a minimum. This formal International English allows entry into Western culture as a whole and Western cultural values in general. Opposition The continued growth of the English language itself is seen by authors such as Alistair Pennycook as a kind of cultural imperialism, whether it is English in one form or English in two slightly different forms. Robert Phillipson argues against the possibility of such neutrality in his Linguistic Imperialism (1992). Learners who wish to use purportedly correct English are in fact faced with the dual standard of American English and British English, and other less known standard Englishes (including Australian, Scottish and Canadian). Varying concepts: Edward Trimnell, author of Why You Need a Foreign Language & How to Learn One (2005) argues that the international version of English is only adequate for communicating basic ideas. 
For complex discussions and business/technical situations, English is not an adequate communication tool for non-native speakers of the language. Trimnell also asserts that native English-speakers have become "dependent on the language skills of others" by placing their faith in international English. Varying concepts: Appropriation theory Some reject both what they call "linguistic imperialism" and David Crystal's theory of the neutrality of English. They argue that the phenomenon of the global spread of English is better understood in the framework of appropriation (e.g., Spichtinger 2000), that is, English used for local purposes around the world. Demonstrators in non-English speaking countries often use signs in English to convey their demands to TV-audiences around the globe, for example. Varying concepts: In English-language teaching, Bobda shows how Cameroon has moved away from a mono-cultural, Anglo-centered way of teaching English and has gradually appropriated teaching material to a Cameroonian context. This includes non-Western topics, such as the rule of Emirs, traditional medicine, and polygamy (1997:225). Kramsch and Sullivan (1996) describe how Western methodology and textbooks have been appropriated to suit local Vietnamese culture. The Pakistani textbook "Primary Stage English" includes lessons such as Pakistan My Country, Our Flag, and Our Great Leader (Malik 1993: 5,6,7), which might sound jingoistic to Western ears. Within the native culture, however, establishing a connection between English Language Teaching (ELT), patriotism, and Muslim faith is seen as one of the aims of ELT. The Punjab Textbook Board openly states: "The board ... takes care, through these books to inoculate in the students a love of the Islamic values and awareness to guard the ideological frontiers of your [the students] home lands." (Punjab Text Book Board 1997). Varying concepts: Many Englishes Many difficult choices must be made if further standardization of English is pursued. These include whether to adopt a current standard or move towards a more neutral, but artificial one. A true International English might supplant both current American and British English as a variety of English for international communication, leaving these as local dialects, or would rise from a merger of General American and standard British English with admixture of other varieties of English and would generally replace all these varieties of English. Varying concepts: We may, in due course, all need to be in control of two standard Englishes—the one which gives us our national and local identity, and the other which puts us in touch with the rest of the human race. In effect, we may all need to become bilingual in our own language. — David Crystal (1988: p. 265) This is the situation long faced by many users of English who possess a "non-standard" dialect of English as their birth tongue but have also learned to write (and perhaps also speak) a more standard dialect. (This phenomenon is known in linguistics as diglossia.) Many academics often publish material in journals requiring different varieties of English and change style and spellings as necessary without great difficulty. Varying concepts: As far as spelling is concerned, the differences between American and British usage became noticeable due to the first influential lexicographers (dictionary writers) on each side of the Atlantic. 
Samuel Johnson's dictionary of 1755 greatly favoured Norman-influenced spellings such as centre and colour; on the other hand, Noah Webster's first guide to American spelling, published in 1783, preferred spellings like center and the Latinate color. The differences in strategy and philosophy between Johnson and Webster are largely responsible for the main division in English spelling that exists today. However, these differences are extremely minor. Spelling is but a small part of the differences between dialects of English, and may not even reflect dialect differences at all (except in phonetically spelled dialogue). International English refers to much more than an agreed spelling pattern. Varying concepts: Dual standard Two approaches to International English are the individualistic and inclusive approach and the new dialect approach. Varying concepts: The individualistic approach gives control to individual authors to write and spell as they wish (within purported standard conventions) and to accept the validity of differences. The Longman Grammar of Spoken and Written English, published in 1999, is a descriptive study of both American and British English in which each chapter follows individual spelling conventions according to the preference of the main editor of that chapter. Varying concepts: The new dialect approach appears in The Cambridge Guide to English Usage (Peters, 2004), which attempts to avoid any language bias and accordingly uses an idiosyncratic international spelling system of mixed American and British forms (but tending to prefer the American English spellings). Qualifications: Standardised testing in International English for non-native English speakers has existed for some time; learners can use their local dialect of English, so it does not matter whether they use British or American spelling. The International English Language Testing System (IELTS) is recognised in countries such as the USA, the UK, Canada, Australia and New Zealand and is the world's most popular English language test for higher education and immigration. Other options are the International Certificate (PTE General) and Cambridge English Qualifications, which are also recognised globally and can be used as evidence of a required standard of English.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Society for Behavioral Neuroendocrinology** Society for Behavioral Neuroendocrinology: The Society for Behavioral Neuroendocrinology is an interdisciplinary scientific organization dedicated to the study of hormonal processes and neuroendocrine systems that regulate behavior. Publications: SBN publishes the scientific journal Hormones and Behavior.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Popular Astronomy (UK magazine)** Popular Astronomy (UK magazine): Popular Astronomy is the bi-monthly magazine of the UK's Society for Popular Astronomy, published in January, March, May, July, September and November. History and profile: The magazine was started in 1953 with the name The Junior Astronomer. Before 2011 it was a quarterly publication. Before 1981 the journal was known as Hermes, and earlier still it was called The Junior Astronomer. The magazine is published by the Society for Popular Astronomy, a national society for amateur astronomers. The magazine aims to present the science in plain English, avoiding unnecessary jargon. As well as main features covering professional and amateur research, regular articles include: AstroNews - updates on some of the most interesting current developments in professional astronomy; Amateur Scene - a look around local astronomy clubs; Deep Sky Notes - surveying the season's deep celestial sights; Sky Diary - what's happening in the sky in the coming weeks; Glorious Universe - comparing amateur and professional observations of celestial objects and phenomena; plus readers' letters, book and product reviews, society news, competitions and more. The magazine also includes a section for Young Stargazers to help younger readers to understand modern astronomy. Editors past and present: Richard Baum (1953 June–1955 October) Patrick Moore (1956) Richard Baum (1957 January–July) Gilbert Satterthwaite (1957 October–1961 April) John Lytheer (1961 July–1964 April) George Teideman (1964 July–1967 April) Ian Ridpath (1967 July–1974 April) Paul Sutherland (1974 July–1982 July) Enid Lake (1982 October–1985 October) Ian Ridpath (1986 January–1989 July; editor-in-chief until 1992 October) Tom Hosking (1989 October–2000 July) Peter Grego (2000 October–2016 February) Amanda Doyle (2016 February–2018 July) Mandy Bailey (acting ed.) (2018 July–2019 January) Osnat Katz (2019 January–2020 January) Robin Scagell (acting ed.) (2020 January– ). Changes of name: The Junior Astronomer from 1953 June until 1960 July; Hermes from 1960 October to 1980 October; Popular Astronomy 1981 January to present.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Share (P2P)** Share (P2P): Share (シャレ(洒落),シェア) is the name for a closed-source P2P application being developed in Japan by ファイル倉庫, a pseudonym translating as 'file warehouse.' Share was developed to be the successor to Winny. Like Winny, Share functions using a distributed data store referred to as a cache, with each computer acting as a node in the network. Netagent published a survey in June 2018 suggesting that Share was the third most popular P2P network in Japan after Winny and Perfect Dark (P2P), with approximately 10,000 nodes connecting each day over Golden Week, down from 30,000 in 2015. Background: Share's logo refers to the Laughing Man, a fictional anonymous hacker from the anime series Ghost in the Shell: Stand Alone Complex. Share uses encryption to hide the identity of who is transferring and what they are transferring. It is non-centralized, so it cannot be easily shut down, and it supports multiple-source "swarm" downloading. All files are transferred encrypted, so they must be decrypted upon download completion. In the meantime they are stored in encrypted form in a "Cache" folder. This folder is also used to allow recently downloaded files to be shared among the network based on priorities. Background: Share also features a plugin system. The plugins and PDK are readily available through the Share network. The PDK is written in Delphi. Background: Unlike Winny, Share allows users to specify up to 255 Cluster Keywords, though only 5 can be active at once (Winny only allowed 3 cluster words, and its system was more confusing). These are used to connect to nodes that have also specified the same Cluster Keywords. This allows users to maintain connections with nodes that are sharing files they might be interested in, while disconnecting from nodes that share content they are not concerned about. Background: Users can specify auto-download triggers and auto-block filters. The network also appears to have some sort of a "forgery warning" system to warn people about possible falsified data/files. Like Winny, Share uses « Trip IDs » to verify the identity of a person sharing a file. A « Trip ID » is a sort of encrypted key that confirms that a person is who they say they are. This allows users to decide whether or not they trust a person based on their previous sharing experience with them. When a new version of Share becomes available, users are given a notice in the Share statusbar. When this happens, users can search for the new version on the Share network, and download it from a reliable source based on Trip. Criticism: Share is highly popular in Japan, but in the West, some concerns have been raised. Criticism: In Japan, high-speed Internet is more readily available than in most of Western Europe and North America. For this reason the minimum upload and download limits are set to 50 kB/s. Also, the cache system can use around 4 GB of free space at any given time to store cached downloads. This might be inconvenient for people with small hard drives. Criticism: As a closed-source product, Share partially relies on security through obscurity. Like many other P2P applications, Share downloads files in blocks. However, Share can only export partially downloaded files in a sequential manner. For example, if a file has 100 blocks and block 51 is missing, Share will not be able to export blocks 52-100 even if they are already downloaded. Plugin developers have tried to overcome this limitation.
Download: Share can be downloaded from the Share P2P and P2P ファイル共有ソフトノード登録所 websites. Language localizations: Unlike Winny, Share includes an option for language localisation changes (labeling of buttons, etc.). The locale.txt file contains the information for a particular language and resides in the Share directory. Available localizations include: an English localization for A82; an English localization for EX2 (TCP version); an English localization for NT5 (UDP version); further English localizations for Locate/Hint/Readme/Info; and a Spanish localization for EX2 (archived 2017-01-18 at the Wayback Machine). Legal issues: On 9 May 2008, three Japanese people aged 21 to 41 were arrested in Kyoto, Japan for illegally uploading copyrighted files with Share. These were the first Share-related cases in Japan. Nevertheless, research showed that there was no significant drop in online Share users after these arrests. On 27 November 2008, another male Share user was arrested in Japan for illegally uploading a Japanese TV drama with Share. On 12 February 2009, two male Share users became the first to be arrested on charges of uploading child pornography with Share. On 30 September 2009, multiple Japanese media reported that two men were arrested for uploading Nintendo DS game software, including Square Enix's Dragon Quest IX. They are the first users arrested for uploading DS games. On 30 November 2009, 10 Japanese men and 1 woman were arrested for sharing anime, music, movies, and games. Not all were the original uploaders. Legal issues: On 31 March 2010, 62-year-old Seiji Sato was spotted by new P2P surveillance software sharing Avatar and other movies. On 14 January 2011, 18 people were arrested for sharing movies, anime, music, games and software.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**2-Aminothiazole** 2-Aminothiazole: 2-Aminothiazole is a heterocyclic amine featuring a thiazole core. It can also be considered a cyclic isothiourea. It possesses an odor similar to pyridine and is soluble in water, alcohols and diethyl ether. It is commonly used as a starting point for the synthesis of many compounds including sulfur drugs, biocides, fungicides, dyes and chemical reaction accelerators. 2-Aminothiazole can be used as a thyroid inhibitor in the treatment of hyperthyroidism and has antibacterial activity. Alternatively, its acid tartrate salt can be used. Recent studies using prion-infected neuroblastoma cell lines have suggested that aminothiazole may be used as a therapeutic drug for prion diseases. Many 2-aminothiazoles and 2-amidothiazoles are drugs: avatrombopag, amthamine, amiphenazole, abafungin, acotiamide, pramipexole.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Death knock** Death knock: In journalism, the term death knock refers to the practice of journalists contacting people with a close relationship to a deceased individual, in an attempt to garner their thoughts and feelings regarding the death, and also to gather other information. The practice of death knocks is often considered to be a negative aspect of journalism, but the exposure it brings has also been shown sometimes to be a comfort to bereaved individuals. In the United Kingdom, the Independent Press Standards Organisation has laid down guidelines relating to how death knocks are carried out. These guidelines include using sensitivity, sympathy, and discretion when practicing death knocks. Digital use: Due to the increasing popularity of social media, many journalists now take advantage of the internet when practicing death knocks. Journalists will often use social media platforms to find photographs and comments that were posted by the deceased individual or their loved ones. Using the internet for death knocks is not only convenient, but also far less stressful for journalists. Since many journalists view death knocks as a negative aspect of their job, it is often associated with anxiety, low self-esteem, and even self-disgust. Journalists are also often criticized for using social media for death knocks because it is a controversial practice that many people find to be unethical. Although journalists consider social media to be in the public domain, others consider it an invasion of the privacy of the deceased and their loved ones. Not only is social media often criticized for its inaccuracy, but its use by journalists may be harmful to the deceased individual's family. When journalists use information from social media, families lose control over the media's portrayal of the deceased individual. Many families expect to be contacted by journalists after their loved one's death and will often feel betrayed once they discover that information was used from their social media without their consent. However, many journalists recognize the importance of traditional death knocks and will typically only use social media as a "last resort."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rhombitriheptagonal tiling** Rhombitriheptagonal tiling: In geometry, the rhombitriheptagonal tiling is a semiregular tiling of the hyperbolic plane. At each vertex of the tiling there are one triangle and one heptagon, alternating with two squares. The tiling has Schläfli symbol rr{7, 3}. It can be seen as a rectified triheptagonal tiling, r{7,3}, as well as an expanded heptagonal tiling or an expanded order-7 triangular tiling. Dual tiling: The dual tiling is called a deltoidal triheptagonal tiling, and consists of congruent kites. It is formed by overlaying an order-3 heptagonal tiling and an order-7 triangular tiling. Related polyhedra and tilings: From a Wythoff construction there are eight hyperbolic uniform tilings that can be based on the regular heptagonal tiling. Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms. Symmetry mutations: This tiling is topologically related as a part of a sequence of cantellated polyhedra with vertex figure (3.4.n.4), and continues as tilings of the hyperbolic plane. These vertex-transitive figures have (*n32) reflectional symmetry.
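As a compact restatement of the vertex description above (standard notation, added here for reference rather than taken from the article's tables):

```latex
% Around every vertex the faces occur in the cyclic order
% triangle, square, heptagon, square.
\[
  \text{vertex configuration: } 3.4.7.4, \qquad
  \text{Schläfli symbol: } rr\{7,3\} = t_{0,2}\{7,3\}.
\]
```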
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Laurène Meyniel-Schicklin** Laurène Meyniel-Schicklin: Laurène Meyniel-Schicklin is a bioinformatics engineer who specializes in genomic data science. Career: In 2014 she co-founded Enyo Pharma where she conducts research on a drug discovery engine which mimics viruses' ability to model the cellular functions of the host. She previously worked as an engineer with Inserm and taught at the Catholic University of Lyon. Education: She holds a degree in Bioinformatics from the University of Évry Val d'Essonne. Awards and honors: Laurène was featured in Forbes' Top 50 Women in Tech 2018 list, and she has been granted a patent.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sexual network** Sexual network: A sexual network is a social network that is defined by the sexual relationships within a set of individuals. Studies and discoveries: Like other forms of social networks, sexual networks can be formally studied using the mathematics of graph theory and network theory. Recent epidemiological studies have investigated sexual networks, and suggest that the statistical properties of sexual networks are crucial to the spread of sexually transmitted diseases (STDs). Sub-graphs, both large and small, can be defined within the overall sexual network graph; for example, people who frequent particular bars or clubs, belong to a particular ethnic group or take part in a particular type of sexual activity, or are part of a particular outbreak of an STD. In particular, assortative mixing between people with large numbers of sexual partners seems to be an important factor in the spread of STDs. Studies and discoveries: In a surprising result, mathematical models predict that the sexual network graph for the human race appears to have a single giant component that indirectly links almost all people who have had more than one sexual partner, and a great many of those who have had only one sexual partner (if their one sexual partner was themselves part of the giant component). Studies and discoveries: For more detailed epidemiological work, the time sequence of sexual contacts is important.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Capital Markets Index** Capital Markets Index: The Capital Markets Index (CPMKTS) was an investment tool that tracked the value of traditional investment-grade U.S. capital market securities. It viewed the markets broadly and included approximately 9,500 equity, fixed income, and money market instruments. The American Stock Exchange published the sub-indexes CPMKTE, CPMKTB, and CPMKTL, tracking equities, bonds, and liquidity, respectively. CPMKTS was launched by Dorchester Capital Management Company of Houston, Texas on May 4, 2006. The Capital Markets Index was carried on the American Stock Exchange and was updated every 15 seconds. The trademark was canceled on October 24, 2014. The index is inactive, and the last available data is from December 2015.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Indoxyl-UDPG glucosyltransferase** Indoxyl-UDPG glucosyltransferase: In enzymology, an indoxyl-UDPG glucosyltransferase (EC 2.4.1.220) is an enzyme that catalyzes the chemical reaction UDP-glucose + indoxyl ⇌ UDP + indican. Thus, the two substrates of this enzyme are UDP-glucose and indoxyl, whereas its two products are UDP and indican. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:indoxyl 3-O-beta-D-glucosyltransferase. This enzyme is also called indoxyl-UDPG-glucosyltransferase.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mesangial cell** Mesangial cell: Mesangial cells are specialised cells in the kidney that make up the mesangium of the glomerulus. Together with the mesangial matrix, they form the vascular pole of the renal corpuscle. The mesangial cell population accounts for approximately 30-40% of the total cells in the glomerulus. Mesangial cells can be categorized as either extraglomerular mesangial cells or intraglomerular mesangial cells, based on their relative location to the glomerulus. The extraglomerular mesangial cells are found between the afferent and efferent arterioles towards the vascular pole of the glomerulus. The extraglomerular mesangial cells are adjacent to the intraglomerular mesangial cells that are located inside the glomerulus and in between the capillaries. The primary function of mesangial cells is to remove trapped residues and aggregated protein from the basement membrane thus keeping the filter free of debris. The contractile properties of mesangial cells have been shown to be insignificant in changing the filtration pressure of the glomerulus. Structure: Mesangial cells have irregular shapes with flattened-cylinder-like cell bodies and processes at both ends containing actin, myosin and actinin, giving mesangial cells contractile properties. The anchoring filaments from mesangial cells to the glomerular basement membrane can alter capillary flow by changing glomerular ultrafiltration surface area. Extraglomerular mesangial cells are in close connection to afferent and efferent arteriolar cells by gap junctions, allowing for intercellular communication. Mesangial cells are separated by intercellular spaces containing extracellular matrix called the mesangial matrix that is produced by the mesangial cells. Mesangial matrix provides structural support for the mesangium. Mesangial matrix is composed of glomerular matrix proteins such as collagen IV (α1 and α2 chains), collagen V, collagen VI, laminin A, B1, B2, fibronectin, and proteoglycans. Development: It is unclear whether the mesangial cells originate from mesenchymal or stromal cells. However there is evidence suggesting that they originate elsewhere outside of the glomerulus and then migrate into the glomerulus during development. Human foetal and infant kidneys stained for alpha smooth muscle actin (α-SMA), a marker for mesangial cells, demonstrated that α-SMA-positive mesenchymal cells migrate towards the glomerulus and during a later stage they can be found within the mesangium. It is possible that they share the same origin as supporting cells such as pericytes and vascular smooth muscle cells, or even be a type of specialised vascular smooth muscle cell. Function: Formation of capillary loops during development During development mesangial cells are important in the formation of convoluted capillaries allowing for efficient diffusion to occur. Endothelial precursor cells secrete platelet-derived growth factor (PDGF)-B and mesangial cells have receptors for PDGF. This induces mesangial cells to attach to endothelial cells causing developing blood vessels to loop resulting in convoluted capillaries. Mice lacking the growth factor PDGF-B or PDGFRβ do not develop mesangial cells. When mesangial cells are absent the blood vessel becomes a single dilated vessel with up to 100-fold decrease in surface area. The transcription factor for PDGFRβ, Tbx18, is crucial for the development of mesangial cells. Without Tbx18 the development of mesangial cells is compromised and results in the formation of dilated loops. 
Mesangial cell progenitors are also a target of PDGF-B and can be selected for by the signal to then develop into mesangial cells. Function: Interactions with other renal cells Mesangial cells form a glomerular functional unit with glomerular endothelial cells and podocytes through interactions of molecular signalling pathways which are essential for the formation of the glomerular tuft. Mesangial cells aid filtration by constituting part of the glomerular capillary tuft structure that filters fluids to produce urine. Communication between mesangial cells and vascular smooth muscle cells via gap junctions helps regulate the process of tubuloglomerular feedback and urine formation. Damage to mesangial cells using Thy 1-1 antibody specific to mesangial cells causes the vasoconstriction of arterioles mediated by tubuloglomerular feedback to be lost. Function: Contractions regulate capillary flow Mesangial cells can contract and relax to regulate capillary flow. This is regulated by vasoactive substances. Contraction of mesangial cells is dependent on cell membrane permeability to calcium ions, and relaxation is mediated by paracrine factors, hormones and cAMP. In response to capillary stretching, mesangial cells can respond by producing several growth factors: TGF-β1, VEGF and connective tissue growth factor. Function: Removal of macromolecules The mesangium is exposed to macromolecules from the capillary lumen as they are separated only by fenestrated endothelium without a basement membrane. Mesangial cells play a role in restricting macromolecules from accumulating in the mesangial space by receptor-independent uptake processes of phagocytosis, micro- and macro-pinocytosis, or by receptor-dependent processes; the macromolecules are then transported along the mesangial stalk. The size, charge, concentration, and mesangial-cell-receptor affinity of a macromolecule affect how it is removed. Triglycerides may undergo pinocytosis, and antibody IgG complexes may lead to activation of adhesion molecules and chemokines by mesangial cells. Clinical significance: Diabetic nephropathy The expansion of the mesangial matrix is one characteristic of diabetic nephropathy, although the disease also involves interactions with other cells, including podocytes and endothelial cells. Mesangial expansion occurs due to increased deposition of extracellular matrix proteins, for example fibronectin, into the mesangium. Accumulation of extracellular matrix proteins then occurs due to insufficient degradation by matrix metalloproteinases. Increased glucose levels result in the activation of metabolic pathways leading to increased oxidative stress. This in turn results in the over-production and accumulation of advanced glycosylation end products responsible for enhancing the risk of developing glomerular diseases. Mesangial cells grown on advanced glycosylation end product-modified matrix proteins demonstrate increased production of fibronectin and a decrease in proliferation. These factors eventually lead to the thickening of the glomerular basement membrane, mesangial matrix expansion and then glomerulosclerosis and fibrosis. Mesangial pathologies may also develop during the early phase of diabetes. Glomerular hypertension causes mesangial cells to stretch, which induces expression of GLUT1, leading to increased cellular glucose.
The repetition of this stretching and relaxation cycle due to hypertension increases mesangial cell proliferation and the production of extracellular matrix, which can then accumulate and lead to glomerular disease.
**(549948) 2011 WL2** (549948) 2011 WL2: (549948) 2011 WL2 is a sub-kilometer asteroid, classified as a near-Earth object and potentially hazardous asteroid of the Apollo group. It was discovered on 16 November 2011, by astronomers with the LINEAR survey at the Lincoln Laboratory ETS near Socorro, New Mexico, in the United States. Orbit: 2011 WL2 is a potentially hazardous asteroid (PHA), but has a well-determined orbit with a 10-year observation arc. 2011 WL2 will pass at a distance of 0.0056 AU (840,000 km; 520,000 mi) from Earth on 25 October 2077. For comparison, the distance to the Moon is about 0.0026 AU (384,400 km). 2011 WL2 appears on the list of PHA close approaches issued by the Minor Planet Center (MPC), with the next close approach in the year 2038. The Jupiter Tisserand invariant, used to distinguish different kinds of orbits, is 5.7.
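The distance comparison above is simple unit arithmetic; the short Python sketch below reproduces it from the figures quoted in the text (the AU-to-kilometre conversion factor is the standard IAU value and is not taken from this article).

```python
# Express the 2077 close-approach distance of 2011 WL2 in kilometres and in
# lunar distances, using the figures quoted in the text above.
AU_IN_KM = 149_597_870.7        # IAU astronomical unit, in kilometres

approach_au = 0.0056            # predicted Earth miss distance on 25 October 2077
lunar_distance_au = 0.0026      # mean Earth-Moon distance, as quoted above

print(f"{approach_au * AU_IN_KM:,.0f} km")                       # ~838,000 km (rounded to 840,000 above)
print(f"{approach_au / lunar_distance_au:.1f} lunar distances")  # ~2.2
```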
**B-tree** B-tree: In computer science, a B-tree is a self-balancing tree data structure that maintains sorted data and allows searches, sequential access, insertions, and deletions in logarithmic time. The B-tree generalizes the binary search tree, allowing for nodes with more than two children. Unlike other self-balancing binary search trees, the B-tree is well suited for storage systems that read and write relatively large blocks of data, such as databases and file systems. History: B-trees were invented by Rudolf Bayer and Edward M. McCreight while working at Boeing Research Labs, for the purpose of efficiently managing index pages for large random-access files. The basic assumption was that indices would be so voluminous that only small chunks of the tree could fit in main memory. Bayer and McCreight's paper, Organization and maintenance of large ordered indices, was first circulated in July 1970 and later published in Acta Informatica. Bayer and McCreight never explained what, if anything, the B stands for: Boeing, balanced, between, broad, bushy, and Bayer have been suggested. McCreight has said that "the more you think about what the B in B-trees means, the better you understand B-trees." In 2011 Google developed the C++ B-Tree, reporting a 50-80% reduction in memory use for small data types and improved performance for large data sets when compared to a Red-Black tree. Definition: According to Knuth's definition, a B-tree of order m is a tree which satisfies the following properties: Every node has at most m children. Every internal node has at least ⌈m/2⌉ children. The root node has at least two children unless it is a leaf. All leaves appear on the same level. Definition: A non-leaf node with k children contains k−1 keys. Each internal node's keys act as separation values which divide its subtrees. For example, if an internal node has 3 child nodes (or subtrees) then it must have 2 keys: a1 and a2. All values in the leftmost subtree will be less than a1, all values in the middle subtree will be between a1 and a2, and all values in the rightmost subtree will be greater than a2. Definition: Internal nodes Internal nodes (also known as inner nodes) are all nodes except for leaf nodes and the root node. They are usually represented as an ordered set of elements and child pointers. Every internal node contains a maximum of U children and a minimum of L children. Thus, the number of elements is always 1 less than the number of child pointers (the number of elements is between L−1 and U−1). U must be either 2L or 2L−1; therefore each internal node is at least half full. The relationship between U and L implies that two half-full nodes can be joined to make a legal node, and one full node can be split into two legal nodes (if there's room to push one element up into the parent). These properties make it possible to delete and insert new values into a B-tree and adjust the tree to preserve the B-tree properties. Definition: The root node The root node's number of children has the same upper limit as internal nodes, but has no lower limit. For example, when there are fewer than L−1 elements in the entire tree, the root will be the only node in the tree with no children at all. Leaf nodes In Knuth's terminology, the "leaf" nodes are the actual data objects / chunks.
The internal nodes that are one level above these leaves are what would be called the "leaves" by other authors: these nodes only store keys (at most m−1, and at least m/2−1 if they are not the root) and pointers (one for each key) to nodes carrying the data objects / chunks. A B-tree of depth n+1 can hold about U times as many items as a B-tree of depth n, but the cost of search, insert, and delete operations grows with the depth of the tree. As with any balanced tree, the cost grows much more slowly than the number of elements. Definition: Some balanced trees store values only at leaf nodes, and use different kinds of nodes for leaf nodes and internal nodes. B-trees keep values in every node in the tree except leaf nodes. Definition: Differences in terminology The literature on B-trees is not uniform in its terminology. Bayer and McCreight (1972), Comer (1979), and others define the order of B-tree as the minimum number of keys in a non-root node. Folk and Zoellick point out that the terminology is ambiguous because the maximum number of keys is not clear. An order 3 B-tree might hold a maximum of 6 keys or a maximum of 7 keys. Knuth (1998) avoids the problem by defining the order to be the maximum number of children (which is one more than the maximum number of keys). The term leaf is also inconsistent. Bayer and McCreight (1972) considered the leaf level to be the lowest level of keys, but Knuth considered the leaf level to be one level below the lowest keys. There are many possible implementation choices. In some designs, the leaves may hold the entire data record; in other designs, the leaves may only hold pointers to the data record. Those choices are not fundamental to the idea of a B-tree. For simplicity, most authors assume there are a fixed number of keys that fit in a node. The basic assumption is the key size is fixed and the node size is fixed. In practice, variable length keys may be employed. Informal description: In B-trees, internal (non-leaf) nodes can have a variable number of child nodes within some pre-defined range. When data is inserted or removed from a node, its number of child nodes changes. In order to maintain the pre-defined range, internal nodes may be joined or split. Because a range of child nodes is permitted, B-trees do not need re-balancing as frequently as other self-balancing search trees, but may waste some space, since nodes are not entirely full. The lower and upper bounds on the number of child nodes are typically fixed for a particular implementation. For example, in a 2–3 tree (sometimes referred to as a 2–3 B-tree), each internal node may have only 2 or 3 child nodes. Informal description: Each internal node of a B-tree contains a number of keys. The keys act as separation values which divide its subtrees. For example, if an internal node has 3 child nodes (or subtrees) then it must have 2 keys: a1 and a2. All values in the leftmost subtree will be less than a1, all values in the middle subtree will be between a1 and a2, and all values in the rightmost subtree will be greater than a2. Usually, the number of keys is chosen to vary between d and 2d, where d is the minimum number of keys, and d+1 is the minimum degree or branching factor of the tree. In practice, the keys take up the most space in a node. The factor of 2 will guarantee that nodes can be split or combined.
If an internal node has 2d keys, then adding a key to that node can be accomplished by splitting the hypothetical 2d+1 key node into two d-key nodes and moving the key that would have been in the middle to the parent node. Each split node has the required minimum number of keys. Similarly, if an internal node and its neighbor each have d keys, then a key may be deleted from the internal node by combining it with its neighbor. Deleting the key would make the internal node have d−1 keys; joining the neighbor would add d keys plus one more key brought down from the neighbor's parent. The result is an entirely full node of 2d keys. Informal description: The number of branches (or child nodes) from a node will be one more than the number of keys stored in the node. In a 2–3 B-tree, the internal nodes will store either one key (with two child nodes) or two keys (with three child nodes). A B-tree is sometimes described with the parameters (d+1)–(2d+1) or simply with the highest branching order, (2d+1). A B-tree is kept balanced after insertion by splitting a would-be overfilled node, of 2d+1 keys, into two d-key siblings and inserting the mid-value key into the parent. Depth only increases when the root is split, maintaining balance. Similarly, a B-tree is kept balanced after deletion by merging or redistributing keys among siblings to maintain the d-key minimum for non-root nodes. A merger reduces the number of keys in the parent potentially forcing it to merge or redistribute keys with its siblings, and so on. The only change in depth occurs when the root has two children, of d and (transitionally) d−1 keys, in which case the two siblings and parent are merged, reducing the depth by one.
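As a concrete illustration of the split step just described, here is a minimal Python sketch; the node layout (a sorted `keys` list plus a parallel `children` list) and the names `Node` and `split_child` are illustrative choices rather than any particular library's API. It shows how an overfull child holding 2d+1 keys is divided into two d-key siblings, with the median key promoted into the parent.

```python
class Node:
    """Illustrative B-tree node: `keys` is kept sorted; `children[i]` holds the
    subtree whose keys lie below keys[i] (and above keys[i-1] when i > 0)."""
    def __init__(self, keys=None, children=None):
        self.keys = keys or []
        self.children = children or []   # empty list means this node is a leaf


def split_child(parent, i, d):
    """Split parent.children[i], which has grown to 2d+1 keys, into two
    d-key nodes, moving the median key up into `parent` as a new separator."""
    node = parent.children[i]
    mid = d                                        # index of the median key
    right = Node(keys=node.keys[mid + 1:],
                 children=node.children[mid + 1:])
    median = node.keys[mid]
    node.keys = node.keys[:mid]                    # left sibling keeps the d smallest keys
    node.children = node.children[:mid + 1]
    parent.keys.insert(i, median)                  # median becomes a separator in the parent
    parent.children.insert(i + 1, right)           # new right sibling sits after the old child
```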
This variant ensures non-root nodes are at least 2/3 full instead of 1/2. Because splitting a node is the most costly part of an insertion into a B-tree, B*-trees are designed to postpone the splitting operation for as long as possible. To achieve this, instead of immediately splitting a node when it becomes full, its keys are shared with an adjacent node. This spill operation is less costly than a split, because it requires only shifting keys between existing nodes, not allocating memory for a new one. On insertion, the node is first checked for free space; if there is room, the new key is simply inserted into the node. However, if the node is full (it has m − 1 keys, where m is the order of the tree as the maximum number of pointers to subtrees from one node), it needs to be checked whether the right sibling exists and has some free space. If the right sibling has j < m − 1 keys, then keys are redistributed between the two sibling nodes as evenly as possible. For this purpose, the m − 1 keys from the current node, the new key inserted, one key from the parent node and j keys from the sibling node are seen as an ordered array of m + j + 1 keys. The array is split in half, so that the ⌊(m + j + 1)/2⌋ lowest keys stay in the current node, the next (middle) key is inserted in the parent and the rest go to the right sibling. (The newly inserted key might end up in any of the three places.) The situation when the right sibling is full and the left is not is analogous. When both the sibling nodes are full, then the two nodes (the current node and a sibling) are split into three and one more key is shifted up the tree, to the parent node. If the parent is full, then the spill/split operation propagates towards the root node. Deleting nodes is somewhat more complex than inserting, however. Informal description: The B*+ tree combines the main B+ tree and B* tree features together. B-trees can be turned into order statistic trees to allow rapid searches for the Nth record in key order, or counting the number of records between any two records, and various other related operations. B-tree usage in databases: Time to search a sorted file Usually, sorting and searching algorithms have been characterized by the number of comparison operations that must be performed using order notation. A binary search of a sorted table with N records, for example, can be done in roughly ⌈log2 N⌉ comparisons. If the table had 1,000,000 records, then a specific record could be located with at most 20 comparisons: ⌈log2(1,000,000)⌉ = 20. B-tree usage in databases: Large databases have historically been kept on disk drives. The time to read a record on a disk drive far exceeds the time needed to compare keys once the record is available. The time to read a record from a disk drive involves a seek time and a rotational delay. The seek time may be 0 to 20 or more milliseconds, and the rotational delay averages about half the rotation period. For a 7200 RPM drive, the rotation period is 8.33 milliseconds. For a drive such as the Seagate ST3500320NS, the track-to-track seek time is 0.8 milliseconds and the average reading seek time is 8.5 milliseconds. For simplicity, assume reading from disk takes about 10 milliseconds. B-tree usage in databases: Naively, then, the time to locate one record out of a million would take 20 disk reads times 10 milliseconds per disk read, which is 0.2 seconds. B-tree usage in databases: The time won't be that bad because individual records are grouped together in a disk block.
A disk block might be 16 kilobytes. If each record is 160 bytes, then 100 records could be stored in each block. The disk read time above was actually for an entire block. Once the disk head is in position, one or more disk blocks can be read with little delay. With 100 records per block, the last 6 or so comparisons don't need to do any disk reads—the comparisons are all within the last disk block read. B-tree usage in databases: To speed the search further, the first 13 to 14 comparisons (which each required a disk access) must be sped up. B-tree usage in databases: An index speeds the search A significant improvement in performance can be made with a B-tree index. A B-tree index creates a multi-level tree structure that breaks a database down into fixed-size blocks or pages. Each level of this tree can be used to link those pages via an address location, allowing one page (known as a node, or internal page) to refer to another with leaf pages at the lowest level. One page is typically the starting point of the tree, or the "root". This is where the search for a particular key would begin, traversing a path that terminates in a leaf. Most pages in this structure will be leaf pages which ultimately refer to specific table rows. Because each node (or internal page) can have more than two children, a B-tree index will usually have a shorter height (the distance from the root to the farthest leaf) than a Binary Search Tree. In the example above, initial disk reads narrowed the search range by a factor of two. That can be improved substantially by creating an auxiliary index that contains the first record in each disk block (sometimes called a sparse index). This auxiliary index would be 1% of the size of the original database, but it can be searched more quickly. Finding an entry in the auxiliary index would tell us which block to search in the main database; after searching the auxiliary index, we would have to search only that one block of the main database—at a cost of one more disk read. The index would hold 10,000 entries, so it would take at most 14 comparisons. Like the main database, the last six or so comparisons in the auxiliary index would be on the same disk block. The index could be searched in about eight disk reads, and the desired record could be accessed in 9 disk reads. B-tree usage in databases: The trick of creating an auxiliary index can be repeated to make an auxiliary index to the auxiliary index. That would make an aux-aux index that would need only 100 entries and would fit in one disk block. B-tree usage in databases: Instead of reading 14 disk blocks to find the desired record, we only need to read 3 blocks. This blocking is the core idea behind the creation of the B-tree, where the disk blocks fill-out a hierarchy of levels to make up the index. Reading and searching the first (and only) block of the aux-aux index which is the root of the tree identifies the relevant block in aux-index in the level below. Reading and searching that aux-index block identifies the relevant block to read, until the final level, known as the leaf level, identifies a record in the main database. Instead of 150 milliseconds, we need only 30 milliseconds to get the record. B-tree usage in databases: The auxiliary indices have turned the search problem from a binary search requiring roughly log2 N disk reads to one requiring only logb N disk reads where b is the blocking factor (the number of entries per block: b = 100 entries per block in our example; log100 1,000,000 = 3 reads). 
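The arithmetic behind these figures is easy to reproduce. The short Python sketch below (with the example's numbers hard-coded; nothing here is from a real database engine) counts how many levels of a perfectly branching index are needed to cover N entries, which is the ceiling of log base b of N.

```python
def block_reads(n, fanout):
    """Smallest k with fanout**k >= n, i.e. ceil(log_fanout(n)): the number of
    index levels -- and hence block reads -- needed to reach one of n entries."""
    levels, reach = 0, 1
    while reach < n:
        reach *= fanout
        levels += 1
    return levels

N = 1_000_000
print(block_reads(N, 2))     # 20 -- plain binary search over the sorted file
print(block_reads(N, 100))   # 3  -- blocked index with 100 entries per block
```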
In practice, if the main database is being frequently searched, the aux-aux index and much of the aux index may reside in a disk cache, so they would not incur a disk read. The B-tree remains the standard index implementation in almost all relational databases, and many nonrelational databases use them too. Insertions and deletions If the database does not change, then compiling the index is simple to do, and the index need never be changed. If there are changes, then managing the database and its index becomes more complicated. B-tree usage in databases: Deleting records from a database is relatively easy. The index can stay the same, and the record can just be marked as deleted. The database remains in sorted order. If there are a large number of lazy deletions, then searching and storage become less efficient. Insertions can be very slow in a sorted sequential file because room for the inserted record must be made. Inserting a record before the first record requires shifting all of the records down one. Such an operation is just too expensive to be practical. One solution is to leave some spaces. Instead of densely packing all the records in a block, the block can have some free space to allow for subsequent insertions. Those spaces would be marked as if they were "deleted" records. B-tree usage in databases: Both insertions and deletions are fast as long as space is available on a block. If an insertion won't fit on the block, then some free space on some nearby block must be found and the auxiliary indices adjusted. The hope is that enough space is available nearby, such that a lot of blocks do not need to be reorganized. Alternatively, some out-of-sequence disk blocks may be used. B-tree usage in databases: Advantages of B-tree usage for databases The B-tree uses all of the ideas described above. In particular, a B-tree: keeps keys in sorted order for sequential traversing; uses a hierarchical index to minimize the number of disk reads; uses partially full blocks to speed up insertions and deletions; and keeps the index balanced with a recursive algorithm. In addition, a B-tree minimizes waste by making sure the interior nodes are at least half full. A B-tree can handle an arbitrary number of insertions and deletions. Best case and worst case heights: Let h ≥ –1 be the height of the classic B-tree (see Tree (data structure) § Terminology for the tree height definition). Let n ≥ 0 be the number of entries in the tree. Let m be the maximum number of children a node can have. Each node can have at most m−1 keys. It can be shown (by induction for example) that a B-tree of height h with all its nodes completely filled has n = m^(h+1) − 1 entries. Hence, the best case height (i.e. the minimum height) of a B-tree is ⌈log_m(n+1)⌉ − 1. Let d be the minimum number of children an internal (non-root) node must have. For an ordinary B-tree, d = ⌈m/2⌉. Comer (1979) and Cormen et al. (2001) give the worst case height (the maximum height) of a B-tree as ⌊log_d((n+1)/2)⌋. Algorithms: Search Searching is similar to searching a binary search tree. Starting at the root, the tree is recursively traversed from top to bottom. At each level, the search reduces its field of view to the child pointer (subtree) whose range includes the search value. A subtree's range is defined by the values, or keys, contained in its parent node. These limiting values are also known as separation values. Algorithms: Binary search is typically (but not necessarily) used within nodes to find the separation values and child tree of interest.
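A compact sketch of this top-down search in Python, assuming the same illustrative node layout as the earlier split example (a sorted `keys` list and a `children` list that is empty for leaves); `bisect_left` plays the role of the within-node binary search mentioned above.

```python
from bisect import bisect_left

def btree_search(node, key):
    """Descend from `node` (normally the root) towards a leaf, always following
    the child whose key range contains `key`; return the node holding the key,
    or None if the key is absent."""
    while node is not None:
        i = bisect_left(node.keys, key)              # binary search within the node
        if i < len(node.keys) and node.keys[i] == key:
            return node                              # key found as a separator here
        if not node.children:                        # reached a leaf without a match
            return None
        node = node.children[i]                      # descend into the i-th subtree
    return None
```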
Algorithms: Insertion All insertions start at a leaf node. To insert a new element, search the tree to find the leaf node where the new element should be added. Insert the new element into that node with the following steps: If the node contains fewer than the maximum allowed number of elements, then there is room for the new element. Insert the new element in the node, keeping the node's elements ordered. Algorithms: Otherwise, the node is full; evenly split it into two nodes as follows: A single median is chosen from among the leaf's elements and the new element that is being inserted. Values less than the median are put in the new left node and values greater than the median are put in the new right node, with the median acting as a separation value. Algorithms: The separation value is inserted in the node's parent, which may cause it to be split, and so on. If the node has no parent (i.e., the node was the root), create a new root above this node (increasing the height of the tree). If the splitting goes all the way up to the root, it creates a new root with a single separator value and two children, which is why the lower bound on the size of internal nodes does not apply to the root. The maximum number of elements per node is U−1. When a node is split, one element moves to the parent, but one element is added. So, it must be possible to divide the maximum number U−1 of elements into two legal nodes. If this number is odd, then U=2L and one of the new nodes contains (U−2)/2 = L−1 elements, and hence is a legal node, and the other contains one more element, and hence it is legal too. If U−1 is even, then U=2L−1, so there are 2L−2 elements in the node. Half of this number is L−1, which is the minimum number of elements allowed per node. Algorithms: An alternative algorithm supports a single pass down the tree from the root to the node where the insertion will take place, splitting any full nodes encountered on the way preemptively. This prevents the need to recall the parent nodes into memory, which may be expensive if the nodes are on secondary storage. However, to use this algorithm, we must be able to send one element to the parent and split the remaining U−2 elements into two legal nodes, without adding a new element. This requires U = 2L rather than U = 2L−1, which accounts for why some textbooks impose this requirement in defining B-trees. Algorithms: Deletion There are two popular strategies for deletion from a B-tree. Locate and delete the item, then restructure the tree to retain its invariants; OR do a single pass down the tree, but before entering (visiting) a node, restructure the tree so that once the key to be deleted is encountered, it can be deleted without triggering the need for any further restructuring. The algorithm below uses the former strategy. There are two special cases to consider when deleting an element: the element in an internal node may be a separator for its child nodes, and deleting an element may put its node under the minimum number of elements and children. The procedures for these cases are given in order below. Deletion from a leaf node Search for the value to delete. If the value is in a leaf node, simply delete it from the node. If underflow happens, rebalance the tree as described in section "Rebalancing after deletion" below. Algorithms: Deletion from an internal node Each element in an internal node acts as a separation value for two subtrees, therefore we need to find a replacement for separation.
Note that the largest element in the left subtree is still less than the separator. Likewise, the smallest element in the right subtree is still greater than the separator. Both of those elements are in leaf nodes, and either one can be the new separator for the two subtrees. Algorithmically described below: Choose a new separator (either the largest element in the left subtree or the smallest element in the right subtree), remove it from the leaf node it is in, and replace the element to be deleted with the new separator. Algorithms: The previous step deleted an element (the new separator) from a leaf node. If that leaf node is now deficient (has fewer than the required number of elements), then rebalance the tree starting from the leaf node. Algorithms: Rebalancing after deletion Rebalancing starts from a leaf and proceeds toward the root until the tree is balanced. If deleting an element from a node has brought it under the minimum size, then some elements must be redistributed to bring all nodes up to the minimum. Usually, the redistribution involves moving an element from a sibling node that has more than the minimum number of elements. That redistribution operation is called a rotation. If no sibling can spare an element, then the deficient node must be merged with a sibling. The merge causes the parent to lose a separator element, so the parent may become deficient and need rebalancing. The merging and rebalancing may continue all the way to the root. Since the minimum element count doesn't apply to the root, making the root be the only deficient node is not a problem. The algorithm to rebalance the tree is as follows. If the deficient node's right sibling exists and has more than the minimum number of elements, then rotate left: copy the separator from the parent to the end of the deficient node (the separator moves down, and the deficient node now has the minimum number of elements), and replace the separator in the parent with the first element of the right sibling (the right sibling loses one element but still has at least the minimum number); the tree is now balanced. Otherwise, if the deficient node's left sibling exists and has more than the minimum number of elements, then rotate right: copy the separator from the parent to the start of the deficient node (the separator moves down, and the deficient node now has the minimum number of elements), and replace the separator in the parent with the last element of the left sibling (the left sibling loses one element but still has at least the minimum number); the tree is now balanced. Otherwise, if both immediate siblings have only the minimum number of elements, then merge with a sibling, sandwiching their separator taken from the parent: copy the separator to the end of the left node (the left node may be the deficient node or it may be the sibling with the minimum number of elements), move all elements from the right node to the left node (the left node now has the maximum number of elements and the right node is empty), and remove the separator from the parent along with its empty right child (the parent loses an element). If the parent is then the root and has no elements, free it and make the merged node the new root (the tree becomes shallower); otherwise, if the parent has fewer than the required number of elements, rebalance the parent. Note: The rebalancing operations are different for B+ trees (e.g., the rotation is different because the parent has a copy of the key) and B*-trees (e.g., three siblings are merged into two siblings).
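As a small illustration of the first rebalancing case described above, the Python sketch below (using the same illustrative `keys`/`children` node layout as the earlier examples) performs the left rotation that repairs a deficient node by borrowing from its right sibling through the parent; the right rotation and the merge case are omitted.

```python
def rotate_left(parent, i):
    """Fix the deficient node parent.children[i] by borrowing one element from
    its right sibling, routed through the separator in the parent."""
    deficient = parent.children[i]
    right_sibling = parent.children[i + 1]

    # The separator moves down to the end of the deficient node...
    deficient.keys.append(parent.keys[i])
    # ...and the sibling's first key replaces it in the parent.
    parent.keys[i] = right_sibling.keys.pop(0)

    # For internal nodes, the sibling's first child moves across as well,
    # keeping key counts and child counts consistent on both sides.
    if right_sibling.children:
        deficient.children.append(right_sibling.children.pop(0))
```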
Algorithms: Sequential access While freshly loaded databases tend to have good sequential behavior, this behavior becomes increasingly difficult to maintain as a database grows, resulting in more random I/O and performance challenges. Algorithms: Initial construction A common special case is adding a large amount of pre-sorted data into an initially empty B-tree. While it is quite possible to simply perform a series of successive inserts, inserting sorted data results in a tree composed almost entirely of half-full nodes. Instead, a special "bulk loading" algorithm can be used to produce a more efficient tree with a higher branching factor. Algorithms: When the input is sorted, all insertions are at the rightmost edge of the tree, and in particular any time a node is split, we are guaranteed that no more insertions will take place in the left half. When bulk loading, we take advantage of this, and instead of splitting overfull nodes evenly, split them as unevenly as possible: leave the left node completely full and create a right node with zero keys and one child (in violation of the usual B-tree rules). Algorithms: At the end of bulk loading, the tree is composed almost entirely of completely full nodes; only the rightmost node on each level may be less than full. Because those nodes may also be less than half full, to re-establish the normal B-tree rules, combine such nodes with their (guaranteed full) left siblings and divide the keys to produce two nodes at least half full. The only node which lacks a full left sibling is the root, which is permitted to be less than half full. In filesystems: In addition to its use in databases, the B-tree (or § Variants) is also used in filesystems to allow quick random access to an arbitrary block in a particular file. The basic problem is turning the file block i address into a disk block address. In filesystems: Some operating systems require the user to allocate the maximum size of the file when the file is created. The file can then be allocated as contiguous disk blocks. In that case, to convert the file block address i into a disk block address, the operating system simply adds the file block address i to the address of the first disk block constituting the file. The scheme is simple, but the file cannot exceed its created size. In filesystems: Other operating systems allow a file to grow. The resulting disk blocks may not be contiguous, so mapping logical blocks to physical blocks is more involved. In filesystems: MS-DOS, for example, used a simple File Allocation Table (FAT). The FAT has an entry for each disk block, and that entry identifies whether its block is used by a file and if so, which block (if any) is the next disk block of the same file. So, the allocation of each file is represented as a linked list in the table. In order to find the disk address of file block i , the operating system (or disk utility) must sequentially follow the file's linked list in the FAT. Worse, to find a free disk block, it must sequentially scan the FAT. For MS-DOS, that was not a huge penalty because the disks and files were small and the FAT had few entries and relatively short file chains. In the FAT12 filesystem (used on floppy disks and early hard disks), there were no more than 4,080 entries, and the FAT would usually be resident in memory. As disks got bigger, the FAT architecture began to confront penalties. On a large disk using FAT, it may be necessary to perform disk reads to learn the disk location of a file block to be read or written. 
In filesystems: TOPS-20 (and possibly TENEX) used a 0 to 2 level tree that has similarities to a B-tree. A disk block was 512 36-bit words. If the file fit in a 512 (2⁹) word block, then the file directory would point to that physical disk block. If the file fit in 2¹⁸ words, then the directory would point to an aux index; the 512 words of that index would either be NULL (the block isn't allocated) or point to the physical address of the block. If the file fit in 2²⁷ words, then the directory would point to a block holding an aux-aux index; each entry would either be NULL or point to an aux index. Consequently, the physical disk block for a 2²⁷ word file could be located in two disk reads and read on the third. In filesystems: Apple's filesystems HFS+ and APFS, Microsoft's NTFS, AIX (jfs2) and some Linux filesystems, such as Btrfs and ext4, use B-trees. B*-trees are used in the HFS and Reiser4 file systems. DragonFly BSD's HAMMER file system uses a modified B+-tree. Performance: A B-tree's height, and hence the cost of its operations, grows much more slowly with the amount of data than a linked list, whose length grows linearly. Compared to a skip list, both structures have the same performance, but the B-tree scales better for growing n. A T-tree, for main memory database systems, is similar but more compact. Variations: Access concurrency Lehman and Yao showed that all the read locks could be avoided (and thus concurrent access greatly improved) by linking the tree blocks at each level together with a "next" pointer. This results in a tree structure where both insertion and search operations descend from the root to the leaf. Write locks are only required as a tree block is modified. This maximizes access concurrency by multiple users, an important consideration for databases and/or other B-tree-based ISAM storage methods. The cost associated with this improvement is that empty pages cannot be removed from the B-tree during normal operations. (However, various strategies for implementing node merging under this scheme have been described.) United States Patent 5283894, granted in 1994, appears to show a way to use a 'Meta Access Method' to allow concurrent B+ tree access and modification without locks. The technique accesses the tree 'upwards' for both searches and updates by means of additional in-memory indexes that point at the blocks in each level in the block cache. No reorganization for deletes is needed and there are no 'next' pointers in each block as in Lehman and Yao. Variations: Parallel algorithms Since B-trees are similar in structure to red-black trees, parallel algorithms for red-black trees can be applied to B-trees as well. Maple tree A Maple tree is a B-tree developed for use in the Linux kernel to reduce lock contention in virtual memory management. Sources: Bayer, R.; McCreight, E. (1972), "Organization and Maintenance of Large Ordered Indexes" (PDF), Acta Informatica, 1 (3): 173–189, doi:10.1007/bf00288683, S2CID 29859053. Comer, Douglas (June 1979), "The Ubiquitous B-Tree", Computing Surveys, 11 (2): 123–137, doi:10.1145/356770.356776, ISSN 0360-0300, S2CID 101673. Cormen, Thomas; Leiserson, Charles; Rivest, Ronald; Stein, Clifford (2001), Introduction to Algorithms (Second ed.), MIT Press and McGraw-Hill, pp. 434–454, ISBN 0-262-03293-7. Chapter 18: B-Trees. Folk, Michael J.; Zoellick, Bill (1992), File Structures (2nd ed.), Addison-Wesley, ISBN 0-201-55713-4. Knuth, Donald (1998), Sorting and Searching, The Art of Computer Programming, vol. 3 (Second ed.), Addison-Wesley, ISBN 0-201-89685-0. Section 6.2.4: Multiway Trees, pp. 481–491. Also, pp.
476–477 of section 6.2.3 (Balanced Trees) discusses 2–3 trees. Original papers Bayer, Rudolf; McCreight, E. (July 1970), Organization and Maintenance of Large Ordered Indices, Mathematical and Information Sciences Report No. 20, Boeing Scientific Research Laboratories. Bayer, Rudolf (1971), Binary B-Trees for Virtual Memory, Proceedings of 1971 ACM-SIGFIDET Workshop on Data Description, Access and Control, San Diego, California.
**Supercritical carbon dioxide** Supercritical carbon dioxide: Supercritical carbon dioxide (sCO2) is a fluid state of carbon dioxide where it is held at or above its critical temperature and critical pressure. Supercritical carbon dioxide: Carbon dioxide usually behaves as a gas in air at standard temperature and pressure (STP), or as a solid called dry ice when cooled and/or pressurised sufficiently. If the temperature and pressure are both increased from STP to be at or above the critical point for carbon dioxide, it can adopt properties midway between a gas and a liquid. More specifically, it behaves as a supercritical fluid above its critical temperature (304.128 K, 30.9780 °C, 87.7604 °F) and critical pressure (7.3773 MPa, 72.808 atm, 1,070.0 psi, 73.773 bar), expanding to fill its container like a gas but with a density like that of a liquid. Supercritical carbon dioxide: Supercritical CO2 is becoming an important commercial and industrial solvent due to its role in chemical extraction in addition to its relatively low toxicity and environmental impact. The relatively low temperature of the process and the stability of CO2 also allow most compounds to be extracted with little damage or denaturing. In addition, the solubility of many extracted compounds in CO2 varies with pressure, permitting selective extractions. Applications: Solvent Carbon dioxide is gaining popularity among coffee manufacturers looking to move away from classic decaffeinating solvents. sCO2 is forced through the green coffee beans which are then sprayed with water at high pressure to remove the caffeine. The caffeine can then be isolated for resale (e.g. to the pharmaceutical or beverage manufacturers) by passing the water through activated charcoal filters or by distillation, crystallization or reverse osmosis. Supercritical carbon dioxide is used to remove organochloride pesticides and metals from agricultural crops without adulterating the desired constituents from the plant matter in the herbal supplement industry. Supercritical carbon dioxide can be used as a more environmentally friendly solvent for dry cleaning over traditional solvents such as chlorocarbons, including perchloroethylene. Supercritical carbon dioxide is used as the extraction solvent for creation of essential oils and other herbal distillates. Its main advantages over solvents such as hexane and acetone in this process are that it is non-flammable and does not leave toxic residue. Furthermore, separation of the reaction components from the starting material is much simpler than with traditional organic solvents. The CO2 can evaporate into the air or be recycled by condensation into a cold recovery vessel. Its advantage over steam distillation is that it operates at a lower temperature, which can separate the plant waxes from the oils. In laboratories, sCO2 is used as an extraction solvent, for example for determining total recoverable hydrocarbons from soils, sediments, fly-ash and other media, and determination of polycyclic aromatic hydrocarbons in soil and solid wastes. Supercritical fluid extraction has been used in determining hydrocarbon components in water. Processes that use sCO2 to produce micro and nano scale particles, often for pharmaceutical uses, are under development.
The gas antisolvent process, rapid expansion of supercritical solutions and supercritical antisolvent precipitation (as well as several related methods) process a variety of substances into particles. Due to its ability to selectively dissolve organic compounds and assist the functioning of enzymes, sCO2 has been suggested as a potential solvent to support biological activity on Venus- or super-Earth-type planets. Applications: Manufactured products Environmentally beneficial, low-cost substitutes for rigid thermoplastic and fired ceramic are made using sCO2 as a chemical reagent. The sCO2 in these processes is reacted with the alkaline components of fully hardened hydraulic cement or gypsum plaster to form various carbonates. The primary byproduct is water. Supercritical carbon dioxide is used in the foaming of polymers. Supercritical carbon dioxide can saturate the polymer with solvent. Upon depressurization and heating the carbon dioxide rapidly expands, causing voids within the polymer matrix, i.e., creating a foam. Research is also ongoing at many universities in the production of microcellular foams using sCO2. An electrochemical carboxylation of a para-isobutylbenzyl chloride to ibuprofen is promoted under sCO2. Applications: Working fluid Supercritical CO2 is chemically stable, reliable, low-cost, non-flammable and readily available, making it a desirable candidate working fluid for transcritical cycles. Supercritical CO2 is used as the working fluid in high-efficiency domestic water heat pumps. Manufactured and widely used, heat pumps are also commercially available for domestic and business heating and cooling. While some of the more common domestic water heat pumps remove heat from the space in which they are located, such as a basement or garage, the CO2 heat pump water heaters are typically located outside, where they remove heat from the outside air. Applications: Power generation The unique properties of sCO2 present advantages for closed-loop power generation and can be applied to various power generation applications. Power generation systems that use traditional air Brayton and steam Rankine cycles can be upgraded to sCO2 to increase efficiency and power output. The relatively new Allam power cycle uses sCO2 as the working fluid in combination with fuel and pure oxygen. The CO2 produced by combustion mixes with the sCO2 working fluid and a corresponding amount of pure CO2 must be removed from the process (for industrial use or sequestration). This process reduces atmospheric emissions to zero. Applications: It presents interesting properties that promise substantial improvements in system efficiency. Due to its high fluid density, sCO2 enables extremely compact and highly efficient turbomachinery. It can use simpler, single casing body designs while steam turbines require multiple turbine stages and associated casings, as well as additional inlet and outlet piping. The high density allows for highly compact, microchannel-based heat exchanger technology. In 2016, General Electric announced a supercritical CO2-based turbine achieving 50% efficiency in converting heat energy to electrical energy. In it, the CO2 is heated to 700 °C. It requires less compression and allows heat transfer. It reaches full power in 2 minutes, whereas steam turbines need at least 30 minutes.
The prototype generated 10 MW and is approximately 10% the size of a comparable steam turbine. For concentrated solar power, the critical temperature of carbon dioxide is not high enough to obtain the maximum energy conversion efficiency. Solar thermal plants are usually located in arid areas, so it is impossible to cool down the heat sink to sub-critical temperatures. Therefore, supercritical carbon dioxide blends, with higher critical temperatures, are in development to improve concentrated solar power electricity production. Applications: Further, due to its superior thermal stability and non-flammability, direct heat exchange from high temperature sources is possible, permitting higher working fluid temperatures and therefore higher cycle efficiency. Unlike two-phase flow, the single-phase nature of sCO2 eliminates the necessity of a heat input for phase change that is required for the water to steam conversion, thereby also eliminating associated thermal fatigue and corrosion. Despite the promise of substantially higher efficiency and lower capital costs, the use of sCO2 presents corrosion engineering, material selection and design issues. Materials in power generation components must display resistance to damage caused by high temperature, oxidation and creep. Candidate materials that meet these property and performance goals include incumbent alloys in power generation, such as nickel-based superalloys for turbomachinery components and austenitic stainless steels for piping. Components within sCO2 Brayton loops suffer from corrosion and erosion, specifically erosion in turbomachinery and recuperative heat exchanger components and intergranular corrosion and pitting in the piping. Testing has been conducted on candidate Ni-based alloys, austenitic steels, ferritic steels and ceramics for corrosion resistance in sCO2 cycles. The interest in these materials derives from their formation of protective surface oxide layers in the presence of carbon dioxide; however, in most cases further evaluation of the reaction mechanics and corrosion/erosion kinetics and mechanisms is required, as none of the materials meet the necessary goals. Applications: Other Work is underway to develop an sCO2 closed-cycle gas turbine to operate at temperatures near 550 °C. This would have implications for bulk thermal and nuclear generation of electricity, because the supercritical properties of carbon dioxide at above 500 °C and 20 MPa enable thermal efficiencies approaching 45 percent. This could increase the electrical power produced per unit of fuel required by 40 percent or more. Given the volume of carbon fuels used in producing electricity, the environmental impact of cycle efficiency increases would be significant. Supercritical CO2 is an emerging natural refrigerant, used in new, low carbon solutions for domestic heat pumps. Supercritical CO2 heat pumps are commercially marketed in Asia. EcoCute systems from Japan, developed by Mayekawa, produce high-temperature domestic hot water with small inputs of electric power by moving heat into the system from the surroundings. Supercritical CO2 has been used since the 1980s to enhance recovery in mature oil fields.
This hydrogen gas can be used to produce electrical power in combined cycle gas turbines, while the CO2 is captured, compressed to the supercritical state and injected into geological storage, possibly into existing oil fields to improve yields. Supercritical CO2 can be used as a working fluid for geothermal electricity generation in both enhanced geothermal systems and sedimentary geothermal systems (so-called CO2 Plume Geothermal). EGS systems utilize an artificially fractured reservoir in basement rock while CPG systems utilize shallower naturally-permeable sedimentary reservoirs. Possible advantages of using CO2 in a geologic reservoir, compared to water, include higher energy yield resulting from its lower viscosity, better chemical interaction, and permanent CO2 storage as the reservoir must be filled with large masses of CO2. As of 2011, the concept had not been tested in the field. Applications: Aerogel production Supercritical carbon dioxide is used in the production of silica, carbon and metal based aerogels. For example, silicon dioxide gel is formed and then exposed to sCO2. When the CO2 goes supercritical, all surface tension is removed, allowing the liquid to leave the aerogel and produce nanometer sized pores. Sterilization of biomedical materials Supercritical CO2 is an alternative to thermal sterilization of biological materials and medical devices when used in combination with the additive peracetic acid (PAA). On its own, supercritical CO2 does not sterilize the media, because it does not kill the spores of microorganisms. Moreover, this process is gentle, as the morphology, ultrastructure and protein profiles of inactivated microbes are preserved. Cleaning Supercritical CO2 is used in certain industrial cleaning processes.
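As a minimal illustration of the definition given at the start of this article, the Python sketch below checks a temperature and pressure against the critical constants quoted there. It ignores the solid region and the exact phase boundaries, so it is a didactic check rather than a phase-diagram calculation, and the function name is purely illustrative.

```python
# Critical constants of CO2 as quoted earlier in this article.
T_CRIT_K = 304.128       # critical temperature, kelvin
P_CRIT_MPA = 7.3773      # critical pressure, megapascals

def is_supercritical(temperature_k, pressure_mpa):
    """True when the state is at or above both the critical temperature and the
    critical pressure -- the working definition of supercritical CO2 used above."""
    return temperature_k >= T_CRIT_K and pressure_mpa >= P_CRIT_MPA

print(is_supercritical(298.15, 0.101325))   # False: roughly room conditions, CO2 is a gas
print(is_supercritical(320.0, 10.0))        # True: above both critical values
```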
**Feeding disorder** Feeding disorder: A feeding disorder, in infancy or early childhood, is a child's refusal to eat certain food groups, textures, solids or liquids for a period of at least one month, which causes the child to fail to gain enough weight or grow naturally, or causes developmental delays. Feeding disorders resemble failure to thrive, except that at times in feeding disorder there is no medical or physiological condition that can explain the very small amount of food the children consume or their lack of growth. Sometimes, a previous medical condition that has since been resolved is the cause. Types: Feeding disorder has been divided into six further sub-types: feeding disorder of state regulation, feeding disorder of reciprocity (neglect), infantile anorexia, sensory food aversion, feeding disorder associated with a concurrent medical condition, and post-traumatic feeding disorder. Symptoms and signs: Children attempting to swallow different food textures often vomit, gag, or choke while eating. At feeding times they may react negatively to attempts to feed them, and refuse to eat. Other symptoms include head turns, crying, difficulty in chewing or vomiting and spitting whilst eating. Many children may have feeding difficulties and may be picky eaters, but most of them still have a fairly healthy diet. Children with a feeding disorder, however, will completely abandon some of the food groups, textures, or liquids that are necessary for human growth and development. Children with this disorder can develop much more slowly because of their lack of nutritional intake. In severe cases the child seems to feel socially isolated because of the lack of social activities involving foods. Symptoms and signs: Associated problems A few of the medical and psychological conditions that have been known to be associated with this disorder include: gastrointestinal motility disorders, oral-motor dysfunction, failure to thrive, prematurity, food allergies, sensory problems, reflux, and feeding tube placement. A child that is suffering from malnutrition can have permanently stunted mental and physical development. Getting treatment early is essential and can prevent many of the complications. They can also develop further eating disorders later in life such as anorexia nervosa, or they could become a limited eater; though they could still be a healthy child, they may remain a picky eater. Diagnosis: A barium swallow test is often performed, where the child is given a liquid or food with barium in it. This allows the consulting medical practitioners to trace the swallow-function on an X-ray or other investigative system such as a CAT scan. An endoscopic assessment can also be performed, where an endoscope is used to view the oesophagus and throat on a screen. It can also allow viewing of how the patient will react during feeding. Treatments: There is no quick cure, and treatment will be based on what problems may be causing the feeding disorder. Depending on the condition, the following steps can be taken: increasing the number of foods that are accepted, increasing the amount of calories and the amount of fluids; checks for vitamin or mineral deficiencies; finding out what the illnesses or psychosocial problems are. To accomplish these goals patients may have to be hospitalized for extensive periods of time.
Treatment involves professionals from multiple fields of study including, but not limited to, behavior analysts (behavioral interventions), occupational and speech therapists who specialize in feeding disorders, dietitians, psychologists and physicians. To obtain the best results, treatment should include a behavior modification plan under the guidance of multiple professionals. If the child has oral motor difficulties related to the feeding disorder, a pediatric occupational or speech therapist who is trained in feeding disorders and oral motor function should help develop a plan. Epidemiology: Some 25% to 40% of young children are reported to have feeding problems—mainly colic, vomiting, slow feeding, and refusal to eat. It has been reported that up to 80% of infants with developmental handicaps also demonstrate feeding problems, while 1 to 2% of infants aged less than one year show severe food refusal and poor growth. Among infants born prematurely, 40% to 70% experience some form of feeding problem.
**DAD1** DAD1: Dolichyl-diphosphooligosaccharide—protein glycosyltransferase subunit DAD1 is an enzyme that in humans is encoded by the DAD1 gene. Function: DAD1, the defender against apoptotic cell death, was initially identified as a negative regulator of programmed cell death in the temperature sensitive tsBN7 cell line. The DAD1 protein disappeared in temperature-sensitive cells following a shift to the nonpermissive temperature, suggesting that loss of the DAD1 protein triggered apoptosis. DAD1 is believed to be a tightly associated subunit of oligosaccharyltransferase both in the intact membrane and in the purified enzyme, thus reflecting the essential nature of N-linked glycosylation in eukaryotes. Interactions: DAD1 has been shown to interact with MCL1.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Multistable auditory perception** Multistable auditory perception: Multistable auditory perception is a cognitive phenomenon in which certain auditory stimuli can be perceived in multiple ways. While multistable perception has been most commonly studied in the visual domain, it also has been observed in the auditory and olfactory modalities. In the olfactory domain, different scents are piped to the two nostrils, while in the auditory domain, researchers often examine the effects of binaural sequences of pure tones. Generally speaking, multistable perception has three main characteristics: exclusivity, implying that the multiple perceptions cannot occur simultaneously; randomness, indicating that the duration of perceptual phases follows a random law; and inevitability, meaning that subjects are unable to completely block out one percept indefinitely. History: While binocular rivalry has been studied since the 16th century, the study of multistable auditory perception is relatively new. Diana Deutsch was the first to discover multistability in human auditory perception, in the form of auditory illusions involving periodically oscillating tones. Experimental Findings: Different experimental paradigms have since been used to study multistable perception in the auditory modality. One is auditory stream segregation, in which two different frequencies are presented in a temporal pattern. Listeners experience alternating percepts: one percept is of a single stream fluctuating between frequencies, and the alternative percept is of two separate streams, each repeating a single frequency. Experimental Findings: Other experimental findings demonstrate the verbal transformation effect. In this paradigm, the input is a speech form repeated rapidly and continuously. The alternating percepts here are words—for example, continuous repetition of the word “life” results in the bistability of “life” and “fly.” Prefrontal activation is implicated in such fluctuations in percept, and not in changes in the physical stimulus, and there is also a possible inverse relationship between left inferior frontal and cingulate activation involved in this percept alternation. Principles of Perceptual Bistability: The temporal dynamics observed in auditory stream segregation are similar to those of bistable visual perception, suggesting that the mechanisms mediating multistable perception, the alternating dominance and suppression of multiple competing interpretations of ambiguous sensory input, might be shared across modalities. Pressnitzer and Hupe analyzed results of an auditory streaming experiment and demonstrated that the perceptual experience that occurred exhibited all three properties of multistable perception found in the visual modality—exclusivity, randomness, and inevitability. Exclusivity was satisfied, as there was “spontaneous alternation between mutually exclusive percepts,” and very little time was spent in an “indeterminate” experience. Randomness also characterized the phenomenon, as the first phase of perception is longer in duration than subsequent phases, and then the “steady-state of the temporal dynamics of auditory streaming is purely stochastic with no long-term trend.” Lastly, the percept alternation was inevitable; even though volitional control did reduce suppression of the specified percept, it did not exclude perception of the alternative percept altogether.
Principles of Perceptual Bistability: These similarities between perceptual bistability in the visual and auditory modalities raise the possibility of a common mechanism governing the phenomenon. In Pressnitzer and Hupe's subjects, the distributions of phase durations in the two modalities were not significantly different, and it has been speculated that the intraparietal sulcus, likely involved in crossmodal integration, could be responsible for bistability in both domains. However, the absence of subject-specific biases across the modalities contradicts the notion that a “single top-down selection mechanism were the sole determinant of the auditory and visual bistability.” This observation, along with evidence of neural correlates at different stages of processing, instead suggests that competition is distributed and “based on adaptation and mutual inhibition, at multiple neural processing stages.” Neural Correlates: Place model When using a two-stream tone test, specific populations of neurons are activated; this is known as the place model. Event-related potential (ERP) amplitude increases as the difference between the frequencies of the two tones increases. This model hypothesizes that when this is happening, the distance between the two populations of neurons increases, so that the two populations will interact less with each other, allowing for easier tone segregation. Neural Correlates: fMRI results fMRI has been used to compare responses while listening to alternating tones with responses to a single stream of tones. The posterior regions of the left auditory cortex were modulated by the alternating tones, indicating that there may be areas of the brain responsible for stream segregation. Theoretical View: Sequential grouping A problem of large behavioral importance is the question of how to group auditory stimuli. When a continuous stream of auditory information is received, numerous alternative interpretations are possible, but individuals are only consciously aware of one percept at a time. For this to occur, the auditory system must segregate and group incoming sounds, the goal being to “construct, modify, and maintain dynamic representations of putative objects within its environment”. It has been suggested that this process of binding sound events into groups is driven by different levels of similarities. Theoretical View: One principle for binding is based on the perceptual similarity between individual events. Sounds that share many or all of their acoustic features are more likely to have been emitted by the same source, and thus are more likely to be linked to form a “proto-object”. The other principle for binding is based on the sequential predictability of sound events. If events reliably follow each other, it is also more likely that they have a common underlying cause. Theoretical View: Competition A theory explaining the alternation of auditory percepts is that different interpretations are neurally represented simultaneously, but all but the dominant one at the time are suppressed. This idea of competition among parallel hypotheses might provide an explanation for the temporal dynamics observed in auditory stream segregation. The initial perceptual phase is held longer than the subsequent ones, “with the duration of the first phase being stimulus-parameter dependent and an order of magnitude longer in duration than parameter-independent subsequent phases”.
At stimulus onset, the first percept might be that which is easiest to discover, based on featural proximity (and thus stimulus-parameter dependent), and it is held for relatively longer because time is required for other hypotheses to form. As more sensory information is received and processed, the “neural associations underlying the alternative sound organizations become strong and start to vie for dominance” and “the probabilities of perceiving different organizations tend to become more balanced with time”.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dumbemployed** Dumbemployed: Dumbemployed is an English-language blog and blook featuring stories about "the jobs you love to hate". User-submitted entries are limited to 300 characters in length and begin with "At work today" and end with "I'm dumbemployed". A book containing submissions to the site along with charts, graphs, illustrations, and tips was published in June 2011. Dumbemployed stories are divided into five categories: Bosses, Customers, Just Dumb, Overtime, Weird Shift. History: Dumbemployed was launched in April 2009 by Phil Edwards and Matt Kraft. Visitors to the site are encouraged to share stories about hilarious and painful dumbemployment. Anonymous submissions are permitted and users can vote and leave comments on stories. History: In October 2011, a Dumbemployed book was announced by Marianne Strong Literary Agency. The book, entitled "Dumbemployed: Hilariously Dumb and Sadly True Stories about Jobs Like Yours", was published by Running Press on June 28, 2011. A review of the book appeared in the Sunday Book Review section of The New York Times on August 14, 2011. In June 2012, Edwards announced that Warner Bros. TV had optioned the TV and film rights for Dumbemployed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Combining grapheme joiner** Combining grapheme joiner: The combining grapheme joiner (CGJ), U+034F ͏ COMBINING GRAPHEME JOINER is a Unicode character that has no visible glyph and is "default ignorable" by applications. Its name is a misnomer and does not describe its function: the character does not join graphemes. Its purpose is to semantically separate characters that should not be considered digraphs, as well as to block canonical reordering of combining marks during normalization. Combining grapheme joiner: For example, in a Hungarian language context, adjoining letters c and s would normally be considered equivalent to the cs digraph. If they are separated by the CGJ, they will be considered two separate graphemes. However, in contrast to the zero-width joiner and similar characters, the CGJ does not affect whether the two letters are rendered separately or as a ligature or cursively joined—the default behavior for this is determined by the font. The CGJ is also needed for complex scripts. For example, in most cases the Hebrew cantillation accent metheg is supposed to appear to the left of the vowel point and by default most display systems will render it like this even if it is typed before the vowel. But in some words in Biblical Hebrew the metheg appears to the right of the vowel, and to tell the display engine to render it properly on the right, CGJ must be typed between the metheg and the vowel. Compare: In the case of several consecutive combining diacritics, an intervening CGJ indicates that they should not be subject to canonical reordering. In contrast, the "zero-width non-joiner" (at U+200C in the General Punctuation range) prevents two adjacent characters from turning into a ligature.
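The reordering-blocking behaviour described above can be observed with any Unicode-aware normalization library. The following is a minimal sketch using Python's standard unicodedata module; the particular letter and diacritics (a, an acute accent, and a dot below) are arbitrary choices for illustration rather than characters discussed in this article:

```python
import unicodedata

CGJ = "\u034F"  # U+034F COMBINING GRAPHEME JOINER (canonical combining class 0)

# Acute accent (class 230) followed by dot below (class 220): canonical
# normalization reorders the marks into ascending combining-class order.
plain = "a\u0301\u0323"
print([f"U+{ord(c):04X}" for c in unicodedata.normalize("NFD", plain)])
# ['U+0061', 'U+0323', 'U+0301']  -- the two marks have been swapped

# With a CGJ between the marks, the run of non-zero combining classes is
# interrupted, so canonical reordering cannot move the dot below past it.
blocked = "a\u0301" + CGJ + "\u0323"
print([f"U+{ord(c):04X}" for c in unicodedata.normalize("NFD", blocked)])
# ['U+0061', 'U+0301', 'U+034F', 'U+0323']  -- original order preserved
```

Because the CGJ is "default ignorable", the two normalized strings are intended to render identically even though they no longer match code point for code point.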
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Internal documentation** Internal documentation: Software documentation is written text or illustration that accompanies computer software or is embedded in the source code. The documentation either explains how the software operates or how to use it, and may mean different things to people in different roles. Documentation is an important part of software engineering. Types of documentation include: Requirements – Statements that identify attributes, capabilities, characteristics, or qualities of a system. This is the foundation for what will be or has been implemented. Architecture/Design – Overview of software. Includes relations to an environment and construction principles to be used in design of software components. Technical – Documentation of code, algorithms, interfaces, and APIs. End user – Manuals for the end-user, system administrators and support staff. Marketing – How to market the product and analysis of the market demand. Requirements documentation: Requirements documentation is the description of what a particular software does or shall do. It is used throughout development to communicate how the software functions or how it is intended to operate. It is also used as an agreement or as the foundation for agreement on what the software will do. Requirements are produced and consumed by everyone involved in the production of software, including: end users, customers, project managers, sales, marketing, software architects, usability engineers, interaction designers, developers, and testers. Requirements documentation: Requirements come in a variety of styles, notations and formality. Requirements can be goal-like (e.g., distributed work environment), close to design (e.g., builds can be started by right-clicking a configuration file and selecting the 'build' function), and anything in between. They can be specified as statements in natural language, as drawn figures, as detailed mathematical formulas, or as a combination of them all. Requirements documentation: The variation and complexity of requirement documentation make it a proven challenge. Requirements may be implicit and hard to uncover. It is difficult to know exactly how much and what kind of documentation is needed and how much can be left to the architecture and design documentation, and it is difficult to know how to document requirements considering the variety of people who shall read and use the documentation. Thus, requirements documentation is often incomplete (or non-existent). Without proper requirements documentation, software changes become more difficult — and therefore more error prone (decreased software quality) and time-consuming (expensive). Requirements documentation: The need for requirements documentation is typically related to the complexity of the product, the impact of the product, and the life expectancy of the software. If the software is very complex or developed by many people (e.g., mobile phone software), requirements can help better communicate what to achieve. If the software is safety-critical and can have a negative impact on human life (e.g., nuclear power systems, medical equipment, mechanical equipment), more formal requirements documentation is often required. If the software is expected to live for only a month or two (e.g., very small mobile phone applications developed specifically for a certain campaign) very little requirements documentation may be needed. 
If the software is a first release that is later built upon, requirements documentation is very helpful when managing the change of the software and verifying that nothing has been broken in the software when it is modified. Requirements documentation: Traditionally, requirements are specified in requirements documents (e.g. using word processing applications and spreadsheet applications). To manage the increased complexity and changing nature of requirements documentation (and software documentation in general), database-centric systems and special-purpose requirements management tools are advocated. In Agile software development, requirements are often expressed as User Stories with accompanying acceptance criteria. Architecture design documentation: Architecture documentation (also known as software architecture description) is a special type of design document. In a way, architecture documents are third derivative from the code (design document being second derivative, and code documents being first). Very little in the architecture documents is specific to the code itself. These documents do not describe how to program a particular routine, or even why that particular routine exists in the form that it does, but instead merely lay out the general requirements that would motivate the existence of such a routine. A good architecture document is short on details but thick on explanation. It may suggest approaches for lower level design, but leave the actual exploration trade studies to other documents. Architecture design documentation: Another type of design document is the comparison document, or trade study. This would often take the form of a whitepaper. It focuses on one specific aspect of the system and suggests alternate approaches. It could be at the user interface, code, design, or even architectural level. It will outline what the situation is, describe one or more alternatives, and enumerate the pros and cons of each. A good trade study document is heavy on research, expresses its idea clearly (without relying heavily on obtuse jargon to dazzle the reader), and most importantly is impartial. It should honestly and clearly explain the costs of whatever solution it offers as best. The objective of a trade study is to devise the best solution, rather than to push a particular point of view. It is perfectly acceptable to state no conclusion, or to conclude that none of the alternatives are sufficiently better than the baseline to warrant a change. It should be approached as a scientific endeavor, not as a marketing technique. Architecture design documentation: A very important part of the design document in enterprise software development is the Database Design Document (DDD). It contains Conceptual, Logical, and Physical Design Elements. The DDD includes the formal information that the people who interact with the database need. The purpose of preparing it is to create a common source to be used by all players within the scene.
The potential users are: database designer, database developer, database administrator, application designer, and application developer. When talking about Relational Database Systems, the document should include the following parts: an Entity-Relationship Schema (enhanced or not), including the following information and their clear definitions: entity sets and their attributes, relationships and their attributes, candidate keys for each entity set, and attribute- and tuple-based constraints; and a Relational Schema, including the following information: tables, attributes and their properties, views, constraints such as primary keys and foreign keys, cardinality of referential constraints, cascading policy for referential constraints, and primary keys. It is very important to include all information that is to be used by all actors in the scene. It is also very important to update the documents as any change occurs in the database. Technical documentation: It is important for the code documents associated with the source code (which may include README files and API documentation) to be thorough, but not so verbose that they become overly time-consuming or difficult to maintain. Various how-to and overview documentation guides are commonly found specific to the software application or software product being documented by API writers. This documentation may be used by developers, testers, and also end-users. Today, a lot of high-end applications are seen in the fields of power, energy, transportation, networks, aerospace, safety, security, industry automation, and a variety of other domains. Technical documentation has become important within such organizations as the basic and advanced level of information may change over a period of time with architecture changes. Technical documentation: Code documents are often organized into a reference guide style, allowing a programmer to quickly look up an arbitrary function or class. Technical documentation embedded in source code Often, tools such as Doxygen, NDoc, Visual Expert, Javadoc, JSDoc, EiffelStudio, Sandcastle, ROBODoc, POD, TwinText, or Universal Report can be used to auto-generate the code documents—that is, they extract the comments and software contracts, where available, from the source code and create reference manuals in such forms as text or HTML files. Technical documentation: The idea of auto-generating documentation is attractive to programmers for various reasons. For example, because it is extracted from the source code itself (for example, through comments), the programmer can write it while referring to the code, and use the same tools used to create the source code to make the documentation. This makes it much easier to keep the documentation up-to-date. Technical documentation: A possible downside is that only programmers can edit this kind of documentation, and it depends on them to refresh the output (for example, by running a cron job to update the documents nightly). Some would characterize this as a pro rather than a con. Technical documentation: Literate programming Respected computer scientist Donald Knuth has noted that documentation can be a very difficult afterthought process and has advocated literate programming, written at the same time and location as the source code and extracted by automatic means. The programming languages Haskell and CoffeeScript have built-in support for a simple form of literate programming, but this support is not widely used.
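To make the comment-extraction approach concrete, here is a minimal, hypothetical sketch in Python (the function, its name, and its docstring are invented purely for illustration). The structured docstring is the kind of in-source comment that documentation generators such as the standard-library pydoc module, Sphinx, or Doxygen's Python support can pull out into a reference page, so the reference material stays next to the code it describes:

```python
def moving_average(values, window):
    """Return the simple moving average of ``values``.

    Args:
        values: Sequence of numbers to smooth.
        window: Number of trailing samples per average; must be >= 1.

    Returns:
        A list of averages, one for each position from ``window - 1`` onward.

    Raises:
        ValueError: If ``window`` is smaller than 1.
    """
    if window < 1:
        raise ValueError("window must be >= 1")
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

# Running `python -m pydoc <module>` (or pointing a generator such as Sphinx
# or Doxygen at the file) extracts this docstring into browsable reference
# documentation without a separately maintained manual.
```

The payoff is the one described above: the documentation lives beside the code, so the programmer who changes the function is already looking at the text that must change with it.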
Technical documentation: Elucidative programming Elucidative Programming is the result of practical applications of Literate Programming in real programming contexts. The Elucidative paradigm proposes that source code and documentation be stored separately. Technical documentation: Often, software developers need to be able to create and access information that is not going to be part of the source file itself. Such annotations are usually part of several software development activities, such as code walks and porting, where third party source code is analysed in a functional way. Annotations can therefore help the developer during any stage of software development where a formal documentation system would hinder progress. User documentation: Unlike code documents, user documents simply describe how a program is used. In the case of a software library, the code documents and user documents could in some cases be effectively equivalent and worth conjoining, but for a general application this is not often true. User documentation: Typically, the user documentation describes each feature of the program, and assists the user in realizing these features. It is very important for user documents not to be confusing, and for them to be up to date. User documents don't need to be organized in any particular way, but it is very important for them to have a thorough index. Consistency and simplicity are also very valuable. User documentation is considered to constitute a contract specifying what the software will do. API writers are well placed to write good user documents, as they are well aware of the software architecture and programming techniques used. See also technical writing. User documentation: User documentation can be produced in a variety of online and print formats. However, there are three broad ways in which user documentation can be organized. Tutorial: A tutorial approach is considered the most useful for a new user, in which they are guided through each step of accomplishing particular tasks. Thematic: A thematic approach, where chapters or sections concentrate on one particular area of interest, is of more general use to an intermediate user. Some authors prefer to convey their ideas through a knowledge-based article to facilitate user needs. This approach is usually practiced by a dynamic industry, such as information technology. User documentation: List or Reference: The final type of organizing principle is one in which commands or tasks are simply listed alphabetically or logically grouped, often via cross-referenced indexes. This latter approach is of greater use to advanced users who know exactly what sort of information they are looking for. A common complaint among users regarding software documentation is that only one of these three approaches was taken to the near-exclusion of the other two. It is common to limit provided software documentation for personal computers to online help that gives only reference information on commands or menu items. The job of tutoring new users or helping more experienced users get the most out of a program is left to private publishers, who are often given significant assistance by the software developer. User documentation: Composing user documentation Like other forms of technical documentation, good user documentation benefits from an organized process of development.
In the case of user documentation, the process as it commonly occurs in industry consists of five steps: User analysis, the basic research phase of the process. Planning, or the actual documentation phase. Draft review, a self-explanatory phase where feedback is sought on the draft composed in the previous step. Usability testing, whereby the usability of the document is tested empirically. Editing, the final step in which the information collected in steps three and four is used to produce the final draft. Documentation and agile development controversy: "The resistance to documentation among developers is well known and needs no emphasis." This situation is particularly prevalent in agile software development because these methodologies try to avoid any unnecessary activities that do not directly bring value. Specifically, the Agile Manifesto advocates valuing "working software over comprehensive documentation", which could be interpreted cynically as "We want to spend all our time coding. Remember, real programmers don't write documentation." A survey among software engineering experts revealed, however, that documentation is by no means considered unnecessary in agile development. Yet it is acknowledged that there are motivational problems in development, and that documentation methods tailored to agile development (e.g. through Reputation systems and Gamification) may be needed. Marketing documentation: For many applications it is necessary to have some promotional materials to encourage casual observers to spend more time learning about the product. This form of documentation has three purposes: to excite the potential user about the product and instill in them a desire for becoming more involved with it; to inform them about what exactly the product does, so that their expectations are in line with what they will be receiving; and to explain the position of this product with respect to other alternatives.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DalTrans** DalTrans: DalTrans is the means by which traffic information is gathered and disseminated in Dallas, Texas. DalTrans is an Intelligent Transportation System operated by the Texas Department of Transportation, a collection of devices and a communications backbone designed to help alleviate freeway congestion. Traffic data collected by DalTrans is available in real time to the public on its website. DalTrans: Dallas, Texas is a city with an ever-increasing traffic congestion problem. This is primarily due to the enormous growth of the Dallas area suburbs. Due to the additional vehicles on the Dallas area freeways, there is a shortage of both the room and the resources required to build more capacity. The approach being taken is to operate the freeways in a more efficient manner. This approach requires the gathering of traffic information and the dissemination of that information to the traveling public. DalTrans: Operators in the DalTrans traffic management center monitor the freeway conditions, dispatch assistance to stranded motorists on the freeways via Courtesy Patrol, and share information about the freeway conditions with motorists as well as the media. This information can help the motorist make an informed decision about what routes to take and what routes to avoid. The DalTrans operators work closely with Dallas Area Rapid Transit (DART), surrounding cities, and various traffic reporting agencies to share up-to-the-minute traffic conditions and to find solutions to the region's transportation problems. DalTrans: Since the DalTrans operations center opened in the summer of 1997, it has enhanced the Courtesy Patrol operation already offered to the motoring public by adding real-time freeway traffic conditions for travelers on the system.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Variant Chinese characters** Variant Chinese characters: Variant Chinese characters (traditional Chinese: 異體字; simplified Chinese: 异体字; pinyin: yìtǐzì; Kanji: 異体字; Hepburn: itaiji; Korean: 이체자; Hanja: 異體字; Revised Romanization: icheja) are Chinese characters that are homophones and synonyms: they share the same pronunciation and meaning but differ in written form. Most variants are allographs in most circumstances, such as casual handwriting. Some contexts require the usage of certain variants, such as in textbook editing. Regional standards: Variant Chinese characters exist within and across all regions where Chinese characters are used, whether Chinese-speaking (mainland China, Hong Kong, Macau, Taiwan, Singapore), Japanese-speaking (Japan), or Korean-speaking (North Korea, South Korea). Some of the governments of these regions have made efforts to standardize the use of variants, by establishing certain variants as standard. The choice of which variants to use has resulted in some divergence in the forms of Chinese characters used in mainland China, Hong Kong, Japan, Korea and Taiwan. This effect compounds with the sometimes drastic divergence in the standard Chinese character sets of these regions resulting from the character simplifications pursued by mainland China and by Japan. Regional standards: The standard character forms of each region are described in: The Table of General Standard Chinese Characters for mainland China The List of Graphemes of Commonly-Used Chinese Characters for Hong Kong The Standard Form of National Characters for Taiwan The list of Jōyō kanji for Japan The Kangxi Dictionary (de facto) for Korea. There are other standards or glyph sets with widespread use in particular regions (particularly for Traditional Chinese) which might not have an officially recognized status, but have a large user base that uses them daily. A few such standards include the Monotype Traditional Chinese fonts and the Inherited Glyphs standard, which is compiled by Traditional Chinese users based on glyphs seen in daily use. Origins of variants: Character forms that are most orthodox are known as orthodox variants (Chinese: 正字; pinyin: zhèngzì), a term sometimes taken as synonymous with Kangxi Dictionary forms (康熙字典體; Kāngxī zìdiǎn tǐ), as the forms found in the Kangxi dictionary are usually the ones considered to be orthodox, at least by late Imperial China standards. Variants that differ from the orthodox form, mainly used in informal situations, are known as folk variants (Chinese: 俗字; pinyin: súzì; Revised Romanization: sokja; Hepburn: zokuji). Some of these are longstanding abbreviations or alternate forms that became the basis for the Simplified Character set promulgated by the People's Republic of China. For example, 痴 is the folk variant, whereas 癡 is the orthodox form, of the character meaning 'foolish, obsessive'. In this case, two different phonetic elements were chosen to represent the same sound. In other cases, the differences between the orthodox form and popular form are merely minor distinctions in the length or location of strokes, whether certain strokes cross, or the presence or absence of minor inconspicuous strokes (dots). These are often not considered true variant characters but are adoptions of different standards for character shape. In mainland China, these are called xin zixing (Chinese: 新字形; pinyin: xīn zìxíng; lit. 'new character forms', typically a simplified popular form) and jiu zixing (simplified Chinese: 旧字形; traditional Chinese: 舊字形; pinyin: jiù zìxíng; lit. 'old character forms', typically the Kangxi dictionary form).
For instance, 述 is the new form of the character with traditional orthography 述 'recount; describe'. As another example, 吴 'a surname; name of an ancient state' is the 'new character shape' form of the character traditionally written 吳. Origins of variants: Variant graphs also sometimes arise during the historical processes of liding (隸定, lit. 'clerical fixing') and libian (隸變, lit. 'clerical changing'). Libian was the natural evolving process of the seal script into the clerical script, which often involved significant omissions, additions, or transmutations of graphical form, while liding is the direct regularization and linearization of shapes to convert them into clerical forms while also preserving the original structure. For instance, the small seal script character for 'year' was converted by liding to a clerical script form that led to the variant 秊, while the same character, after undergoing libian, gave rise to a clerical script form that eventually became the orthodox 年. A similar divergence in the regularization process led to two characters for 'tiger', 虎 and 乕. Origins of variants: There are variants that arise through the use of different radicals to refer to specific definitions of a polysemous character. For instance, the character 雕 could mean either 'certain types of hawk' or 'carve; engrave.' For the former, the variant 鵰 ('bird' radical) is sometimes employed, while for the latter, the variant 琱 ('jade' radical) is sometimes used. Origins of variants: In rare cases, two characters in ancient Chinese with similar meanings can be confused and conflated if their modern Chinese readings have merged, for example, 飢 and 饑, are both read as jī and mean 'famine; severe hunger' and are used interchangeably in the modern language, even though 飢 meant 'insufficient food to satiate', and 饑 meant 'famine' in Old Chinese. The two characters formerly belonged to two different Old Chinese rime groups (脂 and 微 groups, respectively) and could not possibly have had the same pronunciation back then. A similar situation is responsible for the existence of variant forms of the particle with the meaning 'in; to', 于 and 於 in the modern (traditional) orthography. In both cases described above, the variants were merged into a single simplified Chinese character, 饥 and 于, by the mainland (PRC) authorities. Usage in computing: Unicode deals with variant characters in a complex manner, as a result of the process of Han unification. In Han unification, some variants that are nearly identical between Chinese-, Japanese-, Korean-speaking regions are encoded in the same code point, and can only be distinguished using different typefaces. Other variants that are more divergent are encoded in different code points. On web pages, displaying the correct variants for the intended language is dependent on the typefaces installed on the computer, the configuration of the web browser and the language tags of web pages. Systems that are ready to display the correct variants are rare because many computer users do not have standard typefaces installed and the most popular web browsers are not configured to display the correct variants by default. The following are some examples of variant forms of Chinese characters with different code points and language tags. Usage in computing: The following are some examples of variant forms of Chinese characters with the same code points and different language tags. Graphemic variants: Some variants are not allographic. 
For a set of variants to be allographs, someone who could read one should be able to read the others, but some variants cannot be read if one only knows one of them. An example is 搜 and 蒐, where someone who is able to read 搜 might not be able to read 蒐. Another example is 㠯, which is a variant of 以, but some people who could read 以 might not be able to read 㠯.
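As a small illustration of the encoding point made in the "Usage in computing" section above (more divergent variants are given separate code points, so they can be told apart without font or language information), the following Python snippet simply prints the code points of several variant pairs already mentioned in this article:

```python
# Variant pairs taken from this article; each member of a pair is a
# separately encoded Unicode character, so the distinction survives in
# plain text regardless of typeface.
variant_pairs = [
    ("痴", "癡"),  # folk vs. orthodox form of 'foolish, obsessive'
    ("秊", "年"),  # liding vs. libian outcomes for 'year'
    ("乕", "虎"),  # two regularizations of 'tiger'
    ("鵰", "雕"),  # 'bird'-radical variant vs. base character
    ("飢", "饑"),  # conflated characters for 'hunger; famine'
]

for a, b in variant_pairs:
    print(f"{a} = U+{ord(a):04X}   {b} = U+{ord(b):04X}")
```

Variants that Han unification merged into a single code point cannot be distinguished this way; as noted above, those differ only by typeface or by the language tag attached to the surrounding text.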
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zalutumumab** Zalutumumab: Zalutumumab (proposed trade name HuMax-EGFR) is a fully human IgG1 monoclonal antibody (mAb) directed towards the epidermal growth factor receptor (EGFR). It is a product developed by Genmab in Utrecht, the Netherlands. Specifically, zalutumumab is designed for the treatment of squamous cell carcinoma of the head and neck (SCCHN), a type of cancer. Mechanism of action: Zalutumumab works through inhibition of the EGFR signal. The EGFR is a receptor tyrosine kinase. Its structure includes an extracellular binding domain, a transmembrane lipophilic segment, and an intracellular tyrosine kinase domain. Mechanism of action: Mechanism of EGFR EGFR is over-expressed by many tumor cells. Upon binding by a ligand, such as the epidermal growth factor or TGF alpha, dimerization occurs, leading to autophosphorylation on the intracellular tyrosine residues. Following phosphorylation, the Grb2-SOS signaling complex is stimulated. This causes the activation of the G protein RAS through the exchange of guanosine diphosphate (GDP) for guanosine triphosphate (GTP). The exchange of GDP for GTP induces a conformational change of RAS to allow it to bind to Raf-1. Raf-1 is then activated through another multistep mechanism in which dephosphorylation of inhibitory sites by protein phosphatase 2A (PP2A), as well as the phosphorylation of activating sites by p21 activated kinase (PAK), occurs. After this, Raf-1 activates MAPK/ERK kinase (MEK), which then goes on to activate extracellular-signal-regulated kinase (ERK). ERK is then able to enter the cell nucleus and control gene expression by phosphorylating various transcription factors, such as Elk-1. It is from there that the specific gene transcription occurs to initiate the cell cycle. Through this mechanism, apoptosis is inhibited, while angiogenesis, migration, adhesion, and invasion occur. Each of these is a functional element of the progression and development of cancer, which is defined as an abnormal growth of cells with a tendency to proliferate in an uncontrolled way and, in some cases, to metastasize. Mechanism of action: Mechanism of zalutumumab In order to combat SCCHN, zalutumumab was designed to inhibit EGFR signaling. Specifically, it binds to the EGFR Domain III on the cell surface. This locks the receptor in an inactive conformation, making the drug an inverse agonist. In doing this it is also acting as a competitive antagonist for the EGF ligand. In the inactive conformation, the distance between the intracellular tyrosine kinase residues is larger, which inhibits dimerization. Phosphorylation is consequently inhibited, so that no signal is released. Without a signal, cell cycle characteristics to enhance tumor growth are inhibited and the cancer progression is suppressed. This is not the only way in which zalutumumab works. It also is responsible for some antitumor effects through antibody-dependent cellular cytotoxicity (ADCC). The Fab, or fragment antigen-binding, region of the antibody binds to the antigen on the EGFR-expressing tumor cells. Through an immunological response, the body's natural killer (NK) cells, which are a type of lymphocyte, recognize and bind to the Fc portion of the antibody through an Fc receptor, CD16. The NK cell is then activated through the cross-linking of the Fc receptors, which sends a signal to induce apoptosis and cell death. The target tumor cell is then destroyed. Developmental status: 2009: Zalutumumab treatment was approved for Fast Track status by the U.S.
Food and Drug Administration for patients suffering from SCCHN who have failed standard therapies and have no other options. The drug has undergone pre-clinical and Phase I and II studies and is also in Phases I and II for SCCHN front-line with chemo-radiation and SCCHN with radiation. Additionally, a Phase II is under way for SCCHN, and Phase III studies are also being performed for SCCHN and SCCHN front-line with radiotherapy. 2010: A phase III study (of zalutumumab as an addition to 'best supportive care' in patients after failed standard platinum-based chemotherapy) reported a non-significant improvement in overall survival, and a significant 61% improvement in progression-free survival. 2014: A study of zalutumumab as an addition to chemoradiation for SCCHN showed no benefit, and 94% developed a skin rash (11% severe enough to discontinue).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**5-Methoxytryptamine** 5-Methoxytryptamine: 5-Methoxytryptamine (5-MT), also known as mexamine, is a tryptamine derivative closely related to the neurotransmitters serotonin and melatonin. 5-MT has been shown to occur naturally in the body in low levels. It is biosynthesized via the deacetylation of melatonin in the pineal gland. 5-MT acts as a full agonist at the 5-HT1, 5-HT2, 5-HT4, 5-HT6, and 5-HT7 receptors. It has no affinity for the 5-HT3 receptor and its affinity for the 5-HT1E receptor is very weak in comparison to the other 5-HT1 receptors. Its affinity for the 5-HT5A receptor is unknown. 5-Methoxytryptamine: Measured affinity for some receptors (incomplete list): 5-HT1B receptors (Ki = 35 nM) 5-HT1D receptors (Ki = 7.3 nM) 5-HT1E receptors (Ki = 3151 nM) 5-HT1F receptors (Ki = 1166 nM) 5-HT2A receptors (Ki = 295 nM) 5-HT2B receptors (Ki = 16.4 nM) 5-HT2C receptors (Ki = 52.48 nM) 5-HT4 receptors (Ki = 501.18 nM) 5-HT6 receptors (Ki = 69.18 nM) 5-HT7 receptors (Ki = 5.01 nM)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Combat Estimate** Combat Estimate: The Combat Estimate, also known as the Seven Questions, is a sequence of questions used by military commanders, usually in contact with the enemy, to plan their response, such as a platoon attack. It provides a means for formulating a plan that meets the exigencies of battle, even in very difficult circumstances. However, it may also be used at all levels in the chain of command, from tactical to strategic. Combat Estimate: The Combat Estimate was introduced by the British Army in 2001, although the military estimate or appreciation process is used widely by militaries around the world. It was developed to simplify and speed up the planning process at Battlegroup (BG) level. The approach focuses all of the work strands carried out during planning and ensures that these strands of work have a purpose. Its effectiveness has led to variants of it being used as a tool for decision making in a variety of contexts, from surgery to management consulting. An example is its application in identifying the process of plan development, the initial research stage for SMEs. The questions: The Combat Estimate consists of seven questions, as follows: What is the situation and how does it affect me? What have I been told to do and why? What effects do I need to achieve and what direction must I give to develop my plan? Where can I best accomplish each action or effect? What resources do I need to accomplish each action or effect? When and where do these actions take place in relation to each other? What control measures do I need to impose?
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Theorem Proving System** Theorem Proving System: The Theorem Proving System (TPS) is an automated theorem proving system for first-order and higher-order logic. TPS has been developed at Carnegie Mellon University. An educational version of it is known as ETPS (Educational Theorem Proving System).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nokdu-muk** Nokdu-muk: Nokdu-muk (녹두묵, 綠豆-; "mung bean jelly") is a Korean muk, or jelly, made from mung bean starch. In its most commonly encountered form, it is also called cheongpo-muk (청포묵, 淸泡-), which literally means "clear froth jelly," owing to its clear white color. If it is colored with gardenia, the nokdu-muk is called hwangpo-muk, which literally means "yellow froth jelly." Nokdu-muk is usually served cold, typically as the banchan (side dish) nokdu-muk-muchim (녹두묵무침). As it has little flavor of its own, nokdu-muk is typically seasoned with soy sauce and vinegar. Nokdu-muk: Nokdu-muk is a common food for special occasions. It is often served at Korean weddings and other celebrations. Nokdu-muk is also used as a main ingredient for making the Korean royal cuisine dish called tangpyeong-chae. It is made by mixing julienned nokdu-muk, stir-fried shredded beef, and various vegetables seasoned with soy sauce, vinegar, sugar, sesame seeds, salt, and sesame oil. Hwangpo-muk (황포묵) or norang-muk (노랑묵) is a Korean food which is a yellow jelly made from mung beans. The yellow color comes from dyeing with the fruit of gardenia. This jelly is particularly associated with Jeolla cuisine, and is a noted staple food of Namwon and also Jeonju (both cities in the North Jeolla province), where it is a common ingredient of Jeonju-style bibimbap. As with other varieties of muk (Korean jelly), hwangpo-muk is commonly served in small chunks seasoned with vinegar, soy sauce, and other condiments; this side dish is called hwangpo-muk-muchim (황포묵무침).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**G-spot** G-spot: The G-spot, also called the Gräfenberg spot (for German gynecologist Ernst Gräfenberg), is characterized as an erogenous area of the vagina that, when stimulated, may lead to strong sexual arousal, powerful orgasms and potential female ejaculation. It is typically reported to be located 5–8 cm (2–3 in) up the front (anterior) vaginal wall between the vaginal opening and the urethra and is a sensitive area that may be part of the female prostate. The existence of the G-spot has not been proven, nor has the source of female ejaculation. Although the G-spot has been studied since the 1940s, disagreement persists over its existence as a distinct structure, definition and location. The G-spot may be an extension of the clitoris, which together may be the cause of orgasms experienced vaginally. Sexologists and other researchers are concerned that women may consider themselves to be dysfunctional if they do not experience G-spot stimulation, and emphasize that not experiencing it is normal. Theorized structure: Location Two primary methods have been used to define and locate the G-spot as a sensitive area in the vagina: self-reported levels of arousal during stimulation, and stimulation of the G-spot leading to female ejaculation. Ultrasound technology has also been used to identify physiological differences between women, and changes to the G-spot region during sexual activity. The location of the G-spot is typically reported as being about 50 to 80 mm (2 to 3 in) inside the vagina, on the front wall. For some women, stimulating this area creates a more intense orgasm than clitoral stimulation. The G-spot area has been described as needing direct stimulation, such as two fingers pressed deeply into it. Attempting to stimulate the area through sexual penetration, especially in the missionary position, is difficult because of the particular angle of penetration required. Theorized structure: Vagina and clitoris Women usually need direct clitoral stimulation in order to orgasm, and G-spot stimulation may be best achieved by using both manual stimulation and vaginal penetration. A yoni massage also includes manual stimulation of the G-spot. Sex toys are available for G-spot stimulation. One common sex toy is the specially-designed G-spot vibrator, which is a phallus-like vibrator that has a curved tip and attempts to make G-spot stimulation easy. G-spot vibrators are made from the same materials as regular vibrators, ranging from hard plastic, rubber, silicone, jelly, or any combination of them. The level of vaginal penetration when using a G-spot vibrator depends on the woman, because women's physiology is not always the same. The effects of G-spot stimulation when using the penis or a G-spot vibrator may be enhanced by additionally stimulating other erogenous zones on a woman's body, such as the clitoris or vulva as a whole. When using a G-spot vibrator, this may be done by manually stimulating the clitoris, including by using the vibrator as a clitoral vibrator, or, if the vibrator is designed for it, by applying it so that it stimulates the head of the clitoris, the rest of the vulva and the vagina simultaneously. A 1981 case study reported that stimulation of the anterior vaginal wall made the area grow by fifty percent and that self-reported levels of arousal/orgasm were deeper when the G-spot was stimulated.
Another study, in 1983, examined eleven women by palpating the entire vagina in a clockwise fashion, and reported a specific response to stimulation of the anterior vaginal wall in four of the women, concluding that the area is the G-spot. In a 1990 study, an anonymous questionnaire was distributed to 2,350 professional women in the United States and Canada with a subsequent 55% return rate. Of these respondents, 40% reported having a fluid release (ejaculation) at the moment of orgasm, and 82% of the women who reported the sensitive area (Gräfenberg spot) also reported ejaculation with their orgasms. Several variables were associated with this perceived existence of female ejaculation. Some research suggests that G-spot and clitoral orgasms are of the same origin. Masters and Johnson were the first to determine that the clitoral structures surround and extend along and within the labia. Upon studying women's sexual response cycle to different stimulation, they observed that both clitoral and vaginal orgasms had the same stages of physical response, and found that the majority of their subjects could only achieve clitoral orgasms, while a minority achieved vaginal orgasms. On this basis, Masters and Johnson argued that clitoral stimulation is the source of both kinds of orgasms, reasoning that the clitoris is stimulated during penetration by friction against its hood. Researchers at the University of L'Aquila, using ultrasonography, presented evidence that women who experience vaginal orgasms are statistically more likely to have thicker tissue in the anterior vaginal wall. The researchers believe these findings make it possible for women to have a rapid test to confirm whether or not they have a G-spot. Professor of genetic epidemiology, Tim Spector, who co-authored research questioning the existence of the G-spot and finalized it in 2009, also hypothesizes thicker tissue in the G-spot area; he states that this tissue may be part of the clitoris and is not a separate erogenous zone. Supporting Spector's conclusion is a study published in 2005 which investigates the size of the clitoris – it suggests that clitoral tissue extends into the anterior wall of the vagina. The main researcher of the studies, Australian urologist Helen O'Connell, asserts that this interconnected relationship is the physiological explanation for the conjectured G-spot and experience of vaginal orgasms, taking into account the stimulation of the internal parts of the clitoris during vaginal penetration. While using MRI technology, O'Connell noted a direct relationship between the legs or roots of the clitoris and the erectile tissue of the "clitoral bulbs" and corpora, and the distal urethra and vagina. "The vaginal wall is, in fact, the clitoris," said O'Connell. "If you lift the skin off the vagina on the side walls, you get the bulbs of the clitoris – triangular, crescental masses of erectile tissue." O'Connell et al., who performed dissections on the female genitals of cadavers and used photography to map the structure of nerves in the clitoris, were already aware that the clitoris is more than just its glans and asserted in 1998 that there is more erectile tissue associated with the clitoris than is generally described in anatomical textbooks.
They concluded that some females have more extensive clitoral tissues and nerves than others, especially having observed this in young cadavers as compared to elderly ones, and therefore whereas the majority of females can only achieve orgasm by direct stimulation of the external parts of the clitoris, the stimulation of the more generalized tissues of the clitoris via intercourse may be sufficient for others. French researchers Odile Buisson and Pierre Foldès reported findings similar to those of O'Connell. In 2008, they published the first complete 3D sonography of the stimulated clitoris, and republished it in 2009 with new research, demonstrating the ways in which erectile tissue of the clitoris engorges and surrounds the vagina. On the basis of this research, they argued that women may be able to achieve vaginal orgasm via stimulation of the G-spot because the highly innervated clitoris is pulled closely to the anterior wall of the vagina when the woman is sexually aroused and during vaginal penetration. They assert that since the front wall of the vagina is inextricably linked with the internal parts of the clitoris, stimulating the vagina without activating the clitoris may be next to impossible. In their 2009 published study, the "coronal planes during perineal contraction and finger penetration demonstrated a close relationship between the root of the clitoris and the anterior vaginal wall". Buisson and Foldès suggested "that the special sensitivity of the lower anterior vaginal wall could be explained by pressure and movement of clitoris's root during a vaginal penetration and subsequent perineal contraction". Theorized structure: Female prostate In 2001, the Federative Committee on Anatomical Terminology accepted female prostate as a second term for the Skene's gland, which is believed to be found in the G-spot area along the walls of the urethra. The male prostate is biologically homologous to the Skene's gland; it has been unofficially called the male G-spot because it can also be used as an erogenous zone. Regnier de Graaf, in 1672, observed that the secretions (female ejaculation) by the erogenous zone in the vagina lubricate "in agreeable fashion during coitus". Modern scientific hypotheses linking G-spot sensitivity with female ejaculation led to the idea that non-urine female ejaculate may originate from the Skene's gland, with the Skene's gland and male prostate acting similarly in terms of prostate-specific antigen and prostate-specific acid phosphatase studies, which led to a trend of calling the Skene's glands the female prostate. Additionally, the enzyme PDE5 (involved in erectile dysfunction) has been associated with the G-spot area. Because of these factors, it has been argued that the G-spot is a system of glands and ducts located within the anterior (front) wall of the vagina. A similar approach has linked the G-spot with the urethral sponge. Clinical significance: G-spot amplification (also called G-spot augmentation or the G-Shot) is a procedure intended to temporarily increase pleasure in sexually active women with normal sexual function, focusing on increasing the size and sensitivity of the G-spot. G-spot amplification is performed by attempting to locate the G-spot and noting measurements for future reference.
After numbing the area with a local anesthetic, human engineered collagen is then injected directly under the mucosa in the area where the G-spot is concluded to be. A position paper published by the American College of Obstetricians and Gynecologists in 2007 warns that there is no valid medical reason to perform the procedure, which is not considered routine or accepted by the College; and it has not been proven to be safe or effective. The potential risks include sexual dysfunction, infection, altered sensation, dyspareunia, adhesions and scarring. The College position is that it is untenable to recommend the procedure. The procedure is also not approved by the Food and Drug Administration or the American Medical Association, and no peer-reviewed studies have been accepted to account for either safety or effectiveness of this treatment. Society and culture: General skepticism In addition to general skepticism among gynecologists, sexologists and other researchers that the G-spot exists, a team at King's College London in late 2009 suggested that its existence is subjective. They acquired the largest sample size of women to date – 1,800 – who are pairs of twins, and found that the twins did not report a similar G-spot in their questionnaires. The research, headed by Tim Spector, documents a 15-year study of the twins, identical and non-identical. According to the researchers, if one identical twin reported having a G-spot, it was more likely that the other would too, but this pattern did not materialize. Study co-author Andrea Burri believes: "It is rather irresponsible to claim the existence of an entity that has never been proven and pressurise women and men too." She stated that one of the reasons for the research was to remove feelings of "inadequacy or underachievement" for women who feared they lacked a G-spot. Researcher Beverly Whipple dismissed the findings, commenting that twins have different sexual partners and techniques, and that the study did not properly account for lesbian or bisexual women. Petra Boynton, a British scientist who has written extensively on the G-spot debate, is also concerned about the promotion of the G-spot leading women to feel "dysfunctional" if they do not experience it. "We're all different. Some women will have a certain area within the vagina which will be very sensitive, and some won't — but they won't necessarily be in the area called the G spot," she stated. "If a woman spends all her time worrying about whether she is normal, or has a G spot or not, she will focus on just one area, and ignore everything else. It's telling people that there is a single, best way to have sex, which isn't the right thing to do." Nerve endings G-spot proponents are criticized for giving too much credence to anecdotal evidence, and for questionable investigative methods; for instance, the studies which have yielded positive evidence for a precisely located G-spot involve small participant samples. While the existence of a greater concentration of nerve endings at the lower third (near the entrance) of the vagina is commonly cited, some scientific examinations of vaginal wall innervation have shown no single area with a greater density of nerve endings. Several researchers also consider the connection between the Skene's gland and the G-spot to be weak. The urethral sponge, however, which is also hypothesized as the G-spot, contains sensitive nerve endings and erectile tissue.
Sensitivity is not determined by neuron density alone: other factors include the branching patterns of neuron terminals and cross or collateral innervation of neurons. While G-spot opponents argue that the G-spot cannot exist because there are very few tactile nerve endings in the vagina, G-spot proponents argue that vaginal orgasms rely on pressure-sensitive nerves. Society and culture: Clitoral and other anatomical debates The claim that the G-spot has an anatomical relationship with the clitoris has been challenged by Vincenzo Puppo, who, while agreeing that the clitoris is the center of female sexual pleasure, disagrees with Helen O'Connell and other researchers' terminological and anatomical descriptions of the clitoris. He stated, "Clitoral bulbs is an incorrect term from an embryological and anatomical viewpoint, in fact the bulbs do not develop from the phallus, and they do not belong to the clitoris." He says that clitoral bulbs "is not a term used in human anatomy" and that vestibular bulbs is the correct term, adding that gynecologists and sexual experts should inform the public with facts instead of hypotheses or personal opinions. "[C]litoral/vaginal/uterine orgasm, G/A/C/U spot orgasm, and female ejaculation, are terms that should not be used by sexologists, women, and mass media," he said, further commenting that the "anterior vaginal wall is separated from the posterior urethral wall by the urethrovaginal septum (its thickness is 10–12 mm)" and that the "inner clitoris" does not exist. "The female perineal urethra, which is located in front of the anterior vaginal wall, is about one centimeter in length and the G-spot is located in the pelvic wall of the urethra, 2–3 cm into the vagina," Puppo stated. He believes that during vaginal intercourse the penis cannot come in contact with the congregation of multiple nerves and veins situated at the angle of the clitoris, detailed by Georg Ludwig Kobelt, or with the roots of the clitoris, which do not have sensory receptors or erogenous sensitivity. He did, however, dismiss the orgasmic definition of the G-spot that emerged after Ernst Gräfenberg, stating that "there is no anatomical evidence of the vaginal orgasm which was invented by Freud in 1905, without any scientific basis". Puppo's belief that there is no anatomical relationship between the vagina and clitoris contrasts with the general belief among researchers that vaginal orgasms are the result of clitoral stimulation; they maintain that clitoral tissue extends into, or is at least likely stimulated by the clitoral bulbs in, the area most commonly reported to be the G-spot. "My view is that the G-spot is really just the extension of the clitoris on the inside of the vagina, analogous to the base of the male penis," said researcher Amichai Kilchevsky. 
Because female fetal development is the "default" direction of fetal development in the absence of substantial exposure to male hormones and therefore the penis is essentially a clitoris enlarged by such hormones, Kilchevsky believes that there is no evolutionary reason why females would have two separate structures capable of producing orgasms and blames the porn industry and "G-spot promoters" for "encouraging the myth" of a distinct G-spot. The general difficulty of achieving vaginal orgasms, a predicament likely due to nature easing the process of childbearing by drastically reducing the number of vaginal nerve endings, challenges arguments that vaginal orgasms help encourage sexual intercourse in order to facilitate reproduction. O'Connell stated that focusing on the G-spot to the exclusion of the rest of a woman's body is "a bit like stimulating a guy's testicles without touching the penis and expecting an orgasm to occur just because love is present". She stated that it "is best to think of the clitoris, urethra, and vagina as one unit because they are intimately related". Ian Kerner stated that the G-spot may be "nothing more than the roots of the clitoris crisscrossing the urethral sponge". A Rutgers University study, published in 2011, was the first to map the female genitals onto the sensory portion of the brain, and supports the possibility of a distinct G-spot. When the research team asked several women to stimulate themselves in a functional magnetic resonance imaging (fMRI) machine, brain scans showed that stimulating the clitoris, vagina and cervix lit up distinct areas of the women's sensory cortex, which means the brain registered distinct feelings between stimulating the clitoris, the cervix and the vaginal wall – where the G-spot is reported to be. "I think that the bulk of the evidence shows that the G-spot is not a particular thing," stated Barry Komisaruk, who headed the research. "It's not like saying, 'What is the thyroid gland?' The G-spot is more of a thing like New York City is a thing. It's a region, it's a convergence of many different structures." In 2009, The Journal of Sexual Medicine held a debate for both sides of the G-spot issue, concluding that further evidence is needed to validate the existence of the G-spot. In 2012, scholars Kilchevsky, Vardi, Lowenstein and Gruenwald stated in the journal, "Reports in the public media would lead one to believe the G-spot is a well-characterized entity capable of providing extreme sexual stimulation, yet this is far from the truth." The authors noted that dozens of trials have attempted to confirm the existence of a G-spot using surveys, pathologic specimens, various imaging modalities, and biochemical markers, and concluded: The surveys found that a majority of women believe a G-spot actually exists, although not all of the women who believed in it were able to locate it. Attempts to characterize vaginal innervation have shown some differences in nerve distribution across the vagina, although the findings have not proven to be universally reproducible. Furthermore, radiographic studies have been unable to demonstrate a unique entity, other than the clitoris, whose direct stimulation leads to vaginal orgasm. Objective measures have failed to provide strong and consistent evidence for the existence of an anatomical site that could be related to the famed G-spot. 
However, reliable reports and anecdotal testimonials of the existence of a highly sensitive area in the distal anterior vaginal wall raise the question of whether enough investigative modalities have been implemented in the search of the G-spot. Society and culture: A 2014 review from Nature Reviews Urology reported that "no single structure consistent with a distinct G-spot has been identified." History: The release of fluids had been seen by medical practitioners as beneficial to health. Within this context, various methods were used over the centuries to release "female seed" (via vaginal lubrication or female ejaculation) as a treatment for suffocation ex semine retento (suffocation of the womb), female hysteria or green sickness. Methods included a midwife rubbing the walls of the vagina or insertion of the penis or penis-shaped objects into the vagina. In the book History of V, Catherine Blackledge lists historical terms for what she believes refers to the female prostate (the Skene's gland), including the little stream, the black pearl and palace of yin in China, the skin of the earthworm in Japan, and saspanda nadi in the Indian sex manual Ananga Ranga. The 17th-century Dutch physician Regnier de Graaf described female ejaculation and referred to an erogenous zone in the vagina that he linked as homologous with the male prostate; this zone was later reported by the German gynecologist Ernst Gräfenberg. Coinage of the term G-spot has been credited to Addiego et al. in 1981, named after Gräfenberg, and to Alice Kahn Ladas and Beverly Whipple et al. in 1982. Gräfenberg's 1940s research, however, was dedicated to urethral stimulation; Gräfenberg stated, "An erotic zone always could be demonstrated on the anterior wall of the vagina along the course of the urethra". The concept of the G-spot entered popular culture with the 1982 publication of The G Spot and Other Recent Discoveries About Human Sexuality by Ladas, Whipple and Perry, but it was criticized immediately by gynecologists: some of them denied its existence, arguing that the absence of arousal made it less likely to be observed and that autopsy studies did not report it.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ALF Products** ALF Products: ALF Products Inc., or ALF (named after an assembly language instruction for "rotate the A register Left Four bits"), was a Colorado company primarily known for its computer-controlled music synthesizers and floppy disk supplies and duplicators. History: In 1971 Tim Gill, a Wheat Ridge High School student with an interest in computers, visited the computer terminal room at Lakewood High School looking for "other intelligent life-forms". There he met Philip Tubb, a Lakewood High School student, who shared his interest in computers. This meeting inspired Philip to start the Jefferson County Computer Club. As a freshman, Philip had served as Student President, and he had good relationships with the school's and district's staff. He was able to create the only student-founded multi-school club in the district. Club meetings were announced county-wide using log-on messages on the county's Hewlett-Packard 2000-series time-shared computer system and were held at various high schools. At the Jefferson County Computer Club, Philip Tubb met many other students who shared an interest in computers. He also shared a strong interest in electronics with John Ridges, a Wheat Ridge High School student. John designed and built one of the first computer-controlled music synthesizers, a polyphonic unit with 6 voices (each with an 8-octave range and 8 volume levels). It could be controlled by a remotely located computer when connected between a teletype (or similar device) and its modem. The ASCII serial data flowing on that connection was used to issue commands to the synthesizer. John also wrote programs in BASIC which allowed music to be entered in text format, saved on the computer's hard drive, and played back using the device. The synthesizer got the nickname "Mesmerelda" due to the hypnotic effects of its status LEDs during playback. History: While a student at Lakewood High School, Philip Tubb was hired part-time to operate the district's computer. In that job, Philip also taught seminars on programming to many of the county's high school math teachers who, with little if any prior instruction, were struggling to teach the programming classes. With those contacts, Philip and John began demonstrating Mesmerelda to music classes at several high schools, introducing the students (and teachers) to this new concept of computer-controlled music. Many of the students were interested in music but not skilled enough to perform using a conventional instrument. These students were excited by the idea of using a computer to play music, eliminating the need to master an instrument first. The potential market for computer-controlled synthesizers was apparently larger than the two had assumed. History: After high school, Philip Tubb joined fellow former computer club members Tim Gill and Rich Harman at the University of Colorado. Philip soon discovered the computer science classes were based almost entirely on mainframe computers, which he considered obsolete by that time. He dropped out after one semester to study programming independently. Late in 1975, Philip began discussing the idea of starting a company to make computer-related electronic products with John Ridges (who by then was a student at the University of Colorado). Colorado law at that time required an incorporator to be 21, and required at least three directors. Neither Philip nor John was 21 years old; Rich joined the project and signed the incorporation paperwork for "A L F Products Inc." in November 1975. 
The three served as the board of directors at ALF through 1992. The name "ALF" was chosen from a list of assembly language instructions for the Hewlett-Packard computer. It stands for "rotate the A register Left Four bits". This particular instruction was chosen largely because the letters have no curves and would therefore be easy to draw with a plotter or other line-vector graphics device. History: ALF developed miscellaneous products before doing more serious work on computer-controlled music synthesizers. Several former Jefferson County Computer Club members became ALF employees, including Tim Gill (who left a job at Hewlett-Packard to join ALF). ALF created several products for the Apple II computer. Tim Gill wanted ALF to work on products for the new Apple III, but Philip Tubb had concerns about the viability of that computer. Tim soon left ALF to start Quark, Inc. and wrote Word Juggler for the Apple III. Despite this parting, ALF and Quark maintained a relationship over the years. One item ALF manufactured for Quark was a keyboard enhancement circuit that allowed Word Juggler to be used with the Apple II. History: ALF was known for its whimsical advertisements and subtle humor in owner's manuals and product brochures. ALF's "Rock Star" ad noted that "Some companies will say anything to sell you a music card" and proceeded to ridicule selected quotes from competitors' ads. One of the quotes was actually from one of ALF's own earlier ads. The "guitarple" in the ad is not a real instrument; ALF constructed it only for the photo shoot. ALF's "Craftsman" advertisement was featured in Creative Computing's 1980 April Fools issue. The magazine, when turned upside down, appeared to be "Dr. KiloBYTE's creative Popular Personal Recreational Micro Computer Data Interface World Journal", a take-off on the names of several computer magazines at the time. This issue included 73 pages of humorous articles, with all the pages numbered in hexadecimal; ALF's ad appeared on page 3F. History: As computer-controlled music became more and more popular, much larger companies began entering the market. ALF decided to switch their focus to equipment for duplicating floppy disks, which had little competition, and became a dominant supplier in that field. As compact discs began to replace floppy disks, ALF realized a larger partner was needed for that market. A buyout by Rimage Corporation, who had recently completed their IPO, was negotiated. Most former ALF employees left soon after the acquisition; Philip Tubb and John Ridges remained with Rimage for a few years. Products: Early products ALF's first products were adaptations of the punched tape reader in the Model 33ASR Teletype which allowed it to operate at higher speeds. Display-based terminals were becoming popular for use on time-shared systems, and they could operate at higher speeds than the Teletype. ALF created an interface card which allowed the Teletype's reader, which normally reads 10 characters per second, to read at 30 characters per second when used with a display-based terminal. It was sold only to schools in the local district; no attempt was made for larger marketing. Another version allowed the reader to operate at 55 characters per second, but modems that could operate at such speeds were not widely used at that time. 
Products: Next, ALF produced a number of incidental S-100 products: a card extender, which facilitates testing an S-100 card by raising it above the other cards in the computer; an S-100 motherboard; an S-100 motherboard testing card, which simplified checking for assembly errors on a motherboard; and a random number generator. The motherboard testing card was sold through local hobby-electronics stores, and the motherboard was used in a subsequent product (the AD8). Products: AD8 Music Synthesizer ALF's first computer-controlled music synthesizer, designed in 1976–1977, was called the AD8 (a homophone of 88, the number of keys on a piano). It was intended for use with any S-100 computer, but could be used with any computer via a parallel bus. The primary hardware was a one-voice synthesizer card; up to eight cards could be used to create a polyphonic system with one to eight simultaneous voices. A controller card, which had its own 6502 processor, connected to the user's computer and the synthesizer cards. Each one-voice card had the following controls: an 8-octave range (96 pitches); volume control with 256 levels; two programmable waveform generators (sample-based synthesis) using a scanned-RAM D/A with 64 elements and 256 amplitude levels per element; a low-pass filter with 16 levels; an envelope generator with rise rates of 0.004 to 1.3 seconds in 256 steps, fall rates of 0.003 to 7.8 seconds in 256 steps, and 256 sustain levels; and stereo channel selection (left or right). The AD8 was also able to produce various white noise effects, which were particularly useful for percussion sounds, by programming the waveform-RAMs with random numbers. Products: ALF created a demonstration record, "Computer Controlled Synthesizer Performances", containing performances from Mesmerelda and the AD8. Costing almost twice as much as the Altair 8800 or similar computer required to control it, the AD8 was too expensive for most hobbyists at the time. Few systems were sold. Products: Quad Chromatic Pitch Generator Around the same time as the AD8, ALF sold a simple pitch generator card in two versions: one that plugged directly into an S-100 computer, and one that could be connected to any computer via a parallel interface. Each card could produce four simultaneous voices, and multiple cards could be used in an S-100 system. There were no controls other than pitch (the same 8-octave range as the AD8). It could serve as a computer-controlled sequencer by connecting the individual voices to external equipment, such as conventional analog synthesizers. Additionally, a standard audio cable allowed connection to an ordinary audio system. Products: Apple Music Synthesizer / Music Card MC16 The S-100 computers that customers used to control the AD8 or Quad Chromatic Pitch Generators varied widely in configuration; there was no single standard for even major items such as the type of display, keyboard interface or layout, tape device for software distribution, and so forth. This lack of standardization was a significant obstacle to ALF creating user-friendly software for their products. When Apple introduced the Apple II in 1977, it was available with only one display and keyboard format, which allowed software to be created that would work for all users. Unfortunately, the Apple II was less powerful than most S-100 computers and the accessory cards it could accommodate were physically quite small. It was necessary for ALF to design a synthesizer much simpler than the complex AD8. 
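The scanned-RAM waveform generation described above for the AD8 can be illustrated with a short sketch. The following Python fragment is only a modern, simplified illustration of the idea, not ALF's hardware or software: the output sample rate, the harmonic mix, and the phase-accumulator scanning method are assumptions made for the example; only the 64-element table and the 256 amplitude levels come from the description above.

```python
import numpy as np

SAMPLE_RATE = 44100   # output rate for this sketch (an assumption, not the AD8's actual rate)
TABLE_SIZE = 64       # the AD8's scanned-RAM D/A held 64 waveform elements
LEVELS = 256          # 256 amplitude levels per element

def make_table(harmonics):
    """Build a 64-element wavetable from (harmonic number, amplitude) pairs,
    quantized to 256 levels (0..255) like a small waveform RAM."""
    phase = np.linspace(0.0, 2.0 * np.pi, TABLE_SIZE, endpoint=False)
    wave = sum(a * np.sin(n * phase) for n, a in harmonics)
    wave = wave / np.max(np.abs(wave))                    # normalize to [-1, 1]
    return np.round((wave * 0.5 + 0.5) * (LEVELS - 1))    # quantize to 8 bits

def scan(table, freq_hz, seconds):
    """Scan the table at a rate set by the desired pitch (phase-accumulator style)."""
    n = int(seconds * SAMPLE_RATE)
    index = (np.arange(n) * freq_hz * TABLE_SIZE / SAMPLE_RATE) % TABLE_SIZE
    samples = table[index.astype(int)]                    # nearest-element lookup
    return samples / (LEVELS - 1) * 2.0 - 1.0             # back to [-1, 1] audio

# A tone with a few odd harmonics, played at A4 (440 Hz) for half a second.
tone = scan(make_table([(1, 1.0), (3, 0.4), (5, 0.2)]), 440.0, 0.5)
```

Filling the table with random values instead of a harmonic mix gives the noise-like output the text mentions as useful for percussion effects.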
Products: ALF used the AD8 to simulate a wide variety of possible synthesizer designs. These ranged from very simple ones much like the Quad Chromatic Pitch Generator (which was obviously too simple for desirable music) to far more complex schemes using nearly the full capabilities of the AD8 (which was already known to be too expensive). These simulations could be operated in real-time and their Relative Enjoyment Factor (REF) measured to determine how usable each design would be as a functional music synthesizer. The target goal was a REF above 80. Numerous designs were evaluated and considered along with their estimated production cost. Finally, a design retaining the 8-octave range, accurate tuning, and a combination of ADSR envelope and volume control was selected; the programmable waveform generation and filtering functions of the AD8 were omitted. The final REF achieved was greater than 82. Products: The product was originally sold as "ALF's Apple Music Synthesizer", but Apple was concerned that customers might think the product was sold by Apple rather than being Apple-compatible; ALF changed the name to "Music Card MC16" ("MC" for "Music Card" and 16 being the last digits of the product's part number). It was the first hardware music product sold for the Apple II, and was one of the largest-selling hardware accessories for the Apple II (aside from Apple's diskette drive) for some time. The product was demonstrated to Apple and Apple dealers late in 1978, and volume sales began in June 1979. Products: The sophisticated software written by John Ridges for this synthesizer was the first to implement graphical entry for a personal computer music product. At the time, his music entry program was the largest assembly language program available for the Apple II (even larger than the entire Applesoft BASIC language interpreter), and one of the few programs to utilize Apple's hi-resolution graphics. It was also perhaps the first software for an Apple computer to use a graphical user interface (GUI) with icons and pointing elements (the "IMP" portion of WIMP interfaces), several years ahead of Apple's Macintosh. Since the Apple II had no mouse, the GUI was implemented using the Apple's "game paddles"; one moved an arrow to select the desired icon, and the other moved the selected icon to the desired position on the musical staff on the screen display. When entering a musical note, the sound of the note was simultaneously played by the synthesizer for confirmation that the correct pitch had been selected. Advanced functions in the software allowed repeated sections of music to be played without entering them more than once, and allowed the notes to be played on multiple voices simultaneously for purposes of additive synthesis. Additive synthesis is normally performed using sine waves, but since no waveform generator (like that on the AD8) had been included, each voice could create only square waves. Additive synthesis can also be done using square waves, but the range of possible sounds is more limited (in particular, no sound less harmonically complex than the base square wave itself can be created). Tests with the AD8 had shown that very interesting sounds could be created with square wave additive synthesis when each voice used slightly different ADSR envelopes and/or small shifts in timing. Therefore, the MC16 was designed with very fine ADSR control. 
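The square-wave additive approach just described can be sketched in a few lines. This is a hedged, modern illustration only, not the MC16's software: the sample rate, the piecewise-linear ADSR shape, the envelope times, and the small timing offsets are all assumed values chosen for the example; the idea taken from the text is simply that several square-wave voices at the same pitch, given slightly different envelopes and timing, sum to a more interesting composite sound.

```python
import numpy as np

RATE = 44100  # sample rate for this sketch (an assumption)

def square(freq_hz, seconds):
    """A basic square wave at the given frequency."""
    t = np.arange(int(seconds * RATE)) / RATE
    return np.sign(np.sin(2.0 * np.pi * freq_hz * t))

def adsr(n, attack, decay, sustain, release):
    """Piecewise-linear ADSR envelope over n samples (times in seconds, sustain 0..1)."""
    a, d, r = (int(x * RATE) for x in (attack, decay, release))
    s = max(n - a - d - r, 0)
    env = np.concatenate([
        np.linspace(0.0, 1.0, a, endpoint=False),      # attack
        np.linspace(1.0, sustain, d, endpoint=False),  # decay
        np.full(s, sustain),                           # sustain
        np.linspace(sustain, 0.0, r),                  # release
    ])
    return env[:n]

# Three square-wave voices at the same pitch, each with a slightly different
# envelope and a small timing offset, summed into one composite timbre.
dur, pitch = 1.0, 220.0
voices = []
for delay, env_args in [(0.000, (0.01, 0.10, 0.6, 0.3)),
                        (0.004, (0.03, 0.20, 0.5, 0.4)),
                        (0.008, (0.06, 0.30, 0.4, 0.5))]:
    v = square(pitch, dur) * adsr(int(dur * RATE), *env_args)
    voices.append(np.concatenate([np.zeros(int(delay * RATE)), v]))

length = max(len(v) for v in voices)
mix = sum(np.pad(v, (0, length - len(v))) for v in voices) / len(voices)
```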
Products: Each card could produce three simultaneous voices, each with an 8 octave range (starting at the same pitch as a piano but extending 8 semitones higher) with excellent tuning accuracy (within 2 cents) and 256 envelope/volume levels with an exponential scaling (78 dB range). Each voice could also produce quarter tones (pitches exponentially halfway between each piano pitch). Two cards could be used for six voices or three cards for nine voices; with two or three cards the audio output was in stereo. Products: Apple Music II / Music Card MC1 After the original Music Card MC16 had been selling in volume for some time, engineers at ALF learned that Texas Instruments (TI) had put essentially the entire MC16 card's circuitry into a single integrated circuit, the SN76489N. TI had significantly reduced the pitch range and tuning accuracy, compared to the MC16, and reduced the 256 envelope/volume levels down to 16 (and the 78 dB range down to 28 dB). TI added a pseudorandom number generator circuit which could be used to create white noise effects similar to what ALF had demonstrated on the AD8 synthesizer using random amplitudes, although TI probably intended it more for sound effects than for simulating percussion instruments. Products: ALF designed a card with three SN76489N chips, thus allowing nine simultaneous voices (similar to three MC16 cards). The product was originally named "ALF's Apple Music II" and was later renamed (at Apple's request) "Music Card MC1". Rather than starting at the same pitch as the lowest note on a piano (A0, the A below the C three octaves below Middle C) like the MC16, the MC1 started 15 semitones higher, at C2 (the C two octaves below middle C); and rather than having an 8-octave (96 semitone) range, it had a 6-octave (72 semitone) range. The Music Card MC16's software was modified to operate the MC1. Products: MC16/MC1 accessories ALF sold disks containing the data for songs, which could be played back using the MC16 or MC1 synthesizers. Many of the songs were entered by ALF's customers, and ALF paid a nominal licensing fee to them for the rights to distribute their work. ALF also sold a disk of "Basic Ear Training Skills" which drilled students in rudimentary skills such as identifying major, augmented, diminished, or minor chords. Due to the relatively poor tuning accuracy of the MC1, the Ear Training programs were only offered for the MC16. Other accessories included the "Timing Mode Input Board", which allowed one voice of an MC16 card to be used to control playback tempo; and "Process", a collection of editing functions and other aids for use with the music entry program. Products: PAL9000 radio direction finder Moving away from music products, in 1981 ALF designed a radio direction finder designated the PAL9000 (for sequence-Phased Antenna Locator, with the 9000 being a play on the movie 2001's HAL 9000). The product consisted of an array of eight helical antennas which could be mounted on top of a vehicle, and a control box with a circle of LED direction indicators which was typically mounted inside for viewing by the driver or a passenger. It was originally intended for use in the sport of amateur radio direction finding. Customers included the US Border Patrol, who used it to track dogs they had equipped with radio transmitters, and various taxi companies, who used it to locate drivers who interfered with radio dispatch by continuously transmitting. 
Products: The product used a digital FIR filter and other advanced digital techniques to continuously determine the direction of the incoming radio signal based on its interaction with the eight separate antennas. A user-supplied FM radio or transceiver, tuned to the desired transmission frequency (within the unit's 144 to 148 MHz range), was also connected to the display unit and to the antenna assembly. The PAL9000 was excellent at indicating the direction of incoming radio signals, but in some situations use of the product was hampered by the fact that a radio signal may be coming from something reflecting the signal (such as a large building) rather than from the originating transmitter itself. Products: Copy System CS3 / Total Accuracy Copy Program When ALF began selling music cards for the Apple II, floppy disk drives were not yet common on the Apple, so the software was supplied on cassette tape. ALF worked closely with a local cassette tape duplication company, helping them modify equipment designed for voice and music to copy cassettes with data in Apple's format. The duplication company asked ALF to design equipment to copy floppy disks so they could do business in that market as well, but ALF, wanting to focus on their own products, declined. When ALF later began offering the music card software on floppy disks, they quickly discovered that existing methods for copying the disks were slow and extremely unreliable. ALF went back to the cassette duplication company to work with them on creating floppy duplicators, but by that time the cassette company had decided not to enter the floppy business. Products: ALF had some experience with floppy drives and hard drives, particularly with the S-100 systems, and began designing disk copying software and hardware for internal use. At computer industry conventions, ALF was often asked by other software and hardware companies how ALF was getting their disks copied, since this was a common problem in the industry. When they discovered ALF had designed their own equipment, many companies asked if they could buy the equipment or if ALF would copy their disks. ALF began copying disks for several software companies, at first largely as favors to the exhibitors ALF's employees had socialized with at the conventions. These casual arrangements soon evolved into a full-fledged disk copying service which ALF began advertising in computer magazines. Later ALF began selling two related disk copying products. One was a software product that ran on a standard Apple II computer, but copied disks faster than Apple's software and much more reliably. The other was a product that combined that software with additional hardware to allow even faster copying. The software-only product was originally named "PenultiCopy"; "penulti" from "penultimate", meaning second best or second from the top (i.e. second compared to the hardware product), but unfortunately more often meaning next to last. This was soon changed to the more apt name of "Total Accuracy Copy Program". The hardware-enhanced product was originally called simply "Copy System", which was later expanded to "Copy System CS3" as ALF introduced additional models of hardware-based copiers. The power supply in the Apple II was not capable of running more than one disk drive at a time, which of course limited the speed at which disks could be copied. The hardware of the CS3 replaced the power supply in the Apple. 
Since this high-capacity power supply was physically too large to fit within the Apple II, it was placed directly behind the Apple II and connected by means of a short cable. The CS3 with several drives could copy about 200 disks per hour (including a complete verification of each copy). Products: Both products included software tools for adjusting the disk drive's motor speed (a common problem on Apple's drives), software for more advanced maintenance, and detailed technical instructions on maintenance procedures. The CS3 also included a small circuit assembly which stabilized the Apple II's clock to allow one of the drive's maintenance adjustments to be performed without needing special drive exerciser equipment. Products: AD8088 Processor Card and AD128K Memory Card In June 1982, ALF began selling a card for the Apple II that added an Intel 8088 processor, the same processor as used in the IBM PC which had been introduced nine months earlier. Since the Apple II had only an 8-bit processor running at 1.023 MHz, ALF's AD8088 with its 16-bit processor running at 5 MHz allowed for much faster operations. An optional card, the AD128K, added up to 128K of memory for the 8088 (more than twice the memory available on the Apple II) and/or an Intel 8087 floating-point math coprocessor. Products: Much like Microsoft's Z-80 SoftCard, which allowed the Apple II to run software written for the Altair 8800 (or other S-100 computers) and operating systems such as CP/M, ALF's AD8088 allowed the Apple II to run software written for IBM's PC and operating systems such as CP/M-86 and MS-DOS. A version of CP/M-86 for the AD8088 was sold by Clone Software Corporation. Unlike the SoftCard and most other processor cards for the Apple II, ALF's AD8088 allowed the Apple's processor to operate simultaneously along with the AD8088's processor. The AD8088's processor could access the relatively slow memory in the Apple II, the fast memory on the AD8088 card itself (2K, 4K, 6K, or 8K bytes), and the fast memory on the accessory card (64K or 128K bytes). The AD8088 also had 4K bytes of ROM memory. Products: Three applications were included with the product: FTL, MET, and MEMDISK. Products: FTL FTL, Formula Transfer Link, was an application that allowed floating-point math operations in Apple's Applesoft BASIC to be sent to the AD8088's 16-bit processor for evaluation rather than being evaluated by the Apple's 8-bit processor. This resulted in much faster execution and better accuracy. For even faster performance, math operations could be handled by the optional 8087 coprocessor. No modifications were needed to the user's Applesoft programs; FTL was activated simply by running an installation program after booting up the Apple II. FTL was also compatible with popular BASIC compilers (Microsoft TASC and On-Line Systems Expediter II). Products: A similar program, "The Pascal Patch", was sold by Micro Magic. It allowed the AD8088 to speed up math functions in Apple's Pascal programming language. Products: MET MET, Multiple Event Timer, was an application that allowed the AD8088 to be used for software profiling or other precision timing functions. For profiling, the programmer would insert a write instruction to the AD8088 card (for example, using POKE in BASIC) at the beginning and end of each code section to be profiled, and execute the program. Next, a program supplied by ALF would be used to read the precise timing measurements stored in the AD8088's memory. 
This allowed the programmer to determine the amount of time each section took to execute, which is useful for determining which sections would benefit from optimization. MET could time intervals as short as 50 microseconds. Products: MEMDISK Supplied with the AD128K, this application allowed the entire contents of a floppy disk (except the bootup tracks) to be read into the card's memory. The AD8088 could then be used directly by most software as if it were an Apple Disk II controller. The advantage of doing this is that the AD8088's solid-state memory is much faster than the mechanical rotating storage of a floppy diskette. The disadvantage is that any data changed on the memory-disk is lost if the power fails. Normally the user would write the memory-disk back to a floppy disk when finished making modifications. Products: CS5 Turbo / CS6 Turbo II diskette copiers The Turbo series of disk copiers used an Apple II computer but did not use any of Apple's disk hardware. Apple's hardware consisted of a controller card, plugged into an expansion slot, connected to one or two external drive units. Their external drive units used the computer's power supply, which could only handle one drive at a time. ALF's disk controller card similarly plugged inside the Apple, but connected to one or two pairs of drives; each pair had a dedicated power supply so all drives could be operated simultaneously. Two controller cards could be used to operate a total of eight drives. ALF's controller could read and write not only Apple's various disk formats, but formats for other brands of computers as well. Products: The CS5 Turbo system, introduced in 1984, could handle Apple, Atari, Commodore, and TRS-80 disk formats as well as most standard FM 5.25" formats. The CS6 Turbo II system, which used an upgraded controller card, added the popular IBM PC formats and most standard MFM 5.25" formats. This controller card, along with one of the dual-drive units, was also sold for use with the AD8088 Processor Card to allow CP/M-86 and MS-DOS users to read and write disks in both Apple II and IBM PC format; but most units were sold for disk copying purposes. With eight drives, a single Turbo II system could copy 319 Apple II or Commodore 64 disks, 283 Atari disks, 158 single-sided IBM PC or Atari Enhanced disks, 158 Kaypro or TRS-80 disks, or 86 double-sided IBM PC disks per hour (all including complete verification of each copy). The CS6 Turbo II was also available in a version for use with automatic disk loaders, such as the Mountain Computer model 3200. When using an automatic loader, blank disks are fed from a stack in a hopper, rather than manually placed into drives one at a time. The loader has two output bins, one for good copies and one for defective disks. Each Turbo II system could operate one or two automatic loaders. Products: In addition to the standard 48 track-per-inch 5.25" drives, the CS6 was available in a version with 96 track-per-inch drives (quad density) for DEC Rainbow and Tandy 2000 formats. A 3.5" drive version could copy Amiga, Apple Macintosh and Unidisk, IBM PC Convertible, and other standard 3.5" formats. Products: Formatted disks ALF obtained blank disks at very low prices due to the huge quantities needed for its disk copying service. ALF began selling blank disks in bulk packaging, instead of the usual ten-pack boxes, and with some advertising in the major personal computer magazines quickly became a major disk vendor. 
As competition in bulk disk sales began to increase, ALF looked for a way to distinguish its bulk disk products and soon hit upon the idea of selling pre-formatted disks. Normally, the user would have to format each disk before being able to use it in their computer, which was time-consuming. With pre-formatted disks, the user could use the disk in their computer immediately upon receipt. ALF, being a manufacturer of disk copying equipment, had a significant advantage in producing pre-formatted disks over most of the bulk disk vendors. Soon, ALF was formatting and selling millions of disks each year. ALF also formatted disks for some of the major disk manufacturers, such as Nashua Corporation. Products: Quick Copy disk copiers The Quick Copy series was ALF's first copier line to have its own built-in computer, rather than requiring an Apple II computer to function. The first model was introduced in 1987, and copied only standard FM and MFM 5.25" disks (such as the IBM PC formats). Products: All the Quick Copy models had two disk drives, and were very easy to use. When the unit was first turned on, a green light next to the upper drive flashed, indicating the user should insert the "master disk" (the disk the user wants to copy). While the Quick Copy read the master disk into memory, the user could place a blank disk into the lower drive, and the unit would automatically begin making the first copy. When the unit was finished reading the master disk, the green light would go on, indicating the master disk was read into memory OK (or, a red light next to the upper drive would indicate a problem reading the master). The master disk could then be removed, and the user could insert a blank disk. As each copy was finished, in either the upper or lower drive, the unit indicated a good copy with a green light or a defective copy with a red light. The user could simply continue removing all the good copies and inserting new blank disks in each drive to make as many copies as desired. While the user was removing a finished disk and inserting a blank disk in one drive, the other drive could be in the process of copying, unlike systems where the two drives had to be synchronized. Products: A small LCD display showed the number of disks copied, and two "count downs" showing the progress of each drive. The display could also show various menu selections when not copying. Two buttons to the right of the display allowed the user to cycle through and select menu items, and a button to the left of the display could be held to scroll instructions for each menu item across the display. For large volume copying, multiple units could be placed side by side. A single user could operate them and make about 2000 copies per hour. Products: Starting in 1988, ALF began introducing a number of 800-series models of Quick Copys. Externally, the 800-series models looked the same as the 701 model, but internally the controlling electronics were completely different. Models were available with various 5.25" or 3.5" drives; together, the various models could copy virtually every disk format in use by any personal computer. In the final model (model 832), introduced in 1992, each of the two drives contained both a 5.25" drive and a 3.5" drive; the user could use either the two 5.25" drives or the two 3.5" drives and thus copy a wide variety of disk formats. Products: 100-Series autoloader controllers In 1989, ALF used the electronics of the 800-series to create the 100-series of copying controllers for automatic loaders. 
Each 100-series unit operated a single automatic loader, but it could copy both sides of a double-sided disk simultaneously (all of ALF's earlier copiers, and all drives used in computers, could only read or write to one side of the disk at any given moment) and could handle drives that spin twice as fast as normal. Virtually every manufacturer of automatic loaders in the U.S. offered at least one product using ALF's 100-series (companies included Ashby Industries, CopyMaster, Costas Systems, MissionSix, Rimage Corporation, Trace Corporation, and Victory Enterprises). Most built the 100-series electronics directly inside their loader, but ALF also offered a compact metal enclosure that could be located on top of the loader or nearby. Products: Pro-Series autoloader controllers By the 1990s, there were many large disk-copying services, and most of these already had ample installations of copying equipment (purchased from ALF and from other manufacturers). It was also becoming clear that floppy disks were going to be replaced by other media. ALF decided to design the next generation of automatic loader controllers, the Pro-Series, to be a very powerful design that could handle large numbers of automatic loaders with the latest high-speed simultaneous-double-side drives but at a very low cost per capability. The assumption was copying services would be adding equipment only as needed to replace existing equipment as it wore out. The days of massive expansion in production were over, and they would be looking for very low-cost products. Products: The Pro-Series was designed so a single controller could handle eight automatic loaders, each with a very high speed drive. Additionally, multiple units could easily be connected with an Ethernet-like twisted-pair cable so that one unit could load the master for use by hundreds of units, and the hundreds of units could all be controlled from one unit's keyboard and display. The use of large-scale gate arrays, which could later be converted to application-specific integrated circuits if sales volumes permitted, allowed all the features and speed of the highest-cost copying equipment to be included in a relatively inexpensive circuit board. Products: Most of the design work for the Pro-Series was done prior to ALF being acquired by Rimage Corporation in October 1993. The design was completed at Rimage by former ALF employees after the acquisition.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bruss–Duerinckx theorem** Bruss–Duerinckx theorem: The theorem of the envelopment of societies for resource-dependent populations, also called the Bruss–Duerinckx theorem, is a mathematical result on the behavior of populations that choose their society form according to only two hypotheses that they see as most "natural": Hypothesis 1 (H1): Individuals want to survive and see a future for their descendants. Hypothesis 2 (H2): The average individual prefers a higher standard of living to a lower one. H1 is supposed to take precedence over H2 in the case of incompatibility. Here populations with a society structure are modeled by so-called resource-dependent branching processes (RDBPs). Bruss–Duerinckx theorem: The objective of RDBPs is to model different society structures and to compare the advantages and disadvantages of different societies, with the focus being on human societies. A RDBP is a discrete time branching process (BP) in which individuals are supposed to have to work to be able to live and to reproduce. The population decides on a current society form by a policy, that is, a prescription of rules for how available resources are to be distributed among the individuals. Policies may change over the generations by the interaction of individuals. Adapting the model to reality: To model a human society, a RDBP incorporates, apart from an initial state (the number of ancestors at time 0), individual demands for resources (standard of living), creation (production) of new resources for the next generation (including non-consumption and heritage of resources), a policy to distribute resources, and a control option for individuals interacting with the society. For simplicity the reproduction in a RDBP is modeled as being asexual, but the option to replace the mean reproduction rate by the so-called average reproduction rate of mating units (see Galton-Watson processes and ref. 1984) makes it possible to show that the main results given below are not affected by this simplification. Adapting the model to reality: Formally, a RDBP is a stochastic process Γ defined on the non-negative integers, which is a BP defined by: an initial state Γ0; a law of reproduction of individuals (asexual); a law of individual creation of resources; a law of individual resource demands (claims); a policy to distribute available resources to individuals in the population; and a tool of interaction between individuals and society. Tractable RDBPs: Models for the development of a human society in time must allow for interdependence between the different components. Such models are in general very complicated. Crucial in the development of the results was the idea not to try to model the development of a society with a (single) realistic RDBP but rather with a sequence of RDBPs respecting H1 and H2, that is, by control actions defining at each time of control a relevant short-horizon RDBP. Thus RDBPs serve as locally defined models for the short-term behavior of a society whereas the evolution of a society is seen as a sequence of RDBPs controlled by the interaction of individuals. The tool of interaction for the individuals within each generation is the option to emigrate before reproduction (generating children) if their individual resource claims are not met by the current society form. Emigration can here be replaced by other options of protest. Special policies: It turns out that two special policies stand out as guidelines for the development of any society. 
These are the so-called weakest-first policy (wf-policy) and the so-called strongest-first policy (sf-policy), as defined for resource-dependent branching processes. It can be argued that the wf-society shares important features of an extreme form of communism, whereas the sf-society can similarly be interpreted as an extreme form of capitalism. Special policies: Let m denote the mean reproduction (descendants) per individual, r the mean production (resource creation) per individual, and F the individual probability distribution of claims (resources). Then, using a result on the behavior of stopping times of sums of order statistics (ref. 1991), the survival criteria can be explicitly computed for both the wf-society and the sf-society as functions of m, r and F. Main result: The theorem of the envelopment of societies says, informally, that the long-run size (effective) of any population respecting H1 and H2 is enveloped between the long-run sizes of the corresponding wf-society and sf-society with the same m, r and F. Remark on the proof: Intuitive arguments for why the above theorem should be true are only partially correct and are sometimes completely wrong (explicit counterexamples exist). This is why this result has attracted much attention. The mathematical proof does not use new mathematical methods but is subtle. Apart from a classical result on so-called complete convergence, it is mainly based on theorems for stopping times on sums of independent and identically distributed order statistics (ref. 1991) and fine-tuned balancing acts between model assumptions and convergence in probability and almost sure convergence. Impact: The theorem allows for several conclusions, but the most challenging one is arguably the following. If one sees RDBPs with the two natural hypotheses as being an acceptable model, then the wf-policy and the sf-policy (arguably seen as an idealized or extreme form of communism and an extreme form of capitalism, respectively) both play a particular role. They are both guidelines for any human society following the natural hypotheses. They cannot be stable societies: extreme communism cannot be stable because individuals would like to move towards an increased standard of living, that is, towards H2; extreme capitalism cannot be stable because, unless resources are abundant, it would either die out or be quickly outnumbered by competing societies streaming into the vacuum. Impact: However, both form in the long run (in terms of the effective size of populations) an envelope of any society, however sophisticated its policy may be.
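To make the model concrete, the following Python sketch simulates one toy resource-dependent branching process under the two extreme policies. It illustrates the model only, not the theorem or its proof: the Poisson reproduction law, the exponential claim distribution standing in for F, the deterministic production r per individual, and all numeric parameters are assumptions chosen purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def generation(pop, m, r, mean_claim, policy):
    """One generation of a toy RDBP: individuals produce resources, claims are
    served in policy order ('wf' = smallest claims first, 'sf' = largest first),
    unserved individuals emigrate, and served individuals reproduce."""
    if pop == 0:
        return 0
    resources = pop * r                                   # total resources created
    claims = np.sort(rng.exponential(mean_claim, size=pop))
    if policy == 'sf':
        claims = claims[::-1]                             # strongest (largest) claims first
    served = int(np.searchsorted(np.cumsum(claims), resources, side='right'))
    return int(rng.poisson(m, size=served).sum())         # offspring of served individuals

def survives(policy, generations=200, start=50, m=1.6, r=0.8, mean_claim=1.0):
    pop = start
    for _ in range(generations):
        pop = generation(pop, m, r, mean_claim, policy)
        if pop == 0:
            return False
        if pop > 100_000:                                 # treat sustained growth as survival
            return True
    return True

# Compare the two extreme policies under identical (toy) parameters.
print("wf survives:", survives('wf'), "| sf survives:", survives('sf'))
```

With these particular toy parameters, runs typically show the wf-society surviving while the sf-society dies out, illustrating that the survival behaviour of the two extreme policies can differ for the same m, r and F.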
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Antenna analyzer** Antenna analyzer: An antenna analyzer (British English: aerial analyser; also known as a noise bridge, RX bridge, SWR analyzer, or RF analyzer) is a device used for measuring the input impedance of antenna systems in radio electronics applications. Antenna analyzer: In radio communications systems, including amateur radio, an antenna analyzer is a common tool used for fine tuning antenna and feedline performance, as well as troubleshooting them. Antenna bridges have long been used in the broadcast industry to tune antennas. A bridge is available which measures complex impedance while the transmitter is operating, practically a necessity when tuning multi-tower antenna systems. In more recent times, direct-reading network analyzers have become more common. Types of analysers: There are several different instruments of varying complexity and accuracy for testing antennas and their feed lines. All can also be used to measure other electrical circuits and components (at least, in principle). The simplest is an SWR meter, which only indicates the degree of mismatch; the actual mismatched impedance must be inferred by measuring several nearby frequencies and performing a few simple calculations. The SWR meter requires a transmitter or signal generator to provide a few watts of power as a test signal. An antenna bridge is able to measure at low power, but also requires a supplied test signal; depending on the bridge circuit, it can be used to measure both reactance and resistance by reading values marked on knobs that have been adjusted for a match. The noise bridge and network analyzers both supply their own very low-power test signals; both are able to measure both resistance and reactance, either by calculation or by reading knobs adjusted for a match. Modern analyzers directly display resistance and reactance, with the calculations done internally by a microprocessor. Types of analysers: Antenna bridge A bridge circuit has two legs which are frequency-dependent complex-valued impedances. One leg is a circuit in the analyzer with calibrated components whose combined impedance can be read on a scale. The other leg is the unknown – either an antenna or a reactive component. Types of analysers: To measure impedance, the bridge is adjusted so that the two legs have the same impedance. When the two impedances are the same, the bridge is balanced. Using this circuit it is possible either to measure the impedance of the antenna connected between the ANT and GND terminals, or to adjust an antenna until it has the same impedance as the analyzer's calibrated reference leg. The bridge can be driven either with white noise or a simple carrier (applied to the drive terminal). In the case of white noise, the amplitude of the exciting signal can be very low and a radio receiver used as the detector. Where a simple carrier is used, either a diode detector or a receiver can be used, depending on the signal level. In both cases a null will indicate when the bridge is balanced. Types of analysers: Complex voltage and current meters A second type of antenna analyzer measures the complex voltage across and current into the antenna. The operator then uses mathematical methods to calculate complex impedance, or reads it off a calibrated meter or a digital display. Professional instruments of this type are usually called network analyzers. Modern analyzers do not require the operator to adjust any R and X knobs as with the bridge-type analyzers. 
Many of these instruments have the ability to automatically sweep the frequency over a wide range and then plot the antenna characteristics on a graphical display. Doing this with a manually-operated bridge would be time-consuming, requiring one to change the frequency and adjust the knobs at each frequency for a match. Types of analysers: High and low power methods Many transmitters include an SWR meter in the output circuits, which works by measuring the wave reflected from the antenna back to the transmitter; the reflection is minimal when the antenna is matched. Reflected power from a badly tuned antenna can present an improper load at the transmitter which can damage it. The SWR meter requires about 5–10 watts of outgoing signal from the radio to register the reflected power (if any), and then only indicates the relative degree of mismatch, not the reactive and resistive impedance seen at the end of the antenna's feedline. Types of analysers: A complex-impedance antenna analyzer typically requires only a few milliwatts of power to be applied to the antenna and provides its own signal, not requiring any test signal from a transmitter. Using a low-power test signal avoids damaging the analyzer when testing a badly-matched antenna. In addition, because its signal power is very low, the analyzer can be used for frequencies outside of the transmit bands licensed to its operator, and thus measure antenna performance over an unrestricted range of frequencies.
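As an aside on the relationship between what a modern analyzer displays (resistance R and reactance X) and what an SWR meter shows, the standard transmission-line relations can be applied directly. The short Python sketch below uses the usual formulas for the reflection coefficient and SWR; the 50-ohm reference impedance and the sample antenna impedance are assumed values chosen for illustration.

```python
def swr_from_impedance(z_load, z0=50.0):
    """SWR implied by a complex load impedance on a line of characteristic impedance z0."""
    gamma = (z_load - z0) / (z_load + z0)   # complex reflection coefficient
    rho = abs(gamma)                        # magnitude of the reflection
    return (1 + rho) / (1 - rho) if rho < 1 else float('inf')

# Example: an antenna measuring 42 + j18 ohms at some frequency (made-up value).
z = complex(42.0, 18.0)
print(f"|Gamma| = {abs((z - 50) / (z + 50)):.3f}, SWR = {swr_from_impedance(z):.2f}")
```

This also shows why an SWR meter alone cannot recover R and X: many different complex impedances map to the same SWR value.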
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lexikon der gesamten Technik** Lexikon der gesamten Technik: The Lexikon der gesamten Technik is an illustrated German-language encyclopedia of architectural, engineering and manufacturing technology, written by Otto Lueger (German engineer, 1843–1911) and first published in 1894. Editions: 1st Edition, 7 volumes, 1894–1899 2nd Edition, 8 volumes, 1904–1910 (with two supplements in 1914 and 1920) 3rd Edition, 6 volumes, 1926–1929 (with a separate index volume) 4th Edition, 17 volumes, 1960–1972
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**United States military ration** United States military ration: United States military ration refers to the military rations provided to sustain United States Armed Forces service members, including field rations and garrison rations, and the military nutrition research conducted in relation to military food. U.S. military rations are often made for quick distribution, preparation, and eating in the field and tend to have long storage times in adverse conditions due to being thickly packaged or shelf-stable. History: 18th and 19th centuries From the Revolutionary War to the Spanish–American War, the U.S. Army ration, as decreed by the Continental Congress, was the garrison ration, which consisted of meat or salt fish, bread or hardtack, and vegetables. History: There was also a spirit ration. In 1785, it was set at four ounces of rum, reduced to two ounces of whiskey, brandy, or rum in 1790. In 1794, troops about to enter combat or who were engaged in frontier service could receive a double ration of four ounces of rum or whiskey; this was extended in 1799 to include troops engaged in fatigue duties. It was discontinued in 1832 and replaced with a ration of coffee and sugar, which was increased in 1836. In 1846, a spirit ration was reinstated for issue to troops engaged in construction or surveying duties; this was discontinued in 1865. History: During the American Civil War, both armies struggled to keep their soldiers adequately fed. Difficulties with food logistics led to a multitude of rations. History: World War I In World War I, three types of rations came into use by the U.S. military: the Reserve ration, the Trench ration, and the Emergency ration (also known as the Iron ration). History: "Iron Ration" (1907–1922) The first attempt to make an individual ration for issue to soldiers in the field was the "iron ration", first introduced in 1907. It consisted of three three-ounce cakes (made from a concoction of beef bouillon powder and parched and cooked wheat), three one-ounce bars of sweetened chocolate, and packets of salt and pepper, all issued in a sealed tin packet that weighed one pound. It was designed for emergency use when the troops were unable to be supplied with food. It was discontinued upon the adoption of the "Reserve Ration", but experience with the Iron Ration went into the development of the emergency D-ration. History: "Trench Ration" (1914–1918) This ration was issued in the early part of the war to address a specific problem: soldiers fighting in the front lines needed to be supplied with their daily rations, but cooked food prepared at field kitchens was sometimes spoiled by gas attacks. The trench ration was the answer. It was a variety of canned meats (salmon, corned beef, sardines, etc.) that were commercially procured and sealed in a large tin box covered in canvas. It was bulky and heavy, and the soldiers soon grew weary of the limited menu; it was soon replaced by the Reserve Ration. History: "Reserve Ration" (1917–1937) The reserve ration was first issued during the latter part of World War I to feed troops who were away from a garrison or field kitchen. It originally consisted of 12 ounces of fresh bacon or one pound of canned meat known as the Meat Ration, usually corned beef. Additionally, two 8-ounce cans of hard bread or hardtack biscuits, a packet of 1.16 ounces of pre-ground coffee, a packet of 2.4 ounces of granulated sugar, and a packet of 0.16 ounces of salt were issued. 
There was also a separate "tobacco ration" of 0.4 ounces of tobacco and 10 cigarette rolling papers, later replaced by brand-name machine-rolled cigarettes. History: After the war, there were attempts to improve the ration based on input from the field. In 1922, the Meat Ration was revised, consisting of one pound of meat (usually a combination of dried beef and canned corned beef). This was supplemented by hard chocolate, 14 ounces of hard bread or hardtack biscuits, coffee, and sugar. In 1925, the Meat Ration was changed, removing the dried beef in favor of canned pork and beans, and reducing the bread component. The corned beef allowance was also reduced in size (older rations continued to be issued, however). In 1936, menu planners attempted to introduce more variety by developing an alternate Meat Ration consisting of an "A"-menu (canned corned beef) and a "B"-menu (canned pork and beans). The A and B Reserve or combat ration was canceled after being superseded in 1938 by the Field Ration, Type C. History: World War II After 1918, the army ration system went through several revisions, eventually leading to the: A-ration: Garrison ration. Fresh, refrigerated, or frozen food prepared in dining halls or field kitchens. The most valued of all rations. B-ration: Field ration. Canned, packaged, or preserved foods normally prepared in field kitchens without refrigeration. C-ration: Individual ration. A complete pre-cooked, ready-to-eat canned individual meal. K-ration: Individual ration. Designed as a short duration individual "assault" ration for paratroopers and other specialized light infantry forces. Declared obsolete in 1948. History: D-ration: Emergency ration. Bars of concentrated chocolate combined with other ingredients to provide high calorie content (intended as an emergency ration).A-rations were generally whatever meat and produce could be obtained locally, so there could be great variety from one theatre of operations to the next. B-rations were generally used when there was inadequate refrigeration for perishable A-rations. The composition of the D-ration did not change much throughout the war, but the C-ration developed many variations. History: A- and B-rations were only served at bases or established camps in rear areas as they require cooking. C-rations could be eaten hot or cold and required no special preparation or storage, so these could be served almost anywhere. History: During the war a new ration for assault troops, the 2,830 calories (11,800 kJ) K-ration, was developed. 
K-rations were originally intended to be used as short-duration rations for only 2–3 days, but cost concerns and later standardization led to their overuse, contributing in some cases to vitamin deficiencies and malnourishment. There were various other special rations developed for specific circumstances, including: the Type X Ration; the 5-in-1 ration; the 10-in-1 food parcel; the Mountain ration (4,800 calories (20,000 kJ), discontinued 1943); the Jungle ration (4,000 calories (17,000 kJ), discontinued 1943); the Assault Lunch (chocolate bars, caramels, dried fruit, chewing gum, peanuts, salt tablets, cigarettes, matches, and water purification tablets; a total of 1,500–2,000 calories (6,300–8,400 kJ), discontinued 1947); the Assault ration (Pacific Theater; 28 pieces of assorted hard candy, chewing gum, cigarettes and a chocolate peanut bar); the Aircrew Lunch; the AAF Combat Lunch; the Parachute Emergency Ration; the Liferaft Ration; and the Airborne Lifeboat Ration. Cold War rations: Some of these specialized rations were discontinued during the war due to cost concerns, forcing commanders to adopt standardized rations in their place. The K- and D-rations were declared obsolete after World War II, but canned wet rations in the form of the C-ration (later the MCI) continued until 1983, when they were replaced by the Meal, Ready-to-Eat (MRE). Created during this era was the T-ration (or "T-rat"), a semi-perishable meal packaged, heated, and served in a tray pack similar to frozen meals. Present-day issuance: Currently, the following rations are available to troops: Unitized Group Ration: standard group ration for field kitchens and garrisons, succeeding the A-, B-, and T-rations; UGR-H&S (Heat & Serve): precooked, shelf-stable meals designed to be heated and served in less than an hour; UGR-A: fresh meals prepared on-site (or nearby and transported), requiring refrigeration and a field kitchen; UGR-M (formerly UGR-B): meals, mostly dehydrated or preserved, intended for the U.S. Marine Corps; UGR-E (Express): meals in self-heating packages designed to feed groups where field kitchens are unavailable; Navy Standard Core Menu: meals served to U.S. Navy personnel aboard navy vessels, considered a group ration but not part of the UGR system; Meal, Ready-to-Eat: standard individual field ration; First Strike Ration: individual ration designed for use in combat or while moving; Soldier Fuel: specialized energy bar found in some MREs. The composition of these rations is predetermined to ensure that soldiers are properly fed and supplied with the nutrients needed to maximize energy levels. Nutrient levels are recommended by the Military Nutrition Research Committee; as a baseline, each ration should provide at least 2,400 kcal, built from varying amounts of numerous vitamins and nutrients. Ration waste disposal: The military prioritizes burn disposal of ration waste because improper disposal can lead to further problems. Proper disposal eliminates vermin problems and airborne diseases, and prevents enemy militaries from obtaining the waste to use as a resource or as intelligence.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Anorak (slang)** Anorak (slang): "Anorak" is a British slang term which refers to a person who has a very strong interest, perhaps obsessive, in niche subjects. This interest may be unacknowledged or not understood by the general public. The term is sometimes used synonymously with "geek" or "nerd", or the Japanese term "otaku", albeit referring to different niches. Etymology: The first use of the term to describe an obsessive fan has been credited to the radio presenter Andy Archer, who used the term in the early 1970s for fans of offshore radio, who would charter boats to come out to sea to visit the radio ships.In 1983, the first edition of the Anoraks UK Weekly Report was published, featuring news of pirate radio broadcasts. Etymology: In 1984 the Observer newspaper used the term as a metonym for the prototype group interested in detailed trivia, the trainspotters, as members of this group often wore unfashionable but warm cagoules or parkas called "anoraks" when standing for hours on station platforms or along railway tracks, noting down details of passing trains.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bid price** Bid price: A bid price is the highest price that a buyer (i.e., bidder) is willing to pay for some goods. It is usually referred to simply as the "bid". In bid and ask, the bid price stands in contrast to the ask price or "offer", and the difference between the two is called the bid–ask spread. An unsolicited bid or purchase offer is when a person or company receives a bid even though they are not looking to sell. Bidding war: A bidding war is said to occur when a large number of competing bids are placed in rapid succession by two or more entities, especially when the price paid is much greater than the ask price, or greater than the first bid in the case of unsolicited bidding. In other words, a bidding war is a situation where two or more buyers are so interested in an item (such as a house or a business) that they make increasingly higher-priced offers in attempts to outbid others and win ownership of the item. Bidding war: In real estate, a potential buyer can strengthen their bid in a number of different ways. Common approaches include offering a higher purchase price, reducing the number of contingencies, paying with cash, or writing a letter to appeal to the seller. These strategies are intended to increase the odds of the buyer winning the bidding war. In the markets: In the context of stock trading on a stock exchange, the bid price is the highest price a buyer of a stock is willing to pay for a share of that given stock. The bid price displayed in most quote services is the highest bid price in the market. The ask or offer price, on the other hand, is the lowest price a seller of a particular stock is willing to accept for a share of that given stock. The ask or offer price displayed is the lowest ask/offer price in the market.
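As a small illustration of the relationship between quoted bids, offers, and the bid–ask spread described above, the following Python sketch (with made-up quote values, not tied to any real market feed) picks out the best bid and best ask and reports the spread.

```python
# Minimal sketch: best bid, best ask, and bid-ask spread from hypothetical quotes.
# The quote values are illustrative only.

bids = [101.20, 101.25, 101.15]   # prices buyers are willing to pay
asks = [101.40, 101.35, 101.45]   # prices sellers are willing to accept

best_bid = max(bids)              # highest price any buyer will pay
best_ask = min(asks)              # lowest price any seller will accept
spread = best_ask - best_bid      # the bid-ask spread

print(f"best bid = {best_bid:.2f}")
print(f"best ask = {best_ask:.2f}")
print(f"spread   = {spread:.2f}")
```

Quote services that display a single bid and ask are, in effect, reporting exactly these two extremes of the current quotes.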
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thyroglossal cyst** Thyroglossal cyst: A thyroglossal cyst is a fibrous cyst that forms from a persistent thyroglossal duct. Thyroglossal cysts can be defined as an irregular neck mass or a lump which develops from cells and tissues left over after the formation of the thyroid gland during developmental stages.Thyroglossal cysts are the most common cause of midline neck masses and are generally located caudal to (below) the hyoid bone. These neck masses can occur anywhere along the path of the thyroglossal duct, from the base of the tongue to the suprasternal notch. Other common causes of midline neck masses include lymphadenopathy, dermoid cysts, and various odontogenic anomalies.Thyroglossal cysts develop at birth. Many diagnostic procedures may be used to establish the degree of the cyst. Signs and symptoms: Thyroglossal duct cysts most often present with a palpable asymptomatic midline neck mass usually below [65% of the time] the level of the hyoid bone. The mass on the neck moves during swallowing or on protrusion of the tongue because of its attachment to the tongue via the tract of thyroid descent. Some patients will have neck or throat pain, or dysphagia.The persistent duct or sinus can promote oral secretions, which may cause cysts to become infected. Up to half of thyroglossal cysts are not diagnosed until adult life. The tract can lie dormant for years or even decades, until some kind of stimulus leads to cystic dilation. Infection can sometimes cause the transient appearance of a mass or enlargement of the cyst, at times with periodic recurrences. Spontaneous drainage may also occur. Differential diagnosis are ectopic thyroid, enlarged lymph nodes, dermoid cysts and goiter.Thyroglossal cyst usually presents as a midline neck lump (in the region of the hyoid bone) that is usually painless, smooth and cystic, though if infected, pain can occur. There may be difficulty breathing, dysphagia (difficulty swallowing), or dyspepsia (discomfort in the upper abdomen), especially if the cyst becomes large.The most common location for a thyroglossal cyst is midline or slightly off midline, between the isthmus of the thyroid and the hyoid bone or just above the hyoid bone. A thyroglossal cyst can develop anywhere along a thyroglossal duct, though cysts within the tongue or in the floor of the mouth are rare.A thyroglossal cyst will move upwards with protrusion of the tongue.Thyroglossal cysts are associated with an increased incidence of ectopic thyroid tissue. Occasionally, a lingual thyroid can be seen as a flattened strawberry-like lump at the base of the tongue. Signs and symptoms: Complications Infection An infected thyroglossal duct cyst can occur when it is left untreated for a certain amount of time or simply when a thyroglossal duct cyst hasn't been suspected. The degree of infection can be examined as major rim enhancement has occurred, located inferior to the hyoid bone. Soft tissue swelling occurs, along with airway obstruction and trouble swallowing, due to the rapid enlargement of the cyst. With infections, there can be rare cases where an expression of fluid is projected into the pharynx causing other problems within the neck. Signs and symptoms: Thyroglossal Fistula A thyroglossal duct cyst may rupture unexpectedly, resulting in a draining sinus known as a thyroglossal fistula. Thyroglossal fistula can develop when the removal of the cyst has not been fully completed. 
This is usually noticed when bleeding in the neck occurs, causing swelling and fluid ejection around the original wound of removal. Thyroglossal duct cyst carcinoma Rarely (in less than 1% of cases), cancer may be present in a thyroglossal duct cyst. These tumors are generally papillary thyroid carcinomas, arising from the ectopic thyroid tissue within the cyst. Causes: Thyroglossal Duct Cysts are a birth defect. During embryonic development, the thyroid gland is being formed, beginning at the base of the tongue and moving towards the neck canal, known as the thyroglossal duct. Once the thyroid reaches its final position in the neck, the duct normally disappears. In some individuals, portions of the duct remain behind, leaving small pockets, known as cysts. During a person's life, these cyst pockets can fill with fluids and mucus, enlarging when infected, presenting the thyroglossal cyst. Embryology: The thyroglossal tract arises from the foramen cecum at the junction of the anterior two-thirds and posterior one-third of the tongue. Any part of the tract can persist, causing a sinus, fistula or cyst. Most fistulae are acquired following rupture or incision of the infected thyroglossal cyst. A thyroglossal cyst is lined by pseudostratified, ciliated columnar epithelium while a thyroglossal fistula is lined by columnar epithelium. Diagnosis: Diagnosis of a thyroglossal duct cyst requires a medical professional, and is usually done by a physical examination. It is important to identify whether or not the thyroglossal cyst contains any thyroid tissue, as it can define the degree of cyst that is being dealt with.Diagnostic procedures for a thyroglossal cyst include: Clinical features Clinical features can be found in the subhyoid portion of the tract and 75% present as midline swellings. The remainder can be found as far lateral as the lateral tip of the hyoid bone.Typically, the cyst will move upwards on protrusion of the tongue, given its attachment to the embryonic duct, as well as on swallowing, due to attachment of the tract to the foramen caecum. Treatment: Although generally benign, the cyst must be removed if the patient exhibits difficulty in breathing or swallowing, or if the cyst is infected. Even if these symptoms are not present, the cyst may be removed to eliminate the chance of infection or development of a carcinoma,Thyroid scans and thyroid function studies are ordered preoperatively; this is important to demonstrate that normally functioning thyroid tissue is in its usual area.Surgical management options include the Sistrunk procedure, en bloc central neck dissection, suture-guided transhyoid pharyngotomy, and Koempel's supra-hyoid technique. Cystectomy is an inadequate approach. Treatment: Sistrunk Procedure The Sistrunk procedure is the surgical resection of the central portion of the hyoid bone along with a wide core of tissue from the midline area between the hyoid and foramen cecum. It involves excision not only of the cyst but also of the path's tract and branches, and removal of the central portion of the hyoid bone is indicated to ensure complete removal of the tract. 
The original Sistrunk papers (the "classic" procedure described in 1920, and the "modified" procedure described in 1928) are available on-line with a modern commentary. In general, the procedure consists of three steps: incision; resection of the cyst and hyoid bone; and drainage and closure. There are several versions of the Sistrunk procedure, including: "classic": excision of the center of the hyoid bone along with a thyroglossal duct cyst, removal of a one-eighth inch diameter core of tongue muscle superior to the hyoid at a 45 degree angle up to the foramen cecum to include mucosa, removal of one-quarter inch of the center of the hyoid bone, closure of the cut ends of the hyoid bone, and placement of a drain. Treatment: modified: dissection through the tongue base but not through the mucosa. The modified Sistrunk procedure is the procedure of choice in both primary and revision cases. Treatment: hyoid cartilage division: In cases without mature ossification of the hyoid bone, the non-fused cartilage portion can be divided by monopolar Bovie electro-cauterization or scissors. There were no statistical differences between this modified Sistrunk and the conventional Sistrunk procedure. The procedure is relatively safe. In a study of 35 pediatric patients, Maddalozzo et al. found no major complications, but did observe minor complications (6 patients presented with seroma and 4 patients with local wound infections). A more recent paper analyzed 24 research studies on different treatment complications of thyroglossal cyst, and reported a total minor complications rate of 6% for the Sistrunk operation (classical or modified) and simple cystectomy treatment modalities. The Sistrunk procedure also showed better outcomes concerning overall recurrence, i.e. it has the lowest rate of recurrence. The Sistrunk procedure results in a 95% cure rate and 95–100% long-term survival. Epidemiology: 90% of cases present in children before the age of 10; 70% of neck anomalies are thyroglossal cysts; thyroglossal duct cysts are the second most common neck abnormality after lymphadenopathy; a person can live with a thyroglossal duct cyst without any problems until a pathology develops; approximately 7% of the population has thyroglossal duct remnants; thyroglossal duct carcinoma occurs in approximately 1 to 2% of thyroglossal cyst cases.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Geosmin synthase** Geosmin synthase: Geosmin synthase or germacradienol-geosmin synthase designates a class of bifunctional enzymes (EC 4.1.99.16) that catalyze the conversion of farnesyl diphosphate (FPP) to geosmin, a volatile organic compound known for its earthy smell. The N-terminal half of the protein catalyzes the conversion of farnesyl diphosphate to germacradienol and germacrene D, followed by the C-terminal-mediated conversion of germacradienol to geosmin. The conversion of FPP to geosmin was previously thought to involve multiple enzymes in a biosynthetic pathway. Species distribution: Geosmin is found in a wide variety of microbes such as cyanobacteria and actinobacteria. Geosmin has also been found in myxobacteria, fungi, arthropods, and plants such as beets. Based on studies performed on a geosmin synthase (encoded by SCO6073) in Streptomyces coelicor and the high sequence similarity between this and other known or putative geosmin synthases (45-78% identity), it has been hypothesized that all geosmin synthases function in the same manner. Screening of available bacterial genomic data has resulted in the prediction of at least 55 putative geosmin synthases in this domain of prokaryotic organisms. Function and mechanism: Two distinct active sites Geosmin synthase is approximately 726 amino acids in length and has two distinctive active sites on the N-terminal and C-terminal halves, respectively (in S. coelicor the N-terminal domain consists of amino acids 1-319 while the C-terminal domain exists from 374-726), both of which resembling the sesquiterpene synthase pentalenene synthase. Both the N- and C-terminal halves of the synthase contain aspartate-rich domains (DDHFLE and DDYYP, respectively) and the NSE amino acid motif (NDLFSYQRE and NDVFSYQKE, respectively), which bind trinuclear magnesium. Magnesium is a necessary cofactor, without which the synthase displays a complete lack of catalytic activity.In experiments where FPP was incubated with recombinant geosmin synthase, increasing the concentration of the synthase or increasing the incubation time resulted in an absolute and relative increase of geosmin compared to the intermediate germacradienol; this shows that geosmin synthase does not act exclusively on a series of enzyme-bound intermediates. Instead, germacradienol is released from the N-terminal domain and then rebinds to the C-terminal domain for final conversion to geosmin.Targeted mutagenesis of the N-terminal magnesium binding sites resulted in an enzyme incapable of converting FPP to germacradienol and germacrene D. Targeted mutagenesis of the C-terminal magnesium-binding sites resulted in an enzyme incapable of catalyzing the second half of the reaction from germacradienol to geosmin, but still capable of converting FPP to germacradienol and germacrene D. Truncated mutants of only the N-terminal or C-terminal halves of the geosmin synthase are also capable of catalyzing their respective reactions, providing further evidence that the N- and C-terminal halves of geosmin synthase are in essence two distinct and independent enzymes. Function and mechanism: N terminal repeat The N-terminal half of geosmin synthase contains a second NSE magnesium-binding motif, approximately 38 residues downstream of the first. Targeted mutagenesis of this repeated NSE motif does not significantly alter the catalytic activity of the synthase, suggesting that it does not serve any functional role. 
This repeated downstream motif is well-conserved in other known or putative geosmin synthases, suggesting that it either has a role that has not yet been discovered or may be a remnant of evolutionary development. Function and mechanism: Proposed mechanism The first step in the mechanism is for FPP's carbon-carbon double bond farthest from the diphosphate group to attack the carbon adjacent to the diphosphate, forming a cyclic carbocation with the loss of the diphosphate group. A 1,3 hydride shift moves the carbocation closer to the nearest carbon-carbon double bond; the loss of a proton forms a new carbon-carbon double bond and allows the carbon-carbon double bond adjacent to the carbocation to quench this charged group, forming the byproduct germacrene D. Alternatively, the carbocation produced in the first step can immediately lose a proton to form the intermediate isolepidozene, which is subsequently attacked by water to form germacradienol. Further processing of germacradienol involves a proton-initiated cyclization and a novel retro-Prins-type fragmentation producing the intermediate octalin and byproduct acetone. Finally, protonation, a 1,2 hydride shift, and quenching by water convert octalin to geosmin. Industrial importance: Geosmin has a very low detection threshold in humans of ~10-100 parts per trillion. Geosmin produced by various microbes can contaminate water supplies, degrading consumer confidence and decreasing water utility performance. One action taken to treat geosmin contaminated water supplies is the addition of copper sulfate, which is controversial due to possible environmental effects.Studies attempting to link the expression of geosmin synthase to various environmental conditions (e.g., light and temperature) have shown synthase production to be correlated with cell growth but not significantly affected by diurnal cycles. Geosmin production is also correlated to the availability of substrate, as demonstrated by the deletion of pathways competing for precursors to FPP, which led to an increase in geosmin production. The growing body of knowledge on geosmin synthase and its conserved and functionally important components have led to the development of a DNA PCR screen that may allow for better detection of geosmin synthase containing microorganisms, potentially allowing for better control of geosmin production and contamination in water supplies.
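Because the conserved aspartate-rich and NSE motifs quoted above are what distinguish known and putative geosmin synthases, a very crude in-silico screen can simply check candidate protein sequences for those motifs. The Python sketch below is only an illustration of that idea (the candidate sequence is hypothetical); real screens rely on full sequence alignment or the PCR-based assay mentioned above rather than exact string matching.

```python
import re

# Conserved motifs reported for geosmin synthase (taken from the text above).
MOTIFS = {
    "N-terminal Asp-rich": "DDHFLE",
    "C-terminal Asp-rich": "DDYYP",
    "N-terminal NSE":      "NDLFSYQRE",
    "C-terminal NSE":      "NDVFSYQKE",
}

def screen(protein_seq: str) -> dict:
    """Report which conserved motifs occur in a candidate protein sequence."""
    return {name: bool(re.search(motif, protein_seq)) for name, motif in MOTIFS.items()}

# Hypothetical candidate sequence fragment (illustrative only, not a real protein).
candidate = "MSTQPVDDHFLERKLNDLFSYQREAAGTWDDYYPLQNDVFSYQKEGG"
print(screen(candidate))
```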
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Iron cycle** Iron cycle: The iron cycle (Fe) is the biogeochemical cycle of iron through the atmosphere, hydrosphere, biosphere and lithosphere. While Fe is highly abundant in the Earth's crust, it is less common in oxygenated surface waters. Iron is a key micronutrient in primary productivity, and a limiting nutrient in the Southern ocean, eastern equatorial Pacific, and the subarctic Pacific referred to as High-Nutrient, Low-Chlorophyll (HNLC) regions of the ocean.Iron exists in a range of oxidation states from -2 to +7; however, on Earth it is predominantly in its +2 or +3 redox state and is a primary redox-active metal on Earth. The cycling of iron between its +2 and +3 oxidation states is referred to as the iron cycle. This process can be entirely abiotic or facilitated by microorganisms, especially iron-oxidizing bacteria. The abiotic processes include the rusting of iron-bearing metals, where Fe2+ is abiotically oxidized to Fe3+ in the presence of oxygen, and the reduction of Fe3+ to Fe2+ by iron-sulfide minerals. The biological cycling of Fe2+ is done by iron oxidizing and reducing microbes.Iron is an essential micronutrient for almost every life form. It is a key component of hemoglobin, important to nitrogen fixation as part of the Nitrogenase enzyme family, and as part of the iron-sulfur core of ferredoxin it facilitates electron transport in chloroplasts, eukaryotic mitochondria, and bacteria. Due to the high reactivity of Fe2+ with oxygen and low solubility of Fe3+, iron is a limiting nutrient in most regions of the world. Ancient earth: On the early Earth, when atmospheric oxygen levels were 0.001% of those present today, dissolved Fe2+ was thought to have been a lot more abundant in the oceans, and thus more bioavailable to microbial life. Iron sulfide may have provided the energy and surfaces for the first organisms. At this time, before the onset of oxygenic photosynthesis, primary production may have been dominated by photo-ferrotrophs, which would obtain energy from sunlight, and use the electrons from Fe2+ to fix carbon.During the Great Oxidation Event, 2.3-2.5 billion years ago, dissolved iron was oxidized by oxygen produced by cyanobacteria to form iron oxides. The iron oxides were denser than water and fell to the ocean floor forming banded iron formations (BIF). Over time, rising oxygen levels removed increasing amounts of iron from the ocean. BIFs have been a key source of iron ore in modern times. Terrestrial ecosystems: The iron cycle is an important component of the terrestrial ecosystems. The ferrous form of iron, Fe2+, is dominant in the Earth's mantle, core, or deep crust. The ferric form, Fe3+, is more stable in the presence of oxygen gas. Dust is a key component in the Earth's iron cycle. Chemical and biological weathering break down iron-bearing minerals, releasing the nutrient into the atmosphere. Changes in hydrological cycle and vegetative cover impact these patterns and have a large impact on global dust production, with dust deposition estimates ranging between 1000 and 2000 Tg/year. Aeolian dust is a critical part of the iron cycle by transporting iron particulates from the Earth's land via the atmosphere to the ocean.Volcanic eruptions are also a key contributor to the terrestrial iron cycle, releasing iron-rich dust into the atmosphere in either a large burst or in smaller spurts over time. The atmospheric transport of iron-rich dust can impact the ocean concentrations. 
Oceanic ecosystem: The ocean is a critical component of the Earth's climate system, and the iron cycle plays a key role in ocean primary productivity and marine ecosystem function. Iron limitation has been known to limit the efficiency of the biological carbon pump. The largest supply of iron to the oceans is from rivers, where it is suspended as sediment particles. Coastal waters receive inputs of iron from rivers and anoxic sediments. Other major sources of iron to the ocean include glacial particulates, atmospheric dust transport, hydrothermal vents, and volcanic ash. Iron supply is an important factor affecting growth of phytoplankton, the base of the marine food web. Offshore regions rely on atmospheric dust deposition and upwelling. In offshore regions, bacteria also compete with phytoplankton for uptake of iron. In HNLC regions, iron limits the productivity of phytoplankton. Oceanic ecosystem: Most commonly, iron is available to phytoplankton as an inorganic source; however, organic forms of iron can also be used by specific diatoms, which use a surface reductase mechanism. Uptake of iron by phytoplankton leads to the lowest iron concentrations in surface seawater. Remineralization occurs when the sinking phytoplankton are degraded by zooplankton and bacteria. Upwelling recycles iron and causes higher deep water iron concentrations. On average there is 0.07±0.04 nmol Fe kg−1 at the surface (<200 m) and 0.76±0.25 nmol Fe kg−1 at depth (>500 m). Therefore, upwelling zones contain more iron than other areas of the surface oceans. Soluble iron in ferrous form is bioavailable, and it commonly comes from aeolian sources. Iron is primarily present in particulate phases as ferric iron, and the dissolved iron fraction is removed from the water column by coagulation. For this reason, the dissolved iron pool turns over rapidly, in around 100 years. Interactions with other elemental cycles: The iron cycle interacts significantly with the sulfur, nitrogen, and phosphorus cycles. Soluble Fe(II) can act as the electron donor, reducing oxidized organic and inorganic electron acceptors, including O2 and NO3, and becoming oxidized to Fe(III). The oxidized form of iron can then be the electron acceptor for reduced sulfur, H2, and organic carbon compounds. This returns the iron to the reduced Fe(II) state, completing the cycle. The transition of iron between Fe(II) and Fe(III) in aquatic systems interacts with the freshwater phosphorus cycle. With oxygen in the water, Fe(II) gets oxidized to Fe(III), either abiotically or by microbes via lithotrophic oxidation. Fe(III) can form iron hydroxides, which bind tightly to phosphorus, removing it from the bioavailable phosphorus pool and limiting primary productivity. In anoxic conditions, Fe(III) can be reduced and used by microbes as the final electron acceptor for electrons from either organic carbon or H2. This releases the phosphorus back into the water for biological use. The iron and sulfur cycles can interact at several points. Purple sulfur bacteria and green sulfur bacteria can use Fe(II) as an electron donor during anoxic photosynthesis. Sulfate-reducing bacteria in anoxic environments can reduce sulfate to sulfide, which then binds to Fe(II) to create iron sulfide, a solid mineral that precipitates out of water and removes the iron and sulfur. The iron, phosphate, and sulfur cycles can all interact with each other.
Sulfide can reduce Fe(III) from iron that is already bound to phosphate when there are no more metal ions available, which releases the phosphate and creates iron sulfide. Iron plays an important role in the nitrogen cycle, aside from its role as part of the enzymes involved in nitrogen fixation. In anoxic conditions, Fe(II) can donate an electron that is accepted by NO3−, which is reduced to several different nitrogen compounds (NO2−, N2O, N2, and NH4+), while Fe(II) is oxidized to Fe(III). Anthropogenic influences: Human impact on the iron cycle in the ocean is due to dust concentrations increasing at the beginning of the industrial era. Today, there is approximately double the amount of soluble iron in the oceans compared with pre-industrial times, from anthropogenic pollutants and soluble iron combustion sources. Changes in human land-use activities and climate have augmented dust fluxes, which increases the amount of aeolian dust reaching open regions of the ocean. Other anthropogenic sources of iron are due to combustion. The highest combustion rates of iron occur in East Asia, which contributes 20–100% of ocean depositions around the globe. Humans have also altered the nitrogen cycle through fossil fuel combustion and large-scale agriculture. The increased iron and nitrogen inputs raise marine nitrogen fixation in the subtropical North and South Pacific Ocean. In the subtropics, tropics and HNLC regions, increased inputs of iron may lead to increased CO2 uptake, impacting the global carbon cycle.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Citra (emulator)** Citra (emulator): Citra is a free and open-source emulator of the handheld Nintendo 3DS for Windows, macOS, Linux, and Android. Citra's name is derived from CTR, which is the model name of the original 3DS. Citra can run many homebrew games and commercial games.Citra was first made available in 2014. The core team behind it went on to develop Nintendo Switch emulator yuzu in 2018. Development: Citra was initially created in April 2014. The first commercial Nintendo 3DS game to be run by Citra was The Legend of Zelda: Ocarina of Time 3D.Citra can emulate audio since May 21, 2016, and has had a JIT compiler since September 15, 2016. In November 2017, Citra announced networking support for the emulator. The networking support emulates the 3DS's local Wi-Fi, which originally made it possible to play over local networks. Additionally, Citra allows the networking to be compatible with other users anywhere. In April 2020, the Citra Team announced compatibility with New Nintendo 3DS games and support for save states, and in May 2020, they announced a version of Citra for Android.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Metallic color** Metallic color: A metallic color is a color that appears to be that of a polished metal. The visual sensation usually associated with metals is its metallic shine. This cannot be reproduced by a simple solid color, because the shiny effect is due to the material's brightness varying with the surface angle to the light source. In addition, there is no mechanism for showing metallic or fluorescent colors on a computer without resorting to rendering software which simulates the action of light on a shiny surface. Consequently in art and in heraldry one would normally use a metallic paint that glitters like a real metal. Metallic color: For example, to create a painting that gave the impression of gold appearing in the painting, a metallic paint that glitters in an approximation of real gold would be used; a solid color does not aesthetically "read" as gold. Especially in sacral art in Christian churches, real gold (as gold leaf) was used for rendering gold in paintings, e.g. for the halo of saints. Gold can also be woven into sheets of silk to give an East Asian traditional look. More recent art styles, for example art nouveau, also used a metallic, shining gold. However, the metallic finish of such paints was added using fine aluminum powder and pigment rather than actual gold. Metallic color: The use of metallic colors is not limited to those colors that approximate the appearance of actual metals. In some instances, it has been noted, "beetles with bright metallic colors are made up into tie pins and cuff links". One popular modern use of metallic colors is for automobiles, which use metallic paint to achieve a particular shine. Such colors "are made from a combination of different pigments and aluminum flakes that have different weights and particle sizes". Crayon-maker Crayola has manufactured several lines of "metallic" products, including "Metallic FX" crayons, and "Metallic Colors" colored pencils, which have flecks of sparkles to achieve the metallic effect.
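To make the rendering point above concrete, software approximates a metallic look by letting brightness depend on the angle between the surface, the light, and the viewer, and by tinting the highlight with the metal's own color. The following Python sketch is a deliberately simplified, hypothetical shading function in the spirit of Blinn–Phong; it is not the model used by any particular renderer or paint standard, and the gold tint values are illustrative.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def metallic_shade(normal, light_dir, view_dir, base_color, shininess=64.0):
    """Very simplified metal-like shading: a small diffuse term plus a strong,
    view-dependent specular highlight tinted by the metal's base color."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    h = normalize(l + v)                              # half-vector between light and view
    diffuse = max(float(np.dot(n, l)), 0.0)
    specular = max(float(np.dot(n, h)), 0.0) ** shininess
    # Metals: the highlight carries the metal's color; the diffuse part stays small.
    return 0.1 * diffuse * base_color + 0.9 * specular * base_color

gold = np.array([1.0, 0.77, 0.34])                    # approximate gold tint (assumption)
color = metallic_shade(normal=np.array([0.0, 0.0, 1.0]),
                       light_dir=np.array([0.3, 0.3, 1.0]),
                       view_dir=np.array([0.0, 0.0, 1.0]),
                       base_color=gold)
print(color)
```

Tilting the light or view direction changes the specular term, which is exactly the angle-dependent brightness a flat, solid color cannot reproduce.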
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Phenomenology of religion** Phenomenology of religion: The phenomenology of religion concerns the experiential aspect of religion, describing religious phenomena in terms consistent with the orientation of worshippers. It views religion as made up of different components, and studies these components across religious traditions in order to gain some understanding of them. Phenomenology of religion: A different approach is that of typological or classifying phenomenology, which seeks to describe and explain religion in general by analyzing the many diverse 'phenomena' of religions, such as rituals, holy places, narratives, religious roles, and the many other modes of religious expression. In this respect, the phenomenology of religion takes the generalizing role that linguistics has over philologies or that anthropology has in relation the specific ethnographies: where the history of religions produces insights into specific religious traditions, the phenomenology of religion becomes the general scholarly (or scientific) enterprise that explains and interprets religious phenomena in general. Chantepie de la Saussaye: The first explicit use of the phrase "phenomenology of religion" occurs in the Lehrbuch der Religionsgeschichte (Handbook of the History of Religions), written by Pierre Daniël Chantepie de la Saussaye in 1887, wherein he articulates the task of the science of religion and gives an "Outline of the phenomenology of religion". Employing the terminology of Georg Wilhelm Friedrich Hegel, Chantepie divides his science of religion into two areas of investigation, essence and manifestations, which are approached through investigations in philosophy and history, respectively. However, Chantepie's phenomenology "belongs neither to the history nor the philosophy of religion as Hegel envisioned them". For Chantepie, it is the task of phenomenology to prepare historical data for philosophical analysis through "a collection, a grouping, an arrangement, and a classifying of the principal groups of religious conceptions". This sense of phenomenology as a grouping of manifestations is similar to the conception of phenomenology articulated by Robison and the British; however, insofar as Chantepie conceives of phenomenology as a preparation for the philosophical elucidation of essences, his phenomenology is not completely opposed to that of Hegel. Kristensen: Chantepie's Lehrbuch was highly influential, and many researchers began similar efforts after its publication and its subsequent translation into English and French. One such researcher was William Brede Kristensen. In 1901, Kristensen was appointed the first professorship relating to the phenomenology of religion at the University of Leiden. Some of the material from Kristensen's lectures on the phenomenology of religion was edited posthumously, and the English translation was published in 1960 as The Meaning of Religion. James notes that Kristensen's phenomenology "adopts many of the features of Chantepie’s grouping of religious phenomena," and penetrates further into the intricacies of Chantepie's phenomenological approach.For Chantepie, phenomenology is affected by the philosophy and history of religion, but for Kristensen, it is also the medium whereby the philosophy and history of religion interact with and affect one another. In this sense, Kristensen's account of the relationship between historical manifestations and philosophy is more similar to that of Hegel than it is to Chantepie. 
In defining the religious essence of which he explores historical manifestations, Kristensen appropriates Rudolf Otto’s conception of das Heilige ("the holy" or "the sacred"). Otto describes das Heilige with the expression mysterium tremendum et fascinans—a numinous power revealed in a moment of "awe" that admits of both the horrible shuddering of "religious dread" (tremendum) and fascinating wonder (fascinans) with the overpowering majesty (majestas) of the ineffable, "wholly other" mystery (mysterium).Like Chantepie, Kristensen argues that phenomenology seeks the “meaning” of religious phenomena. Kristensen clarifies this supposition by defining the meaning that his phenomenology is seeking as “the meaning that the religious phenomena have for the believers themselves”. Furthermore, Kristensen argues that phenomenology is not complete in grouping or classifying the phenomena according to their meaning, but in the act of understanding. “Phenomenology has as its objects to come as far as possible into contact with and to understand the extremely varied and divergent religious data”.Being a phenomenologist, Kristensen was less interested in philosophical presuppositions than in his concrete depth-research in the incidental religious phenomena. These subjects concerned mythological material (such as Creation, the Flood etc.) as well as human action (such as baptism, Olympic Games etc.), and objects of nature and handicrafts. Kristensen: In all of this he only made use of the authentic sources: writings and images by the believers themselves. This procedure compelled him to reduce the field of his research - he had to profoundly master all relating languages and writings in order to be able to understand his sources in a way as they would have wanted to be understood themselves. Kristensen: Consequently, he reduced his field of research to the phenomena in religions living around the origin of Christianity: during the millennia before and the centuries after Christ, in Iran (Avesta), Babylonia and Assyria, Israel, Egypt, Greece and Rome. The required knowledge of speeches, also, is one of the causes that only few (Van der Leeuw, Bleeker) of his pupils did carry on in his line, although many scholars showed interests in the results of his research. Apart from his synopsis The Meaning of Religion, and a just simple Introduction in History of Religion, his publications are mostly restricted to the results of his incidental partial researches, published in the shape of a Communication of the Royal Academy of the Netherlands. van der Leeuw: The phenomenological approach to religion developed in Gerardus van der Leeuw’s Phänomenologie der Religion (1933) follows Kristensen in many respects, while also appropriating the phenomenology of Martin Heidegger and the hermeneutics of Wilhelm Dilthey. van der Leeuw: For van der Leeuw, understanding is the subjective aspect of phenomena, which is inherently intertwined with the objectivity of that which is manifest. Van der Leeuw articulates the relation of understanding to understood phenomena according to the schema outlined in Dilthey’s definition of the human sciences (Geisteswissenschaften) as sciences that are “based on the relations between experience, expression and understanding” (“Verhältnis von Erlebnis, Ausdruck, und Verstehen”). 
Van der Leeuw correlates subjective experience, expression, and understanding with three objective levels of appearing—relative concealment (Verborgenheit), relative transparency (Durchsichtigkeit), and gradually becoming manifest or revealed (Offenbarwerden), wherein the understanding of what is becoming revealed is the primordial level of appearing from which the experienced concealment and expressed transparency of appearing are derived.Because van der Leeuw, like Kristensen, appropriates Otto's concept of das Heilige in defining the essential category of religion, the transcendence becoming revealed in all human understanding can be further described as sacred — an overpowering “wholly other,” which becomes revealed in astonishing moments of dreadful awe (Scheu) and wonderful fascination. Van der Leeuw argues that this concept of religious dread is also present in Kierkegaard's work on Angst and in Heidegger's statement that “what arouses dread is ‘being in the world’ itself”. Moreover, van der Leeuw recognizes that, although dreadful, Being-in-the-world is fundamentally characterized as care (Sorge), the existential structure whereby Dasein is concerned with meaningful relationships in the world alongside other beings.Because all experiences disclose concealed (wholly other) transcendence to the understanding, all experiences of Being-in-the-world are ultimately religious experiences of the sacred, whether explicitly recognized as such or not. Human being as such is homo religiosus, the opposite of homo negligens.It is the task of the phenomenology of religion to interpret the various ways in which the sacred appears to human beings in the world, the ways in which humans understand and care for that which is revealed to them, for that which is ultimately wholly other mystery. van der Leeuw: Among other great phenomenologists who worked and influenced phenomenology of religion are Henry Corbin, Ninian Smart, Mircea Eliade, and C. Jouco Bleeker.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Timewarp (computer graphics)** Timewarp (computer graphics): A timewarp is a tool for manipulating the temporal dimension in a hierarchically described 3D computer animation system. The term was coined by Jeff Smith and Karen Drewery in 1991. Continuous curves that are normally applied to parametric modeling and rendering attributes are instead applied to the local clock value, which effectively remaps the flow of global time within the context of the subsection of the model to which the curves are applied. The tool was first developed to assist animators in making minor adjustments to subsections of animated scenes that might employ dozens of related interpolation curves. Rather than adjust the timing of every curve within the subsection, a timewarp curve can be applied to the model section in question, adjusting the flow of time itself for that element, with respect to the timing of the other, unaffected elements. Timewarp (computer graphics): Originally, the tool was used to achieve minor adjustments, moving a motion forward or back in time, or to alter the speed of a movement. Subsequent experiments with the technique moved beyond these simpler timing adjustment and began to employ the timing curves to create more complex effects, such as continuous animation cycles and simulating more natural movements of large collections of models, such as flocks or crowds, by creating numerous identical copies of a single animated model and then applying small random perturbation timewarps to each of the copies, giving the impression of a less robotic precision to the group's movements. Timewarp (computer graphics): The tool has since become common in both 3D animation and video editing software systems.
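As a rough sketch of the idea described above — animating the local clock rather than the attribute curves themselves — the following Python fragment (all curve data and function names are hypothetical, not taken from any particular animation system) remaps global time through a timewarp curve before the subsection's existing interpolation curves are evaluated.

```python
# Minimal timewarp sketch: a curve over time is applied to the local clock of a
# model subsection, remapping global time before its animation curves are sampled.
# All curves and values here are hypothetical.

def lerp_curve(keys, t):
    """Piecewise-linear evaluation of a curve given as [(time, value), ...]."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return v0 + u * (v1 - v0)

# Existing animation curve for, say, a joint's rotation (degrees over frames 0-100).
rotation_curve = [(0, 0.0), (50, 90.0), (100, 0.0)]

# Timewarp curve: maps global time to the subsection's local time.
# Here the subsection runs at half speed and is delayed by 10 frames.
timewarp_curve = [(0, -10.0), (100, 40.0)]

def evaluate(global_time):
    local_time = lerp_curve(timewarp_curve, global_time)   # remap the clock
    return lerp_curve(rotation_curve, local_time)          # sample the attribute

for t in (0, 25, 50, 75, 100):
    print(t, evaluate(t))
```

The appeal of the technique is visible here: the rotation curve itself is untouched, so a single timewarp curve can shift or slow every curve in the subsection at once, and a small random perturbation of each copy's timewarp gives a crowd of identical models slightly desynchronized motion.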
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**2.5D integrated circuit** 2.5D integrated circuit: A 2.5D integrated circuit (2.5D IC) is an advanced packaging technique that combines multiple integrated circuit dies in a single package without stacking them into a three-dimensional integrated circuit (3D-IC) with through-silicon vias (TSVs). The term "2.5D" originated when 3D-ICs with TSVs were quite new and still very difficult. Chip designers realized that many of the advantages of 3D integration could be approximated by placing bare dies side by side on an interposer instead of stacking them vertically. If the pitch is very fine and the interconnect very short, the assembly can be packaged as a single component with better size, weight, and power characteristics than a comparable 2D circuit board assembly. This half-way 3D integration was facetiously named "2.5D" and the name stuck. 2.5D integrated circuit: Since then, 2.5D has proven to be far more than just "half-way to 3D." Some benefits: An interposer can support heterogeneous integration – that is, dies of different pitch, size, material, and process node. Placing dies side by side instead of stacking them reduces heat buildup. Upgrading or modifying a 2.5D assembly is as easy as swapping in a new component and revamping the interposer to suit; much faster and simpler than reworking an entire 3D-IC or System-on-Chip (SoC).Some sophisticated 2.5D assemblies even incorporate TSVs and 3D components. Several foundries now support 2.5D packaging. The success of 2.5D assembly has given rise to "chiplets" – small, functional circuit blocks designed to be combined in mix-and-match fashion on interposers. Several high-end products already take advantage of these LEGO-style chiplets; some experts predict the emergence of an industry-wide chiplet ecosystem.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Skin secretions (human)** Skin secretions (human): Skin secretions are those substances and materials that are secreted by the skin and the external mucous membranes. Some skin secretions are associated with body hair. Skin secretions (human): Skin secretions originate from glands that lie in the dermal layer of the skin, beneath the epidermis. Sweat, a physiological aid to body temperature regulation, is secreted by eccrine glands. Sebaceous glands secrete the skin lubricant sebum. Sebum is secreted onto the hair shaft and it prevents the hair from splitting. It consists mostly of lipids. After the sebum spreads along and up the hair shaft, it is distributed over the skin surface where it lubricates and waterproofs the outer layer of the skin, the stratum corneum. Defensins are anti-microbial substances that are secreted onto the skin surface.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vimla L. Patel** Vimla L. Patel: Vimla Lodhia Patel is a Fijian-born Canadian cognitive psychologist and biomedical informaticist. Vimla L. Patel: Patel has worked in the area of biomedical informatics, in particular studying the mediating roles of technology on performance. Her work includes studies of medical errors and error reduction in emergency care and other critical medical environments, (including telephone triage). Her past work in health cognition includes studies of risk-taking behavior and sexual decision making as it pertains to HIV in youth and adolescents. Her current work focuses mostly on identifying underlying cognition in medical error and learning. Biography and career: Patel was born in Fiji and was educated at Southland Girls High School and the University of Otago in New Zealand. After moving to Australia and then Canada, she obtained a PhD at McGill University in Montreal, before working as a professor. She was a founding member of HEALnet (Health Evidence Application and Linkage Network), which made seminal contributions furthering informatics research and application in Canada. She was also a member of the InterMed Collaboratory, which developed guidelines for medical decision support, and has done extensive work in India, Africa, and Colombia in cross-cultural cognition research. Biography and career: In 2000 she became director of the Laboratory of Cognition and Decision Making in the department of Biomedical Informatics at Columbia University, where she was also faculty in the department of Psychiatry and Teacher's College. From 2007 to 2009, she served as interim chair and vice chair of Department of BMI at Arizona State University. Patel was a professor of biomedical informatics and co-director of the Center for Cognitive Informatics and Decision Making at the University of Texas Health Science Center at Houston from 2009 to 2011. As of November 2011, Patel joined the New York Academy of Medicine as a senior research scientist and is the head of the Center for Cognitive Studies in Medicine and Public Health and is an adjunct professor of biomedical informatics at Columbia University in New York. Research: In 1978 Elstein, Shulman and Sprafka applied cognitive science methods to investigate physicians’ clinical competence, developing a model of hypothetico-deductive reasoning which proposed that physicians reason by generating and testing a set of hypotheses to explain clinical data. This is an example of backward (hypothesis-to-data) reasoning. In 1986, Patel and Groen demonstrated that experts who accurately diagnosed complex clinical problems used forward reasoning (data to hypothesis), in contrast to novice subjects who used backward reasoning and misdiagnosed or partially diagnosed the same problems.Patel also applied text comprehension methods to understanding the use of clinical practice guidelines with the goal of increasing adoption of best practices. Patel and colleagues have recently argued for new paradigm for error studies, where instead of zero error tolerance, detection and correction of potential error is viewed as an integral part of cognitive work in a complex workplace.She is the author of more than 300 publications in cognitive psychology, biomedical informatics, medical education and related fields. Honors: Member, Committee on Patient Safety and Health Information Technology, Institute of Medicine (IOM). 2010-2011. Honors: Science and Technology Research Award (STAR) with Edward Shortliffe, UTHealth, Houston,Texas. 
2009. Vice Chair, AMIA Program Committee (2009). Service Faculty of the Year Award, School of Computing and Informatics, Arizona State University (2008). Member, Clinical Research Review Committee, The National Center for Research Resources (NCRR) (2007–2009). Selected for Marquis Who's Who in the World (2007). Member, Committee on Opportunities in Basic Research in the Behavioral and the Social Sciences for the Military, National Research Council, U.S.A. (2006). Elected Fellow, New York Academy of Medicine (2004). Vice President (Member Service), International Medical Informatics Association Governing Board (2003–2006). Outstanding Manuscript Award in Educational Methodology, Journal of Dental Education (2002). Member, Bio-engineering Training and Education Program, National Science Foundation, USA (1999–2007). Chair, Editorial Committee, Medinfo2001, International Medical Informatics Association, London, UK (1999). D.Sc. (honorary), University of Victoria, BC, Canada (1998). Member, Roundtable on Work, Learning and Assessment, National Research Council, U.S.A. (1997). Elected Member, Board of Governors, Cognitive Science Society (1997). Elected Fellow, American College of Medical Informatics (1996). Fellow, The Royal Society of Canada, elected by the Academy of Humanities and Social Sciences (1996). Elected "Woman of Science" for the year, Sweden (1994). Publications: Journal articles; book chapters; Medline publications; Google Scholar citations.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spin valve** Spin valve: A spin valve is a device, consisting of two or more conducting magnetic materials, whose electrical resistance can change between two values depending on the relative alignment of the magnetization in the layers. The resistance change is a result of the giant magnetoresistive effect. The magnetic layers of the device align "up" or "down" depending on an external magnetic field. In the simplest case, a spin valve consists of a non-magnetic material sandwiched between two ferromagnets, one of which is fixed (pinned) by an antiferromagnet which acts to raise its magnetic coercivity and behaves as a "hard" layer, while the other is free (unpinned) and behaves as a "soft" layer. Due to the difference in coercivity, the soft layer changes polarity at lower applied magnetic field strength than the hard one. Upon application of a magnetic field of appropriate strength, the soft layer switches polarity, producing two distinct states: a parallel, low-resistance state, and an antiparallel, high-resistance state. How it works: Spin valves work because of a quantum property of electrons (and other particles) called spin. Due to a split in the density of states of electrons at the Fermi energy in ferromagnets, there is a net spin polarisation. An electric current passing through a ferromagnet therefore carries both charge and a spin component. In comparison, a normal metal has an equal number of electrons with up and down spins so, in equilibrium situations, such materials can sustain a charge current with a zero net spin component. However, by passing a current from a ferromagnet into a normal metal it is possible for spin to be transferred. A normal metal can thus transfer spin between separate ferromagnets, subject to a long enough spin diffusion length. How it works: Spin transmission depends on the alignment of magnetic moments in the ferromagnets. If a current is passing into a ferromagnet whose majority spin is spin up, for example, then electrons with spin up will pass through relatively unhindered, while electrons with spin down will either 'reflect' or spin flip scatter to spin up upon encountering the ferromagnet to find an empty energy state in the new material. Thus if both the fixed and free layers are polarised in the same direction, the device has relatively low electrical resistance, whereas if the applied magnetic field is reversed and the free layer's polarity also reverses, then the device has a higher resistance due to the extra energy required for spin flip scattering. How it works: Antiferromagnetic and non-magnetic layers An antiferromagnetic layer is required to pin one of the ferromagnetic layers (i.e., make it fixed or magnetically hard). This results from a large negative exchange coupling energy between ferromagnets and antiferromagnets in contact. The non-magnetic layer is required to decouple the two ferromagnetic layers so that at least one of them remains free (magnetically soft). How it works: Pseudo spin valves The basic operating principles of a pseudo spin valve are identical to that of an ordinary spin valve, but instead of changing the magnetic coercivity of the different ferromagnetic layers by pinning one with an antiferromagnetic layer, the two layers are made of different ferromagnets with different coercivities e.g., NiFe and Co. Note that coercivities are largely an extrinsic property of materials and thus determined by processing conditions. Applications: Spin valves are used in magnetic sensors and hard disk read heads. 
They are also used in magnetic random access memories (MRAM).
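To make the parallel/antiparallel resistance difference concrete, here is a minimal sketch of the commonly used two-current (Mott) resistor model, in which spin-up and spin-down electrons are treated as independent conduction channels. The channel resistances below are arbitrary illustrative values, not measurements of any real device, and the model is a simplification rather than a derivation taken from this article.

```python
# Illustrative two-current (Mott) resistor model of a spin valve.
# Spin-up and spin-down electrons are treated as independent resistor channels;
# the numbers below are made-up channel resistances, not measured values.

def parallel(r1, r2):
    """Resistance of two channels conducting in parallel."""
    return r1 * r2 / (r1 + r2)

# Resistance seen by majority-spin and minority-spin electrons in one layer (arbitrary units).
r_major, r_minor = 1.0, 4.0

# Parallel alignment: one spin channel is "easy" in both layers, the other "hard" in both.
R_P = parallel(r_major + r_major, r_minor + r_minor)

# Antiparallel alignment: every electron is majority in one layer and minority in the other.
R_AP = parallel(r_major + r_minor, r_minor + r_major)

gmr = (R_AP - R_P) / R_P
print(f"R_parallel = {R_P:.2f}, R_antiparallel = {R_AP:.2f}, GMR = {gmr:.1%}")
```

With these illustrative values the antiparallel state is roughly 56% more resistive than the parallel state, mirroring the low- and high-resistance states described above.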
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spambot** Spambot: A spambot is a computer program designed to assist in the sending of spam. Spambots usually create accounts and send spam messages with them. Web hosts and website operators have responded by banning spammers, leading to an ongoing struggle between them and spammers in which spammers find new ways to evade the bans and anti-spam programs, and hosts counteract these methods. Email: Email spambots harvest email addresses from material found on the Internet in order to build mailing lists for sending unsolicited email, also known as spam. Such spambots are web crawlers that can gather email addresses from websites, newsgroups, special-interest group (SIG) postings, and chat-room conversations. Because email addresses have a distinctive format, such spambots are easy to code. Email: A number of programs and approaches have been devised to foil spambots. One such technique is address munging, in which an email address is deliberately modified so that a human reader (and/or human-controlled web browser) can interpret it but spambots cannot. This has led to the evolution of more sophisticated spambots that are able to recover email addresses from character strings that appear to be munged, or that can instead render the text in a web browser and then scrape it for email addresses. Alternative transparent techniques include displaying all or part of the email address on a web page as an image, as a text logo shrunken to normal size using inline CSS, or as text with the order of characters jumbled, placed into readable order at display time using CSS. Forums: Forum spambots browse the internet, looking for guestbooks, wikis, blogs, forums, and other types of web forms that they can then use to submit bogus content. These often use OCR technology to bypass CAPTCHAs. Some spam messages are targeted towards readers and can involve techniques of target marketing or even phishing, making it hard to tell real posts from the bot-generated ones. Other spam messages are not meant to be read by humans, but are instead posted to increase the number of links to a particular website, to boost its search engine ranking. Forums: One way to prevent spambots from creating automated posts is to require the poster to confirm their intention to post via email. Since most spambot scripts use a fake email address when posting, any email confirmation request is unlikely to be successfully routed to them. Some spambots will pass this step by providing a valid email address and using it for validation, mostly via webmail services. Methods such as security questions have also proven effective in curbing posts generated by spambots, as the bots are usually unable to answer them upon registering. On various forums, consistently posting spam can also earn a human user the label 'spambot'. Twitter: A Twitterbot is a program used to produce automated posts on the Twitter microblogging service, or to automatically follow Twitter users. Twitterbots come in various forms. For example, many serve as spam, enticing clicks on promotional links. Others post @replies or automatically "retweet" in response to tweets that include a certain word or phrase. These automatic tweets are often seen as funny or silly. Some Twitter users even program Twitterbots to assist themselves with scheduling or reminders.
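As an illustration of why address munging can defeat simple harvesters, the sketch below runs a naive email-matching regular expression, of the kind a basic harvesting bot might use, against a plain and a munged address. The regex, the addresses, and the de-munging rules are hypothetical examples, not taken from any actual spambot.

```python
import re

# A naive pattern of the kind a simple email-harvesting bot might use.
# Real harvesters vary; this regex is only for illustration.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

plain = "Contact us at alice@example.org for details."            # harvestable
munged = "Contact us at alice (at) example (dot) org for details."  # munged address

print(EMAIL_RE.findall(plain))   # ['alice@example.org'] -- the bot gets the address
print(EMAIL_RE.findall(munged))  # []                    -- munging defeats the naive pattern

# A human (or a smarter bot, as noted above) can still undo simple munging:
demunged = munged.replace(" (at) ", "@").replace(" (dot) ", ".")
print(EMAIL_RE.findall(demunged))  # ['alice@example.org']
```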
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tacheometry** Tacheometry: Tacheometry (from Greek for "quick measure") is a system of rapid surveying, by which the horizontal and vertical positions of points on the earth's surface relative to one another are determined without using a chain or tape, or a separate levelling instrument. Tacheometry: Instead of the pole normally employed to mark a point, a staff similar to a level staff is used. This is marked with heights from the base or foot, and is graduated according to the form of tacheometer in use. The horizontal distance S is inferred from the vertical angle subtended between two well-defined points on the staff and the known distance 2L between them. Alternatively, it can be obtained from readings of the staff indicated by two fixed stadia wires in the diaphragm (reticle) of the telescope. The difference of height Δh is computed from the angle of depression z or angle of elevation α of a fixed point on the staff and the horizontal distance S already obtained. The azimuth angle is determined in the usual way. Thus, all the measurements requisite to locate a point both vertically and horizontally with reference to the point where the tacheometer is centred are determined by an observer at the instrument without any assistance beyond that of a person to hold the level staff. The ordinary methods of surveying with a theodolite, chain, and levelling instrument are fairly satisfactory when the ground is relatively clear of obstructions and not very precipitous, but they become extremely cumbersome when the ground is covered with bush, or broken up by ravines. Chain measurements then become slow and liable to considerable error; the levelling, too, is carried on at great disadvantage in point of speed, though without serious loss of accuracy. These difficulties led to the introduction of tacheometry. In western countries, tacheometry is primarily of historical interest in surveying, as professional measurement nowadays is usually carried out using total stations and recorded using data collectors. Location positions are also determined using GNSS. Traditional methods and instruments are still in use in many areas of the world and by users who are not primarily surveyors. Tacheometer: A tachymeter or tacheometer is a type of theodolite used for rapid measurements; it determines, electronically or electro-optically, the distance to a target. The principles of action are similar to those of rangefinders. Stadia measurements: Other forms of tacheometry in surveying include the use of stadia rods with theodolites or plane-table alidades. These use stadia marks on the instrument's reticle to measure the distance between two points on the stadia rod (the stadia interval). This is converted to the distance from the instrument to the stadia rod by multiplying the stadia interval by the stadia interval factor. If the stadia rod is not at the same elevation as the instrument, the value must be corrected for the angle of elevation between the instrument and the rod. Stadia measurements: The formula most widely used for finding the distance is d = ks + c, where s is the stadia interval (top intercept minus bottom intercept) and k and c are multiplicative and additive constants. Generally, the instrument is made so that k = 100 and c = 0 exactly, to simplify calculations. Subtense bars: Another device used in tacheometry is the subtense bar. This is a rigid rod, usually of a material insensitive to change in temperature such as invar, of fixed length (typically 2 metres (6.6 ft)).
The subtense bar is mounted on a tripod over the station to which the distance is desired. It is brought to level, and a small telescope on the bar enables the bar to be oriented perpendicular to the line of sight to the angle measuring station. Subtense bars: A theodolite is used to measure the horizontal angle between indicators on the two ends of the subtense bar. The distance from the telescope to the subtense bar is the height of an isosceles triangle formed with the theodolite at the upper vertex and the subtense bar length at its base, determined by trigonometry.
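The reductions described above can be summarised in a short sketch. The functions below implement the standard stadia formulas for an inclined sight (horizontal distance k·s·cos²α + c·cosα, height difference k·s·sin 2α / 2 + c·sinα) and the subtense-bar relation D = (b/2)·cot(θ/2). The instrument constants and readings are made-up values for illustration only.

```python
import math

def stadia_reduction(interval, elevation_deg, k=100.0, c=0.0):
    """Horizontal distance and height difference from a stadia observation.

    interval       staff intercept s (top reading minus bottom reading), in metres
    elevation_deg  vertical angle of the line of sight, in degrees
    k, c           multiplicative and additive instrument constants (k=100, c=0 is typical)
    """
    a = math.radians(elevation_deg)
    horizontal = k * interval * math.cos(a) ** 2 + c * math.cos(a)
    vertical = k * interval * math.sin(2 * a) / 2 + c * math.sin(a)
    return horizontal, vertical

def subtense_distance(bar_length, subtended_deg):
    """Horizontal distance from the angle subtended by a bar of known length."""
    return (bar_length / 2) / math.tan(math.radians(subtended_deg) / 2)

# Hypothetical readings, for illustration only.
print(stadia_reduction(interval=1.25, elevation_deg=4.0))    # about (124.4 m, 8.7 m)
print(subtense_distance(bar_length=2.0, subtended_deg=1.0))  # about 114.6 m
```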
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dice chess** Dice chess: Dice chess can refer to a number of chess variants in which dice are used to alter gameplay; specifically that the moves available to each player are determined by rolling a pair of ordinary six-sided dice. There are many different variations of this form of dice chess. One of them is described here. Rules: The players alternate rolling the dice and, if possible, moving. On each die, the 1 represents a pawn, 2 a knight, 3 a bishop, 4 a rook, 5 a queen, and 6 a king. The player may move either of the pieces indicated on the two dice. For example, a player rolling a 1 and a 2 may move either a pawn or a knight. A player who rolls doubles (the same number on both dice) may play any legal move. Otherwise, standard chess rules apply, with these exceptions: a player who has no legal move with either of the pieces indicated by the dice loses that turn (passed turn); if castling is otherwise legal, a player may castle upon rolling a 4, 6, or doubles; an en passant capture of a pawn is possible only if the player rolls a 1, or doubles, immediately once the opportunity for the en passant capture arises; a player who is in check can only play a legal response to that check (capturing the checking piece, moving the king, or interposing a piece); a player who is in check but does not make a roll allowing a legal response to the check loses that turn, but does not automatically lose the game; except in the unlikely event that the game ends in a draw pursuant to the standard rules of chess, the game ends when one player either checkmates the opponent or captures the opponent's king. Sample game: Here is a sample game of dice chess: White rolls doubles, allowing her to play any move, and selects 1.e4. Black rolls a 2 and a 3; no bishop move being possible, he plays 1...Nc6. White rolls a 3 and a 4, and plays 2.Bc4. Black rolls a 4 and a 5; since no queen move is possible, he must play the only legal rook move, 2...Rb8. White rolls a 3 and a 6, and plays 3.Bxf7+. Black rolls a 2 and a 4; since no knight or rook move is a legal response to the check, he must pass. (Only a 6, or doubles, would have allowed him to move.) White rolls a 2 and a 4, and chooses 4.Nf3. (A 3 or 5 would have enabled an immediate win with 4.Bxe8, 4.Qf3# or 4.Qh5#). Black rolls a 1 and a 3; again, this does not allow a legal response to the check, so he must pass. White rolls a 2 and a 4, and plays 5.Ng5#, ending the game (see diagram). Rules variants: There is no standard ruleset for dice chess, and so games called dice chess may have different rules to the ones given here. For example, in the version of dice chess given on the BrainKing site: The players roll only one die. Pawns may move from the seventh to the eighth rank not only on a roll of 1 (when they promote to a piece of the player's choice), but also on a roll of 2, 3, 4 or 5 (when they can promote only to the piece specified by the roll). There is no check or checkmate. Rather, the goal is to actually capture the king. Rules variants: Another form of dice chess is Vegas Fun Chess, whose rules are described on The Chess Variant Pages. That site also states that "Pritchard's Encyclopedia of Chess Variants contains descriptions of seven versions of what he calls 'Dice Chess'." John Gollon, in his book Chess Variations: Ancient, Regional, and Modern, notes three ways in which dice may be used in connection with a game of chess. The most common is similar to that described in the preceding sections. 
A second way to use dice is to have each player roll one die on each turn, with the number rolled indicating the number of moves to be played. The maximum number of moves that can be played is usually four, so a roll of a 4, 5, or 6 allows the player to make four moves. A third form of the game uses two dice of contrasting colors, with one determining the piece that can move, and the other the number of moves that the piece makes. History: Anne Sunnucks writes that there is evidence from the literature of the period that dice were used to play chess in Europe between the 11th and 14th centuries, and even earlier in Burma and India. The dice were thrown before each turn to determine the piece to be moved; the same numbering system as set forth above was used (1=pawn, 2=knight, etc.). In the Burmese form of the game, three dice were thrown and each player made three moves at a time. Vladimir Pribylinec writes that the cubes in Cubic Chess move as in orthochess according to the symbol uppermost, as described in both editions of Pritchard's Encyclopedia of Chess Variants, first published in 1977. In the variant Protheus, cubes are turned as they move onto adjacent squares.
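Here is a short sketch of the dice-to-piece mapping used in the variant described first (1 = pawn through 6 = king, with doubles allowing any legal move and castling requiring a 4, a 6, or doubles). The function names are invented for illustration; this is not code from any published ruleset.

```python
# Mapping from a die face to the piece it allows, per the rules described above.
PIECE_FOR_FACE = {1: "pawn", 2: "knight", 3: "bishop", 4: "rook", 5: "queen", 6: "king"}

def movable_pieces(die1, die2):
    """Pieces the player may move this turn; doubles allow any legal move."""
    if die1 == die2:
        return set(PIECE_FOR_FACE.values())  # doubles: any piece
    return {PIECE_FOR_FACE[die1], PIECE_FOR_FACE[die2]}

def may_castle(die1, die2):
    """Castling (if otherwise legal) requires a 4, a 6, or doubles."""
    return die1 == die2 or 4 in (die1, die2) or 6 in (die1, die2)

print(movable_pieces(1, 2))  # {'pawn', 'knight'}
print(movable_pieces(3, 3))  # all six piece types
print(may_castle(2, 5))      # False
```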
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Luxury packaging** Luxury packaging: Luxury and specialty packaging is the design, research, development, and manufacturing of packaging and displays for luxury brands. The packaging of a luxury product is part of the brand's image, and research shows consumers are willing to spend more on products if the packaging looks appealing and luxurious. As well as adding to the value of the product, luxury packaging fulfils various other roles; it enhances the image of the brand, increases consumer engagement through personalised packaging, performs a function, creates appeal, and diversifies the product. Globally, the luxury packaging market continues to grow, driven by global trends of personalised packaging, attention to sustainability issues, and economic and demographic drivers. The luxury packaging market is forecast to grow by 4.4% to 2019, reaching $17.6 billion, and consumption will reach 9.9 billion tons with growth of 3.1%. Package development considerations: Package design and development are often thought of as an integral part of the new product development process. Alternatively, development of a package (or component) can be a separate process, but it must be linked closely with the product to be packaged. Package design starts with the identification of all the requirements: structural design, marketing, shelf life, quality assurance, logistics, legal, regulatory, graphic design, end-use, and environmental. The design criteria, performance (specified by package testing), completion time targets, resources, and cost constraints need to be established and agreed upon. Package design processes often employ rapid prototyping, computer-aided design, computer-aided manufacturing, and document automation. Package development considerations: Security High-value products are inviting targets for theft. Security packaging can be an important consideration to help reduce package pilferage. Authentication technologies can be used to help verify that an expensive luxury product is not among the many counterfeit consumer goods. Package development considerations: Security solutions involve all phases of product production, packaging, distribution, logistics, sale, and use. No single solution is considered "pilfer proof". Often, packaging engineers, logistics engineers, and security professionals have addressed multiple levels of security to reduce the risk of pilfering. Each situation is unique. Some considerations have included: Identifying who a potential thief might be: an internal employee, security guard, truck driver, delivery person, receiver (consignee), organized crime, etc. Engineers usually start by considering what level of knowledge, materials, tools, etc. they might have. Package development considerations: Identifying all feasible methods of unauthorized access into a product, package, or system. In addition to the primary means of entry, engineers also consider secondary or "back door" methods. Identifying available means of resealing, reclosing, or replacing special seals. Using extra strong and secure packaging: A weak or damaged package is an invitation to pilferage. Considering unique custom seals and labels (changed regularly, because these are subject to counterfeiting). Improving the pilfer resistance to make pilfering more difficult, time-consuming, etc. Concealing the identity and value of a pilferable item during distribution. Logistics and packaging professionals do not want to bring attention to the item, its package, addresses, names, etc.
Adding pilfer-evident features to help indicate the existence of pilfering. Choosing a logistics provider who can reduce the risks of pilferage. Shipping packages in unit loads with stretch wrap or in intermodal shipping containers with security seals. Educating people to watch for evidence of pilfering. With a corrugated box, using a wider and stronger closure tape, 3-inch or 72 mm, reinforced gummed tape or pressure-sensitive tape. Using a tamper-evident security tape or seal on packages that leaves a message, warning, or other indication if removed. Installing a surveillance system to help identify any suspects. Package development considerations: Design steps Some designers consider nine steps to creating the graphic design of luxury packaging, beginning with meeting the client to gain insight into the product. The next step develops first sketches, followed by developing the initial layout of the packaging. The fourth step involves refining the idea. The fifth step includes the lettering and typeface. The sixth step covers legalities and laws about how large the obligatory information must be printed. The seventh step involves final adjustments, and the eighth step is choosing the accents of the packaging. The final step involves designing matching accessories such as boxes or bags to complement the main packaging. End-use products: Cosmetics and fragrances Tobacco Confectionery Premium alcoholic drinks Gourmet food and drinks Watches and jewellery
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Plenken** Plenken: Plenken is a German typographical term for the insertion of inappropriate spaces before a punctuation mark. Its counterpart is Klempen, the incorrect omission of a space after punctuation marks. Etymology: Plenken is derived as a borrowed word from the English blank. Its antonym Klempen combines plenken and klemmen ("to clamp"), exchanging the K and P in such a way as to suggest both phonetically and orthographically its relationship and opposite meaning. Both are internet coinages, dating from Johannes "Jödel" Leckebusch's introduction of plenken on MausNet in 1988, and are now widely used on German newsgroups. Usage: Plenken was once a common practice in Germany because typists there conventionally approximated the standard typesetter's spacing rules on typewriters. These rules use a variety of different-width spaces and insert a thin space or hair space between words and most punctuation marks. With the introduction of the typewriter and its fixed-width space, English-language typists removed most spaces around punctuation marks, but some other languages' typists, including French- and German-language ones, did not. Usage: Simplistic computer tools typically mishandle plenked spaces by treating them as ordinary whitespace, potentially inserting incorrect line breaks and wrapping the punctuation mark onto the next line, rendering the text difficult to read. More sophisticated computer tools have French spacing modes which automatically change plenked spaces to special typographic spaces such as the espace fine insécable (U+202F NARROW NO-BREAK SPACE). However, such spacing behaviour is not necessarily replicated for other languages, and their typists must manually enter non-breaking spaces. Usage: Plenken is increasingly discouraged among German-language typists. Examples: Das war der schönste Tag meines Lebens ! ("That was the most beautiful day of my life!") Ich behaupte, dass das falsch ist . ("I claim that this is wrong.") Illustrating word-wrapping problems: You are here ! I am here , too ! You are just plenking !
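A minimal sketch of how a tool might clean up plenked text follows, either removing the space before the punctuation mark (the usual German and English practice) or replacing it with the narrow no-break space U+202F mentioned above (French-style spacing). The function and its options are hypothetical, assuming a simple regex-based approach.

```python
import re

NNBSP = "\u202F"  # NARROW NO-BREAK SPACE

def fix_plenking(text, style="remove"):
    """Handle spaces before !, ?, ., ,, ; and :.

    style="remove"  deletes the plenked space (usual German/English practice)
    style="nnbsp"   replaces it with a narrow no-break space (French-style typesetting)
    """
    replacement = NNBSP + r"\1" if style == "nnbsp" else r"\1"
    return re.sub(r" +([!?.,;:])", replacement, text)

print(fix_plenking("You are just plenking !"))                 # "You are just plenking!"
print(repr(fix_plenking("You are just plenking !", "nnbsp")))  # narrow space kept before "!"
```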
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Feynman Prize in Nanotechnology** Feynman Prize in Nanotechnology: The Feynman Prize in Nanotechnology is an award given by the Foresight Institute for significant advances in nanotechnology. Two prizes are awarded annually, in the categories of experimental and theoretical work. There is also a separate challenge award for making a nanoscale robotic arm and 8-bit adder. Overview: The Feynman Prize consists of annual prizes in experimental and theory categories, as well as a one-time challenge award. They are awarded by the Foresight Institute, a nanotechnology advocacy organization. The prizes are named in honor of physicist Richard Feynman, whose 1959 talk There's Plenty of Room at the Bottom is considered by nanotechnology advocates to have inspired and informed the start of the field of nanotechnology. The annual Feynman Prize in Nanotechnology is awarded for pioneering work in nanotechnology, towards the goal of constructing atomically precise products through molecular machine systems. Input on prize candidates comes from both Foresight Institute personnel and outside academic and commercial organizations. The awardees are selected mainly by an annually changing body of former winners and other academics. The prize is considered prestigious, and the authors of one study considered it to be reasonably representative of notable research in the parts of nanotechnology under its scope. The separate Feynman Grand Prize is a $250,000 challenge award to the first persons to create both a nanoscale robotic arm capable of precise positional control and a nanoscale 8-bit adder, conforming to given specifications. It is intended to stimulate the field of molecular nanotechnology. History: The Feynman Prize was instituted in the context of Foresight Institute co-founder K. Eric Drexler's advocacy of funding for molecular manufacturing. The prize was first given in 1993. Before 1997, one prize was given biennially. From 1997 on, two prizes were given each year, in theory and experimental categories. By awarding these prizes early in the history of the field, the Foresight Institute increased awareness of nanotechnology and influenced its direction. The Grand Prize was announced in 1995 at the Fourth Foresight Conference on Molecular Nanotechnology and was sponsored by James Von Ehr and Marc Arnold. In 2004, X-Prize Foundation founder Peter Diamandis was selected to chair the Feynman Grand Prize committee. Recipients: Recipients are grouped into three categories: the single prize (awarded before 1997), the experimental category, and the theory category.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dynamic speckle** Dynamic speckle: In physics, dynamic speckle is the result of the temporal evolution of a speckle pattern, in which variations in the scattering elements responsible for forming the interference pattern cause the pattern's grains to change their intensity (grey level) as well as their shape over time. One easy-to-observe example is milk: place some milk in a teaspoon and observe the surface in direct sunlight. There will be a visible "dancing" pattern of coloured points. Where the milk dries at the edge of the spoon, the speckle is seen to be static. This is direct evidence of the thermal motion of atoms, which causes the Brownian motion of the colloidal particles in the milk, which in turn results in the dynamic speckle visible to the naked eye. Information content: The dynamic pattern thus shows changes that, when analyzed over time, represent the activity of the illuminated material. The visual effect is that of a boiling liquid or the image on a poorly tuned TV set. Information content: The pattern can be analyzed by means of several mathematical and statistical tools, which provide numeric or visual information on its magnitude, the loosely defined notion of activity. Because the number of scattering centers is very high, the collective phenomenon is hard to interpret and the individual contributions to the final result cannot be inferred. The measurements obtained with these analysis tools present the activity level as a sum of contributions from the Doppler effect of the scattered light as well as from other phenomena possibly present (time variations of the refractive index of the sample, etc.). Light scattered with small Doppler shifts in its frequency beats on the detector (possibly the eye), giving rise to the slow intensity variations that constitute the dynamics of the speckle pattern. Information content: A biological sample, for example, is a material containing a huge number of mobile scattering centers; it exhibits refractive-index variations in the materials that compose it, along with power changes and many other effects, increasing the complexity of identifying and isolating these phenomena. The complete interpretation of the activity of a sample by means of dynamic speckle therefore presents big challenges. Figure 1 shows a sequence of speckle patterns in a corn seed at the start of its germination process, where the dynamic effect is higher in the areas where the scattering centers are expected to be more active, as in the embryo and in a break in the endosperm region of the seed. The embryo is in the lower left and the break is a river-like region in the center. In the crack, the activity is due to intensive inner water evaporation, while in the embryo activity is higher due to the metabolism of the living tissue together with the activity caused by water evaporation. In the endosperm, the upper right region of the image, the relatively low activity is due only to water evaporation. Applications: Biological tissue is one of the most complex materials found in nature, and the difficulty is worsened by the intrinsic variability between one sample and another. These facts make the comparison of results between different samples even more difficult, even in the presence of the same stimulus.
In this context, speckle patterns have been applied to study bacteria, parasites, seeds, and plants. Other fields of application include the analysis of drying paint and the monitoring of gels, foams, corrosion, efflorescence, etc. Dynamic Speckle analysis: Several mathematical and statistical tools have been proposed for characterizing the activity of a dynamic speckle pattern. Some of them are the Inertia Moment of the co-occurrence matrix (MOC), the Fujii method, generalized differences, and the temporal difference. These and other methods are gathered in the Biospeckle Laser Tool Library.
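As a rough illustration of two of the descriptors named above, the sketch below computes the Fujii index (a sum of weighted absolute differences between consecutive frames) and the generalized differences (a sum of absolute differences over all frame pairs) for a stack of speckle images. A random array stands in for real data, and the exact normalisations vary between publications, so this is only an assumed, minimal formulation.

```python
import numpy as np

def fujii(stack, eps=1e-9):
    """Fujii descriptor: per-pixel sum of |I_k - I_{k+1}| / (I_k + I_{k+1}) over consecutive frames."""
    stack = stack.astype(float)
    num = np.abs(np.diff(stack, axis=0))
    den = stack[:-1] + stack[1:] + eps
    return (num / den).sum(axis=0)

def generalized_differences(stack):
    """Generalized differences: per-pixel sum of |I_k - I_l| over all frame pairs (k < l)."""
    stack = stack.astype(float)
    n = stack.shape[0]
    gd = np.zeros(stack.shape[1:])
    for k in range(n):
        for l in range(k + 1, n):
            gd += np.abs(stack[k] - stack[l])
    return gd

# Synthetic stand-in for a sequence of speckle images (frames, rows, cols).
rng = np.random.default_rng(0)
stack = rng.integers(0, 256, size=(16, 64, 64))
print(fujii(stack).shape, generalized_differences(stack).shape)  # (64, 64) (64, 64)
```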
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded