**Cleaner** Cleaner: A cleaner or cleaning operative is an industrial or domestic worker who performs cleaning. The Cambridge English Dictionary defines a cleaner as "a person whose job is to clean houses, offices, public places, etc."; Collins defines one as "someone who is employed to clean the rooms and furniture inside a building." However, a cleaner is not always employed and does not have to work for pay: cleaning can also be done socially or for charity, for example clearing garbage from a forest for free, so definitions of the word vary by author. The word cleaner also denotes a substance used for cleaning (e.g. oven cleaner) and a device used to clean (e.g. an air cleaner). To sum up, a cleaner is usually a person who does cleaning. Cleaner: Cleaning operatives may specialize in cleaning particular things or places, such as window cleaners. Cleaning operatives often work when the people who otherwise occupy the space are not around; they may clean offices at night or houses during the workday. Occupation classification: Types of cleaning operatives: The cleaning industry is quite big, as different objects and properties require different types of cleaning. For example, cleaning an office space requires the services of a commercial cleaner, whereas cleaning a house requires a residential cleaner or residential cleaning service. Depending on the task, even these categories can be subdivided into, for example, end-of-lease cleaning, carpet cleaning, upholstery cleaning, window cleaning, car cleaning services, etc. Cleaners specialize in a specific cleaning sector, or even a specific task within one, and one cannot expect a window cleaner to be able or willing to clean a carpet. For example, according to the International Standard Classification of Occupations and the European Skills, Competences, Qualifications and Occupations classification, the profession of cleaner can be divided into:
9112.6 - train cleaner: "Train cleaners keep the interiors of trains tidy and clean. They clean out the bins in the different compartments, and perform other cleaning activities such as hoovering, mopping and deep cleaning."
9123.1 - window cleaner: "Window cleaners use cleaning tools such as sponges and detergents to clean windows, mirrors and other glass surfaces of buildings, both on the interior and exterior. They use specific ladders to clean taller buildings, using safety belts for support."
9122.1 - vehicle cleaner: "Vehicle cleaners clean and polish surfaces of external parts and interiors of vehicles."
9111.1 - domestic cleaner: "Domestic cleaners perform all necessary cleaning activities in order to clean their clients' houses. They vacuum and sweep floors, wash dishes, launder clothes, dust, scrub and polish surfaces and disinfect equipment and materials."
9129.2 - sewerage cleaner: "Sewerage cleaners maintain and clean sewerage systems and their pipes within communities. They remove blockages that stop the sewerage flow to ensure the smooth running of the systems."
9112.2 - building cleaner: "Building cleaners maintain the cleanliness and overall functionality of various types of buildings such as offices, hospitals and public institutions. They perform cleaning duties like sweeping, vacuuming and mopping floors, empty trash and check security systems, locks and windows. Building cleaners check air conditioning systems and notify the appropriate persons in case of malfunctions or problems."
9112.3 - furniture cleaner: "Furniture cleaners maintain furniture items by removing dust, applying furniture polish, cleaning stains and maintaining colouring."
5153.1.1 - amusement park cleaner: "Amusement park cleaners work to keep the amusement park clean and take on small repairs. Amusement park cleaners usually work at night, when the park is closed, but urgent maintenance and cleaning is done during the day."
8160.10 - cacao beans cleaner: "Cacao beans cleaners operate machines for the removal of foreign materials such as stones, string and dirt from cacao beans. They operate silos so as to move beans from there to hoppers. They direct the cleaned beans to specified silos. They operate the air-cleaning system in order to remove further foreign materials."
7133.2 - building exterior cleaner: "Building exterior cleaners remove dirt and litter from a building's exterior, as well as perform restoration tasks. They ensure the cleaning methods are compliant with safety regulations, and monitor the exteriors to ensure they are in proper condition."
9129.1 - drapery and carpet cleaner: "Drapery and carpet cleaners clean draperies and carpets for their clients by removing stains, dust or odors. They do this by applying chemical and repellent solutions and with the use of brushes or mechanical equipment."
9112.5 - toilet attendant: "Toilet attendants clean and maintain toilet facilities in accordance with company standards and policies. They use cleaning equipment to clean mirrors, floors, toilets and sinks. They perform the cleaning activities before, during and after operational service hours. Toilet attendants refill the facility with supplies as needed and maintain records of their daily operations."
9129 - other cleaning workers: cleaning workers not classified elsewhere.
In addition:
9112.1 - aircraft groomer, who cleans the interiors of airplanes: "Aircraft groomers clean aircraft cabins and airplanes after usage. They vacuum or sweep the interior of the cabin, brush debris from seats, and arrange seat belts. They clean trash and debris from seat pockets and arrange in-flight magazines, safety cards, and sickness bags. They also clean galleys and lavatories."
8157.1.1 - laundry ironer, who cleans clothes: "Laundry ironers re-shape clothing items and linen and remove creases from them by using irons, presses and steamers. They clean and maintain the ironing and drying area and organise the items accordingly."
9129.3 - swimming facility attendant, who can clean the area around a swimming pool.
Waste collection and recycling: A cleaner may also receive waste and carry out activities related to its transport to a storage site, its segregation, and recycling. Occupation classification: Types of cleaning operatives: Charity / free social cleaning: Cleaning can be done voluntarily, free of charge and without employment, for example social cleanup of garbage from a forest. Cleaning by convicts: Cleaning is sometimes done by convicts for rehabilitation or leniency purposes, or as a substitute punishment. On the other hand, in some cases cleaners are checked against criminal records. Typical cleaning equipment: The equipment depends on the situation and the type of cleaning, but the following are some items used by cleaning staff:
Broom/dustpan
Bucket
Cleaning agents
Floor polisher
Garbage bag
Hand feather duster and/or microfiber floor duster
Mop and mop bucket cart
Towels
Vacuum cleaner
Wet floor sign
In addition: ladder, rake, and bags for leaves.
Depending on the situation (for example, when cleaning dusty or dangerous substances or places, cleaning windows at great heights, or working on a busy street or in a factory), the items used by cleaning staff can also include safety equipment such as:
gloves
overalls
filter mask
fitted hardhat
height harness
protective boots
high-visibility clothing
Hazards: A cleaner's exposure to hazards depends on the activity performed and the situation, and may include allergens, dust, biohazards, falls, electric shock, and slipping on a slippery surface, so safety equipment should be adapted to the situation. In addition, this work is generally not recommended for people with severe allergies. Working conditions: The 2000 film Bread and Roses by British director Ken Loach depicted the struggle of cleaners in Los Angeles, California, for better pay and working conditions and for the right to join a union. In an interview with the BBC in 2001, Loach stated that thousands of cleaners from around 30 countries had since contacted him with tales similar to the one told in the film.
**Whispering campaign** Whispering campaign: A whispering campaign or whisper campaign is a method of persuasion in which damaging rumors or innuendo are spread about a target, while the source of the rumors seeks to avoid detection as they spread. For example, a political campaign might distribute anonymous flyers attacking the other candidate. The tactic is generally considered unethical in open societies, particularly in matters of public policy. The speed and anonymity of communication made possible by modern technologies like the Internet have increased public awareness of whisper campaigns and their ability to succeed. The phenomenon has also led to the failure of whisper campaigns, as those seeking to prevent them can publicize their existence much more readily than in the past. Whisper campaigns are defended in some circles as an efficient mechanism for underdogs who lack other resources to disclose wrongdoings of the powerful without repercussions. Marketing: Tactics include "buying" drinks and giving away cigarettes to patrons without making known that the benefactor is a representative of the company. More recently, companies have also been paying bloggers to mention products or causes. As a form of astroturfing, companies hire employees to post comments on blogs, forums, online encyclopedias such as Wikipedia, etc., to steer online conversations in their desired direction. Politics: Whispering campaigns in the United States began with the conflict between John Adams and Thomas Jefferson as both vied for the presidency in the election of 1800. The Federalists, who supported Adams, accused Jefferson of having robbed a widow and her children of a trust fund and of having fathered numerous mulatto children by his own slave women. Politics: Whisper campaigns are frequently used in electoral politics as a method of shaping the discussion without being seen to do so. US President Grover Cleveland was the target of a whisper campaign in 1884, when Republicans claimed that he had fathered an illegitimate child while he was still Governor of New York. US President Franklin D. Roosevelt was frequently a topic of whisper campaigns resulting from his support of the New Deal and his poor health. During the 2000 Republican presidential primary, Senator John McCain, whose adopted daughter is a dark-skinned child from Bangladesh, was the target of a whisper campaign implying that he had fathered a black child out of wedlock. Voters in South Carolina were reportedly asked in a push poll, "Would you be more likely or less likely to vote for John McCain if you knew that he fathered an illegitimate black child?" In addition, during the week of the nomination vote, callers to dozens of radio stations asked talk show hosts what they thought of McCain's fathering of a black child out of wedlock. Politics: In 2018, while what the United States should do about the disappearance of Jamal Khashoggi remained an open question, a whispering campaign was mounted attacking Khashoggi's character.
**ColorSounds** ColorSounds: ColorSounds was a national music video program televised on PBS stations in the mid-1980s. ColorSounds taught viewers how to read and speak English creatively through the use of music videos. ColorSounds: In this series, a music video (the same one that would appear on channels such as MTV and VH1) is presented with the lyrics shown karaoke-style at the bottom of the screen, with words associated with the video's focus highlighted in various colors. For example, if a music video featured nouns, every noun in the lyrics would be highlighted in red; likewise, if another music video featured vowel sounds such as "a", all the a's would be highlighted in gray. ColorSounds presented an effective learning program through its multimodal approach to teaching and learning language. ColorSounds: In addition, ColorSounds noted and corrected spelling mistakes and poor grammar in the original lyrics. The show had no host, announcer or theme song; the topics to be covered in a particular episode were displayed silently as a static caption at the start of the program or segment. ColorSounds was packaged in two formats: a 30-minute version for general PBS scheduling, and a 15-minute version for use during in-school telecasts.
**Liquid nitrogen wash** Liquid nitrogen wash: The Liquid Nitrogen Wash is mainly used for the production of ammonia synthesis gas within fertilizer production plants. It is usually the last purification step in the ammonia production process sequence, upstream of the actual ammonia production. Competing Technologies: The purpose of the final purification step upstream of the actual ammonia production is to remove all components that are poisonous to the sensitive ammonia synthesis catalyst. This can be done with the following concepts: Methanation, formerly the standard concept, with the disadvantage that the methane content is not removed but even increased, since in this process the carbon oxides (carbon monoxide and carbon dioxide) are converted to methane. Competing Technologies: Pressure Swing Adsorption, which can replace the low temperature shift, the carbon dioxide removal and the methanation, since this process produces pure hydrogen, which can be mixed with pure nitrogen. Liquid Nitrogen Wash, which produces an ammonia syngas for a so-called "inert free" ammonia synthesis loop that can be operated without the withdrawal of a purge gas stream. Functions: The Liquid Nitrogen Wash has two principal functions: removal of impurities such as carbon monoxide, argon and methane from the crude hydrogen gas, and addition of the required stoichiometric amount of nitrogen to the hydrogen stream to achieve the correct ammonia synthesis gas ratio of hydrogen to nitrogen of 3 : 1. The carbon monoxide must be removed completely from the synthesis gas (i.e. syngas) since it is poisonous to the sensitive ammonia synthesis catalyst. The components argon and methane are inert gases within the ammonia synthesis loop, but would accumulate there and call for a purge gas system with synthesis gas losses, or additional expenditures for a purge gas separation unit. The main sources for the supply of feed gases are partial oxidation processes. Upstream Syngas Preparations: Since the synthesis gas exiting the partial oxidation process consists mainly of carbon monoxide and hydrogen, usually a sulfur-tolerant CO shift (i.e. water-gas shift reaction) is installed in order to convert as much carbon monoxide into hydrogen as possible. Shifting carbon monoxide and water into hydrogen also produces carbon dioxide; usually this is removed in an acid gas scrubbing process together with other sour gases such as hydrogen sulfide (e.g. in a Rectisol Wash Unit). Components: The Liquid Nitrogen Wash consists of two parts: an adsorber unit, in which solvent traces from an upstream acid gas scrubbing process (e.g. methanol, water), traces of carbon dioxide and other compounds are completely removed in a molecular sieve bed in order to avoid freezing and subsequent blockage in the low temperature process, which operates at temperatures down to 80 K (−193 °C or −315 °F); and the actual Liquid Nitrogen Wash, enclosed in a so-called cold box where all cryogenic process equipment is located and insulated in order to minimize heat ingress from ambient. Principle of Operation: The name Liquid Nitrogen Wash is a little misleading, since no liquid nitrogen is supplied from outside to be used for scrubbing; rather, gaseous high-pressure nitrogen is supplied by the Air Separation Unit that usually also provides the oxygen for the upstream partial oxidation. This gaseous high-pressure nitrogen is partially liquefied in the process and is used as the washing agent.
In a so-called nitrogen wash column, the impurities carbon monoxide, argon and methane are washed out of the synthesis gas by means of this liquid nitrogen. These impurities are dissolved together with a small part of the hydrogen and leave the column as the bottom stream. The purified gas leaves the column at the top. The now-purified synthesis gas is warmed up and mixed with the required amount of gaseous high-pressure nitrogen in order to achieve the hydrogen to nitrogen ratio of 3 to 1, and can then be routed to the ammonia synthesis. At operating pressures higher than about 50 bar(a), the refrigeration demand of the Liquid Nitrogen Wash is covered by the Joule–Thomson effect, and no additional external refrigeration, e.g. by vaporization of liquid nitrogen, is required. Advantages of the Combination of a Liquid Nitrogen Wash with a Rectisol Process: The Liquid Nitrogen Wash is especially favorable when combined with the Rectisol Wash Unit. The combination and advantageous interconnections between a Rectisol Wash Unit and a Liquid Nitrogen Wash lead to smaller equipment and better operability. The gas coming from the Rectisol Wash Unit can be sent to the Liquid Nitrogen Wash at low temperature (directly from the methanol absorber, without being warmed up). Since part of the purified gas is reheated in the Rectisol Wash Unit, small fluctuations in flow and temperature can easily be compensated, leading to good operability. To improve the hydrogen recovery, an integrated hydrogen recycle from the Liquid Nitrogen Wash to the Rectisol Wash Unit can be installed, which uses the already existing recycle compressor of the Rectisol Wash Unit to recycle the hydrogen-rich flash gas from the Liquid Nitrogen Wash back into the feed gas of the Rectisol Wash Unit. This leads to extremely high hydrogen recovery rates without any further equipment.
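To make the stoichiometric adjustment concrete, here is a minimal sketch (not from the source; the flow figures and function name are illustrative assumptions) of the make-up nitrogen calculation behind the 3 : 1 hydrogen-to-nitrogen target:

```python
def makeup_nitrogen(h2_flow: float, n2_flow_in: float) -> float:
    """Return the gaseous N2 flow to admix so that H2 : N2 = 3 : 1.

    h2_flow    -- molar flow of hydrogen in the purified syngas
    n2_flow_in -- molar flow of nitrogen already present in the stream
    Units are arbitrary but must match (e.g. kmol/h).
    """
    n2_required = h2_flow / 3.0        # stoichiometric target: N2 = H2 / 3
    makeup = n2_required - n2_flow_in  # additional high-pressure N2 needed
    if makeup < 0:
        raise ValueError("stream already exceeds the 3:1 nitrogen allowance")
    return makeup

# Illustrative numbers only: 9000 kmol/h of H2 with 500 kmol/h of N2 already
# present needs 9000/3 - 500 = 2500 kmol/h of make-up nitrogen.
print(makeup_nitrogen(9000.0, 500.0))  # -> 2500.0
```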
**Timeline of Gravity Probe B** Timeline of Gravity Probe B: The Gravity Probe B mission timeline describes the events during the flight of Gravity Probe B, the science phase of its experimental campaign, and the analysis of the recorded data. Mission progress: April 20, 2004 Launch of GP-B from Vandenberg AFB and successful insertion into polar orbit. April 28, 2004 Mission controllers started the "Initialization and Orbit Checkout" phase (IOC), which was expected to last 40–60 days. At this point all gyros were spun up and the SQUID detectors were being checked. All other spacecraft subsystems performed well, including solar power and the attitude control system. Mission progress: May 1, 2004 During the IOC the primary computer of the spacecraft received more radiation than its built-in error correction mechanism could cope with. GP-B switched over to the backup computer as designed. Since the spacecraft crosses over the polar areas of the Earth with their high radiation, this had been anticipated by the designers. The primary computer was repaired and put back into service. All science instruments on board were working perfectly throughout this incident. Mission progress: May 14, 2004 The spacecraft went into safe mode for a short period when some of the helium micro-thrusters behaved in an unstable way. This problem was addressed quickly and GP-B went back into IOC mode. The cause of this incident was a high-pressure condition in the dewar, which was reached due to warm (10 K) helium being used to remove magnetic flux from the gyroscopes. Mission members believed that the IOC phase would still be completed on time, after a total of 60 mission days. Mission progress: July 13, 2004 The preparations for the science phase of the mission reached a major milestone: one of the gyros (No. 4) reached the science-ready speed of 6,348 rpm (105.8 Hz) during a short test. Mission progress: July 16, 2004 An unexpectedly large slowdown of gyro 4 was detected during the full-speed spin-up of gyro 2. Although some "leakage" effect was expected, the amount seen led mission planners to search for ways to diminish the effect for this final step towards the science phase. This investigation took close to a week and delayed the planned spin-up of gyros 1 and 3. Mission progress: Ground tests had indicated that a good signal-to-noise ratio for science data is reached once the gyro spin rate exceeds 80 Hz; however, mission managers stressed that a slightly lower figure would also be sufficient for entering the science phase of GP-B. August 27, 2004 Mission managers announced that GP-B entered its science phase today. On mission day 129 all systems were configured to be ready for data collection, with the only exception being gyro 4, which needed further spin axis alignment. Mission progress: After weeks of testing it was decided to use the "back-up drag-free" mode around gyro 3. Back-up drag-free mode suspends the rotor electrically and flies the thrusters to drive the suspension correction to zero. This contrasts with main drag-free mode, which uses no electrical suspension and flies the thrusters to center the rotor. Also, the roll rate of GP-B was adjusted to 0.7742 rpm (from the originally planned 0.52 rpm) in order to make better use of the lower than planned rotor speeds. The spacecraft roll rate is always chosen to avoid harmonic interference with the sample rate, the orbital rate, the calibration rate and the telemetry data rate during data taking.
Mission progress: Mission managers also reported plans to continue tuning the drag-free performance of the Attitude and Translation Control (ATC) system in the early portion of the science phase to correct for an unknown force that was causing excess helium flow from the dewar through the micro-thrusters. September 7, 2004 The main computer suffered a "double-bit" error in its memory. The location of this error was non-critical to the mission and the function of the spacecraft. A correction that fixed the problem was successfully uploaded. All other subsystems continued to perform well. Mission progress: September 16, 2004 Gyro 4 joined gyros 1, 2, and 3 in science mode after its spin axis was successfully aligned with the guide star. September 23, 2004 Due to problems with gyro 3, GP-B went into safe mode. The mission team was able to ensure minimal impact on the science by exercising the "safing" actions of the spacecraft and switching the control system setup; the spacecraft then maintained the drag-free orbit around gyro 1. Mission progress: September 24, 2004 The mission went back into science mode. October 19, 2004 Gyro 1 showed the same behavior as gyro 3 earlier, which prompted mission members to switch back to a drag-free orbit around gyro 3. Adjustments were made to both gyro suspension systems (GSS) to avoid future problems. All this was done in a span of three hours, and science data collection was interrupted only briefly. Mission progress: November 10, 2004 When passing over the South Atlantic Anomaly during a strong solar storm, a memory error in a critical region put GP-B into safe mode. This incident caused a computer to reboot and put the gyros into "analog mode." After about two days all memory problems were fixed and science data became available again. At first, it was assumed a proton hit from the storm was the cause, but later analysis showed that this was not the case. Instead, an earlier error at a presumed non-critical memory position was causing the safe mode when the memory was accessed during routine maintenance. Mission progress: January 2005 A series of strong solar flares disrupted data taking for several days. On January 17 a very powerful radiation storm created multi-bit errors in the onboard computer memory and saturated the telescope detectors, so that GP-B lost track of the guide star. The science team, however, was confident that the temporary loss of science data would have no significant effect on the results. On January 20 the high level of proton flux was still generating "single-bit errors" in GP-B memory, but the telescope was locked on the guide star again, and the gyroscope electronics seemed to perform nominally. Mission progress: March 14, 2005 The onboard backup computer (B-side) rebooted after a safe mode event, which came two weeks after the switch-over from the nominal computer (A-side). Both events were triggered by the occurrence of Multi-Bit Errors (MBEs) in the memory of each computer. It took mission members about 29 hours to recover and transfer back to the nominal state, with the guide star locked in. Mission progress: May 6, 2005 Mission members deduced from a "heat pulse test" that there was enough liquid helium on board the spacecraft to cool the experiment until sometime between late August and early September 2005. They prepared to start the calibration procedures, and thus end the science phase, in early August.
August 15, 2005 The science phase of the mission ended and the spacecraft instruments transitioned to the final calibration mode. September 26, 2005 The calibration phase ended with liquid helium still in the dewar. The spacecraft was returned to science mode pending the depletion of the last of the liquid helium. September 29, 2005 The liquid helium in the dewar finally ran out, and the experiment began to warm up. Drag-free mode was turned off. Mission progress: February 2006 Phase I of data analysis completed. July 10, 2006 Uncommanded reboot of the backup CCCA flight computer. August 2006 Completion of Phase II of data analysis. September 2006 The analysis team realised that more error analysis, particularly around the polhode motion of the gyros, was necessary than could be done by April 2007, and applied to NASA for an extension of funding to the end of 2007. Mission progress: October 2006 The United States Air Force Academy (USAFA) took control of satellite operations. December 2006 Completion of Phase III of data analysis. April 14, 2007 Announcement of the best results obtained to date. Francis Everitt gave a plenary talk at the meeting of the American Physical Society announcing initial results: "The data from the GP-B gyroscopes clearly confirm Einstein's predicted geodetic effect to a precision of better than 1 percent. However, the frame-dragging effect is 170 times smaller than the geodetic effect, and Stanford scientists are still extracting its signature from the spacecraft data." (Source: Gravity Probe B web site) Spring 2008 Mission update, "Increasing the Precision of the Results": "In reality, GP-B experienced six major or significant anomalies during the 353-day science data collection period, and these anomalies caused the experimental data set to be divided into seven major segments, with a total of 307 days of 'good' science data when all seven segments are combined. This segmentation reduced the best precision obtainable from the 1% goal down to about 2% for the frame-dragging effect and 0.02% for the geodetic effect. This reduced level of precision, if achieved, would be extraordinary." http://einstein.stanford.edu/highlights/status1.html May 2011 Final results were published in a paper in Physical Review Letters and on the arXiv. The results of a geodetic drift rate of −6,601.8±18.3 mas/yr and a frame-dragging drift rate of −37.2±7.2 mas/yr are consistent with the GR predictions of −6,606.1 mas/yr and −39.2 mas/yr, respectively. Mission progress: Future On February 9, 2007 it was announced that a number of unexpected signals had been received and that these would need to be separated out before final results could be released. Consequently, the date for the final release of data was pushed back from April 2007 to December 2007. Speculation on some internet sites, such as PhysicsForums.org, has centered on the source and nature of these anomalous signals. Several posters and alternative theorists (some skeptical of GP-B and its methodology) have indicated that understanding these signals may be more interesting than the original goal of testing GR. Mission progress: Stanford has agreed to release the raw data to the public at an unspecified date in the future. It is likely that this data will be examined by independent scientists and independently reported to the public well after the December 2007 release.
Because future interpretations of the data by scientists outside GPB may differ from the official results, it may take several more years for all of the data received by GPB to be completely understood.
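As a quick check on the May 2011 numbers quoted above, here is a minimal sketch (not from the source; the structure and names are illustrative) showing that both measured drift rates lie well within one quoted standard error of the general-relativity predictions:

```python
# Published GP-B drift rates (milliarcseconds per year) and GR predictions.
results = [
    # (name, measured, 1-sigma error, GR prediction)
    ("geodetic", -6601.8, 18.3, -6606.1),
    ("frame-dragging", -37.2, 7.2, -39.2),
]

for name, measured, sigma, predicted in results:
    deviation = abs(measured - predicted) / sigma  # distance in sigma units
    print(f"{name}: {deviation:.2f} sigma from the GR prediction")

# Output:
#   geodetic: 0.23 sigma from the GR prediction
#   frame-dragging: 0.28 sigma from the GR prediction
```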
**Jade** Jade: Jade is a mineral used as jewellery or for ornaments. It is typically green, although it may be yellow or white. Jade can refer to either of two different silicate minerals: nephrite (a silicate of calcium and magnesium in the amphibole group of minerals), or jadeite (a silicate of sodium and aluminium in the pyroxene group of minerals). Jade is well known for its ornamental use in East Asian, South Asian, and Southeast Asian art. It is also commonly used in Latin American countries such as Mexico and Guatemala. The use of jade in Mesoamerica for symbolic and ideological ritual was influenced by its rarity and value among pre-Columbian Mesoamerican cultures, such as the Olmecs, the Maya, and other ancient civilizations of the Valley of Mexico. Etymology: The English word jade is derived (via French l'ejade and Latin ilia 'flanks, kidney area') from the Spanish term piedra de ijada (first recorded in 1565) or 'loin stone', from its reputed efficacy in curing ailments of the loins and kidneys. Nephrite is derived from lapis nephriticus, a Latin translation of the Spanish piedra de ijada. History: East Asia Prehistoric and historic China During Neolithic times, the key known sources of nephrite jade in China for utilitarian and ceremonial jade items were the now-depleted deposits in the Ningshao area in the Yangtze River Delta (Liangzhu culture 3400–2250 BC) and in an area of the Liaoning province and Inner Mongolia (Hongshan culture 4700–2200 BC). Dushan Jade (imitation jade) was being mined as early as 6000 BC. In the Yin Ruins of the Shang Dynasty (1600 to 1050 BC) in Anyang, Dushan Jade ornaments were unearthed in the tombs of the Shang kings. History: Jade was considered the "imperial gem" and was used to create many utilitarian and ceremonial objects, from indoor decorative items to jade burial suits. From the earliest Chinese dynasties to the present, the jade deposits most used were not only those of Khotan in the western Chinese province of Xinjiang but also those of other parts of China, such as Lantian, Shaanxi. There, white and greenish nephrite jade is found in small quarries and as pebbles and boulders in the rivers flowing from the Kuen-Lun mountain range eastward into the Takla-Makan desert area. The river jade collection is concentrated in the Yarkand, the White Jade (Yurungkash) and Black Jade (Karakash) Rivers. From the Kingdom of Khotan, on the southern leg of the Silk Road, yearly tribute payments consisting of the most precious white jade were made to the Chinese Imperial court, where they were worked into objets d'art by skilled artisans, as jade had a status-value exceeding that of gold or silver. Jade became a favourite material for the crafting of Chinese scholars' objects, such as rests for calligraphy brushes, as well as the mouthpieces of some opium pipes, due to the belief that breathing through jade would bestow longevity upon smokers who used such a pipe. Jadeite, with its bright emerald-green, pink, lavender, orange and brown colours, was imported from Burma to China only after about 1800. The vivid green variety became known as Feicui (翡翠) or Kingfisher (feathers) Jade. It quickly became almost as popular as nephrite and a favorite of the Qing Dynasty's nouveau riche, while scholars still had a strong attachment to nephrite (white jade, or Khotan), which they deemed the symbol of a nobleman. History: In the history of the art of the Chinese empire, jade has had a special significance, comparable with that of gold and diamonds in the West.
Jade was used for the finest objects and cult figures, and for grave furnishings for high-ranking members of the imperial family. Due to that significance and the rising middle class in China, in 2010 the finest jade, found in nuggets of "mutton fat" jade – so named for its marbled white consistency – could sell for $3,000 an ounce, a tenfold increase from a decade previously. The Chinese character 玉 (yù) is used to denote the several types of stone known in English as "jade" (e.g. 玉器, jadewares), such as jadeite (硬玉, 'hard jade', another name for 翡翠) and nephrite (軟玉, 'soft jade'). But because of the value added culturally to jades throughout Chinese history, the word has also come to refer more generally to precious or ornamental stones, and is very common in more symbolic usage, as in phrases like 拋磚引玉/抛砖引玉 (lit. "casting a brick (i.e. the speaker's own words) to draw a jade (i.e. pearls of wisdom from the other party)"), 玉容 (a beautiful face; "jade countenance"), and 玉立 (slim and graceful; "jade standing upright"). The character has a similar range of meanings when appearing as a radical in other characters. History: Prehistoric and historic Japan Jade in Japan was used for bracelets. It was a symbol of wealth and power. Leaders also used jade in rituals. It is the national stone of Japan. Examples of use in Japan can be traced back to the early Jomon period, about 7,000 years ago. XRF analysis has revealed that all jade used in Japan since the Jomon period comes from Itoigawa. The jade culture that blossomed in ancient Japan prized green stones, and jade of other colors was not used. One theory holds that green was valued because the color was believed to represent fertility, life, and the soul of the earth. History: Prehistoric and historic Korea The use of jade and other greenstone was a long-term tradition in Korea (c. 850 BC – AD 668). Jade is found in small numbers in pit-houses and burials. The craft production of small comma-shaped and tubular "jades" using materials such as jade, microcline, jasper, etc., in southern Korea originates from the Middle Mumun Pottery Period (c. 850–550 BC). Comma-shaped jades are found on some of the gold crowns of Silla royalty (c. 300/400–668 AD) and in sumptuous elite burials of the Korean Three Kingdoms. After the state of Silla united the Korean Peninsula in 668, the widespread popularisation of death rituals related to Buddhism resulted in the decline of the use of jade in burials as prestige mortuary goods. History: South Asia India The Jain temple of Kolanpak in the Nalgonda district, Telangana, India is home to a 5-foot (1.5 m) high sculpture of Mahavira that is carved entirely out of jade. India is also noted for its craftsman tradition of using large amounts of green serpentine, or false jade, obtained primarily from Afghanistan in order to fashion jewellery and ornamental items such as sword hilts and dagger handles. The Salar Jung Museum in Hyderabad has a wide range of jade-hilted daggers, mostly owned by the former Sultans of Hyderabad. History: Southeast Asia Myanmar Today, it is estimated that Myanmar is the origin of upwards of 70% of the world's supply of high-quality jadeite. Most of the jadeite mined in Myanmar is not cut for use there, instead being transported to other nations, primarily in Asia, for use in jewelry and other products.
The jadeite deposits found in Kachinland, in Myanmar's northern regions, are of the highest quality in the world, considered precious by sources in China going as far back as the 10th century. History: Jadeite in Myanmar is primarily found in the "Jade Tract" located in Lonkin Township in Kachin State in northern Myanmar, which encompasses the alluvial region of the Uyu River between the 25th and 26th parallels. Present-day extraction of jade in this region occurs at the Phakant-gyi, Maw Se Za, Tin Tin, and Khansee mines. Khansee is also the only mine that produces Maw Sit Sit, a type of jade. Mines at Tawmao and Hweka are mostly exhausted. From 1964 to 1981, mining was exclusively an enterprise of the Myanmar government. In 1981, 1985, and 1995, the Gemstone laws were modified to allow increasing private enterprise. In addition to this region, there are also notable mines in the neighboring Sagaing District, near the towns of Nasibon, Natmaw, and Hkamti. Sagaing is a district in Myanmar proper, not a part of the ethnic Kachin State. History: Taiwan, Philippines, and Maritime Southeast Asia Carved nephrite jade was the main trade commodity during the historical Maritime Jade Road, an extensive trading network connecting multiple areas in Southeast and East Asia. The nephrite jade was mined in eastern Taiwan by animist Taiwanese indigenous peoples and processed mostly in the Philippines by animist indigenous Filipinos. Some was also processed in Vietnam, while the peoples of Malaysia, Brunei, Singapore, Thailand, Indonesia, and Cambodia also participated in the massive animist-led nephrite jade trading network, in which other commodities were traded as well. Participants in the network at the time had a majority animist population. The maritime road is one of the most extensive sea-based trade networks of a single geological material in the prehistoric world. It was in existence for at least 3,000 years, with peak production from 2000 BCE to 500 CE, older than the Silk Road in mainland Eurasia. It began to wane during its final centuries, from 500 CE until 1000 CE. The entire period of the network was a golden age for the diverse animist societies of the region. History: Others Māori Nephrite jade in New Zealand is known as pounamu in the Māori language (often called "greenstone" in New Zealand English), and plays an important role in Māori culture. It is considered a taonga, or treasure, and therefore protected under the Treaty of Waitangi, and its exploitation is restricted and closely monitored. It is found only in the South Island of New Zealand, known as Te Wai Pounamu in Māori ("The [land of] Greenstone Water") or Te Wahi Pounamu ("The Place of Greenstone"). History: Pounamu taonga increase in mana (prestige) as they pass from one generation to another. The most prized taonga are those with known histories going back many generations. These are believed to have their own mana and were often given as gifts to seal important agreements. History: Tools, weapons and ornaments were made of it; in particular adzes, the 'mere' (short club), and the hei-tiki (neck pendant). Nephrite jewellery of Māori design is widely popular with locals and tourists, although some of the jade used for it is now imported from British Columbia and elsewhere. Pounamu taonga include tools such as toki (adzes), whao (chisels), whao whakakōka (gouges), ripi pounamu (knives), scrapers, awls, hammer stones, and drill points.
Hunting tools include matau (fishing hooks) and lures, spear points, and kākā poria (leg rings for fastening captive birds); weapons such as mere (short-handled clubs); and ornaments such as pendants (hei-tiki, hei matau and pekapeka), ear pendants (kuru and kapeu), and cloak pins. History: Functional pounamu tools were widely worn for both practical and ornamental reasons, and continued to be worn as purely ornamental pendants (hei kakī) even after they were no longer used as tools. History: Mesoamerica Jade was a rare and valued material in pre-Columbian Mesoamerica. The only source from which the various indigenous cultures, such as the Olmec and Maya, could obtain jade was located in the Motagua River valley in Guatemala. Jade was largely an elite good, and was usually carved in various ways, whether serving as a medium upon which hieroglyphs were inscribed, or shaped into symbolic figurines. Generally, the material was highly symbolic, and it was often employed in the performance of ideological practices and rituals. History: Canada Jade was first identified in Canada by Chinese settlers in 1886 in British Columbia. At that time jade was considered worthless, as the settlers were searching for gold. Jade was not commercialized in Canada until the 1970s. The mining business Loex James Ltd., which was started by two Californians, began commercial mining of Canadian jade in 1972. Mining is done from large boulders that contain bountiful deposits of jade. Samples are first extracted with diamond-tipped core drills to ensure that the jade meets requirements. Hydraulic spreaders are then inserted into cleavage points in the rock so that the jade can be broken away. Once the boulders are removed and the jade is accessible, it is broken down into more manageable 10-tonne pieces using water-cooled diamond saws. The jade is then loaded onto trucks and transported to the proper storage facilities. History: Russia Russia imported jade from China for a long time, but in the 1860s its own jade deposits were found in Siberia. Today, the main deposits of jade are located in Eastern Siberia, but jade is also extracted in the Polar Urals and in the Krasnoyarsk territory (Kantegirskoye and Kurtushibinskoye deposits). Russian raw jade reserves are estimated at 336 tons. History: Russian jade culture is closely connected with jewellery production such as that of Fabergé, whose workshops combined the green stone with gold, diamonds, emeralds, and rubies. Siberia In the 1950s and 1960s, there was a strong belief among many Siberians, stemming from tradition, that jade was part of a class of sacred objects that had life. Mongolia In the same period, a similar belief, rooted in ancient tradition, was held by many Mongolians. The mineral: Nephrite and jadeite It was not until 1863 that French mineralogist Alexis Damour determined that what was referred to as "jade" could in fact be one of two different minerals, either nephrite or jadeite. Nephrite consists of a microcrystalline interlocking fibrous matrix of the calcium- and magnesium-iron-rich amphibole mineral series tremolite (calcium-magnesium) to ferroactinolite (calcium-magnesium-iron). The middle member of this series, with an intermediate composition, is called actinolite (the silky fibrous mineral form is one form of asbestos). The higher the iron content, the greener the colour.
Tremolite occurs in metamorphosed dolomitic limestones and actinolite in metamorphic greenschists/glaucophane schists. The mineral: Jadeite is a sodium- and aluminium-rich pyroxene. The more precious kind of jade, it is a microcrystalline interlocking growth of crystals (not a fibrous matrix, as nephrite is). It occurs only in metamorphic rocks. The mineral: Both nephrite and jadeite were used from prehistoric periods for hardstone carving. Jadeite has about the same hardness as quartz (between 6.0 and 7.0 on the Mohs scale), while nephrite is slightly softer (6.0 to 6.5) and so can be worked with quartz or garnet sand, and polished with bamboo or even ground jade. However, nephrite is tougher and more resistant to breakage. Among the earliest known jade artifacts excavated from prehistoric sites are simple ornaments with bead, button, and tubular shapes. Additionally, jade was used for adze heads, knives, and other weapons, which can be delicately shaped. The mineral: As metal-working technologies became available, the beauty of jade made it valuable for ornaments and decorative objects. The mineral: Unusual varieties The name nephrite derives from the Greek word for "kidney", because in ancient times it was believed that wearing this kind of jade around the waist could cure kidney disease. Nephrite can be found in a creamy white form (known in China as "mutton fat" jade) as well as in a variety of light green colours, whereas jadeite shows more colour variations, including blue, brown, red, black, dark green, lavender and white. Of the two, jadeite is rarer, documented in fewer than 12 places worldwide. Translucent emerald-green jadeite is the most prized variety, both historically and today. As "quetzal" jade, bright green jadeite from Guatemala was treasured by Mesoamerican cultures, and as "kingfisher" jade, vivid green rocks from Burma became the preferred stone of post-1800 Chinese imperial scholars and rulers. Burma (Myanmar) and Guatemala are the principal sources of modern gem jadeite. In the area of Mogaung in the Myitkyina District of Upper Burma, jadeite formed a layer in the dark-green serpentine and has been quarried and exported for well over a hundred years. Canada provides the major share of modern lapidary nephrite. The mineral: Enhancement Jade may be enhanced (sometimes called "stabilized"). Some merchants will refer to these as grades, but degree of enhancement is distinct from colour and texture quality; in other words, Type A jadeite is not enhanced but can still have poor colour and texture. There are three main methods of enhancement, sometimes referred to as the ABC Treatment System: Type A jadeite has not been treated in any way except surface waxing. The mineral: Type B treatment involves exposing a promising but stained piece of jadeite to chemical bleaches and/or acids and impregnating it with a clear polymer resin. This results in a significant improvement of the transparency and colour of the material. Currently, infrared spectroscopy is the most accurate test for the detection of polymer in jadeite. Type C jade has been artificially stained or dyed. The effects are somewhat uncontrollable and may result in a dull brown; in any case, translucency is usually lost. B+C jade is a combination of B and C: it has been both impregnated and artificially stained. Type D jade refers to a composite stone, such as a doublet comprising a jade top with a plastic backing.
**ICMP Router Discovery Protocol** ICMP Router Discovery Protocol: In computer networking, the ICMP Internet Router Discovery Protocol (IRDP), also called the Internet Router Discovery Protocol, is a protocol for computer hosts to discover the presence and location of routers on their IPv4 local area network. Router discovery is useful for accessing computer systems on other, nonlocal networks. The IRDP is defined by the IETF RFC 1256 standard, with the Internet Control Message Protocol (ICMP), upon which it is based, defined in IETF RFC 792. IRDP eliminates the need to manually configure routing information. Router discovery messages: To enable router discovery, the IRDP defines two kinds of ICMP messages: The ICMP Router Solicitation message is sent from a computer host to any routers on the local area network to request that they advertise their presence on the network. Router discovery messages: The ICMP Router Advertisement message is sent by a router on the local area network to announce its IP address as available for routing. When a host boots up, it sends solicitation messages to the IP multicast address 224.0.0.2. In response, one or more routers may send advertisement messages. If there is more than one router, the host usually picks the first message it gets and adds that router to its routing table. Independently of any solicitation, a router may also periodically send out advertisement messages. These messages are not considered a routing protocol, as they do not determine a routing path, just the presence of possible gateways. Extensions: The IRDP strategy has been used in the development of the IPv6 Neighbor Discovery Protocol, which uses ICMPv6 messages, the IPv6 analog of ICMP messages. Neighbor discovery is governed by IETF standards RFC 4861 and RFC 4862. IRDP also plays an essential role in mobile networking through IETF standard RFC 3344 (Mobile IPv4), where it forms the basis of agent discovery.
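To illustrate the message format, here is a minimal sketch (an illustration based on RFC 1256's wire format, not code from the source) that builds an ICMP Router Solicitation (type 10) and sends it to the all-routers multicast group 224.0.0.2; sending on a raw socket typically requires administrator privileges:

```python
import socket
import struct

def icmp_checksum(data: bytes) -> int:
    # Standard Internet checksum (RFC 1071): one's-complement sum of 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_router_solicitation() -> bytes:
    # ICMP Router Solicitation (RFC 1256): type 10, code 0, checksum, 4 reserved bytes.
    header = struct.pack("!BBHI", 10, 0, 0, 0)   # checksum field zeroed first
    checksum = icmp_checksum(header)
    return struct.pack("!BBHI", 10, 0, checksum, 0)

if __name__ == "__main__":
    msg = build_router_solicitation()
    # Raw ICMP socket; the port in sendto() is ignored for raw sockets.
    sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(msg, ("224.0.0.2", 0))  # all-routers multicast group
```

Any Router Advertisements sent in response would arrive on the same raw socket as ICMP type 9 messages listing the router's addresses and preference levels.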
**Digon** Digon: In geometry, a digon is a polygon with two sides (edges) and two vertices. Its construction is degenerate in a Euclidean plane because either the two sides would coincide or one or both would have to be curved; however, it can be easily visualised in elliptic space. A regular digon has both angles equal and both sides equal and is represented by the Schläfli symbol {2}. It may be constructed on a sphere as a pair of 180-degree arcs connecting antipodal points, in which case it forms a lune. The digon is the simplest abstract polytope of rank 2. A truncated digon, t{2}, is a square, {4}. An alternated digon, h{2}, is a monogon, {1}. In Euclidean geometry: The digon can have one of two visual representations if placed in Euclidean space. In Euclidean geometry: One representation is degenerate, and visually appears as a double-covering of a line segment. Appearing when the minimum distance between the two edges is 0, this form arises in several situations. This double-covering form is sometimes used for defining degenerate cases of some other polytopes; for example, a regular tetrahedron can be seen as an antiprism formed of such a digon. It can be derived from the alternation of a square (h{4}), as it requires two opposing vertices of said square to be connected. When higher-dimensional polytopes involving squares or other tetragonal figures are alternated, these digons are usually discarded and considered single edges. In Euclidean geometry: A second visual representation, infinite in size, is as two parallel lines stretching to (and projectively meeting at, i.e. having vertices at) infinity, arising when the shortest distance between the two edges is greater than zero. This form arises in the representation of some degenerate polytopes, a notable example being the apeirogonal hosohedron, the limit of a general spherical hosohedron at infinity, composed of an infinite number of digons meeting at two antipodal points at infinity. However, as the vertices of these digons are at infinity and hence are not bound by closed line segments, this tessellation is usually not considered to be an additional regular tessellation of the Euclidean plane, even when its dual order-2 apeirogonal tiling (infinite dihedron) is. In Euclidean geometry: Any straight-sided digon is regular even though it is degenerate, because its two edges are the same length and its two angles are equal (both being zero degrees). As such, the regular digon is a constructible polygon. Some definitions of a polygon do not consider the digon to be a proper polygon because of its degeneracy in the Euclidean case. In elementary polyhedra: A digon as a face of a polyhedron is degenerate because it is a degenerate polygon. But sometimes it can have a useful topological existence in transforming polyhedra. As a spherical lune: A spherical lune is a digon whose two vertices are antipodal points on the sphere. A spherical polyhedron constructed from such digons is called a hosohedron. Theoretical significance: The digon is an important construct in the topological theory of networks such as graphs and polyhedral surfaces. Topological equivalences may be established using a process of reduction to a minimal set of polygons, without affecting the global topological characteristics such as the Euler value. The digon represents a stage in the simplification where it can be simply removed and substituted by a line segment, without affecting the overall characteristics.
Theoretical significance: The cyclic groups may be obtained as rotation symmetries of polygons: the rotational symmetries of the digon provide the group C2.
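To make the spherical lune description above quantitative (a standard formula, not taken from this text): a digon on a sphere of radius r whose sides meet at dihedral angle θ covers the fraction θ/2π of the sphere, giving

```latex
% Area of a spherical digon (lune) with vertex angle \theta on a sphere of radius r:
A \;=\; \frac{\theta}{2\pi} \cdot 4\pi r^{2} \;=\; 2\,\theta\, r^{2}.
% Sanity check: \theta = \pi yields A = 2\pi r^{2}, exactly a hemisphere.
```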
**Keratoendotheliitis fugax hereditaria** Keratoendotheliitis fugax hereditaria: Keratoendotheliitis fugax hereditaria is an autosomal dominantly inherited disease of the cornea, caused by a point mutation in cryopyrin (also known as NALP3), which in humans is encoded by the NLRP3 gene located on the long arm of chromosome 1. In keratoendotheliitis fugax hereditaria, patients suffer from periodic transient inflammation of the corneal endothelium and stroma, leading to short-term obscuration of vision and, in some patients after repeated attacks, to central corneal stromal opacities. Approximately 50 known cases have been reported in the literature. The disease so far has only been described in Finland, but exome databases suggest it may be more widely distributed in people of European ancestry. Keratoendotheliitis fugax hereditaria is thought to belong to the cryopyrin-associated periodic syndromes. Presentation: Patients experience repeated unilateral attacks of keratitis 1 to 6 times per year, beginning at the age of 5 to 28 years. Men and women are equally affected. Attacks get less severe and less frequent in middle age. No seasonal variation has been reported. The symptoms are redness of the eye, pain, and photophobia. The attack may be associated with anterior chamber flare. These symptoms disappear in 1 to 2 days, but blurred vision may last for a few weeks. During the acute symptoms, a slit lamp shows pseudoguttae, dark patches in the corneal endothelium thought to represent patchy corneal endothelial swelling. The endothelium appears normal between attacks. The attack can be misdiagnosed and treated as an acute iridocyclitis. Visual acuity transiently deteriorates during the attack. Presentation: Older patients may show faint to definite central, horizontally oval, bilateral stromal opacities. The opacities may be associated with decreased visual acuity, but they have not been severe enough to require corneal transplantation. Genetics: Keratoendotheliitis fugax hereditaria is inherited in an autosomal dominant manner, meaning an affected individual need only inherit one mutated allele from one parent. The protein cryopyrin is encoded by the gene NLRP3, located at 1q44. The disease is frequent in Finland, and this population has a common mutation, D21H, accounting for all reported cases there. It has not been described in any other populations. However, the mutation was found in exome databases at a minor allele frequency (MAF) of 0.023% in the Finnish population and at an MAF of 0.0090% in aggregated non-Finnish European populations. Diagnosis: Upon clinical suspicion, diagnostic testing consists of identifying corneal pseudoguttae using a specular microscope or confocal microscope. Molecular genetic testing is also an option. Treatment: Patients have reported benefit from immediate treatment of their attacks with a topical corticosteroid or non-steroidal anti-inflammatory drug (NSAID) applied a few times a day for up to one week. Some patients have found more benefit from an oral NSAID. Prognosis: The repeated corneal inflammation can over time lead to reduced visual acuity. History: Keratoendotheliitis fugax hereditaria was first described in 1964 by Olavi Valle (1934–2013), a Finnish ophthalmologist with an interest in hereditary eye diseases. He reported this disease as keratitis fugax hereditaria in a family with 10 affected members over 4 generations.
Two decades later, a second Finnish family with 21 affected members in 5 generations was reported by other Finnish ophthalmologists who highlighted transient corneal endothelial changes, and proposed the term keratoendotheliitis fugax hereditaria.
**Purely inseparable extension** Purely inseparable extension: In algebra, a purely inseparable extension of fields is an extension k ⊆ K of fields of characteristic p > 0 such that every element of K is a root of an equation of the form x^q = a, with q a power of p and a in k. Purely inseparable extensions are sometimes called radicial extensions, which should not be confused with the similar-sounding but more general notion of radical extensions. Purely inseparable extensions: An algebraic extension E⊇F is a purely inseparable extension if and only if for every α∈E∖F, the minimal polynomial of α over F is not a separable polynomial. If F is any field, the trivial extension F⊇F is purely inseparable; for the field F to possess a non-trivial purely inseparable extension, it must be imperfect as outlined in the above section. Purely inseparable extensions: Several equivalent and more concrete definitions for the notion of a purely inseparable extension are known. If E⊇F is an algebraic extension with (non-zero) prime characteristic p, then the following are equivalent: 1. E is purely inseparable over F. 2. For each element α∈E, there exists n≥0 such that α^(p^n)∈F. 3. Each element of E has minimal polynomial over F of the form X^(p^n) − a for some integer n≥0 and some element a∈F. It follows from the above equivalent characterizations that if E=F[α] (for F a field of prime characteristic) such that α^(p^n)∈F for some integer n≥0, then E is purely inseparable over F. (To see this, note that the set of all x such that x^(p^n)∈F for some n≥0 forms a field; since this field contains both α and F, it must be E, and by condition 2 above, E⊇F must be purely inseparable.) If F is an imperfect field of prime characteristic p, choose a∈F such that a is not a pth power in F, and let f(X) = X^p − a. Then f has no root in F, and so if E is a splitting field for f over F, it is possible to choose α with f(α) = 0. In particular, α^p = a, and by the property stated in the paragraph directly above, it follows that F[α]⊇F is a non-trivial purely inseparable extension (in fact, E=F[α], and so E⊇F is automatically a purely inseparable extension). Purely inseparable extensions do occur naturally; for example, they occur in algebraic geometry over fields of prime characteristic. If K is a field of characteristic p, and if V is an algebraic variety over K of dimension greater than zero, the function field K(V) is a purely inseparable extension over the subfield K(V)^p of pth powers (this follows from condition 2 above). Such extensions occur in the context of multiplication by p on an elliptic curve over a finite field of characteristic p. Purely inseparable extensions: Properties If the characteristic of a field F is a (non-zero) prime number p, and if E⊇F is a purely inseparable extension, then for any intermediate field F⊆K⊆E, K is purely inseparable over F and E is purely inseparable over K. Furthermore, if [E : F] is finite, then it is a power of p, the characteristic of F. Conversely, if F⊆K⊆E is such that F⊆K and K⊆E are purely inseparable extensions, then E is purely inseparable over F. Purely inseparable extensions: An algebraic extension E⊇F is an inseparable extension if and only if there is some α∈E∖F such that the minimal polynomial of α over F is not a separable polynomial (i.e., an algebraic extension is inseparable if and only if it is not separable; note, however, that an inseparable extension is not the same thing as a purely inseparable extension).
If E ⊇ F is a finite degree non-trivial inseparable extension, then [E : F] is necessarily divisible by the characteristic of F. Purely inseparable extensions: If E ⊇ F is a finite degree normal extension, and if K = Fix(Gal(E/F)), then K is purely inseparable over F and E is separable over K. Galois correspondence for purely inseparable extensions: Jacobson (1937, 1944) introduced a variation of Galois theory for purely inseparable extensions of exponent 1, where the Galois groups of field automorphisms in Galois theory are replaced by restricted Lie algebras of derivations. The simplest case is for finite index purely inseparable extensions K ⊆ L of exponent at most 1 (meaning that the pth power of every element of L is in K). In this case the Lie algebra of K-derivations of L is a restricted Lie algebra that is also a vector space of dimension n over L, where [L : K] = p^n, and the intermediate fields in L containing K correspond to the restricted Lie subalgebras of this Lie algebra that are vector spaces over L. Although the Lie algebra of derivations is a vector space over L, it is not in general a Lie algebra over L, but is a Lie algebra over K of dimension n[L : K] = np^n. Galois correspondence for purely inseparable extensions: A purely inseparable extension is called a modular extension if it is a tensor product of simple extensions, so in particular every extension of exponent 1 is modular, but there are non-modular extensions of exponent 2 (Weisfeld 1965). Sweedler (1968) and Gerstenhaber & Zaromp (1970) gave an extension of the Galois correspondence to modular purely inseparable extensions, where derivations are replaced by higher derivations.
**Mail-in-a-Box** Mail-in-a-Box: Mail-in-a-Box is a free and open-source program for mail server hosting developed by Joshua Tauberer. The software's goal is to enable any user to turn a cloud system into a mail server in a few hours. The tool enables developers to host mail for multiple users and multiple domain names. The default configuration provides a spam detection system, monitoring, reporting and backup mechanisms. It can also set up and automatically renew a Let's Encrypt certificate, as well as handle the detailed DNS configuration needed to ensure that a mail server's IP address is trusted by other servers and less likely to be blacklisted. Its support for IMAP/SMTP facilitates synchronizing across devices. First developed in 2013 by Tauberer, the tool is written in Python. The project supports Ubuntu LTS.
**Biogenesis scandal** Biogenesis scandal: The Biogenesis scandal broke in 2013, when several Major League Baseball (MLB) players were accused of obtaining performance-enhancing drugs ("PEDs"), specifically human growth hormone, from the now-defunct rejuvenation clinic Biogenesis of America. After an ex-employee, annoyed over missing back-pay, revealed clinic records that were "clear in describing the firm's real business: selling performance-enhancing drugs", MLB sued six people connected to Biogenesis, accusing them of damaging the sport by providing banned substances to its players. In July, thirteen involved players received lengthy suspensions of fifty or more games (nearly a third of a season). Clinic history: Biogenesis of America was a health clinic briefly operating in Coral Gables, Florida, specializing in weight loss and hormone replacement therapy. It was first registered in state corporation records in March 2012 and was founded by Anthony Bosch (also listed as the program director). His father, Dr. Pedro Bosch, was listed as the medical director, and Bosch's younger brother, attorney Ashley Bosch, was listed as managing member. Porter Fischer was listed as marketing director. Several employees quit in the fall of 2012 after they were not paid, and the clinic closed months later in December 2012. Accusations and investigation: On January 22, 2013, the Miami New Times obtained documents from former Biogenesis employee Porter Fischer which it said linked three players – Melky Cabrera, Bartolo Colón and Yasmani Grandal – who had tested positive for performance-enhancing drugs in 2012 to the clinic. Additionally, the paper said several star players, including Alex Rodriguez, Ryan Braun, and Nelson Cruz, could be tied to the clinic. The paper, however, refused to hand the documents over to Major League Baseball (MLB) authorities. The Florida Department of Health and MLB both targeted the clinic's owner, Anthony Bosch, each separately taking action against him. Accusations and investigation: In March, MLB sued Bosch and his business partners Carlos Acevedo, Ricardo Martinez, Marcelo Albir, and Paulo da Silveira in an attempt to obtain information. The suit alleged that the six had "actively participated in a scheme ... to solicit or induce Major League players to purchase or obtain PES (performance-enhancing substances)". Subsequently, MLB claimed to have found evidence that a representative of Rodriguez had purchased his medical records. It then paid a former Biogenesis employee for documents. In April, Bosch received a complaint from the Florida Department of Health for practicing medicine without a license. The complaint urged him to sign a cease and desist agreement. Accusations and investigation: In May, Bosch agreed to work with MLB investigators in exchange for his name being removed from the lawsuit. In June, MLB conducted a large number of interviews with players it believed might be connected with Biogenesis. Every player interviewed was provided legal counsel by the Major League Baseball Players Association. In August, Wifredo A. Ferrer, U.S. attorney for the southern district of Florida, announced that Bosch intended to plead guilty to one charge of conspiracy to distribute testosterone. Player suspensions: On July 22, 2013, MLB suspended Milwaukee Brewers player Ryan Braun for the remainder of the 2013 season (65 games and the postseason) for his involvement with the Biogenesis clinic. Braun, who lost $3.25 million as a result, did not appeal the suspension.
ESPN reported that Braun decided to "strike a deal" with MLB after being presented with the evidence against him. Braun had previously tested positive for testosterone in December 2011, but maintained his innocence and ultimately avoided suspension for that violation on a technicality, as his test sample had been improperly handled. On August 5, 2013, Alex Rodriguez was suspended through the 2014 season (211 games at the time of the decision), but was allowed to play in 2013 pending his appeal of that decision. An arbitrator upheld the suspension in January 2014; because Rodriguez had been allowed to play in the 49 games between the decision and the hearing, the suspension was technically reduced to 162 games, representing the entire 2014 regular season and postseason. Twelve other players connected to the Biogenesis case agreed to 50-game suspensions without the right to appeal: Antonio Bastardo, Everth Cabrera, Francisco Cervelli, Nelson Cruz, Fautino de los Santos, Sergio Escalona, Fernando Martínez, Jesús Montero, Jordan Norberto, Jhonny Peralta, César Puello, and Jordany Valdespin. Cabrera, Cruz, and Peralta were All-Stars in 2013. Rodriguez, who received the longest suspension of all the players linked to Biogenesis, was punished for "his use and possession of numerous forms of prohibited performance-enhancing substances, including testosterone and human growth hormone, over the course of multiple years" and "for his attempts to cover up those violations and obstruct a league investigation", according to MLB. The 13 player suspensions are the most ever imposed simultaneously in the history of organized baseball, the previous record being Kenesaw Mountain Landis's banning of eight players for life for throwing the 1919 World Series. Melky Cabrera, Bartolo Colón, and Yasmani Grandal had each previously been suspended in 2012 and had already served 50-game suspensions for their involvement with Biogenesis. Two players mentioned in Biogenesis documents, Gio González and Danny Valencia, were cleared of any wrongdoing. Player suspensions: Appeals All of the suspended players, with the exception of Rodriguez, reached agreement with the League on the length of their suspension, and as part of that agreement waived their contractual right to appeal it to an arbitrator. Rodriguez was the only player who appealed his suspension, and he was allowed to play while his appeal was heard. The Players Association said it agreed with his decision to appeal, adding "We believe that the Commissioner has not acted appropriately under the Basic Agreement." His appeal was heard by arbitrator Fredric Horowitz, who had succeeded Shyam Das as baseball's designated arbitrator in 2012; Das had been removed from his position as baseball's long-time arbitrator as a direct result of his overturning Braun's original 50-game suspension for PEDs. Horowitz ruled that the suspension would stand from the time of the ruling until the end of the 2014 season (technically reduced from 211 to 162 games, since Rodriguez had been allowed to play the 49 games between the ruling and the appeal), leaving Rodriguez's career in limbo. Reactions: MLB commissioner Bud Selig remarked "We conducted a thorough, aggressive investigation guided by facts so that we could justly enforce our rules ... we pursued this matter because it was not only the right thing to do, but the only thing to do." The other players involved all agreed to deals that included a waiver of the right to appeal.
Cruz blamed a gastrointestinal infection for his drug use, remarking that, faced with the weight loss from the infection, he was unsure he would be physically able to play and "made an error in judgment that I deeply regret, and I accept full responsibility for that error." An emotional Cabrera said he had taken a banned substance for four days in 2012 to aid in injury recovery before stopping because "I realized it wasn't necessary. My heart and my conscience was killing me." Peralta remarked "I take full responsibility for my actions, have no excuses for my lapse in judgment and I accept my suspension." In Rodriguez's first game after his suspension was announced, against the Boston Red Sox, Ryan Dempster intentionally threw at Rodriguez, hitting him on the arm with his fourth pitch and receiving an ovation from the crowd. The home plate umpire, Brian O'Nora, issued a warning to both benches but did not eject Dempster; he later ejected Yankees manager Joe Girardi for arguing the call. Rodriguez answered with a home run later in the game.
**Water conditioner** Water conditioner: Water conditioners are formulations designed to be added to tap water before its use in an aquarium. If the tap water is chlorinated, then a simple conditioner containing a dechlorinator may be used. These products contain sodium thiosulfate, which reduces chlorine to chloride, a form less harmful to fish. However, chloramine is now often used in water disinfection, and simple dechlorinators deal only with the chlorine portion, releasing free ammonia that is very harmful to fish. More complex products employ sulfonates that are able to deal with both chlorine and ammonia. The most sophisticated products also contain chelators, such as ethylenediaminetetraacetic acid, to bind and remove heavy metals, and slime coat protectors such as polyvinylpyrrolidones or Aloe vera extracts.
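For readers who want the underlying dechlorination chemistry, a sketch follows. This stoichiometry is a common textbook form for thiosulfate reducing hypochlorite when chlorine is in excess; it is my own addition, not from the original, and the actual products depend on pH and on which chlorine species is present (with limited chlorine, tetrathionate can form instead of sulfate):

```latex
% Thiosulfate oxidized to sulfate while hypochlorite is reduced to chloride
\[
  \mathrm{S_2O_3^{2-} + 4\,OCl^{-} + H_2O \;\longrightarrow\; 2\,SO_4^{2-} + 4\,Cl^{-} + 2\,H^{+}}
\]
```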
**Anti-tank grenade** Anti-tank grenade: An anti-tank grenade is a specialized hand-thrown grenade used to defeat armored targets. Although their inherently short range limits the usefulness of grenades, troops can lie in ambush or maneuver under cover to exploit the limited outward visibility of the crew in a target vehicle. Hand-launched anti-tank grenades became redundant with the introduction of standoff rocket-propelled grenades and man-portable anti-tank systems. Anti-tank grenade: Grenades were first used against armored vehicles during World War I, but it was not until World War II that more effective shaped-charge anti-tank grenades were produced. AT grenades are unable to penetrate the armor of modern tanks, but may still damage lighter vehicles. History: The first anti-tank grenades were improvised devices. During World War I the Germans were the first to come up with an improvised anti-tank grenade, taking their regular "potato masher" stick grenade and taping two or three more high explosive heads to it to create one larger grenade. In combat, after arming, the grenade was thrown on top of the slowly advancing tank, where the armor was thin. The destructive properties of the stick grenade relied on its explosive payload rather than on fragmentation, which was advantageous against hard targets. History: During World War II, various nations made improvised anti-tank grenades by putting a number of defensive high explosive grenades into a sandbag. Due to their weight, these were normally thrown from very close range or placed directly in vulnerable spots on an enemy vehicle. Another method, used by the British Home Guard in 1940, was to place dynamite or some other high explosive in a thick sock, cover the lower part with axle grease, and then place the grease-covered part in a suitably sized tin can. The sock was pulled out, the fuse lit, and the sock thrown against the side of the tank turret in the hope it would stick until the explosion. If successful, it caused internal spalling of the armor plate, killing or injuring the tank crew inside. It is unknown whether this type of improvised anti-tank grenade was ever successfully employed in combat. By late 1940, the British had brought into production a purpose-built adhesive anti-tank grenade, known as the "sticky bomb", that was not very successful in combat. In Vietnam, the lunge mine was used in the First Indochina War, specifically the Battle of Hanoi, during which Battalion Commander Nguyen Van Thieng tried to use it; however, "the bombs failed to explode. In the end, he was shot and heroically sacrificed". When tanks overran entrenchments, hand grenades could be, and were, used by infantry as improvised anti-tank mines by placing or throwing them in the path of a tank in the hope of disabling a track. While this method was used in desperation, it usually proved more dangerous to the soldier on the ground than to the crew of the tank. History: Chinese troops in the Second Sino-Japanese War used suicide bombing against Japanese tanks. Chinese troops strapped explosives such as grenade packs or dynamite to their bodies and threw themselves under Japanese tanks to blow them up. This tactic was used during the Battle of Shanghai, where a Chinese suicide bomber stopped a Japanese tank column by exploding himself beneath the lead tank, and at the Battle of Taierzhuang, where dynamite and grenades were strapped on by Chinese troops who rushed at Japanese tanks and blew themselves up.
During one incident at Taierzhuang, Chinese suicide bombers obliterated four Japanese tanks with grenade bundles. Purpose-designed anti-tank grenades generally use the shaped charge principle to penetrate tank armor, although the high-explosive squash head (HESH) concept is also used. In military terminology, warheads employing shaped charges are called high-explosive anti-tank (HEAT) warheads. Because of the way shaped charges function, the grenade must hit the vehicle at a right angle for the effect to work most efficiently. The grenade design may ensure this by deploying a small drogue parachute or fabric streamers after being thrown, or improvised stabilisation fins if dropped from a drone. History: Britain fielded the first purpose-built anti-tank grenade of the Second World War in late 1940 with the No 68 AT Grenade, one of the first anti-tank weapons of the shaped-charge (HEAT) type. The No 68 was fired from a rifle using the Mills grenade cup launcher. It could penetrate 50 mm of armor plating, which was astonishing for 1940. Also developed by the UK during the war was the No 74 ST Grenade, popularly known as the "sticky bomb", in which the main charge was held in a glass sphere covered in adhesive. In anticipation of a German invasion, the British Army had asked for ideas for a simple, easy to use, ready-for-production and cheap close-in anti-tank weapon. The ST Grenade was a government-sponsored initiative by MIR(c), a group tasked with developing weapons for use in German- and Italian-occupied territory, and it was placed into mass production at Churchill's insistence; the British Army, however, after seeing how it was operated, rejected it even for the Home Guard, let alone its regular forces. History: The No 74 Grenade was later issued to troops as an emergency stop-gap measure against lightly armored Italian tanks in North Africa, where it proved, to the surprise of many, highly effective. Later in the war, French partisans used the No 74 effectively in sabotage work against German installations. The Hawkins grenade (No 75) was yet another anti-tank grenade; it could be thrown, or strung together in a chain and employed in a road-block. History: Shortly after the German invasion of Russia in 1941, the Germans introduced the Panzerwurfmine(L), an extremely lethal close-quarter HEAT anti-tank grenade that could destroy the most heavily armored tanks of the war. The grenade was tossed overhand to land atop the tank. After release by the thrower, three spring-out canvas fins stabilized it during its short flight. The Panzerwurfmine(L) was lethal and inexpensive to manufacture, but required considerable skill to throw accurately and was issued only to specially trained infantry tank-killer teams. It did not take the Russians long after capturing the German Panzerwurfmine(L) to come out with their own hand-thrown anti-tank grenade with a HEAT warhead. In 1940, they had developed a crude anti-tank grenade that used the simple blast effect of a large high explosive charge, designated RPG-40, which was stabilized in flight by a ribbon released after it was thrown. The RPG-43 (developed in late 1943) was a modified RPG-40 with a cone liner and a large number of fabric ribbons for flight stabilization after release.
In the last year of the war, they introduced the RPG-6, a total redesign of the RPG-43 with an improved kite-tail drogue in the handle and a standoff for the HEAT warhead, drastically increasing both accuracy and penetration, which was reported to be over 100 mm, more than adequate to cause catastrophic damage to any tank if it impacted the top. The Russian RPG-43 and RPG-6 were far simpler to use in combat than the German Panzerwurfmine(L) and did not require extensive training. History: A special chapter of German anti-tank grenades is the "Geballte Ladung" (massed load). It was not a single grenade model, but several ordinary hand grenades linked to each other (multiple high explosive loads on one stick grenade). Another German attempt at a man-portable AT weapon was the "Hafthohlladung" (attachable shaped charge). It was a large shaped charge equipped with three magnets so it would stick to a tank, but it was too heavy to be thrown: it had to be attached directly to the target area of a tank. After the end of World War Two, many eastern European nations engineered their own versions of the RPG-6, such as the Hungarian AZ-58-K-100. These were manufactured in the tens of thousands and given to 'armies of national liberation', seeing combat worldwide, including with the Egyptian Army during 1967 and 1973. The first Japanese anti-tank grenade was a hand-thrown grenade with a simple 100 mm diameter cone HEAT warhead and a simple "all the way" fuse system in the base (if dropped accidentally with the pin removed, it would explode). It had what looked like the end of a mop head on the tail end of the warhead. A soldier would remove the anti-tank grenade from its sack, pull the pin, and throw it, gripping the mop-head as the handle. This was dangerous, as there was no arming safety after release and the thrower could strike something in his back swing before release. Penetration was reportedly only around 50 mm. History: The second Japanese anti-tank grenade, a suicide weapon, was nicknamed the "lunge mine". This weapon was a very large HEAT warhead on a five-foot stick. The soldier rammed it forward into the tank or other target, which broke a shear wire and allowed a strike pin to impact a primer and detonate the large HEAT warhead, destroying both soldier and target. While crude, the Japanese lunge mine had six inches (150 mm) of penetration, the greatest of any anti-tank grenade of World War Two. History: The U.S. Army first encountered the hand-thrown anti-tank grenade in 1944, in the Philippines (some believe those examples were locally manufactured). The suicide lunge mine first appeared during the U.S. invasion of Saipan and the subsequent invasion of Okinawa. Tens of thousands of these crude devices were produced and issued to both regular units and home-guard units on the home islands of Japan before the war ended. History: In the late 1970s, the U.S. Army was worried about the lack of emergency anti-tank weapons for issue to its rear-area units, to counter isolated enemy armored vehicles infiltrating or being air-dropped. When the US Army asked for ideas, engineers at U.S. Army laboratories suggested a reverse-engineered version, with additional safety improvements, of the East German AZ-58-K-100 HEAT anti-tank grenade that had been clandestinely obtained. This concept was called "HAG", for "High-explosive Antiarmor Grenade".
While the civilian engineers working for the US Army thought it was a great idea, it was rejected out of hand by almost all senior US Army officers with field experience, who thought it would be more dangerous to the troops using it than to the enemy. The idea was quietly shelved by 1985. This decision left many rear-area U.S. units with no heavier "anti-tank weapon" than the M2 heavy machine gun. Modern usage: The most widely distributed anti-tank grenades today are the post-World War Two Russian designs of the 1950s and 1960s, mainly the RKG-3. During the Iran–Iraq War, the 13-year-old Iranian soldier Mohammad Hossein Fahmideh was celebrated as a war hero after he blew himself up under an Iraqi tank with a grenade. Modern usage: Due to improvements in modern tank armor and the invention of rocket-propelled grenades, anti-tank hand grenades are generally considered obsolete. However, in the recent Iraq War, the RKG-3 anti-tank hand grenade made a reappearance with Iraqi insurgents, who used it primarily against U.S. Humvees, Strykers and MRAPs, which lack the heavier armor of tanks. This in turn led the U.S. to adopt countermeasures such as fitting MRAP and Stryker vehicles with slat armor, which causes the anti-tank grenade to detonate before coming in contact with the vehicle. The RKG-3 grenade has also been seen in use by the Aerorozvidka unit of the Ukrainian military in the 2022 Russian invasion of Ukraine. PJSC Mayak modifies the grenade into the RKG 1600 by changing the fuze timing and adding 3D-printed fins to stabilise its flight when dropped from a commercial drone.
**Grapefruit mercaptan** Grapefruit mercaptan: Grapefruit mercaptan is the common name for a natural organic compound found in grapefruit. It is a monoterpenoid that contains a thiol (also known as a mercaptan) functional group. Structurally, a hydroxy group of terpineol is replaced by the thiol in grapefruit mercaptan, so it is also called thioterpineol. Volatile thiols typically have very strong, often unpleasant odors that can be detected by humans in very low concentrations. Grapefruit mercaptan has a very potent, but not unpleasant, odor, and it is the chemical constituent primarily responsible for the aroma of grapefruit. This characteristic aroma is a property of only the R enantiomer. Pure grapefruit mercaptan, or citrus-derived oils rich in grapefruit mercaptan, are sometimes used in perfumery and the flavor industry to impart citrus aromas and flavors. However, both industries actively seek substitutes for grapefruit mercaptan as a grapefruit flavorant, since its decomposition products are often highly disagreeable to the human sense of smell. Grapefruit mercaptan: The detection threshold for the (+)-(R) enantiomer of grapefruit mercaptan is 2×10⁻⁵ ppb, or equivalently a mass fraction of 2×10⁻¹⁴. This corresponds to being able to detect 2×10⁻⁵ mg in one metric ton of water, one of the lowest detection thresholds ever recorded for a naturally occurring compound.
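As a quick arithmetic check of the figures just quoted (my own working, using only standard unit conversions):

```latex
% ppb means parts per billion by mass, i.e. a mass fraction of 1e-9
\[
  2\times10^{-5}\ \text{ppb} \;=\; 2\times10^{-5}\times 10^{-9}
  \;=\; 2\times10^{-14}\ \text{(mass fraction)} .
\]
% One metric ton of water is 1e9 mg, so the detectable mass is
\[
  2\times10^{-14}\times 10^{9}\ \text{mg} \;=\; 2\times10^{-5}\ \text{mg} .
\]
```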
**Voiced bilabial fricative** Voiced bilabial fricative: The voiced bilabial fricative is a type of consonantal sound used in some spoken languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨β⟩, and the equivalent X-SAMPA symbol is B. The official symbol ⟨β⟩ is the Greek letter beta. Voiced bilabial fricative: This letter is also often used to represent the bilabial approximant, though that is more precisely written with a lowering diacritic, that is ⟨β̞⟩. That sound may also be transcribed as an advanced labiodental approximant ⟨ʋ̟⟩, in which case the diacritic is again frequently omitted, since no contrast is likely. It has been proposed that either a turned ⟨β⟩ (approximately 𐅸) or a reversed ⟨β⟩ be used as a dedicated symbol for the bilabial approximant, but despite occasional usage this has not gained general acceptance. It is extremely rare for a language to make a phonemic contrast between the voiced bilabial fricative and the bilabial approximant. The Mapos Buang language of New Guinea contains this contrast; its bilabial approximant is analyzed as filling a phonological gap in the labiovelar series of the consonant system rather than the bilabial series. Proto-Germanic and Proto-Italic are also reconstructed as having had this contrast, albeit with [β] being an allophone of another consonant in both cases. In the Bashkir language, it is an intervocalic allophone of /b/, and it contrasts with /w/: балабыҙ [bɑɫɑˈβɯð] "our child", балауыҙ [bɑɫɑˈwɯð] "wax". Voiced bilabial fricative: The bilabial fricative is diachronically unstable (it is likely to vary considerably between dialects of a language that makes use of it) and is likely to shift to [v]. The sound is not the primary realization of any sound in English dialects except for Chicano English, but it can be produced by approximating the normal English [v] between the lips; it can also sometimes occur as an allophone of /v/ after bilabial consonants. Features: Features of the voiced bilabial fricative: Its manner of articulation is fricative, which means it is produced by constricting air flow through a narrow channel at the place of articulation, causing turbulence. Its place of articulation is bilabial, which means it is articulated with both lips. Its phonation is voiced, which means the vocal cords vibrate during the articulation. It is an oral consonant, which means air is allowed to escape through the mouth only. Because the sound is not produced with airflow over the tongue, the central–lateral dichotomy does not apply. The airstream mechanism is pulmonic, which means it is articulated by pushing air solely with the intercostal muscles and diaphragm, as in most sounds.
**Duplication and elimination matrices** Duplication and elimination matrices: In mathematics, especially in linear algebra and matrix theory, the duplication matrix and the elimination matrix are linear transformations used for transforming half-vectorizations of matrices into vectorizations and, respectively, vice versa. Duplication matrix: The duplication matrix D_n is the unique n² × n(n+1)/2 matrix which, for any n×n symmetric matrix A, transforms vech(A) into vec(A): D_n vech(A) = vec(A). For the 2×2 symmetric matrix A = [a b; b d], this transformation reads

\[ D_2\,\mathrm{vech}(A) = \mathrm{vec}(A) \implies \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ d \end{bmatrix} = \begin{bmatrix} a \\ b \\ b \\ d \end{bmatrix} \]

The explicit formula for calculating the duplication matrix for an n×n matrix is

\[ D_n^T = \sum_{i \ge j} u_{ij}\,(\mathrm{vec}\,T_{ij})^T \]

where u_{ij} is a unit vector of order n(n+1)/2 having the value 1 in position (j−1)n + i − j(j−1)/2 and 0 elsewhere, and T_{ij} is an n×n matrix with 1 in positions (i,j) and (j,i) and zero elsewhere. A C++ function using Armadillo (C++ library) is sketched at the end of this entry. Elimination matrix: An elimination matrix L_n is an n(n+1)/2 × n² matrix which, for any n×n matrix A, transforms vec(A) into vech(A): L_n vec(A) = vech(A). By the explicit (constructive) definition given by Magnus & Neudecker (1980), the n(n+1)/2 by n² elimination matrix L_n is given by

\[ L_n = \sum_{i \ge j} u_{ij}\,\mathrm{vec}(E_{ij})^T = \sum_{i \ge j} \left( u_{ij} \otimes e_j^T \otimes e_i^T \right) \]

where e_i is a unit vector whose i-th element is one and zeros elsewhere, and E_{ij} = e_i e_j^T. A corresponding C++ function using Armadillo is likewise sketched at the end of this entry. For the 2×2 matrix A = [a b; c d], one choice for this transformation is given by

\[ L_2\,\mathrm{vec}(A) = \mathrm{vech}(A) \implies \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} a \\ c \\ b \\ d \end{bmatrix} = \begin{bmatrix} a \\ c \\ d \end{bmatrix} \]
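The C++ helper referred to above is not reproduced in this copy of the article, so here is a minimal sketch of how such a function might look with Armadillo, assuming the usual column-major, lower-triangle vech ordering; the function name and loop structure are illustrative choices, not taken from the original:

```cpp
#include <armadillo>

// Build the duplication matrix D_n (n^2 x n(n+1)/2), which maps vech(A)
// to vec(A) for a symmetric n x n matrix A.
arma::mat duplication_matrix(arma::uword n) {
    arma::mat D(n * n, n * (n + 1) / 2, arma::fill::zeros);
    arma::uword k = 0;  // column of D = position within vech(A)
    for (arma::uword j = 0; j < n; ++j) {        // columns of A
        for (arma::uword i = j; i < n; ++i) {    // rows on/below the diagonal
            D(i + j * n, k) = 1.0;  // entry (i,j) of A inside vec(A)
            D(j + i * n, k) = 1.0;  // mirrored entry (j,i); same element when i == j
            ++k;
        }
    }
    return D;
}
```

For n = 2, duplication_matrix(2) reproduces the 4×3 matrix shown in the example above.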
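A matching sketch for the elimination matrix, under the same assumptions (illustrative name, column-major lower-triangle ordering), simply picks the on-and-below-diagonal entries out of vec(A):

```cpp
#include <armadillo>

// Build the elimination matrix L_n (n(n+1)/2 x n^2), which maps vec(A)
// to vech(A) by keeping only the on-and-below-diagonal entries of A.
arma::mat elimination_matrix(arma::uword n) {
    arma::mat L(n * (n + 1) / 2, n * n, arma::fill::zeros);
    arma::uword k = 0;  // row of L = position within vech(A)
    for (arma::uword j = 0; j < n; ++j) {        // columns of A
        for (arma::uword i = j; i < n; ++i) {    // rows on/below the diagonal
            L(k, i + j * n) = 1.0;  // select entry (i,j) from vec(A)
            ++k;
        }
    }
    return L;
}
```

For n = 2 this gives the 3×4 matrix of the example; note that with these conventions L_n D_n is the identity of order n(n+1)/2, since eliminating after duplicating recovers vech(A).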
**Quasithin group** Quasithin group: In mathematics, a quasithin group is a finite simple group that resembles a group of Lie type of rank at most 2 over a field of characteristic 2. More precisely, it is a finite simple group of characteristic 2 type and width 2. Here characteristic 2 type means that its centralizers of involutions resemble those of groups of Lie type over fields of characteristic 2, and the width is roughly the maximal rank of an abelian group of odd order normalizing a non-trivial 2-subgroup of G. When G is a group of Lie type of characteristic 2 type, the width is usually the rank (the dimension of a maximal torus of the algebraic group). Classification: The classification of quasithin groups is a crucial part of the classification of finite simple groups. The quasithin groups were classified in a 1221-page paper by Michael Aschbacher and Stephen D. Smith (2004, 2004b). An earlier announcement by Geoffrey Mason (1980) of the classification, on the basis of which the classification of finite simple groups was announced as finished in 1983, was premature, as the unpublished manuscript (Mason 1981) of his work was incomplete and contained serious gaps. Classification: According to Aschbacher & Smith (2004b, theorem 0.1.1), the finite simple quasithin groups of even characteristic are: the groups of Lie type of characteristic 2 and rank 1 or 2, except that U5(q) only occurs for q = 4; PSL4(2), PSL5(2), and Sp6(2); the alternating groups on 5, 6, 8, and 9 points; PSL2(p) for p a Fermat or Mersenne prime, L^ε_3(3), L^ε_4(3), and G2(3); the Mathieu groups M11, M12, M22, M23, and M24; the Janko groups J2, J3, and J4; the Higman-Sims group; the Held group; and the Rudvalis group. If the condition "even characteristic" is relaxed to "even type" in the sense of the revision of the classification by Daniel Gorenstein, Richard Lyons, and Ronald Solomon, then the only extra group that appears is the Janko group J1.
**Bida Airstrip** Bida Airstrip: Bida Airport (ICAO: DNBI) is an airport serving Bida in Nigeria. The Bida VOR/DME (ident: BDA) and Bida non-directional beacon (ident: BD) are located on the airfield. Description: The airstrip is located in Bida, Niger State, Nigeria, one of the largest and most populous places in the state. The distance from Abuja to the Bida airstrip is about 163 km. The airstrip lies at latitude 9°06'00.0"N (9.1000000°) and longitude 6°01'00.0"E (6.0166700°). It has a single runway, designated 4/22. Nearby airports in the same state and region include Minna Airport (DMN), 78 km away, and Shiroro Airport, 124 km away.
**Laminas** Laminas: Laminas Project (formerly Zend Framework or ZF) is an open source, object-oriented web application framework implemented in PHP 7 and licensed under the New BSD License. The framework is essentially a collection of professional PHP-based packages. Package dependencies are managed with Composer; among the tools the project relies on are PHPUnit for testing all packages and Travis CI for continuous integration. Laminas provides support for the model–view–controller (MVC) pattern in combination with a front-controller solution. The MVC implementation in Laminas has five main areas: the router and dispatcher decide which controller to run based on data from the URL, and the controller works together with the model and the view to build and render the final web page. On 17 April 2019 it was announced that the framework was transitioning into an open source project hosted by the Linux Foundation, to be known as Laminas. License: Laminas is licensed under the Open Source Initiative (OSI)-approved New BSD License. All new contributions are required to be accompanied by a Developer Certificate of Origin affirmation. Zend Framework was also licensed under the New BSD License. For ZF1 all code contributors were required to sign a Contributor License Agreement (CLA) based on the Apache Software Foundation's CLA. The licensing and contribution policies were established to prevent intellectual property issues for commercial ZF users, according to Zend's Andi Gutmans. ZF2 and later are CLA free. Components and versioning: Laminas Project follows semantic versioning. Framework components are versioned independently and released as separate Composer packages. Dependencies between framework components are declared as Composer dependencies using semantic versioning ranges. Prior to Zend Framework version 2.5 all components shared the same version. Starting with Zend Framework version 2.5, components were split into independently versioned packages and zendframework/zendframework was converted into a Composer meta-package. Framework components introduced after the split started at version 1.0, while existing components continued from 2.5. New components were not added to the meta-package, and the meta-package itself was discontinued after the 3.0.0 release. Components and versioning: Zend Framework 3 was the last release before framework-wide versioning was discontinued. In Zend Framework 3, the major versions of individual components no longer matched the framework version, which caused confusion. Some components such as zend-mvc and zend-servicemanager received a matching major version release, but others remained on version 2, while the newly introduced zend-diactoros, zend-stratigility and zend-expressive were at major version 1. Components and versioning: Laminas Project does not carry a single framework version. Components transitioned from Zend Framework continued with their existing versions and had all past releases migrated from their counterparts. The zendframework/zendframework meta-package does not have a counterpart in Laminas. Installation: The officially supported installation method is via the Composer package manager. Laminas provides a meta-package that includes 61 components, but the recommended way is to install the required framework components individually; Composer will resolve and install all additional dependencies.
For instance, the MVC package can be installed with a single Composer command (e.g. `composer require laminas/laminas-mvc`). A full list of components is available in the Zend Framework documentation. Anatomy of the framework: Laminas follows a configuration-over-convention approach and does not impose any particular application structure. Skeleton applications for zend-mvc and zend-expressive are available; they provide everything necessary to run an application and serve as a good starting point. Sponsor and partners: Zend Technologies, co-founded by PHP core contributors Andi Gutmans and Zeev Suraski, was the original corporate sponsor of Zend Framework. Technology partners include IBM, Google, Microsoft, Adobe Systems, and StrikeIron. Features: Laminas features include: All components are fully object-oriented PHP 5 and are E_STRICT compliant, which helps in building tests and writing bug-free, crash-resistant applications. Features: Use-at-will architecture with loosely coupled components and minimal interdependencies; extensible MVC implementation supporting layouts and PHP-based templates by default; support for multiple database systems and vendors, including MariaDB, MySQL, Oracle, IBM Db2, Microsoft SQL Server, PostgreSQL, SQLite, and Informix Dynamic Server; email composition and delivery, retrieval via mbox, Maildir, POP3 and IMAP4; flexible caching sub-system with support for many types of backends, such as memory or a file system. Features: With the help of remote procedure call (RPC) and REST (Representational State Transfer) services, Zend Apigility helps developers to create, authenticate, document, and easily modify APIs. Development of applications: Laminas applications can run on any PHP stack that fulfills the technical requirements. Zend Technologies provides a PHP stack, Zend Server (or Zend Server Community Edition), which is advertised as optimized for running Laminas applications. Zend Server includes Zend Framework in its installers, along with PHP and all required extensions. According to Zend Technologies, Zend Server provides improved performance for PHP and especially Zend Framework applications through opcode acceleration and several caching capabilities, and includes application monitoring and diagnostics facilities. Zend Studio is an IDE that includes features specifically to integrate with Zend Framework. It provides an MVC view, MVC code generation based on Zend_Tool (a component of the Zend Framework), a code formatter, code completion, parameter assist, and more. Zend Studio is not free software, whereas the Zend Framework and Zend Server Community Edition are free. Zend Server is compatible with common debugging tools such as Xdebug. Other developers may want to use a different PHP stack and another IDE such as Eclipse PDT, which works well together with Zend Server. A pre-configured, free version of Eclipse PDT with Zend Debug is available on the Zend web site. Code, documentation, and test standards: Code contributions to Laminas are subject to rigorous code, documentation, and test standards. All code must meet project coding standards, and unit tests must reach 80% code coverage before the corresponding code may be moved to the release branch. Simple cloud API: On September 22, 2009, Zend Technologies announced that it would be working with technology partners including Microsoft, IBM, Rackspace, Nirvanix, and GoGrid, along with the Zend Framework community, to develop a common API to cloud application services called the Simple Cloud API.
This project is part of Zend Framework and will be hosted on the Zend Framework website, but a separate site called simplecloud.org has been launched to discuss and download the most current versions of the API. The Simple Cloud API and several cloud services are included in Zend Framework. The adapters to popular cloud services have reached production quality. Current development: Zend Framework 3.0 was released on June 28, 2016. It includes new components such as a JSON-RPC server, an XML-to-JSON converter, PSR-7 functionality, and compatibility with PHP 7. Zend Framework 3.0 runs up to four times faster than Zend Framework 2, and the packages have been decoupled to allow for greater reuse. The contributors of Zend Framework actively encourage the use of Zend Framework version 3.x. The stated end of life for Zend Framework 1 is 2016-09-28, and for Zend Framework 2, 2018-03-31. The first development release of Zend Framework 2.0 was released on August 6, 2010. Changes made in this release were the removal of require_once statements, migration to PHP 5.3 namespaces, a refactored test suite, a rewritten Zend\Session, and the addition of the new Zend\Stdlib. The second development release was on November 3, 2010. The first stable release of Zend Framework 2.0 came on 5 September 2012.
**Computational philosophy** Computational philosophy: Computational philosophy or digital philosophy is the use of computational techniques in philosophy. It includes concepts such as computational models, algorithms, simulations, games, etc. that help in the research and teaching of philosophical concepts, as well as specialized online encyclopedias and graphical visualizations of relationships among philosophers and concepts. The use of computers in philosophy has gained momentum as computer power and the availability of data have increased greatly. This, along with the development of many new techniques that use those computers and data, has opened many new ways of doing philosophy that were not available before. It has also led to new insights in philosophy.
**Phenomenology (psychology)** Phenomenology (psychology): Phenomenology or phenomenological psychology, a sub-discipline of psychology, is the scientific study of subjective experiences. It is an approach to psychological subject matter that attempts to explain experiences from the point of view of the subject via the analysis of their written or spoken word. The approach has its roots in the phenomenological philosophical work of Edmund Husserl. History: Early phenomenologists such as Husserl, Jean-Paul Sartre, and Maurice Merleau-Ponty conducted philosophical investigations of consciousness in the early 20th century. Their critiques of psychologism and positivism later influenced at least two main fields of contemporary psychology: the phenomenological psychological approach of the Duquesne School (the descriptive phenomenological method in psychology), including Amedeo Giorgi and Frederick Wertz; and the experimental approaches associated with Francisco Varela, Shaun Gallagher, Evan Thompson, and others (embodied mind thesis). Other names associated with the movement include Jonathan Smith (interpretative phenomenological analysis), Steinar Kvale, and Wolfgang Köhler. But "an even stronger influence on psychopathology came from Heidegger (1963), particularly through Kunz (1931), Blankenburg (1971), Tellenbach (1983), Binswanger (1994), and others." Phenomenological psychologists have also figured prominently in the history of the humanistic psychology movement. Methodology: Phenomenology is concerned with the rich qualitative description of first-person experiences. This stands in contrast to quantitative approaches which seek to operationalize, abstract and predict behavior. Following Husserl's battle-cry "back to the things themselves", a phenomenological approach seeks to avoid speculation about underlying causes, and instead emphasizes direct descriptions of phenomena, whether by means of introspection or by attentive observation of another person. Methodology: Experience The experiencing subject can be considered to be the person or self, for purposes of convenience. In phenomenological philosophy (and in particular in the work of Husserl, Heidegger, and Merleau-Ponty), "experience" is a considerably more complex concept than it is usually taken to be in everyday use. Instead, experience (or being, or existence itself) is an "in-relation-to" phenomenon, and it is defined by qualities of directedness, embodiment, and worldliness, which are evoked by the term "Being-in-the-World".The quality or nature of a given experience is often referred to by the term qualia, whose archetypical exemplar is "redness". For example, we might ask, "Is my experience of redness the same as yours?" While it is difficult to answer such a question in any concrete way, the concept of intersubjectivity is often used as a mechanism for understanding how it is that humans are able to empathize with one another's experiences, and indeed to engage in meaningful communication about them. The phenomenological formulation of "Being-in-the-World", where person and world are mutually constitutive, is central here.The observer, or in some cases the interviewer, achieves this sense of understanding and feeling of relatedness to the subject's experience, through subjective analysis of the experience, and the implied thoughts and emotions that they relay in their words. 
Methodology: Challenges in studying subjectivity The philosophical psychology prevalent before the end of the 19th century relied heavily on introspection. The speculations concerning the mind based on those observations were criticized by the pioneering advocates of a more scientific and objective approach to psychology, such as William James and the behaviorists Edward Thorndike, Clark Hull, John B. Watson, and B. F. Skinner. Not everyone agrees that introspection is intrinsically problematic, however; Francisco Varela, for example, trained experimental participants in the structured "introspection" of phenomenological reduction. In the early 1970s, Amedeo Giorgi applied phenomenological theory to his development of the Descriptive Phenomenological Method in Psychology. He sought to overcome certain problems he perceived, from his work in psychophysics, with approaching subjective phenomena from the traditional hypothetical-deductive framework of the natural sciences. Giorgi hoped to use what he had learned from his natural science background to develop a rigorous qualitative research method. His goal was to ensure that phenomenological research was both reliable and valid, and he did this by seeking to make its processes increasingly measurable. Philosophers have long confronted the problem of "qualia". Few philosophers believe that it is possible to be sure that one person's experience of the "redness" of an object is the same as another person's, even if both persons had effectively identical genetic and experiential histories. In principle, the same difficulty arises in feelings (the subjective experience of emotion), in the experience of effort, and especially in the "meaning" of concepts. As a result, many qualitative psychologists have claimed phenomenological inquiry to be essentially a matter of "meaning-making" and thus a question to be addressed by interpretive approaches. Applications: Psychotherapy Carl Rogers's person-centered psychotherapy theory is based directly on the "phenomenal field" personality theory of Combs and Snygg. That theory in turn was grounded in phenomenological thinking. Rogers attempts to put a therapist in closer contact with a person by listening to the person's report of their recent subjective experiences, especially emotions of which the person is not fully aware. For example, in relationships the problem at hand is often based not on what actually happened but on the perceptions and feelings of each individual in the relationship. "At the core of phenomenology lies the attempt to describe and understand phenomena such as caring, healing, and wholeness as experienced by individuals who have lived through them". Applications: Recent applications The study and practice of phenomenology continues to grow and develop today. In 2021 a study on the experiences of individuals who attended a coexistence center (CECO) was conducted using phenomenological interviews to understand the lives of the participants. After the interviews the researchers constructed a comprehensive narrative, putting their understanding of the participants' experience into their own words.
This process led the researchers to understand that "the CECO is a propitious space for the development of individual and collective potentialities and the valuation of constructive social relationships that facilitate and preserve the inherent tendency of people towards growth, autonomy and psychological maturation." Another example of phenomenology in recent years is an article published in 2022 which explains how phenomenology can grow into a larger field of study if we recognize its ability to make the experiences of other people clearer, bridging the gap between subjective and objective reality. It puts forth "a methodological concept of phenomenological elucidation to promote the development of phenomenology as psychology." Critiques: In 2022 Gerhard Thonhauser published an article which critiques phenomenology in psychology for its adoption of Le Bon's crowd psychology, as well as what Thonhauser calls the "disease model of emotion transfer". Thonhauser claims there is little to no evidence for Le Bon's crowd psychology framework, on which this strand of phenomenology relies. In a 2015 article written for the Partially Examined Life blog, Michael Burgess argues that "...the foundational problem here is that consciousness is not a container for objects; this assertion mostly derives from another: that the world itself seems to be one way but is another, thus in its initial state of “seeming to be” it cannot be itself real (that illusion is metaphysical)."
**Dyadic kinship term** Dyadic kinship term: Dyadic kinship terms (abbreviated DY or DYAD) are kinship terms in a few languages that express the relationship between individuals as they relate one to the other. In English, there are a few set phrases for such situations, such as "they are father and son", but there is not a single dyadic term that can be used the way "they are cousins" can; even the latter is not truly dyadic, as it does not necessarily mean that they are cousins to each other. The few, and uncommon, English dyadic terms involve in-laws: co-mothers-in-law, co-fathers-in-law, co-brothers-in-law, co-sisters-in-law, co-grandmothers, and co-grandfathers. Examples of dyadic terms for blood kin include Kayardild (Australian) ngamathu-ngarrba "mother and child", derived from ngamathu "mother", and kularrin-ngarrba "brother and sister", from kularrin "cross-sibling", with the dyadic suffix -ngarrba. Not all such terms are derived; the Ok language Mian has a single unanalysable root lum for "father and child".Dyadic blood-kin terms are rare in Indo-European languages. Examples are Icelandic and Faroese, which have the terms feðgar "father and son", feðgin "father and daughter", mæðgin "mother and son", mæðgur "mother and daughter".Chinese and Japanese use compound nouns to make dyadic terms, such as (in Japanese) 親子 oyako "parent and child", 兄弟 kyōdai "brothers; siblings", 姉妹 shimai "sisters", and 夫婦 fūfu "husband and wife". Dyadic kinship term: The languages which have such terms are concentrated in the western Pacific. There are at least ten in New Guinea, including Oksapmin, Menya, and the Ok languages; fifteen or more Austronesian languages, from Taiwan to New Caledonia; and at least sixty in Australia, such as Kayardild above. There are sporadic examples in Northern Eurasia, including a few Turkic and Uralic languages, Yukaghir, and Ainu; depending on definitions, the Yi languages of Southeast Asia may also be said to have such terms. Elsewhere they are rare, or at least have not been described. Known languages include Athabaskan (Koyukon and Carrier), Pomo, and Southern Paiute in North America, Quechua, Paezan (Nasa Yuwe), and Cariban (Tiriyo) in South America, Adyghe in the Caucasus, and Khoe (Kxoe, Gǀwi) in southern Africa.
**Emtricitabine/tenofovir** Emtricitabine/tenofovir: Emtricitabine/tenofovir, sold under the brand name Truvada among others, is a fixed-dose combination antiretroviral medication used to treat and prevent HIV/AIDS. It contains the antiretroviral medications emtricitabine and tenofovir disoproxil. For treatment, it must be used in combination with other antiretroviral medications. For prevention before exposure, in those who are at high risk, it is recommended along with safer sex practices. It does not cure HIV/AIDS. Emtricitabine/tenofovir is taken by mouth. Common side effects include headache, tiredness, trouble sleeping, abdominal pain, weight loss, and rash. Serious side effects may include high blood lactate levels and enlargement of the liver. Use of this medication during pregnancy does not appear to harm the fetus, but this has not been well studied. Emtricitabine/tenofovir was approved for medical use in the United States in 2004. It is on the World Health Organization's List of Essential Medicines. In the United States, emtricitabine/tenofovir was under patent by Gilead until 2020, but is now available as a generic worldwide. In 2020, it was the 278th most commonly prescribed medication in the United States, with more than 1 million prescriptions. Medical uses: Emtricitabine/tenofovir is used both to treat and to prevent HIV/AIDS. The U.S. National Institutes of Health (NIH) recommends antiretroviral therapy (ART) for all people with HIV/AIDS. Medical uses: HIV prevention The Centers for Disease Control and Prevention (CDC) recommends the use of emtricitabine/tenofovir for pre-exposure prophylaxis (PrEP) for uninfected, HIV-1-negative individuals who may be at risk of HIV-1 infection. A Cochrane systematic review found a 51% relative risk reduction of contracting HIV with both tenofovir alone and the tenofovir/emtricitabine combination. A JAMA systematic review found a similar relative risk reduction of 54% on average, with greater reduction at greater adherence. It was approved for PrEP against HIV infection in the United States in 2012. The CDC recommends PrEP be considered for the following high-risk groups: individuals in an ongoing sexual relationship with an HIV-positive partner; gay or bisexual men who have either had anal sex without a condom or been diagnosed with an STD in the past six months; heterosexual men or women who do not regularly use condoms during sex with partners of unknown HIV status who are at substantial risk; people who have injected drugs and shared equipment in the last six months; and serodiscordant heterosexual and homosexual couples, where one partner is HIV-positive and the other HIV-negative. The decision to use emtricitabine/tenofovir as a risk-reduction strategy involves discussion with a health professional, who can help the patient weigh the benefits and risks. Patients are advised to discuss any history of bone issues, kidney issues, or hepatitis B infection with their health care provider. The effectiveness of PrEP for prevention of infection relies on an individual's ability to take the medication consistently. Emtricitabine/tenofovir is also used for HIV post-exposure prophylaxis: people see an HIV risk-reduction benefit when the medication is started up to 72 hours after exposure, but it must then be taken for thirty days after a high-risk sexual event to ensure HIV transmission levels are optimally reduced. Truvada as PrEP should not be used by individuals who are positive for HIV-1.
Medical uses: HIV treatment Emtricitabine/tenofovir has been approved in the United States as part of antiretroviral combination therapy for the treatment of HIV-1. The combination therapy is suggested as one of the options for adults who have not received any prior treatment for HIV infection. Hepatitis B Both emtricitabine and tenofovir are indicated for the treatment of hepatitis B, with the added benefit that they can target HIV in those with co-infection. Emtricitabine/tenofovir may also be considered for some antiviral-resistant hepatitis B infections. Medical uses: Pregnancy and breastfeeding In the United States, it is recommended that all pregnant HIV-infected women start antiretroviral therapy (ART) as early in pregnancy as possible to reduce the risk of transmission. ART generally does not increase the risk of birth defects, with the exception of dolutegravir, which is not recommended during the first trimester of pregnancy due to a potential risk of neural tube defects. Emtricitabine/tenofovir is secreted in breast milk. In developed countries, HIV-infected mothers are generally advised not to breastfeed due to a slight risk of mother-to-child HIV transmission. In developing countries, where avoiding breastfeeding may not be an option, the World Health Organization recommends a triple-drug regimen of tenofovir, efavirenz, and either lamivudine or emtricitabine. Side effects: Emtricitabine/tenofovir is generally well tolerated. Some of its side effects include: rare: lactic acidosis, liver dysfunction, worsening of hepatitis B infection; common: headache, abdominal pain, decreased weight, nausea, diarrhea, and decreased bone density. Fat redistribution and accumulation (lipodystrophy) has been observed in people receiving antiretroviral therapy, including fat reductions in the face, limbs, and buttocks and increases in visceral fat of the abdomen and accumulations in the upper back. When used as pre-exposure prophylaxis (PrEP), this effect may not be present; weight changes have, however, been linked to the medication. Drug interactions: Drugs with adverse interactions include dabigatran etexilate, lamivudine, and vincristine. Dabigatran etexilate used with P-glycoprotein inducers requires monitoring for decreased levels and effects of dabigatran. Lamivudine may increase the adverse or toxic effects of emtricitabine. Vincristine used with P-glycoprotein/ABCB1 inducers can show decreased serum concentrations of vincristine. Society and culture: The patent for the drug combination is owned by Gilead Sciences in some regions. The European patent EP0915894B1 expired in July 2018. Gilead Sciences wished the patent to be extended; however, "four rival labs—Teva, Accord Healthcare, Lupin and Mylan—had sought to have that overturned in the courts in Britain". The High Court of England and Wales invalidated Gilead's patent; the company appealed, and the UK referred the case to the European Court of Justice, which refused to extend the patent. An Irish court rejected an injunction request to prevent the launch of generic emtricitabine/tenofovir prior to the resolution of the case. Despite the expiration of the Gilead Sciences patent, as of 2021 there are still widespread challenges to the availability and uptake of generic PrEP throughout Europe.
In 2019, Gilead Sciences challenged the validity of patents granted to the United States government after 2015 for the use of the drug combination for HIV PrEP and post-exposure prophylaxis (PEP). In the United States, most healthcare plans are required to cover PrEP without any copay or other cost sharing. This is due to a United States Preventive Services Task Force recommendation that gave PrEP a grade A rating; under the Affordable Care Act, this recommendation requires all non-grandfathered private health plans to cover PrEP without cost sharing. In the United Kingdom, PrEP is widely available to all at-risk groups following the Department of Health and Social Care's decision to make it available across England from 2020. Wales, Scotland, and Northern Ireland made it available in 2017 and 2018.
**Valmet** Valmet: Valmet Oyj, a Finnish company, is a developer and supplier of technologies, automation systems and services for the pulp, paper and energy industries. Valmet has over 200 years of history as an industrial operator. Formerly owned by the State of Finland, Valmet was reborn in December 2013 with the demerger of the pulp, paper and power businesses from Metso Corporation. Valmet: Valmet's services include maintenance outsourcing, mill and power plant improvements, and spare parts. The company provides technology for pulp, tissue, board and paper mills and bioenergy plants. Valmet has operations in more than 40 countries and it employs about 17,000 people. Its headquarters are located in Espoo, and it is listed on the Nasdaq Helsinki. In 2021, Valmet's net sales totaled €3.9 billion. History: Historical products During its history, Valmet has made ships, trains, aeroplanes, tractors, clocks and weapons, as described in the list of Valmet products. History: Roots in the 18th century The company originated in the 1750s when a small shipyard was established in the Sveaborg fortress (now called Suomenlinna) on the islands outside Helsinki, at the time a part of the Swedish province Nyland. In the early 20th century, it ended up under the ownership of the Finnish state and became part of Valmet. Tamfelt was established in 1797 and became one of the leading suppliers of technical textiles. These operations are now part of Valmet's Service business line. Several of the companies forming part of the new Valmet Corporation that was born in the 2010s date back to the 19th century. The Karlstad Mekaniska Werkstad (KMW) in Sweden began in 1865. Beloit Corporation began in 1858 as a foundry in the city of Beloit, Wisconsin, US. Sunds Bruk, the predecessor of Sunds Defibrator Industries Ab, was established in Sweden in 1868. History: 1946–1998 Creation of Valmet In 1946, several metal workshops owned by the Finnish state were merged to form the Valtion Metallitehtaat (English: State Metalworks), abbreviated as ValMet. The new company came to include various metalworks that manufactured war reparations products for the Soviet Union in different parts of Finland. In the year of its establishment, the company had some 6,200 employees. At the beginning of 1951, the Valtion Metallitehtaat group was renamed Valmet Oy. The various factories, which had previously manufactured warships, aircraft and artillery pieces, spent years finding their new purposes. The conversion of an artillery works into a paper machine manufacturer was a success, but import restrictions created serious obstacles on its route to Western markets. The company's shipyard operations were often – and unfavorably – compared to those of Wärtsilä. The airplane industry was maintained to strengthen national security, but it was not profitable. The branch did, however, make a solid contribution to Valmet's product development and designed, among other things, the straddle carrier used in harbors and manufactured as part of Finland's war reparations. For decades, the entire conglomerate searched for new fields, such as the manufacture of cars or instrumentation. Modern quality control of production operations entered the Finnish manufacturing industry via Valmet's operations, for example in its car factories.
National politics strongly influenced the management and decision-making in the company. Valmet began manufacturing paper machines at the former Rautpohja artillery works in Jyväskylä, Finland in the early 1950s and delivered its first paper machine in 1953. Valmet became a paper machine supplier of international importance in the mid-1960s, when it delivered several machines to the world's leading paper industry countries. In 1961, Valmet had 8,841 employees. Valmet had shipyards in Turku and Helsinki (first in the Katajanokka district, later in Vuosaari). In 1986, Valmet sold its shipbuilding operations to Wärtsilä Oy, which merged them with its own shipyards to form Wärtsilä Marine. Wärtsilä Marine was declared bankrupt in 1989. The operations continued under the names of Masa Yards Oy, Kvaerner Masa-Yards Oy, Aker Yards Oy, Aker Finnyards Oy, STX Finland Oy and currently as Meyer Turku Oy. In connection with the shipyard transaction, Valmet bought from Wärtsilä a paper-finishing machinery unit located in Järvenpää, Finland. Together with Valmet's own paper machine manufacturing units, the Järvenpää unit formed Valmet Paperikoneet Oy, which then purchased Tampella's board machine manufacturing operations in 1992. In 1986, Valmet's gun manufacturing unit in Jyväskylä was transferred under Sako-Valmet Oy, which was later renamed Sako Oy. That company is currently owned by the Italian gun manufacturer Beretta. Valmet sold its tractor, forest machine and transportation vehicle manufacturing operations to Sisu Auto in 1994. In 1997, Sisu Auto was sold to Partek, and the tractors became known as Valtra Valmet, and later Valtra. In 2002, Kone Corporation bought Partek, and in 2004 it sold Sisu to Suomen Autoteollisuus Oy, formed by a group of Finnish private investors and the company management. In 1988, Valmet had 17,405 employees. History: 1999–2012 Valmet and Rauma merge as Metso In July 1999, Valmet Corporation and Rauma Corporation merged to form a new company. Initially called Valmet-Rauma Corporation, the name was changed to Metso Corporation in August 1999. At the time of the merger, Valmet was a paper and board machine supplier, while Rauma focused on fiber technology, rock crushing and flow control products. The merger produced an equipment supplier serving the process industry. Shares in Metso were listed on the Helsinki Stock Exchange, replacing the listings of its predecessor companies. In 2000, Metso acquired Beloit Corporation's tissue and paper-making technology as well as its service operations in the United States and France. In December 2006, Metso completed the acquisition of the pulping and power businesses from Norwegian company Aker Kvaerner ASA. The acquisition aimed to further improve the company's ability to serve the pulp and paper industries as a turnkey delivery partner, and to respond to the business opportunities created by power generation and biomass technologies. At the end of 2009, Metso acquired Tamfelt Corporation, one of the world's leading suppliers of technical textiles. History: 2013– Valmet reborn On October 1, 2013, following a meeting, Metso was split into two companies: Valmet and Metso. After the demerger on December 31, 2013, the pulp, paper and power businesses of Metso formed the new Valmet Corporation, while the mining and construction and automation businesses remained with Metso. Jukka Viinanen became chairman of Valmet's board of directors.
Viinanen had served as a member of Metso's board from 2008 to 2013, and as its chairman from 2009. In March 2015, Bo Risberg from Sweden replaced Jukka Viinanen as the chairman of Valmet's board of directors. Previously, Risberg had held a senior position at ABB and the position of managing director at the construction supply company Hilti. He also holds positions of trust at Piab Holding, Grundfos Holding and Trelleborg, among others. The members of the board are vice chairman Mikael von Frenckell, Lone Fønss Schrøder, Friederike Helfer, Pekka Lundmark, Erkki Pehu-Lehtonen and Rogerio Ziviani. Pekka Lundmark later resigned from the board after being nominated CEO of Fortum Corporation. History: In April 2015, Valmet established an automation systems business, completing a EUR 340 million transaction with Metso to purchase Metso's process automation system and service unit (PAS). At the time of the transaction, the PAS unit employed 1,600 people, and its net sales totaled EUR 300 million in 2013. In July 2015, Valmet announced the acquisition of Massimiliano Corsini's tissue paper rewinding business. The net sales of the 33-employee unit in Pescia, Italy, had remained steady at EUR 10 million in recent years. At the beginning of 2019, Valmet acquired North American-based GL&V for EUR 113 million. GL&V supplied technologies, upgrades and optimization services, rebuilds, and spare parts for the pulp and paper industry, especially in the areas of chemical pulping, stock preparation, papermaking, and finishing. The acquired operations employed about 630 people and the net sales were approximately EUR 160 million. In May 2019, Valmet acquired American J&L Fiber Services Inc., which employed about 100 people in Wisconsin, USA, with net sales of approximately EUR 30 million. J&L Fiber was a manufacturer and provider of refiner segments for the pulp, paper and fiberboard industry. The enterprise value of the acquisition was approximately EUR 51 million. Both of the acquired companies became part of Valmet's Services business. In September 2019, Valmet announced that it would build a new pilot facility at its Fiber Technology Center in Sundsvall, Sweden to strengthen the company's research and development capabilities related to bioenergy, biofuels, and biochemicals. The new pilot facility comprises new pilot equipment called "BioTrac", which uses the Valmet DNA control system. History: Valmet 2020– At the end of June 2020, Neles was separated from Metso. The State of Finland sold its share of 15 percent to Valmet. In mid-July, the Swedish company Alfa Laval made an offer to buy Neles. Valmet's CEO Laine rebuked the board of Neles for ill-advised actions and for accepting a price that was too low. Laine had previously managed the business operations of Neles and thought that Valmet could in time have bought more of its stock. Alfa Laval's CEO said that they had bought Neles at a "pandemic discount," and the front page of the Swedish financial newspaper Dagens Industri celebrated the fact that Sweden would soon own Finland's industry. Valmet started buying Neles stock. By the fall, Valmet owned nearly 30 percent of the stock. Alfa Laval received the support of only a third of Neles owners for its takeover bid, and withdrew from the contest in November. In January 2021, Valmet reported that it would supply the mills of the Swedish company Renewcell with equipment to produce dissolving pulp from recycled clothes and textiles.
In May, Valmet announced that it would deliver drying technology to Spinnova, which produces textile fiber from cellulose. Valmet equipment for the textile industry is in high demand in Europe, because the collection of discarded textiles for recycling must be organized in the EU countries by 2025. Valmet had been developing recycling technology with Renewcell for years, and the companies together constructed a pilot plant and a factory in Sundsvall. In July 2021, Valmet and Neles agreed to merge. Neles shareholders received 18.8 percent of the combined company. Substantial synergies were expected from the transaction: Neles was thought to help increase sales in automation systems, while its products were also to be sold to the paper industry. Neles merged into Valmet in April 2022, becoming Valmet's fifth business line, Flow Control. The companies had several managers and employees who knew each other from their Metso days. After the merger, the company had 17,000 employees, 3,000 of whom came from Neles. Organization and products: Valmet's operations are divided into five geographical areas: North America, South America, EMEA, China, and Asia-Pacific. Valmet operates in some 40 countries and employs 17,000 people. Valmet's primary production sites are the Rautpohja factory in Jyväskylä, Finland, and the units in Sweden's Karlstad and Sundsvall and China's Xi'an and Shanghai. Rautpohja is the center of paper and board machine engineering operations, the manufacture of components such as headboxes, and the assembly of machine deliveries to Europe. Recognitions: In 2019, Valmet was included in the Dow Jones Sustainability Indices (DJSI) for the sixth consecutive year. The company was listed in both the Dow Jones Sustainability World and Europe indices. In January 2020, Valmet was listed in the CDP's annual A-list among the best companies leading on environmental transparency and performance.
**Pattern Recognition (journal)** Pattern Recognition (journal): Pattern Recognition is a single-blind peer-reviewed academic journal published by Elsevier Science. It was first published in 1968 by Pergamon Press. The founding editor-in-chief was Robert Ledley, who was succeeded in 2009 by Ching Suen of Concordia University, who served until 2016; since 2016 the editor-in-chief has been Edwin Hancock of the University of York. The journal publishes papers in the general area of pattern recognition, including applications in the areas of image processing, computer vision, handwriting recognition, biometrics and biomedical signal processing. The journal awards the Pattern Recognition Society Medal to the best paper published in the journal each year. Pattern Recognition (journal): In 2020, the journal had an impact factor of 7.196, and it currently has a Scopus CiteScore of 13.1. Google Scholar currently lists the journal as ranked 6th among the top 20 publications in Computer Vision and Pattern Recognition. Abstracting and indexing: The journal is abstracted and indexed by the following services:
**Disilane** Disilane: Disilane is a chemical compound with chemical formula Si2H6 that was identified in 1902 by Henri Moissan and Samuel Smiles (1877–1953). Moissan and Smiles reported disilane as being among the products formed by the action of dilute acids on metal silicides. Although these reactions had been previously investigated by Friedrich Wöhler and Heinrich Buff between 1857 and 1858, Moissan and Smiles were the first to explicitly identify disilane. They referred to disilane as silicoethane. Higher members of the homologous series SinH2n+2 formed in these reactions were subsequently identified by Carl Somiesky (sometimes spelled "Karl Somieski") and Alfred Stock. At standard temperature and pressure, disilane is a colourless, acrid gas. Disilane and ethane have similar structures, although disilane is much more reactive. Other compounds of the general formula Si2X6 (X = hydrogen, halogen, alkyl, aryl, and mixtures of these groups) are also called disilanes. Disilane is a group 14 hydride. Synthesis: Disilane is usually prepared by the hydrolysis of magnesium silicide. This reaction produces silane, disilane, and even trisilane. The method has been abandoned for the production of silane, but it remains viable for generating disilane. The presence of traces of disilane is responsible for the spontaneous flammability of silane produced by this method (analogously, diphosphine is often the spontaneously pyrophoric contaminant in samples of phosphine). Synthesis: Disilane also arises via both the photochemical and the thermal decomposition of silane. The reduction of Si2Cl6 with lithium aluminium hydride affords disilane in modest yield. Applications and reactions: Disilane and silane thermally decompose at around 640 °C, depositing amorphous silicon. This chemical vapor deposition process is relevant to the manufacture of photovoltaic devices; specifically, it is used in the production of silicon wafers. More generally, organodisilanes are produced by reductive coupling of silyl chlorides, e.g. 2 (CH3)3SiCl + 2 Na → (CH3)3Si−Si(CH3)3 + 2 NaCl. Disilane gas can be used to control the pressure of Si vapor during graphene growth by the thermal decomposition of SiC; the Si vapor pressure influences the quality of the graphene produced.
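For illustration, the preparations described above can be summarized with idealized balanced equations. These stoichiometries are textbook simplifications added here for clarity, not taken from the sources this article cites; in practice, silicide hydrolysis gives a mixture of silane, disilane and higher silanes:
Mg2Si + 4 HCl → 2 MgCl2 + SiH4 (Si–Si units present in the silicide yield Si2H6 analogously)
2 Si2Cl6 + 3 LiAlH4 → 2 Si2H6 + 3 LiAlCl4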
**Subtilase** Subtilase: Subtilases are a family of subtilisin-like serine proteases. They appear to have independently and convergently evolved a catalytic triad of the same three residues (Ser, His, Asp) found in the trypsin serine proteases. The structure of proteins in this family shows that they have an alpha/beta fold containing a 7-stranded parallel beta sheet. Subtilase: The subtilisin family is the second largest serine protease family characterised to date. Over 200 subtilases are presently known, more than 170 of which have had their complete amino acid sequences determined. Subtilases are widespread, being found in eubacteria, archaebacteria, eukaryotes and viruses. The vast majority of the family are endopeptidases, although there is an exopeptidase, tripeptidyl peptidase. Structures have been determined for several members of the subtilisin family, showing that subtilisins exploit the same catalytic triad as the chymotrypsins, although the residues occur in a different order (His/Asp/Ser in chymotrypsin and Asp/His/Ser in subtilisin); otherwise the structures show similarity to no other proteins. Some subtilisins are mosaic proteins, whereas others contain N- and C-terminal extensions that show no sequence similarity to any other known protein. Based on sequence homology, a subdivision into six families has been proposed. The proprotein-processing endopeptidases kexin, furin and related enzymes form a distinct subfamily known as the kexin subfamily (S8B). These preferentially cleave C-terminally to paired basic amino acids. Members of this subfamily can be identified by subtly different motifs around the active site. Members of the kexin family, along with endopeptidases R, T and K from the fungus Tritirachium and the cuticle-degrading peptidase from Metarhizium, require thiol activation. This can be attributed to the presence of Cys-173 near the active histidine. Only one viral member of the subtilisin family is known, a 56-kDa protease from herpes virus 1, which infects the channel catfish. Sedolisins (serine-carboxyl peptidases) are proteolytic enzymes whose fold resembles that of subtilisin; however, they are considerably larger, with the mature catalytic domains containing approximately 375 amino acids. The defining features of these enzymes are a unique catalytic triad, Ser/Glu/Asp, as well as the presence of an aspartic acid residue in the oxyanion hole. High-resolution crystal structures have now been solved for sedolisin from Pseudomonas sp. 101, as well as for kumamolisin from a thermophilic bacterium, Bacillus novo sp. MN-32. Mutations in the human gene lead to a fatal neurodegenerative disease. Human proteins containing this domain: FURIN; MBTPS1; PCSK1; PCSK2; PCSK4; PCSK5; PCSK6; PCSK7; PCSK9; TPP2;
**Gregorian mode** Gregorian mode: A Gregorian mode (or church mode) is one of the eight systems of pitch organization used in Gregorian chant. History: The name of Pope Gregory I was attached to the variety of chant that was to become the dominant variety in medieval western and central Europe (the diocese of Milan was the sole significant exception) by the Frankish cantors reworking Roman ecclesiastical song during the Carolingian period. The theoretical framework of modes arose later to describe the tonal structure of this chant repertory, and is not necessarily applicable to the other European chant dialects (Old Roman, Mozarabic, Ambrosian, etc.). History: The repertory of Western plainchant acquired its basic forms between the sixth and early ninth centuries, but there are neither theoretical sources nor notated music from this period. By the late eighth century, a system of eight modal categories, for which there was no precedent in Ancient Greek theory, came to be associated with the repertory of Gregorian chant. This system likely originated from the early Byzantine oktōēchos, as indicated by the non-Hellenistic Greek names used in the earliest Western sources from about 800. Tonality: In the traditional system of eight modes (in use mainly between the 8th and 16th centuries) there are four pairs, each pair comprising an authentic mode and a plagal mode. Tonality: Authentic mode The authentic modes were the odd-numbered modes 1, 3, 5, 7, and this distinction was extended to the Aeolian and Ionian modes when they were added to the original eight Gregorian modes in 1547 by Glareanus in his Dodecachordon. The final of an authentic mode is the tonic, though the range of modes 1, 2, and 7 may occasionally descend one step further. This added degree is called the "subfinal" which, since it lies a whole tone below the final, is also the "subtonium" of the mode. The range of mode 5 (Lydian) does not employ a subfinal, and so always maintains F as its lower limit. These four modes correspond to the modern modal scales starting on re (Dorian), mi (Phrygian), fa (Lydian), and so (Mixolydian). The tenor, or dominant (corresponding to the "reciting tone" of the psalm tones), is a fifth above the final of the scale, with the exception of mode 3 (Phrygian), where it is a sixth above the final. This is because a fifth above the tonic of mode 3 is the "unstable" ti (in modern solfège), which may be flattened to ta. Tonality: The older Byzantine system still retains eight echoi (sing. ἦχος – echos), each consisting of a small family of closely related modes that, if rounded to their diatonic equivalents, would be the eight modes of Gregorian chant. However, they are numbered differently, the authentic modes being 1, 2, 3, 4. Other Eastern Christian rites use similar systems of eight modes; see Syriac usage of Octoechos and Armenian usage of Octoechos. Tonality: Plagal mode A plagal mode (from Greek πλάγιος 'oblique, sideways, athwart') has a range that includes the octave from the fourth below the final to the fifth above. The plagal modes are the even-numbered modes 2, 4, 6 and 8, and each takes its name from the corresponding odd-numbered authentic mode with the addition of the prefix "hypo-": Hypodorian, Hypophrygian, Hypolydian, and Hypomixolydian. The earliest definition of the plagal mode is found in the treatise De harmonica (c. 880) by Hucbald, who specifies the range as running from the fourth below the final to the fifth above.
Later writers extend this general rule to include the sixth above the final and the fifth below, except for the Hypolydian mode, which would have a diminished fifth below the final, and so the fourth below, C, remained the lower limit. In addition to the range, the tenor (cofinal, or dominant, corresponding to the "reciting tone" of the psalm tones) differs. In the plagal modes, the tenor is a third lower than the tenor of the corresponding authentic mode, except in mode 8 (Hypomixolydian), where it is raised to a fourth above the finalis (a second below the tenor of the authentic mode 7) in order to avoid the "unstable" degree ti, which may be flattened (in the authentic mode 3, the tenor is similarly raised to the sixth above the finalis, and the tenor of plagal mode 4—Hypophrygian—is therefore also a fourth above the finalis). Tonality: In Byzantine modal theory (octoechos), the word "plagal" ("plagios") refers to the four lower-lying echoi, or modes. Thus the plagal first mode (also known as "tone 5" in the Russian naming system) represents a somewhat more developed and widened-in-range version of the first mode. The plagal second mode ("tone 6" in the Russian system) has a similar relation to the second mode, and the plagal fourth mode, respectively, to the fourth mode. Though there is no "plagal third mode", the mode that one would expect ("tone 7") is called the "grave tone". Tonality: Hierarchy of tones Two characteristic notes or pitches in a modal melody are the final and cofinal (tenor, dominant, or reciting tone). These are the primary degrees (often the 1st and 5th) on which the melody is conceived and on which it most often comes to rest, in graduated stages of finality. The final is the pitch on which the chant usually ends; it may be approximately regarded as analogous (but not identical) to the tonic in the Western classical tradition. Likewise the cofinal is an additional resting point in the chant; it may be regarded as having some analogy to the more recent dominant, but its interval from the tonic is not necessarily a fifth. In addition to the final and cofinal, every mode is distinguished by scale degrees called the mediant and the participant. The mediant is named from its position—in the authentic modes—between the final and cofinal. In the authentic modes it is the third degree of the scale, unless that note should happen to be B, in which case C substitutes for it. In the plagal modes, its position is somewhat irregular. The participant is an auxiliary note, generally adjacent to the mediant in authentic modes and, in the plagal forms, coincident with the cofinal of the corresponding authentic mode (some modes have a second participant). Given the confusion between ancient, medieval, and modern terminology, "today it is more consistent and practical to use the traditional designation of the modes with numbers one to eight". Sources: Sadie, Stanley; Tyrrell, John, eds. (2001). The New Grove Dictionary of Music and Musicians (2nd ed.). London: Macmillan. ISBN 9780195170672.
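The final/tenor rules described in the Tonality sections above are mechanical enough to express compactly in code. The following minimal Python sketch is an illustration added for clarity, not drawn from any chant treatise; the function names and representation are invented here. It encodes the rules as stated: finals rise through D, E, F, G in mode pairs, an authentic tenor lies a fifth above its final, a plagal tenor lies a third below its authentic counterpart, and any tenor that lands on the unstable B is raised to C.

```python
NOTES = ["C", "D", "E", "F", "G", "A", "B"]  # one diatonic octave

def step(note: str, degrees: int) -> str:
    """Move a number of diatonic steps; a 'fifth above' is +4 steps."""
    return NOTES[(NOTES.index(note) + degrees) % 7]

def final(mode: int) -> str:
    # Modes 1-2 share the final D, modes 3-4 E, modes 5-6 F, modes 7-8 G.
    return ["D", "E", "F", "G"][(mode - 1) // 2]

def tenor(mode: int) -> str:
    if mode % 2 == 1:                  # authentic: a fifth above the final
        t = step(final(mode), 4)
    else:                              # plagal: a third below the authentic tenor
        t = step(tenor(mode - 1), -2)
    return "C" if t == "B" else t      # the unstable B is raised to C

for m in range(1, 9):
    print(m, final(m), tenor(m))       # e.g. mode 3 -> E C, mode 8 -> G C
```

Running the sketch reproduces the traditional values: modes 1 through 8 give the finals D, D, E, E, F, F, G, G with tenors A, F, C, A, C, A, D, C, matching the exceptions for modes 3, 4 and 8 noted above.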
**Propagermanium** Propagermanium: Propagermanium (INN), also known by a variety of other names including bis(2-carboxyethylgermanium) sesquioxide and 2-carboxyethylgermasesquioxane, is an organometallic compound of germanium that is sold as an alternative medicine. It is a polymeric compound with the formula ((HOOCCH2CH2Ge)2O3)n. The compound was first synthesized in 1967 at the Asai Germanium Research Institute in Japan. It is a water-soluble organogermanium compound used as a raw material in health foods. The compound displays low toxicity in studies with rats.
**Last** Last: A last is a mechanical form shaped like a human foot. It is used by shoemakers and cordwainers in the manufacture and repair of shoes. Lasts come in many styles and sizes, depending on the exact job they are designed for. Common variations include simple one-size lasts used for repairing soles and heels, custom-purpose mechanized lasts used in modern mass production, and custom-made lasts used in the making of bespoke footwear. Lasts are made of firm materials—hardwoods, cast iron, and high-density plastics—to withstand contact with wetted leather and the strong forces involved in reshaping it. Since the early 19th century, lasts typically come in pairs to match the separate shapes of the right and left feet. The development of an automated lasting machine by the Surinamese-American Jan Ernst Matzeliger in the 1880s was a major development in shoe production, immediately improving quality, halving prices, and eliminating the previous putting-out systems surrounding shoemaking centers. Name: The English word last is thought to derive from a Proto-Germanic term reconstructed as *laistaz, meaning a track, a trace, or a footprint. Cognates include Swedish läst, Danish læste, and German Leisten. History: Although Roman cordwainers—bespoke shoemakers—have been found to have shaped some footwear separately for the right and left foot, this distinction was mostly lost following the barbarian invasions in late Antiquity. Upon the return of commercial shoemaking during the High Middle Ages, a single last was used to make shoes for either foot, with the expectation that use would gradually reshape the shoe as needed. The use of such "straights" was particularly important after the rise of both male and female heels in the 17th century made shoemaking more complicated than before. It was not until the beginning of industrial production and mass marketing in the early 19th century that lasts were again generally made and used in matching pairs. Generic one-size lasts are now only used for basic shoe repair. History: Well into the Industrial Revolution, shoe production was optimized by an elaborate division of labor in putting-out systems arranged around central workshops, but each step of production still required skilled labor. Attempts at mechanization in Britain by Marc Isambard Brunel during the Napoleonic Wars were partial and proved uneconomical after demobilization. Improvements to the sewing machine to suit it for work in leather took until 1850, and the major breakthrough was the Surinamese immigrant Jan Ernst Matzeliger's automated lasting machine, patented in 1883. This instantly centralized production, increased output by as much as 14 times, improved quality, and halved prices. Design: A last is a mechanical form shaped like a human foot. Lasts come in many styles and sizes, depending on the exact job they are designed for. Common variations include simple uniform lasts for shoe repair, custom-purpose mechanized lasts for shoe factories, and custom-made lasts for bespoke footwear. Though a last is typically made to approximate the shape of a human foot, the precise shape is tailored to the kind of footwear being made. For example, boot lasts typically hug the instep for a close fit. Modern last shapes are now usually designed with dedicated CAD software.
Design: Lasts are typically made from hardwoods, cast iron, and high-density plastics to maintain their shape even after prolonged use in contact with materials like wetted leather and under the mechanical stresses necessary to stretch and shape the material for shoes. Factory lasts must be able to hold the lasting tacks that position the parts of the shoe and then handle the force of the pullover machines used to bottom the shoe and add the sole. The usual material now is high-density polyethylene plastic (HMW-HDPE), which can be easily, cheaply, and precisely shaped; which withstands more damage from the tacks before requiring repair or replacement; and which can be recycled once it finally does wear out entirely. Wooden lasts are now used only for repair work and bespoke shoemaking, particularly in Europe and North America. Custom lasts: Cordwainers often use lasts that are specifically designed to the proportions of individual customers' feet. Made from wood or from various modern materials, these lasts do not need to withstand the pressures of mass production machinery, but they must be able to handle constant tacking and pinning and the wet environment associated with stretching and shaping materials such as leather.
**Cherry picking (basketball)** Cherry picking (basketball): Cherry picking, in basketball and certain other sports, refers to play where one player (the cherry picker) does not play defense with the rest of the team, but rather remains near half court or closer to their own team's goal. If the opponents do not designate a player to stay near the cherry picker, they will have a 5-on-4 advantage as they try to score, but if the defense steals the ball, it can make a long pass to the cherry picker for an uncontested basket. When the ball is acquired by a violation or foul, or after a made basket, the cherry picker is less relevant, as the opponents have more time to put their own defense in place. Cherry picking (basketball): Disapproval of cherry picking stems from the fact that the cherry picker is not playing the "complete" game and accumulates statistics for points scored that exaggerate the player's prowess. Legality: Cherry picking is uncommon but legal in organized basketball. In some amateur leagues, cherry picking—defined as a defender remaining in the opponents' backcourt after the opponents have advanced the ball to their forecourt—is a violation, penalized by loss of possession and of any resulting points. Methods: The cherry picker may "camp out", remaining on the player's offensive end to wait for a teammate to obtain the ball and pass it to the cherry picker for a shot before the defense can get back (as is also the case in the fast break). Methods: Another method is "bolting" or "breaking" toward the other goal the moment the opponents launch a shot, without waiting to see the outcome of the shot. If the shot fails and the defense gets the rebound, the cherry picker can be positioned to receive a long pass and take an unguarded shot. Even after a made shot, the team that was on defense may signal for one or more players to run toward the other basket so as to enable one or more passes toward the goal before the opponents can set up their defense. Other uses: Cherry picking is also a strategy in video games that simulate basketball. Analogs to cherry picking exist in other sports: Cherry picking in water polo is called "sea-gulling". Similar to the strategy in basketball, a swimmer of the defensive team remains on his or her offensive end and awaits a pass once teammates regain the ball. Cherry picking violates the offside rule of association football. In indoor football in many European countries, a rule named "Man over the line" prevents any player from crossing the middle of the court, apparently to encourage more play in attack. In Britain, staying near the goal is informally called "goal hanging". Ice hockey also has an offside rule that prevents cherry picking, but "loafing" just outside the opponents' blue line can lead to breakaways if the loafer's opponent is unaware of it. The term also refers to "picking" a rebound above the rim, a practice that may be a goaltending violation.
**ELIZA effect** ELIZA effect: The ELIZA effect, in computer science, is the tendency to project human traits — such as experience, semantic comprehension or empathy — onto computer programs that have a textual interface. The effect is a category mistake that arises when the program's symbolic computations are described through terms such as "think," "know" or "understand." History: The effect is named for ELIZA, the 1966 chatbot developed by MIT computer scientist Joseph Weizenbaum. When executing Weizenbaum's DOCTOR script, ELIZA simulated a Rogerian psychotherapist, largely by rephrasing the "patient's" replies as questions (a minimal sketch of this rephrasing technique appears at the end of this article): Human: Well, my boyfriend made me come here. ELIZA: Your boyfriend made you come here? Human: He says I'm depressed much of the time. ELIZA: I am sorry to hear you are depressed. Human: It's true. I'm unhappy. ELIZA: Do you think coming here will help you not to be unhappy? History: Though designed strictly as a mechanism to support "natural language conversation" with a computer, ELIZA's DOCTOR script was found to be surprisingly successful in eliciting emotional responses from users who, in the course of interacting with the program, began to ascribe understanding and motivation to the program's output. As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." Indeed, ELIZA's code had not been designed to evoke this reaction in the first place. Upon observation, researchers discovered that users unconsciously assumed ELIZA's questions implied interest and emotional involvement in the topics discussed, even when they consciously knew that ELIZA did not simulate emotion. Characteristics: In its specific form, the ELIZA effect refers only to "the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers". A trivial example of the specific form of the ELIZA effect, given by Douglas Hofstadter, involves an automated teller machine which displays the words "THANK YOU" at the end of a transaction. A naive observer might think that the machine is actually expressing gratitude; however, the machine is only printing a preprogrammed string of symbols. More generally, the ELIZA effect describes any situation where, based solely on a system's output, users perceive computer systems as having "intrinsic qualities and abilities which the software controlling the (output) cannot possibly achieve" or "assume that [outputs] reflect a greater causality than they actually do". In both its specific and general forms, the ELIZA effect is notable for occurring even when users of the system are aware of the determinate nature of the output produced by the system. Characteristics: From a psychological standpoint, the ELIZA effect is the result of a subtle cognitive dissonance between the user's awareness of programming limitations and their behavior towards the output of the program. Significance: The discovery of the ELIZA effect was an important development in artificial intelligence, demonstrating the principle of using social engineering rather than explicit programming to pass a Turing test. ELIZA convinced some users that a machine was human. This shift in human-machine interaction marked progress in technologies emulating human behavior. Two groups of chatbots are distinguished by William Meisel as "general personal assistants" and "specialized digital assistants".
General digital assistants have been integrated into personal devices, with skills like sending messages, taking notes, checking calendars, and setting appointments. Specialized digital assistants "operate in very specific domains or help with very specific tasks". Such digital assistants are programmed to aid productivity by assuming behaviors analogous to those of humans. Significance: Weizenbaum considered that not every part of human thought can be reduced to logical formalisms and that "there are some acts of thought that ought to be attempted only by humans". He also observed that we develop emotional involvement with machines when we interact with them as we would with humans. When chatbots are anthropomorphized, they tend to portray gendered features as a way through which we establish relationships with the technology. "Gender stereotypes are instrumentalised to manage our relationship with chatbots" when human behavior is programmed into machines. In the 1990s, Clifford Nass and Byron Reeves conducted a series of experiments establishing The Media Equation, demonstrating that people tend to respond to media as they would either to another person (by being polite, cooperative, attributing personality characteristics such as aggressiveness, humor, expertise, and gender) – or to places and phenomena in the physical world. Numerous subsequent studies in psychology, social science and other fields indicate that this type of reaction is automatic, unavoidable, and happens more often than people realize. Reeves and Nass (1996) argue that "Individuals' interactions with computers, television, and new media are fundamentally social and natural, just like interactions in real life" (p. 5). Significance: Feminized labor, or women's work, automated by anthropomorphic digital assistants reinforces an "assumption that women possess a natural affinity for service work and emotional labour". In defining our proximity to digital assistants through their human attributes, chatbots become gendered entities.
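The rephrasing mechanism described in the History section above is simple enough to sketch. The following minimal Python fragment is an illustration only, not Weizenbaum's actual implementation (ELIZA was written in MAD-SLIP); the two patterns and the pronoun table are invented for this example, but they show the keyword-matching and pronoun-reflection technique on which the DOCTOR script relied.

```python
import re

# Illustrative pronoun swaps applied when echoing the user's words back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement: str) -> str:
    text = statement.lower().strip().rstrip(".!?")
    # Rule 1: echo a "my X ..." statement back as a question about "your X ...".
    match = re.search(r"\bmy (.+)", text)
    if match:
        return f"Your {reflect(match.group(1))}?"
    # Rule 2: reflect a self-description such as "I'm unhappy".
    match = re.search(r"\bi'?m (.+)", text)
    if match:
        return f"I am sorry to hear you are {reflect(match.group(1))}."
    return "Please go on."  # content-free default keeps the conversation moving

print(respond("Well, my boyfriend made me come here."))
# -> Your boyfriend made you come here?
```

Even this toy version echoes the user's own words back with swapped pronouns, which is precisely the surface behavior that users read understanding into.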
**Centroblast** Centroblast: A centroblast generally refers to an activated B cell that is enlarged (12–18 micrometers) and is rapidly proliferating in the germinal center of a lymphoid follicle. They are specifically located in the dark zone of the germinal center. Centroblasts form from naive B cells exposed to follicular dendritic cell (FDC) cytokines, such as IL-6, IL-15, 8D6, and BAFF. Stimulation from helper T cells is also required for centroblast development. Interaction between CD40 ligand on an activated T helper cell and the B cell CD40 receptor induces centroblasts to express activation-induced cytidine deaminase, leading to somatic hypermutation, allowing the B cell receptor to potentially gain stronger affinity for an antigen. In the absence of FDC and helper T cell stimulation, centroblasts are unable to differentiate and will undergo CD95-mediated apoptosis. Centroblast: Morphologically, centroblasts are large lymphoid cells containing a moderate amount of cytoplasm, round to oval vesicular (i.e. containing small fluid-filled sacs) nuclei, vesicular chromatin, and 2–3 small nucleoli often located adjacent to the nuclear membrane. They are derived from B cells. Immunoblasts are distinguished from centroblasts by being B cell-derived lymphoid cells that have moderate-to-abundant basophilic cytoplasm and a prominent, centrally located, trapezoid-shaped single nucleolus which often has fine strands of chromatin attached to the nuclear membrane ('spider legs'). In some cases, immunoblasts can show some morphologic features of plasma cells. Centroblasts do not express immunoglobulins and are unable to respond to the follicular dendritic cell antigens present in the secondary lymphoid follicles. However, they are able to promote the secretion of immunoglobulins through CD27/CD70 interactions. B cells begin expressing CD27 at the beginning of the centroblast stage and lose the cell marker after differentiating into centrocytes. CD27 is an important marker for germinal center formation in the lymphoid follicle and is produced by centroblasts interacting with CD28+ helper T cells. The formation of the germinal center is important for the production of antibody-secreting plasma cells and memory B cells. After proliferation, centroblasts migrate to the light zone of the germinal center and eventually give rise to centrocytes.
**2C-O-4** 2C-O-4: 2C-O-4 (4-isopropoxy-2,5-dimethoxyphenethylamine) is a phenethylamine of the 2C family. It is also a positional isomer of isoproscaline and was probably first synthesized by Alexander Shulgin. It produces hallucinogenic, psychedelic, and entheogenic effects. Because of the low potency of 2C-O-4, and the inactivity of 2C-O, Shulgin felt that the 2C-O series would not be an exciting area for research, and did not pursue any further analogues. Chemistry: 2C-O-4 is in a class of compounds commonly known as phenethylamines, and the systematic chemical name is 2-(4-isopropoxy-2,5-dimethoxyphenyl)ethanamine. Effects: Little is known about the psychopharmacological effects of 2C-O-4. Based on the one report available in his book PiHKAL, Shulgin lists the dosage of 2C-O-4 as being >60 mg. Pharmacology: The mechanism that produces the hallucinogenic and entheogenic effects of 2C-O-4 is unknown. Dangers: The toxicity of 2C-O-4 is not known. Legality: Canada As of October 31, 2016, 2C-O-4 is a controlled substance (Schedule III) in Canada. United States 2C-O-4 is unscheduled and unregulated in the United States; however, because of its close similarity in structure and effects to mescaline and 2C-T-7, possession and sale of 2C-O-4 may be subject to prosecution under the Federal Analog Act.
**Fananas cell** Fananas cell: Fañanas cells (also known as the feathered cells of Fañanas) are glial cells of the cerebellar cortex. Fananas cell: They are located in the granular layer, with their cytoplasmic protrusions extending into the lower part of the molecular layer as well. They are thought to be closely related to, and are sometimes even called, Golgi epithelial cells, and they are juxtaposed to the radial glial cells, or Bergmann glial cells. Fañanas cells are sometimes defined as "specialised astrocytes". The feathered cells are named after Jorge Ramón y Cajal Fañanás, a son of Santiago Ramón y Cajal, who first described this type of glial cell in 1916. Histology: Microscopic studies show that the Fañanas cell represents a satellite glial cell whose protrusions do not contain glial filaments such as those formed by GFAP. They are located near the somata of Purkinje cells in the granular layer. With regard to the typical "feathered" microscopic structure of the cells, Fañanas glial cells occur in subforms with one, two or multiple "feathers" of cytoplasmic extensions that are studded with small, rounded sprouts. The protrusions are often much shorter than those of other Golgi epithelial cells and run parallel to the fibres of the Bergmann glial cells. Fañanas cell extensions are normally not part of the glial limiting membrane. Histology: With the Nissl method, Fañanas cells can be identified by their slightly larger, round to oval nuclei, scattered in the molecular and granular layers. The cells need to be prepared with Ramón y Cajal's gold sublimate impregnation or a silver staining method. Function and clinical relevance: The role of the Fañanas cell in the connectivity and structure of the cerebellar cortex is still unknown. Function and clinical relevance: One study found deviations in the expression of vimentin in patients with Creutzfeldt-Jakob disease (CJD) that could be related to pathological changes in Fañanas glia. These variances were also described in cerebellar microglia and Bergmann cells. However, the results of the study did not point to significant changes specific to Fañanas cells but rather described the possible importance of astrocytes in general in the aetiology of CJD.
**Teach the controversy (campaign)** Teach the controversy (campaign): The "teach the controversy" campaign of the Discovery Institute seeks to promote the pseudoscientific principle of intelligent design (a variant of traditional creationism) as part of its attempts to discredit the teaching of evolution in United States public high school science courses. Scientific organizations (including the American Association for the Advancement of Science) point out that the institute claims that there is a scientific controversy where in fact none exists. The Discovery Institute is a conservative Christian think tank based in Seattle, Washington. The overall goals of the movement are "to defeat scientific materialism" and "to replace [it] with the theistic understanding that nature and human beings are created by God". It claims that fairness requires educating students with a "critical analysis of evolution" in which "the full range of scientific views", evolution's "unresolved issues", and the "scientific weaknesses of evolutionary theory" are presented and evaluated, and in which intelligent design concepts such as irreducible complexity are presented. Teach the controversy (campaign): The scientific community and science education organizations have replied that there is no scientific controversy regarding the validity of the theory of evolution and that the controversy exists solely in religion and politics. A federal court has agreed with the evaluation of the majority of scientific organizations (including the American Association for the Advancement of Science) that the institute has manufactured the controversy it wants taught by promoting the false perception that evolution is "a theory in crisis", falsely claiming the theory is the subject of wide controversy and debate within the scientific community. In fact, intelligent design has been rejected by essentially all of the scientific community; by one numerical estimate, 99.9 percent of scientists reject it. In December 2005, a federal judge ruled that intelligent design is not science and "cannot uncouple itself from its creationist, and thus religious, antecedents". The federal ruling also characterized "teaching the controversy" as part of a religious ploy. Origin of the campaign name: The term "teach the controversy" originated with Gerald Graff, a professor of English and education at the University of Illinois at Chicago, as a reminder to teach that established knowledge is created in a crucible of debate and controversy. To the chagrin of Graff, who describes himself as a liberal secularist, the idea was later appropriated by Phillip E. Johnson, Discovery Institute program advisor and father of the ID movement. Discussing the 1999–2000 Kansas State Board of Education controversy over the teaching of intelligent design in public school classrooms, Johnson wrote: "What educators in Kansas and elsewhere should be doing is to 'teach the controversy'." In his book, Johnson proposed casting the conflicting points of view and agendas as a scholarly controversy. Johnson's usage differs fundamentally and disingenuously from Graff's original use of the concept.
While Graff advocated that a comprehensive understanding of what are considered to be "established" concepts must include teaching the debates and conflicts by which they were established, Johnson appropriated the phrase to cast doubt upon the very process and results of the scientific method of establishing knowledge through debate and conflict based on facts determined by experimentation. The phrase was picked up by the Discovery Institute affiliates Stephen C. Meyer, David K. DeWolf, and Mark E. DeForrest in their 1999 article "Teaching the Controversy: Darwinism, Design and the Public School Science Curriculum", published by the Foundation for Thought and Ethics. This foundation also publishes the pseudoscientific intelligent design biology textbook Of Pandas and People, suggested as an alternative to mainstream science and biology textbooks in the Critical Analysis of Evolution lesson plans proposed by proponents of the "teach the controversy" campaign. Development of the strategy: Comparisons of the drafts of the intelligent design textbook Of Pandas and People before and after the 1987 Edwards v. Aguillard ruling showed that the definition given in the book for "creation science" in pre-Edwards drafts is identical to the definition of "intelligent design" in post-Edwards drafts; that cognates of the word creation (creationism and creationist), which appeared approximately 150 times, were deliberately and systematically replaced with the phrase "intelligent design"; and that the changes occurred shortly after the Supreme Court ruled in Edwards that creation science is religious and cannot be taught in public school science classes. The campaign was devised by Stephen C. Meyer and Discovery Institute founder and president Bruce Chapman as a compromise strategy in March 2002. They had come to the realisation that the dispute over intelligent design's (lack of) scientific standing was complicating their efforts to have evolution challenged in the science classroom. This strategy was designed to move the focus onto an approach that stresses open debate and evolution's purported weaknesses, but does not require students to study intelligent design. The intention was to create doubt over evolution and avoid the question of whether the intelligent designer was God, while giving the institute time to strengthen its purported theory of intelligent design. Another advantage of this strategy was that it allayed teachers' fears of legal action. Employment of the strategy: The Discovery Institute's strategy has been for the institute itself, or groups acting on its behalf, to lobby state and local boards of education and local, state and federal policymakers to enact policies and/or laws, often in the form of textbook disclaimers and the language of state science standards, that undermine or remove evolutionary theory from the public school science classroom by portraying it as "controversial" and "in crisis," a portrayal that stands in contrast to the overwhelming consensus of the scientific community that there is no controversy, that evolution is one of the best-supported theories in all of science, and that whatever controversy does exist is political and religious, not scientific. The Teach the Controversy strategy has benefitted from the 'stacking' of municipal, county and state school boards with intelligent design proponents, as alluded to in the Discovery Institute's Wedge Strategy.
Employment of the strategy: As the primary organizer and promoter of the Teach the Controversy campaign, the Discovery Institute has played a central role in nearly all intelligent design cases, often working behind the scenes to orchestrate, underwrite and support local campaigns and intelligent design groups such as the Intelligent Design Network. Its support has ranged from material assistance to federal, state and regionally elected representatives in the drafting of bills, to support and advice for individual parents confronting their school boards. DI's goal is to move from battles over standards to curriculum writing and textbook adoption while undermining the central positions of evolution in biology and methodological naturalism in science. In order to make their proposals more palatable, the Institute and its supporters claim to advocate presenting evidence both for and against evolution, thus encouraging students to evaluate the evidence. Employment of the strategy: Though Teach the Controversy is presented by its proponents as encouraging academic freedom, it, along with the Santorum Amendment, is viewed by many academics as a threat to academic freedom and is rejected by the National Science Teachers Association and the American Association for the Advancement of Science. The American Society for Clinical Investigation's Journal of Clinical Investigation describes the Teach the Controversy strategy and campaign as a "hoax" and states that "the controversy is manufactured". Along with the objection that there is no scientific controversy to teach, another common objection is that the Teach the Controversy campaign and intelligent design arise out of a Christian fundamentalist and evangelistic movement that calls for broad social, academic and political changes. Intelligent design proponents argue their concepts and motives should be given independent consideration. Those critical of intelligent design see the two as intertwined and inseparable, citing the foundational documents of the movement, such as the Wedge Document, and statements made by intelligent design proponents to their constituents. The judge in the Kitzmiller v. Dover Area School District trial considered testimony and evidence from both sides on the question of the motives of intelligent design proponents when he ruled that "ID cannot uncouple itself from its creationist, and thus religious, antecedents" and that "ID is an interesting theological argument, but that it is not science." In the debate surrounding the linking of the motives of intelligent design proponents to their arguments, following the Kansas evolution hearings the chairman of the Kansas school board, Steve Abrams, was quoted in The New York Times as saying that though he is a creationist who believes that God created the universe 6,500 years ago, he is able to keep the two separate: In my personal faith, yes, I am a creationist, ... But that doesn't have anything to do with science. I can separate them. ... my personal views of Scripture have no room in the science classroom. Employment of the strategy: Afterward, Lawrence Krauss, a Case Western Reserve University physicist and astronomer, said in a New York Times essay: A key concern should not be whether Dr. Abrams's religious views have a place in the classroom, but rather how someone whose religious views require a denial of essentially all modern scientific knowledge can be chairman of a state school board. ...
As we work to improve the abysmal state of science education in our public schools, we will continue to do battle with those who feel that knowledge is a threat to religious faith ... we should remember that the battle is not against faith, but against ignorance. Employment of the strategy: A rudimentary form of the teach the controversy strategy had first emerged among creationists following the Supreme Court's Edwards v. Aguillard decision. The Institute for Creation Research (ICR) prepared an evaluation of what the movement should try next, suggesting "school boards and teachers should be strongly encouraged at least to stress the scientific evidences and arguments against evolution in their classes . . . even if they don't wish to recognize these as evidences and arguments for creationism." Glenn Branch of the National Center for Science Education says this comment shows that the teach the controversy strategy was "pioneered in the wake of Edwards v. Aguillard." Prior to the September 2005 start of the Kitzmiller v. Dover Area School District trial, the "Dover trial," prominent intelligent design proponents gradually shifted to a "Teach the Controversy" strategy. They had realised that mandates requiring the teaching of intelligent design were unlikely to survive challenges based on the Establishment Clause of the First Amendment, and that an unfavorable ruling would have the effect of legally establishing intelligent design as a form of religious creationism. Thus, the Discovery Institute repositioned itself. It publicly abandoned advocating for any policies or laws that required the teaching of intelligent design in favor of a Teach the Controversy strategy. Institute Fellows reasoned that once the "fact" that a controversy indeed exists had been established in the public's mind, the reintroduction of intelligent design into public school curricula would be much less controversial later. The best illustration of this shift in strategy is a comparison of the Discovery Institute's 1999 guidebook Intelligent Design in Public School Science Curricula, which concludes that "school boards have the authority to permit, and even encourage, teaching about design theory as an alternative to Darwinian evolution", with 2006 statements by Phillip E. Johnson that his intent was never to use public school education as the forum for his ideas and that he hoped to ignite and perpetuate a debate in universities and among the higher echelon of scientific thinkers. With the December 2005 ruling in Kitzmiller v. Dover Area School District, wherein Judge John E. Jones III concluded that intelligent design is not science, intelligent design proponents were left with the Teach the Controversy strategy as the most likely remaining method to realize the goals stated in the wedge document. Thus, the Teach the Controversy strategy has become the primary thrust of the Discovery Institute in promoting its aims. Just as intelligent design is a stalking horse for the campaign against what its proponents claim is a materialist foundation in science that precludes God, Teach the Controversy has become a stalking horse for intelligent design. But the Dover ruling also characterized "teaching the controversy" as part of a religious ploy. Shift to the "Critical Analysis of Evolution": By May 2006, the Discovery Institute sought to replace the failed "teach the controversy" strategy with a strategy broadened to include examples of other supposedly legitimate scientific controversies.
In Ohio and Michigan, where school boards were again reviewing science curriculum standards, the Discovery Institute and its allies proposed lesson plans that included global warming, cloning and stem cell research as further examples of controversies akin to the alleged scientific controversy over evolution. All four topics are accepted by the overwhelming majority of the scientific community as legitimate science, and all four are areas where US political conservatives have been known to be critical of the scientific consensus. Members of the scientific community have responded to this tactic by pointing out that, as with evolution, whatever controversy may exist over cloning and stem cell research has been largely social and political, while dissident viewpoints over global warming are often viewed as pseudoscience. Richard B. Hoppe, holder of a Ph.D. in Experimental Psychology from the University of Minnesota, described the tactic in the following way: Like the attacks on evolution, the attack on climate science is driven by the sectarian conviction that 'materialistic' science is untrustworthy and must be replaced. As with intelligent design creationism, science-deniers' so-called evidence takes the form of claims for the insufficiency of current scientific explanations rather than concrete, testable alternative hypotheses. As in the evolution debate, religious extremists use the clever strategy of denigrating the scientific consensus on causality (global warming is human-caused via pollution) by pretending it contrasts sharply with an alternative scientific theory that, properly-understood, is really just a more nuanced view that's not really in opposition (current global warming is part of the earth's natural cycle but is being exacerbated by pollution). This exaggerates the intensity of normal scientific debate in order to suggest there's something wrong with climate science, and then uses this manufactured controversy to cloak the anti-science view and smuggle it into classrooms — sectarian religious evangelism masquerading as science.

Shift to the "Critical Analysis of Evolution": With the Dover ruling describing "teach the controversy" as "at best disingenuous, and at worst a canard", intelligent design proponents have moved to a fallback position, emphasizing contrived flaws in evolution and overemphasizing remaining questions in the theory, an approach they call the Critical Analysis of Evolution. The Critical Analysis of Evolution strategy is viewed by Nick Matzke and other intelligent design critics as a means of teaching all the intelligent design arguments without using the intelligent design label. Critical Analysis of Evolution continues the themes of the teach the controversy strategy, emphasizing what proponents say are the "criticisms" of evolutionary theory and "arguments against evolution," with evolution continuing to be portrayed as "a theory in crisis." Early drafts of the Critical Analysis of Evolution lesson plan referred to the lesson as the "great evolution debate"; one of the early drafts had a section titled "Conducting the Macroevolution Debate". In a subsequent draft, it was changed to "Conducting the Critical Analysis Activity".
The wording for the two sections is nearly identical, with just "debate" changed to "critical analysis activity" wherever it appeared, in the manner in which intelligent design proponents simply replaced "creation" with "intelligent design" in Of Pandas and People to repackage a creation science textbook as an intelligent design textbook.

Repercussions: The campaigns of intelligent design proponents seeking curricular challenges have been disruptive, divisive and expensive for the affected communities. In pursuing the goal of establishing intelligent design at the expense of evolution in public school science classes, intelligent design groups have threatened and isolated high school science teachers, school board members and parents who opposed their efforts. The campaigns run by intelligent design groups place teachers in the difficult position of arguing against their employers, while the legal challenges to local school districts are costly, diverting funding away from education and into court battles. For example, as a result of the Dover trial, the Dover Area School District was forced to pay $1,000,011 in legal fees and damages for pursuing a policy of teaching the controversy.

Four days after the six-week Dover trial concluded, all eight of the Dover school board members who were up for reelection were voted out of office. Televangelist Pat Robertson in turn told the citizens of Dover, "If there is a disaster in your area, don't turn to God. You just rejected him from your city." Robertson said that if they have future problems in Dover, "I recommend they call on Charles Darwin. Maybe he can help them."

Critics, like Wesley R. Elsberry, say the Discovery Institute has cynically manufactured much of the political and religious controversy to further its agenda, pointing to statements of prominent proponents like Johnson: Whether educational authorities allow the schools to teach about the controversy or not, public recognition that there is something seriously wrong with Darwinian orthodoxy is going to keep on growing. While the educators stonewall, our job is to continue building the community of people who understand the difference between a science that tests its theories against the evidence, and a pseudoscience that protects its key doctrines by imposing philosophical rules and erecting legal barriers to freedom of thought.

Repercussions: Regarding the absence of actual scientific controversy over the validity of evolutionary theory, Johnson said: If the science educators continue to pretend that there is no controversy to teach, perhaps the television networks and the newspapers will take over the responsibility of informing the public. And regarding the resistance of science educators to portraying evolution as controversial or disputed, Johnson said: If the public school educators will not "teach the controversy," our informal network can do the job for them. In time, the educators will be running to catch up.

Repercussions: Elsberry and others allege that statements like Johnson's are proof that the alleged scientific controversy intelligent design proponents seek to have taught is a product of the institute's members and staff. In the Dover trial's ruling, the judge wrote that intelligent design proponents had misrepresented the scientific status of evolution. According to published reports, the nonprofit Discovery Institute received grants and gifts totaling $4.1 million for 2003 from 22 foundations. Of these, two-thirds had primarily religious missions.
The institute spends more than $1 million a year on research, polls, lobbying and media pieces that support intelligent design and the Teach the Controversy campaign, and employs the same Washington, D.C. public relations firm that promoted the Contract with America.

Political action: The Discovery Institute aggressively promoted its Teach the Controversy campaign and intelligent design to the public, education officials and public policymakers. Its efforts were largely aimed at conservative Christian policymakers, to whom it was cast as a counterbalance to the liberal influences of "atheistic scientists" and "Dogmatic Darwinists." As a measure of its success in this effort, on 1 August 2005, during a round-table interview with reporters from five Texas newspapers, President Bush said that he believes schools should discuss intelligent design alongside evolution when teaching students about the origin of life. Bush, a conservative Christian, declined to go into detail on his personal views of the origin of life, but advocated the Teach the Controversy approach, saying, "I think that part of education is to expose people to different schools of thought... you're asking me whether or not people ought to be exposed to different ideas, the answer is yes." Christian conservatives, a substantial part of Bush's voting base, were central in promoting the Teach the Controversy campaign.

Political action: In some state battles, the ties of Teach the Controversy and intelligent design proponents to the Discovery Institute's political and social activities were made public, resulting in their efforts being temporarily thwarted. The Discovery Institute took the view that all publicity is good and no defeat is real. The Institute showed a willingness to back off, even to the point of not advocating for the inclusion of ID, provided that all science teachers were required to portray evolution as a "theory in crisis." The institute's strategy has been to move from standards battles, to curriculum writing, to textbook adoption, and back again, doing whatever it takes to undermine the central position of evolution in biology. Critics of this strategy and the movement contended that the intelligent design controversy diverts much time, effort and tax money away from the actual education of children.

Political action: Political battles involving the Discovery Institute

2000 Congressional briefing: In 2000, the leading ID proponents, operating through the Discovery Institute, held a congressional briefing in Washington, D.C., to promote ID to lawmakers. Sen. Rick Santorum was and continues to be one of ID's most vocal supporters. One result of this briefing was that Sen. Santorum inserted pro-ID language into the No Child Left Behind bill calling for students to be taught why evolution "generates so much continuing controversy," an assertion heavily promoted by the Discovery Institute.

Political action: 2001 Santorum Amendment: As a result of the 2000 congressional briefing, the Discovery Institute drafted and lobbied for the Santorum Amendment to the No Child Left Behind education act. The amendment encouraged the "teach the controversy" approach to evolution education. The amendment was passed by the U.S. Senate but was left out of the final version of the Act, and remains only in highly modified form in the conference report, where it does not carry the weight of law. The conference report language is commonly touted by the Discovery Institute as model language for bills and curricula.
The Discovery Institute lobbies states, counties, and municipalities, offering them legal analysis as well as Institute-developed curricula and textbooks that it claims meet the constitutional criteria established by the courts in previous creationism/evolution First Amendment cases.

Political action: 2002–2006 Ohio Board of Education: The Discovery Institute proposed a model lesson plan that featured intelligent design prominently in its curricula. It was adopted in part in October 2002, with the Board advising that the science standards do "not mandate the teaching or testing of intelligent design." This was touted by the Discovery Institute as a significant victory. By February 2006 the Ohio Board of Education had voted 11–4 to delete the science standard and correlating lesson plan adopted in 2002. The board also rejected a competing plan from the institute to request a legal opinion from the state attorney general on the constitutionality of the science standards. Intelligent design proponents pledged to force another vote on the issue.

Political action: 2005 Kansas evolution hearings: A series of hearings instigated by the institute was held in Topeka, Kansas, in May 2005 by the Kansas State Board of Education to review proposed changes to how the origin of life would be taught in the state's public high school science classes. The hearings were boycotted by the scientific community, and the views expressed were largely those of intelligent design advocates. The result of the hearings was the adoption of new science standards by the Republican-dominated board, in defiance of the State Board Science Hearing Committee, that relied upon the institute's Critical Analysis of Evolution lesson plan and adopted the institute's Teach the Controversy approach. In August 2006 conservative Republicans lost their majority on the board in a primary election. The moderate Republicans and Democrats gaining seats vowed to overturn the 2005 school science standards and adopt those recommended by the State Board Science Hearing Committee that had been rejected by the previous board.

Political action: 2005 Kitzmiller v. Dover Area School District: Eleven parents of students in the school district of Dover, Pennsylvania, sued the Dover Area School District over a statement, endorsing intelligent design as an alternative to evolution, that the school board required to be read aloud in ninth-grade science classes when evolution was taught. The plaintiffs successfully argued that intelligent design is a form of creationism, and that the school board policy thus violated the Establishment Clause of the First Amendment. In December 2005, United States federal court judge John E. Jones III ruled that intelligent design is not science and is essentially religious in nature.

Criticism: The theory of evolution is accepted by the vast majority of biologists and by the scientific community in general, in such overwhelming numbers that it is viewed as having scientific consensus. Over 70 scientific societies, institutions, and other professional groups representing tens of thousands of individual scientists have issued policy statements supporting evolution education and opposing intelligent design. Scientific controversies are minor and concern the details of the mechanisms of evolution, not the validity of the overarching theory of evolution.
In the absence of an actual professional controversy between groups of experts on evolution, critics say intelligent design proponents have merely renamed the conflict that already existed between biologists and creationists, and that the controversy to which intelligent design proponents refer is political in nature and thus, by definition, outside the realm of science and scientific educational curricula. Critics contend that intelligent design proponents ignore this point by continuing to claim a "scientific controversy." According to Thomas Dixon, "The 'controversy' in question has not arisen from any substantial scientific disagreement but is the product of a concerted public relations exercise aimed at the Christian parents of America."

For example, the National Association of Biology Teachers, in a statement endorsing evolution as noncontroversial, quoted Theodosius Dobzhansky: "Nothing in biology makes sense except in the light of evolution", and went on to state that the quote "accurately reflects the central, unifying role of evolution in biology. The theory of evolution provides a framework that explains both the history of life and the ongoing adaptation of organisms to environmental challenges and changes." They emphasized that "Scientists have firmly established evolution as an important natural process" and that "The selection of topics covered in a biology curriculum should accurately reflect the principles of biological science. Teaching biology in an effective and scientifically honest manner requires that evolution be taught in a standards-based instructional framework with effective classroom discussions and laboratory experiences."

Prominent evolutionary biologists such as Richard Dawkins and Jerry Coyne have proposed various "controversies" that are worth teaching instead of intelligent design. Dawkins compares teaching intelligent design in schools to teaching flat earthism: perfectly fine in a history class but not in science. "If you give the idea that there are two schools of thought within science, one that says the earth is round and one that says the earth is flat, you are misleading children". Tufts University Professor of Philosophy Daniel C. Dennett, author of Darwin's Dangerous Idea, describes how intelligent design proponents generate a sense of controversy: "The proponents of intelligent design use an ingenious ploy that works something like this: First you misuse or misdescribe some scientist's work. Then you get an angry rebuttal. Then, instead of dealing forthrightly with the charges leveled, you cite the rebuttal as evidence that there is a 'controversy' to teach".

Critics of the Teach the Controversy movement and strategy can also be found outside of the scientific community. Barry W. Lynn, executive director of Americans United for Separation of Church and State, described the approach of the movement's proponents as "a disarming subterfuge designed to undermine solid evidence that all living things share a common ancestry." "The movement is a veneer over a certain theological message. Every one of these groups is now actively engaged in trying to undercut sound science education by criticizing evolution," said Lynn. "It is all based on their religious ideology. Even the people who don't specifically mention religion are hard-pressed with a straight face to say who the intelligent designer is if it's not God".
Criticism: The Discovery Institute: According to critics of the Discovery Institute's efforts through the Teach the Controversy campaign and the intelligent design movement, the Wedge strategy betrays the institute's political, rather than scientific and educational, purpose. The Discovery Institute and its Center for Science and Culture (CSC) have an overarching conservative Christian social and political agenda that seeks to redefine both law and science and how they are conducted, with the stated goal of a religious "renewal" of American culture.

Criticism: Critics also allege that the Discovery Institute has a long-standing record of misrepresenting research, law, and its own policy and agenda and that of others. In announcing the Teach the Controversy strategy in 2002, the Discovery Institute's Stephen C. Meyer presented an annotated bibliography of 44 peer-reviewed scientific articles that were said to raise significant challenges to key tenets of what was referred to as "Darwinian evolution." In response to this claim, the National Center for Science Education, an organization that works in collaboration with the National Academy of Sciences, the National Association of Biology Teachers, and the National Science Teachers Association to support the teaching of evolution in public schools, contacted the authors of the papers listed; twenty-six scientists, representing thirty-four of the papers, responded. None of the authors considered his or her research to provide evidence against evolution.

Criticism: The Discovery Institute, following the policies outlined by Phillip E. Johnson, obfuscates its agenda. Contrary to the public statements made by the Discovery Institute, Johnson has admitted that the goal of the intelligent design movement is to cast creationism as a scientific concept: Our strategy has been to change the subject a bit so that we can get the issue of intelligent design, which really means the reality of God, before the academic world and into the schools.

Criticism: This isn't really, and never has been a debate about science. It's about religion and philosophy.

Criticism: If we understand our own times, we will know that we should affirm the reality of God by challenging the domination of materialism and naturalism in the world of the mind. With the assistance of many friends I have developed a strategy for doing this....We call our strategy the 'wedge.' So the question is: "How to win?" That's when I began to develop what you now see full-fledged in the "wedge" strategy: "Stick with the most important thing" —the mechanism and the building up of information. Get the Bible and the Book of Genesis out of the debate because you do not want to raise the so-called Bible-science dichotomy. Phrase the argument in such a way that you can get it heard in the secular academy and in a way that tends to unify the religious dissenters. That means concentrating on, "Do you need a Creator to do the creating, or can nature do it on its own?" and refusing to get sidetracked onto other issues, which people are always trying to do.

Criticism: Rob Boston of Americans United for Separation of Church and State described Johnson's vision of the Wedge as: "The objective [of the Wedge Strategy] is to convince people that Darwinism is inherently atheistic, thus shifting the debate from creationism vs. evolution to the existence of God vs. the non-existence of God.
From there people are introduced to 'the truth' of the Bible and then 'the question of sin' and finally 'introduced to Jesus.'" Instead of producing original scientific data to support ID's claims, the Discovery Institute has promoted ID politically to the public, education officials and public policymakers through its Teach the Controversy campaign.

Criticism: Johnson's statements validate the criticisms leveled by those who allege that the Discovery Institute and its allied organizations are merely stripping the obvious religious content from their anti-evolution assertions as a means of avoiding the legal restriction on the establishment of religion. They argue that ID is simply an attempt to put a patina of secularity on top of what is a fundamentally religious belief and agenda.

Criticism: Given the history of the Discovery Institute as an organization committed to opposing any scientific theory inconsistent with "the theistic understanding that nature and human beings are created by God", many scientists regard the movement purely as a ploy to insert creationism into the science curriculum rather than as a serious attempt to discuss scientific evidence. In the words of Eugenie Scott of the National Center for Science Education: 'Teach the controversy' is a deliberately ambiguous phrase. It means 'pretend to students that scientists are arguing over whether evolution took place.' This is not happening. I mean you go to the scientific journals, you go to universities... and you ask the professors, is there an argument going on about whether living things had common ancestors? They'll look at you blankly. This is not a controversy.

Criticism: Teach the Controversy proponents cite the current public policy statements of the Discovery Institute as belying the criticism that their strategy is a creationist ploy, and decry critics as biased for failing to recognize the intelligent design movement's Teach the Controversy strategy as purely a question of science with no religion involved. That claim is itself belied by the Discovery Institute's formerly published policy statements, its "Wedge Document", and statements made to its constituency by its leadership, in particular Phillip E. Johnson.

Criticism: Johnson writes in the foreword to Creation, Evolution, & Modern Science (2000): The Intelligent Design movement starts with the recognition that "In the beginning was the Word," and "In the beginning God created." Establishing that point isn't enough, but it is absolutely essential to the rest of the gospel message. ... The first thing that has to be done is to get the Bible out of the discussion. ...This is not to say that the biblical issues are unimportant; the point is rather that the time to address them will be after we have separated materialist prejudice from scientific fact. Johnson's words bolster the claims of those critics who cite his admission that the ultimate goal of the campaign is getting "the issue of intelligent design, which really means the reality of God, before the academic world and into the schools".

Amid this political and religious controversy, the clear, categorical and oft-repeated view of established national and international scientific organizations remains that there is no scientific controversy over teaching evolution in public schools.
Criticism: University course: The George Mason University Biology Department introduced a 1-credit course on the creation/evolution controversy, and Emmett Holman, an associate professor of philosophy at the university, found that as students learn more about biology, they find objections to evolution less convincing. He concluded that "teaching the controversy" would undermine creationists' criticisms, and that the scientific community's resistance to this approach was bad public relations. He suggested that, rather than being taught in a mainstream science course, the controversy be covered in a separate elective course, probably taught by a scientist but framed as a course on the "philosophy of science", "history of science", or "politics of science and religion".

Biologist Tom A. Langen argues, in a journal letter entitled "What is right with 'teaching the controversy'?", that offering a specific course about the controversy would help students understand the demarcation between science and other ways of obtaining knowledge about nature. Similar positions have been expressed by atheists Julian Baggini and Aaron Sloman.
**Lidar** Lidar: Lidar (also LIDAR, LiDAR or LADAR, an acronym of "light detection and ranging" or "laser imaging, detection, and ranging") is a method for determining ranges by targeting an object or a surface with a laser and measuring the time for the reflected light to return to the receiver. Lidar may operate in a fixed direction (e.g., vertical) or it may scan multiple directions, in which case it is known as lidar scanning or 3-D laser scanning, a special combination of 3-D scanning and laser scanning. Lidar has terrestrial, airborne, and mobile applications.

Lidar is commonly used to make high-resolution maps, with applications in surveying, geodesy, geomatics, archaeology, geography, geology, geomorphology, seismology, forestry, atmospheric physics, laser guidance, airborne laser swathe mapping (ALSM), and laser altimetry. It is used to make digital 3-D representations of areas on the Earth's surface and of the ocean bottom in the intertidal and near-coastal zone by varying the wavelength of light. It has also been increasingly used in control and navigation for autonomous cars and for the helicopter Ingenuity on its record-setting flights over the terrain of Mars.

History and etymology: Under the direction of Malcolm Stitch, the Hughes Aircraft Company introduced the first lidar-like system in 1961, shortly after the invention of the laser. Intended for satellite tracking, this system combined laser-focused imaging with the ability to calculate distances by measuring the time for a signal to return, using appropriate sensors and data acquisition electronics. It was originally called "colidar", an acronym for "coherent light detecting and ranging", derived from the term "radar", itself an acronym for "radio detection and ranging". All laser rangefinders, laser altimeters and lidar units are derived from the early colidar systems. The first practical terrestrial application of a colidar system was the "Colidar Mark II", a large rifle-like laser rangefinder produced in 1963, which had a range of 11 km and an accuracy of 4.5 m and was to be used for military targeting. The first mention of lidar as a stand-alone word, in 1963, suggests that it originated as a portmanteau of "light" and "radar": "Eventually the laser may provide an extremely sensitive detector of particular wavelengths from distant objects. Meanwhile, it is being used to study the moon by 'lidar' (light radar) ..."

Lidar's first applications were in meteorology, for which the National Center for Atmospheric Research used it to measure clouds and pollution. The general public became aware of the accuracy and usefulness of lidar systems in 1971 during the Apollo 15 mission, when astronauts used a laser altimeter to map the surface of the Moon.

History and etymology: Although the English language no longer treats "radar" as an acronym (i.e., it is uncapitalized), the word "lidar" was capitalized as "LIDAR" or "LiDAR" in some publications beginning in the 1980s. No consensus exists on capitalization. Various publications refer to lidar as "LIDAR", "LiDAR", "LIDaR", or "Lidar". The USGS uses both "LIDAR" and "lidar", sometimes in the same document; the New York Times predominantly uses "lidar" for staff-written articles, although contributing news feeds such as Reuters may use "Lidar".

History and etymology: General description: Lidar uses ultraviolet, visible, or near-infrared light to image objects.
It can target a wide range of materials, including non-metallic objects, rocks, rain, chemical compounds, aerosols, clouds and even single molecules. A narrow laser beam can map physical features with very high resolution; for example, an aircraft can map terrain at 30-centimetre (12 in) resolution or better.

History and etymology: The essential concept of lidar was originated by E. H. Synge in 1930, who envisaged the use of powerful searchlights to probe the atmosphere. Indeed, lidar has since been used extensively for atmospheric research and meteorology. Lidar instruments fitted to aircraft and satellites carry out surveying and mapping; a recent example is the U.S. Geological Survey Experimental Advanced Airborne Research Lidar. NASA has identified lidar as a key technology for enabling autonomous precision safe landing of future robotic and crewed lunar-landing vehicles.

Wavelengths vary to suit the target: from about 10 micrometers (infrared) to approximately 250 nanometers (ultraviolet). Typically, light is reflected via backscattering, as opposed to the pure reflection one might find with a mirror. Different types of scattering are used for different lidar applications: most commonly Rayleigh scattering, Mie scattering, Raman scattering, and fluorescence. Suitable combinations of wavelengths can allow remote mapping of atmospheric contents by identifying wavelength-dependent changes in the intensity of the returned signal.

History and etymology: The name "photonic radar" is sometimes used to mean visible-spectrum range finding like lidar, although photonic radar more strictly refers to radio-frequency range finding using photonics components.

Technology: Mathematical formula: A lidar determines the distance of an object or a surface with the formula d = c·t / 2, where c is the speed of light, d is the distance between the detector and the object or surface being detected, and t is the time taken for the laser light to travel to the object or surface and back to the detector (a short code sketch of this relation follows below).

Technology: Design: The two kinds of lidar detection schemes are "incoherent" or direct energy detection (which principally measures amplitude changes of the reflected light) and coherent detection (best for measuring Doppler shifts, or changes in the phase of the reflected light). Coherent systems generally use optical heterodyne detection. This is more sensitive than direct detection and allows operation at much lower power, but requires more complex transceivers.

Technology: Both types employ pulse models: either micropulse or high energy. Micropulse systems utilize intermittent bursts of energy. They developed as a result of ever-increasing computer power, combined with advances in laser technology. They use considerably less energy in the laser, typically on the order of one microjoule, and are often "eye-safe", meaning they can be used without safety precautions. High-power systems are common in atmospheric research, where they are widely used for measuring atmospheric parameters: the height, layering and densities of clouds, cloud particle properties (extinction coefficient, backscatter coefficient, depolarization), temperature, pressure, wind, humidity, and trace gas concentration (ozone, methane, nitrous oxide, etc.).

Technology: Components: Lidar systems consist of several major components.

Laser: 600–1000 nm lasers are most common for non-scientific applications.
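The two-way time-of-flight relation d = c·t/2 promised above can be illustrated in a few lines of Python. This is a minimal sketch, not any particular instrument's processing chain; the round-trip time used is an invented example value.

```python
# Minimal sketch of lidar time-of-flight ranging: d = c * t / 2.
# The round-trip time below is an invented example value.

C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to the target from the round-trip travel time of a pulse.
    The division by two accounts for the pulse travelling out to the
    target and back to the detector."""
    return C * round_trip_seconds / 2.0

if __name__ == "__main__":
    t = 6.67e-7  # ~667 ns round trip
    print(f"{range_from_time_of_flight(t):.1f} m")  # ~100 m
```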
The maximum power of the laser is limited, or an automatic shut-off system that turns the laser off at specific altitudes is used, in order to make it eye-safe for people on the ground.

Technology: One common alternative, 1550 nm lasers, are eye-safe at relatively high power levels, since this wavelength is not strongly absorbed by the eye; however, the detector technology is less advanced, so these wavelengths are generally used at longer ranges with lower accuracies. They are also used for military applications because 1550 nm is not visible in night vision goggles, unlike the shorter 1000 nm infrared laser.

Technology: Airborne topographic mapping lidars generally use 1064 nm diode-pumped YAG lasers, while bathymetric (underwater depth research) systems generally use 532 nm frequency-doubled diode-pumped YAG lasers, because 532 nm penetrates water with much less attenuation than 1064 nm. Laser settings include the laser repetition rate (which controls the data collection speed). Pulse length is generally an attribute of the laser cavity length, the number of passes required through the gain material (YAG, YLF, etc.), and Q-switch (pulsing) speed. Better target resolution is achieved with shorter pulses, provided the lidar receiver detectors and electronics have sufficient bandwidth.

Technology: Phased arrays: A phased array can illuminate any direction by using a microscopic array of individual antennas. Controlling the timing (phase) of each antenna steers a cohesive signal in a specific direction.

Technology: Phased arrays have been used in radar since the 1940s. The same technique can be used with light. On the order of a million optical antennas are used to see a radiation pattern of a certain size in a certain direction. The system is controlled by timing the precise flash. A single chip (or a few) can replace a US$75,000 electromechanical system, drastically reducing costs. Several companies are working on developing commercial solid-state lidar units. The control system can change the shape of the lens to enable zoom-in and zoom-out functions. Specific sub-zones can be targeted at sub-second intervals. Electromechanical lidar lasts for between 1,000 and 2,000 hours. By contrast, solid-state lidar can run for 100,000 hours.

Technology: Microelectromechanical machines: Microelectromechanical mirrors (MEMS) are not entirely solid-state. However, their tiny form factor provides many of the same cost benefits. A single laser is directed to a single mirror that can be reoriented to view any part of the target field. The mirror spins at a rapid rate. However, MEMS systems generally operate in a single plane (left to right). To add a second dimension generally requires a second mirror that moves up and down. Alternatively, another laser can hit the same mirror from another angle. MEMS systems can be disrupted by shock and vibration and may require repeated calibration.

Technology: Scanner and optics: Image development speed is affected by the speed at which images are scanned. Options to scan the azimuth and elevation include dual oscillating plane mirrors, a combination with a polygon mirror, and a dual-axis scanner. Optic choices affect the angular resolution and the range that can be detected. A hole mirror or a beam splitter are options to collect a return signal.

Technology: Photodetector and receiver electronics: Two main photodetector technologies are used in lidar: solid-state photodetectors, such as silicon avalanche photodiodes, and photomultipliers.
The sensitivity of the receiver is another parameter that has to be balanced in a lidar design.

Position and navigation systems: Lidar sensors mounted on mobile platforms such as airplanes or satellites require instrumentation to determine the absolute position and orientation of the sensor. Such devices generally include a Global Positioning System receiver and an inertial measurement unit (IMU).

Technology: Sensor: Lidar uses active sensors that supply their own illumination source. The energy source hits objects, and the reflected energy is detected and measured by sensors. Distance to the object is determined by recording the time between the transmitted and backscattered pulses and using the speed of light to calculate the distance traveled. Flash lidar allows for 3-D imaging because of the camera's ability to emit a larger flash and sense the spatial relationships and dimensions of the area of interest with the returned energy. This allows for more accurate imaging, because the captured frames do not need to be stitched together and the system is not sensitive to platform motion, resulting in less distortion.

3-D imaging can be achieved using both scanning and non-scanning systems. "3-D gated viewing laser radar" is a non-scanning laser ranging system that applies a pulsed laser and a fast gated camera. Research has begun on virtual beam steering using Digital Light Processing (DLP) technology.

Technology: Imaging lidar can also be performed using arrays of high-speed detectors and modulation-sensitive detector arrays, typically built on single chips using complementary metal–oxide–semiconductor (CMOS) and hybrid CMOS/charge-coupled device (CCD) fabrication techniques. In these devices each pixel performs some local processing such as demodulation or gating at high speed, downconverting the signals to video rate so that the array can be read like a camera. Using this technique, many thousands of pixels/channels may be acquired simultaneously. High-resolution 3-D lidar cameras use homodyne detection with an electronic CCD or CMOS shutter. A coherent imaging lidar uses synthetic array heterodyne detection to enable a staring single-element receiver to act as though it were an imaging array.

In 2014, Lincoln Laboratory announced a new imaging chip with more than 16,384 pixels, each able to image a single photon, enabling it to capture a wide area in a single image. An earlier generation of the technology, with one fourth as many pixels, was dispatched by the U.S. military after the January 2010 Haiti earthquake. A single pass by a business jet at 3,000 m (10,000 ft) over Port-au-Prince was able to capture instantaneous snapshots of 600 m (2,000 ft) squares of the city at a resolution of 30 cm (1 ft), displaying the precise height of rubble strewn in city streets. The new system is ten times better and could produce much larger maps more quickly. The chip uses indium gallium arsenide (InGaAs), which operates in the infrared spectrum at a relatively long wavelength that allows for higher power and longer ranges. In many applications, such as self-driving cars, the new system will lower costs by not requiring a mechanical component to aim the chip. InGaAs uses less hazardous wavelengths than conventional silicon detectors, which operate at visual wavelengths.

Technology: Flash lidar: In flash lidar, the entire field of view is illuminated with a wide diverging laser beam in a single pulse.
This is in contrast to conventional scanning lidar, which uses a collimated laser beam that illuminates a single point at a time and is raster-scanned to illuminate the field of view point by point. This illumination method requires a different detection scheme as well. In both scanning and flash lidar, a time-of-flight camera is used to collect information about both the 3-D location and the intensity of the light incident on it in every frame. However, in scanning lidar this camera contains only a point sensor, while in flash lidar the camera contains either a 1-D or a 2-D sensor array, each pixel of which collects 3-D location and intensity information. In both cases, the depth information is collected using the time of flight of the laser pulse (i.e., the time it takes each laser pulse to hit the target and return to the sensor), which requires the pulsing of the laser and the acquisition by the camera to be synchronized. The result is a camera that takes pictures of distance instead of colors (a minimal sketch of this per-pixel conversion appears below). Flash lidar is especially advantageous, when compared to scanning lidar, when the camera, the scene, or both are moving, since the entire scene is illuminated at the same time. With scanning lidar, motion can cause "jitter" from the lapse in time as the laser rasters over the scene.

Technology: As with all forms of lidar, the onboard source of illumination makes flash lidar an active sensor. The returned signal is processed by embedded algorithms to produce a nearly instantaneous 3-D rendering of objects and terrain features within the field of view of the sensor. The laser pulse repetition frequency is sufficient for generating 3-D videos with high resolution and accuracy. The high frame rate of the sensor makes it a useful tool for a variety of applications that benefit from real-time visualization, such as highly precise remote landing operations. By immediately returning a 3-D elevation mesh of target landscapes, a flash sensor can be used to identify optimal landing zones in autonomous spacecraft landing scenarios.

Seeing at a distance requires a powerful burst of light. The power is limited to levels that do not damage human retinas, and the wavelengths used must not affect human eyes. However, low-cost silicon imagers do not read light in the eye-safe spectrum. Instead, gallium arsenide imagers are required, which can boost costs to $200,000. Gallium arsenide is the same compound used to produce high-cost, high-efficiency solar panels usually used in space applications.

Classification: Based on orientation: Lidar can be oriented to nadir, zenith, or laterally. For example, lidar altimeters look down, an atmospheric lidar looks up, and lidar-based collision avoidance systems are side-looking.

Classification: Based on scanning mechanism: Laser projections of lidars can be manipulated using various methods and mechanisms to produce a scanning effect: the standard spindle type, which spins to give a 360-degree view; solid-state lidar, which has a fixed field of view but no moving parts, and can use either MEMS or optical phased arrays to steer the beams; and flash lidar, which spreads a flash of light over a large field of view before the signal bounces back to a detector.

Classification: Based on platform: Lidar applications can be divided into airborne and terrestrial types. The two types require scanners with varying specifications based on the data's purpose, the size of the area to be captured, the range of measurement desired, the cost of equipment, and more.
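Returning to the flash-lidar depth camera described above: each pixel records its own round-trip time, so a single frame can be turned into 3-D points. The sketch below assumes, purely for illustration, a small sensor whose pixels look along a uniform angular grid; real sensors are calibrated per pixel, and all numbers here are invented.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def flash_frame_to_points(times, fov_deg=20.0):
    """Convert a 2-D grid of per-pixel round-trip times (seconds) into
    (x, y, z) points. Assumes each pixel looks along a ray on a uniform
    angular grid centred on the optical axis (a simplification)."""
    rows, cols = len(times), len(times[0])
    half = math.radians(fov_deg) / 2.0
    points = []
    for i, row in enumerate(times):
        for j, t in enumerate(row):
            if t is None:            # no return detected for this pixel
                continue
            r = C * t / 2.0          # range from time of flight
            az = -half + (j + 0.5) / cols * 2 * half  # left-right angle
            el = half - (i + 0.5) / rows * 2 * half   # up-down angle
            points.append((r * math.cos(el) * math.sin(az),
                           r * math.cos(el) * math.cos(az),
                           r * math.sin(el)))
    return points

# A 2x2 "frame": three echoes near 50 m and one pixel with no return.
frame = [[3.3e-7, 3.4e-7],
         [3.3e-7, None]]
for p in flash_frame_to_points(frame):
    print(tuple(round(v, 2) for v in p))
```

The same conversion applies to scanning lidar, except that the angles come from the scanner position at each pulse rather than from a fixed pixel grid.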
Spaceborne platforms are also possible; see satellite laser altimetry.

Airborne: Airborne lidar (also airborne laser scanning) is when a laser scanner, attached to an aircraft during flight, creates a 3-D point cloud model of the landscape. This is currently the most detailed and accurate method of creating digital elevation models, replacing photogrammetry. One major advantage in comparison with photogrammetry is the ability to filter out reflections from vegetation from the point cloud model, creating a digital terrain model that represents ground surfaces, such as rivers, paths and cultural heritage sites, which are concealed by trees. Within the category of airborne lidar, a distinction is sometimes made between high-altitude and low-altitude applications, but the main difference is a reduction in both the accuracy and the point density of data acquired at higher altitudes. Airborne lidar can also be used to create bathymetric models in shallow water.

The main products of airborne lidar include digital elevation models (DEM) and digital surface models (DSM). The points and ground points are vectors of discrete points, while DEM and DSM are raster grids interpolated from discrete points. The process also involves capturing digital aerial photographs. Airborne lidar is used, for example, to interpret deep-seated landslides under the cover of vegetation, from scarps, tension cracks or tipped trees. Airborne lidar digital elevation models can see through the canopy of forest cover and support detailed measurements of scarps, erosion and the tilting of electric poles.

Airborne lidar data can be processed using a toolbox called Toolbox for Lidar Data Filtering and Forest Studies (TIFFS), software for lidar data filtering and terrain study. The data is interpolated to digital terrain models using the software. The laser is directed at the region to be mapped, and each point's height above the ground is calculated by subtracting the corresponding digital terrain model elevation from the point's original z-coordinate. Based on this height above the ground, the non-vegetation data is identified, which may include objects such as buildings, electric power lines, flying birds, insects, etc. The rest of the points are treated as vegetation and used for modeling and mapping. Within each of these plots, lidar metrics are calculated from statistics such as the mean, standard deviation, skewness, percentiles, quadratic mean, etc.

Airborne: Drones are now being used with laser scanners, as well as other remote sensors, as a more economical method to scan smaller areas. The possibility of drone remote sensing also eliminates any danger to which aircraft crews may be subjected in difficult terrain or remote areas.

Airborne: Airborne lidar bathymetry: The airborne lidar bathymetric technological system involves the measurement of the time of flight of a signal from a source to its return to the sensor. The data acquisition technique involves a sea floor mapping component and a ground truth component that includes video transects and sampling. It works using a green-spectrum (532 nm) laser beam. Two beams are projected onto a fast rotating mirror, which creates an array of points. One of the beams penetrates the water and, under favorable conditions, also detects the bottom surface of the water.

Airborne: The water depth measurable by lidar depends on the clarity of the water and the absorption of the wavelength used. Water is most transparent to green and blue light, so these will penetrate deepest in clean water.
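The depth limits discussed here arise because pulse energy decays exponentially in water. As a rough illustration, the sketch below applies a Beer-Lambert-style two-way attenuation model; the attenuation coefficients are invented placeholders, not measured values, and real systems must also contend with surface reflection and scattering geometry.

```python
import math

def two_way_fraction(depth_m: float, k_per_m: float) -> float:
    """Fraction of pulse energy surviving a round trip to depth_m under
    a simple Beer-Lambert model with attenuation coefficient k_per_m.
    The factor of two covers the path down and back up."""
    return math.exp(-2.0 * k_per_m * depth_m)

# Hypothetical coefficients: clearer water attenuates 532 nm light less.
for label, k in [("clear coastal water", 0.15), ("turbid water", 0.5)]:
    d_1pct = math.log(100.0) / (2.0 * k)  # depth where only 1% returns
    print(f"{label}: 1% return depth ~ {d_1pct:.1f} m")
```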
Blue-green light of 532 nm, produced by frequency-doubled solid-state IR laser output, is the standard for airborne bathymetry. This light can penetrate water, but pulse strength attenuates exponentially with the distance traveled through the water. Lidar can measure depths from about 0.9 to 40 m (3 to 131 ft), with vertical accuracy on the order of 15 cm (6 in). Surface reflection makes water shallower than about 0.9 m (3 ft) difficult to resolve, and absorption limits the maximum depth. Turbidity causes scattering and plays a significant role in determining the maximum depth that can be resolved in most situations, and dissolved pigments can increase absorption depending on wavelength. Other reports indicate that water penetration tends to be between two and three times Secchi depth. Bathymetric lidar is most useful in the 0–10 m (0–33 ft) depth range in coastal mapping.

On average, in fairly clear coastal seawater lidar can penetrate to about 7 m (23 ft), and in turbid water up to about 3 m (10 ft). An average value found by Saputra et al., 2021, is for the green laser light to penetrate water about one and a half to two times Secchi depth in Indonesian waters. Water temperature and salinity affect the refractive index, which has a small effect on the depth calculation.

The data obtained shows the full extent of the land surface exposed above the sea floor. This technique is extremely useful, as it plays an important role in major sea floor mapping programs. The mapping yields onshore topography as well as underwater elevations. Sea floor reflectance imaging is another solution product from this system, which can benefit the mapping of underwater habitats. This technique has been used for three-dimensional image mapping of California's waters using a hydrographic lidar.

Airborne: Full-waveform lidar: Airborne lidar systems were traditionally able to acquire only a few peak returns, while more recent systems acquire and digitize the entire reflected signal. Scientists analyse the waveform signal to extract peak returns using Gaussian decomposition. Zhuang et al., 2017 used this approach for estimating aboveground biomass. Handling the huge amounts of full-waveform data is difficult, so Gaussian decomposition of the waveforms is effective, since it reduces the data and is supported by existing workflows that support the interpretation of 3-D point clouds. Recent studies have investigated voxelisation, in which the intensities of the waveform samples are inserted into a voxelised space (a 3-D grayscale image), building up a 3-D representation of the scanned area. Related metrics and information can then be extracted from that voxelised space. Structural information can be extracted using 3-D metrics from local areas, and one case study used the voxelisation approach to detect dead standing Eucalypt trees in Australia.

Terrestrial: Terrestrial applications of lidar (also terrestrial laser scanning) happen on the Earth's surface and can be either stationary or mobile. Stationary terrestrial scanning is most common as a survey method, for example in conventional topography, monitoring, cultural heritage documentation and forensics. The 3-D point clouds acquired from these types of scanners can be matched with digital images taken of the scanned area from the scanner's location to create realistic-looking 3-D models in a relatively short time when compared to other technologies.
Each point in the point cloud is given the colour of the pixel from the image taken at the same location and direction as the laser beam that created the point.

Terrestrial: Mobile lidar (also mobile laser scanning) is when two or more scanners are attached to a moving vehicle to collect data along a path. These scanners are almost always paired with other kinds of equipment, including GNSS receivers and IMUs. One example application is surveying streets, where power lines, exact bridge heights, bordering trees, etc. all need to be taken into account. Instead of collecting each of these measurements individually in the field with a tachymeter, a 3-D model can be created from a point cloud in which all of the measurements needed can be made, depending on the quality of the data collected. This eliminates the problem of forgetting to take a measurement, so long as the model is available, reliable and has an appropriate level of accuracy.

Terrestrial: Terrestrial lidar mapping involves a process of occupancy grid map generation. The process uses an array of cells divided into grids, which store the height values of the lidar data falling into each grid cell. A binary map is then created by applying a particular threshold to the cell values for further processing (a short code sketch of this step appears below). The next step is to process the radial distance and z-coordinates from each scan to identify which 3-D points correspond to each of the specified grid cells, leading to the process of data formation.

Applications: There are a wide variety of lidar applications, in addition to those listed below, as is often noted in national lidar dataset programs. These applications are largely determined by the range of effective object detection; resolution, which is how accurately the lidar identifies and classifies objects; and reflectance confusion, meaning how well the lidar can see something in the presence of bright objects, like reflective signs or bright sun. Companies are working to cut the cost of lidar sensors, currently anywhere from about US$1,200 to more than $12,000. Lower prices will make lidar more attractive for new markets.

Applications: Agriculture: Agricultural robots have been used for a variety of purposes ranging from seed and fertilizer dispersion to sensing techniques and crop scouting for weed control.

Applications: Lidar can help determine where to apply costly fertilizer. It can create a topographical map of the fields and reveal the slopes and sun exposure of the farmland. Researchers at the Agricultural Research Service used this topographical data together with the farmland yield results from previous years to categorize land into zones of high, medium, or low yield. This indicates where to apply fertilizer to maximize yield.

Applications: Lidar is now used to monitor insects in the field. The use of lidar can detect the movement and behavior of individual flying insects, with identification down to sex and species. In 2017 a patent application was published on this technology in the United States, Europe, and China. Another application is crop mapping in orchards and vineyards, to detect foliage growth and the need for pruning or other maintenance, detect variations in fruit production, or count plants.

Applications: Lidar is useful in GNSS-denied situations, such as nut and fruit orchards, where foliage blocks satellite signals to precision agriculture equipment or a driverless tractor.
Lidar sensors can detect the edges of rows, so that farming equipment can continue moving until the GNSS signal is reestablished.

Applications: Plant species classification: Controlling weeds requires identifying plant species. This can be done by using 3-D lidar and machine learning. Lidar produces plant contours as a "point cloud" with range and reflectance values. This data is transformed, and features are extracted from it. If the species is known, the features are added as new data. The species is labelled, and its features are initially stored as an example to identify the species in the real environment. This method is efficient because it uses a low-resolution lidar and supervised learning. It includes an easy-to-compute feature set with common statistical features which are independent of the plant size.

Applications: Archaeology: Lidar has many uses in archaeology, including the planning of field campaigns, mapping features under forest canopy, and providing an overview of broad, continuous features indistinguishable from the ground. Lidar can produce high-resolution datasets quickly and cheaply. Lidar-derived products can be easily integrated into a Geographic Information System (GIS) for analysis and interpretation.

Applications: Lidar can also help to create high-resolution digital elevation models (DEMs) of archaeological sites that can reveal micro-topography otherwise hidden by vegetation. The intensity of the returned lidar signal can be used to detect features buried under flat vegetated surfaces such as fields, especially when mapping using the infrared spectrum. The presence of these features affects plant growth and thus the amount of infrared light reflected back. For example, at Fort Beauséjour – Fort Cumberland National Historic Site, Canada, lidar discovered archaeological features related to the siege of the Fort in 1755. Features that could not be distinguished on the ground or through aerial photography were identified by overlaying hillshades of the DEM created with artificial illumination from various angles. Another example is the work at Caracol by Arlen Chase and his wife Diane Zaino Chase. In 2012, lidar was used to search for the legendary city of La Ciudad Blanca or "City of the Monkey God" in the La Mosquitia region of the Honduran jungle. During a seven-day mapping period, evidence was found of man-made structures. In June 2013, the rediscovery of the city of Mahendraparvata was announced. In southern New England, lidar was used to reveal stone walls, building foundations, abandoned roads, and other landscape features obscured in aerial photography by the region's dense forest canopy. In Cambodia, lidar data were used by Damian Evans and Roland Fletcher to reveal anthropogenic changes to the Angkor landscape. In 2012, lidar revealed that the Purépecha settlement of Angamuco in Michoacán, Mexico, had about as many buildings as today's Manhattan, while in 2016 its use in mapping ancient Maya causeways in northern Guatemala revealed 17 elevated roads linking the ancient city of El Mirador to other sites. In 2018, archaeologists using lidar discovered more than 60,000 man-made structures in the Maya Biosphere Reserve, a "major breakthrough" that showed the Maya civilization was much larger than previously thought.

Applications: Autonomous vehicles: Autonomous vehicles may use lidar for obstacle detection and avoidance to navigate safely through environments.
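Gridding of the kind described in the terrestrial mapping passage above also underlies simple obstacle maps for vehicles like these. The following is a minimal sketch of the height-gridding and thresholding steps; the cell size, threshold and sample points are all invented example values.

```python
def height_grid(points, cell=0.5):
    """Bin (x, y, z) points into an XY grid, storing the maximum height
    observed in each cell (one simple choice of stored value)."""
    grid = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        grid[key] = max(grid.get(key, z), z)
    return grid

def binary_map(grid, threshold=0.3):
    """Threshold cell heights into occupied (True) / free (False):
    the 'binary map' step described above."""
    return {key: h > threshold for key, h in grid.items()}

# Invented sample points: mostly flat ground plus one tall object.
pts = [(0.1, 0.2, 0.02), (0.6, 0.1, 0.05), (1.2, 1.1, 1.8), (1.3, 1.0, 1.6)]
print(binary_map(height_grid(pts)))  # the tall object's cell is occupied
```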
The introduction of lidar was a pivotal occurrence and the key enabler behind Stanley, the first autonomous vehicle to successfully complete the DARPA Grand Challenge. Point cloud output from the lidar sensor provides the necessary data for robot software to determine where potential obstacles exist in the environment and where the robot is in relation to those potential obstacles. Singapore's Singapore-MIT Alliance for Research and Technology (SMART) is actively developing technologies for autonomous lidar vehicles. The very first generations of automotive adaptive cruise control systems used only lidar sensors.

Applications: Object detection for transportation systems: In transportation systems, understanding a vehicle and its surrounding environment is essential to ensuring vehicle and passenger safety and to developing electronic systems that deliver driver assistance. Lidar systems play an important role in the safety of transportation systems. Many electronic systems that contribute to driver assistance and vehicle safety, such as Adaptive Cruise Control (ACC), Emergency Brake Assist, and the Anti-lock Braking System (ABS), depend on the detection of a vehicle's environment to act autonomously or semi-autonomously. Lidar mapping and estimation achieve this.

Applications: Basics overview: Current lidar systems use rotating hexagonal mirrors which split the laser beam. The upper three beams are used for vehicles and obstacles ahead, and the lower beams are used to detect lane markings and road features. The major advantage of using lidar is that the spatial structure is obtained, and this data can be fused with other sensors, such as radar, to get a better picture of the vehicle environment in terms of the static and dynamic properties of the objects present in the environment. Conversely, a significant issue with lidar is the difficulty in reconstructing point cloud data in poor weather conditions. In heavy rain, for example, the light pulses emitted from the lidar system are partially reflected off rain droplets, which adds noise to the data, called 'echoes'. Below are various approaches to processing lidar data and using it, along with data from other sensors, through sensor fusion to detect the vehicle environment conditions.

Applications: Obstacle detection and road environment recognition using lidar: This method, proposed by Kun Zhou et al., not only focuses on object detection and tracking but also recognizes lane markings and road features. As mentioned earlier, the lidar systems use rotating hexagonal mirrors that split the laser beam into six beams. The upper three layers are used to detect forward objects such as vehicles and roadside objects. The sensor is made of weather-resistant material. The data detected by lidar are clustered into several segments and tracked by a Kalman filter. Data clustering here is done based on the characteristics of each segment according to an object model, which distinguishes different objects such as vehicles, signboards, etc. These characteristics include the dimensions of the object, among others. The reflectors on the rear edges of vehicles are used to differentiate vehicles from other objects. Object tracking is done using a two-stage Kalman filter, considering the stability of tracking and the accelerated motion of objects. Lidar reflective intensity data is also used for curb detection, by making use of robust regression to deal with occlusions. Road marking is detected using a modified Otsu method, by distinguishing rough and shiny surfaces (a sketch of the standard Otsu threshold appears below).
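The road-marking step above separates shiny (retro-reflective paint) returns from rough (asphalt) returns by intensity. The cited work uses a modified Otsu method; as a point of reference, here is a sketch of the plain, textbook Otsu threshold applied to invented reflectance samples.

```python
def otsu_threshold(values, bins=32):
    """Plain Otsu's method: choose the threshold that maximises the
    between-class variance of a 1-D intensity histogram."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    sum_all = sum((lo + (i + 0.5) * width) * n for i, n in enumerate(hist))
    best, best_sep, w0, sum0 = lo, 0.0, 0, 0.0
    for i, n in enumerate(hist):
        w0 += n
        if w0 in (0, total):      # one class empty: no valid split here
            continue
        sum0 += (lo + (i + 0.5) * width) * n
        m0 = sum0 / w0                          # mean of the low class
        m1 = (sum_all - sum0) / (total - w0)    # mean of the high class
        sep = w0 * (total - w0) * (m0 - m1) ** 2
        if sep > best_sep:
            best_sep, best = sep, lo + (i + 1) * width
    return best

# Invented reflectance samples: dull asphalt (~0.1) vs painted lines (~0.8).
samples = [0.08, 0.1, 0.12, 0.09, 0.11, 0.79, 0.82, 0.85, 0.8]
print(f"threshold ~ {otsu_threshold(samples):.2f}")  # falls between clusters
```

Otsu's method simply picks the threshold that best separates the histogram into two classes; the "modified" variant referred to above adapts this idea to lidar intensity data.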
Applications: Advantages
Roadside reflectors that indicate the lane border are sometimes hidden for various reasons. Therefore, other information is needed to recognize the road border. The lidar used in this method can measure the reflectivity from the object, so with this data the road border can also be recognized. In addition, the use of a sensor with a weather-robust head helps detect objects even in bad weather conditions. Canopy height models computed before and after a flood are a good example: lidar can capture highly detailed canopy height data as well as the road border.

Applications: Lidar measurements help identify the spatial structure of an obstacle. This helps distinguish objects based on size and estimate the impact of driving over them. Lidar systems provide better range and a large field of view, which helps detect obstacles on curves. This is one major advantage over radar systems, which have a narrower field of view. The fusion of lidar measurements with different sensors makes the system robust and useful in real-time applications, since lidar-dependent systems alone cannot estimate dynamic information about the detected object. It has been shown that lidar can be manipulated, such that self-driving cars are tricked into taking evasive action.

Applications: Ecology and conservation
Lidar has also found many applications for mapping natural and managed landscapes such as forests, wetlands, and grasslands. Canopy heights, biomass measurements, and leaf area can all be studied using airborne lidar systems. Similarly, lidar is used by many industries, including energy and railroad companies and departments of transportation, as a faster way of surveying. Topographic maps can also be generated readily from lidar, including for recreational use such as in the production of orienteering maps. Lidar has also been applied to estimate and assess the biodiversity of plants, fungi, and animals. Using southern bull kelp in New Zealand, coastal lidar mapping data has been compared with population genomic evidence to form hypotheses regarding the occurrence and timing of prehistoric earthquake uplift events.

Applications: Forestry
Lidar systems have also been applied to improve forestry management. Measurements are used to take inventory in forest plots as well as to calculate individual tree heights, crown width, and crown diameter. Other statistical analyses use lidar data to estimate total plot information such as canopy volume; mean, minimum, and maximum heights; vegetation cover; biomass; and carbon density. Aerial lidar was used to map the bush fires in Australia in early 2020. The data were processed to view bare earth and to identify healthy and burned vegetation.

Applications: Geology and soil science
High-resolution digital elevation maps generated by airborne and stationary lidar have led to significant advances in geomorphology (the branch of geoscience concerned with the origin and evolution of the Earth's surface topography). Lidar's abilities to detect subtle topographic features such as river terraces, river channel banks, and glacial landforms; to measure the land-surface elevation beneath the vegetation canopy; to better resolve spatial derivatives of elevation; and to detect elevation changes between repeat surveys have enabled many novel studies of the physical and chemical processes that shape landscapes.
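Detecting elevation change between repeat surveys, as mentioned above, amounts to differencing co-registered DEM rasters and masking cells whose change is smaller than the combined vertical error of the two surveys. A minimal sketch with invented grids and an assumed error budget:

```python
import numpy as np

# Two hypothetical 1 m resolution DEM tiles from repeat lidar surveys (metres).
dem_survey_1 = np.array([[101.2, 101.5], [100.9, 101.1]])
dem_survey_2 = np.array([[101.9, 102.4], [101.0, 101.2]])

# Per-cell elevation change; positive values indicate uplift or deposition.
change = dem_survey_2 - dem_survey_1

# Keep only changes exceeding twice the assumed combined vertical error.
vertical_error = 0.15  # metres, illustrative value
significant = np.abs(change) > 2 * vertical_error
print(change)
print(significant)  # flags the two strongly uplifted cells
```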
Applications: In 2005, the Tour Ronde in the Mont Blanc massif became the first high alpine mountain on which lidar was employed to monitor the increasing occurrence of severe rock-fall over large rock faces, attributed to climate change and the degradation of permafrost at high altitude. Lidar is also used in structural geology and geophysics, as a combination between airborne lidar and GNSS, for the detection and study of faults and for measuring uplift. The output of the two technologies can produce extremely accurate elevation models for terrain – models that can even measure ground elevation through trees. This combination was used most famously to find the location of the Seattle Fault in Washington, United States. It was also used to measure uplift at Mount St. Helens, using data from before and after the 2004 uplift. Airborne lidar systems monitor glaciers and have the ability to detect subtle amounts of growth or decline. A satellite-based system, NASA's ICESat, includes a lidar sub-system for this purpose. The NASA Airborne Topographic Mapper is also used extensively to monitor glaciers and perform coastal change analysis.

Applications: The combination is also used by soil scientists while creating a soil survey. The detailed terrain modeling allows soil scientists to see slope changes and landform breaks which indicate patterns in soil spatial relationships.

Applications: Atmosphere
Initially based on ruby lasers, lidar for meteorological applications was constructed shortly after the invention of the laser and represents one of the first applications of laser technology. Lidar technology has since expanded vastly in capability, and lidar systems are used to perform a range of measurements that include profiling clouds, measuring winds, studying aerosols, and quantifying various atmospheric components. Atmospheric components can in turn provide useful information including surface pressure (by measuring the absorption of oxygen or nitrogen), greenhouse gas emissions (carbon dioxide and methane), photosynthesis (carbon dioxide), fires (carbon monoxide), and humidity (water vapor). Atmospheric lidars can be ground-based, airborne, or satellite-based, depending on the type of measurement.

Applications: Atmospheric lidar remote sensing works in two ways – by measuring backscatter from the atmosphere, and by measuring the scattered reflection off the ground (when the lidar is airborne) or another hard surface. Backscatter from the atmosphere directly gives a measure of clouds and aerosols. Other measurements derived from backscatter, such as winds or cirrus ice crystals, require careful selection of the wavelength and/or polarization detected. Doppler lidar and Rayleigh Doppler lidar are used to measure temperature and wind speed along the beam by measuring the frequency of the backscattered light. The Doppler broadening of gases in motion allows the determination of properties via the resulting frequency shift. Scanning lidars, such as NASA's conical-scanning HARLIE, have been used to measure atmospheric wind velocity. The ESA wind mission ADM-Aeolus will be equipped with a Doppler lidar system in order to provide global measurements of vertical wind profiles. A Doppler lidar system was used in the 2008 Summer Olympics to measure wind fields during the yacht competition. Doppler lidar systems are also now beginning to be successfully applied in the renewable energy sector to acquire wind speed, turbulence, wind veer, and wind shear data.
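The wind measurement underlying Doppler lidar reduces to a one-line relation between the observed frequency shift of the backscattered light and the radial velocity of the scatterers; the numbers below are illustrative only.

```python
# Radial wind speed from the Doppler shift of backscattered light:
# v = wavelength * delta_f / 2  (the factor 2 accounts for the round trip).
wavelength = 1.55e-6   # m, a common fibre-laser wind-lidar wavelength
delta_f = 12.9e6       # Hz, measured frequency shift (illustrative value)
v_radial = wavelength * delta_f / 2
print(f"{v_radial:.1f} m/s along the beam")  # about 10.0 m/s
```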
Both pulsed and continuous wave systems are being used. Pulsed systems use signal timing to obtain vertical distance resolution, whereas continuous wave systems rely on detector focusing.

Applications: The term eolics has been proposed to describe the collaborative and interdisciplinary study of wind using computational fluid mechanics simulations and Doppler lidar measurements. The ground reflection of an airborne lidar gives a measure of surface reflectivity (assuming the atmospheric transmittance is well known) at the lidar wavelength; however, the ground reflection is typically used for making absorption measurements of the atmosphere. "Differential absorption lidar" (DIAL) measurements utilize two or more closely spaced (less than 1 nm) wavelengths to factor out surface reflectivity as well as other transmission losses, since these factors are relatively insensitive to wavelength. When tuned to the appropriate absorption lines of a particular gas, DIAL measurements can be used to determine the concentration (mixing ratio) of that particular gas in the atmosphere. This is referred to as an Integrated Path Differential Absorption (IPDA) approach, since it is a measure of the integrated absorption along the entire lidar path. IPDA lidars can be either pulsed or CW, and typically use two or more wavelengths. IPDA lidars have been used for remote sensing of carbon dioxide and methane. Synthetic array lidar allows imaging lidar without the need for an array detector. It can be used for imaging Doppler velocimetry, ultra-fast frame rate imaging (millions of frames per second), as well as for speckle reduction in coherent lidar. An extensive lidar bibliography for atmospheric and hydrospheric applications is given by Grant.

Applications: Law enforcement
Lidar speed guns are used by the police to measure the speed of vehicles for speed limit enforcement purposes. Additionally, lidar is used in forensics to aid in crime scene investigations. Scans of a scene are taken to record exact details of object placement, blood, and other important information for later review. These scans can also be used to determine bullet trajectory in cases of shootings.

Applications: Military
Few military applications are known to be in place, and those are classified (such as the lidar-based speed measurement of the AGM-129 ACM stealth nuclear cruise missile), but a considerable amount of research is underway in the use of lidar for imaging. Higher resolution systems collect enough detail to identify targets, such as tanks. Examples of military applications of lidar include the Airborne Laser Mine Detection System (ALMDS) for counter-mine warfare by Areté Associates. A NATO report (RTO-TR-SET-098) evaluated the potential technologies to do stand-off detection for the discrimination of biological warfare agents. The potential technologies evaluated were Long-Wave Infrared (LWIR), Differential Scattering (DISC), and Ultraviolet Laser Induced Fluorescence (UV-LIF). The report concluded that: "Based upon the results of the lidar systems tested and discussed above, the Task Group recommends that the best option for the near-term (2008–2010) application of stand-off detection systems is UV-LIF; however, in the long-term, other techniques such as stand-off Raman spectroscopy may prove to be useful for identification of biological warfare agents."
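Stepping back to the atmospheric measurements: the DIAL/IPDA retrieval described earlier in this section follows directly from the Beer–Lambert law applied over the round-trip path. A sketch with invented values (the cross-section and powers below are illustrative, not from any specific instrument):

```python
import math

# Integrated-path DIAL: infer mean gas concentration along the path from
# the ratio of ground returns at the on- and off-absorption wavelengths.
P_on = 0.72            # received power at the absorbing wavelength (arb. units)
P_off = 1.00           # received power at the reference wavelength
delta_sigma = 1.2e-27  # m^2, on/off absorption cross-section difference (assumed)
R = 5000.0             # m, one-way path length

# Beer-Lambert over the round trip: P_on / P_off = exp(-2 * N * delta_sigma * R),
# so the mean number density N falls out of the log of the power ratio.
N = math.log(P_off / P_on) / (2 * delta_sigma * R)
print(f"mean number density = {N:.3g} molecules/m^3")
```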
Applications: Short-range compact spectrometric lidar based on Laser-Induced Fluorescence (LIF) would address the presence of bio-threats in aerosol form over critical indoor, semi-enclosed and outdoor venues such as stadiums, subways, and airports. This near real-time capability would enable rapid detection of a bioaerosol release and allow for timely implementation of measures to protect occupants and minimize the extent of contamination. The Long-Range Biological Standoff Detection System (LR-BSDS) was developed for the U.S. Army to provide the earliest possible standoff warning of a biological attack. It is an airborne system carried by helicopter to detect synthetic aerosol clouds containing biological and chemical agents at long range. The LR-BSDS, with a detection range of 30 km or more, was fielded in June 1997. Five lidar units produced by the German company Sick AG were used for short range detection on Stanley, the autonomous car that won the 2005 DARPA Grand Challenge.

Applications: A robotic Boeing AH-6 performed a fully autonomous flight in June 2010, including avoiding obstacles using lidar.

Applications: Mining
The calculation of ore volumes is accomplished by periodic (monthly) scanning in areas of ore removal, then comparing surface data to the previous scan. Lidar sensors may also be used for obstacle detection and avoidance for robotic mining vehicles, such as in the Komatsu Autonomous Haulage System (AHS) used in Rio Tinto's Mine of the Future.

Applications: Physics and astronomy
A worldwide network of observatories uses lidars to measure the distance to reflectors placed on the Moon, allowing the position of the Moon to be measured with millimeter precision and tests of general relativity to be done. MOLA, the Mars Orbiter Laser Altimeter, used a lidar instrument in a Mars-orbiting satellite (the NASA Mars Global Surveyor) to produce a spectacularly precise global topographic survey of the red planet. Laser altimeters have produced global elevation models of Mars, the Moon (Lunar Orbiter Laser Altimeter, LOLA), and Mercury (Mercury Laser Altimeter, MLA), as well as of the asteroid Eros (NEAR–Shoemaker Laser Rangefinder, NLR). Future missions will also include laser altimeter experiments such as the Ganymede Laser Altimeter (GALA) as part of the Jupiter Icy Moons Explorer (JUICE) mission. In September 2008, the NASA Phoenix lander used lidar to detect snow in the atmosphere of Mars. In atmospheric physics, lidar is used as a remote detection instrument to measure densities of certain constituents of the middle and upper atmosphere, such as potassium, sodium, or molecular nitrogen and oxygen. These measurements can be used to calculate temperatures. Lidar can also be used to measure wind speed and to provide information about the vertical distribution of aerosol particles. At the JET nuclear fusion research facility, in the UK near Abingdon, Oxfordshire, lidar Thomson scattering is used to determine electron density and temperature profiles of the plasma.

Applications: Rock mechanics
Lidar has been widely used in rock mechanics for rock mass characterization and slope change detection. Important geomechanical properties of the rock mass can be extracted from the 3-D point clouds obtained by means of lidar, including: discontinuity orientation; discontinuity spacing and RQD; discontinuity aperture; discontinuity persistence; discontinuity roughness; and water infiltration. Some of these properties have been used to assess the geomechanical quality of the rock mass through the RMR index.
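A common way to extract discontinuity orientation from such point clouds is to fit a plane to a clustered patch of points and convert its normal to dip and dip direction. The least-squares sketch below, run on synthetic data, illustrates the general idea rather than any specific published methodology.

```python
import numpy as np

def discontinuity_orientation(points):
    """Estimate dip and dip direction of a joint surface from its points.

    Fits a least-squares plane through an (N, 3) cluster of lidar points
    (x = east, y = north, z = up) and converts the plane normal to the
    dip / dip-direction convention used in rock mechanics.
    """
    centered = points - points.mean(axis=0)
    # The plane normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    if n[2] < 0:          # orient the normal upwards
        n = -n
    dip = np.degrees(np.arccos(n[2]))
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360
    return dip, dip_direction

# A synthetic planar cluster dipping about 30 degrees towards the east.
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(200, 2))
z = -np.tan(np.radians(30)) * xy[:, 0] + rng.normal(0, 0.005, 200)
print(discontinuity_orientation(np.column_stack([xy, z])))
# approximately (30.0, 90.0): a 30 degree dip towards azimuth 090 (east)
```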
Moreover, as the orientations of discontinuities can be extracted using the existing methodologies, it is possible to assess the geomechanical quality of a rock slope through the SMR index. In addition, the comparison of different 3-D point clouds from a slope acquired at different times allows researchers to study the changes produced on the scene during this time interval as a result of rockfalls or other landsliding processes.

Applications: THOR
THOR is a laser designed to measure Earth's atmospheric conditions. The laser enters a cloud cover and measures the thickness of the return halo. The sensor has a fiber-optic aperture with a width of 7.5 inches (19 cm) that is used to measure the return light.

Applications: Robotics
Lidar technology is being used in robotics for the perception of the environment as well as object classification. The ability of lidar technology to provide three-dimensional elevation maps of the terrain, high-precision distance to the ground, and approach velocity can enable safe landing of robotic and crewed vehicles with a high degree of precision. Lidar is also widely used in robotics for simultaneous localization and mapping and is well integrated into robot simulators. Refer to the Military section above for further examples.

Applications: Spaceflight
Lidar is increasingly being utilized for rangefinding and the calculation of orbital elements and relative velocity in proximity operations and stationkeeping of spacecraft. Lidar has also been used for atmospheric studies from space. Short pulses of laser light beamed from a spacecraft can reflect off tiny particles in the atmosphere and back to a telescope aligned with the spacecraft laser. By precisely timing the lidar echo, and by measuring how much laser light is received by the telescope, scientists can accurately determine the location, distribution and nature of the particles. The result is a revolutionary new tool for studying constituents in the atmosphere, from cloud droplets to industrial pollutants, which are difficult to detect by other means. Laser altimetry is used to make digital elevation maps of planets, including the Mars Orbiter Laser Altimeter (MOLA) mapping of Mars, the Lunar Orbiter Laser Altimeter (LOLA) and Lunar Altimeter (LALT) mapping of the Moon, and the Mercury Laser Altimeter (MLA) mapping of Mercury. It is also used to help navigate the helicopter Ingenuity in its record-setting flights over the terrain of Mars.

Applications: Surveying
Airborne lidar sensors are used by companies in the remote sensing field. They can be used to create a DTM (Digital Terrain Model) or DEM (Digital Elevation Model); this is quite a common practice for larger areas, as a plane can acquire 3–4 km (2–2.5 mi) wide swaths in a single flyover. Greater vertical accuracy, below 50 mm (2 in), can be achieved with a lower flyover, even in forests, where lidar is able to give the height of the canopy as well as the ground elevation. Typically, a GNSS receiver configured over a georeferenced control point is needed to link the data in with the WGS (World Geodetic System). Lidar is also in use in hydrographic surveying. Depending upon the clarity of the water, lidar can measure depths from 0.9 to 40 m (3 to 131 ft), with a vertical accuracy of 15 cm (6 in) and a horizontal accuracy of 2.5 m (8 ft).

Applications: Transport
Lidar has been used in the railroad industry to generate asset health reports for asset management and by departments of transportation to assess their road conditions.
CivilMaps.com is a leading company in the field. Lidar has been used in adaptive cruise control (ACC) systems for automobiles. Systems such as those by Siemens, Hella, Ouster and Cepton use a lidar device mounted on the front of the vehicle, such as the bumper, to monitor the distance between the vehicle and any vehicle in front of it. In the event the vehicle in front slows down or is too close, the ACC applies the brakes to slow the vehicle. When the road ahead is clear, the ACC allows the vehicle to accelerate to a speed preset by the driver. Refer to the Military section above for further examples. A lidar-based device, the ceilometer, is used at airports worldwide to measure the height of clouds on runway approach paths.

Applications: Wind farm optimization
Lidar can be used to increase the energy output from wind farms by accurately measuring wind speeds and wind turbulence. Experimental lidar systems can be mounted on the nacelle of a wind turbine or integrated into the rotating spinner to measure oncoming horizontal winds and winds in the wake of the wind turbine, and to proactively adjust blades to protect components and increase power. Lidar is also used to characterise the incident wind resource for comparison with wind turbine power production, verifying the turbine's performance by measuring its power curve. Wind farm optimization can be considered a topic in applied eolics. Another aspect of lidar in the wind-related industry is the use of computational fluid dynamics over lidar-scanned surfaces in order to assess the wind potential, which can be used for optimal wind farm placement.

Applications: Solar photovoltaic deployment optimization
Lidar can also be used to assist planners and developers in optimizing solar photovoltaic systems at the city level by determining appropriate rooftops and shading losses. Recent airborne laser scanning efforts have focused on ways to estimate the amount of solar light hitting vertical building facades, and on incorporating more detailed shading losses by considering the influence of vegetation and larger surrounding terrain.

Applications: Video games
Recent simulation racing games such as rFactor Pro, iRacing, Assetto Corsa and Project CARS increasingly feature race tracks reproduced from 3-D point clouds acquired through lidar surveys, resulting in surfaces replicated with centimeter or millimeter precision in the in-game 3-D environment. The 2017 exploration game Scanner Sombre, by Introversion Software, uses lidar as a fundamental game mechanic.

Applications: In Build the Earth, lidar is used to create accurate renders of terrain in Minecraft to account for any errors (mainly regarding elevation) in the default generation. The process of rendering terrain into Build the Earth is limited by the amount of data available in a region as well as the speed at which files can be converted into block data.

Applications: Other uses
The video for the 2007 song "House of Cards" by Radiohead was believed to be the first use of real-time 3-D laser scanning to record a music video. The range data in the video is not completely from a lidar, as structured light scanning is also used. In 2020, Apple introduced the fourth generation of iPad Pro with a lidar sensor integrated into the rear camera module, especially developed for augmented reality (AR) experiences. The feature was later included in the iPhone 12 Pro lineup and subsequent Pro models.
On Apple devices, lidar enables portrait mode pictures with night mode, speeds up autofocus, and improves accuracy in the Measure app.

Applications: In 2022, Wheel of Fortune started using lidar technology to track when Vanna White moves her hand over the puzzle board to reveal letters. The first episode to use this technology was the Season 40 premiere.

Alternative technologies: Recent developments in structure-from-motion (SFM) technologies allow the delivery of 3-D images and maps based on data extracted from visual and IR photography. The elevation or 3-D data are extracted using multiple parallel passes over the mapped area, yielding both visual-light images and 3-D structure from the same sensor, which is often a specially chosen and calibrated digital camera. Computer stereo vision has shown promise as an alternative to lidar for close-range applications.
**Sorption cooling** Sorption cooling: Sorption cooling is a technology that uses heat to produce cooling by taking advantage of material properties. One substance heats or cools depending on whether it is absorbed or released by another substance. There may be a third substance that is displaced when the first substance is absorbed, and re-absorbed when the first substance is released. The absorption and release are dependent on ambient temperature.
**PC12 cell line** PC12 cell line: PC12 is a cell line derived from a pheochromocytoma of the rat adrenal medulla. It has an embryonic origin in the neural crest and contains a mixture of neuroblastic cells and eosinophilic cells.

Background: This cell line was first cultured by Greene and Tischler in 1976. It was developed in parallel to the adrenal chromaffin cell model because of its extreme versatility for pharmacological manipulation, ease of culture, and the large amount of information available on its proliferation and differentiation. These qualities provide an advantage even though PC12 cells have smaller vesicles and quantal size, holding on average only 1.9×10−19 moles of neurotransmitter per release. The vesicles hold catecholamines, mostly dopamine but also a limited amount of norepinephrine, and release of these neurotransmitters gives rise to spikes due to changes in current, similar to chromaffin cells.

Background: Use of the PC12 cell line has given much information on the function of proteins underlying vesicle fusion. The cell line has been used to understand the role of synaptotagmin in vesicle–cell membrane fusion.

Differentiation: Their embryological origin from neuroblastic cells means they can easily differentiate into neuron-like cells, even though they are not considered adult neurons. Neuron-like means they share properties with neurons; in this case, the release of neurotransmitter from vesicles. PC12 cells stop dividing and terminally differentiate when treated with nerve growth factor or dexamethasone. This makes PC12 cells useful as a model system for neuronal differentiation and neurosecretion.

Differentiation: Nerve growth factor
Treatment of PC12 cells with nerve growth factor creates cells with long processes known as neurite varicosities, which contain small amounts of vesicles. PC12 cells treated for 10–14 days with nerve growth factor showed no release of vesicles from the cell body, which indicates the aggregation of vesicles at the ends of the neurites.

Dexamethasone
Treatment of PC12 cells with dexamethasone differentiates them into chromaffin-like cells. Using patch clamp recording and amperometry, a significant increase was found in quantal size, excitability, and coupling between calcium channels and vesicle release sites, with quantal size increasing from ~2×10−19 to ~6.5×10−19 moles.

Drugs effects on vesicles: Research has shown differences in vesicle size and quantal size depending on treatment with certain drugs. L-DOPA has been shown to increase average quantal size after treatment of only 40–90 minutes. Treatment with amphetamine or reserpine causes a reduction in vesicle content. The heavy metals lead(II), cadmium(II), strontium(II), and barium(II) have been shown to act as agonists of the calcium sensor synaptotagmin. Other organic compounds have been studied using this cell line to understand their effects on PC12 cells. These types of studies show that the PC12 cell line can be a model for past and future neurotoxicological studies.

Drugs effects on dopamine metabolism: PC12 cells have been researched to evaluate the potential for pharmaceutical alteration of dopamine metabolites such as DOPAL, an autotoxin implicated in the pathology of Parkinson's disease. The MAO-B inhibitors rasagiline and selegiline decreased DOPAL formation. Most reversible MAO-A inhibitors failed to decrease DOPAL formation; however, the irreversible MAO-A inhibitor clorgyline decreased DOPAL production.

Research: The PC12 cell line has been used to get more information about diseases of the brain.
It has been used in research on hypoxia, where acute hypoxia induces exocytosis and prolonged hypoxia can induce excessive exocytosis. PC12 cells were used to find which prion protein fragments caused neuronal dysfunction. In addition to the monoamine (dopamine and norepinephrine) pathway, PC12 cells have been reported to express both the kynurenine and serotonin pathways. Support for this assertion includes both RT-PCR evidence for expression of the required enzymes and changes in expression upon treatment with melatonin. A contrary view regarding expression of the serotonin pathway by PC12 cells was expressed by the investigators who first established this cell line.
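For a sense of scale, the quantal sizes quoted earlier in this article translate into molecule counts via Avogadro's number:

```python
# Converting the quoted quantal sizes from moles to molecule counts.
N_A = 6.022e23                       # Avogadro's number, molecules per mole
undifferentiated = 1.9e-19 * N_A     # ~1.1e5 molecules per release
after_dexamethasone = 6.5e-19 * N_A  # ~3.9e5 molecules after differentiation
print(f"{undifferentiated:.2e} vs {after_dexamethasone:.2e} molecules")
```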
**Cask breather** Cask breather: A cask breather (sometimes called a cask aspirator) is a type of demand valve used to serve draught beer. The cask breather enables the empty space created when beer is drawn from a beer cask to be filled with carbon dioxide from an external source. This prevents ambient air from being drawn into the cask, thus extending the life of the beer by preventing oxidation. To avoid carbonation of the beer, the carbon dioxide gas added by a cask breather is at low pressure, unlike the high-pressure gas used to pressurize keg beer. Cask breathers are typically used in conjunction with a pressure regulator to ensure the gas pressure is sufficiently low. Before 2018, the use of cask breathers was opposed by the Campaign for Real Ale (CAMRA), a policy that was changed in April 2018 to allow pubs using cask breathers to be classified as real ale pubs and listed in the Good Beer Guide.
**Industrial design** Industrial design: Industrial design is a process of design applied to physical products that are to be manufactured by mass production. It is the creative act of determining and defining a product's form and features, which takes place in advance of the manufacture or production of the product. Manufacture, by contrast, consists purely of repeated, often automated, replication, while craft-based design is a process or approach in which the form of the product is determined by the product's creator largely concurrently with the act of its production.

All manufactured products are the result of a design process, but the nature of this process can vary. It can be conducted by an individual or a team, and such a team could include people with varied expertise (e.g. designers, engineers, business experts, etc.). It can emphasize intuitive creativity or calculated scientific decision-making, and often emphasizes a mix of both. It can be influenced by factors as varied as materials, production processes, business strategy, and prevailing social, commercial, or aesthetic attitudes. Industrial design, as an applied art, most often focuses on a combination of aesthetics and user-focused considerations, but also often provides solutions for problems of form, function, physical ergonomics, marketing, brand development, sustainability, and sales.

History: Precursors
For several millennia before the onset of industrialization, design, technical expertise, and manufacturing were often done by individual craftsmen, who determined the form of a product at the point of its creation, according to their own manual skill, the requirements of their clients, experience accumulated through their own experimentation, and knowledge passed on to them through training or apprenticeship.

The division of labour that underlies the practice of industrial design did have precedents in the pre-industrial era. The growth of trade in the medieval period led to the emergence of large workshops in cities such as Florence, Venice, Nuremberg, and Bruges, where groups of more specialized craftsmen made objects with common forms through the repetitive duplication of models defined by their shared training and technique. Competitive pressures in the early 16th century led to the emergence in Italy and Germany of pattern books: collections of engravings illustrating decorative forms and motifs which could be applied to a wide range of products, and whose creation took place in advance of their application. The use of drawing to specify how something was to be constructed later was first developed by architects and shipwrights during the Italian Renaissance.

In the 17th century, the growth of artistic patronage in centralized monarchical states such as France led to large government-operated manufacturing operations, epitomized by the Gobelins Manufactory, opened in Paris in 1667 by Louis XIV. Here teams of hundreds of craftsmen, including specialist artists, decorators and engravers, produced sumptuously decorated products ranging from tapestries and furniture to metalwork and coaches, all under the creative supervision of the King's leading artist, Charles Le Brun.
This pattern of large-scale royal patronage was repeated in the court porcelain factories of the early 18th century, such as the Meissen porcelain workshops established in 1709 by the Grand Duke of Saxony, where patterns from a range of sources, including court goldsmiths, sculptors, and engravers, were used as models for the vessels and figurines for which it became famous. As long as reproduction remained craft-based, however, the form and artistic quality of the product remained in the hands of the individual craftsman, and tended to decline as the scale of production increased.

History: Birth of industrial design
The emergence of industrial design is specifically linked to the growth of industrialization and mechanization that began with the industrial revolution in Great Britain in the mid-18th century. The rise of industrial manufacture changed the way objects were made, urbanization changed patterns of consumption, the growth of empires broadened tastes and diversified markets, and the emergence of a wider middle class created demand for fashionable styles from a much larger and more heterogeneous population.

The first use of the term "industrial design" is often attributed to the industrial designer Joseph Claude Sinel in 1919 (although he himself denied this in interviews), but the discipline predates 1919 by at least a decade. Christopher Dresser is considered among the first independent industrial designers. Industrial design's origins lie in the industrialization of consumer products. For instance, the Deutscher Werkbund, founded in 1907 and a precursor to the Bauhaus, was a state-sponsored effort to integrate traditional crafts and industrial mass-production techniques, to put Germany on a competitive footing with Great Britain and the United States.

History: The earliest published use of the term may have been in The Art-Union, 15 September 1840.

History: Dyce's Report to the Board of Trade, on Foreign Schools of Design for Manufactures. Mr. Dyce's official visit to France, Prussia, and Bavaria, for the purpose of examining the state of schools of design in those countries, will be fresh in the recollection of our readers. His report on this subject was ordered to be printed some few months since, on the motion of Mr. Hume; and it is the sum and substance of this Report that we are now about to lay before our own especial portion of the reading public.

The school of St. Peter, at Lyons, was founded about 1750, for the instruction of draftsmen employed in preparing patterns for the silk manufacture. It has been much more successful than the Paris school; and having been disorganized by the revolution, was restored by Napoleon and differently constituted, being then erected into an Academy of Fine Art: to which the study of design for silk manufacture was merely attached as a subordinate branch.

History: It appears that all the students who entered the school commence as if they were intended for artists in the higher sense of the word, and are not expected to decide as to whether they will devote themselves to the Fine Arts or to Industrial Design, until they have completed their exercises in drawing and painting of the figure from the antique and from the living model. It is for this reason, and from the fact that artists for industrial purposes are both well-paid and highly considered (as being well-instructed men), that so many individuals in France engage themselves in both pursuits.
History: The Practical Draughtsman's Book of Industrial Design by Jacques-Eugène Armengaud was printed in 1853. The subtitle of the (translated) work explains that it offers a "complete course of mechanical, engineering, and architectural drawing." The study of these types of technical drawing, according to Armengaud, belongs to the field of industrial design. This work paved the way for a major expansion of drawing education in France, the United Kingdom, and the United States.

History: Robert Lepper helped to establish one of the first industrial design degree programs in the United States, in 1934 at the Carnegie Institute of Technology.

Education: Product design and industrial design overlap in the fields of user interface design, information design, and interaction design. Various schools of industrial design specialize in one of these aspects, ranging from pure art colleges and design schools (product styling), to mixed programs of engineering and design, to related disciplines such as exhibit design and interior design, to schools that almost completely subordinate aesthetic design to concerns of usage and ergonomics, the so-called functionalist school. Except for certain functional areas of overlap between industrial design and engineering design, the former is considered an applied art while the latter is an applied science. Educational programs in the U.S. for engineering require accreditation by the Accreditation Board for Engineering and Technology (ABET), in contrast to programs for industrial design, which are accredited by the National Association of Schools of Art and Design (NASAD). Engineering education requires heavy training in mathematics and the physical sciences, which is not typically required in industrial design education.

Education: Institutions
Most industrial designers complete a design or related program at a vocational school or university. Relevant programs include graphic design, interior design, industrial design, architectural technology, and drafting. Diplomas and degrees in industrial design are offered at vocational schools and universities worldwide, and take two to four years of study. The study results in a Bachelor of Industrial Design (B.I.D.), Bachelor of Science (B.Sc) or Bachelor of Fine Arts (B.F.A.). Afterwards, the bachelor programme can be extended to postgraduate degrees such as a Master of Design or Master of Fine Arts, and to a Master of Arts or Master of Science.

Definition: Industrial design studies function and form, and the connection between product, user, and environment. Generally, industrial design professionals work in small-scale design rather than the overall design of complex systems such as buildings or ships. Industrial designers do not usually design motors, electrical circuits, or the gearing that makes machines move, but they may affect technical aspects through usability design and form relationships. Usually, they work with other professionals such as engineers, who focus on the mechanical and other functional aspects of the product to assure functionality and manufacturability, and with marketers to identify and fulfill customer needs and expectations.

Definition: Design itself is often difficult to describe to non-designers because the meaning accepted by the design community is not made of words. Instead, the definition is created as a result of acquiring a critical framework for the analysis and creation of artifacts.
One of the many accepted (but intentionally unspecific) definitions of design originates from Carnegie Mellon's School of Design: "Everyone designs who devises courses of action aimed at changing existing situations into preferred ones." This applies to new artifacts, whose existing state is undefined, and to previously created artifacts, whose state stands to be improved.

Definition: Industrial design can overlap significantly with engineering design, and in different countries the boundaries of the two concepts can vary, but in general engineering focuses principally on the functionality or utility of products, whereas industrial design focuses principally on the aesthetic and user-interface aspects of products. In many jurisdictions this distinction is effectively defined by the credentials and/or licensure required to engage in the practice of engineering. "Industrial design" as such does not overlap much with the engineering sub-discipline of industrial engineering, except for the latter's sub-specialty of ergonomics.

Definition: At the 29th General Assembly in Gwangju, South Korea, in 2015, the Professional Practice Committee unveiled a renewed definition of industrial design as follows: "Industrial Design is a strategic problem-solving process that drives innovation, builds business success and leads to a better quality of life through innovative products, systems, services and experiences." An extended version of this definition is as follows: "Industrial Design is a strategic problem-solving process that drives innovation, builds business success and leads to a better quality of life through innovative products, systems, services and experiences. Industrial Design bridges the gap between what is and what's possible. It is a trans-disciplinary profession that harnesses creativity to resolve problems and co-create solutions with the intent of making a product, system, service, experience or a business, better. At its heart, Industrial Design provides a more optimistic way of looking at the future by reframing problems as opportunities. It links innovation, technology, research, business and customers to provide new value and competitive advantage across economic, social and environmental spheres.

Definition: Industrial Designers place the human in the centre of the process. They acquire a deep understanding of user needs through empathy and apply a pragmatic, user centric problem solving process to design products, systems, services and experiences. They are strategic stakeholders in the innovation process and are uniquely positioned to bridge varied professional disciplines and business interests. They value the economic, social and environmental impact of their work and their contribution towards co-creating a better quality of life."

Design process: Although the process of design may be considered 'creative,' many analytical processes also take place. In fact, many industrial designers use various design methodologies in their creative process. Some of the processes that are commonly used are user research, sketching, comparative product research, model making, prototyping and testing. These processes are best defined by the industrial designers and/or other team members. Industrial designers often utilize 3D software, computer-aided industrial design and CAD programs to move from concept to production. They may also build a prototype or scaled-down sketch models through a 3D printing process or using other materials such as paper, balsa wood, various foams, or clay for modeling.
They may then use industrial CT scanning to test for interior defects and generate a CAD model. From this, the manufacturing process may be modified to improve the product.

Design process: Product characteristics specified by industrial designers may include the overall form of the object, the location of details with respect to one another, colors, texture, form, and aspects concerning the use of the product. Additionally, they may specify aspects concerning the production process, the choice of materials, and the way the product is presented to the consumer at the point of sale. The inclusion of industrial designers in a product development process may lead to added value by improving usability, lowering production costs, and developing more appealing products.

Design process: Industrial design may also focus on technical concepts, products, and processes. In addition to aesthetics, usability, and ergonomics, it can also encompass engineering, usefulness, market placement, and other concerns, such as psychology, desire, and the emotional attachment of the user. These values and accompanying aspects that form the basis of industrial design can vary between different schools of thought and among practicing designers.

Design process: Third Order Design
An industrial designer effectively balances their design process to include both producers and the market. In Design Issues, Vol. 12, No. 1, published in 1996, Tony Golsby-Smith wrote: "the enlightened industrial designer researches the market and its needs, the producing company and its processes of manufacture, as well as its market aspirations." In packaging design, a designer determines the usability and mode of use of the artifact, such as how the package design would attract a consumer, how the material feels, and how a consumer would access its contents. On the producer side, the designer finds the process by which the package is created and how its contents would fit inside. This positions industrial designers to filter information from their research and determine the best solution. This form of the design process is what industrial designers commonly use today in their profession. Third order design also explains two services industrial designers offer. The first is the direct intervention of the designer in the decision-making process; this applies to a product that is produced only once, where the designer focuses directly on ensuring that the product works for the client. The second is when a designer intervenes indirectly in the process, for example with a product that is produced regularly. Designers are then tasked to create a prototype and describe how the client can adapt the design process.

Design process: Co-Design
Recently, industrial designers have been finding new methods of approaching the design process. Industrial designers tend to work within small teams. This method is called participatory design or co-design. These teams often consist of members from different professions, depending on the project at hand. An industrial designer designing a prosthesis would work with a volunteer patient and with a prosthetist throughout this process. It establishes an environment where the designer and the participants are active members throughout the design process, instead of the designer relying on them only as a primary source of research or reference.

Design process: Fourth Order Design
Third order design only focuses on the intended purpose of the product and its relationship with the producer and the market.
On the other hand, fourth order design builds on previous design processes while acknowledging a broader spectrum of thought surrounding a product. This includes, but is not limited to, socio-politics, economics, sustainability, ecology, and mental health, which Golsby-Smith describes as a field where a solution "exists within a series of connected processes", each with its own intangible factors. He defines these processes as purpose (culture), integration, and system (community). Using culture as a verb, something that continuously changes over time, in turn provides a reason for the integration of a solution. Fourth order design emphasizes that a solution does not exist within a vacuum, questioning its value and the reason why it should exist in the world. Industrial designers do not question this in a philosophical way but rather in terms of its practical implementation. It offers a humanistic approach to the design process, as it portrays people as a key feature of the overall design process.

An example of fourth order design can be found in the Rebuild by Design competition launched by the Obama administration after the devastating damage of Hurricane Sandy. The competition was used to encourage designers to find a viable solution to protect New York from future superstorms. Additionally, it pushed to "overcome existing creative and regulatory barriers by cultivating collaboration between designers, researchers, community members, government officials, and subject matter experts." The Big U was chosen for the award and given permission to continue with the project. This solution consists of placing flood gates within neighborhoods and berms across the southern tip of Manhattan, providing protection up to 16 feet above current sea level. The main concern is that the team behind the project planned for it to guard the city only until 2050, based on rising projections of storm-driven sea levels. This will inevitably contribute to the displacement of lower- and middle-class residents of Manhattan. An alternative proposal called Living Breakwaters was offered: building a "necklace" off the south shore of Staten Island, where most of the erosion damage was done by Hurricane Sandy. The goal of the project is to tackle the revitalization of marine ecologies and ensure the island is able to host plant, animal, and human life. The breakwaters are planned to be made out of concrete boxes to provide lodging for a variety of marine species. This solution identifies its purpose by embracing climate change as an opportunity for a new ecological future, and its system by addressing the ecological and socio-economic disparities around Manhattan. The Big U is currently advancing the construction of a 2.4-mile system located at Stuyvesant Cove and East River Park, which includes extended flood protections, improved waterfront access, focused entry points to better connect the community to the park, and upgraded sewer systems. Moreover, it is also improving open spaces for 110,000 New Yorkers and 28,000 public housing residents. This current state of the Big U shows consideration for the community affected by its construction. It is evident that the team is practicing fourth order design, but the plan does not consider the culture of climate change after the year 2050.

Industrial design rights: Industrial design rights are intellectual property rights that make exclusive the visual design of objects that are not purely utilitarian. A design patent would also be considered under this category.
An industrial design consists of the creation of a shape, configuration, or composition of pattern or color, or a combination of pattern and color in three-dimensional form containing aesthetic value. An industrial design can be a two- or three-dimensional pattern used to produce a product, industrial commodity or handicraft. Under the Hague Agreement Concerning the International Deposit of Industrial Designs, a WIPO-administered treaty, a procedure for international registration exists. An applicant can file a single international deposit with WIPO or with the national office in a country party to the treaty. The design will then be protected in as many member countries of the treaty as desired.

Examples of industrial design: A number of industrial designers have made such a significant impact on culture and daily life that their work is documented by historians of social science. Alvar Aalto, renowned as an architect, also designed a significant number of household items, such as chairs, stools, lamps, a tea-cart, and vases. Raymond Loewy was a prolific American designer who was responsible for the Royal Dutch Shell corporate logo, the original BP logo (in use until 2000), the PRR S1 steam locomotive, the Studebaker Starlight (including the later bulletnose), as well as Schick electric razors, Electrolux refrigerators, short-wave radios, Le Creuset French ovens, and a complete line of modern furniture, among many other items.

Examples of industrial design: Richard Teague, who spent most of his career with the American Motors Corporation, originated the concept of using interchangeable body panels to create a wide array of different vehicles using the same stampings. He was responsible for such unique automotive designs as the Pacer, Gremlin, Matador coupe, Jeep Cherokee, and the complete interior of the Eagle Premier. Milwaukee's Brooks Stevens was best known for his Milwaukee Road Skytop Lounge car and Oscar Mayer Wienermobile designs, among others. Viktor Schreckengost designed bicycles manufactured by Murray for both Murray and Sears, Roebuck and Company. With engineer Ray Spiller, he designed the first truck with a cab-over-engine configuration, a design in use to this day. Schreckengost also founded the Cleveland Institute of Art's school of industrial design.

Examples of industrial design: Oskar Barnack was a German optical engineer, precision mechanic, industrial designer, and the father of 35mm photography. He developed the Leica, which became the hallmark of photography for 50 years and remains a high-water mark for mechanical and optical design. Charles and Ray Eames were most famous for their pioneering furniture designs, such as the Eames Lounge Chair Wood and the Eames Lounge Chair. Other influential designers included Henry Dreyfuss, Eliot Noyes, John Vassos, and Russel Wright.

Examples of industrial design: Dieter Rams is a German industrial designer closely associated with the consumer products company Braun and the functionalist school of industrial design.

Examples of industrial design: German industrial designer Luigi Colani, who designed cars for automobile manufacturers including Fiat, Alfa Romeo, Lancia, Volkswagen, and BMW, was also known to the general public for his unconventional approach to industrial design. His work expanded into numerous areas, ranging from mundane household items, instruments, and furniture to trucks, uniforms, and entire rooms. A grand piano created by Colani, the Pegasus, is manufactured and sold by the Schimmel piano company.
Examples of industrial design: Many of Apple's recent products were designed by Sir Jonathan Ive.
**High-performance teams** High-performance teams: High-performance teams (HPTs) is a concept within organization development referring to teams, organizations, or virtual groups that are highly focused on their goals and that achieve superior business results. High-performance teams outperform all other similar teams, and they outperform expectations given their composition.

Definition: A high-performance team can be defined as a group of people with specific roles and complementary talents and skills, aligned with and committed to a common purpose, who consistently show high levels of collaboration and innovation, produce superior results, and extinguish radical or extreme opinions that could be damaging. The high-performance team is regarded as tight-knit and focused on its goal, with supportive processes that enable any team member to surmount any barriers to achieving the team's goals.

Within the high-performance team, people are highly skilled and are able to interchange their roles. Also, leadership within the team is not vested in a single individual. Instead, the leadership role is taken up by various team members according to the need at that moment in time. High-performance teams have robust methods of resolving conflict efficiently, so that conflict does not become a roadblock to achieving the team's goals. There is a sense of clear focus and intense energy within a high-performance team. Collectively, the team has its own consciousness, indicating shared norms and values within the team. The team feels a strong sense of accountability for achieving its goals. Team members display high levels of mutual trust towards each other. To support team effectiveness within high-performance teams, understanding of individual working styles is important. This can be done by applying Belbin High Performing Teams, the DISC assessment, the Myers-Briggs Type Indicator, and the Herrmann Brain Dominance Instrument to understand the behavior, personalities, and thinking styles of team members.

Definition: Using Tuckman's stages of group development as a basis, a HPT moves through the stages of forming, storming, norming and performing, as with other teams. However, the HPT uses the storming and norming phases effectively to define who they are, what their overall goal is, and how to interact together and resolve conflicts. Therefore, when the HPT reaches the performing phase, they have highly effective behaviours that allow them to overachieve in comparison to regular teams. Later, leadership strategies (coordinating, coaching, empowering, and supporting) were connected to each stage to help facilitate teams to high performance.

Characteristics: Different characteristics have been used to describe high-performance teams.
Despite varying approaches to describing high-performance teams, there is a set of common characteristics that are recognised to lead to success:

Participative leadership – using a democratic leadership style that involves and engages team members
Effective decision-making – using a blend of rational and intuitive decision-making methods, depending on the nature of the decision task
Open and clear communication – ensuring that the team mutually constructs shared meaning, using effective communication methods and channels
Valued diversity – valuing a diversity of experience and background in the team, contributing to a diversity of viewpoints and leading to better decision-making and solutions
Mutual trust – trusting in other team members and trusting in the team as an entity
Managing conflict – dealing with conflict openly and transparently and not allowing grudges to build up and destroy team morale
Clear goals – goals that are developed using SMART criteria; each goal must also have personal meaning and resonance for each team member, building commitment and engagement
Defined roles and responsibilities – each team member understands what they must do (and what they must not do) to demonstrate their commitment to the team and to support team success
Coordinative relationship – the bonds between the team members allow them to seamlessly coordinate their work to achieve both efficiency and effectiveness
Positive atmosphere – an overall team culture that is open, transparent, positive, future-focused and able to deliver success

There are many types of teams in organizations as well. The most traditional type is the manager-led team, in which a manager fills the role of team leader and is responsible for defining the team's goals, methods, and functions; the remaining team members are responsible for carrying out their assigned work under the manager's monitoring. Self-managing or self-regulating teams operate when the "manager" position determines the overall purpose or goal for the team, and the rest of the team is at liberty to manage the methods needed to achieve that goal. Self-directing or self-designing teams determine their own team goals and the methods needed to achieve them; this offers opportunities for innovation and enhances goal commitment and motivation. Finally, self-governing teams are designed with high control and responsibility to execute a task or manage processes; a board of directors is a prime example of a self-governing team.

Given the importance of team-based work in today's economy, much focus has been brought in recent years to using evidence-based organizational research to pinpoint more accurately the defining attributes of high-performance teams. The team at MIT's Human Dynamics Laboratory investigated explicitly observable communication patterns and found energy, engagement, and exploration to be surprisingly powerful predictive indicators of a team's ability to perform. Other researchers focus on what supports group intelligence and allows a team to be smarter than its smartest individuals. A group at MIT's Center for Collective Intelligence, for example, found that teams with more women and teams where team members share "airtime" equally showed higher group intelligence scores. The Fundamental Interpersonal Relations Orientation – Behavior (FIRO-B) questionnaire is a resource that can help an individual identify their personal orientation.
In other words, the behavioral tendency a person shows in different environments, with different people. The theory of personal orientation was initially put forward by Schutz (1958), who claimed personal orientation consists of three fundamental human needs: the need for inclusion, the need for control, and the need for affection. The FIRO-B test helps an individual identify their interpersonal compatibilities with these needs, which can be directly correlated to their performance in a high-performance team. Historical development of concept: First described in detail by the Tavistock Institute, UK, in the 1950s, HPTs gained popular acceptance in the US by the 1980s, with adoption by organizations such as General Electric, Boeing, Digital Equipment Corporation (now HP), and others. In each of these cases, major change was created through the shifting of organizational culture, merging the business goals of the organization with the social needs of the individuals. Often in less than a year, HPTs achieved a quantum leap in business results in all key success dimensions, including customer, employee, shareholder and operational value-added dimensions. Due to its initial success, many organizations attempted to copy HPTs. However, without understanding the underlying dynamics that created them, and without adequate time and resources to develop them, most of these attempts failed. With this failure, HPTs fell out of general favor by 1995, and the term high-performance began to be used in a promotional context, rather than a performance-based one. Recently, some private sector and government sector organizations have placed new focus on HPTs, as new studies and understandings have identified the key processes and team dynamics necessary to create all-around quantum performance improvements. With these new tools, organizations such as Kraft Foods, General Electric, Exelon, and the US government have focused new attention on high-performance teams. Historical development of concept: In Great Britain, high-performance workplaces are defined as those organizations where workers are actively communicated with and involved in the decisions directly affecting them. By regulation of the UK Department of Trade and Industry, these workplaces will be required in most organizations by 2008.
**MusicDNA (file format)** MusicDNA (file format): MusicDNA is a music file format developed by some of the key figures involved in the development of the MP3 format. Design: The format is backwards-compatible with existing MP3 players, and offers the same sound quality. MusicDNA files can contain metadata, such as lyrics, artwork, blog posts and user-created content, which can be updated continually via the internet. MusicDNA is intended to be a competitor to Apple's iTunes LP, which also offers user-added content. MusicDNA was created by Norwegian developer Dagfinn Bach, Chief Executive Officer of Bach Technology. German developer Karlheinz Brandenburg, credited with the invention of the MP3 format, is one of the investors in this project. As of January 2010, no major record labels had adopted the new format, although a number of independent labels had shown an interest. MusicDNA files are likely to be more expensive than current music downloads.
**Fruit waxing** Fruit waxing: Fruit waxing is the process of covering fruits (and, in some cases, vegetables) with artificial waxing material. Natural wax is removed first, usually by washing, followed by a coating of a biological or petroleum derived wax. Potentially allergenic proteins (peanut, soy, dairy, wheat) may be combined with shellac. The primary reasons for waxing are to prevent water loss (making up for the removal in washing of the natural waxes in fruits that have them, particularly citrus but also, for example, apples) and thus slow shrinkage and spoilage, and to improve appearance. Dyes may be added to further enhance appearance, and sometimes fungicides. Fruits were waxed to cause fermentation as early as the 12th or the 13th century; commercial producers began waxing citrus to extend shelf life in the 1920s and 1930s. Aesthetics (consumer preference for shiny fruit) has since become the main reason. In addition to fruit, some vegetables can usefully be waxed, such as cassava. A distinction may be made between storage wax, pack-out wax (for immediate sale), and high-shine wax (for optimum attractiveness). Products that are often waxed: A number of sources list the following as products which may be waxed before shipping to stores: apples, avocados, bell and hot peppers, cantaloupes, cucumbers, eggplant, grapefruit, lemons, limes, mangoes, melons, nectarines, oranges, papayas, parsnips, passion fruit, peaches, pears, pineapple, plums, pumpkins, rutabaga, squash, sweet potatoes, tangerines, tomatoes, turnips and yucca. Materials: The materials used to wax produce depend to some extent on regulations in the country of production and/or export. Both natural waxes (carnauba, shellac, beeswax or resin) and petroleum-based waxes (usually proprietary formulae) are used, and often more than one wax is combined to create the desired properties for the fruit or vegetable being treated. Wax may be applied in a volatile petroleum-based solvent but is now more commonly applied via a water-based emulsion. Blended paraffin waxes applied as an oil or paste are often used on vegetables.
**Masitinib** Masitinib: Masitinib is a tyrosine-kinase inhibitor used in the treatment of mast cell tumours in animals, specifically dogs. Since its introduction in November 2008 it has been distributed under the commercial name Masivet. It has been available in Europe since the second half of 2009. Masitinib has been studied for several human conditions including melanoma, multiple myeloma, gastrointestinal cancer, pancreatic cancer, Alzheimer disease, multiple sclerosis, rheumatoid arthritis, mastocytosis, amyotrophic lateral sclerosis and COVID-19. Mechanism of action: Masitinib is a tyrosine kinase inhibitor, blocking enzymes responsible for the activation of many proteins by signal transduction cascades. Specifically, masitinib targets the receptor tyrosine kinase c-Kit, which is found to be overexpressed or mutated in several types of cancer. Masitinib also has additional targets: it inhibits the platelet-derived growth factor receptor (PDGFR), lymphocyte-specific protein tyrosine kinase (Lck), focal adhesion kinase (FAK) and fibroblast growth factor receptor 3 (FGFR3), as well as CSF1R. Masitinib has been shown to block the replication of SARS-CoV-2 by inhibiting its main protease, 3CLpro. Masitinib showed a >200-fold reduction in viral titers in the lungs and nose of mice infected with SARS-CoV-2. Clinical use: Masitinib was under investigation for the treatment of systemic mastocytosis (Masipro) but approval was denied in the EU in 2017 due to concerns "about the reliability of the study results" and major changes to the study design. European approval of masitinib for treatment of amyotrophic lateral sclerosis (Alsitek) was also refused in 2018.
**Bleb (medicine)** Bleb (medicine): In medicine, a bleb is a blister-like protrusion (often hemispherical) filled with serous fluid. Blebs can form in a number of tissues by different pathologies, including frostbite, and can "appear and disappear within a short time interval". In pathology, pulmonary blebs are small subpleural thin-walled air-containing spaces, not larger than 1-2 cm in diameter. Their walls are less than 1 mm thick. If they rupture, they allow air to escape into the pleural space, resulting in a spontaneous pneumothorax. Bleb (medicine): In ophthalmology, blebs may be formed intentionally in the treatment of glaucoma. In such treatments, functional blebs facilitate the circulation of aqueous humor, the blockage of which will lead to an increase in eye pressure. Use of a collagen matrix wound modulation device such as ologen during glaucoma surgery is known to produce vascular and functional blebs, which are positively correlated with treatment success rate. In the lungs, a bleb is a collection of air within the layers of the visceral pleura. In the breast, a bleb is a milk blister (also known as a blocked nipple pore, nipple blister, or "milk under the skin").
**Ball boy** Ball boy: Ball boys and ball girls, also known as ball kids, are individuals, usually human youths but sometimes dogs, who retrieve and supply balls for players or officials in sports such as association football, American football, bandy, cricket, tennis, baseball and basketball. Though non-essential, their activities help to speed up play by reducing the amount of inactive time. Tennis: Due to the nature of the sport, quick retrieval of loose balls and delivery of the game balls to the servers are necessary for quick play in tennis. In professional tournaments, every court will have a trained squad of ball boys/girls with positionings and movements designed for maximum efficiency, while also not interfering with active play. As well as dealing with the game balls, ball boys/girls may also provide the players with other assistance, such as the delivery of towels and drinks. Tennis: Positions: "Nets" are positioned on either side of the net to retrieve balls that are trapped by the net. Their job is to gather dead balls from the court and feed them to the bases after a point. This is usually done by rolling them alongside the court. "Bases" are located just off each corner (at either end of the baseline, at both ends of the court). Their job is to retrieve balls from the nets and then feed balls to the server. Tennis: Feeding: Feeding is how the ball boys and girls give the balls to the players. At different tournaments, they use different techniques for feeding. At some tournaments, bases have both arms in the air, feeding the balls with one arm; at others, they have one arm in the air, with which they feed the balls, and the other arm behind their back. When feeding the ball, they must also be aware of a player's preference. Most players accept the standard, which is for the ball boy or girl to gently toss the ball (from the position with their arms extended upwards) such that it bounces once and reaches the proper height for the player to catch it easily. Tennis: Hiring: There are various methods for selecting the ball boys and girls for a tournament. In many tournaments, such as Wimbledon and the Queen's Club Championships, they are picked from or apply through schools, and have to go through a number of selection rounds and tests. In some other tournaments, such as the Nottingham Open, Australian Open and the US Open, positions are advertised and there are open try-outs. Tennis: Applicants are required to pass a physical ability assessment. In addition to fitness and stamina, the abilities to concentrate and remain alert are essential. Association football: High-profile association football matches may employ ball boys to avoid long delays when a game ball leaves the pitch and its immediate vicinity. Typically positioned behind advertising boards surrounding the pitch, ball boys will try to be in possession of a spare ball at all times, so that this can be given to the players prior to the loose ball being retrieved. Methods for selecting ball boys vary between grounds. On occasion, away teams have complained about perceived favour of ball boys towards home sides. Association football ball boys hit the headlines in England in a 2013 Capital One Cup match when Eden Hazard, a member of the away team, which was trailing at the time, appeared to kick an apparently time-wasting ball boy, Charlie Morgan, who was lying on top of the ball. Hazard was subsequently sent off for violent conduct and suspended for three games.
It was later revealed that the ball boy had tweeted the day before that he intended to waste time. Baseball: Ball kids are stationed in out-of-play areas near the first and third base foul lines to retrieve out-of-play baseballs. They should not be confused with batboys and batgirls, who remain in or near a team's dugout and the home plate area, primarily to tend to a team's baseball bats. Baseball: As ball kids are stationed on the field, albeit in foul territory, they can occasionally interfere with play; such events are governed by Rule 6.01(d), the main point of which is that if the interference is unintentional, any live ball remains alive and in play. Since 1992, the San Francisco Giants have employed older men as "balldudes", instead of the traditional youths. In 1993, Corinne Mullane became the first "balldudette", and she and her daughter Molly, who began working as a balldudette in the 2000s, have since been included in the National Baseball Hall of Fame as the first mother-daughter ball-retrieving duo in baseball. Cricket: Ball boys are stationed around the field just outside the boundary to retrieve any balls struck into the spectator stands or to the boundary rope. In India, disabled people have not been allowed to serve as ball boys since a 2017 controversy, in which the Board of Control for Cricket in India was criticised over the appearance of a polio-afflicted fan who had been serving as a ball boy for a few years.
**Re-recording (filmmaking)** Re-recording (filmmaking): Re-recording is the process by which the audio track of a film or video production is created; it is distinct from the re-recording of music. As sound elements are mixed and combined, the process necessitates "re-recording" all of the audio elements, such as dialogue, music and sound effects, by the sound re-recording mixer(s) to achieve the desired result: the final soundtrack that the audience hears when the finished film is played.
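At its core, this mixdown is a weighted sum of time-aligned audio elements rendered into one master track. A minimal sketch in Python with NumPy, assuming mono stems at a common sample rate (the placeholder signals and gain values are illustrative assumptions, not from the source):

```python
import numpy as np

# Hypothetical stems: dialogue, music and effects as mono float arrays,
# all at the same sample rate and aligned to the same timeline.
sample_rate = 48_000
t = np.linspace(0.0, 1.0, sample_rate, endpoint=False)
stems = {
    "dialogue": 0.5 * np.sin(2 * np.pi * 220 * t),  # placeholder signals
    "music":    0.3 * np.sin(2 * np.pi * 440 * t),
    "effects":  0.2 * np.sin(2 * np.pi * 880 * t),
}

# Illustrative per-element gains a re-recording mixer might settle on.
gains = {"dialogue": 1.0, "music": 0.6, "effects": 0.8}

# The "re-recording": sum the level-adjusted elements into one final track.
mix = sum(gains[name] * signal for name, signal in stems.items())

# Normalize if the summed signal would clip beyond full scale (|x| > 1).
peak = np.max(np.abs(mix))
if peak > 1.0:
    mix = mix / peak

print(f"final mix: {mix.shape[0]} samples, peak level {np.max(np.abs(mix)):.3f}")
```

A real re-recording stage adds equalization, dynamics and spatial panning per element, but the final combine is still conceptually this summation.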
**Underground hydrogen storage** Underground hydrogen storage: Underground hydrogen storage is the practice of hydrogen storage in caverns, salt domes and depleted oil/gas fields. Large quantities of gaseous hydrogen have been stored in caverns for many years. The storage of large quantities of hydrogen underground in solution-mined salt domes, aquifers, excavated rock caverns, or mines can function as grid energy storage, essential for the hydrogen economy. By using a turboexpander, the electricity needed for compressed storage at 200 bar amounts to 2.1% of the energy content. Chevron Phillips Clemens Terminal: The Chevron Phillips Clemens Terminal in Texas has stored hydrogen since the 1980s in a solution-mined salt cavern. The cavern roof is about 2,800 feet (850 m) underground. The cavern is a cylinder with a diameter of 160 feet (49 m), a height of 1,000 feet (300 m), and a usable hydrogen capacity of 1,066 million cubic feet (30.2×10^6 m3), or 2,520 metric tons (2,480 long tons; 2,780 short tons). Development: Sandia National Laboratories released in 2011 a life-cycle cost analysis framework for geologic storage of hydrogen. The European project Hyunder indicated in 2013 that for the storage of wind and solar energy an additional 85 caverns are required, as demand cannot be covered by pumped-storage hydroelectricity and compressed air energy storage systems. ETI released in 2015 a report, "The role of hydrogen storage in a clean responsive power system", noting that the UK has sufficient salt bed resources to provide tens of GWe. RAG Austria AG finished a hydrogen storage project in a depleted oil and gas field in Austria in 2017, and is conducting its second project, "Underground Sun Conversion". A cavern sized 800 m tall and 50 m in diameter can hold hydrogen equivalent to 150 GWh.
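As a rough plausibility check on that last figure, one can back-calculate the hydrogen inventory it implies. A sketch, assuming the quoted 150 GWh refers to the lower heating value of the stored working gas (hydrogen's LHV of about 33.3 kWh/kg is a standard figure; everything else follows from the geometry given above):

```python
import math

# Cavern geometry from the text: an 800 m tall, 50 m diameter cylinder.
height_m = 800.0
diameter_m = 50.0
volume_m3 = math.pi * (diameter_m / 2) ** 2 * height_m  # ~1.57e6 m^3

# Energy figure from the text and hydrogen's lower heating value.
energy_kwh = 150e6           # 150 GWh
lhv_kwh_per_kg = 33.3        # kWh per kg of hydrogen

hydrogen_mass_kg = energy_kwh / lhv_kwh_per_kg
implied_density = hydrogen_mass_kg / volume_m3  # kg of H2 per m^3 of cavern

print(f"cavern volume: {volume_m3:.3e} m^3")
print(f"hydrogen mass: {hydrogen_mass_kg / 1e6:.1f} kt")
print(f"implied average density: {implied_density:.2f} kg/m^3")
```

The implied average density, roughly 2.9 kg/m³, would correspond to a working pressure swing of only a few tens of bar, which is consistent with salt-cavern practice where a substantial share of the inventory remains in place as cushion gas.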
**Cocamide MEA** Cocamide MEA: Cocamide MEA, or cocamide monoethanolamine, is a solid, off-white to tan compound, often sold in flaked form. The solid melts to yield a pale yellow viscous clear liquid. It is a mixture of fatty acid amides which is produced from the fatty acids in coconut oil when reacted with ethanolamine. Uses: Cocamide MEA and other cocamide ethanolamines such as cocamide DEA are used as foaming agents and nonionic surfactants in shampoos and bath products, and as emulsifying agents in cosmetics.
**Measurement uncertainty** Measurement uncertainty: In metrology, measurement uncertainty is the expression of the statistical dispersion of the values attributed to a measured quantity. All measurements are subject to uncertainty and a measurement result is complete only when it is accompanied by a statement of the associated uncertainty, such as the standard deviation. By international agreement, this uncertainty has a probabilistic basis and reflects incomplete knowledge of the quantity value. It is a non-negative parameter. The measurement uncertainty is often taken as the standard deviation of a state-of-knowledge probability distribution over the possible values that could be attributed to a measured quantity. Relative uncertainty is the measurement uncertainty relative to the magnitude of a particular single choice of value for the measured quantity, when this choice is nonzero. This particular single choice is usually called the measured value, which may be optimal in some well-defined sense (e.g., a mean, median, or mode). Thus, the relative measurement uncertainty is the measurement uncertainty divided by the absolute value of the measured value, when the measured value is not zero. Background: The purpose of measurement is to provide information about a quantity of interest – a measurand. For example, the measurand might be the size of a cylindrical feature, the volume of a vessel, the potential difference between the terminals of a battery, or the mass concentration of lead in a flask of water. Background: No measurement is exact. When a quantity is measured, the outcome depends on the measuring system, the measurement procedure, the skill of the operator, the environment, and other effects. Even if the quantity were to be measured several times, in the same way and in the same circumstances, a different measured value would in general be obtained each time, assuming the measuring system has sufficient resolution to distinguish between the values. Background: The dispersion of the measured values would relate to how well the measurement is performed. Their average would provide an estimate of the true value of the quantity that generally would be more reliable than an individual measured value. The dispersion and the number of measured values would provide information relating to the average value as an estimate of the true value. However, this information would not generally be adequate. Background: The measuring system may provide measured values that are not dispersed about the true value, but about some value offset from it. Take a domestic bathroom scale. Suppose it is not set to show zero when there is nobody on the scale, but to show some value offset from zero. Then, no matter how many times the person's mass were re-measured, the effect of this offset would be inherently present in the average of the values. Background: The "Guide to the Expression of Uncertainty in Measurement" (commonly known as the GUM) is the definitive document on this subject. The GUM has been adopted by all major National Measurement Institutes (NMIs) and by international laboratory accreditation standards such as ISO/IEC 17025, General requirements for the competence of testing and calibration laboratories, which is required for international laboratory accreditation; and is employed in most modern national and international documentary standards on measurement methods and technology. See Joint Committee for Guides in Metrology.
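Returning to the definitions above, here is a minimal numerical sketch using invented repeated readings: the mean serves as the measured value, the standard deviation of the mean as the standard uncertainty (a procedure the article later describes as a Type A evaluation), and their ratio as the relative uncertainty.

```python
import statistics

# Invented repeated readings of the same quantity (e.g., a length in mm).
readings = [10.03, 9.98, 10.01, 10.05, 9.97, 10.02]

n = len(readings)
measured_value = statistics.mean(readings)

s = statistics.stdev(readings)        # sample standard deviation of one reading
u = s / n ** 0.5                      # standard uncertainty of the mean

relative_u = u / abs(measured_value)  # relative uncertainty (measured value nonzero)

print(f"measured value: {measured_value:.3f}")
print(f"standard uncertainty u: {u:.4f}")
print(f"relative uncertainty: {relative_u:.2e}")
```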
Background: Measurement uncertainty has important economic consequences for calibration and measurement activities. In calibration reports, the magnitude of the uncertainty is often taken as an indication of the quality of the laboratory, and smaller uncertainty values generally are of higher value and of higher cost. The American Society of Mechanical Engineers (ASME) has produced a suite of standards addressing various aspects of measurement uncertainty. For example, ASME standards are used to address the role of measurement uncertainty when accepting or rejecting products based on a measurement result and a product specification, provide a simplified approach (relative to the GUM) to the evaluation of dimensional measurement uncertainty, resolve disagreements over the magnitude of the measurement uncertainty statement, or provide guidance on the risks involved in any product acceptance/rejection decision. Indirect measurement: The above discussion concerns the direct measurement of a quantity, which incidentally occurs rarely. For example, the bathroom scale may convert a measured extension of a spring into an estimate of the measurand, the mass of the person on the scale. The particular relationship between extension and mass is determined by the calibration of the scale. A measurement model converts a quantity value into the corresponding value of the measurand. Indirect measurement: There are many types of measurement in practice and therefore many models. A simple measurement model (for example for a scale, where the mass is proportional to the extension of the spring) might be sufficient for everyday domestic use. Alternatively, a more sophisticated model of a weighing, involving additional effects such as air buoyancy, is capable of delivering better results for industrial or scientific purposes. In general there are often several different quantities, for example temperature, humidity and displacement, that contribute to the definition of the measurand, and that need to be measured. Indirect measurement: Correction terms should be included in the measurement model when the conditions of measurement are not exactly as stipulated. These terms correspond to systematic errors. Given an estimate of a correction term, the relevant quantity should be corrected by this estimate. There will be an uncertainty associated with the estimate, even if the estimate is zero, as is often the case. Instances of systematic errors arise in height measurement, when the alignment of the measuring instrument is not perfectly vertical, and the ambient temperature is different from that prescribed. Neither the alignment of the instrument nor the ambient temperature is specified exactly, but information concerning these effects is available, for example the lack of alignment is at most 0.001° and the ambient temperature at the time of measurement differs from that stipulated by at most 2 °C. Indirect measurement: As well as raw data representing measured values, there is another form of data that is frequently needed in a measurement model. Some such data relate to quantities representing physical constants, each of which is known imperfectly. Examples are material constants such as modulus of elasticity and specific heat. There are often other relevant data given in reference books, calibration certificates, etc., regarded as estimates of further quantities. Indirect measurement: The items required by a measurement model to define a measurand are known as input quantities in a measurement model. 
The model is often referred to as a functional relationship. The output quantity in a measurement model is the measurand. Formally, the output quantity, denoted by Y, about which information is required, is often related to input quantities, denoted by X1,…,XN, about which information is available, by a measurement model in the form of Y = f(X1,…,XN), where f is known as the measurement function. A general expression for a measurement model is h(Y, X1,…,XN) = 0. It is taken that a procedure exists for calculating Y given X1,…,XN, and that Y is uniquely defined by this equation. Propagation of distributions: The true values of the input quantities X1,…,XN are unknown. In the GUM approach, X1,…,XN are characterized by probability distributions and treated mathematically as random variables. These distributions describe the respective probabilities of their true values lying in different intervals, and are assigned based on available knowledge concerning X1,…,XN. Sometimes, some or all of X1,…,XN are interrelated and the relevant distributions, which are known as joint, apply to these quantities taken together. Propagation of distributions: Consider estimates x1,…,xN, respectively, of the input quantities X1,…,XN, obtained from certificates and reports, manufacturers' specifications, the analysis of measurement data, and so on. The probability distributions characterizing X1,…,XN are chosen such that the estimates x1,…,xN, respectively, are the expectations of X1,…,XN. Moreover, for the i th input quantity, consider a so-called standard uncertainty, given the symbol u(xi), defined as the standard deviation of the input quantity Xi. This standard uncertainty is said to be associated with the (corresponding) estimate xi. The use of available knowledge to establish a probability distribution to characterize each quantity of interest applies to the Xi and also to Y. In the latter case, the characterizing probability distribution for Y is determined by the measurement model together with the probability distributions for the Xi. The determination of the probability distribution for Y from this information is known as the propagation of distributions. As an illustration, consider a measurement model Y = X1 + X2 in the case where X1 and X2 are each characterized by a (different) rectangular, or uniform, probability distribution: Y then has a symmetric trapezoidal probability distribution. Propagation of distributions: Once the input quantities X1,…,XN have been characterized by appropriate probability distributions, and the measurement model has been developed, the probability distribution for the measurand Y is fully specified in terms of this information. In particular, the expectation of Y is used as the estimate of Y, and the standard deviation of Y as the standard uncertainty associated with this estimate. Propagation of distributions: Often an interval containing Y with a specified probability is required. Such an interval, a coverage interval, can be deduced from the probability distribution for Y. The specified probability is known as the coverage probability. For a given coverage probability, there is more than one coverage interval. The probabilistically symmetric coverage interval is an interval for which the probabilities (summing to one minus the coverage probability) of a value to the left and the right of the interval are equal. The shortest coverage interval is an interval for which the length is least over all coverage intervals having the same coverage probability.
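A minimal Monte Carlo sketch of the propagation of distributions for the Y = X1 + X2 example above (the interval limits are illustrative assumptions): draw from the two rectangular distributions, form Y, and read off an estimate, a standard uncertainty, and a probabilistically symmetric 95 % coverage interval.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_draws = 1_000_000

# Illustrative rectangular (uniform) distributions for the input quantities.
x1 = rng.uniform(low=0.0, high=1.0, size=n_draws)  # X1 ~ U(0, 1)
x2 = rng.uniform(low=0.0, high=3.0, size=n_draws)  # X2 ~ U(0, 3)

# Propagate through the measurement model Y = X1 + X2.
y = x1 + x2  # for unequal uniform widths, Y's distribution is trapezoidal

estimate = y.mean()                    # expectation of Y
standard_uncertainty = y.std(ddof=1)   # standard deviation of Y

# Probabilistically symmetric 95 % coverage interval: 2.5 % left out each side.
low, high = np.percentile(y, [2.5, 97.5])

print(f"estimate y = {estimate:.3f}")
print(f"standard uncertainty u(y) = {standard_uncertainty:.3f}")
print(f"95 % coverage interval: [{low:.3f}, {high:.3f}]")
```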
Propagation of distributions: Prior knowledge about the true value of the output quantity Y can also be considered. For the domestic bathroom scale, the fact that the person's mass is positive, and that it is the mass of a person, rather than that of a motor car, that is being measured, both constitute prior knowledge about the possible values of the measurand in this example. Such additional information can be used to provide a probability distribution for Y that can give a smaller standard deviation for Y and hence a smaller standard uncertainty associated with the estimate of Y. Type A and Type B evaluation of uncertainty: Knowledge about an input quantity Xi is inferred from repeated measured values ("Type A evaluation of uncertainty"), or scientific judgement or other information concerning the possible values of the quantity ("Type B evaluation of uncertainty"). In Type A evaluations of measurement uncertainty, the assumption is often made that the distribution best describing an input quantity X given repeated measured values of it (obtained independently) is a Gaussian distribution. X then has expectation equal to the average measured value and standard deviation equal to the standard deviation of the average. When the uncertainty is evaluated from a small number of measured values (regarded as instances of a quantity characterized by a Gaussian distribution), the corresponding distribution can be taken as a t-distribution. Other considerations apply when the measured values are not obtained independently. For a Type B evaluation of uncertainty, often the only available information is that X lies in a specified interval [a, b]. In such a case, knowledge of the quantity can be characterized by a rectangular probability distribution with limits a and b. If different information were available, a probability distribution consistent with that information would be used. Sensitivity coefficients: Sensitivity coefficients c1,…,cN describe how the estimate y of Y would be influenced by small changes in the estimates x1,…,xN of the input quantities X1,…,XN. For the measurement model Y = f(X1,…,XN), the sensitivity coefficient ci equals the partial derivative of first order of f with respect to Xi, evaluated at X1 = x1, X2 = x2, etc. For a linear measurement model Y = c1X1 + ⋯ + cNXN, with X1,…,XN independent, a change in xi equal to u(xi) would give a change ci u(xi) in y. Sensitivity coefficients: This statement would generally be approximate for measurement models Y = f(X1,…,XN). The relative magnitudes of the terms |ci| u(xi) are useful in assessing the respective contributions from the input quantities to the standard uncertainty u(y) associated with y. The standard uncertainty u(y) associated with the estimate y of the output quantity Y is not given by the sum of the |ci| u(xi); rather, these terms are combined in quadrature, namely by an expression that is generally approximate for measurement models Y = f(X1,…,XN): u²(y) = c1²u²(x1) + ⋯ + cN²u²(xN), which is known as the law of propagation of uncertainty. Sensitivity coefficients: When the input quantities Xi contain dependencies, the above formula is augmented by terms containing covariances, which may increase or decrease u(y). Uncertainty evaluation: The main stages of uncertainty evaluation constitute formulation and calculation, the latter consisting of propagation and summarizing.
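Before detailing those stages, a minimal sketch of the law of propagation of uncertainty stated above may help, for an illustrative model Y = X1·X2 (the estimates and standard uncertainties are invented): the sensitivity coefficients are the first-order partial derivatives, and the per-input contributions combine in quadrature.

```python
import math

# Illustrative model: Y = f(X1, X2) = X1 * X2 (e.g., a resistance times a current).
x1, u_x1 = 10.0, 0.2  # estimate of X1 and its standard uncertainty (invented)
x2, u_x2 = 3.0, 0.1   # estimate of X2 and its standard uncertainty (invented)

# Sensitivity coefficients: first-order partial derivatives of f,
# evaluated at the estimates: df/dX1 = X2, df/dX2 = X1.
c1 = x2
c2 = x1

y = x1 * x2

# Law of propagation of uncertainty (inputs assumed independent):
# u^2(y) = c1^2 u^2(x1) + c2^2 u^2(x2)
u_y = math.sqrt((c1 * u_x1) ** 2 + (c2 * u_x2) ** 2)

print(f"y = {y:.2f}, u(y) = {u_y:.3f}")
# Contributions |ci| u(xi), useful for seeing which input dominates:
print(f"|c1| u(x1) = {abs(c1) * u_x1:.3f}, |c2| u(x2) = {abs(c2) * u_x2:.3f}")
```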
Uncertainty evaluation: The formulation stage constitutes defining the output quantity Y (the measurand), identifying the input quantities on which Y depends, developing a measurement model relating Y to the input quantities, and on the basis of available knowledge, assigning probability distributions — Gaussian, rectangular, etc. — to the input quantities (or a joint probability distribution to those input quantities that are not independent). The calculation stage consists of propagating the probability distributions for the input quantities through the measurement model to obtain the probability distribution for the output quantity Y, and summarizing by using this distribution to obtain the expectation of Y, taken as an estimate y of Y; the standard deviation of Y, taken as the standard uncertainty u(y) associated with y; and a coverage interval containing Y with a specified coverage probability. The propagation stage of uncertainty evaluation is known as the propagation of distributions, various approaches for which are available, including: (1) the GUM uncertainty framework, constituting the application of the law of propagation of uncertainty and the characterization of the output quantity Y by a Gaussian or a t-distribution; (2) analytic methods, in which mathematical analysis is used to derive an algebraic form for the probability distribution for Y; and (3) a Monte Carlo method, in which an approximation to the distribution function for Y is established numerically by making random draws from the probability distributions for the input quantities, and evaluating the model at the resulting values. For any particular uncertainty evaluation problem, approach (1), (2) or (3) (or some other approach) is used, (1) being generally approximate, (2) exact, and (3) providing a solution with a numerical accuracy that can be controlled. Uncertainty evaluation: Models with any number of output quantities: When the measurement model is multivariate, that is, it has any number of output quantities, the above concepts can be extended. The output quantities are now described by a joint probability distribution, the coverage interval becomes a coverage region, the law of propagation of uncertainty has a natural generalization, and a calculation procedure that implements a multivariate Monte Carlo method is available. Uncertainty as an interval: The most common view of measurement uncertainty uses random variables as mathematical models for uncertain quantities and simple probability distributions as sufficient for representing measurement uncertainties. In some situations, however, a mathematical interval might be a better model of uncertainty than a probability distribution. This may include situations involving periodic measurements, binned data values, censoring, detection limits, or plus-minus ranges of measurements where no particular probability distribution seems justified or where one cannot assume that the errors among individual measurements are completely independent. A more robust representation of measurement uncertainty in such cases can be fashioned from intervals. An interval [a, b] is different from a rectangular or uniform probability distribution over the same range in that the latter suggests that the true value lies inside the right half of the range [(a + b)/2, b] with probability one half, and within any subinterval of [a, b] with probability equal to the width of the subinterval divided by b − a.
The interval makes no such claims, except simply that the measurement lies somewhere within the interval. Distributions of such measurement intervals can be summarized as probability boxes and Dempster–Shafer structures over the real numbers, which incorporate both aleatoric and epistemic uncertainties.
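A minimal sketch contrasting the two representations discussed above, for the sum of two uncertain quantities (the ranges are invented): interval arithmetic only asserts bounds, while uniform distributions additionally assign probabilities to subranges.

```python
import numpy as np

# Two uncertain inputs known only to lie within these ranges (invented values).
a1, b1 = 9.8, 10.2
a2, b2 = 4.9, 5.3

# Interval representation: the sum is only claimed to lie within [a1+a2, b1+b2].
interval_sum = (a1 + a2, b1 + b2)
print(f"interval sum: [{interval_sum[0]:.1f}, {interval_sum[1]:.1f}]")

# Probabilistic representation: uniform distributions over the same ranges
# additionally assign a probability to every subinterval of the result.
rng = np.random.default_rng(seed=0)
n = 1_000_000
y = rng.uniform(a1, b1, n) + rng.uniform(a2, b2, n)

# E.g., the probability that the sum lies in the upper half of its range,
# a claim the pure interval representation deliberately does not make.
mid = (interval_sum[0] + interval_sum[1]) / 2
print(f"P(sum in upper half) under the uniform model: {np.mean(y > mid):.3f}")
```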
**Dental anesthesiology** Dental anesthesiology: In the United States, Dental Anesthesiology is the specialty of dentistry that deals with the advanced use of general anesthesia, sedation and pain management to facilitate dental procedures. A Dentist Anesthesiologist is a dentist who has successfully completed an accredited postdoctoral anesthesiology residency program of three or more years duration, in accord with the Commission on Dental Accreditation's Standards for Dental Anesthesiology Residency Programs, and/or meets the eligibility requirements for examination by the American Dental Board of Anesthesiology. United States and Canada: Dental Anesthesiology is a recognized specialty of Dentistry in both the United States and Canada. The American Dental Board of Anesthesiology examines and certifies dentists who complete an accredited program of anesthesiology training in the United States or Canada. Dentists may then apply for Board Certification through the ADBA, which requires ongoing and continual post-graduate education for maintenance. The American Society of Dentist Anesthesiologists is the only organization that represents dentists with three or more years of anesthesiology training. Dental Anesthesiology was the first specialty of dentistry to be recognized by both the American Board of Dental Specialties (http://dentalspecialties.org) and the National Commission on Recognition of Dental Specialties and Certifying Boards.
**X-ray notation** X-ray notation: X-ray notation is a method of labeling atomic orbitals that grew out of X-ray science. Also known as IUPAC notation, it was adopted by the International Union of Pure and Applied Chemistry in 1991 as a simplification of the older Siegbahn notation. In X-ray notation, every principal quantum number is given a letter associated with it (K for n = 1, L for n = 2, M for n = 3, and so on). In many areas of physics and chemistry, atomic orbitals are described with spectroscopic notation (1s, 2s, 2p, 3s, 3p, etc.), but the more traditional X-ray notation is still used with most X-ray spectroscopy techniques, including AES and XPS. Uses: X-ray sources are classified by the type of material and orbital used to generate them. For example, CuKα X-rays are emitted from the K orbital of copper. X-ray absorption is reported according to which orbital absorbed the X-ray photon. In EXAFS and XMCD, the L-edge or the L absorption edge is the point where the L orbital begins to absorb X-rays. Auger peaks are identified with three orbital definitions, for example KL1L2. In this case, K represents the hole that is initially present at the core level, L1 the initial state of the electron that relaxes down into the core level hole, and L2 the initial energy state of the emitted electron.
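The correspondence between the two labeling schemes can be captured in a small lookup table. A sketch for the first few levels (these are the standard IUPAC assignments; the Auger-label helper function is an illustrative addition, not from the source):

```python
# X-ray (IUPAC) label -> spectroscopic orbital with total angular momentum j.
XRAY_TO_SPECTROSCOPIC = {
    "K":  "1s(1/2)",
    "L1": "2s(1/2)", "L2": "2p(1/2)", "L3": "2p(3/2)",
    "M1": "3s(1/2)", "M2": "3p(1/2)", "M3": "3p(3/2)",
    "M4": "3d(3/2)", "M5": "3d(5/2)",
}

def describe_auger(label: str) -> str:
    """Expand a three-orbital Auger label such as 'KL1L2' (illustrative helper)."""
    # Split the label into its three level names: one or two characters each.
    levels, i = [], 0
    while i < len(label):
        name = label[i]
        if i + 1 < len(label) and label[i + 1].isdigit():
            name += label[i + 1]
            i += 1
        levels.append(name)
        i += 1
    core, relax, emit = levels
    return (f"core hole in {core} ({XRAY_TO_SPECTROSCOPIC[core]}), "
            f"filled from {relax} ({XRAY_TO_SPECTROSCOPIC[relax]}), "
            f"electron emitted from {emit} ({XRAY_TO_SPECTROSCOPIC[emit]})")

print(describe_auger("KL1L2"))
```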
**Gender script** Gender script: A gender script is a concept in feminist studies that refers to structures or paths created by societal norms that one is supposed to follow based on the gender assigned to them at birth. The American Psychological Association defines gender script as "a temporally organized, gender-related sequence of events". Gender script is also closely related to the concept of gender roles. Gender scripts have been called a demonstration of the social construction of gender. Concept: Feminist theorist Judith Butler writes that gender should not be considered "a stable identity or locus of agency from which various acts follow" but that gender is an identity shaped by "a stylized repetition of acts." This notion of gender as performative acts relates to gender script, as gender script determines the necessary acts required to be a man or a woman. Butler also notes, "that the gendered body is performative suggests that it has no ontological status apart from the various acts which constitute its reality." That is to say, gender is constructed by these scripted acts and performances society places on its members. Psychologist Jean Malpas writes that children as young as two "have a good grasp of social norms and, in accordance with the developmental stages of gender constancy (Cohen-Kettenis & Pfafflin, 2003), are able to differentiate between a man and a woman, understand how boys and girls are supposed to look and behave, and pinpoint when something is out of line." Authors of the paper "Scripting Sexual Consent: Internalized Traditional Sexual Scripts and Sexual Consent Expectancies Among College Students" assert that "sexual scripts reflect the gendered power differentials in traditional gender roles." Examples: Children's toys are an explicit example of how gender scripts operate: toys designed for girls will be pink and toys designed for boys will be blue. Another example Oost gives is of razors and other shaving products, which tend to be pink or white for women, and darker for men, even though both products work virtually the same. Other technological examples include pink earphones for women, pink computers, and even pink guns. Pinterest is another example. In their article investigating the gender script of the site, scholars Amanda Friz and Robert Gehl demonstrate how specific aspects of the site are geared toward women, especially during the sign-up process. Examples: A specific example that looks at the feminization of an existing product is the process of redesigning cellphones to sell to a female audience. In the attempt to appeal to a female audience, both telephone technologies and design were altered. As mobile phones were first introduced to women in the 1990s, they were marketed as tools for "remote mothering", or as safety devices for traveling. As cellphone use began to gain momentum, the design and marketing of mobile phones for females shifted to include branding the devices as "branded fashion accessories and as status symbols through limited edition haute couture items". Customization in the form of ringtones and wallpapers was deemed to appeal to a female audience, and thus introduced to the mobile market. The Nokia 7270 folding phone was a fusion of functionality, usability and fashion; it featured "chic, interchangeable fabric wraps... allow[ing] you to impulsively change your look as often and as boldly as you please". These "trendy and impulsive" consumption styles were generally regarded as female consumption patterns.
By doing so, Nokia was able to market its mobile phones (technology enthusiasm being usually regarded as a typically male interest) to a larger market. Critique: Gender scripts can be considered problematic because of their binary, exclusionary nature. Those who feel as though they cannot fit themselves into the specific societal requirements assigned to them, like non-binary or trans* individuals, are left either feeling ostracized by society if they do not conform, or conforming and being accepted but left unhappy. Critique: Looking specifically at product and technology design, scholar van Oost notes that many objects are designed for "everybody", with no specific user group in mind. However, some studies have demonstrated that even in these cases, there may be an unconscious bias where designers base their choices on a one-sided, default male user image. This can be due to many factors. One factor could be the result of who is involved in design and engineering. On teams where men are the majority, they may use the I-methodology, where they only see themselves as the intended users. This can create a bias toward male-oriented symbols and interests. This can also happen at the level of user testing if the user testers are all male and nobody considers the user needs of all potential users.
**NBI Knowledgebase** NBI Knowledgebase: NBI is short for the Nanomaterial-Biological Interactions Knowledgebase at Oregon State University, a repository for annotated data on nanomaterials characterization (purity, size, shape, charge, composition, functionalization, agglomeration state), synthesis methods, and nanomaterial-biological interactions (beneficial, benign or deleterious) defined at multiple levels of biological organization (molecular, cellular, organismal). Computational and data mining tools are being developed and incorporated into the NBI to provide a logical framework for species, route, dose, and scenario extrapolations and to identify key data required to predict the biological interactions of nanomaterials. NBI Knowledgebase: Information currently being gained in the emerging field of nanotechnology is extremely diverse, including a multitude of widely varying nanomaterials that are being or will be tested in a broad array of animal systems and in vitro assays. Knowledge of nanomaterial-biological interactions will likely require inclusion and consideration of the entire body of data produced from global research efforts, which will allow the definition of nanomaterial structure-activity relationships. Such mathematical representations can be used to predict nanomaterial properties in the absence of empirical data.
**Gpsim** Gpsim: gpsim is a full system simulator for Microchip PIC microcontrollers originally written by Scott Dattalo. It is distributed under the GNU General Public License. Gpsim: gpsim has been designed for accuracy, covering the entire PIC, from the core to the I/O pins, including the functions of all internal peripherals. This makes it possible to create stimuli, tie them to the I/O pins, and test the PIC the same way you would in the real world. The software can run natively in Windows using gpsimWin32, a port to Windows created by Borut Ražem.
**Lightwood–Albright syndrome** Lightwood–Albright syndrome: Lightwood–Albright syndrome is a neonatal form of renal tubular acidosis. It is characterized by distal renal tubular acidosis that occurs as a result of bicarbonate wasting and the inability to excrete hydrogen ions. By definition, it is a transient process and has no particular disease course. If untreated, it may lead to nephrocalcinosis and failure to thrive. It is also known as Lightwood Syndrome, Butler-Albright Syndrome, or Lightwood-Butler-Albright Syndrome. It is named for Reginald Cyril Lightwood and Fuller Albright. Pathophysiology: There is a genetic component to inheritance. While the disease can manifest without an inciting factor, most diagnoses come from an autosomal dominant and (less commonly) autosomal recessive form of inheritance. Specific genes include SLC4A1 on chromosome 17, ATP6V1B1 on chromosome 2, and ATP6V0A4 on chromosome 7. Nephrons are the functional units of the kidney and are necessary for kidney reabsorption, secretion, and excretion. Mutations in the genes mentioned above contribute to a defect in nephrons. In order for the nephron to remove acid (hydrogen ions) from the body, it must pair it with ammonia to produce ammonium. These mutations lead to the nephron's inability to pair the hydrogen ion with ammonia, preventing hydrogen ions from being excreted through the urine. This, along with the inability to excrete other acids in the body, contributes to metabolic acidosis and renal tubular acidosis. Diagnosis: Lightwood–Albright syndrome is diagnosed with a combination of laboratory and physical exam findings. Therefore, health care providers will look at electrolytes and serum acid-base levels to determine if Lightwood-Albright Syndrome is the proper diagnosis. Laboratory findings can include metabolic acidosis, hyperchloremia, hypercalcemia, and elevated urinary pH. Specifically, the urine will be unable to reach a pH under 5.5 because of its basicity. Clinical findings can include muscle wasting, vomiting, failure to thrive, fatigue, constipation, polyuria, and polydipsia. The following are important differential diagnoses that should be considered by a medical provider before making the diagnosis. Diagnosis: These include periodic paralysis, adenosine deaminase deficiency, Gitelman syndrome, and Lowe syndrome. Treatment: Treatment of Lightwood–Albright syndrome is through transient alkali replacement therapy. This treatment option utilizes alkali as a base to help equilibrate the amount of extra acid that is being retained in the body. Treatment may not be required, as it is a self-limiting process that often resolves on its own. Epidemiology: Lightwood–Albright syndrome affects neonates. Most neonates that are affected by the disease are male. It is possible that older children may have this disease, but with a different clinical picture that includes rickets, bone deformities, growth retardation, and pathological bone fractures.
**Gauged supergravity** Gauged supergravity: Gauged supergravity is a supergravity theory in which some R-symmetry is gauged such that the gravitinos (superpartners of the graviton) are charged with respect to the gauge fields. Consistency of the supersymmetry transformation often requires the presence of a potential for the scalar fields of the theory, or of a cosmological constant if the theory contains no scalar degree of freedom. Gauged supergravity often has anti-de Sitter space as a supersymmetric vacuum. Gauged supergravity: A notable exception is the six-dimensional N=(1,0) gauged supergravity. "Gauged supergravity" in this sense should be contrasted with Yang–Mills–Einstein supergravity, in which some other would-be global symmetries of the theory are gauged and fields other than the gravitinos are charged with respect to the gauge fields.
**Cable railway** Cable railway: A cable railway is a railway that uses a cable, rope or chain to haul trains. It is a specific type of cable transportation. Cable railway: The most common use for a cable railway is to move vehicles on a steeply graded line that is too steep for conventional locomotives to operate on – this form of cable railway is often called an incline or inclined plane. One common form of incline is the funicular – an isolated passenger railway where the cars are permanently attached to the cable. In other forms, the cars attach and detach to the cable at the ends of the cable railway. Some cable railways are not steeply graded – these are often used in quarries to move large numbers of wagons between the quarry and the processing plant. History: The oldest extant cable railway is probably the Reisszug, a private line providing goods access to Hohensalzburg Fortress at Salzburg in Austria. It was first documented in 1515 by Cardinal Matthäus Lang, who became Archbishop of Salzburg. The line originally used wooden rails and a hemp haulage rope and was operated by human or animal power. Today, steel rails, steel cables and an electric motor have taken over, but the line still follows the same route through the castle's fortifications. This line is generally described as the oldest funicular. In the early days of the industrial revolution, several railways used cable haulage in preference to locomotives, especially over steep inclines. The Bowes Railway on the outskirts of Gateshead opened in 1826. Today it is the world's only preserved operational 4 ft 8+1⁄2 in (1,435 mm) standard gauge cable railway system. The Cromford and High Peak Railway opened in 1831 with grades up to 1 in 8. There were nine inclined planes: eight were engine-powered, one was operated by a horse gin. The Middleton Top winding engine house at the summit of Middleton Incline has been preserved and the ancient steam engine inside, once used to haul wagons up, is often demonstrated. The Liverpool and Manchester Railway opened in 1830 with cable haulage down a 1 in 48 grade to the dockside at Liverpool. It was originally designed for cable haulage up and down 1 in 100 grades at Rainhill in the belief that locomotive haulage was impracticable. The Rainhill Trials showed that locomotives could handle 1 in 100 gradients. History: In 1832, the 1 in 17 Bagworth incline opened on the Leicester to Burton upon Trent Line; the incline was bypassed in 1848. On July 20, 1837, the Camden Incline, between Euston and Primrose Hill on the London and Birmingham Railway, opened. The A Pit fishbelly gravitational railway operated between 1831 and 1846 to service the Australian Agricultural Company coal mine. B Pit opened in 1837 and C Pit opened in mid-1842. All were private operations by the same company. Inclines: The majority of inclines were used in industrial settings, predominantly in quarries and mines, or to ship bulk goods over a barrier ridgeline, as when the Allegheny Portage Railroad and the Ashley Planes feeder railway shipped coal from the Pennsylvania Canal/Susquehanna basin via Mountain Top to the Lehigh Canal in the Delaware River Basin. The Welsh slate industry made extensive use of gravity balance and water balance inclines to connect quarry galleries and underground chambers with the mills where slate was processed. Examples of substantial inclines were found in the quarries feeding the Ffestiniog Railway, the Talyllyn Railway and the Corris Railway amongst others.
Inclines: The Ashley Planes were used to transship heavy cargo over the Lehigh-Susquehanna drainage divide for over a hundred years, and became uneconomic only when the average locomotive became heavy and powerful enough to haul long consists at speed past such obstructions, moving yard to yard faster even though the more roundabout route added mileage. Operation: Level tracks are arranged above and below the gradient to allow wagons to be moved onto the incline either singly or in short rakes of two or more. Inclines: On the incline itself the tracks may be interlaced to reduce the width of land needed. This requires use of gauntlet track: either a single track of two rails, or a three-rail track where trains share a common rail; at the centre of the incline there will be a passing track to allow the ascending and descending trains to pass each other. Inclines: Railway workers attach the cable to the upper wagon, and detach it when it arrives at the other end of the incline. Generally, special-purpose safety couplings are used rather than the ordinary wagon couplings. The cables may be guided between the rails on the incline by a series of rollers so that they do not fall across the rail, where they would be damaged by the wheels of the wagons. Inclines: Occasionally inclines were used to move locomotives between levels, but these were comparatively rare as it was normally cheaper to provide a separate fleet of locomotives on either side of the incline, or else to work the level sections with horses. On early railways, cable-worked inclines were also used on some passenger lines. Inclines: Controls: The speed of the wagons was usually controlled by means of a brake that acted on the winding drum at the head of the incline. The incline cable passed round the drum several times to ensure there was sufficient friction for the brake to slow the rotation of the drum – and therefore the wagons – without the cable slipping. At the head of the incline various devices were employed to ensure that wagons did not start to descend before they were attached to the cable. These ranged from simple lumps of rock wedged behind the wagon's wheels to permanently installed chocks that were mechanically synchronized with the drum braking system. At Maenofferen Quarry a system was installed that raised a short section of the rail at the head of the incline to prevent runaways. The operation of an incline was typically controlled by the brakesman positioned at the winding house. A variety of systems were used to communicate with workers at the bottom of the incline, whose job it was to attach and detach the wagons from the incline cable. One of the most common communication methods was a simple electrical bell system. Inclines: Turnouts: Cable railways were often used within quarries to connect working levels. Sometimes a single cable railway would span multiple levels, allowing wagons to be moved between the furthest levels in a single movement. In order to accommodate intermediate levels, turnouts were used to allow wagons to leave and join the cable railway part way along its length. Various methods were used to achieve this. One arrangement used at the Dinorwic Quarry was known as the "Ballast" method. This involved a two track incline with one track reserved for fully loaded wagons and the second used by partially loaded wagons. The line used by the partially loaded wagons was known as the "ballast" track and it had a stop placed on it part way down.
The distance from the top of the incline to the stop was the same as the distance that the fully loaded wagons needed to travel. Empty wagons were hauled up the incline, counterbalanced by the descending ballast wagons. These empty wagons were replaced by fully loaded wagons ready to descend. The descending loaded wagons then returned the ballast wagons to the top of the incline. One of the major inclines at Dinorwic had four parallel tracks, two worked by the ballast method and two as conventional gravity balance. Inclines: Types: Inclines are classified by the power source used to wind the cable. Stationary engine: A stationary engine drives the winding drum that hauls the wagons to the top of the inclined plane and may provide braking for descending loads. Only a single track and cable is required for this type. The stationary engine may be a steam or internal combustion engine, or may be a water wheel. Inclines: Gravity balance: In a gravity balance system two parallel tracks are employed, with ascending trains on one and descending trains on the adjacent track. A single cable is attached to both trains, wound round a winding drum at the top of the incline to provide braking. The weight of the loaded descending cars is used to lift the ascending empties. This form of cable railway can only be used to move loads downhill and requires a wider space than a stationary-engine-driven incline, but has the advantage of not requiring external power, and therefore costs less to operate. Inclines: Trwnc inclines: A variation of the gravity balance incline was the trwnc incline found at slate quarries in north Wales, notably the Dinorwic Quarry and several in Blaenau Ffestiniog. These were worked by gravity, but instead of the wagons running on their own wheels, permanently attached angled wagons were used that had a horizontal platform on which the slate wagons rode. Inclines: Water balance: This is a variant of the gravity balance incline that can be used to move loads uphill. A water tank is attached to the descending train. The tank is filled with water until the combined weight of the filled tank and train is greater than the weight of the loaded train that will be hauled uphill (a simple worked example of this balance condition is sketched at the end of this article). The water is either carried in an additional water wagon attached to the descending train, or is carried underneath a trwnc car on which the empty train sits. This type of incline is especially associated with the Aberllefenni Slate Quarry that supplied the Corris Railway. This form of incline has the advantages of a gravity balance system with the added ability to haul loads uphill; it is only practical where a large supply of water is available at the top of the incline. An example of this type of cable railway is the passenger-carrying Lynton and Lynmouth Cliff Railway. Inclines: Locomotive-hauled: An uncommon form of cable railway uses locomotives, fitted with a winding drum, to power the cable. With the cable or chain attached to the wagons to be drawn, but the drive to the drum disengaged, the locomotive climbs the slope under its own power. When the cable is nearly at its full extent, or when the summit is reached, the locomotive is fastened to the rails and the cable wound in. In a simpler form the cable is attached to a locomotive, usually at the upper end of the incline. The locomotive is driven away from the head of the incline, hauling wagons up the inclined plane. The locomotive itself does not travel on the steeply graded section. An example is at the Amberley Chalk Pits Museum.
This is most commonly used for a temporary incline where setting up the infrastructure of a winding drum and stationary engine is not appropriate. It is similarly employed for recovery operations where derailed rolling stock must be hauled back to the permanent track. Non-inclined cable railways: While the majority of cable railways moved trains over steep inclines, there are examples of cable haulage on railways that did not have steep grades. The Glasgow Subway was cable-hauled from its opening in 1896 until it was converted to electric power in 1935. Hybrid cable railways: A few examples exist of cables being used on conventional railways to assist locomotives on steep grades. The Cowlairs incline was an example of this, with a continuous rope used on this section from 1842 until 1908. The middle section of the Erkrath-Hochdahl Railway in Germany (1841–1926) had an inclined plane where trains were assisted by a rope from a stationary engine and later by a bank engine running on a second track. The height difference was 82 metres over a 2.5 kilometre length (1845–1926). Examples: The Denniston Incline (1879–1967), north of Brunner, New Zealand, was gravity worked. It descended 518 m (1,699 ft) in a track distance of 1,670 m (5,479 ft), separated into two inclines, and during its life carried 13,000,000 t (12,794,685 long tons; 14,330,047 short tons) of coal. The Yosemite Valley Railroad operated a cable railway at Incline, California. The Duquesne Incline in Pittsburgh, PA was completed in 1877, and is 800 ft (244 m) long and 400 ft (122 m) high. The Johnstown Inclined Plane in Johnstown, PA was completed in 1891, following the Great Johnstown Flood of 1889. Dubbed the "World's Steepest Vehicular Inclined Plane", it is 896.5 ft (273.3 m) long and ascends 502.2 ft (153.1 m) from the city valley to the Westmont hilltop at a 70.9 percent grade. The São Paulo Railway in Brazil employed a series of five inclines to connect the port city of Santos to Rio Grande da Serra, rising 2,625 ft (800 m) in seven miles (11 km).
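As a rough illustration of the water balance principle described above, here is a minimal force-balance sketch in Python: the descending side, consisting of the empty train plus the filled tank, must outweigh the ascending loaded train by enough of a margin to overcome friction. All masses, the friction allowance, and the function name are hypothetical values chosen purely for illustration, not data from any documented incline.

```python
# Hypothetical force-balance sketch for a water-balance incline.
# Because both trains hang from the same cable on parallel tracks of
# equal gradient, only the weight difference determines which side moves.

def water_needed(ascending_load_kg: float,
                 descending_empty_kg: float,
                 tank_tare_kg: float,
                 friction_allowance: float = 0.10) -> float:
    """Minimum water mass (kg) so the descending side outweighs the
    ascending side by a margin covering cable and sheave friction."""
    target = ascending_load_kg * (1.0 + friction_allowance)
    deficit = target - (descending_empty_kg + tank_tare_kg)
    return max(deficit, 0.0)

# Example: haul 8 t of loaded wagons uphill against 5 t of empty wagons
# and a 0.5 t tank, allowing 10% extra weight for friction losses.
print(f"{water_needed(8000, 5000, 500):.0f} kg of water")  # -> 3300 kg
```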
**Zileuton** Zileuton: Zileuton (trade name Zyflo) is an orally active inhibitor of 5-lipoxygenase, and thus inhibits the formation of leukotrienes (LTB4, LTC4, LTD4, and LTE4); it is used for the maintenance treatment of asthma. Zileuton was introduced in 1996 by Abbott Laboratories and is now marketed in two formulations by Cornerstone Therapeutics Inc. under the brand names Zyflo and Zyflo CR. The original immediate-release formulation, Zyflo, is taken four times per day. The extended-release formulation, Zyflo CR, is taken twice daily. Zileuton: Although the 600 mg immediate-release tablet (Zyflo) and the extended-release formulation of zileuton (Zyflo CR) are still available, the 300 mg immediate-release tablet was withdrawn from the U.S. market on February 12, 2008. Pharmacotherapy: Indications and dosing Zileuton is indicated for the prophylaxis and chronic treatment of asthma in adults and children 12 years of age and older. Zileuton is not indicated for use in the reversal of bronchospasm in acute asthma attacks. Therapy with zileuton can be continued during acute exacerbations of asthma. Pharmacotherapy: The recommended dose of Zyflo is one 600 mg tablet, four times per day. The tablets may be split in half to make them easier to swallow. The recommended dose of Zyflo CR is two 600 mg extended-release tablets twice daily, within one hour after morning and evening meals, for a daily dose of 2400 mg. Do not split Zyflo CR tablets in half. Pharmacotherapy: Related compounds include montelukast (Singulair) and zafirlukast (Accolate). These two compounds are leukotriene receptor antagonists, which block the action of specific leukotrienes, whereas zileuton inhibits leukotriene formation. Research: Research on mice suggests that zileuton used alone or in combination with imatinib may inhibit chronic myeloid leukemia (CML). It has also been researched in a mouse model of dementia. Pharmacotherapy: Contraindications and warnings The most serious side effect of Zyflo and Zyflo CR is a potential elevation of liver enzymes (in 2% of patients). Therefore, zileuton is contraindicated in patients with active liver disease or persistent hepatic enzyme elevations greater than three times the upper limit of normal. Hepatic function should be assessed prior to initiating Zyflo CR, monthly for the first 3 months, every 2–3 months for the remainder of the first year, and periodically thereafter. Pharmacotherapy: Neuropsychiatric events, including sleep disorders and behavioral changes, may occur with Zyflo and Zyflo CR. Patients should be instructed to notify their healthcare provider if neuropsychiatric events occur while using Zyflo or Zyflo CR. Pharmacotherapy: Zileuton is a weak inhibitor of CYP1A2 and thus has three clinically important drug interactions: it increases theophylline, propranolol, and (R)-warfarin levels. It has been shown to lower theophylline clearance significantly, doubling the AUC and prolonging the half-life by nearly 25%. Because theophylline is related to caffeine (both are methylxanthines, and theophylline is a metabolite of caffeine), caffeine's metabolism and clearance may also be reduced, but there are no drug interaction studies between zileuton and caffeine. Zileuton mainly affects the metabolism and clearance of the R-isomer of warfarin, while the S-isomer is unaffected (because it is metabolized via different enzymes). This can lead to an increase in prothrombin time.
Chemistry: Zileuton is an orally active inhibitor of the enzyme 5-lipoxygenase, which forms leukotrienes, 5-hydroxyeicosatetraenoic acid, and 5-oxo-eicosatetraenoic acid from arachidonic acid. The chemical name of zileuton is (±)-1-(1-Benzo[b]thien-2-ylethyl)-1-hydroxyurea. The molecular formula of zileuton is C11H12N2O2S, with a molecular weight of 236.29. The formulation from the manufacturer is a racemic mixture of R(+) and S(−) enantiomers. Pharmacokinetics: Following oral administration, zileuton is rapidly absorbed with a mean time to peak blood serum concentration of 1.7 hours and an average elimination half-life of 2.5 hours. Blood plasma concentrations are proportional to dose, whereas the absolute bioavailability is unknown. The apparent volume of distribution of zileuton is approximately 1.2 L/kg. Zileuton is 93% bound to plasma proteins, primarily to albumin, with minor binding to alpha-1-acid glycoprotein. Elimination of zileuton is primarily through metabolites in the urine (~95%), with the feces accounting for the next largest amount (~2%). The drug is metabolized by the cytochrome P450 enzymes CYP1A2, 2C9, and 3A4. Adverse effects: The most common adverse reactions reported by patients treated with Zyflo CR were sinusitis (6.5%), nausea (5%), and pharyngolaryngeal pain (5%), vs. 4%, 1.5%, and 4% respectively for placebo. Interactions: Drug interactions Zileuton is a minor substrate of CYP1A2, 2C8/9, and 3A4, and a weak inhibitor of CYP1A2. The drug has been shown to increase the serum concentration or effects of theophylline, propranolol, and warfarin, although a significant increase in prothrombin time is not obvious. It is advised that the doses of each medication be monitored and/or reduced accordingly. Other interactions The avoidance of alcohol is recommended due to an increased risk of CNS depression as well as an increased risk of liver toxicity. In addition, the herbal supplement St. John's wort may decrease the serum levels of zileuton. Overdose/toxicology: Symptoms Human experience of acute overdose with zileuton is limited. A patient in a clinical study took between 6.6 and 9.0 grams of zileuton immediate-release tablets in a single dose. Vomiting was induced and the patient recovered without sequelae. Zileuton is not removed by dialysis. Overdose/toxicology: The oral minimum lethal doses in mice and rats were 500–4,000 and 300–1,000 mg/kg, respectively (providing greater than 3 and 9 times the systemic exposure (AUC) achieved at the maximum recommended human daily oral dose, respectively). In dogs, at an oral dose of 1,000 mg/kg (providing in excess of 12 times the systemic exposure (AUC) achieved at the maximum recommended human daily oral dose) no deaths occurred, but nephritis was reported. Overdose/toxicology: Treatment Should an overdose occur, the patient should be treated symptomatically and supportive measures instituted as required. If indicated, elimination of unabsorbed drug should be achieved by emesis or gastric lavage; usual precautions should be observed to maintain the airway. A Certified Poison Control Center should be consulted for up-to-date information on the management of overdose with Zyflo CR.
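The pharmacokinetic figures quoted above (a volume of distribution of ~1.2 L/kg and an elimination half-life of ~2.5 hours) are enough for a back-of-the-envelope one-compartment decay estimate, sketched below in Python. The bioavailability F is unknown according to the text, so F = 1.0 is a placeholder assumption, and the function and its numbers are illustrative only, not clinical guidance.

```python
import math

def concentration(dose_mg, weight_kg, hours_after_peak,
                  vd_l_per_kg=1.2, half_life_h=2.5, f=1.0):
    """Rough plasma concentration (mg/L) some hours after the peak,
    using a simple one-compartment model with first-order elimination.
    f (bioavailability) is an assumed placeholder; the true value is
    unknown per the text above."""
    c_peak = f * dose_mg / (vd_l_per_kg * weight_kg)   # peak concentration
    k_e = math.log(2) / half_life_h                    # elimination rate constant
    return c_peak * math.exp(-k_e * hours_after_peak)

# A single 600 mg dose in a 70 kg adult, 5 h (two half-lives) after peak:
print(f"{concentration(600, 70, 5):.2f} mg/L")  # ~1.79 mg/L (a quarter of peak)
```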
**MathFest** MathFest: MathFest is a mathematics conference hosted annually in late summer by the Mathematical Association of America. It is known for its dual focus on teaching and research in mathematics, as well as for student participation. MathFest Locations: The 2015 meeting in Washington, D.C. was an extra day long in order to include events to mark the centennial anniversary of the MAA. The 2020 meeting in Philadelphia, PA was cancelled due to the COVID-19 pandemic. The 2021 meeting was held virtually due to the COVID-19 pandemic. Events: MathFest features many annual lectures, such as the Earle Raymond Hedrick Lecture Series, which consists of up to three lectures by the same presenter, on three consecutive days, and the AWM-MAA Falconer Lecture, which is given by a distinguished female mathematician or mathematics educator.
**RUMBA** RUMBA: Rumba is a terminal emulation software program with user interface (UI) modernization properties. Rumba and Rumba+ allow users to connect to legacy systems (typically a mainframe) via desktop, web, and mobile. Rumba provides IT end users with a modern UI, allowing them to bypass green-screen applications. Launched in 1989, Rumba (previously RUMBA) was one of the first Windows-based terminal emulators available. Originally developed by Wall Data, Inc., Rumba was acquired by NetManage and then by Micro Focus.
**Fast break** Fast break: Fast break is an offensive strategy in basketball and handball. In a fast break, a team attempts to move the ball up court and into scoring position as quickly as possible, so that the defense is outnumbered and does not have time to set up. The various styles of the fast break, derived from the original created by Frank Keaney, are seen as the best method of providing action and quick scores. A fast break may result from cherry picking. Description: In a typical fast-break situation, the defending team obtains the ball and passes it to the fastest player, who sets up the fast break. That player (usually the smaller point guard, in the case of basketball) then speed-dribbles the ball up the court with several players trailing on the wings. He then either passes it to another player for quick scoring or takes the shot himself. If contact is made between him and a defender from behind while on a fast break, an unsportsmanlike foul is called. Recognition, speed, ball-handling skills, and decision-making are critical to the success of a fast break. Description: In basketball, fast breaks are often the result of good defensive play such as a steal, obtaining the ball off a block, or a missed shot by the opposing team and a rebound, where the defending team takes possession of the ball and the other team has not adjusted. A fast break can sometimes lead to an alley-oop if there are more offensive players than defenders. In basketball, if the fast break did not lead to a basket and an offensive rebound is obtained and put back quickly, this is called a secondary break. Fly fast break: A fly fast break (also known as a one out fast break, the technical term for the play) is a basketball move in which, after a shot is attempted, the player who is guarding the shooter does not box out or rebound but instead runs down the court looking for a pass from a rebounding teammate for a quick score. Fly fast break: How to play the Fly fast break The coach designates a certain guard or guards to carry out the Fly fast break. This is often the guard that defends the opponents' shooting guard. When the designated opposing guard attempts a shot, the defending guard (referred to as the 'Fly') contests the shot but then sprints down the court to the other team's key. When the defending team obtains the rebound or has to inbound the ball (after a made basket), they throw the ball into the other team's key, knowing that there is a 'Fly' waiting to catch the ball and score. Fly fast break: Strengths Defeats the zone - the other team doesn't have time to set up their zone defense. Removes a rebounder - because the shooter has to defend against the Fly, they are removed from rebounding. Upsets the shooter - because the shooter has to worry about defense, they are less focused on their shooting. Weaknesses Rebounding weakness - the Fly's team is left with a 4-against-5 rebounding ratio, if the shooter stays to rebound. Inbounding - if a shooter scores, the inbounding set-up takes longer and the ball must be thrown a greater distance. Exhausting - the Fly has to sprint on offense, but has to hustle back on defense if the Fly fast break fails. Breaking Down the Fly fast break Breaking down the Fly fast break can be done in two ways: Have a confident shooter who can score and force the defending team to inbound while the shooter hustles back to defend against the Fly. Use non-shooting plays, where the #4 & #5 forwards do the scoring.
Notes The name 'Fly' comes from fly fishing, where the angler's motions resemble those of the basketball player running the Fly fast break.
**Degree of anonymity** Degree of anonymity: In anonymity networks (e.g., Tor, Crowds, Mixmaster, I2P, etc.), it is important to be able to quantitatively measure the anonymity guarantee that the system provides. The degree of anonymity $d$ is a metric that was proposed at the 2002 Privacy Enhancing Technology (PET) conference. Two papers put forth the idea of using entropy as the basis for formally measuring anonymity: "Towards an Information Theoretic Metric for Anonymity" and "Towards Measuring Anonymity". The ideas presented are very similar, with minor differences in the final definition of $d$. Background: Anonymity networks have been developed, and many have introduced methods of proving the anonymity guarantees that are possible. Originally, with simple Chaum mixes and pool mixes, the size of the set of users was seen as the security that the system could provide to a user. This had a number of problems; intuitively, if the network is international then it is unlikely that a message that contains only Urdu came from the United States, and vice versa. Information like this, and methods like the predecessor attack and intersection attack, help an attacker increase the probability that a user sent the message. Background: Example With Pool Mixes As an example, consider the network shown above; here A, B, C and D are users (senders), Q, R, S and T are servers (receivers), the boxes are mixes, and {A,B} ∈ T, {A,B,C} ∈ S and {A,B,C,D} ∈ Q, R, where ∈ denotes the anonymity set. Now, as there are pool mixes, let the cap on the number of incoming messages to wait for before sending be 2; as such, if A, B, or C is communicating with R and S receives a message, then S knows that it must have come from E (as the links between the mixes can only carry 1 message at a time). This is in no way reflected in S's anonymity set, but should be taken into account in the analysis of the network. Degree of Anonymity: The degree of anonymity takes into account the probability associated with each user. It begins by defining the entropy of the system (here is where the papers differ slightly, but only with notation; we will use the notation from [1]): $H(X) := \sum_{i=1}^{N} p_i \lg\left(\frac{1}{p_i}\right)$, where $H(X)$ is the entropy of the network, $N$ is the number of nodes in the network, and $p_i$ is the probability associated with node $i$. The maximal entropy of a network occurs when there is uniform probability associated with each node ($\frac{1}{N}$), and this yields $H_M := \lg(N)$. The degree of anonymity (the papers differ slightly in the definition here: [2] defines a bounded degree where it is compared to $H_M$, and [3] gives an unbounded definition using the entropy directly; we will consider only the bounded case here) is defined as $d := 1 - \frac{H_M - H(X)}{H_M} = \frac{H(X)}{H_M}$. Using this, anonymity systems can be compared and evaluated using a quantitative analysis. Degree of Anonymity: Definition of Attacker These papers also served to give concise definitions of an attacker: Internal/External an internal attacker controls nodes in the network, whereas an external one can only compromise communication channels between nodes. Passive/Active an active attacker can add, remove, and modify any messages, whereas a passive attacker can only listen to the messages. Local/Global a local attacker has access to only part of the network, whereas a global one can access the entire network. Example: d In the papers there are a number of example calculations of $d$; we will walk through some of them here.
Example: Crowds In Crowds there is a global probability of forwarding ($p_f$), which is the probability that a node will forward the message internally instead of routing it to the final destination. Let there be $C$ corrupt nodes and $N$ total nodes. In Crowds the attacker is internal, passive, and local. Trivially $H_M = \lg(N - C)$; the overall entropy $H(X)$ follows from the distribution the attacker can assign to the honest nodes (the observed predecessor of a corrupt node receives probability $1 - p_f \frac{N - C - 1}{N}$, while each of the other $N - C - 1$ honest nodes receives $\frac{p_f}{N}$, contributing terms in $\lg(N/p_f)$), and $d$ is this value divided by $H_M$ [4]. Example: Onion routing In onion routing, let's assume the attacker can exclude a subset of the nodes from the network; then the entropy would easily be $H(X) = \lg(S)$, where $S$ is the size of the subset of non-excluded nodes. Under an attack model where a node can both globally listen to message passing and is a node on the path, this decreases to $H(X) = \lg(L)$, where $L$ is the length of the onion route (this could be larger or smaller than $S$), as there is no attempt in onion routing to remove the correlation between the incoming and outgoing messages. Example: Applications of this metric In 2004, Diaz, Sassaman, and DeWitte presented an analysis [5] of two anonymous remailers using the Serjantov and Danezis metric, showing one of them to provide zero anonymity under certain realistic conditions.
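A minimal sketch of the bounded metric defined above, d = H(X)/H_M: given the probability distribution an attacker can assign over the N possible senders, compute the entropy and normalize by lg(N). The example distributions are invented for illustration.

```python
import math

def degree_of_anonymity(probabilities):
    """Bounded degree of anonymity d = H(X) / H_M for an attacker's
    probability distribution over the N possible senders."""
    assert abs(sum(probabilities) - 1.0) < 1e-9, "must be a distribution"
    h_x = -sum(p * math.log2(p) for p in probabilities if p > 0)  # H(X)
    h_m = math.log2(len(probabilities))                           # H_M = lg(N)
    return h_x / h_m

print(degree_of_anonymity([0.25, 0.25, 0.25, 0.25]))  # 1.0: maximal anonymity
print(degree_of_anonymity([0.7, 0.1, 0.1, 0.1]))      # ~0.678: partly exposed
print(degree_of_anonymity([1.0, 0.0, 0.0, 0.0]))      # 0.0: sender identified
```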
**Yield spread** Yield spread: In finance, the yield spread or credit spread is the difference between the quoted rates of return on two different investments, usually of different credit qualities but similar maturities. It is often an indication of the risk premium for one investment product over another. The phrase is a compound of yield and spread. The "yield spread of X over Y" is generally the annualized percentage yield to maturity (YTM) of financial instrument X minus the YTM of financial instrument Y. There are several measures of yield spread relative to a benchmark yield curve, including interpolated spread (I-spread), zero-volatility spread (Z-spread), and option-adjusted spread (OAS). It is also possible to define a yield spread between two different maturities of otherwise comparable bonds. For example, if a certain bond with a 10-year maturity yields 8% and a comparable bond from the same issuer with a 5-year maturity yields 5%, then the term premium between them may be quoted as 8% – 5% = 3%. Yield spread analysis: Yield spread analysis involves comparing the yield, maturity, liquidity and creditworthiness of two instruments, or of one security relative to a benchmark, and tracking how particular patterns vary over time. Yield spread analysis: When yield spreads widen between bond categories with different credit ratings, all else equal, it implies that the market is factoring more risk of default on the lower-grade bonds. For example, if a risk-free 10-year Treasury note is currently yielding 5% while junk bonds with the same duration are averaging 7%, then the spread between Treasuries and junk bonds is 2%. If that spread widens to 4% (increasing the junk bond yield to 9%), then the market is forecasting a greater risk of default, probably because of weaker economic prospects for the borrowers. A narrowing of yield spreads (between bonds of different risk ratings) implies that the market is factoring in less risk, probably due to an improving economic outlook. Yield spread analysis: The TED spread is one commonly-quoted credit spread. The difference between Baa-rated ten-year corporate bonds and ten-year Treasuries is another commonly-quoted credit spread. Consumer loans: Yield spread can also be an indicator of profitability for a lender providing a loan to an individual borrower. For consumer loans, particularly home mortgages, an important yield spread is the difference between the interest rate actually paid by the borrower on a particular loan and the (lower) interest rate that the borrower's credit would allow that borrower to pay. For example, if a borrower's credit is good enough to qualify for a loan at 5% interest rate but accepts a loan at 6%, then the extra 1% yield spread (with the same credit risk) translates into additional profit for the lender. As a business strategy, lenders typically offer yield spread premiums to brokers who identify borrowers willing to pay higher yield spreads.
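As a minimal illustration of how a quoted spread arises, the sketch below solves each bond's annualized YTM by bisection on the standard pricing equation and subtracts one from the other. The two bonds (a 10-year par Treasury and a discounted corporate of the same coupon and maturity) are invented for illustration.

```python
def ytm(price, face, coupon_rate, years, freq=2, lo=0.0, hi=1.0):
    """Annualized yield to maturity found by bisection; the bond's
    present value is a decreasing function of the yield."""
    c = face * coupon_rate / freq            # coupon per period
    n = years * freq                         # number of periods
    def pv(y):
        r = y / freq
        return sum(c / (1 + r) ** t for t in range(1, n + 1)) + face / (1 + r) ** n
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if pv(mid) > price else (lo, mid)
    return (lo + hi) / 2

treasury = ytm(price=1000, face=1000, coupon_rate=0.05, years=10)   # par: 5.00%
corporate = ytm(price=930, face=1000, coupon_rate=0.05, years=10)   # ~5.92%
print(f"spread: {(corporate - treasury) * 1e4:.0f} bp")             # ~92 bp
```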
**Dependability** Dependability: In systems engineering, dependability is a measure of a system's availability, reliability, maintainability, and in some cases, other characteristics such as durability, safety and security. In real-time computing, dependability is the ability to provide services that can be trusted within a time-period. The service guarantees must hold even when the system is subject to attacks or natural failures. The International Electrotechnical Commission (IEC), via its Technical Committee TC 56, develops and maintains international standards that provide systematic methods and tools for dependability assessment and management of equipment, services, and systems throughout their life cycles. The IFIP Working Group 10.4 on "Dependable Computing and Fault Tolerance" plays a role in synthesizing the technical community's progress in the field and organizes two workshops each year to disseminate the results. Dependability can be broken down into three elements: Attributes - a way to assess the dependability of a system Threats - an understanding of the things that can affect the dependability of a system Means - ways to increase a system's dependability History: Some sources hold that the word was coined in the nineteen-teens in Dodge Brothers automobile print advertising. But the word predates that period, with the Oxford English Dictionary finding its first use in 1901. History: As interest in fault tolerance and system reliability increased in the 1960s and 1970s, dependability came to be a measure of [x] as measures of reliability came to encompass additional measures like safety and integrity. In the early 1980s, Jean-Claude Laprie thus chose dependability as the term to encompass studies of fault tolerance and system reliability without the extension of meaning inherent in reliability. The field of dependability has evolved from these beginnings to be an internationally active field of research fostered by a number of prominent international conferences, notably the International Conference on Dependable Systems and Networks, the International Symposium on Reliable Distributed Systems and the International Symposium on Software Reliability Engineering. History: Traditionally, dependability for a system incorporates availability, reliability, and maintainability, but since the 1980s, safety and security have been added to measures of dependability. Elements of dependability: Attributes Attributes are qualities of a system. These can be assessed to determine its overall dependability using qualitative or quantitative measures. Avizienis et al. define the following dependability attributes: Availability - readiness for correct service Reliability - continuity of correct service Safety - absence of catastrophic consequences on the user(s) and the environment Integrity - absence of improper system alteration Maintainability - ability for easy maintenance (repair). As these definitions suggest, only Availability and Reliability are quantifiable by direct measurements, whilst others are more subjective. For instance, Safety cannot be measured directly via metrics but is a subjective assessment that requires judgmental information to be applied to give a level of confidence, whilst Reliability can be measured as failures over time. Elements of dependability: Confidentiality, i.e. the absence of unauthorized disclosure of information, is also used when addressing security. Security is a composite of Confidentiality, Integrity, and Availability.
Security is sometimes classed as an attribute, but the current view is to aggregate it together with dependability and treat Dependability as a composite term called Dependability and Security. Practically, applying security measures to the appliances of a system generally improves the dependability by limiting the number of externally originated errors. Elements of dependability: Threats Threats are things that can affect a system and cause a drop in Dependability. There are three main terms that must be clearly understood: Fault: A fault (which is usually referred to as a bug for historic reasons) is a defect in a system. The presence of a fault in a system may or may not lead to a failure. For instance, although a system may contain a fault, its input and state conditions may never cause this fault to be executed so that an error occurs; and thus that particular fault never exhibits as a failure. Elements of dependability: Error: An error is a discrepancy between the intended behavior of a system and its actual behavior inside the system boundary. Errors occur at runtime when some part of the system enters an unexpected state due to the activation of a fault. Since errors are generated from invalid states, they are hard to observe without special mechanisms, such as debuggers or debug output to logs. Elements of dependability: Failure: A failure is an instance in time when a system displays behavior that is contrary to its specification. An error may not necessarily cause a failure; for instance, an exception may be thrown by a system but this may be caught and handled using fault tolerance techniques, so the overall operation of the system will conform to the specification. It is important to note that failures are recorded at the system boundary. They are basically errors that have propagated to the system boundary and have become observable. Elements of dependability: Faults, Errors and Failures operate according to a mechanism. This mechanism is sometimes known as a Fault-Error-Failure chain. As a general rule a fault, when activated, can lead to an error (which is an invalid state) and the invalid state generated by an error may lead to another error or a failure (which is an observable deviation from the specified behavior at the system boundary). Once a fault is activated, an error is created. An error may act in the same way as a fault in that it can create further error conditions; therefore an error may propagate multiple times within a system boundary without causing an observable failure. If an error propagates outside the system boundary, a failure is said to occur. A failure is basically the point at which it can be said that a service is failing to meet its specification. Since the output data from one service may be fed into another, a failure in one service may propagate into another service as a fault, so a chain can be formed of the form: Fault leading to Error leading to Failure leading to Error, etc. Elements of dependability: Means Since the mechanism of the Fault-Error-Failure chain is understood, it is possible to construct means to break these chains and thereby increase the dependability of a system. Four means have been identified so far: Prevention Removal Forecasting Tolerance. Fault Prevention deals with preventing faults being introduced into a system. This can be accomplished by use of development methodologies and good implementation techniques. Fault Removal can be sub-divided into two sub-categories: Removal During Development and Removal During Use.
Removal during development requires verification so that faults can be detected and removed before a system is put into production. Once systems have been put into production, a system is needed to record failures and remove them via a maintenance cycle. Fault Forecasting predicts likely faults so that they can be removed or their effects can be circumvented. Fault Tolerance deals with putting mechanisms in place that will allow a system to still deliver the required service in the presence of faults, although that service may be at a degraded level. Dependability means are intended to reduce the number of failures made visible to the end users of a system. Persistence Based on how faults appear or persist, they are classified as: Transient: They appear without apparent cause and disappear again without apparent cause Intermittent: They appear multiple times, possibly without a discernible pattern, and disappear on their own Permanent: Once they appear, they do not get resolved on their own Dependability of information systems and survivability: Some works on dependability use structured information systems, e.g. with SOA, to introduce the attribute survivability, thus taking into account the degraded services that an information system sustains or resumes after a non-maskable failure. The flexibility of current frameworks encourages system architects to enable reconfiguration mechanisms that refocus the available, safe resources to support the most critical services rather than over-provisioning to build a failure-proof system. With the generalisation of networked information systems, accessibility was introduced to give greater importance to users' experience. To take into account the level of performance, the measurement of performability is defined as "quantifying how well the object system performs in the presence of faults over a specified period of time".
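The fault-error-failure chain and the fault-tolerance "means" are easy to see in miniature. In the hypothetical sketch below, a bug (fault) lies dormant until a particular input activates it into an error; a tolerance wrapper catches the error inside the system boundary so no failure becomes observable to the caller, albeit with a degraded (default) result.

```python
def average(values):
    # Fault: no guard for an empty list. The fault is dormant until
    # the input [] activates it, producing an error at runtime.
    return sum(values) / len(values)

def average_with_tolerance(values, default=0.0):
    """Fault tolerance: the error is detected and handled inside the
    system boundary, so no failure (observable deviation from the
    specification) reaches the caller; service degrades to a default."""
    try:
        return average(values)
    except ZeroDivisionError:   # error detected before it propagates
        return default          # degraded but specified service

print(average_with_tolerance([2, 4, 6]))  # 4.0: normal service
print(average_with_tolerance([]))         # 0.0: fault activated, chain broken
# average([]) would instead let the error cross the boundary: a failure.
```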
**Changeup** Changeup: A changeup is a type of pitch in baseball and fastpitch softball. Changeup: The changeup is a staple off-speed pitch often used in a pitcher's arsenal, usually thrown to look like a fastball but arriving much more slowly at the plate. Its reduced speed coupled with its deceptive delivery is meant to confuse the batter's timing. It is meant to be thrown like a fastball but held farther back in the hand, which makes it leave the hand more slowly while still retaining the look of a fastball. A changeup is generally thrown 8–15 miles per hour slower than a fastball. If thrown correctly, the changeup will confuse the batter because the human eye cannot discern that the ball is coming significantly slower until it is around 30 feet from the plate. For example, a batter swings at the oncoming ball as if it were a 90 mph fastball, but instead the ball is coming in at 75 mph, meaning they will be swinging too early to hit the ball well (also known as being "way out in front"). Changeup: Other names include a change-of-pace or a change. In addition, before at least the second half of the twentieth century, the term "slow ball" was used to denote pitches that were not a fastball or breaking ball, which almost always meant a type of changeup. Therefore, the terms slow ball and changeup could be used interchangeably. The changeup is usually, but not always, pitched faster than a curveball and at about the same speed as a slider. The changeup is analogous to the slower ball in cricket. Delivery: The changeup is thrown with the same arm action as a fastball, but at a lower speed due to the pitcher holding the ball in a special grip. Former pitcher and pitching coach Leo Mazzone stated that when a pitcher throws his best fastball, he puts more in it; the changeup is such that one throws something other than his best fastball. By having this mindset, the pitch will have less velocity on it in addition to the change in grips. This difference from what is expected by the arm action and the velocity can confuse the batter into swinging the bat far too early and thus receiving a strike, or not swinging at all. Should a batter be fooled on the timing of the pitch and still make contact, it will cause a foul ball or the ball being put into play weakly, usually resulting in an out. In addition to the unexpectedly slow velocity, the changeup can also possess a significant amount of movement, which can bewilder the batter even further. The very best changeups utilize both deception and movement. Popularity: Since the rise of Pedro Martínez, a Dominican pitcher whose changeup was one of the tools that led to his three Cy Young Awards, the changeup has become increasingly popular in the Dominican Republic. Dominican pitchers including Edinson Vólquez, Michael Ynoa, and Ervin Santana are all known to have developed effective changeups in the Dominican Republic after Martínez's success with the pitch. Probably the most famous changeup thrower of the last 30 years, Atlanta Braves southpaw Tom Glavine utilized a two-seam changeup as his number one pitch on the way to winning two Cy Young Awards, a World Series MVP, and 305 wins in a celebrated Hall of Fame career. Hall of Fame reliever Trevor Hoffman had one of the best changeups in his prime and used it to record 601 saves. Popularity: In the 2010s, some of the game's best pitchers came to rely heavily on the changeup.
A 2013 article published by Sports Illustrated noted that star starting pitchers Justin Verlander, Félix Hernández, Stephen Strasburg, David Price, and Max Scherzer revolutionized the pitch and used it abundantly in their arsenals. In addition to its effectiveness on the field, according to Fox Sports, changeups may also reduce the number of injuries suffered by a pitcher. Variations: There are several variations of changeups, which are generated by using different grips on the ball during the release of the pitch. Variations: The circle changeup is one well-known grip. The pitcher forms a circle with the index finger and thumb and lays the middle and ring fingers across the seams of the ball. By pronating the wrist upon release, the pitcher can make the pitch break in the same direction as a screwball. More or less break will result from the pitcher's arm slot. Pedro Martínez used this pitch throughout his career to great effect, and many considered it to be his best pitch. The most common type is the straight changeup. The ball is held with three fingers (instead of the usual two) and closer to the palm, to kill some of the speed generated by the wrist and fingers. This pitch generally breaks downward slightly, though its motion does not differ greatly from a two-seam fastball. Variations: Other variations include the palmball, vulcan changeup and fosh. The split-finger fastball and forkball are used by some pitchers as a type of changeup, depending on velocity.
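The timing claim above (a swing timed for 90 mph meeting a 75 mph changeup) is easy to quantify. The sketch below ignores release extension and air drag and simply divides the regulation 60.5 ft rubber-to-plate distance by each constant speed; the numbers are illustrative.

```python
MPH_TO_FTS = 5280 / 3600   # 1 mph = 1.4667 ft/s

def flight_time(mph, distance_ft=60.5):
    """Time (s) for a pitch at constant speed to cover the distance."""
    return distance_ft / (mph * MPH_TO_FTS)

fastball, changeup = flight_time(90), flight_time(75)
print(f"fastball {fastball:.3f} s, changeup {changeup:.3f} s, "
      f"difference {(changeup - fastball) * 1000:.0f} ms")
# -> roughly 0.458 s vs 0.550 s: ~92 ms extra, which is why a swing
# timed to the fastball ends up "way out in front" of the changeup.
```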
**LEAPS (finance)** LEAPS (finance): In finance, Long-term Equity AnticiPation Securities (LEAPS) are derivatives that track the price of an underlying financial instrument (stocks or indices). They are option contracts with a much longer time to expiry than standard options. According to the Options Industry Council, the educational arm of the Options Clearing Corporation, LEAPS are available on stocks and indexes that have an average daily trading volume of at least 1000 contracts. As with standard options, LEAPS are available in two forms, calls and puts. LEAPS (finance): Options were originally created with expiry cycles of 3, 6, and 9 months, with no option term lasting more than a year. Options of this form, for such terms, still constitute the vast majority of options activity. LEAPS were created relatively recently and typically extend for terms of 2 years out. Equity LEAPS typically expire in January. For example, if today were December 2020, one could buy a Microsoft option that would expire in January of 2021, 2022, or 2023. The latter two are LEAPS. In practice, LEAPS behave and are traded just like standard options. LEAPS (finance): When LEAPS were first introduced in 1990, they were derivative instruments solely for stocks; however, more recently, equivalent instruments for indices have become available. These are also referred to as LEAPS. Applications: LEAPS are often used as a risk reduction tool by investors. For example, in an article in Stocks, Futures and Options Magazine, Dan Haugh of PTI Securities & Futures suggests that stock investors can manage risk and price protection by considering the purchase of an exchange-traded fund (ETF) and "...buying put protection on that ETF with LEAPS." In this example, risk is reduced when an investor in stock or ETFs buys enough LEAPS put options to protect all of the shares they own. LEAPS act like an insurance policy; it is possible to reduce the risk of loss to nothing but the purchase price of the LEAPS itself. Applications: An investor can also buy a LEAPS call, giving them a long time (potentially more than one or two years) to profit if the underlying stock or ETF rises in price. LEAPS are identical to standard options in how the investor gains or loses when trading them.
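A minimal sketch of the protective-put idea described above: holding ETF shares together with a LEAPS put caps the downside at the strike (less the premium paid) while keeping the upside. Purchase price, strike, and premium below are invented for illustration.

```python
def protected_pnl(etf_at_expiry, etf_cost=100.0, strike=95.0, premium=6.0):
    """P/L per share of an ETF position hedged with one LEAPS put,
    both held to the option's expiry."""
    put_payoff = max(strike - etf_at_expiry, 0.0)   # put pays below strike
    return (etf_at_expiry - etf_cost) + put_payoff - premium

for price in (60, 80, 95, 110, 130):
    print(f"ETF at {price}: P/L {protected_pnl(price):+.0f} per share")
# Below the 95 strike the loss is pinned at -11 (the 5 points between
# cost and strike plus the 6-point premium); above it, upside is kept
# minus the premium, like an insurance deductible.
```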
**Inositol-3-phosphate synthase** Inositol-3-phosphate synthase: In enzymology, an inositol-3-phosphate synthase (EC 5.5.1.4) is an enzyme that catalyzes the chemical reaction D-glucose 6-phosphate ⇌ 1D-myo-inositol 3-phosphate. Hence, this enzyme has one substrate, D-glucose 6-phosphate, and one product, 1D-myo-inositol 3-phosphate. This enzyme belongs to the family of isomerases, specifically the class of intramolecular lyases. The systematic name of this enzyme class is 1D-myo-inositol-3-phosphate lyase (isomerizing). Other names in common use include myo-inositol-1-phosphate synthase, D-glucose 6-phosphate cycloaldolase, inositol 1-phosphate synthatase, glucose 6-phosphate cyclase, inositol 1-phosphate synthetase, glucose-6-phosphate inositol monophosphate cycloaldolase, glucocycloaldolase, and 1L-myo-inositol-1-phosphate lyase (isomerizing). Inositol-3-phosphate synthase: This enzyme participates in streptomycin biosynthesis and inositol phosphate metabolism. It employs one cofactor, NAD+. The reaction this enzyme catalyses represents the first committed step in the production of all inositol-containing compounds, including phospholipids, either directly or by salvage. The enzyme exists in a cytoplasmic form in a wide range of plants, animals, and fungi. It has also been detected in several bacteria, and a chloroplast form is observed in algae and higher plants. Inositol phosphates play an important role in signal transduction. Inositol-3-phosphate synthase: In Saccharomyces cerevisiae (baker's yeast), the transcriptional regulation of the INO1 gene encoding inositol-3-phosphate synthase has been studied in detail, and its expression is sensitive to the availability of phospholipid precursors as well as growth phase. The regulation of the structural gene encoding 1L-myo-inositol-1-phosphate synthase has also been analyzed at the transcriptional level in the aquatic angiosperm Spirodela polyrrhiza (giant duckweed) and the halophyte Mesembryanthemum crystallinum (common ice plant). In prokaryotes, myo-D-inositol phosphate synthase was discovered by Bachhawat and Mande in 1999 (reported in the Journal of Molecular Biology). The existence of inositol in prokaryotes is not extensive, but the discovery of this enzyme, first in Mycobacterium tuberculosis, spurred efforts to find its inhibitors. Structural studies: As of late 2007, 12 structures have been solved for this class of enzymes, with PDB accession codes 1GR0, 1JKF, 1JKI, 1LA2, 1P1F, 1P1H, 1P1I, 1P1J, 1P1K, 1RM0, 1VJP, and 1VKO.
**Training corset** Training corset: A training corset is generally a corset used in body modification. A training corset may be used for orthopedic reasons (such as to correct a crooked spine) or for cosmetic reasons (to achieve a smaller waistline, commonly called waist training or, in more extreme cases, tightlacing). In addition, the term "training corset" may refer to a corset which is used to acclimate the body prior to wearing a full corset as an everyday undergarment, or to any corset worn by somebody undertaking training (achieving a desired body shape). Redresseur corset: The redresseur corset or preparatory corset was a form of training corset used from the mid-19th century into the early 20th century, designed specifically for young adolescent girls who had not worn stays from an early stage. In addition to moulding a pronounced waist, it served as a back harness and was intended to improve posture. During earlier times in western countries a corset was an everyday item of apparel. In some periods, children were put in stays as soon as they could sit upright or walk. It was believed that the young body was too soft to grow upright on its own. Boys stopped wearing them once they were breeched, i.e. were changed from toddlers' dresses to boys' breeches at about ages 4–7. Girls continued to wear them for their whole lives. It was not until the late 19th century that this practice began to fall away. Orthopedic corsets: Corsets may be worn for orthopedic reasons, such as correcting a crooked spine or straightening the back and shoulders. Waist training corsets: It was primarily in the 19th century that corsets were used to noticeably reduce a woman's waist in order to achieve a fashionable hourglass figure. The corset was laced progressively tighter, forcing the floating ribs upward and compressing the soft tissue at the waist. This could lead to many negative health ramifications, including difficulty breathing, problems with digestion, and permanent deformity. Waist training, or waist reduction, generally uses an hourglass corset. Waist reduction, sometimes called "tightlacing", is often used by women who are post-pregnancy or by women involved in BDSM. No structural features distinguish a modern waist training corset from a corset worn as an external garment for special occasions. Training corsets are always made from strong fabric (or leather) and with relatively inflexible boning (not all corsets are strong enough to mold a body). A training corset is designed to be used every day, and will generally be hidden under clothing.
**Lethal dwarfism in rabbits** Lethal dwarfism in rabbits: In the rabbit (Oryctolagus cuniculus), lethal dwarfism occurs in individuals homozygous for the dwarf allele (dwdw). The dwarf allele is an autosomal recessive mutation that is lethal in the homozygous state. It is caused by a loss-of-function (LOF) mutation in the High mobility AT-hook 2 (HMGA2) gene, spanning 12.1 kb from 44,709,089 bp to 44,721,236 bp, which removes the gene promoter as well as multiple exons. This mutation greatly affects the growth of homozygous embryos (resulting in stunted size and altered craniofacial development) and of homozygous kits once born. Individuals homozygous for the dwarf allele are viable in the womb but die days after being born. Individuals that are heterozygous for the dwarf allele are healthy and unaffected by the lethality of the mutation, but are smaller than individuals homozygous for the wild type allele. Dwarf rabbits: Domestication of rabbits originated in the Catholic monasteries of Southern France around 500–600 AD. Species believed to have been present in the region were Oryctolagus cuniculus cuniculus and O. c. algirus, native to the Iberian Peninsula, as well as O. c. cuniculus, a species native to France. At this point in time, rabbits were mainly being raised for meat, and therefore a larger, bulkier rabbit was preferred. It was not until much later that rabbits kept solely as pets gained popularity, and as they did, breeding for smaller size became more prevalent. Today, dwarf rabbits are widely popular, and nine different breeds are accepted by the American Rabbit Breeders Association (ARBA), with many others accepted in other countries. These breeds vary greatly in characteristics, but they all have the dwarf allele in common. Small non-dwarf breeds, though they can be a similar weight to dwarfs, do not carry the dwarf allele and thus do not produce "peanuts" (dwdw kits) in their litters. Though the sizes of dwarf and small non-dwarf rabbit breeds may be similar, dwarfs have features unique to them. Dwarf rabbits have characteristically large, blocky heads that appear disproportional to their small, rounded bodies, with short noses and short, thick ears, allowing them to stand apart from other breeds in their proportions alone. Dwarf allele: Dwarfs owe their small size and features to both the dwarf allele (dw) and selective breeding, where rabbits are selected by humans for their characteristics, resulting in more offspring of the desired characteristic. Rabbits possessing two copies of the wild type allele (Dw/Dw) are larger than their dwarf littermates, but these individuals homozygous for the wild type allele are still smaller than standard-sized rabbits. This is because selective breeding over the years has favored a smaller size. Individuals heterozygous for the dwarf allele (Dw/dw) are what we typically think of as dwarfs. These are the individuals that are most often seen representing their breed, because they more easily fit into weight requirements for competitions. They are about 2/3 the size of their homozygous wild type littermates. The dwarf allele thus greatly contributes to the small size of dwarfs, but it is also a lethal autosomal recessive mutation. Kits (rabbit young) homozygous for the dwarf allele (dwdw) are often referred to as "peanuts", and although viable up to birth, die days afterwards. Physically, they differ greatly from their healthy littermates, and differentiation is possible at birth.
Peanuts are significantly smaller than their healthy littermates (about 1/3 the size of a healthy kit) and often possess swollen heads and smaller-than-normal ears. They have also been reported to have incompletely calcified calvariums, adding to the deformity of their skulls. Peanuts exhibit a greatly decreased growth rate, and although it has been reported that some are capable of nursing, they are quickly left behind in growth and weight by their healthy littermates, as they appear to not be growing at all. Peanuts are a common occurrence in dwarf litters, with there being a 1/4 chance of a kit being a peanut if both parents are heterozygous for the dwarf allele. There have been multiple reports of different organ systems being negatively affected by the peanut phenotype, including inhibition of the endocrine system at the pituitary. This could in part explain the inhibition of growth in dwdw kits. Dwarf allele: The dwarf allele has been shown to have a genetic linkage with the Agouti gene, pointing to its presence on chromosome 4. The location of the dwarf allele on chromosome 4 has been confirmed through heterozygosity mapping. Causal mutation for Dwarf allele: The causal mutation for the dwarf allele has been found to be a 12.1 kb deletion from 44,709,089 bp to 44,721,236 bp in the high mobility AT-hook 2 (HMGA2) gene (also known as HMGI-C). This deletion removes the promoter as well as the first three exons of the gene, rendering it inactive; the gene is effectively "knocked out" and nonfunctional. High mobility AT-hook 2 (HMGA2) is an architectural transcription factor, a protein that mediates the structure of interactions between DNA and protein and facilitates contact between DNA sequences within the genome. Essentially, HMGA2 regulates transcription. HMGA2 belongs to a family of non-histone chromatin proteins and has associations with body size in humans, mice, dogs, and horses. Research has also shown that beak size in different species of Darwin's finches correlates with a genomic region containing HMGA2, adding to its associations with size across a wide number of species. In rabbits, HMGA2 regulates the growth of embryos and has been associated with mitochondrial function. HMGA2 is also required for normal IGF2BP2 expression. IGF2BP2 is an RNA-binding protein that affects the translation of many different RNAs.
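The 1/4 peanut probability quoted above is just the Mendelian outcome of a Dw/dw × Dw/dw cross; the short enumeration below makes the ratios explicit (and shows why surviving dwarfs outnumber their Dw/Dw littermates two to one).

```python
from itertools import product
from collections import Counter

# Each heterozygous (Dw/dw) parent passes Dw or dw with equal chance.
counts = Counter(
    "".join(sorted(pair))            # ("dw", "Dw") and ("Dw", "dw") -> "Dwdw"
    for pair in product(["Dw", "dw"], repeat=2)
)
total = sum(counts.values())
for genotype, n in sorted(counts.items()):
    print(f"{genotype}: {n}/{total}")
# DwDw: 1/4 (larger, no dwarf allele), Dwdw: 2/4 (dwarf),
# dwdw: 1/4 (lethal "peanut")
```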
**Ninjutsu** Ninjutsu: Ninjutsu (忍術), sometimes used interchangeably with the modern term ninpō (忍法), is the martial art, strategy, and tactics of unconventional warfare, guerrilla warfare and espionage purportedly practised by the ninja. Ninjutsu was a separate discipline in some traditional Japanese schools, which integrated study of more conventional martial arts (taijutsu) along with shurikenjutsu, kenjutsu, sōjutsu, bōjutsu and others. Ninjutsu: While there is an international martial arts organization representing several modern styles of ninjutsu, the historical lineage of these styles is disputed. Some schools claim to be the only legitimate heir of the art, but ninjutsu is not centralized like modernized martial arts such as judo or karate. Togakure-ryū claims to be the oldest recorded form of ninjutsu, and claims to have survived past the 16th century. History: Spying in Japan dates as far back as Prince Shōtoku (572–622). According to the Shōninki, the first open usage of ninjutsu during a military campaign was in the Genpei War, when Minamoto no Kuro Yoshitsune chose warriors to serve as shinobi during a battle. This manuscript goes on to say that during the Kenmu era, Kusunoki Masashige frequently used ninjutsu. According to footnotes in this manuscript, the Genpei War lasted from 1180 to 1185, and the Kenmu Restoration occurred between 1333 and 1336. Ninjutsu was developed by the samurai of the Nanboku-cho period, and further refined by groups of jizamurai mainly from Kōka and the Iga Province of Japan in later periods. From circa 1460 to 1574 and 1581, respectively, these jizamurai led Kōka and Iga as de facto independent confederacies – the Kōka and Iga ikki – and formed an alliance together which persisted until the conquest of Kōka by Oda Nobunaga in 1574. Throughout history, the shinobi were assassins, scouts, and spies who were hired mostly by territorial lords known as daimyō. Although they were capable of stealth assassination, their primary role was as spies and scouts. Shinobi are mainly noted for their use of stealth and deception, which they would use to avoid direct confrontation where possible, enabling them to escape large groups of opposition. History: Many different schools (ryū) have taught their unique versions of ninjutsu. An example of this is the Togakure-ryū, which claims to have been developed after a defeated samurai warrior called Daisuke Togakure escaped to the region of Iga. He later came in contact with the warrior-monk Kain Doshi, who taught him a new way of viewing life and the means of survival (ninjutsu).: 18–21 Ninjutsu was developed as a collection of fundamental survivalist techniques in the warring state of feudal Japan. The ninja used their art to ensure their survival in a time of violent political turmoil. Ninjutsu included methods of gathering information and techniques of non-detection, avoidance, and misdirection. Ninjutsu involved training in disguise, escape, concealment, archery, and medicine. Skills relating to espionage and assassination were highly useful to warring factions in feudal Japan. At some point, the skills of espionage became known collectively as ninjutsu, and the people who specialized in these tasks were called shinobi no mono. History: Today, the last authentic heir of ninjutsu is Jinichi Kawakami, the 21st head of the Koga Ban family, honorary director of the Ninja Museum of Igaryu, and a professor at Mie University specializing in the research of ninjutsu.
In 2012, Kawakami decided that his line of ninjutsu would end with him, stating that the art has no practical place in the modern age.
**Enamelin** Enamelin: Enamelin is an enamel matrix protein (EMP) that in humans is encoded by the ENAM gene. It is one of the non-amelogenins, which comprise 10% of the total enamel matrix proteins. It is one of the key proteins thought to be involved in amelogenesis (enamel development). The formation of enamel's intricate architecture is thought to be rigorously controlled in ameloblasts through interactions of various organic matrix protein molecules that include: enamelin, amelogenin, ameloblastin, tuftelin, dentine sialophosphoprotein, and a variety of enzymes. Enamelin is the largest protein (~168 kDa) in the enamel matrix of developing teeth and is the least abundant, encompassing approximately 1–5% of total enamel matrix proteins. It is present predominantly at the growing enamel surface. Structure: Enamelin is thought to be the oldest member of the enamel matrix protein (EMP) family, with animal studies showing remarkable conservation of the gene phylogenetically. All other EMPs, such as amelogenin, are derived from enamelin. EMPs belong to a larger family of proteins termed 'secretory calcium-binding phosphoproteins' (SCPP). Similar to other enamel matrix proteins, enamelin undergoes extensive post-translational modification (mainly phosphorylation), processing, and secretion by proteases. Enamelin has three putative phosphoserines (Ser54, Ser191, and Ser216 in humans) phosphorylated by a Golgi-associated secretory pathway kinase (FAM20C) based on their distinctive Ser-x-Glu (S-x-E) motifs. The major secretory product of the ENAM gene has 1103 amino acids (post-secretion) and has an acidic isoelectric point ranging from 4.5–6.5 (depending on the fragment). At the secretory stage, the enzyme matrix metalloproteinase-20 (MMP20) proteolytically cleaves the secreted enamelin protein immediately upon release into several smaller polypeptides, each having its own functions. However, the whole protein (~168 kDa) and its largest derivative fragment (~89 kDa) are undetectable in the secretory stage; these exist only at the mineralisation front. Smaller polypeptide fragments remain embedded in the enamel throughout the secretory-stage enamel matrix. These strongly bind to the mineral and arrest seeded crystal growth. Function: The protein acts primarily at the mineralisation front: the growth sites at the interface between the ameloblast plasma membrane and the lengthening extremity of the crystals. The key activities of enamelin can be summarised: necessary for the adhesion of ameloblasts to the surface of the enamel in the secretory stage; binds to hydroxyapatite and promotes crystallite elongation; acts as a modulator for de novo mineral formation. It is speculated that this protein could interact with amelogenin or other enamel matrix proteins and be important in determining the growth in length of enamel crystallites. The mechanism of this proposed co-interaction is synergistic (a "Goldilocks effect"), with enamelin enhancing the rate of crystal nucleation by creating additional sites for EMPs, such as amelogenin, to template calcium phosphate nucleation. The overarching function of enamelin is best understood as ensuring the formation of enamel of the correct thickness. Clinical significance: Mutations in the ENAM gene can cause certain subtypes of amelogenesis imperfecta (AI), a heterogeneous group of heritable conditions in which enamel is malformed.
Point mutations can cause autosomal-dominant hypoplastic AI, and novel ENAM mutations can cause autosomal-recessive hypoplastic AI. However, mutations in the ENAM gene mainly tend to lead to autosomal-dominant AI. The phenotype of these mutations is generalised thin enamel and no defined enamel layer. Moderately higher than usual ENAM expression leads to protrusive structures (often horizontal grooves) on the surface of the enamel, and with high transgene expression the enamel layer is almost lost.
**NCS-382** NCS-382: NCS-382 is a moderately selective antagonist for the GHB receptor. It blocks the effects of GHB in animals and has both anti-sedative and anticonvulsant effects. It has been proposed as a treatment for GHB overdose in humans as well as the genetic metabolic disorder succinic semialdehyde dehydrogenase deficiency (SSADHD), but has never been developed for clinical use.
**SOX17** SOX17: SRY-box 17 is a protein that in humans is encoded by the SOX17 gene. Regulation at the human SOX17 locus: The gene encodes a member of the SOX (SRY-related HMG-box) family of transcription factors and is located on chromosome 8q11.23. Its gene body is isolated within a CTCF loop domain. Approximately 230 kb upstream of SOX17, a tissue-specific differentially (hypo-)methylated region (DMR) has been identified, which consists of SOX17 regulatory elements. In particular, the DMR bears the most distal definitive-endoderm-specific enhancer at the SOX17 locus. SOX17 itself has recently been defined as a so-called topologically insulated gene (TIG). By definition, TIGs are single protein-coding genes (PCGs) within CTCF loop domains; they are mainly enriched in developmental regulators and are suggested to be very tightly controlled via their 3D loop-domain architecture. Function in development: SOX17 is involved in the regulation of vertebrate embryonic development and in the determination of the endodermal cell fate. The encoded protein acts downstream of TGF beta signaling (Activin) and canonical WNT signaling (Wnt3a). In particular, correct phosphorylation of SMAD2/3 at the appropriate point in the cell cycle (early G1 phase) is crucial for the activation of cardinal endodermal genes (e.g. SOX17) required to enter the definitive endodermal lineage. Moreover, perturbation of the SOX17 centromeric CTCF boundary in early definitive endoderm differentiation leads to massive developmental failure and a so-called mesendodermal-like trapped cell state, which can be rescued by ectopic SOX17 expression. In Xenopus gastrulae, SOX17 has been shown to modify Wnt responses: the genomic specificity of Wnt/β-catenin transcription is determined through functional interactions between SOX17 and β-catenin/Tcf transcriptional complexes.
**Isotopes of neodymium** Isotopes of neodymium: Naturally occurring neodymium (60Nd) is composed of 5 stable isotopes, 142Nd, 143Nd, 145Nd, 146Nd and 148Nd, with 142Nd being the most abundant (27.2% natural abundance), and 2 long-lived radioisotopes, 144Nd and 150Nd. In all, 33 radioisotopes of neodymium have been characterized to date, the most stable being the naturally occurring isotopes 144Nd (alpha decay, half-life (t1/2) of 2.29×10^15 years) and 150Nd (double beta decay, t1/2 of 7×10^18 years). All of the remaining radioactive isotopes have half-lives of less than 12 days, and the majority of these have half-lives of less than 70 seconds; the most stable artificial isotope is 147Nd, with a half-life of 10.98 days. This element also has 13 known meta states, the most stable being 139mNd (t1/2 5.5 hours), 135mNd (t1/2 5.5 minutes) and 133m1Nd (t1/2 ~70 seconds). Isotopes of neodymium: The primary decay modes before the most abundant stable isotope, 142Nd, are electron capture and positron decay, and the primary mode after it is beta decay. The primary decay products before 142Nd are praseodymium isotopes and the primary products after it are promethium isotopes. Neodymium isotopes as fission products: Neodymium is one of the more common fission products resulting from the splitting of uranium-233, uranium-235, plutonium-239 and plutonium-241. The distribution of the resulting neodymium isotopes is distinctly different from that found in crustal rocks on Earth. One of the methods used to verify that the Oklo Fossil Reactors in Gabon had sustained natural nuclear fission some two billion years before present was to compare the relative abundances of neodymium isotopes found at the reactor site with those found elsewhere on Earth.
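As an illustration of what such half-lives mean in practice, the sketch below applies the standard radioactive-decay law N/N0 = 2^(−t/t½) to the 10.98-day half-life quoted above for 147Nd; the decay law is textbook physics rather than a claim made in this article, and the helper name is ours.

```python
# Standard radioactive-decay law (textbook physics, not a claim from the
# article) applied to the 10.98-day half-life of 147Nd quoted above.
HALF_LIFE_DAYS = 10.98  # half-life of 147Nd

def fraction_remaining(t_days):
    """Fraction of an initial 147Nd sample remaining after t_days."""
    return 0.5 ** (t_days / HALF_LIFE_DAYS)

print(fraction_remaining(10.98))  # 0.5 after exactly one half-life
print(fraction_remaining(30.0))   # ~0.15 after roughly a month
```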
**Halo nucleus** Halo nucleus: In nuclear physics, an atomic nucleus is called a halo nucleus or is said to have a nuclear halo when it has a core nucleus surrounded by a "halo" of orbiting protons or neutrons, which makes the radius of the nucleus appreciably larger than that predicted by the liquid drop model. Halo nuclei form at the extreme edges of the table of nuclides — the neutron drip line and proton drip line — and have short half-lives, measured in milliseconds. These nuclei are studied shortly after their formation in an ion beam. Halo nucleus: Typically, an atomic nucleus is a tightly bound group of protons and neutrons. However, in some nuclides there is an overabundance of one species of nucleon, and in some of these cases a nuclear core and a halo will form. Halo nucleus: Often, this property may be detected in scattering experiments, which show the nucleus to be much larger than the otherwise expected value. Normally, the radius of the nucleus (and hence its classical cross-section) scales with the cube root of its mass number, as would be the case for a sphere of constant density. Specifically, for a nucleus of mass number A, the radius r is (approximately) $r = r_0 A^{1/3}$, where $r_0$ is 1.2 fm. Halo nucleus: One example of a halo nucleus is 11Li, which has a half-life of 8.6 ms. It contains a core of 3 protons and 6 neutrons, and a halo of two independent and loosely bound neutrons. It decays into 11Be by the emission of an antineutrino and an electron. Its mass radius of 3.16 fm is close to that of 32S or, even more impressively, of 208Pb, both much heavier nuclei. Experimental confirmation of nuclear halos is recent and ongoing. Additional candidates are suspected. Several nuclides, including 9B, 13N, and 15N, are calculated to have a halo in an excited state but not in the ground state. List of known nuclides with nuclear halo: Nuclei that have a neutron halo include 11Be and 19C. A two-neutron halo is exhibited by 6He, 11Li, 17B, 19B and 22C. Two-neutron halo nuclei break into three fragments and are called Borromean because of this behavior, analogously to how all three of the Borromean rings are linked together yet no two share a link. For example, the two-neutron halo nucleus 6He (which can be taken as a three-body system consisting of an alpha particle and two neutrons) is bound, but neither 5He nor the dineutron is. 8He and 14Be both exhibit a four-neutron halo. List of known nuclides with nuclear halo: Nuclei that have a proton halo include 8B and 26P. A two-proton halo is exhibited by 17Ne and 27S. Proton halos are expected to be rarer and more unstable than neutron halos because of the repulsive forces of the excess proton(s).
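To make the size comparison concrete, the sketch below evaluates the constant-density estimate $r = r_0 A^{1/3}$ quoted above for 11Li; the function name is ours, introduced only for illustration.

```python
# Constant-density radius estimate r = r0 * A**(1/3), with r0 = 1.2 fm as
# stated above, applied to 11Li (A = 11). Helper name is illustrative.
R0_FM = 1.2

def nuclear_radius(mass_number):
    """Radius in fm predicted for a nucleus of constant density."""
    return R0_FM * mass_number ** (1 / 3)

print(nuclear_radius(11))  # ~2.67 fm, well short of the measured 3.16 fm
```

The measured radius exceeding the constant-density estimate by roughly 20% is exactly the anomaly that identifies 11Li as a halo nucleus.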
**Perdita Barran** Perdita Barran: Perdita Elizabeth Barran is a Professor of Mass Spectrometry at the University of Manchester. She is Director of the Michael Barber Centre for Collaborative Mass Spectrometry. She develops and applies ion-mobility spectrometry–mass spectrometry to the study of molecular structure and is searching for biomarkers for Parkinson's disease. She is Associate Dean for Research Facility Development at the University of Manchester. In 2020 and 2021 she was seconded to the Department of Health and Social Care as an advisor on the use case for mass spectrometry as a diagnostic method for COVID infection. Perdita Barran: She was awarded the 2009 Joseph Black award and the 2020 Theophilus Redwood Award from the Royal Society of Chemistry Analytical Division. Along with the 'NosetoDiagnose' team of researchers, she won the Horizon Prize from the Royal Society of Chemistry in 2021. Education and early career: Barran went to Godolphin and Latymer School. She moved to the University of Manchester to study chemistry, graduating in 1994. She joined the University of Sussex for her graduate studies, working with Harry Kroto and Tony Stace. Research and career: Barran stayed with Stace for three years after completing her PhD in 1998. In 2001 Barran joined the University of California, Santa Barbara, working as a postdoctoral fellow with Mike Bowers. She was interested in the structure and stability of small molecules in the gas phase, and looked at how ion-mobility spectrometry could be used to identify conformation. Barran joined the University of Edinburgh as an Engineering and Physical Sciences Research Council (EPSRC) Advanced Research Fellow in 2002. In 2005 she was awarded the 10th Desty Memorial prize for her innovations in separation science. She was made a Senior Lecturer in 2009. She worked on mass spectrometry techniques that can be used to evaluate conformational change, aggregation and intrinsic conformation, and investigated mass spectrometry of therapeutics for pre-fibrillar aggregation. She helped to establish the Scottish Instrumentation and Resource Centre for Advanced Mass Spectrometry at the University of Edinburgh, whose initial remit was to provide proteomic analysis for the MRC Human Genetics Unit. In 2013 Barran was appointed to the Manchester Institute of Biotechnology as a Chair in Mass Spectrometry sponsored by Waters Corporation. She led an EPSRC platform grant to study the structure–activity relationships of beta defensins. She works with Cait MacPhee, Garth Cooper and Tilo Kunath on neurodegenerative proteins, and with several groups, including those of Richard Kriwacki, Rohit Pappu and Gary Daughdrill, to examine intrinsically disordered proteins. She works with several biopharmaceutical companies to apply new mass spectrometry techniques to new drug modalities, including monoclonal antibodies, and also develops new mass spectrometry instrumentation. Her group looks at the structure of biological systems at a molecular level, studying them in the gas and solution phases as well as theoretically. They use electrospray ionization, mass spectrometry, ion-mobility mass spectrometry, native mass spectrometry and complementary solution-based biophysical techniques. They are interested in a protein's structure and how it changes, in an effort to relate structure to function. Ion-mobility spectrometry–mass spectrometry can be used to measure the temperature-dependent, rotationally averaged collision cross-section of gas-phase protein ions.
In 2014 she was awarded a Biotechnology and Biological Sciences Research Council grant to study protein–protein interactions. Barran serves on the editorial board of the International Journal of Mass Spectrometry. She was included in the page of Perditas created by Perdita Stevens. Research and career: Parkinson's disease Barran has been working with Joy Milne to search for odorous biomarkers of Parkinson's disease. By smelling skin swabs, Milne says she can differentiate between people with and without Parkinson's disease. She says she identified changes in her husband's scent before he was formally diagnosed with Parkinson's disease, which he died of in 2015. Barran uses mass spectrometry to investigate the biomarkers of Parkinson's disease. The story was made into a BBC documentary, The Woman Who Can Smell Parkinson's. Barran received ethical approval for her work on the skin metabolites of Parkinson's in 2015, allowing her team to work with Parkinson's UK to conduct a larger study. In 2018 Milne travelled to the Tanzanian training centre APOPO to check whether she could smell tuberculosis. Barran's work on Parkinson's is sponsored by The Michael J. Fox Foundation. In 2022, Barran and others published a study of a method to detect Parkinson's disease by analysing sebum using mass spectrometry.
**Condorcet paradox** Condorcet paradox: The Condorcet paradox (also known as the voting paradox or the paradox of voting) in social choice theory is a situation noted by the Marquis de Condorcet in the late 18th century, in which collective preferences can be cyclic even if the preferences of individual voters are not. This is paradoxical because it means that majority wishes can be in conflict with each other: suppose majorities prefer, for example, candidate A over B, B over C, and yet C over A. When this occurs, it is because the conflicting majorities are each made up of different groups of individuals. Condorcet paradox: Thus an expectation that transitivity on the part of all individuals' preferences should result in transitivity of societal preferences is an example of a fallacy of composition. The paradox was independently discovered by Lewis Carroll and Edward J. Nanson, but its significance was not recognized until popularized by Duncan Black in the 1940s. Example: Suppose we have three candidates, A, B, and C, and three voters whose preferences, in decreasing order, are: voter 1: A, B, C; voter 2: B, C, A; voter 3: C, A, B. If C is chosen as the winner, it can be argued that B should win instead, since two voters (1 and 2) prefer B to C and only one voter (3) prefers C to B. However, by the same argument A is preferred to B, and C is preferred to A, by a margin of two to one on each occasion. Thus the society's preferences show cycling: A is preferred over B, which is preferred over C, which is preferred over A. A small code sketch of this cycle follows at the end of this section. Example: Cardinal ratings Note that in the graphical example, the voters and candidates are not symmetrical, but the ranked voting system "flattens" their preferences into a symmetrical cycle. Cardinal voting systems provide more information than rankings, allowing a winner to be found. For instance, under score voting, candidate A might get the largest score and win, as A is the nearest to all voters. However, a majority of voters have an incentive to give A a 0 and C a 10, allowing C to beat A, which they prefer, at which point a majority will then have an incentive to give C a 0 and B a 10, to make B win, and so on. (In this particular example, though, the incentive is weak, as those who prefer C to A only score C 1 point above A; in a ranked Condorcet method, it is quite possible they would simply rank A and C equally because of how weak their preference is, in which case a Condorcet cycle would not have formed in the first place, and A would have been the Condorcet winner.) So though the cycle does not occur in any given set of votes, it can appear through iterated elections with strategic voters with cardinal ratings. Necessary condition for the paradox: Suppose that x is the fraction of voters who prefer A over B and that y is the fraction of voters who prefer B over C. It has been shown that the fraction z of voters who prefer A over C is always at least x + y − 1. Since the paradox (a majority preferring C over A) requires z < 1/2, a necessary condition for the paradox is that x + y − 1 < 1/2, and hence x + y < 3/2. Likelihood of the paradox: It is possible to estimate the probability of the paradox by extrapolating from real election data, or by using mathematical models of voter behavior, though the results depend strongly on which model is used. In particular, Andranik Tangian has proved that the probability of the Condorcet paradox is negligible in a large society.
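A minimal sketch of the pairwise tallies in the three-voter example above (the helper name `margin` is ours, introduced for illustration):

```python
# Verify the cyclic majority in the classic three-voter example
# (voter 1: A > B > C, voter 2: B > C > A, voter 3: C > A > B).
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def margin(x, y):
    """Voters preferring x to y, minus voters preferring y to x."""
    return sum(+1 if b.index(x) < b.index(y) else -1 for b in ballots)

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    # each pair prints a margin of 1, i.e. a 2-to-1 majority: a cycle
    print(x, "beats", y, "by a margin of", margin(x, y))
```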
Impartial culture model We can calculate the probability of seeing the paradox for the special case where voter preferences are uniformly distributed among the candidates. Likelihood of the paradox: (This is the "impartial culture" model, which is known to be unrealistic, so, in practice, a Condorcet paradox may be more or less likely than this calculation suggests.) For n voters providing a preference list of three candidates A, B, C, we write $X_n$ (resp. $Y_n$, $Z_n$) for the random variable equal to the number of voters who placed A in front of B (respectively B in front of C, C in front of A). The sought probability is $p_n = 2P(X_n > n/2,\ Y_n > n/2,\ Z_n > n/2)$ (we double because there is also the symmetric case A > C > B > A). We show that, for odd n, $p_n = 3q_n - \tfrac{1}{2}$, where $q_n = P(X_n > n/2,\ Y_n > n/2)$, so that one only needs to know the joint distribution of $X_n$ and $Y_n$. If we put $p_{n,i,j} = P(X_n = i,\ Y_n = j)$, we can show a relation which makes it possible to compute this distribution by recurrence: $p_{n+1,i,j} = \tfrac{1}{6}p_{n,i,j} + \tfrac{1}{3}p_{n,i-1,j} + \tfrac{1}{3}p_{n,i,j-1} + \tfrac{1}{6}p_{n,i-1,j-1}$. The values of $p_n$ computed in this way appear to tend towards a finite limit; a sketch implementing this recurrence follows below. Likelihood of the paradox: Using the central limit theorem, we show that $q_n$ tends to $q = \tfrac{1}{4}P\!\left(|T| > \tfrac{\sqrt{2}}{4}\right)$, where T is a variable following a Cauchy distribution, which gives $q = \tfrac{\arctan(2\sqrt{2})}{2\pi} = \tfrac{\arccos(1/3)}{2\pi}$ (a constant quoted in the OEIS). Likelihood of the paradox: The asymptotic probability of encountering the Condorcet paradox is therefore $3q - \tfrac{1}{2} = \tfrac{3\arccos(1/3)}{2\pi} - \tfrac{1}{2}$, which gives the value 8.77%. Some results for the case of more than three candidates have been calculated and simulated. The simulated likelihood for an impartial culture model with 25 voters increases with the number of candidates. The likelihood of a Condorcet cycle for related models approaches these values for large electorates: impartial anonymous culture (IAC), 6.25%; uniform culture (UC), 6.25%; maximal culture condition (MC), 9.17%. All of these models are unrealistic and are investigated to establish an upper bound on the likelihood of a cycle. Likelihood of the paradox: Group coherence models When modeled with more realistic voter preferences, Condorcet paradoxes in elections with a small number of candidates and a large number of voters become very rare. Spatial model A study of three-candidate elections analyzed 12 different models of voter behavior and found the spatial model of voting to be the most accurate to real-world ranked-ballot election data. Analyzing this spatial model, the study found the likelihood of a cycle to decrease to zero as the number of voters increases, with likelihoods of 5% for 100 voters, 0.5% for 1000 voters, and 0.06% for 10,000 voters. Another spatial model found likelihoods of 2% or less in all simulations of 201 voters and 5 candidates, whether two- or four-dimensional, with or without correlation between dimensions, and with two different dispersions of candidates. Empirical studies Many attempts have been made at finding empirical examples of the paradox. Empirical identification of a Condorcet paradox presupposes extensive data on the decision-makers' preferences over all alternatives—something that is only very rarely available.
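As a check on the numbers above, here is a minimal sketch (ours, written under the impartial-culture assumption) that implements the recurrence for $p_{n,i,j}$ directly and evaluates $p_n = 3q_n - 1/2$:

```python
# Each new voter picks one of 6 rankings uniformly; this adds (0,0), (1,0),
# (0,1) or (1,1) to the pair (X, Y) with weights 1/6, 1/3, 1/3, 1/6, which
# is exactly the recurrence for p_{n,i,j} given above.

def condorcet_paradox_probability(n_voters):
    dist = {(0, 0): 1.0}  # dist[(i, j)] = P(X = i, Y = j)
    steps = [(0, 0, 1 / 6), (1, 0, 1 / 3), (0, 1, 1 / 3), (1, 1, 1 / 6)]
    for _ in range(n_voters):
        new_dist = {}
        for (i, j), p in dist.items():
            for di, dj, w in steps:
                key = (i + di, j + dj)
                new_dist[key] = new_dist.get(key, 0.0) + w * p
        dist = new_dist
    q = sum(p for (i, j), p in dist.items()
            if i > n_voters / 2 and j > n_voters / 2)
    return 3 * q - 0.5  # valid for odd n_voters

print(condorcet_paradox_probability(3))    # 1/18 ≈ 0.0556
print(condorcet_paradox_probability(101))  # approaching the 8.77% limit
```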
Likelihood of the paradox: A summary of 37 individual studies, covering a total of 265 real-world elections, large and small, found 25 instances of a Condorcet paradox, for a total likelihood of 9.4% (and this may be a high estimate, since cases of the paradox are more likely to be reported on than cases without). An analysis of 883 three-candidate elections extracted from 84 real-world ranked-ballot elections of the Electoral Reform Society found a Condorcet cycle likelihood of 0.7%. These derived elections had between 350 and 1,957 voters. A similar analysis of data from the 1970–2004 American National Election Studies thermometer-scale surveys found a Condorcet cycle likelihood of 0.4%. These derived elections had between 759 and 2,521 "voters". While examples of the paradox seem to occur occasionally in small settings (e.g., parliaments), very few examples have been found in larger groups (e.g., electorates), although some have been identified. Implications: When a Condorcet method is used to determine an election, the voting paradox of cyclical societal preferences implies that the election has no Condorcet winner: no candidate who can win a one-on-one election against each other candidate. There will still be a smallest group of candidates, known as the Smith set, such that each candidate in the group can win a one-on-one election against each of the candidates outside the group. The several variants of the Condorcet method differ on how they resolve such ambiguities when they arise to determine a winner. The Condorcet methods that always elect someone from the Smith set when there is no Condorcet winner are known as Smith-efficient. Note that, using only rankings, there is no fair and deterministic resolution to the trivial example given earlier, because each candidate is in an exactly symmetrical situation. Implications: Situations having the voting paradox can cause voting mechanisms to violate the axiom of independence of irrelevant alternatives—the choice of winner by a voting mechanism could be influenced by whether or not a losing candidate is available to be voted for. Implications: Two-stage voting processes One important implication of the possible existence of the voting paradox in a practical situation is that in a two-stage voting process, the eventual winner may depend on the way the two stages are structured. For example, suppose the winner of A versus B in the open primary contest for one party's leadership will then face the second party's leader, C, in the general election. In the earlier example, A would defeat B for the first party's nomination, and then would lose to C in the general election. But if B were in the second party instead of the first, B would defeat C for that party's nomination, and then would lose to A in the general election. Thus the structure of the two stages makes a difference for whether A or C is the ultimate winner. Implications: Likewise, the structure of a sequence of votes in a legislature can be manipulated by the person arranging the votes, to ensure a preferred outcome.
**Flamant solution** Flamant solution: The Flamant solution provides expressions for the stresses and displacements in a linear elastic wedge loaded by point forces at its sharp end. This solution was developed by A. Flamant in 1892 by modifying the three-dimensional solution of Boussinesq. The stresses predicted by the Flamant solution are (in polar coordinates) $$\sigma_{rr} = \frac{2C_1\cos\theta}{r} + \frac{2C_3\sin\theta}{r}, \qquad \sigma_{r\theta} = 0, \qquad \sigma_{\theta\theta} = 0$$ where $C_1, C_3$ are constants that are determined from the boundary conditions and the geometry of the wedge (i.e., the angles $\alpha, \beta$) and satisfy $$F_1 + 2\int_\alpha^\beta (C_1\cos\theta + C_3\sin\theta)\cos\theta\,d\theta = 0, \qquad F_2 + 2\int_\alpha^\beta (C_1\cos\theta + C_3\sin\theta)\sin\theta\,d\theta = 0$$ where $F_1, F_2$ are the applied forces. The wedge problem is self-similar and has no inherent length scale. Also, all quantities can be expressed in the separated-variable form $\sigma = f(r)g(\theta)$. The stresses vary as $1/r$. Forces acting on a half-plane: For the special case where $\alpha = -\pi$, $\beta = 0$, the wedge is converted into a half-plane with a normal force and a tangential force. In that case $$C_1 = -\frac{F_1}{\pi}, \qquad C_3 = -\frac{F_2}{\pi}$$ Therefore the stresses are $$\sigma_{rr} = -\frac{2}{\pi r}\left(F_1\cos\theta + F_2\sin\theta\right), \qquad \sigma_{r\theta} = 0, \qquad \sigma_{\theta\theta} = 0$$ and the displacements are (using Michell's solution) $$u_r = -\frac{1}{4\pi\mu}\left[F_1\left\{(\kappa-1)\theta\sin\theta - \cos\theta + (\kappa+1)\ln r\,\cos\theta\right\} + F_2\left\{(\kappa-1)\theta\cos\theta + \sin\theta - (\kappa+1)\ln r\,\sin\theta\right\}\right]$$ $$u_\theta = -\frac{1}{4\pi\mu}\left[F_1\left\{(\kappa-1)\theta\cos\theta - \sin\theta - (\kappa+1)\ln r\,\sin\theta\right\} - F_2\left\{(\kappa-1)\theta\sin\theta + \cos\theta + (\kappa+1)\ln r\,\cos\theta\right\}\right]$$ The $\ln r$ dependence of the displacements implies that the displacement grows the further one moves from the point of application of the force (and is unbounded at infinity). This feature of the Flamant solution is confusing and appears unphysical. For a discussion of the issue see http://imechanica.org/node/319. Forces acting on a half-plane: Displacements at the surface of the half-plane The displacements in the $x_1, x_2$ directions at the surface of the half-plane are given by $$u_1 = -\frac{F_1(\kappa+1)\ln|x_1|}{4\pi\mu} + \frac{F_2(\kappa-1)\,\mathrm{sign}(x_1)}{8\mu}, \qquad u_2 = -\frac{F_2(\kappa+1)\ln|x_1|}{4\pi\mu} + \frac{F_1(\kappa-1)\,\mathrm{sign}(x_1)}{8\mu}$$ where $\kappa = 3 - 4\nu$ for plane strain, $\kappa = \dfrac{3-\nu}{1+\nu}$ for plane stress, $\nu$ is the Poisson's ratio, $\mu$ is the shear modulus, and $$\mathrm{sign}(x) = \begin{cases} +1 & x > 0 \\ -1 & x < 0 \end{cases}$$ Derivation of Flamant solution: If we assume the stresses to vary as $1/r$, we can pick terms containing $1/r$ in the stresses from Michell's solution. Then the Airy stress function can be expressed as $$\varphi = C_1\,r\theta\sin\theta + C_2\,r\ln r\,\cos\theta + C_3\,r\theta\cos\theta + C_4\,r\ln r\,\sin\theta$$ Therefore, from the tables in Michell's solution, we have $$\sigma_{rr} = \frac{1}{r}\left(2C_1\cos\theta + C_2\cos\theta + 2C_3\sin\theta + C_4\sin\theta\right), \qquad \sigma_{r\theta} = \frac{1}{r}\left(C_2\sin\theta - C_4\cos\theta\right), \qquad \sigma_{\theta\theta} = \frac{1}{r}\left(C_2\cos\theta + C_4\sin\theta\right)$$ The constants $C_1, C_2, C_3, C_4$ can then, in principle, be determined from the wedge geometry and the applied boundary conditions. Derivation of Flamant solution: However, the concentrated loads at the vertex are difficult to express in terms of traction boundary conditions, because the unit outward normal at the vertex is undefined and because the forces are applied at a point (which has zero area), so that the traction at that point is infinite. To get around this problem, we consider a bounded region of the wedge and consider equilibrium of the bounded wedge. Let the bounded wedge have two traction-free surfaces and a third surface in the form of an arc of a circle with radius $a$. Along the arc of the circle, the unit outward normal is $n = e_r$, where the basis vectors are $(e_r, e_\theta)$. The tractions on the arc are $$t = \sigma\cdot n \implies t_r = \sigma_{rr}, \quad t_\theta = \sigma_{r\theta}.$$ Derivation of Flamant solution: Next, we examine the force and moment equilibrium in the bounded wedge and get $$\sum f_1 = F_1 + \int_\alpha^\beta \left[\sigma_{rr}(a,\theta)\cos\theta - \sigma_{r\theta}(a,\theta)\sin\theta\right]a\,d\theta = 0$$ $$\sum f_2 = F_2 + \int_\alpha^\beta \left[\sigma_{rr}(a,\theta)\sin\theta + \sigma_{r\theta}(a,\theta)\cos\theta\right]a\,d\theta = 0$$ $$\sum m_3 = \int_\alpha^\beta \left[a\,\sigma_{r\theta}(a,\theta)\right]a\,d\theta = 0$$ We require that these equations be satisfied for all values of $a$ and thereby satisfy the boundary conditions. Derivation of Flamant solution: The traction-free boundary conditions on the edges $\theta = \alpha$ and $\theta = \beta$ also imply that $$\sigma_{r\theta} = \sigma_{\theta\theta} = 0 \quad \text{at}\ \theta = \alpha,\ \theta = \beta$$ except at the point $r = 0$. If we assume that $\sigma_{r\theta} = 0$ everywhere, then the traction-free conditions and the moment equilibrium equation are satisfied, and we are left with $$F_1 + 2\int_\alpha^\beta (C_1\cos\theta + C_3\sin\theta)\cos\theta\,d\theta = 0, \qquad F_2 + 2\int_\alpha^\beta (C_1\cos\theta + C_3\sin\theta)\sin\theta\,d\theta = 0$$ and $\sigma_{\theta\theta} = 0$ along $\theta = \alpha,\ \theta = \beta$ except at the point $r = 0$.
But the field $\sigma_{\theta\theta} = 0$ everywhere also satisfies the force equilibrium equations. Hence this must be the solution. Also, the assumption $\sigma_{r\theta} = 0$ implies $C_2 = C_4 = 0$. Therefore $$\sigma_{rr} = \frac{2C_1\cos\theta + 2C_3\sin\theta}{r}; \qquad \sigma_{r\theta} = 0; \qquad \sigma_{\theta\theta} = 0$$ To find a particular solution for $\sigma_{rr}$ we have to plug the expression for $\sigma_{rr}$ into the force equilibrium equations, which gives a system of two equations to be solved for $C_1, C_3$: $$F_1 + 2\int_\alpha^\beta (C_1\cos\theta + C_3\sin\theta)\cos\theta\,d\theta = 0, \qquad F_2 + 2\int_\alpha^\beta (C_1\cos\theta + C_3\sin\theta)\sin\theta\,d\theta = 0$$ Forces acting on a half-plane If we take $\alpha = -\pi$ and $\beta = 0$, the problem is converted into one where a normal force $F_2$ and a tangential force $F_1$ act on a half-plane. In that case, the force equilibrium equations take the form $$F_1 + 2\int_{-\pi}^0 (C_1\cos\theta + C_3\sin\theta)\cos\theta\,d\theta = 0 \implies F_1 + C_1\pi = 0$$ $$F_2 + 2\int_{-\pi}^0 (C_1\cos\theta + C_3\sin\theta)\sin\theta\,d\theta = 0 \implies F_2 + C_3\pi = 0$$ Therefore $$C_1 = -\frac{F_1}{\pi}; \qquad C_3 = -\frac{F_2}{\pi}.$$ Derivation of Flamant solution: The stresses for this situation are $$\sigma_{rr} = -\frac{2}{\pi r}\left(F_1\cos\theta + F_2\sin\theta\right); \qquad \sigma_{r\theta} = 0; \qquad \sigma_{\theta\theta} = 0$$ Using the displacement tables from the Michell solution, the displacements for this case are given by $$u_r = -\frac{1}{4\pi\mu}\left[F_1\left\{(\kappa-1)\theta\sin\theta - \cos\theta + (\kappa+1)\ln r\,\cos\theta\right\} + F_2\left\{(\kappa-1)\theta\cos\theta + \sin\theta - (\kappa+1)\ln r\,\sin\theta\right\}\right]$$ $$u_\theta = -\frac{1}{4\pi\mu}\left[F_1\left\{(\kappa-1)\theta\cos\theta - \sin\theta - (\kappa+1)\ln r\,\sin\theta\right\} - F_2\left\{(\kappa-1)\theta\sin\theta + \cos\theta + (\kappa+1)\ln r\,\cos\theta\right\}\right]$$ Displacements at the surface of the half-plane To find expressions for the displacements at the surface of the half-plane, we first find the displacements for positive $x_1$ ($\theta = 0$) and negative $x_1$ ($\theta = \pi$), keeping in mind that $r = |x_1|$ along these locations. Derivation of Flamant solution: For $\theta = 0$ we have $$u_1 = \frac{F_1}{4\pi\mu}\left[1 - (\kappa+1)\ln|x_1|\right], \qquad u_2 = \frac{F_2}{4\pi\mu}\left[1 - (\kappa+1)\ln|x_1|\right]$$ For $\theta = \pi$ we have $$u_1 = \frac{F_1}{4\pi\mu}\left[1 - (\kappa+1)\ln|x_1|\right] - \frac{F_2(\kappa-1)}{4\mu}, \qquad u_2 = \frac{F_2}{4\pi\mu}\left[1 - (\kappa+1)\ln|x_1|\right] - \frac{F_1(\kappa-1)}{4\mu}$$ We can make the displacements symmetric around the point of application of the force by adding the rigid body displacements (which do not affect the stresses) $$u_1 = \frac{F_2}{8\mu}(\kappa-1); \qquad u_2 = \frac{F_1}{8\mu}(\kappa-1)$$ and removing the redundant rigid body displacements $$u_1 = \frac{F_1}{4\pi\mu}; \qquad u_2 = \frac{F_2}{4\pi\mu}.$$ Then the displacements at the surface can be combined and take the form $$u_1 = -\frac{F_1(\kappa+1)\ln|x_1|}{4\pi\mu} + \frac{F_2(\kappa-1)\,\mathrm{sign}(x_1)}{8\mu}, \qquad u_2 = -\frac{F_2(\kappa+1)\ln|x_1|}{4\pi\mu} + \frac{F_1(\kappa-1)\,\mathrm{sign}(x_1)}{8\mu}$$ where $$\mathrm{sign}(x) = \begin{cases} +1 & x > 0 \\ -1 & x < 0 \end{cases}$$
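A small numerical sanity check of the half-plane formulas above (a sketch under the stated conventions; variable names are ours, introduced only for illustration):

```python
import numpy as np

# Half-plane occupying -pi <= theta <= 0, tangential load F1 and normal
# load F2 (per unit out-of-plane length) applied at the origin.
F1, F2 = 0.0, 1.0

def sigma_rr(r, theta):
    """Radial stress, the only nonzero stress component of the solution."""
    return -2.0 / (np.pi * r) * (F1 * np.cos(theta) + F2 * np.sin(theta))

# The stresses vary as 1/r: doubling r halves the stress at fixed theta.
print(sigma_rr(1.0, -np.pi / 2) / sigma_rr(2.0, -np.pi / 2))  # 2.0

# Force equilibrium: the resultant of the tractions over an arc of radius a
# must balance the applied load F2 (the second equilibrium integral above).
a = 1.0
theta = np.linspace(-np.pi, 0.0, 100001)
integrand = sigma_rr(a, theta) * np.sin(theta) * a
f2 = np.sum((integrand[:-1] + integrand[1:]) / 2.0) * (theta[1] - theta[0])
print(F2 + f2)  # ~0
```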
**Minolta AF Fish-Eye 16mm f/2.8** Minolta AF Fish-Eye 16mm f/2.8: Originally produced by Minolta, and currently produced by Sony, the AF Fish-Eye 16mm is a prime fisheye lens compatible with cameras using the Minolta A-mount and Sony A-mount lens mounts. It is a full-frame fisheye lens with a 180° viewing angle. The front of the lens does not have a mount for filters. Rather, a number of filters are built in: Normal, 056, B12, and either FLW (in older versions) or A12 (in newer versions). The filters are selected by a rotating dial on the body of the lens.
**Dialect levelling in Britain** Dialect levelling in Britain: Dialect levelling is the means by which dialect differences decrease. For example, in rural areas of Britain, although English is widely spoken, pronunciation and grammar have historically varied. During the twentieth century, more people moved into towns and cities, which standardised English. Dialect levelling can also develop under the influence of various types of media. Background: Many of the great works in English dialectology were prompted by fears that the dialects would soon die out and by a desire to record the dialect in time. Joseph Wright began his English Dialect Dictionary by saying "It is quite evident from the letters daily received at the 'Workshop' that pure dialect speech is rapidly disappearing from our midst, and that in a few years it will be almost impossible to get accurate information about difficult points." Harold Orton told his fieldworkers on the Survey of English Dialects that they had to work quickly in "a last-minute exercise to scoop out the last remaining vestige of dialect before it died out under the pressures of modern movement and communication." The results of the Atlas Linguarum Europae in England, collected in the late 1970s, did indeed find a reduction of lexical diversity since Harold Orton's survey. Dialect levelling is a linguistic phenomenon studied and observed by dialectologists and sociolinguists. Researchers differ on what constitutes a dialect in this context. Chambers and Trudgill (1984) choose to view a dialect as a subdivision of a particular language, such as the Parisian dialect of French and the Lancashire dialect of English. They feel that standard English is just as much a dialect as any other form of English and that it is incorrect to suppose that one variety is in any way linguistically superior to another. Background: Sociolinguists study relations between language and social groups. This includes topics such as the differences in language usage between men and women, older and younger people, and lower and higher social classes, as well as attitudes towards various language forms. The techniques developed by sociolinguists can be used to study the phenomenon of dialect levelling (Boves & Gerritsen, 1995). Development: Dialect levelling occurs mostly in socially and geographically mobile groups and in contexts where people have a tendency to adapt to their listeners in order to ensure they are better understood. People who come to a new town adapt their language and unconsciously leave out local language elements so that the hearer will understand them better. As a result, dialect forms that have a wide geographical and social range tend to be used more often. Eventually these short-term adaptations become long-term changes. Though most of the adjustments happen largely unconsciously, some people are more open to language change and adaptation than others, and this influences the extent to which dialect levelling takes place (Kerswill, 2003; Milroy, 2002). Development: Historical examples show that dialect levelling generally takes place anywhere and at any time in situations of extensive mobility and cultural and linguistic mixing. One historical example of dialect levelling is the change in the London dialect that took place in the fifteenth century, when immigrants from the northern counties moved to London. Their dialect diffused into southern forms, and some elements were incorporated into standard English (Milroy, 2002).
Dialect levelling has become a widespread phenomenon in Britain. Southern features seem to be spreading throughout the whole country, and typical vowel sounds seem to be centred on big cities like Glasgow, Manchester, or Newcastle (Kerswill, 2001). Development: Due to an increase in mobility and migration, and to media which portray variety in language as something positive, dialect levelling seems to take place more quickly than before (Kerswill, 2003). Cases: The following are the results of several research projects with a focus on dialect levelling. They enhance our knowledge of the dialect levelling that is taking place today in Great Britain. There has been research on the phenomenon of dialect levelling in Hull, Milton Keynes, and Reading (Williams & Kerswill, 1999), and the Survey of British Dialect Grammar covered the metropolitan regions of Blackburn, Birmingham, Cardiff, Nottingham, Glasgow, London, Liverpool, Manchester, Newcastle, Preston, Sheffield, Teesside, Coventry, Swansea, Brighton, Leeds, and Bristol (Cheshire, Edwards, & Whittle, 1989). The major urban centres of Britain have certain grammatical features in common in their spoken English, and so we could say that a 'standardising' non-standard variety of English is developing. Cases: Social network and class culture as independent influences on language change (Kerswill & Williams, 2000) In Milton Keynes, a new phenomenon has been investigated in linguistics research. A large group of working-class people have moved to Milton Keynes, away from their home towns and kin, in the hope of finding better housing. Unlike traditional working-class communities, they do not form close-knit networks and tend to keep themselves to themselves. This type of network is common with migrants everywhere. For some features, especially vowels, the levelling leans towards the Received Pronunciation norm. For other features, especially consonants, the levelling leans more towards a general, southern, non-standard norm. Cases: Strong class awareness amongst youngsters and strong prejudice against 'posh' people explain why Standard English and Received Pronunciation are not fully adopted. For the working class of Milton Keynes, it is a priority to establish a distinction between themselves and the upper class. This indicates that mobility and social class appear to be two separate influences that do not necessarily go hand in hand. Examples: The following are examples of new language features that are currently spreading throughout Britain and slowly taking the place of typical regional features. One is [θ]-fronting, in which -th- is pronounced as [f] or [v] (Kerswill, 2003). The following are the 13 most-reported dialect features in the metropolitan regions of Blackburn, Birmingham, Cardiff, Nottingham, Glasgow, London, Liverpool, Manchester, Newcastle, Preston, Sheffield, Teesside, Coventry, Swansea, Brighton, Leeds, and Bristol, according to the Survey of British Dialect Grammar.
Examples: 'Them' as demonstrative determiner (Look at them big spiders); 'should of' (You should of left half an hour ago); absence of plural marking (To make a big cake you need two pound of flour); 'what' as subject relative pronoun (The film what was on last night was good); 'never' as past tense negator (No, I never broke that); 'there was' with plural 'notional' subject (There was some singers here a minute ago); 'there's' with plural 'notional' subject (There's cars outside the church); perfect participle 'sat' following BE auxiliary (She was sat over there looking at her car); adverbial 'quick' (I like pasta. It cooks really quick); 'ain't'/'in't' (that ain't working / that in't working); 'give me it' (give me it, please); perfect participle 'stood' following BE auxiliary (And he was stood in the corner looking at it); and non-standard 'was' (we was singing) (Cheshire, Edwards, & Whittle, 1989). Estuary English is a well-known example of modern-day dialect levelling, the result of levelling that has been taking place around the Thames Estuary over the past twenty years. It is situated somewhere in the middle between popular London speech and Received Pronunciation, and people arrive at it from above and from below. As people climb the social ladder, they tend to correct their speech. They get rid of grammatically non-standard features such as double negatives, the word 'ain't' and past tense forms such as 'writ' for 'wrote' and 'come' for 'came'. They also adapt their accent, for example pronouncing the ⟨h⟩ instead of dropping it, replacing glottal stops with [t] as in water, and changing some vowels. Some claim that Estuary English is becoming the new standard, replacing Received Pronunciation, and that Received Pronunciation speakers are adopting it themselves (Kerswill, 2001; Milroy, 2002). Influences: Migration within a country Over the past forty years people have moved out of the cities and into dormitory towns and suburbs. In addition, thirty-five new towns, such as Milton Keynes, were created across the country (Kerswill, 2001). Industrialisation often causes an increase in work opportunities in a certain area, causing people to move and evoking a general willingness to adopt certain language features that are typical of that area (Milroy, 2002). Influences: In general, first-generation adult migrants show only slight language changes, whereas their children produce a more homogeneous language. When these children become teenagers, they often feel pressured to conform to the language of their peer group, and thus a new levelled language variety starts to emerge (Williams & Kerswill, 1999). Influences: Lateral (geographical) mobility Modern transportation has made travel easier and more efficient. This results in people travelling larger distances to work and meeting people from different areas at work, which in turn exposes them to different dialects and encourages dialect levelling. It causes employers to expect that employees be flexible and willing to work at different locations or to change locations throughout their careers. It produces language missionaries, people who move away from their native area for a period of time and then return, bringing with them some traces of a foreign dialect. It also means that parents nowadays often do not come from the same community, causing dialect levelling to take place within the family (Williams & Kerswill, 1999).
Influences: Vertical social mobility When people are promoted, they often feel the need to adapt their language so that a wider group of people will understand them more clearly. They often leave out typical regional variants and use more widely known variants instead. Schools realise the need for a common language variety and encourage pupils to adopt Standard English (Williams & Kerswill, 1999). Influences: People approach new language forms positively Popular media such as TV and radio stations broadcast mostly from London and the south, causing traces of southern accents to be found in the north. Nowadays, however, one finds a generally positive attitude towards different language forms, as non-Received Pronunciation English can be heard on every radio and television station. BBC newsreaders still form an exception in this respect, though even there Welsh and Scottish accents seem to be accepted. This positive attitude towards different varieties of English seems to catch on with the general public (Kerswill, 2001; Williams & Kerswill, 1999). Influences: Women are generally more open to new language varieties Several studies show that women adopt widely used language features more easily than men. The language of women tends to be more neutral and shows fewer regional variants, though their language does not necessarily come closer to Received Pronunciation. Dialect levelling often starts with women but quickly spreads to the rest of the family (Kerswill, 2003). Influences: Speakers want to maintain a unique dialect that distinguishes them from others In some cases more than others, linguistic distinctiveness seems to be a sociolinguistic priority. When having a conversation with someone from a different dialect community, some people like to emphasise their own dialect (Kerswill & Williams, 2000; Milroy, 2002). Amongst youngsters of all classes there is often a strong class awareness. Working-class teenagers, for example, are known to make strong statements against 'posh' people. These class-based norms influence a person's willingness to adopt standard English and Received Pronunciation and their dislike of different language varieties (Kerswill & Williams, 2000). Natural factors Not all language changes are caused by external influences. Sometimes language changes through the course of time. One example of such a change is [θ]-fronting (Kerswill, 2003). Related items: Geographical diffusion Over the larger area of Great Britain, geographical diffusion tends to take place as opposed to dialect levelling. In this case specific language features spread out from a densely populated, economically and culturally dominant centre. Where dialect levelling takes place locally, geographical diffusion covers large areas (Kerswill, 2003). Social dialect The Survey of British Dialect Grammar suggests the term social dialect (sociolect), as opposed to regional dialect, because the dialect a person uses seems to be more closely related to the person's social activities and relationships with other people than to the place where the person resides (Cheshire, Edwards, & Whittle, 1989). Koinéisation Koinéisation is the process by which speakers create a new language variety based on the dialects of the speakers with whom they have come into contact (Milroy, 2002). Standardisation of language The formalisation of a language variety with the intervention of an institution (Milroy, 2002).
**Topical fluoride** Topical fluoride: Topical fluorides are fluoride-containing drugs indicated for the prevention and treatment of dental caries, particularly in children's primary dentitions. The dental-protecting property of topical fluoride can be attributed to multiple mechanisms of action, including the promotion of remineralization of decalcified enamel, the inhibition of cariogenic microbial metabolism in dental plaque, and the increase of tooth resistance to acid dissolution. Topical fluoride is available in a variety of dose forms, for example toothpaste, mouth rinses, varnish and silver diamine solution. These dosage forms possess different absorption mechanisms and consist of different active ingredients. Common active ingredients include sodium fluoride, stannous fluoride and silver diamine fluoride. These ingredients account for different pharmacokinetic profiles, thereby having varied dosing regimens and therapeutic effects. A minority of individuals may experience certain adverse effects, including dermatological irritation, hypersensitivity reactions, neurotoxicity and dental fluorosis. In severe cases, fluoride overdose may lead to acute toxicity. While topical fluoride is effective in preventing dental caries, it should be used with caution in specific situations to avoid undesired side effects. Medical uses: Topical fluoride formulations are effective measures for preventing and arresting the progression of dental caries, especially early childhood caries (ECC). Domestic products such as toothpaste and mouthwash can be used on a regular basis at home, while silver diamine solution therapy can be administered by specialists in dental clinics. Mechanism of action: Topical fluoride serves to prevent early dental caries primarily in three ways: promoting remineralization of decalcified enamel, inhibiting the cariogenic microbial processes in dental plaque, and increasing tooth resistance to acid breakdown. Promotion of remineralization of decalcified enamel Fluoride has a high tendency to react with the calcium hydroxyapatite Ca10(PO4)6(OH)2 in tooth enamel due to its high affinity for metals. It subsequently replaces the hydroxide group in hydroxyapatite to precipitate calcium fluorapatite Ca5(PO4)3F. These fluorapatite precipitates scavenge excess phosphate and calcium in the saliva to form a supersaturated solution for remineralization. Mechanism of action: Inhibition of the cariogenic microbial processes in dental plaque Topical fluoride also serves as an antimicrobial agent, reducing demineralization by inhibiting the growth of cariogenic microorganisms in dental plaque. Fluoride ions readily combine with hydrogen cations to produce hydrogen fluoride. Hydrogen fluoride subsequently acidifies the bacterial cytoplasm, inactivating enzymes essential for bacterial metabolism, including enolase and proton-releasing adenosine triphosphatase. As topical fluoride lowers the pH, bacteria have to consume more energy to maintain a neutral internal environment, leaving less energy for reproduction and for further generation of polysaccharides and acids. These polysaccharides are necessary for adherence to enamel, while these acids are essential for the synthesis of bacterial enzymes, for example immunoglobulin A protease. These processes contribute to reducing the risk of dental caries by inhibiting microbial metabolism in the tooth plaque. Mechanism of action: Increase in tooth resistance to acid dissolution Topical fluoride can increase the resistance of enamel to acid.
Bacteria in dental plaque, including Streptococcus mutans, generate acids during fermentation, maintaining a low-pH environment. These acids eventually dissociate the hydroxyapatite in teeth once the pH falls below the critical value (pH 5.5). The fluorapatite formed by topical fluorides has a lower critical pH (pH 4.5) than normal enamel; it is therefore more acid-resistant and not prone to degrade even in an acidic environment. This mechanism helps decelerate the rate of tooth demineralization. Dosage forms: Toothpaste The daily use of fluoride-containing toothpaste is recognized as the key factor contributing to the global reduction in dental caries over recent decades. Fluoride-containing toothpaste can be classified into two types, namely low-fluoride and high-fluoride toothpaste. Low-fluoride toothpaste, depending on brand, generally contains 0.22% to 0.31% fluoride. These fluorides are often manufactured in the form of sodium fluoride, stannous fluoride, or sodium monofluorophosphate (MFP). High-fluoride toothpaste typically contains 1.1% sodium fluoride, namely four times the concentration of low-fluoride toothpaste. People using high-fluoride toothpaste should avoid eating or rinsing their mouth for at least 30 minutes after treatment for maximal therapeutic effect. Some fluoride-containing toothpaste incorporates extra chemical ingredients for additional purposes. For instance, calcium carbonate and magnesium carbonate are added as abrasives to remove dental plaque on teeth, while strontium chloride and potassium nitrate are added as anti-sensitivity agents for individuals who have sensitive teeth. Dosage forms: Mouth rinse Fluoride mouth rinse is usually used as adjunctive therapy with other topical fluoride products. It is generally prepared in the form of sodium fluoride, which is retained in the saliva after the mouth rinse is spat out, thus helping to prevent tooth decay. A 0.02% fluoride mouth rinse is commonly administered twice daily, while a 0.05% rinse is administered once daily at bedtime after thoroughly brushing the teeth. People using fluoride mouth rinse should likewise avoid eating or rinsing their mouth for at least 30 minutes after administration for maximal therapeutic effect. Dosage forms: Silver diamine solution Silver diamine fluoride (SDF) is a transparent solution prepared by dissolving silver ions and fluoride ions in ammonia water. It is approved in a few places, including Hong Kong, China, and the United States, for the prevention of early childhood caries (ECC) and the relief of tooth sensitivity. SDF has multiple advantages over traditional fluoride varnish therapy: SDF is a non-invasive treatment with higher acceptability among children and the elderly. Dosage forms: The materials required for SDF are inexpensive, reducing the financial burden on patients. There is currently no evidence that SDF causes serious adverse reactions, for example acute toxicity or infection of the dental pulp, rendering it a safer therapy. Dosage forms: SDF followed by stannous fluoride has been shown to be more effective in reducing dental caries in children's primary molars. However, the SDF solution results in permanent black staining of the decayed portion of the teeth, which may be unacceptable to some individuals with aesthetic concerns. SDF, in addition to performing the functions of conventional topical fluorides, is suggested to have collagen-conserving properties and an additional antibacterial action owing to the presence of silver.
While multiple clinical trials demonstrate that 38% SDF is more effective than 5% sodium fluoride varnish in preventing ECC, it is currently unavailable in many countries due to insufficient research data. Adverse effects: Increased exposure to fluoride may lead to certain adverse side effects, including dental fluorosis and developmental neurotoxicity. Other, rarer side effects include skin rash and hypersensitivity reactions. In severe cases, fluoride overdose may lead to acute toxicity. Adverse effects: Dental fluorosis Dental fluorosis is a dose-dependent adverse drug effect characterised by temporary white marks. It can be induced by increased fluoride exposure, typically from stannous fluoride-containing products or fluoridated water. Excess intake of fluoride leads to an overabundance of structurally weak fluorapatite formed inside the enamel, resulting in increased brittleness of teeth. In severe dental fluorosis, brown or yellow staining may appear on the teeth. Children under the age of eight are susceptible to dental fluorosis. Adverse effects: Developmental neurotoxicity Overdose of fluoride can potentially cause neurotoxicity during early development. While the exact pathophysiology of fluoride-induced developmental toxicity is not completely understood, most research suggests that excessive fluoride intake may result in the formation of aluminium fluoride (AlF3 or AlF4). Aluminium fluoride structurally mimics phosphate and is thus capable of crossing the blood-brain barrier via phosphate transporters. These fluorides in the brain may cause neurodegenerative disorders, including Alzheimer's disease and Parkinson's disease, and IQ decline. Nevertheless, topical fluoride is less likely to cause developmental neurotoxicity than fluoridated water. Adverse effects: Acute fluoride toxicity Fluoride overdose may cause acute toxicity. While the underlying mechanism of fluoride toxicity is unclear, most studies ascribe fluoride toxicity to its capacity to inhibit metalloproteins by imitating their metallofluoride substrates. Inhibition of metalloproteins slows down multiple signalling pathways and disrupts cellular organelles, subsequently producing oxidative stress and cell cycle arrest. Fluoride overload is also suggested to be linked to pH and electrolyte imbalances, creating an environment unfavourable for cell survival. These mechanisms can ultimately result in cellular malfunction and cell death. Cautions: Toothpaste, cream, mouthrinse and varnish Most topical fluoride preparations with a concentration exceeding 0.6 ppm should be avoided, to reduce the risk of dental fluorosis, if the drinking water has already been fluoridated. Swallowing of topical fluoride products should be avoided in order to prevent systemic adverse effects, for example skeletal fluorosis. While an appropriate amount of fluoride consumption during pregnancy is beneficial in preventing early childhood caries (ECC), pregnant women should avoid excessive fluoride exposure, since it may predispose their children to skeletal fluorosis in later childhood. Most topical fluoride preparations containing more than 1.1 ppm fluoride should be avoided in children younger than 6 years of age, unless otherwise instructed by a healthcare practitioner. Topical fluoride preparations containing benzyl alcohol derivatives, polysorbate 80 or propylene glycol should be used with caution; these ingredients may precipitate severe adverse effects in neonates.
Silver diamine fluoride Silver diamine fluoride is contraindicated in patients with silver allergy, oral ulceration or severe gum disease, as these conditions can give rise to painful responses on contact with the acid or ammonia in SDF.
**Design manufacture service** Design manufacture service: Design manufacture service (DMS) is a business model that combines contract product design with contract manufacturing as a service to other companies that lack some or all of the required resources. Often the customer is focused on other aspects of their business, or their existing resources may simply be overloaded. DMS providers may also provide other services such as order fulfillment, logistics and aftermarket service. Design manufacture service: Because of the high skill levels required in each field, DMS firms specialize in different product categories. These might include medical devices, medical instruments, automotive, communications, etc. Typically, these are areas that require a higher level of internal infrastructure or regulatory control than the customer possesses. Certain industries, including aviation and medical devices, demand special development and manufacturing practices required by international, federal and local regulations. Design manufacture service: One of the key differences between the DMS model and other contract manufacturing models, such as the original design manufacturer (ODM) model, is the way intellectual property (IP) is treated. In the DMS model, IP comes from three basic sources: (a) IP previously owned and contributed by the customer, (b) IP previously owned and contributed to the product by the DMS, and (c) new or original IP created at the request of, and paid for by, the customer. The latter (c) is commonly referred to as "work for hire". With DMS, the customer ultimately has rights to all of the IP embodied in the product. This is especially important in industries like medical devices where customer-owned IP is critical. Some contract product development models, including original design manufacturing (ODM), allow the developer, instead of the customer, to retain IP rights.
**Oxford Chemistry Primers** Oxford Chemistry Primers: The Oxford Chemistry Primers are a series of short texts providing accounts of a range of essential topics in chemistry and chemical engineering written for undergraduate study. The first primer Organic Synthesis: The Roles of Boron and Silicon was published by Oxford University Press in 1991. As of 2017 there are 100 titles in the series, written by a wide range of authors. The editors are Steve G. Davies (Organic Chemistry), Richard G. Compton (Physical Chemistry), John Evans (Inorganic Chemistry) and Lynn Gladden (Chemical Engineering).
**Extrapolation** Extrapolation: In mathematics, extrapolation is a type of estimation, beyond the original observation range, of the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results. Extrapolation may also mean extension of a method, assuming similar methods will be applicable. Extrapolation may also apply to human experience, to project, extend, or expand known experience into an area not known or previously experienced, so as to arrive at a (usually conjectural) knowledge of the unknown (e.g. a driver extrapolates road conditions beyond their sight while driving). The extrapolation method can be applied in the interior reconstruction problem. Methods: A sound choice of which extrapolation method to apply relies on a priori knowledge of the process that created the existing data points. Some experts have proposed the use of causal forces in the evaluation of extrapolation methods. Crucial questions are, for example, whether the data can be assumed to be continuous, smooth, possibly periodic, and so on. Linear Linear extrapolation means creating a tangent line at the end of the known data and extending it beyond that limit. Linear extrapolation will only provide good results when used to extend the graph of an approximately linear function, or not too far beyond the known data. If the two data points nearest the point $x^*$ to be extrapolated are $(x_{k-1}, y_{k-1})$ and $(x_k, y_k)$, linear extrapolation gives the function $$y(x^*) = y_{k-1} + \frac{x^* - x_{k-1}}{x_k - x_{k-1}}(y_k - y_{k-1})$$ (which is identical to linear interpolation if $x_{k-1} < x^* < x_k$). It is possible to include more than two points, averaging the slope of the linear interpolant by regression-like techniques on the data points chosen to be included. This is similar to linear prediction; a minimal code sketch follows at the end of this section. Methods: Polynomial A polynomial curve can be created through the entire known data or just near the end (two points for linear extrapolation, three points for quadratic extrapolation, etc.). The resulting curve can then be extended beyond the end of the known data. Polynomial extrapolation is typically done by means of Lagrange interpolation or using Newton's method of finite differences to create a Newton series that fits the data. The resulting polynomial may be used to extrapolate the data. Methods: High-order polynomial extrapolation must be used with due care. For the example data set and problem in the figure above, anything above order 1 (linear extrapolation) will possibly yield unusable values; an error estimate of the extrapolated value will grow with the degree of the polynomial extrapolation. This is related to Runge's phenomenon. Methods: Conic A conic section can be created using five points near the end of the known data. If the conic section created is an ellipse or circle, when extrapolated it will loop back and rejoin itself. An extrapolated parabola or hyperbola will not rejoin itself, but may curve back relative to the X-axis. This type of extrapolation could be done with a conic sections template (on paper) or with a computer. Methods: French curve French curve extrapolation is a method suitable for any distribution that has a tendency to be exponential, but with accelerating or decelerating factors. This method has been used successfully in providing forecast projections of the growth of HIV/AIDS in the UK since 1987 and of variant CJD in the UK for a number of years.
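The following is a minimal sketch of the two-point linear extrapolation formula above (the function name is ours, introduced only for illustration):

```python
# Two-point linear extrapolation, following the formula
# y(x*) = y_{k-1} + (x* - x_{k-1}) / (x_k - x_{k-1}) * (y_k - y_{k-1}).

def linear_extrapolate(xs, ys, x_star):
    """Extend the line through the last two known points out to x_star."""
    (x0, x1), (y0, y1) = xs[-2:], ys[-2:]
    return y0 + (x_star - x0) / (x1 - x0) * (y1 - y0)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # samples of y = 2x, so extrapolation is exact
print(linear_extrapolate(xs, ys, 6.0))  # 12.0
```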
Another study has shown that extrapolation can produce the same quality of forecasting results as more complex forecasting strategies. Methods: Geometric Extrapolation with error prediction Such an extrapolation can be created with three points of a sequence and the "moment" or "index"; this type of extrapolation is reported to reproduce the next term exactly for a large percentage of the known sequences in the OEIS database. Example of extrapolation with error prediction, for the sequence [1, 2, 3, 5] and the ratio function f1(x, y) = x / y: d1 = f1(3, 2) = 1.5; d2 = f1(5, 3) ≈ 1.67; m = the last term of the sequence (5); n = the second-to-last term (3); the predicted next term is round((n × d1 − m) + (m × d2)) = round((3 × 1.5 − 5) + (5 × 1.67)) = round(7.83) = 8, extending the sequence with its Fibonacci-like next term (a runnable transcription of this procedure is given below). Quality: Typically, the quality of a particular method of extrapolation is limited by the assumptions about the function made by the method. If the method assumes the data are smooth, then a non-smooth function will be poorly extrapolated. Quality: In terms of complex time series, some experts have discovered that extrapolation is more accurate when performed through the decomposition of causal forces. Even for proper assumptions about the function, the extrapolation can diverge severely from the function. The classic example is truncated power series representations of sin(x) and related trigonometric functions. For instance, taking only data from near x = 0, we may estimate that the function behaves as sin(x) ~ x. In the neighborhood of x = 0, this is an excellent estimate. Away from x = 0, however, the extrapolation moves arbitrarily away from the x-axis while sin(x) remains in the interval [−1, 1]; that is, the error increases without bound. Quality: Taking more terms in the power series of sin(x) around x = 0 will produce better agreement over a larger interval near x = 0, but will produce extrapolations that eventually diverge away from the x-axis even faster than the linear approximation. Quality: This divergence is a specific property of extrapolation methods and is only circumvented when the functional forms assumed by the extrapolation method (inadvertently or intentionally due to additional information) accurately represent the nature of the function being extrapolated. For particular problems, this additional information may be available, but in the general case, it is impossible to satisfy all possible function behaviors with a workably small set of potential behaviors. In the complex plane: In complex analysis, a problem of extrapolation may be converted into an interpolation problem by the change of variable ẑ = 1/z. This transform exchanges the part of the complex plane inside the unit circle with the part of the complex plane outside of the unit circle. In particular, the compactification point at infinity is mapped to the origin and vice versa. Care must be taken with this transform, however, since the original function may have had "features", for example poles and other singularities, at infinity that were not evident from the sampled data. In the complex plane: Another problem of extrapolation is loosely related to the problem of analytic continuation, where (typically) a power series representation of a function is expanded at one of its points of convergence to produce a power series with a larger radius of convergence. In effect, a set of data from a small region is used to extrapolate a function onto a larger region. In the complex plane: Again, analytic continuation can be thwarted by function features that were not evident from the initial data.
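A literal transcription of the geometric procedure above into runnable code might look like the following C++ sketch. The reading of n as the second-to-last term and m as the last term is an assumption made explicit above, and the function name predict_next is illustrative only.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Geometric extrapolation with "error prediction": d1 and d2 are the two
// most recent term-to-term ratios, m is the last term, n the second-to-last,
// and the next term is estimated as round((n * d1 - m) + m * d2).
long predict_next(const std::vector<double>& seq) {
    const std::size_t k = seq.size();          // requires at least 3 terms
    const double d1 = seq[k - 2] / seq[k - 3]; // ratio of the earlier pair
    const double d2 = seq[k - 1] / seq[k - 2]; // ratio of the latest pair
    const double m = seq[k - 1];
    const double n = seq[k - 2];
    return std::lround((n * d1 - m) + m * d2);
}

int main() {
    // round((3 * 1.5 - 5) + 5 * (5 / 3)) = round(7.83...) = 8
    std::cout << predict_next({1, 2, 3, 5}) << '\n'; // prints 8
}
```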
Also, one may use sequence transformations like Padé approximants and Levin-type sequence transformations as extrapolation methods that lead to a summation of power series that are divergent outside the original radius of convergence. In this case, one often obtains rational approximants. Fast: The extrapolated data are often convolved with a kernel function. After extrapolation, the size of the data is increased N times, where N is approximately 2–3. If this data needs to be convolved with a known kernel function, the numerical calculation increases by a factor of about N log(N), even with the fast Fourier transform (FFT). There exists an algorithm that analytically calculates the contribution from the extrapolated part of the data; its calculation time is negligible compared with the original convolution calculation. Hence, with this algorithm, the cost of a convolution using the extrapolated data is hardly increased. This is referred to as fast extrapolation. Fast extrapolation has been applied to CT image reconstruction. Extrapolation arguments: Extrapolation arguments are informal and unquantified arguments which assert that something is probably true beyond the range of values for which it is known to be true. For example, we believe in the reality of what we see through magnifying glasses because it agrees with what we see with the naked eye but extends beyond it; we believe in what we see through light microscopes because it agrees with what we see through magnifying glasses but extends beyond it; and similarly for electron microscopes. Such arguments are widely used in biology in extrapolating from animal studies to humans and from pilot studies to a broader population. Like slippery slope arguments, extrapolation arguments may be strong or weak depending on such factors as how far the extrapolation goes beyond the known range.
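The unit-circle exchange described in the complex-plane section is easy to verify numerically. The following C++ fragment is only a sanity check of the substitution ẑ = 1/z, not an extrapolation routine:

```cpp
#include <complex>
#include <iostream>

// The change of variable z_hat = 1/z maps points outside the unit circle
// to points inside it (and vice versa), turning extrapolation beyond
// |z| = 1 into interpolation within it.
int main() {
    const std::complex<double> z(3.0, 4.0);     // |z| = 5, outside the circle
    const std::complex<double> z_hat = 1.0 / z; // |z_hat| = 0.2, inside
    std::cout << std::abs(z) << " -> " << std::abs(z_hat) << '\n'; // 5 -> 0.2
}
```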
**Stone picker** Stone picker: A stone picker (or rock picker) is an implement that sieves through the top layer of soil to separate and collect rocks and soil debris from good topsoil. It is usually tractor-pulled. A stone picker is similar in function to a rock windrower (rock rake); a stone picker generally digs to greater depths to remove stones and rocks. Stone picker: Stone pickers are used in farming and landscaping, where stones need to be removed from the soil and ground surface to prevent damage to other farm machinery (such as hay balers, combines, and mowers), improve the soil for crop production, or improve the appearance of the ground surface in preparation for a lawn or a golf course. Surface stones and large rocks often left from plowing can damage a hay baler, the header or reciprocating knives on a combine, and the blades on a rotary mower. Land with rock instead of fine soil is often less useful for crops, so removing stones from the soil also ensures a more consistent yield. Additionally, stone pickers are particularly useful for crops forming tubers (such as potatoes) in the ground. Stone picker: A stone picker has digging teeth, a conveyor system, a sieve or screen, and a stone bin. The digging teeth are at the leading edge of the stone picker and remove soil, which is placed on the conveyor system. If the sieve is not combined with the conveyor system, the conveyor system transports the stones and large rocks to a bin or hopper for periodic dumping. Some stone pickers require a large tractor, generally over 100 horsepower, equipped with hydraulics and power take-off (PTO) driven mechanisms; PTO-pump-driven equipment can use 60-horsepower tractors. The tractor's hydraulics control the depth to which the stone picker digs to excavate soil material, whereas the PTO drives the conveyor and picking system.
**Back-released velar click** Back-released velar click: A velar click, or more precisely a back-released velar click or back-released uvular click, is a click consonant found in paralinguistic use in languages across Africa, such as Wolof. The tongue is in a similar position to other click articulations, such as an alveolar click, and like other clicks, the airstream mechanism is lingual. However, unlike other clicks, the salient sound is produced by releasing the rear (velar or uvular) closure of the tongue rather than the front closure. Consequently, the air that fills the vacuum comes from behind the tongue, from the nasal cavity and the throat. Velar clicks are always voiceless and typically nasal ([ʞ̃̊], [ᵑ̊ʞ] or [ᶰ̥ʞ]), as nasal airflow is required for a reasonably loud production. IPA symbol withdrawn: In 1921, the International Phonetic Association (IPA) adopted Daniel Jones' symbol ⟨ʞ⟩, a turned lowercase K, for the palatal clicks of Khoekhoe. Jones seems to have first applied the label "velar" in an IPA publication in 1928. At the time, little was known about the articulation of clicks, and different authors used different labels for the same sounds – Doke, for example, called the same clicks 'alveolar'. The last mention of the "velar" clicks was in the 1949 Principles. The symbol was omitted when the other three click letters were moved into the symbol chart in 1951, and was not mentioned again. IPA symbol withdrawn: An actual velar click, in the sense that term is used with the languages of southern Africa, is not possible. A click is articulated with two closures of the tongue or lips. The rear articulation of all clicks is velar or uvular, and the families of dental, alveolar, palatal, and bilabial clicks are defined by the front closure, which is released to cause the influx of air from the front of the mouth that identifies the type of click. A forward closure in the velar region would leave no room for the air pocket that generates that influx of air. From 2008 to 2015, the unused letter was picked up by the extensions to the IPA to mark a velodorsal articulation in speech pathology. IPA symbol withdrawn: However, velar clicks are possible in the sense that the release sequence of the tongue closures can be reversed: in paralinguistic use in languages such as Wolof, it is the rear (often velar) closure rather than the front one that is released to produce the sound, and such clicks have also been called 'velar'. The IPA letter was resurrected for such sounds, and was dropped from the extIPA to avoid confusion with that usage. Production: Lionnet describes the clicks as follows: Like any other click, [ʞ] is produced with an ingressive lingual (velaric) airstream: the oral cavity is closed in two places: at the velum and at the front of the mouth. Air rarefaction in the intra-oral cavity is achieved mostly through tongue body lowering. However, instead of the front closure, the velar closure is released, allowing air to rush into the mouth from the back, either from the nasal cavity or from the post-velar cavity if the velo-pharyngeal port is closed. Production: Velar clicks are produced with closed lips in those languages known to have them. For this reason, it was at first thought that the front articulation was labial: This click uses the ingressive airstream mechanism, just like regular clicks. The oral cavity is closed in two places: the lips and the palate or the velum. The tongue acts as a piston, with the only difference from velaric ingressive [i.e. 
other] clicks being the path through which air flows into the oral cavity: in clicks produced with the mouth open the air flows in through the mouth, and in this click it flows in through the nasal cavity. Production: However, the labial closure does not appear to be distinctive. Although articulatory measurements have not been done, it appears that the two relevant articulations are dorsal and coronal: the rear articulation appears to be at the very front of the velum, near the hard palate (at least in Wolof and Laal), and the front articulation is dental or alveolar. The lips are closed merely because that is their rest position; opening the lips has no effect on the consonant. That is, the setup of a velar click is very much like one of the coronal clicks, [ǀ, ǂ, ǃ], but with the roles of the two closures of the tongue reversed. Production: In Mundang and Kanuri, the rear articulation is said to be uvular and back-velar rather than front-velar. Comparisons between the languages have yet to be done. Occurrence: Paralinguistic velar clicks are attested from a number of languages in west and central Africa, from Senegal in the west to northern Cameroon and southern Chad in the east. The literature reports at least Laal, Mambay, Mundang, and Kanuri in the east, and Wolof and Mauritanian Pulaar in the west. In Wolof, a back-released velar click is in free variation with a lateral click or an alveolar click. It means 'yes' when used once, and 'I see' or 'I get it' when repeated. It is also used for back-channeling. In Laal as well, it is used for "strong agreement" and back-channeling, and is in free variation with the lateral click. It appears to have the same two functions in the other languages.
**Fire Emblem Heroes** Fire Emblem Heroes: Fire Emblem Heroes (Japanese: ファイアーエムブレム ヒーローズ, Hepburn: Faiā Emuburemu Hīrōzu) is a free-to-play tactical role-playing game developed by Intelligent Systems and published by Nintendo for Android and iOS. The game is a mobile spin-off of the Fire Emblem series featuring its characters, and was released on February 2, 2017. Fire Emblem Heroes received a number of awards and nominations in "Best Mobile Game" categories. As of 2020, the game had grossed over $656 million worldwide, making it Nintendo's highest-grossing mobile game. Gameplay: Fire Emblem Heroes is a tactical role-playing game. Players control a team of up to four characters ("Heroes") against enemy teams of varying sizes on an 8×6 grid map. Different characters have different movement restrictions; for example, armored units have a shorter range than cavalry units, but cavalry units cannot enter forest tiles. Flying units can enter most tiles, even ones impassable to all other units such as water or mountains. The game strictly alternates between a player phase and an enemy phase every turn. During the player's phase, Heroes can attack enemy characters when in range; if both the attacker and defender have the same range, the defender will counterattack if still alive. Heroes deal either physical or magical damage; they also have a "color" that feeds a rock-paper-scissors-like system in which some units have advantageous match-ups over others. After the player has moved and optionally attacked with all of their Heroes, the enemy phase occurs, in which the game's AI does the same for the opposing team of characters. The player, in general, seeks to lure enemy units into disadvantageous match-ups through careful character positioning on the map. Unlike other Fire Emblem titles, there is no element of randomness or chance in battles; an engagement between two characters is perfectly deterministic, as is the enemy AI, so a given strategy will either win or lose consistently. Fire Emblem Heroes has a set of "story" missions divided by chapters for the player to complete, rotating challenge missions, a training tower for increasing characters' strength in random battles, fights against other players' teams (albeit controlled by the game's AI) in the "Arena" and in "Aether Raids", special challenges involving "Brigades" in which teams of eight or more characters are controlled on a larger map than usual, and a variety of other content to complete. Gameplay: The game's currency is known as "orbs", which can be used to acquire new heroes as well as several other quality-of-life items. Missions cost stamina to play, while stamina is recovered over time; the exact cost of missions and the maximum stamina that can be held have varied over the life of the app. Orbs are either earned by completing in-game activities or bought with in-app purchases. At its initial release, Fire Emblem Heroes players could potentially access up to 100 distinct controllable units, with players beginning with access to Alfonse, Sharena, and Anna. These three members of the "Order of Heroes" were created specifically for Heroes, with the remainder of the controllable characters being crossovers from the rest of the franchise. Since release, additional characters have been added to the list, including from games that were previously not featured. In general, most characters are gained via a "Summon" tab which works on a gacha system. Orbs are spent on randomized heroes, with some heroes being common and other heroes being rare.
Other heroes are gained for free, by completing in-game missions of varying difficulty and effort. If a player gains duplicates of the same character, the duplicates can be merged to create a single more powerful character. Gameplay: In March 2017, the game added "skill inheritance", which allows characters to mix skills previously reserved to other heroes: for example, a hero could gain another character's weapon or special ability. In November 2017, the game released "Book II", which included a new animated video and new story missions, added new mechanics, increased the power of some of the weapons and skills perceived as weak, and introduced a new set of Heroes-exclusive allies and villains. Heroes has continued to release new content and new characters. By December 2017, the game included more than 200 unique heroes; as of August 2021, it had more than 600 characters. Plot: The main conflict in Fire Emblem Heroes is a war between the nations of Askr and Embla. The protagonist, also known as the summoner, whose name can be chosen by the player, aids the Kingdom of Askr and its Order of Heroes, whose members include its commander Anna, Prince Alfonse, and Princess Sharena. Embla is led by Princess Veronica and Prince Bruno; Veronica is featured on the original app icon and prominently in the title artwork. Both sides use summoned heroes from other worlds, with the other worlds being the settings of other Fire Emblem games. In the Book II expansion, released from November 2017 to September 2018, two new nations are introduced: the Kingdom of Nifl and its Princess Fjorm, who ally with Askr, and the Kingdom of Múspell and its King Surtr, Princess Laevatein, and strategist Loki, who ally with Embla. In the Book III expansion, released from December 2018 to November 2019, both Askr and Embla come under invasion from the forces of Hel, the realm of the dead. The protagonist gains an ally in Eir, who is Hel's daughter. In Book IV, Askr faces the forces of Dökkálfheimr, a realm inhabited by evil fairy-like creatures of nightmares. In Book V, Askr battles the forces of Niðavellir, a country whose inhabitants have merged magic and technology. In Book VI, the previously defused conflict between Askr and Embla is reignited by Embla's eponymous patron deity. In Book VII, Askr is invaded by Gullveig, an extratemporal being, and the heroes are forced to travel through time to stop her. Plot: The setting of Heroes includes loose references to Norse mythology; the names of the warring countries are those of Ask and Embla, the first humans, while the kingdoms introduced in later books match those of Niflheim, Muspelheim, and Hel. Place names and spell descriptions often include fragments in the Old Norse language, such as rauðr, blár, and gronn for red, blue, and green. The members of Múspell have the names of Surtr, the fire giant; Loki, the trickster half-giant / half-god; and Laevatein, the weapon; the members of Hel include references to Hel (the deity), Líf, and Eir. Development: The game was announced by Nintendo as the third mobile game made under the DeNA partnership in April 2016, alongside an Animal Crossing mobile game, and was originally set for release that year. Intelligent Systems, the studio that has produced the other games of the Fire Emblem series, headed up development of Heroes to ensure it would be a proper Fire Emblem game optimized for mobile devices. The game's title and gameplay details were later revealed during a Fire Emblem Direct presentation in January 2017.
Immediately afterwards, Nintendo launched an online "choose your legend" promotion, in which players could vote for various characters from the series to be included in the game. The game was initially released in 39 countries for Android and iOS devices on February 2, 2017. As was the case with Nintendo's previously released mobile game Super Mario Run, Fire Emblem Heroes requires players to have an internet connection to play. Since release, the game has continued releasing new characters and features. Shingo Matsushita, the director of Heroes, compared a normal Fire Emblem game to a movie, and Heroes to a television series; there would be a constant demand for new "episodes", but there would also be a feedback loop between the producers and the players that could affect the initial vision of the product. Reception: At the time of release, Fire Emblem Heroes received mixed reviews, according to review aggregator Metacritic. One critic praised the game's casual-friendly design with its straightforward combat and character collection, but did not like the arbitrary play-time constraint that the stamina limit creates. Criticism of the stamina restrictions at launch was later met by an increase in the stamina cap as well as a reduction in the stamina spent to complete missions. During the first day of release, Nintendo reported that the game generated more than $2.9 million. The game also ranked in third place in Japan at launch, and the company's shares rose sharply. By December 2017, the game had grossed $240 million from 12 million paid players. In February 2018, it was reported that Heroes had made roughly $295 million in its first year. By September 2018, the game had reached 14.1 million downloads and grossed $437 million. By February 2019, the game had grossed $500 million worldwide, with Japan accounting for 56% (about $280 million) of its revenue. As of January 2020, the game has grossed $656 million worldwide, making it Nintendo's highest-grossing mobile game. Commentators have noted that despite having an install base smaller than Super Mario Run's, Fire Emblem Heroes is more financially lucrative for Nintendo. The Verge attributed this both to Heroes being a "superior game" and to its targeting a somewhat older audience willing and able to pay for microtransactions. The game won the award for "Best Game of 2017" on Google Play Japan and for "Best Mobile Game" in Destructoid's Game of the Year Awards 2017, and it also won the People's Choice Award in the same category in IGN's Best of 2017 Awards. Polygon ranked it 20th on their list of the 50 best games of 2017.
**Equilateral pentagon** Equilateral pentagon: In geometry, an equilateral pentagon is a polygon in the Euclidean plane with five sides of equal length. Its five vertex angles can take a range of sets of values, thus permitting it to form a family of pentagons. In contrast, the regular pentagon is unique, because it is equilateral and moreover it is equiangular (its five angles are equal; the measure is 108 degrees). Equilateral pentagon: Four intersecting equal circles arranged in a closed chain are sufficient to determine a convex equilateral pentagon. Each circle's center is one of four vertices of the pentagon. The remaining vertex is determined by one of the intersection points of the first and the last circle of the chain. Internal angles of a convex equilateral pentagon: When a convex equilateral pentagon (with unit side length) is dissected into triangles, two of them appear as isosceles (triangles in orange and blue) while the other one is more general (triangle in green). We assume that we are given the adjacent angles α and β. According to the law of sines, the length of the line dividing the green and blue triangles is $a = 2\sin(\beta/2)$. Internal angles of a convex equilateral pentagon: The square of the length of the line dividing the orange and green triangles follows from the law of cosines: $b^2 = 1 + a^2 - 2a\sin(\alpha + \beta/2) = 1 + 4\sin^2(\beta/2) - 4\sin(\beta/2)\sin(\alpha + \beta/2)$. Applying the law of cosines once more, the cosine of δ is $\cos(\delta) = \frac{1^2 + 1^2 - b^2}{2(1)(1)}$. Simplifying, δ is obtained as a function of α and β: $\delta = \arccos\left[\cos\alpha + \cos\beta - \cos(\alpha + \beta) - \tfrac{1}{2}\right]$. Internal angles of a convex equilateral pentagon: The remaining angles of the pentagon can be found geometrically: the remaining angles of the orange and blue triangles are readily found by noting that two angles of an isosceles triangle are equal while all three angles sum to 180°. Then ϵ, γ, and the two remaining angles of the green triangle can be found from four equations stating that the sum of the angles of the pentagon is 540°, the sum of the angles of the green triangle is 180°, the angle γ is the sum of its three components, and the angle ϵ is the sum of its two components. Internal angles of a convex equilateral pentagon: A cyclic pentagon is equiangular if and only if it has equal sides and thus is regular. Likewise, a tangential pentagon is equilateral if and only if it has equal angles and thus is regular. Tiling: There are two infinite families of equilateral convex pentagons that tile the plane, one having two adjacent supplementary angles and the other having two non-adjacent supplementary angles. Some of those pentagons can tile in more than one way, and there is one sporadic example of an equilateral pentagon that can tile the plane but does not belong to either of those two families; its angles are roughly 89°16', 144°32.5', 70°55', 135°22', and 99°54.5', no two supplementary. A two-dimensional mapping: Equilateral pentagons can intersect themselves not at all, once, twice, or five times. The ones that don't intersect themselves are called simple, and they can be classified as either convex or concave. We here use the term "stellated" to refer to the ones that intersect themselves either twice or five times. We rule out, in this section, the equilateral pentagons that intersect themselves precisely once. A two-dimensional mapping: Given that we rule out the pentagons that intersect themselves once, we can plot the rest as a function of two variables in the two-dimensional plane. Each pair of values (α, β) maps to a single point of the plane and also maps to a single pentagon.
A two-dimensional mapping: The periodicity of the values of α and β and the condition α ≥ β ≥ δ permit the size of the mapping to be limited. In the plane with coordinate axes α and β, the equation α = β is a line dividing the plane in two parts (south border shown in orange in the drawing). The equation δ = β as a curve divides the plane into different sections (north border shown in blue). A two-dimensional mapping: Both borders enclose a continuous region of the plane whose points map to unique equilateral pentagons. Points outside the region map only to repeated pentagons—that is, pentagons that when rotated or reflected can match others already described. Pentagons that map exactly onto those borders have a line of symmetry. Inside the region of unique mappings there are three types of pentagons: stellated, concave and convex, separated by new borders. A two-dimensional mapping: Stellated The stellated pentagons have sides intersected by others. A common example of this type of pentagon is the pentagram. A condition for a pentagon to be stellated, or self-intersecting, is to have 2α + β ≤ 180°. So, in the mapping, the line 2α + β = 180° (shown in orange at the north) is the border between the regions of stellated and non-stellated pentagons. Pentagons which map exactly to this border have a vertex touching another side. A two-dimensional mapping: Concave The concave pentagons are non-stellated pentagons having at least one angle greater than 180°. The first angle which opens wider than 180° is γ, so the equation γ = 180° (border shown in green at right) is a curve forming the border between the region of concave pentagons and that of the remaining ones, called convex. Pentagons which map exactly to this border have at least two consecutive sides appearing as a single double-length side, so they resemble a pentagon degenerated to a quadrilateral. A two-dimensional mapping: Convex The convex pentagons have all of their five angles smaller than 180° and no sides intersecting others. A common example of this type of pentagon is the regular pentagon.
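To make the internal-angle relations above concrete, the following C++ sketch computes δ from α and β and sanity-checks the result against the regular pentagon; the function name delta_angle is illustrative only.

```cpp
#include <cmath>
#include <iostream>

// delta = arccos(cos(alpha) + cos(beta) - cos(alpha + beta) - 1/2),
// the relation derived above for a unit-sided convex equilateral pentagon
// with given adjacent angles alpha and beta (all angles in radians).
double delta_angle(double alpha, double beta) {
    return std::acos(std::cos(alpha) + std::cos(beta)
                     - std::cos(alpha + beta) - 0.5);
}

int main() {
    const double pi = std::acos(-1.0);
    const double deg = pi / 180.0;
    // Regular pentagon: alpha = beta = 108 degrees must give delta = 108.
    std::cout << delta_angle(108 * deg, 108 * deg) / deg << '\n'; // ~108
}
```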
**Journaling file system** Journaling file system: A journaling file system is a file system that keeps track of changes not yet committed to the file system's main part by recording the goal of such changes in a data structure known as a "journal", which is usually a circular log. In the event of a system crash or power failure, such file systems can be brought back online more quickly, with a lower likelihood of becoming corrupted. Depending on the actual implementation, a journaling file system may only keep track of stored metadata, resulting in improved performance at the expense of increased possibility for data corruption. Alternatively, a journaling file system may track both stored data and related metadata, while some implementations allow selectable behavior in this regard. History: In 1990, IBM introduced JFS in AIX 3.1 as one of the first commercial UNIX filesystems to implement journaling. The next year the idea was popularized in a widely cited paper on log-structured file systems. This was subsequently implemented in Microsoft's Windows NT NTFS filesystem in 1993, in Apple's HFS Plus filesystem in 1998, and in Linux's ext3 filesystem in 2001. Rationale: Updating file systems to reflect changes to files and directories usually requires many separate write operations. This makes it possible for an interruption (like a power failure or system crash) between writes to leave data structures in an invalid intermediate state. For example, deleting a file on a Unix file system involves three steps: removing its directory entry; releasing the inode to the pool of free inodes; and returning all disk blocks to the pool of free disk blocks. Rationale: If a crash occurs after step 1 and before step 2, there will be an orphaned inode and hence a storage leak; if a crash occurs between steps 2 and 3, then the blocks previously used by the file cannot be used for new files, effectively decreasing the storage capacity of the file system. Re-arranging the steps does not help, either. If step 3 preceded step 1, a crash between them could allow the file's blocks to be reused for a new file, meaning the partially deleted file would contain part of the contents of another file, and modifications to either file would show up in both. On the other hand, if step 2 preceded step 1, a crash between them would cause the file to be inaccessible, despite appearing to exist. Rationale: Detecting and recovering from such inconsistencies normally requires a complete walk of the file system's data structures, for example by a tool such as fsck (the file system checker). This must typically be done before the file system is next mounted for read-write access. If the file system is large and if there is relatively little I/O bandwidth, this can take a long time and result in longer downtimes if it blocks the rest of the system from coming back online. Rationale: To prevent this, a journaled file system allocates a special area—the journal—in which it records the changes it will make ahead of time. After a crash, recovery simply involves reading the journal from the file system and replaying changes from this journal until the file system is consistent again. The changes are thus said to be atomic (not divisible) in that they either succeed (succeeded originally or are replayed completely during recovery), or are not replayed at all (are skipped because they had not yet been completely written to the journal before the crash occurred).
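The write-ahead rule described above (record the change in the journal, commit it, then apply it) can be sketched as a toy C++ example. All names here are invented for illustration, and a real journal lives on disk rather than in memory; the point is only that replay after a simulated crash skips the half-written record.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Toy write-ahead journal: each record describes a whole change and is
// marked committed only once fully written. Recovery replays committed
// records and skips incomplete ones, mimicking journal replay at mount.
struct Record {
    std::string key, value;
    bool committed = false; // set only after the record is fully written
};

struct ToyFS {
    std::vector<Record> journal;              // the log, much simplified
    std::map<std::string, std::string> disk;  // the main file system

    void write(const std::string& key, const std::string& value,
               bool crash_mid_write) {
        journal.push_back({key, value, false});
        if (crash_mid_write) return;      // crash before the record commits
        journal.back().committed = true;  // record is now replayable
        disk[key] = value;                // apply to the main structures
    }

    void recover() { // replay committed records; drop incomplete ones
        for (const Record& r : journal)
            if (r.committed) disk[r.key] = r.value;
        journal.clear();
    }
};

int main() {
    ToyFS fs;
    fs.write("a", "1", false);
    fs.write("b", "2", true);  // simulated crash mid-journal-write
    fs.recover();              // "a" survives; half-written "b" is skipped
    std::cout << fs.disk.count("a") << fs.disk.count("b") << '\n'; // 10
}
```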
Techniques: Some file systems allow the journal to grow, shrink and be re-allocated just as a regular file, while others put the journal in a contiguous area or a hidden file that is guaranteed not to move or change size while the file system is mounted. Some file systems may also allow external journals on a separate device, such as a solid-state drive or battery-backed non-volatile RAM. Changes to the journal may themselves be journaled for additional redundancy, or the journal may be distributed across multiple physical volumes to protect against device failure. Techniques: The internal format of the journal must guard against crashes while the journal itself is being written to. Many journal implementations (such as the JBD2 layer in ext4) bracket every change logged with a checksum, on the understanding that a crash would leave a partially written change with a missing (or mismatched) checksum that can simply be ignored when replaying the journal at next remount. Techniques: Physical journals A physical journal logs an advance copy of every block that will later be written to the main file system. If there is a crash when the main file system is being written to, the write can simply be replayed to completion when the file system is next mounted. If there is a crash when the write is being logged to the journal, the partial write will have a missing or mismatched checksum and can be ignored at next mount. Techniques: Physical journals impose a significant performance penalty because every changed block must be committed twice to storage, but this may be acceptable when absolute fault protection is required. Logical journals A logical journal stores only changes to file metadata in the journal, and trades fault tolerance for substantially better write performance. A file system with a logical journal still recovers quickly after a crash, but may allow unjournaled file data and journaled metadata to fall out of sync with each other, causing data corruption. For example, appending to a file may involve three separate writes: to the file's inode, to note in the file's metadata that its size has increased; to the free space map, to mark out an allocation of space for the to-be-appended data; and to the newly allocated space, to actually write the appended data. In a metadata-only journal, step 3 would not be logged. If step 3 was not done but steps 1 and 2 were replayed during recovery, the file will be appended with garbage. Techniques: Write hazards The write cache in most operating systems sorts its writes (using the elevator algorithm or some similar scheme) to maximize throughput. To avoid an out-of-order write hazard with a metadata-only journal, writes for file data must be sorted so that they are committed to storage before their associated metadata. This can be tricky to implement because it requires coordination within the operating system kernel between the file system driver and the write cache. An out-of-order write hazard can also occur if a device cannot write blocks immediately to its underlying storage, that is, it cannot flush its write cache to disk because deferred write is enabled. Techniques: To complicate matters, many mass storage devices have their own write caches, in which they may aggressively reorder writes for better performance. (This is particularly common on magnetic hard drives, which have large seek latencies that can be minimized with elevator sorting.)
Some journaling file systems conservatively assume such write-reordering always takes place, and sacrifice performance for correctness by forcing the device to flush its cache at certain points in the journal (called barriers in ext3 and ext4). Alternatives: Soft updates Some UFS implementations avoid journaling and instead implement soft updates: they order their writes in such a way that the on-disk file system is never inconsistent, or that the only inconsistency that can be created in the event of a crash is a storage leak. To recover from these leaks, the free space map is reconciled against a full walk of the file system at next mount. This garbage collection is usually done in the background. Alternatives: Log-structured file systems In log-structured file systems, the write-twice penalty does not apply because the journal itself is the file system: it occupies the entire storage device and is structured so that it can be traversed as would a normal file system. Alternatives: Copy-on-write file systems Full copy-on-write file systems (such as ZFS and Btrfs) avoid in-place changes to file data by writing out the data in newly allocated blocks, followed by updated metadata that would point to the new data and disown the old, followed by metadata pointing to that, and so on up to the superblock, or the root of the file system hierarchy. This has the same correctness-preserving properties as a journal, without the write-twice overhead.
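The checksum bracketing described under Techniques can also be sketched briefly. In the toy C++ fragment below, the record layout and the FNV-1a hash are illustrative stand-ins (real journals such as ext4's JBD2 have their own on-disk formats and checksums): replay stops at the first record whose stored checksum does not match its contents, treating it as torn by a crash.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Illustrative 64-bit FNV-1a hash standing in for a real journal checksum.
uint64_t fnv1a(const std::string& data) {
    uint64_t h = 1469598103934665603ULL;
    for (unsigned char c : data) { h ^= c; h *= 1099511628211ULL; }
    return h;
}

struct JournalRecord {
    std::string payload; // the logged change
    uint64_t checksum;   // written last, after the payload
};

// Replay: a mismatched checksum marks a record that was only partially
// written before the crash, so it and everything after it are skipped.
void replay(const std::vector<JournalRecord>& journal) {
    for (const auto& r : journal) {
        if (fnv1a(r.payload) != r.checksum) break; // partial write: stop
        std::cout << "apply: " << r.payload << '\n';
    }
}

int main() {
    std::vector<JournalRecord> journal = {
        {"set a=1", fnv1a("set a=1")},
        {"set b=2", 0xDEAD}, // simulated torn record: checksum mismatch
    };
    replay(journal); // applies only "set a=1"
}
```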
**Signaller** Signaller: A signaller or signalman, colloquially referred to as a radioman or signaleer in the armed forces, is a specialist soldier, sailor or airman responsible for military communications. Signallers, also known as combat signallers, signalmen or signalwomen, are commonly employed as radio or telephone operators, relaying messages for field commanders at the front line (army units, ships or aircraft) through a chain of command which includes field headquarters. Messages are transmitted and received via a communications infrastructure comprising fixed and mobile installations. Duties: In the past, signalling skills have included the use of the heliograph, the Aldis lamp, semaphore flags, "Don R" (dispatch riders) and even carrier pigeons. Duties: Modern signallers are responsible for the battlefield voice and data communication and information technology infrastructure; in plain terms, they may carry a backpack radio transceiver used to communicate with forward operating bases (military outposts large and small), using a variety of media. All types of wire (line), satellite and ionospheric radio communication are employed. These include common radio systems such as HF/VHF radio and UHF/SHF radio (operated in line of sight, for example). Cellular radio and telephone systems such as TETRA are also becoming common. Duties: In addition to day-to-day soldiering, the signaller is required to be competent at a number of skill levels in the following topics: maintaining power supplies (batteries and charging, for example); radio sets, including storage and logistics, installation and operation, and maintenance and repair at unit level; station organisation, including managing radio nets and maintaining net discipline (map marking, log keeping, etc.); voice and wireless telegraphy procedure (using Morse code or RATT (radio teletype), for example); formal message procedure and electronic mail; electronic warfare (EW); communications security (COMSEC), including the encryption and deciphering of coded messages using paper/voice and electronic codes; telephone and line; information and communication technology; and antenna selection and design. Air Forces: In an air force, a signaller is an aircrew member trained to communicate between the aircraft, its base and units in the area of operation, by means of radio or other digital communications. Navy Forces: In the navy, a signaller is usually a seaman trained to communicate between the fleet forces and naval bases in the area of operation, by means of radio or other digital communications. Armies: Australia In the Australian Army, a signaller is often referred to as a Chook (Australian slang for chicken) by soldiers outside the Signal Corps, because the Morse code used by signallers has been likened to the chirping of chickens. Armies: Canada In the Canadian Army, a signaller is often referred to as a "Jimmy", in reference to the corps flag and cap badge featuring Mercury (Latin: Mercurius), the winged messenger of the Roman gods, who is referred to by members of the corps as "Jimmy". The origins of this nickname are unclear. According to one explanation, the badge is referred to as "Jimmy" because the image of Mercury was based on the late medieval bronze statue by the Italian sculptor Giambologna, and shortening over time reduced the name Giambologna to "Jimmy". 
The most widely accepted theory is that the name comes from a Royal Signals boxer, Jimmy Emblem, who was the British Army champion in 1924 and represented the Royal Corps of Signals from 1921 to 1924. Armies: Signallers in Canada are responsible for the majority of radio, satellite, telephone, and computer communications within the Canadian military. Trained signallers of the rank of private in Canada are referred to as "Sig" in place of private (e.g. Sig Smith). United Kingdom In the British Army, signaller may refer to a member of the Royal Corps of Signals, specifically to the rank of Signaller (formerly Signalman), or to a trained signals specialist in other areas of the army such as the infantry or Royal Artillery. The rank is equivalent to that of Private. Modern age: See also: Land Mobile Radio System, Walkie-Talkie, Transceiver The US and European powers, especially during World War I and World War II, made extensive use of field telephones and of other methods of transmitting messages, such as carrier pigeons and runners, who were essentially army messengers and couriers running from place to place. This culminated in the extensive use of the backpack transceiver in World War II, Korea and Vietnam, which eventually evolved into the unit-based radio and the unit-to-HQ field telephone. Modern age: Specially designated soldiers in a unit would, and still do, carry a backpack transceiver with a large telescoping antenna that can be several metres tall. Such a soldier is also called an RTO, which stands for "radio telephone operator"; in the field, soldiers usually say RTO rather than signaller. RTOs are soldiers specializing in military communications, mainly operating wired and wireless communication equipment and relaying messages from the front line to commanders through the chain of command, including field headquarters and control agencies.
**CUBIT** CUBIT: Cubit, often stylized CUBIT, is a computer user interface system for multi-touch devices, designed by Stefan Hechenberger and Addie Wagenknecht for Nortd Labs. It was developed to "demystify multitouch" technology by using an open-source model for software and hardware. It is a direct competitor of Microsoft Surface. Purchasing: As of 2 May 2008, Nortd Labs was accepting orders for a developer kit named the TouchKit. Kit buyers and users must supply their own projector and camera, at a cost estimated at between US$1,080 and US$1,580. As of July 2008, the CUBIT system was for sale by commission only, and both it and the TouchKit were rumored to have a two- to three-month waiting list.
**SCARB1** SCARB1: Scavenger receptor class B type 1 (SRB1), also known as SR-BI, is a protein that in humans is encoded by the SCARB1 gene. SR-BI functions as a receptor for high-density lipoprotein. Function: Scavenger receptor class B, type I (SR-BI) is an integral membrane protein found in numerous cell types/tissues, including enterocytes, the liver and the adrenal gland. It is best known for its role in facilitating the uptake of cholesteryl esters from high-density lipoproteins in the liver. This process drives the movement of cholesterol from peripheral tissues towards the liver, where cholesterol can either be secreted via the bile duct or be used to synthesise steroid hormones. This movement of cholesterol is known as reverse cholesterol transport and is a protective mechanism against the development of atherosclerosis, which is the principal cause of heart disease and stroke. SR-BI is crucial in carotenoid and vitamin E uptake in the small intestine. SR-B1 is upregulated in times of vitamin A deficiency and downregulated if vitamin A status is in the normal range. In melanocytic cells, SCARB1 gene expression may be regulated by MITF. Species distribution: SR-BI has also been identified in the livers of non-mammalian species (turtle, goldfish, shark, chicken, frog, and skate), suggesting it emerged early in vertebrate evolutionary history. The turtle also seems to upregulate SR-BI during egg development, indicating that cholesterol efflux may be at peak levels during developmental stages. Clinical significance: SCARB1, along with CD81, is the receptor for the entry of the hepatitis C virus into liver cells. Preclinical research: Although malignant tumors are known to display extreme heterogeneity, overexpression of SR-B1 is a relatively consistent marker in cancerous tissues. While SR-B1 normally mediates the transfer of cholesterol between high-density lipoproteins (HDL) and healthy cells, it also facilitates the selective uptake of cholesterol by malignant cells. In this way, upregulation of the SR-B1 receptor becomes an enabling factor for self-sufficient proliferation in cancerous tissue. SR-B1-mediated delivery has also been used in the transfection of cancer cells with siRNAs, or small interfering RNAs. This therapy causes RNA interference, in which short segments of double-stranded RNA act to silence targeted oncogenes post-transcriptionally. SR-B1 mediation reduces siRNA degradation and off-target accumulation while enhancing delivery to targeted tissues. In metastatic and taxane-resistant models of ovarian cancer, rHDL-mediated siRNA delivery improved responses.
**NLX (motherboard form factor)** NLX (motherboard form factor): NLX (short for New Low Profile eXtended) was a form factor proposed by Intel and developed jointly with IBM, DEC, and other vendors for low-profile, low-cost, mass-marketed retail PCs. Release 1.2 was finalized in March 1997 and release 1.8 was finalized in April 1999. NLX was similar in overall design to LPX, including a riser card and a low-profile slimline case. It was modernized and updated to allow support for the latest technologies while keeping costs down and fixing the main problems with LPX. It specified motherboards from 10 × 8 in (254 × 203 mm) to 13.6 × 9 in (345 × 229 mm) in size. NLX (motherboard form factor): Officially, the NLX form factor was designed to use ATX power supplies and featured the same soft-power function. However, for size reduction, some NLX cases instead used the smaller SFX form factor or proprietary form factors with the same 20-pin connector. NLX (motherboard form factor): Many slimline systems that were formerly designed to fit the LPX form factor were modified to fit NLX. NLX is a true standard, unlike LPX, making interchangeability of components easier than it was for the older form factor. IBM, Gateway, and NEC produced a fair number of NLX computers in the late 1990s, primarily for Socket 370 (Pentium III and Celeron), but NLX never enjoyed the widespread acceptance that LPX had. Most importantly, Dell, one of the largest PC manufacturers, decided against using NLX and created its own proprietary motherboards for use in its slimline systems. Although many of these computers and motherboards are still available secondhand, new production has essentially ceased, and in the slimline and small form factor market, NLX has been superseded by the Micro-ATX, FlexATX, and Mini-ITX form factors.
**Mo-Sai** Mo-Sai: Mo-Sai is a method of producing precast concrete cladding panels. It was patented by John Joseph Earley in 1940. The Mo-Sai Institute later refined Earley's method and became the leader in exposed aggregate concrete. The Mo-Sai Institute, an organization of precast concrete manufacturers, adhered to the Mo-Sai method of producing exposed aggregate precast concrete panels. Mo-Sai: A pivotal development in this technique occurred in 1938, when the administration buildings at the David Taylor Model Basin were built with panels used as permanent forms for cast-in-place walls. This was the first use of the Mo-Sai manufacturing technique, produced in collaboration with the Dextone Company of New Haven, Connecticut. Working from this background, the Dextone Company refined the methods and, in 1940, obtained the patents and copyrights under which Mo-Sai Associates, later known as the Mo-Sai Institute Inc., operated. The Mo-Sai Institute grew to include a number of licensed manufacturing firms throughout the United States. Mo-Sai: Buildings featuring Mo-Sai panels include the Columbine Building in Colorado Springs (1960), the Prudential Building in Toronto, Ontario, Canada (1960), the Denver Hilton Hotel (now the Sheraton Denver) in Denver, Colorado (1960), the Los Angeles Temple (1956), the Equitable Center in Portland, Oregon (1964), the Hartford National Bank and Trust in Hartford, Connecticut (1967), and the Pan Am Building in New York City (1962).
**KCNE2** KCNE2: Potassium voltage-gated channel subfamily E member 2 (KCNE2), also known as MinK-related peptide 1 (MiRP1), is a protein that in humans is encoded by the KCNE2 gene on chromosome 21. MiRP1 is a voltage-gated potassium channel accessory subunit (beta subunit) associated with long QT syndrome. It is ubiquitously expressed in many tissues and cell types. Because of this, and because of its ability to regulate multiple different ion channels, KCNE2 exerts considerable influence on a number of cell types and tissues. Human KCNE2 is a member of the five-strong family of human KCNE genes. KCNE proteins contain a single membrane-spanning region, an extracellular N-terminus and an intracellular C-terminus. KCNE proteins have been widely studied for their roles in the heart and in genetic predisposition to inherited cardiac arrhythmias. The KCNE2 gene also contains one of 27 SNPs associated with increased risk of coronary artery disease. More recently, roles for KCNE proteins in a variety of non-cardiac tissues have also been explored. Discovery: Steve Goldstein (then at Yale University) used a BLAST search strategy, focusing on KCNE1 sequence stretches known to be important for function, to identify related expressed sequence tags (ESTs) in the NCBI database. Sequences from these ESTs were then used to clone KCNE2, 3 and 4. Tissue distribution: KCNE2 protein is most readily detected in the choroid plexus epithelium, gastric parietal cells, and thyroid epithelial cells. KCNE2 is also expressed in atrial and ventricular cardiomyocytes, the pancreas, pituitary gland, and lung epithelium. In situ hybridization data suggest that KCNE2 transcript may also be expressed in various neuronal populations. Structure: Gene The KCNE2 gene resides on chromosome 21 at the band 21q22.11 and contains 2 exons. Since human KCNE2 is located ~79 kb from KCNE1 and oriented in the opposite direction, KCNE2 is proposed to have originated from a gene duplication event. Structure: Protein This protein belongs to the potassium channel KCNE family and is one of five single-transmembrane-domain voltage-gated potassium (Kv) channel ancillary subunits. KCNE2 is composed of three major domains: the N-terminal domain, the transmembrane domain, and the C-terminal domain. The N-terminal domain protrudes out of the extracellular side of the cell membrane and is, thus, soluble in the aqueous environment. Meanwhile, the transmembrane and C-terminal domains are lipid-soluble to enable the protein to incorporate into the cell membrane. The C-terminal domain faces the intracellular side of the membrane and may share a putative PKC phosphorylation site with other KCNE proteins. Structure: Like other KCNEs, KCNE2 forms a heteromeric complex with Kv α subunits. Function: Choroid plexus epithelium KCNE2 protein is most readily detected in the choroid plexus epithelium, at the apical side. KCNE2 forms complexes there with the voltage-gated potassium channel α subunit Kv1.3. In addition, KCNE2 forms reciprocally regulating tripartite complexes in the choroid plexus epithelium with the KCNQ1 α subunit and the sodium-dependent myo-inositol transporter SMIT1. Kcne2-/- mice exhibit increased seizure susceptibility, reduced immobility time in the tail suspension test, and reduced cerebrospinal fluid myo-inositol content, compared to wild-type littermates. Mega-dosing of myo-inositol reverses all these phenotypes, suggesting a link between myo-inositol and the seizure susceptibility and behavioral alterations in Kcne2-/- mice. 
Function: Gastric epithelium KCNE2 is also highly expressed in parietal cells of the gastric epithelium, also at the apical side. In these cells, KCNQ1-KCNE2 K+ channels, which are constitutively active, provide a conduit to return K+ ions back to the stomach lumen. The K+ ions enter the parietal cell through the gastric H+/K+-ATPase, which swaps them for protons as it acidifies the stomach. While KCNQ1 channels are inhibited by low extracellular pH, KCNQ1-KCNE2 channel activity is augmented by extracellular protons, an ideal characteristic for their role in parietal cells. Function: Thyroid epithelium KCNE2 forms constitutively active K+ channels with KCNQ1 in the basolateral membrane of thyroid epithelial cells. Kcne2-/- mice exhibit hypothyroidism, particularly apparent during gestation or lactation. KCNQ1-KCNE2 is required for optimal iodide uptake into the thyroid by the basolateral sodium iodide symporter (NIS). Iodide is required for biosynthesis of thyroid hormones. Function: Heart KCNE2 was originally discovered to regulate hERG channel function. KCNE2 decreases macroscopic and unitary current through hERG, and speeds hERG deactivation. hERG generates IKr, the most prominent repolarizing current in human ventricular cardiomyocytes. hERG, and IKr, are highly susceptible to block by a range of structurally diverse pharmacological agents. This property means that many drugs or potential drugs have the capacity to impair human ventricular repolarization, leading to drug-induced long QT syndrome. KCNE2 may also regulate hyperpolarization-activated, cyclic-nucleotide-gated (HCN) pacemaker channels in human heart and in the hearts of other species, as well as the Cav1.2 voltage-gated calcium channel. In mice, mERG and KCNQ1, another Kv α subunit regulated by KCNE2, are neither influential nor highly expressed in adult ventricles. However, Kcne2-/- mice exhibit QT prolongation at baseline at 7 months of age, or earlier if provoked with a QT-prolonging agent such as sevoflurane. This is because KCNE2 is a promiscuous regulatory subunit that forms complexes with Kv1.5 and with Kv4.2 in adult mouse ventricular myocytes. KCNE2 increases currents through Kv4.2 channels and slows their inactivation. KCNE2 is required for Kv1.5 to localize to the intercalated discs of mouse ventricular myocytes. Kcne2 deletion in mice reduces the native currents generated in ventricular myocytes by Kv4.2 and Kv1.5, namely Ito and IKslow, respectively. Clinical Significance: Gastric epithelium Kcne2-/- mice exhibit achlorhydria, gastric hyperplasia, and mis-trafficking of KCNQ1 to the parietal cell basal membrane. The mis-trafficking occurs because KCNE3 is upregulated in the parietal cells of Kcne2-/- mice, and hijacks KCNQ1, taking it to the basolateral membrane. When both Kcne2 and Kcne3 are germline-deleted in mice, KCNQ1 traffics to the parietal cell apical membrane, but the gastric phenotype is even worse than for Kcne2-/- mice, emphasizing that KCNQ1 requires KCNE2 co-assembly for functional attributes other than targeting in parietal cells. Kcne2-/- mice also develop gastritis cystica profunda and gastric neoplasia. Human KCNE2 downregulation is also observed in sites of gastritis cystica profunda and gastric adenocarcinoma. Clinical Significance: Thyroid epithelium Positron emission tomography data show that without KCNE2, 124I uptake by the thyroid is impaired. Kcne2 deletion does not impair organification of iodide once it has been taken up by NIS. 
Pups raised by Kcne2-/- dams are particularly severely affected because they receive less milk (hypothyroidism of the dams impairs milk ejection), the milk they receive is deficient in T4, and they themselves cannot adequately transport iodide into the thyroid. Kcne2-/- pups exhibit stunted growth, alopecia, cardiomegaly and reduced cardiac ejection fraction, all of which are alleviated by thyroid hormone supplementation of pups or dams. Surrogating Kcne2-/- pups with Kcne2+/+ dams also alleviates these phenotypes, highlighting the influence of maternal genotype in this case. Clinical Significance: Heart As observed for hERG mutations, KCNE2 loss-of-function mutations are associated with inherited long QT syndrome, and hERG-KCNE2 channels carrying the mutations show reduced activity compared to wild-type channels. In addition, some KCNE2 mutations and also more common polymorphisms are associated with drug-induced long QT syndrome. In several cases, specific KCNE2 sequence variants increase the susceptibility to hERG-KCNE2 channel inhibition by the drug that precipitated the QT prolongation in the patient from whom the gene variant was isolated. Long QT syndrome predisposes to potentially lethal ventricular cardiac arrhythmias, including torsades de pointes, which can degenerate into ventricular fibrillation and sudden cardiac death. Moreover, KCNE2 gene variation can disrupt HCN1-KCNE2 channel function, and this may potentially contribute to cardiac arrhythmogenesis. KCNE2 is also associated with familial atrial fibrillation, which may involve excessive KCNQ1-KCNE2 current caused by KCNE2 gain-of-function mutations. Recently, a battery of extracardiac effects was discovered in Kcne2-/- mice that may contribute to cardiac arrhythmogenesis and could potentially contribute to human cardiac arrhythmias if similar effects are observed in human populations. Kcne2 deletion in mice causes anemia, glucose intolerance, dyslipidemia, hyperkalemia and elevated serum angiotensin II. Some or all of these might contribute to predisposition to sudden cardiac death in Kcne2-/- mice in the context of myocardial ischemia and post-ischemic arrhythmogenesis. Clinical Significance: Clinical Marker A multi-locus genetic risk score study based on a combination of 27 loci, including the KCNE2 gene, identified individuals at increased risk for both incident and recurrent coronary artery disease events, as well as an enhanced clinical benefit from statin therapy. The study was based on a community cohort study (the Malmö Diet and Cancer study) and four additional randomized controlled trials of primary prevention cohorts (JUPITER and ASCOT) and secondary prevention cohorts (CARE and PROVE IT-TIMI 22).
**GJC3** GJC3: Gap junction gamma-3, also known as connexin-29 (Cx29) or gap junction epsilon-1 (GJE1), is a protein that in humans is encoded by the GJC3 gene. GJC3 is a connexin.

Function: This gene encodes a gap junction protein known as a connexin; most connexins form gap junctions that provide direct connections between neighboring cells. However, Cx29, which is highly expressed in myelin-forming glial cells of the CNS and PNS, has not been documented to form gap junctions in any cell type. In both PNS and CNS myelinated axons, Cx29 is precisely colocalized with Kv1.2 voltage-gated K+ channels, where both proteins are concentrated in the juxtaparanode and along the inner mesaxon. By freeze-fracture immunogold labeling electron microscopy, Cx29 is identified in abundant "rosettes" of transmembrane protein particles in the innermost layer of myelin, directly apposed to equally abundant immunogold-labeled Kv1.1 potassium channels, both in the juxtaparanodal axolemma and along the inner mesaxon. A role in K+ handling during saltatory conduction is implied but not yet demonstrated.

Clinical significance: Mutations in this gene have been reported to be associated with nonsyndromic hearing loss.
**Gtkmm** Gtkmm: gtkmm (formerly known as gtk-- or gtk minus minus) is the official C++ interface for the popular GUI library GTK. gtkmm is free software distributed under the GNU Lesser General Public License (LGPL). gtkmm allows the creation of user interfaces either in code or with the Glade Interface Designer, using the Gtk::Builder class. Other features include type-safe callbacks, a comprehensive set of graphical control elements, and the extensibility of widgets via inheritance.

Features: Because gtkmm is the official C++ interface of the GUI library GTK, C++ programmers can use common OOP techniques such as inheritance, as well as C++-specific facilities such as the Standard Template Library (indeed, many gtkmm interfaces, especially those for widget containers, are designed to resemble the STL). The main features of gtkmm are:

- Use of inheritance to derive custom widgets.
- Type-safe signal handlers, in standard C++.
- Polymorphism.
- Use of the Standard C++ Library, including strings, containers, and iterators.
- Full internationalization with UTF-8.
- Complete C++ memory management: object composition and automatic deallocation of dynamically allocated widgets.
- Full use of C++ namespaces.
- No macros.
- Cross-platform: Linux (gcc, LLVM), FreeBSD (gcc, LLVM), NetBSD (gcc), Solaris (gcc, Forte), Win32 (gcc, MSVC++), macOS (gcc), and others.

Hello World in gtkmm: A minimal gtkmm program creates a window containing a button labeled "Hello World"; when clicked, the button sends "Hello world" to standard output. The program is compiled and run from the command line, although in practice this is usually automated with a simple makefile. A sketch of such a program, together with build commands, is given after the Applications list below.

Applications: Some notable applications that use gtkmm include:

- Amsynth
- Cadabra (computer program)
- Inkscape, a vector graphics drawing application.
- Horizon EDA, an electronic design automation package for printed circuit board design.
- PDF Slicer, a simple application to extract, merge, rotate and reorder pages of PDF documents.
- Workrave, which assists in recovery from and prevention of RSI.
- Gnome System Monitor
- Gigedit
- GParted, a disk partitioning tool.
- Nemiver, a GUI for the GNU debugger gdb.
- PulseAudio tools: pavucontrol, paman, paprefs, pavumeter
- RawTherapee
- Referencer, a GNOME document organiser and bibliography manager.
- Seq24
- Synfig Studio
- Linthesia
- MySQL Workbench, a database administration GUI.
- Ardour, an open-source digital audio workstation (DAW) for Linux and macOS.
- Gnote, a desktop note-taking application.
- VisualBoyAdvance
- VMware Workstation and VMware Player, which both use gtkmm for their Linux ports.
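The following is a minimal sketch of the "Hello World" program described above, assuming gtkmm 3; the class name HelloWorld and the application id are illustrative choices, not part of the gtkmm API. It demonstrates two of the features listed earlier: deriving a custom widget via inheritance and connecting a type-safe signal handler.

```cpp
#include <gtkmm.h>
#include <iostream>

// A custom top-level window, derived from Gtk::Window, containing one button.
class HelloWorld : public Gtk::Window {
public:
  HelloWorld() : m_button("Hello World") {
    set_border_width(10);
    // Type-safe signal handling: connect the button's "clicked" signal
    // to a member function through a sigc::mem_fun slot.
    m_button.signal_clicked().connect(
        sigc::mem_fun(*this, &HelloWorld::on_button_clicked));
    add(m_button);
    m_button.show();
  }

private:
  void on_button_clicked() {
    std::cout << "Hello world" << std::endl;
  }

  Gtk::Button m_button;
};

int main(int argc, char* argv[]) {
  auto app = Gtk::Application::create(argc, argv, "org.gtkmm.example.helloworld");
  HelloWorld window;
  return app->run(window);  // Show the window and enter the GTK main loop.
}
```

With gtkmm 3 installed, a program like this would typically be built and run with pkg-config, along the lines of:

```
g++ helloworld.cc -o helloworld $(pkg-config --cflags --libs gtkmm-3.0)
./helloworld
```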