**Space architecture** Space architecture: Space architecture is the theory and practice of designing and building inhabited environments in outer space. This mission statement for space architecture was developed at the World Space Congress in Houston in 2002 by members of the Technical Aerospace Architecture Subcommittee of the American Institute of Aeronautics and Astronautics (AIAA). The architectural approach to spacecraft design addresses the total built environment. It is mainly based on the field of engineering (especially aerospace engineering), but also involves diverse disciplines such as physiology, psychology, and sociology. Space architecture: Like architecture on Earth, the attempt is to go beyond the component elements and systems and gain a broad understanding of the issues that affect design success. Space architecture borrows from multiple forms of niche architecture to accomplish the task of ensuring human beings can live and work in space. These include the kinds of design elements one finds in “tiny housing, small living apartments/houses, vehicle design, capsule hotels, and more.”Much space architecture work has been in designing concepts for orbital space stations and lunar and Martian exploration ships and surface bases for the world's space agencies, chiefly NASA. Space architecture: The practice of involving architects in the space program grew out of the Space Race, although its origins can be seen much earlier. The need for their involvement stemmed from the push to extend space mission durations and address the needs of astronauts including but beyond minimum survival needs. Space architecture is currently represented in several institutions. The Sasakawa International Center for Space Architecture (SICSA) is an academic organization with the University of Houston that offers a Master of Science in Space Architecture. SICSA also works design contracts with corporations and space agencies. In Europe, The Vienna University of Technology and the International Space University are involved in space architecture research. The TU Wien offers an EMBA in Space Architecture. The International Conference on Environmental Systems (ICES) meets annually to present sessions on human spaceflight and space human factors. Within the American Institute of Aeronautics and Astronautics (AIAA), the Space Architecture Technical Committee (SATC) has been formed. Despite the historical pattern of large government-led space projects and university-level conceptual design, the advent of space tourism threatens to shift the outlook for space architecture work. Etymology: The word space in space architecture is referring to the outer space definition, which is from English outer and space. Outer can be defined as "situated on or toward the outside; external; exterior" and originated around 1350–1400 in Middle English. Space is "an area, extent, expanse, lapse of time," the aphetic of Old French espace dating to 1300. Espace is from Latin spatium, "room, area, distance, stretch of time," and is of uncertain origin. In space architecture, speaking of outer space usually means the region of the universe outside Earth's atmosphere, as opposed to outside the atmospheres of all terrestrial bodies. This allows the term to include such domains as the lunar and Martian surfaces. Etymology: Architecture, the concatenation of architect and -ure, dates to 1563, coming from Middle French architecte. This term is of Latin origin, formerly architectus, which came from Greek arkhitekton. 
Arkitekton means "master builder" and is from the combination of arkhi- "chief" and tekton "builder". The human experience is central to architecture – the primary difference between space architecture and spacecraft engineering. Etymology: There is some debate over the terminology of space architecture. Some consider the field to be a specialty within architecture that applies architectural principles to space applications. Others such as Ted Hall of the University of Michigan see space architects as generalists, with what is traditionally considered architecture (Earth-bound or terrestrial architecture) being a subset of a broader space architecture. Any structures that fly in space will likely remain for some time highly dependent on Earth-based infrastructure and personnel for financing, development, construction, launch, and operation. Therefore, it is a matter of discussion how much of these earthly assets are to be considered part of space architecture. The technicalities of the term space architecture are open to some level of interpretation. Origins: Ideas of people traveling to space were first published in science fiction stories, like Jules Verne's 1865 From the Earth to the Moon. In this story several details of the mission (crew of three, spacecraft dimensions, Florida launch site) bear striking similarity to the Apollo Moon landings that took place more than 100 years later. Verne's aluminum capsule had shelves stocked with equipment needed for the journey such as a collapsing telescope, pickaxes and shovels, firearms, oxygen generators, and even trees to plant. A curved sofa was built into the floor and walls and windows near the tip of the spacecraft were accessible by ladder. The projectile was shaped like a bullet because it was gun-launched from the ground, a method infeasible for transporting man to space due to the high acceleration forces produced. It would take rocketry to get humans to the cosmos. Origins: The first serious theoretical work published on space travel by means of rocket power was by Konstantin Tsiolkovsky in 1903. Besides being the father of astronautics he conceived such ideas as the space elevator (inspired by the Eiffel Tower), a rotating space station that created artificial gravity along the outer circumference, airlocks, space suits for extra-vehicular activity (EVA), closed ecosystems to provide food and oxygen, and solar power in space. Tsiolkovsky believed human occupation of space was the inevitable path for our species. In 1952 Wernher von Braun published his own inhabited space station concept in a series of magazine articles. His design was an upgrade of earlier concepts, but he took the unique step in going directly to the public with it. The spinning space station would have three decks and was to function as a navigational aid, meteorological station, Earth observatory, military platform, and way point for further exploration missions to outer space. It is said that the space station depicted in the 1968 film 2001: A Space Odyssey traces its design heritage to Von Braun's work. Wernher von Braun went on to devise mission schemes to the Moon and Mars, each time publishing his grand plans in Collier's Weekly. Origins: The flight of Yuri Gagarin on April 12, 1961, was humanity's maiden spaceflight. While the mission was a necessary first step, Gagarin was more or less confined to a chair with a small view port from which to observe the cosmos – a far cry from the possibilities of life in space. 
Following space missions gradually improved living conditions and quality of life in low Earth orbit. Expanding room for movement, physical exercise regimens, sanitation facilities, improved food quality, and recreational activities all accompanied longer mission durations. Architectural involvement in space was realized in 1968 when a group of architects and industrial designers led by Raymond Loewy, over objections from engineers, prevailed in convincing NASA to include an observation window in the Skylab orbital laboratory. This milestone represents the introduction of the human psychological dimension to spacecraft design. Space architecture was born. Theory: The subject of architectural theory has much application in space architecture. Some considerations, though, will be unique to the space context. Theory: Ideology of building In the first century BC, the Roman architect Vitruvius said all buildings should have three things: strength, utility, and beauty. Vitruvius's work De Architectura, the only surviving work on the subject from classical antiquity, would have profound influence on architectural theory for thousands of years to come. Even in space architecture these are some of the first things we consider. However, the tremendous challenge of living in space has led to habitat design based largely on functional necessity with little or no applied ornament. In this sense space architecture as we know it shares the form follows function principle with modern architecture. Theory: Some theorists link different elements of the Vitruvian triad. Walter Gropius writes: 'Beauty' is based on the perfect mastery of all the scientific, technological and formal prerequisites of the task ... The approach of Functionalism means to design the objects organically on the basis of their own contemporary postulates, without any romantic embellishment or jesting. As space architecture continues to mature as a discipline, dialogue on architectural design values will open up just as it has for Earth. Theory: Analogs A starting point for space architecture theory is the search for extreme environments in terrestrial settings where humans have lived, and the formation of analogs between these environments and space. For example, humans have lived in submarines deep in the ocean, in bunkers beneath the Earth's surface, and on Antarctica, and have safely entered burning buildings, radioactively contaminated zones, and the stratosphere with the help of technology. Aerial refueling enables Air Force One to stay airborne virtually indefinitely. Nuclear powered submarines generate oxygen using electrolysis and can stay submerged for months at a time. Many of these analogs can be very useful design references for space systems. In fact space station life support systems and astronaut survival gear for emergency landings bear striking similarity to submarine life support systems and military pilot survival kits, respectively. Theory: Space missions, especially human ones, require extensive preparation. In addition to terrestrial analogs providing design insight, the analogous environments can serve as testbeds to further develop technologies for space applications and train astronaut crews. The Flashline Mars Arctic Research Station is a simulated Mars base, maintained by the Mars Society, on Canada's remote Devon Island. 
The project aims to create conditions as similar as possible to a real Mars mission and attempts to establish ideal crew size, test equipment "in the field", and determine the best extra-vehicular activity suits and procedures. To train for EVAs in microgravity, space agencies make broad use of underwater and simulator training. The Neutral Buoyancy Laboratory, NASA's underwater training facility, contains full-scale mockups of the Space Shuttle cargo bay and International Space Station modules. Technology development and astronaut training in space-analogous environments are essential to making living in space possible. Theory: In space Fundamental to space architecture is designing for physical and psychological wellness in space. What often is taken for granted on Earth – air, water, food, trash disposal – must be designed for in fastidious detail. Rigorous exercise regimens are required to alleviate muscular atrophy and other effects of space on the body. That space missions are (optimally) fixed in duration can lead to stress from isolation. This problem is not unlike that faced in remote research stations or military tours of duty, although non-standard gravity conditions can exacerbate feelings of unfamiliarity and homesickness. Furthermore, confinement in limited and unchanging physical spaces appears to magnify interpersonal tensions in small crews and contribute to other negative psychological effects. These stresses can be mitigated by establishing regular contact with family and friends on Earth, maintaining health, incorporating recreational activities, and bringing along familiar items such as photographs and green plants. The importance of these psychological measures can be appreciated in the 1968 Soviet 'DLB Lunar Base' design: ...it was planned that the units on the Moon would have a false window, showing scenes of the Earth countryside that would change to correspond with the season back in Moscow. The exercise bicycle was equipped with a synchronized film projector, that allowed the cosmonaut to take a 'ride' out of Moscow with return. Theory: The challenge of getting anything at all to space, due to launch constraints, has had a profound effect on the physical shapes of space architecture. All space habitats to date have used modular architecture design. Payload fairing dimensions (typically the width but also the height) of modern launch vehicles limit the size of rigid components launched into space. This approach to building large scale structures in space involves launching multiple modules separately and then manually assembling them afterward. Modular architecture results in a layout similar to a tunnel system where passage through several modules is often required to reach any particular destination. It also tends to standardize the internal diameter or width of pressurized rooms, with machinery and furniture placed along the circumference. These types of space stations and surface bases can generally only grow by adding additional modules in one or more direction. Finding adequate working and living space is often a major challenge with modular architecture. As a solution, flexible furniture (collapsible tables, curtains on rails, deployable beds) can be used to transform interiors for different functions and change the partitioning between private and group space. For more discussion of the factors that influence shape in space architecture, see the Varieties section. 
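As a rough illustration of why these launch constraints force modular, multi-launch construction, the sketch below estimates the pressurized volume of a single rigid cylindrical module that fits inside a launch fairing, and the number of launches a modest habitable-volume target then implies. The 4.6 m envelope echoes the Shuttle payload bay diameter cited later in this article; the clearance, usable length, target volume, and equipment-rack fraction are illustrative assumptions, not figures for any real vehicle or module.

```python
import math

# Illustrative numbers only: a hypothetical fairing and module, not a real design.
fairing_diameter_m = 4.6   # assumed envelope (the Shuttle bay diameter cited later)
fairing_length_m   = 12.0  # assumed usable cylindrical length
clearance_m        = 0.2   # assumed dynamic clearance on each side

module_diameter_m = fairing_diameter_m - 2 * clearance_m
module_length_m   = fairing_length_m

# Pressurized volume of one rigid cylindrical module
module_volume_m3 = math.pi * (module_diameter_m / 2) ** 2 * module_length_m

# How many launches for a target habitable volume?
target_volume_m3 = 500     # assumed requirement for a small station
rack_fraction    = 0.5     # assumed share lost to equipment racks along the walls
habitable_per_module = module_volume_m3 * (1 - rack_fraction)
launches_needed = math.ceil(target_volume_m3 / habitable_per_module)

print(f"one module: ~{module_volume_m3:.0f} m^3 pressurized, "
      f"~{habitable_per_module:.0f} m^3 habitable")
print(f"~{launches_needed} launches for {target_volume_m3} m^3 of habitable volume")
```

Under these assumptions a single rigid module contributes well under 100 cubic meters of habitable volume, which is why every large station to date has been assembled in orbit from many such modules.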
Theory: Eugène Viollet-le-Duc advocated different architectural forms for different materials. This is especially important in space architecture. The mass constraints of launch push engineers to find ever lighter materials with adequate material properties. Moreover, challenges unique to the orbital space environment, such as rapid thermal expansion due to abrupt changes in solar exposure, and corrosion caused by particle and atomic oxygen bombardment, require unique materials solutions. Just as the industrial age produced new materials and opened up new architectural possibilities, advances in materials technology will change the prospects of space architecture. Carbon fiber is already being incorporated into space hardware because of its high strength-to-weight ratio. Investigations are underway to see whether carbon fiber or other composite materials will be adopted for major structural components in space. The architectural principle that champions using the most appropriate materials and leaving their nature unadorned is called truth to materials. Theory: A notable difference between the orbital context of space architecture and Earth-based architecture is that structures in orbit do not need to support their own weight. This is possible because of the microgravity condition of objects in free fall. In fact, much space hardware, such as the Space Shuttle's robotic arm, is designed only to function in orbit and would not be able to lift its own weight on the Earth's surface. Microgravity also allows an astronaut to move an object of practically any mass, albeit slowly, provided he or she is adequately constrained to another object. Therefore, structural considerations for the orbital environment are dramatically different from those of terrestrial buildings, and the biggest challenge to holding a space station together is usually launching and assembling the components intact. Construction on extraterrestrial surfaces still needs to be designed to support its own weight, but that weight will depend on the strength of the local gravitational field. Ground infrastructure: Human spaceflight currently requires a great deal of supporting infrastructure on Earth. All human orbital missions to date have been government-orchestrated. The organizational body that manages space missions is typically a national space agency, NASA in the case of the United States and Roscosmos for Russia. These agencies are funded at the federal level. At NASA, flight controllers are responsible for real-time mission operations and work onsite at NASA Centers. Most engineering development work involved with space vehicles is contracted out to private companies, who in turn may employ subcontractors of their own, while fundamental research and conceptual design are often done in academia through research funding. Varieties: Suborbital Structures that cross the boundary of space but do not reach orbital speeds are considered suborbital architecture. For spaceplanes, the architecture has much in common with airliner architecture, especially that of small business jets. Varieties: SpaceShipOne and SpaceShipTwo On June 21, 2004, Mike Melvill reached space funded entirely by private means. The vehicle, SpaceShipOne, was developed by Scaled Composites as an experimental precursor to a privately operated fleet of spaceplanes for suborbital space tourism. The operational spaceplane model, SpaceShipTwo (SS2), will be carried to an altitude of about 15 kilometers by a B-29 Superfortress-sized carrier aircraft, WhiteKnightTwo.
From there SS2 will detach and fire its rocket motor to bring the craft to its apogee of approximately 110 kilometers. Because SS2 is not designed to go into orbit around the Earth, it is an example of suborbital or aerospace architecture. The architecture of the SpaceShipTwo vehicle is somewhat different from what is common in previous space vehicles. Unlike the cluttered interiors with protruding machinery and many obscure switches of previous vehicles, this cabin looks more like something out of science fiction than a modern spacecraft. Both SS2 and the carrier aircraft are being built from lightweight composite materials instead of metal. When the time for weightlessness arrives on an SS2 flight, the rocket motor will be turned off, ending the noise and vibration. Passengers will be able to see the curvature of the Earth. Numerous double-paned windows that encircle the cabin will offer views in nearly all directions. Cushioned seats will recline flat into the floor to maximize room for floating. An always-pressurized interior will be designed to eliminate the need for space suits. Varieties: Orbital Orbital architecture is the architecture of structures designed to orbit around the Earth or another astronomical object. Examples of currently operational orbital architecture are the International Space Station and the re-entry vehicles Space Shuttle, Soyuz, and Shenzhou. Historical craft include the Mir space station, Skylab, and the Apollo spacecraft. Orbital architecture usually addresses the condition of weightlessness, a lack of atmospheric and magnetospheric protection from solar and cosmic radiation, rapid day/night cycles, and possibly the risk of orbital debris collision. In addition, re-entry vehicles must be adapted both to weightlessness and to the high temperatures and accelerations experienced during atmospheric reentry. Varieties: International Space Station The International Space Station (ISS) is the only permanently inhabited structure currently in space. It is the size of an American football field and has a crew of six. With a living volume of 358 m³, it has more interior room than the cargo beds of two American 18-wheeler trucks. However, because of the microgravity environment of the space station, there are not always well-defined walls, floors, and ceilings, and all pressurized areas can be utilized as living and working space. The International Space Station is still under construction. Modules were primarily launched using the Space Shuttle until its retirement and were assembled by its crew with the help of the crew already on board the space station. ISS modules were often designed and built to barely fit inside the shuttle's payload bay, which is cylindrical with a 4.6 meter diameter. Life aboard the space station is distinct from terrestrial life in some very interesting ways. Astronauts commonly "float" objects to one another; for example, they will give a clipboard an initial nudge and it will coast to its receiver across the room. In fact, an astronaut can become so accustomed to this habit that they forget it no longer works when they return to Earth. The diet of ISS spacefarers is a combination of the participating nations' space food. Each astronaut selects a personalized menu before flight. Many food choices reflect the cultural differences of the astronauts, such as bacon and eggs vs. fish products for breakfast (for the United States and Russia, respectively).
More recently such delicacies as Japanese beef curry, kimchi, and swordfish (Riviera style) have been featured on the orbiting outpost. As much of ISS food is dehydrated or sealed in pouches MRE-style, astronauts are quite excited to get relatively fresh food from shuttle and Progress resupply missions. Food is stored in packages that facilitate eating in microgravity by keeping the food constrained to the table. Spent packaging and trash must be collected and loaded into an available spacecraft for disposal. Waste management is not nearly as straightforward as it is on Earth. The ISS has many windows for observing Earth and space, one of the astronauts' favorite leisure activities. Since the Sun rises every 90 minutes, the windows are covered at "night" to help maintain the 24-hour sleep cycle. Varieties: When a shuttle is operating in low Earth orbit, the ISS serves as a safety refuge in case of emergency. The inability to fall back on the safety of the ISS during the latest Hubble Space Telescope Servicing Mission (because of different orbital inclinations) was the reason a backup shuttle was summoned to the launch pad. So, ISS astronauts operate with the mindset that they may be called upon to give sanctuary to a Shuttle crew should something happen to compromise a mission. The International Space Station is a colossal cooperative project between many nations. The prevailing atmosphere on board is one of diversity and tolerance. This does not mean that it is perfectly harmonious. Astronauts experience the same frustrations and interpersonal quarrels as their Earth-based counterparts. Varieties: A typical day on the station might start with wakeup at 6:00 am inside a private soundproof booth in the crew quarters. Astronauts would probably find their sleeping bags in an upright position tied to the wall, because orientation does not matter in space. The astronaut's thighs would be lifted about 50 degrees off the vertical. This is the neutral body posture in weightlessness – it would be excessively tiring to "sit" or "stand" as is common on Earth. Crawling out of the booth, an astronaut may chat with other astronauts about the day's science experiments, mission control conferences, interviews with Earthlings, and perhaps even a space walk or space shuttle arrival. Varieties: Bigelow Aerospace Bigelow Aerospace took the unique step of securing two patents that NASA held on inflatable space structures from development of the TransHab concept. The company now has sole rights to commercial development of the inflatable module technology. On July 12, 2006, the Genesis I experimental space habitat was launched into low Earth orbit. Genesis I demonstrated the basic viability of inflatable space structures, even carrying a payload of life science experiments. The second module, Genesis II, was launched into orbit on June 28, 2007, and tested out several improvements over its predecessor. Among these are reaction wheel assemblies, a precision measurement system for guidance, nine additional cameras, improved gas control for module inflation, and an improved on-board sensor suite. While Bigelow architecture is still modular, the inflatable configuration allows for much more interior volume than rigid modules. The BA-330, Bigelow's full-scale production model, has more than twice the volume of the largest module on the ISS. Inflatable modules can be docked to rigid modules and are especially well suited for crew living and working quarters.
In 2009 NASA began considering attaching a Bigelow module to the ISS, after abandoning the TransHab concept more than a decade before. The modules will likely have a solid inner core for structural support. Surrounding usable space could be partitioned into different rooms and floors. The Bigelow Expandable Activity Module (BEAM) was transported to the ISS, arriving on April 10, 2016, inside the unpressurized cargo trunk of a SpaceX Dragon during the SpaceX CRS-8 cargo mission. Bigelow Aerospace may choose to launch many of their modules independently, leasing their use to a wide variety of companies, organizations, and countries that cannot afford their own space programs. Possible uses of this space include microgravity research and space manufacturing. Or we may see a private space hotel composed of numerous Bigelow modules for rooms, observatories, or even a recreational padded gymnasium. There is the option of using such modules as habitation quarters on long-term space missions in the Solar System. One remarkable aspect of spaceflight is that once a craft leaves an atmosphere, aerodynamic shape is a non-issue. For instance, it is possible to apply a trans-lunar injection to an entire space station and send it to fly by the Moon. Bigelow has expressed the possibility of their modules being modified for lunar and Martian surface systems as well. However, the company has been out of business since March 2020. Varieties: Lunar Lunar architecture exists both in theory and in practice. Today the archeological artifacts of temporary human outposts lie untouched on the surface of the Moon. Six Apollo Lunar Module descent stages stand upright in various locations across the equatorial region of the near side, hinting at the extraterrestrial endeavors of mankind. The leading hypothesis on the origin of the Moon did not gain its current status until after lunar rock samples were analysed. The Moon is the furthest any humans have ever ventured from their home, and space architecture is what kept them alive and allowed them to function as humans. Varieties: Apollo On the cruise to the Moon, Apollo astronauts had two "rooms" to choose from – the Command Module (CM) or the Lunar Module (LM). This can be seen in the film Apollo 13, where the three astronauts were forced to use the LM as an emergency lifeboat. Passage between the two modules was possible through a pressurized docking tunnel, a major advantage over the Soviet design, which required donning a spacesuit to switch modules. The Command Module featured five windows made of three thick panes of glass. The two inner panes, made of aluminosilicate, ensured no cabin air leaked into space. The outer pane served as a debris shield and part of the heat shield needed for atmospheric reentry. The CM was a sophisticated spacecraft with all the systems required for successful flight but, with an interior volume of 6.17 m³, could be considered cramped for three astronauts. It had design weaknesses, such as the lack of a toilet (astronauts used much-hated 'relief tubes' and fecal bags). The coming of the space station would bring effective life support systems with waste management and water reclamation technologies. Varieties: The Lunar Module had two stages. A pressurized upper stage, termed the ascent stage, was the first true spaceship as it could only operate in the vacuum of space. The descent stage carried the engine used for descent, landing gear and radar, fuel and consumables, the famous ladder, and the Apollo Lunar Rover during later Apollo missions.
The idea behind staging is to reduce mass later in a flight, and is the same strategy used in an Earth-launched multistage rocket. The LM pilot stood up during the descent to the Moon. Landing was achieved via automated control with a manual backup mode. There was no airlock on the LM so the entire cabin had to be evacuated (air vented to space) in order to send an astronaut out to walk on the surface. To stay alive, both astronauts in the LM would have to get in their space suits at this point. The Lunar Module worked well for what it was designed to do. However, a big unknown remained throughout the design process – the effects of lunar dust. Varieties: Every astronaut who walked on the Moon tracked in lunar dust, contaminating the LM and later the CM during Lunar Orbit Rendezvous. These dust particles can't be brushed away in a vacuum, and have been described by John Young of Apollo 16 as being like tiny razor blades. It was soon realized that for humans to live on the Moon, dust mitigation was one of many issues that had to be taken seriously. Varieties: Constellation program The Exploration Systems Architecture Study that followed the Vision for Space Exploration of 2004 recommended the development of a new class of vehicles that have similar capabilities to their Apollo predecessors with several key differences. In part to retain some of the Space Shuttle program workforce and ground infrastructure, the launch vehicles were to use Shuttle-derived technologies. Secondly, rather than launching the crew and cargo on the same rocket, the smaller Ares I was to launch the crew with the larger Ares V to handle the heavier cargo. The two payloads were to rendezvous in low Earth orbit and then head to the Moon from there. The Apollo Lunar Module could not carry enough fuel to reach the polar regions of the Moon but the Altair lunar lander was intended to access any part of the Moon. While the Altair and surface systems would have been equally necessary for Constellation program to reach fruition, the focus was on developing the Orion spacecraft to shorten the gap in U.S. access to orbit following the retirement of the Space Shuttle in 2010. Varieties: Even NASA has described Constellation architecture as 'Apollo on steroids'. Nonetheless, a return to the proven capsule design is a move welcomed by many. Varieties: Martian Martian architecture is architecture designed to sustain human life on the surface of Mars, and all the supporting systems necessary to make this possible. The direct sampling of water ice on the surface, and evidence for geyser-like water flows within the last decade have made Mars the most likely extraterrestrial environment for finding liquid water, and therefore alien life, in the Solar System. Moreover, some geologic evidence suggests that Mars could have been warm and wet on a global scale in its distant past. Intense geologic activity has reshaped the surface of the Earth, erasing evidence of our earliest history. Martian rocks can be even older than Earth rocks, though, so exploring Mars may help us decipher the story of our own geologic evolution including the origin of life on Earth. Mars has an atmosphere, though its surface pressure is less than 1% of Earth's. Its surface gravity is about 38% of Earth's. Although a human expedition to Mars has not yet taken place, there has been significant work on Martian habitat design. 
Martian architecture would usually fall into one of two categories: architecture imported from Earth fully assembled and architecture making use of local resources. Varieties: Von Braun and other early proposals Wernher von Braun was the first to come up with a technically comprehensive proposal for a crewed Mars expedition. Rather than a minimal mission profile like Apollo, von Braun envisioned a crew of 70 astronauts aboard a fleet of ten massive spacecraft. Each vessel would be constructed in low Earth orbit, requiring nearly 100 separate launches before one was fully assembled. Seven of the spacecraft would be for crew while three were designated as cargo ships. There were even designs for small "boats" to shuttle crew and supplies between ships during the cruise to the Red Planet, which was to follow a minimum-energy Hohmann transfer trajectory. This mission plan would involve one-way transit times on the order of eight months and a long stay at Mars, creating the need for long-term living accommodations in space. Upon arrival at the Red Planet, the fleet would brake into Mars orbit and would remain there until the seven human vessels were ready to return to Earth. Only landing gliders, which were stored in the cargo ships, and their associated ascent stages would travel to the surface. Inflatable habitats would be constructed on the surface along with a landing strip to facilitate further glider landings. All necessary propellant and consumables were to be brought from Earth in von Braun's proposal. Some crew remained in the passenger ships during the mission for orbit-based observation of Mars and to maintain the ships. The passenger ships had habitation spheres 20 meters in diameter. Because the average crew member would spend much time in these ships (around 16 months of transit plus rotating shifts in Mars orbit), habitat design for the ships was an integral part of this mission. Varieties: Von Braun was aware of the threat posed by extended exposure to weightlessness. He suggested either tethering passenger ships together to spin about a common center of mass or including self-rotating, dumbbell-shaped "gravity cells" to drift alongside the flotilla to provide each crew member with a few hours of artificial gravity each day. At the time of von Braun's proposal, little was known of the dangers of solar radiation beyond Earth and it was cosmic radiation that was thought to present the more formidable challenge. The discovery of the Van Allen belts in 1958 demonstrated that the Earth was shielded from high energy solar particles. For the surface portion of the mission, inflatable habitats suggest the desire to maximize living space. It is clear von Braun considered the members of the expedition part of a community with much traffic and interaction between vessels. Varieties: The Soviet Union conducted studies of human exploration of Mars and came up with slightly less epic mission designs (though not short on exotic technologies) in 1960 and 1969. The first of which used electric propulsion for interplanetary transit and nuclear reactors as the power plants. On spacecraft that combine human crew and nuclear reactors, the reactor is usually placed at a maximum distance from the crew quarters, often at the end of a long pole, for radiation safety. An interesting component of the 1960 mission was the surface architecture. A "train" with wheels for rough terrain was to be assembled of landed research modules, one of which was a crew cabin. 
The train was to traverse the surface of Mars from south pole to north pole, an extremely ambitious goal even by today's standards. Other Soviet plans such as the TMK eschewed the large costs associated with landing on the Martian surface and advocated piloted (crewed) flybys of Mars. Flyby missions, like the lunar Apollo 8, extend the human presence to other worlds with less risk than landings. Most early Soviet proposals called for launches using the ill-fated N1 rocket. They also usually involved fewer crew than their American counterparts. Early Martian architecture concepts generally featured assembly in low Earth orbit, bringing all needed consumables from Earth, and designated work vs. living areas. The modern outlook on Mars exploration is not the same. Varieties: Recent initiatives In every serious study of what it would take to land humans on Mars, keep them alive, and then return them to Earth, the total mass required for the mission is simply stunning. The problem is that launching the consumables (oxygen, food, and water) even a small crew would go through during a multi-year Mars mission would require a very large rocket, with the vast majority of its own mass being propellant. This is where multiple launches and assembly in Earth orbit come from. However, even if such a ship stocked full of goods could be put together in orbit, it would need an additional (large) supply of propellant to send it to Mars. The delta-v, or change in velocity, required to insert a spacecraft from Earth orbit into a Mars transfer orbit is several kilometers per second. When we consider getting astronauts to the surface of Mars and back home, it quickly becomes clear that an enormous amount of propellant is needed if everything is taken from the Earth. This was the conclusion reached in the 1989 '90-Day Study' initiated by NASA in response to the Space Exploration Initiative. Varieties: Several techniques have changed the outlook on Mars exploration. The most powerful of these is in-situ resource utilization. Using hydrogen imported from Earth and carbon dioxide from the Martian atmosphere, the Sabatier reaction can be used to manufacture methane (for rocket propellant) and water (for drinking and for oxygen production through electrolysis). Another technique to reduce Earth-brought propellant requirements is aerobraking. Aerobraking involves skimming the upper layers of an atmosphere, over many passes, to slow a spacecraft down. It is a time-intensive process that shows most promise in slowing down cargo shipments of food and supplies.
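To make the scale of the propellant problem concrete, the sketch below runs textbook numbers: an idealized Hohmann transfer from a low Earth parking orbit onto a Mars transfer orbit, the Tsiolkovsky rocket equation for the propellant fraction that departure burn implies, and the mass leverage of the Sabatier reaction. The 300 km parking orbit and the 450 s specific impulse are illustrative assumptions, not figures from any particular mission study.

```python
import math

# Rough, illustrative numbers only (textbook constants, not mission data).
MU_SUN = 1.327e11        # km^3/s^2, Sun's gravitational parameter
MU_EARTH = 3.986e5       # km^3/s^2, Earth's gravitational parameter
R_EARTH_ORBIT = 1.496e8  # km, ~1 AU
R_MARS_ORBIT = 2.279e8   # km, ~1.52 AU
R_LEO = 6378 + 300       # km, assumed 300 km parking orbit

# Hohmann transfer: hyperbolic excess speed needed to leave Earth for Mars
a_transfer = (R_EARTH_ORBIT + R_MARS_ORBIT) / 2
v_perihelion = math.sqrt(MU_SUN * (2 / R_EARTH_ORBIT - 1 / a_transfer))
v_earth = math.sqrt(MU_SUN / R_EARTH_ORBIT)
v_infinity = v_perihelion - v_earth                 # ~2.9 km/s

# Delta-v to climb from the circular parking orbit onto that escape trajectory
v_circular = math.sqrt(MU_EARTH / R_LEO)
v_departure = math.sqrt(v_infinity**2 + 2 * MU_EARTH / R_LEO)
dv_tmi = v_departure - v_circular                   # ~3.6 km/s

# Tsiolkovsky rocket equation: propellant fraction for that single burn
isp_s = 450                # s, assumed hydrogen/oxygen stage
g0 = 9.80665e-3            # km/s^2
mass_ratio = math.exp(dv_tmi / (isp_s * g0))
propellant_fraction = 1 - 1 / mass_ratio

print(f"trans-Mars injection delta-v ~ {dv_tmi:.2f} km/s")
print(f"propellant fraction of the departing stack ~ {propellant_fraction:.0%}")

# Sabatier reaction for in-situ propellant: CO2 + 4 H2 -> CH4 + 2 H2O
# Mass leverage per kilogram of hydrogen imported from Earth (molar masses in g/mol)
ch4_per_h2 = 16.04 / (4 * 2.016)       # ~2.0 kg methane per kg hydrogen
h2o_per_h2 = 2 * 18.02 / (4 * 2.016)   # ~4.5 kg water per kg hydrogen
print(f"1 kg H2 + Martian CO2 -> ~{ch4_per_h2:.1f} kg CH4 and ~{h2o_per_h2:.1f} kg H2O")
```

Even under these optimistic assumptions, more than half of the mass assembled in Earth orbit is propellant for the departure burn alone, which is why making propellant from local resources changes the arithmetic so dramatically.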
NASA's Constellation program does call for landing humans on Mars after a permanent base on the Moon is demonstrated, but details of the base architecture are far from established. It is likely that the first permanent settlement will consist of consecutive crews landing prefabricated habitat modules in the same location and linking them together to form a base. In some of these modern, economical models of the Mars mission, the crew size is reduced to just four or six. Such a loss in variety of social relationships can lead to challenges in forming balanced social responses and a complete sense of identity. It follows that if long-duration missions are to be carried out with very small crews, then intelligent selection of the crew is of primary importance. Role assignment is another open issue in Mars mission planning. The primary role of 'pilot' is obsolete when landing takes only a few minutes of a mission lasting hundreds of days, and when that landing will be automated anyway. Assignment of roles will depend heavily on the work to be done on the surface and will require astronauts to assume multiple responsibilities. As for surface architecture, inflatable habitats, perhaps even provided by Bigelow Aerospace, remain a possible option for maximizing living space. In later missions, bricks could be made from a Martian regolith mixture for shielding or even for primary, airtight structural components. The environment on Mars offers different opportunities for space suit design, even something like the skin-tight Bio-Suit. Varieties: A number of specific habitat design proposals have been put forward, with varying degrees of architectural and engineering analysis. One recent proposal, the winner of NASA's 2015 Mars Habitat Competition, is Mars Ice House. The design concept is for a Mars surface habitat 3D-printed in layers out of water ice on the interior of an Earth-manufactured inflatable pressure-retention membrane. The completed structure would be semi-transparent, absorbing harmful radiation at several wavelengths while admitting approximately 50 percent of light in the visible spectrum. The habitat is proposed to be entirely set up and built by an autonomous robotic spacecraft and bots, although human habitation with approximately 2–4 inhabitants is envisioned once the habitat is fully built and tested. Robotic: It is widely accepted that robotic reconnaissance and trail-blazer missions will precede human exploration of other worlds. Making an informed decision on which specific destinations warrant sending human explorers requires more data than the best Earth-based telescopes can provide. For example, landing site selection for the Apollo Moon landings drew on data from three different robotic programs: the Ranger program, the Lunar Orbiter program, and the Surveyor program. Before a human was sent, robotic spacecraft mapped the lunar surface, proved the feasibility of soft landings, filmed the terrain up close with television cameras, and scooped and analysed the soil. A robotic exploration mission is generally designed to carry a wide variety of scientific instruments: cameras sensitive to particular wavelengths, telescopes, spectrometers, radar devices, accelerometers, radiometers, and particle detectors, to name a few. The function of these instruments is usually to return scientific data, but it can also be to give an intuitive "feel" of the spacecraft's surroundings, allowing a subconscious familiarization with the territory being explored through telepresence. A good example of this is the inclusion of HDTV cameras on the Japanese lunar orbiter SELENE. While purely scientific instruments could have been brought in their stead, these cameras allow the use of an innate sense to perceive the exploration of the Moon. Robotic: The modern, balanced approach to exploring an extraterrestrial destination involves several phases of exploration, each of which needs to produce a rationale for progressing to the next phase. The phase immediately preceding human exploration can be described as anthropocentric sensing, that is, sensing designed to give humans as realistic a feeling as possible of actually exploring in person. Moreover, the line between a human system and a robotic system in space will not always be clear.
As a general rule, the more formidable the environment, the more essential robotic technology is. Robotic systems can be broadly considered part of space architecture when their purpose is to facilitate the habitation of space or extend the range of the physiological senses into space. Future: The future of space architecture hinges on the expansion of human presence in space. Under the historical model of government-orchestrated exploration missions initiated by single political administrations, space structures are likely to be limited to small-scale habitats and orbital modules with design life cycles of only several years or decades. The designs, and thus the architecture, will generally be fixed and without real-time feedback from the spacefarers themselves. The technology to repair and upgrade existing habitats, a practice widespread on Earth, is not likely to be developed under short-term exploration goals. If exploration takes on a multi-administration or international character, the prospects for space architecture development by the inhabitants themselves will be broader. Private space tourism is one way the development of space and a space transportation infrastructure can be accelerated. Virgin Galactic has indicated plans for an orbital craft, SpaceShipThree. The demand for space tourism has no obvious bound; it is not difficult to imagine lunar parks or cruises past Venus. Another impetus to become a spacefaring species is planetary defense. Future: The classic planetary defense scenario is a mission to intercept an asteroid on a collision course with Earth. Using nuclear detonations to split or deflect the asteroid is risky at best. Such a tactic could actually make the problem worse by increasing the number of asteroid fragments that end up hitting the Earth. Robert Zubrin writes: If bombs are to be used as asteroid deflectors, they cannot just be launched willy-nilly. No, before any bombs are detonated, the asteroid will have to be thoroughly explored, its geology assessed, and subsurface bomb placements carefully determined and precisely located on the basis of such knowledge. A human crew, consisting of surveyors, geologists, miners, drillers, and demolition experts, will be needed on the scene to do the job right. If such a crew is to be summoned to a distant asteroid, there may be less risky ways to divert it. Another promising mitigation strategy is to land a crew on the asteroid well ahead of its impact date and to begin diverting some of its mass into space to slowly alter its trajectory. This is a form of rocket propulsion by virtue of Newton's third law, with the asteroid's own mass as the propellant. Whether nuclear detonations or diversion of mass is used, a sizable human crew may need to be sent into space for many months if not years to accomplish this mission. Questions such as what the astronauts will live in and what the ship will be like are questions for the space architect. Future: When motivations to go into space are realized, work on mitigating the most serious threats can begin. One of the biggest threats to astronaut safety in space is sudden radiation events from solar flares. The violent solar storm of August 1972, which occurred between the Apollo 16 and Apollo 17 missions, could have produced fatal consequences had astronauts been caught exposed on the lunar surface. The best known protection against radiation in space is shielding; an especially effective shield is water contained in large tanks surrounding the astronauts. Unfortunately, water has a density of 1,000 kilograms per cubic meter, so tanks large enough to matter are extremely heavy. A more practical approach would be to construct solar "storm shelters" that spacefarers can retreat to during peak events. For this to work, however, there would need to be a space weather broadcasting system in place to warn astronauts of upcoming storms, much as a tsunami warning system warns coastal inhabitants of impending danger. Perhaps one day a fleet of robotic spacecraft will orbit close to the Sun, monitoring solar activity and sending precious minutes of warning before waves of dangerous particles arrive at inhabited regions of space.
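To put a rough number on that mass penalty, the sketch below wraps a small, hypothetical cylindrical storm shelter in a modest layer of water. Only the 1,000 kg per cubic meter density comes from the text above; the shelter dimensions and the 30 cm shield thickness are illustrative assumptions.

```python
import math

# Illustrative numbers only: a hypothetical crew storm shelter, not a real design.
WATER_DENSITY = 1000.0   # kg/m^3, as noted above

shelter_radius_m = 1.0   # assumed: a 2 m diameter cylinder, room for a few crew
shelter_length_m = 2.0   # assumed
water_thickness_m = 0.3  # assumed shield layer on all sides

# Approximate the shield as a uniform water layer over the shelter's surface
surface_area = (2 * math.pi * shelter_radius_m * shelter_length_m   # side wall
                + 2 * math.pi * shelter_radius_m ** 2)              # two end caps
water_volume = surface_area * water_thickness_m
water_mass_kg = water_volume * WATER_DENSITY

# Shielding is often quoted as areal density (grams per square centimeter)
areal_density_g_cm2 = water_thickness_m * 100 * (WATER_DENSITY / 1000.0)

print(f"shield water volume ~ {water_volume:.1f} m^3")
print(f"shield water mass   ~ {water_mass_kg / 1000:.1f} tonnes")
print(f"areal density       ~ {areal_density_g_cm2:.0f} g/cm^2")
```

Several tonnes of water to protect a single small refuge is why dedicated storm shelters, rather than water walls around an entire habitat, are usually proposed.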
Future: Nobody knows what the long-term human future in space will be. Perhaps after gaining experience with routine spaceflight, exploring different worlds in the Solar System, and deflecting a few asteroids, constructing non-modular space habitats and infrastructure will be within reach. Such possibilities include mass drivers on the Moon, which launch payloads into space using only electricity, and spinning space colonies with closed ecological systems. We may even see a Mars in the early stages of terraformation, where inhabitants need only simple oxygen masks to walk out on the surface. In any case, such futures require space architecture.
**Cold shrink tubing** Cold shrink tubing: Cold shrink tubing is an open-ended rubber sleeve, made primarily from rubber elastomers with high-performance physical properties, that has been factory expanded or pre-stretched and assembled onto a supporting removable plastic core. Cold shrink tubing shrinks when the supporting core is removed during installation: the electrician slides the tube over the cable to be jointed, terminated, or abandoned and unwinds the core, causing the tube to collapse down, or contract, in place. Cold shrink tubing: Cold shrink tubing is used to insulate wires, connections, joints, and terminals in electrical work. It can also be used to repair wires, bundle wires together, and protect wires or small parts from minor abrasion. It must be stored in controlled environments at temperatures not exceeding 43 degrees Celsius.
**Phenylbutazone** Phenylbutazone: Phenylbutazone, often referred to as "bute", is a nonsteroidal anti-inflammatory drug (NSAID) for the short-term treatment of pain and fever in animals. Phenylbutazone: In the United States and United Kingdom, it is no longer approved for human use (except in the United Kingdom for ankylosing spondylitis), as it can cause severe adverse effects such as suppression of white blood cell production and aplastic anemia. This drug was implicated in the 2013 meat adulteration scandal. Positive phenylbutazone tests in horse meat were uncommon in the UK, however. Uses: In humans Phenylbutazone was originally made available for use in humans for the treatment of rheumatoid arthritis and gout in 1949. However, it is no longer approved, and therefore not marketed, for any human use in the United States. In the UK it is used to treat ankylosing spondylitis, but only when other therapies are unsuitable. Uses: In horses Phenylbutazone is the most commonly used NSAID for horses in the United States. It is used for the following purposes: Analgesia: It is used for pain relief from infections and musculoskeletal disorders, including sprains, overuse injuries, tendinitis, arthralgias, arthritis, and laminitis. Like other NSAIDs, it acts directly on musculoskeletal tissue to control inflammation, thereby reducing secondary inflammatory damage, alleviating pain, and restoring range of motion. It does not cure musculoskeletal ailments or work well on colic pain. Uses: Antipyresis: It is used for reduction of fevers. Its antipyretic qualities may mask other symptoms. Uses: History of phenylbutazone in racing In the 1968 Kentucky Derby, Dancer's Image, the winner of the race, was disqualified after traces of phenylbutazone were allegedly discovered in a post-race urinalysis. Owned by prominent Massachusetts businessman Peter D. Fuller and ridden by jockey Bobby Ussery, Dancer's Image was the first horse to win the Kentucky Derby and then be disqualified. Phenylbutazone was legal on most tracks around the United States in 1968, but had not yet been approved by Churchill Downs. Uses: Controversy and speculation still surround the incident. In the weeks prior to the race, Fuller had given previous winnings to Coretta Scott King, the widow of slain civil rights activist Martin Luther King Jr., which brought both praise and criticism. The previous year, King held a sit-in against housing discrimination which disrupted Derby week. Forty years later, Fuller still believed Dancer's Image was disqualified due to these events.Although Forward Pass had been named the winner, after many appeals the Kentucky Derby official website lists both Dancer's Image and Forward Pass as the winner. The website's race video commentary states that on the winner's plaque at Churchill Downs, both Dancer's Image and Forward Pass are listed as the 1968 winner of the Kentucky Derby. Uses: In dogs Phenylbutazone is occasionally used in dogs for the longer-term management of chronic pain, particularly due to osteoarthritis. About 20% of adult dogs are affected with osteoarthritis, which makes the management of musculoskeletal pain a major component of companion animal practice. The margin of safety for all NSAIDs is narrow in the dog, and other NSAIDs are more commonly used (etodolac, and carprofen). Gastrointestinal-protectant drugs, such as misoprostol, cimetidine, omeprazole, ranitidine, or sucralfate, are frequently included as a part of treatment with any NSAID. 
Dogs receiving chronic phenylbutazone therapy should be followed with regular blood work and renal monitoring. Side effects of phenylbutazone in dogs include gastrointestinal (GI) ulceration, bone marrow depression, rashes, malaise, blood dyscrasias, and diminished renal blood flow. Dosage and administration in horses: Phenylbutazone has a plasma elimination half-life of 4–8 hours; however, its half-life in inflammatory exudate is about 24 hours, so single daily dosing can be sufficient, although it is often given twice per day. The drug is considered fairly non-toxic when given at appropriate doses (2.2–4.4 mg/kg/day), even when used repeatedly. This dose has been doubled for diseases that cause severe pain, such as laminitis, but is toxic if repeated long term, and exceptionally high doses (15 mg/kg/day or higher) can kill the animal in less than a week. Phenylbutazone can be administered orally (via paste, powder, or feed-in) or intravenously. It should not be given intramuscularly or injected anywhere other than a vein, as it can cause tissue damage. Tissue damage and edema may also occur if the drug is injected repeatedly into the same vein.
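As a toy calculation based on the figures above, assuming a hypothetical 500 kg horse and treating plasma elimination as simple first-order decay, the daily dose range and the speed of plasma washout look like this:

```python
# Toy calculation: hypothetical 500 kg horse, simple first-order elimination.
# The dose range and half-lives are the figures quoted above; the rest is assumed.
body_mass_kg = 500.0                      # assumed example weight
dose_low_mg = 2.2 * body_mass_kg          # lower end of 2.2-4.4 mg/kg/day
dose_high_mg = 4.4 * body_mass_kg
print(f"daily dose for a {body_mass_kg:.0f} kg horse: {dose_low_mg:.0f}-{dose_high_mg:.0f} mg")

# Fraction of a plasma dose remaining after t hours, for half-lives of 4 and 8 hours
for half_life_h in (4, 8):
    for t_h in (12, 24, 48):
        remaining = 0.5 ** (t_h / half_life_h)
        print(f"t1/2 = {half_life_h} h: {remaining:.1%} left in plasma after {t_h} h")
```

The much longer half-life in inflammatory exudate quoted above is why once-daily dosing can remain effective even though plasma levels fall off within hours.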
Side effects and disadvantages: Side effects of phenylbutazone are similar to those of other NSAIDs. Overdose or prolonged use can cause gastrointestinal ulcers, blood dyscrasia, kidney damage (primarily dose-dependent renal papillary necrosis), oral lesions if given by mouth, and internal hemorrhage. This is especially pronounced in young, ill, or stressed horses, which are less able to metabolize the drug. Effects of gastrointestinal damage include edema of the legs and belly secondary to leakage of blood proteins into the intestines, resulting in decreased appetite, excessive thirst, weight loss, weakness, and, in advanced stages, kidney failure and death. Phenylbutazone can also cause agranulocytosis. Side effects and disadvantages: Phenylbutazone amplifies the anticoagulant effect of vitamin K antagonists such as warfarin or phenprocoumon. Phenylbutazone displaces warfarin from plasma binding sites, and toxic blood levels leading to haemorrhage can occur. It may aggravate kidney or liver problems. Phenylbutazone may be toxic to the embryo and can be transferred via the umbilical cord and by milk. Phenylbutazone can be used in foals, but premature foals, septicemic foals, foals with questionable kidney or liver function, and foals with diarrhea require careful monitoring. Drugs to protect the GI tract, such as omeprazole, cimetidine, and sucralfate, are frequently used with phenylbutazone. High doses of phenylbutazone may be considered a rules violation under some equestrian organizations, as the drug may remain in the bloodstream four to five days after administration. The International Agency for Research on Cancer places it in Group 3; i.e., "not classifiable as to its carcinogenicity to humans". Use in horses is limited to those not intended for food. Metabolites of phenylbutazone can cause aplastic anaemia in humans. Investigations into potential carcinogenicity: Opinions are conflicting regarding the carcinogenicity of phenylbutazone in animals; no evidence indicates it causes cancer in humans at therapeutic doses. Maekawa et al. (1987) found no increased cancer incidence in DONRYU rats fed a diet containing 0.125% or 0.25% phenylbutazone over two years. On the other hand, Kari et al. (1995) found a rare type of kidney cancer in rats (13 of 100) and an increased rate of liver cancer in male rats fed 150 and 300 mg/kg body weight of phenylbutazone for two years. Tennant (1993) listed phenylbutazone as a non-mutagenic carcinogen. Kirkland and Fowler (2010) acknowledged that, while phenylbutazone is not predicted to be a mutagen by computer software that simulates the chemical's interaction with DNA, one laboratory study indicated phenylbutazone subtly altered the structure of chromosomes of bone marrow cells of mice. Kirkland and Fowler (2010) furthermore explained that the theoretical carcinogenic effects of phenylbutazone in humans cannot be studied because patients prescribed the drug were given doses far below the level at which any effect might become apparent (<1 mM). The World Health Organization's International Agency for Research on Cancer (IARC) stated in 1987 that there was inadequate evidence for a carcinogenic effect in humans. Interactions: Other anti-inflammatory drugs that tend to cause GI ulcers, such as corticosteroids and other NSAIDs, can potentiate the bleeding risk. Combination with anticoagulant drugs, particularly coumarin derivatives, also increases the risk of bleeding. Avoid combining with other hepatotoxic drugs. Phenylbutazone may affect blood levels and duration of action of phenytoin, valproic acid, sulfonamides, sulfonylurea antidiabetic agents, barbiturates, promethazine, rifampicin, chlorpheniramine, diphenhydramine, and penicillin G. Overdose: Overdoses of phenylbutazone can cause kidney failure, liver injury, bone marrow suppression, and gastric ulceration or perforation. Early signs of toxicity include loss of appetite and depression. Chemistry: Phenylbutazone is a crystalline substance. It is obtained by condensation of diethyl n-butylmalonate with hydrazobenzene in the presence of base. In effect, this represents the formation of the heterocyclic system by simple lactamization. Oxyphenbutazone, the major metabolite of phenylbutazone, differs only in the para location of one of its phenyl groups, where a hydrogen atom is replaced by a hydroxyl group (making it 4-butyl-1-(4-hydroxyphenyl)-2-phenyl-3,5-pyrazolidinedione).
**Hang Ten** Hang Ten: "Hang ten" is a nickname for any of several maneuvers used in sports, especially surfing, wherein all ten toes or fingers are used to accomplish the maneuver. Surfing: the surfer stands and hangs all ten toes over the nose of the board; usually this can only be done on a heavy longboard. Basketball: the player dunks the ball and hangs onto the hoop. BMX: a flatland move. Jiu-jitsu: any of a number of grips, chokes, escapes, or maneuvers in which all ten toes touch the mat or all ten fingers grip the gi in order to create or escape a dominant position. Skateboarding: a nose manual named after the surfing maneuver.
**Windows Media Services** Windows Media Services: Windows Media Services (WMS) is streaming media server software from Microsoft that allows a Windows Server administrator to generate streaming media (audio/video). Only Windows Media, JPEG, and MP3 formats are supported. WMS is the successor of NetShow Services.In addition to streaming, WMS also has the ability to cache and record streams, enforce authentication, impose various connection limits, restrict access, use multiple protocols, generate usage statistics, and apply forward error correction (FEC). It can also handle a high number of concurrent connections making it suitable for content providers. Streams can also be distributed between servers as part of a distribution network where each server ultimately feeds a different network/audience. Both unicast and multicast streams are supported (multicast streams also use a proprietary and partially encrypted Windows Media Station (*.nsc) file for use by a player.) Typically, Windows Media Player is used to decode and watch/listen to the streams, but other players are also capable of playing unencrypted Windows Media content (Microsoft Silverlight, VLC, MPlayer, etc.) 64-bit versions of Windows Media Services are also available for increased scalability. The Scalable Networking Pack for Windows Server 2003 adds support for network acceleration and hardware-based offloading, which boosts Windows Media server performance. The newest version, Windows Media Services 2008, for Windows Server 2008, includes a built-in WMS Cache/Proxy plug-in which can be used to configure a Windows Media server either as a cache/proxy server or as a reverse proxy server so that it can provide caching and proxy support to other Windows Media servers. Microsoft claims that these offloading technologies nearly double the scalability, making Windows Media Services, according to the claim, the industry's most powerful streaming media server.Windows Media Services 2008 is no longer included with the setup files for the Windows Server 2008 operating system, but is available as a free download. It is also not supported on Windows Server 2012, having been replaced with IIS Media Services. Releases: NetShow Server 3.0 (Windows NT 4.0) NetShow Services 4.0 (Windows NT 4.0 SP3 or later) Windows Media Services 4.1 (Included in Windows 2000 Server family and downloadable for previous Windows versions) Windows Media Services 9 Series (Included in Windows Server 2003, works with IIS 6) Windows Media Services 2008 (Downloadable for Windows Server 2008, works with IIS 7)
**Tai chi** Tai chi: Tai chi (simplified Chinese: 太极拳; traditional Chinese: 太極拳; lit. 'Grand Ultimate Boxing') is an internal Chinese martial art practiced for self-defense and health. Known for its slow, intentional movements, Tai chi has practitioners worldwide and is particularly popular as a form of gentle exercise and moving meditation, with benefits to mental and physical health. Tai chi: Many forms of tai chi are practiced, both traditional and modern. While the precise origins are not known, the earliest documented practice is from Chen Village, Henan. Most modern styles trace their development to the five traditional schools: Chen, Yang, Wu (Hao), Wu, and Sun. Practitioners such as Yang Chengfu and Sun Lutang in the early 20th century promoted the art for its health benefits. Tai chi was included in the UNESCO List of Intangible Cultural Heritage of Humanity in 2020. Etymology: The name "tai chi", the most common English spelling, is not a standard romanization of the Chinese name for the art (simplified Chinese: 太极拳; traditional Chinese: 太極拳; lit. 'Taiji boxing'). The Chinese name was first commonly written in English using the Wade–Giles system as "tʻai chi chʻüan". But English speakers abbreviated it to "tʻai chi" and dropped the mark of aspiration. Since the late twentieth century, pinyin has replaced Wade–Giles as the most popular system for romanizing Chinese. In pinyin, tai chi is spelled taijiquan (tàijíquán). In English, tai chi is sometimes referred to as "shadowboxing". Etymology: The etymology of tai chi's Chinese name is somewhat uncertain because of the lack of a record of spoken usage. Before the mid-nineteenth century, it appears that outsiders generically described the art as zhanquan (沾拳, "touch boxing"), "Long Boxing"(長拳), mianquan ("Soft/Cotton/Neutralizing Boxing"; 軟/棉/化拳) or shisan shi (十三式, "the thirteen techniques"). In the mid-nineteenth century, the art began to be associated with the philosophy of taiji (see Conceptual background). This association may have originated in the writings of the founders of Wu (Hao)-style tai chi, perhaps inspired by a tai chi classic attributed to the semi-mythical Wang Zongyue that begins with the words "Taiji is born from Wuji; it is the mother of Yin and Yang". However, as the Wu (Hao) founders had no financial need to promote their art, their contributions to the "tai chi classics" were not distributed widely for many years. The first public association between taiji and the art was a poem by Imperial Court scholar Weng Tonghe describing a tai chi performance by Yang Luchan. It is not clear whether Weng was making a new connection or whether the new name was already in use. Written evidence for the Yang family's adoption of the name taiji first appeared in a later text, possibly completed in 1875 by Yang Luchan's son, Yang Banhou, or no later than the first decade of the twentieth century by one or more of Yang Banhou's disciples. By the second decade of the twentieth century, Yang Chengfu's disciples and Sun Lutang were using the term taijiquan in their publications, including in the titles of some of the tai chi classics. It then appeared in a book by a Chen family member, Chen Xin, published after he died in 1929. Philosophical background: Chinese philosophy, particularly Taoist and Confucian thought, forms the conceptual background to tai chi. 
Early tai chi texts include embedded quotations from early Chinese classics like the I Ching, Great Learning, Book of Documents, Records of the Grand Historian, and Zhuangzi, as well as from famous Chinese thinkers like Zhu Xi, Zhou Dunyi, and Mencius. Early tai chi sources are grounded in Taiji cosmology. Taiji cosmology appears in both Taoist and Confucian philosophy, where it represents the single source or mother of yin and yang (represented by the taijitu symbol). Tai chi also draws on Chinese theories of the body, particularly Taoist neidan (internal alchemy) teachings on qi (vital energy) and on the three dantian. Cheng Man-ch'ing emphasizes the Taoist background of tai chi and states that it "enables us to reach the stage of undifferentiated pure yang, which is exactly the same as Laozi's 'concentrating the qi and developing softness'". As such, tai chi considers itself an "internal" (neijia) martial art focused on developing qi. In China, tai chi is categorized under the Wudang group of Chinese martial arts—that is, arts applied with internal power. Although the term Wudang suggests these arts originated in the Wudang Mountains, it is used only to distinguish the skills, theories, and applications of neijia from those of the Shaolin grouping, or waijia (hard/external styles). Tai chi also adopts the Taoist ideals of softness overcoming hardness, of wu wei (effortless action), and of yielding into its martial art technique, while also retaining Taoist ideas of spiritual self-cultivation. Tai chi's path is one of developing naturalness by relaxing, attending inward, and slowing mind, body, and breath. This allows the practitioner to become less tense, to drop conditioned habits, to let go of thoughts, to allow qi to flow smoothly, and thus to flow with the Tao. It is thus a kind of moving meditation that allows us to let go of the self and experience no-mind (wuxin) and spontaneity (ziran). A key aspect of tai chi philosophy is to work with the flow of yin (softness) and yang (hardness) elements. When two forces push each other with equal force, neither side moves. Motion cannot occur until one side yields. Therefore, a key principle in tai chi is to avoid using force directly against force (hardness against hardness). Laozi provided the archetype for this in the Tao Te Ching when he wrote, "The soft and the pliable will defeat the hard and strong." Conversely, when in possession of leverage, one may want to use hardness to force the opponent to become soft. Traditionally, tai chi uses both soft and hard. Yin is said to be the mother of Yang, using soft power to create hard power. Philosophical background: Traditional schools also emphasize that one is expected to show wude ("martial virtue/heroism"), to protect the defenseless, and to show mercy to one's opponents. In December 2020, the 15th regular session of the UNESCO Intergovernmental Committee for the Safeguarding of the Intangible Cultural Heritage included tai chi in the UNESCO Representative List of the Intangible Cultural Heritage of Humanity. Practice: Traditionally, the foundational tai chi practice consists of learning and practicing specific solo forms or routines (taolu). This entails learning a routine sequence of movements that emphasize a straight spine, abdominal breathing and a natural range of motion. Tai chi relies on knowing the appropriate change in response to outside forces, as well as on yielding to and redirecting an attack, rather than meeting it with opposing force.
Physical fitness is also seen as an important step towards effective self-defense. Practice: Tai chi movements were inspired by animals, particularly birds and leopards. There are also numerous other supporting solo practices such as: Sitting meditation, to empty, focus and calm the mind and aid in opening the microcosmic orbit. Practice: Standing meditation (zhan zhuang) to raise the yang qi; Qigong to mobilize the qi; Acupressure massage to develop awareness of qi channels. Traditional Chinese medicine is taught to advanced students in some traditional schools. Further training entails learning tuishou (push hands drills), sanshou (striking techniques), free sparring, grappling training, and weapons training. In the "tai chi classics", writings by tai chi masters, it is noted that the physiological and kinesiological aspects of the body's movements are characterized by the circular motion and rotation of the pelvis, based on the metaphors of the pelvis as the hub and the arms and feet as the spokes of a wheel. Furthermore, respiration is coordinated with the physical movements in a state of deep relaxation, rather than muscular tension, in order to guide the practitioners to a state of homeostasis. Practice: Tai chi is a complete martial art system with a full range of bare-hand movement sets and weapon forms, such as tai chi sword and tai chi spear, which are based on the dynamic relationship between yin and yang. While tai chi is typified by its slow movements, many styles (including the three most popular: Yang, Wu, and Chen) have secondary, faster-paced forms. Some traditional schools teach martial applications of the postures of different forms (taolu). Practice: Solo practices Taolu (solo "forms") are choreographed sets of movements practiced alone or in unison as a group. Tai chi is often characterized by slow movements in taolu practice, and one of the reasons is to develop body awareness. Accurate, repeated practice of the solo routine is said to retrain posture, encourage circulation throughout students' bodies, maintain flexibility, and familiarize students with the martial sequences implied by the forms. Usually performed standing, solo forms have also been adapted for seated practice. Practice: Weapon practice Tai chi practices involving weapons also exist. Weapons training and fencing applications often employ: the jian, a straight double-edged sword, practiced as taijijian; the dao, a heavier curved saber, sometimes called a broadsword; the tieshan, a folding fan, also called shan and practiced as taijishan; the gun, a 2 m long wooden staff, practiced as taijigun; the qiang, a 2 m long spear or a 4 m long lance. More exotic weapons include: the large dadao and podao sabres; the ji, or halberd; the cane; the sheng biao, or rope dart; the sanjiegun, or three sectional staff; the feng huo lun, or wind and fire wheels; the lasso; the whip, chain whip and steel whip. History: Early development Tai chi's formative influences came from practices undertaken in Taoist and Buddhist monasteries, such as Wudang, Shaolin and The Thousand Year Temple in Henan. The early development of tai chi proper is connected with Henan's Thousand Year Temple and a nexus of nearby villages: Chen Village, Tang Village, Wangbao Village, and Zhaobao Town. These villages were closely connected, shared an interest in the martial arts, and many went to study at the Thousand Year Temple (which was a syncretic temple with elements from the three teachings).
New documents from these villages, mostly dating to the 17th century, are some of the earliest sources for the practice of tai chi. Some traditionalists claim that tai chi is a purely Chinese art that comes from ancient Taoism and Confucianism. These schools believe that tai chi theory and practice were formulated by the Taoist monk Zhang Sanfeng in the 12th century. These stories are often filled with legendary and hagiographical content and lack historical support. Modern historians point out that the earliest reference indicating a connection between Zhang Sanfeng and martial arts is actually a 17th-century piece called Epitaph for Wang Zhengnan (1669), composed by Huang Zongxi (1610–1695). Aside from this single source, the other claims of connections between tai chi and Zhang Sanfeng appeared no earlier than the 19th century. According to Douglas Wile, "there is no record of a Zhang Sanfeng in the Song Dynasty (960–1279), and there is no mention in the Ming (1368–1644) histories or hagiographies of Zhang Sanfeng of any connection between the immortal and the martial arts." Another common theory for the origin of tai chi is that it was created by Chen Wangting (1580–1660) while living in Chen Village (陳家溝), Henan. The other four contemporary traditional tai chi styles (Yang, Sun, Wu and Wu/Hao) trace their teachings back to Chen Village in the early 1800s. Yang Luchan (1799–1872), the founder of the popular Yang style, trained with the Chen family for 18 years before he started to teach in Beijing, which strongly suggests that his work was heavily influenced by the Chen family art. Martial arts historian Xu Zhen claimed that the tai chi of Chen Village was influenced by the Taizu changquan style practiced at nearby Shaolin Monastery, while Tang Hao thought it was derived from a treatise by Ming dynasty general Qi Jiguang, Jixiao Xinshu ("New Treatise on Military Efficiency"), which discussed several martial arts styles including Taizu changquan. History: Standardization In 1956 the Chinese government sponsored the Chinese Sports Committee (CSC), which brought together four wushu teachers to truncate the Yang family hand form to 24 postures. This was an attempt to standardize tai chi for wushu tournaments, as they wanted to create a routine that would be much less difficult to learn than the classical 88 to 108 posture solo hand forms. History: Another 1950s form is the "97 movements combined tai chi form", which blends Yang, Wu, Sun, Chen, and Fu styles. History: In 1976, they developed a slightly longer demonstration form that would not require the traditional forms' memory, balance, and coordination. This became the "Combined 48 Forms", created by three wushu coaches headed by Men Hui Feng. The combined forms simplified and combined classical forms from the original Chen, Yang, Wu, and Sun styles. Other competitive forms were designed to be completed within a six-minute time limit. History: In the late 1980s, the CSC standardized more competition forms for the four major styles as well as combined forms. These five sets of forms were created by different teams, and later approved by a committee of wushu coaches in China. These forms were named after their style: the "Chen-style national competition form" is the "56 Forms". The combined forms are "The 42-Form" or simply the "Competition Form". History: In the 11th Asian Games of 1990, wushu was included as an item for competition for the first time, with the 42-Form representing tai chi.
The International Wushu Federation (IWUF) applied for wushu to be part of the Olympic games.Tai chi was added to the UNESCO Intangible Cultural Heritage Lists in 2020 for China. Styles: Chinese origin The five major styles of tai chi are named for the Chinese families who originated them: Chen style (陳氏) of Chen Wangting (1580–1660) Yang style (楊氏) of Yang Luchan (1799–1872) Wu/Hao style (武郝氏) of Wu Yuxiang (1812–1880) and Hao Weizhen (1842–1920) Wu style (吳氏) of Wu Quanyou (1834–1902) and his son Wu Jianquan (1870–1942) Sun style (孫氏) of Sun Lutang (1861–1932)The most popular is Yang, followed by Wu, Chen, Sun, and Wu/Hao. The styles share underlying theory, but their training differs. Styles: Dozens of new styles, hybrid styles, and offshoots followed, although the family schools are accepted as standard by the international community. Other important styles are Zhaobao tai chi, a close cousin of Chen style, which is recognized by Western practitioners; Fu style, created by Fu Zhensong, which evolved from Chen, Sun and Yang styles, and incorporates movements from baguazhang; and Cheng Man-ch'ing style which simplifies Yang style. Styles: United States Choy Hok Pang, a disciple of Yang Chengfu, was the first known proponent of tai chi to openly teach in the United States, beginning in 1939. His son and student Choy Kam Man emigrated to San Francisco from Hong Kong in 1949 to teach tai chi in Chinatown. Choy Kam Man taught until he died in 1994.Sophia Delza, a professional dancer and student of Ma Yueliang, performed the first known public demonstration of tai chi in the United States at the New York City Museum of Modern Art in 1954. She wrote the first English language book on tai chi, T'ai-chi ch'üan: Body and Mind in Harmony, in 1961. She taught regular classes at Carnegie Hall, the Actors Studio, and the United Nations. Styles: Cheng Man-ch'ing, who opened his school Shr Jung tai chi after he moved to New York from Taiwan in 1964. Unlike the older generation of practitioners, Cheng was cultured and educated in American ways, and thus was able to transcribe Yang's dictation into a written manuscript that became the de facto manual for Yang style. Cheng felt Yang's traditional 108-movement form was unnecessarily long and repetitive, which makes it difficult to learn. He thus created a shortened 37-movement version that he taught in his schools. Cheng's form became the dominant form in the eastern United States until other teachers immigrated in larger numbers in the 1990s. He taught until his death in 1975. Styles: United Kingdom Norwegian Pytt Geddes was the first European to teach tai chi in Britain, holding classes at The Place in London in the early 1960s. She had first encountered tai chi in Shanghai in 1948, and studied with Choy Hok Pang and his son Choy Kam Man (who both also taught in the United States) while living in Hong Kong in the late 1950s. Styles: Lineage Note: This lineage tree is not comprehensive, but depicts those considered the "gate-keepers" and most recognised individuals in each generation of the respective styles. Although many styles were passed down to respective descendants of the same family, the lineage focused on is that of the martial art and its main styles, not necessarily that of the families. Each (coloured) style depicted below has a lineage tree on its respective article page that is focused on that specific style, showing a greater insight into the highly significant individuals in its lineage. 
Names denoted by an asterisk are legendary or semi-legendary figures in the lineage; while their involvement in the lineage is accepted by most of the major schools, it is not independently verifiable from known historical records. Modern forms The Cheng Man-ch'ing (Zheng Manqing) and Chinese Sports Commission short forms are derived from Yang family forms, but neither is recognized as Yang-style tai chi by standard-bearing Yang family teachers. The Chen, Yang, and Wu families promote their own shortened demonstration forms for competitive purposes. Benefits: The primary purposes of tai chi are health, sport/self-defense, and aesthetic benefits. Practitioners mostly interested in tai chi's health benefits have diverged from those who emphasize self-defense, and also from those who are attracted by its aesthetic appeal (wushu). More traditional practitioners hold that the two aspects of health and martial arts make up the art's yin and yang. The "family" schools present their teachings in a martial art context, whatever the intention of their students. Benefits: Health Tai chi's health training concentrates on relieving stress on the body and mind. In the 21st century, tai chi classes that purely emphasize health are popular in hospitals, clinics, community centers and senior centers. Tai chi's low-stress training method for seniors has become better known. Clinical studies exploring tai chi's effect on specific diseases and health conditions exist, though there are not sufficient studies with consistent approaches to generate a comprehensive conclusion. Tai chi has been promoted for treating various ailments, and is supported by the Parkinson's Foundation and Diabetes Australia, among others. However, medical evidence of effectiveness is lacking. A 2017 systematic review found that it decreased falls in older people. A 2011 comprehensive overview of systematic reviews of tai chi recommended tai chi to older people for its physical and psychological benefits. It found positive results for fall prevention and overall mental health. No conclusive evidence showed benefit for most of the conditions researched, including Parkinson's disease, diabetes, cancer and arthritis. A 2015 systematic review found that tai chi could be performed by those with chronic medical conditions such as chronic obstructive pulmonary disease, heart failure, and osteoarthritis without negative effects, and found favorable effects on functional exercise capacity. In 2015 the Australian Government's Department of Health published the results of a review of alternative therapies that sought to identify any that were suitable for coverage by health insurance. Tai chi was one of 17 therapies evaluated. The study concluded that low-quality evidence suggests that tai chi may have some beneficial health effects when compared to control in a limited number of populations for a limited number of outcomes. A 2020 review of 13 studies found that tai chi had a positive effect on the quality of life and depressive symptoms of older adults with chronic conditions who lived in community settings. In 2022, the U.S. National Institutes of Health published an analysis of various health claims, studies and findings. It concluded that the evidence was of low quality, but that tai chi appears to have a small positive effect on quality of life. Benefits: Sport and self-defense As a martial art, tai chi emphasizes defense over attack and replies to hard with soft.
The ability to use tai chi as a form of combat is the test of a student's understanding of the art. This is typically demonstrated via competition with others. Practitioners test their skills against students from other schools and martial arts styles in tuishou ("pushing hands") and sanshou competition.
**Yoshinobu Miyake** Yoshinobu Miyake: Yoshinobu Miyake (三宅 義信, Miyake Yoshinobu, born November 24, 1939) is a retired Japanese weightlifter and Japan Ground Self-Defense Force Lieutenant. He won one silver and two gold medals at the 1960, 1964 and 1968 Olympics and finished fourth in 1972. He also won world titles in 1962, 1963 and 1965–66. Between 1959 and 1969 Miyake set 25 official world records, including 10 consecutive records in the snatch and nine consecutive records in the total. In 1993 he was inducted into the International Weightlifting Federation Hall of Fame. Miyake was known for his signature "frog style" or "Miyake pull" lifting technique, in which he kept his heels together with knees spread outward to about 60 degrees with a wide grip on the bar, resembling a frog. After retiring from competition, Miyake coached the national weightlifting team. His brother Yoshiyuki Miyake and niece Hiromi Miyake also won Olympic medals in weightlifting. All three were shorter than 1.56 m. Miyake took part in the opening ceremony of the 2020 Olympics as one of the bearers of the flag of Japan.
**ISO/IEC 4909** ISO/IEC 4909: ISO/IEC 4909 is a 2006 international standard produced by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) for Identification cards — Financial transaction cards — Magnetic stripe data content for track 3. It was reviewed in 2018. The original ISO 4909 standard appeared in 1987. It is one of a number of international bank card standards. The standard is used for credit cards. The standard has been adopted in many countries, including, for example, Denmark, Germany, India, the Netherlands, New Zealand, Norway, and the United Kingdom.
**Arrangement of lines** Arrangement of lines: In geometry, an arrangement of lines is the subdivision of the plane formed by a collection of lines. Problems of counting the features of arrangements have been studied in discrete geometry, and computational geometers have found algorithms for the efficient construction of arrangements. Definition: Intuitively, any finite set of lines in the plane cuts the plane into two-dimensional polygons (cells), one-dimensional line segments or rays, and zero-dimensional crossing points. This can be formalized mathematically by classifying the points of the plane according to which side of each line they are on. Each line separates the plane into two open half-planes, and each point of the plane has three possibilities per line: it can be in either one of these two half-planes, or it can be on the line itself. Two points can be considered to be equivalent if they have the same classification with respect to all of the lines. This is an equivalence relation, whose equivalence classes are subsets of equivalent points. These subsets subdivide the plane into shapes of the following three types: The cells or chambers of the arrangement are two-dimensional regions not part of any line. They form the interiors of bounded or unbounded convex polygons. If the plane is cut along all of the lines, these are the connected components of the points that remain uncut. Definition: The edges or panels of the arrangement are one-dimensional regions belonging to a single line. They are the open line segments and open infinite rays into which each line is partitioned by its crossing points with the other lines. That is, if one of the lines is cut by all the other lines, these are the connected components of its uncut points. Definition: The vertices of the arrangement are isolated points belonging to two or more lines, where those lines cross each other.The boundary of a cell is the system of edges that touch it, and the boundary of an edge is the set of vertices that touch it (one vertex for a ray and two for a line segment). The system of objects of all three types, linked by this boundary operator, form a cell complex covering the plane. Two arrangements are said to be isomorphic or combinatorially equivalent if there is a one-to-one boundary-preserving correspondence between the objects in their associated cell complexes.The same classification of points, and the same shapes of equivalence classes, can be used for infinite but locally finite arrangements, in which every bounded subset of the plane may be crossed by only finitely many lines, although in this case the unbounded cells may have infinitely many sides. Complexity of arrangements: The study of arrangements was begun by Jakob Steiner, who proved the first bounds on the maximum number of features of different types that an arrangement may have. The most straightforward features to count are the vertices, edges, and cells: An arrangement with n lines has at most n(n−1)/2 vertices (a triangular number), one per pair of crossing lines. This maximum is achieved for simple arrangements, those in which each two lines cross at a vertex that is disjoint from all the other lines. The number of vertices is smaller when some lines are parallel, or when some vertices are crossed by more than two lines. Complexity of arrangements: Any arrangement can be rotated to avoid axis-parallel lines, without changing its number of cells. Any arrangement with no axis-parallel lines has n infinite-downward rays, one per line. 
These rays separate n+1 cells of the arrangement that are unbounded in the downward direction. The remaining cells all have a unique bottommost vertex (again, because there are no axis-parallel lines). For each pair of lines, there can be only one cell where the two lines meet at the bottom vertex, so the number of downward-bounded cells is at most the number of pairs of lines, n(n−1)/2. Adding the unbounded and bounded cells, the total number of cells in an arrangement can be at most n(n+1)/2 + 1. These are the numbers of the lazy caterer's sequence. Complexity of arrangements: The number of edges of the arrangement is at most n^2, as may be seen either by using the Euler characteristic to calculate it from the numbers of vertices and cells, or by observing that each line is partitioned into at most n edges by the other n−1 lines. Again, this worst-case bound is achieved for simple arrangements. More complex features go by the names of "zones", "levels", and "many faces": The zone of a line ℓ in a line arrangement is the collection of cells having edges belonging to ℓ. The zone theorem states that the total number of edges in the cells of a single zone is linear. More precisely, the total number of edges of the cells belonging to a single side of line ℓ is at most 5n − 1, and the total number of edges of the cells belonging to both sides of ℓ is at most ⌊9.5n⌋ − 1. More generally, the total complexity of the cells of a line arrangement that are intersected by any convex curve is O(n α(n)), where α denotes the inverse Ackermann function, as may be shown using Davenport–Schinzel sequences. The sum of squares of cell complexities in an arrangement is O(n^2), as can be shown by summing the zones of all lines. Complexity of arrangements: The k-level of an arrangement is the polygonal chain formed by the edges that have exactly k other lines directly below them. The ≤k-level is the portion of the arrangement below the k-level. Finding matching upper and lower bounds for the complexity of a k-level remains a major open problem in discrete geometry. The best upper bound is O(nk^(1/3)), while the best lower bound is n·2^(Ω(√(log k))). In contrast, the maximum complexity of the ≤k-level is known to be Θ(nk). A k-level is a special case of a monotone path in an arrangement; that is, a sequence of edges that intersects any vertical line in a single point. However, monotone paths may be much more complicated than k-levels: there exist arrangements and monotone paths in these arrangements where the number of points at which the path changes direction is n^(2−o(1)). Although a single cell in an arrangement may be bounded by all n lines, it is not possible in general for m different cells to all be bounded by n lines. Rather, the total complexity of m cells is at most Θ(m^(2/3) n^(2/3) + n), almost the same bound as occurs in the Szemerédi–Trotter theorem on point-line incidences in the plane. A simple proof of this follows from the crossing number inequality: if m cells have a total of x+n edges, one can form a graph with m nodes (one per cell) and x edges (one per pair of consecutive cells on the same line). The edges of this graph can be drawn as curves that do not cross within the cells corresponding to their endpoints, and then follow the lines of the arrangement. Therefore, there are O(n^2) crossings in this drawing. However, by the crossing number inequality, there are Ω(x^3/m^2) crossings.
In order to satisfy both bounds, x must be O(m^(2/3) n^(2/3)). Projective arrangements and projective duality: It is often convenient to study line arrangements not in the Euclidean plane but in the projective plane, because in projective geometry every pair of lines has a crossing point. In the projective plane, it is not possible to define arrangements using sides of lines, because a line in the projective plane does not separate the plane into two distinct sides. However, one may still define the cells of an arrangement to be the connected components of the points not belonging to any line, the edges to be the connected components of sets of points belonging to a single line, and the vertices to be points where two or more lines cross. A line arrangement in the projective plane differs from its Euclidean counterpart in that the two Euclidean rays at either end of a line are replaced by a single edge in the projective plane that connects the leftmost and rightmost vertices on that line, and in that pairs of unbounded Euclidean cells are replaced in the projective plane by single cells that are crossed by the projective line at infinity. Due to projective duality, many statements about the combinatorial properties of points in the plane may be more easily understood in an equivalent dual form about arrangements of lines. For instance, the Sylvester–Gallai theorem, stating that any non-collinear set of points in the plane has an ordinary line containing exactly two points, transforms under projective duality to the statement that any projective arrangement of finitely many lines with more than one vertex has an ordinary point, a vertex where only two lines cross. The earliest known proof of the Sylvester–Gallai theorem, by Melchior (1940), uses the Euler characteristic to show that such a vertex must always exist. Triangles in arrangements: An arrangement of lines in the projective plane is said to be simplicial if every cell of the arrangement is bounded by exactly three edges. Simplicial arrangements were first studied by Melchior. Three infinite families of simplicial line arrangements are known: a near-pencil consisting of n−1 lines through a single point, together with a single additional line that does not go through the same point; the family of lines formed by the sides of a regular polygon together with its axes of symmetry; and the sides and axes of symmetry of an even regular polygon, together with the line at infinity. Additionally, there are many other examples of sporadic simplicial arrangements that do not fit into any known infinite family. Triangles in arrangements: As Branko Grünbaum writes, simplicial arrangements "appear as examples or counterexamples in many contexts of combinatorial geometry and its applications." For instance, Artés, Grünbaum & Llibre (1998) use simplicial arrangements to construct counterexamples to a conjecture on the relation between the degree of a set of differential equations and the number of invariant lines the equations may have. The two known counterexamples to the Dirac–Motzkin conjecture (which states that any n-line arrangement has at least n/2 ordinary points) are both simplicial. The dual graph of a line arrangement has one node per cell and one edge linking any pair of cells that share an edge of the arrangement. These graphs are partial cubes, graphs in which the nodes can be labeled by bitvectors in such a way that the graph distance equals the Hamming distance between labels.
In the case of a line arrangement, each coordinate of the labeling assigns 0 to nodes on one side of one of the lines and 1 to nodes on the other side. Dual graphs of simplicial arrangements have been used to construct infinite families of 3-regular partial cubes, isomorphic to the graphs of simple zonohedra. Triangles in arrangements: It is also of interest to study the extremal numbers of triangular cells in arrangements that may not necessarily be simplicial. Any arrangement in the projective plane must have at least n triangles. Every arrangement that has only n triangles must be simple. For Euclidean rather than projective arrangements, the minimum number of triangles is n−2 , by Roberts's triangle theorem. The maximum possible number of triangular faces in a simple arrangement is known to be upper bounded by n(n−1)/3 and lower bounded by n(n−3)/3 ; the lower bound is achieved by certain subsets of the diagonals of a regular 2n -gon. For non-simple arrangements the maximum number of triangles is similar but more tightly bounded. The closely related Kobon triangle problem asks for the maximum number of non-overlapping finite triangles in an arrangement in the Euclidean plane, not counting the unbounded faces that might form triangles in the projective plane. For some but not all values of n , n(n−2)/3 triangles are possible. Multigrids and rhombus tilings: The dual graph of a simple line arrangement may be represented geometrically as a collection of rhombi, one per vertex of the arrangement, with sides perpendicular to the lines that meet at that vertex. These rhombi may be joined together to form a tiling of a convex polygon in the case of an arrangement of finitely many lines, or of the entire plane in the case of a locally finite arrangement with infinitely many lines. This construction is sometimes known as a Klee diagram, after a publication of Rudolf Klee in 1938 that used this technique. Not every rhombus tiling comes from lines in this way, however.de Bruijn (1981) investigated special cases of this construction in which the line arrangement consists of k sets of equally spaced parallel lines. For two perpendicular families of parallel lines this construction just gives the familiar square tiling of the plane, and for three families of lines at 120-degree angles from each other (themselves forming a trihexagonal tiling) this produces the rhombille tiling. However, for more families of lines this construction produces aperiodic tilings. In particular, for five families of lines at equal angles to each other (or, as de Bruijn calls this arrangement, a pentagrid) it produces a family of tilings that include the rhombic version of the Penrose tilings. Multigrids and rhombus tilings: There also exist three infinite simplicial arrangements formed from sets of parallel lines. The tetrakis square tiling is an infinite arrangement of lines forming a periodic tiling that resembles a multigrid with four parallel families, but in which two of the families are more widely spaced than the other two, and in which the arrangement is simplicial rather than simple. Its dual is the truncated square tiling. Similarly, the triangular tiling is an infinite simplicial line arrangement with three parallel families, which has as its dual the hexagonal tiling, and the bisected hexagonal tiling is an infinite simplicial line arrangement with six parallel families and two line spacings, dual to the great rhombitrihexagonal tiling. 
These three examples come from three affine reflection groups in the Euclidean plane, systems of symmetries based on reflection across each line in these arrangements. Algorithms: Constructing an arrangement means, given as input a list of the lines in the arrangement, computing a representation of the vertices, edges, and cells of the arrangement together with the adjacencies between these objects, for instance as a doubly connected edge list. Due to the zone theorem, arrangements can be constructed efficiently by an incremental algorithm that adds one line at a time to the arrangement of the previously added lines: each new line can be added in time proportional to its zone, resulting in a total construction time of O(n2) . However, the memory requirements of this algorithm are high, so it may be more convenient to report all features of an arrangement by an algorithm that does not keep the entire arrangement in memory at once. This may again be done efficiently, in time O(n2) and space O(n) , by an algorithmic technique known as topological sweeping. Computing a line arrangement exactly requires a numerical precision several times greater than that of the input coordinates: if a line is specified by two points on it, the coordinates of the arrangement vertices may need four times as much precision as these input points. Therefore, computational geometers have also studied algorithms for constructing arrangements efficiently with limited numerical precision.As well, researchers have studied efficient algorithms for constructing smaller portions of an arrangement, such as zones, k -levels, or the set of cells containing a given set of points. The problem of finding the arrangement vertex with the median x -coordinate arises (in a dual form) in robust statistics as the problem of computing the Theil–Sen estimator of a set of points.Marc van Kreveld suggested the algorithmic problem of computing shortest paths between vertices in a line arrangement, where the paths are restricted to follow the edges of the arrangement, more quickly than the quadratic time that it would take to apply a shortest path algorithm to the whole arrangement graph. An approximation algorithm is known, and the problem may be solved efficiently for lines that fall into a small number of parallel families (as is typical for urban street grids), but the general problem remains open. Non-Euclidean line arrangements: A pseudoline arrangement is a family of curves that share similar topological properties with a line arrangement. These can be defined most simply in the projective plane as simple closed curves any two of which meet in a single crossing point. A pseudoline arrangement is said to be stretchable if it is combinatorially equivalent to a line arrangement. Determining stretchability is a difficult computational task: it is complete for the existential theory of the reals to distinguish stretchable arrangements from non-stretchable ones. Every arrangement of finitely many pseudolines can be extended so that they become lines in a "spread", a type of non-Euclidean incidence geometry in which every two points of a topological plane are connected by a unique line (as in the Euclidean plane) but in which other axioms of Euclidean geometry may not apply.Another type of non-Euclidean geometry is the hyperbolic plane, and arrangements of hyperbolic lines in this geometry have also been studied. Any finite set of lines in the Euclidean plane has a combinatorially equivalent arrangement in the hyperbolic plane (e.g. 
by enclosing the vertices of the arrangement by a large circle and interpreting the interior of the circle as a Klein model of the hyperbolic plane). However, parallel (non-crossing) pairs of lines are less restricted in hyperbolic line arrangements than in the Euclidean plane: in particular, the relation of being parallel is an equivalence relation for Euclidean lines but not for hyperbolic lines. The intersection graph of the lines in a hyperbolic arrangement can be an arbitrary circle graph. The corresponding concept to hyperbolic line arrangements for pseudolines is a weak pseudoline arrangement, a family of curves having the same topological properties as lines such that any two curves in the family either meet in a single crossing point or have no intersection.
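To make the counting bounds quoted in the "Complexity of arrangements" section concrete, the following sketch (in Python) computes the vertices of a small arrangement by brute-force pairwise intersection and prints the worst-case formulas for comparison. The specific line coefficients are arbitrary examples chosen to be in general position; this is an illustration, not an efficient construction algorithm such as the incremental or topological-sweep methods described above.

```python
# A minimal sketch checking the worst-case counts quoted above on a small
# arrangement in general position. Lines are given in the form a*x + b*y = c;
# the specific coefficients below are arbitrary illustrative examples.
from itertools import combinations

lines = [(1.0, 1.0, 0.0), (1.0, -1.0, 1.0), (0.0, 1.0, 2.0), (1.0, 2.0, -3.0)]

def intersection(l1, l2):
    """Return the crossing point of two lines, or None if they are parallel."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

n = len(lines)
# Collect distinct crossing points (rounded to merge near-identical vertices).
vertices = {tuple(round(v, 9) for v in p)
            for l1, l2 in combinations(lines, 2)
            if (p := intersection(l1, l2)) is not None}

print(f"n = {n} lines, {len(vertices)} vertices found")
print(f"max vertices n(n-1)/2      = {n * (n - 1) // 2}")
print(f"max edges    n^2           = {n * n}")
print(f"max cells    n(n+1)/2 + 1  = {n * (n + 1) // 2 + 1}")
```

For the four example lines, all six pairwise crossings are distinct, so the vertex count matches the n(n−1)/2 maximum, as expected for a simple arrangement.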
**Small nucleolar RNA SNORA63** Small nucleolar RNA SNORA63: In molecular biology, Small nucleolar RNA SNORA63 (E3) belongs to the H/ACA class of snoRNAs, is involved in the processing of eukaryotic pre-rRNA and has regions of complementarity to 18S rRNA. E3 is encoded in introns in the gene for protein synthesis initiation factor 4AII.
**MIS416** MIS416: MIS416 is an experimental drug developed by Innate Immunotherapeutics which underwent clinical trials to treat secondary progressive multiple sclerosis. It is derived from the bacteria that causes acne and targets myeloid cells through TLR9 and NOD2. In one of its first rounds of clinical trials, the drug was shown to be "safe and well tolerated", with 80% of secondary-progressive multiple sclerosis patients exhibiting more than 30% improvement in at least one area of their MS status. However, Phase II clinical trials were unable to prove that the drug provided a benefit to patients. It is also being researched as a potential treatment for cancer. Development: MIS416 is a microparticle derived from the cytoskeleton of P. acnes, a species of bacteria present on the skin of most adults that causes acne.MIS416 was originally developed as a vaccine adjuvant, a component of vaccines that helps to activate an immune response against the vaccine target. Bacteria-derived microparticles have several advantages over traditional adjuvants related both to their size and biological properties.Because MIS416 is engulfed by immune cells, it is being investigated as an immunotherapy-based treatment for solid tumors.The drug was used to treat multiple sclerosis under a compassionate use law in New Zealand before clinical trials began. It is administered as an intravenous infusion. In 2017, clinical trials in people with secondary progressive multiple sclerosis failed to meet the primary endpoint of slowed progression of the disease. Phase II trial and aftermath: On June 22, 2017, Innate Immunotherapeutics announced that clinical trials undertaken to evaluate the efficacy of MIS416 in managing secondary progressive multiple sclerosis (SPMS) had "failed to show any clinically meaningful benefit or statistical significance". As a result, the company's stock dropped by 92 percent and crashed on the Australian Securities Exchange.US Congressman Chris Collins, a member of the company's board of directors and 17-percent stock holder, was subsequently indicted on charges of insider trading in connection with the poor clinical trial results. Collins had allegedly obtained word from the company about the results and informed his son, Cameron Collins, who immediately sold his US stock. Cameron allegedly tipped off shareholder Stephen Zarsky, who informed three others, thus preventing a total loss of $768,000 in stocks. Zarsky was indicted together with Collins and his son.
**18-Methylaminocoronaridine** 18-Methylaminocoronaridine: (–)-18-Methylaminocoronaridine (18-MAC) is a second generation synthetic derivative of ibogaine developed by the research team led by the pharmacologist Stanley D. Glick from the Albany Medical College and the chemist Martin E. Kuehne from the University of Vermont.
**Tyre label** Tyre label: The Tyre Label is a mark for motor vehicle tyres. Manufacturers of tyres for cars, light and heavy trucks must specify the fuel consumption, wet grip and noise classification of every tyre sold in the EU market starting in November 2012. For passenger car, light truck and truck tyres, the information must be available in technical promotional literature (leaflets, brochures, etc.), including the manufacturer website. For passenger and light truck tyres, the manufacturers or importers have the choice of either putting a sticker on the tyre tread or a label accompanying each delivery of a batch of tyres to the dealer and to the end consumer. The tyre label uses a classification from the best (green category "A") to the worst performance (red category "G"). Tyre label: This initiative results from a regulation by the EU Commission released in 2009. It is part of the Energy Efficiency Action Plan, designed to improve the energy performance of products, buildings and services in order to reduce energy consumption by 20% by 2020. The EU has already created a system for marking electrical household appliances such as refrigerators, washing machines and televisions with the intent to better inform the European population about the level of their consumption. Rolling resistance: Rolling resistance is the key factor in measuring the energy efficiency of a tyre and has a direct influence on the fuel consumption of a vehicle. Compared with a class "G" set, a set of green class "A" tyres can reduce a passenger car's fuel consumption by about 9%, and even more for trucks. The 'D' grade is not used in rolling resistance grading for passenger cars and light trucks, while it is used for heavy trucks. Wet grip: As of January 2019, the wet grip tests for passenger car tyres (EU category C1) are specified in a 2011 amendment, Regulation No 228/2011, to the original 2009 Regulation No 1222/2009 "on the labelling of tyres with respect to fuel efficiency and other essential parameters". The wet grip index (WGI) is calculated from the results of two tests specified in the regulations. The first test measures the maximum achievable average deceleration of a vehicle as it slows from 85 ± 2 km/h (52.8 mph) to 20 ± 2 km/h (12.4 mph). The second test (the "skid trailer" test) is usually performed using a tow vehicle and trailer. The trailer is fitted with the tyres being tested, and the average maximum braking force that can be applied through the tyres, under a high proportion (60–90%) of the tyres' maximum load, is measured as the combination travels at a constant speed of 65 ± 2 km/h. Wet grip: Results of at least three runs of each test are combined to produce the wet grip index, yielding ratings of A–G (although D and G are not used for passenger cars), where A is the best. When buying tyres, it is worth noting that the braking distance in the wet from the reference speed of 85 km/h to a standstill varies by something of the order of 3 m from one class to the next. Noise emission: The drive-by noise is quoted as an absolute value in decibels and as a three-class sound wave symbol. A continuous sound level above 80 decibels can cause health problems. Tyres that must be labeled: The Tyre Label generally applies to car and SUV tyres, van tyres, and truck tyres. Exceptions from labelling: tyres for cars made before 1 October 1990,
re-treaded tyres, motorcycle tyres, racing/sports car tyres, studded tyres, spare tyres, vintage car tyres, and professional off-road tyres. Tax on noisy tyres: Tyres that make too much roadway noise, as determined by the EU, will have an extra tax/penalty imposed on them from November 2012. Reporting requirements: Tyre manufacturer: for passenger car, light truck and truck tyres, the information must be available in the technical promotional literature (leaflets, brochures, etc.), including the manufacturer website; for passenger and light truck tyres, the manufacturers or importers have the choice of either putting a sticker on the tyre tread or a label accompanying each delivery of a batch of tyres to the dealer and to the end consumer. Tyre dealer: must ensure that tyres visible to consumers at the point of sale carry a sticker, or have a label in close proximity which is shown to the end user before the sale; must give the information during the purchase process when the tyres offered for sale are not visible to the end user; must give the information on or with the bill. Car manufacturer: must declare the tyre wet grip and fuel efficiency class and the measured external rolling noise value of the tyre type(s) offered as options, when different from those normally fitted on the basic vehicle. Reporting requirements: As soon as the customer is given a choice either in the size/type of tyres fitted on the basic rim or a choice of rim and tyre size, the labelling information must be provided before sale. There may be no obligation to provide the information only in those cases where the choice of rim comes with tyre types and sizes that are strictly identical to those sold automatically with the new vehicle. EU Commission: provides detailed information about the contents and design of the label. Each EU member state is to organise monitoring and impose penalties in cases of non-compliance. Critical View: The new label is designed to show information regarding three criteria; however, there are many other important performance factors to consider, including resistance to aquaplaning, driving stability, handling and steering precision on wet and dry roads, durability, braking performance on dry roads, and capabilities in winter conditions. Published tyre tests take these performance factors into account and are a source of information regarding the total performance of a tyre. Driving Proviso: Actual fuel savings and road safety also depend heavily on the behaviour of drivers when using their cars, and in particular the following: eco-driving can significantly reduce fuel consumption; tyre pressure should be regularly checked to optimise wet grip and fuel efficiency performance; and stopping distances should always be strictly respected.
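To give a feel for why one wet grip class corresponds to a few metres of braking distance, the sketch below applies the basic kinematic relation d = v²/(2a) to the 85 km/h reference speed mentioned in the Wet grip section; the deceleration values and tyre labels are illustrative assumptions, not figures from the regulation.

```python
# Illustrative only: the deceleration values below are assumptions, not values
# taken from Regulation No 1222/2009 or its amendments. The point is simply
# that a modest difference in achievable deceleration from 85 km/h translates
# into a braking-distance difference of a few metres.

V_REF_KMH = 85.0  # reference speed used in the wet grip test

def braking_distance_m(decel_ms2: float, speed_kmh: float = V_REF_KMH) -> float:
    """Stopping distance d = v^2 / (2a) for a constant deceleration a."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v ** 2 / (2.0 * decel_ms2)

# Two hypothetical tyres whose average wet decelerations differ slightly.
for label, decel in [("tyre X", 8.0), ("tyre Y", 7.2)]:
    print(f"{label}: {braking_distance_m(decel):.1f} m from {V_REF_KMH:.0f} km/h")
```

With these assumed values, the two stopping distances differ by roughly 4 m, the same order of magnitude as the per-class difference quoted above.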
**Trompo** Trompo: A trompo is a top which is spun by winding a length of string around the body and launching it so that it lands spinning on its point. If the string is attached to a stick, the rotation can be maintained by whipping the side of the body. The string may also be wound around the point while the trompo is spinning in order to control its position or even lift the spinning top to another surface. Etymology: These toys are popular in Latin America, where the name trompo emerged, but there are many different local names. In Spain, these toys may be called trompo or peonza, perinola, and pirinola. In the Philippines, they are called trumpo or turumpo, while in Portugal they are called pião. In India it is called Bugari (Kannada); children make these tops by nailing wood and spin them with twisted jute rope. In Japan, similar tops are known as koma, with most cities having a particular design. In Germany, a Peitschenkreisel may also be called Doppisch, Dildop, Pindopp, Dilledopp, Triesel or Tanzknopf (roughly "dancing top"). In Morocco it is called Trombia, and it is often made out of wood and painted in a reddish-brown color. In Dutch it is called a "priktol" (see https://nl.wikipedia.org/wiki/Priktol); a "tol" is a top. Another type of top is the "zweeptol" (see https://nl.wikipedia.org/wiki/Kinderspelen); a "zweep" is a whip. History: There is historical evidence suggesting the existence of trompos as early as 4000 BCE, and trompos have been found on the bank of the Euphrates river, likely belonging to an ancient civilization. There is also evidence that members of the ancient Greek and Roman civilizations used trompos as well. Description of motion: The gyroscopic effect allows the trompo to spin over its point until the force of gravity ends up at an angle with respect to the top's axis of rotation, causing a variation in the location of the center of gravity as the trompo undergoes precession (where the axis of rotation of the trompo moves in a circular path). The fall of the top is directly proportional to the angle between the direction of gravity on the trompo and the top's axis of rotation. The fall is also directly proportional to the magnitude of the force of gravity and is inversely proportional to the trompo's angular velocity. Description of motion: As air resistance and friction with the ground begin to slow the trompo's spin, its center of gravity begins to destabilize and the top's bottom point begins to trace a circular path with the ground. Soon the trompo becomes fully unbalanced and it falls to the ground, rolling until it comes to rest. This general motion is largely shared among many trompo variants, but differences in several design parameters (such as the mass distribution, friction between the bottom point and the ground, and the spinning method) can still lead to significant variation in the aforementioned variables. Form: The trompo's form has varied enormously throughout history. Though trompos have traditionally been cone-shaped, there are also diverse variations in trompo form across regions. However, despite these regional differences in design, all trompos are constructed to be capable of employing the gyroscopic effect. Form: Trompos generally have an approximately pear-shaped body and are usually made of a hard wood such as hawthorn, oak or beech, although new resins and strong plastic materials have also been used. Clay trompos have also been found from ancient civilizations near the Euphrates river.
Whipping tops often have a more cylindrical shape to provide a bigger surface to be struck by the whip. Form: A trompo has a button-shaped knob on top, usually bigger than the tip on which it spins, and it is generally made of the same material as the rest of the body. Form: The base of a trompo is a stud or spike which may have a groove or roller bearing to facilitate lifting the spinning trompo with a whip or string without imposing much friction on the body. The trompo surface may be painted or decorated, and some versions incorporate synthetic sound devices. The small diameter and low mass of most trompos mean that mechanical whistles would cause excessive drag and reduce their spinning time. Form: The Philippine trumpo differs in the tip, which is straight and pointed. It usually looks like a nail embedded in a wooden spheroid. Play: Playing with a trompo consists of throwing the top and having it spin on the floor. Due to its shape, a trompo spins on its axis and swirls around its conic tip, which is usually made of iron or steel. A trompo uses a string wrapped around it to get the necessary spin. The player must roll the cord around the trompo from the metallic tip up. The user must then tie the string in a knot on the button-shaped tip before releasing it. When rolling the cord around the trompo, the cord must be wound tightly against it. The technique for throwing a trompo varies. One end of the cord must be rolled around the player's fingers, and with the same hand the trompo must be held with the metallic tip facing upwards. Play: Championships are held in different Latin American countries, especially in Mexico, Colombia, Peru, Cuba, Nicaragua, and Puerto Rico, where it is very popular among children of the middle and lower classes. Play: In Mexico most trompos sold are made of plastic, with a metal tip, though sometimes they are made of wood. There is a popular game called picotazos, where the main goal is to destroy the opponents' trompos. Another game is played by drawing a circle on the ground and placing a coin in the middle; the goal is to strike the coin. Play: In Puerto Rico, trompos are sometimes played similarly to certain marble games, with trompos being placed in a circle drawn on the ground. The goal of this variant is to knock the trompos out of the circle. Failure to spin, or spinning inside the circle, causes the player's trompo to be placed in the circle, and another person has a turn to spin. Trompos in Puerto Rico and Chile are frequently modified to have a sharper point. Play: José Miguel Agrelot, a Puerto Rican comedian, hosted a long-standing television program, Encabulla y Vuelve y Tira, whose name described the action of throwing and spinning a trompo. One of his comedic characterizations, the mischievous boy Torito Fuertes, was a one-time sponsor of a line of trompos. The Filipino trumpo is played in basically the same manner, except that a knot is not tied into the tip before throwing it for the spin.
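The qualitative claims in the "Description of motion" section above match the standard fast-top precession relation from introductory mechanics. The relation below is a textbook result included here only for illustration; it is not derived in the article itself, and the symbols are the usual generic ones rather than anything defined above.

```latex
% Gravity exerts a torque about the tip of magnitude \tau = m g r \sin\theta,
% which rotates the spin angular momentum L = I\omega, giving a precession rate
\Omega_{\text{prec}}
  = \frac{\tau}{L \sin\theta}
  = \frac{m g r \sin\theta}{I \omega \sin\theta}
  = \frac{m g r}{I \omega}
% where m is the mass, g the gravitational acceleration, r the distance from the
% tip to the center of mass, I the moment of inertia about the spin axis,
% \omega the spin rate, and \theta the tilt from vertical.
% As friction and air resistance reduce \omega, the precession rate grows,
% which corresponds to the widening wobble seen just before the trompo topples.
```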
**Split tunneling** Split tunneling: Split tunneling is a computer networking concept which allows a user to access dissimilar security domains like a public network (e.g., the Internet) and a local area network or wide area network at the same time, using the same or different network connections. This connection state is usually facilitated through the simultaneous use of a LAN network interface controller (NIC), radio NIC, Wireless LAN (WLAN) NIC, and VPN client software application, without the benefit of access control. Split tunneling: For example, suppose a user utilizes a remote access VPN software client connecting to a campus network using a hotel wireless network. The user with split tunneling enabled is able to connect to file servers, database servers, mail servers and other servers on the corporate network through the VPN connection. When the user connects to Internet resources (websites, FTP sites, etc.), the connection request goes directly out the gateway provided by the hotel network. However, not every VPN allows split tunneling. Some VPNs with split tunneling include Private Internet Access (PIA), ExpressVPN, and Surfshark. Split tunneling is sometimes categorized based on how it is configured. A split tunnel configured to only tunnel traffic destined to a specific set of destinations is called a split-include tunnel. When configured to accept all traffic except traffic destined to a specific set of destinations, it is called a split-exclude tunnel. Advantages: One advantage of using split tunneling is that it alleviates bottlenecks and conserves bandwidth, as Internet traffic does not have to pass through the VPN server. Another advantage applies in the case where a user works at a supplier or partner site and needs access to network resources on both networks. Split tunneling prevents the user from having to continually connect and disconnect. Disadvantages: A disadvantage is that when split tunneling is enabled, users bypass gateway-level security that might be in place within the company infrastructure. For example, if web or content filtering is in place, this is something usually controlled at a gateway level, not the client PC. ISPs that implement DNS hijacking break name resolution of private addresses with a split tunnel. Variants and related technology: Inverse split tunneling A variant of split tunneling is called "inverse" split tunneling. By default all datagrams enter the tunnel except those whose destination IPs are explicitly allowed by the VPN gateway. The criteria for allowing datagrams to exit the local network interface (outside the tunnel) may vary from vendor to vendor (e.g., port, service). This keeps control of network gateways with a centralized policy device such as the VPN terminator. This can be augmented by endpoint policy enforcement technologies such as an interface firewall on the endpoint device's network interface driver, group policy object or anti-malware agent. This is related in many ways to network access control (NAC). Variants and related technology: Dynamic split tunneling (DST) is a form of split tunneling that derives the IP addresses to include or exclude at runtime, based on a list of hostname rules or policies. IPv6 dual-stack networking Internal IPv6 content can be hosted and presented to sites via a unique local address range at the VPN level, while external IPv4 and IPv6 content can be accessed via site routers.
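The split-include versus split-exclude distinction described above is, at bottom, a per-destination routing decision. The sketch below illustrates only that decision logic; it does not configure any real VPN client, and the prefixes and addresses are made-up examples.

```python
import ipaddress

# Hypothetical policy prefixes; a real client would receive these from the VPN gateway.
POLICY_PREFIXES = [
    ipaddress.ip_network("10.0.0.0/8"),       # corporate address space
    ipaddress.ip_network("192.168.50.0/24"),  # lab subnet
]

def via_tunnel(destination: str, split_exclude: bool = False) -> bool:
    """Return True if traffic to `destination` should enter the VPN tunnel.

    split_exclude=False models a split-include tunnel: only listed prefixes
    are tunneled.  split_exclude=True models a split-exclude tunnel:
    everything is tunneled except the listed prefixes.
    """
    addr = ipaddress.ip_address(destination)
    matches_policy = any(addr in net for net in POLICY_PREFIXES)
    return not matches_policy if split_exclude else matches_policy

# Split-include: a corporate file server uses the tunnel, a public website
# goes straight out the local (e.g., hotel) gateway.
print(via_tunnel("10.1.2.3"))                         # True
print(via_tunnel("203.0.113.7"))                      # False
# Split-exclude: the same prefixes now act as the bypass list.
print(via_tunnel("203.0.113.7", split_exclude=True))  # True
```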
**Sleeve coupling** Sleeve coupling: A coupling is a device used to connect two shafts together at their ends for the purpose of transmitting power. The primary purpose of couplings is to join two pieces of rotating equipment while permitting some degree of misalignment or end movement or both. In a more general context, a coupling can also be a mechanical device that serves to connect the ends of adjacent parts or objects. Couplings do not normally allow disconnection of shafts during operation; however, there are torque-limiting couplings which can slip or disconnect when some torque limit is exceeded. Proper selection, installation and maintenance of couplings can lead to reduced maintenance time and maintenance cost. Uses: Shaft couplings are used in machinery for several purposes. A primary function is to transfer power from one end to the other (for example, a motor transfers power to a pump through a coupling). Other common uses are to alter the vibration characteristics of rotating units, to connect the driving and the driven parts, to introduce protection, to reduce the transmission of shock loads from one shaft to another, and to slip when an overload occurs. Types: Beam A beam coupling, also known as a helical coupling, is a flexible coupling for transmitting torque between two shafts while allowing for angular misalignment, parallel offset and even axial motion of one shaft relative to the other. This design utilizes a single piece of material and becomes flexible by removal of material along a spiral path, resulting in a curved flexible beam of helical shape. Since it is made from a single piece of material, the beam-style coupling does not exhibit the backlash found in some multi-piece couplings. Another advantage of being an all-machined coupling is the possibility of incorporating features into the final product while still keeping the single-piece integrity. Types: Changes to the lead of the helical beam provide changes to misalignment capabilities as well as other performance characteristics such as torque capacity and torsional stiffness. It is even possible to have multiple starts within the same helix. The material used to manufacture the beam coupling also affects its performance and suitability for specific applications such as food, medical and aerospace. Materials are typically aluminum alloy and stainless steel, but they can also be made in acetal, maraging steel and titanium. The most common applications are attaching rotary encoders to shafts and motion control for robotics. Beam couplings can be known by various names depending upon the industry. These names include flexible coupling, flexible beam coupling, flexible shaft coupling, flexure, helical coupling, and shaft coupling. The primary benefit of using a flexible beam coupling to join two rotating shafts is to reduce vibration and reaction loads, which in turn reduces overall wear and tear on machinery and prolongs equipment life. Bush pin flange: A bush pin flange coupling is used for slightly imperfect alignment of the two shafts. Types: This is a modified form of the protected-type flange coupling. This type of coupling has pins that serve as the coupling bolts. Rubber or leather bushes are used over the pins. The coupling has two halves dissimilar in construction. The pins are rigidly fastened by nuts to one of the flanges and kept loose on the other flange. This coupling is used to connect shafts which have a small parallel misalignment, angular misalignment or axial misalignment.
In this coupling the rubber bushes absorb shocks and vibration during operation. This type of coupling is mostly used to couple electric motors and machines. Types: Constant velocity There are various types of constant-velocity (CV) couplings: the Rzeppa joint, the double Cardan joint, and the Thompson coupling. Types: Clamp or split-muff In this coupling, the muff or sleeve is made in two halves of cast iron, which are joined by means of mild steel studs or bolts. The advantage of this coupling is that it can be assembled or disassembled without changing the position of the shafts. This coupling is used for heavy power transmission at moderate speeds. Types: Diaphragm Diaphragm couplings transmit torque from the outside diameter of a flexible plate to the inside diameter, across the spool or spacer piece, and then from inside to outside diameter. Misalignment is accommodated by the deformation of a plate or series of plates from I.D. to O.D. Disc Disc couplings transmit torque from a driving to a driven bolt tangentially on a common bolt circle. Torque is transmitted between the bolts through a series of thin, stainless steel discs assembled in a pack. Misalignment is accommodated by deformation of the material between the bolts. Types: Elastic An elastic coupling transmits torque or other load by means of an elastic component. One example is the coupling used to join a windsurfing rig (sail, mast, and components) to the sailboard. In windsurfing terminology it is usually called a "universal joint", but modern designs are usually based on a strong flexible material and are better technically described as an elastic coupling. They can be tendon- or hourglass-shaped, and are constructed of a strong and durable elastic material. In this application, the coupling does not transmit torque, but instead transmits sail-power to the board, creating thrust (some portion of sail-power is also transmitted through the rider's body). Types: Flexible Flexible couplings are usually used to transmit torque from one shaft to another when the two shafts are slightly misaligned. They can accommodate varying degrees of angular misalignment up to 1.5° and some parallel misalignment. They can also be used for vibration damping or noise reduction. In rotating shaft applications a flexible coupling can protect the driving and driven shaft components (such as bearings) from the harmful effects of conditions such as misaligned shafts, vibration, shock loads, and thermal expansion of the shafts or other components. Types: Flexible couplings fall into two essential groups, metallic and elastomeric. Metallic types utilize freely fitted parts that roll or slide against one another or, alternatively, non-moving parts that bend to take up misalignment. Elastomeric types, on the other hand, gain flexibility from resilient, non-moving, elastic or plastic elements transmitting torque between metallic hubs. Fluid. Gear: A gear coupling is a mechanical device for transmitting torque between two shafts that are not collinear. It consists of a flexible joint fixed to each shaft. The two joints are connected by a third shaft, called the spindle. Each joint consists of a 1:1 gear ratio internal/external gear pair. The tooth flanks and outer diameter of the external gear are crowned to allow for angular displacement between the two gears. Mechanically, the gears are equivalent to rotating splines with modified profiles. They are called gears because of the relatively large size of the teeth.
Types: Gear couplings and universal joints are used in similar applications. Gear couplings have higher torque densities than universal joints designed to fit a given space, while universal joints induce lower vibrations. The limit on torque density in universal joints is due to the limited cross sections of the cross and yoke. The gear teeth in a gear coupling have high backlash to allow for angular misalignment. The excess backlash can contribute to vibration. Gear couplings are generally limited to angular misalignments, i.e., the angle of the spindle relative to the axes of the connected shafts, of 4–5°. Universal joints are capable of higher misalignments. Types: Single-joint gear couplings are also used to connect two nominally coaxial shafts. In this application the device is called a gear-type flexible, or flexible coupling. The single joint allows for minor misalignments such as installation errors and changes in shaft alignment due to operating conditions. These types of gear couplings are generally limited to angular misalignments of 1/4–1/2°. Geislinger. Giubo. Grid: A grid coupling is composed of two shaft hubs, a metallic grid spring, and a split cover kit. Torque is transmitted between the two coupling shaft hubs through the metallic grid spring element. Types: Like metallic gear and disc couplings, grid couplings have a high torque density. A benefit of grid couplings, over either gear or disc couplings, is the ability of their grid spring elements to absorb and spread peak load impact energy over time. This reduces the magnitude of peak loads and offers some vibration damping capability. A drawback of the grid coupling design is that it is generally very limited in its ability to accommodate misalignment. Types: Highly flexible Highly flexible couplings are installed when resonance or torsional vibration might be an issue, since they are designed to eliminate torsional vibration problems and to balance out shock impacts. Types: They are used in installations where the systems require a high level of torsional flexibility and misalignment capacity. This type of coupling provides effective damping of torsional vibrations and a high displacement capacity, which protects the drive. The design of highly flexible elastic couplings makes assembly easier. These couplings also compensate for shaft displacements (radial, axial and angular), and the torque is transmitted in shear. Depending on the size and stiffness of the coupling, the flexible part may be single- or multi-row. Types: Hirth joints Hirth joints use tapered teeth on two shaft ends meshed together to transmit torque. Hydrodynamic. Jaw: A jaw coupling is also known as a spider or Lovejoy coupling. Magnetic: A magnetic coupling uses magnetic forces to transmit power from one shaft to another without any contact. This allows for full medium separation. It can provide the ability to hermetically separate two areas while continuing to transmit mechanical power from one to the other, making these couplings ideal for applications where prevention of cross-contamination is essential. Types: Oldham An Oldham coupling has three discs, one coupled to the input, one coupled to the output, and a middle disc that is joined to the first two by tongue and groove. The tongue and groove on one side is perpendicular to the tongue and groove on the other. The middle disc rotates around its center at the same speed as the input and output shafts.
Its center traces a circular orbit, twice per rotation, around the midpoint between the input and output shafts. Often springs are used to reduce backlash of the mechanism. An advantage of this type of coupling, as compared to two universal joints, is its compact size. The coupler is named for John Oldham, who invented it in Ireland in 1821 to solve a problem in a paddle steamer design. Types: Rag joint Rag joints are commonly used on automotive steering linkages and drive trains. When used on a drive train they are sometimes known as giubos. Rigid: Rigid couplings are used when precise shaft alignment is required; any shaft misalignment will affect the coupling's performance as well as its life span, because rigid couplings do not have the ability to compensate for misalignment. Due to this, their application is limited, and they are typically used in applications involving vertical drivers. Types: Clamped or compression rigid couplings come in two parts and fit together around the shafts to form a sleeve. They offer more flexibility than sleeved models and can be used on shafts that are fixed in place. They generally are large enough so that screws can pass all the way through the coupling and into the second half to ensure a secure hold. Flanged rigid couplings are designed for heavy loads or industrial equipment. They consist of short sleeves surrounded by a perpendicular flange. One coupling is placed on each shaft so the two flanges line up face to face. A series of screws or bolts can then be installed in the flanges to hold them together. Because of their size and durability, flanged units can be used to bring shafts into alignment before they are joined. Types: Schmidt. Sleeve, box, or muff: A sleeve coupling consists of a pipe whose bore is finished to the required tolerance based on the shaft size. Depending on the usage of the coupling, a keyway is made in the bore in order to transmit the torque by means of a key. Two threaded holes are provided in order to lock the coupling in position. Types: Sleeve couplings are also known as box couplings. In this case the shaft ends are coupled together and abutted against each other, enveloped by a muff or sleeve. Gib-head sunk keys hold the two shafts and the sleeve together; this is the simplest type of coupling. It is made from cast iron and is very simple to design and manufacture. It consists of a hollow pipe whose inner diameter is the same as the diameter of the shafts. The hollow pipe is fitted over the ends of the two shafts with the help of taper sunk keys. The key and sleeve transmit power from one shaft to the other. (A standard textbook sizing check for this type of coupling is sketched at the end of this article.) Types: Tapered shaft lock A tapered lock is a form of keyless shaft locking device that does not require any material to be removed from the shaft. The basic idea is similar to a clamp coupling, but the moment of rotation is closer to the center of the shaft. An alternative coupling device to the traditional parallel key, the tapered lock removes the possibility of play due to worn keyways. It is more robust than using a key because maintenance only requires one tool, and the self-centering balanced rotation means it lasts longer than a keyed joint would, but the downside is that it costs more. Types: Twin spring A flexible coupling made from two counter-wound springs with a ball bearing in the center, which allows torque transfer from the input to the output shaft. It requires no lubrication to run consistently, as it has no internal components.
Universal joint Maintenance and failure: Coupling maintenance requires a regularly scheduled inspection of each coupling. It consists of performing visual inspections, checking for signs of wear or fatigue, cleaning couplings regularly, and checking and changing the lubricant regularly if the coupling is lubricated. This maintenance is required annually for most couplings and more frequently for couplings in adverse environments or in demanding operating conditions. Maintenance and failure: Documenting the maintenance performed on each coupling, along with the date, is also part of good practice. Even with proper maintenance, however, couplings can fail. Underlying reasons for failure, other than maintenance, include improper installation, poor coupling selection, and operation beyond design capabilities. External signs that indicate potential coupling failure include abnormal noise (such as screeching, squealing or chattering), excessive vibration or wobble, and failed seals indicated by lubricant leakage or contamination. Balance: Couplings are normally balanced at the factory prior to being shipped, but they occasionally go out of balance in operation. Balancing can be difficult and expensive, and is normally done only when operating tolerances are such that the effort and the expense are justified. The amount of coupling unbalance that can be tolerated by any system is dictated by the characteristics of the specific connected machines and can be determined by detailed analysis or experience.
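As noted in the sleeve (muff) coupling description above, this coupling type is usually sized with a standard machine-design check in which the sleeve is treated as a hollow shaft in torsion. The relation and proportions below are textbook rules of thumb, stated here only for illustration; they are not given in this article, and actual designs should follow the relevant standards.

```latex
% Torque capacity of the sleeve, treated as a hollow shaft with outer diameter D,
% bore d (the shaft diameter) and allowable shear stress \tau_c:
T \;=\; \frac{\pi}{16}\,\tau_c\,\frac{D^{4}-d^{4}}{D}
% Commonly quoted proportions for a cast-iron muff coupling (rules of thumb):
%   D \approx 2d + 13\ \text{mm}, \qquad L \approx 3.5\,d \quad (L = \text{sleeve length}).
% The key is then checked separately for shear and crushing over the length L/2
% engaged in each shaft.
```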
**Fowkes Formation** Fowkes Formation: The Fowkes Formation is a geologic formation in Wyoming. It preserves fossils dating back to the Paleogene period.
**Arf invariant of a knot** Arf invariant of a knot: In the mathematical field of knot theory, the Arf invariant of a knot, named after Cahit Arf, is a knot invariant obtained from a quadratic form associated to a Seifert surface. If F is a Seifert surface of a knot, then the homology group H1(F, Z/2Z) has a quadratic form whose value is the number of full twists mod 2 in a neighborhood of an embedded circle representing an element of the homology group. The Arf invariant of this quadratic form is the Arf invariant of the knot. Definition by Seifert matrix: Let V = (v_{i,j}) be a Seifert matrix of the knot, constructed from a set of curves on a Seifert surface of genus g which represent a basis for the first homology of the surface. This means that V is a 2g × 2g matrix with the property that V − V^T is a symplectic matrix. The Arf invariant of the knot is the residue of \sum_{i=1}^{g} v_{2i-1,2i-1}\, v_{2i,2i} \pmod{2}. Definition by Seifert matrix: Specifically, if \{a_i, b_i\}, i = 1, \dots, g, is a symplectic basis for the intersection form on the Seifert surface, then \operatorname{Arf}(K) = \sum_{i=1}^{g} \operatorname{lk}(a_i, a_i^{+})\, \operatorname{lk}(b_i, b_i^{+}) \pmod{2}, where lk denotes the linking number and a^{+} denotes the positive pushoff of a. Definition by pass equivalence: This approach to the Arf invariant is due to Louis Kauffman. We define two knots to be pass equivalent if they are related by a finite sequence of pass-moves. Every knot is pass-equivalent to either the unknot or the trefoil; these two knots are not pass-equivalent, and additionally, the right- and left-handed trefoils are pass-equivalent. Now we can define the Arf invariant of a knot to be 0 if it is pass-equivalent to the unknot, or 1 if it is pass-equivalent to the trefoil. This definition is equivalent to the one above. Definition by partition function: Vaughan Jones showed that the Arf invariant can be obtained by taking the partition function of a signed planar graph associated to a knot diagram. Definition by Alexander polynomial: This approach to the Arf invariant is due to Raymond Robertello. Let \Delta(t) = c_0 + c_1 t + \cdots + c_n t^{n} + \cdots + c_0 t^{2n} be the Alexander polynomial of the knot. Then the Arf invariant is the residue of c_{n-1} + c_{n-3} + \cdots + c_r modulo 2, where r = 0 for n odd and r = 1 for n even. Kunio Murasugi proved that the Arf invariant is zero if and only if \Delta(-1) \equiv \pm 1 \pmod{8}. Arf as knot concordance invariant: From the Fox–Milnor criterion, which tells us that the Alexander polynomial of a slice knot K \subset S^3 factors as \Delta(t) = p(t)\,p(t^{-1}) for some polynomial p(t) with integer coefficients, we know that the determinant |\Delta(-1)| of a slice knot is a square integer. As |\Delta(-1)| is an odd integer, it has to be congruent to 1 modulo 8. Combined with Murasugi's result, this shows that the Arf invariant of a slice knot vanishes.
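As a quick worked illustration of the Seifert-matrix definition (added here for concreteness; the matrix used is the standard genus-one Seifert matrix for the trefoil, not something given in the article):

```latex
% Trefoil knot, genus g = 1, standard Seifert matrix:
V = \begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix},
\qquad
V - V^{T} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\ \text{(symplectic)},
\qquad
\operatorname{Arf} \equiv v_{1,1}\, v_{2,2} = (-1)(-1) = 1 \pmod{2}.
% Cross-check via the Alexander polynomial \Delta(t) = 1 - t + t^{2} (n = 1, odd, so r = 0):
% the residue c_{n-1} = c_0 = 1, and \Delta(-1) = 3 \not\equiv \pm 1 \pmod{8},
% both consistent with the trefoil having Arf invariant 1.
```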
**Summer term** Summer term: Summer term is the summer academic term at many British schools and universities and elsewhere in the world. Summer term: In the UK, 'Summer term' runs from the Easter holiday until the end of the academic year in June or July, and so corresponds to the Easter term at Cambridge University, and Trinity term at Oxford, and some other places. 'Summer term' is defined in some UK statutory instruments, such as the Education (Assessment Regulations) (Foundation to Key Stage 3) Order (Northern Ireland) 2007, which says: "summer term" means the period commencing immediately after the Easter holiday and ending with the school year. Summer term: The Education (National Curriculum) (Key Stage 1 Assessment Arrangements) (England) Order 2004 says more simply: "summer term" means the final term of the school year. Covering the possibility of six-term academic years, the School Finance (England) Regulations 2008 say "summer term" means the third term of the school year where a school has three terms, or the fifth and sixth terms where a school has six terms
**Materion** Materion: Materion Corp. is a multinational company specializing in high-performance engineered materials. Among their products are precious and non-precious metals, inorganic chemicals, specialty coatings, beryllium, specialty engineered beryllium, beryllium copper alloys, ceramics, and engineered clad and plated metal systems. The company's engineered materials are used in the telecommunications, consumer electronics, automotive, medical, industrial components, aerospace, defense, and optical coating industries. History: Beginning in the 1940s, Brush Wellman Inc. produced large amounts of beryllium for the United States government. Brush Engineered Materials Inc. changed its name to Materion Corporation on March 8, 2011, and now trades under the symbol MTRN. In March 2017, Jugal K. Vijayvargiya was appointed president and chief executive officer of Materion Corporation, replacing Richard J. Hipple, who had served in that position for 11 years. Vijayvargiya was also named a director of the corporation. A graduate of the Ohio State University, Vijayvargiya spent 26 years at Delphi Corporation in a variety of management positions. Hipple was to become executive chairman.
**HP-15C** HP-15C: The HP-15C is a high-end scientific programmable calculator of Hewlett-Packard's Voyager series produced between 1982 and 1989. Models: HP-15C The HP-15C is a high-end scientific pocket calculator with a root-solver and numerical integration. A member of Hewlett-Packard's Voyager series of programmable calculators, it was produced between 1982 and 1989. The calculator is able to handle complex numbers and matrix operations. Although out of production, its popularity has led to high prices on the used market. The HP-15C was a replacement for the HP-34C. The 15C used CMOS technology for its processor, resulting in very low power consumption. Models: HP 15C Limited Edition After showing a prototype labelled "HP 15c+" at HHC 2010, HP announced the HP 15C Limited Edition (NW250AA) on 1 September 2011. It is based on a flashable controller utilizing the same ARM7TDMI core already used in the 2008 revision of the 12C but in a different package, an Atmel AT91SAM7L128-AU, running an emulator written by Cyrille de Brébisson to execute the old HP Nut code much faster than on the original hardware. The calculator was released alongside the HP 12c 30th Anniversary Edition. This model is powered by two CR2032 batteries, and can easily be differentiated from the original model by the "Limited Edition" script below the company logo as well as the black text on a brushed metal back label, as opposed to the white text on black of the original. The power consumption of the processor is greater than that of the original HP-15C, as HP did not use the same technology in any of the later models. Models: HP 15C Collector's Edition In May 2023, a Collector's Edition was announced, and it was released in July 2023 by the HP Development Company, L.P.'s licensees Moravia Consulting spol. s r.o. and Royal Consumer Information Products, Inc. It supports up to 672 program steps and up to 99 registers. The initial firmware has received fixes for the known bugs shown below and others; it is emulated on the same CPU as the 2015 and 2022 variants of the HP-12C, the Microchip ATSAM4LC2CA (ARM Cortex-M4). The calculator is also powered by two CR2032 batteries. Models: The test menu (Off, g+↵ Enter+ON) officially offers three choices. A fourth choice (4) is undocumented and permits entering two hidden modes: "15.2" (more memory, but with some limitations such as 8×8 inversion matrices and the three-digit step number display) and "16" (emulating an HP-16C). Bugs and problems: HP-15C: CHS stack lift bug (and fix). The non-responsive reset procedure documented in the 15C manual had the side effect of rotating the X register by 22 bits, which could then be used to perform synthetic programming. HP-15C Limited Edition: One of the more significant bugs in the released firmware version (dated 2011-04-15 in the self-test) is that PSE only works once in a program and subsequently blanks the display until the program stops or is stopped. Downgrading the firmware resolves the PSE bug; however, other bugs will also be reintroduced. Bugs and problems: The original HP-15C self-test keystrokes do not work with the HP-15C LE and can corrupt memory contents. Although a new functional self-test procedure was added, the original manual did not document it. HP 15C Collector's Edition: No known bugs in the officially supported "15" mode yet; the bugs above and others have been fixed in the firmware or, in the case of the nonfunctional self-test procedure, addressed by instructions in the accompanying documentation to switch to the new self-test.
There are a number of bugs and shortcomings in the undocumented "15.2" and "16" modes. Legacy: Emulators An official PC emulator for the 15C is available as freeware from Hewlett-Packard. Another version is commercially available for Android and iOS devices. Legacy: Clones On 6 February 2012, SwissMicros (previously known as RPN-Calc) introduced a miniature clone named DM-15CC, approximating the size of an ID-1 credit card (88 mm × 59 mm × 7 mm). It closely emulates the functionality of the original HP-15C by running the original ROM image in an emulator on an ARM Cortex-M0-based NXP LPC1114 processor. Newer DM15 models feature a better keyboard and more RAM (LPC1115). With modified firmware (M80 and M1B), the additional memory allows for up to 129 or even 230 registers and up to 1603 or 896 program steps. A DM15 Silver Edition in a titanium case is also available in three color variants (metal, brown, blue). Deviating from the original, these calculators feature a dot-matrix display, switchable fonts and clock speeds, and, based on a Silicon Labs CP2102 converter chip, a USB (Mini-B) serial interface to exchange data with a PC for backup purposes and possibly to communicate with applications (like PC-based HP-15C emulators) or to update the firmware. In September 2015, SwissMicros introduced the DM15L, a version of the calculator about the same size as the original HP-15C. It still comes with a USB Mini-B connector. Powering via USB is not supported.
**1,2,4-Trioxane** 1,2,4-Trioxane: 1,2,4-Trioxane is one of the isomers of trioxane. It has the molecular formula C3H6O3 and consists of a six membered ring with three carbon atoms and three oxygen atoms. The two adjacent oxygen atoms form a peroxide functional group and the other forms an ether functional group. It is like a cyclic acetal but with one of the oxygen atoms in the acetal group being replaced by a peroxide group. 1,2,4-Trioxane: 1,2,4-Trioxane itself has not been isolated or characterized, but rather only studied computationally. However, it constitutes an important structural element of some more complex organic compounds. The natural compound artemisinin, isolated from the sweet wormwood plant (Artemisia annua), and some semi-synthetic derivatives are important antimalarial drugs containing the 1,2,4-trioxane ring. Completely synthetic analogs containing the 1,2,4-trioxane ring are important potential improvements over the naturally derived artemisinins. The peroxide group in the 1,2,4-trioxane core of artemisinin is cleaved in the presence of the malaria parasite leading to reactive oxygen radicals that are damaging to the parasite.
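As a compact way of expressing the ring connectivity described above, the 1,2,4-trioxane skeleton can be written as the SMILES string O1OCOCC1 (oxygens at ring positions 1, 2 and 4). The snippet below is a small illustrative sketch using the open-source RDKit toolkit, assuming it is installed; it simply parses that SMILES and reports the molecular formula for comparison with the one stated in the article.

```python
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

# 1,2,4-trioxane: six-membered ring O-O-C-O-C-C (peroxide O1-O2, ether O4).
mol = Chem.MolFromSmiles("O1OCOCC1")

# Expected output: "C3H6O3", matching the formula given in the article.
print(rdMolDescriptors.CalcMolFormula(mol))

# Sanity check on the connectivity: three oxygen atoms in the ring.
print(sum(1 for atom in mol.GetAtoms() if atom.GetSymbol() == "O"))
```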
**Radiopaedia** Radiopaedia: Radiopaedia is a wiki-based international collaborative educational web resource containing a radiology encyclopedia and imaging case repository. It is currently the largest freely available radiology-related resource in the world, with more than 50,000 patient cases and over 16,000 reference articles on radiology-related topics. The open-edit nature of articles allows radiologists, radiology trainees, radiographers, sonographers, and other healthcare professionals interested in medical imaging to refine most content through time. An editorial board peer reviews all contributions. Background: Radiopaedia was started as a pastime project to store radiology notes and cases online by the Australian neuroradiologist Associate Professor Frank Gaillard in December 2005, while he was a radiology resident. He later became passionate about building the website and decided to release it on the web, advocating free dissemination of knowledge. The domain name for radiopaedia.org was registered on 11 January 2007. The Radiopaedia.org platform and text content are owned by Radiopaedia Australia Pty Ltd, a privately held company for which Gaillard is the chief executive officer. One of its investors is Investling, and its revenue derives from ads, courses, and paid supporters. For image content, contributors reserve some rights and license the content to Radiopaedia and its users under a Creative Commons license. The site was initially programmed using MediaWiki, the same platform as Wikipedia, but now runs on bespoke code written by TrikeApps. In 2010, almost all of the article and image collection from radswiki (a similar wiki-based radiology educational site) was donated to Radiopaedia. Its article content is currently limited to English. Purpose: Radiopaedia's mission is "to create the best radiology reference the world has ever seen and to make it available for free, for ever, for all." Its intention is to benefit the radiology community and wider society, and it relies on benevolent collaborations from radiologists and others with an interest in medical imaging. Purpose: Similarly to Wikipedia, registered users of the site are allowed to freely add and edit the majority of the content. This allows content to be progressively upgraded over years and for radiologists and society in general to continuously refine article content through time. The site also allows registered users to maintain their own personal library of teaching cases. Rather than individually publishing articles, users are encouraged to integrate content with links to cases and journal articles and collaboratively refine content. In an attempt to reduce vandalism and to peer-review content, an editorial board moderates changes to ensure that the presented material is as accurate and relevant as possible. As with similar open-edit sites, unreliability of content has been a concern; however, despite its open-edit nature, it is ranked relatively high among user reviews. A survey done in 2020 shows that 90% of on-call radiology trainees in the United States use Radiopaedia and StatDx as their first- and second-line options to help them during their work. Educational benefit was also demonstrated when integrating Radiopaedia-based training into medical curricula.
Sub sites: Radiopaedia also maintains several other educational subsites, which include Radiology Signs, a Tumblr feed with selected signs, and Radiology Channel, a YouTube channel containing educational videos. Editorial team: The editorial team develops, as well as helps users to maintain, the high-quality content of the website. The current editorial board (2021) is composed of individuals from a variety of countries and includes: editor in chief Frank Gaillard; academic director Andrew Dixon; community director Jeremy Jones; editorial director Henry Knipe; and managing editors Daniel J Bell, Ian Bickle and Andrew Murphy. iPhone, iPad and iOS apps: In 2009, the first Radiopaedia iPhone app was released. These teaching files package cases and articles for users to review and have sample questions and answers. iPhone, iPad and iOS apps: Titles cover Brain, Gastrointestinal and hepatobiliary, Musculoskeletal, Paediatrics, Chest, and Head and Neck. These have been released in two forms: LITE (10 full cases) and FULL (50–80 cases; the initial 50 have been supplemented in some cases). Teaching files for the iPad, the first of their kind, were released in mid-2010. These have currently been released for Brain, Head and Neck, and Musculoskeletal. In 2012, Radiopaedia released a new version of its iOS application, which is a universal app with in-app purchases for case packs. Copyright: Most of the content is shared under a Creative Commons non-commercial license.
**Chloramines** Chloramines: Chloramines refer to derivatives of ammonia and organic amines wherein one or more N-H bonds have been replaced by N-Cl bonds. Two classes of compounds are considered: inorganic chloramines and organic chloramines. Inorganic chloramines: Inorganic chloramines comprise three compounds: monochloramine (NH2Cl), dichloramine (NHCl2), and nitrogen trichloride (NCl3). Monochloramine is of broad significance as a disinfectant for water. Organic chloramines: Organic chloramines feature the NCl functional group attached to an organic substituent. Examples include N-chloromorpholine (ClN(CH2CH2)2O), N-chloropiperidine, and N-chloroquinuclidinium chloride. Chloramines are commonly produced by the action of bleach on secondary amines: R2NH + NaOCl → R2NCl + NaOH. Tert-butyl hypochlorite can be used instead of bleach: R2NH + t-BuOCl → R2NCl + t-BuOH. Swimming pools: The term chloramines also refers to any chloramine formed by chlorine reacting with ammonia introduced into swimming pools by human perspiration, saliva, mucus, urine, and other biologic substances, and by insects and other pests. Chloramines are responsible for the "chlorine smell" of pools, as well as skin and eye irritation. These problems are the result of insufficient levels of free available chlorine.
**Global cuisine** Global cuisine: A global cuisine is a cuisine that is practiced around the world. A cuisine is a characteristic style of cooking practices and traditions, often associated with a specific region, country or culture. To become a global cuisine, a local, regional or national cuisine must spread around the world, its food served worldwide. There have been significant improvements and advances during the last century in food preservation, storage, shipping and production, and today many countries, cities and regions have access to their traditional cuisines and many other global cuisines. Asia: Japan Japanese cuisine has spread throughout the world, and representative dishes such as sushi and ramen, among others, are popular. In many cases, Japanese food is adapted and reinvented to fit the preferences of the local populace. For instance, the California roll is a popular dish in the United States that is a modification of the Japanese makizushi, a type of sushi. In South Korea, both Japanese curry and ramen have been imported and popularized primarily in the form of instant food. Tonkatsu and tempura, which are derived from Western food, are now considered and marketed as uniquely Japanese, as is Japanese curry, which derives from Indian curry. Asia: In many countries, including the United States, United Kingdom, Philippines, and Brazil, Japanese restaurants have become popular. Hong Kong, Taiwan, China, Singapore, Thailand and Indonesia are key consumers, according to recent research. The market for Japanese ingredients is also growing, with brands such as Ajinomoto, Kikkoman, Nissin and Kewpie mayonnaise establishing production bases in other Asian countries, such as China, Thailand and Indonesia. Asia: China Chinese cuisine has become widespread throughout many other parts of the world — from Asia to the Americas, Australia, Western Europe and Southern Africa. In recent years, connoisseurs of Chinese cuisine have also emerged in Eastern Europe and South Asia. American Chinese cuisine and Canadian Chinese food are popular examples of local varieties, in which local ingredients are adopted while the style and preparation technique are maintained. Asia: Traditional Chinese cuisines include Anhui, Cantonese, Fujian, Hunan, Jiangsu, Shandong, Sichuan, and Zhejiang, all of which are defined and termed per the respective regions within China where they developed. These regional cuisines are sometimes referred to as the "eight culinary traditions of China." A number of different styles contribute to Chinese cuisine, but perhaps the best known and most influential are the Sichuan, Shandong, Jiangsu and Guangdong cuisines. These styles are distinctive from one another due to factors such as available resources, climate, geography, history, cooking techniques and lifestyle. Many Chinese traditional regional cuisines rely on basic methods of food preservation such as drying, salting, pickling and fermentation. Asia: Thailand Thai cuisine is becoming increasingly popular in other parts of the world, including North America, Europe, and other parts of Asia. Thai restaurants are becoming more and more common, serving Thai curry and other traditional dishes. Asia: India Indian cuisine has contributed to shaping the history of international relations; the spice trade between India and Europe is often cited by historians as the primary catalyst for Europe's Age of Discovery. Spices were bought from India and traded around Europe and Asia.
Indian cuisine has also influenced international cuisines, especially those from Southeast Asia, the British Isles and the Caribbean. The use of Indian spices, herbs and vegetable produce has helped shape the cuisines of many countries around the world. Asia: Indian cuisine consists of the foods and dishes of India (and, to some extent, neighboring countries). It is characterized by the extensive use of various Indian spices, herbs, vegetables and fruits grown across India, and is also known for the widespread practice of vegetarianism in Indian society. Indian cuisine is primarily categorized at the regional level, but also at provincial levels. Cuisine differences derive from various local cultures, geographical locations (whether a region is close to the sea, desert or the mountains), and economics. Indian cuisine is also seasonal and utilizes fresh produce. Asia: The cuisine of India is very diverse, with each state having an entirely different food platter. The development of these cuisines has been shaped by Hindu and Jain beliefs, in particular vegetarianism, which is a common dietary trend in Indian society. There has also been Islamic influence from the years of Mughal and Delhi Sultanate rule, as well as Persian interactions, on North Indian and Deccani cuisine. Indian cuisine has been and is still evolving as a result of the nation's cultural interactions with other societies. Historical incidents such as foreign invasions, trade relations and colonialism have also played an important role in introducing certain food types and eating habits to the country. For instance, the potato, a staple of the North Indian diet, was brought to India by the Portuguese, who also introduced chiles and breadfruit, among other things. Spices were bought from India and traded in exchange for rubber and opium from Malacca. Europe: France. Georgia: Georgian cuisine includes more than 80 varieties of local cheeses that are often mixed with pastries and local pizza-style cheese breads (Khachapuri), and is famous for an abundant use of walnuts in sauces (Satsivi), salads and meat dishes (Kharcho), local dumplings like Khinkali, and regional delicacies like Sinori. Georgia is the birthplace of wine, and its table culture is deeply connected to the philosophical toast-making rituals that are passed down from one generation to the next during Supras. Italy: Italy's cuisine is one of the best-known cuisines in the world. As a Mediterranean cuisine, Italian cuisine makes heavy use of products based on wheat, olives, and grapes, with tomatoes being a distinguishing factor, and values using few but high-quality ingredients. Well-known staples include focaccia, pasta, pizza and risotto. North America: United States American cuisine is a style of food preparation originating from the United States of America. European colonization of the Americas yielded the introduction of a number of ingredients and cooking styles to the latter. The various styles continued expanding well into the 19th and 20th centuries, proportional to the influx of immigrants from many foreign nations; such influx developed a rich diversity in food preparation throughout the country. Native American cuisine includes all food practices of the indigenous peoples of the Americas. Modern-day native peoples retain a rich body of traditional foods, some of which have become iconic of present-day Native American social gatherings. North America: Mexico Mexican cuisine has become widespread all over the world.
**Posterior auricular nerve** Posterior auricular nerve: The posterior auricular nerve is a nerve of the head. It is a branch of the facial nerve (CN VII). It communicates with branches from the vagus nerve, the great auricular nerve, and the lesser occipital nerve. Its auricular branch supplies the posterior auricular muscle, the intrinsic muscles of the auricle, and gives sensation to the auricle. Its occipital branch supplies the occipitalis muscle. Structure: The posterior auricular nerve arises from the facial nerve (CN VII). It is the first branch outside of the skull. This origin is close to the stylomastoid foramen. It runs upward in front of the mastoid process. It is joined by a branch from the auricular branch of the vagus nerve (CN X). It communicates with the posterior branch of the great auricular nerve, as well as with the lesser occipital nerve. Structure: As it ascends between the external acoustic meatus and mastoid process it divides into auricular and occipital branches. The auricular branch travels to the posterior auricular muscle and the intrinsic muscles on the cranial surface of the auricle. The occipital branch, the larger branch, passes backward along the superior nuchal line of the occipital bone to the occipitalis muscle. Function: The posterior auricular nerve supplies the posterior auricular muscle and the intrinsic muscles of the auricle. It gives sensation to the auricle. It also supplies the occipitalis muscle. Clinical significance: Nerve testing The posterior auricular nerve can be tested by contraction of the occipitalis muscle and by sensation in the auricle. This testing is rarely performed. Biopsy The posterior auricular nerve can be biopsied. This can be used to test for leprosy, which can be important in diagnosis.
**Stuart W. Krasner** Stuart W. Krasner: Stuart William Krasner (born 1949) is a retired Principal Environmental Specialist with the Metropolitan Water District of Southern California, at the Water Quality Laboratory located in La Verne, California. In his 41 years with Metropolitan, he made revolutionary changes in the field's understanding of how disinfection by-products (DBPs) occur and are formed, and how they can be controlled in drinking water. His research contributions include the study of emerging DBPs, including those associated with chlorine, chloramines, ozone, chlorine dioxide and bromide/iodide-containing waters. He made groundbreaking advances in understanding the watershed sources of pharmaceuticals and personal care products (PPCPs) and wastewater impacts on drinking-water supplies. For DBPs and PPCPs, he developed analytical methods and occurrence data, and he provided technical expertise for the development of regulations for these drinking water contaminants. In the early 1990s, Krasner developed the 3x3 matrix illustrating removal of total organic carbon from drinking water as a function of water alkalinity and initial total organic carbon concentration. The matrix was revised by him and included in the USEPA Stage 1 D/DBP regulation as the enhanced coagulation requirement. Every water utility in the U.S. that is subject to this regulation is required to meet total organic carbon removal requirements, subject to the rule's exceptions. Stuart W. Krasner: He has been a key member of the toxicology and epidemiology community, providing key data for the development of improved carcinogen and non-carcinogen exposure assessments. Early in his career at Metropolitan he developed key advances in the control of tastes and odors in drinking water, including analytical methods, sensory analysis, and determining the sources and treatment of off-flavors. Early life and education: Stuart W. Krasner was born in 1949 in Los Angeles, California, and at the age of two, he moved with his family to Van Nuys, California, where he grew up. He attended Kester Avenue Elementary School and Van Nuys High School. His father worked as an aerospace engineer at several companies in the Los Angeles area. His mother worked in the bookkeeping department for Warner Bros. Movie Studios before becoming a homemaker. His brother, Stanley, is three years younger. Stuart married Jan Patrice Barth on September 10, 1989. Early life and education: He earned his Bachelor of Science in chemistry (1971) and his Master of Science in analytical chemistry (1974) from the University of California, Los Angeles. Career: Krasner was a teaching and research assistant during his graduate work at UCLA. He worked for the Los Angeles County Sanitation Districts for four years (1974–77) before taking a position as a chemist with the Metropolitan Water District of Southern California in 1977. From the beginning of his career at Metropolitan, Krasner worked at the water quality laboratory, which is located at the F.E. Weymouth Treatment Plant in La Verne, California. He held increasingly responsible positions as Research Chemist, Senior Chemist and Senior Research Chemist until being promoted to Principal Environmental Specialist in 1997. He retired from Metropolitan in September 2018. Career: As Principal Environmental Specialist, Krasner was responsible for the technical direction of DBP research at Metropolitan, as well as studies on the control of other micropollutants of health, regulatory, and aesthetic significance.
He was involved in the design of experimental plans for natural organic matter (NOM), DBP, and PPCP research studies, project management, and interpretation of findings. In 1989, his article on the first national survey of multiple-DBP occurrence has received over 1,000 citations by other authors. Another survey of a new generation of DBPs in 2006 has been cited over 1,100 times.A few of the many externally funded projects for which he was responsible include: Co-Principal Investigator of a National Science Foundation (NSF) project on “Drinking Water Safety and Sustainability: Identifying Key Chemical Drivers of Toxicity for Long-Term Solutions in the United States.” (2017 – present) Technical advisor for a project on “Global Assessment of Exposure to Trihalomethanes in Drinking Water and Burden of Disease” being conducted by the Barcelona Institute for Global Health (ISGlobal). (2017 – present). Career: Principal investigator for Water Research Foundation project on “Nitrosamine Occurrence Survey.” (2013 – 2016).” Co-principal investigator for Water Research Foundation project on “Investigating Coagulant Aid Alternatives to polyDADMAC Polymers.” (2012 – 2015) Principal investigator for Water Research Foundation project on “Controlling the Formation of Nitrosamines during Water Treatment.” (2012 – 2015) Co-principal investigator for Water Research Foundation project on “Optimizing Conventional Treatment for Removal of Cyanobacteria and their Metabolites.” (2011 – 2015)He was a consultant to the drinking water community since 1983. Some of his projects included: Peer reviewer for Imperial College, London, of report on “Review of the Current Toxicological and Occurrence Information Available on Nitrogen-Containing Disinfection By-Products.” Technical advisor to the University of the Aegean on reinterpreting DBP data for a European Union project (HiWATE) on DBPs. Career: Technical auditor for the European Commission on laboratory practices for a project (HiWATE) on DBPs. Career: Technical advisor for Scottish Executive study on “The Formation of Disinfection By-products of Chloramination, Potential Health Implications and Techniques for Minimisation.” Workshop participant for National Science Foundation on “Engineering Controls for Ballast Water Discharge: Developing Research Needs.” Co-investigator for AwwaRF project on “Improved Exposure Assessment on Existing Cancer Studies.” Co-investigator on USEPA project on “Enhanced Evaluation of Disinfectant By-Product Exposures for Epidemiological Studies.” Professional associations and journals: He made professional contributions to many institutions, including: American Water Works Association (1977 – present), AWWA Research Foundation (now Water Research Foundation, WRF) and American Chemical Society (1975 – present). Professional associations and journals: For AWWA, he has been involved in over one hundred committees, workgroups and advisory committees, which have included: Trustee (2 terms) of the Water Science & Research Division Member of Standard Methods Committee (multiple editions); chair of Joint Task Group (JTG) on closed-loop stripping analysis (CLSA) in Water, 17th ed.; Vice-Chair of Joint Task Group (JTG) on CLSA, 16th ed.; member of JTG on Taste, 17th ed.; member of JTG on Flavor Profile Analysis (FPA), 17th ed. 
Professional associations and journals: Chair of the D/DBP Technical Advisory Workgroup (TAW) Member of Technical Advisory Group (TAG), which provided technical input to Water Utility Council (WUC) on legislative and regulatory issues Manager of the D/DBP TAW; included technical management of and coordination with universities and consulting engineering firms performing studies for the D/DBP TAW. Selected projects included: Disinfectants/Disinfection By-Products (D/DBP) Data Base for Regulation Negotiation Process Mathematical Modeling of the Formation of THMs and HAAs in Chlorinated Natural Waters Effect of Coagulation and Ozonation on the Formation of Disinfection By-Products Establishment of database on THM and HAA formation kinetics and impacts of various water quality parameters Development of chlorine and chloramine residual decay equations Authored state-of-the-science literature review on nitrosamines for AWWA Government Affairs Office. (2012 – 2013) Guest technical editor for special issue of Journal AWWA on nitrosaminesFor AWWA Research Foundation (now Water Research Foundation): Invited expert for state-of-the-science expert workshop on Evaluating the Scientific Evidence for Chlorination Disinfection By-Products (CDBPs) Associated with Human Health Outcomes (i.e., Bladder Cancer) Project Advisory Committee (PAC) on “Exploring formation and Control of Emerging DBPs in Treatment Facilities: Halonitromethanes and Iodo-Trihalomethanes. Professional associations and journals: PAC on Quantitative Comparative Mammalian Cell Cytotoxicity and Genotoxicity of Selected Classes of Drinking Water Disinfection By-Products PAC on Exploring the Mechanisms of Dihalogenated Acetic Acid Formation (DXAA) During ChloraminationFor the American Chemical Society: Organized symposium on Occurrence, Formation, Health Effects and Control of Disinfection By-Products in Drinking Water. Professional associations and journals: Organized symposium on Natural Organic Matter and Disinfection By-Products in Drinking Water.Krasner has been a peer-reviewer for many professional and scientific journals including Journal American Water Works Association, Environmental Science & Technology, Ozone: Science & Engineering, Water Research, Journal of Water Supply: Research and Technology – Aqua, Journal of Exposure Analysis and Environmental Epidemiology, Analytical Chemistry, Water Environment Research, The Science of the Total Environment, Chemosphere and Talanta Invited lectures and technical exchanges: Tsinghua University, Beijing, Keynote Presentation: Theory and Practices of DBP Formation and Control. April 18, 2012. International Workshop on Urban Water Safety Tongji University, Shanghai, on formation and control of emerging disinfection by-products Hong Kong University of Science and Technology on formation and control of emerging disinfection by-products in wastewater and drinking water Cranfield University, UK, on formation and health effects of disinfection by-products, and balancing the control of disinfection by-products. Invited lectures and technical exchanges: University of California, Berkeley, on sources of NDMA precursors; and the formation, occurrence, and control of NDMA in chloraminated drinking water University of Illinois Urbana-Champaign, on formation, occurrence, and control of emerging disinfection by-products of health concern Awards and honors: 1990, AWWA Water Quality Division Best Paper Award 1996, George A. Elliot Award from the California-Nevada Section of AWWA 2007, AWWA's A.P. 
Black Research Award. This award recognizes “outstanding research contributions to water science and water supply rendered over an appreciable period of time.” The award citation stated: “In recognition of his outstanding, leading-edge research in the water industry in the area of disinfection by-products.” 2012, AWWA Engineering and Construction Division Best Paper Award 2017, Water Research Foundation's Dr. Pankaj Parekh Research Innovation Award. The award letter stated “Your significant contributions to the Water Research Foundation, both in the volume of work you have conducted, and the longevity of your participation in our research program, made you the unanimous choice for this year’s Research Innovation Award by the Foundation’s Awards Committee.” 2019, AWWA Publications Award; AWWA Water Quality & Technology Division Best Paper Award Books and edited works: Off-Flavours in Drinking Water and Aquatic Organisms. (P.-E. Persson, F.B. Whitfield, & S.W. Krasner, eds.). 1992. Water Sci. & Technol., Vol. 25, No. 2. Natural Organic Matter and Disinfection By-Products: Characterization and Control in Drinking Water (S.E. Barrett, S.W. Krasner, & G.L. Amy, eds.). 2000. ACS, Washington, D.C. Disinfection By-Products in Drinking Water: Occurrence, Formation, Health Effects, and Control (T. Karanfil, S.W. Krasner, P. Westerhoff, & Y. Xie, eds.). 2008. ACS, Washington, D.C. Special issue on nitrosamines (S.W. Krasner, guest technical editor). June 2017. Jour. AWWA Selected publications: S.W. Krasner, M.J. McGuire, & V.B. Ferguson. 1985. Tastes and Odors: The Flavor Profile Method. Jour. AWWA, 77:3:34. S.W. Krasner, S.E. Barrett, M.S. Dale, & C.J. Hwang. 1989. Free Chlorine Versus Monochloramine for Controlling Off-Tastes and Off-Odors. Jour. AWWA, 81:2:86. S.W. Krasner, M.J. McGuire, J.G. Jacangelo, N.L. Patania, K.M. Reagan, & E.M. Aieta. 1989. The Occurrence of Disinfection By-Products in U.S. Drinking Water. Jour. AWWA, 81:8:41. S.W. Krasner, W.H. Glaze, H.S. Weinberg, P.A. Daniel, & I.N. Najm. 1993. Formation and Control of Bromate During Ozonation of Waters Containing Bromide. Jour. AWWA, 85:1:73. S.W. Krasner, & G.L. Amy. 1995. Jar-Test Evaluations of Enhanced Coagulation. Jour. AWWA, 87:10:93. S.W. Krasner, J.-P. Croué, J. Buffle, & E.M. Perdue. 1996. Three Approaches for Characterizing NOM. Jour. AWWA, 88:6:66. S.W. Krasner, H.S. Weinberg, S.D. Richardson, S.J. Pastor, R. Chinn, M.J. Sclimenti, G.D. Onstad, and A.D. Thruston, Jr. 2006. Occurrence of a New Generation of Disinfection Byproducts. Environ. Sci. Technol., 40(23):7175-7185. S.W. Krasner, P. Westerhoff, B. Chen, B.E. Rittmann, S.-N. Nam, & G. Amy. 2009. Impact of Wastewater Treatment Processes on Organic Carbon, Organic Nitrogen, and DBP Precursors in Effluent Organic Matter. Environ. Sci. Technol., 43(8):2911-2918. S.W. Krasner, P. Westerhoff, B. Chen, B.E. Rittmann, & G. Amy. 2009. Occurrence of Disinfection Byproducts in United States Wastewater Treatment Plant Effluents Environ. Sci. Technol., 43(21):8320–8325. O. Lu, S.W. Krasner, & S. Liang. 2011. Modeling Approach to Treatability Analysis of an Existing Treatment Plant. Jour. AWWA, 103(4):103–117. S.W. Krasner, W.A. Mitch, D.L. McCurry, D. Hanigan, & P. Westerhoff. 2013. Formation, Precursors, Control, and Occurrence of Nitrosamines in Drinking Water: A Review. Water Res., 47:4433-4450.
**The Pleasure of the Text** The Pleasure of the Text: The Pleasure of the Text (French: Le Plaisir du Texte) is a 1973 book by the literary theorist Roland Barthes. Summary: Barthes sets out some of his ideas about literary theory. He divides the effects of texts into two: plaisir ("pleasure") and jouissance, translated as "bliss" but the French word also carries the meaning of "orgasm". Summary: The distinction corresponds to a further distinction Barthes makes between texte lisible and texte scriptible, translated respectively as "readerly" and "writerly" texts (a more literal translation would be "readable" and "writable"). Scriptible is a neologism in French. The pleasure of the text corresponds to the readerly text, which does not challenge the reader's position as a subject. The writerly text provides bliss, which explodes literary codes and allows the reader to break out of his or her subject position. Summary: The "readerly" and the "writerly" texts were identified and explained in Barthes' S/Z. Barthes argues that "writerly" texts are more important than "readerly" ones because he sees the text's unity as forever being re-established by its composition, the codes that form and constantly slide around within the text. The reader of a readerly text is largely passive, whereas the person who engages with a writerly text has to make an active effort, and even to re-enact the actions of the writer himself. The different codes (hermeneutic, action, symbolic, semic, and historical) that Barthes defines in S/Z inform and reinforce one another, making for an open text that is indeterminant precisely because it can always be written anew. Summary: As a consequence, although one may experience pleasure in the readerly text, it is when one sees the text from the writerly point of view that the experience is blissful. Influences: Few writers in cultural studies and the social sciences have used and developed the distinctions that Barthes makes. The British sociologist of education Stephen Ball has argued that the National Curriculum in England and Wales is a writerly text, by which he means that schools, teachers and pupils have a certain amount of scope to reinterpret and develop it. On the other hand, artist Roy Ascott's pioneering telematic artwork, La Plissure du Texte ("The Pleating of the Texte", 1983) drew inspiration from Barthes' Le Plaisir du Texte. Ascott modified the title to emphasize the pleasure of collective textual pleating. In Ascott's artwork, the pleating of the text resulted from a process that the artist calls "distributed authorship," which expands Barthes' concept of the "readerly text." In Ascott's work, the text itself is the result of a collaborative reading/writing process among participants around the world, connected via computer networking (telematics). Ascott's work thus unravels the distinction between readers and writers, demonstrating a much greater degree of permeability than Barthes' distinction permits (and beyond Barthes' theory of the death of the author). Moreover, the mechanism of distributed authorship enabled Ascott's "planetary fairytale" to self-pleat in a way that, like a surrealist exquisite corpse, could not have been the product of a single mind. Rather, Ascott suggests, the work emerged as the result of an emergent field of collective intelligence that joined minds together in a global field of consciousness.
**Hebereke's Popoon** Hebereke's Popoon: Hebereke's Popoon is a two-player puzzle video game developed and published by Sunsoft. It is based on the Hebereke series. Hebereke means drunk or untrustworthy. Popoon is an onomatopoeia for the sound made by the game pieces when they explode. The game is a Puyo Puyo clone. Players align Popoons with others to make them explode. Availability: According to the Video Arcade Preservation Society, via their website Killer List of Video Games, the arcade machine itself is very rare, if it still exists in cabinet form at all. Gameplay: Hebereke's Popoon is a block-grouping game. There are four playable characters, each having different abilities. In story mode, the player is forced to play as Hebe and must battle certain characters. A defeated player may elect to resume play by using a continue. In versus mode, every playable character is immediately available to either player. Players can also select a handicap level (from 1 to 5) to increase or decrease the difficulty of the game. In each round, pairs of Popoons of various colors (the set of colors varying with the character(s) chosen by the player(s)) descend from the top of the screen. These can be rotated and placed by the player. The immediate aim is to create groups of three blocks of the same color arranged either horizontally, vertically, or diagonally. When such a group is created, the member blobs blow up, disappearing from the screen. Any blobs above the disappearing group then drop to fill any resulting empty space. Gameplay: Each time a player successfully creates a group, a Poro-poro will drop on the other player's screen in a random position. These poro-poros, which appear as character heads, can be removed by the other player by placing a blob of the same color as the head such that it touches the head either horizontally or vertically. Both the head and the blob will disappear from the screen, in much the same manner as a group of blobs, though no head will appear on the first player's screen as a result. Gameplay: A player can sometimes cause multiple groups to disappear. This can happen simultaneously if the placement of a pair of blobs immediately causes two groups of blobs (or heads) to form, or it can happen in a chain reaction, as the formation and disappearance of one group causes the dropping of any pieces above it, which can result in the formation of another group, and so on. If the groups in either process are of different colors then this is said to be a combination or "combo". The colors in a combo (or even a group) appear as small tiles in the lower of two panes in the middle of the screen and above the score-box. Gameplay: While a combo of one color (simply an ordinary group) causes a single head to appear on the opponent's screen, a combo of two colors causes a full row of poro-poros to appear on the opponent's screen. Combos of three and four colors are much more dramatic, the precise effect depending on the player's character. Upcoming heads or special effects are kept track of by symbols placed by the players' characters in the upper of two panels in the middle of the screen. Gameplay: A notable feature in Hebereke's Popoon is the constant bevy of sound effects as each player's character celebrates each group or combo by making nonsense sounds or yelling Japanese phrases. 
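The matching rule described above (three same-colored pieces in a horizontal, vertical, or diagonal line) is simple enough to state in code. The sketch below is purely illustrative and is not taken from the game's implementation; the grid encoding, a list of rows with None marking empty cells, is an assumption made for this example.

```python
# Illustrative sketch of the "three in a line" matching rule described above.
# The grid encoding (a list of rows, None = empty cell) is an assumption made
# for this example; it is not taken from the game's actual implementation.

def find_groups(grid):
    """Return the set of (row, col) cells that belong to a line of three
    same-colored pieces (horizontal, vertical, or either diagonal)."""
    rows, cols = len(grid), len(grid[0])
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # right, down, two diagonals
    matched = set()
    for r in range(rows):
        for c in range(cols):
            color = grid[r][c]
            if color is None:
                continue
            for dr, dc in directions:
                cells = [(r + i * dr, c + i * dc) for i in range(3)]
                if all(0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == color
                       for rr, cc in cells):
                    matched.update(cells)
    return matched

# Example: the three diagonal 'B' pieces form a group.
demo = [
    ["B", None, "R"],
    ["R", "B", None],
    ["G", "R", "B"],
]
print(sorted(find_groups(demo)))  # [(0, 0), (1, 1), (2, 2)]
```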
Characters' combo abilities: When the player makes a combo of three or four colors different effects occur depending on the player's character: Hebereke Head Color: Blue 3 colors: Head flies towards opponents screen attached to body via a tether. A double row of heads then drops onto the opponent's screen. 4 colors: Flies off the screen on fire. The player's pieces are removed and a proportionate number of heads are dropped on the opponent's screen.Oh-Chan Head Color: Orange 3 colors: Uses magic electricity to turn pieces on opponent's screen into "frozen blocks" that can never be removed. 4 colors: Whisks away the bottom few rows of the player's pieces. A proportionate number of heads are dropped on the opponent's screen.Sukezaemon Head Color: Pink 3 colors: A giant hammer smashes through the player's pieces, removing them from the screen. A proportionate number of heads is dropped on the opponent's screen. 4 colors: Hammers himself in the head popping his eyeballs out. Turns some of the opponent's pieces into heads.Jennifer Head Color: Green 3 colors: Causes opponent's screen to freeze up for 10 seconds. All the heads from all the groups the player made are dropped at once on the opponent's screen at the end of this time. Characters' combo abilities: 4 colors: Pukes up an iridescent blob which descends from the top of the player's screen. Wherever this blob is placed, several rows disappear and a proportionate number of heads are dropped on the opponent's screen.Bobodori Head Color: Light Purple 3 colors: Appears on the opponent's screen and turns it into an elevator which rises up and away. The opponent's screen then returns with many blobs having been turned into heads. Characters' combo abilities: 4 colors: A dragonfly flies from the top of her hat to the top of the opponent's screen. The beating wings of the dragonfly force all the opponent's pieces to drop at the maximum rate.Utsujin Head Color: Yellow 3 colors: Appears in a spaceship on the opponent's screen and drops several small copies of himself which proceed to walk around for a moment. Opponent's controls switch "left" and "right" for 10 seconds. Characters' combo abilities: 4 colors: Takes out a laser gun and fires a blast into the opponent's screen. The laser blast ricochets around several times, turning many blobs into heads.Pen-Chan Head Color: Purple 3 colors: For 10 seconds the opponent's screen is filled with an image of the crying child which obscures the opponent's vision. 4 colors: Sings and dances on the opponent's screen for 10 seconds, randomly permuting all the blobs and heads.Unyohn Head Color: Grey 3 colors: Surrounds himself with a shield on the player's screen, preventing the player from doing anything. While this is happening, any heads that would have dropped on the player's screen drop on the opponent's screen instead. 4 colors: Shoots a rocket from his hat which blows up all the pieces on the opponent's screen and replaces them with a proportionate number of heads. Reception: Hebereke's Popoon garnered generally favorable reception from critics. Computer and Video Games's Ed Lawrence and Mark Patterson praised the game's graphics, sound, and playability. While reckoning that the single-player mode was tame, both Automatic and Patterson were fond of its head-to-head mode, noting the use of special attacks and fast speed on higher levels. 
Video Games' Dirk Sauer had mixed feelings about the visuals and sound effects, but found both its music and gameplay to be addictive, noting that the latter was initially difficult. Nintendo Magazine System's Paul Davies and Andy McVittie lauded its stylish and colorful imagery, audio, and compelling playability, but both felt that the game was less fun in single-player. Superjuegos' Javier Iturrioz commended the diverse music and the quality of the characters' voices. However, Iturrioz felt that it did not offer any novelty compared to Puyo Puyo and stated that its graphics, while colorful, were limited by the game's nature. Total!'s Josse and Atko gave positive remarks about the audiovisual presentation, gameplay, and overall longevity, finding it to be more fun than Super Puyo Puyo. Writing for the German edition, Michael Anton criticized its lack of depth but praised it for being a nice alternative to Tetris with the usual gaudy Japanese graphics. Games World's four reviewers compared the gameplay with Dr Robotnik's Mean Bean Machine. Nevertheless, they gave it an overall positive outlook. MAN!AC's Martin Gaksch regarded it as a fun Columns clone, commending its different game modes but being annoyed at the lack of multiplayer variants. In contrast to the other critics, Mega Fun's Götz Schmiedehause faulted the game for its visuals and audio. Play Time's Ulf Schneider noted its difficulty level and limited options. Super Gamer's three reviewers wrote that "Hebereke's Popoon relies more on chance than Super Puyo Puyo, which makes it just that crucial bit less satisfying." In 1995, Total! ranked the game as number 55 on its list of the top 100 SNES games, stating that it was "A bit like Kirby's Avalanche. If you like these puzzlers then it’s an absolute must." Hardcore Gaming 101's Federico Tiraboschi concurred with both Sauer and Schneider about the game's difficulty.
**Monosodium tartrate** Monosodium tartrate: Monosodium tartrate or sodium bitartrate is a sodium acid salt of tartaric acid. As a food additive it is used as an acidity regulator and is known by the E number E335. As an analytical reagent, it can be used in a test for ammonium cation which gives a white precipitate.
**SURF2** SURF2: SURF2 is a protein which in humans is encoded by the SURF2 gene. SURF2 is a member of the surfeit gene family. The SURF2 molecule interacts with beta-1,4-Gal-T3, uPAR, and WDR20. As part of the surfeit gene cluster, SURF2 is one of several tightly linked genes that do not share sequence similarity. SURF2 maps to human chromosome 9q34.2 and shares a bidirectional promoter with SURF1 on the opposite strand. Bidirectional promoter activity is expected in the intergenic region between SURF1 and SURF2, as seen in mice.
**Endurance running hypothesis** Endurance running hypothesis: The endurance running hypothesis is a series of conjectures which presume humans evolved anatomical and physiological adaptations to run long distances and, more strongly, that "running is the only known behavior that would account for the different body plans in Homo as opposed to apes or australopithecines". The hypothesis posits a significant role of endurance running in facilitating early hominins' ability to obtain meat. Proponents of this hypothesis propose that endurance running served as a means for hominins to effectively engage in persistence hunting and carcass poaching, thus enhancing their competitive edge in acquiring prey. Consequently, these evolutionary pressures have led to the prominence of endurance running as a primary factor shaping many biomechanical characteristics of modern humans. Evolutionary evidence: No primates other than humans are capable of endurance running, and in fact, Australopithecus did not have structural adaptations for running. Instead, forensic anthropology suggests that anatomical features that directly contributed to endurance running capabilities were heavily selected for within the genus Homo dating back to 1.9 Ma. Consequently, selection for anatomical features that made endurance running possible radically transformed the hominid body. The general form of human locomotion is markedly distinct from that of all other animals observed in nature. Writing in the Journal of Anatomy, R. M. Alexander describes our unique form of bipedal motion: "… no animal walks or runs as we do. We keep the trunk erect; in walking, our knees are almost straight at mid-stance; the forces our feet exert on the ground are very markedly two-peaked when we walk fast; and in walking and usually in running, we strike the ground initially with the heel alone. No animal walks or runs like that." From the perspective of natural selection, scientists acknowledge that specialization in endurance running would not have helped early humans avoid faster predators over short distances. Instead, it could have allowed them to traverse shifting habitat zones more effectively in the African savannas during the Pliocene. Endurance running facilitated the timely scavenging of large animal carcasses and enabled the tracking and chasing of prey over long distances. This tactic of exhausting prey was especially advantageous for capturing large quadrupedal mammals struggling to thermoregulate in hot weather and over extended distances. Conversely, humans possess efficient means to dissipate heat, primarily through sweating. Specifically, evaporative heat dissipation from the scalp and face prevents hyperthermia and heat-induced encephalitis under extreme cardiovascular loads. Furthermore, as humans continued to develop, our posture became more upright and subsequently increased vertically with the elongation of limbs and torso, effectively increasing surface area for corporeal heat dissipation. In work exploring the evolution of the human head, paleontologist Daniel Lieberman suggests that certain adaptations to the Homo skull and neck are correlational evidence of traits selected for endurance running optimization. 
Specifically, he posits that adaptations such as a flattening face and the development of the nuchal ligament promote improved head balance for cranial stabilization during extended periods of running. Compared to Australopithecus fossil skeletons, selection for walking by itself would not develop some of these proposed "endurance running" derived traits: evaporative heat dissipation from the scalp and face prevents hyperthermia; a flatter face makes the head more balanced; the nuchal ligament helps counterbalance the head; the shoulders and body can rotate without rotating the head; a taller body has more skin surface for evaporative heat dissipation; the torso can counter-rotate to balance the rotation of the hindlimbs; shorter forearms make it easier to counterbalance the hindlimbs; shorter forearms cost less to keep flexed; backbones are wider, which will absorb more impact; a stronger backbone-pelvis connection will absorb more impact; compared to modern apes, human buttocks "are huge" and "critical for stabilization"; longer hindlimbs; Achilles tendon springs conserve energy; lighter tendons efficiently replace lower limb muscles; broader hindlimb joints will absorb more impact; foot bones create a stiff arch for efficient push-off; a broader heel bone will absorb more impact; and shorter toes and an aligned big toe provide better push-off. Academic discourse: The derived longer hindlimb was already present in Australopithecus along with evidence for foot bones with a stiff arch. Walking and running in Australopithecus may have been the same as in early Homo. Small changes in joint morphology may indicate neutral evolutionary processes rather than selection. The methodology by which the proposed derived traits were chosen and evaluated does not seem to have been stated, and there were immediate highly technical arguments "dismissing their validity and terming them either trivial or incorrect." Most of those proposed traits have not been tested for their effect on walking and running efficiency. The new trunk shape counter-rotations, which help control rotations induced by hip-joint motion, seem active during walking. Elastic energy storage does occur in the plantar soft tissue of the foot during walking. Relative lower-limb length has a slightly larger effect on the economy of walking than running. The heel-down foot posture makes walking economical but does not benefit running. Model-based analysis showing that scavengers would reach a carcass within 30 minutes of detection suggests that "endurance running" would not have given earlier access to carcasses and so would not result in selection for "endurance running". Earlier access to carcasses may instead have selected for running short distances of 5 km or less, with adaptations that generally improved running performance. The discovery of more fossil evidence resulted in additional detailed descriptions of hindlimb bones with measurable data reported in the literature. From a study of those reports, the proposed hindlimb traits were already present in Australopithecus or early Homo. Those hindlimb characteristics most likely evolved to improve walking efficiency, with improved running as a by-product. Gluteus maximus activity was substantially higher in maximal effort jumping and punching than sprinting, and substantially higher in sprinting than in running at speeds that can be sustained. The activity levels are not consistent with the suggestion that the muscle size is a result of selection for sustained endurance running. 
Additionally, gluteus maximus activity was much greater in sprinting than in running, similar in climbing and running, and greater in running than walking. Increased muscle activity seems related to the speed and intensity of the movement rather than the gait itself. The data suggests that the large size of the gluteus maximus reflects multiple roles during rapid and powerful movements rather than a specific adaptation to submaximal endurance running.
**Wheat lamp** Wheat lamp: A wheat lamp is a type of incandescent light designed for use in underground mining, named for inventor Grant Wheat and manufactured by Koehler Lighting Products in Wilkes-Barre, Pennsylvania, United States, a region known for extensive mining activity. A safety lamp designed for use in potentially hazardous atmospheres such as firedamp and coal dust, the lamp is mounted on the front of the miner's helmet and powered by a wet cell battery worn on the miner's belt. The average wheat lamp uses a three- to five-watt bulb, which will typically operate for 5 to 16 hours depending on the amp-hour capacity of the battery and the current draw of the bulb being used. A grain-of-wheat lamp is an unrelated, very small incandescent lamp used in medical and optical instruments, as well as for illuminating miniature railroad and similar models.
**Wv (software)** Wv (software): The software library wv, also known as wvware or by its previous name mswordview, is a set of free software programs licensed under the GNU General Public License which can be used for viewing and/or converting files in the Microsoft .doc format to plain text, LaTeX, HTML, or other formats. The wv library provides several tools on the command line of a Unix shell, such as wvText for converting a .doc file to a plain text file. It is used by the program AbiWord, which provides a graphical interface for reading .doc files.
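A typical use of the command-line tools is scripted batch conversion. The sketch below is illustrative only; the "wvText input.doc output.txt" argument order is the commonly documented form, but treat it as an assumption and confirm it against the installed version's manual page.

```python
# Minimal sketch: batch-convert .doc files to plain text by calling wvText.
# The "wvText <input.doc> <output.txt>" argument order is an assumption based
# on the usual documentation; check `man wvText` on your system.
import pathlib
import subprocess

def doc_to_text(doc_path: pathlib.Path) -> pathlib.Path:
    txt_path = doc_path.with_suffix(".txt")
    subprocess.run(["wvText", str(doc_path), str(txt_path)], check=True)
    return txt_path

if __name__ == "__main__":
    for doc in pathlib.Path(".").glob("*.doc"):
        print("converted:", doc_to_text(doc))
```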
**Pass the hash** Pass the hash: In computer security, pass the hash is a hacking technique that allows an attacker to authenticate to a remote server or service by using the underlying NTLM or LanMan hash of a user's password, instead of requiring the associated plaintext password as is normally the case. Instead of stealing the plaintext password to gain access, the attacker steals the hash. The attack exploits an implementation weakness in the authentication protocol, where password hashes remain static from session to session until the password is next changed. Pass the hash: This technique can be performed against any server or service accepting LM or NTLM authentication, whether it runs on a machine with Windows, Unix, or any other operating system. Description: On systems or services using NTLM authentication, users' passwords are never sent in cleartext over the wire. Instead, they are provided to the requesting system, like a domain controller, as a hash in a response to a challenge–response authentication scheme. Native Windows applications ask users for the cleartext password, then call APIs like LsaLogonUser that convert that password to one or two hash values (the LM or NT hashes) and then send that to the remote server during NTLM authentication. If an attacker has the hashes of a user's password, they do not need the cleartext password; they can simply use the hash to authenticate with a server and impersonate that user. In other words, from an attacker's perspective, hashes are functionally equivalent to the original passwords that they were generated from. History: The pass the hash technique was originally published by Paul Ashton in 1997 and consisted of a modified Samba SMB client that accepted user password hashes instead of cleartext passwords. Later versions of Samba and other third-party implementations of the SMB and NTLM protocols also included the functionality. History: This implementation of the technique was based on an SMB stack created by a third party (e.g., Samba and others), and for this reason suffered from a series of limitations from a hacker's perspective, including limited or partial functionality: The SMB protocol has continued to evolve over the years, which means that third parties creating their own implementation of the SMB protocol need to implement changes and additions to the protocol after they are introduced by newer versions of Windows and SMB (historically by reverse engineering, which is very complex and time-consuming). This means that even after performing NTLM authentication successfully using the pass the hash technique, tools like Samba's SMB client might not have implemented the functionality the attacker might want to use. This meant that it was difficult to attack Windows programs that use DCOM or RPC. History: Also, because attackers were restricted to using third-party clients when carrying out attacks, it was not possible to use built-in Windows applications, like Net.exe or the Active Directory Users and Computers tool amongst others, because they asked the attacker or user to enter the cleartext password to authenticate, and not the corresponding password hash value. History: In 2008, Hernan Ochoa published a tool called the "Pass-the-Hash Toolkit" that allowed 'pass the hash' to be performed natively on Windows. 
It allowed the user name, domain name, and password hashes cached in memory by the Local Security Authority to be changed at runtime after a user was authenticated — this made it possible to 'pass the hash' using standard Windows applications, and thereby to undermine fundamental authentication mechanisms built into the operating system. History: The tool also introduced a new technique which allowed dumping password hashes cached in the memory of the lsass.exe process (not in persistent storage on disk), which quickly became widely used by penetration testers (and attackers). This hash harvesting technique is more advanced than previously used techniques (e.g. dumping the local Security Accounts Manager database (SAM) using pwdump and similar tools), mainly because hash values stored in memory could include credentials of domain users (and domain administrators) that logged into the machine. For example, the hashes of authenticated domain users that are not stored persistently in the local SAM can also be dumped. This makes it possible for a penetration tester (or attacker) to compromise a whole Windows domain after compromising a single machine that was a member of that domain. Furthermore, the attack can be implemented instantaneously and without any requirement for expensive computing resources to carry out a brute force attack. History: This toolkit has subsequently been superseded by "Windows Credential Editor", which extends the original tool's functionality and operating system support. Some antivirus vendors classify the toolkit as malware. Hash harvesting: Before an attacker can carry out a pass-the-hash attack, they must obtain the password hashes of the target user accounts. To this end, penetration testers and attackers can harvest password hashes using a number of different methods: Cached hashes or credentials of users who have previously logged onto a machine (for example at the console or via RDP) can be read from the SAM by anyone who has Administrator-level privileges. The default behavior of caching hashes or credentials for offline use can be disabled by administrators, so this technique may not always work if a machine has been sufficiently hardened. Hash harvesting: Dumping the local user's account database (SAM). This database only contains user accounts local to the particular machine that was compromised. For example, in a domain environment, the SAM database of a machine will not contain domain users, only users local to that machine that more likely will not be very useful to authenticate to other services on the domain. However, if the same local administrative account passwords are used across multiple systems the attacker can remotely access those systems using the local user account hashes. Hash harvesting: Sniffing LM and NTLM challenge–response dialogues between client and servers, and later brute-forcing captured encrypted hashes (since the hashes obtained in this way are encrypted, it is necessary to perform a brute-force attack to obtain the actual hashes). Hash harvesting: Dumping authenticated users' credentials stored by Windows in the memory of the lsass.exe process. The credentials dumped in this way may include those of domain users or administrators, such as those logged in via RDP. This technique may therefore be used to obtain credentials of user accounts that are not local to the compromised computer, but rather originate from the security domain that the machine is a member of. 
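To make the equivalence described earlier concrete (the protocol only ever consumes MD4 of the UTF-16LE encoded password, so a harvested NT hash stands in for the password itself), here is a minimal sketch of how the NT hash is derived. It is an illustration of the credential format only, not an attack tool; note that hashlib's "md4" comes from OpenSSL and may be unavailable on OpenSSL 3.x unless the legacy provider is enabled.

```python
# Illustration of why a stolen NT hash is "functionally equivalent" to the
# password in NTLM: the protocol only ever uses MD4(UTF-16LE(password)),
# so whoever holds that 16-byte value can answer the challenge-response.
# NOTE: hashlib's "md4" is provided by OpenSSL and may be unavailable on
# OpenSSL 3.x unless the legacy provider is enabled.
import hashlib

def nt_hash(password: str) -> bytes:
    return hashlib.new("md4", password.encode("utf-16le")).digest()

if __name__ == "__main__":
    h = nt_hash("CorrectHorseBatteryStaple")
    print(h.hex())  # this value, not the cleartext, is what NTLM actually consumes
```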
Mitigations: Any system using LM or NTLM authentication in combination with any communication protocol (SMB, FTP, RPC, HTTP, etc.) is at risk from this attack. The exploit is very difficult to defend against, due to possible exploits in Windows and applications running on Windows that can be used by an attacker to elevate their privileges and then carry out the hash harvesting that facilitates the attack. Furthermore, it may only require one machine in a Windows domain to not be configured correctly or be missing a security patch for an attacker to find a way in. A wide range of penetration testing tools are furthermore available to automate the process of discovering a weakness on a machine. Mitigations: There is no single defense against the technique, thus standard defense in depth practices apply – for example, use of firewalls, intrusion prevention systems, 802.1X authentication, IPsec, antivirus software, reducing the number of people with elevated privileges, pro-active security patching, etc. Preventing Windows from storing cached credentials may limit attackers to obtaining hashes from memory, which usually means that the target account must be logged into the machine when the attack is executed. Allowing domain administrators to log into systems that may be compromised or untrusted will create a scenario where the administrators' hashes become the targets of attackers; limiting domain administrator logons to trusted domain controllers can therefore limit the opportunities for an attacker. The principle of least privilege suggests that a least user access (LUA) approach should be taken, in that users should not use accounts with more privileges than necessary to complete the task at hand. Configuring systems not to use LM or NTLM can also strengthen security, but newer exploits are able to forward Kerberos tickets in a similar way. Limiting the scope of debug privileges on the system may frustrate some attacks that inject code or steal hashes from the memory of sensitive processes. Restricted Admin Mode is a new Windows operating system feature introduced in 2014 via security bulletin 2871997, which is designed to reduce the effectiveness of the attack.
**Combinant** Combinant: In the mathematical theory of probability, the combinants c_n of a random variable X are defined via the combinant-generating function G(t), which is defined from the moment generating function M(z) as G(t) = M(log(1+t)), and which can be expressed directly in terms of the random variable X as G(t) := E[(1+t)^X], t ∈ ℝ, wherever this expectation exists. The nth combinant can be obtained as the nth derivative of the logarithm of the combinant-generating function, evaluated at t = −1, divided by n factorial: c_n = (1/n!) dⁿ/dtⁿ log G(t) |_{t=−1}. Important features in common with the cumulants are: the combinants share the additivity property of the cumulants, and for infinitely divisible distributions, both sets of moments are strictly positive.
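As a concreteness check that is not part of the source article, take X to be Poisson-distributed with mean λ; its moment generating function is M(z) = exp(λ(e^z − 1)), so the definition above collapses neatly.

```latex
% Combinants of X ~ Poisson(lambda), worked from the definition above.
% G(t) = M(log(1+t)) = exp(lambda((1+t) - 1)) = exp(lambda t),
% so log G(t) = lambda t is linear in t and every derivative of order >= 2
% vanishes, regardless of the point at which it is evaluated.
\[
  G(t) = \mathbb{E}\left[(1+t)^X\right]
       = \sum_{k \ge 0} e^{-\lambda}\frac{\lambda^k}{k!}(1+t)^k
       = e^{\lambda t},
  \qquad
  \log G(t) = \lambda t,
  \qquad
  c_1 = \lambda,\quad c_n = 0 \;\; (n \ge 2).
\]
```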
**Dendrite (adhesive)** Dendrite (adhesive): Dendrite is a contact adhesive and rubber cement brand marketed in South Asia, mainly in Northeast India, Bangladesh and Bhutan. Products: The adhesive is marketed in glue sticks, glue tubes and in cans. Dendrite holds 80% of the market share throughout the country in footwear retail market.Its marketing slogan is Bonding our world together. Production Company: Dendrite is produced by the Chandras' Chemical Enterprises (Pvt) Limited under the umbrella of the P. C. Chandra Group based in Kolkata. Production Company: Chandras' Chemical Enterprises (Pvt) Limited is the second unit of the Group which was set up in 1965. The company has three factories in Kolkata. The company manufactures and markets a variety of Synthetic Adhesives based on Polychloroprene, Polyurethane, Epoxy, EVA, Lamination and other Elastomers. These products are mainly used in footwear, automobile, shipbuilding, railway coaches, engineering, electronics, leather goods, flooring, packaging, construction and household applications. The company's major product is marketed under the brand name 'DENDRITE'. Production Company: The products are marketed through an All-India network of dealers and through its several branches all over the country. The products are also exported to Middle East and SAARC countries.
**Phantom structure** Phantom structure: Phantom structures are artificial structures designed to emulate properties of the human body in matters including, but not limited to, light scattering and optics, electrical conductivity, and sound wave reception. Phantoms have been used experimentally in lieu of, or as a supplement to, human subjects to maintain consistency, verify reliability of technologies, or reduce experimental expense. They also have been employed as material for training technicians to perform imaging. Optical phantoms: Optical tissue phantoms, or imaging phantoms, are reported to be used largely for three main purposes: to calibrate optical devices, to record baseline reference measurements, and to image the human body. Optical tissue phantoms may have the irregular shapes of body parts. Composite materials: Optical phantoms can be made from a number of materials, including but not limited to: homogenized milk; non-dairy creamer; wax; blood and yeast suspension; water-soluble dye (India ink); intralipid; latex microspheres; solid epoxy; liquid rubber; silicone; polyester; and polyurethane. Computational phantoms: Computational human phantoms have many uses, including but not limited to, biomedical imaging, computational modeling and simulations, radiation dosimetry, and treatment planning. Physiological models: Phantom head: While using research-oriented and Commercial Off The Shelf (COTS) EEG technologies built for monitoring brain activity, scientists established the need for a benchmark reading of neural electrical activity. EEG readings’ strong dependency on mechanical contact makes the technology sensitive to movement. This and a high responsivity to environmental conditions may lead to signal noise. Without a baseline, it is hard to interpret whether abnormal clinical data is a result of faulty technology, patient inconsistency or noncompliance, ambient noise, or an unexplained scientific principle. A phantom head was described by researchers in 2015. This head was developed at the U.S. Army Research Laboratory. Reported intent for the engineering of this phantom head was to “accurately recreate real and imaginary scalp impedance, contain internal emitters to create dipoles, and be easily replicable across various labs and research groups.” The scientists used an inverse 3D-printed mold that was reproduced from an anonymized MRI image. The head consisted of ballistics gel with a composition that included salt in order to conduct electricity like human tissue. Ballistics gelatin was chosen because it conducts electricity, while also possessing mechanical properties similar to living tissue. Multiple electric wires within the Army’s phantom head carried electric current. A CT scan was used to verify proper electrode placement. A limitation of this phantom was that the material was not sufficiently durable. The refrigerated gel degraded relatively quickly, by approximately 0.3% each day. Other reported models had been made of saline-filled spheres. Physiological models: Phantom prostate: In 2013, a patent submission for a prostate phantom was reported. The phantom was composed of three separate layers of prostate, perineal gland, and skin tissue and was developed for the study of prostate cancer brachytherapy. The scientists claimed that the phantom emulates the imaging and mechanical properties of the prostate and surrounding tissues. Phantom ear: In 2002, researchers proposed an ear phantom for experimental studies on sound absorbance rates of cellular emissions. 
Phantom skin: Several designs of phantom skin have been developed for various uses including, but not limited to, studying skin lesion therapy, applications of narrowband and ultra-band microwaves (like breast cancer detection), and imaging fingernails and underlying tissues. Physiological models: Phantom breast: Ultrasound tissue elastography is a method to determine tissue health, as pathologies have been noted to increase the elasticity of tissue. In 2015, a tissue-like agar-based phantom was reported to be useful in compression elastographical diagnosis of breast cancer. The scientists replicated the clinical appearance of conditions such as fibroadenoma and invasive ductal carcinoma in the phantom breast and compared elastographic and sonographic images. Additionally, a recipe for the formation of a semi-compressible phantom breast with liquid rubber has been reported. Physiological models: Phantom muscle: Many fabrication methods for muscle phantoms have been developed over the years, and research is ongoing. As of 2020, researchers have developed muscle phantoms to imitate or act as tumors in breast imaging for cancer detection.
**Batch renaming** Batch renaming: Batch renaming is a form of batch processing used to rename multiple computer files and folders in an automated fashion, in order to save time and reduce the amount of work involved. Some sort of software is required to do this. Such software can be more or less advanced, but most have the same basic functions. Batch renaming can also be referred to as 'mass file renaming', renaming 'en masse', and 'bulk renaming'. Common functions: Most batch renamers share a basic set of functions to manipulate the filenames: finding a string within the filename and replacing it with another, or removing it; setting the capitalization of the letters in the filenames; extracting information from the files, such as MP3 ID3 tags, and putting it in the filename; adding a number sequence (001, 002, 003, ...) to a list of files; and using a text file as a source for new file names. Some batch rename software can do more than just renaming filenames. Features include changing the dates of files and changing the file attributes (such as the write-protected attribute). Common uses: There are many situations where batch renaming software can be useful. Here is a list of some common uses: Many digital cameras store images using a base filename, such as DCSN0001 or IMG0001. Using a batch renamer, the photographer can easily give the pictures meaningful names. When downloading files from the Internet, such as mp3 music, the files often have crude names. A batch renamer can be used to quickly change the filenames to a style that suits the person who downloaded them. When managing large amounts of files, such as a picture database, a batch renamer is more or less essential for the task of maintaining filenames without too much manual labour. When authoring music files onto a CD/DVD or transferring the files to a digital audio player, a batch renamer can be used to listen to songs in the desired order. When uploading files to a web server or transferring the files to an environment that does not support space or non-English characters in filenames, a batch renamer can be used to substitute such characters with acceptable ones. Problems: There are a few problems to take into consideration when renaming a file list (→ means: renamed to). Detecting that the target filename already exists: file01 → file02 (file02 already exists in the file system). Detecting that the target filename is already used: file01 → file03; file02 → file03 (file03 is already used). Detecting cycle renaming (solved by two-pass renaming): file01 → file02; file02 → file03; file03 → file01 (each target already exists in the file system). Two-pass renaming: Two-pass renaming uses a temporary filename (one that doesn't exist in the file system) as shown below (→ means: renamed to). First pass: file01 → file01_AAAAA; file02 → file02_AAAAB; file03 → file03_AAAAC. Second pass: file01_AAAAA → file02; file02_AAAAB → file03; file03_AAAAC → file01. It solves the cycle renaming problem; a short illustrative sketch of the scheme is given below. If this approach is used, care should be taken not to exceed any filename length limits during the rename, and to ensure that the temporary names do not clash with any existing files. List of software: This is a list of notable batch renaming programs in the form of a comparison table.
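The two-pass scheme described above translates almost directly into code. The sketch below is illustrative only and is not taken from any particular renaming tool; it renames files according to a mapping, parking every source under a unique temporary name first so that cycles such as file01 → file02 → file03 → file01 cannot clobber one another.

```python
# Two-pass batch rename, as described above: first move every source file to a
# unique temporary name, then move the temporaries to their final names.
# This avoids the cycle problem (a -> b, b -> c, c -> a) at the cost of needing
# temporary names that do not collide with anything already on disk.
import os
import uuid

def two_pass_rename(mapping: dict) -> None:
    """mapping: {old_name: new_name}. Raises if two sources map to one target."""
    targets = list(mapping.values())
    if len(targets) != len(set(targets)):
        raise ValueError("two files would be renamed to the same name")
    # First pass: park every source under a temporary name.
    temp_names = {}
    for old in mapping:
        tmp = f"{old}.{uuid.uuid4().hex}.tmp"
        os.rename(old, tmp)
        temp_names[old] = tmp
    # Second pass: move the temporaries to their final names.
    for old, new in mapping.items():
        if os.path.exists(new):
            raise FileExistsError(f"target already exists: {new}")
        os.rename(temp_names[old], new)

# Example (assumes these files exist in the current directory):
# two_pass_rename({"file01": "file02", "file02": "file03", "file03": "file01"})
```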
**Paint-on-glass animation** Paint-on-glass animation: Paint-on-glass animation is a technique for making animated films by manipulating slow-drying oil paints on sheets of glass. Gouache mixed with glycerine is sometimes used instead. The best-known practitioner of the technique is Russian animator Aleksandr Petrov; he has used it in seven films, all of which have won awards. Animators/films: Agamurad Amanov (Агамурад Аманов) Tuzik (Тузик) (2001) Childhood's Autumn, Осень детства (Osen detstva) (2005) (with Yekatirina Boykova) Martine Chartrand Black Soul (2000) Witold Giersz Little Western (Mały Western) (1960) Red and Black (Czerwone i czarne) (1963) Horse (Koń) (1967) The Stuntman (Kaskader) (1972) Fire (Pożar) (1975) Aleksey Karayev (Алексей Караев) Welcome, Добро пожаловать (Dobro pozhalovat) (1986) The Lodgers of an Old House, Жильцы старого дома (Zhiltsy starovo doma) (1987) I Can Hear You, Я вас слышу (Ya vas slyshu) (1992) Caroline Leaf The Street (1976) Marcos Magalhães Animando (1987) (partially; instructive film) Miyo Sato Fox Fears (2016) Mob Psycho 100 (2016, 2018) Natalya Orlova (Наталья Орлова) Hamlet (1992) King Richard III (1994) Moby Dick, Моби Дик (1999) Aleksandr Petrov (Александр Петров) (was art director on Karayev's Welcome in 1986) The Cow, Корова (Korova) (1989) The Dream of a Ridiculous Man, Сон смешного человека (Son smeshnovo cheloveka) (1992) The Mermaid, Русалка (Rusalka) (1997) The Old Man and the Sea (1999) Winter Days, 冬の日 (Fuyu no hi) (2003) (segment) My Love, Моя любовь (Moya lyubov) (2006) Georges Schwizgebel The Man With No Shadow, (L'homme sans ombre) (2004) Retouches, (Retouches) (2008) Vladimir Samsonov The Winter, Зима (Zima) (1979) Brightness, Блики (Bliki) (1981) Contrasts, Контрасты (Kontrasty) (1981) Contours, Контуры (Kontury) (1981) Masquerade, Маскарад (Maskarad) (1981) Still Life, Натюрморт (Natyurmort) (1981) Restoration, Реставрация (Restavratsiya) (1981) The Little Sun, Солнышко (Solnyshko) (1981) The Snail, Улитка (Ulitka) (1981) Magic Trick, Фокус (Fokus) (1981) Coloured Music, Цветомузыка (Tsvetomuzyka) (1981) The Bumblebee, Шмель (Shmel) (1981) Mood, Настроение (Nastroyeniye) (1982) The Landscape, Пейзаж (Peyzazh) (1982) Rendez-Vous, Свидание (Svidaniye) (1982) The Magpie, Сорока (Soroka) (1982) The Motif, Мотив (Motiv) (1984) Waiting for..., Ожидание (Ozhidaniye) (1984) Miniatures, Миниатюры (Miniatyury) (1985) Miniatures - 86, Миниатюры - 86 (Miniatyury - 86) (1986) Olive Jar Studios MTV: Greetings From The World (1988) Boris Stepantsev The Song About the Falcon, Песня о соколе (Pesnya o sokole) (1967) Wendy Tilby Strings (1991)
**Modified Gibson Incision** Modified Gibson Incision: The Modified Gibson incision is a transverse incision above the pubis, frequently used in gynecological and urological surgeries. This incision can be made on either side of the midline, but often on the left. It is started 3 cm above and parallel to the inguinal ligament and extended vertically 3 cm medial to the anterior superior iliac spine up to the umbilicus. The modified Gibson incision allows proper access to the small bowel and pelvic organs and limited access to omentum. It is also possible to have tactile assessment of large bowel and subdiaphragmatic surfaces using this incision. This incision is preferred for lymph node dissection, as extra peritoneal approach of pelvic sidewall is possible. The inferior epigastric vessels and round ligament are ligated to provide easy exposure. If traction to the peritoneum is high, there is a chance for avulsion of the inferior mesenteric artery and inferior mesenteric vein.
**Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen** Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen: Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen ("Quantum theoretical re-interpretation of kinematic and mechanical relations") was a breakthrough article in quantum mechanics written by Werner Heisenberg, which appeared in Zeitschrift für Physik in September 1925. Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen: Heisenberg worked on the article while recovering from hay fever on the island of Heligoland, corresponding with Wolfgang Pauli on the subject. When asked for his opinion of the manuscript, Pauli responded favorably, but Heisenberg said that he was still "very uncertain about it". In July 1925, he sent the manuscript to Max Born to review and decide whether to submit it for publication. In the article, Heisenberg tried to explain the energy levels of a one-dimensional anharmonic oscillator, avoiding the concrete but unobservable representations of electron orbits by using observable parameters such as transition probabilities for quantum jumps, which necessitated using two indexes corresponding to the initial and final states. Also included was the Heisenberg commutator, his law of multiplication needed to describe certain properties of atoms, whereby the product of two physical quantities did not commute. Therefore, PQ would differ from QP where, for example, P was an electron's momentum, and Q its position. Paul Dirac, who had received a proof copy in August 1925, realized that the commutative law had not been fully developed, and he produced an algebraic formulation to express the same results in more logical form. Historical context: The article laid the groundwork for matrix mechanics, later developed further by Born and Pascual Jordan. When Born read the article, he recognized the formulation as one which could be transcribed and extended to the systematic language of matrices. Born, with the help of his assistant and former student Pascual Jordan, began immediately to make the transcription and extension, and they submitted their results for publication; their manuscript was received for publication just 60 days after Heisenberg’s article. A follow-on article by all three authors extending the theory to multiple dimensions was submitted for publication before the end of the year.
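The non-commutativity Dirac seized on can be checked numerically with the matrices that matrix mechanics assigns to an oscillator's position and momentum. The sketch below is illustrative only: units are chosen so that ħ = m = ω = 1, and the infinite matrices are truncated to 6 × 6, which is why the last diagonal entry of the commutator deviates from the canonical value iħ.

```python
# Numerical illustration of PQ != QP for the harmonic-oscillator matrices of
# matrix mechanics. Units: hbar = m = omega = 1. Truncating the infinite
# matrices to N x N makes the last diagonal entry of [Q, P] deviate from i;
# all other entries reproduce the canonical commutator [Q, P] = i*hbar.
import numpy as np

N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
adag = a.conj().T                            # creation operator
Q = (a + adag) / np.sqrt(2)                  # position matrix
P = 1j * (adag - a) / np.sqrt(2)             # momentum matrix

comm = Q @ P - P @ Q
print(np.round(np.diag(comm), 6))
# roughly: [0.+1.j 0.+1.j 0.+1.j 0.+1.j 0.+1.j 0.-5.j]  (last entry is a truncation artifact)
```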
**Vouch by Reference** Vouch by Reference: Vouch by Reference (VBR) is a protocol used in Internet mail systems for implementing sender certification by third-party entities. Independent certification providers vouch for the reputation of senders by verifying the domain name that is associated with transmitted electronic mail. VBR information can be used by a message transfer agent, a mail delivery agent or by an email client. The protocol is intended to become a standard for email sender certification, and is described in RFC 5518. Operation: Email sender A user of a VBR email certification service signs its messages using DomainKeys Identified Mail (DKIM) and includes a VBR-Info field in the signed header. The sender may also use the Sender Policy Framework to authenticate its domain name. The VBR-Info: header field contains the domain name that is being certified, typically the responsible domain in a DKIM signature (d= tag), the type of content in the message, and a list of one or more vouching services, that is, the domain names of the services that vouch for the sender for that kind of content: VBR-Info: md=domain.name.example; mc=type; mv=vouching.example:vouching2.example Email receiver An email receiver can authenticate the message's domain name using DKIM or SPF, thus finding the domains that are responsible for the message. It then obtains the name of a vouching service that it trusts, either from among the set supplied by the sender or from a locally configured set of preferred vouching services. Using the Domain Name System, the receiver can verify whether a vouching service actually vouches for a given domain. To do so, the receiver queries a TXT resource record for the name composed: domain.name.example._vouch.vouching.example The returned data, if any, is a space-delimited list of all the types that the service vouches for, given in lowercase ASCII. They should match the self-asserted message content. The types defined are transaction, list, and all. Auditing the message may allow the receiver to establish whether its content corresponds. The result of the authentication can be saved in a new header field, according to RFC 6212, like so: Authentication-Results: receiver.example; vbr=pass header.mv=vouching.example header.md=domain.name.example Implementations and variations: OpenDKIM and MDaemon Messaging Server by Alt-N Technologies have been among the first software implementations of VBR. OpenDKIM provides a milter as well as a standalone library. Implementations and variations: Roaring Penguin Software's CanIt anti-spam filter supports VBR as of version 7.0.8, released on 2010-11-09. Spamhaus has released The Spamhaus Whitelist, which includes a domain-based whitelist, the DWL, where a domain name can be queried as, e.g., dwltest.com._vouch.dwl.spamhaus.org. Although the standard only specifies TXT resource records, following a long-established DNSBL practice, Spamhaus has also assigned A resource records with values 127.0.2.0/24 for whitelist return codes. The possibility to query an address may allow easier deployment of existing code. However, their techfaq recommends checking the domain (the value of the d= tag) of a valid DKIM-Signature by querying the corresponding TXT record, and their howto gives details about inserting VBR-Info header fields in messages signed by whitelisted domains. By 2013, one of the protocol authors considered it a flop.
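The receiver-side lookup described above is mechanical enough to sketch. In the example below the DNS query is mocked so that the snippet stays self-contained and runs offline, and the header parsing is deliberately simplified; a real receiver would issue an actual TXT query for each candidate vouching service.

```python
# Sketch of a receiver-side VBR check: parse the VBR-Info field, form the
# "<md-domain>._vouch.<vouching-service>" query name, and see whether the TXT
# answer lists the message's self-asserted content type (or "all").
# The TXT lookup is stubbed out so the example runs offline.

def parse_vbr_info(field: str) -> dict:
    tags = dict(part.strip().split("=", 1) for part in field.split(";") if "=" in part)
    return {"md": tags["md"], "mc": tags["mc"], "mv": tags["mv"].split(":")}

def vouches(md: str, mc: str, voucher: str, txt_lookup) -> bool:
    name = f"{md}._vouch.{voucher}"
    txt = txt_lookup(name)            # e.g. "transaction list", "all", or None
    if txt is None:
        return False
    types = txt.lower().split()
    return "all" in types or mc.lower() in types

# Offline demo with a mocked DNS answer:
fake_dns = {"domain.name.example._vouch.vouching.example": "list all"}.get
info = parse_vbr_info("md=domain.name.example; mc=list; mv=vouching.example:vouching2.example")
print(any(vouches(info["md"], info["mc"], v, fake_dns) for v in info["mv"]))  # True
```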
**Simplicial localization** Simplicial localization: In category theory, a branch of mathematics, the simplicial localization of a category C with respect to a class W of morphisms of C is a simplicial category LC whose π0 is the localization C[W−1] of C with respect to W; that is, π0LC(x,y)=C[W−1](x,y) for any objects x, y in C. The notion is due to Dwyer and Kan.
**Cyc** Cyc: Cyc (pronounced SYKE) is a long-term artificial intelligence project that aims to assemble a comprehensive ontology and knowledge base that spans the basic concepts and rules about how the world works. Hoping to capture common sense knowledge, Cyc focuses on implicit knowledge that other AI platforms may take for granted. This is contrasted with facts one might find somewhere on the internet or retrieve via a search engine or Wikipedia. Cyc enables semantic reasoners to perform human-like reasoning and be less "brittle" when confronted with novel situations. Cyc: Douglas Lenat began the project in July 1984 at MCC, where he was Principal Scientist from 1984 to 1994; since January 1995 the project has been under active development by the Cycorp company, where Lenat is the CEO. Overview: The need for a massive symbolic artificial intelligence project of this kind was born in the early 1980s. Early AI researchers had ample experience over the previous 25 years with AI programs that would generate encouraging early results but then fail to "scale up"—move beyond the 'training set' to tackle a broader range of cases. Douglas Lenat and Alan Kay publicized this need, and they organized a meeting at Stanford in 1983 to address the problem. The back-of-the-envelope calculations by Lenat, Kay, and their colleagues (including Marvin Minsky, Allen Newell, Edward Feigenbaum, and John McCarthy) indicated that the effort would require between 1,000 and 3,000 person-years of effort, far beyond the standard academic project model. However, events within a year of that meeting enabled an effort of that scale to get underway. Overview: The project began in July 1984 as the flagship project of the 400-person Microelectronics and Computer Technology Corporation (MCC), a research consortium started by two dozen large United States based corporations "to counter a then ominous Japanese effort in AI, the so-called "fifth-generation" project." The US Government reacted to the Fifth Generation threat by passing the National Cooperative Research Act of 1984, which for the first time allowed US companies to "collude" on long-term high-risk high-payoff research, and MCC and Sematech sprang up to take advantage of that ten-year opportunity. MCC's first President and CEO was Bobby Ray Inman, former NSA Director and Central Intelligence Agency deputy director. Overview: The objective of the Cyc project was to codify, in machine-usable form, the millions of pieces of knowledge that compose human common sense. This entailed, along the way, (1) developing an adequately expressive representation language, CycL, (2) developing an ontology spanning all human concepts down to some appropriate level of detail, (3) developing a knowledge base on that ontological framework, comprising all human knowledge about those concepts down to some appropriate level of detail, and (4) developing an inference engine exponentially faster than those used in then-conventional expert systems, to be able to infer the same types and depth of conclusions that humans are capable of, given their knowledge of the world. 
Overview: In slightly more detail: The CycL representation language started as an extension of RLL (the so-called Representation Language Language, developed in 1979–1980 by Lenat and his graduate student Russell Greiner while at Stanford University), but within a few years of the launch of the Cyc project it became clear that even representing a typical news story or novel or advertisement would require more than the expressive power of full first-order logic, namely second-order predicate calculus ("What is the relationship between rain and water?") and then even higher-level orders of logic including modal logic, reflection (enabling the system to reason about its progress so far, on a problem on which it's working), context logic (enabling the system to reason explicitly about the contexts in which its various premises and conclusions might hold), non-monotonic logic, and circumscription. By 1989, CycL had expanded in expressive power to higher-order logic (HOL). Overview: Triplestore representations (which are akin to the frame-and-slot representation languages of the 1970s from which RLL sprang) are widespread today in AI. It may be useful to cite a few examples that stress or break that type of representation, typical of the examples that forced the Cyc project to move from a triplestore representation to a much more expressive one during the period 1984–1989: English sentences including negations ("Fred does not own a dog"); nested quantifiers ("Every American has a mother" means for-all x there-exists y..., but "Every American has a President" means there-exists y such that for-all x...); nested modals such as "The United States believes that Germany wants NATO to avoid pursuing..."; and relationships of arity higher than 2, which are awkward to represent in a triplestore, such as "Los Angeles is between San Diego and San Francisco along US101." Cyc's ontology grew to about 100,000 terms during the first decade of the project, to 1994, and as of 2017 contained about 1,500,000 terms. This ontology included: 416,000 collections (types, sorts, natural kinds, which includes both types of things such as Fish and types of actions such as Fishing), and a little over a million individuals, representing 42,500 predicates (relations, attributes, fields, properties, functions) and about a million generally well known entities such as TheUnitedStatesOfAmerica, BarackObama, TheSigningOfTheUSDeclarationOfIndependence, etc. Overview: An arbitrarily large number of additional terms are also implicitly present in the Cyc ontology, in the sense that there are term-denoting functions such as CalendarYearFn (when given the argument 2016, it denotes the calendar year 2016), GovernmentFn (when given the argument France, it denotes the government of France), Meter (when given the argument 2016, it denotes a distance of 2.016 kilometers), and nestings and compositions of such function-denoting terms. Overview: The Cyc knowledge base of general common-sense rules and assertions involving those ontological terms was largely created by hand axiom-writing; it grew to about 1 million assertions in 1994, and as of 2017 is about 24.5 million and has taken well over 1,000 person-years of effort to construct. Overview: It is important to understand that the Cyc ontological engineers strive to keep those numbers as small as possible, not inflate them, so long as the deductive closure of the knowledge base isn't reduced. Suppose Cyc is told about one billion individual people, animals, etc.
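The difference between the two quantifier orderings mentioned above can be written out explicitly. The following is a sketch of the standard first-order readings; the predicate names are illustrative placeholders, not actual CycL vocabulary:

```latex
% "Every American has a mother": the mother may differ from person to person.
\forall x\, \bigl(\mathrm{American}(x) \rightarrow \exists y\, \mathrm{motherOf}(y, x)\bigr)

% "Every American has a President": a single y serves for all Americans.
\exists y\, \forall x\, \bigl(\mathrm{American}(x) \rightarrow \mathrm{presidentOf}(y, x)\bigr)
```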
Then it could be told 10¹⁸ facts of the form "Mickey Mouse is not the same individual as <Bullwinkle the Moose/Abraham Lincoln/Jennifer Lopez>". But instead of that, one could tell Cyc 10,000 Linnaean taxonomy rules followed by just 10⁸ rules of the form "No mouse is a moose". And even more compactly, Cyc could instead just be given those 10,000 Linnaean taxonomy rules followed by just one rule of the form "For any two Linnaean taxons, if neither is explicitly known to be a supertaxon of the other, then they are disjoint". Those 10,001 assertions have the same deductive closure as the earlier-mentioned 10¹⁸ facts. Overview: The Cyc inference engine design separates the epistemological problem (what content should be in the Cyc KB) from the heuristic problem (how Cyc could efficiently infer arguments hundreds of steps deep, in a sea of tens of millions of axioms). To do the former, the CycL language and well-understood logical inference might suffice. For the latter, Cyc used a community-of-agents architecture, where specialized reasoning modules, each with its own data structure and algorithm, "raised their hand" if they could efficiently make progress on any of the currently open sub-problems. By 1994 there were 20 such heuristic level (HL) modules; as of 2017 there are over 1,050 HL modules. Some of these HL modules are very general, such as a module that caches the Kleene Star (transitive closure) of all the commonly-used transitive relations in Cyc's ontology. Overview: Some are domain-specific, such as a chemical equation-balancer. These can be and often are an "escape" to (pointer to) some externally available program or webservice or online database, such as a module to quickly "compute" the current population of a city by knowing where/how to look that up. CycL has a publicly released specification and dozens of HL modules were described in Lenat and Guha's textbook, but the actual Cyc inference engine code, and the full list of 1000+ HL modules, is Cycorp-proprietary. The name "Cyc" (from "encyclopedia", pronounced [saɪk], like "syke") is a registered trademark owned by Cycorp. Access to Cyc is through paid licenses, but bona fide AI research groups are given research-only no-cost licenses (cf. ResearchCyc); as of 2017, over 600 such groups worldwide have these licenses. Overview: Typical pieces of knowledge represented in the Cyc knowledge base are "Every tree is a plant" and "Plants die eventually". When asked whether trees die, the inference engine can draw the obvious conclusion and answer the question correctly. Overview: Most of Cyc's knowledge, outside math, is only true by default. For example, Cyc knows that as a default parents love their children, when you're made happy you smile, taking your first step is a big accomplishment, when someone you love has a big accomplishment that makes you happy, and only adults have children. When asked whether a picture captioned "Someone watching his daughter take her first step" contains a smiling adult person, Cyc can logically infer that the answer is Yes, and "show its work" by presenting the step by step logical argument using those five pieces of knowledge from its knowledge base. These are formulated in the language CycL, which is based on predicate calculus and has a syntax similar to that of the Lisp programming language. Overview: In 2008, Cyc resources were mapped to many Wikipedia articles. Cyc is presently connected to Wikidata. Future plans may connect Cyc to both DBpedia and Freebase.
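As a rough illustration of the compression argument above, here is a minimal Python sketch (invented for this article; it is not Cyc or CycL code, and the toy taxonomy is made up) in which a few taxonomy links plus the single generic disjointness rule answer pairwise questions that would otherwise require explicit pairwise facts:

```python
# Toy sketch: derive disjointness from a small taxonomy plus one generic rule,
# instead of asserting every pairwise "no X is a Y" fact explicitly.
# The taxonomy and names are invented for illustration; this is not Cyc code.

TAXONOMY = {           # child -> parent ("genls"-style links)
    "Mouse": "Rodent",
    "Rodent": "Mammal",
    "Moose": "Deer",
    "Deer": "Mammal",
    "Mammal": "Animal",
}

def supertaxa(taxon):
    """Return the taxon together with all of its supertaxa."""
    chain = [taxon]
    while taxon in TAXONOMY:
        taxon = TAXONOMY[taxon]
        chain.append(taxon)
    return chain

def disjoint(a, b):
    """Generic rule: two taxa are disjoint unless one is a supertaxon of the other."""
    return a not in supertaxa(b) and b not in supertaxa(a)

# No "no mouse is a moose" fact is stored, yet it follows:
print(disjoint("Mouse", "Moose"))   # True  -> nothing is both a mouse and a moose
print(disjoint("Mouse", "Rodent"))  # False -> Rodent subsumes Mouse
```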
Overview: Much of the current work at Cyc continues to be knowledge engineering, representing facts about the world by hand, and implementing efficient inference mechanisms on that knowledge. Increasingly, however, work at Cycorp involves giving the Cyc system the ability to communicate with end users in natural language, and to assist with the ongoing knowledge formation process via machine learning and natural-language understanding. Another large effort at Cycorp is building a suite of Cyc-powered ontological engineering tools to lower the bar to entry for individuals to contribute to, edit, browse, and query Cyc. Overview: Like many companies, Cycorp has ambitions to use Cyc's natural-language processing to parse the entire internet to extract structured data; unlike all others, it is able to call on the Cyc system itself to act as an inductive bias and as an adjudicator of ambiguity, metaphor, and ellipsis. There are few, if any, systematic benchmark studies of Cyc's performance. Knowledge base: The concept names in Cyc are CycL terms or constants. Constants start with an optional "#$" and are case-sensitive. There are constants for: Individual items known as individuals, such as #$BillClinton or #$France. Collections, such as #$Tree-ThePlant (containing all trees) or #$EquivalenceRelation (containing all equivalence relations). A member of a collection is called an instance of that collection. Functions, which produce new terms from given ones. For example, #$FruitFn, when provided with an argument describing a type (or collection) of plants, will return the collection of its fruits. By convention, function constants start with an upper-case letter and end with the string "Fn". Knowledge base: Truth functions, which can apply to one or more other concepts and return either true or false. For example, #$siblings is the sibling relationship, true if the two arguments are siblings. By convention, truth function constants start with a lower-case letter. Truth functions may be broken down into logical connectives (such as #$and, #$or, #$not, #$implies), quantifiers (#$forAll, #$thereExists, etc.) and predicates. Two important binary predicates are #$isa and #$genls. The first one describes that one item is an instance of some collection, the second one that one collection is a subcollection of another one. Facts about concepts are asserted using certain CycL sentences. Predicates are written before their arguments, in parentheses:
(#$isa #$BillClinton #$UnitedStatesPresident) "Bill Clinton belongs to the collection of U.S. presidents."
(#$genls #$Tree-ThePlant #$Plant) "All trees are plants."
(#$capitalCity #$France #$Paris) "Paris is the capital of France."
Sentences can also contain variables, strings starting with "?". These sentences are called "rules". One important rule asserted about the #$isa predicate reads:
(#$implies (#$and (#$isa ?OBJ ?SUBSET) (#$genls ?SUBSET ?SUPERSET)) (#$isa ?OBJ ?SUPERSET))
"If OBJ is an instance of the collection SUBSET and SUBSET is a subcollection of SUPERSET, then OBJ is an instance of the collection SUPERSET". Another typical example is (#$relationAllExists #$biologicalMother #$ChordataPhylum #$FemaleAnimal) which means that for every instance of the collection #$ChordataPhylum (i.e.
for every chordate), there exists a female animal (instance of #$FemaleAnimal), which is its mother (described by the predicate #$biologicalMother).The knowledge base is divided into microtheories (Mt), collections of concepts and facts typically pertaining to one particular realm of knowledge. Unlike the knowledge base as a whole, each microtheory must be free from monotonic contradictions. Each microtheory is a first-class object in the Cyc ontology; it has a name that is a regular constant; microtheory constants contain the string "Mt" by convention. An example is #$MathMt, the microtheory containing mathematical knowledge. The microtheories can inherit from each other and are organized in a hierarchy: one specialization of #$MathMt is #$GeometryGMt, the microtheory about geometry. Inference engine: An inference engine is a computer program that tries to derive answers from a knowledge base. The Cyc inference engine performs general logical deduction (including modus ponens, modus tollens, universal quantification and existential quantification). It also performs inductive reasoning, statistical machine learning and symbolic machine learning, and abductive reasoning (but of course sparingly and using the existing knowledge base as a filter and guide). Releases: OpenCyc The first version of OpenCyc was released in spring 2002 and contained only 6,000 concepts and 60,000 facts. The knowledge base was released under the Apache License. Cycorp stated its intention to release OpenCyc under parallel, unrestricted licences to meet the needs of its users. The CycL and SubL interpreter (the program that allows users to browse and edit the database as well as to draw inferences) was released free of charge, but only as a binary, without source code. It was made available for Linux and Microsoft Windows. The open source Texai project released the RDF-compatible content extracted from OpenCyc. A version of OpenCyc, 4.0, was released in June 2012. OpenCyc 4.0 included much of the Cyc ontology at that time, containing hundreds of thousands of terms, along with millions of assertions relating the terms to each other; however, these are mainly taxonomic assertions, not the complex rules available in Cyc. The OpenCyc 4.0 knowledge base contained 239,000 concepts and 2,093,000 facts. Releases: The main point of releasing OpenCyc was to help AI researchers understand what was missing from what they now call ontologies and knowledge graphs. It's useful and important to have properly taxonomized concepts like person, night, sleep, lying down, waking, happy, etc., but what's missing from the OpenCyc content about those terms, but present in the Cyc KB content, are the various rules of thumb that most of us share about those terms: that (as a default, in the ModernWesternHumanCultureMt) each person sleeps at night, sleeps lying down, can be woken up, is not happy about being woken up, and so on. That point does not require continually-updated releases of OpenCyc, so, as of 2017, OpenCyc is no longer available. Releases: ResearchCyc In July 2006, Cycorp released the executable of ResearchCyc 1.0, a version of Cyc aimed at the research community, at no charge. (ResearchCyc was in beta stage of development during all of 2004; a beta version was released in February 2005.) 
In addition to the taxonomic information contained in OpenCyc, ResearchCyc includes significantly more semantic knowledge (i.e., additional facts and rules of thumb) involving the concepts in its knowledge base; it also includes a large lexicon, English parsing and generation tools, and Java-based interfaces for knowledge editing and querying. In addition it contains a system for ontology-based data integration. As of 2017, regular releases of ResearchCyc continued to appear, with 600 research groups utilizing licenses around the world at no cost for noncommercial research purposes. As of December 2019, ResearchCyc is no longer supported. Cycorp expects to improve and overhaul tools for external developers over the coming years. Applications: There have been over a hundred successful applications of Cyc; listed here are a few mutually dissimilar instances: Pharmaceutical Term Thesaurus Manager/Integrator For over a decade, Glaxo has used Cyc to semi-automatically integrate all the large (hundreds of thousands of terms) thesauri of pharmaceutical-industry terms that reflect differing usage across companies, countries, years, and sub-industries. This ontology integration task requires domain knowledge, shallow semantic knowledge, but also arbitrarily deep common sense knowledge and reasoning. Pharma vocabulary varies across countries, (sub-) industries, companies, departments, and decades of time. E.g., what’s a gel pak? What’s the “street name” for ranitidine hydrochloride? Each of these n controlled vocabularies is an ontology with approximately 300k terms. Glaxo researchers need to issue a query in their current vocabulary, have it translated into a neutral “true meaning”, and then have that transformed in the opposite direction to find potential matches against documents each of which was written to comply with a particular known vocabulary. They had been using a large staff to do that manually. Cyc is used as the universal interlingua capable of representing the union of all the terms’ “true meanings”, and capable of representing the 300k transformations between each of those controlled vocabularies and Cyc, thereby converting an n² problem into a linear one without introducing the usual sort of “telephone game” attenuation of meaning. Furthermore, creating each of those 300k mappings for each thesaurus is done in a largely automated fashion, by Cyc. Applications: Terrorism Knowledge Base The comprehensive Terrorism Knowledge Base was an application of Cyc in development that tried to ultimately contain all relevant knowledge about "terrorist" groups, their members, leaders, ideology, founders, sponsors, affiliations, facilities, locations, finances, capabilities, intentions, behaviors, tactics, and full descriptions of specific terrorist events. The knowledge is stored as statements in mathematical logic, suitable for computer understanding and reasoning. Applications: Cleveland Clinic Foundation The Cleveland Clinic has used Cyc to develop a natural-language query interface of biomedical information, spanning decades of information on cardiothoracic surgeries. 
A query is parsed into a set of CycL (higher-order logic) fragments with open variables (e.g., "this question is talking about a person who developed an endocarditis infection", "this question is talking about a subset of Cleveland Clinic patients who underwent surgery there in 2009", etc.); then various constraints are applied (medical domain knowledge, common sense, discourse pragmatics, syntax) to see how those fragments could possibly fit together into one semantically meaningful formal query; significantly, in most cases, there is exactly one and only one such way of incorporating and integrating those fragments. Integrating the fragments involves (i) deciding which open variables in which fragments actually represent the same variable, and (ii) for all the final variables, decide what order and scope of quantification that variable should have, and what type (universal or existential). That logical (CycL) query is then converted into a SPARQL query that is passed to the CCF SemanticDB that is its data lake. Applications: MathCraft One Cyc application aims to help students doing math at a 6th grade level, helping them much more deeply understand that subject matter. It is based on the experience that we often have thought we understood something, but only really understood it after we had to explain or teach it to someone else. Unlike almost all other educational software, where the computer plays the role of the teacher, this application of Cyc, called MathCraft, has Cyc play the role of a fellow student who is always slightly more confused than you, the user, are about the subject. The user's role is to observe the Cyc avatar and give it advice, correct its errors, mentor it, get it to see what it's doing wrong, etc. As the user gives good advice, Cyc allows the avatar to make fewer mistakes of that type, hence, from the user's point of view, it seems as though the user has just successfully taught it something. This is a variation of learning by teaching. Criticisms: The Cyc project has been described as "one of the most controversial endeavors of the artificial intelligence history". Catherine Havasi, CEO of Luminoso, says that Cyc is the predecessor project to IBM's Watson. Machine-learning scientist Pedro Domingos refers to the project as a "catastrophic failure" for several reasons, including the unending amount of data required to produce any viable results and the inability for Cyc to evolve on its own.Robin Hanson, a professor of economics at George Mason University, gives a more balanced analysis: Of course the CYC project is open to criticism on its many particular choices. People have complained about its logic-like and language-like representations, about its selection of prototypical cases to build from (e.g., encyclopedia articles), about its focus on answering over acting, about how often it rebuilds vs. maintaining legacy systems, and about being private vs. publishing everything. But any large project like this would produce such disputes, and it is not obvious any of its choices have been seriously wrong. They had to start somewhere, and in my opinion they have now collected a knowledge base with a truly spectacular size, scope, and integration. Other architectures may well work better, but if knowing lots is anywhere near as important as Lenat thinks, I’d expect serious AI attempts to import CYC’s knowledge, translating it into a new representation. No other source has anywhere near CYC’s size, scope, and integration. 
Criticisms: A similar sentiment was expressed by Marvin Minsky: "Unfortunately, the strategies most popular among AI researchers in the 1980s have come to a dead end," said Minsky. So-called “expert systems,” which emulated human expertise within tightly defined subject areas like law and medicine, could match users’ queries to relevant diagnoses, papers and abstracts, yet they could not learn concepts that most children know by the time they are 3 years old. “For each different kind of problem,” said Minsky, “the construction of expert systems had to start all over again, because they didn’t accumulate common-sense knowledge.” Only one researcher has committed himself to the colossal task of building a comprehensive common-sense reasoning system, according to Minsky. Douglas Lenat, through his Cyc project, has directed the line-by-line entry of more than 1 million rules into a commonsense knowledge base.Gary Marcus, a professor of psychology and neural science at New York University and the cofounder of an AI company called Geometric Intelligence, says "it represents an approach that is very different from all the deep-learning stuff that has been in the news.” This is consistent with Doug Lenat's position that "Sometimes the veneer of intelligence is not enough".Stephen Wolfram writes: In the early days of the field of artificial intelligence, there were plenty of discussions of “knowledge representation”, with approaches based variously on the grammar of natural language, the structure of predicate logic or the formalism of databases. Very few large-scale projects were attempted (Doug Lenat’s Cyc being a notable counterexample). Criticisms: Marcus writes: The field might well benefit if CYC were systematically described and evaluated. If CYC has solved some significant fraction of commonsense reasoning, then it is critical to know that, both as a useful tool, and as a starting point for further research. If CYC has run into difficulties, it would be useful to learn from the mistakes that were made. If CYC is entirely useless, then researchers can at least stop worrying about whether they are reinventing the wheel. Criticisms: Every few years since it began publishing (1993), there is a new Wired Magazine article about Cyc, some positive and some negative (including one issue which contained one of each). Notable employees: This is a list of some of the notable people who work or have worked on Cyc either while it was a project at MCC (where Cyc was first started) or Cycorp.
**Cedratine** Cedratine: Cedratine is a distilled beverage (liqueur) produced from citrus fruits, with an alcohol content between 36 and 40 percent. It originated in Tunisia, where most of it is still produced. It is also popular in Corsica. Cedratine can be consumed at room temperature or chilled, or served as the basis for many cocktails or fruit salads.
**Native and foreign format** Native and foreign format: A native format, in the context of software applications, refers to the file format that the application is designed to work with. It captures the internal reality of the program as faithfully as possible. Most likely this is also the default format of the application. A native file format therefore most likely has a one-to-one relationship with the application's features. In turn, a foreign format is not a true reflection of application internals, even though it may be supported by an application. Reading a foreign file requires translating the data, which can cause data loss, and further editing may prevent faithful writing back to the foreign format. Example: A document writer application may support a multitude of files, ranging from simple text files that only store characters and not font faces or sizes, to complex documents containing text effects and images. However, when these text files or documents are opened, they are not necessarily edited in their original format. Instead, the document writer may first convert the file into its own native data structure. Once editing is finished, the application will then convert the file back to its original format. Example: In some cases, applications may be able to open (import) files, but not save (export) them in the same format. This may be due to licensing issues, or simply because the feature has not been implemented in the application's programming yet. However, the application will typically be able to save the document in its own native format or any of the other foreign formats it is programmed to export. Example: For example, Microsoft Office Word 2003 is able to open Windows Write (*.wri) files, but cannot save them. Instead it is able to save them in its native Word Document (*.doc) format or a number of other common formats.
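As a minimal sketch of the round trip described above (the formats, field names, and loss behaviour are invented for illustration and do not describe any particular application):

```python
# Sketch of a native vs. foreign format round trip.
# The "native" form stores text plus styling; the "foreign" plain-text format
# stores only characters, so styling is lost on export (illustrative only).

from dataclasses import dataclass, field

@dataclass
class NativeDocument:
    """The application's internal (native) representation."""
    text: str
    styles: dict = field(default_factory=dict)  # e.g. {"font": "Serif", "size": 12}

def import_plain_text(data: str) -> NativeDocument:
    # Foreign -> native: a translation step; plain text carries no styling,
    # so defaults are filled in.
    return NativeDocument(text=data, styles={"font": "default", "size": 11})

def export_plain_text(doc: NativeDocument) -> str:
    # Native -> foreign: styling cannot be expressed in plain text and is
    # silently dropped (the "data loss" the article refers to).
    return doc.text

doc = import_plain_text("Hello, world")
doc.styles["font"] = "Serif"       # an edit made in the native structure
print(export_plain_text(doc))      # styling is lost on the way back out
```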
**CourseManagement Open Service Interface Definition** CourseManagement Open Service Interface Definition: An open service interface definition (OSID) is a programmatic interface specification describing a service. These interfaces are specified by the Open Knowledge Initiative (OKI) to implement a service-oriented architecture (SOA) to achieve interoperability among applications across a varied base of underlying and changing technologies. Rationale: To preserve the investment in software engineering, program logic is separated from underlying technologies through the use of software interfaces, each of which defines a contract between a service consumer and a service provider. This separation is the basis of any valid SOA. While some methods define the service interface boundary at a protocol or server level, OSIDs place the boundary at the application level to effectively insulate the consumer from protocols, server identities, and utility libraries that are in the domain of a service provider, resulting in software which is easier to develop, longer-lasting, and usable across a wider array of computing environments. Rationale: OSIDs assist in software design and development by breaking up the problem space across service interface boundaries. Because network communication issues are addressed within a service provider and below the interface, there isn't an assumption that every service provider implements a remote communications protocol (though many do). OSIDs are also used for communication and coordination among the various components of complex software, which provides a means of organizing design and development activities for simplified project management. Rationale: OSID providers (implementations) are often reused across a varied set of applications. Once software is made to understand the interface contract for a service, other compliant implementations may be used in its place. This achieves reusability at a high level (a service level) and also serves to easily scale software written for smaller, more dedicated purposes. Rationale: An OSID provider implementation may be composed of an arbitrary number of other OSID providers. This layering technique is an obvious means of abstraction. When all the OSID providers implement the same service, this is called an adapter pattern. Adapter patterns are powerful techniques to federate, multiplex, or bridge different services contracting from the same interface without modification to the application (see the sketch after the list below). List: Agent, Assessment, Authentication, Authorization, CourseManagement, Dictionary, Filing, Grading, Hierarchy, Logging, Messaging, Repository, Scheduling, Workflow.
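A minimal Python sketch of the consumer/provider contract and the adapter-style layering described above (the interface and class names are invented for illustration and are not taken from the OKI specifications):

```python
# Illustrative sketch of an OSID-style service contract plus a layered (adapter) provider.
# Names are invented; real OSIDs are specified by OKI and are considerably richer.

from abc import ABC, abstractmethod

class CourseLookupService(ABC):
    """The service contract that the consumer codes against."""
    @abstractmethod
    def get_course_title(self, course_id: str) -> str: ...

class InMemoryProvider(CourseLookupService):
    """A simple provider; a real one might wrap a database or a remote protocol."""
    def __init__(self, courses: dict):
        self._courses = courses
    def get_course_title(self, course_id: str) -> str:
        return self._courses[course_id]

class CachingAdapter(CourseLookupService):
    """An adapter-style provider layered on top of another provider of the same service."""
    def __init__(self, inner: CourseLookupService):
        self._inner = inner
        self._cache = {}
    def get_course_title(self, course_id: str) -> str:
        if course_id not in self._cache:
            self._cache[course_id] = self._inner.get_course_title(course_id)
        return self._cache[course_id]

# The consumer only ever sees CourseLookupService; providers can be swapped or stacked freely.
service: CourseLookupService = CachingAdapter(InMemoryProvider({"CS101": "Intro to Computing"}))
print(service.get_course_title("CS101"))
```

Because the consumer depends only on the interface, swapping InMemoryProvider for any other compliant provider requires no change to the calling code, which is the reuse property the Rationale sections describe.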
**Marketing exposure** Marketing exposure: Sometimes referred to as advertising exposure, marketing exposure is the degree to which a company's target market is exposed to the company's communications about its products/services, initiatives, etc. Exposure is the product of a marketing strategy, and once the strategy is implemented it is only a matter of time before exposure is put into action. Consumers recognize "marketing exposure" when the company creates and promotes a campaign. There are three types of marketing exposure: intensive, selective, and exclusive. Overview: Marketing exposure is put into action after a marketing strategy has been implemented. In the marketing world, exposure is a number within a portfolio. In the consumer world, exposure is a company's campaign or brand that is trying to market specific products to help service the consumer. It is also a way to make a business stand out in the marketplace. Without marketing exposure, campaigns would be non-existent and therefore companies would suffer. Purpose: Marketing exposure is a major determinant of a company's success in its market. Although it is never directly identified or defined, it is crucial for helping a company progress, creating competition for other companies, making the company more credible with consumers, and benefiting the company while satisfying consumers. While all of this may seem easy, it typically takes months of preparation to create, launch, and manage a campaign. Campaigns must be exposed thoroughly in the market as much as possible without annoying or bothering consumers to the point of "overexposing" the campaign. There is a fine balance between keeping the consumers interested in a product or brand, and annoying them to the point that they have no interest in supporting a company. To expose a campaign successfully, many factors must be considered. Exposure is not limited to a consumer base; it can also be directed at other companies in the market. These companies do not have to be similar to the business seeking positive exposure; on the contrary, they should be diverse, reaching into other markets and opening up new pathways. Diversifying into many sectors also reduces the risk of profit loss, whereas being too diverse means resources are stretched very thin, yielding minimal returns. There must be a balance between taking risks and diversifying. Objectives: Exposure objectives are the basic goals that the company is looking to accomplish in their campaign. Among the important goals, first understanding their consumer is key. For successful exposure, the company must create a target market: identify the specific consumer and their needs. Consumer factors and environmental factors can determine whether or not the company is capable of selling their product or service. Therefore, the company must evaluate what they have to offer and then determine how their product can help the consumer. Once the consumer and their needs have been identified, companies can figure out their goals and strategies as to how they can get the consumer to choose their product or service over the competitor's. Objectives: Factors Within the objectives, various factors must be taken into consideration. Factors fall into Environmental, Consumer, Product, and Company categories. Understanding these factors and how they affect the marketplace can greatly determine whether or not the objectives (or goals) can be attained.
Objectives: Environmental Environmental factors include changes in everyday consumer life. Examples include changes in family lifestyles, advances in technology, and the way consumers use the Internet. Companies cannot directly control changes in the environment; however, they can create objectives or ways to market the product. If the company can expose the product in the right way, companies can convince the consumer that the product improves their environment and creates a service they believe they need. Objectives: Consumer Consumer factors are key to selling a product. A company is capable of taking their product and selling it to potential buyers only if the company understands its buyers. That is why companies must ask important questions such as: Who are potential customers? Where do they buy? When do they buy? How do they buy? What do they buy? Having a deeper understanding of these questions helps companies analyze their consumer and determine how best to approach them. Objectives: Product The company's product is something that the company already has a deep understanding of. What makes the product such an important factor is determining its purpose and value in the marketplace. The purpose of the product depends on the tasks it completes, how small or large it is, and its complexity, to name a few. Next, the value in the particular marketplace is important. For example, a product such as a scientific computer is expensive, which eliminates many consumers because not many want to pay for a scientific computer. On the other hand, pepper, an inexpensive commodity, attracts many more consumers since they use it in everyday life, so the consumer demographic is much larger. Objectives: Company Company factors are of highest importance. The company must understand its place in the marketplace and recognize its financial, human, and technological capabilities. The financial, human, and technological capabilities of a company determine how efficiently the company can execute its campaign. Once these factors are understood and recognized, the company can then create a successful campaign. Companies must also connect with other companies for the greatest effect in exposure, because branching out to make connections creates bonds and pathways for companies to extend into other markets in which they can receive more exposure. If done correctly, the exposure gained will increase sales of goods and services, which in turn opens up investment in these new markets and increases the return on investment. Strategizing: Once objectives are set, the company can begin strategizing how to successfully approach and execute their campaign. The basic principles of marketing strategy are simply stated: to achieve persistent success in the marketplace over the competition. With these basic principles, the company must recognize their competition, and strategize how they can be unique, while yielding positive results in the marketplace. To yield the best results in the marketplace requires two essential elements: the issue of the position, specifically within the 'strategic triangle' (the customers, competitors, and corporation), and of time (the analysis of the past and future). Using these principles and essential elements, companies must develop their campaign strategies.
The company must develop these strategies and then determine their rate of exposure, who they are exposing it to, and how they plan on presenting the information. These strategies embody a range of marketing techniques, from the campaign slogan to where advertising is placed. Strategizing: Many companies and brands use these five tactics in a brand's or product's marketing exposure: online advertising, for example any form of social media; product bundling, which pairs a product with other small, inexpensive items to create the impression of receiving more for the price; giving the product away to influencers or even regular people in order to spread awareness of the product; "buy one, get one free", a very popular yet rarely discussed strategy that encourages people to buy one product over others; and, last but not least, product testing, which encourages others to trust not only the product but the brand as well. The Portfolio: The general goal of the portfolio is to compile data to show customers or employers how successful or unsuccessful the campaign and exposure were. Since the global financial crisis, it has been crucial for companies to use portfolios. The marketing exposure portfolio holds all of the monetary information that assesses how the exposure is interacting with consumers in the marketplace, the amount of money being spent on the campaign, as well as the amount of returns the company is getting for the campaign from the consumers. This portfolio helps to determine the gross potential, and when the company can break even. After the campaign has ended, it also allows the company to assess how well their campaign worked and whether or not consumers embraced the company's product. After reviewing these numbers, companies can then assess the effectiveness of the campaign and whether the campaign needs to be changed for future events. By staying connected to the target market, companies can read its thought patterns and plan future implementations of the type of exposure that would yield a high profit with the least cost and use of resources.
**Dichlorprop** Dichlorprop: Dichlorprop is a chlorophenoxy herbicide similar in structure to 2,4-D that is used to kill annual and perennial broadleaf weeds. It is a component of many common weedkillers. About 4 million pounds of dichlorprop are used annually in the United States. Chemistry: Dichlorprop possesses a single asymmetric carbon and is therefore a chiral molecule, however only the R-isomer is active as an herbicide. When dichlorprop was first marketed in the 1960s, it was sold as racemic mixture of stereoisomers, but since then advances in asymmetric synthesis have made possible the production of the enantiopure compound. Today, only R-dichlorprop (also called dichlorprop-p or 2,4-DP-p) and its derivatives are sold as pesticides in the United States. Chemistry: Dichlorprop is a carboxylic acid, and like related herbicides with free acid groups, it is often sold as a salt or ester. Currently, the 2-ethylhexyl ester is used commercially. The butoxyethyl and isooctyl esters were once popular, but are no longer approved for agricultural use. For the salts, the dimethylamine salt is still available, while the diethanolamine salt is no longer used. Chemistry: According to the United States Environmental Protection Agency (EPA), "2,4-DP-p is thought to increase cell wall plasticity, biosynthesis of proteins, and the production of ethylene. The abnormal increase in these processes result in abnormal and excessive cell division and growth, damaging vascular tissue. The most susceptible tissues are those that are undergoing active cell division and growth." Health effects: The EPA rates the oral acute toxicity of dichlorprop as "slight" based on a rat LD50 of 537 mg/kg, and its derivatives are even less toxic. It is, however, considered to be a severe eye irritant. There has been concern that chlorophenoxy herbicides including dichlorprop may cause cancer, and in 1987 the International Agency for Research on Cancer (IARC) ranked this class of compounds as group 2B "possibly carcinogenic to humans". The EPA classifies the R-isomer as “Not Likely to be Carcinogenic to Humans.”
**Diamagnetic inequality** Diamagnetic inequality: In mathematics and physics, the diamagnetic inequality relates the Sobolev norm of the absolute value of a section of a line bundle to its covariant derivative. The diamagnetic inequality has an important physical interpretation, that a charged particle in a magnetic field has more energy in its ground state than it would in a vacuum. To precisely state the inequality, let $L^2(\mathbb{R}^n)$ denote the usual Hilbert space of square-integrable functions, and $H^1(\mathbb{R}^n)$ the Sobolev space of square-integrable functions with square-integrable derivatives. Diamagnetic inequality: Let $f, A_1, \dots, A_n$ be measurable functions on $\mathbb{R}^n$ and suppose that each $A_j \in L^2_{\mathrm{loc}}(\mathbb{R}^n)$ is real-valued, $f$ is complex-valued, and $f, (\partial_1 + iA_1)f, \dots, (\partial_n + iA_n)f \in L^2(\mathbb{R}^n)$. Then for almost every $x \in \mathbb{R}^n$, $$\bigl|\nabla |f|(x)\bigr| \le \bigl|(\nabla + iA)f(x)\bigr|.$$ In particular, $|f| \in H^1(\mathbb{R}^n)$. Proof: For this proof we follow Elliott H. Lieb and Michael Loss. From the assumptions, $\partial_j |f| \in L^1_{\mathrm{loc}}(\mathbb{R}^n)$ when viewed in the sense of distributions, and $$\partial_j |f|(x) = \operatorname{Re}\!\left(\frac{\overline{f(x)}}{|f(x)|}\,\partial_j f(x)\right)$$ for almost every $x$ such that $f(x) \neq 0$ (and $\partial_j |f|(x) = 0$ if $f(x) = 0$). Moreover, $$\operatorname{Re}\!\left(\frac{\overline{f(x)}}{|f(x)|}\, iA_j(x) f(x)\right) = \operatorname{Re}\bigl(iA_j(x)\,|f(x)|\bigr) = 0.$$ So $$\bigl|\partial_j |f|(x)\bigr| = \left|\operatorname{Re}\!\left(\frac{\overline{f(x)}}{|f(x)|}\,(\partial_j + iA_j)f(x)\right)\right| \le \bigl|(\partial_j + iA_j)f(x)\bigr|$$ for almost every $x$ such that $f(x) \neq 0$. The case that $f(x) = 0$ is similar. Application to line bundles: Let $p : L \to \mathbb{R}^n$ be a $U(1)$ line bundle, and let $A$ be a connection 1-form for $L$. In this situation, $A$ is real-valued, and the covariant derivative $D$ satisfies $D_j f = (\partial_j + iA_j)f$ for every section $f$. Here the $\partial_j$ are the components of the trivial connection for $L$. If $A_j \in L^2_{\mathrm{loc}}(\mathbb{R}^n)$ and $f, (\partial_1 + iA_1)f, \dots, (\partial_n + iA_n)f \in L^2(\mathbb{R}^n)$, then for almost every $x \in \mathbb{R}^n$, it follows from the diamagnetic inequality that $\bigl|\nabla |f|(x)\bigr| \le |Df(x)|$. The above case is of the most physical interest. We view $\mathbb{R}^n$ as Minkowski spacetime. Since the gauge group of electromagnetism is $U(1)$, connection 1-forms for $L$ are nothing more than the valid electromagnetic four-potentials on $\mathbb{R}^n$. If $F = dA$ is the electromagnetic tensor, then the massless Maxwell–Klein–Gordon system for a section $\phi$ of $L$ is $$\partial^\mu F_{\mu\nu} = \operatorname{Im}\bigl(\overline{\phi}\, D_\nu \phi\bigr), \qquad D^\mu D_\mu \phi = 0,$$ and the energy of this physical system is $$E(t) = \frac{1}{2}\int \bigl(|F(t)|^2 + |D\phi(t)|^2\bigr)\,dx.$$ The diamagnetic inequality guarantees that the energy is minimized in the absence of electromagnetism, thus $A = 0$.
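As a quick sanity check of the inequality (not part of the article's proof), the following sketch treats the special case of a nowhere-vanishing, smooth f written in polar form; it makes the pointwise bound, and the gauge at which it is saturated, explicit:

```latex
% Special case: f = g e^{i\theta} with g > 0 and g, \theta real and smooth.
% Then |f| = g, and
(\nabla + iA)f = e^{i\theta}\bigl(\nabla g + i(\nabla\theta + A)\,g\bigr),
% so, pointwise,
\bigl|(\nabla + iA)f\bigr|^{2}
  = |\nabla g|^{2} + |\nabla\theta + A|^{2} g^{2}
  \;\ge\; |\nabla g|^{2}
  = \bigl|\nabla |f|\bigr|^{2},
% with equality exactly when A = -\nabla\theta, i.e. when A is a pure gauge.
```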
**Red/black concept** Red/black concept: The red/black concept, sometimes called the red–black architecture or red/black engineering, refers to the careful segregation in cryptographic systems of signals that contain sensitive or classified plaintext information (red signals) from those that carry encrypted information, or ciphertext (black signals). Therefore, the red side is usually considered the internal side, and the black side the more public side, with often some sort of guard, firewall or data-diode between the two. Red/black concept: In NSA jargon, encryption devices are often called blackers, because they convert red signals to black. TEMPEST standards spelled out in Tempest/2-95 specify shielding or a minimum physical distance between wires or equipment carrying or processing red and black signals. Different organizations have differing requirements for the separation of red and black fiber optic cables. Red/black terminology is also applied to cryptographic keys. Black keys have themselves been encrypted with a "key encryption key" (KEK) and are therefore benign. Red keys are not encrypted and must be treated as highly sensitive material. Red/Gray/Black: The NSA's Commercial Solutions for Classified (CSfC) program, which uses two layers of independent, commercial off-the-shelf cryptographic products to protect classified information, includes a red/gray/black concept. In this extension of the red/black concept, the separated gray compartment handles data that has been encrypted only once, which happens at the red/gray boundary. The gray/black interface adds or removes a second layer of encryption.
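A minimal Python sketch of the separation idea (the class names and the toy cipher are invented for illustration; this does not describe any accredited red/black implementation): the type distinction makes the "blacker" the only path from the red side to the black side.

```python
# Illustrative sketch of red/black separation: red (plaintext) data may only reach
# the black side after passing through the "blacker" (the encryption device).
# Class names and the toy XOR cipher are invented; real systems use accredited
# hardware and vetted cryptography.

from dataclasses import dataclass
from itertools import cycle

@dataclass(frozen=True)
class RedData:       # sensitive plaintext; must stay on the red side
    plaintext: bytes

@dataclass(frozen=True)
class BlackData:     # ciphertext; safe to hand to the black (public) side
    ciphertext: bytes

def blacker(red: RedData, key: bytes) -> BlackData:
    """The only function allowed to turn RedData into BlackData."""
    ct = bytes(b ^ k for b, k in zip(red.plaintext, cycle(key)))
    return BlackData(ciphertext=ct)

def black_side_transmit(data: BlackData) -> None:
    # The black side accepts only BlackData, never RedData.
    print("transmitting", data.ciphertext.hex())

black_side_transmit(blacker(RedData(b"classified"), key=b"\x13\x37"))
```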
**Yergason's test** Yergason's test: Yergason's test is a special test used for orthopedic examination of the shoulder and upper arm region, specifically the biceps tendon. Purpose: It identifies the presence of a pathology involving the biceps tendon or glenoid labrum. The specific positive findings of the test include pain in the bicipital groove indicating biceps tendinitis, subluxation of the long head of the biceps brachii muscle, and presence of a SLAP tear. Procedure: The biceps tendon is palpated as it passes through the bicipital groove to identify any lesions, abnormal bumps, or abnormal movement (i.e., of the biceps tendon) in the involved area. Mechanism: To perform the test, the examiner must stand on the affected side of the patient. The patient needs to be in a seated position with the elbow flexed to 90°, forearm pronated (palm facing the ground), and the arm stabilized against the thorax. The examiner places the stabilizing hand on the proximal portion of the humerus near the bicipital groove, and the resistance hand on the distal forearm and wrist. The patient is instructed to actively supinate the forearm, externally rotate the humerus, and flex the elbow against the resistance of the examiner. Pain reported by the patient constitutes one of the positive findings. Mechanism: Modification involves the examiner resisting elbow flexion as the humerus moves into external rotation. Results: Biceps tendinitis or subluxation of the biceps tendon can normally be assessed by palpating the long head of the biceps tendon in the bicipital groove. The patient will exhibit a pain response, snapping, or both in the bicipital groove. Pain with no associated popping might indicate bicipital tendinopathy. A snapping indicates a tear or laxity of the transverse humeral ligament, which would prevent the ligament from securing the tendon in the groove. Pain at the superior glenohumeral joint is indicative of a SLAP tear. Adverse effects: This is a difficult test to perform for an accurate diagnosis. False positive findings can be the result of a rotator cuff tear, while pain in the superior glenohumeral region is a weak predictor of a SLAP tear.
**Harvest Moon 3D: A New Beginning** Harvest Moon 3D: A New Beginning: Harvest Moon 3D: A New Beginning (牧場物語はじまりの大地, Bokujō Monogatari: Hajimari no Daichi, lit. "Ranch Story: Land of the Beginning") is a game for the Nintendo 3DS released by Natsume. It is the last entry in the franchise released on Nintendo 3DS systems to receive the Harvest Moon title. Gameplay: The story involves reviving an abandoned town named Echo Village in order to allow the residents and animals to return. Gameplay: New features in the Harvest Moon series include extensive character customization, design of the protagonist's house and furniture, and the ability to customize the appearance of the village the game takes place in. The multiplayer mode is region-free, and players can bring their cows and furry animals like sheep and alpaca, and can milk or shear each other's animals. Sometimes a giant animal will spawn, which gives players five big products. Starting players can get a lot of money from collecting animal products in multiplayer, hence the good reception of the multiplayer feature. Players must bring a gift, which will be swapped randomly at the beginning of the session. Players can do multiplayer over a local connection or the Internet, and with "Anyone" or "Friends". Gameplay: There are twelve marriage candidates for the player to choose from, six women and six men. Each is unlocked at a different point during the game as the town is developed, and three, including the Witch Princess, Amir, and Sanjay, are not unlocked until the end of the game. Plot: The player (named Henry by default if male and Rachel by default if female) arrives in a town called Echo Village, where they meet Dunhill, the town's mayor. He reveals that the town has fallen into disarray and many villagers have moved away as a result. After Dunhill shows the player their farm, the player attempts to revive the village and construct buildings to motivate the villagers into coming back and convince new people to move in. Aiding the player are the Harvest Goddess and two Harvest Sprites: Aaron and Alice. Once the player is successful, a firework celebration is held to honor their success in restoring Echo Village. Development: Natsume, Inc. announced on May 29, 2012, that Harvest Moon 3D: A New Beginning would be released in North America. The game was released early by Natsume in North America and started shipping on October 19 instead of closer to its original street date, November 6. It was announced on June 5, 2013, that the game would be released in Europe by Marvelous AQL Europe during Q3 of 2013. A New Beginning is the first true 3DS Harvest Moon game, preceded by Harvest Moon: The Tale of Two Towns, which was developed for the DS and released alongside a port for the 3DS. A New Beginning introduces features to the series, including the ability to fully customize the player, the farm, and the town. Release: Special edition preorders included a stuffed cow doll, and regular version preorders included a yak doll. The publisher Natsume announced on October 17, 2012, that the game had gone gold and that there was "unprecedented" interest in the special 15th anniversary edition of the game. Reception: The game received above-average reviews according to the review aggregation website Metacritic. IGN cited the edit features, character customization, extensive tutorials, and a gradual beginning. In Japan, Famitsu gave it a score of 32 out of 40.
**Shinayakana Systems Approach** Shinayakana Systems Approach: The Shinayakana Systems Approach is a systems approach for "solving the complex systems with ill-defined structure" proposed by Sawaragi, Nakayama and Nakamori in 1987. This approach is interactive, intelligent and interdisciplinary, and emphasizes honesty, humanity and harmony. History: The Shinayakana Systems Approach was the first in a wave of approaches by East Asian systems scientists theorizing about their systems methodologies in the last decade of the twentieth century using the concepts of intuition and group collaboration, which has resulted in several new approaches to knowledge creation. The Name: "Shinayakana" is an adjective in Japanese; the closest English translation is "supple." The meaning is something between hard and soft. The Approach: The Shinayakana Systems Approach tried to resolve the controversy between hard and soft systems methodologies, using the Eastern philosophy of yin and yang. The approach does not specify an algorithmic recipe for knowledge and technology creation, only a set of principles for systemic problem solving: "Using intuition, keeping an open mind, trying diverse approaches and perspectives, being adaptive and ready to learn from mistakes, and being elastic like a willow but sharp as a sword - in short, Shinayakana." This approach is an interactive, intelligent and interdisciplinary (I3) systems approach emphasizing honesty, humanity and harmony (H3). The Approach: Interactive The Shinayakana Systems Approach limits the role of mathematical methods and models to that of problem-solving support only, because the authors believe that no model will ever incorporate all human concerns. The authors consider human-computer interaction to be essential: "Models should be built interactively, involving not only analysts but also domain experts and decision makers. Their perceptions of the problem, the relevant data and the model validity should be taken into account in model building so that the model can express their goals and preferences definitely and correctly. The interaction is essential at the decision stage as well, and it should be dynamical. The interaction should be designed carefully only to support the thinking process of decision makers; it should not be a set of leading questions." Intelligent The authors also believe that for the element of interaction to be used to its full potential, the support system is required to be intelligent. In other words, the system should have a base of knowledge in the area being considered. "Frameworks of dynamical knowledge utilization should be designed so that we can not only retrieve data or knowledge, but also acquire or modify them interactively. The mechanism of knowledge acquisition has two aspects: one is knowledge recognition from the knowledge base or decision support environment, the second is knowledge association by the communication with knowledge base systems." Interdisciplinary The third "I" is that the problem solving should be interdisciplinary, not limiting one's problem-solving group to one area of expertise, but using many different perspectives to reach a more holistic result. The Approach: The Three H's: Honesty, Humanity and Harmony In addition to the three I's, the approach uses three H's: "Honesty in modeling the reality. Humanity in designing support systems. Harmony of the research group." For more information The books and papers below go into more detail describing the Shinayakana Systems Approach. Sawaragi, Y.
and Nakamori, Y. (1989) "Shinayakana" Systems Approach in Developing an Urban Environment Simulator. IIASA Working Paper. IIASA, Laxenburg, Austria, WP-89-008. http://pure.iiasa.ac.at/3336/ Nakamori, Y. Knowledge and Systems Science: Enabling Systemic Knowledge Synthesis. Boca Raton: CRC Press, 2014. 39–40. Dolk, D., and Granat, J. Modeling for Decision Support in Network-Based Services: The Application of Quantitative Modeling to Service Science. Berlin: Springer, 2012. 264–70. i-System: The Shinayakana Systems Approach forms the basis of the i-System, which takes the I3 to I5 (intelligence, imagination, involvement, integration and intervention). "Further development of the Shinayakana Systems Approach was given in Nakamori (2000), in a systemic and process-like approach to knowledge creation called Knowledge Pentagram System or i-System. The five ontological elements (or subsystems) of this system are Intervention (and the will to solve problems), Intelligence (and existing scientific knowledge), Involvement (and social motivation), Imagination (and other aspects of creativity), and Integration (using systemic knowledge). True to the Shinayakana tradition, there is no algorithmic recipe for how to move between these ontological nodes: all transitions are equally advisable, according to individual needs. Thus, i-System stresses the need to move freely between diverse dimensions of creative space."
**Cardiolipin** Cardiolipin: Cardiolipin (IUPAC name 1,3-bis(sn-3’-phosphatidyl)-sn-glycerol, "sn" designating stereospecific numbering) is an important component of the inner mitochondrial membrane, where it constitutes about 20% of the total lipid composition. It can also be found in the membranes of most bacteria. The name "cardiolipin" is derived from the fact that it was first found in animal hearts. It was first isolated from the beef heart in the early 1940s by Mary C. Pangborn. In mammalian cells, but also in plant cells, cardiolipin (CL) is found almost exclusively in the inner mitochondrial membrane, where it is essential for the optimal function of numerous enzymes that are involved in mitochondrial energy metabolism. Structure: Cardiolipin (CL) is a kind of diphosphatidylglycerol lipid. Two phosphatidic acid moieties connect with a glycerol backbone in the center to form a dimeric structure. So it has four alkyl groups and potentially carries two negative charges. As there are four distinct alkyl chains in cardiolipin, the potential for complexity of this molecule species is enormous. However, in most animal tissues, cardiolipin contains 18-carbon fatty alkyl chains with 2 unsaturated bonds on each of them. It has been proposed that the (18:2)4 acyl chain configuration is an important structural requirement for the high affinity of CL to inner membrane proteins in mammalian mitochondria. However, studies with isolated enzyme preparations indicate that its importance may vary depending on the protein examined. Structure: Since there are two phosphates in the molecule, each of them can catch one proton. Although it has a symmetric structure, ionizing one phosphate happens at a very different levels of acidity than ionizing both: pK1 = 3 and pK2 > 7.5. So under normal physiological conditions (wherein pH is around 7), the molecule may carry only one negative charge. The hydroxyl groups (–OH and –O−) on phosphate would form a stable intramolecular hydrogen bond with the centered glycerol's hydroxyl group, thus forming a bicyclic resonance structure. This structure traps one proton, which is quite helpful for oxidative phosphorylation. Structure: As the head group forms such compact bicycle structure, the head group area is quite small relative to the big tail region consisting of 4 acyl chains. Based on this special structure, the fluorescent mitochondrial indicator, nonyl acridine orange (NAO) was introduced in 1982, and was later found to target mitochondria by binding to CL. NAO has a very large head and small tail structure which can compensate with cardiolipin's small head and large tail structure, and arrange in a highly ordered way. Several studies were published utilizing NAO both as a quantitative mitochondrial indicator and an indicator of CL content in mitochondria. However, NAO is influenced by membrane potential and/or the spatial arrangement of CL, so it's not proper to use NAO for CL or mitochondria quantitative studies of intact respiring mitochondria. But NAO still represents a simple method of assessing CL content. Structure: Methods to quantify and detect cardiolipin The detection, quantification, and localisation of CL species is a valuable tool to investigate mitochondrial dysfunction and the pathophysiological mechanisms underpinning several human disorders. CL is measured using liquid chromatography, usually combined with mass spectrometry, mass spectrometry imaging, shotgun lipidomics, ion mobility spectrometry, fluorometry, and radiolabelling. 
Therefore, the choice of the analytical method depends on the experimental question, level of detail, and sensitivity required. Metabolism and catabolism: Metabolism Eukaryotic pathway In eukaryotes such as yeasts, plants and animals, the synthesis processes are believed to happen in mitochondria. The first step is the acylation of glycerol-3-phosphate by a glycerol-3-phosphate acyltransferase. Then acylglycerol-3-phosphate can be once more acylated to form a phosphatidic acid (PA). With the help of the enzyme CDP-DAG synthase (CDS) (phosphatidate cytidylyltransferase), PA is converted into cytidinediphosphate-diacylglycerol (CDP-DAG). The following step is conversion of CDP-DAG to phosphatidylglycerol phosphate (PGP) by the enzyme PGP synthase, followed by dephosphorylation by PTPMT1 to form PG. Finally, a molecule of CDP-DAG is bound to PG to form one molecule of cardiolipin, catalyzed by the mitochondria-localized enzyme cardiolipin synthase (CLS). Metabolism and catabolism: Prokaryotic pathway In prokaryotes such as bacteria, diphosphatidylglycerol synthase catalyses a transfer of the phosphatidyl moiety of one phosphatidylglycerol to the free 3'-hydroxyl group of another, with the elimination of one molecule of glycerol, via the action of an enzyme related to phospholipase D. The enzyme can operate in reverse under some physiological conditions to remove cardiolipin. Catabolism Catabolism of cardiolipin may happen by the catalysis of phospholipase A2 (PLA) to remove fatty acyl groups. Phospholipase D (PLD) in the mitochondrion hydrolyses cardiolipin to phosphatidic acid. Functions: Regulates aggregate structures Because of cardiolipin's unique structure, a change in pH and the presence of divalent cations can induce a structural change. CL shows a great variety of forms of aggregates. It is found that in the presence of Ca2+ or other divalent cations, CL can be induced to have a lamellar-to-hexagonal (La-HII) phase transition. And it is believed to have a close connection with membrane fusion. Functions: Facilitates the quaternary structure The enzyme cytochrome c oxidase, also known as Complex IV, is a large transmembrane protein complex found in mitochondria and bacteria. It is the last enzyme in the respiratory electron transport chain located in the inner mitochondrial or bacterial membrane. It receives an electron from each of four cytochrome c molecules, and transfers them to one oxygen molecule, converting molecular oxygen to two molecules of water. Complex IV has been shown to require two associated CL molecules in order to maintain its full enzymatic function. Cytochrome bc1 (Complex III) also needs cardiolipin to maintain its quaternary structure and functional role. Complex V of the oxidative phosphorylation machinery also displays high binding affinity for CL, binding four molecules of CL per molecule of complex V. Functions: Triggers apoptosis Cardiolipin distribution to the outer mitochondrial membrane would lead to apoptosis of the cells, as evidenced by cytochrome c (cyt c) release, Caspase-8 activation, MOMP induction and NLRP3 inflammasome activation. During apoptosis, cyt c is released from the intermembrane spaces of mitochondria into the cytosol. Cyt c can then bind to the IP3 receptor on endoplasmic reticulum, stimulating calcium release, which then reacts back to cause the release of cyt c. When the calcium concentration reaches a toxic level, this causes cell death. 
Cytochrome c is thought to play a role in apoptosis via the release of apoptotic factors from the mitochondria. Functions: A cardiolipin-specific oxygenase produces CL hydroperoxides, which can result in a conformational change of the lipid. The oxidized CL transfers from the inner membrane to the outer membrane and then helps to form a permeable pore which releases cyt c. Functions: Serves as proton trap for oxidative phosphorylation During the oxidative phosphorylation process catalyzed by Complex IV, large quantities of protons are transferred from one side of the membrane to the other, causing a large pH change. CL is suggested to function as a proton trap within the mitochondrial membranes, thereby strictly localizing the proton pool and minimizing changes in pH in the mitochondrial intermembrane space. Functions: This function is due to CL's unique structure. As stated above, CL can trap a proton within the bicyclic structure while carrying a negative charge. Thus, this bicyclic structure can serve as a buffer that releases or absorbs protons to maintain the pH near the membranes. Other functions: cholesterol translocation from the outer to the inner mitochondrial membrane; activation of mitochondrial cholesterol side-chain cleavage; protein import into the mitochondrial matrix; anticoagulant function; modulation of α-synuclein (malfunction of this process is thought to be a cause of Parkinson's disease). Clinical significance: Increasing evidence links aberrant CL metabolism and content to human disease, including neurological disorders, cancer, and cardiovascular and metabolic disorders. As the number of human diseases with CL profile abnormalities has grown exponentially, the use of qualitative and quantitative diagnostics has emerged as a necessity. Clinical significance: Metabolic diseases Barth syndrome Barth syndrome is a rare genetic disorder, recognised in the 1970s as a cause of infantile death, caused by a mutation in the gene coding for tafazzin, an enzyme involved in the biosynthesis of cardiolipin. Tafazzin is an enzyme indispensable for cardiolipin synthesis in eukaryotes, involved in the remodeling of CL acyl chains by transferring linoleic acid from phosphatidylcholine (PC) to monolysocardiolipin. Mutation of tafazzin causes insufficient cardiolipin remodeling. However, it appears that cells compensate, and ATP production is similar to or higher than in normal cells. Females heterozygous for the trait are unaffected. Sufferers of this condition have abnormal mitochondria. Cardiomyopathy and general weakness are common in these patients. Clinical significance: Combined malonic and methylmalonic aciduria (CMAMMA) In the metabolic disease combined malonic and methylmalonic aciduria (CMAMMA) due to ACSF3 deficiency, the composition of complex lipids is altered as a result of impaired mitochondrial fatty acid synthesis (mtFAS); for example, the content of cardiolipins is strongly increased. Clinical significance: Tangier disease Tangier disease is also linked to CL abnormalities. Tangier disease is characterized by very low blood plasma levels of HDL cholesterol, accumulation of cholesteryl esters in tissues, and an increased risk of developing cardiovascular disease. Unlike Barth syndrome, Tangier disease is mainly caused by abnormally enhanced production of CL. Studies show a three- to fivefold increase in CL levels in Tangier disease. 
Increased CL levels enhance cholesterol oxidation, and the resulting formation of oxysterols consequently increases cholesterol efflux. This process could function as an escape mechanism to remove excess cholesterol from the cell. Clinical significance: Parkinson's disease and Alzheimer's disease Oxidative stress and lipid peroxidation are believed to be contributing factors leading to neuronal loss and mitochondrial dysfunction in the substantia nigra in Parkinson's disease, and may play an early role in the pathogenesis of Alzheimer's disease. It is reported that CL content in the brain decreases with aging, and a recent study on rat brain shows this results from lipid peroxidation in mitochondria exposed to free radical stress. Another study shows that the CL biosynthesis pathway may be selectively impaired, causing a 20% reduction in CL content and a change in its composition. This is also associated with a 15% reduction in linked complex I/III activity of the electron transport chain, which is thought to be a critical factor in the development of Parkinson's disease. Clinical significance: Nonalcoholic fatty liver disease and heart failure It has recently been reported that decreased CL levels and changes in acyl chain composition are also observed in the mitochondrial dysfunction associated with non-alcoholic fatty liver disease and heart failure. However, the role of CL in aging and ischemia/reperfusion is still controversial. Clinical significance: Diabetes Heart disease is twice as common in people with diabetes. In diabetics, cardiovascular complications occur at an earlier age and often result in premature death, making heart disease the major killer of diabetic people. Cardiolipin has been found to be deficient in the heart at the earliest stages of diabetes, possibly due to a lipid-digesting enzyme that becomes more active in diabetic heart muscle. Clinical significance: Syphilis Cardiolipin from bovine heart is used as an antigen in the Wassermann test for syphilis. Anti-cardiolipin antibodies can also be increased in numerous other conditions, including systemic lupus erythematosus, malaria and tuberculosis, so this test is not specific. Clinical significance: HIV-1 Human immunodeficiency virus-1 (HIV-1) has infected more than 60 million people worldwide. The HIV-1 envelope glycoprotein contains at least four sites for neutralizing antibodies. Among these sites, the membrane-proximal region (MPR) is particularly attractive as an antibody target because it facilitates viral entry into T cells and is highly conserved among viral strains. However, it has been found that two antibodies directed against the MPR, 2F5 and 4E10, react with self-antigens, including cardiolipin. Thus, it is difficult for such antibodies to be elicited by vaccination. Clinical significance: Cancer It was first proposed by Otto Heinrich Warburg that cancer originated from irreversible injury to mitochondrial respiration, but the structural basis for this injury has remained elusive. Since cardiolipin is an important phospholipid found almost exclusively in the inner mitochondrial membrane and is essential for maintaining mitochondrial function, it has been suggested that abnormalities in CL can impair mitochondrial function and bioenergetics. A study published in 2008 on mouse brain tumors, supporting Warburg's cancer theory, shows major abnormalities in CL content or composition in all tumors. 
Clinical significance: Antiphospholipid syndrome Patients with anti-cardiolipin antibodies (antiphospholipid syndrome) can have recurrent thrombotic events even as early as their mid- to late teens. These events can occur in vessels in which thrombosis may be relatively uncommon, such as the hepatic or renal veins. These antibodies are usually detected in young women with recurrent spontaneous abortions. In anti-cardiolipin-mediated autoimmune disease, recognition depends on apolipoprotein H. Additional anti-cardiolipin diseases Bartonella infection Bartonellosis is a serious chronic bacterial infection shared by both cats and humans. Spinella found that one patient with Bartonella henselae also had anti-cardiolipin antibodies, suggesting that Bartonella may trigger their production. Chronic fatigue syndrome Chronic fatigue syndrome is a debilitating illness of unknown cause that often follows an acute viral infection. According to one research study, 95% of CFS patients have anti-cardiolipin antibodies.
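The charge state described in the Structure section (pK1 = 3, pK2 > 7.5, hence roughly one negative charge at physiological pH) can be illustrated with a short Henderson–Hasselbalch calculation. This is a minimal sketch, not part of the original article; taking pK2 = 7.5, the lower bound quoted in the text, is an assumption.

```python
def deprotonated_fraction(pH: float, pKa: float) -> float:
    """Fraction of one acidic group that is deprotonated at a given pH
    (Henderson-Hasselbalch: [A-]/[HA] = 10**(pH - pKa))."""
    ratio = 10 ** (pH - pKa)
    return ratio / (1 + ratio)

pH = 7.0
pK1, pK2 = 3.0, 7.5   # pK2 taken as 7.5, the lower bound quoted in the text

f1 = deprotonated_fraction(pH, pK1)   # first phosphate: essentially fully ionized
f2 = deprotonated_fraction(pH, pK2)   # second phosphate: mostly protonated

print(f"first phosphate ionized:  {f1:.3f}")    # ~1.000
print(f"second phosphate ionized: {f2:.3f}")    # ~0.240
print(f"average net charge: {-(f1 + f2):.2f}")  # roughly -1, as stated in the text
```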
**Self-criticism (Marxism–Leninism)** Self-criticism (Marxism–Leninism): Self-criticism (Russian: Самокритика, Samokritika; Chinese: 自我批评, Zìwǒ pīpíng; Vietnamese: Tự phê bình) is a philosophical and political concept developed within the ideology of Marxism–Leninism, Stalinism, and Maoism. According to David Priestland, the concept of "criticism and self-criticism" developed within the Stalinist period of the Soviet Union as a way to publicly interrogate intellectuals who were suspected of possessing counter-revolutionary positions. The concept would be a major component of the political philosophy of Chinese Marxist leader Mao Zedong. Self-criticism (Marxism–Leninism): The concept of self-criticism is a component of some Marxist schools of thought, primarily that of Marxism–Leninism, Stalinism, Maoism and Marxism–Leninism–Maoism. The concept was first introduced by Joseph Stalin in his 1924 work The Foundations of Leninism and later expanded upon in his 1928 work Against Vulgarising the Slogan of Self-Criticism. The Marxist concept of self-criticism is also present in the works of Mao Zedong, who was heavily influenced by Stalin, dedicating an entire chapter of The Little Red Book to the issue. Accordingly, many party members who had fallen out of favor with the nomenklatura were forced to undergo self-criticism sessions, producing either written or verbal statements detailing their ideological errors and affirming their renewed belief in the party line. History: Soviet Union According to David Priestland, the concept of politically enforced "criticism and self-criticism" originated during the 1921–1924 purges of academia within the Soviet Union. This would eventually develop into the practise of "criticism and self-criticism" campaigns in which intellectuals suspected of possessing counter-revolutionary tendencies were publicly interrogated as part of a policy of "proletariatization." This policy would be expanded past academia into the economic spheres of Russia with managers and party-bosses coerced to undergo campaigns of popular criticism.Joseph Stalin introduced the concept of self-criticism in his 1924 work The Foundations of Leninism. He would later expand this concept in his 1928 article "Against Vulgarising the Slogan of Self-Criticism". Stalin wrote in 1928 "I think, comrades, that self-criticism is as necessary to us as air or water. I think that without it, without self-criticism, our Party could not make any headway, could not disclose our ulcers, could not eliminate our shortcomings. And shortcomings we have in plenty. That must be admitted frankly and honestly."However, Stalin posited that self-criticism "date[s] back to the first appearance of Bolshevism in our country". Stalin stated that self-criticism was needed even after obtaining power as failing to observe weaknesses "make things easier for their enemies" and that "without self-criticism there can be no proper education of the Party, the class, and the masses". Vladimir Lenin wrote in One Step Forward, Two Steps Back (1904) that the Russian Social Democratic Labour Party engages in "self-criticism and ruthless exposure of their own shortcomings". Lenin further discussed the idea in "Left-Wing" Communism: An Infantile Disorder (1920), "Frankly admitting a mistake, ascertaining the reasons for it, analysing the circumstances which gave rise to it, and thoroughly discussing the means of correcting it—that is the earmark of a serious party". 
Lenin elaborated further at a later date (1922) that "All the revolutionary parties that have perished so far, perished because they grew conceited, failed to see where their strength lay, and feared to speak of their weaknesses. But we shall not perish, for we do not fear to speak of our weaknesses and shall learn to overcome them". History: According to the official history of the October Revolution and Soviet Union produced under Stalin, The History of the Communist Party of the Soviet Union (Bolsheviks), the concept is described briefly in the twelfth chapter: In order to be fully prepared for this turn, the Party had to be its moving spirit, and the leading role of the Party in the forthcoming elections had to be fully ensured. But this could be done only if the Party organizations themselves became thoroughly democratic in their everyday work, only if they fully observed the principles of democratic centralism in their inner-Party life, as the Party Rules demanded, only if all organs of the Party were elected, only if criticism and self-criticism in the Party were developed to the full, only if the responsibility of the Party bodies to the members of the Party were complete, and if the members of the Party themselves became thoroughly active. History: Following the death of Joseph Stalin in 1953, his successor as Soviet leader, Nikita Khrushchev, reaffirmed the Communist Party of the Soviet Union's ideological dedication to the concepts of "criticism and self-criticism" in the conclusion to the 1956 speech before the 20th Party Congress, while also denouncing the policies and actions of Stalin. History: People's Republic of China Mao Zedong placed significant focus on the idea of self-criticism, dedicating a whole chapter of the Little Red Book to the issue. Mao saw "conscientious practice" of self-criticism as a quality that distinguished the Chinese Communist Party from other parties. Mao championed self-criticism saying "dust will accumulate if a room is not cleaned regularly, our faces will get dirty if they are not washed regularly. Our comrades' minds and our Party's work may also collect dust, and also need sweeping and washing." In the People's Republic of China, self-criticism—called ziwo pipan (自我批判) or jiǎntǎo (检讨)—is an important part of Maoist practice. Mandatory self-criticism as a part of political rehabilitation (common under Mao, ended by Deng Xiaoping, and partially revived by Xi Jinping) is known as a struggle session, in reference to class struggle. History: Vietnam Vietnamese leader Ho Chi Minh made numerous references to the importance of self-criticism within the Vietnamese Communist Party. History: Cambodia In Democratic Kampuchea, self-criticism sessions were known as rien sot, meaning "religious education". In his memoir The Gate, François Bizot recalls observing the Khmer Rouge engaging in frequent self-criticism to reinforce group cohesion during his imprisonment in rural Cambodia in 1971: Several evenings a week—every evening it didn't rain—the guards gathered for a collective confession. Douch (Kang Kek Iew) did not take part. I was a privileged witness to these circles, where they would sit on the ground under the direction of an elder. Military homilies alternated with simple, repetitive songs. "Comrades," began the eldest, "let us appraise the day that has passed, in order to correct our faults. We must cleanse ourselves of the repeated sins that accumulate and slow down our beloved revolution. Do not be surprised at this!" 
"I," said the first one, "should have replaced the rattan rod today, the one north of the first shelter, which we use to dry clothes. I have done nothing about it... on account of my laziness." The man presiding over the session nodded with a frown, though not severely, only meaning to show that he knew how hard it was to combat inertia, so natural in man when he is not sustained by revolutionary convictions. He passed wordlessly onto the next man, indicating who this should be by pursing his lips in his direction. History: North Korea North Koreans are required to engage in saenghwal ch’onghwa sessions in which they confess to wrongdoings, transgressions, and deviations from Kim Il Sung's Ten Principles for the Establishment of a Monolithic Ideological System. They are required to attend self-criticism sessions from the age of 8. Members of the ruling Korean Workers' Party can be dismissed if they do not attend sessions for longer than three months. Inmates at North Korean kwalliso camps are required to engage in self-criticism sessions, which often lead to harsh collective punishments for entire work-units. The practice was introduced in 1962 during a series of ideological disputes with the Soviet Union. History: Outside the Communist Bloc French Marxist philosopher Louis Althusser wrote "Essays in Self-Criticism" focused on the issue of ideologically correcting ideas expressed in his prior works, most prominently For Marx and Reading Capital. The American New Left revolutionary organization Weather Underground dedicated a chapter of their work Prairie Fire to self-criticism of their prior revolutionary strategies. Likewise, the German Red Army Faction discussed the issues of self criticism in their publication The Urban Guerrilla Concept.
**CP-GEP** CP-GEP: CP-GEP is a non-invasive prediction model for cutaneous melanoma patients that combines clinicopathologic (CP) variables with gene expression profiling (GEP). CP-GEP is able to identify cutaneous melanoma patients at low risk for nodal metastasis who may forgo the sentinel lymph node biopsy (SLNB) procedure. The CP-GEP model was developed by the Mayo Clinic and SkylineDx BV, and it has been clinically validated in multiple studies. Clinical relevance: The sentinel lymph node biopsy (SLNB) is the standard of care for detecting nodal metastases in cutaneous melanoma patients and has been the most informative prognostic factor to guide subsequent treatment. However, approximately 85% of patients undergoing this procedure have no evidence of nodal metastasis. These patients are exposed to the risk of surgical complications. Well-known complications of SLNB include seroma formation, infections, lymphedema and other comorbidities. Because the SLNB procedure is highly complex, involves multiple medical disciplines, and is difficult to standardize, the false-negative rate is relatively high at 15%. Likewise, SLNB results that show minimal tumor cell deposits are difficult to interpret and may falsely indicate high-risk disease. The use of CP-GEP is expected to reduce the number of negative, nontherapeutic SLNBs, as it has been specifically developed to identify and deselect patients with a low risk of nodal metastasis (below 10%). Per current clinical guidelines (NCCN, 2022), patients with a risk of nodal metastases below 10% may choose to forgo SLNB, whereas patients with a nodal metastasis risk greater than 10% are recommended to undergo SLNB surgery. A diagnostic tool (rule-out test) that deselects patients for SLNB is therefore likely to improve clinical care. Better patient selection for SLNB would increase the accuracy of the clinicopathological assessment and reduce exposure to unnecessary SLNB surgeries, thereby optimizing the allocation of healthcare resources. Moreover, initial studies have shown that the CP-GEP model may help predict the likelihood of melanoma recurrence. Model development: The CP-GEP model classifies patients as low or high risk for nodal metastasis based on patient age at melanoma biopsy (clinical factor), Breslow thickness (a pathological factor and well-established risk factor currently used in clinical practice for melanoma staging), and the expression of eight genes from the primary tumor. These eight genes are involved in biological processes such as fibrinolysis, angiogenesis, and epithelial-mesenchymal transition. The specific genes included in the CP-GEP model are MLANA, PLAT, ITGB3, SERPINE2, LOXL4, IL8, TGFBR1, and GDF15. Technical specifications: The sample type used is formalin-fixed paraffin-embedded (FFPE) tissue from the diagnostic biopsy of the primary melanoma. This material is collected via a shave/punch biopsy or full excision. A total of 50 microns of sections (e.g., five sections of 10 microns, or ten sections of 5 microns) is required for molecular analysis, and no macrodissection is needed for further processing. Gene expression data are obtained via quantitative PCR. The CP-GEP model is a logistic regression model (see the illustrative sketch at the end of this article). A repeated nested cross-validation scheme (double-loop cross-validation) was used to determine the performance of CP-GEP. Clinical practice and GEP testing: In current clinical care, most providers adhere to the NCCN guidelines when considering SLNB referral of newly diagnosed melanoma patients. 
Currently, these guidelines do not recommend the use of GEP testing in routine clinical practice, and state that it should not replace pathological staging procedures. However, they do acknowledge the important potential of GEP tools in clinical care, and emphasize that these tests should be evaluated more extensively in prospective studies with large contemporary datasets of unselected patients. Scientific consensus has been reached by Grossman and colleagues from the Melanoma Prevention Working Group regarding the use of GEP tools in clinical practice. These guidelines are regarded as a benchmark for the development of GEP-based risk-stratification tools in the melanoma field.
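As described under Technical specifications, CP-GEP is a logistic regression model over age, Breslow thickness, and eight qPCR gene-expression values. The sketch below is purely illustrative: the feature names mirror the article, but the synthetic data, coefficients, preprocessing, and the 10% cut-off applied here are assumptions, not the validated CP-GEP implementation.

```python
# Illustrative only: a toy logistic regression with the same kinds of inputs the
# article describes (age, Breslow thickness, eight gene expressions).
# Data and coefficients are synthetic; this is NOT the validated CP-GEP model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

GENES = ["MLANA", "PLAT", "ITGB3", "SERPINE2", "LOXL4", "IL8", "TGFBR1", "GDF15"]
FEATURES = ["age", "breslow_mm"] + GENES

rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(FEATURES)))                                # synthetic patients
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=200) > 0.8).astype(int)  # synthetic labels

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Predicted nodal-metastasis risk for one hypothetical patient; patients below an
# assumed 10% threshold would be candidates to forgo SLNB, following the guideline
# logic quoted in the article.
new_patient = rng.normal(size=(1, len(FEATURES)))
risk = model.predict_proba(new_patient)[0, 1]
print(f"predicted nodal-metastasis risk: {risk:.1%}",
      "-> may forgo SLNB" if risk < 0.10 else "-> SLNB recommended")
```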
**Monitor proofing** Monitor proofing: Monitor proofing or soft-proofing is a step in the prepress printing process. It uses specialized computer software and hardware to check the accuracy of text and images used for printed products. Monitor proofing differs from conventional forms of “hard-copy” or ink-on-paper color proofing in its use of a calibrated display(s) as the output device. Monitor proofing systems rely on calibration, profiling and color management to produce an accurate representation of how images will look when printed. While a “soft-proof” function has existed in desktop publishing applications for some time, commercial monitor proofing extends this capability to multiple users and multiple locations by specifying the hardware to be used, and by enforcing one set of calibration procedures and color management policies for all users of the system. This ensures that all viewers are calibrated to a known set of conditions and, given hardware of equal capabilities, will therefore be viewing the same color on screen. System Components: Monitor proofing systems consist of the following hardware and software components: Computer with calibration and profiling software Calibration and profiling software is often provided by, or bundled with, the monitor proofing application by the software vendor. Color management support for ICC profiles created by the monitor proofing system is available through the operating system on most Windows, Macintosh and Linux computers. System Components: Graphics monitor High-quality monitors are a key enabling technology for monitor proofing systems. The International Organization for Standardization (ISO) finalized the standards for color proofing on displays in 2004, and since this publication date manufacturers including Apple, EIZO and NEC have produced LCD displays used in monitor proofing systems. System Components: Calibration hardware and software A colorimeter or spectrophotometer is used in conjunction with special calibration software to adjust the primary RGB monitor gains, set the white point to the desired color temperature and, optionally, set the monitor luminance to a specified level. The calibration target for a monitor proofing system is typically D50 (5000 K) and should be at least 160 cd/m2 luminance, as specified in ISO 12646. System Components: Monitor Proofing Application Software Monitor proofing application software integrates the necessary color management tools with a viewing application containing markup, review and approval tools and some form of routing or collaboration. Proofing assets reside in a database and are made available for viewing over LAN or Internet connections via client-server connections. Third Party Certification: SWOP and Fogra offer independent third-party certifications to ensure that a monitor proofing system is capable of reproducing certain reference printing conditions tied to known and traceable standards. A monitor proof that is prepared in accordance with these certification programs can serve as a contract proof or legally binding agreement between the proof provider and customer.
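The calibration targets mentioned above (a D50 white point and a luminance of at least 160 cd/m2 per ISO 12646) lend themselves to a simple automated check. The snippet below is a hypothetical sketch, not part of any monitor-proofing product: the measured values and the white-point tolerance are invented for illustration.

```python
# Hypothetical post-calibration check against the ISO 12646 targets cited above.
# Measured values would normally come from a colorimeter/spectrophotometer reading.

TARGET_CCT_K = 5000        # D50 white point
CCT_TOLERANCE_K = 200      # assumed acceptance window, not taken from the standard
MIN_LUMINANCE_CD_M2 = 160  # minimum luminance named in ISO 12646

def check_calibration(measured_cct_k: float, measured_luminance: float) -> list[str]:
    """Return a list of human-readable problems; an empty list means the display passes."""
    problems = []
    if abs(measured_cct_k - TARGET_CCT_K) > CCT_TOLERANCE_K:
        problems.append(
            f"white point {measured_cct_k:.0f} K is outside "
            f"{TARGET_CCT_K} +/- {CCT_TOLERANCE_K} K")
    if measured_luminance < MIN_LUMINANCE_CD_M2:
        problems.append(
            f"luminance {measured_luminance:.0f} cd/m2 is below {MIN_LUMINANCE_CD_M2} cd/m2")
    return problems

# Example reading from a (hypothetical) instrument:
issues = check_calibration(measured_cct_k=5120, measured_luminance=148)
print("PASS" if not issues else "FAIL: " + "; ".join(issues))
```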
**Playback Theatre** Playback Theatre: Playback Theatre is an original form of improvisational theatre in which audience or group members tell stories from their lives and watch them enacted on the spot. History: The first Playback Theatre company was founded in 1975 by Jonathan Fox and Jo Salas. Fox was a student of improvisational theatre, oral traditional storytelling, Jacob Moreno's psychodrama method and the work of educator Paulo Freire. Salas was a trained musician and activist. Both had served as volunteers in developing countries: Fox as a Peace Corps volunteer in Nepal, Salas with New Zealand's Volunteer Service Abroad in Malaysia. History: The original Playback Theatre Company made its home in Dutchess and Ulster Counties of New York State, just north of New York City. This group, while developing the basis of the Playback form, took it to schools, prisons, centers for the elderly, conferences, and festivals in an effort to encourage individuals from all walks of society to let their stories be heard. They also performed monthly for the public-at-large. History: The Playback Theatre idea has inspired many people. As an immediate result of a teaching and performing tour by some of the members of the original Playback Theatre Company to Australasia in 1980, companies were founded in Sydney (1980), Melbourne (1981), Perth, and Wellington. All four companies still exist, and are now the oldest extant companies in the world. History: Since that time the form has spread throughout North America and Europe, and Playback companies now exist on six continents. The International Playback Theatre Network (IPTN) was founded in 1990 to support Playback activity throughout the world through international conferences and the IPTN Journal (formerly Interplay). As of 2018, the IPTN has 192 group members and 320 practitioner and individual members from 40 countries. A network was started in 2011 for people interested in Playback Theatre in North America. As of 2022, 55 active companies perform, predominantly in their local communities. Playback North America hosts regular teleconferences, periodic gatherings, leadership coaching, and several publications, including a 300-page training guide on artistic, business, and company development for Playback Theatre (see below). History: To meet the demand for training which this level of growth has created, in 1993 Jonathan Fox founded the School of Playback Theatre to provide beginning, intermediate and advanced levels of training in Playback Theatre. The School was renamed the Centre for Playback Theatre in 2006, expanding its focus to worldwide development of Playback Theatre. Graduates of the training program may become accredited trainers of Playback Theatre (APTTs). Other schools for training exist in Italy, Germany, Japan, São Paulo (Brazil), Russia, the United Kingdom, Israel, Hungary, Hong Kong, Australasia and Sweden. The Playback Centre keeps an online list of affiliated schools. Festivals and gatherings: There are regular and semi-regular Playback gatherings and festivals in different parts of the world, including in Finland, the UK, Italy, Germany, Eastern Europe, Israel, Hong Kong, Nepal and India. Playback North America, a network of playback companies in North America, has held several conferences. The International Playback Theatre Network (IPTN) holds a conference every four years in different parts of the world. 
IPTN conferences have taken place in Sydney, Australia (1992), in a village north of Helsinki, Finland (1993), in Olympia, Washington, USA (1995), Perth, Australia (1997), York, England (1999), Shizuoka, Japan (2003), São Paulo, Brazil (2007), Frankfurt, Germany (2011), Montreal, Canada (2015), and Bangalore, India (2019). The next international conference will take place in South Africa in December, 2023. Theatrical form: The Playback 'form' as developed by Fox and Salas utilizes component theatrical forms or pieces, developed from its sources in improvisational theatre, storytelling, and psychodrama. These components include scenes (also called stories or vignettes) and narrative or non-narrative short forms, including "fluid sculptures", "pairs", and "chorus". In a Playback event, someone in the audience tells a moment or story from their life, chooses the actors to play the different roles, and then all those present watch the enactment, as the story "comes to life" with artistic shape and nuance. Actors draw on non-naturalistic styles to convey meaning, such as metaphor or song. Theatrical form: Playback performers tend to specialize in one of several roles - conductor, actor, or musician. Some companies also have members who specialize in other roles, such as lighting. For audiences, the active performers can seem preternaturally gifted, as they create their performances without a script or score. Following the practice of the original company, most companies do not consult or "huddle" prior to beginning the story, trusting instead to a shared understanding of the story they have heard and a readiness to respond to each other's cues. Theatrical form: The role of conductor, by contrast, can seem relatively easy, involving as it does conversing with the audience as a group or individually, and generally involving no acting. However, it is recognized within the community of Playback performers as the most difficult role to fill successfully. Applications: Playback Theatre is used in a broad range of settings: theatres and community centres (where performances take place for the general public), in schools, private sector organizations, nonprofit organizations, prisons, hospice centers, day treatment centers, at conferences of all kinds, and colleges and universities. Playback theatre has also been used in the following fields: transitional justice, human rights, civic dialogue, refugees and immigrants, disaster recovery, climate change, birthdays, weddings, and conferences. Education: Playback practitioners have used the method in schools on issues such as bullying (students tell stories about their experiences in relation to bullying, watch them played back, and then explore ways to create a respectful and safe school environment). Playback is used both by classroom teachers and by visiting performers/leaders. Social change: Playback Theatre is used to provide a forum for the exchange of diverse experiences in such contexts as the aftermath of Hurricane Katrina; Martin Luther King Jr. Day celebrations examining on racial conflict and reconciliation; incarcerated men and women; immigrant and refugee organizations and their host communities; events honoring human rights. Other examples include: A project in Afghanistan trains victims of violence to enact each other's stories in the context of transitional justice. 
A project in Melbourne, Australia trains youth to enact stories of refugee youths' experiences in the context of interactions with police; and to enact stories of police experiences in the context of interactions with refugee youth. The purpose of which is to bridge understanding between these two groups (2010, 2011). Business: Since the mid-1990s Playback Theatre and allied techniques have increasingly been used as an effective tool in workplace training of subjects such as management and communication skills and diversity awareness. In some cases, participants describe events which have taken place in the workplace, often events which gave rise to conflict or difficult feelings. Playback actors "replay" the events described and the facilitator orchestrates discussion about the replay, from which many participants describe valuable learning outcomes. A workplace performance can also invite any kind of stories, from out of the work environment. Therapy: Although Playback Theatre is not primarily a therapeutic technique, it is adaptable for use by therapists who are also trained in Playback Theatre. Clients can gain insight, catharsis, connection, and self-expression through telling their stories and participating in enacting stories of others.
**Nephoscope** Nephoscope: A nephoscope is a 19th-century instrument for measuring the altitude, direction, and velocity of clouds, using transit-time measurement. This is different from a nephometer, which is an instrument used in measuring the amount of cloudiness. Description: A nephoscope emits a light ray, which strikes and reflects off the base of a targeted cloud. The distance to the cloud can be estimated using the delay between sending the light ray and receiving it back: distance = (speed of light × travel time) / 2 Mirror nephoscope: Developed by Carl Gottfrid Fineman, this instrument consists of a magnetic compass, the case of which is covered with a black mirror, around which is movable a circular metal frame. A little window in this mirror enables the observer to see the tip of the compass needle underneath. On the surface of the mirror are engraved three concentric circles and four diameters; one of the latter passes through the middle of the little window. The mirror constitutes a compass card, its radii corresponding to the cardinal points. On the movable frame surrounding the mirror is fixed a vertical pointer graduated in millimeters, which can be moved up and down by means of a rack and pinion. The whole apparatus is mounted on a tripod stand provided with leveling screws. Mirror nephoscope: To make an observation, the mirror is adjusted to the horizontal with the leveling-screws, and is oriented to the meridian by moving the whole apparatus until the compass needle is seen through the window, to lie in the north-south line of the mirror (making, however, allowance for the magnetic declination). The observer stands in such a position as to bring the image of any chosen part of a cloud at the center of the mirror. The vertical pointer is also adjusted by screwing it up or down and by rotating it around the mirror until its tip is reflected in the center of the mirror. As the image of the cloud moves toward the circumference of the mirror, the observer moves his head so as to keep the tip of the pointer and the cloud image in coincidence. The radius along which the image moves gives the direction of the cloud's movement, and the time required to pass from one circle to the next its relative speed, which may be reduced to certain arbitrary units. Mirror nephoscope: This instrument is, however, not very easy to use, and gives only moderately accurate measurements. Comb nephoscope: Developed by Louis Besson in 1912, this apparatus consists of a horizontal bar fitted with several equidistant spikes and mounted on the upper end of a vertical pole which can be rotated on its axis. When an observation is to be made, the observer places himself in such a position that the central spike is projected on any chosen part of a cloud. Then, without altering his position, he causes the "comb" to turn by means of two cords in such a manner that the cloud is seen to follow along the line of spikes. A graduated circle, turning with the vertical pole, gives the direction of the cloud's motion. Comb nephoscope: It is read with the aid of a fixed pointer. Moreover, when the apparatus is once oriented, the observer can determine the relative speed of the cloud by noting the time the latter requires to pass from one spike to the next. 
If the instrument stands on level ground, so that the observer's eye is always at the same height, and if the interval between two successive spikes is equal to one-tenth of their altitude above the eye-level of the observer, one need only multiply the time required for the cloud to pass over one interval by 10 to determine the time the cloud travels a horizontal distance equal to its altitude. Comb nephoscope: Besson revived an old method, invented by Bravais, for measuring the actual height of clouds. The apparatus in this case consists of a plate of glass having parallel faces, mounted on a graduated vertical circle which indicates its angle of inclination. A sheet of water, situated at a lower level, serves as a mirror to reflect the cloud. The water is contained in a reservoir of blackened cement surrounded by shrubbery, and is only a small fraction of an inch in depth, so that the wind may not disturb its level surface. Comb nephoscope: The observer, having mounted the glass plate on the horizontal axis of a theodolite set on a window-sill some 30 or 40 feet above the ground, places his eye close to it and adjusts its inclination so that the images of a cloud reflected in the plate and in the sheet of water coincide. Then from a curve traced once and for all on a sheet of plotting paper he reads off the altitude of the cloud corresponding to the observed angle on the glass plate. The curve is plotted from simple trigonometrical calculations. Comb nephoscope: At the Observatory of Montsouris, the degree of cloudiness, i.e., the amount of the whole sky covered with clouds at a given moment, is determined by means of the nephometer, also devised by Besson. This consists of a convex glass mirror, a segment of a sphere, about twelve inches in diameter, in which is seen the reflection of the celestial vault divided into ten sections of equal area by means of lines engraved on the glass. The meteorologist observes through an eyepiece fixed in an invariable position with respect to the mirror, which turns freely on a vertical axis. The observer, whose own image partly obstructs sections 8, 9, and 10, notes the degree of cloudiness in the sections numbered 1 to 7. The cloudiness of each section is estimated on a scale of 0 to 10: zero meaning no clouds and 10 meaning entirely overcast. The observer then rotates the mirror and eyepiece 180 degrees and observes the cloudiness in sections 7, 5, and 2, which represent the regions of the sky that at the first observation corresponded to sections 8, 9, and 10. Grid nephoscope: The grid nephoscope is a variation of the comb nephoscope, invented in Norway. Russian nephoscope: Mikhail Pomortsev invented a nephoscope in Russia in 1894.
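The two simple calculations described above (the time-of-flight distance estimate and the comb nephoscope rule of multiplying the interval time by 10) can be written out explicitly. This is an illustrative sketch with invented sample numbers, not measurements from the source.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def cloud_distance_from_round_trip(travel_time_s: float) -> float:
    """Distance to the cloud base from a round-trip light travel time:
    distance = (speed of light * travel time) / 2."""
    return SPEED_OF_LIGHT_M_S * travel_time_s / 2

def comb_nephoscope_speed(interval_time_s: float, cloud_altitude_m: float) -> float:
    """Comb nephoscope rule: if the spike spacing is one-tenth of the spikes'
    height above the observer's eye, the cloud covers a horizontal distance equal
    to its altitude in 10 * interval_time, so speed = altitude / (10 * interval_time)."""
    return cloud_altitude_m / (10 * interval_time_s)

# Invented sample values for illustration:
print(f"{cloud_distance_from_round_trip(10e-6):.0f} m")   # 10 microsecond round trip ~ 1499 m
print(f"{comb_nephoscope_speed(12.0, 1500.0):.1f} m/s")    # 12 s per spike interval, 1500 m cloud
```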
**B-ration** B-ration: The B-ration (officially Field Ration, Type B) was a United States military ration consisting of packaged and preserved food intended to be prepared in field kitchens by cooks. Its modern successor is the Unitized Group Ration – M (UGR-M), which combines multiple types of rations, including the B-ration, under one unified system. The B-ration differs from other American alphabetized rations such as the A-ration, consisting of fresh food; C-ration, consisting of prepared wet food when A- and B-rations were not available; D-ration, consisting of military chocolate; K-ration, consisting of three balanced meals; and emergency rations, intended for emergencies when other food or rations are unavailable. Overview: Field rations such as the A-ration, B-ration, and emergency rations consisted of food items issued to troops operating in the field. Like the A-ration, the B-ration required the use of trained cooks and a field kitchen for preparation; however, it consisted entirely of semi-perishable foods and so did not require refrigeration equipment. As of 1982, the B-ration consisted of approximately 100 items which were issued in bulk and packaged in cans, cartons, pouches, and other packing material. An individual ration had a gross weight of 3.639 pounds, measured 0.1173 cubic feet, and could supply approximately 4,000 calories. B-rations were organized into a ten-day menu cycle which ensured a variety of different meals each day and could be altered as the service needed. The advantage of the B-ration was that it provided balanced nutrition in all climates and individual components could be easily substituted with fresh foods when they became available, a practice highly encouraged to avoid food monotony. However, the meals could not be made without trained cooks and required significant investment. Preparing a meal for 100 personnel using B-rations required two to three hours of work by two cooks (plus additional personnel to help with serving and clean-up) and, on average, 75 gallons of potable water. Unitized Group Ration M: The modern equivalent to the B-ration is the Unitized Group Ration – M, formerly called the Unitized Group Ration – B. It is distinct from other forms of UGR, such as the UGR-H&S, in that it consists of dehydrated ingredients with an intended recipe in mind, as opposed to precooked or preassembled meals. Unlike the B-ration, the UGR-M is only issued to the United States Marine Corps.
**Mapping class group** Mapping class group: In mathematics, in the subfield of geometric topology, the mapping class group is an important algebraic invariant of a topological space. Briefly, the mapping class group is a certain discrete group corresponding to symmetries of the space. Motivation: Consider a topological space, that is, a space with some notion of closeness between points in the space. We can consider the set of homeomorphisms from the space into itself, that is, continuous maps with continuous inverses: functions which stretch and deform the space continuously without breaking or gluing the space. This set of homeomorphisms can be thought of as a space itself. It forms a group under functional composition. We can also define a topology on this new space of homeomorphisms. The open sets of this new function space will be made up of sets of functions that map compact subsets K into open subsets U as K and U range throughout our original topological space, completed with their finite intersections (which must be open by definition of topology) and arbitrary unions (again which must be open). This gives a notion of continuity on the space of functions, so that we can consider continuous deformation of the homeomorphisms themselves: called homotopies. We define the mapping class group by taking homotopy classes of homeomorphisms, and inducing the group structure from the functional composition group structure already present on the space of homeomorphisms. Definition: The term mapping class group has a flexible usage. Most often it is used in the context of a manifold M. The mapping class group of M is interpreted as the group of isotopy classes of automorphisms of M. So if M is a topological manifold, the mapping class group is the group of isotopy classes of homeomorphisms of M. If M is a smooth manifold, the mapping class group is the group of isotopy classes of diffeomorphisms of M. Whenever the group of automorphisms of an object X has a natural topology, the mapping class group of X is defined as Aut(X)/Aut0(X), where Aut0(X) is the path-component of the identity in Aut(X). (Notice that in the compact-open topology, path components and isotopy classes coincide, i.e., two maps f and g are in the same path-component iff they are isotopic). For topological spaces, this is usually the compact-open topology. In the low-dimensional topology literature, the mapping class group of X is usually denoted MCG(X), although it is also frequently denoted π0(Aut(X)), where one substitutes for Aut the appropriate group for the category to which X belongs. Here π0 denotes the 0-th homotopy group of a space. Definition: So in general, there is a short exact sequence of groups: 1 → Aut0(X) → Aut(X) → MCG(X) → 1. Frequently this sequence is not split. If working in the homotopy category, the mapping class group of X is the group of homotopy classes of homotopy equivalences of X. Definition: There are many subgroups of mapping class groups that are frequently studied. If M is an oriented manifold, Aut(M) would be the orientation-preserving automorphisms of M, and so the mapping class group of M (as an oriented manifold) would be of index two in the mapping class group of M (as an unoriented manifold), provided M admits an orientation-reversing automorphism. Similarly, the subgroup that acts as the identity on all the homology groups of M is called the Torelli group of M. Examples: Sphere In any category (smooth, PL, topological, homotopy) MCG(S2) ≅ Z/2Z, corresponding to maps of degree ±1. 
Torus In the homotopy category MCG(Tn) ≅ GL(n, Z). This is because the n-dimensional torus Tn = (S1)n is an Eilenberg–MacLane space. For other categories, if n ≥ 5, one has split exact sequences of the form 0 → K → MCG(Tn) → GL(n, Z) → 0, where the kernel K depends on the category: in the category of topological spaces and in the PL category, K is built from direct sums of copies of Z2, the group of order 2 (⊕ representing direct sum); in the smooth category, K also involves the Kervaire–Milnor finite abelian groups Γi of homotopy spheres. Examples: Surfaces The mapping class groups of surfaces have been heavily studied, and are sometimes called Teichmüller modular groups (note the special case of MCG(T2) above), since they act on Teichmüller space and the quotient is the moduli space of Riemann surfaces homeomorphic to the surface. These groups exhibit features similar both to hyperbolic groups and to higher rank linear groups. They have many applications in Thurston's theory of geometric three-manifolds (for example, to surface bundles). The elements of this group have also been studied by themselves: an important result is the Nielsen–Thurston classification theorem, and a generating family for the group is given by Dehn twists, which are in a sense the "simplest" mapping classes. Every finite group is a subgroup of the mapping class group of a closed, orientable surface; in fact one can realize any finite group as the group of isometries of some compact Riemann surface (which immediately implies that it injects into the mapping class group of the underlying topological surface). Examples: Non-orientable surfaces Some non-orientable surfaces have mapping class groups with simple presentations. For example, every homeomorphism of the real projective plane P2(R) is isotopic to the identity, so MCG(P2(R)) = 1. The mapping class group of the Klein bottle K is MCG(K) = Z2 ⊕ Z2. The four elements are the identity, a Dehn twist on a two-sided curve which does not bound a Möbius strip, the y-homeomorphism of Lickorish, and the product of the twist and the y-homeomorphism. It is a nice exercise to show that the square of the Dehn twist is isotopic to the identity. We also remark that the closed genus three non-orientable surface N3 (the connected sum of three projective planes) has MCG(N3) ≅ GL(2, Z). This is because the surface N3 has a unique class of one-sided curves such that, when N3 is cut open along such a curve C, the resulting surface N3∖C is a torus with a disk removed. As an unoriented surface, its mapping class group is GL(2, Z) (Lemma 2.1). 3-Manifolds Mapping class groups of 3-manifolds have received considerable study as well, and are closely related to mapping class groups of 2-manifolds. For example, any finite group can be realized as the mapping class group (and also the isometry group) of a compact hyperbolic 3-manifold. Mapping class groups of pairs: Given a pair of spaces (X, A), the mapping class group of the pair is the group of isotopy classes of automorphisms of the pair, where an automorphism of (X, A) is defined as an automorphism of X that preserves A, i.e. f: X → X is invertible and f(A) = A. Mapping class groups of pairs: Symmetry group of knot and links If K ⊂ S3 is a knot or a link, the symmetry group of the knot (resp. link) is defined to be the mapping class group of the pair (S3, K). The symmetry group of a hyperbolic knot is known to be dihedral or cyclic; moreover, every dihedral and cyclic group can be realized as the symmetry group of some knot. The symmetry group of a torus knot is known to be of order two, Z2. 
Torelli group: Notice that there is an induced action of the mapping class group on the homology (and cohomology) of the space X. This is because (co)homology is functorial and Homeo0 acts trivially (because all elements are isotopic, hence homotopic to the identity, which acts trivially, and action on (co)homology is invariant under homotopy). The kernel of this action is the Torelli group, named after the Torelli theorem. Torelli group: In the case of orientable surfaces, this is the action on first cohomology H1(Σ) ≅ Z2g. Orientation-preserving maps are precisely those that act trivially on top cohomology H2(Σ) ≅ Z. H1(Σ) has a symplectic structure, coming from the cup product; since these maps are automorphisms, and maps preserve the cup product, the mapping class group acts as symplectic automorphisms, and indeed all symplectic automorphisms are realized, yielding the short exact sequence 1 → Tor(Σ) → MCG(Σ) → Sp(2g, Z) → 1. One can extend this to 1 → Tor(Σ) → MCG±(Σ) → Sp±(2g, Z) → 1. The symplectic group is well understood. Hence understanding the algebraic structure of the mapping class group often reduces to questions about the Torelli group. Torelli group: Note that for the torus (genus 1) the map to the symplectic group is an isomorphism, and the Torelli group vanishes. Stable mapping class group: One can embed the surface Σg,1 of genus g and 1 boundary component into Σg+1,1 by attaching an additional hole on the end (i.e., gluing together Σg,1 and Σ1,2), and thus automorphisms of the small surface fixing the boundary extend to the larger surface. Taking the direct limit of these groups and inclusions yields the stable mapping class group, whose rational cohomology ring was conjectured by David Mumford (one of the conjectures called the Mumford conjectures). The integral (not just rational) cohomology ring was computed in 2002 by Ib Madsen and Michael Weiss, proving Mumford's conjecture.
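The exact sequences reconstructed above can be collected in standard notation. These are standard facts restated for readability, with MCG± denoting the extended mapping class group (orientation-reversing maps allowed) and T(Σg) the Torelli group of a closed orientable genus-g surface:

```latex
% Defining quotient and exact sequence for a space X with automorphism group Aut(X):
\mathrm{MCG}(X) \;=\; \pi_0\bigl(\mathrm{Aut}(X)\bigr) \;=\; \mathrm{Aut}(X)/\mathrm{Aut}_0(X),
\qquad
1 \to \mathrm{Aut}_0(X) \to \mathrm{Aut}(X) \to \mathrm{MCG}(X) \to 1 .

% Action on the first (co)homology of \Sigma_g and the resulting Torelli sequences:
1 \to \mathcal{T}(\Sigma_g) \to \mathrm{MCG}(\Sigma_g) \to \mathrm{Sp}(2g,\mathbb{Z}) \to 1,
\qquad
1 \to \mathcal{T}(\Sigma_g) \to \mathrm{MCG}^{\pm}(\Sigma_g) \to \mathrm{Sp}^{\pm}(2g,\mathbb{Z}) \to 1 .
```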
**Calone** Calone: Calone or methylbenzodioxepinone, trade-named Calone 1951, also known in the industry as "watermelon ketone", was discovered by Pfizer in 1966. It is used to give the olfactory impression of a fresh seashore through the marine and ozone nuances. Calone is similar in structure to brown algae pheromones like ectocarpene and is also distantly related in structure to the benzodiazepine class of sedatives. Calone is an unusual chemical compound which has an intense "sea-breeze" note with slight floral and fruit overtones. It has been used as a scent component since the 1980s for its watery, fresh, ozone accords, and as a more dominant note in several perfumes of the marine trend, beginning in the 1990s. In 2014, Plummer et al. reported the synthesis and fragrance properties of several related aliphatic analogues.
**Destination dispatch** Destination dispatch: Destination dispatch is an optimization technique used for multi-elevator installations, in which groups of passengers heading to the same destinations use the same elevators, thereby reducing waiting and travel times. Comparatively, the traditional approach is for all passengers wishing to ascend or descend to enter any available lift and then request their destination. Using destination dispatch, passengers request travel to a particular floor using a keypad, touch screen, or proximity-card room key in the lobby, before boarding, and are immediately directed to an appropriate elevator car. Algorithms: Based on information about the trips that passengers wish to make, the controller will dynamically allocate individuals to elevators to avoid excessive intermediate stops. Overall trip times can be reduced by 25%, with capacity up by 30%. Controllers can also offer different levels of service to passengers based on information contained in key-cards. A high-privilege user may be allocated the nearest available elevator and always be guaranteed a direct service to their floor, and may be allocated an elevator with exclusive use; other users, such as handicapped people, may be provided with accessibility features such as extended door-opening times. Limitations: The smooth operation of a destination dispatch system depends upon each passenger indicating their destination intention separately. In most cases, the elevator system has no way of differentiating a group of passengers from a single passenger if the group's destination is only keyed in a single time. This could potentially lead to an elevator stopping to pick up more passengers than the elevator actually has capacity for, creating delays for other users. This situation is handled by two solutions: a load-vane sensor on the elevator, or a group function button on the keypad. The load vane tells the elevator controller that there is a high load in the elevator car, so that the elevator does not stop at other floors until the load is low enough to pick up more passengers. The group function button asks how many passengers are going to a floor, and the system then sends the correct number of elevators to that floor, if available.
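The allocation idea described under Algorithms can be illustrated with a deliberately simplified sketch: passengers announce their destination floor in the lobby and are grouped onto cars so that each car serves as few distinct floors as possible. Real controllers also weigh waiting time, travel time, car position, and load; the greedy grouping and the per-car capacity below are assumptions made purely for illustration.

```python
from collections import defaultdict

CAR_CAPACITY = 8  # assumed per-car capacity, not taken from the article

def assign_cars(requests: list[int], num_cars: int) -> dict[int, list[int]]:
    """Greedy destination-dispatch sketch: group passengers by requested floor,
    then fill cars floor by floor so each car stops at as few floors as possible.
    `requests` holds one destination floor per passenger; returns car -> destinations."""
    by_floor = defaultdict(int)
    for floor in requests:
        by_floor[floor] += 1

    assignments = {car: [] for car in range(num_cars)}
    loads = {car: 0 for car in range(num_cars)}
    car = 0
    for floor, count in sorted(by_floor.items()):
        remaining = count
        while remaining > 0:
            if all(loads[c] >= CAR_CAPACITY for c in loads):
                raise ValueError("more passengers than total car capacity")
            if loads[car] >= CAR_CAPACITY:
                car = (car + 1) % num_cars
                continue
            take = min(remaining, CAR_CAPACITY - loads[car])
            assignments[car].extend([floor] * take)
            loads[car] += take
            remaining -= take
    return assignments

# Twelve lobby passengers keying in destinations on the keypad:
print(assign_cars([5, 5, 5, 9, 9, 12, 12, 12, 12, 3, 3, 5], num_cars=2))
```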
**Quantum solvent** Quantum solvent: A quantum solvent is essentially a superfluid (also known as a quantum liquid) used to dissolve another chemical species. Any superfluid can theoretically act as a quantum solvent, but in practice the only viable superfluid medium currently usable is helium-4, and solvation in it has been successfully accomplished under controlled conditions. Such solvents are currently under investigation for use in spectroscopic techniques in the field of analytical chemistry, due to their superior kinetic properties. Quantum solvent: Any matter dissolved (or otherwise suspended) in the superfluid will tend to aggregate together in clumps, encapsulated by a 'quantum solvation shell'. Due to the totally frictionless nature of the superfluid medium, the entire object then proceeds to act very much like a nanoscopic ball bearing, allowing effectively complete rotational freedom of the solvated chemical species. A quantum solvation shell consists of a region of non-superfluid helium-4 atoms that surround the molecule(s) and exhibit adiabatic following around the centre of gravity of the solute. As such, the kinetics of an effectively gaseous molecule can be studied without the need to use an actual gas (which can be impractical or impossible). It is necessary to make a small alteration to the rotational constant of the chemical species being examined, in order to compensate for the higher mass entailed by the quantum solvation shell. Quantum solvation has so far been achieved with a number of organic, inorganic and organometallic compounds, and it has been speculated that, as well as the obvious use in the field of spectroscopy, quantum solvents could be used as tools in nanoscale chemical engineering, perhaps to manufacture components for use in nanotechnology.
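The "alteration to the rotational constant" mentioned above follows from the fact that helium atoms adiabatically following the rotating solute add to its effective moment of inertia, and the rotational constant scales inversely with that inertia (B = h / (8 π² c I) in wavenumber units). The snippet below is a generic illustration with invented numbers, not data from any particular experiment.

```python
# Effective rotational constant of a solute whose rotation drags part of its
# helium solvation shell along (adiabatic following). B is inversely
# proportional to the moment of inertia, so the extra inertia lowers B.
import math

H = 6.626_070_15e-34   # Planck constant, J s
C = 2.997_924_58e10    # speed of light, cm/s

def rotational_constant_cm1(moment_of_inertia_kg_m2: float) -> float:
    """B = h / (8 * pi^2 * c * I), expressed in cm^-1."""
    return H / (8 * math.pi**2 * C * moment_of_inertia_kg_m2)

I_gas = 1.0e-46     # invented gas-phase moment of inertia, kg m^2
I_shell = 2.0e-46   # invented extra inertia contributed by the helium shell

B_gas = rotational_constant_cm1(I_gas)
B_solvated = rotational_constant_cm1(I_gas + I_shell)

print(f"gas-phase B: {B_gas:.3f} cm^-1")
print(f"solvated  B: {B_solvated:.3f} cm^-1 (reduced by the solvation shell)")
```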
**Windows Hardware Engineering Conference** Windows Hardware Engineering Conference: The Windows Hardware Engineering Community (WinHEC) is a series of technical conferences and workshops where Microsoft elaborates on its hardware plans for Windows devices. From 1992 to 2008, WinHEC, which then stood for Windows Hardware Engineering Conference, was an annual software and hardware developer-oriented trade show and business conference, where Microsoft elaborated on its hardware plans for Microsoft Windows-compatible PCs. Between 2008 and 2015, WinHEC was replaced in Microsoft's schedule by the Professional Developers Conference, which was later merged into the Build conference. Windows Hardware Engineering Conference: On September 26, 2014, Microsoft announced that WinHEC would be returning in 2015 in the form of multiple conferences held throughout the year. The first conference was to be held in Shenzhen, China, on March 18–19. The industry had changed significantly since Microsoft's prior WinHEC event, with innovation happening at a much quicker pace and across more geographically diverse locations. Because of that, Microsoft evolved WinHEC to be more than a single annual conference: WinHEC was to consist of technical conferences and smaller, more frequent, topic-focused workshops local to the hardware ecosystem hubs. The WinHEC acronym also changed its meaning to "Windows Hardware Engineering Community". Windows Hardware Engineering Conference: On December 17, 2014, Microsoft announced that registration was open for the first of its re-launched WinHEC summits, taking place March 18–19, 2015 in Shenzhen, China. The company also announced that Terry Myerson, Executive Vice President of the Operating Systems Group, would keynote the event. He would discuss advancements in the Windows platform that make it easier for companies to build devices powered by Windows, as well as Microsoft’s growing investments in the Shenzhen and China ecosystem. Audience: WinHEC stays true to its strong technical roots, with an agenda of executive keynotes, deep technical training sessions, hands-on labs, and opportunities for Q&A on topics across the spectrum of Windows-based hardware. It is aimed at executives, engineering managers, engineers, and technical product managers at OEMs, ODMs, IHVs, and IDHs who are working with, or want to work with, Windows technologies. Events: 1992 – San Francisco, California. March 1–3, 1992. 1993 – San Jose, California. March 1–3, 1993. 1994 – San Francisco, California. February 23–25, 1994. 1995 – San Francisco, California. March 20–22, 1995. 1996 – San Jose, California. April 1–3, 1996. 1997 – San Francisco, California. April 8–10, 1997. 1998 – Orlando, Florida. March 25–27, 1998. 1999 – Los Angeles, California. April 7–9, 1999. 2000 – New Orleans, Louisiana. April 25–27, 2000. 2001 – Anaheim, California. March 26–28, 2001. Announcement of the availability of Windows XP Beta 2, which included the first public beta of Internet Explorer 6. 2002 – Seattle, Washington. April 16–18, 2002. 2003 – New Orleans, Louisiana. May 6–8, 2003. Bill Gates keynote; demonstrated the "Athens" PC concept, discussed 64-bit computing and uptake of Windows XP; initial Windows Longhorn demonstrations and discussions, focusing on a new Desktop Composition Engine (which later became known as the Desktop Window Manager). 2004 – Seattle, Washington. May 4–7, 2004. Discussion of the Longhorn release timeline and upcoming service packs for Windows XP and Windows Server 2003; updated Athens concept PC design, named "Troy", based on a Longhorn user interface. 2005 – Washington State Convention and Trade Center, Seattle, Washington. April 25–27, 2005. Bill Gates gave a keynote speech on various topics, including Windows "Longhorn" (later known as Windows Vista) and 64-bit computing. 2006 – Washington State Convention and Trade Center, Seattle, Washington. May 23–25, 2006. Attendance of more than 3,700. Microsoft announced the release of beta 2 of Windows Vista, Windows Server "Longhorn" and Microsoft Office 2007. The Free Software Foundation staged a protest outside the venue, wearing yellow hazmat suits and handing out pamphlets claiming that Microsoft products are "Defective by Design" because of the Digital Rights Management technologies included in them. 2007 – Los Angeles Convention Center, Los Angeles, California. May 15–17, 2007. 2008 – Los Angeles Convention Center, Los Angeles, California. November 4–6, 2008. Held immediately following PDC 2008 at the same venue (October 27–30); focused on the then-upcoming Windows 7. 2015 – Grand Hyatt Shenzhen Hotel, Shenzhen, China. March 18–19, 2015. Microsoft released the source of the Windows Driver Frameworks; focused on Windows 10.
**Lan blood group system** Lan blood group system: The Lan blood group system (short for Langereis) is a human blood group defined by the presence or absence of the Lan antigen on a person's red blood cells. More than 99.9% of people are positive for the Lan antigen. Individuals with the rare Lan-negative blood type, which is a recessive trait, can produce an anti-Lan antibody when exposed to Lan-positive blood. Anti-Lan antibodies may cause transfusion reactions on subsequent exposures to Lan-positive blood, and have also been implicated in mild cases of hemolytic disease of the newborn. However, the clinical significance of the antibody is variable. The antigen was first described in 1961, and Lan was officially designated a blood group in 2012. Molecular biology: The Lan antigen is carried on the protein ABCB6, an ATP-binding cassette transporter encoded by the ABCB6 gene on chromosome 2q36. The Lan-negative blood type is inherited in an autosomal recessive manner, being expressed by individuals who are homozygous for nonfunctional alleles of ABCB6. Some variant alleles cause a weak positive phenotype, which may be mistaken for a Lan-negative phenotype in serologic testing. As of 2018, more than 40 null or weak alleles of ABCB6 have been described.ABCB6 is involved in heme synthesis and porphyrin transport and is widely expressed throughout the body, particularly in the heart, skeletal muscle, eye, fetal liver, mitochondrial membrane, and Golgi bodies.: 220  The Lan antigen is more strongly expressed on cord blood cells than on adult red blood cells.: 490  Despite the protein's wide distribution, Lan-negative individuals do not appear to experience any adverse effects from the absence of ABCB6. It is thought that other porphyrin transporters, such as ABCG2 (which carries the Junior blood group antigen), may compensate.: 220  A 2018 study found that Lan-negative blood cells exhibited resistance to Plasmodium falciparum in vitro. Epidemiology: The prevalence of the Lan antigen exceeds 99.9% in most populations. The frequency of the Lan-negative blood type is estimated at 1 in 50,000 in Japanese populations, 1 in 20,000 in Caucasians, and 1 in 1,500 in black people from South Africa. Clinical significance: When Lan-negative individuals are exposed to Lan-positive blood through transfusion or pregnancy, they may develop an anti-Lan antibody. Anti-Lan is considered a clinically significant antibody,: 220  but its effects are variable. It has been associated with severe transfusion reactions and mild cases of hemolytic disease of the newborn, but in some cases individuals with the antibody have not experienced any adverse effects from exposure to Lan-positive blood. It is recommended that individuals with anti-Lan are transfused with Lan-negative blood, especially if the antibody titer is high. One case of autoimmune hemolytic anemia involving auto-anti-Lan has been described. Laboratory testing: Serologic reagents and molecular assays for Lan antigen typing were not commercially available as of 2013.Anti-Lan antibodies are typically composed of immunoglobulin G and may bind complement. As an IgG antibody, anti-Lan can be detected using the indirect antiglobulin test. 
The antibody is resistant to treatment with ficin, papain, trypsin, DTT, and EDTA/glycine-acid.: 220 History: The Lan antigen was first described in 1961 by Van der Hart et al., when a Dutch patient suffered a severe hemolytic transfusion reaction.: 220 : 489  The patient was found to produce an antibody that reacted with all but 1 out of 4,000 blood donors tested. The causative antigen was identified and designated "Langereis" after the patient's last name. Lan was officially designated a blood group by the International Society of Blood Transfusion in 2012, following the discovery of the molecular basis of the Lan-negative phenotype.: 220 
**Radium compounds** Radium compounds: Radium compounds are compounds containing the element radium (Ra). Due to radium's radioactivity, not many compounds have been well characterized. Solid radium compounds are white as radium ions provide no specific coloring, but they gradually turn yellow and then dark over time due to self-radiolysis from radium's alpha decay. Insoluble radium compounds coprecipitate with all barium, most strontium, and most lead compounds. Oxides and hydroxides: Radium oxide (RaO) has not been characterized well past its existence, despite oxides being common compounds for the other alkaline earth metals. Radium hydroxide (Ra(OH)2) is the most readily soluble among the alkaline earth hydroxides and is a stronger base than its barium congener, barium hydroxide. It is also more soluble than actinium hydroxide and thorium hydroxide: these three adjacent hydroxides may not be separated by precipitating them with ammonia. Halides: Radium fluoride (RaF2) is a highly radioactive compound. It can be coprecipitated with lanthanide fluorides. Radium fluoride has the same crystal form as calcium fluoride (fluorite). It can be prepared by the reaction of radium metal and hydrogen fluoride gas: Ra + 2 HF → RaF2 + H2Radium chloride (RaCl2) is a colorless, luminous compound. It becomes yellow after some time due to self-damage by the alpha radiation given off by radium when it decays. Small amounts of barium impurities give the compound a rose color. It is soluble in water, though less so than barium chloride, and its solubility decreases with increasing concentration of hydrochloric acid. Crystallization from aqueous solution gives the dihydrate RaCl2·2H2O, isomorphous with its barium analog.Radium bromide (RaBr2) is also a colorless, luminous compound. In water, it is more soluble than radium chloride. Like radium chloride, crystallization from aqueous solution gives the dihydrate RaBr2·2H2O, isomorphous with its barium analog. The ionizing radiation emitted by radium bromide excites nitrogen molecules in the air, making it glow. The alpha particles emitted by radium quickly gain two electrons to become neutral helium, which builds up inside and weakens radium bromide crystals. This effect sometimes causes the crystals to break or even explode. Other compounds: Radium nitrate (Ra(NO3)2) is a white compound that can be made by dissolving radium carbonate in nitric acid. As the concentration of nitric acid increases, the solubility of radium nitrate decreases, an important property for the chemical purification of radium.Radium forms much the same insoluble salts as its lighter congener barium: it forms the insoluble sulfate (RaSO4, the most insoluble known sulfate), chromate (RaCrO4), carbonate (RaCO3), iodate (Ra(IO3)2), tetrafluoroberyllate (RaBeF4), and nitrate (Ra(NO3)2). With the exception of the carbonate, all of these are less soluble in water than the corresponding barium salts, but they are all isostructural to their barium counterparts. Additionally, radium phosphate, radium oxalate, and radium sulfite are probably also insoluble, as they coprecipitate with the corresponding insoluble barium salts. The great insolubility of radium sulfate (at 20 °C, only 2.1 mg will dissolve in 1 kg of water) means that it is one of the less biologically dangerous radium compounds. The large ionic radius of Ra2+ (148 pm) results in weak complexation and poor extraction of radium from aqueous solutions when not at high pH.
**Subdural hygroma** Subdural hygroma: A subdural hygroma (SDG) is a collection of cerebrospinal fluid (CSF), without blood, located under the dural membrane of the brain. Most subdural hygromas are believed to be derived from chronic subdural hematomas. They are commonly seen in elderly people after minor trauma but can also be seen in children following infection or trauma. One of the common causes of subdural hygroma is a sudden decrease in pressure as a result of placing a ventricular shunt. This can lead to leakage of CSF into the subdural space, especially in cases with moderate to severe brain atrophy. In these cases, symptoms such as mild fever, headache, drowsiness, and confusion can be seen, and they are relieved by draining the subdural fluid. Etiology and Pathophysiology: Subdural hygromas require two conditions in order to occur. First, there must be a separation in the layers of the meninges of the brain. Second, the subdural space created by this separation must remain uncompressed so that CSF can accumulate there, resulting in the hygroma. The arachnoid mater is torn and cerebrospinal fluid (CSF) from the subarachnoid space accumulates in the subdural space. Hygromas also push the subarachnoid vessels away from the inner table of the skull. A subdural hygroma can appear within the first day, but the mean time to appearance on CT scan is 9 days. Unlike a subdural hematoma, a subdural hygroma does not have internal membranes that can easily rupture, but a hygroma can sometimes occur together with hemorrhage to become a hematohygroma. Subdural hygromas most commonly occur when events such as head trauma, infections, or cranial surgeries happen in tandem with brain atrophy, severe dehydration, prolonged spinal drainage, or any other event that causes a decrease in intracranial pressure. This explains why subdural hygromas occur more commonly in infants and the elderly: infants have compressible brains, while elderly patients have a greater amount of space for fluid to accumulate due to age-related brain atrophy. Signs and symptoms: Most subdural hygromas are small and clinically insignificant. A majority of patients with SDG will not experience symptoms. However, some commonly reported but nonspecific symptoms of SDG include headache and nausea. Focal neurologic deficits and seizures have also been reported but are nonspecific to SDG. Larger hygromas may cause secondary localized mass effects on the adjacent brain parenchyma, enough to cause a neurologic deficit or other symptoms. Acute subdural hygromas can be a potential neurosurgical emergency, requiring decompression. Acute hygromas are typically a result of head trauma—they are a relatively common posttraumatic lesion—but can also develop following neurosurgical procedures, and have also been associated with a variety of conditions, including dehydration in the elderly, lymphoma and connective tissue diseases. Diagnosis: On CT scan, a subdural hygroma has the same density as normal CSF; on MRI, it has the same signal intensity as CSF. If iodinated contrast is administered during the CT scan, the hygroma appears as high density because of the contrast at 120 kVp; at 190 kVp, however, a hygroma with contrast has intermediate density. In the majority of cases, if there has not been any acute trauma or severe neurologic symptoms, a small subdural hygroma on the head CT scan will be an incidental finding. If there is an associated localized mass effect that may explain the clinical symptoms, or concern for a potential chronic SDH that could rebleed, then an MRI, with or without neurologic consultation, may be useful. Diagnosis: It is not uncommon for chronic subdural hematomas (SDHs) to be misinterpreted as subdural hygromas on CT reports for scans of the head, and vice versa. Magnetic resonance imaging (MRI) should be done to differentiate a chronic SDH from a subdural hygroma, when clinically warranted. Marked cerebral atrophy with secondarily widened subarachnoid CSF spaces, as seen in elderly patients, can also cause confusion on CT. To distinguish chronic subdural hygromas from simple brain atrophy and CSF space expansion, a gadolinium-enhanced MRI can be performed. Visualization of cortical veins traversing the collection favors a widened subarachnoid space as seen in brain atrophy, whereas subdural hygromas will displace the cortex and cortical veins. Treatment: Most asymptomatic subdural hygromas do not require any treatment. Some surgeons may opt to perform simple burr holes to alleviate intracranial pressure (ICP). Occasionally a temporary drain is placed for 24–48 hours post-operatively. In recurrent cases, a craniotomy may be performed to attempt to locate the site of the CSF leak. In certain cases, a shunt can be placed for additional drainage. Great caution is used when choosing to look for the CSF leak, as such leaks are generally difficult to spot.
**Doubleheader (baseball)** Doubleheader (baseball): In the sport of baseball, a doubleheader is a set of two games played between the same two teams on the same day. Historically, doubleheaders have been played in immediate succession, in front of the same crowd. Contemporarily, the term is also used to refer to two games played between two teams in a single day in front of different crowds and not in immediate succession. Doubleheader (baseball): The record for the most doubleheaders played by a major-league team in one season is 44 by the Chicago White Sox in 1943. Between September 4 and September 15, 1928, the Boston Braves played nine consecutive doubleheaders – 18 games in 12 days. History: For many decades, major-league doubleheaders were routinely scheduled numerous times each season. However, any major-league doubleheader now played is generally the result of a prior game between the same two teams being postponed due to inclement weather or other factors. Most often the game is rescheduled for a day on which the two teams play each other again. Often it is within the same series, but in some cases, may be weeks or months after the original date. On rare occasions, the last game between two teams in that particular city is rained out, and a doubleheader may be scheduled at the other team's home park to replace the missed game. History: Currently, major-league teams playing two games in a day usually play a "day-night doubleheader", in which the stadium is emptied of spectators and a separate admission is required for the second game. However, such games are officially regarded as separate games on the same date, rather than as a doubleheader. True doubleheaders are less commonly played. Classic doubleheaders, also known as day doubleheaders, were more common in the past, and although they are rare in the major leagues, they still are played at the minor league and college levels. History: In 1959, at least one league played a quarter of its games as classic doubleheaders. The rate declined to 10% in 1979. Eventually, eight years passed between two officially scheduled doubleheaders. Reasons for the decline include clubs' desire to maximize revenue, longer duration of games, five-day pitching rotation as opposed to four-day rotation, time management of relievers and catchers, and lack of consensus among players. Types of doubleheaders: The Official Baseball Rules used by Major League Baseball (MLB) discuss doubleheaders in section 4.08.: 16–17  The document makes mention of "conventional" and "split" doubleheaders.: 16 Conventional In conventional doubleheaders, a spectator may attend both games by purchasing a single ticket. After the first game ends, a break, normally lasting 30 to 45 minutes per the Official Baseball Rules, occurs and the second game is then started.: 16–17  For statistical purposes, the attendance is counted only for the second game, with the first game's attendance recorded as zero. Types of doubleheaders: Day The "classic" day doubleheader consists of the first game played in the early afternoon and, following a break, the second is played in the late afternoon. This was often done out of necessity in the years before many ballparks had lights. Often, if either game went into extra innings, the second game was eventually called when it grew dark. Types of doubleheaders: This type of doubleheader is now more prominent in Minor League Baseball. 
It is now uncommon in the major leagues, even for rain makeups, since the use of stadium lights allows for night games. They are still occasionally scheduled, one example being the Tampa Bay Rays hosting the Oakland Athletics in a single-admission doubleheader starting at 1:05 p.m. on the afternoon of June 10, 2017, at Tropicana Field. Types of doubleheaders: Twi-night In a twi-night (short for "twilight-night") doubleheader, the first game is played in the late afternoon and, following a break, the second begins at night. Under the Collective Bargaining Agreement (CBA) between MLB and the Major League Baseball Players Association (MLBPA), this is allowed provided the start time of the first game is no later than 5:00 p.m. local time, although they generally start at 4:00 p.m. This type of doubleheader is still used in the minor leagues, or occasionally in MLB as the result of a rainout. Types of doubleheaders: Split In a split or "day-night" doubleheader, the first game is played in the early afternoon and the second is played at night. In this scenario, separate tickets are sold for admission to each individual game. Such doubleheaders are favored by major-league organizations because they can charge admission for each game individually, and most often occur as the result of a rainout, where tickets have already been sold to the individual games. Types of doubleheaders: Except in special circumstances with the approval of the MLBPA, such as a makeup game resulting from a rainout, scheduling split doubleheaders is prohibited under the terms of the 2002 CBA. Exceptions have occurred; for example, on August 22, 2012, the Arizona Diamondbacks hosted the Miami Marlins in a day-night doubleheader, the first doubleheader ever played at Chase Field, which was arranged due to a scheduling error violating another section of the CBA, which prohibits 23 consecutive games without a day off.Since the 2012 season, the CBA has allowed teams to expand their active roster by one player (currently from 26 to 27 players) for split doubleheaders, as long as those doubleheaders were scheduled with at least 48 hours' notice. Tripleheaders: Three instances of a tripleheader are recorded in MLB, indicating three games between the same two teams on the same day. These occurred between the Brooklyn Bridegrooms and Pittsburgh Innocents on September 1, 1890 (Brooklyn won all three); between the Baltimore Orioles and Louisville Colonels on September 7, 1896 (Baltimore won all three); and between the Pittsburgh Pirates and Cincinnati Reds on October 2, 1920 (Cincinnati won two of the three).Tripleheaders are prohibited under the current CBA, except if the first game is the conclusion of a game suspended from a prior date: this would only happen in the extremely rare event when the only remaining dates between teams are doubleheaders, and no single games are left for the suspended game to precede. Tripleheaders: In 2019, a Friday doubleheader at the end of the season between the Tigers and White Sox was rained out after one of the games started but did not go 5 innings. As a result, one of the games was moved to a doubleheader Saturday and the other was cancelled. Had the broader definition of suspended games rule been in play for 2019, it is possible a tripleheader would have happened between the Tigers and White Sox on that Saturday due to the rules allowing for such. Seven-inning doubleheaders: Under some rulesets, games played as part of a doubleheader last seven innings each instead of the usual nine. 
Seven-inning doubleheaders: In college and minor league baseball: College and minor league baseball typically use seven-inning doubleheaders. This applies even in the postseason; in 1994, the first game of the five-game Pacific Coast League championship series between Vancouver and Albuquerque was rained out; the two teams played a doubleheader, seven innings each, on the originally scheduled date of the second game. In the minors, if the first game is the completion of a suspended game from a prior day, the suspended game is played to completion (seven or nine innings, whichever it was scheduled to be when it started), and the second game of the doubleheader is seven innings. Seven-inning doubleheaders: In leagues which place a runner on second base at the start of extra innings, the rule applies starting in the eighth inning. Seven-inning doubleheaders: In Major League Baseball, 2020–2021: After the COVID-19 pandemic delayed the start of MLB's 2020 season to July from its original intended start in March, the league announced on July 31 that all doubleheader games would be scheduled for seven innings each during the shortened season, to reduce strain on teams' pitchers. The league and the MLBPA came to an agreement to put this rule in place only for the 2020 season, later extended to the 2021 season as well. The 2022 season reverted to nine-inning doubleheaders. Seven-inning doubleheaders: The first major-league seven-inning doubleheader was played on August 2, 2020, between the Cincinnati Reds and the Detroit Tigers at Comerica Park, with the Reds winning both games. Seven-inning doubleheaders: Statistical impact: Some major-league feats in a seven-inning game were counted as-is, while others were not. For example, a shutout was credited when it occurred in a seven-inning game; Reds pitcher Trevor Bauer threw the first seven-inning shutout under the rule. A no-hitter was only credited if the game lasted at least nine innings (i.e. extra innings were played, due to a tie score). Under the 1991 guidelines recognizing major-league no-hitters, the feat is only officially recognized when a team's pitcher (or pitchers) allows no hits in a minimum of nine innings (that is, records at least 27 outs without allowing a hit). On April 25, 2021, Madison Bumgarner of the Arizona Diamondbacks pitched a complete seven-inning game allowing no hits to the Atlanta Braves in the second game of a doubleheader, but did not receive credit for a no-hitter. Five pitchers of the Tampa Bay Rays held the Cleveland Indians hitless in a seven-inning game, the second game of a doubleheader on July 7, 2021, and also did not receive credit for a no-hitter. Doubleheaders of note: The home-and-home doubleheader, in which each team hosts one game, is extremely rare, as it requires the teams' home ballparks to be in close geographical proximity. During the 20th century and before the advent of interleague play in 1997, only one instance was recorded in Major League Baseball: a Labor Day special event involving the New York Giants and Brooklyn Superbas. Doubleheaders of note: September 7, 1903. Game 1: Washington Park (II): Giants 6, Superbas 4. Game 2: Polo Grounds (III): Superbas 3, Giants 0. This is the only home-and-home doubleheader known to have been part of the original major league season schedule. Since interleague play began, the New York Mets and the New York Yankees have on three occasions played home-and-home doubleheaders. Each occasion was due to a rainout during the first series of the season.
During the second series of the season, a makeup game was scheduled at the ballpark of the opposing team as part of a day-night doubleheader. Doubleheaders of note: July 8, 2000Game 1: Shea Stadium: Yankees 4, Mets 2 Game 2: Yankee Stadium (I): Yankees 4, Mets 2 (June 11 makeup) June 28, 2003 Game 1: Yankee Stadium (I): Yankees 7, Mets 1 Game 2: Shea Stadium: Yankees 9, Mets 8 (June 21 makeup) June 27, 2008 Game 1: Yankee Stadium (I): Mets 15, Yankees 6 (May 16 makeup) Game 2: Shea Stadium: Yankees 9, Mets 0On September 13, 1951, the St. Louis Cardinals hosted a doubleheader against two different teams. The first game was a 6–4 win against the New York Giants. The second game resulted in a 2–0 loss to the Boston Braves.On September 25, 2000, the Cleveland Indians also hosted a doubleheader against two different teams. The September 10 game against the Chicago White Sox in Cleveland had been rained out. With no common days off for the remainder of the season and both teams in a postseason race, the teams agreed to play a day game in Cleveland on the same day that the Indians were to host the Minnesota Twins for a night game. The Indians defeated the White Sox 9–2 in the first game, while the Twins defeated the Indians 4–3 in the second.On occasion, teams may play both games of a doubleheader at the same park, but one team is designated home for each game. This is usually the result of earlier postponements. For example, in 2007, when snow storms in northern Ohio caused the Cleveland Indians to postpone an entire four-game series from April 5–8 against the Seattle Mariners; three of the games were made up in Cleveland throughout the season, while the fourth was made up as part of a doubleheader in Seattle on September 26 with the Indians as the designated home team for the first game. The Indians won the first game acting as the home team, 12–4, but lost the second as the away team, 3–2. In popular culture: National Baseball Hall of Fame inductee Ernie Banks, who spent his entire MLB career with the Chicago Cubs, was known for his catchphrase, "It's a beautiful day for a ballgame ... Let's play two!", expressing his wish to play a doubleheader every day out of his love of baseball.
**Data transformation (statistics)** Data transformation (statistics): In statistics, data transformation is the application of a deterministic mathematical function to each point in a data set—that is, each data point zi is replaced with the transformed value yi = f(zi), where f is a function. Transforms are usually applied so that the data appear to more closely meet the assumptions of a statistical inference procedure that is to be applied, or to improve the interpretability or appearance of graphs. Data transformation (statistics): Nearly always, the function that is used to transform the data is invertible, and generally is continuous. The transformation is usually applied to a collection of comparable measurements. For example, if we are working with data on people's incomes in some currency unit, it would be common to transform each person's income value by the logarithm function. Motivation: Guidance for how data should be transformed, or whether a transformation should be applied at all, should come from the particular statistical analysis to be performed. For example, a simple way to construct an approximate 95% confidence interval for the population mean is to take the sample mean plus or minus two standard error units. However, the constant factor 2 used here is particular to the normal distribution, and is only applicable if the sample mean varies approximately normally. The central limit theorem states that in many situations, the sample mean does vary normally if the sample size is reasonably large. However, if the population is substantially skewed and the sample size is at most moderate, the approximation provided by the central limit theorem can be poor, and the resulting confidence interval will likely have the wrong coverage probability. Thus, when there is evidence of substantial skew in the data, it is common to transform the data to a symmetric distribution before constructing a confidence interval. If desired, the confidence interval can then be transformed back to the original scale using the inverse of the transformation that was applied to the data. Data can also be transformed to make them easier to visualize. For example, suppose we have a scatterplot in which the points are the countries of the world, and the data values being plotted are the land area and population of each country. If the plot is made using untransformed data (e.g. square kilometers for area and the number of people for population), most of the countries would be plotted in a tight cluster of points in the lower left corner of the graph. The few countries with very large areas and/or populations would be spread thinly around most of the graph's area. Simply rescaling units (e.g., to thousand square kilometers, or to millions of people) will not change this. However, following logarithmic transformations of both area and population, the points will be spread more uniformly in the graph. Motivation: Another reason for applying data transformation is to improve interpretability, even if no formal statistical analysis or visualization is to be performed. For example, suppose we are comparing cars in terms of their fuel economy. These data are usually presented as "kilometers per liter" or "miles per gallon". However, if the goal is to assess how much additional fuel a person would use in one year when driving one car compared to another, it is more natural to work with the data transformed by applying the reciprocal function, yielding liters per kilometer, or gallons per mile.
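As a concrete sketch of the confidence-interval use case above (with simulated data and hypothetical variable names), one can log-transform a right-skewed sample, form the usual mean plus or minus two standard errors on the log scale, and back-transform the endpoints with the exponential function, the inverse of the logarithm. Note that the back-transformed interval refers to the median (geometric mean) of the original data rather than its arithmetic mean:

```python
import numpy as np

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.0, sigma=1.0, size=500)   # right-skewed sample

log_incomes = np.log(incomes)                 # transform to a roughly symmetric scale
m = log_incomes.mean()
se = log_incomes.std(ddof=1) / np.sqrt(len(log_incomes))

lo, hi = m - 2 * se, m + 2 * se               # approximate 95% CI on the log scale
print("back-transformed CI:", np.exp(lo), "to", np.exp(hi))   # original units
```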
In regression: Data transformation may be used as a remedial measure to make data suitable for modeling with linear regression if the original data violates one or more assumptions of linear regression. For example, the simplest linear regression models assume a linear relationship between the expected value of Y (the response variable to be predicted) and each independent variable (when the other independent variables are held fixed). If linearity fails to hold, even approximately, it is sometimes possible to transform either the independent or dependent variables in the regression model to improve the linearity. For example, addition of quadratic functions of the original independent variables may lead to a linear relationship with the expected value of Y, resulting in a polynomial regression model, a special case of linear regression. In regression: Another assumption of linear regression is homoscedasticity, that is, the variance of the errors must be the same regardless of the values of the predictors. If this assumption is violated (i.e. if the data is heteroscedastic), it may be possible to find a transformation of Y alone, or transformations of both X (the predictor variables) and Y, such that the homoscedasticity assumption (in addition to the linearity assumption) holds true on the transformed variables, and linear regression may therefore be applied to these. In regression: Yet another application of data transformation is to address the problem of lack of normality in error terms. Univariate normality is not needed for least squares estimates of the regression parameters to be meaningful (see Gauss–Markov theorem). However, confidence intervals and hypothesis tests will have better statistical properties if the variables exhibit multivariate normality. Transformations that stabilize the variance of error terms (i.e. those that address heteroscedasticity) often also help make the error terms approximately normal. In regression: Examples. Equation: Y = a + bX. Meaning: A unit increase in X is associated with an average increase of b units in Y. Equation: log(Y) = a + bX (exponentiating both sides gives Y = e^a · e^(bX)). Meaning: A unit increase in X is associated with an average increase of b units in log(Y), or equivalently, Y increases on average by a multiplicative factor of e^b. For illustrative purposes, if the base-10 logarithm were used instead of the natural logarithm in the above transformation and the same symbols (a and b) are used to denote the regression coefficients, then a unit increase in X would lead to a 10^b-fold increase in Y on average. If b were 1, this would imply a 10-fold increase in Y for a unit increase in X. Equation: Y = a + b·log(X). Meaning: A k-fold increase in X is associated with an average increase of b·log(k) units in Y. For illustrative purposes, if the base-10 logarithm were used instead of the natural logarithm in the above transformation and the same symbols (a and b) are used to denote the regression coefficients, then a tenfold increase in X would result in an average increase of b·log10(10) = b units in Y. Equation: log(Y) = a + b·log(X) (exponentiating both sides gives Y = e^a · X^b). Meaning: A k-fold increase in X is associated with a k^b multiplicative increase in Y on average. Thus, if X doubles, Y changes by a multiplicative factor of 2^b. Alternative: Generalized linear models (GLMs) provide a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution.
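Before turning to GLMs, a short numerical sketch of the log-log case in the examples above may help (simulated data and hypothetical variable names, not taken from the text): fit log(Y) = a + b·log(X) by ordinary least squares and check that doubling X multiplies the fitted Y by roughly 2^b.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 100.0, size=200)
y = 3.0 * x**0.7 * rng.lognormal(0.0, 0.1, size=200)     # true b = 0.7, plus noise

# Ordinary least squares on the log-log scale: log(y) = a + b*log(x)
b, a = np.polyfit(np.log(x), np.log(y), deg=1)           # slope first, then intercept
print("estimated b:", round(b, 3))
print("multiplicative factor for doubling X:", round(2**b, 3))   # about 2**0.7 = 1.62
```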
GLMs allow the linear model to be related to the response variable via a link function and allow the magnitude of the variance of each measurement to be a function of its predicted value. Common cases: The logarithm transformation and square root transformation are commonly used for positive data, and the multiplicative inverse transformation (reciprocal transformation) can be used for non-zero data. The power transformation is a family of transformations parameterized by a non-negative value λ that includes the logarithm, square root, and multiplicative inverse transformations as special cases. To approach data transformation systematically, it is possible to use statistical estimation techniques to estimate the parameter λ in the power transformation, thereby identifying the transformation that is approximately the most appropriate in a given setting. Since the power transformation family also includes the identity transformation, this approach can also indicate whether it would be best to analyze the data without a transformation. In regression analysis, this approach is known as the Box–Cox transformation. Common cases: The reciprocal transformation, some power transformations such as the Yeo–Johnson transformation, and certain other transformations such as applying the inverse hyperbolic sine, can be meaningfully applied to data that include both positive and negative values (the power transformation is invertible over all real numbers if λ is an odd integer). However, when both negative and positive values are observed, it is common to begin by adding a constant to all values, producing a set of non-negative data to which any power transformation can be applied. A common situation where a data transformation is applied is when a value of interest ranges over several orders of magnitude. Many physical and social phenomena exhibit such behavior — incomes, species populations, galaxy sizes, and rainfall volumes, to name a few. Power transforms, and in particular the logarithm, can often be used to induce symmetry in such data. The logarithm is often favored because it is easy to interpret its result in terms of "fold changes." The logarithm also has a useful effect on ratios. If we are comparing positive quantities X and Y using the ratio X / Y, then if X < Y, the ratio is in the interval (0,1), whereas if X > Y, the ratio is in the half-line (1,∞), where a ratio of 1 corresponds to equality. In an analysis where X and Y are treated symmetrically, the log-ratio log(X / Y) is zero in the case of equality, and it has the property that if X is K times greater than Y, the log-ratio lies the same distance from zero as in the situation where Y is K times greater than X (the log-ratios are log(K) and −log(K) in these two situations). Common cases: If values are naturally restricted to be in the range 0 to 1, not including the end-points, then a logit transformation may be appropriate: this yields values in the range (−∞,∞). Common cases: Transforming to normality: 1. It is not always necessary or desirable to transform a data set to resemble a normal distribution. However, if symmetry or normality is desired, it can often be induced through one of the power transformations. 2. A linguistic power function is distributed according to the Zipf–Mandelbrot law. The distribution is extremely spiky and leptokurtic; this is the reason why researchers long had to turn their backs on statistics when tackling, for example, authorship attribution problems.
Nevertheless, usage of Gaussian statistics is perfectly possible by applying data transformation.3. To assess whether normality has been achieved after transformation, any of the standard normality tests may be used. A graphical approach is usually more informative than a formal statistical test and hence a normal quantile plot is commonly used to assess the fit of a data set to a normal population. Alternatively, rules of thumb based on the sample skewness and kurtosis have also been proposed. Common cases: Transforming to a uniform distribution or an arbitrary distribution If we observe a set of n values X1, ..., Xn with no ties (i.e., there are n distinct values), we can replace Xi with the transformed value Yi = k, where k is defined such that Xi is the kth largest among all the X values. This is called the rank transform, and creates data with a perfect fit to a uniform distribution. This approach has a population analogue. Common cases: Using the probability integral transform, if X is any random variable, and F is the cumulative distribution function of X, then as long as F is invertible, the random variable U = F(X) follows a uniform distribution on the unit interval [0,1]. From a uniform distribution, we can transform to any distribution with an invertible cumulative distribution function. If G is an invertible cumulative distribution function, and U is a uniformly distributed random variable, then the random variable G−1(U) has G as its cumulative distribution function. Putting the two together, if X is any random variable, F is the invertible cumulative distribution function of X, and G is an invertible cumulative distribution function then the random variable G−1(F(X)) has G as its cumulative distribution function. Common cases: Variance stabilizing transformations Many types of statistical data exhibit a "variance-on-mean relationship", meaning that the variability is different for data values with different expected values. As an example, in comparing different populations in the world, the variance of income tends to increase with mean income. If we consider a number of small area units (e.g., counties in the United States) and obtain the mean and variance of incomes within each county, it is common that the counties with higher mean income also have higher variances. Common cases: A variance-stabilizing transformation aims to remove a variance-on-mean relationship, so that the variance becomes constant relative to the mean. Examples of variance-stabilizing transformations are the Fisher transformation for the sample correlation coefficient, the square root transformation or Anscombe transform for Poisson data (count data), the Box–Cox transformation for regression analysis, and the arcsine square root transformation or angular transformation for proportions (binomial data). While commonly used for statistical analysis of proportional data, the arcsine square root transformation is not recommended because logistic regression or a logit transformation are more appropriate for binomial or non-binomial proportions, respectively, especially due to decreased type-II error. Transformations for multivariate data: Univariate functions can be applied point-wise to multivariate data to modify their marginal distributions. It is also possible to modify some attributes of a multivariate distribution using an appropriately constructed transformation. For example, when working with time series and other types of sequential data, it is common to difference the data to improve stationarity. 
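A minimal sketch of two of the transformations just described, the rank transform to an approximately uniform distribution and first-differencing of a sequential series, follows. The data are simulated and the variable names are hypothetical; the ranks are scaled to the open interval (0, 1) here, a common variant of the plain rank transform described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Rank transform: with n distinct values, ranks/(n + 1) give an approximately
# uniform sample on (0, 1).
x = rng.lognormal(size=100)
u = stats.rankdata(x) / (len(x) + 1)

# Differencing: subtracting consecutive values of a random walk removes the
# trend and leaves an approximately stationary series.
walk = np.cumsum(rng.normal(size=500))
diffed = np.diff(walk)

print(round(u.min(), 3), round(u.max(), 3), round(diffed.mean(), 3))
```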
If data generated by a random vector X are observed as vectors Xi of observations with covariance matrix Σ, a linear transformation can be used to decorrelate the data. To do this, the Cholesky decomposition is used to express Σ = A A′. Then the transformed vector Yi = A⁻¹Xi has the identity matrix as its covariance matrix.
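A small numerical sketch of this decorrelation step (illustrative only; the array names are hypothetical): factor the sample covariance with a Cholesky decomposition and solve the triangular system rather than forming an explicit inverse.

```python
import numpy as np

rng = np.random.default_rng(3)
# 1,000 correlated observations of a 2-dimensional random vector X
X = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[2.0, 1.2], [1.2, 1.0]], size=1000)

sigma = np.cov(X, rowvar=False)        # sample covariance matrix Σ
A = np.linalg.cholesky(sigma)          # lower-triangular A with Σ = A A'
Y = np.linalg.solve(A, X.T).T          # Y_i = A⁻¹ X_i for each observation

print(np.round(np.cov(Y, rowvar=False), 2))   # approximately the identity matrix
```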
**Spinal mobilization** Spinal mobilization: Spinal mobilization is a type of passive movement of a spinal segment or region. It is usually performed with the aim of achieving a therapeutic effect. Spinal mobilization has been described as "a gentle, often oscillatory, passive movement applied to a spinal region or segment so as gently to increase the passive range of motion of that segment or region." Types of techniques: Spinal mobilization employs a range of techniques, or schools of approach, in delivering the passive movement. Some examples include the Maitland technique and the Mulligan technique.
**Project Blogger** Project Blogger: Project Blogger is an educational initiative in Ireland by Discover Science & Engineering (DSE). It provides blogging tools and an online space for secondary school students and their teachers to create blogs about their school science experiments and science interests. Through the blogs, the students can share their experiences about science with their classmates, as well as with students from other schools across Ireland.The scheme was piloted in the 2007–08 academic year and was extended the following year. DSE has also teamed up with Scifest for students to use Project Blogger in their SciFest projects. The students use their online science diaries to store ongoing project results, images, ideas, graphs, video and discussions.
**Wax museum** Wax museum: A wax museum or waxworks usually consists of a collection of wax sculptures representing famous people from history and contemporary personalities exhibited in lifelike poses, wearing real clothes. Wax museum: Some wax museums have a special section dubbed the "Chamber of Horrors", in which the more grisly exhibits are displayed. Some collections are more specialized, as, for example, collections of wax medical models once used for training medical professionals. Many museums or displays in historical houses that are not wax museums as such use wax figures as part of their displays. The origin of wax museums goes back to the early 18th century at least, and wax funeral effigies of royalty and some other figures exhibited by their tombs had essentially been tourist attractions well before that. History before 1800: The making of life-size wax figures wearing real clothes grew out of the funeral practices of European royalty. In the Middle Ages it was the habit to carry the corpse, fully dressed, on top of the coffin at royal funerals, but this sometimes had unfortunate consequences in hot weather, and the custom of making an effigy in wax for this role grew, again wearing actual clothes so that only the head and hands needed wax models. After the funeral these were often displayed by the tomb or elsewhere in the church, and became a popular attraction for visitors, which it was often necessary to pay to view.The Westminster Abbey Museum in London has a collection of British royal funeral effigies made of varying materials going back to that of Edward III of England's wooden likeness (died 1377), as well as those of figures such as the naval hero Horatio Nelson, and Frances Stewart, Duchess of Richmond, who also had her parrot stuffed and displayed. From the funeral of Charles II in 1680 they were no longer placed on the coffin but were still made for later display. The effigy of Charles II, open-eyed and standing, was displayed over his tomb until the early 19th century, when all the Westminster effigies were removed from the abbey itself. Nelson's effigy was a pure tourist attraction, commissioned the year after his death in 1805, and his burial not in the Abbey but in St Paul's Cathedral after a government decision that major public figures should in future be buried there. Concerned for their revenue from visitors, the Abbey decided it needed a rival attraction for admirers of Nelson. History before 1800: In European courts including that of France the making of posed wax figures became popular. Antoine Benoist (1632–1717) was a French court painter and sculptor in wax to King Louis XIV. He exhibited forty-three wax figures of the French Royal Circle at his residence in Paris. Thereafter, the king authorized the figurines to be shown throughout France. His work became so highly regarded that James II of England invited him to visit England in 1684. There he executed works of the English king and members of his court. A seated figure of Peter the Great of Russia survives, made by an Italian artist, after the Tsar was impressed by the figures he saw at the Chateau of Versailles. The Danish court painter Johann Salomon Wahl executed figures of the Danish king and queen in about 1740.The 'Moving Wax Works of the Royal Court of England', a museum or exhibition of 140 life-size figures, some apparently with clockwork moving parts, opened by Mrs Mary in Fleet Street in London was doing excellent business in 1711. 
Philippe Curtius, waxwork modeller to the French court, opened his Cabinet de Cire as a tourist attraction in Paris in 1770, which remained open until 1802. In 1783 this added a Caverne des Grands Voleurs ("Cave of the Great Thieves"), an early "Chamber of Horrors". He bequeathed his collection to his protégée Marie Tussaud, who during the French Revolution made death masks of the executed royals. Notable wax museums: Madame Tussauds, historically associated with London, is the most famous name associated with wax museums, although it was not the earliest wax museum, as is sometimes thought. In 1835 Madame Tussaud established her first permanent exhibition in London's Baker Street. By the late 19th century most large cities had some kind of commercial wax museum, like the Musée Grévin in Paris or the Panoptikum Hamburg, and for a century these remained highly popular. In the late 20th century it became harder for them to compete with other attractions. Notable wax museums: Today there are also Madame Tussauds in Dam Square, Amsterdam; Berlin; Madame Tussauds Hong Kong; Shanghai; and five locations in the United States: the Venetian Hotel in Las Vegas, Nevada, Times Square in New York City, Washington, D.C., Fisherman's Wharf in San Francisco and Hollywood. Louis Tussaud's wax museum in San Antonio, Texas, is across the street from the historic Alamo. Others are located on the Canadian side of Niagara Falls, and Grand Prairie, Texas. Notable wax museums: One of the most popular wax museums in the United States for decades was The Movieland Wax Museum in Buena Park, California, near Knott's Berry Farm. The museum opened in 1962 and through the years added many wax figures of famous show business figures. Several stars attended the unveilings of the wax incarnations. The museum closed its doors on October 31, 2005, after years of dwindling attendance. Notable wax museums: However, the most enduring museum in the United States is the Hollywood Wax Museum located in Hollywood, California which features almost exclusively figures of movie actors displayed in settings associated with their roles in popular movies. This group of museums includes Hollywood Wax Museum Branson in Branson, Missouri along with Hollywood Wax Museum Pigeon Forge in Pigeon Forge, Tennessee and Hollywood Wax Museum Myrtle Beach in Myrtle Beach, South Carolina. With the original location having been developed in the mid-1960s, this group of museums went against the late 20th century trend of declining wax museum attendance, with the Branson location having undergone a substantial expansion and remodeling in 2008 and 2009 including an animated ride and a mirror maze. Notable wax museums: Another popular wax museum is the Musée Conti Wax Museum in New Orleans, Louisiana, which features wax figures portraying the city's history as well as a "Haunted Dungeon" section of wax figures of famous characters from horror films and literature. This museum is currently closed as the Conti building is being converted into condos. The museum should reopen at Jazzland Theme Park some time in the future. Another popular wax museum in the U.S. is the Wax Museum at Fisherman's Wharf in San Francisco, California. Notable wax museums: BibleWalk is a Christian wax museum in Mansfield, Ohio. 
It has received attention for its use of celebrity wax figures in its religious scenes, originally a cost-saving measure when new wax figures were deemed too expensive. The Royal London Wax Museum was open in downtown Victoria, British Columbia, Canada, from 1970 to 2010 in the Steamship Terminal building; it featured "royalty to rogues and the renowned." It was forced to close when the building required seismic upgrades. Notable wax museums: The National Wax Museum in Dublin, Ireland, is a wax museum which hosts well over a hundred figures. For many years it has had only one sculptor, PJ Heraty, who continued producing figures even while the museum was closed until it could be re-opened at a new location. In recent years, several other new wax museums have opened around the world; in 2009, the Dreamland Wax Museum opened in Gramado, in the south of Brazil. Notable wax museums: The National Presidential Wax Museum in Keystone, South Dakota, is the only wax museum in the world to feature every U.S. president. Its exhibits also include other notable figures from history such as General George Custer, Alexander Graham Bell, Thomas Edison, and Sitting Bull. Originally created by the famed sculptor Katherine Stubergh, the museum includes death and life masks of notable Hollywood celebrities including Mae West and Sid Grauman. Its most revered exhibit is a depiction of George W. Bush standing on the rubble of the World Trade Center with NYFD fireman Bob Beckwith following the attacks on September 11, 2001. Notable wax museums: India's first wax museum opened in December 2005 in Kanyakumari. Now relocated to Lonavala, it contains 100 wax statues of celebrities at the Lonavala Square Mall. India's biggest wax museum, Mother's Wax Museum, opened in November 2014 in New Town, Kolkata. Another branch opened in July 2008 at the historical site of Old Goa with a collection of religious statues. Notable wax museums: Madame Tussauds opened its first museum in India at New Delhi in 2017. Depictions: Mystery of the Wax Museum; House of Wax (1953 film); Museo del horror; Terror in the Wax Museum; Waxwork (film); House of Wax (2005 film).
**Slugging** Slugging: Slugging, also known as casual carpooling, is the practice of forming ad hoc, informal carpools for purposes of commuting, essentially a variation of ride-share commuting and hitchhiking. A driver picks up these non-paying passengers (known as "slugs" or "sluggers") at key locations, as having these additional passengers means that the driver can qualify to use an HOV lane or enjoy toll reduction. While the practice is most common and most publicized in the congested Washington, D.C. metropolitan area, slugging also occurs in San Francisco, Houston, and other cities. Background: In order to relieve traffic volume during the morning and evening rush hours, high-occupancy vehicle (HOV) lanes that require more than one person per automobile were introduced in many major American cities to encourage carpooling and greater use of public transport, first appearing in the Washington D.C. metropolitan area in 1975. The failure of the new lanes to relieve congestion, and frustration over failures of public-transport systems and high fuel prices, led to the creation in the 1970s of "slugging", a form of hitchhiking between strangers that is beneficial to both parties, as drivers and passengers are able to use the HOV lane for a quicker trip. While passengers are able to travel for free, or cheaper than via other modes of travel, and HOV drivers sometimes pay no tolls, "slugs are, above all, motivated by time saved, not money pocketed". Concern for the environment is not their primary motivation; Virginia drivers of hybrid automobiles are, for example, eligible to use HOV lanes with no passengers.In the Washington area—with the second-busiest traffic during rush hour in the United States and Canada as of 2010—slugging occurs on Interstates 95, 66 and 395 between Washington and northern Virginia. As of 2006, there were about 6,459 daily slugging participants there.In the San Francisco Bay Area, with the third-busiest rush hour, casual carpooling occurs on Interstate 80 between the East Bay and San Francisco. As of 1998, 8,000 to 9,000 people slugged in San Francisco daily. However, after bridge tolls were levied on carpool vehicles in 2010, casual carpooling saw a significant decline and etiquette became more uncertain. Among the effects of the COVID-19 pandemic in the San Francisco Bay Area was the end of casual carpooling in March 2020. As of November 2022 the tradition has not resumed; although drivers continue to hope to see waiting passengers at designated pickup spots, the spontaneous nature of the program means that there is no one to restart it.Slugging also occurs in tenth-busiest Houston, at a rate of 900 daily in 2007, and in Pittsburgh.Slugging is shown to be effective in reducing vehicle travel distance as a form of ridesharing.Slugging is more used during morning commutes than evening commutes. The most common mode that slugging replaces is the transit bus.David D. Friedman's The Machinery of Freedom proposed a similar system (which he referred to as "jitney transit") in the 1970s. However, his plan assumed that passengers would be expected to pay for their transit, and that security measures such as electronic identification cards (recording the identity of both driver and passenger in a database readily available to police, in the event one or both parties disappeared) would be needed in order for people to feel safe. Although slugging is informal, ad hoc, and free, in 30 years no violence or crime was reported from Washington D.C. 
slugging until October 2010, when former Sergeant Major of the Army Gene McKinney struck one of his passengers with his car after they threatened to report his reckless driving to the police. Etymology: The term slug (used as both a noun and a verb) came from bus drivers who had to determine whether the people waiting at a stop were genuine bus passengers or merely people wanting a free lift, in the same way that they looked out for fake coins—or "slugs"—being thrown into the fare-collection box. General practices: In practice, slugging involves the creation of free, unofficial ad hoc carpool networks, often with published routes and pick-up and drop-off locations. In the morning, sluggers gather at local businesses and at government-run locations such as park-and-ride facilities, bus stops, and subway stations, where lines of sluggers form. Drivers pull up to the queue for the route they will follow and either display a sign or call out the designated drop-off point they are willing to drive to and how many passengers they can take; in the Washington area the Pentagon—the largest place of employment in the United States, with 25,000 workers—is a popular destination. Once enough riders fill the car, the driver departs. In the evening, the routes reverse. Many unofficial rules of etiquette exist, and websites allow sluggers to post warnings about those who break them. Some Washington, D.C. rules are: The slug first in line gets the next ride to their destination and also gets to choose the front or back seat. Slugs should never take a ride out of turn. General practices: Drivers are not to pick up sluggers en route to or standing outside the line, a practice referred to as "body snatching". A woman is not to be left in the line alone, for her safety. No eating, smoking, or putting on of makeup is allowed. The driver has full control of the radio and climate controls. Windows may not be opened unless the driver approves. No money is exchanged or requested, as the driver and slugs all benefit from slugging. Driver and passengers say "Thank you" at the end. Government involvement: While local governments sometimes aid sluggers by posting signs labeled with popular destinations for people to queue at, slugging is organized by its participants and no slug line has ever been created by government. Slug lines are organized and maintained by volunteers. Government officials have become more aware of sluggers' needs when planning changes that affect their behavior, and solicit their suggestions. The Virginia Department of Transportation even includes links regarding slugging on its governmental webpage. Other countries: In Jakarta, "car jockeys" are paid by commuters to ride into the center of the city to permit the use of high-occupancy vehicle lanes. In India, it is illegal for drivers to randomly pick up commuters from public roads, and there is evidence that such drivers have been fined. In the Polish People's Republic, hitchhiking was officially supported by the government (and formalized), and in Cuba, government vehicles are obligated to take hitchhikers, but these systems have nothing to do with high-occupancy lanes.
**Interpersonal deception theory** Interpersonal deception theory: Interpersonal deception theory (IDT) is one of a number of theories that attempt to explain how individuals handle actual (or perceived) deception at the conscious or subconscious level while engaged in face-to-face communication. The theory was put forth by David Buller and Judee Burgoon in 1996 to explore the idea that deception is an engaging process between receiver and deceiver. IDT assumes that communication is not static; it is influenced by personal goals and the meaning of the interaction as it unfolds. The sender's overt (and covert) communications are affected by the overt and covert communications of the receiver, and vice versa. IDT explores the interrelation between the sender's communicative meaning and the receiver's thoughts and behavior in deceptive exchanges. Interpersonal deception theory: Intentional deception requires greater cognitive exertion than truthful communication, regardless of whether the sender attempts falsification (lying), concealment (omitting material facts) or equivocation (skirting issues by changing the subject or responding indirectly). Theoretical perspective: IDT views deception through the lens of interpersonal communication, considering deception as an interactive process between sender and receiver. In contrast with previous studies of deception (which focused on the sender and receiver individually), IDT focuses on the dyadic and relational nature of deceptive communication. Behaviors by sender and receiver are dynamic, multifunctional, multidimensional and multi-modal. Theoretical perspective: Dyadic communication is communication between two people; a dyad is a group of two people between whom messages are sent and received. Relational communication is communication in which meaning is created by two people simultaneously filling the roles of sender and receiver. Dialogic activity is the active communicative language of the sender and receiver, each relying upon the other in the exchange. "Both individuals within the communicative situation are actively participating in strategies to obtain or achieve goals set by themselves. The decision to actively deceive or not, is not that of a passive nature, it is done with intent by both individuals during the conversation". Theoretical perspective: In psychotherapy and psychological counseling, dyadic, relational and dialogic activity between therapist and patient relies on honest, open communication if the patient is to recover and be capable of healthier relationships. Deception uses the same theoretical framework in reverse; the communication of one participant is deliberately false. History: The current research literature documents that human beings are poor detectors of deception. Research reveals that accuracy rates of people's ability to tell truth from deception are only a little above chance (54%). Concerningly, observers perform slightly worse given only visual information (52% accuracy) and better when they can hear (but not see) the target person (63%). While experts are more confident than laypersons, they are not more accurate. Interpersonal deception theory (IDT) attempts to explain the manner in which individuals engaged in face-to-face communication deal with actual or perceived deception on the conscious and subconscious levels. IDT proposes that the majority of individuals overestimate their ability to detect deception. In some cultures, various means of deception are acceptable while other forms are not.
Acceptance of deception can be found in language terms that classify, rationalize, or condemn such behavior. Deception that may be considered a simple white lie to save feelings may be deemed socially acceptable, while deception used to gain certain advantages can be determined to be ethically questionable. It has been estimated that "deception and suspected deception arise in at least one quarter of all conversations". Interpersonal deception detection between partners is difficult unless a partner tells an outright lie or contradicts something the other partner knows is true. While it is difficult to deceive a person over a long period of time, deception often occurs in day-to-day conversations between relational partners. Maintaining a deception over time is difficult because it places a significant cognitive load on the deceiver. The deceiver must recall previous statements so that their story remains consistent and believable. As a result, deceivers often leak important information both verbally and nonverbally. History: In the early twentieth century, Sigmund Freud studied nonverbal cues to detect deception. Freud observed a patient being asked about his darkest feelings. If his mouth was shut and his fingers were trembling, he was considered to be lying. Freud also noted other nonverbal cues, such as drumming one's fingers when telling a lie. More recently, scientists have attempted to establish the differences between truthful and deceptive behavior using a myriad of psychological and physiological approaches. In 1969, Ekman and Friesen used straightforward observation methods to determine deceptive non-verbal leakage cues, while more recently Rosenfeld et al. used magnetic resonance imaging (MRI) to detect differences between honest and deceptive responses. In 1989, DePaulo and Kirkendol developed the Motivation Impairment Effect (MIE). MIE states that the harder people try to deceive others, the more likely they are to get caught. Burgoon and Floyd, however, revisited this research and formed the idea that deceivers are more active in their attempt to deceive than most would anticipate or expect. History: IDT was developed in 1996 by David B. Buller and Judee K. Burgoon. Prior to their study, deception had not been fully considered as a communication activity. Previous work had focused upon the formulation of principles of deception. These principles were derived by evaluating the lie detection ability of individuals observing unidirectional communication. These early studies found initially that "although humans are far from infallible in their efforts to diagnose lies, they are substantially better at the task than would result merely by chance." Additionally, research has shown that deception and suspected deception occur in at least one quarter of all conversations. Buller and Burgoon discount the value of highly controlled studies – usually one-way communication experiments – designed to isolate unmistakable cues that people are lying. Therefore, IDT is based on two-way communication and intended to describe deception as an interactive communicative process. In other words, deception is an interpersonal communication method that requires the active participation of both the deceiver and receiver. Buller and Burgoon wanted to emphasize that both the receiver and deceiver are active participants in the deception process. Both are constantly engaged in conscious and unconscious behaviors that relay their true intentions.
Buller and Burgoon initially based their theory of IDT on the four-factor model of deception developed by social psychologist Miron Zuckerman, who argues that the four components of deceit inevitably cause cognitive overload and therefore leakage. Zuckerman's four factors include the attempt to control information, which fosters behavior that can come across as too practiced, followed by physiological arousal as a result of deception. This arousal then leads to the third factor, felt emotions (usually guilt and anxiety), which can become noticeable to an observer. Additionally, the many cognitive factors and mental gymnastics that are going on during a deception often lead to nonverbal leakage cues, such as increased blinking and a higher pitched voice. Propositions: IDT's model of interpersonal deception has 21 verifiable propositions. Based on assumptions of interpersonal communication and deception, each proposition can generate a testable hypothesis. Although some propositions originated in IDT, many are derived from earlier research. The propositions attempt to explain the cognition and behavior of sender and receiver during the process of deception, from before interaction through interaction to the outcome after interaction. Propositions: Context and relationship IDT's explanations of interpersonal deception depend on the situation in which interaction occurs and the relationship between sender and receiver. 1. Sender and receiver cognition and behaviors vary, since deceptive communication contexts vary in access to social cues, immediacy, relationship, conversational demands and spontaneity. 2. In deceptive interchanges, sender and receiver cognition and behaviors vary; relationships vary in familiarity (informational and behavioral) and valence. Other factors before interaction Individuals approach deceptive exchanges with factors such as expectancy, knowledge, goals or intentions and behaviors reflecting their communication competence. IDT posits that these factors influence the deceptive exchange. 3. Compared with truth-tellers, deceivers engage in more strategic activity designed to manage information, behavior and image and have more nonstrategic arousal cues, negative and muted affect and non-involvement. Effects on sender's deception and fear of detection IDT posits that factors before the interaction influence the sender's deception and fear of detection. 4. Context moderates deception; increased interaction produces greater strategic activity (information, behavior and image management) and reduced nonstrategic activity (arousal or muted affect) over time. 5. Initial expectations of honesty are related to the degree of interactivity and the relationship between sender and receiver. 6. Deceivers' fear of detection and associated strategic activity are inversely related to expectations of honesty, a function of context and relationship quality. 7. Goals and motivation influence behavior. 8. As receivers' informational, behavioral and relational familiarity increase, deceivers have a greater fear of detection and exhibit more strategic information, behavior and image management and nonstrategic leakage behavior. 9. Skilled senders convey a truthful demeanor, with more strategic behavior and less nonstrategic leakage, better than unskilled ones. Effects on receiver cognition IDT also posits that factors before the interaction, combined with initial behavior, affect receiver suspicion and detection accuracy. 10.
Receiver judgment of sender credibility is related to receiver truth biases, context interactivity, sender encoding skills and sender deviation from expected patterns. 11. Detection accuracy is related to receiver truth biases, context interactivity, sender encoding skills, informational and behavioral familiarity, receiver decoding skills and sender deviation from expected patterns. Interaction patterns IDT describes receiver suspicion and sender reaction. 12. Receiver suspicion is displayed in a combination of strategic and nonstrategic behavior. 13. Senders perceive suspicion. 14. Suspicion, perceived or actual, increases senders' strategic and nonstrategic behavior. 15. Deception and suspicion displays change over time. 16. Reciprocity is the predominant interaction pattern between senders and receivers during interpersonal deception. Outcomes IDT posits that interaction between sender and receiver influences how credible the receiver thinks the sender is and how suspicious the sender thinks the receiver is. 17. Receiver detection accuracy, bias, and judgments of sender credibility after an interaction are functions of receiver cognition (suspicion and truth bias), receiver decoding skill and final sender behavior. 18. Sender perceived deception success is a function of final sender cognition (perceived suspicion) and receiver behavior. Strategic and Nonstrategic Linguistic Behavior: Strategic linguistic behavior: Information and image management is most relevant to language use during deception; there are three sub-strategies that can be used for this: Reticence (reserving or restraining) Reticence is a very common way of creating deception; it is withholding truthful information, and/or reducing the amount of specificity in content details. Vagueness and Uncertainty The message becomes evasive and ambiguous through language choices. Non-Immediacy Reduces the degree of directness and intensity of the interaction between the communicator and the object or the event communicated about. This has the effect of distancing senders from their messages. Receiver's role: Although most people believe they can spot deception, IDT posits that they cannot. A deceiver must manage his or her verbal and nonverbal cues to ensure that what they are saying appears true. According to IDT, the more socially aware a receiver is, the better he or she is at detecting deceit. Receiver's role: Humans have a predisposition to believe what they are told. This is referred to as a "truth bias." In a common social agreement, people are honest with one another and believe that others will be honest with them. If a deceiver begins a deceptive exchange with an accurate statement, the statement may induce the receiver to believe the rest of the deceiver's story is also true. The sender prepares the receiver to accept his or her information as truth, even if some (or all) of the dialogue is false. If the sender repeats the same tactic, the receiver will become more aware that the sender is lying. When suspicion is aroused in the receiver, there are a variety of ways that this suspicion can be expressed. Buller and Burgoon (1996) emphasized that there is no uniform receiver style for expressing suspicion; instead, suspicion is expressed in a variety of ways that they had identified in previous research. According to Buller et al. (1991), receivers often utilize follow-up questions to question their deceivers if they begin to detect deception. Buller et al.
found that this did not elicit as much suspicion as probes from nonsuspicious receivers. Burgoon et al. (1995) found that some receivers engaged in a more dominant interview style to engage with their deceiver, which represents a more aggressive and "unpleasant" style of questioning that aroused suspicion on the part of the deceiver. Emotion: Emotion plays a central role in IDT as a motivation and result of deception. Emotion can motivate deception, with the sender relying on relevant knowledge (informational, relational and behavioral familiarity) to achieve goals such as self-gratification, avoiding a negative emotional outcome or creating a negative emotional outcome for the target of deception. Emotion can be a result of deception, since a physical response occurs in the sender (usually arousal and negative affect). Emotion: Leakage The concept of leakage predates the development of IDT and was developed by Miron Zuckerman et al., who created a four-factor model to explain when and why leakage is apt to occur. Leakage in deception is manifested most overtly in nonverbal signals; studies indicate that over 90 percent of emotional meaning is communicated non-verbally. Humans are sensitive to body signals, and communication is often ambiguous; something is communicated verbally and its opposite non-verbally. Leakage occurs when nonverbal signals betray the true content of a contradictory verbal message. Facial expression is difficult to read, and the Facial Action Coding System (FACS) is a means of uncovering deception. Small facial movements, known as micro-expressions, can be detected in this system using action units. Emotion: Micro-expressions and action units Action units (AUs) can be examined frame by frame, since these micro-expressions are often rapid. Paul Ekman's research in facial deception has found several constants in certain expressions, with the action units for lip-corner pulling (AU12) and cheek-raising (AU6) serving as qualifiers for happiness in most people. Brow-lowering (AU4) and lip-stretching (AU20) are disqualifiers for happiness. According to Ekman, emotional leakage appears in these fleeting expressions. Emotion: Ekman's research has received much attention in the popular media, but it also has been heavily criticized on both experimental and theoretical grounds. His theory that micro-expressions are effective markers for detecting deception is no longer considered to be well-supported. One criticism is that the theory "confounds emotion and deception", as with use of the polygraph, in assuming that an innocent person and a guilty one will feel different emotions in a situation with severe possible outcomes. Concerns with such emotionally-based theories have led later researchers to develop theories based on cognition and cognitive processes. Emotion: Facial expression Seven basic emotions are communicated through facial expression: anger, fear, sadness, joy, disgust, surprise and contempt. These emotions are recognized universally. These expressions are innate or develop through socialization. Cultures have a variety of rules governing the social use of facial expression; for example, the Japanese discourage the display of negative emotions. Individuals may find it difficult to control facial expression, and the face may "leak" information about how they feel. Gaze People use eye contact to indicate threat, intimacy and interest.
Eye contact is used to regulate turn-taking in conversation, and indicates how interested the listener (receiver) is in what the speaker is saying. Receivers make eye contact about 70–75 percent of the time, with each contact averaging 7.8 seconds. Gesture Gestures are among the most culture-specific forms of nonverbal communication, and may lead to misinterpretation. Involuntary self-touching, such as touching the face, scratching, gripping the hands together or putting the hands in (or near) the mouth, occurs when people experience intense emotions such as depression, elation or extreme anxiety. Ekman and Friesen demonstrated gesture leakage by showing films of a depressed woman to a group, which was asked to judge the woman's mood. Those shown only the woman's face thought she was happy and cheerful, while the group who saw only her body thought she was tense and disturbed. Emotion: Touch Touch can reassure and indicate understanding. Humans touch one another in sexual intimacy, affiliation and understanding; in greetings and farewells; as an act of aggression, and to demonstrate dominance. According to Argyle in 1996, there "appear to be definite rules which permit certain kinds of touch, between certain people, on certain occasions only. Bodily contact outside these narrow limits is unacceptable". Criticism: DePaulo, Ansfield and Bell questioned IDT: "We cannot find the 'why' question in Buller and Burgoon's synthesis. There is no intriguing riddle or puzzle that needs to be solved, and no central explanatory mechanism is ever described." Although they praised Buller and Burgoon's 18 propositions as a comprehensive description of the timeline of deceptive interactions, they said the propositions lacked the interconnectedness and predictive power of a unifying theory. DePaulo et al. criticized IDT for failing to distinguish interactive communication (which emphasizes the situational and contextual aspects of communicative exchanges) from interpersonal communication, which emphasizes exchanges in which the sender and receiver make psychological predictions about the other's behavior based on specific prior knowledge; this conceptual ambiguity limited IDT's explanatory power. However, Buller and Burgoon responded to this type of critique, saying the theory "was not meant to advance a single explanatory mechanism but instead to fit a broad communicative perspective on the phenomenon and to include multiple causal mechanisms that fit a general interpersonal communication account of the process." Park and Levine (2015) provide additional commentary questioning IDT, stating that "because both interactive and noninteractive experiments lead to the same conclusions about truth-bias and accuracy regardless of interactivity, interactivity is not the all-important consideration as IDT claims." In IDT, a crucial emphasis is placed on interactivity in determining deception detection accuracy. However, Park and Levine do not see an empirical basis for this foundational claim of IDT.
Experiment: Buller and Burgoon asked participants to put themselves in the following situation: "You've been dating Pat for nearly three years and feel quite close in your relationship. Since Pat goes to a different school upstate, the two of you have agreed to date other people. Nevertheless, Pat is quite jealous and possessive. During the school year you see Pat only occasionally, but you call each other every Sunday and talk for over an hour. On Friday one of your friends invites you to a party on Saturday night, but the party is 'couples only' so you need a date. There's no way that Pat could come down for the weekend. You decide to ask someone from your class who you've been attracted to so that you can go to the party. The two of you go and have a great time. On Sunday afternoon, there's a knock on your door and it's Pat. Pat walks in and says, 'Decided to come down and surprise you, tried calling you all last night, but you weren't around. What were you doing?'" The researchers listed three possible responses: lying ("I was at the library getting ready for my theory exam"), telling part of the truth while omitting important details ("Went to a party at a friend's apartment") or being intentionally vague or evasive ("Went out for a while"). Online dating: Research on the use of deception in online dating has shown that people are generally truthful about themselves, with the exception of physical attributes, which they misrepresent to appear more attractive. Most online deception is subtle, consisting of slight exaggerations that represent people's attempts to portray themselves in the best possible light. Of all online contexts, online dating appears the most prone to deception. In general, no matter the setting, people are more likely to be deceptive when looking for a date than in other social situations. Online dating: Research suggests that while slight misrepresentations on online dating sites are quite common, major deceptions are actually rare. It seems that those who engage in online dating realize that while they want to make the best possible impression, if they want to pursue an offline relationship, they can't begin it with outright falsehoods that will quickly be revealed. One survey asked over 5,000 users of online dating sites how likely they were to misrepresent themselves in areas such as appearance and job information. The average rating on these items was a 2 on a 10-point scale, indicating a relatively low level of deception overall. Online dating: Some people are more prone to deceptive behavior online than others, such as those with high sensation-seeking tendencies, and those who show addictive behavior toward the Internet. Conversely, those who are introverted or have high tendencies for social anxiety are especially likely to be honest about their personalities online, revealing hidden aspects of the self that they would not normally show to others offline. According to Scientific American, "nine out of ten online daters will fib about their height, weight, or age", with men more likely to lie about height and women more likely to lie about weight. In addition, those high in the trait of self-monitoring are more likely to be dishonest on dating websites. In all aspects of their social lives, self-monitors are concerned with outward appearance and adapt their behavior to match the social situation.
Thus, they also tend to be more deceptive in their attempts to attract dates both offline and online. In a study conducted by Toma and Hancock, "less attractive people were found to be more likely to have chosen a profile picture in which they were significantly more attractive than they were in everyday life." Both genders used this strategy in online dating profiles, but women more so than men. Additionally, the researchers found that those deemed less attractive were more likely to express deception in the areas of physical attractiveness such as height and weight. Online dating: A qualitative study investigated deception in online dating. The study focused on four questions: (1) About what characteristics are online daters deceptive? (2) What motivation do online daters have for their deception of others in the online-dating environment? (3) What perceptions do online daters have about other daters' deceit towards them in the online-dating environment? (4) How does deception affect romantic relationships formed in the online-dating environment? In an online survey, data were collected from 15 open-ended questions. The study had 52 participants, ranging in age from 21 to 37, and found that most online daters consider themselves (and others) mostly honest in their online self-presentation. Online daters who used deception were motivated to do so by the desire to attract partners and project a positive self-image. Daters were willing to overlook deception in others if they viewed the dishonesty as a slight exaggeration or a characteristic of little value to the dater. Despite deception, participants believed that successful romantic relationships can develop in the online-dating environment.
**SOH-States of Humanity** SOH-States of Humanity: SOH is an abbreviation for States of Humanity and is an initiative of multimedia artist Alex Vermeulen, which led to an interdisciplinary Gesamtkunstwerk. the SOH concept: Since 1996 Alex Vermeulen has been developing "States of Humanity", a total-concept art project which consists of distinct parts. SOH focuses on themes regarding the counterpoints where the individual meets society: religion, violence, individualism, reflectivity, spirituality, contemplativity, perceptivity and sexuality. Alex Vermeulen uses films, photographs, sculptures, installations, and inter-disciplinary collaborations to represent the essence of our current "zeitgeist". He questions the mechanisms fundamental to its existence, its future direction, as well as the dilemmas we face today. the SOH concept: SOH is a comprehensive work of art to which every participant makes a subjective contribution, from their personal point of view and discipline. The various facets of the project come about because of this Gesamtkunstwerk. Results: So far, 29 SOH projects have been produced in collaboration with, among others, architect Greg Lynn, filmmaker Lodge Kerrigan, composer David Shea, performer Kate Strong, and author Robert Greene. These include books, installations, exhibitions, sculptures in public space, an opera, iBooks, video clips, and a feature film. Most notable projects: SOH1 the Architectural Film (in collaboration with 55 New Yorkers) SOH10 the Opera (composer David Shea, performer Kate Strong) SOH19 States of Nature (in collaboration with the Technical University Eindhoven)
**Solar eclipse of September 23, 2090** Solar eclipse of September 23, 2090: A total solar eclipse will occur on September 23, 2090. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A total solar eclipse occurs when the Moon's apparent diameter is larger than the Sun's, blocking all direct sunlight, turning day into darkness. Totality occurs in a narrow path across Earth's surface, with the partial solar eclipse visible over a surrounding region thousands of kilometres wide. Solar eclipse of September 23, 2090: This solar eclipse will be the first total solar eclipse visible from Great Britain since August 11, 1999, and the first visible from Ireland since May 22, 1724. The totality will be visible in southern Greenland, Valentia, West Cork, Poole, Newquay, Plymouth, Southampton, the Isle of Wight, northern France (including Paris and Rennes) and southern Belgium, and a partially eclipsed sun will be visible in Birmingham, London, Exeter, Cardiff, Belfast, Dublin, Weston-super-Mare, Bristol and Oxford. Related eclipses: Solar eclipses 2087–2090 This eclipse is a member of a semester series. An eclipse in a semester series of solar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit. Related eclipses: Tritos series This eclipse is a part of a tritos cycle, repeating at alternating nodes every 135 synodic months (≈ 3986.63 days, or 11 years minus 1 month). Their appearance and longitude are irregular due to a lack of synchronization with the anomalistic month (period of perigee), but groupings of 3 tritos cycles (≈ 33 years minus 3 months) come close (≈ 434.044 anomalistic months), so eclipses are similar in these groupings. Related eclipses: Metonic series The metonic series repeats eclipses every 19 years (6939.69 days), lasting about 5 cycles. Eclipses occur on nearly the same calendar date. In addition, the octon subseries repeats 1/5 of that, or every 3.8 years (1387.94 days). All eclipses in this table occur at the Moon's ascending node.
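As a quick arithmetic check of the cycle lengths quoted above, the day counts follow directly from the number of lunations in each cycle; the only assumed value here is the mean synodic month of about 29.5306 days:

```latex
\begin{aligned}
\text{semester: } & 6   \times 29.5306 \approx 177.2\ \text{days (about 177 days 4 hours)}\\
\text{tritos: }   & 135 \times 29.5306 \approx 3986.6\ \text{days (about 11 years minus 1 month)}\\
\text{Metonic: }  & 235 \times 29.5306 \approx 6939.7\ \text{days (about 19 years)}\\
\text{octon: }    & 47  \times 29.5306 \approx 1387.9\ \text{days (about 3.8 years)}
\end{aligned}
```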
**Continental shelf pump** Continental shelf pump: In oceanic biogeochemistry, the continental shelf pump is proposed to operate in the shallow waters of the continental shelves, acting as a mechanism to transport carbon (as either dissolved or particulate material) from surface waters to the interior of the adjacent deep ocean. Overview: Originally formulated by Tsunogai et al. (1999), the pump is believed to occur where the solubility and biological pumps interact with a local hydrography that feeds dense water from the shelf floor into sub-surface (at least subthermocline) waters in the neighbouring deep ocean. Tsunogai et al.'s (1999) original work focused on the East China Sea, and the observation that, averaged over the year, its surface waters represented a sink for carbon dioxide. This observation was combined with others of the distribution of dissolved carbonate and alkalinity and explained as follows: the shallowness of the continental shelf restricts convection of cooling water; as a consequence, cooling is greater for continental shelf waters than for neighbouring open ocean waters; this leads to the production of relatively cool and dense water on the shelf; the cooler waters promote the solubility pump and lead to an increased storage of dissolved inorganic carbon; this extra carbon storage is augmented by the increased biological production characteristic of shelves; and the dense, carbon-rich shelf waters sink to the shelf floor and enter the sub-surface layer of the open ocean via isopycnal mixing. Significance: Based on their measurements of the CO2 flux over the East China Sea (35 g C m−2 y−1), Tsunogai et al. (1999) estimated that the continental shelf pump could be responsible for an air-to-sea flux of approximately 1 Gt C y−1 over the world's shelf areas. Given that observational and modelling estimates suggest that the ocean currently takes up approximately 2 Gt C y−1 of anthropogenic CO2 emissions, and that these estimates are poor for the shelf regions, the continental shelf pump may play an important role in the ocean's carbon cycle. Significance: One caveat to this calculation is that the original work was concerned with the hydrography of the East China Sea, where cooling plays the dominant role in the formation of dense shelf water, and that this mechanism may not apply in other regions. However, it has been suggested that other processes may drive the pump under different climatic conditions. For instance, in polar regions, the formation of sea-ice results in the extrusion of salt that may increase seawater density. Similarly, in tropical regions, evaporation may increase local salinity and seawater density. Significance: The strong sink of CO2 at temperate latitudes reported by Tsunogai et al. (1999) was later confirmed in the Gulf of Biscay, the Middle Atlantic Bight and the North Sea. On the other hand, studies in the sub-tropical South Atlantic Bight reported a source of CO2 to the atmosphere. More recently, work has compiled and scaled available data on CO2 fluxes in coastal environments, and shown that, globally, marginal seas act as a significant CO2 sink (-1.6 mol C m−2 y−1; -0.45 Gt C y−1), in agreement with previous estimates. However, the global sink of CO2 in marginal seas could be almost fully compensated by the emission of CO2 (+11.1 mol C m−2 y−1; +0.40 Gt C y−1) from the ensemble of near-shore coastal ecosystems, mostly related to the emission of CO2 from estuaries (0.34 Gt C y−1).
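As a rough consistency check on the scaling above (the global shelf area of about 3 × 10^13 m² used here is an assumed round figure, not a value given in the text), extrapolating the East China Sea flux to the world's shelves reproduces the order of magnitude of the 1 Gt C y−1 estimate:

```latex
35\ \mathrm{g\,C\,m^{-2}\,y^{-1}} \times 3\times 10^{13}\ \mathrm{m^{2}}
\approx 1.05\times 10^{15}\ \mathrm{g\,C\,y^{-1}} \approx 1\ \mathrm{Gt\,C\,y^{-1}}
```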
Significance: An interesting application of this work has been examining the impact of sea level rise over the last de-glacial transition on the global carbon cycle. During the last glacial maximum, sea level was some 120 m (390 ft) lower than today. As sea level rose, the surface area of the shelf seas grew and, in consequence, the strength of the shelf sea pump should have increased.
**Meizu M6 miniPlayer** Meizu M6 miniPlayer: The M6 miniPlayer, from Meizu, is a flash-based portable media player that plays audio files in MP3, WMA, WAV, FLAC, APE and Ogg and is also capable of AVI video playback (using the XVID codec) on a 2.4-inch QVGA screen. The miniPlayer includes an FM tuner, voice recorder, calendar, stopwatch, calculator, a basic ebook reader for TXT files, and two games. Background: The M6 is one of Meizu's digital audio player products. It has been released only in certain parts of the world, including the United States, Australia, France, and Russia. Though the M6 supports many audio formats, the US release did not support the MP3 format because of licensing issues; workarounds exist via specific firmware upgrades. Dane-Elec formed a deal with Meizu to provide distribution of the M6, apparently ironing out the MP3 licensing issues. Many European models distributed by Dane-Elec had their FM tuner disabled because of EU import duties; this could also be remedied by a firmware update. The M6 has been touted as an "iPod killer" because of its capabilities and aesthetics. One notable characteristic of the Meizu M6 is its ability to function without proprietary file formats and procedures. Specifications: The following are some of the more important specifications regarding the Meizu M6: Software support: File transfer The M6 is connected to a computer via a USB 2.0 cable, upon which it is typically recognized as a mass storage device (starting with the 2.00x firmware series, MTP is also supported). Transferring media files and firmware upgrades is accomplished by simply dragging and dropping. Thus, no proprietary software is needed, allowing it to be a true cross-platform media player. It is confirmed that the Linux 2.6 kernel driver for UMS devices works with the M6. Software support: Video conversion For converting videos to the required Xvid format, Meizu provides a custom version of VirtualDub. There is also a Meizu profile available for another open source program, Iriverter, and Batman Video Converter is also available. Mac users can convert with the MPEG Streamclip video converter. Customization and variants: The miniPlayer allows the user to change a few display items such as the background image and font color. Unofficially, it is possible to modify the RESOURCE.BIN file to skin the player with different icons. A number of stick-on covers are available which allow the front surface and thumb pad to be colored. Two versions of the miniPlayer were originally produced: The TP version has a Toshiba screen with better color reproduction at the cost of lower brightness. It is also 2 mm shallower. This model is no longer produced. The SP version has a brighter and slightly cheaper Samsung screen. This model is still in production. The two versions have different firmware, and the screen does not work if the wrong one is loaded. The RESOURCE.BIN files are the same, however. There was also a special SP edition where the back metal plate was matte black instead of shiny metal. M6SL and M6SE A slimmer version of the miniPlayer, named Meizu M6SL (M6 "slim"), was released at the end of September 2007. The main differences from the original edition are the decreased thickness, 7 mm (like the M3 Music Card) instead of 10 mm, and new, better-quality, Wolfson-produced DACs.
**GeneWeb** GeneWeb: GeneWeb is a free multi-platform genealogy software tool created and owned by Daniel de Rauglaudre of INRIA. GeneWeb is accessed by a Web browser, either off-line or as a server in a Web environment. It uses very efficient techniques of relationship and consanguinity computing, developed in collaboration with Didier Rémy, research director at INRIA. GeneWeb is used as the engine for several public genealogy websites, including Geneanet, a collection of inter-searchable genealogical databases currently containing references to more than 225 million persons. GeneWeb: Notable features of GeneWeb include: High capacity: GeneWeb can allow multiple wizards to manage the genealogical database. GeneWeb can manage large databases: for example, the Roglo database contains over 9 million entries, managed by more than 200 wizards. Web Server: When GeneWeb runs on a computer connected to the internet, it can accept HTTP requests from web clients, generating and serving HTML web pages and linked objects (images, etc.). GEDCOM: GeneWeb supports import and export of GEDCOM files. UTF-8: GeneWeb supports UTF-8.
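The consanguinity and relationship computations mentioned above can be illustrated with the textbook kinship-coefficient recursion. The sketch below is illustrative only: it is not GeneWeb's actual implementation or its optimized algorithm, the pedigree data are invented, and it assumes individuals are numbered so that parents always have smaller IDs than their children.

```python
# Illustrative kinship/consanguinity sketch (standard textbook recursion,
# not GeneWeb's own algorithm). IDs are assumed to increase down the generations.
from functools import lru_cache

# individual -> (father, mother); None means an unknown parent (a founder)
PARENTS = {
    1: (None, None), 2: (None, None),   # grandparents
    3: (1, 2), 4: (1, 2),               # full siblings
    5: (None, None), 7: (None, None),   # unrelated spouses
    6: (3, 5), 8: (4, 7),               # 6 and 8 are first cousins
}

@lru_cache(maxsize=None)
def kinship(a, b):
    """Probability that alleles drawn at random from a and b are identical by descent."""
    if a is None or b is None:
        return 0.0
    if a == b:
        f, m = PARENTS.get(a, (None, None))
        return 0.5 * (1.0 + kinship(f, m))
    if a < b:                    # recurse through the later-born individual
        a, b = b, a
    f, m = PARENTS.get(a, (None, None))
    return 0.5 * (kinship(f, b) + kinship(m, b))

# The consanguinity (inbreeding) coefficient of a child equals the kinship of its parents.
print(kinship(6, 8))   # first cousins: 1/16 = 0.0625
```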
**Target lesion** Target lesion: In dermatology, a target lesion or bull's-eye lesion, named for its resemblance to the bull's-eye of a shooting target, is a rash with central clearing. It occurs in several diseases, as follows: Target lesions are the typical lesions of erythema multiforme, in which a vesicle is surrounded by an often hemorrhagic maculopapule. Erythema multiforme is often self-limited, of acute onset, resolves in three to six weeks, and has a cyclical pattern. Its lesions are multiform (polymorphous) and include macules, papules, vesicles, and bullae. Target lesion: Target lesions are also typical of Lyme disease. In the context of Lyme disease, the target lesion is synonymous with erythema migrans (erythema chronicum migrans), although not everyone who gets Lyme disease will have a target-shaped rash, and some will have no rash at all. Causes: Such lesions may be idiopathic or may follow infections, drug therapy, or immunodeficiency. Morphology: A target lesion consists of three zones: a dark centre consisting of a small papule, vesicle, or bulla (the iris); a pale intermediate zone; and a peripheral rim of erythema.
**Large Value Transfer System** Large Value Transfer System: The Large Value Transfer System, or LVTS, was the primary system in Canada for electronic wire transfers of large sums of money, and was operated by Payments Canada. It permitted the participating institutions and their clients to send large sums of money securely in real-time with complete certainty that the payment would settle. The system was replaced in September 2021 by a new high-value payment system called Lynx. Established in 1999, LVTS processed the majority of payments made every day in Canada, and was designed to work with funds in Canadian dollars (CAD). On a normal business day, it cleared and settled approximately CA$398 billion. Frequently, when settling the payments made through LVTS between each other, some banks found themselves with extra funds while others found themselves short; to come up with money, the banks were able to borrow it from each other for a day, or "overnight". The rate at which they borrowed is called the overnight rate, the target for which was set by the Bank of Canada as part of its monetary policy. LVTS was a real-time payment system: the recipient of the payment received it irrevocably in near real-time. As it settled on a deferred net basis at the end of each day, it was not a real-time gross settlement system. Participating institutions: As of August 2021, there were 16 institutions, including the Bank of Canada, participating in LVTS:
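To illustrate the deferred net settlement described above, the following minimal sketch (the banks and amounts are invented, and this is not Payments Canada's actual clearing logic) shows how many gross payments during a day collapse into a single net position per participant, to be funded or received at end of day:

```python
# Illustrative deferred-net-settlement sketch: gross payments are netted so each
# participant settles only one end-of-day position.
from collections import defaultdict

payments = [
    ("Bank A", "Bank B", 150.0),   # hypothetical CAD amounts (millions)
    ("Bank B", "Bank C", 90.0),
    ("Bank C", "Bank A", 200.0),
]

net = defaultdict(float)
for sender, receiver, amount in payments:
    net[sender] -= amount          # obligation owed
    net[receiver] += amount        # value received

for bank, position in net.items():
    # A negative position must be funded (e.g. by borrowing overnight);
    # a positive position is received at settlement.
    print(f"{bank}: net {position:+.1f}")
```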
**Ectoplasmic specialisation** Ectoplasmic specialisation: Ectoplasmic specializations are actin-related cell–cell junctions present in the testicular seminiferous epithelium and occur during spermatogenesis. These junctions are located at the Sertoli–Sertoli cell interface and the Sertoli–elongating spermatid interface, which occur during the seminiferous epithelial cycle of spermatogenesis. There must be extensive restructuring of anchoring junctions such as the ectoplasmic specializations within the testes. The restructuring of these junctions is important because it facilitates the migration of developing germ cells across the seminiferous epithelium.
**Equid gammaherpesvirus 5** Equid gammaherpesvirus 5: Equid gammaherpesvirus 5 (EHV-5), formerly Equine herpesvirus 5, is a species of virus in the genus Percavirus, subfamily Gammaherpesvirinae, family Herpesviridae, and order Herpesvirales. It is thought to be the cause of a chronic lung disease of adult horses, equine multinodular pulmonary fibrosis.
**Limit-experience** Limit-experience: Limit-experience (French: expérience limite) refers to actions which approach the limits of possible experience. This can be in terms of their intensity and seemingly impossible or paradoxical qualities. A limit-experience dissociates the subject from the experience that it exists in and identifies with, leading to a confrontation with the Real. The idea was proposed by Karl Jaspers and later by the French philosopher Georges Bataille, and subsequently became associated with the French philosophers Maurice Blanchot and Michel Foucault. Interpretations: Georges Bataille Reaching back to Charles Baudelaire and his poetics of paradoxical experience, such as in the line "O filthy grandeur! O sublime disgrace!" in poem 25 of Baudelaire's Les Fleurs du mal, Bataille was struck by what he saw as "the fact that these two complete contrasts were identical—divine ecstasy and extreme horror". He went on to challenge the conventions laid down by the surrealists at the time with an anti-idealist philosophy conditioned on what he called "the impossible", defined by breaking "rules" until something beyond all rules was reached. In this way, he strove for the limit-experience, what Foucault would later summarize as "the point of life which lies as close as possible to the impossibility of living, which lies at the limit or the extreme". Bataille sought to identify experiences of this kind, and to establish a philosophy that would convey how to live at the edge of limits where the ability to comprehend experience breaks down. Interpretations: Michel Foucault Foucault remarked that "the idea of a limit-experience that wrenches the subject from itself is what was important to me in my reading of Nietzsche, Bataille, and Blanchot". In this manner, the systems of philosophy and psychology and their conceptions of reality and the unified subject could be challenged and exposed in favor of what their systems and structures refused and excluded, viewing them from a standpoint informed by the potentials of limit-experience. How far Foucault's fascination with intense experiences goes in his entire body of work is the subject of debate, with the concept arguably being absent from his later and better-known work on sexuality and discipline, as well as strongly associated with the cult of the mad artist in Madness and Civilization. Interpretations: Jacques Lacan Influenced by Bataille, from whom he drew the idea of impossibility, Lacan explored the role of limit-experiences, such as "desire, boredom, confinement, revolt, prayer, sleeplessness ... and panic", in the formation of the Other. He also adopted some of Bataille's views on love, seeing it as predicated on man having previously "experienced the limit within which, like desire, he is bound". He saw masochism in particular as a limit-experience, an aspect which fed into his article "Kant avec Sade".
**Dynamical simulation** Dynamical simulation: Dynamical simulation, in computational physics, is the simulation of systems of objects that are free to move, usually in three dimensions according to Newton's laws of dynamics, or approximations thereof. Dynamical simulation is used in computer animation to assist animators in producing realistic motion, in industrial design (for example to simulate crashes as an early step in crash testing), and in video games. Body movement is calculated using time integration methods. Physics engines: In computer science, a program called a physics engine is used to model the behaviors of objects in space. These engines allow simulation of the way bodies of many types are affected by a variety of physical stimuli. They are also used to create dynamical simulations without having to know anything about physics. Physics engines are used throughout the video game and movie industry, but not all physics engines are alike. They are generally broken into real-time and high-precision engines, but these are not the only options. Most real-time physics engines are inaccurate and yield only the barest approximation of the real world, whereas most high-precision engines are far too slow for use in everyday applications. Physics engines: To understand how these physics engines are built, a basic understanding of physics is required. Physics engines are based on the actual behaviors of the world as described by classical mechanics. Engines do not typically account for modern mechanics (see Theory of relativity and quantum mechanics) because most visualization deals with large bodies moving relatively slowly, but the most complicated engines perform calculations for modern mechanics as well as classical mechanics. The models used in dynamical simulations determine how accurate these simulations are. Particle model: The first model which may be used in physics engines governs the motion of infinitesimal objects with finite mass called "particles." This equation, called Newton's second law (see Newton's laws) or the definition of force, is the fundamental relation governing all motion: $\vec{F} = m\vec{a}$. This equation will allow us to fully model the behavior of particles, but it is not sufficient for most simulations because it does not account for the rotational motion of rigid bodies. This is the simplest model that can be used in a physics engine and was used extensively in early video games. Inertial model: Bodies in the real world deform as forces are applied to them, so we call them "soft," but often the deformation is negligibly small compared to the motion, and it is very complicated to model, so most physics engines ignore deformation. A body that is assumed to be non-deformable is called a rigid body. Rigid body dynamics deals with the motion of objects that cannot change shape, size, or mass but can change orientation and position. Inertial model: To account for rotational energy and momentum, we must describe how force is applied to the object using a moment, and account for the mass distribution of the object using an inertia tensor. We describe these complex interactions with an equation somewhat similar to the definition of force above: $\frac{d(I\vec{\omega})}{dt} = \sum_{j=1}^{N} \vec{\tau}_j$, where $I$ is the central inertia tensor, $\vec{\omega}$ is the angular velocity vector, and $\vec{\tau}_j$ is the moment of the jth external force about the mass center. Inertial model: The inertia tensor describes the location of each particle of mass in a given object in relation to the object's center of mass.
This allows us to determine how an object will rotate depending on the forces applied to it. This angular motion is quantified by the angular velocity vector. As long as we stay below relativistic speeds (see Relativistic dynamics), this model will accurately simulate all relevant behavior. This method requires the physics engine to solve six ordinary differential equations at every instant we want to render, which is a simple task for modern computers. Euler model: The inertial model is much more complex than we typically need, but it is the simplest to use. In this model, we do not need to change our forces or constrain our system. However, if we make a few intelligent changes to our system, simulation will become much easier, and our calculation time will decrease. The first constraint will be to put each torque in terms of the principal axes. This makes each torque much more difficult to program, but it simplifies our equations significantly. When we apply this constraint, we diagonalize the moment of inertia tensor, which simplifies our three equations into a special set of equations called Euler's equations. These equations describe all rotational momentum in terms of the principal axes: $I_1\dot{\omega}_1 + (I_3 - I_2)\,\omega_2\omega_3 = N_1$, $I_2\dot{\omega}_2 + (I_1 - I_3)\,\omega_3\omega_1 = N_2$, $I_3\dot{\omega}_3 + (I_2 - I_1)\,\omega_1\omega_2 = N_3$, where the $N$ terms are applied torques about the principal axes, the $I$ terms are the principal moments of inertia, and the $\omega$ terms are angular velocities about the principal axes. The drawback to this model is that all the computation is on the front end, so it is still slower than we would like. The real usefulness is not apparent because it still relies on a system of non-linear differential equations. To alleviate this problem, we have to find a method that can remove the second term from the equation. This will allow us to integrate much more easily. The easiest way to do this is to assume a certain amount of symmetry. Symmetric/torque model: The two types of symmetric objects that will simplify Euler's equations are "symmetric tops" and "symmetric spheres." The first assumes one degree of symmetry, which makes two of the $I$ terms equal. These objects, like cylinders and tops, can be expressed with one very simple equation and two slightly simpler equations. This does not do us much good, because with one more symmetry we can get a large jump in speed with almost no change in appearance. The symmetric sphere makes all of the $I$ terms equal (the scalar moment of inertia), which makes all of these equations simple: $I\dot{\omega}_1 = N_1$, $I\dot{\omega}_2 = N_2$, $I\dot{\omega}_3 = N_3$, where the $N$ terms are applied torques about the principal axes, the $\omega$ terms are angular velocities about the principal axes, and $I$ is the scalar moment of inertia: $I \overset{\mathrm{def}}{=} \int_V r^2 \, dm = \iiint_V r^2 \rho(v)\, dv = \iiint_V r^2(x,y,z)\, \rho(x,y,z)\, dx\, dy\, dz$, where $V$ is the volume region of the object, $r$ is the distance from the axis of rotation, $m$ is mass, $v$ is volume, $\rho$ is the pointwise density function of the object, and $x, y, z$ are the Cartesian coordinates. These equations allow us to simulate the behavior of an object that can spin in a way very close to the method used to simulate motion without spin. This is a simple model, but it is accurate enough to produce realistic output in real-time dynamical simulations. It also allows a physics engine to focus on the changing forces and torques rather than on varying inertia.
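A minimal numerical sketch of the symmetric-sphere model described above follows. It is illustrative only: it assumes explicit Euler time stepping, a constant applied force and torque, and invented parameter values, and it omits orientation tracking (which would normally be handled with a quaternion or rotation matrix).

```python
# Minimal sketch of the "symmetric sphere" rigid-body model: F = m a for
# translation and I dω/dt = N for rotation, integrated with explicit Euler steps.
import numpy as np

def simulate(m, I, force, torque, dt=0.01, steps=100):
    """Integrate translational and rotational motion of a symmetric sphere."""
    pos = np.zeros(3)      # position of the centre of mass
    vel = np.zeros(3)      # linear velocity
    omega = np.zeros(3)    # angular velocity about the principal axes
    for _ in range(steps):
        acc = force / m            # Newton's second law
        alpha = torque / I         # all principal moments equal, so N = I dω/dt
        vel += acc * dt
        pos += vel * dt
        omega += alpha * dt
        # (orientation update omitted in this sketch)
    return pos, vel, omega

if __name__ == "__main__":
    # Example: 2 kg sphere, I = 0.8 kg·m², constant force along x, torque about z.
    p, v, w = simulate(2.0, 0.8, np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.5]))
    print(p, v, w)
```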
**Perspective control** Perspective control: Perspective control is a procedure for composing or editing photographs to better conform with the commonly accepted distortions in constructed perspective. The control would (1) make all lines that are vertical in reality vertical in the image, including columns, vertical edges of walls and lampposts, and (2) make all parallel lines (such as the four horizontal edges of a cubic room) cross in one point. Keeping verticals vertical is a commonly accepted distortion in constructed perspective: perspective is based on the notion that more distant objects are represented as smaller on the page; however, even though the top of a cathedral tower is in reality further from the viewer than the base of the tower (due to the vertical distance), constructed perspective considers only the horizontal distance and treats the top and bottom as the same distance away. Perspective distortion occurs in photographs when the film plane is not parallel to lines that are required to be parallel in the photo. A common case is when a photo is taken of a tall building from ground level by tilting the camera backwards: the building appears to fall away from the camera. Perspective control: The popularity of amateur photography has made distorted photos made with cheap cameras so familiar that many people do not immediately realise the distortion. This "distortion" is relative only to the accepted norm of constructed perspective (where vertical lines in reality do not converge in the constructed image), which in itself is distorted from a true perspective representation (where lines that are vertical in reality would begin to converge above and below the horizon as they become more distant from the viewer). At exposure: Professional cameras for which perspective control is important control the perspective at exposure by raising the lens parallel to the film. There is more information on this in the view camera article. At exposure: Most large format (4x5 and up) cameras have this feature, as well as plane of focus control built into the camera body in the form of flexible bellows and moveable front (lens) and rear (film holder) elements. Thus any focal length of lens mounted on a view camera or field camera, and on many press cameras, can be used with perspective control. At exposure: Some interchangeable lens medium format, 35 mm film SLR, and digital SLR camera systems have PC, shift, or tilt/shift lens options which allow perspective control and, in the case of a tilt/shift lens, plane of focus control, but only at a specific focal length. In the darkroom: A darkroom technician can correct perspective distortion in the printing process. It is usually done by exposing the paper at an angle to the film, with the paper raised toward the part of the image that is larger, therefore not allowing the light from the enlarger to spread as much as on the other side of the exposure. In the darkroom: The process is known as rectification printing, and is done using a rectifying printer (transforming printer), which involves rotating the negative and/or easel. Restoring parallelism to verticals (for instance) is easily done by tilting one plane, but if the focal length of the enlarger is not suitably chosen, the resulting image will have vertical distortion (compression or stretching). For correct perspective correction, the proper focal length (specifically, angle of view) must be chosen so that the enlargement replicates the perspective of the camera.
During digital post-processing: Digital post-processing software provides the means to correct converging verticals and other distortions introduced at image capture. During digital post-processing: Adobe Photoshop and GIMP have several "transform" options to achieve, with care, the desired control without any significant degradation in the overall image quality. Photoshop CS2 and subsequent releases include perspective correction as part of the Lens Distortion Correction Filter; DxO Optics Pro from DxO Labs includes perspective correction; while GIMP (as of 2.6) does not include a specialized tool for correcting perspective, though a plug-in, EZ Perspective, is available. RawTherapee, a free and open-source raw converter, also includes horizontal and vertical perspective correction tools. Note that because the mathematics of projective transforms depends on the angle of view, perspective tools require that the angle of view or 35 mm equivalent focal length be entered, though this can often be determined from Exif metadata. It is commonly suggested to correct perspective using a general projective transformation tool, correcting vertical tilt (converging verticals) by stretching out the top; this is the "Distort Transform" in Photoshop, and the "perspective tool" in GIMP. However, this introduces vertical distortion – objects appear squat (vertically compressed, horizontally extended) – unless the vertical dimension is also stretched. This effect is minor for small angles, and can be corrected by hand, manually stretching the vertical dimension until the proportions look right, but it is done automatically by specialized perspective transform tools. During digital post-processing: An alternative interface, found in Photoshop (CS and subsequent releases), is the "perspective crop", which enables the user to perform perspective control with the cropping tool, setting each side of the crop to independently determined angles; this can be more intuitive and direct. Other software with mathematical models of how lenses and different types of optical distortion affect the image can correct this by calculating the characteristics of a lens and re-projecting the image in a number of ways (including non-rectilinear projections). An example of this kind of software is the panorama creation suite Hugin. However, these techniques do not enable the recovery of lost spatial resolution in the more distant areas of the subject, or the recovery of lost depth of field due to the angle of the film/sensor plane to the subject. These transforms involve interpolation, as in image scaling, which degrades the image quality, in particular blurring high-frequency detail. How significant this is depends on the original image resolution, degree of manipulation, print/display size, and viewing distance, and perspective correction must be traded off against preserving high-frequency detail. In virtual environments: Architectural images are commonly "rendered" from 3D computer models, for use in promotional material. These use virtual cameras to create the images, which normally have modifiers capable of correcting (or distorting) the perspective to the artist's taste. See 3D projection for details.
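As an illustration of the general projective-transform approach described above, the following sketch uses OpenCV as one possible tool; the file name and the corner coordinates are made up for the example. It maps a hand-picked quadrilateral around a leaning facade onto a true rectangle, which makes the converging verticals parallel.

```python
import cv2
import numpy as np

# Load a photo in which a building's verticals converge toward the top.
img = cv2.imread("facade.jpg")          # hypothetical input file
h, w = img.shape[:2]

# Four image points marking the corners of the leaning facade,
# ordered top-left, top-right, bottom-right, bottom-left (hand-picked here).
src = np.float32([[420, 120], [880, 150], [980, 900], [310, 890]])

# Where those corners should end up: a rectangle, so the verticals come out
# parallel. Stretching the vertical dimension as well avoids the "squat" look
# mentioned above.
dst = np.float32([[300, 100], [1000, 100], [1000, 950], [300, 950]])

# A homography (projective transform) maps one quadrilateral onto the other.
M = cv2.getPerspectiveTransform(src, dst)
corrected = cv2.warpPerspective(img, M, (w, h))

cv2.imwrite("facade_corrected.jpg", corrected)
```

As the article notes, the warp involves interpolation, so the corrected image trades some high-frequency detail for the straightened geometry.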
**N-tert-Butylbenzenesulfinimidoyl chloride** N-tert-Butylbenzenesulfinimidoyl chloride: N-tert-Butylbenzenesulfinimidoyl chloride is a useful oxidant for organic synthesis reactions. It is a good electrophile, and the sulfimide S=N bond can be attacked by nucleophiles, such as alkoxides, enolates, and amide ions. The nitrogen atom in the resulting intermediate is basic, and can abstract an α-hydrogen to create a new double bond. Preparation: This reagent can be synthesized quickly and in near-quantitative yield by reacting phenyl thioacetate with tert-butyldichloroamine in hot benzene. After the reaction is complete, the product can be isolated as a yellow, moisture-sensitive solid by vacuum distillation. Mechanism: The first two steps in an oxidation reaction involving N-tert-butylbenzenesulfinimidoyl chloride are similar to a nucleophilic acyl substitution reaction. A nucleophile, such as an alkoxide (1), attacks the S=N bond in 2. The resulting intermediate (3) collapses and ejects chloride ion, which is a good leaving group. The resulting sulfimide has two resonance forms, 4a and 4b. Because of this, the nitrogen is basic, and via a five-membered ring transition state, it can abstract the hydrogen adjacent to the oxygen. This forms a new C=O bond and ejects a neutral sulfenamide (5), giving ketone 6 as the product. N-tert-Butylbenzenesulfinimidoyl chloride reacts with enolates, amides, and primary alkoxides by the same general mechanism. Mechanism: The Swern oxidation, which converts primary and secondary alcohols to aldehydes and ketones, respectively, also uses a sulfur-containing compound (DMSO) as the oxidant and proceeds by a similar mechanism. In the Swern oxidation, elimination also occurs via a five-membered ring transition state, but the basic species is a sulfur ylide instead of a negatively charged nitrogen. Several other oxidation reactions also make use of DMSO as the oxidant and pass through a similar transition state. Reactions: Reacting an aldehyde with a Grignard reagent or organolithium and treating the resulting secondary alkoxide with N-tert-butylbenzenesulfinimidoyl chloride is a convenient one-pot reaction for converting aldehydes to ketones. While Grignards can be used for this reaction, organolithium compounds give higher yields, due to the higher reactivity of a lithium alkoxide compared to the corresponding magnesium salt. In some cases, an equivalent of DMPU, a Lewis base, will increase yields. For example, treating benzaldehyde with n-butyllithium and N-tert-butylbenzenesulfinimidoyl chloride in THF gives 1-phenyl-1-pentanone in good yield. Reactions: N-tert-Butylbenzenesulfinimidoyl chloride can also be used to synthesize imines from amines. Imines synthesized in this fashion have been shown to undergo a one-pot Mannich reaction with 1,3-dicarbonyl compounds, such as malonate esters and 1,3-diketones. In one example, Cbz-protected benzylamine is deprotonated using n-butyllithium, then treated with N-tert-butylbenzenesulfinimidoyl chloride to form the protected imine. Dimethyl malonate acts as the nucleophile and reacts with the imine to give the final product, a Mannich base.
**Lumiliximab** Lumiliximab: Lumiliximab is an IgG1κ monoclonal antibody that targets CD23. It acts as an immunomodulator and was awarded orphan drug status and fast track designation by the FDA. It was investigated in Phase II/III clinical trials for the treatment of chronic lymphocytic leukemia. It has also been studied for use in allergic asthma. The drug is a chimeric antibody from Macaca irus and Homo sapiens. Lumiliximab was developed by IDEC Pharmaceuticals, which was acquired by Biogen. Lumiliximab: Clinical trials for CLL were terminated in 2010, and for allergic asthma in 2007. Published results from the CLL clinical trial failed to meet the primary endpoints.
**Parbelos** Parbelos: The parbelos is a figure similar to the arbelos, but instead of three half circles it uses three parabola segments. More precisely, the parbelos consists of three parabola segments that each have a height equal to one fourth of the width at their base. The two smaller parabola segments are placed next to each other with their bases on a common line, and the largest parabola segment is placed over the two smaller ones such that its width is the sum of the widths of the smaller ones. Parbelos: The parbelos has a number of properties which are somewhat similar or even identical to some of the properties of the arbelos. For instance, the following two properties are identical to those of the arbelos: The arc length of the outer parabola is equal to the sum of the arc lengths of the inner parabolas. Parbelos: In a nested parbelos construction, with the inner parabola segments being parbeloses themselves, the two innermost parabola segments adjacent to the cusp of the outer parbelos are congruent, that is, of equal size. The quadrilateral BM2MM1 formed by the inner cusp B and the midpoints M, M1, M2 of the three parabola arcs is a parallelogram, the area of which relates to the area of the parbelos as follows:
$$\frac{A_{\text{parallelogram}}}{A_{\text{parbelos}}} = \frac{3}{4}$$
The four tangents at the three cusps of the parbelos intersect in four points, which form a rectangle called the tangent rectangle. The circumcircle of the tangent rectangle intersects the base side of the outer parabola segment at its midpoint, which is the focus of the outer parabola. One diagonal of the tangent rectangle lies on a tangent to the outer parabola, and their common point is identical to the tangent's point of intersection with the perpendicular to the base at the inner cusp. For the area of the tangent rectangle the following equation holds:
$$\frac{A_{\text{rectangle}}}{A_{\text{parbelos}}} = \frac{3}{2}$$
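The two area ratios can be checked with elementary bookkeeping. The following short derivation sketch uses our own notation (a and b for the widths of the inner bases, so the outer base has width a + b) together with Archimedes' rule that a parabolic segment of base w and height h has area 2wh/3; here h = w/4, so each segment has area w²/6.

```latex
% Derivation sketch (notation a, b is ours, not the article's).
\begin{align*}
A_{\text{parbelos}} &= \frac{(a+b)^2 - a^2 - b^2}{6} = \frac{ab}{3},\\
A_{\text{parallelogram}} &= \frac{ab}{4}
  \quad\Longrightarrow\quad
  \frac{A_{\text{parallelogram}}}{A_{\text{parbelos}}} = \frac{3}{4},\\
A_{\text{rectangle}} &= \frac{ab}{2}
  \quad\Longrightarrow\quad
  \frac{A_{\text{rectangle}}}{A_{\text{parbelos}}} = \frac{3}{2}.
\end{align*}
```

The parallelogram area follows from the coordinates of the inner cusp and the three arc midpoints, and the rectangle area from the four cusp tangents, which have slopes ±1 because every segment's height is one fourth of its base.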
**Malay units of measurement** Malay units of measurement: Units of measurement used in Malaysia and neighbouring countries include the kati, a unit of mass, and the gantang, a unit of volume. Mass: For mass, the kati (catty) equals 0.6 kg. Another unit is the picul, which equals 60 kg, i.e. 100 kati. Volume: The gantang is equivalent to an imperial gallon, or 4.54609 cubic decimetres.
**Water salute** Water salute: A water salute is a ceremonial display performed on special occasions. It typically consists of a vehicle travelling under plumes of water expelled by one or more fire-fighting vehicles, as a mark of respect or appreciation. Water salute: At an airport, typically an even number of airport crash tender fire-fighting vehicles arrange themselves perpendicularly along the sides of a taxiway or apron; they emit coordinated plumes of water that form an arch (or series of arches) as an aircraft passes. Symbolically, the procession looks similar to a bridal party walking under a wedding arch or the saber arch at a military wedding. Water salute: Water salutes have been used to mark the retirement of a senior pilot or air traffic controller, the first or last flight of an airline to an airport, the first or last flight of a specific type of aircraft, as a token of respect for the remains of soldiers killed in action, or other notable events. When Concorde flew its last flight in 2003 from John F. Kennedy International Airport, red, white and blue coloured plumes were used. Water salutes are also used for ships and other watercraft, with water being delivered by fireboats. This is often done for the first or last visit or retirement of a senior captain, the first or last cruise of a ship, the visit of a warship, or other ceremonial occasions. A notable example was the water salute to HMS Hermes (R12) as she returned to Southampton following her part in the Falklands War.
**ADaMSoft** ADaMSoft: ADaMSoft is free and open-source statistical software developed in Java that can run on any platform supporting Java. History: ADaMSoft was started by Marco Scarnò as a simple prototype of WinIDAMS, the statistical software developed by UNESCO. It later proved useful for several activities of the CASPUR statistical group (ADaMS group). The software was developed further, tested, and finally opened to the web community. Features: Statistical methods: ADaMSoft can perform a wide range of analytical methods, including neural networks (MLP), graphs, data mining, linear regression, logistic regression, methods for statistical classification, record linkage, decision trees, cluster analysis, data editing and imputation, principal component analysis, and correspondence analysis. Data sources: It can read and write statistical data from and to various sources, including text files, Excel spreadsheets, ODBC data sources, MySQL, PostgreSQL, and Oracle. Web Application Server: By using the ADaMSoft Web Application Server it is possible to use all the software's facilities through the web; in other words, internet users can access the ADaMSoft procedures without having the software installed.
**Epithemia** Epithemia: Epithemia is a genus of diatoms belonging to the family Rhopalodiaceae. The genus has a cosmopolitan distribution. Its members have recently been linked to nitrogen fixation and can be a possible indicator of eutrophication. This is because levels of Epithemia "containing cyanobacteria endosymbionts, decreased with increased ambient inorganic N concentrations" (Stancheva 2013). Concentrations of members of the genus Epithemia existing with cyanobacteria endosymbionts would mean that there is more fixed nitrogen in the ecosystem; the genus could thus act as an early indicator of nutrient overload. Species: Epithemia alpestris Kützing, 1844; Epithemia alpestris W.Smith, 1853; Epithemia anasthasiae Pantocsek, 1902
**Root (linguistics)** Root (linguistics): A root (or root word) is the core of a word that is irreducible into more meaningful elements. In morphology, a root is a morphologically simple unit which can be left bare or to which a prefix or a suffix can attach. The root word is the primary lexical unit of a word, and of a word family (this root is then called the base word), which carries aspects of semantic content and cannot be reduced into smaller constituents. Root (linguistics): Content words in nearly all languages contain, and may consist only of, root morphemes. However, sometimes the term "root" is also used to describe the word without its inflectional endings, but with its lexical endings in place. For example, chatters has the inflectional root or lemma chatter, but the lexical root chat. Inflectional roots are often called stems, and a root in the stricter sense, a root morpheme, may be thought of as a monomorphemic stem. Root (linguistics): The traditional definition allows roots to be either free morphemes or bound morphemes. Root morphemes are the building blocks for affixation and compounds. However, in polysynthetic languages with very high levels of inflectional morphology, the term "root" is generally synonymous with "free morpheme". Many such languages have a very restricted number of morphemes that can stand alone as a word: Yup'ik, for instance, has no more than two thousand. Root (linguistics): The root is conventionally indicated using the mathematical symbol √; for instance, the Sanskrit root "√bhū-" means the root "bhū-". Examples: The root of a word is a unit of meaning (morpheme) and, as such, it is an abstraction, though it can usually be represented alphabetically as a word. For example, it can be said that the root of the English verb form running is run, or the root of the Spanish superlative adjective amplísimo is ampli-, since those words are derived from the root forms by simple suffixes that do not alter the roots in any way. In particular, English has very little inflection and a tendency to have words that are identical to their roots. But more complicated inflection, as well as other processes, can obscure the root; for example, the root of mice is mouse (still a valid word), and the root of interrupt is, arguably, rupt, which is not a word in English and only appears in derivational forms (such as disrupt, corrupt, rupture, etc.). The root rupt can be written as if it were a word, but it is not. Examples: This distinction between the word as a unit of speech and the root as a unit of meaning is even more important in the case of languages where roots have many different forms when used in actual words, as is the case in Semitic languages. In these, roots (semitic roots) are formed by consonants alone, and speakers elaborate different words (belonging potentially to different parts of speech) from the root by inserting different vowels. For example, in Hebrew, the root ג-ד-ל g-d-l represents the idea of largeness, and from it we have gadol and gdola (masculine and feminine forms of the adjective "big"), gadal "he grew", higdil "he magnified" and magdelet "magnifier", along with many other words such as godel "size" and migdal "tower". Examples: Roots and reconstructed roots can become the tools of etymology. Secondary roots: Secondary roots are roots with changes in them, producing a new word with a slightly different meaning. In English, a rough equivalent would be to see conductor as a secondary root formed from the root to conduct. 
In abjad languages, the most familiar of which are Arabic and Hebrew, in which families of secondary roots are fundamental to the language, secondary roots are created by changes in the roots' vowels, by adding or removing the long vowels a, i, u, e and o. (Notice that Arabic does not have the vowels e and o.) In addition, secondary roots can be created by prefixing (m−, t−), infixing (−t−), or suffixing (−i, and several others). There is no rule in these languages on how many secondary roots can be derived from a single root; some roots have few, but other roots have many, not all of which are necessarily in current use. Secondary roots: Consider the Arabic language: مركز [mrkz] or [markaza] meaning ‘centralized (masculine, singular)’, from [markaz] ‘centre’, from [rakaza] ‘plant into the earth, stick up (a lance)’ ( ر-ك-ز | r-k-z). This in turn has derived words مركزي [markaziy], meaning 'central', مركزية [markaziy:ah], meaning 'centralism' or 'centralization', and لامركزية, [la:markaziy:ah] 'decentralization'. أرجح [rjh] or [ta'arjaħa] meaning ‘oscillated (masculine, singular)’, from ['urju:ħa] ‘swing (n)’, from [rajaħa] ‘weighed down, preponderated (masculine, singular)’ ( ر-ج-ح | r-j-ħ). Secondary roots: محور [mhwr] or [tamaħwara] meaning ‘centred, focused (masculine, singular)’, from [mihwar] meaning ‘axis’, from [ħa:ra] ‘turned (masculine, singular)’ (ح-و-ر | h-w-r). Secondary roots: مسخر [msxr], تمسخر [tamasxara] meaning ‘mocked, made fun (masculine, singular)', from مسخرة [masxara] meaning ‘mockery’, from سخر [saxira] ‘mocked (masculine, singular)’ (derived from س-خ-ر [s-x-r]). Similar cases may be found in other Semitic languages such as Hebrew, Syriac, Aramaic, Maltese and, to a lesser extent, Amharic. Similar cases occur in Hebrew, for example Israeli Hebrew מ-ק-מ‎ √m-q-m ‘locate’, which derives from Biblical Hebrew מקום‎ måqom ‘place’, whose root is ק-ו-מ‎ √q-w-m ‘stand’. A recent example introduced by the Academy of the Hebrew Language is מדרוג‎ midrúg ‘rating’, from מדרג‎ midrág, whose root is ד-ר-ג‎ √d-r-g ‘grade’. According to Ghil'ad Zuckermann, "this process is morphologically similar to the production of frequentative (iterative) verbs in Latin, for example: iactito ‘to toss about’ derives from iacto ‘to boast of, keep bringing up, harass, disturb, throw, cast, fling away’, which in turn derives from iacio ‘to throw, cast’ (from its past participle iactum). Consider also Rabbinic Hebrew ת-ר-מ‎ √t-r-m ‘donate, contribute’ (Mishnah: T’rumoth 1:2: ‘separate priestly dues’), which derives from Biblical Hebrew תרומה‎ t'rūmå ‘contribution’, whose root is ר-ו-מ‎ √r-w-m ‘raise’; cf. Rabbinic Hebrew ת-ר-ע‎ √t-r-' ‘sound the trumpet, blow the horn’, from Biblical Hebrew תרועה‎ t'rū`å ‘shout, cry, loud sound, trumpet-call’, in turn from ר-ו-ע‎ √r-w-`." Category-neutral roots: Decompositional generative frameworks suggest that roots hold little grammatical information and can be considered "category-neutral". Category-neutral roots are roots without any inherent lexical category but with some conceptual content that becomes evident depending on the syntactic environment. The ways in which these roots gain lexical category are discussed in Distributed Morphology and the Exoskeletal Model.
Category-neutral roots: Theories adopting a category-neutral approach have not, as of 2020, reached a consensus about whether these roots contain a semantic type but no argument structure, neither semantic type nor argument structure, or both semantic type and argument structure. In support of the category-neutral approach, data from English indicates that the same underlying root appears as a noun and a verb, with or without overt morphology. Category-neutral roots: In Hebrew, the majority of roots consist of segmental consonants √CCC. Arad (2003) describes how the consonantal root is turned into a word by pattern morphology: the root is turned into a verb when put into a verbal environment where the head bears the "v" feature (the pattern). Consider the root √š-m-n (ש-מ-נ). Although the words derived from it vary semantically, the general meaning of a greasy, fatty material can be attributed to the root. Category-neutral roots: Furthermore, Arad states that there are two types of languages in terms of root interpretation. In languages like English, the root is assigned one interpretation, whereas in languages like Hebrew, the root can form multiple interpretations depending on its environment. This suggests a difference in language acquisition between these two languages: English speakers would need to learn two roots in order to understand two different words, whereas Hebrew speakers would learn one root for two or more words. Category-neutral roots: Alexiadou and Lohndal (2017) advance the claim that languages have a typological scale when it comes to roots and their meanings, and state that Greek lies in between Hebrew and English.
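As a toy illustration of the root-and-pattern derivation discussed above, the following sketch interleaves the consonants of a root with a vocalic pattern to produce the surface forms cited earlier for ג-ד-ל g-d-l. This is a simplification for illustration only: the transliterations are approximate and the helper function is ours, not a real morphological analyzer.

```python
def apply_pattern(root, pattern):
    """Interleave a consonantal root with a vocalic pattern.

    In the pattern, the digits 1, 2, 3 stand for the root's first, second
    and third consonant; everything else (vowels, affixes) is copied
    through. A toy model of Semitic root-and-pattern morphology.
    """
    out = []
    for ch in pattern:
        if ch in "123":
            out.append(root[int(ch) - 1])
        else:
            out.append(ch)
    return "".join(out)

gdl = ("g", "d", "l")                 # root g-d-l, 'largeness'
print(apply_pattern(gdl, "1a2o3"))    # gadol  'big'
print(apply_pattern(gdl, "1a2a3"))    # gadal  'he grew'
print(apply_pattern(gdl, "hi12i3"))   # higdil 'he magnified'
```

In this picture, a pattern such as "1a2o3" plays the role of the nominal or verbal environment that gives the bare root both its vocalism and its lexical category.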
**3-Methoxy-4-ethoxyphenethylamine** 3-Methoxy-4-ethoxyphenethylamine: MEPEA, or 3-methoxy-4-ethoxyphenethylamine, is a lesser-known psychedelic drug. MEPEA was first synthesized by Alexander Shulgin. In his book PiHKAL (Phenethylamines i Have Known And Loved), the minimum dosage is listed as 300 mg, and the duration unknown. MEPEA produces a light lifting feeling and a +1 on the Shulgin Rating Scale. Very little data exists about the pharmacological properties, metabolism, and toxicity of MEPEA.
**Foldase** Foldase: In molecular biology, foldases are a particular kind of molecular chaperones that assist the non-covalent folding of proteins in an ATP-dependent manner. Examples of foldase systems are the GroEL/GroES and the DnaK/DnaJ/GrpE system.
**MEDINA** MEDINA: MEDINA (short for Model EDitor Interactive for Numerical Simulation Analysis) is a universal pre-/postprocessor for finite element analysis. The development of MEDINA started in the early 1990s at Daimler-Benz AG and was continued at debis Systemhaus. Since 2001 MEDINA has been supported and developed by T-Systems International GmbH. The current release is MEDINA Rel. 9.0.1.2. Architecture and interfaces: MEDINA was designed as a general-purpose pre-/postprocessor for various areas of finite element analysis, supporting most of the common CAD formats, solvers and operating systems. CAD formats supported: Currently, the following CAD formats are supported by MEDINA: CATIA, IGES, JT, SAT (ACIS), STEP, STL and VDA-FS. Further CAD formats can be supported using COM/FOX, the T-Systems solution for 3D data conversion. Architecture and interfaces: FEA interfaces supported: In the current release, the following solvers in particular are supported by MEDINA: Abaqus, ANSYS, AutoSEA, LS-DYNA, Marc, Nastran, PAM-CRASH, PATRAN, Star-CD, SYSTUS, Universal, VECTIS and PERMAS. OS and hardware supported: In the current release, MEDINA runs under the following operating systems: Linux and Microsoft Windows. FE analysis in MEDINA: MEDINA is used in particular for the following tasks of FE analysis: crash simulations; durability analysis (thermal and mechanical loading); NVH (Noise Vibration Harshness); simulations of pedestrian safety and passenger protection. MEDINA consists of two modules: a FEM preprocessor (MEDINA.Pre) and a FEM postprocessor (MEDINA.Post). In the preprocessor all steps are taken before the computation can start, i.e.: import of geometry data from the CAD system; import of associated metadata from the CAD or PDM system; import of FE models; editing and repair of CAD geometry; meshing; model structuring; definition of material parameters; definition of boundary conditions; definition of load cases; generation of the solver-specific input deck. In the postprocessor all steps are taken after the computation of the primary data by the solver is finished, e.g.: determination of the derived secondary data; illustration of the results (graphics, animations); export functionalities; generation of reports. Characteristics: MEDINA was designed to support, with high performance, complex simulation tasks and huge FE models of the kind typically found in the automotive and aerospace industries. Important design elements for achieving high performance are parts structures and connector elements. Parts enable a 1:1 mapping of the product structure of the CAD/PDM system within the FE model. Connector elements are used for the generic as well as solver- and client-specific modeling of assembly techniques such as welding, bolting and bonding. Within the process step of the so-called "model assembly", the single FE components (parts structures and connector elements) are merged into the comprehensive FE model representing complex products such as vehicles or aircraft. Single process steps or complete process chains can be automated by protocol and script techniques. Dynamic commands make it possible to integrate client-specific plug-ins into the standard functionality of MEDINA. Target groups/user groups: Due to its development roots and its functionality for the analysis of huge FE models, MEDINA is a widely used pre-/postprocessor for FE analysis, especially in the automotive industry.
Furthermore, MEDINA is used in the aerospace and manufacturing industries, and by engineering service providers and universities.