**Techno-animism** Techno-animism: Techno-animism, or technoanimism, is a culture of technological practice in which technology is imbued with human and spiritual characteristics. It assumes that technology, humanity and religion can be integrated into one entity. As an anthropological theory, techno-animism examines the interactions between the material and the spiritual aspects of technology in relation to humans. Techno-animism has been studied in the context of Japan, since it traces most of its roots to the Shinto religion, and also in DIY culture, where actor–network theory and non-human agencies have been labeled as techno-animist practices. Background and history: The practice of instilling human and spiritual characteristics into physical objects has always been part of the Shinto religion. Deities in the Shinto religion often symbolize objects of the physical world, and their statues often take human forms. Through these practices, people form tighter bonds with physical objects. In Japanese culture, the interaction between humans and non-human objects is critical to the harmonious coexistence of humans and nature. A prime example of this type of interaction is that before meals, Japanese people say "itadakimasu", which expresses gratitude for the ingredients of the meal, be they animals or plants. Background and history: Techno-animism builds upon the practices of the Shinto religion by instilling human and spiritual characteristics into technology. As for representation, techno-animism is often embodied in the engineering design of objects and in the way that people interact with those objects. In a larger social context, techno-animism provides a means for technology to be integrated into human society, because new technology can always be instilled with traditional values. Examples of techno-animism also exist within the context of the DIY ethic and maker culture, linking contemporary theories of material agency and material culture with post-modern ideas of animism; recent academic studies suggest that a form of techno-animism can be observed in the highly developed practices of material engagement found in certain do-it-yourself sub-cultures recorded in contemporary ethnographies of technology. Examples: The design of certain objects can have human-related traits that illustrate techno-animism. ASIMO, a robot designed by Honda, takes the form of an astronaut wearing a spacesuit. This form factor, along with the spiritual values associated with space exploration, makes ASIMO an embodiment of techno-animism. In addition, ASIMO can communicate with humans through language and gestures; communication is a defining factor in determining whether something is an individual being. In Japan, the robot industry offers a wide range of functions, from talking robots to sex robots. Conversation and sexual relationships used to be concepts that belonged only to humans, but technological advancement and techno-animism are breaking down that barrier with engineering designs that embody human and spiritual characteristics. Examples: Beyond the design of objects, the way that people choose to interact with objects can also demonstrate techno-animism. In Shinjuku, Tokyo, there is a restaurant where the waiters are robots instead of humans. Rather than talking to another person, customers interact only with machines throughout the dining process.
In this process, customers accept that technology has become part of human society and has its own way of interacting with humans. Social implications: Japanese culture and legislation are generally supportive of the techno-animist trend. Considering that Japan's modernization took place over a relatively short period compared to Western nations, techno-animism is seen as a major reason why Japan has become one of the world's centers of technological innovation. As a result, the prevailing attitude toward techno-animism in Japan is acceptance, both culturally and legislatively.
**China Seas** China Seas: The China Seas are a series of marginal seas in the Western Pacific Ocean, around China. They are the major components marking the transition from the Asian continent to the Pacific Ocean. They have been described in terms of their collective vastness and complexity: the four seas of China, the Bohai Sea, the Huanghai (Yellow) Sea, the East China Sea, and the South China Sea, occupy a total area of about 4.7 million sq. km, half the area of mainland China. These seas lie on the southeastern margin of the Eurasian continent and are subject to the interactions between the Eurasian, Pacific, and Indian-Australian plates. The seas have complicated geology and rich natural resources. China Seas: The seas included in the China Seas are the East China Sea, the South China Sea, and the Yellow Sea (including the Bohai Sea and Korea Bay).
**Unsaturated monomer** Unsaturated monomer: Unsaturated monomers are those having carbon–carbon double bonds. In general, the term "unsaturated" refers to the presence of one or more double (or triple) bonds and the ability to "saturate" the molecule by addition of H2. Some examples of unsaturated monomers include: acrylic acid, acrylamide, acryloyl chloride, and methyl methacrylate. Research suggests that unsaturated monomers that are coordinatively complexed together may be important in the process of enantioselective cyclopropanation of synthetic fibers.
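As a minimal worked example of saturation by H2 addition (standard chemistry, included here only for illustration), ethylene, the simplest unsaturated monomer, hydrogenates to ethane:

```latex
\mathrm{CH_2{=}CH_2} \;+\; \mathrm{H_2} \;\xrightarrow{\text{catalyst}}\; \mathrm{CH_3{-}CH_3}
```

The double bond is consumed in the reaction, so no further hydrogen can be added: the molecule is then saturated.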
**Many antennas** Many antennas: Many antennas is a smart antenna technique that overcomes the performance limitations of single-user multiple-input multiple-output (MIMO) techniques. In cellular communication, the maximum number of downlink antennas considered is 2 for 3GPP Long Term Evolution (LTE) and 4 for IMT-Advanced requirements. Since the available spectrum band will probably remain limited while data rate requirements continue to increase beyond IMT-A to support mobile multimedia services, it is highly probable that the number of transmit antennas at the base station must be increased to 8–64 or more. The installation of many antennas at a single base station introduces many challenges and requires the development of several new technologies: a new SDMA engine, new beamforming algorithms and new antenna arrays. Many antennas: New space-division multiple access (SDMA) engine: multi-user MIMO, network MIMO, coordinated multi-point transmission (CoMP; cooperative diversity), remote radio equipment (RRE). New beamforming: linear beamforming such as matched filter (MF), zero-forcing (ZF) and minimum mean-square error (MMSE), and non-linear beamforming (precoding) such as Tomlinson-Harashima precoding (THP), vector perturbation (VP), and dirty paper coding (DPC). New antenna array: direct, remote and wireless antenna arrays. Direct antenna array: linear and 3D phased arrays, new structure arrays, and dynamic antenna arrays. Remote and wireless antenna array: distributed antenna arrays and cooperative beamforming. Multiple air interfaces: single-chip antenna arrays for energy-efficient short-range transmission. History of multiple antennas in cellular communications: The recent history of multiple antenna techniques in cellular communications, including predictions for IMT-A and beyond, is summarized in a table.
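The linear beamforming schemes listed above differ only in how the precoding matrix is derived from the channel matrix. Below is a minimal NumPy sketch of zero-forcing precoding for a multi-user downlink under the usual narrowband model y = Hx + n; the dimensions and symbols are illustrative assumptions, not drawn from any 3GPP specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_users = 8, 4   # 8 base-station antennas serving 4 single-antenna users

# Rayleigh-fading channel matrix: one row per user, one column per antenna.
H = (rng.standard_normal((n_users, n_tx))
     + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

# Zero-forcing precoder: the right pseudo-inverse of H. Since H @ W is the
# identity (up to the power normalization), each user sees only its own
# symbol and inter-user interference is nulled.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W)                       # total transmit power constraint

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s = rng.choice(qpsk, size=n_users)           # one QPSK symbol per user
x = W @ s                                    # signal sent from the 8 antennas
y = H @ x                                    # what the users receive (noiseless)

# Each user receives its own symbol, scaled by a common constant.
print(np.allclose(y, s * (y[0] / s[0])))     # True
```

A matched filter would instead use W = H.conj().T, and MMSE adds a noise-dependent regularization term inside the inverse; the three trade interference suppression against noise amplification differently.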
**Hildegund C. J. Ertl** Hildegund C. J. Ertl: Hildegund C. J. Ertl is a researcher who works at The Wistar Institute in Philadelphia. Career: Ertl's research into vaccines has taken a different approach from conventional wisdom, combining parts of different viruses that pose no harm to humans but still stimulate an immune response. In 2007, Ertl helped create The Wistar Institute Vaccine Center. Ertl said that the vaccines the center's laboratories are developing "have important implications for public health because they can reduce disease and death from very common infections." Additionally, she said that she wants to make existing vaccines more accessible in developing areas such as Africa and Asia. In interviews, Ertl has been cautious and critical when it comes to the development of vaccines for AIDS: her research has shown that such a vaccine may exhaust key cells of the immune system that are needed to fight the virus.
**Sketch (drawing)** Sketch (drawing): A sketch (ultimately from Greek σχέδιος – schedios, "done extempore") is a rapidly executed freehand drawing that is not usually intended as a finished work. A sketch may serve a number of purposes: it might record something that the artist sees, it might record or develop an idea for later use, or it might be used as a quick way of graphically demonstrating an image, idea or principle. Sketching is the most inexpensive art medium. Sketches can be made in any drawing medium. The term is most often applied to graphic work executed in a dry medium such as silverpoint, graphite, pencil, charcoal or pastel. It may also apply to drawings executed in pen and ink, digital input such as a digital pen, ballpoint pen, marker pen, water colour and oil paint. The latter two are generally referred to as "water colour sketches" and "oil sketches". A sculptor might model three-dimensional sketches in clay, plasticine or wax. Methods: The two methods in sketching are line drawing and shading. Line art: A line drawing is the most direct means of expression. This type of drawing, without shading or tonal values, is usually the first to be attempted by an artist. It may be somewhat limited in effect, yet it conveys dimension, movement, structure and mood; it can also suggest texture to some extent. Shading: Line gives character, but shading gives depth and value; it is like adding an extra dimension to the sketch. Advanced techniques: Pencil painting: When the pencil is handled almost as if it were a brush, resulting in a paint-like quality, the technique is called pencil painting. Wash and benzene: Starting with a pencil drawing, then washing over the pencil areas with a sable-haired watercolour brush dipped in benzene, is called wash and benzene. Benzene does not itself add colour, but merely modifies the shaded pencil areas. Uses: Sketching is generally a prescribed part of the studies of art students. This generally includes making sketches (croquis) from a live model whose pose changes every few minutes. A "sketch" usually implies a quick and loosely drawn work, while related terms such as study, modello and "preparatory drawing" usually refer to more finished and careful works to be used as a basis for a final work, often in a different medium, but the distinction is imprecise. Underdrawing is drawing underneath the final work, which may sometimes still be visible, or can be viewed by modern scientific methods such as X-rays. Uses: Most visual artists use, to a greater or lesser degree, the sketch as a method of recording or working out ideas. The sketchbooks of some individual artists have become very well known, including those of Leonardo da Vinci and Edgar Degas, which have become art objects in their own right, with many pages showing finished studies as well as sketches. The term "sketchbook" refers to a book of blank paper on which an artist can draw (or has already drawn) sketches. The book might be purchased bound or might comprise loose leaves of sketches assembled or bound together. Sketching is also used as a form of communication in areas of product design such as industrial design. It can be used to communicate design intent, and is most widely used in ideation; it can also be used to map out the floor plans of homes. The ability to quickly record impressions through sketching has found varied purposes in today's culture. Courtroom sketches record scenes and individuals in law courts.
Sketches drawn to help authorities find or identify wanted people are called composite sketches. Street artists in popular tourist areas sketch portraits within minutes. Sources: Chisholm, Hugh, ed. (1911). "Sketch". Encyclopædia Britannica. Vol. 25 (11th ed.). Cambridge University Press. p. 186. Fabry, Alois (1958). Sketching Basics. Mud Puddle Books.
**Dissociated vertical deviation** Dissociated vertical deviation: Dissociated vertical deviation (DVD) is an eye condition which occurs in association with a squint, typically infantile esotropia. The exact cause is unknown, although faulty innervation of the eye muscles is a plausible explanation. Presentation: The eye drifts upward spontaneously or after being covered. The condition usually affects both eyes, but can occur unilaterally or asymmetrically. It is often associated with latent or manifest-latent nystagmus and, as well as occurring with infantile esotropia, can also be found associated with exotropias and vertical deviations. Presentation: A DVD is usually controlled (kept from occurring) while both eyes are open, but may become manifest with inattention. Usually some level of dissociative occlusion is required to make the brain suppress vision in that eye and stop controlling the DVD. The level of dissociative occlusion required may involve using a red filter, a darker filter or complete occlusion (e.g. with a hand). Presentation: Onset: DVD typically becomes apparent between 18 months and three years of age; however, the difficulty of achieving the prolonged occlusion required for accurate detection in the very young makes it possible that onset is generally earlier than these figures suggest. Mechanism: Dissociation refers to the situation where the innervation of one eye causes it to move involuntarily and independently of the other eye. Usually both eyes work together, as described by Hering's and Sherrington's laws of innervation. A DVD is a slow upward, and sometimes temporal, movement of one eye, with cortical suppression of the vision in that eye while it is deviated. When the eye returns downward, and possibly inward, to take up fixation, the slow movement is reversed. The dissociative movement seen "objectively" should not be confused with the dissociation that occurs "subjectively", as when the brain stops perceiving both images simultaneously (by ignoring or suppressing vision in one eye). Diagnosis: A test called the Bielschowsky darkening wedge test can be used to reveal and diagnose the presence of dissociated vertical deviation, although any (or no) amount of dissociative occlusion may also prompt it to occur. Diagnosis: The patient is asked to look at a light. One eye is covered and a filter is placed in front of the other eye. The density or opacity of this filter is gradually increased, and the behaviour of the eye under the cover (not of the eye beneath the filter) is observed. Initially, if DVD is present, the covered eye will elevate, but as the filter opacity is increased the eye under the cover will gradually move downwards. This Bielschowsky phenomenon is present in over 50% of persons with prominent DVD, all the more if the DVD is asymmetric and amblyopia is present as well. The Bielschowsky phenomenon is also present in the horizontal plane in patients with prominent DHD (dissociated horizontal deviation). Diagnosis: Differential diagnosis: DVD is often mistaken for over-action of the inferior oblique extra-ocular muscles. DVD can be revealed on ocular movement testing when one eye is occluded by the nose on lateral gaze; this eye will then elevate, simulating an inferior oblique over-action. However, in a unilateral case, overaction of the superior rectus muscle in the unaffected dominant eye can also be a contributing factor, as well as causing a V-pattern exophoria.
Treatment: Management of this condition is surgical and typically involves reducing the strength of the superior rectus muscle or anterior transposition of the inferior oblique muscle of the affected eyes. Several different surgical procedures exist for the correction of DVD, including inferior oblique anteriorization, inferior oblique anteriorization plus resection, superior rectus recession, superior rectus recession plus posterior fixation suture, and inferior oblique myectomy, though there is insufficient evidence to determine which procedure results in the best outcomes for patients.
**Mag-Thor** Mag-Thor: Mag-Thor is the common name for a range of magnesium (Mg) alloys containing thorium (Th) that are used in aerospace engineering. Alloys: These alloys commonly contain manganese and zinc, but other combinations are known. Some common alloys are named HK31, HM21, HM31, HZ32, ZH42 and ZH62, where the "H" indicates that the alloy contains thorium. Magnesium alloy names are typically given as two letters followed by two numbers. The two letters indicate the main alloying elements, where A = aluminum, Z = zinc, M = manganese, S = silicon, etc.; the numbers give the percentage compositions of those two elements. So AZ31 indicates an alloy containing 3% aluminum and 1% zinc (a toy decoder for this convention is sketched below). Alloys: Magnesium-thorium alloys have been used in several military applications, particularly in missile construction. The most noted examples are the ramjet components of the CIM-10 Bomarc missile and the Lockheed D-21 drone, whose engines used thoriated magnesium because such alloys are lightweight and strong, with creep resistance up to 350 °C. However, these alloys are no longer used because of concerns over thorium's radioactivity, which has also resulted in several missiles being removed from public display. Similarly, the structure of the Equipment and Retro-Rocket Modules of the Gemini spacecraft (the white-painted portions) was made of thoriated magnesium for its strength-to-weight ratio and thermal properties. These were not part of the inhabited cabin, though the radiator tubing, whose silicone coolant flowed through the cabin, was made of the same material. All examples burned up in the atmosphere upon reentry. Alloys: Another concern with thoriated magnesium alloys is the metal's low melting point and rapid oxidation, which can result in dangerous flash fires during production. Additionally, thorium-free magnesium alloys have been developed that exhibit characteristics similar to mag-thor, causing the remaining magnesium-thorium alloys to be cycled out of use.
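Here is that toy decoder. The letter table is deliberately partial, covering only the codes named in the text plus K for zirconium (which appears in HK31); the function name and structure are illustrative, not any standard's reference implementation.

```python
# Decode an ASTM-style magnesium alloy designation such as "AZ31" or "HK31".
# Partial letter table: only the codes mentioned in the text, plus K
# (zirconium); the full ASTM B275 list has many more entries.
ELEMENTS = {
    "A": "aluminum",
    "H": "thorium",
    "K": "zirconium",
    "M": "manganese",
    "S": "silicon",
    "Z": "zinc",
}

def decode_alloy(name: str) -> dict:
    """Return {element: nominal percent} for a two-letter, two-digit name."""
    letters, digits = name[:2], name[2:4]
    return {ELEMENTS[letter]: int(digit) for letter, digit in zip(letters, digits)}

print(decode_alloy("AZ31"))  # {'aluminum': 3, 'zinc': 1}
print(decode_alloy("HK31"))  # {'thorium': 3, 'zirconium': 1}
```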
**BSAFE** BSAFE: Dell BSAFE, formerly known as RSA BSAFE, is a FIPS 140-2 validated cryptography library, available in both C and Java. BSAFE was initially created by RSA Security, which was purchased by EMC and then, in turn, by Dell. When Dell sold the RSA business to Symphony Technology Group in 2020, Dell elected to retain the BSAFE product line. BSAFE was one of the most common encryption toolkits before the RSA patent expired in September 2000. It also contained implementations of the RCx ciphers, the most common being RC4. From 2004 to 2013, the default random number generator in the library was a NIST-approved RNG standard, widely known to be insecure from at least 2006, containing a kleptographic backdoor from the American National Security Agency (NSA) as part of its secret Bullrun program. In 2013, Reuters revealed that RSA had received a payment of $10 million to set the compromised algorithm as the default option. The RNG standard was subsequently withdrawn in 2014, and the RNG removed from BSAFE beginning in 2015. Cryptography backdoors: Dual_EC_DRBG random number generator: From 2004 to 2013, the default cryptographically secure pseudorandom number generator (CSPRNG) in BSAFE was Dual_EC_DRBG, which contained an alleged backdoor from NSA, in addition to being a biased and slow CSPRNG. The cryptographic community had been aware that Dual_EC_DRBG was a very poor CSPRNG since shortly after the specification was posted in 2005, and by 2007 it had become apparent that the CSPRNG seemed to be designed to contain a hidden backdoor for NSA, usable only by NSA via a secret key. In 2007, Bruce Schneier described the backdoor as "too obvious to trick anyone to use it." The backdoor was confirmed in the Snowden leaks in 2013, and it was insinuated that NSA had paid RSA Security US$10 million to use Dual_EC_DRBG by default in 2004, though RSA Security denied that they knew about the backdoor in 2004. The Reuters article which revealed the secret $10 million contract to use Dual_EC_DRBG described the deal as "handled by business leaders rather than pure technologists". RSA Security has largely declined to explain its choice to continue using Dual_EC_DRBG even after the defects and potential backdoor were discovered in 2006 and 2007, and has denied knowingly inserting the backdoor. Cryptography backdoors: So why would RSA pick Dual_EC as the default? You got me. Not only is Dual_EC hilariously slow – which has real performance implications – it was shown to be a just plain bad random number generator all the way back in 2006. By 2007, when Shumow and Ferguson raised the possibility of a backdoor in the specification, no sensible cryptographer would go near the thing. And the killer is that RSA employs a number of highly distinguished cryptographers! It's unlikely that they'd all miss the news about Dual_EC. Cryptography backdoors: As a cryptographically secure random number generator is often the basis of cryptography, much data encrypted with BSAFE was not secure against NSA. Specifically, it has been shown that the backdoor makes SSL/TLS completely breakable by the party holding the private key to the backdoor (i.e. NSA). Since the US government and US companies also used the vulnerable BSAFE, NSA potentially made US data less safe, since its secret key to the backdoor could itself have been stolen. The structure of the trapdoor can be sketched as follows.
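The sketch below is a toy analogue only: it uses modular exponentiation in place of elliptic-curve point multiplication, all names and parameters are illustrative assumptions, and the real generator additionally truncates each output (an attacker brute-forces the missing bits). What it shows is the essential trapdoor: outputs are computed from one public element (Q) and state updates from another (P), so whoever knows the discrete log relating them can turn one observed output into the generator's next internal state.

```python
# Toy analogue of the Dual_EC_DRBG trapdoor. Illustrative only: it swaps
# the elliptic curve for the multiplicative group mod a prime, and it
# omits the 16-bit output truncation of the real design.
import math
import secrets

p = (1 << 127) - 1           # a Mersenne prime, standing in for the curve
P = 3                        # public base element (the "point P")

# The designer secretly picks d and publishes Q = P^d mod p. Users see
# only P and Q; the trapdoor is d (and e, its inverse mod p-1).
while True:
    d = secrets.randbelow(p - 1)
    if d > 1 and math.gcd(d, p - 1) == 1:
        break
Q = pow(P, d, p)
e = pow(d, -1, p - 1)        # modular inverse; requires Python 3.8+

def dual_ec_toy(state, n):
    """Emit n outputs: output r = Q^s, then state update s = P^s."""
    out = []
    for _ in range(n):
        out.append(pow(Q, state, p))   # r_i = Q^s   (x(s*Q) in the real DRBG)
        state = pow(P, state, p)       # s'  = P^s   (x(s*P) in the real DRBG)
    return state, out

_, outputs = dual_ec_toy(secrets.randbelow(p), 4)

# Attack: r = Q^s = P^(d*s), so r^e = P^s, which is exactly the next state.
recovered_state = pow(outputs[0], e, p)
_, predicted = dual_ec_toy(recovered_state, 3)
assert predicted == outputs[1:]        # all later outputs predicted
print("one output observed -> every future output predicted")
```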
It is also possible to derive the secret key by solving a single instance of the algorithm's elliptic curve problem (breaking an instance of elliptic curve cryptography is considered unlikely with current computers and algorithms, but a breakthrough may occur). Cryptography backdoors: In June 2013, Edward Snowden began leaking NSA documents. In November 2013, RSA switched the default to HMAC_DRBG with SHA-256. The following month, Reuters published a report based on the Snowden leaks stating that RSA had received a payment of $10 million to set Dual_EC_DRBG as the default. With the subsequent releases of Crypto-C Micro Edition 4.1.2 (April 2016), Micro Edition Suite 4.1.5 (April 2016) and Crypto-J 6.2 (March 2015), Dual_EC_DRBG was removed entirely. Cryptography backdoors: Extended Random TLS extension: "Extended Random" was a proposed extension for the Transport Layer Security (TLS) protocol, submitted for standardization to the IETF by an NSA employee, although it never became a standard. The extension would otherwise be harmless, but together with Dual_EC_DRBG it would make it easier to take advantage of the backdoor. The extension was previously not known to be enabled in any implementations, but in December 2017 it was found enabled on some Canon printer models, which use the RSA BSAFE library, because the extension number conflicted with part of TLS version 1.3. Varieties: Crypto-J is a Java encryption library. In 1997, RSA Data Security licensed Baltimore Technologies' J/CRYPTO library, with plans to integrate it as part of its new JSAFE encryption toolkit, and released the first version of JSAFE the same year. JSAFE 1.0 was featured in the January 1998 edition of Byte magazine. Cert-J is a Public Key Infrastructure API software library, written in Java. It contains the cryptographic support necessary to generate certificate requests, create and sign digital certificates, and create and distribute certificate revocation lists. As of Cert-J 6.2.4, the entire API has been deprecated in favor of similar functionality provided by the BSAFE Crypto-J JCE API. BSAFE Crypto-C Micro Edition (Crypto-C ME) was initially released in June 2001 under the name "RSA BSAFE Wireless Core 1.0". The initial release targeted Microsoft Windows, EPOC, Linux, Solaris and Palm OS. Varieties: BSAFE Micro Edition Suite is a cryptography SDK in C. It was initially announced in February 2002 as a combined offering of BSAFE SSL-C Micro Edition, BSAFE Cert-C Micro Edition and BSAFE Crypto-C Micro Edition. Both SSL-C Micro Edition and Cert-C Micro Edition reached EOL in September 2014, while Micro Edition Suite remains supported with Crypto-C Micro Edition as its FIPS-validated cryptographic provider. Varieties: SSL-C is an SSL toolkit in the BSAFE suite. It was originally written by Eric A. Young and Tim J. Hudson as a fork of the open library SSLeay, which they developed prior to joining RSA. SSL-C reached end of life in December 2016. SSL-J is a Java toolkit that implements TLS. SSL-J was released as part of RSA's initial JSAFE product offering in 1997. Crypto-J is the default cryptographic provider of SSL-J. Product suite support status: On November 25, 2015, RSA announced End of Life (EOL) dates for BSAFE. The End of Primary Support (EOPS) was to be reached on January 31, 2017, and the End of Extended Support (EOXS) was originally set to be January 31, 2019. That date was later extended by RSA for some versions until January 31, 2022.
During Extended Support, even though the support policy stated that only the most severe problems would be patched, new versions were released containing bug fixes, security fixes and new algorithms. On December 12, 2020, Dell announced the reversal of RSA's past decision, allowing BSAFE product support beyond January 2022 as well as the possibility of soon acquiring new licenses. Dell also announced it was rebranding the toolkits to Dell BSAFE.
**Worse-than-average effect** Worse-than-average effect: The worse-than-average effect or below-average effect is the human tendency to underestimate one's achievements and capabilities in relation to others. It is the opposite of the usually pervasive better-than-average effect (or, in other situations, the overconfidence effect), and it has been proposed more recently to explain reversals of that effect, where people instead underestimate their own desirable traits. Worse-than-average effect: This effect seems to occur when chances of success are perceived to be extremely rare. Traits which people tend to underestimate include juggling ability, the ability to ride a unicycle, and the odds of living past 100 or of finding a U.S. twenty-dollar bill on the ground in the next two weeks. Some have attempted to explain this cognitive bias in terms of the regression fallacy or of self-handicapping. A 2012 article in Psychological Bulletin suggests that the worse-than-average effect (as well as other cognitive biases) can be explained by a simple information-theoretic generative mechanism that assumes a noisy conversion of objective evidence (observation) into subjective estimates (judgment).
**Certified Wireless Security Professional** Certified Wireless Security Professional: The Certified Wireless Security Professional (CWSP) is an advanced-level certification that measures the ability to secure wireless networks. A wide range of security topics focusing on 802.11 wireless LAN technology are covered in the coursework and exam, which is vendor-neutral. Certification track: The CWSP certification is awarded to candidates who pass the CWSP exam and who also hold the CWNA certification; the CWNA certification is a prerequisite for earning the CWSP. CWSP requirements: This certification covers a wide range of security areas, including detecting attacks, wireless analysis, policy, monitoring and solutions. Recertification: The CWSP certification is valid for three years. It may be renewed by retaking the CWSP exam or by advancing to the CWNE certification, which is also valid for three years.
**Cocos2d** Cocos2d: Cocos2d is an open-source game development framework for creating 2D games and other graphical software for iOS, Android, Windows, macOS, Linux, HarmonyOS, OpenHarmony and web platforms. It is written in C++ and provides bindings for various programming languages, including C++, C#, Lua, and JavaScript. The framework offers a wide range of features, including physics, particle systems, skeletal animations, tile maps, and others. Cocos2d was first released in 2008 and was originally written in Python. It has many branches, the best known being Cocos2d-ObjC (formerly known as Cocos2d-iPhone), Cocos2d-x, Cocos2d-JS and Cocos2d-XNA. There are also many third-party tools, editors and libraries made by the Cocos2d community, such as particle editors, spritesheet editors, font editors, and level editors like SpriteBuilder and CocoStudio. Sprites and scenes: All versions of Cocos2d work using the basic primitive known as a sprite. A sprite can be thought of as a simple 2D image, but it can also be a container for other sprites. In Cocos2d, sprites are arranged together to form a scene, like a game level or a menu. Sprites can be manipulated in code based on events or actions, or as part of animations: they can be moved, rotated, scaled, have their image changed, and so on. Features: Animation: Cocos2d provides basic animation primitives that work on sprites using a set of actions and timers. These can be chained and composed together to form more complex animations. Most Cocos2d implementations let you manipulate the size, scale, position, and other effects of the sprite. Some versions of Cocos2d also let you animate particle effects and image-filtering effects via shaders (warp, ripple, etc.). Features: GUI: Cocos2d provides primitives for representing common GUI elements in game scenes, including text boxes, labels, menus, buttons, and other common elements. Physics system: Many Cocos2d implementations come with support for common 2D physics engines like Box2D and Chipmunk. Audio: Various versions of Cocos2d have audio libraries that wrap OpenAL or other libraries to provide full audio capabilities; features depend on the implementation. Scripting support: Bindings to JavaScript, Lua, and other languages exist for Cocos2d. For example, the Cocos2d JavaScript Binding (JSB) for C/C++/Objective-C is the wrapper code that sits between native code and JavaScript code, using Mozilla's SpiderMonkey. With JSB, you can accelerate your development process by writing your game in easy and flexible JavaScript. Features: Editor support (end of life): SpriteBuilder: Previously known as CocosBuilder, SpriteBuilder is an IDE for Cocos2D-SpriteBuilder apps. SpriteBuilder is free and its development was sponsored by Apportable, who also sponsored the free Cocos2D-SpriteBuilder, Cocos3D, and Chipmunk physics projects. It was available as a free app in the Mac App Store. Its latest official version is 1.4; its latest unofficial version is 1.5, which is compatible with cocos2d-ObjC 3.4.9. It supports Objective-C. Features: CocoStudio: a proprietary toolkit based on Cocos2d-x, containing a UI Editor, Animation Editor, Scene Editor and Data Editor, together forming a complete system; the former two are tools mainly for artists, while the latter two are mainly for designers. This is a proprietary project developed by Chukong Technologies. Its latest version is 3.10, which is compatible with cocos2d-x 3.10. It supports C++.
In April 2016 it was deprecated and replaced with Cocos Creator. Features: Editor support (current): Cocos Creator is a proprietary unified game development tool for Cocos2d-x. As of August 2017, it supports only JavaScript and TypeScript, not C++ or Lua; it was based on the free Fireball-X, and C++ and Lua support for Creator has been under alpha-stage development since April 2017. SpriteBuilderX is a free scene editor for Cocos2d-x with C++ support that runs on macOS only. X-Studio is a proprietary scene editor for Cocos2d-x with Lua support that runs on Windows only. CCProjectGenerator is a project generator for Cocos2d-ObjC 3.5 that generates Swift or Objective-C projects for Xcode. History: Cocos2d (Python): In February 2008, in the village of Los Cocos near Córdoba, Argentina, the game developers Ricardo Quesada and Lucio Torre created a 2D game engine for Python with several of their developer friends. They named it "Los Cocos" after its birthplace. A month later, the group released version 0.1 and changed the name to "Cocos2d". Cocos2d-iPhone: Attracted by the potential of the new Apple App Store for the iPhone, Quesada rewrote Cocos2d in Objective-C and in June 2008 released "Cocos2d for iPhone" v0.1, the predecessor of the later Cocos2d family. Cocos2D-ObjC (formerly known as Cocos2D-iPhone and Cocos2D-SpriteBuilder) is maintained by Lars Birkemose. The English designer Michael Heald designed a new logo for Cocos2d (the logo was previously a running coconut). Cocos2d-x: In November 2010, a developer from China named Zhe Wang branched Cocos2d-x from Cocos2d. Cocos2d-x is also a free engine under the MIT License, and it allows compiling and running on multiple platforms with one code base. In 2013, Quesada left cocos2d-iPhone and joined the cocos2d-x team; in March 2017, he was laid off from the Chukong company. As of 2015, four Cocos2d branches were being actively maintained. Cocos2d-x and Cocos2d-html5 are maintained and sponsored by developers at Chukong Technologies. Chukong also developed CocoStudio, a WYSIWYG editor for Cocos2d-x and Cocos2d-html5, and a free Cocos3d-x fork of the Cocos3D project. History: Other ports, forks, and bindings: Cocos2d has been ported into various programming languages and to all kinds of platforms, among them: ShinyCocos, in Ruby; Cocos2d-Android, in Java for Android; Cocos2d-windows, in C++ for Windows XP and Windows 7; CocosNet, in C# based on Mono; and Cocos2d-javascript, in JavaScript for web browsers. Cocos2d-XNA was born in the cocos2d-x community to support Windows Phone 7, but it has since branched into an independent project using C# and Mono to run on multiple platforms; Jacob Anderson at Totally Evil Entertainment is leading this branch. History: Cocos3d works as an extension to cocos2d-iPhone, written in Objective-C. Bill Hollings at Brenwill Workshop Ltd is leading this branch. Games developed with cocos2d: FarmVille, Plague Inc., Geometry Dash (cocos2d-x), Miitomo (cocos2d-x), Badland (cocos2d-iphone), Shadow Fight 2 (cocos2d-x), Cookie Run: OvenBreak, Fire Emblem Heroes.
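To make the sprite/scene/action model described above concrete, here is a minimal sketch using the Python cocos library (the pyglet-based descendant of the original 2008 Python Cocos2d). The asset filename is a placeholder, and the exact signatures should be checked against the installed version:

```python
# Minimal cocos (Python) sketch: one scene, one layer, one sprite running
# a composed action. "hero.png" is a placeholder asset.
import cocos
from cocos.actions import MoveBy, RotateBy, Repeat

class DemoLayer(cocos.layer.Layer):
    def __init__(self):
        super().__init__()
        sprite = cocos.sprite.Sprite("hero.png")   # placeholder image file
        sprite.position = 320, 240                 # centre of a 640x480 window
        self.add(sprite)
        # '+' sequences two actions; Repeat loops the sequence forever.
        sprite.do(Repeat(MoveBy((50, 0), duration=1) + RotateBy(180, duration=1)))

cocos.director.director.init(width=640, height=480)
cocos.director.director.run(cocos.scene.Scene(DemoLayer()))
```

The same structure (a scene containing layers and sprites, with chainable actions applied to sprites) carries over to the Objective-C and C++ branches under different syntax.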
**Chronicle Security** Chronicle Security: Chronicle Security is a cybersecurity company which is part of the Google Cloud Platform. Chronicle is a cloud service, built as a specialized layer on top of core Google infrastructure, designed for enterprises to privately retain, analyze, and search the massive amounts of security and network telemetry they generate. The company began as a project at X, but became its own company in January 2018. Chronicle creates tools for businesses to prevent cybercrime on their platforms. Chronicle announced "Backstory" at RSA 2019 in March, adding log capture and analysis to a family of products that includes VirusTotal and UpperCase, which provide threat intelligence (known malicious IPs and URLs). Backstory claims to "extract signals from your security telemetry to find threats instantly" by combining log data with threat intelligence. Chronicle Security: In June 2019, Thomas Kurian announced that Chronicle would be merged into Google Cloud. Backstory and VirusTotal are now offered to Google Cloud customers as part of an Autonomic Security Operations solution that also includes Looker and BigQuery.
**Call Taxi (India)** Call Taxi (India): Call taxis are taxi services operating in several cities in India. Call Taxi (India): In some cities they operate under a regular taxi permit, while in others they are treated as tourist vehicles for hire. They often offer services at all times of the day. Call taxi services are not officially recognised by the Motor Vehicles Act. They are preferred because they are considered safer, more convenient and more reliable than ordinary taxis or autorickshaws. In Mumbai, ordinary taxicabs can be booked over the internet or by phone; in Coimbatore, a service was launched where autorickshaws can be booked over the phone. History: Call taxis first appeared in Chennai and were described as 'no nonsense' in comparison to regular taxicabs. In Bangalore, call taxis gained prominence after the opening of the information technology sector. In 2013, Uber commenced operations in India; as the company has gained popularity, the number of drivers applying to drive for Uber has grown steadily.
**HD 102776** HD 102776: HD 102776, also known by its Bayer designation j Centauri, is a suspected astrometric binary star system in the southern constellation of Centaurus. It has a blue-white hue and is faintly visible to the naked eye, with a typical apparent visual magnitude of 4.30. The distance to this star is approximately 600 light years based on parallax, and it is drifting further away with a radial velocity of ~29 km/s. It is a member of the Lower Centaurus Crux subgroup of the Sco OB2 association. HD 102776 has a relatively large peculiar velocity of 31.1 km/s and is a candidate runaway star that was ejected from its association, most likely by a supernova explosion. The stellar classification of the visible component is B3V, matching a B-type main-sequence star. It is around 32 million years old and is spinning rapidly, with estimates of its projected rotational velocity ranging from 200 up to 270 km/s, giving it an equatorial radius up to 11% larger than the polar radius. This is a Be star, showing emission features in its Balmer lines due to a circumstellar decretion disk of gas. It is classified as a suspected Gamma Cassiopeiae-type variable star, with a visual magnitude varying from +4.30 down to +4.39.
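For orientation (a standard relation, not taken from the source), the parallax-distance rule lets one back out the parallax implied by the quoted distance:

```latex
d\,[\mathrm{pc}] = \frac{1}{p\,[\mathrm{arcsec}]},\qquad
600\ \mathrm{ly} \approx \frac{600}{3.26}\ \mathrm{pc} \approx 184\ \mathrm{pc}
\;\Longrightarrow\; p \approx \frac{1}{184}\ \mathrm{arcsec} \approx 5.4\ \mathrm{mas}.
```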
**Whitebait** Whitebait: Whitebait is a collective term for the immature fry of fish, typically between 25 and 50 millimetres (1 and 2 in) long. Such young fish often travel together in schools along coasts, and move into estuaries and sometimes up rivers, where they can be easily caught with fine-meshed fishing nets. Whitebaiting is the activity of catching whitebait. Individual whitebait are tender and edible, and are considered a delicacy in New Zealand. The entire fish is eaten, including the head, fins, bones, and bowels. Some species make better eating than others, and the particular species that are marketed as "whitebait" vary in different parts of the world. As whitebait consists of the immature fry of many important food species (such as herring, sprat, sardines, mackerel, bass and many others), it is not an ecologically viable foodstuff, and several countries impose strict controls on harvesting. Whitebait by region: Alboran Sea: The Alboran Sea is the westernmost part of the Mediterranean Sea. Whitebait have been consumed as a favoured element of the diet of peoples living along the northern coasts of the Alboran Sea in Spain, even though the sale of these products has been banned. Australia: In Australia, whitebait refers to the juvenile stage of several predominantly galaxias species during their return to freshwater from the marine phase of their lifecycle. Species referred to as whitebait in Australia include the common galaxias G. maculatus, climbing galaxias G. brevipinnis, spotted galaxias G. truttaceus, Tasmanian whitebait Lovettia sealii, Tasmanian mudfish Neochanna cleaveri, and Tasmanian smelt Retropinna tasmanica. Whitebait were once subject to a substantial commercial fishery, but today only recreational fishers are permitted to gather them, under strict conditions and for a limited season. China: Chinese whitebait is raised in fish farms and plentiful quantities are produced for export. The Chinese whitebait is larger than the New Zealand whitebait and not nearly so delicate. The frozen product is commonly available in food stores and supermarkets at reasonable prices. The Chinese name for these is often translated as "silver fish" in English. Italy: Gianchetti (also bianchetti) are the whitebait of the pesce azzurro of the Mediterranean (sardines and anchovies, etc.), caught in the early months of the year with special nets known in Ligurian as sciabegottu (similar to the sciabica net, but smaller). Whitebait by region: A speciality of Ligurian cuisine, gianchetti are generally lightly boiled in salted water and served hot, dressed with oil and lemon juice. Another classic approach is to make fritters of the fish together with an egg-and-flour batter; finally, they may simply be dipped in flour and deep-fried (frittelle di gianchetti/bianchetti). Gianchetti of a red colour (ruscetti, rossetti) are tougher and scaly on the palate, and are largely used to flavour fish-based sauces. Whitebait by region: In Sicilian cuisine, whitebait are known as ceruses (literally translated as "baby"). Whitebait are the principal ingredient of the Sicilian speciality polpette di neonata, a type of rolled meatball of whitebait with parsley, and egg and/or a little flour to bind, fried in olive oil or sometimes deep-fried in peanut oil. In Neapolitan cuisine whitebait are known as cicenielli. In Brindisian cuisine whitebait are known as chuma (literally, "sea foam").
Whitebait by region: Japan: In Japan, the whitebait (しらす/白子, shirasu) fishing industry is concentrated in Shizuoka Prefecture, where the major landing ports are situated. Shirasu boiled in salted hot water is called kamaage shirasu (釜上げしらす, boiled whitebait), and this product retains a water content of about 85% or greater. Boiled whitefish which are subsequently semi-dried are referred to generally as shirasuboshi (しらす干し, literally 'dried whitebait'), but this is the wider sense of the term; in the stricter sense, shirasuboshi (aka Kantō boshi, or 'Eastern Japan style dried') refers to soft-dried products (50–85% water content), as distinguished from chirimen-jako (縮緬雑魚) (aka Kansai boshi, or 'Western Japan style dried'), which are dried to a harder consistency (30% to just under 50% water content). The whitebait used in these shirasu products is generally the larvae of the Japanese anchovy, but in vernacular Japanese the anchovy (片口鰯, katakuchi iwashi) is called a type of sardine (鰯, iwashi), so shirasu may be (somewhat misleadingly) described as sardine fry in some literature, though the larvae of clupeids do occur as bycatch in the shirasu being harvested. The shirasu landed in Shizuoka Prefecture consists of 2–3-month-old larvae, 1–2 cm in length, of mostly Japanese anchovy and a small proportion of Japanese pilchard (真鰯, ma iwashi), Sardinops sagax melanostictus, a subspecies of sardine. One speciality product is tatami iwashi (たたみいわし, literally 'tatami sardine'), a paper-thin square wafer made from uncooked dry shirasu by spreading the washed fish thinly inside square molds and then drying them; it has become a pricey delicacy. Whitebait by region: New Zealand: New Zealand whitebait are the juveniles of five galaxiid species which live as adults in freshwater rivers and streams. Four of these five species have been classified by the Department of Conservation as endangered. The whitebait are caught during their migration into freshwater habitats after their larval stage at sea. They are much smaller than Chinese or British whitebait, averaging 45–55 mm in length and around 15–22 weeks in age. Whitebait by region: The most common whitebait species in New Zealand is the common galaxias or īnanga, which lays its eggs during the very high spring tides in autumn amongst bankside grasses that are flooded by the tide. The eggs develop out of the water until inundated by the next spring tide, which stimulates them to hatch. The larvae are then carried to sea on the outgoing tide, where they join the ocean's plankton. After approximately six months, the juvenile fish migrate back into freshwater habitats, where they mature to adulthood. The four other galaxiid species in New Zealand whitebait are the kōaro, banded kōkopu, giant kōkopu and shortjaw kōkopu. These species also spawn in bankside vegetation, but their spawning is triggered by autumn floods rather than tides. New Zealand whitebait are caught in the lower reaches of the rivers using large, open-mouthed, hand-held scoop nets, long sock nets, or rigid, typically wedge-shaped set nets. Whitebaiters must constantly attend the nets in order to lift them as soon as a shoal enters, otherwise the whitebait quickly swim back out of the net. Whitebaiters may fish from platforms known as 'stands', which may include screens to direct the fish and systems for raising and lowering nets.
Whitebait by region: Whitebaiting in New Zealand is a seasonal activity with a legally fixed and limited period which spans part of the annual migration. The timing of the allowed fishing season is set to target the more common īnanga, while avoiding the less common species that mainly migrate before and after the whitebaiting season. There is strict control over net sizes, and rules against blocking the river or channelling the fish into the net, so that some fish can reach the adult habitats. The whitebait themselves are very sensitive to objects in the river and are adept at dodging the nets. Whitebait by region: Whitebait is a traditional food for Māori, and was widely eaten by European settlers in the 19th century. By the 20th century the price of whitebait had risen and it became known as a delicacy. Currently it commands high prices, to the extent that it is the most expensive fish on the market when available. The wholesale price (NZD) is typically $60–$70 per kilogram ($27–$32/lb), but the retail price can be up to $140 per kilogram ($64/lb). It is normally sold fresh in small quantities, although some is frozen to extend the sale period. Nevertheless, whitebait can normally only be purchased during or soon after the netting season. The most popular way of cooking whitebait in New Zealand is the whitebait fritter, which is essentially an omelette containing whitebait; purists use only the egg white in order to minimise interference with the taste of the bait. The degradation of waterways through forest clearance, and the impacts of agriculture and urbanisation, have caused the whitebait catch to decline. The loss of suitable spawning habitat has been particularly severe, especially for īnanga, which rely on dense riparian vegetation lining the tidal portions of waterways. Amongst other factors, a lack of shade over waterways has been shown to kill developing whitebait eggs. Whitebait by region: United Kingdom: In the United Kingdom today, whitebait principally refers to the fry of clupeid fish: young sprats and, most commonly, herring. They are normally deep-fried, coated in flour or a light batter, and served very hot with sprinkled lemon juice and bread and butter. Whitebait are very hard to buy fresh unless the buyer goes to a fishing harbour early in the morning, as most are frozen on the boat. Whitebait by region: Records of whitebait as a food in England date back to 1612. By the 1780s it was fashionable to dine on whitebait. In those days, whitebait was thought to be a species or group in its own right, and the French zoologist Valenciennes proposed that whitebait was a new genus, which he called Rogenia. In 1903, Dr James Murie, in his 'Report on the sea fisheries and fishing industry of the Thames estuary', conducted studies on the contents of boxes sold as whitebait. He discovered that some boxes of whitebait contained up to 23 species of immature fish, including the fry of eel, plaice, whiting, herring, sprat and bass, along with shrimp, crab, octopus and even jellyfish. Whitebait by region: For Londoners in the 19th century and before, summer excursions down the Thames to Greenwich or Blackwall to dine on whitebait were popular; the Cabinet, for instance, undertook such a trip every year shortly before the prorogation of Parliament. An annual whitebait festival takes place in Southend. Given that UK and imported whitebait still consists of immature herring, sprat, sardines, mackerel, bass and many others, it is not an ecologically viable foodstuff.
Removing these fish at such a juvenile stage, before they have had a chance to grow and reproduce, might severely reduce future fish stocks. The Marine Conservation Society (MCS) is a non-government organisation that provides independent information on the sustainability of fish stocks and species around the world, and has a rating system for fish sustainability in order to safeguard future stocks. The MCS suggests avoiding purchasing and eating juvenile whitebait, as it is detrimental to sustainable fish populations. Whitebait by region: Puerto Rico: Residents of Arecibo, Puerto Rico, traditionally fish for whitebait at the mouth of the Río Grande de Arecibo. The fish are known locally as cetí and classified as Pellona bleekeriana or Sicydium plumieri. Elvers: Elvers are young eels. Traditionally, fishermen consumed elvers as a cheap dish, but environmental changes have reduced eel populations; similar to whitebait, elvers are now considered a delicacy and are priced at up to 1,000 euros per kilogram. Cuttlefish, octopus and squid: Battered and fried baby cephalopods (usually cuttlefish, but sometimes squid or octopus), known as puntillitas or chopitos, are popular in southern Spain and the Balearic Islands, and possibly elsewhere.
**Distributed Interactive Simulation** Distributed Interactive Simulation: Distributed Interactive Simulation (DIS) is an IEEE standard for conducting real-time platform-level wargaming across multiple host computers. It is used worldwide, especially by military organizations but also by other agencies, such as those involved in space exploration and medicine. History: The standard was developed over a series of "DIS Workshops" at the Interactive Networked Simulation for Training symposium, held by the University of Central Florida's Institute for Simulation and Training (IST). The standard itself is very closely patterned after the original SIMNET distributed interactive simulation protocol, developed by Bolt, Beranek and Newman (BBN) for the Defense Advanced Research Projects Agency (DARPA) in the early through late 1980s. BBN introduced the concept of dead reckoning to efficiently transmit the state of battlefield entities. History: In the early 1990s, IST was contracted by DARPA to undertake research in support of the US Army Simulator Network (SIMNET) program. Funding and research interest for DIS standards development decreased following the proposal and promulgation of its successor, the High Level Architecture (HLA), in 1996. HLA was produced by the merger of the DIS protocol with the Aggregate Level Simulation Protocol (ALSP) designed by MITRE. History: There was a NATO standardisation agreement on DIS for modelling and simulation interoperability (STANAG 4482, Standardised Information Technology Protocols for Distributed Interactive Simulation (DIS), adopted in 1995). This was retired in favour of HLA in 1998 and officially cancelled in 2010 by the NATO Standardization Agency (NSA). The DIS family of standards: DIS is defined under IEEE Standard 1278: IEEE 1278-1993 - Standard for Distributed Interactive Simulation - Application protocols; IEEE 1278.1-1995 - Standard for Distributed Interactive Simulation - Application protocols; IEEE 1278.1-1995 - Standard for Distributed Interactive Simulation - Application protocols (Corrections); IEEE 1278.1A-1998 - Standard for Distributed Interactive Simulation - Application protocols Errata (May 1998); IEEE 1278.1-2012 - Standard for Distributed Interactive Simulation - Application protocols; IEEE 1278.2-1995 - Standard for Distributed Interactive Simulation - Communication Services and Profiles; IEEE 1278.3-1996 - Recommended Practice for Distributed Interactive Simulation - Exercise Management and Feedback; IEEE 1278.4-1997 - Recommended Practice for Distributed Interactive Simulation - Verification, Validation & Accreditation; IEEE P1278.5-XXXX - Fidelity Description Requirements (never published). In addition to the IEEE standards, the Simulation Interoperability Standards Organization (SISO) maintains and publishes an "enumerations and bit encoded fields" document yearly. This document is referenced by the IEEE standards and used by DIS, TENA and HLA federations. Both PDF and XML versions are available. Current status: SISO, a sponsor committee of the IEEE, promulgates improvements in DIS. Major changes occurred in the DIS 7 update to IEEE 1278.1 to make DIS more extensible and efficient, and to support the simulation of more real-world capabilities.
Application protocol: Simulation state information is encoded in formatted messages known as protocol data units (PDUs), which are exchanged between hosts using existing transport-layer protocols, typically multicast or broadcast User Datagram Protocol (UDP). There are several versions of the DIS application protocol, including not only the formal standards but also drafts submitted during the standards balloting process. Application protocol: Version 1 - Standard for Distributed Interactive Simulation - Application Protocols, Version 1.0 Draft (1992); Version 2 - IEEE 1278-1993; Version 3 - Standard for Distributed Interactive Simulation - Application Protocols, Version 2.0 Third Draft (May 1993); Version 4 - Standard for Distributed Interactive Simulation - Application Protocols, Version 2.0 Fourth Draft (March 1994); Version 5 - IEEE 1278.1-1995; Version 6 - IEEE 1278.1a-1998 (amendment to IEEE 1278.1-1995); Version 7 - IEEE 1278.1-2012. Version 7 is also called DIS 7; it is a major upgrade that enhances extensibility and flexibility, provides extensive clarification and more detailed requirements, and adds some higher-fidelity mission capabilities. Protocol data units: The current version (DIS 7) defines 72 different PDU types, arranged into 13 families. Frequently used PDU types are listed below for each family; PDU and family names shown in italics are found in DIS 7. Protocol data units: Entity information/interaction family - Entity State, Collision, Collision-Elastic, Entity State Update, Attribute. Warfare family - Fire, Detonation, Directed Energy Fire, Entity Damage Status. Logistics family - Service Request, Resupply Offer, Resupply Received, Resupply Cancel, Repair Complete, Repair Response. Simulation management family - Start/Resume, Stop/Freeze, Acknowledge. Distributed emission regeneration family - Designator, Electromagnetic Emission, IFF/ATC/NAVAIDS, Underwater Acoustic, Supplemental Emission/Entity State (SEES). Radio communications family - Transmitter, Signal, Receiver, Intercom Signal, Intercom Control. Entity management family. Minefield family. Synthetic environment family. Simulation management with reliability family. Live entity family. Non-real time family. Information Operations family - Information Operations Action, Information Operations Report. Realtime Platform Reference FOM (RPR FOM): The RPR FOM is a Federation Object Model (FOM) for the High-Level Architecture, designed to organize the PDUs of DIS into an HLA object-class and interaction-class hierarchy. It has been developed as the SISO standard SISO-STD-001. The purpose is to support the transition of legacy DIS systems to the HLA, to enhance a priori interoperability among RPR FOM users, and to support newly developed federates with similar requirements. The most recent version is RPR FOM 2.0, which corresponds to DIS version 6.
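As an illustration of what "formatted messages" means in practice, the sketch below packs the 12-byte header that precedes every DIS PDU (protocol version, exercise ID, PDU type, protocol family, timestamp, length, padding, in network byte order). The field values and the helper name are illustrative assumptions, not a validated encoder:

```python
# Pack a DIS PDU header (IEEE 1278.1) with Python's struct module.
# Layout: version, exercise ID, PDU type, protocol family (1 byte each),
# timestamp (4 bytes), total PDU length (2 bytes), padding (2 bytes),
# all big-endian. Values below are illustrative.
import struct

HEADER = struct.Struct(">BBBBIHH")          # 12 bytes, network byte order

def pdu_header(pdu_type: int, family: int, timestamp: int,
               body_len: int, version: int = 7, exercise_id: int = 1) -> bytes:
    total_len = HEADER.size + body_len      # length field covers the whole PDU
    return HEADER.pack(version, exercise_id, pdu_type, family,
                       timestamp, total_len, 0)

# Entity State is PDU type 1, in the entity information/interaction family (1).
header = pdu_header(pdu_type=1, family=1, timestamp=0x12345678, body_len=132)
print(len(header), header.hex())            # 12 070101011234567800900000
```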
**Prolactin-releasing hormone** Prolactin-releasing hormone: Prolactin-releasing hormone, also known as PRLH, is a hypothetical human hormone or hormone-releasing factor. The existence of this factor has been hypothesized because prolactin is the only currently known hormone for which almost exclusively negative regulating factors are known (such as dopamine, leukemia inhibitory factor and some prostaglandins) but few stimulating factors. Prolactin secretion is mediated by estrogen from the placenta during pregnancy, which elevates the blood level of prolactin. While many prolactin-stimulating and -enhancing factors are well known (such as thyrotropin-releasing hormone, oxytocin, vasoactive intestinal peptide and estrogen), these have primary functions other than stimulating prolactin release, and the search for a hypothetical releasing factor or factors continues. Prolactin-releasing hormone: The prolactin-releasing peptide identified in 1998 was a candidate for this function; however, as of 2008 its function had not been completely elucidated.
**Stateful firewall** Stateful firewall: In computing, a stateful firewall is a network-based firewall that individually tracks sessions of network connections traversing it. Stateful packet inspection, also referred to as dynamic packet filtering, is a security feature often used in non-commercial and business networks. Description: A stateful firewall keeps track of the state of network connections, such as TCP streams, UDP datagrams, and ICMP messages, and can apply labels such as LISTEN, ESTABLISHED, or CLOSING. State table entries are created for TCP streams or UDP datagrams that are allowed to communicate through the firewall in accordance with the configured security policy. Once in the table, all related packets of a stored session are allowed through a streamlined path, taking fewer CPU cycles than standard inspection. Related packets are also permitted to return through the firewall even if no rule is configured to allow communications from that host. If no traffic is seen for a specified time (implementation dependent), the connection is removed from the state table. Applications can send keepalive messages periodically to prevent a firewall from dropping the connection during periods of no activity, or for applications which by design have long periods of silence. Description: The method of maintaining a session's state depends on the transport protocol being used. TCP is a connection-oriented protocol, and sessions are established with a three-way handshake using SYN packets and ended by sending a FIN notification. The firewall can use these unique connection identifiers to know when to remove a session from the state table without waiting for a timeout. UDP is a connectionless protocol, which means it does not send unique connection-related identifiers while communicating. Because of that, a session will only be removed from the state table after the configured timeout. UDP hole punching is a technology that leverages this trait to allow for dynamically setting up data tunnels over the internet. ICMP messages are distinct from TCP and UDP and communicate control information of the network itself. A well-known example of this is the ping utility. ICMP responses will be allowed back through the firewall. In some scenarios, UDP communication can use ICMP to provide information about the state of the session, so ICMP responses related to a UDP session will also be allowed back through. Description: Stateful inspection firewall advantages:
Monitors the entire session for the state of the connection, while also checking IP addresses and payloads for more thorough security
Offers a high degree of control over what content is let in or out of the network
Does not need to open numerous ports to allow traffic in or out
Delivers substantive logging capabilities
Stateful inspection firewall disadvantages:
Resource-intensive, which interferes with the speed of network communications
More expensive than other firewall options
Does not provide authentication capabilities to validate that traffic sources are not spoofed
Does not work with asymmetric routing (opposite directions use different paths)
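The state-table behaviour described above can be captured in a few lines. The sketch below is a minimal illustration, assuming a caller-supplied allow_new() policy function and 5-tuple keys; it is not how any particular firewall product is implemented.

```python
import time

# Minimal sketch of a stateful firewall's state table. The packet fields,
# key shape, and allow_new() policy hook are illustrative assumptions.
TIMEOUT = 30.0  # seconds of inactivity before an entry is dropped

state_table = {}  # (proto, src_ip, src_port, dst_ip, dst_port) -> last_seen

def permit(packet, allow_new):
    key = (packet["proto"], packet["src"], packet["sport"],
           packet["dst"], packet["dport"])
    reply_key = (packet["proto"], packet["dst"], packet["dport"],
                 packet["src"], packet["sport"])
    now = time.monotonic()
    # Expire idle sessions (the timeout is implementation dependent).
    for k, last in list(state_table.items()):
        if now - last > TIMEOUT:
            del state_table[k]
    # Packets of an established session, and replies to it, pass with only
    # a table lookup instead of a full rule-base scan.
    if key in state_table or reply_key in state_table:
        state_table[key if key in state_table else reply_key] = now
        return True
    # Otherwise consult the configured security policy for new sessions.
    if allow_new(packet):
        state_table[key] = now
        return True
    return False
```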
**Applied mathematics** Applied mathematics: Applied mathematics is the application of mathematical methods by different fields such as physics, engineering, medicine, biology, finance, business, computer science, and industry. Thus, applied mathematics is a combination of mathematical science and specialized knowledge. The term "applied mathematics" also describes the professional specialty in which mathematicians work on practical problems by formulating and studying mathematical models. In the past, practical applications have motivated the development of mathematical theories, which then became the subject of study in pure mathematics, where abstract concepts are studied for their own sake. The activity of applied mathematics is thus intimately connected with research in pure mathematics. History: Historically, applied mathematics consisted principally of applied analysis, most notably differential equations; approximation theory (broadly construed, to include representations, asymptotic methods, variational methods, and numerical analysis); and applied probability. These areas of mathematics related directly to the development of Newtonian physics, and in fact, the distinction between mathematicians and physicists was not sharply drawn before the mid-19th century. This history left a pedagogical legacy in the United States: until the early 20th century, subjects such as classical mechanics were often taught in applied mathematics departments at American universities rather than in physics departments, and fluid mechanics may still be taught in applied mathematics departments. Engineering and computer science departments have traditionally made use of applied mathematics. Divisions: Today, the term "applied mathematics" is used in a broader sense. It includes the classical areas noted above as well as other areas that have become increasingly important in applications. Even fields such as number theory that are part of pure mathematics are now important in applications (such as cryptography), though they are not generally considered to be part of the field of applied mathematics per se. Divisions: There is no consensus as to what the various branches of applied mathematics are. Such categorizations are made difficult by the way mathematics and science change over time, and also by the way universities organize departments, courses, and degrees. Divisions: Many mathematicians distinguish between "applied mathematics", which is concerned with mathematical methods, and the "applications of mathematics" within science and engineering. A biologist using a population model and applying known mathematics would not be doing applied mathematics, but rather using it; however, mathematical biologists have posed problems that have stimulated the growth of pure mathematics. Mathematicians such as Poincaré and Arnold deny the existence of "applied mathematics" and claim that there are only "applications of mathematics." Similarly, non-mathematicians blend applied mathematics and applications of mathematics. The use and development of mathematics to solve industrial problems is also called "industrial mathematics". The success of modern numerical mathematical methods and software has led to the emergence of computational mathematics, computational science, and computational engineering, which use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary.
Divisions: Applicable mathematics Sometimes, the term applicable mathematics is used to distinguish between the traditional applied mathematics that developed alongside physics and the many areas of mathematics that are applicable to real-world problems today, although there is no consensus as to a precise definition. Mathematicians often distinguish between "applied mathematics" on the one hand, and the "applications of mathematics" or "applicable mathematics" both within and outside of science and engineering, on the other. Some mathematicians emphasize the term applicable mathematics to separate or delineate the traditional applied areas from new applications arising from fields that were previously seen as pure mathematics. For example, from this viewpoint, an ecologist or geographer using population models and applying known mathematics would not be doing applied, but rather applicable, mathematics. Such descriptions can lead to applicable mathematics being seen as a collection of mathematical methods such as real analysis, linear algebra, mathematical modelling, optimisation, combinatorics, probability and statistics, which are useful in areas outside traditional mathematics and not specific to mathematical physics. Divisions: Other authors prefer describing applicable mathematics as a union of "new" mathematical applications with the traditional fields of applied mathematics. With this outlook, the terms applied mathematics and applicable mathematics are thus interchangeable. Utility: Historically, mathematics was most important in the natural sciences and engineering. However, since World War II, fields outside the physical sciences have spawned the creation of new areas of mathematics, such as game theory and social choice theory, which grew out of economic considerations. Further, the utilization and development of mathematical methods expanded into other areas, leading to the creation of new fields such as mathematical finance and data science. Utility: The advent of the computer has enabled new applications: studying and using the new computer technology itself (computer science) to study problems arising in other areas of science (computational science) as well as the mathematics of computation (for example, theoretical computer science, computer algebra, numerical analysis). Statistics is probably the most widespread mathematical science used in the social sciences. Status in academic departments: Academic institutions are not consistent in the way they group and label courses, programs, and degrees in applied mathematics. At some schools, there is a single mathematics department, whereas others have separate departments for Applied Mathematics and (Pure) Mathematics. It is very common for Statistics departments to be separated at schools with graduate programs, but many undergraduate-only institutions include statistics under the mathematics department. Status in academic departments: Many applied mathematics programs (as opposed to departments) consist primarily of cross-listed courses and jointly appointed faculty in departments representing applications. Some Ph.D. programs in applied mathematics require little or no coursework outside mathematics, while others require substantial coursework in a specific area of application.
In some respects this difference reflects the distinction between "application of mathematics" and "applied mathematics". Status in academic departments: Some universities in the U.K. host departments of Applied Mathematics and Theoretical Physics, but it is now much less common to have separate departments of pure and applied mathematics. A notable exception to this is the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge, housing the Lucasian Professor of Mathematics, whose past holders include Isaac Newton, Charles Babbage, James Lighthill, Paul Dirac, and Stephen Hawking. Status in academic departments: Schools with separate applied mathematics departments range from Brown University, which has a large Division of Applied Mathematics that offers degrees through the doctorate, to Santa Clara University, which offers only the M.S. in applied mathematics. Research universities dividing their mathematics department into pure and applied sections include MIT, where students in the applied program also learn another skill (computer science, engineering, physics, pure math, etc.) to supplement their applied math skills. Associated mathematical sciences: Applied mathematics is associated with the following mathematical sciences:
Engineering and technological engineering, with applications of applied geometry together with applied chemistry.
Scientific computing: Scientific computing includes applied mathematics (especially numerical analysis), computing science (especially high-performance computing), and mathematical modelling in a scientific discipline.
Computer science: Computer science relies on logic, algebra, discrete mathematics such as graph theory, and combinatorics.
Operations research and management science: These are often taught in faculties of engineering, business, and public policy.
Statistics: Applied mathematics has substantial overlap with the discipline of statistics. Statistical theorists study and improve statistical procedures with mathematics, and statistical research often raises mathematical questions. Statistical theory relies on probability and decision theory, and makes extensive use of scientific computing, analysis, and optimization; for the design of experiments, statisticians use algebra and combinatorial design. Applied mathematicians and statisticians often work in a department of mathematical sciences (particularly at colleges and small universities).
Actuarial science: Actuarial science applies probability, statistics, and economic theory to assess risk in insurance, finance, and other industries and professions.
Mathematical economics: Mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. The applied methods usually refer to nontrivial mathematical techniques or approaches. Mathematical economics is based on statistics, probability, mathematical programming (as well as other computational methods), operations research, game theory, and some methods from mathematical analysis. In this regard, it resembles (but is distinct from) financial mathematics, another part of applied mathematics. According to the Mathematics Subject Classification (MSC), mathematical economics falls into the Applied mathematics/other classification of category 91 (Game theory, economics, social and behavioral sciences), with MSC2010 classifications for "Game theory" at codes 91Axx and for "Mathematical economics" at codes 91Bxx.
Other disciplines: The line between applied mathematics and specific areas of application is often blurred. Many universities teach mathematical and statistical courses outside the respective departments, in departments and areas including business, engineering, physics, chemistry, psychology, biology, computer science, scientific computation, information theory, and mathematical physics.
**Cochlear nucleus** Cochlear nucleus: The cochlear nuclear (CN) complex comprises two cranial nerve nuclei in the human brainstem, the ventral cochlear nucleus (VCN) and the dorsal cochlear nucleus (DCN). The ventral cochlear nucleus is unlayered, whereas the dorsal cochlear nucleus is layered. Auditory nerve fibers, fibers that travel through the auditory nerve (also known as the cochlear nerve or eighth cranial nerve), carry information from the inner ear, the cochlea, on the same side of the head, to the nerve root in the ventral cochlear nucleus. At the nerve root the fibers branch to innervate the ventral cochlear nucleus and the deep layer of the dorsal cochlear nucleus. All acoustic information thus enters the brain through the cochlear nuclei, where the processing of acoustic information begins. The outputs from the cochlear nuclei are received in higher regions of the auditory brainstem. Structure: The cochlear nuclei (CN) are located at the dorso-lateral side of the brainstem, spanning the junction of the pons and medulla. The ventral cochlear nucleus (VCN) lies on the ventral aspect of the brain stem, ventrolateral to the inferior peduncle. The dorsal cochlear nucleus (DCN), also known as the tuberculum acusticum or acoustic tubercle, curves over the VCN and wraps around the cerebellar peduncle. The VCN is further divided by the nerve root into the posteroventral cochlear nucleus (PVCN) and the anteroventral cochlear nucleus (AVCN). Structure: Projections to the cochlear nuclei The major input to the cochlear nucleus is from the auditory nerve, a part of cranial nerve VIII (the vestibulocochlear nerve). The auditory nerve fibers form a highly organized system of connections according to their peripheral innervation of the cochlea. Axons from the spiral ganglion cells of the lower frequencies innervate the ventrolateral portions of the ventral cochlear nucleus and lateral-ventral portions of the dorsal cochlear nucleus. The axons from the higher-frequency organ of Corti hair cells project to the dorsal portion of the ventral cochlear nucleus and the dorsal-medial portions of the dorsal cochlear nucleus. The mid-frequency projections end up in between the two extremes; in this way the tonotopic organization that is established in the cochlea is preserved in the cochlear nuclei. This tonotopic organization is preserved because only a few inner hair cells synapse on the dendrites of a nerve cell in the spiral ganglion, and the axon from that nerve cell synapses on only a very few dendrites in the cochlear nucleus. In contrast with the VCN, which receives all its acoustic input from the auditory nerve, the DCN receives acoustic input not only from the auditory nerve but also from neurons in the VCN (T stellate cells). The DCN is therefore, in a sense, a second-order sensory nucleus. Structure: The cochlear nuclei have long been thought to receive input only from the ipsilateral ear. There is evidence, however, for stimulation from the contralateral ear via the contralateral CN, and also from the somatosensory parts of the brain. Structure: Projections from the cochlear nuclei There are three major fiber bundles, axons of cochlear nuclear neurons, that carry information from the cochlear nuclei to targets that are mainly on the opposite side of the brain. Through the medulla, one projection goes to the contralateral superior olivary complex (SOC) via the trapezoid body, while the other half projects to the ipsilateral SOC.
This pathway is called the ventral acoustic stria (VAS or, more commonly, the trapezoid body). Another pathway, called the dorsal acoustic stria (DAS, also known as the stria of von Monakow), rises above the medulla into the pons, where it reaches the nuclei of the lateral lemniscus along with its kin, the intermediate acoustic stria (IAS, also known as the stria of Held). The IAS decussates across the medulla before joining the ascending fibers in the contralateral lateral lemniscus. The lateral lemniscus contains cells of the nuclei of the lateral lemniscus, and in turn projects to the inferior colliculus. The inferior colliculus receives direct, monosynaptic projections from the superior olivary complex, the contralateral dorsal acoustic stria, some classes of stellate neurons of the VCN, as well as from the different nuclei of the lateral lemniscus. Structure: Most of these inputs terminate in the inferior colliculus, although there are a few small projections that bypass the inferior colliculus and project to the medial geniculate or other forebrain structures. The targets include:
Medial superior olive (MSO) via the trapezoid body (TB) – ipsilateral and contralateral stimulation for low-frequency sounds.
Lateral superior olive (LSO), directly and via the TB – ipsilateral stimulation for high-frequency sounds.
Medial nucleus of the trapezoid body (MNTB) – contralateral stimulation.
Inferior colliculus – contralateral stimulation.
Periolivary nuclei (PON) – ipsilateral and contralateral stimulation.
Lateral lemniscus (LL) and lemniscal nuclei (LN) – ipsilateral and contralateral stimulation.
Histology: Three types of principal cells convey information out of the ventral cochlear nucleus: bushy cells, stellate cells, and octopus cells. Structure: Bushy cells are found mainly in the anterior ventral cochlear nucleus (AVCN). These can be further divided into large spherical, small spherical, and globular bushy cells, depending on their appearance and also their location. Within the AVCN there is an area of large spherical cells; caudal to this are smaller spherical cells, and globular cells occupy the region around the nerve root. An important difference between these subtypes is that they project to differing targets in the superior olivary complex. Large spherical bushy cells project to the ipsilateral and contralateral medial superior olive. Globular bushy cells project to the contralateral medial nucleus of the trapezoid body, and small spherical bushy cells likely project to the lateral superior olive. They have a few (1–4) very short dendrites with numerous small branches, causing them to resemble a "bush". The bushy cells have specialized electrical properties that allow them to transmit timing information from the auditory nerve to more central areas of the auditory system. Because bushy cells receive input from multiple auditory nerve fibers that are tuned to similar frequencies, they can improve the precision of the timing information by, in essence, averaging out jitter in the timing of the inputs. Bushy cells can also be inhibited by sounds adjacent to the frequency to which they are tuned, leading to even sharper tuning than is seen in auditory nerve fibers. These cells are usually innervated by only a few auditory nerve fibres, which dominate their firing pattern. These afferent nerve fibres wrap their terminal branches around the entire soma, creating a large synapse onto the bushy cell called an "endbulb of Held".
Therefore, a single-unit recording of an electrically stimulated bushy neuron characteristically produces exactly one action potential and constitutes the primary response. Structure: Stellate cells (also known as multipolar cells) have longer dendrites that lie parallel to fascicles of auditory nerve fibers. They are also called chopper cells, in reference to their ability to fire a regularly spaced train of action potentials for the duration of a tonal or noise stimulus. The chopping pattern is intrinsic to the electrical excitability of the stellate cell, and the firing rate depends on the strength of the auditory input more than on the frequency. Each stellate cell is narrowly tuned and has inhibitory sidebands, enabling the population of stellate cells to encode the spectrum of sounds, enhancing spectral peaks and valleys. These neurons provide acoustic input to the DCN. Structure: Octopus cells are found in a small region of the posterior ventral cochlear nucleus (PVCN). The distinguishing features of these cells are their long, thick, tentacle-shaped dendrites that typically emanate from one side of the cell body. Octopus cells produce an "onset response" to simple tonal stimuli: that is, they respond only at the onset of a broad-band stimulus. Octopus cells can fire with some of the highest temporal precision of any neuron in the brain. Electrical stimuli to the auditory nerve evoke a graded excitatory postsynaptic potential in the octopus cells, and these EPSPs are very brief. Octopus cells are thought to be important for extracting timing information; it has been reported that these cells can respond to click trains at a rate of 800 Hz. Two types of principal cells convey information out of the dorsal cochlear nucleus (DCN) to the contralateral inferior colliculus. The principal cells receive two systems of inputs. Acoustic input comes to the deep layer through several paths. Excitatory acoustic input comes from auditory nerve fibers and also from stellate cells of the VCN. Acoustic input is also conveyed through inhibitory interneurons (tuberculoventral cells of the DCN and "wide band inhibitors" in the VCN). Through the outermost molecular layer, the DCN receives other types of sensory information, most importantly information about the location of the head and ears, through parallel fibers. This information is distributed through a cerebellum-like circuit that also includes inhibitory interneurons. Fusiform cells (also known as pyramidal cells) integrate information through two tufts of dendrites: the apical dendrites receive multisensory, excitatory, and inhibitory input through the outermost molecular layer, and the basal dendrites, which extend into the deep layer, receive excitatory and inhibitory acoustic input. These neurons are thought to enable mammals to analyze the spectral cues used to localize sounds in elevation, and to localize sounds when hearing is lost in one ear.
By distributing acoustic input to multiple types of principal cells, the auditory pathway is subdivided into parallel ascending pathways, which can simultaneously extract different types of information. The cells of the ventral cochlear nucleus extract information that is carried by the auditory nerve in the timing of firing and in the pattern of activation of the population of auditory nerve fibers. The cells of the dorsal cochlear nucleus perform a non-linear spectral analysis, place that spectral analysis into the context of the location of the head, ears, and shoulders, and separate expected, self-generated spectral cues from more interesting, unexpected spectral cues, using input from the auditory cortex, pontine nuclei, trigeminal ganglion and nucleus, dorsal column nuclei, and the second dorsal root ganglion. It is likely that these neurons help mammals use spectral cues to orient toward unexpected sounds. The information is used by higher brainstem regions to achieve further computational objectives (such as sound source location or improvement in signal-to-noise ratio). The inputs from these other areas of the brain probably play a role in sound localization. Function: In order to understand in more detail the specific functions of the cochlear nuclei, it is first necessary to understand the way sound information is represented by the fibers of the auditory nerve. Briefly, there are around 30,000 auditory nerve fibers in each of the two auditory nerves. Each fiber is an axon of a spiral ganglion cell that represents a particular frequency of sound and a particular range of loudness. Information in each nerve fiber is represented by the rate of action potentials as well as the particular timing of individual action potentials. The particular physiology and morphology of each cochlear nucleus cell type enhances different aspects of sound information.
**Buttermilk Crispy Tenders** Buttermilk Crispy Tenders: Buttermilk Crispy Tenders (and their precursor, Chicken Selects) were chicken strips sold by the international fast food restaurant chain McDonald's in the United States and Canada. Chicken Selects were introduced in early 1998 for a limited time, offered again in early 2002 and late 2003, and then permanently starting in 2004. In the UK, they were launched on the "Pound Saver Menu", which offers various menu items for £0.99. Buttermilk Crispy Tenders: In mid-2006, McDonald's introduced the Snack Wrap, which contains a Chicken Selects Premium Breast Strip or, as of January 2007, a Grilled Chicken Breast Strip, cheddar/jack cheese, lettuce, and either ranch, honey mustard, or chipotle barbecue sauce, all wrapped inside a white flour tortilla, priced at 99¢-$1.39 depending on the market. Chicken Selects were discontinued in 2013. The product briefly returned in 2015 as a limited-time promotion. In August 2017, a similar chicken tender product named "Buttermilk Crispy Tenders" was added to the menu. However, they were discontinued in 2020 as a result of the COVID-19 pandemic. Composition: Ingredients for the Chicken Selects Premium Breast Strip are listed as "Chicken breast strips, water, seasoning [salt, monosodium glutamate, carrageenan gum, chicken broth, natural flavor (plant and animal source), maltodextrin, spice, autolyzed yeast extract, chicken fat, polysorbate 80], modified potato starch, and sodium phosphates. Breaded with: wheat flour, water, food starch-modified, salt, spices, leavening (baking soda, sodium aluminum phosphate, monocalcium phosphate), garlic powder, onion powder, dextrose, spice extractives, and extractives of paprika. Prepared in vegetable oil (may contain one of the following: canola oil, corn oil, soybean oil, hydrogenated soybean oil, partially hydrogenated soybean oil, partially hydrogenated corn oil with TBHQ and citric acid added to preserve freshness), dimethylpolysiloxane added as an antifoaming agent."
**Flag of Denmark** Flag of Denmark: The national flag of Denmark (Danish: Dannebrog, pronounced [ˈtænəˌpʁoˀ]) is red with a white Nordic cross, which means that the cross extends to the edges of the flag and the vertical part of the cross is shifted to the hoist side. Flag of Denmark: A banner with a white-on-red cross is attested as having been used by the kings of Denmark since the 14th century. An origin legend with considerable impact on Danish national historiography connects the introduction of the flag to the Battle of Lindanise of 1219. The elongated Nordic cross, which represents Christianity, reflects its use as a maritime flag in the 18th century. The flag became popular as a national flag in the early 16th century. Its private use was outlawed in 1834 but was again permitted by a regulation of 1854. The flag holds the world record of being the oldest continuously used national flag. Description: In 1748, a regulation defined the correct lengths of the last two fields in the flag as 6⁄4 the length of the square inner fields. In May 1893 a new regulation to all chiefs of police stated that the police should not intervene if the last two fields in the flag were longer than 6⁄4, as long as these did not exceed 7⁄4, and provided that this was the only rule violated. This regulation is still in effect today, and thus the legal proportions of the national flag today are 3:1:3 in width and anywhere between 3:1:4.5 and 3:1:5.25 in length. No official definition of "Dannebrog rød" exists. The private company Dansk Standard, regulation number 359 (2005), defines the red colour of the flag as Pantone 186c. History: 1219 origin legend A tradition recorded in the 16th century traces the origin of the flag to the campaigns of Valdemar II of Denmark (r. 1202–1241). The oldest of these accounts is in Christiern Pedersen's "Danske Krønike", a sequel to Saxo's Gesta Danorum written in 1520–23. Here, the flag falls from the sky during one of Valdemar's military campaigns overseas. Pedersen also states that the very same flag was taken into exile by Eric of Pomerania in 1440. History: The second source is the writing of the Franciscan friar Petrus Olai (Peder Olsen) of Roskilde (died c. 1570). This record describes a battle in 1208 near Fellin during the Estonia campaign of King Valdemar II. The Danes were all but defeated when a lamb-skin banner depicting a white cross fell from the sky and miraculously led to a Danish victory. In a third account, also by Petrus Olai, in Danmarks Tolv Herligheder ("Twelve Splendours of Denmark"), in splendour number nine, the same story is retold almost verbatim, with a paragraph inserted correcting the year to 1219. Now the flag falls from the sky in the Battle of Lindanise, also known as the Battle of Valdemar (Danish: Volmerslaget), near Lindanise (Tallinn) in Estonia, on 15 June 1219. History: It is this third account that has been the most influential, and some historians have treated it as the primary account, taken from a (lost) source dating to the first half of the 15th century. History: In Olai's account the battle was going badly, and defeat seemed imminent. However, the Danish Bishop Anders Sunesen, on top of a hill overlooking the battle, prayed to God with his arms raised, and the Danes moved closer to victory the more he prayed. When he raised his arms the Danes surged forward, but when his arms grew tired and he let them fall, the Estonians turned the Danes back. Attendants rushed forward to raise his arms once again and the Danes again surged forward.
But for a second time he grew so tired that he dropped his arms, and the Danes again lost the advantage and moved closer to defeat; eventually two soldiers were needed to keep his hands up. When the Danes were about to lose, the Dannebrog miraculously fell from the sky and the King took it and showed it to the troops; their hearts were filled with courage, and the Danes won the battle. History: The possible historical nucleus behind this origin legend was extensively discussed by Danish historians in the 19th and 20th centuries. One such example is Adolf Ditlev Jørgensen, who argued that Bishop Theoderich was the original instigator of the 1218 inquiry from Bishop Albert of Buxhoeveden to King Valdemar II which led to the Danish participation in the Baltic crusades. Jørgensen speculates that Bishop Theoderich might have carried the Knights Hospitaller's banner in the 1219 battle and that "the enemy thought this was the King's symbol and mistakenly stormed Bishop Theoderich's tent"; he claims that the legend of the falling flag originates from this confusion in the battle. The Danish church historian L. P. Fabricius (1934) ascribes the origin to the 1208 Battle of Fellin, not the Battle of Lindanise in 1219, based on the earliest source available about the story. Fabricius speculated that it might have been Archbishop Andreas Sunesøn's personal ecclesiastical banner or perhaps even the flag of Archbishop Absalon, under whose initiative and supervision several smaller crusades had already been conducted in Estonia. The banner would then already be known in Estonia. Fabricius repeats Jørgensen's idea about the flag being planted in front of Bishop Theoderich's tent, which the enemy mistakenly attacked believing it to be the tent of the King. History: A different theory is briefly discussed by Fabricius and elaborated more by Helge Bruhn (1949). Bruhn interprets the story in the context of the widespread tradition of the miraculous appearance of crosses in the sky in Christian legend, specifically comparing such an event attributed to a battle of 10 September 1217 near Alcazar, where it is said that a golden cross on white appeared in the sky to bring victory to the Christians. In Swedish national historiography of the 18th century, there is a tale paralleling the Danish legend, in which a golden cross appears in the blue sky during a Swedish battle in Finland in 1157. History: Middle Ages The white-on-red cross emblem originates in the age of the Crusades. In the 12th century, it was also used as a war flag by the Holy Roman Empire. History: In the Gelre Armorial, dated c. 1340–1370, such a banner is shown alongside the coat of arms of the king of Denmark. This is the earliest known undisputed colour rendering of the Dannebrog. At about the same time, Valdemar IV of Denmark displayed a cross in his coat of arms on his Danælog seal (Rettertingsseglet, dated 1356). The image from the Armorial Gelre is nearly identical to an image found in a 15th-century coat of arms book now located in the National Archives of Sweden (Riksarkivet). The seal of Eric of Pomerania (1398) as king of the Kalmar Union displays the arms of Denmark, three lions, in the dexter chief. In this version, the lions are holding a Dannebrog banner. History: The reason why the kings of Denmark in the 14th century began displaying the cross banner in their coats of arms is unknown. Caspar Paludan-Müller (1873) suggested that it may reflect a banner sent by the pope to support the Danish king during the Livonian Crusade.
Adolf Ditlev Jørgensen (1875) identifies the banner as that of the Knights Hospitaller, an order which had a presence in Denmark from the later 12th century. Several coins, seals, and images exist, both foreign and domestic, from the 13th to 15th centuries and even earlier, showing heraldic designs similar to the Dannebrog alongside the royal coat of arms (three blue lions on a golden shield). There is a record suggesting that the Danish army had a "chief banner" (hoffuitbanner) in the early 16th century. Such a banner is mentioned in 1570 by Niels Hemmingsøn in the context of a 1520 battle between Danes and Swedes near Uppsala, as nearly captured by the Swedes but saved by the heroic actions of the banner-carrier Mogens Gyldenstierne and Peder Skram. The legend attributing the miraculous origin of the flag to the campaigns of Valdemar II of Denmark (r. 1202–1241) was recorded by Christiern Pedersen and Petrus Olai in the 1520s. History: Hans Svaning's History of King Hans from 1558 to 1559 and Johan Rantzau's History about the Last Dithmarschen War, from 1569, record the further fate of the Danish hoffuitbanner: according to this tradition, the original flag from the Battle of Lindanise was used in the small campaign of 1500 when King Hans tried to conquer Dithmarschen (in western Holstein in northern Germany). The flag was lost in a devastating defeat at the Battle of Hemmingstedt on 17 February 1500. In 1559, King Frederik II recaptured it during his own Dithmarschen campaign. History: In 1576, the son of Johan Rantzau, Henrik Rantzau, also wrote about the war and the fate of the flag, noting that the flag was in poor condition when returned. He records that the flag, after its return to Denmark, was placed in the cathedral in Slesvig. The Slesvig historian Ulrik Petersen (1656–1735) confirms the presence of such a banner in the cathedral in the early 17th century and records that it had crumbled away by about 1660. History: Contemporary records describing the Battle of Hemmingstedt make no reference to the loss of the original Dannebrog, although the terms of capitulation state that all Danish banners lost in 1500 were to be returned. In a letter dated 22 February 1500 to Oluf Stigsøn, King John describes the battle but does not mention the loss of an important flag; in fact, the entire letter gives the impression that the lost battle was of limited importance. In 1598, Neocorus wrote that the banner captured in 1500 was brought to the church in Wöhrden and hung there for the next 59 years, until it was returned to the Danes as part of the peace settlement in 1559. History: Modern period Used as a maritime flag since the 16th century, the Dannebrog was introduced as a regimental flag in the Danish army in 1785, and for the militia (landeværn) in 1801. From 1842, it was used as the flag of the entire army. During the first half of the 19th century, in parallel to the development of Romantic nationalism in other European countries, the military flag increasingly came to be seen as representing the nation itself. Poems of this period invoking the Dannebrog were written by B.S. Ingemann, N.F.S. Grundtvig, Oehlenschläger, Chr. Winther, and H.C. Andersen. By the 1830s, the military flag had become popular as an unofficial national flag, and its use by private citizens was outlawed in a circular enacted on 7 January 1834.
History: In the national enthusiasm sparked by the First Schleswig War of 1848–1850, the flag was still very widely displayed, and the prohibition of private use was repealed in a regulation of 7 July 1854, for the first time allowing Danish citizens to display the Dannebrog (but not the swallow-tailed Splitflag variant). Special permission to use the Splitflag was given to individual institutions and private companies, especially after 1870. In 1886, the war ministry introduced a regulation indicating that the flag should be flown from military buildings on thirteen specified days, including royal birthdays, the date of the signing of the Constitution of 5 June 1849, and days of remembrance for military battles. In 1913, the naval ministry issued its own list of flag days. On 10 April 1915, the hoisting of any other flag on Danish soil was prohibited. From 1939 until 2012, the yearbook Hvem-Hvad-Hvor included a list of flag days. As of 2019, flag days can be viewed at the Ministry of Justice (Justitsministeriet) as well as The Denmark Society (Danmarks-Samfundet). Variants: Maritime flag and corresponding Kingdom flag The size and shape of the civil ensign ("Koffardiflaget") for merchant ships is given in the regulation of 11 June 1748, which says: a red flag with a white cross with no split end. The white cross must be 1⁄7 of the flag's height. The two first fields must be square in form, and the two outer fields must be 6⁄4 the length of those. The proportions are thus 3:1:3 vertically and 3:1:4.5 horizontally. This definition gives the absolute proportions for the Danish national flag to this day, for both the civil version of the flag ("Stutflaget") and the merchant flag ("Handelsflaget"). The civil flag and the merchant flag are identical in colour and design. Variants: A regulation passed in 1758 required Danish ships sailing in the Mediterranean to carry the royal cypher in the center of the flag, in order to distinguish them from Maltese ships, due to the similarity of the flag of the Sovereign Military Order of Malta. Variants: According to the regulation of 11 June 1748, the colour was simply red, which is commonly known today as "Dannebrog rød" ("Dannebrog red"). The only red fabric dye available in 1748 was made of madder root, which can be processed to produce a brilliant red dye (used historically for British soldiers' jackets). A regulation of 4 May 1927 once again states that Danish merchant ships have to fly flags according to the regulation of 1748. Variants: The first regulation regarding the Splitflag dates from 27 March 1630, in which King Christian IV ordered that Norwegian Defensionskibe (armed merchant ships) may only use the Splitflag if they are in Danish war service. In 1685 an order, distributed to a number of cities in Slesvig, stated that all ships must carry the Danish flag, and in 1690 all merchant ships were forbidden to use the Splitflag, with the exception of ships sailing in the East Indies, the West Indies, and along the coast of Africa. In 1741 it was confirmed that the regulation of 1690 was still very much in effect: merchant ships may not use the Splitflag. At the same time, the Danish East India Company was allowed to fly the Splitflag when past the equator. Variants: Some confusion must have existed regarding the Splitflag. In 1696 the Admiralty presented the King with a proposal for a standard regulating both the size and shape of the Splitflag.
In the same year, a royal resolution defined the proportions of the Splitflag, which in this resolution is called Kongeflaget (the King's flag), as follows: the cross must be 1⁄7 of the flag's height; the two first fields must be square in form, with sides three times the cross width; the two outer fields are rectangular and 1+1⁄2 the length of the square fields; the tails are the length of the flag. Variants: These numbers are the basis for the Splitflag, or Orlogsflag, today, though the numbers have been slightly altered. The term Orlogsflag dates from 1806 and denotes use in the Danish Navy. From about 1750 to the early 19th century, a number of ships and companies in which the government had interests received approval to use the Splitflag. Variants: In the royal resolution of 25 October 1939 for the Danish Navy, it is stated that the Orlogsflag is a Splitflag with a deep red ("dybrød") or madder red ("kraprød") colour. As with the national flag, no exact shade is given, but in modern practice it is given as Pantone 195U. Furthermore, the size and shape are corrected in this resolution to be: "The cross must be 1⁄7 of the flag's height. The two first fields must be square in form with the height of 3⁄7 of the flag's height. The two outer fields are rectangular and 5⁄4 the length of the square fields. The tails are 6⁄4 the length of the rectangular fields". Thus, compared to the standard of 1696, both the rectangular fields and the tails have decreased in size. Variants: The Splitflag and Orlogsflag have similar shapes but different sizes and shades of red. Legally, they are two different flags. The Splitflag is a Danish flag ending in a swallow-tail; it is Dannebrog red and is used on land. The Orlogsflag is an elongated Splitflag with a deeper red colour and is only used at sea. The Orlogsflag with no markings may only be used by the Royal Danish Navy. There are, though, a few exceptions to this: a few institutions have been allowed to fly the clean Orlogsflag, and the same flag with markings has been approved for a few dozen companies and institutions over the years. Furthermore, the Orlogsflag is only described as such if it has no additional markings; any swallow-tail flag, no matter the colour, is called a Splitflag provided it bears additional markings. Variants: Royal standards Monarch The current version of the royal standard was introduced on 16 November 1972, when the Queen adopted a new version of her personal coat of arms. The royal standard is the flag of Denmark with a swallow-tail, charged with the monarch's coat of arms set in a white square. The centre square is 32 parts in a flag with the ratio 56:107. Variants: Other members of the royal family Other flags in the Kingdom of Denmark: Greenland and the Faroe Islands are autonomous territories within the Kingdom of Denmark. They have their own official flags. Some areas in Denmark have unofficial flags. While they have no legal recognition or regulation, they can be used freely. The regional flags of Bornholm and Ærø are occasionally used by locals of those islands and in tourist-related businesses. Other flags in the Kingdom of Denmark: The proposal for a flag of Jutland has hardly found any actual use, perhaps in part due to its peculiar design. The flag of Vendsyssel (Vendelbrog) is seen infrequently, but many locals recognise it. According to an article in the newspaper Nordjyske, the flag had been used in the former insignia of Flight Eskadrille 723 of Aalborg Air Base in the 1980s.
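The 1748 civil-ensign proportions quoted above lend themselves to a quick sanity check. The sketch below is illustrative only (the function name and the choice of a 70-unit hoist are arbitrary); it derives the field sizes from the rules in this article: cross 1⁄7 of the height, square inner fields, outer fields 6⁄4 the length of the inner ones.

```python
from fractions import Fraction

def dannebrog(height):
    """Compute the 1748 civil-ensign field sizes from the hoist height."""
    cross = Fraction(height, 7)       # white cross is 1/7 of the height
    field = 3 * cross                 # each inner field is square, 3 units
    outer = field * Fraction(6, 4)    # outer fields are 6/4 of the inner ones
    length = field + cross + outer    # total fly length
    return {"cross": cross, "inner_field": field,
            "outer_field": outer, "length": length}

print(dannebrog(70))
# cross = 10, inner fields = 30, outer fields = 45, length = 85,
# i.e. 3:1:3 vertically and 3:1:4.5 horizontally, overall ratio 70:85 = 14:17.
```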
**The Gene Bomb** The Gene Bomb: The Gene Bomb is a 1996 book by David E. Comings, self-published by Hope Press, that puts forth the theory that higher education and advanced technology may unintentionally favor the selection of genes that increase the likelihood of ADHD, autism, drug addiction, learning disorders, and behavior problems. Comings claims that the prevalence of these disorders is rising and I.Q. is decreasing; others argue that other factors may be responsible, including increased detection of these disorders. He claims that society inadvertently creates delays for the highly educated that reduce their fertility and cause them to have children later in life, thus raising the odds of certain disorders like autism. On the other hand, he claims that those having learning disorders tend to drop out of school earlier and have more children, thus passing on learning disorders at a higher rate. Environmental and societal factors are usually accepted as the cause, but Comings argues the opposite. According to a review of the book in the British journal Developmental Medicine and Child Neurology, "The arguments are developed in this book with an alarming lack of scientific accuracy and satisfactory supporting evidence." The review concludes that the book "is an apocalyptic, irrational, and emotional treatise which opens up scientifically unsound issues that have already been formally buried". A book review in the Journal of Medical Genetics said, "This is the sort of book which gets geneticists a bad name", adding that some facts "are simply wrong", while vital facts "are simply missing". Comings replied, in a letter to the editor, that the review "missed the whole point of the book and presented to the readers of this journal a distorted view of the issues I attempted to raise". Other Tourette syndrome (TS) researchers say of Comings' Tourette's research that his "assertions fall outside of the mainstream of the very extensive TS literature that has developed over the past 2 decades".
**Virtual Volleyball** Virtual Volleyball: Virtual Volleyball is a video game developed and published by Imagineer Co. for the Sega Saturn. Gameplay: Virtual Volleyball is the first volleyball game using polygons to be published for any game system. Reception: Next Generation reviewed the Saturn version of the game, rating it one star out of five, and stated, "There are inherent problems in doing a volleyball game when considering the matter of trying to control an entire team, but Virtual Volleyball seems to make no effort to solve any of these problems, leaving the gamer with an extremely vacant feeling."
**Transcoder free operation** Transcoder free operation: In a telecommunication network, transcoder free operation (TrFO), also known as out-of-band transcoder control, is the concept of removing the transcoding function from a call path. In legacy GSM networks, a call between two mobile stations involved two transcoding functions, one at each BSC. This transcoding functionality was generally implemented in a separate Transcoder and Rate Adaptation Unit (TRAU). The TRAU was connected to the BSC and MSC through TDM E1 or STM-1 links. Transcoder free operation: With the introduction of NGN and 3G networks, the Radio Network Controller was connected to the MGW through ATM or IP instead of TDM. Therefore, the external transcoder was removed and the transcoding function moved up to the MGW. NGN also introduced the Nb interface over IP, making it possible to carry compressed voice codecs such as AMR on the Nb interface. In such a call scenario, the transcoding functionality in the MGW can be eliminated, improving voice quality and saving MGW resources. The concept of TrFO became applicable to 2G networks as well with the "A interface over IP" implementation.
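The core idea of out-of-band transcoder control is codec negotiation in signalling, before any media flows. The sketch below is purely illustrative: the function name, the packet-free codec lists, and the preference order are assumptions, and real networks negotiate this via BICC or SIP signalling rather than a helper like this.

```python
# Illustrative sketch of out-of-band codec negotiation for TrFO.
def negotiate_codec(orig_codecs, term_codecs):
    """Pick a common codec so the MGW can pass media through untouched."""
    for codec in orig_codecs:              # in order of preference
        if codec in term_codecs:
            return codec, False            # transcoder-free: no transcoding
    # No common codec: the MGW must transcode between the two ends.
    return (orig_codecs[0], term_codecs[0]), True

# A mobile-to-mobile AMR call stays compressed end to end:
codec, needs_transcoding = negotiate_codec(["AMR", "G.711"], ["AMR"])
print(codec, needs_transcoding)   # 'AMR', False -> TrFO applies

# A call toward an endpoint that only supports G.711 forces transcoding:
codec, needs_transcoding = negotiate_codec(["AMR"], ["G.711"])
print(codec, needs_transcoding)   # ('AMR', 'G.711'), True
```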
**Rake angle** Rake angle: In machining, the rake angle is a parameter used in various cutting processes, describing the angle of the cutting face relative to the workpiece. There are three types of rake angle: positive, zero (or neutral), and negative. Positive rake: A tool has a positive rake when the face of the cutting tool slopes away from the cutting edge at the inner side. Zero rake: A tool has a zero (or neutral) rake when the face of the cutting tool is perpendicular to the cutting edge at the inner side. Negative rake: A tool has a negative rake angle when the face of the cutting tool slopes away from the cutting edge at the outer side.
Positive rake angles generally:
Make the tool sharper and more pointed. This reduces the strength of the tool, as the small included angle at the tip may cause it to chip away.
Reduce cutting forces and power requirements.
Help in the formation of continuous chips in ductile materials.
Can help avoid the formation of a built-up edge.
Negative rake angles generally:
Increase the strength of the cutting edge; the tool is more blunt.
Increase the cutting force.
Increase the power required for a cut.
Can increase friction, resulting in higher temperatures.
Can improve surface finish.
Zero rake angles:
Are easier to manufacture.
Are easier to resharpen.
Require less power and lower cutting forces than a negative-rake tool.
Allow the chip to wear and "crater" the rake face.
Recommended rake angles: Recommended rake angles can vary depending on the material being cut, tool material, depth of cut, cutting speed, machine, setup, and process. Published tables summarize recommended rake angles for single-point turning on a lathe, drilling, milling, and sawing.
**Radical 80** Radical 80: Radical 80, or the "do not" radical (毋部), meaning "mother" or "do not", is one of the 34 Kangxi radicals (out of 214 in total) composed of 4 strokes. Chinese characters with the similar component 母 ("mother") may also be classified under this radical. In the Kangxi Dictionary, there are 16 characters (out of 49,030) found under this radical. 毋 is also the 99th indexing component in the Table of Indexing Chinese Character Components predominantly adopted by Simplified Chinese dictionaries published in mainland China. In the Hokkien language, 毋 is often used to represent the negation particle [m̩], spelled m̄ in Peh-oe-ji and Tai-lo.
**Tarp tent** Tarp tent: A tarp tent is a tarpaulin, a plastic or nylon sheet, used in place of a tent. It is usually rigged with poles, tent pegs, and guy lines. Ultralight backpackers use tarp tents because they are lightweight compared to other backpacking shelters. Tarp tent: In its simplest form, it is floorless with open ends, pitched as a fly or with the sides attached to the ground. It can also be set up as a loue, with two adjacent sides on the ground and the opposite corner as the highest point, giving more protection from wind and reflecting heat from an optional fire in front of the open side. Tarp tent: A tarp tent is commonly lighter and cheaper than a tent and easier to set up. However, because it is more open, it does not provide as much protection from rain, snow, wind, or cold as a tent does, and it provides no protection from insects. Tarp tent: More sophisticated tarp tents are now manufactured or homemade with such features as bug screening, storm flaps on the ends, and even floors and vents. According to Harvey Manning in his book Backpacking: One Step at a Time (The REI Press, Seattle), "The term 'tarp-tent' as used here denotes a broad category which at one boundary is nothing more than a shaped tarp and at the other end verges on a 'true' tent. The common characteristic is a single wall, in most cases, waterproof." In Mountaineering: The Freedom of the Hills (4th ed., The Mountaineers, Seattle, WA) it says, "A tarp tent is both light in weight and low in cost, and offers adequate shelter from all but extreme weather in lowland forests and among subalpine trees." Tarp tents are frequently made of silnylon because it is lightweight, strong, and waterproof. Tarp tent: The basha is essentially a tarp tent used by the British and Australian armies.
**Fortezza** Fortezza: Fortezza is an information security system that uses the Fortezza Crypto Card, a PC Card-based security token. It was developed for the U.S. government's Clipper chip project and has been used by the U.S. Government in various applications. Each individual who is authorized to see protected information is issued a Fortezza card that stores private keys and other data needed to gain access. It contains an NSA approved security microprocessor called Capstone (MYK-80) that implements the Skipjack encryption algorithm. Fortezza: The original Fortezza card (KOV-8) is a Type 2 product which means it cannot be used for classified information. The most widely used Type 1 encryption card is the KOV-12 Fortezza card which is used extensively for the Defense Message System (DMS). The KOV-12 is cleared up to TOP SECRET/SCI. A later version, called KOV-14 or Fortezza Plus, uses a Krypton microprocessor that implements stronger, Type 1 encryption and may be used for information classified up to TOP SECRET/SCI. It, in turn, is being replaced by the newer KSV-21 PC card with more modern algorithms and additional capabilities. Fortezza: The cards are interchangeable within the many types of equipment that support Fortezza and can be rekeyed and reprogrammed by the owners, making them easy to issue and reuse. This simplifies the process of rekeying equipment for crypto changes: instead of requiring an expensive fill device, a technician is able to put a new Fortezza card in the device's PCMCIA slot. Fortezza: The Fortezza Plus card and its successors are used with NSA's Secure Terminal Equipment voice and data encryption systems that are replacing the STU-III. It is manufactured by the Mykotronx Corporation and by Spyrus. Each card costs about $240 and they are commonly used with card readers sold by Litronic Corporation. The Fortezza card has been used in government, military, and banking applications to protect sensitive data.
**Clifford bundle** Clifford bundle: In mathematics, a Clifford bundle is an algebra bundle whose fibers have the structure of a Clifford algebra and whose local trivializations respect the algebra structure. There is a natural Clifford bundle associated to any (pseudo-)Riemannian manifold M, called the Clifford bundle of M. General construction: Let V be a (real or complex) vector space together with a symmetric bilinear form $\langle\cdot,\cdot\rangle$. The Clifford algebra $C\ell(V)$ is a natural (unital associative) algebra generated by V subject only to the relation $v^2 = -\langle v, v\rangle$ for all $v$ in $V$. One can construct $C\ell(V)$ as a quotient of the tensor algebra of V by the ideal generated by the above relation. General construction: Like other tensor operations, this construction can be carried out fiberwise on a smooth vector bundle. Let E be a smooth vector bundle over a smooth manifold M, and let g be a smooth symmetric bilinear form on E. The Clifford bundle of E is the fiber bundle whose fibers are the Clifford algebras generated by the fibers of E: $$C\ell(E) = \coprod_{x\in M} C\ell(E_x, g_x).$$ The topology of $C\ell(E)$ is determined by that of E via an associated bundle construction. General construction: One is most often interested in the case where g is positive-definite or at least nondegenerate; that is, when (E, g) is a Riemannian or pseudo-Riemannian vector bundle. For concreteness, suppose that (E, g) is a Riemannian vector bundle. The Clifford bundle of E can be constructed as follows. Let $C\ell_n(\mathbb{R})$ be the Clifford algebra generated by $\mathbb{R}^n$ with the Euclidean metric. The standard action of the orthogonal group O(n) on $\mathbb{R}^n$ induces a graded automorphism of $C\ell_n(\mathbb{R})$. The homomorphism $$\rho : \mathrm{O}(n) \to \mathrm{Aut}(C\ell_n(\mathbb{R}))$$ is determined by $$\rho(A)(v_1 v_2 \cdots v_k) = (Av_1)(Av_2)\cdots(Av_k)$$ where the $v_i$ are all vectors in $\mathbb{R}^n$. The Clifford bundle of E is then given by $$C\ell(E) = F(E) \times_\rho C\ell_n(\mathbb{R})$$ where F(E) is the orthonormal frame bundle of E. It is clear from this construction that the structure group of $C\ell(E)$ is O(n). Since O(n) acts by graded automorphisms on $C\ell_n(\mathbb{R})$, it follows that $C\ell(E)$ is a bundle of $\mathbb{Z}_2$-graded algebras over M. The Clifford bundle $C\ell(E)$ can then be decomposed into even and odd subbundles: $$C\ell(E) = C\ell^0(E) \oplus C\ell^1(E).$$ General construction: If the vector bundle E is orientable, then one can reduce the structure group of $C\ell(E)$ from O(n) to SO(n) in the natural manner. Clifford bundle of a Riemannian manifold: If M is a Riemannian manifold with metric g, then the Clifford bundle of M is the Clifford bundle generated by the tangent bundle TM. One can also build a Clifford bundle out of the cotangent bundle T*M. The metric induces a natural isomorphism $TM \cong T^*M$ and therefore an isomorphism $C\ell(TM) \cong C\ell(T^*M)$. There is a natural vector bundle isomorphism between the Clifford bundle of M and the exterior bundle of M: $$C\ell(T^*M) \cong \Lambda(T^*M).$$ This is an isomorphism of vector bundles, not algebra bundles. The isomorphism is induced from the corresponding isomorphism on each fiber. In this way one can think of sections of the Clifford bundle as differential forms on M equipped with Clifford multiplication rather than the wedge product (which is independent of the metric). The above isomorphism respects the grading in the sense that $$C\ell^0(T^*M) = \Lambda^{\mathrm{even}}(T^*M), \qquad C\ell^1(T^*M) = \Lambda^{\mathrm{odd}}(T^*M).$$ Local description: For a vector $v \in T_xM$ at $x \in M$ and a form $\psi \in \Lambda(T_x^*M)$, the Clifford multiplication is defined as $$v\psi = v \wedge \psi + v \lrcorner\, \psi,$$ where the metric duality changing the vector into a one-form is used in the first term.
Clifford bundle of a Riemannian manifold: Then the exterior derivative d and the coderivative δ can be related to the metric connection ∇, using a choice of orthonormal basis $\{e_a\}$, by $$d = e_a \wedge \nabla_{e_a}, \qquad \delta = -\, e_a \lrcorner\, \nabla_{e_a}$$ (with summation over the repeated index understood, and the basis vectors identified with their dual covectors via the metric). Using these definitions, the Dirac–Kähler operator is defined by $$D = e_a \nabla_{e_a} = d - \delta.$$ On a star domain the operator can be inverted using the Poincaré lemma for the exterior derivative and its Hodge-star dual for the coderivative. A practical way of doing this is via homotopy and cohomotopy operators.
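A quick consistency check on these definitions, stated here as a standard identity under the sign conventions above rather than taken from the original text: since $d^2 = 0$ and $\delta^2 = 0$, the Dirac–Kähler operator squares, up to sign, to the Hodge Laplacian,
$$D^2 = (d - \delta)^2 = d^2 - d\delta - \delta d + \delta^2 = -(d\delta + \delta d) = -\Delta,$$
so $D$ behaves as a square root of the Laplacian acting on the full exterior bundle, which is what makes it a Dirac-type operator.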
**Suprachoroidal drug delivery** Suprachoroidal drug delivery: Suprachoroidal drug delivery is an ocular route of drug administration. It involves using a microneedle, as a minimally invasive method, to inject particles of a medication into the suprachoroidal space (SCS) between the sclera and the choroid of the eye.
**Hyperpolarization (biology)** Hyperpolarization (biology): Hyperpolarization is a change in a cell's membrane potential that makes it more negative. It is the opposite of a depolarization. It inhibits action potentials by increasing the stimulus required to move the membrane potential to the action potential threshold. Hyperpolarization (biology): Hyperpolarization is often caused by efflux of K+ (a cation) through K+ channels, or influx of Cl– (an anion) through Cl– channels. On the other hand, influx of cations, e.g. Na+ through Na+ channels or Ca2+ through Ca2+ channels, inhibits hyperpolarization. If a cell has Na+ or Ca2+ currents at rest, then inhibition of those currents will also result in a hyperpolarization. This voltage-gated ion channel response is how the hyperpolarization state is achieved. In neurons, the cell enters a state of hyperpolarization immediately following the generation of an action potential. While hyperpolarized, the neuron is in a refractory period that lasts roughly 2 milliseconds, during which the neuron is unable to generate subsequent action potentials. Sodium-potassium ATPases redistribute K+ and Na+ ions until the membrane potential is back to its resting potential of around –70 millivolts, at which point the neuron is once again ready to transmit another action potential. Voltage-gated ion channels and hyperpolarization: Voltage-gated ion channels respond to changes in the membrane potential. Voltage-gated potassium, chloride, and sodium channels are key components in the generation of the action potential as well as hyperpolarization. These channels work by selecting an ion based on electrostatic attraction or repulsion, allowing the ion to bind to the channel. This releases the water molecules bound to the ion, and the ion is passed through the pore. Voltage-gated sodium channels open in response to a stimulus and close again; the channel is either fully open or fully closed, with no partially open state. Sometimes the channel closes but can be reopened right away, known as channel gating, or it can close without being able to reopen right away, known as channel inactivation. Voltage-gated ion channels and hyperpolarization: At resting potential, both the voltage-gated sodium and potassium channels are closed, but as the cell membrane becomes depolarized, the voltage-gated sodium channels begin to open and the neuron begins to depolarize, creating a current feedback loop known as the Hodgkin cycle. However, potassium ions naturally move out of the cell, and if the original depolarization event was not significant enough, the neuron does not generate an action potential. If all the sodium channels are open, however, then the neuron becomes ten times more permeable to sodium than to potassium, quickly depolarizing the cell to a peak of +40 mV. At this level the sodium channels begin to inactivate and the voltage-gated potassium channels begin to open. This combination of closed sodium channels and open potassium channels leads to the neuron repolarizing and becoming negative again. The neuron continues to repolarize until the cell reaches about –75 mV, which is the equilibrium potential of potassium ions. This is the point at which the neuron is hyperpolarized, between –70 mV and –75 mV. After hyperpolarization the potassium channels close, and the natural permeability of the neuron to sodium and potassium allows the neuron to return to its resting potential of –70 mV.
During the refractory period, after hyperpolarization but before the neuron has returned to its resting potential, the neuron is capable of triggering an action potential because the sodium channels are able to reopen; however, because the neuron is more negative, it becomes more difficult to reach the action potential threshold. Voltage-gated ion channels and hyperpolarization: HCN channels are activated by hyperpolarization. Recent research has shown that neuronal refractory periods can exceed 20 milliseconds, which has called the relation between hyperpolarization and the neuronal refractory period into question. Experimental technique: Hyperpolarization is a change in membrane potential. Neuroscientists measure it using a technique known as patch clamping, which allows them to record ion currents passing through individual channels. This is done using a glass micropipette, also called a patch pipette, with a 1 micrometer diameter. A small patch of membrane containing a few ion channels is left exposed, and the rest is sealed off, making this patch the point of entry for the current. Using an amplifier and a voltage clamp, an electronic feedback circuit, the experimenter can hold the membrane potential at a fixed point, and the voltage clamp then measures tiny changes in current flow. The membrane currents giving rise to hyperpolarization are either an increase in outward current or a decrease in inward current. Examples: During the afterhyperpolarization period after an action potential, the membrane potential is more negative than when the cell is at the resting potential. In the figure to the right, this undershoot occurs at approximately 3 to 4 milliseconds (ms) on the time scale. The afterhyperpolarization is the time when the membrane potential is hyperpolarized relative to the resting potential. Examples: During the rising phase of an action potential, the membrane potential changes from negative to positive, a depolarization. In the figure, the rising phase is from approximately 1 to 2 ms on the graph. During the rising phase, once the membrane potential becomes positive, the membrane potential continues to depolarize (overshoot) until the peak of the action potential is reached at about +40 millivolts (mV). After the peak of the action potential, a hyperpolarization repolarizes the membrane potential to its resting value, first by making it less positive, until 0 mV is reached, and then by continuing to make it more negative. This repolarization occurs in the figure from approximately 2 to 3 ms on the time scale.
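The sequence described above, depolarization to a peak near +40 mV, repolarization, and an afterhyperpolarization undershoot toward the potassium equilibrium potential, can be reproduced with a minimal Hodgkin–Huxley-style simulation. The sketch below is illustrative only and not part of the original article; it uses the standard textbook squid-axon parameters and simple forward-Euler integration.

```python
# Minimal Hodgkin-Huxley sketch: a brief current pulse triggers a spike,
# and the voltage trace undershoots the -65 mV start toward E_K ~ -77 mV,
# illustrating the afterhyperpolarization described above.
import numpy as np

C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4              # reversal potentials (mV)

# Standard gating-variable rate constants (V in mV; the removable
# singularities at exactly -40 and -55 mV are ignored in this sketch)
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 20.0                                # time step and duration (ms)
V, m, h, n = -65.0, 0.05, 0.6, 0.32               # rest + approx. steady gates
trace = []
for i in range(int(T / dt)):
    t = i * dt
    I_ext = 15.0 if 1.0 <= t <= 2.0 else 0.0      # brief suprathreshold pulse
    I_Na = g_Na * m**3 * h * (V - E_Na)           # fast inward sodium current
    I_K = g_K * n**4 * (V - E_K)                  # delayed outward potassium
    I_L = g_L * (V - E_L)                         # leak current
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    trace.append(V)

print(f"peak: {max(trace):.1f} mV")               # overshoot, roughly +40 mV
print(f"undershoot: {min(trace):.1f} mV")         # afterhyperpolarization dip
```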
**Infratrochlear nerve** Infratrochlear nerve: The infratrochlear nerve is a branch of the nasociliary nerve (itself a branch of the ophthalmic nerve (CN V1)) in the orbit. It exits the orbit inferior to the trochlea of the superior oblique muscle. It provides sensory innervation to structures of the orbit and to the skin of adjacent structures. Structure: The nasociliary nerve terminates by bifurcating into the infratrochlear and the anterior ethmoidal nerves. The infratrochlear nerve travels anteriorly in the orbit along the upper border of the medial rectus muscle and underneath the trochlea of the superior oblique muscle. It exits the orbit medially and divides into small sensory branches. Distribution: The infratrochlear nerve provides sensory innervation to the skin of the eyelids, the conjunctiva, the lacrimal sac, the lacrimal caruncle, and the side of the nose superior to the medial canthus. Communications: The infratrochlear nerve receives a descending communicating branch from the supratrochlear nerve. Etymology: The infratrochlear nerve is named after a structure it passes under. Infratrochlear means "below the trochlea". The term trochlea means "pulley" in Latin. Specifically, it refers to a fibrocartilaginous loop at the superomedial surface of the orbit through which the tendon of the superior oblique muscle passes.
**Spare parts management** Spare parts management: Service parts management is the main component of a complete strategic service management process that companies use to ensure that the right spare part and resources are at the right place (where the broken part is) at the right time. Spare parts are extra parts that are available and in proximity to a functional item, such as an automobile, boat, or engine, for which they might be used for repair. Economic considerations: Spare parts are sometimes considered uneconomical since:
- the parts might never be used
- the parts might not be stored properly, leading to defects
- maintaining inventory of spare parts has associated costs
- parts may not be available when needed from a supplier

But without the spare part on hand, a company's customer satisfaction levels could drop if a customer has to wait too long for their item to be fixed. Therefore, companies need to plan and align their service parts inventory and workforce resources to achieve optimal customer satisfaction levels with minimal costs. User considerations: The user of the item, which might require the parts, may overlook the economic considerations because:
- the expense is not the user's but the supplier's
- of a known high rate of failure of certain equipment
- of delays in getting the part from a vendor or a supply room, resulting in machine outage
- having the parts on hand requires less "paperwork" when the parts are suddenly needed
- of the mental comfort it provides to the user in knowing the parts are on hand when needed
- the parts are uneconomic to repair, i.e., it is cheaper to discard them than to get them repaired

Cost-effect compromise: In many cases where the item is not stationary, a compromise is reached between cost and statistical probability. Some examples:
- an automobile carries a less-functional "donut" tire as replacement instead of a functionally equivalent tire
- a member of a household buys extra light bulbs since it is probable that one of the lights in the house will eventually burn out and require replacement
- a computer user will purchase a ream of computer paper instead of a sheet at a time
- a race car team will bring another engine to the race track "just in case"
- a ship carries "spare parts" for its engine in case of breakdown at sea

Measures of effectiveness: The effectiveness of spares inventory can be measured by metrics such as fill rate and availability of the end item (a minimal fill-rate calculation is sketched at the end of this entry). Notes: SD-19 is used in conjunction with MIL-HDBK-512, Parts Management guidance. The MIL-HDBK-512 handbook is a guide for Military Acquisition Activities (AA) in the preparation of Requests for Proposals (RFPs) with respect to a parts management program, and will help determine to what extent parts management should be applied for a given program. It will also identify those elements in a proposal to manage the selection and use of parts.
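To make the fill-rate metric concrete, here is a minimal sketch (illustrative only; the function name and the example order data are invented for this entry, not drawn from SD-19 or MIL-HDBK-512). Fill rate is computed as the fraction of demanded units served immediately from on-hand stock:

```python
# Illustrative fill-rate calculation: the fraction of demanded units
# that can be served immediately from on-hand spare-parts stock.
# The function name and example data are hypothetical.
def fill_rate(demands, on_hand):
    """demands: requested quantity per order, in arrival sequence;
    on_hand: starting stock level."""
    total = sum(demands)
    filled = 0
    for qty in demands:
        served = min(qty, on_hand)  # serve as much as stock allows
        filled += served
        on_hand -= served
    return filled / total if total else 1.0

# Example: five orders totalling 9 units against a stock of 7 parts
print(fill_rate([2, 1, 3, 2, 1], on_hand=7))  # -> 0.777..., i.e. ~78%
```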
**Refrigeration** Refrigeration: Refrigeration is any of various types of cooling of a space, substance, or system to lower and/or maintain its temperature below the ambient one (while the removed heat is ejected to a place of higher temperature). Refrigeration is an artificial, or human-made, cooling method. Refrigeration refers to the process by which energy, in the form of heat, is removed from a low-temperature medium and transferred to a high-temperature medium. This work of energy transfer is traditionally driven by mechanical means (whether ice or electromechanical machines), but it can also be driven by heat, magnetism, electricity, laser, or other means. Refrigeration has many applications, including household refrigerators, industrial freezers, cryogenics, and air conditioning. Heat pumps may use the heat output of the refrigeration process, and also may be designed to be reversible, but are otherwise similar to air conditioning units. Refrigeration: Refrigeration has had a large impact on industry, lifestyle, agriculture, and settlement patterns. The idea of preserving food dates back to human prehistory, but for thousands of years humans were limited regarding the means of doing so. They used curing via salting and drying, and they made use of natural coolness in caves, root cellars, and winter weather, but other means of cooling were unavailable. In the 19th century, they began to make use of the ice trade to develop cold chains. In the late 19th through mid-20th centuries, mechanical refrigeration was developed, improved, and greatly expanded in its reach. Refrigeration has thus rapidly evolved in the past century, from ice harvesting to temperature-controlled rail cars, refrigerator trucks, and ubiquitous refrigerators and freezers in both stores and homes in many countries. The introduction of refrigerated rail cars contributed to the settlement of areas that were not on earlier main transport channels such as rivers, harbors, or valley trails. Refrigeration: These new settlement patterns sparked the building of large cities which are able to thrive in areas that were otherwise thought to be inhospitable, such as Houston, Texas, and Las Vegas, Nevada. In most developed countries, cities are heavily dependent upon refrigeration in supermarkets in order to obtain their food for daily consumption. The increase in food sources has led to a larger concentration of agricultural sales coming from a smaller percentage of farms. Farms today have a much larger output per person in comparison to the late 1800s. This has resulted in new food sources available to entire populations, which has had a large impact on the nutrition of society. History: Earliest forms of cooling The seasonal harvesting of snow and ice is an ancient practice estimated to have begun earlier than 1000 BC. A Chinese collection of lyrics from this time period, known as the Shijing, describes religious ceremonies for filling and emptying ice cellars. However, little is known about the construction of these ice cellars or the purpose of the ice. The next ancient society to record the harvesting of ice may have been the Jews in the book of Proverbs, which reads, "As the cold of snow in the time of harvest, so is a faithful messenger to them who sent him." Historians have interpreted this to mean that the Jews used ice to cool beverages rather than to preserve food. Other ancient cultures such as the Greeks and the Romans dug large snow pits insulated with grass, chaff, or branches of trees as cold storage.
Like the Jews, the Greeks and Romans did not use ice and snow to preserve food, but primarily as a means to cool beverages. Egyptians cooled water by evaporation in shallow earthen jars on the roofs of their houses at night. The ancient people of India used this same concept to produce ice. The Persians stored ice in a pit called a Yakhchal and may have been the first group of people to use cold storage to preserve food. In the Australian outback, before a reliable electricity supply was available, many farmers used a Coolgardie safe, consisting of a room with hessian (burlap) curtains hanging from the ceiling soaked in water. The water would evaporate and thereby cool the room, allowing many perishables such as fruit, butter, and cured meats to be kept. History: Ice harvesting Before 1830, few Americans used ice to refrigerate foods due to a lack of ice-storehouses and iceboxes. As these two things became more widely available, individuals used axes and saws to harvest ice for their storehouses. This method proved to be difficult, dangerous, and certainly did not resemble anything that could be duplicated on a commercial scale. Despite the difficulties of harvesting ice, Frederic Tudor thought that he could capitalize on this new commodity by harvesting ice in New England and shipping it to the Caribbean islands as well as the southern states. In the beginning, Tudor lost thousands of dollars, but eventually turned a profit as he constructed icehouses in Charleston, Virginia and in the Cuban port town of Havana. These icehouses, as well as better insulated ships, helped reduce ice wastage from 66% to 8%. This efficiency gain influenced Tudor to expand his ice market to other towns with icehouses such as New Orleans and Savannah. This ice market further expanded as harvesting ice became faster and cheaper after one of Tudor's suppliers, Nathaniel Wyeth, invented a horse-drawn ice cutter in 1825. This invention, as well as Tudor's success, inspired others to get involved in the ice trade, and the ice industry grew. History: Ice became a mass-market commodity by the early 1830s, with the price of ice dropping from six cents per pound to half a cent per pound. In New York City, ice consumption increased from 12,000 tons in 1843 to 100,000 tons in 1856. Boston's consumption leapt from 6,000 tons to 85,000 tons during that same period. Ice harvesting created a "cooling culture" as the majority of people used ice and iceboxes to store their dairy products, fish, meat, and even fruits and vegetables. These early cold storage practices paved the way for many Americans to accept the refrigeration technology that would soon take over the country. History: Refrigeration research The history of artificial refrigeration began when Scottish professor William Cullen designed a small refrigerating machine in 1755. Cullen used a pump to create a partial vacuum over a container of diethyl ether, which then boiled, absorbing heat from the surrounding air. The experiment even created a small amount of ice, but had no practical application at that time. History: In 1758, Benjamin Franklin and John Hadley, a professor of chemistry, collaborated on a project investigating the principle of evaporation as a means to rapidly cool an object at Cambridge University, England. They confirmed that the evaporation of highly volatile liquids, such as alcohol and ether, could be used to drive down the temperature of an object past the freezing point of water.
They conducted their experiment with the bulb of a mercury thermometer as their object and with a bellows used to quicken the evaporation; they lowered the temperature of the thermometer bulb down to −14 °C (7 °F), while the ambient temperature was 18 °C (65 °F). They noted that soon after they passed the freezing point of water, 0 °C (32 °F), a thin film of ice formed on the surface of the thermometer's bulb and that the ice mass was about 6.4 millimetres (1⁄4 in) thick when they stopped the experiment upon reaching −14 °C (7 °F). Franklin wrote, "From this experiment, one may see the possibility of freezing a man to death on a warm summer's day". In 1805, American inventor Oliver Evans described a closed vapor-compression refrigeration cycle for the production of ice by ether under vacuum. History: In 1820, the English scientist Michael Faraday liquefied ammonia and other gases by using high pressures and low temperatures, and in 1834, an American expatriate to Great Britain, Jacob Perkins, built the first working vapor-compression refrigeration system in the world. It was a closed cycle that could operate continuously, as he described in his patent: "I am enabled to use volatile fluids for the purpose of producing the cooling or freezing of fluids, and yet at the same time constantly condensing such volatile fluids, and bringing them again into operation without waste." His prototype system worked, although it did not succeed commercially. In 1842, a similar attempt was made by American physician John Gorrie, who built a working prototype, but it was a commercial failure. Like many of the medical experts during this time, Gorrie thought too much exposure to tropical heat led to mental and physical degeneration, as well as the spread of diseases such as malaria. He conceived the idea of using his refrigeration system to cool the air for comfort in homes and hospitals to prevent disease. American engineer Alexander Twining took out a British patent in 1850 for a vapor-compression system that used ether.
Carl von Linde, an engineer specializing in steam locomotives and professor of engineering at the Technological University of Munich in Germany, began researching refrigeration in the 1860s and 1870s in response to demand from brewers for a technology that would allow year-round, large-scale production of lager; he patented an improved method of liquefying gases in 1876. His new process made it possible to use gases such as ammonia, sulfur dioxide (SO2), and methyl chloride (CH3Cl) as refrigerants, and they were widely used for that purpose until the late 1920s. History: Thaddeus Lowe, an American balloonist, held several patents on ice-making machines. His "Compression Ice Machine" would revolutionize the cold-storage industry. In 1869, he and other investors purchased an old steamship onto which they loaded one of Lowe's refrigeration units and began shipping fresh fruit from New York to the Gulf Coast area, and fresh meat from Galveston, Texas back to New York, but because of Lowe's lack of knowledge about shipping, the business was a costly failure. History: Commercial use In 1842, John Gorrie created a system capable of refrigerating water to produce ice. Although it was a commercial failure, it inspired scientists and inventors around the world. France's Ferdinand Carré was one of those inspired, and he created an ice-producing system that was simpler and smaller than that of Gorrie. During the Civil War, cities such as New Orleans could no longer get ice from New England via the coastal ice trade. Carré's refrigeration system became the solution to New Orleans' ice problems and, by 1865, the city had three of Carré's machines. In 1867, in San Antonio, Texas, a French immigrant named Andrew Muhl built an ice-making machine to help service the expanding beef industry before moving it to Waco in 1871. In 1873, the patent for this machine was contracted by the Columbus Iron Works, a company acquired by the W.C. Bradley Co., which went on to produce the first commercial ice-makers in the US. History: By the 1870s, breweries had become the largest users of harvested ice. Though the ice-harvesting industry had grown immensely by the turn of the 20th century, pollution and sewage had begun to creep into natural ice, making it a problem in the metropolitan suburbs. Eventually, breweries began to complain of tainted ice. Public concern for the purity of water, from which ice was formed, began to increase in the early 1900s with the rise of germ theory. Numerous media outlets published articles connecting diseases such as typhoid fever with natural ice consumption. This caused ice harvesting to become illegal in certain areas of the country. All of these scenarios increased the demand for modern refrigeration and manufactured ice. Ice-producing machines like those of Carré and Muhl were looked to as a means of producing ice to meet the needs of grocers, farmers, and food shippers. Refrigerated railroad cars were introduced in the US in the 1840s for short-run transport of dairy products, but these used harvested ice to maintain a cool temperature. History: The new refrigerating technology first met with widespread industrial use as a means to freeze meat supplies for transport by sea in reefer ships from the British Dominions and other countries to the British Isles.
Although not actually the first to achieve successful transportation of frozen goods overseas (the Strathleven had arrived at the London docks on 2 February 1880 with a cargo of frozen beef, mutton and butter from Sydney and Melbourne), the breakthrough is often attributed to William Soltau Davidson, an entrepreneur who had emigrated to New Zealand. Davidson thought that Britain's rising population and meat demand could mitigate the slump in world wool markets that was heavily affecting New Zealand. After extensive research, he commissioned the Dunedin to be refitted with a compression refrigeration unit for meat shipment in 1881. On February 15, 1882, the Dunedin sailed for London with what was to be the first commercially successful refrigerated shipping voyage, and the foundation of the refrigerated meat industry. The Times commented "Today we have to record such a triumph over physical difficulties, as would have been incredible, even unimaginable, a very few days ago...". The Marlborough, sister ship to the Dunedin, was immediately converted and joined the trade the following year, along with the rival New Zealand Shipping Company vessel Mataurua, while the German steamer Marsala began carrying frozen New Zealand lamb in December 1882. Within five years, 172 shipments of frozen meat were sent from New Zealand to the United Kingdom, of which only 9 had significant amounts of meat condemned. Refrigerated shipping also led to a broader meat and dairy boom in Australasia and South America. J & E Hall of Dartford, England, outfitted the SS Selembria with a vapor compression system to bring 30,000 carcasses of mutton from the Falkland Islands in 1886. In the years ahead, the industry rapidly expanded to Australia, Argentina and the United States.
In 1803, Thomas Moore patented a metal-lined butter-storage tub which became the prototype for most iceboxes. These iceboxes were used until nearly 1910 and the technology did not progress. In fact, consumers that used the icebox in 1910 faced the same challenge of a moldy and stinky icebox that consumers had in the early 1800s. General Electric (GE) was one of the first companies to overcome these challenges. In 1911, GE released a household refrigeration unit that was powered by gas. The use of gas eliminated the need for an electric compressor motor and decreased the size of the refrigerator. However, electric companies that were customers of GE did not benefit from a gas-powered unit. Thus, GE invested in developing an electric model. In 1927, GE released the Monitor Top, the first refrigerator to run on electricity. In 1930, Frigidaire, one of GE's main competitors, synthesized Freon. With the invention of synthetic refrigerants based mostly on a chlorofluorocarbon (CFC) chemical, safer refrigerators were possible for home and consumer use. Freon led to the development of smaller, lighter, and cheaper refrigerators. The average price of a refrigerator dropped from $275 to $154 with the synthesis of Freon. This lower price allowed ownership of refrigerators in American households to exceed 50% by 1940. Freon is a trademark of the DuPont Corporation and refers to these CFC, and later hydrochlorofluorocarbon (HCFC) and hydrofluorocarbon (HFC), refrigerants developed in the late 1920s. These refrigerants were considered at the time to be less harmful than the refrigerants then in common use, including methyl formate, ammonia, methyl chloride, and sulfur dioxide. The intent was to provide refrigeration equipment for home use without danger. These CFC refrigerants answered that need. In the 1970s, though, the compounds were found to be reacting with atmospheric ozone, an important protection against solar ultraviolet radiation, and their use as refrigerants worldwide was curtailed in the Montreal Protocol of 1987. Impact on settlement patterns in the United States of America: In the last century, refrigeration allowed new settlement patterns to emerge. This new technology allowed new areas to be settled that are not on a natural channel of transport such as a river, valley trail, or harbor, and that might otherwise not have been settled. Refrigeration gave early settlers opportunities to expand westward and into rural areas that were unpopulated. These new settlers, with rich and untapped soil, saw an opportunity to profit by sending raw goods to the eastern cities and states. In the 20th century, refrigeration made "Galactic Cities" such as Dallas, Phoenix, and Los Angeles possible. Impact on settlement patterns in the United States of America: Refrigerated rail cars The refrigerated rail car (refrigerated van or refrigerator car), along with the dense railroad network, became an exceedingly important link between the marketplace and the farm, allowing for a national opportunity rather than just a regional one. Before the invention of the refrigerated rail car, it was impossible to ship perishable food products long distances. The beef packing industry made the first demand push for refrigeration cars. The railroad companies were slow to adopt this new invention because of their heavy investments in cattle cars, stockyards, and feedlots.
Refrigeration cars were also complex and costly compared to other rail cars, which also slowed the adoption of the refrigerated rail car. After the slow adoption of the refrigerated car, the beef packing industry dominated the refrigerated rail car business with its ability to control ice plants and the setting of icing fees. The United States Department of Agriculture estimated that, in 1916, over sixty-nine percent of the cattle killed in the country were slaughtered in plants involved in interstate trade. The same companies that were involved in the meat trade later extended refrigerated transport to include vegetables and fruit. The meat packing companies had much of the expensive machinery, such as refrigerated cars and cold storage facilities, that allowed them to effectively distribute all types of perishable goods. During World War I, a national refrigerator car pool was established by the United States Administration to deal with the problem of idle cars, and it was continued after the war. The idle car problem was the problem of refrigeration cars sitting unused between seasonal harvests. This meant that very expensive cars sat in rail yards for a good portion of the year while making no revenue for the car's owner. The car pool was a system where cars were distributed to areas as crops matured, ensuring maximum use of the cars. Refrigerated rail cars moved eastward from vineyards, orchards, fields, and gardens in western states to satisfy America's consuming market in the east. The refrigerated car made it possible to transport perishable crops hundreds and even thousands of kilometres or miles. The most noticeable effect of the car was the regional specialization of vegetables and fruits. The refrigerated rail car was widely used for the transportation of perishable goods up until the 1950s. By the 1960s, the nation's interstate highway system was adequately complete, allowing trucks to carry the majority of the perishable food loads and to push out the old system of the refrigerated rail cars. Impact on settlement patterns in the United States of America: Expansion west and into rural areas The widespread use of refrigeration allowed a vast number of new agricultural opportunities to open up in the United States. New markets emerged throughout the United States in areas that were previously uninhabited and far removed from heavily populated areas. New agricultural opportunity presented itself in areas that were considered rural, such as states in the south and in the west. Shipments on a large scale from the south and California were both made around the same time, although natural ice from the Sierras was used in California rather than the manufactured ice used in the south. Refrigeration allowed many areas to specialize in the growing of specific fruits. California specialized in several fruits: grapes, peaches, pears, plums, and apples, while Georgia became famous specifically for its peaches. In California, the acceptance of the refrigerated rail cars led to an increase from 4,500 carloads in 1895 to between 8,000 and 10,000 carloads in 1905. The Gulf States, Arkansas, Missouri and Tennessee entered into strawberry production on a large scale, while Mississippi became the center of the tomato industry. New Mexico, Colorado, Arizona, and Nevada grew cantaloupes. Without refrigeration, this would not have been possible.
By 1917, well-established fruit and vegetable areas that were close to eastern markets felt the pressure of competition from these distant specialized centers. Refrigeration was not limited to meat, fruit, and vegetables; it also encompassed dairy products and dairy farms. In the early twentieth century, large cities got their dairy supply from farms as far away as 640 kilometres (400 mi). Dairy products were not as easily transported over great distances as fruits and vegetables, due to their greater perishability. Refrigeration made production possible in the west, far from eastern markets, so much so that dairy farmers could pay the transportation cost and still undersell their eastern competitors. Refrigeration and the refrigerated rail gave opportunity to areas with rich soil far from natural channels of transport such as rivers, valley trails, or harbors. Impact on settlement patterns in the United States of America: Rise of the galactic city "Edge city" was a term coined by Joel Garreau, whereas the term "galactic city" was coined by Lewis Mumford. These terms refer to a concentration of business, shopping, and entertainment outside a traditional downtown or central business district in what had previously been a residential or rural area. There were several factors contributing to the growth of these cities, such as Los Angeles, Las Vegas, Houston, and Phoenix. The factors that contributed to these large cities include reliable automobiles, highway systems, refrigeration, and agricultural production increases. Large cities such as the ones mentioned above have not been uncommon in history, but what separates these cities from the rest is that they are not along some natural channel of transport, or at some crossroad of two or more channels such as a trail, harbor, mountain, river, or valley. These large cities have been developed in areas that only a few hundred years ago would have been uninhabitable. Without a cost-efficient way of cooling air and transporting water and food from great distances, these large cities would never have developed. The rapid growth of these cities was influenced by refrigeration and an agricultural productivity increase, allowing more distant farms to effectively feed the population. Impact on agriculture and food production: Agriculture's role in developed countries has drastically changed in the last century due to many factors, including refrigeration. Statistics from the 2007 census give information on the large concentration of agricultural sales coming from a small portion of the existing farms in the United States today. This is a partial result of the market created for the frozen meat trade by the first successful shipment of frozen sheep carcasses coming from New Zealand in the 1880s. As the market continued to grow, regulations on food processing and quality began to be enforced. Eventually, electricity was introduced into rural homes in the United States, which allowed refrigeration technology to continue to expand on the farm, increasing output per person. Today, refrigeration's use on the farm reduces humidity levels, avoids spoiling due to bacterial growth, and assists in preservation. Impact on agriculture and food production: Demographics The introduction of refrigeration and the evolution of additional technologies drastically changed agriculture in the United States. During the beginning of the 20th century, farming was a common occupation and lifestyle for United States citizens, as most farmers actually lived on their farm.
In 1935, there were 6.8 million farms in the United States and a population of 127 million. Yet, while the United States population has continued to climb, the number of citizens pursuing agriculture continues to decline. Based on the 2007 US Census, fewer than one percent of a population of 310 million people claim farming as an occupation today. However, the increasing population has led to an increasing demand for agricultural products, which is met through a greater variety of crops, fertilizers, pesticides, and improved technology. Improved technology has decreased the risk and time involved in agricultural management and allows larger farms to increase their output per person to meet society's demand. Impact on agriculture and food production: Meat packing and trade Prior to 1882, the South Island of New Zealand had been experimenting with sowing grass and crossbreeding sheep, which immediately gave its farmers economic potential in the exportation of meat. In 1882, the first successful shipment of sheep carcasses was sent from Port Chalmers in Dunedin, New Zealand, to London. By the 1890s, the frozen meat trade became increasingly profitable in New Zealand, especially in Canterbury, where 50% of exported sheep carcasses came from in 1900. It wasn't long before Canterbury meat was known for the highest quality, creating a demand for New Zealand meat around the world. In order to meet this new demand, the farmers improved their feed so sheep could be ready for the slaughter in only seven months. This new method of shipping led to an economic boom in New Zealand by the mid-1890s. In the United States, the Meat Inspection Act of 1891 was put in place because local butchers felt the refrigerated railcar system was unwholesome. When meat packing began to take off, consumers became nervous about the quality of the meat for consumption. Upton Sinclair's 1906 novel The Jungle brought negative attention to the meat packing industry by bringing to light unsanitary working conditions and the processing of diseased animals. The book caught the attention of President Theodore Roosevelt, and the 1906 Meat Inspection Act was put into place as an amendment to the Meat Inspection Act of 1891. This new act focused on the quality of the meat and the environment in which it is processed. Impact on agriculture and food production: Electricity in rural areas In the early 1930s, 90 percent of the urban population of the United States had electric power, in comparison to only 10 percent of rural homes. At the time, power companies did not feel that extending power to rural areas (rural electrification) would produce enough profit to make it worth their while. However, in the midst of the Great Depression, President Franklin D. Roosevelt realized that rural areas would continue to lag behind urban areas in both poverty and production if they were not electrically wired. On May 11, 1935, the president signed an executive order establishing the Rural Electrification Administration, also known as the REA. The agency provided loans to fund electric infrastructure in the rural areas. In just a few years, 300,000 people in rural areas of the United States had received power in their homes. Impact on agriculture and food production: While electricity dramatically improved working conditions on farms, it also had a large impact on the safety of food production. Refrigeration systems were introduced to the farming and food distribution processes, which helped in food preservation and kept food supplies safe.
Refrigeration also allowed for shipment of perishable commodities throughout the United States. As a result, United States farmers quickly became the most productive in the world, and entire new food systems arose. Impact on agriculture and food production: Farm use In order to reduce humidity levels and spoiling due to bacterial growth, refrigeration is used for meat, produce, and dairy processing in farming today. Refrigeration systems are used most heavily in the warmer months for farm produce, which must be cooled as soon as possible in order to meet quality standards and increase its shelf life. Meanwhile, dairy farms refrigerate milk year round to avoid spoilage. Effects on lifestyle and diet: In the late 19th century and into the very early 20th century, except for staple foods (sugar, rice, and beans) that needed no refrigeration, the available foods were affected heavily by the seasons and what could be grown locally. Refrigeration has removed these limitations. Refrigeration played a large part in the feasibility and then popularity of the modern supermarket. Fruits and vegetables out of season, or grown in distant locations, are now available at relatively low prices. Refrigerators have led to a huge increase in meat and dairy products as a portion of overall supermarket sales. As well as changing the goods purchased at the market, the ability to store these foods for extended periods of time has led to an increase in leisure time. Prior to the advent of the household refrigerator, people would have to shop on a daily basis for the supplies needed for their meals. Effects on lifestyle and diet: Impact on nutrition The introduction of refrigeration allowed for the hygienic handling and storage of perishables, and as such, promoted output growth, consumption, and the availability of nutrition. The change in food preservation methods also moved diets away from heavy salting toward a more manageable sodium level. The ability to move and store perishables such as meat and dairy led to dairy consumption increasing by 1.7% and overall protein intake by 1.25% annually in the US after the 1890s. People were not only consuming these perishables because it became easier for them to store the products themselves, but because the innovations in refrigerated transportation and storage led to less spoilage and waste, thereby driving the prices of these products down. Refrigeration accounts for at least 5.1% of the increase in adult stature (in the US) through improved nutrition, and when the indirect effects associated with improvements in the quality of nutrients and the reduction in illness are additionally factored in, the overall impact becomes considerably larger. Recent studies have also shown a negative relationship between the number of refrigerators in a household and the rate of gastric cancer mortality. Current applications of refrigeration: Probably the most widely used current applications of refrigeration are for air conditioning of private homes and public buildings, and refrigerating foodstuffs in homes, restaurants and large storage warehouses. The use of refrigerators and walk-in coolers and freezers in kitchens, factories and warehouses for storing and processing fruits and vegetables has allowed adding fresh salads to the modern diet year round, and storing fish and meats safely for long periods. Current applications of refrigeration: The optimum temperature range for perishable food storage is 3 to 5 °C (37 to 41 °F). In commerce and manufacturing, there are many uses for refrigeration.
Refrigeration is used to liquefy gases such as oxygen, nitrogen, propane, and methane. In compressed air purification, it is used to condense water vapor from compressed air to reduce its moisture content. In oil refineries, chemical plants, and petrochemical plants, refrigeration is used to maintain certain processes at their needed low temperatures (for example, in alkylation of butenes and butane to produce a high-octane gasoline component). Metal workers use refrigeration to temper steel and cutlery. When transporting temperature-sensitive foodstuffs and other materials by trucks, trains, airplanes and seagoing vessels, refrigeration is a necessity. Current applications of refrigeration: Dairy products are constantly in need of refrigeration, and it was only discovered in the past few decades that eggs needed to be refrigerated during shipment rather than waiting to be refrigerated after arrival at the grocery store. Meats, poultry and fish all must be kept in climate-controlled environments before being sold. Refrigeration also helps keep fruits and vegetables edible longer. Current applications of refrigeration: One of the most influential uses of refrigeration was in the development of the sushi/sashimi industry in Japan. Before the advent of refrigeration, many sushi connoisseurs were at risk of contracting diseases. The dangers of unrefrigerated sashimi were not brought to light for decades due to the lack of research and healthcare distribution across rural Japan. Around mid-century, the Zojirushi corporation, based in Kyoto, made breakthroughs in refrigerator designs, making refrigerators cheaper and more accessible for restaurant proprietors and the general public. Methods of refrigeration: Methods of refrigeration can be classified as non-cyclic, cyclic, thermoelectric and magnetic. Methods of refrigeration: Non-cyclic refrigeration This refrigeration method cools a contained area by melting ice, or by sublimating dry ice. Perhaps the simplest example of this is a portable cooler, where items are placed inside and ice is poured over the top. Regular ice can maintain temperatures near, but not below, the freezing point, unless salt is used to cool the ice down further (as in a traditional ice-cream maker). Dry ice can reliably bring the temperature well below the freezing point of water. Methods of refrigeration: Cyclic refrigeration This consists of a refrigeration cycle, where heat is removed from a low-temperature space or source and rejected to a high-temperature sink with the help of external work, and its inverse, the thermodynamic power cycle. In the power cycle, heat is supplied from a high-temperature source to the engine, part of the heat being used to produce work and the rest being rejected to a low-temperature sink. This satisfies the second law of thermodynamics. Methods of refrigeration: A refrigeration cycle describes the changes that take place in the refrigerant as it alternately absorbs and rejects heat as it circulates through a refrigerator. It is also applied to heating, ventilation, air conditioning, and refrigeration (HVACR) work, when describing the "process" of refrigerant flow through an HVACR unit, whether it is a packaged or split system. Methods of refrigeration: Heat naturally flows from hot to cold. Work is applied to cool a living space or storage volume by pumping heat from a lower-temperature heat source into a higher-temperature heat sink. Insulation is used to reduce the work and energy needed to achieve and maintain a lower temperature in the cooled space.
The operating principle of the refrigeration cycle was described mathematically by Sadi Carnot in 1824 as a heat engine. Methods of refrigeration: The most common types of refrigeration systems use the reverse-Rankine vapor-compression refrigeration cycle, although absorption heat pumps are used in a minority of applications. Methods of refrigeration: Cyclic refrigeration can be classified as:
- vapor cycle, and
- gas cycle

Vapor cycle refrigeration can further be classified as:
- vapor-compression refrigeration
- sorption refrigeration (vapor-absorption refrigeration and adsorption refrigeration)

Vapor-compression cycle: The vapor-compression cycle is used in most household refrigerators as well as in many large commercial and industrial refrigeration systems. Figure 1 provides a schematic diagram of the components of a typical vapor-compression refrigeration system. Methods of refrigeration: The thermodynamics of the cycle can be analyzed on a diagram as shown in Figure 2. In this cycle, a circulating refrigerant such as a low-boiling hydrocarbon or a hydrofluorocarbon enters the compressor as a vapor. From point 1 to point 2, the vapor is compressed at constant entropy and exits the compressor as a superheated vapor at a higher temperature and pressure. From point 2 to point 3 and on to point 4, the vapor travels through the condenser, which first cools the vapor until it starts condensing, and then condenses the vapor into a liquid by removing additional heat at constant pressure and temperature. Between points 4 and 5, the liquid refrigerant goes through the expansion valve (also called a throttle valve), where its pressure abruptly decreases, causing flash evaporation and auto-refrigeration of, typically, less than half of the liquid. Methods of refrigeration: That results in a mixture of liquid and vapor at a lower temperature and pressure, as shown at point 5. The cold liquid-vapor mixture then travels through the evaporator coil or tubes and is completely vaporized by cooling the warm air (from the space being refrigerated) being blown by a fan across the evaporator coil or tubes. The resulting refrigerant vapor returns to the compressor inlet at point 1 to complete the thermodynamic cycle. Methods of refrigeration: The above discussion is based on the ideal vapor-compression refrigeration cycle and does not take into account real-world effects like frictional pressure drop in the system, slight thermodynamic irreversibility during the compression of the refrigerant vapor, or non-ideal gas behavior, if any. Vapor-compression refrigerators can be arranged in two stages in cascade refrigeration systems, with the second stage cooling the condenser of the first stage. This can be used for achieving very low temperatures. Methods of refrigeration: More information about the design and performance of vapor-compression refrigeration systems is available in the classic Perry's Chemical Engineers' Handbook. Methods of refrigeration: Sorption cycle: Absorption cycle In the early years of the twentieth century, the vapor absorption cycle using water-ammonia systems or LiBr-water was popular and widely used. After the development of the vapor compression cycle, the vapor absorption cycle lost much of its importance because of its low coefficient of performance (about one fifth of that of the vapor compression cycle). Today, the vapor absorption cycle is used mainly where fuel for heating is available but electricity is not, such as in recreational vehicles that carry LP gas.
It is also used in industrial environments where plentiful waste heat overcomes its inefficiency. Methods of refrigeration: The absorption cycle is similar to the compression cycle, except for the method of raising the pressure of the refrigerant vapor. In the absorption system, the compressor is replaced by an absorber, which dissolves the refrigerant in a suitable liquid; a liquid pump, which raises the pressure; and a generator which, on heat addition, drives off the refrigerant vapor from the high-pressure liquid. Some work is needed by the liquid pump but, for a given quantity of refrigerant, it is much smaller than that needed by the compressor in the vapor compression cycle. In an absorption refrigerator, a suitable combination of refrigerant and absorbent is used. The most common combinations are ammonia (refrigerant) with water (absorbent), and water (refrigerant) with lithium bromide (absorbent). Methods of refrigeration: Adsorption cycle The main difference from the absorption cycle is that in the adsorption cycle the adsorbent is a solid, such as silica gel, activated carbon, or zeolite, rather than the liquid absorbent of the absorption cycle; the refrigerant (adsorbate) can be ammonia, water, methanol, etc. Adsorption refrigeration technology has been extensively researched in the last 30 years because the operation of an adsorption refrigeration system is often noiseless, non-corrosive, and environmentally friendly. Methods of refrigeration: Gas cycle When the working fluid is a gas that is compressed and expanded but doesn't change phase, the refrigeration cycle is called a gas cycle. Air is most often the working fluid. As there is no condensation and evaporation intended in a gas cycle, the components corresponding to the condenser and evaporator in a vapor-compression cycle are the hot and cold gas-to-gas heat exchangers. Methods of refrigeration: The gas cycle is less efficient than the vapor compression cycle because the gas cycle works on the reverse Brayton cycle instead of the reverse Rankine cycle. As such, the working fluid does not receive and reject heat at constant temperature. In the gas cycle, the refrigeration effect is equal to the product of the specific heat of the gas and the rise in temperature of the gas on the low-temperature side. Therefore, for the same cooling load, a gas refrigeration cycle needs a large mass flow rate and is bulky. Methods of refrigeration: Because of their lower efficiency and larger bulk, air cycle coolers are not often used nowadays in terrestrial cooling devices. However, the air cycle machine is very common on gas turbine-powered jet aircraft as cooling and ventilation units, because compressed air is readily available from the engines' compressor sections. Such units also serve the purpose of pressurizing the aircraft. Methods of refrigeration: Thermoelectric refrigeration Thermoelectric cooling uses the Peltier effect to create a heat flux at the junction of two different types of material. This effect is commonly used in camping and portable coolers and for cooling electronic components and small instruments.
Peltier coolers are often used where a traditional vapor-compression refrigerator would be impractical or take up too much space. They are also used in cooled image sensors as an easy, compact and lightweight, if inefficient, way to achieve very low temperatures: two or more Peltier elements are stacked on top of each other in a cascade refrigeration configuration, with each stage larger than the one before it, so that it can extract both the heat pumped by and the waste heat generated by the previous stages. Peltier cooling has a low COP (efficiency) compared with the vapor-compression cycle, so it emits more waste heat (heat generated by the Peltier element or cooling mechanism) and consumes more power for a given cooling capacity. Methods of refrigeration: Magnetic refrigeration Magnetic refrigeration, or adiabatic demagnetization, is a cooling technology based on the magnetocaloric effect, an intrinsic property of magnetic solids. The refrigerant is often a paramagnetic salt, such as cerium magnesium nitrate. The active magnetic dipoles in this case are those of the electron shells of the paramagnetic atoms. Methods of refrigeration: A strong magnetic field is applied to the refrigerant, forcing its various magnetic dipoles to align and putting these degrees of freedom of the refrigerant into a state of lowered entropy. A heat sink then absorbs the heat released by the refrigerant due to its loss of entropy. Thermal contact with the heat sink is then broken so that the system is insulated, and the magnetic field is switched off. This increases the heat capacity of the refrigerant, thus decreasing its temperature below the temperature of the heat sink. Methods of refrigeration: Because few materials exhibit the needed properties at room temperature, applications have so far been limited to cryogenics and research. Methods of refrigeration: Other methods Other methods of refrigeration include the air cycle machine used in aircraft; the vortex tube, used for spot cooling when compressed air is available; thermoacoustic refrigeration, using sound waves in a pressurized gas to drive heat transfer and heat exchange; steam jet cooling, popular in the early 1930s for air conditioning large buildings; and thermoelastic cooling, using a smart metal alloy that is stretched and relaxed. Many Stirling cycle heat engines can be run backwards to act as a refrigerator, and therefore these engines have a niche use in cryogenics. In addition, there are other types of cryocoolers such as Gifford-McMahon coolers, Joule-Thomson coolers, pulse-tube refrigerators and, for temperatures between 2 mK and 500 mK, dilution refrigerators. Methods of refrigeration: Elastocaloric refrigeration Another potential solid-state refrigeration technique, and a relatively new area of study, comes from a special property of superelastic materials. These materials undergo a temperature change when experiencing an applied mechanical stress (called the elastocaloric effect). Since superelastic materials deform reversibly at high strains, the material exhibits a flattened elastic region in its stress-strain curve, caused by a phase transformation from an austenitic to a martensitic crystal phase. Methods of refrigeration: When a superelastic material experiences a stress in the austenitic phase, it undergoes an exothermic phase transformation to the martensitic phase, which causes the material to heat up.
Removing the stress reverses the process, restores the material to its austenitic phase, and absorbs heat from the surroundings, cooling down the material. Methods of refrigeration: The most appealing part of this research is how potentially energy-efficient and environmentally friendly this cooling technology is. The materials used, commonly shape-memory alloys such as nitinol and Cu-Zn-Al, provide a non-toxic source of emission-free refrigeration. Nitinol is one of the more promising alloys, with an output heat of about 66 J/cm3 and a temperature change of about 16–20 K. Due to the difficulty in manufacturing some of the shape-memory alloys, alternative materials like natural rubber have been studied. Even though rubber may not give off as much heat per volume (12 J/cm3) as the shape-memory alloys, it still generates a comparable temperature change of about 12 K and operates at a suitable temperature range, low stresses, and low cost. The main challenge, however, comes from potential energy losses in the form of hysteresis, often associated with this process. Since most of these losses come from incompatibilities between the two phases, proper alloy tuning is necessary to reduce losses and increase reversibility and efficiency. Balancing the transformation strain of the material with the energy losses enables a large elastocaloric effect to occur and potentially a new alternative for refrigeration. Methods of refrigeration: Fridge Gate The Fridge Gate method is a theoretical application of using a single logic gate to drive a refrigerator in the most energy-efficient way possible without violating the laws of thermodynamics. It operates on the fact that there are two energy states in which a particle can exist: the ground state (g) and the excited state (e). The excited state carries a little more energy than the ground state, small enough so that the transition occurs with high probability. There are three components or particle types associated with the fridge gate. The first is on the interior of the refrigerator, the second on the outside, and the third is connected to a power supply which heats it up every so often so that it can reach the e state and replenish the source. In the cooling step, on the inside of the refrigerator, the g-state particle absorbs energy from ambient particles, cooling them, and itself jumps to the e state. In the second step, on the outside of the refrigerator, where the particles are also in an e state, the particle falls to the g state, releasing energy and heating the outside particles. In the third and final step, the power supply moves a particle in the e state, and when it falls to the g state it induces an energy-neutral swap where the interior e particle is replaced by a new g particle, restarting the cycle. Methods of refrigeration: Passive systems When combining a passive daytime radiative cooling system with thermal insulation and evaporative cooling, one study found a 300% increase in ambient cooling power when compared to a stand-alone radiative cooling surface, which could extend the shelf life of food by 40% in humid climates and 200% in desert climates without refrigeration. The system's evaporative cooling layer would require water "re-charges" every 10 days to a month in humid areas and every 4 days in hot and dry areas. Capacity ratings: The refrigeration capacity of a refrigeration system is the product of the evaporators' enthalpy rise and the evaporators' mass flow rate.
The measured capacity of refrigeration is often expressed in kW or BTU/h. Domestic and commercial refrigerators may be rated in kJ/s, or BTU/h of cooling. For commercial and industrial refrigeration systems, the kilowatt (kW) is the basic unit of refrigeration, except in North America, where both the ton of refrigeration (TR) and BTU/h are used. Capacity ratings: A refrigeration system's coefficient of performance (CoP) is very important in determining a system's overall efficiency. It is defined as refrigeration capacity in kW divided by the energy input in kW. While CoP is a very simple measure of performance, it is typically not used for industrial refrigeration in North America. Owners and manufacturers of these systems typically use the performance factor (PF). A system's PF is defined as a system's energy input in horsepower divided by its refrigeration capacity in TR. Both CoP and PF can be applied to either the entire system or to system components. For example, an individual compressor can be rated by comparing the energy needed to run the compressor versus the expected refrigeration capacity based on inlet volume flow rate. It is important to note that both CoP and PF for a refrigeration system are only defined at specific operating conditions, including temperatures and thermal loads. Moving away from the specified operating conditions can dramatically change a system's performance. Capacity ratings: Air conditioning systems used in residential applications typically use SEER (Seasonal Energy Efficiency Ratio) for the energy performance rating. Air conditioning systems for commercial applications often use EER (Energy Efficiency Ratio) and IEER (Integrated Energy Efficiency Ratio) for the energy efficiency performance rating.
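Since these ratings mix several unit systems (kW, BTU/h, tons of refrigeration, horsepower), a small sketch helps make the definitions concrete. The following Python snippet is illustrative only and is not part of the original article; the conversion factors are the standard ones (1 TR = 12,000 BTU/h, 1 kW = 3,412.14 BTU/h, 1 hp = 745.7 W), and the chiller figures in the example are made-up assumptions.

```python
# Illustrative sketch of the capacity and efficiency definitions above.
BTU_H_PER_TR = 12_000.0    # 1 ton of refrigeration = 12,000 BTU/h
BTU_H_PER_KW = 3_412.14    # 1 kW = 3,412.14 BTU/h
W_PER_HP = 745.7           # 1 horsepower = 745.7 W

def tr_to_kw(tons: float) -> float:
    """Convert tons of refrigeration to kilowatts of cooling."""
    return tons * BTU_H_PER_TR / BTU_H_PER_KW

def cop(capacity_kw: float, input_kw: float) -> float:
    """CoP as defined above: cooling capacity / energy input, both in kW."""
    return capacity_kw / input_kw

def performance_factor(input_hp: float, capacity_tr: float) -> float:
    """PF as defined above: energy input in hp per TR of capacity."""
    return input_hp / capacity_tr

# Hypothetical example: a 100 TR chiller drawing 90 kW at its rated condition.
capacity_kw = tr_to_kw(100)                      # ~351.7 kW of cooling
print(f"Capacity: {capacity_kw:.1f} kW")
print(f"CoP: {cop(capacity_kw, 90.0):.2f}")      # ~3.91
print(f"PF: {performance_factor(90_000 / W_PER_HP, 100):.2f} hp/TR")  # ~1.21
```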
**FicML** FicML: FicML (Fiction Markup Language) is an XML format for fictional stories (short stories, novellas, novels, etc.). Originally conceived by multiple contributors, it is an initiative that is still in the process of forming its first specification. XML format: The speculated XML elements in a typical FicML document are:
<ficml version="0.2">: The root element. It must contain the version attribute and one head and one body element.
<head>: Contains metadata. May include any of these optional elements: title, dateCreated, dateModified, authorName, authorEmail.
<body>: Contains the body of the story, the contents of the narrative. It must have one or more story elements.
<story>: Represents the general text of the fictional story. It may contain any number of arbitrary attributes. Common attributes include tense (as in past, present), voice (as in first or third person), and view (as in omniscient or limited).
<character>: Represents where characters appear within a narrative. It may have several attributes such as name, surname, nickname, and role.
<setting>: Represents where sections of a narrative take place. It may have several attributes such as name, type, alt. Setting tags can appear within other setting tags in order to illustrate a relationship: the setting of an apartment would be within the larger setting of a city or building.
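Putting those elements together, a minimal FicML document might look like the sketch below. Since no first specification exists yet, this is an illustrative example only: the element and attribute names come from the list above, while the title, author name, setting names, and story content are invented placeholders.

```xml
<ficml version="0.2">
  <head>
    <title>An Example Story</title>
    <authorName>A. Writer</authorName>
  </head>
  <body>
    <story tense="past" voice="third" view="limited">
      <!-- Nested settings express containment: a building within a city. -->
      <setting name="Springfield" type="city">
        <setting name="the old library" type="building">
          <character name="Mara" role="protagonist">Mara</character>
          shelved the last book and locked the door behind her.
        </setting>
      </setting>
    </story>
  </body>
</ficml>
```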
**Public open space** Public open space: A public open space is defined as an open piece of land, either green space or hard space, to which there is public access. Public open space is often referred to by urban planners and landscape architects by the acronym 'POS'. Varied interpretations of the term are possible. Public open space: 'Public' can mean:
owned by a national or local government body
owned by a 'public' body (e.g. a not-for-profit organization) and held in trust for the public
owned by a private individual or organization but made available for public use, or with available public access; see privately owned public space (POPS)
'Open' can mean:
open for public access
open for public recreation
outdoors, i.e. not a space within a building
vegetated
Depending on which of these definitions is adopted, any of the following could be called public open space:
a public park
a town square
a greenway which is open to the public but runs through farmland or a forest
a public highway
a private road with public access
**Homeokinetics** Homeokinetics: Homeokinetics is the study of self-organizing, complex systems. Standard physics studies systems at separate levels, such as atomic physics, nuclear physics, biophysics, social physics, and galactic physics. Homeokinetic physics studies the up-down processes that bind these levels. Tools such as mechanics, quantum field theory, and the laws of thermodynamics provide the key relationships. The subject, described as the physics and thermodynamics associated with the up-down movement between levels of systems, originated in the late 1970s work of American physicists Harry Soodak and Arthur Iberall. Complex systems are universes, galaxies, social systems, people, or even those that seem as simple as gases. The basic premise is that the entire universe consists of atomistic-like units bound in interactive ensembles to form systems, level by level, in a nested hierarchy. Homeokinetics treats all complex systems on an equal footing, animate and inanimate, providing them with a common viewpoint. The complexity of studying how they work is reduced by the emergence of common languages in all complex systems. History: Arthur Iberall, Warren McCulloch and Harry Soodak developed the concept of homeokinetics as a new branch of physics. It began through Iberall's biophysical research for the NASA exobiology program into the dynamics of mammalian physiological processes. They were observing an area that physics has neglected: that of complex systems, with their very long internal factory-day delays. They were observing systems associated with nested hierarchy and with an extensive range of time-scale processes. It was such connections, referred to as both up-down or in-out connections (as nested hierarchy) and side-side or flatland physics among atomistic-like components (as heterarchy), that became the hallmark of homeokinetic problems. By 1975, they began to put a formal catch-phrase name on those complex problems, associating them with nature, life, human, mind, and society. The major method of exposition that they began using was a combination of engineering physics and a more academic pure physics. In 1981, Iberall was invited to the Crump Institute for Medical Engineering of UCLA, where he further refined the key concepts of homeokinetics, developing a physical scientific foundation for complex systems. Self-organizing complex systems: A system is a collective of interacting 'atomistic'-like entities. The word 'atomism' is used to stand both for the entity and the doctrine. As is known from 'kinetic' theory, in mobile or simple systems the atomisms share their 'energy' in interactive collisions. That so-called 'equipartitioning' process takes place within a few collisions. Physically, if there is little or no interaction, the process is considered to be very weak. Physics deals basically with the forces of interaction, few in number, that influence the interactions. They all tend to emerge with considerable force at high 'density' of atomistic interaction. In complex systems, there is also a result of internal processes in the atomisms. They exhibit, in addition to the pair-by-pair interactions, internal actions such as vibrations, rotations, and association. If the energy and time involved internally create a cycle of performance that is very long compared to their pair interactions, the collective system is complex.
If you eat a cookie and you do not see the resulting action for hours, that is complex; if boy meets girl and they become 'engaged' for a protracted period, that is complex. What emerges from that physics is a broad host of changes of state and stability transitions of state. If Aristotle is viewed as having defined a general basis for systems in their static-logical states, and as trying to identify a logic-metalogic for physics (i.e., metaphysics), then homeokinetics can be viewed as an attempt to define the dynamics of all those systems in the universe. Flatland physics vs. homeokinetic physics: Ordinary physics is a flatland physics, a physics at some particular level. Examples include nuclear and atomic physics, biophysics, social physics, and stellar physics. Homeokinetic physics combines flatland physics with the study of the up-down processes that bind the levels. Tools, such as mechanics, quantum field theory, and the laws of thermodynamics, provide key relationships for the binding of the levels, how they connect, and how the energy flows up and down. And whether the atomisms are atoms, molecules, cells, people, stars, galaxies, or universes, the same tools can be used to understand them. Homeokinetics treats all complex systems on an equal footing, animate and inanimate, providing them with a common viewpoint. The complexity of studying how they work is reduced by the emergence of common languages in all complex systems. Applications: A homeokinetic approach to complex systems has been applied to understanding life, ecological psychology, mind, anthropology, geology, law, motor control, bioenergetics, healing modalities, and political science. It has also been applied to social physics, where a homeokinetic analysis shows that one must account for flow variables such as the flow of energy, of materials, of action, reproduction rate, and value-in-exchange. Iberall's conjectures on life and mind have been used as a springboard to develop theories of mental activity and action.
**Rapid Attack Identification Detection Reporting System** Rapid Attack Identification Detection Reporting System: The Rapid Attack Identification Detection Reporting System, also known as RAIDRS, is a ground-based space control system that provides near real-time event detection. Mission: RAIDRS is a family of systems being designed to detect, report, identify, locate, and classify attacks against military space assets. RAIDRS will include detection sensors, information processors, and a reporting architecture. The RAIDRS system will detect and report attacks on both ground and space-based elements of operational space systems. It will notify operators and users, and carry information to decision-makers. Block 10:
Worldwide network of sensors; centralized management
Detect, identify, and characterize SATCOM electromagnetic interference (EMI)
Identify signal characteristics
Geo-locate SATCOM EMI
Report interference on blue space systems and/or services
Block 20:
Commander's decision support tool that provides Defensive Counterspace (DCS) attack assessment
Integrates and processes critical Space Situational Awareness (SSA) information to provide the integrated space picture that enables DCS operations
Multi-level distributed data fusion; advanced visualization
Contract Information: The RAIDRS system is unique in the acquisitions process for being tailored to small businesses and utilizing commercial off-the-shelf (COTS) hardware and software. According to the Air Force budget, the service intends to spend about $16 million in 2005 on the RAIDRS program; $16.4 million in 2006; $12.1 million in 2007; $12.4 million in 2008; and $66.6 million in 2009. Contractor: Kratos Defense & Security Solutions Locations: Peterson AFB, Colorado (2007–present) (Central Operating Location)
**Wiswesser line notation** Wiswesser line notation: Wiswesser line notation (WLN), invented by William J. Wiswesser in 1949, was the first line notation capable of precisely describing complex molecules. It was the basis of ICI Ltd's CROSSBOW database system developed in the late 1960s. WLN allowed for indexing the Chemical Structure Index (CSI) at the Institute for Scientific Information (ISI). It was also the tool used to develop the CAOCI (Commercially Available Organic Chemical Intermediates) database, the datafile from which Accelrys' (successor to MDL) ACD file was developed. WLN is still being extensively used by BARK Information Services. Descriptions of how to encode molecules as WLN have been published in several books. Examples:
1H : methane
2H : ethane
3H : propane
1Y : isobutane
1X : neopentane
Q1 : methanol
1R : toluene
1V1 : acetone
2O2 : diethyl ether
1VR : acetophenone
ZR CVQ : 3-aminobenzoic acid
QVYZ1R : phenylalanine
QX2&2&2 : 3-ethylpentan-3-ol
QVY3&1VQ : 2-propylbutanedioic acid
L66J BMR& DSWQ IN1&1 : 6-dimethylamino-4-phenylamino-naphthalene-2-sulfonic acid
QVR-/G 5 : pentachlorobenzoic acid
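As a toy illustration (not part of the original article), the published examples above can double as a small test fixture for any WLN-handling tool. The Python snippet below simply stores a few of the listed pairs in a lookup table; it performs no actual WLN parsing, and the helper function is hypothetical.

```python
# Hypothetical lookup table built from the WLN examples listed above.
WLN_EXAMPLES = {
    "1H": "methane",
    "2H": "ethane",
    "3H": "propane",
    "Q1": "methanol",
    "1V1": "acetone",
    "2O2": "diethyl ether",
    "QVYZ1R": "phenylalanine",
}

def describe(wln: str) -> str:
    """Return the compound name for a WLN string from the published examples."""
    return WLN_EXAMPLES.get(wln, "unknown (not in the published examples)")

assert describe("1V1") == "acetone"
print(describe("Q1"))    # methanol
print(describe("L66J"))  # unknown (not in the published examples)
```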
**Journal of Veterinary Diagnostic Investigation** Journal of Veterinary Diagnostic Investigation: The Journal of Veterinary Diagnostic Investigation is an international peer-reviewed academic journal, published bimonthly in English, that publishes papers in the field of veterinary sciences. The journal's editor is Grant Maxie, DVM, PhD, DACVP (University of Guelph). The journal has been in publication since 1989 and is currently published by SAGE Publications in association with the American Association of Veterinary Laboratory Diagnosticians, Inc. Scope: JVDI is devoted to all aspects of veterinary laboratory diagnostic science, including the major disciplines of anatomic pathology, bacteriology/mycology, clinical pathology, epidemiology, immunology, laboratory information management, molecular biology, parasitology, public health, toxicology, and virology. Abstracting and indexing: The Journal of Veterinary Diagnostic Investigation is abstracted and indexed in, among other databases: SCOPUS, PubMed/Medline, and the Social Sciences Citation Index. According to the Journal Citation Reports, its 2016 impact factor is 0.925, ranking it 64th out of 136 journals in the category Veterinary Sciences. About the journal: Three manuscript formats are accepted for review: Review Articles, Full Scientific Reports, and Brief Communications. Review articles are strongly encouraged, provided they cover subjects of current and broad interest to veterinary laboratory diagnosticians. JVDI also publishes position announcements for employment and advertisements for diagnostic products. JVDI content is open access after a 12-month embargo. This journal is a member of the Committee on Publication Ethics (COPE).
**Ischial tuberosity** Ischial tuberosity: The ischial tuberosity (or tuberosity of the ischium, tuber ischiadicum), also known colloquially as the sit bones or sitz bones, or as a pair the sitting bones, is a large swelling posteriorly on the superior ramus of the ischium. It marks the lateral boundary of the pelvic outlet. When sitting, the weight is frequently placed upon the ischial tuberosity. The gluteus maximus provides cover in the upright posture, but leaves it free in the seated position. The distance between a cyclist's ischial tuberosities is one of the factors in the choice of a bicycle saddle. Divisions: The tuberosity is divided into two portions: a lower, rough, somewhat triangular part, and an upper, smooth, quadrilateral portion. Divisions: The lower portion is subdivided by a prominent longitudinal ridge, passing from base to apex, into two parts:
The outer gives attachment to the adductor magnus
The inner to the sacrotuberous ligament
The upper portion is subdivided into two areas by an oblique ridge, which runs downward and outward:
From the upper and outer area the semimembranosus arises
From the lower and inner, the long head of the biceps femoris and the semitendinosus
**Muscarinic agonist** Muscarinic agonist: A muscarinic agonist is an agent that activates the muscarinic acetylcholine receptor. The muscarinic receptor has different subtypes, labelled M1-M5, allowing for further differentiation. Clinical significance: M1 M1-type muscarinic acetylcholine receptors play a role in cognitive processing. In Alzheimer disease (AD), amyloid formation may decrease the ability of these receptors to transmit signals, leading to decreased cholinergic activity. As these receptors themselves appear relatively unchanged in the disease process, they have become a potential therapeutic target when trying to improve cognitive function in patients with AD. A number of muscarinic agonists have been developed and are under investigation to treat AD. These agents show promise as they are neurotrophic, decrease amyloid depositions, and improve damage due to oxidative stress. Tau phosphorylation is decreased and cholinergic function enhanced. Notably, several agents of the AF series of muscarinic agonists have become the focus of such research: AF102B, AF150(S), and AF267B. In animal models that mimic the damage of AD, these agents appear promising. Clinical significance: The agent xanomeline has been proposed as a potential treatment for schizophrenia. M3 In the form of pilocarpine, muscarinic receptor agonists have been used medically for a short time. M3 agonists:
Aceclidine, for glaucoma
Arecoline, an alkaloid present in the betel nut
Pilocarpine, a muscarinic receptor agonist used to treat glaucoma
Cevimeline (AF102B) (Evoxac®), a Food and Drug Administration (FDA)-approved muscarinic agonist used for the management of dry mouth in Sjögren's syndrome
Muscarinic acetylcholine receptor subtypes: The targets for muscarinic agonists are the muscarinic receptors: M1, M2, M3, M4 and M5. These receptors are GPCRs coupled to either Gi or Gq subunits.
**Motor constants** Motor constants: The motor size constant ( KM ) and motor velocity constant ( Kv , alternatively called the back EMF constant) are values used to describe characteristics of electrical motors. Motor constant: KM is the motor constant (sometimes, motor size constant). In SI units, the motor constant is expressed in newton metres per square root watt ( N⋅m/√W ): KM = τ/√P, where τ is the motor torque (SI unit: newton metre) and P is the resistive power loss (SI unit: watt). The motor constant is winding independent (as long as the same conductive material is used for wires); e.g., winding a motor with 6 turns of 2 parallel wires instead of 12 turns of a single wire will double the velocity constant, Kv , but KM remains unchanged. KM can be used for selecting the size of a motor to use in an application. Kv can be used for selecting the winding to use in the motor. Motor constant: Since the torque τ is the current I multiplied by KT , KM becomes KM = KT·I/√P = KT·I/√(I^2·R) = KT/√R, where I is the current (SI unit: ampere), R is the winding resistance (SI unit: ohm), and KT is the motor torque constant (SI unit: newton metre per ampere, N·m/A), see below. If two motors with the same Kv and torque work in tandem, with rigidly connected shafts, the Kv of the system is still the same assuming a parallel electrical connection. The KM of the combined system increases by a factor of √2, because both the torque and the losses double. Alternatively, the system could run at the same torque as before, with torque and current split equally across the two motors, which halves the resistive losses. Units: The motor constant may be provided in one of several units. Motor velocity constant, back EMF constant: Kv is the motor velocity, or motor speed, constant (not to be confused with kV, the symbol for kilovolt), measured in revolutions per minute (RPM) per volt or radians per volt second (rad/(V·s)): Kv = ω_no-load/V_peak. The Kv rating of a brushless motor is the ratio of the motor's unloaded rotational speed (measured in RPM) to the peak (not RMS) voltage on the wires connected to the coils (the back EMF). For example, an unloaded motor of Kv = 5,700 rpm/V supplied with 11.1 V will run at a nominal speed of 63,270 rpm (= 5,700 rpm/V × 11.1 V). Motor velocity constant, back EMF constant: The motor may not reach this theoretical speed because there are non-linear mechanical losses. On the other hand, if the motor is driven as a generator, the no-load voltage between terminals is perfectly proportional to the RPM and true to the Kv of the motor/generator. Motor velocity constant, back EMF constant: The terms Ke , Kb are also used, as are the terms back EMF constant, or the generic electrical constant. In contrast to Kv , the value Ke is often expressed in the SI units volt–seconds per radian (V⋅s/rad); thus it is an inverse measure of Kv : Ke = V_peak/ω_no-load = 1/Kv. Sometimes it is expressed in the non-SI units volts per kilorevolution per minute (V/krpm). Motor velocity constant, back EMF constant: The field flux may also be integrated into the formula: Kω = Eb/(ϕ·ω), where Eb is back EMF, Kω is the constant, ϕ is the flux, and ω is the angular velocity. By Lenz's law, a running motor generates a back-EMF proportional to the speed. Once the motor's rotational velocity is such that the back-EMF is equal to the battery voltage (also called DC line voltage), the motor reaches its limit speed. Motor torque constant: KT is the torque produced divided by the armature current.
It can be calculated from the motor velocity constant Kv : KT = 60/(2π·Kv(RPM)) = 1/Kv(SI). KT is primarily used to calculate the armature current for a given torque demand: Ia = τ/KT , where Ia is the armature current of the machine (SI unit: ampere). The SI units for the torque constant are newton metres per ampere (N·m/A). Since 1 N·m = 1 J, and 1 A = 1 C/s, then 1 N·m/A = 1 J·s/C = 1 V·s (the same units as the back EMF constant). Motor torque constant: The relationship between KT and Kv is not intuitive, to the point that many people simply assert that torque and Kv are not related at all. An analogy with a hypothetical linear motor can help to show that they are. Suppose that a linear motor has a Kv of 2 (m/s)/V, that is, the linear actuator generates one volt of back-EMF when moved (or driven) at a rate of 2 m/s. Conversely, s = Kv·V (s is the speed of the linear motor, V is the voltage). Motor torque constant: The useful power of this linear motor is P = V·I , P being the power, V the useful voltage (applied voltage minus back-EMF voltage), and I the current. But, since power is also equal to force multiplied by speed, the force F of the linear motor is F = P/s = P/(Kv·V) = I/Kv . The inverse relationship between force per unit current and the Kv of a linear motor has thus been demonstrated. Motor torque constant: To translate this model to a rotating motor, one can simply attribute an arbitrary diameter to the motor armature, e.g. 2 m, and assume for simplicity that all force is applied at the outer perimeter of the rotor, giving 1 m of leverage. Motor torque constant: Now, supposing that the Kv (angular speed per unit voltage) of the motor is 3600 rpm/V, it can be translated to "linear" by multiplying by 2π m (the perimeter of the rotor) and dividing by 60, since angular speed is per minute. This gives a linear Kv of about 377 (m/s)/V. Now, if this motor is fed with a current of 2 A and assuming that the back-EMF is exactly 2 V, it is rotating at 7200 rpm, the mechanical power is 4 W, and the force on the rotor is F = I/Kv = 2/377 N ≈ 0.0053 N. The torque on the shaft is 0.0053 N⋅m at 2 A because of the assumed radius of the rotor (exactly 1 m). Assuming a different radius would change the linear Kv but would not change the final torque result. To check the result, remember that Kv(SI) = Kv(RPM)·2π/60. So, a motor with a Kv of 3600 rpm/V, i.e. 377 rad/(V·s), will generate 1/377 ≈ 0.00265 N⋅m of torque per ampere of current, regardless of its size or other characteristics. This is exactly the value estimated by the KT formula stated earlier.
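The arithmetic in this worked example is easy to verify numerically. The following Python snippet is an illustrative check, not part of the original text; it recomputes KT and the shaft torque from the same assumed figures (Kv = 3600 rpm/V, 2 A of current):

```python
import math

# Convert the velocity constant from rpm/V to SI units, rad/(V*s):
# Kv(SI) = Kv(RPM) * 2*pi / 60.
kv_rpm = 3600.0                     # rpm per volt, as in the example
kv_si = kv_rpm * 2 * math.pi / 60   # ~377 rad/(V*s)

# The torque constant is the reciprocal of Kv in SI units: KT = 1/Kv(SI).
kt = 1 / kv_si                      # ~0.00265 N*m/A

current = 2.0                       # amperes, as in the example
torque = kt * current               # ~0.0053 N*m

print(f"Kv  = {kv_si:.0f} rad/(V*s)")
print(f"KT  = {kt:.5f} N*m/A")
print(f"tau = {torque:.4f} N*m at {current:.0f} A")
```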
**Ab-polar current** Ab-polar current: Ab-polar current, an obsolete term sometimes found in 19th century meteorological literature, refers to any air current moving away from either the North Pole or the South Pole. In the Northern Hemisphere, this term indicates a northerly wind. The Latin prefix ab- means "from" or "away from".
**LYPLAL1** LYPLAL1: Lysophospholipase-like 1 is a protein in humans that is encoded by the LYPLAL1 gene. LYPLAL1: The protein is an α/β-hydrolase of uncharacterized metabolic function. Genome-wide association studies in humans have linked the gene to fat distribution and waist-to-hip ratio. The protein's enzymatic function is unclear. LYPLAL1 was reported to act as a triglyceride lipase in adipose tissue, and another study suggested that the protein may play a role in the depalmitoylation of calcium-activated potassium channels. However, LYPLAL1 does not depalmitoylate the oncogene Ras, and a structural and enzymatic study concluded that LYPLAL1 is generally unable to act as a lipase and is instead an esterase that prefers short-chain substrates, such as acetyl groups. Structural comparisons have suggested that LYPLAL1 might be a protein deacetylase, but this has not been experimentally tested. Relationship to acyl-protein thioesterases: Sequence conservation and structural homology suggest a close relationship of LYPLAL1 proteins to acyl-protein thioesterases, and, therefore, it has been suggested that LYPLAL1 might be the third human acyl-protein thioesterase. However, the major structural difference between the two protein families lies in the hydrophobic substrate-binding tunnel, which has been identified in human acyl-protein thioesterases 1 and 2, as well as in Zea mays acyl-protein thioesterase 2. In LYPLAL1, this tunnel is closed due to a different loop conformation, changing the enzyme's substrate specificity to short acyl chains. Model organisms: Model organisms have been used in the study of LYPLAL1 function. A conditional knockout mouse line called Lyplal1tm1a(KOMP)Wtsi was generated at the Wellcome Trust Sanger Institute. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Additional screens performed: - In-depth immunological phenotyping
**Variation potential** Variation potential: A variation potential (VP) (also called slow wave potential) is a hydraulically propagating electrical signal occurring exclusively in plant cells. It is one of three propagating signals in plants, the other two being the action potential (AP) and the wound potential (WP) (also unique to plants). Variation potentials are responsible for the induction of many physiological processes and are a mechanism for plant systemic responses to local wounding. They induce changes in gene expression; the production of abscisic acid, jasmonic acid, and ethylene; temporary decreases in photosynthesis; and increases in respiration. Variation potentials have been widely shown in vascular plants. A variation potential, like an action potential, is a temporary change in the membrane potential of the plant cell by depolarization and consequent repolarization. However, it is distinguished by its slower, delayed repolarization phase, variability in shape and amplitude, and the decrease in its velocity with increasing distance from the initial point. Variation potentials can only be produced if the pressure in the xylem is disturbed and followed by an increase in xylem pressure. Additionally, the signal travels through the vascular bundles, allowing a systemic response throughout the plant. Variation potentials are distinct from action potentials in their cause of stimulation. Depolarization arises from an increase in plant cell turgor pressure from a hydraulic pressure wave that moves through the xylem after events like rain, embolism, bending, local wounds, organ excision, and local burning. Unlike action potentials, variation potentials are not all-or-nothing. Variation potential: Depolarization of a variation potential is determined by the difference in pressure between the atmosphere and the plant's intact interior. However, it has been shown that variation potentials can be suppressed by high humidity and continued darkness. The ionic mechanism is assumed to involve a brief shutdown of the P-type H+ -ATPase in the plasma membrane. Variation potential propagation is accomplished hydraulically: the signal moves with a rapid pressure increase that establishes an axial pressure gradient in the xylem. This gradient transforms with distance into increasing lag phases for the pressure-induced depolarization in the epidermal cells. This allows for communication between the leaf and stem that can move in both directions along the axis of the plant.
**Marble cheese** Marble cheese: Marble cheese is a name given to cheeses with marbled patterns, produced by combining two differently colored curds, cheese curds or processed cheeses. Description: Marble cheeses originate from the UK. They are usually hard, processed cow's milk cheeses. Colby-Jack, which combines Colby cheese and Monterey Jack, is the most popular in the United States. Others are produced from a combination of the curds of white and orange cheddars (for marbled cheddar), or similar. The marbling is usually not achieved with artificial additives, though cheeses such as Red Windsor and Sage Derby may contain colourings such as chlorophyll (E140) and carmine (E120). Description: Types
Marble cheddar, a blend of white and orange cheddar.
Colby-Jack, a blend of Colby cheese and Monterey Jack.
Red Windsor, cheddar cheese with added red wine (usually Port or Bordeaux), or with a red food colouring.
Sage Derby, a Derby cheese traditionally made with added sage; now usually made using green plants such as spinach, parsley and marigold, or with green vegetable dye.
**Ergosophy** Ergosophy: Ergosophy is a term coined by the scientist Frederick Soddy in the early 1920s, referring to aspects of energy, as measured in ergs, in relation to human existence. Soddy's aim was to apply scientific theories and ideas to move the human understanding of work beyond the restrictions of management theory into a new theory of energy economics. Ergosophy: Frederick Soddy first used the term in his book on work and economics, The Role of Money.
**Differential space–time code** Differential space–time code: Differential space–time codes are ways of transmitting data in wireless communications. They are forms of space–time code that do not need to know the channel impairments at the receiver in order to be able to decode the signal. They are usually based on space–time block codes, and transmit one block-code from a set in response to a change in the input signal. The differences among the blocks in the set are designed to allow the receiver to extract the data with good reliability. The first differential space-time block code was disclosed by Vahid Tarokh and Hamid Jafarkhani.
**Ammunition** Ammunition: Ammunition is the material fired, scattered, dropped, or detonated from any weapon or weapon system. Ammunition is both expendable weapons (e.g., bombs, missiles, grenades, land mines) and the component parts of other weapons that create the effect on a target (e.g., bullets and warheads). Ammunition: The purpose of ammunition is to project a force against a selected target to have an effect (usually, but not always, lethal). An example of ammunition is the firearm cartridge, which includes all components required to deliver the weapon effect in a single package. Until the 20th century, black powder was the most common propellant used but has now been replaced in nearly all cases by modern compounds. Ammunition: Ammunition comes in a great range of sizes and types and is often designed to work only in specific weapons systems. However, there are internationally recognized standards for certain ammunition types (e.g., 5.56×45mm NATO) that enable their use across different weapons and by different users. There are also specific types of ammunition that are designed to have a specialized effect on a target, such as armor-piercing shells and tracer ammunition, used only in certain circumstances. Ammunition is commonly labeled or colored in a specific manner to assist in its identification and to prevent the wrong ammunition types from being used accidentally or inappropriately. Glossary: A round is a single cartridge containing a projectile, propellant, primer and casing. A shell is a form of ammunition that is fired by a large caliber cannon or artillery piece. Before the mid-19th century, these shells were usually made of solid materials and relied on kinetic energy to have an effect. However, since that time, they are more often filled with high explosives (see artillery). A shot refers to a single release of a weapons system. This may involve firing just one round or piece of ammunition (e.g., from a semi-automatic firearm), but can also refer to ammunition types that release a large number of projectiles at the same time (e.g., cluster munitions or shotgun shells). Glossary: A dud refers to loaded ammunition that fails to function as intended, typically failing to detonate on landing. However, it can also refer to ammunition that fails to fire inside the weapon, known as a misfire, or when the ammunition only partially functions, known as a hang fire. Dud ammunition, which is classified as an unexploded ordnance (UXO), is regarded as highly dangerous. In former conflict zones, it is not uncommon for dud ammunition to remain buried in the ground for many years. Large quantities of ammunition from World War I continue to be regularly found in fields throughout France and Belgium and occasionally still claim lives. Although classified as a UXO, landmines that have been left behind after conflict are not considered duds as they have not failed to work and may still be fully functioning. Glossary: A bomb or, more specifically, a guided or unguided bomb (also called an aircraft bomb or aerial bomb), is typically an airdropped, unpowered explosive weapon. Mines and the warheads used in guided missiles and rockets are also referred to as bomb-type ammunition. Etymology: The term ammunition can be traced back to the mid-17th century. The word comes from the French la munition, for the material used for war. Ammunition and munition are often used interchangeably, although munition now usually refers to the actual weapons system with the ammunition required to operate it. 
In some languages other than English ammunition is still referred to as munition, such as French ("munitions"), German ("Munition"), Italian ("munizione") and Portuguese ("munição"). Design: Ammunition design has evolved throughout history as different weapons have been developed and different effects required. Historically, ammunition was of relatively simple design and build (e.g., sling-shot, stones hurled by catapults), but as weapon designs developed (e.g., rifling) and became more refined, the need for more specialized ammunition increased. Modern ammunition can vary significantly in quality but is usually manufactured to very high standards. Design: For example, ammunition for hunting can be designed to expand inside a target, maximizing the damage inflicted by one round. Anti-personnel shells are designed to fragment into many pieces and can affect a large area. Armor-piercing rounds are specially hardened to penetrate armor, while smoke ammunition covers an area with a fog that screens people from view. More generic ammunition (e.g., 5.56×45mm NATO) can often be altered slightly to give it a more specific effect (e.g., tracer, incendiary), whilst larger explosive rounds can be altered by using different fuzes. Components: The components of ammunition intended for rifles and munitions may be divided into these categories: fuze or primer; explosive materials and propellants; projectiles of all kinds; and the cartridge casing. Fuzes The term fuze refers to the detonator of an explosive round or shell. The spelling is different in British English and American English (fuse/fuze respectively), and both are unrelated to an electrical fuse. A fuse was earlier used to ignite the propellant (e.g., on a firework) until the advent of more reliable systems such as the primer or igniter that is used in most modern ammunition. Components: The fuze of a weapon can be used to alter how the ammunition works. For example, a common artillery shell fuze can be set to "point detonation" (detonation when it hits a target), delay (detonate after it has hit and penetrated a target), time-delay (explode a specified time after firing or impact) and proximity (explode above or next to a target without hitting it, such as for airburst effects or anti-aircraft shells). These allow a single ammunition type to be altered to suit the situation it is required for. There are many fuze designs, ranging from simple mechanical systems to complex radar and barometric systems. Components: Fuzes are usually armed by the acceleration force of firing the projectile, and usually arm several meters after clearing the bore of the weapon. This helps to ensure the ammunition is safer to handle when loading into the weapon and reduces the chance of the detonator firing before the ammunition has cleared the weapon. Components: Propellant or explosive The propellant is the component of ammunition that is activated inside the weapon and provides the kinetic energy required to move the projectile from the weapon to the target. Before the use of gunpowder, this energy would have been produced mechanically by the weapons system (e.g., a catapult or crossbow); in modern times, it is usually a form of chemical energy that rapidly burns to create kinetic force, and an appropriate amount of chemical propellant is packaged with each round of ammunition. In recent years, compressed gas, magnetic energy and electrical energy have been used as propellants. Components: Until the 20th century, gunpowder was the most common propellant in ammunition.
However, it has since been replaced by a wide range of fast-burning compounds that are more reliable and efficient. The propellant charge is distinct from the projectile charge, which is activated by the fuze and causes the ammunition effect (e.g., the exploding of an artillery round). Components: Cartridge case or container The cartridge is the container that holds the projectile and propellant. Not all ammunition types have a cartridge case. In its place, a wide range of materials can be used to contain the explosives and parts. With some large weapons, the ammunition components are stored separately until loaded into the weapon system for firing. With small arms, caseless ammunition can reduce the weight and cost of ammunition and simplify the firing process for an increased firing rate, but the maturing technology has functionality issues. Components: Projectile The projectile is the part of the ammunition that leaves the weapon and has the effect on the target. This effect is usually either kinetic (e.g., as with a standard bullet) or through the delivery of explosives. Storage: An ammunition dump is a military facility for the storage of live ammunition and explosives that will be distributed and used at a later date. Such a storage facility is extremely hazardous, with the potential for accidents when unloading, packing, and transferring the ammunition. In the event of a fire or explosion, the site and its surrounding area is immediately evacuated and the stored ammunition is left to detonate itself completely, with limited attempts at firefighting from a safe distance. In large facilities, there may be a flooding system to automatically extinguish a fire or prevent an explosion. Typically, an ammunition dump will have a large buffer zone surrounding it, to avoid casualties in the event of an accident. There will also be perimeter security measures in place to prevent access by unauthorized personnel and to guard against the potential threat from enemy forces. Storage: A magazine is a place where a quantity of ammunition or other explosive material is stored temporarily prior to being used. The term may be used for a facility where large quantities of ammunition are stored, although this would normally be referred to as an ammunition dump. Magazines are typically located in the field for quick access when engaging the enemy. The ammunition storage area on a warship is referred to as the "ship's magazine". On a smaller scale, magazine is also the name given to the ammunition storage and feeding device of a repeating firearm. Storage: Gunpowder must be stored in a dry place (at a stable room temperature) to keep it usable, for as long as 10 years. It is also recommended to avoid hot places, because friction or heat might create a spark and cause an explosion. Common types: Small arms The standard weapon of a modern soldier is an assault rifle, which, like other small arms, uses cartridge ammunition in a size specific to the weapon. Ammunition is carried on the person in box magazines specific to the weapon, ammunition boxes, pouches or bandoliers. The amount of ammunition carried is dependent on the strength of the soldier, the expected action required, and the ability of ammunition to move forward through the logistical chain to replenish the supply. A soldier may also carry a smaller amount of specialized ammunition for heavier weapons such as machine guns and mortars, spreading the burden for squad weapons over many people.
Too little ammunition poses a threat to the mission, while too much limits the soldier's mobility, which is also a threat to the mission. Common types: Shells A shell is a payload-carrying projectile which, as opposed to a shot, contains explosives or other fillings, in use since the 19th century. Common types: Artillery Artillery shells are ammunition that is designed to be fired from artillery which has an effect over long distances, usually indirectly (i.e., out of sight of the target). There are many different types of artillery ammunition, but they are usually high-explosive and designed to shatter into fragments on impact to maximize damage. The fuze used on an artillery shell can alter how it explodes or behaves so it has a more specialized effect. Common types of artillery ammunition include high explosive, smoke, illumination, and practice rounds. Some artillery rounds are designed as cluster munitions. Artillery ammunition will almost always include a projectile (the only exception being demonstration or blank rounds), fuze and propellant of some form. When a cartridge case is not used, there will be some other method of containing the propellant bags, usually a breech-loading weapon; see Breechloader. Common types: Tank Tank ammunition was developed in WWI as tanks first appeared on the battlefield. However, as tank-on-tank warfare developed (including the development of anti-tank warfare artillery), more specialized forms of ammunition were developed, such as high-explosive anti-tank (HEAT) warheads and armour-piercing discarding sabot (APDS) rounds, including armour-piercing fin-stabilized discarding sabot (APFSDS) rounds. The development of shaped charges has had a significant impact on anti-tank ammunition design, now common in both tank-fired ammunition and in anti-tank missiles, including anti-tank guided missiles. Common types: Naval Naval weapons were originally the same as many land-based weapons, but the ammunition was designed for specific use, such as a solid shot designed to hole an enemy ship and chain-shot to cut rigging and sails. Modern naval engagements have occurred over far longer distances than historic battles, so as ship armor has increased in strength and thickness, the ammunition to defeat it has also changed. Naval ammunition is now designed to reach very high velocities (to improve its armor-piercing abilities) and may have specialized fuzes to defeat specific types of vessels. However, due to the extended ranges at which modern naval combat may occur, guided missiles have largely supplanted guns and shells. Logistics: With every successive improvement in military arms, a corresponding modification has occurred in the method of supplying ammunition in the quantity required. As soon as projectiles were required (such as javelins and arrows), there needed to be a method of replenishment. When non-specialized, interchangeable or recoverable ammunition was used (e.g., arrows), it was possible to pick up spent arrows (both friendly and enemy) and reuse them. However, with the advent of explosive or non-recoverable ammunition, this was no longer possible and new supplies of ammunition would be needed. Logistics: The weight of ammunition required, particularly for artillery shells, can be considerable, causing a need for extra time to replenish supplies. In modern times, there has been an increase in the standardization of many ammunition types between allies (e.g., the NATO Standardization Agreement) that has allowed for shared ammunition types (e.g., 5.56×45mm NATO).
Environmental problems: As of 2013, lead-based ammunition production is the second-largest annual use of lead in the US, accounting for over 60,000 metric tons consumed in 2012. In contrast to the closed-loop nature of the largest annual use of lead (i.e., for lead-acid batteries, nearly all of which are, at the end of their lives, collected and recycled into new lead-acid batteries), the lead in ammunition ends up being almost entirely dispersed into the natural environment. For example, lead bullets that miss their target, or that remain in a carcass or body that is never retrieved, can very easily enter environmental systems and become toxic to wildlife. The US military has experimented with replacing lead with copper as the slug in its green bullets, which reduces the dangers posed to the environment by lead from expended rounds. Since 2010, this has eliminated over 2000 tons of lead in waste streams. Hunters are also encouraged to use monolithic bullets, which exclude any lead content. Unexploded ordnance: Unexploded ammunition can remain active for a very long time and poses a significant threat to both humans and the environment.
**Generalized minimal residual method** Generalized minimal residual method: In mathematics, the generalized minimal residual method (GMRES) is an iterative method for the numerical solution of an indefinite nonsymmetric system of linear equations. The method approximates the solution by the vector in a Krylov subspace with minimal residual. The Arnoldi iteration is used to find this vector. Generalized minimal residual method: The GMRES method was developed by Yousef Saad and Martin H. Schultz in 1986. It is a generalization and improvement of the MINRES method due to Paige and Saunders in 1975. The MINRES method requires that the matrix be symmetric, but has the advantage that it only requires handling of three vectors. GMRES is a special case of the DIIS method developed by Peter Pulay in 1980. DIIS is applicable to non-linear systems. The method: Denote the Euclidean norm of any vector v by ‖v‖. Denote the (square) system of linear equations to be solved by Ax = b. The matrix A is assumed to be invertible and of size m-by-m. Furthermore, it is assumed that b is normalized, i.e., that ‖b‖ = 1. The n-th Krylov subspace for this problem is Kn = Kn(A, r0) = span{r0, A r0, A^2 r0, …, A^(n−1) r0}, where r0 = b − A x0 is the initial residual given an initial guess x0 ≠ 0. Clearly r0 = b if x0 = 0. GMRES approximates the exact solution of Ax = b by the vector xn ∈ x0 + Kn that minimizes the Euclidean norm of the residual rn = b − A xn. The vectors r0, A r0, …, A^(n−1) r0 might be close to linearly dependent, so instead of this basis, the Arnoldi iteration is used to find orthonormal vectors q1, q2, …, qn which form a basis for Kn. In particular, q1 = ‖r0‖^(−1) r0. Therefore, the vector xn ∈ x0 + Kn can be written as xn = x0 + Qn yn with yn ∈ R^n, where Qn is the m-by-n matrix formed by q1, …, qn. In other words, finding the n-th approximation of the solution (i.e., xn) is reduced to finding the vector yn, which is determined via minimizing the residual as described below. The method: The Arnoldi process also constructs H~n, an (n+1)-by-n upper Hessenberg matrix which satisfies the equality A Qn = Qn+1 H~n, which is used to simplify the calculation of yn (see below). Note that, for symmetric matrices, a symmetric tri-diagonal matrix is actually achieved, resulting in the MINRES method. Because the columns of Qn are orthonormal, we have ‖rn‖ = ‖b − A xn‖ = ‖β e1 − H~n yn‖, where e1 = (1, 0, 0, …, 0)^T is the first vector in the standard basis of R^(n+1), and β = ‖r0‖, r0 being the residual for the first trial vector x0 (usually zero). Hence, xn can be found by minimizing the Euclidean norm of the residual ‖H~n yn − β e1‖. This is a linear least squares problem of size n. The method: This yields the GMRES method. On the n-th iteration: calculate qn with the Arnoldi method; find the yn which minimizes ‖rn‖; compute xn = x0 + Qn yn; repeat if the residual is not yet small enough. At every iteration, a matrix-vector product A qn must be computed. This costs about 2m^2 floating-point operations for general dense matrices of size m, but the cost can decrease to O(m) for sparse matrices. In addition to the matrix-vector product, O(nm) floating-point operations must be computed at the n-th iteration. Convergence: The n-th iterate minimizes the residual in the Krylov subspace Kn. Since every subspace is contained in the next subspace, the residual does not increase. After m iterations, where m is the size of the matrix A, the Krylov space Km is the whole of R^m and hence the GMRES method arrives at the exact solution. However, the idea is that after a small number of iterations (relative to m), the vector xn is already a good approximation to the exact solution. Convergence: This does not happen in general.
Indeed, a theorem of Greenbaum, Pták and Strakoš states that for every nonincreasing sequence a1, …, am−1, am = 0, one can find a matrix A such that ‖rn‖ = an for all n, where rn is the residual defined above. In particular, it is possible to find a matrix for which the residual stays constant for m − 1 iterations, and only drops to zero at the last iteration. Convergence: In practice, though, GMRES often performs well. This can be proven in specific situations. If the symmetric part of A, that is (A^T + A)/2, is positive definite, then ‖rn‖ ≤ (1 − λmin²((A^T + A)/2)/λmax(A^T·A))^(n/2)·‖r0‖, where λmin(M) and λmax(M) denote the smallest and largest eigenvalue of the matrix M, respectively. If A is symmetric and positive definite, then we even have ‖rn‖ ≤ ((κ2(A)² − 1)/κ2(A)²)^(n/2)·‖r0‖, where κ2(A) denotes the condition number of A in the Euclidean norm. Convergence: In the general case, where A is not positive definite but is diagonalizable as A = V·D·V⁻¹, we have ‖rn‖ ≤ κ2(V) · min over p ∈ Pn of max over λ ∈ σ(A) of |p(λ)| · ‖r0‖, where Pn denotes the set of polynomials of degree at most n with p(0) = 1, V is the matrix appearing in the spectral decomposition of A, and σ(A) is the spectrum of A. Roughly speaking, this says that fast convergence occurs when the eigenvalues of A are clustered away from the origin and A is not too far from normality. All these inequalities bound only the residuals instead of the actual error, that is, the distance between the current iterate xn and the exact solution. Extensions of the method: Like other iterative methods, GMRES is usually combined with a preconditioning method in order to speed up convergence. Extensions of the method: The cost of the iterations grows as O(n²), where n is the iteration number. Therefore, the method is sometimes restarted after a number, say k, of iterations, with xk as initial guess. The resulting method is called GMRES(k) or restarted GMRES. For non-positive-definite matrices, this method may suffer from stagnation in convergence, as the restarted subspace is often close to the earlier subspace. Extensions of the method: The shortcomings of GMRES and restarted GMRES are addressed by the recycling of Krylov subspaces in the GCRO-type methods such as GCROT and GCRODR. Recycling of Krylov subspaces in GMRES can also speed up convergence when sequences of linear systems need to be solved. Comparison with other solvers: The Arnoldi iteration reduces to the Lanczos iteration for symmetric matrices. The corresponding Krylov subspace method is the minimal residual method (MinRes) of Paige and Saunders. Unlike the unsymmetric case, the MinRes method is given by a three-term recurrence relation. It can be shown that there is no Krylov subspace method for general matrices that is given by a short recurrence relation and yet minimizes the norms of the residuals, as GMRES does. Comparison with other solvers: Another class of methods builds on the unsymmetric Lanczos iteration, in particular the BiCG method. These use a three-term recurrence relation, but they do not attain the minimum residual, and hence the residual does not decrease monotonically for these methods; convergence is not even guaranteed. The third class is formed by methods like CGS and BiCGSTAB. These also work with a three-term recurrence relation (hence, without optimality) and they can even terminate prematurely without achieving convergence. The idea behind these methods is to choose the generating polynomials of the iteration sequence suitably. None of these three classes is the best for all matrices; there are always examples in which one class outperforms the others.
Therefore, multiple solvers are tried in practice to see which one is the best for a given problem. Solving the least squares problem: One part of the GMRES method is to find the vector yn which minimizes ‖βe1 − H~n·yn‖. Note that H~n is an (n + 1)-by-n matrix, hence it gives an over-constrained linear system of n + 1 equations for n unknowns. Solving the least squares problem: The minimum can be computed using a QR decomposition: find an (n + 1)-by-(n + 1) orthogonal matrix Ωn and an (n + 1)-by-n upper triangular matrix R~n such that Ωn·H~n = R~n. The triangular matrix has one more row than it has columns, so its bottom row consists of zeros. Hence, it can be decomposed as R~n = [Rn; 0], where Rn is an n-by-n (thus square) triangular matrix. Solving the least squares problem: The QR decomposition can be updated cheaply from one iteration to the next, because the Hessenberg matrices differ only by a row of zeros and a column: H~n+1 = [H~n, hn+1; 0, hn+2,n+1], where hn+1 = (h1,n+1, …, hn+1,n+1)^T. This implies that premultiplying the Hessenberg matrix with Ωn, augmented with zeroes and a row with multiplicative identity, yields almost a triangular matrix: [Ωn, 0; 0, 1]·H~n+1 = [Rn, rn+1; 0, ρ; 0, σ]. This would be triangular if σ were zero. To remedy this, one needs the Givens rotation Gn = [In, 0, 0; 0, cn, sn; 0, −sn, cn], where cn = ρ/√(ρ² + σ²) and sn = σ/√(ρ² + σ²). With this Givens rotation, we form Ωn+1 = Gn·[Ωn, 0; 0, 1]. Indeed, Ωn+1·H~n+1 = [Rn, rn+1; 0, rn+1,n+1; 0, 0] is a triangular matrix with rn+1,n+1 = √(ρ² + σ²). Given the QR decomposition, the minimization problem is easily solved by noting that ‖βe1 − H~n·yn‖ = ‖Ωn·(βe1 − H~n·yn)‖ = ‖βΩn·e1 − R~n·yn‖. Denoting the vector βΩn·e1 by (gn; γn), with gn ∈ R^n and γn ∈ R, this is ‖βe1 − H~n·yn‖² = ‖gn − Rn·yn‖² + |γn|². The vector yn that minimizes this expression is given by yn = Rn⁻¹·gn. Again, the vectors gn are easy to update. Example code: Regular GMRES (MATLAB / GNU Octave); a Python sketch of the algorithm follows the Notes below. Notes: A. Meister, Numerik linearer Gleichungssysteme, 2nd edition, Vieweg 2005, ISBN 978-3-528-13135-7. Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd edition, Society for Industrial and Applied Mathematics, 2003. ISBN 978-0-89871-534-7. Y. Saad and M.H. Schultz, "GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems", SIAM J. Sci. Stat. Comput., 7:856–869, 1986. doi:10.1137/0907058. S. C. Eisenstat, H.C. Elman and M.H. Schultz, "Variational iterative methods for nonsymmetric systems of linear equations", SIAM Journal on Numerical Analysis, 20(2), 345–357, 1983. J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, 3rd edition, Springer, New York, 2002. ISBN 978-0-387-95452-3. Lloyd N. Trefethen and David Bau, III, Numerical Linear Algebra, Society for Industrial and Applied Mathematics, 1997. ISBN 978-0-89871-361-9. Notes: Dongarra et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd edition, SIAM, Philadelphia, 1994. Amritkar, Amit; de Sturler, Eric; Świrydowicz, Katarzyna; Tafti, Danesh; Ahuja, Kapil (2015). "Recycling Krylov subspaces for CFD applications and a new hybrid recycling solver". Journal of Computational Physics 303: 222. doi:10.1016/j.jcp.2015.09.040. Imankulov T., Lebedev D., Matkerim B., Daribayev B., Kassymbek N. "Numerical Simulation of Multiphase Multicomponent Flow in Porous Media: Efficiency Analysis of Newton-Based Method". Fluids. 2021; 6(10):355. https://doi.org/10.3390/fluids6100355
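As a stand-in for the MATLAB / GNU Octave listing mentioned above, here is a minimal Python/NumPy sketch of unrestarted GMRES following the construction in this entry: the Arnoldi step, the Givens-rotation update of the QR factorization of H~n, and the final triangular solve yn = Rn⁻¹·gn. Function and variable names, the tolerance, and the iteration cap are our own choices, not part of the original article.

```python
import numpy as np

def gmres(A, b, x0=None, max_iter=50, tol=1e-10):
    """Unrestarted GMRES for an invertible matrix A, as described above."""
    m = b.shape[0]
    x0 = np.zeros(m) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0, [beta]
    Q = np.zeros((m, max_iter + 1))          # orthonormal basis q1, ..., q_{n+1}
    H = np.zeros((max_iter + 1, max_iter))   # upper Hessenberg matrix H~
    cs, sn = np.zeros(max_iter), np.zeros(max_iter)  # stored Givens rotations
    g = np.zeros(max_iter + 1)               # beta * Omega_n e1, updated in place
    g[0] = beta
    Q[:, 0] = r0 / beta
    residuals = [beta]
    for n in range(max_iter):
        # Arnoldi step: orthogonalize A q_n against the previous basis vectors.
        v = A @ Q[:, n]
        for j in range(n + 1):
            H[j, n] = Q[:, j] @ v
            v -= H[j, n] * Q[:, j]
        H[n + 1, n] = np.linalg.norm(v)
        if H[n + 1, n] > 0:
            Q[:, n + 1] = v / H[n + 1, n]
        # Apply the previous rotations to the new column of H~.
        for j in range(n):
            t = cs[j] * H[j, n] + sn[j] * H[j + 1, n]
            H[j + 1, n] = -sn[j] * H[j, n] + cs[j] * H[j + 1, n]
            H[j, n] = t
        # New rotation zeroing the subdiagonal entry sigma = H[n+1, n].
        rho, sigma = H[n, n], H[n + 1, n]
        cs[n], sn[n] = rho / np.hypot(rho, sigma), sigma / np.hypot(rho, sigma)
        H[n, n] = np.hypot(rho, sigma)       # r_{n+1,n+1} = sqrt(rho^2 + sigma^2)
        H[n + 1, n] = 0.0
        # Update g; |g[n+1]| equals the residual norm of the n-th iterate.
        g[n + 1] = -sn[n] * g[n]
        g[n] = cs[n] * g[n]
        residuals.append(abs(g[n + 1]))
        if residuals[-1] < tol:
            break
    k = n + 1
    y = np.linalg.solve(np.triu(H[:k, :k]), g[:k])  # back-substitute R_n y = g_n
    return x0 + Q[:, :k] @ y, residuals

# Toy usage on a well-conditioned dense system:
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)) + 10 * np.eye(100)
b = rng.standard_normal(100)
x, res = gmres(A, b)
print(np.linalg.norm(b - A @ x))  # should be near the 1e-10 tolerance
```

Because the rotations keep the quantity |γn| up to date, the true residual norm is available at every step without ever forming xn, which is why the final solve happens only once, after the loop exits.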
**Geologic Calendar** Geologic Calendar: The Geologic Calendar is a scale in which the geological lifetime of the Earth is mapped onto a calendrical year; that is to say, day one of the Earth falls on a geologic January 1 at precisely midnight, and today's date and time is December 31 at midnight. On this calendar, the inferred appearance of the first living single-celled organisms, prokaryotes, occurred on a geologic February 25 around 12:30 pm to 1:07 pm, dinosaurs first appeared on December 13, the first flowering plants on December 22 and the first primates on December 28 at about 9:43 pm. The first anatomically modern humans did not arrive until around 11:48 p.m. on New Year's Eve, and all of human history since the end of the last ice age occurred in the last 82.2 seconds before midnight of the new year. A variation of this analogy instead compresses Earth's 4.6-billion-year history into a single day: while the Earth still forms at midnight, and the present day is also represented by midnight, the first life on Earth would appear at 4:00 am, dinosaurs would appear at 10:00 pm, the first flowers at 10:30 pm, and the first primates at 11:30 pm; modern humans would not appear until the last two seconds before midnight. A third analogy, created by University of Washington paleontologist Peter Ward and astronomer Donald Brownlee (both known for their Rare Earth hypothesis) for their book The Life and Death of Planet Earth, extends the calendar to include the Earth's future, leading up to the Sun's death in about 5 billion years. Each of the 12 months then represents one billion years of the Earth's life. According to this calendar, the first life appears in January and the first animals in May, with the present day falling on May 18; and although the Sun will not destroy the Earth until December 31, all animals will have died out by the end of May.
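The arithmetic behind the first analogy is a simple linear rescaling of roughly 4.6 billion years onto one calendar year. A minimal Python sketch, using round figures consistent with the dates quoted above:

```python
from datetime import datetime, timedelta

EARTH_AGE_YEARS = 4.6e9             # age of the Earth used by the analogy
SECONDS_PER_YEAR = 365 * 24 * 3600  # one non-leap calendar year

def calendar_moment(years_ago):
    """Map an event 'years_ago' onto the geologic calendar year."""
    fraction_elapsed = (EARTH_AGE_YEARS - years_ago) / EARTH_AGE_YEARS
    return datetime(2001, 1, 1) + timedelta(
        seconds=fraction_elapsed * SECONDS_PER_YEAR)

print(calendar_moment(66e6))   # end of the dinosaurs: around December 26
print(calendar_moment(1e5))    # modern humans: ~11:48 p.m., December 31
print(calendar_moment(12e3))   # end of the last ice age: ~82 s before midnight
```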
**Alpha-1,3-glucan synthase** Alpha-1,3-glucan synthase: In enzymology, an alpha-1,3-glucan synthase (EC 2.4.1.183) is an enzyme that catalyzes the chemical reaction UDP-glucose + [alpha-D-glucosyl-(1-3)]n ⇌ UDP + [alpha-D-glucosyl-(1-3)]n+1. Thus, the two substrates of this enzyme are UDP-glucose and [alpha-D-glucosyl-(1-3)]n, whereas its two products are UDP and [alpha-D-glucosyl-(1-3)]n+1. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:alpha-D-(1-3)-glucan 3-alpha-D-glucosyltransferase. Other names in common use include uridine diphosphoglucose-1,3-alpha-glucan glucosyltransferase, and 1,3-alpha-D-glucan synthase.
**Opaline glass** Opaline glass: The term "opaline" refers to a number of different styles of glassware. Opaline glass: Opaline glass is a milky glass, white or colored, made translucent or opaque by adding particular phosphates or oxides during mixing. In France, the term "opaline" is used to refer to multiple types of glass, and not specifically antique colored crystal or semi-crystal, as is commonly thought; "opaline" is often a mistakenly given term referring to the color of a particular type of glass, rather than the age, origin or content of the glass. Description: To make opaline glass, opacifying substances are added, such as sodium phosphate, sodium chloride, calcium phosphate, calcium chloride, tin oxide and talc. The glass can thus take on different colors and varying shades, depending on the quantity of the added substance: from white to gray, pink, lavender, green, golden yellow and light blue, up to blue and black. History: The first objects in opaline glass were made in Murano in the sixteenth century, with the addition of calcium phosphate obtained from the calcination of bones. The technique did not remain secret and was copied in Germany, where this glass was known as bein glass (bone glass). Opaline glass was produced in large quantities in France in the nineteenth century and reached the apex of its diffusion and popularity during the empire of Napoleon III; but the pieces made in the period of Napoleon I, which are translucent, are the most sought after on the antiques market. History: The production centers were in Le Creusot, Baccarat and Saint-Louis-lès-Bitche. In England it was produced in the eighteenth century, in Bristol. From the mid-nineteenth century opaque opal glass objects came into fashion. At the Sèvres Porcelain Manufactory, a production line in white milk glass, decorated by hand, was tried, in an attempt to imitate the transparency of Chinese porcelain. History: With this particular glass, objects of common use were handcrafted: vases, bowls, cups, goblets, carafes, perfume bottles, boxes and lamps. Some objects were also decorated in cold enamel, with flowers, landscapes or birds. Sometimes a bronze or silver support was added to the opal vase. Most green or yellow opaline glass is uranium glass. 19th century opaline glass: Many different pieces were produced in opaline glass, including vases, bowls, cups, coupes, decanters, perfume bottles, boxes, clocks and other implements. All opaline glass is hand-blown and has a rough or polished pontil on the bottom. There are no seams and no machine engraving, and most opaline glass is not branded or signed. Many pieces of opaline glass are decorated with gilding, some with hand-painted flowers or birds. Several have bronze ormolu mounts, rims, hinges or holders. Later opaline glass: In 1930 the French factory Portieux Vallérysthal put opal glass objects on the market in a particular blue-azure color. Some pieces have decorations in pure gold or polychrome enamels and are sometimes equipped with supports or hinges in gilded bronze (sets of plates, cruets, sets of glasses and cups, boxes, lamps, flacons, chandeliers). The blue-azure color of the glass is inspired by that of the American robin's egg. Later opaline glass: In the late 20th century the Venetian master glassmaker Vincenzo Nason began producing a similar type of glass, labelled 'Veritable Opaline de Murano'.
**The Kitchen Cabinet (radio show)** The Kitchen Cabinet (radio show): The Kitchen Cabinet is a BBC Radio 4 programme hosted by Jay Rayner in which members of the public put questions to a panel of experts about food and cooking. History: The programme was first broadcast on 7 February 2012; as of February 2023 it is in its 39th series. It is a Somethin' Else production. Format: The show follows a similar format to the long-established Gardeners' Question Time, coming from public venues at interesting 'food locations' in Britain in front of an audience. Panel members have included the food historian Professor Peter Barham; James 'Jocky' Petrie, the former Head of Creative Development for Heston Blumenthal; and food writer Tim Hayward.
**Netflix Prize** Netflix Prize: The Netflix Prize was an open competition for the best collaborative filtering algorithm to predict user ratings for films, based on previous ratings without any other information about the users or films, i.e. without the users being identified except by numbers assigned for the contest. Netflix Prize: The competition was held by Netflix, an online DVD-rental and video streaming service, and was open to anyone who was neither connected with Netflix (current and former employees, agents, close relatives of Netflix employees, etc.) nor a resident of certain blocked countries (such as Cuba or North Korea). On September 21, 2009, the grand prize of US$1,000,000 was given to the BellKor's Pragmatic Chaos team, which bested Netflix's own algorithm for predicting ratings by 10.06%. Problem and data sets: Netflix provided a training data set of 100,480,507 ratings that 480,189 users gave to 17,770 movies. Each training rating is a quadruplet of the form <user, movie, date of grade, grade>. The user and movie fields are integer IDs, while grades are from 1 to 5 (integer) stars. The qualifying data set contains over 2,817,131 triplets of the form <user, movie, date of grade>, with grades known only to the jury. A participating team's algorithm had to predict grades on the entire qualifying set, but the team was informed of the score for only half of the data: a quiz set of 1,408,342 ratings. The other half is the test set of 1,408,789 ratings, and performance on this was used by the jury to determine potential prize winners. Only the judges knew which ratings were in the quiz set and which were in the test set—this arrangement was intended to make it difficult to hill climb on the test set. Submitted predictions were scored against the true grades in terms of root mean squared error (RMSE), and the goal was to reduce this error as much as possible. Note that, while the actual grades are integers in the range 1 to 5, submitted predictions need not be. Netflix also identified a probe subset of 1,408,395 ratings within the training data set. The probe, quiz, and test data sets were chosen to have similar statistical properties. Problem and data sets: In summary, the data used in the Netflix Prize look as follows: the training set (99,072,112 ratings not including the probe set; 100,480,507 including it); the probe set (1,408,395 ratings); and the qualifying set (2,817,131 ratings), consisting of the test set (1,408,789 ratings), used to determine winners, and the quiz set (1,408,342 ratings), used to calculate leaderboard scores. For each movie, the title and year of release are provided in a separate dataset. No information at all is provided about users. In order to protect the privacy of the customers, "some of the rating data for some customers in the training and qualifying sets have been deliberately perturbed in one or more of the following ways: deleting ratings; inserting alternative ratings and dates; and modifying rating dates." The training set is constructed such that the average user rated over 200 movies, and the average movie was rated by over 5,000 users. But there is wide variance in the data—some movies in the training set have as few as 3 ratings, while one user rated over 17,000 movies. There was some controversy as to the choice of RMSE as the defining metric. Would a reduction of the RMSE by 10% really benefit the users? It has been claimed that even as small an improvement as 1% in RMSE results in a significant difference in the ranking of the "top-10" most recommended movies for a user.
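Since the whole contest turned on this one metric, it is worth seeing how small the computation is. A minimal Python sketch of RMSE; the rating values below are invented for illustration:

```python
import numpy as np

def rmse(predicted, actual):
    """Root mean squared error, the contest's scoring metric."""
    predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

# True grades are integers from 1 to 5; predictions may be real-valued.
actual = [5, 3, 4, 1, 2]
predicted = [4.2, 3.5, 3.8, 2.0, 2.9]
print(round(rmse(predicted, actual), 4))  # 0.7403
```

Under this metric, lowering the test-set score from Cinematch's 0.9525 to 0.8572 (described in the Prizes section below) corresponds to exactly the 10% relative improvement the grand prize required.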
Prizes: Prizes were based on improvement over Netflix's own algorithm, called Cinematch, or over the previous year's score if a team had made improvement beyond a certain threshold. A trivial algorithm that predicts for each movie in the quiz set its average grade from the training data produces an RMSE of 1.0540. Cinematch uses "straightforward statistical linear models with a lot of data conditioning." Using only the training data, Cinematch scores an RMSE of 0.9514 on the quiz data, roughly a 10% improvement over the trivial algorithm. Cinematch has a similar performance on the test set, 0.9525. In order to win the grand prize of $1,000,000, a participating team had to improve this by another 10%, to achieve 0.8572 on the test set. Such an improvement on the quiz set corresponds to an RMSE of 0.8563. Prizes: As long as no team won the grand prize, a progress prize of $50,000 was awarded every year for the best result thus far. However, in order to win this prize, an algorithm had to improve the RMSE on the quiz set by at least 1% over the previous progress prize winner (or over Cinematch, the first year). If no submission succeeded, the progress prize was not to be awarded for that year. Prizes: To win a progress or grand prize a participant had to provide source code and a description of the algorithm to the jury within one week after being contacted by them. Following verification the winner also had to provide a non-exclusive license to Netflix. Netflix would publish only the description, not the source code, of the system. (To keep their algorithm and source code secret, a team could choose not to claim a prize.) The jury also kept their predictions secret from other participants. A team could send as many attempts to predict grades as it wished. Originally submissions were limited to once a week, but the interval was quickly modified to once a day. A team's best submission so far counted as its current submission. Prizes: Once one of the teams succeeded in improving the RMSE by 10% or more, the jury would issue a last call, giving all teams 30 days to send their submissions. Only then was the team with the best submission asked for the algorithm description, source code, and non-exclusive license, and, after successful verification, declared a grand prize winner. The contest would last until the grand prize winner was declared. Had no one received the grand prize, it would have lasted for at least five years (until October 2, 2011). After that date, the contest could have been terminated at any time at Netflix's sole discretion. Progress over the years: The competition began on October 2, 2006. By October 8, a team called WXYZConsulting had already beaten Cinematch's results. By October 15, there were three teams who had beaten Cinematch, one of them by 1.06%, enough to qualify for the annual progress prize. By June 2007 over 20,000 teams had registered for the competition from over 150 countries; 2,000 teams had submitted over 13,000 prediction sets. Over the first year of the competition, a handful of front-runners traded first place. The more prominent ones were: WXYZConsulting, a team of Wei Xu and Yi Zhang (a front runner during November–December 2006); ML@UToronto A, a team from the University of Toronto led by Prof. Geoffrey Hinton (a front runner during parts of October–December 2006); Gravity, a team of four scientists from the Budapest University of Technology (a front runner during January–May 2007); BellKor, a group of scientists from AT&T Labs (a front runner since May 2007);
and Dinosaur Planet, a team of three undergraduates from Princeton University (a front runner on September 3, 2007 for one hour before BellKor snatched back the lead). On August 12, 2007, many contestants gathered at the KDD Cup and Workshop 2007, held at San Jose, California. During the workshop all four of the top teams on the leaderboard at that time presented their techniques. The team from IBM Research — Yan Liu, Saharon Rosset, Claudia Perlich, and Zhenzhen Kou — won third place in Task 1 and first place in Task 2. Progress over the years: Over the second year of the competition, only three teams reached the leading position: BellKor, a group of scientists from AT&T Labs (front runner during May 2007 – September 2008); BigChaos, a team of Austrian scientists from commendo research & consulting (single-team front runner since October 2008); and BellKor in BigChaos, a joint team of the two leading single teams (a front runner since September 2008). 2007 Progress Prize: On September 2, 2007, the competition entered the "last call" period for the 2007 Progress Prize. Over 40,000 teams from 186 countries had entered the contest. They had thirty days to tender submissions for consideration. At the beginning of this period the leading team was BellKor, with an RMSE of 0.8728 (8.26% improvement), followed by Dinosaur Planet (RMSE = 0.8769; 7.83% improvement), and Gravity (RMSE = 0.8785; 7.66% improvement). In the last hour of the last call period, an entry by "KorBell" took first place. This turned out to be an alternate name for Team BellKor. On November 13, 2007, team KorBell (formerly BellKor) was declared the winner of the $50,000 Progress Prize with an RMSE of 0.8712 (8.43% improvement). The team consisted of three researchers from AT&T Labs: Yehuda Koren, Robert Bell, and Chris Volinsky. As required, they published a description of their algorithm. Progress over the years: 2008 Progress Prize: The 2008 Progress Prize was awarded to the team BellKor. Their submission, combined with that of a different team, BigChaos, achieved an RMSE of 0.8616 with 207 predictor sets. Progress over the years: The joint team consisted of two researchers from commendo research & consulting GmbH, Andreas Töscher and Michael Jahrer (originally team BigChaos), and three researchers from AT&T Labs, Yehuda Koren, Robert Bell, and Chris Volinsky (originally team BellKor). As required, they published a description of their algorithm. This was the final Progress Prize, because obtaining the required 1% improvement over the 2008 Progress Prize would be sufficient to qualify for the Grand Prize. The prize money was donated to charities chosen by the winners. Progress over the years: 2009: On June 26, 2009 the team "BellKor's Pragmatic Chaos," a merger of teams "BellKor in BigChaos" and "Pragmatic Theory," achieved a 10.05% improvement over Cinematch (a quiz RMSE of 0.8558). The Netflix Prize competition then entered the "last call" period for the Grand Prize. In accord with the rules, teams had thirty days, until July 26, 2009, 18:42:37 UTC, to make submissions that would be considered for this prize. On July 25, 2009 the team "The Ensemble," a merger of the teams "Grand Prize Team" and "Opera Solutions and Vandelay United," achieved a 10.09% improvement over Cinematch (a quiz RMSE of 0.8554). On July 26, 2009, Netflix stopped gathering submissions for the Netflix Prize contest. The final standing of the leaderboard at that time showed that two teams met the minimum requirements for the Grand Prize:
"The Ensemble" with a 10.10% improvement over Cinematch on the Qualifying set (a Quiz RMSE of 0.8553), and "BellKor's Pragmatic Chaos" with a 10.09% improvement over Cinematch on the Qualifying set (a Quiz RMSE of 0.8554). The Grand Prize winner was to be the one with the better performance on the Test set. Progress over the years: On September 18, 2009, Netflix announced team "BellKor's Pragmatic Chaos" as the prize winner (a Test RMSE of 0.8567), and the prize was awarded to the team in a ceremony on September 21, 2009. "The Ensemble" team had matched BellKor's result, but since BellKor submitted their results 20 minutes earlier, the rules award the prize to BellKor.The joint-team "BellKor's Pragmatic Chaos" consisted of two Austrian researchers from Commendo Research & Consulting GmbH, Andreas Töscher and Michael Jahrer (originally team BigChaos), two researchers from AT&T Labs, Robert Bell, and Chris Volinsky, Yehuda Koren from Yahoo! (originally team BellKor) and two researchers from Pragmatic Theory, Martin Piotte and Martin Chabbert. As required, they published a description of their algorithm.The team reported to have achieved the "dubious honors" (sic Netflix) of the worst RMSEs on the Quiz and Test data sets from among the 44,014 submissions made by 5,169 teams was "Lanterne Rouge," led by J.M. Linacre, who was also a member of "The Ensemble" team. Cancelled sequel: On March 12, 2010, Netflix announced that it would not pursue a second Prize competition that it had announced the previous August. The decision was in response to a lawsuit and Federal Trade Commission privacy concerns. Cancelled sequel: Privacy concerns Although the data sets were constructed to preserve customer privacy, the Prize has been criticized by privacy advocates. In 2007 two researchers from The University of Texas at Austin were able to identify individual users by matching the data sets with film ratings on the Internet Movie Database.On December 17, 2009, four Netflix users filed a class action lawsuit against Netflix, alleging that Netflix had violated U.S. fair trade laws and the Video Privacy Protection Act by releasing the datasets. There was public debate about privacy for research participants. On March 19, 2010, Netflix reached a settlement with the plaintiffs, after which they voluntarily dismissed the lawsuit.
**TrashMail** TrashMail: TrashMail is a free disposable e-mail address service created in 2002 by Stephan Ferraro, at the time a computer science student at Epitech Paris; it now belongs to Ferraro Ltd. The service provides temporary email addresses that can be abandoned if they start receiving email spam. Its main function is to forward mail to a real, hidden email address. Description: TrashMail receives emails and forwards them to a real hidden email address. On account creation there is the option to set a total number of forwards and a date when the disposable address expires. For each forwarded email the counter is decreased by one. When the counter reaches zero or the date limit has passed, the temporary email address is deleted. Description: After the temporary email address is deleted, any incoming email is rejected with the SMTP code 550 5.1.1. TrashMail also provides a free open-source add-on for Mozilla Firefox, available from the official store. The email registration and community forum are provided over HTTPS (SSL over HTTP) to protect privacy. Additionally, the SMTP server communication has TLS enabled by default. Description: As many spammers rely on harvested email addresses, the best method of avoiding spam is not to publish one's real email address. By providing a temporary address, TrashMail allows users to protect their real email. Extras: TrashMail differs from other disposable email address services in offering a challenge-response system for each free disposable email address. Additionally, it provides real-time spam statistics on its main page, where the current amount of incoming spam can be checked. Software: TrashMail can be used via the web. However, an API is provided and documented on the forum, which explains how to write custom software for the free service. A Mozilla Firefox add-on for the service is available.
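The lifecycle described above (decrement a counter per forwarded message, delete the address once the counter or the expiry date runs out, then reject with SMTP 550 5.1.1) can be modelled in a few lines. This is an illustrative sketch only; the class and field names are hypothetical and do not reflect TrashMail's actual implementation:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DisposableAddress:
    """Toy model of a disposable address (names are hypothetical)."""
    real_address: str   # the hidden address that mail is forwarded to
    forwards_left: int  # total-forwards limit chosen at account creation
    expires: date       # expiry date chosen at account creation

    def handle_incoming(self, today: date) -> str:
        # Once the counter or the date limit runs out, the address is
        # treated as deleted and further mail is rejected.
        if self.forwards_left <= 0 or today > self.expires:
            return "550 5.1.1 recipient rejected"
        self.forwards_left -= 1
        return f"forward to {self.real_address}"

addr = DisposableAddress("hidden@example.org", forwards_left=2,
                         expires=date(2025, 1, 31))
print(addr.handle_incoming(date(2025, 1, 10)))  # forwarded (1 forward left)
print(addr.handle_incoming(date(2025, 1, 11)))  # forwarded (0 forwards left)
print(addr.handle_incoming(date(2025, 1, 12)))  # 550 rejection
```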
**Compressed air filters** Compressed air filters: Compressed air filters, often referred to as line filters, are used to remove contaminants from compressed air after compression has taken place. When the filter is combined with a regulator and an oiler, it is called an air set. Air leaving a standard screw or piston compressor will generally have a high water content, as well as a high concentration of oil and other contaminants. There are many different types of filters, suitable for different pneumatics applications. Working principle: Unfiltered compressed air frequently contains dust, oil, rust, moisture and other harmful substances, and therefore requires filtration. In the first stage of filtration, the compressed air passes through a tube-shaped mesh filter, which creates a coalescence effect. Here larger particles are adsorbed on the filter and the water condenses into larger droplets, which can then pass into the separation chamber. The compressed air is slowed down, which makes the particles condense on a honeycomb-like pad, allowing the water droplets to travel to the bottom of the drainage system and through an automatic or electric drain valve to the discharge. In the first filtration stage more than 95% of the water droplets, oil and large particles are removed. This practice is most common for removing water, but is also used for removing oil. In the second filtration stage, the air is passed through a fiber filter. This process generates thousands of small vortices and other disturbances that cause the airflow to be less uniform. In doing this, the air comes into contact with more surface area of the filtration medium. Fine particulate is then captured because it will not fit through the small pores in the fiber filter. However, there will be a small pressure drop due to the added resistance to the airflow. Types of filters: Particulate filters: Particulate compressed air filters are used to remove dust and particles from the air. Activated carbon filters: Activated carbon filters utilize a composite carbon material to remove gases and odors from the air. They are used in factories where food is produced or for breathing gas. Types of filters: Coalescing filters: Coalescing filters remove water and oil aerosols by coalescing the aerosols into droplets, partly by means of a tortuous path and the associated pressure drop. Coalescers remove both water and oil aerosols from the air stream, and are also rated for particulate removal through direct interception. Filtration of oil and water aerosols and of dust and dirt particles down to 0.01 µm is the best achievable in industry. Types of filters: Cold coalescing filters: Cold coalescing filters are coalescing filters operated at around 35 °F (2 °C), allowing them to be more effective at removing moisture. Compressed intake filters: Intake filters are the first line of defense in filtering. These filters can remove contaminants down to 0.3 µm, including chemical contaminants.
**Gum over platinum** Gum over platinum: Gum over platinum is a historical chemical photographic process, which was commonly used in art photography. It is a very complex process, in which a specially treated platinum print photograph is coated with washes of gum arabic, then re-exposed to the same photographic negative. The finished process results in a sepia-toned print, and is said to impart added luminosity and depth. It is sometimes called "pigment over platinum". Gum over platinum: Gum arabic is not photosensitive by itself; to sensitize it, it must first be placed in contact with ammonium or potassium dichromate. To clear the chromic acid, the print is washed in 1% potassium metabisulfite after proper development in water. Interested individuals should read up on the process before attempting it, as the chromic acids are very dangerous to work with. Gum over platinum: The mechanics of the gum portion are not entirely understood; what occurs is that the exposed gum hardens and becomes water-insoluble. Upon washing, the unexposed portions wash away, leaving the white paper exposed. The technique is related to platinum printing.
**Lock-on (protest tactic)** Lock-on (protest tactic): A lock-on is a technique used by protesters to make it difficult to remove them from their place of protest. It often involves improvised or specially designed and constructed hardware, although a basic lock-on is the human chain, which relies simply on hand grip. Objective: In American protest movements dating from the 1960s and 1970s, the term lockdown applies to a person's attaching themself to a building, object, fence or other immobile object. The safe removal of the protesters necessitates the involvement of skilled technicians, and is often time-consuming. Objective: The lock-on chosen by the protester may be the difference between being arrested or not, or may vary the kind or number of charges brought against them by the police. If a protester can remove themselves when asked to by the police, they may stand a better chance of not being arrested. However, if they can remove themselves and choose not to, they may be charged for refusing to remove themselves from the lock-on. Objective: Locking on is a very successful means of slowing down operations that are perceived by the protesters to be illegal or immoral. It is also often used to allow time for journalists to arrive to record the scene and take statements from the group's spokespeople. Devices: Lock-ons were originally performed with chains and handcuffs, but other devices have been introduced, including tripods and tubes or pipes with handholds built in, to link a person to an object or to create chains of people. Other common hardware includes padlocks, U-locks and other bicycle locks, lockboxes, and the tripods, platforms and other rigging used in tree sitting. A more complicated lock-on is the sleeping dragon, which involves protesters putting their limbs through pipes containing concrete, or a mixture of steel and concrete, and is limited only by the imagination and ingenuity of those making the lock-on. The protester can choose between a type that allows them to willingly remove themselves and a type that requires machinery to remove them. Devices can be buried as an additional barrier to removal. A car dragon is a car concreted into place after removal of the wheels, to which protesters can then lock on via a further device fixed to the car. Opposition in law: In the United Kingdom in May 2023, the Public Order Act 2023 made it a criminal offence for a person to "attach themselves to another person, to an object or to land" with the intention of causing serious disruption. "Going equipped" with such an aim was also criminalised.
**Proprotein convertase subtilisin/kexin type 1 inhibitor** Proprotein convertase subtilisin/kexin type 1 inhibitor: Proprotein convertase subtilisin/kexin type 1 inhibitor is a protein, known as proSAAS, that in humans is encoded by the PCSK1N gene. Function: This protein is expressed largely in cells possessing a regulated secretory pathway, such as endocrine/neuroendocrine cells and neurons. The intact proSAAS protein, as well as the carboxy-terminal peptide containing the inhibitory hexapeptide LLRVKR, functions as an inhibitor of prohormone convertase 1/3, which accomplishes the initial proteolytic cleavage of peptide precursors. ProSAAS is further processed at the N- and C-termini into multiple short peptides, leaving the central segment intact. This central, unprocessed portion of the protein may function as a neural- and endocrine-specific chaperone due to its potent ability to block the aggregation of beta amyloid and alpha synuclein in vitro, and to block oligomer cytotoxicity in cells. Recent data show that nigral proSAAS expression blocks the deterioration of the striatonigral pathway in a synuclein rat model of Parkinson's disease. ProSAAS also oligomerizes and undergoes liquid-liquid phase separation. Function: Differential expression of this gene may be associated with obesity.
**Yale–Brown Obsessive Compulsive Scale** Yale–Brown Obsessive Compulsive Scale: The Yale–Brown Obsessive–Compulsive Scale (Y-BOCS) is a test to rate the severity of obsessive–compulsive disorder (OCD) symptoms. Yale–Brown Obsessive Compulsive Scale: The scale, which was designed by Wayne K. Goodman and his colleagues, is used extensively in research and clinical practice both to determine the severity of OCD and to monitor improvement during treatment. This scale, which measures obsessions separately from compulsions, specifically measures the severity of symptoms of obsessive–compulsive disorder without being biased towards or against the type of content the obsessions or compulsions might present. Following the original publication, the total score is usually computed from the subscales for obsessions (items 1–5) and compulsions (items 6–10), but other algorithms exist (a short scoring sketch is given at the end of this entry). Accuracy and modifications: Goodman and his colleagues have developed the Yale–Brown Obsessive–Compulsive Scale—Second Edition (Y-BOCS-II) in an effort to modify the original scale which, according to Goodman, "[has become] the gold standard measure of obsessive–compulsive disorder (OCD) symptom severity". In creating the Y-BOCS-II, changes were made "to the Severity Scale item content and scoring framework, integrating avoidance into the scoring of Severity Scale items, and modifying the Symptom Checklist content and format". After reliability tests, Goodman concluded that "Taken together, the Y-BOCS-II has excellent psychometric properties in assessing the presence and severity of obsessive–compulsive symptoms. Although the Y-BOCS remains a reliable and valid measure, the Y-BOCS-II may provide an alternative method of assessing symptom presence and severity." Studies have been conducted by members of the Iranian Journal of Psychiatry and Clinical Psychology to determine the accuracy of the Yale–Brown Obsessive–Compulsive Scale (specifically as it appears in its Persian format). The members applied the scale to a group of individuals and, after ensuring a normal distribution of data, a series of reliability tests were performed. According to the authors, "[the] results supported satisfactory validity and reliability of translated form of Yale–Brown Obsessive–Compulsive Scale for research and clinical diagnostic applications". Children's version: The children's version of the Y-BOCS, or the Children's Yale–Brown Obsessive–Compulsive Scales (CY-BOCS), is a clinician-report questionnaire designed to assess symptoms of obsessive–compulsive disorder from childhood through early adolescence. The CY-BOCS contains 70 questions and takes about 15 to 25 minutes. Each question is designed to ask about symptoms of obsessive–compulsive behavior, though the exact breakdown of questions is unknown. For each question, children rate the degree to which the question applies on a scale of 0–4. Based on research, this assessment has been found to be statistically valid and reliable, but not necessarily helpful. Children's version: Other versions: The CY-BOCS has been adapted into several self- and parent-report versions, designed to be completed by parent and child working together, although most have not been psychometrically validated. However, these versions still ask the child to rate the severity of their obsessive–compulsive behaviors and the degree to which each has been impairing.
While this measure has been found to be useful in a clinic setting, scores and interpretations are taken with a grain of salt, given the lack of validation. Another version, which is parent-focused, is similar to the original CY-BOCS and is administered to both parent and child by the clinician. This version was distributed by Solvay Pharmaceuticals in the late 1990s, creating an association between the measure and a number of pharmaceutical groups that has caused it to be avoided by most clinicians. Severity cutoff scores for this version have not been empirically determined.
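As a concrete illustration of the usual scoring algorithm mentioned at the start of this entry — ten severity items rated 0–4, summed into an obsessions subscale (items 1–5) and a compulsions subscale (items 6–10) — here is a minimal Python sketch; the ratings below are invented:

```python
def ybocs_scores(item_ratings):
    """Subscale and total scores for the ten Y-BOCS severity items.

    item_ratings: ten integers in 0..4, with items 1-5 rating obsessions
    and items 6-10 rating compulsions, per the scoring described above.
    """
    if len(item_ratings) != 10 or not all(0 <= r <= 4 for r in item_ratings):
        raise ValueError("expected ten ratings in the range 0-4")
    obsessions = sum(item_ratings[:5])    # items 1-5
    compulsions = sum(item_ratings[5:])   # items 6-10
    return {"obsessions": obsessions, "compulsions": compulsions,
            "total": obsessions + compulsions}

print(ybocs_scores([2, 3, 1, 2, 2, 3, 2, 1, 2, 2]))
# {'obsessions': 10, 'compulsions': 10, 'total': 20}
```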
**Coiled sewn sandals** Coiled sewn sandals: Coiled sewn sandals are an ancient Egyptian form of footwear constructed with a technique similar to that used in basket weaving, in which coils were sewn together with the same material used to make the coils. The shoes were typically woven from halfa grass.
**Nissan RD engine** Nissan RD engine: The Nissan RD engine series is essentially a diesel version of the Nissan RB engine design: a single overhead cam, six-cylinder diesel engine. It was the successor to the Nissan LD and SD six-cylinder engines and was joined by the six-cylinder Nissan TD engine. From 1997 onwards the turbocharged versions were fitted with electronic fuel injection. The turbodiesel version, known as the RD28T (or RD28ET with electronic fuel injection), was also fitted to the Nissan Safari (also known as the Nissan Patrol) off-road vehicle. Nissan RD engine: Since the Nissan RD engine is based on the Nissan RB engine, they have many similarities and many parts are interchangeable. The engine block was similar to that of the RB30 engine except that it had more material and was heavier, with an 85 mm bore versus the RB30's 86 mm bore and an 83 mm stroke versus its 85 mm stroke. One issue is that the stronger vibrations of the diesel engine can loosen the crank/harmonic balancer bolt (carried over from the RB engines), which can then back out or fall off, causing major engine damage; the use of thread-locking fluid during installation is recommended. The cylinder head was of a non-crossflow design, meaning that the exhaust and intake ports were on the same side of the cylinder head.
RD28: 2.8 L (2,826 cc) SOHC, 85 mm (3.35 in) bore.
RD28 Series 1: 12 valves (two per cylinder). When originally introduced, JIS gross ratings were used rather than JIS net, meaning that early information claims 100 PS (74 kW; 99 bhp) and 18.5 kg⋅m (181 N⋅m; 134 lb⋅ft) at the same engine speeds. 94 PS (69 kW; 93 bhp) at 4,800 rpm; 18 kg⋅m (177 N⋅m; 130 lb⋅ft) at 2,400 rpm. Fitted to: Nissan Skyline R31 series 1985–1987; Nissan Laurel C32–C34 series 1986–1993; Nissan Cedric / Nissan Gloria Y30–Y32 series 1985–1993; commercial (taxi) Nissan Cedric / Nissan Gloria Y31 series sedan 1987–1999; Nissan Crew K30 series 1993–1999; Nissan Cefiro A31 series 1988–1993. No PCV on the tappet cover.
RD28 Series 2: 100 PS (74 kW; 99 bhp) at 4,800 rpm; 18.2 kg⋅m (178 N⋅m; 132 lb⋅ft) at 2,400 rpm. Fitted to: Nissan Cedric / Nissan Gloria Y32 & Y33 series 1993–1999; Nissan Laurel C34–C35 series 1994–1999.
RD28E: 100 PS (74 kW; 99 bhp) at 4,800 rpm; 18.2 kg⋅m (178 N⋅m; 132 lb⋅ft) at 2,400 rpm. Fitted to: commercial (taxi) Nissan Cedric Y31 series sedan 1999.08–2002; Nissan Laurel C35 series 1999–2001; Nissan Crew K30 series 1999–2009. Vacuum pump located on the tappet cover.
RD28T: 2.8 L (2,826 cc) SOHC turbodiesel. 125 PS (92 kW; 123 bhp) at 4,400 rpm; 26 kg⋅m (255 N⋅m; 188 lb⋅ft) at 2,400 rpm. Fitted to: Nissan Safari Spirit series Y60 2-door soft-top 1996–1997; Nissan Civilian bus.
RD28ETi1: electronically controlled turbodiesel with an intercooler. 135 PS (99 kW; 133 bhp) at 4,000 rpm; 29.3 kg⋅m (287 N⋅m; 212 lb⋅ft) at 2,000 rpm. Fitted to: Nissan Safari Spirit series Y61 2-door soft-top 1997–1999 (automatic transmission).
RD28ETi2: electronically controlled turbodiesel with an intercooler. 145 PS (107 kW; 143 bhp) at 4,000 rpm; 26.6 kg⋅m (261 N⋅m; 192 lb⋅ft) at 2,000 rpm.
**Arch Linux ARM** Arch Linux ARM: Arch Linux ARM is a port of Arch Linux for ARM processors. Its design philosophy is "simplicity and full control to the end user," and like its parent operating system Arch Linux, it aims to be very Unix-like. This goal of minimalism and complete user control, however, can make it difficult for Linux beginners, as it requires more knowledge of, and responsibility for, the operating system. History and development: Arch Linux ARM is based on Arch Linux, a minimalist Linux distribution first released on March 11, 2002. The idea of making a single, official port of Arch Linux for devices with ARM processors came from members of the Arch Linux PlugApps and ArchMobile development teams, notably Mike Staszel, who went on to found the Arch Linux ARM project. Kevin Mihelich is currently Arch Linux ARM's primary developer. Arch Linux ARM is community-developed, with software development and user support provided fully by volunteer effort and donations. Also, unlike other community-supported operating systems such as Ubuntu, Arch Linux ARM has a relatively small user base, making user participation in development especially important. Arch Linux ARM has a rolling release cycle, i.e. new software is packaged as it is released. This "bleeding edge" release cycle of small, frequent package updates differs from the release cycles of Linux distributions such as Debian, which focus on large, scheduled releases of packages proven to be stable. Supported processors: Unlike Arch Linux, which targets x86-64 CPUs, Arch Linux ARM targets ARM CPUs and, as a result, many single-board computers such as the Raspberry Pi. There is support for: ARMv7 1st generation Cortex-A8 platforms, such as the BeagleBoard or Cubieboard; ARMv7 2nd generation Cortex-A9 and Tegra platforms, such as the PandaBoard or TrimSlice; ARMv7 3rd generation Cortex-A7 and Cortex-A15 platforms, such as the Cubieboard2, Odroid XU, Samsung Chromebook (series 3), Samsung Chromebook 2 or Raspberry Pi 2; and ARMv8 64-bit capable Cortex-A53 and Cortex-A72 platforms, such as the Odroid C2 and N2, Acer Chromebook R13 or Raspberry Pi 3. Arch Linux ARM can run on any device that supports the ARMv7 or ARMv8 instruction sets, including the 64-bit ARMv8 instruction set of the Raspberry Pi 3 and 4. For a list of officially supported platforms, see archlinuxarm.org's Platforms page. For a list of unofficial, community-supported devices, see archlinuxarm.org's Community-Supported Devices forum. Reception: Arch Linux ARM has gained popularity as a lightweight Linux distribution, and in 2014 was growing in popularity among single-board computer hobbyists. Arch Linux ARM is also known for having good community support. In 2021–2022, the Asahi Linux project used a tailored version of Arch Linux ARM, with special imaging requirements, scripts, and other utilities to get Apple hardware correctly recognized by the operating system; ultimately, however, the project moved to a Fedora Linux base, citing dependency problems, slow response times when requesting support on the matter, and other issues.
**Tropical green building** Tropical green building: Tropical green building refers to a style of construction that focuses on energy reduction, reduced use of chemicals, and supporting local labor and community. This requires close cooperation of the design team (the architects and engineers) and the client at all project stages, from site selection, scheme formation, material selection and procurement, to project implementation. Tropical green building has the same basis as green building in more temperate climates, but the methods of construction are completely different. In the tropics, the focus is on keeping cool, preventing insect infestations, and reducing mould, damp and maintenance in the home. Tropical green building: Generally, tropical green building also seeks to reduce power consumption through intelligent architecture, such as by admitting ample natural light so that electric lights are not needed during the daytime, and, at night, by relying on white-painted roofs and ceilings and low-energy light bulbs such as compact fluorescents or LED lamps. Solar power, wind power, and/or the use of micro hydro are often deployed, but are not always the focus of tropical green building.
**Tribsoft** Tribsoft: Tribsoft was a Canadian software company that specialized in porting computer games to the Linux platform. Tribsoft: It was responsible for porting Jagged Alliance 2, and it gained the porting rights to Europa Universalis, Majesty: The Fantasy Kingdom Sim and Jagged Alliance 2: Unfinished Business. In the end only Majesty was ever ported, and that port was done by Linux Game Publishing. Europa Universalis II was also said to be coming to Linux. Sometime in 2002 the owner of Tribsoft mentioned that he was "taking a short break" from porting games to Linux. This break eventually became permanent when Tribsoft shut down in late 2002.
**Spore-like cell** Spore-like cell: Spore-like cells were proposed to be pluripotent cells that lie dormant in animal tissue and become active under stress or injury as adult stem cells, exhibiting behavior characteristic of spores. They were proposed in 2001 by brothers Charles and Martin Vacanti and colleagues. Further work in collaboration with Japanese researchers led to the apparent discovery of STAP cells, in which the pluripotent cells were newly created by stress or injury. This work was published in 2014, but was soon found to rest on fraudulent work by Haruko Obokata. Characteristics: Spore-like cells were said to be a specific class of stem cells in adult organisms, including humans, which are small, versatile, and most frequently remain in a dormant "spore-like" state as the rest of the cells of the organism divide, grow, and die. Despite their dormancy, they apparently retain the ability to grow, divide, and differentiate into other cell types expressing characteristics appropriate to the tissue environment from which they were initially isolated, if some external stimulus should prompt them to do so. This capacity to continue to generate new cells has been shown under in vitro conditions for some animals in which all other cells have died, especially if the animal died from exposure to cold. Characteristics: Spore-like cells were said to remain viable in unprepared tissue (using no special preservation techniques), frozen at −86 °C and then thawed, or heated to 85 °C for more than 30 minutes. This has led researchers to try to revitalize spore-like cells from tissue samples of frozen carcasses deposited in permafrost for decades (frozen walrus meat more than 100 years old, and mammoth and bison remains in Alaska estimated to be 50,000 years old). Vacanti et al. believed that these unique cells lie dormant until activated by injury or disease, and that they have the potential to regenerate tissues lost to disease or damage. Because a cell size of less than 5 micrometers seems rather small to contain the entire human genome, the authors speculated on the "concept of a minimal genome" for these cells. Later work: Charles Vacanti continued to work on these cells when he moved to Harvard, including with thoracic surgeon Koji Kojima, who identified them in lung tissue. Working with a graduate student, Haruko Obokata, in his lab at Harvard from 2008, Vacanti later refined this theory to suggest that stress or injury could actually trigger the development of pluripotency in somatic cells. He first proposed this to Obokata and Masayuki Yamato at a conference in Florida in 2010; Yamato had independently come to the same conclusion. Obokata returned to Japan and continued this work at RIKEN. Vacanti presented these results in July 2012 at the Society of Cardiovascular Anesthesiologists conference, and then in January 2014 the journal Nature published two articles suggesting that a simple acid treatment could cause mouse blood cells to become pluripotent. The Boston Globe reported that "His discovery is a reminder that as specialized as science is, sometimes, a little ignorance may be a virtue. A stem-cell expert would probably never have even bothered to try the experiment Vacanti has been pursuing, on and off, since the late 1990s."
Both STAP articles were retracted in July 2014 after an investigation by RIKEN concluded that the data were fabricated. Researcher Mariusz Ratajczak has linked spore-like cells to his idea of very small embryonic-like stem cells, also proposed to be very small adult stem cells.
**Bicategory** Bicategory: In mathematics, a bicategory (or a weak 2-category) is a concept in category theory used to extend the notion of category to handle the cases where the composition of morphisms is not (strictly) associative, but only associative up to an isomorphism. The notion was introduced in 1967 by Jean Bénabou. Bicategories may be considered as a weakening of the definition of 2-categories. A similar process for 3-categories leads to tricategories, and more generally to weak n-categories for arbitrary n. Definition: Formally, a bicategory B consists of: objects a, b, … called 0-cells; morphisms f, g, … with fixed source and target objects, called 1-cells; and "morphisms between morphisms" ρ, σ, … with fixed source and target morphisms (which should themselves have the same source and the same target), called 2-cells; with some more structure: given two objects a and b there is a category B(a, b) whose objects are the 1-cells from a to b and whose morphisms are the 2-cells between them. The composition in this category is called vertical composition. Given three objects a, b and c, there is a bifunctor ∗: B(b, c) × B(a, b) → B(a, c) called horizontal composition. The horizontal composition is required to be associative up to a natural isomorphism α between the morphisms h∗(g∗f) and (h∗g)∗f. Some more coherence axioms, similar to those needed for monoidal categories, are moreover required to hold (the main one, the pentagon identity, is written out below): a monoidal category is the same as a bicategory with one 0-cell. Example: Boolean monoidal category: Consider a simple monoidal category, such as the monoidal preorder Bool based on the monoid M = ({T, F}, ∧, T). As a category this is presented with two objects {T, F} and a single morphism g: F → T. We can reinterpret this monoid as a bicategory with a single object x (one 0-cell); this construction is analogous to the construction of a small category from a monoid. The objects {T, F} become morphisms, and the morphism g becomes a natural transformation (forming a functor category for the single hom-category B(x, x)).
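For concreteness, the main coherence condition on the associator α can be written out in the same form as the pentagon axiom of a monoidal category. Here f, g, h, k are composable 1-cells and 1_k, 1_f denote identity 2-cells; this is the standard axiom from the literature, sketched here rather than quoted from a particular source:

```latex
\alpha_{k,\,h,\,g\ast f}\circ\alpha_{k\ast h,\,g,\,f}
  \;=\;\left(1_{k}\ast\alpha_{h,g,f}\right)\circ\alpha_{k,\,h\ast g,\,f}\circ\left(\alpha_{k,h,g}\ast 1_{f}\right)
```

Both sides are 2-cells from ((k∗h)∗g)∗f to k∗(h∗(g∗f)); the axiom says that the two ways of re-bracketing this horizontal composite agree, and together with the triangle axiom for the unit isomorphisms it guarantees that all re-bracketings of longer composites agree as well.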
**Embryokine** Embryokine: Embryokines (from Greek embryon, "embryo", and kinein, "to set in motion") are regulatory molecules produced by the oviduct and endometrium in the reproductive tract that modulate embryonic growth and development. Embryokines include growth factors such as insulin-like growth factor 1 and activin, a transforming growth factor; cytokines such as colony-stimulating factor 2; and WNT regulatory proteins, including DKK1. Small molecules such as amino acids are also included, regulating embryonic development through the mTOR signalling pathway. Prostacyclin can activate peroxisome proliferator-activated receptor delta (PPARδ) to increase blastocyst hatching, and cannabinoids can also act to regulate implantation and development.
**SARS-CoV-2 in mink** SARS-CoV-2 in mink: Both the American mink and the European mink have shown high susceptibility to SARS-CoV-2 since the earliest stages of the COVID-19 pandemic, first in mink farms across Europe, followed by mink farms in the United States. Mortality has been extremely high among mink, with 35–55% of infected adult animals dying from COVID-19 in a study of farmed mink in the U.S. state of Utah. In November 2020, in Denmark, it was announced that all mink nationwide were being slaughtered due to reports that a mutated SARS-CoV-2 virus was being passed from mink to humans via mink farms, and that at least 12 human infections had been discovered in Northern Jutland. While the State Serum Institute (SSI, Statens Serum Institut) suggested that this mutation was no more dangerous than other coronaviruses, SSI head Kåre Mølbak warned that the mutation could impact the development and effectiveness of COVID-19 vaccines. The first known transmission of SARS-CoV-2 among wild mink was reported in Utah, which researchers believed was due to contact with infected captive mink rather than through an intermediary vector in the wild or direct human-to-mink transmission. Tracking the origin and spread of mink-related COVID variants has proven more difficult in the United States, where the reporting of outbreaks on mink farms has been voluntary, as opposed to the mandatory screening procedures introduced during outbreaks in Denmark and the Netherlands. Transmission: Due to the mink ACE2 receptor being a similar or better fit for SARS-CoV-2 compared to humans and the cramped living conditions of farm-raised animals, mink readily transmit SARS-CoV-2 to one another and develop symptoms of COVID-19. Additionally, Dutch researchers determined that the bedding materials and airborne dust on mink farms with outbreaks had also become highly contaminated. Mutations and variants: In Denmark, there have been five clusters of mink variants of SARS-CoV-2; the Danish State Serum Institute (SSI) has designated these as clusters 1–5 (Danish: cluster 1–5). In Cluster 5, also referred to as ΔFVI‑spike by the SSI, several different mutations in the spike protein of the virus have been confirmed. The specific mutations include 69–70deltaHV (a deletion of the histidine and valine residues at the 69th and 70th position in the protein), Y453F (a change from tyrosine to phenylalanine at position 453, inside the spike protein's receptor-binding domain), I692V (isoleucine to valine at position 692), M1229I (methionine to isoleucine at position 1229), and a non-conservative substitution S1147L. In North America, a mink-human spillover event in Michigan, resulting in four human infections that were largely kept from public view upon their discovery in late 2020 and only announced by the US Centers for Disease Control (CDC) in March 2021, was deemed ancestral to the Ontario WTD clade spillover event from white-tailed deer nearly a year later in Ontario, Canada. The Michigan spillback into humans was the first documented case of any animal spillback in the United States. In late 2022, scientists continued to monitor residual Delta strains, such as Delta strain AY.103, which have picked up Omicron mutations during co-infection in mink and deer and form the potential for so-called "Deltacron" spillover events. These hybrid strains could potentially combine the increased fatality rate of Delta with the enhanced transmissibility of Omicron.
**Shakes (timber)** Shakes (timber): Shakes are cracks in timber. In cut timber they generally cause a reduction in strength, and when found in a log they can result in a significant amount of waste when the log is converted to lumber. Apart from heart shakes, often found in trees felled past their best, shakes in a log have no effect on the strength of shake-free lumber obtained from it. They are often seen in oak-framed buildings, which are constructed of oak that has not been dried and therefore cracks as it dries. Owing to the immense strength of the oak beams, the cracks are not a cause for concern in a properly engineered building and are considered part of the charm of the style. Shakes (timber): In the majority of cases of shake, the underlying cause is a weakening of the wood by anaerobic bacteria that have entered the tree stem through the root system. Researchers have isolated anaerobic and facultatively anaerobic bacteria from shake surfaces, in particular anaerobes of the genus Clostridium. Research suggests that shakes develop due to natural stresses in wood that has been weakened by bacterial degradation of the middle lamella between cells. Heart shake: Heart shake is a crack in the heartwood, near the centre of the tree. It is caused by poor seasoning, or by using trees felled past maturity. Star shake: A crack or cracks propagating from near the edge of the log towards the centre, usually along the line of the medullary rays; the wood shrinks more at right angles to the medullary rays than along them, warping anything made from it. The cause is often rapid or uneven seasoning, which makes the outside of the log shrink faster than the heart. Exposure to the elements can cause star shakes, as can frost during the growth of the tree. Frost shake: Frost shake begins on the outside, where moisture from rain or other sources has penetrated and then frozen, damaging the wood on the inside. Cup or ring shake: A cup or ring shake follows the line of the annual rings. The separation of the rings is generally caused during the growth of the tree, either by a check in the growth or by bending and twisting under high winds. Thunder shake or upset: Thunder shake runs across the grain and is hard to detect until the boards are being planed. It is caused by shock to the wood, such as thunder or concussion during felling. This fault seriously weakens the timber.
**Richard Vuduc** Richard Vuduc: Richard Vuduc is a tenured professor of computer science at the Georgia Institute of Technology. His research lab, The HPC Garage, studies high-performance computing, scientific computing, parallel algorithms, modeling, and engineering. He is a member of the Association for Computing Machinery (ACM). As of 2022, Vuduc serves as Vice President of the SIAM Activity Group on Supercomputing. He has co-authored over 200 articles in peer-reviewed journals and conferences. Education: Dr. Vuduc received his Ph.D. in computer science from the University of California, Berkeley, in 2004. He received his B.S. in computer science from Cornell University in 1997. He is also an alumnus of the Thomas Jefferson High School for Science and Technology in Alexandria, Virginia. Academic career: Vuduc was a postdoctoral scholar in the Center for Advanced Scientific Computing at Lawrence Livermore National Laboratory. He has served as an associate editor of both the International Journal of High Performance Computing Applications and IEEE Transactions on Parallel and Distributed Systems. He co-chaired the Technical Papers Program of the Supercomputing (SC) Conference in 2016 and served as Vice President of the SIAM Activity Group on Supercomputing from 2016 to 2018. He also served as his department's Associate Chair and Director of its graduate (MS and Ph.D.) programs from 2013 to 2016. Major honors and awards: member of the DARPA Computer Science Study Group; recipient of an NSF CAREER award; collaborative Gordon Bell Prize (2010); Lockheed-Martin Aeronautics Company Dean's Award for Teaching Excellence (2013); best paper awards, including at the SIAM Conference on Data Mining (SDM, 2012) and the IEEE International Parallel and Distributed Processing Symposium (IPDPS, 2015). Major publications: Williams, Samuel; Oliker, Leonid; Vuduc, Richard; Shalf, John; Yelick, Katherine; Demmel, James (2007). "Optimization of sparse matrix-vector multiplication on emerging multicore platforms". Proceedings of the 2007 ACM/IEEE Conference on Supercomputing - SC '07. p. 1. doi:10.1145/1362622.1362674. ISBN 9781595937643. S2CID 1845814. Vuduc, Richard; Demmel, James W.; Yelick, Katherine A. (2005). "OSKI: A library of automatically tuned sparse matrix kernels". Journal of Physics: Conference Series. 16 (1): 521. Bibcode:2005JPhCS..16..521V. doi:10.1088/1742-6596/16/1/071. ISSN 1742-6596. Vuduc, Richard. "Model-driven autotuning of sparse matrix-vector multiply on GPUs". ACM SIGPLAN Notices. Im, Eun-Jin; Yelick, Katherine; Vuduc, Richard (February 2004). "Sparsity: Optimization Framework for Sparse Matrix Kernels". Int. J. High Perform. Comput. Appl. 18 (1): 135–158. CiteSeerX 10.1.1.137.5844. doi:10.1177/1094342004041296. ISSN 1094-3420. S2CID 2447843. Vuduc, Richard Wilson (2003). Automatic Performance Tuning of Sparse Matrix Kernels (Thesis). University of California, Berkeley. Demmel, J.; Dongarra, J.; Eijkhout, V.; Fuentes, E.; Petitet, A.; Vuduc, R.; Whaley, R. C.; Yelick, K. (February 2005). "Self-Adapting Linear Algebra Algorithms and Software". Proceedings of the IEEE. 93 (2): 293–312. CiteSeerX 10.1.1.108.7568. doi:10.1109/JPROC.2004.840848. ISSN 0018-9219. S2CID 3065125. Vuduc, Richard; Demmel, James W.; Yelick, Katherine A.; Kamil, Shoaib; Nishtala, Rajesh; Lee, Benjamin (2002). "Performance Optimizations and Bounds for Sparse Matrix-vector Multiply". Proceedings of the 2002 ACM/IEEE Conference on Supercomputing. SC '02. Los Alamitos, CA, USA: IEEE Computer Society Press. pp. 1–35.
Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis (May 2012). "A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures". Communications of the ACM. 55 (5): 101–109. doi:10.1145/2160718.2160740. ISSN 0001-0782. S2CID 2272736. Rahimian, Abtin; Lashuk, Ilya; Veerapaneni, Shravan; Chandramowlishwaran, Aparna; Malhotra, Dhairya; Moon, Logan; Sampath, Rahul; Shringarpure, Aashay; Vetter, Jeffrey; Vuduc, Richard; Zorin, Denis; Biros, George (2010). "Petascale Direct Numerical Simulation of Blood Flow on 200K Cores and Heterogeneous Architectures". 2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis. pp. 1–11. doi:10.1109/SC.2010.42. ISBN 9781424475599. S2CID 5490197. Sim, Jaewoong; Dasgupta, Aniruddha; Kim, Hyesoon; Vuduc, Richard (2012). "A performance analysis framework for identifying potential benefits in GPGPU applications". Proceedings of the 17th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming - PPoPP '12. p. 11. CiteSeerX 10.1.1.226.3542. doi:10.1145/2145816.2145819. ISBN 9781450311601. S2CID 6817445. Vuduc, Richard; Chandramowlishwaran, Aparna; Choi, Jee; Guney, Murat; Shringarpure, Aashay (2010). "On the Limits of GPU Acceleration". Proceedings of the 2nd USENIX Conference on Hot Topics in Parallelism. HotPar'10. Berkeley, CA, USA: USENIX Association. p. 13. Vuduc, Richard W.; Moon, Hyun-Jin (2005). "Fast Sparse Matrix-Vector Multiplication by Exploiting Variable Block Structure". High Performance Computing and Communications. Lecture Notes in Computer Science. Vol. 3726. Berlin, Heidelberg: Springer-Verlag. pp. 807–816. doi:10.1007/11557654_91. ISBN 978-3540290315. Park, Sangmin; Vuduc, Richard W.; Harrold, Mary Jean (2010). "Falcon". Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - ICSE '10. Vol. 1. p. 245. doi:10.1145/1806799.1806838. ISBN 9781605587196. S2CID 8744239. Vuduc, Richard; Demmel, James W.; Bilmes, Jeff A. (February 2004). "Statistical Models for Empirical Search-Based Performance Tuning". The International Journal of High Performance Computing Applications. 18 (1): 65–94. CiteSeerX 10.1.1.64.5699. doi:10.1177/1094342004041293. ISSN 1094-3420. S2CID 2563412. Yi, Qing; Seymour, Keith; You, Haihang; Vuduc, Richard; Quinlan, Dan. "POET: Parameterized Optimizations for Empirical Tuning". 2007 IEEE International Parallel and Distributed Processing Symposium. Chandramowlishwaran, A.; Knobe, K.; Vuduc, R. (April 2010). "Performance evaluation of concurrent collections on high-performance multicore computing systems". 2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS). pp. 1–12. CiteSeerX 10.1.1.169.5643. doi:10.1109/IPDPS.2010.5470404. ISBN 978-1-4244-6442-5. S2CID 1133093.
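Many of the publications above concern tuning sparse matrix-vector multiplication (SpMV), the kernel at the heart of OSKI and the Sparsity framework. For readers unfamiliar with it, here is a minimal, deliberately untuned Python sketch of SpMV in compressed sparse row (CSR) format; the variable names are ours and are not taken from OSKI or any of the cited codes:

```python
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a sparse matrix A stored in CSR form:
    values holds the nonzeros row by row, col_idx their column indices,
    and row_ptr[i]:row_ptr[i+1] delimits row i's entries."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 3x3 example: A = [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
values = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(csr_spmv(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0])))
# -> [3. 3. 9.]
```

The autotuning work cited above exists precisely because the performance of this loop nest depends heavily on the matrix's nonzero structure and on the machine, which motivates automatic selection of block sizes and data structures.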
**Student teams-achievement divisions** Student teams-achievement divisions: Student teams-achievement divisions (STAD) is a cooperative learning strategy in which small groups of learners with different levels of ability work together to accomplish a shared learning goal. It was devised by Robert Slavin and his associates at Johns Hopkins University. Student teams-achievement divisions: STAD is considered one of the most researched, simplest, and most straightforward of all cooperative learning strategies. It is grounded in well-established instructional pedagogy and is used to meet well-defined instructional objectives. Working of STAD: The students are placed in small groups or teams. The class in its entirety is presented with a lesson, and the students are subsequently tested. Individuals are graded on the team's performance. Although the tests are taken individually, students are encouraged to work together to improve the overall performance of the group. It is fundamentally teamwork, but students are also graded individually according to the contribution they make to their team. Usually in STAD students are assigned to groups of four to five members that are mixed in performance level, gender, and ethnicity. Working of STAD: The teacher teaches a lesson, and the students then work in teams to ensure that everyone has mastered the material. The students take individual quizzes on the material, during which they may not help each other. Their scores are compared to their own past averages, and points are awarded on the basis of the degree to which students meet or exceed their own earlier performance. This encourages students to take responsibility for the other members of their group as well as for themselves, so that group members at all levels are equally motivated to do their best. Slavin (1995) enumerated three main concepts of STAD: team rewards, individual accountability, and equal opportunities for success. Team rewards are certificates or other rewards given when a STAD group scores above a predetermined level; in this way a spirit of positive competition is reinforced, and all, some, or none of the groups may be rewarded, depending on how they score. In terms of individual accountability, the individual learning of each group member determines the success of the team. Working of STAD: STAD has been used in a wide variety of subjects, from mathematics and language arts to social science, and from the 2nd grade through college. It is most appropriate for teaching well-defined objectives, though it can be adapted to less well-defined ones by incorporating more open-ended assessments, such as essays or performances. In STAD, students are assigned to four- or five-member heterogeneous groups. Once these assignments are made, a four-step cycle is initiated: (i) teach, (ii) team study, (iii) test, and (iv) recognition. Working of STAD: Teach: In the teaching stage, the teacher presents material, usually in a lecture-discussion format. Students should be told what it is they are going to learn and why it is important. Team study: In the team-study stage, group members work cooperatively with teacher-provided worksheets and answer sheets. Test: In the testing stage, each student individually takes a quiz.
The teacher grades the quiz and notes both the current scores and the improvement over previous quizzes. Recognition: Each team receives a recognition award depending on its average improvement score. For example, teams that average 15 to 19 improvement points receive a GOOD TEAM certificate, teams that average 20 to 24 improvement points receive a GREAT TEAM certificate, and teams that average 25 to 30 improvement points receive a SUPER TEAM certificate (illustrated in the sketch below). Components of STAD: class presentation; teams; quizzes; individual improvement scores; team recognition. Advantages: a group has greater information resources than an individual; a group can employ a greater number of creative problem-solving methods; group members gain a better understanding of themselves as they interact with one another; and working in a group fosters learning and comprehension of the ideas discussed. Disadvantages: an individual group member may dominate the discussion; some group members may rely too much on others to get the job done; and group members may pressure others to conform to the majority opinion.
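The certificate thresholds quoted above amount to a simple mapping from a team's average improvement points to a recognition level. A minimal Python sketch (the function name and the example scores are ours, purely for illustration):

```python
from statistics import mean

def team_certificate(improvement_points):
    """Map a team's individual improvement points to a recognition level,
    using the thresholds quoted above (15-19 GOOD, 20-24 GREAT, 25-30 SUPER).
    How to treat averages falling between the quoted bands (e.g. 19.5) is
    our assumption; the article does not specify."""
    avg = mean(improvement_points)
    if avg >= 25:
        return "SUPER TEAM"
    if avg >= 20:
        return "GREAT TEAM"
    if avg >= 15:
        return "GOOD TEAM"
    return None  # no certificate awarded

# Example: a four-member team with made-up improvement points.
print(team_certificate([20, 25, 15, 30]))  # average 22.5 -> "GREAT TEAM"
```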
**CX-516** CX-516: CX-516 is an ampakine and nootropic that acts as an AMPA receptor positive allosteric modulator; it was under development by a collaboration between Cortex, Shire, and Servier. It was studied as a potential treatment for Alzheimer's disease under the brand name Ampalex and was also examined as a treatment for ADHD. CX-516: CX-516 was the first ampakine compound developed by Cortex, and while it showed good in vitro activity and positive results in animal tests, the human trials proved disappointing, due mainly to low potency and a short half-life. However, CX-516 is still widely used in animal research into ampakine drugs and is the standard reference compound against which newer, more potent drugs of this class, such as farampator and CX-717, are compared.
**Forelock** Forelock: The forelock or foretop is the part of a horse's mane that grows from the animal's poll and falls forward between the ears and onto the forehead. Some breeds, particularly pony breeds, have a naturally thick forelock, while other breeds, such as many Thoroughbreds, have a thinner one. Primitive wild equines such as Przewalski's horse, with a naturally short, upright mane, generally have no hair falling forward onto the forehead, and other equids, such as donkeys and zebras, have no discernible forelock at all. Purpose: Little research has been published on the purpose of the forelock. However, a thick forelock is more prevalent in breeds developed in the cold, wet climates of northern Europe and is minimal in wild horse subspecies and other equine species adapted to hot, dry climates, such as the zebra and donkey. It tends to be fine and thin in many oriental horse breeds, even those that otherwise have long manes and tails. Thus, it may play a role in temperature regulation and in keeping pests at bay. Grooming: In competition the forelock is braided for some events, such as those in the dressage and hunt seat disciplines. Conversely, some breeds, such as the Andalusian, are usually shown with a long, full forelock that is never braided. Other breeds may confine the forelock with rubber bands and anchor it to the bridle. The forelock may also be roached (shaved off) for some competitions, such as polo. Human use: "Forelock" is also slang for a human hairstyle popular in the 1980s. In the 19th century, it was common to salute another person by "tugging the forelock" (see Salute).
**Reed mat (craft)** Reed mat (craft): Reed mats are handmade mats of plaited reed or other plant material. East Asia: In Japan, a traditional reed mat is the tatami (畳). Tatami are covered with a weft-faced weave of soft rush (藺草, igusa) (common rush) on a warp of hemp or weaker cotton. There are four warps per weft shed, two at each end (or sometimes two per shed, one at each end, to cut costs). The doko (core) is traditionally made from sewn-together rice straw, but contemporary tatami sometimes have compressed wood-chip boards or extruded polystyrene foam in their cores, instead or as well. The long sides are usually edged (縁, heri) with brocade or plain cloth, although some tatami have no edging. Southeast Asia: In the Philippines, woven reed mats are called banig. They are used as sleeping mats or floor mats, and were also historically used as sails. They come in many different weaving styles and typically have colorful geometric patterns unique to the ethnic group that created them. They are made from buri palm leaves, pandan leaves, rattan, or various kinds of native reeds known by local names such as tikog, sesed (Fimbristylis miliacea), rono, and bamban. In Thailand and Cambodia, the mats are produced by plaiting reeds, strips of palm leaf, or some other easily available local plant. The supple mats made by this process of weaving without a loom are widely used in Thai homes. These mats are also now being made into shopping bags, place mats, and decorative wall hangings. Southeast Asia: One popular kind of Thai mat is made from a reed known as kachud, which grows in the southern marshes. After the reeds are harvested, they are steeped in mud, which toughens them and prevents them from becoming brittle. They are then dried in the sun for a time and pounded flat, after which they are ready to be dyed and woven into mats of various sizes and patterns. Southeast Asia: Other mats are produced in different parts of Thailand, most notably in the eastern province of Chanthaburi. Durable as well as attractive, they are plaited entirely by hand with an intricacy that makes the best resemble finely woven fabrics. South Asia: In India, reed mats (called paay in Tamil or chatai in Hindi) are used as cooling and eco-friendly floor coverings.
**Sommerfeld parameter** Sommerfeld parameter: The Sommerfeld parameter η, named after Arnold Sommerfeld, is a dimensionless quantity used in nuclear astrophysics in the calculation of reaction rates between two nuclei; it also appears in the definition of the astrophysical S-factor. It is defined as $$\eta = \frac{Z_1 Z_2 e^2}{4\pi\epsilon_0 \hbar v} = \alpha Z_1 Z_2 \sqrt{\frac{\mu c^2}{2E}},$$ where e is the elementary charge, Z1 and Z2 are the atomic numbers of the two interacting nuclides, v is the magnitude of the relative incident velocity in the center-of-mass frame, α is the dimensionless fine-structure constant, c is the speed of light, μ is the reduced mass of the two nuclides of interest, and E is the center-of-mass kinetic energy. Sommerfeld parameter: One of its best-known applications is in the exponent of the Gamow factor P (also known as the penetrability factor), $$P = \exp(-2\pi\eta),$$ which is the probability of an s-wave nuclide penetrating the Coulomb barrier, according to the WKB approximation. This factor is particularly helpful in characterizing the nuclear contribution to low-energy nucleon-scattering cross-sections, namely through the astrophysical S-factor. Sommerfeld parameter: One of the first articles in which the Sommerfeld parameter appeared was published in 1967.
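To make the definition concrete, here is a short Python sketch that evaluates η from the second form above and the corresponding Gamow factor; the proton-proton numbers in the example are our own illustrative inputs, not taken from the article:

```python
import math

ALPHA = 7.2973525693e-3  # fine-structure constant (dimensionless)

def sommerfeld_eta(z1, z2, mu_c2_mev, e_mev):
    """eta = alpha * Z1 * Z2 * sqrt(mu c^2 / (2 E)), with the reduced-mass
    rest energy mu c^2 and the center-of-mass energy E both in MeV."""
    return ALPHA * z1 * z2 * math.sqrt(mu_c2_mev / (2.0 * e_mev))

def gamow_factor(eta):
    """Probability of an s-wave pair penetrating the Coulomb barrier."""
    return math.exp(-2.0 * math.pi * eta)

# Example: two protons (Z1 = Z2 = 1, mu c^2 = m_p c^2 / 2 ~ 469.1 MeV)
# at a center-of-mass energy of 1 keV.
eta = sommerfeld_eta(1, 1, 469.1, 1e-3)
print(f"eta = {eta:.3f}, Gamow factor = {gamow_factor(eta):.2e}")
# -> eta = 3.534, Gamow factor ~ 2e-10
```

The tiny penetrability at keV energies is the reason stellar fusion rates are so sensitive to temperature, and why the S-factor is introduced to factor this steep exponential out of measured cross-sections.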
**Symmetry** Symmetry: Symmetry (from Ancient Greek συμμετρία (summetría) 'agreement in dimensions, due proportion, arrangement') in everyday language refers to a sense of harmonious and beautiful proportion and balance. In mathematics, the term has a more precise definition and is usually used to refer to an object that is invariant under some transformations, such as translation, reflection, rotation, or scaling. Although these two meanings of the word can sometimes be told apart, they are intricately related, and hence are discussed together in this article. Symmetry: Mathematical symmetry may be observed with respect to the passage of time; as a spatial relationship; through geometric transformations; through other kinds of functional transformations; and as an aspect of abstract objects, including theoretic models, language, and music. This article describes symmetry from three perspectives: in mathematics, including geometry, the most familiar type of symmetry for many people; in science and nature; and in the arts, covering architecture, art, and music. Symmetry: The opposite of symmetry is asymmetry, which refers to the absence or violation of symmetry. In mathematics: In geometry: A geometric shape or object is symmetric if it can be divided into two or more identical pieces that are arranged in an organized fashion. This means that an object is symmetric if there is a transformation that moves individual pieces of the object but doesn't change the overall shape. The type of symmetry is determined by the way the pieces are organized, or by the type of transformation: An object has reflectional symmetry (line or mirror symmetry) if there is a line (or in 3D a plane) going through it which divides it into two pieces that are mirror images of each other. In mathematics: An object has rotational symmetry if the object can be rotated about a fixed point (or in 3D about a line) without changing the overall shape. An object has translational symmetry if it can be translated (moving every point of the object by the same distance) without changing its overall shape. An object has helical symmetry if it can be simultaneously translated and rotated in three-dimensional space along a line known as a screw axis. An object has scale symmetry if it does not change shape when it is expanded or contracted. Fractals also exhibit a form of scale symmetry, where smaller portions of the fractal are similar in shape to larger portions. Other symmetries include glide reflection symmetry (a reflection followed by a translation) and rotoreflection symmetry (a combination of a rotation and a reflection). In logic: A dyadic relation R ⊆ S × S is symmetric if for all elements a, b in S, whenever it is true that Rab, it is also true that Rba. Thus, the relation "is the same age as" is symmetric, for if Paul is the same age as Mary, then Mary is the same age as Paul. In propositional logic, symmetric binary logical connectives include and (∧, or &), or (∨, or |), and if and only if (↔), while the connective if (→) is not symmetric. Other symmetric logical connectives include nand (not-and, or ⊼), xor (not-biconditional, or ⊻), and nor (not-or, or ⊽). Other areas of mathematics: Generalizing from geometrical symmetry in the previous section, one can say that a mathematical object is symmetric with respect to a given mathematical operation if, when applied to the object, this operation preserves some property of the object. The set of operations that preserve a given property of the object form a group, as the sketch below illustrates.
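A minimal Python sketch of that closing claim, taking the square as the object: each symmetry is encoded as a permutation of the four corners (numbered 0-3 counterclockwise; the encoding and names are ours), and closing two generators under composition yields a set of eight elements, the dihedral group D4:

```python
# Corner permutations of a square, corners numbered 0-3 counterclockwise.
identity = (0, 1, 2, 3)
rot90 = (1, 2, 3, 0)    # rotation by 90 degrees
reflect = (1, 0, 3, 2)  # mirror swapping corners 0<->1 and 2<->3

def compose(p, q):
    """The symmetry obtained by applying q first, then p."""
    return tuple(p[q[i]] for i in range(4))

# Close the generators under composition: keep composing pairs until no
# new permutations appear. The result is the full symmetry group.
group = {identity, rot90, reflect}
while True:
    new = {compose(g, h) for g in group for h in group} - group
    if not new:
        break
    group |= new

print(len(group))  # -> 8: the dihedral group D4, symmetries of the square
```

Composition never leaves the set, the identity is in it, and every element has an inverse in it, which is exactly the group property the text describes.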
In general, every kind of structure in mathematics will have its own kind of symmetry. Examples include even and odd functions in calculus, symmetric groups in abstract algebra, symmetric matrices in linear algebra, and Galois groups in Galois theory. In statistics, symmetry also manifests as symmetric probability distributions, and as skewness, the asymmetry of distributions. In science and nature: In physics: Symmetry in physics has been generalized to mean invariance, that is, lack of change, under any kind of transformation, for example arbitrary coordinate transformations. This concept has become one of the most powerful tools of theoretical physics, as it has become evident that practically all laws of nature originate in symmetries. In fact, this role inspired the Nobel laureate P. W. Anderson to write in his widely read 1972 article "More Is Different" that "it is only slightly overstating the case to say that physics is the study of symmetry." See Noether's theorem (which, in greatly simplified form, states that for every continuous mathematical symmetry there is a corresponding conserved quantity such as energy or momentum; a conserved current, in Noether's original language); and also Wigner's classification, which says that the symmetries of the laws of physics determine the properties of the particles found in nature. Important symmetries in physics include continuous symmetries and discrete symmetries of spacetime; internal symmetries of particles; and supersymmetry of physical theories. In science and nature: In biology: In biology, the notion of symmetry is mostly used explicitly to describe body shapes. Bilateral animals, including humans, are more or less symmetric with respect to the sagittal plane, which divides the body into left and right halves. Animals that move in one direction necessarily have upper and lower sides, head and tail ends, and therefore a left and a right. The head becomes specialized with a mouth and sense organs, and the body becomes bilaterally symmetric for the purpose of movement, with symmetrical pairs of muscles and skeletal elements, though internal organs often remain asymmetric. Plants and sessile (attached) animals such as sea anemones often have radial or rotational symmetry, which suits them because food or threats may arrive from any direction. Fivefold symmetry is found in the echinoderms, the group that includes starfish, sea urchins, and sea lilies. In biology, the notion of symmetry is also used as in physics, that is to say, to describe the properties of the objects studied, including their interactions. A remarkable property of biological evolution is the change of symmetry corresponding to the appearance of new parts and dynamics. In science and nature: In chemistry: Symmetry is important to chemistry because it undergirds essentially all specific interactions between molecules in nature (i.e., via the interaction of natural and human-made chiral molecules with inherently chiral biological systems). The control of the symmetry of molecules produced in modern chemical synthesis contributes to the ability of scientists to offer therapeutic interventions with minimal side effects. A rigorous understanding of symmetry explains fundamental observations in quantum chemistry, and in the applied areas of spectroscopy and crystallography. The theory and application of symmetry to these areas of physical science draws heavily on the mathematical area of group theory.
In science and nature: In psychology and neuroscience: For a human observer, some symmetry types are more salient than others; the most salient is reflection about a vertical axis, like that present in the human face. Ernst Mach made this observation in his book The Analysis of Sensations (1897), and it implies that perception of symmetry is not a general response to all types of regularities. Both behavioural and neurophysiological studies have confirmed the special sensitivity to reflection symmetry in humans and also in other animals. Early studies within the Gestalt tradition suggested that bilateral symmetry was one of the key factors in perceptual grouping. This is known as the Law of Symmetry. The role of symmetry in grouping and figure/ground organization has been confirmed in many studies. For instance, detection of reflectional symmetry is faster when this is a property of a single object. Studies of human perception and psychophysics have shown that detection of symmetry is fast, efficient, and robust to perturbations. For example, symmetry can be detected with presentations between 100 and 150 milliseconds. More recent neuroimaging studies have documented which brain regions are active during perception of symmetry. Sasaki et al. used functional magnetic resonance imaging (fMRI) to compare responses for patterns with symmetrical or random dots. Strong activity was present in extrastriate regions of the occipital cortex but not in the primary visual cortex. The extrastriate regions included V3A, V4, V7, and the lateral occipital complex (LOC). Electrophysiological studies have found a late posterior negativity that originates from the same areas. In general, a large part of the visual system seems to be involved in processing visual symmetry, and these areas involve networks similar to those responsible for detecting and recognising objects. In social interactions: People observe the symmetrical nature, often including asymmetrical balance, of social interactions in a variety of contexts. These include assessments of reciprocity, empathy, sympathy, apology, dialogue, respect, justice, and revenge. Reflective equilibrium is the balance that may be attained through deliberative mutual adjustment among general principles and specific judgments. In social interactions: Symmetrical interactions send the moral message "we are all the same", while asymmetrical interactions may send the message "I am special; better than you." Peer relationships, such as those governed by the golden rule, are based on symmetry, whereas power relationships are based on asymmetry. Symmetrical relationships can to some degree be maintained by simple game-theoretic strategies seen in symmetric games such as tit for tat. In the arts: There exists a list of journals and newsletters known to deal, at least in part, with symmetry and the arts. In the arts: In architecture: Symmetry finds its way into architecture at every scale, from the overall external views of buildings such as Gothic cathedrals and The White House, through the layout of individual floor plans, down to the design of individual building elements such as tile mosaics. Islamic buildings such as the Taj Mahal and the Lotfollah mosque make elaborate use of symmetry both in their structure and in their ornamentation.
Moorish buildings like the Alhambra are ornamented with complex patterns made using translational and reflection symmetries as well as rotations. It has been said that only bad architects rely on a "symmetrical layout of blocks, masses and structures"; Modernist architecture, starting with the International Style, relies instead on "wings and balance of masses". In the arts: In pottery and metal vessels: Since the earliest uses of pottery wheels to help shape clay vessels, pottery has had a strong relationship to symmetry. Pottery created using a wheel acquires full rotational symmetry in its cross-section, while allowing substantial freedom of shape in the vertical direction. Upon this inherently symmetrical starting point, potters from ancient times onwards have added patterns that modify the rotational symmetry to achieve visual objectives. In the arts: Cast metal vessels lacked the inherent rotational symmetry of wheel-made pottery, but otherwise provided a similar opportunity to decorate their surfaces with patterns pleasing to those who used them. The ancient Chinese, for example, used symmetrical patterns in their bronze castings as early as the 17th century BC. Bronze vessels exhibited both a bilateral main motif and a repetitive translated border design. In the arts: In carpets and rugs: A long tradition of the use of symmetry in carpet and rug patterns spans a variety of cultures. American Navajo Indians used bold diagonals and rectangular motifs. Many Oriental rugs have intricate reflected centers and borders that translate a pattern. Not surprisingly, rectangular rugs typically have the symmetries of a rectangle, that is, motifs that are reflected across both the horizontal and vertical axes (see Klein four-group § Geometry). In the arts: In quilts: As quilts are made from square blocks (usually 9, 16, or 25 pieces to a block) with each smaller piece usually consisting of fabric triangles, the craft lends itself readily to the application of symmetry. In the arts: In other arts and crafts: Symmetries appear in the design of objects of all kinds. Examples include beadwork, furniture, sand paintings, knotwork, masks, and musical instruments. Symmetries are central to the art of M. C. Escher and to the many applications of tessellation in art and craft forms such as wallpaper, ceramic tilework (as in Islamic geometric decoration), batik, ikat, carpet-making, and many kinds of textile and embroidery patterns. Symmetry is also used in designing logos. By creating a logo on a grid and using the theory of symmetry, designers can organize their work, create a symmetric or asymmetrical design, determine the space between letters, determine how much negative space is required in the design, and decide how to accentuate parts of the logo to make it stand out. In the arts: In music: Symmetry is not restricted to the visual arts. Its role in the history of music touches many aspects of the creation and perception of music. Musical form: Symmetry has been used as a formal constraint by many composers, as in the arch (swell) form (ABCBA) used by Steve Reich, Béla Bartók, and James Tenney. In classical music, Bach used the symmetry concepts of permutation and invariance. In the arts: Pitch structures: Symmetry is also an important consideration in the formation of scales and chords, traditional or tonal music being made up of non-symmetrical groups of pitches, such as the diatonic scale or the major chord.
Symmetrical scales or chords, such as the whole-tone scale, the augmented chord, or the diminished seventh chord (diminished-diminished seventh), are said to lack direction or a sense of forward motion, to be ambiguous as to key or tonal center, and to have a less specific diatonic functionality. However, composers such as Alban Berg, Béla Bartók, and George Perle have used axes of symmetry and/or interval cycles in a way analogous to keys or non-tonal tonal centers. George Perle explains: "C–E, D–F♯, [and] E♭–G, are different instances of the same interval … the other kind of identity. … has to do with axes of symmetry. C–E belongs to a family of symmetrically related dyads as follows:" Thus, in addition to being part of the interval-4 family, C–E is also part of the sum-4 family (with C equal to 0); the sketch at the end of this article makes the two families concrete. In the arts: Interval cycles are symmetrical and thus non-diatonic. However, a seven-pitch segment of C5 (the cycle of fifths, which is enharmonic with the cycle of fourths) will produce the diatonic major scale. Cyclic tonal progressions in the works of Romantic composers such as Gustav Mahler and Richard Wagner form a link with the cyclic pitch successions in the atonal music of Modernists such as Bartók, Alexander Scriabin, Edgard Varèse, and the Vienna school. At the same time, these progressions signal the end of tonality. The first extended composition consistently based on symmetrical pitch relations was probably Alban Berg's Quartet, Op. 3 (1910). In the arts: Equivalency: Tone rows or pitch-class sets that are invariant under retrograde are horizontally symmetrical; those invariant under inversion are vertically symmetrical. See also Asymmetric rhythm. In aesthetics: The relationship of symmetry to aesthetics is complex. Humans find bilateral symmetry in faces physically attractive; it indicates health and genetic fitness. Opposed to this is the tendency for excessive symmetry to be perceived as boring or uninteresting. Rudolf Arnheim suggested that people prefer shapes that have some symmetry, and enough complexity to make them interesting. In literature: Symmetry can be found in various forms in literature, a simple example being the palindrome, where a brief text reads the same forwards or backwards. Stories may have a symmetrical structure, such as the rise-and-fall pattern of Beowulf.
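Perle's two kinds of dyad identity reduce to arithmetic modulo 12. Here is a minimal Python sketch, numbering pitch classes from C = 0 as the passage on pitch structures does (the function names and note spellings are ours):

```python
# Pitch classes 0-11 with C = 0.
NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def interval_family(n):
    """All dyads whose interval (difference mod 12) is n;
    n = 4 contains C-E, D-F#, Eb-G, and so on."""
    return [(a, (a + n) % 12) for a in range(12)]

def sum_family(n):
    """All dyads whose pitch-class sum mod 12 is n: Perle's axis-of-symmetry
    identity, since every pair in the family mirrors around the same axis."""
    return [(a, (n - a) % 12) for a in range(12)]

# C-E (0, 4) belongs to both the interval-4 family and the sum-4 family:
print([(NAMES[a], NAMES[b]) for a, b in sum_family(4)][:4])
# -> [('C', 'E'), ('C#', 'Eb'), ('D', 'D'), ('Eb', 'C#')]
```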