**Deprecation**
Deprecation:
In several fields, especially computing, deprecation is the discouragement of use of some terminology, feature, design, or practice, typically because it has been superseded or is no longer considered efficient or safe, without completely removing it or prohibiting its use. Typically, deprecated materials are not completely removed, in order to preserve legacy compatibility or to provide a fallback in case the new methods fail in an unusual scenario.
Deprecation:
It can also imply that a feature, design, or practice will be removed or discontinued entirely in the future.
Etymology:
In general English usage, the infinitive "to deprecate" means "to express disapproval of (something)". It derives from the Latin verb deprecari, meaning "to ward off (a disaster) by prayer".
An early documented usage of "deprecate" in this sense is in Usenet posts in 1984, referring to obsolete features in 4.2BSD and the C programming language. An expanded definition of "deprecate" was cited in the Jargon File in its 1991 revision, and similar definitions are found in commercial software documentation from 2014 and 2023.
Software:
While a deprecated software feature remains in the software, its use may raise warning messages recommending alternative practices. Deprecated status may also indicate the feature will be removed in the future. Features are deprecated, rather than immediately removed, to provide backward compatibility and to give programmers time to bring affected code into compliance with the new standard.
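In practice, using a deprecated feature typically triggers such a warning at the call site. A minimal Python sketch, assuming hypothetical function names `old_api` and `new_api`:

```python
import warnings

def new_api():
    """The recommended replacement."""
    return "result"

def old_api():
    """Deprecated: retained only for backward compatibility."""
    warnings.warn(
        "old_api() is deprecated; use new_api() instead",
        DeprecationWarning,
        stacklevel=2,  # point the warning at the caller, not this wrapper
    )
    return new_api()  # still works, so existing code keeps running
```

Existing callers of `old_api()` keep working but see a `DeprecationWarning`, giving them time to migrate before the feature is removed.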
Software:
Among the most common reasons for deprecation are: The feature has been replaced by a more powerful alternative feature. For instance, the Linux kernel contains two modules to communicate with Windows networks: smbfs and cifs. The latter provides better security, supports more protocol features, and integrates better with the rest of the kernel. Since the inclusion of cifs, smbfs has been deprecated.
Software:
The feature contains a design flaw, frequently a security flaw, and so should be avoided, but existing code depends upon it. The simple C standard function gets() is an example, because using this function can introduce a buffer overflow into the program that uses it. The Java API methods Thread.stop, .suspend and .resume are further examples.
Software:
The feature is considered extraneous, and will be removed in the future in order to simplify the system as a whole. Early versions of the Web markup language HTML included a FONT element to allow page designers to specify the font in which text should be displayed. With the release of Cascading Style Sheets and HTML 4.0, the FONT element became extraneous, and detracted from the benefits of noting structural markup in HTML and graphical formatting in CSS. Thus, the FONT element was deprecated in the Transitional HTML 4.0 standard, and eliminated in the Strict variant.
Software:
A future version of the software will make major structural changes, making it impossible (or impractical) to support older features. For instance, when Apple Inc. planned the transition from Mac OS 9 to Mac OS X, it created a subset of the older system's API which would support most programs with minor changes: the Carbon library (that has since been deprecated), available in both Mac OS 9 and Mac OS X. Programmers who were, at the time, chiefly using Mac OS 9, could ensure that their programs would run natively on Mac OS X by using only the API functions supported in Carbon. Other Mac OS 9 functions were deprecated, and were never supported natively in Mac OS X.
Software:
Standardization or increased consistency in naming. Projects that are developed over long periods of time, or by multiple individuals or groups, can contain inconsistencies in the naming of various items. These might result from a lack of foresight, changes in nomenclature over time, or personal, regional, or educational differences in terminology. Since merely renaming an item would break backwards compatibility, the existing name must be left in place. The original name will likely remain indefinitely, but will be deprecated to encourage use of the newer, more consistent naming convention. An example would be an API that alternately used the spelling "color" and "colour". Standardization would result in the use of only one of the regional spellings throughout, and all occurrences of the other spelling would be deprecated.
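The color/colour scenario above can be sketched as a deprecated alias that forwards to the standardized name while warning the caller. A minimal sketch; the `Palette` class and its methods are hypothetical:

```python
import warnings

class Palette:
    def get_color(self, name):
        """Standardized spelling, the recommended entry point."""
        return {"red": "#ff0000"}.get(name)

    def get_colour(self, name):
        """Deprecated alias kept so existing callers do not break."""
        warnings.warn(
            "get_colour() is deprecated; use get_color() instead",
            DeprecationWarning,
            stacklevel=2,
        )
        return self.get_color(name)  # delegate to the canonical name
```

Both spellings return the same result, so backward compatibility is preserved while the warning steers new code toward the single standardized name.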
Software:
A feature that was once available only independently is now combined with its co-feature. An example is VLC media player: VLC originally stood for "VideoLAN Client", and a separate "VideoLAN Server" was available as its co-feature. Once both the client and server were distributed in the same package, obtaining one independently became impractical.
Other usage:
A building code example is the use of ungrounded ("2-prong") electrical receptacles. Over time, these older devices were widely deprecated in favor of safer grounded ("3-prong") receptacles. The older, ungrounded receptacles were still permitted in many places by "grandfathering" them in existing electrical wiring, while prohibiting them for new installations. Thus, though ungrounded receptacles may still be available for legal purchase in a location where they are obsolete, they would generally be intended only for repairs to existing older electrical installations.
Other usage:
In writing and editing, usage of a word may be deprecated because it is ambiguous, confusing, or offensive to some readers. For example, the words sanction and inflammable may be misinterpreted because they have auto-antonymic or self-contradictory meanings; writing style guides often recommend substituting other words that are clearly understood and unambiguous. Some word usages that have acquired different connotations over time, such as gay or colored, may be deprecated as obsolete in formal writing.
Other usage:
In technical standards, use of a certain clause may be discouraged or superseded by new clauses. As an example, in the Ethernet standard IEEE 802.3-2012, Clause 5 (Layer Management) is "deprecated" by Clause 30 (Management), except for 5.2.4.
Other usage:
Deprecation may also occur when a technical term becomes obsolete, either through change or supersession. An example from paleontology is the formerly deprecated term Brontosaurus: before being re-recognized as a distinct genus, it was considered a popular but deprecated name for the genus Apatosaurus. Examples of deprecated terms from medicine include consumption (tuberculosis), grippe (influenza), and apoplexy (stroke). In chemical nomenclature, the international standards organization IUPAC (International Union of Pure and Applied Chemistry) has deprecated the term "methyl ethyl ketone" and now recommends the term "ethyl methyl ketone" instead.
**Fdm (software)**
Fdm (software):
fdm (fetch/filter and deliver mail) is a mail delivery agent and email filtering software for Unix-like operating systems, similar to fetchmail and procmail. It was started in 2006 by Nicholas Marriott who later also started tmux in 2007.
Adoption:
fdm is available as a package in many Unix-like operating systems and has been included in the OpenBSD ports tree since 2007-01-18. In 2014, the last maintainer of procmail posted a message to an OpenBSD mailing list himself suggesting that the procmail port be removed; a well-known OpenBSD ports maintainer had suggested that fdm is the natural alternative. (The procmail port, however, was not removed and remained in place as of 2020.)
Adoption:
fdm is listed on the OpenBSD Innovations page, in the section of projects maintained by OpenBSD developers outside of OpenBSD.
**Proto.io**
Proto.io:
Proto.io is an application prototyping platform launched in 2011 and developed by PROTOIO Inc. Originally designed to prototype on mobile devices, Proto.io has expanded to allow users to prototype apps for anything with a screen interface, including Smart TVs, digital camera interfaces, cars, airplanes, and gaming consoles. Proto.io utilizes a drag and drop user interface (UI) and does not require coding.
History:
Since its launch in 2011, there have been six versions of Proto.io released.
History:
Version 1: In 2011, the 100% web-based Proto.io tool became available online. The web-based environment allowed users to create a project for either the iPad or iPhone. After a user created a few screens for a developing app, Proto.io could link those pages together with interactive actions customary on handheld devices, such as clicks, taps, tap-and-holds, and swipes. With the platform, users could also create reusable templates into which prepackaged and editable elements could be dragged. Once the user had completed the prototype, Proto.io could publish and preview the finished product not only in the web browser but also on the actual mobile device.
History:
Version 2: Proto.io V2 was released in early 2012 and expanded the supported mobile devices to accommodate the Android platform, including Android smartphones and tablets. The platform also came with a newer user interface. Proto.io V2 also added collaboration features such as comments and annotations, as well as export-to-HTML functionality.
History:
Version 3: On September 28, 2012, with the release of version 3 of the platform, Proto.io became the first prototyping tool to allow users to prototype on almost any device with a screen interface, and the first mobile prototyping tool to support fully featured animations of user interface items within a prototype screen. The included icon gallery contains thousands of SVG icons for use as buttons, lists, and tab bars. Proto.io V3 also supports web fonts, which gives the user access to all available online fonts.
History:
Version 4: The fourth version of Proto.io was launched in April 2013. This version was not as heavily focused on introducing new individual features, but rather aimed to improve the tool's user interface and overall efficiency with a completely revamped editor.
History:
Version 5: October 2013 brought one of the major releases of Proto.io. With version 5, users gained the capability to build HTML5 interactive animations more intuitively with the new Animation States & Timelines feature. This release also introduced a wide variety of other new functionality, such as easier Drag & Rotate, Variables and Item property interactions, and new touch events.
History:
Version 6: The most recent release of Proto.io was launched in July 2016. The entire interface was redesigned, making the most-used tools easily accessible. Additionally, animations became replayable directly in the editor for the first time, making it easier to finalize the motion design process. Adding and editing interactions was also simplified with the introduction of an Interaction Wizard and Interaction Design Patterns. Single-click sharing and exporting also became available in the same release.
**BioGeM**
BioGeM:
The BioGeM Institute (Biologia e Genetica Molecolare, "Biology and Molecular Genetics") is a nonprofit consortium formed by the National Research Council (CNR), the University of Naples "Federico II", the LUMSA of Rome, the Trieste AREA Science Park, the University of Udine, the Stazione Zoologica Anton Dohrn of Naples, the University of Sannio in Benevento, the University of Foggia, the University of Milan Bicocca, the Second University and the Suor Orsola Benincasa University of Naples, the Chamber of Commerce of Avellino, and the local mountain community of the Ufita Valley. BioGeM was inaugurated in 2006 by Nobel laureate Rita Levi-Montalcini and comprises research laboratories, services, and teaching facilities. Scientific research, led by the "Gaetano Salvatore" Genetics Research Institute (IRGS), takes place within the Genetics and Translational Medicine (GTM) department and is aimed at understanding biological mechanisms and identifying the genes involved in the development and proliferation of various human diseases; the research is based on animal models raised in a high-level laboratory after approval by the ethics committee for animal experimentation. Research primarily targets cancer and degenerative diseases, often in collaboration with international groups. Pharmacological research is carried out by a dedicated department, Medicinal Investigational Research (MIR), whose tasks cover the experimental verification of new drugs and the release of the related certifications, while training activities take place in a specific functional area named the Life and Mind Science School (LIMSS).
BioGeM:
Since 2010, Biogem has yearly hosted a meeting named Le due culture ("The two cultures"), which over the years has been attended by a number of Nobel laureates and also (in 2018) the president of Italy, Sergio Mattarella. The objective of the meeting is to find common ground between humanistic and scientific knowledge. The institute also hosts an office of Sopra Steria, a French company specialized in consulting and digital services. Since 2013, Biogem has also had a forensic genetics laboratory.
Museum:
Within the research center there is also Biogeo, a museum dedicated to geological history through the eras from the Precambrian to the Jurassic. Founded in collaboration with the National Institute of Geophysics and Volcanology, the museum showcases and illustrates the origin and development of life on Earth, with particular regard to the relationship between genome, environment, and evolution. Since 2012, Biogeo has been equipped with a tetrasphere, designed by the Italian physicist Paco Lanciano.
**Phoenix (ATC)**
Phoenix (ATC):
PHOENIX is a multipurpose Radar Data Processing System (RDPS) / Surveillance Data Processing System (SDPS), a.k.a. tracker, used for many ATC applications in the Deutsche Flugsicherung (DFS), and has been continuously extended and maintained since its introduction. PHOENIX is also foreseen as a fundamental component for all future ATM systems in the DFS into the 2020s, and is part of the DFS initiative for “ATS componentware” in the European SESAR programme.
Introduction:
Since 2001, the DFS has developed its own radar and sensor data processing system, called PHOENIX (a programmatic name rather than an acronym), which is applied in a variety of environments, for a variety of purposes, and with a variety of functional requirements. With PHOENIX, the DFS aimed at the level of an advanced ATC system in the sense of the previous definitions, not at full ATM. To meet these challenges, a series of general concepts was developed and implemented, which are of general interest for the definition and implementation of advanced ATC and C³ systems.
Introduction:
The PHOENIX tracker was originally developed for the surveillance of civilian ATC traffic. It is capable of performing multi-sensor data fusion (MSDF) across sensor types that differ widely in accuracy, update rate, and supported attributes. Thanks to its flexible design, it is also well suited for surface movement ground surveillance.
Introduction:
Grand context: German air traffic today comprises between 1,000 and 2,000 simultaneous aircraft tracks in the national airspace. Besides classical ATC radars, new types of sensors and position-information sources such as multilateration and ADS-B must also be integrated. Up to 10,000 flight plans have to be processed per day. In the context of transnational functional airspace blocks such as FABEC, the required number of maintainable tracks will grow beyond 3,000, possibly to more than 5,000 simultaneous tracks, and an equivalent growth in flight-plan handling capacity can reasonably be assumed. Each aircraft needs suitable Kalman filtering for tracking to cope with both steady flight and manoeuvre conditions in the different airspaces, and each IFR aircraft needs linkage processing to correlate flight-plan data correctly to the track; simple code-callsign pairing is insufficient due to the multiple use of SSR codes.
Introduction:
At the same time, the track and flight-plan data have to be presented to a number of controller working positions (CWPs), ranging from 1 (low-end applications) or 5 (in towers) to 120 (in ACCs), which demands excellent scalability from such a system. Furthermore, CWPs create much coordination data and additional track-related information, which are distributed over the LAN and eventually to external partner systems. To keep the total complex controllable, system status monitoring and commanding facilities have to be built in. Last but not least, such system environments need large sets of configuration and resource data that have to be managed efficiently.
Introduction:
Phoenix deployment: PHOENIX is a common R/SDPS tool in German ATC, used at more than 150 operational locations, scheduled for more than 700 additional locations, and used as a test, analysis, and evaluation tool at more than 200 locations. It has also gained international recognition as an R/SDPS tool.
Phoenix Paradigm:
Phoenix as ATS component: PHOENIX has been developed following the decomposition of ATS componentware (ATS CW). ATS units are regarded as a "system of systems", i.e. one system for each decomposed domain. An ATS system may consist of subsystems; e.g. an ACC ATS system may comprise a subsystem "Main" and a subsystem "Fallback". ATS systems or subsystems always comprise hardware (HW), software (SW), and network infrastructure (NET). HW, SW, and NET consist of segments: a HW segment is a single host, a SW segment is the application SW for a host, and a NET segment is a LAN segment. SW segments consist of components, which are executable (UNIX/Linux) processes and/or scripts.
Phoenix Paradigm:
Phoenix tracking engineering: PHOENIX includes a two-track-server configuration, one with an interacting-multiple-model Kalman filter (IMMKF) and another with a single-model Kalman filter (1MKF). Targets with different manoeuvrability have different statistics, which is expressed by the process noise of the motion model. The process noise is a mathematical description of the uncertainty of a target's future position and velocity given the current and past observations. Targets for which constant motion is an established fact have essentially zero process noise; all uncertainties due to changes in a target's motion state are modelled by nonzero process noise.
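The role of process noise described above can be illustrated with a minimal scalar Kalman filter. This is a generic textbook sketch with a random-walk motion model and a hypothetical function name, not PHOENIX code (which uses constant-velocity and interacting-multiple-model filters):

```python
def kalman_step(x, P, z, q, r):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : current state estimate and its variance
    z    : new position measurement
    q    : process-noise variance; 0 models a target known to hold a
           steady course, larger values admit manoeuvres
    r    : measurement-noise variance of the sensor
    """
    # Predict: process noise inflates the uncertainty carried forward,
    # accounting for manoeuvres the motion model cannot predict.
    P_pred = P + q
    # Update: blend prediction and measurement by relative confidence.
    K = P_pred / (P_pred + r)        # Kalman gain in [0, 1]
    x_new = x + K * (z - x)          # pull the estimate toward z
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

With q = 0 the gain K shrinks over successive updates and the filter increasingly trusts its prediction, which is correct for straight flight but slow to follow a turn; a larger q keeps K high, so the track follows manoeuvres at the cost of noisier estimates.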
Phoenix Paradigm:
Phoenix software engineering: PHOENIX is based on the use of commercial off-the-shelf hardware and software, on the Linux operating system, and on a modular Air Traffic Control (ATC) system philosophy. The existing system, with its open architecture design, is adaptable and scalable, ranging from a simple tower automation application, through a complex approach application, up to an independent air traffic control fallback solution for a multi-sector area control centre.
Phoenix Paradigm:
Phoenix standards: The development and evolution of the PHOENIX product has been based on compliance with internationally accepted standards for air traffic management systems, including operational, safety, security, and equipment standards promulgated by organisations such as ICAO, EUROCONTROL, and other recognized organisations.
Phoenix Components:
Server components: multi-radar track servers (1MKF, IMMKF, MSDF; d-mrts); track distribution services (d-trksend); configuration and distribution servers (d-dis); recording and replay servers (d-rdr); message servers (d-msg); flightplan and linkage processing servers (d-fps); persistence servers (d-pds); information data server for direction finders and weather reports (d-ids); radar weather server (d-ws); safety net server for STCA, RAI, MSAW, GPM (d-snet); airport situation assessment server for RWY incursions, TWY infringements, etc. (d-asas); online tracking quality control statistics server (d-otqc); LANBLF interface and proxy; FATMAC interface and TWRTID server.
Client components: controller working position (d-cwp); tower touch input approach display (twrtid); flight data workstation (d-fdb); analysis working position (d-awp); maintenance working position (MWP), with adaptation data editor (d-adg), configuration distribution HMI (d-disfront), map editor (d-map), and system monitoring (d-mon).
Support processes (daemons, interface agents, utilities): proxies for PHOENIX middleware (proxy_server); status collector agents (d-agent); application initialisation agents (d-init); interface agents for various flightplan data formats (d-fplIa); bridges for sensor data and flightplan messages (d-sbr, ...); interfaces to various printers; test data generators (d-gen, d-stca, etc.); video switch controllers.
**European Conference on Underwater Acoustics**
European Conference on Underwater Acoustics:
The European Conference on Underwater Acoustics (ECUA) was a conference on underwater acoustics that took place in Europe every two years until 2012, when it was held in Edinburgh (Scotland) and organized by the Institute of Acoustics. Previous editions took place in Delft (Netherlands, 2004), Algarve (Portugal, 2006), Paris (France, 2008), and Istanbul (Turkey, 2010).
**Quote to cash**
Quote to cash:
Quote-to-cash (QTC or Q2C) is an information technology term for the integration and automated management of end-to-end business processes on the sell side. It includes the following aspects of the sales process: product (or service) configuration; pricing; quote creation for a prospect, customer, or channel partner, and its negotiation; customer acceptance of the deal; product ordering and fulfillment; invoicing; payment receipt; and renewals.
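The stages above form an ordered pipeline. A minimal sketch modelling them as an enumeration; the stage names and the `next_stage` helper are illustrative, not a standard API:

```python
from enum import IntEnum

class QtcStage(IntEnum):
    """Quote-to-cash stages in sell-side order (names are illustrative)."""
    CONFIGURE = 1   # product (or service) configuration
    PRICE = 2       # pricing
    QUOTE = 3       # quote creation and negotiation
    ACCEPT = 4      # customer acceptance of the deal
    ORDER = 5       # product ordering and fulfillment
    INVOICE = 6     # invoicing
    PAYMENT = 7     # payment receipt
    RENEWAL = 8     # renewals

def next_stage(stage):
    """Advance a deal one stage; renewal loops back to configuration."""
    if stage is QtcStage.RENEWAL:
        return QtcStage.CONFIGURE
    return QtcStage(stage + 1)
```

Because the stages are ordered, an automated QTC system can enforce that, for example, invoicing never precedes customer acceptance.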
**Cornet**
Cornet:
The cornet is a brass instrument similar to the trumpet but distinguished from it by its conical bore, more compact shape, and mellower tone quality. The most common cornet is a transposing instrument in B♭. There is also a soprano cornet in E♭ and cornets in A and C. All are unrelated to the Renaissance and early Baroque cornett.
History:
The cornet was derived from the posthorn by applying rotary valves to it in the 1820s, in France. However, by the 1830s, Parisian makers were using piston valves. Cornets first appeared as separate instrumental parts in 19th-century French compositions. The instrument could not have been developed without the improvement of piston valves by Silesian horn players Friedrich Blühmel (or Blümel) and Heinrich Stölzel in the early 19th century. These two instrument makers almost simultaneously invented valves, though it is likely that Blühmel was the inventor, while Stölzel developed a practical instrument. They were jointly granted a patent for a period of ten years. François Périnet received a patent in 1838 for an improved valve, which became the model for modern brass instrument piston valves. The first notable virtuoso player was Jean-Baptiste Arban, who studied the cornet extensively and published La grande méthode complète de cornet à piston et de saxhorn, commonly referred to as the Arban method, in 1864. Up until the early 20th century, the trumpet and cornet coexisted in musical ensembles; symphonic repertoire often involves separate parts for trumpet and cornet. As several instrument builders made improvements to both instruments, they started to look and sound more alike. The modern-day cornet is used in brass bands, concert bands, and in specific orchestral repertoire that requires a more mellow sound. The name "cornet" derives from the French corne, meaning "horn", itself from Latin cornu. While not musically related, instruments of the Zink family (which includes serpents) are named "cornetto" or "cornett" in modern English, to distinguish them from the valved cornet described here. The 11th edition of the Encyclopædia Britannica referred to serpents as "old wooden cornets". The Roman/Etruscan cornu (or simply "horn") is the lingual ancestor of these.
It is a predecessor of the post horn, from which the cornet evolved, and was used like a bugle to signal orders on the battlefield.
Relationship to trumpet:
The cornet's valves allowed for melodic playing throughout the instrument's register. Trumpets were slower to adopt the new valve technology, so for 100 years or more, composers often wrote separate parts for trumpet and cornet. The trumpet would play fanfare-like passages, while the cornet played more melodic ones. The modern trumpet has valves that allow it to play the same notes and fingerings as the cornet.
Relationship to trumpet:
Cornets and trumpets made in a given key (usually the key of B♭) play at the same pitch, and the technique for playing the instruments is nearly identical. However, cornets and trumpets are not entirely interchangeable, as they differ in timbre. Also available, but usually seen only in the brass band, is an E♭ soprano model, pitched a fourth above the standard B♭.
Relationship to trumpet:
Unlike the trumpet, which has a cylindrical bore up to the bell section, the tubing of the cornet has a mostly conical bore, starting very narrow at the mouthpiece and gradually widening towards the bell. Cornets following the 1913 patent of E. A. Couturier can have a continuously conical bore. This shape is primarily responsible for the instrument's characteristic warm, mellow tone, which can be distinguished from the more penetrating sound of the trumpet. The conical bore of the cornet also makes it more agile than the trumpet when playing fast passages, but correct pitching is often less assured. The cornet is often preferred for young beginners as it is easier to hold, with its centre of gravity much closer to the player.
Relationship to trumpet:
The cornet mouthpiece has a shorter and narrower shank than that of a trumpet, so it can fit the cornet's smaller mouthpiece receiver. The cup size is often deeper than that of a trumpet mouthpiece.
Relationship to trumpet:
One variety is the short-model traditional cornet, also known as a "Shepherd's Crook" shaped model. These are most often large-bore instruments with a rich mellow sound. There is also a long-model, or "American-wrap" cornet, often with a smaller bore and a brighter sound, which is produced in a variety of different tubing wraps and is closer to a trumpet in appearance. The Shepherd's Crook model is preferred by cornet traditionalists. The long-model cornet is generally used in concert bands in the United States and has found little following in British-style brass and concert bands.
Relationship to trumpet:
A third, and relatively rare variety—distinct from the "American-wrap" cornet—is the "long cornet", which was produced in the mid-20th century by C. G. Conn and F. E. Olds and is visually nearly indistinguishable from a trumpet, except that it has a receiver fashioned to accept cornet mouthpieces.
Relationship to trumpet:
Echo cornet: The echo cornet has been called an obsolete variant. It has a mute chamber (or echo chamber) mounted to the side, acting as a second bell when the fourth valve is pressed. The second bell has a sound similar to that of a Harmon mute and is typically used to play echo phrases, whereupon the player imitates the sound from the primary bell using the echo chamber.
Playing technique:
Like the trumpet and all other modern brass wind instruments, the cornet makes a sound when the player vibrates ("buzzes") the lips in the mouthpiece, creating a vibrating column of air in the tubing. The frequency of the air column's vibration can be modified by changing the lip tension and aperture, or embouchure, and by altering the tongue position to change the shape of the oral cavity, thereby increasing or decreasing the speed of the airstream. In addition, the column of air can be lengthened by engaging one or more valves, thus lowering the pitch. Double and triple tonguing are also possible.
Playing technique:
Without valves, the player could produce only a harmonic series of notes, like those played by the bugle and other "natural" brass instruments. These notes are far apart for most of the instrument's range, making diatonic and chromatic playing impossible, except in the extreme high register. The valves change the length of the vibrating column and provide the cornet with the ability to play chromatically.
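The effect of the valves can be sketched numerically. Each depressed valve lowers the open pitch by a fixed interval (valve 1 by two semitones, valve 2 by one, valve 3 by three), and in equal temperament each semitone scales the frequency by 2^(1/12). A minimal sketch; the helper name is hypothetical, and real instruments deviate from this idealization, which is why valve combinations need intonation compensation:

```python
# Standard brass valve intervals, in semitones below the open pitch.
VALVE_SEMITONES = {1: 2, 2: 1, 3: 3}

def valved_frequency(open_hz, valves):
    """Frequency of an open-harmonic pitch lowered by the depressed
    valves, assuming idealized equal temperament (2**(1/12) per semitone)."""
    drop = sum(VALVE_SEMITONES[v] for v in valves)
    return open_hz * 2.0 ** (-drop / 12)
```

Combinations fill in the gaps of the harmonic series: valves 1+2 lower the pitch by three semitones, the same interval as valve 3 alone, and all seven combinations together give the chromatic steps between adjacent open harmonics.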
Ensembles with cornets:
Brass band: British brass bands consist only of brass instruments and a percussion section. The cornet is the leading melodic instrument in this ensemble; trumpets are never used. The ensemble consists of about thirty musicians, including nine B♭ cornets and one E♭ cornet (soprano cornet). In the UK, companies such as Besson and Boosey & Hawkes specialized in instruments for brass bands. In America, 19th-century manufacturers such as Graves and Company, Hall and Quinby, E.G. Wright, and the Boston Musical Instrument Manufactory made instruments for this ensemble.
Ensembles with cornets:
Concert band: The cornet features in the British-style concert band, and early American concert band pieces, particularly those written or transcribed before 1960, often feature distinct, separate parts for trumpets and cornets. Cornet parts are rarely included in later American pieces, however, and the instrument is replaced in modern American bands by the trumpet. This slight difference in instrumentation derives from the British concert band's heritage in military bands, where the highest brass instrument is always the cornet. There are usually four to six B♭ cornets present in a British concert band, but no E♭ instrument, as this role is taken by the E♭ clarinet.
Ensembles with cornets:
Fanfareorkest: Fanfareorkesten ("fanfare orchestras"), found only in the Netherlands, Belgium, northern France, and Lithuania, use the complete saxhorn family of instruments. The standard instrumentation includes both the cornet and the trumpet; however, in recent decades, the cornet has largely been replaced by the trumpet.
Ensembles with cornets:
Jazz ensemble: In old-style jazz bands, the cornet was preferred to the trumpet, but from the swing era onwards, it has been largely replaced by the louder, more piercing trumpet. Likewise, the cornet has been largely phased out of big bands by a growing taste for louder and more aggressive instruments, especially since the advent of bebop in the post-World War II era.
Ensembles with cornets:
Jazz pioneer Buddy Bolden played the cornet, and Louis Armstrong started off on the instrument, but his switch to the trumpet is often credited with the beginning of the trumpet's dominance in jazz. Cornetists such as Bubber Miley and Rex Stewart contributed substantially to the Duke Ellington Orchestra's early sound. Other influential jazz cornetists include Freddie Keppard, King Oliver, Bix Beiderbecke, Ruby Braff, Bobby Hackett, and Nat Adderley. Notable performances on cornet by players generally associated with the trumpet include Freddie Hubbard's on Empyrean Isles, by Herbie Hancock, and Don Cherry's on The Shape of Jazz to Come, by Ornette Coleman. The band Tuba Skinny is led by cornetist Shaye Cohn.
Ensembles with cornets:
Symphony orchestra: Soon after its invention, the cornet was introduced into the symphony orchestra, supplementing the trumpets. The use of valves meant it could play a full chromatic scale, in contrast with trumpets, which were still restricted to the harmonic series. In addition, its tone was found to unify the horn and trumpet sections. Hector Berlioz was the first significant composer to use cornets in these ways, and his orchestral works often use pairs of both trumpets and cornets, the latter playing more of the melodic lines. In his Symphonie fantastique (1830), he added a counter-melody for a solo cornet in the second movement (Un bal).
Ensembles with cornets:
Cornets continued to be used, particularly in French compositions, well after the valve trumpet was common. They blended well with other instruments and were held to be better suited to certain types of melody. Tchaikovsky used them effectively this way in his Capriccio Italien (1880). From the early 20th century, the cornet and trumpet combination was still favored by some composers, including Edward Elgar and Igor Stravinsky, but tended to be used when the composer wanted a specifically mellower and more agile sound. The sounds of the cornet and trumpet have grown closer together over time, and the cornet is now rarely used as an ensemble instrument: in the first version of his ballet Petrushka (1911), Stravinsky gives a celebrated solo to the cornet; in the 1946 revision, he removed cornets from the orchestration and instead assigned the solo to the trumpet. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**High School Rapper**
High School Rapper:
High School Rapper (Hangul: 고등래퍼) is a South Korean survival hip-hop TV show, the high school student counterpart of Show Me The Money and Unpretty Rapstar.
Seasons:
Season 1 (2017): Mentors, Finalists
Season 2 (2018): Mentors, Finalists
Season 3 (2019): Mentors, Finalists
Season 4 (2021): Mentors, Finalists
**Wings 3D**
Wings 3D:
Wings 3D is a free and open-source subdivision modeler inspired by Nendo and Mirai from Izware. Wings 3D is named after the winged-edge data structure it uses internally to store coordinate and adjacency data, and is commonly referred to by its users simply as Wings. Wings 3D is available for Windows, Linux, and Mac OS X, using the Erlang environment.
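A minimal sketch may clarify what a winged-edge record stores. The field and function names below are illustrative only, not Wings 3D's actual Erlang internals:

```python
# Minimal winged-edge record: each edge knows its endpoints, the two faces
# it separates, and the four "wing" edges around those faces.
from dataclasses import dataclass

@dataclass
class WingedEdge:
    v_start: int      # endpoint vertices of this edge
    v_end: int
    face_left: int    # faces on either side of the edge
    face_right: int
    left_prev: int    # predecessor/successor edge around the left face
    left_next: int
    right_prev: int   # predecessor/successor edge around the right face
    right_next: int

# A single square face (face 0) surrounded by the outside (face 1);
# edges 0..3 run counter-clockwise around face 0.
edges = [
    WingedEdge(0, 1, 0, 1, 3, 1, 1, 3),
    WingedEdge(1, 2, 0, 1, 0, 2, 2, 0),
    WingedEdge(2, 3, 0, 1, 1, 3, 3, 1),
    WingedEdge(3, 0, 0, 1, 2, 0, 0, 2),
]

def face_edges(edges, face, start):
    """Walk the edges around a face by following the wing pointers."""
    out, e = [start], edges[start]
    nxt = e.left_next if e.face_left == face else e.right_next
    while nxt != start:
        out.append(nxt)
        e = edges[nxt]
        nxt = e.left_next if e.face_left == face else e.right_next
    return out

print(face_edges(edges, 0, 0))  # → [0, 1, 2, 3]
```

The adjacency stored on every edge is what makes the selection modes cheap: vertex, edge, face, and body queries can all be answered by short pointer walks like the one above.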
Overview:
Wings 3D can be used to model and texture low to mid-range polygon models. Wings does not support animations and has only basic OpenGL rendering facilities, although it can export to external rendering software such as POV-Ray and YafRay. Wings is often used in combination with other software, whereby models made in Wings are exported to applications more specialized in rendering and animation such as Blender.
Interface:
Wings 3D uses context-sensitive menus as opposed to a highly graphical, icon-oriented interface. Modeling is done using the mouse and keyboard to select and modify different aspects of a model's geometry in four different selection modes: Vertex, Edge, Face and Body. Because of Wings's context-sensitive design, each selection mode has its own set of mesh tools. Many of these tools offer both basic and advanced uses, allowing users to specify vectors and points to change how a tool will affect their model. Wings also allows users to add textures and materials to models, and has built-in AutoUV mapping facilities.
Features:
- A wide variety of selection and modeling tools
- Modeling tool support for magnets and vector operations
- Customizable hotkeys and interface
- Tweak Mode for quick adjustments to a mesh
- Assign and edit lighting, materials, textures, and vertex colours
- AutoUV mapping
- Ngon mesh support
- A plugin manager for adding and removing plugins
- Import and export in many popular formats
Supported file formats:
Wings loads and saves models in its own format (.wings), but also supports several standard 3D formats.
Import: Nendo (.ndo), 3D Studio (.3ds), Adobe Illustrator (.ai), Lightwave/Modo (.lwo/.lxo), Wavefront (.obj), PostScript (Inkscape) (.ps), Encapsulated PostScript (.eps), Stereolithography (.stl), Paths (.svg)
Export: Nendo (.ndo), 3D Studio (.3ds), Adobe Illustrator (.ai), BZFlag (.bzw), Kerkythea (.xml), Lightwave/Modo (.lwo/.lxo), Wavefront (.obj), POV-Ray (.pov), Cartoon Edges (.eps/.svg), Stereolithography (.stl), Renderware (.rwx), VRML 2.0 (.wrl), DirectX (.x), Collada (.dae)
**Books on Tape (company)**
Books on Tape (company):
Books on Tape (sometimes abbreviated BoT) is an audiobook publishing imprint of Random House which emphasizes unabridged audiobook recordings for schools and libraries. It was previously an independent California-based company before its acquisition by Random House in 2001. The company was founded by Olympic gold medalist Duvall Hecht in 1975 as a direct-to-consumer mail-order rental service for unabridged audiobooks on cassette tape. It was one of the pioneering companies in the fledgling audiobook business, along with Recorded Books.
**Blastmycin**
Blastmycin:
Blastmycin is an antibiotic with the molecular formula C26H36N2O9 which is produced by the bacterium Streptomyces blastmyceticus.
**Traditional Korean roof construction**
Traditional Korean roof construction:
Traditional Korean roof construction has traditionally used many kinds of natural materials: neowa (shingles), giwa (tiles), byeotjib (rice straw), stone giwa (tiles), eoksae (eulalia), and goolpy (oak bark).
Neowa (Shingle) roof:
Neowajib (shingle-roofed houses) can be seen in mountain villages (for example, in Gangwon-do), where materials such as giwa and rice straw are hard to obtain. Instead, the roof is made from pieces of thick bark from roughly 200-year-old red pine trees, which are easy to get. The size of neowa is not fixed, but each piece is usually about 20–30 cm wide, 40–59 cm long, and 4–5 cm thick. Usually 105–140 neowa are used to complete a roof. To protect the neowa from the wind, heavy stones or logs are placed on the roof. Since there was no smokestack, air circulated through the gaps between the neowa. When it rains, the moistened wood has a waterproofing effect. A neowa typically lasts about 5 years, but the pieces are not all replaced at the same time; a rotten piece is simply replaced with a new one. A neowajib has its rooms, kitchen, and cow shed under one square roof, to protect domestic animals from mountain beasts and to keep warm in winter. As red pine trees have disappeared, neowajib have gradually disappeared as well; today only three neowajib remain in Korea.
Giwa (Tile):
Giwa is a roofing material, also called gaewa. Its basic forms are amkiwa (flat giwa) and sukiwa (round giwa); a roof is generally made by combining the two. Clay is kneaded and spread thinly; then amkiwa are extended upward and downward, and sukiwa cover the joints at the left and right sides. Classified by material, there are togiwa, made by kneading and baking clay; cement giwa, made by mixing cement and sand; and metal giwa, made by cutting and shaping metal plate. Stone giwa and bronze giwa have been found in Roman ruins, and marble giwa was used on Greek temples. In addition, there are cheonggiwa, ozigiwa, and other types glazed with various kinds of glaze. By form, the original giwa is called bongiwa and Japanese giwa "geolchimgiwa". Giwa exist in many countries.
Byeotjib (Rice Straw) roof:
Chogajiboong (a straw roof) is made with byeotjib (rice straw), eulalia, or reed, but most commonly with byeotjib. Because the straw is hollow, byeotjib shields residents from the sun in summer and keeps them warm in winter. Moreover, rain runs off well and hardly soaks through the roof, because the straw has a relatively smooth surface, so a thick roof is not needed. The natural properties of byeotjib give chogajiboong a warm, soft appearance. A new layer of byeotjib is laid over the old one every year, so the roof looks clean and new without any special effort. The gently sloped roof is also used like a small farm plot, for drying crops such as red peppers and for growing pumpkins or gourds.
Stone Giwa (Tile) roof:
In areas that produced much coal, roofs were layered with flat slabs of argillite (germpanam), which took the role of giwa. Bluestone (cheongseok) is smooth enough to shed raindrops gently. The system is no different from that of giwa: a bluestone slab is laid at the bottom, then a different, larger slab is laid on top of it, and the process continues in this way. Such a roof can endure for a long time. These roofs were common in argillite zones, but because the materials were difficult to purchase and transport, only certain social classes could afford them. Nowadays they can be seen in some areas of Gyeonggi-do and Gangwon-do.
Eoksae (Eulalia) roof:
Korea has ten kinds of eulalia. The eulalia leaf is waterproof and durable: a roof covered with eulalia once will last for about ten years. Straw rope twisted from eulalia is not only strong and elastic but also waterproof, and is used for weaving rain-gear and straw sandals. The material should be dried with dew for a week, then kept in a shady spot for good airing.
Goolpy (Oak Bark) roof:
This is a kind of roof material used mainly in mountain villages. The bark comes from oaks over 20 years old. First, the oak bark is peeled around the time of Chuseo (one of the 24 seasonal divisions, about August 23). Next it is soaked in water. After that, it is dried with a heavy stone placed on it to flatten it. Bark made this way is commonly about 1.3 meters wide. When the air is dry, the bark shrinks and develops many holes; when it rains or the humidity rises, the holes shrink and quickly disappear. The stones placed on the joints keep the bark from blowing away. The oak bark lasts so long that there is a saying, "Giwa lasts ten thousand years, and oak bark one thousand years."
**Atomic battery**
Atomic battery:
An atomic battery, nuclear battery, radioisotope battery or radioisotope generator is a device which uses energy from the decay of a radioactive isotope to generate electricity. Like nuclear reactors, they generate electricity from nuclear energy, but differ in that they do not use a chain reaction. Although commonly called batteries, they are technically not electrochemical and cannot be charged or recharged. They are very costly, but have an extremely long life and high energy density, and so they are typically used as power sources for equipment that must operate unattended for long periods of time, such as spacecraft, pacemakers, underwater systems and automated scientific stations in remote parts of the world. Nuclear battery technology began in 1913, when Henry Moseley first demonstrated a current generated by charged-particle radiation. The field received considerable in-depth research attention for applications requiring long-life power sources for space needs during the 1950s and 1960s. In 1954 RCA researched a small atomic battery for small radio receivers and hearing aids. Since RCA's initial research and development in the early 1950s, many types and methods have been designed to extract electrical energy from nuclear sources. The scientific principles are well known, but modern nano-scale technology and new wide-bandgap semiconductors have created new devices and interesting material properties not previously available.
Atomic battery:
Nuclear batteries can be classified by energy conversion technology into two main groups: thermal converters and non-thermal converters. The thermal types convert some of the heat generated by the nuclear decay into electricity. The most notable example is the radioisotope thermoelectric generator (RTG), often used in spacecraft. The non-thermal converters extract energy directly from the emitted radiation, before it is degraded into heat. They are easier to miniaturize and do not require a thermal gradient to operate, so they are suitable for use in small-scale applications. The most notable example is the betavoltaic cell.
Atomic battery:
Atomic batteries usually have an efficiency of 0.1–5%. High-efficiency betavoltaic devices can reach 6–8% efficiency.
Thermal conversion:
Thermionic conversion A thermionic converter consists of a hot electrode, which thermionically emits electrons over a space-charge barrier to a cooler electrode, producing a useful power output. Caesium vapor is used to optimize the electrode work functions and provide an ion supply (by surface ionization) to neutralize the electron space charge.
Thermal conversion:
Thermoelectric conversion A radioisotope thermoelectric generator (RTG) uses thermocouples. Each thermocouple is formed from two wires of different metals (or other materials). A temperature gradient along the length of each wire produces a voltage gradient from one end of the wire to the other; but the different materials produce different voltages per degree of temperature difference. By connecting the wires at one end, heating that end but cooling the other end, a usable, but small (millivolts), voltage is generated between the unconnected wire ends. In practice, many are connected in series (or in parallel) to generate a larger voltage (or current) from the same heat source, as heat flows from the hot ends to the cold ends. Metal thermocouples have low thermal-to-electrical efficiency. However, the carrier density and charge can be adjusted in semiconductor materials such as bismuth telluride and silicon germanium to achieve much higher conversion efficiencies.
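The series-stack arithmetic described above can be sketched numerically. The Seebeck coefficients, temperatures, and couple counts below are illustrative round numbers, not data from any particular generator:

```python
# Back-of-the-envelope Seebeck-effect estimate for an RTG-style
# thermoelectric stack (illustrative numbers, not a real design).
def thermocouple_voltage(seebeck_uV_per_K, t_hot_K, t_cold_K, n_couples=1):
    """Open-circuit voltage of n thermocouples in series:
    V = n * S * (T_hot - T_cold), with S the net Seebeck coefficient
    of the two dissimilar materials."""
    return n_couples * seebeck_uV_per_K * 1e-6 * (t_hot_K - t_cold_K)

# A single metal couple produces only millivolts...
v_metal = thermocouple_voltage(seebeck_uV_per_K=40, t_hot_K=500, t_cold_K=300)

# ...so many semiconductor couples (e.g. bismuth telluride, S ~ 200 uV/K)
# are wired in series to obtain a usable voltage from the same heat source.
v_semi = thermocouple_voltage(seebeck_uV_per_K=200, t_hot_K=500, t_cold_K=300,
                              n_couples=300)

print(f"{v_metal * 1000:.1f} mV per metal couple, {v_semi:.1f} V for the stack")
```

This is why the article notes that metal thermocouples alone are inefficient, while tuned semiconductor materials in large series stacks reach practical output levels.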
Thermal conversion:
Thermophotovoltaic conversion Thermophotovoltaic (TPV) cells work by the same principles as a photovoltaic cell, except that they convert infrared light (rather than visible light) emitted by a hot surface, into electricity. Thermophotovoltaic cells have an efficiency slightly higher than thermoelectric couples and can be overlaid on thermoelectric couples, potentially doubling efficiency. The University of Houston TPV Radioisotope Power Conversion Technology development effort is aiming at combining thermophotovoltaic cells concurrently with thermocouples to provide a 3- to 4-fold improvement in system efficiency over current thermoelectric radioisotope generators.
Thermal conversion:
Stirling generators A Stirling radioisotope generator is a Stirling engine driven by the temperature difference produced by a radioisotope. A more efficient version, the advanced Stirling radioisotope generator, was under development by NASA, but was cancelled in 2013 due to large-scale cost overruns.
Non-thermal conversion:
Non-thermal converters extract energy from emitted radiation before it is degraded into heat. Unlike thermoelectric and thermionic converters their output does not depend on the temperature difference. Non-thermal generators can be classified by the type of particle used and by the mechanism by which their energy is converted.
Non-thermal conversion:
Electrostatic conversion Energy can be extracted from emitted charged particles when their charge builds up in a conductor, thus creating an electrostatic potential. Without a dissipation mode the voltage can increase up to the energy of the radiated particles, which may range from several kilovolts (for beta radiation) up to megavolts (alpha radiation). The built-up electrostatic energy can be turned into usable electricity in one of the following ways.
Non-thermal conversion:
Direct-charging generator A direct-charging generator consists of a capacitor charged by the current of charged particles from a radioactive layer deposited on one of the electrodes. Spacing can be either vacuum or dielectric. Negatively charged beta particles or positively charged alpha particles, positrons or fission fragments may be utilized. Although this form of nuclear-electric generator dates back to 1913, few applications have been found in the past for the extremely low currents and inconveniently high voltages provided by direct-charging generators. Oscillator/transformer systems are employed to reduce the voltages, then rectifiers are used to transform the AC power back to direct current.
Non-thermal conversion:
English physicist H. G. J. Moseley constructed the first of these. Moseley's apparatus consisted of a glass globe silvered on the inside with a radium emitter mounted on the tip of a wire at the center. The charged particles from the radium created a flow of electricity as they moved quickly from the radium to the inside surface of the sphere. As late as 1945 the Moseley model guided other efforts to build experimental batteries generating electricity from the emissions of radioactive elements.
Non-thermal conversion:
Electromechanical conversion Electromechanical atomic batteries use the buildup of charge between two plates to pull one bendable plate toward the other until the two plates touch, discharge (equalizing the electrostatic buildup), and spring back. The mechanical motion produced can be used to generate electricity through the flexing of a piezoelectric material or through a linear generator. Milliwatts of power are produced in pulses depending on the charge rate, in some cases multiple times per second (35 Hz).
Non-thermal conversion:
Radiovoltaic conversion A radiovoltaic (RV) device converts the energy of ionizing radiation directly into electricity using a semiconductor junction, similar to the conversion of photons into electricity in a photovoltaic cell. Depending on the type of radiation targeted, these devices are called alphavoltaic (AV, αV), betavoltaic (BV, βV) and/or gammavoltaic (GV, γV). Betavoltaics have traditionally received the most attention since (low-energy) beta emitters cause the least amount of radiative damage, thus allowing a longer operating life and less shielding. Interest in alphavoltaic and (more recently) gammavoltaic devices is driven by their potential higher efficiency.
Non-thermal conversion:
Alphavoltaic conversion Alphavoltaic devices use a semiconductor junction to produce electrical energy from energetic alpha particles.
Betavoltaic conversion Betavoltaic devices use a semiconductor junction to produce electrical energy from energetic beta particles (electrons). A commonly used source is the hydrogen isotope tritium.
Betavoltaic devices are particularly well-suited to low-power electrical applications where long life of the energy source is needed, such as implantable medical devices or military and space applications.
Non-thermal conversion:
Gammavoltaic conversion Gammavoltaic devices use a semiconductor junction to produce electrical energy from energetic gamma particles (high-energy photons). They have only been considered in the 2010s but were proposed as early as 1981. A gammavoltaic effect has been reported in perovskite solar cells. Another patented design involves scattering of the gamma particle until its energy has decreased enough to be absorbed in a conventional photovoltaic cell. Gammavoltaic designs using diamond and Schottky diodes are also being investigated.
Non-thermal conversion:
Radiophotovoltaic (optoelectric) conversion In a radiophotovoltaic (RPV) device the energy conversion is indirect: the emitted particles are first converted into light using a radioluminescent material (a scintillator or phosphor), and the light is then converted into electricity using a photovoltaic cell. Depending on the type of particle targeted, the conversion type can be more precisely specified as alphaphotovoltaic (APV or α-PV), betaphotovoltaic (BPV or β-PV) or gammaphotovoltaic (GPV or γ-PV). Radiophotovoltaic conversion can be combined with radiovoltaic conversion to increase the conversion efficiency.
Pacemakers:
Medtronic and Alcatel developed a plutonium-powered pacemaker, the Numec NU-5, powered by a 2.5 Ci slug of plutonium-238, first implanted in a human patient in 1970. The 139 Numec NU-5 nuclear pacemakers implanted in the 1970s are expected never to need replacing, an advantage over non-nuclear pacemakers, which require surgical replacement of their batteries every 5 to 10 years. The plutonium "batteries" are expected to produce enough power to drive the circuit for longer than the 88-year half-life of the plutonium. Betavoltaic batteries are also being considered as long-lasting power sources for lead-free pacemakers.
Radioisotopes used:
Atomic batteries use radioisotopes that produce low energy beta particles or sometimes alpha particles of varying energies. Low energy beta particles are needed to prevent the production of high energy penetrating Bremsstrahlung radiation that would require heavy shielding. Radioisotopes such as tritium, nickel-63, promethium-147, and technetium-99 have been tested. Plutonium-238, curium-242, curium-244 and strontium-90 have been used. Besides the nuclear properties of the used isotope, there are also the issues of chemical properties and availability. A product deliberately produced via neutron irradiation or in a particle accelerator is more difficult to obtain than a fission product easily extracted from spent nuclear fuel.
Radioisotopes used:
Plutonium-238 must be deliberately produced via neutron irradiation of neptunium-237, but it can easily be converted into a stable plutonium oxide ceramic. Strontium-90 is easily extracted from spent nuclear fuel but must be converted into the perovskite form strontium titanate to reduce its chemical mobility, cutting power density in half. Caesium-137, another high-yield nuclear fission product, is rarely used in atomic batteries because it is difficult to convert into chemically inert substances. Another undesirable property of Cs-137 extracted from spent nuclear fuel is that it is contaminated with other isotopes of caesium, which reduce power density further.
Micro-batteries:
In the field of microelectromechanical systems (MEMS), nuclear engineers at the University of Wisconsin, Madison have explored the possibilities of producing minuscule batteries which exploit radioactive nuclei of substances such as polonium or curium to produce electric energy. As an example of an integrated, self-powered application, the researchers have created an oscillating cantilever beam that is capable of consistent, periodic oscillations over very long time periods without the need for refueling. Ongoing work demonstrates that this cantilever is capable of radio-frequency transmission, allowing MEMS devices to communicate with one another wirelessly.
Micro-batteries:
These micro-batteries are very light and deliver enough energy to serve as a power supply for MEMS devices and, further, for nanodevices. The radiation energy released is transformed into electric energy, which is restricted to the area of the device that contains the processor and the micro-battery that supplies it with energy.
**Transient-key cryptography**
Transient-key cryptography:
Transient-key cryptography is a form of public-key cryptography wherein keypairs are generated and assigned to brief intervals of time instead of to individuals or organizations, and the blocks of cryptographic data are chained through time. In a transient-key system, private keys are used briefly and then destroyed, which is why it is sometimes nicknamed “disposable crypto.” Data encrypted with a private key associated with a specific time interval can be irrefutably linked to that interval, making transient-key cryptography particularly useful for digital trusted timestamping. Transient-key cryptography was invented in 1997 by Dr. Michael Doyle of Eolas, and has been adopted in the ANSI ASC X9.95 Standard for trusted timestamps.
Public-key vs. transient-key:
Both public-key and transient-key systems can be used to generate digital signatures that assert that a given piece of data has not changed since it was signed. But the similarities end there. In a traditional public key system, the public/private keypair is typically assigned to an individual, server, or organization. Data signed by a private key asserts that the signature came from the indicated source. Keypairs persist for years at a time, so the private component must be carefully guarded against disclosure; in a public-key system, anyone with access to a private key can counterfeit that person's digital signature.
Public-key vs. transient-key:
In transient-key systems, however, the keypair is assigned to a brief interval of time, not to a particular person or entity. Data signed by a specific private key becomes associated with a specific time and date. A keypair is active only for a few minutes, after which the private key is permanently destroyed. Therefore, unlike public-key systems, transient-key systems do not depend upon the long-term security of the private keys.
Establishing data integrity:
In a transient-key system, the source of time must be a consistent standard understood by all senders and receivers. Since a local system clock may be changed by a user, it is never used as a source of time. Instead, data is digitally signed with a time value derived from Coordinated Universal Time (UTC) accurate to within a millisecond, in accordance with the ANSI ASC X9.95 standard for Trusted Timestamping. Whenever a time interval in a transient-key system expires, a new public/private keypair is generated, and the private key from the previous interval is used to digitally certify the new public key. The old private key is then destroyed. This "key-chaining" system is the immediate ancestor of the Blockchain technology in vogue today.
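The key-chaining scheme above can be sketched as a toy model. For simplicity, an HMAC tag stands in for a real public-key signature and the local clock stands in for a trusted UTC source; the class and method names are illustrative, not from any real implementation:

```python
# Toy model of transient-key interval chaining: sign during an interval,
# then certify the next key with the old one and destroy the old key.
# HMAC stands in for asymmetric signatures purely for illustration.
import hashlib, hmac, os, time

class IntervalChain:
    def __init__(self):
        self._private = os.urandom(32)           # current interval's secret
        self.public_id = hashlib.sha256(self._private).hexdigest()
        self.links = []                           # certified key succession

    def sign(self, data: bytes) -> dict:
        """Sign data with the current transient key plus a timestamp."""
        stamp = f"{time.time():.3f}".encode()
        tag = hmac.new(self._private, stamp + data, hashlib.sha256).hexdigest()
        return {"time": stamp.decode(), "key": self.public_id, "sig": tag}

    def roll_interval(self):
        """End the interval: certify the next key, then destroy the old one."""
        new_private = os.urandom(32)
        new_public = hashlib.sha256(new_private).hexdigest()
        # the expiring private key certifies the new public key (the chain link)
        link = hmac.new(self._private, new_public.encode(),
                        hashlib.sha256).hexdigest()
        self.links.append((self.public_id, new_public, link))
        self._private = new_private              # previous key no longer exists
        self.public_id = new_public

chain = IntervalChain()
receipt = chain.sign(b"document hash goes here")
chain.roll_interval()   # the key that produced `receipt` is now destroyed
print(len(chain.links), receipt["key"] != chain.public_id)
```

The essential property the sketch captures is that after `roll_interval` the signing key is gone, so past signatures cannot be forged even by the server itself; only the certified succession of public keys remains for verification.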
Establishing data integrity:
For the new interval, time values are obtained from a trusted third-party source, and specific moments in time can be interpolated in between received times by using a time-biasing method based on the internal system timer. If a trusted time source cannot be obtained or is not running within specified tolerances, transient private keys are not issued. In that case, the time interval chain is terminated, and a fresh one is begun. The old and new chains are connected through network archives, which enable all servers to continue to verify the data integrity through time of protected data, regardless of how often the chain must be restarted.
Establishing data integrity:
The start times of the chain and of each interval can be coupled together to form an unbroken sequence of public keys, which can be used for the following: To irrefutably identify the time at which a set of data was signed.
Establishing data integrity:
To identify the exact state of the data at the time it was signed. As an extra security measure, all requests for signatures made during an interval are stored in a log that is concatenated and is itself appended to the public key at the start of the next interval. This mechanism makes it impossible to insert new "signed events" into the interval chain after the fact.
Cross-verification:
Through independently operating servers, cross-certification can provide third-party proof of the validity of a time interval chain and irrefutable evidence of consensus on the current time. Transient-key cryptographic systems display high Byzantine fault tolerance. A web of interconnected cross-certifying servers in a distributed environment creates a widely witnessed chain of trust that is as strong as its strongest link. By contrast, entire hierarchies of traditional public key systems can be compromised if a single private key is exposed. An individual transient key interval chain can be cross-certified with other transient key chains and server instances. Through cross-certification, Server A signs Server B's interval chain, the signed data of which is the interval definition. In effect, the private keys from Server B are used to sign the public keys of Server A. In the diagram, a server instance is cross-certified with two other server instances (blue and orange). Cross-certification requires that the timestamp for the interval agree with the timestamp of the cross-certifying server within acceptable tolerances, which are user-defined and typically a few hundred milliseconds in duration.
Network archives:
Along with intervals, cross-certifications are stored in a network archive. Within a transient-key network, the archive is a logical database that can be stored and replicated on any system to enable verification of data that has been timestamped and signed by transient keys. A map of the set of accessible archives is stored within every digital signature created in the system. Whenever cross-certifications are completed at the beginning of an interval, the archive map is updated and published to all servers in the network.
Verification:
During an interval, the transient private key is used to sign data concatenated with trusted timestamps and authenticity certificates. To verify the data at a later time, a receiver accesses the persistent public key for the appropriate time interval. The public key applied to the digital signature can be passed through published cryptographic routines to unpack the hash of the original data, which is then compared against a fresh hash of the stored data to verify data integrity. If the signature successfully decrypts using a particular interval's published public key, the receiver can be assured that the signature originated during that time period. If the decrypted and fresh hashes match, the receiver can be assured that the data has not been tampered with since the transient private key created the timestamp and signed the data.
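The unpack-and-compare step described above can be illustrated with textbook RSA at toy size. The numbers below are classroom demo values; a real system uses full-size keys and a vetted cryptographic library:

```python
# Textbook RSA with toy-sized numbers, purely to show the verification flow:
# the public key "unpacks" the signed hash, which is compared to a fresh hash.
import hashlib

N, E, D = 3233, 17, 2753     # n = 61 * 53; public exponent e; private exponent d

def digest(data: bytes) -> int:
    """Hash the data and reduce it into the RSA modulus range."""
    return int(hashlib.sha256(data).hexdigest(), 16) % N

def sign(data: bytes) -> int:
    """Done with the interval's (transient) private key: sig = h^d mod n."""
    return pow(digest(data), D, N)

def verify(data: bytes, signature: int) -> bool:
    """Done later with the published public key: recover h and compare."""
    return pow(signature, E, N) == digest(data)

sig = sign(b"timestamped record")
print(verify(b"timestamped record", sig))   # True
print(verify(b"tampered record", sig))      # almost certainly False
```

In the transient-key setting, `verify` would use the public key archived for the interval in question, so a successful check simultaneously proves integrity and places the signature within that time period.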
Verification:
Transient-key cryptography was invented in 1997 by Dr. Michael D. Doyle of Eolas Technologies Inc., while working on the Visible Embryo Project; the technology was later acquired and productized by ProofSpace, Inc. It has been adopted as a national standard in the ANSI ASC X9.95 standard for Trusted Timestamping. Transient-key cryptography is the predecessor to forward secrecy and formed the foundation of the forward-signature-chaining technology in the Bitcoin blockchain system.
Verification:
ProofSpace, Inc. has published a more detailed technical overview document of transient-key cryptography.
**Central meridian (planets)**
Central meridian (planets):
The central meridian of a celestial body that presents a disc to an observer (such as a planet, moon, or star) is the meridian on the body's surface that goes through the centre of the body's disc as seen from the point of view of the observer. As generally used in observational astronomy, the term refers to the central meridian of the celestial body as seen by a theoretical observer on Earth for whom the celestial body is at the zenith. An imaginary line is drawn from the centre of the Earth to the centre of the other celestial body; the intersection between this line and the celestial body's surface is the sub-Earth point, and the central meridian is the meridian going through the sub-Earth point. Because of the body's rotation and its orbital alignment with the observer, the central meridian changes with time, as it is based on the observer's point of view. For example, consider the Earth as seen from the Moon. There will be a meridian going through the centre of the Earth's visible disc (for example 75° West). This is not always the Earth's prime meridian (0° W / 0° E), as the central meridian of the Earth as seen from the Moon changes as the Earth rotates.
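The time dependence described above can be sketched with a simplified drift calculation. This model ignores light-travel time, axial tilt, and orbital motion, and the sign convention and numbers are illustrative only:

```python
# Simplified model: for a fixed observer, the central meridian of a rotating
# body sweeps through 360 degrees of longitude per rotation period.
def central_meridian(lon_at_epoch_deg, rotation_period_hours, hours_elapsed):
    """Longitude (degrees, 0-360) of the central meridian after a given
    elapsed time, starting from the longitude central at the epoch."""
    drift = 360.0 * hours_elapsed / rotation_period_hours
    return (lon_at_epoch_deg + drift) % 360.0

# Earth as seen from the Moon: if 75 deg W (i.e. 285 deg E) is central now,
# then six hours later the central meridian has shifted by about 90 degrees.
print(central_meridian(285.0, 24.0, 6.0))   # → 15.0
```

After one full rotation period the same meridian is central again, which is why central-meridian longitudes are always quoted together with an observation time.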
**Human Variome Project**
Human Variome Project:
The Human Variome Project (HVP) is the global initiative to collect and curate all human genetic variation affecting human health. Its mission is to improve health outcomes by facilitating the unification of data on human genetic variation and its impact on human health.
Inception:
The HVP concept was conceived by Richard Cotton, a leader in the field of human genetic variation. His group, the Genomic Disorders Research Centre, based at the University of Melbourne and St. Vincent's Hospital, has established a consortium that covers genomic variation and its health implications in a comprehensive form. This consortium has encouraged the creation of, and supported, many of the 571 gene-specific variation databases currently available on the internet. However, these databases vary in completeness and are individualistic, so the Human Variome Project was born to establish a central project to encourage the collection and sourcing of this data, verify it, and ultimately use it for improved health outcomes.
Inception:
Geneticists, diagnosticians, researchers and bioinformatics scientists came together in June 2006 at the Human Variome Project Meeting, organized by Cotton’s team, and agreed to take on the task of organising data collection and unifying the systems of data access and storage. This initiative builds on substantial pilot work and achievements of the Human Genome Variation Society. The authority of those initiating this project is evidenced by the fact that major international bodies were present. These included WHO, OECD, European Commission, UNESCO, March of Dimes (US), Centers for Disease Control and Prevention (US), Google, representatives of two dozen international genetics bodies, numerous genetics journals, 20 countries and Australian State and Federal Governments.
Inception:
This major international project, a natural partner to the Human Genome Project, will require substantial funding to get it to a sustainable position. A five-year secure budget period of approximately US$12m per year has been proposed to initiate the project. This will enable the project to be organized and find operational funds for the tasks of system development, informatics, database curation and clinical access as well as collection systems that are open and accessible to all.
Inception:
The Human Variome Project seeks to provide open access to the full realm of genetic variation for the benefit of everyone. The Centre for Arab Genomic Studies (CAGS) has initiated efforts to proceed with the Arab Human Variome Project under the Human Variome Project. CAGS was one of the participants of the HVP meeting in Melbourne. Since then, several meetings have been held between officials of HVP and CAGS members to discuss the nature of the work involved.
**HamSphere**
HamSphere:
HamSphere is a subscription-based internet service that simulates amateur radio communication using VoIP connections over the Internet. The simulator allows licensed radio amateurs and unlicensed enthusiasts to communicate with one another using a simulated ionosphere. It was designed by Kelly Lindman, a radio amateur with call sign 5B4AIT.
The system allows realistic worldwide connections between amateur radio operators as well as radio enthusiasts. In general, it is similar to other VoIP applications (such as Skype), but with the unique addition of characteristics such as channel selection by tuning, modulation, noise effects and shortwave propagation simulation.
Before using the system, it is necessary for a radio amateur's call sign to be validated. The HamSphere system relies on various online amateur callbooks for verification before a call sign is added to the list of validated users.
The system may be used without a verified radio amateur license and has a callsign generator providing unique unofficial HamSphere callsigns.
HamSphere:
The software is written to run on Microsoft Windows, Apple OS X or Linux using Java. Also available are mobile editions of the software running on Apple mobile devices (iPhone, iPod touch, and iPad) available from the Apple App Store, and on Android devices from the Google Play Store. The software is available for download as a free trial but requires a yearly subscription after the free trial expires.
Uses:
Operators can use the HamSphere software in two modes: Simulation mode. This is the unique feature of HamSphere, allowing the user to maintain connections under naturally realistic conditions. Signals may vary and interference is present, giving the user the impression of using a real shortwave transceiver.
Simulation off mode. This mode entails connection to other operators with the reliability of VoIP (noise-free) while maintaining the other typical characteristics of radio communication.
Operating modes:
The HamSphere software has two modulation types: Single-sideband suppressed-carrier transmission or SSB is the default mode of operation where the operator uses speech audio/phone.
Continuous wave or CW where the operator utilizes a built-in Morse Code keyer.
Propagation model:
The mathematical algorithm for the wave propagation is based on a stochastic model and a pre-recorded signal envelope. Multipath propagation is achieved by digitally inducing multiple simulated electromagnetic paths, thus producing signal fading and audio distortion.
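The fading effect described above can be sketched numerically. The path delays, gains, and envelope rates below are illustrative assumptions, not HamSphere's actual parameters: the idea is simply to sum a few delayed, attenuated copies of the signal, each modulated by a slowly varying random envelope.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000                              # audio sample rate in Hz (assumed)
t = np.arange(fs) / fs                 # one second of samples
signal = np.sin(2 * np.pi * 440 * t)   # 440 Hz test tone

# Sum delayed, attenuated copies of the signal, each with a slowly
# varying random envelope, to imitate multipath fading.
faded = np.zeros_like(signal)
for delay_ms, gain in [(0.0, 1.0), (1.5, 0.5), (3.0, 0.25)]:
    d = int(delay_ms * fs / 1000)      # delay in samples
    envelope = 1 + 0.3 * np.sin(2 * np.pi * 0.5 * t + rng.uniform(0, 2 * np.pi))
    faded += np.roll(signal, d) * gain * envelope

faded /= np.max(np.abs(faded))         # normalize to [-1, 1]
```

The interference between the delayed copies produces the periodic deep fades and audio distortion characteristic of shortwave reception.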
Detector and Filters:
Signals are received and converted into audible form using a product detector that mixes the local oscillator signal with the received signal, much as in software-defined radio. The decoded audio signal is then filtered with a 17th-order FIR filter with a bandwidth of 2.8 kHz.
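A minimal sketch of this detection chain follows. The sample rate and the oscillator/carrier frequencies are illustrative assumptions; only the 2.8 kHz bandwidth and the 17th order come from the text. Mixing the received carrier with the local oscillator produces sum and difference frequencies, and the low-pass FIR filter keeps only the audio-band difference tone.

```python
import numpy as np

fs = 48000            # sample rate in Hz (assumed)
cutoff = 2800.0       # 2.8 kHz audio bandwidth, as described
order = 17            # 17th-order FIR filter (18 taps)

# Product detection: multiply the received carrier by the local
# oscillator; the difference frequency lands in the audio band.
t = np.arange(fs) / fs
lo = np.cos(2 * np.pi * 10000 * t)    # local oscillator (assumed 10 kHz)
rx = np.cos(2 * np.pi * 11000 * t)    # received signal (assumed 11 kHz)
mixed = rx * lo                        # components at 1 kHz and 21 kHz

# Windowed-sinc low-pass FIR taps (Hamming window).
n = np.arange(order + 1) - order / 2
taps = np.sinc(2 * cutoff / fs * n) * np.hamming(order + 1)
taps /= taps.sum()                     # unity gain at DC

audio = np.convolve(mixed, taps, mode="same")  # keeps the 1 kHz difference tone
```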
**Multidimensional spectral estimation**
Multidimensional spectral estimation:
Multidimensional spectral estimation is a generalization of spectral estimation, normally formulated for one-dimensional signals, to multidimensional signals or multivariate data, such as wave vectors.
Motivation:
Multidimensional spectral estimation has gained popularity because of its application in fields like medicine, aerospace, sonar, radar, bioinformatics and geophysics. In recent years, a number of methods have been suggested to design models with finite parameters to estimate the power spectrum of multidimensional signals. This article covers the basics of methods used to estimate the power spectrum of multidimensional signals.
Applications:
There are many applications of spectral estimation of multi-D signals, such as classification of signals as low-pass, high-pass, pass-band and stop-band. It is also used in compression and coding of audio and video signals, beamforming and direction finding in radar, seismic data estimation and processing, arrays of sensors and antennas, and vibration analysis. In the field of radio astronomy, it is used to synchronize the outputs of an array of telescopes.
Basic Concepts:
In a single dimensional case, a signal is characterized by an amplitude and a time scale. The basic concepts involved in spectral estimation include autocorrelation, multi-D Fourier transform, mean square error and entropy. When it comes to multidimensional signals, there are two main approaches: use a bank of filters or estimate the parameters of the random process in order to estimate the power spectrum.
Methods:
Classical Estimation Theory: This is a technique to estimate the power spectrum of a one-dimensional or multidimensional signal, since it cannot be calculated accurately. Given are samples of a wide-sense stationary random process and its second-order statistics (measurements). The estimates are obtained by applying a multidimensional Fourier transform to the autocorrelation function of the random signal. The estimation begins by calculating a periodogram, obtained by squaring the magnitude of the multidimensional Fourier transform of the measurements r(i,n). The spectral estimates obtained from the periodogram have a large variance in amplitude between consecutive periodogram samples or in wavenumber. This problem is addressed by the techniques that constitute classical estimation theory. They are as follows: 1. Bartlett suggested a method that averages the spectral estimates to calculate the power spectrum. The measurements are divided into equally spaced segments in time and an average is taken. This gives a better estimate.
Methods:
2. Based on the wavenumber and the index of the receiver/output, the segments can be partitioned. This increases the number of spectral estimates and decreases the variance between consecutive segments.
3. Welch suggested dividing the measurements using data window functions, calculating a periodogram for each segment, averaging the periodograms to get a spectral estimate, and calculating the power spectrum using the Fast Fourier Transform (FFT). This increases the computational speed.
4. A smoothing window smooths the estimate by multiplying the periodogram with a smoothing spectrum. The wider the main lobe of the smoothing spectrum, the smoother the estimate becomes, at the cost of frequency resolution.
$$P(k_x,\omega)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\varphi_{ss}(x,t)\,e^{-j(\omega t-k'x)}\,dx\,dt$$
$$\varphi_{ss}(x,t)=E\!\left[s(\xi,\tau)\,s^{*}(\xi-x,\tau-t)\right]$$
Bartlett's case: $$P_B(\omega)=\frac{1}{\det N}\sum_{l}\left|\sum_{n}x(n+Ml)\,e^{-j\omega'n}\right|^{2}$$
Modified periodogram: $$P_M(\omega)=\frac{1}{\det N}\left|\sum_{n}g(n)\,x(n)\,e^{-j\omega'n}\right|^{2}$$
Welch's case: $$P_W(\omega)=\frac{1}{\det N}\sum_{l}\left|\sum_{n}g(n)\,x(n+Ml)\,e^{-j\omega'n}\right|^{2}$$
Advantages: A straightforward method involving Fourier transforms. Limitations: Since some of the above methods sample the sequence in time, the frequency resolution is reduced (aliasing).
The number of available realizations of a wide-sense stationary random process is small, which makes it difficult to calculate the estimates accurately.
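Bartlett's segment-averaging idea can be sketched in one dimension (the multi-D case applies the same averaging to a multidimensional FFT). The record length, segment count, and test signal below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 1024, 8                 # total samples and number of segments (assumed)
M = N // L                     # segment length (128)
t = np.arange(N)
x = np.sin(2 * np.pi * 0.1 * t) + rng.standard_normal(N)   # tone in noise

# Raw periodogram over the full record: fine resolution, high variance.
raw = np.abs(np.fft.rfft(x))**2 / N

# Bartlett's method: average the periodograms of L non-overlapping segments.
segments = x.reshape(L, M)
bartlett = np.mean(np.abs(np.fft.rfft(segments, axis=1))**2 / M, axis=0)
```

The averaging reduces the variance by roughly a factor of L at the cost of frequency resolution (an M-point rather than N-point transform), which is exactly the trade-off noted above.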
Methods:
High Resolution Spectral Estimation: This method gives a better estimate, with higher frequency resolution than classical estimation theory. In the high resolution estimation method, a variable wavenumber window is used which passes only certain wavenumbers and suppresses the others. Capon's work established an estimation method using wavenumber-frequency components, resulting in an estimate with higher frequency resolution. It is similar to the maximum likelihood method, as the optimization tool used is similar.
Methods:
Assumption: The output obtained from the sensors is a wide-sense stationary random process with zero mean.
$$P_C(k_x^{o},\omega^{o})=E\!\left[|y(i,n)|^{2}\right]=\frac{1}{\sum_{\alpha=0}^{N-1}\sum_{\beta=0}^{M-1}\sum_{l=0}^{N-1}\sum_{m=0}^{M-1}\psi_{e}(l,\alpha;m,\beta)}$$
Advantages: Higher frequency resolution compared to other existing methods.
Better frequency estimate since we are using a variable wavenumber window as compared to classical method which uses a fixed wavenumber window.
Faster computational speed, as it uses the FFT.
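The Capon (minimum-variance) idea behind this class of estimators can be sketched for a 1-D sensor array; the array size, snapshot count, and source frequency below are illustrative assumptions. The spectrum is evaluated as the reciprocal of a quadratic form in the inverse sample covariance, which is what gives the data-adaptive "variable window":

```python
import numpy as np

rng = np.random.default_rng(2)
M, K = 8, 200                   # sensors and snapshots (assumed)
f0 = 0.3                        # true normalized spatial frequency (assumed)

n = np.arange(M)
steer = lambda f: np.exp(2j * np.pi * f * n)     # steering vector a(f)

# Simulated array snapshots: one plane wave plus white noise.
s = rng.standard_normal(K)                       # source amplitudes
noise = 0.1 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
X = np.outer(steer(f0), s) + noise

R = X @ X.conj().T / K                           # sample covariance matrix
Rinv = np.linalg.inv(R)

# Capon spectrum: P(f) = 1 / (a(f)^H R^{-1} a(f)).
freqs = np.linspace(-0.5, 0.5, 201)
P = np.array([1.0 / np.real(steer(f).conj() @ Rinv @ steer(f)) for f in freqs])
```

The estimate peaks sharply at the source frequency, with narrower lobes than the fixed-window periodogram of the same data would give.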
Methods:
Separable Spectral Estimator: In this type of estimation, the multidimensional signal is chosen to be a separable function. Because of this property, the Fourier analysis can be viewed as taking place successively in each dimension. A time delay in the magnitude-squaring operation allows the Fourier transform to be processed in each dimension. A discrete-time multidimensional Fourier transform is applied along each dimension, and at the end a maximum entropy estimator is applied and the magnitude is squared.
Methods:
Advantages: The Fourier analysis is flexible, as the signal is separable.
It preserves the phase components of every dimension unlike other spectral estimators.
Methods:
All-pole Spectral Modelling: This method is an extension of a 1-D technique called autoregressive spectral estimation. In autoregressive models, the output variables depend linearly on their own previous values. In this model, estimating the power spectrum reduces to estimating the coefficients from the autocorrelation coefficients of the random process, which are assumed to be known for a specific region. The power spectrum $P_A(k_x,\omega)$ of a random process $r(i,n)$ is given by:
$$P_A(k_x,\omega)=P_e(k_x,\omega)\left|\frac{1}{1-A(k_x,\omega)}\right|^{2}$$
Above, $P_e(k_x,\omega)$ is the power spectrum of a random process $e(i,n)$, which is given as the input to a system with transfer function $\left|\frac{1}{1-A(k_x,\omega)}\right|$ to obtain $r(i,n)$, and $A(k_x,\omega)$ is:
$$A(k_x,\omega)=\sum_{p=0}^{N-1}\sum_{q=0}^{M-1}a(p,q)\exp(jk_xp-j\omega q)$$
Therefore, the power estimation reduces to estimating the coefficients $a(p,q)$ from the autocorrelation function $\varphi(l,m)$ of the random process. The coefficients can also be estimated using the linear prediction formulation, which minimizes the mean square error between the actual random signal and the predicted values of the random signal.
Methods:
Limitations: In 1-D, there are the same number of linear equations as unknowns because of the autocorrelation matching property. This may not be possible in multi-D, since the set of parameters does not contain enough degrees of freedom to match the autocorrelation coefficients.
The array of coefficients is assumed to be limited to a certain area.
In the 1-D formulation of linear prediction, the inverse filter has the minimum-phase property, proving that the filter is stable. This is not necessarily true in the multi-D case.
In the 1-D formulation the autocorrelation matrix is positive definite, but a positive-definite extension may not exist in the multi-D case.
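Once the coefficients $a(p,q)$ are in hand, evaluating the all-pole spectrum on a wavenumber-frequency grid is direct. The coefficient values below are illustrative assumptions (small enough that $|A| < 1$ everywhere, so the denominator never vanishes), and the driving-noise spectrum is assumed flat:

```python
import numpy as np

# Illustrative (assumed) 2x2 prediction-coefficient array a(p,q), a(0,0) = 0.
a = np.array([[0.0, 0.4],
              [0.3, 0.1]])
Pe = 1.0                        # driving-noise spectrum, assumed white (flat)

kx = np.linspace(-np.pi, np.pi, 64)
w = np.linspace(-np.pi, np.pi, 64)
KX, W = np.meshgrid(kx, w, indexing="ij")

# A(kx, w) = sum_{p,q} a(p,q) exp(j*kx*p - j*w*q)
A = np.zeros_like(KX, dtype=complex)
for p in range(a.shape[0]):
    for q in range(a.shape[1]):
        A += a[p, q] * np.exp(1j * KX * p - 1j * W * q)

PA = Pe * np.abs(1.0 / (1.0 - A))**2    # all-pole power spectrum on the grid
```

The hard part in practice is the estimation of $a(p,q)$ itself, which is where the multi-D limitations listed above arise.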
Methods:
Maximum Entropy Spectral Estimation: In this method of spectral estimation, we seek the spectral estimate whose inverse Fourier transform matches the known autocorrelation coefficients. We maximize the entropy of the spectral estimate subject to matching the autocorrelation coefficients. The entropy equation is:
$$H=\frac{1}{4\pi^{2}}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\log P(k_x,\omega)\,dk_x\,d\omega$$
The power spectrum $P(k_x,\omega)$ can be expressed as a sum of known autocorrelation coefficients and unknown autocorrelation coefficients. By adjusting the values of the unconstrained coefficients, the entropy can be maximized.
Methods:
The maximum entropy estimate has the form:
$$P_{ME}=\frac{1}{\sum_{l}\sum_{m}\lambda(l,m)\exp(jk_xl-j\omega m)}$$
$\lambda(l,m)$ must be chosen such that the known autocorrelation coefficients are matched.
Limitations: The optimization is constrained. This can be overcome using the method of Lagrange multipliers.
Methods:
The all-pole spectral estimate is not the solution to the maximum entropy problem in the multidimensional case, as it is in 1-D. This is because the all-pole spectral model does not contain enough degrees of freedom to match the known autocorrelation coefficients. Advantages: Errors in measuring or estimating the known autocorrelation coefficients can be accommodated, since an exact match is not required.
Methods:
Disadvantages: A large number of computations is required.
Methods:
Improved Maximum Likelihood Method (IMLM): This is a relatively new approach. The improved maximum likelihood method (IMLM) is a combination of two MLM (maximum likelihood) estimators. The improved maximum likelihood of two 2-dimensional arrays A and B at a wavenumber k (which gives information about the orientation of the array in space) is given by the relation:
$$IMLM(k;A,B)=\frac{1}{\dfrac{1}{MLM(k;A)}-\dfrac{1}{MLM(k;B)}}$$
Array B is a subset of A. Therefore, assuming A > B, if there is a difference between the MLM of A and the MLM of B, then a significant part of the estimated spectral energy at that frequency may be due to power leakage from other frequencies. De-emphasizing the MLM of A may improve the spectral estimate. This is accomplished by multiplying by a weighting function which is smaller when there is a greater difference between the MLM of B and the MLM of A.
Methods:
$$IMLM(k;A,B)=\frac{MLM(k;A)\,MLM(k;B)}{MLM(k;B)-MLM(k;A)}=MLM(k;A)\,W_{AB}(k)$$
where $W_{AB}(k)$ is the weighting function, given by the expression:
$$W_{AB}(k)=\frac{MLM(k;B)}{MLM(k;B)-MLM(k;A)}$$
Advantages: Used as an alternative to MLM or MEM (Maximum Entropy Method / principle of maximum entropy). IMLM has better resolution than MLM and requires fewer computations than MEM.
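The IMLM combination reduces to simple arithmetic on the two MLM estimates at each wavenumber. A sketch with placeholder values (the MLM numbers are assumptions for illustration) confirms that the two algebraic forms given above agree:

```python
def imlm(mlm_a, mlm_b):
    """IMLM(k; A, B) = 1 / (1/MLM(k; A) - 1/MLM(k; B))."""
    return 1.0 / (1.0 / mlm_a - 1.0 / mlm_b)

def weight(mlm_a, mlm_b):
    """W_AB(k) = MLM(k; B) / (MLM(k; B) - MLM(k; A))."""
    return mlm_b / (mlm_b - mlm_a)

# Placeholder MLM values at one wavenumber (assumed), with MLM_B > MLM_A.
a, b = 2.0, 4.0

# The two forms agree: 1/(1/a - 1/b) == a*b/(b - a) == a * W_AB.
assert abs(imlm(a, b) - a * b / (b - a)) < 1e-12
assert abs(imlm(a, b) - a * weight(a, b)) < 1e-12
```

Note that as MLM_B approaches MLM_A the weight grows, while a large disagreement between the two estimates drives the weight toward 1, de-emphasizing suspected leakage as described above.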
**Voice search**
Voice search:
Voice search, also called voice-enabled search, allows the user to use a voice command to search the Internet, a website, or an app.
In a broader definition, voice search includes open-domain keyword query on any information on the Internet, for example in Google Voice Search, Cortana, Siri and Amazon Echo.
Voice search is often interactive, involving several rounds of interaction that allows a system to ask for clarification. Voice search is a type of dialog system.
Voice search is not a replacement for typed search. Rather, the search terms, experience and use cases can differ heavily depending on the input type.
Method:
Voice searching is a method of search which allows users to search using spoken voice commands rather than typing. The search can be done on any device with a voice input. Three common methods to activate voice search:
Click on the voice command icon
Call out the name of the virtual assistant
Click on the home button or gesture on the interface to activate the virtual assistant
Wake phrases include: Apple: "Hey, Siri"; Google: "OK, Google"; Amazon: "Hey, Alexa"; Microsoft: "Hey, Cortana"; Samsung: "Hi, Bixby".
Supported language:
Language is the most essential factor for a system to understand the user and provide the most accurate results for what the user searches. This spans languages, dialects, and accents, as users want a voice assistant that both understands them and speaks to them understandably.
Supported language:
While spoken and written languages differ, voice search should support natural spoken language instead of only transforming voice into text and doing a regular text search with the help of speech recognition. For example, in typed search an eCommerce user can easily copy and paste an alphanumeric product code into the search field, but when speaking, the search terms can be very different, such as "show me the new Bluetooth headphones by Samsung".
How it works:
The difference between text and voice search is not only the input type. The mechanism must include automatic speech recognition (ASR) for input, but it can also include natural language understanding for natural spoken search queries such as "What's the population of the United States?" It can include text-to-speech (TTS) or a regular display for output modalities. Users might sometimes be required to activate the search using a wake word. The search system then detects the language spoken by the user, followed by the keywords and the context of the sentence. Lastly, the device returns results depending on its output modality: a device with a screen might display the results, while a device without a screen will speak them back to the searcher.
**Explosively pumped flux compression generator**
Explosively pumped flux compression generator:
An explosively pumped flux compression generator (EPFCG) is a device used to generate a high-power electromagnetic pulse by compressing magnetic flux using high explosive.
An EPFCG only ever generates a single pulse as the device is physically destroyed during operation. They require a starting current pulse to operate, usually supplied by capacitors.
Explosively pumped flux compression generator:
Explosively pumped flux compression generators are used to create ultrahigh magnetic fields in physics and materials science research and extremely intense pulses of electric current for pulsed power applications. They are being investigated as power sources for electronic warfare devices known as transient electromagnetic devices that generate an electromagnetic pulse without the costs, side effects, or enormous range of a nuclear electromagnetic pulse device.
Explosively pumped flux compression generator:
The first work on these generators was conducted by the VNIIEF center for nuclear research in Sarov in the Soviet Union at the beginning of the 1950s followed by Los Alamos National Laboratory in the United States.
History:
At the start of the 1950s, the need for very short and powerful electrical pulses became evident to Soviet scientists conducting nuclear fusion research. The Marx generator, which stores energy in capacitors, was the only device capable at the time of producing such high power pulses. The prohibitive cost of the capacitors required to obtain the desired power motivated the search for a more economical device. The first magneto-explosive generators, which followed from the ideas of Andrei Sakharov, were designed to fill this role.
Mechanics:
Magneto-explosive generators use a technique called "magnetic flux compression", described in detail below. The technique is made possible when the time scales over which the device operates are sufficiently brief that resistive current loss is negligible, and the magnetic flux through any surface surrounded by a conductor (copper wire, for example) remains constant, even though the size and shape of the surface may change.
Mechanics:
This flux conservation can be demonstrated from Maxwell's equations. The most intuitive explanation of this conservation of enclosed flux follows from Lenz's law, which says that any change in the flux through an electric circuit will cause a current in the circuit which will oppose the change. For this reason, reducing the area of the surface enclosed by a closed loop conductor with a magnetic field passing through it, which would reduce the magnetic flux, results in the induction of current in the electrical conductor, which tends to keep the enclosed flux at its original value. In magneto-explosive generators, the reduction in area is accomplished by detonating explosives packed around a conductive tube or disk, so the resulting implosion compresses the tube or disk. Since flux is equal to the magnitude of the magnetic field multiplied by the area of the surface, as the surface area shrinks the magnetic field strength inside the conductor increases. The compression process partially transforms the chemical energy of the explosives into the energy of an intense magnetic field surrounded by a correspondingly large electric current.
Mechanics:
The purpose of the flux generator can be either the generation of an extremely strong magnetic field pulse, or an extremely strong electric current pulse; in the latter case the closed conductor is attached to an external electric circuit. This technique has been used to create the most intense manmade magnetic fields on Earth; fields up to about 1000 teslas (about 1000 times the strength of a typical neodymium permanent magnet) can be created for a few microseconds.
Mechanics:
Elementary description of flux compression: An external magnetic field (blue lines) threads a closed ring made of a perfect conductor (with zero resistance). The total magnetic flux Φ through the ring is equal to the magnetic field B multiplied by the area A of the surface spanning the ring. The nine field lines represent the magnetic flux threading the ring.
Mechanics:
Suppose the ring is deformed, reducing its cross-sectional area. The magnetic flux threading the ring, represented by five field lines, is reduced by the same ratio as the area of the ring. The variation of the magnetic flux induces a current (red arrows) in the ring by Faraday's law of induction, which in turn creates a new magnetic field circling the wire (green arrows) by Ampere's circuital law. The new magnetic field opposes the field outside the ring but adds to the field inside, so that the total flux in the interior of the ring is maintained: four green field lines added to the five blue lines give the original nine field lines.
Mechanics:
By adding together the external magnetic field and the induced field, it can be shown that the net result is that the magnetic field lines originally threading the hole stay inside the hole, thus flux is conserved, and a current has been created in the conductive ring. The magnetic field lines are "pinched" closer together, so the (average) magnetic field intensity inside the ring increases by the ratio of the original area to the final area.
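The amplification follows directly from flux conservation: B₁A₁ = B₂A₂, so halving the ring's radius quarters its area and quadruples the interior field. A worked check, with illustrative (assumed) starting values:

```python
import math

B1 = 3.0               # initial field in teslas (illustrative)
r1, r2 = 0.05, 0.01    # ring radius before/after compression, metres (assumed)

A1 = math.pi * r1**2   # initial enclosed area
A2 = math.pi * r2**2   # compressed enclosed area

# Flux conservation: B1 * A1 = B2 * A2  =>  B2 = B1 * (A1/A2) = B1 * (r1/r2)^2
B2 = B1 * A1 / A2      # 3 T * 25 = 75 T
```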
The various types of generators:
The simple basic principle of flux compression can be applied in a variety of different ways. Soviet scientists at the VNIIEF in Sarov, pioneers in this domain, conceived of three different types of generators: In the first type of generator (MK-1, 1951) developed by Robert Lyudaev, the magnetic flux produced by a wound conductor is confined to the interior of a hollow metallic tube surrounded by explosives, and submitted to a violent compression when the explosives are fired; a device of the same type was developed in the United States a dozen years later by C. M. (Max) Fowler's team at Los Alamos.
The various types of generators:
In the second type of generator (MK-2, 1952), the magnetic flux, confined between the windings of the external conductor and a central conductive tube filled with explosive, is compressed by the conical 'piston' created by the deformation of the central tube as the detonation wave travels across the device.
The various types of generators:
A third type of generator (DEMG), developed by Vladimir Chernyshev, is cylindrical, and contains a stack of concave metallic disks, facing each other in pairs, to create hollow modules (with the number varying according to the desired power), separated by explosives; each module functions as an independent generator. Such generators can, if necessary, be utilised independently, or even assembled in a chain of successive stages: the energy produced by each generator is transferred to the next, which amplifies the pulse, and so on. For example, it is foreseen that the DEMG generator will be supplied by an MK-2 type generator.
The various types of generators:
Hollow tube generators: In the spring of 1952, R. Z. Lyudaev, E. A. Feoktistova, G. A. Tsyrkov, and A. A. Chvileva undertook the first experiment with this type of generator, with the goal of obtaining a very high magnetic field.
The various types of generators:
The MK-1 generator functions as follows: A longitudinal magnetic field is produced inside a hollow metallic conductor by discharging a bank of capacitors into the solenoid that surrounds the cylinder. To ensure rapid penetration of the field into the cylinder, there is a slit in the cylinder, which closes rapidly as the cylinder deforms. The explosive charge placed around the tube is detonated in a manner that ensures that the compression of the cylinder commences when the current through the solenoid is at its maximum. The convergent cylindrical shock wave unleashed by the explosion produces a rapid contraction (greater than 1 km/s) of the central cylinder, compressing the magnetic field and creating an inductive current, as per the explanation above (the speed of contraction permits, to first approximation, the neglect of Joule losses and the treatment of the cylinder as a perfect conductor). The first experiments were able to attain magnetic fields of millions of gauss (hundreds of teslas), given an initial field of 30 kG (3 T), which in free space ("air") corresponds to H = B/μ0 = (3 Vs/m²) / (4π × 10⁻⁷ Vs/Am) = 2.387×10⁶ A/m (approximately 2.4 MA/m).
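The H = B/μ0 conversion quoted for the initial field is easy to verify numerically:

```python
import math

B = 3.0                    # 30 kG initial field, expressed in teslas
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, in Vs/Am
H = B / mu0                # magnetic field strength, in A/m (about 2.4 MA/m)
```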
The various types of generators:
Helical generators: Helical generators were principally conceived to deliver an intense current to a load situated at a safe distance. They are frequently used as the first stage of a multi-stage generator, with the exit current used to generate a very intense magnetic field in a second generator.
The various types of generators:
The MK-2 generators function as follows: A longitudinal magnetic field is produced between a metallic conductor and a surrounding solenoid by discharging a battery of capacitors into the solenoid. After the charge is ignited, a detonation wave propagates in the explosive charge placed in the interior of the central metallic tube (from left to right in the figure). Under the pressure of the detonation wave, the tube deforms into a cone which contacts the helically wrapped coil, diminishing the number of turns not short-circuited, compressing the magnetic field and creating an inductive current. At the point of maximal flux compression, the load switch is opened, which then delivers the maximal current to the load. The MK-2 generator is particularly interesting for the production of intense currents, up to 10⁸ A (100 MA), as well as a very high energy magnetic field, as up to 20% of the explosive energy can be converted to magnetic energy, and the field strength can attain 2 × 10⁶ gauss (200 T).
The various types of generators:
The practical realization of high performance MK-2 systems required the pursuit of fundamental studies by a large team of researchers; this was effectively achieved by 1956, following the production of the first MK-2 generator in 1952, and the achievement of currents over 100 megaamperes from 1953.
The various types of generators:
Disc generators: A DEMG generator functions as follows: Conductive metallic discs, assembled in facing pairs to create hollow modules having the form of a lined torus, with explosive packed between pairs of modules, are stacked inside a cylinder; the number of modules can vary according to the desired power (the figure shows a device of 15 modules), as can the radius of the discs (of the order of 20 to 40 cm).
The various types of generators:
Current runs through the device, supplied by a MK-2 generator, and an intense magnetic field is created inside each module.
When initiated, the explosion begins on the axis and propagates radially outwards, deforming the disc shaped protuberances with triangular section and pushing them away from the axis. The outward movement of this section of conductor plays the role of a piston.
As the explosion proceeds, the magnetic field is compressed in the inside of each module by the conductive piston and the simultaneous drawing together of the inner faces, also creating an inductive current.
The various types of generators:
As the induced current attains its maximum, the fuse opening switch fuses and the load switch simultaneously closes, allowing the current to be delivered to the load (the mechanism for the operation of the load switch is not explained in the available documentation). Systems using up to 25 modules have been developed at VNIIEF. Outputs of 100 MJ at 256 MA have been produced by a generator a metre in diameter composed of three modules.
**Anodontia**
Anodontia:
Anodontia is a rare genetic disorder characterized by the congenital absence of all primary or permanent teeth. It is divided into two subsections: complete absence of teeth, or absence of only some teeth. It is associated with the group of skin and nerve syndromes called the ectodermal dysplasias. Anodontia is usually part of a syndrome and seldom occurs as an isolated entity. There is usually no single exact cause for anodontia. The defect results from obstruction of the dental lamina during embryogenesis due to local, systemic and genetic factors.
Anodontia:
Congenital absence of permanent teeth can present as hypodontia, usually missing one or two permanent teeth, or oligodontia, the congenital absence of six or more teeth. Congenital absence of all wisdom teeth, or third molars, is relatively common. Anodontia is the congenital absence of teeth and can involve some or all teeth, whereas partial anodontia (or hypodontia) involves both dentitions or only teeth of the permanent dentition (Dorland's 1998). Approximately 1% of the population has oligodontia. Many denominations are attributed to this anomaly: partial anodontia, hypodontia, oligodontia, congenital absence, anodontia, bilateral aplasia. Anodontia is the term used in the controlled vocabulary Medical Subject Headings (MeSH) from MEDLINE, which was developed by the United States National Library of Medicine. The congenital absence of at least one permanent tooth is the most common dental anomaly and may contribute to masticatory dysfunction, speech impairment, aesthetic problems, and malocclusion (Shapiro and Farrington 1983). Absence of lateral incisors represents a major stereotype; individuals with this condition are perceived as the most socially aggressive compared with people without anodontia (Shaw 1981). Anodontia is less common than hypodontia, which has a prevalence of 0.1-0.7% in primary teeth and 3-7.5% in permanent teeth.
Signs and symptoms:
The main sign of anodontia is a child not having developed any permanent teeth by the age of 12. Another sign can be the absence of baby teeth by 12 to 13 months of age. Symptoms associated with anodontia include alopecia, lack of sweat glands, cleft lip or palate, and missing fingernails. Typically, these symptoms are seen because anodontia is usually associated with ectodermal dysplasia. In the rare case that ectodermal dysplasia is not present, anodontia is caused by an unknown genetic mutation.
Cause:
Anodontia typically occurs with the presence of ectodermal dysplasia, a group of disorders in which two or more ectodermally derived structures develop abnormally. In the rare case that ectodermal dysplasia is not present, anodontia is caused by an unknown genetic mutation. Although no specific gene has been identified, many genes have been found to be associated with anodontia, including the EDA, EDAR, and EDARADD genes, as well as MSX1, PAX9, IRF6, GREM2, AXIN2, LRP6, SMOC2, LTBP3, PITX2, and WNT10B; these genes are involved in hypodontia and oligodontia. The WNT10A gene is considered the major gene involved in hypodontia and oligodontia. If anodontia is present on the maternal or paternal side, the chances of it being inherited are increased.
Mechanisms and Pathophysiology:
Anodontia is a genetic disorder that typically occurs as a result of another syndrome. Different outcomes can occur depending on which gene is inherited. It remains unclear which specific gene is the direct cause, but several genes are known to play a role when inherited. The main genes involved are EDA, EDAR, and EDARADD. A child inherits one working gene and one non-working gene, one from an affected parent and one from an unaffected parent, resulting in a 50% chance of inheriting the disorder. Anodontia alone affects only the teeth and has no effect on any other part of the body.
Associated syndromes:
Hypodontia and anodontia are frequently associated with approximately 70 genetic disorders and syndromes. Syndromes with ectodermal involvement provide a prime setting for anodontia to occur; examples include Rieger's syndrome, Robinson's syndrome, and focal dermal hypoplasia. Three syndromes that classically show signs of anodontia are oculomandibulodyscephaly, mesoectodermal dysplasia, and ectodermal dysplasia. In oculomandibulodyscephaly there are no permanent teeth, but deciduous teeth are present. In mesoectodermal dysplasia the symptoms are anodontia and hypodontia. In ectodermal dysplasia, oligodontia is also present.
Associated syndromes:
Other symptoms associated with anodontia include alopecia, loss of sweat glands, cleft lip or palate, and missing fingernails.
Diagnosis:
Anodontia can be diagnosed when a baby has not begun to develop teeth by around 12 to 13 months of age, or when a child has not developed their permanent teeth by the age of 10. A dentist can use a special X-ray, such as a panoramic image, to check whether any teeth are developing. A child is also at higher risk of anodontia if a parent has the disorder. If all permanent teeth are absent, anodontia is diagnosed; if between one and five teeth are missing, hypodontia is diagnosed; and if six or more teeth are missing, oligodontia is diagnosed.
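The diagnostic thresholds described above amount to a simple classification rule, sketched below; the function name and the 32-tooth adult default are illustrative assumptions, not clinical terminology.

```python
# Illustrative sketch of the diagnostic thresholds described above.
# The function name and the 32-tooth adult default are assumptions.
def classify_missing_teeth(missing, total=32):
    """Label congenital tooth absence by the number of missing permanent teeth."""
    if missing == total:
        return "anodontia"    # all permanent teeth absent
    if missing >= 6:
        return "oligodontia"  # six or more teeth absent
    if missing >= 1:
        return "hypodontia"   # one to five teeth absent
    return "none"

print(classify_missing_teeth(32))  # -> anodontia
print(classify_missing_teeth(6))   # -> oligodontia
print(classify_missing_teeth(2))   # -> hypodontia
```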
Complications:
The complications associated with anodontia vary, but most involve problems with aesthetic appearance, speech, and masticatory function. Complications may also occur with the placement of dental implants; although rare, these can include a loosening implant screw or sore spots.
Prevention and Treatment:
Anodontia cannot be prevented because it is a genetic disorder. Prosthetic replacement of missing teeth is possible using dental implants or dentures, and this treatment can give patients with anodontia a more aesthetically pleasing appearance. An implant prosthesis in the lower jaw may be recommended for younger patients, as it has been shown to significantly improve craniofacial growth, social development, and self-image. The study supporting this evidence examined individuals with ectodermal dysplasia in age groups of up to 11 years, 11 to 18 years, and more than 18 years. The risk of implant failure was significantly higher in patients younger than 18 years, but the treatment remains well supported in older patients. Overall, an implant prosthesis offers considerable functional, aesthetic, and psychological advantages over a conventional denture.
Prognosis:
Patients diagnosed with anodontia are expected to have a normal life expectancy. Once anodontia is diagnosed, dental implants or dentures will need to be worn in order to treat this disorder. There is an 88.5% to 100% chance for dental implants in patients with ectodermal dysplasia or tooth agenesis to be successful when placed after the age of 18.
Epidemiology:
The prevalence of anodontia is unknown, but it is a very rare disorder. Congenitally missing permanent teeth occur in 2–8% of the general population, and missing primary teeth in 0.1–0.7%. Gender and ethnicity do not play a role in anodontia.
Research:
A 2019 study by R. Constance Wiener and Christopher Waters examined anodontia, hypodontia, and oligodontia in children in West Virginia, where the prevalence of children with missing permanent teeth is high compared with the rest of the nation. In this study, 500 panoramic images were taken of children between the ages of 6 and 11. Of these, 60 children had at least one missing permanent tooth. More females than males had one or more missing permanent teeth: 15.5% of females and 8.8% of males. A 2016 case study described a six-year-old boy who presented with anodontia. There was no family history of anodontia, and the patient did not present any other symptoms of ectodermal dysplasia. Hypodontia was observed in the maxillary arch, and the only teeth present were the left primary first molar and the bilateral primary second molars. The buccal mucosa, palate, and floor of the mouth were considered normal. The patient proceeded with oral rehabilitation and was given a removable denture to wear. The patient struggled at first to keep wearing the denture but gradually adjusted to it. The family reported no problems with retention, and monthly recall visits were begun to monitor any tooth eruptions or adjustments that needed to be made. Improvements in speech, communication, and self-esteem were also observed after placement of the denture. Another case study, in 2013, described an eight-year-old boy who reported missing teeth, difficulty chewing, and difficulty speaking, and who showed other symptoms of ectodermal dysplasia. The father confirmed a family history of missing teeth. The patient also had sensitivity to heat, absence of sweating, dry skin, absent eyebrows and eyelashes, hyperpigmentation, and many other ectodermal dysplasia symptoms.
After a full examination, the patient was diagnosed with complete anodontia and was treated with a complete set of removable dentures. After the dentures were fitted, the patient's facial presentation and expressions improved, and marked improvement was seen in chewing and speech. The patient was also scheduled for recall follow-ups every six months.
**Vortex breaker**
Vortex breaker:
A vortex breaker is a device used in engineering to stop the formation of a vortex when a fluid (liquid or gas) is drained from a vessel such as a tank or vapor-liquid separator. The formation of vortices can entrain vapor in the liquid stream, leading to poor separation in process steps such as distillation or excessive pressure drop, or causing cavitation of downstream pumps. Vortices can also re-entrain solid particles previously separated from a gas stream in a solid-gas separation device such as a cyclone.
Design:
Many different designs of vortex breaker are available. Some use radial vanes or baffles around the liquid exit to reduce the angular velocity of the liquid. The "floor grate" design uses a system of grating similar to the metal floor of a catwalk. Different authors give different rules of thumb for vortex breaker design.
**Super-Jupiter**
Super-Jupiter:
A super-Jupiter is a gas giant exoplanet that is more massive than the planet Jupiter. For example, companions at the planet–brown dwarf borderline, such as the one around the star Kappa Andromedae, have been called super-Jupiters. By 2011 there were 180 known super-Jupiters, some hot, some cold. Even though they are more massive than Jupiter, they remain about the same size as Jupiter up to 80 Jupiter masses, which means that their surface gravity and density increase in proportion to their mass: the increased mass compresses the planet under its own gravity, keeping it from growing larger. In comparison, planets somewhat lighter than Jupiter can be larger, so-called "puffy planets" (gas giants with a large diameter but low density). An example may be the exoplanet HAT-P-1b, with about half the mass of Jupiter but a diameter about 1.38 times larger. CoRoT-3b, with a mass around 22 Jupiter masses, is predicted to have an average density of 26.4 g/cm3, greater than that of osmium (22.6 g/cm3), the densest natural element under standard conditions. The extreme compression of matter inside it causes the high density, since it is likely composed mainly of hydrogen. Its surface gravity is also high, over 50 times that of Earth. In 2012, the super-Jupiter Kappa Andromedae b was imaged around the star Kappa Andromedae, orbiting it at about 1.8 times the distance at which Neptune orbits the Sun.
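The quoted CoRoT-3b figures can be checked with a short back-of-the-envelope calculation. The radius of roughly 1.01 Jupiter radii used here is an assumption drawn from published measurements and is not stated in the text above.

```python
# Sketch: verify the quoted CoRoT-3b density and surface gravity
# from its mass and an assumed radius of ~1.01 Jupiter radii.
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_JUP = 1.898e27  # Jupiter mass, kg
R_JUP = 7.1492e7  # Jupiter equatorial radius, m

def density_g_cm3(mass_mj, radius_rj):
    """Mean density in g/cm^3 for a mass in Jupiter masses and radius in Jupiter radii."""
    m = mass_mj * M_JUP
    r = radius_rj * R_JUP
    volume = 4.0 / 3.0 * math.pi * r ** 3
    return m / volume / 1000.0  # kg/m^3 -> g/cm^3

def surface_gravity_earths(mass_mj, radius_rj):
    """Surface gravity in multiples of Earth's 9.81 m/s^2."""
    m = mass_mj * M_JUP
    r = radius_rj * R_JUP
    return G * m / r ** 2 / 9.81

# CoRoT-3b: ~22 Jupiter masses, roughly Jupiter-sized
print(round(density_g_cm3(22, 1.01), 1))      # ~26.5 g/cm^3, near the quoted 26.4
print(round(surface_gravity_earths(22, 1.01)))  # -> 54, i.e. over 50 Earth gravities
```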
**Screen-printed electrodes**
Screen-printed electrodes:
Screen-printed electrodes (SPEs) are electrochemical measurement devices manufactured by printing different types of ink on plastic or ceramic substrates, allowing quick in-situ analysis with high reproducibility, sensitivity, and accuracy. The composition of the inks (carbon, silver, gold, platinum) used in manufacturing determines the electrode's selectivity and sensitivity, which allows the analyst to design the optimal device for a given purpose. These electrochemical cells evolved from the need to reduce the size of the devices, which in turn reduces the sample volume required in each experiment. In addition, the development of SPEs has enabled a reduction in production costs. One of their principal advantages is the possibility of modification: the composition of the inks can be altered by adding different metals, enzymes, complexing agents, polymers, etc., which is useful for preparing a multitude of electrochemical analyses.
Description:
Screen printing is one of the oldest methods of reproduction. Screen-printed electrodes (SPEs) are presented as a single device containing three different electrodes: Working electrode. Its response is sensitive to the analyte concentration.
Reference electrode. It allows the application of a known potential, which is independent of the analyte and other ions concentration. Its potential is constant, and the working electrode potential is measured against it.
Auxiliary or counter electrode. It is the electrode that completes the circuit of the three-electrode cell, as it allows the passage of current. It enables the analysis of processes in which electronic transfer takes place.
Description:
The three electrodes can be printed on different types of substrate (plastic or ceramic) and can be manufactured with a great variety of inks. The most common inks are those composed of silver and carbon; however, they can be based on other metals such as platinum, gold, palladium, or copper. In addition, the electrodes can be modified with enzymes, metallic nanoparticles, carbon nanotubes, polymers, or complexing agents. The ink composition is chosen according to the final application and the selectivity and sensitivity required for the analysis. The electrode manufacturing process involves the sequential deposition of layers of conductive and/or insulating inks on the substrate of interest. The process consists of several stages: Film deposition, usually on plastic or ceramic.
Description:
Drying of the printed films, which eliminates any organic solvents needed to produce proper adhesion. Drying can be done in an oven at temperatures between 300 and 1200 °C, or with cold-cured ink followed by a UV-light photocuring process.
Description:
The process can be repeated if complex structures are required, using the appropriate material for the specific design. As mentioned above, the most commonly used inks are silver and carbon, so their printing and manufacturing characteristics are worth highlighting: Silver ink. This ink acts as a conductor, while the working electrodes are printed mainly with graphite inks, although gold, platinum, and silver inks are also used. Some ink components induce differences in detection and analysis.
Description:
Silver/silver chloride ink. Silver/silver chloride is an industry preferred reference electrode because it has stable electrochemical potential under numerous measurement conditions. This makes silver/silver chloride ink a good choice for a variety of medical and industrial applications that require conductive ink, such as biometric monitoring or heavy metal detection. The properties of the ink can be adjusted by changing the ratio of silver to silver chloride.
Description:
Carbon ink. The electrode composition is usually confidential information of the manufacturing company; however, there are key elements, such as binders, used to improve the affinity between substrate and ink, and solvents, used to adjust the viscosity for the printing process. The type, size, or charge of the graphite particles and the printing and drying conditions can affect the electron transfer and the analytical performance of carbon sensors.
Description:
Gold ink. Gold ink is currently generating more interest due to the formation of self-assembling monolayers (SAM) by means of strong Au-S bonds. Gold ink has less use due to
Advantages and applications:
Screen-printed electrodes offer several advantages, such as low cost, design flexibility, great reproducibility of the process and of the electrodes obtained, the possibility of manufacture with different materials, and wide scope for modifying the working surface. Another advantage is the possibility of connection to portable instrumentation, allowing in-situ determination of specific analytes. In addition, screen-printed electrodes avoid tedious cleaning processes. Currently, they are used as a support for portable electrochemical biosensors for environmental analysis. Some applications are: Phenolic compounds: their rapid detection with electrochemical biosensors based on SPEs is a challenge because they easily penetrate plants, animals, and humans through membranes and skin, producing toxic side effects.
Advantages and applications:
Nitrite and phosphate: their detection at low levels is of great importance due to their toxicity. SPEs capable of detecting nitrite and phosphate have been designed. Micro-electrodes combined with screen-printing technology have been used to manufacture nitrite-sensitive sensors.
Pesticides: Organophosphate pesticides are harmful to humans and animals because they inhibit the activity of many enzymes. Nowadays, inhibition biosensors based on SPEs have emerged.
Herbicides: drinking water is contaminated by the increasing use of herbicides. To achieve selective detection, the most common method is the immunoassay, which, combined with SPEs, allows direct detection while avoiding the cleaning and reuse of active components.
Advantages and applications:
Heavy metal detection: simple and economical devices are needed for in-situ detection of heavy metals, owing to their high toxicity even at low concentrations. The most common toxic metal ions are Pb (II) and Hg (II). Pb (II): sensors for lead detection are usually modified with certain materials (carbon, bismuth, or gold, among others) to increase their sensitivity. To improve detection, these modifiers are attached to the SPE surface. The most widely used is bismuth, owing to its great yield and improved sensitivity, reaching the level of parts per billion (ppb).
Advantages and applications:
Hg (II): mercury is the most problematic pollutant. Generally, gold electrodes are used for its detection because of their high affinity for mercury. However, gold electrodes undergo structural changes on the surface caused by the formation of amalgam. Commercially available screen-printed gold electrodes make mercury measurements in water easier because no electrode preparation is required. Generation of SERS substrates: in recent years SPEs have been used to generate in-situ SERS substrates for analytical purposes. Finally, a correct manufacturing process is important to avoid low reproducibility, to favour mineral binders or insulating polymers that give the SPE high resistance, and to use inks that do not significantly affect the kinetics of the reactions taking place. In manufacturing, surface treatments are used to remove organic contaminants from the ink, which improves the electrochemical properties by increasing surface roughness.
**Methoxpropamine**
Methoxpropamine:
Methoxpropamine (MXPr, 2-Oxo-3'-methoxy-PCPr) is a dissociative anesthetic drug of the arylcyclohexylamine class and NMDA receptor antagonist that is closely related to substances such as methoxetamine and PCPr. It has been sold online as a designer drug, first being identified in Denmark in October 2019, and is illegal in Finland.
**Simple Symmetric Transport Protocol**
Simple Symmetric Transport Protocol:
Simple Symmetric Transport Protocol (SSTP) is a protocol for delivering messages between clients and servers. It is used by Microsoft Groove.
**Superior fascia of the urogenital diaphragm**
Superior fascia of the urogenital diaphragm:
The superior fascia of the urogenital diaphragm is continuous with the obturator fascia and stretches across the pubic arch.
Structure:
If the obturator fascia is traced medially after leaving the obturator internus muscle, it will be found attached by some of its deeper or anterior fibers to the inner margin of the pubic arch, while its superficial or posterior fibers pass over this attachment to become continuous with the superior fascia of the urogenital diaphragm.
Behind, this layer of the fascia is continuous with the inferior fascia and with the fascia of Colles; in front it is continuous with the fascial sheath of the prostate, and is fused with the inferior fascia to form the transverse ligament of the pelvis.
Controversy:
Some sources dispute that this structure exists. However, whether this layer is real or imagined, it still serves to describe a division of the contents of the perineum in many modern anatomy resources.
**Subvocalization**
Subvocalization:
Subvocalization, or silent speech, is the internal speech typically made when reading; it provides the sound of the word as it is read. This is a natural process when reading, and it helps the mind to access meanings to comprehend and remember what is read, potentially reducing cognitive load. This inner speech is characterized by minuscule movements in the larynx and other muscles involved in the articulation of speech. Most of these movements are undetectable (without the aid of machines) by the person who is reading. It is one of the components of Alan Baddeley and Graham Hitch's phonological loop proposal, which accounts for the storage of this type of information in short-term memory.
History of subvocalization research:
Subvocalization has been considered as far back as 1868, but only in 1899 did an experiment take place to record movement of the larynx during silent reading, by a researcher named H. S. Curtis, who concluded that silent reading was the only mental activity that created considerable movement of the larynx. In 1950, Edfelt reached a breakthrough when he created an electrically powered instrument that could record movement. He concluded that newer techniques were needed to record information accurately and that efforts should be made to understand this phenomenon instead of eliminating it. After failed attempts to reduce silent speech in study participants, a 1952 study concluded that silent speech is a developmental activity that reinforces learning and should not be disrupted during development. In 1960, Edfelt seconded this opinion.
Techniques for studying subvocalization:
Subvocalization is commonly studied using electromyography (EMG) recordings, concurrent speaking tasks, shadowing, and other techniques. EMG can be used to show the degree to which one is subvocalizing or to train subvocalization suppression. EMG records the electrical activity produced by the articulatory muscles involved in subvocalization; greater electrical activity suggests stronger subvocalization. In suppression training, the trainee is shown their own EMG recordings while attempting to decrease the movement of the articulatory muscles; the recordings allow one to monitor and, ideally, reduce subvocalization. In concurrent speaking tasks, participants are asked to complete an activity specific to the experiment while simultaneously repeating an irrelevant word. For example, one may be asked to read a paragraph while reciting the word "cola" over and over again. Speaking the repeated irrelevant word is thought to preoccupy the articulators used in subvocalization, so subvocalization cannot be used in the mental processing of the activity being studied. Participants who undergo the concurrent speaking task are often compared with other participants who complete the same activity without subvocalization interference. If performance on the activity is significantly worse for the concurrent-speaking group than for the non-interference group, subvocalization is believed to play a role in the mental processing of that activity. Participants in the non-interference comparison group usually also complete a different, yet equally distracting, task that does not involve the articulator muscles (i.e. tapping). This ensures that the difference in performance between the two groups is in fact due to subvocalization disturbances and not to considerations such as task difficulty or divided attention. Shadowing is conceptually similar to concurrent speaking tasks.
Instead of repeating an irrelevant word, shadowing requires participants to listen to a list of words and to repeat those words as fast as possible while completing a separate task being studied by the experimenters. Techniques for subvocalization interference may also include counting, chewing, or locking one's jaw while placing the tongue on the roof of the mouth. Subvocal recognition involves monitoring actual movements of the tongue and vocal cords that can be interpreted by electromagnetic sensors. Through the use of electrodes and nanocircuitry, synthetic telepathy could be achieved, allowing people to communicate silently.
Evolutionary background:
The exploration into the evolutionary background of subvocalization is currently very limited. The little known is predominantly about language acquisition and memory. Evolutionary psychologists suggest that the development of subvocalization is related to modular aspects of the brain. There has been a great amount of exploration on the evolutionary basis of universal grammar. The idea is that although the specific language one initially learns is dependent on one's culture, all languages are learned through the activation of universal "language modules" that are present in each of us. This concept of a modular mind is a prevalent idea that will help explore memory and its relation to language more clearly, and possibly illuminate the evolutionary basis of subvocalization. Evidence for the mind having modules for superior function is the example that hours may be spent toiling over a car engine in an attempt to flexibly formulate a solution, but, in contrast, extremely long and complex sentences can be comprehended, understood, related and responded to in seconds. The specific inquiry into subvocalization may be minimal right now but there remains much to investigate in regard to the modular mind.
Associated brain structures and processes:
The brain mechanics of subvocalization are still not well understood. It is safe to say that more than one part of the brain is used, and that no single test can reveal all the relevant processes. Studies often use event-related potentials; brief changes in an EEG (electroencephalography) to show brain activation, or fMRIs.
Associated brain structures and processes:
Subvocalization is related to inner speech; when inner speech is used, there is bilateral activation, predominantly in the left frontal lobe. This activation could suggest that the frontal lobes are involved in motor planning for speech output. Subvocal rehearsal is controlled by top-down processing; being conceptually driven, it relies on information already in memory. There is evidence for significant left-hemisphere activation in the inferior and middle frontal gyri and the inferior parietal gyrus during subvocal rehearsal. Broca's area has also been found to activate in other studies of subvocal rehearsal. Silent speech-reading and silent counting are also examined when experimenters look at subvocalization. These tasks show activation in the frontal cortices, hippocampus, and thalamus for silent counting, while silent reading activates areas of the auditory cortex similar to those involved in listening. Finally, the phonological loop, proposed by Baddeley and Hitch as "being responsible for temporary storage of speech-like information", is an active subvocal rehearsal mechanism, with activation originating mostly in the left-hemisphere speech areas: Broca's area, the lateral and medial premotor cortices, and the cerebellum.
Role of subvocalization in memory processes:
The phonological loop and rehearsal. The ability to store verbal material in working memory, and the storage of verbal material in short-term memory, relies on a phonological loop. This loop, proposed by Baddeley and Hitch, represents a system composed of a short-term store, in which memory is represented phonologically, and a rehearsal process. Rehearsal preserves and refreshes the material by re-enacting it and re-presenting it to short-term storage, and subvocalization is a major component of this rehearsal. The phonological loop system features an interaction between subvocal rehearsal and specific storage for phonological material. The phonological loop contributes to the study of the role of subvocalization and the inner voice in auditory imagery. Subvocalization and the phonological loop interact in a non-dependent manner, demonstrated by their differential requirements in different tasks. The role of subvocalization within memory processes is heavily reliant on its involvement with Baddeley's proposed phonological loop.
Role of subvocalization in memory processes:
Working memory. There have been findings that support a role for subvocalization in the mechanisms underlying working memory and in holding information in an accessible and malleable state. Some forms of internal speech-like processing may function as a holding mechanism in immediate-memory tasks. The working memory span is a behavioural measure of "exceptional consistency" and is a positive function of the rate of subvocalization. Experimental data have shown that span size increases as the rate of subvocalization increases, while the time needed to subvocalize the number of items comprising a span is generally constant. fMRI data suggest that a sequence of five letters approaches the individual capacity for immediate recall that relies on subvocal rehearsal alone.
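The constant-product relationship described here (span grows with rehearsal rate while the time to subvocalize one span's worth of items stays fixed) can be illustrated with a toy model. The 2-second loop duration is an assumption drawn from the word-length-effect literature, not a figure from this text, and the function name is hypothetical.

```python
# Toy model (an assumption, not from the text): immediate-memory span
# approximates what can be subvocally rehearsed within a fixed loop
# duration of roughly 2 seconds.
def predicted_span(rate_items_per_second, loop_duration_s=2.0):
    """Predicted immediate-memory span for a given subvocal rehearsal rate."""
    return rate_items_per_second * loop_duration_s

# Span rises with rehearsal rate, while span / rate (the time needed to
# subvocalize a full span of items) stays constant at loop_duration_s.
for rate in (1.5, 2.5, 3.5):
    span = predicted_span(rate)
    print(rate, span, span / rate)  # third column is always 2.0
```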
Role of subvocalization in memory processes:
Short-term memory. The role of subvocal rehearsal is also seen in short-term memory, and research has confirmed that this form of rehearsal benefits some cognitive functioning. Subvocal movements that occur when people listen to or rehearse a series of speech sounds help the subject to maintain the phonemic representation of those sounds in short-term memory; this finding is supported by the fact that interfering with the overt production of speech sound did not disrupt the encoding of the sound's features in short-term memory. This suggests a strong role for subvocalization in the encoding of speech sounds into short-term memory. It has also been found that language differences in short-term memory performance in bilingual people are mediated, though not exclusively, by subvocal rehearsal. The production of acoustic errors in short-term memory is also thought to be due, in part, to subvocalization. Individuals who stutter, and therefore have a slower rate of subvocal articulation, also reproduce serial material from short-term memory more slowly than people who do not stutter.
Role of subvocalization in memory processes:
Encoding. Subvocalization plays a large role in memory encoding, appearing to facilitate the translation of visual linguistic information into acoustic information and vice versa. For example, subvocalization occurs when one sees a word and is asked to say it (see-say condition), or when one hears a word and is asked to write it (hear-write condition), but not when one is asked to see a word and then write it (see-write condition) or to hear a word and then say it (hear-say condition). The see-say condition converts visual information into acoustic information; the hear-write condition converts acoustic information into visual information. The see-write and hear-say conditions, however, remain in the same sensory domain and do not require translation into a different type of code. This is also supported by findings suggesting that subvocalization is not required for the encoding of speech, since words being heard are already in acoustic form and therefore enter short-term memory directly without subvocal articulation. Furthermore, subvocalization interference impedes reading comprehension but not listening comprehension.
Role in reading comprehension:
Subvocalization's role in reading comprehension can be viewed as a function of task complexity. Subvocalization is involved minimally or not at all in immediate comprehension: for example, it is not used in making homophone judgements but is used more for the comprehension of sentences, and more still for the comprehension of paragraphs. Subvocalization, which translates visual reading information into a more durable and flexible acoustic code, is thought to allow the integration of past concepts with those currently being processed.
Comparison to speed reading:
Advocates of speed reading generally claim that subvocalization places an extra burden on cognitive resources, thus slowing reading down, and speed-reading courses often prescribe lengthy practices to eliminate subvocalizing when reading. Normal reading instructors often simply apply remedial teaching to a reader who subvocalizes to the degree that they make visible movements of the lips, jaw, or throat. Furthermore, fMRI studies comparing fast and slow readers during a reading task indicate significant differences between the two groups in the brain areas being activated. In particular, rapid readers show lower activation in the brain regions associated with speech, which indicates that the higher speeds were attained, in part, by a reduction in subvocalization. At the slower rates (memorizing, learning, and reading for comprehension), subvocalizing by the reader is very detectable; at the faster rates (skimming and scanning), it is less detectable. For competent readers, subvocalizing to some extent, even at scanning rates, is normal. Typically, subvocalizing is an inherent part of reading and understanding a word, and micro-muscle tests suggest that full and permanent elimination of subvocalizing is impossible. This may originate in the way people learn to read, by associating the sight of words with their spoken sounds. Sound associations for words are indelibly imprinted on the nervous system, even of deaf people, since they will have associated the word with the mechanism for causing the sound or with a sign in a particular sign language. At the slower reading rates (100–300 words per minute), subvocalizing may improve comprehension. Subvocalizing or actual vocalizing can indeed be of great help when one wants to learn a passage verbatim, because the person is repeating the information in an auditory way as well as seeing it on the page.
Auditory imagery:
The definition of auditory imagery is analogous to definitions used in other modalities of imagery (such as visual and olfactory imagery) in that it is, according to Intons-Peterson (1992), "the introspective persistence of an auditory experience, including one constructed from components drawn from long-term memory, in the absence of direct sensory instigation of that experience". Auditory imagery is often, but not necessarily, influenced by subvocalization, and has ties to the rehearsal process of working memory. The conception of working memory relies on a relationship between the "inner ear" and the "inner voice" (subvocalization), and this memory system is posited to be at the basis of auditory imagery. Subvocalization and the phonological store work in partnership in many auditory imagery tasks.

The extent to which an auditory image can influence detection, encoding and recall of a stimulus through its relationships to perception and memory has been documented. It has been suggested that auditory imagery may slow the decay of memory for pitch: T. A. Keller, Cowan, and Saults (1995) demonstrated that preventing rehearsal, by introducing distracting and competing stimuli, decreased memory performance on pitch comparison tasks. It has also been reported that auditory imagery for verbal material is impaired when subvocalization is blocked. These findings suggest that subvocalization is common to both auditory imagery and rehearsal.
Auditory imagery:
An objection to a subvocalization basis for auditory imagery is that a significant amount of auditory imagery does not involve speech or speech-like stimuli, such as music and environmental sounds. In response, it has been suggested that rehearsal of non-speech sounds can indeed be carried out by the phonological mechanisms previously mentioned, even if the creation of non-speech sounds within this mechanism is not possible.
Role in speech:
With respect to subvocalization, individuals fall into two general types: low-vocalizers and high-vocalizers. Using electromyography to record the muscle action potentials of the larynx (i.e., its muscle movement), an individual is categorized as a high or low vocalizer depending on how much the muscles of the larynx move during silent reading.
Role in speech:
Regulation of speech intensity In both high and low vocalizers, the rate of speech is continually regulated according to the intensity/volume of words (said to be affected by long delays between readings); increasing the delay between speaking and hearing one's own voice produces an effect called "delayed auditory feedback". Voice intensity while reading was found to be higher in low-vocalizers than in high-vocalizers. It is believed that, because high-vocalizers have greater muscle movement of the larynx during silent reading, low-vocalizers read louder to compensate for their lack of such movement so that they can understand the material. When individuals undergo "feedback training", in which they become conscious of these muscle movements, this difference diminishes.
Role in speech:
Role in articulation Articulation during silent speech is important, though speech is not solely dependent on articulation alone. Impairing articulation reduces the sensory input from the muscle movements of the larynx that the brain uses to understand what is being read, and it also impairs the ongoing speech production during reading that directs thinking. Words of high similarity minimize articulation, causing interference, and may reduce subvocal rehearsal; as articulation of similar words affects subvocalization, acoustic errors in short-term memory and recall increase.

Impairing or suppressing articulation has a greater impact on performance. An example of articulation suppression is repeating the same word, such as 'the', over and over while attempting to commit other words to short-term memory. Even though primary cues may be given for these words in an attempt to retrieve them, the words will either be recalled for the incorrect cue or not recalled at all.
Schizophrenia and subvocalization:
The auditory hallucinations experienced by people with schizophrenia may reflect over-activation of the muscles of the larynx. Using electromyography to record muscle movement, individuals experiencing hallucinations showed greater muscle activation before the hallucinations occurred. However, this activation is not easily detected, which means the muscle movement must be measured over a wider range; even so, it is still considered subvocalization. Much more research is needed to link subvocalization with hallucination, but many people with schizophrenia report "hearing voices" (as hallucinations) coming from their throat. This could be a clue to whether there is a true link between subvocalization and hallucinations, but the connection is difficult to study because not many patients experience hallucinations.
**Coded set**
Coded set:
In telecommunication, a coded set is a set of elements onto which another set of elements has been mapped according to a code.
Examples of coded sets include the list of names of airports that is mapped onto a set of corresponding three-letter representations of airport names, the list of classes of emission that is mapped onto a set of corresponding standard symbols, and the names of the months of the year mapped onto a set of two-digit decimal numbers.
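The months example can be sketched as a simple two-way mapping; the dictionary and function names below are illustrative, not drawn from any standard:

```python
# A coded set: the names of the months mapped onto two-digit decimal codes.
MONTH_CODES = {
    "January": "01", "February": "02", "March": "03",
    "April": "04", "May": "05", "June": "06",
    "July": "07", "August": "08", "September": "09",
    "October": "10", "November": "11", "December": "12",
}

def encode(month: str) -> str:
    """Map an element of the original set onto its coded representation."""
    return MONTH_CODES[month]

def decode(code: str) -> str:
    """Invert the mapping: recover the month name from its code."""
    inverse = {v: k for k, v in MONTH_CODES.items()}
    return inverse[code]

print(encode("March"))   # 03
print(decode("11"))      # November
```

The airport-code and emission-class examples have the same shape: a finite set of elements paired, one-to-one or many-to-one, with a set of standard representations.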
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Formal fallacy**
Formal fallacy:
In logic and philosophy, a formal fallacy, deductive fallacy, logical fallacy or non sequitur (Latin for "[it] does not follow") is a pattern of reasoning rendered invalid by a flaw in its logical structure that can neatly be expressed in a standard logic system, for example propositional logic. It is defined as a deductive argument that is invalid. The argument itself could have true premises but still have a false conclusion. Thus, a formal fallacy is a fallacy in which the deduction goes wrong and is no longer a logical process. This may not affect the truth of the conclusion, since validity and truth are separate in formal logic.
Formal fallacy:
While a logical argument is a non sequitur if, and only if, it is invalid, the term "non sequitur" typically refers to those types of invalid arguments which do not constitute formal fallacies covered by particular terms (e.g., affirming the consequent). In other words, in practice, "non sequitur" refers to an unnamed formal fallacy.
A special case is a mathematical fallacy, an intentionally invalid mathematical proof, often with the error subtle and somehow concealed. Mathematical fallacies are typically crafted and exhibited for educational purposes, usually taking the form of spurious proofs of obvious contradictions.
A formal fallacy is contrasted with an informal fallacy, which may have a valid logical form and yet be unsound because one or more premises are false. A formal fallacy, however, may have true premises but a false conclusion.
Taxonomy:
Prior Analytics is Aristotle's treatise on deductive reasoning and the syllogism. The standard Aristotelian logical fallacies are: Fallacy of four terms (Quaternio terminorum); Fallacy of the undistributed middle; Fallacy of illicit process of the major or the minor term; Affirmative conclusion from a negative premise. Other logical fallacies include the self-reliant fallacy. In philosophy, the term logical fallacy properly refers to a formal fallacy—a flaw in the structure of a deductive argument, which renders the argument invalid.
Taxonomy:
It is often used more generally in informal discourse to mean an argument that is problematic for any reason, and encompasses informal fallacies as well as formal fallacies—valid but unsound claims or poor non-deductive argumentation.
Taxonomy:
The presence of a formal fallacy in a deductive argument does not imply anything about the argument's premises or its conclusion (see fallacy fallacy). Both may actually be true, or even more probable as a result of the argument (e.g. appeal to authority), but the deductive argument is still invalid because the conclusion does not follow from the premises in the manner described. By extension, an argument can contain a formal fallacy even if the argument is not a deductive one; for instance an inductive argument that incorrectly applies principles of probability or causality can be said to commit a formal fallacy.
Taxonomy:
Affirming the consequent Any argument that takes the following form is a non sequitur: If A is true, then B is true.
B is true.
Therefore, A is true.
Even if the premise and conclusion are both true, the conclusion is not a necessary consequence of the premise. This sort of non sequitur is also called affirming the consequent.
An example of affirming the consequent would be:
If Jackson is a human (A), then Jackson is a mammal (B).
Jackson is a mammal (B).
Therefore, Jackson is a human (A).
While the conclusion may be true, it does not follow from the premise:
Humans are mammals.
Jackson is a mammal.
Therefore, Jackson is a human.
The truth of the conclusion is independent of the truth of its premises – it is a non sequitur, since Jackson might be a mammal without being human. He might be an elephant.
Affirming the consequent is essentially the same as the fallacy of the undistributed middle, but using propositions rather than set membership.
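The invalidity of this form can be checked mechanically by enumerating truth assignments; the short sketch below (variable names are illustrative) searches for an assignment that makes both premises true and the conclusion false:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

# Affirming the consequent: premises {A -> B, B}, conclusion A.
# The form is valid only if every assignment satisfying the premises
# also satisfies the conclusion, so we search for a counterexample.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and b and not a   # premises true, conclusion false
]
print(counterexamples)  # [(False, True)]: B true with A false refutes the form
```

The single counterexample found (A false, B true) is exactly the Jackson case: a mammal that is not a human.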
Denying the antecedent Another common non sequitur is this: If A is true, then B is true.
A is false.
Therefore, B is false.
While B can indeed be false, its falsity cannot be inferred from the premises. This is called denying the antecedent.
An example of denying the antecedent would be: If I am Japanese, then I am Asian.
I am not Japanese.
Therefore, I am not Asian.
While the conclusion may be true, it does not follow from the premises. The statement's declarant could be of another Asian ethnicity, e.g., Chinese, in which case the premises would be true but the conclusion false. This argument is still a fallacy even if the conclusion is true.
Affirming a disjunct Affirming a disjunct is a fallacy when in the following form: A or B is true.
B is true.
Therefore, A is not true.*
The conclusion does not follow from the premises, as it could be the case that A and B are both true. This fallacy stems from the stated definition of or in propositional logic as inclusive.
An example of affirming a disjunct would be: I am at home or I am in the city.
I am at home.
Taxonomy:
Therefore, I am not in the city.
While the conclusion may be true, it does not follow from the premises. For all the reader knows, the declarant of the statement could very well be both in the city and at home, in which case the premises would be true but the conclusion false. This argument is still a fallacy even if the conclusion is true.
Taxonomy:
*Note that this is only a logical fallacy when the word "or" is in its inclusive form. If the two possibilities in question are mutually exclusive, this is not a logical fallacy. For example:
I am either at home or I am in the city (but not both).
I am at home.
Therefore, I am not in the city.
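Both readings of "or" can be checked by the same truth-table enumeration; in this sketch (the function name is an assumption for illustration), the inclusive reading yields a counterexample while the exclusive reading yields none:

```python
from itertools import product

# Affirming a disjunct: premises {A or B, B}, conclusion "not A".
# With inclusive "or" the form is invalid; with exclusive "or" it is valid.
def counterexamples(exclusive: bool):
    """Return assignments where both premises hold but the conclusion fails."""
    cases = []
    for a, b in product([True, False], repeat=2):
        disjunction = (a != b) if exclusive else (a or b)
        conclusion = not a
        if disjunction and b and not conclusion:
            cases.append((a, b))
    return cases

print(counterexamples(exclusive=False))  # [(True, True)] -- inclusive or: fallacy
print(counterexamples(exclusive=True))   # [] -- exclusive or: no counterexample
```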
Denying a conjunct Denying a conjunct is a fallacy when in the following form: It is not the case that A and B are both true.
B is not true.
Therefore, A is true.
The conclusion does not follow from the premises, as it could be the case that A and B are both false.
An example of denying a conjunct would be: I cannot be both at home and in the city.
I am not at home.
Taxonomy:
Therefore, I am in the city.
While the conclusion may be true, it does not follow from the premise. For all the reader knows, the declarant of the statement could very well be neither at home nor in the city, in which case the premise would be true but the conclusion false. This argument is still a fallacy even if the conclusion is true.
Taxonomy:
Illicit commutativity Illicit commutativity is a fallacy when in the following form: If A is the case, then B is the case.
Therefore, if B is the case, then A is the case.
The conclusion does not follow from the premise because, unlike other logical connectives, the implication operator is one-way only. "P and Q" is the same as "Q and P", but "P implies Q" is not the same as "Q implies P".
An example of this fallacy is as follows: If it is raining, then I have my umbrella.
Taxonomy:
If I have my umbrella, then it is raining.
While this may appear to be a reasonable argument, it is not valid because the first statement does not logically guarantee the second. The first statement says nothing like "I do not have my umbrella otherwise", so having my umbrella on a sunny day would render the first statement true and the second statement false.
Taxonomy:
Fallacy of the undistributed middle The fallacy of the undistributed middle is a fallacy that is committed when the middle term in a categorical syllogism is not distributed. It is a syllogistic fallacy. More specifically it is also a form of non sequitur.
The fallacy of the undistributed middle takes the following form: All Zs are Bs.
Y is a B.
Therefore, Y is a Z.
It may or may not be the case that "all Zs are Bs", but in either case it is irrelevant to the conclusion. What is relevant to the conclusion is whether it is true that "all Bs are Zs", which is ignored in the argument.
An example can be given as follows, where B=mammals, Y=Mary and Z=humans: All humans are mammals.
Mary is a mammal.
Therefore, Mary is a human.
Note that if the terms (Z and B) were swapped around in the first co-premise, then it would no longer be a fallacy and would be correct.
In contrast to informal fallacy:
Formal logic is not used to determine whether or not an argument is true. Formal arguments can either be valid or invalid. A valid argument may also be sound or unsound: A valid argument has a correct formal structure. A valid argument is one where if the premises are true, the conclusion must be true.
A sound argument is a formally correct argument that also contains true premises.
Ideally, the best kind of formal argument is a sound, valid argument.
Formal fallacies do not take into account the soundness of an argument, but rather its validity. Premises in formal logic are commonly represented by letters (most commonly p and q). A fallacy occurs when the structure of the argument is incorrect, despite the truth of the premises.
In contrast to informal fallacy:
The following argument, an instance of modus ponens, contains no formal fallacy:
If P, then Q.
P.
Therefore, Q.
A logical fallacy associated with this form of argument is affirming the consequent, which looks like this:
If P, then Q.
Q.
Therefore, P.
This is a fallacy because it does not take into account other possibilities. To illustrate this more clearly, substitute the letters with premises:
If it rains, the street will be wet.
The street is wet.
Therefore, it rained.
Although it is possible that this conclusion is true, it does not necessarily mean it must be true. The street could be wet for a variety of other reasons that this argument does not take into account. If we look at the valid form of the argument, we can see that the conclusion must be true:
If it rains, the street will be wet.
It rained.
Therefore, the street is wet.
This argument is valid and, if it did rain, it would also be sound.
If statements 1 and 2 are true, it absolutely follows that statement 3 is true. However, it may still be the case that statement 1 or 2 is not true. For example: If Albert Einstein makes a statement about science, it is correct.
Albert Einstein states that all quantum mechanics is deterministic.
Therefore, it's true that quantum mechanics is deterministic.
In this case, statement 1 is false. The particular informal fallacy being committed in this assertion is argument from authority. By contrast, an argument with a formal fallacy could still contain all true premises:
If an animal is a dog, then it has four legs.
My cat has four legs.
Therefore, my cat is a dog.
Although 1 and 2 are true statements, 3 does not follow because the argument commits the formal fallacy of affirming the consequent.
An argument could contain both an informal fallacy and a formal fallacy yet lead to a conclusion that happens to be true, for example, again affirming the consequent, now also from an untrue premise: If a scientist makes a statement about science, it is correct.
It is true that quantum mechanics is deterministic.
Therefore, a scientist has made a statement about it.
Common examples:
"Some of your key evidence is missing, incomplete, or even faked! That proves I'm right!"
"The vet can't find any reasonable explanation for why my dog died. See! See! That proves that you poisoned him! There’s no other logical explanation!"
In the strictest sense, a logical fallacy is the incorrect application of a valid logical principle or an application of a nonexistent principle:
Most Rimnars are Jornars.
Common examples:
Most Jornars are Dimnars.
Therefore, most Rimnars are Dimnars.
This is fallacious. And so is this:
People in Kentucky support a border fence.
People in New York do not support a border fence.
Therefore, people in New York do not support people in Kentucky.
Indeed, there is no logical principle that states:
For some x, P(x).
For some x, Q(x).
Therefore, for some x, P(x) and Q(x).
An easy way to show the above inference to be invalid is by using Venn diagrams. In logical parlance, the inference is invalid, since under at least one interpretation of the predicates it is not truth-preserving.
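The same point can be made with a small finite interpretation, the programmatic analogue of a Venn-diagram counterexample; the two-element domain and the sets P and Q below are illustrative assumptions:

```python
# Pseudo-principle: from "some x is P" and "some x is Q",
# infer "some x is both P and Q". A two-element domain refutes it.
domain = ["a", "b"]
P = {"a"}   # only a satisfies P
Q = {"b"}   # only b satisfies Q

some_P = any(x in P for x in domain)                    # premise 1: True
some_Q = any(x in Q for x in domain)                    # premise 2: True
some_P_and_Q = any(x in P and x in Q for x in domain)   # conclusion: False

print(some_P, some_Q, some_P_and_Q)  # True True False -- premises true, conclusion false
```

Since one interpretation makes both premises true and the conclusion false, no such logical principle exists.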
People often have difficulty applying the rules of logic. For example, a person may say the following syllogism is valid, when in fact it is not: All birds have beaks.
That creature has a beak.
Common examples:
Therefore, that creature is a bird.
"That creature" may well be a bird, but the conclusion does not follow from the premises. Certain other animals also have beaks: an octopus and a squid both have beaks, and some turtles and cetaceans have beaks. Errors of this type occur because people reverse a premise. In this case, "All birds have beaks" is converted to "All beaked animals are birds." The reversed premise is plausible because few people are aware of any instances of beaked creatures besides birds—but this premise is not the one that was given. In this way, the deductive fallacy is formed by points that may individually appear logical, but when placed together are shown to be incorrect.
Non sequitur in everyday speech:
In everyday speech, a non sequitur is a statement in which the final part is totally unrelated to the first part, for example: Life is life and fun is fun, but it's all so quiet when the goldfish die. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Triazolopyridine**
Triazolopyridine:
Triazolopyridines are a class of heterocyclic chemical compounds with a triazole ring fused to a pyridine ring. There are multiple isomers which differ by the location of the nitrogen atoms and the nature of the ring fusion.
The term triazolopyridine can also refer to a class of antidepressant drugs whose chemical structure includes a trazolopyridine-derived ring system. One example is trazodone.Other pharmaceutical drugs that contain a triazolopyridine ring system include filgotinib, tucatinib, and enarodustat. In addition, the reagents used in organic chemistry HATU, HOAt, and PyAOP are triazolopyridine derivatives. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Carbo-mer**
Carbo-mer:
In organic chemistry, a carbo-mer (also written carbomer) is an expanded molecule obtained by insertion of C2 units into a given molecule. Carbo-mers differ from their templates in size but not in symmetry when each C–C single bond is replaced by an alkyne unit C–C≡C–C, each C=C double bond by a cumulene unit C=C=C=C, and each C≡C triple bond by C≡C–C≡C. The size of the carbo-mer continues to increase as more C2 units are inserted, so carbo-mers are also called carbo-n-molecules, where "n" is the number of acetylene or allene groups in an n-expansion unit. This concept, devised by Rémi Chauvin in 1995, is aimed at introducing new chemical properties for existing chemical motifs.
Carbo-mer:
Two distinct expansions of benzene can be called carbo-benzene (C18H6). One expands each C–H bond to C–C≡C–H, making hexaethynylbenzene, a substituted benzene derivative. The other expands each C–C and C=C bond of the benzene core, making 1,2,4,5,7,8,10,11,13,14,16,17-dodecadehydro[18]annulene. An analog of this molecule with the hydrogen atoms replaced by phenyl groups, 3,6,9,12,15,18-hexaphenyl-1,2,4,5,7,8,10,11,13,14,16,17-dodecadehydro[18]annulene, is stable. Its proton NMR spectrum shows that the phenyl protons are shifted downfield compared to the proton position in benzene itself (the chemical shift of the ortho proton is 9.49 ppm), suggesting the presence of a diamagnetic ring current and thus aromaticity. The final step in its organic synthesis is reaction of the triol with stannous chloride and hydrochloric acid in diethyl ether. With both core and periphery expanded, the total carbo-mer of benzene (C30H6) exists only in silico (in computer simulation).
Carbo-mer:
Calculations predict a planar D6h structure with bond lengths similar to those of the other two carbo-benzenes. Its non-planar isomer is called "hexaethynyl-carbo-[6]trannulene" - a pun on the all-trans annulenes - and resembles a cyclohexane ring. This hypothetical molecule is predicted to be higher in energy by 65 kcal/mol.
**Advanced Power Management**
Advanced Power Management:
Advanced power management (APM) is a technical standard for power management developed by Intel and Microsoft and released in 1992 which enables an operating system running an IBM-compatible personal computer to work with the BIOS (part of the computer's firmware) to achieve power management.
Revision 1.2 was the last version of the APM specification, released in 1996. ACPI is the successor to APM. Microsoft dropped support for APM in Windows Vista. The Linux kernel still mostly supports APM, though support for APM CPU idle was dropped in version 3.0.
Overview:
APM uses a layered approach to manage devices. APM-aware applications (which include device drivers) talk to an OS-specific APM driver. This driver communicates to the APM-aware BIOS, which controls the hardware. There is the ability to opt out of APM control on a device-by-device basis, which can be used if a driver wants to communicate directly with a hardware device.
Overview:
Communication occurs both ways; power management events are sent from the BIOS to the APM driver, and the APM driver sends information and requests to the BIOS via function calls. In this way the APM driver is an intermediary between the BIOS and the operating system.
Power management happens in two ways; through the above-mentioned function calls from the APM driver to the BIOS requesting power state changes, and automatically based on device activity.
In APM 1.0 and APM 1.1, power management is almost fully controlled by the BIOS. In APM 1.2, the operating system can control PM time (e.g. suspend timeout).
Power management events:
There are 12 power events (such as standby, suspend and resume requests, and low battery notifications), plus OEM-defined events, that can be sent from the APM BIOS to the operating system. The APM driver regularly polls for event change notifications.
APM functions:
There are 21 APM function calls defined that the APM driver can use to query power management statuses, or request power state transitions. Example function calls include letting the BIOS know about current CPU usage (the BIOS may respond to such a call by placing the CPU in a low-power state, or returning it to its full-power state), retrieving the current power state of a device, or requesting a power state change.
Power states:
The APM specification defines system power states and device power states.
System power states APM defines five power states for the computer system: Full On: The computer is powered on, and no devices are in a power saving mode.
APM Enabled: The computer is powered on, and APM is controlling device power management as needed.
APM Standby: Most devices are in their low-power state, the CPU is slowed or stopped, and the system state is saved. The computer can be returned to its former state quickly (in response to activity such as the user pressing a key on the keyboard).
APM Suspend: Most devices are powered off, but the system state is saved. The computer can be returned to its former state, but takes a relatively long time. (Hibernation is a special form of the APM Suspend state).
Off: The computer is turned off.
Device power states APM also defines power states that APM-aware hardware can implement. There is no requirement that an APM-aware device implement all states.
The four states are: Device On: The device is in full power mode.
Device Power Managed: The device is still powered on, but some functions may not be available, or may have reduced performance.
Device Low Power: The device is not working. Power is maintained so that the device may be 'woken up'.
Device Off: The device is powered off.
Hardware components:
CPU The CPU core (defined in APM as the CPU clock, cache, system bus and system timers) is treated specially in APM, as it is the last device to be powered down, and the first device to be powered back up. The CPU core is always controlled through the APM BIOS (there is no option to control it through a driver). Drivers can use APM function calls to notify the BIOS about CPU usage, but it is up to the BIOS to act on this information; a driver cannot directly tell the CPU to go into a power saving state.
Hardware components:
ATA drives The ATA and SATA specifications define APM provisions for hard drives, which specify a trade-off between spin-down frequency and always-on performance. Unlike the BIOS-side APM, ATA APM and SATA APM have never been deprecated.

Aggressive spin-down frequencies may reduce drive lifespan by unnecessarily accumulating load cycles; most modern drives are specified to sustain 300,000 cycles and usually last at least 600,000. On the other hand, not spinning down the drive causes extra power draw and heat generation; high temperatures also reduce the lifespan of hard drives.
**Serine—tRNA ligase**
Serine—tRNA ligase:
In enzymology, a serine—tRNA ligase (EC 6.1.1.11) is an enzyme that catalyzes the chemical reaction:
ATP + L-serine + tRNA(Ser) ⇌ AMP + diphosphate + L-seryl-tRNA(Ser)
The three substrates of this enzyme are ATP, L-serine, and tRNA(Ser), whereas its three products are AMP, diphosphate, and L-seryl-tRNA(Ser).
Serine—tRNA ligase:
This enzyme belongs to the family of ligases, specifically those forming carbon-oxygen bonds in aminoacyl-tRNA and related compounds. The systematic name of this enzyme class is L-serine:tRNA(Ser) ligase (AMP-forming). Other names in common use include seryl-tRNA synthetase, SerRS, seryl-transfer ribonucleate synthetase, seryl-transfer RNA synthetase, seryl-transfer ribonucleic acid synthetase, and serine translase. This enzyme participates in glycine, serine and threonine metabolism and in aminoacyl-tRNA biosynthesis.
Structural studies:
As of late 2007, 13 structures have been solved for this class of enzymes, with PDB accession codes 1SER, 1SES, 1SET, 1SRY, 1WLE, 2CIM, 2CJ9, 2CJA, 2CJB, 2DQ0, 2DQ1, 2DQ2, and 2DQ3. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Psychological fiction**
Psychological fiction:
In literature, psychological fiction (also psychological realism) is a narrative genre that emphasizes interior characterization and motivation to explore the spiritual, emotional, and mental lives of the characters. The mode of narration examines the reasons for the behaviors of the character, which propel the plot and explain the story. Psychological realism is achieved with deep explorations and explanations of the mental states of the character's inner person, usually through narrative modes such as stream of consciousness and flashbacks.
Early examples:
The Tale of Genji by Lady Murasaki, written in 11th-century Japan, was considered by Jorge Luis Borges to be a psychological novel. French theorists Gilles Deleuze and Félix Guattari, in A Thousand Plateaus, evaluated the 12th-century Arthurian author Chrétien de Troyes' Lancelot, the Knight of the Cart and Perceval, the Story of the Grail as early examples of the style of the psychological novel.Stendhal's The Red and the Black and Madame de La Fayette's The Princess of Cleves are considered the first precursors of the psychological novel. The modern psychological novel originated, according to The Encyclopedia of the Novel, primarily in the works of Nobel laureate Knut Hamsun – in particular, Hunger (1890), Mysteries (1892), Pan (1894) and Victoria (1898).
Notable examples:
One of the greatest writers of the genre was Fyodor Dostoyevsky. His novels deal strongly with ideas, and characters who embody these ideas, how they play out in real world circumstances, and the value of them, most notably The Brothers Karamazov and Crime and Punishment.
In the literature of the United States, Henry James, Patrick McGrath, Arthur Miller, and Edith Wharton are considered "major contributor[s] to the practice of psychological realism."
Subgenres:
Psychological thriller A subgenre of the thriller and psychological novel genres, emphasizing the inner mind and mentality of characters in a creative work. Because of its complexity, the genre often overlaps and/or incorporates elements of mystery, drama, action, slasher, and horror — often psychological horror. It bears similarities to the Gothic and detective fiction genres.
Psychological horror A subgenre of the horror and psychological novel genres that relies on the psychological, emotional and mental states of characters to generate horror. On occasions, it overlaps with the psychological thriller subgenre to enhance the story suspensefully.
Subgenres:
Psychological drama A subgenre of drama films with psychological elements, which focuses upon the emotional, mental, and psychological development of characters in a dramatic work. One Flew Over the Cuckoo's Nest (1975) and Requiem for a Dream (2000), both based on novels, are notable examples of this subgenre. Taxi Driver (1976) and The Wrestler (2008) are original psychological drama films.
Subgenres:
Psychological science fiction A genre with films that are considered dramas or thrillers occurring in a science fiction setting. Often the focus is on the character's inner struggle dealing with political or technological forces. A Clockwork Orange (1971) is a notable example of this genre.
**Jónsson–Tarski algebra**
Jónsson–Tarski algebra:
In mathematics, a Jónsson–Tarski algebra or Cantor algebra is an algebraic structure encoding a bijection from an infinite set X onto the product X×X. They were introduced by Bjarni Jónsson and Alfred Tarski (1961, Theorem 5). Smirnov (1971) named them after Georg Cantor because of Cantor's pairing function and Cantor's theorem that an infinite set X has the same number of elements as X×X. The term Cantor algebra is also occasionally used to mean the Boolean algebra of all clopen subsets of the Cantor set, or the Boolean algebra of Borel subsets of the reals modulo meager sets (sometimes called the Cohen algebra).
Jónsson–Tarski algebra:
The group of order-preserving automorphisms of the free Jónsson–Tarski algebra on one generator is the Thompson group F.
Definition:
A Jónsson–Tarski algebra of type 2 is a set A with a product w from A×A to A and two 'projection' maps p1 and p2 from A to A, satisfying p1(w(a1,a2)) = a1, p2(w(a1,a2)) = a2, and w(p1(a),p2(a)) = a. The definition for type n > 2 is similar, but with n projection operators.
Example:
If w is any bijection from A×A to A, then it can be extended to a unique Jónsson–Tarski algebra by letting pi(a) be the projection of w⁻¹(a) onto the ith factor.
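This extension can be made concrete with Cantor's pairing function, which the naming above alludes to. The sketch below (function names are my own) builds a Jónsson–Tarski algebra of type 2 on the natural numbers and checks the three defining axioms:

```python
from math import isqrt

def w(a1, a2):
    """Cantor pairing: a bijection N x N -> N."""
    s = a1 + a2
    return s * (s + 1) // 2 + a2

def unpair(z):
    """Inverse of w, recovered with integer arithmetic."""
    t = (isqrt(8 * z + 1) - 1) // 2       # largest t with t(t+1)/2 <= z
    a2 = z - t * (t + 1) // 2
    a1 = t - a2
    return a1, a2

def p1(a):  # first 'projection' operator
    return unpair(a)[0]

def p2(a):  # second 'projection' operator
    return unpair(a)[1]

# The three Jónsson–Tarski axioms hold for every element checked:
for a1 in range(20):
    for a2 in range(20):
        assert p1(w(a1, a2)) == a1
        assert p2(w(a1, a2)) == a2
for a in range(400):
    assert w(p1(a), p2(a)) == a
```

Any other bijection from N×N to N would work equally well; the axioms only require that w and (p1, p2) invert each other.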
**Gallate dioxygenase**
Gallate dioxygenase:
Gallate dioxygenase (EC 1.13.11.57, GalA) is an enzyme with systematic name gallate:oxygen oxidoreductase. This enzyme catalyses the following chemical reaction: gallate + O2 ⇌ (1E)-4-oxobut-1-ene-1,2,4-tricarboxylate. Gallate dioxygenase contains non-heme Fe2+.
**Cynarine**
Cynarine:
Cynarine is a hydroxycinnamic acid derivative and a biologically active chemical constituent of artichoke (Cynara cardunculus). Chemically, it is an ester formed from quinic acid and two units of caffeic acid.
**107 (number)**
107 (number):
107 (one hundred [and] seven) is the natural number following 106 and preceding 108.
In mathematics:
107 is the 28th prime number. The next prime is 109; together they form a twin prime pair, which also makes 107 a Chen prime. Plugged into the expression 2^p − 1, 107 yields 162259276829213363391578010288127, a Mersenne prime. 107 is itself a safe prime. It is the fourth Busy Beaver number: the maximum number of steps that any halting Turing machine with 2 symbols and 4 states can make. It is the number of triangle-free graphs on 7 vertices. It is the ninth emirp, because reversing its digits gives another prime number (701).
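Most of these arithmetic claims are easy to check directly; a small Python sketch (the helper name is illustrative):

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

assert is_prime(107) and is_prime(109)        # 107 and 109 are twin primes
assert is_prime(53) and 107 == 2 * 53 + 1     # 107 is a safe prime
assert is_prime(int(str(107)[::-1]))          # 701 is prime, so 107 is an emirp
assert 2**107 - 1 == 162259276829213363391578010288127
```

Primality of 2^107 − 1 itself is not checked here, since trial division is infeasible at that size; the assertion only confirms the quoted value of the expression.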
In other fields:
As "one hundred and seven", it is the smallest positive integer requiring six syllables in English (without the "and" it only has five syllables and seventy-seven is a smaller 5-syllable number).
107 is also: The atomic number of bohrium.
The emergency telephone number in Argentina and Cape Town.
The telephone number of the police in Hungary.
A common designation for the fair use exception in copyright law (from 17 U.S.C. § 107).
The Peugeot 107 model of car.
In sports: the 107% rule, a Formula One sporting regulation in operation from 1996 to 2002 and from 2011 onward.
The number 107 is also associated with the Timbers Army supporters group of the Portland Timbers soccer team, in reference to the stadium seating section where the group originally congregated.
**Local energy-based shape histogram**
Local energy-based shape histogram:
Local energy-based shape histogram (LESH) is a proposed image descriptor in computer vision. It can be used to obtain a description of the underlying shape. The LESH feature descriptor is built on the local energy model of feature perception (see, e.g., phase congruency for details). It encodes the underlying shape by accumulating the local energy of the underlying signal along several filter orientations; several local histograms from different parts of the image or patch are generated and concatenated into a 128-dimensional compact spatial histogram. It is designed to be scale invariant. LESH features can be used in applications such as shape-based image retrieval, medical image processing, object detection, and pose estimation.
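The accumulate-and-concatenate scheme can be illustrated with a much-simplified sketch: a 4×4 spatial grid with 8 orientation bins yields the 128-dimensional histogram mentioned above. Note that this stand-in uses plain image gradients in place of the phase-congruency energy of real LESH, and all names are illustrative:

```python
import numpy as np

def lesh_sketch(patch, grid=4, bins=8):
    """Simplified LESH-style descriptor: accumulate a local 'energy'
    measure into per-orientation histograms over a grid x grid partition,
    then concatenate into grid*grid*bins (here 4*4*8 = 128) values.
    Real LESH derives energy from phase congruency over a bank of
    oriented filters; gradient magnitude stands in for it here."""
    gy, gx = np.gradient(patch.astype(float))    # axis 0 first, then axis 1
    energy = np.hypot(gx, gy)                    # stand-in for local energy
    orient = np.mod(np.arctan2(gy, gx), np.pi)   # orientation in [0, pi)
    bin_idx = np.minimum((orient / np.pi * bins).astype(int), bins - 1)

    h, w = patch.shape
    cell_h, cell_w = h // grid, w // grid
    desc = np.zeros(grid * grid * bins)
    for r in range(grid):
        for c in range(grid):
            cell = (slice(r * cell_h, (r + 1) * cell_h),
                    slice(c * cell_w, (c + 1) * cell_w))
            hist = np.bincount(bin_idx[cell].ravel(),
                               weights=energy[cell].ravel(),
                               minlength=bins)[:bins]
            desc[(r * grid + c) * bins:(r * grid + c + 1) * bins] = hist
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

patch = np.random.default_rng(0).random((64, 64))
d = lesh_sketch(patch)
assert d.shape == (128,)
```

The final L2 normalization is one common way to make such histograms comparable across patches; the published descriptor's exact normalization may differ.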
**Paleoecology**
Paleoecology:
Paleoecology (also spelled palaeoecology) is the study of interactions between organisms and/or interactions between organisms and their environments across geologic timescales. As a discipline, paleoecology interacts with, depends on and informs a variety of fields including paleontology, ecology, climatology and biology.
Paleoecology:
Paleoecology emerged from the field of paleontology in the 1950s, though paleontologists have conducted paleoecological studies since the creation of paleontology in the 1700s and 1800s. Combining the investigative approach of searching for fossils with the theoretical approach of Charles Darwin and Alexander von Humboldt, paleoecology took shape as paleontologists examined both the ancient organisms they discovered and the reconstructed environments in which those organisms lived. Visual depictions of past marine and terrestrial communities have been considered an early form of paleoecology. The term "paleo-ecology" was coined by Frederic Clements in 1916.
Overview of paleoecological approaches:
Classic paleoecology uses data from fossils and subfossils to reconstruct the ecosystems of the past. It involves the study of fossil organisms and their associated remains (such as shells, teeth, pollen, and seeds), which can help in the interpretation of their life cycle, living interactions, natural environment, communities, and manner of death and burial. Such interpretations aid the reconstruction of past environments (i.e., paleoenvironments). Paleoecologists have studied the fossil record to try to clarify the relationship animals have to their environment, in part to help understand the current state of biodiversity. They have identified close links between vertebrate taxonomic and ecological diversity, that is, between the diversity of animals and the niches they occupy. Classical paleoecology is a primarily reductionist approach: scientists conduct detailed analysis of relatively small groups of organisms within shorter geologic timeframes.
Overview of paleoecological approaches:
Evolutionary paleoecology uses data from fossils and other evidence to examine how organisms and their environments change throughout time. Evolutionary paleoecologists take the holistic approach of looking at both organism and environmental change, accounting for physical and chemical changes in the atmosphere, lithosphere and hydrosphere across time. By studying patterns of evolution and extinction in the context of environmental change, evolutionary paleoecologists are able to examine concepts of vulnerability and resilience in species and environments.
Overview of paleoecological approaches:
Community paleoecology uses statistical analysis to examine the composition and distribution of groups of plants or animals. By quantifying how plants or animals are associated, community paleoecologists are able to investigate the structures of ancient communities of organisms. Advances in technology have helped propel the field, through the use of physical models and computer-based analysis.
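As a toy illustration of quantifying how taxa are associated across assemblages, the sketch below computes a Jaccard co-occurrence index over invented presence/absence data (all taxon and sample names are hypothetical; real community paleoecology uses far richer multivariate statistics):

```python
# Hypothetical presence/absence data: keys are fossil assemblages
# (samples), values are the taxa recorded in each. All names invented.
samples = {
    "bed_A": {"ostracod", "gastropod", "bivalve"},
    "bed_B": {"ostracod", "gastropod"},
    "bed_C": {"bivalve", "coral"},
    "bed_D": {"coral", "gastropod", "bivalve"},
}

def jaccard(taxon1, taxon2):
    """Fraction of samples containing both taxa among samples
    containing either one: a simple measure of association."""
    with1 = {s for s, taxa in samples.items() if taxon1 in taxa}
    with2 = {s for s, taxa in samples.items() if taxon2 in taxa}
    union = with1 | with2
    return len(with1 & with2) / len(union) if union else 0.0

# ostracod and gastropod co-occur in 2 of the 3 beds holding either.
print(round(jaccard("ostracod", "gastropod"), 2))
```

Indices like this, computed pairwise over many taxa, are one starting point for clustering assemblages into recurrent community types.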
Major principles:
While the functions and relationships of fossil organisms may not be observed directly (as in ecology), scientists can describe and analyze both individuals and communities over time. To do so, paleoecologists make the following assumptions: All organisms are adapted and restricted to a particular environment, and are usually adapted to a particular lifestyle.
Essentially all organisms depend on another organism, whether directly or indirectly.
The fossil and physical records are inherently incomplete: the geologic record is selective, and some environments are more likely to be preserved than others. Taphonomy, affecting the over- and underrepresentation of fossils, is an extremely important consideration in interpreting fossil assemblages.
Uniformitarianism is the concept that processes that took place in the geologic past are the same as the ones that are observed taking place today. In paleoecology, uniformitarianism is used as a methodology: paleoecologists make inferences about ancient organisms and environments based on analogies they find in the present.
Paleoecological methods:
The aim of paleoecology is to build the most detailed model possible of the life environment of previously living organisms found today as fossils. The process of reconstructing past environments requires the use of archives (e.g., sediment sequences), proxies (e.g., the micro or mega-fossils and other sediment characteristics that provide the evidence of the biota and the physical environment), and chronology (e.g., obtaining absolute (or relative) dating of events in the archive). Such reconstruction takes into consideration complex interactions among environmental factors such as temperatures, food supplies, and degree of solar illumination. Often much of this information is lost or distorted by the fossilization process or diagenesis of the enclosing sediments, making interpretation difficult.
Paleoecological methods:
Some other proxies for reconstructing past environments include charcoal and pollen, which synthesize fire and vegetation data, respectively. Both of these proxies can be found in lakes and peat settings, and can provide moderate to high resolution information. These are well-studied methods often utilized in the paleoecological field. The environmental complexity factor is normally tackled through statistical analysis of the available numerical data (quantitative paleontology or paleostatistics), while the study of post-mortem processes is known as the field of taphonomy.
Quaternary:
Because the Quaternary period is well represented in geographically extensive and high temporal-resolution records, many hypotheses arising from ecological studies of modern environments can be tested at the millennial scale using paleoecological data. In addition, such studies provide historical (pre-industrialization) baselines of species composition and disturbance regimes for ecosystem restoration, or provide examples for understanding the dynamics of ecosystem change through periods of large climate changes. Paleoecological studies are used to inform conservation, management and restoration efforts. In particular, fire-focused paleoecology is an informative field of study to land managers seeking to restore ecosystem fire regimes.
**Arcuate nucleus**
Arcuate nucleus:
The arcuate nucleus of the hypothalamus (also known as ARH, ARC, or infundibular nucleus) is an aggregation of neurons in the mediobasal hypothalamus, adjacent to the third ventricle and the median eminence. The arcuate nucleus includes several important and diverse populations of neurons that help mediate different neuroendocrine and physiological functions, including neuroendocrine neurons, centrally projecting neurons, and astrocytes. The populations of neurons found in the arcuate nucleus are based on the hormones they secrete or interact with and are responsible for hypothalamic function, such as regulating hormones released from the pituitary gland or secreting their own hormones. Neurons in this region are also responsible for integrating information and providing inputs to other nuclei in the hypothalamus or inputs to areas outside this region of the brain. These neurons, generated from the ventral part of the periventricular epithelium during embryonic development, locate dorsally in the hypothalamus, becoming part of the ventromedial hypothalamic region. The function of the arcuate nucleus relies on its diversity of neurons, but its central role is involved in homeostasis. The arcuate nucleus provides many physiological roles involved in feeding, metabolism, fertility, and cardiovascular regulation.
Cell populations:
Neuroendocrine neurons Different groups of arcuate nucleus neuroendocrine neurons secrete various types or combinations of neurotransmitters and neuropeptides, such as neuropeptide Y (NPY), gonadotropin-releasing hormone (GnRH), agouti-related peptide (AgRP), cocaine- and amphetamine-regulated transcript (CART), kisspeptin, dopamine, substance P, growth hormone–releasing hormone (GHRH), neurokinin B (NKB), β-endorphin, melanocyte-stimulating hormone (MSH), and somatostatin. Proopiomelanocortin (POMC) is a precursor polypeptide that is cleaved into MSH, ACTH, and β-endorphin and expressed in the arcuate nucleus.Groups of neuroendocrine neurons include: TIDA neurons, or tuberoinfundibular dopamine neurons, are neurons that regulate the secretion of prolactin from the pituitary gland and release the neurotransmitter dopamine. TIDA neurons have nerve endings in the median eminence that release dopamine into the hypophysial portal blood. In lactating females, TIDA neurons are inhibited by the stimulus of suckling. Dopamine released from their nerve endings at the median eminence is transported to the anterior pituitary gland, where it regulates the secretion of prolactin. Dopamine inhibits prolactin secretion, so when the TIDA neurons are inhibited, there is increased secretion of prolactin, which stimulates lactogenesis (milk production). Prolactin acts in a short-loop negative feedback manner to decrease its levels by stimulating the release of dopamine. Dopaminergic neurons of the arcuate also inhibit the release of gonadotropin-releasing hormone, explaining in part why lactating (or otherwise hyperprolactinemic) women experience oligomenorrhea or amenorrhea (infrequency or absence of menses).
Cell populations:
Kisspeptin/NKB neurons within the arcuate nucleus form synaptic inputs with TIDA neurons. These neurons express estrogen receptors and also coexpress neurokinin B in female rats.
GHRH neurons help to control growth hormone (GH) secretion in conjunction with somatostatin and NPY.
Cell populations:
NPY/AgRP neurons and POMC/CART neurons make up two groups of neurons in the arcuate nucleus that are centrally involved in the neuroendocrine function of feeding. Medial neurons utilize NPY peptides as neurotransmitters to stimulate appetite, and lateral neurons utilize POMC/CART to inhibit appetite. NPY and POMC/CART neurons are sensitive to peripheral hormones such as leptin and insulin. POMC/CART neurons also secrete melanocyte-stimulating hormone, which suppresses appetite. GnRH neurons have also been found; these neurons secrete GnRH and histamine.
Cell populations:
There are also groups of neurons expressing NKB and dynorphin that help to control reproduction.
Cell populations:
Centrally-projecting neurons Other types of neurons have projection pathways from the arcuate nucleus to mediate different regions of the hypothalamus or regions outside it. Projections of these neurons extend a long distance from the arcuate nucleus to the median eminence to influence the release of hormones from the pituitary gland. Neurons of the arcuate nucleus have intrahypothalamic projections for neuroendocrine circuitry; for example, neural projections that influence feeding behavior project to the paraventricular nucleus of the hypothalamus (PVH), the dorsomedial hypothalamic nucleus (DMH), and the lateral hypothalamic area (LHA). Populations of neurons connect to the intermediate lobes of the pituitary gland, from the lateral division of the ARH to the neural and intermediate parts of the pituitary gland, and from the caudal division of the ARH to the median eminence. Groups of neurons that project elsewhere within the central nervous system include: centrally projecting neurons that contain neuropeptide Y (NPY), agouti-related protein (AGRP), and the inhibitory neurotransmitter GABA. These neurons, in the most ventromedial part of the nucleus, project strongly to the lateral hypothalamus and to the paraventricular nucleus of the hypothalamus, and are important in the regulation of appetite. When activated, these neurons can produce ravenous eating. They are inhibited by leptin, insulin, and peptide YY and activated by ghrelin.
Cell populations:
Centrally projecting neurons that contain peptide products of pro-opiomelanocortin (POMC), and cocaine- and amphetamine-regulated transcript (CART). These neurons have widespread projections to many brain areas, including to all nuclei in the hypothalamus. These cells are important in the regulation of appetite, and, when activated, they inhibit feeding. These neurons are activated by circulating concentrations of leptin and insulin, and they are directly innervated and inhibited by the NPY neurons. POMC neurons that project to the medial preoptic nucleus are also involved in the regulation of sexual behavior in both males and females. The expression of POMC is regulated by gonadal steroids. The release of a POMC product, beta-endorphin, is regulated by NPY.
Cell populations:
Centrally projecting neurons that make somatostatin; the neurosecretory somatostatin neurons that regulate growth hormone secretion are a different population, located in the periventricular nucleus.
Feeding regulatory neurons also activate oxytocin-containing neurons of the periventricular nucleus (PVN), which projects to nucleus of tractus solitarius in the medulla oblongata.
Others receive direct synaptic inputs from extrahypothalamic sites projecting into the amygdala, the hippocampus, and the entorhinal cortex.
Other neurons Other cell populations include: a small population of neurons that is sensitive to ghrelin. The role of this population is not known; many neurons in the arcuate nucleus express receptors for ghrelin, but these are thought to respond mainly to blood-borne ghrelin.
The arcuate nucleus also contains a population of specialized ependymal cells, called tanycytes.
Cell populations:
Astrocytes in the arcuate nucleus hold high-capacity glucose transporters that function as nutrient sensors for appetite-controlling neurons. The diverse and specialized collections of neurons reside within a special compartment with glial cells and have their own network of capillaries and a membrane of tanycytes that helps create a blood–brain barrier. Circulating molecules such as hormones travel in the blood and can directly affect these neurons and their plasticity, as evidenced by adult neurogenesis.
**Sober**
Sober:
Sober usually refers to sobriety, the state of not having any measurable levels or effects from alcohol or drugs.
Sober may also refer to:
Music:
Sôber, Spanish rock band
Songs:
"Sober" (Bad Wolves song), from the 2019 album Nation
"Sober" (Big Bang song), from the 2016 album Made
"Sober" (Childish Gambino song), from the 2014 extended play Kauai
"Sober" (Demi Lovato song), 2018 single
"Sober" (G-Eazy song), from the 2017 album The Beautiful & Damned
"Sober" (Inna song), 2020 single
"Sober" (Jennifer Paige song), from the 1999 album Jennifer Paige
"Sober" (Kelly Clarkson song), from the 2007 album My December
"Sober" (Little Big Town song), from the 2012 album Tornado
"Sober" (Lorde song) and "Sober II (Melodrama)", two songs from the 2017 album Melodrama
"Sober" (Loreen song), from the 2012 album Heal
"Sober" (Pink song), from the 2008 album Funhouse
"Sober" (Selena Gomez song), from the 2015 album Revival
"Sober" (Tool song), from the 1993 album Undertow
"Sober", by Bazzi
"Sober", by Blink-182 from the 2016 album California
"Sober", by Cheat Codes
"Sober", by DJ Snake featuring John Ryan, from the 2016 album Encore
"Sober", by Muse from the 1999 album Showbiz
"Sober", by Sam Smith from the 2020 album Love Goes
"Sober", by Fidlar from the 2015 album Too
People:
Bojan Sober (born 1957), Croatian opera singer
Elliott Sober (born 1947), American philosopher of science
Olga Sober, Serbian singer
Places:
Sober Hall, village in Ingleby Barwick, England
Sober Island, Nova Scotia
Sober, Spain
Other:
Sober Grid, an app to help people in recovery from alcohol and drug addiction find and connect with one another for peer support
Sober Meal, painting by Pieter Franciscus Dierckx
Sober space, a type of topological space in mathematics
Sober living houses
Sober (worm), a family of computer worms
**Juvenile nephronophthisis**
Juvenile nephronophthisis:
Juvenile nephronophthisis is the juvenile form of nephronophthisis that causes end stage kidney disease around the age of 13; infantile nephronophthisis and adolescent nephronophthisis cause ESKD around the ages of 1 and 19, respectively.
Signs and symptoms:
Typically, the signs and symptoms of juvenile nephronophthisis are limited to the kidneys. They include polyuria, polydipsia, weakness, and fatigue. Anemia and growth retardation occur, but there is no hypertension. Proteinuria and hematuria are usually absent. The polyuria is resistant to vasopressin.
When other organ systems are affected, symptoms can include situs inversus, heart abnormalities, and liver fibrosis. Juvenile nephronophthisis can also be associated with other rare disorders, including Senior–Løken syndrome and Joubert syndrome.
Pathophysiology:
Juvenile nephronophthisis causes fibrosis and scarring of the kidneys, which accounts for the symptoms observed. The kidneys also often have corticomedullary cysts.
An inability to conserve sodium, due to a tubular defect, leads to polyuria and polydipsia.
Anemia is attributed to a deficiency of erythropoietin production by failing kidneys.
Growth retardation, malaise and pallor are secondary to anemia.
There is no hypertension, as nephronophthisis is a salt-wasting nephropathy.
Diagnosis:
Ultrasonography shows bilateral small kidneys with loss of the corticomedullary junction and multiple cysts only in the medulla. Cysts may only be seen if they are large enough; they are rarely visible early in the disease.
Differential diagnosis Patients with medullary cystic disease present with features similar to juvenile nephronophthisis, but the two can be differentiated by: Absence of growth retardation.
Age of presentation is third or fourth decade.
Hypertension may occur (in JN, hypertension is not seen). In polycystic kidney disease, there is bilateral enlargement of the kidneys (small kidneys in JN).
Treatment:
The only treatment option is renal transplantation.
Epidemiology:
It is the most common genetic cause of end stage kidney disease (kidney failure) in childhood and adolescence.
**Insulin-degrading enzyme**
Insulin-degrading enzyme:
Insulin-degrading enzyme (IDE), known alternatively as insulysin or insulin protease, is a large zinc-binding protease of the M16 metalloprotease family known to cleave multiple short polypeptides that vary considerably in sequence. Other members of this family include the mitochondrial processing peptidase and presequence protease.
Structure:
Gene The gene IDE encodes the protein insulin-degrading enzyme. The human IDE gene has 28 exons and is located at chromosome band 10q23–q25.
Protein Due to alternative splicing, the human insulin-degrading enzyme protein has two isoforms. Isoform 1 is ~118 kDa and comprises 1019 amino acids, while isoform 2 is ~54.2 kDa and comprises 464 amino acids (lacking amino acids 1–555). The calculated theoretical pI of this isoform is 6.26.
Structure:
Structural studies of IDE by Shen et al. have provided insight into the functional mechanisms of the protease. Reminiscent of the previously determined structure of the bacterial protease pitrilysin, the IDE crystal structure reveals defined N and C terminal units that form a proteolytic chamber containing the zinc-binding active site. In addition, it appears that IDE can exist in two conformations: an open conformation, in which substrates can access the active site, and a closed state, in which the active site is contained within the chamber formed by the two concave domains. Targeted mutations that favor the open conformation result in a 40-fold increase in catalytic activity. Based upon this observation, it has been proposed that a possible therapeutic approach to Alzheimer’s might involve shifting the conformational preference of IDE to the open state, and thus increasing Aβ degradation, preventing aggregation, and, ideally, preventing the neuronal loss that leads to disease symptoms.
Function:
IDE was first identified by its ability to degrade the B chain of the hormone insulin. This activity was observed over sixty years ago, though the enzyme specifically responsible for B chain cleavage was identified more recently. This discovery revealed considerable amino acid sequence similarity between IDE and the previously characterized bacterial protease pitrilysin, suggesting a common proteolytic mechanism. IDE, which migrates at 110 kDa during gel electrophoresis under denaturing conditions, has since been shown to have additional substrates, including the signaling peptides glucagon, TGF alpha, and β-endorphin.
Clinical Significance:
Alzheimer's disease Considerable interest in IDE has been stimulated due to the discovery that IDE can degrade amyloid beta (Aβ), a peptide implicated in the pathogenesis of Alzheimer's disease. The underlying cause or causes of the disease are unclear, though the primary neuropathology observed is the formation of amyloid plaques and neurofibrillary tangles. One hypothesized mechanism of disease, called the amyloid hypothesis, suggests that the causative agent is the hydrophobic peptide Aβ, which forms quaternary structures that, by an unclear mechanism, cause neuronal death. Aβ is a byproduct generated as the result of proteolytic processing of the amyloid precursor protein (APP) by proteases referred to as the β and γ secretases. The physiological role of this processing is unclear, though it may play a role in nervous system development. Numerous in vitro and in vivo studies have shown correlations between IDE, Aβ degradation, and Alzheimer's disease. Mice engineered to lack both alleles of the IDE gene exhibit a 50% decrease in Aβ degradation, resulting in cerebral accumulation of Aβ. Studies of genetically inherited forms of Alzheimer's show reduction in both IDE expression and catalytic activity among affected individuals. Despite the evident role of IDE in disease, relatively little is known about its physiological functions. These may be diverse, as IDE has been localized to several locations, including the cytosol, peroxisomes, endosomes, proteasome complexes, and the surface of cerebrovascular endothelial cells.
Clinical Significance:
Based upon the aforementioned observation in protein structure, it has been proposed that a possible therapeutic approach to Alzheimer’s might involve shifting the conformational preference of IDE to the open state, and thus increasing Aβ degradation, preventing aggregation, and, ideally, preventing the neuronal loss that leads to disease symptoms.
Clinical Significance:
Regulation of extracellular amyloid β-protein Reports of IDE localized to the cytosol and peroxisomes have raised concerns regarding how the protease could degrade endogenous Aβ. Several studies have detected insulin-degrading activity in the conditioned media of cultured cells, suggesting the permeability of the cell membrane and thus possible release of IDE from leaky cells. Qiu and colleagues revealed the presence of IDE in the extracellular media using antibodies to the enzyme. They also quantified levels of Aβ-degrading activity using elution from column chromatography. Correlating the presence of IDE and Aβ-degrading activity in the conditioning medium confirmed that leaky membranes are responsible for extracellular IDE activity. However, other reports have indicated that it is released via exosomes.
Clinical Significance:
Potential role in the oligomerization of Aβ Recent studies have observed that the oligomerization of synthetic Aβ was completely inhibited by the competitive IDE substrate, insulin. These findings suggest that IDE activity is capable of joining several Aβ fragments together. Qui et al. hypothesized that the Aβ fragments generated by IDE can either enhance oligomerization of the Aβ peptide or can oligomerize themselves. It is also entirely possible that IDE could mediate the degradation and oligomerization of Aβ by independent actions that have yet to be investigated.
Mechanism:
The mechanism of the IDE enzyme remains poorly understood. In the first step of one proposed mechanism, a zinc-bound hydroxide group performs a nucleophilic attack on a substrate carbon, yielding the intermediate INT1. In this species, the zinc-bound hydroxide has been completely transferred to the carbonyl carbon of the substrate as a consequence of the Zn2+−OH bond breaking. In TS2, the Glu111 residue rotates to assume the right disposition to form two hydrogen bonds with the amide nitrogen and the −OH group linked to the carbon atom of the substrate, thus behaving as hydrogen donor and acceptor simultaneously. The formation of the second of these bonds favors the re-establishment of the Zn2+−OH bond broken previously at the INT1 level. The nucleophilic addition and the protonation of the peptide amide nitrogen is a very fast process that is believed to occur as a single step in the catalytic process. The final species on the path is the product PROD. As a consequence of the transfer of the proton of Glu111 onto the amide nitrogen of the substrate, which occurred in TS3, the peptide N—C bond is broken.
Mechanism:
A look at the whole reaction path indicates that the rate-determining step in this process is the nucleophilic addition. After this point, the catalytic event should proceed without particular obstacles.
Model organisms:
Model organisms have been used in the study of IDE function. A conditional knockout mouse line, called Idetm1a(EUCOMM)Wtsi, was generated as part of the International Knockout Mouse Consortium program, a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty-three tests were carried out on mutant mice and two significant abnormalities were observed. Homozygous mutant animals displayed abnormal drinking behavior, and males also had an increased NK cell number.
**Polyp (medicine)**
Polyp (medicine):
In anatomy, a polyp is an abnormal growth of tissue projecting from a mucous membrane. If it is attached to the surface by a narrow elongated stalk, it is said to be pedunculated; if it is attached without a stalk, it is said to be sessile. Polyps are commonly found in the colon, stomach, nose, ear, sinus(es), urinary bladder, and uterus. They may also occur elsewhere in the body where there are mucous membranes, including the cervix, vocal folds, and small intestine. Some polyps are tumors (neoplasms) and others are non-neoplastic, for example hyperplastic or dysplastic, which are benign. The neoplastic ones are usually benign, although some can be pre-malignant, or concurrent with a malignancy.
Polyp (medicine):
The name is of ancient origin, in use in English from about 1400 for a nasal polyp, from Latin polypus through Greek. The animal of similar appearance called polyp is attested from 1742, although the word was earlier used for an octopus.
Digestive polyps:
Colorectal polyp While colon polyps are not commonly associated with symptoms, occasionally they may cause rectal bleeding, and on rare occasions pain, diarrhea or constipation. They are a concern because of the potential for colon cancer being present microscopically, and the risk of benign colon polyps becoming malignant over time. Since most polyps are asymptomatic, they are usually discovered at the time of colon cancer screening. Common screening methods are the occult blood test, colonoscopy with a modern flexible endoscope, sigmoidoscopy (usually with the older rigid endoscope), lower gastrointestinal series (barium enema), digital rectal examination (DRE), virtual colonoscopy, or Cologuard. Polyps are routinely removed at the time of colonoscopy, either with a wire loop known as a polypectomy snare (first described by P. Deyhle, Germany, 1970) or with biopsy forceps. If an adenomatous polyp is found, it must be removed, since such a polyp is pre-cancerous and has a propensity to become cancerous. To be certain, all polyps found by any diagnostic modality are removed during colonoscopy. Although colon cancer is usually not found in polyps smaller than 2.5 cm, all polyps found are removed, since their removal reduces the likelihood of future colon cancer. When adenomatous polyps are removed, a repeat colonoscopy is usually performed three to five years later. Most colon polyps can be categorized as sporadic.
Digestive polyps:
Inherited polyposis syndromes: familial adenomatous polyposis, Peutz–Jeghers syndrome, Turcot syndrome, juvenile polyposis syndrome, Cowden disease, Bannayan–Riley–Ruvalcaba syndrome (Bannayan–Zonana syndrome), Gardner's syndrome, and serrated polyposis syndrome. Non-inherited polyposis syndromes: Cronkhite–Canada syndrome. Types of colon polyps: malignant, hamartomatous, hyperplastic, inflammatory (including inflammatory fibroid polyp), and adenomatous. Adenomatous polyps, or adenomas, are polyps that grow on the lining of the colon and carry a high risk of cancer. The adenomatous polyp is considered pre-malignant, i.e., likely to develop into colon cancer. The other types of polyps that can occur in the colon, hyperplastic and inflammatory polyps, are unlikely to develop into colorectal cancer. About 5% of people aged 60 will have at least one adenomatous polyp of 1 cm diameter or greater. Multiple adenomatous polyps often result from familial polyposis coli or familial adenomatous polyposis, a condition that carries a very high risk of colon cancer.
Digestive polyps:
Types: Adenomas constitute approximately 10% of digestive polyps. Most polyps (approximately 90%) are small, usually less than 1 cm in diameter, and have little potential for malignancy. The remaining 10% of adenomas are larger than 1 cm and approach a 10% chance of containing invasive cancer. There are three types of adenomatous polyp. Tubular adenomas (tube-like in shape) are the most common adenomatous polyps; they may occur everywhere in the colon and are the least likely colon polyps to develop into colon cancer. Tubulovillous adenomas combine features of the other two types. Villous adenomas are commonly found in the rectal area and are normally larger than the other two types of adenomas. They tend to be non-pedunculated, velvety, or cauliflower-like in appearance, and they are associated with the highest morbidity and mortality rates of all polyps. They can cause hypersecretory syndromes characterized by hypokalemia and profuse mucous discharge, and can harbor carcinoma in situ or invasive carcinoma more frequently than other adenomas.
Digestive polyps:
Risks: The risk of progression to colorectal cancer increases if the polyp is larger than 1 cm and contains a higher percentage of villous component. The shape of the polyp is also related to the risk of progression into carcinoma. Polyps that are pedunculated (with a stalk) are usually less dangerous than sessile (flat) polyps. Sessile polyps have a shorter pathway for migration of invasive cells from the tumor into submucosal and more distant structures, and they are also more difficult to remove and ascertain. Sessile polyps larger than 2 cm usually contain villous features, have a higher malignant potential, and tend to recur following colonoscopic polypectomy. Although small polyps do not carry a significant risk of colon cancer, tubular adenomatous polyps may become cancerous as they grow larger, because larger polyps develop more villous components and may become sessile. It is estimated that an individual whose parents have been diagnosed with an adenomatous polyp has a 50% greater chance of developing colon cancer than individuals with no family history of colonic polyps. As of 2019, there is no way to establish the risks of colon polyps for patients with a family history of them. Overall, nearly 6% of the population, regardless of family history, is at risk of developing colon cancer.
Digestive polyps:
Screening: Screening for colonic polyps, as well as preventing them, has become an important part of managing the condition. Medical societies have established guidelines for colorectal screening in order to prevent adenomatous polyps and to minimize the chances of developing colon cancer. It is believed that some changes in diet might help prevent polyps from occurring, but there is no way to prevent polyps from developing into cancerous growths other than detecting and removing them. As colon polyps grow, they can sometimes cause bleeding within the intestine, which can be detected by an occult blood test. According to American Cancer Society guidelines, people over 50 should have an annual occult blood test. People in their 50s are recommended to have flexible sigmoidoscopies performed once every 3 to 5 years to detect any abnormal growth which could be an adenomatous polyp. If adenomatous polyps are detected during this procedure, a colonoscopy is recommended. Medical societies recommend colonoscopies every ten years starting at age 50 as a necessary screening practice for colon cancer. The screening provides an accurate image of the intestine and also allows the removal of the polyp, if found. Once an adenomatous polyp is identified during colonoscopy, there are several methods of removal, including using a snare or a heating device. Colonoscopies are preferred over sigmoidoscopies because they allow the examination of the entire colon and can detect polyps in the upper colon, where more than half of polyps occur. It has been statistically demonstrated that screening programs are effective in reducing the number of deaths caused by colon cancer due to adenomatous polyps. The risk of complications associated with colonoscopies is approximately 0.35 percent, compared to a lifetime risk of developing colon cancer of around 6 percent. As there is a small likelihood of recurrence, surveillance after polyp removal is recommended.
Endometrial polyp:
An endometrial polyp or uterine polyp is a polyp or lesion in the lining of the uterus (endometrium) that takes up space within the uterine cavity. Commonly occurring, they are experienced by up to 10% of women. They may have a large flat base (sessile) or be attached to the uterus by an elongated pedicle (pedunculated). Pedunculated polyps are more common than sessile ones. They range in size from a few millimeters to several centimeters. If pedunculated, they can protrude through the cervix into the vagina. Small blood vessels may be present in polyps, particularly large ones.
Cervical polyp:
A cervical polyp is a common benign polyp or tumor on the surface of the cervical canal. They can cause irregular menstrual bleeding or increased pain but often show no symptoms.
Nasal polyps:
Nasal polyps are polypoidal masses arising mainly from the mucous membranes of the nose and paranasal sinuses. They are overgrowths of the mucosa that frequently accompany allergic rhinitis. They are freely movable and nontender.
Laryngeal polyps:
Polyps on the vocal folds can take on many different forms, and can sometimes result from vocal abuse, although this is not always the cause. They can occur on one or both vocal folds, and appear as swelling, a bump (similar to a nodule), a stalk-like growth, or a blister-like lesion. Most polyps are larger than nodules, which are more similar to calluses on the vocal folds. Polyps and nodules can exhibit similar symptoms, including hoarseness or breathiness, a "rough" or "scratchy" voice, harshness in vocal quality, shooting pain from ear to ear, the sensation of having "a lump in the back of the throat", neck pain, decreased pitch range in the voice, and vocal and bodily fatigue. If an individual experiences symptoms for more than 2 to 3 weeks, they should see a physician. For a diagnosis, a thorough evaluation of the voice should include a physical examination, preferably by an otolaryngologist (ear, nose, and throat doctor) who specializes in voice; a voice evaluation with a speech-language pathologist (SLP); and, in certain cases, a neurological examination. The qualities of the voice that will be evaluated include quality, pitch, loudness, and the ability to sustain voicing. In some cases, an instrumental examination may be performed with an endoscope into the mouth or nose; this gives a clear look at the vocal folds and larynx in general. In addition, a stroboscope (flashing light) may be used to observe the movement of the vocal folds during speech. Polyps may be treated with medical, surgical, or behavioral intervention. Surgical intervention involves removing the polyp from the vocal fold. This approach is only used when the growth(s) are very large or have existed for an extended amount of time. In children, surgical intervention is rare. Existing medical problems may be treated in an effort to reduce the strain and negative impact on the vocal cords. This could include treatment for gastrointestinal reflux disease, allergies, and thyroid problems.
Intervention to stop smoking and reduce stress may also be needed. Most people receive behavioral intervention, or vocal therapy, from an SLP. This might involve teaching good vocal hygiene and reducing or stopping vocal abuse behaviors. Direct voice treatments may be used to alter pitch, loudness, or breath support to promote good voicing.
**Phonovoltaic**
Phonovoltaic:
A phonovoltaic (pV) cell converts vibrational energy (phonons) into a direct current, much as the photovoltaic effect in a photovoltaic (PV) cell converts light (photons) into electric power. That is, it uses a p-n junction to separate the electrons and holes generated as valence electrons absorb optical phonons more energetic than the band gap, and then collects them at the metallic contacts for use in a circuit. The pV cell is an application of heat transfer physics and competes with other thermal energy harvesting devices such as the thermoelectric generator.
Phonovoltaic:
While the thermoelectric generator converts heat, a broad spectrum of phonon and electron energy, to electricity, the pV cell converts only a narrow band of phonon energy, i.e., only the most energetic optical phonon modes. A narrow band of excited optical phonons has much less entropy than heat. Thus, the pV cell can exceed the thermoelectric efficiency. However, exciting and harvesting the optical phonon poses a challenge.
Satisfying the laws of thermodynamics:
By the first law of thermodynamics, the excitation driving electron generation in both photo- and phonovoltaic cells, i.e., the photon or phonon, must have more energy than the semiconductor band gap. For a PV cell, many materials are available with a band gap ( ΔE_e,g ) well matched to the solar photon spectrum, such as silicon or gallium arsenide. For a pV cell, however, no current semiconducting materials have a band gap smaller than the energy of their most energetic (optical) phonon modes ( E_p,O ). Thus, novel materials are required with both energetic optical phonon modes (around 100 meV or more, e.g., graphene, diamond, or boron nitride) and a small band gap ( ΔE_e,g < E_p,O , e.g., graphene).
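This first-law constraint can be sketched with a quick comparison. The phonon energies and band gaps below are rough, assumed values chosen only for illustration, not authoritative material data:

```python
# Illustrative check of the requirement that the optical phonon be more
# energetic than the band gap (E_p,O > dE_e,g). The values below are
# rough, assumed figures used only for illustration.
materials = {
    # name: (optical phonon energy [meV], band gap [meV])
    "silicon":  (63, 1120),
    "diamond":  (165, 5470),
    "graphene": (196, 0),   # pristine graphene has no band gap
}

def satisfies_first_law(E_pO_meV, gap_meV):
    """True when the optical phonon mode carries enough energy
    to excite an electron across the band gap."""
    return E_pO_meV > gap_meV

for name, (E_pO, gap) in materials.items():
    ok = satisfies_first_law(E_pO, gap)
    print(f"{name:8s}  E_p,O = {E_pO:3d} meV  gap = {gap:4d} meV  ok = {ok}")
```

As the comparison suggests, conventional semiconductors like silicon fail the test, while graphene passes trivially because its gap is zero; the challenge is opening a gap without destroying the other properties.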
Satisfying the laws of thermodynamics:
By the second law of thermodynamics, the excitation must be "hotter" than the cell for power generation to occur. In a PV, the light comes from an outside source, for example the sun, which is nearly 6000 kelvins, whereas the PV is around 300 kelvins. Thus, the second law is satisfied and energy conversion is possible. However, the crystal vibrations driving power generation in a pV are intrinsic to the material itself. As such, they cannot be imported from an outside source like the sun, but must instead be excited by some other process until they are hotter than the cell. The temperature of the optical phonon population is calculated by comparing the number of optical phonons to the number expected at a given temperature, which follows from Bose–Einstein statistics.
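As a sketch of that bookkeeping, the Bose–Einstein occupancy of a mode can be inverted to assign the phonon population an effective temperature; the 0.2 eV mode energy below is a graphene-like, illustrative choice:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def be_occupancy(E_eV, T):
    """Equilibrium Bose-Einstein occupancy of a phonon mode of energy E (eV)
    at temperature T (K): n = 1 / (exp(E / kB*T) - 1)."""
    return 1.0 / math.expm1(E_eV / (K_B * T))

def phonon_temperature(E_eV, n):
    """Effective temperature of a phonon population with occupancy n,
    found by inverting the Bose-Einstein distribution:
    T = E / (kB * ln(1 + 1/n))."""
    return E_eV / (K_B * math.log(1.0 + 1.0 / n))

E = 0.2  # eV, roughly graphene's optical phonon energy (illustrative)
n_eq = be_occupancy(E, 300.0)     # equilibrium occupancy at 300 K
n_hot = 10.0 * n_eq               # an externally pumped ("hot") population
print(phonon_temperature(E, n_eq))   # ~300 K: equilibrium recovered
print(phonon_temperature(E, n_hot))  # > 300 K: hotter than the cell
```

Only when the inferred phonon temperature exceeds the cell temperature, as in the pumped case, does the second law permit net power generation.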
Non-equilibrium optical phonon population and the nanoscale requirement:
There are a number of ways to excite a population of vibrations, i.e., create a hot optical phonon population. For example, if the electron population is excited, say by a laser or an electric field, the electrons will typically relax by emitting optical phonons. Additionally, a hot molecular gas can impart its vibrations to a crystal when chemisorbed. Regardless of method, the conversion efficiency is limited by the optical phonon temperature achieved as compared to the electron temperature within the device, due to Carnot's theorem.
Non-equilibrium optical phonon population and the nanoscale requirement:
In a nanoscale device, this temperature is approximately equal to the temperature of the device itself. However, in a macroscale device the generated electrons accumulate faster than they are collected. Thus, the electron population is heated up to the optical phonon temperature and further generation is inhibited. The down-conversion is simultaneously inhibited as the acoustic phonon population is heated to the optical phonon temperature. Thus, the large pV cell develops a near-equilibrium state where it is heated. At best, it will act like a thermoelectric generator and exhibit thermoelectric effects. Such a device is called a thermovoltaic, rather than a phonovoltaic.
Entropy generation and efficiency:
Entropy generation and inefficiency in a PV cell result from photons more energetic than the band gap producing electrons with kinetic energy in addition to the potential energy provided by the band gap. Similarly, optical phonon energy in excess of the band gap generates an entropy flow in the pV cell rather than electric power. The energy efficiency ( η_φ ) is quantified by the ratio of the band gap and optical phonon energy, that is

\eta_\phi = \frac{\Delta E_{e,g}}{E_{p,O}}.

In addition to this typical inefficiency, hot optical phonon populations tend to downconvert into multiple low-energy acoustic phonon modes (whereas photons typically do not downconvert into low-energy infrared waves). This efficiency ( η_QE ) is quantified by the tendency of a hot optical phonon to generate an electron-hole pair rather than downconvert, that is

\eta_{QE} = \frac{\dot\gamma_{e-p}}{\dot\gamma_{e-p} + \dot\gamma_{p-p}},

where \dot\gamma_{e-p} is the rate of generation and \dot\gamma_{p-p} is the rate of downconversion, i.e., the rate at which an optical phonon produces multiple low-energy acoustic phonons. This provides a second entropy flow reducing the efficiency of a pV cell.
Entropy generation and efficiency:
Finally, entropy is generated in both pV and PV cells due to the inefficient separation of the generated electrons and holes. This efficiency ( η_pn ) is limited by the Carnot efficiency,

\eta_{pn} \le 1 - \frac{T_{pV}}{T_{p,O}},

where T_pV is the temperature of the pV cell and T_p,O is the temperature of the optical phonon population, as dictated by Bose–Einstein statistics. This efficiency is reduced the smaller the band gap is in comparison to the thermal energy ( k_B T, where k_B is the Boltzmann constant and T is the temperature). Thus, the overall efficiency ( η_pV ) is the product

\eta_{pV} = \eta_\phi\,\eta_{QE}\,\eta_{pn},

where the temperature-independent terms become the material figure of merit ( Z_pV ),

Z_{pV} = \eta_\phi\,\eta_{QE}.

If the band gap and optical phonon mode are resonant, and the optical phonon tends to generate electrons, the phonovoltaic cell can approach the Carnot limit as T_pV → 0.
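A minimal numerical sketch of this efficiency chain follows, using the definitions above; the gap, rates, and temperatures are invented, illustrative inputs rather than measured values:

```python
def eta_phi(gap_eV, E_pO_eV):
    """Energy efficiency: fraction of the phonon energy captured as gap energy."""
    return gap_eV / E_pO_eV

def eta_qe(gen_rate, down_rate):
    """Quantum efficiency: chance a hot optical phonon generates an
    electron-hole pair rather than downconverting to acoustic phonons."""
    return gen_rate / (gen_rate + down_rate)

def eta_carnot(T_cell, T_phonon):
    """Carnot bound on the p-n junction's separation efficiency."""
    return 1.0 - T_cell / T_phonon

# Near-resonant illustration: 0.19 eV gap vs 0.20 eV phonon, generation five
# times faster than downconversion, 600 K phonons in a 300 K cell.
Z_pV = eta_phi(0.19, 0.20) * eta_qe(1.0, 0.2)  # temperature-independent figure of merit
eta_total = Z_pV * eta_carnot(300.0, 600.0)
print(f"Z_pV = {Z_pV:.3f}, overall efficiency = {eta_total:.3f}")
```

The product structure makes the design trade-off explicit: a near-resonant gap keeps η_φ high, while the generation-to-downconversion ratio and the phonon-to-cell temperature ratio cap the rest.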
The electron-phonon coupling:
The electron-phonon coupling is responsible for electron generation in the pV cell. In this phenomenon, the phonon leads to ion motion which perturbs the highest occupied valence state (HOS). This state begins to overlap with the lowest unoccupied conduction state (LUS), and the electron can switch states if energy and momentum are conserved. If it does, an electron-hole pair is generated.
The electron-phonon coupling:
Using a Taylor expansion of the change in electron potential, \varphi, due to the ionic displacement of a phonon provides a matrix element for use in Fermi's golden rule and the derivation of a generation rate. This Taylor expansion gives the matrix element

\langle f | H'_{e-p} | i \rangle = M_{e-p} = \left( \frac{\hbar}{2 \langle m \rangle \omega_{\kappa_p,\alpha}} \right)^{1/2} \left\langle \kappa_e + \kappa_p, j \left| \frac{\partial \varphi}{\partial d_{\kappa_p,\alpha}} \right| \kappa_e, i \right\rangle,

where \langle m \rangle is the average atomic mass, \omega_{\kappa_p,\alpha} and d_{\kappa_p,\alpha} are the frequency and atomic displacement due to a phonon with polarization \alpha and momentum \kappa_p, and | \kappa_e, i \rangle is the wavefunction for an electron with momentum \kappa_e in band i. From Fermi's golden rule,

\dot\gamma_{e-p} = \frac{2\pi}{\hbar} |M_{e-p}|^2\, \delta(E_{e,i,\kappa_e} - E_{e,j,\kappa_e+\kappa_p} \pm \hbar\omega_{\kappa_p,\alpha})\, f_{e,i,\kappa_e} \left(1 - f_{e,j,\kappa_e+\kappa_p}\right) \left( \tfrac{1}{2} \mp \tfrac{1}{2} + f_{p,\alpha,\kappa_p} \right),

where E_{e,i,\kappa_e} is the energy of an electron in band i with momentum \kappa_e, f_{e,i,\kappa_e} is the corresponding electron occupation, and f_{p,\alpha,\kappa_p} is the phonon occupancy.
The phonon-phonon coupling:
Competing with the generation of electrons is the downconversion of optical phonons into multiple acoustic phonons. The coupling arises from the crystal Hamiltonian (H) expanded in terms of the ionic displacement ( d_{i\alpha} ) from the equilibrium position ( r_i ) of atom i in direction \alpha, i.e.,

H = H_o + \frac{1}{2} \sum_{i,j} \sum_{\alpha,\beta} \left. \frac{\partial^2 \langle \varphi \rangle}{\partial d_{i\alpha}\, \partial d_{j\beta}} \right|_o d_{i\alpha} d_{j\beta} + \frac{1}{6} \sum_{i,j,k} \sum_{\alpha,\beta,\gamma} \left. \frac{\partial^3 \langle \varphi \rangle}{\partial d_{i\alpha}\, \partial d_{j\beta}\, \partial d_{k\gamma}} \right|_o d_{i\alpha} d_{j\beta} d_{k\gamma} + \cdots = H_o + \sum_{i,j} \sum_{\alpha,\beta} \Gamma_{ij,\alpha\beta}\, d_{i\alpha} d_{j\beta} + \sum_{i,j,k} \sum_{\alpha,\beta,\gamma} \Psi_{ijk,\alpha\beta\gamma}\, d_{i\alpha} d_{j\beta} d_{k\gamma} + \cdots,

where H_o is the ground-state Hamiltonian, the linear term vanishes (as the ground state is found by minimizing the energy with respect to the ionic positions), and \Gamma_{ij,\alpha\beta} and \Psi_{ijk,\alpha\beta\gamma} are the second- and third-order force constants between atoms i, j, and k when moved along coordinates \alpha, \beta, and \gamma. The second-order term is primarily responsible for the phonon dispersion, while the anharmonic (third-order and higher) terms are responsible for thermal expansion as well as phonon up-conversion (multiple low-energy phonons combine to form a high-energy phonon) and downconversion (a high-energy phonon splits into multiple low-energy phonons).
The phonon-phonon coupling:
Typically, up- and downconversion are dominated by the third-order interaction. Thus, the perturbation Hamiltonian used in Fermi's golden rule for phonon up- and downconversion is

H'_{p-p} = \Psi_{\kappa_p \kappa_p' \kappa_p'', \alpha\alpha'\alpha''}, \qquad \Psi_{\kappa_p \kappa_p' \kappa_p'', \alpha\alpha'\alpha''} = \sum_{ijk} \sum_{\alpha\beta\gamma} \frac{\Psi_{ijk,\alpha\beta\gamma}\, s_{i\alpha}^{\kappa_p} s_{j\alpha'}^{\kappa_p'} s_{k\alpha''}^{\kappa_p''}}{(\langle m \rangle\, \omega_{\kappa_p,\alpha})^{1/2}} \exp\!\left[ i \left( \kappa_p \cdot r_i + \kappa_p' \cdot r_j + \kappa_p'' \cdot r_k \right) \right],

where s_{i\alpha}^{\kappa_p} is the direction of displacement of atom i due to the phonon. The resulting downconversion rate, from Fermi's golden rule, is proportional to

|\Psi_{\kappa_p \kappa_p' \kappa_p'', \alpha\alpha'\alpha''}|^2\, \delta_{\kappa_p, \kappa_p' + \kappa_p''}\, \delta(\omega_{\kappa_p,\alpha} - \omega_{\kappa_p',\alpha'} - \omega_{\kappa_p'',\alpha''})\, (f_{p'} + f_{p''} + 1),

where two phonons are produced with polarizations \alpha' and \alpha'' and momenta \kappa_p' and \kappa_p''.
The suitability of graphene as a phonovoltaic material:
As outlined above, an efficient pV cell requires a material with an optical phonon mode more energetic than the band gap, which in turn is much more energetic than the thermal energy at the intended operating temperature ( E_p,O ≃ ΔE_e,g ≫ k_B T_pV ). Furthermore, the pV cell requires a material wherein a hot optical phonon prefers to produce an electron rather than multiple low-energy acoustic phonons ( \dot\gamma^*_{e-p} → 1 ).
The suitability of graphene as a phonovoltaic material:
Very few materials offer this combination of properties. Indeed, the vast majority of crystals have optical phonon energies below 50 meV, and those with more energetic optical phonons tend to have much more energetic band gaps. In general, a material with a first-row element (periodic table) is required to have a highly energetic optical phonon. However, the high electronegativity of first-row elements tends to create a very large band gap, as in diamond and the boron nitride allotropes. Graphene is one of the few materials which diverges from this trend, with no bandgap and an exceptionally energetic optical phonon mode near 200 meV. Thus, graphene has been the initial target for development of a phonovoltaic material through the opening and tuning of its bandgap. Opening and tuning the bandgap of graphene has received substantial attention, and numerous strategies have been suggested and investigated. These include the use of uniaxial strain, electric fields, and chemical doping and functionalization. In general, these mechanisms work by changing either the symmetry of graphene (both carbon atoms in the unit cell are identical) or its hybridization ( sp2 ).
The suitability of graphene as a phonovoltaic material:
In the first phonovoltaic material investigations, it was suggested that changing the hybridization destroys the electron-phonon coupling, while changing the symmetry preserves it. In particular, these investigations predict that hydrogenating graphene to produce graphane reduces the electron-phonon coupling so substantially that the material figure of merit vanishes, and that doping graphene with boron nitride maintains its strong electron-phonon coupling, such that the figure of merit is predicted to reach 0.65 and enable heat harvesting with twice the efficiency of a typical thermoelectric generator.
**Tempo rubato**
Tempo rubato:
Tempo rubato (Italian for 'stolen time'; Italian pronunciation: [ˈtɛmpo ruˈbaːto]; 'free in the presentation') is a musical term referring to expressive and rhythmic freedom achieved by a slight speeding up and then slowing down of the tempo of a piece at the discretion of the soloist or the conductor. Rubato is an expressive shaping of music that is a part of phrasing. While rubato is often loosely taken to mean playing with expressive and rhythmic freedom, it traditionally referred specifically to speeding up and then slowing down the tempo. In the past, expressive and free playing (beyond rubato alone) was often associated with the term "ad libitum". Rubato, even when not notated, is often used liberally by musicians; for example, singers frequently use it intuitively to let the tempo of the melody shift slightly and freely above that of the accompaniment. This intuitive shifting produces rubato's main effect: making music sound expressive and natural. The nineteenth-century composer-pianist Frédéric Chopin is often mentioned in the context of rubato (see Chopin's technique and performance style).
Tempo rubato:
The term rubato existed even before the Romantic era. In the 18th century, rubato meant expressing rhythm spontaneously, with freedom. In many cases, it was achieved by playing uneven notes. This idea was used by, among others, Ernst Wilhelm Wolf and Carl Philipp Emanuel Bach. In addition, Leopold Mozart claimed that the accompaniment should remain strictly in tempo. In the mid-18th century, the meaning of rubato began to change gradually: the term came to mean being able to move notes freely back and forth in time. Johann Friedrich Agricola interpreted rubato as "stealing the time". By the 19th century, rubato was understood slightly differently again. In Chopin's music, rubato functioned as a way to make a melody more emotional by changing the tempo through, for instance, accelerando, ritenuto, and syncopation. Chopin "often played with the melody subtly lingering or passionately anticipating the beat while the accompaniment stayed at least relatively, if not strictly, in time". In this case, rubato is used as a concept of flexibility of tempo for a more expressive melody.
Types:
One can distinguish two types of rubato: in one the tempo of the melody is flexible, while the accompaniment is kept in typical regular pulse (yet not rigidly in mechanical fashion; but adjusting to the melody as necessary—see below). Another type affects melody and accompaniment. While it is often associated with music of the Romantic Period, classical performers frequently use rubato for emotional expressiveness in all kinds of works.
Types:
Tempo rubato (or a tempo rubato) literally means "in robbed time", i.e., duration taken from one measure or beat and given to another, but in modern practice the term is quite generally applied to any irregularity of rhythm or tempo not definitely indicated in the score. The terms ad libitum (ad lib.), a piacere, and a capriccio also indicate a modification of the tempo at the will of the performer. Ad libitum means at liberty; a piacere, at pleasure; and a capriccio, at the caprice of the performer.
Types:
A tempo rubato. Lit. "in robbed time", i. e. time in which, while every bar is of its proper time value, one portion of it may be played faster or slower at the expense of the remaining portion, so that, if the first half be somewhat slackened, the second half is somewhat quickened, and vice versa. With indifferent performers, this indication is too often confounded with some expression signifying ad libitum.
Types:
The opinion given by Tom S. Wotton, that "every bar has its proper time value", may be regarded as an inaccurate description: Karl Wilson Gehrkens mentions "duration taken from one measure [...] and given to another", which implies bars of differing duration. Rubato relates to phrasing, and since phrases often span multiple bars, it is often impossible (and also undesirable) for each bar to be identically long.
Types:
Early twentieth century: Early twentieth-century rubato seems to have been very eventful. Robert Philip, in his book Early Recordings and Musical Style: Changing Tastes in Instrumental Performance, 1900–1950, specifies three types of rubato used at that time: accelerando and rallentando, tenuto and agogic accents, and melodic rubato.
Types:
Accelerando and rallentando: Late 19th-century dictionaries of musical terms defined tempo rubato as "robbed or stolen time." This effect can be achieved by a slight quickening of speed in ascending passages, for instance, and a calando on descending phrases. Ignacy Jan Paderewski says that tempo rubato relies on a "more or less important slackening or quickening of the time or rate of the movement." Many theoreticians and performers claimed at that time that the "robbed" time must eventually be "paid back" later within the same measure, so that the change of tempo would not affect the length of the measure. However, this balance theory caused controversy, as many theoreticians dismissed the assumption that the "stolen" time should necessarily be "paid back." In the third edition of Grove's Dictionary we read: "The rule has been given and repeated indiscriminately that the 'robbed' time must be 'paid back' within the bar. That is absurd, because the bar line is a notational, not a musical, matter. But there is no necessity to pay back even within the phrase: it is the metaphor that is wrong." Paderewski also discarded this theory, saying: "(...) the value of notes diminished in one period through an accelerando, cannot always be restored in another through a ritardando. What is lost is lost." Some theoreticians, however, rejected even the idea that rubato relies on accelerando and ritardando. They did not recommend that a performance be strictly metronomic, but they proposed a theory that rubato should instead consist of tenuto and shortened notes.
Types:
Tenuto and agogic accents: The first writer to extend the theory of "agogics" was Hugo Riemann, in his book Musikalische Dynamik und Agogik (1884). The theory was based on the idea of using small changes of rhythm and tempo for expression. Riemann used the term "agogic accent", by which he meant accentuation achieved by lengthening a note.
Types:
The theory found many supporters. J. Alfred Johnstone called the idea of agogic accents "quasi tempo rubato." He also expressed his appreciation for the theory, saying that "modern editors are coming to recognize it as one of the important principles of expressive interpretation." In his illustration of agogic accents in Mendelssohn's Andante and Rondo Capriccioso, Op. 14, Johnstone explains that even though the rhythm consists of equal quarter notes, they should not be played at the same length; the highest note of the phrase ought to be the longest, with the other notes shortened proportionally. One of the musicians known for using agogic accents in his playing was the violinist Joseph Joachim.
Types:
Some writers compared this type of rubato to declamation in speech. This idea was widely developed by singers. According to Gordon Heller: "If groups of notes happen to occur, which have to be sung to one word, the student must be careful to make the first note very slightly longer – though only very slightly – than the rest of the group. Should a triplet be written by the composer, care must be taken here to make the first note of the three a trifle longer than the rest, and thus give a musicianly rendering of it. To hurry the time in such a pace would spoil the rhythm..."

Melodic rubato: Both of the theories described above had their opponents and supporters. There was one question, though, that emerged in reference to both: regardless of whether a melody is released from strict note values by accelerando and ritardando or by agogic accents, should the accompaniment follow the melody or remain strictly in time? The latter means that the melody would be momentarily behind or ahead of the accompaniment. Eventually, in spite of the doubts of some, it became traditional for the accompaniment not to follow the flexibility of the melody. As Franklin Taylor writes: "It should be observed that any independent accompaniment to a rubato phrase must always keep strict time, and it is, therefore, quite possible that no note of a rubato melody will fall exactly with its corresponding note in the accompaniment, except, perhaps, the first note in the bar." Robert Philip's further research shows that these three components (accelerando and rallentando, tenuto and agogic accents, and melodic rubato) were most often used together, as each performer could combine all of them and give the melody flexibility in their own specific way.
Chopin:
Frédéric Chopin (1810–1849) wrote the term rubato in fourteen different works. All of the spots marked rubato in these fourteen compositions have a flowing melody in the right hand and several accompanying notes in the left hand. Thus, Chopin's rubato can be approached by delaying or anticipating those melody notes. According to descriptions of Chopin's playing, he played with the melody slightly delaying or excitedly anticipating the beat while the left-hand accompaniment went on playing in time. Usually, his use of the term rubato in a score suggested a brief effect. However, when sempre rubato was marked, it indicated a rubato continuing for about two measures. Interestingly, Chopin never marked a tempo following rubato. This leaves the length of the "momentary effect" up to the interpretation of the performer. Therefore, the performer must understand why the composer indicated rubato.
Chopin:
Chopin marks the word rubato in his compositions for three purposes: to articulate a repetition, to emphasize an expressive high point or appoggiatura, and to set a particular mood at the beginning of a piece.

The first main purpose for which Chopin marks rubato is to articulate the repetition of a unit of music. For example, the rubato marked in bar 9 of the Mazurka Op. 6 No. 1 points out the beginning of the repetition after the first eight-measure unit. Another example of this usage occurs in the Mazurka Op. 7 No. 3. In this piece, the theme begins at measure 9 and repeats at measure 17, which is where the rubato is marked. From this, the performer is given the cue to approach the repeated material differently the second time it occurs.

Chopin's second main purpose for using rubato is to create an intensely expressive moment, such as at the high point of a melodic line or at an appoggiatura. For example, in the Nocturne Op. 9 No. 2, bar 26 has an intensely singing moment where the melody leaps up to an E-flat. However, this E-flat is not the highest point of the phrase. Therefore, Chopin marked poco rubato to signify to the player that they can emphasize the intensely expressive moment, but should also hold back for the actual climax occurring one measure later. A second example of rubato used at a singing moment is in his Second Piano Concerto: in a similar situation, the melody leaps up to three A-flats played consecutively, and the rubato marked there tells the player to perform them in a singing quality.

Chopin primarily marks rubato to emphasize expressive melodic lines or repetition. However, in some cases, he also uses rubato to establish a certain mood at the beginning of a piece. The Nocturne Op. 15 No. 3 is one example of rubato being used to set up a mood: Chopin marked Languido e rubato in the first bar as a general suggestion of the work's overall manner of delivery.
The rubato in a languid manner would affect the tempo, tone color, touch, and dynamics, which influence performers to set the mood at the beginning of the piece.
Quotations:
There is no absolute rhythm. In the course of the dramatic developments of a musical composition, the initial themes change their character, consequently rhythm changes also, and, in conformity with that character, it has to be energetic or languishing, crisp or elastic, steady or capricious.
[...] Rubato must emerge spontaneously from the music, it can't be calculated but must be totally free. It's not even something you can teach: each performer must feel it on the basis of his or her own sensitivity. There's no magic formula: to assume otherwise would be ridiculous.
Performers also frequently show a tendency to speed up and slow down when this is not indicated in the score. Such modifications of tempo typically occur in relation to phrase structure, as a way of marking phrase boundaries.
Quotations:
Tempo Rubato is a potent factor in musical oratory, and every interpreter should be able to use it skillfully and judiciously, as it emphasizes the expression, introduces variety, infuses life into mechanical execution. It softens the sharpness of lines, blunts the structural angles without ruining them, because its action is not destructive: it intensifies, subtilizes, idealizes the rhythm. As stated above, it converts energy into languor, crispness into elasticity, steadiness into capriciousness. It gives music, already possessed of the metric and rhythmic accents, a third accent, emotional, individual, that which Mathis Lussy, in his excellent book on musical expression, calls l'accent pathètique.
Quotations:
Variations of Tempo, the ritardando, accelerando, and tempo rubato, are all legitimate aids demanded by Expression. [...] use is determined by sound judgment and correct musicianly taste.
Because the purpose of rubato is to add a sense of improvisatory freedom to the performance, one should avoid using the same kind of rubato repeatedly in a piece. Stretching or rushing successive phrases in the same way creates a monotonous sense of predictability that defeats the purpose.
Quotations:
In keeping tempo Chopin was inflexible, and it will surprise many to learn that the metronome never left his piano. Even in his much-slandered rubato, one hand, the accompanying hand, always played in strict tempo, while the other - singing, either indecisively hesitating or entering ahead of the beat and moving more quickly with a certain impatient vehemence, as in passionate speech - freed the truth of the musical expression from all rhythmic bonds.
Misinterpretations:
Definitions of musical concepts (such as rubato) cause misinterpretations if they disregard artistic musical expression. The type of rubato in which the accompaniment is kept regular does not require absolute regularity; the accompaniment still gives full regard to the melody (often the singer or soloist) and yields tempo where necessary: It is amusing to note that even some serious persons express the idea that in tempo rubato "the right hand may use a certain freedom while the left hand must keep strict time." (See Frederick Niecks' Life of Chopin, II, p. 101.) A nice sort of music would result from such playing. Something like the singing of a good vocalist accompanied by a poor blockhead who hammers away in strict time without yielding to the singer who, in sheer despair, must renounce all artistic expression. In the music of Chopin, the word "rubato" appears in just 14 of his works. While other composers (such as Schumann and Mahler) are ignored with regard to this issue, we often fail to consider the German terms, like "Zeit lassen", for the same principle. The fact that "rubato" is more an aspect of performance than a compositional device makes us question whether some other terms that could be interpreted as tempo distortions, like "cédez", "espressivo", "calando", "incalzando", or even Brahms' special "dolce" and "sostenuto", are clear-cut in performance. [...] nothing in general can be more disagreeable than this species of brilliant accompaniment, where the voice is only considered as an accessory and where the accompanier, without regarding the taste, feeling, compass, or style of the singer, the pathos of the air, or sense of the words, either mechanically runs through the prescribed solemnity of the adagio, with the one two three precision of the metronome, or rattles away without mercy through the allegro whenever an occasion presents itself for the luxuriant ad libitum introduction of turns, variations, and embellishments.
Misinterpretations:
[...] a Metronome is apt to kill the finer Time-sense implied by Rubato.
Examples:
Sergei Rachmaninoff is one of the composers who uses the proper term "tempo rubato" in some passages of his orchestral works, such as the introduction to the 2nd movement of his Symphonic Dances.
Another example is the 2nd theme of the first movement of his Symphony No. 3: Rachmaninoff's rubato re-created the eloquence and sure musical instinct that must have characterised the rubato-practise of Mozart, Beethoven or Chopin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**86 (number)**
86 (number):
86 (eighty-six) is the natural number following 85 and preceding 87.
In mathematics:
86 is: a nontotient and a noncototient.
the 25th distinct semiprime and the 13th of the form (2 × q).
together with 85 and 87, forms the middle semiprime in the 2nd cluster of three consecutive semiprimes; the first comprising 33, 34, 35.
an Erdős–Woods number, since it is possible to find sequences of 86 consecutive integers such that each inner member shares a factor with either the first or the last member.
a happy number and a self number in base 10.
In mathematics:
with an aliquot sum of 46, itself a semiprime, within an aliquot sequence of seven members (86, 46, 26, 16, 15, 9, 4, 3, 1, 0) in the Prime 3-aliquot tree. It appears in the Padovan sequence, preceded by the terms 37, 49, 65 (it is the sum of the first two of these). It is conjectured that 86 is the largest n for which the decimal expansion of 2^n contains no 0. 86 = (8 × 6 = 48) + (4 × 8 = 32) + (3 × 2 = 6); that is, 86 is equal to the sum of the numbers formed in calculating its multiplicative persistence.
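Several of the properties listed above can be checked directly. The following sketch is illustrative only; the function names are my own, not standard terminology.

```python
def is_semiprime(n):
    """True if n is a product of exactly two primes (with multiplicity)."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count == 2

def is_happy(n):
    """Repeatedly sum the squares of the digits; happy numbers reach 1."""
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(c) ** 2 for c in str(n))
    return n == 1

def persistence_sum(n):
    """Sum the digit products formed while computing multiplicative persistence."""
    total = 0
    while n >= 10:
        p = 1
        for c in str(n):
            p *= int(c)
        total += p
        n = p
    return total

print(all(is_semiprime(k) for k in (85, 86, 87)))  # three consecutive semiprimes
print(is_happy(86))                 # 86 -> 100 -> 1
print('0' not in str(2 ** 86))      # decimal expansion of 2^86 has no zero
print(persistence_sum(86))          # 48 + 32 + 6 = 86
```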
In science:
86 is the atomic number of radon.
There are 86 metals on the modern periodic table.
In other fields:
In American English, and particularly in the food service industry, 86 has become a slang term referring to an item being out of stock or discontinued, and by extension to a person no longer welcome on the premises.
The number of the French department Vienne. This number is also reflected in the department's postal code and in the name of a local basketball club, Poitiers Basket 86.
+86 is the code for international direct dial phone calls to China.
An art gallery in Ventura, California, displaying art pieces from such artists as Billy Childish, Stacy Lande and Derek Hess, most of which include the number 86 hidden or overtly shown in the art, and some of which fall under the genre of lowbrow.
86 is the device number for a lockout relay function in electrical engineering electrical circuit protection schemes.
86 is often used in Japan as the nickname for the Toyota AE86.
86 is the name of a series of Japanese science fiction light novels written by Asato Asato, later adapted as a manga and an anime.
**IBM Secure Service Container**
IBM Secure Service Container:
IBM Secure Service Container is the trusted execution environment available for IBM Z and IBM LinuxONE servers.
History:
In 2016 IBM introduced the z Appliance Container Infrastructure ("zACI") feature for the IBM z13, z13s, LinuxONE Rockhopper, and LinuxONE Emperor servers, delivered via a driver (firmware) update (driver level 27). IBM originally conceived its trusted execution environment as best suited for software "appliances," such as its own z/VSE Network Appliance, zAware, and GDPS Virtual Appliance offerings. As IBM improved zACI and broadened its applicability, the company quickly renamed the feature IBM Secure Service Container (SSC) when the IBM z14 and LinuxONE Emperor II models launched in 2017.
Details:
IBM Secure Service Container consists of a combination of hardware, firmware, and software technologies that are commercially available in recent IBM Z and IBM LinuxONE servers. The hardware and firmware elements are primarily extensions to IBM's PR/SM logical partitioning technologies, which are Common Criteria Evaluation Assurance Level (EAL) 5+ certified for separation and isolation. A logical partition (LPAR) type of "SSC" is available, and up to 16 TiB of usable main system memory can be allocated per LPAR (the limit as of the IBM z14 and LinuxONE Emperor II server models introduced in 2017).
Details:
IBM also supplies a generalized, open source-based software framework for SSCs in the form of IBM Secure Service Container for IBM Cloud Private and a paired, firmware-based enabling feature. This generalized software framework facilitates running conventional virtual machines (VMs) and Docker containers on Linux within the SSC, without requiring special programming to adapt to SSC architecture. In other words, the IBM Secure Service Container (SSC) is the outer "envelope" within which VMs and software containers (such as Docker containers) run in a highly secure, trusted execution environment.
Details:
IBM uses SSCs to host many of its own public cloud services, including IBM Cloud Hyper Protect Services. First adopters of IBM SSC technologies include organizations with extremely demanding security requirements, including digital asset and cryptocurrency firms such as Digital Asset Custody Services (DACS). Most organizations using IBM Secure Service Container also rely heavily on the services that IBM's FIPS 140-2 Level 4 certified Crypto Express hardware security modules and Trusted Key Entry (TKE) equipment provide, although these IBM Z and IBM LinuxONE system features can also be used separately, on their own.
**Agricultural Information Management Standards**
Agricultural Information Management Standards:
Agricultural Information Management Standards (AIMS) is a web site managed by the Food and Agriculture Organization of the United Nations (FAO) for accessing and discussing agricultural information management standards, tools and methodologies, connecting information workers worldwide to build a global community of practice. Information management standards, tools and good practices can be found on AIMS: to support the implementation of structured and linked information and knowledge to enable institutions and individuals from different technical backgrounds to build open and interoperable information systems; to provide advice on how to best manage, disseminate, share and exchange agricultural scientific information; to promote good practices widely applicable and easy to implement; and to foster communities of practice centered on interoperability, reusability and cooperation.
Users:
AIMS is primarily intended for information workers—librarians, information managers, software developers—but is also of interest to those who are simply passionate about knowledge and information sharing. The success of AIMS depends upon its communities reaching a critical mass to show that the investment in interoperability standards has a return.
Community:
AIMS holds 9 communities of practice. They are intended to discuss and share information about the different ongoing initiatives under the AIMS umbrella. AIMS supports collaboration through forums and blogs amongst institutions and individuals that wish to share expertise on how to use tools, standards and methodologies. Moreover, news and events are published on AIMS as part of its "one-stop" access to interoperability and reusability of information resources. The AIMS communities are aimed at the global agricultural community, including information providers, from research institutes, academic institutions, educational and extension institutions and also the private sector.
Content:
Vocabularies: AGROVOC is a comprehensive multilingual vocabulary that contains close to 40,000 concepts in over 20 languages covering subject fields in agriculture, forestry and fisheries together with cross-cutting themes such as land use, rural livelihoods and food security. It standardizes data description to enable a set of core integration goals: interoperability, reusability and cooperation. In this spirit of collaboration, AGROVOC also works with other organizations that are using Linked Open Data techniques to connect vocabularies and build the backbone of the next generation of internet data; data that is marked up not just for style but for meaning. It is maintained by a global community of librarians, terminologists, information managers and software developers using VocBench, a multilingual, web-based vocabulary editor and workflow management tool that allows for simultaneous, distributed editing.
Content:
In addition to AGROVOC, AIMS provides access to other vocabularies like the Geopolitical ontology and Fisheries Ontologies. The Geopolitical ontology is used to facilitate data exchange and sharing in a standardized manner among systems managing information about countries and/or regions. The network of fisheries ontologies was created as a part of the NeOn Project and it covers the following areas: Water areas: for statistical reporting, jurisdictional (EEZ), environmental (LME), Species: taxonomic classification, ISSCAAP commercial classification, Aquatic resources, Land areas, Fisheries commodities, Vessel types and size, Gear types, AGROVOC, ASFA.
Content:
AgMES is a namespace designed to include agriculture-specific extensions for terms and refinements from established standard metadata namespaces such as Dublin Core or AGLS, used for Document-like Information Objects, for example publications, articles, books, web sites, and papers.
Content:
Linked Open Data (LOD)-Enabled Bibliographic Data (LODE-BD) Recommendations 2.0 are a reference tool that assists bibliographic data providers in selecting appropriate encoding strategies according to their needs, in order to facilitate metadata exchange by, for example, constructing crosswalks between their local data formats and widely used formats, or even with a Linked Data representation. Tools: AgriDrupal is both a suite of solutions for agricultural information management and a community of practice around these solutions. The AgriDrupal community is made up of agricultural information management specialists who have been experimenting with IM solutions in Drupal.
Content:
AgriOcean DSpace is a joint initiative of the United Nations agencies of FAO and UNESCO-IOC/IODE to provide a customized version of DSpace. It uses standards for metadata, thesauri and other controlled vocabularies for oceanography, marine science, food, agriculture, development, fisheries, forestry, natural resources and other related sciences.
Content:
VocBench is a web-based multilingual vocabulary management tool developed by FAO and hosted by MIMOS Berhad. It transforms thesauri, authority lists and glossaries into SKOS/RDF concept schemes for use in a linked data environment. VocBench also manages the workflow and editorial processes implied by vocabulary evolution such as user rights/roles, validation and versioning. VocBench supports a growing set of user communities, including the global, distributed group of terminologists who manage AGROVOC.
Content:
WebAGRIS is a multilingual Web-based system for distributed data input, processing and dissemination (through the Internet or on CD-Rom), of agricultural bibliographic information. It is based on common standards of data input and dissemination formats (XML, HTML, ISO2709), as well as subject categorization schema and AGROVOC.
Content:
Services: AgriFeeds is a service that allows users to search and filter news and events from several agricultural information sources and to create custom feeds based on the filters applied. AgriFeeds was designed in the context of CIARD (Coherence in Information for Agricultural Research for Development). Within CIARD, the partners who designed and implemented AgriFeeds are FAO and GFAR. AgriFeeds is currently maintained by FAO.
Content:
AGRIS is a global public domain database with nearly 3 million structured bibliographical records on agricultural science and technology. The database is maintained by FAO, with the content provided by more than 100 participating institutions from 65 countries.
Content:
CIARD Routemap to Information Nodes and Gateways (RING) is a project implemented within CIARD and is led by GFAR. The RING is a global registry of web-based services that give access to any kind of information pertaining to agricultural research for development (ARD). It allows information providers to register their services in various categories and so facilitate the discovery of sources of agriculture-related information across the world.
Content:
Since January 2011, AIMS supports E-LIS, the international electronic archive for library and information science (LIS). E-LIS is established, managed and maintained by an international team of 73 librarians and information scientists from 47 countries, with support for 22 languages. It is freely accessible, aligned with the Open Access (OA) movement and is a voluntary enterprise. Currently it is the largest international repository in the LIS field. Searching or browsing E-LIS is a kind of multilingual, multicultural experience, an example of what could be accomplished through open access archives to bring the people of the world together.
Content:
VEST Registry is a catalog of controlled vocabularies (such as authority files, classification systems, concept maps, controlled lists, dictionaries, ontologies or subject headings); metadata sets (metadata element sets, namespaces and application profiles); and tools (such as library management software, content management systems or document repository software). It is concerned primarily with collecting and maintaining a consistent set of metadata for each resource. The scope of the VEST Registry is to provide a clearing house for tools, metadata sets and vocabularies used in food, agriculture, development, fisheries, forestry and natural resources information management context.
**Tabular bone**
Tabular bone:
The tabular bones are a pair of triangular flat bones along the rear edge of the skull which form pointed structures known as tabular horns in primitive Teleostomi.
**Aux-send**
Aux-send:
An aux-send (auxiliary send) is an electronic signal-routing output used on multi-channel sound mixing consoles used in recording and broadcasting settings and on PA system amplifier-mixers used in music concerts. The signal from the auxiliary send is often routed through outboard audio processing effects units (e.g., reverb, digital delay, compression, etc.) and then returned to the mixer using an auxiliary return input jack, thus creating an effects loop. This allows effects to be added to an audio source or channel within the mixing console. Another common use of the aux send mix is to create monitor mixes for the onstage performers' monitor speakers or in-ear monitors. The aux send's monitor mix is usually different from the front of house mix the audience is hearing.
Purpose:
The routing configuration and usage of an aux-send will vary depending on the application. Two types of aux-sends commonly exist: pre-fader and post-fader. Pre-fader sends are not affected by the main fader for the channel, while post-fader sends are affected by the position of the main fader slider control for the channel.
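The pre-fader/post-fader distinction can be reduced to where the send tap sits in the channel's gain chain. The following is a minimal sketch under assumed names (no real console exposes an API like this); it only illustrates the gain arithmetic.

```python
def send_level(sample, fader_gain, send_gain, pre_fader):
    """Return the signal routed from one channel to an aux bus.

    pre_fader=True  -> the send is tapped before the channel fader,
                       so fader_gain has no effect on the send.
    pre_fader=False -> the send is tapped after the fader, so pulling
                       the fader down also pulls the send down.
    """
    if pre_fader:
        return sample * send_gain
    return sample * fader_gain * send_gain

# Channel fader pulled all the way down (gain 0.0):
monitor = send_level(1.0, fader_gain=0.0, send_gain=0.8, pre_fader=True)
effects = send_level(1.0, fader_gain=0.0, send_gain=0.8, pre_fader=False)
print(monitor)  # 0.8 -- a pre-fader monitor send still carries the channel
print(effects)  # 0.0 -- a post-fader effects send is silenced with the fader
```

This is why monitor mixes typically use pre-fader sends (the stage mix survives front-of-house fader moves), while effects loops use post-fader sends (muting a channel also mutes its reverb).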
Purpose:
In a common configuration, a post-fader aux-send output is connected to the audio input of an outboard (i.e., an external [usually rack-mounted] unit that is not part of the mixer console) audio effects unit (most commonly a temporal/time-based effect such as reverb or delay; compressors and other dynamic processors would normally be on an insert, instead). The audio output of the outboard unit is then connected to the aux-return input on the mixing console (if the recording console has one), or, alternatively, it can be looped back to one of the console's unused input channels. A post-fader output is used in order to prevent channels whose faders are at zero gain from "contaminating" the effects-return loop with hiss and hum.
Purpose:
Mixing consoles most commonly have a group of aux-send knobs in each channel strip, or, on small mixers, a single aux-send knob per channel, where one knob corresponds to each aux-send on the board. The controls enable the operator to adjust the amount of signal that will be sent from its corresponding channel into the signal bus routed to its corresponding aux-send output. The largest, most expensive mixers have a number of aux-send knobs on every channel, thus giving the audio engineer the flexibility to create many live sound and/or recording applications for the mixer.
Purpose:
A benefit of using an aux-send is that it enables the signals from multiple channels on a mixing console to be simultaneously routed to a single outboard device. For instance, audio signals from all the channels of a sixteen-channel mixing console can be routed to a single outboard reverb unit so that all channels are heard with reverb. The aux-sends from a group of inputs can also be routed to an amplifier and then sent to monitor speakers so that the onstage musicians can hear their singing or playing through monitor wedge speakers on the stage or through in-ear monitors. The benefit of using the pre-fader aux-send function is that the volume of the vocals or instruments in the monitor mix does not have to be the same as the "front-of-house" mix for the audience. Musicians whose voices are barely present in the "front-of-house" mix, such as backup vocalists, can have their sound clearly and loudly sent through a monitor speaker so that they can hear themselves singing and ensure that their pitch and timing is correct.
**Primary constraint**
Primary constraint:
In Hamiltonian mechanics, a primary constraint is a relation between the coordinates and momenta that holds without using the equations of motion. A secondary constraint is one that is not primary—in other words, it holds when the equations of motion are satisfied, but need not hold if they are not satisfied. The secondary constraints arise from the condition that the primary constraints should be preserved in time. A few authors use more refined terminology, where the non-primary constraints are divided into secondary, tertiary, quaternary, etc. constraints. The secondary constraints arise directly from the condition that the primary constraints are preserved by time, the tertiary constraints arise from the condition that the secondary ones are also preserved by time, and so on. Primary and secondary constraints were introduced by Anderson and Bergmann and developed by Dirac. The terminology of primary and secondary constraints is confusingly similar to that of first and second class constraints. These divisions are independent: both first and second class constraints can be either primary or secondary, so this gives altogether four different classes of constraints.
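The chain from primary to secondary constraints can be seen in a standard textbook-style illustration (not drawn from this article): a singular Lagrangian in which one momentum cannot be solved for a velocity.

```latex
% Take the singular Lagrangian L = \tfrac{1}{2}(\dot q_1 - q_2)^2.
\begin{align*}
p_1 &= \frac{\partial L}{\partial \dot q_1} = \dot q_1 - q_2, &
p_2 &= \frac{\partial L}{\partial \dot q_2} = 0
  \quad\Rightarrow\quad \phi_1 := p_2 \approx 0 \quad \text{(primary)} \\
H &= p_1 \dot q_1 + p_2 \dot q_2 - L
   = \tfrac{1}{2} p_1^2 + p_1 q_2 \\
\dot\phi_1 &= \{p_2, H\} = -\frac{\partial H}{\partial q_2} = -p_1
  \approx 0
  \quad\Rightarrow\quad \phi_2 := p_1 \approx 0 \quad \text{(secondary)}
\end{align*}
```

Here the primary constraint p_2 ≈ 0 follows from the definition of the momenta alone, while demanding that it be preserved in time generates the secondary constraint p_1 ≈ 0.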
**Single suiter**
Single suiter:
In contract bridge, a single suiter (or single-suited hand) is a hand containing at least six cards in one suit and with all other suits being at least two cards shorter than this longest suit. Many hand patterns can be classified as single suiters. Typical examples are 6-3-2-2, 6-3-3-1 and 7-3-2-1 distribution.
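The definition above is purely combinatorial, so it can be stated as a short check on a hand's suit lengths (the function name here is my own, not bridge terminology):

```python
def is_single_suiter(pattern):
    """pattern: the four suit lengths of a 13-card hand, e.g. (6, 3, 2, 2).

    Single-suited: the longest suit has at least 6 cards and every other
    suit is at least two cards shorter than the longest suit.
    """
    lengths = sorted(pattern, reverse=True)
    assert sum(lengths) == 13, "a bridge hand has 13 cards"
    longest = lengths[0]
    return longest >= 6 and all(l <= longest - 2 for l in lengths[1:])

print(is_single_suiter((6, 3, 2, 2)))  # True
print(is_single_suiter((7, 3, 2, 1)))  # True
print(is_single_suiter((6, 5, 1, 1)))  # False -- second suit is too long
```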
Single suiter:
Single-suiters form the cornerstone of preemptive bidding. Weak single-suiters with six card length are traditionally opened preemptively at the two level, whilst seven carders are used to preempt at the three level. The modern trend is to lower these minimum length requirements, especially when non-vulnerable. Conventional preemptive openings used to introduce a weak single-suited hand include the multi 2 diamonds and the gambling 3NT conventions.
Single suiter:
Over an opposing opening, single suiters are usually introduced via a natural overcall. But see also list of defenses to 1NT.
**Evangelienmotetten**
Evangelienmotetten:
Evangelienmotetten or Gospel motets (sometimes called Spruchmotetten, "Bible-text motets") were settings to music of verses from the New Testament. They were selected as an essence or Kernspruch ("text-kernel") of the verses in question, with the intention of highlighting dramatically or summarising in a terse fashion a significant thought from the Gospels. There is a long tradition in Germany, dating back to the medieval era, of highlighting the importance of gospel readings through polyphonic musical settings of gospel texts. They became an increasingly popular genre from the 16th century onwards and were intended for use in Lutheran church services. They could thus be written in either Latin or German. The latter came to predominate by the end of the 16th century due to the emphasis placed by the Reformation on the need to make the Bible accessible to all people through the use of the vernacular language. During the late 16th and early 17th centuries a number of composers drew on Gospel readings for an entire church year's worth of Sundays and feast days to create complete cycles of motets. Their text comprised phrases or paraphrases from the narrative readings or sometimes only the dialogue passages. A fashion for the latter prompted the development in Germany of the dramatic concertato dialogue from the 1620s onward. Composers of Gospel motet cycles included Leonhard Päminger, Johann Wanning, Andreas Raselius, Christoph Demantius, Thomas Elsbeth, Melchior Vulpius and Melchior Franck, whose work was gathered into collections by printers. Gospel motets were the principal musical piece in the liturgy of the Mass, serving to enhance the reading of the Gospel lesson of the day immediately before the performance. By the later 17th century they were increasingly replaced by concertatos supplemented with arias and chorales, and after 1700 by the cantata, which not only highlighted biblical passages but interpreted them as well.
The genre fell out of general fashion by the early 18th century but was still in demand for use in funerals, as evidenced by the motets Johann Sebastian Bach composed for such ceremonies. For the function of enhancing the prescribed gospel reading, Bach wrote several cycles of cantatas covering all occasions of the liturgical year.
Evangelienmotetten:
The manner in which Gospel motets were used within the Protestant German liturgy of the 17th century is unclear. Some musicologists have suggested that they were used as part of a seasonal cycle of liturgical readings, sung in place of the liturgical intonation or as an additional musical work to provide an exposition before the sermon. Motets which used the verbatim text of the Gospels may have been used to punctuate the recitation of the liturgy by the cantor or priest; at the point in the text where the motet setting began, the choir would take over, sing the motet and conclude the lesson. Alternatively they may have been related more to a tradition of exegetical and didactic practice to set out a narrative of Christ's life, thus being "attached to a broader base of devotional practice, rather than being confined to strict liturgical use", as Craig Westendorf has argued. Evangelienmotetten were still composed in the 20th century, for example by Ernst Pepping, who wrote Drei Evangelienmotetten for choir a cappella, including Jesus und Nikodemus, in 1937–38. Gustav Gunsenheimer composed between 1966 and 1972 six motets for choir a cappella for five Sundays in Lent, including Die Versuchung Jesu (The temptation of Jesus), and one for a Sunday after Easter. Siegfried Strohbach composed 6 Evangelien-Motetten for mixed choir a cappella.
**HspE7**
HspE7:
HspE7 is an investigational therapeutic vaccine candidate being developed by Nventa Biopharmaceuticals for the treatment of precancerous and cancerous lesions caused by the human papillomavirus (HPV). HspE7 uses recombinant DNA technology to covalently fuse a heat shock protein (Hsp) to a target antigen, thereby stimulating cellular immune system responses to specific diseases. HspE7 is a patented construct consisting of the HPV Type 16 E7 protein and heat shock protein 65 (Hsp65) and is currently the only candidate using Hsp technology to target the over 20 million Americans already infected with HPV. The candidate is being developed with a Toll-like receptor 3 (TLR3) agonist adjuvant for multiple indications, including cervical intraepithelial neoplasia (also known as cervical dysplasia or CIN), genital warts, cervical cancer, and head and neck cancers.
Therapeutic rationale:
Over 100 different HPV types have been identified and are referred to by number. About a dozen HPV types, including types 16, 18, 31 and 45, are called "high-risk" types because they can lead to cervical cancer, as well as anal cancer, vulvar cancer, and penile cancer. Several types of HPV, particularly type 16, have been found to be associated with oropharyngeal squamous-cell carcinoma, a form of head and neck cancer. HPV-induced cancers often have viral sequences integrated into the cellular DNA. Some of the HPV "early" genes, such as E6 and E7, are known to act as oncogenes that promote tumor growth and malignant transformation. An infection with one or more high-risk HPV types is believed to be a prerequisite for the development of cervical cancer (the vast majority of HPV infections are not high risk); according to the American Cancer Society, women with no history of the virus do not develop this type of cancer. However, most HPV infections are cleared rapidly by the immune system and do not progress to cervical cancer. Because the process of transforming normal cervical cells into cancerous ones is slow, cancer occurs in people who have been infected with HPV for a long time, usually over a decade or more. HPV infection is a necessary factor in the development of nearly all cases of cervical cancer. A cervical Pap smear is used to detect cellular abnormalities. This allows targeted surgical removal of condylomatous and/or potentially precancerous lesions prior to the development of invasive cervical cancer. Although the widespread use of Pap testing has reduced the incidence and lethality of cervical cancer in developed countries, the disease still kills several hundred thousand women per year worldwide.
HPV vaccines Gardasil and Cervarix, which block initial infection with some of the most common sexually transmitted HPV types, may lead to further decreases in the incidence of HPV-induced cancer, however, they do not address the millions of people worldwide already infected with the virus.
Therapeutic rationale:
It is estimated that nearly 10 million women are diagnosed with some form of cervical dysplasia each year in major global markets in the U.S., EU and Japan, many of whom could benefit from a non-surgical treatment. As a result, HspE7 is first being developed for the treatment of CIN.
Clinical progress:
Nventa originally advanced HspE7 as a single-agent therapy into multiple Phase 2 clinical trials with positive results, including trials in cervical dysplasia and recurrent respiratory papillomatosis (RRP). These trials were initiated prior to the discovery that potency could be greatly enhanced by addition of a vaccine adjuvant, and, as a result, Nventa is currently developing HspE7 combined with the adjuvant Poly-ICLC.
Clinical progress:
A Phase 1b study has been completed assessing the safety and tolerability of HspE7 with Poly-ICLC in four cohorts totaling 17 patients with CIN. All patients were administered 500 mcg of HspE7, with each of the four cohorts receiving escalating doses of adjuvant: 50, 500, 1,000 and 2,000 mcg.
Nventa has indicated that it is currently working with the U.S. Food and Drug Administration (FDA) to finalize the trial design for a Phase 2 clinical study of HspE7 in patients with high-grade cervical dysplasia (CIN 2/3). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Plug (jewellery)**
Plug (jewellery):
A plug (sometimes earplug or earspool), in the context of body modification, is a short, cylindrical piece of jewelry commonly worn in larger-gauge body piercings. Modern western plugs are also called flesh tunnels. Because of their size—which is often substantially thicker than a standard metal earring—plugs can be made out of almost any material. Acrylic glass, metal, wood, bone, stone, horn, glass, silicone or porcelain are all potential plug materials.
Plug (jewellery):
Plugs are commonly, and historically have been, worn in the ears. They can, however, be inserted into any piercing.
In order for a plug to stay put within a piercing, the ends of its cylindrical shape are often flared out, or the plug is fastened in place by o-rings. Combinations of these two methods may also be used.
A double-flared (or saddle) plug flares outward at both ends and is thinner towards the middle. No o-rings are needed to keep the plug in the piercing, but the fistula needs to be wide enough to accommodate the flare when the plug is initially inserted.
Plug (jewellery):
A single-flared plug has one flared end, usually worn at the front of the piercing, and one unflared end. The unflared end is held in place by an o-ring and may or may not be grooved. These plugs give the aesthetic of double-flared plugs without requiring that the wearer's fistulas be large enough to accommodate flares.
Plug (jewellery):
A straight plug (or no-flare plug) is a typical-looking cylinder, without flares, and is kept in place by sliding o-rings against both ends of the plug. A grooved plug is a variation on the straight plug, with grooves carved in the material to hold the o-rings snug.
Modern use:
A flesh tunnel is a hollow, tube-shaped variety of body piercing jewelry. It is also sometimes referred to as a spool, fleshy, earlet, expander, or eyelet.
Modern use:
A flesh tunnel is usually used in stretched or scalpelled piercings. Flesh tunnels are also made in smaller gauges; however, the smaller the gauge, the less pronounced the see-through effect of the tunnel becomes. A person may choose to wear flesh tunnels instead of flesh plugs because they weigh less; at higher gauges, the weight difference increases. Flesh tunnels may be worn with a captive bead ring or other object passed through them.
Modern use:
Flesh tunnels are fashioned from a broad range of materials, including surgical steel, titanium, Pyrex glass, silicone, acrylic glass, bone, horn, amber, bamboo, stone, and wood. Flesh tunnels, like flesh plugs, may feature a decorative inlay or semi-precious stones. Some flesh tunnels have flares to keep the jewellery from falling out. If there are no flares, grooves may be cut near the edges to allow rubber or silicone o-rings to hold the jewellery in place. The back of the flesh tunnel may also screw off. A flesh tunnel may also have an internally threaded backing, as externally threaded pieces can rip freshly stretched ears.
Modern use:
Although flesh tunnels are often worn in the earlobe, other soft-tissue piercings (such as in the nasal septum or nipples) can be fitted with one of an appropriate length.
History:
During the ancient Egyptian New Kingdom, both sexes wore a variety of jewelry, including earplugs and large-gauge hoop-style earrings. They were particularly used among indigenous cultures of the Americas, including Mesoamerican cultures such as the Maya and the Aztecs. They were most commonly made of gold, silver, or wood, but could also include shells or feathers. Their use could sometimes significantly stretch the earlobe. In Mesoamerica they were used from as early as the Preclassic Period (2000–100 BC). Inca men wore gold or silver plugs in the ears, which indicated their nobility. Their stretched piercings, which could reach the size of two inches, later inspired a Spanish nickname for the Inca people: orejones ("big ears"). Ivory earplugs have been used by the Hmong people. Silver plugs, called rombin, are worn by Aka women. During the Bronze Age in what is today Spain, earlobe plugs were uncommon grave goods, indicating that they were reserved for high-status individuals.
**Vitaly Napadow**
Vitaly Napadow:
Vitaly Napadow is a Ukrainian-born American neuroscientist and acupuncturist. He is a full professor of Physical Medicine & Rehabilitation and Radiology at Harvard Medical School. He is also the Director of the Scott Schoen and Nancy Adams Discovery Center for Recovery from Chronic Pain at Spaulding Rehabilitation Hospital and Director of the Center for Integrative Pain NeuroImaging at the Martinos Center for Biomedical Imaging at Massachusetts General Hospital. He is a former president of the Society for Acupuncture Research and has been a pain neuroimaging researcher for more than 20 years. Somatosensory, cognitive, and affective factors all influence the malleable experience of chronic pain, and Napadow's lab has applied human functional and structural neuroimaging to localize and suggest mechanisms by which different brain circuitries modulate pain perception. His neuroimaging research also aims to better understand how non-pharmacological therapies, from acupuncture and transcutaneous neuromodulation to cognitive behavioral therapy and mindfulness meditation training, ameliorate aversive perceptual states such as pain. His early career was known for research on acupuncture and its effects on the brain. He has also researched the brain circuitry underlying nausea and itch, and is known for developing a novel approach that applies measures of resting-state brain connectivity as potential biomarkers for spontaneous clinical pain in chronic pain disorders such as fibromyalgia. In 2009, he invented an approach to transcutaneous auricular vagus nerve stimulation (taVNS) in which stimulation is gated to a specific phase of the respiratory cycle. This form of taVNS, called Respiratory-gated Auricular Vagal Afferent Nerve Stimulation (RAVANS), has been evaluated for pain, depression, hypertension, functional dyspepsia, and other medical disorders.
Vitaly Napadow:
In 2016, he applied hyperscanning fMRI to evaluate the patient-clinician relationship and how therapeutic alliance and the "art of medicine" impact clinical outcomes across many different therapies. The first publication appeared in 2020, linking brain-to-brain concordance in the temporoparietal junction to analgesia in chronic pain patients.
Biography:
Napadow was born in Kharkov, Ukraine in 1971 and immigrated to the Baltimore area of the United States as a refugee in 1978. He graduated with a Bachelor of Science degree in mechanical engineering from Cornell University in 1996 and worked as an intern at the Johnson Space Center in Clear Lake, Texas. He received his master's degree in acupuncture from the New England School of Acupuncture in 2002 and his Ph.D. in biomedical engineering from the Harvard–MIT Program in Health Sciences and Technology in 2001. He joined the faculty of Harvard Medical School in 2004 as an instructor in radiology, where he became an assistant professor of anesthesiology in 2010, an associate professor of radiology in 2014, and a full professor in 2021. In 2021, he also joined Spaulding Rehabilitation Hospital as its Director for Pain Research, and he serves on the board of the United States Association for the Study of Pain. He has published more than 200 papers in peer-reviewed journals. In 2020, Napadow and his hyperscanning research were featured in a special issue on pain in National Geographic.
**Penalty kick (association football)**
Penalty kick (association football):
A penalty kick (commonly known as a penalty or a spot kick) is a method of restarting play in association football, in which a player is allowed to take a single shot at the goal while it is defended only by the opposing team's goalkeeper. It is awarded when an offence punishable by a direct free kick is committed by a player in their own penalty area. The shot is taken from the penalty mark, which is 11 m (12 yards) from the goal line and centred between the touch lines.
Procedure:
The ball is placed on the penalty mark, regardless of where in the penalty area the foul occurred. The player taking the kick must be identified to the referee. Only the kicker and the defending team's goalkeeper are allowed to be within the penalty area; all other players must be within the field of play, outside the penalty area, behind the penalty mark, and a minimum of 9.15 m (10 yd) from the penalty mark (this distance is denoted by the penalty arc). The goalkeeper is allowed to move before the ball is kicked, but must remain on the goal-line between the goal-posts, facing the kicker, without touching the goalposts, crossbar, or goal net. At the moment the kick is taken, the goalkeeper must have at least part of one foot touching, or in line with, the goal line. The assistant referee responsible for the goal line where the penalty kick is being taken is positioned at the intersection of the penalty area and goal line, and assists the referee in looking for infringements and/or whether a goal is scored.
Procedure:
The referee blows the whistle to indicate that the penalty kick may be taken. The kicker may make feinting (deceptive or distracting) movements during the run-up to the ball, but may not do so once the run-up is completed; the final step and the kick itself must be one continuous motion. The ball must be stationary before the kick, and it must be kicked forward. The ball is in play once it is kicked and moves, and at that time other players may enter the penalty area and the penalty arc. The kicker may not touch the ball a second time until it has been touched by another player of either team or goes out of play (including into the goal).
Infringements:
In the case of an infringement of the laws of the game during a penalty kick, most commonly entering the penalty area illegally, the referee must consider both whether the ball entered the goal and which team(s) committed the offence. If both teams commit an offence, the kick is retaken.
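The referee's decision described above depends on two facts: who infringed and whether the ball entered the goal. A minimal sketch of that decision logic follows; the specific outcomes are an assumption taken from the current Law 14 encroachment table rather than from the text itself.

```python
# Restart decision after encroachment at a penalty kick.
# Outcomes follow the current Law 14 encroachment table (an assumption
# going beyond the summary in the text above).
def penalty_outcome(attackers_infringed: bool, defenders_infringed: bool,
                    goal_scored: bool) -> str:
    """Return the restart the referee orders after the kick is taken."""
    if attackers_infringed and defenders_infringed:
        return "retake"  # both teams offended: always retaken
    if attackers_infringed:
        # attacking offence: a goal is cancelled and the kick retaken;
        # otherwise play restarts with an indirect free kick
        return "retake" if goal_scored else "indirect free kick"
    if defenders_infringed:
        # defending offence: a goal stands; otherwise the kick is retaken
        return "goal stands" if goal_scored else "retake"
    return "goal stands" if goal_scored else "play continues"

print(penalty_outcome(True, True, False))  # retake
```

Note the asymmetry: a defending-team offence can never cancel a goal, while an attacking-team offence always does.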
Infringements:
The following infringements committed by the kicking team result in an indirect free kick for the defending team, regardless of the outcome of the kick:
a teammate of the identified kicker takes the kick instead (the player who took the kick is cautioned)
the kicker feints kicking the ball at the end of the run-up (the kicker is cautioned)
the kick does not go forward
the kicker touches the ball a second time before it touches another player (this includes rebounds off the goalposts or crossbar)
In the case of a player repeatedly infringing the laws during the penalty kick, the referee may caution the player for persistent infringement. All offences that occur before the kick may be dealt with in this manner, regardless of the location of the offence.
Infringements:
If the ball touches an outside agent (i.e., an object foreign to the playing field) as it moves forward from the kick, the kick is retaken.
Tap penalty:
A two-man penalty, or "tap" penalty, occurs when the kicker, instead of shooting for goal, taps the ball slightly forward so that a teammate can run on to it and shoot or pass. If properly executed, it is a legal play since the kicker is not required to shoot for goal and need only kick the ball forward. This strategy relies heavily on the element of surprise, as it first requires the goalkeeper to believe the kicker will actually shoot, then dive or move to one side in response. It then requires the goalkeeper to remain out of position long enough for the kicker's teammate to reach the ball before any defenders, and for that teammate to place a shot on the undefended side of the goal.
Tap penalty:
The first recorded tap penalty was taken by Jimmy McIlroy and Danny Blanchflower of Northern Ireland against Portugal on 1 May 1957. Another was taken by Rik Coppens and André Piters in the World Cup qualifying match Belgium v Iceland on 5 June 1957. Another attempt was made by Mike Trebilcock and John Newman, playing for Plymouth Argyle in 1964. In 1982, Johan Cruyff passed to his Ajax team-mate Jesper Olsen, who then passed back, allowing Cruyff to tap in for a goal. Arsenal players Thierry Henry and Robert Pires failed in an attempt at a similar penalty in 2005, during a Premier League match against Manchester City at Highbury. Pires ran in to take the kick and attempted to pass to the onrushing Henry, but miskicked and the ball hardly moved; as he had slightly touched the ball, he could not touch it again, and City defender Sylvain Distin cleared the ball before Henry could shoot. Lionel Messi tapped a penalty to Luis Suárez, who completed his hat-trick with it, on 14 February 2016 against league opponents Celta de Vigo.
Saving tactics:
"Reading" the kicker:
Defending against a penalty kick is one of the most difficult tasks a goalkeeper can face. Owing to the short distance between the penalty spot and the goal, there is very little time to react to the shot. Because of this, the goalkeeper will usually start their dive before the ball is actually struck; in effect, the goalkeeper must act on their best prediction of where the shot will be aimed. Some goalkeepers decide which way they will dive beforehand, thus giving themselves a good chance of diving in time. Others try to read the kicker's motion pattern. On the other side, kickers often feint and prefer a relatively slow shot in an attempt to foil the goalkeeper. The potentially most fruitful approach, shooting high and centre, i.e., into the space that the goalkeeper will vacate, also carries the highest risk of shooting above the bar.
Saving tactics:
As the shooter makes their approach to the ball, the goalkeeper has only a fraction of a second to "read" the shooter's motions and decide where the ball will go. If their guess is correct, this may result in a missed penalty. Helmuth Duckadam, Steaua București's goalkeeper, saved a record four consecutive penalties in the 1986 European Cup Final against Barcelona. He dived three times to the right and a fourth time to his left to save all penalties taken, securing victory for his team.
Saving tactics:
Use of knowledge of the kicker's history:
A goalkeeper may also rely on knowledge of the shooter's past behaviour to inform their decision. One example is former Netherlands national team goalkeeper Hans van Breukelen, who always kept a box of cards with information about opponents' penalty specialists. Ecuadorian goalkeeper Marcelo Elizaga, after saving a penalty from Carlos Tevez in a 2010 FIFA World Cup qualifier between Ecuador and Argentina, revealed that he had studied some of Tevez's penalty kicks and suspected he would shoot to the goalkeeper's left side. Two further examples occurred during the 2006 FIFA World Cup. In a quarter-final match against England, Portugal national team goalkeeper Ricardo saved three penalties out of four.
Saving tactics:
The quarter-final match between Argentina and Germany also came down to penalties, and German goalkeeper Jens Lehmann was seen looking at a piece of paper kept in his sock before each Argentinian player came forward for a penalty kick. Lehmann had researched the penalty-taking habits of seven players on the Argentinian team. However, only two players on his list ended up taking a penalty that day; on their attempts, Lehmann saved one and came close to saving the other. He then had to guess on Esteban Cambiasso's kick, since he had no information written on his list about Cambiasso. However, he derived an educated guess from the videos he had studied, and he pretended to read the piece of paper and nodded his head before putting it away, implying to Cambiasso that he did in fact have information on the kicker. Lehmann guessed correctly and saved the penalty, winning the shootout for Germany. "Lehmann's list" became so popular in the annals of German football history that it is now in the Haus der Geschichte museum. This approach may not always be successful; a player may intentionally switch from their favoured spot after learning that the goalkeeper has knowledge of their kicks. Especially in amateur football, the goalkeeper is often simply forced to guess. Game-theoretic research shows that both the penalty taker and the goalkeeper must randomize their strategies in precise ways to avoid having the opponent take advantage of their predictability.
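The randomization result can be illustrated with a simplified two-by-two "matching pennies" version of the penalty: the kicker shoots left or right, the keeper dives left or right, and each player mixes so that the opponent gains nothing from either pure choice. The scoring probabilities below are illustrative assumptions, not figures from the article.

```python
# Mixed-strategy equilibrium for a simplified penalty game.
# score[(shot, dive)] = probability the kick scores (illustrative numbers).
score = {
    ("L", "L"): 0.58,  # keeper guesses the kicker's natural side correctly
    ("L", "R"): 0.95,
    ("R", "L"): 0.93,
    ("R", "R"): 0.70,
}
sLL, sLR = score[("L", "L")], score[("L", "R")]
sRL, sRR = score[("R", "L")], score[("R", "R")]

def kicker_mix():
    """Probability of shooting Left that leaves the keeper indifferent.
    Solves p*sLL + (1-p)*sRL == p*sLR + (1-p)*sRR for p."""
    return (sRL - sRR) / (sRL - sRR + sLR - sLL)

def keeper_mix():
    """Probability of diving Left that leaves the kicker indifferent.
    Solves q*sLL + (1-q)*sLR == q*sRL + (1-q)*sRR for q."""
    return (sLR - sRR) / (sLR - sRR + sRL - sLL)

p, q = kicker_mix(), keeper_mix()
# Equilibrium scoring rate: average over both players' mixes.
value = (p * q * sLL + p * (1 - q) * sLR
         + (1 - p) * q * sRL + (1 - p) * (1 - q) * sRR)
print(f"kicker shoots Left {p:.1%}, keeper dives Left {q:.1%}, "
      f"scoring rate {value:.1%}")
```

With these numbers the kicker shoots to the natural side only about 38% of the time: any predictable deviation would let the keeper dive to the more likely side and lower the scoring rate, which is the formal version of the unpredictability argument in the text.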
Saving tactics:
Distraction:
The goalkeeper may also try to distract the penalty taker. Since the expectation is on the penalty taker to succeed, the pressure on them is greater, making them more vulnerable to mistakes. For example, in the 2008 UEFA Champions League Final between Manchester United and Chelsea, United goalkeeper Edwin van der Sar pointed to his left side when Nicolas Anelka stepped up to take a shot in the penalty shoot-out, because all of Chelsea's previous penalties had gone to the left. Anelka's shot instead went to Van der Sar's right, and was saved. Liverpool goalkeeper Bruce Grobbelaar used a distraction method called the "spaghetti legs" trick to help his club defeat Roma and win the 1984 European Cup. This tactic was emulated in the 2005 UEFA Champions League Final, which Liverpool also won, by Liverpool goalkeeper Jerzy Dudek, helping his team defeat Milan.
Saving tactics:
An illegal method of saving penalties is for the goalkeeper to make a quick, short jump forward just before the penalty taker connects with the ball. This not only narrows the angle of the shot but also distracts the penalty taker. The method was used by Brazilian goalkeeper Cláudio Taffarel; FIFA enforced the rule less strictly at that time, but has since advised all referees to apply the rule book strictly. Similarly, a goalkeeper may attempt to delay a penalty by cleaning their boots, asking the referee to check whether the ball is placed properly, and other delaying tactics. This builds more pressure on the penalty taker, but the goalkeeper risks punishment, most likely a yellow card.
Saving tactics:
A goalkeeper can also try to distract the taker by talking to them prior to the penalty being taken. Netherlands national team goalkeeper Tim Krul used this technique during the penalty shootout in the quarter-final match of the 2014 FIFA World Cup against Costa Rica. As the Costa Rican players were preparing to take the kick, Krul told them that he "knew where they were going to put their penalty" in order to "get in their heads". This resulted in him saving two penalties and the Netherlands winning the shootout 4–3.
Saving tactics:
Argentine goalkeeper Emiliano Martínez is known for using mind games in shootouts, most notably when he talked to the Colombian players as they went to take their penalties in the 2021 Copa América semi-final, and when he threw the ball away as Aurélien Tchouaméni prepared to take his kick in the World Cup final.
Scoring statistics:
Even if the goalkeeper succeeds in blocking the shot, the ball may rebound back to the penalty taker or one of their teammates for another shot, with the goalkeeper often in a poor position to make a second save. This makes saving penalty kicks more difficult. This is not a concern in penalty shoot-outs, where only a single shot is permitted.
Scoring statistics:
While penalty kicks are considerably more often successful than not, missed penalty kicks are not uncommon: for instance, of the 78 penalty kicks taken during the 2005–06 English Premier League season, 57 resulted in a goal, so almost 30% of the penalties were unsuccessful. A German professor who studied penalty statistics in the German Bundesliga for 16 years found that 76% of all penalties during that period went in, and that 99% of shots into the upper half of the goal went in, although the upper half of the goal is a more difficult target to aim at. During his career, Italian striker Roberto Baggio twice had penalties that hit the crossbar, bounced downwards, rebounded off the keeper and crossed the goal line for a goal.
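The Premier League figure quoted above can be checked directly; this is simple arithmetic on the two numbers given in the text.

```python
# Check the 2005-06 English Premier League penalty figures quoted above.
scored, taken = 57, 78
conversion = scored / taken      # fraction of penalties converted
miss_rate = 1 - conversion       # fraction missed or saved
print(f"converted {conversion:.1%}, missed {miss_rate:.1%}")
# 21 of 78 kicks failed, i.e. about 26.9%, matching "almost 30%".
```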
Scoring statistics:
Saving statistics:
Some goalkeepers have become well known for their ability to save penalty kicks. One such goalkeeper is Diego Alves, who boasts a 49 per cent save success rate. Other goalkeepers with high save rates include Claudio Bravo, Kevin Trapp, Samir Handanović, Gianluigi Buffon, Tim Krul, Danijel Subašić, and Manuel Neuer.
Offences for which the penalty kick is awarded:
A penalty kick is awarded whenever one of the following offences is committed by a player within that player's own penalty area while the ball is in play (the ball must be in play at the time of the offence, but it does not need to be within the penalty area at that time).
Offences for which the penalty kick is awarded:
handball (excluding handling offences committed by the goalkeeper)
any of the following offences against an opponent, if committed in a manner considered by the referee to be careless, reckless or using excessive force:
charges
jumps at
kicks or attempts to kick
pushes
strikes or attempts to strike (including head-butt)
tackles or challenges
trips or attempts to trip
holding an opponent
impeding an opponent with contact
biting or spitting at someone
throwing an object at the ball, an opponent or a match official, or making contact with the ball with a held object (the location of the offence is considered to be the position where the object struck or would have struck the person or the ball, or the nearest boundary line if this is off the field of play).
Offences for which the penalty kick is awarded:
any physical offence against a team-mate, substitute, substituted or sent-off player, team official or a match official
a player who requires the referee's permission to re-enter the field of play, substitute, substituted player, sent-off player, or team official enters the field of play without the referee's permission and interferes with play (a rare example of this offence occurred in an October 2019 match between Holstein Kiel and VfL Bochum: Kiel substitute Michael Eberwein, warming up behind his own team's goal-line, kicked the ball before it had gone out of play, and the referee awarded a penalty to Bochum after VAR review)
a player who requires the referee's permission to re-enter the field of play, substitute, substituted player, sent-off player or team official is on the field of play without the referee's permission while that person's team scores a goal (the goal is disallowed; the location of the offence is considered to be the location of the offender at the time the disallowed goal was scored).
Offences for which the penalty kick is awarded:
a player temporarily off the field of play, substitute, substituted player, sent-off player or team official throws or kicks an object onto the field of play, and the object interferes with play, an opponent, or a match official (the location of the offence is considered to be the place where the thrown or kicked object interfered with play, or struck or would have struck the opponent, match official or the ball).
A penalty kick is also awarded if, while the ball is in play, a player, substitute, substituted player, sent-off player or team official commits any direct free-kick offence against a match official or against an opposing player, substitute, substituted player, sent-off player, or team official outside the field of play, provided that the closest boundary line to the location of the offence is within the offending team's own penalty area.
History:
Early proposals:
The original laws of the game, in 1863, had no defined punishments for infringements of the rules. In 1872, the indirect free kick was introduced as a punishment for illegal handling of the ball; it was later extended to other offences. This indirect free kick was thought to be an inadequate remedy for a handball that prevented an otherwise-certain goal. As a result, in 1882 a law was introduced to award a goal to a team prevented from scoring by an opponent's handball. This law lasted only one season before being abolished in 1883.
History:
Introduction of the penalty kick:
The invention of the penalty kick is credited to the goalkeeper and businessman William McCrum in 1890 in Milford, County Armagh. The Irish Football Association presented the idea to the International Football Association Board's 1890 meeting, where it was deferred until the next meeting in 1891. Two incidents in the 1890–91 season lent additional force to the argument for the penalty kick: on 20 December 1890, in the Scottish Cup quarter-final between East Stirlingshire and Heart of Midlothian, Jimmy Adams fisted the ball out from under the bar, and on 14 February 1891, there was a blatant goal-line handball by a Notts County player in the FA Cup quarter-final against Stoke City. Finally, after much debate, the International Football Association Board approved the idea on 2 June 1891. The penalty-kick law ran: If any player shall intentionally trip or hold an opposing player, or deliberately handle the ball, within twelve yards [11 m] from his own goal-line, the referee shall, on appeal, award the opposing side a penalty kick, to be taken from any point twelve yards [11 m] from the goal-line, under the following conditions:— All players, with the exception of the player taking the penalty kick and the opposing goalkeeper (who shall not advance more than six yards [5.5 m] from the goal-line) shall stand at least six yards [5.5 m] behind the ball. The ball shall be in play when the kick is taken, and a goal may be scored from the penalty kick.
History:
Some notable differences between this original 1891 law and today's penalty-kick are listed below: It was awarded for an offence committed within 12 yards (11 m) of the goal-line (the penalty area was not introduced until 1902).
It could be taken from any point along a line 12 yards (11 m) from the goal-line (the penalty spot was likewise not introduced until 1902).
It was awarded only after an appeal.
There was no restriction on dribbling.
The ball could be kicked in any direction.
History:
The goalkeeper was allowed to advance up to 6 yards (5.5 m) from the goal-line.
The world's first penalty kick was awarded just five days after the change had been approved and introduced to the rules of the game by the Scottish Football Association. It was awarded to Royal Albert against Airdrieonians in the final of the Airdrie Charity Cup on 6 June 1891 at Airdrieonians' then home ground of Mavisbank Park. Fifteen minutes into the match, Airdrieonians' full-back Andrew Mitchell tripped Royal Albert's Lambie inside the penalty area, and the referee, Mr Robertson, had no hesitation in awarding a penalty kick. Royal Albert's McLuggage took the kick, and although Airdrieonians' Scottish international goalkeeper Jimmy Connor got his fingertips to the ball, he could not stop the goal. The Airdrie Advertiser's reporter of the time noted in the match report that the penalty kick was the only foul awarded against Airdrieonians in the game. The first penalty kick in the Football League was awarded to Wolverhampton Wanderers in their match against Accrington at Molineux Stadium on 14 September 1891. The penalty was taken and scored by Billy Heath as Wolves went on to win the game 5–0.
History:
Subsequent developments:
In 1892, the player taking the penalty kick was forbidden to kick the ball again before the ball had touched another player. A provision was also added that "[i]f necessary, time of play shall be extended to admit of the penalty kick being taken". In 1896, the ball was required to be kicked forward, and the requirement for an appeal was removed. In 1902, the penalty area was introduced with its current dimensions (a rectangle extending 18 yards (16 m) from the goal-posts). The penalty spot was also introduced, 12 yards (11 m) from the goal, and all other players were required to be outside the penalty area. In 1905, the goalkeeper was required to remain on the goal-line. In 1923, all other players were required to be at least 10 yards (9.15 m) from the penalty spot (in addition to being outside the penalty area); this change was made in order to stop defenders from lining up on the edge of the penalty area to impede the player taking the kick.
History:
In 1930, a footnote was appended to the laws, stating that "the goal-keeper must not move his feet until the penalty kick has been taken". In 1937, an arc (colloquially known as the "D") was added to the pitch markings to assist in the enforcement of the 10-yard (9.15 m) restriction, and the goalkeeper was required to stand between the goal-posts. In 1939, it was specified that the ball must travel the distance of its circumference before being in play. In 1997, this requirement was eliminated: the ball became in play as soon as it was kicked and moved forward. In 2016, it was specified that the ball must "clearly" move. In 1995, all other players were required to remain behind the penalty spot. The Scottish Football Association claimed that this new provision would "eliminate various problems which have arisen regarding the position of players who stand in front of the penalty-mark at the taking of a penalty-kick as is presently permitted". In 1997, the goalkeeper was once again allowed to move the feet, and was also required to face the kicker. The question of "feinting" during the run-up to a penalty was popularized by Pelé in the 1970s; it was called paradinha, which in Portuguese means "little stop". It has occupied the International FA Board since 1982, when it was decided that "if a player stops in his run-up it is an offence for which he shall be cautioned (for ungentlemanly conduct) by the referee". However, in 1985 the same body reversed itself, deciding that the "assumption that feigning was an offence" was "wrong", and that it was up to the referee to decide whether any instance should be penalized as ungentlemanly conduct. From 2000 to 2006, documents produced by IFAB specified that feinting during the run-up to a penalty kick was permitted. In 2007, this guidance emphasized that "if in the opinion of the referee the feinting is considered an act of unsporting behaviour, the player shall be cautioned".
In 2010, because of concern over "an increasing trend in players feinting a penalty kick to deceive the goalkeeper", a proposal was adopted to specify that while "feinting in the run-up to take a penalty kick to confuse opponents is permitted as part of football", "feinting to kick the ball once the player has completed his run-up is considered an infringement of Law 14 and an act of unsporting behaviour for which the player must be cautioned".
History:
Summary:
Offences for which a penalty kick was awarded:
Since its introduction in 1891, a penalty kick has been awarded for two broad categories of offences:
handball
serious offences involving physical contact
The number of offences eligible for punishment by a penalty kick, small when initially introduced in 1891, expanded rapidly thereafter. This led to some confusion: for example, in September 1891, a referee awarded a penalty kick against a goalkeeper who "[lost] his temper and [kicked] an opponent", even though under the 1891 laws this offence was punishable only by an indirect free-kick. The table below shows the punishments specified by the laws for offences involving handling the ball or physical contact, between 1890 and 1903. Since 1903, the offences for which a penalty kick is awarded within the defending team's penalty area have been identical to those for which a direct free kick is awarded outside the defending team's penalty area. These consisted of handball (excluding technical handling offences by the goalkeeper) and foul play, with the following exceptions (which were punished instead by an indirect free kick in the penalty area):
Dangerous play (since 1903)
Obstructing / impeding the progress of an opponent (1951–2016) and impeding an opponent without contact (from 2016)
Charging when not attempting to play the ball (1948–1997)
**Sub-probability measure**
Sub-probability measure:
In the mathematical theory of probability and measure, a sub-probability measure is a measure that is closely related to probability measures. While probability measures always assign the value 1 to the underlying set, sub-probability measures assign a value less than or equal to 1 to the underlying set.
Definition:
Let μ be a measure on a measurable space (X, A). Then μ is called a sub-probability measure if μ(X) ≤ 1.
Properties:
In measure theory, the following implications hold between measures:

probability measure ⇒ sub-probability measure ⇒ finite measure ⇒ σ-finite measure

So every probability measure is a sub-probability measure, but the converse is not true. Likewise, every sub-probability measure is a finite measure and a σ-finite measure, but the converses again do not hold.
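For a measure on a countable set given by its point masses, the definition can be checked directly; a minimal sketch (the function name is hypothetical, not from the source):

```python
def classify_measure(masses):
    """Classify a measure on a countable set given its point masses mu({x})."""
    if any(m < 0 for m in masses.values()):
        raise ValueError("a measure must be non-negative")
    total = sum(masses.values())  # mu(X) = sum of the point masses
    if total == 1:
        return "probability measure"
    if total < 1:
        return "sub-probability measure (not a probability measure)"
    return "finite measure (not a sub-probability measure)"

print(classify_measure({"a": 0.25, "b": 0.5}))  # mu(X) = 0.75 <= 1
```

Here {"a": 0.25, "b": 0.5} is a sub-probability measure but not a probability measure, matching the strict inclusion described above.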
**HP CloudSystem**
HP CloudSystem:
HP CloudSystem is a cloud infrastructure from Hewlett Packard Enterprise (HPE) that combines storage, servers, networking and software.
HP CloudSystem is now branded HP Helion CloudSystem and is an integral component of the HPE Helion portfolio.
History:
HP CloudSystem was first launched in January 2011. Many of its components are based on earlier HP products. HP CloudSystem is based on HP BladeSystem Matrix technology, which was originally launched in 2009. BladeSystem Matrix is a combination of HP Systems Insight Manager, the HP BladeSystem c-Class blade chassis and the HP StorageWorks EVA Fibre Channel storage framework, along with Microsoft Active Directory and virtualization hypervisors from Microsoft and VMware. HP Insight Orchestration provides the orchestration functionality.

Previous versions of HP CloudSystem combined HP Matrix Operating Environment, which manages, monitors and provisions servers for physical and virtual resources, and HP Cloud Service Automation Software, a set of system management tools used to provide and manage the lifecycle of IT services. BladeSystem Matrix supports HP ProLiant x64 blades running Microsoft Windows and Linux, and HP Integrity blades running HP-UX.
Cloud migration challenges:
The migration of traditional IT, in which IT directly controls purchasing, deployment, management and use, to a cloud computing model poses a number of challenges. A paper titled "Cloud Migration: A Case Study of Migrating an Enterprise IT System to IaaS," by researchers at the Cloud Computing Co-laboratory, School of Computer Science, University of St Andrews, raises several socio-technical issues related to the migration of IT services to the cloud. The researchers state that in-house IT personnel are at risk of becoming dependent upon the cloud service vendor, over which the user organization has no control. The researchers also note that the user organization could require more resources to carry out the migration and to overcome issues that could crop up afterwards, such as a lack of in-house knowledge of cloud operations.
Cloud migration challenges:
The researchers also note that the user organization's customer representatives could take longer to resolve customer problems, as their questions may require input from the external cloud services provider. Furthermore, migrating to cloud computing could reduce job satisfaction among IT staffers, whose jobs change from a hands-on technical role to managing external service providers. User organizations must also learn to cope with a new way of managing IT, as they are no longer in charge of software support contracts or hardware maintenance issues.

Other cloud migration challenges that have been cited include security, vendor management and technical integration. Security experts have raised the issue that public clouds are multi-tenant (see Multitenancy): the cloud provider hosts data from many different user organizations, which opens up security risks, since vulnerabilities or defects in one organization's applications could negatively affect other applications hosted by the same service provider.

On vendor management, user organizations must take responsibility for ensuring that the different cloud providers meet their service-level agreements. For technical integration, user organizations must identify what is required to integrate the cloud service with their existing IT infrastructure; the user organization may need to create a virtual machine template that describes the infrastructure, application and security required of the service provider.

Dan Kusnetzky, cloud computing analyst at research firm the 451 Group, cites resistance to change as an inhibitor to cloud computing. He is quoted as saying: "Basically, IT people are charged with keeping the status quo, because the possibility for changes introduces the chances that things will stop working." HP's Cloud Discovery Workshop and HP Cloud Roadmap Services have been described as created to address these concerns.
The workshops aim to educate user organizations about the impact of cloud computing to company culture. The opportunities and risks of cloud computing are also discussed. The roadmap service is offered as an addition to the discovery workshop. It aims to help organizations plan and adopt cloud computing.
CloudSystem environments and characteristics:
Cloud Service Automation Software

CloudSystem includes HP Cloud Service Automation Software (CSA), a set of system management tools used to provide and manage the lifecycle of IT services in an automated way. Cloud Service Automation Software is based on technology from Opsware, a company acquired by HP in 2007. The tools that are part of CSA manage the following cloud tasks: workflow, configuration, provisioning, management and monitoring.

Lauren Nelson, an analyst with industry research company Forrester Research, has been cited as describing CSA as offering tools to create, from a collection of servers, a cloud similar to Amazon Web Services or Rackspace Cloud.
CloudSystem environments and characteristics:
CloudSystem Matrix

Industry analyst company Illuminata has described CloudSystem Matrix as a "coordinated system for setting up pools of modular resources and flexibly deploying IT services across those pools." HP CloudSystem Matrix is a platform for private clouds. It has been described as helping to shorten the time it takes for user organizations to deliver complex applications and IT infrastructures.

The HP CloudSystem Matrix 7.0 cloud automation environment was launched in November 2011. It includes pre-integrated server, network, storage and software components. This version also includes cloud bursting capabilities and automated self-provisioning.

Judith Hurwitz, a widely recognized industry analyst, described CloudSystem Matrix as a linchpin of HP's cloud strategy. She has described CloudSystem Matrix as a unified computing system that combines virtual and physical server blades. A central console is used to manage resource pools, physical and virtual servers and network connectivity. Among the virtualization hypervisors supported are VMware, KVM and Microsoft Hyper-V, Hurwitz writes in her article.
CloudSystem environments and characteristics:
The engine of CloudSystem Matrix is the Matrix Operating Environment, which manages, monitors and provisions servers for physical and virtual resources. Matrix OE also carries out network management. It is infrastructure lifecycle management software that enables users to provision and modify complex infrastructures according to dynamic business demands. CloudSystem Matrix includes HP Cloud Maps, a set of templates that assist user organizations in architecting cloud computing infrastructures to deliver applications and services. HP Cloud Maps templates are pre-configured to include workflow and deployment scripts for cloud services. Cloud Maps have also been described as pre-configured templates for creating application stacks; these stacks can include workflows or third-party applications from Microsoft, SAP and Oracle to be delivered as cloud services. The templates can be loaded into CloudSystem to create a cloud services catalog. HP has been cited as describing Cloud Maps as reducing the time it takes to provision cloud services by 80%, and the time it takes for compliance management by up to 75%.
CloudSystem environments and characteristics:
HP CloudStart

HP CloudStart enables user organizations to design, build and install a private cloud based on CloudSystem Matrix. CloudStart includes Cloud Service Automation and a virtual infrastructure provisioning product that features a self-service portal. It also provides resource metering and chargeback facilities for private clouds. IT services are also included with CloudStart, including an assessment of the user organization's existing IT infrastructure plus recommendations for how the organization could use cloud computing. The cloud configuration is set up by HP, which also moves the defined services to the cloud. Training is also provided to the user organization. HP has been cited as saying that it can create a private cloud for user organizations within 30 days of its initial engagement with the organization.
CloudSystem environments and characteristics:
Cloud orchestration

Cloud computing disperses applications and services among multiple servers and, in some cases, multiple data centers. Cloud orchestration coordinates the different cloud elements to ensure systems and applications management, integration, support, billing, provisioning, service-level agreement management and contract management. Cloud orchestration is part of CloudSystem, and the feature is based on Opsware Workflow, a centralized console for provisioning, server management and patch management.
CloudSystem environments and characteristics:
CloudSystem Enterprise

CloudSystem Enterprise provides a single environment for the creation and management of private clouds. Based on Cloud Service Automation, CloudSystem Enterprise manages private clouds running on standalone physical servers, and hybrid clouds running across private and public clouds. CloudSystem Enterprise includes a lifecycle management tool to manage the deployment and retirement of applications and infrastructure.
CloudSystem environments and characteristics:
HP CloudSystem Service Provider

CloudSystem Service Provider is a platform for service providers to deliver public and virtual hosted private cloud services. It is aimed at service providers building private or public clouds for infrastructure as a service, platform as a service, or software as a service. CloudSystem can be used by service providers to provide cloud services on carrier-level networks and IT infrastructures. It also allows service providers to automate and provision cloud services.
CloudSystem environments and characteristics:
Cloud bursting

In November 2011, HP announced a cloud bursting feature for CloudSystem. This feature enables CloudSystem to use the capacity of external private and public clouds when demand increases.

CloudSystem includes other software, such as HP TippingPoint, which provides intrusion detection, network security and traffic management. CloudSystem is based on templates called Cloud Maps that provide common enterprise application stacks. Compliance is provided by HP ArcSight.
CloudSystem environments and characteristics:
Third-party support

CloudSystem is supported by a partner program, which helps cloud computing services vendors to build Cloud Centers of Excellence. These have been described as demonstration clouds for HP CloudSystem and HP Converged Systems platforms. The centers can include products from HP Networking, HP TippingPoint and HP 3PAR storage.

Certified HP partners are also able to offer HP workshops, such as the HP Cloud Discovery Workshops, and other training support. The Cloud Discovery Workshop is designed to educate attendees about cloud computing, helps attendees establish how cloud computing fits into their organizations, and discusses best practices.
**Oil cleansing method**
Oil cleansing method:
The oil cleansing method, often abbreviated as OCM, is a system for cleaning the human body. It is sometimes used for treating acne. Sometimes, oils can be mixed; one example is 50% extra virgin olive oil and 50% castor oil. This mixture can be optimized based on skin type and personal preference.
Oil cleansing method:
In accordance with skin type variations, castor oil may be too harsh in some skin-care regimens and is sometimes used in a 1:9 ratio. However, overly oily skin can make use of a larger proportion of castor oil. Other oils that are commonly used are jojoba oil, sweet almond oil, coconut oil, argan oil, rosehip oil, sunflower oil, safflower oil, and grapeseed oil. Furthermore, some sources say that the oil cleansing method is not viable for sensitive skin.
History:
The modern OCM claims to be derived from ancient bathing practices, but differs from them in its focus solely on oil; the ancients also used water. Modern soap was not produced industrially until the 19th century. In the ancient world people would use olive oil as part of their bathing. They may have combined the oil with ash, and they are known to have used a scraping implement called a strigil. In the Roman baths, a man would bathe in this way before entering the caldarium, or "hot bath". Pliny the Elder himself mentions ancient bathing practices.
Method:
In this beauty treatment, the oil is rubbed into skin for approximately two minutes. Next, a warm, damp microfiber wash cloth is used to wipe off the excess oil. Applied sparingly, oil may be used to moisturize the skin after the cleansing oil has been removed from the face.
**Called party**
Called party:
The called party (in some contexts called the "B-Number") is a person who (or device that) answers a telephone call. The person who (or device that) initiates a telephone call is the calling party.
Called party:
In some situations, there may be more than one called party; such an instance is known as a conference call. In some systems, only one called party is contacted at each step: to initiate a conference call, the calling party contacts the first called party, and this person then contacts the second called party, but audio is transferred to both called parties.
Called party:
In a collect call (i.e. reverse charge), the called party pays the fee for the call, whereas usually the calling party does so. The called party also pays if the number dialed is a toll-free telephone number.
In some countries such as Canada, the United States and China, users of mobile phones pay for the "airtime" to receive calls. In most other countries (e.g. most European countries), the elevated interconnect fees are paid fully by the calling party and the called party incurs no charge.
**Raised-relief map**
Raised-relief map:
A raised-relief map, terrain model or embossed map is a three-dimensional representation, usually of terrain, materialized as a physical artifact. When representing terrain, the vertical dimension is usually exaggerated by a factor between five and ten; this facilitates the visual recognition of terrain features.
History:
If the account of Sima Qian (c. 145–86 BCE) in his Records of the Grand Historian is proven correct upon the unearthing of Qin Shi Huang's tomb, the raised-relief map has existed since the Qin dynasty (221–206 BCE) of China. Joseph Needham suggests that certain pottery vessels of the Han dynasty (202 BCE – 220 CE) showing artificial mountains as lid decorations may have influenced the raised-relief map.

The Han dynasty general Ma Yuan made a raised-relief map of valleys and mountains, constructed of rice, in 32 CE. Such rice models were expounded on by the Tang dynasty (618–907) author Jiang Fang in his Essay on the Art of Constructing Mountains with Rice (c. 845). A raised-relief map made of wood, representing all the provinces of the empire and put together like a giant 0.93 m2 (10 ft2) jigsaw puzzle, was invented by Xie Zhuang (421–466) during the Liu Song dynasty (420–479).
History:
Shen Kuo (1031–1095) created a raised-relief map using sawdust, wood, beeswax, and wheat paste. His wooden model pleased Emperor Shenzong of Song, who later ordered that all the prefects administering the frontier regions should prepare similar wooden maps, which could be sent to the capital and stored in an archive.

In 1130, Huang Shang made a wooden raised-relief map which later caught the attention of the Neo-Confucian philosopher Zhu Xi, who tried to acquire it but instead made his own map out of sticky clay and wood. The map, made of eight pieces of wood connected by hinges, could be folded up and carried around by one person.

Later, Ibn Battuta (1304–1377) described a raised-relief map while visiting Gibraltar.

In his 1665 paper for the Philosophical Transactions of the Royal Society, John Evelyn (1620–1706) believed that wax models imitating nature and bas-relief maps were something entirely new from France. Some later scholars attributed the first raised-relief map to one Paul Dox, who represented the area of Kufstein in his raised-relief map of 1510.
Construction:
There are a number of ways to create a raised-relief map. Each method has advantages and disadvantages in regards to accuracy, price, and relative ease of creation.
Construction:
Layer Stacking

Starting with a topographic map, one can cut out successive layers from some sheet material, with edges following the contour lines on the map. These may be assembled in a stack to obtain a rough approximation of the terrain. This method is commonly used as the base for architectural models, and is usually done without vertical exaggeration. For models of landforms, the stack can then be smoothed by filling with some material. This model may be used directly, or for greater durability a mold may be made from it. This mold may then be used to produce a plaster model.
Construction:
Vacuum Formed Plastic Maps

A combination of computer numerical control (CNC) machining of a master model, and vacuum forming copies from it, can be used to rapidly mass-produce raised-relief maps. The vacuum forming technique, invented in 1947 by the Army Map Service in Washington, D.C., uses vacuum-formed plastic sheets and heat to increase the production rate of these maps. To make the vacuum-formed plastic maps, first a master model made of resin or other materials is created with a computer-guided milling machine using a digital terrain model. Then a reproduction mold is cast from the master model using a heat- and pressure-resistant material. Fine holes are put into the reproduction mold so that the air can later be removed by a vacuum. Next, a plastic sheet is applied to the mold so that the two are airtight, and a heater is placed above the plastic for about 10 seconds. The vacuum is then applied to remove the remaining air. After letting the plastic cool, it can be removed and the terrain model is complete. After this step, a color map can be overlaid/printed onto the bases that were created to make it realistic.

Vacuum-formed plastic maps have many advantages and disadvantages. They can be quickly produced, which can be beneficial in time of war or disaster. However, the accuracy of certain points throughout the model can vary: the points that touch the mold first are the most accurate, while the points that touch the mold last can become bulged and slightly distorted. Also, the effectiveness of this particular construction method varies with the terrain being represented; it is not good at representing sharp-edged land forms like high mountain ranges or urban areas.
Construction:
3D Printing

Another method which is becoming more widespread is the use of 3D printing; with the rapid development of this technology, its use is becoming increasingly economical. To create a raised-relief map with a 3D printer, digital elevation models (DEMs) are rendered into a 3D computer model, which can then be sent to the printer. Most consumer-level 3D printers extrude plastic layer by layer to create a 3D object. If a map is needed for commercial or professional uses, however, higher-end printers can be used; these use powders, resins, and even metals to create higher-quality models. After the model is created, color can be added to show different land cover characteristics, providing a more realistic view of the area. Some benefits of using a 3D printed model are that the technology and DEMs are increasingly prevalent and easy to find, and that such models are easier to understand than a typical topographic map.
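To make the DEM-to-printer step concrete, here is a minimal, hypothetical sketch (not from the source; a real workflow would use dedicated GIS or slicing tools). It triangulates a small elevation grid into the top surface of an ASCII STL mesh, applying the vertical exaggeration mentioned earlier:

```python
def dem_to_stl(heights, cell=1.0, z_exag=5.0):
    """Triangulate the top surface of a DEM grid as an ASCII STL string.

    heights: 2D list of elevations; cell: horizontal grid spacing;
    z_exag: vertical exaggeration factor (raised-relief maps use 5-10x).
    """
    rows, cols = len(heights), len(heights[0])
    lines = ["solid dem"]
    for r in range(rows - 1):
        for c in range(cols - 1):
            # corner vertices of this grid cell, with exaggerated heights
            v = lambda rr, cc: (cc * cell, rr * cell, heights[rr][cc] * z_exag)
            a, b, d, e = v(r, c), v(r, c + 1), v(r + 1, c + 1), v(r + 1, c)
            for tri in ((a, b, d), (a, d, e)):  # two triangles per cell
                lines.append(" facet normal 0 0 0")
                lines.append("  outer loop")
                lines.extend("   vertex %g %g %g" % p for p in tri)
                lines.append("  endloop")
                lines.append(" endfacet")
    lines.append("endsolid dem")
    return "\n".join(lines)
```

Saving the resulting string as a .stl file (after adding side and bottom walls to close the solid) would give a slicer something printable; the sketch only illustrates the surface triangulation step.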
Construction:
DEM/TIN Formed Papercraft Maps

Creating a papercraft raised-relief map from a digital elevation model (DEM) is a low-cost alternative to many other methods. The method involves converting the DEM to a triangulated irregular network (TIN), unfolding the TIN, printing the unfolded TIN on paper, and assembling the printout into a physical 3D model. This method allows raised-relief maps to be constructed without the need for specialized equipment or extensive training. The degree of realism and accuracy of the resulting maps is similar to that of layer stacking models; however, the quality of the final map heavily depends on the characteristics of the TIN used.
Non-terrain applications:
For appropriate mathematical functions and especially for certain types of statistics displays, a similar model may be constructed as an aid to understanding a function or as an aid to studying the statistical data.
Notable examples:
The Great Polish Map of Scotland is claimed to be the largest terrain relief model, constructed out of brick and concrete in the grounds of a hotel near Peebles, Scotland. It measures 50 by 40 metres (160 ft × 130 ft).

The Relief Map of Guatemala, in Guatemala City, is of similar size to the Great Polish Map of Scotland. It was built in 1904–05.
Notable examples:
However, a site in Ningxia province, China at 38°15′57″N 105°57′4″E was spotted in 2006 using satellite imagery. It measured 900 by 700 metres (3,000 ft × 2,300 ft), had a 3-kilometre (1.9 mi) perimeter and appeared to be a large scale relief model (1:500) of Aksai Chin, a disputed territory between China and India.
**Lehmer code**
Lehmer code:
In mathematics and in particular in combinatorics, the Lehmer code is a particular way to encode each possible permutation of a sequence of n numbers. It is an instance of a scheme for numbering permutations and is an example of an inversion table.
The Lehmer code is named in reference to Derrick Henry Lehmer, but the code had been known since 1888 at least.
The code:
The Lehmer code makes use of the fact that there are n! = n × (n−1) × ⋯ × 2 × 1 permutations of a sequence of n numbers. If a permutation σ is specified by the sequence (σ1, …, σn) of its images of 1, …, n, then it is encoded by a sequence of n numbers, but not all such sequences are valid since every number must be used only once. By contrast the encodings considered here choose the first number from a set of n values, the next number from a fixed set of n − 1 values, and so forth decreasing the number of possibilities until the last number, for which only a single fixed value is allowed; every sequence of numbers chosen from these sets encodes a single permutation. While several encodings can be defined, the Lehmer code has several additional useful properties; it is the sequence where L(σ)i = #{ j > i : σj < σi }, in other words the term L(σ)i counts the number of terms in (σ1, …, σn) to the right of σi that are smaller than it, a number between 0 and n − i, allowing for n + 1 − i different values.
The code:
A pair of indices (i,j) with i < j and σi > σj is called an inversion of σ, and L(σ)i counts the number of inversions (i,j) with i fixed and varying j. It follows that L(σ)1 + L(σ)2 + … + L(σ)n is the total number of inversions of σ, which is also the number of adjacent transpositions that are needed to transform the permutation into the identity permutation. Other properties of the Lehmer code include that the lexicographical order of the encodings of two permutations is the same as that of their sequences (σ1, …, σn), that any value 0 in the code represents a right-to-left minimum in the permutation (i.e., a σi smaller than any σj to its right), and a value n − i at position i similarly signifies a right-to-left maximum, and that the Lehmer code of σ coincides with the factorial number system representation of its position in the list of permutations of n in lexicographical order (numbering the positions starting from 0).
The code:
Variations of this encoding can be obtained by counting inversions (i,j) for fixed j rather than fixed i, by counting inversions with a fixed smaller value σj rather than smaller index i, or by counting non-inversions rather than inversions; while this does not produce a fundamentally different type of encoding, some properties of the encoding will change correspondingly. In particular counting inversions with a fixed smaller value σj gives the inversion table of σ, which can be seen to be the Lehmer code of the inverse permutation.
Encoding and decoding:
The usual way to prove that there are n! different permutations of n objects is to observe that the first object can be chosen in n different ways, the next object in n − 1 different ways (because choosing the same number as the first is forbidden), the next in n − 2 different ways (because there are now 2 forbidden values), and so forth. Translating this freedom of choice at each step into a number, one obtains an encoding algorithm, one that finds the Lehmer code of a given permutation. One need not suppose the objects permuted to be numbers, but one needs a total ordering of the set of objects. Since the code numbers are to start from 0, the appropriate number to encode each object σi by is the number of objects that were available at that point (so they do not occur before position i), but which are smaller than the object σi actually chosen. (Inevitably such objects must appear at some position j > i, and (i,j) will be an inversion, which shows that this number is indeed L(σ)i.) This number to encode each object can be found by direct counting, in several ways (directly counting inversions, or correcting the total number of objects smaller than a given one, which is its sequence number starting from 0 in the set, by those that are unavailable at its position). Another method which is in-place, but not really more efficient, is to start with the permutation of {0, 1, … n − 1} obtained by representing each object by its mentioned sequence number, and then for each entry x, in order from left to right, correct the items to its right by subtracting 1 from all entries (still) greater than x (to reflect the fact that the object corresponding to x is no longer available). 
Concretely, a Lehmer code for the permutation B,F,A,G,D,E,C of letters, ordered alphabetically, would first give the list of sequence numbers 1,5,0,6,3,4,2, which is successively transformed

1 5 0 6 3 4 2
1 4 0 5 2 3 1
1 4 0 4 2 3 1
1 4 0 3 1 2 0
1 4 0 3 1 2 0
1 4 0 3 1 1 0
1 4 0 3 1 1 0

where the final line, 1,4,0,3,1,1,0, is the Lehmer code. To form each line from the previous one, 1 is subtracted from every entry to the right of the entry being processed (the first entry in the first line, the second entry in the second line, and so on) that is larger than it.
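The counting definition translates directly into code; a short sketch (names are illustrative, not from the source):

```python
def lehmer_encode(seq):
    """Lehmer code: entry i counts later elements smaller than seq[i]."""
    return [sum(1 for right in seq[i + 1:] if right < x)
            for i, x in enumerate(seq)]

print(lehmer_encode("BFAGDEC"))  # [1, 4, 0, 3, 1, 1, 0]
```

As noted above, the sum of the code entries, here 1 + 4 + 0 + 3 + 1 + 1 + 0 = 10, is the total number of inversions of the permutation.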
Encoding and decoding:
For decoding a Lehmer code into a permutation of a given set, the latter procedure may be reversed: for each entry x, in order from right to left, correct the items to its right by adding 1 to all those (currently) greater than or equal to x; finally interpret the resulting permutation of {0, 1, … n − 1} as sequence numbers (which amounts to adding 1 to each entry if a permutation of {1, 2, … n} is sought). Alternatively the entries of the Lehmer code can be processed from left to right, and interpreted as a number determining the next choice of an element as indicated above; this requires maintaining a list of available elements, from which each chosen element is removed. In the example this would mean choosing element 1 from {A,B,C,D,E,F,G} (which is B) then element 4 from {A,C,D,E,F,G} (which is F), then element 0 from {A,C,D,E,G} (giving A) and so on, reconstructing the sequence B,F,A,G,D,E,C.
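The "list of available elements" procedure just described can be sketched as follows (names illustrative):

```python
def lehmer_decode(code, symbols):
    """Entry k of the code selects the k-th smallest still-available symbol."""
    pool = sorted(symbols)            # available elements, in increasing order
    return [pool.pop(k) for k in code]

print(lehmer_decode([1, 4, 0, 3, 1, 1, 0], "ABCDEFG"))  # ['B', 'F', 'A', 'G', 'D', 'E', 'C']
```

Each `pool.pop(k)` both chooses the element and removes it from the list of available elements, exactly as in the worked example.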
Applications to combinatorics and probabilities:
Independence of relative ranks

The Lehmer code defines a bijection from the symmetric group Sn to the Cartesian product [n] × [n−1] × ⋯ × [2] × [1], where [k] designates the k-element set {0, 1, …, k−1}. As a consequence, under the uniform distribution on Sn, the component L(σ)i defines a uniformly distributed random variable on [n + 1 − i], and these random variables are mutually independent, because they are projections on different factors of a Cartesian product.
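Because the components are independent and uniform on sets of sizes n, n−1, …, 1, drawing each one uniformly at random and then decoding yields a uniformly random permutation; a small sketch under these assumptions:

```python
import random

def random_permutation(n, rng=random):
    """Draw each Lehmer component uniformly and independently, then decode."""
    code = [rng.randrange(n - i) for i in range(n)]  # component i uniform on {0, ..., n-1-i}
    pool = list(range(n))
    return [pool.pop(k) for k in code]

print(random_permutation(7))
```

This is an alternative route to the same uniform distribution that the Fisher-Yates shuffle produces.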
Applications to combinatorics and probabilities:
Number of right-to-left minima and maxima

Definition: In a sequence u = (uk)1≤k≤n, there is a right-to-left minimum (resp. maximum) at rank k if uk is strictly smaller (resp. strictly bigger) than each element ui with i > k, i.e., to its right.
Applications to combinatorics and probabilities:
Let B(k) (resp. H(k)) be the event "there is a right-to-left minimum (resp. maximum) at rank k", i.e. B(k) is the set of the permutations of Sn which exhibit a right-to-left minimum (resp. maximum) at rank k. We clearly have B(k) = {L(σ)k = 0} and H(k) = {L(σ)k = n − k}, since a right-to-left minimum (resp. maximum) at rank k means that none (resp. all) of the n − k terms to the right of σk are smaller than σk. Thus the number Nb(ω) (resp. Nh(ω)) of right-to-left minima (resp. maxima) of the permutation ω can be written as a sum of independent Bernoulli random variables: as L(σ)k is uniformly distributed on {0, 1, …, n − k}, the indicator of B(k) (resp. H(k)) is a Bernoulli variable with parameter 1/(n + 1 − k), so, up to reordering, the parameters are 1/1, 1/2, …, 1/n. The generating function of a Bernoulli random variable with parameter 1/k is (k − 1 + s)/k, therefore the generating function of Nb is

∏k=1..n (k − 1 + s)/k = s(s + 1)⋯(s + n − 1)/n!

(using the rising factorial notation), which allows us to recover the product formula for the generating function of the Stirling numbers of the first kind (unsigned).
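The appearance of the unsigned Stirling numbers of the first kind can be checked by brute force for small n; a verification sketch:

```python
from collections import Counter
from itertools import permutations

def rl_minima(p):
    """Count the right-to-left minima of the sequence p."""
    n = len(p)
    return sum(1 for k in range(n) if all(p[k] < p[j] for j in range(k + 1, n)))

# Tally Nb over all of S_4: the counts 6, 11, 6, 1 are c(4, 1), ..., c(4, 4),
# the unsigned Stirling numbers of the first kind.
tally = Counter(rl_minima(p) for p in permutations(range(4)))
print(sorted(tally.items()))  # [(1, 6), (2, 11), (3, 6), (4, 1)]
```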
Applications to combinatorics and probabilities:
The secretary problem

This is an optimal stopping problem, a classic in decision theory, statistics and applied probability, in which a random permutation is gradually revealed through the first elements of its Lehmer code, and in which the goal is to stop exactly at the element k such that σ(k) = n, whereas the only available information (the first k values of the Lehmer code) is not sufficient to compute σ(k).
Applications to combinatorics and probabilities:
In less mathematical words : a series of n applicants are interviewed one after the other. The interviewer must hire the best applicant, but must make his decision (“Hire” or “Not hire”) on the spot, without interviewing the next applicant (and a fortiori without interviewing all applicants).
The interviewer does know the rank of the kth applicant relative to the applicants already interviewed; therefore, at the moment of making his kth decision, the interviewer knows only the first k elements of the Lehmer code, whereas he would need to know all of them to make a well-informed decision.
To determine the optimal strategies (i.e. the strategy maximizing the probability of a win), the statistical properties of the Lehmer code are crucial.
Applications to combinatorics and probabilities:
Allegedly, Johannes Kepler described this secretary problem to a friend of his at a time when he was trying to make up his mind and choose one out of eleven prospective brides as his second wife. His first marriage had been an unhappy one, having been arranged without his being consulted, and he was thus very concerned that he should reach the right decision.
Similar concepts:
Two similar vectors are in use. One of them is often called inversion vector, e.g. by Wolfram Alpha.
See also Inversion (discrete mathematics) § Inversion related vectors.
**Triphala**
Triphala:
Triphala ("three fruits") is an Ayurvedic herbal rasayana formula consisting of equal parts of three myrobalans, taken without seed: Amalaki (Phyllanthus emblica), Bibhitaki (Terminalia bellirica), and Haritaki (Terminalia chebula). It contains vitamin C.
**Pneumonic device**
Pneumonic device:
A pneumonic device is any equipment designed for use with or relating to the lungs. The iron lung and medical ventilator may be considered pneumonic devices. The term may also refer to any device used in the field of respiratory therapy.
**Cryptanalytic computer**
Cryptanalytic computer:
A cryptanalytic computer is a computer designed to be used for cryptanalysis, which nowadays involves massive statistical analysis and multiple trial decryptions that, since before World War II, have been possible only with automated equipment. Polish cryptanalysts designed and built automated aids in their work on Enigma traffic. Arguably, the first modern computer (digital, electronic, and somewhat programmable) was built for cryptanalytic work at Bletchley Park (the Colossus) during the war. More modern computers were important after World War II, and some machines (like the Cray-1) are reported to have had machine instructions hardwired in at the request of the NSA.
Cryptanalytic computer:
Computers continue to be important in cryptanalysis well into the 21st century. The NSA, in fact, is said to have the largest number of installed computers on the planet. Whether this remains true in an age of Google-scale computer farms is doubtful, but the answer remains publicly unknown.
**Hypocenter**
Hypocenter:
A hypocenter or hypocentre (from Ancient Greek ὑπόκεντρον (hupókentron) 'below the center'), also called ground zero or surface zero, is the point on the Earth's surface directly below a nuclear explosion, meteor air burst, or other mid-air explosion. In seismology, a hypocenter of an earthquake is its point of origin below ground; a synonym is the focus of an earthquake. Generally, the terms ground zero and surface zero are also used in relation to epidemics and other disasters to mark the point of the most severe damage or destruction. The term is distinguished from the term zero point in that the latter can also be located in the air, underground, or underwater.
Trinity, Hiroshima and Nagasaki:
The term "ground zero" originally referred to the hypocenter of the Trinity test in the Jornada del Muerto desert near Socorro, New Mexico, and the atomic bombings of Hiroshima and Nagasaki in Japan. The United States Strategic Bombing Survey of the atomic attacks, released in June 1946, used the term liberally, defining it as: "For convenience, the term 'ground zero' will be used to designate the point on the ground directly beneath the point of detonation, or 'air zero.'" William Laurence, an embedded reporter with the Manhattan Project, reported that "Zero" was "the code name given to the spot chosen for the [Trinity] test" in 1945. The Oxford English Dictionary, citing the use of the term in a 1946 New York Times report on the destroyed city of Hiroshima, defines ground zero as "that part of the ground situated immediately under an exploding bomb, especially an atomic one." The term was military slang, used at the Trinity site where the weapon tower for the first nuclear weapon was at "point zero", and moved into general use very shortly after the end of World War II. At Hiroshima, the hypocenter of the attack was Shima Hospital, approximately 800 ft (240 m) away from the intended aiming point at Aioi Bridge.
The Pentagon:
During the Cold War, The Pentagon, the headquarters of the United States Department of Defense in Arlington County, Virginia, was an assured target in the event of nuclear war. The open space in the center of the Pentagon became known informally as ground zero. A snack bar that used to be located at the center of this open space was nicknamed "Cafe Ground Zero".
World Trade Center:
During the September 11 attacks in 2001, two aircraft were hijacked by 10 al-Qaeda terrorists and flown into the Twin Towers of the World Trade Center in New York City, causing massive damage and starting fires that caused the weakened 110-story skyscrapers to collapse. The destroyed World Trade Center site soon became known as "ground zero". Rescue workers also used the term "the Pile", referring to the pile of rubble that was left after the buildings collapsed. Even after the site was cleaned up and construction on the new One World Trade Center and the National September 11 Memorial & Museum were well under way, the term was still frequently used to refer to the site, as when opponents of the Park51 project that was to be located two blocks away from the site labeled it the "Ground Zero mosque".
World Trade Center:
In advance of the 10th anniversary of the attacks, New York City mayor Michael Bloomberg urged that the "ground zero" moniker be retired, saying, "…the time has come to call those 16 acres [6.5 hectares] what they are: The World Trade Center and the National September 11th Memorial and Museum."
Meteor air bursts:
The hypocenter of a meteor air burst, caused by an asteroid or comet that explodes in the atmosphere rather than striking the surface, is the closest point on the surface to the explosion. The Tunguska event occurred in Siberia in 1908 and flattened an estimated 80 million trees over an area of 2,150 km2 (830 sq mi) of forest. However, the trees at the hypocenter of the blast were left standing, all their limbs having been blown off by the shockwave. The 2013 Chelyabinsk meteor burst over a more populated area of Russia than the Tunguska event did, resulting in civil damage and injury, mostly from flying glass shards from broken windows.
Earthquakes:
An earthquake's hypocenter is the position where the strain energy stored in the rock is first released, marking the point where the fault begins to rupture. This occurs directly beneath the epicenter, at a distance known as the hypocentral depth or focal depth. The focal depth can be calculated from measurements based on seismic wave phenomena. As with all wave phenomena in physics, there is uncertainty in such measurements that grows with the wavelength, so the focal depth of the source of these long-wavelength (low-frequency) waves is difficult to determine exactly. Very strong earthquakes radiate a large fraction of their released energy in seismic waves with very long wavelengths; therefore a stronger earthquake involves the release of energy from a larger mass of rock.
Earthquakes:
Computing the hypocenters of foreshocks, main shock, and aftershocks of earthquakes allows the three-dimensional plotting of the fault along which movement is occurring. The expanding wavefront from the earthquake's rupture propagates at a speed of several kilometers per second; this seismic wave is what is measured at various surface points in order to geometrically determine an initial guess as to the hypocenter. The wave reaches each station at a time determined by that station's distance from the hypocenter. A number of factors need to be taken into account, most importantly variations in the wave's speed depending on the materials it is passing through. With adjustments for velocity changes, the initial estimate of the hypocenter is made; then a series of linear equations is set up, one for each station. The equations express the difference between the observed arrival times and those calculated from the initial estimated hypocenter. These equations are solved by the method of least squares, which minimizes the sum of the squares of the differences between the observed and calculated arrival times, and a new estimated hypocenter is computed. The system iterates until the location is pinpointed within the margin of error for the velocity computations.
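The iterative least-squares scheme described above can be sketched in a few lines. The following toy implementation (our own illustration; the station layout, wave speed, and all names are made up) performs Gauss–Newton iterations in two dimensions with a constant wave speed, recovering a synthetic hypocenter and origin time from exact arrival times:

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(A, b):
    """Solve the 3x3 linear system A x = b by Cramer's rule."""
    D = det3(A)
    out = []
    for i in range(3):
        M = [row[:] for row in A]
        for r in range(3):
            M[r][i] = b[r]
        out.append(det3(M) / D)
    return out

def locate(stations, t_obs, v, guess, iters=25):
    """Gauss-Newton refinement of (x, y, t0): each iteration sets up one
    linearized equation per station, relating the arrival-time residual to a
    correction of the estimate, and solves them in the least-squares sense."""
    x, y, t0 = guess
    for _ in range(iters):
        A = [[0.0] * 3 for _ in range(3)]
        b = [0.0] * 3
        for (sx, sy), t in zip(stations, t_obs):
            d = math.hypot(x - sx, y - sy)
            r = t - (t0 + d / v)  # observed minus predicted arrival time
            row = [(x - sx) / (v * d), (y - sy) / (v * d), 1.0]  # Jacobian row
            for i in range(3):
                b[i] += row[i] * r
                for j in range(3):
                    A[i][j] += row[i] * row[j]
        dx, dy, dt = solve3(A, b)  # normal equations: (J^T J) delta = J^T r
        x, y, t0 = x + dx, y + dy, t0 + dt
    return x, y, t0

# Synthetic test case: 5 stations (coordinates in km), wave speed 6 km/s,
# true source at (40, 60) with origin time 0.
stations = [(0, 0), (100, 0), (0, 100), (100, 100), (50, 130)]
v, true_x, true_y = 6.0, 40.0, 60.0
t_obs = [math.hypot(true_x - sx, true_y - sy) / v for sx, sy in stations]
est = locate(stations, t_obs, v, guess=(50.0, 50.0, 0.2))
```

Real location codes work in three dimensions with layered velocity models, but the structure — residuals, Jacobian, normal equations, iterate — is the same.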
**Propyl benzoate**
Propyl benzoate:
Propyl benzoate is an organic chemical compound used as a food additive. It is an ester.
Uses:
Propyl benzoate has a nutty odor and sweet fruity or nut-like taste, and as such, it is used as a synthetic flavoring agent in foods. It also has antimicrobial properties and is used as a preservative in cosmetics. It occurs naturally in the sweet cherry and in clove stems, as well as in butter.
Reactions:
Propyl benzoate can be synthesized by the transesterification of methyl benzoate with propanol.
Propyl benzoate can also be synthesized by means of Fischer esterification of benzoic acid with propanol. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ragone plot**
Ragone plot:
A Ragone plot ( rə-GOH-nee) is a plot used for comparing the energy density of various energy-storing devices. On such a chart the values of specific energy (in W·h/kg) are plotted versus specific power (in W/kg). Both axes are logarithmic, which allows comparing performance of very different devices. Ragone plots can reveal information about gravimetric energy density, but do not convey details about volumetric energy density.
Ragone plot:
The Ragone plot was first used to compare performance of batteries. However, it is suitable for comparing any energy-storage devices, as well as energy devices such as engines, gas turbines, and fuel cells. The plot is named after David V. Ragone. Conceptually, the vertical axis describes how much energy is available per unit mass, while the horizontal axis shows how quickly that energy can be delivered, otherwise known as power per unit mass. A point in a Ragone plot represents a particular energy device or technology.
Ragone plot:
The amount of time (in hours) during which a device can be operated at its rated power is given as the ratio between the specific energy (Y-axis) and the specific power (X-axis). This is true regardless of the overall scale of the device, since a larger device would have proportional increases in both power and energy. Consequently, the iso curves (curves of constant operating time) in a Ragone plot are straight lines. For electrical systems, the following equations are relevant: specific energy = V·I·t/m and specific power = V·I/m, where V is voltage (V), I is electric current (A), t is time (s) and m is mass (kg).
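A quick numeric illustration (the device figures below are hypothetical round numbers, not measured data): since the operating time at rated power is the ratio of specific energy to specific power, mass cancels and the result depends only on the two plotted coordinates.

```python
# Hypothetical (specific energy in Wh/kg, specific power in W/kg) coordinates
# of two points on a Ragone plot.
devices = {
    "supercapacitor": (5.0, 5000.0),
    "li_ion_cell": (150.0, 300.0),
}

def runtime_hours(specific_energy, specific_power):
    """Operating time at rated power, in hours: E (Wh/kg) / P (W/kg).
    Mass cancels, so the result is independent of device size."""
    return specific_energy / specific_power

times = {name: runtime_hours(e, p) for name, (e, p) in devices.items()}
# The hypothetical Li-ion cell runs for 0.5 h at rated power,
# the supercapacitor for 0.001 h (3.6 s).
```

On log-log axes, all devices with the same runtime lie on one straight line, which is why the iso curves are straight.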
**Review of Scientific Instruments**
Review of Scientific Instruments:
Review of Scientific Instruments is a monthly peer-reviewed scientific journal published by the American Institute of Physics. Its area of interest is scientific instruments, apparatus, and techniques. According to the Journal Citation Reports, the journal has a 2018 impact factor of 1.587.
**Retinoid**
Retinoid:
The retinoids are a class of chemical compounds that are vitamers of vitamin A or are chemically related to it. Retinoids have found use in medicine where they regulate epithelial cell growth.
Retinoids have many important functions throughout the body including roles in vision, regulation of cell proliferation and differentiation, growth of bone tissue, immune function, and activation of tumor suppressor genes.
Research is also being done into their ability to treat skin cancers. Currently, alitretinoin (9-cis-retinoic acid) may be used topically to help treat skin lesions from Kaposi's sarcoma, and tretinoin (all-trans- retinoic acid) is used to treat acute promyelocytic leukemia.
Types:
There are four generations of retinoids. The first generation includes retinol, retinal, tretinoin (retinoic acid), isotretinoin, and alitretinoin. The second generation includes etretinate and its metabolite acitretin. The third generation includes adapalene, bexarotene, and tazarotene. The fourth generation includes trifarotene.
Structure:
The basic structure of the hydrophobic retinoid molecule consists of a cyclic end group, a polyene side chain and a polar end group. The conjugated system formed by alternating C=C double bonds in the polyene side chain are responsible for the color of retinoids (typically yellow, orange, or red). Hence, many retinoids are chromophores. Alternation of side chains and end groups creates the various classes of retinoids.
Structure:
First and second generation retinoids are able to bind with several retinoid receptors due to the flexibility imparted by their alternating single and double bonds.
Third generation retinoids are less flexible than first- and second-generation retinoids and therefore, interact with fewer retinoid receptors.
The fourth-generation retinoid, trifarotene, binds selectively to the RAR-γ receptor. It was approved for use in the US in 2019.
Absorption:
The major sources of retinoids from the diet are plant pigments such as carotenes and retinyl esters derived from animal sources. Retinyl esters are hydrolyzed in the intestinal lumen to yield free retinol and the corresponding fatty acid (i.e. palmitate or stearate). After hydrolysis, retinol is taken up by the enterocytes. Retinyl ester hydrolysis requires the presence of bile salts that serve to solubilize the retinyl esters in mixed micelles and to activate the hydrolyzing enzymes. Several enzymes that are present in the intestinal lumen may be involved in the hydrolysis of dietary retinyl esters. Cholesterol esterase is secreted into the intestinal lumen from the pancreas and has been shown, in vitro, to display retinyl ester hydrolase activity. In addition, a retinyl ester hydrolase that is intrinsic to the brush-border membrane of the small intestine has been characterized in the rat as well as in the human. The different hydrolyzing enzymes are activated by different types of bile salts and have distinct substrate specificities. For example, whereas the pancreatic esterase is selective for short-chain retinyl esters, the brush-border membrane enzyme preferentially hydrolyzes retinyl esters containing a long-chain fatty acid such as palmitate or stearate. Retinol enters the absorptive cells of the small intestine, preferentially in the all-trans-retinol form.
Uses:
Common skin conditions treated by retinoids include acne and psoriasis. Retinoids are used in the treatment of many diverse diseases and are effective in the treatment of a number of dermatological conditions such as inflammatory skin disorders, skin cancers (such as bexarotene for mycosis fungoides), disorders of increased cell turnover (e.g. psoriasis), photoaging, and skin wrinkles. Isotretinoin was originally a chemotherapy treatment for certain cancers, such as leukemia.
Toxicity:
Toxic effects occur with prolonged high intake. The specific toxicity is related to exposure time and the exposure concentration. A medical sign of chronic poisoning is the presence of painful tender swellings on the long bones. Anorexia, skin lesions, hair loss, hepatosplenomegaly, papilloedema, bleeding, general malaise, pseudotumor cerebri, and death may also occur. Chronic overdose also causes increased lability of biological membranes and causes the outer layer of the skin to peel. Recent research has suggested a role for retinoids in the cutaneous adverse effects of a variety of drugs, including the antimalarial drug proguanil. It is proposed that drugs such as proguanil act to disrupt retinoid homeostasis.
Toxicity:
Systemic retinoids (isotretinoin, etretinate) are contraindicated during pregnancy as they may cause CNS, cranio-facial, cardiovascular and other defects.
**Interference engine**
Interference engine:
An interference engine is a type of four-stroke internal combustion piston engine in which one or more valves in the fully open position extend into an area through which the piston may travel. By contrast, in a non-interference engine, the piston does not travel into any area into which the valves open. Interference engines rely on timing gears, chains, or belts to prevent the piston from striking the valves by ensuring that the valves are closed when the piston is near top dead center. Interference engines are prevalent among modern production automobiles and many other four-stroke engine applications; the main advantage is that the design allows engine designers to maximize the engine's compression ratio. However, such engines risk major internal damage if a piston strikes a valve due to failure of camshaft drive belts, drive chains, or drive gears.
Timing gear failure:
In interference engine designs, replacing a timing belt in regular intervals or repairing chain issues as soon as they are discovered is essential, as incorrect timing may result in the pistons and valves colliding and causing extensive internal engine damage. The piston will likely bend the valves, or, if a piece of valve or piston is broken off within the cylinder, the broken piece may cause severe damage within the cylinder, possibly affecting the connecting rods.
Timing gear failure:
If a timing belt or chain breaks in an interference engine, mechanics check for bent valves by performing a leak-down test of each cylinder or by checking the valve gaps. A very large valve gap points to a bent valve. Repair options depend on the damage. If the pistons and cylinders are damaged, the engine must be rebuilt or replaced. If valves are bent, but there is no other damage, replacing bent valves and rebuilding the cylinder head, as well as replacing the timing belt/chain components might be enough.
**Journal of Periodontology**
Journal of Periodontology:
The Journal of Periodontology is the academic journal of the American Academy of Periodontology (AAP). It was established in 1930.
Journal of Periodontology:
It is dedicated to Dr. Gillette Hayden. According to the July 1933 Journal, "The Journal of Periodontology is lovingly dedicated to the memory of Doctor Gillette Hayden. Her selfless devotion and untiring efforts in behalf of periodontia and the American Academy of Periodontology, have served as an inspiration to her close associates which can only be consummated by carrying onward the work for which she spent her life."
**Kobon triangle problem**
Kobon triangle problem:
The Kobon triangle problem is an unsolved problem in combinatorial geometry first stated by Kobon Fujimura (1903-1983). The problem asks for the largest number N(k) of nonoverlapping triangles whose sides lie on an arrangement of k lines. Variations of the problem consider the projective plane rather than the Euclidean plane, and require that the triangles not be crossed by any other lines of the arrangement.
Known upper bounds:
Saburo Tamura proved that the number of nonoverlapping triangles realizable by k lines is at most ⌊k(k−2)/3⌋. G. Clément and J. Bader proved more strongly that this bound cannot be achieved when k is congruent to 0 or 2 (mod 6); the maximum number of triangles is therefore at most one less in these cases. The same bounds can be stated equivalently without using the floor function. Solutions attaining these upper bounds are known when k is 3, 4, 5, 6, 7, 8, 9, 13, 15 or 17. For k = 10, 11 and 12, the best known solutions reach a number of triangles one less than the upper bound.
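These bounds are easy to tabulate. A small sketch (our own code) computes Tamura's bound with the Clément–Bader correction and reproduces the known optimal values for k = 3, …, 9:

```python
def kobon_upper_bound(k):
    """Tamura's bound floor(k*(k-2)/3), reduced by 1 when k is congruent to
    0 or 2 (mod 6), where Clement and Bader showed it cannot be attained."""
    bound = (k * (k - 2)) // 3
    if k % 6 in (0, 2):
        bound -= 1
    return bound

bounds = [kobon_upper_bound(k) for k in range(3, 10)]
# For k = 3..9 these bounds are attained by known line arrangements:
# 1, 2, 5, 7, 11, 15, 21 triangles.
```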
Known constructions:
Given an optimal solution with k0 > 3 lines, other Kobon triangle solution numbers can be found for all values ki in the sequence defined by ki+1 = 2ki − 1, by using the procedure of D. Forge and J. L. Ramirez Alfonsin. For example, the solution for k0 = 5 leads to the maximal number of nonoverlapping triangles for k = 5, 9, 17, 33, 65, ....
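The sequence of line counts produced by the doubling procedure is a simple recurrence. The tiny sketch below (ours; it illustrates only the number sequence, not the geometric construction itself) generates it:

```python
def forge_ramirez_line_counts(k0, length):
    """Line counts reachable from an optimal k0-line Kobon solution via the
    doubling procedure: each step maps k to 2*k - 1."""
    seq = [k0]
    while len(seq) < length:
        seq.append(2 * seq[-1] - 1)
    return seq
```

Starting from k0 = 5 this yields 5, 9, 17, 33, 65, matching the example in the text.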
**Locally compact quantum group**
Locally compact quantum group:
In mathematics and theoretical physics, a locally compact quantum group is a relatively new C*-algebraic approach toward quantum groups that generalizes the Kac algebra, compact-quantum-group and Hopf-algebra approaches. Earlier attempts at a unifying definition of quantum groups using, for example, multiplicative unitaries have enjoyed some success but have also encountered several technical problems.
One of the main features distinguishing this new approach from its predecessors is the axiomatic existence of left and right invariant weights. This gives a noncommutative analogue of left and right Haar measures on a locally compact Hausdorff group.
Definitions:
Before we can even begin to properly define a locally compact quantum group, we first need to define a number of preliminary concepts and also state a few theorems.
Definitions:
Definition (weight). Let A be a C*-algebra, and let A≥0 denote the set of positive elements of A. A weight on A is a function ϕ : A≥0 → [0,∞] such that ϕ(a1 + a2) = ϕ(a1) + ϕ(a2) for all a1, a2 ∈ A≥0, and ϕ(r·a) = r·ϕ(a) for all r ∈ [0,∞) and a ∈ A≥0.
Some notation for weights. Let ϕ be a weight on a C*-algebra A. We use the following notation: Mϕ+ := {a ∈ A≥0 | ϕ(a) < ∞}, which is called the set of all positive ϕ-integrable elements of A; Nϕ := {a ∈ A | ϕ(a∗a) < ∞}, which is called the set of all ϕ-square-integrable elements of A; Mϕ := Span(Nϕ∗Nϕ), which is called the set of all ϕ-integrable elements of A.
Types of weights. Let ϕ be a weight on a C*-algebra A. We say that ϕ is faithful if and only if ϕ(a) ≠ 0 for each non-zero a ∈ A≥0. We say that ϕ is lower semi-continuous if and only if the set {a ∈ A≥0 | ϕ(a) ≤ λ} is a closed subset of A for every λ ∈ [0,∞]. We say that ϕ is densely defined if and only if Mϕ+ is a dense subset of A≥0, or equivalently, if and only if either Nϕ or Mϕ is a dense subset of A. We say that ϕ is proper if and only if it is non-zero, lower semi-continuous and densely defined.
Definition (one-parameter group). Let A be a C*-algebra. A one-parameter group on A is a family α = (αt)t∈ℝ of *-automorphisms of A that satisfies αs ∘ αt = αs+t for all s, t ∈ ℝ. We say that α is norm-continuous if and only if for every a ∈ A, the mapping ℝ → A defined by t ↦ αt(a) is continuous.
Definitions:
Definition (analytic extension of a one-parameter group). Given a norm-continuous one-parameter group α on a C*-algebra A, we are going to define an analytic extension of α. For each z ∈ ℂ, let I(z) := {y ∈ ℂ | |ℑ(y)| ≤ |ℑ(z)|}, which is a horizontal strip in the complex plane. We call a function f : I(z) → A norm-regular if and only if the following conditions hold: it is analytic on the interior of I(z), i.e., for each y0 in the interior of I(z), the limit lim_{y→y0} (f(y) − f(y0))/(y − y0) exists with respect to the norm topology on A; it is norm-bounded on I(z); and it is norm-continuous on I(z). Suppose now that z ∈ ℂ∖ℝ, and let Dz := {a ∈ A | there exists a norm-regular f : I(z) → A such that f(t) = αt(a) for all t ∈ ℝ}.
Definitions:
Define αz : Dz → A by αz(a) := f(z). The function f is uniquely determined (by the theory of complex-analytic functions), so αz is indeed well-defined. The family (αz)z∈ℂ is then called the analytic extension of α.
Theorem 1. The set ∩z∈ℂ Dz, called the set of analytic elements of A, is a dense subset of A.
Definition (K.M.S. weight). Let A be a C*-algebra and ϕ : A≥0 → [0,∞] a weight on A. We say that ϕ is a K.M.S. weight ('K.M.S.' stands for 'Kubo–Martin–Schwinger') on A if and only if ϕ is a proper weight on A and there exists a norm-continuous one-parameter group (σt)t∈ℝ on A such that ϕ is invariant under σ, i.e., ϕ ∘ σt = ϕ for all t ∈ ℝ, and for every a ∈ Dom(σi/2), we have ϕ(a∗a) = ϕ(σi/2(a)[σi/2(a)]∗).
We denote by M(A) the multiplier algebra of A.
Theorem 2. If A and B are C*-algebras and π : A → M(B) is a non-degenerate *-homomorphism (i.e., π[A]B is a dense subset of B), then we can uniquely extend π to a *-homomorphism π‾ : M(A) → M(B).
Theorem 3. If ω : A → ℂ is a state (i.e., a positive linear functional of norm 1) on A, then we can uniquely extend ω to a state ω‾ : M(A) → ℂ on M(A).
Definition (locally compact quantum group). A (C*-algebraic) locally compact quantum group is an ordered pair G = (A, Δ), where A is a C*-algebra and Δ : A → M(A ⊗ A) is a non-degenerate *-homomorphism, called the co-multiplication, that satisfies the following four conditions: (1) the co-multiplication is co-associative, i.e., (Δ ⊗ ι)‾ ∘ Δ = (ι ⊗ Δ)‾ ∘ Δ; (2) the sets {(ω ⊗ id)‾(Δ(a)) | ω ∈ A∗, a ∈ A} and {(id ⊗ ω)‾(Δ(a)) | ω ∈ A∗, a ∈ A} are linearly dense subsets of A; (3) there exists a faithful K.M.S. weight ϕ on A that is left-invariant, i.e., ϕ((ω ⊗ id)‾(Δ(a))) = ω‾(1M(A))·ϕ(a) for all positive ω ∈ A∗ and a ∈ Mϕ+; (4) there exists a K.M.S. weight ψ on A that is right-invariant, i.e., ψ((id ⊗ ω)‾(Δ(a))) = ω‾(1M(A))·ψ(a) for all positive ω ∈ A∗ and a ∈ Mψ+.
From the definition of a locally compact quantum group, it can be shown that the right-invariant K.M.S. weight ψ is automatically faithful. Therefore, the faithfulness of ψ is a redundant condition and does not need to be postulated.
Duality:
The category of locally compact quantum groups allows for a dual construction with which one can prove that the bi-dual of a locally compact quantum group is isomorphic to the original one. This result gives a far-reaching generalization of Pontryagin duality for locally compact Hausdorff abelian groups.
Alternative formulations:
The theory has an equivalent formulation in terms of von Neumann algebras.
**HealthMap**
HealthMap:
HealthMap is a freely accessible, automated electronic information system for monitoring, organizing, and visualizing reports of global disease outbreaks according to geography, time, and infectious disease agent. In operation since September 2006, and created by John Brownstein, PhD and Clark Freifeld, PhD, HealthMap acquires data from a variety of freely available electronic media sources (e.g. ProMED-mail, Eurosurveillance, Wildlife Disease Information Node) to obtain a comprehensive view of the current global state of infectious diseases. Users of HealthMap come from a variety of organizations including state and local public health agencies, the World Health Organization (WHO), the US Centers for Disease Control and Prevention, and the European Centre for Disease Prevention and Control. HealthMap is used both as an early detection system and supports situational awareness by providing current, highly local information about outbreaks, even from areas relatively invisible to traditional global public health efforts. Currently, HealthMap monitors information sources in English, Chinese, Spanish, Russian, French, Portuguese, and Arabic. In March 2014, the HealthMap software tracked early press and social media reports of a hemorrhagic fever in West Africa, subsequently identified by WHO as Ebola. The HealthMap team subsequently created a dedicated HealthMap visualization at healthmap.org/ebola.
**Bracket matching**
Bracket matching:
Bracket matching, also known as brace matching or parentheses matching, is a syntax highlighting feature of certain text editors and integrated development environments that highlights matching sets of brackets (square brackets, curly brackets, or parentheses) in languages such as Java, JavaScript, and C++ that use them. The purpose is to help the programmer navigate through the code and also spot any improper matching, which would cause the program to not compile or malfunction. If a closing bracket is left out, for instance, the compiler will not know that the end of a block of code has been reached. Bracket matching is particularly useful when many nested if statements, program loops, etc. are involved.
Implementations:
Vim's % command does bracket matching, and NetBeans has bracket matching built-in.
Bracket matching can also be a tool for code navigation. In Visual Studio C++ 6.0, bracket matching behavior was set to ignore brackets found in comments. In VSC 7.0, its behavior was changed to compute commented brackets.
IntelliJ IDEA's Ruby on Rails plugin also enables bracket matching. It has been proposed that Perl 5 be modified to facilitate bracket matching. The Microsoft Excel 2003 formula bar has parentheses matching. Its implementation shows all the pairs of parentheses as different colors, so it is possible to easily analyze them all at once.
Example:
In this example, the user has just typed the closing curly brace '}' defining a code block, and that brace and its corresponding opening brace are both highlighted.
for (int i = 0; i < 10; i++) { System.out.println(i); }│
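A minimal, editor-agnostic sketch of the underlying algorithm (our own illustration, not any particular editor's implementation): a stack pairs each closing bracket with the most recent unmatched opener, which is exactly the information a highlighter needs to jump between or colorize matching pairs.

```python
def match_brackets(text):
    """Map each opening bracket's index to its matching closing index (and
    vice versa) using a stack; return None if the brackets are unbalanced."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack, matches = [], {}
    for i, ch in enumerate(text):
        if ch in '([{':
            stack.append(i)          # remember the position of the opener
        elif ch in pairs:
            if not stack or text[stack[-1]] != pairs[ch]:
                return None          # mismatched or extra closing bracket
            j = stack.pop()
            matches[j], matches[i] = i, j
    return matches if not stack else None  # leftover openers => unbalanced
```

Given the cursor position of one bracket, an editor can then look up its partner in constant time and highlight both.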
**GPR183**
GPR183:
G-protein coupled receptor 183 also known as Epstein-Barr virus-induced G-protein coupled receptor 2 (EBI2) is a protein (GPCR) expressed on the surface of some immune cells, namely B cells and T cells; in humans it is encoded by the GPR183 gene. Expression of EBI2 is one critical mediator of immune cell localization within lymph nodes, responsible in part for the coordination of B cell, T cell, and dendritic cell movement and interaction following antigen exposure. EBI2 is a receptor for oxysterols. The most potent activator is 7α,25-dihydroxycholesterol (7α,25-OHC), with other oxysterols exhibiting varying affinities for the receptor. Oxysterol gradients drive chemotaxis, attracting the EBI2-expressing cells to locations of high ligand concentration. The GPR183 gene was identified due to its upregulation during Epstein-Barr virus infection of the Burkitt's lymphoma cell line BL41, hence its name: EBI2.
Tissue distribution and function:
B cells EBI2 helps B cell homing to the outer follicular region within a lymph node. Approximately three hours following B cell exposure to plasma-soluble antigen, EBI2 is upregulated via the transcription factor BRRF1. More surface receptors binding the oxysterol ligand results in cellular migration up the gradient, to the outer follicular region. The reason for this early migration is still unknown; however, because soluble antigen enters lymph nodes via afferent lymphatic vasculature, near the outer region of the follicle, it is hypothesized that B cell movement is motivated by increased exposure to the antigen. Six hours after antigen exposure, EBI2 is downregulated to low levels, permitting the B cells to migrate to the border between the B cell and T cell zones of the lymph node. Here, B cells interact with T helper cells previously activated by antigen-presenting dendritic cells. Though CCR7 is the dominant receptor in this stage of B cell migration, EBI2 is still critical, the low expression of which contributes to organized interaction along the T zone border that maximizes interactions with T cells. Following B cell receptor and CD40 co-stimulation, EBI2 is again upregulated. The B cells thus move back toward the outer follicular space, where they begin cell division. At this point, a B cell either downregulates EBI2 expression in order to enter a germinal center or maintains EBI2 expression and remains in outer follicular regions. In germinal centers (GC), B cells downregulate the receptor via the transcriptional repressor B-cell lymphoma-6 (BCL6) and, following somatic hypermutation, differentiate into long-lived antibody-secreting plasma cells or memory B cells. EBI2 must turn off to move B cells to the germinal center from the periphery, and must turn on for B cells to exit the germinal center and re-enter the periphery. Meanwhile, those remaining outside the follicle differentiate into plasmablasts, eventually becoming short-lived plasma cells. 
Thus, EBI2 expression modulates B cell differentiation by directing cells toward or away from germinal centers.
Tissue distribution and function:
T cells EBI2 also regulates intra-lymphatic T cell migration. Mature T helper cells upregulate EBI2 to follow the oxysterol gradient, migrating to the outer edges of the T cell zone to receive signals from antigen-presenting dendritic cells arriving from the tissues. This migration is critical as the resulting T cell-DC interaction induces T helper cell differentiation into T follicular helper cells. In concert with upregulation of CXCR5, the downregulation of EBI2 helps T follicular helper cells move toward the follicle center to help B cells undergoing affinity maturation in germinal centers.
Tissue distribution and function:
Dendritic cells EBI2 expression on CD4+ dendritic cells is a key initiator of immune response. Antigen-activated dendritic cells are driven to lymph node bridging channels via the oxysterol-EBI2 pathway. In the spleen, bridging channels connect the marginal zone, where dendritic cells pick up plasma-soluble antigen, to the T cell zone, where they present antigen to T helper cells. This results in T cell proliferation and differentiation. Localization to bridging channels is also associated with dendritic cell reception of lymphotoxin beta signaling, which augments their blood pathogen uptake, resulting in an increase in T cell responses.
Ligand:
Oxysterols bind to and activate EBI2. The highest affinity oxysterol ligand is 7α,25-dihydroxycholesterol (7α,25-OHC), formed by enzymatic oxidation of cholesterol by the hydroxylases CH25H and CYP7B1. 7α,25-OHC is concentrated in bridging channels and the outer perimeter of B cell follicles. Conversely, it is absent from follicle centers, germinal centers, and the T zone. The enzymes responsible for ligand biosynthesis, CH25H and CYP7B1, are unsurprisingly abundant in lymphoid stromal cells. On the other hand, the enzyme that deactivates the ligand, HSD3B7, is highly concentrated in areas where the ligand concentration should be lowest: the T zone. Though it is not a cytokine, the EBI2 ligand acts much like a chemokine in that its gradient drives cellular migration.
Virus infection:
GPR183 plays a crucial role in driving inflammation in the lungs during severe viral respiratory infections such as influenza A virus (IAV) and SARS-CoV-2. Studies using preclinical murine models of infection revealed that the activation of GPR183 by oxidized cholesterols leads to the recruitment of monocytes/macrophages and the production of inflammatory cytokines in the lungs.
**Singularity spectrum**
Singularity spectrum:
The singularity spectrum is a function used in multifractal analysis to describe the fractal dimension of the subset of points of a function that share the same Hölder exponent. Intuitively, the singularity spectrum gives a value for how "fractal" a set of points in a function is.
More formally, the singularity spectrum D(α) of a function f(x) is defined as: D(α) = D_F{x : α(x) = α}, where α(x) is the Hölder exponent of f(x) at the point x, and D_F{⋅} denotes the Hausdorff dimension of a point set.
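As a concrete (textbook) illustration of this definition, the binomial multiplicative measure on [0,1] has a singularity spectrum that can be computed in closed form via the Legendre transform of τ(q) = −log₂(pᵠ + (1−p)ᵠ). This example is a standard multifractal benchmark, not something taken from the article itself:

```python
import math

def binomial_spectrum(p: float, q: float):
    """Return (alpha, f(alpha)) of the binomial measure at moment order q.

    tau(q) = -log2(p^q + (1-p)^q); alpha = tau'(q); f(alpha) = q*alpha - tau(q).
    The binomial cascade is a textbook multifractal, used here only to
    illustrate the Legendre-transform route to the spectrum D(alpha).
    """
    s = p**q + (1 - p)**q
    tau = -math.log2(s)
    # analytic derivative of tau(q)
    alpha = -(p**q * math.log(p) + (1 - p)**q * math.log(1 - p)) / (s * math.log(2))
    f = q * alpha - tau
    return alpha, f

# At q = 0 the spectrum peaks at the dimension of the support (1 for [0,1]):
alpha0, f0 = binomial_spectrum(0.3, 0.0)
print(round(f0, 6))  # → 1.0
```

For large |q| the returned α approaches the extreme Hölder exponents −log₂(max(p, 1−p)) and −log₂(min(p, 1−p)), tracing out the full concave D(α) curve.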
**Curing salt**
Curing salt:
Curing salt is used in meat processing to generate a pinkish shade and to extend shelf life. It is both a color agent and a means of food preservation, as it prevents or slows spoilage by bacteria or fungi. Curing salts are generally a mixture of sodium chloride (table salt) and sodium nitrite, and are used for pickling meats as part of the process to make sausage or cured meat such as ham, bacon, pastrami, corned beef, etc. Though it has been suggested that the reason for using nitrite-containing curing salt is to prevent botulism, a 2018 study by the British Meat Producers Association determined that legally permitted levels of nitrite have no effect on the growth of Clostridium botulinum, the bacterium that causes botulism. This is in line with the opinion of the UK's Advisory Committee on the Microbiological Safety of Food that nitrites are not required to prevent C. botulinum growth or to extend shelf life (see also Sodium Nitrite: Inhibition of microbial growth).
Curing salt:
Many curing salts also contain red dye that makes them pink to prevent them from being confused with common table salt. Thus curing salt is sometimes referred to as "pink salt". Curing salts are not to be confused with Himalayan pink salt, a halite which is 97–99% sodium chloride (table salt) with trace elements that give it a pink color.
Types:
There are many types of curing salts often specific to a country or region.
Prague Powder #1 One of the most common curing salts. It is also called Insta Cure #1 or Pink curing salt #1. It contains 6.25% sodium nitrite and 93.75% table salt. It is recommended for meats that require short cures and will be cooked and eaten relatively quickly. Sodium nitrite provides the characteristic flavor and color associated with curing.
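The stated 6.25% nitrite content makes dosing arithmetic straightforward. A minimal sketch, assuming a hypothetical ingoing-nitrite target of 156 ppm (a common regulatory figure for comminuted meat, not a value from this article):

```python
def cure_grams(meat_grams: float, target_ppm: float, nitrite_fraction: float = 0.0625) -> float:
    """Grams of curing salt needed so the meat carries target_ppm of sodium nitrite.

    nitrite_fraction defaults to 6.25%, the Prague Powder #1 composition
    stated in the text; the 156 ppm target used below is an illustrative
    assumption, not from the article.
    """
    nitrite_grams = meat_grams * target_ppm / 1_000_000
    return nitrite_grams / nitrite_fraction

# 1 kg of meat at 156 ppm needs about 2.5 g of Prague Powder #1
print(round(cure_grams(1000, 156), 2))  # → 2.5
```

This recovers the widely quoted rule of thumb of roughly 2.5 g of cure #1 per kilogram of meat.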
Types:
Prague Powder #2 Also called Pink curing salt #2. It contains 6.25% sodium nitrite, 4% sodium nitrate, and 89.75% table salt. The sodium nitrate found in Prague powder #2 gradually breaks down over time into sodium nitrite, and by the time a dry cured sausage is ready to be eaten, no sodium nitrate should be left. For this reason it is recommended for meats that require long (weeks to months) cures, like hard salami and country ham.
Types:
Saltpetre Another name for potassium nitrate (KNO3), saltpetre, also called saltpeter or nitrate of potash, has been a common ingredient of some types of salted meat for centuries, but its use has been mostly discontinued due to inconsistent results compared to nitrite compounds (e.g. KNO2, NaNO2). Even so, saltpetre is still used in some food applications, such as some charcuterie products. It should not be confused with Chile saltpetre or Peru saltpetre, which is sodium nitrate (NaNO3).
**Luteoskyrin**
Luteoskyrin:
Luteoskyrin is a carcinogenic mycotoxin with the molecular formula C30H22O12 which is produced by the mold Penicillium islandicum. Luteoskyrin has strong cytotoxic effects. Luteoskyrin can cause yellow rice disease.
**Phosphorylation**
Phosphorylation:
In biochemistry, phosphorylation is the attachment of a phosphate group to a molecule or an ion. This process and its inverse, dephosphorylation, are common in biology. Protein phosphorylation often activates (or deactivates) many enzymes.
During respiration and photosynthesis:
Phosphorylation is essential to the processes of both anaerobic and aerobic respiration, which involve the production of adenosine triphosphate (ATP), the "high-energy" exchange medium in the cell. During aerobic respiration, ATP is synthesized in the mitochondrion by addition of a third phosphate group to adenosine diphosphate (ADP) in a process referred to as oxidative phosphorylation. ATP is also synthesized by substrate-level phosphorylation during glycolysis. ATP is synthesized at the expense of solar energy by photophosphorylation in the chloroplasts of plant cells.
Phosphorylation of glucose:
Glucose metabolism Phosphorylation of sugars is often the first stage in their catabolism. Phosphorylation allows cells to accumulate sugars because the phosphate group prevents the molecules from diffusing back across their transporter. Phosphorylation of glucose is a key reaction in sugar metabolism. The chemical equation for the conversion of D-glucose to D-glucose-6-phosphate in the first step of glycolysis is given by: D-glucose + ATP → D-glucose 6-phosphate + ADP, ΔG° = −16.7 kJ/mol (° indicates measurement at standard conditions). Glycolysis Glycolysis is the process by which glucose is degraded into two molecules of pyruvate through a series of enzyme-catalyzed steps. It occurs in ten steps, and phosphorylation is a required step to attain the end products. Phosphorylation initiates the reaction in step 1 of the preparatory phase (first half of glycolysis) and in step 6 of the payoff phase (second half of glycolysis). Glucose, by nature, is a small molecule with the ability to diffuse in and out of the cell. By phosphorylating glucose (adding a phosphoryl group to create a negatively charged phosphate group), glucose is converted to glucose-6-phosphate, which is trapped within the cell because the cell membrane is negatively charged. This reaction is catalyzed by hexokinase, an enzyme that phosphorylates many six-membered ring structures. Phosphorylation also takes place in step 3, where fructose-6-phosphate is converted to fructose 1,6-bisphosphate; this reaction is catalyzed by phosphofructokinase.
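The standard free-energy change quoted above can be converted to an in-cell value with ΔG = ΔG° + RT ln Q. A sketch using the ΔG° = −16.7 kJ/mol from the text; the metabolite concentrations are illustrative assumptions, not measured values:

```python
import math

R = 8.314    # gas constant, J/(mol·K)
T = 310.15   # K, body temperature

def delta_g(dg_standard_kj: float, q: float) -> float:
    """Actual free-energy change in kJ/mol: ΔG = ΔG° + RT ln Q."""
    return dg_standard_kj + R * T * math.log(q) / 1000

# Illustrative (assumed) cytosolic concentrations in mol/L:
glc, atp, g6p, adp = 5e-3, 3e-3, 0.1e-3, 0.4e-3
q = (g6p * adp) / (glc * atp)  # reaction quotient for Glc + ATP -> G6P + ADP
print(round(delta_g(-16.7, q), 1))  # → -32.0
```

With product concentrations held low by downstream glycolysis, the reaction is even more exergonic than the standard value suggests, which is why this step is effectively irreversible in the cell.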
Phosphorylation of glucose:
While phosphorylation is performed by ATPs during preparatory steps, phosphorylation during payoff phase is maintained by inorganic phosphate. Each molecule of glyceraldehyde 3-phosphate is phosphorylated to form 1,3-bisphosphoglycerate. This reaction is catalyzed by glyceraldehyde-3-phosphate dehydrogenase (GAPDH). The cascade effect of phosphorylation eventually causes instability and allows enzymes to open the carbon bonds in glucose.
Phosphorylation is a vital component of glycolysis, as it aids transport, control, and efficiency.
Phosphorylation of glucose:
Glycogen synthesis Glycogen is a long-term store of glucose produced by the cells of the liver. In the liver, the synthesis of glycogen is directly correlated with blood glucose concentration. High blood glucose concentration causes an increase in intracellular levels of glucose 6-phosphate in the liver, skeletal muscle, and fat (adipose) tissue. Glucose 6-phosphate has a role in regulating glycogen synthase.
Phosphorylation of glucose:
High blood glucose releases insulin, stimulating the translocation of specific glucose transporters to the cell membrane; glucose is phosphorylated to glucose 6-phosphate during transport across the membrane by ATP-D-glucose 6-phosphotransferase and non-specific hexokinase (ATP-D-hexose 6-phosphotransferase). Liver cells are freely permeable to glucose, and the initial rate of phosphorylation of glucose is the rate-limiting step in glucose metabolism by the liver. The liver's crucial role in controlling blood sugar concentrations by breaking down glucose into carbon dioxide and glycogen is characterized by the negative Gibbs free energy (ΔG) value, which indicates that this is a point of regulation. The hexokinase enzyme has a low Michaelis constant (Km), indicating a high affinity for glucose, so this initial phosphorylation can proceed even when blood glucose levels are very low.
Phosphorylation of glucose:
The phosphorylation of glucose can be enhanced by the binding of fructose 6-phosphate (F6P), and lessened by the binding of fructose 1-phosphate (F1P). Fructose consumed in the diet is converted to F1P in the liver. This negates the action of F6P on glucokinase, which ultimately favors the forward reaction. The capacity of liver cells to phosphorylate fructose exceeds their capacity to metabolize fructose-1-phosphate. Consuming excess fructose ultimately results in an imbalance in liver metabolism, which indirectly exhausts the liver cell's supply of ATP. Allosteric activation by glucose 6-phosphate, which acts as an effector, stimulates glycogen synthase, and glucose 6-phosphate may inhibit the phosphorylation of glycogen synthase by cyclic AMP-stimulated protein kinase.
Phosphorylation of glucose:
Other processes Phosphorylation of glucose is imperative in processes within the body. For example, phosphorylating glucose is necessary for insulin-dependent mechanistic target of rapamycin pathway activity within the heart. This further suggests a link between intermediary metabolism and cardiac growth.
Protein phosphorylation:
Protein phosphorylation is the most abundant post-translational modification in eukaryotes. Phosphorylation can occur on serine, threonine and tyrosine side chains (often called 'residues') through phosphoester bond formation, on histidine, lysine and arginine through phosphoramidate bonds, and on aspartic acid and glutamic acid through mixed anhydride linkages. Recent evidence confirms widespread histidine phosphorylation at both the 1 and 3 N-atoms of the imidazole ring. Recent work demonstrates widespread human protein phosphorylation on multiple non-canonical amino acids, including motifs containing phosphorylated histidine, aspartate, glutamate, cysteine, arginine and lysine in HeLa cell extracts. However, due to the chemical lability of these phosphorylated residues, and in marked contrast to Ser, Thr and Tyr phosphorylation, the analysis of phosphorylated histidine (and other non-canonical amino acids) using standard biochemical and mass spectrometric approaches is much more challenging, and special procedures and separation techniques are required for their preservation alongside classical Ser, Thr and Tyr phosphorylation. The prominent role of protein phosphorylation in biochemistry is illustrated by the huge body of studies published on the subject (as of March 2015, the MEDLINE database returns over 240,000 articles, mostly on protein phosphorylation).
**N00280**
N00280:
The n00280 RNA was identified by RNA deep sequencing of Clostridioides difficile 630 where it is located within gene CD0749 (putative DNA helicase, UvrD/REP type) in the same direction as the gene. A strong transcription start site for this RNA was experimentally confirmed, in the 3’ region of gene CD0749. The 3’ end of the sRNA has not been confirmed and its length was arbitrarily set to 100nt. Only sequences from order Clostridiales were included in the family, mostly because the starting sequence belonged to this order and the large majority of sequences found by the different RFAM search iterations belonged to this order as well. Associated e-values did not exceed 1e-09. As the original RNA is part of a coding sequence, it is possible that homologies were detected due to selection pressure for the CDS rather than for a sRNA. R-scape identified only two significantly covarying pairs present in the structure (4-100 and 7-97). Thus evidence for structure is weak.
**Dapivirine**
Dapivirine:
Dapivirine is a non-nucleoside reverse transcriptase inhibitor developed at Janssen Therapeutics (formerly Tibotec Therapeutics). The International Partnership for Microbicides has held exclusive worldwide rights to dapivirine since 2014, building upon a 2004 royalty-free license to develop dapivirine-based microbicides for women in resource-poor countries. A monthly intravaginal ring containing dapivirine has been developed as a way of preventing infection by human immunodeficiency virus in women. Two phase 3 clinical trials of intravaginal dapivirine rings for HIV prevention were completed in 2015 and results were announced at the 2016 Conference on Retroviruses and Opportunistic Infections. The ASPIRE Study (MTN-020) reported a 27% reduction in HIV-1 acquisition (95% CI 12-57%, p=0.007), with a trend toward greater protection in women over age 21 and no significant protection for women under age 21. The Ring Study (IPM-027) reported a 31% reduction in HIV acquisition (95% CI 0.9-51.5%, p=0.040) also with a trend toward greater efficacy in women over age 21. In both trials, more than 80% of returned rings showed signs of drug depletion indicating at least some use, and more than 80% of blood samples from participants in the active arm had levels of dapivirine consistent with at least 8 hours of continuous use preceding the blood test. Neither trial could evaluate whether the product was used consistently between study visits.
Dapivirine:
As of December 2019, it became the first of its kind to be submitted for regulatory approval. The ring is currently under review by the European Medicines Agency with an opinion expected in 2020. Further regulatory submissions are planned to the US Food and Drug Administration, the South African Health Products Regulatory Authority, and other regulators in Africa where women face the highest risk for HIV.
**Scancode**
Scancode:
A scancode (or scan code) is the data that most computer keyboards send to a computer to report which keys have been pressed. A number, or sequence of numbers, is assigned to each key on the keyboard.
Variants:
Mapping key positions by row and column requires less complex computer hardware; therefore, in the past, using software or firmware to translate the scancodes to text characters was less expensive than wiring the keyboard by text character. This cost difference is not as profound as it used to be. However, many types of computers still use their traditional scancodes to maintain backward compatibility.
Variants:
Some keyboard standards include a scancode for each key being pressed and a different one for each key being released. In addition, many keyboard standards (for example, IBM PC compatible standards) allow the keyboard itself to generate "typematic" repeating keys by having the keyboard itself generate the pressed-key scancode repeatedly while the key is held down, with the release scancode sent once when the key is released.
Scancode sets:
On some operating systems one may discover a key's downpress scancode by holding the key down while the computer is booting. With luck, the scancode (or some part of it) will be specified in the resulting "stuck key" error message. [Note: On Windows 7 only one byte of the scancode appears.] PC compatibles Scancodes on IBM PC compatible computer keyboards are sets of 1 to 3 bytes which are sent by the keyboard. Most character keys have a single byte scancode; keys that perform special functions have 2-byte or 3-byte scancodes, usually beginning with the byte (in hexadecimal) E0, E1, or E2. In addition, a few keys send longer scancodes, effectively emulating a series of keys to make it easier for different types of software to process.
Scancode sets:
PC keyboards since the PS/2 keyboard support up to three scancode sets. The most commonly encountered are the "XT" ("set 1") scancodes, based on the 83-key keyboard used by the IBM PC XT and earlier. These mostly consist of a single byte; the low 7 bits identify the key, and the most significant bit is clear for a key press or set for a key release. Some additional keys have an E0 (or rarely, E1 or E2) prefix. These were initially assigned so that ignoring the E0 prefix (which is in the key-up range and thus would have no effect on an operating system that did not understand them) would produce reasonable results. For example, the numeric keypad's Enter key produces a scancode of E0 1C, which corresponds to the Return key's scancode of 1C.
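The set 1 encoding just described (low 7 bits identify the key, high bit marks a release, E0 flags an extended key) is straightforward to decode. This is an illustrative sketch, not production keyboard-driver code; E1/E2 are treated like E0 for simplicity:

```python
def decode_set1(bytes_in):
    """Decode a stream of XT ("set 1") scancodes into (key, pressed, extended) events.

    Per the text: the low 7 bits identify the key, the high bit marks a
    key release, and an E0 (or E1/E2) prefix flags extended keys such as
    the numeric keypad's Enter.
    """
    events, extended = [], False
    for b in bytes_in:
        if b in (0xE0, 0xE1, 0xE2):
            extended = True      # prefix applies to the next data byte
            continue
        key = b & 0x7F           # low 7 bits: key number
        pressed = not (b & 0x80)  # high bit set means release
        events.append((key, pressed, extended))
        extended = False
    return events

# Keypad Enter press (E0 1C) and release (E0 9C), then plain Return (1C / 9C):
print(decode_set1([0xE0, 0x1C, 0xE0, 0x9C, 0x1C, 0x9C]))
# → [(28, True, True), (28, False, True), (28, True, False), (28, False, False)]
```

Note how both keys map to key number 28 (1C hex) and are distinguished only by the extended flag, exactly as the E0-prefix design intends.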
Scancode sets:
The IBM 3270 PC introduced its own set of scancodes ("set 3"), with a different key numbering and where a key release is indicated by an F0 prefix. For backward compatibility, the 3270 PC translated these to XT (set 1) scancodes using an add-on card and a BIOS extension. This set is used by Linux by default when it detects a PS/2 keyboard that can properly support scan code set 3. The IBM PC AT introduced the "AT" ("set 2") scancodes. On the 84-key AT keyboard these were largely a subset of set 3, with some differences caused by the revised layout (for example, the position and scancodes of the function keys changed). Keys added since the PC AT often have different scancodes in set 2 and set 3, and in set 2 frequently have an E0 or E1 prefix. Again, key release is indicated by an F0 prefix.
Scancode sets:
For computers since the IBM PC AT, the keyboard controller on the motherboard translates AT (set 2) scancodes into XT (set 1) scancodes in so-called translation mode. This translation can be disabled in pass-through mode, allowing the raw scancodes to be seen. Therefore, whether a software developer will encounter AT scancodes or XT scancodes on a modern PC-compatible depends on how the keyboard is being accessed.
Scancode sets:
A compliant PS/2 keyboard can be told to send scancodes in set 1, 2 or 3.
USB USB keyboards use a new set of scancodes, mostly specified in the USB standard. All computers that recognize USB keyboards recognize these new scancodes.
**Microfluidics**
Microfluidics:
Microfluidics refers to systems that manipulate small amounts of fluids (10−9 to 10−18 liters) using small channels with sizes from tens to hundreds of micrometres. It is a multidisciplinary field that involves molecular analysis, biodefence, molecular biology, and microelectronics. It has practical applications in the design of systems that process low volumes of fluids to achieve multiplexing, automation, and high-throughput screening. Microfluidics emerged in the beginning of the 1980s and is used in the development of inkjet printheads, DNA chips, lab-on-a-chip technology, micro-propulsion, and micro-thermal technologies.
Microfluidics:
Typically, "micro" means one or more of the following features: small volumes (μL, nL, pL, fL), small size, low energy consumption, and microdomain effects. Typically microfluidic systems transport, mix, separate, or otherwise process fluids. Various applications rely on passive fluid control using capillary forces, in the form of capillary flow modifying elements, akin to flow resistors and flow accelerators. In some applications, external actuation means are additionally used for a directed transport of the media. Examples are rotary drives applying centrifugal forces for the fluid transport on the passive chips. Active microfluidics refers to the defined manipulation of the working fluid by active (micro) components such as micropumps or microvalves. Micropumps supply fluids in a continuous manner or are used for dosing. Microvalves determine the flow direction or the mode of movement of pumped liquids. Often, processes normally carried out in a lab are miniaturised on a single chip, which enhances efficiency and mobility, and reduces sample and reagent volumes.
Microscale behaviour of fluids:
The behaviour of fluids at the microscale can differ from "macrofluidic" behaviour in that factors such as surface tension, energy dissipation, and fluidic resistance start to dominate the system. Microfluidics studies how these behaviours change, and how they can be worked around, or exploited for new uses. At small scales (channel size of around 100 nanometers to 500 micrometers) some interesting and sometimes unintuitive properties appear. In particular, the Reynolds number (which compares the effect of the momentum of a fluid to the effect of viscosity) can become very low. A key consequence is co-flowing fluids do not necessarily mix in the traditional sense, as flow becomes laminar rather than turbulent; molecular transport between them must often be through diffusion. High specificity of chemical and physical properties (concentration, pH, temperature, shear force, etc.) can also be ensured resulting in more uniform reaction conditions and higher grade products in single and multi-step reactions.
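The low-Reynolds-number claim is easy to check against the definition Re = ρvL/μ. The channel size and flow speed below are illustrative assumptions typical of microfluidic devices:

```python
def reynolds(density, velocity, length, viscosity):
    """Re = rho * v * L / mu; compares inertial to viscous effects."""
    return density * velocity * length / viscosity

# Water (rho = 1000 kg/m^3, mu = 1e-3 Pa·s) moving at 1 mm/s through a
# 100 µm channel -- illustrative values, not from the article:
re = reynolds(1000, 1e-3, 100e-6, 1e-3)
print(round(re, 6))  # → 0.1
```

A Reynolds number of order 0.1 is several orders of magnitude below the ~2000 threshold where pipe flow transitions toward turbulence, so flow in such a channel is firmly laminar and mixing is diffusion-limited.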
Various kinds of microfluidic flows:
Microfluidic flows need only be constrained by a geometrical length scale; the modalities and methods used to achieve such a geometrical constraint are highly dependent on the targeted application. Traditionally, microfluidic flows have been generated inside closed channels with the channel cross-section on the order of 10 μm x 10 μm. Each of these methods has its own associated techniques to maintain robust fluid flow which have matured over several years.
Various kinds of microfluidic flows:
Open microfluidics The behavior of fluids and their control in open microchannels was pioneered around 2005 and applied in air-to-liquid sample collection and chromatography. In open microfluidics, at least one boundary of the system is removed, exposing the fluid to air or another interface (i.e. liquid). Advantages of open microfluidics include accessibility to the flowing liquid for intervention, larger liquid-gas surface area, and minimized bubble formation. Another advantage of open microfluidics is the ability to integrate open systems with surface-tension driven fluid flow, which eliminates the need for external pumping methods such as peristaltic or syringe pumps. Open microfluidic devices are also easy and inexpensive to fabricate by milling, thermoforming, and hot embossing. In addition, open microfluidics eliminates the need to glue or bond a cover for devices, which could be detrimental to capillary flows. Examples of open microfluidics include open-channel microfluidics, rail-based microfluidics, paper-based, and thread-based microfluidics. Disadvantages to open systems include susceptibility to evaporation, contamination, and limited flow rate.
Various kinds of microfluidic flows:
Continuous-flow microfluidics Continuous flow microfluidics rely on the control of a steady state liquid flow through narrow channels or porous media predominantly by accelerating or hindering fluid flow in capillary elements. In paper based microfluidics, capillary elements can be achieved through the simple variation of section geometry. In general, the actuation of liquid flow is implemented either by external pressure sources, external mechanical pumps, integrated mechanical micropumps, or by combinations of capillary forces and electrokinetic mechanisms. Continuous-flow microfluidic operation is the mainstream approach because it is easy to implement and less sensitive to protein fouling problems. Continuous-flow devices are adequate for many well-defined and simple biochemical applications, and for certain tasks such as chemical separation, but they are less suitable for tasks requiring a high degree of flexibility or fluid manipulations. These closed-channel systems are inherently difficult to integrate and scale because the parameters that govern flow field vary along the flow path making the fluid flow at any one location dependent on the properties of the entire system. Permanently etched microstructures also lead to limited reconfigurability and poor fault tolerance capability. Computer-aided design automation approaches for continuous-flow microfluidics have been proposed in recent years to alleviate the design effort and to solve the scalability problems.
Various kinds of microfluidic flows:
Process monitoring capabilities in continuous-flow systems can be achieved with highly sensitive microfluidic flow sensors based on MEMS technology, which offers resolutions down to the nanoliter range.
Various kinds of microfluidic flows:
Droplet-based microfluidics Droplet-based microfluidics is a subcategory of microfluidics in contrast with continuous microfluidics; droplet-based microfluidics manipulates discrete volumes of fluids in immiscible phases with low Reynolds number and laminar flow regimes. Interest in droplet-based microfluidics systems has been growing substantially in past decades. Microdroplets allow for handling miniature volumes (μl to fl) of fluids conveniently, provide better mixing, encapsulation, sorting, and sensing, and suit high throughput experiments. Exploiting the benefits of droplet-based microfluidics efficiently requires a deep understanding of droplet generation to perform various logical operations such as droplet manipulation, droplet sorting, droplet merging, and droplet breakup.
Various kinds of microfluidic flows:
Digital microfluidics Alternatives to the above closed-channel continuous-flow systems include novel open structures, where discrete, independently controllable droplets are manipulated on a substrate using electrowetting. Following the analogy of digital microelectronics, this approach is referred to as digital microfluidics. Le Pesant et al. pioneered the use of electrocapillary forces to move droplets on a digital track. The "fluid transistor" pioneered by Cytonix also played a role. The technology was subsequently commercialised by Duke University. By using discrete unit-volume droplets, a microfluidic function can be reduced to a set of repeated basic operations, i.e., moving one unit of fluid over one unit of distance. This "digitisation" method facilitates the use of a hierarchical and cell-based approach for microfluidic biochip design. Therefore, digital microfluidics offers a flexible and scalable system architecture as well as high fault-tolerance capability. Moreover, because each droplet can be controlled independently, these systems also have dynamic reconfigurability, whereby groups of unit cells in a microfluidic array can be reconfigured to change their functionality during the concurrent execution of a set of bioassays. Droplet manipulation in confined microfluidic channels, where the droplets are not independently controlled, should not be confused with digital microfluidics. One common actuation method for digital microfluidics is electrowetting-on-dielectric (EWOD). Many lab-on-a-chip applications have been demonstrated within the digital microfluidics paradigm using electrowetting. However, recently other techniques for droplet manipulation have also been demonstrated using magnetic force, surface acoustic waves, optoelectrowetting, mechanical actuation, etc.
Various kinds of microfluidic flows:
Paper-based microfluidics Paper-based microfluidic devices fill a growing niche for portable, cheap, and user-friendly medical diagnostic systems. Paper based microfluidics rely on the phenomenon of capillary penetration in porous media. To tune fluid penetration in porous substrates such as paper in two and three dimensions, the pore structure, wettability and geometry of the microfluidic devices can be controlled while the viscosity and evaporation rate of the liquid play a further significant role. Many such devices feature hydrophobic barriers on hydrophilic paper that passively transport aqueous solutions to outlets where biological reactions take place. Paper-based microfluidics are considered as portable point-of-care biosensors used in a remote setting where advanced medical diagnostic tools are not accessible. Current applications include portable glucose detection and environmental testing, with hopes of reaching areas that lack advanced medical diagnostic tools.
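Capillary penetration into a porous strip is commonly modeled with the classical Lucas-Washburn equation, L(t) = sqrt(γ r cosθ t / (2μ)). This sketch applies that model; the effective pore radius and contact angle are illustrative assumptions, since real paper deviates from the ideal-capillary picture:

```python
import math

def washburn_length(gamma, r, theta_deg, mu, t):
    """Lucas-Washburn wicking distance: L = sqrt(gamma * r * cos(theta) * t / (2 * mu)).

    gamma: surface tension (N/m), r: effective pore radius (m),
    theta_deg: contact angle (degrees), mu: viscosity (Pa*s), t: time (s).
    A classical capillary-penetration model often applied to paper strips.
    """
    return math.sqrt(gamma * r * math.cos(math.radians(theta_deg)) * t / (2 * mu))

# Water (gamma = 0.072 N/m, mu = 1e-3 Pa*s) in ~5 um pores, fully wetting
# (theta = 0) -- illustrative assumed values:
L = washburn_length(0.072, 5e-6, 0.0, 1e-3, 10.0)  # wicking distance after 10 s
print(round(L * 100, 1), "cm")  # → 4.2 cm
```

The square-root time dependence is why flow in paper devices slows as the wetted front advances, and why evaporation matters more for longer strips.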
Various kinds of microfluidic flows:
Particle detection microfluidics One application area that has seen significant academic effort and some commercial effort is in the area of particle detection in fluids. Particle detection of small fluid-borne particles down to about 1 μm in diameter is typically done using a Coulter counter, in which a weakly-conducting fluid such as saline water is passed through a small (~100 μm diameter) pore, generating an electrical signal that is directly proportional to the ratio of the particle volume to the pore volume. The physics behind this is relatively simple, described in a classic paper by DeBlois and Bean, and the implementation was first described in Coulter's original patent. This is the method used to e.g. size and count erythrocytes (red blood cells) as well as leukocytes (white blood cells) for standard blood analysis. The generic term for this method is resistive pulse sensing (RPS); Coulter counting is a trademark term. However, the RPS method does not work well for particles below 1 μm diameter, as the signal-to-noise ratio falls below the reliably detectable limit, set mostly by the size of the pore through which the analyte passes and the input noise of the first-stage amplifier. The limit on the pore size in traditional RPS Coulter counters is set by the method used to make the pores, which, while a trade secret, most likely uses traditional mechanical methods. This is where microfluidics can have an impact: The lithography-based production of microfluidic devices, or more likely the production of reusable molds for making microfluidic devices using a molding process, can achieve sizes much smaller than traditional machining. Critical dimensions down to 1 μm are easily fabricated, and with a bit more effort and expense, feature sizes below 100 nm can be patterned reliably as well.
This enables the inexpensive production of pores integrated in a microfluidic circuit where the pore diameters can reach sizes of order 100 nm, with a concomitant reduction in the minimum particle diameters by several orders of magnitude.
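The cubic scaling behind the DeBlois-Bean result shows why shrinking the pore pays off so quickly. This sketch uses the leading-order small-particle approximation ΔR/R ≈ d³/(D²L), valid for d ≪ D; the pore geometries are illustrative assumptions:

```python
def pulse_fraction(d, D, L):
    """Leading-order resistive pulse amplitude: dR/R ~ d^3 / (D^2 * L).

    Small-particle (d << D) approximation of the DeBlois-Bean result for a
    sphere of diameter d passing through a pore of diameter D and length L.
    """
    return d**3 / (D**2 * L)

# The same 1 um particle in a 100 um pore vs. a 10 um pore
# (pore length taken equal to pore diameter -- an assumed geometry):
big_pore = pulse_fraction(1e-6, 100e-6, 100e-6)
small_pore = pulse_fraction(1e-6, 10e-6, 10e-6)
print(round(small_pore / big_pore))  # → 1000
```

Shrinking the pore by 10x in diameter and length boosts the relative pulse height a thousandfold, which is exactly the leverage microfabricated pores give MRPS devices over mechanically made ones.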
Various kinds of microfluidic flows:
As a result there has been some university-based development of microfluidic particle counting and sizing with the accompanying commercialization of this technology. This method has been termed microfluidic resistive pulse sensing (MRPS).
Microfluidic-assisted magnetophoresis One major area of application for microfluidic devices is the separation and sorting of different fluids or cell types. Recent developments in the microfluidics field have seen the integration of microfluidic devices with magnetophoresis: the migration of particles by a magnetic field. This can be accomplished by sending a fluid containing at least one magnetic component through a microfluidic channel that has a magnet positioned along the length of the channel. This creates a magnetic field inside the microfluidic channel which draws magnetically active substances towards it, effectively separating the magnetic and non-magnetic components of the fluid. This technique can be readily utilized in industrial settings where the fluid at hand already contains magnetically active material. For example, a handful of metallic impurities can find their way into certain consumable liquids, namely milk and other dairy products. Conveniently, in the case of milk, many of these metal contaminants exhibit paramagnetism. Therefore, before packaging, milk can be flowed through channels with magnetic gradients as a means of purifying out the metal contaminants.
Other, more research-oriented applications of microfluidic-assisted magnetophoresis are numerous and are generally targeted towards cell separation. The general way this is accomplished involves several steps. First, a paramagnetic substance (usually micro/nanoparticles or a paramagnetic fluid) needs to be functionalized to target the cell type of interest. This can be accomplished by identifying a transmembrane protein unique to the cell type of interest and subsequently functionalizing magnetic particles with the complementary antigen or antibody. Once the magnetic particles are functionalized, they are dispersed in a cell mixture where they bind only to the cells of interest. The resulting cell/particle mixture can then be flowed through a microfluidic device with a magnetic field to separate the targeted cells from the rest.
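The separation step above can be sketched as a force balance: the magnetophoretic force on a functionalized bead against Stokes drag in the carrier fluid. All numerical values below are illustrative assumptions, not figures from the text:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def magnetophoretic_velocity(radius_m, delta_chi, grad_B2, viscosity):
    """Terminal drift velocity of a magnetizable bead in a field gradient:
    F = V * delta_chi * grad(B^2) / (2 * mu0), balanced against
    Stokes drag 6 * pi * eta * r * v."""
    volume = (4 / 3) * math.pi * radius_m ** 3
    force = volume * delta_chi * grad_B2 / (2 * MU0)
    return force / (6 * math.pi * viscosity * radius_m)

# Assumed example: 1 um-radius bead, susceptibility contrast ~0.1, a
# gradient of B^2 of ~10 T^2/m near a small magnet, in water (1e-3 Pa*s):
v = magnetophoretic_velocity(1e-6, 0.1, 10.0, 1e-3)
```

Drift speeds of tens of micrometres per second, as here, are what make magnetic capture practical within the residence time of a microfluidic channel.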
Conversely, microfluidic-assisted magnetophoresis may be used to facilitate efficient mixing within microdroplets or plugs. To accomplish this, microdroplets are injected with paramagnetic nanoparticles and are flowed through a straight channel which passes through rapidly alternating magnetic fields. This causes the magnetic particles to be quickly pushed from side to side within the droplet and results in the mixing of the microdroplet contents. This eliminates the need for tedious engineering considerations that are necessary for traditional, channel-based droplet mixing. Other research has also shown that the label-free separation of cells may be possible by suspending cells in a paramagnetic fluid and taking advantage of the magneto-Archimedes effect. While this does eliminate the complexity of particle functionalization, more research is needed to fully understand the magneto-Archimedes phenomenon and how it can be used to this end. This is not an exhaustive list of the various applications of microfluidic-assisted magnetophoresis; the above examples merely highlight the versatility of this separation technique in both current and future applications.
Key application areas:
Microfluidic structures include micropneumatic systems, i.e. microsystems for the handling of off-chip fluids (liquid pumps, gas valves, etc.), and microfluidic structures for the on-chip handling of nanoliter (nl) and picoliter (pl) volumes. To date, the most successful commercial application of microfluidics is the inkjet printhead. Additionally, microfluidic manufacturing advances mean that makers can produce the devices in low-cost plastics and automatically verify part quality. Advances in microfluidics technology are revolutionizing molecular biology procedures for enzymatic analysis (e.g., glucose and lactate assays), DNA analysis (e.g., polymerase chain reaction and high-throughput sequencing), proteomics, and chemical synthesis. The basic idea of microfluidic biochips is to integrate assay operations such as detection, as well as sample pre-treatment and sample preparation, on one chip. An emerging application area for biochips is clinical pathology, especially the immediate point-of-care diagnosis of diseases. In addition, microfluidics-based devices, capable of continuous sampling and real-time testing of air/water samples for biochemical toxins and other dangerous pathogens, can serve as an always-on "bio-smoke alarm" for early warning.
Microfluidic technology has led to the creation of powerful tools for biologists to control the complete cellular environment, leading to new questions and discoveries. Many diverse advantages of this technology for microbiology are listed below:
- General single-cell studies, including growth
- Cellular aging: microfluidic devices such as the "mother machine" allow tracking of thousands of individual cells for many generations until they die
- Microenvironmental control: ranging from mechanical environment to chemical environment
- Precise spatiotemporal concentration gradients, by incorporating multiple chemical inputs to a single device
- Force measurements of adherent cells or confined chromosomes: objects trapped in a microfluidic device can be directly manipulated using optical tweezers or other force-generating methods
- Confining cells and exerting controlled forces, by coupling with external force-generation methods such as Stokes flow, optical tweezers, or controlled deformation of the PDMS (polydimethylsiloxane) device
- Electric field integration
- Plant-on-a-chip and plant tissue culture
- Antibiotic resistance: microfluidic devices can be used as heterogeneous environments for microorganisms. In a heterogeneous environment, it is easier for a microorganism to evolve. This can be useful for testing the acceleration of evolution of a microorganism, or for testing the development of antibiotic resistance.
Some of these areas are further elaborated in the sections below. DNA chips (microarrays) Early biochips were based on the idea of a DNA microarray, e.g., the GeneChip DNA array from Affymetrix, which is a piece of glass, plastic or silicon substrate on which pieces of DNA (probes) are affixed in a microscopic array.
Similar to a DNA microarray, a protein array is a miniature array where a multitude of different capture agents, most frequently monoclonal antibodies, are deposited on a chip surface; they are used to determine the presence and/or amount of proteins in biological samples, e.g., blood. A drawback of DNA and protein arrays is that they are neither reconfigurable nor scalable after manufacture. Digital microfluidics has been described as a means for carrying out Digital PCR.
Molecular biology In addition to microarrays, biochips have been designed for two-dimensional electrophoresis, transcriptome analysis, and PCR amplification. Other applications include various electrophoresis and liquid chromatography applications for proteins and DNA, cell separation, in particular, blood cell separation, protein analysis, cell manipulation and analysis including cell viability analysis and microorganism capturing.
Evolutionary biology By combining microfluidics with landscape ecology and nanofluidics, a nano/micro fabricated fluidic landscape can be constructed by building local patches of bacterial habitat and connecting them by dispersal corridors. The resulting landscapes can be used as physical implementations of an adaptive landscape, by generating a spatial mosaic of patches of opportunity distributed in space and time. The patchy nature of these fluidic landscapes allows for the study of adapting bacterial cells in a metapopulation system. The evolutionary ecology of these bacterial systems in these synthetic ecosystems allows for using biophysics to address questions in evolutionary biology.
Cell behavior The ability to create precise and carefully controlled chemoattractant gradients makes microfluidics the ideal tool to study motility, chemotaxis, and the ability to evolve or develop resistance to antibiotics in small populations of microorganisms and in a short period of time. These microorganisms include bacteria and the broad range of organisms that form the marine microbial loop, which is responsible for regulating much of the oceans' biogeochemistry.
Microfluidics has also greatly aided the study of durotaxis by facilitating the creation of durotactic (stiffness) gradients.
Cellular biophysics By rectifying the motion of individual swimming bacteria, microfluidic structures can be used to extract mechanical motion from a population of motile bacterial cells. This way, bacteria-powered rotors can be built.
Optics The merger of microfluidics and optics is typically known as optofluidics. Examples of optofluidic devices are tunable microlens arrays and optofluidic microscopes.
Microfluidic flow enables fast sample throughput and automated imaging of large sample populations, as well as 3D or superresolution capabilities.
Photonics Lab on a Chip (PhLOC) Due to the increase in safety concerns and operating costs of common analytic methods (ICP-MS, ICP-AAS, and ICP-OES), the Photonics Lab on a Chip (PhLOC) is becoming an increasingly popular tool for the analysis of actinides and nitrates in spent nuclear waste. The PhLOC is based on the simultaneous application of Raman and UV-Vis-NIR spectroscopy, which allows for the analysis of more complex mixtures containing several actinides at different oxidation states. Measurements made with these methods have been validated at the bulk level for industrial tests, and are observed to have a much lower variance at the micro-scale. This approach has been found to have molar extinction coefficients (UV-Vis) in line with known literature values over a comparatively large concentration span for 150 μL, via elongation of the measurement channel, and obeys Beer's law at the micro-scale for U(IV). Through the development of a spectrophotometric approach to analyzing spent fuel, an on-line method for measuring reactant quantities is created, increasing the rate at which samples can be analyzed and thus decreasing the size of deviations detectable within reprocessing. Through the application of the PhLOC, the flexibility and safety of operational methods are increased. Since the analysis of spent nuclear fuel involves extremely harsh conditions, the use of disposable and rapidly produced devices (based on castable and/or engravable materials such as PDMS, PMMA, and glass) is advantageous, although material integrity must be considered under specific harsh conditions. Through the use of fiber-optic coupling, the device can be isolated from instrumentation, preventing irradiative damage and minimizing the exposure of lab personnel to potentially harmful radiation, something possible neither at the lab scale nor with the previous standard of analysis.
The shrinkage of the device also allows for lower amounts of analyte to be used, decreasing the amount of waste generated and exposure to hazardous materials. Expansion of the PhLOC to miniaturize research of the full nuclear fuel cycle is currently being evaluated, with steps of the PUREX process successfully demonstrated at the micro-scale. Likewise, the microfluidic technology developed for the analysis of spent nuclear fuel is predicted to expand horizontally to the analysis of other actinides, lanthanides, and transition metals with little to no modification.
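The Beer's-law relationship underlying the UV-Vis measurements above is a one-line calculation, and it shows why elongating the measurement channel helps: a longer optical path raises absorbance for the same concentration. The numerical values below are illustrative assumptions, not data from the text:

```python
def concentration_from_absorbance(absorbance, epsilon_M_cm, path_cm):
    """Beer's law: A = epsilon * l * c, so c = A / (epsilon * l).
    An elongated microfluidic channel increases the path length l,
    and hence sensitivity, without increasing sample volume."""
    return absorbance / (epsilon_M_cm * path_cm)

# Assumed example: absorbance 0.25, molar extinction coefficient
# 50 M^-1 cm^-1, 1 cm elongated measurement channel:
c = concentration_from_absorbance(0.25, 50.0, 1.0)  # mol/L
```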
High Performance Liquid Chromatography (HPLC) HPLC in the field of microfluidics comes in two different forms. Early designs either ran liquid through the HPLC column and then transferred the eluted liquid to microfluidic chips, or attached HPLC columns to the microfluidic chip directly. The early methods had the advantage of easier detection with certain instruments, such as those measuring fluorescence. More recent designs have fully integrated HPLC columns into microfluidic chips. The main advantage of integrating HPLC columns into microfluidic devices is the smaller form factor that can be achieved, which allows additional features to be combined within one microfluidic chip. Integrated chips can also be fabricated from multiple different materials, including glass and polyimide, which are quite different from the standard material, PDMS, used in many droplet-based microfluidic devices. This is an important feature because different applications of HPLC microfluidic chips may call for different pressures; PDMS compares poorly with glass and polyimide for high-pressure uses. The high versatility of HPLC integration ensures robustness by avoiding connections and fittings between the column and chip. The ability to build on such designs in the future allows the field of microfluidics to continue expanding its potential applications.
The potential applications surrounding integrated HPLC columns within microfluidic devices have proven expansive over the last 10–15 years. The integration of such columns allows experiments to be run where materials are in low availability or very expensive, as in the biological analysis of proteins. This reduction in reagent volumes allows for new experiments like single-cell protein analysis, which, due to the size limitations of prior devices, previously came with great difficulty. The coupling of HPLC-chip devices with other spectrometry methods like mass spectrometry allows for enhanced confidence in the identification of desired species, such as proteins. Microfluidic chips have also been created with internal delay-lines that allow for gradient generation to further improve HPLC, which can reduce the need for further separations. Some other practical applications of integrated HPLC chips include the determination of drug presence in a person through their hair and the labeling of peptides through reverse-phase liquid chromatography.
Acoustic droplet ejection (ADE) Acoustic droplet ejection uses a pulse of ultrasound to move low volumes of fluids (typically nanoliters or picoliters) without any physical contact. This technology focuses acoustic energy into a fluid sample to eject droplets as small as a millionth of a millionth of a litre (picoliter = 10−12 litre). ADE technology is a very gentle process, and it can be used to transfer proteins, high molecular weight DNA and live cells without damage or loss of viability. This feature makes the technology suitable for a wide variety of applications including proteomics and cell-based assays.
Fuel cells Microfluidic fuel cells can use laminar flow to separate the fuel and its oxidant to control the interaction of the two fluids without the physical barrier that conventional fuel cells require.
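The laminar-flow condition that lets the fuel and oxidant streams run side by side without a membrane can be checked with the Reynolds number. A minimal sketch, with channel parameters that are illustrative assumptions rather than figures from the text:

```python
def reynolds_number(density, velocity, channel_dim, viscosity):
    """Re = rho * v * L / eta. At Re well below ~2000, flow in a
    microchannel is laminar: two co-flowing streams mix only by
    diffusion across their interface, not by turbulence."""
    return density * velocity * channel_dim / viscosity

# Assumed example: water-like streams (1000 kg/m^3, 1e-3 Pa*s)
# at 1 mm/s in a 100 um channel:
re = reynolds_number(1000.0, 1e-3, 100e-6, 1e-3)
```

Values of order 0.1, as here, are typical of microfluidic channels, which is why the two reactant streams can share a channel without a physical barrier.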
Astrobiology To understand the prospects for life to exist elsewhere in the universe, astrobiologists are interested in measuring the chemical composition of extraplanetary bodies. Because of their small size and wide-ranging functionality, microfluidic devices are uniquely suited for these remote sample analyses. From an extraterrestrial sample, the organic content can be assessed using microchip capillary electrophoresis and selective fluorescent dyes. These devices are capable of detecting amino acids, peptides, fatty acids, and simple aldehydes, ketones, and thiols. These analyses coupled together could allow powerful detection of the key components of life, and hopefully inform our search for functioning extraterrestrial life.
Food Science Microfluidic techniques such as droplet microfluidics, paper microfluidics, and lab-on-a-chip are used in the realm of food science in a variety of categories. Research in nutrition, food processing, and food safety benefits from microfluidic techniques because experiments can be done with fewer reagents. Food processing requires the ability to make foods shelf-stable, for example through emulsions or the addition of preservatives. Techniques such as droplet microfluidics are used to create emulsions that are more controlled and complex than those created by traditional homogenization, due to the precision of droplets that is achievable. Using microfluidics for emulsions is also more energy-efficient than homogenization, in which "only 5% of the supplied energy is used to generate the emulsion, with the rest dissipated as heat". Although these methods have benefits, they currently lack the ability to be produced at the large scale needed for commercialization. Microfluidics are also used in research, as they allow for innovation in food chemistry and food processing. An example in food engineering research is a novel micro-3D-printed device fabricated to research the production of droplets for potential food processing industry use, particularly in work on enhancing emulsions. Paper and droplet microfluidics allow for devices that can detect small amounts of unwanted bacteria or chemicals, making them useful in food safety and analysis. Paper-based microfluidic devices are often referred to as microfluidic paper-based analytical devices (µPADs) and can detect such things as nitrate, preservatives, or antibiotics in meat by a colorimetric reaction that can be read with a smartphone. These methods are being researched because they use fewer reactants and less space and time compared to traditional techniques such as liquid chromatography. µPADs also make home detection tests possible, which is of interest to those with allergies and intolerances.
In addition to paper-based methods, research demonstrates droplet-based microfluidics shows promise in drastically shortening the time necessary to confirm viable bacterial contamination in agricultural waters in the domestic and international food industry.
Future directions Microfluidics for personalized cancer treatment Personalized cancer treatment is an approach tuned to the patient's diagnosis and background. Microfluidic technology offers sensitive detection with higher throughput, as well as reduced time and costs. For personalized cancer treatment, tumor composition and drug sensitivities are very important. A patient's drug response can be predicted based on the status of biomarkers, and the severity and progression of the disease can be predicted based on the atypical presence of specific cells. Drop-qPCR is a droplet microfluidic technology in which droplets are transported in a reusable capillary and alternately flow through two areas maintained at different constant temperatures, with fluorescence detection. It can be efficient, with a low contamination risk, for detecting Her2. A digital droplet-based PCR method can be used to detect KRAS mutations with TaqMan probes, to enhance detection of the mutant gene ratio. In addition, accurate prediction of postoperative disease progression in breast or prostate cancer patients is essential for determining post-surgery treatment. A simple microfluidic chamber, coated with a carefully formulated extracellular matrix mixture, is used for cells obtained from tumor biopsy; after 72 hours of growth, the cells are thoroughly evaluated by imaging. Microfluidics is also suitable for liquid biopsy analysis of circulating tumor cells (CTCs) and non-CTCs. Beads are conjugated to anti-epithelial cell adhesion molecule (EpCAM) antibodies for positive selection in the CTC isolation chip (iCHIP). CTCs can also be detected by using the acidification of the tumor microenvironment and the difference in membrane capacitance. CTCs isolated from blood by a microfluidic device can be cultured on-chip, which can be a method to capture more biological information in a single analysis. For example, it can be used to test the cell survival rate for 40 different drugs or drug combinations.
Tumor-derived extracellular vesicles can be isolated from urine and detected by an integrated double-filtration microfluidic device; they can also be isolated from blood and detected by an electrochemical sensing method with a two-level amplification enzymatic assay. Tumor materials can be used directly for detection through microfluidic devices. To screen primary cells for drugs, it is often necessary to distinguish cancerous cells from non-cancerous cells. A microfluidic chip based on the capacity of cells to pass through small constrictions can sort the cell types, including metastases. Droplet-based microfluidic devices have the potential to screen different drugs or combinations of drugs, directly on the primary tumor sample, with high accuracy. To improve this strategy, a microfluidic program applying drug cocktails in a sequential manner, coupled with fluorescent barcodes, is more efficient. Another advanced strategy is detecting the growth rates of single cells using suspended microchannel resonators, which can predict the drug sensitivities of rare CTCs. Microfluidic devices can also simulate the tumor microenvironment, to help test anticancer drugs. Microfluidic devices with 2D or 3D cell cultures can be used to analyze spheroids for different cancer systems (such as lung cancer and ovarian cancer), and are essential for multiple anti-cancer drug and toxicity tests. This strategy can be improved by increasing the throughput and production of spheroids. For example, one droplet-based microfluidic device for 3D cell culture produces 500 spheroids per chip. These spheroids can be cultured longer in different surroundings for analysis and monitoring. Another advanced technology is organs-on-a-chip, which can be used to simulate several organs to determine drug metabolism and activity based on vessel mimicking, as well as mimicking pH, oxygen levels, and other conditions,
to analyze the relationship between drugs and human organ surroundings. A recent strategy is single-cell chromatin immunoprecipitation (ChIP) sequencing in droplets, which operates by combining droplet-based single-cell RNA sequencing with DNA-barcoded antibodies, making it possible to explore tumor heterogeneity by genotype and phenotype, to select personalized anti-cancer drugs and prevent cancer relapse.
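The digital droplet PCR methods mentioned above quantify targets by counting positive droplets, with a Poisson correction for droplets that happen to receive more than one target molecule. A minimal sketch of that correction; the droplet counts and volume below are illustrative assumptions:

```python
import math

def copies_per_microliter(n_positive, n_total, droplet_volume_nl):
    """Poisson estimate used in digital droplet PCR: with targets
    randomly partitioned, the fraction of negative droplets is
    exp(-lambda), so lambda = -ln(1 - p_positive) copies per droplet."""
    p = n_positive / n_total
    lam = -math.log(1.0 - p)
    return lam / (droplet_volume_nl * 1e-3)  # convert nl to ul

# Assumed example: 2000 positive droplets out of 20000, ~1 nl droplets:
conc = copies_per_microliter(2000, 20000, 1.0)
```

Note that the estimate exceeds the naive count (0.1 copies per droplet) because some positive droplets contain multiple copies.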
Notable people:
Hang Lu, Professor of Chemical and Biomolecular Engineering at the Georgia Institute of Technology | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HiperMAN**
HiperMAN:
HiperMAN (High Performance Radio Metropolitan Area Network) is a standard created by the European Telecommunications Standards Institute (ETSI) Broadband Radio Access Networks (BRAN) group to provide a wireless network communication in the 2–11 GHz bands across Europe and other countries which follow the ETSI standard. HiperMAN is a European alternative to WiMAX (or the IEEE 802.16 standard) and the Korean technology WiBro.
HiperMAN aims principally at providing broadband wireless Internet access while covering a large geographic area. The standardization focuses on broadband solutions optimized for access in frequency bands below 11 GHz (mainly in the 3.5 GHz band). HiperMAN is optimised for packet-switched networks, and supports fixed and nomadic applications, primarily in residential and small-business user environments.
HiperMAN will be an interoperable broadband fixed wireless access system operating at radio frequencies between 2 GHz and 11 GHz. The HiperMAN standard is designed for Fixed Wireless Access provisioning to SMEs and residences using the basic MAC (DLC and CLs) of the IEEE 802.16-2001 standard. It has been developed in very close cooperation with IEEE 802.16, such that the HiperMAN standard and a subset of the IEEE 802.16a-2003 standard will interoperate seamlessly. HiperMAN is capable of supporting ATM, though the main focus is on IP traffic. It offers various service categories, full quality of service, fast connection control management, strong security, fast adaptation of coding, modulation and transmit power to propagation conditions and is capable of non-line-of-sight operation. HiperMAN enables both PTMP and Mesh network configurations. HiperMAN also supports both FDD and TDD frequency allocations and H-FDD terminals. All this is achieved with a minimum number of options to simplify implementation and interoperability.
**JWH-015**
JWH-015:
JWH-015 is a chemical from the naphthoylindole family that acts as a subtype-selective cannabinoid agonist. Its binding affinity for CB2 receptors is 13.8 nM, while its affinity for CB1 is 383 nM, meaning that it binds almost 28 times more strongly to CB2 than to CB1. However, it still displays some CB1 activity, and in some model systems can be very potent and efficacious at activating CB1 receptors; it is therefore not as selective as newer drugs such as JWH-133. It has been shown to possess immunomodulatory effects, and CB2 agonists may be useful in the treatment of pain and inflammation. It was discovered by, and named after, John W. Huffman.
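The "almost 28 times" selectivity quoted above follows directly from the two affinity values in the text, since a lower Ki means stronger binding; as a quick check:

```python
def selectivity_ratio(ki_cb1_nm, ki_cb2_nm):
    """Fold-selectivity for CB2 over CB1, expressed as the ratio of the
    two binding affinities (a lower Ki means stronger binding)."""
    return ki_cb1_nm / ki_cb2_nm

# Ki values from the text: CB1 = 383 nM, CB2 = 13.8 nM
ratio = selectivity_ratio(383.0, 13.8)  # ~27.8-fold CB2-selective
```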
Metabolism:
JWH-015 has been shown in vitro to be metabolized primarily by hydroxylation and N-dealkylation, and also by epoxidation of the naphthalene ring, similar to the metabolic pathways seen for other aminoalkylindole cannabinoids such as WIN 55,212-2. Epoxidation of polycyclic aromatic hydrocarbons (see for example benzo(a)pyrene toxicity) can produce carcinogenic metabolites, although there is no evidence to show that JWH-015 or other aminoalkylindole cannabinoids are actually carcinogenic in vivo. A study published in the British Journal of Cancer shows that JWH-015 may signal certain cancers to shrink through a process called apoptosis.
Legal status:
In the United States, all CB1 receptor agonists of the 3-(1-naphthoyl)indole class, such as JWH-015, are Schedule I controlled substances. As of October 2015, JWH-015 is a controlled substance in China.
**Metonitazene**
Metonitazene:
Metonitazene is an analgesic compound related to etonitazene. It was first reported in 1957, and has been shown to have approximately 100 times the potency of morphine by central routes of administration, but approximately 10 times the potency of morphine when used orally. Its effects are similar to those of other opioids such as fentanyl and heroin, including analgesia, euphoria, and sleepiness. Adverse effects include vomiting and respiratory depression, which can potentially be fatal. Because of its high dependency potential and dangerous adverse effects, it has never been introduced into pharmacotherapy. It is instead commonly used in the illicit manufacture of counterfeit opioid pills such as OxyContin.
Legal status:
In the United States, metonitazene is a Schedule I controlled substance under the Controlled Substances Act.
Metonitazene is not controlled under the 1971 Convention on Psychotropic Substances; however, in many countries possession or intent to sell for human consumption might be prosecuted under several analog acts.
**Cyclopentadienyl allyl palladium**
Cyclopentadienyl allyl palladium:
Cyclopentadienyl allyl palladium is an organopalladium compound with the formula (C5H5)Pd(C3H5). This reddish solid is volatile, with an unpleasant odor, and is soluble in common organic solvents. The molecule consists of a Pd centre sandwiched between a cyclopentadienyl (Cp) ligand and an allyl ligand.
Preparation:
This complex is produced by the reaction of allylpalladium chloride dimer with sodium cyclopentadienide: 2 C5H5Na + (C3H5)2Pd2Cl2 → 2 (C5H5)Pd(C3H5) + 2 NaCl
Structure and reactions:
The 18-electron complex adopts a half-sandwich structure with Cs symmetry, i.e., the molecule has a plane of symmetry. The complex decomposes readily by reductive elimination: C5H5PdC3H5 → Pd(0) + C5H5C3H5. The compound readily reacts with alkyl isocyanides to produce clusters with the approximate formula [Pd(CNR)2]n. It reacts with bulky alkyl phosphines to produce two-coordinate palladium(0) complexes: CpPd(allyl) + 2 PR3 → Pd(PR3)2 + C5H5C3H5. The compound has been used to deposit thin films of metallic palladium by chemical vapor deposition.
**We Are Smarter Than Me**
We Are Smarter Than Me:
We Are Smarter Than Me is a collaborative-writing project using wiki software, whose initial goal was producing a book about decision making processes that use large numbers of people. The first book was published as a printed book, late in 2007, by the publishing conglomerate Pearson Education. Along with Pearson, the project's four core sponsors include research institutes of the MIT Sloan School of Management and the Wharton School of the University of Pennsylvania.
The wiki book was featured in a November 28, 2006, broadcast of NPR's All Things Considered.
History and overview of project:
The project was started as "a business community formed by business professionals to research and discuss the impact of social networks on traditional business functions".
The project was initiated by faculty from the Wharton School and the MIT Sloan School of Management. The people behind the initiative are Barry Libert, CEO of Shared Insights; Jon Spector, vice dean and director of Wharton's Aresty Institute of Executive Education; Thomas W. Malone, Patrick J. McGovern Professor of Management at the MIT Sloan School of Management and founder and director of the MIT Center for Collective Intelligence; Tim Moore, editor-in-chief of Pearson Education; and Jerry (Yoram) Wind, Lauder Professor and Professor of Marketing at the Wharton School of the University of Pennsylvania and founding director of the Wharton "think tank", the SEI Center for Advanced Studies in Management.
The project was started in late 2006 and a wiki website was established to allow people to contribute text to the book. It was published on October 5, 2007.
Participation According to the project's website, "over a million students, faculty and alumni of the Wharton School of the University of Pennsylvania and the MIT Sloan School of Management, as well as leaders, authors, and experts from the fields of management and technology were invited to contribute in a wiki-based community that coalesced at wearesmarter.org. Members were asked to develop and share their insights about why community approaches work or don't work when it comes to marketing, business development, distribution, and more, and what companies have to do to make them work better." They had reached the following participation statistics by the time the book was ready for publication:
- 4375 registered members
- 737 forum posts
- 250 wiki contributors
- 1600 wiki posts
The project's website reports that, "In addition to actual community members and contributors, the project was influenced by hundreds of bloggers, Podcasters, and conference attendees at the inaugural Community 2.0 Conference in Las Vegas." Advisory board The project's advisory board for phase 1 (the writing of the first book) included: Chairman: Thomas W. Malone, the Patrick J. McGovern Professor of Management at the MIT Sloan School of Management, founder and director of the MIT Center for Collective Intelligence. Board members: Tim Moore of Wharton School Publishing and FTPress.
Jimmy Wales — founder and former Chair of the Board of Trustees of the Wikimedia Foundation.
Yoram (Jerry) Wind is The Lauder Professor and Professor of Marketing at the Wharton School of the University of Pennsylvania. He is the founding director of the Wharton "think tank", The SEI Center for Advanced Studies in Management.
Philip Evans — a senior vice president in the Boston office of the Boston Consulting Group, author of the best-selling book Blown to Bits.
History and overview of project:
Content:
According to the authors, "the goal of the project was to develop a book that addresses what other best-selling books on community have not. Wikinomics and The Wisdom of Crowds have identified the phenomena of emerging social networks, but they do not confront how businesses can profit from the wisdom of crowds." The book contains case studies from several companies, including Eli Lilly and Company, Amazon.com, Dell Computers, Cambrian House, Angie's List, and Procter & Gamble.
Media coverage and acceptance:
The project received wide coverage in US media, including such venues as The Wall Street Journal, Forbes.com, Newsweek and NPR's radio show "All Things Considered".
The book was ranked #6 in Amazon's "Best of 2007" and named a "Top 10 book to inspire your business for 2008" by TheStreet.com.
Further and related reading:
Don Tapscott and Anthony D. Williams (2006). Wikinomics: How Mass Collaboration Changes Everything. Portfolio. ISBN 978-1-59184-193-7.
Yochai Benkler (2007). The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press. ISBN 978-0-300-12577-1.
Further and related reading:
James Surowiecki (2004). The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. Little, Brown. ISBN 0-316-86173-1.
Cass R. Sunstein (2006). Infotopia: How Many Minds Produce Knowledge. Oxford University Press. ISBN 0-19-518928-0.
Thomas W. Malone (2004). The Future of Work: How the New Order of Business Will Shape Your Organization, Your Management Style and Your Life. Harvard Business School Press. ISBN 978-1-59139-125-8. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ghost population**
Ghost population:
A ghost population is a population that has been inferred through statistical techniques rather than identified directly from physical evidence.
Population studies:
In 2004, it was proposed that maximum likelihood or Bayesian approaches that estimate migration rates and population sizes using coalescent theory can use datasets which contain a population for which there is no data. This is referred to as a "ghost population". The manipulation allows exploration of the effects of missing populations on the estimation of population sizes and migration rates between two specific populations. The biases of the inferred population parameters depend on the magnitude of the migration rate from the unknown populations. The technique for deriving ghost populations attracted criticism because ghost populations were the result of statistical models, which carry their own limitations.
Population genetics:
In 2012, DNA analysis and statistical techniques were used to infer that a now-extinct human population in northern Eurasia had interbred with both the ancestors of Europeans and a Siberian group that later migrated to the Americas. The group was referred to as a ghost population because they were identified by the echoes that they leave in genomes—not by bones or ancient DNA. In 2013, another study found the remains of a member of this ghost group, fulfilling the earlier prediction that they had existed. According to a study published in 2020, there are indications that 2% to 19% (or about ≃6.6% to ≃7.0%) of the DNA of four West African populations may have come from an unknown archaic hominin which split from the ancestor of humans and Neanderthals between 360 kya and 1.02 mya. However, the study also suggests that at least part of this archaic admixture is also present in Eurasians/non-Africans, and that the admixture event or events range from 0 to 124 ka B.P., which includes the period before the Out-of-Africa migration and prior to the African/Eurasian split (thus affecting in part the common ancestors of both Africans and Eurasians/non-Africans). Another recent study, which discovered substantial amounts of previously undescribed human genetic variation, also found ancestral genetic variation in Africans that predates modern humans and was lost in most non-Africans. In 2015, a study of the lineage and early migration of the domestic pig found that the best-fitting model for the data included gene flow from a now-extinct ghost population during the Pleistocene. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tiger Mask W**
Tiger Mask W:
Tiger Mask W (Japanese: タイガーマスクW(ダブル), Taigā Masuku Daburu) is a Tiger Mask anime series which premiered on October 2, 2016, and ran for 38 episodes. It uses a mix of 2D and 3D CGI animation. It is a sequel to the original anime, whereas Tiger Mask II is considered only a parallel universe.
Plot:
Over forty years after Naoto Date mysteriously vanished, having killed his arch-nemesis Tiger the Great in a brutal deathmatch, the Global Wrestling Monopoly (GWM) challenged the Zipangu Pro-Wrestling team and single-handedly destroyed it, along with its manager Daisuke Fuji, who was brutally defeated by GWM's top fighter Yellow Devil. The incident drove two young members, Naoto Azuma, the main protagonist of the series, and his friend Takuma Fuji, son of Daisuke, to seek revenge on the GWM, and the two parted ways. While Naoto trained under Kentaro Takaoka, Naoto Date's former ally, Takuma trained in the Tiger's Den under the leadership of Mr. X, after being recruited by Yellow Devil under the pretence of giving him the chance for revenge.
Plot:
Three years later, Naoto has taken the identity of Tiger Mask and signed for New Japan Pro-Wrestling (NJPW) while Takuma has taken on the identity of Tiger the Dark under the Tiger's Den, with none of them being aware of the other's identity but sharing the same goal. The GWM, under the leadership of Miss X, sees the new Tiger Mask as a threat that must be eliminated to ensure their control over the wrestling world and sends their fighters in several schemes to destroy him. Tiger Mask trains with the help of his current trainer and the aid of several pro wrestlers to become a stronger fighter with the ultimate goal to fight and defeat Yellow Devil.
Plot:
Both he and Takuma get their first chance at the Masked Tournament hosted in Miss X's new Max Dome, the GWM's first step to conquering Japan, with none other than Yellow Devil among the participants. Tiger Mask, on the advice of the Indian wrestler Mister Question, decides to develop his own signature technique and trains with his co-workers at NJPW to perfect it. He eventually fights and defeats Takuma using an unperfected version of the move, and manages to face and defeat Yellow Devil but, much to his surprise, finds that another man has taken on the identity, leaving Naoto's and Takuma's quest inconclusive but revealing to Takuma that Tiger Mask is also after Yellow Devil. Takuma is eventually demoted to undergo the Tiger's Execution, which would make him a living punching bag, but decides to take the "Hell in the Hole" trial, an illegal wrestling match with several bets, carrying a special chance to restore him as a main fighter but at the risk of losing his life if he fails.
Plot:
The "Hell in the Hole" trial begins, with its participants being wrestlers defeated by Tiger Mask, including Odin, Billy the Kidman, Black Python and Red Death Mask, as well as Ricardo, Cox, Phantom, Takuma and Kevin, Takuma's friend. The fight is a battle royale with no rules; wrestlers are given weapons after defeating contestants, and alliances are possible too. Takuma and Kevin team up and eventually prevail over Odin, Billy the Kidman and Red Death Mask while the rest are eliminated. Takuma and Kevin are forced to fight a "gatekeeper" to gain their escape, but the wrestler is discovered to be a powerful robot. Odin regains consciousness and aids the duo using a firebreath tool on the robot's face, which causes it to malfunction and assault the nearby spectators. The strongest wrestler of the Tiger's Den, Tiger the Third, faces and easily defeats the robot with a piledriver. Takuma, Kevin and Odin thus succeed in the Hell in the Hole, but Odin quits the GWM.
Plot:
The GWM then prepares their next big tournament, Wrestle Max War Game, which will pit wrestlers from all around the world for the right to challenge Tiger the Third, who is the World Heavyweight Champion, for the belt and a Million Dollar Prize. Tiger the Third will also participate in the event himself. Naoto decides to enter the tournament in order to become champion, as it will allow him to select his opponents and as such be able to force the true Yellow Devil out of hiding. Thinking that the battle may not play out one-on-one, Naoto decides to request Fukuwara Mask to partner with him to increase his chances of winning.
Plot:
The day of the War Game arrives and the rules are explained: the wrestlers will fight in a pyramid-shaped ring with five stages and a ground level. They are divided into four blocks which are interconnected, and when the bell sounds they are able to move between blocks. Once a wrestler defeats an opponent, he is allowed to move to the next stage, where he fights his next opponent, repeating the process until the first two fighters are standing at the top stage of the ring, at which point the battle is over. Following an interval, the two combatants fight and the winner earns the right to challenge Tiger the Third. The Champion, being one of the participants, declares that whoever defeats him can take the belt. Surprisingly, a man taking the name of Yellow Devil is also a participant, causing Naoto and Takuma to pursue him.
Plot:
The fights rage on, and Takuma finds and easily defeats Yellow Devil, unmasking him as an impostor. Tiger the Third faces Ryu Wakamatsu in his Dragon Young persona, Fukuwara Mask and Tetsuya Naito, making him the first fighter to reach the top. Tiger Mask and Tiger the Dark face off once more in an evenly matched fight. Tiger Mask uses his Tiger Driver move, but Takuma is able to counter it at the last moment. Before Naoto and Takuma can resume their fight, Kevin interferes and ambushes Naoto, allowing Takuma to defeat him; thus Tiger the Dark earns the right to challenge Tiger the Third for the championship. A recess takes place before the fight, but Takuma is revealed to be greatly injured as a result of Naoto's knee strike.
Plot:
Before his match, Takuma is alerted that Tiger the Third is potentially the true Yellow Devil. In order to confirm it, he bets his mask against Tiger the Third's, which the latter accepts. The match begins and Takuma seemingly gains the upper hand. Tiger the Third then confirms Takuma's suspicions that he is indeed Yellow Devil by using his old signature techniques. Takuma is able to counter his Devil's Crush and almost defeats the Third. However, the injuries caused by Naoto take their toll, and Tiger the Third uses his true finishing move, Sacrifice, greatly injuring Takuma and causing his defeat. The Third then unmasks Takuma, revealing his identity to Naoto, who rushes to his friend's side, revealing his own identity as well, as Takuma is transported to the hospital.
Plot:
A mysterious trio of wrestlers calling themselves the "Miracle 3" appear and sabotage several NJPW matches by committing fouls, while promising to keep "multiplying". After Tiger Mask battles one of GWM's strongest fighters, King Tiger, and defeats him in a deadly fight, Miss X agrees to let Tiger Mask challenge Tiger the Third, but bind him by a contract that forces him to betray NJPW and align himself with the Miracles. As the battles escalate, Tiger Mask, desperate to get his match with Tiger the Third, sinks to committing fouls along with the Miracles in the matches, which destroys his reputation.
Plot:
During a final encounter between the Miracles and NJPW in a 5-on-5 match, Tiger Mask is cruelly used as a tool in order to secure a victory against Kazuchika Okada, NJPW's strongest wrestler. Before NJPW is able to win against the Miracles, Tiger the Third and Miss X interrupt the match. Tiger Mask attempts to attack Tiger the Third, but is easily countered. Miss X then announces the Final Wars, the ultimate confrontation between NJPW and the GWM, where the winner will take all the belts from the losers, effectively making it an all-or-nothing match. Yuji Nagata reluctantly accepts the challenge. Meanwhile, Tiger Mask is struggling, as his recent actions have eroded NJPW's trust in him, leaving him alone and still bound to the GWM by contract despite his assault on Tiger the Third. As a result, he is forced by Miss X to fight Mr. Miracle IV in another match. Mr. Miracle IV reveals himself to be Universal Mask, an expert in aerial combat, and the two fight in a special ring elevated above floor level and fitted with pipes, giving Universal Mask a one-sided advantage, which is meant to punish Tiger Mask for his betrayal. Tiger Mask eventually defeats him thanks to Fukuwara Mask's advice, but has difficulties in developing a new killer move.
Plot:
Tiger Mask decides to go to Kyoto in order to gain inspiration for a killer move from the Arashi Dojo, just like Naoto Date did at one point. However, he comes to find two Dojos, one of them a fake. Tiger Mask helps unravel the impostor and gains useful advice from the true master, but it is still not sufficient. The Final Wars begin between NJPW and GWM. During the first fight, NJPW gains an advantage but Mr. Miracle III interferes, causing Tiger Mask to intercept the Miracles' foul combo and rendering the match a no contest. Immediately after, as per the contract with the GWM, Tiger Mask is forced to fight the final match against Mr. Miracle II, who is GWM's coach O'Connor, in a Lumberjack match where any fighter who leaves the ring must be returned by the seconds. However, the match quickly becomes vicious as Mr. Miracle III continuously pushes Tiger outside the ring to be mercilessly punished by GWM's seconds, effectively making it a 4-on-1 fight. Having regained their trust, NJPW interferes on Tiger Mask's behalf, allowing him to defeat Mr. Miracle III and reform their alliance.
Plot:
After losing two fights against NJPW, Tiger's Den sends in their strongest reinforcements, Big Tiger the Second and Black Tiger, two wrestlers who are part of Tiger's Den's Four Heavenly Kings, along with the previously defeated King Tiger, whose strength is surpassed only by Tiger the Third's. A tag match is scheduled pitting Big Tiger the Second and Black Tiger against Tiger Mask and Yuji Nagata. The NJPW pair, however, is soundly defeated and Nagata is seriously injured by Big Tiger the Second. Takuma asks Naoto to let him join his training to accelerate his recovery, and Takaoka agrees.
Plot:
The Final Wars begin, consisting of five different matches. The first one ends in a draw between Black Tiger and Togi Makabe as they are both counted out. The second match is a double tag-match with Black Tiger and Tiger the Third vs Makabe and Kazuchika Okada. Makabe is able to knock Black Tiger unconscious, but is himself defeated by Tiger the Third's Devil Crush, earning GWM a win. In the next round, a triple tag-match, Big Tiger the Second, Mr. Miracle I and Mr. Miracle II face Hiroshi Tanahashi, Tiger Mask and Tiger the Dark, who joins NJPW. During the fight, Mr. Miracle II unmasks himself as Kevin and battles Takuma until he is knocked unconscious by Tiger Mask's new killer move Tiger Fang. However, Big Tiger the Second defeats Tanahashi with his Skewer move, granting GWM a second win. The following round is a double tag-match of Tiger Mask and Tiger the Dark vs Tiger the Third and Big Tiger the Second. During the fight, Tiger the Third unmasks Naoto by ripping his mask. Tiger the Dark, with renewed strength from his father's encouragement, assaults Tiger the Third and Big Tiger the Second with his new killer move, Crossbow, injuring them both and allowing Naoto to defeat Big Tiger the Second, earning NJPW a win. For the fifth single match, Mr. Miracle I faces NJPW's champion Okada. However, Mr. Miracle I blinds Okada by spitting Asian Mist into his eyes.
Plot:
Okada is able to defeat Mr. Miracle I despite his handicap, but his arm is badly injured, so Naoto fights with a new patched mask under the name Tiger Mask W to face Tiger the Third in a Sudden Death Match. The two fight, with Naoto combining his own moves with Takuma's. Although these are effective, Tiger the Third severely injures Naoto with a chain of Devil's Tornadoes. When he attempts to finish Naoto with the Devil's Crush, the move fails and Naoto counters and injures Tiger the Third. The two clash a final time, but Tiger the Third's arm gives out, leaving him unable to counter Naoto's Tiger Fang. Tiger the Third is instantly defeated, which causes the destruction of the Tiger's Den. Naoto and Takuma part ways, deciding to fight abroad using new masks made from a combination of their former masks and calling themselves Tiger Mask W. They vow to meet each other again in the ring, either as a tag team or as opponents.
Plot:
Following Naoto and Takuma's departure, Miss X founds and leads her own organization, Girls Wrestling Movement, recruiting Haruna Takaoka to serve as her main fighter under her Spring Tiger (later Springer) alter ego, along with her friends Milk and Mint. Haruna is initially reluctant to pursue a wrestling career because of Yoko Takaoka's disapproval; the truth about her hero Tiger Mask made her mother despise the sport she had always revered as a child. After a month of fighting, and later defeating Japan's strongest female wrestler, Mother Devil, along with Miss X's encouragement, she builds up the courage to tell her family.
Broadcast:
Director: Toshiaki Komura
Script: Katsuhiko Chiba
Music: Yasuharu Takanashi and -yaiba-
Character Design: Hisashi Kagawa
Art Design: Yoshito Watanabe
Action Animation Director: Junichi Hayama
The opening theme is "Ike! Tiger Mask" (行け!タイガーマスク; Go! Tiger Mask) by Shōnan no Kaze and the ending theme is "KING OF THE WILD", also by Shōnan no Kaze. The opening theme is a new rendition of the opening of the original Tiger Mask, which was originally performed by Hideyo Morimoto. The show airs in TV Asahi's 26:45 (2:45 AM) time slot on Saturday, technically Sunday morning. Naoto Date's training facility on Mount Fuji and his dream about a children's land are original to the manga but were omitted in the first anime.
Promotion:
In conjunction with the premiere of the show, NJPW debuted a live action version of Tiger Mask W, portrayed by Kota Ibushi, on October 10, 2016, at their King of Pro-Wrestling event. Since then, Red Death Mask, portrayed by Juice Robinson, and Tiger the Dark, portrayed by A. C. H., have also debuted for NJPW. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sony NEX-5R**
Sony NEX-5R:
The Sony α NEX-5R is a mid-range rangefinder-styled digital mirrorless interchangeable lens camera announced by Sony on 29 August 2012. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Color index**
Color index:
In astronomy, the color index is a simple numerical expression that characterizes the color of an object, which in the case of a star gives its temperature. The smaller the color index, the bluer (hotter) the object is; conversely, the larger the color index, the redder (cooler) the object is. This is a consequence of the logarithmic magnitude scale, in which brighter objects have smaller (more negative) magnitudes than dimmer ones. For comparison, the whitish Sun has a B−V index of 0.656 ± 0.005, whereas the bluish Rigel has a B−V of −0.03 (its B magnitude is 0.09 and its V magnitude is 0.12, so B−V = −0.03). Traditionally, the color index uses Vega as a zero point.
Color index:
To measure the index, one observes the magnitude of an object successively through two different filters, such as U and B, or B and V, where U is sensitive to ultraviolet rays, B is sensitive to blue light, and V is sensitive to visible (green-yellow) light (see also: UBV system). The set of passbands or filters is called a photometric system. The difference in magnitudes found with these filters is called the U−B or B−V color index respectively.
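As a concrete illustration of the definition above, a color index is simply the difference between the magnitudes measured through the two filters. The sketch below uses the B and V magnitudes quoted for Rigel; the function name is our own:

```python
def color_index(mag_blue: float, mag_visual: float) -> float:
    """B−V color index: the B-filter magnitude minus the V-filter magnitude.
    Negative values indicate bluer (hotter) objects, because brighter
    objects have smaller magnitudes on the logarithmic magnitude scale."""
    return mag_blue - mag_visual

# Rigel: B = 0.09, V = 0.12, so B−V = 0.09 − 0.12 = −0.03 (bluish)
rigel_bv = color_index(0.09, 0.12)
print(f"Rigel B−V = {rigel_bv:+.2f}")
```

The same subtraction applies to any filter pair in the photometric system, e.g. U−B or R−I.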
Color index:
In principle, the temperature of a star can be calculated directly from the B−V index, and there are several formulae to make this connection. A good approximation can be obtained by considering stars as black bodies, using Ballesteros' formula (also implemented in the PyAstronomy package for Python): 4600 0.92 1.7 0.92 0.62 ).
Color indices of distant objects are usually affected by interstellar extinction, that is, they are redder than those of closer stars. The amount of reddening is characterized by color excess, defined as the difference between the observed color index and the normal color index (or intrinsic color index), the hypothetical true color index of the star, unaffected by extinction.
For example, in the UBV photometric system we can write it for the B−V color: E(B−V) = (B−V)observed − (B−V)intrinsic.
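The color-excess relation can be sketched in the same way (the magnitudes below are hypothetical, chosen only to illustrate the arithmetic):

```python
def color_excess(observed_bv: float, intrinsic_bv: float) -> float:
    """E(B−V): the observed color index minus the intrinsic (extinction-free)
    color index. A positive excess means interstellar dust has reddened
    the starlight relative to the star's true color."""
    return observed_bv - intrinsic_bv

# Hypothetical star: observed B−V = 0.85, intrinsic B−V = 0.65
e_bv = color_excess(0.85, 0.65)
print(f"E(B−V) = {e_bv:.2f}")
```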
Color index:
The passbands most optical astronomers use are the UBVRI filters, where the U, B, and V filters are as mentioned above, the R filter passes red light, and the I filter passes infrared light. This system of filters is sometimes called the Johnson–Cousins filter system, named after the originators of the system (see references). These filters were specified as particular combinations of glass filters and photomultiplier tubes. M. S. Bessell specified a set of filter transmissions for a flat response detector, thus quantifying the calculation of the color indices. For precision, appropriate pairs of filters are chosen depending on the object's color temperature: B−V for mid-range objects, U−V for hotter objects, and R−I for cool ones. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Véronique Cortier**
Véronique Cortier:
Véronique Cortier is a French mathematician and computer scientist specializing in cryptography. Her research has applied mathematical logic in the formal verification of cryptographic protocols, and has included the development of secure electronic voting systems. She has also contributed to the public dissemination of knowledge about cryptography through a sequence of posts on the binaire blog of Le Monde. She is a director of research with CNRS, associated with the Laboratoire Lorrain de Recherche en Informatique et ses Applications (LORIA) at the University of Lorraine in Nancy.
Education and career:
Cortier studied mathematics and computer science at the École normale supérieure de Cachan from 1997 until 2001, earning a master's degree and completing her agrégation. She remained at Cachan for her doctoral studies, completing a Ph.D. in 2003 with the dissertation Automatic Verification of Cryptographic Protocols supervised by Hubert Comon. She joined the French Centre national de la recherche scientifique (CNRS) in 2003, completed a habilitation in 2009, and became a director of research with CNRS in 2010.
Recognition:
Cortier was the 2003 winner of the Gilles Kahn Prize of the Société informatique de France for the best French dissertation in computer science. She also won a second dissertation prize, from Le Monde. In 2015 she became the second woman to win the INRIA and French Academy of Sciences Young Researcher Award for her work on Belenios, a secure electronic voting system. In 2022 she won the CNRS Silver Medal. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |