**Product term** Product term: In Boolean logic, a product term is a conjunction of literals, where each literal is either a variable or its negation. Examples: Examples of product terms include: A∧B, A∧(¬B)∧(¬C), ¬A. Origin: The terminology comes from the similarity of AND to multiplication as in the ring structure of Boolean rings. Minterms: For a Boolean function of n variables x1, …, xn, a product term in which each of the n variables appears exactly once (in either its complemented or uncomplemented form) is called a minterm. Thus, a minterm is a logical expression of n variables that employs only the complement operator and the conjunction operator.
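A minimal Python sketch (illustrative, not taken from the source) of the minterm definition above: each truth assignment on which a Boolean function evaluates to true corresponds to one minterm, a product term mentioning every variable exactly once.

```python
from itertools import product

def minterms(f, names):
    """Yield, as strings, the minterms on which the Boolean function f is true."""
    for values in product((False, True), repeat=len(names)):
        if f(*values):
            # Build the product term: each variable appears once,
            # complemented when its value in the assignment is False.
            yield "∧".join(name if bit else f"¬{name}"
                           for name, bit in zip(names, values))

# Example: f(A, B) = A OR B has three satisfying assignments, hence three minterms.
print(list(minterms(lambda a, b: a or b, ["A", "B"])))
# -> ['¬A∧B', 'A∧¬B', 'A∧B']
```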
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Automatic parallelization** Automatic parallelization: Automatic parallelization, also auto parallelization, or autoparallelization refers to converting sequential code into multi-threaded and/or vectorized code in order to use multiple processors simultaneously in a shared-memory multiprocessor (SMP) machine. Fully automatic parallelization of sequential programs is a challenge because it requires complex program analysis and the best approach may depend upon parameter values that are not known at compilation time. The programming control structures on which autoparallelization places the most focus are loops, because, in general, most of the execution time of a program takes place inside some form of loop. Automatic parallelization: There are two main approaches to parallelization of loops: pipelined multi-threading and cyclic multi-threading. For example, consider a loop that on each iteration applies a hundred operations, and runs for a thousand iterations. This can be thought of as a grid of 100 columns by 1000 rows, a total of 100,000 operations. Cyclic multi-threading assigns each row to a different thread. Pipelined multi-threading assigns each column to a different thread. Automatic parallelization technique: Parse: This is the first stage, where the scanner reads the input source files to identify all static and extern usages. Each line in the file is checked against pre-defined patterns to segregate it into tokens. These tokens are stored in a file which is used later by the grammar engine. The grammar engine checks patterns of tokens that match pre-defined rules to identify variables, loops, control statements, functions, etc. in the code. Automatic parallelization technique: Analyze: The analyzer is used to identify sections of code that can be executed concurrently. The analyzer uses the static data information provided by the scanner-parser. The analyzer first finds all the totally independent functions and marks them as individual tasks. The analyzer then finds which tasks have dependencies. Schedule: The scheduler lists all the tasks and their dependencies on each other in terms of execution and start times. The scheduler produces the optimal schedule in terms of the number of processors to be used or the total execution time for the application. Automatic parallelization technique: Code Generation: The scheduler generates a list of all the tasks and the details of the cores on which they will execute along with the time that they will execute for. The code generator inserts special constructs in the code that will be read during execution by the scheduler. These constructs instruct the scheduler on which core a particular task will execute along with the start and end times. Cyclic multi-threading: A cyclic multi-threading parallelizing compiler tries to split up a loop so that each iteration can be executed on a separate processor concurrently. Cyclic multi-threading: Compiler parallelization analysis: The compiler usually conducts two passes of analysis before actual parallelization in order to determine the following: Is it safe to parallelize the loop? Answering this question needs accurate dependence analysis and alias analysis. Is it worthwhile to parallelize it?
This answer requires a reliable estimation (modeling) of the program workload and the capacity of the parallel system. The first pass of the compiler performs a data dependence analysis of the loop to determine whether each iteration of the loop can be executed independently of the others. Data dependence can sometimes be dealt with, but it may incur additional overhead in the form of message passing, synchronization of shared memory, or some other method of processor communication. Cyclic multi-threading: The second pass attempts to justify the parallelization effort by comparing the theoretical execution time of the code after parallelization to the code's sequential execution time. Somewhat counterintuitively, code does not always benefit from parallel execution. The extra overhead that can be associated with using multiple processors can eat into the potential speedup of parallelized code. Example: A loop is called DOALL if all of its iterations, in any given invocation, can be executed concurrently. The Fortran code below is DOALL, and can be auto-parallelized by a compiler because each iteration is independent of the others, and the final result of array z will be correct regardless of the execution order of the iterations. There are many pleasingly parallel problems that have such DOALL loops. For example, when rendering a ray-traced movie, each frame of the movie can be independently rendered, and each pixel of a single frame may be independently rendered. On the other hand, the following code cannot be auto-parallelized, because the value of z(i) depends on the result of the previous iteration, z(i - 1). This does not mean that the code cannot be parallelized. Indeed, it is equivalent to a DOALL loop in which each z(i) is computed directly from data available before the loop begins. However, current parallelizing compilers are not usually capable of bringing out these parallelisms automatically, and it is questionable whether this code would benefit from parallelization in the first place. Pipelined multi-threading: A pipelined multi-threading parallelizing compiler tries to break up the sequence of operations inside a loop into a series of code blocks, such that each code block can be executed on separate processors concurrently. There are many pleasingly parallel problems that have such relatively independent code blocks, in particular systems using pipes and filters. Pipelined multi-threading: For example, when producing live broadcast television, the following tasks must be performed many times a second: Read a frame of raw pixel data from the image sensor, Do MPEG motion compensation on the raw data, Entropy compress the motion vectors and other data, Break up the compressed data into packets, Add the appropriate error correction and do an FFT to convert the data packets into COFDM signals, and Send the COFDM signals out the TV antenna. A pipelined multi-threading parallelizing compiler could assign each of these six operations to a different processor, perhaps arranged in a systolic array, inserting the appropriate code to forward the output of one processor to the next processor. Pipelined multi-threading: Recent research focuses on using the power of GPUs and multicore systems to compute such independent code blocks (or simply independent iterations of a loop) at runtime. The memory accessed (whether direct or indirect) can be simply marked for different iterations of a loop and can be compared for dependency detection.
Using this information, the iterations are grouped into levels such that iterations belonging to the same level are independent of each other, and can be executed in parallel. Difficulties: Automatic parallelization by compilers or tools is very difficult for the following reasons: dependence analysis is hard for code that uses indirect addressing, pointers, recursion, or indirect function calls because it is difficult to detect such dependencies at compile time; loops have an unknown number of iterations; accesses to global resources are difficult to coordinate in terms of memory allocation, I/O, and shared variables; irregular algorithms that use input-dependent indirection interfere with compile-time analysis and optimization. Workaround: Due to the inherent difficulties in full automatic parallelization, several easier approaches exist for obtaining a higher-quality parallel program. One of these is to allow programmers to add "hints" to their programs to guide compiler parallelization, such as HPF for distributed memory systems and OpenMP or OpenHMPP for shared memory systems. Another approach is to build an interactive system between programmers and parallelizing tools/compilers. Notable examples are Vector Fabrics' Pareon, SUIF Explorer (the Stanford University Intermediate Format compiler), the Polaris compiler, and ParaWise (formerly CAPTools). Finally, another approach is hardware-supported speculative multithreading. Parallelizing compilers and tools: Most research compilers for automatic parallelization consider Fortran programs, because Fortran makes stronger guarantees about aliasing than languages such as C. Typical examples are the Paradigm compiler, Polaris compiler, Rice Fortran D compiler, SUIF compiler, and Vienna Fortran compiler.
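The Fortran listings referred to in this article are not reproduced in the text above. The following Python sketch (illustrative only; names and values are hypothetical) shows the same distinction between a DOALL loop, whose iterations are independent, and a loop with a cross-iteration dependence.

```python
from concurrent.futures import ThreadPoolExecutor
from operator import add

n = 1000
x = list(range(n))
y = list(range(n))

# DOALL: z[i] depends only on x[i] and y[i], so iterations may run in any
# order or concurrently.  (In CPython the GIL limits real speedup for this
# CPU-bound work; the point here is only the dependence structure.)
with ThreadPoolExecutor() as pool:
    z = list(pool.map(add, x, y))

# Not DOALL as written: each element depends on the previous iteration's
# result, so the loop cannot be naively split across threads.
w = [1.0] * n
for i in range(1, n):
    w[i] = w[i - 1] * 2            # loop-carried dependence on w[i - 1]

# It is nevertheless equivalent to a DOALL form, because every element can
# be computed independently from the initial value and the iteration index.
w_doall = [1.0 * 2 ** i for i in range(n)]
assert w == w_doall
```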
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zfone** Zfone: Zfone is software for secure voice communication over the Internet (VoIP), using the ZRTP protocol. It was created by Phil Zimmermann, the creator of the PGP encryption software. Zfone works on top of existing SIP- and RTP-programs, but should work with any SIP- and RTP-compliant VoIP-program. Zfone: Zfone turns many existing VoIP clients into secure phones. It runs in the Internet Protocol stack on any Windows XP, Mac OS X, or Linux PC, and intercepts and filters all the VoIP packets as they go in and out of the machine, and secures the call on the fly. A variety of different software VoIP clients can be used to make a VoIP call. The Zfone software detects when the call starts, initiates a cryptographic key agreement between the two parties, and then proceeds to encrypt and decrypt the voice packets on the fly. It has its own separate GUI, telling the user if the call is secure. Zfone describes itself to end-users as a "bump on the wire" between the VoIP client and the Internet, which acts upon the protocol stack. Zfone: Zfone's libZRTP SDK libraries are released under either the Affero General Public License (AGPL) or a commercial license. Note that only the libZRTP SDK libraries are provided under the AGPL. The parts of Zfone that are not part of the libZRTP SDK libraries are not licensed under the AGPL or any other open source license. Although the source code of those components is published for peer review, they remain proprietary. The Zfone proprietary license also contains a time bomb provision. Zfone: It appears that Zfone development has stagnated, however, as the most recent version was released on 22 Mar 2009. In addition, since 29 Jan 2011, it has not been possible to download Zfone from the developer's website because the download server has gone offline. Platforms and specification: Availability – Mac OS X, Linux, and Windows as compiled programs as well as an SDK. Encryption standards – Based on ZRTP, which uses 128- or 256-bit AES together with a 3072-bit key exchange system and voice-based verification to prevent man-in-the-middle attacks. ZRTP protocol – Published as IETF RFC 6189: "ZRTP: Media Path Key Agreement for Unicast Secure RTP". VoIP clients – Zfone has been tested with the following VoIP clients: X-Lite, Gizmo5, XMeeting, Google Talk VoIP client, and SJphone.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Quantcast File System** Quantcast File System: Quantcast File System (QFS) is an open-source distributed file system software package for large-scale MapReduce or other batch-processing workloads. It was designed as an alternative to the Apache Hadoop Distributed File System (HDFS), intended to deliver better performance and cost-efficiency for large-scale processing clusters. Design: QFS is software that runs on a cluster of hundreds or thousands of commodity Linux servers and allows other software layers to interact with them as if they were one giant hard drive. It has three components: A chunk server runs on each machine that will host data, manages I/O to its hard drives, and monitors its activity and capacity. Design: A central process called the metaserver keeps the directory structure and maps of files to physical storage. It coordinates activities of all the chunk servers and monitors the overall health of the file system. For high performance it holds all its data in memory, writing checkpoints and transaction logs to disk for recovery. Design: A client component is the interface point that presents a file system application programming interface (API) to other layers of the software. It makes requests of the metaserver to identify which chunk servers hold (or will hold) its data, then interacts with the chunk servers directly to read and write. In a cluster of hundreds or thousands of machines, the odds are low that all will be running and reachable at any given moment, so fault tolerance is the central design challenge. QFS meets it with Reed–Solomon error correction. The form of Reed–Solomon encoding used in QFS stores redundant data in nine places and can reconstruct the file from any six of these stripes. When it writes a file, it by default stripes it across nine physically different machines — six holding the data, three holding parity information. Any three of those can become unavailable. If any six remain readable, QFS can reconstruct the original data. The result is fault tolerance at a cost of a 50% expansion of data. QFS is written in the programming language C++, operates within a fixed memory footprint, and uses direct input and output (I/O). History: QFS evolved from the Kosmos File System (KFS), an open source project started by Kosmix in 2005. Quantcast adopted KFS in 2007, built its own improvements on it over the next several years, and released QFS 1.0 as an open source project in September 2012.
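A small Python sketch (illustrative only; server names are hypothetical, and the actual Reed–Solomon arithmetic performed by QFS is not shown) of the 6+3 striping arithmetic described above: nine stripes on nine machines, any six of which suffice, at a 50% storage expansion.

```python
import random

DATA_STRIPES, PARITY_STRIPES = 6, 3          # default QFS layout described above
TOTAL = DATA_STRIPES + PARITY_STRIPES

# Storage cost: three parity stripes for every six data stripes -> 50% extra.
expansion = PARITY_STRIPES / DATA_STRIPES
print(f"extra storage: {expansion:.0%}")      # -> extra storage: 50%

# A write places the nine stripes on nine physically different machines
# (hypothetical server names; placement policy greatly simplified).
placement = {i: f"chunkserver-{i:02d}" for i in range(TOTAL)}

# Any three stripes may become unreachable; six readable stripes are enough
# for the Reed-Solomon decoder (not shown) to reconstruct the original data.
unreachable = set(random.sample(range(TOTAL), PARITY_STRIPES))
readable = [i for i in range(TOTAL) if i not in unreachable]
assert len(readable) >= DATA_STRIPES
```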
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BeGeistert** BeGeistert: BeGeistert is an annual (formerly semiannual) users' and developers' conference for the open source operating system Haiku. The conference usually takes place over a weekend in the autumn of each year in Düsseldorf, Germany. The programme typically consists of demonstrations by European software vendors, coding demonstrations, and workshops and presentations on advancements made in developing Haiku. History: BeGeistert originally started as a BeOS conference, playing an important part in its community. For example, developers and representatives of Be Europe and other important contributors attended the conference in the late 1990s. This history is reflected in the conference name, as the use of capitalization in the German word "begeistert" (meaning: "excited") alludes to Be Inc., the developer of the BeOS. History: After the bankruptcy of Be Inc. caused the BeOS to be discontinued in the early 2000s, the conference was attended by representatives of the two projects deriving from the BeOS: ZETA, a closed source commercial initiative by yellowTAB based on the source code of an unreleased version of the BeOS, and the Haiku project, an initiative to create an open source operating system that is inspired by and compatible with the latest official release of the BeOS. After yellowTAB's insolvency in 2006, the conference's focus shifted entirely to Haiku. In 2007, feathers similar to those in the Haiku logo were added to the BeGeistert logo to reflect this shift. History: The conference is organized and partially funded by the Haiku Support Association e.V., formerly BeFAN e.V., and also supported by Haiku, Inc. Between 2004 and 2006, Haiku, Inc. also organized an official annual Haiku conference in various places in the United States under the name "WalterCon", which was discontinued due to a "lack of community interest" for the 2007 conference. Editions: In 1998, BeGeistert started as a one-day event, but has been a two-day conference since BeGeistert 004, mostly taking place in April and October. As of 2013, the conference is held annually. Editions: Between 2000 and 2008, some conference weekends included coding challenges, which, among other things, resulted in the first localizable version of OpenTracker. Since 2003, it has been organized in the City Youth Hostel in Düsseldorf's Oberkassel neighborhood, except during the youth hostel's reconstruction in 2006. Since 2008, nearly every edition of the conference has been either preceded or followed by a coding event, later becoming the Haiku CodeSprint, lasting up to a week. Each edition of BeGeistert is identified by a three-digit number and a motto that often refers to the current events in the BeOS/Haiku world, such as moving on after the bankruptcy of Be, Inc., or the unveiling of a new (alpha) release of Haiku.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Igusa quartic** Igusa quartic: In algebraic geometry, the Igusa quartic (also called the Castelnuovo–Richmond quartic CR4 or the Castelnuovo–Richmond–Igusa quartic) is a quartic hypersurface in 4-dimensional projective space, studied by Igusa (1962). It is closely related to the moduli space of genus 2 curves with level 2 structure. It is the dual of the Segre cubic. It can be given as a codimension 2 variety in P5 by the equations $\sum_i x_i = 0$ and $\left(\sum_i x_i^2\right)^2 = 4\sum_i x_i^4$.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Partial inverse of a matrix** Partial inverse of a matrix: In linear algebra and statistics, the partial inverse of a matrix is an operation related to Gaussian elimination which has applications in numerical analysis and statistics. It is also known by various authors as the principal pivot transform, or as the sweep, gyration, or exchange operator. Partial inverse of a matrix: Given an n×n matrix A over a vector space V partitioned into blocks $A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$: if $A_{11}$ is invertible, then the partial inverse of A around the pivot block $A_{11}$ is created by inverting $A_{11}$, putting the Schur complement of $A_{11}$ in place of $A_{22}$, and adjusting the off-diagonal elements accordingly: $\operatorname{inv}_1(A) = \begin{pmatrix} A_{11}^{-1} & -A_{11}^{-1} A_{12} \\ A_{21} A_{11}^{-1} & A_{22} - A_{21} A_{11}^{-1} A_{12} \end{pmatrix}$. Conceptually, partial inversion corresponds to a rotation of the graph of the matrix, $(X, AX) \in V \times V$, such that, for conformally partitioned column matrices $(x_1, x_2)^T$ and $(y_1, y_2)^T$ with $A\,(x_1, x_2)^T = (y_1, y_2)^T$, $\operatorname{inv}_1(A)\,(y_1, x_2)^T = (x_1, y_2)^T$. As defined this way, this operator is its own inverse: $\operatorname{inv}_k(\operatorname{inv}_k(A)) = A$, and if the pivot block $A_{11}$ is chosen to be the entire matrix, then the transform simply gives the matrix inverse $A^{-1}$. Note that some authors define a related operation (under one of the other names) which is not an inverse per se; in particular, one common definition instead has $(\operatorname{inv}_k)^2(A) = -A$. The transform is often presented as a pivot around a single non-zero element $a_{kk}$, in which case one has $[\operatorname{inv}_k(A)]_{ij} = \begin{cases} 1/a_{kk} & i = j = k \\ -a_{kj}/a_{kk} & i = k,\ j \neq k \\ a_{ik}/a_{kk} & i \neq k,\ j = k \\ a_{ij} - a_{ik} a_{kj}/a_{kk} & i \neq k,\ j \neq k \end{cases}$. Partial inverses obey a number of nice properties: inversions around different blocks commute, so larger pivots may be built up from sequences of smaller ones; and partial inversion preserves the space of symmetric matrices. Use of the partial inverse in numerical analysis is due to the fact that there is some flexibility in the choices of pivots, allowing for non-invertible elements to be avoided, and because the operation of rotation (of the graph of the pivoted matrix) has better numerical stability than the shearing operation which is implicitly performed by Gaussian elimination. Use in statistics is due to the fact that the resulting matrix nicely decomposes into blocks which have useful meanings in the context of linear regression.
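A minimal NumPy sketch of the single-element pivot formula above (an illustration under assumptions, not reference code: `partial_inverse` is a hypothetical helper name, and no pivoting for numerical stability is attempted).

```python
import numpy as np

def partial_inverse(a, k):
    """Partial inverse (sweep) of a square matrix around the single pivot
    element a[k, k], following the scalar formula above.  Assumes a[k, k]
    is nonzero."""
    a = np.asarray(a, dtype=float)
    out = a - np.outer(a[:, k], a[k, :]) / a[k, k]   # a_ij - a_ik a_kj / a_kk
    out[k, :] = -a[k, :] / a[k, k]                   # pivot row:    -a_kj / a_kk
    out[:, k] = a[:, k] / a[k, k]                    # pivot column:  a_ik / a_kk
    out[k, k] = 1.0 / a[k, k]                        # pivot element: 1 / a_kk
    return out

A = np.array([[2.0, 1.0], [1.0, 3.0]])
# Sweeping the same pivot twice returns the original matrix ...
assert np.allclose(partial_inverse(partial_inverse(A, 0), 0), A)
# ... and, because pivots commute, sweeping every pivot in turn reproduces
# the full inverse, as stated in the text.
assert np.allclose(partial_inverse(partial_inverse(A, 0), 1), np.linalg.inv(A))
```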
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**The Scent of the Roses** The Scent of the Roses: The Scent of the Roses is a novel by the American writer Aleen Leslie. The title comes from a line of a poem by Thomas Moore: "You may break, you may shatter the vase, if you will, But the scent of the roses will hang round it still." Set in 1908 Pittsburgh, Pennsylvania, this nostalgic novel is the recollection of a year in the life of Jane Carlyle. At age ten she crosses the threshold of a large house in Squirrel Hill and meets the Weber family, owners of a department store in the adjacent booming steel town of Braddock, and soon Jane is swept up into the exciting and sometimes eccentric happenings of the Weber household. This is the debut novel of Aleen Leslie, a native Pittsburgher best known for her Hollywood screenwriting credits.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Blind write** Blind write: In computing, a blind write occurs when a transaction writes a value without reading it. Any view serializable schedule that is not conflict serializable must contain a blind write. In particular, a write wi(X) is said to be blind if it is not the last action on resource X and the following action on X is a write wj(X).
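A small Python sketch (illustrative; the schedule format and helper name are hypothetical) of the definition above: a write is blind when the next action on the same resource is also a write.

```python
# A schedule is a list of (transaction, operation, resource) tuples,
# e.g. ("T1", "w", "X") or ("T2", "r", "X").
def blind_writes(schedule):
    """Indices of blind writes: writes that are not the last action on their
    resource and whose next action on that resource is also a write."""
    blind = []
    for i, (_, op, res) in enumerate(schedule):
        if op != "w":
            continue
        nxt = next((a for a in schedule[i + 1:] if a[2] == res), None)
        if nxt is not None and nxt[1] == "w":
            blind.append(i)
    return blind

# T1's write of X is blind: T2 overwrites X without any intervening read.
print(blind_writes([("T1", "w", "X"), ("T2", "w", "X"), ("T3", "r", "X")]))  # [0]
```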
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GNU Units** GNU Units: GNU Units is a cross-platform computer program for conversion of units of quantities. It has a database of measurement units, including esoteric and historical units. This, for instance, allows conversion of velocities specified in furlongs per fortnight, and pressures specified in tons per acre. Output units are checked for consistency with the input, allowing verification of conversion of complex expressions. History: GNU Units was written by Adrian Mariano as an implementation of the units utility included with the Unix operating system. It was originally available under a permissive license. The GNU variant is distributed under the GPL, although the FreeBSD project maintains a free fork of units from before the license change. units (Unix utility): The original units program has been a standard part of Unix since the early Bell Laboratories versions. Source code for a version very similar to the original is available from the Heirloom Project. The GNU implementation: GNU units includes several extensions to the original version, including: exponents can be written with ^ or **; exponents can be larger than 9 if written with ^ or **; rational and decimal exponents are supported; sums of units (e.g., btu + ft lbf) can be converted; conversions can be made to sums of units, termed unit lists (e.g., from degrees to degrees, minutes, and seconds); units that measure reciprocal dimensions can be converted (e.g., S to megohm); parentheses for grouping are supported, which sometimes allows more natural expressions, such as in the example given in Complex units expressions; roots of units (e.g., sqrt((lbf/inch) / lb)) can be computed; nonlinear units conversions (e.g., °F to °C) are supported; functions such as sin, cos, ln, log, and log2 are included; and a script for updating the currency conversions is included (the script requires Python). Units definitions, including nonlinear conversions and unit lists, are user extensible. The plain text database definitions.units is a good reference in itself, as it is extensively commented and cites numerous sources. Other implementations: UDUNITS is a similar utility program, except that it has an additional programming library interface and date conversion abilities. UDUNITS is considered the de facto program and library for variable unit conversion for netCDF files. History: Version history: GNU Units version 2.19 was released on 31 May 2019, to reflect the new 2019 revision of the SI; version 2.14, released on 8 March 2017, fixed several minor bugs and improved support for building on Windows. Version 2.10, released on 26 March 2014, added support for rational exponents greater than one, and added the ability to save an interactive session in a file to provide a record of the conversions performed. Beginning with version 2.10, a 32-bit Windows binary distribution has been available on the project Web page (a 32-bit Windows port of version 1.87 has been available since 2008 as part of the GnuWin32 project). History: Version 2.02, released on 11 July 2013, added hexadecimal floating-point output and two other options to simplify changing the output format. History: Version 2.0, released on 2 July 2012, added the ability to convert to sums of units, such as hours and minutes or feet and inches. In addition, this release added support for UTF-8 encoding. Provision for locale-specific unit definitions was added. The syntax for defining non-linear units was changed, and optional domain and range specifications were added.
The names of the standard and personal units data files were changed, and the currency definitions were placed in a separate data file; a Python script for updating the currency definitions was added. History: The version history is covered in detail in the NEWS file included with the source distribution. Usage: Units will output the result of the conversion in two lines. Usually, the first line (multiplication) is the desired result; the second line is the same conversion expressed as a division. Units can also function as a general-purpose scientific calculator; it includes several built-in mathematical functions such as sin, cos, atan, ln, exp, etc. Attempting to convert types of measurements that are incompatible will cause units to print a conformability error message and display a reduced form of each measurement. Examples: The examples that follow show results from GNU units version 2.10. Examples: Interactive mode:
Currency exchange rates from www.timegenie.com on 2014-03-28
2729 units, 92 prefixes, 77 nonlinear units
You have: 10 furlongs
You want: miles
        * 1.25
        / 0.8
You have: 1 gallon + 3 pints
You want: quarts
        * 5.5
        / 0.18181818
You have: sqrt(meter)
Unit not a root
You have: sqrt(acre)
You want: ft
        * 208.71033
        / 0.0047913298
You have: 21 btu + 6500 ft lbf
You want: btu
        * 29.352939
        / 0.034068139
You have: _
You want: J
        * 30968.99
        / 3.2290366e-005
You have: 3.277 hr
You want: time
        3 hr + 16 min + 37.2 sec
You have: 1|2 inch
You want: cm
        * 1.27
        / 0.78740157
The underscore ('_') is used to indicate the result of the last successful unit conversion. Examples: On the command line (non-interactive). Complex units expressions: One form of the Darcy–Weisbach equation for fluid flow is $\Delta P = \frac{8}{\pi^2}\,\frac{\rho f L Q^2}{d^5}$, where $\Delta P$ is the pressure drop, $\rho$ is the mass density, $f$ is the (dimensionless) friction factor, $L$ is the length of the pipe, $Q$ is the volumetric flow rate, and $d$ is the pipe diameter. It might be desirable to have the equation in the form $\Delta P = A_1\,\frac{\rho f L Q^2}{d^5}$ that would accept typical US units; the constant $A_1$ could be determined manually using the unit-factor method, but it could be determined more quickly and easily using units. Crane Technical Paper No. 410, Eq. 3-5, gives the multiplicative value as 43.5.
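For the non-interactive form mentioned above, an invocation might look like the following; this simply mirrors the interactive furlongs example and is an illustrative sketch rather than output reproduced from the source.

```
$ units '10 furlongs' 'miles'
        * 1.25
        / 0.8
```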
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Moon Whistle** Moon Whistle: Moon Whistle (ムーンホイッスル, Mūn Hoissuru) is a Japanese-language freeware role-playing video game created with RPG Tsukūru 95. Mainly made by Kōichirō Takaki (高木 幸一郎), also known as Kannazuki Sasuke (神無月サスケ), this game involves the adventure of a five-year-old kindergartner in a pseudo-Japanese city of the 1980s and 1990s. It won an ASCII-held monthly contest, Internet Contest Park; placed in two popular votes on Internet Contest Park, reaching #1 and #8 respectively; and received an honorable mention in the ASCII Entertainment Software Contest. Overview: Moon Whistle is a story-based role-playing game, featuring sprite characters and objects, 2D exploration maps, cutscene dialogues, and combat ― the conventional fare of old-school role-playing games. The game's prominent feature is its setting; in contrast to the usual save-the-world plot in a high fantasy setting, the story of Moon Whistle unfolds mostly in a fictionalized modern Japanese city, complete with original chip sets and character sprites. Much like the Mother series, most of the main characters are preteen children. The weapons they wield are everyday items and toys, they wear casual clothing or mascot costumes instead of armor, and the monsters they fight include oddities like burner/tripod monsters and iron frogs. All of the music was composed for the game under the pseudonym Saia Hyōseki (氷石彩亜). Overview: Its story revolves around a kindergarten-aged boy, Zenon (ぜのん), and his friends, while also addressing topics like alienation, mainly from the viewpoint of another lead character, X-Ranger (Xレンジャー). Zenon generally does not speak lines, with a few exceptions. The whole story is divided into over 20 small episodes, each of which takes place over a single day. Once the player finishes a day, the story continues with the next featured day. Gameplay: While unique in appearance, its gameplay is customary for a role-playing game: buying items in shops, equipping gadgets, random encounters, boss battles, leveling up, exploration, communicating with townspeople, and so forth. Its battle system is the default one of RPG Tsukūru 95, so battles are conducted in a front-view, turn-based fashion that resembles that of the SNES Dragon Quest games. Plot: Set in Motomachi Town (もとまちタウン), a fictional 1980s Japanese suburb, the story begins on Zenon's first school day following spring vacation. After school, he goes to a hill in the town together with Narumi looking for X-Ranger, a superhero in a children's TV show, who allegedly appears in Motomachi Town for some reason. After meeting a man in X-Ranger's costume, Zenon and Narumi fight a bad monster on the hill in hope of helping him. Thanks to this, Zenon becomes popular enough with school-age children to be invited to their "secret base" to join them. Plot: Later, Zenon and the gang happen to visit X-Ranger's hideout, and touch the panel of a giant machine sitting there without his consent. They then find themselves in a strange town, where they save Zeta (ツェータ) from Hart (ハルト), a spoiled, mean child, and befriend him. Though initially prohibited by X-Ranger from using the machine, they are later allowed to travel between the two cities. A cat-like creature, Max (マックス), also becomes acquainted with them. Plot: After some events, Zeta notices that Zenon's town is very similar to his own, Motomachi City (もとまちシティ), and questions X-Ranger about it.
He reveals that he is the 19-year-old Zenon, who came from the future (Motomachi City) to his past (Motomachi Town) in a time machine in order to recover people's hopes and dreams. Eidos (エイドス), a Priest of Time (時の神官), later confronts X-Ranger, claiming that X-Ranger's actions cause the timeline to shift, and also confronts Max, who, being a Priest of Time as well, helped X-Ranger complete the time machine. Plot: In autumn, a new toy, Rocket Monster (ロケットモンスター), becomes a fad among kids in Motomachi City. However, one of them suddenly goes berserk, which results in severe criticism of the product by parents. Zenon and his friends find out that it was plotted by Hart's father, who intended to damage the profits of a rival firm by harming the reputation of its product, Rocket Monster. In consequence, Rocket Monster succeeds in recovering its status. Plot: After the next year begins, everyone in Motomachi City suddenly becomes melancholic. Zenon and his friends investigate the case, and fight X-Ranger's shadow. After the battle, X-Ranger realizes that this world is not real and that in reality he is presumably unconscious after a suicide attempt. This dream world is also shared by Hart in reality. Hart acquires immense power in the dream, and challenges Zenon and Zeta again, only to be defeated. Out of spite, he destroys Sphere (スフィア), enshrined in the Realm of Time (時の神殿), which maintains the order of the worlds, while the master of the shrine, Volatilis (ボラティル), restores it in exchange for his own life. Despite this, Sphere remains split in two. Plot: Since Sphere was broken, Zenon is no longer able to access the future. X-Ranger decides to return to his reality and, together with Zenon, visits the Realm of Time to challenge the Final Trial, in which X-Ranger faces the problems surrounding him. At the end of the Trial, Volatilis' spirit lies in wait to kill Zenon so as to extinguish the dream world. With the help of X-Ranger, however, Zenon and his friends defeat Volatilis. After this, X-Ranger and Max each return to their own worlds. In March, Zenon and Narumi attend the graduation ceremony of their kindergarten. Off-shoots: As a spin-off of Moon Whistle, Another Moon Whistle: Kuzureteku Nyūdōgumo (アナザームーンホイッスル くずれてく入道雲, lit. "Another Moon Whistle: Collapsing Cumulonimbus"), developed with RPG Tsukūru 2000, was released in April 2003. Boku no Sumu Machi (ぼくのすむまち, literally "A town where I live") was an omnibus of six short RPGs created with RPG Tsukūru for Mobile and provided by Enterbrain via NTT Docomo's i-appli service. Its sixth episode featured X-Ranger, Zenon, and Narumi.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fagin's theorem** Fagin's theorem: Fagin's theorem is the oldest result of descriptive complexity theory, a branch of computational complexity theory that characterizes complexity classes in terms of logic-based descriptions of their problems rather than by the behavior of algorithms for solving those problems. Fagin's theorem: The theorem states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP. It was proven by Ronald Fagin in 1973 in his doctoral thesis, and appears in his 1974 paper. The arity required by the second-order formula was improved (in one direction) in Lynch (1981), and several results of Grandjean have provided tighter bounds on nondeterministic random-access machines. Proof: In addition to Fagin's 1974 paper, Immerman (1999) provides a detailed proof of the theorem. It is straightforward to show that every existential second-order formula can be recognized in NP, by nondeterministically choosing the value of all existentially quantified variables, so the main part of the proof is to show that every language in NP can be described by an existential second-order formula. To do so, one can use second-order existential quantifiers to arbitrarily choose a computation tableau. In more detail, for every timestep of an execution trace of a non-deterministic Turing machine, this tableau encodes the state of the Turing machine, its position in the tape, the contents of every tape cell, and which nondeterministic choice the machine makes at that step. A first-order formula can constrain this encoded information so that it describes a valid execution trace, one in which the tape contents and Turing machine state and position at each timestep follow from the previous timestep. Proof: A key lemma used in the proof is that it is possible to encode a linear order of length n^k (such as the linear orders of timesteps and tape contents at any timestep) as a 2k-ary relation R on a universe A of size n. One way to achieve this is to choose a linear ordering L of A and then define R to be the lexicographical ordering of k-tuples from A with respect to L.
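As a standard illustration (not taken from the source text) of the characterization above, graph 3-colorability, an NP property of a graph with edge relation E, can be written as an existential second-order sentence: the second-order quantifiers guess the three color classes, and a first-order part checks that the guess is a valid coloring.

```latex
\exists R\,\exists G\,\exists B\;\Bigl[
  \forall x\,\bigl(R(x)\lor G(x)\lor B(x)\bigr)
  \;\land\;
  \forall x\,\forall y\,\Bigl(E(x,y)\rightarrow
     \neg\bigl(R(x)\land R(y)\bigr)\land
     \neg\bigl(G(x)\land G(y)\bigr)\land
     \neg\bigl(B(x)\land B(y)\bigr)\Bigr)\Bigr]
```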
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Behavior tree** Behavior tree: Behavior trees are a formal, graphical modelling language used primarily in systems and software engineering. Behavior trees employ a well-defined notation to unambiguously represent the hundreds or even thousands of natural language requirements that are typically used to express the stakeholder needs for a large-scale software-integrated system. Overview: The amount of detail in the large number of natural language requirements for a large-scale system causes short-term memory overload and may create a barrier that prevents anyone from gaining a deep, accurate and holistic understanding of the system needs. Also, because of the use of natural language, there are likely to be many ambiguities, aliases, inconsistencies, redundancies and incompleteness problems associated with the requirements information. This adds further to the uncertainty and complexity. Generally, at best, a few people understand parts of the system or situation well, but no one has more than a superficial understanding of the whole – that is, the detailed integrated behavior of the system. Overview: The behavior tree representation (with the help of the composition tree representation, which resolves alias and other vocabulary problems with large sets of requirements) allows people to avoid short-term memory overload and produce a deep, accurate, holistic representation of system needs that can be understood by all stakeholders, because it strictly uses the vocabulary of the original requirements. Because the behavior tree notation uses a formal semantics, any given example either is already executable or can be made executable. Overview: Behavior tree forms: Single and composite (or integrated) behavior tree forms are both important in the application of behavior trees in systems and software engineering. Requirement behavior trees: Initially, individual requirement behavior trees (RBTs) are used to capture all the fragments of behavior in each individual natural language requirement by a process of rigorous, intent-preserving and vocabulary-preserving translation. The translation process can uncover a range of defects in the original natural language requirements. Overview: Integrated behavior trees: Because a set of requirements implies the integrated behavior of a system, all the individual requirement behavior trees can be composed to construct an integrated behavior tree (IBT) that provides a single holistic view of the emergent integrated behavior of the system. This enables the building of the integrated behavior of a system out of its requirements. An analogy to help describe this process is the transition from a randomly arranged set of jigsaw puzzle pieces to putting each of the pieces in its appropriate place. When we do this, we see each piece of information in its intended context and we see the pieces of information as a whole and the emergent properties of the whole. Having all the requirements converted to behavior trees (RBTs) is similar to having all the pieces for a jigsaw puzzle randomly spread out on a table – until we put all the pieces together we cannot see the emergent picture and whether any pieces are missing or do not fit. Constructing an integrated behavior tree (IBT) allows us to do this. Overview: Behavior engineering process: Representation used – (critical) BEHAVIOR TREES provide a vehicle for growing a shared understanding of a complex system.
The role of the COMPOSITION TREE in the overall process is to provide a vehicle for overcoming the imperfect knowledge associated with the large set of requirements for a system. Process used – (critical) BEHAVIOR ENGINEERING uses behavior trees to control complexity while growing a shared understanding of a complex system. That shared, holistic understanding of a complex system, because it integrates the requirements, shows the emergent behavior of the system implied by the requirements. History: Behavior trees and the concepts for their application in systems and software engineering were originally developed by Dromey, with the first publication of some of the key ideas in 2001. Early publications on this work used the terms "genetic software engineering" and "genetic design" to describe the application of behavior trees. The reason for originally using the word genetic was because sets of genes, sets of jigsaw puzzle pieces and sets of requirements represented as behavior trees all appeared to share several key properties: they contained enough information as a set to allow them to be composed (with behavior trees this allows a system to be built out of its requirements); the order in which the pieces were put together was not important (with requirements this aids coping with complexity); and when all the members of the set were put together, the resulting integrated entity exhibited a set of important emergent properties. For behavior trees, important emergent properties include the integrated behavior of the system implied by the requirements and the coherent behavior of each component referred to in the requirements. These genetic parallels, in another context, were originally spelled out by Woolfson (A. Woolfson, Living Without Genes, Flamingo, 2000). Further weight for use of the term genetic came from the eighteenth-century thinker Giambattista Vico, who said, "To understand something, and not merely be able to describe it, or analyse it into its component parts, is to understand how it came into being – its genesis, its growth … true understanding is always genetic". Despite these legitimate genetic parallels, it was felt that this emphasis led to confusion with the concept of genetic algorithms. As a result, the term behavior engineering was introduced to describe the processes that exploit behavior trees to construct systems. The term "behavior engineering" has previously been used in a specialized area of Artificial Intelligence – robotics research. The present use embraces a much broader rigorous formalization and integration of large sets of behavioral and compositional requirements needed to model large-scale systems. History: Since the behavior tree notation was originally conceived, a number of people from the DCCS (Dependable Complex Computer-based Systems Group – a joint University of Queensland, Griffith University research group) have made important contributions to the evolution and refinement of the notation and to the use of behavior trees. Members of this group include: David Carrington, Rob Colvin, Geoff Dromey, Lars Grunske, Ian Hayes, Diana Kirk, Peter Lindsay, Toby Myers, Dan Powell, John Seagrott, Cameron Smith, Larry Wen, Nisansala Yatapanage, Kirsten Winter, Saad Zafar, Forest Zheng. History: Probabilistic timed behavior trees have recently been developed by Colvin, Grunske and Winter so that reliability, performance and other dependability properties can be expressed.
Key concepts: Behavior tree notation: A behavior tree is used to formally represent the fragment of behavior in each individual requirement. Behavior for a large-scale system in general, where concurrency is admitted, appears abstractly as a set of communicating sequential processes. The behavior tree notation captures these composed component-states in a simple tree-like form. Key concepts: Behavior is expressed in terms of components realizing states and components creating and breaking relations. Using the logic and graphic forms of conventions found in programming languages, components can support actions, composition, events, control-flow, data-flow, and threads. Traceability tags (see Section 1.2 of the behavior tree notation) in behavior tree nodes link the formal representation to the corresponding natural language requirement. Behavior trees accurately capture behavior expressed in the natural language representation of functional requirements. Requirements behavior trees strictly use the vocabulary of the natural language requirements but employ graphical forms for behavior composition in order to eliminate the risk of ambiguity. By doing this they provide a direct and clearly traceable relationship between what is expressed in the natural language representation and its formal specification. A basis of the notation is that behavior is always associated with some component. Component-states, which represent nodes of behavior, are composed sequentially or concurrently to construct a behavior tree that represents the behavior expressed in the natural language requirements. Key concepts: At its leaf nodes, a behavior tree may revert (symbolized by adding the caret operator ^) to an ancestor node to repeat behavior, or may start a new thread (symbolized by two carets ^^). A behavior tree specifies state changes in components, how data and control are passed between components, and how threads interact. There are constructs for creating and breaking relations. There are also constructs for setting and testing states of components, as well as mechanisms for inter-process communication that include message passing (events), shared variable blocking and synchronization. For a complete reference to behavior tree notation, version 1.0, see Behavior Tree Notation v1.0 (2007). Semantics: The formal semantics of behavior trees is given via a process algebra and its operational semantics. The semantics has been used as the basis for developing simulation, model checking and failure modes and effects analysis. Requirements translation: Requirements translation is the vehicle used to cross the informal-formal barrier. Consider the process of translation for requirement R1 below. The first tasks are to identify the components (bold), identify the behaviors (underline) and identify indicators of the order (italics) in which behaviors take place. The corresponding behavior tree can then be constructed. What is clear from the outcome of this process is that apart from pronouns, definite articles, etc., essentially all the words in the sentences that contribute to the behavior they describe have been accounted for and used.
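A minimal Python sketch of the ideas just described (component, requirement, and state names are hypothetical, and this is a schematic data structure, not the official behavior tree notation): each node pairs a component with a behavior it realizes, carries a traceability tag back to its requirement, composes sequentially via children, and can mark reversion (^) to an ancestor.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    component: str                    # the component that realizes the behavior
    behavior: str                     # state realized, event, or condition
    tag: str                          # traceability tag, e.g. "R1"
    revert: bool = False              # '^' : repeat behavior from a matching ancestor
    children: List["Node"] = field(default_factory=list)

# Hypothetical requirement R1: "When the door closes, the light turns off."
r1 = Node("DOOR", "Closed", "R1", children=[
        Node("LIGHT", "Off", "R1", children=[
            Node("DOOR", "Closed", "R1", revert=True)])])   # loop back and wait again

def walk(node, depth=0):
    """Print the tree in an indented, tagged form."""
    mark = " ^" if node.revert else ""
    print("  " * depth + f"{node.tag}: {node.component} [{node.behavior}]{mark}")
    for child in node.children:
        walk(child, depth + 1)

walk(r1)
```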
Key concepts: Requirements integration: Once the set of requirements has been formalized as individual requirement behavior trees, two joint properties of systems and requirements need to be exploited in order to proceed with composing the integrated behavior tree: In general, a fragment of behavior expressed by a requirement always has associated with it a precondition which needs to be satisfied before the behavior can take place (this precondition may or may not be expressed in the requirement). Key concepts: If the requirement is really part of the system, then some other requirement in the set must establish the precondition needed in (1). For requirements represented as behavior trees, this amounts to finding where the root node of one tree occurs in some other behavior tree and integrating the two trees at that node. The example below illustrates requirements integration for two requirements, R1 and R3. In other words, it shows how these two requirements interact. Operations on integrated behavior trees: Once an integrated behavior tree has been composed, there are a number of important operations that can be performed upon it. Key concepts: Inspection – defect detection and correction: In general, many defects become much more visible when there is an integrated view of the requirements and each requirement has been placed in the behavior context where it needs to execute. For example, it is much easier to tell whether a set of conditions or events emanating from a node is complete and consistent. The traceability tags also make it easy to refer back to the original natural-language requirements. There is also the potential to automate a number of defect and consistency checks on an integrated behavior tree. When all defects have been corrected and the IBT is logically consistent and complete, it becomes a model behavior tree (MBT), which serves as a formal specification for the system's behavior that has been constructed out of the original requirements. This is the clearly defined stopping point for the analysis phase. With other modelling notations and methods (for instance, with UML) it is less clear-cut when modelling can stop. In some cases, parts of a model behavior tree may need to be transformed to make the specification executable. Once an MBT has been made executable, it is possible to carry out a number of other dependability checks. Key concepts: Simulation: A model behavior tree can be readily simulated to explore the dynamic properties of the system. Both a symbolic tool and a graphics tool have been constructed to support these activities. Model-checking: A translator has been written to convert a model behavior tree into the "actions systems" language. This input can then be fed into the SAL model checker in order to allow checks to be made as to whether certain safety and security properties are satisfied. Key concepts: Failure mode and effects analysis (FMEA): Model-checking has often been applied to system models to check that hazardous states cannot be reached during normal operation of the system. It is possible to combine model-checking with behavior trees to provide automated support for failure mode and effects analysis (FMEA). The advantage of using behavior trees for this purpose is that they allow the formal method aspects of the approach to be hidden from non-expert users.
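The R1/R3 integration example referred to in the requirements-integration step above is not reproduced in this text. The following Python sketch (components, states, and requirement contents are hypothetical) illustrates only the mechanism: locating the root node of one requirement behavior tree inside another and grafting the two trees at that node.

```python
# Trees are nested dicts: {"node": ("COMPONENT", "Behavior"), "children": [...]}.
def integrate(tree, fragment):
    """Graft `fragment`'s children onto the first node of `tree` that matches
    `fragment`'s root (same component and behavior).  Returns True on success."""
    if tree["node"] == fragment["node"]:
        tree["children"].extend(fragment["children"])
        return True
    return any(integrate(child, fragment) for child in tree["children"])

# Hypothetical R1 establishes the precondition (LIGHT [Off]) that hypothetical
# R3's fragment of behavior needs, so R3 is grafted at that node.
r1 = {"node": ("OVEN", "Door Closed"), "children": [
        {"node": ("LIGHT", "Off"), "children": []}]}
r3 = {"node": ("LIGHT", "Off"), "children": [
        {"node": ("POWER", "Standby"), "children": []}]}

assert integrate(r1, r3)
```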
Key concepts: Requirements change: The ideal that is sought when responding to a change in the functional requirements for a system is that it can be quickly determined: where to make the change, how the change affects the architecture of the existing system, which components of the system are affected by the change, and what behavioral changes will need to be made to the components (and their interfaces) that are affected by the change of requirements. Because a system is likely to undergo many sets of changes over its service time, there is also a need to record, manage and optimize the system's evolution driven by the change sequence. Key concepts: A traceability model, which uses behavior trees as a formal notation to represent functional requirements, reveals change impacts on different types of design constructs (documents) caused by the changes of the requirements. The model introduces the concept of evolutionary design documents that record the change history of the designs. From these documents, any version of a design document as well as the difference between any two versions can be retrieved. An important advantage of this model is that the major part of the procedure to generate these evolutionary design documents can be supported by automated tools. Key concepts: Code generation and execution: The behavior tree representation of the integrated behavior of the system affords several important advantages as an executable model. It clearly separates the tasks of component integration from the task of individual component implementation. The integrated behavior of the system that emerges from integrating the requirements can be used as a foundation to create a design by applying design decisions. The result is a design behavior tree (DBT): an executable multithreaded component integration specification that has been built out of the original requirements. Key concepts: Behavior tree models are executed in a virtual machine called the behavior run-time environment (BRE). The BRE links together components using middleware, allowing components to be independent programs written in one of several languages that can be executed in a distributed environment. The BRE also contains an expression parser that automatically performs simple operations to minimize the amount of code required to be manually implemented in the component. Key concepts: The implementation of components is supported by views that are automatically extractable from the DBT. These views provide the component behavior trees (CBTs) of individual components together with the interfaces of individual components. This information, together with the information in the integrated composition tree (ICT) captured about each individual component, provides the information that is needed to implement each individual component. Key concepts: Several BREs can be linked together to form complex systems using a system-of-systems construct and the behavior engineering component integration environment (BECIE). BECIE is also used to monitor and control the behavior tree models being executed within a BRE, similar to supervisory control and data acquisition (SCADA) systems used in industrial process control. Executable behavior trees have been developed for case studies including automated train protection, mobile robots with dynamic object following, an ambulatory infusion pump and traffic light management systems.
A version of the BRE suited for embedded systems (eBRE) is also available; it has reduced functionality to tailor it to small-footprint microcontrollers. Applications: Behavior tree modelling can be, and has been, applied to a diverse range of applications over a number of years. Some of the main application areas are described below. Applications: Large-scale systems: Modeling large-scale systems with large sets of natural-language requirements has always been the major focus for trialling behavior trees and the overall behavior engineering process. Conducting these evaluations and trials of the method has involved work with a number of industry partners and government departments in Australia. The systems studied have included a significant number of defense systems, enterprise systems, transportation systems, information systems, health systems and sophisticated control systems with stringent safety requirements. The results of these studies have all been commercial-in-confidence. However, the results of the extensive industry trials with Raytheon Australia are presented below in the industry section. Applications: What all this work has consistently shown is that, by translating requirements and creating dynamic and static integrated views of requirements, a very significant number of major defects are discovered early, over and above the defects that are found by current industry best practice. Applications: Embedded systems: Failure of a design to satisfy a system's requirements can result in schedule and cost overruns. If there are also critical dependability issues, not satisfying system requirements can have life-threatening consequences. However, in current approaches, ensuring requirements are satisfied is often delayed until late in the development process, during a cycle of testing and debugging. This work describes how the system development approach, behavior engineering, can be used to develop software for embedded systems. The result is a model-driven development approach that can create embedded system software that satisfies its requirements, as a result of applying the development process. Applications: Hardware–software systems: Many large-scale systems consist of a mixture of co-dependent software and hardware. The different nature of software and hardware means they are often modelled separately, using different approaches. This can subsequently lead to integration problems due to incompatible assumptions about hardware/software interactions. These problems can be overcome by integrating behavior trees with the Modelica mathematical modelling approach. The environment and hardware components are modelled using Modelica and integrated with an executable software model that uses behavior trees. Applications: Role-based access control: To ensure correct implementation of complex access control requirements, it is important that the validated and verified requirements are effectively integrated with the rest of the system. It is also important that the system can be validated and verified early in the development process. An integrated, role-based access control model has been developed. The model is based on the graphical behavior tree notation, and can be validated by simulation, as well as verified using a model checker.
Using this model, access control requirements can be integrated with the rest of the system from the outset, because: a single notation is used to express both access control and functional requirements; a systematic and incremental approach to constructing a formal behavior tree specification can be adopted; and the specification can be simulated and model checked. The effectiveness of the model has been evaluated using a case study with distributed access control requirements. Applications: Biological systems: Because behavior trees describe complex behavior, they can be used for describing a range of systems not limited to those that are computer-based. In a biological context, BTs can be used to piece together a procedural interpretation of biological functions described in research papers, treating the papers as the requirements documents as described above. This can help to construct a more concrete description of the process than is possible from reading only, and can also be used as the basis for comparing competing theories in alternative papers. In ongoing research, the behavior tree notation is being used to develop models of brain function in rats under fear conditioning. Applications: Game AI modeling: While BTs have become popular for modeling the artificial intelligence in computer games such as Halo and Spore, these types of trees are very different from the ones described on this page, and are closer to a combination of hierarchical finite state machines or decision trees. Soccer-player modeling has also been a successful application of BTs. Applications: Model-based testing is an approach to software testing that requires testers to create test models from the requirements of the software under test (SUT). Traditionally, UML state charts, FSMs, EFSMs, and flow charts are used as the modeling languages. Recently, an interesting approach in which an Event-Driven Swim Lane Petri Net (EDSLPN) is used as the modeling language has also appeared. Behavior tree notation should also be considered a good modeling notation for MBT, and it has a few advantages over other notations: it has the same expressiveness level as UML state charts and EDSLPN; it is intuitive to use as a modeling notation due to its graphical nature; and each behavior tree node has a requirement tag, which makes creating a traceability matrix from requirement to test artifact a piece of cake. Such an attempt has been made here. The MBTester is composed of a modeler and a test case generation engine. Business owners or testers translate their requirements into behavior trees using the modeler, and then (optionally) integrate a few related behavior trees into a composite one. A behavior tree can be fed into the backend engine to generate test cases, test scripts, and test data automatically. Scalability and industry applications: The first industry trials to test the feasibility of the method and refine its capability were conducted in 2002. Over the last three years, a number of systematic industry trials on large-scale defence, transportation and enterprise systems have been conducted. This work has established that the method scales to systems with large numbers of requirements, but also that it is important to use tool support in order to efficiently navigate and edit such large integrated views of graphical data. Several main results have come out of this work with industry.
On average, over a number of projects, 130 confirmed major defects per 1000 requirements have consistently been found after normal reviews and corrections have been made. With less mature requirements sets much higher defect rates have been observed. An important part of this work with industry has involved applying the analysis part of the method to six large-scale defence projects for Raytheon Australia. They see the method as "a key risk mitigation strategy, of use in both solution development and as a means of advising the customer on problems with acquisition documentation". An outcome of these industry trials has been the joint development with Raytheon Australia of an industry-strength tool to support the analysis, editing and display of large integrated sets of requirements. More extensive details of industry findings can be found on the Behavior Engineering website. Dr Terry Stevenson (chief technical officer, Raytheon Australia) and Mr Jim Boston (senior project manager, Raytheon Australia), Mr Adrian Pitman from the Australian Defence Materiel Organization, Dr Kelvin Ross (CEO, K.J. Ross & Associates) and Christine Cornish (Bushell & Cornish) have provided the special opportunities needed to support this research and to conduct the industry trials and live project work. This work has been supported by the Australian Research Council – ARC Centre for Complex Systems and funds received from industry. Benefits, advantages: As a behavior modelling representation, behavior trees have a number of significant benefits and advantages: They employ a well-defined and effective strategy for dealing with requirements complexity, particularly where the initial needs of a system are expressed using hundreds or thousands of requirements written in natural language. This significantly reduces the risk on large-scale projects. By rigorously translating and then integrating requirements at the earliest possible time they provide a more effective means for uncovering requirements defects than competing methods. They employ a single, simple notation for analysis, for specification and for representing the behavior design of a system. They represent the system behavior as an executable integrated whole. They build the behavior of a system out of its functional requirements in a directly traceable way which aids verification and validation. They can be understood by stakeholders without the need for formal methods training. By strictly retaining the vocabulary of the original requirements, they ease the burden of understanding. They have a formal semantics, they support concurrency, they are executable and they can be simulated, model-checked and used to undertake failure mode and effects analysis. They can be used equally well to model human processes, to analyse contracts, to represent forensic information, to represent biological systems, and numerous other applications. In each case they deliver the same benefits in terms of managing complexity, and seeing things as a whole. They can also be used for safety-critical systems, embedded systems and real-time systems. Criticisms, disadvantages: For small, textbook-level examples, their tree-like nature means that the graphic models produced are sometimes not as compact as statechart or state machine behavior specifications. Tool support is needed to navigate the very large integrated behavior trees for systems that have hundreds or thousands of requirements. For group walkthroughs of very large systems, good display facilities are needed.
There is a need to provide additional sophisticated tool support to fully exploit integrated behavior tree models.
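The model-based-testing discussion above notes that every behavior tree node carries a requirement tag, which makes building a requirement-to-test traceability matrix straightforward. The Python sketch below illustrates only that idea; the node structure, field names and the toy tree are hypothetical and are not taken from the BRE, the MBTester or any other tool mentioned above.

```python
# Illustrative sketch only: a toy behavior-tree node with a requirement tag,
# used to derive candidate test paths and a requirement-to-test traceability matrix.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BTNode:
    component: str            # component whose behavior this node describes
    behavior: str             # state or event realised by the node
    requirement: str          # tag of the originating natural-language requirement
    children: List["BTNode"] = field(default_factory=list)

def test_paths(node: BTNode, prefix=()) -> List[tuple]:
    """Enumerate root-to-leaf paths; each path is one candidate test case."""
    path = prefix + (node,)
    if not node.children:
        return [path]
    paths = []
    for child in node.children:
        paths.extend(test_paths(child, path))
    return paths

def traceability(paths: List[tuple]) -> Dict[str, List[int]]:
    """Map each requirement tag to the indices of the test paths that cover it."""
    matrix: Dict[str, List[int]] = {}
    for i, path in enumerate(paths):
        for node in path:
            matrix.setdefault(node.requirement, []).append(i)
    return matrix

# A tiny, made-up tree: requirement R1 starts the system, R2 and R3 describe
# two alternative responses.
tree = BTNode("System", "powered-on", "R1", [
    BTNode("Door", "opened", "R2"),
    BTNode("Alarm", "raised", "R3"),
])

paths = test_paths(tree)
print(len(paths), "candidate test paths")
print(traceability(paths))   # e.g. {'R1': [0, 1], 'R2': [0], 'R3': [1]}
```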
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Isovaleryl-CoA dehydrogenase** Isovaleryl-CoA dehydrogenase: In enzymology, an isovaleryl-CoA dehydrogenase (EC 1.3.8.4) is an enzyme that catalyzes the chemical reaction 3-methylbutanoyl-CoA + acceptor ⇌ 3-methylbut-2-enoyl-CoA + reduced acceptor. Thus, the two substrates of this enzyme are 3-methylbutanoyl-CoA and acceptor, whereas its two products are 3-methylbut-2-enoyl-CoA and reduced acceptor. This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-CH group of donors with other acceptors. The systematic name of this enzyme class is 3-methylbutanoyl-CoA:acceptor oxidoreductase. Other names in common use include isovaleryl-coenzyme A dehydrogenase, isovaleroyl-coenzyme A dehydrogenase, and 3-methylbutanoyl-CoA:(acceptor) oxidoreductase. This enzyme participates in valine, leucine and isoleucine degradation. It employs one cofactor, FAD. Structural studies: As of late 2007, only one structure had been solved for this class of enzymes, with the PDB accession code 1IVH. It was determined by a group comprising K.A. Tiffany, D.L. Roberts, M. Wang, R. Paschke, A.-W.A. Mohsen, J. Vockley, and J.J.P. Kim. The structure was released on May 20, 1998 ("PDBsum entry: 1ivh", retrieved November 25, 2019).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Amifampridine** Amifampridine: Amifampridine is used as a drug, predominantly in the treatment of a number of rare muscle diseases. The free base form of the drug has been used to treat congenital myasthenic syndromes and Lambert–Eaton myasthenic syndrome (LEMS) through compassionate use programs since the 1990s and was recommended as a first-line treatment for LEMS in 2006, using ad hoc forms of the drug, since there was no marketed form. Amifampridine: Around 2000, doctors at Assistance Publique – Hôpitaux de Paris created a phosphate salt form, which was developed through a series of companies ending with BioMarin Pharmaceutical, which obtained European approval in 2009 under the trade name Firdapse and licensed the US rights to Catalyst Pharmaceuticals in 2012. As of January 2017, Catalyst and another US company, Jacobus Pharmaceutical, which had been manufacturing and giving the drug away for free since the 1990s, were both seeking FDA approval for their iterations and marketing rights. Amifampridine: Amifampridine phosphate has orphan drug status in the EU for Lambert–Eaton myasthenic syndrome and Catalyst holds both an orphan designation and a breakthrough therapy designation in the US. In May 2019 the U.S. Food and Drug Administration (FDA) approved amifampridine tablets under the trade name Ruzurgi for the treatment of Lambert–Eaton myasthenic syndrome (LEMS) in patients 6 to less than 17 years of age. This is the first FDA approval of a treatment specifically for pediatric patients with LEMS. The FDA granted the approval of Ruzurgi to Jacobus Pharmaceutical. The only other treatment approved for LEMS (Firdapse) is only approved for use in adults. Medical uses: Amifampridine is used to treat many of the congenital myasthenic syndromes, particularly those with defects in choline acetyltransferase, downstream kinase 7, and those where any kind of defect causes "fast channel" behaviour of the acetylcholine receptor. It is also used to treat symptoms of Lambert–Eaton myasthenic syndrome. Contraindications: Because it affects voltage-gated ion channels in the heart, it is contraindicated in people with long QT syndrome and in people taking a drug that might prolong QT time, such as sultopride, disopyramide, cisapride, domperidone, rifampicin or ketoconazole. It is also contraindicated in people with epilepsy or badly controlled asthma. Adverse effects: The dose-limiting side effects include tingling or numbness, difficulty sleeping, fatigue, and loss of muscle strength. Amifampridine can cause seizures, especially but not exclusively when given at high doses and/or in particularly vulnerable individuals who have a history of seizures. Interactions: The combination of amifampridine with pharmaceuticals that prolong QT time increases the risk of ventricular tachycardia, especially torsade de pointes; and combination with drugs that lower the seizure threshold increases the risk of seizures. Interactions via the liver's cytochrome P450 enzyme system are considered unlikely. Pharmacology: Mechanism of action In Lambert–Eaton myasthenic syndrome, acetylcholine release is inhibited as antibodies involved in the host response against certain cancers cross-react with Ca2+ channels on the prejunctional membrane. Amifampridine works by blocking potassium efflux in nerve terminals so that action potential duration is increased. Ca2+ channels can then be open for a longer time and allow greater acetylcholine release to stimulate muscle at the end plate.
Pharmacology: Pharmacokinetics Amifampridine is quickly and almost completely (93–100%) absorbed from the gut. In a study with 91 healthy subjects, maximum amifampridine concentrations in blood plasma were reached after 0.6 (±0.25) hours when taken without food, or after 1.3 (±0.9) hours after a fatty meal, meaning that the speed of absorption varies widely. Biological half-life (2.5±0.7 hours) and the area under the curve (AUC = 117±77 ng∙h/ml) also vary widely between subjects, but are nearly independent of food intake. The substance is deactivated by acetylation via N-acetyltransferases to the single metabolite 3-N-acetylamifampridine. Activity of these enzymes (primarily N-acetyltransferase 2) in different individuals seems to be primarily responsible for the mentioned differences in half-life and AUC: the latter is increased up to 9-fold in slow metabolizers as compared to fast metabolizers. Amifampridine is eliminated via the kidneys and urine, 74–81% as N-acetylamifampridine and 19% in unchanged form. Chemistry: 3,4-Diaminopyridine is a pale yellow to pale brown crystalline powder that melts at about 218–220 °C (424–428 °F) with decomposition. It is readily soluble in methanol, ethanol and hot water, but only slightly in diethyl ether. Solubility in water at 20 °C (68 °F) is 25 g/L. The drug formulation amifampridine phosphate contains the phosphate salt, more specifically 4-aminopyridine-3-ylammonium dihydrogen phosphate. This salt forms prismatic, monoclinic crystals (space group C2/c) and is readily soluble in water. The phosphate salt is stable, and does not require refrigeration. History: The development of amifampridine and its phosphate has brought attention to orphan drug policies that grant market exclusivity as an incentive for companies to develop therapies for conditions that affect small numbers of people. Amifampridine, also called 3,4-DAP, was discovered in Scotland in the 1970s, and doctors in Sweden first showed its use in LEMS in the 1980s. In the 1990s, doctors in the US, on behalf of the Muscular Dystrophy Association, approached a small family-owned manufacturer of active pharmaceutical ingredients in New Jersey, Jacobus Pharmaceuticals, about manufacturing amifampridine so they could test it in clinical trials. Jacobus did so, and when the treatment turned out to be effective, Jacobus and the doctors were faced with a choice: invest in clinical trials to get FDA approval, or give the drug away for free under a compassionate use program to about 200 patients out of the estimated 1500–3000 LEMS patients in the U.S. Jacobus elected to give the drug away to this subset of LEMS patients, and did so for about twenty years. Doctors at the Assistance Publique – Hôpitaux de Paris had created a phosphate salt of 3,4-DAP (3,4-DAPP), and obtained an orphan designation for it in Europe in 2002. The hospital licensed the intellectual property on the phosphate form to the French biopharma company OPI, which was acquired by EUSA Pharma in 2007, and the orphan application was transferred to EUSA in 2008. In 2008 EUSA submitted an application for approval to market the phosphate form to the European Medicines Agency under the brand name Zenas. EUSA, through a vehicle called Huxley Pharmaceuticals, sold the rights to 3,4-DAPP to BioMarin in 2009, the same year that 3,4-DAPP was approved in Europe under the new name Firdapse. The licensing of Firdapse in 2010 in Europe led to a sharp increase in price for the drug.
In some cases, this has led to hospitals using an unlicensed form rather than the licensed agent, as the price difference proved prohibitive. BioMarin has been criticized for licensing the drug on the basis of previously conducted research, and yet charging exorbitantly for it. A group of UK neurologists and pediatricians petitioned prime minister David Cameron in an open letter to review the situation. The company responded that it had submitted the licensing request at the suggestion of the French government, and pointed out that the increased cost of a licensed drug also means that it is monitored by regulatory authorities (e.g. for uncommon side effects), a process that was previously not present in Europe. A 2011 Cochrane review compared the cost of 3,4-DAP and 3,4-DAPP in the UK and found an average price for the 3,4-DAP base of £1/tablet and an average price for the 3,4-DAP phosphate of £20/tablet; the authors estimated a yearly cost per person of £730 for the base versus £29,448 for the phosphate formulation. Meanwhile, in Europe, a task force of neurologists had recommended 3,4-DAP as the first-line treatment for LEMS symptoms in 2006, even though there was no approved form for marketing; it was being supplied ad hoc. In 2007 the drug's international nonproprietary name was published by the WHO. In the face of the seven-year exclusivity that an orphan approval would give to BioMarin, and of the increase in price that would accompany it, Jacobus began racing to conduct formal clinical trials in order to get approval for the free base form before BioMarin; its first Phase II trial was opened in January 2012. In October 2012, while BioMarin had a Phase III trial ongoing in the US, it licensed the US rights to 3,4-DAPP, including the orphan designation and the ongoing trial, to Catalyst Pharmaceuticals. Catalyst anticipated that it could earn $300 to $900 million per year at peak sales for treatment of people with LEMS and other indications, and analysts anticipated the drug would be priced at around $100,000 in the US. Catalyst went on to obtain a breakthrough therapy designation for 3,4-DAPP in LEMS in 2013, an orphan designation for congenital myasthenic syndromes in 2015 and an orphan designation for myasthenia gravis in 2016. In August 2013, analysts anticipated that FDA approval would be granted to Catalyst in LEMS by 2015. In October 2014, Catalyst began making the drug available under an expanded access program. In March 2015, Catalyst obtained an orphan designation for the use of 3,4-DAPP to treat congenital myasthenic syndrome. In April 2015, Jacobus presented clinical trial results with 3,4-DAP at a scientific meeting. In December 2015 a group of 106 neuromuscular doctors who had worked with both Jacobus and BioMarin/Catalyst published an editorial in the journal Muscle & Nerve expressing concern about the potential for the price of the drug to be dramatically increased should Catalyst obtain FDA approval, and stating that 3,4-DAPP represented no real innovation and did not deserve exclusivity under the Orphan Drug Act, which was meant to spur innovation to meet unmet needs.
Catalyst responded to this editorial in 2016, explaining that it was conducting the full range of clinical and non-clinical studies necessary to obtain approval in order to specifically address the unmet need among the estimated 1500–3000 LEMS patients, since only about 200 were receiving the product through compassionate use, and that this is exactly what the Orphan Drug Act was intended to do: deliver approved products to orphan drug populations so that all patients have full access. In December 2015, Catalyst submitted its new drug application to the FDA, and in February 2016 the FDA refused to accept it, on the basis that it was not complete. In April 2016 the FDA told Catalyst it would have to gather further data. Catalyst cut 30% of its workforce, mainly from the commercial team it was building to support an approved product, to save money to conduct the trials. In March 2018 the company re-submitted its NDA. The FDA approved amifampridine for the treatment of adults with Lambert–Eaton myasthenic syndrome on November 29, 2018. In February 2019, U.S. Senator Bernie Sanders questioned the high price ($375,000) charged by Catalyst Pharmaceuticals for Firdapse. In May 2019, the privately held US company Jacobus Pharmaceutical of Princeton, New Jersey, gained FDA approval for amifampridine tablets (Ruzurgi) for the treatment of LEMS in patients 6 to less than 17 years of age. This is the first FDA approval of a treatment specifically for pediatric patients with LEMS. Firdapse is only approved for use in adults. Although Ruzurgi has been approved for pediatric patients, this approval makes it possible for adults with LEMS to get the drug off-label. Jacobus Pharmaceutical had been manufacturing and giving the drug away for free since the 1990s. The FDA decision caused Catalyst Pharmaceuticals' stock price to drop by about 50%. Research: Amifampridine has also been proposed for the treatment of multiple sclerosis (MS). A 2002 Cochrane systematic review found that there was no unbiased data to support its use for treating MS; this remained the case as of 2012.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Laurentide Ice Sheet** Laurentide Ice Sheet: The Laurentide Ice Sheet was a massive sheet of ice that covered millions of square miles, including most of Canada and a large portion of the Northern United States, multiple times during the Quaternary glacial epochs, from 2.58 million years ago to the present. The last advance covered most of northern North America between c. 95,000 and c. 20,000 years before the present day and, among other geomorphological effects, gouged out the five Great Lakes and the host of smaller lakes of the Canadian Shield. These lakes extend from the eastern Northwest Territories, through most of northern Canada, and the upper Midwestern United States (Minnesota, Wisconsin, and Michigan) to the Finger Lakes, through Lake Champlain and Lake George areas of New York, across the northern Appalachians into and through all of New England and Nova Scotia. Laurentide Ice Sheet: At times, the ice sheet's southern margin included the present-day sites of coastal towns of the Northeastern United States, and cities such as Boston and New York City, and Great Lakes coastal cities and towns as far south as Chicago and St. Louis, Missouri, and then followed the present course of the Missouri River up to the northern slopes of the Cypress Hills, beyond which it merged with the Cordilleran Ice Sheet. The ice coverage extended approximately as far south as 38 degrees latitude mid-continent. Description: This ice sheet was the primary feature of the Pleistocene epoch in North America, commonly referred to as the ice age. During the Pre-Illinoian Stage, the Laurentide Ice Sheet extended as far south as the Missouri and Ohio River valleys. It was up to 2 mi (3.2 km) thick in Nunavik, Quebec, Canada, but much thinner at its edges, where nunataks were common in hilly areas. It created much of the surface geology of southern Canada and the northern United States, leaving behind glacially scoured valleys, moraines, eskers and glacial till. It also caused many changes to the shape, size, and drainage of the Great Lakes. As but one of many examples, near the end of the last ice age, Lake Iroquois extended well beyond the boundaries of present-day Lake Ontario, and drained down the Hudson River into the Atlantic Ocean. Its cycles of growth and melting were a decisive influence on global climate during its existence. This is because it served to divert the jet stream southward, which would otherwise flow from the relatively warm Pacific Ocean through Montana and Minnesota. That gave the Southwestern United States, otherwise a desert, abundant rainfall during ice ages, in extreme contrast to most other parts of the world, which became exceedingly dry, though the ice sheets in Europe had an analogous effect on the rainfall in Afghanistan, parts of Iran and possibly western Pakistan in winter, as well as in North Africa. Description: Its melting also caused major disruptions to the global climate cycle, because the huge influx of low-salinity water into the Arctic Ocean via the Mackenzie River is believed to have disrupted the formation of North Atlantic Deep Water, the very saline, cold, deep water that flows from the Greenland Sea. That interrupted the thermohaline circulation, creating the brief Younger Dryas cold epoch and a temporary re-advance of the ice sheet, which did not retreat from Nunavik until 6,500 years ago.
Description: After the end of the Younger Dryas, the Laurentide Ice Sheet retreated rapidly to the north, becoming limited to only the Canadian Shield until even it became deglaciated. The ultimate collapse of the Laurentide Ice Sheet is also suspected to have influenced European agriculture indirectly through the rise of global sea levels. Canada's oldest ice is a 20,000-year-old remnant of the Laurentide Ice Sheet called the Barnes Ice Cap, on central Baffin Island. Ice centers: During the Late Pleistocene, the Laurentide ice sheet reached from the Rocky Mountains eastward through the Great Lakes, into New England, covering nearly all of Canada east of the Rocky Mountains. Ice centers: Three major ice centers formed in North America: the Labrador, Keewatin, and Cordilleran. The Cordilleran covered the region from the Pacific Ocean to the eastern front of the Rocky Mountains, and the Labrador and Keewatin fields are referred to as the Laurentide Ice Sheet. Central North America has evidence of the numerous lobes and sublobes. The Keewatin covered the western interior plains of North America from the Mackenzie River to the Missouri River and the upper reaches of the Mississippi River. The Labrador ice spread over eastern Canada and the northeastern part of the United States, abutting the Keewatin lobe in the western Great Lakes and Mississippi valley. Ice centers: Cordilleran ice flow The Cordilleran ice sheet covered up to 2,500,000 square kilometres (970,000 sq mi) at the Last Glacial Maximum. The eastern edge abutted the Laurentide ice sheet. The sheet was anchored in the Coast Mountains of British Columbia and Alberta, south into the Cascade Range of Washington. That is one and a half times the water held in the Antarctic. Anchored in the mountain backbone of the west coast, the ice sheet dissipated north of the Alaska Range, where the air was too dry to form glaciers. Ice centers: It is believed that the Cordilleran ice melted rapidly, in less than 4000 years. The water created numerous proglacial lakes along the margins, such as Lake Missoula, often leading to catastrophic floods as with the Missoula Floods. Much of the topography of Eastern Washington and northern Montana and North Dakota was affected. Ice centers: Keewatin ice flow The Keewatin ice sheet has had four or five primary lobes identified, with ice divides extending from a dome over west-central Keewatin (Kivalliq). Two of the lobes abut the adjacent Labrador and Baffin ice sheets. The primary lobes flow (1) towards Manitoba and Saskatchewan; (2) towards Hudson Bay; (3) towards the Gulf of Boothia; and (4) towards the Beaufort Sea. Ice centers: Labrador ice flow The Labrador ice sheet flowed across all of Maine and into the Gulf of St. Lawrence, completely covering the Maritime Provinces. The Appalachian Ice Complex flowed from the Gaspé Peninsula over New Brunswick, the Magdalen Shelf, and Nova Scotia. Ice centers: The Labrador flow extended across the mouth of the St. Lawrence River, reaching the Gaspé Peninsula and across Chaleur Bay. From the Escuminac center on the Magdalen Shelf, ice flowed onto the Acadian Peninsula of New Brunswick and southeastward onto the Gaspé, burying the western end of Prince Edward Island and reaching the head of the Bay of Fundy. From the Gaspereau center, on the divide crossing New Brunswick, ice flowed into the Bay of Fundy and Chaleur Bay. In New York, the ice that covered Manhattan was about 2,000 feet thick before it began to melt in about 16,000 BC. The ice in the area disappeared around 10,000 BC.
The ground in the New York area has since risen by more than 150 ft because of the removal of the enormous weight of the melted ice. Ice centers: Baffin ice flow The Baffin ice sheet was circular and centered over the Foxe Basin. A major divide across the basin separated a westward flow across the Melville Peninsula from an eastward flow over Baffin Island and Southampton Island. Across southern Baffin Island, two divides created four additional lobes. The Penny Ice Divide split the Cumberland Peninsula, where Pangnirtung sits, creating flow toward Home Bay on the north and Cumberland Sound on the south. The Amadjuak Ice Divide on the Hall Peninsula, where Iqaluit sits, created a north flow into Cumberland Sound and a south flow into the Hudson Strait. A secondary Hall Ice Divide formed a link to a local ice cap on the Hall Peninsula. The current ice caps on Baffin Island are thought to be a remnant from this time period, though they were not part of the Baffin ice flow but rather an autonomous flow.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Arnold tongue** Arnold tongue: In mathematics, particularly in dynamical systems, Arnold tongues (named after Vladimir Arnold) are a pictorial phenomenon that occurs when visualizing how the rotation number of a dynamical system, or other related invariant property thereof, changes according to two or more of its parameters. The regions of constant rotation number have been observed, for some dynamical systems, to form geometric shapes that resemble tongues, in which case they are called Arnold tongues. Arnold tongues are observed in a large variety of natural phenomena that involve oscillating quantities, such as concentration of enzymes and substrates in biological processes and cardiac electric waves. Sometimes the frequency of oscillation depends on, or is constrained by (i.e., phase-locked or mode-locked to, in some contexts), some other quantity, and it is often of interest to study this relation. For instance, the onset of a tumor triggers in the area a series of substance (mainly protein) oscillations that interact with each other; simulations show that these interactions cause Arnold tongues to appear, that is, the frequency of some oscillations constrains the others, and this can be used to control tumor growth. Other examples where Arnold tongues can be found include the inharmonicity of musical instruments, orbital resonance and tidal locking of orbiting moons, mode-locking in fiber optics and phase-locked loops and other electronic oscillators, as well as in cardiac rhythms, heart arrhythmias and the cell cycle. One of the simplest physical models that exhibits mode-locking consists of two rotating disks connected by a weak spring. One disk is allowed to spin freely, and the other is driven by a motor. Mode locking occurs when the freely-spinning disk turns at a frequency that is a rational multiple of that of the driven rotator. Arnold tongue: The simplest mathematical model that exhibits mode-locking is the circle map, which attempts to capture the motion of the spinning disks at discrete time intervals. Standard circle map: Arnold tongues appear most frequently when studying the interaction between oscillators, particularly in the case where one oscillator drives another. That is, one oscillator depends on the other but not the other way around, so they do not mutually influence each other as happens in Kuramoto models, for example. This is a particular case of driven oscillators, with a driving force that has a periodic behaviour. As a practical example, heart cells (the external oscillator) produce periodic electric signals to stimulate heart contractions (the driven oscillator); here, it could be useful to determine the relation between the frequencies of the oscillators, possibly to design better artificial pacemakers. The family of circle maps serves as a useful mathematical model for this biological phenomenon, as well as many others. The family of circle maps are functions (or endomorphisms) of the circle to itself. It is mathematically simpler to consider a point on the circle as being a point x on the real line that should be interpreted modulo 2π, representing the angle at which the point is located on the circle. When the modulo is taken with a value other than 2π, the result still represents an angle, but must be normalized so that the whole range [0,2π] can be represented.
With this in mind, the family of circle maps is given by: θ_{i+1} = g(θ_i) + Ω, where Ω is the oscillator's "natural" frequency and g is a periodic function that yields the influence caused by the external oscillator. Note that if g(θ) = θ for all θ the particle simply walks around the circle at Ω units at a time; in particular, if Ω is irrational the map reduces to an irrational rotation. Standard circle map: The particular circle map originally studied by Arnold, and which continues to prove useful even nowadays, is: θ_{i+1} = θ_i + Ω + (K/2π)·sin(2πθ_i), where K is called the coupling strength, and θ_i should be interpreted modulo 1. This map displays very diverse behavior depending on the parameters K and Ω; if we fix Ω = 1/3 and vary K, a bifurcation diagram is obtained in which we can observe periodic orbits, period-doubling bifurcations, as well as possible chaotic behavior. Deriving the circle map: Another way to view the circle map is as follows. Consider a function y(t) that decreases linearly with slope a. Once it reaches zero, its value is reset to a certain oscillating value, described by a function z(t) = c + b·sin(2πt). We are now interested in the sequence of times {t_n} at which y(t) reaches zero. This model tells us that at time t_{n−1} it is valid that y(t_{n−1}) = c + b·sin(2πt_{n−1}). From this point, y will then decrease linearly until t_n, where the function y is zero, thus yielding: t_n = t_{n−1} + (c + b·sin(2πt_{n−1}))/a, and by choosing Ω = c/a and K = 2πb/a we obtain the circle map discussed previously: t_n = t_{n−1} + Ω + (K/2π)·sin(2πt_{n−1}). Glass, L. (2001) argues that this simple model is applicable to some biological systems, such as regulation of substance concentration in cells or blood, with y(t) above representing the concentration of a certain substance. In this model, a phase-locking of N:M would mean that y(t) is reset exactly N times every M periods of the sinusoidal z(t). The rotation number, in turn, would be the quotient N/M. Properties: Consider the general family of circle endomorphisms: θ_{i+1} = g(θ_i) + Ω, where, for the standard circle map, we have g(θ) = θ + (K/2π)·sin(2πθ). Sometimes it will also be convenient to represent the circle map in terms of a mapping f(θ): θ_{i+1} = f(θ_i), with f(θ) = θ + Ω + (K/2π)·sin(2πθ). We now proceed to listing some interesting properties of these circle endomorphisms. P1. f is monotonically increasing for K < 1, so for these values of K the iterates θ_i only move forward in the circle, never backwards. To see this, note that the derivative of f is f′(θ) = 1 + K·cos(2πθ), which is positive as long as K < 1. P2. When expanding the recurrence relation, one obtains a formula for θ_n: θ_n = θ_0 + nΩ + (K/2π)·Σ_{i=0}^{n−1} sin(2πθ_i). Properties: P3. Suppose that θ_n = θ_0 mod 1, so they are periodic fixed points of period n. Since the sine oscillates at frequency 1 Hz, the number of oscillations of the sine per cycle of θ_i will be M = (θ_n − θ_0)·1, thus characterizing a phase-locking of n:M. P4. For any p ∈ N, it is true that f(θ + p) = f(θ) + p, which in turn means that f(θ + p) = f(θ) mod 1. Because of this, for many purposes it does not matter whether the iterates θ_i are taken modulo 1 or not. Properties: P5 (translational symmetry). Suppose that for a given Ω there is an n:M phase-locking in the system. Then, for Ω′ = Ω + p with integer p, there would be an n:(M + np) phase-locking. This also means that if θ_0, …, θ_n is a periodic orbit for parameter Ω, then it is also a periodic orbit for any Ω′ = Ω + p, p ∈ N. P6. For K = 0 there will be phase-locking whenever Ω is rational. Moreover, if Ω = p/q ∈ Q, then the phase-locking is q:p.
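As a concrete illustration of the standard circle map defined above, the following sketch iterates θ_{i+1} = θ_i + Ω + (K/2π)·sin(2πθ_i) and numerically spot-checks property P1 (monotonicity of the lift for K < 1). It is an illustrative sketch only; the parameter values are arbitrary choices, not values taken from the text.

```python
import math

def f(theta, omega, K):
    """One step of the standard circle map (the lift, before reduction modulo 1)."""
    return theta + omega + (K / (2 * math.pi)) * math.sin(2 * math.pi * theta)

def orbit(theta0, omega, K, n):
    """Iterate the map n times, reducing each iterate modulo 1."""
    thetas = [theta0]
    for _ in range(n):
        thetas.append(f(thetas[-1], omega, K) % 1.0)
    return thetas

# P1: for K < 1 the lift f is monotonically increasing,
# since f'(theta) = 1 + K*cos(2*pi*theta) > 0.
K, omega = 0.5, 1 / 3
assert all(f((t + 1) / 200, omega, K) > f(t / 200, omega, K) for t in range(200))

print(orbit(0.1, omega, K, 5))   # first few iterates of an example orbit
```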
Mode locking: For small to intermediate values of K (that is, in the range of K = 0 to about K = 1), and certain values of Ω, the map exhibits a phenomenon called mode locking or phase locking. In a phase-locked region, the values θ_n advance essentially as a rational multiple of n, although they may do so chaotically on the small scale. Mode locking: The limiting behavior in the mode-locked regions is given by the rotation number ω = lim_{n→∞} θ_n/n, which is also sometimes referred to as the map winding number. Mode locking: The phase-locked regions, or Arnold tongues, appear as V-shaped regions in the (Ω, K) parameter plane. Each such V-shaped region touches down to a rational value Ω = p/q in the limit of K → 0. The values of (K, Ω) in one of these regions will all result in a motion such that the rotation number ω = p/q. For example, all values of (K, Ω) in the large V-shaped region centred on Ω = 1/2 correspond to a rotation number of ω = 1/2. One reason the term "locking" is used is that the individual values θ_n can be perturbed by rather large random disturbances (up to the width of the tongue, for a given value of K), without disturbing the limiting rotation number. That is, the sequence stays "locked on" to the signal, despite the addition of significant noise to the series θ_n. This ability to "lock on" in the presence of noise is central to the utility of the phase-locked loop electronic circuit. There is a mode-locked region for every rational number p/q. It is sometimes said that the circle map maps the rationals, a set of measure zero at K = 0, to a set of non-zero measure for K ≠ 0. The largest tongues, ordered by size, occur at the Farey fractions. Fixing K and taking a cross-section through the tongue diagram, so that ω is plotted as a function of Ω, gives the "Devil's staircase", a shape that is generically similar to the Cantor function. Mode locking: One can show that for K < 1 the circle map is a diffeomorphism, and there exists only one stable solution. However, for K > 1 this no longer holds, and one can find regions where two locking regions overlap. For the circle map it can be shown that in this regime no more than two stable mode-locking regions can overlap, but whether there is any limit to the number of overlapping Arnold tongues for general synchronised systems is not known. The circle map also exhibits subharmonic routes to chaos, that is, period doubling of the form 3, 6, 12, 24, .... Chirikov standard map: The Chirikov standard map is related to the circle map, having similar recurrence relations, which may be written as θ_{n+1} = θ_n + p_n + (K/2π)·sin(2πθ_n), p_{n+1} = θ_{n+1} − θ_n, with both iterates taken modulo 1. In essence, the standard map introduces a momentum p_n which is allowed to vary dynamically, rather than being held fixed, as it is in the circle map. The standard map is studied in physics by means of the kicked rotor Hamiltonian. Applications: Arnold tongues have been applied to the study of cardiac rhythms (see Glass, L. et al. (1983) and McGuinness, M. et al. (2004)) and to the synchronisation of resonant tunneling diode oscillators.
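The rotation number ω = lim θ_n/n defined above can be estimated numerically by iterating the lift of the map (without reducing modulo 1) and dividing by the number of steps; scanning Ω at fixed K and looking for a plateau at ω = 1/2 then gives a rough picture of the corresponding Arnold tongue. The sketch below is illustrative only; the grid resolution, iteration count and tolerance are arbitrary assumptions.

```python
import math

def rotation_number(omega, K, n_iter=2000, theta0=0.0):
    """Estimate the rotation number of the standard circle map by iterating its lift."""
    theta = theta0
    for _ in range(n_iter):
        theta = theta + omega + (K / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    return (theta - theta0) / n_iter

# Scan omega at fixed K and report where the rotation number locks onto 1/2,
# giving (approximately) the width of the omega = 1/2 tongue at that K.
K = 0.9
locked = [w / 1000 for w in range(400, 601)
          if abs(rotation_number(w / 1000, K) - 0.5) < 1e-3]
if locked:
    print(f"omega = 1/2 tongue at K={K}: roughly [{min(locked)}, {max(locked)}]")
```

Repeating the scan over a grid of K values and plotting the locked set would trace out the familiar V-shaped tongue touching Ω = 1/2 at K = 0.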
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spoilt vote** Spoilt vote: In voting, a ballot is considered spoilt, spoiled, void, null, informal, invalid or stray if a law declares or an election authority determines that it is invalid and thus not included in the vote count. This may occur accidentally or deliberately. The total number of spoilt votes in a United States election has been called the residual vote. In Australia, such votes are generally referred to as informal votes, and in Canada they are referred to as rejected votes. Spoilt vote: In some jurisdictions spoilt votes are counted and reported. Types of spoilt vote: A ballot may be spoilt in a number of ways, including: Failing to mark the ballot at all (blank vote), or otherwise defacing the ballot instead of attempting to vote. Filling out the ballot in a manner that is incompatible with the voting system being used, e.g.: Marking more choices than permitted (overvoting), or fewer than necessary (undervoting). Filling a preference ballot out of sequence, e.g. 1-2-2-3-4, 1-2-4-5-6 or 1-4-2-4-5. In most cases, only the first two choices in these examples would be counted as valid. Adding a write-in candidate when such an option is not permitted. The vote for this candidate would be discarded. Filling the ballot in a manner that makes the voter's decision unclear. Physically deforming ballots, especially those counted by machine. Types of spoilt vote: Making marks on the ballot other than those necessary to complete it, from which the voter's identity can be ascertained, compromising the secrecy of the ballot. As an example, UK law specifically precludes ballots "on which votes are given for more candidates than the voter is entitled to vote for", "on which anything is written or marked by which the voter can be identified" or "which [are] unmarked or void for uncertainty". Replacement ballots: If a voter makes a mistake while completing a ballot, it may be possible to cancel it and start the voting process again. In the United States, cancelled physical ballots may be called "spoiled ballots", as distinct from an "invalid vote" which has been cast. Replacement ballots: In Canada, a spoiled ballot is one that has been handled by an elector in such a manner that it is ruined beyond use, or that the deputy returning officer finds soiled or improperly printed. The spoilt ballot is not placed in the ballot box, but rather is marked as spoilt by the deputy returning officer and set aside. The elector is given another ballot. A 'rejected ballot' is one which cannot be counted due to improper marking by the voter. Examples of this are ballots which have more than one mark, on which the intent of the voter cannot be ascertained, or on which the voter can be identified by their mark. In many jurisdictions, if multiple elections or referendums are held simultaneously, then there may be separate physical ballots for each, which may be printed on different-colored paper and posted into separate ballot boxes. In the United States, a single physical ballot is often used to record multiple separate votes. In such cases one can distinguish an "invalid ballot", where all votes on the ballot are rendered invalid, from a "partially valid" ballot, with some votes valid and others invalid. Intentional spoiling: A voter may deliberately spoil a vote, for example as a protest vote, especially in compulsory voting jurisdictions, to show disapproval of the candidates standing whilst still taking part in the electoral process.
Intentionally spoiling someone else's ballot before or during tabulation is electoral fraud. Intentional spoiling: The validity of an election may be questioned if there is an unusually high proportion of spoilt votes. In multiple-vote U.S. ballots, "voter roll-off" is calculated by subtracting the number of votes cast for a "down-ballot" office, such as mayor, from the number of votes cast for a "top-of-the-ballot" office, such as president. When the election jurisdiction does not report voter turnout, roll-off can be used as a proxy for residual votes. Some voters may only be interested in voting for the major offices, and not bother filling in the lower positions, resulting in a partially valid ballot. Intentional spoiling: While it is not illegal to advocate informal voting in Australian federal elections, it was briefly illegal to advise voters to fill out their ballots using duplicated numbers. Albert Langer was jailed for violating an injunction not to advocate incomplete preference voting for the 1996 Australian federal election. During the 2021 Hong Kong legislative elections, pro-democratic supporters urged voters to cast spoilt ballots or not vote in the election in protest of the rewriting of election rules by the National People's Congress in Beijing. Despite the Government's attempts to criminalise inciting voters to cast invalid ballots or not vote, as well as attempts to boost voter turnout, the election recorded a record number of invalid ballots as well as historically low voter turnout. Unintentional spoiling: Voter instructions are usually intended to minimize the accidental spoiling of votes. Ballot design can aid or inhibit clarity in an election, resulting in less or more accidental spoiling. Some election officials have discretion to allow ballots where the criteria for acceptability are not strictly met but the voter's intention is clear. More complicated electoral systems may be more prone to errors. Group voting tickets were introduced in Australia owing to the high number of informal votes cast in single transferable vote (STV) elections, but have since been abolished in all states and territories aside from Victoria. When multiple Irish STV elections are simultaneous (as for local and European elections), some voters have marked, say, 1-2-3 on one ballot paper and 4-5-6 on the other; some returning officers consequently allowed 4-5-6 ballots to be counted, until a Supreme Court case in 2015 ruled they were invalid. The United States Election Assistance Commission's survey of the 2006 midterm elections reported an undervoting rate of 0.1% in US Senate elections and 1.6% in US House elections; overvotes were much rarer. Some paper-based voting systems and most DRE voting machines can notify voters of under-votes and over-votes. The Help America Vote Act requires that voters are informed when they have overvoted, unless a paper-ballot voting system is in use.
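The roll-off calculation described above (votes for a top-of-the-ballot office minus votes for a down-ballot office, used as a proxy for residual votes when turnout is not reported) amounts to a one-line subtraction. The sketch below is purely illustrative; the office names and vote counts are made up.

```python
# Illustrative only: per-contest vote totals for one jurisdiction (made-up numbers).
votes_cast = {
    "president": 10_000,   # top-of-the-ballot office
    "mayor": 9_150,        # down-ballot office
}

def roll_off(counts, top_office, down_office):
    """Voter roll-off: top-of-the-ballot votes minus down-ballot votes."""
    return counts[top_office] - counts[down_office]

drop = roll_off(votes_cast, "president", "mayor")
print(f"Roll-off: {drop} votes ({drop / votes_cast['president']:.1%} of the top-office total)")
# -> Roll-off: 850 votes (8.5% of the top-office total)
```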
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Poor ovarian reserve** Poor ovarian reserve: Poor ovarian reserve is a condition of low fertility characterized by (1) low numbers of remaining oocytes in the ovaries or (2) possibly impaired preantral oocyte development or recruitment. Recent research suggests that premature ovarian aging and premature ovarian failure (also known as primary ovarian insufficiency) may represent a continuum of premature ovarian senescence. It is usually accompanied by high FSH (follicle stimulating hormone) levels. Poor ovarian reserve: Quality of the eggs may also be impaired. However, other studies show no association between elevated FSH levels and the genetic quality of embryos after adjusting for age. The decline in quality was age-related, not FSH-related, as the younger women with high day-three FSH levels had higher live birth rates than the older women with high FSH. There was no significant difference in genetic embryo quality between same-aged women regardless of FSH levels. A 2008 study concluded that diminished reserve did not affect the quality of oocytes and any reduction in quality in diminished-reserve women was age-related. One expert concluded that in young women with poor reserve, when eggs are obtained, they have near-normal implantation and pregnancy rates, but they are at high risk for IVF cancellation; if eggs are obtained, pregnancy rates are typically better than in older women with normal reserve. However, if the FSH level is extremely elevated these conclusions are likely not applicable. Presentation: Related conditions Premature ovarian failure: Defined as no menses for six months before the age of forty due to any cause. Often diagnosed by elevated gonadotropin (follicle-stimulating hormone (FSH) and LH) levels. In some cases (more so in younger women) ovarian function and ovulation can spontaneously resume. With POF up to 50% of women may ovulate once in any given year and 5–10% may become pregnant. POF is often associated with autoimmune diseases. Presentation: Premature menopause: An outdated synonym for premature ovarian failure. The term encompasses premature menopause due to any cause, including surgical removal of the ovaries for any reason. Early menopause and premature ovarian failure are no longer considered to be the same condition. Cause: Natural decline of ovarian reserve due to age. Idiopathic. Genetic factors, such as fragile X syndrome. Approximately 20–28% of women with an FMR1 premutation (55–200 CGG repeats) experience fragile X primary ovarian insufficiency (POI) and another 23% experience early menopause (i.e., menopause before the age of forty-five). Autoimmune disorders. Adrenal gland impairment. Iatrogenic, e.g., due to radiation, chemotherapy or surgery, such as laserization of the surface of the ovary to treat endometriosis. Excessive laparoscopic ovarian drilling has been reported to cause premature ovarian failure. (The primordial follicles are located in the thin outer one-millimeter layer of the ovary.) Diagnosis: There is some controversy as to the accuracy of the tests used to predict poor ovarian reserve. One systematic review concluded that the accuracy of predicting the occurrence of pregnancy is very limited. When a high threshold is used, to prevent couples from wrongly being refused IVF, only approximately 3% of IVF-indicated cases are identified as having unfavourable prospects in an IVF treatment cycle. Also, the review concluded that the use of any ORT (ovarian reserve testing) for outcome prediction cannot be supported.
Also, Centers for Disease Control and Prevention statistics show that the success rates for IVF with diminished ovarian reserve vary widely between IVF centers. Diagnosis: Follicle stimulating hormone Elevated serum follicle stimulating hormone (FSH) level measured on day three of the menstrual cycle. (The first day of period flow is counted as day one. Spotting is not considered the start of the period.) If a lower value occurs from later testing, the highest value is considered the most predictive. FSH assays can differ somewhat, so reference ranges as to what is normal, premenopausal or menopausal should be based on ranges provided by the laboratory doing the testing. Estradiol (E2) should also be measured, as women who ovulate early may have elevated E2 levels above 80 pg/mL (due to early follicle recruitment, possibly due to a low serum inhibin B level) which will mask an elevated FSH level and give a false negative result. High FSH strongly predicts poor IVF response in older women, less so in younger women. One study showed that an elevated basal day-three FSH is correlated with diminished ovarian reserve in women aged over 35 years and is associated with poor pregnancy rates after ovulation induction treatment (6% versus 42%). The rates for spontaneous pregnancy in older women with elevated FSH levels have not been studied very well, and the spontaneous pregnancy success rate, while low, may be underestimated due to non-reporting bias, as most infertility clinics will not accept women over the age of forty with FSH levels in the premenopausal range or higher. A woman can have a normal day-three FSH level yet still respond poorly to ovarian stimulation and hence can be considered to have poor reserve. Thus, another FSH-based test is often used to detect poor ovarian reserve: the clomid challenge test, also known as CCCT (clomiphene citrate challenge test). Diagnosis: Antral follicle count Transvaginal ultrasonography can be used to determine antral follicle count (AFC). This is an easy-to-perform and noninvasive method (but there may be some discomfort). Several studies show this test to be more accurate than basal FSH testing for older women (< 44 years of age) in predicting IVF outcome. This method of determining ovarian reserve is recommended by Dr. Sherman J. Silber, author and medical director of the Infertility Center of St. Louis. (A table of AFC and median fertile years remaining from Silber's book is not reproduced here.) Note that this table may be in error, as it has no basis in any scientific study and contradicts data from Broekmans et al.'s 2004 study. The table closely matches Broekmans' data only if interpreted as the total AFC of both ovaries. Only antral follicles that were 2–10 mm in size were counted in Broekmans' study. Diagnosis: (Tables relating age and AFC to the age of loss of natural fertility (see Broekmans et al. [2004]), and giving AFC-based FSH stimulation recommendations for cycles using assisted reproduction technology, are likewise not reproduced here.) Other markers: Declining serum levels of anti-Müllerian hormone (AMH). Recent studies have validated the use of serum AMH levels as a marker for the quantitative aspect of ovarian reserve. Because of the lack of cycle variations in serum levels of AMH, this marker has been proposed to be used as part of the standard diagnostic procedures to assess ovarian dysfunctions, such as premature ovarian failure. One study has shown AMH to be a better marker than basal FSH for women with proven (prior) fertility in measuring age-related decline in ovarian reserve. Diagnosis: Inhibin B blood level.
Inhibin B levels tend to decline in women of advanced reproductive age due to both fewer follicles and decreased secretion by the granulosa cells. Inhibin B levels start to rise around day zero, and low day-three levels are associated with poor IVF outcome. Ultrasound measurement of ovarian volume. Lass and Brinsden (1999) report that the correlation between ovarian volume and follicular density appears to only hold in women ≥ 35 years of age. Diagnosis: Dynamic Assessment Following GnRH-a Administration (GAST). This test measures the change in serum estradiol levels between cycle days two and three after administration of one mg of subcutaneous leuprolide acetate, a gonadotropin-releasing hormone agonist. Patients with estradiol elevations by day two followed by a decline by day three had better implantation and pregnancy rates than patients with either no rise in estradiol or persistently elevated estradiol levels. Diagnosis: Home testing of FSH urine concentration to alert a woman to possible impaired ovarian reserve became possible in June 2007 with the introduction of Fertell in the United States and UK, which claims a 95% equivalence to standard serum marker results. Treatment: Success rates with treatment are variable; there are very few controlled studies, mostly case reports. Treatment success strongly tends to diminish with age and degree of elevation of FSH. Treatment: Donor oocyte. Oocyte donation is the most successful method for producing pregnancy in perimenopausal women. In the UK the use of donor oocytes after natural menopause is controversial. A 1995 study reported that women aged fifty or older experience similar pregnancy rates after oocyte donation as younger women. They are at equal risk for multiple gestation as younger women. In addition, antenatal complications were experienced by the majority of patients, so high-risk obstetric surveillance and care are vital. Treatment: Natural or mini-IVF, without the use of hCG to trigger ovulation; instead, the GnRH agonist Synarel (nafarelin acetate) in a diluted form is taken as a nasal spray to trigger ovulation. Human chorionic gonadotropin (hCG) has a long half-life and may stimulate (luteinize) small follicles prematurely and cause them to become cysts. Nafarelin acetate in a nasal spray, by contrast, induces a short-lived LH surge that is high enough to induce ovulation in large follicles, but too short-lived to adversely affect small follicles. This increases the likelihood of the small follicles and the oocytes therein developing normally for upcoming cycles, and also allows the woman to cycle without taking a break, consequently increasing the probability of conception in women with poor ovarian reserve and women of advanced reproductive age. Treatment: Pretreatment with 50 mcg ethinylestradiol three times a day for two weeks, followed by recombinant FSH 200 IU/day subcutaneously. Ethinylestradiol treatment was maintained during FSH stimulation. When at least one follicle reached 18 mm in diameter and serum estradiol was greater than or equal to 150 pg/mL, ovulation was induced with an intramuscular injection of 10,000 IU of hCG (human chorionic gonadotropin hormone). For luteal phase support, 5,000 IU of hCG was administered every 72 hours. Out of 25 patients, 8 ovulated and 4 became pregnant. In the control group there were no ovulations. The patients ranged in age between 24 and 39 years, with an average age of 32.7.
All women had amenorrhea for at least 6 months (average 16.75 months) and FSH levels greater than or equal to 40 mIU/mL (average FSH 68 mIU/mL). The researchers believe this protocol would work for women in early postmenopause as well. Treatment: Ethinylestradiol or other synthetic estrogens, along with luteal-phase progesterone (200 mg vaginal suppositories twice daily) and estradiol support. Ethinylestradiol lowers high FSH levels, which then, it is theorized, up-regulates FSH receptor sites and restores sensitivity to FSH. Ethinylestradiol also has the advantage that it does not interfere with the measurement of serum levels of endogenous estradiol. During the luteal phase the FSH levels should be kept low for subsequent cycles, thus this phase is supplemented with 4 mg oral estradiol. Since conception may have occurred, estradiol is used instead of the synthetic ethinylestradiol. Treatment: Cyclical hormone replacement therapy. The following protocols have shown promise: high-dose gonadotropins, flare-up GnRH-a protocol (standard or microdose), stop protocols, short protocol, natural cycle or modified natural cycle, and low-dose hCG during the beginning of the stimulation protocol. Gonadotropin-releasing hormone agonist/antagonist conversion with estrogen priming (AACEP) protocol. Fisch, Keskintepe and Sher report 35% (14 out of 40) ongoing gestation in women with elevated FSH levels (all women had prior IVF and poor-quality embryos); among women aged 41–42 the ongoing gestation rate was 19% (5 out of 26). Treatment: DHEA: A recent clinical trial by the Center for Human Reproduction in New York showed significant effectiveness. Leonidas and Eudoxia Mamas report six cases of premature ovarian failure. After two to six months of treatment with DHEA (two 25 mg capsules daily in five cases and three 25 mg capsules daily in one case), all women conceived. One delivered via C-section, one aborted at 7 weeks, and the remaining four were reported at 11 to 27 weeks of gestation. Ages were from 37 to 40. FSH levels were from 30 to 112 mIU/mL. Amenorrhea ranged from 9 to 13 months. In addition, there is strong evidence that continuous micronized DHEA 25 mg TID reduces miscarriage and aneuploidy rates, especially above age 35. Treatment: A combined pentoxifylline-tocopherol treatment has been reported effective in improving uterine parameters in women with POF undergoing IVF with donor oocytes (IVF-OD). Three women with uterine hormonoresistance despite high estradiol (E2) plasma levels received treatment with 800 mg pentoxifylline and 1000 IU of vitamin E for at least nine months. Three frozen-thawed embryo transfers (ETs) resulted in two viable pregnancies. Mean endometrial thickness increased from 4.9 mm (with thin uterine crosses) to 7.4 mm with nice uterine crosses. This treatment protocol has also reversed some cases of iatrogenic POF caused by full-body radiation treatment. Research: While the primary cause of the end of menstrual cycles is the exhaustion of ovarian follicles, there is some evidence that a defect in the hypothalamus is critical in the transition from regular to irregular cycles. This is supported by at least one study in which transplantation of ovaries from old rats to young ovariectomized rats resulted in follicular development and ovulation. Also, electrical stimulation of the hypothalamus is capable of restoring reproductive function in aged animals.
Due to the complex interrelationship among the hypothalamus, pituitary and ovaries (the HPO axis), defects in the functioning of one level can cause defects at the other levels.
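The day-three FSH discussion under Diagnosis above includes a simple decision rule: an estradiol level above about 80 pg/mL can mask an elevated FSH reading, so a "normal" FSH in that situation should not be taken at face value. The sketch below encodes only that logic as a toy illustration; the FSH cutoff is passed in as a parameter (the article recommends using the testing laboratory's own reference range), the example numbers are invented, and nothing here is clinical guidance.

```python
def interpret_day3_fsh(fsh_miu_ml: float, estradiol_pg_ml: float, lab_fsh_cutoff: float) -> str:
    """Toy illustration of the masking rule described above; not clinical guidance.

    lab_fsh_cutoff is the upper limit of normal supplied by the testing laboratory.
    """
    if fsh_miu_ml >= lab_fsh_cutoff:
        return "elevated FSH: suggests diminished ovarian reserve"
    if estradiol_pg_ml > 80:
        # Early follicle recruitment can suppress FSH, masking an elevated value.
        return "FSH within range but E2 > 80 pg/mL: possible false negative, retest or use another marker"
    return "FSH within range and E2 not elevated"

# Example with made-up numbers and a made-up laboratory cutoff:
print(interpret_day3_fsh(fsh_miu_ml=7.0, estradiol_pg_ml=95.0, lab_fsh_cutoff=10.0))
```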
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Supermultiplet** Supermultiplet: In theoretical physics, a supermultiplet is a representation of a supersymmetry algebra, possibly with extended supersymmetry. Then a superfield is a field on superspace which is valued in such a representation. Naïvely, or when considering flat superspace, a superfield can simply be viewed as a function on superspace. Formally, it is a section of an associated supermultiplet bundle. Phenomenologically, superfields are used to describe particles. It is a feature of supersymmetric field theories that particles form pairs, called superpartners, where bosons are paired with fermions. These supersymmetric fields are used to build supersymmetric quantum field theories, where the fields are promoted to operators. History: Superfields were introduced by Abdus Salam and J. A. Strathdee in a 1974 article. Operations on superfields and a partial classification were presented a few months later by Sergio Ferrara, Julius Wess and Bruno Zumino. Naming and classification: The most commonly used supermultiplets are vector multiplets, chiral multiplets (in d=4, N=1 supersymmetry for example), hypermultiplets (in d=4, N=2 supersymmetry for example), tensor multiplets and gravity multiplets. The highest component of a vector multiplet is a gauge boson, the highest component of a chiral or hypermultiplet is a spinor, and the highest component of a gravity multiplet is a graviton. The names are defined so as to be invariant under dimensional reduction, although the organization of the fields as representations of the Lorentz group changes. Naming and classification: The use of these names for the different multiplets can vary in the literature. A chiral multiplet (whose highest component is a spinor) may sometimes be referred to as a scalar multiplet, and in d=4, N=2 SUSY, a vector multiplet (whose highest component is a vector) can sometimes be referred to as a chiral multiplet. Superfields in d = 4, N = 1 supersymmetry: Conventions in this section follow the notes by Figueroa-O'Farrill (2001). A general complex superfield Φ(x,θ,θ̄) in d=4, N=1 supersymmetry can be expanded as Φ(x,θ,θ̄) = ϕ(x) + θχ(x) + θ̄χ̄′(x) + θ̄σ^μθ V_μ(x) + θ²F(x) + θ̄²F̄′(x) + θ̄²θ ξ(x) + θ²θ̄ ξ̄′(x) + θ²θ̄² D(x), where ϕ, χ, χ̄′, V_μ, F, F̄′, ξ, ξ̄′, D are different complex fields. This is not an irreducible supermultiplet, and so different constraints are needed to isolate irreducible representations. Chiral superfield A (anti-)chiral superfield is a supermultiplet of d=4, N=1 supersymmetry. In four dimensions, the minimal N=1 supersymmetry may be written using the notion of superspace. Superspace contains the usual space-time coordinates x^μ, μ = 0,…,3, and four extra fermionic coordinates θ^α, θ̄^α̇ with α, α̇ = 1,2, transforming as a two-component (Weyl) spinor and its conjugate. In d=4, N=1 supersymmetry, a chiral superfield is a function over chiral superspace. There exists a projection from the (full) superspace to chiral superspace, so a function over chiral superspace can be pulled back to the full superspace. Such a function Φ(x,θ,θ̄) satisfies the covariant constraint D̄Φ = 0, where D̄ is the covariant derivative, given in index notation as D̄_α̇ = −∂̄_α̇ − iθ^α σ^μ_{αα̇} ∂_μ. A chiral superfield Φ(x,θ,θ̄) can then be expanded as Φ(y,θ) = ϕ(y) + √2 θψ(y) + θ²F(y), where y^μ = x^μ + iθσ^μθ̄. The superfield is independent of the 'conjugate spin coordinates' θ̄ in the sense that it depends on θ̄ only through y^μ. It can be checked that, in these conventions, D̄_α̇ y^μ = 0, so a function of (y,θ) alone automatically satisfies the chirality constraint. The expansion has the interpretation that ϕ is a complex scalar field and ψ is a Weyl spinor.
There is also an auxiliary complex scalar field, conventionally named F: this is the F-term, which plays an important role in some theories. The field can then be expressed in terms of the original coordinates (x,θ,θ¯) by substituting the expression for y: Φ(x,θ,θ¯)=ϕ(x)+√2θψ(x)+θ2F(x)+iθσμθ¯∂μϕ(x)−(i/√2)θ2∂μψ(x)σμθ¯−(1/4)θ2θ¯2◻ϕ(x). Antichiral superfields Similarly, there is also antichiral superspace, which is the complex conjugate of chiral superspace, and antichiral superfields. An antichiral superfield Φ† satisfies DΦ†=0, where Dα=∂α+iσαα˙μθ¯α˙∂μ. An antichiral superfield can be constructed as the complex conjugate of a chiral superfield. Actions from chiral superfields For an action which can be defined from a single chiral superfield, see Wess–Zumino model. Vector superfield The vector superfield is a supermultiplet of N=1 supersymmetry. A vector superfield (also known as a real superfield) is a function V(x,θ,θ¯) which satisfies the reality condition V=V†. Such a field admits the expansion V=C+iθχ−iθ¯χ¯+(i/2)θ2(M+iN)−(i/2)θ¯2(M−iN)−θσμθ¯Aμ+iθ2θ¯(λ¯+(i/2)σ¯μ∂μχ)−iθ¯2θ(λ+(i/2)σμ∂μχ¯)+(1/2)θ2θ¯2(D+(1/2)◻C). The constituent fields are: two real scalar fields C and D; a complex scalar field M+iN; two Weyl spinor fields χα and λα; and a real vector field (gauge field) Aμ. Their transformation properties and uses are further discussed in supersymmetric gauge theory. Using gauge transformations, the fields C,χ and M+iN can be set to zero. This is known as Wess-Zumino gauge. In this gauge, the expansion takes on the much simpler form VWZ=θσμθ¯Aμ+θ2θ¯λ¯+θ¯2θλ+(1/2)θ2θ¯2D. Then λ is the superpartner of Aμ, while D is an auxiliary scalar field; it is conventionally called D, and is known as the D-term. Scalars: A scalar is never the highest component of a superfield; whether it appears in a superfield at all depends on the dimension of the spacetime. For example, in a 10-dimensional N=1 theory the vector multiplet contains only a vector and a Majorana–Weyl spinor, while its dimensional reduction on a d-dimensional torus is a vector multiplet containing d real scalars. Similarly, in an 11-dimensional theory there is only one supermultiplet with a finite number of fields, the gravity multiplet, and it contains no scalars. However again its dimensional reduction on a d-torus to a maximal gravity multiplet does contain scalars. Hypermultiplet: A hypermultiplet is a type of representation of an extended supersymmetry algebra, in particular the matter multiplet of N=2 supersymmetry in 4 dimensions, containing two complex scalars Ai, a Dirac spinor ψ, and two further auxiliary complex scalars Fi. The name "hypermultiplet" comes from the old term "hypersymmetry" for N=2 supersymmetry used by Fayet (1976); this term has been abandoned, but the name "hypermultiplet" for some of its representations is still used. Extended supersymmetry (N > 1): This section records some commonly used irreducible supermultiplets in extended supersymmetry in the d=4 case. These are constructed by a highest-weight representation construction in the sense that there is a vacuum vector annihilated by the supercharges QA,A=1,⋯,N. The irreps have dimension 2^N. For supermultiplets representing massless particles, on physical grounds the maximum allowed N is N=8, while for renormalizability, the maximum allowed N is N=4. N = 2 The N=2 vector or chiral multiplet Ψ contains a gauge field Aμ, two Weyl fermions λ,ψ, and a scalar ϕ (which also transform in the adjoint representation of a gauge group).
These can also be organised into a pair of N=1 multiplets, an N=1 vector multiplet W=(Aμ,λ) and chiral multiplet Φ=(ϕ,ψ) . Such a multiplet can be used to define Seiberg–Witten theory concisely. Extended supersymmetry (N > 1): The N=2 hypermultiplet or scalar multiplet consists of two Weyl fermions and two complex scalars, or two N=1 chiral multiplets. N = 4 The N=4 vector multiplet contains one gauge field, four Weyl fermions, six scalars, and CPT conjugates. This appears in N = 4 supersymmetric Yang–Mills theory.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Anatomical terms of bone** Anatomical terms of bone: Many anatomical terms descriptive of bone are defined in anatomical terminology, and are often derived from Greek and Latin. Bone in the human body is categorized into long bone, short bone, flat bone, irregular bone and sesamoid bone. Types of bone: Long bones A long bone is one that is cylindrical in shape, being longer than it is wide. However, the term describes the shape of a bone, not its size, which is relative. Long bones are found in the arms (humerus, ulna, radius) and legs (femur, tibia, fibula), as well as in the fingers (metacarpals, phalanges) and toes (metatarsals, phalanges). Long bones function as levers; they move when muscles contract. Types of bone: Short bones A short bone is one that is cube-like in shape, being approximately equal in length, width, and thickness. The only short bones in the human skeleton are in the carpals of the wrists and the tarsals of the ankles. Short bones provide stability and support as well as some limited motion. Types of bone: Flat bones The term “flat bone” is something of a misnomer because, although a flat bone is typically thin, it is also often curved. Examples include the cranial (skull) bones, the scapulae (shoulder blades), the sternum (breastbone), and the ribs. Flat bones serve as points of attachment for muscles and often protect internal organs. Flat bones do not have a medullary cavity because they are thin. Types of bone: Irregular bones An irregular bone is one that does not have an easily classified shape and defies description. These bones tend to have more complex shapes, like the vertebrae that support the spinal cord and protect it from compressive forces. Many facial bones, particularly the ones containing sinuses, are classified as irregular bones. Types of bone: Sesamoid bones A sesamoid bone is a small, round bone that, as the name suggests, is shaped like a sesame seed. These bones form in tendons (the sheaths of tissue that connect bones to muscles) where a great deal of pressure is generated in a joint. The sesamoid bones protect tendons by helping them overcome compressive forces. Sesamoid bones vary in number and placement from person to person but are typically found in tendons associated with the feet, hands, and knees. The only type of sesamoid bone that is common to everybody is the kneecap (patella, pl. patellae) which is also the largest of the sesamoid bones. Protrusions: Rounded A condyle is the round prominence at the end of a bone, most often part of a joint – an articulation with another bone. The epicondyle refers to a projection near a condyle, particularly the medial epicondyle of the humerus. These terms derive from Greek. An eminence refers to a relatively small projection or bump, particularly of bone, such as the medial eminence.A process refers to a relatively large projection or prominent bump, as does a promontory such as the sacral promontory.Both tubercle and tuberosity refer to a projection or bump with a roughened surface, with a "tubercle" generally smaller than a "tuberosity". These terms are derived from tuber (Latin: swelling)., as is also protuberance, which occasionally is synonymous with "tuberosity". Protrusions: A ramus (Latin: branch) refers to an extension of bone, such as the ramus of the mandible in the jaw or Superior pubic ramus. Ramus may also be used to refer to nerves, such as the ramus communicans. A facet refers to a small, flattened articular surface. 
Pointed A line refers to a long, thin projection, often with a rough surface. Ridge and crest refer to a long, narrow line. Unlike many words used to describe anatomical terms, the word ridge is derived from Old English. A spine, as well as referring to the spinal cord, may be used to describe a relatively long, thin projection or bump. Special These terms are used to describe bony protuberances in specific parts of the body. Protrusions: The Malleolus (Latin: "small hammer") is the bony prominence on each side of the ankle. These are known as the medial and lateral malleolus. Each leg is supported by two bones, the tibia on the inner side (medial) of the leg and the fibula on the outer side (lateral) of the leg. The medial malleolus is the prominence on the inner side of the ankle, formed by the lower end of the tibia. The lateral malleolus is the prominence on the outer side of the ankle, formed by the lower end of the fibula. Protrusions: The trochanters are parts of the femur, to which muscles attach. It may refer to the greater, lesser, or third trochanter Cavities: Openings The following terms are used to describe cavities that connect to other areas: A foramen is any opening, particularly referring to those in bone. Foramina inside the body of humans and other animals typically allow muscles, nerves, arteries, veins, or other structures to connect one part of the body with another. An example is the foramen magnum in occipital bone. A canal is a long, tunnel-like foramen, usually a passage for notable nerves or blood vessels. An example is the auditory canal. Cavities: Blind-ended The following terms are used to describe cavities that do not connect to other areas: A fossa (from the Latin "fossa", ditch or trench) is a depression or hollow, usually in a bone, such as the hypophyseal fossa, the depression in the sphenoid bone.A meatus is a short canal that opens to another part of the body. An example is the external auditory meatus. Cavities: A fovea (Latin: pit) is a small pit, usually on the head of a bone. An example of a fovea is the fovea capitis of the head of the femur. Walls The following terms are used to describe the walls of a cavity: A labyrinth refers to the bony labyrinth and membranous labyrinth, components of the inner ear, due to their fine and complex structure.A sinus refers to a bony cavity, usually within the skull. Joints: A joint, or articulation is the region where adjacent bones contact each other, for example the elbow, shoulder, or costovertebral joint. Terms that refer to joints include: articular process, referring to a projection that contacts an adjacent bone. suture, referring to an articulation between cranial bones. Features of long bones: Gross features Bones are commonly described with the terms head, neck, shaft, body and base The head of a bone usually refers to the distal end of the bone. The shaft refers to the elongated sections of long bone, and the neck the segment between the head and shaft (or body). The end of the long bone opposite to the head is known as the base. Features of long bones: Internal regions Internal and external The cortex of a bone is used to refer to its outer layers, and medulla used to refer to the inner surface of the bone. Red marrow, in which blood is formed is present in spongy bone as well as in the medullary cavity, while the fatty yellow marrow is present primarily in the medullary cavity.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cosmic latte** Cosmic latte: Cosmic latte is the average color of the universe as perceived from the Earth, found by a team of astronomers from Johns Hopkins University (JHU). In 2002, Karl Glazebrook and Ivan Baldry determined that the average color of the universe was a greenish white, but they soon corrected their analysis in a 2003 paper in which they reported that their survey of the light from over 200,000 galaxies averaged to a slightly beigeish white. The hex triplet value for cosmic latte is #FFF8E7. Discovery of the color: Finding the average color of the universe was not the focus of the study. Rather, the study examined spectral analysis of different galaxies to study star formation. Like Fraunhofer lines, the dark lines displayed in the study's spectral ranges display older and younger stars and allow Glazebrook and Baldry to determine the age of different galaxies and star systems. What the study revealed is that the overwhelming majority of stars formed about 5 billion years ago. Because these stars would have been "brighter" in the past, the color of the universe changes over time, shifting from blue to red as more blue stars change to yellow and eventually red giants. Discovery of the color: As light from distant galaxies reaches the Earth, the average "color of the universe" (as seen from Earth) tends towards pure white, due to the light coming from the stars when they were much younger and bluer. Discovery of the color: Naming the color The corrected color was initially published on the Johns Hopkins News website and updated on the team's initial announcement. Multiple news outlets, including NPR and BBC, displayed the color in stories and some relayed the request by Glazebrook on the announcement asking for suggestions for names, jokingly adding all were welcome as long as they were not "beige".These were the results of a vote of the JHU astronomers involved based on the new color: Though Drum's suggestion of "cappuccino cosmico" received the most votes, the researchers favored Drum's other suggestion, "cosmic latte". "Latte" means "milk" in Italian, Galileo's native language, and the similar "latteo" means "milky", similar to the Italian term for the Milky Way, "Via Lattea". They enjoyed the fact that the color would be similar to the Milky Way's average color as well, as it is part of the sum of the universe. They also claimed to be "caffeine biased".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Familial amyloid cardiomyopathy** Familial amyloid cardiomyopathy: Familial amyloid cardiomyopathy (FAC), or transthyretin amyloid cardiomyopathy (ATTR-CM) results from the aggregation and deposition of mutant and wild-type transthyretin (TTR) protein in the heart. TTR is usually circulated as a homo-tetramer—a protein made up of four identical subunits—however, in FAC populations, TTR dissociates from this typical form and misassembles into amyloid fibrils which are insoluble and resistant to degradation. Due to this resistance to degradation, when amyloid fibrils accumulate in the heart's walls, specifically the left ventricle, rigidity prevents the heart from properly relaxing and refilling with blood: this is called diastolic dysfunction which can ultimately lead to heart failure. Types: There are two types of ATTR-CM: Hereditary (hATTR-CM) and wild type (wATTR-CM).Both mutant and wild-type transthyretin comprise the aggregates because the TTR blood protein is a tetramer composed of mutant and wild-type TTR subunits in heterozygotes. Several mutations in TTR are associated with FAC, including V122I, V20I, P24S, A45T, Gly47Val, Glu51Gly, I68L, Gln92Lys, and L111M. One common mutation (V122I), which is a substitution of isoleucine for valine at position 122, occurs with high frequency in African-Americans, with a prevalence of approximately 3.5%. FAC is clinically similar to senile systemic amyloidosis, in which cardiomyopathy results from the aggregation of wild-type transthyretin exclusively. Presentation: The onset of FAC caused by aggregation of the V122I mutation and wild-type TTR, and senile systemic amyloidosis caused by the exclusive aggregation of wild-type TTR, typically occur after age 60. Greater than 40% of these patients present with carpal tunnel syndrome before developing ATTR-CM. Cardiac involvement is often identified with the presence of conduction system disease (sinus node or atrioventricular node dysfunction) and/or congestive heart failure, including shortness of breath, peripheral edema, syncope, exertional dyspnea, generalized fatigue, or heart block. Unfortunately, echocardiographic findings are indistinguishable from those seen in AL amyloidosis, and include thickened ventricular walls (concentric hypertrophy, both right and left) with a normal-to-small left ventricular cavity, increased myocardial echogenicity, normal or mildly reduced ejection fraction (often with evidence of diastolic dysfunction and severe impairment of contraction along the longitudinal axis), and bi-atrial dilation with impaired atrial contraction. Unlike the situation in AL amyloidosis, the ECG voltage is often normal, although low voltage may be seen (despite increased wall thickness on echocardiography). Marked axis deviation, bundle branch block, and AV block are common, as is atrial fibrillation. Management: Although not based on a human clinical trial, the only currently accepted disease-modifying therapeutic strategy available for familial amyloid cardiomyopathy is a combined liver and heart transplant. Treatments aimed at symptom relief are available, and include diuretics, pacemakers, and arrhythmia management. Thus, Senile systemic amyloidosis and familial amyloid polyneuropathy are often treatable diseases that are misdiagnosed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Allantoic acid** Allantoic acid: Allantoic acid is an organic compound with the chemical formula C4H8N4O4. It is a crystalline acid obtained by hydrolysis of allantoin. In nature, allantoic acid is produced from allantoin by the enzyme allantoinase (encoded by the gene AllB (Uniprot: P77671) in Escherichia coli and other bacteria).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Antibiotic misuse** Antibiotic misuse: Antibiotic misuse, sometimes called antibiotic abuse or antibiotic overuse, refers to the misuse or overuse of antibiotics, with potentially serious effects on health. It is a contributing factor to the development of antibiotic resistance, including the creation of multidrug-resistant bacteria, informally called "super bugs": relatively harmless bacteria (such as Staphylococcus, Enterococcus and Acinetobacter) can develop resistance to multiple antibiotics and cause life-threatening infections. History of antibiotic regulation: Antibiotics have been around since 1928 when penicillin was discovered by Alexander Fleming. In the 1980s, antibiotics that were determined medically important for treatment of animals could be approved under veterinary oversight. In 1996, the National Antimicrobial Resistance Monitoring System (NARMS) was established. Starting in 2010, publications regarding antimicrobial drugs in food became an annual report. Starting in 2012, there was publicly solicited input on how data is to be collected and reported for matters relating to the use of antimicrobials for food-producing animals. Resulting from this, the FDA revised its sampling structure within NARMS with the goal of obtaining more representative livestock data for the key organisms under surveillance. "NARMS partners at CDC and USDA have published over 150 peer-reviewed research articles examining the nature and magnitude of antimicrobial resistance hazards associated with antibiotic use in food-producing animals." In 2014, the FDA began working with the United States Department of Agriculture (USDA) and the Centers of Disease Control and Prevention (CDC) to explore additional mechanisms to obtain data that is representative of antibiotic use in food-producing animals. In 2015, the FDA issued the Veterinary Feed Directive (VFD) final rule, under which veterinarians must authorize the use of antimicrobials within feed for the animals they serve.In addition to antibiotic regulation in food production, there have been numerous policies put in place to regulate antibiotic distribution in healthcare, specifically in hospital settings. In 2014, the CDC officially recognized the need for antimicrobial stewardship within all U.S. hospitals in their publication of the Core Elements of Hospital Antibiotic Stewardship Programs. These programs outline opportunities for reducing unnecessary antibiotic usage, and provide guidelines for antibiotic prescription for common infections. The CDC highlighted post-prescription tactics for antibiotic regulation, such as reassessing dosages and the class or type of antibiotic used, in order to optimally treat each infection. The CDC also emphasized the need for evidence-based prescribing, a practice that focuses on the utilization of evidence and research to make informed medical decisions; these sentiments were echoed by the American Dental Association (ADA) which works to provide detailed guidelines for dentists considering prescribing their patients antibiotics. In 2019, the CDC published a report concerning the issue and updating the public on the effectiveness of past policy. 
This report, titled Antibiotic Resistance Threats in the United States, 2019, indicated which pathogens posed the greatest threat of resistance, and highlighted the importance of infection prevention, providing recommendations for prevention strategies.There has also been a substantial effort to educate not only prescribers, but patients too on the issue of antibiotic misuse. The World Health Organization (WHO) has designated a "World Antimicrobial Awareness Week" in November. In 2021, the week's theme was "Spread Awareness, Stop Resistance" and the organization published many different forms of media including podcasts, articles, and infographics to raise awareness for the issue. In the United States, the CDC has published posters and other materials for the purpose of educating the public on antibiotic resistance. State health departments, such as Colorado's Department of Public Health & Environment, have partnered with the CDC to distribute these materials to healthcare providers. Instances of antibiotic misuse: Antibiotics treats bacterial infections rather than viral infections. Common situations in which antibiotics are overused include the following: Apparent viral respiratory illness in children should not be treated with antibiotics. If there is a diagnosis of bacterial infection, then antibiotics may be used. Despite acute respiratory-tract infections being mainly caused by viruses, as many as 75% of cases are treated with antibiotics. When children with ear tubes get ear infections, they should have antibiotic eardrops put into their ears to go to the infection rather than having oral antibiotics, which are more likely to have unwanted side effects. Swimmer's ear should be treated with antibiotic eardrops, not oral antibiotics. Sinusitis should not be treated with antibiotics because it is usually caused by a virus, and even when it is caused by a bacterium, antibiotics are not indicated except in atypical circumstances as it usually resolves without treatment. Viral conjunctivitis should not be treated with antibiotics. Antibiotics should only be used with confirmation that a patient has bacterial conjunctivitis. Older persons often have bacteria in their urine which is detected in routine urine tests, but unless the person has the symptoms of a urinary tract infection, antibiotics should not be used in response. Eczema should not be treated with oral antibiotics. Dry skin can be treated with lotions or other symptom treatments. The use of topical antibiotics to treat surgical wounds does not reduce infection rates in comparison with non-antibiotic ointment or no ointment at all. The use of doxycycline in acne vulgaris has been associated with increased risk of Crohn's disease. The use of minocycline in acne vulgaris has been associated with skin and gut dysbiosis. Social and economic impact of antibiotic misuse: Antibiotics can cause severe reactions and add significantly to the cost of care. In the United States, antibiotics and anti-infectives are the leading cause of adverse effect from drugs. In a study of 32 States in 2011, antibiotics and anti-infectives accounted for nearly 24 percent of ADEs that were present on admission, and 28 percent of those that occurred during a hospital stay.If antimicrobial resistance continues to increase from current levels, it is estimated that by 2050 ten million people would die every year due to lack of available treatment and the world's GDP would be 2 – 3.5% lower in 2050. 
If worldwide action is not taken to combat antibiotic misuse and the development of antimicrobial resistance, from 2014 – 2050 it is estimated that 300 million people could die prematurely due to drug resistance and $60 – 100 trillion of economic output would be lost. If the current worldwide development of antimicrobial resistance is delayed by just 10 years, $65 trillion of the world's GDP output can be saved from 2014 to 2050.Prescribing by an infectious disease specialist compared with prescribing by a non-infectious disease specialist decreases antibiotic consumption and reduces costs. Antibiotic resistance: Though antibiotics are required to treat severe bacterial infections, misuse has contributed to a rise in bacterial resistance. The overuse of fluoroquinolone and other antibiotics fuels antibiotic resistance in bacteria, which can inhibit the treatment of antibiotic-resistant infections. Their excessive use in children with otitis media has given rise to a breed of bacteria resistant to antibiotics entirely. Additionally, the use of antimicrobial substances in building materials and personal care products has contributed to a higher percentage of antibiotic resistant bacteria in the indoor environment, where humans spend a large majority of their lives.Widespread use of fluoroquinolones as a first-line antibiotic has led to decreased antibiotic sensitivity, with negative implications for serious bacterial infections such as those associated with cystic fibrosis, where quinolones are among the few viable antibiotics. Inappropriate use: Human health Antibiotics have no effect on viral infections such as the common cold. They are also ineffective against sore throats, which are usually viral and self-resolving. Most cases of bronchitis (90–95%) are viral as well, passing after a few weeks—the use of antibiotics against bronchitis is superfluous and can put the patient at risk of developing adverse reactions. If you take an antibiotic when you have a viral infection, the antibiotic attacks bacteria in your body, bacteria that are either beneficial or at least not causing disease. This misdirected treatment can then promote antibiotic-resistant properties in harmless bacteria that can be shared with other bacteria, or create an opportunity for potentially harmful bacteria to replace the harmless ones.Official guidelines by the American Heart Association for dental antibiotic prophylaxis call for the administration of antibiotics to prevent infective endocarditis. Though the current (2007) guidelines dictate more restricted antibiotic use, many dentists and dental patients follow the 1997 guidelines instead, leading to overuse of antibiotics.A study by Imperial College London in February 2017 found that of 20 online websites, 9 would provide antibiotics (illegally) without a prescription to UK residents.Studies have shown that common misconceptions about the effectiveness and necessity of antibiotics to treat common mild illnesses contribute to their overuse. Antibiotics should also be used at the lowest dose for the shortest course. For example, research in the UK has shown that a 3-day course of antibiotics (amoxicillin) was as effective as 7-day course for treating children with pneumonia. 
Inappropriate use: Common examples of avoidable antibiotic misuse in clinics include: 1) inadequate dosing; 2) unnecessarily wide spectrum; 3) unnecessary double anaerobic coverage; 4) limited intravenous-to-oral shift; 5) unnecessarily long antibiotic therapy duration; 6) limited access to outpatient parenteral antibiotic therapy (OPAT); 7) limited exploitation of the PK/PD potential of a certain antibiotic; 8) limited clinical use of biomarkers; 9) limited knowledge of old (but effective) antibiotics; 10) limited antibiotic allergy de-labelling. Livestock There has been significant use of antibiotics in animal husbandry. The most abundant use of antimicrobials worldwide is in livestock; they are typically distributed in animal feed or water for purposes such as disease prevention and growth promotion. Inappropriate use: Debates have arisen surrounding the extent of the impact of these antibiotics, particularly antimicrobial growth promoters, on human antibiotic resistance. Although some sources assert that there remains a lack of knowledge on which antibiotic use generates the most risk to humans, policies and regulations have been put in place to limit any harmful effects, such as the potential of bacteria developing antibiotic resistance within livestock, and of bacteria transferring resistance genes to human pathogens. Many countries already ban growth promotion, and the European Union has banned the use of antibiotics for growth promotion since 2006. On 1 January 2017, the FDA enacted legislation to require that all human medically important feed-grade antibiotics (many previously over-the-counter drugs) become classified as Veterinary Feed Directive drugs (VFD). This action requires that farmers establish and work with veterinarians to receive a written VFD order. In effect, this rule requires an established veterinarian-client-patient relationship (VCPR). Through this relationship, farmers will receive increased education in the form of advice and guidance from their veterinarian. Resistant bacteria in food can cause infections in humans. As in humans, giving antibiotics to food animals will kill most bacteria, but resistant bacteria can survive. When food animals are slaughtered and processed, resistant germs in the animal gut can contaminate the meat or other animal products. Inappropriate use: Resistant germs from the animal gut can also get into the environment, like water and soil, from animal manure. If animal manure or water containing resistant germs is used on fruits, vegetables, or other produce as fertilizer or irrigation, then this can spread resistant germs.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Coumachlor** Coumachlor: Coumachlor is a first-generation anticoagulant rodenticide which blocks the formation of prothrombin and inhibits blood coagulation, causing death by internal haemorrhage. The chemical can be absorbed through the skin. Symptoms of human exposure can include nosebleeds, bleeding gums, bloody urine, extensive bruising in the absence of injury, fatigue, and shortness of breath (dyspnea) on exertion. Ingestion or inhalation of the compound can also cause fluid in the lungs (pulmonary edema).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Wigner fusion** Wigner fusion: The Wigner fusion research groups are involved in magnetically confined nuclear fusion experiments around the world. Wigner fusion consists of research groups from four different research institutes and universities, three of which are located in the Department of Plasma Physics at the Wigner Research Centre for Physics and one in the Institute of Nuclear Techniques (INT) of the Budapest University of Technology and Economics; other specialists are involved from the Centre for Energy Research and from the Institute for Nuclear Research of the Hungarian Academy of Sciences, under the coordination of the Wigner Research Centre for Physics. Wigner fusion is connected to the European fusion research programme through the EUROfusion consortium, which coordinates fusion research in Europe. At Wigner fusion, more than 40 researchers, engineers and technicians work together in these research groups, which are involved in more than half a dozen magnetic confinement experiments around the world, such as ITER, JET, ASDEX Upgrade, W7-X, KSTAR, EAST, MAST-Upgrade and COMPASS. Wigner fusion: The research groups of Wigner fusion are: Pellet and Video Diagnostics Group (Wigner RCP); ITER and Fusion Diagnostics Group (Wigner RCP); Beam Emission Spectroscopy Group (Wigner RCP); Fusion Research Group (BME NTI).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Constrained optimization** Constrained optimization: In mathematical optimization, constrained optimization (in some contexts called constraint optimization) is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which have some variable values that are penalized in the objective function if, and based on the extent that, the conditions on the variables are not satisfied. Relation to constraint-satisfaction problems: The constrained-optimization problem (COP) is a significant generalization of the classic constraint-satisfaction problem (CSP) model. COP is a CSP that includes an objective function to be optimized. Many algorithms are used to handle the optimization part. General form: A general constrained minimization problem may be written as follows: minimize f(x) subject to the equality constraints gi(x)=ci for i=1,…,n and the inequality constraints hj(x)≥dj for j=1,…,m, where gi(x)=ci and hj(x)≥dj are constraints that are required to be satisfied (these are called hard constraints), and f(x) is the objective function that needs to be optimized subject to the constraints. In some problems, often called constraint optimization problems, the objective function is actually the sum of cost functions, each of which penalizes the extent (if any) to which a soft constraint (a constraint which is preferred but not required to be satisfied) is violated. Solution methods: Many constrained optimization algorithms can be adapted to the unconstrained case, often via the use of a penalty method. However, search steps taken by the unconstrained method may be unacceptable for the constrained problem, leading to a lack of convergence. This is referred to as the Maratos effect. Solution methods: Equality constraints Substitution method For very simple problems, say a function of two variables subject to a single equality constraint, it is most practical to apply the method of substitution. The idea is to substitute the constraint into the objective function to create a composite function that incorporates the effect of the constraint. For example, assume the objective is to maximize f(x,y)=x⋅y subject to x+y=10. The constraint implies y=10−x, which can be substituted into the objective function to create f(x)=x(10−x)=10x−x². The first-order necessary condition gives 10−2x=0, which can be solved for x=5 and, consequently, y=10−5=5. Lagrange multiplier If the constrained problem has only equality constraints, the method of Lagrange multipliers can be used to convert it into an unconstrained problem whose number of variables is the original number of variables plus the original number of equality constraints. Alternatively, if the constraints are all equality constraints and are all linear, they can be solved for some of the variables in terms of the others, and the former can be substituted out of the objective function, leaving an unconstrained problem in a smaller number of variables. Solution methods: Inequality constraints With inequality constraints, the problem can be characterized in terms of the geometric optimality conditions, Fritz John conditions and Karush–Kuhn–Tucker conditions, under which simple problems may be solvable.
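As a concrete check on the substitution-method example above (maximize x⋅y subject to x + y = 10), the same toy problem can also be handed to a numerical solver. This is a minimal sketch that assumes SciPy is available; the solver and starting point are arbitrary choices, and the analytic answer x = y = 5 serves as the reference.

```python
# Minimal sketch: solve "maximize x*y subject to x + y = 10" numerically.
# Assumes SciPy is installed; SLSQP is chosen because it handles equality constraints.
from scipy.optimize import minimize

objective = lambda v: -(v[0] * v[1])          # maximize x*y  <=>  minimize -x*y
constraint = {"type": "eq", "fun": lambda v: v[0] + v[1] - 10}  # x + y - 10 = 0

result = minimize(objective, x0=[1.0, 2.0], method="SLSQP", constraints=[constraint])
print(result.x)  # expected to be close to [5, 5], matching the substitution method
```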
Solution methods: Linear programming If the objective function and all of the hard constraints are linear and some hard constraints are inequalities, then the problem is a linear programming problem. This can be solved by the simplex method, which usually works in polynomial time in the problem size but is not guaranteed to, or by interior point methods, which are guaranteed to work in polynomial time. Solution methods: Nonlinear programming If the objective function or some of the constraints are nonlinear, and some constraints are inequalities, then the problem is a nonlinear programming problem. Quadratic programming If all the hard constraints are linear and some are inequalities, but the objective function is quadratic, the problem is a quadratic programming problem. It is one type of nonlinear programming. It can still be solved in polynomial time by the ellipsoid method if the objective function is convex; otherwise the problem may be NP-hard. KKT conditions Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers. It can be applied under differentiability and convexity conditions. Solution methods: Branch and bound Constraint optimization can be solved by branch-and-bound algorithms. These are backtracking algorithms storing the cost of the best solution found during execution and using it to avoid part of the search. More precisely, whenever the algorithm encounters a partial solution that cannot be extended to form a solution of better cost than the stored best cost, the algorithm backtracks, instead of trying to extend this solution. Solution methods: Assuming that cost is to be maximized, the efficiency of these algorithms depends on how the cost that can be obtained from extending a partial solution is evaluated. Indeed, if the algorithm can backtrack from a partial solution, part of the search is skipped. The lower the estimated cost, the better the algorithm, as a lower estimated cost is more likely to be lower than the best cost of solution found so far. Solution methods: On the other hand, this estimated cost cannot be lower than the effective cost that can be obtained by extending the solution, as otherwise the algorithm could backtrack while a solution better than the best found so far exists. As a result, the algorithm requires an upper bound on the cost that can be obtained from extending a partial solution, and this upper bound should be as small as possible. Solution methods: A variation of this approach called Hansen's method uses interval methods. It inherently implements rectangular constraints. Solution methods: First-choice bounding functions One way for evaluating this upper bound for a partial solution is to consider each soft constraint separately. For each soft constraint, the maximal possible value for any assignment to the unassigned variables is assumed. The sum of these values is an upper bound because the soft constraints cannot assume a higher value. However, this bound may not be tight, because the maximal values of different soft constraints may derive from different assignments: one soft constraint may be maximal for x=a while another constraint is maximal for x=b. Russian doll search This method runs a branch-and-bound algorithm on n problems, where n is the number of variables. Each such problem is the subproblem obtained by dropping a sequence of variables x1,…,xi from the original problem, along with the constraints containing them.
After the problem on variables xi+1,…,xn is solved, its optimal cost can be used as an upper bound while solving the other problems. In particular, the cost estimate of a solution having xi+1,…,xn as unassigned variables is added to the cost that derives from the evaluated variables. In effect, this corresponds to ignoring the evaluated variables and solving the problem on the unassigned ones, except that the latter problem has already been solved. More precisely, the cost of soft constraints containing both assigned and unassigned variables is estimated as above (or using an arbitrary other method); the cost of soft constraints containing only unassigned variables is instead estimated using the optimal solution of the corresponding problem, which is already known at this point. Solution methods: There is similarity between the Russian Doll Search method and dynamic programming. Like dynamic programming, Russian Doll Search solves sub-problems in order to solve the whole problem. But, whereas dynamic programming directly combines the results obtained on sub-problems to get the result of the whole problem, Russian Doll Search only uses them as bounds during its search. Solution methods: Bucket elimination The bucket elimination algorithm can be adapted for constraint optimization. A given variable can indeed be removed from the problem by replacing all soft constraints containing it with a new soft constraint. The cost of this new constraint is computed assuming a maximal value for every value of the removed variable. Formally, if x is the variable to be removed, C1,…,Cn are the soft constraints containing it, and y1,…,ym are their variables except x, the new soft constraint is defined by: C(y1=a1,…,ym=am) = maxa ∑i Ci(x=a, y1=a1,…,ym=am), where the maximum is taken over all values a of the removed variable x. Solution methods: Bucket elimination works with an (arbitrary) ordering of the variables. Every variable is associated with a bucket of constraints; the bucket of a variable contains all constraints in which the variable is the highest in the ordering. Bucket elimination proceeds from the last variable to the first. For each variable, all constraints of the bucket are replaced as above to remove the variable. The resulting constraint is then placed in the appropriate bucket.
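To make the branch-and-bound and elimination ideas above concrete, here is a small illustrative sketch. It is not from the article: the (scope, function) representation of soft constraints, the variable names, and the maximization convention are assumptions made for the example. solve() prunes with the first-choice bound (each soft constraint contributes the most it could still attain), and eliminate() builds the new soft constraint by maximizing over the removed variable, as in the formula above.

```python
# Illustrative sketch only: a tiny weighted-CSP solver (maximization) with
# first-choice bounding, plus the single-variable elimination step.
from itertools import product

def solve(variables, domains, soft_constraints):
    """soft_constraints: list of (scope, fn); fn maps a dict over scope to a value."""
    best = {"value": float("-inf"), "assignment": None}

    def constraint_max(scope, fn, assignment):
        # Largest value fn can still attain, given the partial assignment.
        free = [v for v in scope if v not in assignment]
        values = []
        for combo in product(*(domains[v] for v in free)):
            local = dict(assignment)
            local.update(zip(free, combo))
            values.append(fn({v: local[v] for v in scope}))
        return max(values)

    def bound(assignment):
        # First-choice bound: each soft constraint contributes its own maximum.
        return sum(constraint_max(s, f, assignment) for s, f in soft_constraints)

    def search(i, assignment):
        if i == len(variables):
            value = sum(f({v: assignment[v] for v in s}) for s, f in soft_constraints)
            if value > best["value"]:
                best.update(value=value, assignment=dict(assignment))
            return
        if bound(assignment) <= best["value"]:
            return  # prune: even the optimistic bound cannot beat the incumbent
        for val in domains[variables[i]]:
            assignment[variables[i]] = val
            search(i + 1, assignment)
            del assignment[variables[i]]

    search(0, {})
    return best

def eliminate(x, x_domain, constraints):
    """Replace the soft constraints mentioning x by one constraint without x."""
    new_scope = sorted({v for scope, _ in constraints for v in scope if v != x})

    def new_fn(assignment):
        totals = []
        for a in x_domain:  # take the best value of x for this assignment
            local = dict(assignment)
            local[x] = a
            totals.append(sum(fn({v: local[v] for v in scope})
                              for scope, fn in constraints))
        return max(totals)

    return new_scope, new_fn
```

For instance, solve(["x", "y"], {"x": [0, 1, 2], "y": [0, 1, 2]}, [(["x", "y"], lambda a: a["x"] * a["y"])]) returns a best value of 4 at x = y = 2.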
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Geometric networks** Geometric networks: A geometric network is an object commonly used in geographic information systems to model a series of interconnected features. A geometric network is similar to a graph in mathematics and computer science, and can be described and analyzed using theories and concepts similar to graph theory. Geometric networks are often used to model road networks and public utility networks (such as electric, gas, and water utilities). In recent years, geometric networks have very often been called spatial networks. Composition of a Geometric Network: A geometric network is composed of edges that are connected. Connectivity rules for the network specify which edges are connected and at what points they are connected, commonly referred to as junction or intersection points. These edges can have weights or flow direction assigned to them, which dictate certain properties of these edges that affect analysis results. In the case of certain types of networks, source points (points where flow originates) and sink points (points where flow terminates) may also exist. In the case of utility networks, a source point may correspond to an electric substation or a water pumping station, and a sink point may correspond to a service connection at a residential household. Functions: Networks define the interconnectedness of features. Through analyzing this connectivity, paths from one point to another on the network can be traced and calculated. Through optimization algorithms and utilizing network weights and flow, these paths can also be optimized to show specialized paths, such as the shortest path between two points on the network, as is commonly done in the calculation of driving directions. Networks can also be used to perform spatial analysis to determine points or edges that are encompassed in a certain area or within a certain distance of a specified point. This has applications in hydrology and urban planning, among other fields. Applications: Routing: for calculating driving directions, paths from one point of interest to another, locating nearby points of interest Urban Planning: for site suitability studies, and traffic and congestion studies. Electric Utility Industry: for modeling an electrical grid in GIS, tracing from a generation source Other Public Utilities: for modeling water distribution flow and natural gas distribution
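As noted under Functions above, tracing a shortest path between two junctions is a typical operation on such a network. Below is a minimal, self-contained sketch using Dijkstra's algorithm; the road-network nodes and edge weights are invented for the example, and real GIS software exposes equivalent (and far richer) routing tools.

```python
# Minimal shortest-path trace over a toy geometric network.
# Edge weights might represent length or travel time; values here are made up.
import heapq

def shortest_path(graph, source, target):
    """Dijkstra's algorithm; graph[node] is a dict of neighbour -> edge weight."""
    queue = [(0.0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return float("inf"), []

road_network = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}
print(shortest_path(road_network, "A", "D"))  # (8.0, ['A', 'C', 'B', 'D'])
```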
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rasmussen's encephalitis** Rasmussen's encephalitis: Rasmussen's encephalitis is a rare inflammatory neurological disease, characterized by frequent and severe seizures, loss of motor skills and speech, hemiparesis (weakness on one side of the body), encephalitis (inflammation of the brain), and dementia. The illness affects a single cerebral hemisphere and generally occurs in children under the age of 15. Signs and symptoms: The condition mostly affects children, with an average age of 6 years. However, one in ten people with the condition develops it in adulthood.There are two main stages, sometimes preceded by a 'prodromal stage' of a few months. In the acute stage, lasting four to eight months, the inflammation is active and the symptoms become progressively worse. These include weakness of one side of the body (hemiparesis), loss of vision for one side of the visual field (hemianopia), and cognitive difficulties (affecting learning, memory or language, for example). Epileptic seizures are also a major part of the illness, although these are often partial. Focal motor seizures or epilepsia partialis continua are particularly common, and may be very difficult to control with drugs.In the chronic or residual stage, the inflammation is no longer active, but the affected individual is left with some or all of the symptoms because of the damage that the inflammation has caused. In the long term, most patients are left with some epilepsy, paralysis and cognitive problems, but the severity varies considerably. Pathophysiology: In Rasmussen's encephalitis, there is chronic inflammation of the brain, with infiltration of T lymphocytes into the brain tissue. In most cases, this affects only one cerebral hemisphere, either the left or the right. This inflammation causes permanent damage to the cells of the brain, leading to atrophy of the hemisphere; the epilepsy that this causes may itself contribute to the brain damage. The epilepsy might derive from a disturbed GABA release, the main inhibitory neurotransmitter of the mammalian brain. Pathophysiology: The cause of the inflammation is not known: infection by a virus has been suggested, but the evidence for this is inconclusive. In the 1990s it was suggested that auto-antibodies against the glutamate receptor GluR3 were important in causing the disease, but this is no longer thought to be the case. However, more recent studies report the presence of autoantibodies against the NMDA-type glutamate receptor subunit GluRepsilon2 (anti-NR2A antibodies) in a subset of patients with Rasmussen's encephalitis. There has also been some evidence that patients with RE express auto-antibodies against alpha 7 subunit of the nicotinic acetylcholine receptor. By sequencing T cell receptors from various compartments it could be shown that RE patients present with peripheral CD8+ T-cell expansion which in some cases have been proven for years after disease onset.Rasmussen's encephalitis has been recorded with a neurovisceral porphyria, and acute intermittent porphyria. Diagnosis: The diagnosis may be made on the clinical features alone, along with tests to rule out other possible causes. An EEG will usually show the electrical features of epilepsy and slowing of brain activity in the affected hemisphere, and MRI brain scans will show gradual shrinkage of the affected hemisphere with signs of inflammation or scarring.Brain biopsy can provide very strong confirmation of the diagnosis, but this is not always necessary. 
Treatment: During the acute stage, treatment is aimed at reducing the inflammation. As in other inflammatory diseases, steroids may be used first of all, either as a short course of high-dose treatment, or in a lower dose for long-term treatment. Intravenous immunoglobulin is also effective both in the short term and in the long term, particularly in adults where it has been proposed as first-line treatment. Other similar treatments include plasmapheresis and tacrolimus, though there is less evidence for these. None of these treatments can prevent permanent disability from developing.During the residual stage of the illness when there is no longer active inflammation, treatment is aimed at improving the remaining symptoms. Standard anti-epileptic drugs are usually ineffective in controlling seizures, and it may be necessary to surgically remove or disconnect the affected cerebral hemisphere, in an operation called hemispherectomy or via a corpus callosotomy. This usually results in further weakness, hemianopsia and cognitive problems, but the other side of the brain may be able to take over some of the function, particularly in young children. The operation may not be advisable if the left hemisphere is affected, since this hemisphere contains most of the parts of the brain that control language. However, hemispherectomy is often very effective in reducing seizures. History: It is named for the neurosurgeon Theodore Rasmussen (1910–2002), who succeeded Wilder Penfield as head of the Montreal Neurological Institute, and served as Neurosurgeon-in-Chief at the Royal Victoria Hospital. Society: The Hemispherectomy Foundation was formed in 2008 to assist families with children who have Rasmussen's encephalitis and other conditions that require hemispherectomy.The RE Children's Project was founded in 2010 to increase awareness of Rasmussen's encephalitis. Its primary purpose is to support scientific research directed toward finding a cure for this disease.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hip examination** Hip examination: In medicine, physiotherapy, chiropractic, and osteopathy the hip examination, or hip exam, is undertaken when a patient has a complaint of hip pain and/or signs and/or symptoms suggestive of hip joint pathology. It is a physical examination maneuver. Examination steps: The hip examination, like all examinations of the joints, is typically divided into the following sections: position/lighting/draping, inspection, palpation, motion, and special maneuvers. The middle three steps are often remembered with the saying look, feel, move. Position/lighting/draping Position – for most of the exam the patient should be supine and the bed or examination table should be flat. The patient's hands should remain at their sides with the head resting on a pillow. The knees and hips should be in the anatomical position (knee extended, hip neither flexed nor extended). Lighting – adjusted so that it is ideal. Draping – both of the patient's hips should be exposed so that the quadriceps muscles and greater trochanter can be assessed. Palpation: Inspection Inspection done while the patient is standing Look Front and back of pelvis/hips and legs and comment on Ischaemic or trophic changes· Level of ASIS (anterior superior iliac spine) Swelling (soft tissue, bony swellings) Scars (old injuries, previous surgery) Sinuses (infection, neuropathic ulcers) Wasting (old polio, Charcot-Marie-Tooth) or hypertrophy (e.g. calf pseudohypertrophy in muscular dystrophy) Deformity (leg length discrepancy, pes cavus, scoliosis, lordosis, kyphosis) Feel Any swellings· Anteriorly in Scarpa's triangle, trochanteric region or gluteal region· Pelvic tilt by palpating level of ASIS (anterior superior iliac spine) Move· Gait: Observe Smooth progression of phases of the gait cycle Comment on stance, toe-off, swing, heel strike, stride and step length Sufficient flexion/extension at hip/knee ankle and foot: Any fixed contractures? Arm-swing and balance on turning around· Abnormal Gait Patterns Trendelenburg (pelvic sway/tilt, aka waddling gait if bilateral) Broad-based (ataxia) High-stepping (loss of proprioception/drop foot) Antalgic (mention “with reduced stance phase on left/right side”) In-toeing (persistent femoral anteversion) Inspection done while supine The hip should be examined for: Masses Scars Lesions Signs of trauma/previous surgery Bony alignment (rotation, leg length) Muscle bulk and symmetry at the hip and knee Measures True leg length – Greater Trochanter of the femur or Anterior Superior Iliac Spine of pelvis to medial malleolus of ipsilateral leg. Palpation: Apparent leg length – umbilicus or xiphisternum (noting which is used) to the medial malleolus of ipsilateral leg. In hip fractures the affected leg is often shortened and externally rotated. Palpation The hip joint lies deep inside the body and cannot normally be directly palpated. To assess for pelvic fracture one should palpate the: Iliac spines Superior and inferior pubic rami Movement: Internal rotation – with knee and hip both flexed at 90 degrees the ankle is abducted. External rotation – with knee and hip both flexed at 90 degrees the ankle is adducted. (also done with the Patrick's test / FABER test) Flexion (also known as the Gaenslen's test) Extension – done with the patient on their side. Alignment should be assessed by palpation of the ASIS, PSIS and greater trochanter. Abduction – assessed whilst palpating the contralateral ASIS. Adduction – assessed whilst palpating the ipsilateral ASIS.
Assessment for a hidden flexion contracture of the hip – hip flexion contractures may be occult, due to compensation by the back. They are assessed by: Placing a hand behind the lumbar region of the back Getting the patient to fully flex the contralateral hip. The hand in the lumbar region is used to confirm the back is straightened (flexed relative to the anatomic position). If there is a flexion contracture in the ipsilateral hip it should be evident, as the hip will appear flexed. Normal range of motion Internal rotation – 40° External rotation – 45° Flexion – 125° Extension – 10–40° Abduction – 45° Adduction – 30° Special maneuvers: Trendelenburg test/sign: Make sure the pelvis is horizontal by palpating the ASIS. Ask the patient to stand on one leg and then on the other. Assess any pelvic tilt by keeping an index finger on each ASIS. Normal (Trendelenburg negative): In the one-legged stance, the unsupported side of the pelvis remains at the same level as the side the patient is standing on, or even rises a little, because of powerful contraction of the hip abductors on the stance leg. Abnormal (Trendelenburg positive): In the one-legged stance, the unsupported side of the pelvis drops below the level of the side the patient is standing on. This is because of (abnormal) weakness of the hip abductors on the stance leg. The latter hip joint may therefore be abnormal. Special maneuvers: Assisted Trendelenburg test If balance is a problem, face the patient and ask them to place their hands on yours to support him/her as he/she does alternate one-legged stances. Increased asymmetrical pressure on one hand indicates a positive Trendelenburg test, on the side of the abnormal hip. A ‘delayed’ Trendelenburg has also been described, where the pelvic tilt appears after a minute or so: this indicates abnormal fatiguability of the hip abductors. Romberg's test This assesses proprioception/balance (dorsal columns of spinal cord/spino-cerebellar pathways). Special maneuvers: Ask the patient to stand with heels together and hands by the side. Ask the patient to close his/her eyes and observe for swaying for about 10 seconds. Most people sway a bit but then quickly decrease the amplitude of swaying. If, however, the swaying is not corrected, or the patient opens the eyes or takes a step to regain balance, Romberg's test is positive. Special maneuvers: When doing this test, stand facing the patient with your arms outstretched and hands at the level of the patient's shoulders to catch or stabilise him/her in case of a positive Romberg's test. Ober's test for tight ITB (IlioTibial Band, also called IlioTibial Tract) is performed with the patient side-lying on the unaffected side and the provider extending the affected hip. Stabilize the pelvis and let the affected leg drop. A positive test is indicated if the leg does not adduct to the table. Special maneuvers: Thomas test for tight hip flexors is performed by the provider holding the unaffected leg to the chest and leaving the affected leg on the table. If the affected leg cannot lie flat on the table it is a positive test. The Kendall test is similar, but the patient holds the unaffected leg to their chest. Special maneuvers: Rectus Femoris Contracture test for tight rectus femoris is performed like the Thomas test, but with the affected leg bent off the end of a table. A positive test is indicated if the thigh is not parallel with the table. Kaltenborn test or Hip Lag Sign for hip abductor function.
To perform the Kaltenborn test, the patient has to lie in a lateral, neutral position with the affected leg being on top. The examiner then positions one arm under this leg to have good hold and control over the relaxed extremity, whereas the other hand stabilizes the pelvis. The next step is to passively extend to 10° in the hip, abduct to 20° and rotate internally as far as possible, while the knee remains in a flexed position of 45°. After the patient is asked to hold the leg actively in this position, the examiner releases the leg. The Hip Lag Sign is considered positive if the patient is not able to keep the leg in the aforementioned abducted, internally rotated position and the foot drops more than 10 cm. To ensure an accurate result, the test should be repeated three times. Other tests: A knee examination should be undertaken in the ipsilateral knee to rule-out knee pathology.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BEX4** BEX4: Brain expressed, X-linked 4 is a protein that in humans is encoded by the BEX4 gene. This gene is a member of the brain expressed X-linked gene family. Proteins encoded by other members of this family act as transcription elongation factors that allow RNA polymerase II to escape pausing during elongation. Multiple alternatively spliced variants encoding the same protein have been identified.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**NqrA-Marinomonas RNA motif** NqrA-Marinomonas RNA motif: The nqrA-Marinomonas RNA motif is a conserved RNA structure that was discovered by bioinformatics. nqrA-Marinomonas motif RNAs are found in Marinomonas. NqrA-Marinomonas RNA motif: All known nqrA-Marinomonas RNAs are found upstream of "nqr" operons whose genes encode subunits of the sodium-translocating NADH:quinone oxidoreductase, an enzyme whose suspected function is to allow marine bacteria to form a sodium gradient. The name of the RNA motif is derived from the first gene in the operon, which is called nqrA. Based on their locations, it is reasonable to hypothesize that nqrA-Marinomonas RNAs function as cis-regulatory elements. However, promoter sequences occur upstream of the nqr operon in the distantly related organism Vibrio anguillarum. The fact that these promoters do not occur upstream of (or in the same inter-genic region as) nqrA-Marinomonas RNAs raises questions about the putative cis-regulatory function of these RNAs. However, it is possible that the promoters that were determined in Vibrio anguillarum are simply too diverged to be detected in Marinomonas species.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Computer-based test interpretation in psychological assessment** Computer-based test interpretation in psychological assessment: Computer-based test interpretation (CBTI) programs are technological tools that have been commonly used to interpret data in psychological assessments since the 1960s. CBTI programs are used for a myriad of psychological tests, like clinical interviews or problem rating, but are most frequently employed in psychological and neuropsychological assessments. CBTI programs are either empirically based or clinically based. The empirically based programs, or actuarial assessment programs, use statistical analyses to interpret the data, while the clinically based programs, or automated assessment programs, rely on information from expert clinicians and research. Although CBTI programs are successful in test-retest reliability, there have been major concerns and criticisms regarding the programs' ability to assess inter-rater and internal consistency reliability. Research has shown that the validity of CBTI programs has not been confirmed, due to the varying reports of individual programs. CBTI programs are very efficient in that they save time, reduce human error, are cost effective, and are objective/reliable, yet limited in that they are not always used by adequately trained evaluators or are not integrated with multiple sources of data. As technology continues to transform our modern society, computer-based interpretation programs have the potential to expand their software and even alleviate some of the current concerns with the programs' methodology. History: Computerized testing methods were first introduced over 60 years ago. The first program able to interpret computerized assessment data was developed in 1962 at the Mayo Clinic. The program was used to evaluate MMPI data from hospital patients and generated a list of 110 possible descriptive statements which corresponded to particular scale elevations. This rudimentary computerized interpretation is not far off from the methods used today. In 1969, the first program able to generate narrative reports based on scale configurations was released. By 1985, it was estimated that as many as 1.5 million MMPI protocols had been interpreted by computer-based test interpretation (CBTI) programs. In 1987, as many as 72 separate suppliers of over 300 computer-based assessment products were in existence, nearly half of which were developed for personality assessment. Since this time, the popularity and accessibility of computer-based testing and CBTI programs have increased dramatically, a trend that will continue into the future as the utilization of technology in the mental health profession increases. Present status: Currently, CBTI programs fall into one of two categories: actuarial assessment programs or automated assessment programs. Actuarial assessment programs are based on statistical or actuarial prediction (e.g., statistical analyses, linear regression equations and Bayesian rules), which is empirically based, while automated assessment programs consist of a series of if-then statements derived by expert clinicians and informed by published research and clinical experience. For the purposes of this article, both types will be referred to as computer-based test interpretations (CBTIs). The use of CBTIs is found in a variety of psychological domains (e.g., clinical interviewing and problem rating), but is most commonly utilized in personality and neuropsychological assessments. 
This article will focus on the use of CBTIs in personality assessment, most commonly using the MMPI and its subsequent revised editions. Reliability: The ability of CBTIs to eliminate human error is considered a benefit, and as a result the reliability of CBTIs is considered to be better than that of clinician interpretations. However, CBTIs have demonstrated poor reliability. Research regarding the equivalence of CBTIs and paper-and-pencil measures has been found to be equivocal (for reviews see). Further, CBTI research has been criticized for failure to assess inter-rater reliability (comparing the interpretation of one protocol by two different programs) and internal consistency reliability (comparing the reliability of different sections of the same interpretation). On the other hand, test-retest reliability of CBTIs is considered perfect (i.e., the same protocol will repeatedly yield the same interpretation) if the same program is used. Validity: Research on the validity of CBTIs tends to utilize three types of studies: external criterion studies (comparing the CBTI report to some external criterion measure of the construct, such as a self-report or behavioral measure), consumer satisfaction studies (asking clients whether the reports are accurate representations of themselves), and comparison with clinical conclusions (comparing CBTI reports to clinician interpretations). Comprehensive reviews of CBTI validity can be found elsewhere (e.g.,). In general, the validity of CBTIs has not been demonstrated, and the validity of individual CBTI systems has been found to vary. However, many validity studies are flawed due to small samples, criterion contamination, the Barnum effect, inadequate input data to generate powerful statistical prediction rules, unreliability of measures, and the practice of generalizing across testing situations and populations without considering potential moderators. Strengths and weaknesses: CBTI programs can be found for nearly every type of personality assessment available today. CBTI programs arguably have many benefits over traditional hand-scored assessments and clinician interpretations, which may contribute to their popularity. For example, CBTI programs save time and eliminate human responding and scoring errors. Further, CBTI programs are often more comprehensive than clinician interpretation, tend to be more reliable than clinician interpretation, are cost effective, and are more objective, which may allow clients to be more accepting of feedback. Strengths and weaknesses: Despite these benefits, there are significant limitations of CBTIs to consider. For example, CBTI reports may suggest an unwarranted impression of scientific precision, and reports may be too general to provide differential information. Additionally, CBTIs may promote exceedingly cavalier attitudes towards clinical assessment and interpretation, and as they are increasingly available to inadequately trained evaluators, the potential for misuse is high. Clinicians are cautioned to educate themselves before using CBTI programs, not to blindly interpret computer-generated reports as true, and not to use CBTIs as a way to circumvent their responsibility as clinicians to integrate multiple sources of data. Future: As our healthcare system and society as a whole become increasingly reliant on technology, it is inevitable that the availability and use of CBTI software will also expand. 
The potential of the internet for extending the use of CBTIs has been recognized, although the potential problems associated with this modality have yet to be fully understood and will need to be addressed before internet-based CBTI use proliferates. In addition, the application of computer-adaptive testing, although successfully applied in other assessment domains (i.e., ability and aptitude), provides a promising yet under-researched addition to personality assessment. Lastly, there is a call for the more effective integration of clinical and computer-based prediction methods, beginning with a partnership between clinicians and researchers in the development of CBTI programs.
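To make the actuarial versus automated distinction above concrete, here is a minimal illustrative sketch in Python; the scale names, the cut-off of 65, the weights, and the narrative statements are all invented for illustration and are not taken from the MMPI or any real CBTI product.

```python
# Toy interpretation of a made-up two-scale profile (hypothetical T-scores).
profile = {"scale_A": 72, "scale_B": 48}

def automated_report(p):
    """'Automated' style: if-then rules of the kind an expert clinician might encode."""
    statements = []
    if p["scale_A"] >= 65:
        statements.append("Scale A is elevated; consider a follow-up interview.")
    if p["scale_A"] >= 65 and p["scale_B"] < 50:
        statements.append("High-A/low-B configuration noted.")
    return statements

def actuarial_index(p):
    """'Actuarial' style: a statistically derived linear rule (weights are invented)."""
    return 0.04 * p["scale_A"] - 0.02 * p["scale_B"] - 1.5

print(automated_report(profile))
print(round(actuarial_index(profile), 2))
```

The point of the sketch is only the contrast: the first function encodes clinical judgment as explicit rules, while the second applies a fixed statistical formula to the same input.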
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nickel manganese oxide** Nickel manganese oxide: Nickel manganese oxides, or nickel manganates, are spinel-structure compounds of nickel, manganese and oxygen of the form Ni(x)Mn(3-x)O(y). Some common forms are Ni2MnO4, NiMn2O4, and Ni1.5Mn1.5O4. They are most commonly formulated for use in thin film resistors and thermistors. The resistivity and temperature coefficient can be accurately controlled in the manufacturing process. In nickel-metal hydride batteries the addition of manganese oxide provides for formation of spinel-structure nickel manganates in various oxidation states, with higher conductivity and charge capacity than nickel hydroxides alone. Nickel manganese oxide: Although nickel manganates exhibit ferromagnetic behaviour at low temperatures, they have a paramagnetic behavior at room temperature. The magnetic behavior depends on the cation distribution and spin alignment among them. In nickel manganates, the Ni2+ and Mn4+ ions have a strong preference for occupying octahedral sites instead of tetrahedral sites. The Mn2+ ion occupies tetrahedral sites and the Mn3+ ion can occupy both. These compounds have also been studied as electrodes for supercapacitors and as photocatalysts, taking advantage of the change in the oxidation state of the Mn cations.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Overdrafting** Overdrafting: Overdrafting is the process of extracting groundwater beyond the equilibrium yield of an aquifer. Groundwater is one of the largest sources of fresh water and is found underground. The primary cause of groundwater depletion is the excessive pumping of groundwater up from underground aquifers. Overdrafting: There are two sets of yields: safe yield and sustainable yield. Safe yield is the amount of groundwater that can be withdrawn over a period of time without exceeding the long-term recharge rate or affecting the aquifer integrity. Sustainable yield is the amount of water extraction that can be sustained indefinitely without negative hydrological impacts, taking into account both the recharge rate and surface water impacts. There are two types of aquifers: confined and unconfined. In confined aquifers, there is an overlying layer called an aquitard, which contains impermeable materials through which groundwater cannot be extracted. In unconfined aquifers, there is no aquitard, and groundwater can be freely extracted from the surface. Extracting groundwater from unconfined aquifers is like borrowing the water: it has to be recharged at a proper rate. Recharge can happen through artificial recharge and natural recharge. Insufficient recharge can lead to depletion, reducing the usefulness of the aquifer for humans. Depletion can also have impacts on the environment around the aquifer, such as soil compression and land subsidence, local climatic change, soil chemistry changes, and other deterioration of the local environment. Mechanism: When groundwater is extracted from an aquifer, a cone of depression is created around the well. As the drafting of water continues, the cone increases in radius. Extracting too much water (overdrafting) can lead to negative impacts such as a drop in the water table, land subsidence, and loss of surface water reaching the streams. In extreme cases, the supply of water that naturally recharges the aquifer is pulled directly from streams and rivers, lowering their water levels. This affects wildlife, as well as humans who might be using the water for other purposes. The natural process of aquifer recharge takes place through the percolation of surface water. An aquifer may be artificially recharged, such as by pumping reclaimed water from wastewater management projects directly into the aquifer. An example is the Orange County Water District in California. This organization takes wastewater, treats it to a proper level, and then systematically pumps it back into the aquifers for artificial recharge. Mechanism: Since every groundwater basin recharges at a different rate depending on precipitation, vegetative cover, and soil conservation practices, the quantity of groundwater that can be safely pumped varies greatly among regions of the world and even within provinces. Some aquifers require a very long time to recharge, and thus overdrafting can effectively dry up certain sub-surface water supplies. Subsidence occurs when excessive groundwater is extracted from rocks that support more weight when saturated. This can lead to a capacity reduction in the aquifer. Changes in freshwater availability stem from natural and human activities (in conjunction with climate change) that interfere with groundwater recharge patterns. One of the leading anthropogenic activities causing groundwater depletion is irrigation. 
Roughly 40% of global irrigation is supported by groundwater, and irrigation is the primary activity causing groundwater storage loss across the U.S. Around the world: This ranking is based on the amount of groundwater each country uses for agriculture. This issue is becoming significant in the United States (most notably in California), but it has been an ongoing problem in other parts of the world, such as was documented in Punjab, India, in 1987. Around the world: United States In the U.S., an estimated 800 km3 of groundwater was depleted during the 20th century. The development of cities and other areas of highly concentrated water usage has created a strain on groundwater resources. In post-development scenarios, interactions between surface water and groundwater are reduced; there is less intermixing between the surface and subsurface (interflow), leading to depleted water tables. Groundwater recharge rates are also affected by rising temperatures, which increase surface evaporation and transpiration, resulting in decreased water content of the soil. Anthropogenic changes to groundwater storage, such as over-pumping and the depletion of water tables combined with climate change, effectively reshape the hydrosphere and impact the ecosystems that depend on the groundwater. Accelerated decline in subterranean reservoirs: According to a 2013 report by research hydrologist Leonard F. Konikow at the United States Geological Survey (USGS), the depletion of the Ogallala Aquifer between 2001 and 2008 is about 32% of the cumulative depletion during the entire 20th century. In the United States, the biggest users of water from aquifers include agricultural irrigation, and oil and coal extraction. According to Konikow, "Cumulative total groundwater depletion in the United States accelerated in the late 1940s and continued at an almost steady linear rate through the end of the century. In addition to widely recognized environmental consequences, groundwater depletion also adversely impacts the long-term sustainability of groundwater supplies to help meet the Nation's water needs." As reported by another USGS study of withdrawals from 66 major US aquifers, the three greatest uses of water extracted from aquifers were irrigation (68%), public water supply (19%), and "self-supplied industrial" (4%). The remaining 8% of groundwater withdrawals were for "self-supplied domestic, aquaculture, livestock, mining, and thermoelectric power uses." Environmental impacts: The environmental impacts of overdrafting include: Groundwater-related subsidence: the collapse of land due to lack of support (from the water that is being depleted). The first recorded case of land subsidence was in the 1940s. Land subsidence can be as little as local land collapsing or as large as an entire region's land being lowered. The subsidence can lead to infrastructural and ecosystem damage. Environmental impacts: Lowering of the water table, which makes it harder for water to reach streams and rivers Reduction of water volume in streams and lakes because their supply of water is being diminished by surface water recharging the aquifers Impacts on animals that depend on streams and lakes for food, water, and habitat Deterioration of air quality and water quality Increase in the cost of water to the consumer due to a lower water table; more energy is needed to pump further down, so operating costs increase for companies, who pass on the expense to the consumer Decrease in crop production from lack of water (a large loss in the U.S. 
in particular, where 60% of irrigation relies on groundwater) Disturbances to the water cycle Groundwater-related subsidence Climatic changes Aquifer drawdown or overdrafting and the pumping of fossil water may be contributing to sea-level rise. By increasing the amount of moisture available to fall as precipitation, this pumping also makes severe weather events more likely to occur. To some extent, moisture in the atmosphere increases the probability of a global warming event; the correlation coefficient is not yet scientifically determined. Socio-economic effects: Scores of countries are overpumping aquifers as they struggle to satisfy their growing water needs, including each of the big three grain producers: China, India, and the United States. These three, along with several other countries where water tables are falling, are home to more than half the world's people. Water is intrinsic to biological and economic growth, and overdrafting reduces its available supply. According to Liebig's law of the minimum, population growth is therefore impeded. Deeper wells must be drilled as the water table drops, which can become expensive. In addition, the energy needed to extract a given volume of water increases with the amount the aquifer has been depleted. The deeper the water is extracted from, the worse its quality becomes, which increases the cost of filtration. Saltwater intrusion is another consequence of overdrafting, leading to a reduction in water quality. Possible solutions: Recharge is the natural replenishment of groundwater; artificial recharge is its man-made replenishment, though there is only a limited amount of suitable water available for replenishing. In areas where recharge alone will not work, decreased water use can also be employed. Notably, this requires actions such as switching to less water-intensive crops. Consumptive use refers to the water that is naturally taken from the system (for example, in transpiration).
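The relationship between recharge, withdrawal, and depletion described above can be illustrated with a toy annual water-balance calculation; the figures below are invented for illustration only and are not measured values for any real aquifer.

```python
# Toy water balance: storage falls whenever withdrawal exceeds recharge (overdraft).
# All quantities are illustrative, in cubic kilometres per year.
recharge = 2.0      # assumed natural plus artificial recharge
withdrawal = 3.5    # assumed pumping rate
storage = 100.0     # assumed current aquifer storage

for year in range(1, 6):
    storage += recharge - withdrawal
    status = "overdraft" if withdrawal > recharge else "within safe yield"
    print(f"year {year}: storage = {storage:.1f} km^3 ({status})")
```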
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cracking the Particle Code of the Universe: The Hunt for the Higgs Boson** Cracking the Particle Code of the Universe: The Hunt for the Higgs Boson: Cracking the Particle Code of the Universe: The Hunt for the Higgs Boson is a 2014 popular science book by Canadian physicist John Moffat. The first half of the book gives the reader an explanation of the particle physicists' Standard Model and the physical concepts associated with it, together with some possible alternatives to, and extensions of, the Standard Model. In the second half of the book, Moffat gives his personal account (up to March 2013) of how the discovery of the Higgs boson actually happened at the Large Hadron Collider (LHC). He writes about conferences he attended and interviews with some of the LHC physicists. The book received favorable reviews from Sabine Hossenfelder in Physics World and from Michael Peskin in Physics Today.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Phosphatidylinositol-3,4,5-trisphosphate 5-phosphatase** Phosphatidylinositol-3,4,5-trisphosphate 5-phosphatase: Phosphatidylinositol-3,4,5-trisphosphate 5-phosphatase (EC 3.1.3.86, SHIP, p150Ship) is an enzyme with systematic name 1-phosphatidyl-1D-myo-inositol-3,4,5-trisphosphate 5-phosphohydrolase, that has two isoforms: SHIP1 (produced by the gene INPP5D) and SHIP2 (INPPL1). This enzyme catalyses the following chemical reaction 1-phosphatidyl-1D-myo-inositol 3,4,5-trisphosphate + H2O ⇌ 1-phosphatidyl-1D-myo-inositol 3,4-bisphosphate + phosphateThis enzyme hydrolyses 1-phosphatidyl-1D-myo-inositol 3,4,5-trisphosphate (PtdIns(3,4,5)P3) to produce PtdIns(3,4)P2.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Crotamiton** Crotamiton: Crotamiton is a drug that is used both as a scabicidal (for treating scabies) and as a general antipruritic (anti-itching drug). It is a prescription, lotion-based medicine that is applied to the whole body to get rid of the scabies parasite that burrows under the skin and causes itching. Use: For treating scabies, crotamiton should be applied to the whole body rather than a localized area. It is applied two or three times, with a 24-hour delay between applications, and the patient is asked to take a shower no sooner than after 48 hours. For children under 3 years, it is applied once daily. It can also be used to treat itching stemming from other causes, e.g. insect bites, in which case it is applied to the itching areas only, and repeated if necessary after 4 to 8 hours. Use near the eyes, or breaks in the skin, should be avoided. Pharmacology: Crotamiton is toxic to the scabies mite. It probably acts as a general antipruritic by inhibition of TRPV4, a sensory ion channel that is expressed in the skin and primary sensory neurons. Pharmacokinetics: After topical application, crotamiton is absorbed systemically. It has an elimination half-life of 30.9 hours and 4.8-8.8% is excreted in the urine. Side effects: The most common side effect of crotamiton is skin irritation. Trade name: Crotamiton is marketed under the trade names Eurax, which is manufactured by Ranbaxy Laboratories in the United States, and GlaxoSmithKline in the United Kingdom, and Euracin, which is manufactured by Green Cross in South Korea. In Germany, it is marketed under the brand name Crotamitex. In India, it is sold as Eurax by Ranbaxy Laboratories.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GigaMesh Software Framework** GigaMesh Software Framework: The GigaMesh Software Framework is a free and open-source software for display, editing and visualization of 3D-data typically acquired with structured light or structure from motion. It provides numerous functions for analysis of archaeological objects like cuneiform tablets, ceramics or converted LiDAR data. Typical applications are unwrappings (or rollouts), profile cuts (or cross sections) as well as visualizations of distances and curvature, which can be exported as raster graphics or vector graphics. GigaMesh Software Framework: The retrieval of text from 3D data, such as damaged cuneiform tablets or weathered medieval headstones, using Multi Scale Integral Invariant (MSII) filtering is a core function of the software. Furthermore, small or faint surface details like fingerprints can be visualized. The polygonal meshes of the 3D-models can be inspected, cleaned and repaired to provide optimal filtering results. The repaired datasets are suitable for 3D printing and for digital publishing in a dataverse. Name and logo: The name "GigaMesh" refers to the processing of large 3D-datasets and relates intentionally to the mythical Sumerian king Gilgamesh and his heroic epic described on a set of clay tablets. The central element of the logo is the cuneiform sign 𒆜 (kaskal) meaning street or road junction, which symbolizes the intersection of the humanities and computer science. The surrounding circle refers to the integral invariant computation using a spherical domain. The red color is derived from carmine, the color used by the Heidelberg University, where GigaMesh is developed. Development and application in research projects: The development began in 2009 and was inspired by the edition project Keilschrifttexte aus Assur literarischen Inhalts (KAL, cuneiform texts with literary content) of the Heidelberg Academy of Sciences and Humanities. In parallel it was applied within the Austrian Corpus Vasorum Antiquorum of the Austrian Academy of Sciences for documentation of red-figure pottery. Current projects are funded by the DFG and the BMBF for contextualization and analysis of seals and sealings of the Corpus der minoischen und mykenischen Siegel, where Thin Plate Splines are used for comparing sealings. Analogous to the developments for processing cuneiform tablets, there are further approaches for adapting the combined computer vision and machine learning methods to other scripts in 3D. An example is the application within the Text Database and Dictionary of Classic Mayan. In 2017 GigaMesh was tested by the DAI at an excavation in Guadalupe, near Trujillo, Honduras for immediate visualization of in-situ acquired findings with different 3D-scanners, including a comparison with manual drawings. Since then GigaMesh has been used permanently by the excavation team, whose feedback led to numerous changes to the GUI, improving the user experience (UX). Additionally, online tutorials have been published with a focus on tasks required to compile excavation reports. Development and application in research projects: The Scanning for Syria (SfS) project of the Leiden University used GigaMesh in 2018 for 3D reconstruction of molds of tablets lost in ar-Raqqa, Syria, based on Micro-CT-scans. As a follow-up project the TU Delft acquired further Micro-CT-scans for virtually extracting clay tablets still wrapped in clay envelopes, which have remained unopened for thousands of years. 
Development and application in research projects: In May 2020 the SfS project won the European Union Prize for Cultural Heritage of Europa Nostra in the research category. A first version (190416) for Windows was released in preparation for presentations about new functions shown at the international CAA 2019. The command line interface of GigaMesh is well suited to process large amounts of 3D measurement data within repositories. This was demonstrated with almost 2,000 cuneiform tablets of the Hilprecht Collection of the Jena University, which were processed and digitally published as a benchmark dataset (HeiCuBeDa) for machine learning as well as a database of images including 3D- and meta-data (HeiCu3Da) using CC BY licenses. A baseline for period classification of tablets was established using a geometric neural network, a convolutional neural network of the type typically used for 3D datasets. The Louvre showed GigaMesh-based rollouts of an Aryballos from the collection of the KFU Graz, representing the use of digital methods for research on pottery of ancient Greece within the CVA project, which had its 100th anniversary in 2019. Renderings of the rollouts were on display in the second half of 2019 in the display case named L'ère du numèrique et de l'imagerie scientifique (the digital era and scientific imaging). Version 191219 supports texture maps common for 3D data captured using photogrammetry. This allows processing and in particular unwrapping of objects acquired with Structure-from-Motion, which is widely used for documentation of cultural heritage and in archaeology. The Nara National Research Institute for Cultural Properties in Japan adapted GigaMesh for documentation and rollouts of vessels and published a tutorial, which was used to implement the workflow for ceramics of the Jōmon period within the Togariishi Museum of Jōmon Archaeology. In April 2020 the source code was published on GitLab and the license changed from freeware to the GPL. Version 200529 allows, for the first time, the MSII filter to be applied from the graphical user interface to visualize the smallest details like fingerprints. The DFG-funded edition of texts from the Haft Tepe project uses MSII-filtered renderings of tablets in the so-called fat-cross arrangement of side views. GigaMesh is increasingly being used in areas that have methodological overlap with archaeology, such as geoengineering for the analysis of seashells. Development and application in research projects: File formats and research data infrastructures Primarily the Polygon File Format is supported and used to store additional information from the processing. This is not possible with the additionally supported Wavefront OBJ format due to its specification. The marking of interpolated points and triangles by filling voids in the triangular grid represents meta-information to be captured, e.g. in the context of the National Research Data Infrastructure (NFDI) in Germany. Other metadata such as inventory numbers, material, and hyperlinks or Digital Object Identifiers (DOIs) can be captured. In addition, there is the ability to calculate topological metrics that describe the quality of a 3D measurement dataset.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Steam exploded fiber** Steam exploded fiber: Steam exploded fiber is a pulping and extraction technique that has been applied to wood and other fibrous organic material. The resulting fibers can be combined with organic polymers to produce fiber composite materials. Alternatively, the fibers, along with other extracted substances, can be processed chemically or digested to produce ethanol and other useful substances.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Strip game** Strip game: Strip games or stripping games are games which have clothing removal as a gameplay element or mechanic. The clothing may be removed to keep score, or as a penalty for a loss. Some games are sexualised and the eventual complete loss of clothing is considered part of a usual game in the style of a striptease, whereas others merely presume the loss of clothing as an inconvenience. While games involving stripping have been invented independently of non-stripping games, it is also the case that games not normally involving clothing loss can be adapted into strip games. In such instances, some rulesets are more amenable to adaptation than others. Notable games: Mahjong Many strip mahjong (脱衣麻雀, だついマージャン, datsui mājan) video games have been published, especially by Japanese software companies. Murchéh daréh "Murcheh dareh che kar konam?" ("There are ants [here], what shall I do?"), or just "Murcheh dareh" (مورچه داره, There Are Ants), is a traditional Iranian dance-based game played by women, although a parody played by men exists ("Zar gazidam"). Notable games: Poker Strip poker is a party game and a variation of traditional poker in which players remove clothing when they lose a round. Any form of poker can be adapted to a strip form; however, it is usually played with simple variants with few betting rounds, like five-card draw. Strip poker can be played by single-sex groups or by mixed groups in social situations and is intended to generate an atmosphere of fun and to lighten the social atmosphere by the removal of clothing. While the game is sometimes employed as a type of foreplay, it is itself usually not considered a sexual interaction due to the fact that it does not require contact and full nudity is involved only at the end of the game or sometimes not at all (depending on rule variants). The game is sometimes integrated with Truth or Dare? rules. Strip poker has been adapted for solo gameplay, such as by use of online or offline video games. Notable games: Rules The rules of strip poker are flexible and intended to generate an atmosphere of fun in a group of consenting adults. Notable games: At the beginning of each turn, each player must remove an article of clothing as an ante. If there are two couples playing, there should be four shoes in the pot before the cards are dealt. At the outset, one of the articles of clothing is removed from the game permanently, so the winner will receive three articles of clothing in the ante. The opener must bet, and they can be raised, just like ordinary poker. After the draw, the players make another bet, like regular poker. Once an article of clothing is removed, it cannot be put back on. The clothing is just used as a stake for betting. Only clothing can be bet. No player may withdraw once the game begins without forfeiting all articles of clothing. Notable games: Strategy In some rule sets, players who fold before the flop are not required to remove clothing. As such, a player who is uncomfortable removing clothing (or, more commonly, a player who does not want to remove all their clothing first) can simply fold very often or every time before the flop, essentially playing a "tight" pre-flop strategy. Using this strategy, a player could stay clothed for the entire game simply by folding their hands. Notable games: Strip poker requires a different overall strategy from poker played with betting chips, since the maximum loss on a hand of strip poker is (typically) one item of clothing. 
In a betting environment, a player who stays in the pot with a weak hand is liable to lose many chips in a single hand. In strip poker, the risk of staying in a hand is significantly limited, so players can play hands with lower probabilities than they would in a cash game. For example, in a cash game, because it occurs only 8% of the time, an inside straight draw might be a poor hand to play, hence the saying "Never draw to an inside straight." In strip poker, when the potential loss is only one item of clothing whether you fold or call, an 8% chance to win the hand is better than the alternative. Notable games: Another variant uses some sort of betting token, allowing for normal poker strategies. Once a player runs out of tokens, they can "sell" a piece of clothing for more tokens in order to stay in the game. Notable games: History While it has been suggested that strip poker originated in New Orleans brothels in the United States around the same time as original poker in the 19th century, the term is only attested since 1916. Strip poker most likely began as a prank among boys, and as late as the 1930s, the current mixed-gender version was called "mixed strip poker" in England to differentiate it from the all-male variety. Notable games: Media portrayals of strip poker Strip poker games are presented in a number of films, including: Welcome to the Cabin Kicking the Dog American Pie 2 Friday the 13th In Time The Wanderers Strip poker based television shows include: Tutti Frutti/Colpo Grosso - (Germany/Italy - 1990) Räsypokka - (subTV - Finland - 2002) Strip! - (RTL II - Germany - 1999) Everything Goes - (United States - 1981-1988) Strip Poker - (USA - 2000) Strip poker productions on pay per view or DVD often feature pinup models. Examples include: National Lampoon's Strip Poker - 2005 Strip Poker Invitational - 2005 Both examples featured Playboy models, World Wrestling Entertainment models and other pinup models in a no-limit Texas Hold'em competition. National Lampoon's Strip Poker was filmed at the Hedonism II resort in Negril, Jamaica. Strip Poker Invitational productions were filmed in Las Vegas. Both productions aired on Pay-Per-View in 2005. Notable games: Strip poker on music video: Music video of "Poker Face" by Lady Gaga - 2009. In 1982, an American computer game company, Artworx, produced a Strip Poker game for the Apple II computer. It has been ported to many other computers since then and is still available today. Many others followed. Strip poker is featured as an Easter Egg in the Windows 8 card suite Card Hero. Notable games: Yakyūken Popularised by Kinichi Hagimoto (although he later expressed regret) and inspired by contemporary Japanese baseball pep routines (Yakyūken, 野球拳, means 'baseball fist'), strip games became a popular fixture of comedy and game shows in the mid-20th century, initially in the form of yakyūken, in which several rounds of jyan-ken-pon (rock-paper-scissors) are played and televised, with the loser removing a layer of clothing. Legality: Depending on jurisdiction and the particular circumstances of the game, strip games may encounter regulation on the grounds of clothes-wearing (or not wearing) norms, gaming and gambling, and sexual expression. 
Additionally, verbally coercing someone to play a strip game is often considered a form of sexual harassment. In 2013, the concept of strip sports betting was launched in the United States by creating live internet broadcasts using models and pornstars who bet on football games and take their clothes off as they lose.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tauopathy** Tauopathy: Tauopathy belongs to a class of neurodegenerative diseases involving the aggregation of tau protein into neurofibrillary or gliofibrillary tangles in the human brain. Tangles are formed by hyperphosphorylation of the microtubule-associated protein known as tau, causing the protein to dissociate from microtubules and form insoluble aggregates. (These aggregations are also called paired helical filaments.) The mechanism of tangle formation is not well understood, and whether tangles are a primary cause of Alzheimer's disease or play a peripheral role is unknown. Detection and imaging: Post-mortem Tau tangles are seen microscopically in stained brain samples. Pre-mortem In living patients tau tangle locations can be imaged with a PET scan using a suitable radio-emissive agent. Alzheimer's disease: Neurofibrillary tangles were first described by Alois Alzheimer in one of his patients with Alzheimer's disease (AD). AD is considered a secondary tauopathy. AD is also classified as an amyloidosis because of the presence of senile plaques. When tau becomes hyperphosphorylated, the protein dissociates from the microtubules in axons. Then, tau becomes misfolded and the protein begins to aggregate, which eventually forms the neurofibrillary tangles (NFT) seen in Alzheimer's patients. Microtubules also destabilize when tau is dissociated. The combination of the neurofibrillary tangles and destabilized microtubules results in disruption of processes such as axonal transport and neural communication. The degree of NFT involvement in AD is defined by Braak stages. Braak stages I and II are used when NFT involvement is confined mainly to the transentorhinal region of the brain, stages III and IV when there is also involvement of limbic regions such as the hippocampus, and V and VI when there is extensive neocortical involvement. This should not be confused with the degree of senile plaque involvement, which progresses differently. Other diseases: Primary age-related tauopathy (PART) dementia, with NFTs similar to AD, but without amyloid plaques. Other diseases: Chronic traumatic encephalopathy (CTE) Progressive supranuclear palsy (PSP) Corticobasal degeneration (CBD) Frontotemporal dementia and parkinsonism linked to chromosome 17 (FTDP-17) Vacuolar tauopathy Lytico-bodig disease (Parkinson-dementia complex of Guam) Ganglioglioma and gangliocytoma Meningioangiomatosis Postencephalitic parkinsonism Subacute sclerosing panencephalitis (SSPE) As well as lead encephalopathy, tuberous sclerosis, pantothenate kinase-associated neurodegeneration, and lipofuscinosis. In both Pick's disease and corticobasal degeneration, tau proteins are deposited as inclusion bodies within swollen or "ballooned" neurons. Argyrophilic grain disease (AGD), another type of dementia, is marked by an abundance of argyrophilic grains and coiled bodies upon microscopic examination of brain tissue. Some consider it to be a type of Alzheimer's disease. It may co-exist with other tauopathies such as progressive supranuclear palsy and corticobasal degeneration, and also Pick's disease. Tauopathies often overlap with synucleinopathies, possibly due to interaction between the synuclein and tau proteins. The non-Alzheimer's tauopathies are sometimes grouped together as "Pick's complex" due to their association with frontotemporal dementia, or frontotemporal lobar degeneration. 
Research: It has been found that activation of cannabinoid receptor type 1 (CB1) mediates inhibition of astroglial-derived nitric oxide (NO), which could be used as a new potential target to blunt tau protein hyperphosphorylation and the consequent related tauopathy in Alzheimer's disease (AD).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HNRNPA1** HNRNPA1: Heterogeneous nuclear ribonucleoprotein A1 is a protein that in humans is encoded by the HNRNPA1 gene. Mutations in hnRNP A1 are causative of amyotrophic lateral sclerosis and the syndrome multisystem proteinopathy. Function: This gene belongs to the A/B subfamily of ubiquitously expressed heterogeneous nuclear ribonucleoproteins (hnRNPs). The hnRNPs are RNA-binding proteins and they complex with heterogeneous nuclear RNA (hnRNA). These proteins are associated with pre-mRNAs in the nucleus and appear to influence pre-mRNA processing and other aspects of mRNA metabolism and transport. While all of the hnRNPs are present in the nucleus, some seem to shuttle between the nucleus and the cytoplasm. The hnRNP proteins have distinct nucleic acid binding properties. The protein encoded by this gene has two repeats of quasi-RRM domains that bind to RNAs in the N-terminal domain, which are pivotal for RNA specificity and binding. The protein also has a glycine-rich arginine-glycine-glycine (RGG) region called the RGG box which enables protein and RNA binding. It affects many critical genes that are responsible for controlling metabolic pathways at the transcriptional, post-transcriptional, translational, and post-translational levels. It is one of the most abundant core proteins of hnRNP complexes and it is localized to the nucleoplasm. This protein, along with other hnRNP proteins, is exported from the nucleus, probably bound to mRNA, and is immediately re-imported. Its M9 nuclear localisation sequence (NLS), a glycine-rich region downstream from the RGG box, acts as both a nuclear localization and nuclear export signal. The encoded protein is involved in the packaging of pre-mRNA into hnRNP particles, transport of poly A+ mRNA from the nucleus to the cytoplasm, and may modulate splice site selection. Multiple alternatively spliced transcript variants have been found for this gene but only two transcripts are fully described. These variants have multiple alternative transcription initiation sites and multiple polyA sites. Post-translational modifications are also known to affect hnRNP A1's function. Methylation of arginine residues in the RGG box may regulate RNA-binding activity. Kinases such as protein kinase C (PKC), mitogen-activated protein kinases (MAPKs), and ribosomal S6 kinases (S6Ks) phosphorylate serine residues at both the N- and C-termini to regulate function. Phosphorylation of the C-terminal region causes cytoplasmic accumulation of the protein. However, O-GlcNAcylation, the addition of a GlcNAc moiety to serine or threonine, is a common and reversible modification that impairs the protein's binding of karyopherin beta (Transportin-1), resulting in nuclear localization of hnRNP A1. Interactions: hnRNP A1 has been shown to interact with BAT2, Flap structure-specific endonuclease 1 and IκBα. Role in Viruses: hnRNP A1 is involved in the life cycle of DNA, positive-sense RNA, and negative-sense RNA viruses at multiple stages post-infection. The protein's role in viral life cycles varies depending on the virus, and the protein can even play contradictory roles. In some, it promotes viral replication while in others, it abrogates it. Role in Viruses: hnRNP A1's anti-viral effect is present in a human T-cell lymphotropic virus type I (HTLV-1) cell culture model. hnRNP A1 inhibits the binding of the Rex protein to its response element in the 3' long terminal repeat (LTR) of all viral RNAs. 
Ectopic expression of hnRNP A1 antagonizes post-transcriptional activity of Rex via competitive binding, eliciting an antiviral response against HTLV-1 infection by negatively affecting the rate of viral replication. In the case of Hepatitis C virus (HCV), a positive-sense RNA virus, hnRNP A1 interacts with a crucial region near the 3' end of the virus' open reading frame (ORF) called the cis-acting replication element. When hnRNP A1 is upregulated, HCV replication decreases, and when hnRNP A1 is downregulated, HCV replication increases. Role in Viruses: hnRNP A1's pro-viral effect is present in the Sindbis virus (a positive-sense RNA virus) infection model. hnRNP A1 has been found redistributed to the cytoplasmic site of viral replication, bound to the 5' UTR region of the viral RNA, promoting synthesis of negative-strand RNA. hnRNP A1 has a similar role in porcine epidemic diarrhea virus (PEDV) infection, in which hnRNP A1 co-immunoprecipitates with the PEDV nucleocapsid protein during infection. hnRNP A1 also bound to terminal leader sequences and intergenic sequences that are crucial for efficient viral replication. Similar trends have also been observed in rhinovirus (HRV), enterovirus 71 (EV-71), and avian reovirus (ARV) infections. Role in Viruses: In the case of some viruses, such as human immunodeficiency virus 1 (HIV-1), contradictory results have been reported in different research studies. Monette et al. reported increased endogenous expression of hnRNP A1 after HIV-1 infection, with enhanced hnRNP A1 levels seen as favorable for the virus. They also found that downregulation of hnRNP A1 negatively affected viral replication. In contrast, Zahler et al. found that overexpression of hnRNP A1 in vitro adversely affected viral replication. As a result, the role of hnRNP A1 in HIV-1's life cycle is somewhat controversial. Role in other disease: Mutations in hnRNP A1 are a cause of amyotrophic lateral sclerosis and multisystem proteinopathy. hnRNP A1 antagonizes cellular senescence and induction of the senescence-associated secretory phenotype by stabilizing Oct-4 and sirtuin 1 mRNAs.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sinus (botany)** Sinus (botany): In botany, a sinus is a space or indentation between two lobes or teeth, usually on a leaf. The term is also used in mycology. For example, one of the defining characteristics of North American species in the Morchella elata clade of morels is the presence of a sinus where the cap attaches to the stipe.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Antiestrogen** Antiestrogen: Antiestrogens, also known as estrogen antagonists or estrogen blockers, are a class of drugs which prevent estrogens like estradiol from mediating their biological effects in the body. They act by blocking the estrogen receptor (ER) and/or inhibiting or suppressing estrogen production. Antiestrogens are one of three types of sex hormone antagonists, the others being antiandrogens and antiprogestogens. Antiestrogens are commonly used to stop the steroid hormone estrogen from binding to the estrogen receptors, leading to a decrease in estrogen levels. Decreased levels of estrogen can lead to complications in sexual development. Antiandrogens are sex hormone antagonists which are able to lower the production of testosterone and the effects it can have on female bodies. Types and examples: Antiestrogens include selective estrogen receptor modulators (SERMs) like tamoxifen, clomifene, and raloxifene, the ER silent antagonist and selective estrogen receptor degrader (SERD) fulvestrant, aromatase inhibitors (AIs) like anastrozole, and antigonadotropins including androgens/anabolic steroids, progestogens, and GnRH analogues. Types and examples: Estrogen receptors (ERs) like ERα and ERβ contain an activation function 1 (AF1) domain and an activation function 2 (AF2) domain; SERMs act as antagonists of the AF2 domain, while "pure" antiestrogens like ICI 182,780 and ICI 164,384 are antagonists of both the AF1 and AF2 domains. Although aromatase inhibitors and antigonadotropins can be considered antiestrogens by some definitions, they are often treated as distinct classes. Aromatase inhibitors and antigonadotropins reduce the production of estrogen, while the term "antiestrogen" is often reserved for agents reducing the response to estrogen. Medical uses: Antiestrogens are used for: Estrogen deprivation therapy in the treatment of ER-positive breast cancer Ovulation induction in infertility due to anovulation Male hypogonadism Gynecomastia (breast development in men) A component of hormone replacement therapy for transgender men Side effects: In women, the side effects of antiestrogens include hot flashes, osteoporosis, breast atrophy, vaginal dryness, and vaginal atrophy. In addition, they may cause depression and reduced libido. Pharmacology: Antiestrogens act as antagonists of the estrogen receptors, ERα and ERβ. History: The first nonsteroidal antiestrogen was discovered by Lerner and coworkers in 1958. Ethamoxytriphetol (MER-25) was the first antagonist of the ER to be discovered, followed by clomifene and tamoxifen.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Heinrich Wieland Prize** Heinrich Wieland Prize: The Heinrich Wieland Prize is awarded annually by the Boehringer Ingelheim Foundation for outstanding research on biologically active molecules and systems in the areas of chemistry, biochemistry and physiology as well as their clinical importance. In 1963, the Margarine Institute established the Heinrich Wieland Prize to support research in the field of lipids. In 2000, the Margarine Institute ended its sponsorship of the Prize and the pharmaceutical company Boehringer Ingelheim became the new sponsor. In 2011, the Boehringer Ingelheim Foundation took over the prize. The awardee is selected by a scientific board of trustees. The prize is named after the Nobel Prize Laureate in chemistry Professor Heinrich Wieland (1877-1957), one of the leading lipid chemists of the first half of the 20th century. To mark its 50th anniversary in 2014, the prize money was raised to 100,000 euros. Four of its awardees have gone on to receive the Nobel Prize: Michael S. Brown and Joseph L. Goldstein (1974), Bengt Samuelsson (1981) and James E. Rothman (1990). Prize winners: Source: Boehringer Ingelheim Foundation, Heinrich Wieland Prize Laureates since 2020.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nagare (web framework)** Nagare (web framework): Nagare is a free and open-source web framework for developing web applications in Stackless Python. Nagare uses a component model inspired by Seaside, and, like Seaside, Nagare uses continuations to provide a framework where the HTTP connectionless request / response cycle doesn't break the normal control flow of the application. This allows web applications to be developed in much the same way as desktop applications, for rapid application development. However, Nagare is written in Python rather than Smalltalk.
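The continuation idea can be illustrated with a toy, framework-agnostic sketch; this is not Nagare's actual API, it only mimics the concept with a Python generator, and the page texts and function names are invented.

```python
def checkout_flow():
    """One straight-line 'application' even though each prompt corresponds to a
    separate HTTP request/response round trip in a continuation-based framework."""
    name = yield "Please enter your name"            # page 1 -> form submit 1
    qty = yield f"Hi {name}, how many items?"        # page 2 -> form submit 2
    yield f"Order confirmed: {qty} item(s) for {name}"

# The "server" drives the continuation: each user input resumes the flow
# exactly where it paused, so the request/response cycle never breaks it.
flow = checkout_flow()
print(next(flow))            # first page shown to the user
print(flow.send("Ada"))      # the user submits the first form
print(flow.send("3"))        # the user submits the second form
```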
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ACS Chemical Neuroscience** ACS Chemical Neuroscience: ACS Chemical Neuroscience is a monthly peer-reviewed scientific journal published by the American Chemical Society. It covers research on the molecular underpinnings of nerve function. The journal was established in 2010. The founding editor-in-chief was Craig W. Lindsley (Vanderbilt University); the current editor-in-chief is Jacob Hooker (Harvard Medical School). According to the Journal Citation Reports, the journal has a 2021 impact factor of 5.780. Types of content: The journal publishes research letters, articles, and review articles. In addition, specially commissioned articles that describe journal content and advances in neuroscience are solicited.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glucose/fructose/phosphoric acid** Glucose/fructose/phosphoric acid: Glucose/fructose/phosphoric acid (trade name Emetrol) is an over-the-counter antiemetic taken to relieve nausea and vomiting. Made by WellSpring Pharmaceutical Corporation, it was formerly distributed by McNeil Consumer Healthcare. History: Emetrol was created by Kinney and Company of Columbus, Indiana and was first used in 1949. It is a phosphorated carbohydrate solution, and comes in syrup form. Contraindications: Since Emetrol contains fructose it is contraindicated for people with hereditary fructose intolerance (HFI). In diabetes patients, it can cause potentially harmful hyperglycaemia (high blood sugar).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Linear programming decoding** Linear programming decoding: In information theory and coding theory, linear programming decoding (LP decoding) is a decoding method which uses concepts from linear programming (LP) theory to solve decoding problems. This approach was first used by Jon Feldman et al. They showed how the LP can be used to decode block codes. The basic idea behind LP decoding is to first represent the maximum likelihood decoding of a linear code as an integer linear program, and then relax the integrality constraints on the variables into linear inequalities.
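As a concrete illustration of the relaxation step, the sketch below decodes a single received word for the (7,4) Hamming code by minimizing a log-likelihood-ratio cost over the standard relaxed parity-check inequalities (one inequality per check and odd-sized subset of its neighborhood). It is a minimal sketch using scipy and an assumed binary symmetric channel; the particular parity-check matrix, channel parameter, and function name are chosen for illustration only.

```python
# Minimal LP-decoding sketch: relax ML decoding of a binary linear code to a
# linear program over the relaxed parity-check ("fundamental") polytope.
import itertools
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, cost):
    """Minimize sum_i cost[i] * x[i] subject to, for every check j and every
    odd-sized subset S of its neighborhood N(j):
    sum over S of x_i minus sum over the rest of N(j) of x_i <= |S| - 1,
    with 0 <= x_i <= 1 for all bits."""
    m, n = H.shape
    A_ub, b_ub = [], []
    for j in range(m):
        neigh = np.flatnonzero(H[j])                   # bits taking part in check j
        for k in range(1, len(neigh) + 1, 2):          # odd subset sizes
            for S in itertools.combinations(neigh, k):
                row = np.zeros(n)
                row[list(S)] = 1.0
                row[[i for i in neigh if i not in S]] = -1.0
                A_ub.append(row)
                b_ub.append(k - 1)
    res = linprog(c=cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x

# (7,4) Hamming code; all-zero codeword sent over a BSC, one bit flipped.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
p = 0.05                                   # assumed crossover probability
y = np.array([1, 0, 0, 0, 0, 0, 0])        # received word
cost = np.where(y == 0, np.log((1 - p) / p), np.log(p / (1 - p)))
print(np.round(lp_decode(H, cost)))        # expected: the all-zero codeword is recovered
```

If the LP optimum happens to be integral, as in this toy run, it coincides with the ML codeword; fractional optima ("pseudocodewords") are exactly where the relaxation can fail.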
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Duvetyne** Duvetyne: Duvetyne, or duvetyn, (also known as Molton and Rokel) is a twill fabric with a velvet-like nap on one side. Duvetyne has a matte finish and its high opacity makes it ideal for blocking light. It may be woven from cotton, wool, or, in rare cases (mainly in the early 20th century), silk. If made of cotton, it is usually called suede cloth. If wool or wool-blend, it is fulled, napped, and sheared. This entirely hides the weave, making it a blind-faced cloth. Although it is most commonly used in the motion picture industry, early sources list duvetyne as a common fabric for dresses, suits, and coats. By the 1930s, however, it was widely noted for its use in constructing theatrical cycloramas and theater curtains. Duvetyne: In modern times, fire-retardant black duvetyne is commonly used for curtains, for scenery, and to control light spill. Many commercial lighting flags are made from duvetyne. When used in film applications, especially in the eastern United States, duvetyne is also known as "commando cloth". In the first season of the original Star Trek television series, the exterior shots of "space" were created by gluing glitter onto black duvetyne.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Elasticity (cloud computing)** Elasticity (cloud computing): In distributed systems and system resources, elasticity is defined as "the degree to which a system is able to adapt to workload changes by provisioning and de-provisioning resources in an autonomic manner, such that at each point in time the available resources match the current demand as closely as possible". Elasticity is a defining characteristic that differentiates cloud computing from previously proposed computing paradigms, such as grid computing. The dynamic adaptation of capacity, e.g., by altering the use of computing resources, to meet a varying workload is called "elastic computing". In the world of distributed systems, there are several definitions depending on the author: some consider the concept of scalability a sub-part of elasticity, others treat the two as distinct. Example: Let us illustrate elasticity through a simple example of a service provider who wants to run a website on an IaaS cloud. At moment t0, the website is unpopular and a single machine (most commonly a virtual machine) is sufficient to serve all web users. At moment t1, the website suddenly becomes popular, for example, as a result of a flash crowd, and a single machine is no longer sufficient to serve all users. Based on the number of web users simultaneously accessing the website and the resource requirements of the web server, it might be that ten machines are needed. An elastic system should immediately detect this condition and provision nine additional machines from the cloud, so as to serve all web users responsively. Example: At time t2, the website becomes unpopular again. The ten machines that are currently allocated to the website are mostly idle and a single machine would be sufficient to serve the few users who are accessing the website. An elastic system should immediately detect this condition and deprovision nine machines and release them to the cloud. Purpose: Elasticity aims at matching the amount of resources allocated to a service with the amount of resources it actually requires, avoiding over- or under-provisioning. Over-provisioning, i.e., allocating more resources than required, should be avoided as the service provider often has to pay for the resources that are allocated to the service. For example, an Amazon EC2 M4 extra-large instance costs US$0.239/hour. If a service has allocated two virtual machines when only one is required, the service provider wastes $2,095 every year. Hence, the service provider's expenses are higher than optimal and their profit is reduced. Purpose: Under-provisioning, i.e., allocating fewer resources than required, must be avoided, otherwise the service cannot serve its users well. In the above example, under-provisioning the website may make it seem slow or unreachable. Web users eventually give up on accessing it, and thus the service provider loses customers. In the long term, the provider's income will decrease, which also reduces their profit. Problems: Resources provisioning time One potential problem is that elasticity takes time. A cloud virtual machine (VM) can be acquired at any time by the user; however, it may take up to several minutes for the acquired VM to be ready to use. The VM startup time is dependent on factors such as image size, VM type, data center location, number of VMs, etc. Cloud providers have different VM startup performance. 
This implies that any control mechanism designed for elastic applications must consider in its decision process the time needed for the elasticity actions to take effect, such as provisioning another VM for a specific application component. Problems: Monitoring elastic applications Elastic applications can allocate and deallocate resources (such as VMs) on demand for specific application components. This makes cloud resources volatile, and traditional monitoring tools that associate monitoring data with a particular resource (i.e. a VM), such as Ganglia or Nagios, are no longer suitable for monitoring the behavior of elastic applications. For example, during its lifetime, a data storage tier of an elastic application might add and remove data storage VMs due to cost and performance requirements, varying the number of used VMs. Thus, additional information is needed in monitoring elastic applications, such as mapping the logical application structure onto the underlying virtual infrastructure. This in turn generates other problems, such as how to aggregate data from multiple VMs in order to extract the behavior of the application component running on top of those VMs, as different metrics might need to be aggregated differently (e.g., CPU usage could be averaged, network transfer might be summed). Problems: Elasticity requirements When deploying applications in cloud infrastructures (IaaS/PaaS), the requirements of the stakeholder need to be considered in order to ensure proper elasticity behavior. Even though traditionally one would try to find the optimal trade-off between cost and quality or performance, for real-world cloud users the requirements regarding elasticity behavior are more complex and target multiple dimensions of elasticity (e.g., SYBL). Problems: Multiple levels of control Cloud applications can be of varying types and complexities, with multiple levels of artifacts deployed in layers. Controlling such structures must take into consideration a variety of issues, an approach in this sense being rSYBL. For multi-level control, control systems need to consider the impact that lower-level control has upon higher-level control and vice versa (e.g., controlling virtual machines, web containers, or web services at the same time), as well as conflicts that may appear between control strategies at different levels. Elastic strategies on clouds can take advantage of control-theoretic methods (e.g., predictive control has been evaluated in cloud scenarios and has shown considerable advantages over reactive methods).
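A minimal Python sketch of the scaling decision and the metric-aggregation issue discussed above. The per-VM capacity, the provisioning delay, and the function names are illustrative assumptions for the flash-crowd example, not part of any real cloud provider's API:

```python
import math

VM_CAPACITY = 100        # assumed concurrent users served per VM (illustrative)
PROVISION_DELAY_S = 300  # assumed time for a newly acquired VM to become usable

def required_vms(concurrent_users: int) -> int:
    """Smallest number of VMs that matches the given demand."""
    return max(1, math.ceil(concurrent_users / VM_CAPACITY))

def plan_scaling(current_vms: int, forecast_users: int) -> int:
    """How many VMs to provision (positive) or release (negative).

    Because a VM needs PROVISION_DELAY_S to become ready, an elastic
    controller has to plan against forecast demand, not only current demand.
    """
    return required_vms(forecast_users) - current_vms

def aggregate(metric: str, per_vm_values: list[float]) -> float:
    """Different metrics aggregate differently across the VMs of one tier."""
    if metric == "cpu_usage":
        return sum(per_vm_values) / len(per_vm_values)   # average
    if metric == "network_bytes":
        return sum(per_vm_values)                        # sum
    raise ValueError(f"no aggregation rule for {metric}")

# The flash-crowd example from the text: one VM, demand jumps to ~1000 users.
print(plan_scaling(current_vms=1, forecast_users=1000))   # 9 more VMs needed
print(plan_scaling(current_vms=10, forecast_users=50))    # 9 VMs can be released
print(aggregate("cpu_usage", [0.8, 0.4, 0.6]))            # 0.6
```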
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nanowire lasers** Nanowire lasers: Semiconductor nanowire lasers are nano-scaled lasers that can be embedded on chips and constitute an advance for computing and information processing applications. Nanowire lasers are coherent light sources (single-mode optical waveguides), like any other laser device, with the advantage of operating at the nanoscale. Built by molecular beam epitaxy, nanowire lasers offer the possibility of direct integration on silicon, and the construction of optical interconnects and data communication at the chip scale. Nanowire lasers are built from III–V semiconductor heterostructures. Their unique 1D configuration and high refractive index allow for low optical loss and recirculation in the active nanowire core region. This enables subwavelength laser sizes of only a few hundred nanometers. Nanowires are Fabry–Perot resonator cavities defined by the end facets of the wire, so they do not require polishing or cleaving for high-reflectivity facets as in conventional lasers. Properties: Nanowire lasers can be grown site-selectively on Si/SOI wafers with conventional MBE techniques, allowing for pristine structural quality without defects. Nanowire lasers using the group-III nitride and ZnO materials systems have been demonstrated to emit in the visible and ultraviolet; however, emission in the infrared at 1.3–1.55 μm is important for the telecommunication bands. Lasing at those wavelengths has been achieved by removing the nanowire from the silicon substrate. Nanowire lasers have shown pulse durations below 1 ps and enable repetition rates greater than 200 GHz. Also, nanowire lasers have been shown to store the phase information of a pulse for over 30 ps when excited with subsequent pulse pairs. Mode-locked lasers at the nanoscale are therefore feasible with such configurations.
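As general background on the Fabry–Perot behavior mentioned above (a standard textbook relation, not a result from the work summarized here), the spacing between longitudinal modes of a cavity of length $L$ and group refractive index $n_g$ is

$$\Delta\nu = \frac{c}{2 n_g L},$$

so a wire only a few hundred nanometers to a few micrometers long, with a high refractive index, supports only a few widely spaced modes, which is what makes single-mode operation at such small sizes plausible.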
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Social viewing** Social viewing: Social viewing (also known as Watch Party or GroupWatch) describes a recently developed practice revolving around the ability for multiple users to aggregate content from multiple sources and view online videos together in a synchronized viewing experience. Typically the experience also involves some form of instant messaging or communication to facilitate discussion pertaining to the common viewing experience. History: The term in this context originated with the Toronto and Los Angeles-based company View2Gether, which created proprietary technology for aggregating content from sources not controlled by the user for synchronized play and inclusion in common playlists by multiple participants, with a commensurate instant messaging chat function. Other sites that provide similar functionality include Oortle (Photophlow) and SeeToo, and the development of social viewing for existing portals such as Yahoo has also been announced. The term has been used in some cases to describe online viewing within the framework of a social network; however, View2gether and similar sites have reconfigured the term to mean a common viewing experience as a social activity. History: Social viewing has also been used in the past to describe activities such as gathering for the viewing of particular television programs, such as soap operas. Some examples of modern social viewing sites include Twitch, YouTube, Facebook, TikTok, Instagram, Zoom, and Twitter. It has also been officially added as a built-in feature in some over-the-top media services under various names: while Amazon and Hulu both call it Watch Party, Disney+ (which offers it only in some countries) calls it GroupWatch. Social viewing experience: Social viewing makes it possible to watch a video while interacting with other people, for example watching a movie while chatting with friends or discussing a particular scene. One factor to improve is the synchronization between users, ensuring that everyone is watching the same scene so that discussion is not disrupted.
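A minimal sketch of one way such synchronization could work, assuming a host that periodically broadcasts its playback position; the class, the field names, and the one-second drift threshold are illustrative assumptions, not a description of any particular service:

```python
import time

DRIFT_TOLERANCE_S = 1.0  # assumed: resync a viewer that drifts more than one second

class Viewer:
    def __init__(self):
        self.position_s = 0.0  # current playback position of the local player

    def on_host_update(self, host_position_s: float, sent_at: float) -> None:
        """Re-align local playback with the host's broadcast position."""
        # Compensate for the time the update spent in transit.
        expected = host_position_s + (time.time() - sent_at)
        if abs(self.position_s - expected) > DRIFT_TOLERANCE_S:
            self.position_s = expected  # a real player would seek to `expected`

viewer = Viewer()
viewer.on_host_update(host_position_s=125.0, sent_at=time.time() - 0.2)
print(round(viewer.position_s, 1))  # roughly 125.2
```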
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dimethylformamide** Dimethylformamide: Dimethylformamide is an organic compound with the formula (CH3)2NC(O)H. Commonly abbreviated as DMF (although this initialism is sometimes used for dimethylfuran, or dimethyl fumarate), this colourless liquid is miscible with water and the majority of organic liquids. DMF is a common solvent for chemical reactions. Dimethylformamide is odorless, but technical-grade or degraded samples often have a fishy smell due to impurity of dimethylamine. Dimethylamine degradation impurities can be removed by sparging samples with an inert gas such as argon or by sonicating the samples under reduced pressure. As its name indicates, it is structurally related to formamide, having two methyl groups in the place of the two hydrogens. DMF is a polar (hydrophilic) aprotic solvent with a high boiling point. It facilitates reactions that follow polar mechanisms, such as SN2 reactions. Structure and properties: As for most amides, the spectroscopic evidence indicates partial double bond character for the C-N and C-O bonds. Thus, the infrared spectrum shows a C=O stretching frequency at only 1675 cm−1, whereas a ketone would absorb near 1700 cm−1.DMF is a classic example of a fluxional molecule. Structure and properties: The ambient temperature 1H NMR spectrum shows two methyl signals, indicative of hindered rotation about the (O)C-N bond. At temperatures near 100 °C, the 500 MHz NMR spectrum of this compound shows only one signal for the methyl groups. DMF is miscible with water. The vapour pressure at 20 °C is 3.5 hPa. A Henry's law constant of 7.47 × 10−5 hPa m3 mol−1 can be deduced from an experimentally determined equilibrium constant at 25 °C. The partition coefficient log POW is measured to −0.85. Since the density of DMF (0.95 g cm−3 at 20 °C) is similar to that of water, significant flotation or stratification in surface waters in case of accidental losses is not expected. Reactions: DMF is hydrolyzed by strong acids and bases, especially at elevated temperatures. With sodium hydroxide, DMF converts to formate and dimethylamine. DMF undergoes decarbonylation near its boiling point to give dimethylamine. Distillation is therefore conducted under reduced pressure at lower temperatures.In one of its main uses in organic synthesis, DMF was a reagent in the Vilsmeier–Haack reaction, which is used to formylate aromatic compounds. The process involves initial conversion of DMF to a chloroiminium ion, [(CH3)2N=CH(Cl)]+, known as a Vilsmeier reagent, which attacks arenes. Reactions: Organolithium compounds and Grignard reagents react with DMF to give aldehydes after hydrolysis in a reaction called Bouveault aldehyde synthesis.Dimethylformamide forms 1:1 adducts with a variety of Lewis acids such as the soft acid I2, and the hard acid phenol. It is classified as a hard Lewis base and its ECW model base parameters are EB= 2.19 and CB= 1.31. Its relative donor strength toward a series of acids, versus other Lewis bases, can be illustrated by C-B plots. Production: DMF was first prepared in 1893 by the French chemist Albert Verley (8 January 1867 – 27 November 1959), by distilling a mixture of dimethylamine hydrochloride and potassium formate.DMF is prepared by combining methyl formate and dimethylamine or by reaction of dimethylamine with carbon monoxide.Although currently impractical, DMF can be prepared from supercritical carbon dioxide using ruthenium-based catalysts. Applications: The primary use of DMF is as a solvent with low evaporation rate. 
DMF is used in the production of acrylic fibers and plastics. It is also used as a solvent in peptide coupling for pharmaceuticals, in the development and production of pesticides, and in the manufacture of adhesives, synthetic leathers, fibers, films, and surface coatings. It is used as a reagent in the Bouveault aldehyde synthesis and in the Vilsmeier-Haack reaction, another useful method of forming aldehydes. It is a common solvent in the Heck reaction. Applications: It is also a common catalyst used in the synthesis of acyl halides, in particular the synthesis of acyl chlorides from carboxylic acids using oxalyl or thionyl chloride. The catalytic mechanism entails reversible formation of an imidoyl chloride (also known as the 'Vilsmeier reagent'). DMF penetrates most plastics and makes them swell. Because of this property DMF is suitable for solid-phase peptide synthesis and as a component of paint strippers. Applications: DMF is used as a solvent to recover olefins such as 1,3-butadiene via extractive distillation. It is also used as an important raw material in the manufacture of solvent dyes and is consumed during the reaction. Pure acetylene gas cannot be compressed and stored without the danger of explosion. Industrial acetylene is safely compressed in the presence of dimethylformamide, which forms a safe, concentrated solution. The cylinder casing is also filled with agamassan, which renders it safe to transport and use. As a cheap and common reagent, DMF has many uses in a research laboratory. DMF is effective at separating and suspending carbon nanotubes, and is recommended by NIST for use in near-infrared spectroscopy of such materials. DMF can be utilized as a standard in proton NMR spectroscopy, allowing quantitative determination of an unknown compound. In the synthesis of organometallic compounds, it is used as a source of carbon monoxide ligands. DMF is a common solvent used in electrospinning. DMF is commonly used in the solvothermal synthesis of metal–organic frameworks. DMF-d7 in the presence of a catalytic amount of KOt-Bu under microwave heating is a reagent for deuteration of polyaromatic hydrocarbons. Safety: Reactions that use sodium hydride in DMF as a solvent are somewhat hazardous; exothermic decompositions have been reported at temperatures as low as 26 °C. On a laboratory scale any thermal runaway is (usually) quickly noticed and brought under control with an ice bath, and this remains a popular combination of reagents. On a pilot-plant scale, on the other hand, several accidents have been reported. Exposure to dimethylformamide vapor has been associated with reduced alcohol tolerance and skin irritation in some cases. On 20 June 2018, the Danish Environmental Protection Agency published an article about DMF's use in squishies. The levels of the compound found in the toys resulted in all squishies being removed from the Danish market, and all squishies were recommended to be thrown out as household waste. Toxicity: The acute LD50 (oral, rats and mice) is 2.2–7.55 g/kg. Hazards of DMF have been examined.
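As background on the quantitative proton NMR use mentioned above (this is the standard internal-standard working relation, added for context rather than taken from the text): with a known mass of DMF as the internal standard, the mass of an analyte follows from the ratio of integrated signal areas $I$, the number of protons $N$ behind each signal, and the molar masses $M$:

$$m_{\text{analyte}} = m_{\text{DMF}} \cdot \frac{I_{\text{analyte}}}{I_{\text{DMF}}} \cdot \frac{N_{\text{DMF}}}{N_{\text{analyte}}} \cdot \frac{M_{\text{analyte}}}{M_{\text{DMF}}}$$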
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Immediate early gene** Immediate early gene: Immediate early genes (IEGs) are genes which are activated transiently and rapidly in response to a wide variety of cellular stimuli. They represent a standing response mechanism that is activated at the transcription level in the first round of response to stimuli, before any new proteins are synthesized. IEGs are distinct from "late response" genes, which can only be activated later, following the synthesis of early response gene products. Thus IEGs have been called the "gateway to the genomic response". The term can describe viral regulatory proteins that are synthesized following viral infection of a host cell, or cellular proteins that are made immediately following stimulation of a resting cell by extracellular signals. Immediate early gene: In their role as "gateways to genomic response", many IEG products are natural transcription factors or other DNA-binding proteins. However, other important classes of IEG products include secreted proteins, cytoskeletal proteins, and receptor subunits. Neuronal IEGs are used prevalently as a marker to track brain activities in the context of memory formation and development of psychiatric disorders. IEGs are also of interest as a therapeutic target for treatment of human cytomegalovirus. Types: The earliest identified and best characterized IEGs include c-fos, c-myc and c-jun, genes that were found to be homologous to retroviral oncogenes. Thus IEGs are well known as early regulators of cell growth and differentiation signals. However, other findings suggest roles for IEGs in many other cellular processes. Regulation: Expression of IEGs occurs in response to internal and external cell signals, occurring rapidly without the need to synthesize new transcription factors. The genetic sequences of IEGs are generally shorter in length (~19kb) and exhibit an enrichment of specific transcription factor binding sites, offering redundancy in transcription initiation. Translation of IEG mRNA into proteins occurs regardless of protein synthesis inhibitors which disrupts the process of protein production. Rapid expression of IEGs is also attributed to the accessibility of its promotor sequence through histone acetylation that is consistent pre- and post-expression. Downregulation of mRNA transcription occurs through redundant targeting of the 3' UTR region by microRNAs, resulting in translational repression and degradation. The expression of IEG protein is often transient due to rapid mRNA downregulation and increased proteolysis of translated products. Function: Activation of gene transcription is a complex system of signal cascades and recruitment of necessary components such as RNA polymerase and transcription factors. IEGs are often the first responders to regulatory signals with many reaching peak expression within 30 minutes after stimuli compared to 2–4 hours in the case of delayed primary response gene. There are many signaling pathways leading to the activation of IEGs, many of which (MAPK/ERK, PI3K, etc.) are studied in the context of cancer. As such, many IEGs function as transcription factors regulating expression of downstream genes or are proto-oncogenes associated with altered cell growth. Clinical significance: Expression of IEGs is involved in neuronal activity and specifically memory formation, neuropsychiatric diseases, and behavioral activities. 
Immediate early genes present in the brain are associated with a range of functions, such as modifying synaptic function through the transient and rapid activation of growth factors or the expression of cellular proteins. These changes are theorized to be the means by which memory is stored in the brain, as outlined in the concept of the memory trace or engram. In the context of neuropsychiatric diseases, up-regulation of certain IEGs related to the formation of fear-related memories contributes to the development of a variety of diseases such as schizophrenia, panic disorder, and post-traumatic stress disorder. Memory formation: Some IEGs such as ZNF268 and Arc have been implicated in learning and memory and long-term potentiation. A wide range of neuronal stimuli has been shown to induce IEG expression, ranging from sensory and behavioral stimuli to drug-induced convulsions. As such, IEGs are utilized as markers to understand the neuronal ensembles associated with the formation of certain memories, such as fear memories commonly implicated in the development of psychiatric disorders. For example, neurons expressing Arc in the hippocampus show phenotypic and behavioral differences in response to stimuli, such as altered dendritic spine morphology or spontaneous firing rate. This association suggests that the expression of certain IEGs in response to a stimulus results in expansion of the related neuronal circuit by incorporating the activated neuronal assemblies. Other IEGs affect different neural properties, with knockout of Arc showing adverse effects on the formation of long-term memory. These findings offer insight into the molecular mechanisms and functional changes brought about by IEG expression, expanding the theory of the memory trace. Clinical significance: Memory consolidation during a learning experience depends on the rapid expression of a set of IEGs in brain neurons. In general, expression of genes can often be epigenetically repressed by the presence of 5-methylcytosine in the DNA promoter regions of the genes. However, in the case of IEGs associated with memory consolidation, demethylation of 5-methylcytosine to form the normal base cytosine can induce rapid gene expression. Demethylation appears to occur by a DNA repair process involving the GADD45G protein. Clinical significance: Psychiatric disorders Classification and diagnosis of neuropsychiatric illnesses are symptom-based, with different illnesses often exhibiting similar brain activity. Furthermore, the development of psychiatric illnesses depends on both genetic and environmental factors; as such, predictive risk assessment of diseases such as schizophrenia has lagged behind that of other prevalent illnesses. Using IEGs as markers, animal models of depression have identified altered levels of Arc, affecting synaptic activity, and of EGR1, involved in memory trace encoding. Similarly, other neuropsychiatric illnesses such as schizophrenia also exhibit altered IEG expression, with recent studies showing low expression of EGR3, a transcription factor downstream of NMDARs, in patients with schizophrenia. As such, IEGs are crucial markers for evaluating neuronal activity in the context of psychiatric illness, with their expression patterns shaped by environmental and genetic factors. Potential therapeutic applications: Human Cytomegalovirus (HCMV) Human Cytomegalovirus is a prevalent beta herpesvirus that remains in the latent state, going unnoticed in healthy individuals but having serious consequences if the individual is immunocompromised.
The virus cycles in and out of the latent state and is characterized by different gene expression regions: immediate-early (IE), early, and late. Conventional anti-viral treatments such as ganciclovir use nucleoside analogs to target the early events of the viral replication cycle; however, these approaches are prone to developing resistance. Targeting IE1 and IE2 is thought to be crucial in regulating the pathogenesis of HCMV and retaining the virus in the latent state. Viral proteins derived from IE1 and IE2 regulate viral latency by controlling subsequent expression of early and late genes. Silencing of IE gene expression through antisense oligonucleotides, RNA interference, and gene-targeting ribozymes has been investigated for therapeutic applications. Alternatively, the rise of CRISPR technology allows for precise DNA editing that can knock out HCMV genes responsible for IE transcription. DNA targeting is more effective in latent infections, in which viral mRNA is absent or at a low concentration. Small-molecule chemical inhibitors that target epigenetic factors and signaling proteins involved in IE expression are also being investigated.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ethanol fermentation** Ethanol fermentation: Ethanol fermentation, also called alcoholic fermentation, is a biological process that converts sugars such as glucose, fructose, and sucrose into cellular energy, producing ethanol and carbon dioxide as by-products. Because yeasts perform this conversion in the absence of oxygen, alcoholic fermentation is considered an anaerobic process. It also takes place in some species of fish (including goldfish and carp) where (along with lactic acid fermentation) it provides energy when oxygen is scarce. Ethanol fermentation is the basis for alcoholic beverages, ethanol fuel and bread dough rising. Biochemical process of fermentation of sucrose: The chemical equations below summarize the fermentation of sucrose (C12H22O11) into ethanol (C2H5OH). Alcoholic fermentation converts one mole of glucose into two moles of ethanol and two moles of carbon dioxide, producing two moles of ATP in the process: C6H12O6 → 2 C2H5OH + 2 CO2. Sucrose is a sugar composed of a glucose linked to a fructose. In the first step of alcoholic fermentation, the enzyme invertase cleaves the glycosidic linkage between the glucose and fructose molecules. Biochemical process of fermentation of sucrose: C12H22O11 + H2O + invertase → 2 C6H12O6. Next, each glucose molecule is broken down into two pyruvate molecules in a process known as glycolysis. Glycolysis is summarized by the equation: C6H12O6 + 2 ADP + 2 Pi + 2 NAD+ → 2 CH3COCOO− + 2 ATP + 2 NADH + 2 H2O + 2 H+, where CH3COCOO− is pyruvate and Pi is inorganic phosphate. Finally, pyruvate is converted to ethanol and CO2 in two steps, regenerating the oxidized NAD+ needed for glycolysis: 1. CH3COCOO− + H+ → CH3CHO + CO2, catalyzed by pyruvate decarboxylase; 2. CH3CHO + NADH + H+ → C2H5OH + NAD+, a reaction catalyzed by alcohol dehydrogenase (ADH1 in baker's yeast). As shown by the reaction equations, glycolysis causes the reduction of two molecules of NAD+ to NADH. Two ADP molecules are also converted to two ATP and two water molecules via substrate-level phosphorylation. Biochemical process of fermentation of sucrose: Related processes Fermentation of sugar to ethanol and CO2 can also be done by Zymomonas mobilis; however, the path is slightly different since formation of pyruvate does not happen by glycolysis but instead by the Entner–Doudoroff pathway. Other microorganisms can produce ethanol from sugars by fermentation but often only as a side product. Examples are heterolactic acid fermentation, in which Leuconostoc bacteria produce lactate + ethanol + CO2; mixed acid fermentation, in which Escherichia produce ethanol mixed with lactate, acetate, succinate, formate, CO2, and H2; and 2,3-butanediol fermentation by Enterobacter, producing ethanol, butanediol, lactate, formate, CO2, and H2. Effect of oxygen: Fermentation does not require oxygen. If oxygen is present, some species of yeast (e.g., Kluyveromyces lactis or Kluyveromyces lipolytica) will oxidize pyruvate completely to carbon dioxide and water in a process called cellular respiration; hence these species of yeast will produce ethanol only in an anaerobic environment, through fermentation rather than cellular respiration. This phenomenon is known as the Pasteur effect. Effect of oxygen: However, many yeasts, such as the commonly used baker's yeast Saccharomyces cerevisiae or the fission yeast Schizosaccharomyces pombe, under certain conditions ferment rather than respire even in the presence of oxygen. In wine making this is known as the counter-Pasteur effect.
These yeasts will produce ethanol even under aerobic conditions, if they are provided with the right kind of nutrition. During batch fermentation, the rate of ethanol production per milligram of cell protein is maximal for a brief period early in this process and declines progressively as ethanol accumulates in the surrounding broth. Studies demonstrate that the removal of this accumulated ethanol does not immediately restore fermentative activity, and they provide evidence that the decline in metabolic rate is due to physiological changes (including possible ethanol damage) rather than to the presence of ethanol. Several potential causes for the decline in fermentative activity have been investigated. Viability remained at or above 90%, internal pH remained near neutrality, and the specific activities of the glycolytic and alcohologenic enzymes (measured in vitro) remained high throughout batch fermentation. None of these factors appears to be causally related to the fall in fermentative activity during batch fermentation. Bread baking: Ethanol fermentation causes bread dough to rise. Yeast organisms consume sugars in the dough and produce ethanol and carbon dioxide as waste products. The carbon dioxide forms bubbles in the dough, expanding it to a foam. Less than 2% ethanol remains after baking. Alcoholic beverages: Ethanol contained in alcoholic beverages is produced by means of fermentation induced by yeast. Alcoholic beverages: Wine is produced by fermentation of the natural sugars present in grapes; cider and perry are produced by similar fermentation of natural sugar in apples and pears, respectively; and other fruit wines are produced from the fermentation of the sugars in any other kinds of fruit. Brandy and eaux de vie (e.g. slivovitz) are produced by distillation of these fruit-fermented beverages. Alcoholic beverages: Mead is produced by fermentation of the natural sugars present in honey. Alcoholic beverages: Beer, whiskey, and sometimes vodka are produced by fermentation of grain starches that have been converted to sugar by the enzyme amylase, which is present in grain kernels that have been malted (i.e. germinated). Other sources of starch (e.g. potatoes and unmalted grain) may be added to the mixture, as the amylase will act on those starches as well. In a few countries, amylase from saliva has also been used to convert the starch and induce fermentation. Whiskey and vodka are also distilled; gin and related beverages are produced by the addition of flavoring agents to a vodka-like feedstock during distillation. Alcoholic beverages: Rice wines (including sake) are produced by the fermentation of grain starches converted to sugar by the mold Aspergillus oryzae. Baijiu, soju, and shōchū are distilled from the product of such fermentation. Alcoholic beverages: Rum and some other beverages are produced by fermentation and distillation of sugarcane. Rum is usually produced from the sugarcane product molasses. In all cases, fermentation must take place in a vessel that allows carbon dioxide to escape but prevents outside air from coming in. This is to reduce the risk of contamination of the brew by unwanted bacteria or mold, and because a buildup of carbon dioxide creates a risk that the vessel will rupture or fail, possibly causing injury or property damage. Feedstocks for fuel production: Yeast fermentation of various carbohydrate products is also used to produce the ethanol that is added to gasoline. Feedstocks for fuel production: The dominant ethanol feedstock in warmer regions is sugarcane.
In temperate regions, corn or sugar beets are used.In the United States, the main feedstock for the production of ethanol is currently corn. Approximately 2.8 gallons of ethanol are produced from one bushel of corn (0.42 liter per kilogram). While much of the corn turns into ethanol, some of the corn also yields by-products such as DDGS (distillers dried grains with solubles) that can be used as feed for livestock. A bushel of corn produces about 18 pounds of DDGS (320 kilograms of DDGS per metric ton of maize). Although most of the fermentation plants have been built in corn-producing regions, sorghum is also an important feedstock for ethanol production in the Plains states. Pearl millet is showing promise as an ethanol feedstock for the southeastern U.S. and the potential of duckweed is being studied.In some parts of Europe, particularly France and Italy, grapes have become a de facto feedstock for fuel ethanol by the distillation of surplus wine. Surplus sugary drinks may also be used. In Japan, it has been proposed to use rice normally made into sake as an ethanol source. Feedstocks for fuel production: Cassava as ethanol feedstock Ethanol can be made from mineral oil or from sugars or starches. Starches are cheapest. The starchy crop with highest energy content per acre is cassava, which grows in tropical countries. Thailand already had a large cassava industry in the 1990s, for use as cattle feed and as a cheap admixture to wheat flour. Nigeria and Ghana are already establishing cassava-to-ethanol plants. Production of ethanol from cassava is currently economically feasible when crude oil prices are above US$120 per barrel. New varieties of cassava are being developed, so the future situation remains uncertain. Currently, cassava can yield between 25 and 40 tonnes per hectare (with irrigation and fertilizer), and from a tonne of cassava roots, circa 200 liters of ethanol can be produced (assuming cassava with 22% starch content). A liter of ethanol contains circa 21.46 MJ of energy. The overall energy efficiency of cassava-root to ethanol conversion is circa 32%. The yeast used for processing cassava is Endomycopsis fibuligera, sometimes used together with bacterium Zymomonas mobilis. Byproducts of fermentation: Ethanol fermentation produces unharvested byproducts such as heat, carbon dioxide, food for livestock, water, methanol, fuels, fertilizer and alcohols. The cereal unfermented solid residues from the fermentation process, which can be used as livestock feed or in the production of biogas, are referred to as Distillers grains and sold as WDG, Wet Distiller's grains, and DDGS, Dried Distiller's Grains with Solubles, respectively. Microbes used in ethanol fermentation: Yeast Saccharomyces cerevisiae Schizosaccharomyces Zymomonas mobilis (a bacterium)
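To make the conversion figures in this section concrete, here is a small worked sketch using the numbers quoted above; the stoichiometric ~51% mass yield of ethanol from glucose is standard chemistry background added for context rather than a figure from the text:

```python
# Figures quoted above: ~2.8 gal of ethanol and ~18 lb of DDGS per bushel of corn.
GAL_PER_BUSHEL = 2.8
LITERS_PER_GALLON = 3.785
KG_PER_BUSHEL_CORN = 25.4   # a bushel of shelled corn is roughly 56 lb / 25.4 kg

liters_per_kg = GAL_PER_BUSHEL * LITERS_PER_GALLON / KG_PER_BUSHEL_CORN
print(round(liters_per_kg, 2))   # ~0.42 L of ethanol per kg, matching the text

# Theoretical (stoichiometric) mass yield of ethanol from glucose:
# C6H12O6 -> 2 C2H5OH + 2 CO2, molar masses 180.16 and 46.07 g/mol.
theoretical_yield = 2 * 46.07 / 180.16
print(round(theoretical_yield, 3))   # ~0.511, i.e. about 51% of the glucose mass
```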
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Peroxide process** Peroxide process: The peroxide process is a method for the industrial production of hydrazine. Peroxide process: In this process hydrogen peroxide is used as an oxidant instead of sodium hypochlorite, which is traditionally used to generate hydrazine. The main advantage of the peroxide route to hydrazine relative to the traditional Olin Raschig process is that it does not coproduce salt. In this respect, the peroxide process is an example of green chemistry. Since many millions of kilograms of hydrazine are produced annually, this method is of both commercial and environmental significance. Production: Ketazine formation In the usual implementation, hydrogen peroxide is used together with acetamide. This mixture does not react with ammonia directly but does so in the presence of methyl ethyl ketone to give the oxaziridine. Production: Balanced equations for the individual steps are as follows. Imine formation through condensation: Me(Et)C=O + NH3 → Me(Et)C=NH + H2O. Oxidation of the imine to the oxaziridine: Me(Et)C=NH + H2O2 → Me(Et)CONH + H2O. Condensation of the oxaziridine with a second molecule of ammonia to give the hydrazone: Me(Et)CONH + NH3 → Me(Et)C=NNH2 + H2O. The hydrazone then condenses with a second equivalent of ketone to give the ketazine: Me(Et)C=O + Me(Et)C=NNH2 → Me(Et)C=NN=C(Et)Me + H2O. Typical process conditions are 50 °C and atmospheric pressure, with a feed mix of H2O2:ketone:NH3 in a molar ratio of about 1:2:4. Methyl ethyl ketone is advantageous compared to acetone because the resulting ketazine is immiscible with the reaction mixture and can be separated by decantation. A similar process based on benzophenone has also been described. Production: Ketazine to hydrazine The final stage involves hydrolysis of the purified ketazine: Me(Et)C=NN=C(Et)Me + 2 H2O → 2 Me(Et)C=O + N2H4. The hydrolysis of the azine is acid-catalyzed, hence the need to isolate the azine from the initial ammonia-containing reaction mixture. It is also endothermic, and so requires an increase in temperature (and pressure) to shift the equilibrium in favour of the desired products: ketone (which is recycled) and hydrazine hydrate. The reaction is carried out by simple distillation of the azeotrope: typical conditions are a pressure of 8 bar and temperatures of 130 °C at the top of the column and 179 °C at the bottom of the column. The hydrazine hydrate (30–45% aqueous solution) is run off from the base of the column, while the methyl ethyl ketone is distilled off from the top of the column and recycled. History: The peroxide process, also called the Pechiney–Ugine–Kuhlmann process, was developed in the early 1970s by Produits Chimiques Ugine Kuhlmann. Originally the process used acetone instead of methyl ethyl ketone. Methyl ethyl ketone is advantageous because the resulting ketazine is immiscible with the reaction mixture and can be separated by decantation. The world's largest hydrazine hydrate plant is in Lannemezan in France, producing 17,000 tonnes of hydrazine products per year. History: Bayer ketazine process Before invention of the peroxide process, the Bayer ketazine process had been commercialized. In the Bayer process, the oxidation of ammonia by sodium hypochlorite is conducted in the presence of acetone. The process generates the ketazine but also sodium chloride: 2 Me2CO + 2 NH3 + NaOCl → Me2C=NN=CMe2 + 3 H2O + NaCl, followed by hydrolysis: Me2C=NN=CMe2 + 2 H2O → N2H4 + 2 Me2CO.
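Summing the four ketazine-forming steps and the hydrolysis step listed above, with the methyl ethyl ketone regenerated and recycled, the net transformation reduces to the following overall equation (stated here as a consequence of the steps as written, not as a separately sourced claim):

$$2\,\mathrm{NH_3} + \mathrm{H_2O_2} \rightarrow \mathrm{N_2H_4} + 2\,\mathrm{H_2O}$$

which also makes clear why, unlike the hypochlorite routes, no salt is coproduced.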
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SKF-91488** SKF-91488: SKF-91488 is a histamine N-methyltransferase inhibitor. It prevents the degradation of histamine, leading to increased histamine levels.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**IAmaze** IAmaze: iAmaze is an Internet company that specializes in web applications created in dynamic HTML. Applications created by the company are designed to run on all browsers and operating systems, without downloads or plug-ins. AOL purchased the company on September 5, 2000, and commented that "iAmaze will help further strengthen its services by providing Web-based applications from this platform with the speed and functionality of traditional client-side desktop applications."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Time/memory/data tradeoff attack** Time/memory/data tradeoff attack: A time/memory/data tradeoff attack is a type of cryptographic attack where an attacker tries to achieve a situation similar to the space–time tradeoff but with the additional parameter of data, representing the amount of data available to the attacker. An attacker balances or reduces one or two of those parameters in favor of the other one or two. This type of attack is very difficult, so most of the ciphers and encryption schemes in use were not designed to resist it. History: Tradeoff attacks on symmetric cryptosystems date back to 1980, when Martin Hellman suggested a time/memory tradeoff method to break block ciphers with N possible keys in time T and memory M related by the tradeoff curve TM^2 = N^2, where 1 ≤ T ≤ N. Later, in 1995, Babbage and Golic devised a different tradeoff attack for stream ciphers with a new bound, TM = N for 1 ≤ T ≤ D, where D is the output data available to the cryptanalyst in real time. Attack mechanics: This attack is a special version of the general cryptanalytic time/memory tradeoff attack, which has two main phases: Preprocessing: During this phase, the attacker explores the structure of the cryptosystem and is allowed to record their findings in large tables. This can take a long time. Attack mechanics: Realtime: In this phase, the cryptanalyst is granted real data obtained from a specific unknown key. They then try to use this data with the precomputed table from the preprocessing phase to find the particular key in as little time as possible. Any time/memory/data tradeoff attack has the following parameters: N, the search space size; P, the time required for the preprocessing phase; T, the time required for the realtime phase; M, the amount of memory available to the attacker; and D, the amount of realtime data available to the attacker. Hellman's attack on block ciphers: For block ciphers, let N be the total number of possible keys and also assume the number of possible plaintexts and ciphertexts to be N. Also let the given data be a single ciphertext block of a specific plaintext counterpart. If we consider the mapping from the key x to the ciphertext y as a random permutation function f over an N-point space, and if this function f is invertible, we need to find the inverse of this function, f^−1(y) = x. Hellman's technique to invert this function is as follows. During the preprocessing stage, try to cover the N-point space by an m×t rectangular matrix that is constructed by iterating the function f on m random starting points in N, t times. The start points are the leftmost column in the matrix and the end points are the rightmost column. Then store the pairs of start and end points in increasing order of end-point values. Now, a single matrix will not be able to cover the whole N space. But if we add more rows to the matrix, we will end up with a huge matrix that includes the same points more than once. So, we find the critical value of m at which the matrix contains exactly mt different points. Consider that the first m paths from start points to end points are all disjoint, containing mt points, and that the next path, which includes exactly t points, has at least one common point with one of those previous paths. By the birthday paradox, those two sets of mt and t points remain disjoint as long as we make sure that t·mt ≤ N. We achieve this by enforcing the matrix stopping rule mt^2 = N. Nevertheless, an m×t matrix with mt^2 = N covers only a portion mt/N = 1/t of the whole space.
To generate t matrices to cover the whole space, we use variants of f defined by f_i(x) = h_i(f(x)), where h_i is a simple output manipulation such as a reordering of the bits of f(x) (refer to the original paper for more details). One can see that the total preprocessing time is P ≈ N. Also, M = mt, since we only need to store the pairs of start and end points and we have t matrices, each of m pairs. During the realtime phase, the total computation required to find f^−1(y) is T = t^2, because we need to make t inversion attempts (the point is likely to be covered by one of the matrices) and each attempt takes t evaluations of some f_i. The optimum tradeoff curve is obtained by using the matrix stopping rule mt^2 = N, and we get TM^2 = N^2, P = N, D = 1, where the choice of T and M depends on the cost of each resource. According to Hellman, if the block cipher at hand has the property that the mapping from its key to the ciphertext is a random permutation function f over an N-point space, and if this f is invertible, the tradeoff relationship becomes much better: TM = N. Babbage-and-Golic attack on stream ciphers: For stream ciphers, N is specified by the number of internal states of the bit generator, which is probably different from the number of keys. D is the count of the first pseudorandom bits produced from the generator. Finally, the attacker's goal is to find one of the actual internal states of the bit generator in order to run the generator from that point on and generate the rest of the key. Associate each of the possible N internal states of the bit generator with the corresponding string that consists of the first log(N) bits obtained by running the generator from that state; this defines a mapping f(x) = y from states x to output prefixes y. This mapping is considered a random function over the common space of N points. To invert this function, an attacker establishes the following. During the preprocessing phase, pick M random states x_i and compute their corresponding output prefixes y_i. Babbage-and-Golic attack on stream ciphers: Store the pairs (x_i, y_i) in increasing order of y_i in a large table. During the realtime phase, you have D + log(N) − 1 generated bits. Calculate from them all D possible windows y_1, y_2, ..., y_D of consecutive bits of length log(N). Search for each y_i in the generated table, which takes logarithmic time. If you have a hit, this y_i corresponds to an internal state x_i of the bit generator from which you can run the generator forward to obtain the rest of the key. Babbage-and-Golic attack on stream ciphers: By the birthday paradox, you are guaranteed that two subsets of a space with N points have an intersection if the product of their sizes is greater than N. This result from the birthday attack gives the condition DM = N, with attack time T = D and preprocessing time P = M, which is just a particular point on the tradeoff curve TM = N. We can generalize this relation: if we ignore some of the available realtime data, we can reduce T from T = D down to 1, and the general tradeoff curve becomes TM = N with 1 ≤ T ≤ D and P = M.
Recall that this tradeoff attack on a stream cipher is successful if any of the given D output prefixes is found in any of the matrices covering N. This cuts the number of points that the matrices need to cover from N to N/D. This is done by reducing the number of matrices from t to t/D while keeping m as large as possible (but this requires t ≥ D to have at least one table). For this new attack, we have M = mt/D, because we reduced the number of matrices to t/D, and the same holds for the preprocessing time, P = N/D. The realtime cost of the attack is T = (t/D)·t·D = t^2, which is the product of the number of matrices, the length of each iteration, and the number of available data points at attack time. Shamir and Biryukov's attack on stream ciphers: Eventually, we again use the matrix stopping rule to obtain the tradeoff curve TM^2D^2 = t^2·(m^2t^2/D^2)·D^2 = m^2t^4 = N^2, for D^2 ≤ T ≤ N (because t ≥ D). Attacks on stream ciphers with low sampling resistance: This attack, invented by Biryukov, Shamir, and Wagner, relies on a specific feature of some stream ciphers: the bit generator undergoes only a few changes in its internal state before producing the next output bit. Therefore, we can enumerate those special states that generate k zero bits for small values of k at low cost. But when forcing a large number of output bits to take specific values, this enumeration process becomes very expensive and difficult. Now, we can define the sampling resistance of a stream cipher to be R = 2^(−k), where k is the maximum value for which such enumeration is feasible. Shamir and Biryukov's attack on stream ciphers: Let the stream cipher have N = 2^n states, each with a full name of n bits and a corresponding output name, which is the first n bits of its output sequence. If this stream cipher has sampling resistance R = 2^(−k), then an efficient enumeration can use a short name of n − k bits to define the special states of the generator. Each special state with an (n − k)-bit short name has a corresponding short output name of n − k bits, which is the output sequence of the special state after removing the first k leading bits. Now, we are able to define a new mapping over a reduced space of NR = 2^(n−k) points, and this mapping is equivalent to the original mapping. If we let DR ≥ 1, the realtime data available to the attacker is guaranteed to contain at least one output of those special states. Otherwise, we relax the definition of special states to include more points. If we substitute DR for D and NR for N in the new time/memory/data tradeoff attack by Shamir and Biryukov, we obtain the same tradeoff curve TM^2D^2 = N^2 but with (DR)^2 ≤ T ≤ NR. This is actually an improvement, since it relaxes the lower bound on T: (DR)^2 can be as small as 1, which means the attack can be made faster. This technique also reduces the number of expensive disk access operations from t to tR, since we access only the special DR points, and makes the attack faster because of the reduced number of expensive disk operations.
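A toy Python sketch of the single-table core of Hellman's method described above: build m chains of length t, keep only the (end point, start point) pairs, and invert by walking a target value forward until a stored end point is hit. The hash-based stand-in for the cipher, the parameters, and the single-table simplification (the full attack uses t tables built from variants f_i(x) = h_i(f(x))) are all illustrative assumptions:

```python
import hashlib

N = 2**24
m, t = 256, 256   # m * t^2 = 2^8 * 2^16 = 2^24 = N (the matrix stopping rule)

def f(x: int) -> int:
    """Stand-in one-way function over an N-point space (think key -> ciphertext)."""
    digest = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big") % N

# Preprocessing: m chains of length t, storing only (end point -> start point).
table = {}
for start in range(m):               # illustrative: use 0..m-1 as chain start points
    x = start
    for _ in range(t):
        x = f(x)
    table[x] = start

def invert(y: int):
    """Look for some x with f(x) == y using only the stored chain end points."""
    z = y
    for i in range(t):
        if z in table:
            x = table[z]             # replay the candidate chain from its start
            for _ in range(t - 1 - i):
                x = f(x)
            if f(x) == y:            # guard against false alarms from chain merges
                return x
        z = f(z)
    return None

# Demo: invert the image of a point known to lie on one of the stored chains.
some_start = next(iter(table.values()))
target = f(some_start)
x = invert(target)
print(x is not None and f(x) == target)   # expected: True
```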
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**System console** System console: One meaning of system console, computer console, root console, operator's console, or simply console is the text entry and display device for system administration messages, particularly those from the BIOS or boot loader, the kernel, from the init system and from the system logger. It is a physical device consisting of a keyboard and a screen, and traditionally is a text terminal, but may also be a graphical terminal. System consoles are generalized to computer terminals, which are abstracted respectively by virtual consoles and terminal emulators. Today communication with system consoles is generally done abstractly, via the standard streams (stdin, stdout, and stderr), but there may be system-specific interfaces, for example those used by the system kernel.Another, older, meaning of system console, computer console, hardware console, operator's console or simply console is a hardware component used by an operator to control the hardware, typically some combination of front panel, keyboard/printer and keyboard/display. History: Prior to the development of alphanumeric CRT system consoles, some computers such as the IBM 1620 had console typewriters and front panels while the very first programmable computer, the Manchester Baby, used a combination of electromechanical switches and a CRT to provide console functions—the CRT displaying memory contents in binary by mirroring the machine's Williams-Kilburn tube CRT-based RAM. History: Some early operating systems supported either a single keyboard/print or keyboard/display device for controlling the OS. Some also supported a single alternate console, and some supported a hardcopy console for retaining a record of commands, responses and other console messages. However, in the late 1960s it became common for operating systems to support many more consoles than 3, and operating systems began appearing in which the console was simply any terminal with a privileged user logged on. History: On early minicomputers, the console was a serial console, an RS-232 serial link to a terminal such as a ASR-33 or later a DECWriter or DEC VT100. This terminal was usually kept in a secured room since it could be used for certain privileged functions such as halting the system or selecting which media to boot from. Large midrange systems, e.g. those from Sun Microsystems, Hewlett-Packard and IBM, still use serial consoles. In larger installations, the console ports are attached to multiplexers or network-connected multiport serial servers that let an operator connect a terminal to any of the attached servers. Today, serial consoles are often used for accessing headless systems, usually with a terminal emulator running on a laptop. Also, routers, enterprise network switches and other telecommunication equipment have RS-232 serial console ports. History: On PCs and workstations, the computer's attached keyboard and monitor have the equivalent function. Since the monitor cable carries video signals, it cannot be extended very far. Often, installations with many servers therefore use keyboard/video multiplexers (KVM switches) and possibly video amplifiers to centralize console access. In recent years, KVM/IP devices have become available that allow a remote computer to view the video output and send keyboard input via any TCP/IP network and therefore the Internet. 
History: Some PC BIOSes, especially in servers, also support serial consoles, giving access to the BIOS through a serial port so that the simpler and cheaper serial console infrastructure can be used. Even where BIOS support is lacking, some operating systems, e.g. FreeBSD and Linux, can be configured for serial console operation either during bootup, or after startup. Starting with the IBM 9672, IBM large systems have used a Hardware Management Console (HMC), consisting of a PC and a specialized application, instead of a 3270 or serial link. Other IBM product lines also use an HMC, e.g., System p. It is usually possible to log in from the console. Depending on configuration, the operating system may treat a login session from the console as being more trustworthy than a login session from other sources.
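As an illustration of the serial-console access described above, a minimal sketch that assumes the third-party pyserial package, a USB-to-serial adapter visible as /dev/ttyUSB0, and a console configured for 115200 baud; these details are assumptions, not part of the text:

```python
import serial  # third-party "pyserial" package (assumed to be installed)

# Open the port that a headless server's serial console is wired to.
# The device path and baud rate are illustrative and site-specific.
with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as console:
    console.write(b"\r\n")          # nudge the remote system to print a prompt
    banner = console.read(256)      # read whatever the system prints back
    print(banner.decode(errors="replace"))
```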
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Biotechnology Heritage Award** Biotechnology Heritage Award: The Biotechnology Heritage Award recognizes individuals who have made significant contributions to the development of biotechnology through discovery, innovation, and public understanding. It is presented annually at the Biotechnology Innovation Organization (BIO) Annual International Convention by the Biotechnology Innovation Organization (BIO, formerly the Biotechnology Industry Organization) and the Science History Institute (formerly the Chemical Heritage Foundation). The purpose of the award is "to encourage emulation, inspire achievement, and promote public understanding of modern science, industry, and economics". Recipients: The award is given yearly and was first presented in 1999. Recipients: Ivor Royston, 2020; Janet Woodcock, 2019; William Rastetter, 2018; John C. Martin, 2017; Stanley Norman Cohen, 2016; Moshe Alafi and William K. Bowes, 2015; Robert S. Langer, 2014; George Rosenkranz, 2013; Nancy Chang, 2012; Joshua S. Boger, 2011; Arthur D. Levinson, 2010; Robert T. Fraley, 2009; Henri A. Termeer, 2008; Ronald E. Cape, 2007; Alejandro Zaffaroni, 2006; Paul Berg, 2005; Leroy Hood, 2004; William J. Rutter, 2003; Walter Gilbert, 2002; Francis S. Collins and J. Craig Venter, 2001; Herbert Boyer and Robert A. Swanson, 2000; George B. Rathmann, 1999.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Varimax rotation** Varimax rotation: In statistics, a varimax rotation is used to simplify the expression of a particular sub-space in terms of just a few major items each. The actual coordinate system is unchanged; it is the orthogonal basis that is being rotated to align with those coordinates. The sub-space found with principal component analysis or factor analysis is expressed as a dense basis with many non-zero weights, which makes it hard to interpret. Varimax is so called because it maximizes the sum of the variances of the squared loadings (squared correlations between variables and factors). Preserving orthogonality requires that it is a rotation that leaves the sub-space invariant. Intuitively, this is achieved if (a) any given variable has a high loading on a single factor but near-zero loadings on the remaining factors and if (b) any given factor is constituted by only a few variables with very high loadings on this factor while the remaining variables have near-zero loadings on this factor. If these conditions hold, the factor loading matrix is said to have "simple structure," and varimax rotation brings the loading matrix closer to such simple structure (as much as the data allow). From the perspective of individuals measured on the variables, varimax seeks a basis that most economically represents each individual—that is, each individual can be well described by a linear combination of only a few basis functions. Varimax rotation: One way of expressing the varimax criterion formally, for a p × k loading matrix Λ, is
$$\operatorname*{arg\,max}_{R}\left(\frac{1}{p}\sum_{j=1}^{k}\sum_{i=1}^{p}(\Lambda R)_{ij}^{4}-\sum_{j=1}^{k}\Bigl(\frac{1}{p}\sum_{i=1}^{p}(\Lambda R)_{ij}^{2}\Bigr)^{2}\right).$$
Suggested by Henry Felix Kaiser in 1958, it is a popular scheme for orthogonal rotation (where all factors remain uncorrelated with one another). Rotation in factor analysis: A summary of the use of varimax rotation and of other types of factor rotation is presented in the article on factor analysis. Implementations: In the R programming language the varimax method is implemented in several packages, including stats (function varimax()) and contributed packages such as GPArotation and psych. In SAS, varimax rotation is available in PROC FACTOR using ROTATE = VARIMAX.
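A compact NumPy sketch of one common way to compute this rotation: the SVD-based iteration widely used in open-source implementations. It is an illustrative port rather than the code of the R or SAS routines named above:

```python
import numpy as np

def varimax(Lambda, gamma=1.0, max_iter=100, tol=1e-6):
    """Return (rotated loadings, rotation matrix) for a p x k loading matrix."""
    p, k = Lambda.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = Lambda @ R
        # Gradient-like term of the orthomax family of criteria; gamma = 1 is varimax.
        B = Lambda.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0)))
        u, s, vt = np.linalg.svd(B)
        R = u @ vt                        # closest orthogonal rotation
        d_old, d = d, np.sum(s)
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return Lambda @ R, R

# Tiny demo on a random 5-variable, 2-factor loading matrix.
rng = np.random.default_rng(0)
rotated, R = varimax(rng.normal(size=(5, 2)))
print(np.allclose(R.T @ R, np.eye(2)))    # the rotation matrix is orthogonal: True
```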
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rack card** Rack card: A rack card is a document used for commercial advertising, frequently in convenience stores, hotels, landmarks, restaurants, rest areas and other locations that enjoy significant foot traffic. Rack cards are typically 4 by 9 inches in size and sport high-impact graphic design.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Door-to-balloon** Door-to-balloon: Door-to-balloon is a time measurement in emergency cardiac care (ECC), specifically in the treatment of ST segment elevation myocardial infarction (or STEMI). The interval starts with the patient's arrival in the emergency department, and ends when a catheter guidewire crosses the culprit lesion in the cardiac cath lab. Because of the adage that "time is muscle", meaning that delays in treating a myocardial infarction increase the likelihood and amount of cardiac muscle damage due to localised hypoxia, ACC/AHA guidelines recommend a door-to-balloon interval of no more than 90 minutes. As of 2006 in the United States, fewer than half of STEMI patients received reperfusion with primary percutaneous coronary intervention (PCI) within the guideline-recommended timeframe. It has become a core quality measure for the Joint Commission on Accreditation of Healthcare Organizations (TJC). Improving door-to-balloon times: Door to Balloon (D2B) Initiative. The benefit of prompt, expertly performed primary percutaneous coronary intervention over thrombolytic therapy for acute ST elevation myocardial infarction is now well established. Few hospitals can provide PCI within the 90-minute interval, which prompted the American College of Cardiology (ACC) to launch a national Door to Balloon (D2B) Initiative in November 2006. The D2B Alliance seeks to "take the extraordinary performance of a few hospitals and make it the ordinary performance of every hospital." Over 800 hospitals have joined the D2B Alliance as of March 16, 2007. The D2B Alliance advocates six key evidence-based strategies and one optional strategy to help reduce door-to-balloon times: the ED physician activates the cath lab; a single-call activation system activates the cath lab; the cath lab team is available within 20–30 minutes; prompt data feedback; senior management commitment; a team-based approach; and (optionally) a prehospital 12-lead ECG activates the cath lab. Mission: Lifeline. On May 30, 2007, the American Heart Association launched 'Mission: Lifeline', a "community-based initiative aimed at quickly activating the appropriate chain of events critical to opening a blocked artery to the heart that is causing a heart attack." It is seen as complementary to the ACC's D2B Initiative. The program will concentrate on patient education to make the public more aware of the signs of a heart attack and the importance of calling 9-1-1 for emergency medical services (EMS) for transport to the hospital. In addition, the program will attempt to improve the diagnosis of STEMI patients by EMS personnel. According to Alice Jacobs, MD, who led the work group that addressed STEMI systems, when patients arrive at non-PCI hospitals they will stay on the EMS stretcher with paramedics in attendance while a determination is made as to whether or not the patient will be transferred. For walk-in STEMI patients at non-PCI hospitals, EMS calls to transfer the patient to a PCI hospital should be handled with the same urgency as a 9-1-1 call. Improving door-to-balloon times: EMS-to-balloon (E2B) Although incorporating a prehospital 12-lead ECG into critical pathways for STEMI patients is listed as an optional strategy by the D2B Alliance, the fastest median door-to-balloon times have been achieved by hospitals with paramedics who perform 12-lead ECGs in the field.
EMS can play a key role in reducing the first-medical-contact-to-balloon time, sometimes referred to as EMS-to-balloon (E2B) time, by performing a 12 lead ECG in the field and using this information to triage the patient to the most appropriate medical facility.Depending on how the prehospital 12 lead ECG program is structured, the 12 lead ECG can be transmitted to the receiving hospital for physician interpretation, interpreted on-site by appropriately trained paramedics, or interpreted on-site by paramedics with the help of computerized interpretive algorithms. Some EMS systems utilize a combination of all three methods. Prior notification of an inbound STEMI patient enables time saving decisions to be made prior to the patient's arrival. This may include a "cardiac alert" or "STEMI alert" that calls in off duty personnel in areas where the cardiac cath lab is not staffed 24 hours a day. The 30-30-30 rule takes the goal of achieving a 90-minute door-to-balloon time and divides it into three equal time segments. Each STEMI care provider (EMS, the emergency department, and the cardiac cath lab) has 30 minutes to complete its assigned tasks and seamlessly "hand off" the STEMI patient to the next provider. In some locations, the emergency department may be bypassed altogether. Common themes in hospitals achieving rapid door-to-balloon times: Bradley et al. (Circulation 2006) performed a qualitative analysis of 11 hospitals in the National Registry of Myocardial Infarction that had median door-to-balloon times = or < 90 minutes. They identified 8 themes that were present in all 11 hospitals: An explicit goal of reducing door-to-balloon times Visible support of senior management Innovative, standardized protocols Flexibility in implementing standardized protocols Uncompromising individual clinical leaders Collaborative interdisciplinary teams Data feedback to monitor progress and identify problems or successes Organizational culture that fostered persistence despite challenges and setbacks Criteria for an ideal primary PCI center: Granger et al. (Circulation 2007) identified the following criteria of an ideal primary PCI center. 
Criteria for an ideal primary PCI center: Institutional resources Primary PCI is the routine treatment for eligible STEMI patients 24 hours a day, 7 days a week Primary PCI is performed as soon as possible Institution is capable of providing supportive care to STEMI patients and handling complications Written commitment by hospital administration to support the program Identifies physician director for PCI program Creates multidisciplinary group that includes input from all relevant stakeholders, including cardiology, emergency medicine, nursing, and EMS Institution designs and implements a continuing education program For institution without on-site surgical backup, there is a written agreement with tertiary institution and EMS to provide for rapid transfer of STEMI patients when needed Physician resources Interventional cardiologists meet ACC/AHA criteria for competence Interventional cardiologists participate in, and are responsive to formal on-call schedule Program requirements Minimum of 36 primary PCI procedures and 400 total PCI procedures annually Program is described in a "manual of operations" that is compliant with ACC/AHA guidelines Mechanisms for monitoring program performance and ongoing quality improvement activities Other features of ideal system Robust data collection and feedback including door-to-balloon time, first door-to-balloon time (for transferred patients), and the proportion of eligible patients receiving some form of reperfusion therapy Earliest possible activation of the cardiac cath lab, based on prehospital ECG whenever possible, and direct referral to PCI-hospital based on field diagnosis of STEMI Standardized ED protocols for STEMI management Single phone call activation of cath lab that does not depend on cardiologist interpretation of ECG Gaps and barriers to timely access to primary PCI: Granger et al. (Circulation 2007) identified the following barriers to timely access to primary PCI. Gaps and barriers to timely access to primary PCI: Busy PCI hospitals may have to divert patients Significant delays in ED diagnosis of STEMI may occur, particularly when patient does not arrive by EMS Manpower and financial considerations may prevent smaller PCI programs from providing primary PCI for STEMI 24 hours a day Reimbursement for optimal coordination of STEMI patients needs to be realigned to reflect performance In most PCI centers, cath lab staff is off-site during off hours, requiring a mandate that staff report with 20–30 minutes of cath lab activation
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Damerau–Levenshtein distance** Damerau–Levenshtein distance: In information theory and computer science, the Damerau–Levenshtein distance (named after Frederick J. Damerau and Vladimir I. Levenshtein) is a string metric for measuring the edit distance between two sequences. Informally, the Damerau–Levenshtein distance between two words is the minimum number of operations (consisting of insertions, deletions or substitutions of a single character, or transposition of two adjacent characters) required to change one word into the other.

Damerau–Levenshtein distance: The Damerau–Levenshtein distance differs from the classical Levenshtein distance by including transpositions among its allowable operations in addition to the three classical single-character edit operations (insertions, deletions and substitutions). In his seminal paper, Damerau stated that in an investigation of spelling errors for an information-retrieval system, more than 80% were a result of a single error of one of the four types. Damerau's paper considered only misspellings that could be corrected with at most one edit operation. While the original motivation was to measure distance between human misspellings to improve applications such as spell checkers, Damerau–Levenshtein distance has also seen uses in biology to measure the variation between protein sequences.

Definition: To express the Damerau–Levenshtein distance between two strings a and b, a function $d_{a,b}(i,j)$ is defined, whose value is a distance between an i-symbol prefix (initial substring) of string a and a j-symbol prefix of b. The restricted distance function is defined recursively as:

$$
d_{a,b}(i,j) = \min
\begin{cases}
0 & \text{if } i = j = 0, \\
d_{a,b}(i-1,j) + 1 & \text{if } i > 0, \\
d_{a,b}(i,j-1) + 1 & \text{if } j > 0, \\
d_{a,b}(i-1,j-1) + 1_{(a_i \neq b_j)} & \text{if } i, j > 0, \\
d_{a,b}(i-2,j-2) + 1 & \text{if } i, j > 1,\ a_i = b_{j-1},\ a_{i-1} = b_j,
\end{cases}
$$

where $1_{(a_i \neq b_j)}$ is the indicator function equal to 0 when $a_i = b_j$ and equal to 1 otherwise.

Definition: Each recursive call matches one of the cases covered by the Damerau–Levenshtein distance: $d_{a,b}(i-1,j)+1$ corresponds to a deletion (from a to b), $d_{a,b}(i,j-1)+1$ corresponds to an insertion (from a to b), $d_{a,b}(i-1,j-1)+1_{(a_i \neq b_j)}$ corresponds to a match or mismatch, depending on whether the respective symbols are the same, and $d_{a,b}(i-2,j-2)+1$ corresponds to a transposition between two successive symbols. The Damerau–Levenshtein distance between a and b is then given by the function value for the full strings, $d_{a,b}(|a|,|b|)$, where $|a|$ denotes the length of string a and $|b|$ is the length of b.

Algorithm: Presented here are two algorithms: the first, simpler one computes what is known as the optimal string alignment distance or restricted edit distance, while the second one computes the Damerau–Levenshtein distance with adjacent transpositions. Adding transpositions adds significant complexity. The difference between the two algorithms is that the optimal string alignment algorithm computes the number of edit operations needed to make the strings equal under the condition that no substring is edited more than once, whereas the second one imposes no such restriction.

Algorithm: Take for example the edit distance between CA and ABC. The Damerau–Levenshtein distance LD(CA, ABC) = 2 because CA → AC → ABC, but the optimal string alignment distance OSA(CA, ABC) = 3 because if the operation CA → AC is used, it is not possible to use AC → ABC because that would require the substring to be edited more than once, which is not allowed in OSA, and therefore the shortest sequence of operations is CA → A → AB → ABC.
Note that for the optimal string alignment distance, the triangle inequality does not hold: OSA(CA, AC) + OSA(AC, ABC) < OSA(CA, ABC), and so it is not a true metric.

Algorithm: Optimal string alignment distance
Optimal string alignment distance can be computed using a straightforward extension of the Wagner–Fischer dynamic programming algorithm that computes Levenshtein distance. In pseudocode:

    algorithm OSA-distance is
        input: strings a[1..length(a)], b[1..length(b)]
        output: distance, integer
        let d[0..length(a), 0..length(b)] be a 2-d array of integers, dimensions length(a)+1, length(b)+1
        // note that d is zero-indexed, while a and b are one-indexed.
        for i := 0 to length(a) inclusive do
            d[i, 0] := i
        for j := 0 to length(b) inclusive do
            d[0, j] := j
        for i := 1 to length(a) inclusive do
            for j := 1 to length(b) inclusive do
                if a[i] = b[j] then
                    cost := 0
                else
                    cost := 1
                d[i, j] := minimum(d[i-1, j] + 1,      // deletion
                                   d[i, j-1] + 1,      // insertion
                                   d[i-1, j-1] + cost) // substitution
                if i > 1 and j > 1 and a[i] = b[j-1] and a[i-1] = b[j] then
                    d[i, j] := minimum(d[i, j], d[i-2, j-2] + 1) // transposition
        return d[length(a), length(b)]

The difference from the algorithm for Levenshtein distance is the addition of one recurrence:

    if i > 1 and j > 1 and a[i] = b[j-1] and a[i-1] = b[j] then
        d[i, j] := minimum(d[i, j], d[i-2, j-2] + 1) // transposition

Distance with adjacent transpositions
The following algorithm computes the true Damerau–Levenshtein distance with adjacent transpositions; this algorithm requires as an additional parameter the size of the alphabet Σ, so that all entries of the arrays are in [0, |Σ|):

    algorithm DL-distance is
        input: strings a[1..length(a)], b[1..length(b)]
        output: distance, integer
        da := new array of |Σ| integers
        for i := 1 to |Σ| inclusive do
            da[i] := 0
        let d[−1..length(a), −1..length(b)] be a 2-d array of integers, dimensions length(a)+2, length(b)+2
        // note that d has indices starting at −1, while a, b and da are one-indexed.
        maxdist := length(a) + length(b)
        d[−1, −1] := maxdist
        for i := 0 to length(a) inclusive do
            d[i, −1] := maxdist
            d[i, 0] := i
        for j := 0 to length(b) inclusive do
            d[−1, j] := maxdist
            d[0, j] := j
        for i := 1 to length(a) inclusive do
            db := 0
            for j := 1 to length(b) inclusive do
                k := da[b[j]]
                ℓ := db
                if a[i] = b[j] then
                    cost := 0
                    db := j
                else
                    cost := 1
                d[i, j] := minimum(d[i−1, j−1] + cost,                  // substitution
                                   d[i, j−1] + 1,                       // insertion
                                   d[i−1, j] + 1,                       // deletion
                                   d[k−1, ℓ−1] + (i−k−1) + 1 + (j−ℓ−1)) // transposition
            da[a[i]] := i
        return d[length(a), length(b)]

To devise a proper algorithm to calculate unrestricted Damerau–Levenshtein distance, note that there always exists an optimal sequence of edit operations, where once-transposed letters are never modified afterwards. (This holds as long as the cost of a transposition, $W_T$, is at least the average of the cost of an insertion and deletion, i.e., $2W_T \geq W_I + W_D$.) Thus, we need to consider only two symmetric ways of modifying a substring more than once: (1) transpose letters and insert an arbitrary number of characters between them, or (2) delete a sequence of characters and transpose letters that become adjacent after deletion. The straightforward implementation of this idea gives an algorithm of cubic complexity, $O(M \cdot N \cdot \max(M, N))$, where M and N are string lengths. Using the ideas of Lowrance and Wagner, this naive algorithm can be improved to be $O(M \cdot N)$ in the worst case, which is what the above pseudocode does.
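The OSA pseudocode above translates almost line for line into a working program. The following is a minimal Python sketch of the optimal string alignment variant only; the function name and the use of 0-based indexing are choices made here and are not part of the original pseudocode.

```python
def osa_distance(a: str, b: str) -> int:
    """Optimal string alignment (restricted Damerau-Levenshtein) distance.

    A direct translation of the OSA pseudocode above; Python strings are
    0-indexed, so a[i-1] and b[j-1] stand in for the pseudocode's a[i] and b[j].
    """
    la, lb = len(a), len(b)
    # (la+1) x (lb+1) table; d[i][j] is the distance between a[:i] and b[:j].
    d = [[0] * (lb + 1) for _ in range(la + 1)]
    for i in range(la + 1):
        d[i][0] = i  # delete i characters
    for j in range(lb + 1):
        d[0][j] = j  # insert j characters
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[la][lb]
```

Consistent with the example in the article, osa_distance("CA", "ABC") evaluates to 3, whereas the unrestricted Damerau–Levenshtein distance of the same pair is 2.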
Algorithm: The bitap algorithm can be modified to process transpositions. See the information retrieval section of [1] for an example of such an adaptation.

Applications: Damerau–Levenshtein distance plays an important role in natural language processing. In natural languages, strings are short and the number of errors (misspellings) rarely exceeds 2. In such circumstances, the restricted and the real edit distance differ very rarely. Oommen and Loke even mitigated the limitation of the restricted edit distance by introducing generalized transpositions. Nevertheless, one must remember that the restricted edit distance usually does not satisfy the triangle inequality, and thus cannot be used with metric trees.

Applications: DNA Since DNA frequently undergoes insertions, deletions, substitutions, and transpositions, and each of these operations occurs on approximately the same timescale, the Damerau–Levenshtein distance is an appropriate metric of the variation between two strands of DNA. More common in DNA, protein, and other bioinformatics-related alignment tasks is the use of closely related algorithms such as the Needleman–Wunsch algorithm or the Smith–Waterman algorithm.

Applications: Fraud detection The algorithm can be used with any set of words, such as vendor names. Since entry is manual by nature, there is a risk of entering a false vendor. A fraudster employee may enter one real vendor such as "Rich Heir Estate Services" versus a false vendor "Rich Hier State Services". The fraudster would then create a false bank account and have the company route checks to the real vendor and the false vendor. The Damerau–Levenshtein algorithm will detect the transposed and dropped letters and bring the items to the attention of a fraud examiner.

Applications: Export control The U.S. Government uses the Damerau–Levenshtein distance with its Consolidated Screening List API.
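As a rough illustration of the fraud-detection use case above, the osa_distance sketch from the Algorithm section can flag vendor names that are suspiciously close to an existing vendor. The helper name, the case-insensitive comparison, and the distance threshold of 3 are illustrative choices made here, not part of the source.

```python
# Illustrative only: flag new vendor names within a small edit distance of a
# known vendor, reusing osa_distance() from the sketch above.
known_vendors = ["Rich Heir Estate Services"]

def looks_like_existing_vendor(candidate: str, max_distance: int = 3) -> bool:
    # Compare case-insensitively so capitalization differences are not counted as edits.
    return any(
        osa_distance(candidate.lower(), vendor.lower()) <= max_distance
        for vendor in known_vendors
    )

print(looks_like_existing_vendor("Rich Hier State Services"))  # True: a transposition plus a dropped letter
print(looks_like_existing_vendor("Acme Industrial Supply"))    # False: unrelated name
```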
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SIP extensions for the IP Multimedia Subsystem** SIP extensions for the IP Multimedia Subsystem: The Session Initiation Protocol (SIP) is the signaling protocol selected by the 3rd Generation Partnership Project (3GPP) to create and control multimedia sessions with two or more participants in the IP Multimedia Subsystem (IMS), and therefore is a key element in the IMS framework. SIP extensions for the IP Multimedia Subsystem: SIP was developed by the Internet Engineering Task Force (IETF) as a standard for controlling multimedia communication sessions in Internet Protocol (IP) networks, working in the application layer of the Internet Protocol Suite. Several SIP extensions have been added to the basic protocol specification in order to extend its functionality. These extensions are based on Request for Comments (RFC) protocol recommendations by the IETF. SIP extensions for the IP Multimedia Subsystem: The 3GPP, which is a collaboration between groups of telecommunications associations aimed at developing and maintaining the IMS, stated a series of requirements for SIP to be successfully used in the IMS. Some of them could be addressed by using existing capabilities and extensions in SIP while, in other cases, the 3GPP had to collaborate with the IETF to standardize new SIP extensions to meet the new requirements. In any case, the IETF evolves SIP in a generic basis, so that the use of its extensions is not restricted to the IMS framework. 3GPP requirements for SIP: The 3GPP has stated several general requirements for operation of the IMS. These include an efficient use of the radio interface by minimizing the exchange of signaling messages between the mobile terminal and the network, a minimum session setup time by performing tasks prior to session establishment instead of during session establishment, a minimum support required in the terminal, the support for roaming and non-roaming scenarios with terminal mobility management (supported by the access network, not SIP), and support for IPv6 addressing. 3GPP requirements for SIP: Other requirements involve protocol extensions, such as SIP header fields to exchange user or server information, and SIP methods to support new network functionality: requirement for registration, re-registration, de-registration, event notifications, instant messaging or call control primitives with additional capabilities such as call transference. Other specific requirements are: Quality of service support with policy and charging control, as well as resource negotiation and allocation before alerting the destination user. 3GPP requirements for SIP: Identification of users for authentication, authorization and accounting purposes. Security between users and the network and among network nodes is a major issue to be addressed by using mutual authentication mechanisms such as private and public keys and digests, as well as media authorization extensions. It must be also possible to present both the caller and the called party the identities of their counterparts, with the ability to hide this information if required. Anonymity in session establishment and privacy are also important. 3GPP requirements for SIP: Protection of SIP signaling with integrity and confidentiality support based on initial authentication and symmetric cryptographic keys; error recovery and verification are also needed. Session release initiated by the network (e.g. in case the user terminal leaves coverage or runs out of credit). Source-routing mechanisms. 
The routing of SIP messages has its own requirements in the IMS as all terminal originated session setup attempts must transit both the P-CSCF and the S-CSCF so that these call session control functions (CSCFs) servers can properly provide their services. There can be special path requirements for certain messages as well. Interoperation between IMS and the public switched telephone network (PSTN).Finally, it is also necessary that other protocols and network services such as DHCP or DNS are adapted to work with SIP, for instance for outbound proxy (P-CSCF) location and SIP Uniform Resource Identifier (URI) to IP address resolution, respectively. Extension negotiation mechanism: There is a mechanism in SIP for extension negotiation between user agents (UA) or servers, consisting of three header fields: supported, require and unsupported, which UAs or servers (i.e. user terminals or call session control function (CSCF) in IMS) may use to specify the extensions they understand. When a client initiates a SIP dialog with a server, it states the extensions it requires to be used and also other extensions that are understood (supported), and the server will then send a response with a list of extensions that it requires. If these extensions are not listed in the client's message, the response from the server will be an error response. Likewise, if the server does not support any of the client's required extensions, it will send an error response with a list of its unsupported extensions. This kind of extensions are called option tags, but SIP can also be extended with new methods. In that case, user agents or servers use the Allow header to state which methods they support. To require the use of a particular method in a particular dialog, they must use an option tag associated to that method. SIP extensions: Caller preferences and user agent capabilities These two extensions allow users to specify their preferences about the service the IMS provides. SIP extensions: With the caller preferences extension, the calling party is able to indicate the kind of user agent they want to reach (e.g. whether it is fixed or mobile, a voicemail or a human, personal or for business, which services it is capable to provide, or which methods it supports) and how to search for it, with three header fields: Accept-Contact to describe the desired destination user agents, Reject-Contact to state the user agents to avoid, and Request-Disposition to specify how the request should be handled by servers in the network (i.e. whether or not to redirect and how to search for the user: sequentially or in parallel). SIP extensions: By using the user agent capabilities extension, user agents (terminals) can describe themselves when they register so that others can search for them according to their caller preferences extension headers. For this purpose, they list their capabilities in the Contact header field of the REGISTER message. Event notification The aim of event notification is to obtain the status of a given resource (e.g. a user, one's voicemail service) and to receive updates of that status when it changes. SIP extensions: Event notification is necessary in the IMS framework to inform about the presence of a user (i.e. "online" or "offline") to others that may be waiting to contact them, or to notify a user and its P-CSCF of its own registration state, so that they know if they are reachable and what public identities they have registered. 
Moreover, event notification can be used to provide additional services such as voicemail (i.e. to notify that they have new voice messages in their inbox). SIP extensions: To this end, the specific event notification extension defines a framework for event notification in SIP, with two new methods: SUBSCRIBE and NOTIFY, new header fields and response codes and two roles: the subscriber and the notifier. The entity interested in the state information of a resource (the subscriber) sends a SUBSCRIBE message with the Uniform Resource Identifier (URI) of the resource in the request initial line, and the type of event in the Event header. Then the entity in charge of keeping track of the state of the resource (the notifier), receives the SUBSCRIBE request and sends back a NOTIFY message with a subscription-state header as well as the information about the status of the resource in the message body. Whenever the resource state changes, the notifier sends a new NOTIFY message to the subscriber. Each kind of event a subscriber can subscribe to is defined in a new event package. An event package describes a new value for the SUBSCRIBE Event header, as well as a MIME type to carry the event state information in the NOTIFY message. SIP extensions: There is also an allow-events header to indicate event notification capabilities, and the 202 accepted and 489 bad event response codes to indicate if a subscription request has been preliminary accepted or has been turned down because the notifier does not understand the kind of event requested. SIP extensions: In order to make an efficient use of the signaling messages, it is also possible to establish a limited notification rate (not real-time notifications) through a mechanism called event throttling. Moreover, there is also a mechanism for conditional event notification that allows the notifier to decide whether or not to send the complete NOTIFY message depending on if there is something new to notify since last subscription or there is not. SIP extensions: State publication The event notification framework defines how a user agent can subscribe to events about the state of a resource, but it does not specify how that state can be published. The SIP extension for event state publication was defined to allow user agents to publish the state of an event to the entity (notifier) thais responsible for composing the event state and distributing it to the subscribers. SIP extensions: The state publication framework defines a new method: PUBLISH, which is used to request the publication of the state of the resource specified in the request-URI, with reference to the event stated in the Event header, and with the information carried in the message body. Instant messaging The functionality of sending instant messages to provide a service similar to text messaging is defined in the instant messaging extension. These messages are unrelated to each other (i.e. they do not originate a SIP dialog) and sent through the SIP signaling network, sharing resources with control messages. This functionality is supported by the new MESSAGE method, that can be used to send an instant message to the resource stated in the request-URI, with the content carried in the message body. This content is defined as a MIME type, being text/plain the most common one. In order to have an instant messaging session with related messages, the Message Session Relay Protocol (MSRP) should be used. 
SIP extensions: Call transfer The REFER method extension defines a mechanism to request a user agent to contact a resource which is identified by a URI in the Refer-To header field of the request message. A typical use of this mechanism is call transfer: during a call, the participant who sends the REFER message tells the recipient to contact to the user agent identified by the URI in the corresponding header field. The REFER message also implies an event subscription to the result of the operation, so that the sender will know whether or not the recipient could contact the third person. SIP extensions: However, this mechanism is not restricted to call transfer, since the Refer-To header field can be any kind of URI, for instance, an HTTP URI, to require the recipient to visit a web page. SIP extensions: Reliability of provisional responses In the basic SIP specification, only requests and final responses (i.e. 2XX response codes) are transmitted reliably, this is, they are retransmitted by the sender until the acknowledge message arrives (i.e. the corresponding response code to a request, or the ACK request corresponding to a 2XX response code). This mechanism is necessary since SIP can run not only over reliable transport protocols (TCP) that assure that the message is delivered, but also over unreliable ones (UDP) that offer no delivery guarantees, and it is even possible that both kinds of protocols are present in different parts of the transport network. SIP extensions: However, in such an scenario as the IMS framework, it is necessary to extend this reliability to provisional responses to INVITE requests (for session establishment, this is, to start a call). The reliability of provisional responses extension provides a mechanism to confirm that provisional responses such as the 180 Ringing response code, that lets the caller know that the callee is being alerted, are successfully received. To do so, this extension defines a new method: PRACK, which is the request message used to tell the sender of a provisional response that his or her message has been received. This message includes a RACK header field which is a sequence number that matches the RSeq header field of the provisional response that is being acknowledged, and also contains the CSeq number that identifies the corresponding INVITE request. To indicate that the user agent requests or supports reliable provisional responses, the 100rel option tag will be used. SIP extensions: Session description updating The aim of the UPDATE method extension is to allow user agents to provide updated session description information within a dialog, before the final response to the initial INVITE request is generated. This can be used to negotiate and allocate the call resources before the called party is alerted. SIP extensions: Preconditions In the IMS framework, it is required that once the callee is alerted, the chances of a session failure are minimum. An important source of failure is the inability to reserve network resources to support the session, so these resources should be allocated before the phone rings. However, in the IMS, to reserve resources the network needs to know the callee's IP address, port and session parameters and therefore it is necessary that the initial offer/answer exchange to establish a session has started (INVITE request). In basic SIP, this exchange eventually causes the callee to be alerted. To solve this problem, the concept of preconditions was introduced. 
In this concept the caller states a set of constraints about the session (i.e. codecs and QoS requirements) in the offer, and the callee responds to the offer without establishing the session or alerting the user. This establishment will occur if and only if both the caller and the callee agree that the preconditions are met. SIP extensions: The preconditions SIP extension affects both SIP, with a new option tag (precondition) and defined offer/answer exchanges, and Session Description Protocol (SDP), which is a format used to describe streaming media initialization parameters, carried in the body of SIP messages. The new SDP attributes are meant to describe the current status of the resource reservation, the desired status of the reservation to proceed with session establishment, and the confirmation status, to indicate when the reservation status should be confirmed. SIP extensions: The SDP offer/answer model using PRACK and UPDATE requests In the IMS, the initial session parameter negotiation can be done by using the provisional responses and session description updating extensions, along with SDP in the body of the messages. SIP extensions: The first offer, described by means of SDP, can be carried by the INVITE request and will deal with the caller's supported codecs. This request will be answered by the provisional reliable response code 183 Session Progress, that will carry the SDP list of supported codecs by both the caller and the callee. The corresponding PRACK to this provisional answer will be used to select a codec and initiate the QoS negotiation. SIP extensions: The QoS negotiation is supported by the PRACK request, that starts resource reservation in the calling party network, and it is answered by a 2XX response code. Once this response has been sent, the called party has selected the codec too, and starts resource reservation on its side. Subsequent UPDATE requests are sent to inform about the reservation progress, and they are answered by 2XX response codes. In a typical offer/answer exchange, one UPDATE will be sent by the calling party when its reservation is completed, then the called party will respond and eventually finish allocating the resources. It is then, when all the resources for the call are in place, when the caller is alerted. SIP extensions: Identification and charging In the IMS framework it is fundamental to handle user identities for authentication, authorization and accounting purposes. The IMS is meant to provide multimedia services over IP networks, but also needs a mechanism to charge users for it. All this functionality is supported by new special header fields. P-headers The Private Header Extensions to SIP, also known as P-Headers, are special header fields whose applicability is limited to private networks with a certain topology and characteristics of lower layers' protocols. They were designed specifically to meet the 3GPP requirements because a more general solution was not available. SIP extensions: These header fields are used for a variety of purposes including charging and information about the networks a call traverses: P-Charging-Vector: A collection of charging information, such as the IMS Charging Identity (ICID) value, the address of the SIP proxy that creates the ICID value, and the Inter Operator Identifier (IOI). It may be filled during the establishment of a session or as a standalone transaction outside a dialog. 
SIP extensions: P-Charging-Function-Address: The addresses of the charging functions (functional entities that receive the charging records or events) in the user's home network. It also may be filled during the establishment of a dialog or as a standalone transaction, and informs each proxy involved in a transaction. P-Visited-Network-ID: Identification string of the visited network. It is used during registrations, to indicate to the user's home network which network is providing services to a roaming user, so that the home network is able to accept the registration according to their roaming agreements. SIP extensions: P-Access-Network-Info: Information about the access technology (the network providing the connectivity), such as the radio access technology and cell identity. It is used to inform service proxies and the home network, so that they can optimize services or simply so that they can locate the user in a wireless network P-Called-Party-ID: The URI originally indicated in the request-URI of a request generated by the calling user agent. When the request reaches the registrar (S-CSCF) of the called user, the registrar re-writes the request-URI on the first line of the request with the registered contact address (i.e. IP address) of the called user, and stores the replaced request-URI in this header field. In the IMS, a user may be identified by several SIP URIs (address-of-record), for instance, a SIP URI for work and another SIP URI for personal use, and when the registrar replaces the request-URI with the effective contact address, the original request-URI must be stored so that the called party knows to which address-of-record was the invitation sent. SIP extensions: P-Associated-URI: Additional URIs that are associated with a user that is registering. It is included in the 200 OK response to a REGISTER request to inform a user which other URIs the service provider has associated with an address-of-record (AOR) URI.More private headers have been defined for user database accessing: P-User-Database: The address of the user database, this is, the Home Subscriber Server (HSS), that contains the profile of the user that generated a particular request. Although the HSS is a unique master database, it can be distributed into different nodes for reliability and scalability reasons. In this case, a Subscriber location function (SLF) is needed to find the HSS that handles a particular user. When a user request reaches the I-CSCF at the edge of the administrative domain, this entity queries the SLF for the corresponding HSS and then, to prevent the S-CSCF from having to query the SLF again, sends the HSS address to the S-CSCF in the P-User-Database header. Then the S-CSCF will be able to directly query the HSS to get information about the user (e.g. authentication information during a registration). SIP extensions: P-Profile-Key: The key to be used to query the user database (HSS) for a profile corresponding to the destination SIP URI of a particular SIP request. It is transmitted among proxies to perform faster database queries: the first proxy finds the key and the others query the database by directly using the key. This is useful when Wildcarded Service Identities are used, this is, Public Service Identities that match a regular expression, because the first query has to resolve the regular expression to find the key. 
SIP extensions: Asserted identity The private extensions for asserted identity within trusted networks are designed to enable a network of trusted SIP servers to assert the identity of authenticated users, only within an administrative domain with previously agreed policies for generation, transport and usage of this identification information. These extensions also allow users to request privacy so that their identities are not spread outside the trust domain. To indicate so, they must insert the privacy token id into the Privacy header field.The main functionality is supported by the P-Asserted-Identity extension header. When a proxy server receives a request from an untrusted entity and authenticates the user (i.e. verifies that the user is who he or she says that he or she is), it then inserts this header with the identity that has been authenticated, and then forwards the request as usual. This way, other proxy servers that receive this SIP request within the Trust Domain (i.e. the network of trusted entities with previously agreed security policies) can safely rely on the identity information carried in the P-Asserted-Identity header without the necessity of re-authenticating the user. SIP extensions: The P-Preferred-Identity extension header is also defined, so that a user with several public identities is able to tell the proxy which public identity should be included in the P-Asserted-Identity header when the user is authenticated. Finally, when privacy is requested, proxies must withhold asserted identity information outside the trusted domain by removing P-Asserted-Identity headers before forwarding user requests to untrusted identities (outside the Trust Domain). SIP extensions: There exist analogous extension headers for handling the identification of services of users, instead of the users themselves. In this case, Uniform Resource Names are used to identify a service (e.g. a voice call, an instant messaging session, an IPTV streaming) Security mechanisms Access security in the IMS consists of first authenticating and authorizing the user, which is done by the S-CSCF, and then establishing secure connections between the P-CSCF and the user. There are several mechanisms to achieve this, such as: HTTP digest access authentication, which is part of the basic SIP specification and leads to a Transport Layer Security connection between the user and the proxy. SIP extensions: HTTP digest access authentication using AKA, a more secure version of the previous mechanism for cellular networks that uses the information from the user's smart card and commonly creates two IPsec security associations between the P-CSCF and the terminal.The security mechanisms agreement extension for SIP was then introduced to provide a secure mechanism for negotiating the security algorithms and parameters to be used by the P-CSCF and the terminal. This extension uses three new header fields to support the negotiation process: First, the terminal adds a security–client header field containing the mechanisms, authentication and encryption algorithms it supports to the REGISTER request. SIP extensions: Then, the P-CSCF adds a security-server header field to the response that contains the same information as the client's but with reference to the P-CSCF. In case there are more than one mechanism, they are associated with a priority value. 
SIP extensions: Finally, the user agent sends a new REGISTER request over the just created secure connection with the negotiated parameters, including a security-verify header field that carries the same contents as the previously received security-server header field. This procedure protects the negotiation mechanism from Man-in-the-middle attacks: if an attacker removed the strongest security mechanisms from the Security-Server header field in order to force the terminal to choose weaker security algorithms, then the Security-Verify and Security-Server header fields would not match. The contents of the Security-Verify header field cannot be altered as they are sent through the new established secure association, as long as this association is no breakable by the attacker in real time (i.e. before the P-CSCF discovers the Man-in-the-middle attack in progress. SIP extensions: Media authorization The necessity in the IMS of reserving resources to provide quality of service (QoS) leads to another security issue: admission control and protection against denial-of-service attacks. To obtain transmission resources, the user agent must present an authorization token to the network (i.e. the policy enforcement point, or PEP) . This token will be obtained from its P-CSCF, which may be in charge of QoS policy control or have an interface with the policy control entity in the network (i.e. the policy decision function, or PDF) which originally provides the authorization token. SIP extensions: The private extensions for media authorization link session signaling to the QoS mechanisms applied to media in the network, by defining the mechanisms for obtaining authorization tokens and the P-Media-Authorization header field to carry these tokens from the P-CSCF to the user agent. This extension is only applicable within administrative domains with trust relationships. It was particularly designed for specialized SIP networks like the IMS, and not for the general Internet. SIP extensions: Source-routing mechanisms Source routing is the mechanism that allows the sender of a message to specify partially or completely the route the message traverses. In SIP, the route header field, filled by the sender, supports this functionality by listing a set of proxies the message will visit. In the IMS context, there are certain network entities (i.e. certain CSCFs) that must be traversed by requests from or to a user, so they are to be listed in the Route header field. To allow the sender to discover such entities and populate the route header field, there are mainly two extension header fields: path and service-route. SIP extensions: Path The extension header field for registering non-adjacent contacts provides a Path header field which accumulates and transmits the SIP URIs of the proxies that are situated between a user agent and its registrar as the REGISTER message traverses then. This way, the registrar is able to discover and record the sequence of proxies that must be transited to get back to the user agent. SIP extensions: In the IMS every user agent is served by its P-CSCF, which is discovered by using the Dynamic Host Configuration Protocol or an equivalent mechanism when the user enters the IMS network, and all requests and responses from or to the user agent must traverse this proxy. 
When the user registers to the home registrar (S-CSCF), the P-CSCF adds its own SIP URI in a Path header field in the REGISTER message, so that the S-CSCF receives and stores this information associated with the contact information of the user. This way, the S-CSCF will forward every request addressed to that user through the corresponding P-CSCF by listing its URI in the route header field. SIP extensions: Service route The extension for service route discovery during registration consists of a Service-Route header field that is used by the registrar in a 2XX response to a REGISTER request to inform the registering user of the entity that must forward every request originated by him or her. SIP extensions: In the IMS, the registrar is the home network's S-CSCF and it is also required that all requests are handled by this entity, so it will include its own SIP URI in the service-route header field. The user will then include this SIP URI in the Route header field of all his or her requests, so that they are forwarded through the home S-CSCF. SIP extensions: Globally routable user agent URIs In the IMS it is possible for a user to have multiple terminals (e.g. a mobile phone, a computer) or application instances (e.g. video telephony, instant messaging, voice mail) that are identified with the same public identity (i.e. SIP URI). Therefore, a mechanism is needed in order to route requests to the desired device or application. That is what a Globally Routable User Agent URI (GRU) is: a URI that identifies a specific user agent instance (i.e. terminal or application instance) and it does it globally (i.e. it is valid to route messages to that user agent from any other user agent on the Internet). SIP extensions: These URIs are constructed by adding the gr parameter to a SIP URI, either to the public SIP URI with a value that identifies the user agent instance, or to a specially created URI that does not reveal the relationship between the GRUU and the user's identity, for privacy purposes. They are commonly obtained during the registration process: the registering user agent sends a Uniform Resource Name (URN) that uniquely identifies that SIP instance, and the registrar (i.e. S-CSCF) builds the GRUU, associates it to the registered identity and SIP instance and sends it back to the user agent in the response. When the S-CSCF receives a request for that GRUU, it will be able to route the request to the registered SIP instance. SIP extensions: Signaling compression The efficient use of network resources, which may include a radio interface or other low-bandwidth access, is essential in the IMS in order to provide the user with an acceptable experience in terms of latency. To achieve this goal, SIP messages can be compressed using the mechanism known as SigComp (signaling compression). SIP extensions: Compression algorithms perform this operation by substituting repeated words in the message by its position in a dictionary where all these words only appear once. In a first approach, this dictionary may be built for each message by the compressor and sent to the decompressor along with the message itself. However, as many words are repeated in different messages, the extended operations for SigComp define a way to use a shared dictionary among subsequent messages. 
Moreover, in order to speed up the process of building a dictionary along subsequent messages and provide high compression ratios since the first INVITE message, SIP provides a static SIP/SDP dictionary which is already built with common SIP and SDP terms. SIP extensions: There is a mechanism to indicate that a SIP message is desired to be compressed. This mechanism defines the comp=sigcomp parameter for SIP URIs, which signals that the SIP entity identified by the URI supports SigComp and is willing to receive compressed messages. When used in request-URIs, it indicates that the request is to be compressed, while in Via header fields it signals that the subsequent response is to be compressed. SIP extensions: Content Indirection In order to obtain even shorter SIP messages and make a very efficient use of the resources, the content indirection extension makes it possible to replace a MIME body part of the message with an external reference, typically an HTTP URI. This way the recipient of the message can decide whether or not to follow the reference to fetch the resource, depending on the bandwidth available. SIP extensions: NAT traversal Network address translation (NAT) makes it impossible for a terminal to be reached from outside its private network, since it uses a private address that is mapped to a public one when packets originated by the terminal cross the NAT. Therefore, NAT traversal mechanisms are needed for both the signaling plane and the media plane. SIP extensions: Internet Engineering Task Force's RFC 6314 summarizes and unifies different methods to achieve this, such as symmetric response routing and client-initiated connections for SIP signaling, and the use of STUN, TURN and ICE, which combines both previous ones, for media streams Internet Protocol version 6 compatibility Internet Engineering Task Force's RFC 6157 describes the necessary mechanisms to guarantee that SIP works successfully between both Internet Protocol versions during the transition to IPv6. While SIP signaling messages can be transmitted through heterogeneous IPv4/IPv6 networks as long as proxy servers and DNS entries are properly configured to relay messages across both networks according to these recommendations, user agents will need to implement extensions so that they can directly exchange media streams. These extensions are related to the Session Description Protocol offer/answer initial exchange, that will be used to gather the IPv4 and IPv6 addresses of both ends so that they can establish a direct communication. SIP extensions: Interworking with other technologies Apart from all the explained extensions to SIP that make it possible for the IMS to work successfully, it is also necessary that the IMS framework interworks and exchanges services with existing network infrastructures, mainly the Public switched telephone network (PSTN). There are several standards that address this requirements, such as the following two for services interworking between the PSTN and the Internet (i.e. the IMS network): PSTN Interworking Service Protocol (PINT), that extends SIP and SDP for accessing classic telephone call services in the PSTN (e.g. basic telephone calls, fax service, receiving content over the telephone). 
Services in PSTN requesting Internet Services (SPIRITS), that provides the opposite functionality to PINT, this is, supporting the access to Internet services from the PSTN.And also for PSTN-SIP gateways to support calls with one end in each network: Session Initiation Protocol for Telephones (SIP-T), that describes the practices and uses of these gateways. SIP extensions: ISDN User Part (ISUP) to Session Initiation Protocol (SIP) Mapping, which makes it possible to translate SIP signaling messages into ISUP messages of the Signaling System No. 7 (SS7) which is used in the PSTN, and vice versa.Moreover, the SIP INFO method extension is designed to carry user information between terminals without affecting the signaling dialog and can be used to transport the dual-tone multi-frequency signaling to provide telephone keypad function for users. Books: Poikselkä, Miikka; Mayer, Georg; Khartabil, Hisham; Niemi, Aki (March 10, 2006). The IMS: IP multimedia concepts and services (2 ed.). John Wiley & Sons. ISBN 978-0-470-01906-1. Retrieved 15 November 2014. Camarillo, Gonzalo; García-Martín, Miguel A. (November 4, 2008). The 3G IP Multimedia Subsystem (IMS): Merging the Internet and the Cellular Worlds (3 ed.). John Wiley & Sons. ISBN 978-0-470-51662-1. Retrieved 15 November 2014.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OPIE Authentication System** OPIE Authentication System: OPIE is an initialism of "One-time Passwords In Everything". OPIE is a mature login and password package for Unix-like systems, installed on both the server and the client, which makes untrusted networks safer against password-sniffing packet-analysis software like dSniff and against shoulder surfing. It defeats delayed (replay) attacks because the same password is never used twice once OPIE is installed. OPIE implements a one-time password (OTP) scheme based on S/KEY, which requires a secret passphrase (not echoed) to generate a password for the current session, or a list of passwords you can print and carry on your person. OPIE uses an MD4 or MD5 hash function to generate passwords. OPIE can restrict its logins based on IP address. It uses its own passwd and login modules. If the Enter key is pressed at the password prompt, it will turn echo on, so what is being typed can be seen when entering an unfamiliar password from a printout. OPIE can improve security when accessing online banking at conferences, hotels and airports. Some countries require banks to implement OTP. OPIE shipped with DragonFly BSD, FreeBSD and OpenSUSE. It can be installed on a Unix-like server and its clients for improved security. The commands are opiepasswd and opiekey.
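The S/KEY scheme that OPIE builds on works by repeatedly hashing a seeded secret: the server stores the last value of the chain, and each login reveals the value one step earlier. The following Python sketch shows only that hash-chain idea, using MD5 from hashlib; it is a toy illustration, not OPIE's actual format (real OPIE additionally folds each digest to 64 bits, encodes it as six short words, and has its own seed handling). The function name, passphrase, and seed below are invented for the example.

```python
import hashlib

def otp_chain(passphrase: str, seed: str, count: int) -> list[bytes]:
    """Toy illustration of an S/KEY-style hash chain (not OPIE's exact output).

    Hash the seeded passphrase `count` times; passwords are then consumed in
    reverse order, so a captured password never works a second time.
    """
    value = hashlib.md5((seed + passphrase).encode()).digest()
    chain = [value]
    for _ in range(count - 1):
        value = hashlib.md5(value).digest()
        chain.append(value)
    return chain

# The server stores only the final value of the chain; the client proves
# knowledge of the passphrase by presenting the previous value, which the
# server can verify by hashing it once and comparing with what it stored.
chain = otp_chain("correct horse battery", seed="ke1234", count=100)
server_stored, next_password = chain[-1], chain[-2]
assert hashlib.md5(next_password).digest() == server_stored
```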
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Knockdown texture** Knockdown texture: Knockdown texture is a drywall finishing style. It is a mottled texture, more intense than a simple flat finish, but less intense than orange peel or popcorn texture. Knockdown texture is created by watering down joint compound to a soupy consistency. A trowel is then used to apply the joint compound. The joint compound will begin to form stalactites as it dries. The trowel is then run over the surface of the drywall, knocking off the stalactites and leaving the mottled finish.

Knockdown texture: A much more common and faster technique is to apply the texture mud (which is slightly different from joint compound, in that it has less shrinkage upon drying) with a texture machine – a compressor and a texture spray hopper which sprays mud instead of paint. This applies what is referred to as a splatter coat. The use of a compressor allows this to be applied to walls as well as ceilings. When knocking this down, the mud is allowed to dry for a short period, then skimmed with a knockdown knife – a large, usually plastic (to reduce noticeable edges) knife.

Knockdown texture: Knockdown texture reduces construction costs because it conceals imperfections in the drywall that would otherwise require additional, more expensive sanding and priming by drywall installers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**VPS29** VPS29: VPS29 is a human gene coding for the vacuolar protein sorting protein Vps29, a component of the retromer complex. Yeast homolog: The homologous protein (one that performs the same function) in yeast is Vacuolar protein sorting 29 homolog (S. cerevisiae). Function: VPS29 belongs to a group of genes coding for vacuolar protein sorting (VPS) proteins that, when functionally impaired, disrupt the efficient delivery of vacuolar hydrolases. The protein encoded by this gene, Vps29, is a component of a large multimeric complex, termed the retromer complex, which is involved in retrograde transport of proteins from endosomes to the trans-Golgi network. Vps29 may be involved in the formation of the inner shell of the retromer coat for retrograde vesicles leaving the prevacuolar compartment. Alternative splice variants encoding different isoforms, and usage of multiple polyadenylation sites have been found for this gene.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Trelagliptin** Trelagliptin: Trelagliptin (trade name Zafatek) is a pharmaceutical drug used for the treatment of type 2 diabetes (diabetes mellitus). Indications: It is a highly selective dipeptidyl peptidase-4 inhibitor that is typically used as an add on treatment when the first line treatment of metformin is not achieving the expected glycemic goals; though it has been approved for use as a first line treatment when metformin cannot be used. Biochemistry: DPP-4 inhibitors activate T-cells and are more commonly known as T-cell activation antigens (specifically CD26). Chemically, it is a fluorinated derivative of alogliptin. Development: Formulated as the salt trelagliptin succinate, it was approved for use in Japan in March 2015. Takeda, the company that developed trelagliptin, chose to not get approval for the drug in the US and EU. The licensing rights that Takeda purchased from Furiex Pharmaceuticals for DPP-4 inhibitors included a clause specific to development of this drug in the US and EU. The clause required that all services done for phase II and phase III clinical studies in the US and EU be purchased through Furiex. Takeda chose to cease development of this drug in the US and EU because of the high costs quoted by Furiex for these services. Gliptins have been on the market since 2006 and there are 8 gliptins currently registered as drugs (worldwide). Gliptins are an emerging market and are thus being developed at an increasing rate; there are currently two gliptins in advanced stages of development that are expected to be on the market in the coming year.Gliptins are thought to have cardiovascular protective abilities though the extent of these effects is still being studied. They are also being studied for the ability that this class of drugs has at promoting B-cell survival. Administration and dosing: Similar drugs in the same class as trelagliptin are administered once daily while trelagliptin is administered once weekly. Alogliptin (Nesina) is the other major DPP-4 inhibitor on the market. It is also owned by Takeda and is administered once daily. A dosing of once per week is advantageous as a reduction in the frequency of required dosing is known to increase patient compliance. Administration and dosing: A recent meta-analysis published by Dutta et. al. highlighted the good glycaemic efficacy and safety of this molecule as compared to peer DPP4 inhibitors which have to be taken daily like alogliptin, sitagliptin, linagliptin, teneligliptin, anagliptin or vildagliptin, having an advantage of reducing the monthly pill count from 30 to 4. Brand names: In Bangladesh it is marketed under the trade name Wedica.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Functional airspace block** Functional airspace block: Functional airspace block (FAB) is defined in the SES-2 legislative package, as follows: A FAB means an airspace block based on operational requirements and established regardless of State boundaries, where the provision of air navigation services and related functions are performance-driven and optimized with a view to introducing, in each functional airspace block, enhanced cooperation among air navigation service providers or, where appropriate, an integrated provider. In the context of the Single European Sky (SES) regulations of the European Union and in particular in accordance with Article 8 of the Framework Regulation, the European Commission has issued a mandate to the Eurocontrol Agency for support in the establishment of Functional Airspace Blocks (FABs).

Functional airspace block: The SES-II regulation requires all EU members to be part of a FAB by 2012. All nine FABs have been declared, established and notified to the European Commission:
UK-IRELAND FAB: United Kingdom, Ireland
Danish-Swedish FAB: Denmark, Sweden
BALTIC FAB: Poland, Lithuania
BLUE MED: Italy, Malta, Greece, Cyprus (Egypt, Tunisia, Albania, Jordan as observers)
FABCE (FAB Central Europe): Czech Republic, Slovak Republic, Austria, Hungary, Croatia, Slovenia, Bosnia and Herzegovina
FABEC (FAB Europe Central): France, Germany, Belgium, Netherlands, Luxembourg, and Switzerland
DANUBE: Bulgaria, Romania
NEFAB (North European FAB): Estonia, Finland, Latvia, Norway
SW FAB (South-West FAB): Portugal (Lisbon FIR), Spain
In 2017 the European Court of Auditors determined that the functional airspace blocks have failed to defragment European airspace as they have not been fully implemented, with aircraft still being serviced by a different air navigation provider in each member state with different rules and requirements. This was due to a "lack of commitment on the part of the member states".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pocket mutation chess** Pocket mutation chess: Pocket mutation chess is a chess variant invented by Mike Nelson in 2003. In this game a player can take a piece from the board and put it into a pocket. The piece in the pocket can be put back on the board later. When placing the piece into the pocket the player can mutate the piece, i.e. change it to a different piece. Pocket mutation chess: The game is one of the Recognized Chess Variants at The Chess Variant Pages. Rules: The starting position in this game is the same as in standard chess. Players make moves as in standard chess. Instead of moving, a player can take one of their own pieces from the board and put it into the pocket, provided that the pocket is empty. If the piece is placed into the pocket from the last rank, it gets promoted to a piece of a higher class. Otherwise the player has the option to mutate the piece into a different piece of the same class. The choice of mutating (or not) must be made at the time the piece is removed. White cannot use the pocket on the first move. The king cannot be placed into the pocket. Rules: As a player's move, a piece in the pocket can be dropped on any empty square on the board, except the last rank. A pawn can make only a single step from the first rank, but can make a double step from the second one, even if dropped there or moved from the first rank. The en passant rule applies as in standard chess. Pawns that reach the last rank do not get promoted immediately. Instead, they can be placed into the pocket and promoted to a piece of a higher class. Rules: There is no castling in this chess variant. The game is declared a draw if no capture or promotion has been made for 50 consecutive moves. Classes of the pieces: Besides the usual pieces there are several fairy chess pieces in this game. All pieces are divided into the following classes. All pieces in the same class are of presumably the same (or a similar) value.
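The pocketing rules above amount to a small state machine: a single pocket slot, a restriction on the king, mandatory promotion when pocketing from the last rank, and optional same-class mutation otherwise. The Python sketch below is illustrative only; the piece names and the class groupings in PIECE_CLASSES are hypothetical placeholders (the variant's actual class table also contains fairy pieces not shown here), and it models just the pocketing step, not a full game.

```python
# Illustrative sketch of the pocketing rules only; PIECE_CLASSES is a hypothetical
# grouping, not the variant's official class table (which also includes fairy pieces).
PIECE_CLASSES = [
    {"pawn"},              # class 0
    {"knight", "bishop"},  # class 1
    {"rook"},              # class 2
    {"queen"},             # class 3
]

def class_of(piece: str) -> int:
    for index, members in enumerate(PIECE_CLASSES):
        if piece in members:
            return index
    raise ValueError(f"unknown piece: {piece}")

def pocket_piece(pocket, piece, from_last_rank, new_piece=None):
    """Return the piece that ends up in the pocket, enforcing the pocketing rules:
    the pocket holds one piece, the king may never be pocketed, a piece pocketed
    from the last rank must be promoted to a higher class, and otherwise the piece
    may optionally mutate to another piece of the same class."""
    if pocket is not None:
        raise ValueError("the pocket is already occupied")
    if piece == "king":
        raise ValueError("the king cannot be placed into the pocket")
    if from_last_rank:
        if new_piece is None or class_of(new_piece) <= class_of(piece):
            raise ValueError("pocketing from the last rank promotes to a higher class")
        return new_piece
    if new_piece is not None and class_of(new_piece) != class_of(piece):
        raise ValueError("mutation must stay within the same class")
    return new_piece if new_piece is not None else piece

# Hypothetical usage: a bishop pocketed from mid-board may mutate to a knight,
# while a pawn pocketed from the last rank must become something of a higher class.
assert pocket_piece(None, "bishop", from_last_rank=False, new_piece="knight") == "knight"
assert pocket_piece(None, "pawn", from_last_rank=True, new_piece="rook") == "rook"
```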
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Metaphysics** Metaphysics: Metaphysics is the branch of philosophy that studies the fundamental nature of reality. This includes the first principles of: being or existence, identity, change, space and time, cause and effect, necessity, actuality, and possibility.Metaphysics is considered one of the four main branches of philosophy, along with epistemology, logic, and ethics. Metaphysics: It includes questions about the nature of consciousness and the relationship between mind and matter, between substance and attribute, and between potentiality and actuality.Metaphysics studies questions related to what it is for something to exist and what types of existence there are. Metaphysics seeks to answer, in an abstract and fully general manner, the questions of: What is it that exists; and What it is like. Etymology: The word "metaphysics" derives from the Greek words μετά (metá, "after") and φυσικά (physiká, "physics"). It has been suggested that the term might have been coined by a first century CE editor who assembled various small selections of Aristotle's works into the treatise we now know by the name Metaphysics (μετὰ τὰ φυσικά, meta ta physika, lit. 'after the Physics ' – another of Aristotle's works). The prefix meta- ("after") indicates that these works come "after" the chapters on physics. Aristotle himself did not call the subject of his books "metaphysics"; he referred to it as "first philosophy" (Greek: πρώτη φιλοσοφία; Latin: philosophia prima). The editor of Aristotle's works, Andronicus of Rhodes, is thought to have placed the books on first philosophy right after another work, Physics, and called them τὰ μετὰ τὰ φυσικὰ βιβλία (tà metà tà physikà biblía) or "the books [that come] after the [books on] physics".However, once the name was given, the commentators sought to find other reasons for its appropriateness. For instance, Thomas Aquinas understood it to refer to the chronological or pedagogical order among our philosophical studies, so that the "metaphysical sciences" would mean "those that we study after having mastered the sciences that deal with the physical world".The term was misread by other medieval commentators, who thought it meant "the science of what is beyond the physical". Following this tradition, the prefix meta- has more recently been prefixed to the names of sciences to designate higher sciences dealing with ulterior and more fundamental problems: hence metamathematics, metalinguistics, metaphysiology, etc.A person who creates or develops metaphysical theories is called a metaphysician.Common parlance also uses the word metaphysics for a different referent from that of those already mentioned, namely for beliefs in arbitrary non-physical or magical entities. For example, "metaphysical healing" to refer to healing by means of remedies that are magical rather than scientific. This usage stemmed from the various historical schools of speculative metaphysics which operated by postulating all manner of physical, mental and spiritual entities as bases for particular metaphysical systems. Metaphysics as a subject does not preclude beliefs in such magical entities but neither does it promote them. Rather, it is the subject which provides the vocabulary and logic with which such beliefs might be analyzed and studied, for example to search for inconsistencies both within themselves and with other accepted systems such as science. Epistemological foundation: Metaphysical study is conducted using deduction from that which is known a priori. 
Like foundational mathematics (which is sometimes considered a special case of metaphysics applied to the existence of number), it tries to give a coherent account of the structure of the world, capable of explaining our everyday and scientific perception of the world, and being free from contradictions. In mathematics, there are many different ways to define numbers; similarly, in metaphysics, there are many different ways to define objects, properties, concepts, and other entities that are claimed to make up the world. While metaphysics may, as a special case, study the entities postulated by fundamental science such as atoms and superstrings, its core topic is the set of categories such as object, property and causality which those scientific theories assume. For example: claiming that "electrons have charge" is espousing a scientific theory; while exploring what it means for electrons to be (or at least, to be perceived as) "objects", charge to be a "property", and for both to exist in a topological entity called "space", is the task of metaphysics. There are two broad stances about what is "the world" studied by metaphysics. According to metaphysical realism, the objects studied by metaphysics exist independently of any observer, so that the subject is the most fundamental of all sciences. Metaphysical anti-realism, on the other hand, assumes that the objects studied by metaphysics exist inside the mind of an observer, so the subject becomes a form of introspection and conceptual analysis. This position is of more recent origin. Some philosophers, notably Kant, discuss both of these "worlds" and what can be inferred about each one. Some, such as the logical positivists, and many scientists, reject metaphysical realism as meaningless and unverifiable. Others reply that this criticism also applies to any type of knowledge, including hard science, which claims to describe anything other than the contents of human perception, and thus that the world of perception is the objective world in some sense. Metaphysics itself usually assumes that some stance has been taken on these questions and that it may proceed independently of the choice—the question of which stance to take belongs instead to another branch of philosophy, epistemology. Central questions: Ontology (being) Ontology is the branch of philosophy that studies concepts such as existence, being, becoming, and reality. It includes the questions of how entities are grouped into basic categories and which of these entities exist on the most fundamental level. Ontology is sometimes referred to as the science of being. It has been characterized as general metaphysics in contrast to special metaphysics, which is concerned with more particular aspects of being. Ontologists often try to determine what the categories or highest kinds are and how they form a system of categories that provides an encompassing classification of all entities. Commonly proposed categories include substances, properties, relations, states of affairs and events. These categories are characterized by fundamental ontological concepts, like particularity and universality, abstractness and concreteness, or possibility and necessity. Of special interest is the concept of ontological dependence, which determines whether the entities of a category exist on the most fundamental level.
Disagreements within ontology are often about whether entities belonging to a certain category exist and, if so, how they are related to other entities. Central questions: Identity and change Identity is a fundamental metaphysical concern. Metaphysicians investigating identity are tasked with the question of what, exactly, it means for something to be identical to itself, or – more controversially – to something else. Issues of identity arise in the context of time: what does it mean for something to be itself across two moments in time? How do we account for this? Another question of identity arises when we ask what our criteria ought to be for determining identity, and how the reality of identity interfaces with linguistic expressions. Central questions: The metaphysical positions one takes on identity have far-reaching implications on issues such as the mind–body problem, personal identity, ethics, and law. A few ancient Greeks took extreme positions on the nature of change. Parmenides denied change altogether, while Heraclitus argued that change was ubiquitous: "No man ever steps in the same river twice." Identity, sometimes called numerical identity, is the relation that a thing bears to itself, and which no thing bears to anything other than itself (cf. sameness). A modern philosopher who made a lasting impact on the philosophy of identity was Leibniz, whose law of the indiscernibility of identicals is still widely accepted today. It states that if some object x is identical to some object y, then any property that x has, y will have as well. Put formally, it states ∀x∀y(x=y→∀P(P(x)↔P(y))) However, it does seem that objects can change over time. Two rival theories to account for the relationship between change and identity are perdurantism, which treats objects as a series of object-stages, and endurantism, which maintains that the organism—the same object—is present at every stage in its history. Central questions: By appealing to intrinsic and extrinsic properties, endurantism finds a way to harmonize identity with change. Endurantists believe that objects persist by being strictly numerically identical over time. However, if Leibniz's law of the indiscernibility of identicals is used to define numerical identity here, it seems that objects must be completely unchanged in order to persist. Discriminating between intrinsic properties and extrinsic properties, endurantists state that numerical identity means that, if some object x is identical to some object y, then any intrinsic property that x has, y will have as well. Thus, if an object persists, intrinsic properties of it are unchanged, but extrinsic properties can change over time. Besides the object itself, environments and other objects can change over time; properties that relate to other objects would change even if this object does not change. Central questions: Perdurantism can harmonize identity with change in another way. In four-dimensionalism, a version of perdurantism, what persists is a four-dimensional object which does not change although three-dimensional slices of the object may differ. Central questions: Space and time Objects appear to us in space and time, while abstract entities such as classes, properties, and relations do not. How do space and time serve this function as a ground for objects? Are space and time entities themselves, of some form? Must they exist prior to objects? How exactly can they be defined? 
How is time related to change; must there always be something changing in order for time to exist? Causality Classical philosophy recognized a number of causes, including teleological final causes. In special relativity and quantum field theory the notions of space, time and causality become tangled together, with temporal orders of causations becoming dependent on who is observing them. The laws of physics are symmetrical in time, so could equally well be used to describe time as running backwards. Why then do we perceive it as flowing in one direction, the arrow of time, and as containing causation flowing in the same direction? For that matter, can an effect precede its cause? This was the title of a 1954 paper by Michael Dummett, which sparked a discussion that continues today. Earlier, in 1947, C. S. Lewis had argued that one can meaningfully pray concerning the outcome of, e.g., a medical test while recognizing that the outcome is determined by past events: "My free act contributes to the cosmic shape." Likewise, some interpretations of quantum mechanics, dating to 1945, involve backward-in-time causal influences.Causality is linked by many philosophers to the concept of counterfactuals. To say that A caused B means that if A had not happened then B would not have happened. This view was advanced by David Lewis in his 1973 paper "Causation". His subsequent papers further develop his theory of causation. Central questions: Causality is usually required as a foundation for philosophy of science if science aims to understand causes and effects and make predictions about them. Central questions: Necessity and possibility Metaphysicians investigate questions about the ways the world could have been. David Lewis, in On the Plurality of Worlds, endorsed a view called concrete modal realism, according to which facts about how things could have been are made true by other concrete worlds in which things are different. Other philosophers, including Gottfried Leibniz, have dealt with the idea of possible worlds as well. A necessary fact is true across all possible worlds. A possible fact is true in some possible world, even if not in the actual world. For example, it is possible that cats could have had two tails, or that any particular apple could have not existed. By contrast, certain propositions seem necessarily true, such as analytic propositions, e.g., "All bachelors are unmarried." The view that any analytic truth is necessary is not universally held among philosophers. A less controversial view is that self-identity is necessary, as it seems fundamentally incoherent to claim that any x is not identical to itself; this is known as the law of identity, a putative "first principle". Similarly, Aristotle describes the principle of non-contradiction: It is impossible that the same quality should both belong and not belong to the same thing ... This is the most certain of all principles ... Wherefore they who demonstrate refer to this as an ultimate opinion. For it is by nature the source of all the other axioms. Peripheral questions: Metaphysical cosmology and cosmogony Metaphysical cosmology is the branch of metaphysics that deals with the world as the totality of all phenomena in space and time. Historically, it formed a major part of the subject alongside ontology, though its role is more peripheral in contemporary philosophy. It has had a broad scope, and in many cases was founded in religion. The ancient Greeks drew no distinction between this use and their model for the cosmos. 
However, in modern times it addresses questions about the Universe which are beyond the scope of the physical sciences. It is distinguished from religious cosmology in that it approaches these questions using philosophical methods (e.g. dialectics). Peripheral questions: Cosmogony deals specifically with the origin of the universe. Modern metaphysical cosmology and cosmogony try to address questions such as: What is the origin of the Universe? What is its first cause? Is its existence necessary? (see monism, pantheism, emanationism and creationism) What are the ultimate material components of the Universe? (see mechanism, dynamism, hylomorphism, atomism) What is the ultimate reason for the existence of the Universe? Does the cosmos have a purpose? (see teleology) Mind and matter Accounting for the existence of mind in a world largely composed of matter is a metaphysical problem which is so large and important as to have become a specialized subject of study in its own right, philosophy of mind. Peripheral questions: Substance dualism is a classical theory in which mind and body are essentially different, with the mind having some of the attributes traditionally assigned to the soul, and which creates an immediate conceptual puzzle about how the two interact. This form of substance dualism differs from the dualism of some eastern philosophical traditions (like Nyāya), which also posit a soul; for the soul, under their view, is ontologically distinct from the mind. Idealism postulates that material objects do not exist unless perceived and only as perceptions. Adherents of panpsychism, a kind of property dualism, hold that everything has a mental aspect, but not that everything exists in a mind. Neutral monism postulates that existence consists of a single substance that in itself is neither mental nor physical, but is capable of mental and physical aspects or attributes – thus it implies a dual-aspect theory. For the last century, the dominant theories have been science-inspired including materialistic monism, type identity theory, token identity theory, functionalism, reductive physicalism, nonreductive physicalism, eliminative materialism, anomalous monism, property dualism, epiphenomenalism and emergentism. Peripheral questions: Determinism and free will Determinism is the philosophical proposition that every event, including human cognition, decision and action, is causally determined by an unbroken chain of prior occurrences. It holds that nothing happens that has not already been determined. The principal consequence of the deterministic claim is that it poses a challenge to the existence of free will. Peripheral questions: The problem of free will is the problem of whether rational agents exercise control over their own actions and decisions. Addressing this problem requires understanding the relation between freedom and causation, and determining whether the laws of nature are causally deterministic. Some philosophers, known as incompatibilists, view determinism and free will as mutually exclusive. If they believe in determinism, they will therefore believe free will to be an illusion, a position known as hard determinism. Proponents range from Baruch Spinoza to Ted Honderich. Henri Bergson defended free will in his dissertation Time and Free Will from 1889. Peripheral questions: Others, labeled compatibilists (or "soft determinists"), believe that the two ideas can be reconciled coherently. 
Adherents of this view include Thomas Hobbes and many modern philosophers such as John Martin Fischer, Gary Watson, Harry Frankfurt, and the like. Incompatibilists who accept free will but reject determinism are called libertarians, a term not to be confused with the political sense. Robert Kane and Alvin Plantinga are modern defenders of this theory. Peripheral questions: Natural and social kinds The earliest type of classification of social construction traces back to Plato in his dialogue Phaedrus where he claims that the biological classification system seems to carve nature at the joints. In contrast, later philosophers such as Michel Foucault and Jorge Luis Borges have challenged the capacity of natural and social classification. In his essay The Analytical Language of John Wilkins, Borges makes us imagine a certain encyclopedia where the animals are divided into (a) those that belong to the emperor; (b) embalmed ones; (c) those that are trained; ... and so forth, in order to bring forward the ambiguity of natural and social kinds. According to metaphysics author Alyssa Ney: "The reason all this is interesting is that there seems to be a metaphysical difference between the Borgesian system and Plato's". The difference is not obvious but one classification attempts to carve entities up according to objective distinction while the other does not. According to Quine this notion is closely related to the notion of similarity. The philosopher of social science Jason Josephson Storm has attempted to provide a more precise definition of social kinds, arguing that social kinds may still be real insofar as they are determined by empricially observable causal processes and that many cases of what appear to be natural kinds — including biological natural kinds and the category of "natural kind" itself — are in fact social kinds; such a view would mitigate the need to prioritize natural kinds above social kinds for much scientific practice. Peripheral questions: Number There are different ways to set up the notion of number in metaphysics theories. Platonist theories postulate number as a fundamental category itself. Others consider it to be a property of an entity called a "group" comprising other entities; or to be a relation held between several groups of entities, such as "the number four is the set of all sets of four things". Many of the debates around universals are applied to the study of number, and are of particular importance due to its status as a foundation for the philosophy of mathematics and for mathematics itself. Peripheral questions: Applied metaphysics Although metaphysics as a philosophical enterprise is highly hypothetical, it also has practical application in most other branches of philosophy, science, and now also information technology. Such areas generally assume some basic ontology (such as a system of objects, properties, classes, and space-time) as well as other metaphysical stances on topics such as causality and agency, then build their own particular theories upon these. Peripheral questions: In science, for example, some theories are based on the ontological assumption of objects with properties (such as electrons having charge) while others may reject objects completely (such as quantum field theories, where spread-out "electronness" becomes property of space-time rather than an object). 
Peripheral questions: "Social" branches of philosophy such as philosophy of morality, aesthetics and philosophy of religion (which in turn give rise to practical subjects such as ethics, politics, law, and art) all require metaphysical foundations, which may be considered as branches or applications of metaphysics. For example, they may postulate the existence of basic entities such as value, beauty, and God. Then they use these postulates to make their own arguments about consequences resulting from them. When philosophers in these subjects make their foundations they are doing applied metaphysics, and may draw upon its core topics and methods to guide them, including ontology and other core and peripheral topics. As in science, the foundations chosen will in turn depend on the underlying ontology used, so philosophers in these subjects may have to dig right down to the ontological layer of metaphysics to find what is possible for their theories. Peripheral questions: Systems engineering is essentially based on metaphysics, although without acknowledging it. This is because systems-engineering is primarily concerned with identifying what would be of interest in a prospective new system. Investigating the nature of the situation aka ontology and surveying the possibilities in measuring, evaluating, specifying, planning, implementing, integrating, testing and using it aka epistemology. Relation to other disciplines: Science Prior to the modern history of science, scientific questions were addressed as a part of natural philosophy. Originally, the term "science" (Latin: scientia) simply meant "knowledge". The scientific method, however, transformed natural philosophy into an empirical activity deriving from experiment, unlike the rest of philosophy. By the end of the 18th century, it had begun to be called "science" to distinguish it from other branches of philosophy. Science and philosophy have been considered separated disciplines ever since. Thereafter, metaphysics denoted philosophical enquiry of a non-empirical character into the nature of existence.Metaphysics continues asking "why" where science leaves off. For example, any theory of fundamental physics is based on some set of axioms, which may postulate the existence of entities such as atoms, particles, forces, charges, mass, or fields. Stating such postulates is considered to be the "end" of a science theory. Metaphysics takes these postulates and explores what they mean as human concepts. For example, do all theories of physics require the existence of space and time, objects, and properties? Or can they be expressed using only objects, or only properties? Do the objects have to retain their identity over time or can they change? If they change, then are they still the same object? Can theories be reformulated by converting properties or predicates (such as "red") into entities (such as redness or redness fields) or processes ('there is some redding happening over there' appears in some human languages in place of the use of properties). Is the distinction between objects and properties fundamental to the physical world or to our perception of it? Much recent work has been devoted to analyzing the role of metaphysics in scientific theorizing. Alexandre Koyré led this movement, declaring in his book Metaphysics and Measurement, "It is not by following experiment, but by outstripping experiment, that the scientific mind makes progress." 
That metaphysical propositions can influence scientific theorizing is John Watkins' most lasting contribution to philosophy. Since 1957 "he showed the ways in which some un-testable and hence, according to Popperian ideas, non-empirical propositions can nevertheless be influential in the development of properly testable and hence scientific theories. These profound results in applied elementary logic...represented an important corrective to positivist teachings about the meaninglessness of metaphysics and of normative claims". Imre Lakatos maintained that all scientific theories have a metaphysical "hard core" essential for the generation of hypotheses and theoretical assumptions. Thus, according to Lakatos, "scientific changes are connected with vast cataclysmic metaphysical revolutions."An example from biology of Lakatos' thesis: David Hull has argued that changes in the ontological status of the species concept have been central in the development of biological thought from Aristotle through Cuvier, Lamarck, and Darwin. Darwin's ignorance of metaphysics made it more difficult for him to respond to his critics because he could not readily grasp the ways in which their underlying metaphysical views differed from his own.In physics, new metaphysical ideas have arisen in connection with quantum mechanics, where subatomic particles arguably do not have the same sort of individuality as the particulars with which philosophy has traditionally been concerned. Also, adherence to a deterministic metaphysics in the face of the challenge posed by the quantum-mechanical uncertainty principle led physicists such as Albert Einstein to propose alternative theories that retained determinism. A.N. Whitehead is famous for creating a process philosophy metaphysics inspired by electromagnetism and special relativity.In chemistry, Gilbert Newton Lewis addressed the nature of motion, arguing that an electron should not be said to move when it has none of the properties of motion.Katherine Hawley notes that the metaphysics even of a widely accepted scientific theory may be challenged if it can be argued that the metaphysical presuppositions of the theory make no contribution to its predictive success. Relation to other disciplines: Theology There is a relationship between theological doctrines and philosophical reflection in the philosophy of a religion (such as Christian philosophy); philosophical reflections are strictly rational. On this way of seeing the two disciplines, if at least one of the premises of an argument is derived from revelation, the argument falls in the domain of theology; otherwise it falls into philosophy's domain. Rejections of metaphysics: Meta-metaphysics is the branch of philosophy that is concerned with the foundations of metaphysics. A number of individuals have suggested that much or all of metaphysics should be rejected, a meta-metaphysical position known as metaphysical deflationism or ontological deflationism.In the 16th century, Francis Bacon rejected scholastic metaphysics, and argued strongly for what is now called empiricism, being seen later as the father of modern empirical science. In the 18th century, David Hume took a strong position, arguing that all genuine knowledge involves either mathematics or matters of fact and that metaphysics, which goes beyond these, is worthless. 
He concluded his Enquiry Concerning Human Understanding (1748) with the statement: If we take in our hand any volume [book]; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion. Rejections of metaphysics: Thirty-three years after Hume's Enquiry appeared, Immanuel Kant published his Critique of Pure Reason. Although he followed Hume in rejecting much of previous metaphysics, he argued that there was still room for some synthetic a priori knowledge, concerned with matters of fact yet obtainable independent of experience. These included fundamental structures of space, time, and causality. He also argued for the freedom of the will and the existence of "things in themselves", the ultimate (but unknowable) objects of experience. Rejections of metaphysics: Wittgenstein introduced the concept that metaphysics could be influenced by theories of aesthetics, via logic, vis. a world composed of "atomical facts".In the 1930s, A.J. Ayer and Rudolf Carnap endorsed Hume's position; Carnap quoted the passage above. They argued that metaphysical statements are neither true nor false but meaningless since, according to their verifiability theory of meaning, a statement is meaningful only if there can be empirical evidence for or against it. Thus, while Ayer rejected the monism of Spinoza, he avoided a commitment to pluralism, the contrary position, by holding both views to be without meaning. Carnap took a similar line with the controversy over the reality of the external world. While the logical positivism movement is now considered dead (with Ayer, a major proponent, admitting in a 1979 TV interview that "nearly all of it was false"), it has continued to influence philosophy development.Arguing against such rejections, the Scholastic philosopher Edward Feser held that Hume's critique of metaphysics, and specifically Hume's fork, is "notoriously self-refuting". Feser argues that Hume's fork itself is not a conceptual truth and is not empirically testable. Rejections of metaphysics: Some living philosophers, such as Amie Thomasson, have argued that many metaphysical questions can be dissolved just by looking at the way words are used; others, such as Ted Sider, have argued that metaphysical questions are substantive, and that progress can be made toward answering them by comparing theories according to a range of theoretical virtues inspired by the sciences, such as simplicity and explanatory power. History and schools of metaphysics: Pre-history Cognitive archeology such as analysis of cave paintings and other pre-historic art and customs suggests that a form of perennial philosophy or Shamanic metaphysics may stretch back to the birth of behavioral modernity, all around the world. Similar beliefs are found in present-day "stone age" cultures such as Australian aboriginals. Perennial philosophy postulates the existence of a spirit or concept world alongside the day-to-day world, and interactions between these worlds during dreaming and ritual, or on special days or at special places. It has been argued that perennial philosophy formed the basis for Platonism, with Plato articulating, rather than creating, much older widespread beliefs. 
History and schools of metaphysics: Bronze Age Bronze Age cultures such as ancient Mesopotamia and ancient Egypt (along with similarly structured but chronologically later cultures such as Mayans and Aztecs) developed belief systems based on mythology, anthropomorphic gods, mind–body dualism, and a spirit world, to explain causes and cosmology. These cultures appear to have been interested in astronomy and may have associated or identified the stars with some of these entities. In ancient Egypt, the ontological distinction between order (maat) and chaos (Isfet) seems to have been important. History and schools of metaphysics: Pre-Socratic Greece The first named Greek philosopher, according to Aristotle, is Thales of Miletus, early 6th century BCE. He made use of purely physical explanations to explain the phenomena of the world rather than the mythological and divine explanations of tradition. He is thought to have posited water as the single underlying principle (or arche in later Aristotelian terminology) of the material world. His fellow, but younger Miletians, Anaximander and Anaximenes, also posited monistic underlying principles, namely apeiron (the indefinite or boundless) and air respectively. History and schools of metaphysics: Another school was the Eleatics, in southern Italy. The group was founded in the early fifth century BCE by Parmenides, and included Zeno of Elea and Melissus of Samos. Methodologically, the Eleatics were broadly rationalist, and took logical standards of clarity and necessity to be the criteria of truth. Parmenides' chief doctrine was that reality is a single unchanging and universal Being. Zeno used reductio ad absurdum, to demonstrate the illusory nature of change and time in his paradoxes. History and schools of metaphysics: Heraclitus of Ephesus, in contrast, made change central, teaching that "all things flow". His philosophy, expressed in brief aphorisms, is quite cryptic. For instance, he also taught the unity of opposites. Democritus and his teacher Leucippus, are known for formulating an atomic theory for the cosmos. They are considered forerunners of the scientific method. History and schools of metaphysics: Classical China Metaphysics in Chinese philosophy can be traced back to the earliest Chinese philosophical concepts from the Zhou dynasty such as Tian (Heaven) and yin and yang. The fourth century BCE saw a turn towards cosmogony with the rise of Taoism (in the Daodejing and Zhuangzi) and sees the natural world as dynamic and constantly changing processes which spontaneously arise from a single immanent metaphysical source or principle (Tao). Another philosophical school which arose around this time was the School of Naturalists which saw the ultimate metaphysical principle as the Taiji, the "supreme polarity" composed of the forces of yin and yang which were always in a state of change seeking balance. Another concern of Chinese metaphysics, especially Taoism, is the relationship and nature of being and non-being (you 有 and wu 無). The Taoists held that the ultimate, the Tao, was also non-being or no-presence. Other important concepts were those of spontaneous generation or natural vitality (Ziran) and "correlative resonance" (Ganying). History and schools of metaphysics: After the fall of the Han dynasty (220 CE), China saw the rise of the Neo-Taoist Xuanxue school. This school was very influential in developing the concepts of later Chinese metaphysics. Buddhist philosophy entered China (c. 
1st century) and was influenced by the native Chinese metaphysical concepts to develop new theories. The native Tiantai and Huayen schools of philosophy maintained and reinterpreted the Indian theories of shunyata (emptiness, kong 空) and Buddha-nature (Fo xing 佛性) into the theory of interpenetration of phenomena. Neo-Confucians like Zhang Zai under the influence of other schools developed the concepts of "principle" (li) and vital energy (qi). History and schools of metaphysics: Classical Greece Socrates and Plato Plato is famous for his theory of forms (which he places in the mouth of Socrates in his dialogues). Platonic realism (also considered a form of idealism) is considered to be a solution to the problem of universals; i.e., what particular objects have in common is that they share a specific Form which is universal to all others of their respective kind. History and schools of metaphysics: The theory has a number of other aspects: Epistemological: knowledge of the Forms is more certain than mere sensory data. Ethical: The Form of the Good sets an objective standard for morality. Time and Change: The world of the Forms is eternal and unchanging. Time and change belong only to the lower sensory world. "Time is a moving image of Eternity". Abstract objects and mathematics: Numbers, geometrical figures, etc., exist mind-independently in the World of Forms.Platonism developed into Neoplatonism, a philosophy with a monotheistic and mystical flavour that survived well into the early Christian era. Aristotle Plato's pupil Aristotle wrote widely on almost every subject, including metaphysics. His solution to the problem of universals contrasts with Plato's. Whereas Platonic Forms are existentially apparent in the visible world, Aristotelian essences dwell in particulars. Potentiality and actuality are principles of a dichotomy which Aristotle used throughout his philosophical works to analyze motion, causality and other issues. The Aristotelian theory of change and causality stretches to four causes: the material, formal, efficient and final. The efficient cause corresponds to what is now known as a cause simplicity. Final causes are explicitly teleological, a concept now regarded as controversial in science. The Matter/Form dichotomy was to become highly influential in later philosophy as the substance/essence distinction. The opening arguments in Aristotle's Metaphysics, Book I, revolve around the senses, knowledge, experience, theory, and wisdom. The first main focus in the Metaphysics is attempting to determine how intellect "advances from sensation through memory, experience, and art, to theoretical knowledge". Aristotle claims that eyesight provides the capability to recognize and remember experiences, while sound allows learning. History and schools of metaphysics: Classical India More on Indian philosophy: Hindu philosophy Sāṃkhya Sāṃkhya is an ancient system of Indian philosophy based on a dualism involving the ultimate principles of consciousness and matter. It is described as the rationalist school of Indian philosophy. It is most related to the Yoga school of Hinduism, and its method was most influential on the development of Early Buddhism.The Sāmkhya is an enumerationist philosophy whose epistemology accepts three of six pramanas (proofs) as the only reliable means of gaining knowledge. These include pratyakṣa (perception), anumāṇa (inference) and śabda (āptavacana, word/testimony of reliable sources).Samkhya is strongly dualist. 
Sāmkhya philosophy regards the universe as consisting of two realities; puruṣa (consciousness) and prakṛti (matter). Jiva (a living being) is that state in which puruṣa is bonded to prakṛti in some form. This fusion, state the Samkhya scholars, led to the emergence of buddhi ("spiritual awareness") and ahaṅkāra (ego consciousness). The universe is described by this school as one created by purusa-prakṛti entities infused with various permutations and combinations of variously enumerated elements, senses, feelings, activity and mind. During the state of imbalance, one of more constituents overwhelm the others, creating a form of bondage, particularly of the mind. The end of this imbalance, bondage is called liberation, or moksha, by the Samkhya school.The existence of God or supreme being is not directly asserted, nor considered relevant by the Samkhya philosophers. Sāṃkhya denies the final cause of Ishvara (God). While the Samkhya school considers the Vedas as a reliable source of knowledge, it is an atheistic philosophy according to Paul Deussen and other scholars. A key difference between Samkhya and Yoga schools, state scholars, is that Yoga school accepts a "personal, yet essentially inactive, deity" or "personal god".Samkhya is known for its theory of guṇas (qualities, innate tendencies). Guṇa, it states, are of three types: sattva being good, compassionate, illuminating, positive, and constructive; rajas is one of activity, chaotic, passion, impulsive, potentially good or bad; and tamas being the quality of darkness, ignorance, destructive, lethargic, negative. Everything, all life forms and human beings, state Samkhya scholars, have these three guṇas, but in different proportions. The interplay of these guṇas defines the character of someone or something, of nature and determines the progress of life. The Samkhya theory of guṇas was widely discussed, developed and refined by various schools of Indian philosophies, including Buddhism. Samkhya's philosophical treatises also influenced the development of various theories of Hindu ethics. History and schools of metaphysics: Vedānta Realization of the nature of self-identity is the principal object of the Vedanta system of Indian metaphysics. In the Upanishads, self-consciousness is not the first-person indexical self-awareness or the self-awareness which is self-reference without identification, and also not the self-consciousness which as a kind of desire is satisfied by another self-consciousness. It is self-realisation; the realisation of the self consisting of consciousness that leads all else.The word self-consciousness in the Upanishads means the knowledge about the existence and nature of manusya, human being. It means the consciousness of our own real being, the primary reality. Self-consciousness means self-knowledge, the knowledge of Prajna i.e. of Prana which is attained by a Brahman. According to the Upanishads the Atman or Paramatman is phenomenally unknowable; it is the object of realisation. The Atman is unknowable in its essential nature; it is unknowable in its essential nature because it is the eternal subject who knows about everything including itself. The Atman is the knower and also the known.Metaphysicians regard the self either to be distinct from the absolute or entirely identical with the absolute. They have given form to three schools of thought – the dualistic school, the quasi-dualistic school and the monistic school, as the result of their varying mystical experiences. 
Prakrti and Atman, when treated as two separate and distinct aspects form the basis of the dualism of the Shvetashvatara Upanishad. Quasi-dualism is reflected in the Vaishnavite-monotheism of Ramanuja and the absolute monism, in the teachings of Adi Shankara.Self-consciousness is the fourth state of consciousness or Turiya, the first three being Vaisvanara, Taijasa and Prajna. These are the four states of individual consciousness. History and schools of metaphysics: There are three distinct stages leading to self-realisation. The first stage is in mystically apprehending the glory of the self within one as though one were distinct from it. The second stage is in identifying the "I-within" with the self, that one is in essential nature entirely identical with the pure self. The third stage is in realising that the Atman is Brahman, that there is no difference between the self and the absolute. The fourth stage is in realising "I am the Absolute" – Aham Brahman Asmi. The fifth stage is in realising that Brahman is the "all" that exists, as also that which does not exist. History and schools of metaphysics: Buddhist metaphysics In Buddhist philosophy there are various metaphysical traditions that have proposed different questions about the nature of reality based on the teachings of the Buddha in the early Buddhist texts. The Buddha of the early texts does not focus on metaphysical questions but on ethical and spiritual training and in some cases, he dismisses certain metaphysical questions as unhelpful and indeterminate Avyakta, which he recommends should be set aside. The development of systematic metaphysics arose after the Buddha's death with the rise of the Abhidharma traditions. The Buddhist Abhidharma schools developed their analysis of reality based on the concept of dharmas which are the ultimate physical and mental events that makeup experience and their relations to each other. Noa Ronkin has called their approach "phenomenological".Later philosophical traditions include the Madhyamika school of Nagarjuna, which further developed the theory of the emptiness (shunyata) of all phenomena or dharmas which rejects any kind of substance. This has been interpreted as a form of anti-foundationalism and anti-realism which sees reality as having no ultimate essence or ground. The Yogacara school meanwhile promoted a theory called "awareness only" (vijnapti-matra) which has been interpreted as a form of Idealism or Phenomenology and denies the split between awareness itself and the objects of awareness. History and schools of metaphysics: Islamic metaphysics Major ideas in Islamic metaphysics (Arabic: ما وراء الطبيعة, romanized: Mawaraultabia) have surrounded the concept of weḥdah (وحدة) meaning 'unity', or in Arabic توحيد tawhid. Waḥdat al-wujūd literally means the 'unity of existence' or 'unity of being'. In modern times the phrase has been translated as "pantheism." Wujud (i.e. existence or presence) here refers to Allah's wujud (compare tawhid). However, waḥdat ash-shuhūd, meaning 'apparentism' or 'monotheism of witness', holds that god and his creation are entirely separate. History and schools of metaphysics: Scholasticism and the Middle Ages Between about 1100 and 1500, philosophy as a discipline took place as part of the Catholic church's teaching system, known as scholasticism. Scholastic philosophy took place within an established framework blending Christian theology with Aristotelian teachings. 
Although fundamental orthodoxies were not commonly challenged, there were nonetheless deep metaphysical disagreements, particularly over the problem of universals, which engaged Duns Scotus and Pierre Abelard. William of Ockham is remembered for his principle of ontological parsimony. History and schools of metaphysics: Continental rationalism In the early modern period (17th and 18th centuries), the system-building scope of philosophy is often linked to the rationalist method of philosophy, that is the technique of deducing the nature of the world by pure reason. The scholastic concepts of substance and accident were employed. Leibniz proposed in his Monadology a plurality of non-interacting substances. Descartes is famous for his dualism of material and mental substances. History and schools of metaphysics: Spinoza believed reality was a single substance of God-or-nature.Christian Wolff had theoretical philosophy divided into an ontology or philosophia prima as a general metaphysics, which arises as a preliminary to the distinction of the three "special metaphysics" on the soul, world and God: rational psychology, rational cosmology and rational theology. The three disciplines are called empirical and rational because they are independent of revelation. This scheme, which is the counterpart of religious tripartition in creature, creation, and Creator, is best known to philosophical students by Kant's treatment of it in the Critique of Pure Reason. In the "Preface" of the 2nd edition of Kant's book, Wolff is defined "the greatest of all dogmatic philosophers." British empiricism British empiricism marked something of a reaction to rationalist and system-building metaphysics, or speculative metaphysics as it was pejoratively termed. The skeptic David Hume famously declared that most metaphysics should be consigned to the flames (see below). Hume was notorious among his contemporaries as one of the first philosophers to openly doubt religion, but is better known now for his critique of causality. John Stuart Mill, Thomas Reid and John Locke were less skeptical, embracing a more cautious style of metaphysics based on realism, common sense and science. Other philosophers, notably George Berkeley were led from empiricism to idealistic metaphysics. History and schools of metaphysics: Kant Immanuel Kant attempted a grand synthesis and revision of the trends already mentioned: scholastic philosophy, systematic metaphysics, and skeptical empiricism, not to forget the burgeoning science of his day. As did the systems builders, he had an overarching framework in which all questions were to be addressed. Like Hume, who famously woke him from his 'dogmatic slumbers', he was suspicious of metaphysical speculation, and also places much emphasis on the limitations of the human mind. History and schools of metaphysics: Kant described his shift in metaphysics away from making claims about an objective noumenal world, towards exploring the subjective phenomenal world, as a Copernican Revolution, by analogy to (though opposite in direction to) Copernicus' shift from man (the subject) to the sun (an object) at the center of the universe. History and schools of metaphysics: Kant saw rationalist philosophers as aiming for a kind of metaphysical knowledge he defined as the synthetic apriori—that is knowledge that does not come from the senses (it is a priori) but is nonetheless about reality (synthetic). 
Inasmuch as it is about reality, it differs from abstract mathematical propositions (which he terms analytic apriori), and being apriori it is distinct from empirical, scientific knowledge (which he terms synthetic aposteriori). The only synthetic apriori knowledge we can have is of how our minds organise the data of the senses; that organising framework is space and time, which for Kant have no mind-independent existence, but nonetheless operate uniformly in all humans. Apriori knowledge of space and time is all that remains of metaphysics as traditionally conceived. There is a reality beyond sensory data or phenomena, which he calls the realm of noumena; however, we cannot know it as it is in itself, but only as it appears to us. He allows himself to speculate that the origins of phenomenal God, morality, and free will might exist in the noumenal realm, but these possibilities have to be set against its basic unknowability for humans. Although he saw himself as having disposed of metaphysics, in a sense, he has generally been regarded in retrospect as having a metaphysics of his own, and as beginning the modern analytical conception of the subject. History and schools of metaphysics: Late modern philosophy Nineteenth-century philosophy was overwhelmingly influenced by Kant and his successors. Schopenhauer, Schelling, Fichte and Hegel all purveyed their own panoramic versions of German Idealism, Kant's own caution about metaphysical speculation, and refutation of idealism, having fallen by the wayside. The idealistic impulse continued into the early twentieth century with British idealists such as F. H. Bradley and J. M. E. McTaggart. Followers of Karl Marx took Hegel's dialectic view of history and re-fashioned it as materialism. History and schools of metaphysics: Early analytic philosophy and positivism During the period when idealism was dominant in philosophy, science had been making great advances. The arrival of a new generation of scientifically minded philosophers led to a sharp decline in the popularity of idealism during the 1920s. Analytic philosophy was spearheaded by Bertrand Russell and G. E. Moore. Russell and William James tried to compromise between idealism and materialism with the theory of neutral monism. History and schools of metaphysics: Early to mid-twentieth-century philosophy saw a trend to reject metaphysical questions as meaningless. The driving force behind this tendency was the philosophy of logical positivism as espoused by the Vienna Circle, which argued that the meaning of a statement was its prediction of observable results of an experiment, and thus that there is no need to postulate the existence of any objects other than these perceptual observations. History and schools of metaphysics: At around the same time, the American pragmatists were steering a middle course between materialism and idealism. System-building metaphysics, with a fresh inspiration from science, was revived by A. N. Whitehead and Charles Hartshorne. Continental philosophy The forces that shaped analytic philosophy—the break with idealism, and the influence of science—were much less significant outside the English-speaking world, although there was a shared turn toward language. Continental philosophy continued in a trajectory from post-Kantianism.
History and schools of metaphysics: The phenomenology of Husserl and others was intended as a collaborative project for the investigation of the features and structure of consciousness common to all humans, in line with Kant's basing his synthetic apriori on the uniform operation of consciousness. It was officially neutral with regards to ontology, but was nonetheless to spawn a number of metaphysical systems. Brentano's concept of intentionality would become widely influential, including on analytic philosophy. History and schools of metaphysics: Heidegger, author of Being and Time, saw himself as re-focusing on Being-qua-being, introducing the novel concept of Dasein in the process. Classing himself an existentialist, Sartre wrote an extensive study of Being and Nothingness. The speculative realism movement marks a return to full blooded realism. Process metaphysics There are two fundamental aspects of everyday experience: change and persistence. Until recently, the Western philosophical tradition has arguably championed substance and persistence, with some notable exceptions, however. According to process thinkers, novelty, flux and accident do matter, and sometimes they constitute the ultimate reality. History and schools of metaphysics: In a broad sense, process metaphysics is as old as Western philosophy, with figures such as Heraclitus, Plotinus, Duns Scotus, Leibniz, David Hume, Georg Wilhelm Friedrich Hegel, Friedrich Wilhelm Joseph von Schelling, Gustav Theodor Fechner, Friedrich Adolf Trendelenburg, Charles Renouvier, Karl Marx, Ernst Mach, Friedrich Wilhelm Nietzsche, Émile Boutroux, Henri Bergson, Samuel Alexander and Nicolas Berdyaev. It seemingly remains an open question whether major "Continental" figures such as the late Martin Heidegger, Maurice Merleau-Ponty, Gilles Deleuze, Michel Foucault, or Jacques Derrida should be included.In a strict sense, process metaphysics may be limited to the works of a few philosophers: G. W. F. Hegel, Charles Sanders Peirce, William James, Henri Bergson, A. N. Whitehead, and John Dewey. From a European perspective, there was a very significant and early Whiteheadian influence on the works of outstanding scholars such as Émile Meyerson (1859–1933), Louis Couturat (1868–1914), Jean Wahl (1888–1974), Robin George Collingwood (1889–1943), Philippe Devaux (1902–1979), Hans Jonas (1903–1993), Dorothy M. Emmett (1904–2000), Maurice Merleau Ponty (1908–1961), Enzo Paci (1911–1976), Charlie Dunbar Broad (1887–1971), Wolfe Mays (1912–2005), Ilya Prigogine (1917–2003), Jules Vuillemin (1920–2001), Jean Ladrière (1921–2007), Gilles Deleuze (1925–1995), Wolfhart Pannenberg (1928–2014), Reiner Wiehl (1929–2010), and Alain Badiou (1937-). History and schools of metaphysics: Contemporary analytic philosophy While early analytic philosophy tended to reject metaphysical theorizing, under the influence of logical positivism, it was revived in the second half of the twentieth century. Philosophers such as David K. Lewis and David Armstrong developed elaborate theories on a range of topics such as universals, causation, possibility and necessity and abstract objects. However, the focus of analytic philosophy generally is away from the construction of all-encompassing systems and toward close analysis of individual ideas. 
History and schools of metaphysics: Among the developments that led to the revival of metaphysical theorizing was Quine's attack on the analytic–synthetic distinction, which was generally taken to undermine Carnap's distinction between existence questions internal to a framework and those external to it. The philosophy of fiction, the problem of empty names, and the debate over existence's status as a property have all come out of relative obscurity into the limelight, while perennial issues such as free will, possible worlds, and the philosophy of time have had new life breathed into them. The analytic view is of metaphysics as studying phenomenal human concepts rather than making claims about the noumenal world, so its style often blurs into philosophy of language and introspective psychology. Compared to system-building, it can seem very dry, stylistically similar to computer programming, mathematics or even accountancy (as a common stated goal is to "account for" entities in the world).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Secure Electronic Transaction** Secure Electronic Transaction: Secure Electronic Transaction (SET) is a communications protocol standard for securing credit card transactions over networks, specifically, the Internet. SET was not itself a payment system, but rather a set of security protocols and formats that enabled users to employ the existing credit card payment infrastructure on an open network in a secure fashion. However, it failed to gain traction in the market. Visa now promotes the 3-D Secure scheme. Secure Electronic Transaction: Secure Electronic Transaction (SET) is a system for ensuring the security of financial transactions on the Internet. It was supported initially by Mastercard, Visa, Microsoft, Netscape, and others. With SET, a user is given an electronic wallet (digital certificate) and a transaction is conducted and verified using a combination of digital certificates and digital signatures among the purchaser, a merchant, and the purchaser's bank in a way that ensures privacy and confidentiality. History and development: SET was developed by the SET Consortium, established in 1996 by Visa and Mastercard in cooperation with GTE, IBM, Microsoft, Netscape, SAIC, Terisa Systems, RSA, and VeriSign. The consortium’s goal was to combine the card associations' similar but incompatible protocols (STT from Visa/Microsoft and SEPP from Mastercard/IBM) into a single standard. SET allowed parties to identify themselves to each other and exchange information securely. Binding of identities was based on X.509 certificates with several extensions. SET used a cryptographic blinding algorithm that, in effect, would have let merchants substitute a certificate for a user's credit card number. If SET were used, the merchant itself would never have had to know the credit-card numbers being sent from the buyer, which would have provided verified good payment but protected customers and credit companies from fraud. History and development: SET was intended to become the de facto standard payment method on the Internet between the merchants, the buyers, and the credit-card companies. History and development: Unfortunately, the implementation by each of the primary stakeholders was either expensive or cumbersome. There were also some external factors that may have complicated how the consumer element would be integrated into the browser. There was a rumor circa 1994-1995 that suggested that Microsoft sought an income stream of 0.25% from every transaction secured by Microsoft's integrated SET-compliant components they would implement in their Internet browser. Key features: To meet the business requirements, SET incorporates the following features: Confidentiality of information Integrity of data Cardholder account authentication Merchant authentication Participants: A SET system includes the following participants: Cardholder Merchant Issuer Acquirer Payment gateway Certification authority How it works Both cardholders and merchants must register with the CA (certificate authority) first, before they can buy or sell on the Internet. Once registration is done, cardholder and merchant can start to do transactions, which, in simplified form, involve nine basic steps in this protocol. How it works: Customer browses the website and decides on what to purchase Customer sends order and payment information, which includes two parts in one message: a. Purchase order – this part is for merchant b. Card information – this part is for merchant’s bank only. 
Merchant forwards card information to their bank Merchant’s bank checks with the issuer for payment authorization Issuer sends authorization to the merchant’s bank Merchant’s bank sends authorization to the merchant Merchant completes the order and sends confirmation to the customer Merchant captures the transaction from their bank Issuer prints credit card bill (invoice) to the customer Dual signature: As described in (Stallings 2000): An important innovation introduced in SET is the dual signature. The purpose of the dual signature is to link two messages that are intended for two different recipients. In this case, the customer wants to send the order information (OI) to the merchant and the payment information (PI) to the bank. The merchant does not need to know the customer's credit-card number, and the bank does not need to know the details of the customer's order. The customer is afforded extra protection in terms of privacy by keeping these two items separate. However, the two items must be linked in a way that can be used to resolve disputes if necessary. The link is needed so that the customer can prove that this payment is intended for this order and not for some other goods or service. Dual signature: The message digests (MDs) of the OI and the PI are independently calculated by the customer. These are concatenated and another MD is calculated from the result. Finally, the dual signature is created by encrypting the MD with the customer's secret key. The dual signature is sent to both the merchant and the bank. The protocol arranges for the merchant to see the MD of the PI without seeing the PI itself, and the bank sees the MD of the OI but not the OI itself. The dual signature can be verified using the MD of the OI or PI, without requiring either the OI or PI. Privacy is preserved as the MD cannot be reversed to reveal the contents of the OI or PI.
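The digest chain behind the dual signature can be sketched in a few lines. The following minimal Python illustration is not the actual SET implementation: SET specified SHA-1 digests with an RSA signature over the final digest, whereas here the signing step is omitted and the order and payment strings are invented placeholders.

```python
import hashlib

def md(data: bytes) -> bytes:
    # SET used SHA-1 message digests; any collision-resistant hash
    # illustrates the construction equally well.
    return hashlib.sha1(data).digest()

# Hypothetical order information (OI) and payment information (PI).
oi = b"order: 1 widget, deliver to example address"
pi = b"card number and amount, for the bank only"

pi_md = md(pi)             # PIMD: sent to the merchant instead of the PI
oi_md = md(oi)             # OIMD: sent to the bank instead of the OI
pomd = md(pi_md + oi_md)   # combined digest; in SET this value is encrypted
                           # with the customer's private key to form the
                           # dual signature (that encryption step is omitted)

# Merchant side: holds the OI in clear, PIMD and the dual signature.
assert md(pi_md + md(oi)) == pomd   # the order is linked to the unseen payment

# Bank side: holds the PI in clear, OIMD and the dual signature.
assert md(md(pi) + oi_md) == pomd   # the payment is linked to the unseen order
print("dual-signature linkage verified on both sides")
```

Either party can thus confirm that the pair of messages belongs together without ever seeing the half addressed to the other party.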
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OsiriX** OsiriX: OsiriX is an image processing application for the Apple macOS operating system dedicated to DICOM images (".dcm" / ".DCM" extension) produced by medical imaging equipment (MRI, CT, PET, PET-CT, ...). OsiriX is complementary to existing viewers, in particular to nuclear medicine viewers. It can also read many other file formats: TIFF (8, 16, 32 bits), JPEG, PDF, AVI, MPEG and QuickTime. It is fully compliant with the DICOM standard for image communication and image file formats. OsiriX is able to receive images transferred by the DICOM communication protocol from any PACS or medical imaging modality (STORE SCP - Service Class Provider, STORE SCU - Service Class User, and Query/Retrieve). OsiriX: Since 2010, a commercial version of OsiriX, named "OsiriX MD", has been available. Its original source code is still available on GitHub. A demo version, "OsiriX Lite", remains available free of charge with some limitations. History: The OsiriX project started in 2004 at UCLA with Dr Antoine Rosset and Prof. Osman Ratib. OsiriX has been developed by Rosset, working at La Tour Hospital (Geneva, Switzerland), and Joris Heuberger, a computer scientist from Geneva. In 2010, a version of OsiriX for iPhone and iPod touch was released. History: Major milestones in OsiriX versions OsiriX 6.5 - 3D ROIs are introduced OsiriX 7.0 - Several reporting plugins are included: PI-RADS, BI-RADS, Coronary Angiography, TAVI and Liver report plugins OsiriX 7.5 - Dark Mode and vessel tracking (centreline) OsiriX 8.5 - DICOMweb protocol support OsiriX 9.0 - Smart Display (adjust image scaling to image content) OsiriX 9.5 - Javascript web viewer for the built-in Web Portal functionality OsiriX 12.0 - Compiled for Apple Silicon processors (M1, M2, …) OsiriX 13.0 - DICOM fields editing directly in the database window Features: OsiriX has been specifically designed for navigation and visualization of multimodality and multidimensional images: 2D Viewer, 3D Viewer, 4D Viewer (3D series with temporal dimension, for example: Cardiac-CT) and 5D Viewer (3D series with temporal and functional dimensions, for example: Cardiac-PET-CT). The 3D Viewer offers all modern rendering modes: Multiplanar reconstruction (MPR), Surface Rendering, Volume Rendering and Maximum intensity projection (MIP). All these modes support 4D data and are able to produce image fusion between two different series (for example: PET-CT). Features: OsiriX is simultaneously a DICOM PACS workstation for imaging and an image processing software package for research (radiology and nuclear imaging), functional imaging, 3D imaging, confocal microscopy and molecular imaging. OsiriX supports a complete plug-in architecture that allows one to expand the capabilities of OsiriX for personal needs. OsiriX is released under a proprietary license and runs under macOS. OsiriX source code makes heavy use of Apple idioms such as Cocoa. The source is almost entirely in Objective-C. Pixmeo company: In 2010, the OsiriX Team created the company Pixmeo to promote and distribute a special limited version of OsiriX called OsiriX MD. Unlike the regular version, this version is certified for medical imaging. OsiriX MD is an FDA-cleared 510(k) class II medical device, according to US Food And Drug Regulation CFR21 part 820. OsiriX MD complies with European Directive 93/42/EEC concerning medical devices. Under this directive, it is regarded as a class IIa device.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ratio (journal)** Ratio (journal): Ratio is a peer-reviewed academic journal of analytic philosophy, edited by David S. Oderberg (Reading University) and published by Wiley-Blackwell. Ratio is published quarterly and in December publishes a special issue that is focused specifically on one area, calling on specialists in that field of study to contribute. It is a successor to a previous journal, also called Ratio and published in parallel editions in German and English; the earlier journal, sponsored by the Society for the Furtherance of the Critical Philosophy and the Philosophisch-politische Akademie, ran from 1957 until December 1987 and produced 29 volumes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zhani** Zhani: Zhani (asomtavruli Ⴏ, nuskhuri ⴏ, mkhedruli ჟ) is the 18th letter of the three Georgian scripts. In the system of Georgian numerals it has a value of 90. Zhani commonly represents the voiced palato-alveolar sibilant consonant /ʒ/, like the pronunciation of ⟨ʒ⟩ in "vision".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Loeys–Dietz syndrome** Loeys–Dietz syndrome: Loeys–Dietz syndrome (LDS) is an autosomal dominant genetic connective tissue disorder. It has features similar to Marfan syndrome and Ehlers–Danlos syndrome. The disorder is marked by aneurysms in the aorta, often in children, and the weakened layers of the aortic wall may also undergo sudden dissection. Aneurysms and dissections also can occur in arteries other than the aorta. Because aneurysms in children tend to rupture early, children are at greater risk for dying if the syndrome is not identified. Surgery to repair aortic aneurysms is essential for treatment. Loeys–Dietz syndrome: There are five types of the syndrome, labelled types I through V, which are distinguished by their genetic cause. Type 1, Type 2, Type 3, Type 4 and Type 5 are caused by mutations in TGFBR1, TGFBR2, SMAD3, TGFB2, and TGFB3 respectively. These five genes encoding transforming growth factors play a role in cell signaling that promotes growth and development of the body's tissues. Mutations of these genes cause production of non-functional proteins. The skin cells of individuals with Loeys–Dietz syndrome are not able to produce collagen, the protein that allows skin cells to be strong and elastic. This makes these individuals susceptible to tears in connective tissue, such as hernias. Although the disorder has an autosomal dominant pattern of inheritance, it results from a new gene mutation in 75% of cases and occurs in people with no history of the disorder in their family. In other cases it is inherited from one affected parent. Loeys–Dietz syndrome was identified and characterized by pediatric geneticists Bart Loeys and Harry "Hal" Dietz at Johns Hopkins University in 2005. Signs and symptoms: There is considerable variability in the phenotype of Loeys–Dietz syndrome, from mild features to severe systemic abnormalities. The primary manifestations of Loeys–Dietz syndrome are arterial tortuosity (winding course of blood vessels), widely spaced eyes (hypertelorism), wide or split uvula, and aneurysms at the aortic root. Other features may include cleft palate and a blue/gray appearance of the white of the eyes. Cardiac defects and club foot may be noted at birth. There is overlap in the manifestations of Loeys–Dietz and Marfan syndromes, including increased risk of ascending aortic aneurysm and aortic dissection, abnormally long limbs and fingers, and dural ectasia (a gradual stretching and weakening of the dura mater that can cause abdominal and leg pain). 
Findings of hypertelorism (widely spaced eyes), bifid or split uvula, and skin findings such as easy bruising or abnormal scars may distinguish Loeys–Dietz from Marfan syndrome.Affected individuals often develop immune system related problems such as allergies to food, asthma, hay fever, and inflammatory disorders such as eczema or inflammatory bowel disease.Findings of Loeys–Dietz syndrome may include: Skeletal/spinal malformations: craniosynostosis, Scoliosis, spinal instability and spondylolisthesis, Kyphosis Sternal abnormalities: pectus excavatum, pectus carinatum Contractures of fingers and toes (camptodactyly) Long fingers and lax joints Weakened or missing eye muscles (strabismus) Club foot Premature fusion of the skull bones (craniosynostosis) Joint hypermobility Congenital heart problems including patent ductus arteriosus (connection between the aorta and the lung circulation) and atrial septal defect (connection between heart chambers) Translucency of the skin with velvety texture Abnormal junction of the brain and medulla (Arnold–Chiari malformation) Bicuspid aortic valves Criss-crossed pulmonary arteries Cause: Types (old nomenclature) Several genetic causes of Loeys–Dietz syndrome have been identified. A de novo mutation in TGFB3, a ligand of the TGF β pathway, was identified in an individual with a syndrome presenting partially overlapping symptoms with Marfan Syndrome and Loeys–Dietz Syndrome. Diagnosis: Diagnosis involves consideration of physical features and genetic testing. Presence of split uvula is a differentiating characteristic from Marfan Syndrome, as well as the severity of the heart defects. Loeys–Dietz Syndrome patients have more severe heart involvement and it is advised that they be treated for enlarged aorta earlier due to the increased risk of early rupture in Loeys–Dietz patients. Because different people express different combinations of symptoms and the syndrome was first identified in 2005, many doctors may not be aware of its existence. Treatment: As there is no known cure, Loeys–Dietz syndrome is a lifelong condition. Due to the high risk of death from aortic aneurysm rupture, patients should be followed closely to monitor aneurysm formation, which can then be corrected with vascular surgery. Previous research in laboratory mice has suggested that the angiotensin II receptor antagonist losartan, which appears to block TGF-beta activity, can slow or halt the formation of aortic aneurysms in Marfan syndrome. A large clinical trial sponsored by the National Institutes of Health is currently underway to explore the use of losartan to prevent aneurysms in Marfan syndrome patients. Both Marfan syndrome and Loeys–Dietz syndrome are associated with increased TGF-beta signaling in the vessel wall. Therefore, losartan also holds promise for the treatment of Loeys–Dietz syndrome. In those patients in which losartan is not halting the growth of the aorta, irbesartan has been shown to work and is currently also being studied and prescribed for some patients with this condition.If an increased heart rate is present, a cardioselective beta-1 blocker, with or without losartan, is sometimes prescribed to reduce the heart rate to prevent any extra pressure on the tissue of the aorta. Likewise, strenuous physical activity is discouraged in patients, especially weight lifting and contact sports. Epidemiology: The incidence of Loeys–Dietz syndrome is unknown; however, Type 1 and 2 appear to be the most common.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Overture (software)** Overture (software): Overture is a music notation (scorewriter) program for Windows and Macintosh platforms, published and developed by Sonic Scores. While Overture is primarily a scorewriter program, it also allows editing the score's MIDI audio playback data in the manner of sequencer and digital audio workstation (DAW) software. To facilitate film scoring, Overture has the ability to play film video footage synchronized to the score playback, and to insert precise time markers into the score. Overture was the first scorewriter to feature full Virtual Studio Technology (VST) hosting, allowing audio playback of the score with virtual instruments, controlled by the program's mixing-desk style interface. Editing and note entry: Editing When Overture was developed, the developer aimed to retain the user-friendly interface design of the earlier Encore program, but included the ability to notate elements regarded as complex at that time. These included adjustable engraver spacings between elements, non-standard notehead shapes, varying numbers of staff lines, guitar fingering charts, and tablature notation. Each line of drum staves could be user-mapped to different percussion instruments. It was also the first music scorewriter software that gave users control over all MIDI playback data such as note velocity, pitch bend and duration. Editing and note entry: Most notational symbols can be repositioned by dragging them with the mouse. Most other editing of notational symbols is performed by selecting the symbols using the mouse, and selecting the appropriate editing command from a menu or by clicking on a palette. Overture 5 and higher supports editing and page navigation, such as pinch-to-zoom, using one's fingers or a stylus on touch screens. Editing and note entry: Note entry In Overture, input of note data can be done by any of several methods: via an onscreen virtual piano keyboard; via the computer keyboard; directly onto the staves with the mouse; or with a MIDI keyboard. MIDI keyboard note entry may be done by playing pitches singly ("step entry") or by real-time recording. For keyboard or mouse step entry, note lengths are selected from a palette or via the numeric keys (for example, pressing 4 selects quarter notes, pressing 8 selects eighth notes). Computer keyboard note entry in Overture 5 is performed by typing the letter name of the musical pitches (optionally followed by the Enter key, depending on user settings), followed by the letter "o" or "O" if an octave change upwards or downwards, respectively, is required. Editing and note entry: Audio Editing The software enables graphical editing of all MIDI audio playback data (such as duration, loudness, pitch bend, sustain, attack/decay time, and breath control) for each individual note. This can be done either on the score itself, or via a scrolling view in the style of a DAW. MIDI data is displayed as a scrolling piano roll view, alongside either a piano keyboard, or treble and bass staves. Background and Development: In the early 1990s, the music notation software market was dominated by the Finale program, published by Coda. It was capable of handling large, complicated scores and non-traditional notation. However, its immense power and flexibility came at the expense of a "complex user interface". Other notation programs with different interfaces were eventually developed, including Encore, which Williams had previously worked on. 
Encore featured the ability to add notes by simply selecting the note value on a palette and placing it in the required position on a staff; most notational elements could also be selected with the mouse, but unlike Finale at the time, Encore was unable to handle many unconventional notation elements. Background and Development: In 1994, Professor Alan Belkin of the University of Montreal published a study of notation software available at the time (dominated by programs for Macintosh). Among other things, it described the advantages and disadvantages of the mouse- and keyboard-driven approaches to notation-interface design, which he exemplified by referring to Encore and Finale, respectively, and other software packages. On its release in 1994, Overture's interface combined features of two of Williams' earlier software projects: Overture's score interface resembled Encore, while its MIDI data editing view resembled the piano-roll view of Master Tracks Pro. Later versions allow viewing the piano roll alongside either a visual piano keyboard, or treble and bass staves. When first released in 1994, Overture always showed the score in a fully editable WYSIWYG page view, in which all notational elements could be entered or edited. This contrasted with Finale, in which, at the time, the user had to select between a large number of editing modes before performing different types of edits. Later versions of Overture also introduced a scrolling linear view, which enabled editing of both notational elements and playback data. Most previous notation programs either lacked an editable WYSIWYG page view, or switched between a scrolling linear view used for editing and a page view used for print previews with limited editing functions only, as in Finale at the time. Background and Development: As of 2017, Overture supports synchronized film/video playback, and plugins such as Garritan instrument libraries. In 2018, Sonic Scores announced the release of the Amadeus Symphonic Orchestra sampled instrument library, accessible in Overture using the Kontakt instrument library interface. The Amadeus instrument library contains a large number of sampled instruments, playing with different articulations. As of June 2021, Overture is in version 5.6.3-3. Sonic Scores generally releases multiple updates each year, with the current version as the main download at the Sonic Scores website, listed with past updates, although not every previous update is publicly available. Updates often include improvements suggested by the user community. Publisher: Overture has been continuously maintained by the developer since it was first released. It was originally published by Opcode Systems, which produced MIDI sequencing and digital audio software. After Opcode ceased product development in 1999, having been bought out by Gibson Brands, Overture found a new publisher, Cakewalk. Cakewalk published the software from 1999 to 2001. In 2001, Williams' own company, GenieSoft – now known as Sonic Scores – purchased Overture from Cakewalk. Greg Hendershott, CEO of Cakewalk at the time, announced, "The fact that GenieSoft founder Don Williams is the original developer of these products is great news for those customers. He's committed to continuing customer support and product enhancements." GenieSoft later changed its name to Sonic Scores, and has published and developed Overture since 2001. Publisher: Sonic Scores also markets Score Writer, a less expensive version of Overture with reduced features. 
In addition, Sonic Scores is known as the publisher of the Amadeus Symphonic Orchestra sampled instrument library, which is compatible with many scorewriter programs. Demonstration versions of Overture and Score Writer are available at the Sonic Scores website. The demonstration versions are fully functional for a 30-day trial period, after which saving and printing are disabled. Site licences are also sold. Website, support and user community: Support from the developer and the user community is provided via a support forum area on the website. Version release information on each update and beta versions are also available via the forum. Reviews: Reviewers of Overture have generally highlighted the software's logical user interface and ease of use, although some reviewers have found version 5 less intuitive. In 1996, Marc Battier reviewed version 1.2 of the program for the Leonardo Music Journal, writing, "...Overture has found its place among the highly regarded common music notation software for the Macintosh." Battier points out that Overture is set apart by its ability to edit MIDI playback data whilst retaining a full set of notational tools, "It is less usual to see notation programs that have substantial MIDI control implementation... Overture has clearly inherited a number of features from its older cousin, the well-known sequencer Vision... One can use the program as a MIDI sequencer while retaining full capability of editing data with a comprehensive music notation set of tools." While Battier felt a weakness was that, at the time, Overture lacked a function to create user-drawn graphics, he points out that these can be imported. He praised the ability to create custom MIDI drum maps, and Overture's tool palettes, which can be put out of the way of the score workspace. In 1997, Ross Whitney reviewed version 2 of the software in the Music Library Association's journal, Notes. Praising Overture's design, he wrote, "Built on a solid base of experience and insight... the program can hardly be considered immature. Its design is essentially intuitive, efficient and flexible." He adds that "Overture accommodates virtually every standard notational practice of Western music used by educators, professional composers, arrangers and copyists." In 2012, Chad Criswell, of MusicEdMagic, reviewed Overture version 4, writing, "The Overture music notation system is another in a long line of lesser known but well designed music writing programs... the Overture system provides most of the same functionality and capabilities as Finale or Sibelius but does so in a lighter, somewhat easier to use package." Criswell observes how changing noteheads, and adding articulations and markings, while cumbersome in some software, is easy in Overture. He writes, "One of the more helpful things I discovered right off the bat in Overture is that they put the options for changing the appearance of music note heads, articulations, and other markings right up front in easy to use pull down menus... Overture makes it very easy." However, Criswell also notes that Overture version 4 "...is good but not perfect", pointing out that it lacked instrument parts which are dynamically linked to the master score. (Version 5, released in 2016, allows viewing, layout editing and printing of individual parts directly within the master document.) In 2019, Ana Marculescu, of the Romanian tech-news site Softpedia, reviewed version 5. 
Marculescu describes Overture as "an advanced software application designed for helping composers, music educators and students create complex tabulator scores." Marculescu was somewhat overwhelmed by the interface of version 5, "The layout cannot be described as highly intuitive as it may look a bit overwhelming at a first glance." Marculescu sums up, "All in all, Overture includes a comprehensive suite of editing tools and symbols palettes that can be used by professional musicians in order to compose music." Score Writer: Score Writer is a program also available from Sonic Scores. It has the same scoring interface as Overture, but with a lower price and without graphic MIDI data view and many of the advanced features available in Overture. Score Writer is marketed as a simple package for people new to notation and composition, and easily allows the creation of small to medium ensemble scores of up to 20 tracks/instruments, and lead sheets with guitar frames. In Score Writer, the score page view zooming is limited to small, medium and large sizes in WYSIWYG page layout view only. Score Writer: Among the more advanced features of Overture which are not included in Score Writer are: cross-staff and feathered beaming; graphic view MIDI editing (although MIDI data can be edited on-score); automatic and customised guitar tablature; video playback and SMPTE time code insertion into the score; compatibility with VST and the Amadeus Symphonic Orchestra instrument library; custom engraver spacing; ability to hide individual staves; and ossia staves. Compatibility: When first released, Overture ran only on Mac OS computers, with a Windows version being added in a later release. Overture versions from 3 onwards have been released for both Windows and Mac OS. The software is 64-bit native, and is compatible with macOS 11 (Big Sur). Overture 5 requires Windows 7 or later, or macOS 10.9 or later. As of 2021, the Overture interface operates in English, French, Chinese, Norwegian and Spanish. Overture is compatible with VST and Kontakt player libraries. In addition to its own file formats (.ove and .ovex), Overture can read and write the industry-standard MusicXML (.musicxml and .mxl) files for sharing scores with other music scoring programs. It can read Score Writer (.scwx) files, and can open, play and edit MIDI audio data files (.mid) as scores.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mastoid cells** Mastoid cells: The mastoid cells (also called air cells of Lenoir or mastoid cells of Lenoir) are air-filled cavities within the mastoid process of the temporal bone of the cranium. The mastoid cells are a form of skeletal pneumaticity. Infection in these cells is called mastoiditis. The term "cells" here refers to enclosed spaces, not cells as living, biological units. Anatomy: The mastoid air cells vary greatly in number, shape, and size; they may be extensive or minimal or even absent. The cells are typically interconnected and their walls lined by mucosa that is continuous with that of the mastoid antrum and tympanic cavity. Extent They may excavate the mastoid process to its tip, and be separated from the posterior cranial fossa and sigmoid sinus by a mere slip of bone or not at all. They may extend into the squamous part of the temporal bone, the petrous part of the temporal bone, the zygomatic process of the temporal bone, and - rarely - the jugular process of the occipital bone; they may thus come to adjoin many important structures (including the bony labyrinth, tympanic cavity, external acoustic meatus, pharyngotympanic tube, superior jugular bulb, posterior cranial fossa, middle cranial fossa, carotid canal, abducens nerve, sigmoid sinus) to which they may disseminate infection in case of infective mastoiditis. Innervation The cells receive innervation from the posterior branch of the meningeal branch of the mandibular nerve (nervus spinosus), and branches of the tympanic plexus. Vasculature The cells receive arterial supply from the stylomastoid branch of the occipital artery or posterior auricular artery, and (sometimes) a mastoid branch of the occipital artery. The superior petrosal sinus receives venous drainage from the mastoid air cells (mastoid infection may thus lead to a cerebellar abscess). Development At birth, the mastoid is not pneumatized, but becomes aerated before age six. At birth, the mastoid antrum is well developed but the air cells are represented only by small diverticula from the antrum. The air cells then gradually extend into the bone of the mastoid during the first years of life. Their most significant enlargement takes place during puberty. Function: The air cells are hypothesised to protect the temporal bone and the inner and middle ear against trauma and to regulate air pressure. Clinical significance: Infections in the middle ear easily spread into the mastoid air cells through the aditus ad antrum, resulting in mastoiditis, a potentially dangerous and life-threatening condition. Infection may then further spread into the middle cranial fossa or posterior cranial fossa, causing meningitis or abscess of adjacent brain tissue. Infection may also spread to muscles of the neck, causing pain and torticollis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**1,2-Propanedithiol** 1,2-Propanedithiol: 1,2-Propanedithiol, sometimes called 1,2-dimercaptopropane, is a thiol with the formula HSCH2CH(SH)CH3. This colorless, intensely odorous liquid is the simplest chiral dithiol. Related dithiols include 1,2-ethanedithiol, 2,3-dimercapto-1-propanesulfonic acid, and 1,3-propanedithiol. It is generated by the addition of H2S to the related episulfide, CH3CHCH2S. Its refractive index is 1.531–1.541.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Adduct purification** Adduct purification: Adduct purification is a technique for preparing extremely pure simple organometallic compounds, which are generally unstable and hard to handle, by purifying a stable adduct with a Lewis acid and then obtaining the desired product from the pure adduct by thermal decomposition. Epichem Limited is the licensee of the major patents in this field, and uses the trademark EpiPure to refer to adduct-purified materials; Professor Anthony Jones at Liverpool University is the initiator of the field and author of many of the important papers. The choice of Lewis acid and of reaction medium is important; the desired organometallics are almost always air- and water-sensitive. Initial work was done in ether, but this led to oxygen impurities, and so more recent work involves tertiary amines or nitrogen-substituted crown ethers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Diazaquinone** Diazaquinone: A diazaquinone is a chemical compound that has a heterocyclic aromatic core including two consecutive doubly-bonded nitrogen atoms −N=N−, with the two =CH− carbon units adjacent to the nitrogens replaced by carbonyl (ketone) groups −(C=O)−. These carbon and nitrogen atoms then comprise a diacyl diimide unit, −(C=O)−N=N−(C=O)−. Two canonical examples are 3,6-pyridazinedione (a quinone of pyridazine), emerald-green; and 1,4-phthalazinedione (a quinone of phthalazine), a green crystalline solid (both soluble in acetone and stable at −77 °C). The name was proposed by Thomas J. Kealy in 1962.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**LAN eXtensions for Instrumentation** LAN eXtensions for Instrumentation: LAN eXtensions for Instrumentation (LXI) is a standard developed by the LXI Consortium, a consortium that maintains the LXI specification and promotes the LXI Standard. The LXI standard defines the communication protocols for instrumentation and data acquisition systems using Ethernet. Ethernet is a ubiquitous communication standard providing a versatile interface; the LXI standard describes how to use the Ethernet standards for test and measurement applications in a way that promotes simple interoperability between instruments. The LXI Consortium ensures LXI-compliant instrumentation developed by various vendors works together with no communication or setup issues. The LXI Consortium ensures that the LXI standard complements other test and measurement control systems, such as GPIB and PXI systems. Overview: Proposed in 2005 by Keysight (formerly called Agilent Technologies) and VTI Instruments (formerly called VXI Technology and now part of Ametek), the LXI standard adapts the Ethernet and World Wide Web standards and applies them to test and measurement applications. The standard defines how existing standards should be used in instrumentation applications to provide a consistent feel and ensure compatibility between vendors' equipment. The LXI standard does not define a mechanical format, allowing LXI solutions to take any physical form deemed suitable for products in their intended market. LXI products can be modular, rack mounted, bench mounted or take any other physical form. Overview: LXI supports synthetic instruments and peer-to-peer networking, providing a number of unique capabilities to the test engineer. LXI products may have no front panel or display, or they may include embedded keyboards and displays. Connections to the DUT are permitted to be on the front or the rear to suit market demand; most devices provide front panel connectivity for the DUT, with Ethernet and power connections provided on the rear panel. Use of Ethernet allows the simple construction of distributed instrumentation systems and of control and monitoring systems over large distances; with suitable VPN connections it is possible to connect systems together over inter-continental distances without the use of specialised equipment. The inclusion of an optional Extended Function based on the IEEE 1588 Precision Time Protocol allows instruments to communicate on a time basis, initiating events at specified times or intervals and time stamping events to indicate when these events occurred in a system. Interoperability and IVI: LXI devices can coexist with Ethernet devices that are not themselves LXI compliant. They can also be present in test systems which include products based on the GPIB, VXI, and PXI standards. The standard mandates that every LXI instrument must have an Interchangeable Virtual Instrument (IVI) driver. The IVI Foundation defines a standard driver application programming interface (API) for programmable instruments. IVI driver formats can be IVI-COM for working with COM-based development environments, IVI-C for working in traditional programming languages, or IVI.NET for use in a .NET Framework. Most LXI instruments can be programmed with methods other than IVI, so it is not mandatory to work with an IVI driver. Developers can use other driver technologies or work directly with SCPI commands. 
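Beyond IVI drivers, a common way to exercise an LXI instrument is simply to open a TCP connection and exchange SCPI text commands. The short Python sketch below is illustrative only: the instrument address is hypothetical, and the raw-SCPI port 5025 is a widespread vendor convention rather than something the LXI standard itself mandates.

```python
import socket

# Hypothetical instrument address; many LXI devices expose a raw SCPI
# socket on TCP port 5025 (a convention, not an LXI requirement).
INSTRUMENT = ("192.168.1.50", 5025)

def query(command: str, timeout: float = 2.0) -> str:
    """Send one newline-terminated SCPI command and return one reply line."""
    with socket.create_connection(INSTRUMENT, timeout=timeout) as sock:
        sock.sendall((command + "\n").encode("ascii"))
        reply = b""
        while not reply.endswith(b"\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            reply += chunk
    return reply.decode("ascii").strip()

if __name__ == "__main__":
    # *IDN? is the standard IEEE 488.2 identification query supported by
    # virtually all SCPI-capable instruments.
    print(query("*IDN?"))
```

The same commands could equally be sent through an IVI driver or a VISA library; the point is only that an LXI device is reachable with nothing more than ordinary Ethernet sockets.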
Standardization: The LXI Standard has three major elements: A standardized LAN interface that provides a framework for web based interfacing and programmatic control. The LAN interface can include wireless connectivity, as well as physically connected interfaces. The interface supports peer-to-peer operation, as well as master/slave operation. Devices can optionally support IPv6. An optional trigger facility based on the IEEE 1588 Precision Time Protocol that enables modules to have a sense of time, which allows modules to time stamp actions and initiate triggered events over the LAN interface. Standardization: An optional physical wired trigger system based on a Multipoint Low-Voltage Differential Signaling (M-LVDS) electrical interface that tightly synchronizes the operation of multiple LXI instruments. The specification is organized into a set of documents which describe: The LXI Device Core Specification, which contains the requirements for the LAN interface which all LXI Devices must adhere to A set of optional Extended Functions which LXI devices can adhere to. If a device claims conformance it must have been tested under the LXI Consortium Conformance regime. As of March 2016, there are 7 Extended Functions specified: HiSLIP IPv6 LXI Wired Trigger Bus LXI Event Messaging LXI Clock Synchronization (based on IEEE 1588) LXI Time Stamped Data LXI Event Log LXI Consortium: The LXI Consortium is a US not-for-profit 501(c) organization made up of test and measurement companies. The Consortium's primary purpose is to create, maintain, develop and promote the adoption of the LXI Standard. The LXI Consortium is open to all test and measurement companies, and participation by industry professionals, systems integrators, and government representatives is encouraged. The first Consortium meeting was held November 17–18, 2004. Membership is divided into four levels: Strategic (Keysight Technologies, Pickering Interfaces and Rohde & Schwarz), Participating, Advisory, and Informational. LXI Consortium: Consortium members meet several times a year at PlugFests held around the world, where issues regarding the LXI Standard are discussed face-to-face in working-group meetings. The public is invited to attend tutorials intended for users and manufacturers interested in joining the LXI Consortium. These meetings also provide an opportunity for vendors to certify new products as LXI conformant by having an independent testing authority present at the meeting. LXI Consortium: The Consortium's standard development efforts are performed by volunteers working through a number of committees and technical working groups (WGs). Work progression is managed by use of Statement of Work (SoW) documents that set out the reasons and objectives for new work items. New standards are voted on by the members of the consortium once the work is completed. Specification History: In September 2005, the LXI Consortium released Version 1.0 of the LXI Standard. Just one year later, Version 1.1 followed with minor corrections and clarifications. In October 2007, the Consortium adopted Version 1.2; its major focus was discovery mechanisms. A discovery mechanism allows the test system to recognize and register a new instrument plugged into the system so the user and other instruments can work with it. Specifically, LXI 1.2 included enhancements to support mDNS discovery of LXI devices. 
Version 1.3 incorporates the 2008 version of IEEE 1588 for synchronizing time among instruments. All the revisions of the LXI standard provide backward compatibility, and systems can be created which contain any of the versions of the standard. Specification History: The latest version of the standard (and older versions) is available on the Consortium Specification page of its website. As of November 2016, the standard is at Revision 1.5. Version 1.5 of the standard has made VXI-11 based discovery methods optional (as an Extended Function), removed unnecessary recommendations and re-organised Extended Functions into separate documents. Conformance testing: The LXI Consortium is unique amongst test and measurement standards in requiring LXI Devices to be tested to the standard. The compliance requirements ensure that, at the point of test, devices are fully conformant to the standard, giving users confidence that there will be no compatibility issues between vendors' products. To support this compliance regime, an LXI Test Suite is available. After a vendor joins the LXI Consortium they can gain access to the Consortium's Conformance Test Suite software, which they can use as a pre-test before submitting the product to the Consortium for compliance testing. Once a product is ready to submit, a vendor can choose to have its product tested at a PlugFest or an approved test house. A Technical Justification route allows vendors to certify compliance of derivative products by submitting test results to the Consortium to show that the device has been tested on the LXI Test Suite. The consortium provides guidance on when the Technical Justification route can be used and when a new formal test is required. Conformant instruments: The number of LXI-compliant instruments has grown dramatically, starting from a handful of products from just two vendors in December 2005. This expansion in instrument availability has encouraged migration to LXI from older instrument platforms. As of January 2017, the Consortium had certified over 3600 instruments as being compliant with the Standard.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**EN 10034** EN 10034: The EN 10034 "Structural steel I and H sections. Tolerances on shape and dimensions" is a European Standard. The standard is developed by the technical committee ECISS/TC 103 - Structural steels other than reinforcements.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cor Caroli** Cor Caroli: Cor Caroli is a binary star designated Alpha Canum Venaticorum or α Canum Venaticorum. The International Astronomical Union uses the name "Cor Caroli" specifically for the brighter star of the binary. Alpha Canum Venaticorum is the brightest point of light in the northern constellation of Canes Venatici. Nomenclature: α Canum Venaticorum, Latinised to Alpha Canum Venaticorum, is the system's Bayer designation. The brighter of the two stars is designated α2 Canum Venaticorum, the fainter α1 Canum Venaticorum. In the Western world Alpha Canum Venaticorum had no name until the 17th century, when it was named Cor Caroli, which means "Charles's Heart". There has been some uncertainty whether it was named in honour of King Charles I of England, who was executed in 1649 during the English Civil War, or of his son, Charles II, who was restored to the throne in 1660. The name was coined in 1660 by Sir Charles Scarborough, physician to Charles II, who claimed the star seemed to shine exceptionally brightly on the night of Charles II's return to England. In Star Names, R.H. Allen claimed that Scarborough suggested the name to Edmond Halley and intended it to refer to Charles II. However, Robert Burnham Jr. notes that "the attribution of the name to Halley appears in a report published by J. E. Bode at Berlin in 1801, but seems to have no other verification". In Star Tales, Ian Ridpath points out that the name's first appearance on a star map was in the 1673 chart of Francis Lamb, who labelled it Cor Caroli Regis Martyris ('the heart of Charles the martyred king'), clearly indicating that it was seen as referring to Charles I. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Cor Caroli for the star α2 Canum Venaticorum. Nomenclature: In Chinese, 常陳 (Cháng Chén), meaning Imperial Guards, refers to an asterism consisting of α Canum Venaticorum, 10 Canum Venaticorum, Beta Canum Venaticorum, 6 Canum Venaticorum, 2 Canum Venaticorum and 67 Ursae Majoris. Consequently, the Chinese name for Alpha Canum Venaticorum itself is 常陳一 (Cháng Chén yī, English: the First Star of Imperial Guards). From this Chinese name, the name Chang Chen was derived. Stellar properties: Alpha Canum Venaticorum is a binary star with a combined apparent magnitude of 2.81. The two stars are 19.6 arcseconds apart in the sky and are easily resolved in small telescopes. The system lies approximately 110 light-years from the Sun. It marks the northern vertex of the asterism known as the Great Diamond or the Diamond of Virgo. Stellar properties: α2 Canum Venaticorum α2 Canum Venaticorum has a spectral type of A0, and has an apparent visual magnitude which varies between 2.84 and 2.98, with a period of 5.47 days. It is a chemically peculiar star with a strong magnetic field, about 5,000 times as strong as the Earth's, and is also classified as an Ap/Bp star. Its atmosphere has overabundances of some elements, such as silicon, mercury and europium. This is thought to be due to some elements sinking down into the star under the force of gravity while others are elevated by radiation pressure. This star is the prototype of a class of variable stars, the so-called α2 Canum Venaticorum variables. 
The strong magnetic field of these stars is believed to produce starspots of enormous extent. Due to these starspots the brightness of α2 Canum Venaticorum stars varies considerably during their rotation. Stellar properties: α1 Canum Venaticorum α1 Canum Venaticorum is an F-type main-sequence star. It is considerably fainter than its companion and has an apparent visual magnitude of approximately 5.60. Namesakes: Cor Caroli was a U.S. Navy Crater-class cargo ship named after the star.
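As a rough check of the quoted figures, the combined magnitude of the pair follows from adding the fluxes implied by the two components' magnitudes; taking α2 Canum Venaticorum near the middle of its 2.84–2.98 range and α1 Canum Venaticorum at 5.60 (both values from the text, the choice of the mean being an assumption) reproduces a combined magnitude close to the quoted 2.81.

```python
import math

def combined_magnitude(m1: float, m2: float) -> float:
    # Convert each magnitude to a relative flux, add the fluxes, convert back.
    flux = 10 ** (-0.4 * m1) + 10 ** (-0.4 * m2)
    return -2.5 * math.log10(flux)

# alpha-2 CVn at roughly 2.89 (mid-range), alpha-1 CVn at 5.60
print(round(combined_magnitude(2.89, 5.60), 2))  # ~2.80, consistent with 2.81
```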
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mathematics education in the United Kingdom** Mathematics education in the United Kingdom: Mathematics education in the United Kingdom is largely carried out at ages 5–16 at primary school and secondary school (though basic numeracy is taught at an earlier age). However, voluntary mathematics education in the UK takes place from 16 to 18 in sixth forms and other forms of further education, whilst adults can study the subject at universities and in higher education more widely. Mathematics is not taught uniformly, as exams and the syllabus vary across the countries of the United Kingdom, notably in Scotland. History: The School Certificate was established in 1918, for education up to 16, with the Higher School Certificate for education up to 18; these were both established by the Secondary Schools Examinations Council (SSEC), itself set up in 1917. 1950s The Association of Teachers of Mathematics was founded in 1950. History: 1960s The Joint Mathematical Council was formed in 1963 to improve the teaching of mathematics in UK schools. The Ministry of Education had been created in 1944, which became the Department of Education and Science in 1964. The Schools Council was formed in 1964, which regulated the syllabus of exams in the UK, and existed until 1984. The exam body Mathematics in Education and Industry in Trowbridge was formed in 1963 by the Mathematical Association; the first exam, Additional Mathematics, was first set in 1965. The Institute of Mathematics and its Applications was formed in 1964, and is the UK's chartered body for mathematicians, being based in Essex. History: Before calculators, many calculations would be done by hand with slide rules and log tables. 1970s Decimal Day, on 15 February 1971, meant that less time was spent on numerical calculations at school. The metric system has curtailed lengthy calculations as well; the US, conversely, largely does not use the metric system. 1980s Electronic calculators began to be owned by school pupils from the early 1980s, becoming widespread from the mid-1980s. Parents and teachers believed that calculators would diminish abilities of mental arithmetic. Scientific calculators came to the aid of those working out logarithms and trigonometric functions. Since 1988, exams in Mathematics at age sixteen, except in Scotland, have been provided by the GCSE. 1990s From the 1990s, mainly the late 1990s, computers became integrated into mathematics education at primary and secondary levels in the UK. History: On Wednesday 18 November 1992 exam league tables were published for 108 local authorities, in England, under the Education Secretary John Patten, Baron Patten. The tables showed GCSE and A-levels for all 4,400 state secondary schools in England. Independent schools' results were shown from 1993, and would include truancy rates. Left-wing parent groups and teachers' unions had opposed the move. Labour said it showed the government's simplistic approach to education standards, adding that raw results cannot reflect the real achievement of schools. The Liberal Democrats were not opposed, but thought that any information being provided was limited. Ofsted would be brought in the next year by Education Minister Emily Blatch, Baroness Blatch. The specialist schools programme was introduced in the mid-1990s in England. Fifteen new City Technology Colleges (CTCs) from the early 1990s often focussed on Maths. 
History: In 1996 the United Kingdom Mathematics Trust was formed to run the British Mathematical Olympiad, which is administered by the British Mathematical Olympiad Subtrust. The United Kingdom Mathematics Trust summer school is held at The Queen's Foundation in Birmingham each year. 2000s Mathematics and Computing Colleges were introduced in 2002 as part of the widened specialist schools programme; by 2007 there were 222 of these in England. The Excellence in Cities report was launched in March 1999, which led to the Advanced Extension Award in 2002, replacing the S-level for the top 10% of A-level candidates. Since 2008, the AEA has only been available for Maths, provided by Edexcel; the scheme was introduced when the A* grade was introduced, and was provided until 2018. History: In a 2006 House of Lords report on science education, the Lib Dem chair, Baroness Sharp, took an interest in the reduced participation in Maths in schools; she had worked with the Science Policy Research Unit at the University of Sussex. The 2001 report by the Lords Science and Technology Committee led to the National Science Learning Centre (Science Learning Centres) at the National STEM Centre at the University of York in 2006, with a Maths centre at the University of Southampton. History: The National Centre for Excellence in the Teaching of Mathematics was founded in 2006, after the Smith Report, and is now based in Sheffield. History: The National Higher Education Science, Technology, Engineering and Mathematics (HE STEM) Programme was founded in August 2009 by HEFCE and HEFCW; the scheme had six regions across England and Wales, working with the universities of Bath, Birmingham, Bradford, Manchester Metropolitan, Southampton and Swansea; it was funded by £21m, and developed by the University of Birmingham STEM Education Centre; the scheme finished in July 2012. Also involved were the MSOR centre of the HEA (now Advance HE) Subject Centre, and the Centre for Excellence in University Wide Mathematics and Statistics Support at Loughborough University. History: 2010s The HEA subject centres closed in August 2011. History: Mathematics free schools were opened in 2014 - the King's College London Mathematics School in Lambeth, and Exeter Mathematics School in Devon; both were selective sixth form colleges; others opened at Liverpool and Lancaster; more selective sixth form maths schools are to open in Cambridge, Surrey, and Durham. A newer curriculum for Maths GCSE (and English) was introduced in September 2015, with a new grading scale of 1–9. Nations: England Mathematics education in England up to the age of 19 is provided in the National Curriculum by the Department for Education, which was established in 2010. Early years education is called the Early Years Foundation Stage in England, which includes arithmetic. In England there are 24,300 schools, of which 3,400 are secondary. The National Curriculum for mathematics aims to ensure that all pupils: become fluent in the fundamentals of mathematics, including through varied and frequent practice with increasingly complex problems over time, so that pupils develop conceptual understanding and the ability to recall and apply knowledge rapidly and accurately. reason mathematically by following a line of enquiry, conjecturing relationships and generalisations, and developing an argument, justification or proof using mathematical language. 
Nations: can solve problems by applying their mathematics to various routine and non-routine problems with increasing sophistication, including breaking down problems into a series of more straightforward steps and persevering in seeking solutions. Mathematics is an interconnected subject in which pupils must be able to move fluently between representations of mathematical ideas. It is essential to everyday life, critical to science, technology and engineering, and necessary for financial literacy and most forms of employment. A high-quality mathematics education therefore provides a foundation for understanding the world, the ability to reason mathematically, an appreciation of the beauty and power of mathematics, and a sense of enjoyment and curiosity about the subject. Pupils should build connections across mathematical ideas to develop fluency, mathematical reasoning and competence in solving increasingly sophisticated problems. They should also apply their mathematical knowledge in science, geography, computing and other subjects. Nations: Wales Wales takes the GCSE and A-level in Mathematics, but has its own Department for Education and Skills. Wales does not produce school league tables. Wales has 1550 schools, of which 180 are secondary. Scotland Education Scotland, formed in 2011, regulates education at school in Scotland, with qualifications monitored by the Scottish Qualifications Authority (SQA); the Mathematics syllabus follows the country's Curriculum for Excellence. Scotland does not produce school league tables. Scotland has 5,050 schools, of which 350 are secondary. Northern Ireland Northern Ireland is the only country in the UK to retain academically selective schools - it has sixty-nine grammar schools. Mathematics education is provided by the Department of Education (DENI), with further education provided by the Department for Employment and Learning. Northern Ireland has 1120 schools, of which 190 are secondary. Primary level: The Department of Education and Science set up an Assessment of Performance Unit in 1976 to monitor attainment of children at a national level, with standards of mathematics being monitored from 1978 by the National Foundation for Educational Research (NFER). Before this time, assessment of primary school standards had not been carried out at a national level. Children at primary school are expected to know their times tables. Children are taught about long division, fractions, decimals, averages, ratios, negative numbers, and long multiplication. Around 90% of primary school teachers in the UK gave up any formal study of Mathematics at the age of 16. Secondary level: Study of Mathematics is compulsory up to the school leaving age. The Programme for International Student Assessment coordinated by the OECD currently ranks the knowledge and skills of British 15-year-olds in mathematics and science above OECD averages. In 2011, the Trends in International Mathematics and Science Study (TIMSS) rated 13–14-year-old pupils in England and Wales 10th in the world for maths and 9th for science. Secondary level: Mathematics teachers Qualifications vary by region; the East Midlands and London have the most degree-qualified Maths teachers and North East England the least. For England, about 40% have a maths degree and around 20% have a BSc degree with QTS or a BEd degree. Around 20% have a PGCE, and around 10% have no higher qualification than A-level Maths. 
Secondary level: For schools without sixth forms, only around 30% of Maths teachers have a Maths degree, but for schools with sixth forms and sixth form colleges around 50% have a Maths degree. There are around 27,500 Maths teachers in England, of whom around 21,000 are Maths specialists; there are around 31,000 science teachers in England. Sixth-form level: At A-level, participation by gender is broadly mixed; about 60% of A-level entrants are male, and around 40% are female. Further Mathematics is an additional course available at A-level. A greater proportion of Further Maths entrants are female (30%) than Physics entrants (15%); Physics at A-level is overwhelmingly a male subject. Sixth-form level: Professor Robert Coe, Director of the Centre for Evaluation and Monitoring (CEM) at Durham University, conducted research on grade inflation. By 2007, 25% of Maths A-level grades were an A; he found that an A grade at A-level would have been a grade B in 1996 and a grade C in 1988. The Labour government wanted to expand higher education, so it required 'proof' that academic standards at A-level were rising, or at least not falling, allowing higher education to expand to accommodate this wider apparent academic achievement. University level: Admission to Mathematics at university in the UK usually requires three A-levels, often at good grades. Maths at university has been studied predominantly by males for decades. There are around 42–43,000 Maths undergraduates at British universities, with around 27,000 being male and around 16–17,000 being female. Mathematics at university is also taught for other physical sciences and Engineering, but far fewer women than men are taught on these types of courses. Broadcasting: Educational series on television have included: Mathematics and Life on BBC TV in 1961; Mathematics in Action on BBC1 from the early 1970s to the late 1980s, with Malcolm Bevan, Prof John Crank, Kenneth Wigley, and Prof John Crane; Maths Today on BBC1 in the early 1970s with Brenda Briggs, the wife of Trevor Jack Cole, and Stewart Gartside; Maths Workshop on BBC1 in the early 1970s with Jim Boucher and Michael Holt (author); and Middle School Mathematics in the late 1960s with Alan Tammadge, the President of the Mathematical Association in 1978. Results by region in England: Of all A-level entrants at Key Stage 5, 23% take Maths A-level, with 16% of all female entrants and 30% of all male entrants; 4% of all entrants take Further Maths, with 2% of female entrants and 6% of male entrants. By number of A-level entries, 11.0% were Maths A-levels, with 7.7% female and 15.0% male. In England in 2016 there were 81,533 entries for Maths A-level, with 65,474 from the state sector; there were 14,848 entries for Further Maths, with 10,376 from the state sector. Entries for Further Maths in 2016 by region - South East 2987 East of England 1270 North West 1111 South West 1070 West Midlands 868 East Midlands 774 Yorkshire and the Humber 749 North East 414 Results by LEA in England: Results shown are for 2016. In the 1980s, some areas with low Maths participation at A-level lost all sixth forms at the area's comprehensive schools, being replaced with stand-alone sixth form colleges, such as in Manchester and Portsmouth; this course of action may have helped in attracting qualified Maths teachers to those areas. The supply of qualified (QTS in England and Wales) Maths teachers in the UK is largely a postcode lottery. 
Lowest number of entries for Maths A-level The north of England (except Lancashire) has a worse record for Mathematics entries at A-level than other regions. Results by LEA in England: Knowsley 6 Portsmouth 51 Salford 66 (Manchester entered 647 as a comparison) Halton 70 Middlesbrough 79 South Tyneside 85 Barnsley 96 Highest number of entries for Maths A-level Hampshire 2573 Hertfordshire 2039 Kent 1775 Surrey 1668 Essex 1499 Lancashire 1492 Birmingham 1403 Buckinghamshire 1284 Barnet 1189. Trafford entered 505, which is high for a small borough and almost the same number as Cumbria. Kirklees entered 661, which is more than Sheffield's 596; Kirklees is a much smaller borough by population than Sheffield. Results by LEA in England: Lowest number of entries for Further Maths A-level Knowsley 0 (Knowsley only entered 61 A-level exams in 2016) Sandwell 5 Blackburn with Darwen 6 Salford 7 Portsmouth 8 North East Lincolnshire 9 Middlesbrough 11 Stoke-on-Trent 15 Barnsley 15 Halton 16 Southampton 16 Torbay 16 Bury 18 Merton 18 Rochdale 19 Highest number of entries for Further Maths A-level Hampshire and Hertfordshire are the top two for Maths and Further Maths: Hampshire 381 Hertfordshire 370 Kent 297 Surrey 276 Essex 260 Buckinghamshire 244 Lancashire 206
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Automatic shotgun** Automatic shotgun: An automatic shotgun is an automatic firearm that fires shotgun shells (thereby making it a shotgun) and uses some of the energy of each shot to automatically cycle the action and load a new round. It will fire repeatedly until the trigger is released or ammunition runs out. Automatic shotguns have a very limited range, but provide tremendous firepower at close range. Design: Automatic shotguns generally employ mechanisms very similar to other kinds of automatic weapons. There are several methods of operation, with the most common being gas, recoil, and blowback operated: Gas operation uses the pressure of the gas (created by the burning propellant) behind the projectile to unlock the bolt assembly and then move it rearward. Blowback operation uses the backward force applied by the projectile (due to Newton's Third Law of Motion) to retract the bolt assembly. Design: Recoil operation uses the backward force to retract the entire barrel and bolt assembly, which unlock at the rear of the barrel's path. Each of these methods uses springs to return the retracted parts to their forward positions and restart the cycle. Many automatic shotguns are capable of selective fire, meaning they can fire in multiple modes (semi-automatic, burst, and sometimes fully automatic). Ammunition: They generally store ammunition in detachable box or drum magazines in order to decrease reloading time, whereas most pump-action and semi-automatic shotguns use under-barrel tubular magazines. Ammunition: Automatic shotgun ammunition choices are slightly limited because the fired shot must provide sufficient recoil energy to reliably cycle the action. This means they are not compatible with low-powered rounds, e.g. less-than-lethal ammunition. The most common shotgun shell used in combat shotguns contains 00 buckshot, 8 to 10 lead balls, which is very effective against unarmored targets. Strengths and weaknesses: A standard shotgun shot fires multiple small projectiles at once, increasing the chances of hitting the target. Shotguns have a short effective range of about 50–70 metres (160–230 ft), but provide a lot of firepower at close range. Automatic fire enhances these effects, due to the increase in the rate of fire. Strengths and weaknesses: Automatics typically have much shorter barrels than pump-action shotguns (especially hunting shotguns). Short-barreled shotguns have a very high chance of hitting close range targets, and can even hit multiple targets in one area, which is ideal for close combat situations. Long-barreled guns, such as long pump-action shotguns, are more accurate and have increased range, which is ideal for hunting and sporting purposes. Automatic shotguns are generally viewed as less reliable than manually operated shotguns, because there are more moving parts and increased chances of error. If any one piece fails, it will most likely halt the operation and cause damage to the weapon and/or user. Automatic weapons are also more susceptible to jamming and negative effects from dirtiness. Use: Automatic shotguns are intended for use as military combat shotguns. They typically have a high rate of fire and relatively low recoil, making them ideal for engaging targets in a fast-paced, close range combat situation. They are able to fulfill many different combat roles due to the wide variety of shotgun ammunition available. Automatic shotguns have not seen much use in the United States, but have been slightly more popular in some other countries. 
List: AAI CAWS Atchisson AA-12 Daewoo USAS-12 FAS-173 Gordon CSWS Heckler & Koch HK CAWS LW-3 Pancor Jackhammer Saiga-12 (if converted to fully automatic fire) Smith & Wesson AS-3 Special Operations Weapon Remington 7188 Vepr-12 (when converted to full auto)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Collectanea Mathematica** Collectanea Mathematica: Collectanea Mathematica (Collect. Math.) is a mathematical journal of the Institute of Mathematics of the University of Barcelona (IMUB), published by Springer since 2011 in three issues per year. It publishes original research papers in all fields of pure and applied mathematics. History: Collectanea Mathematica was founded in 1948 by José M. Orts and is the oldest mathematical journal in Spain. Thanks to the contributions of prominent mathematicians in Catalonia, such as Ferran Sunyer Balaguer, and of eminent international collaborators (Wilhelm Blaschke, Hugo Hadwiger, Gaston Julia and Ernst Witt), the journal came to play a central role in Spanish scientific publishing, under the direction of Enrique Linés (1969–1971), who was also president of the Real Sociedad Matemática Española, and especially under Josep Teixidor (1971–1986), president of the Societat Catalana de Ciències Físiques, Químiques i Matemàtiques (1968–1973). During the period 1987–2007, with Joan Cerdà as the Main Editor, the journal took several steps to further improve its scientific quality. In 2003, the recently created Institute of Mathematics of the University of Barcelona took charge of its publication, providing Collectanea Mathematica with stable economic and scientific support. As a consequence, coverage of Collectanea Mathematica by the Journal Citation Reports (JCR) began with the 2005 volume, and its first JCR impact factor was reported for 2007. In 2008 Rosa Maria Miró Roig became editor-in-chief. In this period the journal changed its editorial policy, and Springer has been its publisher since 2011. Since 2021 the editor-in-chief has been Carlos D'Andrea. Abstracting and indexing: Collectanea Mathematica is indexed in databases such as Current Contents (Physical, Chemical and Earth Sciences), ISI Web of Science, MathSciNet, Zentralblatt MATH, and Scopus.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Barwise compactness theorem** Barwise compactness theorem: In mathematical logic, the Barwise compactness theorem, named after Jon Barwise, is a generalization of the usual compactness theorem for first-order logic to a certain class of infinitary languages. It was stated and proved by Barwise in 1967. Statement: Let A be a countable admissible set. Let L be an A-finite relational language. Suppose Γ is a set of L_A-sentences, where Γ is a Σ₁ set with parameters from A, and every A-finite subset of Γ is satisfiable. Then Γ is satisfiable.
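For readers who prefer symbolic notation, the statement can be transcribed into LaTeX as follows; this is only a restatement of the prose above, with no content added:

```latex
\textbf{Theorem (Barwise, 1967).}
Let $A$ be a countable admissible set and let $L$ be an $A$-finite
relational language. Suppose $\Gamma$ is a set of $L_{A}$-sentences
that is $\Sigma_1$ with parameters from $A$, and that every $A$-finite
subset of $\Gamma$ is satisfiable. Then $\Gamma$ is satisfiable.
```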
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Minkowski problem for polytopes** Minkowski problem for polytopes: In the geometry of convex polytopes, the Minkowski problem for polytopes concerns the specification of the shape of a polytope by the directions and measures of its facets. The theorem that every polytope is uniquely determined up to translation by this information was proven by Hermann Minkowski; it has been called "Minkowski's theorem", although the same name has also been given to several unrelated results of Minkowski. The Minkowski problem for polytopes should also be distinguished from the Minkowski problem, on specifying convex shapes by their curvature. Specification and necessary conditions: For any d-dimensional polytope, one can specify its collection of facet directions and measures by a finite set of d-dimensional nonzero vectors, one per facet, pointing perpendicularly outward from the facet, with length equal to the (d−1)-dimensional measure of its facet. To be a valid specification of a bounded polytope, these vectors must span the full d-dimensional space, and no two can be parallel with the same sign. Additionally, their sum must be zero; this requirement corresponds to the observation that, when the polytope is projected perpendicularly onto any hyperplane, the projected measure of its top facets and its bottom facets must be equal, because the top facets project to the same set as the bottom facets. Minkowski's uniqueness theorem: It is a theorem of Hermann Minkowski that these necessary conditions are sufficient: every finite set of vectors that spans the whole space, has no two parallel with the same sign, and sums to zero describes the facet directions and measures of a polytope. Moreover, the shape of this polytope is uniquely determined by this information: every two polytopes that give rise to the same set of vectors are translations of each other. Blaschke sums: The sets of vectors representing two polytopes can be added by taking the union of the two sets and, when the two sets contain parallel vectors with the same sign, replacing them by their sum. The resulting operation on polytope shapes is called the Blaschke sum. It can be used to decompose arbitrary polytopes into simplices, and centrally symmetric polytopes into parallelotopes. Generalizations: With certain additional information (including separating the facet direction and size into a unit vector and a real number, which may be negative, providing an additional bit of information per facet) it is possible to generalize these existence and uniqueness results to certain classes of non-convex polyhedra. It is also possible to specify three-dimensional polyhedra uniquely by the direction and perimeter of their facets. Minkowski's theorem and the uniqueness of this specification by direction and perimeter have a common generalization: whenever two three-dimensional convex polyhedra have the property that their facets have the same directions and no facet of one polyhedron can be translated into a proper subset of the facet with the same direction of the other polyhedron, the two polyhedra must be translates of each other. However, this version of the theorem does not generalize to higher dimensions.
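The necessary conditions above are straightforward to verify mechanically for a given set of facet vectors. The following is a minimal sketch, assuming NumPy; the function name and the tolerance handling are illustrative choices, not taken from the literature on the problem:

```python
import numpy as np

def is_valid_minkowski_data(vectors, tol=1e-9):
    """Check Minkowski's necessary conditions on a set of facet vectors.

    Each vector points perpendicularly outward from a facet and has length
    equal to that facet's (d-1)-dimensional measure.  The conditions are:
    the vectors span d-dimensional space, no two point in the same
    direction, and they sum to zero.
    """
    V = np.asarray(vectors, dtype=float)
    d = V.shape[1]

    # The vectors must span the full d-dimensional space.
    if np.linalg.matrix_rank(V, tol=tol) < d:
        return False

    # No two vectors may be parallel with the same sign (same direction).
    units = V / np.linalg.norm(V, axis=1, keepdims=True)
    for i in range(len(units)):
        for j in range(i + 1, len(units)):
            if np.allclose(units[i], units[j], atol=tol):
                return False

    # The vectors must sum to zero.
    return bool(np.allclose(V.sum(axis=0), 0.0, atol=tol))

# Facet data of a unit square in the plane: four unit outward normals.
print(is_valid_minkowski_data([(1, 0), (-1, 0), (0, 1), (0, -1)]))  # True
```

By Minkowski's theorem, any vector set passing this check is realized by a polytope that is unique up to translation.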
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Union-closed sets conjecture** Union-closed sets conjecture: The union-closed sets conjecture is an open problem in combinatorics posed by Péter Frankl in 1979. A family of sets is said to be union-closed if the union of any two sets from the family belongs to the family. The conjecture states: For every finite union-closed family of sets, other than the family containing only the empty set, there exists an element that belongs to at least half of the sets in the family. Union-closed sets conjecture: Professor Timothy Gowers has called this "one of the best known open problems in combinatorics" and has said that the conjecture "feels as though it ought to be easy (and as a result has attracted a lot of false proofs over the years). A good way to understand why it isn't easy is to spend an afternoon trying to prove it. That clever averaging argument you had in mind doesn't work ..." Example: The family of sets consists of five different sets and is union-closed. The element 1 is contained in three of the five sets (and so is the element 2), thus the conjecture holds in this case. Basic results: It is easy to show that if a union-closed family contains a singleton {a} (as in the example above), then the element a must occur in at least half of the sets of the family. Basic results: If there is a counterexample to the conjecture, then there is also a counterexample consisting only of finite sets. Therefore, without loss of generality, we will assume that all sets in the given union-closed family are finite. Given a finite non-empty set U, the power set P(U) consisting of all subsets of U is union-closed. Each element of U is contained in exactly half of the subsets of U. Therefore, in general we cannot ask for an element contained in more than half of the sets of the family: the bound of the conjecture is sharp. Equivalent forms: Intersection formulation The union-closed sets conjecture is true if and only if every intersection-closed set system X containing more than one set has an element of U(X) in at most half of the sets of X, where U(X) is the universe set, i.e. the union of all members of the system X. The following facts show the equivalence. Firstly, we show that a set system is union-closed if and only if its complement is intersection-closed. Lemma 1. If X is a union-closed family of sets with universe U(X), the family of complement sets to sets in X is closed under intersection. Proof. Equivalent forms: We define the complement of the set system X as Xᶜ := {U(X)−S : S∈X}. Let X1, X2 be arbitrary sets in X, so U(X)−X1 and U(X)−X2 are both in Xᶜ. Since X is union-closed, X1∪X2 = X3 is in X, and therefore the complement of X3, U(X)−X3, is in Xᶜ; it consists of the elements in neither X1 nor X2, and this is exactly the intersection of the complements of X1 and X2, (U(X)−X1)∩(U(X)−X2). Therefore, X is union-closed if and only if its complement Xᶜ is intersection-closed. Equivalent forms: Secondly, we show that if a set system contains an element in at least half the sets, then its complement has an element in at most half. Lemma 2. A set system X contains an element in at least half of its sets if and only if the complement set system Xᶜ contains an element in at most half of its sets. Proof. Trivial. Equivalent forms: Therefore, if X is a union-closed family of sets, the family of complement sets to sets in X relative to the universe U(X) is closed under intersection, and an element that belongs to at least half of the sets of X belongs to at most half of the complement sets. 
Thus, an equivalent form of the conjecture (the form in which it was originally stated) is that, for any intersection-closed family of sets that contains more than one set, there exists an element that belongs to at most half of the sets in the family. Equivalent forms: Lattice formulation Although stated above in terms of families of sets, Frankl's conjecture has also been formulated and studied as a question in lattice theory. A lattice is a partially ordered set in which for two elements x and y there is a unique greatest element less than or equal to both of them (the meet of x and y) and a unique least element greater than or equal to both of them (the join of x and y). The family of all subsets of a set S, ordered by set inclusion, forms a lattice in which the meet is represented by the set-theoretic intersection and the join is represented by the set-theoretic union; a lattice formed in this way is called a Boolean lattice. Equivalent forms: The lattice-theoretic version of Frankl's conjecture is that in any finite lattice there exists an element x that is not the join of any two smaller elements, and such that the number of elements greater than or equal to x totals at most half the lattice, with equality only if the lattice is a Boolean lattice. As Abe (2000) shows, this statement about lattices is equivalent to the Frankl conjecture for union-closed sets: each lattice can be translated into a union-closed set family, and each union-closed set family can be translated into a lattice, such that the truth of the Frankl conjecture for the translated object implies the truth of the conjecture for the original object. This lattice-theoretic version of the conjecture is known to be true for several natural subclasses of lattices but remains open in the general case. Equivalent forms: Graph-theoretic formulation Another equivalent formulation of the union-closed sets conjecture uses graph theory. In an undirected graph, an independent set is a set of vertices no two of which are adjacent to each other; an independent set is maximal if it is not a subset of a larger independent set. In any graph, the "heavy" vertices that appear in more than half of the maximal independent sets must themselves form an independent set. So, if the graph is non-empty, there always exists at least one non-heavy vertex, a vertex that appears in at most half of the maximal independent sets. The graph formulation of the union-closed sets conjecture states that every finite non-empty graph contains two adjacent non-heavy vertices. It is automatically true when the graph contains an odd cycle, because the independent set of all heavy vertices cannot cover all the edges of the cycle. Therefore, the more interesting case of the conjecture is for bipartite graphs, which have no odd cycles. Another equivalent formulation of the conjecture is that, in every bipartite graph, there exist two vertices, one on each side of the bipartition, such that each of these two vertices belongs to at most half of the graph's maximal independent sets. This conjecture is known to hold for chordal bipartite graphs, bipartite series–parallel graphs, and bipartite graphs of maximum degree three. Partial results: The conjecture has been proven for many special cases of union-closed set families. In particular, it is known to be true for families of at most 46 sets. families of sets whose union has at most 11 elements. families of sets in which the smallest set has one or two elements. 
Partial results: families of at least (1/2 − ε)·2^n subsets of an n-element set, for some constant ε > 0, according to an unpublished preprint. In November 2022, a preprint was posted claiming a proof of the following statement: For every union-closed family, other than the family containing only the empty set, there exists an element that belongs to at least a fraction of 0.01 of the sets in the family. Partial results: The proof used probabilistic and entropy methods to obtain such a bound. A conjecture within this paper implied a possible improvement to a lower bound fraction of 0.38197. The union-closed sets conjecture itself corresponds to the fraction 0.5. A few days later, three preprints were posted that established the lower bound fraction of (3 − √5)/2. These were shortly followed by two other preprints increasing the lower bound to 0.38234. History: Péter Frankl stated the conjecture in terms of intersection-closed set families in 1979, and so the conjecture is usually credited to him and is sometimes referred to as the Frankl conjecture. The earliest publication of the union-closed version of the conjecture appears to be by Duffus (1985). A history of the work on the conjecture up to 2013 was published by Bruhn & Schaudt (2015).
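For small families the conjecture can also be checked directly by exhaustive counting. The sketch below is a plain-Python illustration; the function names and the sample family are made up for the example and are not taken from the article:

```python
from itertools import combinations

def is_union_closed(family):
    """True if the union of any two member sets is also a member."""
    fam = {frozenset(s) for s in family}
    return all(a | b in fam for a, b in combinations(fam, 2))

def frequent_element(family):
    """Return an element lying in at least half of the sets, or None."""
    fam = [frozenset(s) for s in family]
    universe = set().union(*fam)
    for x in universe:
        if 2 * sum(x in s for s in fam) >= len(fam):
            return x
    return None

# A small union-closed family; here every element of {1, 2, 3} happens
# to lie in at least half of the five sets.
family = [{1}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]
print(is_union_closed(family))   # True
print(frequent_element(family))  # prints one such element
```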
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scatterplot smoothing** Scatterplot smoothing: In statistics, several scatterplot smoothing methods are available to fit a function through the points of a scatterplot to best represent the relationship between the variables. Scatterplot smoothing: Scatterplots may be smoothed by fitting a line to the data points in a diagram. This line attempts to display the non-random component of the association between the variables in a 2D scatter plot. Smoothing attempts to separate the non-random behaviour in the data from the random fluctuations, removing or reducing these fluctuations, and allows prediction of the response based on the value of the explanatory variable. Smoothing is normally accomplished by using any one of the techniques mentioned below. Scatterplot smoothing: A straight line (simple linear regression); a quadratic or a polynomial curve; local regression; smoothing splines. The smoothing curve is chosen so as to provide the best fit in some sense, often defined as the fit that results in the minimum sum of the squared errors (a least squares criterion).
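As a concrete illustration of the polynomial option combined with the least-squares criterion mentioned above, the following sketch fits a quadratic smoother to synthetic scatter data; it assumes NumPy, and the data are invented purely for the example:

```python
import numpy as np

# Synthetic scatter data: a smooth quadratic trend plus random noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = 0.5 * x**2 - 3.0 * x + 4.0 + rng.normal(scale=4.0, size=x.size)

# Quadratic smoother chosen by minimising the sum of squared errors.
coeffs = np.polyfit(x, y, deg=2)
smooth = np.polyval(coeffs, x)   # the estimated non-random component

# `smooth` can be drawn over the scatterplot, or evaluated at new values
# of the explanatory variable to predict the response.
print(np.round(coeffs, 2))
```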
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Theta criterion** Theta criterion: The theta-criterion (also named θ-criterion) is a constraint on x-bar theory that was first proposed by Noam Chomsky (1981) as a rule within the system of principles of the government and binding theory, called theta-theory (θ-theory). As theta-theory is concerned with the distribution and assignment of theta-roles (a.k.a. thematic roles), the theta-criterion describes the specific match between arguments and theta-roles (θ-roles) in logical form (LF): Being a constraint on x-bar theory, the criterion aims to parse out ill-formed sentences. Thus, if the number or categories of arguments in a sentence does not meet the theta-role assigner's requirement in any given sentence, that sentence will be deemed ungrammatical. (Carnie 2007, p. 224). In other words, theta-criterion sorts sentences into grammatical and ungrammatical bins based on c-selection and s-selection. Applied: Theta grid A theta-role is a status of thematic relation (Chomsky 1981, p. 35). In other words, a theta-role describes the connection of meaning between a predicate or a verb and a constituent selected by this predicate. The number, types and positions of theta-roles that a lexicon assigns is encoded in its lexical entry (Chomsky 1981, p. 38) and must be satisfied in syntactic structure following Projection Principle. The selection of a constituent by a head based on meaning is called s-selection (semantic-selection) and those based on grammatical categories are called c-selection. (Sportiche, Koopman & Stabler 2014, p. 141) Such information can be expressed with a theta grid. Applied: In the example below the verb 'love' has two theta-roles to assign: agent (the entity who loves) and theme (the entity being loved). In accordance with the theta-criterion, each theta-role must have its argument counterpart. In Example 1a, Megan and Kevin are the arguments that the verb assigns the agent and theme theta-roles to, respectively. Because there is a one-to-one mapping of argument to theta-role, the theta-criterion is satisfied and the sentence is deemed grammatical (Carnie 2007, p. 225). Below are two examples where the theta-criterion has not been fulfilled and are thus ungrammatical. Applied: Example 1b is ungrammatical (marked with *) because there are more theta-roles available than there are arguments. The theta-role theme does not have an argument matched to it. On the other hand, in example (1c), there are more arguments than theta-roles. Both theta-roles are matched to arguments (Megan with Agent and Jason with theme), but there is an argument left without a corresponding theta-role (Kevin has no theta-role) (Carnie 2007). Thus for reasons of inequality in number between theta-roles and arguments, with either having more than the other, the result will be ungrammatical. Applied: Consequence on movement Since trace transmits theta-role, movements resulting in non-local relations between theta-role assigners and receivers in surface structure don't violate theta-criterion. This allows us to generate sentences with DP-raising, head movement, wh-movement, etc. However, if a phrase occupies a theta-position (complement or selected subject) in D-structure, it can no longer move to another theta-position or it will receive two theta-roles (Chomsky 1981, p. 46). Special cases: Transitivity Verbs that can be either transitive or intransitive at the first glance could present a problem for the theta-criterion. 
For a transitive verb, such as "hit," we assign the theta-roles agent and theme to the arguments, as shown in (2b), (2c), and (2d): The action of hitting here requires an animate subject, an agent, carry out the action. The theme is then someone or something that undergoes the action. Special cases: For an intransitive verb, such as "arrive," we assign the theta-role theme to the sole argument, since "Mary" is the one that undergoes the action: The theta-criterion assigns the theta-role in the underlying structure, as shown by (3c). The past-tense morpheme then requires a subject at the spec-TP position and forces the movement of "Mary," as shown by (3d). Special cases: A verb like "eat" can choose to take an object, as shown in (4): For this type of verb, the potential object is usually semantically limited and therefore can be inferred from the verb at a default value (Rice 1988, pp. 203–4). For instance, for (4a), the listener/reader automatically assumes that John ate "something." What necessitates the object in (4c) is the distinction from the default meaning achieved by specifying what John ate (Rice 1988, p. 208). As a result, this type of verb can be treated the same as transitive verbs. The theta-roles of "agent" and "theme" can be assigned: In summary, by assigning the correct theta-roles, theta-criterion is able to tell the real intransitive verbs, such as "arrive" apart from verbs that can appear intransitive, such as "eat." PRO and pro PRO PRO (pronounced 'big pro') is a null pronoun phrase that occurs in a position where it does not get case (or gets null case) but takes the theta-role assigned by the non-finite verb to its subject. PRO's meaning is determined by the precedent DP that controls it (Carnie 2012, p. 429). As theta criterion states that each argument is assigned a theta-role, and those theta-roles must consist of a syntactic category that the verb selects even when there is no overt subject. This is where PRO comes in to help satisfy theta-criterion by appearing as the null subject attaining the appropriate theta role (Camacho 2013). Special cases: Below is an example containing PRO in a sentence: (5) a. Jeani is likely [ti to leave]. b. Jean is reluctant [PRO to leave]. (Carnie 2012, p. 430 (1)) Example (5a) is a raising sentence, and in contrast, (5b) is a control sentence, meaning it does not involve any DP movement. The PRO, which is a "null DP" is in the subject position of the embedded clause. (6) a. Jean wants Briani [ti to leave]. b. Jean persuaded Brian [PRO to leave]. Special cases: (Carnie 2012, p. 430 (4) Similarly, example (6a) is a raising-to-object sentence; "Brian" raises to the object position of the verb want. In contrast, (6b) is an object control sentence.(Carnie 2012, p. 430) The verb persuade has three theta-roles to assign: "agent" to Jean, "theme" to Brian, and "proposition" to the clause [PRO to leave]. There is no raising, but there is a PRO in the subject position of the embedded clause that takes the verb leave's only theta-role, "agent". Since Brian does not receive theta-role from leave, it only bears one theta-role, nor does PRO receive a second theta-role from persuade. Every argument only receives one theta-role, and every theta-role of the two predicates is assigned to only one argument. The sentence is thus grammatical. 
Special cases: pro pro, also known as little pro, is an empty category that occurs in a subject or object position of a finite clause (finite clauses must contain a verb which shows tense) in languages like Italian, Spanish, Portuguese, Chinese, and some Arabic dialects (Jaeggli & Safir 1989; Rizzi 1986). pro differs from PRO in that it contains case. The meaning of pro is determined not by its antecedent but by verb agreement in the sentence. The DP is 'dropped' from a sentence if its reference can be recovered from the context.(Carnie 2012, p. 429) For example: The verb restituire 'give back' assigns three theta-roles, but there are only two overt arguments in the sentence. It ultimately satisfies theta-criterion because the role, theme, is taken by a pro, whose existence can be proved by the properly bound reflexive pronoun se stessi. Compare (7a) with (7b) below: (7b) *Un bravo psicanalista può dare aiuto a se stessi. Special cases: A good psychoanalyst can give help to oneself. 'A good psychoanalyst can give help to themselves'. (Adapted from Rizzi (1986, p. 504 (13b))) When the reflexive pronoun se stessi 'themselves' doesn't have a proper antecedent to co-refer to, the sentence can't be grammatical. This indicates that in (7a) se stessi must have a proper antecedent in the sentence—the pro that takes the theme role. Cognate object Cognate objects are nominal complements of their cognate verbs that are normally intransitive. For example, (8) John died a gruesome death. Special cases: (Jones 1988, p. 89 (1a)) Such a structure posed a problem for theta-criterion because normally the verb assigns only one theta-role, theme, which is already taken by the DP, "John." The sentence should be thus predicted ill-formed. To explain the phenomenon, one way is to re-categorize such a verb as "die" so as to change the way it assigns theta-roles. For that purpose, (8) can be interpreted as follows: (9) John met a gruesome death. Special cases: (Jones 1988, p. 91 (5a)) Or John underwent a gruesome death. If the verb "die" is essentially similar to the operation-verb "meet," the cognate objects should be assigned a theta-role—one restricted to the nominal form of the verb head (Jones 1988). In other words, "die" is now classified as a potentially transitive verb, assigning two theta-roles, agent to "John" and theme to "a gruesome death." Such a possibility is falsified, however, because cognate object constructions cannot be passivized (Jones 1988). Special cases: (10) a. A gruesome death was met by John. b. *A gruesome death was died by John. (Jones 1988, p. 91 (6a), p=92 (11a)) As we can no longer consider verbs that take cognate objects the same as potentially transitive verbs, Jones (1988) argues, based on the framework of Zubizarreta (1982), that cognate objects are adjuncts rather than arguments, having the same meaning and structure as the manner adverbs in (12b). Such an analysis restores cognate objects to the group of arguments satisfying the theta-criterion, as adjuncts, by definition, are not counted as arguments and therefore need not be restricted by theta criterion. The tree form (11c) shows the adjunct DP in its relative position. Special cases: Deverbal nouns Deverbal nouns are derived from verbs and thus assign theta-roles as their verb stems do. For example, (12) (i) the barbarians' destruction of Rome (ii) Rome's destruction (by the barbarians) (iii) the destruction of Rome (by the barbarians) (iv) *the barbarian's destruction ((Chomsky 1981, p. 
104 (7))) According to Chomsky (1981, p. 104), the constructions in (12) are analogous to "the barbarians destroyed Rome" and destruction needs to assign theta-roles in line with the theta-criterion. It assigns "agent" to the barbarians and "theme" to Rome, so (i) is fine. The verb "destroy" alone doesn't obligatorily assign a theta-role to its subject, so (ii) and (iii) are well-formed, too. However, "destroy" must assign a "theme", so (iv) is ruled out. Alternative approaches: The theta-criterion experienced its golden age in the 1980s, when people discussed its application to various languages and structures and developed many other theories from it. However, after the minimalist program challenged some cornerstones of government and binding theory, people started to question the validity of this criterion, especially the number of theta-roles allowed to be taken by an argument. Hornstein and Boeckx, for example, proposed that there is no upper limit on the number of theta-roles an argument can receive during derivation. In their theory, the function of selecting the correct number of arguments is shouldered by case theory, and theta-roles are just features on verbs that need to be checked (Hornstein 1999).
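The one-to-one matching between theta-roles and arguments that the criterion demands can be caricatured in a few lines of code. The toy sketch below is purely illustrative (it ignores movement, PRO, adjuncts and the other refinements discussed above) and simply compares the size of a verb's theta grid with the number of arguments in the clause:

```python
def satisfies_theta_criterion(theta_grid, arguments):
    """Toy check: every theta-role gets exactly one argument and vice versa,
    so the two lists must be the same length."""
    return len(theta_grid) == len(arguments)

# (1a) "Megan loves Kevin" -- one argument per theta-role: grammatical.
print(satisfies_theta_criterion(["agent", "theme"], ["Megan", "Kevin"]))           # True
# (1b) "*Megan loves" -- the theme role is left unassigned.
print(satisfies_theta_criterion(["agent", "theme"], ["Megan"]))                    # False
# (1c) "*Megan loves Jason Kevin" -- one argument receives no theta-role.
print(satisfies_theta_criterion(["agent", "theme"], ["Megan", "Jason", "Kevin"]))  # False
```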
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Emerald** Emerald: Emerald is a gemstone and a variety of the mineral beryl (Be3Al2(SiO3)6) colored green by trace amounts of chromium or sometimes vanadium. Beryl has a hardness of 7.5–8 on the Mohs scale. Most emeralds have lots of material trapped inside during the gem's formation, so their toughness (resistance to breakage) is classified as generally poor. Emerald is a cyclosilicate. Etymology: The word "emerald" is derived (via Old French: esmeraude and Middle English: emeraude), from Vulgar Latin: esmaralda/esmaraldus, a variant of Latin smaragdus, which was via Ancient Greek: σμάραγδος (smáragdos; "green gem") from a Semitic language. According to Webster's Dictionary the term emerald was first used in the 14th century. Properties determining value: Emeralds, like all colored gemstones, are graded using four basic parameters known as "the four Cs": color, clarity, cut and carat weight. Normally, in the grading of colored gemstones, color is by far the most important criterion. However, in the grading of emeralds, clarity is considered a close second. A fine emerald must possess not only a pure verdant green hue as described below, but also a high degree of transparency to be considered a top gemstone.This member of the beryl family ranks among the traditional "big four" gems along with diamonds, rubies and sapphires.In the 1960s, the American jewelry industry changed the definition of emerald to include the green vanadium-bearing beryl. As a result, vanadium emeralds purchased as emeralds in the United States are not recognized as such in the United Kingdom and Europe. In America, the distinction between traditional emeralds and the new vanadium kind is often reflected in the use of terms such as "Colombian emerald". Properties determining value: Color In gemology, color is divided into three components: hue, saturation, and tone. Emeralds occur in hues ranging from yellow-green to blue-green, with the primary hue necessarily being green. Yellow and blue are the normal secondary hues found in emeralds. Only gems that are medium to dark in tone are considered emeralds; light-toned gems are known instead by the species name green beryl. The finest emeralds are approximately 75% tone on a scale where 0% tone is colorless and 100% is opaque black. In addition, a fine emerald will be saturated and have a hue that is bright (vivid). Gray is the normal saturation modifier or mask found in emeralds; a grayish-green hue is a dull-green hue. Properties determining value: Clarity Emeralds tend to have numerous inclusions and surface-breaking fissures. Unlike diamonds, where the loupe standard (i.e., 10× magnification) is used to grade clarity, emeralds are graded by eye. Thus, if an emerald has no visible inclusions to the eye (assuming normal visual acuity) it is considered flawless. Stones that lack surface breaking fissures are extremely rare and therefore almost all emeralds are treated ("oiled", see below) to enhance the apparent clarity. The inclusions and fissures within an emerald are sometimes described as jardin (French for garden), because of their mossy appearance. Imperfections are unique for each emerald and can be used to identify a particular stone. Eye-clean stones of a vivid primary green hue (as described above), with no more than 15% of any secondary hue or combination (either blue or yellow) of a medium-dark tone, command the highest prices. The relative non-uniformity motivates the cutting of emeralds in cabochon form, rather than faceted shapes. 
Faceted emeralds are most commonly given an oval cut, or the signature emerald cut, a rectangular cut with facets around the top edge. Properties determining value: Treatments Most emeralds are oiled as part of the post-lapidary process, in order to fill in surface-reaching cracks so that clarity and stability are improved. Cedar oil, having a similar refractive index, is often used in this widely adopted practice. Other liquids, including synthetic oils and polymers with refractive indexes close to that of emeralds, such as Opticon, are also used. The least expensive emeralds are often treated with epoxy resins, which are effective for filling stones with many fractures. These treatments are typically applied in a vacuum chamber under mild heat, to open the pores of the stone and allow the fracture-filling agent to be absorbed more effectively. The U.S. Federal Trade Commission requires the disclosure of this treatment when an oil-treated emerald is sold. The use of oil is traditional and largely accepted by the gem trade, although oil-treated emeralds are worth much less than untreated emeralds of similar quality. Untreated emeralds must also be accompanied by a certificate from a licensed, independent gemology laboratory. Other treatments, for example the use of green-tinted oil, are not acceptable in the trade. Gems are graded on a four-step scale; none, minor, moderate and highly enhanced. These categories reflect levels of enhancement, not clarity. A gem graded none on the enhancement scale may still exhibit visible inclusions. Laboratories apply these criteria differently. Some gemologists consider the mere presence of oil or polymers to constitute enhancement. Others may ignore traces of oil if the presence of the material does not improve the look of the gemstone. Emerald mines: Emeralds in antiquity were mined in Ancient Egypt at locations on Mount Smaragdus since 1500 BC, and India and Austria since at least the 14th century AD. The Egyptian mines were exploited on an industrial scale by the Roman and Byzantine Empires, and later by Islamic conquerors. Mining in Egypt ceased with the discovery of the Colombian deposits. Today, only ruins remain in Egypt.Colombia is by far the world's largest producer of emeralds, constituting 50–95% of the world production, with the number depending on the year, source and grade. Emerald production in Colombia has increased drastically in the last decade, increasing by 78% from 2000 to 2010. The three main emerald mining areas in Colombia are Muzo, Coscuez, and Chivor. Rare "trapiche" emeralds are found in Colombia, distinguished by ray-like spokes of dark impurities. Emerald mines: Zambia is the world's second biggest producer, with its Kafubu River area deposits (Kagem Mines) about 45 km (28 mi) southwest of Kitwe responsible for 20% of the world's production of gem-quality stones in 2004. In the first half of 2011, the Kagem Mines produced 3.74 tons of emeralds.Emeralds are found all over the world in countries such as Afghanistan, Australia, Austria, Brazil, Bulgaria, Cambodia, Canada, China, Egypt, Ethiopia, France, Germany, India, Kazakhstan, Madagascar, Mozambique, Namibia, Nigeria, Norway, Pakistan, Russia, Somalia, South Africa, Spain, Switzerland, Tanzania, the United States, Zambia, and Zimbabwe. In the US, emeralds have been found in Connecticut, Montana, Nevada, North Carolina, and South Carolina. In 1998, emeralds were discovered in the Yukon Territory of Canada. 
Emerald mines: Origin determinations Since the onset of concerns regarding diamond origins, research has been conducted to determine if the mining location could be determined for an emerald already in circulation. Traditional research used qualitative guidelines such as an emerald's color, style and quality of cutting, type of fracture filling, and the anthropological origins of the artifacts bearing the mineral to determine the emerald's mine location. More recent studies using energy dispersive X-ray spectroscopy methods have uncovered trace chemical element differences between emeralds, including ones mined in close proximity to one another. American gemologist David Cronin and his colleagues have extensively examined the chemical signatures of emeralds resulting from fluid dynamics and subtle precipitation mechanisms, and their research demonstrated the chemical homogeneity of emeralds from the same mining location and the statistical differences that exist between emeralds from different mining locations, including those between the three locations: Muzo, Coscuez, and Chivor, in Colombia, South America. Synthetic emerald: Both hydrothermal and flux-growth synthetics have been produced, and a method has been developed for producing an emerald overgrowth on colorless beryl. The first commercially successful emerald synthesis process was that of Carroll Chatham, likely involving a lithium vanadate flux process, as Chatham's emeralds do not have any water and contain traces of vanadate, molybdenum and vanadium. The other large producer of flux emeralds was Pierre Gilson Sr., whose products have been on the market since 1964. Gilson's emeralds are usually grown on natural colorless beryl seeds, which are coated on both sides. Growth occurs at the rate of 1 mm per month, a typical seven-month growth run produces emerald crystals 7 mm thick.Hydrothermal synthetic emeralds have been attributed to IG Farben, Nacken, Tairus, and others, but the first satisfactory commercial product was that of Johann Lechleitner of Innsbruck, Austria, which appeared on the market in the 1960s. These stones were initially sold under the names "Emerita" and "Symeralds", and they were grown as a thin layer of emerald on top of natural colorless beryl stones. Later, from 1965 to 1970, the Linde Division of Union Carbide produced completely synthetic emeralds by hydrothermal synthesis. According to their patents (attributable to E.M. Flanigen), acidic conditions are essential to prevent the chromium (which is used as the colorant) from precipitating. Also, it is important that the silicon-containing nutrient be kept away from the other ingredients to prevent nucleation and confine growth to the seed crystals. Growth occurs by a diffusion-reaction process, assisted by convection. The largest producer of hydrothermal emeralds today is Tairus, which has succeeded in synthesizing emeralds with chemical composition similar to emeralds in alkaline deposits in Colombia, and whose products are thus known as “Colombian created emeralds” or “Tairus created emeralds”. Luminescence in ultraviolet light is considered a supplementary test when making a natural versus synthetic determination, as many, but not all, natural emeralds are inert to ultraviolet light. Many synthetics are also UV inert. Synthetic emerald: Synthetic emeralds are often referred to as "created", as their chemical and gemological composition is the same as their natural counterparts. The U.S. 
Federal Trade Commission (FTC) has very strict regulations as to what can and what cannot be called a "synthetic" stone. The FTC says: "§ 23.23(c) It is unfair or deceptive to use the word "laboratory-grown", "laboratory-created", "[manufacturer name]-created", or "synthetic" with the name of any natural stone to describe any industry product unless such industry product has essentially the same optical, physical, and chemical properties as the stone named." In culture and lore: Emerald is regarded as the traditional birthstone for May as well as the traditional gemstone for the astrological sign of Cancer (June/July). Traditional alchemical lore ascribes several uses and characteristics to emeralds: The virtue of the Emerald is to counteract poison. They say that if a venomous animal should look at it, it will become blinded. The gem also acts as a preservative against epilepsy; it cures leprosy, strengthens sight and memory, checks copulation, during which act it will break, if worn at the time on the finger. In culture and lore: According to French writer Brantôme (c. 1540–1614), Hernán Cortés had the text Inter Natos Mulierum non surrexit major ("Among those born of woman there hath not arisen a greater," Matthew 11:11) engraved on one of the emeralds which he had looted from Mexico, in reference to John the Baptist. Brantôme considered engraving such a beautiful and simple product of nature sacrilegious, and considered this act the cause for Cortés's loss in 1541 of an extremely precious pearl (to which he dedicated a work, A beautiful and incomparable pearl), and even for the death of King Charles IX of France, who died (1574) soon afterward. In American author L. Frank Baum's 1900 children's novel The Wonderful Wizard of Oz, and the 1939 MGM film adaptation, the protagonist must travel to an Emerald City to meet the eponymous character, the Wizard. In culture and lore: The chief deity of one of India's most famous temples, the Meenakshi Amman Temple in Madurai, is the goddess Meenakshi, whose idol is traditionally thought to be made of emerald.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**High Efficiency Video Coding** High Efficiency Video Coding: High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is a video compression standard designed as part of the MPEG-H project as a successor to the widely used Advanced Video Coding (AVC, H.264, or MPEG-4 Part 10). In comparison to AVC, HEVC offers from 25% to 50% better data compression at the same level of video quality, or substantially improved video quality at the same bit rate. It supports resolutions up to 8192×4320, including 8K UHD, and unlike the primarily 8-bit AVC, HEVC's higher fidelity Main 10 profile has been incorporated into nearly all supporting hardware. High Efficiency Video Coding: While AVC uses the integer discrete cosine transform (DCT) with 4×4 and 8×8 block sizes, HEVC uses both integer DCT and discrete sine transform (DST) with varied block sizes between 4×4 and 32×32. The High Efficiency Image Format (HEIF) is based on HEVC. Concept: In most ways, HEVC is an extension of the concepts in H.264/MPEG-4 AVC. Both work by comparing different parts of a frame of video to find areas that are redundant, both within a single frame and between consecutive frames. These redundant areas are then replaced with a short description instead of the original pixels. The primary changes for HEVC include the expansion of the pattern comparison and difference-coding areas from 16×16 pixel to sizes up to 64×64, improved variable-block-size segmentation, improved "intra" prediction within the same picture, improved motion vector prediction and motion region merging, improved motion compensation filtering, and an additional filtering step called sample-adaptive offset filtering. Effective use of these improvements requires much more signal processing capability for compressing the video, but has less impact on the amount of computation needed for decompression. Concept: HEVC was standardized by the Joint Collaborative Team on Video Coding (JCT-VC), a collaboration between the ISO/IEC MPEG and ITU-T Study Group 16 VCEG. The ISO/IEC group refers to it as MPEG-H Part 2 and the ITU-T as H.265. The first version of the HEVC standard was ratified in January 2013 and published in June 2013. The second version, with multiview extensions (MV-HEVC), range extensions (RExt), and scalability extensions (SHVC), was completed and approved in 2014 and published in early 2015. Extensions for 3D video (3D-HEVC) were completed in early 2015, and extensions for screen content coding (SCC) were completed in early 2016 and published in early 2017, covering video containing rendered graphics, text, or animation as well as (or instead of) camera-captured video scenes. In October 2017, the standard was recognized by a Primetime Emmy Engineering Award as having had a material effect on the technology of television.HEVC contains technologies covered by patents owned by the organizations that participated in the JCT-VC. Implementing a device or software application that uses HEVC may require a license from HEVC patent holders. The ISO/IEC and ITU require companies that belong to their organizations to offer their patents on reasonable and non-discriminatory licensing (RAND) terms. Patent licenses can be obtained directly from each patent holder, or through patent licensing bodies, such as MPEG LA, Access Advance, and Velos Media. Concept: The combined licensing fees currently offered by all of the patent licensing bodies are higher than for AVC. 
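As a rough illustration of the block transforms compared in the Concept section above, the sketch below applies a separable floating-point 2-D DCT-II to an 8×8 block with NumPy; HEVC itself specifies integer approximations of the DCT (and DST) at block sizes from 4×4 up to 32×32, so this is a conceptual sketch rather than the standard's actual transform:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] /= np.sqrt(2)
    return m * np.sqrt(2 / n)

def transform_block(block):
    """Apply the 2-D DCT-II to a square pixel block (rows, then columns)."""
    d = dct_matrix(block.shape[0])
    return d @ block @ d.T

# A random 8x8 block of pixel values standing in for prediction residuals.
rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
coeffs = transform_block(block)

# Energy tends to concentrate in the low-frequency (top-left) coefficients,
# which is what makes such transforms useful for compression.
print(np.round(coeffs[:2, :2], 1))
```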
The licensing fees are one of the main reasons HEVC adoption has been low on the web and is why some of the largest tech companies (Amazon, AMD, Apple, ARM, Cisco, Google, Intel, Microsoft, Mozilla, Netflix, Nvidia, and more) have joined the Alliance for Open Media, which finalized royalty-free alternative video coding format AV1 on March 28, 2018. History: The HEVC format was jointly developed by more than a dozen organisations across the world. The majority of active patent contributions towards the development of the HEVC format came from five organizations: Samsung Electronics (4,249 patents), General Electric (1,127 patents), M&K Holdings (907 patents), NTT (878 patents), and JVC Kenwood (628 patents). Other patent holders include Fujitsu, Apple, Canon, Columbia University, KAIST, Kwangwoon University, MIT, Sungkyunkwan University, Funai, Hikvision, KBS, KT and NEC. History: Previous work In 2004, the ITU-T Video Coding Experts Group (VCEG) began a major study of technology advances that could enable creation of a new video compression standard (or substantial compression-oriented enhancements of the H.264/MPEG-4 AVC standard). In October 2004, various techniques for potential enhancement of the H.264/MPEG-4 AVC standard were surveyed. In January 2005, at the next meeting of VCEG, VCEG began designating certain topics as "Key Technical Areas" (KTA) for further investigation. A software codebase called the KTA codebase was established for evaluating such proposals. The KTA software was based on the Joint Model (JM) reference software that was developed by the MPEG & VCEG Joint Video Team for H.264/MPEG-4 AVC. Additional proposed technologies were integrated into the KTA software and tested in experiment evaluations over the next four years.Two approaches for standardizing enhanced compression technology were considered: either creating a new standard or creating extensions of H.264/MPEG-4 AVC. The project had tentative names H.265 and H.NGVC (Next-generation Video Coding), and was a major part of the work of VCEG until its evolution into the HEVC joint project with MPEG in 2010.The preliminary requirements for NGVC were the capability to have a bit rate reduction of 50% at the same subjective image quality compared with the H.264/MPEG-4 AVC High profile, and computational complexity ranging from 1/2 to 3 times that of the High profile. NGVC would be able to provide 25% bit rate reduction along with 50% reduction in complexity at the same perceived video quality as the High profile, or to provide greater bit rate reduction with somewhat higher complexity.The ISO/IEC Moving Picture Experts Group (MPEG) started a similar project in 2007, tentatively named High-performance Video Coding. An agreement of getting a bit rate reduction of 50% had been decided as the goal of the project by July 2007. Early evaluations were performed with modifications of the KTA reference software encoder developed by VCEG. By July 2009, experimental results showed average bit reduction of around 20% compared with AVC High Profile; these results prompted MPEG to initiate its standardization effort in collaboration with VCEG. History: Joint Collaborative Team on Video Coding MPEG and VCEG established a Joint Collaborative Team on Video Coding (JCT-VC) to develop the HEVC standard. 
History: Standardization A formal joint Call for Proposals on video compression technology was issued in January 2010 by VCEG and MPEG, and proposals were evaluated at the first meeting of the MPEG & VCEG Joint Collaborative Team on Video Coding (JCT-VC), which took place in April 2010. A total of 27 full proposals were submitted. Evaluations showed that some proposals could reach the same visual quality as AVC at only half the bit rate in many of the test cases, at the cost of 2–10× increase in computational complexity, and some proposals achieved good subjective quality and bit rate results with lower computational complexity than the reference AVC High profile encodings. At that meeting, the name High Efficiency Video Coding (HEVC) was adopted for the joint project. Starting at that meeting, the JCT-VC integrated features of some of the best proposals into a single software codebase and a "Test Model under Consideration", and performed further experiments to evaluate various proposed features. The first working draft specification of HEVC was produced at the third JCT-VC meeting in October 2010. Many changes in the coding tools and configuration of HEVC were made in later JCT-VC meetings.On January 25, 2013, the ITU announced that HEVC had received first stage approval (consent) in the ITU-T Alternative Approval Process (AAP). On the same day, MPEG announced that HEVC had been promoted to Final Draft International Standard (FDIS) status in the MPEG standardization process.On April 13, 2013, HEVC/H.265 was approved as an ITU-T standard. The standard was formally published by the ITU-T on June 7, 2013, and by the ISO/IEC on November 25, 2013.On July 11, 2014, MPEG announced that the 2nd edition of HEVC will contain three recently completed extensions which are the multiview extensions (MV-HEVC), the range extensions (RExt), and the scalability extensions (SHVC).On October 29, 2014, HEVC/H.265 version 2 was approved as an ITU-T standard. It was then formally published on January 12, 2015.On April 29, 2015, HEVC/H.265 version 3 was approved as an ITU-T standard.On June 3, 2016, HEVC/H.265 version 4 was consented in the ITU-T and was not approved during a vote in October 2016.On December 22, 2016, HEVC/H.265 version 4 was approved as an ITU-T standard. History: Patent licensing On September 29, 2014, MPEG LA announced their HEVC license which covers the essential patents from 23 companies. The first 100,000 "devices" (which includes software implementations) are royalty free, and after that the fee is $0.20 per device up to an annual cap of $25 million. This is significantly more expensive than the fees on AVC, which were $0.10 per device, with the same 100,000 waiver, and an annual cap of $6.5 million. MPEG LA does not charge any fee on the content itself, something they had attempted when initially licensing AVC, but subsequently dropped when content producers refused to pay it. The license has been expanded to include the profiles in version 2 of the HEVC standard.When the MPEG LA terms were announced, commenters noted that a number of prominent patent holders were not part of the group. Among these were AT&T, Microsoft, Nokia, and Motorola. Speculation at the time was that these companies would form their own licensing pool to compete with or add to the MPEG LA pool. Such a group was formally announced on March 26, 2015, as HEVC Advance. 
The terms, covering 500 essential patents, were announced on July 22, 2015, with rates that depend on the country of sale, type of device, HEVC profile, HEVC extensions, and HEVC optional features. Unlike the MPEG LA terms, HEVC Advance reintroduced license fees on content encoded with HEVC, through a revenue sharing fee. The initial HEVC Advance license had a maximum royalty rate of US$2.60 per device for Region 1 countries and a content royalty rate of 0.5% of the revenue generated from HEVC video services. Region 1 countries in the HEVC Advance license include the United States, Canada, the European Union, Japan, South Korea, Australia, New Zealand, and others. Region 2 countries are countries not listed in the Region 1 country list. The HEVC Advance license had a maximum royalty rate of US$1.30 per device for Region 2 countries. Unlike MPEG LA, there was no annual cap. On top of this, HEVC Advance also charged a royalty rate of 0.5% of the revenue generated from video services encoding content in HEVC. When they were announced, there was considerable backlash from industry observers about the "unreasonable and greedy" fees on devices, which were about seven times MPEG LA's fees. Added together, a device would require licenses costing $2.80, twenty-eight times as expensive as AVC, as well as license fees on the content. This led to calls for "content owners [to] band together and agree not to license from HEVC Advance". Others argued the rates might cause companies to switch to competing standards such as Daala and VP9. On December 18, 2015, HEVC Advance announced changes in the royalty rates. The changes include a reduction in the maximum royalty rate for Region 1 countries to US$2.03 per device, the creation of annual royalty caps, and a waiving of royalties on content that is free to end users. The annual royalty caps for a company are US$40 million for devices, US$5 million for content, and US$2 million for optional features. On February 3, 2016, Technicolor SA announced that they had withdrawn from the HEVC Advance patent pool and would be directly licensing their HEVC patents. HEVC Advance previously listed 12 patents from Technicolor. Technicolor announced that they had rejoined on October 22, 2019. On November 22, 2016, HEVC Advance announced a major initiative, revising their policy to allow software implementations of HEVC to be distributed directly to consumer mobile devices and personal computers royalty free, without requiring a patent license. On March 31, 2017, Velos Media announced their HEVC license, which covers the essential patents from Ericsson, Panasonic, Qualcomm Incorporated, Sharp, and Sony. As of April 2019, the MPEG LA HEVC patent list is 164 pages long. History: Patent holders The following organizations currently hold the most active patents in the HEVC patent pools listed by MPEG LA and HEVC Advance. Versions: Versions of the HEVC/H.265 standard, using the ITU-T approval dates. Version 1: (April 13, 2013) First approved version of the HEVC/H.265 standard, containing the Main, Main 10, and Main Still Picture profiles. Version 2: (October 29, 2014) Second approved version of the HEVC/H.265 standard, which adds 21 range extensions profiles, two scalable extensions profiles, and one multi-view extensions profile. Version 3: (April 29, 2015) Third approved version of the HEVC/H.265 standard, which adds the 3D Main profile.
Version 4: (December 22, 2016) Fourth approved version of the HEVC/H.265 standard, which adds seven screen content coding extensions profiles, three high throughput extensions profiles, and four scalable extensions profiles. Version 5: (February 13, 2018) Fifth approved version of the HEVC/H.265 standard, which adds additional SEI messages that include omnidirectional video SEI messages, a Monochrome 10 profile, a Main 10 Still Picture profile, and corrections to various minor defects in the prior content of the specification. Version 6: (June 29, 2019) Sixth approved version of the HEVC/H.265 standard, which adds additional SEI messages that include SEI manifest and SEI prefix messages, and corrections to various minor defects in the prior content of the specification. Version 7: (November 29, 2019) Seventh approved version of the HEVC/H.265 standard, which adds additional SEI messages for fisheye video information and annotated regions, and also includes corrections to various minor defects in the prior content of the specification. Version 8: As of August 2021, Version 8 is in "Additional Review" status while Version 7 is in force. Implementations and products: 2012 On February 29, 2012, at the 2012 Mobile World Congress, Qualcomm demonstrated a HEVC decoder running on an Android tablet, with a Qualcomm Snapdragon S4 dual-core processor running at 1.5 GHz, showing H.264/MPEG-4 AVC and HEVC versions of the same video content playing side by side. In this demonstration, HEVC reportedly showed almost a 50% bit rate reduction compared with H.264/MPEG-4 AVC. Implementations and products: 2013 On February 11, 2013, researchers from MIT demonstrated the world's first published HEVC ASIC decoder at the International Solid-State Circuits Conference (ISSCC) 2013. Their chip was capable of decoding a 3840×2160 video stream at 30 fps in real time, consuming under 0.1 W of power. On April 3, 2013, Ateme announced the availability of the first open source implementation of a HEVC software player based on the OpenHEVC decoder and the GPAC video player, which are both licensed under the LGPL. The OpenHEVC decoder supports the Main profile of HEVC and can decode 1080p at 30 fps video using a single core CPU. A live transcoder that supports HEVC and used in combination with the GPAC video player was shown at the Ateme booth at the NAB Show in April 2013. On July 23, 2013, MulticoreWare announced the x265 HEVC Encoder Library and made its source code available under the GPL v2 license. On August 8, 2013, Nippon Telegraph and Telephone announced the release of their HEVC-1000 SDK software encoder, which supports the Main 10 profile, resolutions up to 7680×4320, and frame rates up to 120 fps. On November 14, 2013, DivX developers released information on HEVC decoding performance using an Intel i7 CPU at 3.5 GHz with 4 cores and 8 threads. The DivX 10.1 Beta decoder was capable of 210.9 fps at 720p, 101.5 fps at 1080p, and 29.6 fps at 4K. On December 18, 2013, ViXS Systems announced shipments of their XCode 6400 SoC (not to be confused with Apple's Xcode IDE for macOS), which was the first SoC to support the Main 10 profile of HEVC. Implementations and products: 2014 On April 5, 2014, at the NAB show, eBrisk Video, Inc.
and Altera Corporation demonstrated an FPGA-accelerated HEVC Main10 encoder that encoded 4Kp60, 10-bit video in real time, using a dual-Xeon E5-2697-v2 platform. On August 13, 2014, Ittiam Systems announced availability of its third generation H.265/HEVC codec with 4:2:2 12-bit support. On September 5, 2014, the Blu-ray Disc Association announced that the 4K Blu-ray Disc specification would support HEVC-encoded 4K video at 60 fps, the Rec. 2020 color space, high dynamic range (PQ and HLG), and 10-bit color depth. 4K Blu-ray Discs have a data rate of at least 50 Mbit/s and disc capacity up to 100 GB. 4K Blu-ray Discs and players became available for purchase in 2015 or 2016. On September 9, 2014, Apple announced the iPhone 6 and iPhone 6 Plus, which support HEVC/H.265 for FaceTime over cellular. On September 18, 2014, Nvidia released the GeForce GTX 980 (GM204) and GTX 970 (GM204), which include Nvidia NVENC, the world's first HEVC hardware encoder in a discrete graphics card. On October 31, 2014, Microsoft confirmed that Windows 10 would support HEVC out of the box, according to a statement from Gabriel Aul, the leader of Microsoft Operating Systems Group's Data and Fundamentals Team. Windows 10 Technical Preview Build 9860 added platform level support for HEVC and Matroska. On November 3, 2014, Android Lollipop was released with out of the box support for HEVC using Ittiam Systems' software. Implementations and products: 2015 On January 5, 2015, ViXS Systems announced the XCode 6800, which is the first SoC to support the Main 12 profile of HEVC. On January 5, 2015, Nvidia officially announced the Tegra X1 SoC with full fixed-function HEVC hardware decoding. On January 22, 2015, Nvidia released the GeForce GTX 960 (GM206), which includes the world's first full fixed-function HEVC Main/Main10 hardware decoder in a discrete graphics card. On February 23, 2015, Advanced Micro Devices (AMD) announced that the UVD found in its Carrizo APUs would make them the first x86-based CPUs with a HEVC hardware decoder. On February 27, 2015, VLC media player version 2.2.0 was released with support for HEVC playback. The corresponding versions on Android and iOS are also able to play HEVC. Implementations and products: On March 31, 2015, VITEC announced the MGW Ace, which was the first fully hardware-based portable HEVC encoder, providing mobile HEVC encoding. On August 5, 2015, Intel launched Skylake products with full fixed-function Main/8-bit decoding/encoding and hybrid/partial Main10/10-bit decoding. On September 9, 2015, Apple announced the Apple A9 chip, first used in the iPhone 6S, its first processor with a hardware HEVC decoder supporting Main 8 and 10. This feature would not be unlocked until the release of iOS 11 in 2017. Implementations and products: 2016 On April 11, 2016, full HEVC (H.265) support was announced in the newest MythTV version (0.28). On August 30, 2016, Intel officially announced 7th generation Core CPUs (Kaby Lake) with full fixed-function HEVC Main10 hardware decoding support. On September 7, 2016, Apple announced the Apple A10 chip, first used in the iPhone 7, which included a hardware HEVC encoder supporting Main 8 and 10. This feature would not be unlocked until the release of iOS 11 in 2017. On October 25, 2016, Nvidia released the GeForce GTX 1050 Ti (GP107) and GeForce GTX 1050 (GP107), which include a full fixed-function HEVC Main10/Main12 hardware decoder.
Implementations and products: 2017 On June 5, 2017, Apple announced HEVC (H.265) support in macOS High Sierra, iOS 11, tvOS, HTTP Live Streaming, and Safari. On June 25, 2017, Microsoft released a free HEVC app extension for Windows 10, enabling some Windows 10 devices with HEVC decoding hardware to play video using the HEVC format inside any app. On September 19, 2017, Apple released iOS 11 and tvOS 11 with HEVC encoding and decoding support. On September 25, 2017, Apple released macOS High Sierra with HEVC encoding and decoding support. Implementations and products: On September 28, 2017, GoPro released the Hero6 Black action camera, with 4K60p HEVC video encoding. On October 17, 2017, Microsoft removed HEVC decoding support from Windows 10 with the Version 1709 Fall Creators Update, making HEVC available instead as a separate, paid download from the Microsoft Store. On November 2, 2017, Nvidia released the GeForce GTX 1070 Ti (GP104), which includes a full fixed-function HEVC Main10/Main12 hardware decoder. Implementations and products: 2018 On September 20, 2018, Nvidia released the GeForce RTX 2080 (TU104), which includes a full fixed-function HEVC Main 4:4:4 12 hardware decoder. Implementations and products: 2022 On October 25, 2022, Chrome version 107 was released, which supports HEVC hardware decoding out of the box on all platforms where supported hardware is present. Implementations and products: Browser support HEVC is implemented in these web browsers: Android browser (since version 5 from November 2014) Safari (since version 11 from September 2017) Edge (since version 77 from July 2017, supported on Windows 10 1709+ for devices with supported hardware when HEVC Video Extensions is installed; since version 107 from October 2022, supported on macOS 11+ and Android 5.0+ for all devices) Chrome (since version 107 from October 2022, supported on macOS 11+ and Android 5.0+ for all devices, and on Windows 8+, ChromeOS, and Linux for devices with supported hardware) Opera (since version 94 from December 2022, supported on the same platforms as Chrome) In June 2023, an estimated 88.31% of browsers in use on desktop and mobile systems were able to play HEVC videos in HTML5 webpages, based on data from Can I Use. Coding efficiency: The design of most video coding standards is primarily aimed at having the highest coding efficiency. Coding efficiency is the ability to encode video at the lowest possible bit rate while maintaining a certain level of video quality. There are two standard ways to measure the coding efficiency of a video coding standard: using an objective metric, such as peak signal-to-noise ratio (PSNR), or using subjective assessment of video quality. Subjective assessment of video quality is considered to be the most important way to measure a video coding standard since humans perceive video quality subjectively. HEVC benefits from the use of larger coding tree unit (CTU) sizes. This has been shown in PSNR tests with a HM-8.0 HEVC encoder where it was forced to use progressively smaller CTU sizes. For all test sequences, when compared with a 64×64 CTU size, it was shown that the HEVC bit rate increased by 2.2% when forced to use a 32×32 CTU size, and increased by 11.0% when forced to use a 16×16 CTU size. In the Class A test sequences, where the resolution of the video was 2560×1600, when compared with a 64×64 CTU size, it was shown that the HEVC bit rate increased by 5.7% when forced to use a 32×32 CTU size, and increased by 28.2% when forced to use a 16×16 CTU size.
The tests showed that large CTU sizes increase coding efficiency while also reducing decoding time. The HEVC Main Profile (MP) has been compared in coding efficiency to the H.264/MPEG-4 AVC High Profile (HP), MPEG-4 Advanced Simple Profile (ASP), H.263 High Latency Profile (HLP), and H.262/MPEG-2 Main Profile (MP). The video encoding was done for entertainment applications, and twelve different bit rates were produced for the nine video test sequences using a HM-8.0 HEVC encoder. Of the nine video test sequences, five were at HD resolution, while four were at WVGA (800×480) resolution. The bit rate reductions for HEVC were determined based on PSNR, with HEVC having a bit rate reduction of 35.4% compared with H.264/MPEG-4 AVC HP, 63.7% compared with MPEG-4 ASP, 65.1% compared with H.263 HLP, and 70.8% compared with H.262/MPEG-2 MP. HEVC MP has also been compared with H.264/MPEG-4 AVC HP for subjective video quality. The video encoding was done for entertainment applications, and four different bit rates were produced for nine video test sequences using a HM-5.0 HEVC encoder. The subjective assessment was done at an earlier date than the PSNR comparison, and so it used an earlier version of the HEVC encoder that had slightly lower performance. The bit rate reductions were determined based on subjective assessment using mean opinion score values. The overall subjective bit rate reduction for HEVC MP compared with H.264/MPEG-4 AVC HP was 49.3%. École Polytechnique Fédérale de Lausanne (EPFL) did a study to evaluate the subjective video quality of HEVC at resolutions higher than HDTV. The study was done with three videos with resolutions of 3840×1744 at 24 fps, 3840×2048 at 30 fps, and 3840×2160 at 30 fps. The five-second video sequences showed people on a street, traffic, and a scene from the open-source computer-animated movie Sintel. The video sequences were encoded at five different bit rates using the HM-6.1.1 HEVC encoder and the JM-18.3 H.264/MPEG-4 AVC encoder. The subjective bit rate reductions were determined based on subjective assessment using mean opinion score values. The study compared HEVC MP with H.264/MPEG-4 AVC HP and showed that, for HEVC MP, the average bit rate reduction based on PSNR was 44.4%, while the average bit rate reduction based on subjective video quality was 66.5%. In a HEVC performance comparison released in April 2013, the HEVC MP and Main 10 Profile (M10P) were compared with the H.264/MPEG-4 AVC HP and High 10 Profile (H10P) using 3840×2160 video sequences. The video sequences were encoded using the HM-10.0 HEVC encoder and the JM-18.4 H.264/MPEG-4 AVC encoder. The average bit rate reduction based on PSNR was 45% for inter frame video. Coding efficiency: In a video encoder comparison released in December 2013, the HM-10.0 HEVC encoder was compared with the x264 encoder (version r2334) and the VP9 encoder (version v1.2.0-3088-ga81bd12). The comparison used the Bjøntegaard-Delta bit-rate (BD-BR) measurement method, in which negative values indicate by how much the bit rate is reduced, and positive values by how much it is increased, for the same PSNR. In the comparison, the HM-10.0 HEVC encoder had the highest coding efficiency and, on average, to get the same objective quality, the x264 encoder needed to increase the bit rate by 66.4%, while the VP9 encoder needed to increase the bit rate by 79.4%.
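The Bjøntegaard-Delta bit-rate figure used in the encoder comparison above can be approximated with a short numerical sketch. The following Python fragment is a minimal illustration under common assumptions (a cubic fit of log bit rate versus PSNR, integrated over the overlapping PSNR range); it is not the exact tool used in the cited comparisons, and the rate/PSNR points are invented for demonstration only.

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Average bit-rate difference (%) of 'test' vs 'ref' at equal PSNR,
    using the usual cubic-fit-and-integrate Bjøntegaard approach."""
    log_ref, log_test = np.log(rates_ref), np.log(rates_test)
    # Fit log(bit rate) as a cubic polynomial of PSNR for each encoder.
    p_ref = np.polyfit(psnr_ref, log_ref, 3)
    p_test = np.polyfit(psnr_test, log_test, 3)
    # Integrate over the PSNR interval common to both curves.
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref, int_test = np.polyint(p_ref), np.polyint(p_test)
    avg_ref = (np.polyval(int_ref, hi) - np.polyval(int_ref, lo)) / (hi - lo)
    avg_test = (np.polyval(int_test, hi) - np.polyval(int_test, lo)) / (hi - lo)
    # Negative result: the test encoder needs less bit rate for the same PSNR.
    return (np.exp(avg_test - avg_ref) - 1) * 100

# Invented rate (kbit/s) and PSNR (dB) operating points, four per encoder.
hevc_rates, hevc_psnr = [1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5]
avc_rates, avc_psnr = [1600, 3300, 6600, 13000], [34.0, 36.5, 39.0, 41.5]
print(f"BD-rate of AVC vs HEVC: {bd_rate(hevc_rates, hevc_psnr, avc_rates, avc_psnr):+.1f}%")
```

With these made-up points, the sketch reports a positive BD-rate for the AVC-like curve, i.e. it needs more bit rate than the HEVC-like curve for the same PSNR, which mirrors how the percentages quoted above should be read.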
Coding efficiency: In a subjective video performance comparison released in May 2014, the JCT-VC compared the HEVC Main profile to the H.264/MPEG-4 AVC High profile. The comparison used mean opinion score values and was conducted by the BBC and the University of the West of Scotland. The video sequences were encoded using the HM-12.1 HEVC encoder and the JM-18.5 H.264/MPEG-4 AVC encoder. The comparison used a range of resolutions, and the average bit rate reduction for HEVC was 59%. The average bit rate reduction for HEVC was 52% for 480p, 56% for 720p, 62% for 1080p, and 64% for 4K UHD. In a subjective video codec comparison released in August 2014 by the EPFL, the HM-15.0 HEVC encoder was compared with the VP9 1.2.0–5183 encoder and the JM-18.8 H.264/MPEG-4 AVC encoder. Four 4K resolution sequences were encoded at five different bit rates with the encoders set to use an intra period of one second. In the comparison, the HM-15.0 HEVC encoder had the highest coding efficiency and, on average, for the same subjective quality the bit rate could be reduced by 49.4% compared with the VP9 1.2.0–5183 encoder, and by 52.6% compared with the JM-18.8 H.264/MPEG-4 AVC encoder. In August 2016, Netflix published the results of a large-scale study comparing the leading open-source HEVC encoder, x265, with the leading open-source AVC encoder, x264, and the reference VP9 encoder, libvpx. Using their advanced Video Multimethod Assessment Fusion (VMAF) video quality measurement tool, Netflix found that x265 delivered identical quality at bit rates ranging from 35.4% to 53.3% lower than x264, and from 17.8% to 21.8% lower than VP9. Features: HEVC was designed to substantially improve coding efficiency compared with H.264/MPEG-4 AVC HP, i.e., to reduce bit rate requirements by half at comparable image quality, at the expense of increased computational complexity. HEVC was designed with the goal of allowing video content to have a data compression ratio of up to 1000:1. Depending on the application requirements, HEVC encoders can trade off computational complexity, compression rate, robustness to errors, and encoding delay time. Two of the key areas where HEVC was improved compared with H.264/MPEG-4 AVC were support for higher-resolution video and improved parallel processing methods. HEVC is targeted at next-generation HDTV displays and content capture systems which feature progressive scanned video and display resolutions from QVGA (320×240) up to 4320p (7680×4320), as well as improved picture quality in terms of noise level, color spaces, and dynamic range. Features: Video coding layer The HEVC video coding layer uses the same "hybrid" approach used in all modern video standards, starting from H.261, in that it uses inter-/intra-picture prediction and 2D transform coding. A HEVC encoder first splits a picture into block-shaped regions; for the first picture, or the first picture of a random access point, intra-picture prediction is used. Intra-picture prediction is when the prediction of the blocks in the picture is based only on the information in that picture. For all other pictures, inter-picture prediction is used, in which prediction information is used from other pictures. After the prediction methods are finished and the picture goes through the loop filters, the final picture representation is stored in the decoded picture buffer.
Pictures stored in the decoded picture buffer can be used for the prediction of other pictures. HEVC was designed with the idea that progressive scan video would be used, and no coding tools were added specifically for interlaced video. Interlace-specific coding tools, such as MBAFF and PAFF, are not supported in HEVC. HEVC instead sends metadata that tells how the interlaced video was sent. Interlaced video may be sent either by coding each frame as a separate picture or by coding each field as a separate picture. For interlaced video HEVC can change between frame coding and field coding using Sequence Adaptive Frame Field (SAFF), which allows the coding mode to be changed for each video sequence. This allows interlaced video to be sent with HEVC without needing special interlaced decoding processes to be added to HEVC decoders. Features: Color spaces The HEVC standard supports color spaces such as generic film, NTSC, PAL, Rec. 601, Rec. 709, Rec. 2020, Rec. 2100, SMPTE 170M, SMPTE 240M, sRGB, sYCC, xvYCC, XYZ, and externally specified color spaces. HEVC supports color encoding representations such as RGB, YCbCr, and YCoCg. Features: Coding tools Coding tree unit HEVC replaces the 16×16 pixel macroblocks, which were used with previous standards, with coding tree units (CTUs) which can use larger block structures of up to 64×64 samples and can better sub-partition the picture into variable sized structures. HEVC initially divides the picture into CTUs which can be 64×64, 32×32, or 16×16, with a larger pixel block size usually increasing the coding efficiency. Features: Inverse transforms HEVC specifies four transform unit (TU) sizes of 4×4, 8×8, 16×16, and 32×32 to code the prediction residual. A CTB may be recursively partitioned into 4 or more TUs. TUs use integer basis functions based on the discrete cosine transform (DCT). In addition, 4×4 luma transform blocks that belong to an intra coded region are transformed using an integer transform that is derived from the discrete sine transform (DST). This provides a 1% bit rate reduction but was restricted to 4×4 luma transform blocks due to marginal benefits for the other transform cases. Chroma uses the same TU sizes as luma, so there is no 2×2 transform for chroma. Features: Parallel processing tools Tiles allow for the picture to be divided into a grid of rectangular regions that can be independently decoded/encoded. The main purpose of tiles is to allow for parallel processing. Tiles can be independently decoded and can even allow for random access to specific regions of a picture in a video stream. Features: Wavefront parallel processing (WPP) is when a slice is divided into rows of CTUs, in which the first row is decoded normally but each additional row requires decisions made in the previous row. WPP has the entropy encoder use information from the preceding row of CTUs and allows for a method of parallel processing that may allow for better compression than tiles. Features: Tiles and WPP are allowed, but are optional. If tiles are present, they must be at least 64 pixels high and 256 pixels wide, with a level-specific limit on the number of tiles allowed. Features: Slices can, for the most part, be decoded independently from each other, with their main purpose being re-synchronization in case of data loss in the video stream. Slices can be defined as self-contained in that prediction is not made across slice boundaries. When in-loop filtering is done on a picture, though, information from across slice boundaries may be required.
Slices are CTUs decoded in the order of the raster scan, and different coding types can be used for slices, such as I types, P types, or B types. Features: Dependent slices can allow for data related to tiles or WPP to be accessed more quickly by the system than if the entire slice had to be decoded. The main purpose of dependent slices is to allow for low-delay video encoding due to their lower latency. Features: Other coding tools Entropy coding HEVC uses a context-adaptive binary arithmetic coding (CABAC) algorithm that is fundamentally similar to CABAC in H.264/MPEG-4 AVC. CABAC is the only entropy encoding method that is allowed in HEVC, while there are two entropy encoding methods allowed by H.264/MPEG-4 AVC. CABAC and the entropy coding of transform coefficients in HEVC were designed for a higher throughput than in H.264/MPEG-4 AVC, while maintaining higher compression efficiency for larger transform block sizes relative to simple extensions. For instance, the number of context coded bins has been reduced by 8×, and the CABAC bypass mode has been improved in terms of its design to increase throughput. Another improvement with HEVC is that the dependencies between the coded data have been changed to further increase throughput. Context modeling in HEVC has also been improved so that CABAC can better select a context that increases efficiency when compared with H.264/MPEG-4 AVC. Features: Intra prediction HEVC specifies 33 directional modes for intra prediction compared with the 8 directional modes for intra prediction specified by H.264/MPEG-4 AVC. HEVC also specifies DC intra prediction and planar prediction modes. The DC intra prediction mode generates a mean value by averaging reference samples and can be used for flat surfaces. The planar prediction mode in HEVC supports all block sizes defined in HEVC, while the planar prediction mode in H.264/MPEG-4 AVC is limited to a block size of 16×16 pixels. The intra prediction modes use data from neighboring prediction blocks that have been previously decoded from within the same picture. Features: Motion compensation For the interpolation of fractional luma sample positions, HEVC uses separable application of one-dimensional half-sample interpolation with an 8-tap filter or quarter-sample interpolation with a 7-tap filter while, in comparison, H.264/MPEG-4 AVC uses a two-stage process that first derives values at half-sample positions using separable one-dimensional 6-tap interpolation followed by integer rounding and then applies linear interpolation between values at nearby half-sample positions to generate values at quarter-sample positions. HEVC has improved precision due to the longer interpolation filter and the elimination of the intermediate rounding error. For 4:2:0 video, the chroma samples are interpolated with separable one-dimensional 4-tap filtering to generate eighth-sample precision, while in comparison H.264/MPEG-4 AVC uses only a 2-tap bilinear filter (also with eighth-sample precision). As in H.264/MPEG-4 AVC, weighted prediction in HEVC can be used either with uni-prediction (in which a single prediction value is used) or bi-prediction (in which the prediction values from two prediction blocks are combined). Features: Motion vector prediction HEVC defines a signed 16-bit range for both horizontal and vertical motion vectors (MVs). This was added to HEVC at the July 2012 HEVC meeting with the mvLX variables.
HEVC horizontal/vertical MVs have a range of −32768 to 32767 which, given the quarter-pixel precision used by HEVC, allows for an MV range of −8192 to 8191.75 luma samples. This compares with H.264/MPEG-4 AVC, which allows for a horizontal MV range of −2048 to 2047.75 luma samples and a vertical MV range of −512 to 511.75 luma samples. HEVC allows for two MV modes, which are Advanced Motion Vector Prediction (AMVP) and merge mode. AMVP uses data from the reference picture and can also use data from adjacent prediction blocks. The merge mode allows for the MVs to be inherited from neighboring prediction blocks. Merge mode in HEVC is similar to the "skipped" and "direct" motion inference modes in H.264/MPEG-4 AVC but with two improvements. The first improvement is that HEVC uses index information to select one of several available candidates. The second improvement is that HEVC uses information from the reference picture list and reference picture index. Features: Loop filters HEVC specifies two loop filters that are applied sequentially, with the deblocking filter (DBF) applied first and the sample adaptive offset (SAO) filter applied afterwards. Both loop filters are applied in the inter-picture prediction loop, i.e. the filtered image is stored in the decoded picture buffer (DPB) as a reference for inter-picture prediction. Features: Deblocking filter The DBF is similar to the one used by H.264/MPEG-4 AVC but with a simpler design and better support for parallel processing. In HEVC the DBF only applies to an 8×8 sample grid, while with H.264/MPEG-4 AVC the DBF applies to a 4×4 sample grid. The DBF uses an 8×8 sample grid since this causes no noticeable degradation and significantly improves parallel processing because the DBF no longer causes cascading interactions with other operations. Another change is that HEVC only allows for three DBF strengths of 0 to 2. HEVC also requires that the DBF first apply horizontal filtering for vertical edges to the picture, and only after that does it apply vertical filtering for horizontal edges to the picture. This allows for multiple parallel threads to be used for the DBF. Features: Sample adaptive offset The SAO filter is applied after the DBF and is designed to allow for better reconstruction of the original signal amplitudes by applying offsets stored in a lookup table in the bitstream. Per CTB, the SAO filter can be disabled or applied in one of two modes: edge offset mode or band offset mode. The edge offset mode operates by comparing the value of a sample to two of its eight neighbors using one of four directional gradient patterns. Based on a comparison with these two neighbors, the sample is classified into one of five categories: minimum, maximum, an edge with the sample having the lower value, an edge with the sample having the higher value, or monotonic. For each of the first four categories an offset is applied. The band offset mode applies an offset based on the amplitude of a single sample. A sample is categorized by its amplitude into one of 32 bands (histogram bins). Offsets are specified for four consecutive bands of the 32, because in flat areas which are prone to banding artifacts, sample amplitudes tend to be clustered in a small range. The SAO filter was designed to increase picture quality, reduce banding artifacts, and reduce ringing artifacts.
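The SAO classification rules described above can be made concrete with a small sketch. The following Python fragment is an illustrative approximation of the band-offset and edge-offset classification, not code from the HEVC reference software; the function names and the example offsets are invented.

```python
def band_offset_class(sample, bit_depth=8):
    """Band-offset mode: classify a sample into one of 32 equal-width bands
    by its amplitude; offsets are then signalled for 4 consecutive bands."""
    shift = bit_depth - 5            # 2**bit_depth amplitude values / 32 bands
    return sample >> shift

def edge_offset_class(left, cur, right):
    """Edge-offset mode along one of the four directional patterns:
    compare a sample with two neighbours and pick one of five categories."""
    sign = lambda x: (x > 0) - (x < 0)
    s = sign(cur - left) + sign(cur - right)
    if s == -2: return "minimum"                 # local valley
    if s == -1: return "edge, lower value"
    if s == 2: return "maximum"                  # local peak
    if s == 1: return "edge, higher value"
    return "monotonic"                           # no offset applied

def apply_band_offset(sample, start_band, offsets, bit_depth=8):
    """Add the signalled offset if the sample falls in one of the
    4 consecutive bands starting at start_band, then clip."""
    band = band_offset_class(sample, bit_depth)
    if start_band <= band < start_band + 4:
        sample += offsets[band - start_band]
    return max(0, min((1 << bit_depth) - 1, sample))

# An 8-bit sample of 130 sits in band 130 >> 3 = 16 and is a local peak here.
print(band_offset_class(130), edge_offset_class(120, 130, 125))
print(apply_band_offset(130, start_band=15, offsets=[1, 2, -1, 0]))
```

The clipping step in the sketch reflects the general constraint that filtered samples must stay within the legal range for the bit depth; the specific offset values would come from the bitstream in a real decoder.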
Features: Range extensions Range extensions in MPEG are additional profiles, levels, and techniques that support needs beyond consumer video playback: Profiles supporting bit depths beyond 10, and differing luma/chroma bit depths. Intra profiles for when file size is much less important than random-access decoding speed. Features: Still Picture profiles, forming the basis of the High Efficiency Image File Format, without any limit on the picture size or complexity (level 8.5). Unlike all other levels, no minimum decoder capacity is required, only a best effort with reasonable fallback. Within these new profiles came enhanced coding features, many of which support efficient screen encoding or high-speed processing: Persistent Rice adaptation, a general optimization of entropy coding. Features: Higher precision weighted prediction at high bit depths. Cross-component prediction, which exploits the imperfect decorrelation of YCbCr by letting the luma (or G) residual predict the chroma (or R/B) residuals, resulting in gains of up to 7% for YCbCr 4:4:4 and up to 26% for RGB video. Particularly useful for screen coding. Intra smoothing control, allowing the encoder to turn smoothing on or off per block, instead of per frame. Modifications of transform skip: Residual DPCM (RDPCM), allowing more optimal coding of residual data where possible, versus the typical zig-zag scan. Block size flexibility, supporting block sizes up to 32×32 (versus only 4×4 transform skip support in version 1). 4×4 rotation, for potential efficiency. Transform skip context, enabling DCT and RDPCM blocks to carry a separate context. Extended precision processing, giving low bit-depth video slightly more accurate decoding. CABAC bypass alignment, a decoding optimization specific to the High Throughput 4:4:4 16 Intra profile. HEVC version 2 adds several supplemental enhancement information (SEI) messages: Color remapping: mapping one color space to another. Knee function: hints for converting between dynamic ranges, particularly from HDR to SDR. Mastering display color volume. Time code, for archival purposes. Screen content coding extensions Additional coding tool options were added in the March 2016 draft of the screen content coding (SCC) extensions: Adaptive color transform. Adaptive motion vector resolution. Intra block copying. Features: Palette mode. The ITU-T version of the standard that added the SCC extensions (approved in December 2016 and published in March 2017) added support for the hybrid log–gamma (HLG) transfer function and the ICtCp color matrix. This allows the fourth version of HEVC to support both of the HDR transfer functions defined in Rec. 2100. The fourth version of HEVC also adds several supplemental enhancement information (SEI) messages, which include: Alternative transfer characteristics information SEI message, which provides information on the preferred transfer function to use. The primary use case for this would be to deliver HLG video in a way that is backward compatible with legacy devices. Features: Ambient viewing environment SEI message, which provides information on the ambient light of the viewing environment that was used to author the video. Profiles: Version 1 of the HEVC standard defines three profiles: Main, Main 10, and Main Still Picture. Version 2 of HEVC adds 21 range extensions profiles, two scalable extensions profiles, and one multi-view profile. HEVC also contains provisions for additional profiles.
Extensions that were added to HEVC include increased bit depth, 4:2:2/4:4:4 chroma sampling, Multiview Video Coding (MVC), and Scalable Video Coding (SVC). The HEVC range extensions, HEVC scalable extensions, and HEVC multi-view extensions were completed in July 2014. In July 2014, a draft of the second version of HEVC was released. Screen content coding (SCC) extensions were under development for screen content video, which contains text and graphics, with an expected final draft release date of 2015. A profile is a defined set of coding tools that can be used to create a bitstream that conforms to that profile. An encoder for a profile may choose which coding tools to use as long as it generates a conforming bitstream, while a decoder for a profile must support all coding tools that can be used in that profile. Profiles: Version 1 profiles Main The Main profile allows for a bit depth of 8 bits per sample with 4:2:0 chroma sampling, which is the most common type of video used with consumer devices. Profiles: Main 10 The Main 10 profile was added at the October 2012 HEVC meeting based on proposal JCTVC-K0109, which proposed that a 10-bit profile be added to HEVC for consumer applications. The proposal said this would allow for improved video quality, support for the Rec. 2020 color space used by UHDTV systems, and the ability to deliver higher dynamic range and color fidelity while avoiding banding artifacts. A variety of companies supported the proposal, including Ateme, BBC, BSkyB, Cisco, DirecTV, Ericsson, Motorola Mobility, NGCodec, NHK, RAI, ST, SVT, Thomson Video Networks, Technicolor, and ViXS Systems. The Main 10 profile allows for a bit depth of 8 to 10 bits per sample with 4:2:0 chroma sampling. HEVC decoders that conform to the Main 10 profile must be capable of decoding bitstreams made with the following profiles: Main and Main 10. A higher bit depth allows for a greater number of colors. 8 bits per sample allows for 256 shades per primary color (a total of 16.78 million colors), while 10 bits per sample allows for 1024 shades per primary color (a total of 1.07 billion colors). A higher bit depth allows for a smoother transition of color, which resolves the problem known as color banding. The Main 10 profile allows for improved video quality since it can support video with a higher bit depth than what is supported by the Main profile. Additionally, in the Main 10 profile 8-bit video can be coded with a higher bit depth of 10 bits, which allows improved coding efficiency compared to the Main profile. Ericsson said the Main 10 profile would bring the benefits of 10 bits per sample video to consumer TV. They also said that for higher resolutions there is no bit rate penalty for encoding video at 10 bits per sample. Imagination Technologies said that 10-bit-per-sample video would allow for larger color spaces and is required for the Rec. 2020 color space that will be used by UHDTV. They also said the Rec. 2020 color space would drive the widespread adoption of 10-bit-per-sample video. In a PSNR-based performance comparison released in April 2013, the Main 10 profile was compared to the Main profile using a set of 3840×2160 10-bit video sequences. The 10-bit video sequences were converted to 8 bits for the Main profile and remained at 10 bits for the Main 10 profile. The reference PSNR was based on the original 10-bit video sequences.
In the performance comparison, the Main 10 profile provided a 5% bit rate reduction for inter frame video coding compared to the Main profile. The performance comparison states that for the tested video sequences the Main 10 profile outperformed the Main profile. Profiles: Main Still Picture The Main Still Picture profile allows for a single still picture to be encoded with the same constraints as the Main profile. As a subset of the Main profile, the Main Still Picture profile allows for a bit depth of 8 bits per sample with 4:2:0 chroma sampling. An objective performance comparison was done in April 2012 in which HEVC reduced the average bit rate for images by 56% compared to JPEG. A PSNR-based performance comparison for still image compression was done in May 2012 using the HEVC HM 6.0 encoder and the reference software encoders for the other standards. For still images, HEVC reduced the average bit rate by 15.8% compared to H.264/MPEG-4 AVC, 22.6% compared to JPEG 2000, 30.0% compared to JPEG XR, 31.0% compared to WebP, and 43.0% compared to JPEG. A performance comparison for still image compression was done in January 2013 using the HEVC HM 8.0rc2 encoder, Kakadu version 6.0 for JPEG 2000, and IJG version 6b for JPEG. The performance comparison used PSNR for the objective assessment and mean opinion score (MOS) values for the subjective assessment. The subjective assessment used the same test methodology and images as those used by the JPEG committee when it evaluated JPEG XR. For 4:2:0 chroma sampled images, the average bit rate reduction for HEVC compared to JPEG 2000 was 20.26% for PSNR and 30.96% for MOS, while compared to JPEG it was 61.63% for PSNR and 43.10% for MOS. A PSNR-based HEVC performance comparison for still image compression was done in April 2013 by Nokia. HEVC has a larger performance improvement for higher resolution images than lower resolution images, and a larger performance improvement for lower bit rates than higher bit rates. For lossy compression, achieving the same PSNR as HEVC took on average 1.4× more bits with JPEG 2000, 1.6× more bits with JPEG XR, and 2.3× more bits with JPEG. A compression efficiency study of HEVC, JPEG, JPEG XR, and WebP was done in October 2013 by Mozilla. The study showed that HEVC was significantly better at compression than the other image formats that were tested. Four different methods for comparing image quality were used in the study: Y-SSIM, RGB-SSIM, IW-SSIM, and PSNR-HVS-M. Profiles: Version 2 profiles Version 2 of HEVC adds 21 range extensions profiles, two scalable extensions profiles, and one multi-view profile: Monochrome, Monochrome 12, Monochrome 16, Main 12, Main 4:2:2 10, Main 4:2:2 12, Main 4:4:4, Main 4:4:4 10, Main 4:4:4 12, Monochrome 12 Intra, Monochrome 16 Intra, Main 12 Intra, Main 4:2:2 10 Intra, Main 4:2:2 12 Intra, Main 4:4:4 Intra, Main 4:4:4 10 Intra, Main 4:4:4 12 Intra, Main 4:4:4 16 Intra, Main 4:4:4 Still Picture, Main 4:4:4 16 Still Picture, High Throughput 4:4:4 16 Intra, Scalable Main, Scalable Main 10, and Multiview Main. All of the inter frame range extensions profiles have an Intra profile. Profiles: Monochrome The Monochrome profile allows for a bit depth of 8 bits per sample with support for 4:0:0 chroma sampling. Monochrome 12 The Monochrome 12 profile allows for a bit depth of 8 bits to 12 bits per sample with support for 4:0:0 chroma sampling. Monochrome 16 The Monochrome 16 profile allows for a bit depth of 8 bits to 16 bits per sample with support for 4:0:0 chroma sampling.
HEVC decoders that conform to the Monochrome 16 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Monochrome 12, and Monochrome 16. Main 12 The Main 12 profile allows for a bit depth of 8 bits to 12 bits per sample with support for 4:0:0 and 4:2:0 chroma sampling. HEVC decoders that conform to the Main 12 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Monochrome 12, Main, Main 10, and Main 12. Main 4:2:2 10 The Main 4:2:2 10 profile allows for a bit depth of 8 bits to 10 bits per sample with support for 4:0:0, 4:2:0, and 4:2:2 chroma sampling. HEVC decoders that conform to the Main 4:2:2 10 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 10, and Main 4:2:2 10. Profiles: Main 4:2:2 12 The Main 4:2:2 12 profile allows for a bit depth of 8 bits to 12 bits per sample with support for 4:0:0, 4:2:0, and 4:2:2 chroma sampling. HEVC decoders that conform to the Main 4:2:2 12 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Monochrome 12, Main, Main 10, Main 12, Main 4:2:2 10, and Main 4:2:2 12. Profiles: Main 4:4:4 The Main 4:4:4 profile allows for a bit depth of 8 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. HEVC decoders that conform to the Main 4:4:4 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, and Main 4:4:4. Profiles: Main 4:4:4 10 The Main 4:4:4 10 profile allows for a bit depth of 8 bits to 10 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. HEVC decoders that conform to the Main 4:4:4 10 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 10, Main 4:2:2 10, Main 4:4:4, and Main 4:4:4 10. Profiles: Main 4:4:4 12 The Main 4:4:4 12 profile allows for a bit depth of 8 bits to 12 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. HEVC decoders that conform to the Main 4:4:4 12 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 10, Main 12, Main 4:2:2 10, Main 4:2:2 12, Main 4:4:4, Main 4:4:4 10, Main 4:4:4 12, and Monochrome 12. Profiles: Main 4:4:4 16 Intra The Main 4:4:4 16 Intra profile allows for a bit depth of 8 bits to 16 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. HEVC decoders that conform to the Main 4:4:4 16 Intra profile must be capable of decoding bitstreams made with the following profiles: Monochrome Intra, Monochrome 12 Intra, Monochrome 16 Intra, Main Intra, Main 10 Intra, Main 12 Intra, Main 4:2:2 10 Intra, Main 4:2:2 12 Intra, Main 4:4:4 Intra, Main 4:4:4 10 Intra, and Main 4:4:4 12 Intra. Profiles: High Throughput 4:4:4 16 Intra The High Throughput 4:4:4 16 Intra profile allows for a bit depth of 8 bits to 16 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. The High Throughput 4:4:4 16 Intra profile has an HbrFactor 12 times higher than other HEVC profiles, allowing it to have a maximum bit rate 12 times higher than the Main 4:4:4 16 Intra profile. The High Throughput 4:4:4 16 Intra profile is designed for high end professional content creation, and decoders for this profile are not required to support other profiles.
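The decoder-conformance relationships spelled out in these profile descriptions amount to a simple lookup: each decoder profile carries a set of bitstream profiles it is required to handle. The Python sketch below is purely illustrative; it captures only a small subset of the relationships already stated in the text, and the helper name is hypothetical rather than part of any HEVC API.

```python
# A small, non-exhaustive subset of the "must be capable of decoding"
# relationships described above, keyed by the decoder's profile.
DECODABLE = {
    "Main": {"Main"},
    "Main 10": {"Main", "Main 10"},
    "Main 12": {"Monochrome", "Monochrome 12", "Main", "Main 10", "Main 12"},
    "Main 4:2:2 10": {"Monochrome", "Main", "Main 10", "Main 4:2:2 10"},
    "Monochrome 16": {"Monochrome", "Monochrome 12", "Monochrome 16"},
    "High Throughput 4:4:4 16 Intra": {"High Throughput 4:4:4 16 Intra"},
}

def can_decode(decoder_profile: str, bitstream_profile: str) -> bool:
    """True if a decoder conforming to decoder_profile is required (per the
    subset captured above) to decode a bitstream of bitstream_profile."""
    return bitstream_profile in DECODABLE.get(decoder_profile, set())

print(can_decode("Main 12", "Main 10"))                       # True
print(can_decode("Main 10", "Main 12"))                       # False: higher bit depth
print(can_decode("High Throughput 4:4:4 16 Intra", "Main"))   # False: standalone profile
```

The asymmetry in the example calls mirrors the pattern in the text: profiles with higher bit depth or richer chroma sampling must decode the simpler profiles, but not the other way around.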
Profiles: Main 4:4:4 Still Picture The Main 4:4:4 Still Picture profile allows for a single still picture to be encoded with the same constraints as the Main 4:4:4 profile. As a subset of the Main 4:4:4 profile, the Main 4:4:4 Still Picture profile allows for a bit depth of 8 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. Profiles: Main 4:4:4 16 Still Picture The Main 4:4:4 16 Still Picture profile allows for a single still picture to be encoded with the same constraints as the Main 4:4:4 16 Intra profile. As a subset of the Main 4:4:4 16 Intra profile, the Main 4:4:4 16 Still Picture profile allows for a bit depth of 8 bits to 16 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. Profiles: Scalable Main The Scalable Main profile allows for a base layer that conforms to the Main profile of HEVC. Scalable Main 10 The Scalable Main 10 profile allows for a base layer that conforms to the Main 10 profile of HEVC. Multiview Main The Multiview Main profile allows for a base layer that conforms to the Main profile of HEVC. Profiles: Version 3 and higher profiles Version 3 of HEVC added one 3D profile: 3D Main. The February 2016 draft of the screen content coding extensions added seven screen content coding extensions profiles, three high throughput extensions profiles, and four scalable extensions profiles: Screen-Extended Main, Screen-Extended Main 10, Screen-Extended Main 4:4:4, Screen-Extended Main 4:4:4 10, Screen-Extended High Throughput 4:4:4, Screen-Extended High Throughput 4:4:4 10, Screen-Extended High Throughput 4:4:4 14, High Throughput 4:4:4, High Throughput 4:4:4 10, High Throughput 4:4:4 14, Scalable Monochrome, Scalable Monochrome 12, Scalable Monochrome 16, and Scalable Main 4:4:4. Profiles: 3D Main The 3D Main profile allows for a base layer that conforms to the Main profile of HEVC. Screen-Extended Main The Screen-Extended Main profile allows for a bit depth of 8 bits per sample with support for 4:0:0 and 4:2:0 chroma sampling. HEVC decoders that conform to the Screen-Extended Main profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, and Screen-Extended Main. Screen-Extended Main 10 The Screen-Extended Main 10 profile allows for a bit depth of 8 bits to 10 bits per sample with support for 4:0:0 and 4:2:0 chroma sampling. HEVC decoders that conform to the Screen-Extended Main 10 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 10, Screen-Extended Main, and Screen-Extended Main 10. Screen-Extended Main 4:4:4 The Screen-Extended Main 4:4:4 profile allows for a bit depth of 8 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. HEVC decoders that conform to the Screen-Extended Main 4:4:4 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 4:4:4, Screen-Extended Main, and Screen-Extended Main 4:4:4. Profiles: Screen-Extended Main 4:4:4 10 The Screen-Extended Main 4:4:4 10 profile allows for a bit depth of 8 bits to 10 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. HEVC decoders that conform to the Screen-Extended Main 4:4:4 10 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 10, Main 4:2:2 10, Main 4:4:4, Main 4:4:4 10, Screen-Extended Main, Screen-Extended Main 10, Screen-Extended Main 4:4:4, and Screen-Extended Main 4:4:4 10.
Profiles: Screen-Extended High Throughput 4:4:4 The Screen-Extended High Throughput 4:4:4 profile allows for a bit depth of 8 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. The Screen-Extended High Throughput 4:4:4 profile has an HbrFactor 6 times higher than most inter frame HEVC profiles, allowing it to have a maximum bit rate 6 times higher than the Main 4:4:4 profile. HEVC decoders that conform to the Screen-Extended High Throughput 4:4:4 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 4:4:4, Screen-Extended Main, Screen-Extended Main 4:4:4, Screen-Extended High Throughput 4:4:4, and High Throughput 4:4:4. Profiles: Screen-Extended High Throughput 4:4:4 10 The Screen-Extended High Throughput 4:4:4 10 profile allows for a bit depth of 8 bits to 10 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. The Screen-Extended High Throughput 4:4:4 10 profile has an HbrFactor 6 times higher than most inter frame HEVC profiles, allowing it to have a maximum bit rate 6 times higher than the Main 4:4:4 10 profile. HEVC decoders that conform to the Screen-Extended High Throughput 4:4:4 10 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 10, Main 4:2:2 10, Main 4:4:4, Main 4:4:4 10, Screen-Extended Main, Screen-Extended Main 10, Screen-Extended Main 4:4:4, Screen-Extended Main 4:4:4 10, Screen-Extended High Throughput 4:4:4, Screen-Extended High Throughput 4:4:4 10, High Throughput 4:4:4, and High Throughput 4:4:4 10. Profiles: Screen-Extended High Throughput 4:4:4 14 The Screen-Extended High Throughput 4:4:4 14 profile allows for a bit depth of 8 bits to 14 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. The Screen-Extended High Throughput 4:4:4 14 profile has an HbrFactor 6 times higher than most inter frame HEVC profiles. HEVC decoders that conform to the Screen-Extended High Throughput 4:4:4 14 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 10, Main 4:2:2 10, Main 4:4:4, Main 4:4:4 10, Screen-Extended Main, Screen-Extended Main 10, Screen-Extended Main 4:4:4, Screen-Extended Main 4:4:4 10, Screen-Extended High Throughput 4:4:4, Screen-Extended High Throughput 4:4:4 10, Screen-Extended High Throughput 4:4:4 14, High Throughput 4:4:4, High Throughput 4:4:4 10, and High Throughput 4:4:4 14. Profiles: High Throughput 4:4:4 The High Throughput 4:4:4 profile allows for a bit depth of 8 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. The High Throughput 4:4:4 profile has an HbrFactor 6 times higher than most inter frame HEVC profiles, allowing it to have a maximum bit rate 6 times higher than the Main 4:4:4 profile. HEVC decoders that conform to the High Throughput 4:4:4 profile must be capable of decoding bitstreams made with the following profiles: High Throughput 4:4:4. Profiles: High Throughput 4:4:4 10 The High Throughput 4:4:4 10 profile allows for a bit depth of 8 bits to 10 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. The High Throughput 4:4:4 10 profile has an HbrFactor 6 times higher than most inter frame HEVC profiles, allowing it to have a maximum bit rate 6 times higher than the Main 4:4:4 10 profile.
HEVC decoders that conform to the High Throughput 4:4:4 10 profile must be capable of decoding bitstreams made with the following profiles: High Throughput 4:4:4 and High Throughput 4:4:4 10. Profiles: High Throughput 4:4:4 14 The High Throughput 4:4:4 14 profile allows for a bit depth of 8 bits to 14 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. The High Throughput 4:4:4 14 profile has an HbrFactor 6 times higher than most inter frame HEVC profiles. HEVC decoders that conform to the High Throughput 4:4:4 14 profile must be capable of decoding bitstreams made with the following profiles: High Throughput 4:4:4, High Throughput 4:4:4 10, and High Throughput 4:4:4 14. Profiles: Scalable Monochrome The Scalable Monochrome profile allows for a base layer that conforms to the Monochrome profile of HEVC. Scalable Monochrome 12 The Scalable Monochrome 12 profile allows for a base layer that conforms to the Monochrome 12 profile of HEVC. Scalable Monochrome 16 The Scalable Monochrome 16 profile allows for a base layer that conforms to the Monochrome 16 profile of HEVC. Scalable Main 4:4:4 The Scalable Main 4:4:4 profile allows for a base layer that conforms to the Main 4:4:4 profile of HEVC. Tiers and levels: The HEVC standard defines two tiers, Main and High, and thirteen levels. A level is a set of constraints for a bitstream. For levels below level 4 only the Main tier is allowed. The Main tier is a lower tier than the High tier. The tiers were made to deal with applications that differ in terms of their maximum bit rate. The Main tier was designed for most applications while the High tier was designed for very demanding applications. A decoder that conforms to a given tier/level is required to be capable of decoding all bitstreams that are encoded for that tier/level and for all lower tiers/levels. Tiers and levels: The maximum bit rate of a profile is based on the combination of bit depth, chroma sampling, and the type of profile. For bit depth, the maximum bit rate increases by 1.5× for 12-bit profiles and 2× for 16-bit profiles. For chroma sampling, the maximum bit rate increases by 1.5× for 4:2:2 profiles and 2× for 4:4:4 profiles. For the Intra profiles, the maximum bit rate increases by 2×. Tiers and levels: The maximum frame rate supported by HEVC is 300 fps. The MaxDpbSize is the maximum number of pictures in the decoded picture buffer. Tiers and levels: Decoded picture buffer Previously decoded pictures are stored in a decoded picture buffer (DPB) and are used by HEVC encoders to form predictions for subsequent pictures. The maximum number of pictures that can be stored in the DPB, called the DPB capacity, is 6 (including the current picture) for all HEVC levels when operating at the maximum picture size supported by the level. The DPB capacity (in units of pictures) increases from 6 to 8, 12, or 16 as the picture size decreases from the maximum picture size supported by the level. The encoder selects which specific pictures are retained in the DPB on a picture-by-picture basis, so the encoder has the flexibility to determine for itself the best way to use the DPB capacity when encoding the video content. Containers: MPEG has published an amendment which added HEVC support to the MPEG transport stream used by ATSC, DVB, and Blu-ray Disc; MPEG decided not to update the MPEG program stream used by DVD-Video. MPEG has also added HEVC support to the ISO base media file format. HEVC is also supported by the MPEG media transport standard.
Support for HEVC was added to Matroska starting with the release of MKVToolNix v6.8.0, after a patch from DivX was merged. A draft document has been submitted to the Internet Engineering Task Force which describes a method to add HEVC support to the Real-time Transport Protocol. Using HEVC's intra frame encoding, a still-image coded format called Better Portable Graphics (BPG) has been proposed by the programmer Fabrice Bellard. It is essentially a wrapper for images coded using the HEVC Main 4:4:4 16 Still Picture profile with up to 14 bits per sample, although it uses an abbreviated header syntax and adds explicit support for Exif, ICC profiles, and XMP metadata. Patent license terms: License terms and fees for HEVC patents have been compared with those of its main competitors. Provision for costless software As with its predecessor AVC, software distributors that implement HEVC in products must pay a price per distributed copy.[i] While this licensing model is manageable for paid software, it is an obstacle to most free and open-source software, which is meant to be freely distributable. In the opinion of MulticoreWare, the developer of x265, enabling royalty-free software encoders and decoders is in the interest of accelerating HEVC adoption. HEVC Advance made an exception that specifically waives the royalties on software-only implementations (both decoders and encoders) when not bundled with hardware. However, the exempted software is not free from the licensing obligations of other patent holders (e.g. members of the MPEG LA pool). Patent license terms: While the obstacle to free software is of no concern in, for example, TV broadcast networks, this problem, combined with the prospect of future collective lock-in to the format, makes several organizations like Mozilla (see OpenH264) and the Free Software Foundation Europe wary of royalty-bearing formats for internet use. Competing formats intended for internet use (VP9 and AV1) are intended to steer clear of these concerns by being royalty free (provided there are no third-party claims of patent rights). Patent license terms: ^i : Regardless of how the software is licensed from the software authors (see software licensing), if what it does is patented, its use remains bound by the patent holders' rights unless the use of the patents has been authorized by a license. Versatile Video Coding: In October 2015, MPEG and VCEG formed the Joint Video Exploration Team (JVET) to evaluate available compression technologies and study the requirements for a next-generation video compression standard. The new algorithm should have a 30–50% better compression rate for the same perceptual quality, with support for lossless and subjectively lossless compression. It should also support YCbCr 4:4:4, 4:2:2 and 4:2:0 with 10 to 16 bits per component, the BT.2100 wide color gamut and high dynamic range (HDR) of more than 16 stops (with peak brightness of 1,000, 4,000 and 10,000 nits), auxiliary channels (for depth, transparency, etc.), variable and fractional frame rates from 0 to 120 Hz, scalable video coding for temporal (frame rate), spatial (resolution), SNR, color gamut and dynamic range differences, stereo/multiview coding, panoramic formats, and still picture coding. An encoding complexity of 10 times that of HEVC is expected. JVET issued a final "Call for Proposals" in October 2017, with the first working draft of the Versatile Video Coding (VVC) standard released in April 2018. The VVC standard was finalized on July 6, 2020.
**OpenSG** OpenSG: OpenSG is a scene graph system used to create real-time graphics programs, e.g. for virtual reality applications. It is developed following open source principles, is LGPL licensed, and can be used freely. It runs on Windows, Linux, Solaris and OS X and is based on OpenGL. Its main features are advanced multithreading and clustering support (with sort-first and sort-last rendering, amongst other techniques), although it is perfectly usable in a single-threaded, single-system application as well. It is not part of the Khronos Group. History: It was started, just like many other systems, at the end of the scene graph extinction in 1999, when Microsoft and SGI's Fahrenheit graphics API project died. Given that there was no other scene graph system on the market nor on the horizon with the features the authors wanted, they decided to start their own. OpenSG should not be confused with OpenSceneGraph, which is an entirely different scene graph API, somewhat similar to OpenGL Performer. Development on both started at about the same time, and both chose similar names. Technology: OpenSG is a scene graph like many others, but with a number of unique features that set it apart. It features a blocked state management system to reduce the overhead of state change optimization, highly flexible traversal, and other mechanisms that allow run-time exchange and enhancement of core data structures, but the most unusual aspect is its multi-threading approach. Scene graphs are notoriously hard targets for multi-threading, as they contain very large data structures easily consuming hundreds of megabytes of memory. Duplicating these is not an option due to the large overhead. Many scene graphs just lock individual nodes to prevent data corruption due to parallel writes, but that is only a partial solution. The state of the scene is represented by the whole graph, so protecting only individual nodes can still lead to inconsistent results (e.g. when running an asynchronous physics simulation, updating only parts of the graph will lead to partial simulation steps being displayed). OpenSG uses selective multi-buffering: the small parts of the graph that need to be protected are duplicated for each thread, while bulk data like vertex arrays and texture images is shared and only duplicated using a copy-on-write mechanism. Synchronization of the changes for individual threads is done using a change list approach that allows minimal overhead. Technology: The same mechanism also allows highly flexible and effective clustering. To synchronize an application running on several machines, only the changes for each frame are sent to each machine and integrated into the local scene graph. This way the distinction between local and remote changes is almost invisible. An application that wants to run on a cluster just needs to open a ClusterWindow that can distribute all changes across the cluster, and it can very easily drive a Powerwall or a CAVE without having to worry about distribution protocols and other complications. People: The project was started by Dirk Reiners, Gerrit Voss and Johannes Behr. It has received contributions from many other people, most notably Carsten Neumann, who currently functions as the main maintainer.
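The selective multi-buffering and change-list synchronization described in the Technology section above can be illustrated with a short sketch. The following Python fragment is a hypothetical model of the idea rather than OpenSG's actual C++ API: each replicated copy of the graph duplicates only the small per-node field data, shares the bulk vertex data, and brings another copy up to date by replaying a list of recorded changes.

```python
class Node:
    """A scene graph node: small per-copy fields plus shared bulk data."""
    def __init__(self, name, transform, vertices):
        self.name = name
        self.fields = {"transform": transform, "visible": True}
        self.vertices = vertices          # bulk data, shared between copies

class Aspect:
    """One replicated copy of the graph (per thread or per cluster node)."""
    def __init__(self, nodes):
        self.nodes = {n.name: n for n in nodes}
        self.changelist = []              # (node name, field, value) records

    def set_field(self, name, field, value):
        self.nodes[name].fields[field] = value
        self.changelist.append((name, field, value))   # record the change

    def sync_from(self, other):
        """Replay another copy's change list instead of copying the graph."""
        for name, field, value in other.changelist:
            self.nodes[name].fields[field] = value
        other.changelist.clear()

# Both copies get their own Node objects (duplicated small fields) but share
# the same vertex list, so the bulk data is never duplicated.
shared_vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
def make_nodes():
    return [Node("cube", transform=(0, 0, 0), vertices=shared_vertices)]

render_aspect = Aspect(make_nodes())
sim_aspect = Aspect(make_nodes())

sim_aspect.set_field("cube", "transform", (5, 0, 0))   # change in one copy
render_aspect.sync_from(sim_aspect)                    # only the change travels
print(render_aspect.nodes["cube"].fields["transform"])  # (5, 0, 0)
print(render_aspect.nodes["cube"].vertices is sim_aspect.nodes["cube"].vertices)  # True
```

The same replay step, applied to a change list sent over the network instead of one held in memory, is what makes the clustering support described above look almost identical to the local multi-threading case.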