**Xargs**
xargs (short for "extended arguments") is a command on Unix and most Unix-like operating systems used to build and execute commands from standard input. It converts input from standard input into arguments to a command.
Some commands such as grep and awk can take input either as command-line arguments or from the standard input. However, others such as cp and echo can only take input as arguments, which is why xargs is necessary.
A port of an older version of GNU xargs is available for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. A ground-up rewrite named wargs is part of the open-source TextTools project. The xargs command has also been ported to the IBM i operating system.
Examples:
One use case of the xargs command is to remove a list of files using the rm command. POSIX systems have an ARG_MAX limit on the maximum total length of the command line, so commands such as rm /path/* or rm $(find /path -type f) may fail with the error message "Argument list too long", meaning that the exec system call's limit on the length of a command line was exceeded. (The latter invocation is also incorrect, as it may expand globs in the output.) Such a command can be rewritten using xargs to break the list of arguments into sublists small enough to be acceptable: the find utility feeds xargs a long list of file names, and xargs splits this list into sublists and calls rm once for every sublist.
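The rewritten command can be sketched as follows (a sandboxed reconstruction; the temporary directory stands in for the article's /path, and -print0/-0 is used so unusual file names are handled safely):

```shell
# Create a few files to stand in for a long list (hypothetical names).
tmpdir=$(mktemp -d)
touch "$tmpdir/a" "$tmpdir/b" "$tmpdir/c"

# find feeds xargs the list of file names; xargs splits the list into
# sublists and calls rm once per sublist, staying under ARG_MAX.
find "$tmpdir" -type f -print0 | xargs -0 rm
```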
Some implementations of xargs can also be used to parallelize operations with the -P maxprocs argument, which specifies how many parallel processes should be used to execute the commands over the input argument lists. However, the output streams may not be synchronized. This can be overcome by using an --output file argument where possible, and then combining the results after processing. With -P 24, for example, xargs queues 24 processes and waits on each to finish before launching another.
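A sketch of the parallel form (the file names, four-way parallelism, and the use of gzip are illustrative, not the article's original example):

```shell
tmpdir=$(mktemp -d)
touch "$tmpdir/1.log" "$tmpdir/2.log" "$tmpdir/3.log" "$tmpdir/4.log"

# -n 1 hands each process a single argument; -P 4 keeps up to four
# gzip processes running at once until the input is exhausted.
find "$tmpdir" -name '*.log' -print0 | xargs -0 -n 1 -P 4 gzip
```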
xargs often covers the same functionality as the command substitution feature of many shells, denoted by the backquote notation (`...` or $(...)). xargs is also a good companion for commands that output long lists of files such as find, locate and grep, but only if one uses -0 (or equivalently --null), since xargs without -0 deals badly with file names containing ', " and space. GNU Parallel is a similar tool that offers better compatibility with find, locate and grep when file names may contain ', ", and space (newline still requires -0).
Placement of arguments:
-I option (single argument): The xargs command offers options to insert the listed arguments at some position other than the end of the command line. The -I option takes a string that will be replaced with the supplied input before the command is executed; a common choice is %.
The string to replace may appear multiple times in the command part. Using -I at all limits the number of lines used each time to one.
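For instance (file names and destination are illustrative), copying each listed file into a backup directory with % as the replacement string:

```shell
tmpdir=$(mktemp -d)
mkdir "$tmpdir/backups"
touch "$tmpdir/a.txt" "$tmpdir/b.txt"

# Each input line replaces % before cp runs; with -I, xargs consumes
# exactly one line per command invocation.
ls "$tmpdir"/*.txt | xargs -I % cp % "$tmpdir/backups"
```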
Shell trick (any number of arguments): Another way to achieve a similar effect is to use a shell as the launched command and deal with the complexity in that shell. The word sh at the end of the line is for the POSIX shell sh -c to fill in for $0, the "executable name" part of the positional parameters (argv). If it weren't present, the name of the first matched file would instead be assigned to $0 and that file wouldn't be copied to ~/backups. One can also use any other word to fill in that blank, my-xargs-script for example.
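A sketch of the trick (the mp3 files and the BACKUPS destination are illustrative; the article's version copied into ~/backups):

```shell
tmpdir=$(mktemp -d)
mkdir "$tmpdir/backups"
export BACKUPS="$tmpdir/backups"
touch "$tmpdir/song one.mp3" "$tmpdir/song two.mp3"

# The trailing word "sh" fills $0, so every matched file lands in "$@"
# and the loop copies each one, spaces and all.
find "$tmpdir" -name '*.mp3' -print0 |
  xargs -0 sh -c 'for f; do cp "$f" "$BACKUPS"; done' sh
```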
Since cp accepts multiple files at once, one can also simply run cp with all the files given to it whenever any arguments are passed. Doing so is more efficient, since only one invocation of cp is made for each invocation of sh.
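A sketch of that more efficient variant, guarding against an empty argument list (names illustrative):

```shell
tmpdir=$(mktemp -d)
mkdir "$tmpdir/backups"
export BACKUPS="$tmpdir/backups"
touch "$tmpdir/a.mp3" "$tmpdir/b.mp3"

# One cp per sh invocation: cp receives all the files at once, and the
# guard skips cp entirely when xargs supplies no arguments.
find "$tmpdir" -name '*.mp3' -print0 |
  xargs -0 sh -c 'if [ $# -gt 0 ]; then cp "$@" "$BACKUPS"; fi' sh
```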
Separator problem:
Many Unix utilities are line-oriented. These may work with xargs as long as the lines do not contain ', ", or a space. Some of the Unix utilities can use NUL as record separator (e.g. Perl (requires -0 and \0 instead of \n), locate (requires using -0), find (requires using -print0), grep (requires -z or -Z), sort (requires using -z)). Using -0 for xargs deals with the problem, but many Unix utilities cannot use NUL as separator (e.g. head, tail, ls, echo, sed, tar -v, wc, which).
But often people forget this and assume xargs is also line-oriented, which is not the case (by default xargs separates on newlines and on blanks within lines; substrings with blanks must be single- or double-quoted).
The separator problem can be illustrated with a directory containing a file named important_file, a file named not important_file, and a directory named 12" records. Piping the output of find for not important_file into xargs rm causes important_file to be removed, but removes neither the directory called 12" records nor the file called not important_file.
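The scenario can be reproduced in a sandbox (a sketch of the elided illustration):

```shell
tmpdir=$(mktemp -d)
cd "$tmpdir"
touch important_file 'not important_file'
mkdir '12" records'

# find prints "./not important_file"; xargs splits it on the blank into
# "./not" and "important_file", so rm deletes the wrong file and leaves
# the blank- and quote-containing names untouched.
find . -name 'not*' | xargs rm 2>/dev/null || true
```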
The proper fix is to use the GNU-specific -print0 option, though tail (and other tools) do not support NUL-terminated strings. When using the -print0 option, entries are separated by a null character instead of an end-of-line. This is equivalent to the more verbose command find . -name not\* | tr \\n \\0 | xargs -0 rm or, shorter, to switching xargs to (non-POSIX) line-oriented mode with the -d (delimiter) option: find . -name not\* | xargs -d '\n' rm. But in general using -0 with -print0 should be preferred, since newlines in file names are still a problem for the line-oriented mode.
GNU Parallel is an alternative to xargs that is designed to have the same options, but is line-oriented. Thus, using GNU Parallel instead, the above would work as expected. For Unix environments where xargs supports neither the -0 nor the -d option (e.g. Solaris, AIX), the POSIX standard states that one can simply backslash-escape every character: find . -name not\* | sed 's/\(.\)/\\\1/g' | xargs rm. Alternatively, one can avoid using xargs at all, either by using GNU Parallel or by using the -exec ... + functionality of find.
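The find-only alternative looks like this (a sandboxed sketch): -exec ... {} + batches the matched names into as few rm invocations as possible, with no separator problem at all.

```shell
tmpdir=$(mktemp -d)
touch "$tmpdir/not one" "$tmpdir/not two" "$tmpdir/keep"

# The file names never pass through a pipe, so blanks and quotes in
# them cannot be misinterpreted.
find "$tmpdir" -name 'not*' -exec rm {} +
```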
Operating on a subset of arguments at a time:
One might be dealing with commands that can only accept one or maybe two arguments at a time. For example, the diff command operates on two files at a time. The -n option to xargs specifies how many arguments at a time to supply to the given command; the command will be invoked repeatedly until all input is exhausted. Note that on the last invocation one might get fewer than the desired number of arguments if there is insufficient input. Thus xargs -n 2 breaks up the input into two arguments per command line. In addition to running based on a specified number of arguments at a time, one can also invoke a command for each line of input with the -L 1 option. One can use an arbitrary number of lines at a time, but one is most common. For example, git log --format="%H %P" | xargs -L 1 git diff diffs every git commit against its parent.
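For example, grouping a stream of arguments two per invocation:

```shell
# xargs -n 2 invokes echo repeatedly with two arguments at a time;
# the last call may receive fewer if the input runs out.
out=$(printf '%s\n' 1 2 3 4 5 6 | xargs -n 2 echo)
printf '%s\n' "$out"
```

This prints "1 2", "3 4" and "5 6" on three separate lines.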
Encoding problem:
The argument separator processing of xargs is not the only problem with using the xargs program in its default mode. Most Unix tools which are often used to manipulate filenames (for example sed, basename, sort, etc.) are text processing tools. However, Unix path names are not really text. Consider a path name /aaa/bbb/ccc. The /aaa directory and its bbb subdirectory can in general be created by different users with different environments. That means these users could have a different locale setup, and that means that aaa and bbb do not even necessarily have to have the same character encoding. For example, aaa could be in UTF-8 and bbb in Shift JIS. As a result, an absolute path name in a Unix system may not be correctly processable as text under a single character encoding. Tools which rely on their input being text may fail on such strings.
One workaround for this problem is to run such tools in the C locale, which essentially processes the bytes of the input as-is. However, this will change the behavior of the tools in ways the user may not expect (for example, some of the user's expectations about case-folding behavior may not be met).
**Grain 128a**
The Grain 128a stream cipher was first proposed at the Symmetric Key Encryption Workshop (SKEW) in 2011 as an improvement of its predecessor Grain 128, adding security enhancements and optional message authentication using the Encrypt & MAC approach. One of the important features of the Grain family is that the throughput can be increased at the expense of additional hardware. Grain 128a was designed by Martin Ågren, Martin Hell, Thomas Johansson and Willi Meier.
Description of the cipher:
Grain 128a consists of two large parts: a pre-output function and a MAC. The pre-output function has an internal state size of 256 bits, consisting of two registers of size 128 bit: an NLFSR and an LFSR. The MAC supports variable tag lengths w such that 0 < w ≤ 32. The cipher uses a 128 bit key.
The cipher supports two modes of operation: with or without authentication, which is configured via the supplied IV0 such that if IV0=1 then authentication of the message is enabled, and if IV0=0 authentication of the message is disabled.
Pre-output function:
The pre-output function consists of two registers of size 128 bit, the NLFSR (b) and the LFSR (s), along with two feedback polynomials f and g and a boolean function h:

f(x) = 1 + x^32 + x^47 + x^58 + x^90 + x^121 + x^128

g(x) = 1 + x^32 + x^37 + x^72 + x^102 + x^128 + x^44 x^60 + x^61 x^125 + x^63 x^67 + x^69 x^101 + x^80 x^88 + x^110 x^111 + x^115 x^117 + x^46 x^50 x^58 + x^103 x^104 x^106 + x^33 x^35 x^36 x^40

h(x) = b_{i+12} s_{i+8} + s_{i+13} s_{i+20} + b_{i+95} s_{i+42} + s_{i+60} s_{i+79} + b_{i+12} b_{i+95} s_{i+94}

In addition to the feedback polynomials, the update functions for the NLFSR and the LFSR are:

b_{i+128} = s_i + b_i + b_{i+26} + b_{i+56} + b_{i+91} + b_{i+96} + b_{i+3} b_{i+67} + b_{i+11} b_{i+13} + b_{i+17} b_{i+18} + b_{i+27} b_{i+59} + b_{i+40} b_{i+48} + b_{i+61} b_{i+65} + b_{i+68} b_{i+84} + b_{i+88} b_{i+92} b_{i+93} b_{i+95} + b_{i+22} b_{i+24} b_{i+25} + b_{i+70} b_{i+78} b_{i+82}

s_{i+128} = s_i + s_{i+7} + s_{i+38} + s_{i+70} + s_{i+81} + s_{i+96}

The pre-output stream (y) is defined as:

y_i = h(x) + s_{i+93} + b_{i+2} + b_{i+15} + b_{i+36} + b_{i+45} + b_{i+64} + b_{i+73} + b_{i+89}

Initialisation: upon initialisation we define an IV of 96 bit, where IV0 dictates the mode of operation.
The LFSR is initialised as: s_i = IV_i for 0 ≤ i ≤ 95, s_i = 1 for 96 ≤ i ≤ 126, and s_127 = 0. The last 0 bit ensures that similar key-IV pairs do not produce shifted versions of each other.
The NLFSR is initialised by copying the entire 128 bit key (k) into the NLFSR: b_i = k_i for 0 ≤ i ≤ 127. Start-up clocking: before the pre-output function can begin to output its pre-output stream, it has to be clocked 256 times to warm up; during this stage the pre-output stream is fed back into the feedback polynomials g and f.
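The registers and clocking described above can be sketched in Python. This is an illustrative toy implementation based on the tap positions quoted in this section (bit lists, LSB-first), not a verified reference implementation:

```python
def grain128a_preoutput(key_bits, iv_bits, n):
    """Clock the two registers and return n pre-output bits."""
    assert len(key_bits) == 128 and len(iv_bits) == 96
    b = list(key_bits)                      # NLFSR <- key
    s = list(iv_bits) + [1] * 31 + [0]      # LFSR  <- IV, 31 ones, final 0
    out = []
    for i in range(256 + n):                # 256 warm-up clocks first
        h = (b[12] & s[8]) ^ (s[13] & s[20]) ^ (b[95] & s[42]) \
            ^ (s[60] & s[79]) ^ (b[12] & b[95] & s[94])
        y = h ^ s[93] ^ b[2] ^ b[15] ^ b[36] ^ b[45] ^ b[64] ^ b[73] ^ b[89]
        fb_s = s[0] ^ s[7] ^ s[38] ^ s[70] ^ s[81] ^ s[96]
        fb_b = (s[0] ^ b[0] ^ b[26] ^ b[56] ^ b[91] ^ b[96]
                ^ (b[3] & b[67]) ^ (b[11] & b[13]) ^ (b[17] & b[18])
                ^ (b[27] & b[59]) ^ (b[40] & b[48]) ^ (b[61] & b[65])
                ^ (b[68] & b[84]) ^ (b[88] & b[92] & b[93] & b[95])
                ^ (b[22] & b[24] & b[25]) ^ (b[70] & b[78] & b[82]))
        if i < 256:
            # Warm-up: the pre-output bit is fed back into both registers.
            fb_s ^= y
            fb_b ^= y
        else:
            out.append(y)
        s = s[1:] + [fb_s]
        b = b[1:] + [fb_b]
    return out
```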
Key stream:
The key stream (z) and the MAC functionality in Grain 128a both share the same pre-output stream (y). As authentication is optional, the key stream definition depends upon IV0. When authentication is enabled, the MAC functionality uses the first 2w bits (where w is the tag size) after the start-up clocking to initialise. The key stream is then assigned every other bit, due to the shared pre-output stream.
If authentication is enabled: z_i = y_{2w+2i}. If authentication is disabled: z_i = y_i.
MAC:
Grain 128a supports tags of size w up to 32 bit. To do this, two registers of size w are used: a shift register (r) and an accumulator (a). To create a tag of a message m with bits m_0, ..., m_{L-1}, an extra bit m_L = 1 is appended. This ensures that, for example, the messages 1 and 10 have different tags, and it also makes it impossible to generate a tag that completely ignores the input from the shift register after initialisation.
We denote bit j of the accumulator at time i, 0 ≤ i ≤ L, as a_i^j. Initialisation: when authentication is enabled, Grain 128a uses the first 2w bits of the pre-output stream (y) to initialise the shift register and the accumulator. This is done by: Accumulator: a_0^j = y_j for 0 ≤ j ≤ w−1. Shift register: r_j = y_{w+j} for 0 ≤ j ≤ w−1. Tag generation: Shift register: the shift register is fed all the odd bits of the pre-output stream (y): r_{i+w} = y_{2w+2i+1}. Accumulator: a_{i+1}^j = a_i^j + m_i r_{i+j} for 0 ≤ i ≤ L. Final tag: when the cipher has completed the L + 1 iterations, the final tag (t) is the content of the accumulator: t_j = a_{L+1}^j for 0 ≤ j ≤ w−1.
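The accumulation above can be sketched as follows (an illustrative toy, with indices following this section's description and the pre-output stream y supplied as a bit list):

```python
def grain128a_tag(y, msg_bits, w=32):
    """Toy sketch of the Grain 128a tag computation for tag size w."""
    a = y[:w]                    # accumulator  <- first w pre-output bits
    r = y[w:2 * w]               # shift register <- next w bits
    m = list(msg_bits) + [1]     # append the mandatory final 1 bit
    odd = y[2 * w + 1::2]        # the register is then fed the odd bits
    for i, mi in enumerate(m):
        # a_j <- a_j + m_i * r_{i+j}; shifting r makes r[j] play r_{i+j}.
        a = [aj ^ (mi & r[j]) for j, aj in enumerate(a)]
        r = r[1:] + [odd[i]]
    return a                     # final tag t = accumulator contents
```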
**Medinfar**
Medinfar is a Portuguese pharmaceutical company headquartered in Lisbon.
The company has a wide range of products from Prescription Medicines to Generics, OTC/Consumer Health products and veterinary products. Medinfar promotes products in several therapeutic areas as General Practice, Cardiology, Gastroenterology, Respiratory and Dermatology.
History:
Medinfar was founded in 1970. It has R&D, Production, Distribution, Marketing and Sales departments, and besides its own brands it markets licensed products in partnership with several top international pharmaceutical companies. In 2001, with the acquisition of the contract manufacturing unit Farmalabor, in Condeixa (in the centre region of Portugal), Medinfar increased its productive capacity. This unit has an area of 47,000 m2.
1970: Establishment of Medinfar 2000: Establishment of Medinfar Morocco 2001: Acquisition of Farmalabor 2005: Establishment of Cytothera and GP 2009: Halibut® acquisition
Medinfar Group:
Pharmaceutical Unit:- Medinfar Farma – Prescription Medicines Marketing & Sales. Main therapeutic areas: Respiratory, Dermatology and Cardiology.
- GP – Genéricos Portugueses – Marketing & Sales of Generic Drugs. Main therapeutic areas: Neurology, Psychiatry, Cardiovascular, Osteoporosis and Urology.
- Medinfar Consumer Health – Promotion and commercialization of non-prescription medicines and other health products. Main therapeutic areas: Cough & Cold, GERD & Heartburn, Antiviral, Asthenia and Vitamins.
Contract Manufacturing Unit:- Farmalabor – contract manufacturing unit located in Condeixa, Portugal center region. Farmalabor is certified in accordance with Good Manufacturing Practices (GMP) and cGMP, GLP, ISO 9001:2008, ISO 14001:2004 and OHSAS 18001. Farmalabor has more than 40 national and international clients and manufactures solid forms (tablets, coated tablets, capsules, pellets, granules, sachets, suppositories) semi-solid forms (creams, ointments) and liquid forms (solutions, suspensions, syrups).
Veterinary Business Unit:- Medinfar Sorológico – Veterinary drugs, devices and herd vaccines. Main therapeutic areas and products: Anesthetics, Antibiotics, fluid therapy, Multivitamin & Iron and Shampoos.
Biotechnology Unit:- Cytothera – First company in Europe processing and cryopreserving umbilical cord tissue stem cells.
Medinfar in the world:
As a result of its globalization challenge, Medinfar Group is present in more than 40 countries around the world, through its affiliate in Morocco and distributors throughout Europe, FWA, PALOP, the Middle East and Asia.
**Lorentz invariance in non-critical string theory**
Non-critical string theory is usually considered within the framework of the approach proposed by Polyakov. The other approach has been developed in . It represents a universal method to maintain explicit Lorentz invariance in any quantum relativistic theory. On the example of Nambu-Goto string theory in 4-dimensional Minkowski space-time, the idea can be demonstrated as follows: geometrically, the world sheet of the string is sliced by a system of parallel planes to fix a specific parametrization, or gauge, on it.
The planes are defined by a normal vector nμ, the gauge axis.
If this vector belongs to the light cone, the parametrization corresponds to the light cone gauge; if it is directed along the world sheet's period Pμ, it is the time-like Rohrlich gauge.
The problem of the standard light cone gauge is that the vector nμ is constant, e.g. nμ = (1, 1, 0, 0), and the system of planes is "frozen" in Minkowski space-time. Lorentz transformations change the position of the world sheet with respect to these fixed planes, and they are followed by reparametrizations of the world sheet. On the quantum level the reparametrization group has an anomaly, which also appears in the Lorentz group and violates Lorentz invariance of the theory. On the other hand, Rohrlich's gauge relates nμ to the world sheet itself. As a result, the Lorentz generators transform nμ and the world sheet simultaneously, without reparametrizations. The same property holds if one relates a light-like axis nμ to the world sheet, using, in addition to Pμ, other dynamical vectors available in string theory. In this way one constructs a Lorentz-invariant parametrization of the world sheet, where the Lorentz group acts trivially and does not have quantum anomalies.
Algebraically this corresponds to a canonical transformation ai -> bi in classical mechanics to a new set of variables that explicitly contains all necessary generators of symmetries. For the standard light cone gauge the Lorentz generators Mμν are cubic in terms of the oscillator variables ai, and their quantization acquires the well-known anomaly. Consider a set bi = (Mμν, ξi) which contains the Lorentz group generators and internal variables ξi, complementing Mμν to the full phase space. In selecting such a set, one needs to take care that the ξi have simple Poisson brackets with Mμν and among themselves. Local existence of such variables is provided by Darboux's theorem. Quantization in the new set of variables eliminates the anomaly from the Lorentz group. Canonically equivalent classical theories do not necessarily correspond to unitarily equivalent quantum theories, which is why quantum anomalies can be present in one approach and absent in the other.
Group-theoretically, string theory has a gauge symmetry Diff S1, the reparametrizations of a circle. The symmetry is generated by the Virasoro algebra Ln. The standard light cone gauge fixes most of the gauge degrees of freedom, leaving only trivial phase rotations U(1) ~ S1. They correspond to periodical string evolution, generated by the Hamiltonian L0.
Let's introduce an additional layer into this picture: a group G = U(1) x SO(3) of gauge transformations of the world sheet, including the trivial evolution factor and rotations of the gauge axis in the center-of-mass frame, with respect to the fixed world sheet. The standard light cone gauge corresponds to selecting one point in the SO(3) factor, leading to a Lorentz non-invariant parametrization. Therefore, one must select a different representative on the gauge orbit of G, this time related to the world sheet in a Lorentz-invariant way. After reduction of the mechanics to this representative, the anomalous gauge degrees of freedom are removed from the theory.
The trivial gauge symmetry U(1) x U(1) remains, including evolution and those rotations which preserve the direction of gauge axis.
Successful implementation of this program has been done in .
There are several unitary non-equivalent versions of the quantum open Nambu-Goto string theory, where the gauge axis is attached to different geometrical features of the world sheet.
Their common properties are: explicit Lorentz invariance at d = 4; reparametrization degrees of freedom fixed by the gauge; a Regge-like spin-mass spectrum. The reader familiar with the variety of branches co-existing in modern string theory will not wonder why many different quantum theories can be constructed for essentially the same physical system.
The approach described here does not intend to produce a unique ultimate result, it just provides a set of tools suitable for construction of your own quantum string theory.
Since any value of dimension can be used, and especially d=4, the applications could be more realistic.
For example, the approach can be applied in the physics of hadrons, to describe their spectra and electromagnetic interactions.
**Bone Marrow Transplantation (journal)**
Bone Marrow Transplantation is a peer-reviewed medical journal covering transplantation of bone marrow in humans. It is published monthly by Nature Research. The scope of the journal includes stem cell biology, transplantation immunology, translational research, and clinical results of specific transplant protocols. According to the Journal Citation Reports, Bone Marrow Transplantation has a 2020 impact factor of 5.483.
Abstracting and indexing:
Bone Marrow Transplantation is abstracted and indexed in BIOBASE/Current Awareness in Biological Sciences, BIOSIS, Current Contents/Clinical Medicine, Current Contents/Life Sciences, EMBASE/Excerpta Medica, MEDLINE/Index Medicus, and Science Citation Index.
**Database normalization**
Database normalization or database normalisation (see spelling differences) is the process of structuring a relational database in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. It was first proposed by British computer scientist Edgar F. Codd as part of his relational model.
Normalization entails organizing the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying some formal rules either by a process of synthesis (creating a new database design) or decomposition (improving an existing database design).
Objectives:
A basic objective of the first normal form defined by Codd in 1970 was to permit data to be queried and manipulated using a "universal data sub-language" grounded in first-order logic. An example of such a language is SQL, though it is one that Codd regarded as seriously flawed. The objectives of normalisation beyond 1NF (first normal form) were stated by Codd as: To free the collection of relations from undesirable insertion, update and deletion dependencies.
To reduce the need for restructuring the collection of relations, as new types of data are introduced, and thus increase the life span of application programs.
To make the relational model more informative to users.
To make the collection of relations neutral to the query statistics, where these statistics are liable to change as time goes by.
When an attempt is made to modify (update, insert into, or delete from) a relation, the following undesirable side effects may arise in relations that have not been sufficiently normalized: Insertion anomaly. There are circumstances in which certain facts cannot be recorded at all. For example, each record in a "Faculty and Their Courses" relation might contain a Faculty ID, Faculty Name, Faculty Hire Date, and Course Code. Therefore, the details of any faculty member who teaches at least one course can be recorded, but a newly hired faculty member who has not yet been assigned to teach any courses cannot be recorded, except by setting the Course Code to null.
Update anomaly. The same information can be expressed on multiple rows; therefore updates to the relation may result in logical inconsistencies. For example, each record in an "Employees' Skills" relation might contain an Employee ID, Employee Address, and Skill; thus a change of address for a particular employee may need to be applied to multiple records (one for each skill). If the update is only partially successful – the employee's address is updated on some records but not others – then the relation is left in an inconsistent state. Specifically, the relation provides conflicting answers to the question of what this particular employee's address is.
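The anomaly can be demonstrated with a small in-memory table (the schema and values here are illustrative, not from the article):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee_skills (emp_id INT, address TEXT, skill TEXT)")
con.executemany("INSERT INTO employee_skills VALUES (?, ?, ?)",
                [(1, "12 Oak St", "Typing"), (1, "12 Oak St", "Filing")])

# A partial update leaves the relation inconsistent: two different
# addresses are now recorded for the same employee.
con.execute("UPDATE employee_skills SET address = '9 Elm Rd' "
            "WHERE emp_id = 1 AND skill = 'Typing'")
rows = con.execute(
    "SELECT DISTINCT address FROM employee_skills WHERE emp_id = 1").fetchall()
print(len(rows))  # two conflicting answers to "what is employee 1's address?"
```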
Deletion anomaly. Under certain circumstances, the deletion of data representing certain facts necessitates the deletion of data representing completely different facts. The "Faculty and Their Courses" relation described in the previous example suffers from this type of anomaly, for if a faculty member temporarily ceases to be assigned to any courses, the last of the records on which that faculty member appears must be deleted, effectively also deleting the faculty member, unless the Course Code field is set to null.
Minimize redesign when extending the database structure A fully normalized database allows its structure to be extended to accommodate new types of data without changing existing structure too much. As a result, applications interacting with the database are minimally affected.
Normalized relations, and the relationship between one normalized relation and another, mirror real-world concepts and their interrelationships.
Normal forms:
Codd introduced the concept of normalization and what is now known as the first normal form (1NF) in 1970. Codd went on to define the second normal form (2NF) and third normal form (3NF) in 1971, and Codd and Raymond F. Boyce defined the Boyce–Codd normal form (BCNF) in 1974. Informally, a relational database relation is often described as "normalized" if it meets third normal form. Most 3NF relations are free of insertion, update, and deletion anomalies.
The normal forms (from least normalized to most normalized) are:
Example of a step-by-step normalization:
Normalization is a database design technique used to bring a relational database table up to a higher normal form. The process is progressive: a higher level of database normalization cannot be achieved unless the previous levels have been satisfied. That means that, starting from data in unnormalized form (the least normalized) and aiming for the highest level of normalization, the first step is to ensure compliance with first normal form, the second step is to ensure second normal form is satisfied, and so forth in the order given above, until the data conform to sixth normal form.
However, it is worth noting that normal forms beyond 4NF are mainly of academic interest, as the problems they exist to solve rarely appear in practice. The data in the following example were intentionally designed to contradict most of the normal forms. In practice it is often possible to skip some of the normalization steps because the data is already normalized to some extent. Fixing a violation of one normal form also often fixes a violation of a higher normal form. In the example, one table has been chosen for normalization at each step, meaning that at the end, some tables might not be sufficiently normalized.
Initial data: Let a database table exist with the following structure: For this example it is assumed that each book has only one author.
A table that conforms to the relational model has a primary key which uniquely identifies a row. Two books could have the same title, but an ISBN uniquely identifies a book, so it can be used as the primary key. Satisfying 1NF: In the first normal form each field contains a single value; a field may not contain a set of values or a nested record.
Subject contains a set of subject values, meaning it does not comply.
To solve the problem, the subjects are extracted into a separate Subject table: In Subject, ISBN is a foreign key: It refers to the primary key in Book, and makes the relationship between these two tables explicit.
Instead of one table in unnormalized form, there are now two tables conforming to the 1NF.
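The decomposition can be sketched with an in-memory database (the titles, ISBN and subjects are illustrative placeholders, not the article's data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE book (isbn TEXT PRIMARY KEY, title TEXT);
    CREATE TABLE subject (isbn TEXT REFERENCES book(isbn), subject TEXT);
""")
con.execute("INSERT INTO book VALUES ('1-56619-909-3', 'A Sample Book')")
con.executemany("INSERT INTO subject VALUES ('1-56619-909-3', ?)",
                [("Algebra",), ("Geometry",)])

# Each field now holds a single value; the set of subjects is recovered
# with a join instead of being packed into one multi-valued column.
subjects = [s for (s,) in con.execute(
    "SELECT subject FROM subject JOIN book USING (isbn)")]
```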
Satisfying 2NF The Book table below has a composite key of {Title, Format} (indicated by the underlining), which will not satisfy 2NF if some subset of that key is a determinant. At this point in our design the key is not finalised as the primary key, so it is called a candidate key. Consider the following table: All of the attributes that are not part of the candidate key depend on Title, but only Price also depends on Format. To conform to 2NF and remove duplicates, every non-candidate-key attribute must depend on the whole candidate key, not just part of it.
To normalize this table, make {Title} a (simple) candidate key (the primary key) so that every non-candidate-key attribute depends on the whole candidate key, and remove Price into a separate table so that its dependency on Format can be preserved: Now, the Book table conforms to 2NF.
Satisfying 3NF The Book table still has a transitive functional dependency ({Author Nationality} is dependent on {Author}, which is dependent on {Title}). A similar violation exists for genre ({Genre Name} is dependent on {Genre ID}, which is dependent on {Title}). Hence, the Book table is not in 3NF. To make it in 3NF, let's use the following table structure, thereby eliminating the transitive functional dependencies by placing {Author Nationality} and {Genre Name} in their own respective tables: Satisfying EKNF The elementary key normal form (EKNF) falls strictly between 3NF and BCNF and is not much discussed in the literature. It is intended "to capture the salient qualities of both 3NF and BCNF" while avoiding the problems of both (namely, that 3NF is "too forgiving" and BCNF is "prone to computational complexity"). Since it is rarely mentioned in literature, it is not included in this example.
Satisfying 4NF Assume the database is owned by a book retailer franchise that has several franchisees that own shops in different locations. And therefore the retailer decided to add a table that contains data about availability of the books at different locations: As this table structure consists of a compound primary key, it doesn't contain any non-key attributes and it's already in BCNF (and therefore also satisfies all the previous normal forms). However, assuming that all available books are offered in each area, the Title is not unambiguously bound to a certain Location and therefore the table doesn't satisfy 4NF.
That means that, to satisfy the fourth normal form, this table needs to be decomposed as well: Now, every record is unambiguously identified by a superkey, therefore 4NF is satisfied.
Satisfying ETNF: Suppose the franchisees can also order books from different suppliers. Let the relation also be subject to the following constraint: if a certain supplier supplies a certain title, and the title is supplied to the franchisee, and the franchisee is being supplied by the supplier, then the supplier supplies the title to the franchisee. This table is in 4NF, but the table is equal to the join of its projections: {{Supplier ID, Title}, {Title, Franchisee ID}, {Franchisee ID, Supplier ID}}. No component of that join dependency is a superkey (the sole superkey being the entire heading), so the table does not satisfy ETNF and can be further decomposed. The decomposition produces ETNF compliance.
Satisfying 5NF: To spot a table not satisfying 5NF, it is usually necessary to examine the data thoroughly. Suppose the table from the 4NF example with a small modification in its data, and let's examine whether it satisfies 5NF. Decomposing this table lowers redundancies, resulting in two tables, but the query joining these tables returns three more rows than it should. Adding another table to clarify the relation results in three separate tables. What will the JOIN return now? It actually is not possible to join these three tables. That means it wasn't possible to decompose the Franchisee - Book - Location table without data loss; therefore the table already satisfies 5NF.
C.J. Date has argued that only a database in 5NF is truly "normalized".
Satisfying DKNF Let's have a look at the Book table from previous examples and see if it satisfies the domain-key normal form: Logically, Thickness is determined by number of pages. That means it depends on Pages which is not a key. Let's set an example convention saying a book up to 350 pages is considered "slim" and a book over 350 pages is considered "thick".
This convention is technically a constraint, but it is neither a domain constraint nor a key constraint; therefore we cannot rely on domain constraints and key constraints to maintain data integrity.
In other words – nothing prevents us from putting, for example, "Thick" for a book with only 50 pages – and this makes the table violate DKNF.
To solve this, a table holding the enumeration that defines Thickness is created, and that column is removed from the original table: That way, the domain integrity violation has been eliminated, and the table is in DKNF.
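As a minimal sketch (the book titles and the helper function are invented for illustration), the convention can be enforced by deriving Thickness from Pages through a lookup rather than storing it as an independent column that could contradict the page count:

```python
# Derive Thickness from Pages per the 350-page convention, instead of
# storing it as free text that could contradict the page count.
def thickness(pages: int) -> str:
    return "Slim" if pages <= 350 else "Thick"

books = [("Example Book A", 50), ("Example Book B", 520)]   # hypothetical rows
catalog = [(title, pages, thickness(pages)) for title, pages in books]

# A 50-page book can no longer be recorded as "Thick":
assert thickness(50) == "Slim" and thickness(520) == "Thick"
```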
Satisfying 6NF:
A simple and intuitive definition of the sixth normal form is that "a table is in 6NF when the row contains the Primary Key, and at most one other attribute". That means, for example, the Publisher table designed while creating the 1NF needs to be further decomposed into two tables: The obvious drawback of 6NF is the proliferation of tables required to represent the information on a single entity. If a table in 5NF has one primary key column and N attributes, representing the same information in 6NF will require N tables; multi-field updates to a single conceptual record will require updates to multiple tables; and inserts and deletes will similarly require operations across multiple tables. For this reason, in databases intended to serve online transaction processing (OLTP) needs, 6NF should not be used.
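The N-attributes-to-N-tables decomposition can be sketched generically. The Publisher column names below are assumptions for illustration, not the article's actual schema:

```python
def to_6nf(table, key):
    """Split a table (list of row dicts) into one key+attribute table per non-key column."""
    attrs = [a for a in table[0] if a != key]
    return {a: [{key: row[key], a: row[a]} for row in table] for a in attrs}

# Hypothetical Publisher rows -- the real column names are not given in the text.
publishers = [{"id": 1, "name": "Example Press", "country": "US"}]
parts = to_6nf(publishers, "id")

assert set(parts) == {"name", "country"}                 # N attributes -> N tables
assert parts["name"] == [{"id": 1, "name": "Example Press"}]
```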
However, in data warehouses, which do not permit interactive updates and which are specialized for fast query on large data volumes, certain DBMSs use an internal 6NF representation – known as a columnar data store. In situations where the number of unique values of a column is far less than the number of rows in the table, column-oriented storage allows significant savings in space through data compression. Columnar storage also allows fast execution of range queries (e.g., show all records where a particular column is between X and Y, or less than X). In all these cases, however, the database designer does not have to perform 6NF normalization manually by creating separate tables. Some DBMSs that are specialized for warehousing, such as Sybase IQ, use columnar storage by default, but the designer still sees only a single multi-column table. Other DBMSs, such as Microsoft SQL Server 2012 and later, let you specify a "columnstore index" for a particular table.
**Wordlock**
Wordlock:
Wordlock is a brand of combination locks, made by Wordlock, Inc., that differs from traditional combination locks in that it has letters on its dials instead of numbers. This allows the combination to be a four-letter or five-letter word or name, similar to a password, and therefore potentially easier to remember than a series of digits. Wordlocks come in luggage locks, bike locks, padlocks, cable locks and commercial locks.
History:
The Chinese created the first word combination lock in the 13th century. The idea never caught on in the West, however, until Todd Basche, former Vice President of Software Applications at Apple Inc., invented the modern word lock in 2004. He and Rahn Basche founded WordLock, Inc. in 2007 in Santa Clara, California, USA. Todd's patented WordLock algorithm maximizes the number of four-letter and five-letter words that can be spelled on the Wordlock dials.
WordLock won the Staples Inc. Invention Quest in 2004 and "Top 100 New Inventions" distinction at the U.S. Patent and Trademark Office's Invent Now America competition in 2008.
Possible combinations:
The five-ring WordLock contains 10 letters per ring. One such example follows: Each ring rotates independently of the others, yielding a possible 10⁴ (or 10,000) different combinations. WordLock contains one blank space on the fifth dial to make four-letter words. About 2,000 words are possible as combinations. However, this 2,000-word figure does not include the many possibilities for quasi-words (BLATS or WOOT); certain names (DILAN or MOSES); and acronyms, foreign words or gibberish known only to the lock owner.
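Counting combinations and spellable words on such a lock is a short exercise. The dial letterings and the sample dictionary below are invented for illustration (the real WordLock layout is patented and not reproduced in the text); the sketch uses four dials of five letters each for brevity:

```python
from itertools import product

# Hypothetical dial letterings -- not the actual WordLock layout.
dials = ["SBTLW", "AEOIU", "NRLST", "DEKTY"]
words = {"SALT", "BEST", "WELD", "TONE"}      # tiny sample dictionary

# Every dial position is independent, so combinations multiply.
combos = {"".join(c) for c in product(*dials)}
assert len(combos) == 5 ** 4                  # 625 combinations here

spellable = combos & words                    # which dictionary words the dials can spell
assert spellable == words
```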
**Cantic 8-cube**
Cantic 8-cube:
In eight-dimensional geometry, a cantic 8-cube or truncated 8-demicube is a uniform 8-polytope, being a truncation of the 8-demicube.
Alternate names:
Truncated demiocteract
Truncated hemiocteract (Jonathan Bowers)
Cartesian coordinates:
The Cartesian coordinates for the vertices of a truncated 8-demicube centered at the origin and edge length 6√2 are coordinate permutations: (±1,±1,±3,±3,±3,±3,±3,±3) with an odd number of plus signs.
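The vertex set described above can be generated mechanically, which also gives the vertex count: C(8,2) = 28 placements of the two ±1 coordinates, times 2⁸/2 = 128 sign patterns with an odd number of plus signs. A short enumeration sketch:

```python
from itertools import combinations, product

# All permutations of (±1, ±1, ±3, ±3, ±3, ±3, ±3, ±3)
# with an odd number of plus signs.
vertices = set()
for ones in combinations(range(8), 2):            # positions of the two 1s
    mags = [1 if i in ones else 3 for i in range(8)]
    for signs in product((1, -1), repeat=8):
        if signs.count(1) % 2 == 1:               # odd number of plus signs
            vertices.add(tuple(s * m for s, m in zip(signs, mags)))

assert len(vertices) == 28 * 128                  # 3584 vertices
```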
**Ombré**
Ombré:
Ombré (literally "shaded" in French) is the blending of one color hue to another, usually moving tints and shades from light to dark. It has become a popular feature for hair coloring, nail art, and even baking, in addition to its uses in home decorating and graphic design. In contrast to ombré, sombré is a much softer and more gradual shading of one color to another.
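In graphic design terms, the blend is a simple linear interpolation between two colors. A minimal sketch (the RGB endpoints are arbitrary example values):

```python
def ombre(start, end, steps):
    """Linearly blend one RGB color into another -- a minimal ombré/gradient sketch."""
    return [
        tuple(round(s + (e - s) * i / (steps - 1)) for s, e in zip(start, end))
        for i in range(steps)
    ]

shades = ombre((20, 20, 20), (220, 200, 180), 5)   # dark "roots" to light "tips"
assert shades[0] == (20, 20, 20) and shades[-1] == (220, 200, 180)
assert len(shades) == 5
```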
In fashion:
History:
Using shading or creating an ombré effect is ubiquitous. For instance, in fabric printing, a special printing block, called a "rainbowed" block, was used in the early 19th century to produce textiles with graduated color designs. Ombré as a textile treatment came back into fashion around 1840 and was used throughout the 19th century. In machine embroidery, an ombré effect was achieved by dyeing the threads in graded colors beforehand.
21st century:
"Ombré" as a hair-coloring technique was popularized in 2000 when the singer Aaliyah had her hair dyed in a subtle gradual fade from black at the roots to lighter towards the hair tips. As of 2010, the ombré hair trend was still popular. The style has been adopted by many celebrities, such as Britney Spears, Alexa Chung, Lauren Conrad, Vanessa Hudgens, Drew Barrymore, Beyoncé, and even Jared Leto, among others. One stylist found that the ombré hairstyle requires very little upkeep, making it easier for it to remain on trend. While ombré was initially the gradual lightening of the hair from dark to light, it has expanded to take on various other techniques, including the fading of a natural color from the roots to a more unnatural color (such as turquoise or lavender) at the tips.
The popularization of ombré hair encouraged the spread of this technique to other facets of beauty, such as nail art. The adoption of the ombré nails trend by celebrities such as Lauren Conrad, Victoria Beckham, and Katy Perry, helped popularise it.
Home:
Following the early 21st-century trend, many popular home decorators have incorporated ombré into their home decorating styles. Ombré can be used in many products from textiles to glassware, and as a wall-painting technique, where walls are painted in colors graduating to a lighter or darker tone towards the other end. Martha Stewart describes the gentle progression of color in ombré as a transition from wakefulness to slumber. David Kohn Architects have explored the ombré effect in the design of the floor tiling of the interior of an apartment, Carrer Avinyó, Barcelona. The tile pattern is graded in colour from green at one end of the apartment to red at the other to differentiate the two owners' private spaces. The encaustic tiles were manufactured by Mosaics Martí, suppliers of tiles to Antoni Gaudí.
Baking:
In baking, ombré effects are typically achieved through applied techniques such as frosting on a cake, but baking individual cake layers in graduated tones from light to dark is possible. The effect can also be achieved by dyeing and stacking the layers of a cake in the ombré fade.
Makeup:
Due to the colour range available for different types of cosmetic products, an ombré effect can be achieved by blending two or more shades together on the eyes, lips, or cheeks. The gradient from dark to light is similar to the practice of contouring, where different tints and shades of natural skin tones are blended, but differs in that contouring is often intended to artificially sculpt the face, whereas ombré can be said to simply mean the blending of any two or more shades, natural or otherwise.
**Synchronous orbit**
Synchronous orbit:
A synchronous orbit is an orbit in which an orbiting body (usually a satellite) has a period equal to the average rotational period of the body being orbited (usually a planet), and in the same direction of rotation as that body.
Simplified meaning:
A synchronous orbit is an orbit in which the orbiting object (for example, an artificial satellite or a moon) takes the same amount of time to complete an orbit as it takes the object it is orbiting to rotate once.
Properties:
A satellite in a synchronous orbit that is both equatorial and circular will appear to be suspended motionless above a point on the orbited planet's equator. For synchronous satellites orbiting Earth, this is also known as a geostationary orbit. However, a synchronous orbit need not be equatorial nor circular. A body in a non-equatorial synchronous orbit will appear to oscillate north and south above a point on the planet's equator, whereas a body in an elliptical orbit will appear to oscillate eastward and westward. As seen from the orbited body, the combination of these two motions produces a figure-8 pattern called an analemma.
Nomenclature:
There are many specialized terms for synchronous orbits depending on the body orbited. The following are some of the more common ones. A synchronous orbit around Earth that is circular and lies in the equatorial plane is called a geostationary orbit. The more general case, when the orbit is inclined to Earth's equator or is non-circular, is called a geosynchronous orbit. The corresponding terms for synchronous orbits around Mars are areostationary and areosynchronous orbits.
Formula:
For a stationary synchronous orbit:

Rsyn = ∛(G · m₂ · T² / 4π²)

G = gravitational constant
m₂ = mass of the celestial body
T = rotational period of the body
Rsyn = radius of the orbit

By this formula one can find the stationary orbit of an object in relation to a given body.
Orbital speed (how fast a satellite is moving through space) is calculated by multiplying the angular speed of the satellite by the orbital radius.
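The formula and the orbital-speed rule above can be checked numerically. A short sketch using Earth's approximate mass and sidereal rotation period (both standard reference values) recovers the familiar geostationary radius of about 42,164 km:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2

def synchronous_radius(mass, period):
    """Rsyn = cube root of (G * m2 * T^2 / (4 * pi^2))."""
    return (G * mass * period ** 2 / (4 * math.pi ** 2)) ** (1 / 3)

# Earth: mass ~5.972e24 kg, sidereal day ~86164 s
r = synchronous_radius(5.972e24, 86164)
v = (2 * math.pi / 86164) * r       # orbital speed = angular speed * radius

assert abs(r - 4.216e7) < 1e5       # ~42,164 km from Earth's center
assert abs(v - 3075) < 10           # ~3.07 km/s
```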
Examples:
An astronomical example is Pluto's largest moon Charon.
Much more commonly, synchronous orbits are employed by artificial satellites used for communication, such as geostationary satellites.
For natural satellites, which can attain a synchronous orbit only by tidally locking their parent body, the synchronous orbit always goes hand in hand with synchronous rotation of the satellite. This is because the smaller body becomes tidally locked faster, and by the time a synchronous orbit is achieved, it has had a locked synchronous rotation for a long time already.
**Tryptoline**
Tryptoline:
Tryptoline, also known as tetrahydro-β-carboline and tetrahydronorharmane, is a natural organic derivative of beta-carboline. It is an alkaloid chemically related to tryptamines. Derivatives of tryptoline have a variety of pharmacological properties and are known collectively as tryptolines.
Pharmacology:
Many tryptolines are competitive selective inhibitors of the enzyme monoamine oxidase type A (MAO-A). 5-Hydroxytryptoline and 5-methoxytryptoline (pinoline) are the most active monoamine oxidase inhibitors (MAOIs) with IC50s of 0.5 μM and 1.5 μM respectively, using 5-hydroxytryptamine (serotonin) as substrate.
Tryptolines are also potent reuptake inhibitors of serotonin and epinephrine, with a significantly greater selectivity for serotonin. Comparison of the inhibition kinetics of tetrahydro-β-carbolines for serotonin and epinephrine reuptake with that of the platelet aggregation response to these amines has shown that 5-hydroxymethtryptoline, methtryptoline, and tryptoline are poor inhibitors of reuptake. In all respects, 5-hydroxytryptoline and 5-methoxytryptoline showed greater pharmacological activity than tryptoline and methtryptoline.
Although the in vivo formation of tryptolines has been a matter of controversy, they have profound pharmacological activity.
**Tripod packing**
Tripod packing:
In combinatorics, tripod packing is a problem of finding many disjoint tripods in a three-dimensional grid, where a tripod is an infinite polycube, the union of the grid cubes along three positive axis-aligned rays with a shared apex. Several problems of tiling and packing tripods and related shapes were formulated in 1967 by Sherman K. Stein. Stein originally called the tripods of this problem "semicrosses", and they were also called Stein corners by Solomon W. Golomb. A collection of disjoint tripods can be represented compactly as a monotonic matrix, a square matrix whose nonzero entries increase along each row and column and whose equal nonzero entries are placed in a monotonic sequence of cells, and the problem can also be formulated in terms of finding sets of triples satisfying a compatibility condition called "2-comparability", or of finding compatible sets of triangles in a convex polygon. The best lower bound known for the number of tripods that can have their apexes packed into an n×n×n grid is Ω(n^1.546), and the best upper bound is n²/exp(Ω(log* n)), both expressed in big Omega notation.
Equivalent problems:
The coordinates (xi, yi, zi) of the apexes of a solution to the tripod problem form a 2-comparable set of triples, where two triples are defined as being 2-comparable if there are either at least two coordinates where one triple is smaller than the other, or at least two coordinates where one triple is larger than the other. This condition ensures that the tripods defined from these triples do not have intersecting rays. Another equivalent two-dimensional version of the question asks how many cells of an n×n array of square cells (indexed from 1 to n) can be filled in by the numbers from 1 to n in such a way that the non-empty cells of each row and each column of the array form strictly increasing sequences of numbers, and the positions holding each value i form a monotonic chain within the array. An array with these properties is called a monotonic matrix. A collection of disjoint tripods with apexes (xi, yi, zi) can be transformed into a monotonic matrix by placing the number zi in array cell (xi, yi), and vice versa. The problem is also equivalent to finding as many triangles as possible among the vertices of a convex polygon, such that no two triangles that share a vertex have nested angles at that vertex. This triangle-counting problem was posed by Peter Braß and its equivalence to tripod packing was observed by Aronov et al.
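The 2-comparability condition is easy to state in code. The apex triples below are a made-up toy packing, used only to exercise the definition and the monotonic-matrix correspondence:

```python
from itertools import combinations

def two_comparable(a, b):
    """True if one triple exceeds the other in at least two coordinates."""
    greater = sum(x > y for x, y in zip(a, b))
    less = sum(x < y for x, y in zip(a, b))
    return greater >= 2 or less >= 2

# Apexes of a hypothetical small packing: pairwise 2-comparability means
# the corresponding tripods have no intersecting rays.
apexes = [(0, 0, 0), (1, 1, 1), (2, 2, 0)]
assert all(two_comparable(a, b) for a, b in combinations(apexes, 2))

# The equivalent monotonic-matrix view: place z in cell (x, y).
matrix = {(x, y): z for x, y, z in apexes}
assert matrix[(1, 1)] == 1
```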
Lower bounds:
It is straightforward to find a solution to the tripod packing problem with Ω(n^(3/2)) tripods. For instance, for k = ⌊√n⌋, the Ω(n^(3/2)) triples are 2-comparable.
After several earlier improvements to this naïve bound, Gowers and Long found solutions to the tripod problem of cardinality Ω(n^1.546).
Upper bounds:
From any solution to the tripod packing problem, one can derive a balanced tripartite graph whose vertices are three copies of the numbers from 0 to n−1 (one for each of the three coordinates), with a triangle of edges connecting the three vertices corresponding to the coordinates of the apex of each tripod. There are no other triangles in these graphs (they are locally linear graphs) because any other triangle would lead to a violation of 2-comparability. Therefore, by the known upper bounds to the Ruzsa–Szemerédi problem (one version of which is to find the maximum density of edges in a balanced tripartite locally linear graph), the maximum number of disjoint tripods that can be packed in an n×n×n grid is o(n²), and more precisely n²/exp(Ω(log* n)). Although Tiskin writes that "tighter analysis of the parameters" can produce a bound that is less than quadratic by a polylogarithmic factor, he does not supply details and his proof that the number is o(n²) uses only the same techniques that are known for the Ruzsa–Szemerédi problem, so this stronger claim appears to be a mistake. An argument of Dean Hickerson shows that, because tripods cannot pack space with constant density, the same is true for analogous problems in higher dimensions.
Small instances:
For small instances of the tripod problem, the exact solution is known. The numbers of tripods that can be packed into an n×n×n cube, for n up to 11, are: For instance, the figure shows the 11 tripods that can be packed into a 5×5×5 cube.
The number of distinct monotonic matrices of order n, for n = 1, 2, 3, …, is
**Coconut milk**
Coconut milk:
Coconut milk is an opaque, milky-white liquid extracted from the grated pulp of mature coconuts. The opacity and rich taste of coconut milk are due to its high oil content, most of which is saturated fat. Coconut milk is a traditional food ingredient used in Southeast Asia, Oceania, South Asia, and East Africa. It is also used for cooking in the Caribbean, tropical Latin America, and West Africa, where coconuts were introduced during the colonial era.
Coconut milk is differentiated into subtypes based on fat content. They can be generalized into coconut cream (or thick coconut milk) with the highest amount of fat; coconut milk (or thin coconut milk) with a maximum of around 20% fat; and coconut skim milk with negligible amounts of fat. This terminology is not always followed in commercial coconut milk sold in Western countries. Coconut milk can also be used to produce milk substitutes (differentiated as "coconut milk beverages"). These products are not the same as regular coconut milk products, which are meant for cooking, not drinking. A sweetened, processed, coconut milk product from Puerto Rico is also known as cream of coconut. It is used in many desserts and beverages like the piña colada, though it should not be confused with coconut cream.
Nutrition:
In a 100 milliliter (ml) portion, coconut milk contains 230 kilocalories and is 68% water, 24% total fat, 6% carbohydrates, and 2% protein (see table). The fat composition includes 21 grams of saturated fat, half of which is lauric acid. Coconut milk is a rich source (20% or more of the Daily Value, DV) of manganese (44% DV per 100 g) and an adequate source (10–19% DV per 100 g) of phosphorus, iron, and magnesium, with no other nutrients in significant content (see table).
Definition and terminology:
Coconut milk is a relatively stable oil-in-water emulsion with proteins that act as emulsifiers and thickening agents. It is opaque and milky white in color and ranges in consistency from watery to creamy. Based on fat content, coconut milk is divided into different subtypes generally simplified into "coconut cream", "coconut milk", and "coconut skim milk", from highest to lowest respectively. Coconut milk and coconut cream (also called "thin coconut milk" and "thick coconut milk", respectively) are traditionally differentiated in countries where coconuts are native based on the stages of extraction. They are also differentiated in modern standards set by the Asian and Pacific Coconut Community (APCC) and the Food and Agriculture Organization of the United Nations (FAO). However, the terminologies are not always followed in commercial coconut milk (especially in western countries) because these standards are not mandatory. This can cause confusion among consumers. The Asian and Pacific Coconut Community standardizes coconut milk and coconut cream products as: The Codex Alimentarius of the FAO standardizes coconut milk and coconut cream products as: Coconut milk can also sometimes be confused with coconut water. Coconut water is the clear fluid found within the coconut seed, while coconut milk is the extracted liquid derived from the manual or mechanical crushing of the white inner flesh of mature coconuts. Coconut cream should also not be confused with creamed coconut, which is a semi-solid paste made from finely ground coconut pulp; and cream of coconut, which is a processed product made from heavily sweetened coconut cream.
Traditional preparation:
Coconut milk is traditionally made by grating the white inner flesh of mature coconuts and mixing the shredded coconut pulp with a small amount of hot water in order to suspend the fat present in the grated pulp. The grating process can be carried out manually or by machine.
Coconut milk is also traditionally divided into two grades: coconut cream (or thick coconut milk) and thin coconut milk. Coconut cream contains around 20% to 50% fat; while thin coconut milk contains 5% to 20% fat.
Coconut cream is extracted from the first pressings of grated coconut pulp directly through cheesecloth. Sometimes a small amount of hot water may also be added, but generally coconut cream is extracted with no added water. Thin coconut milk, on the other hand, is produced by the subsequent pressings after soaking the squeezed coconut pulp with hot water. Gravity separation can also be used to derive a top layer of coconut cream and a bottom layer of coconut skim milk. This is achieved by simply allowing the extracted liquid to stand for an hour. Conversely, coconut cream can be diluted into thinner coconut milk by simply adding water. Traditionally prepared coconut milk is utilized immediately after being freshly extracted because it spoils easily when exposed to air. It becomes rancid after a few hours at room temperatures of 28 to 30 °C (82 to 86 °F) due to lipid oxidation and lipolysis. Rancid coconut milk gives off a strong unpleasant smell and has a distinctive soapy taste. Coconut cream contains a higher amount of soluble, suspended solids, which makes it a good ingredient for desserts, and rich and dry sauces. Because thin milk contains a lesser amount of these soluble solids, it is mainly used in general cooking. The distinction between coconut cream and thin coconut milk is not usually made in western nations because fresh coconut milk is uncommon in these countries and most consumers buy coconut milk in cartons or cans. Coconut milk is also an intermediate step in the traditional wet-process methods of producing virgin coconut oil by gradual heating, churning, or fermentation. These methods, however, are less efficient than coconut oil production from copra.
Coconut graters:
Coconut graters (also called "coconut scrapers"), a necessary tool for traditionally extracting coconut milk, were part of the material culture of the Austronesian peoples. From Island Southeast Asia, the tool was carried along on the sea voyages of the Austronesian expansion, both for colonization and trade, reaching as far as Polynesia in the east, and Madagascar and the Comoros in the west in prehistoric times. The technology also spread to non-Austronesian cultures in coastal East Africa by proximity. Manual coconut graters remain standard kitchen equipment in households in the tropical Asia-Pacific and Eastern Africa, underscoring the importance of coconut milk and coconut oil extraction in the Indo-Pacific. The basic design of coconut graters consists of a low bench or stool with a horizontal serrated disk (made of metal in Asia and Africa, and stone or shell in Oceania) attached on one end. A person sits on the bench and repeatedly scrapes the inner surface of halved coconut shells with both hands over the metal disk. The scrapings are gathered by a container placed below. More modern mechanical coconut graters dating back to the mid-1800s consist of serrated blades with a hand crank. This version is believed to be a British invention.
Processed coconut milk products:
Commercially processed coconut milk products use largely the same processes to extract coconut milk from pulp, though they use more mechanical equipment like deshelling machines, grinders and pulverizers, motorized coconut shredders, and coconut milk extractors. They differ significantly in the bottling or canning process, however. Processed coconut milk products are first passed through a 100-mesh filter. They are pasteurized indirectly by double boiling at around 70 °C (158 °F), taking care not to exceed 80 °C (176 °F), the temperature at which coconut milk starts to coagulate. After pasteurization, they are immediately transferred to filling vessels and sealed before being cooled down. They are then packed into bottles, cans, or pouches and blast frozen for storage and transport. Manufacturers of canned coconut milk typically combine diluted and comminuted milk with the addition of water as a filler. Depending on the brand and age of the milk itself, a thicker, more paste-like consistency floats to the top of the can (a gravity separation, similar to traditional methods), and is sometimes separated and used in recipes that require coconut cream rather than coconut milk. Some brands sold in Western countries undergo homogenization and add additional thickening agents and emulsifiers to prevent the milk from separating inside the can. Due to factors like pasteurization and minimal contact with oxygen, processed coconut milk generally has a longer shelf life than traditionally prepared coconut milk. It is also more efficient than traditional methods at extracting the maximum amount of coconut milk from grated coconut.
Coconut milk powder:
Coconut cream can be dehydrated into coconut milk powder, which has a far longer shelf life. It is processed by adding maltodextrin and casein to coconut cream to improve fluidity and then spray-drying the mixture. The powder is packaged in moisture-proof containers. To use, water is simply added to the coconut milk powder.
Coconut skim milk:
Coconut skim milk is coconut milk with very low levels of fat (0% to 1.5%). It is a byproduct of coconut cream and coconut oil production and is usually discarded. However, it is increasingly being used as a food ingredient for products which require coconut flavoring without the fats (including coconut powder, coconut honey, and coconut jam). It can also be used as a base in the production of coconut milk beverages used as milk substitutes, as it does not contain the high levels of fat characteristic of regular coconut milk while still being a good source of soluble proteins.
Milk substitutes:
Processed coconut milk can be used as a substitute for milk beverages, usually marketed as "coconut milk beverage". These are sometimes confusingly also simply labeled as "coconut milk", though they are not the same product as the coconut milk used for cooking (which is not meant for drinking). Milk substitutes from coconut are basically coconut milk diluted with water, or coconut skim milk with additives. They contain less fat and fewer calories than milk, but also less protein. They contain high amounts of potassium and are good sources of fiber and iron. They are also commonly fortified with vitamin D and calcium.
Filled milk:
Coconut milk is also used widely for filled milk products. It is blended with milk (usually skim milk or powdered milk) for its vegetable oils and proteins, which act as substitutes for expensive butterfat in some processed milk products. These include low-fat filled milk, evaporated reconstituted milk, and sweetened condensed milk.
Cheese and custard production:
Coconut milk can also be used in cheese and custard production, substituting at most 50% of milk without lowering the overall quality of the products. By mixing skim milk with coconut milk, one procedure develops cheeses – including a garlic-spiced soft cheese called queso de ajo, a Gouda cheese substitute, and a Roquefort substitute called "Niyoblue" (a portmanteau of Tagalog: niyog, "coconut", and "blue").
Soy milk enrichment:
Coconut milk can be used to enrich the fat content of soy milk, improving its texture and taste to be closer to that of real milk. Coconut cream can also be added to soy milk in the production of tofu to enrich its caloric density without affecting its palatability.
Cream of coconut:
Cream of coconut is a thick, heavily sweetened, processed coconut milk product resembling condensed milk. It was originally produced by the Puerto Rican company Coco López and is used most notably in piña coladas in the United States. It can also be used for other cocktail drinks and various desserts. It should not be confused with or used as a substitute for coconut cream.
Cuisine:
Coconut milk derivatives:
In the Philippines, coconut milk can also be further processed into coconut caramel and coconut curds, both known as latík. The coconut caramel latík, made from a reduction of muscovado sugar and coconut milk, has been developed into a commercial product marketed as coconut syrup (not to be confused with coconut sugar derived from coconut sap).
A similar product found throughout Southeast Asia is coconut jam. It is known as matamís sa báo in the Philippines and uses only coconut milk and sugar. However, the coconut jam versions from Indonesia, Malaysia, and Singapore (kaya); Thailand (sangkhaya); Cambodia (sankiah); and Vietnam (banh gan), add eggs in addition to sugar. The latter versions are sometimes anglicized as "coconut custard" to distinguish them from the version without egg. Coconut jam and coconut custard have a thicker, jam-like consistency and are used as ingredients or fillings in various traditional desserts.
Food:
Coconut milk can be used in both sweet and savory dishes. In many tropical and Asian cuisines, it is a traditional ingredient in curries and other dishes, including desserts.
Southeast Asia:
In Indonesia, coconut milk is used in various recipes ranging from savoury dishes – such as rendang, soto, gulai, mie celor, sayur lodeh, gudeg, sambal goreng krechek, and opor ayam – to sweet desserts, such as serabi, es cendol and es doger. Soto is ubiquitous in Indonesia and considered one of Indonesia's national dishes. Coconut milk is also used in coconut rice, a widespread Southeast Asian dish of rice cooked in coconut milk, including the nasi lemak of Malaysia and the nasi uduk of Indonesia.
In Malaysia, coconut milk is one of the essential ingredients in many dishes, including some of the region's most popular, such as the ubiquitous nasi lemak and nasi dagang, rendang, laksa, gulai, and Tamil- and Mamak-style curry. It is also used in dessert-making, such as kuih lapis, kaya, and dodol.
In the Philippines, diverse dishes cooked in coconut milk are called ginataán. They can range from savoury dishes to desserts. Coconut milk is widely used to make traditional Filipino kakanín (the generic term for rice pastries), including bibingka and biko, among others.
In Thailand, coconut milk is used in dishes such as tom kha kai, khao tom mat, mango sticky rice, and tom yum.
Latin America and the Caribbean:
In Brazil, coconut milk is mostly used in northeastern cuisine, generally with seafood stews and desserts. In Venezuela, pulp dishes are prepared with coconut milk and shredded fish in a dish called mojito en coco. In Colombia and Panama, the grated flesh of coconut and coconut milk are used to make sweet titoté, a key ingredient in making arroz con coco (coconut rice).
Coconut milk is used to make traditional Venezuelan dishes, such as majarete (a typical Venezuelan dessert), and arroz con coco (the Venezuelan version of coconut rice).
Drink:
In Southeast Asia, coconut milk is used to make many traditional drinks. Cendol is a popular iced drink from this region containing chilled coconut milk and green jellies made of rice flour. Coconut milk is also used in hot drinks such as bandrek and bajigur, two popular drinks from Indonesia. Sweetened coconut milk, and coconut milk diluted with water, are two popular coconut beverages in southern China and Taiwan.
The jelly-like pulp from the inside of the coconut is often added to coconut water to make a tropical drink. In Brazil, for example, coconut milk is mixed with sugar and cachaça to make a cocktail called batida de côco. Puerto Rico is also known for tropical drinks containing coconut, such as the piña colada and coquito, which typically contain coconut milk or coconut cream.
Saturated fat and health risk:
One of the most prominent components of coconut milk is coconut oil, which many health organizations discourage consuming in significant amounts because of its high level of saturated fat. Excessive coconut milk consumption can also raise blood cholesterol levels because of its lauric acid content, a saturated fatty acid that contributes to higher blood cholesterol.
Horticulture:
In 1943, it was discovered that coconut milk could actively encourage plant growth. Although many factors contribute to this effect, the main cause is the presence in coconut milk of a cytokinin known as zeatin. While the zeatin in coconut milk speeds up plant growth in general, it does not do so in certain plants, such as radishes. However, when 10% coconut milk is added to the substrate in which wheat is grown, substantial improvements have been noted.
Commerce:
Coconuts are widely produced in tropical climates and exported globally as canned products, most frequently to North America and Europe.
**Caffitaly**
Caffitaly:
The Caffitaly System (known in some markets as the Caffita System) is a capsule system for making espresso and other coffee drinks in home espresso machines. The name is a portmanteau of caffè, the Italian word for coffee, and Italy. Caffitaly is based in Bologna, Italy.
Caffitaly was developed by Caffita System SpA and has been adopted by other manufacturers, notably Bewley's of Ireland, Princess of the Netherlands, Germany's Tchibo, Julius Meinl, Dallmayr, Italy's Caffe Cagliari, Crem Caffe, Swiss Chicco D'oro, Três Corações in Brazil, USA's Coffee Bean & Tea Leaf, Australia's MAP Coffee and Israeli Espresso Club as well as Löfbergs in Sweden. It is similar in principle to the competing Nespresso and Tassimo capsule systems, in which a sealed capsule containing a premeasured amount of coffee is inserted into the machine, through which hot water is forced at high pressure into a coffee cup. The capsule can be disposed of easily once the coffee is made, and the machine requires little maintenance or cleaning.
Like similar proprietary coffee-making systems, Caffitaly can be seen as an example of the razor and blades business model, in which the relatively low price of the coffeemaker is recouped through a higher profit margin on the coffee capsules it uses.
Caffita sponsored the Lampre–Caffita cycling team in 2005.
Caffitaly Systems also produces the CBTL Capsule System for The Coffee Bean & Tea Leaf and the MAP Italian Coffee Capsule System for Map Coffee Australia.
Danesi of Italy meanwhile has associated itself with Caffitaly system brewing machines, selling in the USA through Boston King Coffee. In Australia, Woolworths and Gloria Jeans promote and make their own capsules exclusively for the system, with the latter also selling Gloria Jeans branded machines. Coles also currently make their own capsules for the machine under the "Mr. Barista" brand name.
**Oculocardiac reflex**
Oculocardiac reflex:
The oculocardiac reflex, also known as the Aschner phenomenon, Aschner reflex, or Aschner–Dagnini reflex, is a decrease in pulse rate associated with traction applied to the extraocular muscles and/or compression of the eyeball. The reflex is mediated by nerve connections between the ophthalmic branch of the trigeminal cranial nerve, via the ciliary ganglion, and the vagus nerve of the parasympathetic nervous system. Nerve fibres from the maxillary and mandibular divisions of the trigeminal nerve have also been documented. These afferents synapse with the visceral motor nucleus of the vagus nerve, located in the reticular formation of the brain stem. The efferent portion is carried by the vagus nerve from the cardiovascular center of the medulla to the heart, increased stimulation of which leads to decreased output of the sinoatrial node. This reflex is especially sensitive in neonates and children, particularly during strabismus correction surgery, and can be profound during eye examination for retinopathy of prematurity. However, it may also occur in adults. Bradycardia, junctional rhythm and asystole, all of which may be life-threatening, can be induced through this reflex. It has also been observed during many pan-facial trauma surgeries owing to stimulation of any of the three branches of the trigeminal nerve.
Treatment:
The reflex can be blocked by intravenous injection of an antimuscarinic acetylcholine (ACh) antagonist, such as atropine or glycopyrrolate. If bradycardia does occur, removal of the stimulus is immediately indicated. This often results in the restoration of normal sinus rhythm of the heart. If not, the use of atropine or glycopyrrolate will usually be successful and permit continuation of the surgical procedure. Caution should be used with fast-push intravenous opioids and dexmedetomidine, which can exacerbate the bradycardia. In extreme cases, such as asystole, cardiopulmonary resuscitation may be required.
**Nostepinne**
Nostepinne:
The nostepinne, also known as a nostepinde or nøstepinde, is a tool used in the fiber arts to wind yarn, often hand-spun yarn, into a ball from which one can easily knit, crochet, or weave. In its simplest form, it is a dowel, generally between 10–12 inches (25–30 cm) long and most frequently made of wood, around which yarn can be wound. Decoratively and ornately carved nostepinnes are common. The top of the nostepinne sometimes incorporates a notch or a groove which allows one end of the yarn to be held secure while the rest is wound into a ball. The ball of yarn formed by a nostepinne is a "center pull" ball, allowing the knitter to draw the working yarn from the center of the ball rather than the outside. This keeps the ball from rolling around the surface it is sitting on and provides a more consistent tension. These center-pull balls are called "cakes" because of their short, cylindrical shape.
**Electron bifurcation**
Electron bifurcation:
In biochemistry, electron bifurcation (EB) refers to a system that enables an unfavorable (endergonic) transformation by coupling it to a favorable (exergonic) transformation. Two electrons are involved: one flows to an acceptor with a higher reduction potential than the donor, and the other to an acceptor with a lower reduction potential. The process is suspected of being common in bioenergetics.
Two versions of EB are recognized. One involves redox of quinones and the other involves flavins. Quinones and flavins are cofactors that are capable of undergoing 2 e− – 2 proton redox. A pervasive example of electron bifurcation is the Q cycle, which is part of the machinery that results in oxidative phosphorylation. In that case one electron from ubiquinol is directed to a Rieske cluster and the other electron is directed to a cytochrome b.
**Slate industry**
Slate industry:
The slate industry is the industry related to the extraction and processing of slate. Slate is either quarried from a slate quarry or reached by tunneling in a slate mine. Common uses for slate include as a roofing material, a flooring material, gravestones and memorial tablets, and electrical insulation.
Slate mines are found around the world. 90% of Europe's natural slate used for roofing originates from the slate industry in Spain. The major slate mining region in the United Kingdom is Wales; there are also several slate quarries in Cornwall, including a prominent one at Delabole, and in the Lake District. In the remainder of Continental Europe and the Americas, Portugal, Italy, Germany, Brazil, the east coast of Newfoundland, the Slate Valley of Vermont and New York, Pennsylvania, and Virginia are important producing regions. The Slate Valley area, centering on the town of Granville in the state of New York, is one of the places in the world where colored slate (i.e. slate which is not grey or blue) is obtained. (A fuller account is given in the article Slate: section Slate extraction.)
Slate industry in Spain:
Ninety percent of Europe's natural slate used for roofing originates from the slate industry in Spain, with the region of Galicia being the primary production source.
In Galicia, the larger slate production companies are concentrated in Valdeorras in Ourense, with other important sites being situated in Quiroga, Ortigueira and Mondoñedo.
The slate deposits in this region of northern Spain are over 500 million years old, having formed during the Palaeozoic era. The colour and texture of the slate produced depend largely on the tectonic environment, the source of the sedimentary material of which the slate is composed, and the chemical and physical conditions prevalent during the sedimentation process. The region has been subjected to periods of volcanism and magmatic activity, leading to a unique geological development.
An important use of Spanish slate is as a roofing material. It is particularly suitable for this purpose as it has a low water absorption index of less than 0.4%, making it very resistant to frost damage and breakage due to freezing. Tiles produced from Spanish slate are usually hung using a unique hook fixing method, which reduces the appearance of weak points on the tile since no holes are drilled, and allows narrower tiles to be used to create roofing features such as valleys and domes. Hook fixing is especially prevalent in areas subject to severe climatic conditions, since there is a greater resistance to wind uplift as the lower edge of the slate is secured.
Slate industry in Wales:
Background:
Slate has been quarried in north Wales for almost two millennia, with the Segontium Roman fort at Caernarfon being roofed by local slate in the late second century. Export of slate has been carried out for several centuries, which was recently confirmed by the discovery in the Menai Strait of the wreck of a 16th-century wooden ship carrying finished slates. Large-scale commercial slate mining in North Wales began with the opening of the Cae Braich y Cafn quarry, later to become the Penrhyn Quarry near Bethesda in the Ogwen Valley, in 1782. Welsh output was far ahead of other areas and by 1882, 92% of Britain's production was from Wales (451,000 t): the quarries at Penrhyn and Dinorwic produced half of this between them.
The men worked the slate in partnerships of four, six or eight and these were known as "Bargain Gangs". "Bargains" were let by the "Bargain Letter" when a price for a certain area of rock was agreed. Adjustments were made according to the quality of the slate and the proportion of "bad" rock. The first Monday of every month was "Bargain Letting Day" when these agreements were made between men and management. Half the partners worked the quarry face and the others were in the dressing sheds producing the finished slates. In the Glyndyfrdwy mines at Moel Fferna each bargain worked a horizontal stretch of 10 by 15 yards. Duchesses, Marchionesses, Countesses, Viscountesses, Ladies, Small Ladies, Doubles and Randoms were all sizes of slates produced.
Rubblers helped to keep the chambers free from waste: one ton of saleable slate could produce up to 30 tons of waste. It is the mountainous heaps of this very same waste that are perhaps the first thing to strike someone visiting the old regions nowadays. The men had to pay for their ropes and chains, for tools and for services such as sharpening and repairing. Subs (advances) were paid every week, everything being settled up on the "Day of the Big Pay". If conditions had not been good, the men could end up owing the management money. At Moel Fferna a team could produce up to 35 tons of finished slate a week. In 1877 they received about 7 shillings a ton for this. After paying wages for the manager, clerks and 'trammers' the company could make a clear profit of twice this amount. This system was not finally abolished until after the Second World War.
Working methods:
Early workings tended to be in surface pits, but as the work progressed downwards, it became necessary to work underground. This was often accompanied by the driving of one or more adits to gain direct access to a Level. In some rare instances, such as Moel Fferna, there is no trace of surface workings and the workings were entirely underground.
Chambers were usually driven from the bottom, by means of a "roofing shaft" which was then continued across the width of the chamber: the chamber would then be worked downwards. Slate was freed from the rockface by blasting in shot holes hammered (and later drilled) into the rock.
Slate would be recovered from the chamber in the form of a large slab, which would be taken by truck to the mill where it would be split and cut into standard-sized roofing slates.
Slate mines were usually worked in chambers which followed the slate vein, connected via a series of horizontal "Floors" (or "Levels"). The chambers varied in size between mines and were divided by "pillars" or walls which supported the roof. The floors were connected by underground "Inclines" which used wedge-shaped trolleys to move trucks between levels.
In some mines, where slate was worked away below the main haulage floor, the route was maintained through the construction of a wooden bridge across the chamber, often supported from chains attached to the roof above. These bridges could be as much as 100 feet/30 m above the floor below.
Significant mines:
In North Gwynedd, the large slate-producing quarries were usually confined to open-cast workings, sometimes with an adit to gain access to the bottom of the pit:
- Penrhyn Quarry, Bethesda – the largest slate-producing quarry in the world; bought by Alfred McAlpine plc in 1964.
- Dinorwic Quarry, Llanberis.
- Cilgwyn quarry, Nantlle Valley – dating from the 12th century, it is thought to be the oldest in Wales.
In the Blaenau Ffestiniog area, most of the workings were underground, as the slate veins are steeply angled and open-cast workings would require the removal of a massive amount of rock to gain access to the slate. The larger mines in the Ffestiniog area include:
- Llechwedd quarry – now open to the public as a "tourist mine".
- Manod – used by the National Gallery, London to store artworks in World War II
- Maenofferen
- Oakeley – now partially untopped as an opencast working by Alfred McAlpine plc
- Cwmorthin
- Rhosydd
- Croesor
There were also a number of slate mines in the Llangollen area which produced a much darker "black" slate:
- Berwyn
- Deeside and Moel Fferna
- Penarth
Another cluster of mines was found in mid Wales, centered on Corris. These all worked a pair of slate veins that ran across the Cambrian mountain range from Tywyn in the west through Corris and Aberllefenni in the Dulas Valley to the mines around Dinas Mawddwy in the east. Slate was also mined in Pembrokeshire, in places like Maenclochog.
Remains:
Most underground slate mines in north Wales were closed by the 1960s, although some open-cast quarries have remained open, including the Penrhyn Quarry and the untopping work at Oakeley in Blaenau Ffestiniog. Work also continues at Berwyn near Llangollen. The final large-scale underground working to close was Maenofferen Quarry (which is owned by the Llechwedd tourist mine) in 1999, although opencast quarrying continues at this location.
Many of the mines are now in a state of considerable decay and those that are accessible should not be entered as they are on private property and contain many hidden dangers.
Historical and adventurous underground tours are provided at several mines including Rhiwbach (by Go Below), Llechwedd (Zip World and Llechwedd/Quarry Tours Ltd) and Cwmorthin (Go Below). The lower levels of many mines are now flooded and collapses are commonplace; for example, the hillside above the Rhosydd workings has many pits where the roofs of the chambers below have collapsed.
Other slate producing areas in Great Britain:
The most significant non-Welsh British slate industry is that of Cornwall and Devon, where the Delabole Quarry is thought to be the largest single quarry in Great Britain. Many of these quarries are no longer worked owing to the lower costs of extraction in the larger British workings. The quarrying of slate in Cornwall is known to have been carried out from the late mediaeval period, and there was a considerable export trade from some of the quarries near the coasts in the 19th century.
Slate has also been quarried at Swithland in Leicestershire.
There are considerable workings in Cumbria. During the last 500 years, much slate extraction has taken place in the Lake District at both surface quarries and underground mines. The major workings are:
- Broughton Moor
- Old Man Complex (Coniston)
- Cove Quarries (south of Coniston Old Man)
- Elterwater Quarries
- Hodge Close
- Honister Slate Mine (including Yew Crag and Dubs)
- Kentmere Workings
- Kirkby Moor (Burlington Slate Quarries)
- Petts, Kirkstone
- Little Langdale Quarries
- Skiddaw Slate
- Tilberthwaite
- Common Wood, Ulpha
Slate was also quarried in Scotland.
Slate industry in North America:
Slate was first quarried in the United States as early as 1734 along the Pennsylvania–Maryland border; however, it was not until 1785 that the first commercial slate quarry was opened in the United States, by William Docher in Peach Bottom Township, Pennsylvania. Production was limited to what could be consumed in local markets until the middle of the nineteenth century. The slate industry in the United States has existed in several locations in the country, including areas in the western states; however, the majority of slate has come from three principal regions along the Great Valley of the Appalachian Mountains. Of those regions, the Taconic Mountains region of Vermont and New York, as well as Lancaster, Lehigh and Northampton counties in Pennsylvania, all still have active quarries.
The Pennsylvania Historical and Museum Commission states that in the Slateford Water Gap area the first verified quarry started some time around 1808. The industry in this region of Pennsylvania spread across the northern edges of both Lehigh and Northampton counties, which contain between them the remains of approximately 400 individual quarries. The origins of quarrying in the Lehigh Valley are obscured by conflicting evidence, although it is safest to say that it started near the town of Slateford in the early nineteenth century and moved toward Bangor over a fifty-year period. By 1929, the value of slate production in Pennsylvania was approximately 5 million dollars, accounting for almost half of the 11 million dollar value of slate production for the entire United States. Quarries in this region of the country remained active throughout the first quarter of the 20th century, producing roofing slate and slate for electrical uses, as well as being the largest producer of school slates and chalkboards in the country. The Slatington Slate Trade report for January 4, 1880 showed that quarries in the town of Slatington alone had shipped 81,402 squares of roofing slates (over 8 million square feet) as well as 40,486 cases of school slates and 243 cases of blackboards.
The Slate Valley (the district of Granville, New York) is well known for its slate. Slate was quarried in 1839 at Fair Haven, Vermont. An influx of immigrants from the North Wales slate quarrying communities saw a boom in slate production that peaked in the latter half of the 19th century. The slate of the region comes in a variety of colors, notably green, gray, black and red. Some production continued in 2003, with 23 full-time mines employing 348 people. Additionally, one of the oldest quarries in America continues to quarry slate in Buckingham County, Virginia. Its trademark Buckingham Slate has been continually quarried since the 18th century and has a distinct, unfading blue/black color and mica sheen. Buckingham Slate is used on many Federal buildings in the Washington, D.C. area.
Large scale slate quarrying also took place around the town of Monson, Maine where an extensive series of quarries flourished from the 1860s onwards. A small scale quarrying and dressing operation continues in Monson into the 21st century.
Slate is also found in the Arctic and was used by the Inuit to make the blades for ulus.
Slate industry in Brazil:
95% of the slate extraction in Brazil comes from Minas Gerais. Slate from this region is formed differently from that of traditional slate areas such as Galicia. Such products are sedimentary rocks that have split along their original bedding plane, whereas true slate has been subjected to metamorphism and does not split along bedding, but rather along planes associated with the realignment of minerals during metamorphism. This realignment, known as "schistosity", bears no relationship to the original horizontal bedding planes. The independent Fundación Centro Tecnológico de la Pizarra's report into the "Technical properties of Bambui Slate from the State of Minas Gerais (Brazil) to ascertain its compliance with the Standard EN12326" describes how certain products originating from Brazil on sale in the UK are not entitled to bear the CE mark. Because such Brazilian products display higher water absorption indexes than those from other areas such as Galicia, they are less suitable for use as roofing tiles: the study showed a significant loss of strength when subjected to thawing and freezing.
**Mechanical filter**
Mechanical filter:
A mechanical filter is a signal processing filter usually used in place of an electronic filter at radio frequencies. Its purpose is the same as that of a normal electronic filter: to pass a range of signal frequencies, but to block others. The filter acts on mechanical vibrations which are the analogue of the electrical signal. At the input and output of the filter, transducers convert the electrical signal into, and then back from, these mechanical vibrations.
The components of a mechanical filter are all directly analogous to the various elements found in electrical circuits. The mechanical elements obey mathematical functions which are identical to their corresponding electrical elements. This makes it possible to apply electrical network analysis and filter design methods to mechanical filters. Electrical theory has developed a large library of mathematical forms that produce useful filter frequency responses and the mechanical filter designer is able to make direct use of these. It is only necessary to set the mechanical components to appropriate values to produce a filter with an identical response to the electrical counterpart.
Steel alloys and iron–nickel alloys are common materials for mechanical filter components; nickel is sometimes used for the input and output couplings. Resonators in the filter made from these materials need to be machined to precisely adjust their resonance frequency before final assembly.
While the meaning of mechanical filter in this article is one that is used in an electromechanical role, it is possible to use a mechanical design to filter mechanical vibrations or sound waves (which are also essentially mechanical) directly. For example, filtering of audio frequency response in the design of loudspeaker cabinets can be achieved with mechanical components. In the electrical application, in addition to mechanical components which correspond to their electrical counterparts, transducers are needed to convert between the mechanical and electrical domains. A representative selection of the wide variety of component forms and topologies for mechanical filters is presented in this article.
The theory of mechanical filters was first applied to improving the mechanical parts of phonographs in the 1920s. By the 1950s mechanical filters were being manufactured as self-contained components for applications in radio transmitters and high-end receivers. The high "quality factor", Q, that mechanical resonators can attain, far higher than that of an all-electrical LC circuit, made possible the construction of mechanical filters with excellent selectivity. Good selectivity, being important in radio receivers, made such filters highly attractive. Contemporary researchers are working on microelectromechanical filters, the mechanical devices corresponding to electronic integrated circuits.
Elements:
The elements of a passive linear electrical network consist of inductors, capacitors and resistors, which have the properties of inductance, elastance (inverse capacitance) and resistance, respectively. The mechanical counterparts of these properties are, respectively, mass, stiffness and damping. In most electronic filter designs, only inductor and capacitor elements are used in the body of the filter (although the filter may be terminated with resistors at the input and output). Resistances are not present in a theoretical filter composed of ideal components and only arise in practical designs as unwanted parasitic elements. Likewise, a mechanical filter would ideally consist only of components with the properties of mass and stiffness, but in reality some damping is present as well.
The mechanical counterparts of voltage and electric current in this type of analysis are, respectively, force (F) and velocity (v), and represent the signal waveforms. From this, a mechanical impedance can be defined in terms of the imaginary angular frequency, jω, which entirely follows the electrical analogy.
The scheme described above is known as the impedance analogy. Circuit diagrams produced using this analogy match the electrical impedance of the mechanical system seen by the electrical circuit, making it intuitive from an electrical engineering standpoint. There is also the mobility analogy, in which force corresponds to current and velocity corresponds to voltage. This has equally valid results but requires using the reciprocals of the electrical counterparts listed above. Hence, M → C, S → 1/L, D → G, where G is electrical conductance, the inverse of resistance. Equivalent circuits produced by this scheme are similar, but are the dual impedance forms whereby series elements become parallel, capacitors become inductors, and so on. Circuit diagrams using the mobility analogy more closely match the mechanical arrangement of the circuit, making them more intuitive from a mechanical engineering standpoint. In addition to their application to electromechanical systems, these analogies are widely used to aid analysis in acoustics.
Any mechanical component will unavoidably possess both mass and stiffness. This translates in electrical terms to an LC circuit, that is, a circuit consisting of an inductor and a capacitor; hence mechanical components are resonators and are often used as such. It is still possible to represent inductors and capacitors as individual lumped elements in a mechanical implementation by minimising (but never quite eliminating) the unwanted property. Capacitors may be made of thin, long rods: the mass is minimised and the compliance is maximised. Inductors, on the other hand, may be made of short, wide pieces which maximise the mass in comparison to the compliance of the piece.
Mechanical parts act as a transmission line for mechanical vibrations. If the wavelength is short in comparison to the part then a lumped-element model as described above is no longer adequate and a distributed-element model must be used instead.
The mechanical distributed elements are entirely analogous to electrical distributed elements and the mechanical filter designer can use the methods of electrical distributed-element filter design.
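To make the lumped-element analogy concrete, here is a minimal Python sketch of the impedance analogy for a single mass–stiffness–damping resonator, the mechanical counterpart of a series RLC circuit: Z(jω) = D + jωM + S/(jω). The component values are hypothetical, chosen only so the arithmetic works out exactly; at the resonant frequency ω₀ = √(S/M) the mass and stiffness reactances cancel and the impedance reduces to the damping alone.

```python
import math

def mechanical_impedance(omega, M, S, D):
    """Impedance-analogy model of a mass-stiffness-damping resonator:
    Z(jw) = D + jwM + S/(jw), the analogue of a series RLC circuit
    (M ~ inductance L, S ~ elastance 1/C, D ~ resistance R)."""
    return complex(D, omega * M - S / omega)

# Hypothetical component values, for illustration only
M, S, D = 1e-3, 4e3, 0.5            # mass (kg), stiffness (N/m), damping (N*s/m)
w0 = math.sqrt(S / M)               # resonant angular frequency = 2000 rad/s

# At resonance the reactive terms cancel exactly, leaving only the damping,
# just as a series RLC circuit reduces to R at w0 = 1/sqrt(LC)
assert abs(mechanical_impedance(w0, M, S, D)) == D

# Quality factor Q = w0*M/D: low damping gives the high Q and sharp
# selectivity that made mechanical resonators attractive for filters
Q = w0 * M / D
assert Q == 4.0
```

The same numbers dropped into a series RLC circuit with L = M, 1/C = S and R = D would give an identical frequency response, which is the whole point of the analogy.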
History:
Harmonic telegraph:
Mechanical filter design was developed by applying the discoveries made in electrical filter theory to mechanics. However, a very early example (1870s) of acoustic filtering was the "harmonic telegraph", which arose precisely because electrical resonance was poorly understood but mechanical resonance (in particular, acoustic resonance) was very familiar to engineers. This situation was not to last for long; electrical resonance had been known to science for some time before this, and it was not long before engineers started to produce all-electric designs for filters. In its time, though, the harmonic telegraph was of some importance. The idea was to combine several telegraph signals on one telegraph line by what would now be called frequency-division multiplexing, thus saving enormously on line installation costs. The key of each operator activated a vibrating electromechanical reed which converted this vibration into an electrical signal. Filtering at the receiving operator was achieved by a similar reed tuned to precisely the same frequency, which would only vibrate and produce a sound from transmissions by the operator with the identical tuning. Versions of the harmonic telegraph were developed by Elisha Gray, Alexander Graham Bell, Ernest Mercadier and others. Its ability to act as a sound transducer to and from the electrical domain was to inspire the invention of the telephone.
Mechanical equivalent circuits:
Once the basics of electrical network analysis began to be established, it was not long before the ideas of complex impedance and filter design theories were carried over into mechanics by analogy. Kennelly, who was also responsible for introducing complex impedance, and Webster were the first to extend the concept of impedance into mechanical systems, in 1920. Mechanical admittance and the associated mobility analogy came much later and are due to Firestone, in 1932.
It was not enough to just develop a mechanical analogy. This could be applied to problems that were entirely in the mechanical domain, but for mechanical filters with an electrical application it is necessary to include the transducer in the analogy as well. Poincaré in 1907 was the first to describe a transducer as a pair of linear algebraic equations relating electrical variables (voltage and current) to mechanical variables (force and velocity). These equations can be expressed as a matrix relationship in much the same way as the z-parameters of a two-port network in electrical theory, to which this is entirely analogous:
\[ \begin{bmatrix} V \\ F \end{bmatrix} = \begin{bmatrix} z_{11} & z_{12} \\ z_{21} & z_{22} \end{bmatrix} \begin{bmatrix} I \\ v \end{bmatrix} \]
where V and I represent the voltage and current, respectively, on the electrical side of the transducer, and F and v the force and velocity on the mechanical side.
Wegel, in 1921, was the first to express these equations in terms of mechanical impedance as well as electrical impedance. The element z_{22} is the open-circuit mechanical impedance, that is, the impedance presented by the mechanical side of the transducer when no current is entering the electrical side. The element z_{11}, conversely, is the clamped electrical impedance, that is, the impedance presented to the electrical side when the mechanical side is clamped and prevented from moving (velocity is zero). The remaining two elements, z_{21} and z_{12}, describe the transducer forward and reverse transfer functions respectively. Once these ideas were in place, engineers were able to extend electrical theory into the mechanical domain and analyse an electromechanical system as a unified whole.
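The clamped and open-circuit interpretations can be checked with a short sketch. The z-parameter values below are hypothetical, chosen only to illustrate the standard two-port termination result for the electrical impedance seen when the mechanical port is loaded by a mechanical impedance Z_mech:

```python
def input_impedance(z11, z12, z21, z22, z_mech):
    """Electrical impedance seen at the transducer's electrical port when
    the mechanical port is terminated in an impedance z_mech; this is the
    standard z-parameter termination formula for any two-port network."""
    return z11 - z12 * z21 / (z22 + z_mech)

# Hypothetical matrix elements in consistent units, for illustration only
z11, z12, z21, z22 = 50.0, 8.0, 8.0, 2.0

# Clamped mechanical side (velocity forced to zero, i.e. z_mech -> infinity):
# the electrical side sees z11, the "clamped electrical impedance"
assert abs(input_impedance(z11, z12, z21, z22, 1e12) - z11) < 1e-6

# Free mechanical side (no opposing force, z_mech = 0): the mechanical
# motion is reflected back, lowering the impedance to z11 - z12*z21/z22
assert input_impedance(z11, z12, z21, z22, 0.0) == 18.0
```

This is exactly the analysis a designer performs to see what load a mechanical filter and its terminating transducer present to the driving electrical circuit.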
History:
Sound reproduction An early application of these new theoretical tools was in phonographic sound reproduction. A recurring problem with early phonograph designs was that mechanical resonances in the pickup and sound transmission mechanism caused excessively large peaks and troughs in the frequency response, resulting in poor sound quality. In 1923, Harrison of the Western Electric Company filed a patent for a phonograph in which the mechanical design was entirely represented as an electrical circuit. The horn of the phonograph is represented as a transmission line, and is a resistive load for the rest of the circuit, while all the mechanical and acoustic parts—from the pickup needle through to the horn—are translated into lumped components according to the impedance analogy. The circuit arrived at is a ladder topology of series resonant circuits coupled by shunt capacitors. This can be viewed as a bandpass filter circuit. Harrison designed the component values of this filter to have a specific passband corresponding to the desired audio passband (in this case 100 Hz to 6 kHz) and a flat response. Translating these electrical element values back into mechanical quantities provided specifications for the mechanical components in terms of mass and stiffness, which in turn could be translated into physical dimensions for their manufacture. The resulting phonograph has a flat frequency response in its passband and is free of the resonances previously experienced. Shortly after this, Harrison filed another patent using the same methodology on telephone transmit and receive transducers.
History:
Harrison used Campbell's image filter theory, which was the most advanced filter theory available at the time. In this theory, filter design is viewed essentially as an impedance matching problem. More advanced filter theory was brought to bear on this problem by Norton in 1929 at Bell Labs. Norton followed the same general approach though he later described to Darlington the filter he designed as being "maximally flat". Norton's mechanical design predates the paper by Butterworth who is usually credited as the first to describe the electronic maximally flat filter. The equations Norton gives for his filter correspond to a singly terminated Butterworth filter, that is, one driven by an ideal voltage source with no impedance, whereas the form more usually given in texts is for the doubly terminated filter with resistors at both ends, making it hard to recognise the design for what it is. Another unusual feature of Norton's filter design arises from the series capacitor, which represents the stiffness of the diaphragm. This is the only series capacitor in Norton's representation, and without it, the filter could be analysed as a low-pass prototype. Norton moves the capacitor out of the body of the filter to the input at the expense of introducing a transformer into the equivalent circuit (Norton's figure 4). Norton has used here the "turning round the L" impedance transform to achieve this. The definitive description of the subject from this period is Maxfield and Harrison's 1926 paper. There, they describe not only how mechanical bandpass filters can be applied to sound reproduction systems, but also apply the same principles to recording systems and describe a much improved disc cutting head.
History:
Volume production Modern mechanical filters for intermediate frequency (IF) applications were first investigated by Robert Adler of Zenith Electronics, who built a 455 kHz filter in 1946. The idea was taken up by Collins Radio Company, which started the first volume production of mechanical filters from the 1950s onwards. These were originally designed for telephone frequency-division multiplex applications, where there is commercial advantage in using high-quality filters. Precision and steepness of the transition band lead to a reduced width of guard band, which in turn leads to the ability to squeeze more telephone channels into the same cable. This same feature is useful in radio transmitters for much the same reason. Mechanical filters also quickly found popularity in the VHF/UHF radio IF stages of the high-end radio sets (military, marine, amateur radio and the like) manufactured by Collins. They were favoured in the radio application because they could achieve much higher Q-factors than the equivalent LC filter. High Q allows filters to be designed which have high selectivity, important for distinguishing adjacent radio channels in receivers. They also had an advantage in stability over both LC filters and monolithic crystal filters. The most popular design for radio applications was torsional resonators, because radio IF typically lies in the 100 to 500 kHz band.
Transducers:
Both magnetostrictive and piezoelectric transducers are used in mechanical filters. Piezoelectric transducers are favoured in recent designs since the piezoelectric material can also be used as one of the resonators of the filter, thus reducing the number of components and thereby saving space. They also avoid the susceptibility to extraneous magnetic fields of the magnetostrictive type of transducer.
Transducers:
Magnetostrictive A magnetostrictive material is one which changes shape when a magnetic field is applied. In reverse, it produces a magnetic field when distorted. The magnetostrictive transducer requires a coil of conducting wire around the magnetostrictive material. The coil either induces a magnetic field in the transducer and sets it in motion or else picks up an induced current from the motion of the transducer at the filter output. It is also usually necessary to have a small magnet to bias the magnetostrictive material into its operating range. It is possible to dispense with the magnets if the biasing is taken care of on the electronic side by providing a d.c. current superimposed on the signal, but this approach would detract from the generality of the filter design. The usual magnetostrictive materials used for the transducer are either ferrite or compressed powdered iron. Mechanical filter designs often have the resonators coupled with steel or nickel-iron wires, but on some designs, especially older ones, nickel wire may be used for the input and output rods. This is because it is possible to wind the transducer coil directly on to a nickel coupling wire, since nickel is slightly magnetostrictive. However, it is not strongly so, and coupling to the electrical circuit is weak. This scheme also has the disadvantage of eddy currents, a problem that is avoided if ferrites are used instead of nickel. The coil of the transducer adds some inductance on the electrical side of the filter. It is common practice to add a capacitor in parallel with the coil so that an additional resonator is formed which can be incorporated into the filter design. While this will not improve performance to the extent that an additional mechanical resonator would, there is some benefit and the coil has to be there in any case.
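The extra resonator formed by the transducer coil and its parallel capacitor follows the usual LC resonance formula. The component values in this sketch are illustrative, chosen only to land near a typical 455 kHz IF:

```python
import math

# Resonance frequency of a parallel LC circuit: f = 1 / (2*pi*sqrt(L*C)).
# The values below are illustrative, not from any real filter design.
def resonant_frequency(L_henry, C_farad):
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

# e.g. a 2.45 mH transducer coil with a 50 pF parallel capacitor
# resonates near 455 kHz
f = resonant_frequency(2.45e-3, 50e-12)
```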
Transducers:
Piezoelectric A piezoelectric material is one which changes shape when an electric field is applied. In reverse, it produces an electric field when it is distorted. A piezoelectric transducer, in essence, is made simply by plating electrodes on to the piezoelectric material. Early piezoelectric materials used in transducers, such as barium titanate, had poor temperature stability. This precluded the transducer from functioning as one of the resonators; it had to be a separate component. This problem was solved with the introduction of lead zirconate titanate (abbreviated PZT), which is stable enough to be used as a resonator. Another common piezoelectric material is quartz, which has also been used in mechanical filters. However, ceramic materials such as PZT are preferred for their greater electromechanical coupling coefficient. One type of piezoelectric transducer is the Langevin type, named after a transducer used by Paul Langevin in early sonar research. This is good for longitudinal modes of vibration. It can also be used on resonators with other modes of vibration if the motion can be mechanically converted into a longitudinal motion. The transducer consists of a layer of piezoelectric material sandwiched transversally into a coupling rod or resonator. Another kind of piezoelectric transducer has the piezoelectric material sandwiched in longitudinally, usually into the resonator itself. This kind is good for torsional vibration modes and is called a torsional transducer. When miniaturised using thin-film manufacturing methods, piezoelectric resonators are called thin-film bulk acoustic resonators (FBARs).
Resonators:
It is possible to achieve an extremely high Q with mechanical resonators. Mechanical resonators typically have a Q of 10,000 or so, and 25,000 can be achieved in torsional resonators using a particular nickel-iron alloy. This is an impractically high figure to achieve with LC circuits, whose Q is limited by the resistance of the inductor coils. Early designs in the 1940s and 1950s started by using steel as a resonator material. This has given way to nickel-iron alloys, primarily to maximise the Q, since this is often the primary appeal of mechanical filters rather than price. Some of the metals that have been used for mechanical filter resonators and their Q are shown in the table. Piezoelectric crystals are also sometimes used in mechanical filter designs. This is especially true for resonators that are also acting as transducers for inputs and outputs. One advantage that mechanical filters have over LC electrical filters is that they can be made very stable. The resonance frequency can be made so stable that it varies only 1.5 parts per billion (ppb) from the specified value over the operating temperature range (−25 to 85 °C), and its average drift with time can be as low as 4 ppb per day. This stability with temperature is another reason for using nickel-iron as the resonator material. Variations with temperature in the resonance frequency (and other features of the frequency function) are directly related to variations in the Young's modulus, which is a measure of the stiffness of the material. Materials are therefore sought that have a small temperature coefficient of Young's modulus. In general, Young's modulus has a negative temperature coefficient (materials become less stiff with increasing temperature), but additions of small amounts of certain other elements in the alloy can produce a material with a temperature coefficient that changes sign from negative through zero to positive with temperature.
Such a material has a zero temperature coefficient of resonance frequency around a particular temperature. The point of zero temperature coefficient can be adjusted to a desired position by heat treatment of the alloy.
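The stability figures quoted above translate into very small absolute frequency deviations. The following arithmetic sketch applies them to a hypothetical 455 kHz resonator (the centre frequency is an illustrative assumption, not from the text):

```python
# Frequency deviation implied by the quoted stability figures,
# applied to a hypothetical 455 kHz resonator.
f0 = 455_000.0              # Hz, illustrative centre frequency

temp_dev = f0 * 1.5e-9      # 1.5 ppb over the -25 to 85 C range
aging_per_day = f0 * 4e-9   # 4 ppb per day average drift
aging_per_year = aging_per_day * 365
# temp_dev is under a millihertz; a year of aging is still under 1 Hz
```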
Resonators:
Resonator modes It is usually possible for a mechanical part to vibrate in a number of different modes; however, the design will be based on a particular vibrational mode, and the designer will take steps to try to restrict the resonance to this mode. As well as the straightforward longitudinal mode, some others which are used include the flexural mode, torsional mode, radial mode and drumhead mode. Modes are numbered according to the number of half-wavelengths in the vibration. Some modes exhibit vibrations in more than one direction (such as the drumhead mode, which has two) and consequently the mode number consists of more than one number. When the vibration is in one of the higher modes, there will be multiple nodes on the resonator where there is no motion. For some types of resonator, this can provide a convenient place to make a mechanical attachment for structural support. Wires attached at nodes will have no effect on the vibration of the resonator or the overall filter response. In figure 5, some possible anchor points are shown as wires attached at the nodes. The modes shown are (5a) the second longitudinal mode fixed at one end, (5b) the first torsional mode, (5c) the second torsional mode, (5d) the second flexural mode, (5e) the first radial expansion mode and (5f) the first radially symmetric drumhead mode.
Circuit designs:
There are a great many combinations of resonators and transducers that can be used to construct a mechanical filter. A selection of some of these is shown in the diagrams. Figure 6 shows a filter using disc flexural resonators and magnetostrictive transducers. The transducer drives the centre of the first resonator, causing it to vibrate. The edges of the disc move in antiphase to the centre when the driving signal is at, or close to, resonance, and the signal is transmitted through the connecting rods to the next resonator. When the driving signal is not close to resonance, there is little movement at the edges, and the filter rejects (does not pass) the signal. Figure 7 shows a similar idea involving longitudinal resonators connected together in a chain by connecting rods. In this diagram, the filter is driven by piezoelectric transducers. It could equally well have used magnetostrictive transducers. Figure 8 shows a filter using torsional resonators. In this diagram, the input has a torsional piezoelectric transducer and the output has a magnetostrictive transducer. This would be quite unusual in a real design, as both input and output usually have the same type of transducer. The magnetostrictive transducer is only shown here to demonstrate how longitudinal vibrations may be converted to torsional vibrations and vice versa. Figure 9 shows a filter using drumhead mode resonators. The edges of the discs are fixed to the casing of the filter (not shown in the diagram) so the vibration of the disc is in the same modes as the membrane of a drum. Collins calls this type of filter a disc wire filter. The various types of resonator are all particularly suited to different frequency bands. Overall, mechanical filters with lumped elements of all kinds can cover frequencies from about 5 to 700 kHz, although mechanical filters down as low as a few kilohertz (kHz) are rare. The lower part of this range, below 100 kHz, is best covered with bar flexural resonators.
The upper part is better done with torsional resonators. Drumhead disc resonators are in the middle, covering the range from around 100 to 300 kHz. The frequency response behaviour of all mechanical filters can be expressed as an equivalent electrical circuit using the impedance analogy described above. An example of this is shown in figure 8b, which is the equivalent circuit of the mechanical filter of figure 8a. Elements on the electrical side, such as the inductance of the magnetostrictive transducer, are omitted but would be taken into account in a complete design. The series resonant circuits on the circuit diagram represent the torsional resonators, and the shunt capacitors represent the coupling wires. The component values of the electrical equivalent circuit can be adjusted, more or less at will, by modifying the dimensions of the mechanical components. In this way, all the theoretical tools of electrical analysis and filter design can be brought to bear on the mechanical design. Any filter realisable in electrical theory can, in principle, also be realised as a mechanical filter. In particular, the popular Butterworth and Chebyshev approximations to an ideal filter response can both readily be realised. As with the electrical counterpart, the more elements that are used, the closer the approximation approaches the ideal; however, for practical reasons the number of resonators does not normally exceed eight.
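As a sketch of how such an equivalent circuit can be analysed, the following computes the voltage gain of a small ladder of series LC resonators (standing in for the torsional resonators) coupled by shunt capacitors (the coupling wires) using ABCD chain matrices. All element values are invented for illustration and do not correspond to any real mechanical filter:

```python
import numpy as np

# ABCD (chain) matrix building blocks for a two-port ladder network.
def series_Z(Z):
    return np.array([[1, Z], [0, 1]], dtype=complex)

def shunt_Y(Y):
    return np.array([[1, 0], [Y, 1]], dtype=complex)

def ladder_gain(f, L, C_res, C_coup, R_load=50.0):
    """Voltage gain of three series LC resonators coupled by two shunt
    capacitors, driven by an ideal source into a resistive load."""
    w = 2 * np.pi * f
    ZLC = 1j * w * L + 1 / (1j * w * C_res)   # series resonator impedance
    Yc = 1j * w * C_coup                      # coupling capacitor admittance
    A = (series_Z(ZLC) @ shunt_Y(Yc) @ series_Z(ZLC)
         @ shunt_Y(Yc) @ series_Z(ZLC))
    a, b = A[0, 0], A[0, 1]
    return abs(1 / (a + b / R_load))

# L = 1 mH with C_res = 2.533 nF resonates near 100 kHz
g_pass = ladder_gain(100e3, 1e-3, 2.533e-9, 1e-9)   # near resonance
g_stop = ladder_gain(10e3, 1e-3, 2.533e-9, 1e-9)    # well below the passband
```

Near resonance the series arms present almost zero impedance and the signal passes; far from resonance they block it, which is the bandpass behaviour described in the text.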
Circuit designs:
Semi-lumped designs Frequencies of the order of megahertz (MHz) are above the usual range for mechanical filters. The components start to become very small, or alternatively the components are large compared to the signal wavelength. The lumped-element model described above starts to break down and the components must be considered as distributed elements. The frequency at which the transition from lumped to distributed modelling takes place is much lower for mechanical filters than it is for their electrical counterparts. This is because mechanical vibrations travel at the speed of sound for the material the component is composed of. For solid components, this is many times (×15 for nickel-iron) the speed of sound in air (343 m/s), but still considerably less than the speed of electromagnetic waves (approx. 3×10^8 m/s in vacuum). Consequently, mechanical wavelengths are much shorter than electrical wavelengths for the same frequency. Advantage can be taken of these effects by deliberately designing components to be distributed elements, and the components and methods used in electrical distributed-element filters can be brought to bear. The equivalents of stubs and impedance transformers are both achievable. Designs which use a mixture of lumped and distributed elements are referred to as semi-lumped. An example of such a design is shown in figure 10a. The resonators are disc flexural resonators similar to those shown in figure 6, except that these are energised from an edge, leading to vibration in the fundamental flexural mode with a node in the centre, whereas the figure 6 design is energised in the centre, leading to vibration in the second flexural mode at resonance. The resonators are mechanically attached to the housing by pivots at right angles to the coupling wires. The pivots are to ensure free turning of the resonator and minimise losses.
The resonators are treated as lumped elements; however, the coupling wires are made exactly one half-wavelength (λ/2) long and are equivalent to a λ/2 open circuit stub in the electrical equivalent circuit. For a narrow-band filter, a stub of this sort has the approximate equivalent circuit of a parallel shunt tuned circuit as shown in figure 10b. Consequently, the connecting wires are being used in this design to add additional resonators into the circuit and will have a better response than one with just the lumped resonators and short couplings. For even higher frequencies, microelectromechanical methods can be used as described below.
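The physical length of such a half-wavelength coupling wire follows directly from the speed of sound in the wire material. This sketch uses the ×15 nickel-iron figure from the text and an illustrative operating frequency:

```python
# Length of a half-wavelength (lambda/2) coupling wire.
# Sound in nickel-iron travels at roughly 15x its speed in air (see text);
# the operating frequency is an illustrative assumption.
v_sound = 15 * 343.0     # m/s, approximate speed of sound in nickel-iron
f = 200_000.0            # Hz, illustrative filter centre frequency

wavelength = v_sound / f
half_wave_length_mm = (wavelength / 2) * 1000
# about 13 mm: short enough to be a practical coupling element
```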
Circuit designs:
Bridging wires Bridging wires are rods that couple together resonators that are not adjacent. They can be used to produce poles of attenuation in the stopband. This has the benefit of increasing the stopband rejection. When the pole is placed near the passband edge, it also has the benefit of increasing roll-off and narrowing the transition band. The typical effects of some of these on filter frequency response are shown in figure 11. Bridging across a single resonator (figure 11b) can produce a pole of attenuation in the high stopband. Bridging across two resonators (figure 11c) can produce a pole of attenuation in both the high and the low stopband. Using multiple bridges (figure 11d) will result in multiple poles of attenuation. In this way, the attenuation of the stopbands can be deepened over a broad frequency range.
Circuit designs:
The method of coupling between non-adjacent resonators is not limited to mechanical filters. It can be applied to other filter formats and the general term for this class is cross-coupled filter. For instance, channels can be cut between cavity resonators, mutual inductance can be used with discrete component filters, and feedback paths can be used with active analogue or digital filters. Nor was the method first discovered in the field of mechanical filters; the earliest description is in a 1948 patent for filters using microwave cavity resonators. However, mechanical filter designers were the first (1960s) to develop practical filters of this kind and the method became a particular feature of mechanical filters.
Microelectromechanical filters:
A new technology emerging in mechanical filtering is microelectromechanical systems (MEMS). MEMS are very small micromachines with component sizes measured in micrometres (μm), but not as small as nanomachines. These filters can be designed to operate at much higher frequencies than can be achieved with traditional mechanical filters. These systems are mostly fabricated from silicon (Si), silicon nitride (Si3N4), or polymers. A common component used for radio frequency filtering (and MEMS applications generally) is the cantilever resonator. Cantilevers are simple mechanical components to manufacture by much the same methods used by the semiconductor industry: masking, photolithography and etching, with a final undercutting etch to separate the cantilever from the substrate. The technology has great promise since cantilevers can be produced in large numbers on a single substrate—much as large numbers of transistors are currently contained on a single silicon chip. The resonator shown in figure 12 is around 120 μm in length. Experimental complete filters with an operating frequency of 30 GHz have been produced using cantilever varactors as the resonator elements. The size of this filter is around 4×3.5 mm. Cantilever resonators are typically applied at frequencies below 200 MHz, but other structures, such as micro-machined cavities, can be used in the microwave bands. Extremely high Q resonators can be made with this technology; flexural mode resonators with a Q in excess of 80,000 at 8 MHz have been reported.
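The resonance frequency of a MEMS cantilever can be estimated from standard beam theory. The formula below is the usual fundamental bending-mode expression for a rectangular beam; the silicon properties and beam dimensions are illustrative assumptions, not figures from the text:

```python
import math

# Fundamental bending-mode frequency of a rectangular cantilever:
#   f1 = (k1^2 / (2*pi)) * (t / L^2) * sqrt(E / (12*rho)),  k1 ~ 1.8751
# Material properties and dimensions below are illustrative assumptions.
E = 169e9      # Pa, Young's modulus of silicon (approximate)
rho = 2329.0   # kg/m^3, density of silicon (approximate)
k1 = 1.8751    # first root of the cantilever mode equation

def cantilever_f1(length_m, thickness_m):
    return (k1**2 / (2 * math.pi)) * (thickness_m / length_m**2) \
        * math.sqrt(E / (12 * rho))

# a hypothetical 12 um long, 2 um thick silicon beam resonates
# in the tens of MHz
f = cantilever_f1(12e-6, 2e-6)
```

Shrinking the length raises the frequency quadratically, which is why miniaturisation pushes mechanical filtering into ranges traditional resonators cannot reach.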
Adjustment:
The precision applications in which mechanical filters are used require that the resonators are accurately adjusted to the specified resonance frequency. This is known as trimming and usually involves a mechanical machining process. In most filter designs, this can be difficult to do once the resonators have been assembled into the complete filter, so the resonators are trimmed before assembly. Trimming is done in at least two stages, coarse and fine, with each stage bringing the resonance frequency closer to the specified value. Most trimming methods involve removing material from the resonator, which will increase the resonance frequency. The target frequency for a coarse trimming stage consequently needs to be set below the final frequency, since the tolerances of the process could otherwise result in a frequency higher than the following fine trimming stage could adjust for. The coarsest method of trimming is grinding of the main resonating surface of the resonator; this process has an accuracy of around ±800 ppm. Better control can be achieved by grinding the edge of the resonator instead of the main surface. This has a less dramatic effect and consequently better accuracy. Processes that can be used for fine trimming, in order of increasing accuracy, are sandblasting, drilling, and laser ablation. Laser trimming is capable of achieving an accuracy of ±40 ppm. Trimming by hand, rather than machine, was used on some early production components but would now normally only be encountered during product development. Methods available include sanding and filing. It is also possible to add material to the resonator by hand, thus reducing the resonance frequency. One such method is to add solder, but this is not suitable for production use since the solder will tend to reduce the high Q of the resonator. In the case of MEMS filters, it is not possible to trim the resonators outside of the filter because of the integrated nature of the device construction.
However, trimming is still a requirement in many MEMS applications. Laser ablation can be used for this, but material-deposition methods are available as well as material removal. These methods include laser- or ion-beam-induced deposition.
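The staging argument for trimming can be checked numerically: because trimming removes material and so can only raise the frequency, the coarse target must sit far enough below the final value that coarse-stage scatter never overshoots it. The tolerances are those quoted in the text; the frequency itself is an illustrative assumption:

```python
# Numerical check of the trimming-stage argument.
# Coarse grinding: about +/-800 ppm scatter; laser fine trimming: +/-40 ppm.
# Trimming only RAISES the frequency, so the coarse target must sit at
# least 800 ppm BELOW the final value. The frequency is illustrative.
f_final = 455_000.0                 # Hz, specified resonance frequency
coarse_tol_ppm = 800

# highest safe coarse target: final frequency minus the coarse tolerance
coarse_target = f_final * (1 - coarse_tol_ppm * 1e-6)
worst_case_after_coarse = coarse_target * (1 + coarse_tol_ppm * 1e-6)
# even the worst-case coarse result stays below f_final, leaving the
# remaining gap for the +/-40 ppm fine trimming stage to close
```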
**Die shot**
Die shot:
A die shot or die photography is a photo or recording of the layout of an integrated circuit, showing its design with any packaging removed. A die shot can be compared with the cross-section of an (almost) two-dimensional computer chip, on which the design and construction of the various tracks and components can be clearly seen. Due to the high complexity of modern computer chips, die shots are often displayed colourfully, with various parts coloured using special lighting or even manually.
Methods:
A die shot is a picture of a computer chip without its housing. There are two ways to capture such a chip "naked" on a photo; by either taking the photo before a chip is packaged or by removing its package.
Methods:
Avoiding the package Taking a photo before the chip ends up in a housing is typically reserved for the chip manufacturer, because the chip is packaged fairly quickly in the production process to protect the sensitive, very small parts against external influences. However, manufacturers may be reluctant to share die shots, to prevent competitors from easily gaining insight into the technological progress and complexity of a chip.
Methods:
Removing the package Removing the housing from a chip is typically a chemical process: a chip is so small and its parts so microscopic that opening the housing (also named delidding) with tools such as saws, sanders or dremels could damage the chip in such a way that a die shot is no longer useful, or less so. For example, sulphuric acid can be used to dissolve the plastic housing of a chip. This is not a harmless process: sulphuric acid is hazardous to people, animals and the environment. Chips are immersed in a glass jar of sulphuric acid, after which the acid is boiled for up to 45 minutes at a temperature of 337 degrees Celsius. Once the plastic housing has decomposed, other processes may follow to remove leftover carbon, such as a hot bath of concentrated nitric acid. After this, the contents of the chip are relatively exposed and a picture can be made of the chip with macrophotography or microphotography.
**Calmodulin-binding transcription activator 1**
Calmodulin-binding transcription activator 1:
Calmodulin-binding transcription activator 1 is a protein that in humans is encoded by the CAMTA1 gene.
**Heterologous**
Heterologous:
The term heterologous has several meanings in biology.
Gene expression:
In cell biology and protein biochemistry, heterologous expression means that a protein is experimentally put into a cell that does not normally make (i.e., express) that protein. Heterologous (meaning 'derived from a different organism') refers to the fact that often the transferred protein was initially cloned from or derived from a different cell type or a different species from the recipient. Typically the protein itself is not transferred, but instead the 'correctly edited' genetic material coding for the protein (the complementary DNA or cDNA) is added to the recipient cell. The genetic material that is transferred typically must be within a format that encourages the recipient cell to express the cDNA as a protein (i.e., it is put in an expression vector). Methods for transferring foreign genetic material into a recipient cell include transfection and transduction. The choice of recipient cell type is often based on an experimental need to examine the protein's function in detail, and the most prevalent recipients, known as heterologous expression systems, are chosen usually because they are easy to transfer DNA into or because they allow for a simpler assessment of the protein's function.
Stem cells:
In stem cell biology, a heterologous transplant refers to cells from a mixed population of donor cells. This is in contrast to an autologous transplant, where the cells are derived from the same individual, or an allogeneic transplant, where the donor cells are HLA-matched to the recipient. A heterologous source of therapeutic cells will have a much greater availability than either autologous or allogeneic cellular therapies.
Structural biology:
In structural biology, a heterologous association is a binding mode between the protomers of a protein structure. In a heterologous association, each protomer contributes a different set of residues to the binding interface. In contrast, two protomers form an isologous association when they contribute the same set of residues to the protomer-protomer interface.
**Vuvuzela**
Vuvuzela:
The vuvuzela is a horn, with an inexpensive injection-moulded plastic shell about 65 centimetres (2 ft) long, which produces a loud monotone note, typically around B♭3 (the first B♭ below middle C). Some models are made in two parts to facilitate storage, and this design also allows pitch variation. Many types of vuvuzela, made by several manufacturers, may produce various intensity and frequency outputs. The intensity of these outputs depends on the blowing technique and pressure exerted. The vuvuzela is commonly used at football matches in South Africa, and it has become a symbol of South African football as the stadiums are filled with its sound. The intensity of the sound caught the attention of the global football community during the 2009 FIFA Confederations Cup, in anticipation of South Africa hosting the 2010 FIFA World Cup. The vuvuzela has been the subject of controversy when used by spectators at football matches. Its high volume can lead to permanent hearing loss for unprotected ears after close-range exposure, with a sound level of 120 dB(A) (the threshold of pain) at one metre (3.3 ft) from the device opening.
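The quoted 120 dB(A) at one metre can be extrapolated to other distances with the free-field point-source model, in which level falls by 20 dB per decade of distance. This is an idealisation that ignores reflections and the horn's directivity:

```python
import math

# Free-field point-source estimate of sound level at distance r:
#   L(r) = L(1 m) - 20*log10(r / 1 m)
# An idealised model: real stadiums add reflections and many sources.
def spl_at_distance(L_1m_dB, r_m):
    return L_1m_dB - 20 * math.log10(r_m)

# 120 dB(A) at 1 m (the figure quoted in the text) falls to about
# 100 dB(A) at 10 m under this model, still a very high exposure level
L10 = spl_at_distance(120.0, 10.0)
```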
Origin:
Plastic aerophones, like the corneta and similar devices, have been used in Brazil and other Latin American countries since the 1960s; similar "Stadium Horns" have also been marketed and available in the United States since around the same date. Similar horns have been in existence for much longer. An instrument that looks like a vuvuzela appears in Winslow Homer's 1870 painting "The Dinner Horn". The origin of the device is disputed. The term vuvuzela was first used in South Africa, from the Zulu language or from a Nguni language. It is also known in the Sepedi language as Lepatata, a Bokoni dialect word meaning to make a blowing sound (directly translated: ukuvuvuzela). Controversies over the invention arose in early 2010. South African Kaizer Chiefs fan Freddie "Saddam" Maake claimed the invention of the vuvuzela, having fabricated an aluminium version in 1965 from a bicycle horn, and has photographic evidence of himself holding the aluminium vuvuzela in the 1970s, 1980s and 1990s. He also claimed to have coined vuvuzela from the Zulu words for "welcome", "unite" and "celebration". The plastics factory Masincedane Sport popularised the ubiquitous plastic vuvuzela commonly heard at South African football games in 2002, and the Nazareth Baptist Church claimed the vuvuzela belonged to their church.
International tournaments:
The world association football governing body, FIFA, proposed banning vuvuzelas from stadiums, as they were seen as potential weapons for hooligans and could be used in ambush marketing. Columnist Jon Qwelane described the device as "an instrument from hell". South African football authorities argued that the vuvuzela was part of the South African football experience. The Spanish midfielder Xabi Alonso said, "Those trumpets? That noise I don't like ... FIFA must ban those things ... it is not nice to have a noise like that". Dutch coach Bert van Marwijk remarked, "... it was annoying ... in the stadiums you get used to it but it is still unpleasant". Commentator Farayi Mungazi said, "Banning the vuvuzela would take away the distinctiveness of a South African World Cup ... absolutely essential for an authentic South African footballing experience". FIFA President Sepp Blatter responded, "we should not try to Europeanise an African World Cup ... that is what African and South Africa football is all about – noise, excitement, dancing, shouting and enjoyment". Despite the criticisms, FIFA agreed to permit their use in stadiums during the 2009 FIFA Confederations Cup and the 2010 FIFA World Cup. The South African football authority argued that the vuvuzela achieved great popularity during the 2010 FIFA World Cup, though television audiences suffered from the noise.
Marketing: Hyundai constructed the world's largest working vuvuzela as part of a marketing campaign for the 2010 FIFA World Cup. The 35-metre (115 ft) blue vuvuzela mounted on the Foreshore Freeway Bridge, Cape Town, was intended to be used at the beginning of each match; however, it did not sound a note during the World Cup, as its volume was a cause of concern to city authorities.
Reception: Its ubiquity led to many suggestions for limiting its use, muffling its sound, and even an outright ban. Broadcasting organisations experienced difficulties with their presentations; television and radio audiences often heard only the sound of vuvuzelas. The BBC, RTÉ, ESPN and BSkyB examined the possibility of filtering out the ambient noise while maintaining game commentary. The vuvuzelas also raised health and safety concerns. Competitors believed the incessant noise hampered players' ability to rest and degraded the quality of team performance. Other critics remarked that vuvuzelas disrupted team communication and players' concentration during matches. Demand for earplugs to protect against hearing loss during the World Cup outstripped supply, with many pharmacies out of stock. One major vuvuzela manufacturer even began selling its own earplugs to spectators.
Audio filtration: Notch filtering, an audio filtration technique, has been proposed to reduce the vuvuzela sound in broadcasts and increase the clarity of commentary audio. The vuvuzela produces notes at a fundamental frequency of approximately 235 Hz, with its first partial at 465 Hz. However, simple notch filtering also affects the clarity of commentary audio. Adaptive-filter proposals by universities and research organisations address this issue by preserving the amplitude and clarity of the commentators' voices and crowd noise. Such filtration techniques have been adopted by some cable television providers.
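As a rough illustration of the notch-filtering approach described above, the following sketch builds a standard biquad notch at the 235 Hz fundamental and checks its magnitude response. The 48 kHz sample rate and Q factor are assumptions for illustration, not values from the text:

```python
# Illustrative biquad notch (RBJ audio-EQ-cookbook form) centred on the
# vuvuzela's ~235 Hz fundamental; sample rate and Q are assumed values.
import cmath, math

def notch_coeffs(f0, q, fs):
    """Notch biquad coefficients, normalized so a[0] == 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def gain_at(f, b, a, fs):
    """Magnitude response |H| of the biquad at frequency f (Hz)."""
    z = cmath.exp(-2j * math.pi * f / fs)
    return abs((b[0] + b[1] * z + b[2] * z ** 2) /
               (a[0] + a[1] * z + a[2] * z ** 2))

b, a = notch_coeffs(235.0, 30.0, 48000)
# Deep attenuation at the fundamental, near-unity gain elsewhere.
```

A second notch at the 465 Hz partial would be needed in practice, and the fixed notch inevitably removes any commentary energy at those frequencies, which is why the adaptive filters mentioned above were preferred.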
2018 FIFA World Cup: Vuvuzelas made a comeback at the 2018 FIFA World Cup in Russia, used mainly by Iranian supporters. Much like in 2010, there was a backlash against their use.
Health effects and regulation:
Health concerns: A 2010 study by Ruth McNerney of the London School of Hygiene & Tropical Medicine and colleagues concluded that the airborne transmission of diseases by means of vuvuzelas was possible. They measured tiny droplets emitted from a vuvuzela that can carry flu and cold germs, are small enough to stay suspended in the air for hours, and can enter the airways of a person's lungs. The study concluded that vuvuzelas can infect others on a greater scale than coughing or shouting. Vuvuzelas also have the potential to cause noise-induced hearing loss. Prof James Hall III, Dirk Koekemoer, De Wet Swanepoel and colleagues at the University of Pretoria found that vuvuzelas can have a negative effect when a listener's eardrums are exposed to the instrument's high-intensity sound. The vuvuzela produces an average sound pressure of 113 dB(A) at two metres (7 ft) from the device opening. The study found that subjects should not be exposed to more than 15 minutes per day at an intensity of 100 dB(A). Since a single vuvuzela emits a sound that is dangerously loud to subjects within a two-metre (7 ft) radius, and numerous vuvuzelas are typically blown together for the duration of a match, the study concluded that spectators may be at significant risk of hearing loss. Hearing-loss experts at the U.S. National Institute for Occupational Safety and Health (NIOSH) recommend that exposure at the 113 dB(A) level not exceed 45 seconds per day. A newer model has a modified mouthpiece that reduces the volume by 20 dB.
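Both duration figures quoted above follow from the standard NIOSH exposure criterion (an 8-hour limit at 85 dB(A), halving for every 3 dB increase). A sketch of that arithmetic:

```python
# Sketch of the exposure arithmetic behind the figures quoted above, assuming
# the standard NIOSH criterion: a 480-minute daily limit at 85 dB(A) that
# halves for every 3 dB increase, i.e. T = 480 / 2**((L - 85) / 3) minutes.
def niosh_limit_seconds(level_dba):
    """Recommended maximum daily exposure time (seconds) at a given level."""
    return 480 * 60 / 2 ** ((level_dba - 85) / 3)

print(niosh_limit_seconds(100))  # 900 s, the 15 min/day figure at 100 dB(A)
print(niosh_limit_seconds(113))  # ~45 s, matching the NIOSH figure in the text
```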
Noise levels and bans: Vuvuzelas have been banned at many venues and events, including: Wembley Stadium (as part of an overall ban on noisemakers); the 2014, 2018, and 2022 FIFA World Cups; all sporting events at the Cardiff City, Sophia Gardens, and Millennium Stadiums; Wimbledon; Lord's Cricket Ground; the Melbourne Cricket Ground; the WACA Ground in Perth; the Gabba in Brisbane; the Sydney Cricket Ground; the now-defunct Champions League Twenty20 cricket tournament; Yankee Stadium; the Fuji Rock Festival; the Southeastern Conference of US college sports; Ultimate Fighting Championship events; Gaelic Athletic Association events; the Little League World Series; Providence Park; UEFA competitions, including all Champions League, Europa League, and European Championship matches; the Rugby World Cup (starting in 2011); the Kontinental Hockey League; the 2010 FIBA World Championship and subsequent basketball tournaments; the National Football League (as part of an overall ban on noisemakers); and the Evolution Championship Series for fighting games.
High school sports governed by the Vermont Principals' Association, the ÖFB Bundesliga, and the Bundesliga (in some stadiums) have also banned vuvuzelas. Some shopping centres in South Africa banned their use as well. They were also banned at Otakon, the 2010 Baltimore anime convention: the convention committee declared that any attendee carrying a vuvuzela could have it confiscated, and that anyone blowing one could face expulsion from the event. This action was taken in response to the prevalence of vuvuzelas at the 2010 Anime Expo in Los Angeles, where representatives of Otakon felt the disruption caused discomfort for some attendees and wished to avoid the same at the later Baltimore event. Nine English Premier League clubs have banned the device. Five clubs (Arsenal, Birmingham City, Everton, Fulham and Liverpool) banned them for health and safety reasons, while Sunderland, West Ham United, and West Bromwich Albion barred them under policies against musical instruments. Manchester United banned vuvuzelas from Old Trafford on August 13, 2010. However, two clubs (Manchester City and Stoke City) have allowed them. The organisers of the 2012 Olympic Games also placed a ban on vuvuzelas at the sporting event.
Usage in protests:
On July 13, 2010, protesters with vuvuzelas converged on BP's London headquarters to protest the company's handling of the Deepwater Horizon oil spill. Vuvuzelas were widely used during the 2011 Wisconsin pro-union protests against Governor Scott Walker, after a Madison DJ, Nick Nice, ordered 200 of them and distributed them to his fellow protesters. According to Nice, this caused vuvuzelas to be included in the list of items banned at the state capitol. In March 2012, German protesters used vuvuzelas during the official traditional torchlight ceremony, the Großer Zapfenstreich, which bid farewell to President of Germany Christian Wulff. Wulff had resigned earlier over corruption allegations, yet he still received the honour of the military ceremony, which left Germany divided.
Usage in music:
Use of the vuvuzela in art music is limited. One of the few compositions written for it is a baroque-style double concerto in C major for vuvuzela, organ (or harpsichord) and string orchestra by Timo Kiiskinen, Professor of Church Music at the Sibelius Academy, Helsinki; the organ version of this concerto was premiered on 21 October 2010 at the Organ Hall of the Sibelius Academy, and the harpsichord version on 19 December 2010 at the Pro Puu gallery in Lahti. John-Luke Mark Matthews has written a concerto in B-flat major for vuvuzela and orchestra. The score and parts for this are available on the IMSLP public-domain score library. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Unisolvent point set**
Unisolvent point set:
In approximation theory, a finite collection of points X ⊂ R^n is often called unisolvent for a space W if any element w ∈ W is uniquely determined by its values on X. X is unisolvent for Π^n_m (the polynomials in n variables of degree at most m) if there exists a unique polynomial in Π^n_m of lowest possible degree which interpolates the data on X. Simple examples in R would be the fact that two distinct points determine a line, three points determine a parabola, etc. It is clear that over R, any collection of k + 1 distinct points will uniquely determine a polynomial of lowest possible degree in Π_k. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
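The one-variable case can be checked computationally: interpolation at k + 1 points has a unique solution exactly when the associated Vandermonde determinant is nonzero. A minimal sketch (illustrative, not part of the original text):

```python
# k+1 distinct points in R are unisolvent for polynomials of degree <= k
# because the Vandermonde matrix they generate is invertible, so the
# interpolation system has exactly one solution.
def vandermonde_det(xs):
    """det V = prod over i < j of (x_j - x_i); nonzero iff points distinct."""
    det = 1.0
    for j in range(len(xs)):
        for i in range(j):
            det *= xs[j] - xs[i]
    return det

assert vandermonde_det([0.0, 1.0, 2.0]) != 0  # three distinct points: unique parabola
assert vandermonde_det([0.0, 1.0, 1.0]) == 0  # repeated point: not unisolvent
```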
**Android Mini PC MK802**
Android Mini PC MK802:
The MK802 is a PC-on-a-stick produced by Rikomagic, a Chinese company, built mainly around two series of system-on-a-chip (SoC) architectures: the AllWinner A1X SoC, based on an ARM architecture and comprising an ARM v7 Cortex-A8 1 GHz processor, a Mali-400 MP GPU, Wi-Fi 802.11 b/g/n, and a CedarX VPU capable of displaying up to 1080p video.
The Rockchip RK3xxx SoC, based on an ARM architecture and comprising an ARM v7 CPU (Cortex-A8, Cortex-A9 and, later, Cortex-A17 for the Rikomagic Mini PC MK902 series) alongside a Mali-400 GPU (Mali-T764 for the MK902).
History and models:
The thumb-sized MK802, first brought to market in May 2012, can turn a display with an HDMI or DVI-D port into an Android computer; the LE (Linux Edition) versions instead run Ubuntu-derived Linux distributions. Since the original design was introduced, several other similar models have been released.
MK802: Original design with AllWinner A10 SoC (featuring single core ARM Cortex-A8 CPU and ARM Mali-400MP GPU).
MK802+: Uses the AllWinner A10s SoC with RAM increased to 1 GB.
MK802 II: Modified form and slightly increased processor speed.
MK802 III: A new design featuring a Rockchip RK3066 (a dual-core ARM Cortex-A9 CPU at 1.6 GHz with an ARM Mali-400MP GPU) and 4 GB or 8 GB of flash storage, running Android 4.1.
MK802 III LE: Version of the MK802 III based on the Picuntu distribution (Xubuntu tweaked for Rockchip SoCs), version 5.0.1; with 1 GB of RAM and 8 GB of flash storage.
MK802 IIIs: Added Bluetooth support, soft power-off function and XBMC support.
MK802 IV: Released in May 2013, a new design featuring a Rockchip RK3188/RK3188T, a quad-core ARM CPU (Cortex-A9 at 1.6 GHz, 1.4 GHz for the T model), 2 GB of RAM, 400 MHz Mali GPU and 8 GB of flash storage that runs Android 4.2.
MK802 IV LE: Ubuntu version of the MK802 IV with 2 GB of RAM, in 8 and 16 GB flash storage versions.
Connectors:
All models provide: mini or micro HDMI; USB 2.0; a micro-USB 2.0 OTG port; a microSD slot; and power via micro-USB. All models appear similar to a somewhat enlarged USB flash drive, housing a processor, RAM, storage and I/O ports. Equipped with a keyboard, mouse and display, the device can perform the functions of an Android-based computer. The Linux distributions Ubuntu or PicUntu can also be installed on these devices, offering a windowed desktop environment.
The MK802's success and design have generated a host of similar devices with similar specifications, many of which have similar model numbers but are not manufactured by Rikomagic. These devices also share many characteristics with the Raspberry Pi computer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Epigenetic controls in ciliates**
Epigenetic controls in ciliates:
Epigenetic controls in ciliates concern a unique characteristic of ciliates: they possess two kinds of nuclei (a phenomenon called nuclear dimorphism), a micronucleus used for inheritance and a macronucleus which controls metabolism. The micronucleus contains the entirety of the genome, whereas the macronucleus contains only the DNA necessary for vegetative growth. The macronucleus divides via amitosis, whereas the micronucleus undergoes typical mitosis. During sexual development, a new macronucleus is formed from the micronucleus following meiosis, a process during which the removal of transposons occurs. During the division or reproduction of ciliates, the two nuclei are under several epigenetic controls. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GenoCAD**
GenoCAD:
GenoCAD is one of the earliest computer-assisted design tools for synthetic biology. The software is a bioinformatics tool developed and maintained by GenoFAB, Inc. GenoCAD facilitates the design of protein expression vectors, artificial gene networks and other genetic constructs for genetic engineering, and is based on the theory of formal languages.
History:
GenoCAD originated as an offshoot of an attempt to formalize functional constraints of genetic constructs using the theory of formal languages. In 2007, the website genocad.org (now retired) was set up as a proof of concept by researchers at the Virginia Bioinformatics Institute, Virginia Tech. Using the website, users could design genes by repeatedly replacing high-level genetic constructs with lower-level genetic constructs, and eventually with actual DNA sequences. On August 31, 2009, the National Science Foundation granted a three-year $1,421,725 grant to Dr. Jean Peccoud, an associate professor at the Virginia Bioinformatics Institute at Virginia Tech, for the development of GenoCAD. GenoCAD was and continues to be developed by GenoFAB, Inc., a company founded by Peccoud (currently CSO and acting CEO), who was also one of the authors of the originating study. Source code for GenoCAD was originally released on SourceForge in December 2009. GenoCAD version 2.0 was released in November 2011 and included the ability to simulate the behavior of the designed genetic code, a feature resulting from a collaboration with the team behind COPASI. In April 2015, Peccoud and colleagues published a library of biological parts, called GenoLIB, that can be incorporated into the GenoCAD platform.
Goals:
The four aims of the project are to develop: a computer language to represent the structure of synthetic DNA molecules used in E. coli, yeast, mouse, and Arabidopsis thaliana cells; a compiler capable of translating DNA sequences into mathematical models in order to predict the encoded phenotype; a collaborative workflow environment which allows users to share parts, designs, and fabrication resources; and a means to forward the results to the user community through an external advisory board, an annual user conference, and outreach to industry.
Features:
The main features of GenoCAD can be organized into three main categories.
Management of genetic sequences: The purpose of this group of features is to help users identify, within large collections of genetic parts, the parts needed for a project and to organize them in project-specific libraries.
Genetic parts: Parts have a unique identifier, a name and a more general description. They also have a DNA sequence. Parts are associated with a grammar and assigned to a parts category such as promoter, gene, etc.
Parts libraries: Collections of parts are organized in libraries. In some cases part libraries correspond to parts imported from a single source such as another sequence database. In other cases, libraries correspond to the parts used for a particular design project. Parts can be moved from one library to another through a temporary storage area called the cart (analogous to e-commerce shopping carts).
Searching parts: Users can search the parts database using the Lucene search engine. Basic and advanced search modes are available. Users can develop complex queries and save them for future reuse.
Importing/Exporting parts: Parts can be imported and exported individually or as entire libraries using standard file formats (e.g., GenBank, tab delimited, FASTA, SBML).
Combining sequences into genetic constructs: The purpose of this group of features is to streamline the process of combining genetic parts into designs compliant with a specific design strategy.
Point-and-click design tool: This wizard guides the user through a series of design decisions that determine the design structure and the selection of parts included in the design.
Design management: Designs can be saved in the user workspace. Design statuses are regularly updated to warn users of the consequences of editing parts on previously saved designs.
Exporting designs: Designs can be exported using standard file formats (e.g., GenBank, tab delimited, FASTA).
Design safety: Designs are protected from some types of errors by forcing the user to follow the appropriate design strategy.
Simulation: Sequences designed in GenoCAD can be simulated to display chemical production in the resulting cell.
User workspace: Users can personalize their workspace by adding parts to the GenoCAD database, creating specialized libraries corresponding to specific design projects, and saving designs at different stages of development.
Theoretical foundation:
GenoCAD is rooted in the theory of formal languages; in particular, the design rules describing how to combine different kinds of parts form context-free grammars.
A context free grammar can be defined by its terminals, variables, start variable and substitution rules. In GenoCAD, the terminals of the grammar are sequences of DNA that perform a particular biological purpose (e.g. a promoter). The variables are less homogeneous: they can represent longer sequences that have multiple functions or can represent a section of DNA that can contain one of multiple different sequences of DNA but perform the same function (e.g. a variable represents the set of promoters). GenoCAD includes built in substitution rules to ensure that the DNA sequence is biologically viable. Users can also define their own sets of rules for other purposes.
Designing a sequence of DNA in GenoCAD is much like creating a derivation in a context free grammar. The user starts with the start variable and repeatedly selects a variable and a substitution for it until only terminals are left.
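The derivation process described above can be sketched as a toy context-free grammar. The part names, substitution rules, and DNA fragments below are invented for illustration and are not GenoCAD's actual grammar:

```python
# Hypothetical context-free grammar for a genetic construct: variables are
# expanded until only terminal parts (with DNA sequences) remain.
RULES = {
    # variable -> one possible substitution (a real grammar offers choices)
    "Construct": ["Cassette"],
    "Cassette": ["Promoter", "Gene", "Terminator"],
}
TERMINALS = {  # terminal part -> illustrative DNA fragment
    "Promoter": "TTGACA", "Gene": "ATGAAA", "Terminator": "TTATTT",
}

def derive(symbol):
    """Expand variables depth-first until only terminals remain."""
    if symbol in TERMINALS:
        return [symbol]
    return [part for s in RULES[symbol] for part in derive(s)]

parts = derive("Construct")                     # the finished derivation
sequence = "".join(TERMINALS[p] for p in parts) # the designed DNA sequence
```

In GenoCAD the user makes each substitution interactively, and the grammar guarantees that only biologically sensible orderings of parts can be produced.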
Alternatives:
The most common alternatives to GenoCAD are Proto, GEC and EuGene. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tuck rule (ice hockey)**
Tuck rule (ice hockey):
The tuck rule is a rule by the National Hockey League (NHL) that stipulates how jerseys must be worn over protective equipment. Notable players who have previously tucked in their jerseys include Alexander Ovechkin, Evgeni Malkin, Patrice Bergeron, Kris Letang, Pavel Datsyuk, Wayne Gretzky and Jaromir Jagr. However, the rule has not been strictly enforced since its introduction to the NHL, as some players, such as Connor McDavid, Evgeni Malkin and Wayne Simmonds have been seen with a slight tuck in their jersey. The Seattle Kraken's winger Daniel Sprong continues to tuck his jersey during play.
Rule description:
The official rule by the National Hockey League is as follows: "NHL Rule 9.5. All protective equipment, except gloves, headgear, and goaltenders' leg guards must be worn under the uniform. Should it be brought to the attention of the referee that a player is wearing, for example, an elbow pad that is not covered by his jersey, he shall instruct the player to cover up the pad and a second violation by the same player would result in a minor penalty being assessed. Rule 9.5 governs all protective equipment, including pants. Players are not permitted to tuck their jersey into their pants in such a manner where the top padding of the pant and/or additional body protection (affixed to the pant or affixed to the player's body) is exposed outside the jersey. The back uniform number must not be covered or obstructed in any fashion by protruding pads or other protective padding." The NHL decided to begin enforcing these uniform policies with the 2013–14 season. As a result, players are not allowed to tuck their jerseys into their pants, expose their elbow pads, or make any other modifications to their jerseys.
Violations of this rule (which is called the "jersey tuck rule") are as follows: A player who does not follow the jersey tuck rule is to be issued a warning on the first offence.
A player who commits the offence a second time is to be assessed a minor penalty for delay of game.
A player who commits the offence a third time is to receive a misconduct.
A player who commits the offence a fourth time is to receive a game misconduct. It is unclear whether the minor penalty will be called as delay of game or as an equipment violation. As of 2013, the NHL Department of Hockey Operations and referees were still determining how the penalty would be called.
Enforcement:
Although these policies have existed since 1964, they were not enforced until general managers voted to do so for the 2013–14 season. Some reporters suggested that enforcing uniform rules was the National Hockey League's attempt to reduce freak accidents in which a player's body was cut by a skate blade, while others said the league was laying down rules in preparation for eventually selling advertising space that would display prominently on the jersey. In a September 2013 pre-season game between the Carolina Hurricanes and the Columbus Blue Jackets, Alexander Semin became the first player penalized for this infraction. After receiving an official's initial warning in that game, his jersey became tucked in again after he scored the second goal of that game; he received the minor penalty 15 minutes later for violating the tuck rule a second time. In response, Semin later stitched his jersey to his pants. However, by early October of the same year, it was reported that the league's hockey operations department would relax enforcement of the tuck rule: the penalty would not be called if the jersey became tucked in while skating, as long as the jersey was untucked at the beginning of a shift.
The rule brings the league in line with other hockey leagues and tournaments, such as the Olympics.
Reception:
Reactions to the tuck rule by NHL players and coaches were overwhelmingly negative. Alexander Ovechkin called the rule "stupid". Former Washington Capitals head coach Adam Oates also disagreed with the rule, citing that superstar players like Wayne Gretzky and Ovechkin tucking in their jerseys were part of their identity. Toronto Maple Leafs player Joffrey Lupul questioned the rule while Morgan Rielly said he was not aware of this rule until the Semin penalty. Boston Bruins centre Patrice Bergeron, often seen with his jersey tucked underneath his protective pants, said he did not do it intentionally but felt that it could become an issue during regular season games. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pig iron**
Pig iron:
Pig iron, also known as crude iron, is an intermediate good used by the iron industry in the production of steel, obtained by smelting iron ore in a blast furnace. Pig iron has a high carbon content, typically 3.8–4.7%, along with silica and other constituents of dross, which makes it brittle and not useful directly as a material except for limited applications. The traditional shape of the molds used for pig iron ingots is a branching structure formed in sand, with many individual ingots at right angles to a central channel or "runner", resembling a litter of piglets being nursed by a sow. When the metal had cooled and hardened, the smaller ingots (the "pigs") were simply broken from the runner (the "sow"), hence the name "pig iron". As pig iron is intended for remelting, the uneven size of the ingots and the inclusion of small amounts of sand are insignificant issues compared to the ease of casting and handling.
History:
Smelting and producing wrought iron was known in ancient Europe and the Middle East, but it was produced in bloomeries by direct reduction. Pig iron was not produced in Europe before the Middle Ages. The Chinese were making pig iron by the later Zhou dynasty (which ended in 256 BC). Furnaces such as Lapphyttan in Sweden may date back to the 12th century; and some in Mark (today part of Westphalia, Germany) to the 13th. It remains to be established whether these northern European developments derive from Chinese ones. Wagner has postulated a possible link via Persian contacts with China along the Silk Road and Viking contacts with Persia, but there is a chronological gap between the Viking period and Lapphyttan.
In the furnace, the transition of the iron into the liquid phase was a phenomenon to be avoided, as decarburizing the resulting pig iron into steel was an extremely tedious process using medieval technology.
Uses:
Traditionally, pig iron was worked into wrought iron in finery forges, later puddling furnaces, and more recently, into steel. In these processes, pig iron is melted and a strong current of air is directed over it while it is stirred or agitated. This causes the dissolved impurities (such as silicon) to be thoroughly oxidized. An intermediate product of puddling is known as refined pig iron, finers metal, or refined iron. Pig iron can also be used to produce gray iron. This is achieved by remelting pig iron, often along with substantial quantities of steel and scrap iron, removing undesirable contaminants, adding alloys, and adjusting the carbon content. Some pig iron grades are suitable for producing ductile iron. These are high-purity pig irons and, depending on the grade of ductile iron being produced, may be low in silicon, manganese, sulfur and phosphorus. These types of pig iron are used to dilute all the elements (except carbon) in a ductile iron charge which may be harmful to the ductile iron process.
Modern uses: Until recently, pig iron was typically poured directly out of the bottom of the blast furnace through a trough into a ladle car for transfer to the steel mill in mostly liquid form; in this state, the pig iron was referred to as hot metal. The hot metal was then poured into a steelmaking vessel to produce steel, typically an electric arc furnace, induction furnace or basic oxygen furnace, where the excess carbon is burned off and the alloy composition controlled. Earlier processes for this included the finery forge, the puddling furnace, the Bessemer process, and the open hearth furnace.
Modern steel mills and direct-reduction iron plants transfer the molten iron to a ladle for immediate use in the steel making furnaces or cast it into pigs on a pig-casting machine for reuse or resale. Modern pig casting machines produce stick pigs, which break into smaller 4–10 kg piglets at discharge. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sofosbuvir/velpatasvir/voxilaprevir**
Sofosbuvir/velpatasvir/voxilaprevir:
Sofosbuvir/velpatasvir/voxilaprevir, sold under the brand name Vosevi, is a fixed-dose combination medication for the treatment of hepatitis C. It combines three drugs that each act by a different mechanism of action against the hepatitis C virus: sofosbuvir, velpatasvir, and voxilaprevir. Vosevi was approved for medical use in the United States and in the European Union in July 2017. It is sold by Gilead Sciences. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Barrier cone**
Barrier cone:
In mathematics, specifically functional analysis, the barrier cone is a cone associated to any non-empty subset of a Banach space. It is closely related to the notions of support functions and polar sets.
Definition:
Let X be a Banach space and let K be a non-empty subset of X. The barrier cone of K is the subset b(K) of X∗, the continuous dual space of X, defined by b(K) := {ℓ ∈ X∗ | sup_{x∈K} ⟨ℓ, x⟩ < +∞}.
Related notions:
The function σ_K(ℓ) := sup_{x∈K} ⟨ℓ, x⟩, defined for each continuous linear functional ℓ on X, is known as the support function of the set K; thus, the barrier cone of K is precisely the set of continuous linear functionals ℓ for which σ_K(ℓ) is finite.
The set of continuous linear functionals ℓ for which σ_K(ℓ) ≤ 1 is known as the polar set of K. The set of continuous linear functionals ℓ for which σ_K(ℓ) ≤ 0 is known as the (negative) polar cone of K. Clearly, both the polar set and the negative polar cone are subsets of the barrier cone. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
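As a simple worked example (illustrative, not from the original text), the barrier cone of a ray can be computed straight from the definition:

```latex
% Worked example: the barrier cone of the ray K = { t e : t >= 0 }.
\[
  K = \{\, t e : t \ge 0 \,\},\quad e \in X,\ e \neq 0
  \quad\Longrightarrow\quad
  \sup_{x \in K} \langle \ell, x \rangle
    = \sup_{t \ge 0} t \langle \ell, e \rangle
    = \begin{cases} 0, & \langle \ell, e \rangle \le 0,\\
                    +\infty, & \langle \ell, e \rangle > 0, \end{cases}
\]
\[
  \text{so } b(K) = \{\, \ell \in X^{*} : \langle \ell, e \rangle \le 0 \,\},
  \text{ which here coincides with the negative polar cone of } K.
\]
```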
**Double heterostructure**
Double heterostructure:
A double heterostructure, sometimes called double heterojunction, is formed when two semiconductor materials are grown into a "sandwich". One material (such as AlGaAs) is used for the outer layers (or cladding), and another of smaller band gap (such as GaAs) is used for the inner layer. In this example, there are two AlGaAs-GaAs junctions (or boundaries), one at each side of the inner layer. There must be two boundaries for the device to be a double heterostructure. If there was only one side of cladding material, the device would be a simple, or single, heterostructure.
Double heterostructure:
The double heterostructure is a very useful structure in optoelectronic devices and has interesting electronic properties. If one of the cladding layers is p-doped, the other cladding layer n-doped and the smaller energy gap semiconductor material undoped, a p-i-n structure is formed. When a current is applied to the ends of the p-i-n structure, electrons and holes are injected into the heterostructure. The smaller energy gap material forms energy discontinuities at the boundaries, confining the electrons and holes to the smaller energy gap semiconductor. The electrons and holes recombine in the intrinsic semiconductor, emitting photons. If the width of the intrinsic region is reduced to the order of the de Broglie wavelength, the energies in the intrinsic region become discrete rather than continuous. (Strictly speaking, the energy levels of the bulk material are also discrete, but they are so close together that they are treated as continuous.) In this situation the double heterostructure becomes a quantum well. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
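For a rough sense of the scales involved, the idealized infinite-square-well formula E_n = n²h²/(8m*L²) can be evaluated for a GaAs-like layer. The 10 nm width and the effective mass are illustrative assumptions, and a real double heterostructure has finite barriers, so these values are only indicative:

```python
# Illustrative estimate (assumed values, not from the original text):
# energy levels of an idealized infinite square well, E_n = n^2 h^2 / (8 m* L^2),
# for a 10 nm GaAs-like layer with effective electron mass m* ~ 0.067 m_e.
H = 6.626e-34        # Planck constant, J*s
M_E = 9.109e-31      # electron rest mass, kg
EV = 1.602e-19       # joules per electron-volt

def infinite_well_levels_ev(width_m, m_eff_kg, n_max=3):
    """First n_max energy levels (in eV) of an infinite square well."""
    e1 = H ** 2 / (8 * m_eff_kg * width_m ** 2) / EV
    return [n ** 2 * e1 for n in range(1, n_max + 1)]

levels = infinite_well_levels_ev(10e-9, 0.067 * M_E)
# Levels scale as n^2 and grow as the well narrows -- quantization becomes
# significant once the width approaches the electron's de Broglie wavelength.
```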
**NIST RBAC model**
NIST RBAC model:
The NIST RBAC model is a standardized definition of role-based access control. Although originally developed by the National Institute of Standards and Technology, the standard was adopted and is copyrighted and distributed as INCITS 359-2004 by the International Committee for Information Technology Standards (INCITS). The latest version is INCITS 359-2012.
It is managed by INCITS committee CS1.
History:
In 2000, NIST called for a unified standard for RBAC, integrating the RBAC model published in 1992 by Ferraiolo and Kuhn with the RBAC framework introduced by Sandhu, Coyne, Feinstein, and Youman (1996). This proposal was published by Sandhu, Ferraiolo, and Kuhn and presented at the ACM 5th Workshop on Role Based Access Control. Following debate and comment within the RBAC and security communities, NIST made revisions and proposed a U.S. national standard for RBAC through the INCITS. In 2004, the standard received ballot approval and was adopted as INCITS 359-2004. Sandhu, Ferraiolo, and Kuhn later published an explanation of the design choices in the model.
In 2010, NIST announced a revision to RBAC, incorporating features of attribute-based access control (ABAC). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
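At its core, the model the standard formalizes assigns permissions to roles and roles to users. A minimal sketch of those relations (the users, roles, and permissions below are invented for illustration; this is not the INCITS 359 reference model itself):

```python
# Minimal sketch of core role-based access control: users acquire
# permissions only through the roles assigned to them.
user_roles = {"alice": {"auditor"}, "bob": {"operator"}}
role_perms = {"auditor": {"read_log"}, "operator": {"read_log", "start_job"}}

def check_access(user, permission):
    """Access is granted iff some role assigned to the user holds it."""
    return any(permission in role_perms.get(r, set())
               for r in user_roles.get(user, set()))

assert check_access("alice", "read_log")       # via the auditor role
assert not check_access("alice", "start_job")  # alice holds no such role
```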
**Polymatroid**
Polymatroid:
In mathematics, a polymatroid is a polytope associated with a submodular function. The notion was introduced by Jack Edmonds in 1970. It is also described as the multiset analogue of the matroid.
Definition:
Let E be a finite set and f : 2^E → R₊ a non-decreasing submodular function; that is, for each A ⊆ B ⊆ E we have f(A) ≤ f(B), and for each A, B ⊆ E we have f(A) + f(B) ≥ f(A∪B) + f(A∩B). We define the polymatroid associated to f to be the following polytope: P_f = {x ∈ R₊^E | ∑_{e∈U} x(e) ≤ f(U), for all U ⊆ E}. When we allow the entries of x to be negative we denote this polytope by EP_f, and call it the extended polymatroid associated to f. An equivalent definition: let E be a finite set. If u, v ∈ R^E then we denote by |u| the sum of the entries of u, and write u ≤ v whenever v(i) − u(i) ≥ 0 for every i ∈ E (notice that this gives a partial order on R₊^E). A polymatroid on the ground set E is a nonempty compact subset P of R₊^E, the set of independent vectors, such that: (1) if v ∈ P, then u ∈ P for every u ≤ v; (2) if u, v ∈ P with |v| > |u|, then there is a vector w ∈ P such that u < w ≤ (max{u(1), v(1)}, …, max{u(|E|), v(|E|)}). This definition is equivalent to the one described before, where f is the function defined by f(A) = max{∑_{i∈A} v(i) | v ∈ P} for every A ⊆ E.
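The first definition can be checked directly by brute force on a small ground set. A sketch (illustrative, with f taken to be the rank function of the uniform matroid U_{2,3}):

```python
# Testing membership in the polymatroid P_f by checking
# sum over e in U of x(e) <= f(U) for every nonempty U subset of E.
from itertools import combinations

def in_polymatroid(x, E, f):
    """x: dict e -> value; f: set function; brute-force over all subsets."""
    if any(v < 0 for v in x.values()):
        return False
    for r in range(1, len(E) + 1):
        for U in combinations(E, r):
            if sum(x[e] for e in U) > f(frozenset(U)) + 1e-9:
                return False
    return True

# f(U) = min(|U|, 2) is non-decreasing and submodular: it is the rank
# function of the uniform matroid U_{2,3}.
E = ["a", "b", "c"]
f = lambda U: min(len(U), 2)
assert in_polymatroid({"a": 1, "b": 1, "c": 0}, E, f)      # every subset sum respects f
assert not in_polymatroid({"a": 1, "b": 1, "c": 1}, E, f)  # total 3 exceeds f(E) = 2
```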
Relation to matroids:
To every matroid M on the ground set E we can associate the set V_M = {w_F | F ∈ I}, where I is the set of independent sets of M and we denote by w_F the characteristic vector of F ⊆ E: for every i ∈ E, w_F(i) = 1 if i ∈ F and w_F(i) = 0 if i ∉ F.
By taking the convex hull of VM we get a polymatroid. It is associated to the rank function of M . The conditions of the second definition reflect the axioms for the independent sets of a matroid.
Relation to generalized permutahedra:
Because generalized permutahedra can be constructed from submodular functions, and every generalized permutahedron has an associated submodular function, we have that there should be a correspondence between generalized permutahedra and polymatroids. In fact every polymatroid is a generalized permutahedron that has been translated to have a vertex in the origin. This result suggests that the combinatorial information of polymatroids is shared with generalized permutahedra.
Properties:
P_f is nonempty if and only if f ≥ 0, and EP_f is nonempty if and only if f(∅) ≥ 0. Given any extended polymatroid EP, there is a unique submodular function f such that f(∅) = 0 and EP_f = EP.
Contrapolymatroids:
For a supermodular f one analogously may define the contrapolymatroid {w∈R+E|∀S⊆E,∑e∈Sw(e)≥f(S)} This analogously generalizes the dominant of the spanning set polytope of matroids.
Discrete polymatroids:
When we focus only on the lattice points of a polymatroid we get what is called a discrete polymatroid. Formally, the definition of a discrete polymatroid is exactly the same as that of a polymatroid, except that the vectors live in ℤ₊^E instead of ℝ₊^E. This combinatorial object is of great interest because of its relationship to monomial ideals.
**Switching time**
Switching time:
For a frequency synthesizer, the switching time, or more colloquially the switching speed, is the amount of time from when the command for the next frequency is requested until the synthesizer's output becomes usable and meets the specified requirements. Such requirements vary with the design of the synthesizer. In the 1970s switching speeds ranged from 1 millisecond to 10 microseconds. A more general statement has been given by James A. Crawford: 50 reference cycles as a rule of thumb. By this rule, a reference frequency of 50 kHz gives a settling time of 1 millisecond. Two other authors, Hamid Rategh and Thomas H. Lee, state that the switching time (i.e., settling time) is a function of the percentage change in the feedback division ratio; that is, ΔN/N itself determines the switching time, where N is the frequency synthesizer's feedback divisor.
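Crawford's rule of thumb reduces to simple arithmetic, sketched below in Python (the function name and default are illustrative, not from any standard library):

```python
def settling_time(ref_hz, cycles=50):
    """Crawford's rule of thumb: settling time is roughly 50 reference cycles."""
    return cycles / ref_hz

print(settling_time(50e3))  # 0.001 s, i.e. 1 millisecond for a 50 kHz reference
```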
**Chorismate mutase**
Chorismate mutase:
In enzymology, chorismate mutase (EC 5.4.99.5) is an enzyme that catalyzes the chemical reaction for the conversion of chorismate to prephenate in the pathway to the production of phenylalanine and tyrosine, also known as the shikimate pathway.
Chorismate mutase:
Hence, this enzyme has one substrate, chorismate, and one product, prephenate. Chorismate mutase is found at a branch point in the pathway. The enzyme channels the substrate, chorismate to the biosynthesis of tyrosine and phenylalanine and away from tryptophan. Its role in maintaining the balance of these aromatic amino acids in the cell is vital. This is the single known example of a naturally occurring enzyme catalyzing a pericyclic reaction. Chorismate mutase is only found in fungi, bacteria, and higher plants. Some varieties of this protein may use the morpheein model of allosteric regulation.
Protein family:
This enzyme belongs to the family of isomerases, specifically those intramolecular transferases that transfer functional groups. The systematic name of this enzyme class is chorismate pyruvatemutase. Chorismate mutase, also known as hydroxyphenylpyruvate synthase, participates in phenylalanine, tyrosine and tryptophan biosynthesis. The structures of chorismate mutases vary in different organisms, but the majority belong to the AroQ family and are characterized by an intertwined homodimer of 3-helical subunits. Most chorismate mutases in this family look similar to that of Escherichia coli. For example, the secondary structure of the chorismate mutase of yeast is very similar to that of E. coli. Chorismate mutases in the AroQ family are more common in nature and are widely distributed among the prokaryotes. For optimal function, they usually have to be accompanied by another enzyme such as prephenate dehydrogenase. These chorismate mutases are typically bifunctional enzymes, meaning they contain two catalytic capacities in the same polypeptide chain. However, the chorismate mutases of eukaryotic organisms are more commonly monofunctional. There are organisms such as Bacillus subtilis whose chorismate mutases have a completely different structure and are monofunctional. These enzymes belong to the AroH family and are characterized by a trimeric α/β barrel topology.
Mechanism of catalysis:
The conversion of chorismate to prephenate is the first committed step in the pathway to the production of the aromatic amino acids: tyrosine and phenylalanine. The presence of chorismate mutase increases the rate of the reaction a million fold. In the absence of enzyme catalysis this mechanism proceeds as a concerted, but asynchronous step and is an exergonic process. The mechanism for this transformation is formally a Claisen rearrangement, supported by the kinetic and isotopic data reported by Knowles, et al.
Mechanism of catalysis:
E. coli and yeast chorismate mutases have limited sequence homology, but their active sites contain similar residues. The active site of the yeast chorismate mutase contains Arg16, Arg157, Thr242, Glu246, Glu198, Asn194, and Lys168. The E. coli active site contains the same residues with the exception of these noted exchanges: Asp48 for Asn194, Gln88 for Glu248, and Ser84 for Thr242. In the enzyme active site, interactions between these specific residues and the substrate restrict conformational degrees of freedom, such that the entropy of activation is effectively reduced to zero, thereby promoting catalysis. As a result, there is no formal intermediate, but rather a pseudo-diaxial chair-like transition state. Evidence for this conformation is provided by an inverse secondary kinetic isotope effect at the carbon directly attached to the hydroxyl group. This seemingly unfavorable arrangement is achieved through a series of electrostatic interactions, which rotate the extended chain of chorismate into the conformation required for this concerted mechanism.
Mechanism of catalysis:
An additional stabilizing factor in this enzyme-substrate complex is hydrogen bonding between the lone pair of the oxygen in the vinyl ether system and hydrogen bond donor residues. Not only does this stabilize the complex, but disruption of resonance within the vinyl ether destabilizes the ground state and reduces the energy barrier for this transformation. An alternative view is that electrostatic stabilization of the polarized transition state is of great importance in this reaction. In the chorismate mutase active site, the transition-state analog is stabilized by 12 electrostatic and hydrogen-bonding interactions. This is shown in mutants of the native enzyme in which Arg90 is replaced with citrulline to demonstrate the importance of hydrogen bonding to stabilize the transition state. Other work using chorismate mutase from Bacillus subtilis showed evidence that when a cation was aptly placed in the active site, the electrostatic interactions between it and the negatively charged transition state promoted catalysis.
Additional studies have been done in order to support the relevance of a near attack conformer (NAC) in the reaction catalyzed by chorismate mutase. This NAC is the reactive conformation of the ground state that is directly converted to the transition state in the enzyme. Using thermodynamic integration (TI) methods, the standard free energies (ΔGN°) for NAC formation were calculated in six different environments. The data obtained suggest that effective catalysis is derived from stabilization of both the NAC and the transition state. However, other experimental evidence supports that the NAC effect observed is simply a result of electrostatic transition state stabilization.
Overall, there have been extensive studies on the exact mechanism of this reaction.
However, the relative contribution of conformational constraint of the flexible substrate, specific hydrogen bonding to the transition state, and electrostatic interactions to the observed rate enhancement is still under discussion.
**We**
We:
In Modern English, we is a plural, first-person pronoun.
Morphology:
In Standard Modern English, we has six distinct shapes for five word forms:
we: the nominative (subjective) form
us and 's: the accusative (objective; also called the 'oblique': 146) form
our: the dependent genitive (possessive) form
ours: the independent genitive (possessive) form
ourselves: the reflexive form
There is also a distinct determiner we, as in we humans aren't perfect, which some people consider to be just an extended use of the pronoun.
History:
We has been part of English since Old English, having come from Proto-Germanic *wejes, from PIE *we-. Similarly, us was used in Old English as the accusative and dative plural of we, from PIE *nes-. Old English also had distinct dual first-person forms alongside the plural. By late Middle English the dual form was lost and the dative and accusative had merged.: 117 The ours genitive can be seen as early as the 12th century. Ourselves replaced the original construction we selfe, us selfum in the 15th century, so that, by century's end, the Middle English forms of we had solidified into those we use today.: 120
Gender:
We is not generally seen as participating in the system of gender. In Old English, it certainly did not: only third-person pronouns had distinct masculine, feminine, and neuter gender forms.: 117 But by the 17th century, that old gender system, which had also marked gender on common nouns and adjectives, had disappeared, leaving only pronoun marking. At the same time, a new relative pronoun system was developing that eventually split between personal relative who and impersonal relative which. This is seen as a new personal / non-personal (or impersonal) gender system.: 1048 As a result, some scholars consider we to belong to the personal gender, along with who.
Syntax:
Functions
We can appear as a subject, object, determiner or predicative complement. The reflexive form also appears as an adjunct.
Subject: We're there; us being there; our being there; we planned for ourselves to be there.
Object: They saw us; She pointed them to us; We thought about ourselves.
Predicative complement: They have become us; We eventually felt we had become ourselves.
Dependent determiner: We reached our goals; We humans aren't perfect; Give it to us students.
Independent determiner: This is ours.
Adjunct: We did it ourselves.
The contracted object form 's is only possible after the special let of let's do that.
Dependents
Pronouns rarely take dependents, but it is possible for we to have many of the same kind of dependents as other noun phrases.
Relative clause modifier: we who arrived late
Determiner: Not a lot of people know the real us.
Adjective phrase modifier: Not a lot of people know the real us.
Adverb phrase external modifier: not even us
Semantics:
We's referents generally must include the speaker, along with other persons. A few exceptional cases, which include nosism, are presented below. We is always definite and specific.
Royal we The royal we, or majestic plural (pluralis majestatis), is sometimes used by a person of high office, such as a monarch, earl, or pope. It has singular semantics.
Semantics:
Editorial we The editorial we is a similar phenomenon, in which an editorial columnist in a newspaper or a similar commentator in another medium refers to themselves as we when giving their opinion. Here, the writer casts themselves in the role of spokesperson: either for the media institution that employs them, or on behalf of the party or body of citizens who agree with the commentary. The reference is not explicit, but is generally consistent with first-person plural.
Semantics:
Author's we The author's we, or pluralis modestiae, is a practice referring to a generic third person as we (instead of one or the informal you): By adding four and five, we obtain nine.
Semantics:
We are therefore led also to a definition of "time" in physics. — Albert Einstein
We in this sense often refers to "the reader and the author" because the author often assumes that the reader knows and agrees with certain principles or previous theorems for the sake of brevity (or, if not, the reader is prompted to look them up). This practice is discouraged by some academic style guides because it fails to distinguish between sole authorship and co-authorship. Again, the reference is not explicit, but is generally consistent with first-person plural.
Semantics:
Inclusive and exclusive we Some languages distinguish between inclusive we, which includes both the speaker and the addressee(s), and exclusive we, which excludes the addressee(s). English does not make this distinction grammatically, though we can have both inclusive and exclusive semantics.
Semantics:
Imperative let's or let us allows imperatives to be inclusive.: 925 Compare: Take this outside. (exclusive, 2nd person) Let's take this outside. (inclusive, 1st person)
Second-person we We is used sometimes in place of you to address a second party: A doctor may ask a patient: "And how are we feeling today?". A waiter may ask a client: "What are we in the mood for?"
Membership we The membership we is a simultaneous reference to the individual, and to the collective of which the individual is a member. If ants or hive bees could use English, they might use the pronoun we almost exclusively. Human cultures can be categorized as communal or individualist; the membership we aligns more with a communal culture. The speaker, or thinker, expresses ideas with awareness of both themselves and the collective of other members. If language constrains or liberates thinking, then using the membership we may impact our ability to understand, empathize, and bond with others. The extent of inclusion when using the membership we is loosely definite; the group may be others of the same village, nation, species, or planet. The following two examples show how meaning changes subtly depending on whether I or we is used. When using the membership we, the reader or speaker is automatically drawn into the collective, and the change in viewpoint is significant: If I consume too much, I will run out of resources. If we consume too much, we will run out of resources.
Semantics:
The more I learn, the more I should question. The more we learn, the more we should question.
Pronunciation:
According to the OED, the following pronunciations are used:
**PIKO**
PIKO:
Piko (stylized PIKO, pronounced "peek-oh") is a German model train brand in Europe that also exports to the United States and other parts of the world.
History:
Founded in 1949, PIKO was once a state-owned enterprise in the German Democratic Republic (East Germany), supplying a share of model trains in Eastern Europe. In 1992, after the reunification of Germany, the company was purchased by PIKO Spielwaren GmbH. PIKO Spielwaren GmbH was founded in April 1992 by Dr. René F. Wilfer, PIKO’s President, who had been working in the toy industry since 1986 and had previously managed a model building company.
Products:
PIKO manufactures more than 1,500 products in various model train scales: G-Scale: American and European-prototype weather-resistant models for indoor and outdoor use, including starter sets, locomotives, passenger and freight cars, track, buildings, controls and accessories.
HO-Scale: European-prototype models including starter sets, locomotives (most in both 2-Rail DC and 3-Rail AC versions), passenger and freight cars, track, buildings, controls and accessories.
TT-Scale: European-prototype locomotives and cars.
N-Scale: European-prototype locomotives, cars, and buildings.
Its headquarters factory in Sonneberg (Thuringia) Germany makes the G-Scale and some HO-Scale products, while its PIKO China factory in Chashan, China, makes the HO-Scale "Expert", "Hobby", "SmartControl", "SmartControlLight" and "myTrain" lines, as well as the N-Scale and TT-Scale lines.
Manufacturing:
Distribution
In Germany, PIKO products are distributed from the firm's headquarters in Sonneberg to a network of retailers. In other countries, PIKO distributors and representatives perform a similar function. In America, sales and distribution to retailers are handled by PIKO America in San Diego, CA.
**Visual field test**
Visual field test:
A visual field test is an eye examination that can detect dysfunction in central and peripheral vision which may be caused by various medical conditions such as glaucoma, stroke, pituitary disease, brain tumours or other neurological deficits. Visual field testing can be performed clinically by keeping the subject's gaze fixed while presenting objects at various places within their visual field. Simple manual equipment can be used such as in the tangent screen test or the Amsler grid. When dedicated machinery is used it is called a perimeter.
Visual field test:
The exam may be performed in one of several ways: by a technician directly, by a technician with the assistance of a machine, or completely by an automated machine. Machine-based tests aid diagnostics by allowing a detailed printout of the patient's visual field.
Other names for this test may include perimetry, Tangent screen exam, Automated perimetry exam or Goldmann visual field exam.
Examination methods:
Techniques used to perform this test include the confrontation visual field examination (Donders' test). The examiner will ask the patient to cover one eye and stare at the examiner. Ideally, when the patient covers their right eye, the examiner covers their left eye and vice versa. The examiner will then move his hand out of the patient's visual field and then bring it back in. Commonly the examiner will use a slowly wagging finger or a hat pin for this. The patient signals the examiner when his hand comes back into view. This is frequently done by an examiner as a simple and preliminary test.
Perimetry:
Perimetry or campimetry is one way to systematically test the visual field. It is the systematic measurement of differential light sensitivity in the visual field by the detection of the presence of test targets on a defined background. Perimetry more carefully maps and quantifies the visual field, especially at the extreme periphery of the visual field. The name comes from the method of testing the perimeter of the visual field.
Perimetry:
Automated perimeters are used widely, and applications include: diagnosing disease, job selection, visual competence assessment, school or community screenings, military selection, and disability classifications.
Types
Tangent screen The simplest form of perimetry uses a white tangent screen. Vision is tested by presenting different-sized pins attached to a black wand, which may be moved, against a black background. This test stimulus (pins) may be white or colored.
Perimetry:
Goldmann perimeter The Goldmann perimeter is a hollow white spherical bowl positioned a set distance in front of the patient. An examiner presents a test light of variable size and intensity. The light may move towards the center from the perimeter (kinetic perimetry), or it may remain in one location (static perimetry). The Goldmann method is able to test the entire range of peripheral vision and has been used for years to follow vision changes in glaucoma patients. However, now automated perimetry is more commonly used.
Perimetry:
Automated perimetry Automated perimetry uses a mobile stimulus moved by a perimetry machine. The patient indicates whether he sees the light by pushing a button. The use of a white background and lights of incremental brightness is called "white-on-white" perimetry. This type of perimetry is the most commonly used in clinical practice, and in research trials where loss of visual field must be measured. However, the sensitivity of white-on-white perimetry is low, and the variability is relatively high; as many as 25–50 percent of the photoreceptor cells may be lost before changes in visual field acuity are detected. This method is commonly used for early detection of blind spots. The patient sits in front of an (artificial) small concave dome in a small machine with a target in the center. The chin rests on the machine and the eye that is not being tested is covered. A button is given to the patient to be used during the exam. The patient is set in front of the dome and asked to focus on the target at the center. A computer then shines lights on the inside dome and the patient clicks the button whenever a light is seen. The computer then automatically maps and calculates the patient's visual field.
Perimetry:
Microperimetry Microperimetry assesses the macular function in a similar way to perimetry. However, fundus imaging is performed at the same time. This allows for fundus tracking to ensure accurate stimulus placement. Thus, microperimetry enhances retest reliability, enables precise structure-function correlation, and allows for the examination of patients with unstable fixation.
Methods of stimulus presentation:
Static perimetry Static perimetry tests different locations throughout the field one at a time. First, a dim light is presented at a particular location. If the patient does not see the light, it is made gradually brighter until it is seen. The minimum brightness required for the detection of a light stimulus is called the "threshold" sensitivity level of that location. This procedure is then repeated at several other locations, until the entire visual field is tested.Threshold static perimetry is generally done using automated equipment. It is used for rapid screening and follow-up of diseases involving deficits such as scotomas, loss of peripheral vision and more subtle vision loss. Perimetry testing is important in the screening, diagnosing, and monitoring of various eye, retinal, optic nerve and brain disorders.
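The per-location threshold procedure of static perimetry can be sketched in Python; the coordinate scheme, intensity scale, and simulated patient response below are hypothetical, chosen only to make the ascending search concrete:

```python
def find_threshold(detects, levels):
    """Ascending search: return the dimmest level the patient reports seeing."""
    for level in levels:              # presented from dim to bright
        if detects(level):
            return level
    return None                       # never seen: a possible scotoma at this location

def map_visual_field(locations, detects, levels=range(1, 11)):
    """Repeat the threshold search at each tested location of the field."""
    return {loc: find_threshold(lambda lvl: detects(loc, lvl), levels) for loc in locations}

# Hypothetical patient: threshold 4 everywhere except a blind spot at (15, 0)
patient = lambda loc, lvl: loc != (15, 0) and lvl >= 4
print(map_visual_field([(0, 0), (15, 0), (-15, 0)], patient))
# {(0, 0): 4, (15, 0): None, (-15, 0): 4}
```

Real automated perimeters use adaptive staircase strategies rather than a plain ascending sweep, but the idea of mapping a per-location sensitivity threshold is the same.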
Methods of stimulus presentation:
Kinetic perimetry Kinetic perimetry uses a mobile stimulus moved by an examiner (perimetrist) such as in Goldmann kinetic perimetry. First, a single test light of constant size and brightness is used. The test light is moved towards the center of vision from the periphery until it is first detected by the patient. This is repeated by approaching the center of vision from different directions. Repeating this enough will establish a boundary of vision for that target. The procedure is repeated using different test lights that are larger or brighter than the original test light.
Methods of stimulus presentation:
In this way, kinetic perimetry is useful for mapping visual field sensitivity boundaries. It may be a good alternative for patients that have difficulty with automated perimetry, either due to difficulty maintaining constant gaze, or due to cognitive impairment.
Stimulus settings and photoreceptor-specific perimetry:
Photopic perimetry The most commonly performed perimetry test uses white stimuli on a bright white background (photopic white-on-white testing). This tests isolated L- and M-cone function and is applied in the setting of glaucoma.
Scotopic perimetry Following 30 minutes of dark-adaptation, it is possible to selectively test rod function using short-wavelength (blue) stimuli on a dark background. Today, it is also possible to perform this type of examination in eyes with unstable fixation using scotopic microperimetry.
**BioCyc database collection**
BioCyc database collection:
The BioCyc database collection is an assortment of organism specific Pathway/Genome Databases (PGDBs) that provide reference to genome and metabolic pathway information for thousands of organisms. As of July 2023, there were over 20,040 databases within BioCyc. SRI International, based in Menlo Park, California, maintains the BioCyc database family.
Categories of Databases:
Based on the amount of manual curation performed, the BioCyc database family is divided into three tiers: Tier 1: Databases that have received at least one year of literature-based manual curation. Currently there are seven databases in Tier 1. Of the seven, MetaCyc is a major database that contains almost 2500 metabolic pathways from many organisms. The other important Tier 1 database is HumanCyc, which contains around 300 metabolic pathways found in humans. The remaining five databases are EcoCyc (E. coli), AraCyc (Arabidopsis thaliana), YeastCyc (Saccharomyces cerevisiae), LeishCyc (Leishmania major Friedlin) and TrypanoCyc (Trypanosoma brucei).
Categories of Databases:
Tier 2: Databases that were computationally predicted but have received moderate manual curation (most with 1–4 months curation). Tier 2 Databases are available for manual curation by scientists who are interested in any particular organism. Tier 2 databases currently contain 43 different organism databases.
Tier 3: Databases that were computationally predicted by PathoLogic and received no manual curation. As with Tier 2, Tier 3 databases are also available for curation for interested scientists.
Software tools:
The BioCyc website contains a variety of software tools for searching, visualizing, comparing, and analyzing genome and pathway information. It includes a genome browser, and browsers for metabolic and regulatory networks. The website also includes tools for painting large-scale ("omics") datasets onto metabolic and regulatory networks, and onto the genome.
Use in Research:
Since the BioCyc database family comprises a long list of organism-specific databases, with data at different systems levels of a living system, it has been used in research in a wide variety of contexts. Two studies are highlighted here that show two different kinds of use: one on a genome scale, and one identifying specific SNPs (single nucleotide polymorphisms) within a genome.
Use in Research:
AlgaGEM AlgaGEM is a genome scale metabolic network model for a compartmentalized algae cell developed by Gomes de Oliveira Dal’Molin et al. based on the Chlamydomonas reinhardtii genome. It has 866 unique ORFs, 1862 metabolites, 2499 gene-enzyme-reaction-association entries, and 1725 unique reactions. One of the Pathway databases used for reconstruction is MetaCyc.
SNPs The study by Shimul Chowdhury et al. showed that associations between maternal SNPs and metabolites involved in the homocysteine, folate, and transsulfuration pathways differed in cases with congenital heart defects (CHDs) as opposed to controls. The study used HumanCyc to select candidate genes and SNPs.
**Creatine methyl ester**
Creatine methyl ester:
Creatine methyl ester is the methyl ester derivative of the amino acid creatine. It can be prepared by the esterification of creatine with methanol.
**(Trimethylsilyl)methyl chloride**
(Trimethylsilyl)methyl chloride:
(Trimethylsilyl)methyl chloride is the organosilicon compound with the formula (CH3)3SiCH2Cl. A colorless, volatile liquid, it is an alkylating agent that is employed in organic synthesis, especially as a precursor to (trimethylsilyl)methyllithium. In the presence of triphenylphosphine, it olefinates benzophenones: (CH3)3SiCH2Cl + PPh3 + Ar2C=O → Ar2C=CH2 + OPPh3 + (CH3)3SiCl
**TPA-023**
TPA-023:
TPA-023 (MK-0777) is an anxiolytic drug with a novel chemical structure, which is used in scientific research. It has similar effects to benzodiazepine drugs, but is structurally distinct and so is classed as a nonbenzodiazepine anxiolytic. It is a subtype-selective, mixed allosteric modulator at the benzodiazepine site on GABAA receptors, where it acts as a partial agonist at the α2 and α3 subtypes, but as a silent antagonist at the α1 and α5 subtypes. It has primarily anxiolytic and anticonvulsant effects in animal tests, but with no sedative effects even at 50 times the effective anxiolytic dose. In human trials on healthy volunteers, TPA-023 was comparable to lorazepam, but had far fewer side effects on cognition, memory, alertness and coordination. In Phase II trials, the compound was significantly superior to placebo without inducing sedation. Clinical development was halted due to preclinical toxicity (cataracts) in long-term dosing studies. TPA-023 is well absorbed following oral administration and extensively metabolised by the liver, with a half-life of 6.7 hours. The main enzyme involved in its metabolism is CYP3A4, with some contribution by CYP3A5.
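Given the stated 6.7-hour half-life, and assuming simple first-order elimination (a common pharmacokinetic simplification, not a claim from the trial data), the fraction of a dose remaining over time works out as:

```python
def fraction_remaining(t_hours, half_life_hours=6.7):
    """First-order elimination: fraction of the dose remaining after t hours."""
    return 0.5 ** (t_hours / half_life_hours)

print(fraction_remaining(6.7))            # 0.5 after one half-life
print(round(fraction_remaining(24), 2))   # roughly 0.08 of the dose left after a day
```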
**Test money**
Test money:
Test money (or test notes, test bills, funny money, Monopoly money) are a part of the test apparatus that are often used with currency handling equipment, such as automatic teller machines. While it is often desirable to use actual banknotes or coins in the process of testing currency handling equipment, the inherent value of the objects being used means that security procedures must be put in place during the testing period that they are used. If the testing includes destructive testing, where the currency is purposefully damaged or destroyed to see how the machinery will react, further concern will be raised about the subsequent loss in value of the objects. To remove these concerns, test money is often used in place of real currency.
Test money:
Test money may share some or all of the characteristics of a given currency (size, paper type, paper thickness, colouring, printing characteristics, various denominations), but it also has some form of easily identifiable, non-removable, non-mutable characteristics that differentiate it from legal tender, scrip, or counterfeit currency.
For certain types of bulk cash handling equipment, the test money units may represent bundled or rolled currency.
Some members of the notaphiliatelic community collect test money.
Use in television and movies:
Test money is often used in television and movie production in the same fashion.
**Minimal instruction set computer**
Minimal instruction set computer:
Minimal instruction set computer (MISC) is a central processing unit (CPU) architecture, usually in the form of a microprocessor, with a very small number of basic operations and corresponding opcodes, together forming an instruction set. Such sets are commonly stack-based rather than register-based to reduce the size of operand specifiers.
Such a stack machine architecture is inherently simpler since all instructions operate on the top-most stack entries.
One result of the stack architecture is an overall smaller instruction set, allowing a smaller and faster instruction decode unit with overall faster operation of individual instructions.
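To make the stack-based style concrete, here is a minimal interpreter for a hypothetical eight-instruction stack ISA; the opcode names and semantics are invented for illustration and do not correspond to any real MISC chip:

```python
def run(program):
    """Interpret a toy 8-instruction stack ISA: every operation acts on the top of the stack."""
    stack, pc = [], 0
    while pc < len(program):
        op = program[pc]
        pc += 1
        if op == 'PUSH':                      # the only instruction with an inline operand
            stack.append(program[pc])
            pc += 1
        elif op == 'ADD':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == 'SUB':
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
        elif op == 'DUP':
            stack.append(stack[-1])
        elif op == 'DROP':
            stack.pop()
        elif op == 'SWAP':
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif op == 'JZ':                      # branch to an absolute address if top of stack is zero
            target = program[pc]
            pc += 1
            if stack.pop() == 0:
                pc = target
        elif op == 'HALT':
            break
    return stack

# (3 + 4) - 2, computed entirely on the stack: no register or memory operand specifiers needed
print(run(['PUSH', 3, 'PUSH', 4, 'ADD', 'PUSH', 2, 'SUB', 'HALT']))  # [5]
```

Note how only PUSH and JZ carry operands; the arithmetic and stack-manipulation instructions name no operands at all, which is exactly why stack encodings keep instructions, and hence the decode unit, small.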
Characteristics and design philosophy:
Separate from the stack definition of a MISC architecture, is the MISC architecture being defined by the number of instructions supported.
Typically a minimal instruction set computer is viewed as having 32 or fewer instructions, where NOP, RESET, and CPUID type instructions are usually not counted by consensus due to their fundamental nature.
32 instructions is viewed as the highest allowable number of instructions for a MISC, though 16 or 8 instructions are closer to what is meant by "Minimal Instructions".
A MISC CPU cannot have zero instructions as that is a zero instruction set computer.
A MISC CPU cannot have one instruction as that is a one instruction set computer.
The implemented CPU instructions should by default not support a wide set of inputs, so this typically means an 8-bit or 16-bit CPU.
If a CPU has an NX bit, it is more likely to be viewed as being a complex instruction set computer (CISC) or reduced instruction set computer (RISC).
MISC chips typically lack hardware memory protection of any kind, unless there is an application specific reason to have the feature.
If a CPU has a microcode subsystem, that excludes it from being a MISC.
The only addressing mode considered acceptable for a MISC CPU to have is load/store, the same as for reduced instruction set computer (RISC) CPUs.
MISC CPUs can typically have between 64 KB and 4 GB of accessible addressable memory, but most MISC designs use under 1 megabyte. The instruction pipelines of MISC designs, as a rule, are also very simple: complex instruction pipelines, branch prediction, out-of-order execution, register renaming, and speculative execution broadly exclude a CPU from being classified as a MISC architecture.
While 1-bit CPUs are otherwise obsolete (and were neither MISCs nor OISCs), the first carbon nanotube computer is a 1-bit one-instruction set computer with only 178 transistors, and is thus likely the lowest-complexity (or next-lowest) CPU produced so far by transistor count.
History:
Some of the first digital computers implemented with instruction sets were by modern definition minimal instruction set computers.
Among these various computers, only ILLIAC and ORDVAC had compatible instruction sets.
Manchester Baby (University of Manchester, England) made its first successful run of a stored program on June 21, 1948.
Electronic Delay Storage Automatic Calculator (EDSAC, University of Cambridge, England), the first practical stored-program electronic computer (May 1949)
Manchester Mark 1 (Victoria University of Manchester, England), developed from the Baby (June 1949)
Commonwealth Scientific and Industrial Research Automatic Computer (CSIRAC, Council for Scientific and Industrial Research), Australia (November 1949)
Electronic Discrete Variable Automatic Computer (EDVAC, Ballistic Research Laboratory, Computing Laboratory at Aberdeen Proving Ground, 1951)
Ordnance Discrete Variable Automatic Computer (ORDVAC, University of Illinois at Urbana–Champaign), at Aberdeen Proving Ground, Maryland (completed November 1951)
IAS machine at Princeton University (January 1952)
MANIAC I at Los Alamos Scientific Laboratory (March 1952)
MESM, which performed its first test run in Kyiv on November 6, 1950
Illinois Automatic Computer (ILLIAC) at the University of Illinois (September 1952)
Early stored-program computers:
The IBM SSEC had the ability to treat instructions as data, and was publicly demonstrated on January 27, 1948. This ability was claimed in a US patent issued April 28, 1953. However, it was partly electromechanical, not fully electronic. In practice, instructions were read from paper tape due to its limited memory.
The Manchester Baby, by the Victoria University of Manchester, was the first fully electronic computer to run a stored program. It ran a factoring program for 52 minutes on June 21, 1948, after running a simple division program and a program to show that two numbers were relatively prime.
The Electronic Numerical Integrator and Computer (ENIAC) was modified to run as a primitive read-only stored-program computer (using the Function Tables for program read-only memory (ROM)), and was demonstrated as such on September 16, 1948, running a program by Adele Goldstine for von Neumann.
The Binary Automatic Computer (BINAC) ran some test programs in February, March, and April 1949, although it was not completed until September 1949.
The Manchester Mark 1 developed from the Baby project. An intermediate version of the Mark 1 was available to run programs in April 1949, but was not completed until October 1949.
The Electronic Delay Storage Automatic Calculator (EDSAC) ran its first program on May 6, 1949.
The Electronic Discrete Variable Automatic Computer (EDVAC) was delivered in August 1949, but it had problems that kept it from being put into regular operation until 1951.
The Commonwealth Scientific and Industrial Research Automatic Computer (CSIRAC, formerly CSIR Mk I) ran its first program in November 1949.
The Standards Eastern Automatic Computer (SEAC) was demonstrated in April 1950.
The Pilot ACE ran its first program on May 10, 1950 and was demonstrated in December 1950.
The Standards Western Automatic Computer (SWAC) was completed in July 1950.
The Whirlwind was completed in December 1950 and was in actual use in April 1951.
The first ERA Atlas (later the commercial ERA 1101/UNIVAC 1101) was installed in December 1950.
Design weaknesses:
The disadvantage of a MISC is that instructions tend to have more sequential dependencies, reducing overall instruction-level parallelism.
MISC architectures have much in common with features of some programming languages, such as Forth's use of the stack, and with the Java virtual machine. Both are weak in providing full instruction-level parallelism. However, one could employ macro-op fusion as a means of executing common instruction phrases as individual steps (e.g., ADD, FETCH fused to perform a single indexed memory read).
Notable CPUs:
Probably the most commercially successful MISC was the original INMOS transputer architecture, which had no floating-point unit. However, many 8-bit microcontrollers used in embedded applications also qualify as MISCs.
Each STEREO spacecraft includes two P24 MISC CPUs and two CPU24 MISC CPUs.
**GNOME Character Map**
GNOME Character Map:
GNOME Character Map, formerly known as Gucharmap, is a free and open-source Unicode character map program and part of GNOME. The program can display characters by Unicode block or by script type. It includes brief descriptions of related characters and, occasionally, meanings of the character in question. Gucharmap can also be used to enter characters (by copy and paste). The search functionality allows several search methods, including by the Unicode name or code point of the character. It is built on the GTK toolkit and can be run on any platform GTK supports. A number of text programs use Gucharmap for character input.
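Python's standard unicodedata module exposes the same kinds of name and code-point queries that the program's search performs; this is an analogy to illustrate the feature, not Gucharmap's actual implementation:

```python
import unicodedata

# Look up a character's Unicode name and code point, and search by name,
# the same kinds of queries a character map program supports.
ch = "λ"
name = unicodedata.name(ch)                   # official Unicode character name
codepoint = f"U+{ord(ch):04X}"                # code point in U+XXXX notation
found = unicodedata.lookup("GREEK SMALL LETTER LAMDA")  # reverse lookup by name
```

(Note that the Unicode standard spells the character name "LAMDA".)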
History:
Version 0.1 of the program was released on December 13, 2002, with basic Unicode font viewing capabilities, which were developed slowly. On July 2, 2003, it was decided that Gucharmap would be included in GNOME 2.4. Two months later, on September 10, version 1.0.0 was released with bug fixes and translation updates for inclusion with GNOME 2.4.
**Aquapet**
Aquapet:
Aquapets are interactive, electronic toys that were introduced in the US in 2004 by Wild Planet. They consist of a transparent, water-filled case housing a thumb-sized figure, and a base with a microchip, microphone, and speaker to register and respond to sounds made by kids or by other Aquapets. Each character has its own look, sounds and songs and responds with movement and melody. The more a child plays with their Aquapets, the more songs they will perform and the livelier they will become.
History:
The first three characters of the original Aquapets (Miku, also referred to as Kiko; Puku; and Tu) were sold exclusively at Toys R Us stores starting in February 2004. By April of that year, the initial three figures were joined by three new characters (Kadet, Bunni and Stinga) to complete "Wave 1", which became available in mass-market and specialty stores across the country.
In October 2004, six new characters were released. "Wave 2" consisted of: Lugi, Fanga, Bebe, Pizzazz, Tabi and Blotto. In January 2005, "Wave 3" was released which included Floptopus, Snorkl, Squirt, Fuego, Likabee, and Peegee. "Wave 4" was released in August 2005, and it included Bertie, Purkle, Spangle, Dilly, Skinker, and Fizzie. "Wave 5" was the final "wave" of Aquapets. It was released in April 2006, and it included Pachinko, Harf, Zot, Zmooch, Fretta, and Kitzi.
In 2005, the spinoff LiquiFreaks were released. They work similarly to Aquapets, with two buttons, feed and zap. Another spinoff, called Dino-Mites, was released in 2006, with a glowing light in the tube. They work similarly to Aquapets and LiquiFreaks, but lack a microphone.
In March 2011, Wild Planet released New Aquapets - redesigned and reprogrammed versions of eight characters from previous collections, Bebe, Bertie, Bunni, Fizzie, Fretta, Harf, Puku and Squirt. New Aquapets live in tear-drop shaped cases and play three new interactive games: Memory Moov (a memory sequence game), Aqua Speed (a quick reflex challenge), and Bubble Boogie (a dancing game). In late 2011, "Wave 2" of the New Aquapets were released; consisting of Tu, Miku (renamed to Muki), Purkle, Likabee, Zmooch, and Kitzi.
Reception:
Aquapets were named one of the top Tech Toys of the year in the 2004 Toy Wishes magazine and were finalists in the same category on the nationally televised Ultimate Toy Awards show. They were featured in National Geographic Kids magazine in "5 Smart Toys – the Science Behind This Season's Coolest Toys" in December 2004, and covered twice in 2004 by U.S. News & World Report – once in a "Best New Toys" story, and once in "Smart Chart." The Chicago Tribune recommended the toy as a stocking stuffer for parents to buy for their kids in the "Make sure your stocking has the right stuff" story in December 2004. Disney Adventures magazine called Aquapets "the perfect gift" and included the toy in its "All Wrapped Up" gift guide in December 2004. In 2004, they were featured on TV news programs across the country in "Top Toys" and "Holiday Gifts for Kids" segments, including: CNN Headline News, CBS News This Morning, CBS Early Show, CNBC Closing Bell, WB Morning News, NY1 News All Weekend, FOX Evening News, Tech TV Fresh Gear, Tech TV Screen Savers, CNBC Wake Up Call, NBC Evening News, NBC News Today, and Telemundo Ahora. In 2005, Aquapets received the "Best Toy Award" Gold Seal from Oppenheim Toy Portfolio.
**Fragment (logic)**
Fragment (logic):
In mathematical logic, a fragment of a logical language or theory is a subset of this logical language obtained by imposing syntactical restrictions on the language. Hence, the well-formed formulae of the fragment are a subset of those in the original logic. However, the semantics of the formulae in the fragment and in the logic coincide, and any formula of the fragment can be expressed in the original logic.
The computational complexity of tasks such as satisfiability or model checking for a logical fragment can be no higher than that of the same tasks in the original logic, since there is a reduction from the former problem to the latter. An important problem in computational logic is to determine fragments of well-known logics, such as first-order logic, that are as expressive as possible yet are decidable or, more strongly, have low computational complexity. The field of descriptive complexity theory aims at establishing a link between logics and computational complexity theory, by identifying logical fragments that exactly capture certain complexity classes.
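A classic concrete instance (chosen here for illustration, not drawn from the text above) is the Horn fragment of propositional logic: restricting clauses to at most one positive literal makes satisfiability solvable in polynomial time by forward chaining, while satisfiability for full propositional logic is NP-complete. A minimal sketch:

```python
# Horn-SAT by forward chaining (illustrative sketch).
# A Horn clause is encoded as (body, head): body is a tuple of atoms,
# head is an atom or None (None encodes a goal clause "body -> False").
def horn_sat(clauses):
    true_atoms = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if set(body) <= true_atoms:
                if head is None:
                    return False          # a goal clause fired: unsatisfiable
                if head not in true_atoms:
                    true_atoms.add(head)  # derive a new fact
                    changed = True
    return True                           # least model reached: satisfiable

# p, p -> q, (p and q) -> False  is unsatisfiable
unsat = horn_sat([((), "p"), (("p",), "q"), (("q", "p"), None)])
# p -> q alone is satisfiable (the all-false assignment works)
sat = horn_sat([(("p",), "q")])
```

Each pass either derives a new atom or terminates, so the loop runs at most a polynomial number of times, which is exactly the kind of complexity drop that motivates studying fragments.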
**Title (animal)**
Title (animal):
In animal husbandry and animal fancy, animals can compete in various shows and sports for titles signifying excellence. These titles vary depending on the species of the animal, the kind of show, and the country the event is held in.
Dogs:
Conformation shows:
Dogs competing in conformation shows are eligible to win two titles. The first is Best of Breed, which signifies that a given animal is the best of its breed at the show. These Best of Breed winners then compete to win Best in Show. Animals that win enough Best of Breed and Best in Show awards are called Champions, and their show names are prefixed with Ch., such as Ch. Warren Remedy.
Dog sports:
Dogs competing in a variety of dog sports are eligible to earn a number of titles. Often the first, or basic, title signifies that a given animal has displayed a competent level of performance and is capable of proceeding in a given discipline. The next title is often incremented by class or level, up to a Champion title, followed by a Grand Champion title, in that particular activity. The dog's registered name is then appended or prefixed with the title earned. This varies with the registry or sanctioning body under which the title was awarded. Both prefixed and appended titles are represented in Morghem's .500 Nitro Express's fully titled name: CA UWP URO1 CH USJ 'PR' Morghem's .500 Nitro Express CGC TT.
**Aridification**
Aridification:
Aridification is the process of a region becoming increasingly arid, or dry. It refers to long term change, rather than seasonal variation.
It is often measured as the reduction of average soil moisture content.
It can be caused by reduced precipitation, increased evaporation, lowering of water tables, and changes in ground cover acting individually or in combination.
Its major consequences include reduced agricultural production, soil degradation, ecosystem changes and decreased water catchment runoff. Some researchers have found that the Colorado River basin and other parts of western North America are currently undergoing aridification.
**Buffer analysis**
Buffer analysis:
In geographic information systems (GIS) and spatial analysis, buffer analysis is the determination of a zone around a geographic feature containing locations that are within a specified distance of that feature, the buffer zone (or just buffer). A buffer is likely the most commonly used tool within the proximity analysis methods.
History:
The buffer operation has been a core part of GIS functionality since the original integrated GIS software packages of the late 1970s and early 1980s, such as ARC/INFO, Odyssey, and MOSS. Although it has been one of the most widely used GIS operations in subsequent years, in a wide variety of applications, there has been little published research on the tool itself, except for the occasional development of a more efficient algorithm.
Basic algorithm:
The fundamental method to create a buffer around a geographic feature stored in a vector data model, with a given radius r, is as follows:
Single point: create a circle around the point with radius r.
Polyline (an ordered list of points (vertices) connected by straight line segments; this is also used for the boundary of a polygon):
Create a circle buffer of radius r around each vertex.
Create a rectangle along each line segment by creating a duplicate line segment, offset the distance r perpendicular to each side.
Merge or dissolve the rectangles and circles into a single polygon.
Software implementations of the buffer operation typically use alterations of this strategy to process more efficiently and accurately.
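The buffer's defining property, the set of locations within distance r of the feature, can also be tested point by point without constructing the polygon. A minimal planar sketch with illustrative helper names (real GIS packages construct the polygon itself):

```python
import math

# A point lies inside the planar buffer of a polyline iff its distance
# to the nearest line segment is <= r. (Illustrative sketch, not a
# real GIS library's API.)
def dist_point_segment(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:                # degenerate segment: a single point
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping the parameter to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def in_buffer(p, polyline, r):
    return any(dist_point_segment(p, a, b) <= r
               for a, b in zip(polyline, polyline[1:]))

line = [(0, 0), (10, 0)]
near = in_buffer((5, 2), line, 3)   # within 3 units of the line
far = in_buffer((5, 5), line, 3)    # more than 3 units away
```

The clamped projection handles the rounded endcaps implicitly: beyond a segment's endpoints, the distance is measured to the nearest vertex, which is equivalent to testing against the vertex circles in the construction above.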
Planar vs. geodesic distance:
Traditional implementations assumed the buffer was being created on a planar Cartesian coordinate space (i.e., created by a map projection) using Euclidean geometry, because the mathematics and computation involved are relatively simple, which was important given the computing power available in the late 1970s. Due to the inherent distortions caused by map projections, the buffer computed this way will not be identical to one drawn on the surface of the Earth; at a local scale, the difference is negligible, but at larger scales, the error can be significant. Some current software, such as Esri ArcGIS Pro, offers the option to compute buffers using geodesic distance, using a similar algorithm but calculated using spherical trigonometry, including representing the lines between vertices as great circles. Other implementations use a workaround by first reprojecting the feature to a projection that minimizes distortion in that location, then computing the planar buffer.
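To see why the planar and geodesic results differ, great-circle distance can be computed with the haversine formula. This sketch assumes a spherical Earth with mean radius 6,371 km (an approximation; geodesic GIS tools use an ellipsoidal model):

```python
import math

# Haversine great-circle distance on a spherical Earth.
# R is the mean Earth radius in metres (an assumed constant for this sketch).
def haversine_m(lat1, lon1, lat2, lon2, R=6371000.0):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# One degree of longitude spans ~111 km at the equator but shrinks
# toward the poles; a fixed planar buffer radius in degrees ignores this.
d_equator = haversine_m(0, 0, 0, 1)   # roughly 111 km
d_lat60 = haversine_m(60, 0, 60, 1)   # roughly half that, ~55.6 km
```

The factor-of-two difference at 60° latitude is exactly the kind of distortion that makes planar buffers unreliable over large extents.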
Options:
GIS software may offer variations on the basic algorithm, which may be useful in different applications: Endcaps at the end of linear buffers are rounded by default, but may be squared off or given a butt end (truncated at the final vertex).
Side preference may be important, such as needing the buffer on only one side of a line, or on a polygon, selecting only the outer buffer or the inner buffer (sometimes called a setback).
Variable width, in which the features in a layer may be buffered using different radii, usually given by an attribute.
Common buffers, in which the buffers for each feature in a layer are dissolved into a single polygon. This is most commonly used when one is not concerned about which feature is near each point in space, only that a point is nearby some (anonymous) feature.
**4-HO-DPT**
4-HO-DPT:
4-HO-DPT (4-hydroxy-N,N-dipropyltryptamine, Deprocin) is a substituted tryptamine with psychedelic effects. It is the 4-hydroxyl analog of dipropyltryptamine (DPT).
In 2019, Chadeayne et al. solved the crystal structure of the fumarate salt of 4-HO-DPT. The authors describe the structure as follows: "The asymmetric unit contains one 4-HO-DPT cation, protonated at the dipropylamine N atom. There are also two independent water molecules, and half of a fumarate ion present."
**2-phosphosulfolactate phosphatase**
2-phosphosulfolactate phosphatase:
The enzyme 2-phosphosulfolactate phosphatase (EC 3.1.3.71) catalyzes the reaction
(2R)-2-phospho-3-sulfolactate + H2O ⇌ (2R)-3-sulfolactate + phosphate
This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is (R)-2-phospho-3-sulfolactate phosphohydrolase. Other names in common use include (2R)-phosphosulfolactate phosphohydrolase and ComB phosphatase.
Structural studies:
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1VR0.
**Hypersurface**
Hypersurface:
In geometry, a hypersurface is a generalization of the concepts of hyperplane, plane curve, and surface. A hypersurface is a manifold or an algebraic variety of dimension n − 1, which is embedded in an ambient space of dimension n, generally a Euclidean space, an affine space or a projective space.
Hypersurfaces share, with surfaces in a three-dimensional space, the property of being defined by a single implicit equation, at least locally (near every point), and sometimes globally.
A hypersurface in a (Euclidean, affine, or projective) space of dimension two is a plane curve. In a space of dimension three, it is a surface. For example, the equation $x_1^2+x_2^2+\cdots+x_n^2-1=0$ defines an algebraic hypersurface of dimension n − 1 in the Euclidean space of dimension n. This hypersurface is also a smooth manifold, and is called a hypersphere or an (n – 1)-sphere.
Smooth hypersurface:
A hypersurface that is a smooth manifold is called a smooth hypersurface.
In Rn, a smooth hypersurface is orientable. Every connected compact smooth hypersurface is a level set, and separates Rn into two connected components; this is related to the Jordan–Brouwer separation theorem.
Affine algebraic hypersurface:
An algebraic hypersurface is an algebraic variety that may be defined by a single implicit equation of the form $p(x_1,\ldots,x_n)=0$, where p is a multivariate polynomial. Generally the polynomial is supposed to be irreducible. When this is not the case, the hypersurface is not an algebraic variety, but only an algebraic set. It may depend on the authors or the context whether a reducible polynomial defines a hypersurface. To avoid ambiguity, the term irreducible hypersurface is often used. As for algebraic varieties, the coefficients of the defining polynomial may belong to any fixed field k, and the points of the hypersurface are the zeros of p in the affine space $K^n$, where K is an algebraically closed extension of k.
A hypersurface may have singularities, which are the common zeros, if any, of the defining polynomial and its partial derivatives. In particular, a real algebraic hypersurface is not necessarily a manifold.
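As a standard worked example (chosen for illustration, not taken from the text above), the singular points of the nodal cubic follow directly from this definition:

```latex
% Singular points = common zeros of p and its partial derivatives.
% Example: the nodal cubic p(x,y) = y^2 - x^2(x+1).
p(x,y) = y^2 - x^3 - x^2, \qquad
\frac{\partial p}{\partial x} = -3x^2 - 2x, \qquad
\frac{\partial p}{\partial y} = 2y.
% p = \partial p/\partial x = \partial p/\partial y = 0 forces y = 0 and
% x(3x+2) = 0 together with x^2(x+1) = 0, so the only singular point is
% the origin (0,0), where the real curve crosses itself and is not
% locally a manifold.
```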
Properties:
Hypersurfaces have some specific properties that are not shared with other algebraic varieties. One of the main such properties is Hilbert's Nullstellensatz, which asserts that a hypersurface contains a given algebraic set if and only if the defining polynomial of the hypersurface has a power that belongs to the ideal generated by the defining polynomials of the algebraic set.
A corollary of this theorem is that, if two irreducible polynomials (or more generally two square-free polynomials) define the same hypersurface, then one is the product of the other by a nonzero constant.
Hypersurfaces are exactly the subvarieties of dimension n – 1 of an affine space of dimension n. This is the geometric interpretation of the fact that, in a polynomial ring over a field, the height of an ideal is 1 if and only if the ideal is a principal ideal. In the case of possibly reducible hypersurfaces, this result may be restated as follows: hypersurfaces are exactly the algebraic sets all of whose irreducible components have dimension n – 1.
Real and rational points:
A real hypersurface is a hypersurface that is defined by a polynomial with real coefficients. In this case the algebraically closed field over which the points are defined is generally the field $\mathbb{C}$ of complex numbers. The real points of a real hypersurface are the points that belong to $\mathbb{R}^n \subset \mathbb{C}^n$.
The set of the real points of a real hypersurface is the real part of the hypersurface. Often, it is left to the context whether the term hypersurface refers to all points or only to the real part.
If the coefficients of the defining polynomial belong to a field k that is not algebraically closed (typically the field of rational numbers, a finite field or a number field), one says that the hypersurface is defined over k, and the points that belong to $k^n$ are rational over k (in the case of the field of rational numbers, "over k" is generally omitted).
For example, the imaginary n-sphere defined by the equation $x_0^2+\cdots+x_n^2+1=0$ is a real hypersurface without any real point, which is defined over the rational numbers. It has no rational point, but has many points that are rational over the Gaussian rationals.
Projective algebraic hypersurface:
A projective (algebraic) hypersurface of dimension n – 1 in a projective space of dimension n over a field k is defined by a homogeneous polynomial $P(x_0,x_1,\ldots,x_n)$ in n + 1 indeterminates. As usual, homogeneous polynomial means that all monomials of P have the same degree, or, equivalently, that $P(cx_0,cx_1,\ldots,cx_n)=c^d P(x_0,x_1,\ldots,x_n)$ for every constant c, where d is the degree of the polynomial. The points of the hypersurface are the points of the projective space whose projective coordinates are zeros of P.
If one chooses the hyperplane of equation $x_0=0$ as the hyperplane at infinity, the complement of this hyperplane is an affine space, and the points of the projective hypersurface that belong to this affine space form an affine hypersurface of equation $P(1,x_1,\ldots,x_n)=0$.
Conversely, given an affine hypersurface of equation $p(x_1,\ldots,x_n)=0$, it defines a projective hypersurface, called its projective completion, whose equation is obtained by homogenizing p. That is, the equation of the projective completion is $P(x_0,x_1,\ldots,x_n)=0$, with $P(x_0,x_1,\ldots,x_n)=x_0^d\,p(x_1/x_0,\ldots,x_n/x_0)$, where d is the degree of p.
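A standard worked instance of this homogenization (an illustration, not taken from the text above):

```latex
% Projective completion of the affine parabola p(x,y) = y - x^2 (degree d = 2):
P(x_0, x_1, x_2) \;=\; x_0^2 \, p\!\left(\tfrac{x_1}{x_0}, \tfrac{x_2}{x_0}\right)
\;=\; x_0^2\left(\tfrac{x_2}{x_0} - \tfrac{x_1^2}{x_0^2}\right)
\;=\; x_0 x_2 - x_1^2 .
% Setting x_0 = 1 recovers the affine equation y - x^2 = 0; the extra
% solutions with x_0 = 0 force x_1 = 0, giving the single point [0:0:1],
% the parabola's point at infinity.
```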
These two processes, projective completion and restriction to an affine subspace, are inverse to one another. Therefore, an affine hypersurface and its projective completion have essentially the same properties, and are often considered as two points of view on the same hypersurface. However, it may occur that an affine hypersurface is nonsingular while its projective completion has singular points. In this case, one says that the affine surface is singular at infinity. For example, the circular cylinder of equation $x^2+y^2-1=0$ in the affine space of dimension three has a unique singular point, which is at infinity, in the direction x = 0, y = 0.
**Limbic resonance**
Limbic resonance:
Limbic resonance is the idea that the capacity for sharing deep emotional states arises from the limbic system of the brain. These states include the dopamine circuit-promoted feelings of empathic harmony, and the norepinephrine circuit-originated emotional states of fear, anxiety and anger. The concept was advanced in the book A General Theory of Love (2000), and is one of three interrelated concepts central to the book's premise: that our brain chemistry and nervous systems are measurably affected by those closest to us (limbic resonance); that our systems synchronize with one another in a way that has profound implications for personality and lifelong emotional health (limbic regulation); and that these set patterns can be modified through therapeutic practice (limbic revision). In other words, it refers to the capacity for empathy and non-verbal connection that is present in mammals, and that forms the basis of our social connections as well as the foundation for various modes of therapy and healing. According to the authors (Thomas Lewis, M.D., Fari Amini, M.D. and Richard Lannon, M.D.), our nervous systems are not self-contained, but rather demonstrably attuned to those around us with whom we share a close connection. "Within the effulgence of their new brain, mammals developed a capacity we call 'limbic resonance' — a symphony of mutual exchange and internal adaptation whereby two mammals become attuned to each other's inner states." This notion of limbic resonance builds on previous formulations and similar ideas. For example, the authors retell at length the notorious experiments of Harry Harlow establishing the importance of physical contact and affection in the social and cognitive development of rhesus monkeys. They also make extensive use of subsequent research by Tiffany Field on mother/infant contact, Paul D. MacLean on the triune brain (reptilian, limbic, and neocortex), and the work of G.W. Kraemer.
Importance and history:
Lewis, Amini and Lannon first make their case by examining a story from the dawn of scientific experimentation in human development, when in the thirteenth century Frederick II raised a group of infants to be completely cut off from human interaction, other than the most basic care and feeding, so as to discover what language would spontaneously arise in the absence of any communication prompts. The result of this notorious experiment was that the infants, deprived of any human discourse or affection, all died. The authors find the hegemony of Freudian theory in the early days of psychology and psychiatry to be almost as harmful as the ideas of Frederick II. They condemn the focus on cerebral insight, and the ideal of a cold, emotionless analyst, as negating the very benefit that psychotherapy can confer by virtue of the empathetic bond and neurological reconditioning that can occur in the course of sustained therapeutic sessions. "Freud's enviable advantage is that he never seriously undertook to follow his own advice. Many promising young therapists have their responsiveness expunged, as they are taught to be dutifully neutral observers, avoiding emotional contact....But since therapy is limbic relatedness, emotional neutrality drains life out of the process..." A General Theory of Love is scarcely more sympathetic to Dr. Benjamin Spock and his "monumentally influential volume" Baby and Child Care, especially given Spock's role in promoting the movement against co-sleeping, or allowing infants to sleep in the same bed as their parents. Lewis, Amini and Lannon cite the research of sleep scientist James McKenna, which seems to suggest that the limbic regulation between sleeping parents and infants is essential to the neurological development of the latter and a major factor in preventing Sudden Infant Death Syndrome (SIDS).
"The temporal unfolding of particular sleep stages and awake periods of the mother and infant become entwined....on a minute to minute basis, throughout the night, much sensory communication is occurring between them."
Subsequent use of the term:
Since the first publication of A General Theory of Love in 2000, the term limbic resonance has gained popularity with subsequent writers and researchers. The term brings a higher degree of specificity to the ongoing discourse in psychological literature concerning the importance of empathy and relatedness. In A Handbook of Psychology (2003), a clear path is traced from Winnicott (1965), who identified the concept of mother and child as a relational organism or dyad, to an examination of the interrelation of social and emotional responding with neurological development and the role of the limbic system in regulating response to stress. Limbic resonance is also referred to as "empathic resonance", as in the book Empathy in Mental Illness (2007), which establishes the centrality of empathy, or the lack thereof, in a range of individual and social pathologies. The authors, Farrow and Woodruff, cite the work of MacLean (1985) as establishing that "Empathy is perhaps the heart of mammalian development, limbic regulation and social organization", as well as research by Carr et al. (2003), who used fMRI to map brain activity during the observation and imitation of emotional facial expressions, concluding that "we understand the feelings of others via a mechanism of action representation that shapes emotional content and that our empathic resonance is grounded in the experience of our bodies in action and the emotions associated with specific bodily movements". Other studies cited examine the link between mirror neurons (activated during such mimicking activity) and the limbic system, such as Chartrand & Bargh (1999): "Mirror neurone areas seem to monitor this interdependence, this intimacy, this sense of collective agency that comes out of social interactions and that is tightly linked to the ability to form empathic resonance." Limbic resonance and limbic regulation are also referred to as "mood contagion" or "emotional contagion", as in the work of Sigal Barsade and colleagues at the Yale School of Management.
In The Wise Heart, Buddhist teacher Jack Kornfield echoes the musical metaphor of the original definition of "limbic resonance" offered by authors Lewis, Amini and Lannon of A General Theory of Love, and correlates these findings of Western psychology with the tenets of Buddhism: "Each time we meet another human being and honor their dignity, we help those around us. Their hearts resonate with ours in exactly the same way the strings of an unplucked violin vibrate with the sounds of a violin played nearby. Western psychology has documented this phenomenon of 'mood contagion' or limbic resonance. If a person filled with panic or hatred walks into a room, we feel it immediately, and unless we are very mindful, that person's negative state will begin to overtake our own. When a joyfully expressive person walks into a room, we can feel that state as well." In March 2010, citing A General Theory of Love, Kevin Slavin referred to limbic resonance in considering the dynamics of social television. Slavin suggests that the laugh track evolved to provide the audience, alone at home, with a sense that others around them were laughing, and that limbic resonance explains the need for that laughing audience.
Limbic regulation:
Limbic regulation, mood contagion or emotional contagion is the effect of contact with other people upon the development and stability of personality and mood.
Limbic regulation:
Subsequent use and definitions of the term. In Living a Connected Life (2003), Dr. Kathleen Brehony looks at recent brain research which shows the importance of proximity to others in our development. "Especially in infancy, but throughout our lives, our physical bodies are influencing and being influenced by others with whom we feel a connection. Scientists call this limbic regulation." Brehony goes on to describe the parallels between the "protest/despair" cycles of an abandoned puppy and human development. Mammals have developed a tendency to experience distraction, anxiety and measurable levels of stress in response to separation from their care-givers and companions, precisely because such separation has historically constituted a threat to their survival. As anyone who has owned a puppy can attest, when left alone it will cry, bark, howl, and seek to rejoin its human or canine companions. If these efforts are unsuccessful and the isolation is prolonged, it will sink into a state of dejection and despair. The marginal effectiveness of placing a ticking clock in the puppy's bed is based on a universal need in mammals to synchronize to the rhythms of their fellow creatures.
Limbic regulation:
Limbic resonance and limbic regulation are also referred to as "mood contagion" or "emotional contagion", as in the work of Sigal Barsade. Barsade and colleagues at the Yale School of Management build on research in social cognition, and find that some emotions, especially positive ones, are spread more easily than others through such "interpersonal limbic regulation". Author Daniel Goleman has explored similar terrain across several works: in Emotional Intelligence (1995), an international best seller; The Joy of Living, coauthored with Yongey Mingyur Rinpoche; and the Harvard Business Review on Breakthrough Leadership. In the latter book, Goleman considers the "open loop nature of the brain's limbic system", which depends on external sources to manage itself, and examines the implications of interpersonal limbic regulation and the science of moods on leadership. In Mindfully Green: A Personal and Spiritual Guide to Whole Earth Thinking (2003), author Stephanie Kaza defines the term as follows: "Limbic regulation is a mutual simultaneous exchange of body signals that unfolds between people who are deeply involved with each other, especially parents and children." She goes on to correlate love with limbic engagement and asserts that children raised with love learn and remember better than those who are abused. Kaza then proposes to "take this work a step further from a systems perspective, and imagine that a child learns through some sort of limbic regulation with nature".
Limbic revision:
Limbic revision is the therapeutic alteration of personality residing in the human limbic system of the brain.
Limbic revision:
Relation to affect regulation and limbic resonance. Dr. Allan Schore, of the UCLA David Geffen School of Medicine, has explored related ideas beginning with his book Affect Regulation and the Origin of the Self, published in 1994. Dr. Schore looks at the contribution of the limbic system to the preservation of the species, its role in forming social bonds with other members of the species, and intimate relations leading to reproduction. "It is said that natural selection favors characteristics that maximize an individual's contribution to the gene pool of succeeding generations. In humans this may entail not so much competitive and aggressive traits as an ability to enter into a positive affective relationship with a member of the opposite sex." In his subsequent book Affect Regulation and the Repair of the Self, Schore correlates the "interactive transfer of affect" between mother and infant, on the one hand, and in a therapeutic context on the other, and describes it as "intersubjectivity". He then goes on to explore what developmental neuropsychology can reveal about both types of interrelatedness.
Limbic revision:
In Integrative Medicine: Principles for Practice, authors Kligler and Lee state "The empathic therapist offers a form of affect regulation. The roots of empathy — Limbic resonance — are found in the early caregiver experiences, which shape the ways the child learns to experience, share, and communicate affects."
In popular culture:
Limbic Resonance is the title of the first episode of the Netflix series Sense8. The episode describes how eight very different people from across the globe start seeing and hearing things after inexplicably sharing a vision of a woman they have never met before. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nasal septum perforation**
Nasal septum perforation:
A nasal septum perforation is a medical condition in which the nasal septum, the bony/cartilaginous wall dividing the nasal cavities, develops a hole or fissure.
Nasal septum perforation:
This may be brought on directly, as in the case of nasal piercings, or indirectly, as by long-term topical drug application (including intranasal ethylphenidate, methamphetamine, cocaine, crushed prescription pills, or decongestant nasal sprays), chronic epistaxis, excessive nose picking, or as a complication of nasal surgery such as septoplasty or rhinoplasty. Much less common causes of perforated nasal septums include rare granulomatous inflammatory conditions like granulomatosis with polyangiitis. It has also been reported as a side effect of anti-angiogenesis drugs like bevacizumab.
Signs and symptoms:
A perforated septum can vary in size and location, and is usually found deep inside the nose. It may be asymptomatic, or cause a variety of signs and symptoms. Small perforations can cause a whistling noise when breathing. Larger perforations usually have more severe symptoms. These can be a combination of crusting, blood discharge, difficulty breathing, nasal pressure and discomfort. The closer the perforation is to the nostrils, the more likely it is to cause symptoms.
Cause:
Infective causes include syphilis, leprosy, and rhinoscleroma. Non-infective causes include cocaine abuse, an in situ foreign body, chronic use of topical nasal decongestants, methamphetamine, facial trauma, or may develop as a consequence of nasal surgery.
Treatment:
Septal perforations are managed with a multitude of options. The treatment often depends on the severity of symptoms and the size of the perforation. Generally speaking, anterior septal perforations are more bothersome and symptomatic. Posterior septal perforations, which mainly occur iatrogenically, are often managed with simple observation and are at times intended portions of skull base surgery. Septal perforations that are not bothersome can also be managed with simple observation. While no septal perforation will close spontaneously, observation is an appropriate form of management for the majority of septal perforations, which are unlikely to enlarge. For perforations that bleed or are painful, initial management should include humidification and the application of salves to the perforation edges to promote healing. Mucosalization of the perforation edges helps prevent pain and recurrent epistaxis, and the majority of septal perforations can be managed without surgery.
Treatment:
For perforations in which anosmia, or the loss of smell, and a persistent whistling are a concern, the use of a silicone septal button is a treatment option. These can be placed while the patient is awake and usually in the clinic setting. While complications of button insertion are minimal, the presence of the button can be bothersome to most patients.
Treatment:
For patients who desire definitive closure, surgery is the only option. Prior to determining candidacy for surgical closure, the etiology of the perforation must be determined. Often this requires a biopsy of the perforation to rule out autoimmune causes. If a known cause such as cocaine is the offending agent, it must be ensured that the patient is no longer using the irritant.
Treatment:
For those determined to be medically cleared for surgery, the anatomical location and size of the perforation must be determined. This is often done with a combination of a CT scan of the sinuses without contrast and an endoscopic evaluation by an Ear, Nose and Throat doctor. Once dimensions are obtained, the surgeon will decide if it is possible to close the perforation. Multiple approaches to access the septum have been described in the literature. While sublabial and midfacial degloving approaches have been described, the most popular today is the rhinoplasty approach. This can include both open and closed methods. The open method results in a scar on the columella; however, it allows more visibility for the surgeon. The closed method uses an incision entirely inside the nose. The concept behind closure is to bring together the edges of mucosa on each side of the perforation with minimal tension. An interposition graft is also often used. The interposition graft provides extended stability and also structure to the area of the perforation. Classically, a graft from the scalp utilizing temporalis fascia was used. Kridel et al. first described the use of acellular dermis, so that no further incisions are required; they reported an excellent closure rate of over 90 percent. Overall perforation closure rates are variable and often determined by the skill of the surgeon and the technique used. Often surgeons who claim a high rate of closure choose perforations that are easier to close. An open rhinoplasty approach also allows better access to the nose to repair any concurrent nasal deformities, such as saddle nose deformity, that occur with a septal perforation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ZooMS**
ZooMS:
Zooarchaeology by mass spectrometry, commonly referred to by the abbreviation ZooMS, is a scientific method that identifies animal species by means of characteristic peptide sequences in the protein collagen. ZooMS is the most common archaeological application of peptide mass fingerprinting (PMF) and can be used for species identification of bones, teeth, skin and antler. It is commonly used to identify objects that cannot be identified morphologically. In an archaeological context this usually means that the object is too fragmented or that it has been shaped into an artefact. Archaeologists use these species identifications to study, among other things, past environments, diet, and raw material selection for the production of tools.
Developmental history:
ZooMS was first published in 2009 by a team of researchers from the University of York, but the term was coined later in a publication in 2010. The original aim of ZooMS was to distinguish between sheep and goat. The bones of these two closely related species are difficult to distinguish, especially when fragmented, yet the difference between these two common domesticates is very important for our understanding of past husbandry practices.
Developmental history:
Most of the method development following the initial publication of ZooMS has focused on the extraction of collagen from the archaeological material. In the original protocol, acid was used to dissolve the bone’s mineral matrix and free up the collagen. In 2011 an alternative extraction method was published that used an ammonium bicarbonate buffer to solubilise the collagen without dissolving the mineral matrix. In contrast to the acid protocol, the ammonium bicarbonate protocol does not affect the size and mass of the sample, making it a much less destructive method compared to the original protocol. In fact, the ammonium bicarbonate protocol was proposed as a non-destructive protocol for ZooMS, but in practice destructive samples are still taken for this protocol. Submerging a sample in ammonium bicarbonate does chemically alter the sample, which is why current practice continues to take a destructive sample.
Developmental history:
Non-destructive sampling protocols. Although the ammonium bicarbonate protocol should not be considered a non-destructive method, it was followed by more ‘true’ non-destructive methods. The first of these was the eraser protocol, first tested on parchment, but later also applied to bone. The eraser protocol is performed by rubbing a PVC eraser on a piece of parchment or bone. The friction generates triboelectric forces, which cause small particles of the sample to cling to the eraser waste. From the eraser waste, collagen can then be extracted and analysed. The eraser protocol was found to work relatively well for parchment, but it is less effective on bone. Additionally, it leaves microscopic traces on the bone surface, which appear very similar to use wear traces and could be an issue for use wear analysis. A second non-destructive protocol is the plastic bag protocol, first published in 2019. It is based on the idea that the normal friction between an object and the plastic bags commonly used for storing archaeological objects might be sufficient to extract enough material for ZooMS analysis.
Developmental history:
A third protocol uses the same triboelectric principle. However, instead of using an eraser, this microgrid protocol employs a fine polishing film to remove very small amounts of material from a sample. The last non-destructive protocol that has been published for ZooMS is the membrane box protocol. The membrane box protocol is based on contact electrification, which is the generation of electrostatic forces due to small localised differences in charge between two objects. These electrostatic forces can be large enough for material transfer between two surfaces. Most of these protocols have only been published recently and their respective advantages and disadvantages have not yet been tested against each other. It is therefore not yet clear how reliable these methods are and what level of preservation of the samples is required for them to work.
Developmental history:
Reference biomarkers. Apart from non-destructive sampling, a second area of method development has been the expansion of reference biomarkers. To identify a species using ZooMS, a set of diagnostic biomarkers is used. These biomarkers correspond to particular fragments of the species’ collagen protein. The set of known biomarkers at the time of ZooMS’ original publication was relatively limited, but recent publications have been expanding this list. A regularly updated list of published biomarkers is maintained by the University of York.
Principle of the method:
ZooMS identifies species based on differences in the amino acid composition of the collagen protein. The amino acid sequence of a species’ collagen protein is determined by its DNA and as a result like DNA, the amino acid sequence reflects a species’ evolutionary history. The greater the evolutionary distance between two species, the more different their collagen proteins will be. ZooMS typically can identify a sample up to genus level, though in some cases the identification can be more or less specific. A good understanding of the archaeological context of the sample can be used to further refine the resolution of the species identification.
Principle of the method:
Protocol example. A ZooMS protocol (Fig. 1) typically consists of an extraction, denaturation, digestion and filtration step, followed by mass spectrometric analysis. Various destructive and non-destructive extraction protocols have already been discussed in some detail above. The key is to extract the protein preserved in the sample and then bring it into solution, usually an ammonium bicarbonate buffer. Denaturation is done to unfold the proteins and make them more accessible for the enzymatic digestion; it is performed by heating the solubilised sample at around 65°C. Then an enzyme, trypsin, is added to the solution. Trypsin cleaves the protein after every arginine or lysine amino acid in its sequence, resulting in peptide fragments of predictable masses. After digestion, the sample is filtered with C18 filters to remove non-proteinaceous material, and the sample is then ready for mass spectrometric analysis, which for ZooMS generally means MALDI-TOF MS. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
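The "predictable masses" that trypsin digestion produces can be sketched in code. The following is a simplified in-silico digestion, assuming the standard cleave-after-K/R-but-not-before-P rule; it ignores missed cleavages and collagen modifications such as proline hydroxylation, which real ZooMS biomarkers depend on, and the sequence is a toy Gly-X-Y fragment, not a real biomarker:

```python
# In-silico tryptic digestion: a simplified sketch of why ZooMS peptide
# masses are predictable. Trypsin cleaves after K or R (not before P).
# Monoisotopic residue masses in daltons; hydroxyproline etc. ignored.
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "L": 113.08406, "I": 113.08406,
    "N": 114.04293, "D": 115.02694, "Q": 128.05858, "K": 128.09496,
    "E": 129.04259, "M": 131.04049, "H": 137.05891, "F": 147.06841,
    "R": 156.10111, "Y": 163.06333, "W": 186.07931, "C": 103.00919,
}
WATER = 18.01056  # mass of H2O added to each free peptide

def digest(sequence):
    """Split a protein sequence after every K/R not followed by P."""
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        if aa in "KR" and not (i + 1 < len(sequence) and sequence[i + 1] == "P"):
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])
    return peptides

def peptide_mass(peptide):
    return WATER + sum(RESIDUE_MASS[aa] for aa in peptide)

# Toy collagen-like fragment (Gly-X-Y repeats), not a published biomarker:
seq = "GPPGAKGPAGERGPK"
for pep in digest(seq):
    print(pep, round(peptide_mass(pep), 4))
```

Comparing such computed masses against the peaks of a MALDI-TOF spectrum is, in essence, how a peptide mass fingerprint identifies the source species.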
**Directory (OpenVMS command)**
Directory (OpenVMS command):
In computer software, specifically the DCL command-line interface of the OpenVMS operating system, the DIRECTORY command (often abbreviated as DIR) is used to list the files inside a directory. It is analogous to the DOS dir and Unix ls commands. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pairs trade**
Pairs trade:
A pairs trade or pair trading is a market-neutral trading strategy enabling traders to profit from virtually any market conditions: uptrend, downtrend, or sideways movement. This strategy is categorized as a statistical arbitrage and convergence trading strategy. Pair trading was pioneered by Gerry Bamberger and later led by Nunzio Tartaglia's quantitative group at Morgan Stanley in the 1980s. The strategy monitors the performance of two historically correlated securities. When the correlation between the two securities temporarily weakens, i.e. one stock moves up while the other moves down, the pairs trade would be to short the outperforming stock and go long on the underperforming one, betting that the "spread" between the two would eventually converge. The divergence within a pair can be caused by temporary supply/demand changes, large buy/sell orders for one security, reaction to important news about one of the companies, and so on.
Pairs trade:
Pairs trading strategy demands good position sizing, market timing, and decision making skill. Although the strategy does not have much downside risk, there is a scarcity of opportunities, and, for profiting, the trader must be one of the first to capitalize on the opportunity.
A notable pairs trader was hedge fund Long-Term Capital Management; see Dual-listed companies.
Model-based pairs trading:
While it is commonly agreed that individual stock prices are difficult to forecast, there is evidence suggesting that it may be possible to forecast the price—the spread series—of certain stock portfolios. A common way to attempt this is by constructing the portfolio such that the spread series is a stationary process. To achieve spread stationarity in the context of pairs trading, where the portfolios consist of only two stocks, one can attempt to find a cointegrating relationship between the two stock price series. Temporary deviations from this relationship are assumed to close soon, and forecasts are made in the direction opposite to the deviation. Regardless of how the portfolio is constructed, if the spread series is a stationary process, then it can be modeled, and subsequently forecast, using techniques of time series analysis. Among those suitable for pairs trading are Ornstein–Uhlenbeck models, autoregressive moving average (ARMA) models and (vector) error correction models. Forecastability of the portfolio spread series is useful for traders because: (1) the spread can be directly traded by buying and selling the stocks in the portfolio, and (2) the forecast and its error bounds (given by the model) yield an estimate of the return and risk associated with the trade. The success of pairs trading depends heavily on the modeling and forecasting of the spread time series. Comprehensive empirical studies on pairs trading have investigated its profitability over the long term in the US market using the distance method, co-integration, and copulas. They have found that the distance and co-integration methods result in significant alphas and similar performance, but their profits have decreased over time. Copula pairs trading strategies result in more stable but smaller profits.
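The spread-construction step described above can be sketched with synthetic data. In this illustration two price series share a common random walk, a hedge ratio is estimated by ordinary least squares, and z-score thresholds generate signals; the ±2 entry threshold and the OLS hedge ratio are illustrative assumptions, not a tested strategy:

```python
# A minimal sketch of spread construction and z-score signals for a
# pairs trade, on synthetic cointegrated data. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Two cointegrated price series: a shared random walk plus stationary noise.
common = np.cumsum(rng.normal(0, 1, 500)) + 100
a = common + rng.normal(0, 0.5, 500)
b = 0.8 * common + rng.normal(0, 0.5, 500) + 20

# Hedge ratio from an OLS fit of a on b; the spread a - beta*b
# should then be (approximately) stationary.
beta = np.polyfit(b, a, 1)[0]
spread = a - beta * b
z = (spread - spread.mean()) / spread.std()

# Signal: short the spread (short a, long b) when z > 2,
# long the spread when z < -2, flat otherwise.
signal = np.where(z > 2, -1, np.where(z < -2, 1, 0))
print("hedge ratio:", round(beta, 3), "extreme days:", int((np.abs(z) > 2).sum()))
```

In practice the mean and standard deviation would be estimated on a rolling in-sample window rather than the full series, since standardizing with the whole sample introduces look-ahead bias.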
Algorithmic pairs trading:
Today, pairs trading is often conducted using algorithmic trading strategies on an execution management system. These strategies are typically built around models that define the spread based on historical data mining and analysis. The algorithm monitors for deviations in price, automatically buying and selling to capitalize on market inefficiencies. The advantage in terms of reaction time allows traders to take advantage of tighter spreads.
Market neutrality:
The pairs trade helps to hedge sector- and market-risk. For example, if the whole market crashes, and the two stocks plummet along with it, the trade should result in a gain on the short position and a negating loss on the long position, leaving the profit close to zero in spite of the large move.
Pairs trade is a mean-reverting strategy, betting that the prices will eventually revert to their historical trends.
Pairs trade is a substantially self-funding strategy, since the short sale proceeds may be used to create the long position.
Drift and risk management:
Trading pairs is not a risk-free strategy. The difficulty comes when prices of the two securities begin to drift apart, i.e. the spread begins to trend instead of reverting to the original mean. Dealing with such adverse situations requires strict risk management rules, which have the trader exit an unprofitable trade as soon as the original setup—a bet for reversion to the mean—has been invalidated. This can be achieved, for example, by forecasting the spread and exiting at forecast error bounds. A common way to model, and forecast, the spread for risk management purposes is by using autoregressive moving average models.
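The exit rule just described (forecast the spread, exit at forecast error bounds) can be sketched with an autoregressive model. The AR(1) choice, the synthetic spread, and the 2-sigma band width below are illustrative assumptions:

```python
# Sketch of a risk-management exit rule: fit an AR(1) model to the
# spread, forecast one step ahead, and treat a realized spread outside
# the forecast's ±2-sigma error bounds as an invalidated setup.
import numpy as np

def ar1_fit(x):
    """Least-squares fit of x[t] = c + phi * x[t-1] + eps."""
    x0, x1 = x[:-1], x[1:]
    phi, c = np.polyfit(x0, x1, 1)       # slope first, then intercept
    resid = x1 - (c + phi * x0)
    return c, phi, resid.std()

# Synthetic mean-reverting spread with true phi = 0.9.
rng = np.random.default_rng(1)
spread = np.zeros(300)
for t in range(1, 300):
    spread[t] = 0.9 * spread[t - 1] + rng.normal(0, 1)

c, phi, sigma = ar1_fit(spread)
forecast = c + phi * spread[-1]           # one-step-ahead forecast
lo, hi = forecast - 2 * sigma, forecast + 2 * sigma
print(f"phi≈{phi:.2f}, next-step forecast {forecast:.2f} in [{lo:.2f}, {hi:.2f}]")
```

If the next observed spread falls outside [lo, hi], the mean-reversion bet has drifted beyond what the model explains, and the rule would close the position.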
Drift and risk management:
Some other risks include: In ‘market-neutral’ strategies, you are assuming that the CAPM model is valid and that beta is a correct estimate of systematic risk—if this is not the case, your hedge may not properly protect you in the event of a shift in the markets. Note there are other theories on how to estimate market risk—such as the Fama-French Factors.
Drift and risk management:
Measures of market risk, such as beta, are historical and could be very different in the future than they have been in the past.
If you are implementing a mean reversion strategy, you are assuming that the mean will remain the same in the future as it has been in the past. When the means change, it is sometimes referred to as ‘drift’.
A simplified example:
Pepsi (PEP) and Coca-Cola (KO) are different companies that create a similar product, soda pop. Historically, the two companies have shared similar dips and highs, depending on the soda pop market. If the price of Coca-Cola were to go up a significant amount while Pepsi stayed the same, a pairs trader would buy Pepsi stock and sell Coca-Cola stock, assuming that the two companies would later return to their historical balance point. If the price of Pepsi rose to close that gap in price, the trader would make money on the Pepsi stock, while if the price of Coca-Cola fell, they would make money on having shorted the Coca-Cola stock.
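With hypothetical numbers, the arithmetic of the example above looks like this; all prices are invented for illustration:

```python
# Hypothetical numbers for the Pepsi/Coca-Cola example: long the
# laggard, short the outperformer, profit as the spread closes.
def pair_pnl(long_entry, long_exit, short_entry, short_exit, shares=100):
    long_pnl = (long_exit - long_entry) * shares    # gain on the long leg
    short_pnl = (short_entry - short_exit) * shares  # gain on the short leg
    return long_pnl + short_pnl

# Suppose KO has run up to $64 while PEP sits at $50, and the pair
# later reverts: PEP rises to 53, KO falls to 61.
print(pair_pnl(long_entry=50, long_exit=53, short_entry=64, short_exit=61))
# -> 600: $300 from the PEP long plus $300 from the KO short.
```

The same function also shows the market-neutral property: if both stocks simply fall $10 with the spread unchanged, the long leg's loss and the short leg's gain cancel to zero.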
A simplified example:
The reason for the deviated stock to come back to original value is itself an assumption. It is assumed that the pair will have similar business performance as in the past during the holding period of the stock. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MTERF1**
MTERF1:
Mitochondrial transcription termination factor 1, also known as MTERF1, is a protein which in humans is encoded by the MTERF gene. This gene encodes a mitochondrial transcription termination factor. This protein participates in attenuating transcription from the mitochondrial genome; this attenuation allows higher levels of expression of 16S ribosomal RNA relative to the tRNA gene downstream. The product of this gene has three leucine zipper motifs bracketed by two basic domains that are all required for DNA binding. There is evidence that, for this protein, the zippers participate in intramolecular interactions that establish the three-dimensional structure required for DNA binding. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dead Sea salt**
Dead Sea salt:
Dead Sea salt refers to salt and other mineral deposits extracted or taken from the Dead Sea. The composition of this material differs significantly from oceanic salt.
History:
Dead Sea salt was used by the peoples of Ancient Egypt and it has been utilized in various unguents, skin creams, and soaps since then.
Mineral composition:
The Dead Sea's mineral composition varies with season, rainfall, depth of deposit, and ambient temperature. Most oceanic salt is approximately 85 wt.% sodium chloride (the same salt as table salt) while Dead Sea salt is only 30.5 wt.% of this, with the remainder composed of other dried minerals and salts. The concentrations of the major ions present in the Dead Sea water are given in the following table: The chemical composition of the crystallized Dead Sea salts does not necessarily correspond to the results presented in this table because of composition changes due to the process of fractional crystallization. The main detritic minerals present in the Dead Sea mud were carried by runoff streams flowing into the Dead Sea. They constituted large mud deposits intermixed with salt layers during the Holocene era. Their elemental composition expressed as equivalent oxides (except for Cl⁻ and Br⁻) is given here below: Except for chloride and bromide, the results of the elemental composition of the Dead Sea mud given here above are presented as equivalent oxides for the sake of convenience. To illustrate this chemical convention, the neutral sodium sulfate (Na2SO4) is reported here as basic sodium oxide (Na2O) and acidic sulfur trioxide (SO3), neither of which can naturally occur in these free forms in this mud. However, one will note that the elemental composition given here above is incomplete, as a major component is missing from this table: carbon dioxide (CO2), accounting for the significant carbonate fraction present in this mud.
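The equivalent-oxide convention mentioned above is pure bookkeeping arithmetic, which a short sketch can make concrete; molar masses are rounded and the 100 g figure is an arbitrary illustration:

```python
# Illustration of the "equivalent oxide" reporting convention: a mass
# of neutral Na2SO4 is re-expressed as the masses of Na2O and SO3 that
# contain the same sodium and sulfur. Molar masses are rounded.
M = {"Na": 22.99, "S": 32.07, "O": 16.00}

def molar_mass(formula):  # formula given as {element: atom count}
    return sum(M[el] * n for el, n in formula.items())

na2so4 = molar_mass({"Na": 2, "S": 1, "O": 4})   # ~142.05 g/mol
na2o   = molar_mass({"Na": 2, "O": 1})           # ~61.98 g/mol
so3    = molar_mass({"S": 1, "O": 3})            # ~80.07 g/mol

grams = 100.0                     # arbitrary sample of Na2SO4
moles = grams / na2so4
print(f"{grams} g Na2SO4 -> {moles * na2o:.2f} g Na2O + {moles * so3:.2f} g SO3")
# The two oxide masses sum back to the original mass: nothing is lost,
# only the reporting convention changes.
```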
Therapeutic benefits:
Dead Sea salts have been claimed to treat the following conditions. Rheumatologic conditions: rheumatoid arthritis, psoriatic arthritis, and osteoarthritis can be treated with balneotherapy. The minerals are absorbed while soaking, stimulating blood circulation.
Therapeutic benefits:
Common skin ailments: skin disorders such as acne and psoriasis may be relieved by regularly soaking the affected area in water with added Dead Sea salt. The National Psoriasis Foundation recommends the Dead Sea and Dead Sea salts as effective treatments for psoriasis. The high concentration of magnesium in Dead Sea salt may be helpful in improving skin hydration and reducing inflammation, although Epsom salt is a much less expensive salt that also contains high amounts of magnesium and therefore may be equally useful for this purpose.
Therapeutic benefits:
Allergies: the high concentration of bromide and magnesium in Dead Sea salt may help relieve allergic reactions of the skin by reducing inflammation.
Skin ageing: Dead Sea salt may reduce the depth of skin wrinkling, a form of skin ageing. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Video interlude**
Video interlude:
A video interlude is an interlude during a performance that shows a video. Video interludes are often played in concerts, showing a music video (often made specifically for the show) that usually features the artist, while the artist takes a break or makes a costume change.
Video interludes have been used by Madonna since at least 1990. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**College circuit**
College circuit:
College circuit is a form of motion picture distribution where old films as well as new ones are shown on college campuses, usually in the evening. The selections range from art house fare to wide release films and cult classics (also see midnight movies for a similar practice).
Origins:
Beginning in the 1950s and 1960s, classes in cinema theory and history began to be taught at colleges across the United States. To facilitate the growing interest in film, prints were screened for a reduced price for students. Several movies have been salvaged thanks to this practice, including most notably Citizen Kane.
Criticisms:
Some of the movies popular on college circuits throughout the years have been either controversial or of specious artistic merit. The content of the movies often appeals to college-age sensibilities. Critics like Roger Ebert have expressed suspicion of such films when deemed "artistic", charging instead that they mislead by presenting exploitative material (such as sex, violence, and drug use) by means that are more aesthetically pleasing to those educated in cinema studies, and therefore more acceptable to an "intellectual audience". This is an argument sometimes leveled against Blue Velvet or the oeuvre of Ken Russell. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Redux (JavaScript library)**
Redux (JavaScript library):
Redux is an open-source JavaScript library for managing and centralizing application state. It is most commonly used with libraries such as React or Angular for building user interfaces. Similar to (and inspired by) Facebook's Flux architecture, it was created by Dan Abramov and Andrew Clark.
Since mid-2016, the primary maintainers are Mark Erikson and Tim Dorr.
Description:
Redux is a small library with a simple, limited API designed to be a predictable container for application state. It operates in a fashion similar to a reducing function, a functional programming concept.
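Redux itself is a JavaScript library, but the reducing-function idea it is built on is language-agnostic. The following sketch restates the pattern in Python using the standard library's reduce; the action shapes and reducer name are illustrative, not the Redux API:

```python
# A language-agnostic sketch of the Redux pattern: application state is
# the result of folding (reducing) a sequence of actions through a pure
# reducer function. Names here are illustrative, not Redux's actual API.
from functools import reduce

def counter_reducer(state, action):
    """Pure function: (state, action) -> new state. Never mutates state."""
    if action["type"] == "INCREMENT":
        return {"count": state["count"] + 1}
    if action["type"] == "DECREMENT":
        return {"count": state["count"] - 1}
    return state  # unknown actions leave the state unchanged

initial_state = {"count": 0}
actions = [{"type": "INCREMENT"}, {"type": "INCREMENT"}, {"type": "DECREMENT"}]

# The store's current state is literally a reduce over the action history.
state = reduce(counter_reducer, actions, initial_state)
print(state)  # {'count': 1}
```

Because the reducer is pure, replaying the same action history always reproduces the same state, which is what makes Redux-style debugging features such as "time travel" possible.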
History:
Redux was created by Dan Abramov and Andrew Clark in 2015. Abramov began writing the first Redux implementation while preparing for a conference talk at React Europe on hot reloading. Abramov remarks, "I was trying to make a proof of concept of Flux where I could change the logic. And it would let me time travel. And it would let me reapply the future actions on the code change." Abramov was struck by the similarity of the Flux pattern with a reducing function. "I was thinking about Flux as a reduce operation over time... your stores, they accumulate state in response to these actions. I was thinking of taking this further. What if your Flux store was not a store but a reducer function?" Abramov reached out to Andrew Clark (author of the Flux implementation Flummox) as a collaborator. Among other things, he credits Clark with making the Redux ecosystem of tools possible, helping to come up with a coherent API, and implementing extension points such as middleware and store enhancers. By mid-2016, Abramov had joined the React team and passed the primary maintainership on to Mark Erikson and Tim Dorr. In February 2019, useReducer was introduced as a React hook in the 16.8 release. It provides an API that is consistent with Redux, enabling developers to create Redux-like stores that are local to component states. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**InterVideo WinDVR**
InterVideo WinDVR:
InterVideo WinDVR is a commercial digital video recorder (DVR) software package for Windows operating systems. It allows PCs to work as a TV set and a DVR at the same time, using a hardware-based TV tuner card. It has an integrated electronic program guide (EPG) that is updated via the Internet.
InterVideo WinDVR:
Its direct competition came from CyberLink PowerVCR. In 2003 InterVideo released a replacement product named WinDVD Recorder 4.5, offering discounts to users upgrading from WinDVR 3 or WinDVD Player 4. However, WinDVD Recorder is not compatible with Windows 98SE or ME (only 2000 and XP are supported). This is the reason WinDVR continued to be sold, although without any further updates.
InterVideo WinDVR:
In 2006, InterVideo, the creator of WinDVD Recorder, was acquired by Corel Corporation. WinDVD Recorder has been discontinued, and no direct replacement has been announced. The last WinDVD Recorder version was 5.2.
Features:
The application can convert video from VHS tapes to DVD or video CD, and can capture screen shots from a program and save them as a bitmap image to a hard disk or other storage medium.
The EPG works with Decisionmark's TitanTV in the United States, Fast TV in Europe, and Sony IEPG in Japan.
It supports MPEG-1, MPEG-2, NTSC and PAL VCD, SVCD, and DVD formats.
The program displays video thumbnails of 16 channels at once, so viewers can scan what's on at a glance.
The time-shifting feature allows pausing of live TV, and creation of instant replay, or fast-forward through commercials with InterVideo Home Theater. The software also includes support for Teletext, a television information service in Europe.
WinDVD Recorder also includes the same functions as the product WinDVD Player on which it is based: battery life extender, hyper-threading technology, Movie Encyclopedia, aspect ratio correction, time-stretching, DivX support, playlist creation, preset display settings, and PAL TruSpeed.
**Interspinous plane**
Interspinous plane:
The interspinous plane (Planum interspinale) is an anatomical transverse plane that passes through the anterior superior iliac spines. It separates the lateral lumbar region from the inguinal region and the umbilical region from the pubic region.
**Tautology (rule of inference)**
Tautology (rule of inference):
In propositional logic, tautology is either of two commonly used rules of replacement. The rules are used to eliminate redundancy in disjunctions and conjunctions when they occur in logical proofs. They are: the principle of idempotency of disjunction, P∨P ⇔ P, and the principle of idempotency of conjunction, P∧P ⇔ P, where "⇔" is a metalogical symbol representing "can be replaced in a logical proof with."
Formal notation:
Theorems are those logical formulas ϕ where ⊢ϕ is the conclusion of a valid proof, while the equivalent semantic consequence ⊨ϕ indicates a tautology.
Formal notation:
The tautology rule may be expressed as a sequent: P∨P ⊢ P and P∧P ⊢ P, where ⊢ is a metalogical symbol meaning that P is a syntactic consequence of P∨P in the one case, and of P∧P in the other, in some logical system; or as a rule of inference: P∨P ∴ P and P∧P ∴ P, where the rule is that wherever an instance of "P∨P" or "P∧P" appears on a line of a proof, it can be replaced with "P"; or as the statement of a truth-functional tautology or theorem of propositional logic. The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as: (P∨P)→P and (P∧P)→P, where P is a proposition expressed in some formal system.
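Because both rules are truth-functional and involve only a single propositional variable, they can be verified exhaustively with a two-row truth table. A minimal Python sketch (using Python's `or`/`and` as the connectives ∨ and ∧):

```python
# Exhaustive check of the idempotency laws P∨P ⇔ P and P∧P ⇔ P.
# Each law mentions only one variable, so the whole truth table
# has just two rows: P = False and P = True.
for P in (False, True):
    assert (P or P) == P    # idempotency of disjunction
    assert (P and P) == P   # idempotency of conjunction
print("P∨P ⇔ P and P∧P ⇔ P hold for every truth value")
```

Since the two sides agree on every row, replacing "P∨P" or "P∧P" with "P" preserves truth value in any proof, which is exactly what the replacement rule licenses.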
**Soda bread**
Soda bread:
Soda bread is a variety of quick bread traditionally made in a variety of cuisines in which sodium bicarbonate (otherwise known as "baking soda", or in Ireland, "bread soda") is used as a leavening agent instead of the traditional yeast. The ingredients of traditional soda bread are flour, baking soda, salt, and buttermilk. The buttermilk in the dough contains lactic acid, which reacts with the baking soda to form tiny bubbles of carbon dioxide. Other ingredients can be added, such as butter, egg, raisins, or nuts. An advantage of quick breads is their ability to be prepared quickly and reliably, without requiring the time-consuming skilled labor and temperature control needed for traditional yeast breads.
Preparation:
Soda bread is made with coarse flour (either white or wholemeal), or a mix of the two depending on the recipe. High-protein flour is not needed for this bread, as the texture is described as being "moist and crumbly". Other whole grains (such as rolled oats) may be added to create different varieties. This bread does not have to be kneaded, and bakers caution that kneading can toughen the dough. Buttermilk or sour milk is traditionally the liquid ingredient due to its reaction with the soda. Some recipes may add olive oil or eggs, or sweeteners like molasses, sugar, treacle, or honey, but these are not part of the basic recipe.
Origin:
Ireland: Traditional Irish bread was historically cooked on a griddle as flatbread because the domestic flours did not have the properties needed to rise effectively when combined with yeast. Baking soda offered an alternative, but its popularity declined for a time when imported high-gluten flours became available. Brown soda bread (served with smoked salmon) reappeared on luxury hotel menus in the 1960s. Modern varieties can be found at Irish cafes and bakeries, some made with Guinness, treacle, walnuts, and herbs, but the sweetened version with caraway and raisins is rarely seen anymore. Soda bread made with raisins is colloquially called "Spotted Dog" or "Spotted Dick".

In Ireland, the flour is typically made from soft wheat, so soda bread is best made with a cake or pastry flour (made from soft wheat), which has lower levels of gluten than a bread flour. In some recipes, the buttermilk is replaced by live yogurt or even stout. Because the leavening action starts immediately (compared to the time taken for yeast bread to rise), bakers recommend the minimum amount of mixing of the ingredients before baking; the dough should not be kneaded.

Various forms of soda bread are popular throughout Ireland. Soda breads are made using wholemeal, white flour, or both. In Ulster, the wholemeal variety is usually known as wheaten bread and is normally sweetened, while the term "soda bread" is restricted to the white savoury form. In the southern provinces of Ireland, the wholemeal variety is usually known as brown bread and is almost identical to the Ulster wheaten. In some parts of Fermanagh, the white flour form of the bread is described as fadge.

The "griddle cakes" or "griddle bread" (soda farls in Ulster) take a more rounded shape and have a cross cut in the top to allow the bread to expand. The griddle cake or farl is a more flattened type of bread. It is cooked on a griddle, allowing it to take a flatter shape, and it is split into four sections.
The soda farl is one of the distinguishing elements of the Ulster fry, where it is served alongside potato bread, also in farl form.
Origin:
Scotland: In Scotland, varieties of soda breads and griddle sodas include bannocks and farls (Scots: fardel, "a fourth"), "soda scones", or "soda farls", using baking powder or baking soda as a leavening agent, giving the food a light and airy texture.

Bannocks are flat cakes of barley or oatmeal dough formed into a round or oval shape, then cooked on a griddle (Scots: girdle). The most authentic versions are unleavened, but from the early 19th century bannocks have been made using baking powder, or a combination of baking soda and buttermilk or clabbered milk. Before the 19th century, bannocks were cooked on a bannock stone (Scots: stane), a large, flat, rounded piece of sandstone placed directly onto a fire and used as a cooking surface. Varieties of bannock include Selkirk bannocks, beremeal bannocks, Michaelmas bannock, Yetholm bannock, and Yule bannock.

The traditional soda farl is used in the full Scottish breakfast along with the potato scone (Scots: tattie scone).
Origin:
Serbia: In Serbian tradition, soda bread is prepared by various rules and rituals. A coin is often put into the dough during the kneading; other small objects may also be inserted. At the beginning of Christmas dinner, the česnica (the Serbian Christmas loaf) is rotated three times counter-clockwise, before being broken among the family members. The person who finds the coin in their piece of the bread will supposedly be exceptionally lucky in the coming year. Before baking, the upper surface of the loaf may be inscribed with various symbols, such as a Christogram, or stars, circles, and impressions of keys or combs.
Origin:
United States of America: During the early years of European settlement of the Americas, settlers used soda or pearl ash, more commonly known as potash (pot ash) or potassium carbonate, as a leavening agent (the forerunner of baking soda) in quick breads. By 1824, The Virginia Housewife by Mary Randolph had been published, containing a recipe for Soda Cake. In 1846, two American bakers, John Dwight and Austin Church, established the first factory in the United States to produce baking soda from sodium carbonate and carbon dioxide.
Origin:
Modern American versions of Irish soda bread often include raisins or currants, and caraway seeds.
**Dream argument**
Dream argument:
The dream argument is the postulation that the act of dreaming provides preliminary evidence that the senses we trust to distinguish reality from illusion should not be fully trusted, and therefore, any state that is dependent on our senses should at the very least be carefully examined and rigorously tested to determine whether it is in fact reality.
Synopsis:
While dreaming, one does not normally realize one is dreaming. On rarer occasions, the dream may be contained inside another dream, with the very act of realizing that one is dreaming itself being only a dream that one is not aware of having. This has led philosophers to wonder whether it is possible for one ever to be certain, at any given point in time, that one is not in fact dreaming, or whether indeed it could be possible for one to remain in a perpetual dream state and never experience the reality of wakefulness at all.

In Western philosophy this philosophical puzzle was referred to by Plato (Theaetetus 158b-d), Aristotle (Metaphysics 1011a6), and the Academic Skeptics. It is now best known from René Descartes' Meditations on First Philosophy. The dream argument has become one of the most prominent skeptical hypotheses.

In Eastern philosophy this type of argument is sometimes referred to as the "Zhuangzi paradox": He who dreams of drinking wine may weep when morning comes; he who dreams of weeping may in the morning go off to hunt. While he is dreaming he does not know it is a dream, and in his dream he may even try to interpret a dream. Only after he wakes does he know it was a dream. And someday there will be a great awakening when we know that this is all a great dream. Yet the stupid believe they are awake, busily and brightly assuming they understand things, calling this man ruler, that one herdsman—how dense! Confucius and you are both dreaming! And when I say you are dreaming, I am dreaming, too. Words like these will be labeled the Supreme Swindle. Yet, after ten thousand generations, a great sage may appear who will know their meaning, and it will still be as though he appeared with astonishing speed.
Synopsis:
The Yogachara philosopher Vasubandhu (4th to 5th century C.E.) referenced the argument in his "Twenty verses on appearance only." The dream argument came to feature prominently in Mahayana and Tibetan Buddhist philosophy. Some schools of thought (e.g., Dzogchen) consider perceived reality to be literally unreal. As Chögyal Namkhai Norbu puts it: "In a real sense, all the visions that we see in our lifetime are like a big dream..." In this context, the term 'visions' denotes not only visual perceptions, but also appearances perceived through all senses, including sounds, smells, tastes, and tactile sensations, and operations on perceived mental objects.
Simulated reality:
Dreaming provides a springboard for those who question whether our own reality may be an illusion. The ability of the mind to be tricked into believing a mentally generated world is the "real world" means at least one variety of simulated reality is a common, even nightly, event. Those who argue that the world is not simulated must concede that the mind, at least the sleeping mind, is not itself an entirely reliable mechanism for attempting to differentiate reality from illusion.
Simulated reality:
As Descartes puts it in the Meditations: "Whatever I have accepted until now as most true has come to me through my senses. But occasionally I have found that they have deceived me, and it is unwise to trust completely those who have deceived us even once."
Critical discussion:
In the past, philosophers John Locke and Thomas Hobbes separately attempted to refute Descartes's account of the dream argument. Locke claimed that one cannot experience pain in dreams. Various scientific studies conducted within the last few decades provided evidence against Locke's claim by concluding that pain in dreams can occur, though only on very rare occasions. Philosopher Ben Springett has said that Locke might respond to this by stating that the agonizing pain of stepping into a fire is not comparable to stepping into a fire in a dream. Hobbes claimed that dreams are susceptible to absurdity while the waking life is not.

Many contemporary philosophers have attempted to refute dream skepticism in detail (see, e.g., Stone (1984)). Ernest Sosa (2007) devoted a chapter of a monograph to the topic, in which he presented a new theory of dreaming and argued that his theory raises a new argument for skepticism, which he attempted to refute. In A Virtue Epistemology: Apt Belief and Reflective Knowledge, he states: "in dreaming we do not really believe; we only make-believe." Jonathan Ichikawa (2008) and Nathan Ballantyne & Ian Evans (2010) have offered critiques of Sosa's proposed solution. Ichikawa argued that as we cannot tell whether our beliefs in waking life are truly beliefs and not imaginings, as in a dream, we are still not able to tell whether we are awake or dreaming.
Critical discussion:
Norman Malcolm in his monograph Dreaming (published in 1959) elaborated on Wittgenstein's question as to whether it really mattered if people who tell dreams "really had these images while they slept, or whether it merely seems so to them on waking". He argues that the sentence "I am asleep" is a senseless form of words; that dreams cannot exist independently of the waking impression; and that skepticism based on dreaming "comes from confusing the historical and dream telling senses...[of]...the past tense" (page 120). In the chapter "Do I Know I Am Awake?", he argues that we do not have to say "I know that I am awake" simply because it would be absurd to deny that one is awake.
Critical discussion:
The dream hypothesis is also used to develop other philosophical concepts, such as Valberg's personal horizon: what this world would be internal to if this were all a dream.
**Crew**
Crew:
A crew is a body or a class of people who work at a common activity, generally in a structured or hierarchical organization. A location in which a crew works is called a crewyard or a workyard. The word has nautical resonances: the tasks involved in operating a ship, particularly a sailing ship, provide numerous specialities within a ship's crew, often organised with a chain of command. Traditional nautical usage strongly distinguishes officers from crew, though the two groups combined form the ship's company. Members of a crew are often referred to by the title crewman or crew-member.
Crew:
Crew also refers to the sport of rowing, where teams row competitively in racing shells.
**Total curvature**
Total curvature:
In the mathematical study of the differential geometry of curves, the total curvature of an immersed plane curve is the integral of curvature along the curve, taken with respect to arc length: ∫_a^b k(s) ds = 2πN.
Total curvature:
The total curvature of a closed curve is always an integer multiple of 2π, where N is called the index of the curve or turning number – it is the winding number of the unit tangent vector about the origin, or equivalently the degree of the map to the unit circle assigning to each point of the curve, the unit velocity vector at that point. This map is similar to the Gauss map for surfaces.
Comparison to surfaces:
This relationship between a local geometric invariant, the curvature, and a global topological invariant, the index, is characteristic of results in higher-dimensional Riemannian geometry such as the Gauss–Bonnet theorem.
Invariance:
According to the Whitney–Graustein theorem, the total curvature is invariant under a regular homotopy of a curve: it is the degree of the Gauss map. However, it is not invariant under homotopy: passing through a kink (cusp) changes the turning number by 1.
By contrast, winding number about a point is invariant under homotopies that do not pass through the point, and changes by 1 if one passes through the point.
Generalizations:
A finite generalization is that the exterior angles of a triangle, or more generally any simple polygon, add up to 360° = 2π radians, corresponding to a turning number of 1. More generally, polygonal chains that do not go back on themselves (no 180° angles) have well-defined total curvature, interpreting the curvature as point masses at the angles.
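The polygonal case above can be computed directly: sum the signed exterior angle at each vertex, where the exterior angle is the turn from the incoming edge direction to the outgoing one. A minimal Python sketch (the function name is illustrative, not from any particular library):

```python
import math

def total_curvature(points):
    """Total signed turning of a closed polygon, as a sum of exterior angles.

    `points` lists the vertices in order; the polygon closes implicitly.
    For a simple counter-clockwise polygon the result is 2*pi
    (turning number 1), matching the finite Gauss-Bonnet statement above.
    """
    n = len(points)
    total = 0.0
    for i in range(n):
        x0, y0 = points[i - 1]          # previous vertex
        x1, y1 = points[i]              # current vertex
        x2, y2 = points[(i + 1) % n]    # next vertex
        # Edge vectors into and out of vertex i.
        ux, uy = x1 - x0, y1 - y0
        vx, vy = x2 - x1, y2 - y1
        # Signed exterior angle via atan2(cross, dot), in (-pi, pi);
        # a 180-degree reversal (excluded by the text) would be undefined.
        total += math.atan2(ux * vy - uy * vx, ux * vx + uy * vy)
    return total

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(total_curvature(square) / (2 * math.pi))  # turning number, ≈ 1.0
```

Reversing the vertex order gives −2π (turning number −1), illustrating that the total curvature is signed.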
The total absolute curvature of a curve is defined in almost the same way as the total curvature, but using the absolute value of the curvature instead of the signed curvature.
Generalizations:
It is 2π for convex curves in the plane, and larger for non-convex curves. It can also be generalized to curves in higher-dimensional spaces by flattening out the tangent developable to γ into a plane and computing the total curvature of the resulting curve. That is, the total curvature of a curve in n-dimensional space is ∫ κ1(s) sgn κn−1(s) ds, where κn−1 is the last Frenet curvature (the torsion of the curve) and sgn is the signum function.
Generalizations:
The minimum total absolute curvature of any three-dimensional curve representing a given knot is an invariant of the knot. This invariant has the value 2π for the unknot, but by the Fáry–Milnor theorem it is at least 4π for any other knot.
**Pacchionian foramen**
Pacchionian foramen:
Pacchionian foramen may refer to: the incisura tentorii (also known as the tentorial notch), or the opening in the center of the diaphragma sellae through which the infundibulum passes. The Pacchionian foramen (incisura tentorii) is clinically important because some types of brain herniation, i.e. supratentorial and infratentorial herniation, occur through it.
Tentorium cerebelli:
The tentorium cerebelli divides the cranial cavity into two closed spaces which communicate with each other through the incisura tentorii. The larger anterior space includes the anterior and middle cranial fossae and lodges the cerebrum; the smaller posterior space, the posterior cranial fossa, contains the cerebellum, the pons, and the medulla.
**Glk (software)**
Glk (software):
Glk is a portable application programming interface (API) created by Andrew Plotkin for use by programs with a text interface; these programs mostly include interactive fiction (IF) interpreters for Z-machine, TADS, Glulx, and Hugo games, and IF games written in more obscure file formats such as those used by Level 9 Computing and Magnetic Scrolls.
The Glk API specification describes facilities for input, output, text formatting, graphics, sound, and file I/O.
Glk (software):
Glk does not describe a virtual machine. Glulx is a virtual machine designed to be implemented using the Glk functions, and Glulxe is an interpreter for Glulx. Interpreters for other virtual machines may use Glk while being unrelated to Glulx: for example, Nitfol is an interpreter for the Z-machine that uses Glk. The Glk API has many implementations, including GlkTerm, ScummVM's Glk, WindowsGlk, and XGlk. Implementations are available on the following platforms: Java, JavaScript, Macintosh, DOS, Unix, the X Window System, Microsoft Windows, and Pocket PC. The existence of the Glk API has made possible the creation of "universal translator" IF interpreters, programs such as Gargoyle and Spatterlight, which can run all popular IF formats and almost all of the more obscure ones. Such programs are very useful for newcomers to the medium who are unsure of which interpreter to choose, and to experienced players who may possess games in a variety of formats.
**Mic in track**
Mic in track:
Mic in track (as well as Line in track and Mixer in track) was the default name of a file created after recording with the program MusicMatch Jukebox. In the late 1990s and early 2000s, Mic in track files began appearing on file-sharing networks such as Napster, usually without the knowledge of their creators. Because of the unique name, voyeurs could easily search for the files and listen to audio of unknowing individuals performing karaoke or joking around with friends. Several websites are devoted to cataloging and featuring their favorite Mic in track files.
**Inclusion bodies**
Inclusion bodies:
Inclusion bodies are aggregates of specific types of protein found in neurons, in a number of tissue cells including red blood cells, and in bacteria, viruses, and plants. Inclusion bodies of aggregations of multiple proteins are also found in muscle cells affected by inclusion body myositis and hereditary inclusion body myopathy.

Inclusion bodies in neurons may be accumulated in the cytoplasm or nucleus, and are associated with many neurodegenerative diseases. Inclusion bodies in neurodegenerative diseases are aggregates of misfolded proteins (aggresomes) and are hallmarks of many of these diseases, including Lewy bodies in Lewy body dementias and Parkinson's disease, neuroserpin inclusion bodies called Collins bodies in familial encephalopathy with neuroserpin inclusion bodies, inclusion bodies in Huntington's disease, Papp–Lantos bodies in multiple system atrophy, and various inclusion bodies in frontotemporal dementia including Pick bodies. Bunina bodies in motor neurons are a core feature of amyotrophic lateral sclerosis.

Other usual cell inclusions are often temporary inclusions of accumulated proteins, fats, secretory granules or other insoluble components.

Inclusion bodies are found in bacteria as particles of aggregated protein. They have a higher density than many other cell components but are porous. They typically represent sites of viral multiplication in a bacterium or a eukaryotic cell and usually consist of viral capsid proteins.
Inclusion bodies:
Inclusion bodies contain very little host protein, ribosomal components or DNA/RNA fragments. They often almost exclusively contain the over-expressed protein, and their aggregation has been reported to be reversible. It has been suggested that inclusion bodies are dynamic structures formed by an unbalanced equilibrium between aggregated and soluble proteins of Escherichia coli. There is a growing body of information indicating that formation of inclusion bodies occurs as a result of intracellular accumulation of partially folded expressed proteins which aggregate through non-covalent hydrophobic or ionic interactions, or a combination of both.
Composition:
Inclusion bodies have a non-unit (single) lipid membrane. Protein inclusion bodies are classically thought to contain misfolded protein. However, this has been contested, as green fluorescent protein will sometimes fluoresce in inclusion bodies, which indicates some resemblance of the native structure and researchers have recovered folded protein from inclusion bodies.
Mechanism of formation:
When genes from one organism are expressed in another organism the resulting protein sometimes forms inclusion bodies. This is often true when large evolutionary distances are crossed: a cDNA isolated from Eukarya for example, and expressed as a recombinant gene in a prokaryote risks the formation of the inactive aggregates of protein known as inclusion bodies. While the cDNA may properly code for a translatable mRNA, the protein that results will emerge in a foreign microenvironment. This often has fatal effects, especially if the intent of cloning is to produce a biologically active protein. For example, eukaryotic systems for carbohydrate modification and membrane transport are not found in prokaryotes. The internal microenvironment of a prokaryotic cell (pH, osmolarity) may differ from that of the original source of the gene. Mechanisms for folding a protein may also be absent, and hydrophobic residues that normally would remain buried may be exposed and available for interaction with similar exposed sites on other ectopic proteins. Processing systems for the cleavage and removal of internal peptides would also be absent in bacteria. The initial attempts to clone insulin in a bacterium suffered all of these deficits. In addition, the fine controls that may keep the concentration of a protein low will also be missing in a prokaryotic cell, and overexpression can result in filling a cell with ectopic protein that, even if it were properly folded, would precipitate by saturating its environment.
In neurons:
Inclusion bodies are aggregates of protein associated with many neurodegenerative diseases, accumulated in the cytoplasm or nucleus of neurons. Inclusion bodies of aggregations of multiple proteins are also found in muscle cells affected by inclusion body myositis and hereditary inclusion body myopathy. Inclusion bodies in neurodegenerative diseases are aggregates of misfolded proteins (aggresomes) and are hallmarks of many of these diseases, including Lewy bodies in Lewy body dementias and Parkinson's disease, neuroserpin inclusion bodies called Collins bodies in familial encephalopathy with neuroserpin inclusion bodies, inclusion bodies in Huntington's disease, Papp–Lantos inclusions in multiple system atrophy, and various inclusion bodies in frontotemporal dementia including Pick bodies. Bunina bodies in motor neurons are a core feature of amyotrophic lateral sclerosis.
In red blood cells:
Normally a red blood cell does not contain inclusions in the cytoplasm. However, it may be seen because of certain hematologic disorders.
There are three kinds of red blood cell inclusions: developmental organelles, abnormal hemoglobin precipitation, and protozoan inclusions.
Developmental organelles:
Howell-Jolly bodies - small, round fragments of the nucleus resulting from karyorrhexis or nuclear disintegration of the late reticulocyte; they stain reddish-blue with Wright's stain.
Basophilic stipplings - these stipplings are either fine or coarse, deep blue to purple staining inclusions that appear in erythrocytes on a dried Wright's stain.
Pappenheimer bodies - are siderotic granules which are small, irregular, dark-staining granules that appear near the periphery of a young erythrocyte in a Wright stain.
Polychromatophilic red cells - young red cells that no longer have nucleus but still contain some RNA.
Cabot rings - ring-like structures that may appear in erythrocytes in megaloblastic anemia or in severe anemias, lead poisoning, and in dyserythropoiesis, in which erythrocytes are destroyed before being released from the bone marrow.
Abnormal hemoglobin precipitation:
Heinz bodies - round, refractile inclusions not visible on a Wright's stain film. They are best identified by supravital staining with basic dyes.
Hemoglobin H inclusions - in alpha thalassemia, greenish-blue inclusion bodies appear in many erythrocytes after four drops of blood are incubated with 0.5 mL of brilliant cresyl blue for 20 minutes at 37 °C.
Protozoan inclusions:
Malaria
Babesia
In white blood cells:
Inclusions of immunoglobulin called Russell bodies are found in atypical plasma cells. Russell bodies clump together in large numbers displacing the cell nucleus to the edge, and the cell is then called a Mott cell.
In viruses:
Examples of viral inclusion bodies in animals are:
Cytoplasmic eosinophilic (acidophilic) - Downie bodies in cowpox; Negri bodies in rabies; Guarnieri bodies in vaccinia and variola (smallpox); Paschen bodies in variola (smallpox); Bollinger bodies in fowlpox; molluscum bodies in Molluscum contagiosum; eosinophilic inclusion bodies in boid inclusion body disease.
Nuclear eosinophilic (acidophilic) - Cowdry bodies type A in Herpes simplex virus and Varicella zoster virus; Torres bodies in yellow fever; Cowdry bodies type B in polio and adenovirus.
Nuclear basophilic - Cowdry bodies type B in adenovirus; the "owl's eye appearance" in cytomegalovirus.
Both nuclear and cytoplasmic - Warthin–Finkeldey bodies in measles and HIV/AIDS.
Examples of viral inclusion bodies in plants include aggregations of virus particles (like those for Cucumber mosaic virus) and aggregations of viral proteins (like the cylindrical inclusions of potyviruses). Depending on the plant and the plant virus family, these inclusions can be found in epidermal cells, mesophyll cells, and stomatal cells when plant tissue is properly stained.
In bacteria:
Polyhydroxyalkanoates (PHA) are produced by bacteria as inclusion bodies. The size of PHA granules is limited in E. coli due to its small cell size. Inclusion bodies in bacterial cells are not as abundant intracellularly as in eukaryotic cells.
In bacteria:
Isolation of proteins: Between 70% and 80% of recombinant proteins expressed in E. coli are contained in inclusion bodies (i.e., protein aggregates). The purification of the expressed proteins from inclusion bodies usually requires two main steps: extraction of inclusion bodies from the bacteria, followed by solubilisation of the purified inclusion bodies. Solubilisation of inclusion bodies often involves treatment with denaturing agents, such as urea or guanidine chloride at high concentrations, to de-aggregate the collapsed proteins. Renaturation follows the treatment with denaturing agents and often consists of dialysis and/or the use of molecules that promote the refolding of denatured proteins (including chaotropic agents and chaperones).
Pseudo-inclusions:
Pseudo-inclusions are invaginations of the cytoplasm into the cell nuclei, which may give the appearance of intranuclear inclusions. They may appear in papillary thyroid carcinoma.
Diseases involving inclusion bodies:
Inclusion body diseases differ from amyloid diseases in that inclusion bodies are necessarily intracellular aggregates of protein, whereas amyloid can be intracellular or extracellular. Amyloid also necessitates protein polymerization, whereas inclusion bodies do not.
Preventing inclusion bodies in bacteria:
Inclusion bodies are often made of denatured aggregates of inactive proteins. Although the renaturation of inclusion bodies can sometimes lead to the solubilisation and recovery of active proteins, the process is still very empirical, uncertain, and of low efficiency. Several techniques have been developed over the years to prevent the formation of inclusion bodies. These techniques include:
The use of weaker promoters to slow down the rate of protein expression
The use of low-copy-number plasmids
The co-expression of chaperones (such as GroES-GroEL and DnaK-DnaJ-GrpE)
The use of specific E. coli strains (such as AD494 and Origami)
Fusing the target protein to a soluble partner
Lowering the expression temperature
**Anthrone**
Anthrone:
Anthrone is a tricyclic aromatic ketone. It is used for a common cellulose assay and in the colorimetric determination of carbohydrates. Derivatives of anthrone are used in pharmacy as laxatives. They stimulate the motion of the colon and reduce water reabsorption. Some anthrone derivatives can be extracted from a variety of plants, including Rhamnus frangula, Aloe ferox, Rheum officinale, and Cassia senna. Glycosides of anthrone are also found in high amounts in rhubarb leaves and, alongside concentrated amounts of oxalic acid, are the reason for the leaves being inedible.
Synthesis and reactions:
Anthrone can be prepared from anthraquinone by reduction with tin or copper. An alternative synthesis involves cyclization of o-benzylbenzoic acid induced with hydrogen fluoride.
Anthrone condenses with glyoxal to give, following dehydrogenation, acedianthrone, a useful octacyclic pigment.
Tautomer:
Anthrone is the more stable tautomer relative to the anthrol. The tautomeric equilibrium constant is estimated at 100 in aqueous solution, in favour of the ketone. For the two other isomeric anthrols, the tautomeric equilibrium is reversed.
**Savoury (dish)**
Savoury (dish):
A savoury is the final course of a traditional British formal meal, following the sweet pudding or dessert course. The savoury is designed to "clear the palate" before the port is served. It generally consists of salty and plain elements.
Typical savouries are: Scotch woodcock, Welsh rarebit, sardines on toast, angels on horseback, and devils on horseback. Savouries are often served on toast, fried bread or some kind of biscuit or cracker. In Eliza Acton's 1845 book Modern Cookery for Private Families, there is just one recipe for savouries, which appears to be a proto-croque monsieur, with a small footnote. In the twentieth century, however, entire books on the subject appeared, such as Good Savouries by Ambrose Heath (1934).
**TEDMED**
TEDMED:
TEDMED is an annual conference focusing on health and medicine, with a year-round web-based community. TEDMED is an independent event operating under license from the nonprofit TED conference.
Background:
As of 2014, TEDMED staff operates from Stamford, Connecticut.
Talks given at TEDMED combine "the nexus of health, information and technology" with "compelling personal stories" and "a glimpse into the future of healthcare." The intent of the conference has been described as "a gathering of geniuses" that brings together "some of the most innovative, thoughtful pioneers of healthcare technology, media, and entertainment into one big four-day 'dinner party' to learn from one another and mix people up from different disciplines and industries to solve big problems in healthcare."
History:
TEDMED was founded in 1998 by TED's founder Richard Wurman. TEDMED was inactive for a number of years, and in 2008 Wurman sold the rights to TEDMED to entrepreneur Marc Hodosh. Hodosh recreated TEDMED and launched its first conference under his guidance in San Diego in October 2009.
In January 2010, TED.com began including videos of TEDMED talks on the TED website. In October 2010, TEDMED was held in San Diego again and sold out for a second year, attracting notable healthcare leaders and Hollywood celebrities. In 2011, Jay Walker and a group of executives and investors purchased TEDMED from Hodosh for $16 million, with future additional payments of as much as $9 million. The conference was then moved to Washington, D.C. In November 2016, TEDMED was held in Palm Springs, California.
**F-test**
F-test:
An F-test is any statistical test in which the test statistic has an F-distribution under the null hypothesis. It is most often used when comparing statistical models that have been fitted to a data set, in order to identify the model that best fits the population from which the data were sampled. Exact "F-tests" mainly arise when the models have been fitted to the data using least squares. The name was coined by George W. Snedecor, in honour of Ronald Fisher. Fisher initially developed the statistic as the variance ratio in the 1920s.
Common examples:
Common examples of the use of F-tests include the study of the following cases: The hypothesis that the means of a given set of normally distributed populations, all having the same standard deviation, are equal. This is perhaps the best-known F-test, and plays an important role in the analysis of variance (ANOVA).
The hypothesis that a proposed regression model fits the data well. See Lack-of-fit sum of squares.
The hypothesis that a data set in a regression analysis follows the simpler of two proposed linear models that are nested within each other. In addition, some statistical procedures, such as Scheffé's method for multiple comparisons adjustment in linear models, also use F-tests.
F-test of the equality of two variances

The F-test is sensitive to non-normality. In the analysis of variance (ANOVA), alternative tests include Levene's test, Bartlett's test, and the Brown–Forsythe test. However, when any of these tests are conducted to test the underlying assumption of homoscedasticity (i.e. homogeneity of variance), as a preliminary step to testing for mean effects, there is an increase in the experiment-wise Type I error rate.
Formula and calculation:
Most F-tests arise by considering a decomposition of the variability in a collection of data in terms of sums of squares. The test statistic in an F-test is the ratio of two scaled sums of squares reflecting different sources of variability. These sums of squares are constructed so that the statistic tends to be greater when the null hypothesis is not true. In order for the statistic to follow the F-distribution under the null hypothesis, the sums of squares should be statistically independent, and each should follow a scaled χ²-distribution. The latter condition is guaranteed if the data values are independent and normally distributed with a common variance.
Multiple-comparison ANOVA problems

The F-test in one-way analysis of variance (ANOVA) is used to assess whether the expected values of a quantitative variable within several pre-defined groups differ from each other. For example, suppose that a medical trial compares four treatments. The ANOVA F-test can be used to assess whether any of the treatments are on average superior, or inferior, to the others versus the null hypothesis that all four treatments yield the same mean response. This is an example of an "omnibus" test, meaning that a single test is performed to detect any of several possible differences. Alternatively, we could carry out pairwise tests among the treatments (for instance, in the medical trial example with four treatments we could carry out six tests among pairs of treatments). The advantage of the ANOVA F-test is that we do not need to pre-specify which treatments are to be compared, and we do not need to adjust for making multiple comparisons. The disadvantage of the ANOVA F-test is that if we reject the null hypothesis, we do not know which treatments can be said to be significantly different from the others, nor, if the F-test is performed at level α, can we state that the treatment pair with the greatest mean difference is significantly different at level α.
The formula for the one-way ANOVA F-test statistic is

$$F = \frac{\text{explained variance}}{\text{unexplained variance}} = \frac{\text{between-group variability}}{\text{within-group variability}}.$$

The "explained variance", or "between-group variability", is

$$\sum_{i=1}^{K} n_i (\bar{Y}_{i\cdot} - \bar{Y})^2 / (K - 1),$$

where $\bar{Y}_{i\cdot}$ denotes the sample mean in the $i$-th group, $n_i$ is the number of observations in the $i$-th group, $\bar{Y}$ denotes the overall mean of the data, and $K$ denotes the number of groups.
The "unexplained variance", or "within-group variability", is

$$\sum_{i=1}^{K} \sum_{j=1}^{n_i} (Y_{ij} - \bar{Y}_{i\cdot})^2 / (N - K),$$

where $Y_{ij}$ is the $j$-th observation in the $i$-th out of $K$ groups and $N$ is the overall sample size. This F-statistic follows the F-distribution with degrees of freedom $d_1 = K - 1$ and $d_2 = N - K$ under the null hypothesis. The statistic will be large if the between-group variability is large relative to the within-group variability, which is unlikely to happen if the population means of the groups all have the same value.
Note that when there are only two groups for the one-way ANOVA F-test, $F = t^2$, where $t$ is the Student's t statistic.
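As an illustration, the one-way ANOVA F statistic can be computed directly from these formulas; the sketch below uses invented sample data and a hypothetical helper function:

```python
# Sketch: one-way ANOVA F statistic from the between/within-group formulas.
# The group data are invented for illustration.

def one_way_anova_f(groups):
    """Return (F, d1, d2) for a list of sample groups."""
    K = len(groups)                              # number of groups
    N = sum(len(g) for g in groups)              # overall sample size
    grand_mean = sum(sum(g) for g in groups) / N
    means = [sum(g) / len(g) for g in groups]    # per-group sample means

    # Between-group ("explained") variability, K - 1 degrees of freedom
    between = sum(len(g) * (m - grand_mean) ** 2
                  for g, m in zip(groups, means)) / (K - 1)

    # Within-group ("unexplained") variability, N - K degrees of freedom
    within = sum((y - m) ** 2
                 for g, m in zip(groups, means)
                 for y in g) / (N - K)

    return between / within, K - 1, N - K

F, d1, d2 = one_way_anova_f([[6.0, 8.0, 4.0, 5.0],
                             [8.0, 12.0, 9.0, 11.0],
                             [13.0, 9.0, 11.0, 8.0]])
# F ≈ 6.87 with (2, 9) degrees of freedom
```

The null hypothesis of equal group means would then be assessed by comparing F against the critical value of the F-distribution with (d1, d2) degrees of freedom.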
Regression problems

Consider two models, 1 and 2, where model 1 is 'nested' within model 2. Model 1 is the restricted model, and model 2 is the unrestricted one. That is, model 1 has p1 parameters, and model 2 has p2 parameters, where p1 < p2, and for any choice of parameters in model 1, the same regression curve can be achieved by some choice of the parameters of model 2.
One common context in this regard is that of deciding whether a model fits the data significantly better than does a naive model, in which the only explanatory term is the intercept term, so that all predicted values for the dependent variable are set equal to that variable's sample mean. The naive model is the restricted model, since the coefficients of all potential explanatory variables are restricted to equal zero.
Another common context is deciding whether there is a structural break in the data: here the restricted model uses all data in one regression, while the unrestricted model uses separate regressions for two different subsets of the data. This use of the F-test is known as the Chow test.
The model with more parameters will always be able to fit the data at least as well as the model with fewer parameters. Thus typically model 2 will give a better (i.e. lower error) fit to the data than model 1. But one often wants to determine whether model 2 gives a significantly better fit to the data. One approach to this problem is to use an F-test.
If there are $n$ data points to estimate the parameters of both models from, then one can calculate the F statistic, given by

$$F = \frac{(\mathrm{RSS}_1 - \mathrm{RSS}_2)/(p_2 - p_1)}{\mathrm{RSS}_2/(n - p_2)},$$

where $\mathrm{RSS}_i$ is the residual sum of squares of model $i$. If the regression model has been calculated with weights, then replace $\mathrm{RSS}_i$ with $\chi^2$, the weighted sum of squared residuals. Under the null hypothesis that model 2 does not provide a significantly better fit than model 1, $F$ will have an F distribution, with $(p_2 - p_1,\ n - p_2)$ degrees of freedom. The null hypothesis is rejected if the $F$ calculated from the data is greater than the critical value of the F-distribution for some desired false-rejection probability (e.g. 0.05). Since F is a monotone function of the likelihood ratio statistic, the F-test is a likelihood ratio test.
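A minimal sketch of this nested-model comparison, using an intercept-only restricted model (p1 = 1) against a straight-line fit (p2 = 2) on invented data:

```python
# Sketch: F-test for nested regression models on invented data.

def rss_intercept_only(y):
    """Residual sum of squares of the naive (intercept-only) model."""
    m = sum(y) / len(y)
    return sum((v - m) ** 2 for v in y)

def rss_line(x, y):
    """Residual sum of squares of an ordinary least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]

p1, p2, n = 1, 2, len(y)
rss1, rss2 = rss_intercept_only(y), rss_line(x, y)

# F = ((RSS1 - RSS2) / (p2 - p1)) / (RSS2 / (n - p2))
F = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))
# A large F relative to the F(p2 - p1, n - p2) critical value rejects
# the intercept-only model in favour of the line.
```

The unrestricted model always has the smaller residual sum of squares; the F-test asks whether the reduction is larger than chance would explain.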
**Level staff**
Level staff:
A level staff, also called levelling rod, is a graduated wooden or aluminium rod, used with a levelling instrument to determine the difference in height between points or heights of points above a vertical datum.
When used for stadiametric rangefinding, the level staff is called a stadia rod.
Rod construction and materials:
Levelling rods can be one piece, but many are sectional and can be shortened for storage and transport or lengthened for use. Aluminum rods may be shortened by telescoping sections inside each other, while wooden rod sections can be attached to each other with sliding connections or slip joints, or hinged to fold when not in use.
There are many types of rods, with names that identify the form of the graduations and other characteristics. Markings can be in imperial or metric units. Some rods are graduated on one side only while others are marked on both sides. If marked on both sides, the markings can be identical or can have imperial units on one side and metric on the other.
Reading a rod:
In the photograph on the right, both a metric (left) and imperial (right) levelling rod are seen. This is a two-sided aluminum rod, coated white with markings in contrasting colours. The imperial side has a bright yellow background.
The metric rod has major numbered graduations in meters and tenths of meters (e.g. 18 is 1.8 m - there is a tiny decimal point between the numbers). Between the major marks are either a pattern of squares and spaces in different colours or an E shape (or its mirror image) with horizontal components and spaces between of equal size. In both parts of the pattern, the squares, lines or spaces are precisely one centimetre high. When viewed through an instrument's telescope, the observer can visually interpolate a 1 cm mark to a tenth of its height, yielding a reading with precision in mm. Usually readings are recorded with millimetre precision. On this side of the rod, the colours of the markings alternate between red and black with each meter of length.
The imperial graduations are in feet (large red numbers), tenths of a foot (small black numbers) and hundredths of a foot (unnumbered marks or spaces between the marks). The tenths of a foot point is indicated by the top of the long mark with the upward sloped end. The point halfway between tenths of a foot marks is indicated by the bottom of a medium length black mark with a downward sloped end. Each mark or space is approximately 3 mm, yielding roughly the same accuracy as the metric rod.
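The arithmetic of a reading on each side can be sketched as follows (the readings themselves are invented for illustration):

```python
# Metric side: metres, whole centimetres, plus a visually interpolated
# tenth of a centimetre (i.e. millimetres).
metres, centimetres, tenths_of_cm = 1, 84, 6
metric_reading_m = metres + centimetres / 100 + tenths_of_cm / 1000
# 1.846 m, recorded to the nearest millimetre

# Imperial side: feet, tenths of a foot, hundredths of a foot.
feet, tenth_ft, hundredth_ft = 4, 3, 7
imperial_reading_ft = feet + tenth_ft / 10 + hundredth_ft / 100  # 4.37 ft

# One hundredth of a foot is about 3 mm, so the two sides give
# comparable precision.
hundredth_of_foot_mm = 0.3048 * 1000 / 100  # 3.048 mm
```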
Classes of rods:
Rods come in two classes: self-reading rods (sometimes called speaking rods) and target rods.
Self-reading rods are rods that are read by the person viewing the rod through the telescope of the instrument. The graduations are sufficiently clear to read with good accuracy. Target rods, on the other hand, are equipped with a target. The target is a round or oval plate marked in quarters in contrasting colours such as red and white in opposite quarters. A hole in the centre allows the instrument user to see the rod's scale. The target is adjusted by the rodman according to the instructions from the instrument man. When the target is set to align with the crosshairs of the instrument, the rodman records the level value. The target may have a vernier to allow fractional increments of the graduation to be read.
Digital levels electronically read a bar-coded scale on the staff. These instruments usually include data recording capability. The automation removes the requirement for the operator to read a scale and write down the value, and so reduces blunders. It may also compute and apply refraction and curvature corrections.
Topographer's rods:
Topographer's rods are special purpose rods used in topographical surveys. The rod has the zero mark at mid-height and the graduations increase in both directions away from the mid-height. In use, the rod is adjusted so that the zero point is level with the instrument (or the surveyor's eye if he is using a hand level for low-resolution work). When placed at any point where the level is to be read, the value seen is the height above or below the viewer's position.
An alternative topographer's rod has the graduations numbered upwards from the base.
**STEbus**
STEbus:
The STEbus (also called the IEEE-1000 bus) is a non-proprietary, processor-independent computer bus with 8 data lines and 20 address lines. It was popular for industrial control systems in the late 1980s and early 1990s, before the ubiquitous IBM PC came to dominate this market. STE stands for STandard Eurocard. Although no longer competitive in its original market, it is a valid choice for hobbyists wishing to make 'home brew' computer systems. The Z80 and probably the CMOS 65C02 are possible processors to use. The standardized bus allows hobbyists to interface to each other's designs.
Origins:
In the early 1980s, there were many proprietary bus systems, each with its own strengths and weaknesses. Most had grown in an ad-hoc manner, typically around a particular microprocessor. The S-100 bus is based on Intel 8080 signals, the STD Bus around Z80 signals, the SS-50 bus around the Motorola 6800, and the G64 bus around 6809 signals. This made it harder to interface other processors. Upgrading to a more powerful processor would subtly change the timings, and timing constraints were not always tightly specified; nor were electrical parameters and physical dimensions. They usually used edge connectors for the bus, which were vulnerable to dirt and vibration.
The VMEbus had provided a high-quality solution for high-performance 16-bit processors, using reliable DIN 41612 connectors and well-specified Eurocard board sizes and rack systems. However, these were too costly where an application only needed a modest 8-bit processor.
In the mid-1980s, the STEbus standard addressed these issues by specifying what is rather like a VMEbus simplified for 8-bit processors. The bus signals are sufficiently generic that they are easy for 8-bit processors to interface with. The board size was usually a single-height Eurocard (100 mm × 160 mm), but double-height boards (233 mm × 160 mm) were allowed as well.
The latter positioned the bus connector so that it could neatly merge into VME-bus systems.
IEEE Working Group P1000 initially considered simply repinning the STD Bus, replacing its card edge connector with the DIN 41612 connector, but decided instead to create a completely new high-performance 8-bit bus, more along the lines of the VMEbus and Futurebus.
The STEbus was designed to be manufacturer independent, processor independent, and have multimaster capability.
Maturity:
The STEbus was very successful in its day. It was given the official standard IEEE1000-1987.
Many processors were available on STEbus cards, across a range of price and performance. These boards included the Intel 8031, 8085, 8088 and 80188; the National Semiconductor 32008 and 32016; the Motorola 6809, 68000 and 68008; the Zilog Z80 and Z280; the Hitachi HD64180; and the Inmos Transputer. The STEbus is designed for 8-bit microprocessors. Processors that normally use a wider data bus (16-bit, etc.) can use the STEbus if the processor can handle data in byte-wide chunks, giving the slave as long as it needs to respond. The STEbus supported processors from the popular Z80 and the 6809 to the 68020. The only popular micro notably absent was the 6502, because it did not naturally support wait states while writing. The CMOS 65C02 did not have this shortcoming, but it was rarer and more expensive than the NMOS 6502 and Z80. The 6809 used cycle stretching.
Peripheral boards included prototyping boards, disc controllers, video cards, serial I/O, analogue and digital I/O.
The STEbus achieved its goal of providing a rack-mounting system robust enough for industrial use, with easily interchangeable boards and processor independence. Researchers describe STEbus systems as rugged, adaptable, and cost-effective.
Decline:
The STEbus market began to decline as the IBM PC made progress into industrial control systems. Customers opted for PC-based products as the software base was larger and cheaper. More programmers were familiar with the PC and did not have to learn new systems.
Memory costs fell, so there was less reason to have bus-based memory expansion when one could have plenty on the processor board. So despite the disadvantages, manufacturers created industrial PC systems and eventually dropped other bus systems. As time went on, PC systems did away with the need for card cages and backplanes by moving to the PC/104 format, where boards stack onto each other. While not as well designed as the STEbus, PC/104 is good enough for many applications. The major manufacturers from the STEbus's peak period now support it mostly for goodwill with old customers who bought a lot of product from them.
As of 2013, some manufacturers still support STEbus, G64, Multibus II, and other legacy bussed systems. The IEEE has withdrawn the standard, not because of any faults but because it is no longer active enough to update.
Physical format:
3U Eurocard - The most common size was the 100 x 160 mm Eurocard.
6U Eurocard - Rare, sometimes used in VMEbus hybrid boards
Connector:
DIN 41612, rows a and c, 0.1" pitch.
VME/STE hybrid boards have the STEbus and VMEbus sharing the VME P2 connector, VME signals on row b. For this reason, STEbus boards may not use row b for any purpose.
Pinout:
Active low signals indicated by asterisk.
GND: Ground reference voltage.
+5V: Powers most logic.
+12V and -12V: Primarily useful for RS232 buffer power. The +12V has been used for programming voltage generators. Both can be used in analogue circuitry, but note that these are primarily power rails for digital circuitry and as such they often have digital noise. Some decoupling or local regulation is recommended for analogue circuitry.
VSTBY: Standby voltage. Optional. This line is specified as 5V (+0 to +5%), at up to 1A. However, some boards have used this line for carrying a battery backup voltage to boards that supply or consume it. A 3.6V NiCad battery is a common source. The STEbus spec is not rigid about where this should be sourced from.
In practice, this means that most boards requiring backup power tend to play safe and have a battery on board, often with a link to allow them to supply or accept power from VSTBY. Hence you can end up with more batteries in your system than you need, and you must then take care that no more than one battery is driving VSTBY.
D0...7: Data bus. This is only 8-bits wide, but most I/O or memory-mapped peripherals are byte-oriented.
A0...19: Address bus. This allows up to 1 MByte of memory to be addressed. Current technology is such that processors requiring large amounts of memory have this on the processor board, so this is not a great limitation. I/O space is limited to 4K, to simplify I/O address decoding to a practical level. A single 74LS688 on each slave board can decode A11...A4 to locate I/O slave boards at any I/O address with 16-byte alignment.
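The comparator-based decode described above can be modelled as a sketch (the function and values here are illustrative, not taken from the standard):

```python
# Sketch of the 74LS688-style I/O address decode: the comparator matches
# address lines A11..A4 against an 8-bit jumper/switch setting, placing
# the slave at a 16-byte-aligned I/O address.

def io_slave_selected(addr, switches):
    """True when A11..A4 of the I/O address match the 8-bit setting."""
    io_addr = addr & 0x0FFF            # I/O space is 4K (A11..A0)
    return ((io_addr >> 4) & 0xFF) == switches

# A board strapped to 0x3A responds to the 16 addresses 0x3A0..0x3AF.
assert io_slave_selected(0x3A0, 0x3A)
assert io_slave_selected(0x3AF, 0x3A)
assert not io_slave_selected(0x3B0, 0x3A)
```

Address lines A3..A0 then select one of the 16 registers within the board's block.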
Typically 8 small jumpers, a single unit of 8 DIP switches, or two binary-coded hexadecimal rotary switches are used to give each I/O slave board a unique address.

CM0...2: Command Modifiers. These indicate the nature of the data transfer cycle.
A simple processor board can drive CM2 high for all bus accesses, drive CM1 from a memory/not_IO signal, and CM0 from a read/not_write signal. The CM2 low state is used only during "attention request" phases (for interrupts and/or DMA cycles) in Explicit Response mode. When Implicit Response mode is used, the bus master polls the slave boards to find which one has triggered the Attention Request and resets the signal source. In that case, Vector-fetch is not used.
ATNRQ0...7*: Attention Requests. These are reserved for boards to signal for processor attention, a term which covers Interrupts and Direct Memory Access (DMA). The signals were wisely chosen so as not to commit these lines to being specific types, such as maskable interrupts, non-maskable interrupts, or DMA.
The number of Attention Requests reflects the intended role of the STEbus, in real-time control systems. Eight lines can be priority encoded into three bits, and is a reasonably practical number of lines to handle.
BUSRQ0...1* and BUSAK0...1*: Bus Requests and Bus Acknowledge. Optional. Used by multi-master systems.
The number of Bus Requests reflects that the STEbus aims to be simple. Single-master systems are the norm, but these signals allow systems to have secondary bus masters if needed.
DATSTB*: Data Strobe. This is the primary signal in data transfer cycles.
DATACK*: Data Acknowledge. A slave asserts this signal to acknowledge the safe completion of a data transfer via the STEbus. This allows STEbus systems to use plug-in cards with a wide variety of speeds, an improvement on earlier bus systems that required everything to run at the speed of the slowest device.
TFRERR*: Transfer Error. A slave will assert this signal when acknowledging the erroneous completion of a data transfer via the STEbus.
ADRSTB*: Address Strobe. This signal indicates the address bus is valid. Originally, this had some practical use in DRAM boards which could start strobing the address lines into DRAM chips before the data bus was ready. The STEbus spec was later firmed up to say that slaves were not allowed to start transfers until DATSTB* was ready, so ADRSTB* has become quite redundant. Nowadays, STEbus masters can simply generate DATSTB* and ADRSTB* from the same logic signal. Slaves simply note when DATSTB* is valid (since the bus definition insists that the address will also be valid at the same time as the data). ADRSTB* also allows a bus master to retain ownership of the bus during indivisible read-modify-write cycles, by remaining active during two DATSTB* pulses. The sequence matches that of the 68008's bus. Other CPUs may require additional logic to create read-modify-write cycles.
SYSCLK: System Clock. Fixed at 16 MHz. 50% duty cycle.
SYSRST*: System Reset.
The backplane connects all the DIN connectors in parallel.
So a STEbus expansion card sees the same signals no matter which slot of the backplane it is plugged into.
Types of signals:
The SYSCLK must be driven by only one board in the system. As explained in the standard, this signal shall be generated by the System Controller.
The System Controller is also in charge of bus arbitration in case there are multiple masters. When there is only one master, the System Controller is not needed, and SYSCLK can be generated by the master board.
Technical notes:
Signal inputs must be Schmitt-trigger. Only one TTL load per bus signal line per board. Signal outputs must have a fanout of 20. The backplane can have up to 21 sockets. A bus signal line's PCB trace on any board may be at most 50 mm long, and the bus signal line itself at most 500 mm long. Active bus termination is recommended (270R pull-up to 2.8V). 7400 series chips are often used to build custom control boards, directly connected to the STEbus.
**Chntpw**
Chntpw:
chntpw is a software utility for resetting or blanking local passwords used by Windows NT operating systems on Linux. It does this by editing the SAM database where Windows stores password hashes.
Features:
There are two ways to use the program: via the standalone chntpw utility, installed as a package available in most modern Linux distributions (e.g. Ubuntu), or via a bootable CD/USB image. There was also a floppy release, but its support has been dropped.
Limitations:
chntpw has no support for fully encrypted NTFS partitions (the only possible exceptions to this are encrypted partitions readable by Linux such as LUKS), usernames containing Unicode characters, or Active Directory passwords (with the exception of local users of systems that are members of an AD domain). The password changing feature is also prone to errors, so password blanking is highly recommended (in fact, for later versions of Windows it is the only possible option). Furthermore, the bootable image might have problems with controllers requiring 3rd party drivers. In such cases use of the stand-alone program in a full-featured Linux environment is recommended.
Where it is used:
The chntpw utility is included in many Linux distributions, including ones focused on security: Kali, a security-focused Linux distribution; SystemRescueCD, a recovery-focused Linux distribution; Fedora, a general distribution; and Ubuntu, a Linux distribution published by Canonical.
License change:
For the software's 10th anniversary, the author changed the license from a non-commercial one to the GNU General Public License (GPL) Version 2.
**Master data management**
Master data management:
Master data management (MDM) is a technology-enabled discipline in which business and information technology work together to ensure the uniformity, accuracy, stewardship, semantic consistency and accountability of the enterprise's official shared master data assets.
Drivers for master data management:
Organisations, or groups of organisations, may establish the need for master data management when they hold more than one copy of data about a business entity. Holding more than one copy of this master data inherently means that there is an inefficiency in maintaining a "single version of the truth" across all copies. Unless people, processes and technology are in place to ensure that the data values are kept aligned across all copies, it is almost inevitable that different versions of information about a business entity will be held. This causes inefficiencies in operational data use, and hinders the ability of organisations to report and analyze.

At a basic level, master data management seeks to ensure that an organization does not use multiple (potentially inconsistent) versions of the same master data in different parts of its operations, which can occur in large organizations. Other problems include (for example) issues with the quality of data, consistent classification and identification of data, and data-reconciliation issues.

Master data management of disparate data systems requires data transformations, as the data extracted from each disparate source system is transformed and loaded into the master data management hub. To synchronize the disparate sources, the managed master data extracted from the hub is in turn transformed and loaded back into each source system as the master data is updated. As with other Extract, Transform, Load-based data movement, these processes are expensive and inefficient to develop and to maintain, which greatly reduces the return on investment for the master data management product.
There are a number of root causes for master data issues in organisations. These include business unit and product line segmentation, and mergers and acquisitions.

Business unit and product line segmentation

As a result of business unit and product line segmentation, the same business entity (such as Customer, Supplier, Product) will be serviced by different product lines; redundant data will be entered about the business entity in order to process the transaction. The redundancy of business entity data is compounded in the front- to back-office life cycle, where the authoritative single source for the party, account and product data is needed but is often once again redundantly entered or augmented.
A typical example is the scenario of a bank at which a customer has taken out a mortgage and the bank begins to send mortgage solicitations to that customer, ignoring the fact that the person already has a mortgage account relationship with the bank. This happens because the customer information used by the marketing section within the bank lacks integration with the customer information used by the customer services section of the bank. Thus the two groups remain unaware that an existing customer is also considered a sales lead. The process of record linkage is used to associate different records that correspond to the same entity, in this case the same person.
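A minimal record-linkage sketch in the spirit of this example; the field names, normalisation rule, and records are all invented for illustration:

```python
# Two departmental systems hold records about the same customers.
# A crude normalised key (lower-cased name plus digits-only phone)
# links records that refer to the same person.

def match_key(record):
    name = " ".join(record["name"].lower().split())
    phone = "".join(ch for ch in record["phone"] if ch.isdigit())
    return (name, phone)

mortgage_accounts = [
    {"name": "Jane  Smith", "phone": "020 7946 0001"},
]
marketing_leads = [
    {"name": "jane smith", "phone": "020-7946-0001"},
    {"name": "John Doe",   "phone": "020-7946-0002"},
]

existing = {match_key(r) for r in mortgage_accounts}
# Leads that are really existing mortgage customers should not receive
# mortgage solicitations.
new_leads = [r for r in marketing_leads if match_key(r) not in existing]
# new_leads contains only John Doe
```

Real record linkage uses far more robust matching (fuzzy comparison, probabilistic scoring), but the principle of a shared, normalised key is the same.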
Mergers and acquisitions

One of the most common reasons some large corporations experience massive issues with master data management is growth through mergers or acquisitions. Any organizations which merge will typically create an entity with duplicate master data (since each likely had at least one master database of its own prior to the merger). Ideally, database administrators resolve this problem through deduplication of the master data as part of the merger. In practice, however, reconciling several master data systems can present difficulties because of the dependencies that existing applications have on the master databases. As a result, more often than not the two systems do not fully merge, but remain separate, with a special reconciliation process defined that ensures consistency between the data stored in the two systems. Over time, however, as further mergers and acquisitions occur, the problem multiplies, more and more master databases appear, and data-reconciliation processes become extremely complex, and consequently unmanageable and unreliable. Because of this trend, one can find organizations with 10, 15, or even as many as 100 separate, poorly integrated master databases, which can cause serious operational problems in the areas of customer satisfaction, operational efficiency, decision support, and regulatory compliance.
Drivers for master data management:
Another problem concerns determining the proper degree of detail and normalization to include in the master data schema. For example, in a federated HR environment, the enterprise may focus on storing people data as a current status, adding a few fields to identify date of hire, date of last promotion, etc. However, this simplification can introduce business-impacting errors into dependent systems for planning and forecasting. The stakeholders of such systems may be forced to build a parallel network of new interfaces to track onboarding of new hires, planned retirements, and divestment, which works against one of the aims of master data management.
People, process and technology:
Master data management is enabled by technology, but is more than the technologies that enable it. An organisation's master data management capability also includes people and process in its definition.
People Several roles should be staffed within MDM, most prominently the Data Owner and the Data Steward. Typically several people are allocated to each role, each responsible for a subset of master data (e.g. one data owner for employee master data, another for customer master data).
The Data Owner is responsible for the requirements for data quality, data security, etc., as well as for compliance with data governance and data management procedures. The Data Owner should also fund improvement projects when the data deviates from those requirements.
The Data Steward runs master data management on behalf of the Data Owner and often also acts as an advisor to the Data Owner.
People, process and technology:
Process Master data management can be viewed as a "discipline for specialized quality improvement" defined by the policies and procedures put in place by a data governance organization. It has the objective of providing processes for collecting, aggregating, matching, consolidating, quality-assuring, persisting and distributing master data throughout an organization to ensure a common understanding, consistency, accuracy and control, in the ongoing maintenance and application use of that data. Processes commonly seen in master data management include source identification, data collection, data transformation, normalization, rule administration, error detection and correction, data consolidation, data storage, data distribution, data classification, taxonomy services, item master creation, schema mapping, product codification, data enrichment, hierarchy management, business semantics management and data governance.
People, process and technology:
Technology A master data management tool can be used to support master data management by removing duplicates, standardizing data (mass maintaining), and incorporating rules to eliminate incorrect data from entering the system in order to create an authoritative source of master data. Master data are the products, accounts and parties for which the business transactions are completed. Where the technology approach produces a "golden record" or relies on a "source of record" or "system of record", it is common to talk of where the data is "mastered". This is accepted terminology in the information technology industry, but care should be taken, both with specialists and with the wider stakeholder community, to avoid confusing the concept of "master data" with that of "mastering data".
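As a rough illustration of the duplicate-removal and standardization such a tool performs, the following sketch builds a "golden record" per customer. The match key (normalized name plus postcode) and the field names are illustrative assumptions, not a prescribed MDM rule.

```python
from collections import defaultdict

def normalize(record):
    """Standardize fields so trivially different duplicates match.
    The match key (lower-cased name + postcode) is an illustrative assumption."""
    return (record["name"].strip().lower(), record["postcode"].replace(" ", ""))

def golden_records(records):
    """Group records by the normalized match key and merge each group,
    preferring the most recently updated value for every field."""
    groups = defaultdict(list)
    for r in records:
        groups[normalize(r)].append(r)
    merged = []
    for group in groups.values():
        group.sort(key=lambda r: r["updated"])  # oldest first
        golden = {}
        for r in group:                          # later records overwrite earlier ones
            golden.update(r)
        merged.append(golden)
    return merged

customers = [
    {"name": "Ada Lovelace ", "postcode": "SW1A 1AA", "updated": 1},
    {"name": "ada lovelace", "postcode": "SW1A1AA", "updated": 2, "phone": "555-0100"},
    {"name": "Alan Turing", "postcode": "CB2 1TN", "updated": 1},
]
print(len(golden_records(customers)))  # the two Ada records merge, leaving 2
```

The "most recent value wins" survivorship rule is one common choice; real MDM tools let the data steward configure such rules per attribute.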
People, process and technology:
Implementation models There are a number of models for implementing a technology solution for master data management. These depend on an organisation's core business, its corporate structure and its goals. They include: source of record, registry, consolidation, coexistence, and transaction/centralized. Source of record This model identifies a single application, database or simpler source (e.g. a spreadsheet) as being the "source of record" (or "system of record" where solely application databases are relied on). The benefit of this model is its conceptual simplicity, but it may not fit with the realities of complex master data distribution in large organisations.
People, process and technology:
The source of record can be federated, for example by groups of attribute (so that different attributes of a master data entity may have different sources of record) or geographically (so that different parts of an organisation may have different master sources). Federation is only applicable in certain use cases, where there is clear delineation of which subsets of records will be found in which sources.
People, process and technology:
The source of record model can be applied more widely than simply to master data, for example to reference data.
Transmission of master data There are several ways in which master data may be collated and distributed to other systems. These include: Data consolidation – The process of capturing master data from multiple sources and integrating it into a single hub (operational data store) for replication to other destination systems.
Data federation – The process of providing a single virtual view of master data from one or more sources to one or more destination systems.
Data propagation – The process of copying master data from one system to another, typically through point-to-point interfaces in legacy systems.
Change management in implementation:
Master data management can suffer in its adoption within a large organization if the "single version of the truth" concept is not affirmed by stakeholders, who believe that their local definition of the master data is necessary. For example, the product hierarchy used to manage inventory may be entirely different from the product hierarchies used to support marketing efforts or pay sales reps. It is above all necessary to identify if different master data is genuinely required. If it is required, then the solution implemented (technology and process) must be able to allow multiple versions of the truth to exist, but will provide simple, transparent ways to reconcile the necessary differences. If it is not required, processes must be adjusted. Without this active management, users that need the alternate versions will simply "go around" the official processes, thus reducing the effectiveness of the company's overall master data management program.
**Rebound attack**
Rebound attack:
The rebound attack is a tool in the cryptanalysis of cryptographic hash functions. The attack was first published in 2009 by Florian Mendel, Christian Rechberger, Martin Schläffer and Søren Thomsen. It was conceived to attack AES-like functions such as Whirlpool and Grøstl, but was later shown to also be applicable to other designs such as Keccak, JH and Skein.
The attack:
The Rebound Attack is a type of statistical attack on hash functions, using techniques such as rotational and differential cryptanalysis to find collisions and other interesting properties. The basic idea of the attack is to observe a certain differential characteristic in a block cipher (or in a part of it), a permutation or another type of primitive. Finding values fulfilling the characteristic is achieved by splitting the primitive E into three parts such that E = E_fw ∘ E_in ∘ E_bw. E_in is called the inbound phase, and E_fw and E_bw together are called the outbound phase. The attacker then chooses values that realize part of the differential characteristic in the inbound phase deterministically, and fulfills the remainder of the characteristic in a probabilistic manner.
The attack:
Thus, the rebound attack consists of two phases: The inbound (or match-in-the-middle) phase covers the part of the differential characteristic that is difficult to satisfy in a probabilistic way. The goal here is to find many solutions for this part of the characteristic with a low average complexity. To achieve this, the corresponding system of equations, which describes the characteristic in this phase, should be underdetermined. When searching for a solution, there are therefore many degrees of freedom, giving many possible solutions. The inbound phase may be repeated several times to obtain a sufficient number of starting points so that the outbound phase is likely to succeed.
The attack:
In the outbound phase each solution of the inbound phase is propagated outwards in both directions, while checking whether the characteristic also holds in this phase. The probability of the characteristic in the outbound phase should be as high as possible. The advantage of using an inbound and two outbound phases is the ability to calculate the difficult parts of the differential characteristic in the inbound phase in an efficient way. Furthermore, it ensures a high probability in the outbound phase. The overall probability of finding a differential characteristic thus becomes higher than using standard differential techniques.
Detailed description of the attack on hash functions with AES-like compression functions:
Consider a hash function which uses an AES-like substitution-permutation block cipher as its compression function. This compression function consists of a number of rounds composed of S-boxes and linear transformations. The general idea of the attack is to construct a differential characteristic that has its most computationally expensive part in the middle. This part will then be covered by the inbound phase, whereas the more easily achieved parts of the characteristic are covered in the outbound phase. The system of equations which describes the characteristic in the inbound phase should be underdetermined, such that many starting points for the outbound phase can be generated. Since the more difficult part of the characteristic is contained in the inbound phase, it is possible to use standard differentials here, whereas truncated differentials are used in the outbound phase to achieve higher probabilities. The inbound phase will typically have a small number of active state bytes (bytes with non-zero differences) at the beginning, which then propagate to a large number of active bytes in the middle of the round, before returning to a small number of active bytes at the end of the phase. The idea is to have the large number of active bytes at the input and output of an S-box in the middle of the phase. Characteristics can then be efficiently computed by choosing values for the differences at the start and end of the inbound phase, propagating these towards the middle, and looking for matches at the input and output of the S-box. For AES-like ciphers this can typically be done row- or column-wise, making the procedure relatively efficient. Choosing different starting and ending values leads to many different differential characteristics in the inbound phase.
Detailed description of the attack on hash functions with AES-like compression functions:
In the outbound phase the goal is to propagate the characteristics found in the inbound phase backwards and forwards, and check whether the desired characteristics are followed. Here, truncated differentials are usually used, as these give higher probabilities, and the specific values of the differences are irrelevant to the goal of finding a collision. The probability of the characteristic following the desired pattern of the outbound phase depends on the number of active bytes and how these are arranged in the characteristic. To achieve a collision, it is not enough for the differentials in the outbound phase to be of some specific type; any active bytes at the start and end of the characteristic must also have values such that the feed-forward operation is cancelled. Therefore, when designing the characteristic, any active bytes at the start and end of the outbound phase should be at the same positions. The probability of these bytes cancelling factors into the probability of the outbound characteristic. Overall, it is necessary to generate sufficiently many characteristics in the inbound phase in order to get an expected number of correct characteristics larger than one in the outbound phase. Furthermore, near-collisions on a higher number of rounds can be achieved by starting and ending the outbound phase with several active bytes that do not cancel.
Example attack on Whirlpool:
The Rebound Attack can be used against the hash function Whirlpool to find collisions on variants where the compression function (the AES-like block cipher, W) is reduced to 4.5 or 5.5 rounds. Near-collisions can be found on 6.5 and 7.5 rounds. Below is a description of the 4.5 round attack.
Example attack on Whirlpool:
Pre-computation To make the rebound attack effective, a look-up table for S-box differences is computed before the attack. Let S: {0,1}^8 → {0,1}^8 represent the S-box. Then for each pair (a,b) ∈ {0,1}^8 × {0,1}^8 we find the solutions x (if there are any) to the equation S(x) ⊕ S(x ⊕ a) = b, where a represents the input difference and b represents the output difference of the S-box. This 256-by-256 table (called the difference distribution table, DDT) makes it possible to find values that follow the characteristic for any specific input/output pair that goes through the S-box. The table on the right shows the possible numbers of solutions to the equation and how often they occur. The first row describes impossible differentials, whereas the last row describes the zero differential.
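A difference distribution table of this kind can be computed in a few lines. The sketch below uses the 4-bit PRESENT S-box as a small stand-in for Whirlpool's 8-bit S-box (whose DDT is 256 × 256 rather than 16 × 16); the sizes differ but the construction is identical.

```python
# The 4-bit S-box of the PRESENT cipher, used here as a small stand-in for
# Whirlpool's 8-bit S-box.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
N = len(SBOX)

# DDT[a][b] = number of inputs x with S(x) XOR S(x XOR a) == b
DDT = [[0] * N for _ in range(N)]
for a in range(N):
    for x in range(N):
        DDT[a][SBOX[x] ^ SBOX[x ^ a]] += 1

def match_in_the_middle(a, b):
    """Return all x realizing input difference a -> output difference b,
    i.e. the values looked up during the rebound's inbound phase."""
    return [x for x in range(N) if SBOX[x] ^ SBOX[x ^ a] == b]

# Sanity checks: every row sums to N, solutions come in pairs (x and x ^ a
# yield the same b), and a zero input difference forces a zero output difference.
assert all(sum(row) == N for row in DDT)
assert DDT[0][0] == N
```

For an 8-bit S-box the same double loop produces the full 256 × 256 table referenced in the text; the precomputation cost is only 2^16 S-box evaluations.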
Example attack on Whirlpool:
Performing the attack To find a collision on 4.5 rounds of Whirlpool, a differential characteristic of the type shown in the table below must be found. This characteristic has a minimal number of active bytes (bytes with non-zero differences), marked in red. The characteristic can be described by the number of active bytes in each round, e.g. 1 → 8 → 64 → 8 → 1 → 1.
Example attack on Whirlpool:
The inbound phase The goal of the inbound phase is to find differences that fulfill the part of the characteristic described by the sequence of active bytes 8 → 64 → 8. This can be done in the following three steps: Choose arbitrary non-zero differences for the 8 active bytes at the output of the MixRows operation in round 3. These differences are then propagated backwards to the output of the SubBytes operation in round 3. Due to the properties of the MixRows operation, a fully active state is obtained. Note that this can be done for each row independently.
Example attack on Whirlpool:
Choose a difference for each active byte in the input of MixRows operation in round 2, and propagate these differences forward to the input of the SubBytes operation in round 3. Do this for all 255 non-zero differences of each byte. Again, this can be done independently for each row.
Example attack on Whirlpool:
In the match-in-the-middle step, we use the DDT to find matching input/output differences (as found in steps 1 and 2) to the SubBytes operation in round 3. Each row can be checked independently, and the expected number of solutions is 2 per S-box. In total, the expected number of values that follow the differential characteristic is 2^64. These steps can be repeated with 2^64 different starting values in step 1, resulting in a total of 2^128 actual values that follow the differential characteristic in the inbound phase. Each set of 2^64 values can be found with a complexity of 2^8 round transformations, due to the precomputation step.
Example attack on Whirlpool:
The outbound phase The outbound phase completes the differential characteristic in a probabilistic way. The outbound phase uses truncated differentials, as opposed to the inbound phase. Each starting point found in the inbound phase is propagated forwards and backwards. In order to follow the desired characteristic, 8 active bytes must propagate to a single active byte in both directions. One such 8 to 1 transition happens with a probability of 2^-56, so fulfilling the characteristic in both directions has a probability of 2^-112. To ensure a collision, the values at the start and end of the characteristic have to cancel during the feed-forward operation. This happens with a probability of approximately 2^-8, and the overall probability of the outbound phase is therefore 2^-120. To find a collision, 2^120 starting points have to be generated in the inbound phase. Since this can be done with an average complexity of 1 per starting point, the overall complexity of the attack is 2^120.
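The probability bookkeeping above can be checked by tracking base-2 exponents, as in this minimal sketch:

```python
# Probabilities from the 4.5-round Whirlpool characteristic, tracked as
# base-2 exponents (probability 2^-56 is stored as -56).
p_8_to_1 = -56               # one truncated 8 -> 1 transition
p_outbound = 2 * p_8_to_1    # the transition must hold forwards AND backwards
p_feed_forward = -8          # the remaining active bytes must cancel in the feed-forward
p_total = p_outbound + p_feed_forward

starting_points_needed = -p_total  # expected number of trials for one success
inbound_supply = 64 + 64           # 2^64 start differences x 2^64 values each

assert p_total == -120
# The inbound phase can supply up to 2^128 starting points, which is enough.
assert inbound_supply >= starting_points_needed
print(f"overall attack complexity ~ 2^{starting_points_needed}")
```

Since each starting point costs about 1 unit of work on average, the number of starting points needed equals the overall attack complexity quoted in the text.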
Example attack on Whirlpool:
Extending the attack The basic 4.5 round attack can be extended to a 5.5 round attack by using two fully active states in the inbound phase. This increases the complexity to about 2^184. Extending the outbound phase so that it begins and ends with 8 active bytes leads to a near-collision in 52 bytes on Whirlpool reduced to 7.5 rounds with a complexity of 2^192. By assuming that the attacker has control over the chaining value, and therefore the input to the key-schedule of Whirlpool, the attack can be further extended to 9.5 rounds in a semi-free-start near-collision on 52 bytes with a complexity of 2^128.
**Ducrete**
Ducrete:
DUCRETE (Depleted Uranium Concrete) is a high-density concrete alternative investigated for use in the construction of casks for the storage of radioactive waste. It is a composite material containing depleted uranium dioxide aggregate instead of conventional gravel, with a Portland cement binder.
Background and development:
In 1993, the United States Department of Energy Office of Environmental Management initiated investigation into the potential use of depleted uranium in heavy concretes. The aim of this investigation was to simultaneously find an application for depleted uranium and to create a new and more efficient method for the storage and transportation of spent nuclear fuels. The material was first conceived at the Idaho National Engineering and Environmental Laboratory (INEEL) by W. Quapp and P. Lessing, who jointly developed the processes behind the material and were awarded both U.S. and foreign patents in 1998 and 2000, respectively.
Description:
DUCRETE is a kind of concrete that replaces the standard coarse aggregate with a depleted uranium ceramic material. All of the other materials present in DUCRETE (Portland cement, sand and water) are used in the same volumetric ratio used for ordinary concrete. This ceramic material is a very efficient shield since it provides both a high-atomic-number component (uranium) for gamma shielding and a low-atomic-number component (water bound in the concrete) for neutron shielding. There exists an optimum uranium-to-binder ratio for combined attenuation of gamma and neutron radiation at a given wall thickness: a balance must be struck between attenuating the gamma flux in the depleted uranium oxide (DUO2) and retaining enough of the water-bearing cement phase to attenuate the neutron flux.
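The existence of an optimum ratio can be sketched numerically. All coefficients below are hypothetical placeholders chosen only to illustrate that the combined gamma-plus-neutron dose is minimized at an intermediate aggregate fraction; they are not measured values for DUCRETE.

```python
from math import exp

# Illustrative sketch of the gamma/neutron trade-off. All attenuation
# coefficients are ASSUMED placeholder values, not measured data.
X = 50.0             # wall thickness, cm
MU_GAMMA_DU = 0.30   # gamma attenuation per cm of DUO2 aggregate (assumed, high-Z)
MU_GAMMA_CEM = 0.05  # gamma attenuation per cm of cement paste (assumed)
MU_N_DU = 0.02       # neutron attenuation per cm of DUO2 (assumed low: no hydrogen)
MU_N_CEM = 0.10      # neutron attenuation per cm of cement (assumed: bound water)

def transmitted_dose(f):
    """Combined gamma + neutron dose behind the wall for aggregate volume
    fraction f, mixing attenuation coefficients linearly (a rough approximation)."""
    mu_gamma = f * MU_GAMMA_DU + (1 - f) * MU_GAMMA_CEM
    mu_n = f * MU_N_DU + (1 - f) * MU_N_CEM
    return exp(-mu_gamma * X) + exp(-mu_n * X)

# Pure cement lets gamma through, pure aggregate lets neutrons through,
# so the minimum total dose lies at an intermediate uranium-to-binder ratio.
best_f = min((i / 100 for i in range(101)), key=transmitted_dose)
assert 0.0 < best_f < 1.0
```

With these placeholder numbers the optimum falls at a moderate aggregate fraction; the qualitative shape, not the exact value, is the point.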
Description:
The key to effective shielding with depleted uranium ceramic concrete is maximizing the uranium oxide density. Unfortunately, the densest depleted uranium oxide is also the most chemically unstable. DUO2 has a maximum theoretical density of 10.5 g/cm³ at 95% purity. However, under oxidizing conditions, this material readily transforms into the more stable depleted uranium trioxide (DUO3) or depleted triuranium octoxide (DU3O8). Thus, if bare UO2 aggregate is used, these transitions can result in an expansion that may generate stresses that crack the material, lowering its compressive strength. Another limitation on the direct use of depleted uranium dioxide fine powder is that concretes depend on their coarse aggregates to carry compressive stresses. DUAGG was developed to overcome these issues.
Description:
DUAGG (depleted uranium aggregate) is the term applied to the stabilized DUO2 ceramic. This consists of sintered DUO2 particles with a silicate-based coating that covers the surfaces and fills the spaces between the grains, acting as an oxygen barrier and providing corrosion and leach resistance. DUAGG has a density up to 8.8 g/cm³ and replaces the conventional aggregate in concrete, producing concrete with a density of 5.6 to 6.4 g/cm³, compared to 2.3 g/cm³ for conventional concrete. DUCRETE also has environmentally friendly properties: the table below shows the effectiveness of converting depleted uranium into concrete, since potential leaching is greatly decreased. The leach test used was the EPA Toxicity Characteristic Leaching Procedure (TCLP), which is used to assess heavy-metal risks to the environment.
Production:
U.S. method DUCRETE is produced by mixing a DUO2 aggregate with Portland cement. DU is a by-product of the enrichment of uranium for use in nuclear power generation and other fields. DU usually comes bonded with fluorine as uranium hexafluoride. This compound is highly reactive and cannot be used in DUCRETE. Uranium hexafluoride must therefore be oxidized into triuranium octoxide and uranium trioxide. These compounds are then converted to UO2 (uranium dioxide) through the addition of hydrogen gas. The UO2 is then dried, crushed, and milled into a uniform powder. This is then pressed into small, inch-long briquettes at high pressure (6,000 psi (410 bar)). The low-atomic-number binder is then added and undergoes pyrolysis. The compound then undergoes liquid-phase sintering at 1300 °C until the desired density is achieved, usually around 8.9 g/cm³. The briquettes are then crushed and gap-sorted and are now ready to be mixed into DUCRETE.
Production:
VNIINM (Russian) method The VNIINM method is very similar to the U.S. method, except that it does not gap-sort the binder and UO2 after crushing.
Applications:
After processing, DUCRETE composite may be used in container vessels, shielding structures, and containment storage areas, all of which can be used to store radioactive waste. The primary implementation of this material is within a dry cask storage system for high level waste (HLW) and spent nuclear fuel (SNF). In such a system, the composite would be the primary component used to shield radiation from workers and the public. Cask systems made from DUCRETE are smaller and lighter in weight than casks made from conventional materials, such as traditional concrete. DUCRETE containers need only be about 1/3 as thick to provide the same degree of radiation shielding as concrete systems. Analysis has shown that DUCRETE is more cost-effective than conventional materials. The cost for the production of casks made with DUCRETE is low compared with other shielding materials such as steel, lead and DU metal, since less material is required as a consequence of the higher density. In a study by Duke Engineering at a nuclear waste facility at Savannah River, the DUCRETE cask system was evaluated at a lower cost than an alternative Glass Waste storage building. However, disposal of the DUCRETE was not considered. Since DUCRETE is a low-level radioactive composite, its relatively expensive disposal could decrease the cost-effectiveness of such systems. An alternative to such disposal is the use of empty DUCRETE casks as containers for high-activity low-level waste. While DUCRETE shows potential for future nuclear waste programs, such concepts are far from utilization. So far, no DUCRETE cask systems have been licensed in the U.S.
**XQuery**
XQuery:
XQuery (XML Query) is a query and functional programming language that queries and transforms collections of structured and unstructured data, usually in the form of XML or text, with vendor-specific extensions for other data formats (JSON, binary, etc.). The language is developed by the XML Query working group of the W3C. The work is closely coordinated with the development of XSLT by the XSL Working Group; the two groups share responsibility for XPath, which is a subset of XQuery.
XQuery:
XQuery 1.0 became a W3C Recommendation on January 23, 2007. XQuery 3.0 became a W3C Recommendation on April 8, 2014. XQuery 3.1 became a W3C Recommendation on March 21, 2017.
"The mission of the XML Query project is to provide flexible query facilities to extract data from real and virtual documents on the World Wide Web, therefore finally providing the needed interaction between the Web world and the database world. Ultimately, collections of XML files will be accessed like databases."
Features:
XQuery is a functional, side effect-free, expression-oriented programming language with a simple type system, summed up by Kilpeläinen: All XQuery expressions operate on sequences, and evaluate to sequences. Sequences are ordered lists of items. Items can be either nodes, which represent components of XML documents, or atomic values, which are instances of XML Schema base types like xs:integer or xs:string. Sequences can also be empty, or consist of a single item only. No distinction is made between a single item and a singleton sequence. (...) XQuery/XPath sequences differ from lists in languages like Lisp and Prolog by excluding nested sequences. Designers of XQuery may have considered nested sequences unnecessary for the manipulation of document contents. Nesting, or hierarchy of document structures is instead represented by nodes and their child-parent relationships. XQuery provides the means to extract and manipulate data from XML documents or any data source that can be viewed as XML, such as relational databases or office documents.
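The sequence semantics described above (a bare item equals a singleton sequence; combining sequences always flattens, so nesting never arises) can be mimicked outside XQuery. The following Python sketch is only an analogy for the XDM rules, not an XQuery implementation.

```python
def to_sequence(value):
    """Model XDM's rule that a single item and a singleton sequence are
    indistinguishable: wrap bare items, pass sequences through."""
    return value if isinstance(value, list) else [value]

def concat(*values):
    """Model XQuery's comma operator: combining sequences always flattens,
    so nested sequences can never arise."""
    result = []
    for v in values:
        result.extend(to_sequence(v))
    return result

# In XQuery, (1, (2, 3), ()) evaluates to the flat sequence (1, 2, 3)
assert concat(1, [2, 3], []) == [1, 2, 3]
# ...and the item 42 is the same value as the sequence (42)
assert to_sequence(42) == to_sequence([42])
```

Nesting in XDM is expressed through node trees (parent/child relationships) rather than through sequences, which is why flattening loses nothing.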
Features:
XQuery contains a superset of XPath expression syntax to address specific parts of an XML document. It supplements this with a SQL-like "FLWOR expression" for performing joins. A FLWOR expression is constructed from the five clauses after which it is named: FOR, LET, WHERE, ORDER BY, RETURN.
The language also provides syntax allowing new XML documents to be constructed. Where the element and attribute names are known in advance, an XML-like syntax can be used; in other cases, expressions referred to as dynamic node constructors are available. All these constructs are defined as expressions within the language, and can be arbitrarily nested.
The language is based on the XQuery and XPath Data Model (XDM) which uses a tree-structured model of the information content of an XML document, containing seven kinds of nodes: document nodes, elements, attributes, text nodes, comments, processing instructions, and namespaces.
XDM also models all values as sequences (a singleton value is considered to be a sequence of length one). The items in a sequence can either be XML nodes or atomic values. Atomic values may be integers, strings, booleans, and so on: the full list of types is based on the primitive types defined in XML Schema.
Features for updating XML documents or databases, and full text search capability, are not part of the core language, but are defined in add-on extension standards: XQuery Update Facility 1.0 supports update feature and XQuery and XPath Full Text 1.0 supports full text search in XML documents.
XQuery 3.0 adds support for full functional programming, in that functions are values that can be manipulated (stored in variables, passed to higher-order functions, and dynamically called).
Examples:
The sample XQuery code below lists the unique speakers in each act of Shakespeare's play Hamlet, encoded in hamlet.xml. All XQuery constructs for performing computations are expressions. There are no statements, even though some of the keywords appear to suggest statement-like behaviors. To execute a function, the expression within the body is evaluated and its value is returned. Thus a function to double an input value can be written as a single expression, for example declare function local:double($x) { $x * 2 };, and a complete query saying 'Hello World' is simply the expression "Hello World". This style is common in functional programming languages.
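A query listing the unique speakers in each act, as described above, might look like the following sketch. The element names (ACT, SPEAKER, TITLE) assume Jon Bosak's well-known XML encoding of Shakespeare's plays, so this is an illustration rather than a definitive sample.

```xquery
(: for each act of hamlet.xml, emit its title and the distinct speakers;
   the ACT/SPEAKER/TITLE element names are an assumption about the markup :)
for $act in doc("hamlet.xml")//ACT
let $speakers := distinct-values($act//SPEAKER)
return
  <act title="{ $act/TITLE }">
    { for $s in $speakers return <speaker>{ $s }</speaker> }
  </act>
```

The query is a single FLWOR expression whose RETURN clause uses direct element constructors, illustrating that computation and XML construction are both just expressions.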
Applications:
Below are a few examples of how XQuery can be used: Extracting information from a database for use in a web service.
Generating summary reports on data stored in an XML database.
Searching textual documents on the Web for relevant information and compiling the results.
Selecting and transforming XML data to XHTML to be published on the Web.
Pulling data from databases to be used for application integration.
Splitting up an XML document that represents multiple transactions into multiple XML documents.
XQuery and XSLT compared:
Scope Although XQuery was initially conceived as a query language for large collections of XML documents, it is also capable of transforming individual documents. As such, its capabilities overlap with XSLT, which was designed expressly to allow input XML documents to be transformed into HTML or other formats.
The XSLT 2.0 and XQuery standards were developed by separate working groups within W3C, working together to ensure a common approach where appropriate. They share the same data model (XDM), type system, and function library, and both include XPath 2.0 as a sublanguage.
XQuery and XSLT compared:
Origin The two languages, however, are rooted in different traditions and serve the needs of different communities. XSLT was primarily conceived as a stylesheet language whose primary goal was to render XML for the human reader on screen, on the web (as web template language), or on paper. XQuery was primarily conceived as a database query language in the tradition of SQL.
XQuery and XSLT compared:
Because the two languages originate in different communities, XSLT is stronger in its handling of narrative documents with more flexible structure, while XQuery is stronger in its data handling (for example, when performing relational joins).
XQuery and XSLT compared:
Versions XSLT 1.0 appeared as a Recommendation in 1999, whereas XQuery 1.0 only became a Recommendation in early 2007; as a result, XSLT is still much more widely used. Both languages have similar expressive power, though XSLT 2.0 has many features that are missing from XQuery 1.0, such as grouping, number and date formatting, and greater control over XML namespaces. Many of these features were planned for XQuery 3.0. Any comparison must take into account the version of XSLT. XSLT 1.0 and XSLT 2.0 are very different languages. XSLT 2.0, in particular, has been heavily influenced by XQuery in its move to strong typing and schema-awareness.
XQuery and XSLT compared:
Strengths and weaknesses Usability studies have shown that XQuery is easier to learn than XSLT, especially for users with previous experience of database languages such as SQL. This can be attributed to the fact that XQuery is a smaller language with fewer concepts to learn, and to the fact that programs are more concise. It is also true that XQuery is more orthogonal, in that any expression can be used in any syntactic context. By contrast, XSLT is a two-language system in which XPath expressions can be nested in XSLT instructions but not vice versa.
XQuery and XSLT compared:
XSLT is currently stronger than XQuery for applications that involve making small changes to a document (for example, deleting all the NOTE elements). Such applications are generally handled in XSLT by use of a coding pattern that involves an identity template that copies all nodes unchanged, modified by specific templates that modify selected nodes. XQuery has no equivalent to this coding pattern, though in future versions it will be possible to tackle such problems using the update facilities in the language that are under development. XQuery 1.0 lacked any kind of mechanism for dynamic binding or polymorphism; this has been remedied with the introduction of functions as first-class values in XQuery 3.0. The absence of this capability starts to become noticeable when writing large applications, or when writing code that is designed to be reusable in different environments. XSLT offers two complementary mechanisms in this area: the dynamic matching of template rules, and the ability to override rules using xsl:import, that make it possible to write applications with multiple customization layers.
XQuery and XSLT compared:
The absence of these facilities from XQuery 1.0 was a deliberate design decision: it has the consequence that XQuery is very amenable to static analysis, which is essential to achieve the level of optimization needed in database query languages. This also makes it easier to detect errors in XQuery code at compile time.
XQuery and XSLT compared:
The fact that XSLT 2.0 uses XML syntax makes it rather verbose in comparison to XQuery 1.0. However, many large applications take advantage of this capability by using XSLT to read, write, or modify stylesheets dynamically as part of a processing pipeline. The use of XML syntax also enables the use of XML-based tools for managing XSLT code. By contrast, XQuery syntax is more suitable for embedding in traditional programming languages such as Java (see XQuery API for Java) or C#. If necessary, XQuery code can also be expressed in an XML syntax called XQueryX. The XQueryX representation of XQuery code is rather verbose and not convenient for humans, but can easily be processed with XML tools, for example transformed with XSLT stylesheets.
Extensions and future work:
W3C extensions Two major extensions to XQuery were developed by the W3C: XQuery 1.0 and XPath 2.0 Full-Text, and the XQuery Update Facility. Both reached Recommendation status as extensions to XQuery 1.0, but work on taking them forward to work with XQuery 3.0 was abandoned for lack of resources.
Work on XQuery 3.0 was published as a Recommendation on 8 April 2014, and XQuery 3.1 is a Recommendation as at February 2017.
A scripting (procedural) extension for XQuery was designed, but never completed. The EXPath Community Group develops extensions to XQuery and other related standards (XPath, XSLT, XProc, and XForms).
The following extensions are currently available: Packaging System, File Module, Binary Module, and Web Applications.
Third-party extensions:
JSONiq is an extension of XQuery that adds support for extracting and transforming data from JSON documents. JSONiq is a superset of XQuery 3.0. It is published under the Creative Commons Attribution-ShareAlike 3.0 license.
The EXQuery project develops standards around creating portable XQuery applications. The following standards are currently available: RESTXQ | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Type-1.5 superconductor**
Type-1.5 superconductor:
Type-1.5 superconductors are multicomponent superconductors characterized by two or more coherence lengths, at least one of which is shorter than the magnetic field penetration length λ, and at least one of which is longer. This is in contrast to single-component superconductors, where there is only one coherence length ξ and the superconductor is necessarily either type 1 (ξ > λ) or type 2 (ξ < λ) (often the coherence length is defined with an extra factor of √2; with such a definition the corresponding inequalities are ξ > √2 λ and ξ < √2 λ). When placed in a magnetic field, type-1.5 superconductors should form quantum vortices: magnetic-flux-carrying excitations. They allow magnetic field to pass through superconductors due to a vortex-like circulation of superconducting particles (electronic pairs). In type-1.5 superconductors these vortices have a long-range attractive, short-range repulsive interaction. As a consequence, a type-1.5 superconductor in a magnetic field can phase-separate into domains with expelled magnetic field and clusters of quantum vortices which are bound together by attractive intervortex forces. The domains of the Meissner state retain the two-component superconductivity, while in the vortex clusters one of the superconducting components is suppressed. Thus such materials should allow coexistence of various properties of type-I and type-II superconductors.
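The inequalities above amount to a simple decision rule. The sketch below (Python; the function name and the convention flag are my own, and the two-component criterion ξ₁ > λ > ξ₂ is the one discussed later in this article) classifies a superconductor from its penetration length and coherence length(s):

```python
import math

def classify(penetration_length, coherence_lengths, sqrt2_convention=False):
    """Classify a superconductor as 'type-1', 'type-2', or 'type-1.5'
    from its penetration length (lambda) and coherence length(s) (xi).

    With sqrt2_convention=True, lambda is rescaled by sqrt(2), so the
    single-component criteria become xi > sqrt(2)*lambda (type 1) and
    xi < sqrt(2)*lambda (type 2)."""
    lam = penetration_length * (math.sqrt(2) if sqrt2_convention else 1.0)
    xs = sorted(coherence_lengths)
    if len(xs) == 1:                      # single-component superconductor
        return "type-1" if xs[0] > lam else "type-2"
    # multicomponent: type-1.5 needs at least one xi below and one above lambda
    if xs[0] < lam < xs[-1]:
        return "type-1.5"
    return "type-1" if xs[0] > lam else "type-2"

print(classify(1.0, [2.0]))        # single component, xi > lambda
print(classify(1.0, [0.5, 3.0]))   # xi_2 < lambda < xi_1
```

This is only the necessary geometric condition; as noted later in the article, thermodynamic stability imposes additional constraints on the parameters.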
Description:
Type-I superconductors completely expel external magnetic fields if the strength of the applied field is sufficiently low. Also, the supercurrent can flow only on the surface of such a superconductor, not in its interior. This state is called the Meissner state. However, at elevated magnetic fields, when the magnetic field energy becomes comparable with the superconducting condensation energy, the superconductivity is destroyed by the formation of macroscopically large inclusions of a non-superconducting phase.
Description:
Type-II superconductors, besides the Meissner state, possess another state: a sufficiently strong applied magnetic field can produce currents in the interior of the superconductor due to the formation of quantum vortices. The vortices also carry magnetic flux through the interior of the superconductor. These quantum vortices repel each other and thus tend to form uniform vortex lattices or liquids. Formally, vortex solutions exist also in models of type-I superconductivity, but the interaction between vortices is purely attractive, so a system of many vortices is unstable against a collapse onto a state of a single giant normal domain with supercurrent flowing on its surface. More importantly, the vortices in a type-I superconductor are energetically unfavorable. To produce them would require the application of a magnetic field stronger than what a superconducting condensate can sustain. Thus a type-I superconductor goes to non-superconducting states rather than forming vortices. In the usual Ginzburg–Landau theory, only quantum vortices with purely repulsive interaction are energetically cheap enough to be induced by an applied magnetic field.
Description:
It was proposed that the type-I/type-II dichotomy could be broken in multi-component superconductors, which possess multiple coherence lengths.
Description:
Examples of multi-component superconductivity are the multi-band superconductors magnesium diboride and oxypnictides, and exotic superconductors with nontrivial Cooper pairing. There, one can distinguish two or more superconducting components associated, for example, with electrons belonging to different bands of the band structure. A different example of a two-component system is the projected superconducting states of liquid metallic hydrogen or deuterium, where mixtures of superconducting electrons and superconducting protons or deuterons were theoretically predicted.
Description:
It was also pointed out that systems which have phase transitions between different superconducting states, such as between s and s+is or between U(1) and U(1)×U(1), should rather generically fall into the type-1.5 state near that transition, due to the divergence of one of the coherence lengths.
Description:
In mixtures of independently conserved condensates For multicomponent superconductors with so-called U(1)×U(1) symmetry, the Ginzburg–Landau model is a sum of two single-component Ginzburg–Landau models which are coupled by a vector potential A:
F = Σ_{i=1,2} [ (1/2m) |(∇ − ieA)ψ_i|² + α_i|ψ_i|² + β_i|ψ_i|⁴ ] + (1/2)(∇ × A)²
where ψ_i = |ψ_i| e^{iφ_i}, i = 1, 2, are two superconducting condensates. If the condensates are coupled only electromagnetically, i.e. through A, the model has three length scales: the London penetration length λ = 1 / ( e √(|ψ₁|² + |ψ₂|²) ) and two coherence lengths ξ₁ = 1/√(2α₁) and ξ₂ = 1/√(2α₂). The vortex excitations in that case have cores in both components, which are co-centered because of the electromagnetic coupling mediated by the field A. The necessary but not sufficient condition for the occurrence of the type-1.5 regime is ξ₁ > λ > ξ₂. An additional condition of thermodynamic stability is satisfied for a range of parameters. These vortices have a nonmonotonic interaction: they attract each other at large distances and repel each other at short distances.
Description:
It was shown that there is a range of parameters where these vortices are energetically favorable enough to be excitable by an external field, attractive interaction notwithstanding. This results in the formation of a special superconducting phase in low magnetic fields, dubbed the "Semi-Meissner" state. The vortices, whose density is controlled by the applied magnetic flux density, do not form a regular structure. Instead, they should have a tendency to form vortex "droplets" because of the long-range attractive interaction caused by condensate density suppression in the area around the vortex. Such vortex clusters should coexist with areas of vortex-less two-component Meissner domains. Inside such a vortex cluster the component with the larger coherence length is suppressed, so that component has appreciable current only at the boundary of the cluster.
Description:
In multiband systems In a two-band superconductor the electrons in different bands are not independently conserved, and thus the definition of two superconducting components is different. A two-band superconductor is described by the following Ginzburg–Landau model:
F = Σ_{i=1,2} [ (1/2m) |(∇ − ieA)ψ_i|² + α_i|ψ_i|² + β_i|ψ_i|⁴ ] − η(ψ₁ψ₂* + ψ₁*ψ₂) + γ[ (∇ − ieA)ψ₁ · (∇ + ieA)ψ₂* + (∇ + ieA)ψ₁* · (∇ − ieA)ψ₂ ] + ν|ψ₁|²|ψ₂|² + (1/2)(∇ × A)²
where again ψ_i = |ψ_i| e^{iφ_i}, i = 1, 2, are two superconducting condensates. In multiband superconductors, quite generically, η ≠ 0 and γ ≠ 0. When η ≠ 0, γ ≠ 0, ν ≠ 0, the three length scales of the problem are again the London penetration length and two coherence lengths. However, in this case the coherence lengths ξ̃₁(α₁, β₁, α₂, β₂, η, γ, ν) and ξ̃₂(α₁, β₁, α₂, β₂, η, γ, ν) are associated with "mixed" combinations of the density fields.
Microscopic models:
A microscopic theory of type-1.5 superconductivity has been reported.
Current experimental research:
In 2009, experimental results were reported claiming that magnesium diboride may fall into this new class of superconductivity. The term type-1.5 superconductor was coined for this state. Further experimental data backing this conclusion was subsequently reported. More recent theoretical works show that type-1.5 behavior may be a more general phenomenon, because it does not require a material with two truly superconducting bands: it can also arise as a result of even a very small interband proximity effect, and it is robust in the presence of various inter-band couplings such as interband Josephson coupling.
Current experimental research:
In 2014, an experimental study suggested that Sr2RuO4 is a type-1.5 superconductor.
Non-technical explanation:
Type-I and type-II superconductors feature dramatically different charge flow patterns. Type-I superconductors have two state-defining properties: the lack of electric resistance and the fact that they do not allow an external magnetic field to pass through them. When a magnetic field is applied to these materials, superconducting electrons produce a strong current on the surface, which in turn produces a magnetic field in the opposite direction to cancel the interior magnetic field, similar to how typical conductors cancel interior electric fields with surface charge distributions. An externally applied magnetic field of sufficiently low strength is cancelled in the interior of a type-I superconductor by the field produced by the surface current. In type-II superconducting materials, however, a complicated flow of superconducting electrons can form deep in the interior of the material. In a type-II material, magnetic fields can penetrate into the interior, carried inside by vortices that form an Abrikosov vortex lattice. In type-1.5 superconductors, there are at least two superconducting components. In such materials, the external magnetic field can produce clusters of tightly packed vortex droplets, because in such materials vortices should attract each other at large distances and repel at short length scales. Since the attraction originates in the overlap of vortex cores in one of the superconducting components, this component will be depleted in the vortex cluster. Thus a vortex cluster will represent two competing types of superflow. One component will form vortices bunched together, while the second component will produce supercurrent flowing on the surface of vortex clusters in a way similar to how electrons flow on the exterior of type-I superconductors. These vortex clusters are separated by "voids," with no vortices, no currents and no magnetic field.
Animations:
Movies from numerical simulations of the Semi-Meissner state, where Meissner domains coexist with clusters in which vortex droplets form in one superconducting component and macroscopic normal domains in the other.
**Auth-Code**
Auth-Code:
An Auth-Code, also known as an EPP code, authorization code, transfer code, or Auth-Info Code, is a generated passcode required to transfer an Internet domain name between domain registrars; the code is intended to indicate that the domain name owner has authorized the transfer. Auth-Codes are created by the current registrar of the domain. The registrar is required to provide the Auth-Code to the domain name owner within five calendar days of the owner's request, and ICANN accepts complaints about registrars that do not. Some registrars allow Auth-Codes to be generated by the domain owners through the registrar's website. All generic top-level domains use an Auth-Code in their transfer process. The .nz domain registry used an eight-character Auth-Code called Unique Domain Authentication Identifier (UDAI) for domain transfers and name conflict procedures. The UDAI was provided to the domain owner by the domain's current registrar, and expired after 30 days. With the .nz registry update in 2022 the term UDAI was retired, and the passcode is now also referred to as an Auth-Code.
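The role the code plays in a transfer can be sketched abstractly. In the toy model below (Python; the class and method names are illustrative inventions, not any registry's actual API), the current registrar issues the code, the owner hands it to the gaining registrar, and the transfer proceeds only if the submitted code matches the one on record:

```python
import secrets

class Registry:
    """Toy model of an Auth-Code-based domain transfer (illustrative only)."""
    def __init__(self):
        self.auth_codes = {}  # domain -> code on record

    def issue_auth_code(self, domain):
        # The current registrar requests a code on the owner's behalf.
        code = secrets.token_urlsafe(12)
        self.auth_codes[domain] = code
        return code

    def request_transfer(self, domain, submitted_code):
        # The gaining registrar submits the code the owner gave it;
        # compare_digest avoids timing side channels on the comparison.
        on_record = self.auth_codes.get(domain)
        return on_record is not None and secrets.compare_digest(on_record, submitted_code)

registry = Registry()
code = registry.issue_auth_code("example.org")   # owner obtains the code
print(registry.request_transfer("example.org", code))          # True
print(registry.request_transfer("example.org", "wrong-code"))  # False
```

Real transfers run over EPP between registrars and the registry, with additional confirmation steps; only the match-against-record idea is modeled here.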
Alternative systems:
The .uk and .co.uk domain registry, instead of using a passcode, has the domain owner specify the new registrar through the old registrar. The destination registrar is specified using the destination's registrar tag, also known as an Internet Provider Security (IPS) tag or Nominet Provider tag. Some registries use a document-based approach, either in conjunction with or instead of an Auth-Code. An example is .hu, for which the registrant has to fill out a document and send it to the new registrar, who sends it to the registry to fulfill the domain transfer. The .is domain registry uses the domain's admin NIC handle, which the old registrar has to update to the new registrar's NIC handle. Some other registries use an email template (that may or may not be in part processed automatically), such as .lr or .jm. In this case the technical contact is set to the registrar and can be updated by sending an updated template from the registrant's or admin contact's email address.
**PPGI**
PPGI:
PPGI is pre-painted galvanised iron, also known as pre-coated steel, coil coated steel, color coated steel etc., typically with a hot dip zinc coated steel substrate.
PPGI:
The term is an extension of GI which is a traditional abbreviation for Galvanized Iron. Today the term GI typically refers to essentially pure zinc (>99%) continuously hot dip coated steel, as opposed to batch dip processes. PPGI refers to factory pre-painted zinc coated steel, where the steel is painted before forming, as opposed to post painting which occurs after forming.
PPGI:
The hot dip metallic coating process is also used to manufacture steel sheet and coil with coatings of aluminium, or alloy coatings of zinc/aluminium, zinc/iron and zinc/aluminium/magnesium which may also be factory pre-painted. While GI may sometimes be used as a collective term for various hot dip metallic coated steels, it more precisely refers only to zinc coated steel. Similarly, PPGI may sometimes be used as a general term for a range of metallic coated steels that have been pre-painted, but more often refers more precisely to pre-painted zinc coated steel.
PPGI:
Zinc coated steel substrate for PPGI is typically produced on a continuous galvanizing line (CGL). The CGL may include a painting section after the hot dip galvanising section, or more commonly the metallic coated substrate in coil form is processed on a separate continuous paint line (CPL). Metallic coated steel is cleaned, pre-treated, applied with various layers of organic coatings which can be paints, vinyl dispersions, or laminates. The continuous process used to apply these coatings is often referred to as Coil Coating.
PPGI:
The steel thus produced is a prepainted, prefinished, ready-to-use material for further processing into finished products or components.
PPGI:
The coil coating process may be used for other substrates, such as aluminium, stainless steel, or alloy-coated steel other than "pure" zinc coated steel. However, only "pure" zinc coated steel is typically referred to as PPGI. For example, PPGL may be used for pre-painted 55%Al/Zn alloy-coated steel (pre-painted GALVALUME® steel). Over 30 million tons of such coated steel is produced today in over 300 coating lines in Boxing alone, a small county in northern China.
PPGI:
China, South Korea and Taiwan are the top 3 producers of PPGI steel according to PPGI marketplace platform http://www.buyppgi.com
**Crystallographic image processing**
Crystallographic image processing:
Crystallographic image processing (CIP) is traditionally understood as being a set of key steps in the determination of the atomic structure of crystalline matter from high-resolution electron microscopy (HREM) images obtained in a transmission electron microscope (TEM) that is run in the parallel illumination mode. The term was created in the research group of Sven Hovmöller at Stockholm University during the early 1980s and rapidly became a label for the "3D crystal structure from 2D transmission/projection images" approach. Since the late 1990s, analogous and complementary image processing techniques, directed towards goals that are either complementary to or entirely beyond the scope of the original conception of CIP, have been developed independently by members of the computational symmetry/geometry, scanning transmission electron microscopy, scanning probe microscopy, and applied crystallography communities.
HREM image contrasts and crystal potential reconstruction methods:
Many beam HREM images of extremely thin samples are only directly interpretable in terms of a projected crystal structure if they have been recorded under special conditions, i.e. the so-called Scherzer defocus. In that case the positions of the atom columns appear as black blobs in the image (when the spherical aberration coefficient of the objective lens is positive, as is always the case for uncorrected TEMs). Difficulties for interpretation of HREM images arise for other defocus values because the transfer properties of the objective lens alter the image contrast as a function of the defocus. Hence atom columns which appear at one defocus value as dark blobs can turn into white blobs at a different defocus and vice versa. In addition to the objective lens defocus (which can easily be changed by the TEM operator), the thickness of the crystal under investigation also has a significant influence on the image contrast. These two factors often mix and yield HREM images which cannot be straightforwardly interpreted as a projected structure. If the structure is unknown, so that image simulation techniques cannot be applied beforehand, image interpretation is even more complicated. Nowadays two approaches are available to overcome this problem: one is the exit-wave function reconstruction method, which requires several HREM images from the same area at different defocus values, and the other is crystallographic image processing (CIP), which processes only a single HREM image. Exit-wave function reconstruction provides an amplitude and phase image of the (effective) projected crystal potential over the whole field of view. The thereby reconstructed crystal potential is corrected for aberration and delocalisation and is also not affected by possible transfer gaps, since several images with different defocus are processed. CIP, on the other hand, considers only one image and applies corrections on the averaged image amplitudes and phases.
The result of the latter is a pseudo-potential map of one projected unit cell. The result can be further improved by crystal tilt compensation and search for the most likely projected symmetry. In conclusion one can say that the exit-wave function reconstruction method has most advantages for determining the (aperiodic) atomic structure of defects and small clusters and CIP is the method of choice if the periodic structure is in focus of the investigation or when defocus series of HREM images cannot be obtained, e.g. due to beam damage of the sample. However, a recent study on the catalyst related material Cs0.5[Nb2.5W2.5O14] shows the advantages when both methods are linked in one study.
Brief history of crystallographic image processing:
Aaron Klug suggested in 1979 that a technique that was originally developed for structure determination of membrane protein structures can also be used for structure determination of inorganic crystals. This idea was picked up by the research group of Sven Hovmöller which proved that the metal framework partial structure of the K8−xNb16−xW12+xO80 heavy-metal oxide could be determined from single HREM images recorded at Scherzer defocus. (Scherzer defocus ensures within the weak-phase object approximation a maximal contribution to the image of elastically scattered electrons that were scattered just once while contributions of doubly elastically scattered electrons to the image are optimally suppressed.) In later years the methods became more sophisticated so that also non-Scherzer images could be processed. One of the most impressive applications at that time was the determination of the complete structure of the complex compound Ti11Se4, which has been inaccessible by X-ray crystallography. Since CIP on single HREM images works only smoothly for layer-structures with at least one short (3 to 5 Å) crystal axis, the method was extended to work also with data from different crystal orientations (= atomic resolution electron tomography). This approach was used in 1990 to reconstruct the 3D structure of the mineral staurolite HFe2Al9Si4O4 and more recently to determine the structures of the huge quasicrystal approximant phase ν-AlCrFe and the structures of the complex zeolites TNU-9 and IM-5. As mentioned below in the section on crystallographic processing of images that were recorded from 2D periodic arrays with other types of microscopes, the CIP techniques were taken up since 2009 by members of the scanning transmission electron microscopy, scanning probe microscopy and applied crystallography communities.
Brief history of crystallographic image processing:
Contemporary robotics and computer vision researchers also deal with the topic of "computational symmetry", but have so far failed to utilize the spatial distribution of site symmetries that result from crystallographic origin conventions. In addition, a well known statistician noted in his comments on "Symmetry as a continuous feature" that symmetry groups possess inclusion relations (are not disjoint in other words) so that conclusions about which symmetry is most likely present in an image need to be based on "geometric inferences". Such inferences are deeply rooted in information theory, where one is not trying to model empirical data, but extracts and models the information content of the data.
Brief history of crystallographic image processing:
The key difference between geometric inference and all kinds of traditional statistical inferences is that the former merely states the co-existence of a set of definitive (and exact geometrical) constraints and noise, whereby noise is nothing else but an unknown characteristic of the measurement device and data processing operations. From this follows that "in comparing two" (or more) "geometric models we must take into account the fact that the noise is identical (but unknown) and has the same characteristic for both" (all) "models". Because many of these approaches use linear approximations, the level of random noise needs to be low to moderate, or in other words, the measuring devices must be very well corrected for all kinds of known systematic errors. These kinds of ideas have, however, only been taken up by a tiny minority of researchers within the computational symmetry and scanning probe microscopy / applied crystallography communities.
Brief history of crystallographic image processing:
It is fair to say that the members of computational symmetry community are doing crystallographic image processing under a different name and without utilization of its full mathematical framework (e.g. ignorance to the proper choice of the origin of a unit cell and preference for direct space analyses). Frequently, they are working with artificially created 2D periodic patterns, e.g. wallpapers, textiles, or building decoration in the Moorish/Arabic/Islamic tradition. The goals of these researchers are often related to the identification of point and translation symmetries by computational means and the subsequent classifications of patterns into groups. Since their patterns were artificially created, they do not need to obey all of the restrictions that nature typically imposes on long range periodic ordered arrays of atoms or molecules.
Brief history of crystallographic image processing:
Computational geometry takes a broader view on this issue and concluded already in 1991 that the problem of testing approximate point symmetries in noisy images is in general NP-hard and later on that it is also NP-complete. For restricted versions of this problem, there exist polynomial time algorithms that solve the corresponding optimization problems for a few point symmetries in 2D.
Crystallographic image processing of high-resolution TEM images:
The principal steps for solving a structure of an inorganic crystal from HREM images by CIP are as follows (for a detailed discussion see ).
Crystallographic image processing of high-resolution TEM images:
Selecting the area of interest and calculating the Fourier transform (= power spectrum consisting of a 2D periodic array of complex numbers)
Determining the defocus value and compensating for the contrast changes imposed by the objective lens (done in Fourier space)
Indexing and refining the lattice (done in Fourier space)
Extracting amplitude and phase values at the refined lattice positions (done in Fourier space)
Determining the origin of the projected unit cell and determining the projected (plane group) symmetry
Imposing constraints of the most likely plane group symmetry on the amplitudes and phases. At this step the image phases are converted into the phases of the structure factors.
Crystallographic image processing of high-resolution TEM images:
Calculating the pseudo-potential map by Fourier synthesis with corrected (structure factor) amplitudes and phases (done in real space)
Determining 2D (projected) atomic co-ordinates (done in real space)
A few computer programs are available which assist in performing the necessary processing steps. The most popular programs used by materials scientists (electron crystallographers) are CRISP, VEC, and the EDM package. There is also the recently developed crystallographic image processing program EMIA, but so far there do not seem to be reports by users of this program.
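The first and fourth steps above (Fourier transforming the image, then reading off amplitudes and phases at lattice positions) can be sketched in a few lines. The following pure-Python illustration (all names are my own, and the "image" is a synthetic cosine pattern rather than a genuine micrograph, which a real workflow would process with an FFT) extracts the amplitude and phase of one Fourier coefficient:

```python
import cmath, math

N = 16
# Synthetic 2D periodic "image": a cosine wave at reciprocal-lattice point
# (h, k) = (2, 1) with amplitude 3 and phase 0.5 rad, on a constant background.
h0, k0, amp, phase = 2, 1, 3.0, 0.5
image = [[10.0 + amp * math.cos(2 * math.pi * (h0 * x + k0 * y) / N + phase)
          for x in range(N)] for y in range(N)]

def fourier_coefficient(img, h, k):
    """Direct 2D DFT coefficient at reciprocal-lattice point (h, k)."""
    n = len(img)
    return sum(img[y][x] * cmath.exp(-2j * math.pi * (h * x + k * y) / n)
               for y in range(n) for x in range(n)) / (n * n)

c = fourier_coefficient(image, h0, k0)
# The cosine contributes amp/2 = 1.5 at (h0, k0), with phase 0.5 rad.
print(round(abs(c), 3), round(cmath.phase(c), 3))
```

The origin-refinement step then operates on exactly such (amplitude, phase) pairs; for the centrosymmetric plane groups mentioned below, the refined phases are constrained to 0 or π, i.e. to signs of the amplitudes.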
Crystallographic image processing of high-resolution TEM images:
Structural biologists achieve resolutions of a few ångströms (up from a few nanometers in the past, when samples used to be negatively stained) for membrane-forming proteins in regular two-dimensional arrays, but prefer the usage of the programs 2dx, EMAN2, and IPLT. These programs are based on the Medical Research Council (MRC) image processing programs and possess additional functionality such as the "unbending" of the image. As the name suggests, unbending of the image is conceptually equivalent to "flattening out and relaxing to equilibrium positions" one-building-block-thick samples, so that all 2D periodic motifs are as similar as possible and all building blocks of the array possess the same crystallographic orientation with respect to a cartesian coordinate system that is fixed to the microscope. (The microscope's optical axis typically serves as the z-axis.) Unbending is often necessary when the 2D array of membrane proteins is paracrystalline rather than genuinely crystalline. It was estimated that unbending approximately doubles the spatial resolution with which the shape of molecules can be determined. Inorganic crystals are much stiffer than 2D periodic protein membrane arrays, so there is no need for the unbending of images that were taken from suitably thinned parts of these crystals. Consequently, the CRISP program does not possess the unbending image processing feature, but offers superior performance in the so-called phase origin refinement.
Crystallographic image processing of high-resolution TEM images:
The latter feature is particularly important for electron crystallographers, as their samples may possess any one of the 230 space group types that exist in three dimensions. The regular arrays of membrane forming proteins that structural biologists deal with are, on the other hand, restricted to possess one out of only 17 (two-sided/black-white) layer group types (of which there are 46 in total and which are periodic only in 2D) due to the chiral nature of all (naturally occurring) proteins. Different crystallographic settings of four of these layer group types increase the number of possible layer group symmetries of regular arrays of membrane forming proteins to just 21.
Crystallographic image processing of high-resolution TEM images:
All 3D space groups and their subperiodic 2D periodic layer groups (including the above-mentioned 46 two-sided groups) project to just 17 plane space group types, which are genuinely 2D periodic and are sometimes referred to as the wallpaper groups. (Although quite popular, this is a misnomer because wallpapers are not restricted to possess these symmetries by nature.) All individual transmission electron microscopy images are projections from the three-dimensional space of the samples into two dimensions (so that spatial distribution information along the projection direction is unavoidably lost). Projections along prominent (i.e. certain low-index) zone axes of 3D crystals or along the layer normal of a membrane forming protein sample ensure the projection of 3D symmetry into 2D. (Along arbitrary high-index zone axes and inclined to the layer normal of membrane forming proteins, there will be no useful projected symmetry in transmission images.) The recovery of 3D structures and their symmetries relies on electron tomography techniques, which use sets of transmission electron microscopy images.
Crystallographic image processing of high-resolution TEM images:
The origin refinement part of CIP relies on the definition of the plane symmetry group types as provided by the International Tables of Crystallography, where all symmetry equivalent positions in the unit cell and their respective site symmetries are listed along with systematic absences in reciprocal space. Besides plane symmetry groups p1, p3, p3m1 and p31m, all other plane group symmetries are centrosymmetric so that the origin refinement simplifies to the determination of the correct signs of the amplitudes of the Fourier coefficients.
Crystallographic image processing of high-resolution TEM images:
When crystallographic image processing is utilized in scanning probe microscopy, the symmetry groups to be considered are just the 17 plane space group types in their possible 21 settings.
Crystallographic processing of images that were recorded from 2D periodic arrays with other types of microscopes:
Because digitized 2D periodic images are, in the information theoretical approach, just data organized in 2D arrays of pixels, core features of Crystallographic Image Processing can be utilized independent of the type of microscope with which the images/data were recorded. The CIP technique has accordingly been applied (on the basis of the 2dx program) to atomic resolution Z-contrast images of Si-clathrates, as recorded in an aberration-corrected scanning transmission electron microscope. Images of 2D periodic arrays of flat-lying molecules on a substrate, as recorded with scanning tunneling microscopes, were also crystallographically processed utilizing the program CRISP.
**Protruding ear**
Protruding ear:
Prominent ear, otapostasis or bat ear is an abnormally protruding human ear. It may be unilateral or bilateral. The concha is large with poorly developed antihelix and scapha. It is the result of malformation of cartilage during primitive ear development in intrauterine life. The deformity can be corrected anytime after five years of age. The surgery is preferably done at the earliest possible age in order to avoid psychological distress. Correction by otoplasty involves changing the shape of the ear cartilage so that the ear is brought closer to the side of the head. The skin is not removed, but the shape of the cartilage is altered. The surgery does not affect hearing. It is done for cosmetic purposes only. The complications of the surgery, though rare, are keloid formation, hematoma formation, infection and asymmetry between the ears.
**Ceinture**
Ceinture:
Ceinture (French for 'belt' or 'girdle'; in reference to roads, a ring road) may refer to:
Petite ceinture:
- Chemin de fer de Petite Ceinture, a former circular railway in Paris
- Small Ring, Brussels, the inner ring road
Grande ceinture:
- Grande Ceinture line, a railway line around Paris
- Greater Ring, Brussels, the intermediate ring road
**Chuang Yin-ching**
Chuang Yin-ching:
Kenneth Chuang Yin-ching (Chinese: 莊銀清) is a Taiwanese epidemiologist. As of January 2020, he leads the Taiwan Centers for Disease Control (TCDC) Communicable Disease Control Medical Network.
Career:
Chuang earned a degree in medicine at Kaohsiung Medical University, and completed his residency at Taipei Veterans General Hospital. He specialized in epidemiology and infectious diseases while teaching at National Cheng Kung University. Chuang was the superintendent of Chi Mei Medical Center, Liouying branch.
Career:
COVID-19 pandemic in Taiwan Chuang rose to prominence during the COVID-19 pandemic in Taiwan. Chuang and two colleagues issued a level-2 travel alert for Wuhan, China on 16 January, based on their three-day on-the-ground experience in that city from 13 to 15 January 2020. They told a news conference in Taipei one day later that 30 percent of the Wuhan patients had had no direct exposure to the Huanan Seafood City market (HSCM), which the Chinese authorities had indicated as the epicenter of the outbreak. The Chinese had closed down the HSCM on 1 January. Chuang's revelation on 16 January predates by three days the Chinese confirmation of human-to-human transmission. On 20–21 January the World Health Organization sent a delegation to Wuhan, which reported on 22 January that human-to-human transmission was indeed occurring. The Chinese government allowed a total of ten foreign medical officials to visit, including two from Taiwan, one of whom was Chuang; the eight others were from Hong Kong and Macau. At the 16 January conference, Chuang remarked on the case of "a married couple infected in Wuhan. The husband worked at the market, but the wife, who had not recently been to the market due to limited mobility, might have contracted the illness from her husband." Chuang also was among the first to report that the SARS-CoV-2 infections were occurring in clusters. Chuang stated later, in an interview for The Daily Telegraph: Initially… the chairperson of the meeting, tried to deny human to human transmission but finally the person from the central government health authority said 'why do you give an old conclusion? Now the conclusion is that limited human to human transmission cannot be excluded'. For me that was very important information.
Career:
Chuang "received no response to his questions about why 13 infections could not be traced to the (HSCM) seafood market." The WHO declared a Public Health Emergency of International Concern (PHEIC) on 30 January.
**1,3-Benzodioxolyl-N-ethylbutanamine**
1,3-Benzodioxolyl-N-ethylbutanamine:
Ethylbenzodioxolylbutanamine (EBDB; Ethyl-J) is a lesser-known entactogen, stimulant, and psychedelic. It is the N-ethyl analogue of benzodioxolylbutanamine (BDB; "J"), and also the α-ethyl analogue of methylenedioxyethylamphetamine (MDEA; "Eve").
1,3-Benzodioxolyl-N-ethylbutanamine:
EBDB was first synthesized by Alexander Shulgin. In his book PiHKAL, the minimum dosage consumed was 90 mg, and the duration is unknown. EBDB produced few to no effects at the dosage range tested in PiHKAL, but at higher doses of several hundred milligrams it produces euphoric effects similar to those of methylbenzodioxolylbutanamine (MBDB; "Eden", "Methyl-J"), although milder and shorter lasting. Very little data exists about the pharmacological properties, metabolism, and toxicity of EBDB.
**Hypercyclic morphogenesis**
Hypercyclic morphogenesis:
Hypercyclic morphogenesis refers to the emergence of a higher order of self-reproducing structure, organization, or hierarchy within a system, first introduced by J. Barkley Rosser, Jr. in 1991 (Chap. 12). It involves combining the idea of the hypercycle, due to Manfred Eigen and Peter Schuster (1979), with that of morphogenesis, due to D'Arcy W. Thompson (1917). The hypercycle involves the problem in biochemistry of molecules combining in a self-reacting group that is able to stay together, posited by Eigen and Schuster as the foundation for the emergence of multi-cellular organisms. Thompson saw morphogenesis as a central part of the development of an organism, as cell differentiation led to new organs appearing as it develops and grows. Alan Turing (1952) would study the chemistry and mathematics involved in such a process, which would also be studied mathematically by René Thom (1972) in his formulation of catastrophe theory.
Hypercyclic morphogenesis:
Rosser suggested applications in political economy such as the emergence of the European Union out of the conscious actions of the leaders of its constituent nation states (1992), or the appearance of a higher level in an urban hierarchy during economic development (1994). It has been applied to the emergence of higher levels in an ecological hierarchy (Rosser, Folke, Günther, Isomäki, Perrings, and Puu, 1994), and it can be argued that the final stage of such a development for combined ecologic-economic systems would be the noosphere of Vladimir I. Vernadsky (1945).
**Heligimbal**
Heligimbal:
The Cineflex Heligimbal is a form of gimbal technology consisting of a motion-stabilized helicopter mount for motion picture cameras.
The technology, originally developed by the military, provides a high degree of motion stabilization and telephoto capabilities to achieve high-quality aerial shots despite the vibration inherent in helicopter flights which makes capturing high-definition video otherwise impossible. The gyro-stabilized system works with the operator using a joystick from within the helicopter to control the camera movements.
The BBC introduced the general public to this technology in the production of the first episode of its 2006 television series Planet Earth: "An innovative heli-gimbal (sic) stabilizing device supporting a tiny high definition camera on a helicopter delivers extensive rock-steady aerial footage of animals in remote landscapes, and allows for cutting and zooming between close-ups and extreme longshots".
**Gravitation (book)**
Gravitation (book):
Gravitation is a widely adopted textbook on Albert Einstein's general theory of relativity, written by Charles W. Misner, Kip S. Thorne, and John Archibald Wheeler. It was originally published by W. H. Freeman and Company in 1973 and reprinted by Princeton University Press in 2017. It is frequently abbreviated MTW (for its authors' last names). The cover illustration, drawn by Kenneth Gwin, is a line drawing of an apple with cuts in the skin to show the geodesics on its surface. The book contains 10 parts and 44 chapters, each beginning with a quotation. The bibliography has a long list of original sources and other notable books in the field. While this may not be considered the best introductory text because its coverage may overwhelm a newcomer, and even though parts of it are now out of date, it remains a highly valued reference for advanced graduate students and researchers.
Content:
Subject matter After a brief review of special relativity and flat spacetime, physics in curved spacetime is introduced and many aspects of general relativity are covered; particularly about the Einstein field equations and their implications, experimental confirmations, and alternatives to general relativity. Segments of history are included to summarize the ideas leading up to Einstein's theory. The book concludes by questioning the nature of spacetime and suggesting possible frontiers of research. Although the exposition on linearized gravity is detailed, one topic which is not covered is gravitoelectromagnetism. Some quantum mechanics is mentioned, but quantum field theory in curved spacetime and quantum gravity are not included.
Content:
The topics covered are broadly divided into two "tracks", the first contains the core topics while the second has more advanced content. The first track can be read independently of the second track. The main text is supplemented by boxes containing extra information, which can be omitted without loss of continuity. Margin notes are also inserted to annotate the main text.
Content:
The mathematics, primarily tensor calculus and differential forms in curved spacetime, is developed as required. An introductory chapter on spinors near the end is also given. There are numerous illustrations of advanced mathematical ideas such as alternating multilinear forms, parallel transport, and the orientation of the hypercube in spacetime. Mathematical exercises and physical problems are included for the reader to practice.
Content:
The prose in the book is conversational; the authors use plain language and analogies to everyday objects. For example, Lorentz transformed coordinates are described as a "squashed egg-crate" with an illustration. Tensors are described as "machines with slots" to insert vectors or one-forms, and containing "gears and wheels that guarantee the output" of other tensors.
Sign and unit conventions MTW uses the − + + + sign convention, and discourages the use of the + + + + metric with an imaginary time coordinate ict . In the front endpapers, the sign conventions for the Einstein field equations are established and the conventions used by many other authors are listed.
The book also uses geometrized units, in which the gravitational constant G and speed of light c are each set to 1. The back endpapers contain a table of unit conversions.
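As a worked example of such a conversion (using standard constant values, not taken from the book's table), setting $G=c=1$ turns a mass in kilograms into a length in meters via the factor $G/c^{2}$:

```latex
M_{\text{geom}} = \frac{G}{c^{2}}\,M ,
\qquad
\frac{G M_{\odot}}{c^{2}}
= \frac{(6.674\times 10^{-11})\,(1.989\times 10^{30})}{(2.998\times 10^{8})^{2}}\ \mathrm{m}
\approx 1.48\ \mathrm{km}.
```

Times are similarly converted to lengths by multiplying by $c$.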
Editions and translations:
The book has been reprinted in English 24 times. Hardback and softcover editions have been published. The original citation is Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald (1973), Gravitation, San Francisco: W. H. Freeman, ISBN 978-0-7167-0344-0. It has also been translated into other languages, including Russian (in three volumes), Chinese, and Japanese. The following is a recent reprinting with a new foreword and preface.
Editions and translations:
Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald; Kaiser, David I. (2017). Gravitation. Princeton University Press. ISBN 9780691177793. Reprinting.
Reviews:
The book is still considered influential in the physics community, with generally positive reviews, but with some criticism of the book's length and presentation style. To quote Ed Ehrlich: 'Gravitation' is such a prominent book on relativity that the initials of its authors MTW can be used by other books on relativity without explanation.
Reviews:
James Hartle notes in his book: Over thirty years since its publication, Gravitation is still the most comprehensive treatise on general relativity. An authoritative and complete discussion of almost any topic in the subject can be found within its 1300 pages. It also contains an extensive bibliography with references to original sources. Written by three twentieth-century masters of the subject, it set the style for many later texts on the subject, including this one.
Reviews:
Sean M. Carroll states in his own introductory text: The book that educated at least two generations of researchers in gravitational physics. Comprehensive and encyclopedic, the book is written in an often-idiosyncratic way that you will either like or not.
Pankaj Sharan writes: This large-sized (20 cm × 25 cm), 1272-page book begins at the very beginning and has everything on gravity (up to 1973). There are hundreds of diagrams and special boxes for additional explanations, exercises, historical and bibliographical asides and bibliographical details.
Reviews:
Ray D'Inverno suggests: I would also recommend looking at the relevant sections of the text of Misner, Thorne, and Wheeler, known for short as ‘MTW’. MTW is a rich resource and is certainly worth consulting for a whole string of topics. However, its style is not perhaps for everyone (I find it somewhat verbose in places and would not recommend it for a first course in general relativity). MTW has a very extensive bibliography.
Reviews:
Many texts on general relativity refer to it in their bibliographies or footnotes. In addition to the four given, other modern references include George Efstathiou et al., Bernard F. Schutz, James Foster et al., Robert Wald, and Stephen Hawking et al. Other prominent physics books also cite it. For example, Classical Mechanics (second edition) by Herbert Goldstein, who comments: This massive treatise (1279 pages! (the pun is irresistible)) is to be praised for the great efforts made to help the reader through the maze. The pedagogic apparatus includes separately marked tracks, boxes of various kinds, marginal comments, and cleverly designed diagrams.
Reviews:
The third edition of Goldstein's text still lists Gravitation as an "excellent" resource on field theory in its selected bibliography. A 2019 review of another work by Gerard F. Gilmore opened: "Every teacher of General Relativity depends heavily on two texts: one, the massive ‘Gravitation’ by Misner, Thorne and Wheeler, the second the diminutive ‘The Meaning of Relativity’ by Einstein."
**Alpha-N-acetylneuraminate alpha-2,8-sialyltransferase**
Alpha-N-acetylneuraminate alpha-2,8-sialyltransferase:
In enzymology, an alpha-N-acetylneuraminate alpha-2,8-sialyltransferase (EC 2.4.99.8) is an enzyme that catalyzes the chemical reaction: CMP-N-acetylneuraminate + alpha-N-acetylneuraminyl-2,3-beta-D-galactosyl-R ⇌ CMP + alpha-N-acetylneuraminyl-2,8-alpha-N-acetylneuraminyl-2,3-beta-D-galactosyl-R. Thus, the two substrates of this enzyme are CMP-N-acetylneuraminate and alpha-N-acetylneuraminyl-2,3-beta-D-galactosyl-R, whereas its two products are CMP and alpha-N-acetylneuraminyl-2,8-alpha-N-acetylneuraminyl-2,3-beta-D-galactosyl-R. This enzyme participates in 4 metabolic pathways: glycosphingolipid biosynthesis - neo-lactoseries, glycosphingolipid biosynthesis - globoseries, glycosphingolipid biosynthesis - ganglioseries, and glycan structures - biosynthesis 2.
Alpha-N-acetylneuraminate alpha-2,8-sialyltransferase:
This enzyme belongs to the family of transferases, specifically those glycosyltransferases that do not transfer hexosyl or pentosyl groups. The systematic name of this enzyme class is CMP-N-acetylneuraminate:alpha-N-acetylneuraminyl-2,3-beta-D-galactoside alpha-2,8-N-acetylneuraminyltransferase. Other names in common use include cytidine monophosphoacetylneuraminate-ganglioside GM3 alpha-2,8-sialyltransferase, ganglioside GD3 synthase, ganglioside GD3 synthetase sialyltransferase, CMP-NeuAc:LM1(alpha2-8) sialyltransferase, GD3 synthase, and SAT-2.
**Pyloric stenosis**
Pyloric stenosis:
Pyloric stenosis is a narrowing of the opening from the stomach to the first part of the small intestine (the pylorus). Symptoms include projectile vomiting without the presence of bile. This most often occurs after the baby is fed. The typical age at which symptoms become obvious is two to twelve weeks old. The cause of pyloric stenosis is unclear. Risk factors in babies include birth by cesarean section, preterm birth, bottle feeding, and being first born. The diagnosis may be made by feeling an olive-shaped mass in the baby's abdomen. This is often confirmed with ultrasound. Treatment initially begins by correcting dehydration and electrolyte problems. This is then typically followed by surgery, although some treat the condition without surgery by using atropine. Results are generally good, both in the short term and in the long term. About one to two per 1,000 babies are affected, and males are affected about four times more often than females. The condition is very rare in adults. The first description of pyloric stenosis was in 1888, with surgical management first carried out in 1912 by Conrad Ramstedt. Before surgical treatment most babies died.
Signs and symptoms:
Babies with this condition usually present any time in the first weeks to 6 months of life with progressively worsening vomiting. It is more likely to affect the first-born, with males affected more commonly than females at a ratio of 4 to 1. The vomiting is often described as non-bile-stained ("non-bilious") and "projectile vomiting", because it is more forceful than the usual spitting up (gastroesophageal reflux) seen at this age. Some infants present with poor feeding and weight loss, but others demonstrate normal weight gain. Dehydration may occur, which causes a baby to cry without having tears and to produce fewer wet or dirty diapers due to not urinating for hours or for a few days. Symptoms usually begin between 3 and 12 weeks of age. Findings include epigastric fullness with visible peristalsis in the upper abdomen from the infant's left to right. Constant hunger, belching, and colic are other possible signs that the baby is unable to eat properly.
Cause:
Rarely, infantile pyloric stenosis can occur as an autosomal dominant condition. It is uncertain whether it is a congenital anatomic narrowing or a functional hypertrophy of the pyloric sphincter muscle.
Pathophysiology:
The gastric outlet obstruction due to the hypertrophic pylorus impairs emptying of gastric contents into the duodenum. As a consequence, all ingested food and gastric secretions can only exit via vomiting, which can be of a projectile nature. While the exact cause of the hypertrophy remains unknown, one study suggested that neonatal hyperacidity may be involved in the pathogenesis. This physiological explanation for the development of clinical pyloric stenosis at around 4 weeks and its spontaneous long-term cure without surgery if treated conservatively has recently been further reviewed. Persistent vomiting results in loss of stomach acid (hydrochloric acid). The vomited material does not contain bile because the pyloric obstruction prevents entry of duodenal contents (containing bile) into the stomach. The chloride loss results in a low blood chloride level, which impairs the kidney's ability to excrete bicarbonate. This is the factor that prevents correction of the alkalosis, leading to metabolic alkalosis. A secondary hyperaldosteronism develops due to the decreased blood volume. The high aldosterone levels cause the kidneys to avidly retain Na+ (to correct the intravascular volume depletion) and excrete increased amounts of K+ into the urine (resulting in a low blood level of potassium). The body's compensatory response to the metabolic alkalosis is hypoventilation, resulting in an elevated arterial pCO2.
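The compensatory hypoventilation follows from the Henderson–Hasselbalch relation for the bicarbonate buffer (a standard clinical formula, with $[\mathrm{HCO_3^-}]$ in mmol/L and $p\mathrm{CO_2}$ in mmHg): raising $p\mathrm{CO_2}$ offsets the elevated bicarbonate and moves the pH back toward normal.

```latex
\mathrm{pH} = 6.1 + \log_{10}\!\left(\frac{[\mathrm{HCO_3^-}]}{0.03 \times p\mathrm{CO_2}}\right)
```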
Diagnosis:
Diagnosis is via a careful history and physical examination, often supplemented by radiographic imaging studies. Pyloric stenosis should be suspected in any young infant with severe vomiting. On physical exam, palpation of the abdomen may reveal a mass in the epigastrium. This mass, which consists of the enlarged pylorus, is referred to as the 'olive', and is sometimes evident after the infant is given formula to drink. Rarely, there are peristaltic waves that may be felt or seen (video on NEJM) due to the stomach trying to force its contents past the narrowed pyloric outlet. Most cases of pyloric stenosis are diagnosed/confirmed with ultrasound, if available, showing the thickened pylorus and non-passage of gastric contents into the proximal duodenum. Muscle wall thickness of 3 millimeters (mm) or greater and pyloric channel length of 15 mm or greater are considered abnormal in infants younger than 30 days. Gastric contents should not be seen passing through the pylorus; if they are, pyloric stenosis should be excluded and other differential diagnoses such as pylorospasm should be considered. The positions of the superior mesenteric artery and superior mesenteric vein should be noted, because altered positions of these two vessels would be suggestive of intestinal malrotation instead of pyloric stenosis. Although the baby is exposed to radiation, an upper GI series (X-rays taken after the baby drinks a special contrast agent) can be diagnostic by showing the pylorus with an elongated, narrow lumen and a dent in the duodenal bulb. This phenomenon is called the "string sign" or the "railroad track/double track sign" on X-rays after contrast is given.
Plain X-rays of the abdomen sometimes show a dilated stomach. Although upper gastrointestinal endoscopy would demonstrate pyloric obstruction, physicians would find it difficult to differentiate accurately between hypertrophic pyloric stenosis and pylorospasm. Blood tests will reveal low blood levels of potassium and chloride in association with an increased blood pH and high blood bicarbonate level due to loss of stomach acid (which contains hydrochloric acid) from persistent vomiting. There will be exchange of extracellular potassium with intracellular hydrogen ions in an attempt to correct the pH imbalance. These findings can be seen with severe vomiting from any cause.
Treatment:
Infantile pyloric stenosis is typically managed with surgery; very few cases are mild enough to be treated medically.
Treatment:
The danger of pyloric stenosis comes from the dehydration and electrolyte disturbance rather than the underlying problem itself. Therefore, the baby must be initially stabilized by correcting the dehydration and the abnormally high blood pH, seen in combination with low chloride levels, with IV fluids. This can usually be accomplished in about 24–48 hours. Intravenous and oral atropine may be used to treat pyloric stenosis. It has a success rate of 85–89% compared to nearly 100% for pyloromyotomy; however, it requires prolonged hospitalization, skilled nursing, and careful follow-up during treatment. It might be an alternative to surgery in children who have contraindications for anesthesia or surgery, or in children whose parents do not want surgery.
Treatment:
Surgery The definitive treatment of pyloric stenosis is surgical pyloromyotomy, known as Ramstedt's procedure (dividing the muscle of the pylorus to open up the gastric outlet). This surgery can be done through a single incision (usually 3–4 cm long) or laparoscopically (through several tiny incisions), depending on the surgeon's experience and preference. Today, the laparoscopic technique has largely supplanted the traditional open repairs, which involved either a tiny circular incision around the navel or the Ramstedt procedure. Compared to the older open techniques, the complication rate is equivalent, except for a markedly lower risk of wound infection. This is now considered the standard of care at the majority of children's hospitals across the US, although some surgeons still perform the open technique. Following repair, the small 3 mm incisions are hard to see.
Treatment:
The vertical incision, pictured and listed above, is no longer usually required, though many incisions have been horizontal in the past years. Once the stomach can empty into the duodenum, feeding can begin again. Some vomiting may be expected during the first days after surgery as the gastrointestinal tract settles. Rarely, the myotomy procedure performed is incomplete and projectile vomiting continues, requiring repeat surgery. Pyloric stenosis generally has no long term side-effects or impact on the child's future.
Epidemiology:
Males are more commonly affected than females, with firstborn males affected about four times as often, and there is a genetic predisposition for the disease. It is commonly associated with people of Scandinavian ancestry, and has multifactorial inheritance patterns. Pyloric stenosis is more common in Caucasians than Hispanics, Blacks, or Asians. The incidence is 2.4 per 1000 live births in Caucasians, 1.8 in Hispanics, 0.7 in Blacks, and 0.6 in Asians. It is also less common amongst children of mixed-race parents. Caucasian male babies with blood type B or O are more likely than other types to be affected. Infants exposed to erythromycin are at increased risk for developing hypertrophic pyloric stenosis, especially when the drug is taken around two weeks of life, and possibly in late pregnancy and through breastmilk in the first two weeks of life.
**Battery (baseball)**
Battery (baseball):
In baseball, the battery is the pitcher and the catcher, who may also be called batterymen, or batterymates in relation to one another.
History:
Origins of the term The use of the word 'battery' in baseball was first coined by Henry Chadwick in the 1860s in reference to the firepower of a team's pitching staff and inspired by the artillery batteries then in use in the American Civil War. Later, the term evolved to indicate the combined effectiveness of pitcher and catcher.
History:
Pitching to a preferred batterymate Throughout the history of baseball, although teams have typically carried multiple catchers, star pitchers have often preferred the familiarity of working consistently with a single batterymate. In the early 20th century, some prominent pitchers were known to have picked their favorite catchers. Sportswriter Fred Lieb recalls the batteries of Christy Mathewson / Frank Bowerman beginning in 1899 with the New York Giants, Jack Coombs / Jack Lapp beginning in 1908 with the Philadelphia Athletics, Cy Young / Lou Criger gaining the greatest attention in 1901 with the Boston Americans (later the Red Sox), and Grover Cleveland Alexander / Bill Killefer beginning in 1911 with the Philadelphia Phillies. Other successful batteries were Ed Walsh / Billy Sullivan beginning in 1904, along with Walter Johnson / Muddy Ruel and Dazzy Vance / Hank DeBerry, both starting in 1923. In 1976, several major league pitchers chose their preferred catchers, a notion that had fallen out of practice for some decades. For instance, catcher Bob Boone of the Philadelphia Phillies, though one of the best catchers of his day, was replaced with Tim McCarver at the request of pitcher Steve Carlton. The Carlton/McCarver combination worked well in 32 out of Carlton's 35 games that season, plus one playoff game. The two had previously been batterymates for four years (1966–69) with the St. Louis Cardinals. Another battery-by-choice was superstitious rookie pitcher Mark Fidrych, who was new to the Detroit Tigers in 1976 and insisted on rookie catcher Bruce Kimm behind the plate. The Fidrych/Kimm combination started all 29 of Fidrych's 1976 season games. The two continued as a battery through 1977. Knuckleballers have often preferred pitching to "personal" batterymates due to the difficulty of catching the unusual pitch. One notable example was Boston Red Sox pitcher Tim Wakefield and his preferred catcher, Doug Mirabelli.
Most starts:
The table below shows batterymates who, as of September 20, 2022, have appeared in more than 200 starts together since 1914. Boldface indicates active teammates.
Most starts:
Especially notable are the five Hall of Fame batteries below, including Lefty Grove (ranked by Bill James as the second-greatest pitcher of all time) and Mickey Cochrane (ranked by James as the eighth-greatest catcher) of the 1925–1933 Philadelphia Athletics, and Yogi Berra and Whitey Ford, who appeared in multiple World Series together for the New York Yankees between 1950 and 1963.
Most starts:
Member of the Baseball Hall of Fame
Most no-hitters:
The table below lists the battery combinations that share the record for most major league no-hitters (2).
(*) Catchers Silver Flint and King Kelly shared catching duties for Corcoran's August 19, 1880 no-hitter. Member of the Baseball Hall of Fame
Sibling batteries:
The following chart of major league sibling batteries lists pitcher/catcher siblings who played on the same major league team during a single major league season. The pair may or may not have performed as a battery in an actual major league game. Unique among those listed below are Mort and Walker Cooper, who formed the National League's starting battery at both the 1942 and 1943 Major League Baseball All-Star Games, and also appeared as a battery in the 1942, 1943, and 1944 World Series, the only sibling battery to achieve either feat.
Sibling batteries:
Member of the Baseball Hall of Fame
Other records and firsts:
Most games The battery that appeared in the most games together was Mariano Rivera and Jorge Posada, with 598 games together for the New York Yankees between 1995 and 2011.
Most wins The record for most team wins by a starting battery is 213 by Adam Wainwright and Yadier Molina.
Most innings Red Faber and Ray Schalk, who played together for the Chicago White Sox between 1914 and 1928, recorded the most total innings as a battery (2553.2).
Single-game records Madison Bumgarner and Buster Posey of the San Francisco Giants became the major league's first battery to hit grand slams in the same game when they accomplished the feat on July 13, 2014 against the Arizona Diamondbacks. The home run was pitcher Bumgarner's second grand slam of the season (April 11).
Other records and firsts:
First Black battery Pitcher George Stovey and catcher Moses Fleetwood Walker formed the first Black battery in professional baseball history when they teamed up for the 1887 Newark Little Giants of the International Association. The tandem recorded ten consecutive wins to begin the season before the Chicago White Stockings refused to take the field on July 15, leading to the league's implementation of the color line.
Other records and firsts:
Father-son batteries Frank Duncan, Jr. and his son, Frank Duncan III, of the 1941 Kansas City Monarchs are thought to be the only father-son battery in major league history. In 2012, former major leaguer Roger Clemens came out of retirement to pitch for the minor league Sugar Land Skeeters of the Atlantic League of Professional Baseball, and formed a battery with his son Koby Clemens in a game on September 7.