**BMS-906024**
BMS-906024 is a drug with a benzodiazepine structure, developed by Bristol-Myers Squibb and disclosed at the spring 2013 American Chemical Society meeting in New Orleans, intended to treat breast, lung, and colon cancers as well as leukemia. The drug works as a pan-Notch inhibitor. The structure is one of a set patented in 2012, and it is being studied in clinical trials. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Marine Ecosystem Assessment for the Southern Ocean**
The Marine Ecosystem Assessment for the Southern Ocean (MEASO) is a project led by researchers at the Antarctic Climate & Ecosystems Cooperative Research Centre (ACE CRC) as a part of an international project of Integrating Climate and Ecosystem Dynamics (ICED).
Aims:
MEASO aims to assess long-term status and trends in Southern Ocean biota and food webs. Rather than giving specific scientific requirements for year-to-year management, it delivers long-term assessments, continuing previous work such as the Biogeographic Atlas of the Southern Ocean and the Antarctic Climate Change and the Environment report.
MEASO 2018:
MEASO 2018 was an international conference held in Hobart in early April 2018, which included a day-long policy forum. Background to the conference, including the program and abstracts, can be found on the www.MEASO2018.aq conference web site. The conference was supported by the Integrated Marine Biosphere Research (IMBeR) programmes ICED and CLIOTOP, the Southern Ocean Observing System (SOOS), the Scientific Committee on Antarctic Research, and the Australian Antarctic Program, with the aims of sharing science, enhancing community input into the design and planning of the MEASO, and developing a work plan.
175 people attended from 23 countries, including scientists, policy-makers, and representatives of the fishing industry and environmental NGOs. The conference focused on four themes: (i) assessments of parts of the ecosystem; (ii) responses of biota to changing environments; (iii) methods for modelling habitats, species and food webs; and (iv) design of observing systems to measure change in the ecosystem. The day-long policy forum discussed how to better link scientists, policy makers, industry, environmental NGOs and the public at large.
Of the 175 participants, 32% were early-career researchers, and the gender balance was 43% female to 57% male.
MEASO 2019:
A MEASO workshop was held at the World Wide Fund for Nature's British headquarters in Woking, UK, from 3–7 June 2019. It focused on planning the delivery of the first MEASO for 2020. The workshop was attended by 30 international scientists from 12 countries, including 7 early-career scientists.
Publication of the MEASO:
In the initial stages of publication for MEASO, an 'audit' of the surveys, data and models available to assess the status and trends of Antarctic and Southern Ocean species, food webs and ecosystems was published in Brasier et al. 2019. An encyclopaedic resource produced by the MEASO team is the Southern Ocean Knowledge and Information (SOKI) wiki. Its pages include brief fact sheets with numerical and statistical information on different Antarctic and Southern Ocean species and taxonomic groups.
**Junior blood group system**
The Junior blood group system (or JR) is a human blood group defined by the presence or absence of the Jr(a) antigen, a high-frequency antigen that is found on the red blood cells of most individuals. People with the rare Jr(a) negative blood type can develop anti-Jr(a) antibodies, which may cause transfusion reactions and hemolytic disease of the newborn on subsequent exposures. Jr(a) negative blood is most common in people of Japanese heritage.
Genetics:
The gene ABCG2, located on chromosome 4q22.1, encodes an ATP-binding cassette transporter protein that carries the Jr(a) antigen. The Jr(a) negative blood type is inherited in an autosomal recessive manner: individuals who are homozygous for a null mutation of ABCG2 express this phenotype. Homozygosity for certain missense mutations, or heterozygosity for a missense mutation and a null mutation, can result in a weak phenotype with decreased expression of Jr(a) antigen. As of 2018, over 25 null and weak alleles of ABCG2 have been described.
Epidemiology:
The highest rates of the Jr(a) negative blood type have been reported in Japan, where its prevalence ranges from 1 in 60 in the Niigata region to 1 in 3800 in the Tokyo region. Additionally, a number of cases have been documented in European Romani populations. The Jr(a) negative blood type is very rare in America: a study of 9,545 Americans failed to identify any Jr(a) negative individuals.
Clinical significance:
Anti-Jr(a) antibodies are generally composed of immunoglobulin G and develop when individuals are exposed to Jr(a) positive blood through pregnancy or blood transfusion. Some cases of anti-Jr(a) have been reported in patients who have not been previously transfused or pregnant. Jr(a) is more strongly expressed on cord blood cells than on adult red blood cells, and anti-Jr(a) has been reported to cause hemolytic disease of the newborn (HDN), including fatal cases. The antibody has also been implicated in delayed hemolytic transfusion reactions. However, the clinical significance of the antibody is variable: in some cases, individuals with anti-Jr(a) have been transfused with Jr(a) positive blood or have given birth to Jr(a) positive babies without incident. It is recommended to transfuse individuals with anti-Jr(a) with Jr(a) negative blood if the antibody titer is high. In other cases, "least incompatible" blood (the blood unit that gives the weakest reactions during crossmatching) may be suitable. It is difficult to secure Jr(a) negative donor blood due to the rarity of this blood type. ABCG2 is a uric acid transporter, and the Jr(a) negative phenotype is associated with gout in Japanese populations.
Laboratory testing:
An individual's Junior blood type can be determined by serologic testing, which uses a monoclonal antibody reagent directed against the Jr(a) antigen. DNA testing may be impractical due to the high number of mutations affecting Jr(a) expression. Anti-Jr(a) antibodies are most easily detected by the indirect antiglobulin test, and their reactivity is enhanced by enzyme treatment with ficin or papain.
History:
The Junior blood group system was discovered in 1970 by researchers Stroup and MacIllroy, who reported on five patients whose blood was incompatible with all samples tested except each other's. They named the causative antigen "JR" after Rose Jacobs, one of the five patients; the common name "Junior" is in fact a misnomer. In 2012, two research groups independently identified ABCG2 as the basis of the Junior blood group system. The Junior system was officially designated a blood group by the International Society of Blood Transfusion that year.
**Foreign key**
A foreign key is a set of attributes in a table that refers to the primary key of another table, linking the two tables. In the context of relational databases, a foreign key is a set of attributes subject to a certain kind of inclusion dependency constraint: the tuples consisting of the foreign key attributes in one relation, R, must also exist in some other (not necessarily distinct) relation, S, and those attributes must also be a candidate key in S. In simpler words, a foreign key is a set of attributes that references a candidate key. For example, a table called TEAM may have an attribute, MEMBER_NAME, which is a foreign key referencing a candidate key, PERSON_NAME, in the PERSON table. Since MEMBER_NAME is a foreign key, any value existing as the name of a member in TEAM must also exist as a person's name in the PERSON table; in other words, every member of a TEAM is also a PERSON.
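The TEAM/PERSON example can be made concrete with a short sketch. It uses Python's built-in sqlite3 module purely as a convenient SQL engine; the table and column names come from the example above, while the row values are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite leaves FK enforcement off by default
conn.executescript("""
    CREATE TABLE PERSON (
        PERSON_NAME TEXT PRIMARY KEY       -- the candidate key being referenced
    );
    CREATE TABLE TEAM (
        TEAM_NAME   TEXT,
        MEMBER_NAME TEXT REFERENCES PERSON (PERSON_NAME)   -- the foreign key
    );
""")
conn.execute("INSERT INTO PERSON VALUES ('Alice')")
conn.execute("INSERT INTO TEAM VALUES ('Blue', 'Alice')")  # accepted: Alice is a PERSON

# A member who is not a PERSON violates the inclusion dependency:
try:
    conn.execute("INSERT INTO TEAM VALUES ('Blue', 'Bob')")
    violated = False
except sqlite3.IntegrityError:
    violated = True
```

The second insert is rejected, which is exactly the "every member of a TEAM is also a PERSON" guarantee.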
Important points to note:
- The referenced relation must already have been created.
- The referenced attribute must be part of the primary key of the referenced relation.
- The data type and size of the referenced and referencing attributes must be the same.
Summary:
The table containing the foreign key is called the child table, and the table containing the candidate key is called the referenced or parent table. In database relational modeling and implementation, a candidate key is a set of zero or more attributes, the values of which are guaranteed to be unique for each tuple (row) in a relation. The value or combination of values of candidate key attributes for any tuple cannot be duplicated for any other tuple in that relation.
Since the purpose of the foreign key is to identify a particular row of the referenced table, it is generally required that the foreign key is equal to the candidate key in some row of the primary table, or else has no value (the NULL value). This rule is called a referential integrity constraint between the two tables.
Because violations of these constraints can be the source of many database problems, most database management systems provide mechanisms to ensure that every non-null foreign key corresponds to a row of the referenced table. For example, consider a database with two tables: a CUSTOMER table that includes all customer data and an ORDER table that includes all customer orders. Suppose the business requires that each order must refer to a single customer. To reflect this in the database, a foreign key column is added to the ORDER table (e.g., CUSTOMERID), which references the primary key of CUSTOMER (e.g. ID). Because the primary key of a table must be unique, and because CUSTOMERID only contains values from that primary key field, we may assume that, when it has a value, CUSTOMERID will identify the particular customer which placed the order. However, this can no longer be assumed if the ORDER table is not kept up to date when rows of the CUSTOMER table are deleted or the ID column is altered, and working with these tables may become more difficult. Many real-world databases work around this problem by 'inactivating' rather than physically deleting master-table foreign keys, or by complex update programs that modify all references to a foreign key when a change is needed.
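The CUSTOMER/ORDER scenario can be sketched the same way (again via sqlite3; the names follow the example and the data is invented). With the constraint declared, the DBMS itself refuses to delete a customer who still has orders, so CUSTOMERID can never be left dangling:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.executescript("""
    CREATE TABLE CUSTOMER (ID INTEGER PRIMARY KEY, NAME TEXT);
    CREATE TABLE "ORDER" (                      -- quoted: ORDER is a reserved word
        ORDERNO    INTEGER PRIMARY KEY,
        CUSTOMERID INTEGER REFERENCES CUSTOMER (ID)
    );
    INSERT INTO CUSTOMER VALUES (1, 'Acme');
    INSERT INTO "ORDER"  VALUES (100, 1);
""")

# Under the default referential action, deleting a still-referenced customer fails:
try:
    db.execute("DELETE FROM CUSTOMER WHERE ID = 1")
    delete_blocked = False
except sqlite3.IntegrityError:
    delete_blocked = True
```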
Foreign keys play an essential role in database design. One important part of database design is making sure that relationships between real-world entities are reflected in the database by references, using foreign keys to refer from one table to another.
Another important part of database design is database normalization, in which tables are broken apart and foreign keys make it possible for them to be reconstructed. Multiple rows in the referencing (or child) table may refer to the same row in the referenced (or parent) table. In this case, the relationship between the two tables is called a one-to-many relationship between the referencing table and the referenced table.
In addition, the child and parent table may, in fact, be the same table, i.e. the foreign key refers back to the same table. Such a foreign key is known in SQL:2003 as a self-referencing or recursive foreign key. In database management systems, this is often accomplished by linking a first and second reference to the same table.
A table may have multiple foreign keys, and each foreign key can have a different parent table. Each foreign key is enforced independently by the database system. Therefore, cascading relationships between tables can be established using foreign keys.
A foreign key is defined as an attribute or set of attributes in a relation whose values match a primary key in another relation. The syntax to add such a constraint to an existing table is defined in SQL:2003 as shown below. Omitting the column list in the REFERENCES clause implies that the foreign key shall reference the primary key of the referenced table.
Likewise, foreign keys can be defined as part of the CREATE TABLE SQL statement.
If the foreign key is a single column only, the column can be marked as such directly in its definition. Foreign keys can also be defined through a stored procedure statement, with parameters such as the following:
- child_table: the name of the table or view that contains the foreign key to be defined.
- parent_table: the name of the table or view that has the primary key to which the foreign key applies. The primary key must already be defined.
- col3 and col4: the names of the columns that make up the foreign key. The foreign key must have at least one column and at most eight columns.
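A two-column foreign key using the placeholder names from the text (child_table, parent_table, col3, col4) can be sketched as below. SQLite, used here to check the statements, does not support ALTER TABLE ... ADD CONSTRAINT, so the SQL:2003 ALTER TABLE form appears as a comment and the equivalent inline form is executed; all names and values are illustrative.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.executescript("""
    CREATE TABLE parent_table (
        col1 INTEGER,
        col2 INTEGER,
        PRIMARY KEY (col1, col2)
    );
    -- SQL:2003 ALTER TABLE form (not supported by SQLite):
    --   ALTER TABLE child_table
    --       ADD CONSTRAINT fk_name
    --       FOREIGN KEY (col3, col4) REFERENCES parent_table (col1, col2);
    CREATE TABLE child_table (
        col3 INTEGER,
        col4 INTEGER,
        FOREIGN KEY (col3, col4) REFERENCES parent_table (col1, col2)
    );
    INSERT INTO parent_table VALUES (1, 2);
    INSERT INTO child_table  VALUES (1, 2);   -- matches a parent row, so accepted
""")
tables = sorted(r[0] for r in db.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"))
```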
Referential actions:
Because the database management system enforces referential constraints, it must ensure data integrity if rows in a referenced table are to be deleted (or updated). If dependent rows in referencing tables still exist, those references have to be considered. SQL:2003 specifies five different referential actions that shall take place in such occurrences: CASCADE, RESTRICT, NO ACTION, SET NULL and SET DEFAULT.

CASCADE: Whenever rows in the parent (referenced) table are deleted (or updated), the respective rows of the child (referencing) table with a matching foreign key column will be deleted (or updated) as well. This is called a cascade delete (or update).
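A cascade delete can be sketched as follows (sqlite3 again; the table names and values are illustrative). Deleting the parent row silently removes its referencing child rows:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES parent (id) ON DELETE CASCADE
    );
    INSERT INTO parent VALUES (1);
    INSERT INTO child  VALUES (10, 1);
    INSERT INTO child  VALUES (11, 1);
""")

# Deleting the parent row cascades to both referencing child rows:
db.execute("DELETE FROM parent WHERE id = 1")
remaining_children = db.execute("SELECT COUNT(*) FROM child").fetchone()[0]
```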
RESTRICT: A value cannot be updated or deleted when a row exists in a referencing or child table that references the value in the referenced table. Similarly, a row cannot be deleted as long as there is a reference to it from a referencing or child table.
To understand RESTRICT (and CASCADE) better, it may be helpful to notice the following difference, which might not be immediately clear. The referential action CASCADE modifies the "behavior" of the (child) table itself where the word CASCADE is used. For example, ON DELETE CASCADE effectively says "When the referenced row is deleted from the other table (master table), then delete it also from me". However, the referential action RESTRICT modifies the "behavior" of the master table, not the child table, although the word RESTRICT appears in the child table and not in the master table. So, ON DELETE RESTRICT effectively says: "When someone tries to delete the row from the other table (master table), prevent deletion from that other table (and, of course, also don't delete from me, but that's not the main point here)." RESTRICT is not supported by Microsoft SQL Server 2012 and earlier.
NO ACTION: NO ACTION and RESTRICT are very much alike. The main difference between NO ACTION and RESTRICT is that with NO ACTION the referential integrity check is done after trying to alter the table. RESTRICT does the check before trying to execute the UPDATE or DELETE statement. Both referential actions act the same if the referential integrity check fails: the UPDATE or DELETE statement will result in an error.
In other words, when an UPDATE or DELETE statement is executed on the referenced table using the referential action NO ACTION, the DBMS verifies at the end of the statement execution that none of the referential relationships are violated. This is different from RESTRICT, which assumes at the outset that the operation will violate the constraint. Using NO ACTION, the triggers or the semantics of the statement itself may yield an end state in which no foreign key relationships are violated by the time the constraint is finally checked, thus allowing the statement to complete successfully.
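SQLite cannot separate NO ACTION from RESTRICT at the statement level, but its deferred constraint mode illustrates the same principle of checking late rather than up front: a foreign key declared DEFERRABLE INITIALLY DEFERRED lets a transaction pass through a temporarily violating state, as long as the violation is repaired before the check finally runs at commit time. The sketch below (illustrative names throughout) inserts a dangling child row and then the missing parent row inside one transaction:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.isolation_level = None                 # autocommit; we manage BEGIN/COMMIT ourselves
db.execute("PRAGMA foreign_keys = ON")
db.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES parent (id) DEFERRABLE INITIALLY DEFERRED
    );
    INSERT INTO parent VALUES (1);
""")

db.execute("BEGIN")
db.execute("INSERT INTO child VALUES (10, 2)")   # dangles for the moment...
db.execute("INSERT INTO parent VALUES (2)")      # ...but is repaired in time
db.execute("COMMIT")                             # deferred check passes here
final_parent = db.execute("SELECT parent_id FROM child").fetchone()[0]
```

With an immediate (RESTRICT-style) check, the first INSERT would have failed on the spot.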
SET NULL, SET DEFAULT: In general, the action taken by the DBMS for SET NULL or SET DEFAULT is the same for both ON DELETE and ON UPDATE: the value of the affected referencing attributes is changed to NULL for SET NULL, and to the specified default value for SET DEFAULT.
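SET NULL can be sketched in the same style (illustrative names): after the parent row is deleted, the child row survives but its referencing attribute becomes NULL.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES parent (id) ON DELETE SET NULL
    );
    INSERT INTO parent VALUES (1);
    INSERT INTO child  VALUES (10, 1);
""")

db.execute("DELETE FROM parent WHERE id = 1")
# The child row survives; its referencing attribute is now NULL:
orphan_value = db.execute("SELECT parent_id FROM child WHERE id = 10").fetchone()[0]
```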
Triggers: Referential actions are generally implemented as implied triggers (i.e. triggers with system-generated names, often hidden). As such, they are subject to the same limitations as user-defined triggers, and their order of execution relative to other triggers may need to be considered; in some cases it may become necessary to replace the referential action with its equivalent user-defined trigger to ensure proper execution order, or to work around mutating-table limitations.
Another important limitation appears with transaction isolation: your changes to a row may not be able to fully cascade because the row is referenced by data your transaction cannot "see", and therefore cannot cascade onto. An example: while your transaction is attempting to renumber a customer account, a simultaneous transaction is attempting to create a new invoice for that same customer; while a CASCADE rule may fix all the invoice rows your transaction can see to keep them consistent with the renumbered customer row, it will not reach into another transaction to fix the data there. Because the database cannot guarantee consistent data when the two transactions commit, one of them will be forced to roll back (often on a first-come, first-served basis).
Example:
As a first example to illustrate foreign keys, suppose an accounts database has a table with invoices and each invoice is associated with a particular supplier. Supplier details (such as name and address) are kept in a separate table; each supplier is given a 'supplier number' to identify it. Each invoice record has an attribute containing the supplier number for that invoice. Then, the 'supplier number' is the primary key in the Supplier table. The foreign key in the Invoice table points to that primary key. The relational schema is the following. Primary keys are marked in bold, and foreign keys are marked in italics.
Supplier (**SupplierNumber**, Name, Address)
Invoice (**InvoiceNumber**, Text, *SupplierNumber*)

The corresponding Data Definition Language statement is as follows.
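A minimal DDL matching the schema above can be sketched as below (checked here through Python's sqlite3 module; the NOT NULL choices and the sample rows are assumptions added for illustration, not part of the original schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.executescript("""
    CREATE TABLE Supplier (
        SupplierNumber INTEGER PRIMARY KEY,
        Name           TEXT NOT NULL,
        Address        TEXT
    );
    CREATE TABLE Invoice (
        InvoiceNumber  INTEGER PRIMARY KEY,
        Text           TEXT,
        SupplierNumber INTEGER NOT NULL REFERENCES Supplier (SupplierNumber)
    );
    INSERT INTO Supplier VALUES (42, 'Acme Widgets', '123 Main St');
    INSERT INTO Invoice  VALUES (1001, 'Ten widgets', 42);
""")
# The foreign key lets each invoice be joined back to its supplier:
supplier_of_first_invoice = db.execute("""
    SELECT s.Name FROM Invoice i
    JOIN Supplier s ON i.SupplierNumber = s.SupplierNumber
""").fetchone()[0]
```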
**Accommodative excess**
In ophthalmology, accommodative excess (also known as excessive accommodation or accommodation excess) occurs when an individual uses more than the normal accommodation (focusing on close objects) for performing certain near work. Accommodative excess has traditionally been defined as accommodation that is persistently higher than expected for the patient's age. Modern definitions simply regard it as an inability to relax accommodation readily. Excessive accommodation is also seen in association with excessive convergence.
Symptoms and signs:
- Blurring of vision due to pseudomyopia
- Headache
- Eye strain
- Asthenopia
- Trouble concentrating when reading
Causes:
Causes related to refractive errors: Accommodative excess may be seen in the following conditions:
- Hypermetropia: Young hypermetropes use excessive accommodation as a physiological adaptation in the interest of clear vision.
- Myopia: Young myopes performing excessive near work may also use excessive accommodation in association with excessive convergence.
- Astigmatism: An astigmatic eye may also be associated with accommodative excess.
- Presbyopia: An early presbyopic eye may also induce excessive accommodation.
- Improper or ill-fitting spectacles: Use of improper or ill-fitting spectacles may also cause use of excessive accommodation.
Causes related to systemic drugs: Use of systemic drugs such as morphine, digitalis, sulfonamides, and carbonic anhydrase inhibitors may cause accommodative excess.
Causes related to diseases:
- Unilateral excessive accommodation: Trigeminal neuralgia and head trauma may cause ciliary spasm and accommodative excess.
- Bilateral excessive accommodation: Diseases such as encephalitis, syphilis, head trauma, influenza and meningitis may cause ciliary spasm and bilateral excessive accommodation.
Secondary to convergence insufficiency: Accommodative excess may also occur secondary to convergence insufficiency. In convergence insufficiency, the near point of convergence recedes and positive fusional vergence (PFV) is reduced, so the patient uses excessive accommodation to stimulate accommodative convergence and overcome the reduced PFV.
Risk factors:
A large amount of near work is the main precipitating factor of accommodative excess.
Pseudomyopia:
Pseudomyopia also known as artificial myopia refers to an intermittent and temporary shift in refractive error of the eye towards myopia. It may occur due to excessive accommodation or spasm of accommodation.
Diagnosis:
Differential diagnosis: Parinaud's syndrome can mimic some aspects of spasm of the near reflex, such as excessive accommodation and convergence; however, pupillary near-light dissociation, not miosis, is a feature of Parinaud's syndrome.
Treatment:
- Optical: cycloplegic refraction, and correction of refractive errors if any
- Vision therapy
- General: relaxation from near work
**Closed immersion**
In algebraic geometry, a closed immersion of schemes is a morphism of schemes f: Z → X that identifies Z as a closed subset of X such that locally, regular functions on Z can be extended to X. The latter condition can be formalized by saying that f#: O_X → f_*O_Z is surjective. An example is the inclusion map Spec(R/I) → Spec(R) induced by the canonical map R → R/I.
Other characterizations:
The following are equivalent:
- f: Z → X is a closed immersion.
- For every open affine U = Spec(R) ⊂ X, there exists an ideal I ⊂ R such that f⁻¹(U) = Spec(R/I) as schemes over U.
- There exists an open affine covering X = ⋃ Uj with Uj = Spec(Rj), and for each j there exists an ideal Ij ⊂ Rj such that f⁻¹(Uj) = Spec(Rj/Ij) as schemes over Uj.
- There is a quasi-coherent sheaf of ideals I on X such that f_*O_Z ≅ O_X/I and f is an isomorphism of Z onto the global Spec of O_X/I over X.
Definition for locally ringed spaces: In the case of locally ringed spaces, a morphism i: Z → X is a closed immersion if a similar list of criteria is satisfied:
- The map i is a homeomorphism of Z onto its image.
- The associated sheaf map O_X → i_*O_Z is surjective with kernel I.
- The kernel I is locally generated by sections as an O_X-module.
The only varying condition is the third. It is instructive to look at a counter-example to get a feel for what the third condition yields, by considering a map which is not a closed immersion: i: G_m ↪ A¹, that is, Spec(Z[x, x⁻¹]) → Spec(Z[x]). If we look at the stalk of i_*O_{G_m} at 0 ∈ A¹, then there are no sections. This implies that for any open subscheme U ⊂ A¹ containing 0, the sheaf has no sections. This violates the third condition, since at least one open subscheme U covering A¹ contains 0.
Properties:
- A closed immersion is finite and radicial (universally injective). In particular, a closed immersion is universally closed.
- A closed immersion is stable under base change and composition.
- The notion of a closed immersion is local in the sense that f is a closed immersion if and only if for some (equivalently every) open covering X = ⋃ Uj the induced map f: f⁻¹(Uj) → Uj is a closed immersion.
- If the composition Z → Y → X is a closed immersion and Y → X is separated, then Z → Y is a closed immersion.
- If X is a separated S-scheme, then every S-section of X is a closed immersion.
- If i: Z → X is a closed immersion and I ⊂ O_X is the quasi-coherent sheaf of ideals cutting out Z, then the direct image i_* from the category of quasi-coherent sheaves over Z to the category of quasi-coherent sheaves over X is exact and fully faithful, with essential image consisting of those G such that IG = 0.
- A flat closed immersion of finite presentation is the open immersion of an open closed subscheme.
**Virtual manufacturing network**
A Virtual Manufacturing Network is a manufacturing network that is not owned by a single company but is built, with the use of ICT, by bringing together different suppliers and alliance partners, creating a virtual network that is able to operate as a solely owned supply network. A Virtual Manufacturing Network is in this way a collaborative network of manufacturing enterprises (from OEMs to suppliers) connected by means of ICT for configuring, managing and monitoring the manufacturing process.
Many companies have adopted a philosophy of acquiring worldwide resources through a virtual network in order to minimize expenses across their whole operation, focusing on core competences and relying on other companies with specific expertise to take over the parts of the manufacturing process they cannot perform themselves.
The evolution of a Virtual Manufacturing Network is a Dynamic Manufacturing Network, which describes a more flexible and agile manufacturing network that can be instantiated or dissolved rapidly in order to meet emerging market needs and business opportunities.
**Video content analysis**
Video content analysis or video content analytics (VCA), also known as video analysis or video analytics (VA), is the capability of automatically analyzing video to detect and determine temporal and spatial events.
This technical capability is used in a wide range of domains including entertainment, video retrieval and video browsing, health-care, retail, automotive, transport, home automation, flame and smoke detection, safety, and security. The algorithms can be implemented as software on general-purpose machines, or as hardware in specialized video processing units.
Many different functionalities can be implemented in VCA. Video Motion Detection is one of the simpler forms where motion is detected with regard to a fixed background scene. More advanced functionalities include video tracking and egomotion estimation.Based on the internal representation that VCA generates in the machine, it is possible to build other functionalities, such as video summarization, identification, behavior analysis, or other forms of situation awareness.
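As a sketch of the simplest functionality above, video motion detection against a fixed background scene reduces to thresholded frame differencing. The pure-Python snippet below treats frames as 2-D lists of grey levels; a real implementation would use NumPy or OpenCV, and the threshold of 25 grey levels is an arbitrary assumption.

```python
def motion_mask(background, frame, threshold=25):
    # Mark pixels whose intensity differs from the fixed background scene
    # by more than `threshold` grey levels.
    return [[abs(f - b) > threshold for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[100] * 4 for _ in range(4)]     # flat grey background scene
frame = [row[:] for row in background]
for y in (1, 2):
    for x in (1, 2):
        frame[y][x] = 200                      # a bright 2x2 "object" appears

mask = motion_mask(background, frame)
moving_pixels = sum(cell for row in mask for cell in row)
```

Egomotion estimation and tracking build on the same per-pixel comparisons, but with models of camera and object movement rather than a fixed background.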
VCA relies on good input video, so it is often combined with video enhancement technologies such as video denoising, image stabilization, unsharp masking, and super-resolution.
Functionalities:
Several articles provide an overview of the modules involved in the development of video analytic applications. This is a list of known functionalities and a short description.
Commercial applications:
VCA is a relatively new technology, with numerous companies releasing VCA-enhanced products in the mid-2000s. While there are many applications, the track records of different VCA solutions differ widely. Functionalities such as motion detection, people counting and gun detection are available as commercial off-the-shelf products and are believed to have a decent track record (for example, even freeware such as dsprobotics Flowstone can handle movement and color analysis). In response to the COVID-19 pandemic, many software manufacturers have introduced new public health analytics such as face mask detection or social distancing tracking. In many domains VCA is implemented on CCTV systems, either distributed on the cameras (at the edge) or centralized on dedicated processing systems. Video Analytics and Smart CCTV are commercial terms for VCA in the security domain. In the UK the BSIA has developed an introduction guide for VCA in the security domain. In addition to video analytics, and to complement it, audio analytics can also be used. Video management software manufacturers are constantly expanding the range of video analytics modules available. With suspect tracking technology, it is possible to track all of a subject's movements: where they came from, and when, where, and how they moved. Within a particular surveillance system, indexing technology is able to locate people with similar features who were within the cameras' viewpoints during a specific period of time. Usually, the system finds many different people with similar features and presents them in the form of snapshots. The operator only needs to click on the images of the subjects that need to be tracked. Within a minute or so, it is possible to track all the movements of a particular person, and even to create a step-by-step video of the movements.
Kinect is an add-on peripheral for the Xbox 360 gaming console that uses VCA for part of the user input. In the retail industry, VCA is used to track shoppers inside the store. In this way, a heatmap of the store can be obtained, which is beneficial for store design and marketing optimisation. Other applications include measuring dwell time when shoppers look at a product, and detecting items removed or left behind.
The quality of VCA in the commercial setting is difficult to determine. It depends on many variables such as use case, implementation, system configuration and computing platform. Typical methods to get an objective idea of the quality in commercial settings include independent benchmarking and designated test locations.
VCA has been used for crowd management purposes, notably at The O2 Arena in London and The London Eye.
Law enforcement:
Police and forensic scientists analyse CCTV video when investigating criminal activity. Police use video content analysis software, such as Kinesense, to search long videos for key events and find suspects. Surveys have shown that up to 75% of cases involve CCTV.
Academic research:
Video content analysis is a subset of computer vision and thereby of artificial intelligence. Two major academic benchmark initiatives are TRECVID, which uses a small portion of i-LIDS video footage, and the PETS Benchmark Data. They focus on functionalities such as tracking, left-luggage detection and virtual fencing. Benchmark video datasets such as UCF101 enable action recognition research incorporating temporal and spatial visual attention with convolutional neural networks and long short-term memory. Video analysis software is also being paired with footage from body-worn and dashboard cameras in order to more easily redact footage for public disclosure and to identify events and people in videos. The EU is funding an FP7 project called P-REACT to integrate video content analytics on embedded systems with police and transport security databases.
Artificial intelligence: Artificial intelligence for video surveillance utilizes computer software programs that analyze the audio and images from video surveillance cameras in order to recognize humans, vehicles, objects and events. Security contractors program the software to define restricted areas within the camera's view (such as a fenced-off area, or a parking lot but not the sidewalk or public street outside the lot) and to program times of day (such as after the close of business) for the property being protected by the camera surveillance. The artificial intelligence ("A.I.") sends an alert if it detects a trespasser breaking the "rule" set that no person is allowed in that area during that time of day.
**Stub axle**
A stub axle or stud axle is either one of two front axles in a rear-wheel drive vehicle, or one of the two rear axles in a front-wheel drive vehicle. In a rear wheel drive vehicle this axle is capable of angular movement about the kingpin for steering the vehicle.
The stub or stud axle is named so because it resembles the shape of a stub or stud, like a truncated end of an axle, short in shape and blunt. There are four general designs:
- Elliot axle
- Reversed Elliot axle
- Lemoine axle
- Inverted Lemoine axle
**Toll tin**
Toll tin:
Toll tin was a term historically used in tin mining in Devon and Cornwall. The holder of a set of tin bounds was required to pay the freeholder of the land on which the bounds had been pitched a portion, called toll tin, of the tin ore (or black tin) extracted.
Toll tin became due as soon as the ore was broken from the ground and, although some freeholders may have taken it in this form, it is likely that others opted for the more practical approach of taking it as a portion of the proceeds of the sale of the refined tin (or white tin).
Toll tin was not the only way in which a miner's share of the tin extracted was reduced — he was also required to pay a tax to the crown on the refined tin known as tin coinage before the tin could legally be sold. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Polydrafter**
Polydrafter:
In recreational mathematics, a polydrafter is a polyform with a 30°–60°–90° right triangle as the base form. This triangle is also called a drafting triangle, hence the name. This triangle is also half of an equilateral triangle, and a polydrafter's cells must consist of halves of triangles in the triangular tiling of the plane; consequently, when two drafters share an edge that is the middle of their three edge lengths, they must be reflections rather than rotations of each other. Any contiguous subset of halves of triangles in this tiling is allowed, so unlike most polyforms, a polydrafter may have cells joined along unequal edges: a hypotenuse and a short leg.
History:
Polydrafters were invented by Christopher Monckton, who used the name polydudes for polydrafters that have no cells attached only by the length of a short leg. Monckton's Eternity Puzzle was composed of 209 12-dudes. The term polydrafter was coined by Ed Pegg Jr., who also proposed as a puzzle the task of fitting the 14 tridrafters (all possible clusters of three drafters) into a trapezoid whose sides are 2, 3, 5, and 3 times the length of the hypotenuse of a drafter.
Extended polydrafters:
An extended polydrafter is a variant in which the drafter cells cannot all conform to the triangle (polyiamond) grid.
The cells are still joined at short legs, long legs, hypotenuses and half-hypotenuses.
See the Logelium link below.
Enumerating polydrafters:
Like polyominoes, polydrafters can be enumerated in two ways, depending on whether chiral pairs of polydrafters are counted as one polydrafter or two.
With two or more cells, the numbers are greater if extended polydrafters are included. For example, the number of didrafters rises from 6 to 13. See (sequence A289137 in the OEIS). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Phasor measurement unit**
Phasor measurement unit:
A phasor measurement unit (PMU) is a device used to estimate the magnitude and phase angle of an electrical phasor quantity (such as voltage or current) in the electricity grid using a common time source for synchronization. Time synchronization is usually provided by GPS or the IEEE 1588 Precision Time Protocol, which allows synchronized real-time measurements of multiple remote points on the grid. PMUs are capable of capturing samples from a waveform in quick succession and reconstructing the phasor quantity, made up of an angle measurement and a magnitude measurement. The resulting measurement is known as a synchrophasor. These time-synchronized measurements are important because if the grid's supply and demand are not perfectly matched, frequency imbalances can cause stress on the grid, which is a potential cause of power outages. PMUs can also be used to measure the frequency in the power grid. A typical commercial PMU can report measurements with very high temporal resolution, up to 120 measurements per second. This helps engineers analyze dynamic events in the grid, which is not possible with traditional SCADA measurements that generate one measurement every 2 or 4 seconds. PMUs therefore equip utilities with enhanced monitoring and control capabilities and are considered one of the most important measuring devices in the future of power systems. A PMU can be a dedicated device, or the PMU function can be incorporated into a protective relay or other device.
History:
In 1893, Charles Proteus Steinmetz presented a paper on simplified mathematical description of the waveforms of alternating current electricity. Steinmetz called his representation a phasor. With the invention of phasor measurement units (PMU) in 1988 by Dr. Arun G. Phadke and Dr. James S. Thorp at Virginia Tech, Steinmetz’s technique of phasor calculation evolved into the calculation of real time phasor measurements that are synchronized to an absolute time reference provided by the Global Positioning System. We therefore refer to synchronized phasor measurements as synchrophasors. Early prototypes of the PMU were built at Virginia Tech, and Macrodyne built the first PMU (model 1690) in 1992. Today they are available commercially.
History:
With the increasing growth of distributed energy resources on the power grid, more observability and control systems will be needed to accurately monitor power flow. Historically, power has been delivered in a unidirectional fashion through passive components to customers, but now that customers can generate their own power with technologies such as solar PV, distribution systems are becoming bidirectional. With this change it is imperative that transmission and distribution networks are continuously observed through advanced sensor technology, such as PMUs and uPMUs.
History:
In simple terms, the public electric grid that a power company operates was originally designed to take power from a single source: the operating company's generators and power plants, and feed it into the grid, where the customers consume the power. Now, some customers are operating power generating devices (solar panels, wind turbines, etc.) and to save costs (or to generate income) are also feeding power back into the grid. Depending on the region, feeding power back into the grid may be done through net metering. Because of this process, voltage and current must be measured and regulated in order to ensure the power going into the grid is of the quality and standard that customer equipment expects (as seen through metrics such as frequency, phase synchronicity, and voltage). If this is not done, as Rob Landley puts it, "people's light bulbs start exploding." This measurement function is what these devices do.
Operation:
A PMU can measure 50/60 Hz AC waveforms (voltages and currents), typically at a rate of 48 samples per cycle, making it effective at detecting fluctuations in voltage or current in less than one cycle. However, when the frequency does not oscillate at or near 50/60 Hz, PMUs are not able to accurately reconstruct these waveforms. Phasor measurements from PMUs are constructed from cosine waves that follow the structure below.
Operation:
A cos(ωt + θ) The A in this function is a scalar value, most often describing voltage or current magnitude (for PMU measurements). The θ is the phase angle offset from some defined starting position, and the ω is the angular frequency of the waveform (usually 2π50 radians/second or 2π60 radians/second). In most cases PMUs only measure the voltage magnitude and the phase angle, and assume that the angular frequency is a constant. Because this frequency is assumed constant, it is disregarded in the phasor measurement. PMU measurements are a mathematical fitting problem, where the measurements are fit to a sinusoidal curve. Thus, when the waveform is non-sinusoidal, the PMU is unable to fit it exactly. The less sinusoidal the waveform is, such as grid behavior during a voltage sag or fault, the worse the phasor representation becomes.
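As an illustrative sketch (not any vendor's actual estimation algorithm), the fitting described above can be done by linear least squares: the model A cos(ωt + θ) is rewritten as c1·cos(ωt) + c2·sin(ωt), the two coefficients are fit to the samples, and the magnitude and angle are recovered from them. The sample rate and signal values below are hypothetical.

```python
import math

def estimate_phasor(samples, times, freq_hz=60.0):
    """Fit samples to A*cos(w*t + theta) by linear least squares.

    Model: x = c1*cos(w*t) + c2*sin(w*t), with c1 = A*cos(theta)
    and c2 = -A*sin(theta).  Returns (A, theta) with theta in radians.
    """
    w = 2 * math.pi * freq_hz
    # Accumulate the 2x2 normal equations for the linear fit.
    scc = scs = sss = sxc = sxs = 0.0
    for x, t in zip(samples, times):
        c, s = math.cos(w * t), math.sin(w * t)
        scc += c * c; scs += c * s; sss += s * s
        sxc += x * c; sxs += x * s
    det = scc * sss - scs * scs
    c1 = (sss * sxc - scs * sxs) / det
    c2 = (scc * sxs - scs * sxc) / det
    return math.hypot(c1, c2), math.atan2(-c2, c1)

# Synthesize one clean cycle at 48 samples per cycle (the rate cited above)
# and recover its phasor.
f = 60.0
times = [n / (48 * f) for n in range(48)]
samples = [10.0 * math.cos(2 * math.pi * f * t + 0.5) for t in times]
A, theta = estimate_phasor(samples, times, f)
```

For a clean sinusoid the fit is exact; as the text notes, the less sinusoidal the waveform (e.g., during a sag or fault), the worse this fitted phasor represents the signal.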
Operation:
The analog AC waveforms detected by the PMU are digitized by an analog-to-digital converter for each phase. A phase-locked oscillator along with a Global Positioning System (GPS) reference source provides the needed high-speed synchronized sampling with 1 microsecond accuracy. However, PMUs can take in multiple time sources including non-GPS references as long as they are all calibrated and working synchronously. The resultant time-stamped phasors can be transmitted to a local or remote receiver at rates up to 120 samples per second. Being able to see time synchronized measurements over a large area is helpful in examining how the grid operates at large, and determining which parts of the grid are affected by different disturbances.
Operation:
Historically, only small numbers of PMUs have been used to monitor transmission lines with acceptable errors of around 1%. These were simply coarser devices installed to prevent catastrophic blackouts. Now, with the invention of micro-synchronous phasor technology, many more of them are desired to be installed on distribution networks where power can be monitored at a very high degree of precision. This high degree of precision creates the ability to drastically improve system visibility and implement smart and preventative control strategies. No longer are PMUs just required at sub-stations, but are required at several places in the network including tap-changing transformers, complex loads, and PV generation buses.While PMUs are generally used on transmission systems, new research is being done on the effectiveness of micro-PMUs for distribution systems. Transmission systems generally have voltage that is at least an order of magnitude higher than distribution systems (between 12kV and 500kV while distribution runs at 12kV and lower). This means that transmission systems can have less precise measurements without compromising the accuracy of the measurement. However, distribution systems need more precision in order to improve accuracy, which is the benefit of uPMUs. uPMUs decrease the error of the phase angle measurements on the line from ±1° to ±0.05°, giving a better representation of the true angle value. The “micro” term in front of the PMU simply means it is a more precise measurement.
Technical overview:
A phasor is a complex number that represents both the magnitude and phase angle of the sine waves found in electricity. Phasor measurements that occur at the same time over any distance are called "synchrophasors". While it is commonplace for the terms "PMU" and "synchrophasor" to be used interchangeably, they have two separate technical meanings: a synchrophasor is the metered value, whereas the PMU is the metering device. In typical applications, phasor measurement units are sampled from widely dispersed locations in the power system network and synchronized from the common time source of a Global Positioning System (GPS) radio clock. Synchrophasor technology provides a tool for system operators and planners to measure the state of the electrical system (over many points) and manage power quality. PMUs measure voltages and currents at principal intersecting locations (critical substations) on a power grid and can output accurately time-stamped voltage and current phasors. Because these phasors are truly synchronized, synchronized comparison of two quantities is possible in real time. These comparisons can be used to assess system conditions, such as frequency changes, MW, MVAR, and kV values. The monitored points are preselected through various studies to make extremely accurate phase angle measurements that indicate shifts in system (grid) stability. The phasor data is collected either on site or at centralized locations using Phasor Data Concentrator technologies. The data is then transmitted to a regional monitoring system maintained by the local Independent System Operator (ISO). These ISOs monitor phasor data from individual PMUs or from as many as 150 PMUs; this monitoring provides an accurate means of establishing controls for power flow from multiple energy generation sources (nuclear, coal, wind, etc.).
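To illustrate why the common time reference matters, two voltage phasors carrying the same GPS time stamp can be compared directly as complex numbers; the bus names and per-unit values below are hypothetical.

```python
import cmath
import math

def synchrophasor(mag, angle_deg):
    """Represent a time-stamped phasor as a complex number."""
    return cmath.rect(mag, math.radians(angle_deg))

# Two bus voltages measured at the same GPS-stamped instant (hypothetical).
v_bus_a = synchrophasor(1.02, 12.0)   # per-unit magnitude, angle in degrees
v_bus_b = synchrophasor(0.99, 8.5)

# Because both share a common time reference, their angle difference is
# physically meaningful (it drives power flow between the buses).
delta_deg = math.degrees(cmath.phase(v_bus_a / v_bus_b))
```

Without synchronization, each PMU's angle would be relative to its own arbitrary starting instant and the subtraction above would be meaningless.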
Technical overview:
The technology has the potential to change the economics of power delivery by allowing increased power flow over existing lines. Synchrophasor data could be used to allow power flow up to a line's dynamic limit instead of to its worst-case limit. Synchrophasor technology will usher in a new process for establishing centralized and selective controls for the flow of electrical energy over the grid. These controls will affect both large-scale (multiple-state) regions and individual transmission line sections at intersecting substations. Transmission line congestion (overloading), protection, and control will therefore be improved on a multiple-region scale (US, Canada, Mexico) through interconnecting ISOs.
Phasor networks:
A phasor network consists of phasor measurement units (PMUs) dispersed throughout the electricity system, Phasor Data Concentrators (PDCs) to collect the information, and a Supervisory Control And Data Acquisition (SCADA) system at the central control facility. Such a network is used in Wide Area Measurement Systems (WAMS), the first of which was begun in 2000 by the Bonneville Power Administration. The complete network requires rapid data transfer within the frequency of sampling of the phasor data. GPS time stamping can provide a theoretical accuracy of synchronization better than 1 microsecond. "Clocks need to be accurate to ± 500 nanoseconds to provide the one microsecond time standard needed by each device performing synchrophasor measurement." For 60 Hz systems, PMUs must deliver between 10 and 30 synchronous reports per second depending on the application. The PDC correlates the data, and controls and monitors the PMUs (from a dozen up to 60). At the central control facility, the SCADA system presents system-wide data on all generators and substations in the system every 2 to 10 seconds.
Phasor networks:
PMUs often use phone lines to connect to PDCs, which then send data to the SCADA or Wide Area Measurement System (WAMS) server. Additionally, PMUs can use ubiquitous mobile (cellular) networks for data transfer (GPRS, UMTS), which allows potential savings in infrastructure and deployment costs, at the expense of a larger data reporting latency. However, the introduced data latency makes such systems more suitable for R&D measurement campaigns and near real-time monitoring, and limits their use in real-time protective systems.
Phasor networks:
PMUs from multiple vendors can yield inaccurate readings. In one test, readings differed by 47 microseconds, or 1 degree at 60 Hz, an unacceptable variance. China's solution to the problem was to build all its own PMUs adhering to its own specifications and standards, so there would be no multi-vendor conflicts in standards, protocols, or performance characteristics.
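The relationship between timing error and phase error is easy to check: at frequency f, a timing offset dt corresponds to 360·f·dt degrees of phase, which is why a 47-microsecond discrepancy at 60 Hz works out to roughly 1 degree.

```python
# Phase-angle error implied by a time-synchronization error: at frequency f,
# a timing offset dt corresponds to 360 * f * dt degrees of phase.
def phase_error_deg(dt_seconds, freq_hz=60.0):
    return 360.0 * freq_hz * dt_seconds

per_microsecond = phase_error_deg(1e-6)   # ~0.0216 degrees at 60 Hz
vendor_offset = phase_error_deg(47e-6)    # ~1.02 degrees, the variance cited above
```

The same arithmetic explains the clock requirement quoted earlier: holding clocks within ±500 nanoseconds keeps the contributed phase error nearly two orders of magnitude below one degree.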
Installation:
Installation of a typical 10-phasor PMU is a simple process. A phasor will be either a 3-phase voltage or a 3-phase current; each phasor therefore requires three separate electrical connections (one for each phase). Typically an electrical engineer designs the installation and interconnection of a PMU at a substation or at a generation plant. Substation personnel bolt an equipment rack to the floor of the substation following established seismic mounting requirements, then mount the PMU along with a modem and other support equipment on the rack. They also install the Global Positioning System (GPS) antenna on the roof of the substation per manufacturer instructions, and install "shunts" in all current transformer (CT) secondary circuits that are to be measured. The PMU also requires a communication circuit connection (a modem if using a 4-wire connection, or Ethernet for a network connection).
Implementations:
The Bonneville Power Administration (BPA) was the first utility to implement comprehensive adoption of synchrophasors in its wide-area monitoring system. This was in 2000, and today there are several implementations underway.
Implementations:
The FNET project operated by Virginia Tech and the University of Tennessee utilizes a network of approximately 80 low-cost, high-precision Frequency Disturbance Recorders to collect synchrophasor data from the U.S. power grid. The New York Independent System Operator has installed 48 PMUs throughout New York State, partly in response to a devastating 2003 blackout that originated in Ohio and affected regions in both the United States and Canada.
Implementations:
In 2006, China's Wide Area Monitoring Systems (WAMS) for its 6 grids had 300 PMUs installed mainly at 500 kV and 330 kV substations and power plants. By 2012, China plans to have PMUs at all 500kV substations and all powerplants of 300MW and above. Since 2002, China has built its own PMUs to its own national standard. One type has higher sampling rates than typical and is used in power plants to measure rotor angle of the generator, reporting excitation voltage, excitation current, valve position, and output of the power system stabilizer (PSS). All PMUs are connected via private network, and samples are received within 40 ms on average.
Implementations:
The North American Synchrophasor Initiative (NASPI), previously known as The Eastern Interconnect Phasor Project (EIPP), has over 120 connected phasor measurement units collecting data into a "Super Phasor Data Concentrator" system centered at Tennessee Valley Authority (TVA). This data concentration system is now an open source project known as the openPDC.
The DOE has sponsored several related research projects, including GridStat at Washington State University.
ARPA-E has sponsored a related research project on Micro-Synchrophasors for Distribution Systems, at the University of California, Berkeley.
The largest Wide Area Monitoring System in the world is in India. The Unified Real Time Dynamic State Measurement system (URTDSM) is composed of 1,950 PMUs installed in 351 substations feeding synchrophasor data to 29 State Control Centres, 5 Regional Control Centres and 2 National Control Centres.
Applications:
Power system automation, as in smart grids.
Load shedding and other load-control techniques, such as demand response mechanisms, to manage a power system (i.e., directing power where it is needed in real time).
Increasing the reliability of the power grid by detecting faults early, allowing isolation of the operative system and the prevention of power outages.
Increasing power quality by precise analysis and automated correction of sources of system degradation.
Wide-area measurement and control through state estimation, in very wide area super grids, regional transmission networks, and local distribution grids.
Security improvement through synchronized encryption such as the trusted sensing base, and cyber-attack recognition by verifying data between the SCADA system and the PMU data.
Distribution state estimation and model verification: the ability to calculate impedances of loads and distribution lines, and to verify voltage magnitudes and delta angles based on mathematical state models.
Event detection and classification: events such as various types of faults, tap changes, switching events, and circuit-protection operations; machine learning and signal classification methods can be used to develop algorithms to identify these significant events.
Microgrid applications: islanding (deciding where to detach from the grid), load and generation matching, and resynchronization with the main grid.
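The distribution state estimation application above (calculating line impedances from synchronized phasors) can be sketched as follows. All values are hypothetical per-unit quantities, and the single series-impedance line model is a deliberate simplification.

```python
# Estimate a line's series impedance from synchronized voltage and current
# phasors at its two ends: Z = (V_send - V_recv) / I.
import cmath
import math

def phasor(mag, angle_deg):
    """Build a complex phasor from magnitude and angle in degrees."""
    return cmath.rect(mag, math.radians(angle_deg))

# Hypothetical time-aligned measurements (per-unit).
v_send = phasor(1.05, 0.0)    # sending-end voltage
v_recv = phasor(1.00, -5.0)   # receiving-end voltage
i_line = phasor(0.80, -30.0)  # line current

z_line = (v_send - v_recv) / i_line
r, x = z_line.real, z_line.imag   # resistance and reactance, per-unit
```

This only works because the two voltage phasors and the current phasor share a common time reference; with unsynchronized angles the voltage difference, and hence the impedance, would be wrong.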
Standards:
The IEEE 1344 standard for synchrophasors was completed in 1995 and reaffirmed in 2001. In 2005, it was replaced by IEEE C37.118-2005, which was a complete revision and dealt with issues concerning use of PMUs in electric power systems. The specification describes standards for measurement, the method of quantifying the measurements, testing and certification requirements for verifying accuracy, and the data transmission format and protocol for real-time data communication. This standard was not comprehensive: it did not attempt to address all factors that PMUs can detect in power system dynamic activity. A new version of the standard was released in December 2011, which split the IEEE C37.118-2005 standard into two parts: C37.118.1, dealing with phasor estimation, and C37.118.2, the communications protocol. It also introduced two classifications of PMU: M (measurement) and P (protection). The M class is close in performance requirements to the original 2005 standard, primarily for steady-state measurement. The P class relaxes some performance requirements and is intended to capture dynamic system behavior. An amendment to C37.118.1 was released in 2014: IEEE C37.118.1a-2014 modified PMU performance requirements that were not considered achievable. Other standards used with PMU interfacing include OPC-DA / OPC-HDA, a Microsoft Windows-based interface protocol that is currently being generalized to use XML and run on non-Windows computers.
Standards:
IEC 61850, a standard for electrical substation automation, and BPA PDCStream, a variant of IEEE 1344 used by the Bonneville Power Administration (BPA) PDCs and user interface software. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Captain (baseball)**
Captain (baseball):
In baseball, a captain is an honorary title sometimes given to a member of the team to acknowledge his leadership. In the early days of baseball, a captain was a player who was responsible for many of the functions now assumed by managers and coaches, such as preparing lineups, making decisions about strategy, and encouraging teamwork. In amateur or youth baseball, a manager or coach may appoint a team captain to assist in communicating with the players and to encourage teamwork and improvement. The official rules of Major League Baseball (MLB) only briefly mention the position of team captain. Official Baseball Rule 4.03 Comment (formerly Rule 4.01 Comment), which discusses the submission of a team's lineup card to the umpire, notes that obvious errors in the lineup should be brought to the attention of the team's manager or captain. Only a few major league teams have had captains in recent years, two examples being Adrián Beltré of the Texas Rangers and David Wright of the New York Mets, both of whom served in the role from 2013 through 2018. As of the 2023 season, the New York Yankees and Kansas City Royals are the only teams with captains; Aaron Judge was given the honor on December 21, 2022, and Salvador Pérez on March 30, 2023. Jerry Remy, who was named captain of the California Angels in 1977 at age 24, explains that in today's modern age of baseball, "there's probably no need for a captain on a major league team. I think there are guys who lead by example. You could name the best player on your team as captain, but he may not be the guy other players will talk to or who will quietly go to other players and give them a prod." Baseball captains in MLB generally do not wear an NHL-style "C" on their jersey. Mike Sweeney, captain of the Kansas City Royals from 2003 to 2007, wore the "C" patch, as did John Franco and Keith Hernandez of the Mets, and Jason Varitek of the Boston Red Sox.
Brandon Belt of the San Francisco Giants wore an unofficial "C" patch (made from electrical tape) in a game on September 10, 2021, as a joke. Of the current captains in MLB, only Salvador Pérez wears the "C" patch.
History:
In the 19th and early 20th century, the captain held most of the on-field responsibilities that are held by managers and coaches in modern baseball. For example, according to the 1898 official rules, the captain was responsible for assigning the players' positions and batting order, for appealing to the umpire if he observed certain violations (for example, if the other team intentionally discolored the ball or its players illegally left the bench), and for informing the umpire of any special ground rules. During a period when teams didn't carry full-time coaches, the captain and one or more other players could serve as "coachers" of the base runners; the lines setting off the section where they were allowed to stand were designated as "captain's lines." If the umpire made a decision that could "be plainly shown by the code of rules to have been illegal", the "captain alone shall be allowed to make the appeal for reversal." The rules stated that the captain must be one of the nine players, implying that a non-playing manager would not have been allowed to act in the captain's role. In contrast with modern baseball, the 1898 rules did not mention the managers having any rights to interact with the umpires. The rules allowed managers to sit on the team's bench during the game, but were otherwise silent with respect to the rights and responsibilities of managers. In early baseball, many teams had playing managers who had both the off-field responsibilities of managers and the on-field responsibilities of captains. They held the title of "manager-captain." In contrast, teams that had non-playing managers hired a player to serve as captain. For example, in early 1902, Jack Doyle was signed as captain and first baseman of the New York Giants while non-player Horace Fogel was manager. The role of captain has been significant in the histories of some teams, such as the Yankees, Red Sox, and Giants. Conversely, some teams have never named a captain, such as the Milwaukee Brewers.
Lists of major league team captains:
List of Atlanta Braves captains
List of Baltimore Orioles captains
List of Boston Red Sox captains
List of Chicago Cubs captains
List of Chicago White Sox captains
List of Cincinnati Reds captains
List of Cleveland Guardians captains
List of Detroit Tigers captains
List of Houston Astros captains
List of Kansas City Royals captains
List of Los Angeles Angels captains
List of Los Angeles Dodgers captains
List of Minnesota Twins captains
List of New York Mets captains
List of New York Yankees captains
List of Oakland Athletics captains
List of Philadelphia Phillies captains
List of Pittsburgh Pirates captains
List of San Diego Padres captains
List of San Francisco Giants captains
List of Seattle Mariners captains
List of St. Louis Cardinals captains
List of Texas Rangers captains | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Quasistatic loading**
Quasistatic loading:
In solid mechanics, quasistatic loading refers to loading where inertial effects are negligible. In other words, time and inertial force are irrelevant. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Endrov**
Endrov:
Endrov is an open-source plugin architecture aimed at image analysis and data processing. Being based on Java, it is portable and can be run both locally and as an applet. It grew out of the need for advanced open-source software that can cope with complex spatio-temporal image data, mainly obtained from microscopes in biological research. It borrows much of its philosophy from ImageJ but aims to supersede it with a more modern design.
Endrov:
Endrov grew out of the need for software to map the embryogenesis of C. elegans.
The lead developer, Johan Henriksson, is a Ph.D. student at Karolinska Institute.
Specifications:
Endrov is both a library and an imaging program. The design has made strong emphasis on separating GUI code from data types, filters and other data processing plugins. The idea is that the program can be used for most daily use or prototyping, and for bigger batch processing or integration, the code is invoked as a library.
Specifications:
As a program, Endrov can do what you would expect from normal image-processing software. It is meant to be hackable; integrating new editing tools, windows and data types is meant to be simple. The main feature that sets it apart from other imaging software is that it can handle additional dimensions (XYZ, time, channel), which is needed for more serious microscopy. Filters can also be used without being directly applied, and can be composed into filter sequences. Data (for example derived from analysis) is stored together with the images.
Specifications:
The native image format is OST but most common formats are supported.
Comparison with ImageJ:
ImageJ is older and hence it is more mature and has more plugins. This limits how much of ImageJ can be changed without breaking backwards-compatibility, which has caused design flaws to accumulate over time. Endrov sacrifices all backwards-compatibility for a clean design. While ImageJ consists of a core and rather independent plugins, Endrov has few core functions and plenty of plugin-plugin dependencies. The goal is to tighten the integration and increase encapsulation, thus reduce code redundancy and ease maintenance. As an example, the GUI is separate from most algorithm plugins; algorithms merely provide descriptions of input and output. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Synchronous programming language**
Synchronous programming language:
A synchronous programming language is a computer programming language optimized for programming reactive systems. Computer systems can be sorted in three main classes: (1) transformational systems that take some inputs, process them, deliver their outputs, and terminate their execution; a typical example is a compiler; (2) interactive systems that interact continuously with their environment, at their own speed; a typical example is the web; and (3) reactive systems that interact continuously with their environment, at a speed imposed by the environment; a typical example is the automatic flight control system of modern airplanes. Reactive systems must therefore react to stimuli from the environment within strict time bounds. For this reason they are often also called real-time systems, and are found often in embedded systems.
Synchronous programming language:
Synchronous programming (also synchronous reactive programming or SRP) is a computer programming paradigm supported by synchronous programming languages. The principle of SRP is to make the same abstraction for programming languages as the synchronous abstraction in digital circuits. Synchronous circuits are indeed designed at a high-level of abstraction where the timing characteristics of the electronic transistors are neglected. Each gate of the circuit (or, and, ...) is therefore assumed to compute its result instantaneously, each wire is assumed to transmit its signal instantaneously. A synchronous circuit is clocked and at each tick of its clock, it computes instantaneously its output values and the new values of its memory cells (latches) from its input values and the current values of its memory cells. In other words, the circuit behaves as if the electrons were flowing infinitely fast. The first synchronous programming languages were invented in France in the 1980s: Esterel, Lustre, and SIGNAL. Since then, many other synchronous languages have emerged.
Synchronous programming language:
The synchronous abstraction makes reasoning about time in a synchronous program a lot easier, thanks to the notion of logical ticks: a synchronous program reacts to its environment in a sequence of ticks, and computations within a tick are assumed to be instantaneous, i.e., as if the processor executing them were infinitely fast. The statement "a||b" is therefore abstracted as the package "ab" where "a" and "b" are simultaneous. To take a concrete example, the Esterel statement "every 60 second emit minute" specifies that the signal "minute" is exactly synchronous with the 60th occurrence of the signal "second". At a more fundamental level, the synchronous abstraction eliminates the non-determinism resulting from the interleaving of concurrent behaviors. This allows deterministic semantics, therefore making synchronous programs amenable to formal analysis, verification and certified code generation, and usable as formal specification formalisms.
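A toy Python model (not Esterel itself) of the logical-tick abstraction: each loop iteration below is one instantaneous tick, and the Esterel example "every 60 second emit minute" corresponds to "minute" being present in exactly the same tick as every 60th "second".

```python
def run(ticks):
    """Each loop iteration is one logical tick; a 'second' signal is present
    at every tick, and 'minute' is emitted in the same tick as every 60th."""
    trace = []
    for tick in range(1, ticks + 1):
        signals = {"second"}
        if tick % 60 == 0:
            signals.add("minute")   # simultaneous with the 60th "second"
        trace.append(signals)
    return trace

trace = run(120)   # 120 ticks: "minute" is present at ticks 60 and 120
```

Because outputs are a deterministic function of the tick index and inputs, there is no interleaving non-determinism to reason about, which is the point of the abstraction.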
In contrast, in the asynchronous model of computation, on a sequential processor the statement "a||b" can be implemented either as "a;b" or as "b;a". This is known as interleaving-based non-determinism. The drawback of an asynchronous model is that it intrinsically forbids deterministic semantics (e.g., race conditions), which makes formal reasoning such as analysis and verification more complex. Nonetheless, asynchronous formalisms are very useful for modelling, designing, and verifying distributed systems, because such systems are intrinsically asynchronous.
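A minimal Python sketch (the functions and shared state are hypothetical, not from the source) makes the interleaving problem concrete: the two sequential schedules of "a||b" on shared state produce observably different results.

```python
# Hypothetical illustration: on a sequential processor, "a || b" may be
# scheduled as "a; b" or as "b; a". With shared state the two
# interleavings can differ observably -- interleaving-based non-determinism.

def a(state):
    state["x"] *= 2   # statement a

def b(state):
    state["x"] += 1   # statement b

s1 = {"x": 1}
a(s1); b(s1)          # interleaving "a; b"

s2 = {"x": 1}
b(s2); a(s2)          # interleaving "b; a"

# The two schedules disagree: (1*2)+1 = 3 versus (1+1)*2 = 4
results = (s1["x"], s2["x"])
```

Under the synchronous abstraction this ambiguity cannot arise: the semantics either fixes one deterministic result for the parallel composition or rejects the program.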
Systems with processes that interact synchronously offer a further contrast. An example would be systems based on the Communicating Sequential Processes (CSP) model, which allows both deterministic (external) and nondeterministic (internal) choice.
Synchronous languages:
Argos
Atom (a domain-specific language in Haskell for hard realtime embedded programming)
Averest
Blech
ChucK (a synchronous reactive programming language for audio)
Esterel
LabVIEW
LEA
Lustre
PLEXIL
SIGNAL (a dataflow-oriented synchronous language enabling multi-clock specifications)
SOL
SyncCharts
**Deoxyadenosine kinase**
Deoxyadenosine kinase:
In enzymology, a deoxyadenosine kinase (EC 2.7.1.76) is an enzyme that catalyzes the chemical reaction ATP + deoxyadenosine ⇌ ADP + dAMP. Thus, the two substrates of this enzyme are ATP and deoxyadenosine, whereas its two products are ADP and dAMP.
This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:deoxyadenosine 5'-phosphotransferase. This enzyme is also called purine-deoxyribonucleoside kinase. This enzyme participates in purine metabolism.
Structural studies:
As of late 2007, only one structure had been solved for this class of enzymes, with the PDB accession code 2JAQ.
**Fine-tuned universe**
Fine-tuned universe:
The characterization of the universe as finely tuned suggests that the occurrence of life in the universe is very sensitive to the values of certain fundamental physical constants and that other values different from the observed ones are, for some reason, improbable. If the values of any of certain free parameters in contemporary physical theories had differed only slightly from those observed, the evolution of the universe would have proceeded very differently and life as it is understood may not have been possible.
History:
In 1913, the chemist Lawrence Joseph Henderson wrote The Fitness of the Environment, one of the first books to explore fine-tuning in the universe. Henderson discusses the importance of water and the environment to living things, pointing out that life depends entirely on Earth's very specific environmental conditions, especially the prevalence and properties of water. In 1961, physicist Robert H. Dicke claimed that certain forces in physics, such as gravity and electromagnetism, must be perfectly fine-tuned for life to exist in the universe. Fred Hoyle also argued for a fine-tuned universe in his 1984 book The Intelligent Universe. "The list of anthropic properties, apparent accidents of a non-biological nature without which carbon-based and hence human life could not exist, is large and impressive", Hoyle wrote. Belief in the fine-tuned universe led to the expectation that the Large Hadron Collider would produce evidence of physics beyond the Standard Model, such as supersymmetry, but by 2012 it had not produced evidence for supersymmetry at the energy scales it was able to probe.
Motivation:
Physicist Paul Davies has said, "There is now broad agreement among physicists and cosmologists that the Universe is in several respects ‘fine-tuned' for life". However, he continued, "the conclusion is not so much that the Universe is fine-tuned for life; rather it is fine-tuned for the building blocks and environments that life requires." He has also said that "'anthropic' reasoning fails to distinguish between minimally biophilic universes, in which life is permitted, but only marginally possible, and optimally biophilic universes, in which life flourishes because biogenesis occurs frequently". Among scientists who find the evidence persuasive, a variety of natural explanations have been proposed, such as the existence of multiple universes introducing a survivorship bias under the anthropic principle. The premise of the fine-tuned universe assertion is that a small change in several of the physical constants would make the universe radically different. As Stephen Hawking has noted, "The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. ... The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life." If, for example, the strong nuclear force were 2% stronger than it is (i.e. if the coupling constant representing its strength were 2% larger) while the other constants were left unchanged, diprotons would be stable; according to Davies, hydrogen would fuse into them instead of deuterium and helium. This would drastically alter the physics of stars, and presumably preclude the existence of life similar to what we observe on Earth. The diproton's existence would short-circuit the slow fusion of hydrogen into deuterium.
Hydrogen would fuse so easily that it is likely that all the universe's hydrogen would be consumed in the first few minutes after the Big Bang. This "diproton argument" is disputed by other physicists, who calculate that as long as the increase in strength is less than 50%, stellar fusion could occur despite the existence of stable diprotons. The precise formulation of the idea is made difficult by the fact that we do not yet know how many independent physical constants there are. The standard model of particle physics has 25 freely adjustable parameters and general relativity has one more, the cosmological constant, which is known to be nonzero but profoundly small in value. But because physicists have not developed an empirically successful theory of quantum gravity, there is no known way to combine quantum mechanics, on which the standard model depends, and general relativity. Without knowledge of this more complete theory suspected to underlie the standard model, it is impossible to definitively count the number of truly independent physical constants. In some candidate theories, the number of independent physical constants may be as small as one. For example, the cosmological constant may be a fundamental constant, but attempts have also been made to calculate it from other constants, and according to the author of one such calculation, "the small value of the cosmological constant is telling us that a remarkably precise and totally unexpected relation exists among all the parameters of the Standard Model of particle physics, the bare cosmological constant and unknown physics."
Examples:
Martin Rees formulates the fine-tuning of the universe in terms of the following six dimensionless physical constants.
N, the ratio of the electromagnetic force to the gravitational force between a pair of protons, is approximately 10³⁶. According to Rees, if it were significantly smaller, only a small and short-lived universe could exist.
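The order of magnitude of N can be checked directly. This sketch assumes standard CODATA-style values for the constants, which are not given in the text; since both forces fall off as 1/r², the separation cancels out of the ratio.

```python
# Back-of-the-envelope check of Rees's constant N: the ratio of the
# electrostatic to the gravitational force between two protons.
# The 1/r^2 factor is common to both forces, so r cancels.

e   = 1.602176634e-19   # elementary charge, C (CODATA)
k   = 8.9875517923e9    # Coulomb constant, N m^2 / C^2
G   = 6.67430e-11       # gravitational constant, N m^2 / kg^2
m_p = 1.67262192e-27    # proton mass, kg

N = (k * e**2) / (G * m_p**2)
# N comes out around 1.2e36, matching the quoted order of magnitude 10^36
```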
Epsilon (ε), a measure of the nuclear efficiency of fusion from hydrogen to helium, is 0.007: when four nucleons fuse into helium, 0.007 (0.7%) of their mass is converted to energy. The value of ε is in part determined by the strength of the strong nuclear force. If ε were 0.006, a proton could not bond to a neutron, and only hydrogen could exist, and complex chemistry would be impossible. According to Rees, if it were above 0.008, no hydrogen would exist, as all the hydrogen would have been fused shortly after the Big Bang. Other physicists disagree, calculating that substantial hydrogen remains as long as the strong force coupling constant increases by less than about 50%.
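As a rough sanity check of ε = 0.007 (using textbook constants, an assumption not spelled out in the text), converting 0.7% of the rest mass of four protons into energy via E = mc² gives roughly the ~26–27 MeV known to be released when a helium-4 nucleus is formed:

```python
# Rough check of epsilon = 0.007: fusing four hydrogen nuclei into
# helium-4 converts about 0.7% of their rest mass into energy.

m_p = 1.67262192e-27       # proton mass, kg
c   = 2.99792458e8         # speed of light, m/s
eps = 0.007                # mass fraction converted to energy

energy_J   = eps * 4 * m_p * c**2          # E = (delta m) c^2
energy_MeV = energy_J / 1.602176634e-13    # convert joules to MeV
# energy_MeV is roughly 26 MeV per helium nucleus produced
```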
Omega (Ω), commonly known as the density parameter, is the relative importance of gravity and expansion energy in the universe. It is the ratio of the mass density of the universe to the "critical density" and is approximately 1. If gravity were too strong compared with dark energy and the initial cosmic expansion rate, the universe would have collapsed before life could have evolved. If gravity were too weak, no stars would have formed.
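The "critical density" in the definition of Ω comes from the Friedmann equation, ρ_c = 3H²/(8πG). A short Python sketch, assuming an illustrative Hubble constant of 70 km/s/Mpc (a value not stated in the text):

```python
# Critical density from the Friedmann equation: rho_c = 3 H^2 / (8 pi G).
# H0 = 70 km/s/Mpc is an assumed illustrative value.
import math

G  = 6.67430e-11               # gravitational constant, m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.0857e22     # 70 km/s/Mpc converted to 1/s

rho_c = 3 * H0**2 / (8 * math.pi * G)          # kg / m^3
protons_per_m3 = rho_c / 1.67262192e-27        # equivalent proton count
# rho_c is about 9.2e-27 kg/m^3 -- only about five or six protons
# per cubic metre; Omega is the actual mean density divided by this.
```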
Lambda (Λ), commonly known as the cosmological constant, describes the ratio of the density of dark energy to the critical energy density of the universe, given certain reasonable assumptions such as that dark energy density is a constant. In terms of Planck units, and as a natural dimensionless value, Λ is on the order of 10⁻¹²². This is so small that it has no significant effect on cosmic structures that are smaller than a billion light-years across. A slightly larger value of the cosmological constant would have caused space to expand rapidly enough that stars and other astronomical structures would not be able to form.
Q, the ratio of the gravitational energy required to pull a large galaxy apart to the energy equivalent of its mass, is around 10⁻⁵. If it is too small, no stars can form. If it is too large, no stars can survive because the universe is too violent, according to Rees.
D, the number of spatial dimensions in spacetime, is 3. Rees claims that life could not exist if there were 2 or 4 spatial dimensions. Rees argues this does not preclude the existence of ten-dimensional strings. Max Tegmark has argued that if there is more than one time dimension, then physical systems' behavior could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, protons and electrons would be unstable and could decay into particles having greater mass than themselves. (This is not a problem if the particles have a sufficiently low temperature.)
Carbon and oxygen
An older example is the Hoyle state, the third-lowest energy state of the carbon-12 nucleus, with an energy of 7.656 MeV above the ground level. According to one calculation, if the state's energy level were lower than 7.3 or greater than 7.9 MeV, insufficient carbon would exist to support life. Furthermore, to explain the universe's abundance of carbon, the Hoyle state must be further tuned to a value between 7.596 and 7.716 MeV. A similar calculation, focusing on the underlying fundamental constants that give rise to various energy levels, concludes that the strong force must be tuned to a precision of at least 0.5%, and the electromagnetic force to a precision of at least 4%, to prevent either carbon production or oxygen production from dropping significantly.
Explanations:
Some explanations of fine-tuning are naturalistic. First, the fine-tuning might be an illusion: more fundamental physics may explain the apparent fine-tuning in physical parameters in our current understanding by constraining the values those parameters are likely to take. As Lawrence Krauss puts it, "certain quantities have seemed inexplicable and fine-tuned, and once we understand them, they don't seem to be so fine-tuned. We have to have some historical perspective." Some argue it is possible that a final fundamental theory of everything will explain the underlying causes of the apparent fine-tuning in every parameter. Still, as modern cosmology developed, various hypotheses not presuming hidden order have been proposed. One is a multiverse, where fundamental physical constants are postulated to have different values outside of our own universe. On this hypothesis, separate parts of reality would have wildly different characteristics. In such scenarios, the appearance of fine-tuning is explained as a consequence of the weak anthropic principle and selection bias (specifically survivorship bias); only those universes with fundamental constants hospitable to life (such as ours) could contain life forms capable of observing the universe and contemplating the question of fine-tuning in the first place.
Multiverse
If the universe is just one of many, and possibly infinite, universes, each with different physical phenomena and constants, it would be unsurprising that we find ourselves in a universe hospitable to intelligent life (see multiverse: anthropic principle). Some versions of the multiverse hypothesis therefore provide a simple explanation for any fine-tuning. The multiverse idea has led to considerable research into the anthropic principle and has been of particular interest to particle physicists, because theories of everything do apparently generate large numbers of universes in which the physical constants vary widely. As yet, there is no evidence for the existence of a multiverse, but some versions of the theory make predictions of which some researchers studying M-theory and gravity leaks hope to see some evidence soon. Laura Mersini-Houghton claimed that the WMAP cold spot could provide testable empirical evidence for a parallel universe. Variants of this approach include Lee Smolin's notion of cosmological natural selection, the Ekpyrotic universe, and the bubble universe theory.
Top-down cosmology
Stephen Hawking and Thomas Hertog proposed that the universe's initial conditions consisted of a superposition of many possible initial conditions, only a small fraction of which contributed to the conditions we see today. On their theory, it is inevitable that we find our universe's "fine-tuned" physical constants, as the current universe "selects" only those histories that led to the present conditions. In this way, top-down cosmology provides an anthropic explanation for why we find ourselves in a universe that allows matter and life, without invoking the ontic existence of the Multiverse.
Carbon chauvinism
Some forms of fine-tuning arguments about the formation of life assume that only carbon-based life forms are possible, an assumption sometimes called carbon chauvinism. Conceptually, alternative biochemistry or other forms of life are possible.
Alien design
One hypothesis is that extra-universal aliens designed the universe. Some believe this would solve the problem of how a designer or design team capable of fine-tuning the universe could come to exist. Cosmologist Alan Guth believes humans will in time be able to generate new universes. By implication, previous intelligent entities may have generated our universe. This idea leads to the possibility that the extra-universal designer/designers are themselves the product of an evolutionary process in their own universe, which must therefore itself be able to sustain life. It also raises the question of where that universe came from, leading to an infinite regress.
John Gribbin's Designer Universe theory suggests that an advanced civilization could have deliberately made the universe in another part of the Multiverse, and that this civilization may have caused the Big Bang.
Simulation hypothesis
The simulation hypothesis holds that the universe is fine-tuned simply because it is programmed that way by people similar to us but more technologically advanced.
Religious apologetics:
Some scientists, theologians, and philosophers, as well as certain religious groups, argue that providence or creation are responsible for fine-tuning. Christian philosopher Alvin Plantinga argues that random chance, applied to a single and sole universe, only raises the question as to why this universe could be so "lucky" as to have precise conditions that support life at least at some place (the Earth) and time (within millions of years of the present).
One reaction to these apparent enormous coincidences is to see them as substantiating the theistic claim that the universe has been created by a personal God and as offering the material for a properly restrained theistic argument – hence the fine-tuning argument. It's as if there are a large number of dials that have to be tuned to within extremely narrow limits for life to be possible in our universe. It is extremely unlikely that this should happen by chance, but much more likely that this should happen if there is such a person as God.
Philosopher and Christian apologist William Lane Craig cites this fine-tuning of the universe as evidence for the existence of God or some form of intelligence capable of manipulating (or designing) the basic physics that governs the universe. Philosopher and theologian Richard Swinburne reaches the design conclusion using Bayesian probability. Scientist and theologian Alister McGrath has pointed out that the fine-tuning of carbon is even responsible for nature's ability to tune itself to any degree.
The entire biological evolutionary process depends upon the unusual chemistry of carbon, which allows it to bond to itself, as well as other elements, creating highly complex molecules that are stable over prevailing terrestrial temperatures, and are capable of conveying genetic information (especially DNA). [...] Whereas it might be argued that nature creates its own fine-tuning, this can only be done if the primordial constituents of the universe are such that an evolutionary process can be initiated. The unique chemistry of carbon is the ultimate foundation of the capacity of nature to tune itself.
Theoretical physicist and Anglican priest John Polkinghorne has stated: "Anthropic fine tuning is too remarkable to be dismissed as just a happy accident." Theologian and philosopher Andrew Loke argues that there are only five possible categories of hypotheses concerning fine-tuning and order: (i) Chance, (ii) Regularity, (iii) Combinations of Regularity and Chance, (iv) Uncaused, and (v) Design, and that only Design gives an exclusively logical explanation of order in the universe. He argues that the Kalam Cosmological Argument strengthens the teleological argument by answering the question "Who designed the Designer?" Creationist Hugh Ross advances a number of fine-tuning hypotheses. One is the existence of what Ross calls "vital poisons": elemental nutrients that are harmful in large quantities but essential for animal life in smaller quantities.
**Zagnut**
Zagnut:
Zagnut is a candy bar produced and sold in the United States. Its main ingredients are peanut butter and toasted coconut.
History:
The Zagnut bar was launched in 1930, by the D. L. Clark Company of western Pennsylvania, which also made the Clark bar. Clark changed its name to the Pittsburgh Food & Beverage company and was acquired by Leaf International in 1983. The Zagnut brand was later part of an acquisition by Hershey Foods Corporation in 1996. Bon Appétit, in a story about nostalgic candy, said, "We’re honestly flummoxed that Zagnuts aren’t more popular." Conversely, a columnist in The Des Moines Register compared it to a Rose Art crayon, saying "No one would ever purposely choose a Zagnut."
**Topological Hochschild homology**
Topological Hochschild homology:
In mathematics, topological Hochschild homology is a topological refinement of Hochschild homology which rectifies some technical issues with computations in characteristic p. For instance, if we consider the Z-algebra Fp, then HHn(Fp/Z) ≅ Fp for n even and 0 for n odd, and the ring structure on HH∗(Fp/Z) = Fp⟨u⟩ = Fp[u, u²/2!, u³/3!, …] is that of a divided power algebra. This presents a significant technical issue: if we set u ∈ HH₂(Fp/Z), so u² ∈ HH₄(Fp/Z), and so on, then u^p = 0, as follows from the resolution of Fp as an algebra over Fp⊗LFp, i.e. HHk(Fp/Z) = Hk(Fp ⊗_{Fp⊗LFp} Fp). This calculation is further elaborated on the Hochschild homology page, but the key point is the pathological behavior of the ring structure on the Hochschild homology of Fp. In contrast, the topological Hochschild homology ring has the isomorphism THH∗(Fp) = Fp[u], a polynomial algebra on a degree-2 generator, giving a less pathological theory. Moreover, this calculation forms the basis of many other THH calculations, such as for smooth algebras A/Fp.
Construction:
Recall that the Eilenberg–MacLane spectrum functor embeds ring objects in the derived category of the integers D(Z) into ring spectra over the sphere spectrum, the ring spectrum of the stable homotopy groups of spheres. This makes it possible to take a commutative ring A and construct a complex analogous to the Hochschild complex using the monoidal product in ring spectra: the smash product ∧S acts formally like the derived tensor product ⊗L over the integers. We define the topological Hochschild complex of A (which could be a commutative differential graded algebra, or just a commutative algebra) as the simplicial spectrum, pg 33–34, called the bar complex, ⋯ → HA∧SHA∧SHA ⇉ HA∧SHA → HA, where the arrows stand for the simplicial face maps, which multiply adjacent smash factors. Because simplicial objects in spectra have a realization as a spectrum, we obtain a spectrum THH(A) whose homotopy groups πi(THH(A)) define the topological Hochschild homology of the ring object A.
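Since the simplicial face maps are awkward to render in running text, the cyclic bar construction sketched above can be written out in LaTeX as follows (a standard presentation of the construction, supplied here for clarity rather than taken verbatim from the source):

```latex
% Cyclic bar construction of THH(A): in simplicial degree n the spectrum
% is the (n+1)-fold smash power of HA over the sphere spectrum S; the
% face maps d_i multiply adjacent smash factors (cyclically for d_n).
\[
  \mathrm{THH}(A) \;=\; \bigl|\, [n] \longmapsto HA^{\wedge_S (n+1)} \,\bigr|,
  \qquad
  d_i(a_0 \wedge \cdots \wedge a_n) =
  \begin{cases}
    a_0 \wedge \cdots \wedge a_i a_{i+1} \wedge \cdots \wedge a_n, & 0 \le i < n,\\
    a_n a_0 \wedge a_1 \wedge \cdots \wedge a_{n-1}, & i = n.
  \end{cases}
\]
```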
**Thymus stromal cells**
Thymus stromal cells:
Thymus stromal cells are subsets of specialized cells located in different areas of the thymus. They include all non-T-lineage cells, such as thymic epithelial cells (TECs), endothelial cells, mesenchymal cells, dendritic cells, and B lymphocytes, and provide signals essential for thymocyte development and the homeostasis of the thymic stroma.
Structure:
The thymus is a primary lymphoid organ of the immune system. It is a butterfly-shaped organ consisting of two lobes, located in the top part of the chest, that supports T cell development via specialized microenvironments that ensure a diverse, functional, and self-tolerant T cell population. These microenvironments are classically defined as distinct cortex and medulla regions that each contain specialized subsets of stromal cells. The stepwise progression of thymocyte development requires their migration through these thymic regions, where interactions with cTEC and mTEC subsets take place.
Function:
Thymus stromal cells provide chemokines and cytokines during the early stages of T lymphocyte development, and they are essential for promoting the homing of thymic-seeding progenitors, inducing T-lineage differentiation, and supporting thymocyte survival and proliferation. The predominant stromal cells found in the postnatal thymus are thymic epithelial cells (TECs).
cTECs (cortical thymic epithelial cells) are located in the cortex, and they are responsible for T-lineage commitment and positive selection of early thymocytes. cTECs provide cytokines, such as interleukin 7 (IL-7) and SCF, to promote early thymocyte progenitor (ETP) proliferation, as well as DLL4-mediated Notch signaling to induce the differentiation of ETPs toward the T lineage.
mTECs (medullary thymic epithelial cells) in the medulla contribute to the development of T cell tolerance by purging autoreactive T lymphocytes through the expression of cell-type-specific genes referred to as tissue-restricted antigens (TRAs), and they also participate in the final stages of thymocyte maturation. mTECs also predominantly express the receptor RANK, a major mediator of the thymic crosstalk signal, which is involved in the formation of the thymic medulla.
Mesenchymal stromal cells are required to create the thymic microenvironment and to maintain epithelial architecture and function in the thymus during organogenesis. They also serve as the major source of retinoic acid, which promotes the proliferation of cTECs. In the adult thymus, mesenchymal cells are found as fibroblastic cells that express a set of structural proteins and functional molecules, such as collagens, CD34, fibroblast-specific protein-1 (FSP1), and platelet-derived growth factor receptor α (PDGFRα). They are crucial for the maintenance and regeneration of mTECs.
Clinical significance:
Inborn defects of thymus stromal cells
Inborn errors of thymic stromal cell development and function lead to impaired T cell development, resulting in susceptibility to opportunistic infections and autoimmunity. The most serious clinical expression of a thymic stromal cell defect is profound T cell lymphopaenia, presenting as complete DiGeorge syndrome or severe combined immune deficiency (T−B+NK+ SCID).
Role in thymus atrophy and aging
There is substantial evidence indicating that the main target of age-linked thymic dysfunction is the thymic stroma, a microenvironment consisting of thymic stromal cells. Studies have shown that in thymic stromal cells, especially cTECs, the aging thymus develops a deficiency in the peroxide-quenching enzyme catalase. This deficiency renders thymic stromal cells sensitive to damage induced by inflammation and damage-associated molecular patterns (DAMPs), such as reactive oxygen species (ROS); the accumulated metabolic damage and oxidative stress then promote age-related thymic dysfunction and accelerated thymus atrophy.
**Laxative**
Laxative:
Laxatives, purgatives, or aperients are substances that loosen stools and increase bowel movements. They are used to treat and prevent constipation.
Laxatives vary as to how they work and the side effects they may have. Certain stimulant, lubricant, and saline laxatives are used to evacuate the colon for rectal and bowel examinations, and may be supplemented by enemas under certain circumstances. Sufficiently high doses of laxatives may cause diarrhea. Some laxatives combine more than one active ingredient, and may be administered orally or rectally.
Types:
Bulk-forming agents
Bulk-forming laxatives, also known as roughage, are substances, such as fiber in food and hydrophilic agents in over-the-counter drugs, that add bulk and water to stools so they can pass more easily through the intestines (the lower part of the digestive tract).
Properties:
Site of action: small and large intestines
Onset of action: 12–72 hours
Examples: dietary fiber, Metamucil, Citrucel, FiberCon
Bulk-forming agents generally have the gentlest of effects among laxatives, making them ideal for long-term maintenance of regular bowel movements.
Dietary fiber
Foods that help with laxation include fiber-rich foods. Dietary fiber includes insoluble fiber and soluble fiber, such as:
Fruits, such as bananas (depending on their ripeness), kiwifruits, prunes, apples (with skin), pears (with skin), and raspberries
Vegetables, such as broccoli, string beans, kale, spinach, cooked winter squash, cooked taro and poi, cooked peas, and baked potatoes (with skin)
Whole grains
Bran products
Nuts
Legumes, such as beans, peas, and lentils
Emollient agents (stool softeners)
Emollient laxatives, also known as stool softeners, are anionic surfactants that enable additional water and fats to be incorporated in the stool, making movement through the bowels easier.
Properties:
Site of action: small and large intestines
Onset of action: 12–72 hours
Examples: docusate (Colace, Diocto), Gibs-Eze
Emollient agents prevent constipation rather than treating long-term constipation.
Lubricant agents
Lubricant laxatives are substances that coat the stool with slippery lipids and decrease colonic absorption of water so the stool slides through the colon more easily. Lubricant laxatives also increase the weight of stool and decrease intestinal transit time.
Properties:
Site of action: colon
Onset of action: 6–8 hours
Example: mineral oil
Mineral oils, such as liquid paraffin, are generally the only nonprescription lubricant laxative available, but due to the risk of lipid pneumonia resulting from accidental aspiration, mineral oil is not recommended, especially in children and infants. Mineral oil may decrease the absorption of fat-soluble vitamins and some minerals.
Hyperosmotic agents
Hyperosmotic laxatives cause the intestines to hold more water, creating an osmotic gradient, which adds more pressure and stimulates bowel movement.
Properties:
Site of action: colon
Onset of action: 12–72 hours (oral), 0.25–1 hour (rectal)
Examples: glycerin suppositories (Hallens), sorbitol, lactulose, and PEG (Colyte, MiraLax)
Lactulose works by the osmotic effect, which retains water in the colon; by lowering the pH through bacterial fermentation to lactic, formic, and acetic acids; and by increasing colonic peristalsis. Lactulose is also indicated in portal-systemic encephalopathy. Glycerin suppositories work mostly by hyperosmotic action, but the sodium stearate in the preparation also causes local irritation to the colon. Solutions of polyethylene glycol and electrolytes (sodium chloride, sodium bicarbonate, potassium chloride, and sometimes sodium sulfate) are used for whole bowel irrigation, a process designed to prepare the bowel for surgery or colonoscopy and to treat certain types of poisoning. Brand names for these solutions include GoLytely, GlycoLax, Cosmocol, CoLyte, Miralax, Movicol, NuLytely, Suprep, and Fortrans. Solutions of sorbitol (SoftLax) have similar effects.
Saline laxative agents
Saline laxatives are nonabsorbable, osmotically active substances that attract and retain water in the intestinal lumen, increasing intraluminal pressure that mechanically stimulates evacuation of the bowel. Magnesium-containing agents also cause the release of cholecystokinin, which increases intestinal motility and fluid secretion. Saline laxatives may alter a patient's fluid and electrolyte balance.
Properties:
Site of action: small and large intestines
Onset of action: 0.5–3 hours (oral), 2–15 minutes (rectal)
Examples: sodium phosphate (and variants), magnesium citrate, magnesium hydroxide (milk of magnesia), and magnesium sulfate (Epsom salt)
Stimulant agents
Stimulant laxatives are substances that act on the intestinal mucosa or nerve plexus, altering water and electrolyte secretion. They also stimulate peristaltic action and can be dangerous under certain circumstances.
Properties:
Site of action: colon
Onset of action: 6–10 hours
Examples: senna, bisacodyl
Prolonged use of stimulant laxatives can create drug dependence by damaging the colon's haustral folds, making users less able to move feces through their colon on their own. A study of patients with chronic constipation found that 28% of chronic stimulant laxative users lost haustral folds over the course of one year, while none of the control group did.
Miscellaneous
Castor oil is a glyceride that is hydrolyzed by pancreatic lipase to ricinoleic acid, which produces laxative action by an unknown mechanism.
Properties:
Site of action: colon, small intestine
Onset of action: 2–6 hours
Examples: castor oil
Long-term use of castor oil may result in loss of fluid, electrolytes, and nutrients.
Serotonin agonists
These are motility stimulants that work through activation of 5-HT4 receptors of the enteric nervous system in the gastrointestinal tract. However, some have been discontinued or restricted due to potentially harmful cardiovascular side effects.
Tegaserod (brand name Zelnorm) was removed from the general U.S. and Canadian markets in 2007, due to reports of increased risks of heart attack or stroke. It is still available to physicians for patients in emergency situations that are life-threatening or require hospitalization. Prucalopride (brand name Resolor) is a current drug approved for use in the EU since October 15, 2009, in Canada (brand name Resotran) since December 7, 2011, and in the United States since December 2018.
Chloride channel activators:
Lubiprostone is used in the management of chronic idiopathic constipation and irritable bowel syndrome. It causes the intestines to produce a chloride-rich fluid secretion that softens the stool, increases motility, and promotes spontaneous bowel movements.
Comparison of available agents:
Effectiveness:
For adults, a randomized controlled trial found PEG (MiraLax or GlycoLax) 17 grams once per day to be superior to tegaserod at 6 mg twice per day. Another randomized controlled trial found greater improvement from two sachets (26 g) of PEG versus two sachets (20 g) of lactulose. PEG at 17 g per day was effective and safe in a randomized controlled trial lasting six months. Yet another randomized controlled trial found no difference between sorbitol and lactulose.

For children, PEG was found to be more effective than lactulose.
Problems with use:
Laxative abuse:
Some of the less significant adverse effects of laxative abuse include dehydration (which causes tremors, weakness, fainting, blurred vision, and kidney damage), low blood pressure, fast heart rate, postural dizziness, and fainting; however, laxative abuse can also lead to potentially fatal acid-base and electrolyte imbalances. For example, severe hypokalaemia has been associated with distal renal tubular acidosis from laxative abuse. Metabolic alkalosis is the most common acid-base imbalance observed. Other significant adverse effects include rhabdomyolysis, steatorrhoea, inflammation and ulceration of the colonic mucosa, pancreatitis, kidney failure, and factitious diarrhea, among other problems. Over time, the colon requires larger quantities of laxatives to keep functioning; this can result in a lazy colon, infections, irritable bowel syndrome, and potential liver damage.
Although some patients with eating disorders such as anorexia nervosa and bulimia nervosa abuse laxatives in an attempt to lose weight, laxatives act to speed up the transit of feces through the large intestine, which occurs after the absorption of nutrients in the small intestine is already complete. Thus, studies of laxative abuse have found that effects on body weight reflect primarily temporary losses of body water rather than energy (calorie) loss.
Laxative gut:
Physicians warn against the chronic use of stimulant laxatives out of concern that long-term overstimulation could wear out the colonic tissues over time, leaving them unable to expel feces. A common finding in patients who have used stimulant laxatives is a brown pigment deposited in the intestinal tissue, known as melanosis coli.
Historical and health fraud uses:
Laxatives, once called "physicks" or "purgatives", were used extensively in historic medicine to treat many conditions for which they are now generally regarded as ineffective in evidence-based medicine. Likewise, laxatives (often termed colon cleanses) may be promoted in alternative medicine for quack diagnoses such as "mucoid plaque".
**Metacinema**
Metacinema:
Metacinema, also meta-cinema, is a mode of filmmaking in which the film informs the audience that they are watching a work of fiction. Metacinema often references its own production, working against narrative conventions that aim to maintain the audience's suspension of disbelief. Elements of metacinema include scenes where characters discuss the making of the film or where production equipment and facilities are shown. It is analogous to metafiction in literature.
History:
Examples of metacinema date back to the early days of narrative filmmaking. In the 1940s, backstage musicals and comedies like Road to Singapore (Victor Schertzinger, 1940) and Hellzapoppin' (H. C. Potter, 1941) exhibited a vogue for exploration of the medium of film at a time when Hollywood classicism dominated. Metacinema can be identified in the art cinema of the 1960s, such as 8½ (Federico Fellini, 1963) and The Passion of Anna (Ingmar Bergman, 1969), and it can often be found in the self-reflexive filmmaking of the French New Wave in films like Contempt (Jean-Luc Godard, 1963) and Day for Night (François Truffaut, 1973). Other examples include F for Fake (Orson Welles, 1973) and Through the Olive Trees (Abbas Kiarostami, 1994).

Community (2009–2015) is a sitcom with elements of metacinema, particularly through the character of Abed Nadir (Danny Pudi), who comments on himself and his friends being in a sitcom, such as noting that they are in a bottle episode during the bottle episode "Cooperative Calligraphy" (Series 2: Episode 8); the episode "Messianic Myths and Ancient Peoples" (Series 2: Episode 5) consists of Abed making his own metacinematic film.
**Soft set**
Soft set:
Soft set theory is a generalization of fuzzy set theory that was proposed by Molodtsov in 1999 to deal with uncertainty in a parametric manner. A soft set is a parameterised family of sets; intuitively, it is "soft" because the boundary of the set depends on the parameters. Formally, a soft set over a universal set X and a set of parameters E is a pair (f, A), where A is a subset of E and f is a function from A to the power set of X. For each e in A, the set f(e) is called the value set of e in (f, A).
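As an illustration, the pair (f, A) can be modeled directly as a mapping from parameters to value sets. The universe of houses, the parameter names, and the `value_set` helper below are invented for the example, not part of the theory's standard notation:

```python
# A minimal sketch of a soft set (f, A) over a universe X.
# The houses and parameters are illustrative assumptions.

# Universe of objects under consideration.
X = {"h1", "h2", "h3", "h4"}

# Parameter set E and the chosen subset A of E.
E = {"expensive", "wooden", "modern", "cheap"}
A = {"wooden", "modern"}

# f maps each parameter e in A to its value set f(e), a subset of X.
f = {
    "wooden": {"h1", "h3"},
    "modern": {"h2", "h3", "h4"},
}

def value_set(soft_set, e):
    """Return the value set of parameter e in the soft set (f, A)."""
    f, A = soft_set
    if e not in A:
        raise KeyError(f"{e!r} is not a parameter of this soft set")
    return f[e]

# Every value set is a subset of the universe X.
assert all(v <= X for v in f.values())
print(sorted(value_set((f, A), "wooden")))  # ['h1', 'h3']
```

Because the boundary of the represented set changes with the parameter, the same soft set yields different value sets for "wooden" and "modern", which is the "parametric" character of the definition.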
One of the most important steps for the new theory of soft sets was to define mappings on soft sets, which was achieved in 2009 by the mathematicians Athar Kharal and Bashir Ahmad, with the results published in 2011. Soft sets have also been applied to the problem of medical diagnosis for use in medical expert systems.
Fuzzy soft sets have also been introduced. Mappings on fuzzy soft sets were defined and studied by Kharal and Ahmad. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Joseph J. Talavage**
Joseph J. Talavage:
Joseph J. Talavage was a Professor of Industrial Engineering at Purdue University. He received his Ph.D. degree in Systems Engineering from Case Institute of Technology in 1968. He published numerous research and technical papers on simulation methodology, including the development of a manufacturing decision support system, and the use of simulation to design improved hierarchical control systems for steel production. He was a consultant to numerous companies and governmental agencies, and was the prime developer of the microNET simulation language. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Clinical Schizophrenia & Related Psychoses**
Clinical Schizophrenia & Related Psychoses:
Clinical Schizophrenia & Related Psychoses is a quarterly peer-reviewed medical journal published by Walsh Medical Media. It covers research in all areas of psychiatry, especially schizophrenia, bipolar disorder, and related psychoses. The editor-in-chief is Peter F. Buckley (Virginia Commonwealth University).
Abstracting and indexing:
The journal is abstracted and indexed in EMBASE, EBSCO databases, and Scopus. From 2010 to 2019 it was also indexed in MEDLINE/PubMed.
**Bi-directional hypothesis of language and action**
Bi-directional hypothesis of language and action:
The bi-directional hypothesis of language and action proposes that the sensorimotor and language comprehension areas of the brain exert reciprocal influence over one another. This hypothesis argues that areas of the brain involved in movement and sensation, as well as movement itself, influence cognitive processes such as language comprehension. In addition, the reverse effect is argued, where it is proposed that language comprehension influences movement and sensation. Proponents of the bi-directional hypothesis of language and action conduct and interpret linguistic, cognitive, and movement studies within the framework of embodied cognition and embodied language processing. Embodied language developed from embodied cognition, and proposes that sensorimotor systems are not only involved in the comprehension of language, but that they are necessary for understanding the semantic meaning of words.
Development of the bi-directional hypothesis:
The theory that sensory and motor processes are coupled to cognitive processes stems from action-oriented models of cognition. These theories, such as the embodied and situated cognitive theories, propose that cognitive processes are rooted in areas of the brain involved in movement planning and execution, as well as areas responsible for processing sensory input, termed sensorimotor areas or areas of action and perception. According to action-oriented models, higher cognitive processes evolved from sensorimotor brain regions, thereby necessitating sensorimotor areas for cognition and language comprehension. With this organization, it was then hypothesized that action and cognitive processes exert influence on one another in a bi-directional manner: action and perception influence language comprehension, and language comprehension influences sensorimotor processes.
Although studied in a unidirectional manner for many years, the bi-directional hypothesis was first described and tested in detail by Aravena et al. These authors utilized the Action-Sentence Compatibility Effect (ACE), a task commonly used to study the relationship between action and language, to test the effects of performing simultaneous language comprehension and motor tasks on neural and behavioral signatures of movement and language comprehension. These authors proposed that these two tasks cooperate bi-directionally when compatible, and interfere bi-directionally when incompatible. For example, when the movement implied by the action language stimuli is compatible with the movement being performed by the subject, it was hypothesized that performance of both tasks would be enhanced. Neural evidence of the bi-directional hypothesis was demonstrated by this study, and the development of this hypothesis is ongoing.
Effects of language comprehension on systems of action:
Language comprehension tasks can exert influence over systems of action, both at the neural and behavioral level. This means that language stimuli influence both electrical activity in sensorimotor areas of the brain, as well as actual movement.
Neural activation:
Language stimuli influence electrical activity in sensorimotor areas of the brain that are specific to the bodily association of the words presented. This is referred to as semantic somatotopy: activation of the sensorimotor areas specific to the bodily association implied by the word. For example, when processing the meaning of the word "kick," the regions in the motor and somatosensory cortices that represent the legs become more active. Boulenger et al. demonstrated this effect by presenting subjects with action-related language while measuring neural activity using fMRI. Subjects were presented with action sentences that were either associated with the legs (e.g. "John kicked the object") or with the arms (e.g. "Jane grasped the object"). The medial region of the motor cortex, known to represent the legs, was more active when subjects were processing leg-related sentences, whereas the lateral region of the motor cortex, known to represent the arms, was more active with arm-related sentences. This body-part-specific increase in activation was exhibited about 3 seconds after presentation of the word, a time window thought to indicate semantic processing. In other words, this activation was associated with subjects comprehending the meaning of the word. This effect held true, and was even intensified, when subjects were presented with idiomatic sentences. Abstract sentences implying more figurative actions were used, either associated with the legs (e.g. "John kicked the habit") or the arms (e.g. "Jane grasped the idea"). Increased neural activation of leg motor regions was demonstrated with leg-related idiomatic sentences, whereas arm-related idiomatic sentences were associated with increased activation of arm motor regions. This activation was larger than that demonstrated by more literal sentences (e.g. "John kicked the object"), and was also present in the time window associated with semantic processing.

Action language not only activates body-part-specific areas of the motor cortex, but also influences neural activity associated with movement. This has been demonstrated during the Action-Sentence Compatibility Effect (ACE) task, a common test used to study the relationship between language comprehension and motor behavior. This task requires the subject to perform movements to indicate understanding of a sentence, such as moving to press a button or pressing a button with a specific hand posture, that are either compatible or incompatible with the movement implied by the sentence. For example, pressing a button with an open hand to indicate understanding of the sentence "Jane high-fived Jack" would be considered a compatible movement, as the sentence implies an open-handed posture. Motor potentials (MPs) are event-related potentials (ERPs) stemming from the motor cortex and are associated with the execution of movement. Enhanced MP amplitudes have been associated with precision and quickness of movements. Re-afferent potentials (RAPs) are another form of ERP, used as a marker of sensory feedback and attention. Both MPs and RAPs have been demonstrated to be enhanced during compatible ACE conditions. These results indicate that language can have a facilitatory effect on the excitability of neural sensorimotor systems. This has been referred to as semantic priming, indicating that language primes neural sensorimotor systems, altering excitability and movement.
Movement:
The ability of language to influence neural activity of motor systems also manifests itself behaviorally by altering movement. Semantic priming has been implicated in these behavioral changes, and has been used as evidence for the involvement of the motor system in language comprehension. The Action-Sentence Compatibility Effect (ACE) is indicative of these semantic priming effects. Understanding language that implies action may invoke motor facilitation, or prime the motor system, when the action or posture being performed to indicate language comprehension is compatible with the action or posture implied by the language. Compatible ACE tasks have been shown to lead to shorter reaction times. This effect has been demonstrated on various types of movements, including hand posture during button pressing, reaching, and manual rotation.

Language stimuli can also prime the motor system simply by describing objects that are commonly manipulated. In a study performed by Masson et al., subjects were presented with sentences that implied non-physical, abstract action with an object (e.g. "John thought about the calculator" or "Jane remembered the thumbtack"). After presentation of the language stimuli, subjects were cued to perform either functional gestures, which are typically made when using the object described in the sentence (e.g. poking for calculator sentences), or volumetric gestures, which are more indicative of whole-hand posture (e.g. a horizontal grasp for calculator sentences). Target gestures were either compatible or incompatible with the described object, and were cued at two different time points, early and late. Response latencies for performing compatible functional gestures significantly decreased at both time points, whereas latencies were significantly lower for compatible volumetric gestures only in the late cue condition.
These results indicate that descriptions of abstract interactions with objects automatically (at the early time point) generate motor representations of functional gestures, priming the motor system and increasing response speed. The specificity of the enhanced motor responses to the gesture–object interaction also highlights the importance of the motor system in semantic processing, as this enhanced motor response was dependent on the meaning of the word.

A study performed by Olmstead et al., described in detail elsewhere, demonstrates more concretely the influence that the semantics of action language can have on movement coordination. Briefly, this study investigated the effects of action language on the coordination of rhythmic bimanual hand movements. Subjects were instructed to move two pendulums, one with each hand, either in-phase (pendulums at the same point in their cycle, a phase difference of roughly 0 degrees) or anti-phase (pendulums at opposite points in their cycle, a phase difference of roughly 180 degrees). Robust behavioral studies have revealed that these two phase states are the two stable relative phase states, or the two coordination patterns that produce stable movement. This pendulum-swinging task was performed as subjects judged sentences for their plausibility; subjects were asked to indicate whether or not each presented sentence made logical sense. Plausible sentences described actions that could be performed by a human using the arms, hands, and/or fingers ("He is swinging the bat"), or actions that could not be performed ("The barn is housing the goat"). Implausible sentences also used similar action verbs ("He is swinging the hope"). Plausible, performable sentences led to a significant change in the relative phase shift of the bimanual pendulum task.
The coordination of the movement was altered by action language stimuli, as the relative phase shift that produced stable movement was significantly different than in the non-performable sentence and no language stimuli conditions. This development of new stable states has been used to imply a reorganization of the motor system utilized to plan and execute this movement, and supports the bi-directional hypothesis by demonstrating an effect of action language on movement.
Effects of systems of action on language comprehension:
The bi-directional hypothesis of action and language proposes that altering the activity of motor systems, either through altered neural activity or actual movement, influences language comprehension. Neural activity in specific areas of the brain can be altered using transcranial magnetic stimulation (TMS), or by studying patients with neuropathologies leading to specific sensory and/or motor deficits. Movement is also used to alter the activity of neural motor systems, increasing overall excitability of motor and pre-motor areas.
Neural activation:
Altered neural activity of motor systems has been demonstrated to influence language comprehension. One such study was performed by Pulvermüller et al. TMS was used to increase the excitability of either the leg region or the arm region of the motor cortex. The authors stimulated the left motor cortex, known to be more closely involved in language processing in right-handed individuals, and the right motor cortex, and included a sham condition in which stimulation was prevented by a plastic block placed between the coil and the skull. During the stimulation protocols, subjects were shown 50 arm, 50 leg, 50 distractor (no bodily relation), and 100 pseudo- (not real) words. Subjects were asked to indicate recognition of a meaningful word by moving their lips, and response times were measured. It was found that stimulation of the left leg region of the motor cortex significantly reduced response times for recognition of leg words as compared to arm words, whereas the reverse was true for stimulation of the arm region. Stimulation of the right motor cortex, as well as sham stimulation, did not exhibit these effects. Therefore, somatotopically specific stimulation of the left motor cortex facilitated word comprehension in a body-part-specific manner, where stimulation of the leg and arm regions led to enhanced comprehension of leg and arm words, respectively. This study has been used as evidence for the bi-directional hypothesis of language and action, as it showcases that manipulating motor cortex activity alters language comprehension in a semantically specific manner.

A similar experiment has been performed on the articulatory motor cortex, the mouth and lip regions of the motor cortex used in the production of words. Two categories of words were used as language stimuli: words that involve the lips for production (e.g. "pool") or the tongue (e.g. "tool").
Subjects listened to the words, were shown pairs of pictures, and were asked to indicate which picture matched the word they heard with a button press. TMS was applied prior to presentation of the language stimuli to selectively facilitate either the lip or tongue regions of the left motor cortex; these two TMS conditions were compared to a control condition in which TMS was not applied. It was found that stimulation of the lip region of the motor cortex led to a significantly decreased response time for lip words as compared to tongue words. In addition, during recognition of tongue words, reduced reaction times were seen with tongue TMS as compared to lip TMS and no TMS. Although this same effect was not seen with lip words, the authors attribute this to the complexity of tongue as opposed to lip movements, and the increased difficulty of tongue words as opposed to lip words. Overall, this study demonstrates that activity in the articulatory motor cortex influences the comprehension of single spoken words, and highlights the importance of the motor cortex in speech comprehension.

Lesions of sensory and motor areas have also been studied to elucidate the effects of sensorimotor systems on language comprehension. One such example is the patient JR, who has a lesion in areas of the auditory association cortex implicated in processing auditory information. This patient shows significant impairments in conceptual and perceptual processing of sound-related language and objects. For example, processing the meaning of words describing sound-related objects (e.g., "bell") was significantly impaired in JR as compared to non-sound-related objects (e.g., "armchair"). These data suggest that damage to sensory regions involved in processing auditory information specifically impairs processing of sound-related conceptual information, highlighting the necessity of sensory systems for language comprehension.
Movement:
Movement has been shown to influence language comprehension. This has been demonstrated by priming motor areas with movement, increasing the excitability of motor and pre-motor areas associated with the body part being moved. Motor engagement of a specific body part decreases neural activity in language processing areas when processing words related to that body part. This decreased neural activity is a feature of semantic priming, and suggests that activation of specific motor areas through movement can facilitate language comprehension in a semantically dependent manner. An interference effect has also been demonstrated: during incompatible ACE conditions, neural signatures of language comprehension have been shown to be inhibited. Combined, these pieces of evidence have been used to support a semantic role of the motor system.
Movement can also inhibit language comprehension tasks, particularly tasks of verbal working memory. When subjects were asked to memorize and verbally recall four-word sequences of either arm or leg action words, performing complex, rhythmic movements after presentation of the word sequences interfered with memory performance. This performance deficit was body-part specific: movement of the legs impaired recall of leg words, and movement of the arms impaired recall of arm words. These data indicate that sensorimotor systems exert cortically specific "inhibitory causal effects" on memory of action words, as the impairment was specific to motor engagement and the bodily association of the words.
Organization of neural substrates:
Relating cognitive functions to brain structures is done in the field of cognitive neuroscience. This field attempts to map cognitive processes, such as language comprehension, onto neural activation of specific brain structures.

The bi-directional hypothesis of language and action requires that action and language processes have overlapping brain structures, or shared neural substrates, thereby necessitating motor areas for language comprehension. The neural substrates of embodied cognition are often studied using the cognitive tasks of object recognition, action recognition, working memory tasks, and language comprehension tasks. These networks have been elucidated with behavioral, computational, and imaging studies, but the discovery of their exact organization is ongoing.
Circuit organization:
It has been proposed that the control of movement is organized hierarchically, where movement is not controlled by individually controlling single neurons, but where movements are represented at a gross, more functional level. A similar concept has been applied to the control of cognition, resulting in the theory of cognitive circuits. This theory proposes that there are functional units of neurons in the brain that are strongly connected and act coherently as a unit during cognitive tasks. These functional units of neurons, or "thought circuits," have been referred to as the "building blocks of cognition". Thought circuits are believed to be originally formed from basic anatomical connections that are strengthened by correlated activity through Hebbian learning and plasticity. Formation of these neural networks has been demonstrated with computational models using known anatomical connections and Hebbian learning principles. For example, sensory stimulation through interaction with an object activates a distributed network of neurons in the cortex. Repeated activation of these neurons, through Hebbian plasticity, may strengthen their connections and form a circuit. This sensory circuit may then be activated during the perception of known objects.

This same concept has been applied to action and language, as understanding the meaning of action words requires an understanding of the action itself. During language and motor skill development, one likely learns to associate an action word with an action or a sensation. This action or sensation, and the correlated sensorimotor areas involved, are then incorporated into the neural representation of that concept. This leads to semantic topography, or the activation of motor areas related to the meaning and bodily association of action language.
These networks may be organized into "kernels," areas highly activated by language comprehension tasks, and "halos," brain areas in the periphery of networks that experience slightly increased activation. It has been hypothesized that language comprehension is housed in the left-perisylvian neuronal circuit, forming the "kernel," and sensorimotor regions are peripherally activated during semantic processing of action language, forming the "halo".
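The Hebbian strengthening invoked by these circuit models can be sketched as a toy weight update in which correlated pre- and postsynaptic activity increases connection strength. The learning rate, the activity values, and the `hebbian_update` helper are illustrative assumptions, not a model from the literature discussed here:

```python
# Toy sketch of Hebbian strengthening: units that fire together
# develop stronger connections. All parameters are illustrative.

def hebbian_update(w, pre, post, eta=0.1):
    """Strengthen weight w in proportion to correlated pre/post activity."""
    return w + eta * pre * post

# A "word" unit and a "motor" unit repeatedly co-activate (activity 1.0),
# as when an action word is learned alongside the action itself.
w = 0.0
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)

print(round(w, 2))  # → 1.0
```

Under this rule, a connection only grows when both units are active together, which is how repeated word–action pairings could bind sensorimotor areas into the "halo" of a language circuit.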
Evidence for shared neural networks:
Many studies demonstrating a role of the motor system in semantic processing of action language have been used as evidence for a shared neural network between action and language comprehension processes. For example, facilitated activity in language comprehension areas (evidence of semantic priming) during movement of a body part associated with the action word has been used as evidence for this shared neural network. A more specific method for identifying whether certain areas of the brain are necessary for a cognitive task is to demonstrate impaired performance of that task following a functional change to the brain area of interest. A functional change may involve a lesion, altered excitability through stimulation, or utilization of the area for another task. According to this theory, there is only a finite amount of neural real estate available for each task. If two tasks share a neural network, there will be competition for the associated neural substrates, and the performance of each task will be inhibited when they are performed simultaneously. Using this theory, proponents of the bi-directional hypothesis postulated that performance of verbal working memory of action words would be impaired by movement of the concordant body part. This has been demonstrated with the selective impairment of memorization of arm and leg words when coupled with arm and leg movements, respectively. This implies that the neural network for verbal working memory is specifically tied to the motor systems associated with the body part implied by the word. This semantic topography has been suggested to provide evidence that action language shares a neural network with sensorimotor systems, thereby supporting the bi-directional hypothesis of language and action.
**Fibular hemimelia**
Fibular hemimelia:
Fibular hemimelia or longitudinal fibular deficiency is "the congenital absence of the fibula and it is the most common congenital absence of long bone of the extremities." It is the shortening of the fibula at birth, or the complete lack thereof. Fibular hemimelia often causes severe knee instability due to deficiencies of the ligaments. Severe forms of fibular hemimelia can result in a malformed ankle with limited motion and stability. Fusion or absence of two or more toes is also common. In humans, the disorder can be noted by ultrasound in utero to prepare for amputation after birth or complex bone-lengthening surgery. The amputation usually takes place at six months, with removal of portions of the legs to prepare them for prosthetic use. The other treatments, which include repeated corrective osteotomies and leg-lengthening surgery (Ilizarov apparatus), are costly and associated with residual deformity.
Characteristics:
Characteristics are:
A fibrous band instead of the fibula
A short, deformed leg
Absence of the lateral part of the ankle joint (due to absence of the distal end of the fibula); what is left is unstable, and the foot has an equinovalgus deformity
Possible absence of part of the foot, requiring surgical intervention to bring the foot into normal function, or amputation
Possible absence of one or two toes on the foot
Possible conjoined toes or metatarsals

Partial or total absence of the fibula is among the most frequent limb anomalies. It is the most common long bone deficiency and the most common skeletal deformity in the leg. It is most often unilateral (present only on one side), but may also be bilateral (affecting both legs). Paraxial fibular hemimelia, in which only the postaxial portion of the limb is affected, is the most common manifestation. It is commonly seen as a complete terminal deficiency, in which the lateral rays of the foot are also affected. Hemimelia can also be intercalary, in which case the foot remains unaffected. Although the missing bone is easily identified, this condition is not simply a missing bone. Males are affected twice as often as females in most series.
Causes:
The cause of fibular hemimelia is unclear. There have reportedly been some instances of the condition running in families, but this does not account for all cases. Maternal viral infections, embryonic trauma, teratogenic environmental exposures, or vascular dysgenesis (failure of the embryo to form a satisfactory blood supply) between four and seven weeks of gestation are considered possible causes.

In an experimental mouse model, a change in the expression of a homeobox gene led to similar, but bilateral, fibular defects.
Notable people:
- Aled Davies – Welsh Paralympic athlete
- Jessica Long – American Paralympic swimmer
- Barry McClements – Northern Irish Paralympic and Commonwealth Games swimmer
- Liam Malone – New Zealand Paralympic athlete
- Aimee Mullins – American Paralympic athlete, actress, and fashion model
- Oscar Pistorius – Former South African athlete and convicted murderer
- Long Jeanne Silver – American former pornographic actress
- Erik Stolhanske – American actor, writer, director, producer
- Hunter Woodhall – American Paralympic runner
**Biological network**
Biological network:
A biological network is a method of representing systems as complex sets of binary interactions or relations between various biological entities. In general, networks or graphs are used to capture relationships between entities or objects. A typical graph representation consists of a set of nodes connected by edges.
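As a hedged illustration, such a network can be held in a few lines of code; the protein names below are placeholder examples, not data from any real interaction database:

```python
# Sketch: a biological network as an undirected graph, stored as an
# adjacency-set dictionary. The interactions listed are hypothetical.
ppi_edges = [("TP53", "MDM2"), ("TP53", "EP300"), ("MDM2", "UBE3A")]

adjacency = {}
for a, b in ppi_edges:
    adjacency.setdefault(a, set()).add(b)
    adjacency.setdefault(b, set()).add(a)  # undirected: record both directions

print(sorted(adjacency["TP53"]))  # → ['EP300', 'MDM2']
```

The adjacency-set form makes neighbor lookups constant-time, which is the usual starting point for the centrality and community analyses discussed later in this article.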
History of networks:
As early as 1736, Leonhard Euler analyzed a real-world problem, the Seven Bridges of Königsberg, which established the foundation of graph theory. From the 1930s to the 1950s, the study of random graphs was developed. During the mid-1990s, it was discovered that many different types of "real" networks have structural properties quite different from those of random networks. In the late 2000s, scale-free and small-world networks began shaping the emergence of systems biology, network biology, and network medicine. In 2014, graph-theoretical methods were used by Frank Emmert-Streib to analyze biological networks.
In the 1980s, researchers started viewing DNA or genomes as the dynamic storage of a language system with precise computable finite states represented as a finite state machine. Recent complex systems research has also suggested some far-reaching commonality in the organization of information in problems from biology, computer science, and physics.
Networks in biology:
Protein–protein interaction networks:
Protein–protein interaction networks (PINs) represent the physical relationships among proteins present in a cell, where proteins are nodes and their interactions are undirected edges. Due to their undirected nature, it is difficult to identify all the proteins involved in an interaction. Protein–protein interactions (PPIs) are essential to cellular processes and are also the most intensely analyzed networks in biology. PPIs can be discovered by various experimental techniques, among which the yeast two-hybrid system is commonly used for the study of binary interactions. Recently, high-throughput studies using mass spectrometry have identified large sets of protein interactions. Many international efforts have resulted in databases that catalog experimentally determined protein–protein interactions, including the Human Protein Reference Database, the Database of Interacting Proteins, the Molecular Interaction Database (MINT), IntAct, and BioGRID. At the same time, multiple computational approaches have been proposed to predict interactions. FunCoup and STRING are examples of databases in which protein–protein interactions inferred from multiple lines of evidence are gathered and made available for public use.
Recent studies have indicated the conservation of molecular networks through deep evolutionary time. Moreover, it has been discovered that proteins with high degrees of connectedness are more likely to be essential for survival than proteins with lesser degrees. This observation suggests that the overall composition of the network (not simply interactions between protein pairs) is vital for an organism's overall functioning.
Gene regulatory networks (DNA–protein interaction networks):
The genome encodes thousands of genes whose products (mRNAs, proteins) are crucial to the various processes of life, such as cell differentiation, cell survival, and metabolism. Genes produce such products through a process called transcription, which is regulated by a class of proteins called transcription factors. For instance, the human genome encodes almost 1,500 DNA-binding transcription factors that regulate the expression of more than 20,000 human genes. The complete set of gene products and the interactions among them constitutes gene regulatory networks (GRNs). GRNs regulate the levels of gene products within the cell and, in turn, the cellular processes.
GRNs are represented with genes and transcription factors as nodes and the relationships between them as edges. These edges are directional, representing the regulatory relationship between the two ends of the edge. For example, a directed edge from gene A to gene B indicates that A regulates the expression of B. These directional edges can represent not only the activation of gene expression but also its inhibition.
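A minimal sketch of such a signed, directed edge list; the transcription-factor and gene names are invented for illustration:

```python
# Sketch: a directed, signed gene regulatory network. +1 marks activation,
# -1 marks inhibition. All regulator/target names are hypothetical.
grn = {
    ("TF_A", "gene_B"): +1,   # TF_A activates gene_B
    ("TF_A", "gene_C"): -1,   # TF_A represses gene_C
    ("TF_D", "gene_B"): +1,
}

def regulators_of(target):
    """Return {regulator: sign} for every edge pointing at target."""
    return {src: sign for (src, tgt), sign in grn.items() if tgt == target}

print(regulators_of("gene_B"))  # {'TF_A': 1, 'TF_D': 1}
```

Storing the sign on the edge, rather than only the connection, is what distinguishes a GRN representation from the undirected PIN representation above.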
GRNs are usually constructed by utilizing the gene regulation knowledge available from databases such as Reactome and KEGG. High-throughput measurement technologies, such as microarray, RNA-Seq, ChIP-chip, and ChIP-seq, have enabled the accumulation of large-scale transcriptomics data, which can help in understanding complex gene regulation patterns.
Gene co-expression networks (transcript–transcript association networks):
Gene co-expression networks can be perceived as association networks between variables that measure transcript abundances. These networks have been used to provide a systems-biologic analysis of DNA microarray data, RNA-seq data, miRNA data, etc. Weighted gene co-expression network analysis is extensively used to identify co-expression modules and intramodular hub genes. Co-expression modules may correspond to cell types or pathways, while highly connected intramodular hubs can be interpreted as representatives of their respective modules.
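The association step can be sketched as follows: compute the Pearson correlation between expression profiles and keep an edge when the absolute correlation clears a threshold. The expression values and the 0.9 cutoff below are illustrative choices, not values from any real study:

```python
import statistics

# Sketch: build a co-expression edge when |Pearson r| across samples
# exceeds a threshold. Expression values are made up.
expression = {
    "geneX": [1.0, 2.0, 3.0, 4.0],
    "geneY": [2.1, 3.9, 6.2, 8.0],   # rises with geneX
    "geneZ": [5.0, 1.0, 4.0, 2.0],   # unrelated
}

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

genes = sorted(expression)
edges = [(a, b) for i, a in enumerate(genes) for b in genes[i + 1:]
         if abs(pearson(expression[a], expression[b])) > 0.9]
print(edges)  # only the strongly co-varying pair survives the cutoff
```

Real weighted co-expression analyses soft-threshold the correlation rather than hard-cutting it, but the pairwise-correlation core is the same.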
Metabolic networks:
Cells break down the food and nutrients into small molecules necessary for cellular processing through a series of biochemical reactions. These biochemical reactions are catalyzed by enzymes. The complete set of all these biochemical reactions in all the pathways represents the metabolic network. Within the metabolic network, the small molecules take the roles of nodes, and they could be either carbohydrates, lipids, or amino acids. The reactions which convert these small molecules from one form to another are represented as edges. It is possible to use network analyses to infer how selection acts on metabolic pathways.
Signaling networks:
Signals are transduced within cells or between cells, forming complex signaling networks that play a key role in tissue structure. For instance, the MAPK/ERK pathway is transduced from the cell surface to the cell nucleus by a series of protein–protein interactions, phosphorylation reactions, and other events. Signaling networks typically integrate protein–protein interaction networks, gene regulatory networks, and metabolic networks. Single-cell sequencing technologies allow the extraction of intercellular signaling; an example is NicheNet, which models intercellular communication by linking ligands to target genes.
Neuronal networks:
The complex interactions in the brain make it a perfect candidate to apply network theory. Neurons in the brain are deeply connected with one another, and this results in complex networks being present in the structural and functional aspects of the brain. For instance, small-world network properties have been demonstrated in connections between cortical regions of the primate brain or during swallowing in humans. This suggests that cortical areas of the brain are not directly interacting with each other, but most areas can be reached from all others through only a few interactions.
Food webs:
All organisms are connected through feeding interactions. If a species eats or is eaten by another species, they are connected in an intricate food web of predator and prey interactions. The stability of these interactions has been a long-standing question in ecology. That is to say, if certain individuals are removed, what happens to the network (i.e., does it collapse or adapt)? Network analysis can be used to explore food web stability and determine if certain network properties result in more stable networks. Moreover, network analysis can be used to determine how selective removals of species will influence the food web as a whole. This is especially important considering the potential species loss due to global climate change.
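A toy version of such a removal experiment, assuming a made-up five-species web treated as an undirected graph of feeding links; counting connected components before and after deleting a species shows whether the web fragments:

```python
from collections import deque

# Sketch: does removing one species fragment the web? Species and links
# are hypothetical.
links = [("grass", "hare"), ("hare", "fox"),
         ("algae", "insect"), ("insect", "frog"), ("frog", "fox")]

def components_after_removal(links, removed):
    adj = {}
    for a, b in links:
        if removed in (a, b):
            continue
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, parts = set(), 0
    for node in adj:
        if node in seen:
            continue
        parts += 1
        queue = deque([node])
        while queue:  # breadth-first flood fill of one component
            n = queue.popleft()
            if n in seen:
                continue
            seen.add(n)
            queue.extend(adj[n] - seen)
    return parts

print(components_after_removal(links, None))   # intact web: 1 component
print(components_after_removal(links, "fox"))  # removing fox splits it in 2
```

Real food-web robustness studies repeat this over many removal orders (random, most-connected-first, and so on) and track how quickly the web disintegrates.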
Between-species interaction networks:
In biology, pairwise interactions have historically been the focus of intense study. With the recent advances in network science, it has become possible to scale up pairwise interactions to include individuals of many species involved in many sets of interactions to understand the structure and function of larger ecological networks. The use of network analysis can allow for both the discovery and understanding of how these complex interactions link together within the system's network, a property that has previously been overlooked. This powerful tool allows for the study of various types of interactions (from competitive to cooperative) using the same general framework. For example, plant–pollinator interactions are mutually beneficial and often involve many different species of pollinators as well as many different species of plants. These interactions are critical to plant reproduction and thus the accumulation of resources at the base of the food chain for primary consumers, yet these interaction networks are threatened by anthropogenic change. The use of network analysis can illuminate how pollination networks work and may, in turn, inform conservation efforts. Within pollination networks, nestedness (i.e., specialists interact with a subset of species that generalists interact with), redundancy (i.e., most plants are pollinated by many pollinators), and modularity play a large role in network stability. These network properties may actually work to slow the spread of disturbance effects through the system and potentially buffer the pollination network from anthropogenic changes somewhat. More generally, the structure of species interactions within an ecological network can tell us something about the diversity, richness, and robustness of the network.
Researchers can even compare current constructions of species interactions networks with historical reconstructions of ancient networks to determine how networks have changed over time. Much research into these complex species interactions networks is highly concerned with understanding what factors (e.g., species richness, connectance, nature of the physical environment) lead to network stability.
Within-species interaction networks:
Network analysis provides the ability to quantify associations between individuals, which makes it possible to infer details about the network as a whole at the species and/or population level. One of the most attractive features of the network paradigm is that it provides a single conceptual framework in which the social organization of animals at all levels (individual, dyad, group, population) and for all types of interaction (aggressive, cooperative, sexual, etc.) can be studied. Researchers interested in ethology across many taxa, from insects to primates, are starting to incorporate network analysis into their research. Researchers interested in social insects (e.g., ants and bees) have used network analyses to better understand the division of labor, task allocation, and foraging optimization within colonies. Other researchers are interested in how specific network properties at the group and/or population level can explain individual-level behaviors. Studies have demonstrated how animal social network structure can be influenced by factors ranging from characteristics of the environment to characteristics of the individual, such as developmental experience and personality. At the level of the individual, the patterning of social connections can be an important determinant of fitness, predicting both survival and reproductive success. At the population level, network structure can influence the patterning of ecological and evolutionary processes, such as frequency-dependent selection and disease and information transmission. For instance, a study on wire-tailed manakins (a small passerine bird) found that a male's degree in the network largely predicted the ability of the male to rise in the social hierarchy (i.e., eventually obtain a territory and matings).
In bottlenose dolphin groups, an individual's degree and betweenness centrality values may predict whether or not that individual will exhibit certain behaviors, like the use of side flopping and upside-down lobtailing to lead group traveling efforts; individuals with high betweenness values are more connected and can obtain more information, and thus are better suited to lead group travel and therefore tend to exhibit these signaling behaviors more than other group members. Social network analysis can also be used to describe the social organization within a species more generally, which frequently reveals important proximate mechanisms promoting the use of certain behavioral strategies. These descriptions are frequently linked to ecological properties (e.g., resource distribution). For example, network analyses revealed subtle differences in the group dynamics of two related equid fission–fusion species, Grevy's zebra and onagers, living in variable environments; Grevy's zebras show distinct preferences in their association choices when they fission into smaller groups, whereas onagers do not. Similarly, researchers interested in primates have also utilized network analyses to compare social organizations across the diverse primate order, suggesting that using network measures (such as centrality, assortativity, modularity, and betweenness) may be useful in terms of explaining the types of social behaviors we see within certain groups and not others. Finally, social network analysis can also reveal important fluctuations in animal behaviors across changing environments.
For example, network analyses in female chacma baboons (Papio hamadryas ursinus) revealed important dynamic changes across seasons that were previously unknown; instead of creating stable, long-lasting social bonds with friends, baboons were found to exhibit more variable relationships which were dependent on short-term contingencies related to group-level dynamics as well as environmental variability. Changes in an individual's social network environment can also influence characteristics such as 'personality': for example, social spiders that huddle with bolder neighbors also tend to increase in boldness. This is a very small set of broad examples of how researchers can use network analysis to study animal behavior. Research in this area is currently expanding very rapidly, especially since the broader development of animal-borne tags and computer vision can be used to automate the collection of social associations. Social network analysis is a valuable tool for studying animal behavior across all animal species and has the potential to uncover new information about animal behavior and social ecology that was previously poorly understood.
DNA–DNA chromatin networks:
Within a nucleus, DNA is constantly in motion. Perpetual actions such as genome folding and cohesin-driven loop extrusion morph the shape of a genome in real time. The spatial location of strands of chromatin relative to each other plays an important role in the activation or suppression of certain genes. DNA–DNA chromatin networks help biologists to understand these interactions by analyzing commonalities among different loci. The size of a network can vary significantly, from a few genes to several thousand, and thus network analysis can provide vital support in understanding relationships among different areas of the genome. As an example, analysis of spatially similar loci within the organization of a nucleus with Genome Architecture Mapping (GAM) can be used to construct a network of loci, with edges representing highly linked genomic regions.
The first graphic showcases the Hist1 region of the mm9 mouse genome, with each node representing a genomic locus. Two nodes are connected by an edge if their linkage disequilibrium is greater than the average across all 81 genomic windows. The locations of the nodes within the graphic are randomly selected, and the methodology of choosing edges yields a simple but rudimentary graphical representation of the relationships in the dataset. The second visual exemplifies the same information as the first; however, the network starts with every locus placed sequentially in a ring configuration. It then pulls nodes together using linear interpolation by their linkage as a percentage. The figure illustrates strong connections between the center genomic windows as well as the edge loci at the beginning and end of the Hist1 region.
Modelling biological networks:
Introduction:
To draw useful information from a biological network, an understanding of the statistical and mathematical techniques of identifying relationships within a network is vital. Procedures to identify association, communities, and centrality within nodes in a biological network can provide insight into the relationships of whatever the nodes represent, whether they are genes, species, etc. Formulation of these methods transcends disciplines and relies heavily on graph theory, computer science, and bioinformatics.
Association:
There are many different ways to measure the relationships of nodes when analyzing a network. In many cases, the measure used to find nodes that share similarity within a network is specific to the application in which it is used. One of the measures that biologists utilize is correlation, which specifically centers on the linear relationship between two variables. As an example, weighted gene co-expression network analysis uses Pearson correlation to analyze linked gene expression and understand genetics at a systems level. Another measure of association is linkage disequilibrium, which describes the non-random association of genetic sequences among loci on a given chromosome. An example of its use is in detecting relationships in GAM data across genomic intervals based upon detection frequencies of certain loci.
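The two-locus linkage disequilibrium coefficient mentioned above can be computed directly from haplotype frequencies, D = p(AB) − p(A)·p(B); the haplotype counts below are invented for illustration:

```python
# Sketch: the classic two-locus linkage disequilibrium coefficient
# D = p(AB) - p(A)*p(B), from made-up haplotype counts.
haplotypes = {"AB": 40, "Ab": 10, "aB": 10, "ab": 40}
n = sum(haplotypes.values())

p_AB = haplotypes["AB"] / n
p_A = (haplotypes["AB"] + haplotypes["Ab"]) / n  # allele A frequency
p_B = (haplotypes["AB"] + haplotypes["aB"]) / n  # allele B frequency

D = p_AB - p_A * p_B
print(D)  # 0.4 - 0.5*0.5 = 0.15: A and B co-occur more often than chance
```

A value of D = 0 would mean the loci assort independently; network constructions like the Hist1 example above keep an edge only when the measured disequilibrium is unusually high.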
Centrality:
The concept of centrality can be extremely useful when analyzing biological network structures. There are many different methods to measure centrality, such as betweenness, degree, eigenvector, and Katz centrality. Each type of centrality technique can provide different insights on nodes in a particular network; however, they all aim to measure the prominence of a node in a network.
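Two of these measures can be sketched on a toy four-node path graph: degree centrality is simply the neighbor count, while (unnormalized) betweenness can be accumulated from shortest-path counts, using the identity that the number of shortest s–t paths through v equals σ(s,v)·σ(v,t) whenever v lies on some shortest s–t path:

```python
from collections import deque

# Sketch: degree and unnormalized betweenness centrality on a toy
# undirected path graph a - b - c - d.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}

def bfs_counts(source):
    """Return (distance, shortest-path count) from source to every node."""
    dist, sigma = {source: 0}, {source: 1}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w], sigma[w] = dist[u] + 1, 0
                queue.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

info = {v: bfs_counts(v) for v in adj}
nodes = sorted(adj)
betweenness = {v: 0.0 for v in adj}
for i, s in enumerate(nodes):
    for t in nodes[i + 1:]:
        ds, ss = info[s]
        dt, st = info[t]
        for v in adj:
            if v in (s, t):
                continue
            if ds[v] + dt[v] == ds[t]:  # v lies on a shortest s-t path
                betweenness[v] += ss[v] * st[v] / ss[t]

degree = {v: len(adj[v]) for v in adj}
print(degree, betweenness)  # the interior nodes b and c dominate both
```

Production analyses use Brandes' algorithm for the same quantity, but this pairwise form makes the definition explicit on a small graph.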
In 2005, researchers at Harvard Medical School utilized centrality measures with the yeast protein interaction network. They found that proteins exhibiting high betweenness centrality were more essential and that betweenness corresponded closely to a given protein's evolutionary age.
Communities:
Studying the community structure of a network by subdividing groups of nodes into like regions can be an integral tool for bioinformatics when exploring data as a network. A food web of the Secaucus High School Marsh exemplifies the benefits of grouping, as the relationships between nodes are far easier to analyze with well-made communities. While the first graphic is hard to visualize, the second provides a better view of the pockets of highly connected feeding relationships that would be expected in a food web. Community detection remains an active area of research. Scientists and graph theorists continuously discover new ways of partitioning networks, and thus a plethora of different algorithms exist for creating these relationships. Like many other tools that biologists utilize to understand data with network models, every algorithm can provide its own unique insight and may vary widely in aspects such as accuracy or time complexity of calculation.
In 2002, a food web of marine mammals in the Chesapeake Bay was divided into communities by biologists using a community detection algorithm based on neighbors of nodes with high degree centrality. The resulting communities displayed a sizable split in pelagic and benthic organisms. Two very common community detection algorithms for biological networks are the Louvain Method and Leiden Algorithm.
The Louvain method is a greedy algorithm that attempts to maximize modularity, which favors heavy edges within communities and sparse edges between them, within a set of nodes. The algorithm starts with each node in its own community and iteratively moves each node to whichever neighboring community yields the highest gain in modularity. Once no modularity increase can occur by joining nodes to a community, a new weighted network is constructed with communities as nodes, edges representing between-community edges, and loops representing edges within a community. The process continues until no increase in modularity occurs. While the Louvain method provides good community detection, it is limited in a few ways. By focusing mainly on maximizing a given measure of modularity, it may be led to craft badly connected communities, degrading a model for the sake of maximizing a modularity metric; however, the Louvain method performs fairly well and is comparatively easy to understand relative to many other community detection algorithms. The Leiden algorithm expands on the Louvain method by providing a number of improvements. When joining nodes to a community, only neighborhoods that have been recently changed are considered, which greatly improves the speed of merging nodes. Another optimization is in the refinement phase, in which the algorithm randomly chooses a community for a node to merge with from a set of candidates rather than deterministically taking the single best one. This allows for greater depth in exploring communities, whereas Louvain focuses solely on maximizing the chosen modularity. The Leiden algorithm, while more complex than Louvain, performs faster, detects communities better, and can be a valuable tool for identifying groups.
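Both algorithms score candidate partitions with Newman modularity. A sketch of that objective on a toy graph of two triangles joined by a single bridge edge shows why the "two communities" partition wins; the graph and partitions here are illustrative:

```python
# Sketch: Newman modularity Q for a given partition — the quantity the
# Louvain and Leiden methods try to maximize. Toy graph: two triangles
# joined by one bridge edge.
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("d", "e"), ("e", "f"), ("d", "f"),
         ("c", "d")]  # the bridge

def modularity(edges, communities):
    m = len(edges)
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    q = 0.0
    for comm in communities:
        intra = sum(1 for a, b in edges if a in comm and b in comm)
        deg_sum = sum(degree[v] for v in comm)
        q += intra / m - (deg_sum / (2 * m)) ** 2
    return q

split = [{"a", "b", "c"}, {"d", "e", "f"}]
lumped = [{"a", "b", "c", "d", "e", "f"}]
print(modularity(edges, split))   # ≈ 0.357: the triangles are found
print(modularity(edges, lumped))  # 0.0: one big community has no structure
```

Louvain and Leiden never evaluate all partitions; they greedily move nodes so that this Q keeps increasing, which is why badly connected communities can survive if they happen to score well.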
Books:
- E. Estrada, "The Structure of Complex Networks: Theory and Applications", Oxford University Press, 2011, ISBN 978-0-199-59175-6
- J. Krause, R. James, D. Franks, D. Croft, "Animal Social Networks", Oxford University Press, 2015, ISBN 978-0199679041
**Matchy-Matchy**
Matchy-Matchy:
Matchy-Matchy is an adjective used to describe something or someone that is very or excessively colour-coordinated. The term is commonly used in fashion blogs to describe an outfit that is too coordinated and consists of too many of the same styles of colors, patterns, fabrics, accessories, etc. "Matchy-matchy" was added to the Oxford Dictionary of English in 2010, along with 200 new words that were previously considered slang. According to some designers, matching too much is not a good thing. "Sometimes fashion has to reintroduce an idea that may have once been considered a bad taste," says Jane Shepherdson.
**Antagonistic assets**
Antagonistic assets:
Antagonistic assets are the opposite of complementary assets. They are defined as a combination of resources that jointly reduce value from the implementation of other resources. In other words, combining antagonistic assets produces an effect smaller than the sum of the individual effects of each resource.
Use in chemistry:
In chemistry, antagonistic assets refer to two substances that, when combined, weaken or eliminate each other's effectiveness. For example, the effects of alcohol can be blocked by caffeine, which mitigates alcohol's somnogenic and ataxic effects. Antagonistic assets are the basis of many antidotes, for example in cases of poisoning.
Use in management:
The notion of antagonistic assets has been proposed as a metaphor to describe resources available to businesses that have an effect opposite to that of complementary assets. The idea has mainly been used in the literature on social entrepreneurship, which is proposed to turn antagonistic assets into complementarities.
**Gibsonian ecological theory of development**
Gibsonian ecological theory of development:
The Gibsonian ecological theory of development is a theory of development that was created by American psychologist Eleanor J. Gibson during the 1960s and 1970s. Gibson emphasized the importance of environment and context in learning and, together with husband and fellow psychologist James J. Gibson, argued that perception was crucial as it allowed humans to adapt to their environments. Gibson stated that "children learn to detect information that specifies objects, events, and layouts in the world that they can use for their daily activities". Thus, humans learn out of necessity. Children are information "hunter–gatherers", gathering information in order to survive and navigate in the world.
Key concepts:
Gibson asserted that development was driven by a complex interaction between environmental affordances and the motivated humans who perceive them. For example, to an infant, different surfaces "afford" opportunities for walking, crawling, grasping, etc. As children gain motor skills, they discover new opportunities for movement and thus new affordances. The more chances they are given to perceive and interact with their environment, the more affordances they discover, and the more accurate their perceptions become.
Gibson identified four important aspects of human behavior that develop:

Agency—Self-control and intentionality in behavior. Agency is learning to control both one's own activity and external events. Babies learn at an early age that their actions have an effect on the environment. For example, babies were observed kicking their legs at a mobile hanging above them; they had discovered their kicking made the mobile move.
Prospectivity—Intentional, anticipatory, planful, future-oriented behaviors. For example, a baby will reach out to try to catch an object moving toward them because the baby can anticipate that the object will continue to move close enough to catch. In other words, the baby perceives that reaching out his/her hand will afford him/her catching the object.
Search for Order—Tendency to see order, regularity, and pattern to make sense of the world. For example, before 9 months, infants begin to recognize the strong-weak stress patterns in their native language.

Flexibility—Perception can adjust to new situations and bodily conditions (such as growth, improved motor skills, or a sprained ankle). For example, three-month-old infants lying under a mobile had a string attached to the right leg and then to the mobile so that when they moved their leg the mobile would move. When the string was switched to the left leg, the infants would easily shift to moving that leg to activate the mobile.
Perception is an ongoing, active process.
Methodology:
Gibson used experimental procedures while also attempting to retain ecological validity by simulating important features of the child's natural environment. In keeping with the idea of affordances, Gibson tried to provide multimodal stimulation for infants in these experiments (multiple kinds of objects, faces, or surfaces, for example) and ways of obtaining feedback through movement and exploration. One of Gibson's well-known perceptual experiments involved the construction of a "visual cliff" simulating a real cliff. Gibson and Walk placed infants near the cliff with their mothers on the other side. They found that infants perceived depth and were unwilling to crawl over the cliff at approximately 6–7 months. Later experiments showed that 12-month-old infants had learned to use their mothers' facial expressions as signals of potential affordances. If mothers smiled, infants were more likely to crawl over the "dangerous" cliff, but if mothers made a frightened face, infants avoided the cliff.
Criticism:
Gibson's theory has been criticized for its "unclear account of cognition". Gibson's theory pertains to direct perception and does not take into account that behaviors may involve indirect, interpretive cognition. Gibson's methodology involves an expensive and complicated experimental set up, which may prove cost- and time-prohibitive for many researchers. Finally, Gibson's research was almost exclusively confined to infants and very young children, so it is difficult to make generalizations throughout the lifespan.
Current State of Gibsonian Theory:
Aside from her own writings, Gibson's work is rarely described as a theory of development. When Gibson's primary area of research, affordances, is referenced, the citations typically refer only to James Gibson. Gibson is credited with popularizing affordances in perceptual research. However, unlike Gibson, researchers have studied affordances in all age groups, including adults. Affordances have been applied to a range of innovative topics, from automobile driving to text messaging. However, the concept of affordances is usually used in isolation rather than being integrated into Gibson's ecological framework. Some researchers are even attempting to create their own theories of affordances instead of revising Gibson's theory to accommodate new findings.
**CRZ1**
CRZ1:
CRZ1, short for Calcineurin-Responsive Zinc Finger 1, is a transcription factor that regulates calcineurin dependent-genes in Candida albicans.
Mechanism of action:
The cytoplasmic protein Crz1 is dephosphorylated by calcineurin and is then targeted to the nucleus. The nuclear protein activates the transcription of genes involved in cell-wall maintenance and ion homeostasis.
Structure:
The protein Crz1 possesses a zinc-finger motif that binds to a specific motif called the CDRE (Calcineurin-Dependent Response Element) present in the promoter of the targeted genes. It also possesses a nuclear localization signal (NLS) at its N-terminal part.
**Snorkel (swimming)**
Snorkel (swimming):
A snorkel is a device used for breathing air from above the surface when the wearer's head is face downwards in the water with the mouth and the nose submerged. It may be either separate or integrated into a swimming or diving mask. The integrated version is only suitable for surface snorkeling, while the separate device may also be used for underwater activities such as spearfishing, freediving, finswimming, underwater hockey, underwater rugby and for surface breathing with scuba equipment. A swimmer's snorkel is a tube bent into a shape often resembling the letter "L" or "J", fitted with a mouthpiece at the lower end and constructed of light metal, rubber or plastic. The snorkel may come with a rubber loop or a plastic clip enabling the snorkel to be attached to the outside of the head strap of the diving mask. Although the snorkel may also be secured by tucking the tube between the mask-strap and the head, this alternative strategy can lead to physical discomfort, mask leakage or even snorkel loss. To comply with the current European standard EN 1972 (2015), a snorkel for users with larger lung capacities should not exceed 38 centimeters (15") in length and 230 cubic centimeters (14 cu. in.) in internal volume, while the corresponding figures for users with smaller lung capacities are 35 cm (14") and 150 cc (9¼ cu. in.) respectively. Current World Underwater Federation (CMAS) Surface Finswimming Rules (2017) require snorkels used in official competitions to have a total length between 43 and 48 cm (17" and 19") and to have an inner diameter between 1.5 and 2.3 cm (½" and 1"). A longer tube would not allow breathing when snorkeling deeper, since it would place the lungs in deeper water where the surrounding water pressure is higher. The lungs would then be unable to inflate when the snorkeler inhales, because the muscles that expand the lungs are not strong enough to operate against the higher pressure.
The pressure difference across the tissues in the lungs, between the blood capillaries and air spaces would increase the risk of pulmonary edema.
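The pressure argument above can be put in rough numbers. A minimal sketch, assuming seawater density and a representative maximal sustained inspiratory suction of about 10 kPa (a physiological ballpark figure, not one quoted in this article):

```python
# Why long snorkels fail: hydrostatic pressure on the chest versus the
# suction the inspiratory muscles can generate. MAX_INSPIRATORY_KPA is
# an assumed typical value, not a figure from this article.

RHO_SEAWATER = 1025.0       # kg/m^3
G = 9.81                    # m/s^2
MAX_INSPIRATORY_KPA = 10.0  # assumed sustainable inspiratory suction, kPa

def hydrostatic_kpa(depth_m: float) -> float:
    """Gauge water pressure at the given depth, in kPa."""
    return RHO_SEAWATER * G * depth_m / 1000.0

def can_breathe_through_tube(depth_m: float) -> bool:
    """True if the inspiratory muscles could still inflate the lungs."""
    return hydrostatic_kpa(depth_m) < MAX_INSPIRATORY_KPA

for depth in (0.4, 1.0, 2.0):
    print(depth, round(hydrostatic_kpa(depth), 1), can_breathe_through_tube(depth))
```

Even at about a metre of depth the surrounding pressure already matches the assumed inspiratory limit, which is why standard snorkels are kept under 40 cm.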
Operation:
The simplest type of snorkel is a plain tube that is held in the mouth and allowed to flood when underwater. The snorkeler expels water from the snorkel either with a sharp exhalation on return to the surface (blast clearing) or by tilting the head back shortly before reaching the surface and exhaling until reaching or breaking the surface (displacement method), then facing forward or down again before inhaling the next breath. The displacement method expels water by filling the snorkel with air; it takes practice but clears the snorkel with less effort, though it only works when surfacing. Clearing splash water while at the surface requires blast clearing.
Single snorkels and two-way twin snorkels constitute respiratory dead space. When the user takes in a fresh breath, some of the previously exhaled air which remains in the snorkel is inhaled again, reducing the amount of fresh air in the inhaled volume and increasing the risk of a buildup of carbon dioxide in the blood, which can result in hypercapnia. The greater the volume of the tube, and the smaller the tidal volume of breathing, the more this problem is exacerbated. Including the internal volume of the mask in the breathing circuit greatly expands the dead space. A smaller diameter tube reduces the dead volume, but also increases resistance to airflow and so increases the work of breathing. Occasional exhalation through the nose while snorkeling with a separate snorkel will slightly reduce the buildup of carbon dioxide, and may help in keeping the mask clear of water, but in cold water it will increase fogging. Twin integrated snorkels with one-way valves eliminate the dead space of the snorkels themselves, but are usually used on a full-face mask; even if the mask has an inner oro-nasal section, there will be some dead space, and the valves will impede airflow through the loop.
Integrated two-way snorkels include the internal volume of the mask as dead space in addition to the volume of the snorkels. To some extent the effect of dead space can be counteracted by breathing more deeply, as this reduces the dead space ratio. Slower breathing will reduce the effort needed to move the air through the circuit. There is a danger that a snorkeler who can breathe comfortably in good conditions will be unable to ventilate adequately under stress or when working harder, leading to hypercapnia and possible panic, and could get into serious difficulty if forced to remove the snorkel or mask to breathe without restriction while unable to swim effectively.
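The dead-space arithmetic behind this can be sketched. The anatomical dead-space, snorkel-volume and tidal-volume figures below are illustrative assumptions, not values from the article:

```python
# Fraction of each breath that is fresh air when breathing through a
# snorkel. All volumes are assumed illustrative figures in millilitres.

ANATOMICAL_DEAD_SPACE_ML = 150.0  # assumed typical adult value
SNORKEL_VOLUME_ML = 180.0         # assumed internal volume of the tube

def fresh_air_fraction(tidal_volume_ml: float,
                       extra_dead_space_ml: float = SNORKEL_VOLUME_ML) -> float:
    """Fraction of a breath that is fresh air rather than rebreathed gas."""
    dead = ANATOMICAL_DEAD_SPACE_ML + extra_dead_space_ml
    return max(0.0, (tidal_volume_ml - dead) / tidal_volume_ml)

# Shallow breathing: most of the breath is rebreathed air.
print(fresh_air_fraction(500))    # 0.34
# Deeper breathing improves the ratio, as the text notes.
print(fresh_air_fraction(1500))   # 0.78
```

With the mask volume added to `extra_dead_space_ml` the fraction drops further, which is the hazard the text describes for integrated two-way snorkels.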
Some snorkels have a sump at the lowest point to allow a small volume of water to remain in the snorkel without being inhaled when the snorkeler breathes. Some also have a non-return valve in the sump to drain water from the tube when the diver exhales: the water is pushed out through the valve when the tube is blocked by water and the exhalation pressure exceeds the water pressure on the outside of the valve. This is almost exactly the mechanism of blast clearing, which does not require the valve, although the pressure required with the valve is marginally less and effective blast clearing requires a higher flow rate. The full-face mask has a double airflow valve which allows breathing through the nose in addition to the mouth. A few models of snorkel have float-operated valves attached to the top end of the tube to keep water out when a wave passes, but these cause problems when diving, as the snorkel must then be equalized during descent using part of the diver's inhaled air supply. Some recent designs have a splash deflector on the top end that reduces entry of any water that splashes over the top of the tube, keeping it relatively free from water.
Finswimmers do not normally use snorkels with a sump valve, as they learn to blast clear the tube on most if not all exhalations. This keeps the water content in the tube to a minimum, since the tube can be shaped for lower work of breathing and elimination of water traps, allowing greater speed and reducing the stress of occasionally swallowing small quantities of water, which would impede competition performance.
A common problem with all mechanical clearing mechanisms is their tendency to fail if infrequently used, or if stored for long periods, or through environmental fouling, or owing to lack of maintenance. Many also either slightly increase the flow resistance of the snorkel, or provide a small water trap which retains a little water in the tube after clearing.
Modern designs use silicone rubber in the mouthpiece and valves due to its resistance to degradation and its long service life. Natural rubber was formerly used, but slowly oxidizes and breaks down under ultraviolet light exposure from the sun. It eventually loses its flexibility, becomes brittle and cracks, which can cause clearing valves to stick in the open or closed position, and float valves to leak due to a failure of the valve seat to seal. In even older designs, some snorkels were made with small "ping pong" balls in a cage mounted to the open end of the tube to prevent water ingress. These are no longer sold or recommended because they are unreliable and considered hazardous. Similarly, diving masks with a built-in snorkel are considered unsafe by scuba diving organizations such as PADI and BSAC because they can engender a false sense of security and can be difficult to clear if flooded.
Experienced users tend to develop a surface breathing style which minimises work of breathing, carbon dioxide buildup and the risk of inhaling water, while optimising water removal. This involves a sharp puff in the early stage of exhalation, which is effective for clearing the tube of remaining water, followed by a fairly large but comfortable exhaled volume, mostly exhaled slowly for low work of breathing, then an immediate slow inhalation, which reduces entrainment of any residual water, to a comfortable but relatively large inhaled volume, repeated without delay. Elastic recoil is used to assist with the initial puff, which can be made sharper by controlling the start of exhalation with the tongue. This technique is most applicable to relaxed cruising on the surface; racing finswimmers may use a different technique, as they need a far greater level of ventilation when working hard.
Scuba diving:
A snorkel can be useful when scuba diving, as it is a safe way of swimming face down at the surface for extended periods to conserve the bottled air supply, or in an emergency when there is a problem with either the air supply or the regulator. Many dives do not require the use of a snorkel at all, and some scuba divers do not consider a snorkel a necessary or even useful piece of equipment, but its usefulness depends on the dive plan and the dive site. If there is no requirement to swim face down and see what is happening underwater, then a snorkel is not useful. If it is necessary to swim over heavy seaweed, which can entangle the pillar valve and regulator if the diver swims face upward to get to and from the dive site, then a snorkel is useful to save breathing gas.
History:
Snorkeling is mentioned by Aristotle in his Parts of Animals. He refers to divers using "instruments for respiration" resembling the elephant's trunk. Some evidence suggests that snorkeling may have originated in Crete some 5,000 years ago as sea sponge farmers used hollowed out reeds to submerge and retrieve natural sponge for use in trade and commerce. In the fifteenth century, Leonardo da Vinci drew designs for an underwater breathing device consisting of cane tubes with a mask to cover the mouth at the demand end and a float to keep the tubes above water at the supply end. The following timeline traces the history of the swimmers' snorkel during the twentieth and twenty-first centuries.
1927: First use of swimmer's breathing tube and mask. According to Dr Gilbert Doukan's 1957 World Beneath the Waves, and as cited elsewhere, "In 1927, and during each summer from 1927 to 1930, on the beach of La Croix-Valmer, Jacques O'Marchal (sic; "Jacques Aumaréchal" is the name of a 1932 French swim mask patentee) could be seen using the first face mask and the first breathing tube. He exhibited them, in fact, in 1931, at the International Nautical Show. On his feet, moreover, he wore the first 'flippers' designed by Captain de Corlieu, the use of which was to become universal."
1929: First swimmers' breathing tube patent application filed. On 9 December 1929, Barney B. Girden files a patent application for a "swimming device" enabling a swimmer under instruction to be supplied with air through a tube to the mouth "whereby the wearer may devote his entire time to the mechanics of the stroke being used." His invention earns US patent 1,845,263 on 16 February 1932. On 30 July 1932, Joseph L. Belcher files a patent application for "breathing apparatus" delivering air to a submerged person by suction from the surface of the water through hoses connected to a float; US patent 1,901,219 is awarded on 14 March 1933.
1938: First swimmers' mask with integrated breathing tubes. In 1938, French naval officer Yves Le Prieur introduces his "Nautilus" full-face diving mask with hoses emerging from the sides and leading upwards to an air inlet device whose ball valve opens when it is above water and closes when it is submerged. In November 1940, American spearfisherman Charles H. Wilen files his "swimmer's mask" invention, which is granted US patent 2,317,237 of 20 April 1943. The device resembles a full-face diving mask incorporating two breathing tubes topped with valves projecting above the surface for inhalation and exhalation purposes. On 11 July 1944, he obtains US design patent 138,286 for a simpler version of this mask with a flutter valve at the bottom and a single breathing tube with a ball valve at the top. Throughout their heyday of the 1950s and early 1960s, masks with integrated tubes appear in the inventories of American, Australian, British, Danish, French, German, Greek, Hong Kong, Israeli, Italian, Japanese, Polish, Spanish, Taiwanese, Turkish and Yugoslav swimming and diving equipment manufacturers. Meanwhile, in 1957, the US monthly product-testing magazine Consumer Reports concludes that "snorkel-masks have some value for swimmers lying on the surface while watching the depths in water free of vegetation and other similar hazards, but they are not recommended for a dive 'into the blue'". According to an underwater swimming equipment review in the British national weekly newspaper The Sunday Times in December 1973, "the mask with inbuilt snorkel is doubly dangerous (...) A ban on the manufacture and import of these masks is long overdue in Britain". In a decree of 2 August 1989, the French government suspends the manufacture, importation and marketing of ball-valve snorkel-masks.
By the noughties, just two swim masks with attached breathing tubes remain in production worldwide: the Majorca sub 107S single-snorkel model and the Balco 558 twin-snorkel full-face model, both manufactured in Greece. In May 2014, the French Decathlon company files its new-generation full-face snorkel-mask design, which is granted US design patent 775,722 on 3 January 2017, entering production as the "Easybreath" mask designated for surface snorkeling only.
1938: First front-mounted swimmer's breathing tube patent filed. In December 1938, French spearfisherman Maxime Forjot and his business partner Albert Méjean file a patent application in France for a breathing tube worn on the front of the head over a single-lens diving mask enclosing the eyes and the nose; it is granted French patent 847848 on 10 July 1939. In July 1939, Popular Science magazine publishes an article containing illustrations of a spearfisherman using a curved length of hosepipe as a front-mounted breathing tube and wearing a set of swimming goggles over his eyes and a pair of swimming fins on his feet. In the first French monograph on spearfishing, La Chasse aux Poissons (1940), medical researcher and amateur spearfisherman Dr Raymond Pulvénis illustrates his "Tuba", a breathing tube he designed to be worn on the front of the head over a single-lens diving mask enclosing the eyes and the nose. Francophone swimmers and divers have called their breathing tube "un tuba" ever since. In 1943, Raymond Pulvénis and his brother Roger obtain a Spanish patent for their improved breathing tube mouthpiece design. In 1956, the UK diving equipment manufacturer E. T. Skinner (Typhoon) markets a "frontal" breathing tube with a bracket attachable to the screw at the top of an oval diving mask. Although it eventually falls out of favour with underwater swimmers, the front-mounted snorkel becomes the breathing tube of choice in competitive swimming and finswimming because it contributes to the swimmer's hydrodynamic profile.
1939: First side-mounted swimmers' breathing tube patent filed. In December 1939, expatriate Russian spearfisherman Alexandre Kramarenko files a patent in France for a breathing tube worn at the side of the head with a ball valve at the top to exclude water and a flutter valve at the bottom. Kramarenko and his business partner Charles H. Wilen refile the invention in March 1940 with the United States Patent Office, where their "underwater apparatus for swimmers" is granted US patent 2,317,236 on 20 April 1943; after entering production in France, the device is called "Le Respirator". The co-founder of Scubapro, Dick Bonin, is credited with the introduction of the flexible-hose snorkel in the mid-1950s and of the exhaust valve to ease snorkel clearing in 1980. In 1964, US Divers markets an L-shaped snorkel designed to outperform J-shaped models by increasing breathing ease, cutting water drag and eliminating the "water trap". In the late 1960s, Dacor launches a "wraparound big-barrel" contoured snorkel, which closely follows the outline of the wearer's head and comes with a wider bore to improve airflow. The findings of the 1977 report "Allergic reactions to mask skirts, regulator mouthpieces and snorkel mouthpieces" encourage diving equipment manufacturers to fit snorkels with hypoallergenic gum rubber and medical-grade silicone mouthpieces. In the world of underwater swimming and diving, the side-mounted snorkel has long been the norm, although new-generation full-face swim masks with integrated snorkels are beginning to grow in popularity for use in floating and swimming on the surface.
1950: First use of "snorkel" to denote a breathing device for swimmers. In November 1950, the Honolulu Sporting Goods Co. introduces a "swim-pipe" resembling Kramarenko and Wilen's side-mounted ball- and flutter-valve breathing tube design, urging children and adults to "try the human version of the submarine snorkel and be like a fish". Every advertisement in the first issue of Skin Diver magazine in December 1951 uses the alternative spelling "snorkles" to denote swimmers' breathing tubes. In 1955, Albert VanderKogel classes stand-alone breathing tubes and swim masks with integrated breathing tubes as "pipe snorkels" and "mask snorkels" respectively. In 1957, the British Sub-Aqua Club journal features a lively debate about the standardisation of diving terms in general and the replacement of the existing British term "breathing tube" with the American term "snorkel" in particular. The following year sees the première of the 1958 British thriller film The Snorkel, whose title references a diving mask topped with two built-in breathing tubes. To date, every national and international standard on snorkels uses the term "snorkel" exclusively. The German word Schnorchel, first recorded in 1940–45, originally referred to an air intake used to supply air to the diesel engines of U-boats, invented during World War II to allow them to operate just below the surface at periscope depth and recharge batteries while keeping a low profile.
1969: First national standard on snorkels. In December 1969, the British Standards Institution publishes British standard BS 4532 entitled "Specification for snorkels and face masks" and prepared by a committee on which the British Rubber Manufacturers' Association, the British Sub-Aqua Club, the Department for Education and Science, the Federation of British Manufacturers of Sports and Games, the Ministry of Defence Navy Department and the Royal Society for the Prevention of Accidents are represented. This British standard sets different maximum and minimum snorkel dimensions for adult and child users, specifies materials and design features for tubes and mouthpieces and requires a warning label and a set of instructions to be enclosed with each snorkel. In February 1980 and June 1991, the Deutsches Institut für Normung publishes the first and second editions of German standard DIN 7878 on snorkel safety and testing. This German standard sets safety and testing criteria comparable to British standard BS 4532 with an additional requirement that every snorkel must be topped with a fluorescent red or orange band to alert other water users of the snorkeller's presence. In November 1988, Austrian Standards International publishes Austrian standard ÖNORM S 4223 entitled "Tauch-Zubehör; Schnorchel; Abmessungen, sicherheitstechnische Anforderungen, Prüfung, Normkennzeichnung" in German, subtitled "Diving accessories; snorkel; dimensions, safety requirements, testing, marking of conformity" in English and closely resembling German Standard DIN 7878 of February 1980 in specifications. The first and second editions of European standard EN 1972 on snorkel requirements and test methods appear in July 1997 and December 2015. This European standard refines snorkel dimension, airflow and joint-strength testing and matches snorkel measurements to the user's height and lung capacity. 
The snorkels regulated by these British, German, Austrian and European standards exclude combined masks and snorkels in which the snorkel tubes open into the mask.
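The dimensional limits quoted in this section lend themselves to a simple conformity check. This sketch encodes only the figures quoted above (the EN 1972 (2015) maxima and the CMAS 2017 finswimming range), not the full text of either standard:

```python
# Conformity check against the snorkel dimensions quoted in the article.
# Only the numbers given in the text are encoded here; the real standards
# contain further requirements (materials, valves, labelling, testing).

EN1972_LIMITS = {
    # user class: (max length in cm, max internal volume in cc)
    "larger_lung_capacity":  (38.0, 230.0),
    "smaller_lung_capacity": (35.0, 150.0),
}

def conforms_en1972(length_cm: float, volume_cc: float, user_class: str) -> bool:
    max_len, max_vol = EN1972_LIMITS[user_class]
    return length_cm <= max_len and volume_cc <= max_vol

def conforms_cmas(length_cm: float, inner_diameter_cm: float) -> bool:
    # CMAS Surface Finswimming Rules (2017) as quoted in the text
    return 43.0 <= length_cm <= 48.0 and 1.5 <= inner_diameter_cm <= 2.3

print(conforms_en1972(37, 200, "larger_lung_capacity"))   # True
print(conforms_en1972(37, 200, "smaller_lung_capacity"))  # False
print(conforms_cmas(45, 2.0))                             # True
```

Note that the two regimes do not overlap: a legal CMAS racing snorkel (43 cm minimum) always exceeds the EN 1972 length maxima for recreational use.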
Separate snorkels:
A snorkel may be either separate from, or integrated into, a swim or dive mask. The integrated snorkel mask may be a half-mask, which encloses the eyes and nose but excludes the mouth, or a full-face mask, which covers the eyes, nose, and mouth.
A separate snorkel typically comprises a tube for breathing and a means of attaching the tube to the head of the wearer. The tube has an opening at the top and a mouthpiece at the bottom. Some tubes are topped with a valve to prevent water from entering the tube when it is submerged.
Although snorkels come in many forms, they are primarily classified by their dimensions and secondarily by their orientation and shape. The length and the inner diameter (or inner volume) of the tube are paramount health and safety considerations when matching a snorkel to the morphology of its end-user. The orientation and shape of the tube must also be taken into account when matching a snorkel to its use while seeking to optimise ergonomic factors such as streamlining, airflow and water retention.
Dimensions:
The total length, inner diameter and internal volume of a snorkel tube are matters of utmost importance because they affect the user's ability to breathe normally while swimming or floating head downwards on the surface of the water. These dimensions also have implications for the user's ability to blow residual water out of the tube when surfacing. A long or narrow snorkel tube, a tube with abrupt changes in direction, or one with internal surface irregularities will have greater breathing resistance, while a wide tube will have a larger dead space and may be hard to clear of water. A short tube will be susceptible to swamping by waves.
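The length/bore trade-off described here can be illustrated with a rough flow model. Treating the airflow as fully laminar (Hagen-Poiseuille) is an assumed simplification, since real snorkel airflow is partly turbulent, but the scaling it gives, resistance proportional to length over the fourth power of radius, is exactly the trade-off the paragraph describes:

```python
import math

# Rough laminar (Hagen-Poiseuille) estimate of tube flow resistance.
# An assumed simplification: real snorkel flow is partly turbulent,
# but the length/radius scaling still holds qualitatively.

AIR_VISCOSITY = 1.8e-5  # Pa*s, assumed value for air at about 20 C

def flow_resistance(length_m: float, radius_m: float) -> float:
    """Laminar flow resistance of a straight tube, Pa*s per m^3/s."""
    return 8 * AIR_VISCOSITY * length_m / (math.pi * radius_m ** 4)

# Same 38 cm tube with an 18 mm versus a 25 mm bore:
ratio = flow_resistance(0.38, 0.009) / flow_resistance(0.38, 0.0125)
print(round(ratio, 1))  # narrowing the bore raises resistance about 3.7x
```

Because resistance grows with the fourth power of the inverse radius while dead volume grows only with its square, a modest bore increase buys a large drop in the work of breathing at a moderate cost in dead space.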
To date, all national and international standards on snorkels specify two ranges of tube dimensions to meet the health and safety needs of their end-users, whether young or old, short or tall, with low or high lung capacity. The snorkel dimensions at issue are the total length, the inner diameter and/or the inner volume of the tube. The specifications of the standardisation bodies are tabulated below.
The table above shows how snorkel dimensions have changed over time in response to progress in swimming and diving science and technology:
Maximum tube length has almost halved (from 600 to 380 mm).
Maximum bore (inner diameter) has increased (from 18 to 25 mm).
Capacity (or inner volume) has partly replaced inner diameter when dimensioning snorkels.
Different snorkel dimensions have evolved for different users (first adults/children; then taller/shorter heights; then larger/smaller lung capacities).
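A worked check of the figures listed above helps explain why capacity partly replaced inner diameter: the internal volume of a straight cylinder at the old maxima (600 mm length, 18 mm bore) is similar to that at the current maxima (380 mm, 25 mm), even though the length has nearly halved. Treating the snorkel as a straight cylinder is a simplifying assumption:

```python
import math

# Internal volume of a straight cylindrical tube, using the maximum
# length and bore figures listed in the text.

def tube_volume_cc(length_mm: float, bore_mm: float) -> float:
    radius_cm = bore_mm / 20.0          # half the bore, in cm
    return math.pi * radius_cm ** 2 * (length_mm / 10.0)

old = tube_volume_cc(600, 18)   # old maxima: ~153 cc
new = tube_volume_cc(380, 25)   # current maxima: ~187 cc
print(round(old), round(new))
```

Since length and bore pull the volume in opposite directions, specifying capacity directly (as EN 1972 does) constrains dead space more reliably than constraining either dimension alone.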
Orientation and shape:
Snorkels come in two orientations: front-mounted and side-mounted. The first snorkel to be patented, in 1938, was front-mounted, worn with the tube over the front of the face and secured with a bracket to the diving mask. Front-mounted snorkels proved popular in European snorkeling until the late 1950s, when side-mounted snorkels came into the ascendancy. Front-mounted snorkels experienced a comeback a decade later as a piece of competitive swimming equipment to be used in pool workouts and in finswimming races, where they outperform side-mounted snorkels in streamlining. Front-mounted snorkels are attached to the head with a special head bracket fitted with straps to be adjusted and buckled around the temples.
Side-mounted snorkels are generally worn by scuba divers on the left-hand side of the head because the scuba regulator is placed over the right shoulder. They come in at least four basic shapes: J-shaped; L-shaped; Flexible-hose; Contour.
A. J-shaped snorkels represent the original side-mounted snorkel design, cherished by some for their simplicity but eschewed by others because water accumulates in the U-bend at the bottom.
B. L-shaped snorkels represent an improvement on the J-shaped style, claimed to reduce breathing resistance, cut water drag and remove the "water trap".
C. Flexible-hose snorkels curry favour with some scuba divers because the flexible hose between the tube and the mouthpiece causes the lower part of the snorkel to drop out of the way when it is no longer in use. However, a spearfisher equipped with this snorkel must have a hand free to replace the mouthpiece when it falls out of the mouth.
D. Contour snorkels represent the most recent design. They have a "wraparound" shape with smooth curves closely following the outline of the wearer's head, which improves wearing comfort.
Construction:
A snorkel consists essentially of a tube with a mouthpiece to be inserted between the lips.
The barrel is the hollow tube leading from the supply end at the top of the snorkel to the demand end at the bottom where the mouthpiece is attached. The barrel is made of a relatively rigid material such as plastic, light metal or hard rubber. The bore is the interior chamber of the barrel; bore length, diameter and curvature all affect breathing resistance.
The top of the barrel may be open to the elements or fitted with a valve designed to shut off the air supply from the atmosphere when the top is submerged. There may be a fluorescent red or orange band around the top to alert other water users of the snorkeller's presence. The simplest way of attaching the snorkel to the head is to slip the top of the barrel between the mask strap and the head. This may cause the mask to leak, however, and alternative means of attachment of the barrel to the head can be seen in the illustration of connection methods.
A. The mask strap is threaded through the permanent plastic loop moulded on to the centre of the barrel.
B. The mask strap is threaded through the separable rubber loop pulled down to the centre of the barrel.
C. The rubber band knotted to the centre of the barrel is stretched over the temple. This method was last used in the USA during the 1950s.
D. The mask strap is threaded through the rotatable plastic snorkel keeper positioned at the centre of the barrel.
Attached to the demand end of the snorkel at the bottom of the barrel, the mouthpiece serves to keep the snorkel in the mouth. It is made of soft and flexible material, typically natural rubber and latterly silicone or PVC. The commonest of the multiple designs available features a slightly concave flange with two lugs to be gripped between the teeth:
A. Flanged mouthpiece with twin lugs at the end of a length of flexible corrugated hose, designed for a flexible-hose snorkel.
B. Flanged mouthpiece with twin lugs at the end of short neck designed for J-shaped snorkel.
C. Flanged mouthpiece with twin lugs positioned at a right angle and designed for an L-shaped snorkel.
D. Flanged mouthpiece with twin lugs at the end of a flexible U-shaped elbow designed to be combined with a straight barrel to create a J-shaped snorkel.
E. Flanged mouthpiece with twin bite plates offset at an angle from a contoured snorkel.
A disadvantage of mouthpieces with lugs is that the air must pass between the teeth: the tighter the teeth grip the mouthpiece lugs, the smaller the air gap between them and the harder it is to breathe.
Snorkel design is only limited by the imagination. Among recent innovations is the "collapsible snorkel", which can be folded up in a pocket for emergencies. One for competitive swimmers is a lightweight lap snorkel with twin tubes; another is a "restrictor cap" placed inside a snorkel barrel "restricting breathing by 40% to increase cardiovascular strength and build lung capacity". Some additional snorkel features such as shut-off and drain valves fell out of favour decades ago, only to return in the contemporary era as more reliable devices for incorporation into "dry" and "semi-dry" snorkels.
Diving mask:
When using a separate snorkel, snorkelers normally wear the same kind of mask as scuba divers and freedivers. By creating an airspace in front of the cornea, the mask enables the snorkeler to see clearly underwater. Scuba and freediving masks consist of one or more flat lenses, also known as a faceplate, a soft rubber skirt which encloses the nose and seals against the face, and a head strap to hold the mask in place. There are different styles and shapes, ranging from oval-shaped models to lower-internal-volume masks, and they may be made from different materials; common choices are silicone and rubber. A snorkeler who remains at the surface can use swimmer's goggles, which do not enclose the nose, as there is no need to equalise the internal pressure.
Integrated snorkels:
In this section, usage of the term "snorkel" denotes single or multiple tubular devices integrated with, and opening into, a swim or dive mask, while the term "snorkel-mask" is used to designate a swim or dive mask with single or multiple built-in snorkels. Such snorkels from the past typically comprised a tube for breathing and a means of connecting the tube to the space inside the snorkel-mask. The tube had an aperture with a shut-off valve at the top and opened at the bottom into the mask, which might cover the mouth as well as the nose and eyes. Although such snorkels tended to be permanent fixtures on historical snorkel-masks, a minority could be detached from their sockets and replaced with plugs enabling certain snorkel-masks to be used without their snorkels.
The 1950s were the heyday of older-generation snorkel-masks, first for the pioneers of underwater hunting and then for the general public who swam in their wake. One even-handed authority of the time declared that "the advantage of this kind of mask is mainly from the comfort point of view. It fits snugly to one's face, there is no mouthpiece to bite on, and one can breathe through either nose or mouth". Another concluded with absolute conviction that "built-in snorkel masks are the best" and "a must for those who have sinus trouble." Yet others, including a co-founder of the British Sub-Aqua Club, deemed masks with integrated snorkels to be complicated and unreliable: "Many have the breathing tube built in as an integral part of the mask. I have never seen the advantage of this, and this is the opinion shared by most experienced underwater swimmers I know". Six decades on, a new generation of snorkel-masks has come to the marketplace.
Like separate snorkels, integrated snorkels come in a variety of forms. The assortment of older-generation masks with integrated snorkels highlights certain similarities and differences: A. A model enclosing the eyes and the nose only. A permanent single snorkel emerges from the top of the mask and terminates above with a shut-off ball valve.
B. A model with a chinpiece to enclose the eyes, the nose and the mouth. Permanent twin snorkels emerge from either side of the mask and terminate above with shut-off "gamma" valves.
C. A model enclosing the eyes and the nose only. Removable twin snorkels emerge from either side of the mask and terminate above with shut-off ball valves. Supplied with plugs for use without snorkels, as illustrated.
Integrated snorkels are tentatively classified here by their tube configuration and by the face coverage of the masks to which they are attached.
Tube configuration:
As a rule, early manufacturers and retailers classed integrated snorkel masks by the number of breathing tubes projecting from their tops and sides. Their terse product descriptions often read: "single snorkel mask", "twin snorkel mask", "double snorkel mask" or "dual snorkel mask".
Construction:
An integrated snorkel consists essentially of a tube topped with a shut-off valve and opening at the bottom into the interior of a diving mask.
Tubes are made of strong but lightweight materials such as plastic. At the supply end, they are fitted with valves made of plastic, rubber or latterly silicone. Three typical shut-off valves are illustrated.
A. Ball valve using a ping-pong ball in a cage to prevent water ingress when submerged. This device may be the most common and familiar valve used atop old-generation snorkels, whether separate or integrated.
B. Hinged "gamma" valve to prevent water ingress when submerged. This device was invented in 1954 by Luigi Ferraro, fitted as standard on every Cressi-sub mask with integrated breathing tubes and granted US patent 2,815,751 on 10 December 1957.
C. Sliding float valve to prevent water ingress when submerged. This device was used on Britmarine brand snorkels manufactured by the Haffenden company in Sandwich, Kent during the 1960s.
Integrated snorkels must be fitted with valves to shut off the snorkel's air inlet when submerged; water would otherwise pour into the opening at the top and flood the interior of the mask. Snorkels are attached to sockets on the top or the sides of the mask.
The skirt of the diving mask attached to the snorkel is made of rubber, or latterly silicone. Older-generation snorkel masks come with a single oval, round or triangular lens retained by a metal clamp in a groove within the body of the mask. An adjustable head strap or harness ensures a snug fit on the wearer's face. The body of a mask with full-face coverage is fitted with a chinpiece to enable a complete leaktight enclosure of the mouth.
Older proprietary designs came with special facilities. One design separated the eyes and the nose into separate mask compartments to reduce fogging. Another enabled the user to remove integrated snorkels and insert plugs instead, thus converting the snorkel-mask into an ordinary diving mask. New-generation snorkel-masks enclose the nose and the mouth within an inner mask at the demand end directly connected to the single snorkel with its valve at the supply end.
Half face snorkel masks:
Half-face snorkel-masks are standard diving masks featuring built-in breathing tubes topped with shut-off valves. They cover the eyes and the nose only, excluding the mouth altogether. The integral snorkels enable swimmers to keep their mouths closed, inhaling and exhaling air through their noses instead, while they are at, or just below, the surface of the water. When the snorkel tops submerge, their valves are supposed to shut off automatically, blocking nasal respiration and preventing mask flooding.
Apart from the integral tubes and their sockets, older half-face snorkel-masks were generally supplied with the same lenses, skirts and straps fitted to standard diving masks without snorkels. Several models of this kind could even be converted to standard masks by replacing their detachable tubes with air- and water-tight plugs. Conversely, the 1950s Typhoon Super Star and the modern-retro Fish Brand M4D standard masks came topped with sealed but snorkel-ready sockets. The 1950s US Divers "Marino" hybrid comprised a single snorkel mask with eye and nose coverage only and a separate snorkel for the mouth.

There are numerous mid-twentieth-century examples of commercial snorkel-masks covering the eyes and the nose only. New-generation versions remain relatively rare commodities in the early twenty-first century.
Full face snorkel masks:
Most, but not all, existing new-generation snorkel-masks are full-face masks covering the eyes, the nose and the mouth. They enable surface snorkellers to breathe nasally or orally, and may be a workaround for surface snorkellers who gag in response to the presence of standard snorkel mouthpieces in their mouths. Some first-generation commercial snorkel-masks were full-face masks covering the eyes, nose and mouth, while others excluded the mouth, covering the eyes and the nose only.
Full face snorkel masks use an integral snorkel with separate channels for intake and exhaled gas, theoretically ensuring the user is always breathing untainted fresh air whatever the respiratory tidal volume. The main difficulty, or danger, is that the mask must fit the whole face perfectly; since no two faces are the same shape, it should be used with great care and in safe water. In the event of accidental flooding, the whole mask must be removed to continue breathing. Unless the snorkeller is able to equalise without pinching their nose, the mask can only be used on the surface or a couple of feet below, since the design makes it impossible to pinch the nose in order to equalise pressure at greater depth.
As a result of a short period with an unusually high number of snorkeling deaths in Hawaii, there is some suspicion that the design of the masks can result in a buildup of excess CO2. It is far from certain that the masks are at fault, but the state of Hawaii has begun to track the equipment being used in cases of snorkeling fatalities. Besides the possibility that the masks, or at least some brands of mask, are a cause, other theories include the possibility that the masks make snorkeling accessible to people who have difficulty with traditional snorkeling equipment. That ease of access may result in more snorkelers who lack experience or have underlying medical conditions, possibly exacerbating problems that are unrelated to the type of equipment being used.

During shortages related to the 2019–20 coronavirus pandemic, full-face snorkel masks have been adapted to create oxygen-dispensing emergency respiratory masks by deploying 3D printing and carrying out minimal modifications to the original mask. Italian healthcare legislation requires patients to sign a declaration of acceptance of use of an uncertified biomedical device when they are given the modified snorkel mask for respiratory support interventions in the country's hospitals. France's main sportswear and snorkel mask producer, Decathlon, has discontinued its sale of snorkel masks, redirecting them instead toward medical staff, patients and 3D printer operations.
**Moore Industries**
Moore Industries:
Moore Industries-International, Inc. is a company in the process control, system integration, and factory automation industries.
Since 1968, the company has been in industrial signal interface technology. Product lines include: Signal Transmitters, Isolators and Converters; Temperature Sensors, Transmitters and Assemblies; Limit Alarm Trips; MooreHawke Fieldbus Interface Products; Process Controllers, Monitors and Backup Stations; I/P and P/I Converters; Smart HART Loop Interfaces and Monitors; Process Control and Distributed I/O; and Instrument Enclosure Solutions.
Products are commonly used in industries such as: chemical and petrochemical; electricity generation and transmission; extraction of petroleum, refining and transport; pulp and paper; food and beverage; mining and metal refining; pharmaceuticals and biotechnology; industrial machinery and equipment; water and wastewater; environmental and pollution monitoring and bat guano recycling.
History:
Based in the San Fernando Valley, Moore Industries started out over four decades ago when company founder Leonard W. Moore wanted to solve integration problems in the process management industries. Working out of a friend’s garage, Moore designed and built the first Moore Industries SCT signal converter, which soon evolved into a product line of six signal interfaces. In 1974, he built the current headquarters in North Hills, CA to facilitate the company’s growth.

Today, Moore Industries is an international company with direct sales offices in worldwide locations including the United States of America, the United Kingdom, Belgium, the Netherlands, Australia, and the People's Republic of China. These offices oversee an expansive network of independent representatives and agents serving every corner of the globe.

In the early 1980s, Moore invested heavily in a before-its-time brainchild called the Moore 1002. The numeric designation came from the company initials [MII], which in Roman numerals equal 1002. The 1002 was designed to be a signal processing computer, taking the place of rack upon rack of individual signal conditioners by centralizing and offering the first ever digital processing of control signals. The 1002 was a hybrid device which stood six feet high and, while it had digital internal processing, it offered true analog output on a per-channel basis, not a stepped or fake analog based on digital sampling. The 1002 was massive and expensive, and since each signal had to be brought in on two wires and back out on two additional wires, it was unwieldy and required a large space just for cabling. While a complete flop as a product, the 1002 pointed the way forward in signal conditioning.

In the late 1980s, Moore sought to address market complaints over the then-industry-standard 8-week lead time for hand-soldered signal conditioners in order to meet market demand.
Rather than changing production methods from hand assembly and risking a compromise to quality, Moore created the STAR Center, which could stage nearly complete modules to be assembled rapidly based on customer specifications for 48-hour shipment. This customer service came at a price of $150 per unit, which in some cases was a 100% premium, but it allowed Moore to fill priority orders where the company had been losing business due to lead time.

In June 2009, Moore Industries was granted ISO 9001:2008 certification for their Quality Management System by UL DQS Inc., an ANSI-ASQ Accredited Registrar. The ISO 9001:2008 standard is the most up to date criterion for assessing an organization’s Quality Management System. Moore Industries was among the first Underwriters Laboratories (UL DQS) clients to achieve ISO 9001:2008 certification. Other products to receive UL certification include the miniMOORE Multi-Channel Signal Conditioners.

Several of Moore Industries’ products have also been certified for use in Safety Instrumented Systems to IEC 61508:2010. These include the SIL 3-capable STA Safety Trip Alarm and the SIL 2-capable SRM Safety Relay Module. An independent verification of the integrity of the STA shows that it meets the industry safety standards specified in IEC 61508.

In 2009, company founder Leonard W. Moore became an Honorary Member of the International Society of Automation (ISA). The Honorary Member distinction is the highest honor bestowed by the Society. Moore received his Honorary Member distinction for contributions to the advancement of the arts and sciences of automation over a 40-year career of innovation, product development, and business leadership at Moore Industries. At a high table dinner, Moore was presented with a lapel pin by the ISA for his lifetime achievements.

The company was named No. 15 in the San Fernando Valley Business Journal’s March 16, 2009 listing of the top manufacturing companies in the San Fernando Valley.
On March 16, 2011, Moore Industries received the Manufacturing Leadership Award from the San Fernando Valley Business Journal, reflecting its position as one of the most innovative manufacturing companies in the region.
Technologies:
Industry firsts include: incorporating electrical isolation as a standard feature; offering plug-in modules; installing LED status indicators on the front panel of alarm trips; developing a way to protect the operation of products from RFI (radio frequency interference); manufacturing loop-powered hockey-puck designs for safe and simple installation in the field; innovating digital cable concentrating technology that dramatically reduces the cost of sending multiple signals long distances; creating Total Sensor Diagnostics, the company’s patented temperature sensor troubleshooting advantage; and designing fault-tolerant, redundant digital fieldbus physical layers.

The February 2009 issue of Control Engineering named Moore Industries’ newly released miniMOORE signal conditioners a winning technology in the Engineer’s Choice Awards. The miniMOORE signal conditioners, released in November 2008, won in the Networks and Communications Products category for Signal Conditioners or Diagnostics.
In the January 2010 issue of Control Magazine, Moore Industries was ranked first place in the Signal conditioner category by the Control Magazine Readers' Choice Awards. The company was honored with the same ranking in the 2011 Control Magazine Readers’ Choice Awards. The top technology category rankings represent end-user sentiment—the Control Magazine Readers' Choice Awards were derived from the opinion of over 1,000 process automation professionals. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ChEMBL**
ChEMBL:
ChEMBL or ChEMBLdb is a manually curated chemical database of bioactive molecules with drug-like properties.
It is maintained by the European Bioinformatics Institute (EBI), of the European Molecular Biology Laboratory (EMBL), based at the Wellcome Trust Genome Campus, Hinxton, UK.
The database, originally known as StARlite, was developed by a biotechnology company called Inpharmatica Ltd., later acquired by Galapagos NV. The data was acquired for EMBL in 2008 with an award from The Wellcome Trust, resulting in the creation of the ChEMBL chemogenomics group at EMBL-EBI, led by John Overington.
Scope and access:
The ChEMBL database contains compound bioactivity data against drug targets. Bioactivity is reported in Ki, Kd, IC50, and EC50. Data can be filtered and analyzed to develop compound screening libraries for lead identification during drug discovery.

ChEMBL version 2 (ChEMBL_02) was launched in January 2010, with 2.4 million bioassay measurements covering 622,824 compounds, including 24,000 natural products. This was obtained from curating over 34,000 publications across twelve medicinal chemistry journals. ChEMBL's coverage of available bioactivity data has grown to become "the most comprehensive ever seen in a public database". In October 2010 ChEMBL version 8 (ChEMBL_08) was launched, with over 2.97 million bioassay measurements covering 636,269 compounds. ChEMBL_10 saw the addition of the PubChem confirmatory assays, in order to integrate data that is comparable to the type and class of data contained within ChEMBL.

ChEMBLdb can be accessed via a web interface or downloaded by File Transfer Protocol. It is formatted in a manner amenable to computerized data mining, and attempts to standardize activities between different publications to enable comparative analysis. ChEMBL is also integrated into other large-scale chemistry resources, including PubChem and the ChemSpider system of the Royal Society of Chemistry.
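ChEMBL itself puts these heterogeneous activity types onto a common negative-log scale (the pChEMBL value, defined as −log10 of the molar activity). The conversion, and a typical activity cut-off used when filtering for screening libraries, can be sketched as follows; the record layout and compound names below are purely illustrative, not the actual ChEMBL schema:

```python
import math

def pchembl(value_nm: float) -> float:
    """Convert an activity in nM (IC50, EC50, Ki or Kd) to a
    pChEMBL-style value: -log10 of the molar concentration."""
    return -math.log10(value_nm * 1e-9)

# Hypothetical bioactivity records, mimicking the kind of rows a
# ChEMBL download contains (field names are illustrative only).
records = [
    {"compound": "CMP-1", "type": "IC50", "value_nm": 50.0},
    {"compound": "CMP-2", "type": "IC50", "value_nm": 12000.0},
    {"compound": "CMP-3", "type": "Ki",   "value_nm": 3.2},
]

# Keep compounds with pChEMBL >= 6, i.e. activity at 1 uM or better --
# a common (if arbitrary) cut-off when assembling a screening library.
actives = [r["compound"] for r in records if pchembl(r["value_nm"]) >= 6.0]
print(actives)  # ['CMP-1', 'CMP-3']
```

On this scale pChEMBL 6 corresponds to 1 μM, so the filter keeps compounds active at micromolar concentration or better, regardless of which of the four activity types was measured.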
Associated resources:
In addition to the database, the ChEMBL group have developed tools and resources for data mining. These include Kinase SARfari, an integrated chemogenomics workbench focussed on kinases. The system incorporates and links sequence, structure, compounds and screening data.
GPCR SARfari is a similar workbench focused on GPCRs, and ChEMBL-Neglected Tropical Diseases (ChEMBL-NTD) is a repository for Open Access primary screening and medicinal chemistry data directed at endemic tropical diseases of the developing regions of Africa, Asia, and the Americas. The primary purpose of ChEMBL-NTD is to provide a freely accessible and permanent archive and distribution centre for deposited data.

July 2012 saw the release of a new malaria data service, sponsored by the Medicines for Malaria Venture (MMV), aimed at researchers around the globe. The data in this service includes compounds from the Malaria Box screening set, as well as the other donated malaria data found in ChEMBL-NTD.
myChEMBL, the ChEMBL virtual machine, was released in October 2013 to allow users to access a complete and free, easy-to-install cheminformatics infrastructure.
In December 2013, the operations of the SureChem patent informatics database were transferred to EMBL-EBI, and SureChem was renamed SureChEMBL, a portmanteau of the two names.
2014 saw the introduction of the new resource ADME SARfari - a tool for predicting and comparing cross-species ADME targets. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bullous lymphedema**
Bullous lymphedema:
Bullous lymphedema is a skin condition that usually occurs with poorly controlled edema related to heart failure and fluid overload; compression results in healing.
**Introductory diving**
Introductory diving:
Introductory diving, also known as introductory scuba experience, trial diving and resort diving, refers to dives where people without diver training or certification can experience scuba diving under the guidance of a recreational diving instructor. Introductory diving is an opportunity for interested people to find out by practical experience, at relatively low cost, whether they would be interested in greater involvement in scuba diving. For scuba instructors and diving schools it is an opportunity to acquire new customers. An introductory diving experience is much less time-consuming and costly than the completion of autonomous diver training, but has little lasting value, as it is an experience program only, for which no certification is issued. Introductory scuba diving experiences are intended to introduce people to recreational diving, and increase the potential client base of dive shops to include people who do not have the time or inclination to complete an entry-level certification program.
Procedure:
Participants are usually required to read and sign waivers to minimise liability of the program provider, and to provide a declaration that they do not suffer from any listed medical condition that would be an unacceptable risk for diving. Prior to the dive itself, an instructor teaches essential theoretical knowledge, so that the participants can dive with a low level of risk, and with informed consent. After putting on the necessary diving equipment, the participant enters the water under close supervision of the instructor. Breathing from the scuba regulator is first practiced at the water surface. At a shallow depth, clearing the diving mask, and removing the regulator from the mouth then replacing and clearing it of water, are learned. These two skills are essential for the safety of the participants. Afterwards, there may be an opportunity to either play underwater games or explore the surroundings in shallow water. Trial diving usually takes about two to four hours.

The participant learns the basic minimum safety guidelines and skills needed to dive under the direct supervision of a diving professional. If an open water dive is included, a few more basic skills will be practiced in confined water. The course includes: Introduction to the scuba equipment used to dive and how to move around underwater using the equipment.
Breathing underwater on open circuit scuba and how to avoid barotrauma.
Learning some of the key skills that are used during every scuba dive.
Swimming around and exploring within the limits of the program and the local environment.
Information on how to enroll in a training course to become a certified entry-level recreational scuba diver through the providing agency.
Venue:
The introductory dive usually first takes place in confined water, which generally means a swimming pool or a very shallow and safe place in a lake or the sea. Depending on the offer, after the first dive in confined water, a second or further shallow dives may be done in suitable confined or open water.
Altitude and flying after diving:
People can go directly from high altitude to scuba diving, but should not scuba dive and then ascend to altitude without allowing an interval, depending on the time and depth of the dive, to reduce the risk of decompression sickness. The Divers Alert Network (DAN) Flying after Diving workshop of 2002 recommended a 12-hour surface interval for uncertified individuals who took part in an introductory scuba experience before flying or ascending to an altitude greater than, or a cabin pressure less than, the equivalent of 2,000 feet (610 m).
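The DAN recommendation amounts to a simple threshold check. The helper below is a hypothetical illustration of that single guideline (names and structure are ours), not dive-planning software:

```python
ALTITUDE_THRESHOLD_M = 610      # 2,000 ft, the altitude in the DAN guideline
MIN_SURFACE_INTERVAL_H = 12     # recommended wait for introductory-dive participants

def may_ascend(surface_interval_h: float, planned_altitude_m: float) -> bool:
    """Return True if ascent to the planned altitude (or equivalent cabin
    pressure) is within the DAN 2002 guideline after an introductory dive."""
    if planned_altitude_m < ALTITUDE_THRESHOLD_M:
        return True  # below 2,000 ft the guideline imposes no waiting period
    return surface_interval_h >= MIN_SURFACE_INTERVAL_H

print(may_ascend(3, 2400))   # False: a flight only 3 h after the dive
print(may_ascend(14, 2400))  # True: the 12-hour interval has elapsed
```

Note the asymmetry the text describes: altitude before diving needs no interval, only diving before altitude does.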
Prerequisites:
The diving school may ask that the participants of the trial dive prove their medical fitness to dive with a medical certificate, but more commonly they will use a standardized form for the participant to declare their self-assessment of fitness to dive.
From the age of 8 years it is possible to take part in an introductory dive, but some providers require a greater age. The participant should be able to swim at least 25 metres (82 ft) without any buoyancy aid.
Introductory diving programs:
The international standard ISO 11121 standardizes the minimum requirements for an "Introductory Training Program". Despite the standardization, the title and contents of the program vary depending on the provider and their membership of a diver certification agency. The following programs are based on ISO 11121.
CMAS:
Participants in the CMAS Introductory Scuba Experience should be at least 14 years of age and able to swim. The "Introductory Scuba Experience" includes a theory lesson, a confined water dive where several diving skills are practiced, and an open water dive to a maximum depth of 10 metres (33 ft).
NAUI:
Participants in a NAUI Try Scuba / Passport Diver program must be at least 10 years old. However, it is possible to take part in a confined water dive from the age of 8 years. The "Try Scuba" program includes a theory lesson, a confined water dive where participants practice several dive skills, and one or more open water dives. The first open water dive has a recommended depth limit of 6 metres (20 ft). For further dives there is a limit of 12 metres (39 ft). Participants who have completed two open water dives will receive a "NAUI Passport Diver" confirmation. It enables them to participate in further open water introductory dives later on, guided by a NAUI instructor, without repeating the confined water dive.

The "Tandem Diver" program is an alternative to the "NAUI Try Scuba" and differs mainly by the individual supervision of the participant and the lack of the opportunities offered by the "NAUI Passport Diver".
PADI:
Participants in a PADI "Discover Scuba Diving" program should be at least 10 years old and should be able to swim. The program includes a theory lesson and a confined water dive where basic diving skills are practiced. Afterwards, one or more directly supervised open water dives can be done to a maximum depth of 12 metres (39 ft). Discover Scuba Diving includes the theoretical content and skills of the first lesson of the PADI Open Water Diver course, and this experience may be credited as the first confined water dive of a PADI Open Water Diver course if done within one year.

The PADI Discover Scuba Diving course allows for repetitive diving experiences within a time limit at the discretion of the dive instructor and dive shop. This is a common request for people who try diving, then want to repeat the activity (often at a different location).
The "Bubblemaker" program is a trial dive in the swimming pool for children from 8 years of age. The participants are introduced to scuba diving in a playful way by a scuba instructor. The maximum allowed diving depth is 2 metres (7 ft).
SSI:
Participants in a Scuba Schools International "Try Scuba" experience must be at least 8 years old. The course takes place in confined water to a maximum depth of 5 metres (16 ft), and includes a theory session.

For an SSI "Try Scuba Diving" experience the participant must be at least 10 years old. The course is similar to the "Try Scuba" (pool) program, but it includes an open water dive with a depth limit of 12 metres (39 ft), and can be credited towards an SSI Open Water Diver course later.
Other types of introductory diving:
There are programs for advanced divers that are also called introductory diving. These include programs that allow an experienced diver to try out technical diving, rebreather diving, or cave diving.
**Three Mile Island accident health effects**
Three Mile Island accident health effects:
The health effects of the 1979 Three Mile Island nuclear accident are widely agreed to be very low by scientists in the relevant fields. The American Nuclear Society concluded that average local radiation exposure was equivalent to a chest X-ray and maximum local exposure equivalent to less than a year's background radiation. The U.S. BEIR report on the Biological Effects of Ionizing Radiation states that "the collective dose equivalent resulting from the radioactivity released in the Three Mile Island accident was so low that the estimated number of excess cancer cases to be expected, if any were to occur, would be negligible and undetectable." A variety of epidemiology studies have concluded that the accident has had no observable long term health effects. One dissenting study is "a re-evaluation of cancer incidence near the Three Mile Island nuclear plant" by Dr Steven Wing of the University of North Carolina. In this study, Dr Wing and his colleagues argue that earlier findings had "logical and methodological problems" and conclude that "cancer incidence, specifically lung cancer and leukemia, increased following the TMI accident more in areas estimated to have been in the pathway of radioactive plumes than in other areas." Other dissenting opinions can be found in the Radiation and Public Health Project, whose leader, Joseph Mangano, has questioned the safety of nuclear power since 1985.
Initial investigations:
In the aftermath of the accident, the investigations focused on the amounts of radioactivity released by the accident. The American Nuclear Society explained, using the official radioactivity emission figures, that "the average radiation dose to people living within ten miles of the plant was eight millirem, and no more than 100 millirem to any single individual. Eight millirem is about equal to a chest X-ray, and 100 millirem is about a third of the average background level of radiation received by US residents in a year". To put this dose into context, while the average background radiation in the US is about 360 millirem per year, the Nuclear Regulatory Commission limits the radiation exposure of any US nuclear power plant worker to a total of 5,000 millirem per year. Based on these low emission figures, early scientific publications on the health effects of the fallout estimated one or two additional cancer deaths in the 10-mile area around TMI.
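The comparisons quoted here are simple ratios of the published figures, which can be checked directly (the variable names are ours, for illustration):

```python
avg_local_dose = 8         # millirem: average dose within ten miles of TMI
max_local_dose = 100       # millirem: maximum dose to any single individual
background_per_year = 360  # millirem: average annual US background radiation
worker_limit = 5000        # millirem/year: NRC limit for nuclear plant workers

# Maximum local dose as a fraction of a year's background radiation:
print(round(max_local_dose / background_per_year, 2))  # 0.28, "about a third"

# ...and as a fraction of the annual occupational limit:
print(max_local_dose / worker_limit)  # 0.02, i.e. 2% of the worker limit

# A year of background radiation expressed in chest-X-ray equivalents:
print(background_per_year / avg_local_dose)  # 45.0
```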
Worker overexposures:
In the early days of the accident, three plant workers were known to have received extremity doses from entries into the Unit 2 auxiliary building. Ron Fountain, an auxiliary operator, hyperventilated while opening sample lines on March 28. The following day, chemistry supervisor Ed Houser and radiation protection foreman Pete Velez received extremity doses while drawing a boron concentration sample from the health-physics lab inside the auxiliary building.
Local resident reports:
The official figures are too low to account for the acute health effects reported by some local residents and documented in two books; such health effects require exposure to at least 100,000 millirem (100 rem) to the whole body - 1000 times more than the official estimates. The reported health effects are consistent with high doses of radiation, and comparable to the experiences of cancer patients undergoing radiotherapy, but have many other potential causes. The effects included "metallic taste, erythema, nausea, vomiting, diarrhea, hair loss, deaths of pets, farm and wild animals, and damage to plants." Some local statistics showed dramatic one-year changes among the most vulnerable: "in Dauphin County, where the Three Mile Island plant is located, the 1979 death rate among infants under one year represented a 28 percent increase over that of 1978, and among infants under one month, the death rate increased by 54 percent." Physicist Ernest Sternglass, a specialist in low-level radiation, noted these statistics in the 1981 edition of his book Secret Fallout: low-level radiation from Hiroshima to Three-Mile Island. In their final 1981 report, however, the Pennsylvania Department of Health, examining death rates within the 10-mile area around TMI for the 6 months after the accident, said that the TMI-2 accident did not cause local deaths of infants or fetuses.

Scientific work continued in the 1980s, but focused heavily on the mental health effects due to stress, as the Kemeny Commission had concluded that this was the sole public health effect. A 1984 survey by a local psychologist of 450 local residents, documenting acute radiation health effects (as well as 19 cancers during 1980-84 amongst the residents, against an expected 2.6), ultimately led the TMI Public Health Fund to review the data and support a comprehensive epidemiological study by a team at Columbia University.
Columbia epidemiological study:
In 1990-1 a Columbia University team, led by Maureen Hatch, carried out the first epidemiological study on local death rates before and after the accident, for the period 1975-1985, for the 10-mile area around TMI. Assigning fallout impact based on winds on the morning of March 28, 1979, the study found no link between fallout and cancer risk. The study found that cancer rates near the Three Mile Island plant peaked in 1982-3, but their mathematical model did not account for the observed increase in cancer rates, since they argued that latency periods for cancer are much longer than three years. From 1975 to 1979 there were 1,722 reported cases of cancer, and between 1981 and 1985 there were 2,831, signifying a 64 percent increase after the meltdown. The study concludes that stress may have been a factor (though no specific biological mechanism was identified), and speculated that changes in cancer screening were more important.
Wing review:
Subsequently, lawyers for 2000 residents asked epidemiologist Steven Wing of the University of North Carolina at Chapel Hill, a specialist in nuclear radiation exposure, to re-examine the Columbia study. Wing was reluctant to get involved, later writing that "allegations of high radiation doses at TMI were considered by mainstream radiation scientists to be a product of radiation phobia or efforts to extort money from a blameless industry." Wing later noted that in order to obtain the relevant data, the Columbia study had to submit to what Wing called "a manipulation of research" in the form of a court order which prohibited "upper limit or worst case estimates of releases of radioactivity or population doses... [unless] such estimates would lead to a mathematical projection of less than 0.01 health effects." Wing found cancer rates raised within a 10-mile radius two years after the accident by 0.034% +/- 0.013%, 0.103% +/- 0.035%, and 0.139% +/- 0.073% for all cancer, lung cancer, and leukemia, respectively. An exchange of published responses between Wing and the Columbia team followed. Wing later noted a range of studies showing latency periods for cancer from radiation exposure between 1 and 5 years due to immune system suppression. Latencies between 1 and 9 years have been studied in a variety of contexts ranging from the Hiroshima survivors and the fallout from Chernobyl to therapeutic radiation; a 5-10 year latency is most common.
Further studies:
On the recommendation of the Columbia team, the TMI Public Health Fund followed up its work with a longitudinal study. The 2000-3 University of Pittsburgh study compared post-TMI death rates in different parts of the local area, again using the wind direction on the morning of 28 March to assign fallout impact. In contrast to the Columbia study, which estimated exposure in 69 areas, the Pittsburgh study drew on the TMI Population Registry, compiled by the Pennsylvania Department of Health. This was based on radiation exposure information on 93% of the population living within five miles of the nuclear plant - nearly 36,000 people, gathered in door-to-door surveys shortly after the accident. The study found slight increases in cancer and mortality rates but "no consistent evidence" of causation by TMI. Wing et al. criticized the Pittsburgh study for making the same assumption as Columbia: that the official statistics on low doses of radiation were correct - leading to a study "in which the null hypothesis cannot be rejected due to a priori assumptions." Hatch et al. noted that their assumption had been backed up by dosimeter data, though Wing et al. noted the incompleteness of this data, particularly for releases early on.

In 2005 R. William Field, an epidemiologist at the University of Iowa who first described radioactive contamination of the wild food chain from the accident, suggested that some of the increased cancer rates noted around TMI were related to the area's very high levels of natural radon, noting that according to a 1994 EPA study, the Pennsylvania counties around TMI have the highest regional screening radon concentrations in the 38 states surveyed. This factor had also been considered by the Pittsburgh study and by the Columbia team, which had noted that "rates of childhood leukemia in the Three Mile Island area are low compared with national and regional rates."
A 2006 study on the standard mortality rate in children in 34 counties downwind of TMI found an increase in the rate (for cancers other than leukemia) from 0.83 (1979–83) to 1.17 (1984–88), meaning a rise from below the national average to above it. A paper in 2008 studying thyroid cancer in the region found rates as expected in the county in which the reactor is located, and significantly higher than expected rates in two neighboring counties beginning in 1990 and 1995 respectively. The research notes that "These findings, however, do not provide a causal link to the TMI accident." According to Joseph Mangano (a member of the Radiation and Public Health Project, an organization with little credibility among epidemiologists), three large gaps in the literature remain: no study has focused on infant mortality data, on data from outside the 10-mile zone, or on radioisotopes other than iodine, krypton, and xenon.
**Surfactant leaching (decontamination)**
Surfactant leaching (decontamination):
Surfactant leaching is a method of water and soil decontamination, used, for example, for oil recovery in the petroleum industry. It involves mixing contaminated water or soil with surfactants, with subsequent leaching of the emulsified contaminants. In oil recovery, the most common surfactant types are ethoxylated alcohols, ethoxylated nonylphenols, sulphates, sulphonates, and biosurfactants.
**Crystallin, gamma D**
Crystallin, gamma D:
Gamma-crystallin D is a protein that in humans is encoded by the CRYGD gene. Crystallins are separated into two classes: taxon-specific (or enzyme) and ubiquitous. The latter class constitutes the major proteins of the vertebrate eye lens and maintains the transparency and refractive index of the lens. Since lens central fiber cells lose their nuclei during development, these crystallins are made and then retained throughout life, making them extremely stable proteins. Mammalian lens crystallins are divided into alpha, beta, and gamma families; beta and gamma crystallins are also considered a superfamily. The alpha and beta families are further divided into acidic and basic groups. Seven protein regions exist in crystallins: four homologous motifs, a connecting peptide, and N- and C-terminal extensions. Gamma-crystallins are a homogeneous group of highly symmetrical, monomeric proteins typically lacking connecting peptides and terminal extensions. They are differentially regulated after early development. Four gamma-crystallin genes (gamma-A through gamma-D) and three pseudogenes (gamma-E, gamma-F, gamma-G) are tandemly organized in a genomic segment as a gene cluster. Whether due to aging or to mutations in specific genes, gamma-crystallins have been implicated in cataract formation.
**Deacetylipecoside synthase**
Deacetylipecoside synthase:
The enzyme deacetylipecoside synthase (EC 4.3.3.4) catalyzes the chemical reaction deacetylipecoside + H2O ⇌ dopamine + secologanin. This enzyme belongs to the family of lyases, specifically the amine lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is deacetylipecoside dopamine-lyase (secologanin-forming). This enzyme is also called deacetylipecoside dopamine-lyase. It participates in indole and ipecac alkaloid biosynthesis.
**Watson capsule**
Watson capsule:
The Watson peroral small intestinal biopsy capsule was a system used from the 1960s to obtain small intestinal wall biopsies in patients with suspected coeliac disease and other diseases affecting the proximal small bowel. A similar device, the Crosby-Kugler capsule, was developed in the 1950s and used for similar purposes.
**Aloglutamol**
Aloglutamol:
Aloglutamol is an antacid, an aluminium compound. It is a salt of aluminium, gluconic acid, and tris. It is usually given orally in doses of 0.5 to 1 g. Proprietary names include Altris, Pyreses, Tasto and Sabro. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Roger L. Williams**
Roger L. Williams:
Roger Lee Williams is a structural biologist and group leader at the Medical Research Council (MRC) Laboratory of Molecular Biology. His group studies the form and flexibility of protein complexes that associate with and modify lipid cell membranes. His work concerns the biochemistry, structures and dynamics of these key enzyme complexes.
Education:
Williams was educated at Purdue University (BS) and Eastern Washington University (MS). He completed his PhD at the University of California, Riverside in 1986 for research investigating the structure of ribonuclease.
Research and career:
The work of the Williams group is deciphering mechanisms of activation and inhibition of diverse members of the phosphoinositide 3-kinase (PI3K) enzyme family, a family of enzymes involved in cell-cell communication, lysosomal sorting, nutrient sensing, cell proliferation and the DNA-damage response. Mutations in PI3K signalling pathways are common in human tumours, and the Williams lab focuses on how they contribute to oncogenesis and how pharmaceuticals can specifically target these pathways. The Williams group has shown how conformational changes in the p110 alpha isoform accompany its activation on cell membranes, and established that oncogenic mutations activate PI3Ks by mimicking or enhancing these conformational changes. His group is uncovering structural and dynamic features that dictate the extreme sensitivity of PI3K complexes to membrane lipid packing and membrane curvature. His research has been funded by Cancer Research UK, the Medical Research Council, AstraZeneca, the Biotechnology and Biological Sciences Research Council (BBSRC), the Wellcome Trust, the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of General Medical Sciences and the British Heart Foundation. Before working at the MRC-LMB, Williams held appointments at Rutgers University, Cornell University and the Boris Kidrič Institute in Belgrade, Serbia.
Awards and honours:
Williams is a member of the European Molecular Biology Organization and a Fellow of the Academy of Medical Sciences (FMedSci). He was awarded the Morton Lectureship by the Biochemical Society and was elected a Fellow of the Royal Society (FRS) in 2017.
**Statutory Professor in the Analysis of Partial Differential Equations**
Statutory Professor in the Analysis of Partial Differential Equations:
The Statutory Professorship in the Analysis of Partial Differential Equations is a chair at the Mathematical Institute of the University of Oxford, England. Since its inception in 2009, the chair has been held by Professor Gui-Qiang Chen. It is associated with Keble College, Oxford. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**22q13 deletion syndrome**
22q13 deletion syndrome:
22q13 deletion syndrome, also known as Phelan–McDermid syndrome (PMS), is a genetic disorder caused by deletions or rearrangements on the q terminal end (long arm) of chromosome 22. Any abnormal genetic variation in the q13 region that presents with significant manifestations (phenotype) typical of a terminal deletion may be diagnosed as 22q13 deletion syndrome. There is disagreement among researchers as to the exact definition of 22q13 deletion syndrome. The Developmental Synaptopathies Consortium defines PMS as being caused by SHANK3 mutations, a definition that appears to exclude terminal deletions. The requirement to include SHANK3 in the definition is supported by many researchers but not by those who first described 22q13 deletion syndrome. A prototypical terminal deletion of 22q13 can be uncovered by karyotype analysis, but many terminal and interstitial deletions are too small. DNA microarray technology, which reveals multiple genetic problems simultaneously, has become the diagnostic tool of choice. The falling cost of whole exome sequencing and, eventually, whole genome sequencing may replace DNA microarray technology for candidate evaluation. However, fluorescence in situ hybridization (FISH) tests remain valuable for diagnosing cases of mosaicism (mosaic genetics) and chromosomal rearrangements (e.g., ring chromosome, unbalanced chromosomal translocation). Although early researchers sought a monogenic (single gene genetic disorder) explanation, recent studies have not supported that hypothesis (see Etiology).
Signs and symptoms:
Affected individuals present with a broad array of medical and behavioral manifestations (tables 1 and 2). Patients are consistently characterized by global developmental delay, intellectual disability, speech abnormalities, ASD-like behaviors, hypotonia and mild dysmorphic features. Table 1 summarizes the dysmorphic and medical conditions that have been reported in individuals with PMS. Table 2 summarizes the psychiatric and neurological symptoms associated with PMS. Most of the studies included small samples or relied on parental report or medical record review to collect information, which can account in part for the variability in the presentation of some of the presenting features. Larger prospective studies are needed to further characterize the phenotype.
Cause:
Various deletions affect the terminal region of the long arm of chromosome 22 (the paternal chromosome in 75% of cases), from 22q13.3 to 22qter. Although the deletion is most typically a result of a de novo mutation, there is an inherited form resulting from familial chromosomal translocations involving chromosome 22. In the de novo form, the size of the terminal deletion is variable and can range from 130 kb (130,000 base pairs) to 9 Mb. Deletions smaller than 1 Mb are very rare (about 3%). The remaining 97% of terminal deletions impact about 30 to 190 genes (see list, below). At one time it was thought that deletion size was not related to the core clinical features. That observation led to an emphasis on the SHANK3 gene, which resides close to the terminal end of chromosome 22. Interest in SHANK3 grew as it became associated with autism spectrum disorder (ASD) and schizophrenia. Since then, other genes on 22q13 (MAPK8IP2, CHKB, SCO2, SBF1, PLXNB2, MAPK12, PANX2, BRD1, CELSR1, WNT7B, TCF20) have been associated with autism spectrum disorder and/or schizophrenia (see references below). Some mutations of SHANK3 mimic 22q13 deletion syndrome, but SHANK3 mutations and microdeletions have quite variable impact. Some of the core features of 22q13 deletion syndrome depend on deletion size and do not depend on the loss of SHANK3. As noted above, the distal 1 Mb of 22q is a gene-rich region. There are too few clinical cases to statistically measure the relationship between deletion size and phenotype in this region. SHANK3 is also adjacent to a gene cluster (ARSA and MAPK8IP2) that has a high probability of contributing to ASD, suggesting the effects of SHANK3 deletion may be indistinguishable from other genetic losses.
A landmark study of induced pluripotent stem cell neurons cultured from patients with 22q13 deletion syndrome shows that restoration of the SHANK3 protein produces a significant, but incomplete, rescue of membrane receptors, supporting both a substantial role for SHANK3 and an additional role for other genes in the distal 1 Mb of chromosome 22. There is interest in the impact of MAPK8IP2 (also called IB2) in 22q13 deletion syndrome. MAPK8IP2 is especially interesting because it regulates the balance between NMDA receptors and AMPA receptors. The genes SULT4A1 and PARVB may cause 22q13 deletion syndrome in cases of more proximal interstitial and large terminal deletions. There are about 187 protein-coding genes in the 22q13 region. A group of genes (MPPED1, CYB5R3, FBLN1, NUP50, C22ORF9, KIAA1644, PARVB, TRMU, WNT7B and ATXN10), as well as microRNAs, may all contribute to loss of language, a feature that varies notably with deletion size. The same study found that the macrocephaly seen in 22q13 deletion syndrome patients may be associated with WNT7B. FBLN1 is responsible for synpolydactyly as well as contributing to the neurological manifestations (OMIM 608180).
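The dependence of phenotype on deletion size can be sketched as a simple interval check: a terminal deletion of a given size removes every gene lying within that distance of the chromosome end. The gene distances below are hypothetical placeholders chosen only to illustrate the logic, not real genomic coordinates (real hg38 positions would be needed for any actual analysis):

```python
# Hypothetical distances (in kb) from the 22q terminal end to each gene.
# These values are illustrative placeholders, NOT real coordinates.
gene_positions_kb = {
    "SHANK3": 80,
    "MAPK8IP2": 250,
    "ARSA": 400,
    "MAPK12": 900,
    "SULT4A1": 3500,
    "PARVB": 5200,
}

def genes_lost(terminal_deletion_kb):
    """Return the genes removed by a terminal deletion of the given size."""
    return sorted(g for g, pos in gene_positions_kb.items()
                  if pos <= terminal_deletion_kb)

# A minimal 130 kb deletion would remove only the most distal gene here,
# while a 9 Mb deletion would take every gene in this toy list.
print(genes_lost(130))
print(genes_lost(9000))
```

This is why small deletions and SHANK3 point mutations can share some features with large deletions while lacking others: the larger the terminal loss, the more of the gene-rich distal region is swept away.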
Cause:
Table of protein-coding genes involved in 22q13 deletion syndrome (based on Human Genome Browser – hg38 assembly). Underline identifies 13 genes that are associated with autism. Bold identifies genes associated with hypotonia (based on a Human Phenotype Browser search for 'hypotonia' and the OMIM database).
Diagnosis and management:
Clinical genetics and genetic testing
Genetic testing is necessary to confirm the diagnosis of PMS. A prototypical terminal deletion of 22q13 can be uncovered by karyotype analysis, but many terminal and interstitial deletions are too small to detect with this method. Chromosomal microarray should be ordered in children with suspected developmental delays or ASD. Most cases will be identified by microarray; however, small variations in genes might be missed. The falling cost of whole exome sequencing may replace DNA microarray technology for candidate gene evaluation. Biological parents should be tested with fluorescence in situ hybridization (FISH) to rule out balanced translocations or inversions. Balanced translocation in a parent increases the risk for recurrence and heritability within families (figure 3). Clinical genetic evaluations and dysmorphology exams should be done to evaluate growth, pubertal development and dysmorphic features (table 1), and to screen for organ defects (table 2).
Cognitive and behavioral assessment
All patients should undergo comprehensive developmental, cognitive and behavioral assessments by clinicians with experience in developmental disorders. Cognitive evaluation should be tailored for individuals with significant language and developmental delays. All patients should be referred for specialized speech/language, occupational and physical therapy evaluations.
Diagnosis and management:
Neurological management
Individuals with PMS should be followed by a pediatric neurologist regularly to monitor motor development, coordination, and gait, as well as conditions that might be associated with hypotonia. Head circumference should be measured routinely up until 36 months. Given the high rate of seizure disorders (up to 41% of patients) reported in the literature in patients with PMS, and their overall negative impact on development, an overnight video EEG should be considered early to rule out seizure activity. In addition, a baseline structural brain MRI should be considered to rule out the presence of structural abnormalities.
Diagnosis and management:
Nephrology
All patients should have a baseline renal and bladder ultrasonography, and a voiding cystourethrogram should be considered to rule out structural and functional abnormalities. Renal abnormalities are reported in up to 38% of patients with PMS. Vesicoureteral reflux, hydronephrosis, renal agenesis, dysplastic kidney, polycystic kidney and recurrent urinary tract infections have all been reported in patients with PMS.
Cardiology
Congenital heart defects (CHD) are reported in samples of children with PMS with varying frequency (up to 25%) (29,36). The most common CHD include tricuspid valve regurgitation, atrial septal defects and patent ductus arteriosus. Cardiac evaluation, including echocardiography and electrocardiogram, should be considered.
Gastroenterology
Gastrointestinal symptoms are common in individuals with PMS. Gastroesophageal reflux, constipation, diarrhea and cyclic vomiting are frequently described.
Table 3: Clinical Assessment Recommendations in Phelan McDermid Syndrome
Epidemiology:
The true prevalence of PMS has not been determined. More than 1,200 people have been identified worldwide according to the Phelan–McDermid Syndrome Foundation. However, it is believed to be underdiagnosed due to inadequate genetic testing and lack of specific clinical features. It is known to occur with equal frequency in males and females. Studies using chromosomal microarray for diagnosis indicate that at least 0.5% of cases of ASD can be explained by mutations or deletions in the SHANK3 gene. In addition, when ASD is associated with ID, SHANK3 mutations or deletions have been found in up to 2% of individuals.
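The cited frequencies translate into expected case counts in a straightforward way. A minimal sketch, using hypothetical cohort sizes chosen purely for illustration (they are not figures from any study):

```python
# Expected SHANK3-related case counts implied by the cited frequencies.
# Cohort sizes below are hypothetical, chosen only for illustration.
asd_cohort = 10_000      # hypothetical number of ASD diagnoses
asd_id_cohort = 10_000   # hypothetical number of ASD-with-ID diagnoses

# "at least 0.5%" of ASD cases and "up to 2%" of ASD-with-ID cases;
# integer arithmetic keeps the counts exact.
expected_asd = asd_cohort * 5 // 1000       # 0.5%
expected_asd_id = asd_id_cohort * 2 // 100  # 2%

print(expected_asd, expected_asd_id)  # 50 200
```

Even at these small percentages, the implied counts in large diagnostic cohorts suggest why underdiagnosis due to inadequate genetic testing is plausible.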
History:
The first case of PMS was described in 1985 by Watt et al., who reported a 14-year-old boy with severe intellectual disability, mild dysmorphic features and absent speech, associated with terminal loss of the distal arm of chromosome 22. In 1988, Phelan et al. described a similar clinical presentation associated with a de novo deletion in 22q13.3. Subsequent cases were described in the following years with a similar clinical presentation. Phelan et al. (2001) compared 37 subjects with 22q13 deletions with the features of 24 cases described in the literature, finding that the most common features were global developmental delay, absent or delayed speech and hypotonia. In 2001, Bonaglia et al. described a case that associated the 22q13 deletion syndrome with a disruption of the SHANK3 gene (also called ProSAP2). The following year, Anderlid et al. (2002) refined the area in 22q13 presumably responsible for the common phenotypic presentation of the syndrome to a 100 kb region in 22q13.3. Of the three genes affected, SHANK3 was identified as the critical gene due to its expression pattern and function. Wilson et al. (2003) evaluated 56 patients with the clinical presentation of PMS, all of whom had a functional loss of one copy of the SHANK3 gene. However, the same group later demonstrated that loss of the SHANK3 gene was not an essential requirement for the disorder.
**Handcuff cover**
Handcuff cover:
A handcuff cover is a piece of plastic or metal that can be placed around a pair of handcuffs. It consists of a hinged, box-like assembly locked over the handcuff chain, wristlets and keyholes. The first handcuff cover was invented by J.D. Cullip and K.E. Stefansen and patented in 1973. It is made from high-strength, high-impact ABS plastic and is still distributed by C & S Security Inc. as the "Black Box" handcuff cover. Other companies sell similar devices, e.g. CTS Thompson ("Blue Box" handcuff cover) or Sisco restraints. A handcuff cover has two key purposes: It converts a pair of standard chain-link handcuffs into rigid handcuffs, providing a rather more severe restraint.
Handcuff cover:
It covers the keyholes of the handcuffs for further security. In most cases, a handcuff cover is used in combination with a martin link belly chain, which fixes the handcuffs at waist level. This provides a rather uncomfortable restraint and may result in injury to the individual if maintained for an extended period of time. When using a handcuff cover in combination with a belly chain, the hands may be cuffed in a parallel or in a stacked position. In the stacked position, the shackled person's freedom of movement is strongly restricted and the arms are kept in a rather unnatural position, which may cause discomfort or even pain because the individual's wrists are restrained in close proximity to the torso.
Handcuff cover:
In the parallel position, the restraint causes the wrists to spread outwardly at an angle. As the handcuff cover provides a rigid structure, the individual's wrists may be bruised or cocked, restricting blood circulation. However, some models come with angled ends, which allow the hands and arms to relax in an appropriate posture, reducing physical stress on the individual being transported.
Handcuff cover:
A handcuff cover can also be linked with a connector chain to a pair of leg irons. Individuals with a handcuff cover fitted over their handcuffs can also be restrained together for transportation using so-called "gang chains". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**What Is Mathematics?**
What Is Mathematics?:
What Is Mathematics? is a mathematics book written by Richard Courant and Herbert Robbins, published in England by Oxford University Press. It is an introduction to mathematics, intended both for the mathematics student and for the general public.
First published in 1941, it discusses number theory, geometry, topology and calculus. A second edition was published in 1996 with an additional chapter on recent progress in mathematics, written by Ian Stewart.
Authorship:
The book was based on Courant's course material. Although Robbins assisted in writing a large part of the book, he had to fight for authorship. Nevertheless, Courant alone held the copyright for the book. This resulted in Robbins receiving a smaller share of the royalties.
Title:
Michael Katehakis remembers Robbins' interest in literature, and in Tolstoy in particular, and is convinced that the title of the book is most likely due to Robbins, who was inspired by the title of the essay What Is Art? by Leo Tolstoy. Robbins did the same in the book Great Expectations: The Theory of Optimal Stopping, which he co-authored with Yuan-Shih Chow and David Siegmund, where one cannot miss the connection with the title of the novel Great Expectations by Charles Dickens.
Title:
According to Constance Reid, Courant finalized the title after a conversation with Thomas Mann.
Translations:
The first Russian translation, Что такое математика?, was published in 1947; five further translations have followed, the most recent in 2010.
The first Italian translation, Che cos'è la matematica?, was published in 1950. A translation of the second edition was issued in 2000.
The first German translation Was ist Mathematik? by Iris Runge was published in 1962.
A Spanish translation of the second edition, ¿Qué Son Las Matemáticas?, was published in 2002.
The first Bulgarian translation, Що е математика?, was published in 1967. A second translation appeared in 1985.
The first Romanian translation, Ce este matematica?, was published in 1969.
The first Polish translation, Co to jest matematyka, was published in 1959. A second translation appeared in 1967. A translation of the second edition was published in 1998.
The first Hungarian translation, Mi a matematika?, was published in 1966.
The first Serbian translation, Šta je matematika?, was published in 1973.
The first Japanese translation, 数学とは何か, was published in 1966. A translation of the second edition was published in 2001.
A Korean translation of the second edition, 수학이란 무엇인가, was published in 2000.
A Portuguese translation of the second edition, O que é matemática?, was published in 2000.
Reviews:
What is Mathematics? An Elementary Approach to Ideas and Methods, book review by Brian E. Blank, Notices of the American Mathematical Society 48, #11 (December 2001), pp. 1325–1330 What is Mathematics?, book review by Leonard Gillman, The American Mathematical Monthly 105, #5 (May 1998), pp. 485–488.
Editions:
Richard Courant and Herbert Robbins (1941). What is Mathematics?: An Elementary Approach to Ideas and Methods. London: Oxford University Press. ISBN 0-19-502517-2. Reprinted several times with a few corrections of minor errors and misprints: as a "Second Edition" in 1943, as a "Third Edition" in 1945, as a "Fourth Edition" in 1947, as a "Ninth Printing" in 1958, as a "Tenth Printing" in 1960, and in 1978.
Editions:
(1996) 2nd edition, with additional material by Ian Stewart. New York: Oxford University Press. ISBN 0-19-510519-2.
Courant, Richard; Robbins, Herbert (2015). Qu'est-ce que les mathématiques ? Une introduction élémentaire aux idées et aux méthodes. Cassini. ISBN 9782842252045. French translation of the second English edition by Marie Anglade and Karine Py.
Courant, Richard; Robbins, Herbert; Stewart, Ian (2002). ¿Qué Son Las Matemáticas? Conceptos y métodos fundamentales (in Spanish). México, D. F.: Fondo de Cultura Económica. ISBN 968-16-6717-4. Spanish translation of the second English edition.
Editions:
Courant, Richard; Robbins, Herbert (1950). Che cos'è la matematica? Introduzione elementare ai suoi concetti e metodi (in Italian). Turin: Einaudi. (first Italian translation, from the 1945 English edition)
Courant, Richard; Robbins, Herbert (1971). Che cos'è la matematica? Introduzione elementare ai suoi concetti e metodi (in Italian). Turin: Boringhieri. (based on the previous Einaudi edition)
Courant, Richard; Robbins, Herbert (1984). Toán học là gì (in Vietnamese). Hanoi: Khoa học Kỹ thuật. (Vietnamese translation by Hàn Liên Hải from the Russian edition)
Courant, Richard; Robbins, Herbert; Stewart, Ian (2000). Che cos'è la matematica? Introduzione elementare ai suoi concetti e metodi (in Italian). Turin: Bollati Boringhieri. ISBN 88-339-1200-0. (Italian translation of the second English edition)
**Self-neglect**
Self-neglect:
Self-neglect is a behavioral condition in which an individual neglects to attend to their basic needs, such as personal hygiene, appropriate clothing, feeding, or tending appropriately to any medical conditions they have. More generally, any lack of self-care in terms of personal health, hygiene and living conditions can be referred to as self-neglect. Extreme self-neglect can be known as Diogenes syndrome.
Classification:
There are two types of self-neglect: intentional (active), and non-intentional (passive). Intentional self-neglect occurs when a person makes a conscious choice to engage in self-neglect. Non-intentional self-neglect occurs as a result of health-related conditions that contribute to the risk of developing self-neglect. Different societies and cultures can have different beliefs regarding acceptable living standards, making self-neglect a serious and complex problem requiring clinical, social, and ethical decisions in its management and treatment.
Presentation:
Complications
Without sufficient personal hygiene, sores can develop and minor wounds may become infected. Existing health problems may be exacerbated, due to insufficient attention being paid to them by the individual. Neglect of personal hygiene may mean that the person suffers social difficulties and isolation.
Self-neglect can also lead to a general reduction in the individual's attempts to maintain a healthy lifestyle, with increased smoking, drug misuse or lack of exercise. Any mental causes of the self-neglect may also lead to the individual refusing offers of help from medical or adult social services.
Causes:
Self-neglect can be a result of brain injury, dementia or mental illness. It can be a result of any mental or physical illness which has an effect on the person's physical abilities, energy levels, attention, organizational skills or motivation.
A decrease in motivation can also be a side effect of psychiatric medications, putting those who require them at a higher risk of self-neglect than might be caused by mental illness alone.
Risk factors:
Risk factors are: advancing age; mental health problems; cognitive impairment; dementia; frontal lobe dysfunction; depression; chronic illness; nutritional deficiency; alcohol and substance misuse; functional and social dependency; social isolation; and delirium. Age-related changes that result in functional decline, cognitive impairment, frailty, or psychiatric illness increase vulnerability for self-neglect. For this reason, it is thought that, while self-neglect can occur across the lifespan, it is more common in older people. Self-neglect is thought to be linked to underlying mental illnesses.
Risk factors:
Living in squalor is sometimes accompanied by dementia, alcoholism, schizophrenia, or personality disorders. Conversely, research has shown that 30–50% of people suffering from self-neglect show no psychiatric disorders that would explain their behavior. Alternatives to the medical model, such as sociological and psychological models, offer broader perspectives that take into account the complexities and factors associated with self-neglect. These alternate models emphasize cultural and social values and personal circumstances, and posit that self-neglect develops over time and can be rooted in family relationships and cultural values.
Diagnosis:
Definition
There is no clear operational definition of self-neglect; some research suggests a universal definition is not possible due to its complexity. Gibbons (2006) defined it as: "The inability (intentional or non-intentional) to maintain a socially and culturally accepted standard of self-care with the potential for serious consequences to the health and well-being of the self-neglecters and perhaps even to their community." The behaviors and characteristics of living in self-neglect include unkempt personal appearance, hoarding items and pets, neglecting household maintenance, living in an unclean environment, poor personal hygiene, and eccentric behaviors. Research also points to behaviors such as unwillingness to take medication, and feelings of isolation. Some of these behaviors could be explained by functional and financial constraints, as well as personal or lifestyle choices.
Diagnosis:
Use in assessment of needs
Neglect of hygiene is considered as part of the Global Assessment of Functioning, where it indicates the lowest level of individual functioning. It is also part of the activities of daily living criteria used to assess an individual's care needs. In the UK, difficulty in attending to one's own physical cleanliness or need for adequate food is part of the criteria indicating whether a person is eligible for Disability Living Allowance.
Treatments:
Treatment may involve treating the cause of the individual's self-neglect, with treatments such as those for depression, dementia or any physical problems that are hampering their ability to care for themselves.
The individual may be monitored, so that any excessive deterioration in their health or levels of self-care can be observed and acted upon. Treatment can involve care workers providing home care, attending to cleansing, dressing or feeding the individual as necessary, without reducing their independence and autonomy any more than is essential.
In combination with other illnesses, self-neglect may be one of the indicators that a person would be a candidate for treatment in sheltered housing or residential care. This would also improve their condition by providing opportunities for social interaction.
If the person is deemed not to have the mental capacity to make decisions about their own care, they may be sectioned or compelled to accept help. If they are in possession of their mental faculties, they have a right to refuse treatment. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nankali post system**
Nankali post system:
The Nankali post system is a post-and-core prosthesis used in prosthodontics and dental restoration. The post and core consists of a single smooth or serrated post and core with an additional circle ring (countersink) around it. The additional single-circle ring significantly increases the contact surface area between the core and the involved part of the tooth. The increased contact surface area decreases the pressure between the two objects (the remaining part of the tooth and the post-core), which is followed by a decline in the number of treatment failures.
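The mechanical rationale is the elementary relation between pressure, force and contact area, p = F/A: for a fixed occlusal force, enlarging the contact surface lowers the stress on the remaining tooth structure. A minimal sketch with illustrative numbers (the force and areas below are hypothetical, not measured values for this post system):

```python
# Pressure on the remaining tooth structure: p = F / A.
# The force and areas are illustrative placeholders, not clinical data.
force_n = 100.0          # hypothetical occlusal force, in newtons

area_plain_mm2 = 20.0    # hypothetical contact area, plain post-core
area_ring_mm2 = 30.0     # hypothetical area with the added circle ring

p_plain = force_n / area_plain_mm2  # N/mm^2, i.e. MPa
p_ring = force_n / area_ring_mm2    # lower pressure over the larger area

print(p_plain, p_ring)
```

Any increase in contact area therefore reduces the local stress in proportion, which is the stated mechanism behind the reduced failure rate.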
General indications:
The main indications are:
Severely damaged crown
Trauma
Tooth wear (erosion)
Hypoplastic conditions
As part of another restoration
Combined indication
Non-vital teeth
Disadvantages:
The disadvantages are as follows:
Requires an exact casting (an exact cast post-core is required to achieve the best result, particularly when the depth of the prepared canal is less than 50% of the root length)
A specific bur is required
Increased plaque accumulation and changes in its composition
Damage to soft tissues and remaining teeth, due to either poor denture design or lack of patient care
Fractured teeth:
The post is suitable for those types of fractures in which the fracture line passes apical to the crown. In the first stage it is necessary to examine the patient in order to be sure that there is no sign of any fracture in the mandible or maxilla, and then to analyse the possibility of treating the fractured tooth's canal. In addition, the condition of the tooth for using the bur needs to be checked as well.
Bur:
The bur designed for this treatment consists of a central guide and two symmetrical cutters that produce the single circular ring. The bur is used after preparation of the root canal, as with other posts.
Bur:
The two main advantages of this bur are its speed of root canal preparation and its accuracy. The bur is made in different sizes, which makes it suitable for treating various teeth. The depth of the prepared ring is in direct proportion to the size of the bur. One of the major problems of this post system is its dependence on the bur: without the bur it is impossible to use the system.
History:
The Nankali post was designed in 1997 at the Orthopedic and Implant Stomatology Department of the National Medical University by Dr. Ali Nankali, and was verified (October 1999) by the Scientific Board of Bogomolets National Medical University and the international patent organization (УДК; 616.314-76-77:616.314.11-74:678.029.46:612.311) in Kyiv, Ukraine. Initially it was presented at the 54th Medical Science Conference of Students & Young Scientists in 1999, organized by the Ukraine Health Ministry, the National Medical University known as O.O. Bogomolets and the Society of Science Students known as O.O. Kisilia. The result of the presentation was published in "Young Scientists and Students / Scientific Medical Seminar in 1999".
History:
In 1999 a patent was applied for (УДК; 616.314-76-77:616.314.11-74:678.029.46:612.311) and the post became a part of the research of the Orthopedic and Implant Stomatology Department of the National Medical University. This new modified post-core was under study until 2004 and was then attested by the Dental Scientific Board of Ukraine.
During the four years of careful observation (2000–2004, National Medical University in Kyiv), no complications were reported from patients treated with the Nankali post.
Initially the bur and cast post-core were manufactured in the laboratory of the Orthopedic and Implant Stomatology Department of the National Medical University (O.O.Bogomolets) in Kyiv. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Burmese glass**
Burmese glass:
Burmese glass is a type of opaque colored art glass, shading from yellow to pink. It is found in either the rare original "shiny" finish or the more common "satin" finish. It is used for table glass and small, ornamental vases and dressing table articles.
Burmese glass:
It was first made in 1885 by the Mount Washington Glass Company of New Bedford, Massachusetts, USA. Burmese glass found favor with Queen Victoria. From 1886, the British company of Thomas Webb & Sons was licensed to produce the glass; their version, known as Queen's Burmeseware, was used for tableware and decorative glass, often with painted decoration. Burmese was also made after 1970 by the Fenton art glass company. Burmese is a uranium glass: the formula contains uranium oxide with a tincture of gold added. The uranium oxide produces the inherent soft yellow color of Burmese glass. Because of the added gold, the characteristic pink blush of Burmese was created by re-heating the object in the furnace (the "glory hole"). The length of time in the furnace determines the intensity of the color. Strangely, if the object is subjected to the heat again, it returns to the original yellow color.
**Presburger arithmetic**
Presburger arithmetic:
Presburger arithmetic is the first-order theory of the natural numbers with addition, named in honor of Mojżesz Presburger, who introduced it in 1929. The signature of Presburger arithmetic contains only the addition operation and equality, omitting the multiplication operation entirely. The axioms include a schema of induction.
Presburger arithmetic:
Presburger arithmetic is much weaker than Peano arithmetic, which includes both addition and multiplication operations. Unlike Peano arithmetic, Presburger arithmetic is a decidable theory. This means it is possible to algorithmically determine, for any sentence in the language of Presburger arithmetic, whether that sentence is provable from the axioms of Presburger arithmetic. The asymptotic running-time computational complexity of this algorithm is at least doubly exponential, however, as shown by Fischer & Rabin (1974).
Overview:
The language of Presburger arithmetic contains constants 0 and 1 and a binary function +, interpreted as addition.
Overview:
In this language, the axioms of Presburger arithmetic are the universal closures of the following:

1. ¬(0 = x + 1)
2. x + 1 = y + 1 → x = y
3. x + 0 = x
4. x + (y + 1) = (x + y) + 1
5. Let P(x) be a first-order formula in the language of Presburger arithmetic with a free variable x (and possibly other free variables). Then the following formula is an axiom: (P(0) ∧ ∀x(P(x) → P(x + 1))) → ∀y P(y).

(5) is an axiom schema of induction, representing infinitely many axioms. These cannot be replaced by any finite number of axioms; that is, Presburger arithmetic is not finitely axiomatizable in first-order logic. Presburger arithmetic can be viewed as a first-order theory with equality containing precisely all consequences of the above axioms. Alternatively, it can be defined as the set of those sentences that are true in the intended interpretation: the structure of non-negative integers with constants 0, 1, and the addition of non-negative integers.
Overview:
Presburger arithmetic is designed to be complete and decidable. Therefore, it cannot formalize concepts such as divisibility or primality, or, more generally, any number concept leading to multiplication of variables. However, it can formulate individual instances of divisibility; for example, it proves "for all x, there exists y : (y + y = x) ∨ (y + y + 1 = x)". This states that every number is either even or odd.
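The even-or-odd instance above lends itself to a mechanical check. The following sketch (a bounded verification over an initial segment of the naturals, not a genuine decision procedure) illustrates the formula "for all x, there exists y : (y + y = x) ∨ (y + y + 1 = x)":

```python
# The Presburger sentence states that every natural number is
# even or odd. A witness y never exceeds x, so for each fixed x
# a bounded search over 0..x suffices.
def has_witness(x):
    return any(y + y == x or y + y + 1 == x for y in range(x + 1))

# Bounded check over an initial segment of N (illustration only;
# a real decision procedure would use quantifier elimination).
assert all(has_witness(x) for x in range(1000))
```

The bounded search only verifies finitely many instances; Presburger's quantifier-elimination procedure proves the universally quantified sentence once and for all.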
Properties:
Mojżesz Presburger proved Presburger arithmetic to be:

consistent: there is no statement in Presburger arithmetic which can be deduced from the axioms such that its negation can also be deduced;
complete: for each statement in the language of Presburger arithmetic, either it is possible to deduce it from the axioms or it is possible to deduce its negation;
Properties:
decidable: there exists an algorithm which decides whether any given statement in Presburger arithmetic is a theorem or a nontheorem. The decidability of Presburger arithmetic can be shown using quantifier elimination, supplemented by reasoning about arithmetical congruence. The steps used to justify a quantifier elimination algorithm can be used to define recursive axiomatizations that do not necessarily contain the axiom schema of induction. In contrast, Peano arithmetic, which is Presburger arithmetic augmented with multiplication, is not decidable, as a consequence of the negative answer to the Entscheidungsproblem. By Gödel's incompleteness theorem, Peano arithmetic is incomplete and its consistency is not internally provable (but see Gentzen's consistency proof).
Properties:
Computational complexity The decision problem for Presburger arithmetic is an interesting example in computational complexity theory and computation. Let n be the length of a statement in Presburger arithmetic. Then Fischer & Rabin (1974) proved that, in the worst case, the proof of the statement in first-order logic has length at least 2^(2^(cn)), for some constant c > 0. Hence, their decision algorithm for Presburger arithmetic has runtime at least doubly exponential. Fischer and Rabin also proved that for any reasonable axiomatization (defined precisely in their paper), there exist theorems of length n which have doubly exponential length proofs. Intuitively, this suggests there are computational limits on what can be proven by computer programs. Fischer and Rabin's work also implies that Presburger arithmetic can be used to define formulas which correctly calculate any algorithm as long as the inputs are less than relatively large bounds. The bounds can be increased, but only by using new formulas. On the other hand, a triply exponential upper bound on a decision procedure for Presburger arithmetic was proved by Oppen (1978).
Properties:
A tighter complexity bound was shown using alternating complexity classes by Berman (1980). The set of true statements in Presburger arithmetic (PA) is complete for TimeAlternations(2^(2^(n^O(1))), n). Thus, its complexity is between double exponential nondeterministic time (2-NEXP) and double exponential space (2-EXPSPACE). Completeness is under polynomial-time many-one reductions. (Also, note that while Presburger arithmetic is commonly abbreviated PA, in mathematics in general PA usually means Peano arithmetic.) For a more fine-grained result, let PA(i) be the set of true Σ_i PA statements, and PA(i, j) the set of true Σ_i PA statements with each quantifier block limited to j variables. '<' is considered to be quantifier-free; here, bounded quantifiers are counted as quantifiers.
Properties:
PA(1, j) is in P, while PA(1) is NP-complete.
For i > 0 and j > 2, PA(i + 1, j) is Σ_i^P-complete. The hardness result only needs j > 2 (as opposed to j = 1) in the last quantifier block.
Properties:
For i > 0, PA(i + 1) is Σ_i^EXP-complete. Short Σ_n Presburger arithmetic (n > 2) is Σ_{n−2}^P-complete (and thus NP-complete for n = 3). Here, 'short' requires bounded (i.e. O(1)) sentence size, except that integer constants are unbounded (but their number of bits in binary counts against input size). Also, Σ_2 two-variable PA (without the restriction of being 'short') is NP-complete. Short Π_2 (and thus Σ_2) PA is in P, and this extends to fixed-dimensional parametric integer linear programming.
Applications:
Because Presburger arithmetic is decidable, automatic theorem provers for Presburger arithmetic exist. For example, the Coq proof assistant system features the tactic omega for Presburger arithmetic, and the Isabelle proof assistant contains a verified quantifier elimination procedure by Nipkow (2010). The double exponential complexity of the theory makes it infeasible to use the theorem provers on complicated formulas, but this behavior occurs only in the presence of nested quantifiers: Nelson & Oppen (1978) describe an automatic theorem prover which uses the simplex algorithm on an extended Presburger arithmetic without nested quantifiers to prove some instances of quantifier-free Presburger arithmetic formulas. More recent satisfiability modulo theories solvers use complete integer programming techniques to handle the quantifier-free fragment of Presburger arithmetic. Presburger arithmetic can be extended to include multiplication by constants, since multiplication is repeated addition. Most array subscript calculations then fall within the region of decidable problems. This approach is the basis of at least five proof-of-correctness systems for computer programs, beginning with the Stanford Pascal Verifier in the late 1970s and continuing through to Microsoft's Spec# system of 2005.
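The extension to multiplication by constants can be illustrated with a small sketch (the constant NCOLS and the helper names are illustrative, not from the source): since c·x abbreviates x + x + … + x with c summands, a row-major array subscript i·NCOLS + j remains a Presburger term, which is why such subscript calculations stay decidable.

```python
# Multiplication by a *constant* stays within Presburger
# arithmetic because c*x abbreviates x + x + ... + x (c times).
def times_const(c, x):
    total = 0
    for _ in range(c):
        total += x   # only addition is ever used
    return total

# Row-major subscript of a[i][j] with a constant row width:
# offset = i*NCOLS + j is a Presburger term, so bounds checks
# on it are decidable. NCOLS is an illustrative constant.
NCOLS = 8
def offset(i, j):
    return times_const(NCOLS, i) + j

assert offset(2, 3) == 19
```

Multiplication of two variables, by contrast, cannot be unfolded this way and would leave the decidable fragment.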
Presburger-definable integer relation:
Some properties are now given about integer relations definable in Presburger Arithmetic. For the sake of simplicity, all relations considered in this section are over non-negative integers.
Presburger-definable integer relation:
A relation is Presburger-definable if and only if it is a semilinear set. A unary integer relation R, that is, a set of non-negative integers, is Presburger-definable if and only if it is ultimately periodic; that is, if there exists a threshold t ∈ N and a positive period p ∈ N, p > 0, such that, for all integers n with n ≥ t, n ∈ R if and only if n + p ∈ R. By the Cobham–Semenov theorem, a relation is Presburger-definable if and only if it is definable in Büchi arithmetic of base k for all k ≥ 2. A relation definable in Büchi arithmetic of bases k and k′, with k and k′ multiplicatively independent integers, is Presburger-definable.
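An ultimately periodic set is fully described by a finite prefix, a threshold t and a period p; membership can then be tested as sketched below (the representation and helper names are illustrative assumptions, not from the source):

```python
# A unary Presburger-definable set is ultimately periodic:
# below the threshold t, membership is given by an explicit
# finite set; at or above t, membership repeats with period p.
def make_ultimately_periodic(prefix, t, p, residues):
    """residues: the members n with t <= n < t + p (one full period)."""
    pattern = {(n - t) % p for n in residues}
    def member(n):
        if n < t:
            return n in prefix
        return (n - t) % p in pattern
    return member

# Example: the even numbers, with threshold t = 0 and period p = 2.
even = make_ultimately_periodic(set(), 0, 2, {0})
assert even(10) and not even(7)
```

Any such description can be turned into a Presburger formula (a disjunction over the prefix plus a congruence on n − t), and conversely.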
Presburger-definable integer relation:
An integer relation R is Presburger-definable if and only if all sets of integers which are definable in first order logic with addition and R (that is, Presburger Arithmetic plus a predicate for R ) are Presburger-definable. Equivalently, for each relation R which is not Presburger-definable, there exists a first-order formula with addition and R which defines a set of integers which is not definable using only addition.
Presburger-definable integer relation:
Muchnik's characterization Presburger-definable relations admit another characterization: by Muchnik's theorem. It is more complicated to state, but led to the proof of the two former characterizations. Before Muchnik's theorem can be stated, some additional definitions must be introduced.
Let R ⊆ N^d be a set; the section x_i = j of R, for i < d and j ∈ N, is defined as {(x_0, …, x_{i−1}, x_{i+1}, …, x_{d−1}) ∈ N^(d−1) | (x_0, …, x_{i−1}, j, x_{i+1}, …, x_{d−1}) ∈ R}.
Presburger-definable integer relation:
Given two sets R, S ⊆ N^d and a d-tuple of integers (p_0, …, p_{d−1}) ∈ N^d, the set R is called (p_0, …, p_{d−1})-periodic in S if, for all (x_0, …, x_{d−1}) ∈ S such that (x_0 + p_0, …, x_{d−1} + p_{d−1}) ∈ S, then (x_0, …, x_{d−1}) ∈ R if and only if (x_0 + p_0, …, x_{d−1} + p_{d−1}) ∈ R. For s ∈ N, the set R is said to be s-periodic in S if it is (p_0, …, p_{d−1})-periodic for some (p_0, …, p_{d−1}) ∈ Z^d such that ∑_{i=0}^{d−1} |p_i| < s.
Presburger-definable integer relation:
Finally, for k, x_0, …, x_{d−1} ∈ N, let C(k, (x_0, …, x_{d−1})) = {(x_0 + c_0, …, x_{d−1} + c_{d−1}) | 0 ≤ c_i < k} denote the cube of size k whose lesser corner is (x_0, …, x_{d−1}). Muchnik's theorem then states that R ⊆ N^d is Presburger-definable if and only if every section of R is Presburger-definable and there exists s ∈ N such that, for every k ∈ N, there exists a threshold t ∈ N such that R is s-periodic in C(k, (x_0, …, x_{d−1})) for every (x_0, …, x_{d−1}) ∈ N^d with ∑_{i=0}^{d−1} x_i > t. Intuitively, the integer s represents the length of a shift, the integer k is the size of the cubes and t is the threshold before the periodicity. This result remains true when the condition ∑_{i=0}^{d−1} x_i > t is replaced either by min(x_0, …, x_{d−1}) > t or by max(x_0, …, x_{d−1}) > t. This characterization led to the so-called "definable criterion for definability in Presburger arithmetic", that is: there exists a first-order formula with addition and a d-ary predicate R which holds if and only if R is interpreted by a Presburger-definable relation. Muchnik's theorem also allows one to prove that it is decidable whether an automatic sequence accepts a Presburger-definable set.
**WcaG RNA motif**
WcaG RNA motif:
The wcaG RNA motif is an RNA structure conserved in some bacteria that was detected by bioinformatics. wcaG RNAs are found in certain phages that infect cyanobacteria. Most known wcaG RNAs were found in sequences of DNA extracted from uncultivated marine bacteria. wcaG RNAs might function as cis-regulatory elements, in view of their consistent location in the possible 5' untranslated regions of genes. It has been suggested that wcaG RNAs might further function as riboswitches.
WcaG RNA motif:
The genes hypothesized to be regulated by wcaG RNAs function in the synthesis of exopolysaccharides, or are induced by high amounts of light; these latter genes are presumably related to cyanobacterial photosynthesis. The wcaG RNAs detected in purified phages are upstream of high-light-induced genes. Although such genes are not thought of as typical of phages, it has previously been observed that phages infecting cyanobacteria commonly incorporate them.
**CD226**
CD226:
CD226 (Cluster of Differentiation 226), PTA1 (outdated term, 'platelet and T cell activation antigen 1') or DNAM-1 (DNAX Accessory Molecule-1) is a ~65 kDa immunoglobulin-like transmembrane glycoprotein expressed on the surface of natural killer cells, NK T cells, B cells, dendritic cells, hematopoietic precursor cells, platelets, monocytes and T cells. The CD226 gene is conserved between human and mouse. In humans the CD226 gene is located on chromosome 18q22.3; in mice it is located on chromosome 18E4.
Structure:
DNAM-1 is composed of three domains: an extracellular domain of 230 amino acids with two immunoglobin-like V-set domains and eight N-glycosylation sites, a transmembrane domain of 28 amino acids and a cytosolic domain of 60 amino acids containing four putative tyrosine residues and one serine residue for phosphorylation.
Signaling:
Upon engagement of its ligand, DNAM-1 is phosphorylated by protein kinase C. The adhesion molecule LFA-1 then crosslinks with DNAM-1, which results in recruitment of DNAM-1 to lipid rafts and promotes association with the actin cytoskeleton. Cross-linking with LFA-1 also induces phosphorylation on Tyr128 and Tyr113 by the Src kinase Fyn. DNAM-1 and CD244 together promote phosphorylation of the SH2 domain of SLP-76. This leads to activation of phospholipase Cγ2, Ca2+ influx, cytoskeletal reorganization, degranulation, and secretion.
Function:
DNAM-1 mediates cellular adhesion to other cells bearing its ligands, nectin molecule CD112 and nectin-like protein CD155, that are broadly distributed on normal neuronal, epithelial, fibroblastic cells, dendritic cells, monocytes and on infected or transformed cells.
Function:
DNAM-1 promotes lymphocyte signaling, lymphokine secretion and cytotoxicity of NK cells and cytotoxic CD8+ T lymphocytes. Cross-linking of DNAM-1 with antibodies causes cellular activation. DNAM-1 participates in platelet activation and aggregation. DNAM-1 possibly plays a role in trans-endothelial migration of NK cells, because it was shown that monoclonal antibodies against DNAM-1 or CD155 inhibit this process. DNAM-1 interaction with its ligands promotes killing of immature and mature dendritic cells, is involved in the crosstalk between NK cells and T lymphocytes, and enables lysis of activated T lymphocytes during graft-versus-host disease (GvHD). DNAM-1 also participates in the immunological synapse, where it is colocalized with LFA-1.
DNAM-1 regulation:
DNAM-1 expression on NK cells can be regulated by cell-cell interaction and by soluble factors. In human, IL-2 and IL-15 up-regulate DNAM-1 expression, whereas TGF-β, indolamine 2,3-dioxygenase and chronic exposure to CD155 can down-regulate DNAM-1 expression on NK cells.
DNAM-1 and NK cells:
DNAM-1 is involved in NK cell education, differentiation, cytokine production and immune synapse formation. DNAM-1 exerts synergistic roles in NK cell regulation with three molecules: TIGIT, CD96 and CRTAM. The cytotoxic response of NK cells might require synergistic activation from specific pairs of receptors; DNAM-1 can synergize with the SLAM family member 2B4 (CD244) or with other receptors to induce full NK cell activation.
DNAM-1 in cancer:
The role of DNAM-1 in the tumor environment was first described in vivo using the RMA lymphoma model. In this model, enforced expression of the DNAM-1 ligands CD155 and CD112 increased tumor rejection. CD155 and CD112 are expressed on the surface of a wide number of tumor cells in solid and lymphoid malignancies such as lung carcinoma, primary human leukemia, myeloma, melanoma, neuroblastoma, ovarian cancer, colorectal carcinoma, and Ewing sarcoma cells. The role of DNAM-1 in the killing of tumor cells was supported by a DNAM-1−/− mouse model that was more susceptible to formation of spontaneous fibrosarcoma. It was shown that NK cells can kill leukemia and neuroblastoma cells expressing CD155, and that blockade of CD155 or DNAM-1 results in inhibition of tumor cell lysis. In vivo, tumor cells are capable of evading DNAM-1 tumor-suppressing mechanisms: they can downregulate CD155 or CD112 to prevent recognition of these DNAM-1 ligands, or DNAM-1 itself can be downregulated from the effector NK cell surface through chronic ligand (CD155) exposure. DNAM-1 has also been used in T lymphocytes with chimeric antigen receptors (CARs) for the treatment of cancer.
DNAM-1 and infections:
DNAM-1 has a relevant role in the recognition of virus-infected cells by NK cells during early infection, for example in cytomegalovirus infection. DNAM-1 ligands are also expressed on antigen-presenting cells activated by toll-like receptors, and CD155 might be induced by the DNA-damage response, as was demonstrated for human immunodeficiency virus (HIV). DNAM-1 functionality during infections may be impaired by viral immune evasion mechanisms: viruses can downregulate production of surface CD112 and CD155 and thus avoid recognition by DNAM-1 expressed on NK cells. Another route is downregulation of DNAM-1 expression, which may occur during chronic infections. NK cells activated with interferon α can kill HCV-infected cells in a DNAM-1-dependent manner. During bacterial infection, interaction between DNAM-1 and its ligands helps mediate the migration of leukocytes from the blood to secondary lymphoid organs or into inflamed tissues.
Soluble DNAM-1:
It has been suggested that soluble DNAM-1 is a prognostic marker in some types of cancer and in graft-versus-host disease, and that soluble DNAM-1 might play a role in the pathogenesis of some autoimmune diseases such as systemic lupus erythematosus, systemic sclerosis and rheumatoid arthritis.
**2002 AA29**
2002 AA29:
2002 AA29 (also written 2002 AA29) is a small near-Earth asteroid that was discovered on January 9, 2002 by the LINEAR (Lincoln Near Earth Asteroid Research) automatic sky survey. The diameter of the asteroid is only about 20–100 metres (70–300 ft). It revolves about the Sun on an almost circular orbit very similar to that of the Earth. Its orbit lies for the most part inside the Earth's orbit, which it crosses near the asteroid's furthest point from the Sun, the aphelion. Because of this orbit, the asteroid is classified as Aten type, named after the asteroid 2062 Aten.
2002 AA29:
A further characteristic is that its mean orbital period about the Sun is exactly one sidereal year. This means that it is locked into a relationship with the Earth, since such an orbit is only stable under particular conditions. As yet only a few asteroids of this sort are known, locked into a 1:1 resonance with the Earth. The first was 3753 Cruithne, discovered in 1986.
2002 AA29:
Asteroids that have a 1:1 orbital resonance with a planet are also called co-orbital objects, because they follow the orbit of the planet. The most numerous known co-orbital asteroids are the so-called trojans, which occupy the L4 and L5 Lagrangian points of the relevant planet. However, 2002 AA29 does not belong to these. Instead, it follows a so-called horseshoe orbit along the path of the Earth.
Orbit:
Orbital data Shortly after the discovery by LINEAR, scientists at the Jet Propulsion Laboratory (JPL), Athabasca University (Canada), Queen's University in Kingston (Ontario, Canada), York University in Toronto and the Tuorla Observatory of the University of Turku in Finland determined the unusual orbit of 2002 AA29, and further observations at the Canada–France–Hawaii Telescope in Hawaii confirmed that its orbit lies for the most part inside Earth's orbit. The orbits of most asteroids lie in the asteroid belt between Mars and Jupiter. Through orbital disturbances by the gas giant planets (mainly Jupiter), the Kirkwood gaps, and the Yarkovsky effect (a force due to asymmetrical absorption and emission of infra-red radiation), asteroids are diverted into the inner Solar System, where their orbits are further influenced by close approaches to the inner planets. 2002 AA29 has probably been brought in the same way from the outer Solar System into Earth's influence. However, it has also been suggested that the asteroid has always been on a near-Earth orbit, and thus that it or a precursor body was formed near Earth's orbit. In this case one possibility is that it could be a fragment from a collision of a middle-sized asteroid with Earth or the Moon.
Orbit:
Its mean orbital period is one sidereal year. After it was diverted into the inner Solar System – or formed on a path near Earth's orbit – the asteroid must have been moved into an orbit corresponding with Earth's. In this orbit it was repeatedly pulled by Earth in such a way that its own orbital period became the same as that of Earth; in the current orbit, Earth thus holds the asteroid in synchronicity with its own orbit. The orbit of the asteroid is almost circular, with an eccentricity of 0.012, even lower than that of the Earth at 0.0167. Other near-Earth asteroids have on average a significantly higher eccentricity of 0.29. Also, all other asteroids in 1:1 resonance with Earth known before 2002 have very strongly elliptical orbits – e.g. the eccentricity of (3753) Cruithne is 0.515. At the time of its discovery the orbit of 2002 AA29 was unique, because of which the asteroid is often called the first true co-orbital companion of Earth, since the paths of previously discovered asteroids are not very similar to Earth's orbit. The very low orbital eccentricity of 2002 AA29 is also an indication that it must always have been on a near-Earth orbit, or that the Yarkovsky effect must have caused it to spiral comparatively strongly into the inner Solar System over billions of years, since as a rule asteroids which have been steered by planets have orbits with higher eccentricity.
Orbit:
The orbital inclination with respect to the ecliptic (orbital plane of Earth) of 2002 AA29 is a moderate 10.739°. Hence its orbit is slightly tilted compared with that of Earth.
Orbit:
Shape of the orbit If one looks at the orbit of 2002 AA29 from a point moving with the Earth around the Sun (the reference frame of the Earth–Sun system), it describes over the course of 95 years an arc of almost 360°, which during the next 95 years it retraces in reverse. The shape of this arc is reminiscent of a horseshoe, from which comes the name "horseshoe orbit". As it moves along the Earth's orbit, it winds in a spiral about it, in which each loop of the spiral takes one year. This spiral motion (in the Earth–Sun reference frame) arises from the slightly lower eccentricity and the tilt of the orbit: the inclination relative to the Earth's orbit is responsible for the vertical component of the spiral loop, and the difference in eccentricity for the horizontal component.
Orbit:
When 2002 AA29 is approaching the Earth from in front (i.e. it is moving slightly slower, and the Earth is catching it up), the gravitational attraction of the Earth shifts it onto a slightly faster orbit, a little nearer the Sun. It now hurries ahead of the Earth along its new orbit, until after 95 years it has almost lapped the Earth and is coming up from behind. Again it comes under the Earth's gravitational influence; this time it is lifted onto a slower orbit, further from the Sun. On this orbit it can no longer keep pace with the Earth, and it falls behind until in 95 years it is once again approaching the Earth from in front. The Earth and 2002 AA29 chase each other in turn around the Sun, but do not get close enough to break the pattern.
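The drift described above is a consequence of Kepler's third law, which for solar orbits gives T = a^(3/2) with T in years and a in astronomical units: a slightly smaller semi-major axis means a shorter period, so the body gains on the Earth, and a slightly larger one means it falls behind. A minimal sketch (the offsets of ±0.001 AU are illustrative, not values fitted to 2002 AA29):

```python
# Kepler's third law for orbits about the Sun:
# T = a**1.5, with T in years and a in astronomical units.
def period_years(a_au):
    return a_au ** 1.5

# An orbit slightly inside Earth's is faster, one slightly
# outside is slower -- the mechanism of the horseshoe exchange.
inner = period_years(0.999)   # illustrative offset, not measured
outer = period_years(1.001)
assert inner < 1.0 < outer
```

Each encounter with Earth nudges the asteroid from one of these regimes to the other, which is what keeps the horseshoe pattern closed.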
Orbit:
On 8 January 2003, the asteroid approached the Earth from in front to a distance of 0.0391 AU (5,850,000 km; 3,630,000 mi), its closest approach for nearly a century. Since that date, it has been hurrying ahead (with a semi-major axis less than 1 AU), and will continue to do so until it has reached its closest approach from behind on 11 July 2097 at a distance of 0.037712 AU (5,641,600 km; 3,505,500 mi). As a result of this subtle exchange with the Earth, unlike other Earth orbit crossing asteroids, we need have no fear that it could ever collide with the Earth. Calculations indicate that in the next few thousand years it will never come closer than 4.5 million kilometres, or about twelve times the distance from the Earth to the Moon.
Orbit:
Because of its orbital inclination of 10.739° to the ecliptic, 2002 AA29 is not always forced by the Earth onto its horseshoe orbit, however, but can sometimes slip out of this pattern. It is then caught for a while in the neighbourhood of the Earth. This will next happen in about 600 years, i.e. in the 26th century. It will then stay within the small gap in the Earth's orbit which it does not reach on its horseshoe orbit, and will be no further than 0.2 astronomical units (30 million km) from the Earth. There it will slowly circle the Earth almost like a second moon, although it takes one year for a circuit. After 45 years it finally switches back into the horseshoe orbit, until it again stays near the Earth for 45 years around the year 3750, and again in 6400. In these phases in which it stays outside its horseshoe orbit, it oscillates in the narrow region along the Earth's orbit where it is caught, moving back and forth with a period of 15 years. Because it is not bound to the Earth like the Moon, but is mainly under the gravitational influence of the Sun, it belongs to the class of bodies called quasi-satellites. This is somewhat analogous to two cars travelling side by side at the same speed, repeatedly overtaking one another, but not attached to each other. Orbital calculations show that 2002 AA29 was in this quasi-satellite orbit for 45 years from about 520 AD, but because of its tiny size was too dim to have been seen. It switches approximately cyclically between the two orbital forms, always staying 45 years in the quasi-satellite orbit. Outside the time frame from about 520 to 6500 AD, the calculated orbits become chaotic, i.e. not predictable, and thus no exact statements can be made for periods outside this frame. 2002 AA29 was the first known heavenly body that switches between horseshoe and quasi-satellite orbits.
Physical nature:
Brightness and size Relatively little is known about 2002 AA29 itself. With a size of about 20–100 metres (70–300 ft) it is very small, on account of which it is seen from Earth as a small point even with large telescopes, and can only be observed using highly sensitive CCD cameras. At the time of its closest approach in January 2003 it had an apparent magnitude of about 20.4. So far nothing concrete is known about the composition of 2002 AA29. Because of its nearness to the Sun it cannot, however, consist of volatile substances such as water ice, since these would evaporate or sublime; one can clearly observe this happening to a comet, where it forms the visible tail. Presumably it has a dark, carbon-bearing or somewhat lighter silicate-rich surface; in the former case the albedo would be around 0.05, in the latter somewhat higher at 0.15 to 0.25. It is due to this uncertainty that the figures for its diameter cover such a wide range.
Physical nature:
A further uncertainty arises from radar echo measurements at the Arecibo Radio Telescope, which could only pick up an unexpectedly weak radar echo, implying that 2002 AA29 is either smaller than estimated or reflects radio waves only weakly. In the former case it would have to have an unusually high albedo. This would be evidence in support of the speculation that it, or at least the material of which it is composed, is different from most other asteroids so far discovered on near-Earth orbits, or represents a fragment thrown off by the collision of a medium-sized asteroid with the Earth or the Moon.
Physical nature:
Rotational period Using radar echo measurements at the Arecibo radio telescope, the rotational period of 2002 AA29 could be determined. In this radar astronomy procedure, radio waves of known wavelength are emitted from a radio telescope aimed at an asteroid. There they are reflected, and because of the Doppler effect the part of the surface that is moving towards the observer (because of the asteroid's rotation) shortens the wavelength of the reflected waves, whilst the part turning away from the observer lengthens it. As a result, the wavelength of the reflected waves is "smeared out". The extent of this smearing, together with the diameter of the asteroid, allows the rotational period to be narrowed down: an upper limit of 33 minutes is calculated for 2002 AA29, and it probably rotates more quickly. This rapid rotation, together with the small diameter and therefore low mass, leads to some interesting conclusions. The asteroid rotates so quickly that the centrifugal force on its surface exceeds its gravitational pull. It is therefore under tension and so cannot be composed of an agglomeration of loosely bound debris or of fragments circling each other – as is supposed for several other asteroids and, for example, has been determined for the asteroid (69230) Hermes. Instead the body must be made of a single relatively strong block of rock or of pieces baked together. However, its tensile strength is probably considerably lower than that of terrestrial rock, and the asteroid is probably also very porous.
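The claim that centrifugal force exceeds gravity can be checked with the standard break-up formula for a homogeneous rotating sphere, T_crit = √(3π/(Gρ)): loose surface material is flung off once the rotation period falls below T_crit. A rough sketch, assuming an illustrative rock-like bulk density of 2000 kg/m³ (not a measured value for 2002 AA29):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Critical rotation period below which loose material on the
# equator of a homogeneous sphere is flung off: T = sqrt(3*pi/(G*rho)).
# Note the radius cancels out, so only the bulk density matters.
def critical_period_hours(rho):
    return math.sqrt(3 * math.pi / (G * rho)) / 3600

# Assumed rock-like bulk density (illustrative, not measured).
t_crit = critical_period_hours(2000)
# t_crit comes out at roughly 2.3 hours; a spin faster than
# 33 minutes is far below it, so a loosely bound rubble pile
# could not hold together.
assert t_crit > 33 / 60
```

Since the radius cancels, this argument holds regardless of the exact diameter within the 20–100 m range.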
Physical nature:
2002 AA29 cannot possibly have been built up from individual small pieces, as these would be thrown apart by the rapid rotation. Therefore, it must be a fragment blown off in the collision of two heavenly bodies. J. Richard Gott and Edward Belbruno from Princeton University have speculated that 2002 AA29 might have formed together with Earth and Theia, the postulated planet that, according to the giant impact hypothesis, collided with Earth in its early history.
Outlook:
Because its orbit is very similar to the Earth's, the asteroid is relatively easily reachable by space probes. 2002 AA29 would therefore be a suitable object of study for more precise research into the structure and formation of asteroids and the evolution of their orbits around the Sun. Meanwhile, further co-orbital companions of the Earth of this type on horseshoe orbits or on orbits as quasi-satellites have already been found, such as the quasi-satellite 2003 YN107. Furthermore, it is assumed that there are small trojan companions of the Earth with diameters in the region of 100 metres located at the L4 and L5 Lagrangian points of the Earth–Sun system.
Related objects:
6Q0B44E 2006 RH120 2003 YN107 – quasi-satellite of Earth 2010 TK7 – Trojan co-orbital companion of Earth 3753 Cruithne (1986 TO) 2001 GO2 2002 AA29 2006 JY26 2010 SO16 2012 FC71 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**System sales**
System sales:
System sales is a term used in the franchising industry. System sales represents the total sales of all outlets that use a brand, or that use multiple brands owned by one franchisor. It is always higher than the franchisor's revenue. For example, say an average "Fast Eats" restaurant has annual revenue of US$1 million. Fast Eats Inc operates 1,000 Fast Eats restaurants directly and franchises another 3,000 outlets, taking a 20% cut of sales. Fast Eats Inc's reported revenue is $1.6 billion (1,000 × $1 million from direct operations and 20% × 3,000 × $1 million from franchised outlets). But Fast Eats' system revenue is $4 billion (4,000 × $1 million).
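The arithmetic of the hypothetical "Fast Eats" example above can be sketched as follows: reported revenue counts directly operated outlets in full but only the franchise cut of franchised outlets, while system sales count every outlet in full.

```python
# Hypothetical figures from the "Fast Eats" example above.
avg_outlet_sales = 1_000_000   # US$ per outlet per year
direct_outlets = 1_000         # outlets operated directly by the franchisor
franchised_outlets = 3_000     # franchised outlets
franchise_cut = 0.20           # franchisor's cut of franchised-outlet sales

# Reported revenue: full sales of direct outlets + cut of franchised sales.
reported_revenue = (direct_outlets * avg_outlet_sales
                    + franchise_cut * franchised_outlets * avg_outlet_sales)

# System sales: full sales of every outlet carrying the brand.
system_sales = (direct_outlets + franchised_outlets) * avg_outlet_sales

print(reported_revenue)  # 1.6 billion
print(system_sales)      # 4 billion
```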
System sales:
System sales provides a useful way of assessing the growth of a franchised brand. A franchise operator can easily increase its reported revenue by taking more outlets under direct management, but that may not be the best option for the profitability of the business, and the increase in accounting revenue it generates may give a misleading impression of the rate of growth of the underlying business, if system sales are not taken into account. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Papillorenal syndrome**
Papillorenal syndrome:
Papillorenal syndrome is an autosomal dominant genetic disorder marked by underdevelopment (hypoplasia) of the kidney and colobomas of the optic nerve.
Presentation:
Ocular Optic disc dysplasia is the most notable ocular defect of the disease. It results from abnormal development of the optic stalk, which is caused by a mutation in the Pax2 gene. The nerve head typically resembles the morning glory disc anomaly, but has also been described as a coloboma. A coloboma is the failure to close the choroid fissure, the opening on the ventral side of the retina in the optic stalk. Despite the similarities with coloboma and morning glory anomaly, significant differences exist such that optic disc dysplasia cannot be classified as either entity.
Presentation:
Optic disc dysplasia is noted by an ill-defined inferior excavation, a convoluted origin of the superior retinal vessels, an excessive number of vessels, infrapapillary pigmentary disturbance, and a slight band of retinal elevation adjacent to the disc. Some patients have normal or near-normal vision, but others have visual impairment associated with the disease, though it is not certain if this is due only to the dysplastic optic nerves, or to a possible contribution from macular and retinal malformations. The retinal vessels are abnormal or absent, in some cases with small vessels exiting the periphery of the disc. There is a great deal of clinical variability.
Presentation:
Kidney The most common malformation in patients with the syndrome is kidney hypodysplasia (small, underdeveloped kidneys), often leading to end-stage renal disease (ESRD). Estimates show approximately 10% of children with hypoplastic kidneys are linked to the disease. Many different histological abnormalities have been noted, including: a decrease in nephron number associated with hypertrophy; focal segmental glomerulosclerosis; interstitial fibrosis and tubular atrophy; and multicystic dysplastic kidney. Up to one-third of diagnosed patients develop end-stage kidney disease, which may lead to complete kidney failure.
Causes:
Pax2 mutations The majority of mutations occur in exons 2, 3, and 4, which encode the paired domain, and are frame-shift mutations that lead to a null allele. The missense mutations appear to disrupt hydrogen bonds, leading to decreased transactivation of Pax2, but do not seem to affect nuclear localization, steady-state mRNA levels, or the ability of Pax2 to bind to its DNA consensus sequence. Mutations related to the disease have also been noted in exons 7, 8, and 9, with milder phenotypes than the other mutations.
Causes:
Recent studies Pax2 is expressed in the kidney, midbrain, hindbrain, cells in the spinal column, the developing ear, and the developing eye. A homozygous negative Pax2 mutation is lethal, but heterozygous mutants show many symptoms of papillorenal syndrome, including optic nerve dysplasia with abnormal vessels emerging from the periphery of the optic cup, and small dysplastic kidneys. Pax2 has been shown to be under upstream control of Shh in both mice and zebrafish; Shh is expressed in the prechordal plate.
Causes:
Other genes Approximately half of patients with papillorenal syndrome do not have defects in Pax2. This suggests that other genes play a role in the development of the syndrome, though few downstream effectors of Pax2 have been identified.
Genetic:
Papillorenal syndrome is an autosomal dominant disorder that results from a mutation of one copy of the Pax2 gene, located on chromosome 10q24.3-q25.1. The gene is important in the development of both the eye and the kidney. Autosomal dominant inheritance indicates that the gene responsible for the disorder is located on an autosome (chromosome 10 is an autosome), and only one defective copy of the gene is sufficient to cause the disorder, when inherited from a parent who has the disorder.
Diagnosis:
Clinical findings in the kidney Hypoplastic kidneys: Characterized by hypoplasia or hyperechogenicity. This typically occurs bilaterally, but there are also exceptions in which one kidney may be notably smaller while the other kidney is normal sized.
Hypodysplasia (RHD): Characterized histologically by reduced number of nephrons, smaller kidney size, or disorganized tissue.
Multicystic Dysplastic Kidney: Characterized histologically, displaying cysts or dysplasia. Shows disorganization of kidneys, and occurs in about 10% of patients with papillorenal syndrome.
Oligomeganephronia: Fewer than normal glomeruli, with a notable size increase.
Diagnosis:
Chronic kidney disease and end-stage kidney disease (ESKD)
Vesicoureteral reflux
Clinical findings in the eye Dysplasia of the optic nerve (most common): The severity varies, but the most severe form results in an enlarged disc where vessels exit from the periphery instead of the center. Redundant fibroglial tissue is also seen in severe cases. Milder forms of dysplasia exhibit missing portions of the optic disc located in the optic nerve pit. The least severe ocular form of papillorenal disease is the exiting of blood vessels from the periphery without disturbing the shape of the eye. Other eye malformations include scleral staphyloma, which is a bulging of the eye wall. There can also be retinal thinning and myopia. Additionally, there can be an optic nerve cyst, a dilation of the optic nerve posterior to the globe, which most likely results from incomplete regression of the primordial optic stalk and the filling of this area with fluid. Retinal coloboma, the absence of retinal tissue in the nasal ventral portion of the retina, has also been described, though it is an extremely rare finding.
Diagnosis:
Molecular genetic testing Sequence analysis shows that Pax2 is the only known gene associated with the disease. Mutations in Pax2 have been identified in half of patients with renal coloboma syndrome.
Treatment:
Management of the disease should be focused on preventing end-stage kidney disease (ESKD) and/or vision loss. The treatment of hypertension may also preserve renal function. Renal replacement therapy is recommended, and vision experts may provide assistance to adapt to continued vision loss.
Treatment:
Kidney transplant is also an option. Treatment plans seem to be limited, as there is a large focus on the prevention of papillorenal syndrome and its implications. People with congenital optic nerve abnormalities should see ophthalmologists regularly and use protective lenses. If abnormalities are present, a follow-up with a nephrologist should be arranged to monitor renal function and blood pressure. Since the disease is believed to be caused by Pax2 mutations and is inherited in an autosomal dominant manner, family members may be at risk and relatives should be tested for possible features. About half of those diagnosed with the disease have an affected parent, so genetic counseling is recommended.
Treatment:
Prenatal testing is another possibility for prevention or awareness, and this can be done through molecular genetic testing or ultrasounds at later stages of pregnancy. Additionally, preimplantation genetic diagnosis (PGD) should be considered for families where papillorenal syndrome is known to be an issue. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Formal criteria for adjoint functors**
Formal criteria for adjoint functors:
In category theory, a branch of mathematics, the formal criteria for adjoint functors are criteria for the existence of a left or right adjoint of a given functor.
One criterion is the following, which first appeared in Peter J. Freyd's 1964 book Abelian Categories, an Introduction to the Theory of Functors: Another criterion is: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**BeaTunes**
BeaTunes:
beaTunes is a commercial software package for Microsoft Windows and Mac OS X, developed and distributed by tagtraum industries incorporated. It originally started as a tool for detecting the BPM of music managed by Apple's iTunes. Since version 3, beaTunes no longer depends on iTunes, and it supports harmonic mixing and beatmixing through BPM and key detection. Keys are displayed either in their musical notation or in Open Key Notation.
**Borehole image logs**
Borehole image logs:
Borehole imaging logs are logging and data-processing methods used to produce two-dimensional, centimeter-scale images of a borehole wall and the rocks that make it up. These tools are limited to the open-hole environment. The applications where images are useful cover the full range of the exploration and production cycle from exploration through appraisal, development, and production to abandonment and sealing.
Borehole image logs:
Specific applications are sedimentology, structural geology/tectonics, reservoir geomechanics and drilling, reservoir engineering.
The tools can be categorized in a number of ways: energy source (optical, as in simple optical borehole imaging (OBI) systems; electrical; acoustic; or nuclear, with gamma rays or neutrons); conveyance (wireline or logging while drilling); and type of drilling mud (water-based mud or oil-based mud).
**Alchemical symbol**
Alchemical symbol:
Alchemical symbols, originally devised as part of alchemy, were used to denote some elements and some compounds until the 18th century. Although notation was partly standardized, style and symbol varied between alchemists. Lüdy-Tenger published an inventory of 3,695 symbols and variants, and that was not exhaustive, omitting for example many of the symbols used by Isaac Newton. This page therefore lists only the most common symbols.
Three primes:
According to Paracelsus (1493–1541), the three primes or tria prima – of which material substances are immediately composed – are: Sulfur or soul, the principle of combustibility: 🜍 Mercury or spirit, the principle of fusibility and volatility: ☿ Salt or body, the principle of non-combustibility and non-volatility: 🜔
Four basic elements:
Western alchemy makes use of the four classical elements. The symbols used for these are: Air 🜁 Earth 🜃 Fire 🜂 Water 🜄
Seven planetary metals:
The seven metals known since Classical times in Europe were associated with the seven classical planets; this figured heavily in alchemical symbolism. The exact correlation varied over time, and in early centuries bronze or electrum were sometimes found instead of mercury, or copper for Mars instead of iron; however, gold, silver, and lead had always been associated with the Sun, Moon, and Saturn.
Seven planetary metals:
The associations below are attested from the 7th century and had stabilized by the 15th. They started breaking down with the discovery of antimony, bismuth, and zinc in the 16th century. Alchemists would typically call the metals by their planetary names, e.g. "Saturn" for lead, "Mars" for iron; compounds of tin, iron, and silver continued to be called "jovial", "martial", and "lunar"; or "of Jupiter", "of Mars", and "of the moon", through the 17th century. The tradition remains today with the name of the element mercury, where chemists decided the planetary name was preferable to common names like "quicksilver", and in a few archaic terms such as lunar caustic (silver nitrate) and saturnism (lead poisoning).
Seven planetary metals:
Lead, corresponding with Saturn ♄ Tin, corresponding with Jupiter ♃ Iron, corresponding with Mars ♂ Gold, corresponding with the Sun ☉ 🜚 ☼ Copper, corresponding with Venus ♀ Quicksilver, corresponding with Mercury ☿ Silver, corresponding with the Moon ☽ or ☾ [also 🜛 in Newton]
Mundane elements and later metals:
Antimony ♁ (in Newton) Arsenic 🜺 Bismuth ♆ (in Newton), 🜘 (in Bergman) Cobalt (approximately 🜶) (in Bergman) Manganese (in Bergman) Nickel (in Bergman; previously used for regulus of sulfur) Oxygen (in Lavoisier) Phlogiston (in Bergman) Phosphorus Platinum (in Bergman et al.) Sulfur 🜍 (in Newton) Zinc (in Bergman)
Alchemical compounds:
The following symbols, among others, have been adopted into Unicode.
Acid (incl. vinegar) 🜊 Sal ammoniac (ammonium chloride) 🜹 Aqua fortis (nitric acid) 🜅, A.F.
Aqua regia (nitro-hydrochloric acid) 🜆, 🜇, A.R.
Spirit of wine (concentrated ethanol; called aqua vitae or spiritus vini) 🜈, S.V. or 🜉 Amalgam (alloys of a metal and mercury) 🝛 = a͞a͞a, ȧȧȧ (among other abbreviations).
Cinnabar (mercury sulfide) 🜓 Vinegar (distilled) 🜋 (in Newton) Vitriol (sulfates) 🜖 Black sulphur (residue from sublimation of sulfur) 🜏
Alchemical processes:
The alchemical magnum opus was sometimes expressed as a series of chemical operations. In cases where these numbered twelve, each could be assigned one of the Zodiac signs as a form of cryptography. The following example can be found in Pernety's Dictionnaire mytho-hermétique (1758): Calcination (Aries ♈︎) Congelation (Taurus ♉︎) Fixation (Gemini ♊︎) Solution (Cancer ♋︎) Digestion (Leo ♌︎) Distillation (Virgo ♍︎) Sublimation (Libra ♎︎) Separation (Scorpio ♏︎) Ceration (Sagittarius ♐︎) Fermentation (Capricorn ♑︎) (Putrefaction) Multiplication (Aquarius ♒︎) Projection (Pisces ♓︎)
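As a sketch, the twelve-sign substitution described above amounts to a simple two-way lookup. The operation-to-sign pairing is taken from Pernety's list; the code itself is purely illustrative.

```python
# Pernety's zodiac "cipher": each of the twelve operations of the
# magnum opus is hidden behind a zodiac sign.
OPERATION_SIGNS = {
    "Calcination": "♈", "Congelation": "♉", "Fixation": "♊",
    "Solution": "♋", "Digestion": "♌", "Distillation": "♍",
    "Sublimation": "♎", "Separation": "♏", "Ceration": "♐",
    "Fermentation": "♑", "Multiplication": "♒", "Projection": "♓",
}

# Reverse mapping to "decrypt" a sign back to its operation.
SIGN_OPERATIONS = {sign: op for op, sign in OPERATION_SIGNS.items()}

print(OPERATION_SIGNS["Distillation"])  # the Virgo sign
print(SIGN_OPERATIONS["♈"])             # Calcination
```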
Units:
Several symbols indicate units of time.
Month 🝱 or xXx Day-Night 🝰 Hour 🝮
Unicode:
The Alchemical Symbols block was added to Unicode in 2010 as part of Unicode 6.0. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Boxer shorts**
Boxer shorts:
Boxer shorts (also known as loose boxers or as simply boxers) are a type of undergarment typically worn by men.
The term has been used in English since 1944 for all-around-elastic shorts, so named after the shorts worn by boxers, for whom unhindered leg movement ("footwork") is very important. Boxers come in a variety of styles and design but are characterized by their loose fit.
History:
In 1925, Jacob Golomb, founder of Everlast, designed elastic-waist trunks to replace the leather-belted trunks then worn by boxers. These trunks, now known as "boxer trunks", immediately became famous, but were later eclipsed by the popular Jockey-style briefs beginning in the late 1930s. The two styles, briefs and boxer shorts, had varying ratios of sales for the following fifty years, with strong regional and generational preferences. In 1985, in the U.S., men's briefs were more popular than boxer shorts, with four times as many briefs sold as boxers. Around that time, many of the men who preferred boxers were older men who had become accustomed to wearing them during their time in the U.S. military, and the best-selling color of boxers was white. Around that time boxers were also beginning to become popular among young men, who wore boxers with varying colors and prints. Boxer shorts got a fashion boost in 1985 when English model and musician Nick Kamen stripped to white Sunspel boxers in a 1950s-style "Launderette" in a Levi's commercial. Since the 1990s, some men have also opted for boxer briefs as a compromise between the two. As of 2006, one American manufacturer reported that woven boxer shorts made up 15-20 per cent of men's underwear sales, but had been declining in popularity compared to boxer briefs since 2003.
Design:
Most boxer shorts have a fly in front. Boxer shorts manufacturers have a couple of methods of closing the fly: metal snaps or a button or two. However, many boxer shorts on the market do not need a fastening mechanism to close up the fly as the fabric is cut and the boxers are designed to sufficiently overlap and fully cover the opening. This is commonly known as an open fly design.
Design:
Since boxer shorts’ fabric is rarely stretchy, a "balloon seat", a generous panel of loosely fitting fabric in the center rear of the shorts, is designed to accommodate the wearer's various movements, especially bending forward. The most common sewing design of boxer shorts are made with a panel seat that has two seams running on the outer edges of the back seating area, creating a center rear panel. Most mass-produced commercial boxer shorts are made using this design.
Design:
Two less common forms of boxer shorts are "gripper" boxers and "yoke front" boxers. Gripper boxers have an elastic waistband like regular boxers but have snaps, usually 3, on the fly and on the waistband so that they open up completely.
Design:
Yoke front boxers are similar to gripper boxers in that the wide waistband yoke can be opened up completely, and the yoke usually has three snaps to close it while the fly itself, below, has no closure mechanism. There are two types of yoke boxers: one in which there is a short piece of elastic on each side of the waistband which snugs up the yoke to fit the waist; and "tie-sides" which have narrow cloth tapes on each side of the waist yoke, like strings, which are tightened and knotted by the wearer to make an exact fit. This style of underpant was very common during World War II, when the rubber needed for elastic waistbands had to be used for military purposes.
Design:
Boxer shorts are available in white and solid colors including pastels, and come in a variety of patterns and prints as well. Traditional patterns include "geometrics" (small repeating geometric designs), plaids, and vertical stripes. Additionally, there are innumerable "novelty" boxer short patterns. Boxer shorts are produced using various fabrics including all cotton, cotton/polyester blends, jersey knits, satin, and silk.
Fertility:
Some studies have suggested that tight underpants (like briefs) and high temperatures are not conducive to optimal sperm production. The testicles are outside the body for cooling because they operate for sperm production at a slightly lower temperature than the rest of the body, and boxer shorts allow the testicles to stay within the required temperature range. The compression of the genitals in briefs, boxer briefs, or thongs may cause the temperature to rise and sperm production to fall. There is a similar theory regarding testicular cancer risk. Other sources dispute this theory. A study in the October 1998 Journal of Urology, for example, concluded that underwear type is unlikely to have a significant effect on male fertility.
Boxer shorts for women:
Boxer shorts for women have come onto the market in recent years. They are often worn as loungewear. They differ from boyshorts in that they are commonly longer and more closely resemble their male counterparts. There have been reports that women have been buying men's boxers for use as underwear.
In popular culture:
In 1975, a Sears catalog photo of boxer shorts created a recurring urban legend. A model appeared to have part of his penis exposed in the photo, which a Sears spokesperson stated was a printing defect. Despite widespread press interest at the time, Sears reported that only a few letters were received from the general public, and noted that when the image was reprinted in the Spring-Summer catalog, it showed no such flaw. No recall of the catalog occurred. The incident inspired the singer Zoot Fenster's 1975 single "The Man on Page 602". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Phenylmercuric nitrate**
Phenylmercuric nitrate:
Phenylmercuric nitrate is an organomercury compound with powerful antiseptic and antifungal effects. It was once commonly used as a topical solution for disinfecting wounds, but as with all organomercury compounds it is highly toxic, especially to the kidneys, and is no longer used in this application. However it is still used in low concentrations as a preservative in eye drops for ophthalmic use, making it one of the few organomercury derivatives remaining in current medical use. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**The simExchange**
The simExchange:
The simExchange is a web-based prediction market in which players use virtual money to buy and sell stocks and futures contracts in upcoming video game properties. The main purpose of the web site is to predict trends in the video game industry, particularly how upcoming products will sell and how they will be received by the critics. For those who do not participate in the prediction market, the web site is a database of sales forecasts and game quality forecasts that are updated in real-time. The web site also features a number of "Wisdom of the crowd"-type content collaboration and aggregation tools, including means for sharing information, articles, images, and videos about the games.
How it works:
The prediction market works as a stock market game for players to predict the number of units a game or console hardware will sell or the reviews a game will receive. These predictions are made by trading global lifetime sales stocks, NPD Futures, and Metacritic Futures. These tradable contracts directly forecast the metric. For example: if a stock is priced at 100DKP, this corresponds to a forecast of 1 million units. If a trader believes the product will sell more than 1 million units, he would buy the stock; if he believes the product will sell less than 1 million units, he would sell the stock. In this way, the simExchange is a type of stock market game.
How it works:
The website also features a number of user tools to aggregate and display information about video games and publishers. In addition to a message board for each game, the simExchange advertises the ability of users to submit relevant news articles, and to append images and videos of a game to its page. Users may also vote on the importance of such submitted content by spending 100DKP to place either an "up bid" or a "down bid" on the content, receiving a 100DKP payout every time another user votes in the same way. Down bids were removed from the system in August 2009 as a result of user feedback. In this way, the simExchange is also a social news website. In March 2010, a number of these content submission tools, such as the ability to submit images and video, were removed due to lack of use, although the features remain advertised in the site's tutorials.
How it works:
DKP DKP is the name of the virtual currency used on the simExchange. The currency was named after the dragon kill points used in many massively multiplayer online games. Unlike some forms of virtual currency, DKP is not backed by real money.
Types of Contracts:
The simExchange aims to predict both the quantitative and qualitative sides of the video game industry. It originally launched with only stocks that forecast the global lifetime sales of gaming products. On April 12, 2007, the simExchange began public testing of monthly sales futures contracts, which later became known as NPD Futures. Metacritic Futures were similarly introduced in September of the same year. As of September 2010, the simExchange has deactivated both futures contracts and again offers trading only in stocks.
Types of Contracts:
Stocks A stock on the simExchange, also referred to as a Global Lifetime Sales stock or simply a GLS stock, represents the total global unit sales a video game product will sell through to consumers over the product's lifetime. These stocks do not have a pre-determined expiration date, as some games can sell for years. For global lifetime stocks, 1DKP equals 10,000 copies of that game sold. For example, if a trader thinks a game will sell 234,000 copies, that would correspond to a stock price of 23.40DKP.
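The pricing convention described above is a fixed linear scale: 1 DKP of price corresponds to 10,000 forecast units. A minimal sketch of the conversion (the function names are illustrative, not part of the site):

```python
# GLS stock pricing convention: 1 DKP of price = 10,000 lifetime unit sales.
UNITS_PER_DKP = 10_000

def price_to_forecast(price_dkp: float) -> int:
    """Convert a GLS stock price in DKP to a lifetime-sales forecast."""
    return round(price_dkp * UNITS_PER_DKP)

def forecast_to_price(units: int) -> float:
    """Convert a lifetime-sales forecast to its GLS stock price in DKP."""
    return units / UNITS_PER_DKP

print(forecast_to_price(234_000))  # 23.4 (DKP), matching the example above
print(price_to_forecast(100.0))    # 1000000 (units)
```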
Types of Contracts:
NPD Futures An NPD Futures Contract was a virtual contract that predicted the sales data the NPD Group released on a monthly basis. These futures covered the NPD Group's data on console hardware sales, game sales, and total software sales. These contracts would cease trading one day before NPD reports were made public, and pay out at the end of the following day.
Types of Contracts:
The price of NPD Futures Contracts was similar to those of stocks, with a 1DKP price corresponding to 10,000 units sold. For example, if the Xbox 360 November Futures Contract were priced at 60.00 DKP, this would correspond to a forecast that 600,000 units would be sold in the retail month of November. The simExchange stopped offering NPD Futures Contracts after September 2010, although the archives of all forecasts up until this point are still available on the simExchange website. The discontinuation of this service corresponded with the NPD Group's decision to cease publicly releasing hardware and software unit sales numbers.
Types of Contracts:
Metacritic Futures A Metacritic Futures Contract is a virtual contract that predicts the score a game will receive from review aggregation website Metacritic. Despite not being directly based on sales data, Metacritic scores have been shown to be highly indicative of a game's sales. These contracts cease trading and pay out 14 days after the game's release.
A Metacritic Future's price is directly related to the Metacritic score. For example, a 90.00 DKP price would correspond to a forecast of a 90 score on Metacritic. The simExchange stopped offering Metacritic Futures after July 2009 due to lack of user interest in trading the contracts.
Applications:
Data produced by the simExchange has been noted as a resource for real-world investors. The metrics that are forecast are fundamental factors that drive the stocks of the companies that produce the games. The simExchange's prediction market data has been used by Wall Street analysts, such as Michael Pachter, in their published notes and financial news outlets, such as Reuters, MarketWatch, and TheStreet.com. Given the current status of the website, it is unlikely that the simExchange's data is still influential or relevant.
Current Status:
Users of the simExchange noticed a "sharp decline" in the number of users in 2010, after NPD Futures were no longer offered as part of the game, and site upkeep slowed to a crawl by mid-2011. Although the game still has players, users have described it as everything from "dead" to "a ghost town." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vertisol**
Vertisol:
A vertisol is a Soil Order in the USDA soil taxonomy and a Reference Soil Group in the World Reference Base for Soil Resources (WRB). It is also defined in many other soil classification systems. In the Australian Soil Classification it is called vertosol. Vertisols have a high content of expansive clay minerals, many of them belonging to the montmorillonites that form deep cracks in drier seasons or years. In a phenomenon known as argillipedoturbation, alternate shrinking and swelling causes self-ploughing, where the soil material consistently mixes itself, causing some vertisols to have an extremely deep A horizon and no B horizon. (A soil with no B horizon is called an A/C soil). This heaving of the underlying material to the surface often creates a microrelief known as gilgai.
Vertisol:
Vertisols typically form from highly basic rocks, such as basalt, in climates that are seasonally humid or subject to erratic droughts and floods, or under conditions that impede drainage. Depending on the parent material and the climate, they can range from grey or red to the more familiar deep black (known as "black earths" in Australia, "black gumbo" in East Texas, "black cotton" soils in East Africa, and "vlei soils" in South Africa).
Vertisol:
Vertisols are found between 50°N and 45°S of the equator. Major areas where vertisols are dominant are eastern Australia (especially inland Queensland and New South Wales), the Deccan Plateau of India, and parts of southern Sudan, Ethiopia, Kenya, Chad (the Gezira), South Africa, and the lower Paraná River in South America. Other areas where vertisols are dominant include southern Texas and adjacent Mexico, central India, northeast Nigeria, Thrace, New Caledonia and parts of eastern China.
Vertisol:
The natural vegetation of vertisols is grassland, savanna, or grassy woodland. The heavy texture and unstable behaviour of the soil makes it difficult for many tree species to grow, and forest is uncommon.
Vertisol:
The shrinking and swelling of vertisols can damage buildings and roads, leading to extensive subsidence. Vertisols are generally used for grazing of cattle or sheep. It is not unknown for livestock to be injured through falling into cracks in dry periods. Conversely, many wild and domestic ungulates do not like to move on this soil when inundated. However, the shrink-swell activity allows rapid recovery from compaction.
Vertisol:
When irrigation is available, crops such as cotton, wheat, sorghum and rice can be grown. Vertisols are especially suitable for rice because they are almost impermeable when saturated. Rainfed farming is very difficult because vertisols can be worked only under a very narrow range of moisture conditions: they are very hard when dry and very sticky when wet. However, in Australia, vertisols are highly regarded, because they are among the few soils that are not acutely deficient in available phosphorus. Some, known as "crusty vertisols", have a thin, hard crust when dry that can persist for two to three years before they have crumbled enough to permit seeding.
Vertisol:
In the USDA soil taxonomy, vertisols are subdivided into: Aquerts: Vertisols that are subject to aquic conditions for some time in most years and show redoximorphic features are grouped as Aquerts. Because of the high clay content, the permeability is slowed and aquic conditions are likely to occur. In general, when precipitation exceeds evapotranspiration, ponding may occur. Under wet soil moisture conditions, iron and manganese are mobilized and reduced. The manganese may be partly responsible for the dark color of the soil profile.
Cryerts: They have a cryic soil temperature regime. Cryerts are most extensive in the grassland and forest-grassland transitions zones of the Canadian Prairies and at similar latitudes in Russia.
Xererts: They have a thermic, mesic, or frigid soil temperature regime. They show cracks that are open at least 60 consecutive days during the summer, but are closed at least 60 consecutive days during winter. Xererts are most extensive in the eastern Mediterranean and parts of California.
Torrerts: They have cracks that are closed for less than 60 consecutive days when the soil temperature at 50 cm is above 8 °C. These soils are not extensive in the U.S., and occur mostly in west Texas, New Mexico, Arizona, and South Dakota, but are the most extensive suborder of vertisols in Australia.
Usterts: They have cracks that are open for at least 90 cumulative days per year. Globally, this suborder is the most extensive of the vertisols order, encompassing the vertisols of the tropics and monsoonal climates in Australia, India, and Africa. In the U.S. the Usterts are common in Texas, Montana, Hawaii, and California.
Uderts: They have cracks that are open less than 90 cumulative days per year and less than 60 consecutive days during the summer. In some areas, cracks open only in drought years. Uderts are of small extent globally, being most abundant in Uruguay and eastern Argentina, but also found in parts of Queensland and the "Black Belt" of Mississippi and Alabama. The WRB defines the diagnostic vertic horizon. It is usually a subsoil horizon and has at least 30% clay, shrink-swell cracks and wedge-shaped aggregates and/or slickensides.
**Edison Bugg's Invention**
Edison Bugg's Invention:
Edison Bugg's Invention is a 1916 American silent comedy film featuring Oliver Hardy.
Cast:
Raymond McKee - Edison Bugg
Oliver Hardy - The Fire Chief (as Babe Hardy)
Jerold T. Hevener
Plot:
The firemen are so engrossed in their card playing that they ignore the fire alarm. Edison Bugg's invention yanks their chairs out from under them when the alarm sounds. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**2010 Johnson & Johnson children's product recall**
2010 Johnson & Johnson children's product recall:
The 2010 Johnson & Johnson children's product recall involved 43 over-the-counter children's medicines announced by McNeil Consumer Healthcare, a subsidiary of Johnson & Johnson, on April 30, 2010. Medications in the recall included liquid versions of Tylenol, Tylenol Plus, Motrin, Zyrtec, and Benadryl. The products were recalled after it was determined that they "may not fully meet the required manufacturing specifications". The recall affected at least 12 countries.
Discovery of manufacturing problems:
During a routine inspection on April 19, federal investigators found several "manufacturing deficiencies" at a McNeil manufacturing facility in Fort Washington, Pennsylvania, United States. According to the Food and Drug Administration (FDA), the plant's manufacturing process was "not in control," meaning it was using flawed procedures that could potentially lead to manufacturing errors. As a result, some products "may contain a higher concentration of active ingredient than is specified; others contain inactive ingredients that may not meet internal testing requirements; and others may contain tiny [foreign] particles." Foreign particles could potentially include solidified ingredients or "manufacturing residue such as tiny metal specks" or mold.
It was not clear when the problems began, but FDA official Douglas Stearn said it "does go back in time". The official FDA report, released May 4, said investigators found thick dust, grime, and contaminated ingredients at the manufacturing plant. Burkholderia cepacia bacteria were found on some equipment which, according to Johnson & Johnson, was never actually put into use.
Recall:
According to the FDA, the agency alerted Johnson & Johnson of the problem via letter on Friday, April 30. That evening, McNeil announced a voluntary recall of the affected products. According to Johnson & Johnson spokesperson Bonnie Jacob, the company had conducted an independent internal assessment and already alerted the FDA of recall plans before the letter arrived. Canada, Dominican Republic, Fiji, Guam, Guatemala, Jamaica, Kuwait, Puerto Rico, Panama, Trinidad and Tobago, the United Arab Emirates, and the United States were affected by the recall. It included all non-expired packages produced in the United States – more than 100,000 bottles of medicine in total. "A vast portion of the [American] children's medicine market" was affected by the recall. In Canada, only Children's Motrin and Children's Tylenol Cough & Runny Nose were affected by the recall. According to the FDA, consumers should stop using the recalled products even though the chance of related health problems was "remote." A McNeil spokesperson stated that the recall was not made on "the basis of adverse medical events". As of May 2, no injuries or deaths had been reported. All production at the deficient plant was voluntarily halted, but McNeil declined to state when the plant first ceased operations. In a statement, Johnson & Johnson said "a comprehensive quality assessment across its manufacturing operations" was underway. According to a spokesperson, fixes had already been identified by May 2, and would be put in place before operation resumed. A dedicated website and telephone hotline were established by the company to handle customer inquiries. The phone line was initially overwhelmed by a high call volume.
Aftermath:
On May 6, the House Committee on Oversight and Government Reform launched an investigation into Johnson & Johnson. "Taken together, these recalls point to a major problem in the production of McNeil products," said a statement from the panel's leadership. The company had issued four recalls in the preceding year, and has recalled a variety of products since.
List of affected products:
Children's Benadryl Allergy liquids (one variety)
Motrin Infants' drops (three varieties)
Children's Motrin suspensions (eleven varieties)
Tylenol Infants' drops (seven varieties)
Children's Tylenol suspensions (eight varieties)
Children's Tylenol Plus suspensions (nine varieties)
Children's Zyrtec liquids (five varieties)
**Side Pawn Capture, Pawn*23**
Side Pawn Capture, Pawn*23:
Side Pawn Capture Pawn*23 (横歩取り☖2三歩 yokofudori ni-san fu) or Side Pawn Capture B*25 (横歩取り☖2五角 yokofudori ni-go kaku) is a variation stemming from the Side Pawn Capture opening, in which White drops a pawn on the second file before trading off pawns on the eighth file, leading Black to capture White's side pawn. After this, White initiates a rapid attack against Black's rook starting with a bishop drop on the second file.
This is an older variation of the Side Pawn opening that has become disfavored since White's rapid attack is considered ineffective.
Development:
6...P*23. White drops their pawn in hand on the 23 square attacking Black's rook – a striking pawn tactic.
7. Rx34. Fleeing from attack, Black can now take White's side pawn.
Rapid Attack:
White's primary response to Black's capturing of the side pawn has been to immediately trade bishops and attack Black's vulnerable rook.
B*25 B*45 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Biophysical environment**
Biophysical environment:
A biophysical environment is a biotic and abiotic surrounding of an organism or population, and consequently includes the factors that have an influence in their survival, development, and evolution. A biophysical environment can vary in scale from microscopic to global in extent. It can also be subdivided according to its attributes. Examples include the marine environment, the atmospheric environment and the terrestrial environment. The number of biophysical environments is countless, given that each living organism has its own environment.
The term environment can refer to a singular global environment in relation to humanity, or a local biophysical environment, e.g. the UK's Environment Agency.
Life-environment interaction:
All life that has survived must have adapted to the conditions of its environment. Temperature, light, humidity, soil nutrients, etc., all influence the species within an environment. However, life in turn modifies its conditions in various ways. Some long-term modifications over the history of the planet have been significant, such as the incorporation of oxygen into the atmosphere: photosynthetic microorganisms broke down carbon dioxide, used the carbon in their metabolism, and released the oxygen to the atmosphere. This led to the existence of oxygen-based plant and animal life, the Great Oxygenation Event.
Related studies:
Environmental science is the study of the interactions within the biophysical environment. Part of this scientific discipline is the investigation of the effect of human activity on the environment.
Ecology, a sub-discipline of biology and a part of the environmental sciences, is often mistaken for a study of human-induced effects on the environment.
Environmental studies is a broader academic discipline that is the systematic study of the interaction of humans with their environment. It is a broad field of study that includes:
The natural environment
Built environments
Social environments
Environmentalism is a broad social and philosophical movement that, in a large part, seeks to minimize and compensate for the negative effect of human activity on the biophysical environment. The issues of concern for environmentalists usually relate to the natural environment, with the more important ones being climate change, species extinction, pollution, and old growth forest loss.
One related line of study employs Geographic Information Science to study the biophysical environment. Biophysics is a multidisciplinary study utilizing systems from physics to study biological phenomena. Its scope ranges from a molecular level up to populations separated by geographical boundaries.
**KIF9**
KIF9:
Kinesin family member 9 (KIF9), also known as kinesin-9, is a human protein encoded by the KIF9 gene. It is part of the kinesin family of motor proteins.
Function:
The beating of the flagella in sperm is regulated by KIF9 activity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Are You a Werewolf?**
Are You a Werewolf?:
Are You a Werewolf? is a party game from Looney Labs. Its gameplay is similar to - and derived from - the game Mafia.
The game does not require special equipment; a deck of playing cards can also be used, though Looney Labs does have a specialized deck for sale.
Gameplay:
The game starts with the moderator handing each player a card. Players play out the role given to them by the card:
Villager: All but three players will be villagers. Villagers have no special role in the game.
Werewolf: Two players will be werewolves.
Seer: One player will be the seer.
Until a player is eliminated, players are not to reveal their identity as printed on the card. Players may lie about what's on their card. When a player is killed, their card is revealed.
The game starts at night. All players close their eyes (often also bending their heads forward) and make noise, usually clapping hands or slapping thighs, to cover any unintentional noises that could disclose information. The moderator summons the werewolves awake and asks them to select a villager to kill. They then close their eyes, and the seer is summoned to select a player and learn whether that player is a werewolf; the moderator gestures with a thumbs-up for yes and thumbs-down for no.
After the first night has ended, the moderator describes a calamity as having befallen their town. Many narrators choose to ham it up, especially during pick-up games at game conventions.
Night may be accompanied by players tapping gently to mask sounds made by gesturing. After the night, the day portion begins with the unfortunate demise of a villager, and debate about who in the group is a werewolf. The day ends when a simple majority of players select another player to be eliminated. Play then repeats unless a victory condition is met.
According to some rules, the roles of dead players should not be revealed; according to others, only certain roles (such as the doctor, in variants that include one) should be kept secret upon death.
Victory conditions:
A game ends in one of two ways: Werewolves win if there is a one-to-one ratio of werewolves to villagers.
Villagers win if they manage to kill both werewolves.
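The two victory conditions above can be sketched as a small check function (a hypothetical helper, not part of the published rules):

```python
def check_victory(num_werewolves, num_villagers):
    """Return the winning side, if any, given counts of living players.

    Werewolves win at a one-to-one ratio (or better, for them) of
    werewolves to villagers; villagers win once both werewolves are dead.
    """
    if num_werewolves == 0:
        return "villagers"
    if num_werewolves >= num_villagers:
        return "werewolves"
    return None  # game continues
```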
Psychology of the game:
The game displays many of the problems with mob mentality. With rare exceptions, villagers will end up killing at least one - and, more likely, several - villagers rather than the werewolves. Recriminations and accusations without cause are bandied about, and on occasion players are killed for the most trivial of reasons. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CUMYL-PICA**
CUMYL-PICA:
CUMYL-PICA (SGT-56) is an indole-3-carboxamide based synthetic cannabinoid. It is the α,α-dimethylbenzyl analogue of SDB-006. It was briefly sold in New Zealand during 2013 as an ingredient of then-legal synthetic cannabis products, but the product containing CUMYL-BICA and CUMYL-PICA was denied interim licensing approval under the Psychoactive Substances regulatory scheme, due to reports of adverse events in consumers. CUMYL-PICA acts as an agonist for the cannabinoid receptors, with Ki values of 59.21 nM at CB1 and 136.38 nM at CB2 and EC50 values of 11.98 nM at CB1 and 16.2 nM at CB2.
**Quartz arenite**
Quartz arenite:
A quartz arenite or quartzarenite is a sandstone composed of greater than 90% detrital quartz. Quartz arenites are the most mature sedimentary rocks possible, and are often referred to as ultra- or super-mature, and are usually cemented by silica. They often exhibit both textural and compositional maturity. The two primary sedimentary depositional environments that produce quartz arenites are beaches/upper shoreface and aeolian processes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Adipoyl chloride**
Adipoyl chloride:
Adipoyl chloride (or adipoyl dichloride) is the organic compound with the formula (CH2CH2C(O)Cl)2. It is a colorless liquid. It reacts with water to give adipic acid.
It is prepared by treatment of adipic acid with thionyl chloride.
Adipoyl chloride reacts with hexamethylenediamine to form nylon 6,6. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GReAT**
GReAT:
Graph Rewriting and Transformation (GReAT) is a Model Transformation Language (MTL) for Model Integrated Computing available in the GME environment. GReAT has a rich pattern specification sublanguage, a graph transformation sublanguage and a high level control-flow sublanguage. It has been designed to address the specific needs of the model transformation area. The GME environment is an example of a Model Driven Engineering (MDE) framework. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Twelfth root of two**
Twelfth root of two:
The twelfth root of two, 12√2 (or equivalently 2^(1/12)), is an algebraic irrational number, approximately equal to 1.0594631. It is most important in Western music theory, where it represents the frequency ratio (musical interval) of a semitone in twelve-tone equal temperament. This number was proposed for the first time in relationship to musical tuning in the sixteenth and seventeenth centuries. It allows measurement and comparison of different intervals (frequency ratios) as consisting of different numbers of a single interval, the equal tempered semitone (for example, a minor third is 3 semitones, a major third is 4 semitones, and a perfect fifth is 7 semitones). A semitone itself is divided into 100 cents (1 cent = 2^(1/1200) = 1200√2).
Numerical value:
The twelfth root of two to 20 significant figures is 1.0594630943592952646. Fraction approximations in increasing order of accuracy include 18/17, 89/84, 196/185, 1657/1564, and 18904/17843.
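The claim that these fractions approximate the semitone with increasing accuracy can be checked numerically; a quick sketch:

```python
SEMITONE = 2 ** (1 / 12)  # ~1.0594630943592953

approximations = [(18, 17), (89, 84), (196, 185), (1657, 1564), (18904, 17843)]
errors = [abs(n / d - SEMITONE) for n, d in approximations]

# Each successive fraction approximates the semitone strictly better.
assert all(prev > curr for prev, curr in zip(errors, errors[1:]))
```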
As of December 2013, its numerical value has been computed to at least twenty billion decimal digits.
The equal-tempered chromatic scale:
A musical interval is a ratio of frequencies, and the equal-tempered chromatic scale divides the octave (which has a ratio of 2:1) into twelve equal parts. Each note has a frequency that is 2^(1/12) times that of the one below it. Applying this value successively to the tones of a chromatic scale, starting from A above middle C (known as A4) with a frequency of 440 Hz, produces a sequence of pitches whose final A (A5: 880 Hz) is exactly twice the frequency of the lower A (A4: 440 Hz), that is, one octave higher.
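The construction described above, starting from A4 = 440 Hz, can be reproduced in a few lines:

```python
SEMITONE = 2 ** (1 / 12)  # frequency ratio of one equal-tempered semitone

def chromatic_scale(start_hz=440.0, steps=12):
    """Frequencies of an equal-tempered chromatic scale, both ends inclusive."""
    return [start_hz * SEMITONE ** n for n in range(steps + 1)]

scale = chromatic_scale()  # A4 up to A5
# Twelve equal semitones double the frequency: 440 Hz -> 880 Hz.
assert abs(scale[-1] - 880.0) < 1e-6
```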
Other tuning scales:
Other tuning scales use slightly different interval ratios: The just or Pythagorean perfect fifth is 3/2, and the difference between the equal tempered perfect fifth and the just fifth is a grad, the twelfth root of the Pythagorean comma (12√(531441/524288)).
The equal tempered Bohlen–Pierce scale uses the interval of the thirteenth root of three (13√3).
Stockhausen's Studie II (1954) makes use of the twenty-fifth root of five (25√5), a compound major third divided into 5×5 parts.
The delta scale is based on ≈50√3/2.
The gamma scale is based on ≈20√3/2.
The beta scale is based on ≈11√3/2.
The alpha scale is based on ≈9√3/2.
Pitch adjustment:
Since the frequency ratio of a semitone is close to 106% (1.05946 × 100% ≈ 105.946%), increasing or decreasing the playback speed of a recording by 6% will shift the pitch up or down by about one semitone, or "half-step". Upscale reel-to-reel magnetic tape recorders typically have pitch adjustments of up to ±6%, generally used to match the playback or recording pitch to other music sources having slightly different tunings (or possibly recorded on equipment that was not running at quite the right speed). Modern recording studios utilize digital pitch shifting to achieve similar results, ranging from cents up to several half-steps (note that reel-to-reel adjustments also affect the tempo of the recorded sound, while digital shifting does not).
History:
Historically this number was proposed for the first time in relationship to musical tuning in 1580 (drafted, rewritten 1610) by Simon Stevin. In 1581, Italian musician Vincenzo Galilei may have been the first European to suggest twelve-tone equal temperament. The twelfth root of two was first calculated in 1584 by the Chinese mathematician and musician Zhu Zaiyu, using an abacus to reach twenty-four decimal places accurately; it was calculated circa 1605 by Flemish mathematician Simon Stevin, in 1636 by the French mathematician Marin Mersenne, and in 1691 by the German musician Andreas Werckmeister.
**Incremental encoder**
Incremental encoder:
An incremental encoder is a linear or rotary electromechanical device that has two output signals, A and B, which issue pulses when the device is moved. Together, the A and B signals indicate both the occurrence of and direction of movement. Many incremental encoders have an additional output signal, typically designated index or Z, which indicates the encoder is located at a particular reference position. Also, some encoders provide a status output (typically designated alarm) that indicates internal fault conditions such as a bearing failure or sensor malfunction.
Unlike an absolute encoder, an incremental encoder does not indicate absolute position; it only reports changes in position and, for each reported position change, the direction of movement. Consequently, to determine absolute position at any particular moment, it is necessary to send the encoder signals to an incremental encoder interface, which in turn will "track" and report the encoder's absolute position.
Incremental encoders report position changes nearly instantaneously, which allows them to monitor the movements of high speed mechanisms in near real-time. Because of this, incremental encoders are commonly used in applications that require precise measurement and control of position and velocity.
Quadrature outputs:
An incremental encoder employs quadrature encoding to generate its A and B output signals. The pulses emitted from the A and B outputs are quadrature-encoded, meaning that when the incremental encoder is moving at a constant velocity, the A and B waveforms are square waves and there is a 90 degree phase difference between A and B. At any particular time, the phase difference between the A and B signals will be positive or negative depending on the encoder's direction of movement. In the case of a rotary encoder, the phase difference is +90° for clockwise rotation and −90° for counter-clockwise rotation, or vice versa, depending on the device design.
The frequency of the pulses on the A or B output is directly proportional to the encoder's velocity (rate of position change); higher frequencies indicate rapid movement, whereas lower frequencies indicate slower speeds. Static, unchanging signals are output on A and B when the encoder is motionless. In the case of a rotary encoder, the frequency indicates the speed of the encoder's shaft rotation, and in linear encoders the frequency indicates the speed of linear traversal.
Quadrature encoder outputs can be produced either by a quadrature-offset pattern read by aligned sensors, or by a simple pattern read by offset sensors, as shown in conceptual drawings of the two sensing mechanisms.
Resolution:
The resolution of an incremental encoder is a measure of the precision of the position information it produces. Encoder resolution is typically specified in terms of the number of A (or B) pulses per unit displacement or, equivalently, the number of A (or B) square wave cycles per unit displacement. In the case of rotary encoders, resolution is specified as the number of pulses per revolution (PPR) or cycles per revolution (CPR), whereas linear encoder resolution is typically specified as the number of pulses issued for a particular linear traversal distance (e.g., 1000 pulses per mm).
This is in contrast to the measurement resolution of the encoder, which is the smallest position change that the encoder can detect. Every signal edge on A or B indicates a detected position change. Since each square-wave cycle on A (or B) encompasses four signal edges (rising A, rising B, falling A and falling B), the encoder's measurement resolution equals one-fourth of the displacement represented by a full A or B output cycle. For example, a 1000 pulse-per-mm linear encoder has a per-cycle displacement of 1 mm / 1000 cycles = 1 μm, so its measurement resolution is 1 μm / 4 = 250 nm.
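The worked example (a 1000 pulse-per-mm linear encoder with four edges per cycle) can be written out as follows; the function name is illustrative:

```python
def measurement_resolution(cycles_per_unit):
    """Smallest detectable displacement of a quadrature encoder.

    Each A/B square-wave cycle contains four signal edges, so the
    measurement resolution is one-fourth of the per-cycle displacement.
    """
    displacement_per_cycle = 1.0 / cycles_per_unit
    return displacement_per_cycle / 4.0

# 1000 cycles/mm -> 1 um per cycle -> 250 nm measurement resolution.
assert abs(measurement_resolution(1000) - 0.00025) < 1e-15
```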
Symmetry and phase:
When moving at constant velocity, an ideal incremental encoder would output perfect square waves on A and B (i.e., the pulses would be exactly 180° wide) with a phase difference of exactly 90° between A and B. In real encoders, however, due to sensor imperfections and speed variations, the pulse widths are never exactly 180° and the phase difference is never exactly 90°. Furthermore, the A and B pulse widths vary from one cycle to another (and from each other) and the phase difference varies at every A and B signal edge. Consequently, both the pulse width and phase difference will vary over a range of values.
For any particular encoder, the pulse width and phase difference ranges are defined by "symmetry" and "phase" (or "phasing") specifications, respectively. For example, in the case of an encoder with symmetry specified as 180° ±25°, the width of every output pulse is guaranteed to be at least 155° and no more than 205°. Similarly, with phase specified as 90° ±20°, the phase difference at every A or B edge will be at least 70° and no more than 110°.
Signal types:
Incremental encoders employ various types of electronic circuits to drive (transmit) their output signals, and manufacturers often have the ability to build a particular encoder model with any of several driver types. Commonly available driver types include open collector, mechanical, push-pull and differential RS-422.
Open collector:
Open collector drivers operate over a wide range of signal voltages and often can sink significant output current, making them useful for directly driving current loops, opto-isolators and fiber optic transmitters.
Because it cannot source current, the output of an open-collector driver must be connected to a positive DC voltage through a pull-up resistor. Some encoders provide an internal resistor for this purpose; others do not and thus require an external pull-up resistor. In the latter case, the resistor typically is located near the encoder interface to improve noise immunity.
The encoder's high-level logic signal voltage is determined by the voltage applied to the pull-up resistor (VOH in the schematic), whereas the low-level output current is determined by both the signal voltage and load resistance (including pull-up resistor). When the driver switches from the low to the high logic level, the load resistance and circuit capacitance act together to form a low-pass filter, which stretches (increases) the signal's rise time and thus limits its maximum frequency. For this reason, open collector drivers typically are not used when the encoder will output high frequencies.
Mechanical:
Mechanical (or contact) incremental encoders use sliding electrical contacts to directly generate the A and B output signals. Typically, the contacts are electrically connected to signal ground when closed so that the outputs will be "driven" low, effectively making them mechanical equivalents of open collector drivers and therefore subject to the same signal conditioning requirements (i.e. external pull-up resistor).
The maximum output frequency is limited by the same factors that affect open-collector outputs, and further limited by contact bounce – which must be filtered by the encoder interface – and by the operating speed of the mechanical contacts, thus making these devices impractical for high frequency operation. Furthermore, the contacts experience mechanical wear under normal operation, which limits the life of these devices. On the other hand, mechanical encoders are relatively inexpensive because they have no internal, active electronics. Taken together, these attributes make mechanical encoders a good fit for low duty, low frequency applications.
PCB- and panel-mounted mechanical incremental encoders are widely used as hand-operated controls in electronic equipment. Such devices are used as volume controls in audio equipment, as voltage controls in bench power supplies, and for a variety of other functions.
Push-pull:
Push-pull outputs (e.g., TTL) typically are used for direct interface to logic circuitry. These are well-suited to applications in which the encoder and interface are located near each other (e.g., interconnected via printed circuit conductors or short, shielded cable runs) and powered from a common power supply, thus avoiding exposure to electric fields, ground loops and transmission line effects that might corrupt the signals and thereby disrupt position tracking, or worse, damage the encoder interface.
Differential pair:
Differential RS-422 signaling is typically preferred when the encoder will output high frequencies or be located far away from the encoder interface, or when the encoder signals may be subjected to electric fields or common-mode voltages, or when the interface must be able to detect connectivity problems between encoder and interface. Examples of this include CMMs and CNC machinery, industrial robotics, factory automation, and motion platforms used in aircraft and spacecraft simulators.
When RS-422 outputs are employed, the encoder provides a differential conductor pair for every logic output; for example, "A" and "/A" are commonly-used designations for the active-high and active-low differential pair comprising the encoder's A logic output. Consequently, the encoder interface must provide RS-422 line receivers to convert the incoming RS-422 pairs to single-ended logic.
Principal applications:
Position tracking:
Incremental encoders are commonly used to monitor the physical positions of mechanical devices. The incremental encoder is mechanically attached to the device to be monitored so that its output signals will change as the device moves. Example devices include the balls in mechanical computer mice and trackballs, control knobs in electronic equipment, and rotating shafts in radar antennas.
An incremental encoder does not keep track of, nor do its outputs indicate the current encoder position; it only reports incremental changes in position. Consequently, to determine the encoder's position at any particular moment, it is necessary to provide external electronics which will "track" the position. This external circuitry, which is known as an incremental encoder interface, tracks position by counting incremental position changes.
As it receives each report of incremental position change (indicated by a transition of the A or B signal), an encoder interface will take into account the phase relationship between A and B and, depending on the sign of the phase difference, count up or down. The cumulative "counts" value indicates the distance traveled since tracking began. This mechanism ensures accurate position tracking in bidirectional applications and, in unidirectional applications, prevents false counts that would otherwise result from vibration or mechanical dithering near an AB code transition.
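The up/down counting rule can be sketched with a conventional x4 state-transition table (a generic illustration, not any particular interface design; which direction counts up depends on wiring convention):

```python
# Quadrature x4 decoder: each (previous AB, current AB) transition
# contributes +1 or -1 to the count; same-state or invalid double-step
# transitions contribute 0.
TRANSITION = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def count_edges(samples):
    """Accumulate position counts from successive 2-bit (A, B) samples."""
    position = 0
    prev = samples[0]
    for cur in samples[1:]:
        position += TRANSITION.get((prev, cur), 0)
        prev = cur
    return position

# One full forward cycle yields four counts; dithering at an edge nets zero.
assert count_edges([0b00, 0b01, 0b11, 0b10, 0b00]) == 4
assert count_edges([0b00, 0b01, 0b00, 0b01, 0b00]) == 0
```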
Displacement units:
Often the encoder counts must be expressed in units such as meters, miles or revolutions. In such cases, the counts are converted to the desired units by multiplying by the ratio of encoder displacement D per count C:

position = counts × (D / C)

Typically this calculation is performed by a computer which reads the counts from the incremental encoder interface. For example, in the case of a linear incremental encoder that produces 8000 counts per millimeter of travel, the position in millimeters is calculated as follows:

position = counts × (1 mm / 8000 counts)

Homing:
In order for an incremental encoder interface to track and report absolute position, the encoder counts must be correlated to a reference position in the mechanical system to which the encoder is attached. This is commonly done by homing the system, which consists of moving the mechanical system (and encoder) until it aligns with a reference position, and then jamming the associated absolute position counts into the encoder interface's counter.
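The counts-to-units conversion and the homing "jam" described above can be modeled in a few lines (class and method names are hypothetical):

```python
class EncoderInterface:
    """Minimal model of an incremental encoder interface's position counter."""

    def __init__(self, displacement_per_count):
        self.displacement_per_count = displacement_per_count  # e.g. mm per count
        self.counts = 0

    def jam(self, reference_counts):
        """Load an absolute reference value into the counter (homing)."""
        self.counts = reference_counts

    def apply_increment(self, delta):
        """Count up or down by one for each reported position change."""
        self.counts += delta

    def position(self):
        """Absolute position in displacement units: counts x (D / C)."""
        return self.counts * self.displacement_per_count

# A linear encoder producing 8000 counts per millimeter, as in the text.
iface = EncoderInterface(displacement_per_count=1.0 / 8000)
iface.jam(0)                      # homed at the reference position
for _ in range(16000):            # 16000 forward counts of travel
    iface.apply_increment(+1)
assert abs(iface.position() - 2.0) < 1e-9   # 2 mm
```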
A proximity sensor is built into some mechanical systems to facilitate homing, which outputs a signal when the mechanical system is in its "home" (reference) position. In such cases, the mechanical system is homed by moving it until the encoder interface receives the sensor signal, whereupon the corresponding position value is jammed into the position counter.
In some rotating mechanical systems (e.g. rotating radar antennas), the "position" of interest is the rotational angle relative to a reference orientation. These typically employ a rotary incremental encoder that has an index (or Z) output signal. The index signal is asserted when the shaft is in its reference orientation, which causes the encoder interface to jam the reference angle into its position counter.
Some incremental encoder applications lack reference position detectors and therefore must implement homing by other means. For example a computer, when using a mouse or trackball pointing device, typically will home the device by assuming a central, initial screen position upon booting, and jam the corresponding counts into the X and Y position counters. In the case of panel encoders used as hand-operated controls (e.g., audio volume control), the initial position typically is retrieved from flash or other non-volatile memory upon power-up and jammed into the position counter, and upon power-down the current position count is saved to non-volatile memory to serve as the initial position for the next power-up.
Speed measurement Incremental encoders are commonly used to measure the speed of mechanical systems. This may be done for monitoring purposes or to provide feedback for motion control, or both. Widespread applications of this include speed control of radar antenna rotation and material conveyors, and motion control in robotics, CMM and CNC machines.
Incremental encoder interfaces are primarily concerned with tracking mechanical displacement and usually do not directly measure speed. Consequently, speed must be indirectly measured by taking the derivative of the position with respect to time. The position signal is inherently quantized, which poses challenges for taking the derivative due to quantization error, especially at low speeds.
Encoder speed can be determined either by counting or by timing the encoder output pulses (or edges). The resulting value indicates a frequency or period, respectively, from which speed can be calculated. The speed is proportional to frequency, and inversely proportional to period.
By frequency If the position signal is sampled (a discrete time signal), the pulses (or pulse edges) are detected and counted by the interface, and speed is typically calculated by a computer which has read access to the interface. To do this, the computer reads the position counts C0 from the interface at time T0 and then, at some later time T1, reads the counts again to obtain C1. The average speed during the interval T0 to T1 is then calculated as: speed = (C1 − C0) / (T1 − T0). The resulting speed value is expressed as counts per unit time (e.g., counts per second). In practice, however, it is often necessary to express the speed in standardized units such as meters per second, revolutions per minute (RPM), or miles per hour (MPH). In such cases, the software will take into account the relationship between counts and desired distance units, as well as the ratio of the sampling period to desired time units. For example, in the case of a rotary incremental encoder that produces 4096 counts per revolution, which is being read once per second, the software would compute RPM as: RPM = (counts per second) × (60 seconds / 1 minute) × (1 revolution / 4096 counts). When measuring speed this way, the measurement resolution is proportional to both the encoder resolution and the sampling period (the elapsed time between the two samples); measurement resolution improves as the sampling period increases.
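The frequency-based calculation above can be sketched in Python (the function names are illustrative, not from any particular library):

```python
def average_speed(c0, c1, t0, t1):
    """Average speed over the sampling interval, in counts per unit time."""
    return (c1 - c0) / (t1 - t0)

def to_rpm(counts_per_second, counts_per_rev=4096):
    """Convert counts/second to revolutions per minute."""
    return counts_per_second * 60.0 / counts_per_rev

# 4096-count/rev rotary encoder, read once per second:
v = average_speed(10000, 14096, 0.0, 1.0)   # 4096 counts/s
rpm = to_rpm(v)                             # one revolution per second -> 60 RPM
```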
By period Alternatively, a speed measurement can be reported at each encoder output pulse by measuring the pulse width or period. When this method is used, measurements are triggered at specific positions instead of at specific times. The speed calculation is the same as shown above (counts / time), although in this case the measurement start and stop times ( T0 and T1 ) are provided by a time reference.
This technique avoids position quantization error but introduces errors related to quantization of the time reference. Also, it is more sensitive to sensor non-idealities such as phase errors, symmetry errors, and variations in the transition locations from their nominal values.
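For comparison, a sketch of the period-based method (illustrative Python; the pulse period is assumed to come from the interface's time reference):

```python
def speed_from_period(pulse_period_s, counts_per_pulse=1):
    """Speed in counts/second from the measured time between pulses.
    Resolution is limited by the time reference rather than by
    position quantization."""
    return counts_per_pulse / pulse_period_s

cps = speed_from_period(0.001)   # pulses 1 ms apart -> 1000 counts/s
```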
Incremental encoder interface:
An incremental encoder interface is an electronic circuit that receives signals from an incremental encoder, processes the signals to produce absolute position and other information, and makes the resulting information available to external circuitry.
Incremental encoder interfaces are implemented in a variety of ways, including as ASICs, as IP blocks within FPGAs, as dedicated peripheral interfaces in microcontrollers and, when high count rates are not required, as polled (software monitored) GPIOs.
Regardless of the implementation, the interface must sample the encoder's A and B output signals frequently enough to detect every AB state change before the next state change occurs. Upon detecting a state change, it will increment or decrement the position counts based on whether A leads or trails B. This is typically done by storing a copy of the previous AB state and, upon state change, using the current and previous AB states to determine movement direction.
Line receivers Incremental encoder interfaces use various types of electronic circuits to receive encoder-generated signals. These line receivers serve as buffers to protect downstream interface circuitry and, in many cases, also provide signal conditioning functions.
Single-ended Incremental encoder interfaces typically employ Schmitt trigger inputs to receive signals from encoders that have single-ended (e.g., push-pull, open collector) outputs. This type of line receiver inherently rejects low-level noise (by means of its input hysteresis) and protects downstream circuitry from invalid (and possibly destructive) logic signal levels.
Differential RS-422 line receivers are commonly used to receive signals from encoders that have differential outputs. This type of receiver rejects common-mode noise and converts the incoming differential signals to the single-ended form required by downstream logic circuits.
In mission-critical systems, an encoder interface may be required to detect loss of input signals due to encoder power loss, signal driver failure, cable fault or cable disconnect. This is usually accomplished by using enhanced RS-422 line receivers which detect the absence of valid input signals and report this condition via a "signal lost" status output. In normal operation, glitches (brief pulses) may appear on the status outputs during input state transitions; typically, the encoder interface will filter the status signals to prevent these glitches from being erroneously interpreted as lost signals. Depending on the interface, subsequent processing may include generating an interrupt request upon detecting signal loss, and sending notification to the application for error logging or failure analysis.
Clock synchronization An incremental encoder interface largely consists of sequential logic which is paced by a clock signal. However, the incoming encoder signals are asynchronous with respect to the interface clock because their timing is determined solely by encoder movement. Consequently, the output signals from the A and B (also Z and alarm, if used) line receivers must be synchronized to the interface clock, both to avoid errors due to metastability and to coerce the signals into the clock domain of the quadrature decoder. Typically this synchronization is performed by independent, single-signal synchronizers such as a two flip-flop synchronizer. At very high clock frequencies, or when a very low error rate is needed, the synchronizers may include additional flip-flops in order to achieve an acceptably low bit error rate.
Input filter In many cases an encoder interface must filter the synchronized encoder signals before further processing them. This may be required in order to reject low-level noise and brief, large-amplitude noise spikes commonly found in motor applications and, in the case of mechanical-type encoders, to debounce A and B to avoid count errors due to mechanical contact bounce.
Hardware-based interfaces often provide programmable filters for the encoder signals, which provide a wide range of filter settings and thus allow them to debounce contacts or suppress transients resulting from noise or slowly slewing signals, as needed. In software-based interfaces, A and B typically are connected to GPIOs that are sampled (via polling or edge interrupts) and debounced by software.
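A minimal sketch of such a software debounce (illustrative Python; real implementations vary): a new logic level is accepted only after it has been observed on a fixed number of consecutive samples.

```python
class Debouncer:
    """Accept a new logic level only after it has been seen on
    `stable_count` consecutive samples (a common software debounce)."""
    def __init__(self, stable_count=3, initial=0):
        self.stable_count = stable_count
        self.level = initial        # debounced output level
        self.candidate = initial    # level currently being vetted
        self.run = 0                # consecutive samples of the candidate

    def sample(self, raw):
        if raw == self.level:
            # Signal agrees with the accepted level; restart vetting.
            self.candidate, self.run = raw, 0
        elif raw == self.candidate:
            self.run += 1
            if self.run >= self.stable_count:
                self.level, self.run = raw, 0   # accept the new level
        else:
            self.candidate, self.run = raw, 1   # start vetting a new level
        return self.level
```

A bouncing contact (alternating samples) never accumulates enough consecutive agreeing samples, so the output holds its last stable level.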
Quadrature decoder Incremental encoder interfaces commonly use a quadrature decoder to convert the A and B signals into the direction and count enable (clock enable) signals needed for controlling a bidirectional (up- and down-counting) synchronous counter.
Typically, a quadrature decoder is implemented as a finite-state machine (FSM) which simultaneously samples the A and B signals and thus produces amalgamate "AB" samples. As each new AB sample is acquired, the FSM will store the previous AB sample for later analysis. The FSM evaluates the differences between the new and previous AB states and generates direction and count enable signals as appropriate for the detected AB state sequence.
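The FSM's transition logic can be sketched as a lookup over the four-state Gray-code sequence (illustrative Python; which sequence counts as "forward" depends on the encoder's phase convention):

```python
# One full quadrature cycle (two-bit Gray code): 00 -> 01 -> 11 -> 10 -> 00
_NEXT = {(0, 0): (0, 1), (0, 1): (1, 1), (1, 1): (1, 0), (1, 0): (0, 0)}

def decode(prev_ab, ab):
    """Return (count_enable, direction) for one pair of AB samples.
    direction: +1 forward, -1 reverse, 0 when AB is unchanged."""
    if ab == prev_ab:
        return (False, 0)                 # no movement
    if _NEXT[prev_ab] == ab:
        return (True, +1)                 # one increment forward
    if _NEXT[ab] == prev_ab:
        return (True, -1)                 # one increment in reverse
    raise ValueError("illegal transition: A and B changed simultaneously")
```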
State transitions In any two consecutive AB samples, the logic level of A or B may change or both levels may remain unchanged, but in normal operation A and B will never both change. In this regard, each AB sample is effectively a two-bit Gray code.
Normal transitions When only A or B changes state, it is assumed that the encoder has moved one increment of its measurement resolution and, accordingly, the quadrature decoder will assert its count enable output to allow the counts to change. Depending on the encoder's direction of travel (forward or reverse), the decoder will assert or negate its direction output to cause the counts to increment or decrement (or vice versa).
When neither A nor B changes, it is assumed that the encoder has not moved and so the quadrature decoder negates its count enable output, thereby causing the counts to remain unchanged.
Errors If both the A and B logic states change in consecutive AB samples, the quadrature decoder has no way of determining how many increments, or in what direction the encoder has moved. This can happen if the encoder speed is too fast for the decoder to process (i.e., the rate of AB state changes exceeds the quadrature decoder's sampling rate; see Nyquist rate) or if the A or B signal is noisy.
In many encoder applications this is a catastrophic event because the counter no longer provides an accurate indication of encoder position. Consequently, quadrature decoders often will output an additional error signal which is asserted when the A and B states change simultaneously. Due to the severity and time-sensitive nature of this condition, the error signal is often connected to an interrupt request.
Clock multiplier A quadrature decoder does not necessarily allow the counts to change for every incremental position change. When a decoder detects an incremental position change (due to a transition of A or B, but not both), it may allow the counts to change or it may inhibit counting, depending on the AB state transition and the decoder's clock multiplier.
The clock multiplier of a quadrature decoder is so named because it results in a count rate which is a multiple of the A or B pulse frequency. Depending on the decoder's design, the clock multiplier may be hardwired into the design or it may be run-time configurable via input signals.
The clock multiplier value may be one, two or four (typically designated "x1", "x2" and "x4", or "1x", "2x" and "4x"). In the case of a x4 multiplier, the counts will change for every AB state change, thereby resulting in a count rate equal to four times the A or B frequency. The x2 and x1 multipliers allow the counts to change on some, but not all AB state changes, as shown in the quadrature decoder state table above (note: this table shows one of several possible implementations for x2 and x1 multipliers; other implementations may enable counting at different AB transitions).
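One possible multiplier scheme (matching the note above that implementations vary) can be sketched as:

```python
def count_enable(prev_ab, ab, multiplier=4):
    """Illustrative x1/x2/x4 scheme: x4 counts every AB state change,
    x2 counts only on edges of A, x1 only on rising edges of A."""
    if ab == prev_ab:
        return False                      # no state change, never count
    a_prev, a = prev_ab[0], ab[0]
    if multiplier == 4:
        return True                       # 4x the A (or B) frequency
    if multiplier == 2:
        return a != a_prev                # both edges of A
    return a_prev == 0 and a == 1         # rising edge of A only
```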
Position reporting From an application's perspective, the fundamental purpose of an incremental encoder interface is to report position information on demand. Depending on the application, this may be as simple as allowing the computer to read the position counter at any time under program control. In more complex systems, the position counter may be sampled and processed by intermediate state machines, which in turn make the samples available to the computer.
Sample register An encoder interface typically employs a sample register to facilitate position reporting. In the simple case where the computer demands position information under program control, the interface will sample the position counter (i.e., copy the current position counts to the sample register) and then the computer will read the counts from the sample register. This mechanism results in atomic operation and thus ensures the integrity of the sample data, which might otherwise be at risk (e.g., if the sample's word size exceeds the computer's word size).
Triggered sampling In some cases the computer may not be able to programmatically (via programmed I/O) acquire position information with adequate timing precision. For example, the computer may be unable to demand samples on a timely periodic schedule (e.g., for speed measurement) due to software timing variability. Also, in some applications it is necessary to demand samples upon the occurrence of external events, and the computer may be unable to do so in a timely manner. At higher encoder speeds and resolutions, position measurement errors can occur even when interrupts are used to demand samples, because the encoder may move between the time the IRQ is signaled and the sample demand is issued by the interrupt handler.
To overcome this limitation, it is common for an incremental encoder interface to implement hardware-triggered sampling, which enables it to sample the position counter at precisely-controlled times as dictated by a trigger input signal. This is important when the position must be sampled at particular times or in response to physical events, and essential in applications such as multi-axis motion control and CMM, in which the position counters of multiple encoder interfaces (one per axis) must be simultaneously sampled.
In many applications the computer must know precisely when each sample was acquired and, if the interface has multiple trigger inputs, which signal triggered the sample acquisition. To satisfy these requirements, the interface typically will include a timestamp and trigger information in every sample.
Event notification Sampling triggers are often asynchronous with respect to software execution. Consequently, when the position counter is sampled in response to a trigger signal, the computer must be notified (typically via interrupt) that a sample is available. This allows the software to be event-driven (vs. polled), which facilitates responsive system behavior and eliminates polling overhead.
Sample FIFO Consecutive sampling triggers may occur faster than the computer can process the resulting samples. When this happens, the information in the sample register will be overwritten before it can be read by the computer, resulting in data loss. To avoid this problem, some incremental encoder interfaces provide a FIFO buffer for samples. As each sample is acquired, it is stored in the FIFO. When the computer demands a sample, it is allowed to read the oldest sample in the FIFO. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**The Science of Mind**
The Science of Mind:
The Science of Mind is a book by Ernest Holmes. It was published in 1926 and proposes a science with a new relationship between humans and God.
Overview:
The book was originally published by Holmes, the founder of Religious Science, in 1926. A revised version was completed by Holmes and Maude Allison Lathem and published 12 years later in 1938.
Holmes' writing details how people can actively engage their minds in creating change throughout their lives. The book includes explanations of how to pray and meditate, find self-confidence, and express love.
Influences:
Holmes wrote The Science of Mind with the belief that he was summarizing the best of beliefs from around the world. His influences included Thomas Troward, Ralph Waldo Emerson, Christian Larson, Evelyn Underhill, and Emma Curtis Hopkins.
Legacy:
In 1927, Holmes began publishing Science of Mind magazine, which is still in publication. The name Science of Mind is used by the foundation which continues his work. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Attack vector**
Attack vector:
In computer security, an attack vector is a specific path, method, or scenario that can be exploited to break into an IT system, thus compromising its security. The term was derived from the corresponding notion of vector in biology. An attack vector may be exploited manually, automatically, or through a combination of manual and automatic activity.
Often, this is a multi-step process. For instance, malicious code (code that the user did not consent to being run and that performs actions the user would not consent to) often operates by being added to a harmless seeming document made available to an end user. When the unsuspecting end user opens the document, the malicious code in question (known as the payload) is executed and performs the abusive tasks it was programmed to execute, which may include things such as spreading itself further, opening up unauthorized access to the IT system, stealing or encrypting the user's documents, etc.
In order to limit the chance of discovery once installed, the code in question is often obfuscated by layers of seemingly harmless code. Some common attack vectors: exploiting buffer overflows; this is how the Blaster worm was able to propagate.
exploiting webpages and email supporting the loading and subsequent execution of JavaScript or other types of scripts without properly limiting their powers.
exploiting networking protocol flaws to perform unauthorized actions at the other end of a network connection.
phishing: sending deceptive messages to end users to entice them to reveal confidential information, such as passwords. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Deep ocean water**
Deep ocean water:
Deep ocean water (DOW) is the name for cold, salty water found deep below the surface of Earth's oceans. Ocean water differs in temperature and salinity. Warm surface water is generally saltier than the cooler deep or polar waters; in polar regions, the upper layers of ocean water are cold and fresh. Deep ocean water makes up about 90% of the volume of the oceans. Deep ocean water has a very uniform temperature, around 0-3 °C, and a salinity of about 3.5% or, as oceanographers state, 35 ppt (parts per thousand). In specialized locations such as the Natural Energy Laboratory of Hawaii, ocean water is pumped to the surface from approximately 900 metres (2,952 feet) deep for applications in research, commercial and pre-commercial activities. DOW is typically used to describe ocean water at sub-thermal depths sufficient to provide a measurable difference in water temperature.
When deep ocean water is brought to the surface, it can be used for a variety of things. Its most useful property is its temperature. At the surface of the Earth, most water and air is well above 3 °C. The difference in temperature is indicative of a difference in energy. Where there is an energy gradient, skillful application of engineering can harness that energy for productive use by humans.
The simplest use of cold water is for air conditioning: using the cold water itself to cool air saves the energy that would be used by the compressors for traditional refrigeration. Another use could be to replace expensive desalination plants. When cold water passes through a pipe surrounded by humid air, condensation results. The condensate is pure water, suitable for humans to drink or for crop irrigation. Via a technology called ocean thermal energy conversion, the temperature difference can be turned into electricity.
Cold-bed agriculture:
During condensation or ocean thermal energy conservation operations, the water does not reach ambient temperature, because a certain temperature gradient is required to make these processes viable. The water leaving those operations is therefore still colder than the surroundings, and a further benefit can be extracted by passing this water through underground pipes, thereby cooling agricultural soil. This reduces evaporation, and even causes water to condense from the atmosphere. This allows agricultural production where crops would normally not be able to grow. This technique is sometimes referred to as "cold agriculture" or "cold-bed agriculture". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Glutaryl-7-aminocephalosporanic-acid acylase**
Glutaryl-7-aminocephalosporanic-acid acylase:
In enzymology, a glutaryl-7-aminocephalosporanic-acid acylase (EC 3.5.1.93) is an enzyme that catalyzes the chemical reaction: (7R)-7-(4-carboxybutanamido)cephalosporanate + H2O ⇌ (7R)-7-aminocephalosporanate + glutarate. Thus, the two substrates of this enzyme are (7R)-7-(4-carboxybutanamido)cephalosporanate and H2O, whereas its two products are (7R)-7-aminocephalosporanate and glutarate.
This enzyme belongs to the family of hydrolases, those acting on carbon-nitrogen bonds other than peptide bonds, specifically in linear amides. The systematic name of this enzyme class is (7R)-7-(4-carboxybutanamido)cephalosporanate amidohydrolase. Other names in common use include 7beta-(4-carboxybutanamido)cephalosporanic acid acylase, cephalosporin C acylase, glutaryl-7-ACA acylase, CA, GCA, GA, cephalosporin acylase, glutaryl-7-aminocephalosporanic acid acylase, and GL-7-ACA acylase. This enzyme participates in penicillin and cephalosporin biosynthesis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tree uprooting**
Tree uprooting:
Uprooting is a form of treefall in which the root plate of a tree is torn from the soil, disrupting and mixing it and leaving a pit-mound.
Transplanting:
Small trees can be replanted if their root system is well attached to the trunk. Trees can suffer from transplant shock when moved to a new environment, which can prevent them from rooting properly. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**WDC 65C22**
WDC 65C22:
The W65C22 versatile interface adapter (VIA) is an input/output device for use with the 65xx series microprocessor family.
Overview:
Designed by the Western Design Center, the W65C22 is made in two versions, both of which are rated for 14 megahertz operation, and available in DIP-40 or PLCC-44 packages.
W65C22N: This device is fully compatible with the NMOS 6522 produced by MOS Technology and others, and includes current-limiting resistors on its output lines. The W65C22N has an open-drain interrupt output (the IRQB pin) that is compatible with a wired-OR interrupt circuit. Hence the DIP-40 version is a "drop-in" replacement for the NMOS part.
W65C22S: This device is fully software– and partially hardware–compatible with the NMOS part. The W65C22S' IRQB output is a totem pole configuration, and thus cannot be directly connected to a wired-OR interrupt circuit.As with the NMOS 6522, the W65C22 includes functions for programmed control of two peripheral ports (ports A and B). Two program–controlled 8-bit bi-directional peripheral I/O ports allow direct interfacing between the microprocessor and selected peripheral units. Each port has input data latching capability. Two programmable data direction registers (A and B) allow selection of data direction (input or output) on an individual I/O line basis.
Also provided are two programmable 16-bit interval timer/counters with latches. Timer 1 may be operated in a one-shot or free-run mode. In either mode, a timer can generate an interrupt when it has counted down to zero. Timer 2 functions as an interval counter or a pulse counter. If operating as an interval counter, timer 2 is driven by the microprocessor's PHI2 clock source. As a pulse counter, timer 2 is triggered by an external pulse source on the chip's PB6 line.
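As a sketch of typical Timer 1 free-run setup (register offsets and bit positions per the 6522/65C22 data sheets; the `write` helper and `regs` device model are hypothetical stand-ins for memory-mapped bus writes):

```python
# W65C22 register offsets (from the 6522/65C22 data sheets)
T1C_L, T1C_H, ACR, IER = 0x4, 0x5, 0xB, 0xE

regs = {}                      # stand-in for the memory-mapped device
def write(offset, value):      # hypothetical bus write
    regs[offset] = value & 0xFF

def start_t1_free_run(period):
    """Configure Timer 1 for free-run mode with continuous interrupts."""
    write(ACR, 0x40)           # ACR bit 6: T1 continuous interrupts
    write(IER, 0x80 | 0x40)    # IER bit 7 = set-enable, bit 6 = T1 interrupt
    write(T1C_L, period & 0xFF)         # low byte of the 16-bit count
    write(T1C_H, (period >> 8) & 0xFF)  # writing the high byte starts T1

start_t1_free_run(10000)
```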
Serial data transfers are provided by a serial to parallel/parallel to serial shift register, with bit transfers synchronized with the PHI2 clock. Application versatility is further increased by various control registers, including an interrupt flag register, an interrupt enable register and two Function Control Registers.
Features:
Two 16-bit timer/counters
Two 8-bit parallel I/O ports, bi-directional, programmable direction and latching
One synchronous serial I/O port, bi-directional
TTL compatible I/O
W65C22N has open-drain interrupt output, W65C22S has push-pull totem-pole interrupt output
Software compatible with NMOS 6522 devices
Bus compatible with high-speed W65C02S and W65C816S microprocessors
Advanced CMOS process technology for low power consumption
1.8V to 5V power supply | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Yulex**
Yulex:
Yulex Corporation makes products from Guayule (Parthenium argentatum) a residual agricultural material.
Commercial success:
In 2008, the U.S. Food and Drug Administration (FDA) approved Yulex biorubber gloves for medical uses. Yulex is the first company to produce biobased, medical-grade latex that is safe for people with latex allergy. In 2012, Yulex received a $6.9 million USDA-DoE grant as part of a research consortium. Partnering with the Agricultural Research Service (ARS) and Cooper Tire, Yulex will research enhanced manufacturing processes, testing and utilization of guayule natural rubber as a strategic source of raw material in tires, and evaluate the remaining biomass of the guayule plant as a source of bio-fuel for the transportation industry, as well as work on improving agronomic practices, developing genetic information and undertaking a lifecycle analysis. Also in 2012, Yulex released the first alternative to the traditional neoprene wetsuit in partnership with Patagonia, the first guayule-based mattresses and pillows in partnership with Pure LatexBliss, and the first plant-based, latex allergy-friendly dental dam in partnership with 4D Rubber. In 2013, Yulex formed a partnership with ENI's Versalis to expand the reach of guayule into European markets. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**IEEE Communications Surveys and Tutorials**
IEEE Communications Surveys and Tutorials:
IEEE Communications Surveys & Tutorials is a quarterly peer-reviewed academic journal published by the IEEE Communications Society for tutorials and surveys covering all aspects of the communications field. The journal publishes both original articles and reprints of articles featured in other IEEE Communication Society journals. It was established in 1998 and the current editor-in-chief is Dusit (Tao) Niyato (Nanyang Technological University). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cambridge Diploma in Computer Science**
Cambridge Diploma in Computer Science:
Diploma in Computer Science, originally known as the Diploma in Numerical Analysis and Automatic Computing, was a conversion course in computer science offered by the University of Cambridge. It is equivalent to a master's degree in present-day nomenclature but the title diploma was retained for historic reasons, "diploma" being the archaic term for a master's degree.
The diploma was the world's first full-year taught course in computer science, starting in 1953. It attracted students of mathematics, science and engineering. At its peak, there were 50 students on the course. UK government (EPSRC) funding was withdrawn in 2001 and student numbers dropped dramatically. In 2007, the university took the decision to withdraw the diploma at the end of the 2007-08 academic year, after 55 years of service.
History:
The introduction of this one-year graduate course was motivated by a University of Cambridge Mathematics Faculty Board Report on the "demand for postgraduate instruction in numerical analysis and automatic computing … [which] if not met, there is a danger that the application to scientific research of the machines now being built will be hampered". The University of Cambridge Computer Laboratory "was one of the pioneers in the development and use of electronic computing-machines (sic)". It had introduced a Summer School in 1950, but the Report noted that "The Summer School deals [only] with 'programming', rather than the general theory of the numerical methods which are programmed." The Diploma "would include theoretical and practical work … [and also] instruction about the various types of computing-machine … and the principles of design on which they are based." With only a few students initially, no extra staff would be needed. University-supported teaching and research staff in the Laboratory at the time were Maurice Wilkes (head of the laboratory), J. C. P. Miller, W. Renwick, E. N. Mutch, and S. Gill, joined slightly later by C. B. Haselgrove.
In its final incarnation, the Diploma was a 10-month course, evaluated two-thirds on examination and one-third on a project dissertation. Most of the examined courses were shared by the second year ("Part IB") of the undergraduate Computer Science Tripos course, with some additional lectures specifically for the Diploma students and four of the third year undergraduate ("Part II") lecture courses also included.
There were three grades of result from the Diploma: distinction (roughly equivalent to first class honours), pass (equivalent to second or third class honours), and fail. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Discontinuous transmission**
Discontinuous transmission:
Discontinuous transmission (DTX) is a means by which a mobile telephone is temporarily shut off or muted while the phone lacks a voice input.
Misconception:
A common misconception is that DTX improves capacity by freeing up TDMA time slots for use by other conversations. In practice, the unpredictable availability of time slots makes this difficult to implement. However, reducing interference is a significant component in how GSM and other TDMA based mobile phone systems make better use of the available spectrum compared to older analog systems such as Advanced Mobile Phone System (AMPS) and Nordic Mobile Telephone (NMT). While older network types theoretically allocated two 25–30 kHz channels per conversation, in practice some radios would cause interference on neighbouring channels making them unusable, and a single radio may broadcast too strong an oval signal pattern to let nearby cells reuse the same channel.
GSM combines short packet sizes, frequency hopping, redundancy, power control, digital encoding, and DTX to minimize interference and the effects of interference on a conversation. In this respect, DTX indirectly improves the over-all capacity of a network.
Packet radio systems:
In packet radio systems such as GPRS/EDGE, it is possible to combine DTX with a capacity increase when VoIP is used for telephony. In such cases, resources freed up when one user is silent can be used to serve another user. Increasing the number of users contributes to the interference level. Systems that use voice codecs such as AMR can reduce the vocoder rate adaptively to better combat interference. Systems based upon CDMA air interfaces such as IS-95/CDMA2000, and most forms of UMTS, can use a form of implied DTX by using a variable rate codec such as AMR. As with the packet radio systems above, when one side of the conversation is silent, the amount of transmitted data is minimized. Again, the effect is reduced interference.
In wireless transmitters, VAD is sometimes called voice-operated transmission (VOX).
Technical details:
SP flag = 0 indicates a SID (Silence Insertion Descriptor) frame; SP flag = 1 indicates a speech frame. A speech frame consists of 260 samples.

Transmit side. The TX DTX handler performs speech encoding, comfort noise computation, and voice activity detection (VAD). The TX Radio Subsystem (RSS) performs SP flag monitoring and channel coding.

Hangover period. After the transition from VAD=1 to VAD=0, a "hangover period" of N+1 consecutive frames is required to make a new updated SID frame available. These bursts are passed directly to the RSS with SP=1.
Background noise spikes can often be mistaken for speech frames. To guard against this, a check is applied during SID computation: if Nelapsed > 23, the old SID frame is reused while VAD=0.
Once the end-of-speech SID frame has been computed, it is passed continuously to the RSS marked with SP=0 for as long as VAD=0.
If a SID frame (SP=0) chosen for transmission is stolen for FACCH signaling, then the subsequent frame is scheduled for transmission instead.
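The transmit-side behaviour above (speech frames while VAD=1, a hangover period after speech ends, then SID frames carrying comfort noise parameters) can be sketched as a small scheduler. This is an illustrative model only, not the normative GSM procedure; the hangover length and the frame labels are assumptions made for the sketch.

```python
# Illustrative sketch of DTX transmit-side scheduling (not the
# normative GSM algorithm). Frames are modelled only by their VAD
# flag; n_hangover stands in for the hangover length N.

def dtx_schedule(vad_flags, n_hangover=4):
    """Map a sequence of VAD decisions to (kind, sp_flag) pairs.

    SP=1 marks a speech frame, SP=0 a SID (comfort noise) frame.
    After VAD drops from 1 to 0, N+1 "hangover" frames are still
    sent as speech (SP=1) so an updated SID frame can be computed.
    """
    out = []
    hangover = 0
    for vad in vad_flags:
        if vad == 1:
            out.append(("speech", 1))
            hangover = n_hangover + 1    # re-arm the hangover period
        elif hangover > 0:
            out.append(("hangover", 1))  # still SP=1 during hangover
            hangover -= 1
        else:
            out.append(("sid", 0))       # comfort noise descriptor
    return out

if __name__ == "__main__":
    for kind, sp in dtx_schedule([1, 1, 0, 0, 0, 0, 0], n_hangover=2):
        print(kind, sp)
```

Note how re-arming the counter on every VAD=1 frame means the hangover only runs once speech has genuinely ended, matching the description of the transition from VAD=1 to VAD=0.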
Receive side. BFI=0 marks a frame carrying meaningful information; BFI=1 marks a frame that does not. A FACCH frame is not considered meaningful information and is passed on with BFI=1. Traffic frames aligned with the SACCH multiframe have TAF (time alignment flag) = 1. The RX DTX handler performs speech decoding and comfort noise computation.
The RX Radio Subsystem performs error correction and detection and SID frame detection. Whenever a good speech frame is detected, the RX DTX handler passes it directly to the speech decoder. Whenever a lost speech frame or a lost SID frame is detected, substitution or muting is applied. A valid SID frame results in comfort noise generation. If an invalid SID frame arrives after consecutive speech frames, the last valid SID frame remains applicable.
**Intestinal metaplasia**
Intestinal metaplasia:
Intestinal metaplasia is the transformation (metaplasia) of epithelium (usually of the stomach or the esophagus) into a type of epithelium resembling that found in the intestine. In the esophagus, this is called Barrett's esophagus. Chronic inflammation caused by H. pylori infection in the stomach and GERD in the esophagus are seen as the primary instigators of metaplasia and subsequent adenocarcinoma formation. Initially, the transformed epithelium resembles the small intestine lining; in the later stages it resembles the lining of the colon. It is characterized by the appearance of goblet cells and expression of intestinal cell markers such as the transcription factor CDX2.
Risk factors:
Although it was originally reported that people of East Asian ethnicity with gastric intestinal metaplasia are at increased risk of stomach cancer, it is now clear that gastric intestinal metaplasia is also a risk factor in low-incidence regions like Europe. Risk factors for progression of gastric intestinal metaplasia to full-blown cancer are smoking and family history.
**Comet (pyrotechnics)**
Comet (pyrotechnics):
In pyrotechnics a comet is a block attached to the outside of a shell or launched freely, which burns and emits sparks as the shell is rising, leaving a trail in the sky. Some comets use a matrix composition with small stars embedded in it. The matrix composition burns with little light but ignites the stars, producing the effect. Some freely-launched comets contain crossette breaks, which explode and break the comet into pieces to produce a branching effect.
Comets intended for use indoors near an audience, such as at a rock concert, are typically freely-launched projectiles designed to completely consume themselves to reduce the hazard to audience members. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Speedwriting**
Speedwriting:
Speedwriting is the trademark under which three versions of a shorthand system were marketed during the 20th century. The original version was designed so that it could be written with a pen or typed on a typewriter. At the peak of its popularity, Speedwriting was taught in more than 400 vocational schools and its advertisements were ubiquitous in popular American magazines.
Description of the original version:
The original version of Speedwriting uses letters of the alphabet and a few punctuation marks to represent the sounds of English. There are abbreviations for common prefixes and suffixes; for example, uppercase N represents enter- or inter- so "entertainment" is written as Ntn- and "interrogation" is reduced to Ngj. Vowels are omitted from many words and arbitrary abbreviations are provided for the most common words.
Specimen: ltus vaqt ll p/, aspz rNb otfm. ("Let us have a quiet little party and surprise our neighbor on the farm.")

By reducing the use of spaces between words a high level of brevity can be achieved: "laugh and the world laughs with you" can be written as "lfatwolfs wu". Original Speedwriting can be typed on a typewriter or computer keyboard. When writing with a pen, one uses regular cursive handwriting with a few small modifications: lowercase 't' is written as a simple vertical line and 'l' must be written with a distinctive loop; specific shapes for various letters are prescribed in the textbook.
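The abbreviating principles can be mimicked mechanically. The sketch below applies just two of the rules described above (the uppercase-N prefix mark and vowel omission) with a tiny invented rule table; real Speedwriting applies many further sound-based reductions, which is why genuine outlines such as Ngj for "interrogation" are shorter than what this toy produces.

```python
# A toy illustration of Speedwriting-style abbreviation. Only two
# rules from the text are modelled (uppercase N for "enter-"/"inter-",
# and vowel omission); the rule table is a hypothetical subset, not
# the published Speedwriting dictionary.

PREFIXES = {"enter": "N", "inter": "N"}  # prefix marks from the text
VOWELS = set("aeiou")

def abbreviate(word):
    """Apply prefix substitution, then drop vowels from the remainder."""
    for prefix, mark in PREFIXES.items():
        if word.startswith(prefix):
            rest = word[len(prefix):]
            return mark + "".join(ch for ch in rest if ch not in VOWELS)
    # keep the first letter even if it is a vowel, then drop the rest
    return word[0] + "".join(ch for ch in word[1:] if ch not in VOWELS)

print(abbreviate("interrogation"))  # prints Nrgtn
```

The real system's further reductions (arbitrary brief forms, sound-based spelling) would shrink these outlines considerably, but the sketch shows how much brevity the two simplest rules alone already buy.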
With twelve weeks of training, students could achieve speeds of 80 to 100 words per minute writing with a pen. The inventor of the system was able to type notes on a typewriter as fast as anyone could speak, therefore she believed Speedwriting could eliminate the need for stenotype machines in most applications.
History of the original version:
Emma B. Dearborn (February 1, 1875 – July 28, 1937) worked as a shorthand instructor and trainer of shorthand teachers at Simmons College, Columbia University, and several other institutions. She was an expert in several pen stenography systems as well as stenotype.
Having seen students struggle to master the complexities of symbol-based shorthand systems and stenotype theory, Dearborn decided to design a system that would be easier to learn. An early edition of her system was called "The Steno Short-Type System". Dearborn organized a corporation in 1924 and rebranded her shorthand system under the name of Speedwriting. Starting with just $192 of capital, she used print advertising to turn her textbooks and classes into a thriving international company with offices in England and Canada. Dearborn's company offered correspondence courses to individuals, while vocational schools around the country paid an annual franchise fee for the right to teach Speedwriting classes within a specified territory.
In addition to the extensive use of newspaper and magazine ads, Speedwriting gained publicity from a few unsolicited endorsements. Commander Richard E. Byrd commissioned Dearborn to teach her shorthand system to some members of his upcoming polar expedition. Theodore Roosevelt Jr. cited Dearborn as an example of women making great contributions to the business world. In 1937 the Works Progress Administration sponsored free Speedwriting classes as part of its Emergency Education Program.
Dearborn died by suicide in 1937. The School of Speedwriting organization continued publishing her textbooks and making franchise deals with vocational schools through about 1950.
Later versions:
The second version of Speedwriting, designed by Alexander L. Sheff (July 21, 1898 – June 27, 1978) in the early 1950s, introduced some symbols that could not be produced on a typewriter, such as arcs representing the letters 'm' and 'w'. This version also modified a few of the abbreviating principles. Changes implemented in the Sheff version include the following:

- vowels are included rather than omitted slightly more often; "cheap" is written as cep instead of cp
- the word "the" is indicated by a dot rather than the letter t
- a period at the end of a sentence is written as a large diagonal stroke \ rather than a dot
- the past tense of a regular verb is indicated by a short horizontal stroke above the final letter of the root word
- the -ing suffix is indicated by a short horizontal mark under the last letter of the outline rather than the letter g

A Spanish-language edition of the Sheff version was published in 1966. A Portuguese-language adaptation was published in 1968 by Prof. Plínio Leite, of Niterói (Brazil), who had licensed the exclusive teaching rights for Speedwriting in Brazil and Portugal. The Portuguese version was authored by him in collaboration with Prof. José Henrique Robertson Silva.
Starting in 1974 a variant of Speedwriting called Landmark Shorthand was taught in some American high schools and universities. Students generally enjoyed Landmark classes more than symbol-based shorthand because of the "immediate positive feedback ... Within minutes, words, phrases and sentences are written readily."

In the 1980s Joe M. Pullis designed the third major version of Speedwriting, which further modified the system's symbols and principles. Changes implemented in the Pullis version include the following:

- the sound of 'k' is written with lowercase c
- the "ch" sound is written with uppercase C
- the "sh" sound is written with a modified lowercase cursive s, as in Forkner shorthand
- the past tense of regular verbs is indicated with a hyphen on the line of writing
- the period, question mark, and end-of-paragraph symbols are identical to those of Gregg shorthand
- the brief forms for it/at, the, is/his are also the same as in Gregg
Ownership history:
Ownership of the Speedwriting trademark and the textbook copyrights changed hands several times. Dearborn's first corporation, formed in 1924, was called Brief English Systems. Later that was replaced by the Speedwriting Publishing Company which published textbooks under the name of its subsidiary, School of Speedwriting. By the 1970s ITT had purchased School of Speedwriting. At various times Macmillan and McGraw-Hill owned the Speedwriting trademark.
Brief English Systems v. Owen (1931). John P. Owen, a former Speedwriting student, published a book describing a new shorthand system that was very similar to Speedwriting. Dearborn's organization sued him for copyright violation.
In 1931 the Second Circuit Court of Appeals ruled that a particular textbook of shorthand could be copyrighted but the system itself is an invention or process rather than a literary work and cannot be copyrighted. "There is no literary merit in a mere system of condensing written words into less than the number of letters usually used to spell them out. Copyrightable material is found, if at all, in the explanation of how to do it." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pagibaximab**
Pagibaximab:
Pagibaximab is a chimeric monoclonal antibody for the prevention of staphylococcal sepsis in infants with low birth weight. As of March 2010, it is undergoing Phase II/III clinical trials. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ten and a quarter inch gauge**
Ten and a quarter inch gauge:
Ten and a quarter inch gauge (or X scale) (10+1⁄4 in / 260 mm) is a large modelling scale, generally only used for ridable miniature railways. Model railways at this scale normally confine the scale modelling to the locomotive and, in the case of steam locomotives, the accompanying tender. Rolling stock is generally made to carry passengers or maintenance equipment and is not to scale. A number of railways use this gauge of track but are narrow-gauge railways; examples are the Rudyard Lake Steam Railway, Isle of Mull Railway, and Wells and Walsingham Light Railway.
An organisation to promote this gauge of railway was re-formed in May 2010 as The Ten and a Quarter Railway Society, which also covers the larger 12+1⁄4 in (311 mm) and smaller 9+1⁄2 in (241 mm) gauges.
Locomotives:
Generally, model trains at this scale are individually hand-made; however, between 1963 and 1964, Lines Bros Ltd, using their combined Tri-ang and Minic brand names, produced a commercial system under the name Tri-ang Minic Narrowgauge Railway (T.M.N.R.). Commercial companies also build bespoke locomotives or, in the case of the Exmoor Steam Railway, a standard design of 2-4-2T.
Rolling stock:
Rolling stock was normally supplied in the form of seaside coaches.
However, the growth in narrow-gauge-style railways shows that fully enclosed coaches seating two adults side by side are possible and preferable for commercial railways.
**Desmocollin**
Desmocollin:
Desmocollins are a subfamily of desmosomal cadherins, the transmembrane constituents of desmosomes. They are co-expressed with desmogleins to link adjacent cells by extracellular adhesion. There are seven desmosomal cadherins in humans, three desmocollins and four desmogleins. Desmosomal cadherins allow desmosomes to contribute to the integrity of tissue structure in multicellular living organisms.
Structure:
Three isoforms of desmocollin proteins have been identified.
Desmocollin-1 is coded by the DSC1 gene, desmocollin-2 by the DSC2 gene, and desmocollin-3 by the DSC3 gene. Each desmocollin gene encodes a pair of proteins: a longer 'a' form and a shorter 'b' form, which differ in the length of their C-terminus tails. The protein pair is generated by alternative splicing. Desmocollin has four cadherin-like extracellular domains, an extracellular anchor domain, and an intracellular anchor domain. Additionally, the 'a' form has an intracellular cadherin-like sequence domain, which provides binding sites for other desmosomal proteins such as plakoglobin.
Expression:
The desmosomal cadherins are expressed in tissue-specific patterns. Desmocollin-2 and desmoglein-2 are found in all desmosome-containing tissues, such as colon and cardiac muscle tissues, while other desmosomal cadherins are restricted to stratified epithelial tissues. All seven desmosomal cadherins are expressed in epidermis, but in a differentiation-specific manner. The '2' and '3' isoforms of desmocollin and desmoglein are expressed in the lower epidermal layers, and the '1' proteins and desmoglein-4 are expressed in the upper epidermal layers. Different isoforms are located in the same individual cells, and single desmosomes contain more than one isoform of both desmocollin and desmoglein. It is unclear why there are multiple desmosomal cadherin isoforms. It is thought that they may have different adhesive properties required at different levels in stratified epithelia, or that they have specific functions in epithelial differentiation.
Disorders:
Desmosomes are involved in cell-cell adhesion, and are particularly important for the integrity of heart and skin tissue. Because of this, desmocollin gene mutations can affect the adhesion of cells that undergo mechanical stress, notably cardiomyocytes and keratinocytes. Genetic disorders associated with desmocollin gene mutations include Carvajal syndrome, striate palmoplantar keratoderma, Naxos disease, and arrhythmogenic right ventricular cardiomyopathy. There is also evidence that autoimmunity against desmosomal cadherins contributes to cardiac inflammation associated with arrhythmogenic right ventricular cardiomyopathy, and that anti-desmosomal cadherin antibodies may represent new therapeutic targets.
**Glitter cell**
Glitter cell:
Glitter cells (also called Sternheimer-Malbin positive cells) are polymorphonuclear neutrophils with granules that show Brownian movement, found in the urine most commonly in association with urinary tract infections or pyelonephritis, and especially prevalent in hypotonic urine (samples with specific gravity less than 1.01). First described in 1908, they derive their name from their appearance on a wet mount preparation under a microscope: the granules within their cytoplasm can be seen moving, giving them a "glittering appearance" due to swelling of the neutrophil as a result of hypotonicity. When stained with Sternheimer-Malbin stain, glitter cells also exhibit a colorless or pale blue nucleus and a pale blue or gray cytoplasm. The presence of glitter cells may be indicative of inflammatory changes in the bladder and kidney.
**Mammal Paleogene zone**
Mammal Paleogene zone:
The Mammal Paleogene zones or MP zones are a system of biostratigraphic zones in the stratigraphic record used to correlate mammal-bearing fossil localities of the Paleogene period of Europe. It consists of thirty consecutive zones (numbered MP 1 through MP 30; MP 8 and 9 have been joined into the MP 8 + 9 zone, and the MP 17 zone is split into two zones, MP 17A and MP 17B) defined through reference faunas, well-known sites that other localities can be correlated with. MP 1 is the earliest zone, and MP 30 is the most recent. The Grande Coupure extinction and faunal turnover event marks the boundary between MP 20 and MP 21, with post-Grande Coupure faunas occurring from MP 21 onward. The MP zones are complementary with the MN zones in the Neogene.
These zones were proposed at the Congress in Mainz held in 1987 to help paleontologists provide more specific reference points to evolutionary events in Europe, but are used by paleontologists on other continents as well.
The zones are as follows: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Description logic**
Description logic:
Description logics (DL) are a family of formal knowledge representation languages. Many DLs are more expressive than propositional logic but less expressive than first-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description logic features a different balance between expressive power and reasoning complexity by supporting different sets of mathematical constructors. DLs are used in artificial intelligence to describe and reason about the relevant concepts of an application domain (known as terminological knowledge). They are of particular importance in providing a logical formalism for ontologies and the Semantic Web: the Web Ontology Language (OWL) and its profiles are based on DLs. The most notable application of DLs and OWL is in biomedical informatics, where DL assists in the codification of biomedical knowledge.
Introduction:
A description logic (DL) models concepts, roles and individuals, and their relationships.
The fundamental modeling concept of a DL is the axiom—a logical statement relating roles and/or concepts. This is a key difference from the frames paradigm where a frame specification declares and completely defines a class.
Nomenclature:
Terminology compared to FOL and OWL. The description logic community uses different terminology than the first-order logic (FOL) community for operationally equivalent notions; some examples are given below. The Web Ontology Language (OWL) again uses different terminology, also given in the table below.
Naming convention. There are many varieties of description logics, and there is an informal naming convention roughly describing the operators allowed. The expressivity is encoded in the label for a logic: it starts with one of the basic logics, followed by letters for the extensions supported. Some canonical DLs do not exactly fit this convention. As an example, ALC is a centrally important description logic from which comparisons with other varieties can be made: ALC is simply AL with complement of any concept allowed, not just atomic concepts, and ALC is used instead of the equivalent ALUE. As a further example, the description logic SHIQ is the logic ALC plus extended cardinality restrictions, and transitive and inverse roles. The naming conventions aren't purely systematic, so the logic ALCOIN might be referred to as ALCNIO, and other abbreviations are also made where possible.
The Protégé ontology editor supports SHOIN(D). Three major biomedical informatics terminology bases, SNOMED CT, GALEN, and GO, are expressible in EL (with additional role properties).
OWL 2 provides the expressiveness of SROIQ(D), OWL-DL is based on SHOIN(D), and for OWL-Lite it is SHIF(D).
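As a rough illustration of the naming convention, the sketch below splits a label such as SHOIN(D) into a base logic, extension letters, and a datatype marker. The letter glossary is a commonly used summary supplied here as an assumption (the document's own table of extensions is not reproduced), and exceptions to the convention will not fit it.

```python
# Decompose a description-logic label such as "SHOIN(D)" into its
# conventional parts. The glossaries below are a common-usage summary
# (an assumption for this sketch, not a normative list).

EXTENSIONS = {
    "C": "complement of arbitrary concepts",
    "H": "role hierarchy",
    "O": "nominals (enumerated classes)",
    "I": "inverse roles",
    "N": "cardinality (number) restrictions",
    "Q": "qualified cardinality restrictions",
    "F": "functional roles",
}
BASES = {
    "AL": "attributive language",
    "S": "ALC plus transitive roles",
    "EL": "intersection and existential restrictions",
    "FL": "frame-based description language",
}

def decompose(name):
    """Return (base description, extension descriptions, has datatypes)."""
    datatypes = name.endswith("(D)")
    if datatypes:
        name = name[:-3]
    # match the longest base-logic prefix first (e.g. "AL" before "S")
    base = next(b for b in sorted(BASES, key=len, reverse=True)
                if name.startswith(b))
    exts = [EXTENSIONS[ch] for ch in name[len(base):]]
    return BASES[base], exts, datatypes

base, exts, d = decompose("SHOIN(D)")
print(base)  # ALC plus transitive roles
print(exts)  # hierarchy, nominals, inverse roles, number restrictions
print(d)     # True
```

Run on SHIQ, the same function recovers the description given in the text: ALC plus transitive roles, with role hierarchy, inverse roles, and qualified cardinality restrictions.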
History:
Description logic was given its current name in the 1980s. Previous to this it was called (chronologically): terminological systems, and concept languages.
Knowledge representation. Frames and semantic networks lack formal (logic-based) semantics. DL was first introduced into knowledge representation (KR) systems to overcome this deficiency.

The first DL-based KR system was KL-ONE (by Ronald J. Brachman and Schmolze, 1985). During the '80s other DL-based systems using structural subsumption algorithms were developed, including KRYPTON (1983), LOOM (1987), BACK (1988), K-REP (1991) and CLASSIC (1991). This approach featured DL with limited expressiveness but relatively efficient (polynomial-time) reasoning.

In the early '90s, the introduction of a new tableau-based algorithm paradigm allowed efficient reasoning on more expressive DL. DL-based systems using these algorithms — such as KRIS (1991) — show acceptable reasoning performance on typical inference problems even though the worst-case complexity is no longer polynomial.

From the mid '90s, reasoners were created with good practical performance on very expressive DL with high worst-case complexity. Examples from this period include FaCT, RACER (2001), CEL (2005), and KAON 2 (2005).
DL reasoners, such as FaCT, FaCT++, RACER, DLP and Pellet, implement the method of analytic tableaux. KAON2 is implemented by algorithms which reduce a SHIQ(D) knowledge base to a disjunctive datalog program.
Semantic web. The DARPA Agent Markup Language (DAML) and Ontology Inference Layer (OIL) ontology languages for the Semantic Web can be viewed as syntactic variants of DL. In particular, the formal semantics and reasoning in OIL use the SHIQ DL. The DAML+OIL DL was developed as a submission to—and formed the starting point of—the World Wide Web Consortium (W3C) Web Ontology Working Group. In 2004, the Web Ontology Working Group completed its work by issuing the OWL recommendation. The design of OWL is based on the SH family of DL, with OWL DL and OWL Lite based on SHOIN(D) and SHIF(D) respectively.

The W3C OWL Working Group began work in 2007 on a refinement of—and extension to—OWL. In 2009, this was completed by the issuance of the OWL2 recommendation. OWL2 is based on the description logic SROIQ(D). Practical experience demonstrated that OWL DL lacked several key features necessary to model complex domains.
Modeling:
In DL, a distinction is drawn between the so-called TBox (terminological box) and the ABox (assertional box). In general, the TBox contains sentences describing concept hierarchies (i.e., relations between concepts), while the ABox contains ground sentences stating where in the hierarchy individuals belong (i.e., relations between individuals and concepts). For example, a subsumption such as C⊑D belongs in the TBox, while an assertion such as a:C (stating that the individual a is an instance of the concept C) belongs in the ABox.
Note that the TBox/ABox distinction is not significant, in the same sense that the two "kinds" of sentences are not treated differently in first-order logic (which subsumes most DL). When translated into first-order logic, a subsumption axiom such as C⊑D is simply a conditional restriction on unary predicates (concepts) with only variables appearing in it. Clearly, a sentence of this form is not privileged or special over sentences in which only constants ("grounded" values) appear, such as the assertion a:C.
So why was the distinction introduced? The primary reason is that the separation can be useful when describing and formulating decision-procedures for various DL. For example, a reasoner might process the TBox and ABox separately, in part because certain key inference problems are tied to one but not the other one ('classification' is related to the TBox, 'instance checking' to the ABox). Another example is that the complexity of the TBox can greatly affect the performance of a given decision-procedure for a certain DL, independently of the ABox. Thus, it is useful to have a way to talk about that specific part of the knowledge base.
The secondary reason is that the distinction can make sense from the knowledge base modeler's perspective. It is plausible to distinguish between our conception of terms/concepts in the world (class axioms in the TBox) and particular manifestations of those terms/concepts (instance assertions in the ABox). In the above example: when the hierarchy within a company is the same in every branch but the assignment to employees is different in every department (because there are other people working there), it makes sense to reuse the TBox for different branches that do not use the same ABox.
There are two features of description logic that are not shared by most other data description formalisms: DL does not make the unique name assumption (UNA) or the closed-world assumption (CWA). Not having UNA means that two concepts with different names may be allowed by some inference to be shown to be equivalent. Not having CWA, or rather having the open world assumption (OWA) means that lack of knowledge of a fact does not immediately imply knowledge of the negation of a fact.
Formal description:
Like first-order logic (FOL), a syntax defines which collections of symbols are legal expressions in a description logic, and semantics determine meaning. Unlike FOL, a DL may have several well known syntactic variants.
Syntax. The syntax of a member of the description logic family is characterized by its recursive definition, in which the constructors that can be used to form concept terms are stated. Some constructors are related to logical constructors in first-order logic (FOL), such as intersection or conjunction of concepts, union or disjunction of concepts, negation or complement of concepts, universal restriction, and existential restriction. Other constructors have no corresponding construction in FOL, including restrictions on roles, for example inverse, transitivity, and functionality.
Notation. Let C and D be concepts, a and b be individuals, and R be a role.
If a is R-related to b, then b is called an R-successor of a.
The description logic ALC. The prototypical DL, Attributive Concept Language with Complements (ALC), was introduced by Manfred Schmidt-Schauß and Gert Smolka in 1991, and is the basis of many more expressive DLs. The following definitions follow the treatment in Baader et al. Let NC, NR and NO be (respectively) sets of concept names (also known as atomic concepts), role names, and individual names (also known as individuals, nominals or objects). Then the ordered triple (NC, NR, NO) is the signature.
Concepts. The set of ALC concepts is the smallest set such that:

- The following are concepts: ⊤ (top is a concept), ⊥ (bottom is a concept), and every A∈NC (all atomic concepts are concepts).
- If C and D are concepts and R∈NR, then the following are concepts: C⊓D (the intersection of two concepts), C⊔D (the union of two concepts), ¬C (the complement of a concept), ∀R.C (the universal restriction of a concept by a role), and ∃R.C (the existential restriction of a concept by a role).

Terminological axioms. A general concept inclusion (GCI) has the form C⊑D, where C and D are concepts. Write C≡D when C⊑D and D⊑C. A TBox is any finite set of GCIs.
Assertional axioms. A concept assertion is a statement of the form a:C, where a∈NO and C is a concept.
A role assertion is a statement of the form (a,b):R, where a,b∈NO and R is a role. An ABox is a finite set of assertional axioms.
Knowledge base. A knowledge base (KB) is an ordered pair (T, A) for TBox T and ABox A.

Semantics. The semantics of description logics are defined by interpreting concepts as sets of individuals and roles as sets of ordered pairs of individuals. Those individuals are typically assumed from a given domain. The semantics of non-atomic concepts and roles is then defined in terms of atomic concepts and roles, using a recursive definition similar to the syntax.
The description logic ALC. The following definitions follow the treatment in Baader et al. A terminological interpretation I = (ΔI, ·I) over a signature (NC, NR, NO) consists of:

- a non-empty set ΔI called the domain
- an interpretation function ·I that maps every individual a to an element aI ∈ ΔI, every concept to a subset of ΔI, and every role name to a subset of ΔI × ΔI

such that:

- ⊤I = ΔI and ⊥I = ∅
- (C⊔D)I = CI ∪ DI (union means disjunction)
- (C⊓D)I = CI ∩ DI (intersection means conjunction)
- (¬C)I = ΔI ∖ CI (complement means negation)
- (∀R.C)I = {x | for every y, (x,y) ∈ RI implies y ∈ CI}
- (∃R.C)I = {x | there exists a y such that (x,y) ∈ RI and y ∈ CI}

Define I ⊨ Φ (read: Φ holds in I) as follows:

- TBox: I⊨C⊑D if and only if CI ⊆ DI; I⊨T if and only if I⊨Φ for every Φ∈T
- ABox: I⊨a:C if and only if aI ∈ CI; I⊨(a,b):R if and only if (aI,bI) ∈ RI; I⊨A if and only if I⊨φ for every φ∈A
- Knowledge base: let K = (T, A) be a knowledge base.
Then I⊨K if and only if I⊨T and I⊨A.
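The set-based semantics above maps directly onto code: each constructor clause becomes a set operation over a finite domain. The sketch below is a minimal model checker for ALC concepts; the example signature (a concept Female and a role child over three individuals) is invented for illustration, and this is an interpreter over one fixed interpretation, not a DL reasoner.

```python
# A direct implementation of the ALC semantics given above: concepts
# are interpreted as subsets of a finite domain, roles as sets of
# pairs. The example signature (Female, child) is made up.

def extension(concept, interp):
    """Return the set of domain elements in the extension of `concept`.

    `concept` is a nested tuple: ("atom", A), ("not", C), ("and", C, D),
    ("or", C, D), ("all", R, C), ("some", R, C), or the strings
    "top" / "bottom". `interp` is (domain, concept_map, role_map).
    """
    domain, concepts, roles = interp
    if concept == "top":
        return set(domain)                     # ⊤I = ΔI
    if concept == "bottom":
        return set()                           # ⊥I = ∅
    op = concept[0]
    if op == "atom":
        return set(concepts.get(concept[1], set()))
    if op == "not":                            # (¬C)I = ΔI ∖ CI
        return set(domain) - extension(concept[1], interp)
    if op == "and":                            # intersection
        return extension(concept[1], interp) & extension(concept[2], interp)
    if op == "or":                             # union
        return extension(concept[1], interp) | extension(concept[2], interp)
    r = roles.get(concept[1], set())
    c = extension(concept[2], interp)
    succ = lambda x: {y for (a, y) in r if a == x}   # R-successors of x
    if op == "all":                            # every R-successor is in C
        return {x for x in domain if succ(x) <= c}
    return {x for x in domain if succ(x) & c}  # some R-successor is in C

# Example interpretation over a three-element domain.
I = ({"ann", "bob", "eve"},
     {"Female": {"ann", "eve"}},
     {"child": {("ann", "bob"), ("ann", "eve")}})

# ∃child.Female ("has a female child")
print(extension(("some", "child", ("atom", "Female")), I))  # {'ann'}
```

Note that under this semantics ∀child.Female is vacuously satisfied by individuals with no child-successors (here bob and eve), exactly as the "for every y" clause requires.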
Inference:
Decision problems. In addition to the ability to describe concepts formally, one also would like to employ the description of a set of concepts to ask questions about the concepts and instances described. The most common decision problems are basic database-query-like questions like instance checking (is a particular instance (member of an ABox) a member of a given concept) and relation checking (does a relation/role hold between two instances, in other words does a have property b), and the more global-database questions like subsumption (is a concept a subset of another concept) and concept consistency (is there no contradiction among the definitions or chain of definitions). The more operators one includes in a logic and the more complicated the TBox (having cycles, allowing non-atomic concepts to include each other), usually the higher the computational complexity is for each of these problems (see Description Logic Complexity Navigator for examples).
Relationship with other logics:
First-order logic. Many DLs are decidable fragments of first-order logic (FOL) and are usually fragments of two-variable logic or guarded logic. In addition, some DLs have features that are not covered in FOL; this includes concrete domains (such as integers or strings, which can be used as ranges for roles such as hasAge or hasName) and an operator on roles for the transitive closure of that role.
Fuzzy description logic. Fuzzy description logics combine fuzzy logic with DLs. Since many concepts that are needed for intelligent systems lack well-defined boundaries, or precisely defined criteria of membership, fuzzy logic is needed to deal with notions of vagueness and imprecision. This motivates a generalization of description logic towards dealing with imprecise and vague concepts.
Modal logic. Description logic is related to—but developed independently of—modal logic (ML). Many—but not all—DLs are syntactic variants of ML. In general, an object corresponds to a possible world, a concept corresponds to a modal proposition, and a role-bounded quantifier corresponds to a modal operator with that role as its accessibility relation.
Operations on roles (such as composition, inversion, etc.) correspond to the modal operations used in dynamic logic.
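The correspondence sketched above can be written out explicitly. For the DL ALC, the role-bounded quantifiers map to the diamond and box of a multi-modal logic, with the role R serving as the accessibility relation:

```latex
\exists R.\,C \;\leftrightsquigarrow\; \Diamond_R\, C
\qquad\qquad
\forall R.\,C \;\leftrightsquigarrow\; \Box_R\, C
```

That is, an object satisfies $\exists R.\,C$ exactly when some $R$-successor (some world reachable via $R$) satisfies $C$, and $\forall R.\,C$ when all $R$-successors do.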
Examples:
Temporal description logic:
Temporal description logic represents—and allows reasoning about—time-dependent concepts, and many different approaches to this problem exist. For example, a description logic might be combined with a modal temporal logic such as linear temporal logic.
**Aflatoxin B1 exo-8,9-epoxide**
Aflatoxin B1 exo-8,9-epoxide:
Aflatoxin B1 exo-8,9-epoxide is a toxic metabolite of aflatoxin B1. In the liver, aflatoxin B1 is metabolized to aflatoxin B1 exo-8,9-epoxide by cytochrome P450 enzymes. The resulting epoxide can react with guanine in DNA to cause DNA damage.
**Turing (microarchitecture)**
Turing (microarchitecture):
Turing is the codename for a graphics processing unit (GPU) microarchitecture developed by Nvidia. It is named after the prominent mathematician and computer scientist Alan Turing. The architecture was first introduced in August 2018 at SIGGRAPH 2018 in the workstation-oriented Quadro RTX cards, and one week later at Gamescom in consumer GeForce RTX 20 series graphics cards. Building on the preliminary work of its HPC-exclusive predecessor, the Turing architecture introduces the first consumer products capable of real-time ray tracing, a longstanding goal of the computer graphics industry. Key elements include dedicated artificial intelligence processors ("Tensor cores") and dedicated ray tracing processors ("RT cores"). Turing leverages DXR, OptiX, and Vulkan for access to ray-tracing. In February 2019, Nvidia released the GeForce 16 series of GPUs, which utilizes the new Turing design but lacks the RT and Tensor cores.
Turing (microarchitecture):
Turing is manufactured using TSMC's 12 nm FinFET semiconductor fabrication process. The high-end TU102 GPU includes 18.6 billion transistors fabricated using this process. Turing also uses GDDR6 memory from Samsung Electronics, and previously Micron Technology.
Details:
The Turing microarchitecture combines multiple types of specialized processor core, and enables an implementation of limited real-time ray tracing. This is accelerated by the use of new RT (ray-tracing) cores, which are designed to process quadtrees and spherical hierarchies, and speed up collision tests with individual triangles.
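To make concrete what the RT cores accelerate: traversing a bounding volume hierarchy means repeatedly testing a ray against axis-aligned bounding boxes before any per-triangle test is attempted. A minimal software sketch of that box test (the standard "slab" method, shown here only for illustration; the hardware units perform this style of test, plus ray–triangle tests, in dedicated circuitry):

```python
# Ray vs. axis-aligned bounding box intersection via the slab method.
# A ray is origin + t * direction; the box is given by its min/max corners.
def ray_hits_aabb(origin, direction, box_min, box_max):
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            # Ray is parallel to this slab: it misses unless the origin
            # already lies between the two planes.
            if not (lo <= o <= hi):
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))   # latest entry into any slab
        t_far  = min(t_far,  max(t1, t2))   # earliest exit from any slab
    # Hit if the entry interval is non-empty and not entirely behind the ray.
    return t_near <= t_far and t_far >= 0.0

print(ray_hits_aabb((0, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # True
print(ray_hits_aabb((0, 5, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # False
```

BVH traversal prunes entire subtrees whenever this test fails, which is why hardware acceleration of it pays off.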
Details:
Features in Turing:
- CUDA cores (SM, Streaming Multiprocessor), Compute Capability 7.5: traditional rasterized shaders and compute; concurrent execution of integer and floating-point operations (inherited from Volta)
- Ray-tracing (RT) cores: bounding volume hierarchy acceleration; shadows, ambient occlusion, lighting, reflections
- Tensor (AI) cores: artificial intelligence; large matrix operations; Deep Learning Super Sampling (DLSS)
- Memory controller with GDDR6/HBM2 support
- DisplayPort 1.4a with Display Stream Compression (DSC) 1.2
- PureVideo Feature Set J hardware video decoding
- GPU Boost 4
- NVLink Bridge with VRAM stacking: pooling memory from multiple cards
- VirtualLink VR
- NVENC hardware encoding

The GDDR6 memory is produced by Samsung Electronics for the Quadro RTX series. The RTX 20 series initially launched with Micron memory chips, before switching to Samsung chips by November 2018.
Details:
Rasterization Nvidia reported rasterization (CUDA) performance gains for existing titles of approximately 30–50% over the previous generation.
Details:
Ray-tracing The ray-tracing performed by the RT cores can be used to produce reflections, refractions and shadows, replacing traditional raster techniques such as cube maps and depth maps. Instead of replacing rasterization entirely, however, the information gathered from ray-tracing can be used to augment the shading with information that is much more photo-realistic, especially in regards to off-camera action. Nvidia said the ray-tracing performance increased about 8 times over the previous consumer architecture, Pascal.
Details:
Tensor cores Generation of the final image is further accelerated by the Tensor cores, which are used to fill in the blanks in a partially rendered image, a technique known as de-noising. The Tensor cores apply the results of deep learning, for example to increase the resolution of images generated by a specific application or game. In the Tensor cores' primary usage, a problem to be solved is analyzed on a supercomputer, which is taught by example what results are desired and determines a method to achieve those results; that method is then executed on the consumer's Tensor cores. These methods are delivered to consumers via driver updates. The supercomputer itself uses a large number of Tensor cores.
Development:
Turing's development platform is called RTX. RTX ray-tracing features can be accessed using Microsoft's DXR, OptiX, as well as Vulkan extensions (the last also being available in Linux drivers). It includes access to AI-accelerated features through NGX. The Mesh Shader and Shading Rate Image functionalities are accessible using DX12, Vulkan and OpenGL extensions on Windows and Linux platforms. The Windows 10 October 2018 update includes the public release of DirectX Raytracing.
Products using Turing:
- GeForce MX series: GeForce MX450 (Mobile), GeForce MX550 (Mobile)
- GeForce 16 series: GeForce GTX 1630, GeForce GTX 1650 (Mobile), GeForce GTX 1650, GeForce GTX 1650 Super, GeForce GTX 1650 Ti (Mobile), GeForce GTX 1660, GeForce GTX 1660 Super, GeForce GTX 1660 Ti (Mobile), GeForce GTX 1660 Ti
- GeForce 20 series: GeForce RTX 2060 (Mobile), GeForce RTX 2060, GeForce RTX 2060 Super, GeForce RTX 2070 (Mobile), GeForce RTX 2070, GeForce RTX 2070 Super (Mobile), GeForce RTX 2070 Super, GeForce RTX 2080 (Mobile), GeForce RTX 2080, GeForce RTX 2080 Super (Mobile), GeForce RTX 2080 Super, GeForce RTX 2080 Ti, Titan RTX
- Nvidia Quadro: Quadro RTX 3000 (Mobile), Quadro RTX 4000 (Mobile), Quadro RTX 4000, Quadro RTX 5000 (Mobile), Quadro RTX 5000, Quadro RTX 6000 (Mobile), Quadro RTX 6000, Quadro RTX 8000, Quadro RTX T400, Quadro RTX T400 4GB, Quadro RTX T600, Quadro RTX T1000, Quadro RTX T1000 8GB
- Nvidia Tesla: Tesla T4
**The F.O.D. Control Corporation**
The F.O.D. Control Corporation:
The F.O.D. Control Corporation is a private company that serves the aerospace industry's need for equipment and information to address FOD (Foreign Object Damage/Debris) issues in airport and manufacturing environments.
Based in Dallas, Texas, the company helps the aerospace industry implement or improve FOD prevention programs by providing educational and training materials and equipment. This includes the FOD Prevention Program manual, "MAKE IT FOD FREE", the online news and information resource FODNews.com, and the online FOD Prevention Program resource FODProgram.com.
History:
Founder Gary Chaplin started the business in March 1983 in San Bernardino, California, in response to the US Air Force's need for improved methods to remove debris from their airfield pavements and aircraft operating surfaces.
History:
The first product introduced was a truck-bumper-mounted permanent magnetic sweeper (trade name Power Bar) at Norton AFB in California. The product replaced its less efficient predecessor, the gas-powered electromagnetic sweeper, at over 90% of military airfields and many civilian airports. The firm's military customers requested help in clearing their airfields of more than just ferrous metal material. Sand, rocks, broken paving materials and non-ferrous work-generated hardware presented a problem that conventional truck-mounted vacuum sweepers and FOD walks were not thoroughly handling.
History:
Chaplin came across a traction-driven lawn and leaf sweeper, and realized that if made larger it would offer a solution for sweeping non-ferrous debris. He invented the Fodbuster RockSweeper in 1987. A towable, traction-driven sweeper with a brush drive mechanism geared to its wheels, the Fodbuster was brought to market to serve military and civilian airfields. Soon after the RockSweeper introduction, the company embarked on an expansion of its product offerings to meet the growing requirements of the aviation industry. New lines were added, including walk-behind vacuum sweepers, small parts organizers, debris disposal containers, and promotional and awareness training materials such as posters, decals and downloadable PowerPoint training presentations. In 2004 Chaplin published "MAKE IT FOD FREE, The Ultimate FOD Prevention Program Manual". Following the "MAKE IT FOD FREE" book project, FODNews.com was launched as a free online news publication. Specializing in news and information about FOD control methods and FOD-related issues, it includes links to news articles, instructional materials, research reports, photos, and videos.
History:
In 2012 the company was purchased by Garth Hughes.
In 2014 Hughes launched FODProgram.com, the first comprehensive online resource for FOD Prevention Program Management. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |