**CW Skimmer**
CW Skimmer is a multi-channel Morse code (CW) decoder and analyzer program for Microsoft Windows. It was created by Alex Shovkoplyas, VE3NEA, and is marketed by Afreet Software, Inc. CW Skimmer uses a sensitive CW decoding algorithm based on the methods of Bayesian statistics, which allows simultaneous decoding of all CW signals in the receiver passband. The call signs are extracted from the decoded messages and displayed next to the signal traces on the waterfall. CW Skimmer also includes a DSP processor with a noise blanker, automatic gain control, and variable-bandwidth CW filter. It accepts TCP/IP network connections from telnet clients, presenting an interface similar to those of DX cluster programs.
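As a sketch of that telnet-style interface, the following minimal Java client connects and prints whatever spot lines the server sends. The host, port (7300 is assumed) and login convention are assumptions for illustration, not taken from the official documentation:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Minimal telnet-style client for a CW Skimmer server (hypothetical setup).
// Since the interface resembles a DX cluster, we send a callsign once and
// then simply print each line the server emits.
public class SkimmerClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 7300);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            out.println("MYCALL"); // cluster-style servers typically prompt for a callsign
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // e.g. a spot line with frequency and call sign
            }
        }
    }
}
```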
**Vladimir Okrepilov**
Vladimir V. Okrepilov (born February 23, 1944, in Leningrad) is a Russian economist and professor.
General Director of the Federal budgetary institution State Regional Centre for Standardization, Metrology and Testing in St. Petersburg and Leningrad Region (State Centre "Test-St.Petersburg").
Work experience:
In 1970 he graduated from the Leningrad Mechanical Institute (speciality: "Mechanical equipment of automatic installations").
From 1965: Leningrad Plant of Radio Engineering Equipment - mechanic, technician, process engineer, senior design engineer.
From 1970 to 1979: public works.
From 1979: Chief Engineer at the D.I. Mendeleyev All-Russian Scientific Research Institute of Metrology (VNIIM).
From 1986: Director of the Leningrad Center for Standardization and Metrology of the USSR Gosstandart.
From 1990 to 1992: General Director of the Leningrad Centre for Product Testing ("Soyuztest – Leningrad") of the USSR Gosstandart.
From 1992: General Director of the Russian Centre for Testing and Certification in St. Petersburg ("Rostest-St.Petersburg").
From 1992 to 2001: General Director of the Centre for Testing and Certification in St. Petersburg (State Centre "Test-St.Petersburg") of Rosstandart.
From 2001 to 2011: General Director of the Federal State Institution "Centre for Testing and Certification in St. Petersburg" (State Centre "Test-St.Petersburg") of Rosstandart.
From 2011 to present: General Director of the Federal Budgetary Institution State Regional Centre for Standardization, Metrology and Testing in St. Petersburg and Leningrad Region (State Centre "Test-St.Petersburg") of Rosstandart.
Academic and government awards, prizes, honorary titles, foreign awards:
Laureate of the RAS Award for the best achievements in science popularization in 2009 (2010). State awards: Order "For Merits to the Fatherland", IV class (1999); Order of Honor (2009); Order of Friendship (2016); Order of Friendship of Peoples (1988); medals, including "Twenty Years of Victory in the Great Patriotic War of 1941–1945" (1965) and "For valorous work…".
Scientific activity:
Scientist-economist, founder of a new field of economic science – the Economics of Quality – based on the use of tools of quality management, standardization and metrology to ensure socioeconomic progress and improvement of the quality of life; leader of the scientific school on the Economics of Quality. He is the author of works on increasing the efficiency of regional development through the implementation of quality management models at the meso- and macro-levels. Under the leadership of V.V. Okrepilov, the Comprehensive Scientific and Technical Development Program of Northwest Russia for the period up to 2030 was developed for the first time in Russia. V.V. Okrepilov is one of the authors of the "Strategy on Social and Economic Development of St. Petersburg for the period of up to 2030", approved by the Decree of the St. Petersburg Government of May 13, 2014, No. 355. Under his leadership, fundamental scientific research and practical calculations of the economic benefits gained from various activities in the fields of standardization and metrology were carried out for the first time, and a unique national quality management system based on MBO planning methods, aimed at accelerating the modernization of the national economy, was developed; the system has no direct analogues in the world. He has made a decisive scientific and organizational contribution to the development of a unique, scientifically based multilevel system of continuous personnel training in the Economics of Quality. The scientific school "Economics and Quality Management" led by him is entered in the Register of the Leading Scientific and Pedagogical Schools of St. Petersburg.
**Passwordless authentication**
Passwordless authentication is an authentication method in which a user can log in to a computer system without entering (and having to remember) a password or any other knowledge-based secret. In most common implementations users are asked to enter their public identifier (username, phone number, email address, etc.) and then complete the authentication process by providing a secure proof of identity through a registered device or token.
Passwordless authentication methods typically rely on public-key cryptography infrastructure, where the public key is provided during registration to the authenticating service (remote server, application or website) while the private key is kept on a user's device (PC, smartphone or an external security token) and can be accessed only by providing a biometric signature or another authentication factor which is not knowledge-based. These factors classically fall into two categories:
- Ownership factors ("Something the user has") such as a cellular phone, OTP token, smart card or a hardware token.
- Inherence factors ("Something the user is") like fingerprints, retinal scans, face or voice recognition and other biometric identifiers.
Some designs might also accept a combination of other factors such as geo-location, network address, behavioral patterns and gestures, as long as no memorized passwords are involved.
Passwordless authentication is sometimes confused with multi-factor authentication (MFA), since both use a wide variety of authentication factors, but while MFA is often used as an added layer of security on top of password-based authentication, passwordless authentication does not require a memorized secret and usually uses just one highly secure factor to authenticate identity, making it faster and simpler for users.
"Passwordless MFA" is the term used when both approaches are employed, and the authentication flow is both passwordless and uses multiple factors, providing the highest security level when implemented correctly.
History:
The notion that passwords should become obsolete has been circulating in computer science since at least 2004. Bill Gates, speaking at the 2004 RSA Conference, predicted the demise of passwords, saying "they just don't meet the challenge for anything you really want to secure." In 2011 IBM predicted that, within five years, "You will never need a password again." Matt Honan, a journalist at Wired who was the victim of a hacking incident, wrote in 2012 that "The age of the password has come to an end." Heather Adkins, manager of Information Security at Google, said in 2013 that "passwords are done at Google." Eric Grosse, VP of security engineering at Google, stated that "passwords and simple bearer tokens, such as cookies, are no longer sufficient to keep users safe." Christopher Mims, writing in The Wall Street Journal, said the password "is finally dying" and predicted its replacement by device-based authentication; however, after purposefully revealing his Twitter password, he was forced to change his cellphone number.
Avivah Litan of Gartner said in 2014 "Passwords were dead a few years ago. Now they are more than dead." The reasons given often include reference to the usability as well as security problems of passwords.
Bonneau et al. systematically compared web passwords to 35 competing authentication schemes in terms of their usability, deployability, and security. (The technical report is an extended version of the peer-reviewed paper by the same name.) Their analysis shows that most schemes do better than passwords on security, some schemes do better and some worse with respect to usability, while every scheme does worse than passwords on deployability. The authors conclude with the following observation: "Marginal gains are often not sufficient to reach the activation energy necessary to overcome significant transition costs, which may provide the best explanation of why we are likely to live considerably longer before seeing the funeral procession for passwords arrive at the cemetery." Recent technological advancements (e.g. the proliferation of biometric devices and smartphones) and changing business culture (for example, the acceptance of biometrics and a decentralized workforce) are continuously promoting the adoption of passwordless authentication. Leading tech companies (Microsoft, Google) and industry-wide initiatives are developing better architectures and practices to bring it to wider use, with many taking a cautious approach, keeping passwords behind the scenes in some use cases. The development of open standards such as FIDO2 and WebAuthn has further driven adoption of passwordless technologies such as Windows Hello. On June 24, 2020, Apple announced that in Safari, Face ID or Touch ID would be available as a WebAuthn platform authenticator for passwordless login.
Mechanism:
A user must first register with a system before their identity can be verified. A passwordless registration flow may include the following steps:
- Registration request: When a user attempts to register with a website, the server sends a registration request to the user's device.
- Authentication factor selection: When the user's device receives the registration request, it sets up a method for authenticating the user. For example, the device may use biometrics like a fingerprint scanner or facial recognition for user identification.
- Key generation: The user's device generates a public/private key pair and sends the public key to the server for future verification.
Once they have registered, a user can log in to the system via the following process:
- Authentication challenge: The server sends an authentication challenge to the user's device when the user attempts to log into the site.
- User authentication: The user proves their identity to their device using the biometric scanner, unlocking their private key.
- Challenge response: The user's device digitally signs a response to the authentication challenge with the user's private key.
- Response validation: The server uses the device's public key to verify the digital signature and grants access to the user's account.
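The cryptographic core of this challenge–response exchange can be sketched with the standard Java security APIs. This is a minimal illustration, not WebAuthn itself (which additionally specifies attestation, origin binding and message formats); both sides run in one process purely to show the mechanics:

```java
import java.security.*;

// Minimal challenge-response sketch of the passwordless login flow above.
// In a real deployment the private key never leaves the authenticator and
// is unlocked by a biometric or PIN.
public class PasswordlessDemo {
    public static void main(String[] args) throws Exception {
        // Registration: device generates a key pair, server stores the public key.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256);
        KeyPair deviceKeys = kpg.generateKeyPair();
        PublicKey storedOnServer = deviceKeys.getPublic();

        // Login, step 1: server sends a random challenge.
        byte[] challenge = new byte[32];
        new SecureRandom().nextBytes(challenge);

        // Login, step 2: device signs the challenge with its private key.
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(deviceKeys.getPrivate());
        signer.update(challenge);
        byte[] response = signer.sign();

        // Login, step 3: server verifies the signature with the stored public key.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(storedOnServer);
        verifier.update(challenge);
        System.out.println("Authenticated: " + verifier.verify(response));
    }
}
```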
Benefits and drawbacks:
Proponents point out several unique benefits over other authentication methods:
- Greater security – passwords are known to be a weak point in computer systems (due to reuse, sharing, cracking, spraying, etc.) and are regarded as a top attack vector responsible for a large share of security breaches.
- Better user experience – users are not required to remember complicated passwords and comply with different security policies, nor must they periodically renew passwords.
- Reduced IT costs – since no password storage and management is needed, IT teams are no longer burdened by setting password policies, detecting leaks, resetting forgotten passwords, and complying with password storage regulation.
- Better visibility of credential use – since credentials are tied to a specific device or inherent user attribute, they can't be used en masse and access management becomes tighter.
- Scalability – managing multiple logins without additional password fatigue or complicated registration.
Others point out operational and cost-related disadvantages:
- Implementation costs – although it is accepted that passwordless authentication leads to savings in the long term, deployment costs are currently a hindering factor for many potential users. Cost is associated with the need to deploy an authentication mechanism on an existing user directory and sometimes with the additional hardware deployed to users (e.g. OTP tokens or security keys).
- Training and expertise needed – while most password management systems are built similarly and have been used for many years, passwordless authentication requires adaptation from both IT teams and end users.
- Single point of failure – in particular, implementations using OTPs or push notifications to cellular device applications can create a challenge for the end user if a device is broken, lost, stolen or simply upgraded.
**Deflection (engineering)**
In structural engineering, deflection is the degree to which a part of a structural element is displaced under a load (because it deforms). It may refer to an angle or a distance.
The deflection distance of a member under a load can be calculated by integrating the function that mathematically describes the slope of the deflected shape of the member under that load. Standard formulas exist for the deflection of common beam configurations and load cases at discrete locations.
Otherwise methods such as virtual work, direct integration, Castigliano's method, Macaulay's method or the direct stiffness method are used. The deflection of beam elements is usually calculated on the basis of the Euler–Bernoulli beam equation while that of a plate or shell element is calculated using plate or shell theory.
An example of the use of deflection in this context is in building construction: architects and engineers select materials for various applications partly on the basis of how much they will deflect under the expected loads.
Beam deflection for various loads and supports:
Beams can vary greatly in their geometry and composition. For instance, a beam may be straight or curved. It may be of constant cross section, or it may taper. It may be made entirely of the same material (homogeneous), or it may be composed of different materials (composite). Some of these things make analysis difficult, but many engineering applications involve cases that are not so complicated. Analysis is simplified if:
- The beam is originally straight, and any taper is slight
- The beam experiences only linear elastic deformation
- The beam is slender (its length to height ratio is greater than 10)
- Only small deflections are considered (maximum deflection less than 1/10 of the span)
In this case, the equation governing the beam's deflection $w$ can be approximated as:

$$\frac{d^2 w(x)}{dx^2} = \frac{M(x)}{E(x)\,I(x)}$$

where the second derivative of its deflected shape with respect to $x$ ($x$ being the horizontal position along the length of the beam) is interpreted as its curvature, $E$ is the Young's modulus, $I$ is the area moment of inertia of the cross-section, and $M$ is the internal bending moment in the beam.
If, in addition, the beam is not tapered and is homogeneous, and is acted upon by a distributed load $q$, the above expression can be written as:

$$E I \frac{d^4 w(x)}{dx^4} = q(x)$$

This equation can be solved for a variety of loading and boundary conditions. A number of simple examples are shown below. The formulas expressed are approximations developed for long, slender, homogeneous, prismatic beams with small deflections, and linear elastic properties. Under these restrictions, the approximations should give results within 5% of the actual deflection.
Cantilever beams:
Cantilever beams have one end fixed, so that the slope and deflection at that end must be zero.
End-loaded cantilever beams:
The elastic deflection $\delta$ and angle of deflection $\phi$ (in radians) at the free end B of a (weightless) cantilever beam with an end load can be calculated using:

$$\delta_B = \frac{F L^3}{3 E I} \qquad \phi_B = \frac{F L^2}{2 E I}$$

where
- $F$ = force acting on the tip of the beam
- $L$ = length of the beam (span)
- $E$ = modulus of elasticity
- $I$ = area moment of inertia of the beam's cross section
Note that if the span doubles, the deflection increases eightfold. The deflection at any point $x$ along the span of an end-loaded cantilevered beam can be calculated using:

$$\delta_x = \frac{F x^2}{6 E I}(3L - x) \qquad \phi_x = \frac{F x}{2 E I}(2L - x)$$

Note: at $x = L$ (the end of the beam), the $\delta_x$ and $\phi_x$ equations are identical to the $\delta_B$ and $\phi_B$ equations above.
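As a worked example under assumed values – a 1 kN load at the tip of a 2 m steel cantilever with $E = 200\ \text{GPa}$ and $I = 1\times 10^{-6}\ \text{m}^4$:

$$\delta_B = \frac{F L^3}{3 E I} = \frac{(1000\ \text{N})(2\ \text{m})^3}{3\,(200\times 10^{9}\ \text{Pa})(1\times 10^{-6}\ \text{m}^4)} = \frac{8000}{6\times 10^{5}}\ \text{m} \approx 13.3\ \text{mm}$$

Doubling the span to 4 m would multiply this figure by $2^3 = 8$, as noted above.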
Uniformly loaded cantilever beams:
The deflection at the free end B of a cantilevered beam under a uniform load is given by:

$$\delta_B = \frac{q L^4}{8 E I} \qquad \phi_B = \frac{q L^3}{6 E I}$$

where
- $q$ = uniform load on the beam (force per unit length)
- $L$ = length of the beam
- $E$ = modulus of elasticity
- $I$ = area moment of inertia of cross section
The deflection at any point $x$ along the span of a uniformly loaded cantilevered beam can be calculated using:

$$\delta_x = \frac{q x^2}{24 E I}\left(6L^2 - 4Lx + x^2\right) \qquad \phi_x = \frac{q x}{6 E I}\left(3L^2 - 3Lx + x^2\right)$$

Simply supported beams:
Simply supported beams have supports under their ends which allow rotation, but not deflection.
Center-loaded simple beams:
The deflection at any point $x$ along the span of a center-loaded simply supported beam can be calculated using:

$$\delta_x = \frac{F x}{48 E I}\left(3L^2 - 4x^2\right) \quad \text{for } 0 \le x \le \tfrac{L}{2}$$

The special case of elastic deflection at the midpoint C of a beam, loaded at its center and supported by two simple supports, is then given by:

$$\delta_C = \frac{F L^3}{48 E I}$$

where
- $F$ = force acting on the center of the beam
- $L$ = length of the beam between the supports
- $E$ = modulus of elasticity
- $I$ = area moment of inertia of cross section

Off-center-loaded simple beams:
The maximum elastic deflection on a beam supported by two simple supports, loaded at a distance $a$ from the closest support, is given by:

$$\delta_{\max} = \frac{F a \left(L^2 - a^2\right)^{3/2}}{9\sqrt{3}\, L E I}$$

where
- $F$ = force acting on the beam
- $L$ = length of the beam between the supports
- $E$ = modulus of elasticity
- $I$ = area moment of inertia of cross-section
- $a$ = distance from the load to the closest support
This maximum deflection occurs at a distance $x_1$ from the closest support and is given by:

$$x_1 = \sqrt{\frac{L^2 - a^2}{3}}$$

Uniformly loaded simple beams:
The elastic deflection (at the midpoint C) of a beam supported by two simple supports, under a uniform load, is given by:

$$\delta_C = \frac{5 q L^4}{384 E I}$$

where
- $q$ = uniform load on the beam (force per unit length)
- $L$ = length of the beam
- $E$ = modulus of elasticity
- $I$ = area moment of inertia of cross section
The deflection at any point $x$ along the span of a uniformly loaded simply supported beam can be calculated using:

$$\delta_x = \frac{q x}{24 E I}\left(L^3 - 2Lx^2 + x^3\right)$$

Change in length:
The change in length $\Delta L$ of the beam is generally negligible in structures, but can be calculated by integrating the slope function $\theta_x$, if the deflection function $\delta_x$ is known for all $x$:

$$\Delta L = -\frac{1}{2}\int_0^L \left(\theta(x)\right)^2 dx$$

where $\Delta L$ = change in length (always negative) and $\theta_x$ = slope function (first derivative of $\delta_x$). If the beam is uniform and the deflection at any point is known, this can be calculated without knowing other properties of the beam.
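The closed-form results above are easy to collect into a small utility; the following Java sketch (class and method names are illustrative) evaluates the four most common cases in consistent SI units:

```java
// Closed-form deflection formulas for the standard cases given above.
// Units must be consistent (e.g. N, m, Pa, m^4 -> deflection in metres).
public final class BeamDeflection {
    /** Tip deflection of a cantilever with end load F: F L^3 / (3 E I). */
    public static double cantileverEndLoad(double F, double L, double E, double I) {
        return F * Math.pow(L, 3) / (3 * E * I);
    }
    /** Tip deflection of a cantilever under uniform load q: q L^4 / (8 E I). */
    public static double cantileverUniform(double q, double L, double E, double I) {
        return q * Math.pow(L, 4) / (8 * E * I);
    }
    /** Midspan deflection of a simply supported beam with central load F: F L^3 / (48 E I). */
    public static double simpleCenterLoad(double F, double L, double E, double I) {
        return F * Math.pow(L, 3) / (48 * E * I);
    }
    /** Midspan deflection of a simply supported beam under uniform load q: 5 q L^4 / (384 E I). */
    public static double simpleUniform(double q, double L, double E, double I) {
        return 5 * q * Math.pow(L, 4) / (384 * E * I);
    }

    public static void main(String[] args) {
        // Same numbers as the worked cantilever example: prints ~0.01333 (metres).
        System.out.println(cantileverEndLoad(1000, 2, 200e9, 1e-6));
    }
}
```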
Units:
The formulas supplied above require the use of a consistent set of units. Most calculations will be made in the International System of Units (SI) or US customary units, although there are many other systems of units.
International system (SI):
- Force: newtons (N)
- Length: metres (m)
- Modulus of elasticity: N/m² (Pa)
- Moment of inertia: m⁴
US customary units (US):
- Force: pounds force (lbf)
- Length: inches (in)
- Modulus of elasticity: lbf/in²
- Moment of inertia: in⁴
Others:
Other units may be used as well, as long as they are self-consistent. For example, sometimes the kilogram-force (kgf) unit is used to measure loads. In such a case, the modulus of elasticity must be converted to kgf/m².
Structural deflection:
Building codes determine the maximum deflection, usually as a fraction of the span, e.g. 1/400 or 1/600. Either the strength limit state (allowable stress) or the serviceability limit state (deflection considerations among others) may govern the minimum dimensions of the member required.
The deflection must be considered for the purpose of the structure. When designing a steel frame to hold a glazed panel, one allows only minimal deflection to prevent fracture of the glass.
The deflected shape of a beam can be obtained by integrating its moment diagram twice, then rotating and translating the result to enforce the support conditions.
**Jakarta Enterprise Beans**
Jakarta Enterprise Beans (EJB; formerly Enterprise JavaBeans) is one of several Java APIs for the modular construction of enterprise software. EJB is a server-side software component that encapsulates the business logic of an application. An EJB web container provides a runtime environment for web-related software components, including computer security, Java servlet lifecycle management, transaction processing, and other web services. The EJB specification is a subset of the Java EE specification.
Specification:
The EJB specification was originally developed in 1997 by IBM and later adopted by Sun Microsystems (EJB 1.0 and 1.1) in 1999 and enhanced under the Java Community Process as JSR 19 (EJB 2.0), JSR 153 (EJB 2.1), JSR 220 (EJB 3.0), JSR 318 (EJB 3.1) and JSR 345 (EJB 3.2).
The EJB specification provides a standard way to implement the server-side (also called "back-end") 'business' software typically found in enterprise applications (as opposed to 'front-end' user interface software). Such software addresses the same types of problem, and solutions to these problems are often repeatedly re-implemented by programmers. Jakarta Enterprise Beans is intended to handle such common concerns as persistence, transactional integrity and security in a standard way, leaving programmers free to concentrate on the particular parts of the enterprise software at hand.
General responsibilities:
The EJB specification details how an application server provides the following responsibilities:
- Transaction processing
- Integration with the persistence services offered by Jakarta Persistence (JPA)
- Concurrency control
- Event-driven programming using Jakarta Messaging (JMS) and Jakarta Connectors (JCA)
- Asynchronous method invocation
- Job scheduling
- Naming and directory services via the Java Naming and Directory Interface (JNDI)
- Interprocess communication using RMI-IIOP and web services
- Security (JCE and JAAS)
- Deployment of software components in an application server
Additionally, the Jakarta Enterprise Beans specification defines the roles played by the EJB container and the EJBs as well as how to deploy the EJBs in a container. Note that the EJB specification does not detail how an application server provides persistence (a task delegated to the JPA specification), but instead details how business logic can easily integrate with the persistence services offered by the application server.
History:
Businesses found that using EJBs to encapsulate business logic brought a performance penalty. This is because the original specification allowed only for remote method invocation through CORBA (and optionally other protocols), even though the large majority of business applications actually do not require this distributed computing functionality. The EJB 2.0 specification addressed this concern by adding the concept of local interfaces, which could be called directly without performance penalties by applications that were not distributed over multiple servers. The EJB 3.0 specification (JSR 220) was a departure from its predecessors, following a new light-weight paradigm. EJB 3.0 shows an influence from Spring in its use of plain Java objects, and its support for dependency injection to simplify configuration and integration of heterogeneous systems. EJB 3.0, along with other EJB versions, can be integrated with MuleSoft 4 using the MuleSoft-certified PlektonLabs EJB Connector. Gavin King, the creator of Hibernate, participated in the EJB 3.0 process and is an outspoken advocate of the technology. Many features originally in Hibernate were incorporated in the Java Persistence API, the replacement for entity beans in EJB 3.0. The EJB 3.0 specification relies heavily on the use of annotations (a feature added to the Java language with its 5.0 release) and convention over configuration to enable a much less verbose coding style. Accordingly, in practical terms EJB 3.0 is much more lightweight and nearly a completely new API, bearing little resemblance to the previous EJB specifications.
Example:
The following shows a basic example of what an EJB looks like in code: The above defines a service class for persisting a Customer object (via O/R mapping). The EJB takes care of managing the persistence context and the addCustomer() method is transactional and thread-safe by default. As demonstrated, the EJB focuses only on business logic and persistence and knows nothing about any particular presentation.
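The original listing is not reproduced in this text; the following is a minimal sketch consistent with the surrounding description, assuming a stateless CustomerService bean that persists a Customer entity (the names match the later reference to "the CustomerService bean" in this article, but the code itself is illustrative):

```java
// Customer.java - a minimal JPA entity (illustrative)
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Customer {
    @Id @GeneratedValue
    private Long id;
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```

```java
// CustomerService.java - a stateless session bean
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Transactional and thread-safe by default; the container manages the
// persistence context and wraps addCustomer() in a transaction.
@Stateless
public class CustomerService {

    @PersistenceContext
    private EntityManager entityManager;

    public void addCustomer(Customer customer) {
        entityManager.persist(customer);
    }
}
```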
Such an EJB can be used by a class in e.g. the web layer as follows: The above defines a JavaServer Faces (JSF) backing bean in which the EJB is injected by means of the @EJB annotation. Its addCustomer method is typically bound to some UI component, such as a button. Contrary to the EJB, the backing bean does not contain any business logic or persistence code, but delegates such concerns to the EJB. The backing bean does know about a particular presentation, of which the EJB had no knowledge.
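Again a sketch rather than the original listing, continuing the names from the example above (the navigation outcome is illustrative):

```java
// CustomerBacking.java - a JSF backing bean (illustrative names)
import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;

@ManagedBean
public class CustomerBacking {

    @EJB
    private CustomerService customerService; // injected by the container

    // Typically bound to a UI component such as a button; contains no
    // business logic or persistence code itself, it only delegates to the EJB.
    public String addCustomer() {
        Customer customer = new Customer();
        customer.setName("New customer");
        customerService.addCustomer(customer);
        return "customer_overview"; // illustrative navigation outcome
    }
}
```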
Types of Enterprise Beans:
An EJB container holds two major types of beans: Session Beans that can be either "Stateful", "Stateless" or "Singleton" and can be accessed via either a Local (same JVM) or Remote (different JVM) interface or directly without an interface, in which case local semantics apply. All session beans support asynchronous execution for all views (local/remote/no-interface).
Message Driven Beans (MDBs, also known as Message Beans). MDBs also support asynchronous execution, but via a messaging paradigm.
Session beans:
Stateful session beans:
Stateful Session Beans are business objects having state: that is, they keep track of which calling client they are dealing with throughout a session and of the history of its requests, and thus access to the bean instance is strictly limited to only one client during its lifetime. If concurrent access to a single bean is attempted anyway, the container serializes those requests, but via the @AccessTimeout annotation the container can instead throw an exception. Stateful session beans' state may be persisted (passivated) automatically by the container to free up memory after the client hasn't accessed the bean for some time. The JPA extended persistence context is explicitly supported by Stateful Session Beans.
Examples: Checking out in a web store might be handled by a stateful session bean that would use its state to keep track of where the customer is in the checkout process, possibly holding locks on the items the customer is purchasing (from a system architecture's point of view, it would be less ideal to have the client manage those locks).
Stateless session beans:
Stateless Session Beans are business objects that do not have state associated with them. However, access to a single bean instance is still limited to only one client at a time; concurrent access to the bean is prohibited. If concurrent access to a single bean is attempted, the container simply routes each request to a different instance. This makes a stateless session bean automatically thread-safe. Instance variables can be used during a single method call from a client to the bean, but the contents of those instance variables are not guaranteed to be preserved across different client method calls. Instances of Stateless Session Beans are typically pooled. If a second client accesses a specific bean right after a method call on it made by a first client has finished, it might get the same instance. The lack of overhead to maintain a conversation with the calling client makes them less resource-intensive than stateful beans.
Examples: Sending an e-mail to customer support might be handled by a stateless bean, since this is a one-off operation and not part of a multi-step process.
A user of a website clicking on a "keep me informed of future updates" box may trigger a call to an asynchronous method of the session bean to add the user to a list in the company's database (this call is asynchronous because the user does not need to wait to be informed of its success or failure).
Fetching multiple independent pieces of data for a website, like a list of products and the history of the current user might be handled by asynchronous methods of a session bean as well (these calls are asynchronous because they can execute in parallel that way, which potentially increases performance). In this case, the asynchronous method will return a Future instance.
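A sketch of such an asynchronous method using the @Asynchronous annotation and the Future-returning pattern described above (class name and data are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Future;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

// The container runs fetchProductNames() on its own thread; the caller
// immediately receives a Future and can collect the result later,
// allowing several such calls to execute in parallel.
@Stateless
public class CatalogService {

    @Asynchronous
    public Future<List<String>> fetchProductNames() {
        List<String> names = Arrays.asList("anvil", "rocket skates"); // stand-in for a real query
        return new AsyncResult<>(names); // EJB's Future wrapper for async results
    }
}
```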
Singleton session beans:
Singleton Session Beans are business objects having a global shared state within a JVM. Concurrent access to the one and only bean instance can be controlled by the container (container-managed concurrency, CMC) or by the bean itself (bean-managed concurrency, BMC). CMC can be tuned using the @Lock annotation, which designates whether a read lock or a write lock will be used for a method call. Additionally, Singleton Session Beans can explicitly request to be instantiated when the EJB container starts up, using the @Startup annotation.
Examples: Loading a global daily price list that will be the same for every user might be done with a singleton session bean, since this prevents the application having to do the same database query over and over again.
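A sketch of the price-list example as a singleton session bean (names and values are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import javax.annotation.PostConstruct;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;
import javax.ejb.Startup;

// One shared instance per JVM; container-managed concurrency is the
// default, and @Lock tunes it per method.
@Singleton
@Startup // created eagerly when the EJB container starts
public class PriceList {

    private Map<String, Double> prices;

    @PostConstruct
    void load() {
        prices = new HashMap<>(); // stand-in for the daily database query
        prices.put("anvil", 99.95);
    }

    @Lock(LockType.READ) // concurrent readers are allowed
    public Double priceOf(String product) {
        return prices.get(product);
    }

    @Lock(LockType.WRITE) // exclusive access while the list is replaced
    public void refresh(Map<String, Double> newPrices) {
        prices = newPrices;
    }
}
```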
Message-driven beans:
Message Driven Beans (MDBs) are business objects whose execution is triggered by messages instead of by method calls. The Message Driven Bean is used, among other things, to provide a high-level, easy-to-use abstraction for the lower-level JMS (Java Message Service) specification. It may subscribe to JMS message queues or message topics, which typically happens via the activationConfig attribute of the @MessageDriven annotation. They were added to EJB to allow event-driven processing. Unlike session beans, an MDB does not have a client view (local/remote/no-interface), i.e. clients cannot look up an MDB instance. An MDB just listens for any incoming message on, for example, a JMS queue or topic and processes it automatically. Only JMS support is required by the Java EE spec, but Message Driven Beans can support other messaging protocols. Such protocols may be asynchronous but can also be synchronous. Since session beans can also be synchronous or asynchronous, the prime difference between session beans and message-driven beans is not the synchronicity, but the difference between (object-oriented) method calling and messaging.
Examples: Sending a configuration update to multiple nodes might be done by sending a JMS message to a 'message topic' and could be handled by a Message Driven Bean listening to this topic (the message paradigm is used here since the sender does not need to know the number of consumers, their location, or even their exact type); a code sketch follows the examples below.
Submitting a job to a work cluster might be done by sending a JMS message to a 'message queue' and could also be handled by a Message Driven Bean, but this time listening to a queue (the message paradigm and the queue is used, since the sender doesn't have to care which worker executes the job, but it does need assurance that a job is only executed once).
Processing timing events from the Quartz scheduler can be handled by a Message Driven Bean; when a Quartz trigger fires, the MDB is automatically invoked. Since Java EE doesn't know about Quartz by default, a JCA resource adapter would be needed and the MDB would be annotated with a reference to this.
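A sketch of a message-driven bean for the configuration-update topic example above; the destination name and class are illustrative:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Subscribes to a JMS topic via the activationConfig attribute;
// the container invokes onMessage() for each incoming message.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Topic"),
    @ActivationConfigProperty(propertyName = "destinationLookup",
                              propertyValue = "jms/configUpdates")
})
public class ConfigUpdateBean implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String update = ((TextMessage) message).getText();
                System.out.println("Applying configuration update: " + update);
            }
        } catch (JMSException e) {
            throw new RuntimeException(e); // lets the container handle redelivery/rollback
        }
    }
}
```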
Execution:
EJBs are deployed in an EJB container, typically within an application server. The specification describes how an EJB interacts with its container and how client code interacts with the container/EJB combination. The EJB classes used by applications are included in the javax.ejb package. (The javax.ejb.spi package is a service provider interface used only by EJB container implementations.) Clients of EJBs do not instantiate those beans directly via Java's new operator, but instead have to obtain a reference via the EJB container. This reference is usually not a reference to the implementation bean itself, but to a proxy, which dynamically implements either the local or remote business interface that the client requested or a sub-type of the actual bean. The proxy can then be directly cast to the interface or bean respectively. A client is said to have a 'view' on the EJB, and the local interface, remote interface and bean sub-type itself respectively correspond to the local view, remote view and no-interface view.
This proxy is needed in order to give the EJB container the opportunity to transparently provide cross-cutting (AOP-like) services to a bean like transactions, security, interceptions, injections, and remoting. As an example, a client invokes a method on a proxy, which will first start a transaction with the help of the EJB container and then call the actual bean method. When the bean method returns, the proxy ends the transaction (i.e. by committing it or doing a rollback) and transfers control back to the client.
The EJB Container is responsible for ensuring the client code has sufficient access rights to an EJB. Security aspects can be declaratively applied to an EJB via annotations.
Transactions:
EJB containers must support both container-managed ACID transactions and bean-managed transactions. Container-managed transactions (CMT) are by default active for calls to session beans. That is, no explicit configuration is needed. This behavior may be declaratively tuned by the bean via annotations, and if needed such configuration can later be overridden in the deployment descriptor. Tuning includes switching off transactions for the whole bean or specific methods, or requesting alternative strategies for transaction propagation and starting or joining a transaction. Such strategies mainly deal with what should happen if a transaction is or isn't already in progress at the time the bean is called. The following variations are supported (see the sketch below). Alternatively, the bean can also declare via an annotation that it wants to handle transactions programmatically via the JTA API. This mode of operation is called bean-managed transactions (BMT), since the bean itself handles the transaction instead of the container.
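The variations referred to above are the six standard transaction attributes – REQUIRED (the default), REQUIRES_NEW, MANDATORY, SUPPORTS, NOT_SUPPORTED and NEVER – selectable per bean or per method. A sketch (bean and method names are illustrative):

```java
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class AuditService {

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void writeAuditRecord(String entry) {
        // Always runs in its own transaction, even if the caller has one,
        // so the audit record survives a rollback of the calling transaction.
    }

    @TransactionAttribute(TransactionAttributeType.SUPPORTS)
    public String readStatus() {
        // Joins the caller's transaction if present, otherwise runs without one.
        return "ok";
    }
}
```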
Events:
JMS (Java Message Service) is used to send messages from beans to clients, to let clients receive asynchronous messages from these beans. MDBs can be used to receive messages from clients asynchronously using either a JMS queue or a topic.
Naming and directory services:
As an alternative to injection, clients of an EJB can obtain a reference to the session bean's proxy object (the EJB stub) using the Java Naming and Directory Interface (JNDI). This alternative can be used in cases where injection is not available, such as in non-managed code or standalone remote Java SE clients, or when it's necessary to programmatically determine which bean to obtain.
JNDI names for EJB session beans are assigned by the EJB container via a standard scheme (sketched below; entries in square brackets denote optional parts). A single bean can be obtained by any name matching these patterns, depending on the 'location' of the client. Clients in the same module as the required bean can use the module scope and larger scopes, clients in the same application as the required bean can use the app scope and higher, etc.
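The scheme itself is not reproduced in this text; from the EJB 3.1 specification the portable name patterns have the following shape (square brackets denote optional parts):

```
java:global[/<app-name>]/<module-name>/<bean-name>[!<fully-qualified-interface-name>]
java:app/<module-name>/<bean-name>[!<fully-qualified-interface-name>]
java:module/<bean-name>[!<fully-qualified-interface-name>]
```

A client can then obtain a reference programmatically; a sketch for the CustomerService bean from the earlier example:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Sketch: module-scoped JNDI lookup of the CustomerService no-interface view.
public class CustomerServiceLocator {
    public static CustomerService lookup() {
        try {
            return (CustomerService) new InitialContext()
                    .lookup("java:module/CustomerService");
        } catch (NamingException e) {
            throw new IllegalStateException("CustomerService not bound", e);
        }
    }
}
```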
For example, code running in the same module as the CustomerService bean (as given by the example shown earlier in this article) could obtain a (local) reference to it with a module-scoped lookup like the one sketched above.
Remoting/distributed execution:
For communication with a client that's written in the Java programming language, a session bean can expose a remote view via an interface annotated with @Remote. This allows those beans to be called from clients in other JVMs, which may themselves be running on other systems (from the point of view of the EJB container, any code in another JVM is remote).
Stateless and Singleton session beans may also expose a "web service client view" for remote communication via WSDL and SOAP or plain XML. This follows the JAX-RPC and JAX-WS specifications. JAX-RPC support, however, is proposed for future removal. To support JAX-WS, the session bean is annotated with @WebService, and methods that are to be exposed remotely with @WebMethod.
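A sketch of a stateless bean exposing a JAX-WS web service client view (service and method names are illustrative):

```java
import javax.ejb.Stateless;
import javax.jws.WebMethod;
import javax.jws.WebService;

// The container generates the WSDL and SOAP endpoint from these annotations.
@Stateless
@WebService
public class QuoteService {

    @WebMethod
    public double quote(String symbol) {
        return 42.0; // stand-in for real pricing logic
    }
}
```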
Although the EJB specification does not mention exposure as RESTful web services in any way and has no explicit support for this form of communication, the JAX-RS specification does explicitly support EJB. Following the JAX-RS spec, Stateless and Singleton session beans can be declared as root resources via the @Path annotation and EJB business methods can be mapped to resource methods via the @GET, @PUT, @POST and @DELETE annotations. This however does not count as a "web service client view", which is used exclusively for JAX-WS and JAX-RPC.
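A sketch of a stateless bean declared as a JAX-RS root resource (path and names are illustrative):

```java
import javax.ejb.Stateless;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// Business methods are mapped to HTTP resource methods via JAX-RS annotations.
@Stateless
@Path("customers")
public class CustomerResource {

    @GET
    @Path("{id}")
    public String find(@PathParam("id") long id) {
        return "customer-" + id; // stand-in for a real lookup
    }
}
```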
Communication via web services is typical for clients not written in the Java programming language, but is also convenient for Java clients who have trouble reaching the EJB server via a firewall. Additionally, web service based communication can be used by Java clients to circumvent the arcane and ill-defined requirements for the so-called "client-libraries"; a set of jar files that a Java client must have on its class-path in order to communicate with the remote EJB server. These client-libraries potentially conflict with libraries the client may already have (for instance, if the client itself is also a full Java EE server) and such a conflict is deemed to be very hard or impossible to resolve.
Legacy home interfaces and required business interface:
With EJB 2.1 and earlier, each EJB had to provide a Java implementation class and two Java interfaces. The EJB container created instances of the Java implementation class to provide the EJB implementation. The Java interfaces were used by client code of the EJB.
Required deployment descriptor:
With EJB 2.1 and earlier, the EJB specification required a deployment descriptor to be present. This was needed to implement a mechanism that allowed EJBs to be deployed in a consistent manner regardless of the specific EJB platform that was chosen. Information about how the bean should be deployed (such as the name of the home or remote interfaces, whether and how to store the bean in a database, etc.) had to be specified in the deployment descriptor.
The deployment descriptor is an XML document having an entry for each EJB to be deployed. This XML document specifies the following information for each EJB:
- Name of the Home interface
- Java class for the Bean (business object)
- Java interface for the Home interface
- Java interface for the business object
- Persistent store (only for Entity Beans)
- Security roles and permissions
- Stateful or Stateless (for Session Beans)
Old EJB containers from many vendors required more deployment information than that in the EJB specification. They would require the additional information as separate XML files, or some other configuration file format. An EJB platform vendor generally provided their own tools that would read this deployment descriptor, and possibly generated a set of classes that would implement the now-deprecated Home and Remote interfaces.
Since EJB 3.0 (JSR 220), the XML descriptor is replaced by Java annotations set in the Enterprise Bean implementation (at source level), although it is still possible to use an XML descriptor instead of (or in addition to) the annotations. If an XML descriptor and annotations are both applied to the same attribute within an Enterprise Bean, the XML definition overrides the corresponding source-level annotation, although some XML elements can also be additive (e.g., an activation-config-property in XML with a different name than already defined via an @ActivationConfigProperty annotation will be added instead of replacing all existing properties).
Container variations:
Starting with EJB 3.1, the EJB specification defines two variants of the EJB container; a full version and a limited version. The limited version adheres to a proper subset of the specification called EJB 3.1 Lite and is part of Java EE 6's web profile (which is itself a subset of the full Java EE 6 specification).
EJB 3.1 Lite excludes support for the following features:
- Remote interfaces
- RMI-IIOP interoperability
- JAX-WS web service endpoints
- EJB Timer Service (@Schedule, @Timeout)
- Asynchronous session bean invocations (@Asynchronous)
- Message-driven beans
EJB 3.2 Lite excludes fewer features. In particular, it no longer excludes @Asynchronous and @Schedule/@Timeout, but for @Schedule it does not support the "persistent" attribute that full EJB 3.2 does support. The complete excluded list for EJB 3.2 Lite is:
- Remote interfaces
- RMI-IIOP interoperability
- JAX-WS web service endpoints
- Persistent timers ("persistent" attribute on @Schedule)
- Message-driven beans
Version history:
EJB 4.0, final release (2020-05-22)
Jakarta Enterprise Beans 4.0, as a part of Jakarta EE 9, was a tooling release that mainly moved API package names from the top-level javax.ejb package to the top-level jakarta.ejb package. Other changes included removal of deprecated APIs that were pointless to move to the new top-level package and the removal of features that depended on features that were removed from Java or elsewhere in Jakarta EE 9. The following APIs were removed:
- methods relying on java.security.Identity, which was removed from Java 14
- methods relying on Jakarta XML RPC, to reflect the removal of XML RPC from the Jakarta EE 9 platform
- the deprecated EJBContext.getEnvironment() method
- "Support for Distributed Interoperability", to reflect the removal of CORBA from Java 11 and the Jakarta EE 9 platform
Other minor changes include marking the Enterprise Beans 2.x API group as "Optional" and making the Schedule annotation repeatable.
EJB 3.2.6, final release (2019-08-23)
Jakarta Enterprise Beans 3.2.6 is part of Jakarta EE 8. Despite still using the "EJB" abbreviation, this set of APIs was officially renamed to "Jakarta Enterprise Beans" by the Eclipse Foundation so as not to tread on the Oracle "Java" trademark.
EJB 3.2, final release (2013-05-28)
JSR 345. Enterprise JavaBeans 3.2 was a relatively minor release that mainly contained specification clarifications and lifted some restrictions that were imposed by the spec but over time appeared to serve no real purpose. A few existing full EJB features were also demanded to be in EJB 3 Lite, and functionality that was proposed to be pruned in EJB 3.1 was indeed pruned (made optional). The following features were added:
- Passivation of a stateful session bean can be deactivated via an attribute on the @Stateful annotation (passivationCapable = false)
- TimerService can retrieve all active timers in the same EJB module (it could previously only retrieve timers for the bean in which the TimerService was called)
- Lifecycle methods (e.g. @PostConstruct) can be transactional for stateful session beans using the existing @TransactionAttribute annotation
- The Autocloseable interface is implemented by the embeddable container
EJB 3.1, final release (2009-12-10)
JSR 318. The purpose of the Enterprise JavaBeans 3.1 specification is to further simplify the EJB architecture by reducing its complexity from the developer's point of view, while also adding new functionality in response to the needs of the community:
- Local view without interface (no-interface view)
- .war packaging of EJB components
- EJB Lite: definition of a subset of EJB
- Portable EJB global JNDI names
- Singletons (singleton session beans)
- Application initialization and shutdown events
- EJB Timer Service enhancements
- Simple asynchrony (@Asynchronous for session beans)
EJB 3.0, final release (2006-05-11)
JSR 220 – Major changes: This release made it much easier to write EJBs, using 'annotations' rather than the complex 'deployment descriptors' used in version 2.x. The use of home and remote interfaces and the ejb-jar.xml file were also no longer required in this release, having been replaced with a business interface and a bean that implements the interface.
EJB 2.1, final release (2003-11-24)
JSR 153 – Major changes:
- Web service support (new): stateless session beans can be invoked over SOAP/HTTP. Also, an EJB can easily access a web service using the new service reference.
- EJB Timer Service (new): event-based mechanism for invoking EJBs at specific times.
- Message-driven beans accept messages from sources other than JMS.
- Message destinations (the same idea as EJB references, resource references, etc.) have been added.
- EJB query language (EJB-QL) additions: ORDER BY, AVG, MIN, MAX, SUM, COUNT, and MOD.
- XML schema is used to specify deployment descriptors, replacing DTDs.
EJB 2.0, final release (2001-08-22)
JSR 19 – Major changes and overall goals:
- The standard component architecture for building distributed object-oriented business applications in Java.
- Make it possible to build distributed applications by combining components developed using tools from different vendors.
- Make it easy to write (enterprise) applications: application developers will not have to understand low-level transaction and state management details, multi-threading, connection pooling, and other complex low-level APIs.
- Follow the "Write Once, Run Anywhere" philosophy of Java: an enterprise bean can be developed once, and then deployed on multiple platforms without recompilation or source code modification.
- Address the development, deployment, and runtime aspects of an enterprise application's life cycle.
- Define the contracts that enable tools from multiple vendors to develop and deploy components that can interoperate at runtime.
- Be compatible with existing server platforms; vendors will be able to extend their existing products to support EJBs.
- Be compatible with other Java APIs.
- Provide interoperability between enterprise beans and Java EE components as well as non-Java programming language applications.
- Be compatible with the CORBA protocols (RMI-IIOP).
EJB 1.1, final release (1999-12-17)
Major changes:
- XML deployment descriptors
- Default JNDI contexts
- RMI over IIOP
- Security – role driven, not method driven
- Entity Bean support – mandatory, not optional
Goals for Release 1.1:
- Provide better support for application assembly and deployment.
- Specify in greater detail the responsibilities of the individual EJB roles.
EJB 1.0 (1998-03-24)
Announced at JavaOne 1998, Sun's third Java developers conference (March 24 through 27). Goals for Release 1.0:
- Defined the distinct "EJB Roles" that are assumed by the component architecture.
- Defined the client view of enterprise beans.
- Defined the enterprise bean developer's view.
- Defined the responsibilities of an EJB Container provider and server provider; together these make up a system that supports the deployment and execution of enterprise beans.
**CrustaStun**
The CrustaStun is a device designed to administer a lethal electric shock to shellfish (such as lobsters, crabs, and crayfish) before cooking. This avoids boiling a live shellfish, which may be able to experience pain in a way similar to vertebrates. The CrustaStun comprises a stainless-steel box approximately the size of a domestic microwave oven containing a tray with a wet sponge and an electrode. The shellfish is placed in the box and, when the lid is closed, the wet sponge conducts the current, which electrocutes the animal with a 120-volt, 2–5-amp current. The CrustaStun is reported to render the shellfish unconscious in 0.3 seconds and kill the animal in 5 to 10 seconds, compared to 3 minutes to kill a lobster by boiling or 4.5 minutes for a crab. The inventor of the device, Simon Buckhaven, worked for two years with scientists from the University of Bristol to develop it; it is manufactured by a company in England, at an estimated cost of £2,500 (in 2009).
There are claims that shellfish killed with the CrustaStun taste better than those killed by boiling. Waitrose, Tesco and other major supermarkets in the United Kingdom have insisted that all shellfish products supplied to them are killed using this method.
**Siderogel**
Siderogel is an amorphous mineral (a mineraloid) consisting of iron(III) oxide-hydroxide, FeO(OH), the same chemical compound as limonite and goethite; or possibly a hydrate of the same, FeO(OH)·nH2O. Siderogel is described as blackish, brownish, or reddish-brown, often glassy and translucent. It may be a gossan, the result of weathering and oxidation of sulfide ores. The International Mineralogical Association does not officially recognize siderogel as a mineral species.
Occurrence:
Siderogel has been reported in a few locations in Europe and Asia, including a site near Gavà and Bruguers, Catalunya; in the mine of Codos, Aragon; in the Becke-Oese quarry (now flooded) between Menden and Hemer, Germany; and in the Gaosong deposit, Gejiu, China.
**Foundations of Algebraic Geometry**
Foundations of Algebraic Geometry is a book by André Weil (1946, 1962) that develops algebraic geometry over fields of any characteristic. In particular it gives a careful treatment of intersection theory by defining the local intersection multiplicity of two subvarieties.
Weil was motivated by the need for a rigorous theory of correspondences on algebraic curves in positive characteristic, which he used in his proof of the Riemann hypothesis for curves over a finite field.
Weil introduced abstract rather than projective varieties partly so that he could construct the Jacobian of a curve. (It was not known at the time that Jacobians are always projective varieties.) It was some time before anyone found any examples of complete abstract varieties that are not projective. In the 1950s Weil's work was one of several competing attempts to provide satisfactory foundations for algebraic geometry, all of which were superseded by Grothendieck's development of schemes.
**Eluvium**
In geology, eluvium or eluvial deposits are those geological deposits and soils that are derived by in situ weathering or weathering plus gravitational movement or accumulation.
The process of removal of materials from geological or soil horizons is called eluviation or leaching. There is a difference in the usage of this term in geology and soil science. In soil science, eluviation is the transport of soil material from upper layers of soil to lower levels by downward percolation of water across soil horizons, and accumulation of this material (illuvial deposit) in lower levels is called illuviation. In geology, the removed material is irrelevant, and the deposit (eluvial deposit) is the remaining material. Eluviation occurs when precipitation exceeds evaporation.
A soil horizon formed due to eluviation is an eluvial zone or eluvial horizon. In a typical soil profile, the eluvial horizon refers to a light-colored zone located (depending on context and literature) either at the lower part of the A horizon (symbol: Ae) or within a distinct horizon (E horizon) below the A, where the process is most intense and rapid. Yet some sources consider the eluvial zone to be the A horizon plus the (distinct) E horizon, as eluviation technically occurs in both.
The strict eluvial horizon (E horizon) is typically light gray, clay-depleted, contains little organic matter and has a high concentration of silt and sand particles composed of quartz and other resistant minerals.
Eluvial ore deposits are those such as tungsten and gold placer deposits formed by settling and enriched by the winnowing or removal of lower-density materials. Diamonds within yellow ground (weathered portions of kimberlites) may be considered to be eluvial deposits. Cassiterite and columbite-tantalite deposits also occur as residual or eluvial concentrations. The Pitinga tin deposit in Brazil, an eluvial deposit, is one of the largest tin mines in the world. Weathering supergene enrichment of an apatite-rich carbonatite in Ontario has produced a significant eluvial phosphate ore deposit.
**Acyl-(acyl-carrier-protein)—UDP-N-acetylglucosamine O-acyltransferase**
In enzymology, an acyl-[acyl-carrier-protein]-UDP-N-acetylglucosamine O-acyltransferase (EC 2.3.1.129) is an enzyme that catalyzes the chemical reaction:

(R)-3-hydroxytetradecanoyl-[acyl-carrier-protein] + UDP-N-acetylglucosamine ⇌ [acyl-carrier-protein] + UDP-3-O-(3-hydroxytetradecanoyl)-N-acetylglucosamine

Thus, the two substrates of this enzyme are (R)-3-hydroxytetradecanoyl-[acyl-carrier-protein] and UDP-N-acetylglucosamine, whereas its two products are [acyl-carrier-protein] and UDP-3-O-(3-hydroxytetradecanoyl)-N-acetylglucosamine.
This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is (R)-3-hydroxytetradecanoyl-[acyl-carrier-protein]:UDP-N-acetylglucosamine 3-O-(3-hydroxytetradecanoyl) transferase. Other names in common use include UDP-N-acetylglucosamine acyltransferase and uridine diphosphoacetylglucosamine acyltransferase. This enzyme participates in lipopolysaccharide biosynthesis.
Structural studies:
As of late 2007, 7 structures have been solved for this class of enzymes, with PDB accession codes 1J2Z, 1LXA, 2AQ9, 2JF2, 2JF3, 2QIA, and 2QIV.
**Substituted amphetamine**
Substituted amphetamines are a class of compounds based upon the amphetamine structure; it includes all derivative compounds which are formed by replacing, or substituting, one or more hydrogen atoms in the amphetamine core structure with substituents. The compounds in this class span a variety of pharmacological subclasses, including stimulants, empathogens, and hallucinogens, among others. Examples of substituted amphetamines are amphetamine (itself), methamphetamine, ephedrine, cathinone, phentermine, mephentermine, tranylcypromine, bupropion, methoxyphenamine, selegiline, amfepramone (diethylpropion), pyrovalerone, MDMA (ecstasy), and DOM (STP).
Some of amphetamine's substituted derivatives occur in nature, for example in the leaves of Ephedra and khat plants. Amphetamine was first produced at the end of the 19th century. By the 1930s, amphetamine and some of its derivative compounds found use as decongestants in the symptomatic treatment of colds and also occasionally as psychoactive agents. Their effects on the central nervous system are diverse, but can be summarized by three overlapping types of activity: psychoanaleptic, hallucinogenic and empathogenic. Various substituted amphetamines may cause these actions either separately or in combination.
Prodrugs of amphetamine/methamphetamine:
A variety of prodrugs of amphetamine and/or methamphetamine exist, and include amfecloral, amphetaminil, benzphetamine, clobenzorex, D-deprenyl, dimethylamphetamine, ethylamphetamine, fencamine, fenethylline, fenproporex, furfenorex, lisdexamfetamine, mefenorex, prenylamine, and selegiline.
Structure:
Amphetamines are a subgroup of the substituted phenethylamine class of compounds. Substitution of hydrogen atoms results in a large class of compounds. A typical reaction is substitution by methyl and sometimes ethyl groups at the amine and phenyl sites.
History:
Ephedra was used 5000 years ago in China as a medicinal plant; its active ingredients are the alkaloids ephedrine, pseudoephedrine, norephedrine (phenylpropanolamine) and norpseudoephedrine (cathine). Natives of Yemen and Ethiopia have a long tradition of chewing khat leaves to achieve a stimulating effect. The active substances of khat are cathinone and, to a lesser extent, cathine. Amphetamine was first synthesized in 1887 by Romanian chemist Lazăr Edeleanu, although its pharmacological effects remained unknown until the 1930s. MDMA was produced in 1912 (in 1914, according to other sources) as an intermediate product. However, this synthesis also went largely unnoticed. In the 1920s, both methamphetamine and the dextrorotatory optical isomer of amphetamine, dextroamphetamine, were synthesized. This synthesis was a by-product of a search for ephedrine, a bronchodilator used to treat asthma that was then extracted exclusively from natural sources. Over-the-counter use of substituted amphetamines was initiated in the early 1930s by the pharmaceutical company Smith, Kline & French (now part of GlaxoSmithKline), as a medicine (Benzedrine) for colds and nasal congestion. Subsequently, amphetamine was used in the treatment of narcolepsy, obesity, hay fever, orthostatic hypotension, epilepsy, Parkinson's disease, alcoholism and migraine. The "reinforcing" effects of substituted amphetamines were quickly discovered, and the misuse of substituted amphetamines had been noted as far back as 1936.
History:
During World War II, amphetamines were used by the German military to keep their tank crews awake for long periods and to treat depression. It was noticed that extended rest was required after such artificially induced activity. The widespread use of substituted amphetamines began in postwar Japan and quickly spread to other countries. Modified "designer amphetamines", such as MDA and PMA, have gained in popularity since the 1960s. In 1970, the United States adopted the Controlled Substances Act, which limited non-medical use of substituted amphetamines. Street use of PMA was noted in 1972. MDMA emerged as a substitute for MDA in the early 1970s. American chemist Alexander Shulgin resynthesized the drug in 1976, and through him it was briefly introduced into psychotherapy. Recreational use grew, and in 1985 MDMA was banned by the US authorities in an emergency scheduling initiated by the Drug Enforcement Administration. Since the mid-1990s, MDMA has become a popular entactogenic drug among the youth, and quite often non-MDMA substances were sold as ecstasy. Ongoing trials are investigating its efficacy as an adjunct to psychotherapy in the management of treatment-resistant post-traumatic stress disorder (PTSD).
**Energy content of biofuel**
Energy content of biofuel:
The energy content of biofuel is the chemical energy contained in a given biofuel, measured per unit mass of that fuel, as specific energy, or per unit of volume of the fuel, as energy density.
Energy content of biofuel:
A biofuel is a fuel produced from recently living organisms. Biofuels include bioethanol, an alcohol made by fermentation and often used as a gasoline additive, and biodiesel, which is usually used as a diesel additive. Specific energy is energy per unit mass, which is used to describe the chemical energy content of a fuel, expressed in SI units as joule per kilogram (J/kg) or equivalent units. Energy density is the amount of chemical energy per unit volume of the fuel, expressed in SI units as joule per litre (J/L) or equivalent units.
Energy and CO2 output of common biofuels:
The table below includes entries for popular substances already used for their energy, or being discussed for such use.
The second column shows specific energy, the energy content in megajoules per unit of mass in kilograms, useful in understanding the energy that can be extracted from the fuel.
The third column in the table lists energy density, the energy content per liter of volume, which is useful for understanding the space needed for storing the fuel.
Energy and CO2 output of common biofuels:
The final two columns deal with the carbon footprint of the fuel. The fourth column contains the proportion of CO2 released when the fuel is converted for energy, with respect to its starting mass, and the fifth column lists the energy produced per kilogram of CO2 produced. As a guideline, a higher number in this column is better for the environment. These numbers do not account for other greenhouse gases released during burning, production, storage, or shipping; methane, for example, may have hidden environmental costs that are not reflected in the table.
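To make the relationship between these columns concrete, here is a minimal Python sketch; the ethanol figures used are rough illustrative assumptions, not values copied from the table:

```python
def energy_per_kg_co2(specific_energy_mj_per_kg, co2_mass_ratio):
    """Energy yield per kilogram of CO2 emitted.

    specific_energy_mj_per_kg: chemical energy per unit mass (MJ/kg).
    co2_mass_ratio: kg of CO2 released per kg of fuel burned.
    """
    return specific_energy_mj_per_kg / co2_mass_ratio

# Ethanol (illustrative): ~26.8 MJ/kg; complete combustion of C2H5OH
# releases about 1.91 kg CO2 per kg of fuel (2 * 44 / 46).
print(energy_per_kg_co2(26.8, 1.91))  # ~14 MJ per kg of CO2
```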
**Delivery Performance**
Delivery Performance:
Delivery performance (DP) is a broadly used standard KPI measurement in supply chains, used to measure the fulfillment of a customer's demand by the wish date. Following the nomenclature of the DR-DP-Matrix, three main approaches to measuring DP can be distinguished: DP_V^T, DP_S^D and DP_S^T, where the subscript denotes the type of measurement, volume (V) or singular (S), and the superscript denotes the type of view, on time (T) or delivery (D).
Volume/on time:
Formula:

If (Demand_{p,c} + Backlog_{p-1,c}) > 0:

DP_V^T = (Delivered_{p,c} + Predelivery_{p-1,c}) / (Demand_{p,c} + Backlog_{p-1,c})

else DP_V^T = NULL

where Demand := the customer's wish, c := product identifier, and p := time period, e.g. a day, a week, a month.
The cumulation over a period and a group of product identifiers c is done analogously, restricted to the combinations of p and c for which DP_V^T <> NULL, whereas p is determined by the demand period.
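A minimal Python sketch of the volume/on-time formula above; the argument names and sample numbers are illustrative assumptions, not part of the DR-DP-Matrix nomenclature:

```python
def dp_volume_on_time(demand, backlog_prev, delivered, predelivery_prev):
    """Return DP_V^T for one period p and product c, or None (NULL)."""
    denominator = demand + backlog_prev
    if denominator <= 0:   # corresponds to the If-condition above
        return None        # NULL case
    return (delivered + predelivery_prev) / denominator

# 90 delivered plus 10 pre-delivered, against 100 demanded plus 20 backlog:
print(dp_volume_on_time(demand=100, backlog_prev=20,
                        delivered=90, predelivery_prev=10))  # 0.8333...
```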
Singular/delivery and singular/on time:
Singular case definition: To fit the needs of the environment, the granularity of a singular case (DP_S^*) has to be defined. In general a singular case is described by an n-tuple consisting of a set of the following order and delivery details: order number, customer identifier, product identifier, wish date of customer, confirmed date of supplier, ship-to information, delivery date, and delivery note number.

Formula DP_S^D: After a singular case has been delivered to the customer, its DP is measured as follows: if (wish date = arrival date) then DP_singular case = 1, else DP_singular case = 0, where arrival date = delivery date + transit time. By cumulating the results of singular cases over a certain period p and, if necessary, additional criteria c (e.g. customer, product, ...), the delivery performance is calculated as DP_{p,c} = Σ_{p,c}(DP) / count_{p,c}(singular cases), whereas p is determined by the arrival date.

Formula DP_S^T: After a period has elapsed, all singular cases with a wish date within the period are considered, and their DP is measured as follows: if (wish date = arrival date) then DP_singular case = 1, else DP_singular case = 0, where arrival date = delivery date + transit time. By cumulating the results of singular cases over a certain period p and, if necessary, additional criteria c (e.g. customer, product, ...), the delivery performance is calculated as DP_{p,c} = Σ_{p,c}(DP) / count_{p,c}(singular cases), whereas p is determined by the first confirmed date.
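A minimal Python sketch of the singular/delivery measurement (DP_S^D) described above; the sample dates and fixed transit times are illustrative assumptions:

```python
from datetime import date, timedelta

def dp_singular(wish_date, delivery_date, transit_days):
    """Return 1 if the case arrived exactly on the wish date, else 0."""
    arrival_date = delivery_date + timedelta(days=transit_days)
    return 1 if wish_date == arrival_date else 0

# Two singular cases: (wish date, delivery date, transit time in days).
cases = [
    (date(2024, 3, 1), date(2024, 2, 28), 2),  # arrives 2024-03-01: on time
    (date(2024, 3, 5), date(2024, 3, 3), 3),   # arrives 2024-03-06: late
]
results = [dp_singular(w, d, t) for w, d, t in cases]
print(sum(results) / len(results))  # DP_{p,c} = 0.5
```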
**JoCaml**
JoCaml:
JoCaml is an experimental functional programming language derived from OCaml. It integrates the primitives of the join-calculus to enable flexible, type-checked concurrent and distributed programming. The current version of JoCaml is a re-implementation of the now unmaintained JoCaml made by Fabrice Le Fessant, featuring a modified syntax and improved OCaml compatibility compared to the original.
JoCaml was used by team Camls 'R Us to implement a distributed ray tracer, earning second place in the ICFP 2000 programming contest.
The name is a reference to Joe Camel, a cartoon camel used in advertisements for Camel-brand cigarettes.
**MIDACO**
MIDACO:
MIDACO (Mixed Integer Distributed Ant Colony Optimization) is a software package for numerical optimization based on evolutionary computing.
MIDACO was created in a collaboration between the European Space Agency and EADS Astrium to solve constrained mixed-integer non-linear programming (MINLP) problems in space applications.
MIDACO holds several record solutions on interplanetary spaceflight trajectory design problems made publicly available by the European Space Agency. MIDACO is included in software packages like TOMLAB, Astos, and SigmaXL.
**Radiogenic nuclide**
Radiogenic nuclide:
A radiogenic nuclide is a nuclide that is produced by a process of radioactive decay. It may itself be radioactive (a radionuclide) or stable (a stable nuclide).
Radiogenic nuclides (more commonly referred to as radiogenic isotopes) form some of the most important tools in geology. They are used in two principal ways: In comparison with the quantity of the radioactive 'parent isotope' in a system, the quantity of the radiogenic 'daughter product' is used as a radiometric dating tool (e.g. uranium–lead geochronology).
In comparison with the quantity of a non-radiogenic isotope of the same element, the quantity of the radiogenic isotope is used to define its isotopic signature (e.g. 206Pb/204Pb). This technique is discussed in more detail under the heading isotope geochemistry.
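Both uses rest on the same first-order decay law. As a brief sketch of the generic algebra (not specific to any one isotope system), assuming a closed system with no initial daughter:

```latex
% Parent P decays with constant \lambda; D^* is the radiogenic daughter.
N_P(t) = N_P(0)\, e^{-\lambda t},
\qquad
D^{*}(t) = N_P(0) - N_P(t) = N_P(t)\left(e^{\lambda t} - 1\right)
```

so measuring the present-day ratio D*/N_P yields the age t = (1/λ) ln(1 + D*/N_P).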
Examples:
Some naturally occurring isotopes are entirely radiogenic, but all those are radioactive isotopes, with half-lives too short to have occurred primordially and still exist today. Thus, they are only present as radiogenic daughters of either ongoing decay processes, or else cosmogenic (cosmic ray induced) processes that produce them in nature freshly. A few others are naturally produced by nucleogenic processes (natural nuclear reactions of other types, such as neutron absorption).
Examples:
For radiogenic isotopes that decay slowly enough, or that are stable isotopes, a primordial fraction is always present, since all sufficiently long-lived and stable isotopes do in fact naturally occur primordially. An additional fraction of some of these isotopes may also occur radiogenically.
Examples:
Lead is perhaps the best example of a partly radiogenic substance, as all four of its stable isotopes (204Pb, 206Pb, 207Pb, and 208Pb) are present primordially, in known and fixed ratios. However, 204Pb is only present primordially, while the other three isotopes may also occur as radiogenic decay products of uranium and thorium. Specifically, 206Pb is formed from 238U, 207Pb from 235U, and 208Pb from 232Th. In rocks that contain uranium and thorium, the excess amounts of the three heavier lead isotopes allows the rocks to be "dated", thus providing a time estimate for when the rock solidified and the mineral held the ratio of isotopes fixed and in place.
Examples:
Another notable radiogenic nuclide is argon-40, formed from radioactive potassium. Almost all the argon in the earth's atmosphere is radiogenic, whereas primordial argon is argon-36.
Some nitrogen-14 is radiogenic, coming from the decay of carbon-14 (half-life around 5700 years), but the carbon-14 was formed some time earlier from nitrogen-14 by the action of cosmic rays.
Examples:
Other important examples of radiogenic elements are radon and helium, both of which form during the decay of heavier elements in bedrock. Radon is entirely radiogenic, since it has too short a half-life to have occurred primordially. Helium, however, occurs in the crust of the Earth primordially, since both helium-3 and helium-4 are stable, and small amounts were trapped in the crust of the Earth as it formed. Helium-3 is almost entirely primordial (a small amount is formed by natural nuclear reactions in the crust). Helium-3 can also be produced as the decay product of tritium (3H), which is a product of some nuclear reactions, including ternary fission. The global supply of helium (which occurs in gas wells as well as the atmosphere) is mainly (about 90%–99%) radiogenic, as shown by its enrichment in radiogenic helium-4 by a factor of 10 to 100 relative to the primordial ratio of helium-4 to helium-3. This latter ratio is known from extraterrestrial sources, such as some Moon rocks and meteorites, which are relatively free of parental sources for helium-3 and helium-4.
Examples:
As noted in the case of lead-204, a radiogenic nuclide is often not radioactive. In this case, if its precursor nuclide has a half-life too short to have survived from primordial times, then the parent nuclide will be gone, and known now entirely by a relative excess of its stable daughter. In practice, this occurs for all radionuclides with half lives less than about 50 to 100 million years. Such nuclides are formed in supernovas, but are known as extinct radionuclides, since they are not seen directly on the Earth today.
Examples:
An example of an extinct radionuclide is iodine-129; it decays to xenon-129, a stable isotope of xenon which appears in excess relative to other xenon isotopes. It is found in meteorites that condensed from the primordial Solar System dust cloud and trapped primordial iodine-129 (half-life 15.7 million years) sometime in a relatively short period (probably less than 20 million years) between the iodine-129's creation in a supernova and the formation of the Solar System by condensation of this dust. The trapped iodine-129 now appears as a relative excess of xenon-129. Iodine-129 was the first extinct radionuclide to be inferred, in 1960. Others are aluminium-26 (also inferred from extra magnesium-26 found in meteorites), and iron-60.
Radiogenic nuclides used in geology:
The following table lists some of the most important radiogenic isotope systems used in geology, in order of decreasing half-life of the radioactive parent isotope. The values given for half-life and decay constant are the current consensus values in the Isotope Geology community. (** indicates the ultimate decay product of a series.)
Units used in this table: Gyr = gigayear = 10⁹ years; Myr = megayear = 10⁶ years; kyr = kiloyear = 10³ years.
Radiogenic heating:
Radiogenic heating occurs as a result of the release of heat energy from radioactive decay during the production of radiogenic nuclides. Along with primordial heat (resulting from planetary accretion), radiogenic heating occurring in the mantle and crust makes up one of the two main sources of heat in the Earth's interior. Most of the radiogenic heating in the Earth results from the decay of the daughter nuclei in the decay chains of uranium-238 and thorium-232, and from the decay of potassium-40.
**Cusp**
Cusp:
A cusp is the most pointed end of a curve. It often refers to cusp (anatomy), a pointed structure on a tooth.
Cusp or CUSP may also refer to:
Mathematics:
Cusp (singularity), a singular point of a curve
Cusp catastrophe, a branch of bifurcation theory in the study of dynamical systems
Cusp form, in modular form theory
Cusp neighborhood, a set of points near a cusp
Cuspidal representation, a generalization of cusp forms in the theory of automorphic representations
Science and medicine:
Beach cusps, a pointed and regular arc pattern of the shoreline at the beach
Behavioral cusp, a change in behavior with far-reaching consequences
Caltech-USGS Seismic Processing, software for analyzing earthquake data
Center for Urban Science and Progress, a graduate school of New York University focusing on urban informatics
CubeSat for Solar Particles, a satellite launched in 2022
Cusp (anatomy), a pointed structure on a tooth
Cusps of heart valves, leaflets of a heart valve
Nuclear cusp condition, in electron density
Other uses:
Cusp (astrology)
Cusp (film), a 2021 American documentary following three teenage girls at the end of summer
Cusp (novel), a 2005 science fiction story by Robert A. Metzger
Cusp Conference, an annual gathering of thinkers, innovators, etc. from various fields
Cusp generation, a name given to those born during the transitional years of two generations
Concordia University, St. Paul
**Australian rules football injuries**
Australian rules football injuries:
Australian rules football is a sport known for its high level of physical body contact compared to other ball sports such as soccer and basketball. High-impact collisions can occur from any direction, although deliberate collisions sometimes occur from a front-on direction (known specifically within the code as a "shirtfront" when the contact is a body-on-body collision). In addition, players of the code typically wear no protective padding of any kind except for a mouthguard or, occasionally, a helmet (unlike the full-body gear in gridiron football codes or the shin guards in soccer). As such, injury rates tend to be high.
Australian rules football injuries:
Soft tissue injuries are the most frequent, including injuries to the thighs and calf muscles. Osteitis pubis is a condition which particularly affects Australian rules footballers. Injuries to the knee, ankle and shoulders are also common. Hospital-treated injuries account for 40 percent of all injuries. Knee reconstructions are among the career-threatening injuries for professional and amateur players. Full-contact play, with the potential to be tackled or bumped from any angle, means that the risk of a knee being twisted or caught at a dangerous angle is high. Players whose injuries would historically have ended their careers prematurely (such as VFL/AFL legend John Coleman) can often now be nursed back to full health by modern sports medicine.
Australian rules football injuries:
While many players choose not to wear protective padding, players do occasionally suffer head injuries resulting in loss of consciousness; however, spinal injury is extremely uncommon and comparatively much rarer than in rugby football. In recent years, the AFL has commissioned official studies and introduced new rules and precautions aimed at reducing the number and severity of injuries in the sport. One example of a player who suffered a large share of injuries is Essendon Hall of Famer James Hird, who sustained injuries from head to foot and many in between, including a hip injury that delayed his debut. Injury levels are high enough that players are not only susceptible during their careers; the effects afterwards can also be detrimental to their post-career health. As with concussions in the NFL, brain injuries, while relatively rare in Australian rules football, can occur, especially over time without sufficient precautions. Shane Tuck is one example: suffering from a severe case of the degenerative brain disease chronic traumatic encephalopathy, he died by suicide at 39 years old. In a recent study of 413 retired VFL/AFL footballers, common problems amongst the group in old age included arthritis, hip replacements (including Kevin Sheedy, who had two operations on his hip within a short period of time), and low ability to perform sport-based activities.
Australian rules football injuries:
Steven Febey recently spoke out in Good Weekend (the magazine of the Fairfax newspaper network), detailing how the fitness he built up during his career was undone after his retirement, as the injuries sustained during his playing days began to take their toll.
The AFL Players' Association is working on initiatives to set up a player welfare fund for retired players whose post-AFL lives are impacted by injuries sustained during their careers.
Serious or career-threatening injury cases in the AFL:
The following is an incomplete list of incidents in AFL games which required immediate hospitalisation or threatened the career of a player.
2001 Winston Abraham tore his ACL in his left knee after falling hard in a kneeling position during a collision with James Hird. The knee was badly twisted. He was on the ground for less than 1 minute. The injury ended his career.
Jason Snell of Geelong suffered a broken ankle, after which he was never able to play (or even run) again.
2002 James Hird (Essendon) suffered horrific facial injuries at Subiaco in a clash against Fremantle when teammate Mark McVeigh landed on his face after an unsuccessful marking contest.
Serious or career-threatening injury cases in the AFL:
2003 Collingwood star utility Tarkyn Lockyer was injured in the air while tackling a Cats player early in the round 3 clash against Geelong. His left leg alone was caught in the tackle; as his knee extended he suffered a tear in his anterior cruciate ligament, sidelining him for 12 months and starting a wretched run of injuries that lasted a further two years.
Serious or career-threatening injury cases in the AFL:
2004 In a match between Adelaide and Essendon at AAMI Stadium, James Hird suffered an eye injury when a Crows defender attempted to spoil the ball and hit him in the side of the face. It was another in the long list of Hird injuries. Hird was taken to hospital, but Essendon still managed to win easily.
Dustin Fletcher lost most of his teeth, requiring re-implantation, despite wearing a mouthguard.
Serious or career-threatening injury cases in the AFL:
2005 In a match between Melbourne and Richmond, Matthew Whelan of Melbourne lunged to smother a kick from Nathan Brown of Richmond. The foot became stuck in the turf, and Whelan's torso landed directly on Brown's shin, snapping both bones in the leg, in an incident whose replay has made fans shudder since. Brown calmly sat on the ground and raised his hand for a stretcher with his lower leg badly bent outwards on a 30-degree angle. Brown had several complications and relapses from the condition in the following seasons.
Serious or career-threatening injury cases in the AFL:
During the elimination final between Melbourne and Geelong, a stray boot in a ruck contest from Geelong's Steven King connected with Melbourne's Jeff White's face. The injuries were described as "similar to those of a car accident victim," requiring the insertion of several plates.
2006 In a match between Richmond and Collingwood, Chris Newman received a similar injury to his team mate Nathan Brown above.
In a match between the Western Bulldogs and St Kilda, a hip-and-shoulder from the Bulldogs' Daniel Giansiracusa left St Kilda's notoriously unfortunate Justin Koschitzke with a fractured skull, sparking much of the debate about the safety of bumps in the game. Koschitzke has since returned to the side.
In a game between Brisbane and the Western Bulldogs, Mitch Hahn of the Bulldogs badly hyperextended his left knee after landing awkwardly and grimaced on the ground with his hands around his knee. The horrific injury put him on the sidelines for 10 months.
Serious or career-threatening injury cases in the AFL:
In a match between Collingwood and Brisbane, Blake Caracella tried to dive on a loose ball at the same time as a Lions player; Caracella was diving while the opponent slid in on his side. Caracella's head was pushed back by his opponent, play went on, and Caracella was left unable to move below the neck. In the following days, doctors said that he was lucky not to have been left a paraplegic by the incident. Caracella retired later that year, citing medical reasons for his decision.
Serious or career-threatening injury cases in the AFL:
In a match between St. Kilda and West Coast, Saints backman Matt Maguire had his left leg broken as a result of Tyson Stenglein sliding into his path.
In a match between Port Adelaide and Adelaide, Crows forward Trent Hentschel badly dislocated his right knee when a player dived on his right leg, requiring a full knee reconstruction. He suffered a torn ACL and spent over a month in hospital.
Serious or career-threatening injury cases in the AFL:
In a match between Adelaide and West Coast, Crows ruckman Rhett Biglands badly bent his left knee when Nathan Van Berlo dived into his leg in a desperate scramble for the ball. The collision forced Biglands' lower leg to bend outwards, requiring a full knee reconstruction. Despite his being able to walk off the ground, a complete tear of the ACL in the knee was confirmed.
Serious or career-threatening injury cases in the AFL:
Justin Koschitzke was again in the thick of it, fainting on live television, and smashing his head on the counter. He later collided with an umpire in a VFL match.
2007 Scott Camporeale suffered a career-ending knee injury in round 21, 2007, when his right knee bent and twisted in the wrong direction during a sudden change of direction, tearing his ACL. He was delisted by Essendon after the season and went on to become an assistant coach at the same club.
In the NAB Cup preseason, Nick Malceski suffered a serious injury to his right knee; after the knee reconstruction he returned to play for Sydney in round 8. Malceski made news headlines when he elected to undergo a risky surgical technique to save his career.
2009 Essendon ruckman David Hille injured his knee during the traditional ANZAC Day match against Collingwood, putting him out for the entire season.
2010 Carlton ruckman Matthew Kreuzer suffered a knee injury in Carlton's round 13 defeat against Fremantle which put him out of action for 12 months.
Fremantle midfielder Michael Barlow suffered a horrific broken leg in Fremantle's round 14 victory against Port Adelaide when teammate Rhys Palmer slid into his path after Barlow landed from a marking contest. The injury put Barlow out for 12 months.
2011 Melbourne backman James Strauss broke his left leg after landing awkwardly in a marking contest against Carlton's Jeff Garlett.
2012 Sydney's Gary Rohan suffered a compound fracture to his right leg in the opening minutes of round 4 after colliding with North Melbourne's Lindsay Thomas. The injury sidelined Rohan until round 21, 2012.
**Semi-generic**
Semi-generic:
Semi-generic is a legal term used by the United States Alcohol and Tobacco Tax and Trade Bureau to refer to a specific type of wine designation. The majority of these were originally based on the names of well-known European wine-producing regions. Consumers did not recognize grape varieties at that time, and New World producers used the familiar names to suggest the style of wine they were offering for sale. U.S. regulations require that semi-generic names (for example, California champagne) may be used on a wine label only if the appellation of "the actual place of origin" appears next to such a name, in order to prevent any possible consumer confusion.
Semi-generic:
The practice largely ceased in 2006 with the Wine Trade Agreement, though existing brands, considered grandfathered in, can continue the practice.
Recent problems:
Over the past thirty years, with the popularity of varietal labeling, semi-generic names have largely fallen out of use. They are typically only used on inexpensive wines sold in jugs or cartons and most of those now use the more popular varietal labeling.
Recent problems:
The use of these names is a subject of some disagreement. Through trade agreements, the European Union has protected most of these names in its major export markets. In 1993, Australia agreed not to use European place names, and France and Italy agreed to stop using the term Tokay, which is now reserved for Hungarian wines. The use of semi-generic names is beginning to become a problem for US domestic and foreign policy because, as many American Viticultural Areas (AVAs) become more popular around the world, they are seeking greater protection for their names inside and outside the U.S. In 2006, the U.S. agreed with the EU in the Wine Trade Agreement to refrain from adding any additional labels to the class of semi-generic wines. Some U.S. states have laws which additionally restrict or prohibit the use of semi-generic names on wines produced within their borders.
Definition:
In the U.S., semi-generics are defined by law in 27 CFR 4.24. There are two types.
The first type is names that can legally refer to any grape wine whatsoever. In practice, most have become associated with a given style, which is noted.
Burgundy – Generic red wine, for example Gallo's Hearty Burgundy. Named after French Burgundy.
Chablis – Generic white wine, named after Chablis.
Chianti – Generic red, named after Italy's Chianti.
Claret – Also generic red wine, named after Claret, the British term for French red Bordeaux.
Malaga – A sherry, named after Málaga in Spain.
Moselle – Generic sweet white, based on a German style produced in the Moselle River valley.
Rhine Wine (syn. Hock) – Generic sweet white, after Germany's Rhine River. Hock is named after Hochheim.
Sauterne – White or pink, dry or sweet, named after Sauternes but deliberately misspelt.
Haut Sauterne – Same as above.
Tokay – Generic white, named after Hungary's Tokaji.
The second type of semi-generic names have restrictions on what kind of wine they can be. The legal restriction is listed first, followed by the original term.
Angelica – Fortified wine of 18–24% alcohol, named after Los Angeles.
Champagne – Sparkling wine, named after France's Champagne.
Marsala – Wine of 14–24% alcohol, named after Italy's Marsala.
Madeira – Fortified wine of 18–24% alcohol, named after Portugal's Madeira.
Port – Fortified wine, named after Portugal's Porto.
Sherry – Fortified wine of 17–24% alcohol, named after Spain's Sherry.
Sources:
Label Approval for Wine Labels with a Semi-Generic Name Robinson, Jancis (Ed.) The Oxford Companion to Wine. Oxford: Oxford University Press, second edition, 1999.
Agreement between the United States and the European Community on Trade in Wine
**Optical black hole**
Optical black hole:
An optical black hole is a phenomenon in which slow light is passed through a Bose–Einstein condensate that is itself spinning faster than the local speed of light within, to create a vortex capable of trapping the light behind an event horizon just as a gravitational black hole would. Unlike other black hole analogs such as a sonic black hole in a Bose–Einstein condensate, a slow light black hole analog is not expected to mimic the quantum effects of a black hole, and thus not to emit Hawking radiation. It does, however, mimic the classical properties of a gravitational black hole, making it potentially useful in studying other properties of black holes. More recently, some physicists have developed a fiber optic based system which they believe will emit Hawking radiation.
**Glitch removal**
Glitch removal:
Glitch removal is the elimination of glitches—unnecessary signal transitions without functionality—from electronic circuits. Power dissipation of a gate occurs in two ways: static power dissipation and dynamic power dissipation. Glitch power comes under dynamic dissipation in the circuit and is directly proportional to switching activity. Glitch power dissipation is 20%–70% of total power dissipation and hence glitching should be eliminated for low power design.
Glitch removal:
Switching activity occurs due to signal transitions, which are of two types: functional transitions and glitches. Switching power dissipation is directly proportional to the switching activity (α), load capacitance (C), supply voltage (V), and clock frequency (f): P = α·C·V²·f. Switching activity means transitions between different logic levels; glitches add to signal transitions, and more glitches result in higher power dissipation. As per the above equation, switching power dissipation can be controlled by reducing switching activity (α), by voltage scaling, etc.
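As a hedged numeric illustration of the equation (the per-net values below are assumptions, not figures from the text):

```python
# Evaluate P = alpha * C * V^2 * f for a single net with assumed values.
alpha = 0.2    # switching activity: average transitions per clock cycle
C = 10e-15     # load capacitance: 10 fF
V = 1.0        # supply voltage: 1.0 V
f = 1e9        # clock frequency: 1 GHz

P = alpha * C * V**2 * f
print(f"{P * 1e6:.2f} uW")  # 2.00 uW; glitches raise alpha, and power, directly
```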
Glitch reduction techniques:
Reducing switching activity As discussed, more transition results in more glitches and hence more power dissipation. To minimize glitch occurrence, switching activity should be minimized. For example, Gray code could be used in counters instead of binary code, since every increment in Gray code only flips one bit.
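A minimal sketch of why Gray code helps: consecutive codewords differ in exactly one bit, so a Gray counter toggles a single output per increment, whereas a binary counter may toggle several at once (e.g. 0111 to 1000):

```python
def to_gray(n: int) -> int:
    """Convert a binary count value to its Gray-code equivalent."""
    return n ^ (n >> 1)

for n in range(8):
    print(f"{n:03b} -> {to_gray(n):03b}")

# Successive Gray values always differ in exactly one bit:
assert all(bin(to_gray(n) ^ to_gray(n + 1)).count("1") == 1
           for n in range(7))
```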
Glitch reduction techniques:
Gate freezing Gate freezing minimizes power dissipation by eliminating glitching. It relies on the availability of modified standard library cells such as the so-called F-Gate. This method consists of transforming high glitch gates into modified devices which filter out the glitches when a control signal is applied. When the control signal is high, the F-Gate operates as normal but when the control signal is low, the gate output is disconnected from the ground. As a result it can never be discharged to logic 0 and glitches are prevented.
Glitch reduction techniques:
Hazard filtering and balanced path delay Hazards in digital circuits are unnecessary transitions due to varying path delays in the circuit. Balanced path delay techniques can be used for resolving differing path delays. To make path delays equal, buffer insertion is done on the faster paths. Balanced path delay will avoid glitches in the output.
Hazard filtering is another way to remove glitching. In hazard filtering gate propagation delays are adjusted. This results in balancing all path delays at the output.
Hazard filtering is preferred over path balancing as path balancing consumes more power due to the insertion of additional buffers.
Glitch reduction techniques:
Gate sizing Gate upsizing and gate downsizing techniques are used for path balancing. A gate is replaced by a logically equivalent but differently-sized cell so that delay of the gate is changed. Because increasing the gate size also increases power dissipation, gate-upsizing is only used when power saved by glitch removal is more than the power dissipation due to the increase in size. Gate sizing affects glitching transitions but does not affect the functional transition.
Glitch reduction techniques:
Multiple threshold transistor The delay of a gate is a function of its threshold voltage. Non-critical paths are selected and the threshold voltage of the gates in these paths is increased. This results in balanced propagation delay along different paths converging at the receiving gate. Performance is maintained since it is determined by the time required by the critical path. A higher threshold voltage also reduces the leakage current of a path.
**PLOS Biology**
PLOS Biology:
PLOS Biology is a monthly peer-reviewed scientific journal covering all aspects of biology. Publication began on October 13, 2003. It is the first journal published by the Public Library of Science. The editor-in-chief is Nonia Pariente. In addition to research articles, the journal publishes magazine content aimed to be accessible to a broad audience. Article types in this section are essays, "unsolved mysteries", editorials, and synopses.
Abstracting and indexing:
The journal is abstracted and indexed in:
Biological Abstracts
BIOSIS Previews
Current Contents/Agriculture, Biology & Environmental Sciences
Current Contents/Life Sciences
Chemical Abstracts Service
Embase
Index Medicus/MEDLINE/PubMed
Science Citation Index
Scopus
The Zoological Record
According to Journal Citation Reports, the journal had a 2019 impact factor of 7.076. The journal does not list this impact factor on its website. Instead, the journal promotes the use of article-level metrics to provide a measure of the impact of its published articles.
**Poop deck**
Poop deck:
In naval architecture, a poop deck is a deck that forms the roof of a cabin built in the rear, or "aft", part of the superstructure of a ship. The name originates from the French word for stern, la poupe, from Latin puppis. Thus the poop deck is technically a stern deck, which in sailing ships was usually elevated as the roof of the stern or "after" cabin, also known as the "poop cabin" (or simply the poop). On sailing ships, the helmsman would steer the craft from the quarterdeck, immediately in front of the poop deck. At the stern, the poop deck provides an elevated position ideal for observation. While the main purpose of the poop is adding buoyancy to the aft, on a sailing ship the cabin was also used as accommodation for the shipmaster and officers. On modern, motorized warships, the ship functions which were once carried out on the poop deck have been moved to the bridge, usually located in a superstructure.
Sources:
Kerchove, René de baron (1961). "Poop". International Maritime Dictionary: An Encyclopedic Dictionary of Useful Maritime Terms and Phrases, Together with Equivalents in French and German (2 ed.). Van Nostrand Reinhold. p. 598. ISBN 978-0-442-02062-0. OCLC 1039382382.
**Splitting of prime ideals in Galois extensions**
Splitting of prime ideals in Galois extensions:
In mathematics, the interplay between the Galois group G of a Galois extension L of a number field K, and the way the prime ideals P of the ring of integers OK factorise as products of prime ideals of OL, provides one of the richest parts of algebraic number theory. The splitting of prime ideals in Galois extensions is sometimes attributed to David Hilbert by calling it Hilbert theory. There is a geometric analogue, for ramified coverings of Riemann surfaces, which is simpler in that only one kind of subgroup of G need be considered, rather than two. This was certainly familiar before Hilbert.
Definitions:
Let L/K be a finite extension of number fields, and let OK and OL be the corresponding ring of integers of K and L, respectively, which are defined to be the integral closure of the integers Z in the field in question.
These fit into a commutative square of inclusions, OK ↪ OL sitting above K ↪ L, with the vertical inclusions OK ↪ K and OL ↪ L. Finally, let p be a non-zero prime ideal in OK, or equivalently, a maximal ideal, so that the residue OK/p is a field.
From the basic theory of one-dimensional rings follows the existence of a unique decomposition pOL = ∏_{j=1}^{g} Pj^{ej} of the ideal pOL generated in OL by p into a product of distinct maximal ideals Pj, with multiplicities ej.
The field F = OK/p naturally embeds into Fj = OL/Pj for every j; the degree fj = [OL/Pj : OK/p] of this residue field extension is called the inertia degree of Pj over p.
Definitions:
The multiplicity ej is called the ramification index of Pj over p. If it is bigger than 1 for some j, the field extension L/K is called ramified at p (or we say that p ramifies in L, or that it is ramified in L). Otherwise, L/K is called unramified at p. If this is the case then by the Chinese remainder theorem the quotient OL/pOL is a product of fields Fj. The extension L/K is ramified in exactly those primes that divide the relative discriminant, hence the extension is unramified in all but finitely many prime ideals.
Definitions:
Multiplicativity of the ideal norm implies [L : K] = ∑_{j=1}^{g} ej·fj.
Definitions:
If fj = ej = 1 for every j (and thus g = [L : K]), we say that p splits completely in L. If g = 1 and f1 = 1 (and so e1 = [L : K]), we say that p ramifies completely in L. Finally, if g = 1 and e1 = 1 (and so f1 = [L : K]), we say that p is inert in L.
Definitions:
The Galois situation In the following, the extension L/K is assumed to be a Galois extension. Then the prime avoidance lemma can be used to show that the Galois group Gal(L/K) acts transitively on the Pj. That is, the prime ideal factors of p in L form a single orbit under the automorphisms of L over K. From this and the unique factorisation theorem, it follows that f = fj and e = ej are independent of j; something that certainly need not be the case for extensions that are not Galois. The basic relations then read pOL = (∏_{j=1}^{g} Pj)^e and [L : K] = efg.
Definitions:
The relation above shows that [L : K]/ef equals the number g of prime factors of p in OL. By the orbit-stabilizer formula this number is also equal to |G|/|DPj| for every j, where DPj, the decomposition group of Pj, is the subgroup of elements of G sending a given Pj to itself. Since the degree of L/K and the order of G are equal by basic Galois theory, it follows that the order of the decomposition group DPj is ef for every j.
Definitions:
This decomposition group contains a subgroup IPj, called the inertia group of Pj, consisting of automorphisms of L/K that induce the identity automorphism on Fj. In other words, IPj is the kernel of the reduction map DPj → Gal(Fj/F). It can be shown that this map is surjective, and it follows that Gal(Fj/F) is isomorphic to DPj/IPj and the order of the inertia group IPj is e.
Definitions:
The theory of the Frobenius element goes further, to identify an element of DPj/IPj for given j which corresponds to the Frobenius automorphism in the Galois group of the finite field extension Fj/F. In the unramified case the order of DPj is f and IPj is trivial, so the Frobenius element is in this case an element of DPj, and thus also an element of G. For varying j, the groups DPj are conjugate subgroups inside G: recalling that G acts transitively on the Pj, one checks that if σ maps Pj to Pj′, then σ·DPj·σ⁻¹ = DPj′. Therefore, if G is an abelian group, the Frobenius element of an unramified prime P does not depend on which Pj we take. Furthermore, in the abelian case, associating an unramified prime of K to its Frobenius and extending multiplicatively defines a homomorphism from the group of unramified ideals of K into G. This map, known as the Artin map, is a crucial ingredient of class field theory, which studies the finite abelian extensions of a given number field K. In the geometric analogue, for complex manifolds or algebraic geometry over an algebraically closed field, the concepts of decomposition group and inertia group coincide. There, given a Galois ramified cover, all but finitely many points have the same number of preimages.
Definitions:
The splitting of primes in extensions that are not Galois may be studied by using a splitting field initially, i.e. a Galois extension that is somewhat larger. For example, cubic fields usually are 'regulated' by a degree 6 field containing them.
Example — the Gaussian integers:
This section describes the splitting of prime ideals in the field extension Q(i)/Q. That is, we take K = Q and L = Q(i), so OK is simply Z, and OL = Z[i] is the ring of Gaussian integers. Although this case is far from representative — after all, Z[i] has unique factorisation, and there aren't many quadratic fields with unique factorization — it exhibits many of the features of the theory.
Example — the Gaussian integers:
Writing G for the Galois group of Q(i)/Q, and σ for the complex conjugation automorphism in G, there are three cases to consider.
Example — the Gaussian integers:
The prime p = 2 The prime 2 of Z ramifies in Z[i]: (2) = (1 + i)². The ramification index here is therefore e = 2. The residue field is OL/(1 + i)OL, which is the finite field with two elements. The decomposition group must be equal to all of G, since there is only one prime of Z[i] above 2. The inertia group is also all of G, since σ(a + bi) = a − bi ≡ a + bi (mod 1 + i) for any integers a and b. In fact, 2 is the only prime that ramifies in Z[i], since every prime that ramifies must divide the discriminant of Z[i], which is −4.
Example — the Gaussian integers:
Primes p ≡ 1 mod 4 Any prime p ≡ 1 mod 4 splits into two distinct prime ideals in Z[i]; this is a manifestation of Fermat's theorem on sums of two squares. For example: 13 = (2 + 3i)(2 − 3i). The decomposition groups in this case are both the trivial group {1}; indeed the automorphism σ switches the two primes (2 + 3i) and (2 − 3i), so it cannot be in the decomposition group of either prime. The inertia group, being a subgroup of the decomposition group, is also the trivial group. There are two residue fields, one for each prime, OL/(2 ± 3i)OL, which are both isomorphic to the finite field with 13 elements. The Frobenius element is the trivial automorphism; this means that (a + bi)¹³ ≡ a + bi (mod 2 ± 3i) for any integers a and b.
Example — the Gaussian integers:
Primes p ≡ 3 mod 4 Any prime p ≡ 3 mod 4 remains inert in Z[i]; that is, it does not split. For example, (7) remains prime in Z[i]. In this situation, the decomposition group is all of G, again because there is only one prime factor. However, this situation differs from the p = 2 case, because now σ does not act trivially on the residue field OL/(7)OL, which is the finite field with 7² = 49 elements. For example, the difference between 1 + i and σ(1 + i) = 1 − i is 2i, which is certainly not divisible by 7. Therefore, the inertia group is the trivial group {1}. The Galois group of this residue field over the subfield Z/7Z has order 2, and is generated by the image of the Frobenius element. The Frobenius element is none other than σ; this means that (a + bi)⁷ ≡ a − bi (mod 7) for any integers a and b.
Computing the factorisation:
Suppose that we wish to determine the factorisation of a prime ideal P of OK into primes of OL. The following procedure (Neukirch, p. 47) solves this problem in many cases. The strategy is to select an integer θ in OL so that L is generated over K by θ (such a θ is guaranteed to exist by the primitive element theorem), and then to examine the minimal polynomial H(X) of θ over K; it is a monic polynomial with coefficients in OK. Reducing the coefficients of H(X) modulo P, we obtain a monic polynomial h(X) with coefficients in F, the (finite) residue field OK/P. Suppose that h(X) factorises in the polynomial ring F[X] as h(X) = h1(X)^{e1} ⋯ hn(X)^{en}, where the hj are distinct monic irreducible polynomials in F[X]. Then, as long as P is not one of finitely many exceptional primes (the precise condition is described below), the factorisation of P has the following form: POL = Q1^{e1} ⋯ Qn^{en}, where the Qj are distinct prime ideals of OL. Furthermore, the inertia degree of each Qj is equal to the degree of the corresponding polynomial hj, and there is an explicit formula for the Qj: Qj = POL + hj(θ)OL, where hj denotes here a lifting of the polynomial hj to K[X].
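As a hedged illustration of this procedure for the Gaussian-integer example treated below (θ = i, H(X) = X² + 1), one can factor H modulo several primes with sympy; the exact printed factor representatives depend on sympy's residue conventions:

```python
# Factor H(X) = X^2 + 1 modulo p to read off the splitting of (p) in Z[i].
from sympy import Symbol, factor_list

X = Symbol("X")
for p in (2, 7, 13):
    _, factors = factor_list(X**2 + 1, modulus=p)
    print(p, factors)

# p = 2:  (X + 1)^2 repeated          -> ramified: (2) = (1 + i)^2
# p = 7:  X^2 + 1 irreducible         -> inert:    (7) stays prime
# p = 13: two distinct linear factors -> split:    (13) = (2 + 3i)(2 - 3i)
```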
Computing the factorisation:
In the Galois case, the inertia degrees are all equal, and the ramification indices e1 = ... = en are all equal.
The exceptional primes, for which the above result does not necessarily hold, are the ones not relatively prime to the conductor of the ring OK[θ]. The conductor is defined to be the ideal {y∈OL:yOL⊆OK[θ]}; it measures how far the order OK[θ] is from being the whole ring of integers (maximal order) OL.
A significant caveat is that there exist examples of L/K and P such that there is no available θ that satisfies the above hypotheses. Therefore, the algorithm given above cannot be used to factor such P, and more sophisticated approaches must be used.
An example: Consider again the case of the Gaussian integers. We take θ to be the imaginary unit i, with minimal polynomial H(X) = X² + 1. Since Z[i] is the whole ring of integers of Q(i), the conductor is the unit ideal, so there are no exceptional primes.
For P = (2), we need to work in the field Z/(2)Z, which amounts to factorising the polynomial X² + 1 modulo 2: X² + 1 ≡ (X + 1)² (mod 2).
Therefore, there is only one prime factor, with inertia degree 1 and ramification index 2, and it is given by Q=(2)Z[i]+(i+1)Z[i]=(1+i)Z[i].
The next case is P = (p) for a prime p ≡ 3 mod 4. For concreteness we will take P = (7). The polynomial X² + 1 is irreducible modulo 7. Therefore, there is only one prime factor, with inertia degree 2 and ramification index 1, and it is given by Q = (7)Z[i] + (i² + 1)Z[i] = 7Z[i].
The last case is P = (p) for a prime p ≡ 1 mod 4; we will again take P = (13). This time we have the factorisation X² + 1 ≡ (X + 5)(X − 5) (mod 13).
Therefore, there are two prime factors, both with inertia degree and ramification index 1. They are given by Q1 = (13)Z[i] + (i + 5)Z[i] = ⋯ = (2 + 3i)Z[i] and Q2 = (13)Z[i] + (i − 5)Z[i] = ⋯ = (2 − 3i)Z[i].
**N-Butylamine**
N-Butylamine:
n-Butylamine is an organic compound (specifically, an amine) with the formula CH3(CH2)3NH2. This colourless liquid is one of the four isomeric amines of butane, the others being sec-butylamine, tert-butylamine, and isobutylamine. It is a liquid having the fishy, ammonia-like odor common to amines. The liquid acquires a yellow color upon storage in air. It is soluble in all organic solvents. Its vapours are heavier than air and it produces toxic oxides of nitrogen during combustion.
Synthesis and reactions:
It is produced by the reaction of ammonia and alcohols over alumina: CH3(CH2)3OH + NH3 → CH3(CH2)3NH2 + H2O. n-Butylamine is a weak base; the pKa of [CH3(CH2)3NH3]+ is 10.78. n-Butylamine exhibits reactions typical of other simple alkyl amines, i.e., alkylation, acylation, and condensation with carbonyls.
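As a hedged illustration of what this pKa implies, via the Henderson–Hasselbalch relation (the chosen pH is an arbitrary example value, not from the text):

```python
pKa = 10.78   # conjugate acid [CH3(CH2)3NH3]+
pH = 7.4      # assumed example pH

ratio = 10 ** (pH - pKa)             # [free amine] / [protonated amine]
fraction_protonated = 1 / (1 + ratio)
print(f"{fraction_protonated:.4f}")  # ~0.9996: almost fully protonated
```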
It forms complexes with metal ions, examples being cis- and trans-[PtI2(NH2Bu)2].
Uses:
This compound is used as an ingredient in the manufacture of pesticides (such as thiocarbazides), pharmaceuticals, and emulsifiers. It is also a precursor for the manufacture of N,N′-dibutylthiourea, a rubber vulcanization accelerator, and n-butylbenzenesulfonamide, a plasticizer of nylon. It is used in the synthesis of fengabine, the fungicide benomyl, butamoxane, and the antidiabetic tolbutamide.
Safety:
The LD50 to rats through the oral exposure route is 366 mg/kg. In regard to occupational exposure to n-butylamine, the Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health have set occupational exposure limits at a ceiling of 5 ppm (15 mg/m3), with a skin designation.
**Driving club**
Driving club:
In the 19th century, a driving club was a membership club for the recreational practice of carriage driving.
Early British driving clubs:
One of the first driving clubs was the Bensington Driving Club, founded in February 1807 at Bensington, Oxfordshire; it was also known as the Benson Driving Club when Bensington became Benson, and was commonly referred to as "the B.D.C.". It was disbanded in 1854. The BDC initially met in the White Hart public house. Later the club was relocated to Bedfont, becoming the Bedfont Driving Club with ease (since the initials remained the same), and met in the Black Dog public house. As a consequence it was also known by the informal name the Black and White Club. Its first president was Charles Finch. Finch's successor as president was Thomas Onslow, 2nd Earl of Onslow, a.k.a. "Tommy" Onslow. The members of the club were caricatured in the character of Goldfinch in Holcroft's comedy The Road to Ruin. Tommy Onslow was ridiculed in two epigrams, the second a variation of the first; in fact, these were variants of a rhyme that had followed Onslow from his days as a "whip" long before the founding of the Four-In-Hand Club, where he had driven a phaeton. In Athenaeum one correspondent reported that the verse had been popular in Onslow's younger days, in Surrey, at the start of the 19th century.

The Four Horse Club The (friendly) rival Four Horse Club was founded the year after the BDC, in April 1808, but did not last as long. It was founded because the membership of the BDC was limited to 25 people; Charles Buxton, the inventor of the Buxton bit, along with some friends, therefore founded the Four Horse Club. It was also informally known by various other names: the Four-In-Hand Club (after four-in-hand), the Whip Club, and the Barouche Club. The third name was after a type of horse carriage called a barouche, which was driven by its members. The club rules dictated that a barouche should have silver-mounted harnesses, rosettes at the horses' heads, yellow bodies, "dickies", and bay horses; however, the final requirement was relaxed. Club members Sir Henry Peyton and Mr Annesley drove roan horses. The Four Horse Club rules also had strict dictates about clothing for the drivers. They required a drab coat that reached down to one's ankles, decorated with large mother-of-pearl buttons, and three tiers of pockets; a blue waistcoat with inch-wide yellow stripes; knee-length breeches with strings and rosettes, made of plush; and a hat that was at least 3.5 inches deep in the crown. The Club regularly drove as a group to Salt Hill, where they spent a convivial evening and the night, before driving back to London. The FHC encountered difficulties in 1820 and revived in 1822 with slightly different club rules, but lasted only until 1826. An 1820 joke went the rounds, of a person addressing an FHC member with "I hear that you men have broken up", to which the reply was "No. We've broken down; the FHC had not enough in hand to keep on with." The modified rules called for a brown landaulet carriage, without ornaments; no restrictions upon horse colour; and brass-mounted harnesses.
Early British driving clubs:
The Richmond Driving Club The Richmond Driving Club was founded in 1838 by Lord Chesterfield. It lasted only until 1845. It used to meet at Lord Chesterfield's house and drive, in procession, to dinner at the Castle Hotel in Richmond. It was satirized in verse by Robert Smith Surtees. The Duke of Beaufort, named in the poem, did take part in the processions, but was not actually a member of the RDC. Mr Angerstein, also named, was a particularly reckless driver, whose reputation led no-one to want to ride with him. An anecdote relates that on one occasion someone unwittingly climbed into Angerstein's carriage after dinner for the ride home. Angerstein, so excited that someone had actually chosen to ride with him, set off immediately, without waiting for the rest of the procession, and so suddenly that his passenger was thrown head-over-heels. The passenger, realizing whose carriage he had embarked upon, said nothing and jumped straight off.
Early British driving clubs:
The Four-In-Hand Driving Club The Four-In-Hand Driving Club was founded in 1856.
Driving clubs in the United States:
19th century popularity Enthusiasts in Boston, Massachusetts formed several driving clubs (also called "gentlemen's driving clubs"), and so-called trotting associations, in the second half of the 19th century. They would race in three locations: the Readville Race Course, the Riverside Riding Park in Allston (later to be named Beacon Park), and the South End Driving Park. The most famous of these clubs, the Metropolitan Driving Club, conducted races for several decades, until the rise in popularity of the motor car caused carriage driving to lose its appeal.
Driving clubs in the United States:
20th and 21st century A 2002 estimate by the USTA was that there were over 500 members of the various registered driving clubs in the United States. Most of these driving clubs are small, holding driving contests at the home tracks before the regular horse races on the racing card. There are additional organizations dedicated to the sport of combined driving. Still others focus on the driving of draft horses and other non-racing breeds for primarily recreational purposes.
**Dihydrouracil oxidase**
Dihydrouracil oxidase:
In enzymology, a dihydrouracil oxidase (EC 1.3.3.7) is an enzyme that catalyzes the chemical reaction 5,6-dihydrouracil + O2 ⇌ uracil + H2O2. Thus, the two substrates of this enzyme are 5,6-dihydrouracil and O2, whereas its two products are uracil and H2O2.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-CH group of donors with oxygen as acceptor. The systematic name of this enzyme class is 5,6-dihydrouracil:oxygen oxidoreductase. It employs one cofactor, FMN.
**Working through**
Working through:
In psychodynamic psychotherapy, working through is seen as the process of repeating, elaborating, and amplifying interpretations. It is believed that such working through is critical to the success of therapy. The concept was introduced by Sigmund Freud in 1914, and assumed ever greater importance in psychoanalysis, in contrast to the immediacy of abreaction.
Interpretation and resistance:
Interpretations are made when the client comes up with some material, be it written, a piece of art, music, or verbal, and are intended to bring the material offered into connection with the unconscious mind. Because of the resistance to accepting the unconscious, interpretations, whether correct or partially incorrect, consciously accepted or rejected, will inevitably require amplifying and extending to other aspects of the client's life. In a process Sandor Rado compared to the labour of mourning, the unconscious content must be demonstrated repeatedly in all its various forms and linkages – the process of working through. Because of the power of resistance, the client's rational thought and conscious awareness may not be sufficient on their own to overcome the maladjustment, entailing further interpretation and further working through.
Interpretation and resistance:
Rat Man Before formulating the concept of working through, in his case study of the Rat Man, Freud wrote of his interpretations: "It is never the aim of discussions like these to create convictions. They are only intended to bring the repressed complexes into consciousness...and to facilitate the emergence of fresh material from the unconscious. A sense of conviction is only attained after the patient has himself worked over the reclaimed material".
Transference:
The necessity of working through the transference is stressed in almost all forms of psychodynamic therapy, from object relations theory, through the openings offered for working through by transference disruption in self psychology, to the repetitive exploration of the transference in group therapy.
Cultural analogues:
Jacques Lacan compared the process of working-through to the stylistic recommendations of Boileau: "Cent fois sur le métier, remettez..." ("A hundred times consider what you've said", in Pope's rendering).
**Membrane ruffling**
Membrane ruffling:
Within molecular and cell biology, membrane ruffling (also known as cell ruffling) is the formation of a motile cell surface that contains a meshwork of newly polymerized actin filaments. It can also be regarded as one of the earliest structural changes observed in the cell. The GTP-binding protein Rac is the regulator of this membrane ruffling. Changes in polyphosphoinositide metabolism and changes in the Ca2+ level of the cell may also play an important role. A number of actin-binding and organizing proteins localize to membrane ruffles and potentially target to transducing molecules.
Characteristic feature of migrating cells:
Membrane ruffling is a characteristic feature of many actively migrating cells. When the membrane is unable to attach to the substrate, the membrane protrusion is recycled back into the cell. The ruffling of membranes is thought to be controlled by a group of enzymes known as Rho GTPases, specifically RhoA, Rac1 and Cdc42.
Bacterial infection:
Some bacteria such as enteropathogenic E. coli and enterohemorrhagic E. coli can induce membrane ruffling by secreting toxins via the type three secretion system and modifying the host cytoskeleton. Such toxins include EspT, Map, and SopE, which mimic RhoGEF and activate endogenous Rho GTPases to manipulate actin polymerisation in the infected cell. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nanoring**
Nanoring:
A nanoring is a cyclic nanostructure with a thickness small enough to be on the nanoscale (10⁻⁹ m). Note that this definition allows the diameter of the ring to be larger than the nanoscale. Nanorings are a relatively recent development within the realm of nanoscience; the first peer-reviewed journal article mentioning these nanostructures came from researchers at the Institute of Physics and Center for Condensed Matter Physics in Beijing, who synthesized nanorings made of gallium nitride in 2001. Zinc oxide, a compound very commonly used in nanostructures, was first synthesized into nanorings by researchers at Georgia Institute of Technology in 2004, and several other common nanostructure compounds have been synthesized into nanorings since. More recently, carbon-based nanorings have been synthesized from cyclo-para-phenylenes as well as porphyrins.
Overview:
Although nanorings may have a diameter on the nanoscale, many of these materials have diameters larger than 100 nm, with many nanorings having a diameter on the microscale (10⁻⁶ m). As such, nanorings are considered to be members of a sub-class of nanomaterials called one-dimensional (1-D) nanomaterials. These are nanomaterials in which one of the three physical dimensions in a single unit of the material is on a length scale greater than the nanoscale. Other examples of one-dimensional nanomaterials are nanowires, nanobelts, nanotubes, and nanosheets.
Overview:
Mechanical uniqueness: As with other nanomaterials, much of the practical interest in nanorings arises from the fact that in nanorings one can often observe quantized phenomena which are ordinarily unobservable in bulk matter. Nanorings, in particular, have a few additional properties of particular research interest. One-dimensional nanostructures have a variety of potential uses and applications, but due to the dimensions of their extended crystal structures, they cannot be grown on discrete crystal growth sites and thus cannot be synthesized on a substrate with any crystallographic predictability. Therefore, nanorings are most commonly synthesized aqueously by creating entropically unique conditions which induce spontaneous nanoring self-assembly. These materials are much more useful if they can be easily manipulated by mechanical or magnetic forces, as many one-dimensional nanostructures are extremely fragile and, thus, difficult to manipulate into useful environments. It has now been demonstrated that ZnO nanorings made from the spontaneous folding of a single nanobelt crystal can be extensively mechanically manipulated without breaking or fracturing, giving them a unique mechanical advantage over other classes of ZnO nanostructures.
Synthesis:
Generally, nanorings are synthesized using a bottom-up approach, as top-down syntheses are limited by the entropic barriers presented by these materials. Currently, the range of synthetic techniques used to make these particles is almost as diverse as the range of nanoring types themselves. One common method for synthesizing nanorings involves first synthesizing nanobelts or nanowires with an uneven charge distribution focused on the edges of the material. These particles will naturally self-assemble into ring structures such that Coulomb repulsion forces are minimized within the resulting crystal. Other approaches for nanoring synthesis include the assembly of a nanoring around a small seed particle which is later removed, or the expansion and twisting of porphyrin-like structures into a hollow nanoring structure. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Prey switching**
Prey switching:
Prey switching is frequency-dependent predation, where the predator preferentially consumes the most common type of prey. The phenomenon has also been described as apostatic selection; however, the two terms are generally used to describe different parts of the same phenomenon. Apostatic selection has been used by authors looking at the differences between different genetic morphs. In comparison, prey switching has been used when describing the choice between different species.
Definition:
The term switching was first coined by the ecologist Murdoch in 1969 to describe the situation where a predator eats disproportionately more of the most common type of prey. Eight years earlier, in 1962, the geneticist B. C. Clarke described a similar phenomenon and called it "apostatic selection". Since then the term prey switching has mainly been used by ecologists, while apostatic selection has been used by geneticists, and because of this they have been used to describe different aspects of frequency-dependent selection.
Definition:
One of the ways prey switching has been identified and defined is when a predator's preference for a particular type of prey increases as that prey increases in abundance. The result is a strong preference for prey which are common in the environment and a weak preference for prey which are rare. The definition of preference will therefore shape the understanding of switching. The most common definition of preference is the relationship between the ratio of prey in the environment and the ratio of prey in a predator's diet. It has been independently proposed a number of times and is described by the equation P1/P2 = c (N1/N2), or equivalently c = (P1/P2)/(N1/N2), where N1 and N2 are the abundances of prey types 1 and 2 in the environment and P1 and P2 are the abundances of the same prey types in the predator's diet; c is the preference for prey type 1. If the value of c increases over time with N1/N2, prey switching is presumed to occur. The opposite of prey switching is when a predator eats disproportionately more of the rarest prey than would be expected by chance. From the equation above, this occurs when c (preference) decreases over time as N1/N2 (the ratio in the environment) increases. This opposite phenomenon has been called negative prey switching, or anti-apostatic selection when it refers to the choice between different morphs. Negative prey switching may occur when the more plentiful prey is harder to hunt or riskier. Prey switching has been in the scientific literature since about 1960, but since the initial work, Hassell has suggested that interest in prey switching has fallen because it is hard to demonstrate whether it has occurred or is occurring.
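A minimal numerical sketch of that preference index (with made-up counts; the variable names mirror the equation above) shows how a rising c alongside a rising N1/N2 signals prey switching:

```python
# Hypothetical field observations: N1, N2 = abundances of prey types 1 and 2
# in the environment; P1, P2 = abundances of the same types in the diet.
observations = [
    {"N1": 20, "N2": 80, "P1": 10, "P2": 90},  # prey type 1 rare
    {"N1": 50, "N2": 50, "P1": 55, "P2": 45},
    {"N1": 80, "N2": 20, "P1": 95, "P2": 5},   # prey type 1 common
]

for obs in observations:
    c = (obs["P1"] / obs["P2"]) / (obs["N1"] / obs["N2"])
    print(f"N1/N2 = {obs['N1'] / obs['N2']:.2f}  ->  preference c = {c:.2f}")

# c increases with N1/N2 (0.44 -> 1.22 -> 4.75 here): the signature of prey
# switching. A c that decreased instead would indicate negative prey switching.
```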
Mechanisms:
A consumer may switch from eating one resource to eating another because doing so may increase its foraging efficiency and therefore its inclusive fitness. It has been argued that frequency-dependent predation is predicted by optimal foraging theory. In particular, the contingency model predicts that in some circumstances the most profitable resource should be eaten at the expense of less profitable resources, and that this decision is based on the absolute density of the most profitable type of resource. However, frequency-dependent predation can occur even when the absolute density of the most profitable resource remains constant. These ultimate mechanisms help to demonstrate how prey switching and apostatic selection fit into overarching ecological theory. In addition, there are proximate mechanisms which may account for why an individual preferentially feeds on the most abundant type of prey.
Mechanisms:
The location and timing of a consumer's feeding can account for switching behaviour. In experiments with guppies, the switching behaviour displayed was due to the choice of patch. Likewise, the switching behaviour of stoneflies was due to the time they were active. The formation of a search image may also lead the consumer to switch which prey it eats. Real suggests that a mechanism similar to a search image may account for the switching behaviour displayed by Bombus pensylvanicus, but is reluctant to use the term search image, instead suggesting some kind of perceptual constraint. Prey switching may also occur if the consumer becomes more efficient at capturing the most common type of prey, for example through increased practice at capturing it. This was found to be the case for Anax junius, which fed on either mayfly nymphs or tubifex worms. From this, Bergelson derived the rule of thumb that consumers should "continue to pursue only those prey types you have successfully captured in the immediate past." Prey switching can alter the influence of predation on ecosystem function; for example, predators that switch between feeding on herbivores and detritivores can link green and brown food webs. In general, a limited number of studies have identified mechanisms responsible for prey switching behaviour, though it has been suggested that a consumer's choice of feeding location may be the most important mechanism. Conversely, search image remains controversial, with disagreement over whether it actually occurs in nature and, if it does, whether it is important.
Outcomes:
If a predator displays prey switching behaviour, it can have a large effect on the stability of the system, the coexistence of prey species, ecosystem functioning, and evolutionary diversification.
Prey switching can promote coexistence between prey species. For example, prey switching causes predation to be very low on prey which are rare, which can create prey refugia that aid coexistence. More generally than coexistence, prey switching has often been proposed to stabilise predator-prey dynamics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Maximum sustained wind**
Maximum sustained wind:
The maximum sustained wind associated with a tropical cyclone is a common indicator of the intensity of the storm. Within a mature tropical cyclone, it is found within the eyewall at a distance defined as the radius of maximum wind, or RMW. Unlike gusts, the value of these winds is determined by sampling and averaging the sampled results over a period of time. Wind measurement has been standardized globally to reflect the winds at 10 metres (33 ft) above mean sea level, and the maximum sustained wind represents the highest average wind over either a one-minute (US) or ten-minute time span (see the definition below), anywhere within the tropical cyclone. Surface winds are highly variable due to friction between the atmosphere and the Earth's surface, as well as near hills and mountains over land.
Maximum sustained wind:
Over the ocean, satellite imagery determines the value of the maximum sustained winds within a tropical cyclone. Land, ship, aircraft reconnaissance observations, and radar imagery can also estimate this quantity, when available. This value helps determine damage expected from a tropical cyclone, through use of such scales as the Saffir–Simpson scale.
Definition:
The maximum sustained wind normally occurs at a distance from the center known as the radius of maximum wind, within a mature tropical cyclone's eyewall, before winds decrease at farther distances away from a tropical cyclone's center. Most weather agencies use the definition for sustained winds recommended by the World Meteorological Organization (WMO), which specifies measuring winds at a height of 10 metres (33 ft) for 10 minutes, and then taking the average. However, the United States National Weather Service defines sustained winds within tropical cyclones by averaging winds over a period of one minute, measured at the same 10 metres (33 ft) height. This is an important distinction, as the value of the highest one-minute sustained wind is about 14% greater than a ten-minute sustained wind over the same period.
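As a rough worked example of that relationship (a sketch only; the 14% figure above is an approximation, and operational conversion factors vary with conditions), converting between the two averaging conventions is simple arithmetic:

```python
# Assumed ratio between 1-minute and 10-minute sustained winds (~14% higher,
# per the text above); real-world conversion factors vary by agency and basin.
ONE_MIN_OVER_TEN_MIN = 1.14

def ten_min_to_one_min(v10):
    """Estimate a 1-minute sustained wind from a 10-minute average."""
    return v10 * ONE_MIN_OVER_TEN_MIN

def one_min_to_ten_min(v1):
    """Estimate a 10-minute sustained wind from a 1-minute average."""
    return v1 / ONE_MIN_OVER_TEN_MIN

# A 100-knot 10-minute wind corresponds to roughly a 114-knot 1-minute wind.
print(ten_min_to_one_min(100.0))
```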
Determination of value:
In most tropical cyclone basins, use of the satellite-based Dvorak technique is the primary method used to determine a tropical cyclone's maximum sustained winds. The extent of spiral banding and the difference in temperature between the eye and eyewall are used within the technique to assign a maximum sustained wind and pressure; central pressure values obtained this way are approximate. The tracking of individual clouds on one-minute satellite imagery could be used in the future to estimate surface wind speeds in tropical cyclones. Ship and land observations are also used, when available. In the Atlantic as well as the Central and Eastern Pacific basins, reconnaissance aircraft are still used to fly through tropical cyclones to determine flight-level winds, which can then be adjusted to provide a fairly reliable estimate of maximum sustained winds. A reduction of 10 percent of the winds sampled at flight level is used to estimate the maximum sustained winds near the surface, a factor established over the past decade through the use of GPS dropwindsondes. Doppler weather radar can be used in the same manner to determine surface winds in tropical cyclones near land.
Variation:
Friction between the atmosphere and the Earth's surface causes a 20% reduction in the wind at the surface of the Earth. Surface roughness also leads to significant variation in wind speeds. Over land, winds are strongest at hill or mountain crests, while sheltering leads to lower wind speeds in valleys and on lee slopes. Compared to over water, maximum sustained winds over land average 8% lower. In particular, over a city or rough terrain, the wind gradient effect can cause a reduction of 40% to 50% of the geostrophic wind speed aloft, while over open water or ice the reduction is between 10% and 30%.
Relationship to tropical cyclone strength scales:
In most basins, maximum sustained winds are used to define a tropical cyclone's category. In the Atlantic and northeast Pacific oceans, the Saffir–Simpson scale is used. This scale can be used to determine possible storm surge and damage impact on land. In most basins, the category of the tropical cyclone (for example, tropical depression, tropical storm, hurricane/typhoon, super typhoon, depression, deep depression, intense tropical cyclone) is determined from the cyclone's maximum sustained wind over 1 minute. Only in Australia is this quantity not used to define the tropical cyclone's category; in that basin, maximum sustained wind speed is measured over 10 minutes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Amagat's law**
Amagat's law:
Amagat's law or the Law of Partial Volumes describes the behaviour and properties of mixtures of ideal (as well as some cases of non-ideal) gases. It is of use in chemistry and thermodynamics. It is named after Émile Amagat.
Overview:
Amagat's law states that the extensive volume $V = Nv$ of a gas mixture is equal to the sum of the volumes $V_i$ of the $K$ component gases, if the temperature $T$ and the pressure $p$ remain the same: $Nv(T,p) = \sum_{i=1}^{K} N_i v_i(T,p).$
This is the experimental expression of volume as an extensive quantity.
Overview:
According to Amagat's law of partial volume, the total volume of a non-reacting mixture of gases at constant temperature and pressure should be equal to the sum of the individual partial volumes of the constituent gases. So if $V_1, V_2, \ldots, V_n$ are considered to be the partial volumes of the components in the gaseous mixture, then the total volume $V$ is represented as: $V = V_1 + V_2 + V_3 + \cdots + V_n = \sum_i V_i.$ Both Amagat's and Dalton's laws predict the properties of gas mixtures. Their predictions are the same for ideal gases. However, for real (non-ideal) gases, the results differ. Dalton's law of partial pressures assumes that the gases in the mixture are non-interacting (with each other) and each gas independently applies its own pressure, the sum of which is the total pressure. Amagat's law assumes that the volumes of the component gases (again at the same temperature and pressure) are additive; the interactions of the different gases are the same as the average interactions of the components.
Overview:
The interactions can be interpreted in terms of a second virial coefficient, $B(T)$, for the mixture. For two components, the second virial coefficient for the mixture can be expressed as: $B(T) = X_1 B_1 + X_2 B_2 + X_1 X_2 B_{1,2},$ where the subscripts refer to components 1 and 2, the $X$'s are the mole fractions, and the $B$'s are the second virial coefficients. The cross term, $B_{1,2}$, of the mixture is given by: $B_{1,2} = 0$ (Dalton's law) and $B_{1,2} = \frac{B_1 + B_2}{2}$ (Amagat's law). When the volumes of each component gas (at the same temperature and pressure) are very similar, Amagat's law becomes mathematically equivalent to Vegard's law for solid mixtures.
Ideal gas mixture:
When Amagat's law is valid and the gas mixture is made of ideal gases: $\frac{V_i}{V} = \frac{n_i RT/p}{nRT/p} = \frac{n_i}{n} = x_i,$ where $p$ is the pressure of the gas mixture, $V_i = n_i RT/p$ is the volume of the $i$-th component of the gas mixture, $V = \sum_i V_i$ is the total volume of the gas mixture, $n_i$ is the amount of substance of the $i$-th component of the gas mixture (in mol), $n = \sum_i n_i$ is the total amount of substance of the gas mixture (in mol), $R$ is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant, $T$ is the absolute temperature of the gas mixture (in K), and $x_i = n_i/n$ is the mole fraction of the $i$-th component of the gas mixture. It follows that the mole fraction and volume fraction are the same. This is also true for other equations of state. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
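A minimal numeric sketch of that result (an illustrative air-like mixture at assumed standard conditions), confirming that volume fractions equal mole fractions for ideal gases:

```python
R = 8.314  # J/(mol K), universal gas constant

def partial_volumes(moles, T, p):
    """Amagat partial volumes V_i = n_i * R * T / p for an ideal-gas mixture."""
    return [n_i * R * T / p for n_i in moles]

n = [0.78, 0.21, 0.01]       # mol of N2, O2, Ar (illustrative composition)
T, p = 298.15, 101325.0      # K and Pa (assumed conditions)

V_i = partial_volumes(n, T, p)
V = sum(V_i)                 # total volume: Amagat's sum of partial volumes

# Each volume fraction V_i/V equals the mole fraction x_i = n_i/n.
for n_i, v in zip(n, V_i):
    print(f"x_i = {n_i / sum(n):.3f}   V_i/V = {v / V:.3f}")
```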
**Cofactor Genomics**
Cofactor Genomics:
Cofactor Genomics is a biotech company that primarily focuses on drug development, medical research, and personalized medicine.
Overview:
Cofactor Genomics was founded in August 2008 by Jarret Glasscock, Matt Hickenbotham, and Ryan Richt. The technological advances brought on by Next Gen sequencing encouraged Glasscock, Hickenbotham and Richt to leave Washington University Genome Center and purchase their first Next Gen genome analyzer with capital raised from an angel investor in California.
Overview:
In November 2013, Cofactor Genomics opened a 10,000 ft² headquarters in St. Louis, MO. The company also has offices in San Diego and San Francisco. In July 2015, Cofactor Genomics was awarded a $1.5 million Phase II Small Business Innovation Research (SBIR) grant by the National Institute on Drug Abuse (NIDA) at the National Institutes of Health (NIH). In addition to working with clients such as Ozzy Osbourne, Cofactor Genomics is involved in a number of research projects, including the preservation of the white rhino and the black-footed ferret.
Executive officers:
Jarret Glasscock is a co-founder as well as the chief executive officer of Cofactor Genomics. Glasscock completed his undergraduate degree at the University of Arizona, where he majored in biology with a focus in the computer sciences. Upon graduation, Glasscock pursued his doctorate in genetics at Washington University in St. Louis, where he studied under Warren Gish, developer of the NCBI BLAST sequence analysis program.
Executive officers:
Jon Armstrong is the chief scientific officer of Cofactor Genomics. After acquiring his master's degree in neuroscience, Armstrong spent nine years in the Genome Center at Washington University's Technology Development Group. David Messina is the chief operating officer of Cofactor Genomics. He has spent the last 19 years in computational biology and genetics. He worked on the Human Genome Project at Washington University in St. Louis, trained in molecular biology and human genetics at the University of Chicago, and earned his PhD in computational biology in Stockholm, Sweden. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Campaign Hexagon System**
Campaign Hexagon System:
Campaign Hexagon System is a book guide published by Judges Guild in 1977 for the Dungeons & Dragons game.
Contents:
Campaign Hexagon System is a 1977 book published by Judges Guild for use as an accessory with the Dungeons & Dragons game.
Contents:
Campaign Hexagon System is a GM's aid: a book of blank hexagon sheets overprinted with larger gray hexes so the GM can integrate wilderness maps of varying scales. It includes tables for generating wilderness terrain. Campaign Hexagon System is a booklet whose main portion is devoted to over 60 blank campaign hex grids. A rectangular hexagonal tessellation of about 1000 small hexes appears on each page, and superimposed on this grid is a large hex representing 5 miles across the flats. The booklet also contains other guidelines generally relevant to a fantasy wilderness campaign, including Keen Sighting, Hydrographic Terrain (such as rivers and streams), Movement Obstacles, Prospecting (for ore, precious minerals, and the like), Flora Types, Vegetables, and Fauna Classifications.
Publication history:
Campaign Hexagon System was written by Bob Bledsaw and Bill Owen, and was published by Judges Guild in 1977 as a 64-page book. A listing of cumulative sales from 1981 shows that Campaign Hexagon System sold over 20,000 units.
Reception:
Don Turnbull reviewed Campaign Hexagon System for White Dwarf #6. He commented that "This is a useful booklet of records for those involved in a fantasy 'wilderness' campaign game". Turnbull concluded his review by saying, "Though I am not personally involved in 'outdoor' fantasy gaming at the moment, I should have thought this to be a most valuable source of reference data for player and gamemaster alike." Patrick Amory reviewed Campaign Hexagon System for Different Worlds magazine and stated that "In the front are useful and extraordinarily detailed charts for determining types of flora and fauna, just which way that stream bends, and the exact depth of that gorge - all with adjustments for latitude. This play-aid will be incomparably useful to all serious GMs." Shannon Appelcline called the Campaign Hexagon System (1977) a "clever gamemaster aid, this one a set of blank hex maps that gamemasters could use to portray large wilderness areas. It pushed Judges Guild's ideas of large-scale campaigns — something that they alone in the industry were concentrating on at the time — and matched the campaign hexes that they used to depict the lands around their City State."
Reviews:
The Playboy Winner's Guide to Board Games | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Myofascial release**
Myofascial release:
Myofascial release (MFR, self-myofascial release) is an alternative medicine therapy claimed to be useful for treating skeletal muscle immobility and pain by relaxing contracted muscles, improving blood and lymphatic circulation and stimulating the stretch reflex in muscles. Fascia is a thin, tough, elastic type of connective tissue that wraps most structures within the human body, including muscle. Fascia supports and protects these structures. Osteopathic practice holds that this soft tissue can become restricted due to psychogenic disease, overuse, trauma, infectious agents, or inactivity, often resulting in pain, muscle tension and corresponding diminished blood flow.
Description and conceptual basis:
Writing for Science-Based Medicine, Harriet Hall described myofascial release as an umbrella term for several types of physical manipulation, which might more simply be described as a kind of massage based on vaguely-defined scientific notions.
Effectiveness:
The American Cancer Society states that "There is little scientific evidence available to support proponents' claims that myofascial release relieves pain or restores flexibility" and cautions against using it as a substitute for conventional cancer treatment. The poor quality of research into the use of myofascial release for orthopaedic conditions precludes any conclusions being drawn about its usefulness for this purpose. In 2011, the UK Advertising Standards Authority (ASA) upheld a complaint regarding the effectiveness claims published in an advertising leaflet produced by the Myofascial Release UK health care service. The ASA Council ruled that materials presented by Myofascial Release UK in support of the claims made in their ad were inadequate to establish a "body of robust scientific evidence" to substantiate Myofascial Release UK's range of claims. In addition, the ASA determined that the ad breached advertising rules by introducing a risk that readers might be discouraged from seeking other essential medical treatments.
Effectiveness:
Reviews published in 2013 and 2015 evaluating the evidence for MFR's efficacy found that the clinical trials that had been conducted varied in quality, technique and outcome measurements, and had mixed outcomes; the 2015 review noted: "it is time for scientific evidences on MFR to support its clinical use." Another review concluded that the use of foam rollers or a roller massager before or after exercise for self-myofascial release has been observed to decrease soreness due to delayed-onset muscle soreness (DOMS), and that self-myofascial release appears to have no negative effect on performance. However, the optimal timing and duration of use require further study.
History:
The approach was promulgated as an alternative medicine concept by Andrew Taylor Still, inventor of osteopathy, and his early students. The exact phrase "myofascial release" was coined in the 1960s by Robert Ward, an osteopath who studied with Ida Rolf, the originator of Rolfing. Ward and physical therapist John F. Barnes are considered the two primary founders of myofascial release. Ward also suggests, in other sources, that the term "myofascial release" was coined in 1981, when it was used as the name of a course taught at Michigan State University. It was popularized and taught to therapists, massage therapists and occupational therapists by John F. Barnes through his seminars. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chip race**
Chip race:
A chip race is an event that takes place in poker tournaments, especially those with escalating blinds (such as Texas hold 'em), in which chips of denominations that are no longer needed (as the current and upcoming blinds are more easily played with larger chip values) are removed from play. This has the effect of reducing the number of physical chips in front of any player, and makes it easier for the players to count their stacks and their bets.
Chip race:
In a typical chip race: All players color up their lesser-valued chips into greater denominations. For example, if the blinds have increased to a level where $5 chips are no longer needed to post blinds, every five $5 chips are exchanged for one $25 chip. Players temporarily keep any leftover chips that cannot be fully colored up to larger chips (fewer than five $5 chips in the above example).
Chip race:
All leftover chips are counted, and equivalent chips in the larger denomination are presented to the table. Continuing the example, if there are 15 $5 chips remaining among 6 players, 3 $25 chips are prepared. In the event the remaining smaller chips do not add up to a whole larger chip, an extra larger chip should be added as long as the leftover smaller chips total at least half a single larger chip.
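A small sketch of that color-up arithmetic (hypothetical denominations and counts), including the round-up rule for leftovers worth at least half a large chip:

```python
def chips_to_award(leftover_small, small_value, large_value):
    """Number of large chips prepared for the race, rounding up when the
    leftovers total at least half a large chip."""
    total = leftover_small * small_value
    whole, remainder = divmod(total, large_value)
    if remainder * 2 >= large_value:  # leftovers are at least half a large chip
        whole += 1
    return whole

# 15 leftover $5 chips -> $75 -> exactly 3 $25 chips, as in the example above.
print(chips_to_award(15, 5, 25))  # 3
# 17 leftover $5 chips -> $85 -> $10 remainder (< $12.50) -> still 3 chips.
print(chips_to_award(17, 5, 25))  # 3
# 18 leftover $5 chips -> $90 -> $15 remainder (>= half of $25) -> 4 chips.
print(chips_to_award(18, 5, 25))  # 4
```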
Chip race:
Each player with leftover chips in the smaller denomination will receive one card for each chip. The cards are typically dealt face up, starting from seat one, to the dealer's left. Each player due to receive cards is dealt all of his cards before the next player, rather than in a "traditional" round-by-round deal; the player in the small blind, for example, who is due three cards for his three chips, receives all three of his cards before the big blind receives any.
Chip race:
The larger chips are issued to the players with the highest single cards showing (poker hands do not count). No player is issued more than one chip. Ties (cards of the same rank) are broken by suit, using the same bridge (ascending alphabetical) order of the suits: Spades are highest, followed by Hearts, Diamonds, and Clubs. All remaining lesser-value chips are removed from play. A chip race cannot eliminate a player from the game. In the event a player's last smaller-denomination chips are removed from play as part of the chip race, he automatically gets one chip of the lowest value still in play. To make it easier to manage the chip race, it is advised that one player at the table (normally the player who currently holds most of the chips that are about to be eliminated) buys up all the smaller chips from the other players' stacks and exchanges them for chips of equal value in the higher denomination. Having all the chips in one stack makes them easier to count up and exchange. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tiazofurin**
Tiazofurin:
Tiazofurin is a drug which acts as an inhibitor of the enzyme IMP dehydrogenase. Tiazofurin and its analogues were under investigation for potential use in the treatment of cancer, though side effects such as pleuropericarditis and a flu-like syndrome precluded further development. They also show antiviral effects and may be reevaluated as potential options in the treatment of newly emerging viral diseases. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Charring**
Charring:
Charring is a chemical process of incomplete combustion of certain solids when subjected to high heat. Heat distillation removes water vapour and volatile organic compounds (syngas) from the matrix. The residual black carbon material is char, as distinguished from the lighter-colored ash. By the action of heat, charring removes hydrogen and oxygen from the solid, so that the remaining char is composed primarily of carbon. Thermoset polymers and most solid organic materials, such as wood or biological tissue, exhibit charring behaviour.
Charring:
Charring means partially burning so as to blacken the surface. Charring can result from naturally occurring processes like fire; it is also a deliberate and controlled reaction used in the manufacturing of certain products. The mechanism of charring is part of the normal burning of certain solid fuels like wood. During normal combustion, the volatile compounds created by charring are consumed at the flames within the fire or released to the atmosphere, while combustion of char can be seen as glowing red coals or embers which burn without the presence of flames.
Production of char:
Coke and charcoal are both produced by charring, whether on an industrial scale or through normal combustion of coal or wood. Normal combustion consumes the char as well as the gases produced in its creation, while industrial processes seek to recover the purified char with minimal loss to combustion. This is accomplished by either burning the parent fuel (wood or coal) in a low-oxygen environment or by heating it to a high temperature without allowing combustion to occur. In industrial production of coke and charcoal, the volatile compounds driven off during charring are often captured for use in other chemical processes.
Production of char:
A "coal burning" blacksmith's forge actually produces the heat necessary for high-temperature metalworking by the continuous production and consumption of coke within a carefully managed fire. An inner ring of burning coke provides heat which converts the encircling coal into coke, which is then itself fed into the center of the fire to provide the required heat and to create more coke; coal itself is incapable of producing the heat required for some blacksmithing operations.
Charring and fire protection:
Charring is an important process in the combustion ignition of solid fuels and in smouldering. In construction of heavy-timbered wood buildings the predictable formation of char is used to determine the fire rating of supporting timbers and is an important consideration in fire protection engineering. If a wood column is of large enough diameter, during a structure fire its exposed surface will be converted to char until the thickness of char provides sufficient insulation to prevent additional charring. This layer then serves to protect the remaining structurally sound core of wood, which can continue to carry the building loads if appropriately designed.
Wood preservation:
Charring is also a technique used for wood preservation. In Japan this traditional technique is called yakisugi or shō sugi ban.
Legal definitions:
Charring had a special meaning under the common law of England. Under that system, the crime of arson required charring of a dwelling—actual damage to the fiber of the material from which the structure was built—and not mere "scorching" or damage to the surface, or to surface coverings such as carpets and wallpaper. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Calcar avis**
Calcar avis:
The calcar avis, previously known as the hippocampus minor, is an involution of the wall of the lateral ventricle's posterior cornu produced by the calcarine fissure. It is sometimes visible on ultrasonogram, where it can resemble a clot.
Name:
The ridge was originally described by anatomists as the calcar avis, while the ridge running along the floor of the temporal horn of the lateral ventricle was described by various names, in particular as the hippocampus. A classical allusion was introduced later with the term pes hippocampi, which may date back to Diemerbroeck in 1672, introducing a comparison with the shape of the folded back forelimbs and webbed feet of the Classical hippocampus (Greek: ἱππόκαμπος), a sea monster with a horse's forequarters and a fish's tail. At a subsequent stage the hippocampus was described as pes hippocampi major, with the calcar avis being named pes hippocampi minor. The renaming of the hippocampus as hippocampus major, and the calcar avis as hippocampus minor, has been attributed to Félix Vicq-d'Azyr systematising nomenclature of parts of the brain in 1786. While "hippocampus minor" was used interchangeably with "calcar avis" for much of the 19th century, for a few years after 1861 the former name was subjected to publicity and ridicule when the hippocampus minor became the centre of a dispute over human evolution between Thomas Henry Huxley and Richard Owen, satirised as the Great Hippocampus Question. The term hippocampus minor fell from use in anatomy textbooks, and was officially removed in the Nomina Anatomica of 1895, but still featured in the Encyclopædia Britannica of 1926, and appeared in general dictionaries as late as 1957. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Telegram messenger**
Telegram messenger:
In many English-speaking countries, a telegram messenger, more often known as a telegram delivery boy, telegraph boy or telegram boy was a young man employed to deliver telegrams, usually on bicycle. In the United Kingdom, they were employed by the General Post Office; in the United States, they worked for Western Union or other telegraph companies.
In the United Kingdom:
Telegram boys became popular in the United Kingdom after the General Post Office took over control of inland telegraphs from the railways and private telegraph companies. Many of the boys employed by these services to deliver telegrams transferred to the Post Office. In some respects the life of a telegram boy was not unlike that of someone in military service. They were expected to behave in a manner befitting one who wore the uniform of the Queen, and were required to complete a daily drill. From 1915 to 1921, morning exercise was added to these requirements.
In the United Kingdom:
During the 1930s the Post Office introduced motorcycles. This started in Leeds, where boys aged 17 were allowed to volunteer for training, but only with the permission of their parents. Following the success of this, motorcycles were soon introduced elsewhere in the country. The fleet was composed almost exclusively of BSA B33-1 250cc motorbikes, which boys were expected to ride at an average of 15 mph. Later, 125cc BSA Bantams were used. These were finally replaced with smaller Raleigh and Puch models.
In the United Kingdom:
During its heyday in the 1930s, the service was delivering an average of 65 million telegrams per year; however, the service was running at a loss, estimated at £1 million annually.
In the United Kingdom:
Throughout the 1960s and 1970s the use of telegrams dropped significantly, with around 10 million sent annually in the mid-1960s. Consequently, the Post Office took the decision in 1977 to abolish the service. The service continued for a few years and was briefly operated by British Telecom after it split from the Post Office. British Telecom announced on 19 October 1981 that the telegram would be discontinued, and it was finally taken out of service on 30 September 1982, after 139 years in the United Kingdom. The telegram as such was superseded by the British Telecom Telemessage service, introduced in October 1982. Messages were dictated over the telephone or sent via telex, printed, and delivered overnight by first class post in a distinctive envelope guaranteed for next-day delivery, rather than by messenger.
In the United States:
Telegraph boys (also referred to as district messenger boys, telegraph messenger boys, or simply as messenger boys) were uniformed young men between 10 and 18 years of age who carried telegrams through urban streets. In most areas they used bicycles; in some dense areas they went on foot. Unlike the men in the telegraph office who worked indoors on fixed wages under close supervision, enjoyed union benefits, and managed the electrical transfer of information, telegraph boys worked outdoors under no supervision on piece wages, saw no union benefits, and managed the physical aspect of the industry in the form of handwritten or printed paper messages.
In the United States:
Boys reported for work in the morning clad in their uniforms and awaited their assignments, receiving payment by the mile. Though some chose to travel by foot, bicycles were required for distant destinations. John Dickinson of Dallas, Texas accumulated more than 16,000 miles between April and September 1916. Western Union bought 5,000 bicycles a year and resold them to their telegraph boys nationwide at a discount. A local fleet might number from one to three dozen or more. Companies were responsible for providing uniform laundries, locker rooms, assembly halls, and classrooms. In the call-box system developed in 1872, a customer would ring the telegraph office for a messenger, who would then speed to the customer's door to pick up a handwritten message and return to the telegraph office to have it sent electrically to its destination. The life could be dangerous. Boys were expected to "scorch" their bicycles in urban traffic. Strikes occurred, with messenger boys cycling en masse to keep scabs from being hired. Boys attended continuation schools on a four-hours-per-week schedule rather than the 36-hour schedule of public schools. During slack times, the telegraph office hid the boys from public view in basements and back rooms, where they smoked, read penny dreadfuls, and shot craps. Weekends or evenings might involve taking part in uniformed military drills before the public. At night, the boys might be required to enter the red light districts in connection with their job duties. The demand for telegraph boys fell when companies began reading messages over the telephone. In the autumn of 1913, bicycling telegraph boy Robert Crawford of Washington, D.C. collided with a car carrying President Woodrow Wilson. The President sent his personal physician to attend Crawford. Later, he visited the boy in the hospital and presented him with a new bicycle. "I did not know it was the President's car that I ran into," the boy said. Wilson replied, "I rather thought it was the President's car that ran into you."
Notable telegram boys:
Frank McCourt, author and teacher; Ralph Reader, founder of the Scout Gang Shows; Dave Ward, deputy leader of the Communication Workers Union; Hyman G. Rickover, father of the US nuclear navy; Iain Pattison, writer of Rab C. Nesbitt, a telegram boy in the mid-1960s; Keith Holyoake, Prime Minister of New Zealand. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**RNA extraction**
RNA extraction:
RNA extraction is the purification of RNA from biological samples. This procedure is complicated by the ubiquitous presence of ribonuclease enzymes in cells and tissues, which can rapidly degrade RNA. Several methods are used in molecular biology to isolate RNA from samples, the most common of which is guanidinium thiocyanate-phenol-chloroform extraction. The filter-paper-based lysis and elution method offers high-throughput capacity. RNA extraction in liquid nitrogen, commonly using a mortar and pestle (or specialized steel devices known as tissue pulverizers), is also useful in preventing ribonuclease activity.
RNase contamination:
The extraction of RNA in molecular biology experiments is greatly complicated by the presence of ubiquitous and hardy RNases that degrade RNA samples. Certain RNases can be extremely hardy, and inactivating them is difficult compared to neutralizing DNases. In addition to the cellular RNases that are released, there are several RNases that are present in the environment. RNases have evolved to have many extracellular functions in various organisms. For example, RNase 7, a member of the RNase A superfamily, is secreted by human skin and serves as a potent antipathogen defence. For these secreted RNases, enzymatic activity may not even be necessary for the RNase's exapted function. For example, immune RNases act by destabilizing the cell membranes of bacteria. To counter this, equipment used for RNA extraction is usually cleaned thoroughly, kept separate from common lab equipment and treated with various harsh chemicals that destroy RNases. For the same reason, experimenters take special care not to let their bare skin touch the equipment. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Two-sided Laplace transform**
Two-sided Laplace transform:
In mathematics, the two-sided Laplace transform or bilateral Laplace transform is an integral transform equivalent to probability's moment-generating function. Two-sided Laplace transforms are closely related to the Fourier transform, the Mellin transform, the Z-transform and the ordinary or one-sided Laplace transform. If f(t) is a real- or complex-valued function of the real variable t defined for all real numbers, then the two-sided Laplace transform is defined by the integral $\mathcal{B}\{f\}(s) = F(s) = \int_{-\infty}^{\infty} e^{-st} f(t)\,dt.$
Two-sided Laplace transform:
The integral is most commonly understood as an improper integral, which converges if and only if both integrals $\int_{0}^{\infty} e^{-st} f(t)\,dt$ and $\int_{-\infty}^{0} e^{-st} f(t)\,dt$ exist. There seems to be no generally accepted notation for the two-sided transform; the $\mathcal{B}$ used here recalls "bilateral". The two-sided transform used by some authors is $\mathcal{T}\{f\}(s) = s\,\mathcal{B}\{f\}(s) = s F(s) = s \int_{-\infty}^{\infty} e^{-st} f(t)\,dt.$
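As a quick numerical sanity check of the definition (a sketch, not part of the source article): for f(t) = e^{-|t|} the two-sided transform works out analytically to 2/(1 − s²) on the strip −1 < Re(s) < 1, and a truncated numerical integral agrees:

```python
import numpy as np
from scipy.integrate import quad

def bilateral_laplace(f, s, t_max=50.0):
    # Split the improper integral at t = 0, mirroring the two convergence
    # conditions above; t_max truncates the infinite limits.
    right, _ = quad(lambda t: np.exp(-s * t) * f(t), 0.0, t_max)
    left, _ = quad(lambda t: np.exp(-s * t) * f(t), -t_max, 0.0)
    return left + right

f = lambda t: np.exp(-abs(t))
for s in (0.0, 0.5, -0.5):        # real points inside the strip |Re(s)| < 1
    print(s, bilateral_laplace(f, s), 2.0 / (1.0 - s**2))
```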
In pure mathematics the argument t can be any variable, and Laplace transforms are used to study how differential operators transform the function.
Two-sided Laplace transform:
In science and engineering applications, the argument t often represents time (in seconds), and the function f(t) often represents a signal or waveform that varies with time. In these cases, the signals are transformed by filters, which work like a mathematical operator, but with a restriction: they have to be causal, which means that the output at a given time t cannot depend on inputs at later times.
Two-sided Laplace transform:
In population ecology, the argument t often represents spatial displacement in a dispersal kernel.
Two-sided Laplace transform:
When working with functions of time, f(t) is called the time domain representation of the signal, while F(s) is called the s-domain (or Laplace domain) representation. The inverse transformation then represents a synthesis of the signal as the sum of its frequency components taken over all frequencies, whereas the forward transformation represents the analysis of the signal into its frequency components.
Relationship to the Fourier transform:
The Fourier transform can be defined in terms of the two-sided Laplace transform: $\mathcal{F}\{f(t)\} = F(s = i\omega) = F(\omega).$
Note that definitions of the Fourier transform differ, and in particular $\mathcal{F}\{f(t)\} = F(s = i\omega) = \frac{1}{\sqrt{2\pi}}\,\mathcal{B}\{f(t)\}(s)$ is often used instead. In terms of the Fourier transform, we may also obtain the two-sided Laplace transform, as $\mathcal{B}\{f(t)\}(s) = \mathcal{F}\{f(t)\}(-is).$
The Fourier transform is normally defined so that it exists for real values; the above definition defines the image in a strip $a < \Im(s) < b$, which may not include the real axis where the Fourier transform is supposed to converge.
Relationship to the Fourier transform:
This is why Laplace transforms retain their value in control theory and signal processing: the convergence of a Fourier transform integral within its domain only means that a linear, shift-invariant system described by it is stable or critical. The Laplace transform, on the other hand, converges somewhere for every impulse response that grows at most exponentially, because it involves an extra term that can be taken as an exponential regulator. Since there are no superexponentially growing linear feedback networks, Laplace-transform-based analysis and solution of linear, shift-invariant systems takes its most general form in the context of Laplace, not Fourier, transforms.
Relationship to the Fourier transform:
At the same time, Laplace transform theory nowadays falls within the ambit of more general integral transforms, or even of general harmonic analysis. In that framework and nomenclature, Laplace transforms are simply another form of Fourier analysis, even if, in hindsight, a more general one.
Relationship to other integral transforms:
If u is the Heaviside step function, equal to zero when its argument is less than zero, to one-half when its argument equals zero, and to one when its argument is greater than zero, then the Laplace transform $\mathcal{L}$ may be defined in terms of the two-sided Laplace transform by $\mathcal{L}\{f\} = \mathcal{B}\{f u\}.$
On the other hand, we also have $\mathcal{B}\{f\} = \mathcal{L}\{f\} + \mathcal{L}\{f \circ m\} \circ m,$ where $m : \mathbb{R} \to \mathbb{R}$ is the function that multiplies by minus one ($m(x) = -x$), so either version of the Laplace transform can be defined in terms of the other.
The Mellin transform may be defined in terms of the two-sided Laplace transform by $\mathcal{M}\{f\} = \mathcal{B}\{f \circ \exp \circ\, m\},$ with m as above, and conversely the two-sided transform can be obtained from the Mellin transform by $\mathcal{B}\{f\} = \mathcal{M}\{f \circ m \circ \log\}.$
The moment-generating function of a continuous probability density function f(x) can be expressed as $\mathcal{B}\{f\}(-s)$.
Properties:
The following properties can be found in Bracewell (2000) and Oppenheim & Willsky (1997). Most properties of the bilateral Laplace transform are very similar to those of the unilateral Laplace transform, but there are some important differences. Parseval's theorem and Plancherel's theorem: Let $f_1(t)$ and $f_2(t)$ be functions with bilateral Laplace transforms $F_1(s)$ and $F_2(s)$ in the strips of convergence $\alpha_{1,2} < \Re s < \beta_{1,2}$. Let $c \in \mathbb{R}$ with $\max(-\beta_1, \alpha_2) < c < \min(-\alpha_1, \beta_2)$. Then Parseval's theorem holds: $\int_{-\infty}^{\infty} \overline{f_1(t)}\, f_2(t)\,dt = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} \overline{F_1(-\overline{s})}\, F_2(s)\,ds.$ This theorem is proved by applying the inverse Laplace transform to the convolution theorem in the form of the cross-correlation.
Properties:
Let $f(t)$ be a function with bilateral Laplace transform $F(s)$ in the strip of convergence $\alpha < \Re s < \beta$, and let $c \in \mathbb{R}$ with $\alpha < c < \beta$. Then the Plancherel theorem holds: $\int_{-\infty}^{\infty} e^{-2ct} |f(t)|^2\,dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |F(c + ir)|^2\,dr.$ Uniqueness: For any two functions $f, g$ for which the two-sided Laplace transforms $\mathcal{T}\{f\}, \mathcal{T}\{g\}$ exist, if $\mathcal{T}\{f\} = \mathcal{T}\{g\}$, i.e. $\mathcal{T}\{f\}(s) = \mathcal{T}\{g\}(s)$ for every value of $s \in \mathbb{R}$, then $f = g$ almost everywhere.
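Continuing the numerical sketch above, the Plancherel identity can be verified for f(t) = e^{-|t|}, whose transform is F(s) = 2/(1 − s²); with c = 0 inside the strip, both sides evaluate to 1:

```python
import numpy as np
from scipy.integrate import quad

# Left side: integral of e^{-2ct} |f(t)|^2 with c = 0 and f(t) = e^{-|t|}.
lhs, _ = quad(lambda t: np.exp(-2.0 * abs(t)), -50.0, 50.0)

# Right side: (1/2pi) * integral of |F(ir)|^2 dr, where F(s) = 2/(1 - s^2)
# gives |F(ir)|^2 = 4 / (1 + r^2)^2.
rhs, _ = quad(lambda r: 4.0 / (1.0 + r**2) ** 2, -np.inf, np.inf)
rhs /= 2.0 * np.pi

print(lhs, rhs)  # both approximately 1.0
```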
Region of convergence:
The requirements for convergence of the bilateral transform are stricter than those for unilateral transforms, and the region of convergence is normally smaller.
Region of convergence:
If f is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform F(s) of f converges provided that the limit $\lim_{R \to \infty} \int_0^R f(t)\, e^{-st}\,dt$ exists. The Laplace transform converges absolutely if the integral $\int_0^{\infty} |f(t)\, e^{-st}|\,dt$ exists (as a proper Lebesgue integral). The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former instead of the latter sense.
Region of convergence:
The set of values for which F(s) converges absolutely is either of the form Re(s) > a or else Re(s) ≥ a, where a is an extended real constant, −∞ ≤ a ≤ ∞. (This follows from the dominated convergence theorem.) The constant a is known as the abscissa of absolute convergence, and depends on the growth behavior of f(t). Analogously, the two-sided transform converges absolutely in a strip of the form a < Re(s) < b, and possibly including the lines Re(s) = a or Re(s) = b. The subset of values of s for which the Laplace transform converges absolutely is called the region of absolute convergence or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence.
Region of convergence:
Similarly, the set of values for which F(s) converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at s = s0, then it automatically converges for all s with Re(s) > Re(s0). Therefore, the region of convergence is a half-plane of the form Re(s) > a, possibly including some points of the boundary line Re(s) = a. In the region of convergence Re(s) > Re(s0), the Laplace transform of f can be expressed by integrating by parts as the integral $F(s) = (s - s_0) \int_0^{\infty} e^{-(s - s_0)t}\, \beta(t)\,dt, \qquad \beta(u) = \int_0^{u} e^{-s_0 t} f(t)\,dt.$
Region of convergence:
That is, in the region of convergence F(s) can effectively be expressed as the absolutely convergent Laplace transform of some other function. In particular, it is analytic.
There are several Paley–Wiener theorems concerning the relationship between the decay properties of f and the properties of the Laplace transform within the region of convergence.
In engineering applications, a function corresponding to a linear time-invariant (LTI) system is stable if every bounded input produces a bounded output.
Causality:
Bilateral transforms do not respect causality. They make sense when applied to generic functions, but when working with functions of time (signals), unilateral transforms are preferred.
Table of selected bilateral Laplace transforms:
The following list of interesting examples for the bilateral Laplace transform can be deduced from the corresponding Fourier or unilateral Laplace transformations (see also Bracewell (2000)): | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**KCNS2**
KCNS2:
Potassium voltage-gated channel subfamily S member 2 is a protein that in humans is encoded by the KCNS2 gene. The protein encoded by this gene is a voltage-gated potassium channel subunit. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Polypropylene breast implant**
Polypropylene breast implant:
Polypropylene breast implants, also known as string breast implants, are a form of breast implant using polypropylene, developed by Gerald W. Johnson. Due to a number of medical complications, the device has not been approved in the European Union or the United States. Polypropylene implants absorb water very slowly, less than about 0.01% in 24 hours. The polypropylene, which is yarn-like, irritates the implant pocket, which causes the production of serum that fills the implant pocket on a continual basis. This causes continuous expansion of the breast after surgery. Growth can only be alleviated by removal of serum by syringe. Problems can also arise if the breasts enlarge at different rates; this can be corrected by removal of serum or introduction of sterile saline. Continual breast growth will eventually result in "extreme, almost cartoonish breast sizes." String implants were only available for a very short time in the US before being removed from the market by the FDA around 2001.
Polypropylene breast implant:
Polypropylene implants have created the largest recorded increases in breast size due to surgical augmentation. They are rarely seen outside the adult entertainment industry. Big-bust entertainers Chelsea Charms, Maxi Mounds, Kayla Kleevage, Minka, Elizabeth Starr, and Teddi Barrett are among the recipients of polypropylene breast implants. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Shearing interferometer**
Shearing interferometer:
The shearing interferometer is an extremely simple means of observing interference and of using this phenomenon to test the collimation of light beams, especially from laser sources, whose coherence length is usually significantly longer than the thickness of the shear plate, so that the basic condition for interference is fulfilled.
Function:
The testing device consists of a high-quality optical glass, such as N-BK7, with extremely flat optical surfaces that are usually at a slight angle to each other. When a plane wave is incident at an angle of 45°, which gives maximum sensitivity, it is reflected twice. The two reflections are laterally separated due to the finite thickness of the plate and to the wedge. This separation is referred to as the shear and has given the instrument its name. The shear can also be produced by gratings.
Function:
Parallel-sided shear plates are sometimes used, but the interpretation of the interference fringes of wedged plates is relatively easy and straightforward. Wedged shear plates produce a graded path difference between the front and back surface reflections; as a consequence, a parallel beam of light produces a linear fringe pattern within the overlap.
Function:
With a plane wavefront incident, the overlap of the two reflected beams shows interference fringes with a spacing of $d_f = \frac{\lambda}{2 n \theta}$, where $d_f$ is the spacing perpendicular to the shear, $\lambda$ is the wavelength of the beam, $n$ the refractive index, and $\theta$ the wedge angle. This equation makes the simplification that the distance from the wedged shear plate to the observation plane is small relative to the wavefront radius of curvature at the observation plane. The fringes are equally spaced and will be exactly perpendicular to the wedge orientation and parallel to a usually present wire cursor aligned along the beam axis in the shearing interferometer. The orientation of the fringes varies when the beam is not perfectly collimated: in the case of a noncollimated beam incident on a wedged shear plate, the path difference between the two reflected wavefronts is increased or decreased from the case of perfect collimation, depending on the sign of the curvature. The pattern is then rotated, and the beam's wavefront radius of curvature $R$ can be calculated as $R = \frac{s\, d_f}{\lambda \tan\gamma}$, with $s$ the shear distance, $d_f$ the fringe distance, $\lambda$ the wavelength and $\gamma$ the angular deviation of the fringe alignment from that of perfect collimation. If the spacing normal to the fringes is used instead, this equation becomes $R = \frac{s\, k_f}{\lambda \sin\gamma}$, where $k_f$ is the fringe spacing normal to the fringes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
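A short numeric sketch of those two formulas (all parameter values below are illustrative assumptions: a HeNe wavelength, an N-BK7 plate, and typical shear-plate geometry):

```python
import math

wavelength = 632.8e-9   # m, HeNe laser line (assumed)
n = 1.5168              # approximate refractive index of N-BK7
theta = 1e-4            # rad, wedge angle (assumed)
shear = 5e-3            # m, lateral shear distance s (assumed)

# Fringe spacing perpendicular to the shear for a collimated beam.
d_f = wavelength / (2 * n * theta)
print(f"fringe spacing d_f = {d_f * 1e3:.2f} mm")

# Wavefront radius of curvature inferred from the fringe rotation gamma.
gamma = math.radians(2.0)  # assumed 2-degree fringe tilt
R = shear * d_f / (wavelength * math.tan(gamma))
print(f"wavefront radius of curvature R = {R:.0f} m")  # large R ~ near collimation
```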
**Cryptovirology**
Cryptovirology:
Cryptovirology refers to the use of cryptography to devise particularly powerful malware, such as ransomware and asymmetric backdoors. Traditionally, cryptography and its applications are defensive in nature, and provide privacy, authentication, and security to users. Cryptovirology employs a twist on cryptography, showing that it can also be used offensively. It can be used to mount extortion-based attacks that cause loss of access to information, loss of confidentiality, and information leakage, tasks which cryptography typically prevents. The field was born with the observation that public-key cryptography can be used to break the symmetry between what an antivirus analyst sees regarding malware and what the attacker sees. The antivirus analyst sees a public key contained in the malware, whereas the attacker sees the public key contained in the malware as well as the corresponding private key (outside the malware), since the attacker created the key pair for the attack. The public key allows the malware to perform trapdoor one-way operations on the victim's computer that only the attacker can undo.
Overview:
The field encompasses covert malware attacks in which the attacker securely steals private information such as symmetric keys, private keys, PRNG state, and the victim's data. Examples of such covert attacks are asymmetric backdoors. An asymmetric backdoor is a backdoor (e.g., in a cryptosystem) that can be used only by the attacker, even after it is found. This contrasts with the traditional backdoor that is symmetric, i.e., anyone that finds it can use it. Kleptography, a subfield of cryptovirology, is the study of asymmetric backdoors in key generation algorithms, digital signature algorithms, key exchanges, pseudorandom number generators, encryption algorithms, and other cryptographic algorithms. The NIST Dual EC DRBG random bit generator has an asymmetric backdoor in it. The EC-DRBG algorithm utilizes the discrete-log kleptogram from kleptography, which by definition makes the EC-DRBG a cryptotrojan. Like ransomware, the EC-DRBG cryptotrojan contains and uses the attacker's public key to attack the host system. The cryptographer Ari Juels indicated that NSA effectively orchestrated a kleptographic attack on users of the Dual EC DRBG pseudorandom number generation algorithm and that, although security professionals and developers have been testing and implementing kleptographic attacks since 1996, "you would be hard-pressed to find one in actual use until now." Due to public outcry about this cryptovirology attack, NIST rescinded the EC-DRBG algorithm from the NIST SP 800-90 standard.

Covert information leakage attacks carried out by cryptoviruses, cryptotrojans, and cryptoworms that, by definition, contain and use the public key of the attacker are a major theme in cryptovirology. In "deniable password snatching," a cryptovirus installs a cryptotrojan that asymmetrically encrypts host data and covertly broadcasts it. This makes the data available to everyone, noticeable by no one (except the attacker), and decipherable only by the attacker. An attacker caught installing the cryptotrojan claims to be a virus victim. An attacker observed receiving the covert asymmetric broadcast is just one of thousands, if not millions, of receivers, and exhibits no identifying information whatsoever. The cryptovirology attack achieves "end-to-end deniability": it is a covert asymmetric broadcast of the victim's data. Cryptovirology also encompasses the use of private information retrieval (PIR) to allow cryptoviruses to search for and steal host data without revealing the data searched for, even when the cryptotrojan is under constant surveillance. By definition, such a cryptovirus carries within its own coding sequence the query of the attacker and the necessary PIR logic to apply the query to host systems.
History:
The first cryptovirology attack, invented by Adam L. Young and Moti Yung, is called "cryptoviral extortion" and it was presented at the 1996 IEEE Security & Privacy conference. In this attack, a cryptovirus, cryptoworm, or cryptotrojan contains the public key of the attacker and hybrid encrypts the victim's files. The malware prompts the user to send the asymmetric ciphertext to the attacker who will decipher it and return the symmetric decryption key it contains for a fee. The victim needs the symmetric key to decrypt the encrypted files if there is no way to recover the original files (e.g., from backups). The 1996 IEEE paper predicted that cryptoviral extortion attackers would one day demand e-money, long before Bitcoin even existed. Many years later, the media relabeled cryptoviral extortion as ransomware. In 2016, cryptovirology attacks on healthcare providers reached epidemic levels, prompting the U.S. Department of Health and Human Services to issue a Fact Sheet on Ransomware and HIPAA.
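At its core, cryptoviral extortion is ordinary hybrid encryption turned offensive: a fresh symmetric key encrypts the data, and the attacker's public key encrypts that symmetric key. The sketch below is a minimal, defender's-eye illustration of that asymmetry (not code from the paper; it assumes the third-party Python cryptography package), showing why possession of the public key alone is useless for recovery:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The attacker generates the key pair; only the public half would ever be
# embedded in the malware, so an analyst never sees the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

data = b"example plaintext standing in for the victim's files"

# Hybrid encryption: a random symmetric key encrypts the bulk data...
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(data)

# ...and the attacker's RSA public key wraps the symmetric key (OAEP padding).
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(sym_key, oaep)

# Recovery is a trapdoor operation: only the private-key holder can unwrap.
recovered = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
assert recovered == data
```

This is the same construction used defensively in schemes such as PGP; what makes it cryptoviral extortion is solely who holds the private key and whose data is encrypted.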
The fact sheet states that when electronic protected health information is encrypted by ransomware, a breach has occurred, and the attack therefore constitutes a disclosure that is not permitted under HIPAA, the rationale being that an adversary has taken control of the information. Sensitive data might never leave the victim organization, but the break-in may have allowed data to be sent out undetected. California has enacted a law that makes the introduction of ransomware into a computer system with the intent of extortion a crime.
Examples:
Tremor virus: While viruses in the wild have used cryptography in the past, the only purpose of such use of cryptography was to avoid detection by antivirus software. For example, the Tremor virus used polymorphism as a defensive technique in an attempt to avoid detection by anti-virus software. Though cryptography does assist in such cases to enhance the longevity of a virus, the capabilities of cryptography are not used in the payload. The One-half virus was amongst the first viruses known to have encrypted affected files.
Tro_Ransom.A virus: An example of a virus that instructs the owner of the infected machine to pay a ransom is the virus nicknamed Tro_Ransom.A. This virus asks the owner of the infected machine to send $10.99 to a given account through Western Union.
Virus.Win32.Gpcode.ag is a classic cryptovirus. It partially uses a version of 660-bit RSA and encrypts files with many different extensions. It instructs the owner of the machine to email a given address if the owner desires the decryptor. If contacted by email, the user will be asked to pay a certain amount as ransom in return for the decryptor.
CAPI: It has been demonstrated that, using just 8 different calls to Microsoft's Cryptographic API (CAPI), a cryptovirus can satisfy all its encryption needs.
Other uses of cryptography-enabled malware:
Apart from cryptoviral extortion, there are other potential uses of cryptoviruses, such as deniable password snatching, cryptocounters, private information retrieval, and secure communication between different instances of a distributed cryptovirus.
**Doc (mascot)**
Doc (mascot):
Doc is the official mascot of Towson University. He is named after former sports department head Donald "Doc" Minnegan.
History:
The Golden Knights: The Knights mascot may have come from the 1920s and 1930s, when an elaborate Olde English Christmas dinner was held with knight and lady costumes, music, and a pageant. The 1930 Tower Echoes used Renaissance-style pictures of archers to depict athletes and campus life. In 1951, the Knight was used all over Towson, with references to the campus being a Camelot with "merry court life" (student activities) and "many tournaments" (sports). The late 1950s, however, brought other mascots: the lacrosse team was the "Indians" and the wrestling team, the "Teachers".
The Tiger: The first appearance of the tiger on campus came with the help of Towson alumnus Lou Winkelman, who was the very first tiger mascot, in the 1963 homecoming parade. According to Winkelman, they simply went to a costume shop and rented the tiger suit.
Winkelman had in fact proposed the tiger as the official Towson mascot, winning the Student Government Association's approval a year before the parade. It took about a year, but by 1963, with the help of John Schuerholz, students had accepted it and Towson made the tiger its official mascot.
The tiger's path to Towson began in the early 1960s, when Winkelman was a member of the men's soccer team. He says no one on the team wanted to be called the Golden Knights, the most popular name for the sports teams prior to 1961.
Winkelman and his teammates had their own idea and simply adopted the tiger as their mascot. Although they wore jerseys with a knight-and-horse logo, they were adamant that they be called the Tigers in their yearbook photo. Student interest in the tiger remained high through the 1960s, encouraged by Winkelman's weekly sports column Tiger Tales in The Towerlight, the school newspaper, by The Towerlight masthead's use of the tiger image from 1966 to 1969, and by the gift of a stone tiger statue from the Class of '67, stolen from campus a few years later. In the 1970s, however, the Towson Tiger was rarely referenced or talked about; other than brief sports stories, there is only one reference in a 1970s yearbook. The tiger resurfaced in the 1980s with the purchase of the first official costume by the sports program and a major presence in almost every issue of Tower Echoes and around the campus. In 2003 the tiger was renamed "Doc" in honor of longtime faculty member Donald Minnegan.
Doc makes appearances at almost all of the football games, at several other sporting events around campus, and at several community events around Towson, prowling the sidelines of Unitas Stadium in the #75 jersey once worn by former Towson Tiger football player Dan McPartland.
Tiger statue:
TU's tiger statue is a bronze tiger that sits in front of Stephens Hall, the oldest academic building on campus. An earlier fiberglass tiger, created in 1996, was taken down in 2006 because of vandalism, and a stone tiger statue given by the Class of 1967 was stolen from the campus a few years later.
The first statue: The idea of bringing a tiger statue to Towson started with the introduction of a bill on February 28, 1995. The SGA allocated $3,000 for a fiberglass tiger (it was ultimately purchased for $2,500) to create a more positive campus atmosphere. Donna Garrison, an SGA senator at the time, had heard student complaints that Towson lacked school spirit.
The statue was erected on campus at the end of the spring 1996 semester.
The following September it lost its tail to vandals. The damage totaled $500. The tiger was repaired and was fine for six months until March 17, 1997. On that evening, police aide Ron Bond saw seven males pushing the tiger off its platform, but upon police arrival, the seven fled the scene. Three were apprehended, one of whom was not a TU student. The statue had been bolted to the platform by its three paws, and the paws were damaged in the attempt to move the tiger. One of the tiger's canine teeth was also broken off in the act.
The last incident occurred over spring break in 2006. On Sunday, March 19, 2006, the Towson University Police Department received the first of two reports of destruction of the tiger statue. Vandals had spray-painted profanities on the tiger between Saturday afternoon and Sunday evening. A few days later, the tiger's paw and teeth were removed.
Police reports said Aramark estimated the cost of repair at $1,500, and the statue has since been removed. These incidents were not the only acts of vandalism on the tiger statue: since finding its home on "The Beach", the tiger had lost part of its tail and a few teeth, and there was even an attempted theft.
In February, the university looked into repairing or replacing what students called an eyesore, asking Jeff Ellis of Scenic Artistry & Custom Finishes and Joseph Clarkson of Fiberglass Specialties to appraise the statue. "It looked pretty much beyond repair," Ellis said in an interview. "It's one of those things where you don't know where to start and where to finish."
Bronze statue: In September 2006, The Towerlight reported that a new bronze tiger statue had been unveiled as the centerpiece of the university's "Capital Campaign" to raise $50 million. The primary differences from the previous statue are that the new one is made of bronze, all of its legs are on the ground, and its tail is wrapped around its legs rather than raised, so that it is less easily damaged by vandals.
The new statue is outside Stephens Hall and was unveiled on February 8, 2007, when university president Robert Caret said it would be "visible to passersby on York Road as well as students". Many students, however, expressed disapproval of putting the new statue outside Stephens Hall, saying that since most students don't have class in the building, most Towson students probably wouldn't see it. In 2009, a duplicate was placed in front of the library, near the old statue's location.
In 2012, a statue in a different pose was placed at the new entrance of the university. In 2013, a fourth statue was placed near the entrance to the recently completed SECU Arena.
Previous mascots:
Knights (pre-1961)
Teachers (wrestling, 1961)
Indians (lacrosse, 1960)
**Daniell cell**
Daniell cell:
The Daniell cell is a type of electrochemical cell invented in 1836 by John Frederic Daniell, a British chemist and meteorologist, and consists of a copper pot filled with a copper (II) sulfate solution, in which is immersed an unglazed earthenware container filled with sulfuric acid and a zinc electrode. He was searching for a way to eliminate the hydrogen bubble problem found in the voltaic pile, and his solution was to use a second electrolyte to consume the hydrogen produced by the first. Zinc sulfate may be substituted for the sulfuric acid. The Daniell cell was a great improvement over the existing technology used in the early days of battery development. A later variant of the Daniell cell called the gravity cell or crowfoot cell was invented in the 1860s by a Frenchman named Callaud and became a popular choice for electrical telegraphy.
The Daniell cell is also the historical basis for the contemporary definition of the volt, which is the unit of electromotive force in the International System of Units. The definitions of electrical units that were proposed at the 1881 International Conference of Electricians were designed so that the electromotive force of the Daniell cell would be about 1.0 volts. With contemporary definitions, the standard potential of the Daniell cell at 25 °C is actually 1.10 V.
Chemistry:
In the Daniell cell, copper and zinc electrodes are immersed in solutions of copper(II) sulfate and zinc sulfate, respectively. At the anode (negative electrode), zinc is oxidized according to the following half-reaction:

Zn(s) → Zn2+(aq) + 2e−   (standard electrode reduction potential −0.7618 V)

At the cathode (positive electrode), copper is reduced according to the following half-reaction:

Cu2+(aq) + 2e− → Cu(s)   (standard electrode reduction potential +0.340 V)

Note that positively charged copper ions move towards the positive electrode, driven by a reduction in chemical energy rather than by electrostatic attraction.
The total reaction is:

Zn(s) + Cu2+(aq) → Zn2+(aq) + Cu(s)   (open-circuit voltage 1.1018 V)

These processes result in the accumulation of solid copper at the cathode and the corrosion of the zinc electrode into the solution as zinc cations.
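The quoted open-circuit voltage follows directly from the two standard reduction potentials, and the Nernst equation extends the calculation to nonstandard concentrations. The following Python sketch is illustrative only (it assumes ideal behaviour, approximating activities by molar concentrations):

```python
import math

F = 96485.332       # Faraday constant, C/mol
R_GAS = 8.314462    # molar gas constant, J/(mol*K)

E_CU = 0.340        # standard reduction potential of Cu2+/Cu, V
E_ZN = -0.7618      # standard reduction potential of Zn2+/Zn, V

def daniell_emf(c_zn=1.0, c_cu=1.0, temp_k=298.15, n=2):
    """Cell EMF from the Nernst equation:
    E = E0 - (R*T / (n*F)) * ln([Zn2+] / [Cu2+]), with E0 = E_CU - E_ZN."""
    e0 = E_CU - E_ZN
    return e0 - (R_GAS * temp_k / (n * F)) * math.log(c_zn / c_cu)

print(round(daniell_emf(), 4))                      # 1.1018 V, standard state
print(round(daniell_emf(c_zn=1.0, c_cu=0.01), 4))   # ~1.043 V as Cu2+ depletes
```

The second line of output shows the EMF sagging as copper ions are consumed during discharge, which is why the cell eventually runs down.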
In classroom demonstrations, a form of the Daniell cell known as two half cells is often used due to its simplicity. The two half cells each support one half of the reactions described above. A wire and light bulb may connect the two electrodes. Excess electrons produced by the oxidation of zinc metal are “pushed” out of the anode, which is therefore the negative electrode, travel through the wire and are "pulled" into the copper cathode where they are consumed by the reduction of copper ions. This provides an electric current that illuminates the bulb. Since neither half reaction will occur independently of the other, the two half cells must be connected in a way that will allow ions to move freely between them. A porous barrier or ceramic disk may be used to separate the two solutions while allowing the flow of sulfate ions. When the half cells are placed in two entirely different and separate containers, a salt bridge is often used to connect the two cells. The salt bridge typically contains a high concentration of potassium nitrate (a salt that will not interfere chemically with the reaction in either half-cell). In the above wet-cell during discharge, nitrate anions in the salt bridge move into the zinc half-cell in order to balance the increase in Zn2+ ions. At the same time, potassium ions from the salt bridge move into the copper half-cell in order to replace the Cu2+ ions being precipitated onto the copper electrode.
If the cell is connected to a potential source (e.g. a battery charger) whose potential difference is slightly higher than the cell EMF (1.1 V), the current flow can be reversed and the reactions become:

Zn2+(aq) + 2e− → Zn(s)
Cu(s) → Cu2+(aq) + 2e−

or, overall:

Zn2+(aq) + Cu(s) → Zn(s) + Cu2+(aq)

Hence, the Daniell cell is reversible if the current drawn from (or fed to) it is small. It can be used to "generate" electricity by consuming an electrode, or to store electricity.
Development:
Daniell's original construction: Daniell first constructed his cell in 1836. His original design consisted of a 3.5-inch-diameter copper cylinder. A copper disc perforated with numerous holes was placed across the cylinder, recessed down from the top. A tube of ox gullet hung from a large hole in the centre of the perforated copper disc. A 0.5-inch-diameter zinc rod hung inside this ox-gullet tube, suspended from wooden supports. The copper vessel was filled with sulfuric acid solution saturated with copper sulfate to above the level of the perforated disc. The ox-gullet tube was filled with sulfuric acid solution. Copper sulfate crystals were piled on the perforated copper disc to keep the solution saturated. The ox gullet acts as a porous membrane allowing passage of ions. Daniell states that a porous earthenware tube may be used instead of the ox gullet for practical ease, but this arrangement will produce less power. Another suggestion made by Daniell to improve the cell was to replace the copper with platinum and the copper sulfate with platinum chloride, but he remarks that "such an arrangement would be perfect, but too costly for ordinary applications". It is the porous-pot form of the cell that came to be widely used in telegraphy.
Porous pot cell: The porous pot cell consists of a central zinc anode dipped into a porous earthenware pot containing a zinc sulfate solution. The porous pot is, in turn, immersed in a solution of copper sulfate contained in a copper can, which acts as the cell's cathode. The use of a porous barrier allows ions to pass through but keeps the solutions from mixing. Without this barrier, when no current is drawn, copper ions will drift to the zinc anode and undergo reduction without producing a current, which shortens the battery's life. The replacement of sulfuric acid with zinc sulfate was the innovation of J. F. Fuller in 1853; it prolongs the life of the cell. Over time, copper buildup blocks the pores in the earthenware barrier and cuts short the battery's life. Nevertheless, the Daniell cell provides a longer and more reliable current than the voltaic pile because the electrolyte deposits copper (a conductor) rather than hydrogen (an insulator) on the cathode. It is also safer and less corrosive. With an operating voltage of roughly 1.1 volts, it saw widespread use in telegraph networks until it was supplanted by the Leclanché cell in the late 1860s.
Gravity cell: Sometime during the 1860s, a Frenchman named Callaud invented a variant of the Daniell cell which dispensed with the porous barrier. Instead, a layer of zinc sulfate sits on top of a layer of copper sulfate, the two liquids kept separate by their differing densities, often with a layer of oil added on top to prevent evaporation. This reduces the internal resistance of the system, so the battery yields a stronger current.
This variant, called a gravity cell, consists of a glass jar in which a copper cathode sits on the bottom and a zinc anode is suspended beneath the rim in the zinc sulfate layer. Copper sulfate crystals are scattered around the cathode, and the jar is then filled with distilled water. As current is drawn, a layer of zinc sulfate solution forms at the top around the anode. This top layer is kept separate from the bottom copper sulfate layer by its lower density and by the polarity of the cell. A disadvantage of the gravity cell is that a current has to be drawn continually to keep the two solutions from mixing by diffusion, so it is unsuitable for intermittent use. In addition, it is vulnerable to loss of integrity if too much electric current is drawn, which also causes the layers to mix.
Sometimes called the crowfoot cell due to the distinctive shape of the electrodes, this arrangement is less costly for large multicell batteries and it quickly became the battery of choice for the American and British telegraph networks. Even after most telegraph lines started being powered by motor-generators, the gravity battery continued to be used in way stations to power the local circuit at least into the 1950s. In the telegraph industry, this battery was often assembled on site by the telegraph workers themselves, and when it ran down it could be renewed by replacing the consumed components. The zinc sulfate layer is clear in contrast to the deep blue copper sulfate layer, which allows a technician to determine the battery life with a glance. On the other hand, this setup means the battery could only be used in a stationary appliance, otherwise the solutions would mix or spill.
Use in electrometallurgy:
Bird's cell A variant of the Daniell cell was invented in 1837 by the Guy's hospital physician Golding Bird who used a plaster of Paris barrier to keep the solutions separate. Bird's experiments with this cell were of some importance to the new discipline of electrometallurgy, but Bird himself did not pursue this field; his interest was in electrotherapy. A surprising result from Bird's experiments was the deposition of copper on the porous plaster and in veins running through it without any contact with the metal electrodes. So surprising, in fact, that it was at first disbelieved by electrochemical investigators, including Michael Faraday. Bird himself had to carefully examine his apparatus for inadvertent contact, perhaps through the growth of copper "whiskers", before he was convinced of the result. Deposition of copper, and other metals, had been previously noted, but always previously it had been metal on metal electrode.
Electrotyping: In 1838, John Dancer, a Liverpool instrument maker, was the first to take commercial advantage of the unique features of the Daniell cell for copper plating. In a process now known as electrotyping, he found he could make objects of any desired shape by using the porous barrier as a mould. Many others, however, had made the same discovery, and in a patent dispute with Thomas Spencer it was pointed out that Bird had priority for the principle. Credit for the invention of electrotyping is usually given to the Russian Moritz von Jacobi.
**Crest (sports)**
Crest (sports):
In sport, a crest is the term for a logo used by a sports club. Such a logo is also often termed a badge. The logos of many clubs are inspired by heraldic design.
The use of the term crest to describe a logo derives from the misconception that a crest refers to any heraldic emblem. In heraldry, a crest specifically refers to the element of a coat of arms which appears above a helmet.
Due to the heraldic design of many club logos, they are sometimes regulated in regions with heraldic authorities. In Scotland, some club logos have been deemed "an heraldic device" by the Court of the Lord Lyon. Because heraldic devices must be authorised by this court, some clubs have been required to change their logos to designs which are not heraldic. Alternatively, a club may apply to have its logo authorised by the Court of the Lord Lyon. Similarly, the College of Arms has regulated club logos, with at least 25 football clubs in England and Wales having designs authorised by the College. In those cases, the English Football League was granted heraldic badges, which were subsequently licensed to the appropriate clubs.
**19 Fortuna**
19 Fortuna:
Fortuna (minor planet designation: 19 Fortuna) is one of the largest main-belt asteroids. Its composition is similar to that of 1 Ceres: a darkly colored, heavily space-weathered surface containing primitive organic compounds, including tholins.
Fortuna is 225 km in diameter and has one of the darkest known geometric albedos for an asteroid over 150 km in diameter. Its albedo has been measured at 0.028 and 0.037. The asteroid's spectrum displays evidence of aqueous alteration. The Hubble Space Telescope observed Fortuna in 1993; it was resolved with an apparent diameter of 0.20 arcseconds (4.5 pixels in the Planetary Camera), and its shape was found to be nearly spherical. Satellites were searched for, but none were detected.
Stellar occultations by Fortuna have been observed several times, and the asteroid has been studied by radar. It was discovered by J. R. Hind on August 22, 1852, and named after Fortuna, the Roman goddess of luck.
Fortuna is perturbed by the 80 km asteroid 135 Hertha; from these perturbations, Baer initially estimated Fortuna's mass at 1.08×10^19 kg, and a more recent estimate by Baer suggests a mass of 1.27×10^19 kg. On December 21, 2012, Fortuna (~200 km) harmlessly passed within 6.5 Gm of the asteroid 687 Tinette.
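Diameter, geometric albedo, and absolute magnitude H are tied together by the standard minor-planet relation D = 1329 km × 10^(−H/5)/√p. As a rough consistency check on the figures above, this illustrative Python sketch (the relation is the usual convention, not taken from the article) inverts it for H:

```python
import math

def absolute_magnitude(diameter_km, geometric_albedo):
    """Invert D = 1329 * 10**(-H / 5) / sqrt(p) to obtain H from D and p."""
    return 5 * math.log10(1329.0 / (diameter_km * math.sqrt(geometric_albedo)))

# Fortuna: roughly 225 km across, with measured albedos of 0.028 and 0.037.
for p in (0.028, 0.037):
    print(f"albedo {p}: H = {absolute_magnitude(225.0, p):.2f}")
# Both cases give H near 7.4-7.7, as expected for a large but very dark body.
```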
**Onychophosis**
Onychophosis:
Onychophosis is a localized or diffuse hyperkeratotic tissue that develops on the lateral or proximal nail folds, within the space between the nail folds and the nail plate; it is a common finding in the elderly. Onychophosis may involve the subungual area, arises as a direct result of repeated minor trauma, and most frequently affects the first and fifth toes.
**Improved Mobile Telephone Service**
Improved Mobile Telephone Service:
The Improved Mobile Telephone Service (IMTS) was a pre-cellular VHF/UHF radio system which linked to the public telephone network. IMTS was the radiotelephone equivalent of land dial phone service. Introduced in 1964, it replaced Mobile Telephone Service (MTS) and improved on most MTS systems by offering direct-dial rather than connections through a live operator, and full-duplex operation so both parties could talk at the same time.
Technical Information:
The original Bell System US and Canadian mobile telephone system includes three frequency bands, VHF Low (35-44 MHz, 9 channels), VHF High (152-158 MHz, 11 channels in the U.S., 13 channels in Canada), and UHF (454-460 MHz, 12 channels). Alternative names were "Low Band", "High band" and "UHF". In addition to the Bell system (wireline incumbent) channels, another 7 channels at VHF, and 12 channels at UHF were granted to non-wireline companies designated as "RCCs" (Radio Common Carriers). These RCC channels were adjacent to the Bell System frequencies.
RCCs were also allowed to offer paging services to "beepers" or "pagers" on a secondary basis on the same channels, but soon, with the growth of paging, RCC mobile phone services were given lower priority. Some RCCs utilized IMTS technology, but most adopted the "Secode-2805" system which allowed for simultaneous paging, so after a few years, the predominant provider of mobile telephone service was the Bell System companies.
A given provider might have offered service on one, two, or all three bands, although IMTS was never offered on low band (only MTS was, though Whidbey Telephone in Washington State had a custom-designed direct-dial system). These wide-coverage systems were prone to congestion and interference, since a radio closer to the terminal would sometimes take over a channel because of its stronger signal. Cellular networks remedied this problem by decreasing the area covered by each tower (a "cell") and increasing the number of cells, at the cost of requiring more towers to cover a given area. IMTS and MTS systems thus still exist in some remote areas, where they may be the only feasible way to cover a large, sparsely populated region.
The basic operation of IMTS was very advanced for its time, considering that integrated circuits were not commonly available. The most common IMTS phone, the Motorola TLD-1100 series, used two circuit boards about 8 inches square, to perform the channel scanning and digit decoding process, and all logic was performed with discrete transistors. In a given city, one IMTS base station channel was "marked idle" by the transmission of a steady 2000 Hz "idle" tone. Mobiles would scan the available frequencies and lock on to the channel transmitting the idle tone. When a call was placed to a mobile, the idle tone would change to 1800 Hz "channel seize" tone (the idle tone would appear on another frequency, if available), and the 7 digit mobile number (three digits of the NPA and the last four digits of subscriber number, the NXX was not sent) would be sent out as rotary dial pulses, switching between 2000 and 1800 Hz to represent digits. Any mobile recognizing that the call was for someone else would resume scanning for mark idle tone, while the called mobile would then transmit 2150 Hz "guard" tone back to the base station. This would also initiate ringing at the mobile, and when the mobile subscriber picked up the phone, 1633 Hz "connect" tone would be sent back to the base station to indicate answer supervision and the voice path would be cut through. When the mobile hung up, a burst of alternating 1336 "disconnect" and 1800 Hz "seize" tones would be sent to allow the base station to service another call.
Mobiles would originate calls by sending a burst of connect tone, to which the base station responded with a burst of seize tone. The mobile would then respond with its identification, consisting of its area code and last four digits of the phone number sent at 20 pulses per second, just as in inward dialing but with the addition of rudimentary parity checking. Digits are formed with a pulsetrain of alternating tones, either connect and silence (for odd digits) or connect and guard (for even digits). When the base station received the calling party's identification, it would send dialtone to the mobile. The user would then use the rotary dial, which would send the dialed digits as an alternating 10 pps pulse train (originally, directly formed by the rotary dial) of connect and guard tones.
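Because the signaling described above is plain audio-frequency keying, it is straightforward to model numerically. The following Python/NumPy sketch is illustrative only (the 50% duty cycle, sample rate, and tone ordering within a pulse are assumptions, not taken from the IMTS specification); it renders a single dialed digit as a 20 pulse-per-second train alternating between the 1800 Hz and 2000 Hz tones:

```python
import numpy as np

RATE = 8000   # audio sample rate in Hz (assumed for the demo)
PPS = 20      # dial-pulse rate used for IMTS inward dialing

def tone(freq_hz, dur_s):
    """A sine burst at freq_hz lasting dur_s seconds."""
    t = np.arange(int(RATE * dur_s)) / RATE
    return np.sin(2 * np.pi * freq_hz * t)

def pulse_digit(digit, mark_hz=1800.0, space_hz=2000.0):
    """Render one digit as `digit` pulses, each half mark tone, half space tone.

    The 50% duty cycle within each 1/PPS-second pulse period is an assumption;
    the article gives only the tone frequencies and the 20 pps pulse rate.
    """
    half = 1.0 / (2 * PPS)
    pulse = np.concatenate([tone(mark_hz, half), tone(space_hz, half)])
    return np.concatenate([pulse] * digit)

signal = pulse_digit(7)                 # the digit "7" as an audio pulse train
print(len(signal) / RATE, "seconds")    # 7 pulses at 20 pps -> 0.35 s
```

Writing `signal` to a sound device or WAV file reproduces the kind of tone train a base station would decode, which is also why, as noted below, modified walkie-talkies with tone keypads could fool an IMTS terminal.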
Terminal: IMTS systems typically had 25 watts of transmitter power at the mobile station and 100–250 watts at the terminal — unlike the newer cellular car telephones that had a maximum power output of 3 watts and modern cellular handsets with power outputs of 0.6 watts. Mobile installations normally consisted of a "head unit", the telephone handset sitting in a cradle with a direct-dialing keyboard. These looked and functioned much like a landline, or hardwired, telephone. Unlike cellular handsets, these units passed through a dial tone when the receiver was lifted from the cradle and in this way seemed more like a landline telephone. There was a separate large radio transceiver chassis, typically measuring at least a foot square and 6 inches high, mounted either in the trunk or under the seats of an automobile. These transceivers were connected to the handset cradle with a multi-conductor cable, usually around 0.5 inch thick.
The mobile antennas almost always required a hole to be drilled in the body of the car to mount the antenna in; until the 1970s there were no "on-glass" antennas - these were developed later for the cellular car-mounted telephones. These whip antennas looked much like those used for CB radios and were about 19 in. long (1/4 wavelength at 155 MHz). These mobile telephone systems required a large amount of power (10 to 15 amperes at 12 volts) and this was supplied by thick power cabling connected directly to the automobile's battery. It therefore was quite possible and not uncommon for an IMTS telephone to drain an automobile's battery if used for moderate periods of time without the automobile engine running or if left on overnight. Optionally these units were also connected to the car's horn and could honk the horn as a ringer to summon a user who was away from the car.
The IMTS units were full duplex, meaning that a user could both talk and hear the other party at the same time. This was an improvement over the earlier MTS systems, most of which were half duplex, allowing only one party to transmit at a time; the user had to "push to talk" to speak and then "unkey" the transmitter to hear the other party on the line. In 1960 General Electric introduced the "Progress Line" DTO- series MTS mobiles which were full duplex, although subscribers were still required to press the "push to talk" bar on the handset to speak.
There were also IMTS handheld transceivers (Yaesu's 1982 vintage Traveler) that operated on 2-4 watts, and these were all half duplex. These were essentially modified "walkie-talkies" with a DTMF (dual tone multi-frequency) keypad attached on the front panel, which fooled the terminal into believing an IMTS mobile was using the system. These units were not very common or practical because they lacked the power to reliably connect to the base station over the distances common in the IMTS systems. A compromise existed with the briefcase phone, which had somewhat higher power in the range of 10 to 20 watts (depending on how much battery was in the briefcase), and which was full duplex. Typical IMTS briefcase phones were made by Canyon, GCS, SCM Melabs and Livermore Data Systems.
Base station: IMTS base station sites generally covered an area 40–60 miles in diameter. This extended range was due both to their large transmitter power and, in many cases, to higher antenna placement, anywhere from 100 to 500 ft. IMTS base stations in larger cities had as many as 7 or 8 channels, while rural stations had as few as one or two. Each telephone conversation (connection) required the exclusive use of a channel for its duration. Because of this limitation, these systems had a much lower capacity than cellular systems, and "all channels busy" conditions were common. In larger cities this dictated a very limited number of simultaneous calls. Each subscriber was given a packet of dialing and use instructions. Roaming (receiving calls out of the "home area") was achieved by selecting the specific channels used by the tower and service provider where the user was traveling and dialing a three-digit code, thereby logging the user's land number at that location. This process had to be repeated at each tower which, as noted, usually had a range of 40–60 miles. Some areas only had half-duplex (one-way-at-a-time) communications and required use of the push-to-talk switch in the handset, between the mouthpiece and the earpiece. Two lights on the "head" indicated busy (red) if no channels were idle, and in-use (green) when connected to the tower or when the push-to-talk switch was depressed. There was no encryption, and all conversations were effectively public.
Frequencies: The frequencies listed below (in MHz) are those formerly used in the US and Canadian Mobile Telephone Service and the Improved Mobile Telephone Service. The low-band "Z"-prefixed channels were always operated in the MTS, or manual, mode. The "Z" channels were sold at auction by the FCC in approximately 2003 to other services and remain largely unused. The VHF and UHF frequencies have been opened to other services unrelated to mobile telephony and largely reassigned. The two VHF high-band channels designated JJ and JW were used only in Canada and were not available for use in the United States.
Limitations:
IMTS technology severely limited the total number of subscribers. In the 1970s and the early 1980s, before the introduction of cellular phones, there were "waiting lists" of up to three years for those wishing to have mobile telephone service. These potential subscribers were waiting for other subscribers to disconnect their subscription in order to obtain a mobile telephone number and mobile phone service.
These limitations resulted in low sales and production volumes for IMTS phones, and the mobile units were therefore very expensive ($2,000 to $4,000). Prior to the divestiture of AT&T in 1984, Bell System IMTS subscribers usually leased the equipment at a monthly rate of up to $120. Channel availability was scarce, hence airtime was also quite expensive, at $0.70–1.20 per minute. Following the divestiture, customer-owned equipment was required by Bell companies, and monthly rates then typically ran to $25 plus air time. Also, since there were so few channels, it was common for the phones to "queue up" to use a channel, and IMTS manufacturers competed on the speed with which their units would seize an available channel.
The limited number of customers MTS and IMTS could support was the driver for investment in cellular networks. In remote regions this was not the case: there, obsolescence was the driver, but the lack of a suitable and affordable alternative created regulatory obstacles, as customers did not want MTS/IMTS service to be withdrawn. The increasing affordability of satellite service and government investment in cellular expansion eventually allowed MTS and IMTS to be retired.
**Comparison of VoIP software**
Comparison of VoIP software:
This is a comparison of voice over IP (VoIP) software used to conduct telephone-like voice conversations across Internet Protocol (IP) based networks. For residential markets, voice over IP phone service is often cheaper than traditional public switched telephone network (PSTN) service and can remove geographic restrictions to telephone numbers, e.g., have a PSTN phone number in a New York area code ring in Tokyo.
For businesses, VoIP obviates separate voice and data pipelines, channelling both types of traffic through the IP network while giving the telephony user a range of advanced abilities.
Softphones are software clients for making and receiving voice and video calls over the IP network, offering the standard functions of most original telephones; they usually allow integration with VoIP phones and USB phones instead of using a computer's microphone and speakers (or headset). Most softphone clients run on the open Session Initiation Protocol (SIP), supporting various codecs. Skype runs on a closed proprietary networking protocol, but additional business telephone system (PBX) software can allow a SIP-based telephone system to connect to the Skype network. Online chat programs now also incorporate voice and video communications.
Other VoIP software applications include conferencing servers, intercom systems, virtual foreign exchange services (FXOs) and adapted telephony software which concurrently support VoIP and public switched telephone network (PSTN) like Interactive Voice Response (IVR) systems, dial in dictation, on hold and call recording servers.
Some entries below are Web-based VoIP; most are standalone desktop applications.
Secure VoIP software:
VoIP software with client-to-client encryption: The following table is an overview of those VoIP clients which can provide end-to-end encryption.
VoIP software with client-to-server encryption: The following table is an overview of those VoIP clients which normally provide client-to-server encryption.
**FAUST (programming language)**
FAUST (programming language):
FAUST (Functional AUdio STream) is a domain-specific, purely functional programming language for implementing signal-processing algorithms in the form of libraries, audio plug-ins, or standalone applications. A FAUST program denotes a signal processor: a mathematical function that maps an input signal to an output signal.
Overview:
The FAUST programming model combines a functional programming approach with a block diagram syntax: The functional programming approach provides a natural framework for signal processing. Digital signals are modeled as discrete functions of time, signal processors as second order functions that operate on them, and FAUST’s block diagram composition operators, used to combine signal processors together, as third order functions, etc.
Block diagrams, even if purely textual as in FAUST, promote a modular approach to signal processing that complies with sound engineers' and audio developers' habits. A FAUST program doesn't describe a sound or a group of sounds, but a signal processor. The program source is organized as a set of definitions, with at least a definition of the keyword process (the equivalent of main in C). The FAUST compiler translates FAUST code into a C++ object, which may then interface with other C++ code to produce a full program.
The generated code works at the sample level. It is therefore suited to implement low-level DSP functions like recursive filters. The code may also be embedded. It is self-contained and does not depend on any DSP library or runtime system. It has a very deterministic behavior and a constant memory size.
The semantics of FAUST is driven to be simple and well-defined. It allows the FAUST compiler to be semantically driven. Instead of compiling a program literally, it compiles the mathematical function it denotes. This may promote component reuse. Moreover, having access to the exact semantics of a FAUST program can simplify preservation issues.
FAUST is a textual language but block diagram oriented. It combines two approaches: functional programming and algebraic block diagrams, which are constructed via function composition. For that, FAUST relies on a block diagram algebra of five composition operations.
Example code:
FAUST programs define a process function that operates on incoming data; this is analogous to the main function in most programming languages. The following is an example that produces silence:

process = 0;

The second example copies the input signal to the output. It involves the _ primitive that denotes the identity function for signals:

process = _;

Another example sums a stereo signal into a mono signal using the + primitive:

process = +;

Most FAUST primitives are analogous to their C counterparts on numbers, but lifted to signals. For example, the FAUST primitive sin operates on a signal X by applying the C function sin to each sample X[t]. All C numerical functions have their counterpart in FAUST.
Some signal processing primitives are specific to FAUST. For example, the delay operator @ takes two input signals: X (the signal to be delayed) and D (the delay to be applied), and produces an output signal Y such that Y(t) = X(t − D(t)).
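For instance, a fixed one-second delay at a 44.1 kHz sampling rate could be written with the @ primitive as the following one-liner (an illustrative sketch in FAUST, not an example taken from the original text):

process = _ @ 44100;

Here the identity signal _ is delayed by the constant signal 44100, so Y(t) = X(t − 44100).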
Block diagram composition:
Contrary to Max-like visual programming languages where the user does manual connections, FAUST primitives are assembled in block diagrams by using a set of high-level block diagram composition operations.
Using the sequential composition operator : the output of + can be routed to the input of abs to compute the absolute value of the signal:

process = + : abs;

Here is an example of parallel composition using the , operator, which arranges its left and right expressions in parallel. This is analogous to a stereo cable:

process = _, _;
These operators can be arbitrarily combined. The following code multiplies an input signal with 0.5:

process = _, 0.5 : *;

The above may be rewritten in curried form:

process = *(0.5);

The recursive composition operator ~ can be used to create block diagrams with cycles (that include an implicit one-sample delay). Here is an example of an integrator that takes an input signal X and computes an output signal Y such that Y(t) = X(t) + Y(t−1):

process = + ~ _;
Generating full applications:
Using specific architecture files, a FAUST program can be used to produce code for a variety of platforms and plug-in formats. These architecture files act as wrappers and describe the interactions with the host audio and GUI system. As of 2021, more than 30 architectures are supported and new ones may be implemented by anyone.
Generating block diagrams:
A useful option makes it possible to generate the block diagram representation of the program as one or more SVG graphic files.
It is useful to note the difference between the block diagram and the generated C++ code. As stated, the key idea here is not to compile the block diagram literally, but the mathematical function it denotes. Modern C/C++ compilers also don’t compile programs literally. But because of the complex semantics of C/C++ (due to side effects, pointer aliasing, etc.) they can’t go very far in that direction. This is a distinct advantage of a purely functional language: it allows compilers to do very advanced optimisations.
Arrows-like semantics:
The FAUST semantics is almost the same as that of Haskell's Arrow type class.
However, the Arrow type class is not bound to signal processors.
The Arrow combinators are more restrictive than their FAUST counterparts, e.g., the nesting of parallel composition is preserved, and inputs of the operands of &&& must match exactly.
**AmpliFIND**
AmpliFIND:
AmpliFIND is an acoustic fingerprinting service and a software development kit developed by the US company MusicIP.
MusicIP first marketed their fingerprinting algorithm and service as MusicDNS. In 2006, MusicIP reported that the MusicDNS database had more than 22 million fingerprints of digital audio recordings. One of their customers was MetaBrainz Foundation, a non-profit company that used MusicDNS in their MusicBrainz and MusicBrainz Picard software products. Even so, MusicIP dissolved in 2008. The company's CEO, Andrew Stess, bought the rights to MusicDNS, renamed the software AmpliFIND, and started a new company called AmpliFIND Music Services. In 2011, Stess sold AmpliFIND to Sony, who incorporated it into the digital music service offerings of their Gracenote division. Tribune Media subsequently purchased Gracenote, including the MusicDNS software.
How MusicDNS identifies a recording:
To use the MusicDNS service, software developers write a computer program that incorporates an open-source software library called LibOFA. This library implements the Open Fingerprint Architecture, a specification developed during 2000–05 by MusicIP's previous incarnation, Predixis Corporation.
Through LibOFA, a program can fingerprint a recording, and submit the fingerprint to MusicDNS via the Internet. MusicDNS attempts to match the submission to fingerprints in its database. If the MusicDNS service finds an approximate match, it returns a code called a PUID (Portable Unique Identifier). This code does not contain any acoustic information; rather, it enables a computer program to retrieve identifying information (such as the song title and recording artist) from the MusicDNS database. The PUID code is a short, alphanumeric string based on the universally unique identifier standard. The source code for LibOFA is distributed under a dual license: the GNU General Public License and the Adaptive Public License. The MusicDNS software that makes the fingerprints is proprietary.
**Karloff (name)**
Karloff (name):
Karloff is a name that is used as a professional name. Notable people who use this name include the following:
Boris Karloff, whose birth name was William Henry Pratt (1887–1969), English actor
Karloff Lagarde, stage name of Carlos Delucio Lagarde (1928–2007), Mexican luchador, uncle of Karloff Lagarde Jr.
Karloff Lagarde Jr., stage name of César Baltazar de Lucio Valencia (born 1970), Mexican luchador, nephew of Karloff Lagarde
**Stallion**
Stallion:
A stallion is a male horse that has not been gelded (castrated).
Stallions follow the conformation and phenotype of their breed, but within that standard, the presence of hormones such as testosterone may give stallions a thicker, "cresty" neck, as well as a somewhat more muscular physique as compared to female horses, known as mares, and castrated males, called geldings.
Temperament varies widely based on genetics and training, but because of their instincts as herd animals, stallions may be prone to aggressive behavior, particularly toward other stallions, and thus require careful management by knowledgeable handlers. With proper training and management, however, stallions are effective equine athletes at the highest levels of many disciplines, including horse racing, horse shows, and international Olympic competition.
"Stallion" is also used to refer to males of other equids, including zebras and donkeys.
Herd behavior:
Contrary to popular myths, many stallions do not live with a harem of mares. Nor, in natural settings, do they fight each other to the death in competition for mares. Being social animals, stallions who are not able to find or win a harem of mares usually band together in stallions-only "bachelor" groups which are composed of stallions of all ages. Even with a band of mares, the stallion is not the leader of a herd but defends and protects the herd from predators and other stallions. The leadership role in a herd is held by a mare, known colloquially as the "lead mare" or "boss mare." The mare determines the movement of the herd as it travels to obtain food, water, and shelter. She also determines the route the herd takes when fleeing from danger. When the herd is in motion, the dominant stallion herds the straggling members closer to the group and acts as a "rear guard" between the herd and a potential source of danger. When the herd is at rest, all members share the responsibility of keeping watch for danger. The stallion is usually on the edge of the group, to defend the herd if needed.
There is usually one dominant mature stallion for every mixed-sex herd of horses. The dominant stallion in the herd will tolerate both sexes of horses while young, but once they become sexually mature, often as yearlings or two-year-olds, the stallion will drive both colts and fillies from the herd. Colts may present competition for the stallion, but studies suggest that driving off young horses of both sexes may also be an instinctive behavior that minimizes the risk of inbreeding within the herd, as most young are the offspring of the dominant stallion in the group. In some cases, a single younger mature male may be tolerated on the fringes of the herd. One theory is that this young male is considered a potential successor, as in time the younger stallion will eventually drive out the older herd stallion.
Fillies usually soon join a different band with a dominant stallion different from the one that sired them. Colts or young stallions without mares of their own usually form small, all-male "bachelor bands" in the wild. Living in a group gives these stallions the social and protective benefits of living in a herd. A bachelor herd may also contain older stallions who have lost their herd in a challenge. Other stallions may directly challenge a herd stallion, or may simply attempt to "steal" mares and form a new, smaller herd. In either case, if the two stallions meet, there rarely is a true fight; more often there will be bluffing behavior and the weaker horse will back off. Even if a fight for dominance occurs, opponents rarely hurt each other in the wild because the weaker combatant has a chance to flee. Fights between stallions in captivity may result in serious injuries; fences and other forms of confinement make it more difficult for the losing animal to escape safely. In the wild, feral stallions have been known to steal or mate with domesticated mares.
Reproductive anatomy:
The stallion's reproductive system is responsible for his sexual behavior and secondary sex characteristics (such as a large crest).
The external genitalia comprise:
the testes, which are suspended horizontally within the scrotum; the testes of an average stallion are ovoids 8 to 12 cm (3.1 to 4.7 in) long, 6 to 7 cm (2.4 to 2.8 in) high, and 5 cm (2.0 in) wide;
the penis, within the penile sheath. Stallions have a vascular penis: when non-erect, it is quite flaccid and contained within the sheath. The retractor penis muscle is relatively underdeveloped. Erection and protrusion take place gradually, through the increasing tumescence of the erectile vascular tissue in the corpus cavernosum penis. When not erect, the penis is housed within the prepuce, 50 cm (20 in) long and 2.5 to 6 cm (0.98 to 2.36 in) in diameter, with the distal end 15 to 20 cm (5.9 to 7.9 in). The retractor muscle contracts to retract the penis into the sheath and relaxes to allow the penis to extend from the sheath. When erect, the penis doubles in length and thickness and the glans increases by 3 to 4 times. The urethra opens within the urethral fossa, a small pouch at the distal end of the glans. A structure called the urethral process projects beyond the glans.
The internal genitalia comprise the accessory sex glands: the vesicular glands, the prostate gland, and the bulbourethral glands. These contribute fluid to the semen at ejaculation but are not strictly necessary for fertility.
Management and handling of domesticated stallions:
Domesticated stallions are trained and managed in a variety of ways, depending on the region of the world, the owner's philosophy, and the individual stallion's temperament. In all cases, however, stallions have an inborn tendency to attempt to dominate both other horses and human handlers, and will be affected to some degree by proximity to other horses, especially mares in heat. They must be trained to behave with respect toward humans at all times or else their natural aggressiveness, particularly a tendency to bite, may pose a danger of serious injury. For this reason, regardless of management style, stallions must be treated as individuals and should be handled only by people who are experienced with horses and thus recognize and correct inappropriate behavior before it becomes a danger. While some breeds are of a gentler temperament than others, and individual stallions may be well-behaved enough to be handled even by inexperienced people for short periods of time, common sense must always be used. Even the gentlest stallion has natural instincts that may overcome human training. As a general rule, children should not handle stallions, particularly in a breeding environment.
Management of stallions usually follows one of two models: confinement or "isolation" management, where the stallion is kept alone, and management systems variously called "natural", "herd", or "pasture" management, where the stallion is allowed to be with other horses. In the "harem" model, the stallion is allowed to run loose with mares, akin to a feral or semi-feral herd. In the "bachelor herd" model, stallions are kept in a male-only group of stallions or, in some cases, with stallions and geldings. Sometimes stallions are managed in multiple systems over the course of the year, depending on the season.
The advantage of natural types of management is that the stallion is allowed to behave "like a horse" and may exhibit fewer stable vices. In a harem model, the mares may "cycle" or achieve estrus more readily. Proponents of natural management also assert that mares are more likely to "settle" (become pregnant) in a natural herd setting. Some stallion managers keep a stallion with a mare herd year-round, while others will only turn a stallion out with mares during the breeding season. In some places, young domesticated stallions are allowed to live separately in a "bachelor herd" while growing up, kept out of sight, sound, or smell of mares. A Swiss study demonstrated that even mature breeding stallions kept well away from other horses could live peacefully together in a herd setting if proper precautions were taken while the initial herd hierarchy was established. As an example, in the New Forest, England, breeding stallions run out on the open Forest for about two to three months each year with the mares and youngstock. On being taken off the Forest, many of them stay together in bachelor herds for most of the rest of the year. New Forest stallions, when not in their breeding work, take part in the annual round-ups, working alongside mares and geldings, and compete successfully in many disciplines. There are drawbacks to natural management, however. One is that the breeding date, and hence the foaling date, of any given mare will be uncertain. Another problem is the risk of injury to the stallion or mare during natural breeding, or the risk of injury while a hierarchy is established within an all-male herd. Some stallions become very anxious or temperamental in a herd setting and may lose considerable weight, sometimes to the point of a health risk. Some may become highly protective of their mares and thus more aggressive and dangerous to handle. There is also a greater risk that the stallion may escape from a pasture or be stolen. Stallions may break down fences between adjoining fields to fight another stallion or mate with the "wrong" herd of mares, putting the pedigree of ensuing foals in question.
Management and handling of domesticated stallions:
The other general method of managing stallions is to confine them individually, sometimes in a small pen or corral with a tall fence, other times in a stable, or, in certain places, in a small field (or paddock) with a strong fence. The advantages to individual confinement include less of a risk of injury to the stallion or to other horses, controlled periods for breeding mares, greater certainty of what mares are bred when, less risk of escape or theft, and ease of access by humans. Some stallions are of such a temperament, or develop vicious behavior due to improper socialization or poor handling, that they must be confined and cannot be kept in a natural setting, either because they behave in a dangerous manner toward other horses, or because they are dangerous to humans when loose.
Management and handling of domesticated stallions:
The drawbacks to confinement vary with the details of the actual method used, but stallions kept out of a herd setting require a careful balance of nutrition and exercise for optimal health and fertility. Lack of exercise can be a serious concern; stallions without sufficient exercise may not only become fat, which may reduce both health and fertility, but also may become aggressive or develop stable vices due to pent-up energy. Some stallions within sight or sound of other horses may become aggressive or noisy, calling or challenging other horses. This sometimes is addressed by keeping stallions in complete isolation from other animals.
Management and handling of domesticated stallions:
However, complete isolation has significant drawbacks; stallions may develop additional behavior problems with aggression due to frustration and pent-up energy. As a general rule, a stallion that has been isolated from the time of weaning or sexual maturity will have a more difficult time adapting to a herd environment than one allowed to live close to other animals. Because horses are instinctively social creatures, even stallions are believed to benefit from being allowed social interaction with other horses, though proper management and caution are needed. Some managers attempt to compromise between the two methods by providing stallions daily turnout by themselves in a field where they can see, smell, and hear other horses. They may be stabled in a barn with bars or a grille between stalls through which they can look out and see other animals. In some cases, a stallion may be kept with or next to a gelding or a non-horse companion animal such as a goat, a gelded donkey, a cat, or another creature.
Management and handling of domesticated stallions:
Properly trained stallions can live and work close to mares and to one another. Examples include the Lipizzan stallions of the Spanish Riding School in Vienna, Austria, where the entire group of stallions lives part-time in a bachelor herd as young colts, then as adults are stabled, trained, and travel worldwide to perform, with few if any management problems. Even stallions that are unfamiliar with each other can work safely in reasonable proximity if properly trained; the vast majority of Thoroughbred horses on the racetrack are stallions, as are many equine athletes in other forms of competition. Stallions are often shown together in the same ring at horse shows, particularly in halter classes where their conformation is evaluated. In horse show performance competition, stallions and mares often compete in the same arena, particularly in Western and English "pleasure"-type classes where horses are worked as a group. Overall, stallions can be trained to keep focused on work and may be brilliant performers if properly handled. A breeding stallion is more apt to present challenging behavior to a human handler than one who has not bred mares, and stallions may be more difficult to handle in spring and summer, during the breeding season, than during the fall and winter. However, some stallions are used for both equestrian purposes and breeding at the same general time of year. Though compromises may need to be made in expectations for both athletic performance and fertility rate, well-trained stallions with good temperaments can be taught that breeding behavior is only allowed in a certain area, or with certain cues, equipment, or a particular handler. However, some stallions lack the temperament to focus on work if also breeding mares in the same general time period, and are therefore taken out of competition either temporarily or permanently to be used for breeding. When permitted by a breed registry, artificial insemination is another technique that may reduce behavior problems in stallions.
Cultural views of stallions:
Attitudes toward stallions vary between different parts of the world. In some parts of the world, the practice of gelding is not widespread and stallions are common. In other places, most males are gelded and only a few stallions are kept as breeding stock. Horse breeders who produce purebred bloodstock often recommend that no more than the top 10 percent of all males be allowed to reproduce, to continually improve a given breed of horse.
Cultural views of stallions:
People sometimes have inaccurate beliefs about stallions, both positive and negative. Some believe that stallions are always mean, vicious, or uncontrollable; others believe that misbehaving stallions should be allowed to misbehave because they are being "natural", "spirited", or "noble." In some cases, fed by movies and fictional depictions of horses in literature, some people believe a stallion can bond to a single human individual to the exclusion of all others. Like many other misconceptions, these beliefs hold only partial truth. Some, though not all, stallions can be vicious or hard to handle, occasionally due to genetics, but usually due to improper training; others are very well trained and have excellent manners. A misbehaving stallion may look pretty or be exhibiting instinctive behavior, but that behavior can still become dangerous if not corrected. Some stallions do behave better for some people than others, but that can be true of some mares and geldings as well.
Cultural views of stallions:
In some parts of Asia and the Middle East, the riding of stallions is widespread, especially among male riders. The gelding of stallions is unusual, viewed culturally as either unnecessary or unnatural. In areas where gelding is not widely practised, stallions are still not needed in numbers as great as mares, and so many will be culled, either sold for horsemeat or simply sold to traders who will take them outside the area. Of those that remain, many will not be used for breeding purposes.
Cultural views of stallions:
In Europe, Australia, and the Americas, keeping stallions is less common, primarily confined to purebred animals that are usually trained and placed into competition to test their quality as future breeding stock. The majority of stallions are gelded at an early age and then trained for use as everyday working or riding animals.
Geldings:
If a stallion is not to be used for breeding, gelding the male horse will allow it to live full-time in a herd with both males and females, reduce aggressive or disruptive behavior, and allow the horse to be around other animals without being seriously distracted. A horse that is not to be used for breeding can be gelded prior to reaching sexual maturity, and a horse gelded young may grow taller and behave better. Older stallions that are sterile or otherwise no longer used for breeding may also be gelded and will exhibit calmer behavior, even if previously used for breeding; however, they are more likely to continue stallion-like behaviors than horses gelded at a younger age, especially if they have been used as breeding stallions. Modern surgical techniques allow castration to be performed on a horse of almost any age with relatively few risks. In most cases, particularly in modern industrialized cultures, a male horse that is not of sufficient quality to be used for breeding will have a happier life without having to deal with the instinctive, hormone-driven behaviors that come with being left intact. Geldings are safer to handle and present fewer management problems. They are also more widely accepted: many boarding stables will refuse clients with stallions or charge considerably more money to keep them, and some types of equestrian activity, such as events involving children or clubs that sponsor purely recreational events such as trail riding, may not permit stallions to participate. However, just as some pet owners may have conflicting emotions about neutering a male dog or cat, some stallion owners may be unsure about gelding a stallion. One branch of the animal rights community maintains that castration is mutilation and damaging to the animal's psyche.
Geldings:
Ridglings: A ridgling or "rig" is a cryptorchid, a stallion with one or both testicles undescended. If both testicles are undescended, the horse may appear to be a gelding but will still behave like a stallion. (A gelding that displays stallion-like behaviors is sometimes called a "false rig".) In many cases, ridglings are infertile or have significantly reduced fertility. The condition is most easily corrected by gelding the horse. A more complex and costly surgical procedure can sometimes correct the condition and restore the animal's fertility, though it is only cost-effective for a horse with very high potential as a breeding stallion. This surgery generally removes the non-descended testicle, leaving the descended testicle and creating a horse known as a monorchid stallion. Keeping cryptorchids or surgically created monorchids as breeding stallions is controversial, as the condition is at least partially genetic and some handlers claim that cryptorchids tend to have greater levels of behavioral problems than normal stallions.
**Fabry disease**
Fabry disease:
Fabry disease, also known as Anderson–Fabry disease, is a rare genetic disease that can affect many parts of the body, including the kidneys, heart, and skin. Fabry disease is one of a group of conditions known as lysosomal storage diseases. The genetic mutation that causes Fabry disease interferes with the function of an enzyme that processes biomolecules known as sphingolipids, leading to these substances building up in the walls of blood vessels and other organs. It is inherited in an X-linked manner.
Fabry disease:
Fabry disease is sometimes diagnosed using a blood test that measures the activity of the affected enzyme called alpha-galactosidase, but genetic testing is also sometimes used, particularly in females.
The treatment for Fabry disease varies depending on the organs affected by the condition, and the underlying cause can be addressed by replacing the enzyme that is lacking.
The first descriptions of the condition were made simultaneously by dermatologist Johannes Fabry and the surgeon William Anderson in 1898.
Signs and symptoms:
Symptoms are typically first experienced in early childhood and can be very difficult to diagnose; because Fabry disease is rare and unfamiliar to many clinicians, it is sometimes misdiagnosed. Manifestations of the disease usually increase in number and severity as an individual ages.
Signs and symptoms:
Pain: Full-body or localized pain to the extremities (known as acroparesthesia) or gastrointestinal (GI) tract is common in patients with Fabry disease. This pain can increase over time. The acroparesthesia is believed to be related to the damage of peripheral nerve fibers that transmit pain. GI-tract pain is likely caused by accumulation of lipids in the small vasculature of the GI tract, which obstructs blood flow and causes pain.
Signs and symptoms:
Kidney: Kidney complications are common and serious effects of the disease; chronic kidney disease and kidney failure may worsen throughout life. The presence of protein in the urine (which causes foamy urine) is often the first sign of kidney involvement. End-stage kidney failure in those with Fabry disease typically occurs in the third decade of life, and is a common cause of death due to the disease.
Signs and symptoms:
Heart: Fabry disease can affect the heart in several ways. The accumulation of sphingolipids within heart muscle cells causes abnormal thickening of the heart muscle, or hypertrophy. This hypertrophy can cause the heart muscle to become abnormally stiff and unable to relax, leading to a restrictive cardiomyopathy causing shortness of breath. Fabry disease can also affect the way in which the heart conducts electrical impulses, leading to both abnormally slow heart rhythms, such as complete heart block, and abnormally rapid heart rhythms, such as ventricular tachycardia. These abnormal heart rhythms can cause blackouts, palpitations, or even sudden cardiac death. Sphingolipids can also build up within the heart valves, thickening the valves and affecting the way they open and close. If severe, this can cause the valves to leak (regurgitation) or to restrict the forward flow of blood (stenosis). The aortic and mitral valves are more commonly affected than the valves on the right side of the heart.
Signs and symptoms:
Skin: Angiokeratomas (tiny, painless papules that can appear on any region of the body, but are predominant on the thighs, around the navel, buttocks, lower abdomen, and groin) are common. Anhidrosis (lack of sweating) is a common symptom, and less commonly hyperhidrosis (excessive sweating). Additionally, patients can exhibit Raynaud's disease-like symptoms with neuropathy (in particular, burning extremity pain). Ocular involvement may be present, showing cornea verticillata (also known as vortex keratopathy), i.e. clouding of the corneas. Keratopathy may be the presenting feature in asymptomatic patients, and must be differentiated from other causes of vortex keratopathy (e.g. drug deposition in the cornea). This clouding does not affect vision. Other ocular findings can include conjunctival and retinal vascular abnormalities and anterior/posterior spoke-like cataract. Visual reduction from these manifestations is uncommon.
Signs and symptoms:
Other manifestations: Fatigue, neuropathy (in particular, burning extremity pain, with intermittently red hands and feet), cerebrovascular effects leading to an increased risk of early stroke (mostly in the vertebrobasilar system), tinnitus (ringing in the ears), vertigo, nausea, inability to gain weight, chemical imbalances, and diarrhea are other common symptoms.
Causes:
Fabry disease is caused by a gene that is not functioning as it should. A person who inherits this gene does not have enough of a functioning enzyme known as alpha-galactosidase A, and this lack leads to Fabry disease. A deficiency of alpha-galactosidase A (a-GAL A, encoded by GLA) due to mutation causes a glycolipid known as globotriaosylceramide (abbreviated as Gb3, GL-3, or ceramide trihexoside) to accumulate within the blood vessels, other tissues, and organs, impairing their proper function. At least 443 disease-causing mutations in the GLA gene have been discovered. The DNA mutations that cause the disease are X-linked recessive with incomplete penetrance in heterozygous females. The condition affects hemizygous males (i.e. all non-intersex males) as well as homozygous females and, in many cases, heterozygous females. While males typically experience severe symptoms, women can range from being asymptomatic to having severe symptoms. Research suggests many women experience severe symptoms, ranging from early cataracts or strokes to hypertrophic left ventricular heart problems and kidney failure. This variability is thought to be due to X-inactivation patterns during embryonic development of the female.
Mechanism:
Fabry disease is an inherited lysosomal storage disorder caused by a deficiency of alpha-galactosidase A. The enzyme deficiency results in an accumulation of glycosphingolipids in the lysosomes of most cell types and tissues, which leads it to be considered a multisystem disease. Indications include painful crises, angiokeratomas, corneal dystrophy, and hypohidrosis. In severe cases there is renal, cerebrovascular, and cardiac involvement, which is predominantly responsible for premature mortality in Fabry patients. Fabry disease is X-linked and manifests mostly in hemizygous males but also in heterozygous females. Cardiac involvement is common in Fabry patients, who may develop hypertrophic cardiomyopathy, arrhythmias, conduction abnormalities, and valvular abnormalities. Deficient activity of lysosomal alpha-galactosidase results in progressive accumulation of globotriaosylceramide (GL-3) within lysosomes, which is believed to trigger a cascade of cellular events. The demonstration of marked alpha-galactosidase deficiency is the conclusive diagnostic method in hemizygous males. Reduced activity may be detected in heterozygous females, but the result is often inconclusive due to random X-chromosomal inactivation, so molecular testing (genotyping) of females is mandatory.
Diagnosis:
Fabry disease is suspected based on the individual's clinical presentation, and can be diagnosed by an enzyme assay (usually done on leukocytes) to measure the level of alpha-galactosidase activity. An enzyme assay is not reliable for the diagnosis of disease in females due to the random nature of X-inactivation. Molecular genetic analysis of the GLA gene is the most accurate method of diagnosis in females, particularly if the mutations have already been identified in male family members. Many disease-causing mutations have been noted. Kidney biopsy may also be suggestive of Fabry disease if excessive lipid buildup is noted. Pediatricians, as well as internists, commonly misdiagnose Fabry disease. All immediate and extended family members in the same family carry the same mutation, so if one member of a family has a DNA sequence analysis performed, other members can be diagnosed by a targeted sequence analysis instead of testing the entire gene; targeted sequencing is quicker and less expensive to perform. One study reported that for every first diagnosis in a family, on average five more family members (immediate and extended) are also diagnosed. MRI is accurate in assessing left ventricular mass, thickness, and hypertrophy. Late gadolinium enhancement shows increased signal of the midwall at the inferolateral wall of the base of the left ventricle, usually in the non-hypertrophic ventricle. T1-weighted imaging can show low T1 signal due to sphingolipid storage in the heart, even without ventricular hypertrophy, in 40% of those affected by the disease; thus, MRI is a useful way of diagnosing the disease early. T2 signal is increased in inflammation and oedema.
Treatment:
The treatments available for Fabry disease can be divided into therapies that aim to correct the underlying problem of decreased activity of the alpha galactosidase A enzyme and thereby reduce the risk of organ damage, and therapies to improve symptoms and life expectancy once organ damage has already occurred.
Treatment:
Therapies targeting enzyme activity: Enzyme replacement therapy is designed to provide the enzyme the patient is missing as a result of a genetic malfunction. This treatment is not a cure, but can partially prevent disease progression and potentially reverse some symptoms. As of March 2022, two medical drugs based on enzyme replacement therapy are available for Fabry disease. Agalsidase alfa, sold under the brand name Replagal by the company Takeda (since its acquisition of the company Shire), is a recombinant form of alpha-galactosidase A. It received approval in the EU in 2001, and FDA approval was applied for in the United States. However, Shire withdrew its application for approval in the United States in 2012, citing that the agency would require additional clinical trials before approval. As of March 2022, Replagal has not received FDA approval.
Treatment:
Agalsidase beta, sold under the brand name Fabrazyme by the company Sanofi, is another recombinant form of alpha-galactosidase. Like Replagal, it received approval in the EU in 2001. In 2003, it was the first treatment for Fabry disease to be approved by the FDA.
Treatment:
Pegunigalsidase alfa (Elfabrio) was approved for medical use in the European Union in May 2023. Clinically, agalsidase alfa and agalsidase beta are generally similar in effectiveness and safety; however, they have never been compared directly in a randomized trial. Both are given by intravenous infusion every two weeks. They are available in Europe and in many other parts of the world, but treatment costs remain very high. Pharmacological chaperone therapy is another strategy to maintain enzyme activity. It does so by assisting correct folding of alpha-galactosidase despite the mutations that cause Fabry disease. As of March 2022, one medical drug based on pharmacological chaperone therapy is available for Fabry disease: migalastat, sold under the brand name Galafold by the company Amicus Therapeutics, a pharmacological chaperone that can stabilize many mutant forms of alpha-galactosidase. It is taken by mouth. In a randomized trial comparing migalastat with enzyme replacement therapy, the efficacy and safety of both treatments were similar. The US Food and Drug Administration (FDA) granted Galafold orphan drug status in 2004, and the European Commission followed in 2006. The European Medicines Agency's Committee for Medicinal Products for Human Use (CHMP) granted the drug a marketing approval under the name Galafold in May 2016. FDA approval followed in 2018.
Treatment:
Experimental therapies that are not approved for treatment as of March 2022 include the following: a gene therapy treatment in early-phase clinical trials, with the technology licensed to AvroBio; the substrate reduction therapy venglustat (ibiglustat), under development by Sanofi Genzyme; a bio-better ERT (CDX-6311), under pre-clinical development by the company Codexis; and a gene therapy (ST-920), under development by the company Sangamo.
Treatment:
Organ-specific treatment: Pain associated with Fabry disease may be partially alleviated by enzyme replacement therapy in some patients, but pain management regimens may also include analgesics, anticonvulsants, and nonsteroidal anti-inflammatory drugs, though the latter are usually best avoided in kidney disease. The kidney failure seen in some of those with Fabry disease sometimes requires haemodialysis. The cardiac complications of Fabry disease include abnormal heart rhythms, which may require a pacemaker or implantable cardioverter-defibrillator, while the restrictive cardiomyopathy often seen may require diuretics.
Prognosis:
According to registry data from 2001 to 2008, life expectancy with Fabry disease was 58.2 years for males, compared with 74.7 years in the general population, and 75.4 years for females, compared with 80.0 years in the general population. The most common cause of death was cardiovascular disease, and most of those who died had received kidney replacements.
Epidemiology:
Fabry disease is panethnic, but due to its rarity, determining an accurate disease frequency is difficult. Reported incidences, ranging from one in 476,000 to one in 117,000 in the general population, may largely underestimate the true prevalence. Newborn screening initiatives have found an unexpectedly high prevalence of the disease, as high as one in about 3,100 newborns in Italy, and a surprisingly high frequency among newborn males, around one in 1,500, in Taiwan.
Research:
Enzyme replacement therapy: replacement of the missing enzyme to clear the lipids (GL-3) from the cells
Substrate synthesis inhibition, also called substrate reduction therapy: inhibits the production of the lipid (GL-3) that accumulates in the cells
Chaperone therapy: uses small-molecule drugs that bind to the defective enzyme and stabilize it, increasing enzyme activity and cellular function
Gene editing: technology that can potentially cut and fix a broken gene in a cell
Gene therapy: genetically modifies the affected cells to produce the missing enzyme
History:
Fabry disease was first described independently by dermatologist Johannes Fabry and surgeon William Anderson in 1898. It was recognised in 1952 to be due to abnormal storage of lipids. In the 1960s, the inheritance pattern was established as X-linked, and the molecular defect responsible for the accumulation of glycolipids was identified. Ken Hashimoto published his classic paper on his electron microscopic findings in Fabry disease in 1965. The first specific treatment for Fabry disease was approved in 2001.
Society and culture:
House ("Epic Fail", season six, episode three) centers on a patient with Fabry disease.
Scrubs ("My Catalyst", season three, episode 12) features a Fabry disease diagnosis.
Crossing Jordan ("There's No Place Like Home", season two, episode one) features a patient who died from Fabry disease.
The Village (Korean drama): "Achiara's Secret" features daughters of a serial rapist who find each other because they share Fabry disease.
Doctor John (Korean drama): In episode two, a prisoner is diagnosed with Fabry disease.
In Lincoln Rhyme: Hunt for the Bone Collector, a copycat of the titular Bone Collector has Fabry disease and takes Galafold, which allows the detectives to learn his identity.
Partners for Justice 2 (Korean drama), features Doctor K, who had Fabry disease.
Doc (Italian drama): Series two features an episode with a tennis player who is diagnosed with Fabry disease.
**Pulse sequence**
Pulse sequence:
In Fourier transform NMR spectroscopy and imaging, a pulse sequence describes a series of radio frequency pulses applied to the sample, such that the free induction decay is related to the characteristic frequencies of the desired signals. After applying a Fourier transform, the signal can be represented in the frequency domain as the NMR spectrum. In magnetic resonance imaging, additional gradient pulses are applied by switching magnetic fields that exhibit a space-dependent gradient, which can be used to reconstruct spatially resolved images after applying Fourier transforms. The outcome of pulse sequences is often analyzed using the product operator formalism.
**On the run (finance)**
On the run (finance):
In finance, an on-the-run security or contract is the most recently issued, and hence most liquid, of a periodically issued security. On-the-run securities are generally more liquid and trade at a premium to other securities. Other, older issues are referred to as off-the-run securities, and trade at a discount to on-the-run securities.
Examples:
United States Treasury securities have periodic auctions; the treasury of a given tenor, say 30 years, which has most recently been auctioned is the on-the-run security, while all older treasuries of that tenor are off-the-run.
For credit default swaps, the 5-year contract sold at the most recent IMM date is the on-the-run security; it thus has remaining maturity of between 4 years, 9 months and 5 years.
A number of indices only hold on-the-run contracts, to ease trading.
Trades:
When a new security is issued, becoming the new on-the-run security, buying the new contract and selling the old one is called rolling the contract.
Trades:
A convergence trade involves the difference in price between the on-the-run and the most recent off-the-run instrument: for long tenors, these are virtually the same instrument, and in any event, an on-the-run instrument becomes off-the-run upon the issue of a newer instrument. Thus, if the basis (difference in price) between an on-the-run and most recent off-the-run instrument becomes large, one may buy the off-the-run and sell the on-the-run in anticipation of the basis shrinking. This trade, for 30-year treasuries, is notable for having been practiced by Long-Term Capital Management.
**Trichofolliculoma**
Trichofolliculoma:
Trichofolliculoma is a cutaneous condition characterized by a benign, highly structured tumor of the pilosebaceous unit. It is a rare tumor of the eyelid, and can be suspected by the “cotton bag sign”.
**Computer-aided maintenance**
Computer-aided maintenance:
Computer-aided maintenance (not to be confused with CAM, which usually stands for computer-aided manufacturing) refers to systems that utilize software to organize the planning, scheduling, and support of maintenance and repair. A common application of such systems is the maintenance of computers themselves, whether hardware or software. It can also apply to other complex systems that require periodic servicing, such as reminding operators that preventive maintenance is due, or even predicting when such maintenance should be performed based on recorded past experience.
Computer aided configuration:
The first computer-aided maintenance software came from DEC in the 1980s to configure VAX computers. The software was built using the techniques of artificial intelligence expert systems, because the problem of configuring a VAX required expert knowledge. During the research the software was called R1; it was renamed XCON when placed in service. Fundamentally, XCON was a rule-based configuration database written as an expert system using forward-chaining rules. As one of the first expert systems to be pressed into commercial service, it created high expectations, which did not materialize as DEC lost commercial pre-eminence.
Help Desk software:
Help desks frequently use help desk software that captures the symptoms of a bug and relates them to fixes in a fix database. One problem with this approach is that the understanding of the problem is embodied in a non-human way, so solutions are not unified.
Strategies for finding fixes:
The bubble-up strategy simply records pairs of symptoms and fixes. For a given symptom, the most frequently recorded pair is presented as a tentative solution, which is then attempted. If the fix works, that fact is recorded, along with the configuration of the presenting system, in a solutions database.
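A minimal sketch of such a symptom-fix store in C (all names and the fixed table size are illustrative assumptions, not taken from any real product): each confirmed fix bumps a counter, and a lookup returns the most frequently confirmed fix for a symptom.

#include <stdio.h>
#include <string.h>

/* One recorded symptom/fix pair with a confirmation count. */
struct pair { const char *symptom, *fix; int count; };

static struct pair db[64];
static int ndb;

/* Bubble-up: record that `fix` worked for `symptom`. */
static void record(const char *symptom, const char *fix) {
    for (int i = 0; i < ndb; i++)
        if (!strcmp(db[i].symptom, symptom) && !strcmp(db[i].fix, fix)) {
            db[i].count++;
            return;
        }
    if (ndb < 64)
        db[ndb++] = (struct pair){ symptom, fix, 1 };
}

/* Return the most frequently confirmed fix for `symptom`, or NULL. */
static const char *suggest(const char *symptom) {
    const char *best = NULL;
    int best_count = 0;
    for (int i = 0; i < ndb; i++)
        if (!strcmp(db[i].symptom, symptom) && db[i].count > best_count) {
            best = db[i].fix;
            best_count = db[i].count;
        }
    return best;
}

int main(void) {
    record("no network", "reboot");
    record("no network", "reboot");
    record("no network", "reseat cable");
    printf("suggested fix: %s\n", suggest("no network"));  /* reboot */
    return 0;
}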
Strategies for finding fixes:
Oddly enough, shutting down and booting up again manages to 'fix', or at least 'mask', a bug in many computer-based systems; thus reboot is the remedy for distressingly many symptoms in a fix database. The reason a reboot often works is that it flushes the RAM. However, the same set of actions is typically likely to recreate the same problem, demonstrating a need to refine the "startup" applications (which launch into memory) or to install the latest fix or patch of the offending application.
Strategies for finding fixes:
Currently, most expertise in finding fixes lies with human domain experts, who sit at a replica of the computer-based system, 'talk through' the problem with the client to duplicate it, and then relate the fix.
**Noxious weed**
Noxious weed:
A noxious weed, harmful weed or injurious weed is a weed that has been designated by an agricultural or other governing authority as a plant that is injurious to agricultural or horticultural crops, natural habitats or ecosystems, or humans or livestock. Most noxious weeds have been introduced into an ecosystem by ignorance, mismanagement, or accident, though some are native. Typically they are plants that grow aggressively, multiply quickly without natural controls (native herbivores, soil chemistry, etc.), and display adverse effects through contact or ingestion. Noxious weeds are a large problem in many parts of the world, greatly affecting areas of agriculture, forest management, nature reserves, parks, and other open space. Many noxious weeds have arrived in new regions and countries through contaminated shipments of feed and crop seeds, or were intentionally introduced as ornamental plants for horticultural use.
Noxious weed:
Some "noxious weeds", such as ragwort, produce copious amounts of nectar, valuable for the survival of bees and other pollinators, or other advantages like larval host foods and habitats. In the USA, wild parsnip Pastinaca sativa, for instance, provides large tubular stems that some bee species hibernate in, larval food for two different swallowtail butterflies, and other beneficial qualities.
Types:
Some noxious weeds are harmful or poisonous to humans, domesticated grazing animals, and wildlife. Open fields and grazing pastures with disturbed soils and open sunlight are often more susceptible. Protecting grazing animals from toxic weeds in their primary feeding areas is therefore important.
Control:
Some guidelines to prevent the spread of noxious weeds are:
Avoid driving through noxious weed-infested areas.
Avoid transporting or planting seeds and plants that one cannot identify.
For noxious weeds in flower or with seeds on the plant, pulling them out gently and placing them in a secure closable bag is recommended. Disposal, such as hot composting or contained burning, is done when safe and practical for the specific plant; burning poison ivy, for example, can be fatal to humans.
Control:
Using only certified weed-free seeds for crops or gardens. Maintaining control of noxious weeds is important for the health of habitats, livestock, wildlife, native plants, and humans of all ages. How to control noxious weeds depends on the surrounding environment and habitats, the weed species, and the availability of equipment, labor, supplies, and financial resources. Laws often require that noxious weed control funding from governmental agencies be used for eradication, invasion prevention, or native habitat and plant community restoration projects. Insects and fungi have long been used as biological controls of some noxious weeds, and more recently nematodes have also been used.
Controversy and biases:
Agricultural needs, desires, and concerns do not always mesh with those of other areas, such as pollinator nectar provision. Ragwort, for instance, was rated as the top flower meadow nectar source in one UK study, and in the top ten in another. Its early blooming period is also particularly helpful for the establishment of bumblebee colonies. Thistles that are considered noxious weeds in the USA and elsewhere, such as Cirsium arvense and Cirsium vulgare, have also rated at or near the top of the charts for nectar production in multiple studies in the UK, part of their native range. These thistles also serve as a larval host plant for the painted lady butterfly. There can therefore be a conflict between the agricultural policy and point of view and the point of view of conservationists or other groups.
By country:
Australia: In Australia, the term "noxious weed" is used by state and territorial governments.
By country:
Canada: In Canada, constitutional responsibility for the regulation of agriculture and the environment is shared between the federal and provincial governments. The federal government, through the Canadian Food Inspection Agency (CFIA), regulates invasive plants under the authority of the Plant Protection Act, the Seeds Act, and statutory regulations. Certain plant species have been designated by the CFIA as noxious weeds in the Weed Seeds Order. Each province also produces its own list of prohibited weeds. In Alberta, for example, a new Weed Control Act was proclaimed in 2010 with two weed designations: "prohibited noxious" (46 species), which are banned across Alberta, and "noxious" (29 species), which can be restricted at the discretion of local authorities.
By country:
New Zealand: New Zealand has had a series of Acts of Parliament relating to noxious weeds: the Noxious Weeds Act 1908, the Noxious Weeds Act 1950, and the Noxious Plants Act 1978. The last was repealed by the Biosecurity Act 1993, which used words such as "pest", "organism" and "species", rather than "noxious". Consequently, the term "noxious weed" is no longer used in official publications in New Zealand. According to this Act, control of the majority of problem weeds, now called 'pest plants', is the responsibility of Regional Councils or, in a few cases, unitary authorities.
By country:
United Kingdom: The Weeds Act 1959 covers Great Britain. It is mainly relevant to farmers and other rural settings rather than allotment- or garden-scale growers. Five "injurious" weeds are listed; the word "injurious" means in this context harmful to agriculture, not liable to cause injury. All the species listed, apart from ragwort, are edible and appear in Richard Mabey's book Food for Free, and all are native plants. They are: spear thistle (Cirsium vulgare), creeping or field thistle (Cirsium arvense), curled dock (Rumex crispus), broad-leaved dock (Rumex obtusifolius), and common ragwort (Jacobaea vulgaris). The Department for Environment, Food and Rural Affairs (DEFRA) provides guidance for the removal of these weeds from infested land, much of it oriented towards the use of herbicides.
By country:
The Act does not place any automatic legal responsibility on landowners to control the weeds, or make growing them illegal, but landowners may be ordered to control them. Most common farmland weeds are not "injurious" within the meaning of the Weeds Act, and many such plant species have conservation and environmental value. The various UK government agencies responsible have a duty to try to achieve a reasonable balance among different interests, including agriculture, countryside conservation, and the general public.
By country:
Section 14 of the Wildlife and Countryside Act 1981 makes it an offence to plant or grow certain specified foreign invasive plants in the wild, listed in Schedule 9 of the Act, including giant hogweed and Japanese knotweed. Some local authorities have by-laws controlling these plants. There is no statutory requirement for landowners to remove these plants from their property.
By country:
Northern Ireland is covered by the Noxious Weeds (Northern Ireland) Order 1977. This mirrors the Great Britain legislation and covers the same five species, with the addition of two wild oats (Avena fatua and Avena ludoviciana).
United States: The federal government defines noxious weeds under the Federal Noxious Weed Act of 1974. Noxious weeds are also defined by the state governments in the United States.
**Aldolase C**
Aldolase C:
Aldolase C, fructose-bisphosphate (ALDOC, or ALDC), is an enzyme that, in humans, is encoded by the ALDOC gene on chromosome 17. This gene encodes a member of the class I fructose-bisphosphate aldolase gene family. Expressed specifically in the hippocampus and Purkinje cells of the brain, the encoded protein is a glycolytic enzyme that catalyzes the reversible aldol cleavage of fructose 1,6-bisphosphate and fructose 1-phosphate to dihydroxyacetone phosphate and either glyceraldehyde 3-phosphate or glyceraldehyde, respectively.
Structure:
ALDOC is one of the three aldolase isozymes (A, B, and C), encoded by three different genes. The amino acid sequence of ALDOC is highly similar to those of the other isozymes, sharing a 68% identity with ALDOB and 78% identity with ALDOA. In particular, the residues Asp33, Arg42, Lys107, Lys146, Glu187, Ser271, Arg303, and Lys229 are all conserved in the active sites of the three isozymes. This active site is located in the center of the homotetrameric αβ-barrel structure of these aldolases. However, several structural details set ALDOC apart. For instance, the Arg303 residue in ALDOC adopts an intermediate conformation between the liganded and unliganded structures observed in the other isozymes. Also, the C-terminal region between Glu332 and Lys71 forms a salt bridge with the barrel region that is absent in the A and B isoforms. Moreover, the electrostatic surface of ALDOC is more negatively charged, which may serve as an acidic binding site or as a docking site to accommodate the C-terminal conformations. Four ALDOC-specific residues (N90, V92, R96 and D100) may be key for ALDOC-specific functions.
Function:
ALDOC is a key enzyme in the fourth step of glycolysis, as well as in the reverse pathway, gluconeogenesis. It catalyzes the reversible conversion of fructose 1,6-bisphosphate to glyceraldehyde 3-phosphate (G3P), or glyceraldehyde, and dihydroxyacetone phosphate (DHAP) by aldol cleavage. As a result, it is a crucial player in ATP biosynthesis. As an aldolase, ALDOC putatively also contributes to other "moonlighting" functions, though its exact involvements remain unclear. For instance, it binds less tightly to the cytoskeleton than the other isozymes do, likely due to its more acidic pI. In addition, ALDOC participates in the stress-response pathway for lung epithelial cell function during hypoxia and in the resistance of cerebellar Purkinje cells against excitotoxic insult. ALDOC is ubiquitously expressed in most tissues, though it is predominantly expressed in brain, smooth muscle, and neuronal tissue. However, since the ALDOA isoform is co-expressed with ALDOC in the central nervous system (CNS), it is suggested that ALDOC contributes to CNS function outside of glycolysis. Moreover, its presence within other cell types, such as platelets and mast cells (MCs), may serve as a failsafe in case the other predominant aldolase isozymes become inactivated. Within cells, it localizes to the cytoplasm.
Clinical significance:
This aldolase has been associated with cancer. ALDOC is found to be upregulated in the brains of schizophrenia (SCZ) patients. Notably, while ALDOC is differentially expressed in the anterior cingulate cortex (ACC) of male SCZ patients, it displays no significant changes in female SCZ patients, indicating that different regulatory mechanisms may be involved in male versus female SCZ patients. It is likely that ALDOC is involved in SCZ through its role in glycolysis, which is a central biochemical pathway in SCZ. Furthermore, ALDOC is reported to undergo oxidation in brains affected by mild cognitive impairment (MCI) and Alzheimer's disease (AD). This oxidative modification inhibits ALDOC activity, causing the accumulation of fructose 1,6-bisphosphate and driving the reverse reaction, in the direction of gluconeogenesis rather than glycolysis, thus halting ATP production.
**Find first set**
Find first set:
In computer software and hardware, find first set (ffs) or find first one is a bit operation that, given an unsigned machine word, designates the index or position of the least significant bit set to one in the word counting from the least significant bit position. A nearly equivalent operation is count trailing zeros (ctz) or number of trailing zeros (ntz), which counts the number of zero bits following the least significant one bit. The complementary operation that finds the index or position of the most significant set bit is log base 2, so called because it computes the binary logarithm ⌊log2(x)⌋. This is closely related to count leading zeros (clz) or number of leading zeros (nlz), which counts the number of zero bits preceding the most significant one bit.
Find first set:
There are two common variants of find first set, the POSIX definition which starts indexing of bits at 1, herein labelled ffs, and the variant which starts indexing of bits at zero, which is equivalent to ctz and so will be called by that name.
Most modern CPU instruction set architectures provide one or more of these as hardware operators; software emulation is usually provided for any that aren't available, either as compiler intrinsics or in system libraries.
Examples:
Given the following 32-bit word:
0000 0000 0000 0000 1000 0000 0000 1000
The count trailing zeros operation would return 3, while the count leading zeros operation returns 16. The count leading zeros operation depends on the word size: if this 32-bit word were truncated to a 16-bit word, count leading zeros would return zero. The find first set operation would return 4, indicating the 4th position from the right. The log base 2 is 15.
Examples:
Similarly, given the following 32-bit word, the bitwise negation of the above word:
1111 1111 1111 1111 0111 1111 1111 0111
The count trailing ones operation would return 3, the count leading ones operation would return 16, and the find first zero operation ffz would return 4.
If the word is zero (no bits set), count leading zeros and count trailing zeros both return the number of bits in the word, while ffs returns zero. Both log base 2 and zero-based implementations of find first set generally return an undefined result for the zero word.
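As a concrete check of the example values above, a short C program can exercise these operations through the GCC/Clang builtins (using these particular builtins is an assumption; any compiler offering equivalents would do):

#include <stdio.h>

int main(void) {
    unsigned x = 0x00008008u;  /* 0000 0000 0000 0000 1000 0000 0000 1000 */

    printf("ctz  = %d\n", __builtin_ctz(x));      /* 3  trailing zeros   */
    printf("clz  = %d\n", __builtin_clz(x));      /* 16 leading zeros    */
    printf("ffs  = %d\n", __builtin_ffs((int)x)); /* 4  (1-based, POSIX) */
    printf("log2 = %d\n", 31 - __builtin_clz(x)); /* 15 = floor(log2(x)) */

    /* __builtin_ctz/clz are undefined for 0; __builtin_ffs(0) returns 0. */
    return 0;
}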
Hardware support:
Many architectures include instructions to rapidly perform find first set and/or related operations, listed below. The most common operation is count leading zeros (clz), likely because all other operations can be implemented efficiently in terms of it (see Properties and relations).
On some Alpha platforms CTLZ and CTTZ are emulated in software.
Tool and library support:
A number of compiler and library vendors supply compiler intrinsics or library functions to perform find first set and/or related operations, which are frequently implemented in terms of the hardware instructions above:
Properties and relations:
If bits are labeled starting at 1 (which is the convention used in this article), then count trailing zeros and find first set operations are related by ctz(x) = ffs(x) − 1 (except when the input is zero). If bits are labeled starting at 0, then count trailing zeros and find first set are exactly equivalent operations. Given w bits per word, the log2 is easily computed from the clz and vice versa by log2(x) = w − 1 − clz(x).
Properties and relations:
As demonstrated in the example above, the find first zero, count leading ones, and count trailing ones operations can be implemented by negating the input and using find first set, count leading zeros, and count trailing zeros. The reverse is also true.
On platforms with an efficient log2 operation such as M68000, ctz can be computed by ctz(x) = log2(x & −x), where & denotes bitwise AND and −x denotes the two's complement of x. The expression x & −x clears all but the least-significant 1 bit, so that the most- and least-significant 1 bit are the same.
Properties and relations:
On platforms with an efficient count leading zeros operation such as ARM and PowerPC, ffs can be computed by ffs(x) = w − clz(x & −x). Conversely, on machines without log2 or clz operators, clz can be computed using ctz, albeit inefficiently, as clz = w − ctz(2⌈log2(x)⌉), which depends on ctz returning w for the zero input. On platforms with an efficient Hamming weight (population count) operation such as SPARC's POPC or Blackfin's ONES, there is:
ctz(x) = popcount((x & −x) − 1), or
ctz(x) = popcount(~(x | −x)),
ffs(x) = popcount(x ^ ~−x),
clz = 32 − popcount(2⌈log2(x)⌉ − 1),
where ^ denotes bitwise exclusive-OR, | denotes bitwise OR and ~ denotes bitwise negation.
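A small C check of these popcount identities, again leaning on the GCC/Clang builtins as stand-ins for the hardware operators (an assumption); the clz identity is implemented in the usual bit-smearing form, i.e. the input with its highest set bit smeared downward:

#include <assert.h>
#include <stdio.h>

int main(void) {
    /* zero is excluded: the ctz/clz builtins are undefined for it */
    unsigned samples[] = { 1u, 0x00008008u, 0x80000000u, 0xFFFFFFFFu, 1234567u };

    for (int i = 0; i < 5; i++) {
        unsigned x = samples[i];

        /* smear the highest set bit downward, yielding a mask of
           floor(log2(x)) + 1 one-bits */
        unsigned m = x;
        m |= m >> 1;  m |= m >> 2;  m |= m >> 4;
        m |= m >> 8;  m |= m >> 16;

        assert(__builtin_ctz(x) == __builtin_popcount((x & -x) - 1));
        assert(__builtin_ctz(x) == __builtin_popcount(~(x | -x)));
        assert(__builtin_ffs((int)x) == __builtin_popcount(x ^ ~-x));
        assert(__builtin_clz(x) == 32 - __builtin_popcount(m));
    }
    puts("all identities hold");
    return 0;
}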
Properties and relations:
The inverse problem (given i, produce an x such that ctz(x) = i) can be computed with a left-shift (1 << i).
Properties and relations:
Find first set and related operations can be extended to arbitrarily large bit arrays in a straightforward manner by starting at one end and proceeding until a word that is not all-zero (for ffs, ctz, clz) or not all-one (for ffz, clo, cto) is encountered. A tree data structure that recursively uses bitmaps to track which words are nonzero can accelerate this.
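A minimal sketch of that word-by-word scan in C, assuming 32-bit words, bit 0 as the least significant bit of word 0, and the __builtin_ctz intrinsic:

#include <stddef.h>

/* Return the index of the first set bit in an array of 32-bit words,
   or -1 if every word is zero. */
long bitarray_ffs(const unsigned *words, size_t nwords) {
    for (size_t i = 0; i < nwords; i++)
        if (words[i] != 0)  /* first non-zero word ends the scan */
            return (long)(i * 32) + __builtin_ctz(words[i]);
    return -1;
}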
Software emulation:
Most CPUs dating from the late 1980s onward have bit operators for ffs or equivalent, but a few modern ones like some of the ARM-Mx series do not. In lieu of hardware operators for ffs, clz and ctz, software can emulate them with shifts, integer arithmetic and bitwise operators. There are several approaches depending on architecture of the CPU and to a lesser extent, the programming language semantics and compiler code generation quality. The approaches may be loosely described as linear search, binary search, search+table lookup, de Bruijn multiplication, floating point conversion/exponent extract, and bit operator (branchless) methods. There are tradeoffs between execution time and storage space as well as portability and efficiency.
Software emulation:
Software emulations are usually deterministic. They return a defined result for all input values; in particular, the result for an input of all zero bits is usually 0 for ffs, and the bit length of the operand for the other operations.
If one has a hardware clz or equivalent, ctz can be efficiently computed with bit operations, but the converse is not true: clz is not efficient to compute in the absence of a hardware operator.
Software emulation:
2⌈log2(x)⌉: The function 2⌈log2(x)⌉ (round up to the nearest power of two) is not efficient to compute using shifts and bitwise ORs, as in this 32-bit example, and is even more inefficient with a 64-bit or 128-bit operand:

function pow2(x):
    if x = 0 return invalid  // invalid is implementation defined (not in [0,63])
    x ← x - 1
    for each y in {1, 2, 4, 8, 16}:
        x ← x | (x >> y)
    return x + 1

FFS: Since ffs = ctz + 1 (POSIX) or ffs = ctz (other implementations), the applicable algorithms for ctz may be used, with a possible final step of adding 1 to the result, and returning 0 instead of the operand length for an input of all zero bits.
Software emulation:
CTZ: The canonical algorithm is a loop counting zeros starting at the LSB until a 1-bit is encountered:

function ctz1 (x)
    if x = 0 return w
    t ← 1
    r ← 0
    while (x & t) = 0
        t ← t << 1
        r ← r + 1
    return r

This algorithm executes in O(n) time and operations, and is impractical in practice due to its large number of conditional branches.
Software emulation:
A lookup table can eliminate most branches:

table[0..2^n−1] = ctz(i) for i in 0..2^n−1

function ctz2 (x)
    if x = 0 return w
    r ← 0
    loop
        if (x & (2^n−1)) ≠ 0
            return r + table[x & (2^n−1)]
        x ← x >> n
        r ← r + n

The parameter n is fixed (typically 8) and represents a time-space tradeoff. The loop may also be fully unrolled. But as a linear lookup, this approach is still O(n) in the number of bits in the operand.
Software emulation:
A binary search implementation takes a logarithmic number of operations and branches, as in the 32-bit version below. This algorithm can be assisted by a table as well, replacing the bottom three "if" statements with a 256-entry lookup table using the first non-zero byte encountered as an index.
Software emulation:
function ctz3 (x)
    if x = 0 return 32
    n ← 0
    if (x & 0x0000FFFF) = 0: n ← n + 16, x ← x >> 16
    if (x & 0x000000FF) = 0: n ← n + 8,  x ← x >> 8
    if (x & 0x0000000F) = 0: n ← n + 4,  x ← x >> 4
    if (x & 0x00000003) = 0: n ← n + 2,  x ← x >> 2
    if (x & 0x00000001) = 0: n ← n + 1
    return n

If the hardware has a clz operator, the most efficient approach to computing ctz is:

function ctz4 (x)
    x &= -x
    return w - (clz(x) + 1)

An algorithm for 32-bit ctz uses de Bruijn sequences to construct a minimal perfect hash function that eliminates all branches.
Software emulation:
This algorithm assumes that the result of the multiplication is truncated to 32 bits.
Software emulation:
for i from 0 to 31:
    table[(0x077CB531 * (1 << i)) >> 27] ← i  // table[0..31] initialized

function ctz5 (x)
    return table[((x & -x) * 0x077CB531) >> 27]

The expression (x & -x) again isolates the least-significant 1 bit. There are then only 32 possible words, which the unsigned multiplication and shift hash to the correct position in the table. (This algorithm does not handle the zero input.)
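A runnable C rendering of ctz5, building the 32-entry table at startup exactly as the pseudocode above does (a sketch; the de Bruijn constant 0x077CB531 is taken from the text):

#include <stdio.h>

static int table[32];

static void init_table(void) {
    /* hash each single-bit word to its table slot, as in the pseudocode */
    for (int i = 0; i < 32; i++)
        table[(0x077CB531u * (1u << i)) >> 27] = i;
}

static int ctz5(unsigned x) {  /* does not handle x == 0 */
    return table[((x & -x) * 0x077CB531u) >> 27];
}

int main(void) {
    init_table();
    printf("%d\n", ctz5(0x00008008u));  /* prints 3 */
    return 0;
}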
Software emulation:
CLZ: The canonical algorithm examines one bit at a time starting from the MSB until a non-zero bit is found, as shown in this example. It executes in O(n) time, where n is the bit length of the operand, and is not a practical algorithm for general use.

function clz1 (x)
    if x = 0 return w
    t ← 1 << (w - 1)
    r ← 0
    while (x & t) = 0
        t ← t >> 1
        r ← r + 1
    return r

An improvement on the previous looping approach examines eight bits at a time, then uses a 256-entry (2^8) lookup table for the first non-zero byte. This approach, however, is still O(n) in execution time.
Software emulation:
function clz2 (x)
    if x = 0 return w
    t ← 0xff << (w - 8)
    r ← 0
    while (x & t) = 0
        t ← t >> 8
        r ← r + 8
    return r + table[x >> (w - 8 - r)]

Binary search can reduce execution time to O(log2 n):

function clz3 (x)
    if x = 0 return 32
    n ← 0
    if (x & 0xFFFF0000) = 0: n ← n + 16, x ← x << 16
    if (x & 0xFF000000) = 0: n ← n + 8,  x ← x << 8
    if (x & 0xF0000000) = 0: n ← n + 4,  x ← x << 4
    if (x & 0xC0000000) = 0: n ← n + 2,  x ← x << 2
    if (x & 0x80000000) = 0: n ← n + 1
    return n

The fastest portable approaches to simulate clz are a combination of binary search and table lookup: an 8-bit table lookup (2^8 = 256 one-byte entries) can replace the bottom 3 branches in the binary search. 64-bit operands require an additional branch. A larger-width lookup can be used, but the maximum practical table size is limited by the size of the L1 data cache on modern processors, which is 32 KB for many. Saving a branch is more than offset by the latency of an L1 cache miss.
Software emulation:
An algorithm similar to de Bruijn multiplication for CTZ works for CLZ, but rather than isolating the most-significant bit, it rounds up to the nearest integer of the form 2^n−1 using shifts and bitwise ORs:

table[0..31] = {0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30, 8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31}

function clz4 (x)
    for each y in {1, 2, 4, 8, 16}:
        x ← x | (x >> y)
    return table[((x * 0x07C4ACDD) >> 27) % 32]

For processors with deep pipelines, like Prescott and later Intel processors, it may be faster to replace branches by bitwise AND and OR operators (even though many more instructions are required) to avoid pipeline flushes for mispredicted branches, since these types of branches are inherently unpredictable:

function clz5 (x)
    r = (x > 0xFFFF) << 4; x >>= r;
    q = (x > 0xFF  ) << 3; x >>= q; r |= q;
    q = (x > 0xF   ) << 2; x >>= q; r |= q;
    q = (x > 0x3   ) << 1; x >>= q; r |= q;
    r |= (x >> 1);
    return r;

On platforms that provide hardware conversion of integers to floating point, the exponent field can be extracted and subtracted from a constant to compute the count of leading zeros. Corrections are needed to account for rounding errors. Floating-point conversion can have substantial latency. This method is highly non-portable and not usually recommended.
Applications:
The count leading zeros (clz) operation can be used to efficiently implement normalization, which encodes an integer as m × 2^e, where m has its most significant bit in a known position (such as the highest position). This can in turn be used to implement Newton–Raphson division, perform integer-to-floating-point conversion in software, and other applications. Count leading zeros can be used to compute the 32-bit predicate "x = y" (one if true, zero if false) via the identity clz(x − y) >> 5, where ">>" is unsigned right shift. It can be used to perform more sophisticated bit operations like finding the first string of n 1 bits. The expression 1 << (16 − clz(x − 1)/2) is an effective initial guess for computing the square root of a 32-bit integer using Newton's method. CLZ can efficiently implement null suppression, a fast data compression technique that encodes an integer as the number of leading zero bytes together with the nonzero bytes. It can also efficiently generate exponentially distributed integers by taking the clz of uniformly random integers. The log base 2 can be used to anticipate whether a multiplication will overflow, since ⌈log2(xy)⌉ ≤ ⌈log2(x)⌉ + ⌈log2(y)⌉. Count leading zeros and count trailing zeros can be used together to implement Gosper's loop-detection algorithm, which can find the period of a function of finite range using limited resources. The binary GCD algorithm spends many cycles removing trailing zeros; this can be replaced by a count trailing zeros (ctz) followed by a shift. A similar loop appears in computations of the hailstone sequence.
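As an illustration of the square-root guess above, a short C sketch (assuming __builtin_clz as the clz operator and x ≥ 1) seeds Newton's method with 1 << (16 − clz(x − 1)/2):

/* Integer square root via Newton's method, seeded as suggested above. */
unsigned isqrt(unsigned x) {
    if (x < 2) return x;
    unsigned g = 1u << (16 - __builtin_clz(x - 1) / 2);  /* initial guess */
    for (;;) {
        unsigned next = (g + x / g) / 2;  /* Newton step for g*g = x */
        if (next >= g) return g;          /* stop once no longer shrinking */
        g = next;
    }
}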
Applications:
A bit array can be used to implement a priority queue. In this context, find first set (ffs) is useful in implementing the "pop" or "pull highest priority element" operation efficiently; the Linux kernel real-time scheduler internally uses sched_find_first_bit() for this purpose. The count trailing zeros operation gives a simple optimal solution to the Tower of Hanoi problem: the disks are numbered from zero, and at move k, disk number ctz(k) is moved the minimum possible distance to the right (circling back around to the left as needed). It can also generate a Gray code by taking an arbitrary word and flipping bit ctz(k) at step k.
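A minimal sketch of such a bitmap priority queue in C, assuming priority 0 is the highest and using __builtin_ctz as the find-first-set primitive:

#include <stdio.h>

/* A 32-priority ready queue in a single word: bit i set means
   at least one task of priority i is ready (0 = highest here). */
static unsigned ready;

static void push(int prio) { ready |= 1u << prio; }

static int pop(void) {                /* pull highest-priority entry */
    if (ready == 0) return -1;        /* queue empty */
    int prio = __builtin_ctz(ready);  /* find first set bit */
    ready &= ready - 1;               /* clear that bit */
    return prio;
}

int main(void) {
    push(17); push(3); push(29);
    printf("%d %d %d\n", pop(), pop(), pop());  /* 3 17 29 */
    return 0;
}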
**Traffic exchange**
Traffic exchange:
A traffic exchange is a type of website which provides a service for webmasters in exchange for traffic. It is similar to the autosurf concept with the exception that traffic exchanges usually use a manual rotation.
Concept:
A traffic exchange website receives website submissions from webmasters that join traffic exchange networks. The person who submitted the website then has to browse other member sites on the exchange program to earn credits, which enable their sites to be viewed by other members through the surf system. This increases the number of visitors to all the sites involved.
Exchanges enforce a certain credit ratio, which specifies the number of websites the surfer must view in order to receive one hit through the program for their promoted website. Many sites offer the ability to upgrade one's membership level in exchange for a credit ratio closer to 1:1.
Concept:
As the viewers are all website owners or affiliates, it is possible that some might find certain member sites interesting and thus make note of them on their own sites, sending more traffic their way. Most traffic programs also impose a time limit when members are browsing, ranging from 10 seconds to 60 seconds. Some incorporate the use of captcha to ensure user interaction.
Concept:
Almost all traffic exchange programs are free, although many of them offer special features to paid members and offer credits for purchase. Almost all traffic exchange programs encourage users to build their own referral networks, which in turn increases the referrers' number of credits.
The traffic generated in a traffic exchange can be leveraged by using a downline builder to assist the user in building a referral network in the many different traffic exchanges.
In practice, traffic exchange programs are generally used by small business owners or marketers who either want free advertising or use the exchange programs for low-budget advertisement campaigns.
History:
Traffic exchanges date back to the beginning of the web and were primarily used by organizations to share sites between employees. Viewers would rate pages in a similar fashion to the now-popular social bookmarking phenomenon. At a time when interesting websites were hard to find, a traffic exchange proved an invaluable tool for an organization new to the web.
Circa 1994, traffic exchanges moved from corporate intranets to the web. In an effort to build communities, the concept of rating pages was replaced with rewarding members for viewing them.
It was 1996 before traffic exchanges began to charge for traffic, and around this time the concept changed from a tool for locating interesting sites to a commercial one. This change in direction resulted in increased popularity at the expense of the content, which is now almost exclusively commercial.
Traffic Exchange vs. Bounce Rate:
Most people use traffic exchange programs to increase their site visit rate. Traffic exchange programs offer both auto-surf and manual-surf options, with timers of 3 to 60 seconds. An 'autosurf' program requires no human intervention to rotate the sites in the database and is used primarily to inflate the total number of site hits. This practice is controversial, as it may skew measures of website popularity. People's main reason for joining a traffic exchange program is to promote products and services to like-minded marketers. A factor that may negatively influence ranking is the bounce rate: the percentage of visitors who leave a site after viewing only a single page. If a website or blog has a high bounce rate, it will be taken to indicate that people are not interested in the content. So while traffic exchange sites increase the site visit rate, they also increase the bounce rate. A higher bounce rate generally harms SEO performance, so using a traffic exchange comes with risks as well.
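For concreteness, the bounce-rate arithmetic as a one-line C sketch (the names are illustrative, not drawn from any specific analytics API):

```c
/* Bounce rate: share of sessions that viewed only a single page,
 * expressed as a percentage. */
double bounce_rate_percent(unsigned single_page_sessions, unsigned total_sessions) {
    if (total_sessions == 0)
        return 0.0;                 /* avoid division by zero */
    return 100.0 * (double)single_page_sessions / (double)total_sessions;
}
```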
AdSense on Traffic Exchanges:
Google disallows using AdSense on Traffic Exchanges. Users who wish to advertise their websites on a traffic exchange but also have AdSense ads should create separate pages for advertising in traffic exchanges that do not have AdSense ads. Sending traffic sourced from a traffic exchange to a website monetized using AdSense or other ad networks like Adfly is likely to get the user banned from the network. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ATPase, Na+/K+ transporting, alpha 1**
ATPase, Na+/K+ transporting, alpha 1:
Sodium/potassium-transporting ATPase subunit alpha-1 is an enzyme that in humans is encoded by the ATP1A1 gene. The protein encoded by this gene belongs to the family of P-type cation transport ATPases, and to the subfamily of Na+/K+-ATPases. Na+/K+-ATPase is an integral membrane protein responsible for establishing and maintaining the electrochemical gradients of Na and K ions across the plasma membrane. These gradients are essential for osmoregulation, for sodium-coupled transport of a variety of organic and inorganic molecules, and for electrical excitability of nerve and muscle. This enzyme is composed of two subunits, a large catalytic subunit (alpha) and a smaller glycoprotein subunit (beta). The catalytic subunit of Na+/K+-ATPase is encoded by multiple genes; this gene encodes the alpha 1 subunit. Alternatively spliced transcript variants encoding different isoforms have been identified. In melanocytic cells, ATP1A1 gene expression may be regulated by MITF.
Clinical relevance:
Mutations in this gene have been associated with aldosterone-producing adenomas and secondary hypertension. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Temoporfin**
Temoporfin:
Temoporfin (INN) is a photosensitizer (based on chlorin) used in photodynamic therapy for the treatment of squamous cell carcinoma of the head and neck. It is marketed in the European Union under the brand name Foscan. The U.S. Food and Drug Administration (FDA) declined to approve Foscan in 2000; the EU approved its use in June 2001. Good results were obtained in 21 of 35 patients treated in Germany. It is photoactivated at 652 nm, i.e. by red light.
Temoporfin:
Patients can remain photosensitive for several weeks after treatment. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Thyrotroph Thyroid Hormone Sensitivity Index**
Thyrotroph Thyroid Hormone Sensitivity Index:
The Thyrotroph Thyroid Hormone Sensitivity Index (abbreviated TTSI, also referred to as Thyrotroph T4 Resistance Index or TT4RI) is a calculated structure parameter of thyroid homeostasis. It was originally developed to deliver a method for fast screening for resistance to thyroid hormone. Today it is also used to get an estimate for the set point of thyroid homeostasis, especially to assess dynamic thyrotropic adaptation of the anterior pituitary gland, including non-thyroidal illnesses.
How to determine TTSI:
Universal form: The TTSI can be calculated as TTSI = (100 · TSH · FT4) / lu from equilibrium serum or plasma concentrations of thyrotropin (TSH) and free T4 (FT4), where lu is the assay-specific upper limit of the reference interval for FT4 concentration.
Short form: Some publications use a simpler form of this equation that doesn't correct for the reference range of free T4; it is calculated as TTSI = 100 · TSH · FT4. The disadvantage of this uncorrected version is that its numeric results are highly dependent on the assays used and their units of measurement.
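A minimal sketch of the calculation, with illustrative values only (TSH in mIU/L; FT4 and its upper reference limit lu in the same unit, e.g. pmol/L):

```c
#include <stdio.h>

/* Universal-form TTSI as defined above: 100 * TSH * FT4 / lu. */
double ttsi(double tsh, double ft4, double ft4_upper_limit) {
    return 100.0 * tsh * ft4 / ft4_upper_limit;
}

int main(void) {
    /* Illustrative inputs: TSH 2.0 mIU/L, FT4 15 pmol/L, assay upper
     * reference limit 22 pmol/L. */
    printf("TTSI = %.1f\n", ttsi(2.0, 15.0, 22.0));   /* prints ~136.4 */
    return 0;
}
```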
Biochemical associations:
In case of resistance to thyroid hormone, the magnitude of TTSI depends on which nucleotide in the THRB gene is mutated, but also on the genotype of coactivators. A systematic investigation in mice demonstrated a strong association of TT4RI to the genotypes of THRB and the steroid receptor coactivator (SRC-1) gene.
Clinical significance:
The TTSI is used as a screening parameter for resistance to thyroid hormone due to mutations in the THRB gene, where it is elevated. It is also beneficial for assessing the severity of already confirmed thyroid hormone resistance, even on replacement therapy with L-T4, and for monitoring the pituitary response to substitution therapy with thyromimetics (e.g. TRIAC) in RTH Beta. In autoimmune thyroiditis the TTSI is moderately elevated. A large cohort study demonstrated TTSI to be strongly influenced by genetic factors. A variant of the TTSI that is not corrected for the upper limit of the FT4 reference range was shown to be significantly increased in offspring of long-lived siblings compared to their partners. Conversely, an elevated set point of thyroid homeostasis, as quantified by the TT4RI, is associated with a higher prevalence of metabolic syndrome and several of the harmonized criteria of the International Diabetes Federation, including triglyceride and HDL concentrations and blood pressure. In certain phenotypes of non-thyroidal illness syndrome, especially in cases with concomitant sepsis, the TTSI is reduced. This reflects a reduced set point of thyroid homeostasis, as also experimentally predicted in rodent models of inflammation and sepsis. Negative correlation of the TTSI with the urinary excretion of certain phthalates suggests that endocrine disruptors may affect the central set point of thyroid homeostasis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CalorieKing**
CalorieKing:
CalorieKing is an online weight loss club and software developer with a program centred on healthy eating and exercise ("calories in, calories out"). The company offers products and services tailored specifically for the United States, British, and Australian markets. As well as offering help for people who wish to lose weight, there are also programs and support for those who want to maintain their current weight, or to gain weight. The web sites' resources also include forums, and an extensive library of recipes and health and weight loss related articles contributed by company staff as well as other organisations and contributors.
CalorieKing:
In addition to its web site, the company also produces personal computer software and several popular books. Many of its products are based on the CalorieKing food database, which claims to contain over 100,000 foods in the American version and 20,000 foods in the Australian version.
History:
CalorieKing was founded as Family Health Publications in 1973 in Australia by Allan Borushek, biochemist and clinical dietitian, with the publication of the first Australian Calorie, Fat, & Carb Counter. In 1988, the book was published in the United States, selling more than 10,000,000 copies. 1998 saw the launch of the company's web site, CalorieKing.com; its Australian sister-site DietClub.com.au was launched in 1997 and later rebranded as CalorieKing.com.au.
History:
In July 2007, the company announced an alliance with the Joslin Diabetes Center to promote type 2 diabetes awareness, prevention, and management, and to build a culturally specific Latino food database. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Electrification and controls technology**
Electrification and controls technology:
Electrification and controls technology comprises the devices that control, service, and enhance the productivity of industrial material handling. Controls interface with hardware such as receivers, cranes, and hoists through a network in order to ensure that equipment operates safely and effectively. Almost every business, including the food, chemical, and automobile industries, uses controls. Some examples of these devices are: remote controls, festooning, drives, motors, conductor bars, anti-collision devices, weighing devices, brakes, resistors, and cabling.
Industry definitions:
Conductor bar: Insulated energized rails that safely provide power, control and data to moving equipment from a fixed source, much like electric rails on a model train.
Festoon system: A cable management system of rolling trolleys that properly support power, control and data cables to moving equipment from a fixed source.
Cable reel: A cable management device designed to spool and store electrical power, control or data cable as the equipment moves along its path of motion.
Industry definitions:
Variable-frequency drive: A type of static controller that safely drives an electric motor by varying the frequency and voltage the motor is supplied. This device minimizes the wear and tear of the mechanical system while allowing precise control and maximizing operator safety.
Radio remote control: Allows an operator to control different types of moving equipment and cranes while giving the operator the best vantage point on the load or operation and a physically safe working position.
Load brake: A device used to safely stop linear or rotating motion of equipment through the use of power or friction. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Greek gift sacrifice**
Greek gift sacrifice:
In chess, the Greek gift sacrifice, also known as the classical bishop sacrifice, is a typical sacrifice of a bishop by White playing Bxh7+ or Black playing Bxh2+ at some point after the opponent has castled kingside, with the goal generally being to attack and checkmate the opponent's king, or to regain material. It is important to consider the opponent's defenses.
Greek gift sacrifice:
Greek gift sacrifices, or the threat of them, occur relatively frequently in play, especially at amateur level. One of the most famous examples of the sacrifice is found in the game Edgard Colle–John O'Hanlon, Nice 1930. Less commonly, a Greek gift sacrifice may be the prelude to a double bishop sacrifice, as seen in Lasker–Bauer, Amsterdam 1889.
Requirements:
The Greek gift sacrifice usually has several prerequisites in order to succeed. In general, the attack will succeed if: the attacker has more control over the g5-square than the defender; the attacker's knight can move to g5 to deliver a check; the attacker's queen can join the attack, often on the h-file; the defender cannot move a piece to safely defend square h7 (or h2); the defender cannot easily reorganize his defense. If there is a defending bishop on e7 (or e2), a pawn on h4 (or h5) is necessary; otherwise, it can be useful.
Illustration:
The position after the moves 1.e4 e6 2.d4 d5 3.Nc3 Nf6 4.e5 Nfd7 5.Nf3 Bb4 6.Bd3 0-0?? (diagram) is a simple case where the Greek gift sacrifice works. White can play 7.Bxh7+! Kxh7 8.Ng5+, forcing Black to give up the queen to prevent mate:
8...Kh8 9.Qh5+ Kg8 10.Qh7#
8...Kg8 9.Qh5, threatening 10.Qh7#, to which the only feasible responses are 9...Qxg5 10.Bxg5, winning the queen, and 9...Re8 10.Qxf7+ Kh8 11.Qh5+ Kg8 12.Qh7+ Kf8 13.Qh8+ Ke7 14.Qxg7#
8...Kh6 9.Nxf7+, winning the queen.
Illustration:
8...Kg6 9.h4 and there is no satisfactory way to meet the threat of 10.h5+ Kh6 (10...Kf5 11.g4#) 11.Nxf7+, winning the queen.
Etymology:
The etymology of the phrase "Greek gift" in this context is not entirely clear. The obvious explanation is that it alludes to the Trojan Horse, and specifically to Laocoön's famous Timeo Danaos et dona ferentes ("I fear the Greeks even when they bring gifts", Virgil's Aeneid II.49). The Oxford Companion to Chess, however, suggests that one explanation is that the sacrifice often occurred in Gioachino Greco's games. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Interprocedural optimization**
Interprocedural optimization:
Interprocedural optimization (IPO) is a collection of compiler techniques used in computer programming to improve performance in programs containing many frequently used functions of small or medium length. IPO differs from other compiler optimizations by analyzing the entire program as opposed to a single function or block of code.
IPO seeks to reduce or eliminate duplicate calculations and inefficient use of memory and to simplify iterative sequences such as loops. If a call to another routine occurs within a loop, IPO analysis may determine that it is best to inline that routine. Additionally, IPO may re-order the routines for better memory layout and locality.
Interprocedural optimization:
IPO may also include typical compiler optimizations applied on a whole-program level, for example dead code elimination (DCE), which removes code that is never executed. IPO also tries to ensure better use of constants. Modern compilers offer IPO as an option at compile-time. The actual IPO process may occur at any step between the human-readable source code and producing a finished executable binary program.
Interprocedural optimization:
For languages that compile on a file-by-file basis, effective IPO across translation units (module files) requires knowledge of the "entry points" of the program so that a whole program optimization (WPO) can be run. In many cases, this is implemented as a link-time optimization (LTO) pass, because the whole program is visible to the linker.
Analysis:
The objective of any optimization for speed is to have the program run as swiftly as possible; the problem is that it is not possible for a compiler to correctly analyze a program and determine what it will do, much less what the programmer intended for it to do. By contrast, human programmers start at the other end with a purpose, and attempt to produce a program that will achieve it, preferably without expending a lot of thought in the process.
Analysis:
For various reasons, including readability, programs are frequently broken up into a number of procedures that handle a few general cases. However, the generality of each procedure may result in wasted effort in specific usages. Interprocedural optimization represents an attempt at reducing this waste.
Analysis:
Suppose there is a procedure that evaluates F(x), and that F is a pure function, and the code requests the result of F(6) and then later F(6) again. This second evaluation is almost certainly unnecessary: the result could instead have been saved and referred to later. This simple optimization is foiled the moment that the implementation of F(x) becomes impure; that is, its execution involves reference to parameters other than the explicit argument 6 that have been changed between the invocations, or side effects such as printing some message to a log, counting the number of evaluations, accumulating the CPU time consumed, preparing internal tables so that subsequent invocations for related parameters will be facilitated, and so forth. Losing these side effects via non-evaluation the second time may or may not be acceptable.
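A small C sketch of the redundancy just described; the function body and the __attribute__((const)) purity hint (a GCC/Clang extension) are illustrative:

```c
/* F is pure: same argument, same result, no side effects. Declaring it
 * __attribute__((const)) tells GCC/Clang this, licensing the compiler
 * to reuse the first result instead of calling F a second time. */
static int F(int x) __attribute__((const));
static int F(int x) { return x * x + 1; }

int caller(void) {
    int first  = F(6);
    int second = F(6);   /* redundant: may be replaced by second = first */
    return first + second;
}
```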
Analysis:
More generally, aside from optimization, the second reason to use procedures is to avoid duplication of code that would produce the same results, or almost the same results, each time the procedure is performed. A general approach to optimization would therefore be to reverse this: some or all invocations of a certain procedure are replaced by the respective code, with the parameters appropriately substituted. The compiler will then try to optimize the result.
WPO and LTO:
Whole program optimization (WPO) is the compiler optimization of a program using information about all the modules in the program. Normally, optimizations are performed on a per-module ("compiland") basis; but this approach, while easier to write and test and less demanding of resources during the compilation itself, does not allow certainty about the safety of a number of optimizations, such as aggressive inlining, and thus cannot perform them even when they would turn out to be efficiency gains that do not change the semantics of the emitted object code. Link-time optimization (LTO) is a type of program optimization performed by a compiler on a program at link time. Link-time optimization is relevant in programming languages that compile programs on a file-by-file basis and then link those files together (such as C and Fortran), rather than all at once (such as Java's just-in-time compilation (JIT)).
WPO and LTO:
Once all files have been compiled separately into object files, traditionally, a compiler links (merges) the object files into a single file, the executable. However, in LTO as implemented by the GNU Compiler Collection (GCC) and LLVM, the compiler is able to dump its intermediate representation (IR), i.e. GIMPLE bytecode or LLVM bitcode, respectively, so that all the different compilation units that will go to make up a single executable can be optimized as a single module when the link finally happens. This expands the scope of interprocedural optimizations to encompass the whole program (or, rather, everything that is visible at link time). With link-time optimization, the compiler can apply various forms of interprocedural optimization to the whole program, allowing for deeper analysis, more optimization, and ultimately better program performance.
WPO and LTO:
In practice, LTO does not always optimize the entire program: library functions, especially dynamically linked shared objects, are intentionally kept out to avoid excessive duplication and to allow for updating. Static linking naturally lends itself to LTO, but it only works with library archives that contain IR objects as opposed to machine-code-only object files. Due to performance concerns, not even the entire unit is always directly used: a program can be partitioned in a divide-and-conquer style of LTO, such as GCC's WHOPR. And of course, when the program being built is itself a library, the optimization keeps every externally available (exported) symbol, without trying too hard to remove them as part of DCE. A much more limited form of WPO is still possible without LTO, as exemplified by GCC's -fwhole-program switch. This mode makes GCC assume that the module being compiled contains the entry point of the entire program, so that every other function in it is not externally used and can be safely optimized away. Since it only applies to a single module, it cannot truly encompass the whole program. It can be combined with LTO in the one-big-module sense, which is useful when the linker is not communicating back to GCC about what entry points or symbols are used externally.
Example:
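The program listing this section discusses did not survive extraction; the following is a hypothetical C sketch of the kind of code the discussion assumes: a global b, a small procedure Silly taking two parameters, and a few calls whose observable effect depends entirely on the parameter-passing convention. All details beyond the names Silly, a, b and x mentioned in the text are illustrative.

```c
#include <stdio.h>

int b = 5;                       /* global read by Silly */

/* By-reference variant: pointers stand in for "passing the machine
 * address of the parameters". A by-value variant would take plain ints
 * and have no effect on the caller's variables. */
void Silly(int *a, int *x) {
    if (*x > 0) *a = *x + b; else *a = -b;
}

int main(void) {
    int a = 0, x = 7;
    Silly(&a, &x);  printf("%d\n", a);  /* writes a; reads x and global b */
    Silly(&x, &a);  printf("%d\n", x);  /* parameter roles swapped        */
    Silly(&b, &b);  printf("%d\n", b);  /* the aliased Silly(b,b) case    */
    return 0;
}
```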
If the parameters to Silly are passed by value, the actions of the procedure have no effect on the original variables, and since Silly does nothing to its environment (read from a file, write to a file, modify global variables such as b, etc.) its code plus all invocations may be optimized away entirely, leaving the value of a undefined (which doesn't matter) so that just the write statements remain, simply printing constant values.
Example:
If instead the parameters are passed by reference, then action on them within Silly does indeed affect the originals. This is usually done by passing the machine address of the parameters to the procedure, so that the procedure's adjustments are made to the original storage area. Thus in the case of pass by reference, procedure Silly does have an effect. Suppose that its invocations are expanded in place, with parameters identified by address: the code then amounts to the body of Silly substituted at each call site. The compiler could then, in this rather small example, follow the constants through the logic (such as it is) and find that the predicates of the if-statements are constant, and so...
Example:
And since the assignments to a, b and x deliver nothing to the outside world (they do not appear in output statements, nor as input to subsequent calculations whose results in turn lead to output, else those would also be needless), there is no point in this code either, and so the result is, once again, just the write statements printing constant values. A variant method for passing parameters that appear to be "by reference" is copy-in, copy-out, whereby the procedure works on a local copy of the parameters whose values are copied back to the originals on exit from the procedure. If the procedure has access to the same parameter but in different ways, as in invocations such as Silly(a,a) or Silly(a,b), discrepancies can arise. So, if the parameters were passed by copy-in, copy-out in left-to-right order, then Silly(b,b) would expand into code that copies b into both local parameters, runs the procedure body, and then copies each local back to b in turn. In this case, copying the value of p1 (which has been changed) to b is pointless, because it is immediately overwritten by the value of p2, which has not been modified within the procedure from its original value of b; the third statement therefore merely restores b's original value. Such differences in behavior are likely to cause puzzlement, exacerbated by questions as to the order in which the parameters are copied: will it be left to right on exit as well as on entry? These details are probably not carefully explained in the compiler manual, and if they are, they will likely be passed over as being not relevant to the immediate task and long forgotten by the time a problem arises. If (as is likely) temporary values are provided via a stack storage scheme, then it is likely that the copy-back process will be in the reverse order to the copy-in, which in this example would mean that p1 would be the last value returned to b instead.
Example:
The process of expanding a procedure in-line should not be regarded as a variant of textual replacement (as in macro expansions) because syntax errors may arise as when parameters are modified and the particular invocation uses constants as parameters. Because it is important to be sure that any constants supplied as parameters will not have their value changed (constants can be held in memory just as variables are) lest subsequent usages of that constant (made via reference to its memory location) go awry, a common technique is for the compiler to generate code copying the constant's value into a temporary variable whose address is passed to the procedure, and if its value is modified, no matter; it is never copied back to the location of the constant.
Example:
Put another way, a carefully written test program can report on whether parameters are passed by value or reference, and if used, what sort of copy-in and copy-out scheme. However, variation is endless: simple parameters might be passed by copy whereas large aggregates such as arrays might be passed by reference; simple constants such as zero might be generated by special machine codes (such as Clear, or LoadZ) while more complex constants might be stored in memory tagged as read-only with any attempt at modifying it resulting in immediate program termination, etc.
Example:
This example is extremely simple, although complications are already apparent. More likely it will be a case of many procedures, having a variety of deducible or programmer-declared properties that may enable the compiler's optimizations to find some advantage. Any parameter to a procedure might be read only, be written to, be both read and written to, or be ignored altogether, giving rise to opportunities such as constants not needing protection via temporary variables; but what happens in any given invocation may well depend on a complex web of considerations. Other procedures, especially function-like procedures, will have certain behaviours that in specific invocations may enable some work to be avoided: for instance, the Gamma function, if invoked with an integer parameter, could be converted to a calculation involving integer factorials.
Example:
Some computer languages enable (or even require) assertions as to the usage of parameters, and might further offer the opportunity to declare that variables have their values restricted to some set (for instance, 6 < x ≤ 28) thus providing further grist for the optimisation process to grind through, and also providing worthwhile checks on the coherence of the source code to detect blunders. But this is never enough - only some variables can be given simple constraints, while others would require complex specifications: how might it be specified that variable P is to be a prime number, and if so, is or is not the value 1 included? Complications are immediate: what are the valid ranges for a day-of-month D given that M is a month number? And are all violations worthy of immediate termination? Even if all that could be handled, what benefit might follow? And at what cost? Full specifications would amount to a re-statement of the program's function in another form and quite aside from the time the compiler would consume in processing them, they would thus be subject to bugs. Instead, only simple specifications are allowed with run-time range checking provided.
Example:
In cases where a program reads no input (as in the example), one could imagine the compiler's analysis being carried forward so that the result will be no more than a series of print statements, or possibly some loops expediently generating such values. Would it then recognise a program to generate prime numbers and convert it to the best-known method for doing so, or present instead a reference to a library? Unlikely! In general, arbitrarily complex considerations arise (the Entscheidungsproblem) to preclude this, and there is no option but to run the code with limited improvements only.
History:
For procedural languages like ALGOL, interprocedural analysis and optimization appear to have entered commercial practice in the early 1970s. IBM's PL/I Optimizing Compiler performed interprocedural analysis to understand the side effects of both procedure calls and exceptions (cast, in PL/I terms, as "on conditions"), work also described in papers by Fran Allen. Work on compilation of the APL programming language was necessarily interprocedural. The techniques of interprocedural analysis and optimization were the subject of academic research in the 1980s and 1990s. They re-emerged into the commercial compiler world in the early 1990s with compilers from both Convex Computer Corporation (the "Application Compiler" for the Convex C4) and from Ardent (the compiler for the Ardent Titan). These compilers demonstrated that the technologies could be made sufficiently fast to be acceptable in a commercial compiler; subsequently, interprocedural techniques have appeared in a number of commercial and non-commercial systems.
Flags and implementation:
Unix-like: The GNU Compiler Collection has function inlining at all optimization levels. At -O1 this only applies to functions called once (-finline-functions-called-once); at -O2 this constraint is relaxed (-finline-functions). By default this is single-file-only behavior, but with link-time optimization (-flto) it becomes whole-program. Clang's command-line interface is similar to that of GCC, with the exception that there is no -fwhole-program option. Object files produced by LTO contain a compiler-specific intermediate representation (IR) that is interpreted at link time. To make sure this plays well with static libraries, newer GNU linkers have a "linker plugin" interface that allows the compiler to convert the object files into machine-code form when needed. This plugin also helps drive the LTO process in general. Alternatively, a "fat LTO" object can be produced to contain both machine code and the IR, but this takes more space. Since both GCC and LLVM (Clang) are able to produce an IR from a variety of programming languages, link-time IPO can happen even across language boundaries. This is most commonly demonstrated with C and C++, but LLVM makes it possible for Rust and all other LLVM-based compilers too.
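As a concrete illustration of the flags just described, here is a minimal two-file sketch (file and function names are illustrative, and both files are shown in one listing for brevity); with -flto, the link-time pass can inline square() across the translation-unit boundary, which per-file compilation cannot:

```c
/* lib.c */
int square(int x) { return x * x; }

/* main.c */
extern int square(int x);
int main(void) { return square(6); }

/* Build with GCC or Clang:
 *   cc -O2 -flto -c lib.c
 *   cc -O2 -flto -c main.c
 *   cc -O2 -flto lib.o main.o -o demo
 * The object files carry the compiler's IR (GIMPLE bytecode or LLVM
 * bitcode), so square() can be inlined into main() at link time. */
```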
Flags and implementation:
Non-LTO options: GCC and Clang perform IPO by default at optimization level 2. However, the degree of optimization is limited when LTO is disabled, as IPO can only happen within an object file and non-static functions can never be eliminated. The latter problem has a non-LTO solution: the -fwhole-program switch can be used to assume that only main() is non-static, i.e. visible from the outside. Another non-LTO technique is "function sections" (-ffunction-sections in GCC and Clang). By placing each function into its own section in the object file, the linker can perform dead-code removal without an IR by removing unreferenced sections (using the linker option --gc-sections). A similar option is available for variables, but it causes much worse code to be produced.
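A minimal sketch of the function-sections technique (function and file names are illustrative): each function gets its own section, and the linker drops the unreferenced one without needing any IR.

```c
/* util.c -- compile with: cc -O2 -ffunction-sections -c util.c */
int used(void)   { return 1; }
int unused(void) { return 2; }   /* never referenced anywhere */

/* main.c -- link with: cc main.o util.o -Wl,--gc-sections -o demo
 * The section holding unused() is unreferenced, so --gc-sections
 * discards it from the final executable. */
extern int used(void);
int main(void) { return used(); }
```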
Flags and implementation:
Other: The Intel C/C++ compilers allow whole-program IPO. The flag to enable interprocedural optimizations for a single file is -ip; the flag to enable interprocedural optimization across all files in the program is -ipo. The MSVC compiler, integrated into Visual Studio, also supports interprocedural optimization on the whole program. A compiler-independent interface for enabling whole-program interprocedural optimizations is via the INTERPROCEDURAL_OPTIMIZATION property in CMake. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Automatic bid**
Automatic bid:
An automatic bid is a bid or berth to a tournament, granted based on performance in prior competition, and not based on subjective picking (see: at-large bid). It is used in the United States in all professional sports, in which all playoff bids are automatic and determined by objective formulae; in college sports, all divisions (except the highest division of college football) use a mix of automatic bids and subjective selections to seed the postseason tournaments.
Automatic bid:
In Men's and Women's Division I college basketball, the teams that win their conference tournament are granted automatic berths to the main tournament. The Ivy League was the last Division I conference to institute a conference tournament, not doing so until the 2016–17 season; before then, the team with the best record in conference games advanced via automatic berth. Schools not in conferences, called "independents," have no conference tournament and can only advance to the NCAA Tournament via an at-large bid, which rarely happens unless the team performs well. As of the 2022-23 season, two Division I teams are competing as independents: the Chicago State Cougars and the Hartford Hawks, the latter of whom will be dropping to NCAA Division III after the season ends.
Automatic bid:
Similar automatic bid processes are used in other NCAA sports with a post-season tournament. This allows a team with a losing record to qualify for the NCAA tournament by winning its conference tournament and, with it, the automatic bid.
Another post-season college basketball tournament, the NIT, includes the best teams that were left out of the NCAA Tournament. Since the 2005 purchase of the NIT by the NCAA, automatic bids are now awarded to all regular season conference champions who did not win their conference tournament and did not get an at-large bid to the NCAA Tournament. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Side-stick**
Side-stick:
A side-stick or sidestick controller is an aircraft control stick that is located on the side console of the pilot, usually on the right-hand side, or outboard on a two-seat flightdeck. Typically this is found in aircraft that are equipped with fly-by-wire control systems. The throttle controls are typically located to the left of a single pilot or centrally on a two-seat flightdeck. Only one hand is required to operate the side-stick; two-handed operation is neither possible nor necessary.
Prevalence:
The side-stick is used in many modern military fighter aircraft, such as the F-16 Fighting Falcon, Mitsubishi F-2, Dassault Rafale, and F-22 Raptor, and also on civil aircraft, such as the Sukhoi Superjet 100, Airbus A320 and all subsequent Airbus aircraft, including the largest passenger jet in service, the Airbus A380.
It is also used in new helicopter models such as the Bell 525.
Compared to centre sticks:
A side-stick arrangement contrasts with the more conventional design where the stick is located in the centre of the cockpit between the pilot's legs, called a "centre stick".
Comparison of passive and active side-sticks:
Passive side-sticks: In the centre-stick design, as with traditional airplane yokes, both the pilot's and co-pilot's controls are mechanically connected, so each pilot has a sense of the control inputs of the other. In typical Airbus side-stick implementations, the sticks are independent; this is the so-called "passive" side-stick. The plane's computer either aggregates multiple inputs, or a pilot can press a "priority button" to lock out inputs from the other side-stick. However, if both side-sticks are moved in different directions at the same time (regardless of which pilot has priority), then both inputs are cancelled out and an aural "dual input" warning sounds. Examples of this occurring include the 2009 crash of Air France Flight 447 (an Airbus A330 flying from Rio de Janeiro to Paris), the 2010 crash of Afriqiyah Airways Flight 771 (an Airbus A330 flying from Johannesburg to Tripoli) and the 2014 crash of Indonesia AirAsia Flight 8501 (an Airbus A320 flying from Surabaya to Singapore). At very low altitude the "dual input" warning will not sound while the EGPWS is active, because it has a lower priority than the EGPWS warnings.
Comparison of passive and active side-sticks:
Active side-sticks: A later, significant development is the "active" side-stick, found in the new Gulfstream G500/G600 series business jet aircraft. In this system, movements of one side-stick produce the same movements in the other side-stick, and therefore provide valuable feedback to the other pilot. This addresses the earlier criticisms of the "passive" side-stick. The "active" side-stick also provides tactile feedback to the pilot during manual flight. The three largest avionics manufacturers, Honeywell, Rockwell Collins and Thales, believe it will become the standard for all new fly-by-wire aircraft. In 2015, Ratier-Figeac, a subsidiary of UTC Aerospace Systems and the supplier of "passive" side-sticks to Airbus since the 1980s, became the supplier of "active" side-sticks for the Irkut MC-21, the first airliner to use them.
Comparison of passive and active side-sticks:
Such an active side-stick can also be used to increase adherence to a safe flight envelope by applying a force feedback when the pilot makes a control input that would bring the aircraft closer to (or beyond) the borders of the safe flight envelope. This reduces the risk of pilots entering dangerous states of flights outside the operational borders while maintaining the pilots' final authority and increasing their situation awareness. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Brumidi Corridors**
Brumidi Corridors:
The Brumidi Corridors are the vaulted, ornately decorated corridors on the first floor of the Senate wing in the United States Capitol.
Background and artist:
They are named for Constantino Brumidi, who designed the murals, although assistants and other artists are responsible for many of the details. Brumidi was an Italian artist of Greek descent who was born in Rome in 1805, worked for three years in the Vatican under Pope Gregory XVI, and served several aristocrats as an artist for palaces and villas, including Prince Torlonia. Brumidi emigrated to the United States in 1852 and, after proving his skill in fresco in 1855, spent much of the next 25 years until his death in 1880 working in the Capitol, painting the frieze of American history and The Apotheosis of Washington in the Rotunda as well as the Brumidi Corridors.
Construction and design:
The Brumidi Corridors were part of the new wing constructed under Architect of the Capitol Thomas U. Walter between 1852 and 1859. Brumidi began making designs for the corridors in 1856. The decorative painting of the walls and ceilings of the main corridors was carried out primarily between 1857 and 1859. Brumidi added details in the 1860s and frescoed the lunettes over the doorways in the 1870s. Although Walter had envisioned plain-colored walls hung with oil paintings, Captain Montgomery C. Meigs, Superintendent of Construction, directed Brumidi to carry out an elaborate decorative scheme based on Raphael's Loggia in the Vatican. Brumidi's classical training in Rome gave him a thorough understanding of ancient Roman, Renaissance, and Baroque styles, symbols, and techniques of wall painting.
Construction and design:
Brumidi created the overall design for the corridors and directed its execution by artists of many nationalities. His immediate assistants included Joseph Rakemann, Albert Peruchi, and Ludwig Odense. An English artist, James Leslie, painted parts of the walls and ceilings of the corridors, including some of the birds and animals copied from specimens borrowed from the Smithsonian Institution. Leslie also probably painted the trophies of musical, marine, agricultural, and military implements at the intersection of the north and west corridors and possibly the monochrome lunettes of trophies near the refectory. The foreman of the decorative painters was Emmerich Carstens.
Construction and design:
A variety of techniques were employed in the corridors. Brumidi created the portraits and historical or allegorical scenes in the semicircular lunettes over the doorways in the difficult true fresco technique. The wall decorations were painted by decorative painters in lime-wash fresco; Brumidi himself probably painted the portraits. The ceilings were painted in water-soluble tempera, which was then called "distemper." Within the framework of panels framed by illusionistic moldings are symmetrical designs of scrolling vines, vases, and mythological figures. Into these classical motifs Brumidi integrated American flora and fauna. On the intricately decorated walls can be seen an amazing variety of classical gods and goddesses; birds of a hundred different species; rodents, including chipmunks, squirrels, and mice; insects and reptiles; and flowers and fruits. On the ceilings are landscapes and agricultural implements interspersed among the colorful framework of ornament. The painters of the scenic landscapes and the impressionist-style oval landscapes in the ceiling are not documented.
Construction and design:
The subjects of Brumidi's lunettes over the doorways reflect the functions of the committees that met in the rooms between 1873 and 1878 when they were painted. At the end of the west corridor, over the door to S-131, is Authority Consults the Written Law, whose subject related to the Senate Committee on the Revision of Laws assigned to the room. Columbus and the Indian Maiden and Bartolomé de Las Casas, who was called the "Apostle of the Indians," were painted over the doors of the Senate Committee on Indian Affairs (S-132 and S-133). Above the present Senate Appropriations Committee room (S-128), originally occupied by the Military Affairs Committee, is Bellona, the Roman war-goddess. In the ceiling at the north end of the corridor the signs of the Zodiac appear on fields of blue. Along the walls, Brumidi painted monochrome profile portraits of famous early Americans (John Hancock, Francis Hopkinson, Robert Livingston, Roger Sherman, John Jay, Charles Thomson, Charles Carroll of Carrollton, and Robert Morris) set in medallions to resemble reliefs carved in stone.
Construction and design:
Decorations in the north corridor include colorful parrots and trophies on the walls near the elevator. Near the stairways at either end of the corridor are pilasters decorated with squirrels and mice. Monochrome medallion portraits of Revolutionary War leaders (Daniel Morgan, Jonathan Trumbull, Horatio Gates, Israel Putnam, Thomas Mifflin, Silas Deane, Richard Montgomery, Joseph Warren, Thomas Jefferson, and Benjamin Franklin) are painted along the walls. Modern inventions, such as the airplane, were painted on the ceiling in the early twentieth century. Over S-124, which was then used by the Senate Committee on Territories, Brumidi painted The Cession of Louisiana, depicting the meeting of Robert Livingston, James Monroe, and François Barbé-Marbois in 1803.
Construction and design:
The north entrance retains its original tempera ceiling painted by Emmerich Carstens in 1875; Brumidi painted the frescoed portraits of jurists Justice Joseph Story and Chancellor James Kent, and the imitation sculpture bust of Chancellor Robert R. Livingston, in 1878. The area below is decorated with birds; medallions hold scenes of animals and landscapes. The profile portraits of Andrew Jackson, Henry Clay, Daniel Webster, and an Adams, perhaps John Quincy Adams, are different in style and inferior in quality; they are thought to date from the turn of the century. Opposite the entry is a painting of the USS Constitution copied from a 1924 lithograph.
Construction and design:
At the east end of the north corridor, over S-118, then occupied by the Senate Committee on Foreign Relations, Brumidi painted The Signing of the First Treaty of Peace with Great Britain. His depiction of the 1782 event shows John Adams, Benjamin Franklin, John Jay, Henry Laurens, and the British representative Richard Oswald and was based on an unfinished sketch by Benjamin West. On the ceiling plows and other agricultural implements are depicted.
Construction and design:
The Committee on Patents occupied the room on the east end of the north corridor, now S-116. For the area known as the Patent Corridor, Brumidi created frescoed lunettes with three important inventors: John Fitch (working on his steamboat model), Benjamin Franklin, and Robert Fulton. (Appropriately, Franklin appears over the door of the room then assigned to the Committee on Post Offices and Post Roads.) On the ceiling are trophies of the arts and sciences. In this area a bronze bust of Cordell Hull and a marble one of Constantino Brumidi by Jimilu Mason, dedicated in 1967, are displayed.
Construction and design:
Along the main north–south corridor are 14 oval medallions of landscapes, probably by one of the German decorative painters. In the south corridor eight medallions of animal groups alternate with repeated red, white, and blue shields. In the area leading to the refectory is an original tempera ceiling with illusionistic carved eagles and coffers and trophies of military equipment.
Construction and design:
Visitors to the Capitol have long been puzzled by the many blank ovals in the corridors. Although intended for pictures, these were left empty when the corridors were being painted in 1858 and 1859 because of restrictions by Congress, which was considering having all fine art in the Capitol approved by an art commission. In the 1870s, Brumidi was hired to paint scenes in many of the empty spaces, but his progress was limited by lack of time and funds.
Construction and design:
Some of the blank areas in the north corridor have been filled by later artists. Around 1930, an unknown artist portrayed the Wright Flyer, the Wright Brothers' airplane, and Charles Lindbergh's Spirit of St. Louis. In 1975, Allyn Cox depicted the Moon landing. The most recent addition to the corridor is the scene depicting the Space Shuttle Challenger mission crew, painted by Charles Schmidt in 1987. These last two scenes were painted on canvas and then applied to the wall.
Construction and design:
The Brumidi Corridors have always been a high-traffic area and, thus, vulnerable to damage; they were first repaired by Brumidi as early as 1861. In 1897, the backgrounds of the wall decorations were completely repainted in oil by William H. Duckstein. Aside from constant repainting, the walls were protected with varnish, which discolors over time, so that gradually the backgrounds turned from creamy white to yellow and the borders from sandstone color to murky green. Brumidi's frescoes were also painted over in oil paint when they became damaged or dark with grime. Major campaigns of retouching and repainting in oil over the frescoes were carried out by Charles Ayer Whipple (from 1919 to 1927), Charles Moberly (from 1921 to 1931), and George B. Matthews (between 1928 and 1935). In some cases, these "restorers" proudly signed and dated their work, which included changing costumes and colors and adding their own details over Brumidi's composition. In the 1950s the walls were retouched and the ceilings repainted under Francis Cumberland and Joseph Giacolone and his sons. Cliff Young also restored the signs of the zodiac in 1980.
Construction and design:
Between 1985 and 1995, Brumidi's frescoes were cleaned and conserved by professional conservators, including Bernard Rabin, Constance Silver, Catherine Myers, and Christiana Cunningham-Adams, to reveal his original compositions. Following careful study, the walls of the corridors are being gradually restored to their original colors and details. The pilot phase began in 1996. By 1999, the walls in the Patent Corridor were brought back to their 1850s appearance. Unstable plaster was consolidated to make it firm, layers of overpaint were painstakingly removed, mainly with sharp scalpels, and missing details were inpainted. Although a clear protective coating is being applied to the restored murals, they are extremely vulnerable to damage, and care must be taken to make sure that they are not touched or bumped. The plain sandstone-colored borders and illusionistic shadows around the panels are being replicated to match small uncovered areas of the original. Conservation of the murals continues in the north corridor. No two panels are exactly alike, and new details and delicate colors are coming to light as the murky overpaint is removed.
Construction and design:
The ornate bronze railings of the stairways used by senators at either end of the north corridor are composed of cherubs, eagles, and deer entwined in leafy rinceaux that echo the wall decoration. They were designed by Brumidi, sculpted by Edmond Baudin, and cast in Philadelphia by Archer, Warner, Miskey & Co. in 1858 and 1859. Cleaning and conservation in 1988 restored their original antique bronze patina.
Construction and design:
The ornately patterned and colored tile floors were manufactured by Minton, Hollins & Company in England. They were installed throughout the new extensions between 1856 and 1861. The encaustic tile, made of inlaid colored clays, was chosen for its beauty, durability, and rich design, which complement the painted decoration of the corridors. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Institute of Pharmacology and Structural Biology**
Institute of Pharmacology and Structural Biology:
The Institute of Pharmacology and Structural Biology (French: Institut de Pharmacologie et de Biologie Structurale, IPBS) is a joint CNRS-Paul Sabatier University research center. It has a scientific and administrative staff of 260 people, including a large number of postdoctoral workers and postgraduate (master's and PhD) students. The primary objective of the institute is the identification and characterization of novel therapeutic targets in the fields of cancer and infectious diseases (tuberculosis). The institute is located at 205 route de Narbonne and shares the campus with the Laboratoire de Chimie de Coordination (LCC). The IPBS is part of a scientific network of Toulouse's main life science labs.
History:
In 1972, Claude Paoletti and Jean Cros created the Laboratory of Basic Pharmacology and Toxicology (French: LPTF), which in 1990 became the seventh French pole of the National Programme IMABIO (macromolecule engineering). New topics such as oncology, neurology and genotoxicology emerged. Between 1990 and 1995, new teams arrived to develop topics in tuberculosis, protein engineering and structural biology.
History:
1996-1999: Professor Jean Cros founded the IPBS in 1996, with the aim of applying the methods and concepts of modern cell, molecular and structural biology to the identification and validation of novel pharmacological targets in the fields of cancer and G-protein-coupled receptors. The opening of a new building in December 1997 made it possible to bring all the groups of the institute together on the same site, on the campus of the Paul Sabatier University.
History:
1999-2008: Under the leadership of Professor François Amalric, the IPBS pursued the same objectives: the characterization and validation of new pharmacological targets by molecular and cell biology approaches, together with analysis of the structure/function relationships of biomolecules and their assemblies. The "Cancer Biology" Department was created in 2005, and five new teams were established during the 2005-2009 period. These new teams reinforced the two main axes of research covered by the Department: DNA transactions and repair, and the tumor microenvironment. Finally, the "Structural Biology and Biophysics" Department was created in 2009 with the objective of enhancing the exposure of the IPBS in structural biology and biophysics.
History:
2009-present: In January 2009, Dr. Jean-Philippe Girard succeeded Prof. François Amalric as Director of the institute. The current policy of the IPBS is to increase its international cooperation through the strengthening of the framework for hosting foreign students and researchers, who currently number 21, and through participation in mobility programmes such as the "Joint Research Programs" developed by the CNRS and the "Hubert Curien partnerships" developed by the Ministry of Europe and Foreign Affairs.
Research fields:
The IPBS has seventeen research teams, divided into three departments: Cancer biology (six teams), Structural biology and biophysics (five teams), and Tuberculosis and infection biology (six teams).
Core facilities:
IPBS supports technological facilities and equipment designed to advance the research efforts of the institute and of external investigators. The institute hosts four technological platforms.
Two main platforms: Proteomics (Head: Dr Odile Burlet-Schiltz). Based on its expertise in the field of mass spectrometry and proteomics, and using up-to-date mass spectrometry instrumentation and bioinformatics tools, the facility is able to handle programs in various areas, from biology and health to agricultural applications.
PICT ("Plateforme Intégrée de Criblage de Toulouse"; Head: Dr Laurent Maveyraud).
Core facilities:
The IPBS is a partner site for two other platforms: TRI ("Toulouse Réseau Imagerie"; Head: Dr Antonio Peixoto). The core facility offers the instrumentation and expertise to visualize complex systems, from molecules to whole organisms, at time scales ranging from nanoseconds to several days by the use of time-lapse imaging. Moreover, it possesses the instrumentation and expertise for the phenotypic characterization and sorting of eukaryotic and prokaryotic cells by flow cytometry.
Core facilities:
Anexplo (Head: Dr Magali Jacquier). The IPBS zootechnics facilities are part of the life science core facility of Toulouse, which includes eight other sites with complementary technical skills.
These technological facilities are included in a regional network of research platforms in life sciences, open to groups from both public and private sectors and involved in technology development and innovation.
Technological transfer and partnership with industry:
Since 1999, IPBS has been very active in partnerships with industry. The first public-private high-throughput screening centre between the CNRS and Pierre Fabre SA operated at IPBS from 1999 to 2003. Eight small biotechnology companies (start-ups) have been started or incubated at the IPBS during the last ten years.
Forty-two patent applications and extensions were filed, and more than eighty research contracts signed with the pharmaceutical industry and biotech companies.
In recognition of all these activities, the IPBS has been granted the "INPI Innovation Trophies 2008" Award.
Technological transfer and partnership with industry:
Industrial partnerships include: Abtech, Artichem, Adisseo France, Aureus Pharma, BetaTech, BT Pharma, Biovector Therapeutics, Bruker, Cayla, Centre d'Immunologie Pierre Fabre, CERPEM, CRIIT Castres, Diverchim, EDF, Endocube, GlaxoSmithKline, Genclis, GTP Technology, Immuno Designed Molecules, Institut Européen de Biologie Cellulaire, Institut de Recherche Pierre Fabre, L-Path (USA), Millegen, Mitsui-Norin, Nanobiotix, Novaleads, Oncodesign, Palumed, Pierre Fabre Dermo Cosmétique, Praxcell, Protein Biosensor, Sanofi-Aventis, SFRI, Techniques et Fabrication Electroniques, Total, Veolia...
Technological transfer and partnership with industry:
Intellectual property: At present, seventy patent applications or their extensions involve IPBS researchers as inventors or co-inventors; two patents have been licensed.
Legal tools and good practices: Contracts for technical and counselling services, for research collaborations, consortium and confidentiality agreements, consulting, material transfer agreements, laboratory book data, quality control...
Help to emerging start-ups: Since 1999, the IPBS has developed scientific collaborations with and/or hosted the activities of eight companies: Abtech (struck off on 01/9/2008), Endocube (struck off on 12/3/2008), Millegen (closed), Novaleads (struck off on 10/9/2014), Nanobiotix, Protein Bio Sensor (registered on 06/7/2005), Praxcell (struck off on 05/3/2012), Icelltis (registered on 10/01/2008).
Scientific Advisory Board (SAB):
The Scientific Advisory Board advises the Director and the Executive Board on the scientific policy of the institute and on public relations, as well as on strategic aspects relating to the life cycle of research teams (creation, modification of research orientations, transition, etc.). It also assesses the scientific projects conducted by each research team at the institute.
Scientific Advisory Board (SAB):
It is composed of nine researchers, who are (in alphabetical order): Frederick Alt from Harvard Medical School, Mina Bissell from Lawrence Berkeley National Laboratory, Patrick Couvreur from Institut Galien Paris Sud, Sabine Ehrt from Weill Cornell Medical College, Jean-Pierre Gorvel from Centre d'immunologie de Marseille-Luminy, Kathryn S Lilley from University of Cambridge, Dino Moras from Institut de Génétique et de Biologie Moléculaire et Cellulaire, Eric Solary from Institut Gustave Roussy, David Russell from Cornell University.
International relations:
The IPBS has been involved in many research networks under the European Commission’s Frameworks, from the Fifth Framework Programme for Research and Technological Development activities (FP5) to Horizon 2020 projects.
These networks involve several teams from the “Tuberculosis & Infection Biology” department, which are participating in European integrated projects fighting tuberculosis, particularly in some projects coordinated by the "Tuberculosis Vaccine Initiative", but also involve teams from the "Cancer Biology" and "Structural Biology & Biophysics" departments.
International relations:
Since 2000, the “Tuberculosis & Infection Biology” department of the IPBS has been part of the TBVAC Consortium. The latter brings together a large number of key partners from excellent laboratories in Europe, as well as the United States, Asia, Africa and Australia, many of which are global leaders in the field of tuberculosis. Scientists and developers from 40 research partners collaborate in TBVAC2020. The current 4-year project started in January 2015 and is coordinated by the Tuberculosis Vaccine Initiative (TBVI). Since 2015, IPBS has taken part in various European schemes, such as RESPIRE 2 and 3, and in the Initial Training Network (ITN).
International relations:
Building on a long-standing collaboration with the University of Ljubljana, the IPBS developed a European Associated Laboratory (LEA) entitled “Pulsed Electric Fields Applications in Biology and Medicine”, abbreviated as “LEA EBAM”. This French-Slovenian laboratory “without walls”, created in January 2011 for four years, has been renewed for the same duration.
IPBS teams are also members of the interregional (Spain-France-Andorra) POCTEFA 2014-2020 programme, created to promote the sustainable development of the border territories of the three countries on both sides of the Pyrenees.
The IPBS in numbers:
The IPBS is mainly supported by direct and indirect funding from the CNRS and the Paul Sabatier University, covering the salaries of more than 260 researchers. Other funding sources include the European Union, the Occitanie administrative region, industry, public contracts, charities and facilities. The average annual budget is seven million euros.
Today, the institute counts 2,200 publications, fifty European and international contracts, more than 300 defended theses, seventy registered patents and eight incubated startups.
**Upper gastrointestinal series**
Upper gastrointestinal series:
An upper gastrointestinal series, also called a barium swallow, barium study, or barium meal, is a series of radiographs used to examine the gastrointestinal tract for abnormalities. A contrast medium, usually a radiocontrast agent such as barium sulfate mixed with water, is ingested or instilled into the gastrointestinal tract, and X-rays are used to create radiographs of the regions of interest. The barium enhances the visibility of the relevant parts of the gastrointestinal tract by coating the inside wall of the tract and appearing white on the film. This in combination with other plain radiographs allows for the imaging of parts of the upper gastrointestinal tract such as the pharynx, larynx, esophagus, stomach, and small intestine such that the inside wall lining, size, shape, contour, and patency are visible to the examiner. With fluoroscopy, it is also possible to visualize the functional movement of examined organs such as swallowing, peristalsis, or sphincter closure. Depending on the organs to be examined, barium radiographs can be classified into "barium swallow", "barium meal", "barium follow-through", and "enteroclysis" ("small bowel enema"). To further enhance the quality of images, air or gas is sometimes introduced into the gastrointestinal tract in addition to barium, and this procedure is called double-contrast imaging. In this case the gas is referred to as the negative contrast medium. Traditionally the images produced with barium contrast are made with plain-film radiography, but computed tomography is also used in combination with barium contrast, in which case the procedure is called "CT enterography".
Types:
Various types of barium X-ray examinations are used to examine different parts of the gastrointestinal tract. These include barium swallow, barium meal, barium follow-through, and barium enema. The barium swallow, barium meal, and barium follow-through are together also called an upper gastrointestinal series (or study), whereas the barium enema is called a lower gastrointestinal series (or study). In upper gastrointestinal series examinations, the barium sulfate is mixed with water and swallowed orally, whereas in the lower gastrointestinal series (barium enema), the barium contrast agent is administered as an enema through a small tube inserted into the rectum.
Types:
Barium swallow X-ray examinations are used to study the pharynx and esophagus.
Barium meal examinations are used to study the lower esophagus, stomach and duodenum.
Barium follow-through examinations are used to study the small intestine.
Enteroclysis, also called small bowel enema, is a barium X-ray examination used to display individual loops of the small intestine by intubating the jejunum and administering barium sulfate followed by methylcellulose or air.
Barium enema examinations are used to study the large intestine and rectum and are classified as lower gastrointestinal series.
Medical uses:
Barium X-ray examinations are useful tools for the study of appearance and function of the parts of the gastrointestinal tract. They are used to diagnose and monitor esophageal reflux, dysphagia, hiatus hernia, strictures, diverticula, pyloric stenosis, gastritis, enteritis, volvulus, varices, ulcers, tumors, and gastrointestinal dysmotility, as well as to detect foreign bodies. Although barium X-ray examinations are increasingly being replaced by more modern techniques, such as computed tomography, magnetic resonance imaging, ultrasound imaging, endoscopy and capsule endoscopy, barium contrast imaging remains in common use because it offers the advantages of greater affordability, wider availability, and better resolution in assessing superficial mucosal lesions.
Mechanism:
Barium sulfate is a radiopaque substance: it does not allow the passage of X-rays, so areas coated by it appear white on an X-ray film. The passage of barium sulfate through the gastrointestinal tract is observed by a radiologist using a fluoroscope attached to a TV monitor. The radiologist takes a series of individual X-ray images at timed intervals, depending on the areas to be studied. Sometimes a medication that produces gas in the gastrointestinal tract is administered together with the barium sulfate. This gas distends the gastrointestinal lumen, providing better imaging conditions, in which case the procedure is called double-contrast imaging.
Procedure:
Clinical status and relevant medical history are reviewed prior to the studies. Patient consent is required.
Procedure:
Barium swallow: A barium swallow study is also known as a barium esophagram and needs little if any preparation for the study of the larynx, pharynx, and esophagus when studied alone. Amongst the uses of barium swallow are: persistent dysphagia and odynophagia despite negative esophagogastroduodenoscopy (OGDS) findings, failed OGDS, esophageal motility disorder, globus pharyngis, assessment of tracheoesophageal fistula, and timed barium swallow to monitor the progress of esophageal achalasia therapy. A barium sulfate suspension such as 100 ml or more of E-Z HD at 200 to 250% concentration, or Baritop at 100%, can be used. A water-soluble contrast agent such as Gastrografin (diatrizoate) or Conray (iotalamic acid) is used instead of barium if oesophageal perforation is suspected. A low-osmolar contrast medium with a concentration of 300 mg/ml is used instead of Gastrografin if there is a risk of aspiration or there is a tracheoesophageal fistula. A thick barium mixture is swallowed in the supine position and fluoroscopic images of the swallowing process are made. Then several swallows of a thin barium mixture are taken and the passage is recorded by fluoroscopy and standard radiographs. The procedure is repeated several times with the examination table tilted at various angles. A total of 350–450 mL of barium is swallowed during the process. Normally, 90% of ingested fluid should have passed into the stomach after 15 seconds. The right anterior oblique (RAO) view is used to see the oesophagus clearly, away from the overlapping spine. An AP (anterior-posterior) view is also done to visualise the gastroesophageal junction. AP and lateral views are also done to visualise the hypopharynx during swallowing at a frame rate of 3–4 per second. The left posterior oblique (LPO) position is used to identify hernias, mucosal rings, and varices.
Procedure:
Barium meal: An intravenous injection of Buscopan (hyoscine butylbromide) 20 mg or glucagon 0.3 mg is used to distend the stomach and slow down the emptying of the contrast into the duodenum. The right anterior oblique (RAO) view is used to demonstrate the antrum and greater curve of the stomach. The supine position is used to demonstrate the antrum and body of the stomach. The left anterior oblique (LAO) view is used to see the lesser curve of the stomach en face. This position is also used to check for gastroesophageal reflux when the patient is asked to cough or swallow (water siphon test). A left lateral tilt with the head up 45 degrees is used to demonstrate the fundus of the stomach. To demonstrate the duodenal loop, the subject can lie prone on a compression pad to prevent excessive barium flowing into the duodenal loop. An anterior view of the duodenal loop can be seen in the RAO position. The duodenal cap can be visualised by taking images with the subject lying prone, then in the RAO, supine, and LAO positions, or it can be seen erect with RAO and steep LAO views. Total mucosal coating of the stomach is achieved by asking the subject to roll to the right side in a complete circle until reaching the RAO position. The areae gastricae in the antrum (a fine reticular network of grooves) are visible if good coating is achieved.
Procedure:
Small bowel follow-through: Indications for this procedure are: unexplained chronic abdominal pain with weight loss, unexplained diarrhea, anemia caused by gastrointestinal bleeding or dependent on blood transfusion where the cause cannot be explained despite OGDS or colonoscopy investigations, suspected partial obstruction of the bowel or small bowel adhesive obstruction, and unexplained malabsorption of nutrients. For barium follow-through examinations, a 6-hour period of fasting is observed prior to the study. Barium is administered orally, sometimes mixed with diatrizoic acid (Gastrografin) to reduce transit time in the bowel. Intravenous metoclopramide is sometimes also added to enhance gastric emptying. 600 ml of 0.5% methylcellulose can be given orally after the barium meal to improve the images of the small bowel follow-through, by reducing the time taken for barium to pass through the small intestines and increasing the transparency of the contrast-filled small bowel. Other methods to reduce transit time are to add ice-cold normal saline after the administration of the barium-saline mixture, or to give a dry meal. X-ray images are then taken in a supine position at intervals of 20–30 minutes. Real-time fluoroscopy is used to assess bowel motility. The radiologist may press or palpate the abdomen during imaging to separate intestinal loops. The total time necessary for the test depends on the speed of bowel motility or transit time and may vary between 1 and 3 hours.
Procedure:
Enteroclysis: Enteroclysis is also known as small bowel enema. It has been largely replaced by magnetic resonance enterography/enteroclysis and computed tomography enterography/enteroclysis. In addition to fasting for 8 hours prior to examination, a laxative may also be necessary for bowel preparation and cleansing. The main aim of this study is to distend the proximal bowel through infusion of a large amount of barium suspension. Otherwise, the distension of the distal small bowel is generally similar to that of the small bowel follow-through. There is therefore a need to pass a tube through the nose into the jejunum (a nasojejunal tube) to administer the large amount of contrast. This can be unpleasant for the subject, and requires more staff, a longer procedural time, and a higher radiation dose when compared to small bowel follow-through. The indications for enteroclysis are generally similar to those for small bowel follow-through. Barium suspensions such as diluted E-Z Paque 70% and Baritop 100% can be used; 600 ml of 0.5% methylcellulose is administered after 500 ml of 70% barium suspension is given. Bilbao-Dotter and Silk tubes can be used to administer the barium suspension. The subject should be fasted overnight, any antispasmodic drugs should be stopped one day before the examination, and tetracaine lozenges can be used 30 minutes before the procedure to numb the throat for nasojejunal tube insertion. The filling of the small intestines can be viewed continuously using fluoroscopy, or viewed as standard radiographs taken at frequent intervals. The technique is a double-contrast procedure that allows detailed imaging of the entire small intestine. However, the procedure may take 6 hours or longer to complete and is quite uncomfortable to undergo.
Interpretation of results:
Enteroclysis has been shown to be very accurate in diagnosing small bowel diseases, with a sensitivity of 93.1% and a specificity of 96.9%. It permits detection of lesions which may not be seen with other imaging techniques. There is no significant difference in terms of detection of clinically significant findings, sensitivity or specificity between enteroclysis and CT enterography. Enteroclysis compares favorably with wireless capsule endoscopy and double-balloon endoscopy in the diagnosis of mucosal abnormalities of the small bowel.
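These accuracy figures translate into post-test probabilities once a pre-test probability is assumed. A minimal sketch using the sensitivity and specificity quoted above; the 30% pre-test probability is an illustrative assumption, not a figure from this article:

```python
# Positive and negative predictive values from the reported accuracy of
# enteroclysis; the pre-test probability (prev) is an assumed example value.
sens, spec = 0.931, 0.969
prev = 0.30

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # ~92.8% and ~97.0%
```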
Interpretation of results:
The interpretation of standard barium swallow examinations for assessing dysphagia is operator and interpreter dependent. It has poor sensitivity for subtle abnormalities but is more sensitive in detecting esophageal webs and rings than gastroscopy. The best initial evaluation of suspected oropharyngeal dysphagia is a barium study. Barium swallow studies remain the main investigation of dysphagia. Barium studies may detect pharyngeal tumors that are difficult to visualize endoscopically.
Interpretation of results:
Barium follow-through examinations are the most commonly used imaging technique in assessing patients with Crohn's disease, although CT and magnetic resonance imaging are widely accepted as being superior. However, barium examinations remain superior in the depiction of mucosal abnormalities. The features of Crohn's disease are well described by barium follow-through examinations, appearing as a typical "cobblestone pattern", but no information is obtained regarding extraluminal disease. Radiographic imaging in Crohn's disease provides clinicians with objective evaluations of small bowel regions that are not accessible to standard endoscopic techniques. Because of its length and complex loops, the small intestine is the most difficult part of the gastrointestinal tract to evaluate. Most endoscopic techniques are limited to the examination of proximal or distal segments, hence barium follow-through remains in most centres the test of choice for the investigation of abdominal pain, diarrhoea and, in particular, diseases manifesting mucosal abnormalities such as coeliac and Crohn's disease.
Interpretation of results:
Barium swallow studies are better than endoscopy at demonstrating the anatomic findings in gastroesophageal reflux disease after anti-reflux surgery.
Barium fluoroscopic examinations have some advantages over computed tomography and magnetic resonance techniques, such as higher spatial resolution and the ability to examine bowel peristalsis and distension in real time.
Interpretation of results:
Many infections and parasitic infestations produce patterns on the luminal surface which are best seen on barium examinations. Certain parasites are seen as filling defects outlined by barium, and barium examinations play an important role in the diagnosis of intestinal infections and infestations compared to other techniques. Barium studies show tapeworms and roundworms as thin, linear filling defects of the bowel. Because roundworms have a developed alimentary tract, barium may outline the parasites' intestinal tracts on delayed images. In strongyloidiasis, barium studies show intestinal wall oedema, thickening of intestinal folds with flattening, and atrophy of the overlying mucosa. Schistosomiasis, caused by infection with flatworms, has an appearance resembling ulcerative colitis, with inflammatory polyps, ulcers, fibrosis, wall thickening, loss of haustration, and stenosis on barium X-rays. Anisakiasis is demonstrated by barium X-rays as bowel wall oedema, thickening, ulceration, or stricture due to inflammation; sometimes worms are seen as long, thread-like, linear filling defects up to 30 cm long. In typhlitis, barium studies show oedema, ulceration, and inflammation of the bowel wall resulting in wall thickening. In pseudomembranous colitis, barium studies show pancolitis with thumb printing and shaggy margins as well as a plaque-like eccentric, nodular or polypoid appearance.
Interpretation of results:
Barium studies and computed tomography are the most common tools used to diagnose gastrointestinal lymphoma. Barium contrast is more sensitive in the demonstration of subtle mucosal and submucosal abnormalities, but computed tomography is the method of choice for determining the extent of disease and staging, as well as related complications such as fistulation and perforation. Submucosal nodules or masses form a bull's-eye or target appearance on barium studies.
Adverse effects:
Radiographic examinations involve radiation exposure in the form of X-rays.
Although barium ions are toxic, their use is generally regarded as safe because the small amounts of barium ions available in solution and absorbed by the gastrointestinal tract are deemed to be negligible; however, isolated cases of barium encephalopathy have been described following absorption of barium from the intestinal tract.
Constipation and abdominal pain may occur after barium meals.
The formation of baroliths, which may need to be removed surgically, is a complication of the use of barium sulfate.
Barium sulfate may cause serious peritoneal irritation.
Adverse effects:
Leakage of barium sulfate into the abdominal cavity may occur in people with duodenal ulcers or other perforations and may lead to peritonitis, adhesion, and granulomas; it is associated with a high mortality rate. Leakage of barium into the mediastinum or peritoneal cavity may lead to endotoxic shock, which is often fatal; as a result, the use of barium as a contrast agent is contraindicated when there is a suspicion or possibility of compromise of bowel wall integrity.
Adverse effects:
Aspiration or inhalation of barium sulfate into the lungs during oral application can lead to serious respiratory complications, including fatal aspiration pneumonia or asphyxiation.
Hypersensitivity and allergic reactions are rare, but some additives contained in barium preparations may induce immune reactions. Complete gastrointestinal obstruction is a contraindication for barium studies.
History:
Barium sulfate as a contrast medium evolved from the prior use of bismuth preparations, which were too toxic. The use of bismuth preparations had been described as early as 1898. Barium sulfate as a contrast medium in medical practice was introduced largely as a result of the work of Krause, a director of the Bonn Polyclinic (now the medical faculty of the University of Bonn), and his colleagues Bachem and Gunther. In a paper read in 1910 at the radiological congress, they advocated the use of barium sulfate as an opaque contrast medium in medicine.
**Photonic radar**
Photonic radar:
Photonic radar is a technique by which radar may be produced and analysed with the help of photonics rather than traditional RF engineering techniques. The frequency of the radar is still in the RF range, but lasers are used to create and analyse the RF signals with high precision. The USA, China, and Russia have research programs to equip fighter aircraft with photonic radar. The potential benefits are longer detection range, better position sensing, and 3D model target reconstruction. In one study, a test device could resolve objects as small as 3 x 4 cm (1.2 x 1.6 in), much smaller than traditional radar.
Overview of operation:
A laser diode is used to generate an optical signal that is modulated by a linearly-chirped low frequency signal. This modulated optical signal is then split, with one part immediately converted to an electronic signal at 4 times the frequency of the original modulating signal. This waveform is then amplified, emitted via a standard antenna, and then received again via another standard antenna. The second half of the modulated optical signal is further modulated by the reflected signal, and then converted to an electronic signal. This electronic signal is sent through a low-pass filter and finally digitized via an analog-to-digital converter. The resulting digital waveform can be processed to recover the delay between the transmitted and reflected signal, and thus the distance to the target. The entire system may be operated in real-time to allow high-speed target acquisition.
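The range recovery at the end of that chain amounts to de-chirp ("stretch") processing: mixing the reference chirp with the delayed echo leaves a constant beat frequency proportional to the round-trip delay. A minimal baseband sketch under assumed waveform parameters; the bandwidth, chirp duration, and sample rate below are illustrative, and the optical generation and 4x frequency multiplication are abstracted away:

```python
import numpy as np

# De-chirp range recovery: mix a linear chirp with its delayed echo, then
# read the target range off the beat-frequency peak. All parameters are
# assumed example values, not figures from an actual photonic radar.
c = 3e8               # speed of light, m/s
B, T = 400e6, 10e-6   # effective RF bandwidth and chirp duration (assumed)
k = B / T             # chirp rate, Hz/s
fs = 100e6            # ADC sample rate (assumed)
R_true = 150.0        # simulated target range, m
tau = 2 * R_true / c  # round-trip delay, s

t = np.arange(0, T, 1 / fs)
# Phase difference between the transmitted and received chirps; mixing the
# two signals leaves a tone at the beat frequency f_b = k * tau.
beat = np.cos(np.pi * k * t**2 - np.pi * k * (t - tau) ** 2)

spectrum = np.abs(np.fft.rfft(beat))
f_b = np.fft.rfftfreq(len(beat), 1 / fs)[np.argmax(spectrum)]
print(f"beat {f_b/1e6:.1f} MHz -> range {c * f_b / (2 * k):.1f} m")  # ~150 m
```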
Applications:
Novel potential applications include non-invasive patient vital sign monitoring using a photonic chip small enough to include in a phone.
**Positive-incentive value**
Positive-incentive value:
Positive-incentive value is the anticipated pleasure involved in the performance of a particular behavior, such as eating a particular food or drinking a particular beverage. It is a key element of the positive-incentive theories of hunger.
**Exhaust gas**
Exhaust gas:
Exhaust gas or flue gas is emitted as a result of the combustion of fuels such as natural gas, gasoline (petrol), diesel fuel, fuel oil, biodiesel blends, or coal. According to the type of engine, it is discharged into the atmosphere through an exhaust pipe, flue gas stack, or propelling nozzle. It often disperses downwind in a pattern called an exhaust plume.
Exhaust gas:
It is a major component of motor vehicle emissions (and from stationary internal combustion engines), which can also include crankcase blow-by and evaporation of unused gasoline.
Exhaust gas:
Motor vehicle emissions are a common source of air pollution and are a major ingredient in the creation of smog in some large cities. A 2013 study by the Massachusetts Institute of Technology (MIT) indicates that 53,000 early deaths occur per year in the United States alone because of vehicle emissions. According to another study from the same university, traffic fumes alone cause the death of 5,000 people every year just in the United Kingdom.
Composition:
The largest part of most combustion gas is nitrogen (N2), water vapor (H2O) (except with pure-carbon fuels), and carbon dioxide (CO2) (except for fuels without carbon); these are not toxic or noxious (although water vapor and carbon dioxide are greenhouse gases that contribute to climate change). A relatively small part of combustion gas is undesirable, noxious, or toxic substances, such as carbon monoxide (CO) from incomplete combustion, hydrocarbons (properly indicated as CxHy, but typically shown simply as "HC" on emissions-test slips) from unburnt fuel, nitrogen oxides (NOx) from excessive combustion temperatures, and particulate matter (mostly soot).
Exhaust gas temperature:
Exhaust gas temperature (EGT) is important to the functioning of the catalytic converter of an internal combustion engine. It may be measured by an exhaust gas temperature gauge. EGT is also a measure of engine health in gas-turbine engines (see below).
Cold engines:
During the first two minutes after starting the engine of a car that has not been operated for several hours, the amount of emissions can be very high. This occurs for two main reasons: Rich air-fuel ratio requirement in cold engines: When a cold engine is started, the fuel does not vaporize completely, creating higher emissions of hydrocarbons and carbon monoxide, which diminishes only as the engine reaches operating temperature. The duration of this start-up phase has been reduced by advances in materials and technology, including computer-controlled fuel injection, shorter intake lengths, and pre-heating of fuel and/or inducted air.
Cold engines:
Inefficient catalytic converter under cold conditions: Catalytic converters are very inefficient until warmed up to their operating temperature. This time has been much reduced by moving the converter closer to the exhaust manifold and even more so placing a small yet quick-to-heat-up converter directly at the exhaust manifold. The small converter handles the start-up emissions, which allows enough time for the larger main converter to heat up. Further improvements can be realised in many ways, including electric heating, thermal battery, chemical reaction preheating, flame heating and superinsulation.
Passenger car emissions summary:
Comparable with the European emission standard EURO III as applied in October 2000. In 2000, the United States Environmental Protection Agency began to implement more stringent emissions standards for light-duty vehicles. The requirements were phased in beginning with 2004 vehicles, and all new cars and light trucks were required to meet the updated standards by the end of 2007.
Types:
Internal-combustion engines (spark-ignition and diesel): In spark-ignition engines the gases resulting from combustion of the fuel-air mix are called exhaust gases. The composition varies between petrol and diesel engines. A figure of around 10% oxygen for diesel is likely if the engine was idling, e.g. in a test rig; it is much less if the engine is running under load, although diesel engines always operate with an excess of air over fuel.
Types:
The CO content for petrol engines varies from ~15 ppm for a well-tuned engine with fuel injection and a catalytic converter up to 100,000 ppm (10%) for a richly tuned carburetor engine, such as typically found on small generators and garden equipment.
Nitromethane additive: Exhaust gas from an internal combustion engine whose fuel includes nitromethane will contain nitric acid vapour, which is corrosive and, when inhaled, causes a muscular reaction making it impossible to breathe. People who are likely to be exposed to it should wear a gas mask.
Types:
Gas-turbine engines: In aircraft gas turbine engines, "exhaust gas temperature" (EGT) is a primary measure of engine health. Typically the EGT is compared with a primary engine power indication called "engine pressure ratio" (EPR). For example: at full-power EPR there will be a maximum permitted EGT limit. Once an engine reaches a stage in its life where it reaches this EGT limit, the engine will require specific maintenance in order to rectify the problem. The amount by which the EGT is below the EGT limit is called the EGT margin. The EGT margin of an engine will be greatest when the engine is new or has been overhauled. For most airlines, this information is also monitored remotely by the airline maintenance department by means of ACARS.
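A toy illustration of the EGT-margin bookkeeping described above; the limit and readings are invented values for a hypothetical engine, not real maintenance data:

```python
# Minimal sketch of tracking EGT margin over an engine's life; the limit
# and the readings are assumed example values only.
def egt_margin(egt_limit_c: float, egt_at_full_power_c: float) -> float:
    """Margin (deg C) between the permitted EGT limit and the observed EGT."""
    return egt_limit_c - egt_at_full_power_c

LIMIT = 930.0                      # hypothetical EGT limit for this engine type
readings = [905.0, 918.0, 931.0]   # EGT at full-power EPR over successive checks

for egt in readings:
    m = egt_margin(LIMIT, egt)
    status = "OK" if m > 0 else "maintenance required"
    print(f"EGT {egt:.0f} C -> margin {m:+.0f} C ({status})")
```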
Types:
Jet engines and rocket engines: In jet engines and rocket engines, exhaust issues from propelling nozzles, and in some applications it shows shock diamonds.
Other types: From burning coal: flue gas, i.e. flue gas emissions from fossil fuel combustion. Steam engines: in steam engine terminology, the exhaust is steam that is now so low in pressure that it can no longer do useful work.
Main motor vehicle emissions:
NOx: Mono-nitrogen oxides NO and NO2 (NOx) (whether produced this way or naturally by lightning) react with ammonia, moisture, and other compounds to form nitric acid vapor and related particles. Small particles can penetrate deeply into sensitive lung tissue and damage it, causing premature death in extreme cases. Inhalation of NO species increases the risk of lung cancer and colorectal cancer, and inhalation of such particles may cause or worsen respiratory diseases such as emphysema and bronchitis, as well as heart disease. In a 2005 U.S. EPA study, the largest emissions of NOx came from on-road motor vehicles, with the second largest contributor being non-road equipment, which is mostly gasoline- and diesel-powered. The resulting nitric acid may be washed into soil, where it becomes nitrate, which is useful to growing plants.
Main motor vehicle emissions:
Volatile organic compounds: When oxides of nitrogen (NOx) and volatile organic compounds (VOCs) react in the presence of sunlight, ground-level ozone is formed, a primary ingredient in smog. A 2005 U.S. EPA report gives road vehicles as the second largest source of VOCs in the U.S. at 26%, with 19% from non-road equipment, which is mostly gasoline- and diesel-powered. 27% of VOC emissions are from solvents, which are used in the manufacture of paints and paint thinners, among other uses.
Main motor vehicle emissions:
Ozone: Ozone is beneficial in the upper atmosphere, but at ground level ozone irritates the respiratory system, causing coughing, choking, and reduced lung capacity. It also has many negative effects throughout the ecosystem.
Main motor vehicle emissions:
Carbon monoxide (CO): Carbon monoxide poisoning is the most common type of fatal air poisoning in many countries. Carbon monoxide is colorless, odorless and tasteless, but highly toxic. It combines with hemoglobin to produce carboxyhemoglobin, which blocks the transport of oxygen. At concentrations above 1000 ppm it is considered immediately dangerous, and it is the most immediate health hazard from running engines in a poorly ventilated space. In 2011, 52% of carbon monoxide emissions were created by mobile vehicles in the U.S.
Main motor vehicle emissions:
Hazardous air pollutants (toxics): Chronic (long-term) exposure to benzene (C6H6) damages bone marrow. It can also cause excessive bleeding and depress the immune system, increasing the chance of infection. Benzene causes leukemia and is associated with other blood cancers and pre-cancers of the blood.
Main motor vehicle emissions:
Particulate matter (PM10 and PM2.5): The health effects of inhaling airborne particulate matter have been widely studied in humans and animals and include asthma, lung cancer, cardiovascular issues, and premature death. Because of their size, the particles can penetrate the deepest part of the lungs. A 2011 UK study estimates 90 deaths per year due to passenger vehicle PM. In a 2006 publication, the U.S. Federal Highway Administration (FHWA) states that in 2002 about 1 percent of all PM10 and 2 percent of all PM2.5 emissions came from the exhaust of on-road motor vehicles (mostly from diesel engines). In Chinese, European, and Indian markets, both diesel and gasoline vehicles are required to have a tailpipe filter installed, while the United States has mandated it for diesel only. In 2022, British testing specialist Emissions Analytics estimated that the 300 million or so gasoline vehicles in the US would emit around 1.6 septillion harmful particles over the subsequent decade.
Main motor vehicle emissions:
Carbon dioxide (CO2): Carbon dioxide is a greenhouse gas. Motor vehicle CO2 emissions are part of the anthropogenic contribution to the growth of CO2 concentrations in the atmosphere, which according to the vast majority of the scientific community is causing climate change. Motor vehicles are calculated to generate about 20% of the European Union's man-made CO2 emissions, with passenger cars contributing about 12%. European emission standards limit the CO2 emissions of new passenger cars and light vehicles. The European Union average new-car CO2 emissions figure dropped by 5.4% in the year to the first quarter of 2010, down to 145.6 g/km.
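For scale, a per-kilometre figure like this converts directly into an annual tonnage once a yearly mileage is assumed; the 15,000 km used below is an illustrative assumption, not a figure from this article:

```python
# Convert the quoted fleet-average emission figure into tonnes of CO2 per
# year; the annual mileage is an assumed example value.
g_per_km = 145.6
km_per_year = 15_000   # assumed annual mileage

tonnes = g_per_km * km_per_year / 1e6
print(f"{tonnes:.2f} t CO2/year")   # ~2.18 t
```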
Main motor vehicle emissions:
Water vapour: Vehicle exhaust contains much water vapour.
Water recovery: There has been research into ways that troops in deserts can recover drinkable water from their vehicles' exhaust gases.
Pollution reduction:
Emission standards focus on reducing pollutants contained in the exhaust gases from vehicles as well as from industrial flue gas stacks and other air pollution exhaust sources in various large-scale industrial facilities such as petroleum refineries, natural gas processing plants, petrochemical plants and chemical production plants; these are often referred to as flue gases. Catalytic converters in cars are intended to break down exhaust-gas pollutants using a catalyst. Scrubbers in ships are intended to remove sulfur dioxide (SO2) from marine exhaust gases. The regulations on marine sulfur dioxide emissions are tightening; however, only a small number of special areas worldwide have been designated for low-sulfur diesel fuel use only.
Pollution reduction:
One of the advantages claimed for advanced steam technology engines is that they produce smaller quantities of toxic pollutants (e.g. oxides of nitrogen) than petrol and diesel engines of the same power. They produce larger quantities of carbon dioxide but less carbon monoxide due to more efficient combustion.
Health studies:
Researchers from the University of California, Los Angeles School of Public Health say preliminary results of their statistical study of children listed in the California Cancer Registry born between 1998 and 2007 found that traffic pollution may be associated with a 5% to 15% increase in the likelihood of some cancers. A World Health Organization study found that diesel fumes cause an increase in lung cancer.
Localised effects:
The California Air Resources Board found in studies that 50% or more of the air pollution (smog) in Southern California is due to car emissions. Concentrations of pollutants emitted from combustion engines may be particularly high around signalized intersections because of idling and accelerations. Computer models often miss this kind of detail.
**Local Bubble**
Local Bubble:
The Local Bubble, or Local Cavity, is a relative cavity in the interstellar medium (ISM) of the Orion Arm in the Milky Way. It contains, among others, the closest celestial neighbours: the Local Interstellar Cloud (which contains the Solar System), the neighbouring G-Cloud, the Ursa Major moving group (the closest stellar moving group) and the Hyades (the nearest open cluster). It is estimated to be at least 1000 light years across, and is defined by its neutral-hydrogen density of about 0.05 atoms/cm3, approximately one tenth of the average for the ISM in the Milky Way (0.5 atoms/cm3) and one sixth that of the Local Interstellar Cloud (0.3 atoms/cm3). The exceptionally sparse gas of the Local Bubble is the result of supernovae that exploded within the past ten to twenty million years. Geminga, a pulsar in the constellation Gemini, was once thought to be the remnant of a single supernova that created the Local Bubble, but now multiple supernovae in subgroup B1 of the Pleiades moving group are thought to have been responsible, becoming a remnant supershell.
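The quoted size and density figures allow a rough order-of-magnitude estimate of how much gas the cavity holds. A back-of-the-envelope sketch treating the bubble as a uniform sphere 1000 light-years across, a deliberate oversimplification given that the real shape is closer to an hourglass (see below):

```python
import math

# Rough mass of neutral hydrogen in the Local Bubble, using the density
# quoted above (0.05 H atoms/cm^3) and an assumed uniform 500 ly radius.
LY_CM   = 9.461e17   # one light-year in centimetres
M_H_G   = 1.67e-24   # mass of a hydrogen atom in grams
M_SUN_G = 1.989e33   # solar mass in grams

r = 500 * LY_CM
volume = 4 / 3 * math.pi * r**3      # cm^3
atoms  = 0.05 * volume               # H atoms at 0.05 atoms/cm^3
mass_solar = atoms * M_H_G / M_SUN_G
print(f"~{mass_solar:.0e} solar masses of neutral hydrogen")  # ~2e+04
```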
Description:
The Solar System has been traveling through the region currently occupied by the Local Bubble for the last five to ten million years. Its current location lies in the Local Interstellar Cloud (LIC), a minor region of denser material within the Bubble. The LIC formed where the Local Bubble and the Loop I Bubble met. The gas within the LIC has a density of approximately 0.3 atoms per cubic centimeter.
Description:
The Local Bubble is not spherical, but seems to be narrower in the galactic plane, becoming somewhat egg-shaped or elliptical, and may widen above and below the galactic plane, becoming shaped like an hourglass. It abuts other bubbles of less dense interstellar medium (ISM), including, in particular, the Loop I Bubble. The Loop I Bubble was cleared, heated and maintained by supernovae and stellar winds in the Scorpius–Centaurus association, some 500 light years from the Sun. The Loop I Bubble contains the star Antares (also known as α Sco, or Alpha Scorpii), as shown on the diagram above right. Several tunnels connect the cavities of the Local Bubble with the Loop I Bubble, called the "Lupus Tunnel". Other bubbles which are adjacent to the Local Bubble are the Loop II Bubble and the Loop III Bubble.
Description:
In 2019, researchers found interstellar iron in Antarctica which they relate to the Local Interstellar Cloud, which might be related to the formation of the Local Bubble.
Observation:
Launched in February 2003 and active until April 2008, a small space observatory called the Cosmic Hot Interstellar Plasma Spectrometer (CHIPS or CHIPSat) examined the hot gas within the Local Bubble. The Local Bubble was also the region of interest for the Extreme Ultraviolet Explorer mission (1992–2001), which examined hot EUV sources within the bubble. Sources beyond the edge of the bubble were identified but attenuated by the denser interstellar medium. In 2019, the first 3D map of the Local Bubble was reported, using observations of diffuse interstellar bands.
Impact on star formation:
In January 2022, a paper in the journal Nature reported that observations and modelling had determined that the action of the expanding surface of the bubble had collected gas and debris and was responsible for the formation of all young, nearby stars. These new stars are typically in molecular clouds like the Taurus molecular cloud and the open star cluster Pleiades.
**Android Developer Day**
Android Developer Day:
Android Developer Days (ADD) is an open conference held at various locations worldwide each year. The conference is a growing event that allows developers of various software and applications to showcase, observe, and participate in Android development activities; informational lectures, workshops, entertainment activities, panel discussions, and networking opportunities make up a majority of the Android Developer Days. As Android is an international leader in mobile operating systems, ADD has become increasingly popular as a center for mobile device conventions. Unofficial participants may elect to observe different booths and displays. However, in order to partake in the festivities, one must apply to join the organization. There is an assortment of ways that one is able to join the conference, including exhibiting your own presentation, showcasing posters featuring developing applications, or instructing hands-on, interactive coding tutorials. In 2014, the Android Developer Days convention was held in Ankara, Turkey, from May 16 to May 17.
History:
Android Developer Days were created to discuss and share technological developments happening throughout the world, presented by the best Android developers around the world, with the belief that organized events are beneficial to the information technology field. The Android Developer Days utilize the positive effect of synergy, which, in this case, is the collaboration and sharing of ideas and products in order to achieve a greater effect than each could have individually. An international-level organization provides more information, experience, and inspiration for participants because synergy is involved. In addition, Android Developer Days aim to inspire developers with future trends in the field, helping to create international products and brands.
History:
2012 Convention: The inaugural Android Developer Day took place at Middle East Technical University's Cultural and Convention Center in Ankara, Turkey, as an extension of a similar event, Google Developer Day. The first ADD began on May 21, 2012, and ended the following day after approximately 30 presentations across two simultaneous sessions. In addition, 120 of the roughly 700 attendees participated in 6 workshops discussing mobile technology developments and predicting future trends in its development. Some examples of topics covered in the workshops include "Areas of HTML5 usage and Reasons," "How to use Facebook and Google accounts in Apps," and "Native or Web App Which one?". The event was made possible by 3 platinum, 2 silver, and 9 product sponsors, including General Mobile Inc., Huawei, and ASELSAN. Ankara is a prime location for information technology development, with 17 major universities in the area. Networking amongst these universities and Google Developer Group Ankara motivated ADD with the intention of featuring foreign participants, ultimately enhancing the technical entrepreneurship networks forming in Ankara.
History:
2013 Convention: The subsequent ADD returned to Ankara, Turkey, on June 14, 2013. The two-day conference, which took 9 months to prepare, featured talks from guest speakers representing the Android community, such as Lars Vogel, Eric Lafortune, Bernd Schulze, and Mark Allison.
History:
Over 1,000 people attended the event, along with 65 guest speakers, 20 of whom came from abroad. The event was supported by 15 Google Developer Groups from 7 different countries, and sponsored by 26 technical and entrepreneurial companies. The sessions available for attendees more than doubled from the previous year, with 67 sessions, seminars, workshops and discussions being held in 4 different halls.
History:
Sub Events: ADD also hosted two sub-events: the Ecahack Hackathon and an entrepreneurship marathon named Innov-a-thon'Lite Turkey. During the Ecahack Hackathon, the Android developers spent an entire 24 hours writing code. There were competitions during the hackathon, and winners received various prizes. During the second sub-event, Innov-a-thon'Lite Turkey, which lasted three hours, a Dubai-based seed accelerator program called TURN8 supported innovative ideas by strategizing investment funding and business management techniques.
2014 Convention:
As widely expected, many of the topics covered at the Android Developer Days pertain to Android devices themselves. On top of covering Android in different areas, Android application development, and Android operating systems, the conventions serve to discuss future technologies, new-generation mobile devices, and various mobile operating systems. Google is a large benefactor of Android, and consequently many of Google’s upcoming inventions involving Google Glass, Google TV, and Google Play are main attractions for the upcoming 2014 ADDs. Other topics to be discussed include, but are not limited to, App Development Best Practices, App Monetization, Ad Integration, In-app Billing, User Statistics, App Development in Mobile Operation Systems, Android NDK, Cross Platform App Development Frameworks, HTML5, Javascript, Game Development, Communication Solutions, Cloud, Augmented Reality, Social Media, Location-Based Services and Maps, Mobile Education, Mobile Payment Security, Internet of Things, Embedded Systems and Single Board Devices, Big Data Processing Optimizations in Mobile Devices, Software Development, Methodologies, Success Stories, and GWT.
2014 Convention:
The 2014 convention takes place at the METU Cultural and Convention Center in Ankara, Turkey. The venue is located at the Middle East Technical University.
2014 Convention:
Participants: As mentioned above, anyone is able to present at and attend the Android Developer Days. Attendees can register online for free. Furthermore, there are featured speakers, who are selected via an application process on the ADD webpage. The featured speakers in 2014 include: Mark Allison: a software engineer with over 30 years of experience, and author of Styling Android, a blog dedicated to the thematic styling of Android applications.
2014 Convention:
Tim Messerschmidt: a long-time mobile and web-developer. He works at PayPal and is coordinating their developer activities in Europe.
Al Sutton: a professional in selling software and intellectual property who has been working on Android-related projects for the last 5 years.
Abhisek Devkota: the Community Manager and project manager for Cyanogen Inc.
Juhani Lehtimaki: head of Android development at Snapp TV, with more than ten years of experience in Java development; author of Smashing Android UI.
2014 Convention:
Thomas Mattson: works at Vaadin as a Vaadin expert and a project manager.
Xavier Hallade: technical marketing engineer at Intel Corporation, focusing on wireless displays and native development.
Mustafa Sezgin: director of mobile engineering at SoundCloud.
Stephan Janssen: serial entrepreneur who has founded multiple successful organizations.
Ali Derbane: develops Android software at Itude Mobile.
Benjamin Weiss: Senior Software Developer at ImmobilienScout24.
Mustafa Kasap: works for Microsoft in Turkey developing applications for Windows Phones.
Fatih Isbecer: CEO of Monitise MEA.
Paresh Mayani: Android developer for 5 years; Manager @ GDG Ahmedabad and Sr. Software Engineer @ InfoStretch Solutions Pvt. Ltd.
**Philosophy of engineering**
Philosophy of engineering:
The philosophy of engineering is an emerging discipline that considers what engineering is, what engineers do, and how their work affects society, and thus includes aspects of ethics and aesthetics, as well as the ontology, epistemology, etc. that might be studied in, for example, the philosophy of science or the philosophy of technology.
History:
Engineering is the profession aimed at modifying the natural environment, through the design, manufacture and maintenance of artifacts and technological systems. It might then be contrasted with science, the aim of which is to understand nature. Engineering at its core is about causing change, and therefore management of change is central to engineering practice. The philosophy of engineering is then the consideration of philosophical issues as they apply to engineering. Such issues might include the objectivity of experiments, the ethics of engineering activity in the workplace and in society, the aesthetics of engineered artifacts, etc.
History:
While engineering seems historically to have meant devising, the distinction between art, craft and technology isn't clearcut. The Latin root ars, the Germanic root kraft and the Greek root techne all originally meant the skill or ability to produce something, as opposed to, say, athletic ability. The something might be tangible, like a sculpture or a building, or less tangible, like a work of literature. Nowadays, art is commonly applied to the visual, performing or literary fields, especially the so-called fine arts ('the art of writing'), craft usually applies to the manual skill involved in the manufacture of an object, whether embroidery or aircraft ('the craft of typesetting') and technology tends to mean the products and processes currently used in an industry ('the technology of printing'). In contrast, engineering is the activity of effecting change through the design and manufacture of artifacts ('the engineering of print technology').
Ethics:
What distinguishes engineering design from artistic design is the requirement for the engineer to make quantitative predictions of the behavior and effect of the artifact prior to its manufacture. Such predictions may be more or less accurate but usually include the effects on individuals and/or society. In this sense, engineering can be considered a social as well as a technological discipline and judged not just by whether its artifacts work, in a narrow sense, but also by how they influence and serve social values. What engineers do is subject to moral evaluation.
Ethics:
Modeling: Socio-technical systems, such as transport, utilities and their related infrastructures, comprise human elements as well as artifacts. Traditional mathematical and physical modeling techniques may not take adequate account of the effects of engineering on people and culture. The civil engineering discipline makes elaborate attempts to ensure that a structure meets its specifications and other requirements prior to its actual construction. The methods employed are well known as analysis and design. Systems modelling and description makes an effort to extract the generic unstated principles behind the engineering approach.
Ethics:
Product life cycle: The traditional engineering disciplines seem discrete, but the engineering of artifacts has implications that extend beyond such disciplines into areas that might include psychology, finance and sociology. The design of any artifact will then take account of the conditions under which it will be manufactured, the conditions under which it will be used, and the conditions under which it will be disposed of. Engineers can consider such "life cycle" issues without losing the precision and rigor necessary to design functional systems.
Publications:
Books: P. & Gunn A.S. (1998), Engineering, Ethics, and the Environment, Cambridge University Press, New York; Addis W (1990) Structural Engineering: The Nature of Theory and Design, Ellis Horwood, Chichester, UK; Addis W (1986) Theory and Design in Civil and Structural Engineering: A Study in the History and Philosophy of Engineering, PhD Thesis, University of Reading; Bucciarelli L.L. (2003) Engineering Philosophy, Delft University Press, Delft; Bush V. (1980) Science, The Endless Frontier, National Science Foundation Press, Washington DC; Beale N., Peyton-Jones S.L. et al. (1999) Cybernauts Awake: Ethical and Spiritual Implications of Computers, Information Technology and the Internet, Church House Publishing, ISBN; Cutcliffe S.H. (2000) Ideas, Machines and Values: An Introduction to Science, Technology and Social Studies, Rowman and Littlefield, Lanham, MD; Davis, M. (1998) Thinking like an Engineer: Studies in the Ethics of a Profession, Oxford University Press, New York.
Publications:
Florman, Samuel C. (1981) Blaming Technology: The Irrational Search for Scapegoats, St Martin's Press, New York Florman, Samuel C. (1987) The Civilized Engineer, St Martin's Press, New York Florman, Samuel C. (1968) Engineering and the Liberal Arts : A Technologist's Guide to History, Literature Florman, Samuel C. (1994) The Existential Pleasures of Engineering, 2nd ed, St Martin's Press, New York Florman, Samuel C. (1996) The Introspective Engineer, St Martin's Press, New York Goldman S.L. (1991) "The social captivity of Engineering", Critical Perspectives on non academic Science and Engineering, (ed Durbin P.T.), Lehigh University Press, Bethlehem, PA Goldman S.L. (1990) "Philosophy, Engineering and Western Culture", in Broad and Narrow interpretations of Philosophy of Technology, (ed Durbin P.T.), Kluwer,Amsterdam Harris E.C, Pritchard M.S. & Rabins M.J. (1995), Engineering Ethics: Concepts and Cases, Wadsworth, Belmont, CA Johnston, S., Gostelow, P., Jones, E. (1999), Engineering and Society: An Australian perspective, 2nd Ed. Longman, Lewis, Arthur O. Jr. ed. (1963), Of Men and Machines, E.P. Dutton Martin M.W. & Schinzinger R (1996), Ethics in Engineering, 3rd ed. McGraw-Hill, New York Mitcham C. (1999), Thinking through Technology: The Path between Engineering and Philosophy, University of Chicago Press, Chicago, pp. 19–38.
Publications:
Mumford L. (1970) The Myth of the Machine, Harcourt Brace Javonovich, New York Blockley, David (1980) The Nature of Structural Design and Safety, Ellis Howood, Chichester, UK. ISBN 0-85312-179-6 (Free download) Blockley, David (Editor) (1992) Engineering Safety, McGraw Hill, ISBN 0-07-707593-5 (Free download) Blockley, David (2010) A Very Short Introduction to Engineering Oxford University Press, ISBN 9780199578696 Petroski, Henry (1992) To Engineer Is Human: The Role of Failure in Successful Design Petroski, Henry (2010) The Essential Engineer: Why Science Alone Will Not Solve Our Global Problems Simon H. (1996), The Sciences of the Artificial, 3rd ed. MIT Press, Cambridge, MA Unger S.H. (1994), Controlling Technology: Ethics and the Responsible Engineer, 2nd ed., John Wiley, New York Vincenti W.G. (1990) What Engineers Know and How They Know It: Analytical Studies from Aeronautical History, The Johns Hopkins University Press, Baltimore, Md.
Publications:
Anthonie Meijers, ed. (2009). Philosophy of technology and engineering sciences. Handbook of the Philosophy of Science. Vol. 9. Elsevier. ISBN 978-0-444-51667-1.
Publications:
Jeroen van den Hoven, Seumas Miller & Thomas Pogge (2017). Designing in Ethics. Cambridge University Press, Cambridge. ISBN 978-051-18-4431-7 Priyan Dias (2019). Philosophy for Engineering: Practice, Context, Ethics, Models, Failure. Springer Singapore. ISBN 978-981-15-1270-4 Carl Mitcham (2019). Steps toward a Philosophy of Engineering: Historico-Philosophical and Critical Essays. ISBN 978-1-78661-126-0 Articles Philosophy in the Making by Natasha McCarthy Ingenia March 26, 2006 Creed M.J. (1993) "Introducing Structures in a Modern Curriculum", Proceedings of the Conference, Innovation and Change in Civil Engineering Education, The Queen's University of Belfast Davis, M. (2001) The Professional Approach to Engineering Ethics: Five Research Questions, Science and Engineering Ethics 7 (July 2001): 379-390.
Publications:
Lewin D (1981) Engineering Philosophy - The Third Culture, Paper to the Royal Society, UK Mitcham C. (1994), "Engineering Design Research and Social Responsibility", Ethics of Scientific Research, pp. 153–196 and 221-223 Hess, J.L. and Fore, G., (2018). "A systematic literature review of US engineering ethics interventions", Science and Engineering Ethics, 24(2), pp.551-583.
Mitcham, C. and Englehardt, E.E., 2019. "Ethics across the curriculum: Prospects for broader (and deeper) teaching and learning in research and engineering ethics", Science and Engineering Ethics, 25(6), pp.1735-1762.
**Tris(2-phenylpyridine)iridium**
Tris(2-phenylpyridine)iridium:
Tris(2-phenylpyridine)iridium, abbreviated [Ir(ppy)3], is the organoiridium complex with the formula Ir(C6H4-C5H4N)3. The complex, a yellow-green solid, is a derivative of Ir3+ bound to three monoanionic 2-pyridinylphenyl ligands. It is electroluminescent, emitting green light. The complex adopts a facial stereochemistry, which is chiral.
Tris(2-phenylpyridine)iridium:
The complex is prepared by cyclometalation reactions of 2-phenylpyridine and iridium trichloride, as represented by this idealized equation: IrCl3 + 3 C6H5-C5H4N → Ir(C6H4-C5H4N)3 + 3 HCl. The complex and many analogues have been investigated for application in photoredox catalysis. Its excited state has a reduction potential of −2.14 V, nearly 1 V more negative than the reduction potential of excited [Ru(bipy)3]2+.
**Cleaved amplified polymorphic sequence**
Cleaved amplified polymorphic sequence:
The cleaved amplified polymorphic sequence (CAPS) method is a technique in molecular biology for the analysis of genetic markers. It is an extension to the restriction fragment length polymorphism (RFLP) method, using polymerase chain reaction (PCR) to more quickly analyse the results.
Cleaved amplified polymorphic sequence:
Like RFLP, CAPS works on the principle that genetic differences between individuals can create or abolish restriction endonuclease recognition sites, and that these differences can be detected in the resulting DNA fragment lengths after digestion. In the CAPS method, PCR amplification is directed across the altered restriction site, and the products are digested with the restriction enzyme. When fractionated by agarose or polyacrylamide gel electrophoresis, the digested PCR products give readily distinguishable patterns of bands. Alternatively, the amplified segment can be analyzed by allele-specific oligonucleotide (ASO) probes, a process that can often be done by a simple dot blot.
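The scoring of a CAPS marker can be sketched in a few lines of code. The Python sketch below is illustrative only: the toy 60-bp amplicon sequences are invented, and the choice of EcoRI (recognition site GAATTC) is an arbitrary example of a restriction enzyme, not something specified in the text.

```python
# Hypothetical illustration of the CAPS principle: a single-nucleotide change
# abolishes an EcoRI recognition site (GAATTC), so digesting the PCR product
# yields different fragment-length patterns for the two alleles.

ECORI_SITE = "GAATTC"

def digest(amplicon: str, site: str = ECORI_SITE) -> list[int]:
    """Return fragment lengths after cutting at every occurrence of `site`.

    EcoRI cuts between G and AATTC; for fragment lengths on a gel, the
    cut offset within the site (here 1) is what matters.
    """
    fragments, start = [], 0
    pos = amplicon.find(site)
    while pos != -1:
        cut = pos + 1                 # G^AATTC
        fragments.append(cut - start)
        start = cut
        pos = amplicon.find(site, pos + 1)
    fragments.append(len(amplicon) - start)
    return fragments

# Toy 60-bp amplicons: allele A carries the intact site; allele B has a
# G->A change that destroys it (both sequences invented for illustration).
allele_a = "ATGC" * 6 + "GAATTC" + "TTGCA" * 6
allele_b = "ATGC" * 6 + "AAATTC" + "TTGCA" * 6

print("allele A fragments:", digest(allele_a))  # two bands -> site present
print("allele B fragments:", digest(allele_b))  # one band  -> site absent
```

On a gel, allele A would run as two bands (25 and 35 bp here) and allele B as a single 60-bp band, which is exactly the "readily distinguishable pattern" the method relies on.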
**Artificial iris**
Artificial iris:
An artificial iris is a surgically implanted device to treat damage or absence of the iris of the eye. In 2018, the United States Food and Drug Administration approved the first artificial iris, the CustomFlex Artificial Iris, developed and produced by HumanOptics Holding AG and made of medical-grade silicone. The device improves vision by controlling the amount of light let into the eye. It also improves cosmetic appearance. There are a number of surgical techniques for implanting the prosthetic.
**Nitpicking**
Nitpicking:
Nitpicking is a term, first attested in 1956, that describes the action of giving too much attention to unimportant detail. A person who nitpicks is termed a nitpicker. The terminology originates from the common act of manually removing nits (the eggs of lice, generally head lice) from another person's hair. As nitpicking inherently requires fastidious attention to detail, the term has been appropriated to describe the practice of meticulously searching for minor, even trivial, errors in detail. Nitpicking has been used to describe dishonest insurers and bullying employers, or even bullying family members.
**Chloride channel**
Chloride channel:
Chloride channels are a superfamily of poorly understood ion channels specific for chloride. These channels may conduct many different ions, but are named for chloride because its concentration in vivo is much higher than that of other anions. Several families of voltage-gated channels and ligand-gated channels (e.g., the CaCC families) have been characterized in humans.
Voltage-gated chloride channels perform numerous crucial physiological and cellular functions, such as controlling pH, maintaining volume homeostasis, transporting organic solutes, and regulating cell migration, proliferation, and differentiation. Based on sequence homology, the chloride channels can be subdivided into a number of groups.
General functions:
Voltage-gated chloride channels are important for setting the cell resting membrane potential and maintaining proper cell volume. These channels conduct Cl− or other anions such as HCO3−, I−, SCN−, and NO3−. The structure of these channels is unlike that of other known channels. The chloride channel subunits contain between 1 and 12 transmembrane segments. Some chloride channels are activated only by voltage (i.e., voltage-gated), while others are activated by Ca2+, other extracellular ligands, or pH.
CLC family:
The CLC family of chloride channels contains 10 or 12 transmembrane helices. Each protein forms a single pore. It has been shown that some members of this family form homodimers. In terms of primary structure, they are unrelated to known cation channels or other types of anion channels. Three CLC subfamilies are found in animals. CLCN1 is involved in setting and restoring the resting membrane potential of skeletal muscle, while other channels play important parts in solute concentration mechanisms in the kidney. These proteins contain two CBS domains. Chloride channels are also important for maintaining safe ion concentrations within plant cells.
CLC family:
Structure and mechanism The CLC channel structure has not yet been resolved; however, the structure of the CLC exchangers has been resolved by X-ray crystallography. Because the primary structures of the channels and exchangers are so similar, most assumptions about the structure of the channels are based on the structure established for the bacterial exchangers.
Each channel or exchanger is composed of two similar subunits—a dimer—each subunit containing one pore. The proteins are formed from two copies of the same protein—a homodimer—though scientists have artificially combined subunits from different channels to form heterodimers. Each subunit binds ions independently of the other, meaning conduction or exchange occurs independently in each subunit.
CLC family:
Each subunit consists of two related halves oriented in opposite directions, forming an ‘antiparallel’ structure. These halves come together to form the anion pore. The pore has a filter through which chloride and other anions can pass, but lets little else through. These water-filled pores filter anions via three binding sites—Sint, Scen, and Sext—which bind chloride and other anions. The names of these binding sites correspond to their positions within the membrane. Sint is exposed to intracellular fluid, Scen lies inside the membrane or in the center of the filter, and Sext is exposed to extracellular fluid.[4] Each binding site binds different chloride anions simultaneously. In the exchangers, these chloride ions do not interact strongly with one another, due to compensating interactions with the protein. In the channels, the protein does not shield chloride ions at one binding site from the neighboring negatively charged chlorides. Each negative charge exerts a repulsive force on the negative charges next to it. Researchers have suggested that this mutual repulsion contributes to the high rate of conduction through the pore. CLC transporters shuttle H+ across the membrane. The H+ pathway in CLC transporters utilizes two glutamate residues—one on the extracellular side, Gluex, and one on the intracellular side, Gluin. Gluex also serves to regulate chloride exchange between the protein and the extracellular solution. This means that the chloride and the proton share a common pathway on the extracellular side, but diverge on the intracellular side. CLC channels also depend on H+, but for gating rather than Cl− exchange. Instead of utilizing gradients to exchange two Cl− for one H+, the CLC channels transport one H+ while simultaneously transporting millions of anions. This corresponds to one cycle of the slow gate.
CLC family:
Eukaryotic CLC channels also contain cytoplasmic domains. These domains have a pair of CBS motifs whose function is not yet fully characterized, but their importance is illustrated by the pathologies resulting from their mutation. Thomsen's disease, Dent's disease, infantile malignant osteopetrosis, and Bartter's syndrome are all genetic disorders due to such mutations.
CLC family:
At least one role of the cytoplasmic CBS domains concerns regulation by adenosine nucleotides. Particular CLC transporters and channels have modulated activity when bound with ATP, ADP, AMP, or adenosine at the CBS domains. The specific effect is unique to each protein, but the implication is that certain CLC transporters and channels are sensitive to the metabolic state of the cell.
CLC family:
Selectivity The Scen acts as the primary selectivity filter for most CLC proteins, allowing the following anions to pass through, from most selected to least: SCN−, Cl−, Br−, NO3−, I−. Altering a serine residue at the selectivity filter, labeled Sercen, to a different amino acid alters the selectivity.
CLC family:
Gating and kinetics Gating occurs through two mechanisms: protopore or fast gating and common or slow gating. Common gating involves both protein subunits closing their pores at the same time (cooperation), while protopore gating involves independent opening and closing of each pore. As the names imply, fast gating occurs at a much faster rate than slow gating. The precise molecular mechanisms of gating are still being studied.
CLC family:
For the channels, when the slow gate is closed, no ions permeate through the pore. When the slow gate is open, the fast gates open spontaneously and independently of one another. Thus, the protein can have both gates open, both gates closed, or just one of the two gates open. Single-channel patch-clamp studies demonstrated this biophysical property even before the dual-pore structure of CLC channels had been resolved. Each fast gate opens independently of the other, and the ion conductance measured during these studies reflects a binomial distribution. H+ transport promotes opening of the common gate in CLC channels. For every opening and closing of the common gate, one H+ is transported across the membrane. The common gate is also affected by the binding of adenosine nucleotides to the intracellular CBS domains. Inhibition or activation of the protein by these domains is specific to each protein.
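The binomial statistics mentioned above are easy to make concrete. The short sketch below uses an arbitrary illustrative open probability of p = 0.7 (not a measured value) to compute the expected occupancy of the three conductance levels of a dual-pore channel while the slow gate is open.

```python
# A minimal numeric sketch of the binomial gating statistics described above:
# with the common (slow) gate open, each of the two protopore (fast) gates
# opens independently with some probability p, so the number of conducting
# pores (0, 1, or 2) follows a binomial distribution B(n=2, p).
# The value p = 0.7 is an arbitrary illustration, not a measured figure.

from math import comb

def pore_level_probabilities(p_open: float, n_pores: int = 2) -> list[float]:
    """P(k pores open) for k = 0..n_pores, assuming independent fast gates."""
    return [comb(n_pores, k) * p_open**k * (1 - p_open)**(n_pores - k)
            for k in range(n_pores + 1)]

for k, prob in enumerate(pore_level_probabilities(0.7)):
    print(f"P({k} pores open) = {prob:.2f}")
# Patch-clamp records of a dual-pore channel would show current levels whose
# occupancies match these proportions (0.09, 0.42, 0.49 for p = 0.7).
```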
CLC family:
Function When open, CLC channels allow chloride to flow down its electrochemical gradient. These channels are expressed on the cell membrane. CLC channels contribute to the excitability of these membranes as well as transporting ions across them. The CLC exchangers are localized to intracellular compartments like endosomes or lysosomes and help regulate the pH of their compartments.
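The direction of that flow is set by the chloride electrochemical gradient, and the standard Nernst equation gives the membrane potential at which net flux stops. The sketch below uses typical textbook concentrations for a mammalian cell purely as an illustration; the text does not specify values.

```python
# The Nernst equation gives the membrane potential at which net Cl- flux
# through an open channel is zero. Concentrations below are typical
# textbook values for a mammalian cell, used purely as an illustration.

from math import log

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
T = 310.0      # body temperature, K

def nernst_potential(conc_out: float, conc_in: float, z: int) -> float:
    """Equilibrium potential in volts for an ion of valence z."""
    return (R * T) / (z * F) * log(conc_out / conc_in)

# Cl- has valence -1; ~110 mM outside vs ~10 mM inside gives about -64 mV.
e_cl = nernst_potential(110.0, 10.0, z=-1)
print(f"E_Cl = {e_cl * 1000:.1f} mV")
```

At membrane potentials more positive than this value, chloride flows inward through an open channel; at more negative potentials, it flows outward.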
CLC family:
Pathology Bartter's syndrome, which is associated with renal salt wasting and hypokalemic alkalosis, is due to the defective transport of chloride ions and associated ions in the thick ascending loop of Henle; CLCNKB has been implicated. Another inherited disease that affects the kidneys is Dent's disease, characterised by low-molecular-weight proteinuria and hypercalciuria, in which mutations in CLCN5 are implicated. Thomsen disease is associated with dominant mutations, and Becker disease with recessive mutations, in CLCN1.
CLC family:
Genes CLCN1, CLCN2, CLCN3, CLCN4, CLCN5, CLCN6, CLCN7, CLCNKA, CLCNKB. BSND encodes barttin, an accessory beta subunit for CLCNKA and CLCNKB.
E-ClC family:
Members of the Epithelial Chloride Channel (E-ClC) Family (TC# 1.A.13) catalyze bidirectional transport of chloride ions. Mammals have multiple isoforms (at least 6 different gene products plus splice variants) of epithelial chloride channel proteins, catalogued into the chloride channel accessory (CLCA) family. The first member of this family to be characterized was a Ca2+-regulated chloride channel protein of respiratory epithelium, isolated from bovine tracheal apical membranes. It was biochemically characterized as a 140 kDa complex. The bovine E-ClC protein has 903 amino acids and four putative transmembrane segments. The purified complex, when reconstituted in a planar lipid bilayer, behaved as an anion-selective channel. It was regulated by Ca2+ via a calmodulin kinase II-dependent mechanism. Distant homologues may be present in plants, ciliates, and bacteria (Synechocystis and Escherichia coli), so at least some domains within E-ClC family proteins have an ancient origin.
CLIC family:
The Chloride Intracellular Ion Channel (CLIC) Family (TC# 1.A.12) consists of six conserved proteins in humans (CLIC1, CLIC2, CLIC3, CLIC4, CLIC5, CLIC6). Members exist as both monomeric soluble proteins and integral membrane proteins, where they function as chloride-selective ion channels. These proteins are thought to function in the regulation of membrane potential and in transepithelial ion absorption and secretion in the kidney. They belong to the glutathione S-transferase (GST) superfamily.
CLIC family:
Structure They possess one or two putative transmembrane α-helical segments (TMSs). The bovine p64 protein is 437 amino acyl residues in length and has the two putative TMSs at positions 223-239 and 367-385. The N- and C-termini are cytoplasmic, and the large central luminal loop may be glycosylated. The human nuclear protein (CLIC1 or NCC27) is much smaller (241 residues) and has only one putative TMS at positions 30-36. It is homologous to the second half of p64.
CLIC family:
Structural studies showed that in the soluble form, CLIC proteins adopt a GST fold with an active site exhibiting a conserved glutaredoxin monothiol motif, similar to the omega class GSTs. Al Khamici et al. demonstrated that CLIC proteins have glutaredoxin-like glutathione-dependent oxidoreductase enzymatic activity. CLICs 1, 2 and 4 demonstrate typical glutaredoxin-like activity using 2-hydroxyethyl disulfide as a substrate. This activity may regulate CLIC ion channel function.
CLIC family:
Transport reaction The generalized transport reaction believed to be catalyzed by chloride channels is: Cl− (cytoplasm) → Cl− (intraorganellar space)
CFTR:
CFTR is a chloride channel belonging to the superfamily of ABC transporters. Each channel has two transmembrane domains and two nucleotide-binding domains. ATP binding to both nucleotide-binding domains causes these domains to associate, driving further conformational changes that open the ion pore. When ATP is hydrolyzed, the nucleotide-binding domains dissociate again and the pore closes.
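The gating cycle just described can be pictured as a two-state machine. The sketch below is a schematic of that description only, not a kinetic model; the state names and class are invented for illustration.

```python
# A toy state machine for the CFTR gating cycle sketched above: ATP binding
# at the two nucleotide-binding domains (NBDs) drives their association and
# pore opening; ATP hydrolysis dissociates them and closes the pore.

from enum import Enum, auto

class CftrState(Enum):
    APO = auto()        # no ATP bound, NBDs apart, pore closed
    ATP_BOUND = auto()  # ATP at both NBDs, NBDs associated, pore open

class Cftr:
    def __init__(self) -> None:
        self.state = CftrState.APO

    @property
    def pore_open(self) -> bool:
        return self.state is CftrState.ATP_BOUND

    def bind_atp(self) -> None:
        # ATP at both NBDs -> NBD association -> conformational change opens pore
        self.state = CftrState.ATP_BOUND

    def hydrolyze_atp(self) -> None:
        # hydrolysis -> NBDs dissociate -> pore closes
        self.state = CftrState.APO

channel = Cftr()
for step in (channel.bind_atp, channel.hydrolyze_atp):
    step()
    print(f"{step.__name__}: pore open = {channel.pore_open}")
```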
CFTR:
Pathology Cystic fibrosis is caused by mutations in the CFTR gene on chromosome 7, the most common mutation being deltaF508 (a deletion of a codon coding for phenylalanine, which occupies the 508th amino acid position in the normal CFTR polypeptide). Any of these mutations can prevent the proper folding of the protein and induce its subsequent degradation, resulting in decreased numbers of chloride channels in the body. This causes the buildup of mucus in the body and chronic infections.
Other chloride channels and families:
GABAA receptor, glycine receptor, calcium-activated chloride channel, anion-conducting channelrhodopsin
**Vin gris**
Vin gris:
Vin gris (French: [vɛ̃ gʁi]) is a variant of rosé wine made from red grapes, in particular Pinot noir and Pinot gris. Pinot noir is a black grape, but can also be used to make rosé or white wine. When the grapes are brought to the winery and crushed, the juice is run off and removed from contact with the skin, leaving the color and flavor compounds from the skin behind. The juice is then typically fermented in stainless steel tanks before being bottled shortly after, without any aging in oak barrels.
Vin gris:
Producing a small volume of vin gris (or rosé) can also be used as a technique to improve Pinot noir. Removing some clear juice increases the concentration of colors and flavor compounds from the skins in the remaining juice intended for making red wine; the resulting rosé is known as a saignée (bled).
Grape varieties:
Another grape used to produce vin gris is Gamay, particularly in Lorraine, where the Côtes de Toul zone produces a light vin gris. The vinification is the same as with Pinot noir (short contact of the white juice with the red skins during the pressing), but the fruity flavor of Gamay greatly changes the taste of the wine. Champagne is often made using this process, in which case it is known as blanc de noirs.
Grape varieties:
Moschofilero, an indigenous grape of Arcadia, Greece, has pink-to-purple skin and white flesh, and makes blanc de gris wines of the Mantineia appellation of origin.
**SW Sextantis variable**
SW Sextantis variable:
SW Sextantis variable stars are a kind of cataclysmic variable star; they are double-star systems in which there is mass transfer from a red dwarf to a white dwarf forming a stable accretion disc around the latter. Unlike other non-magnetic cataclysmic variables, the emission lines from hydrogen and helium are not doubled, except briefly near phase 0.5.
Characteristics:
SW Sextantis stars have an orbital period between 2.8 and 4 hours; most systems were discovered by surveys of eclipsing variables, so the orbit is nearly edge-on with respect to the Earth. Their spectra resemble those of a dwarf nova in outburst, with signs of a permanently ionised accretion disc. Material is constantly flowing into the disc from the companion star, and friction within the disc causes it to emit optical light. It is more difficult to find SW Sextantis systems with low inclination, since it is necessary to examine many stellar spectra without being able to restrict to eclipsing variables; however, surveys have been performed, and suggest that some of the observed properties of SW Sextantis stars are accidental results of a sample restricted to high-inclination systems. Emission lines of hydrogen (the Balmer series) and helium are observed, and are not doubled (as one would expect by Doppler shift of light emitted from the edges of a fast-rotating disc), but the wings are broadened to the point that the spread of source velocities can be as much as 4000 km/s. For a brief period near phase 0.5 of their orbits, SW Sextantis stars do show doubling of their emission lines, and this is a defining character of the class. In eclipsing systems, the emission lines are scarcely detected at minimum light because the white dwarf and the central part of the accretion disc are hidden behind the red dwarf. In the ultraviolet we observe emission lines from the white dwarf, which indicate an unusually high temperature and imply a high accretion rate. Furthermore, the radial velocity of an SW Sextantis star determined from the disc emission lines is not the same as that determined from the white dwarf.
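A back-of-envelope calculation shows what a 4000 km/s velocity spread does to a line profile. The sketch below applies the non-relativistic Doppler relation Δλ = λ0·v/c; the choice of the hydrogen H-alpha line as the example wavelength is an assumption for illustration, as the text does not single out a specific line.

```python
# Doppler broadening implied by the ~4000 km/s velocity spread quoted above,
# using the non-relativistic relation dlambda = lambda0 * v / c and the
# hydrogen H-alpha line (656.3 nm) as an example wavelength.

C_KM_S = 299_792.458          # speed of light, km/s
H_ALPHA_NM = 656.3            # rest wavelength of H-alpha, nm

def doppler_shift_nm(rest_nm: float, v_km_s: float) -> float:
    """Wavelength shift for a line-of-sight velocity v (non-relativistic)."""
    return rest_nm * v_km_s / C_KM_S

shift = doppler_shift_nm(H_ALPHA_NM, 4000.0)
print(f"H-alpha shift at 4000 km/s: {shift:.1f} nm")    # ~8.8 nm per wing
print(f"full width of the wings:    {2 * shift:.1f} nm") # red + blue wings
```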
Characteristics:
The orbital period of SW Sextantis systems is always just above the period gap, suggesting a joint-development phase for these cataclysmic variables.
Interpretation:
Models of SW Sextantis stars must explain the high mass transfer rate and the period distribution just above the period gap. The standard theory of cataclysmic variables suggests that the rate of mass transfer is determined by loss of angular momentum due to magnetic fields. The stellar wind of the red dwarf sends ionised plasma into space, which travels along magnetic field lines; indeed, it is trapped in the magnetic field lines and follows the rotation of the star. Since the magnetic field accelerates the escaping plasma, the rotation of the star is braked. This in turn reduces the total angular momentum of the double-star system, which, along with the rearrangement of the matter in the system, leads to the orbital radius getting smaller, which keeps the mass transfer rate steady. Under this model, the core of the red dwarf is rotating faster than the orbital period. As mass transfer causes the radius of the star to shrink, conservation of angular momentum means that it spins faster, and this means the dynamo effect generates a stronger magnetic field. This increases the magnetic braking effect and accordingly the mass transfer rate. Another interpretation of SW Sextantis stars is that the high mass transfer rate is only temporary. Some cataclysmic variables (e.g. the classical novae RR Pictoris, XX Tauri and V728 Scorpii) have periods just above the period gap, and this is interpreted as part of the hibernation model, where, after a nova eruption, the white dwarf is unusually hot; it heats the red dwarf, causing a higher mass transfer rate until the white dwarf has cooled down again. As it cools, the red dwarf shrinks and the mass transfer rate drops to quite low levels; eventually loss of orbital angular momentum causes the stars to get closer together again, and mass transfer resumes. In this model, SW Sextantis stars represent a stage in the life of a cataclysmic variable either shortly before or shortly after a nova eruption.
Examples:
PX Andromedae, DW Ursae Majoris, LS Pegasi, BB Doradus, SW Sextantis (prototype), V533 Herculis. Donald W. Hoard at the Max Planck Institute for Astronomy in Heidelberg maintains a list of SW Sextantis stars mentioned in the literature, and a description of the characteristics used to identify them.
**Lead service line**
Lead service line:
A lead service line (LSL, also known as lead service pipe, and lead connection pipe) is a pipe made of lead which is used in potable water distribution to connect a water main to a user's premises.
Lead service line:
Lead exposure is a public health hazard as it causes developmental effects in fetuses, infants, and young children. It also has other health effects in adults. According to the World Health Organization, the presence of lead service lines is the most significant contributor to lead contamination in drinking water in many countries. The most certain way to eliminate lead exposure in drinking water from lead service lines is to replace them with pipes made from other materials. However, the replacements are time-consuming and costly. The difficulty is exacerbated in many locations by an ownership structure with shared responsibility between water utilities and property owners, which requires cooperation between the two entities. Some water utilities employ corrosion control as a short-term solution while working through long-term replacement projects. A potential issue with corrosion control is the need for constant monitoring of its effectiveness. There have been widespread lead exposures resulting from failures of corrosion control, such as the Flint water crisis.
Background:
Lead has been associated with plumbing since ancient times. The chemical symbol of lead (Pb) derives from the Latin word plumbum, the source of the English word "plumbing", as lead was used to make water pipes. Lead water lines have also been known to be harmful since ancient times, though this is contested by industry trade groups within the United States. Lead pipes were preferred over iron pipes because they lasted longer and were more flexible.
Background:
In modern times, lead was still widely used in water distribution systems and plumbing hardware before the early 20th century, including lead pipes, leaded solder, and leaded alloys. One part of these systems is the connection between the water mains and the water user locations. A service line is a pipe that makes this connection, and it too was made of lead in those days. The first portion of the service line, called a gooseneck, which connects to a valve at the water main, needs to be flexible to allow some movement. Lead goosenecks (also called lead service connections or LSCs) were commonly used at the time due to their durability and flexibility. In colder-weather areas, the connection between the water main and the rest of the service line is subjected to expansion and contraction during changing temperatures. When a stiffer service line made of galvanized steel pipe was used, a lead gooseneck was installed to connect it to the water main to reduce breakages from such expansion and contraction. From the mid-1800s to the early 1900s, many communities started to realize the health risks of lead and began to phase out some lead-containing products. In Australia, the use of lead service lines was restricted in the 1930s, while other countries continued the practice of using lead service lines decades later. An example is the United States, where many cities were allowed to use lead service lines up to the 1980s. Not only were they allowed, some parts of the United States mandated the use of lead service lines until 1987, primarily due to lobbying by lead manufacturers and plumbing unions. This resulted in as many as 3.3 million lead service lines and 6.4 million lead goosenecks in the country. In England and Wales, there were about 8.9 million homes with lead service lines as of 1997. In the 2010s, one-third of American communities still had lead service lines, with estimates of up to six million. Elimination has been extremely difficult due to the high cost of identifying, locating, removing, and preventing the many potential sources of lead in the various water distribution systems in the United States.
Health effects:
Lead exposure, even at low levels, can cause neurological effects, especially in young children, young women, and developing fetuses. In fetuses, lead in the mother's bones is released along with calcium as part of fetal bone formation. Lead exposure can also cross the placental barrier into a fetus. This can cause premature birth, growth issues, and death of the fetus. In infants, lead exposure from the mother can pass through breastfeeding. In children, the effects of lead exposure include learning problems, slow growth, and lower IQ. In adults, low-level exposure can cause hypertension, cognitive issues, and reproductive harm.
Regulations:
The World Health Organization (WHO) published the first edition of the Guidelines for Drinking-water Quality (GDWQ) in 1984 to replace the 1970 European Standards for Drinking-Water and the 1971 International Standards for Drinking-Water. The publication recommended limits on contaminants in drinking water, setting the value for lead at not more than 0.05 mg/L based on an assumption about various sources of lead intake and the provisional tolerable weekly intake of 3 mg of lead per adult that was established by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) in 1972. However, no safe levels had been defined. In 1986, JECFA updated the provisional tolerable weekly intake level of lead for infants and children to be based on body weight, at 25 micrograms of lead per kilogram of body weight. JECFA reconfirmed this provisional tolerable value and extended it to all age groups in 1993. When WHO published the second edition of the GDWQ in 1996, it based the guideline on the new JECFA value, with the assumptions that 50% of lead exposure comes from drinking water, that a 5-kg infant consumes 0.75 liters from bottles per day, and that infants are the most sensitive subgroup. Therefore, WHO established a guideline value for lead concentration in drinking water not to exceed 0.01 mg/L.
Regulations:
Argentina As of early 2020, Argentina sets a standard of 0.05 mg/L based on Resolution no. 523/95-MTSS, which is an amendment of law 19587.
Australia In 2004, Australia lowered the lead exposure limit from 0.05 to 0.01 mg/L through the 2004 Australian Drinking Water Guidelines. However, this is a guideline, not a mandatory standard.
Regulations:
European Union On 3 November 1998, the European Union adopted Directive 98/83/EC to set standards on drinking water. This included a plan to lower lead contamination in the water distribution systems of member states. The Directive set the maximum lead concentration in drinking water at 0.025 mg/L by 2003, and 0.01 mg/L by 2013. A study in 1999 gave an estimate of the lead service lines in some European countries. Ireland, the United Kingdom, France, Portugal, and Belgium all had higher percentages of lead lines, ranging between 15% and 51%. Germany, Spain, Italy, Luxembourg, and the Netherlands had between 3% and 9%, while Denmark and Greece had less than 1%. The approaches taken to reduce lead exposure in water distribution systems to meet that goal have also differed. For example, the United Kingdom took the short- and medium-term strategy of dosing the water with orthophosphate as a corrosion control measure and considered lead service line replacement the long-term strategy. By 2010 (3 years before the new lower standard), 95% of public water supplies were treated with orthophosphate. The tests showed 99.8% compliance with the 0.025 mg/L 2003 standard and 99.0% compliance with the 0.01 mg/L 2013 standard. However, many other European countries considered the practice of adding orthophosphate to the water supply undesirable, as it would result in sewage with higher concentrations of nutrients. That could potentially create problems of harmful algal blooms. An example of a country that took another approach is Germany. The southern part of Germany had prohibited lead pipes more than 100 years ago; however, northern Germany continued to use lead pipes until the 1970s. Germany's approach to meeting the new standard was to focus on getting rid of lead service lines. Water utilities in northern Germany had already been working on lead service line replacements since the adoption of the Directive, in order to meet the 2003 standard.
Regulations:
Canada In 1992, the federal government set the guideline for the Maximum Allowable Concentration (MAC) of lead in drinking water at 0.01 mg/L. On 8 March 2019, Health Canada updated the guideline to lower the MAC of lead to 0.005 mg/L, one of the lowest values in the world. Regulation of these guidelines is performed at the provincial level and is inconsistent. On 4 November 2019, Concordia University published a year-long study which found that one-third of water samples from 11 major Canadian cities exceeded the national guideline for lead, with the highest levels recorded from samples in Montreal, Prince Rupert, and Regina. It was also found that some municipalities only had estimates of the number of lead service lines still in use, with no exact data available.
Regulations:
United States As of 2019, federal regulations in the United States specify an "action level" for lead of 0.015 mg/L. A public water system is required to monitor its water supply at customer locations. If more than 10% of tap water samples exceed the lead action level (or the copper action level of 1.3 ppm), the supplier must take additional steps to control corrosion. Other actions may include installation of treatment, checking of source water, removal of lead-containing plumbing, and public education. If corrosion control and source water measures fail to keep lead exposure under the action level, water utilities must initiate a lead service line replacement program, replacing annually at least 7% of the total lead service lines identified at the beginning of the program.
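The "more than 10% of samples" trigger amounts to a 90th-percentile test against the action level. The sketch below shows that decision rule in code; the sample values are invented purely for illustration.

```python
# A minimal sketch of the sampling rule described above: action is triggered
# when more than 10% of customer-tap samples exceed 0.015 mg/L (equivalently,
# when the 90th-percentile sample exceeds the action level).
# The sample values below are invented for illustration.

LEAD_ACTION_LEVEL_MG_L = 0.015

def exceeds_action_level(samples_mg_l: list[float],
                         action_level: float = LEAD_ACTION_LEVEL_MG_L) -> bool:
    """True if more than 10% of samples exceed the action level."""
    over = sum(1 for s in samples_mg_l if s > action_level)
    return over > 0.10 * len(samples_mg_l)

samples = [0.002, 0.004, 0.016, 0.009, 0.001,
           0.020, 0.003, 0.005, 0.007, 0.006]   # 2 of 10 samples exceed
if exceeds_action_level(samples):
    print("Action level exceeded: corrosion control / replacement steps required")
else:
    print("Within action level")
```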
Regulations:
Uruguay Uruguay set the lead limit for drinking water at 0.05 mg/L in 2000 through Decree 315/94, 2nd edition. It also banned lead water pipes and fittings in 2004. The country set new standards in 2011 through Decree 375/11 to lower the exposure level to 0.03 mg/L and to achieve the 0.01 mg/L level by 2021.
Replacements:
Responsibilities There are two parts to a service line. The first part is the pipe that connects the water main to the curb stop, a stop valve located around the street curb or the property line. That first section is called the communication pipe. The second part is the pipe that connects the curb stop to the building inlet or a water meter. This part is called the supply pipe. Depending on the local water utility, sometimes the meter is located at the property line instead. When the water meter is at that alternative position, the pipe section that connects the water main to the water meter is the communication pipe, and the section that connects the water meter to the building isolation valve is the supply pipe. Lead service lines can occur in any of these scenarios: the communication pipe section can be made of lead (a lead communication pipe); the supply pipe section can be made of lead (a lead supply pipe); the entire length can be made of lead; or only a small section of the communication pipe at the water main is made of lead (a lead gooseneck). The ownership structure of service lines varies among water utilities. Depending on the locality, the entire length of a service line from the water main to the building inlet may be owned by the water utility or by the property owner. There can also be a partial ownership scenario where the water utility and the property owner share ownership of the service line; replacing the entire lead service line then requires cooperation between the two entities. Under shared ownership, the water utility typically owns the communication pipe and the curb stop, and the property owner owns the supply pipe. In this scenario, the utility-owned section of a lead service line is called the public lead service line, and the section owned by the property owner is called the private lead service line. When only one part of a lead service line (either public or private) is replaced, it is called a partial lead service line replacement. When both sides are replaced at the same time, it is called a full lead service line replacement. Private ownership complicates a full lead service line replacement. A major issue is the cost of the replacement. In the United States, a replacement can cost between $3,000 and $5,000 (2018 estimate) for the private side. This can be a major financial burden for homeowners. Even with various incentives for homeowners, such as interest-free loans or using ratepayer money to cover part of the cost, homeowners are still hesitant. Using ratepayer money to fund private lead service line replacements is itself a subject of debate. People who advocate for it argue that the benefits to public health outweigh a small increase in water rates that impacts everyone. On the other side, there is a concern that the increased rates can cause hardship, and there is a public policy question about using ratepayer money to make private property improvements. Even when private lead service line replacements are fully funded at no cost to property owners, some owners still refuse to allow their water utility to work on their property for various reasons, such as fear of damage to the property or not wanting workers inside. For example, in Pittsburgh, 10% of property owners refused the no-cost private lead service line replacement.
This problem is exacerbated in rental properties, where tenants have no authority to accept the offer and landlords do not see the value of the investment. For cities with a large number of renters, it will be difficult to complete a full lead service line replacement program without some form of mandate through a local ordinance. Alternatively, some common law jurisdictions may have enough legal precedent in regard to public nuisance law that a court may allow the municipality to access a private property to address the public health threat without having to obtain permission from the property owner.
Replacements:
Partial replacements A partial lead service line replacement involves replacing only one portion of a service line (whether the public or the private portion) and leaving the other portion intact. This practice does not completely remove the lead source from the water distribution. Additionally, studies have found that a partial lead service line replacement can cause short-term elevation of lead concentration due to the disturbances during the replacement. An Advisory Board of the United States Environmental Protection Agency concluded in 2011 that there were enough data to show that the practice could pose a public health risk. An Advisory Committee of the Centers for Disease Control and Prevention agreed with that position. Therefore, a partial lead service line replacement should be avoided. In 2014, the American Water Works Association published a communication guideline whose definition of a partial lead service line replacement includes a repair of, and a reconnection to, a lead service line. It recommended that the entire lead service line be replaced instead of a partial replacement. It also provided guidance on homeowner notifications before partial replacements and on water utilities' commitment to follow-up tests. In 2017, a study of the Canadian House of Commons Standing Committee on Transport, Infrastructure and Communities concurred that partial replacements can aggravate the problem of lead exceedances.
Replacements:
Full replacements A full lead service line replacement involves replacing the entire service line from the water main to the building inlet. This includes the public and the private portions of the line. A full lead service line replacement should be coordinated with the property owner, as it may involve going through obstacles such as trees, driveways, and walls. Sometimes there is a need to break through the customer's basement wall. Although a full lead service line replacement is the preferred method, it is not risk free. There is short-term elevation of lead concentration as well, but a full replacement tends to release less lead, and for a shorter period of time. Research has found that even with a full lead service line replacement to completely remove the lead source, lead exposure can still exist in the home. This is particularly true when the water source is rich in manganese and iron that has caused scale build-up on the interior walls of the home's pipes. The scale can absorb lead during the time before the replacement. When the scale crumbles from the pipe walls after the replacement, it carries lead to the customer's tap in particulate form, which may continue for years after the replacement. Therefore, internal plumbing flushing is still required after a full lead service line replacement.
Replacements:
Post-replacement flushing procedures After work done on or near a lead service line, whether a partial or full replacement or another disturbance such as changing the water meter, the water utility should perform a flushing procedure to remove lead that has been dislodged into the building plumbing. The homeowner should not run water through any water filter devices, use hot water or ice makers, or consume tap water until the flushing procedure is completed. For this special flushing procedure, the water utility will perform an initial flush after the work is done. Then the worker will start from the lowest floor and open all cold water taps, showers, and bathtub faucets throughout the home, removing faucet aerators along the way. After the last tap on the top floor has been opened, the worker waits for 30 minutes, then turns off the taps and puts back the faucet aerators, working from the top floor down to the lowest floor.
Replacements:
Replacement progress Lead exposure in drinking water that is caused by lead service lines can be mitigated in the short term by using corrosion control techniques. However, the only long-term solution is to completely replace the lines with other materials. Water utilities around the world have undertaken such replacement efforts.
Mitigation:
While full lead service line replacement is the permanent solution, such an undertaking will take years or decades. Water utilities and customers need to use other strategies to mitigate the lead exposure risks in the short term.
Mitigation:
Internal corrosion control Various techniques can be used by water utilities to control internal corrosion, for example, adjusting the pH level, adjusting carbonate and calcium to create calcium carbonate as a pipe surface coating, and applying a corrosion inhibitor. One example of a corrosion inhibitor is phosphate products such as orthophosphate, which form films over pipes. This reduces the chance of leaching of trace metals, including lead, from the pipe materials into the water. Another example of a corrosion inhibitor is silicate products. However, the mechanism of film forming and its effectiveness are not well understood.
Mitigation:
Flushing The American Water Works Association recommends that, for a home with a lead service line, the homeowner do a morning flush by running the water at the kitchen tap for 3–5 minutes. The required flushing may be longer if the house is set far back from the curb and therefore has a longer lead service line. In order to conserve water, showering and flushing the toilet can also be used. However, those alternative activities will not flush the line at the kitchen, the main tap for water consumption; an additional flushing at the kitchen tap for 30–45 seconds is recommended.
Mitigation:
Filters In certain cases when flushing is not practical, such as with a very long lead service line, using filters may be considered. When choosing filters, water consumers should select products that remove total lead, which includes filtering of dissolved lead and lead particles. In the United States, it is recommended that the filters be certified according to American National Standards Institute/NSF International Standard 53.
Widespread hazards and causes:
There have been widespread incidents of lead contamination in drinking water related to lead service lines. These incidents had many causes, all resulting in elevated levels of lead leaching.
Widespread hazards and causes:
Changing of water source In 2014, the Flint water crisis was caused by switching the water source from the Detroit Water and Sewerage Department's treated water to the Flint River, with treatment done locally in Flint. The treated Flint River water changed the water properties in Flint's distribution system in three ways. First, a corrosion inhibitor was not added to the treated water. Second, the pH level was changed over time to lower levels. Third, the chloride level was higher than in treated Detroit water. The combination of these factors contributed to the corrosiveness of the water, causing corrosion of lead and iron pipes. The solution was to switch the water source back to Detroit's water supply and replace 30,000 lead service lines.
Widespread hazards and causes:
Changing of disinfection chemical In 2000, lead contamination in Washington, D.C. drinking water was caused by changing the disinfection chemical from chlorine to monochloramine for water treatment. This was done as a measure to limit disinfection byproducts according to a new regulation of the United States Environmental Protection Agency. The change inadvertently reduced the protective mineral-coating properties of the water. The scaling that had covered the interior surface of lead service lines for decades was reduced to the point that it allowed lead to leach into the water. The solution was to replace all 23,000 lead service lines. However, 15,000 of those were done as partial replacements, which were found to be ineffective.
Widespread hazards and causes:
Changing of corrosion control chemical In 2014, the Pittsburgh water crisis was caused by an unauthorized switch of the anti-corrosion agent from soda ash to caustic soda. The city had been using soda ash for decades as a corrosion control chemical. Soda ash's alkalinity makes the water less corrosive to metals. It also leaves a solid residue that encourages mineral buildup as a protective layer on the interior surface of the pipes. Although caustic soda has similar alkalinity, it does not help create buildup. After the corrosion control chemical was switched, the buildup in the city's distribution system started to disappear, causing leaching in the lead service lines. The short-term solution was to use orthophosphate to create a coating. The long-term solution was to do full lead service line replacements. The city started the replacements in 2016. By 2019, 4,200 lead service lines had been replaced. In the same year, the city budgeted $49 million to replace another 4,400 public service lines and offer no-cost private line replacements at the same time.
Widespread hazards and causes:
Adjustment of pH levels In 2016, water in Newark, New Jersey started having elevated lead levels. A year earlier, the city had tried to adjust the pH levels in order to control carcinogens in the system. The resulting higher acidity caused the sodium silicate that had been used successfully as a corrosion inhibitor for two decades to stop working. It took the city until 2018 to settle on the short-term solution of changing to orthophosphate, but that would take another six months to become effective. Bottled water and water filters were distributed as a stopgap. The long-term solution was for the city to do 18,000 full lead service line replacements. The city took unprecedented steps, borrowing $120 million to shorten the replacement timeframe from 10 years to 3, and working with legislators on a law that allows the city to replace the private portion of the lines free of charge and without requiring permission from property owners.
Widespread hazards and causes:
Physical disturbances In Chicago, after Mayor Rahm Emanuel took office in 2011, he initiated a $481 million water conservation initiative that included household water meter replacements. The work was carried out as a multi-year project. In 2013, a study by the United States Environmental Protection Agency (EPA) concluded that disturbances to lead service lines, including street work or water meter installation, could cause the leaching of lead to be elevated for months or years. During Mayor Emanuel's administration, the city consistently denied that it had any widespread problems with lead in water, and continued the meter installation. In July 2019, Mayor Lori Lightfoot, who took office that year, announced that tests had shown elevated levels of lead in more than 1 in 5 metered homes, and she ordered a halt to the meter installation program. The city also faced a lawsuit filed on behalf of the residents to force the city to solve the problem by replacing their lead service lines. Also as part of the water-related projects, Mayor Emanuel borrowed $412 million from 2011 to 2016, with two-thirds of that money going to the replacement of 440 miles (710 km) of water mains, but those projects did not include lead service line replacements. The workers would reconnect lead service lines to the newly installed water mains. In 2016, the city claimed that there was no evidence of risks associated with that method. The claim contradicted the 2013 EPA report and another finding in 2015 by the City of Milwaukee that prompted Milwaukee to stop installing new water mains. In addition to potential issues with reconnecting lead service lines to water mains, there were also concerns about how the city repaired damage. During the projects, when the city damaged a lead service line, the city workers would cut off the broken part of the lead service line and replace that short section with a copper line. A scientist who tested the water after that repair procedure was done on his home found an extremely high level of lead. However, the city did not perform any tests or notify any homeowners of such repairs, as that type of repair was not considered a partial lead service line replacement, and therefore follow-up tests were not required by regulations. An EPA Advisory Board concluded that the procedure could be more dangerous than a partial lead service line replacement. After 87 months of work, when the city had completed two-thirds of the water main replacement project, the total number of repairs using that procedure was still not available. There was an incomplete record showing only a one-month snapshot, which indicated that nine such repairs were done during that month.
Widespread hazards and causes:
Unspecified causes In some cases, high levels of lead in drinking water may not have been caused by a specific incident, and the cause may or may not be traced to lead service lines.
Widespread hazards and causes:
In 2015, Irish Water sent out a warning that 126 households in Wexford had lead levels that exceeded the limit. It urged residents to stop using their water for drinking unless a five-to-ten-minute flush had been performed. The company also recommended that owners replace their lead service lines, which are the responsibility of property owners in Ireland. Starting in 2017, more than 30 areas across Ireland were found to have unsafe levels of lead. Irish Water had been replacing lead service lines at its own cost as part of its leakage reduction program. In 2017, a report from the French Agency for Food, Environmental and Occupational Health & Safety (ANSES) indicated that many older properties are at risk of high lead concentrations, especially properties located in areas with older water distribution systems. The agency urged property owners to take mitigation efforts, including replacement of water pipes.
**Nonlinear junction detector**
Nonlinear junction detector:
The non-linear junction detector, or NLJD, is a device that illuminates a small region of space with high-frequency RF energy. Any "non-linear junction" in the vicinity—for example, and particularly, a p–n junction—will receive this energy, and because of the asymmetric response of the junction to an electric field, it will distort it, re-emitting some of it on multiples of the illumination frequency (see harmonic). The detector has a sensitive receiver tuned to these harmonics, as well as appropriate processing and displays to make their presence known to the user of the device.
Nonlinear junction detector:
Because the basis of almost all semiconductor electronics is the p-n junction, an NLJD is correspondingly capable of detecting almost any unshielded electronic device containing semiconductors, whether the electronics are actively powered or not.
Nonlinear junction detector:
In its basic form, an NLJD can also detect things that are not themselves electronic in nature, so the use of the device requires a modicum of skill and experience. For example, a rusty nail inside a wall can give a false positive. For this reason, most modern NLJDs examine the ratio between the second and the third harmonic of the illumination frequency. When a true (electronic) p-n junction is encountered, the second harmonic will generally be stronger than the third.
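The harmonic-ratio discrimination can be illustrated numerically. The sketch below is a toy model only: the exponential transfer function standing in for a p-n junction and the cubic transfer function standing in for a corroded metal contact ("rusty nail") are illustrative assumptions, not a model of any real NLJD.

```python
# A numeric sketch of why the 2nd/3rd harmonic ratio separates true p-n
# junctions from false positives such as rusty nails. An asymmetric (diode-
# like) response generates strong even harmonics, while a symmetric metal-
# oxide contact (modeled here as a cubic distortion) generates mostly odd
# harmonics. Both transfer functions are illustrative toy models.

import numpy as np

fs, f0, n = 1_000_000, 10_000, 10_000          # sample rate, tone, samples
t = np.arange(n) / fs
drive = np.sin(2 * np.pi * f0 * t)             # the NLJD's illumination tone

def harmonic_amplitude(signal: np.ndarray, k: int) -> float:
    """Magnitude of the k-th harmonic of f0 from an FFT of the response."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    bin_k = int(round(k * f0 * n / fs))
    return spectrum[bin_k]

diode_like = np.exp(drive) - 1                 # asymmetric: p-n junction toy model
oxide_like = drive + 0.2 * drive**3            # symmetric: "rusty nail" toy model

for name, response in [("p-n junction", diode_like), ("rusty nail", oxide_like)]:
    h2, h3 = harmonic_amplitude(response, 2), harmonic_amplitude(response, 3)
    print(f"{name:>12}: H2={h2:.4f}  H3={h3:.4f}  H2>H3: {h2 > h3}")
```

Running this shows the asymmetric response with a dominant second harmonic and the symmetric response with essentially no second harmonic, matching the rule of thumb that a true junction's second harmonic is generally stronger than its third.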
History:
The NLJD was invented by Charles Bovill during WWII. It was initially used to discover corrosion below painted surfaces on airplanes. In 1972, shortly after Bovill had become technical director at Allen International Ltd. (Westminster, London, UK), the device was renamed 'Broom' and was introduced as a device for finding inactive covert listening devices (bugs). The Broom was later marketed by Audiotel in Corby (UK) as the Scanlock Broom, and was succeeded by the Scanlock Broom ECM and later the Scanlock Super Broom.
History:
Similar devices are available from manufacturers in the US (e.g. Orion) and Russia.
Countermeasures:
As a countermeasure against an NLJD, professional covert listening devices (bugs) of the Central Intelligence Agency were equipped from 1968 onwards with a so-called isolator. An isolator is a 3-port circulator of which the return port is terminated with a resistor. Any energy injected into the bug by an NLJD will be absorbed by the resistor, resulting in no (or very little) reflected energy. An example of such a bug is the CIA's SRT-107.
Countermeasures:
A means of hindering the isolation of a particular non-linear junction is to add inexpensive diodes in places where the NLJD is expected to sweep. This masks the true listening device behind a field of false alerts as the many diodes are detected. Such a technique was used in the 1980s construction of the U.S. embassy in Moscow. Thousands of diodes were mixed into the building's structural concrete, making detection and removal of the true listening devices nearly impossible.
**Moulage**
Moulage:
Moulage (French for 'casting' / 'moulding') is the art of applying mock injuries for the purpose of training emergency response teams and other medical and military personnel. Moulage may be as simple as applying pre-made rubber or latex "wounds" to a healthy "patient's" limbs, chest, head, etc., or as complex as using makeup and theatre techniques to provide elements of realism (such as blood, vomitus, open fractures, etc.) to the training simulation. The practice dates to at least the Renaissance, when wax figures were used for this purpose. In Germany, some universities and hospitals use their historical moulage collections for the training of students. The often very lifelike models are especially useful for showing students the characteristics of rare diseases, such as skin tuberculosis or leprosy.
History:
Up until the 16th century, European scientists had little knowledge of human and animal anatomy. Medical students of Bologna and Paris studied the books of Aristotle, Galen, and other Greek scholars. Four centuries after the invasion by the Arabs and the fall of Rome and Persia, many Greek books were translated into Arabic. European scientists then translated these Arabic books into Latin and Greek. In the medical field, this led to a reliance on Galen as a medical authority in European countries. In European medical schools the professors of anatomy merely lectured from Galen, without any dissection of the human body, and Galen’s books were the only way to learn anatomy.
History:
Andreas Vesalius (1514–1564), a Flemish anatomist, was at first a "Galenist" at the University of Paris. When he moved to Italy and entered the University of Padua, he began dissecting human bodies. He studied many details of human anatomy and found that Galen made some anatomical mistakes. For example, Galen wrote that the sternum has seven segments, but Vesalius found it has three segments. Galen wrote that the bone of the arm is the longest bone in the human body, but Vesalius found that the bone of the thigh is actually the longest bone in human body. At age 25 Vesalius realized that the anatomical knowledge of Galen was derived from animal anatomy and therefore Galen had never dissected a human body.
History:
In 1543 Vesalius wrote an anatomical masterwork named in Latin De humani corporis fabrica libri septem ("On the fabric of the human body in seven books"), or in short De Fabrica. The book included drawings of human females and males with their skins dissected. These pictures greatly influenced the creation of future anatomical wax models. The anatomical pictures of Vesalius were followed by those of Johann Vesling ("Veslingius") and Hieronymus Fabricius. By 1600 Fabricius had gathered 300 anatomical paintings and made an anatomical atlas named the Tabulae Pictae. Giulio Cesare Casseri ("Casserius"), Spighelius, and William Harvey are other followers of the pictures of Andreas Vesalius.
History:
The Tabulae anatomicae of Bartolomeo Eustachi ("Eustachius") (1552), printed in 1714, had a major effect on the history of anatomical wax models. This work so affected Pope Benedict XIV that he ordered the construction of a museum of anatomy in Bologna in 1742, featuring anatomical wax models by Ercole Lelli. Felice Fontana made cadaveric specimens into wax models by the casting method for anatomical teaching. The history of wax models is ancient. Wax anatomical models were first made by Gaetano Giulio Zummo (1656–1701), who worked first in Naples, then Florence, and finally Paris, where he was granted a monopoly right by Louis XIV. Later, Jules Baretta (1834–1923) made more than 2000 wax models at the Hospital Saint-Louis, Paris, where more than 4000 wax models were collected. While making the wax models, he made pleasant conversation with the patients, sang songs, or at times played the piano. Moulages were made for the education of dermatologists around the world, but were eventually replaced by color slides.
History:
Wax sculpture, use in moulage The modeling of the soft parts of dissections, as teaching illustrations of anatomy, was first practiced in Florence during the Renaissance. The practice of moulage, or the depiction of human anatomy and different diseases by casting directly from the body using (in the early period) gelatine moulds, later alginate or silicone moulds, used wax as its primary material (later to be replaced by latex and rubber). Some moulages were cast directly from the bodies of diseased subjects, others from healthy subjects to which disease features (blisters, sores, growths, rashes) were skilfully applied with wax and pigments. During the 19th century, moulage evolved into three-dimensional, realistic representations of diseased parts of the human body. These can be seen in many European medical museums, notably the Spitzner collection currently in Brussels, the Charite Hospital museum in Berlin, and the Gordon Museum of Pathology at Guy's Hospital in London, UK.
History:
A comprehensive monograph on moulages is "Diseases in Wax: the History of Medical Moulage" by Thomas Schnalke, director of the Charite Museum, translated by Kathy Spatschek. In the 19th century, moulages were taken of medical patients for educational purposes. The prepared model was painted to mimic the original disease. Nowadays anatomical models are an important instrument in the teaching of human anatomy in departments of anatomy and biological sciences in medical schools.
History:
Modern moulage: Moulage has evolved dramatically beyond its original intent. In modern terms, the word moulage refers to the use of "special effects makeup (SPFX) and casting or moulding techniques that replicate illnesses or wounds" in simulation-based training. Common examples include designing diabetic wounds and creating burns or other illness effects, such as dermatological rashes and gunshot wounds.
History:
These illness and injury effects are applied to training manikins or to simulated or standardized patients for training and other purposes. Simulation staff attend training to learn these techniques. It is argued that the use of moulage in simulation improves realism and participant buy-in. Moulage is an emerging field of research for paramedicine, radiography and medical education, with researchers exploring how moulage contributes to learning in training. Military training utilises highly authentic moulage techniques to desensitise trainees to graphic wounds, prepare them for battle, and train them to treat injuries. New advancements in the field include tattooed injuries and moulage delivered through augmented reality. The level of authenticity required for moulage remains unclear. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Burelage**
Burelage:
Burelage (French: burelage), also burelé, is a French term referring to an intricate network of fine lines, dots or other designs printed over or as the background of some postage or revenue stamps to prevent counterfeiting. In English the word is sometimes spelled with an accent on the first "e" as burélage, although the accent does not appear in the French spelling and its origin is unclear. Burelage most commonly appears as a form of underprinting.
Burelage:
Early uses of burelage on postage stamps include the first issue of the stamps of Denmark from 1851 and stamps issued by the City of Hanover beginning in 1855. Stamp varieties may be distinguished in catalogs by the presence or absence of burelage, as well as by variations in the burelage itself, such as the size of the network, its orientation on the stamp, its color, or the method of printing. Although burelage is usually unobtrusive, some of the Mexico Exporta stamps had burelage printed over the stamp that is dark enough to obscure the stamp image. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SF3A3**
SF3A3:
Splicing factor 3A subunit 3 is a protein that in humans is encoded by the SF3A3 gene. This gene encodes subunit 3 of the splicing factor 3a protein complex. The splicing factor 3a heterotrimer includes subunits 1, 2 and 3 and is necessary for the in vitro conversion of 15S U2 snRNP into an active 17S particle that performs pre-mRNA splicing. Subunit 3 interacts with subunit 1 through its amino-terminus, while the zinc finger domain of subunit 3 plays a role in its binding to the 15S U2 snRNP. This gene has a pseudogene on chromosome 20.
Interactions:
SF3A3 has been shown to interact with SF3A1. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sugru**
Sugru:
Sugru, also known as Formerol, is a patented multi-purpose, non-slumping brand of silicone rubber that resembles modelling clay. It is available in several colours and, upon exposure to air, cures to a rubber-like texture.
Properties:
Sugru is malleable when removed from its airtight, moisture-proof packaging, retains its plasticity for thirty minutes, and is self-curing at room temperature in approximately 24 hours. The material adheres to aluminium, steel, copper, ceramics, glass, fabric, brass, leather, plywood, and other materials, including ABS plastics. When cured, Sugru has a 'soft touch' or slightly flexible, grippable texture similar to features commonly found in soft overmolds. It is waterproof and dishwasher-safe, and the material is thermally insulating, with a service temperature range between −50 and 180 °C (between −58 and 356 °F, or 223 and 453 K). Sugru is not resistant to isopropyl alcohol. While early versions of the product had a short shelf-life, as of 2014 it was being advertised as staying fresh for 13 months from the date it was made. According to the company, if kept in a refrigerator, the remaining shelf-life is tripled. Sugru has not been tested for food safety, and the manufacturers advise against using it in contact with food or drink.
History:
The idea for Sugru was developed by Jane Ní Dhulchaointigh from Kilkenny, Ireland. Ní Dhulchaointigh studied product design as a post-graduate research student at the Royal College of Art, where she conceived the idea for the substance in 2003 while using mixtures of standard silicone sealants and sawdust in her work. After receiving business grants, Ní Dhulchaointigh worked with retired scientists from Dow Corning and a silicone expert over a seven-year period at the materials department at Queen Mary, University of London to develop a silicone elastomer that was mouldable, self-adhesive and self-curing. Her goal was to enable people "to easily and affordably repair, improve or customise things they already own". Sugru was developed and is marketed by FormFormForm, a company in Hackney, London, with over 100,000 customers as of 2012, annual sales of US$2 million, and a staff of 30. In May 2015, the company launched a campaign to raise £1 million (US$1,280,000) on the crowdfunding site CrowdCube; it reached the £1 million funding target in just four days and went on to raise well over £3 million. In December 2016, the company secured a further £4m investment from Clydesdale and Yorkshire Banks. The name Sugru derives from the Irish-language word "súgradh", meaning "play". In May 2018, FormFormForm was acquired by the German adhesive company Tesa SE, a subsidiary of Beiersdorf.
Chemical compound:
The formulation of Sugru contains 25–50% silicone (polysiloxane) and 25–50% talc, with the remaining additives including methyltris(methylethylketoxime)silane and (3-aminopropyl)triethoxysilane. The company claims its formulation can be varied to offer different levels of consistency, plasticity, softness, resiliency, surface adhesion, modulus and abrasion resistance, setting time, density, and ability to float. According to the company's MSDS for the U.S., Sugru is classified as "not hazardous" under OSHA's 2012 Hazard Communication Standard, and for Europe, Sugru "does not meet the criteria for classification in any hazard class" under EU Regulation No. 1272/2008 and Directive 1999/45/EC. However, both versions of the MSDS note that Sugru may cause irritation or skin sensitization. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thyroid-hormone transaminase**
Thyroid-hormone transaminase:
In enzymology, a thyroid-hormone transaminase (EC 2.6.1.26) is an enzyme that catalyzes the chemical reaction

L-3,5,3'-triiodothyronine + 2-oxoglutarate ⇌ 3-[4-(4-hydroxy-3-iodophenoxy)-3,5-diiodophenyl]-2-oxopropanoate + L-glutamate

Thus, the two substrates of this enzyme are L-3,5,3'-triiodothyronine and 2-oxoglutarate, whereas its two products are 3-[4-(4-hydroxy-3-iodophenoxy)-3,5-diiodophenyl]-2-oxopropanoate and L-glutamate.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-3,5,3'-triiodothyronine:2-oxoglutarate aminotransferase. Other names in common use include 3,5-dinitrotyrosine transaminase and thyroid hormone aminotransferase. It employs one cofactor, pyridoxal phosphate. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Equivalence number method**
Equivalence number method:
The equivalence number method is a cost calculation method for co-production in cost and activity accounting. The resulting costs of the input factors are allocated to the individual products according to a weighting key, the so-called equivalence numbers.
Description:
As with the other cost allocation methods, the conservation of the cost sum applies; that is, the sum of the input costs equals the sum of the output costs. The main product, usually the product with the highest physical or economic output, receives for example the equivalence number 1. On the basis of selected indicators (average market prices, physical properties, etc.), equivalence numbers for the other co-products are formed as suitable ratios relative to the main product. Multiplying the equivalence numbers by the production or sales quantities yields the allocation keys for each product type. From these keys the cost of each co-product can be calculated, for main products and by-products alike.
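To make the mechanics concrete, here is a minimal Python sketch of the allocation step. The product names, quantities, and equivalence numbers are illustrative assumptions, not figures from the text:

```python
# A minimal sketch of equivalence-number cost allocation.
# All product names and numbers below are illustrative assumptions.

def allocate(total_cost, quantities, equivalence_numbers):
    """Split total_cost across co-products in proportion to
    equivalence number x production quantity (the allocation key)."""
    keys = {p: equivalence_numbers[p] * q for p, q in quantities.items()}
    key_sum = sum(keys.values())
    return {p: total_cost * k / key_sum for p, k in keys.items()}

# The main product receives equivalence number 1; the by-product is
# weighted relative to it, e.g. via average market-price ratios.
quantities = {"main": 1000, "by_product": 400}   # units produced
equivalence = {"main": 1.0, "by_product": 0.5}

costs = allocate(12_000.0, quantities, equivalence)
print(costs)                  # {'main': 10000.0, 'by_product': 2000.0}
assert abs(sum(costs.values()) - 12_000.0) < 1e-9  # cost sum conserved
```

The application examples that follow use exactly this pattern; only the choice of weighting key (passenger and freight weight, energy content, exergy) changes.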
Application examples:
An airline can determine the costs of its transportation service by dividing them between air freight and passengers according to weight. The average weight of the passengers in booked seats is compared with the weight of the loaded air cargo containers.
Application examples:
a_pass = m_pass / (m_pass + m_freight)
a_freight = m_freight / (m_pass + m_freight)

In a refinery, one can take crude oil as the input and gasoline, diesel and heavy fuel oil, as well as (flare) losses, as the outputs. The equivalence number method can use the energy content of the products as the allocation key, where E is the product of energy density and production quantity.
Application examples:
a_gas = E_gas / (E_gas + E_diesel + E_HFO)
a_diesel = E_diesel / (E_gas + E_diesel + E_HFO)
a_HFO = E_HFO / (E_gas + E_diesel + E_HFO)

In cogeneration plants, the Carnot method allocates the fuel to the products useful heat and electrical work. The weighting key is the exergy content of the output energies.
a_el = η_el / (η_el + η_c · η_th)
a_th = (η_c · η_th) / (η_el + η_c · η_th)

In the alternative generation method, the keys are the thermal efficiency and the weighted electrical efficiency, where the weighting factor is the ratio of the thermal to the electrical reference efficiency (γ = η_th,ref / η_el,ref).
a_el = (γ · η_el) / (γ · η_el + η_th)
a_th = η_th / (γ · η_el + η_th)
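To illustrate, the sketch below computes the allocation factors for both CHP keys. All temperatures, efficiencies, and reference efficiencies are assumed example values, not data from the text:

```python
# A minimal sketch of the two CHP allocation keys above.
# Every numeric value here is an assumed example.

T_amb, T_heat = 288.0, 393.0         # ambient / heat supply temperature, K
eta_el, eta_th = 0.30, 0.55          # electrical and thermal efficiency

# Carnot method: weight the heat by its Carnot factor (exergy share).
eta_c = 1.0 - T_amb / T_heat
a_el_carnot = eta_el / (eta_el + eta_c * eta_th)
a_th_carnot = (eta_c * eta_th) / (eta_el + eta_c * eta_th)

# Alternative generation method: gamma = eta_th_ref / eta_el_ref.
eta_th_ref, eta_el_ref = 0.90, 0.55  # reference efficiencies (assumed)
gamma = eta_th_ref / eta_el_ref
a_el_alt = (gamma * eta_el) / (gamma * eta_el + eta_th)
a_th_alt = eta_th / (gamma * eta_el + eta_th)

for name, ael, ath in [("Carnot", a_el_carnot, a_th_carnot),
                       ("alternative generation", a_el_alt, a_th_alt)]:
    assert abs(ael + ath - 1.0) < 1e-12  # factors always sum to one
    print(f"{name}: a_el = {ael:.3f}, a_th = {ath:.3f}")
```

Multiplying these factors by the fuel cost gives the cost shares assigned to electricity and to useful heat.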
Criticism:
Criticism of the equivalence number method rests on the fact that completely arbitrary keys can be chosen. For example, when allocating the potable water bill in a house with only one common meter, the water consumption could be divided according to the number of occupants per apartment or according to each apartment's net dwelling area in m².
Mathematical background:
From a one-dimensional input I, a two-dimensional output is assumed, with O1 = f1(I) · I and O2 = f2(I) · I.
Note: One interpretation of f is a conversion efficiency from the input to the respective output. More than two co-products are also conceivable.
Mathematical background:
The costs k1 and k2 are the specific variable costs of the two outputs, which need to be determined, while kI represents the known specific variable cost of the input. Kvar denotes the respective sum of variable costs. a1 and a2 are the allocation factors for the respective outputs, i.e. they describe the proportion of the input cost that is assigned to each co-product.
Mathematical background:
The weighting keys are f1 and f2, giving the allocation factors

a1 = f1 / (f1 + f2) and a2 = f2 / (f1 + f2),

so that

K1var = a1 · KIvar = f1 / (f1 + f2) · kI · I
K2var = a2 · KIvar = f2 / (f1 + f2) · kI · I

This results in the specific variable costs k1 and k2:

k1 = K1var / O1 = K1var / (f1 · I)
k2 = K2var / O2 = K2var / (f2 · I)

According to the introductory relation of the cost allocation, the following applies: a1 + a2 = 1, or KIvar = K1var + K2var.
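As a quick numerical check of these relations, the following sketch uses assumed values for f1, f2, I and kI and verifies that the cost sum is conserved:

```python
# Numerical check of the relations above; all input values are assumed.

f1, f2 = 0.6, 0.3     # conversion factors input -> outputs (assumed)
I = 1000.0            # input quantity (assumed)
k_I = 2.0             # known specific variable cost of the input

O1, O2 = f1 * I, f2 * I                    # the two outputs
K_I_var = k_I * I                          # variable input cost

a1, a2 = f1 / (f1 + f2), f2 / (f1 + f2)    # allocation factors
K1_var, K2_var = a1 * K_I_var, a2 * K_I_var

k1, k2 = K1_var / O1, K2_var / O2          # specific variable costs
assert abs(K1_var + K2_var - K_I_var) < 1e-9   # KIvar = K1var + K2var
print(k1, k2)  # with constant f1, f2 both equal kI / (f1 + f2)
```

| kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded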