**Diethyl toluene diamine**
Diethyl toluene diamine:
Diethyl toluene diamine (DETDA) is a liquid aromatic organic compound with the formula C11H18N2. Chemically it is an aromatic diamine, with CAS Registry Number 68479-98-1. It has more than one isomer, and the mixture of the two main isomers is given a different CAS number, 75389-89-8. It is often marketed as a less toxic alternative to 4,4'-methylenedianiline (MDA). It is also used to replace the more toxic 4,4'-methylenebis(2-chloroaniline) (MOCA). Its toxicology is reasonably well understood.
Uses:
DETDA is an industrial chemical used in the injection molding industry. One of the reasons it is used in reaction injection molding (RIM) is that it gives very short demold times. It is also used extensively in polyurethanes and in both spray polyureas and elastomers; when used in elastomer production, these can serve as energy-absorbing systems in automobiles. It is a diamine and thus, in polymer science terms, a chain extender rather than a chain terminator. Chain extenders (f = 2) and cross-linkers (f ≥ 3) are low-molecular-weight amine-terminated compounds that play an important role in polyurea compounds, elastomers and adhesives. DETDA is one such amine and is used extensively in RIM and in polyurethane and polyurea elastomer formulations. Pyrolysis in combination with other materials can produce a carbon-based molecular sieve, and carbon nanotubes have also been produced and studied with the material; there are other, more specialist uses as well. As it is an aromatic amine, its rate of cure is much slower than that of aliphatic amines, so it is used with epoxy resin systems to lengthen the working time or pot life. These systems are then used in adhesives, sealants, and paints or coatings. It is often used with epoxy resins for its excellent mechanical properties, and epoxy formulations based on DETDA also tend to have good high-temperature properties.
Supply:
DETDA is produced globally and is thus fairly strategically important.
External websites:
Albemarle DETDA Safety Data Sheet
CheMondis DETDA information
Leticia Chemicals DETDA
**Journal of Dental Biomechanics**
Journal of Dental Biomechanics:
The Journal of Dental Biomechanics is a peer-reviewed academic journal covering the field of materials science applied to dentistry. The editors-in-chief are Christoph Bourauel (University of Bonn) and Theodore Eliades (University of Zurich). It was established in 2009 and published by SAGE Publications. The journal ceased publication in 2015.
Abstracting and indexing:
The Journal of Dental Biomechanics is abstracted and indexed in Biotechnology Research Abstracts, Calcium and Calcified Tissue Abstracts, and PubMed.
**Learner's permit**
Learner's permit:
A driver's permit, learner's permit, learner's license or provisional license is a restricted license that is given to a person who is learning to drive, but has not yet satisfied the prerequisite to obtain a driver's license. Having a learner's permit for a certain length of time is usually one of the requirements (along with driver's education and a road test) for applying for a full driver's license. To get a learner's permit, one must typically pass a written permit test, take a basic competency test in the vehicle, or both.
Australia:
Laws regarding learner's permits in Australia differ between the states. However, all states require a number of hours of supervised driving to be undertaken and for the permit to be held for a set period. The age to get a learner's permit is 16 in all states and territories except the ACT, where it is 15 years and 9 months. While on their learner's permit, drivers must log 50–120 hours, depending on the state, including at least 5–20 hours of night driving. They can be supervised or taught during their logged hours by any person or persons holding a full license, who must sign the log book for the allocated hours. Learner drivers must display an 'L' plate on their car and have a 0% blood alcohol (BAC) limit. Some states provide online applications to log these hours digitally.
Belgium:
A provisional learner's license can be obtained after passing a theoretical exam less than three years prior. The minimum age for a learner's permit is 17 years. The learner needs to be accompanied by a designated person with a valid driving license. The vehicle needs to bear a clearly visible, predesignated "learner" sign, sporting the letter "L".
Belgium:
A learner who completes 20 hours of lessons at a driving school receives a different learner's permit. With this permit, the learner may drive accompanied by up to two people who have held their driver's license for at least 8 years, or drive alone with some restrictions: driving is not allowed between 10 p.m. and 6 a.m. on Fridays, Saturdays and Sundays, nor on the evening before a legal holiday or the evening of the holiday itself.
Canada:
In Canada, the minimum age varies from province to province and may be 14 or 16. In Ontario, a G1 license is issued to new drivers at the age of 16 after completing a written test. G1 restrictions include time and/or road restrictions, and the learner must drive with a driver who has been fully licensed for at least 4 years. After holding the G1 for the required period and passing a road test, the learner receives a G2 license; after one year with a G2, the learner may upgrade to the full G class license by taking another road test, which has a major highway component. A similar program, the M class license, is in effect for motorcycles. In Nova Scotia, a beginner's permit (L) is issued to new drivers over the age of 16 after a written test. The L license restrictions include: a fully licensed driver must sit in the seat adjacent to the new driver; there cannot be additional passengers; and the learner must have a blood alcohol content of 0. There are no time or road restrictions. In Alberta, a learner's permit is issued to those who are 14 years of age or older and complete a knowledge test and an eye exam. They are then put into a GDL program with restrictions, including a 0 blood alcohol level, a fully licensed driver in the passenger seat, no more passengers than there are seats, and holding the license for a one-year minimum before upgrading.
Canada:
In Alberta, after holding a learner's permit for at least a year and being at least 16 years of age, one can pass a basic road test and apply for a Class 5 GDL license, which carries some of the same restrictions but no longer requires a fully licensed Class 5 non-GDL driver in the passenger seat. Once the person turns 18 and has held the Class 5 GDL license for at least 2 years, they can take an advanced road test which, if passed, makes them a fully licensed Class 5 driver.
France:
In France, graduated driver licensing is available to people between the ages of 15 and 17½ for the category B driving licence. There are some restrictions: for instance, a fully qualified driver must accompany the learner.
At age 18, the holder of the learner's permit can apply for a normal driving license, which they can pass more easily thanks to their previous experience; additionally, the length of the probationary period (permis probatoire) is lowered to two years.
This graduated driver licensing is valid only within France; thus one cannot use it to cross borders.
France:
For people over 18, there is a system similar to graduated driver licensing, but the rules are slightly different: for instance, there is no reduction of the probationary licence from three to two years. Furthermore, upon receiving a full driving license for the first time, the following restrictions apply for two or three years (the permis probatoire): a maximum speed of 110 km/h instead of 130 on motorways, 100 km/h instead of 110 on dual carriageways, and 80 km/h instead of 90 on rural roads.
France:
The permis probatoire has only six points while the regular permit has 12 points.
At the end of the two- or three-year period, assuming the driver committed no infraction, the permis probatoire is automatically converted to a regular driver's licence. Road traffic safety training courses can help recover points.
Germany:
Since 2010, one can obtain a learner's permit at 17 in Germany. The only restriction is that a fully licensed, previously designated driver who is at least 30 years old must accompany the learner (but is not allowed to intervene in the driving). That does not apply to light motorcycles, which can be driven without an accompanying person under this license.
Germany:
Furthermore, the following restrictions apply for two years after obtaining a full license: the driver must have a blood alcohol concentration of 0 (this applies at least until the age of 21), and any penalties are stricter than for experienced drivers. At age 18, the learner's permit is automatically replaced by a normal driving license; no further test is needed. These legal circumstances in Germany are comparable to those in Austria in that respect; thus, one can cross these countries' border with a learner's permit.
Hong Kong:
In Hong Kong, any person aged 18 or above can apply for a Learner's Driving License for private cars, light goods vehicles and motorcycles. For other types of vehicle, the required age is 21 and the applicant must have held a valid private car or light goods vehicle driving license for 3 years. Unlike in other jurisdictions, a learner must be supervised by an approved driving instructor rather than an ordinary fully licensed driver, or attend an approved driving school to learn to drive (except for motorcycles, which learners can drive on their own, although motorcycle learners must pass a motorcycle course from an approved driving school before they can learn to drive on the road). An L-plate is also required when the learner is practicing.
India:
In India, the minimum age at which a provisional licence is valid is 18 (motorcycle/scooter). When driving under a provisional licence, the learner must be accompanied by a driver who holds a full driving licence. The supervisor has to be in view of the road and in a position to control the vehicle. The provisional licence is available only after passing the theory test, and a full licence can be acquired only after passing the driving test. Once the learner has passed the theory test, they may take the practical driving test. Once the practical driving test has been taken and passed, a full driving licence will be automatically issued. While it is possible to take both tests immediately after each other, most learner drivers leave a period between passing the theory test and applying for the practical test in order to take driving lessons, either with their supervisor or with a professional driving school.
India:
The vehicle being driven by the learner must also be fitted with L-plates on both the back and front of the vehicle. These tell other road users that the vehicle is being operated by a driver without a full license, who may make mistakes and may not yet be fully competent. The L-plate consists of a white square plate with a large red L in the middle.
Ireland:
In Ireland, the learner may take a theory test at the age of 16, which tests their knowledge of traffic situations and road signs. Upon passing this test the learner will receive a learner's permit, which permits them to drive on the road accompanied by a fully licensed driver who has held their license for more than two years. The only restrictions are that the learner driver cannot drive on motorways and must visibly display 'L' plates at all times. They must have held their learner's permit for 6 months before they can apply to take the road test to obtain their full license; this is known as the 'six-month rule'.
Italy:
In Italy, any person aged 14 or above can apply for a driving license (patente di guida). For B licences, obtainable from 18 years old, the learner has to pass a theory test covering traffic situations, road signs, insurance, sanctions, etc. Upon passing this test (the learner has two opportunities to pass it), the learner receives a learner's permit (foglio rosa, literally "pink sheet", after its color) which allows them to drive on the road if accompanied by a driver who has held their license for more than ten years. There are no restrictions on the horsepower of the car (restrictions apply during the first year of the full license). The learner can drive on motorways and must display 'P' (standing for Principiante, beginner) stickers on both the front and the back of the vehicle. After receiving the foglio rosa, they have 6 months to pass a road test to obtain the full license; should the learner be unable to pass the road test within 6 months (two opportunities, spaced one month apart), they have to pay for another foglio rosa.
New Zealand:
Learner licence:
In New Zealand, any eligible person 16 years or over can sit a learner licence test for a class 1 vehicle (car) or class 6 vehicle (motorbike), which is a multiple-choice theory test on the road rules. Once they have passed the learner licence test and received their licence in the mail, they may drive with an adult who has held their full licence of the same class for at least two years (a 'supervisor'). They may carry passengers when a supervisor is in the car, but learner motorcyclists may not carry a pillion passenger. They must display L plates at all times when driving and may drive at the posted speed limits.
New Zealand:
Restricted licence:
After at least 6 months have passed, they must pass a practical test in order to receive their restricted licence. On a restricted licence, the learner may only drive between 5 a.m. and 10 p.m., with no passengers other than their dependent children, spouse, or someone for whom they are the primary caregiver; they may drive at any time when accompanied by a supervisor. Learners who sit the practical test in an automatic car are only legally allowed to drive an automatic while on the restricted licence. If a driver has successfully completed an approved defensive driving course, the wait time between passing the restricted licence practical test and taking the full licence practical test is reduced from 18 months to 12 months.
Norway:
In Norway, a learner may drive as long as the learner is over 16 years of age, has passed a basic course in the rules of the road and first aid, and is accompanied by a person aged 25 or above who has held their driver's license for more than 5 years.
Singapore:
In Singapore, any person aged 18 or above may obtain a provisional driving licence (PDL) for a fee of S$25.00 after passing the Basic Theory Test. The provisional driving licence is valid for 6 months if it was obtained before 1 December 2017; from 1 December 2017, a PDL is valid for 2 years from the date of payment, at no change in cost. It permits the holder to drive on public roads (with a few exceptions) in the presence of a certified driving instructor. A car driven by a learner must display an L-plate on the front and rear of the car. Passing the Final Theory Test (FTT), a pass which is valid for 2 years, enables a learner to apply for the Practical Driving Test. A valid provisional driving licence, a passed FTT and a photo ID must be presented in order to be allowed to take the practical test. Should a learner's provisional driving licence expire before the date of their practical test, they will have to renew it at the same cost; an expired PDL is not accepted, and the learner will not be allowed to take the practical test.
Singapore:
A Qualified Driving Licence (QDL) is awarded to a person who has passed the practical test and made a one-time payment of S$50.00. Any person who has held a QDL for less than a year is required to display a probation plate at the top right of their front and rear windscreens. The probation plate is made of a reflective material and consists of an orange triangle on a yellow background. Failure to display it may result in a fine for a first offence and, subsequently, in the driving licence being revoked.
Singapore:
See Driving licence in Singapore for detailed requirements of each class of licence.
South Africa:
A South African learner's licence test consists of three sections, with the following pass criteria: rules of the road (28 questions, pass mark 22); vehicle controls (8 questions, pass mark 6); and road signs, road markings and traffic signals (28 questions, pass mark 23). There are primarily three codes to choose from: Code 1 - This is for motorcycles, motorised tricycles or quadricycles of not more than 125 cc, and the driver should be 16 or older on the date of the test. If the motorcycle engine is above 125 cc, the driver will need to be 17 years or older.
South Africa:
Code 2 - This is for motor vehicles, buses and minibuses or goods vehicles up to a maximum vehicle mass of 3,500 kg. The driver will need to be 17 years or older on the date of the test.
South Africa:
Code 3 - This is for motor vehicles exceeding a gross vehicle mass of 3,500 kg. The driver will need to be 18 years or older to apply for a learner's license in this category. The following documents need to be presented when applying for a learner's license: an identity card or passport, and two recent passport-size photographs (colour or black and white). In South Africa, any person who is of the minimum required age and holds a valid ID document may sit a learner's licence exam. The minimum required age varies by vehicle class: 16 years for a motorcycle (without a sidecar) with an engine not exceeding 125 cc; 17 years for light motor vehicles with a mass not exceeding 3,500 kilograms; and 18 years for all other vehicles (including motorcycles with an engine exceeding 125 cc). The Learner's Licence exam is a 64-question multiple-choice exam with questions spread over three sections: rules of the road (28 questions); signs, signals and road markings (28 questions); and vehicle controls (8 questions). The holder of a learner's licence is allowed to drive only when supervised by a licensed driver. If the category of vehicle being driven requires a professional driving permit, the supervising driver must also hold a professional driving permit. South African learners must carry their Learner's Licence with them whenever they are driving a vehicle and have L plates on the rear window. The Learner's Licence is valid for 24 months.
Sweden:
In Sweden, the minimum age to get a basic car learner's permit is 16; 17 years and six months is required for more advanced light vehicle combinations, and up to 23 years for heavy vehicle combinations (Körkortslag 4 kap. 2 §). A Swedish learner's permit does not require a test and only allows practising with a teacher; the teacher, including a private teacher such as a parent, must also have a permit. After a successful test, a real driver's license is issued, but there is a probationary period of two years: if the license is revoked during that period due to serious traffic violations or excessive speeding, all tests have to be retaken to get it back.
Thailand:
In Thailand, the minimum age is 18 years old to obtain a temporary driving licence for cars and motorcycles, valid for 2 years. For motorcycles 110 cc or smaller, the minimum age is 15. A temporary driving licence holder may drive without supervision, but cannot apply for an International Driving Permit.
Thailand:
After holding the temporary driving licence for at least 1 year, the licence holder may apply for a full 5-year driving licence for the same type of vehicle (2-year car to 5-year car, or 2-year motorcycle to 5-year motorcycle). A medical certificate and a physical evaluation of vision and reaction time are required. This process is commonly called "two to five", meaning a conversion of a two-year to a five-year licence, as opposed to a renewal of a full licence, "five to five". In the event a temporary small-motorcycle licence is set to expire while the holder is younger than 18, the new licence will be another two-year temporary licence. If the temporary licence has been expired for one year, a written examination is required; in the case of three years or longer, a practical exam and a lecture are also required.
United Kingdom:
In the United Kingdom, the minimum age at which a provisional licence is valid is 17 (16 for driving a tractor, riding a moped, or for those receiving Disability Mobility Allowance). When driving under a provisional licence, the learner must be accompanied by a driver who has held a full driving licence for three years and who is 21 or over. The supervisor has to be in view of the road; however, the Road Traffic Act 1988 states that the supervisor does not have to be in the passenger seat, although the passenger in the front seat does have to be over the age of 15. A full licence can be acquired as soon as the provisional licence is received, unlike in many other countries where applicants must wait a minimum of 6–12 months before getting a full licence. The provisional licence is available without taking a test, although to get a full, unrestricted licence, the applicant must take a written 'theory' test containing fifty multiple-choice questions and a fourteen-clip hazard perception test, both of which are done on a computer at one of the many DVSA (Driver and Vehicle Standards Agency) test centres. Once the learner has passed the theory test, they may take the practical driving test; however, the practical driving test has to be passed within 2 years of completing the theory test, as the theory test certificate expires 2 years after receiving it. Once the practical driving test has been passed, a full driving licence will be automatically issued. One can take the practical test immediately after the theory test, but most learner drivers take some time between them to take driving lessons, usually with a professional driving instructor.
United Kingdom:
A vehicle being driven by a learner driver must be fitted with L-plates on both the back and front of the vehicle. These tell other road users that the vehicle is being operated by a driver without a full licence, who may make mistakes and may not yet be fully competent. The L-plate consists of a white square plate (often tied to the vehicle or attached by magnets) with a large red L in the middle. (In Wales a D-plate (D for dysgwr, Welsh for "learner") may be used instead of an L-plate.) If the vehicle is operated by multiple named drivers (as specified by the car insurance policy), then the L-plate should be removed while the car is being driven by a holder of a full licence. When the learner has passed the test, they can display a non-compulsory 'P' plate, which shows that they have just passed their test and so may not have much experience on the road. The P plate has a white background with a green 'P'.
United Kingdom:
In the UK, provisional licence holders are not allowed to drive on motorways unless accompanied by a driving instructor and in a car fitted with dual controls. After gaining a full licence, the driver is subject to a probationary period: six or more penalty points accumulated within two years of passing the test lead to revocation of the licence, and both tests need to be retaken. In Northern Ireland, for one year after passing the driving test, the driver is defined as a "restricted driver" who must not exceed 45 mph (72 km/h) and must display an "R-plate" consisting of an amber sans-serif R on a white background.
United States:
In the United States, all states and Washington D.C. have graduated driver's license programs for teenage drivers. Although the specific requirements vary by state, in a typical program a minor must first obtain a learner's permit and meet specific requirements to qualify for an intermediate driver's license, before ultimately becoming eligible for a full driver's license.
United States:
Learner's permits:
In order for a minor to receive a learner's permit, sometimes called an instructional permit, states typically require that the minor have at least 6 practice hours before getting the permit and signed permission from a parent or guardian. In the state of New Hampshire, a permit is not given, but the young driver may begin to drive at the age of 15 and a half with a parent or guardian, or with an adult 25 years of age or older. Typically, a driver operating with a learner's permit must be accompanied by an adult licensed driver who is at least 21 years of age and in the passenger seat of the vehicle at all times. After a legally defined period of driving supervised with a permit, usually between six and twelve months, and upon reaching the requisite age, the holder of a learner's permit can apply for a provisional license. Obtaining a provisional license allows certain restrictions to be lifted from the driver, such as the times during which they are allowed to drive and the number of people allowed in the car.
United States:
Some states require the permit holder to document specific hours of driving under the permit before qualifying for an intermediate license, such as fifty hours of practice.
United States:
Intermediate license:
An intermediate or provisional license allows the driver to drive a vehicle without supervision by a licensed driver. Driving is typically permitted during a limited range of mostly daylight hours, as well as to and from school, work and religious activities. Some states may require a road test before allowing a learner's permit holder to obtain an intermediate license. In order to qualify for a provisional license, the applicant must typically be at least 16 years of age and must have previously held a learner's permit for at least six months. These requirements vary by state; for example, in Florida the prior period for holding a learner's permit is twelve months. In many states the period of driving on a learner's permit is shortened if the applicant is above the age of eighteen. For example, in Oklahoma, if a driver is 18 or older a learner's permit must only be held for one month before the driver qualifies for an intermediate license. Some states allow drivers over the age of twenty-one to bypass the entire graduated licensing process. For example, in Colorado, a driver over the age of twenty-one may apply for and take the tests for a permit and a full driver's license on the same day and, if successful in passing the tests, may obtain a full driver's license as soon as the driver passes a scheduled driving test. Intermediate drivers are normally restricted in their transportation of passengers, especially minor passengers, without supervision. In some states, such as California, Nebraska, Oregon, Maine, New York, Florida, Kansas, Illinois, Oklahoma and Arizona, permitted drivers may legally drive family members under the age of 21 without adult supervision if they possess a signed note from a legal guardian.
**Galvanostat**
Galvanostat:
A galvanostat (also known as amperostat) is a control and measuring device capable of keeping the current through an electrolytic cell in coulometric titrations constant, disregarding changes in the load itself.
Its main feature is its nearly "infinite" (i.e. extremely high in respect to common loads) internal resistance.
The designation "galvanostat" is mainly used in electrochemistry: this device differs from common constant current sources by its ability to supply and measure a wide range of currents (from picoamperes to amperes) of both polarities.
The galvanostat responds to changes in the resistance of the cell by varying its output potential: as Ohm's law, R = U/I, shows, the variable system resistance and the controlled voltage are directly proportional, i.e.
Uc = Rv × Io, where Io is the electric current that is kept constant, Uc is the output control voltage of the amperostat, and Rv is the electrical resistance that varies; thus, an increase of the load resistance implies an increase of the voltage the amperostat applies to the load.
Technical realization:
The simplest galvanostat consists of a high-voltage source producing a constant voltage U with a resistor Rx connected in series: in order to force an almost constant current through a load, this resistor must be much larger than the load resistor Rload. The current I through the load is given by I = U / (Rx + Rload), and if Rx >> Rload, the current I is approximately determined by Rx as I ≅ U / Rx. This simple realization requires rather high voltages (~100 V) to keep the load current constant with sufficient approximation for all practical purposes. Therefore, more complex versions of galvanostats, using electronic amplifiers with feedback and lower voltages, have been developed and produced. These instruments are capable of feeding constant currents in ranges from a few picoamperes (pA) to several amperes (A); typical constructions for use in the lower range of feed currents use operational amplifiers.
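As a rough numerical illustration of why choosing Rx >> Rload keeps the load current nearly constant, here is a minimal sketch (the component values are arbitrary examples, not design recommendations):

```python
# Simple series-resistor "galvanostat": I = U / (Rx + Rload).
# When Rx is much larger than Rload, variations in Rload barely change I.
U = 100.0        # source voltage in volts (example value)
Rx = 100_000.0   # series resistor in ohms, chosen to be >> Rload

for Rload in (10.0, 100.0, 1_000.0):           # load varying over two decades
    I = U / (Rx + Rload)                       # actual current through the load
    I_nominal = U / Rx                         # idealized "constant" current
    error = (I_nominal - I) / I_nominal
    print(f"Rload={Rload:7.1f} ohm  I={I * 1e3:.4f} mA  deviation={error:.2%}")
```

Even as the load varies by a factor of 100, the current changes by less than about 1%, which is the behavior the high-voltage, high-resistance realization relies on.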
Example of application:
Galvanostatic deposition techniques can be used for some thin film deposition applications where there is no need to control morphology of the thin film.
**Idle scan**
Idle scan:
An idle scan is a TCP port scan method for determining what services are open on a target computer without leaving traces pointing back at oneself. This is accomplished by using packet spoofing to impersonate another computer (called a "zombie") so that the target believes it's being accessed by the zombie. The target will respond in different ways depending on whether the port is open, which can in turn be detected by querying the zombie.
Overview:
This action can be done through common software network utilities such as nmap and hping. The attack involves sending forged packets to a specific target machine in an effort to find distinct characteristics of another zombie machine. The attack is sophisticated because there is no interaction between the attacker computer and the target: the attacker interacts only with the "zombie" computer.
Overview:
This exploit functions with two purposes: as a port scanner and as a mapper of trusted IP relationships between machines. The target system interacts with the "zombie" computer, and differences in behavior can be observed using different "zombies", giving evidence of different privileges granted by the target to different computers. The overall intention behind the idle scan is to "check the port status while remaining completely invisible to the targeted host."
Origins:
Discovered by Salvatore Sanfilippo (also known by his handle "Antirez") in 1998, the idle scan has been used by many black hat "hackers" to covertly identify open ports on a target computer in preparation for attacking it. Although it was originally named dumb scan, the term idle scan was coined in 1999, after the publication of a proof of concept 16-bit identification field (IPID) scanner named idlescan, by Filipe Almeida (aka LiquidK). This type of scan can also be referred to as a zombie scan; all the nomenclatures are due to the nature of one of the computers involved in the attack.
TCP/IP basics:
The design and operation of the Internet is based on the Internet Protocol Suite, commonly also called TCP/IP. IP is the primary protocol in the Internet Layer of the Internet Protocol Suite and has the task of delivering datagrams from the source host to the destination host solely based on their addresses. For this purpose, IP defines addressing methods and structures for datagram encapsulation. It is a connectionless protocol and relies on the transmission of packets. Every IP packet from a given source has an ID that uniquely identifies the IP datagram. TCP provides reliable, ordered delivery of a stream of bytes from a program on one computer to another program on another computer. TCP is the protocol that major Internet applications rely on, such as the World Wide Web, e-mail, and file transfer. Each of these applications (web server, email server, FTP server) is called a network service. In this system, network services are identified using two components: a host address and a port number. There are 65536 distinct and usable port numbers per host. Most services use a limited range of numbers by default, and the default port number for a service is almost always used.
TCP/IP basics:
Some port scanners scan only the most common port numbers, or ports most commonly associated with vulnerable services, on a given host. See: List of TCP and UDP port numbers.
The result of a scan on a port is usually generalized into one of three categories: Open or Accepted: The host sent a reply indicating that a service is listening on the port.
Closed or Denied or Not Listening: The host sent a reply indicating that connections will be denied to the port.
Filtered, Dropped or Blocked: There was no reply from the host. Open ports present two vulnerabilities of which administrators must be wary: security and stability concerns associated with the program responsible for delivering the service (open ports).
TCP/IP basics:
Security and stability concerns associated with the operating system that is running on the host (open or closed ports). Filtered ports do not tend to present vulnerabilities. A host in a local network can be protected by a firewall that filters packets according to rules set up by its administrator. This is done to deny services to unknown hosts and to prevent intrusion into the inside network.
TCP/IP basics:
The IP protocol is a network-layer transmission protocol.
Basic mechanics:
Idle scans take advantage of the predictable Identification field value in the IP header: every IP packet from a given source has an ID that uniquely identifies fragments of an original IP datagram, and protocol implementations generally assign values to this mandatory field by a fixed increment (1). Because transmitted packets are numbered sequentially, one can tell how many packets were transmitted between two packets that one receives.
Basic mechanics:
An attacker would first scan for a host with a sequential and predictable IP identification number (IPID). The latest versions of Linux, Solaris, OpenBSD, and Windows Vista are not suitable as zombies, since their IPID generation has been patched to be randomized. Computers chosen to be used in this stage are known as "zombies". Once a suitable zombie is found, the next step is to try to establish a TCP connection with a given service (port) of the target system, impersonating the zombie. This is done by sending a SYN packet to the target computer, spoofing the IP address of the zombie, i.e. with the source address equal to the zombie's IP address.
Basic mechanics:
If the port of the target computer is open it will accept the connection for the service, responding with a SYN/ACK packet back to the zombie.
The zombie computer will then send a RST packet to the target computer (to reset the connection) because it did not actually send the SYN packet in the first place.
Since the zombie had to send the RST packet, it will increment its IPID. This is how an attacker would find out if the target's port is open. The attacker will send another probe packet to the zombie; if the IPID has incremented by only one step, the attacker knows that the particular port is closed.
The method assumes that the zombie has no other network interactions: if any other message is sent by the zombie for other reasons between the attacker's first and second interactions with it, there will be a false positive.
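The bookkeeping described above can be illustrated with a short sketch (assuming the scapy library, root privileges, an authorized test environment, and hypothetical zombie and target addresses; this is an illustration of the IPID-delta logic, not nmap's or hping's implementation):

```python
from scapy.all import IP, TCP, sr1, send

ZOMBIE = "172.16.0.105"   # hypothetical idle host
TARGET = "172.16.0.100"   # hypothetical target
PORT = 22                 # target port to test

def zombie_ipid():
    # Probe the zombie with an unsolicited SYN/ACK; it answers with a RST
    # whose IP header carries the zombie's current IPID counter value.
    reply = sr1(IP(dst=ZOMBIE) / TCP(dport=80, flags="SA"), timeout=2, verbose=0)
    assert reply is not None, "zombie did not answer the probe"
    return reply[IP].id

ipid_before = zombie_ipid()

# Spoofed SYN to the target, with the zombie as the apparent source.
# If PORT is open, the target SYN/ACKs the zombie, which replies RST (IPID +1).
# If PORT is closed, the target RSTs the zombie, which stays silent (no increment).
send(IP(src=ZOMBIE, dst=TARGET) / TCP(dport=PORT, flags="S"), verbose=0)

ipid_after = zombie_ipid()

delta = ipid_after - ipid_before
print("open" if delta >= 2 else "closed or filtered")
```

A delta of 2 between the two zombie probes suggests the port is open (one RST sent to the target plus the RST answering the second probe); a delta of 1 suggests it is closed or filtered. The check assumes an otherwise idle zombie, as noted above.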
Finding a zombie host:
The first step in executing an idle scan is to find an appropriate zombie. It needs to assign IP ID values incrementally on a global basis (rather than per host it communicates with). It should be idle (hence the scan name), as extraneous traffic will bump up its IP ID sequence, confusing the scan logic. The lower the latency between the attacker and the zombie, and between the zombie and the target, the faster the scan will proceed.
Finding a zombie host:
Note that when a port is open, the IPID increments by 2 over the exchange. The sequence is as follows: 1. Attacker to target: SYN (spoofed from the zombie); target to zombie: SYN/ACK; zombie to target: RST (IPID incremented by 1). 2. The attacker now probes the zombie for the result: attacker to zombie: SYN/ACK; zombie to attacker: RST (IPID incremented by 1). So, in this process, the IPID increments by 2 in total.
Finding a zombie host:
When an idle scan is attempted, tools (for example nmap) test the proposed zombie and report any problems with it. If one doesn't work, try another. Enough Internet hosts are vulnerable that zombie candidates aren't hard to find.
Finding a zombie host:
A common approach is to simply execute a ping sweep of some network. Choosing a network near your source address, or near the target, produces better results. You can try an idle scan using each available host from the ping sweep results until you find one that works. As usual, it is best to ask permission before using someone's machines for unexpected purposes such as idle scanning.
Finding a zombie host:
Simple network devices often make great zombies because they are commonly both underused (idle) and built with simple network stacks which are vulnerable to IP ID traffic detection.
While identifying a suitable zombie takes some initial work, you can keep re-using the good ones. Alternatively, there has been some research on utilizing unintended public web services as zombie hosts to perform similar idle scans. Leveraging the way some of these services perform outbound connections upon user submissions can serve as a kind of poor man's idle scanning.
Using hping:
The hping method for idle scanning provides a lower level example for how idle scanning is performed. In this example the target host (172.16.0.100) will be scanned using an idle host (172.16.0.105). An open and a closed port will be tested to see how each scenario plays out.
First, establish that the idle host is actually idle: send packets using hping2 and observe whether the id numbers increase incrementally by one. If the id numbers increase haphazardly, the host is not actually idle, or has an OS with no predictable IP ID.
Send a spoofed SYN packet to the target host on a port you expect to be open. In this case, port 22 (ssh) is being tested.
Since we spoofed the packet, we did not receive a reply and hping reports 100% packet loss. The target host replied directly to the idle host with a syn/ack packet. Now, check the idle host to see if the id number has increased.
Notice that the idle host's id increased from id=1379 to id=1381; 1380 was consumed when the idle host replied to the target host's syn/ack packet with an rst packet.
Run through the same processes again testing a port that is likely closed. Here we are testing port 23 (telnet).
Notice that this time, the id did not increase because the port was closed. When we sent the spoofed packet to the target host, it replied to the idle host with an rst packet which did not increase the id counter.
Using nmap:
The first thing the user would do is to find a suitable zombie on the LAN: Performing a port scan and OS identification (-O option in nmap) on the zombie candidate network rather than just a ping scan helps in selecting a good zombie. As long as verbose mode (-v) is enabled, OS detection will usually determine the IP ID sequence generation method and print a line such as “IP ID Sequence Generation: Incremental”. If the type is given as Incremental or Broken little-endian incremental, the machine is a good zombie candidate. That is still no guarantee that it will work, as Solaris and some other systems create a new IP ID sequence for each host they communicate with. The host could also be too busy. OS detection and the open port list can also help in identifying systems that are likely to be idle.
Using nmap:
Another approach to identifying zombie candidates is to run the ipidseq NSE script against a host. This script probes a host to classify its IP ID generation method, then prints the IP ID classification much like the OS detection does. Like most NSE scripts, ipidseq.nse can be run against many hosts in parallel, making it another good choice when scanning entire networks looking for suitable hosts.
Using nmap:
nmap -v -O -sS 192.168.1.0/24 This tells nmap to do a ping sweep and show all hosts that are up in the given IP range. Once you have found a zombie, you would next send the spoofed packets: nmap -P0 -p <port> -sI <zombie IP> <target IP>
Effectiveness:
Although many operating systems are now immune to being used in this attack, some popular systems are still vulnerable, making the idle scan still very effective. Once a successful scan is completed, there is no trace of the attacker's IP address on the target's firewall or intrusion-detection system log. Another useful possibility is the chance of bypassing a firewall, because the target is scanned from the zombie computer, which might have more rights than the attacker's machine.
**Blood lancet**
Blood lancet:
A blood lancet, or simply lancet, is a small medical implement used for capillary blood sampling. A blood lancet, sometimes called a lance, is similar to a small scalpel, but with a double-edged blade and a pointed end. It can even be a specialized type of sharp needle. Lancets are used to make punctures, such as a fingerstick, to obtain small blood specimens. Blood lancets are generally disposable.
Blood lancet:
Lancets are also used to prick the skin in dermatological testing for allergies. A blood-sampling device, also known as a lancing device, is an instrument equipped with a lancet. It is most commonly used by diabetic patients during blood glucose monitoring. The depth of skin penetration can be adjusted for various skin thicknesses. Long lancing devices are used for fetal scalp blood testing to obtain a measure of the acid-base status of the fetus.
Blood sampling:
The small capillary blood samples obtained can be tested for blood glucose, hemoglobin, and many other blood components.
**Wont**
Wont:
A wont is a habit, or routine of behavior that is repeated regularly and tends to occur subconsciously.
Wont may also refer to: Won't, the English contraction for "will not".
Broadcast stations: WBYD-CD 39 Johnstown, Pennsylvania, a TV station that used the callsign WONT-LP from January 2001 to February 2002; 101.1 WUPY Ontonagon, Michigan, an FM station that used the callsign WONT from 1983 to 1989.
**Armstrong's mixture**
Armstrong's mixture:
Armstrong's mixture is a highly shock and friction sensitive primary explosive. Formulations vary, but one consists of 67% potassium chlorate, 27% red phosphorus, 3% sulfur, and 3% calcium carbonate. It is named for Sir William Armstrong, who invented it sometime prior to 1872 for use in explosive shells.
Toys:
Armstrong's mixture can be used as ammunition for toy cap guns. The mixture is suspended in water with some gum arabic or similar binder and deposited in drops, each containing a few milligrams of explosive, to dry between layers of paper backing. The dots explode with some smoke when struck. Armstrong's mixture can be used in impact firecrackers known as cap torpedoes, which explode on impact when the ball (made of clay or papier-mâché) is thrown or launched by slingshot. The firecrackers may include gravel with the explosive mixture to ensure detonation.
Military use:
With the addition of a grit such as boron carbide (in a modified formulation given as 70% KClO3, 19% red phosphorus, 3% sulfur, 3% chalk, and 5% boron carbide by weight), Armstrong's mixture has been considered for use in firearm primers. This use as a primer for artillery propellants may have been Armstrong's original purpose. Armstrong's mixture has been used in thrown impact-detonated improvised explosive devices, made simply by loading it into hollow balls. It also was seen in various patents for matches, novelty fireworks, and signalling devices.
Safety:
Armstrong's mixture is both very sensitive and very explosive, a dangerous combination that limits its practical use to toy caps. Such toy caps and fireworks typically contain no more than 10 milligrams each, but gram quantities can cause maiming hand injuries. The mixture is likely to explode if mixed dry and is even dangerous wet. It is recommended that Armstrong's mixture be prepared as a slurry in water and adjusted to a slightly basic pH with an alkaline buffer, such as calcium carbonate, in order to neutralize any acid that may have been generated by oxidized phosphorus on contact with the water, which would cause it to deteriorate while slowly drying. The wet slurry or paste is loaded into the fireworks, then allowed to dry. Simple mixtures of red phosphorus and potassium chlorate can detonate at a wide range of proportions; a 20% phosphorus mixture had 27% of the equivalent power of a like mass of TNT in a laboratory experiment, and the detonation of the 10% and 20% phosphorus mixtures even in small unconfined samples of 1 gram was described by the authors as "impressive" and "scary". Pyrotechnician John Donner wrote in 1996 that it "is the most hazardous mixture commonly used in small fireworks." Tenney Davis called it "a combination which is the most sensitive, dangerous, and unpredictable of the many with which the pyrotechnist has to deal. Their preparation ought under no conditions to be attempted by an amateur." Toy charges, such as the several-milligram dots used for cap guns, are individually harmless but potentially dangerous in large numbers. On May 14, 1878, such an accident occurred in Paris: a store containing some six to eight million paper caps, totaling about 64 kilograms of explosive mass, caught fire and exploded, killing 14 and injuring 16 more.
**Count–min sketch**
Count–min sketch:
In computing, the count–min sketch (CM sketch) is a probabilistic data structure that serves as a frequency table of events in a stream of data. It uses hash functions to map events to frequencies, but unlike a hash table uses only sub-linear space, at the expense of overcounting some events due to collisions. The count–min sketch was invented in 2003 by Graham Cormode and S. Muthu Muthukrishnan and described by them in a 2005 paper.Count–min sketches are essentially the same data structure as the counting Bloom filters introduced in 1998 by Fan et al. However, they are used differently and therefore sized differently: a count–min sketch typically has a sublinear number of cells, related to the desired approximation quality of the sketch, while a counting Bloom filter is more typically sized to match the number of elements in the set.
Data structure:
The goal of the basic version of the count–min sketch is to consume a stream of events, one at a time, and count the frequency of the different types of events in the stream. At any time, the sketch can be queried for the frequency of a particular event type i from a universe of event types U , and will return an estimate of this frequency that is within a certain distance of the true frequency, with a certain probability.The actual sketch data structure is a two-dimensional array of w columns and d rows. The parameters w and d are fixed when the sketch is created, and determine the time and space needs and the probability of error when the sketch is queried for a frequency or inner product. Associated with each of the d rows is a separate hash function; the hash functions must be pairwise independent. The parameters w and d can be chosen by setting w = ⌈e/ε⌉ and d = ⌈ln 1/δ⌉, where the error in answering a query is within an additive factor of ε with probability 1 − δ (see below), and e is Euler's number.
Data structure:
When a new event of type i arrives we update as follows: for each row j of the table, apply the corresponding hash function to obtain a column index k = hj(i). Then increment the value in row j, column k by one.
Several types of queries are possible on the stream.
The simplest is the point query, which asks for the count of an event type i. The estimated count is given by the least value in the table for i, namely â_i = min_j count[j, h_j(i)], where count is the table. Obviously, for each i, one has a_i ≤ â_i, where a_i is the true frequency with which i occurred in the stream.
Additionally, this estimate has the guarantee that â_i ≤ a_i + εN with probability 1 − δ, where N = Σ_{i∈U} a_i is the stream size, i.e. the total number of items seen by the sketch.
An inner product query asks for the inner product between the histograms represented by two count–min sketches, count_a and count_b. Small modifications to the data structure can be used to sketch other stream statistics.
Like the Count sketch, the Count–min sketch is a linear sketch. That is, given two streams, constructing a sketch on each stream and summing the sketches yields the same result as concatenating the streams and constructing a sketch on the concatenated streams. This makes the sketch mergeable and appropriate for use in distributed settings in addition to streaming ones.
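To make the update, point query, and merge operations concrete, here is a minimal Python sketch (not the authors' reference implementation; the per-row hashing uses Python's built-in hash with a random salt as a simple stand-in for the pairwise independent hash families the analysis assumes):

```python
import math
import random

class CountMinSketch:
    """Minimal count-min sketch: w = ceil(e/eps) columns, d = ceil(ln(1/delta)) rows."""

    def __init__(self, eps=0.01, delta=0.01, seed=0):
        self.w = math.ceil(math.e / eps)            # width bounds the additive error eps*N
        self.d = math.ceil(math.log(1.0 / delta))   # depth bounds the failure probability delta
        rng = random.Random(seed)
        self._salts = [rng.getrandbits(64) for _ in range(self.d)]  # one hash per row
        self.table = [[0] * self.w for _ in range(self.d)]

    def _index(self, row, item):
        return hash((self._salts[row], item)) % self.w

    def add(self, item, count=1):
        # Update: increment one counter per row at the hashed column.
        for j in range(self.d):
            self.table[j][self._index(j, item)] += count

    def query(self, item):
        # Point query: the minimum over the d counters never underestimates the true count.
        return min(self.table[j][self._index(j, item)] for j in range(self.d))

    def merge(self, other):
        # Linearity: cell-wise sums give the sketch of the concatenated streams
        # (both sketches must share w, d, and the same hash salts).
        for j in range(self.d):
            for k in range(self.w):
                self.table[j][k] += other.table[j][k]

# Example usage
cms = CountMinSketch(eps=0.001, delta=0.01)
for word in ["a", "b", "a", "c", "a"]:
    cms.add(word)
print(cms.query("a"))  # at least 3, and close to 3 with high probability
```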
Reducing bias and error:
One potential problem with the usual min estimator for count–min sketches is that it is a biased estimator of the true frequency of events: it may overestimate, but never underestimate, the true count in a point query. Furthermore, while the min estimator works well when the distribution is highly skewed, other sketches, such as the Count sketch based on means, are more accurate when the distribution is not sufficiently skewed. Several variations on the sketch have been proposed to reduce error and reduce or eliminate bias. To remove bias, the hCount* estimator repeatedly selects d random entries in the sketch and takes their minimum to obtain an estimate of the bias, which is then subtracted off.
Reducing bias and error:
A maximum likelihood estimator (MLE) was derived in Ting. By using the MLE, the estimator is always able to match or better the min estimator and works well even if the distribution is not skewed. This paper also showed the hCount* debiasing operation is a bootstrapping procedure that can be efficiently computed without random sampling and can be generalized to any estimator.
Reducing bias and error:
Since errors arise from hash collisions with unknown items from the universe, several approaches correct for the collisions when multiple elements of the universe are known or queried for simultaneously. For each of these, a large proportion of the universe must be known to observe a significant benefit.
Conservative updating changes the update, but not the query, algorithm. To count c instances of event type i, one first computes an estimate â_i = min_j count[j, h_j(i)], then updates count[j, h_j(i)] to max{count[j, h_j(i)], â_i + c} for each row j. While this update procedure makes the sketch not a linear sketch, it is still mergeable.
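A sketch of the conservative update rule, reusing the hypothetical CountMinSketch class from the example above (the helper name conservative_add is ours, not from the literature):

```python
def conservative_add(cms, item, count=1):
    # Estimate first; then raise each row's counter only as far as (estimate + count),
    # leaving counters that are already at or above that value untouched.
    target = cms.query(item) + count
    for j in range(cms.d):
        k = cms._index(j, item)
        if cms.table[j][k] < target:
            cms.table[j][k] = target
```

Because counters are only ever raised to the new estimate, point queries still never underestimate, but the overcounting caused by collisions is reduced.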
**PLATO (spacecraft)**
PLATO (spacecraft):
PLAnetary Transits and Oscillations of stars (PLATO) is a space telescope under development by the European Space Agency for launch in 2026. The mission goals are to search for planetary transits across up to one million stars, and to discover and characterize rocky extrasolar planets around yellow dwarf stars (like the Sun), subgiant stars, and red dwarf stars. The emphasis of the mission is on Earth-like planets in the habitable zone around Sun-like stars where water can exist in a liquid state. It is the third medium-class mission in ESA's Cosmic Vision programme and is named after the influential Greek philosopher Plato. A secondary objective of the mission is to study stellar oscillations or seismic activity in stars to measure stellar masses and evolution and enable the precise characterization of the planet host star, including its age.
History:
PLATO was first proposed in 2007 to the European Space Agency (ESA) by a team of scientists in response to the call for ESA's Cosmic Vision 2015–2025 programme. The assessment phase was completed during 2009, and in May 2010 it entered the definition phase. Following a call for missions in July 2010, ESA selected in February 2011 four candidates for a medium-class mission (M3 mission) for a launch opportunity in 2024. PLATO was announced on 19 February 2014 as the selected M3-class science mission for implementation as part of the Cosmic Vision Programme. Other competing concepts that were studied included the four candidate missions EChO, LOFT, MarcoPolo-R and STE-QUEST. In January 2015, ESA selected Thales Alenia Space, Airbus DS, and OHB System AG to conduct three parallel phase B1 studies to define the system and subsystem aspects of PLATO, which were completed in 2016. On 20 June 2017, ESA adopted PLATO in the Science Programme, which meant that the mission could move from a blueprint into construction. In the following months, industry was asked to make bids to supply the spacecraft platform.
History:
PLATO is an acronym, but also the name of a philosopher of Classical Greece; Plato (428–348 BC) sought a physical law accounting for the orbits of the planets (the "errant stars") that could satisfy the philosopher's need for "uniformity" and "regularity".
Management:
The PLATO Mission Consortium (PMC) that is responsible for the payload and major contributions to the science operations is led by Prof. Heike Rauer at the German Aerospace Center (DLR) Institute of Planetary Research. The design of the Telescope Optical Units is made by an international team from Italy, Switzerland and Sweden and coordinated by Roberto Ragazzoni at INAF (Istituto Nazionale di Astrofisica) Osservatorio Astronomico di Padova. The Telescope Optical Unit development is funded by the Italian Space Agency, the Swiss Space Office and the Swedish National Space Board. The PMC Science Management (PSM), composed of more than 100 experts, is coordinated by Prof. Don Pollacco of the University of Warwick and provides expertise for the preparation of the PLATO Input Catalogue (PIC), the identification of the optimal fields for PLATO to observe, the coordination of follow-up observations, and the scientific validation of PLATO's data products.
Objective:
The objective is the detection of terrestrial exoplanets up to the habitable zone of solar-type stars and the characterization of their bulk properties needed to determine their habitability. To achieve this objective, the mission has these goals: to discover and characterize many nearby exoplanetary systems, with precision of up to 3% in the determination of the planets' radii, up to 10% for stellar age, and up to 10% for planet mass (the latter in combination with ground-based radial velocity measurements); to detect and characterize Earth-sized planets and super-Earths in the habitable zone around solar-type stars; to discover and characterize many exoplanetary systems in order to study their typical architectures and their dependence on the properties of their host stars and environment; to measure stellar oscillations to study the internal structure of stars and how it evolves with age; and to identify good targets for spectroscopic measurements to investigate exoplanet atmospheres. PLATO will differ from the CoRoT, TESS, CHEOPS, and Kepler space telescopes in that it will study relatively bright stars (between magnitudes 4 and 11), enabling a more accurate determination of planetary parameters and making it easier to confirm planets and measure their masses using follow-up radial velocity measurements on ground-based telescopes. Its dwell time will be longer than that of the TESS NASA mission, making it sensitive to longer-period planets.
Design:
Optics:
The PLATO payload is based on a multi-telescope approach, involving 26 cameras in total: 24 "normal" cameras organized in 4 groups, and 2 "fast" cameras for bright stars. The 24 "normal" cameras work at a readout cadence of 25 seconds and monitor stars fainter than apparent magnitude 8. The two "fast" cameras work at a cadence of 2.5 seconds and observe stars between magnitude 4 and 8. The cameras are refracting telescopes using six lenses; each camera has a 1,100 deg² field and a 120 mm lens diameter. Each camera is equipped with its own CCD staring array, consisting of four CCDs of 4510 × 4510 pixels. The 24 "normal" cameras will be arranged in four groups of six cameras with their lines of sight offset by a 9.2° angle from the +ZPLM axis. This particular configuration allows surveying an instantaneous field of view of about 2,250 deg² per pointing. The space observatory will rotate around the mean line of sight once per year, delivering a continuous survey of the same region of the sky.
Launch:
The space observatory is planned to launch in 2026 to the Sun-Earth L2 Lagrange point.
Data release schedule:
The public release of photometric data (including light curves) and high-level science products for each quarter will be made between six months and one year after the end of their validation period. The data are processed by quarters because this is the duration between each 90-degree rotation of the spacecraft. For the first quarter of observations, six months are required for data validation and pipeline updates; for subsequent quarters, three months will be needed. A small number of stars (no more than 2,000 out of 250,000) will have proprietary status, meaning the data will only be accessible to PLATO Mission Consortium members for a given period. They will be selected using the first three months of PLATO observations for each field. The proprietary period is limited to 6 months after the completion of the ground-based observations or the end of the mission archival phase (launch date + 7.5 years), whichever comes first. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Halterneck**
Halterneck:
Halterneck is a style of women's clothing strap that runs from the front of the garment around the back of the neck, generally leaving the upper back uncovered. The name comes from livestock halters. The word "halter" derives from the Germanic words meaning "that by which anything is held". Halter is part of the German word for bra, Büstenhalter. The halter style is used with swimsuits, to maximize sun tan exposure on the back and minimize tan lines. It is also used with dresses or shirts, to create a backless dress or top. The neck strap can itself be covered by the wearer's hair, leaving the impression from behind that nothing is holding the dress or shirt up.
If a bra is worn with a halter top, it is generally either strapless or of halterneck construction itself, to avoid exposing the back straps of a typical bra.
A halter top is a type of sleeveless shirt similar to a tank top (by the American English definition) but with the straps being tied behind the neck. In another style of the halter top, there is only a narrow strap behind the neck and a narrow strap behind the middle of the back, so that it is mostly backless. This design resembles many bikini tops, although it covers the chest more and may cover some, all or even none of the abdomen at the front.
It has been suggested that the neckline's appeal stems from the fact that "it eliminated the need for spoiling the back detail with straps, leaving an uninterrupted area of skin to expose to the sun by day and display by night." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**California car (streetcar)**
California car (streetcar):
A California Car is a type of single-deck tramcar or streetcar that features a center, enclosed seating compartment and roofed seating areas without sides on either end. These cars were popular in California's mild Mediterranean climate, offering passengers a choice of shaded outdoor seating during hot weather or more protected seating during cool or rainy weather. They were also used in other climates to provide separate outdoor smoking and enclosed non-smoking areas. Some very early motor buses also used the combination car design. Early San Francisco cable car lines used two cars: a grip car (or "dummy") which contained the grip mechanism and a brake, and the trailer which carried passengers. A new car, called a combination car, was eventually developed which combined the trailer and the grip car into one vehicle. The combination car had one enclosed end and an open end with seats and the grip.
In 1888, the California Street Cable Railroad Company commissioned a new car from John L. Hammond and Co., with two open ends and a center enclosed section. Placed in service in 1889, this double-ended combination car, dubbed a “California Type” car, could be operated from either end, which eliminated the need for a turntable at the ends of the lines. The design was also applied to many electric streetcars in the late 1890s and 1900s. Henry Huntington’s engineers developed a standard streetcar design in the California Car style in 1902 for his Los Angeles Railway (LARy). Called the “Huntington Standard”, it featured a center enclosed section, open sections on either end with wire sides rather than solid sides, and a distinctive five-window front and rear. Eventually, LARy had 747 of these cars in service. The cars were featured in early silent movies, becoming indelibly linked in moviegoers’ minds with southern California. Other railways that adopted the design included the Pacific Electric Railway, the San Francisco, Napa and Calistoga Railway, and the Key System’s East Shore and Suburban Railway. California cars are still operational on the San Francisco cable car system. Both the single-ended cars on the Powell–Hyde and Powell–Mason lines, and the double-ended cars on the California Street line, are of this type. The single-ended cars have a single open section at the front of the car, with a closed compartment at the rear, whilst the double-ended cars have a central closed compartment flanked by open areas. Several California cars are now preserved and/or used on heritage operations. These include:
- Los Angeles Railway 521, an electric streetcar now preserved by the Seashore Trolley Museum
- Manchester Corporation Tramways 765, an electric tram now preserved and operated on its home city's Heaton Park Tramway
- San Francisco Muni 578, an electric streetcar of similar design to that city's cable cars, now preserved by the Market Street Railway and occasionally operated | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Thai fabrics**
Thai fabrics:
Thai fabrics are Thai handicraft products that reflect the flourishing of Thai national culture and the creativity of the nation in making goods and clothing for daily use. Thai fabric is hand-woven cloth produced in Thailand; it is a cultural heritage unique to Thai culture and is now famous throughout the world.
Background of Thai fabrics:
Thai people have known weaving since prehistoric times. In rural culture and society, weaving was regarded as a woman's leisure activity after her main work of rice planting or farming, and this was common to all regions of the country. The development of products and of the design, pattern and color of the fabric reflected the weaver's imagination as well as other outside influences. Thai fabric is a core clothing material and is indicative of social status, including the wearer's position in society. Weaving can therefore be classified into three main types. The first type is for ordinary people: everyday fabric, and fabric used on special occasions connected with faith and tradition, such as religious ceremonies, carnivals, festivals and other important ceremonies. The second type is for the privileged class, royalty and the monarch, and uses significant fabrics such as ancient embroidered cloth and other extraordinary types of fabric. The third type is for monks and for Buddhist scriptures. Thai fabric has many styles and a local identity in each region, which have evolved over many generations; for example, in the 14th–16th centuries AD, Northern Thailand was the location of the Lan Na or Lanna Kingdom. The Lanna people are said to have been skilled in weaving, particularly of cotton, and weaving spread widely, with cloth distributed to neighboring kingdoms. The colorful cottons of this era were extremely outstanding.
The Kingdom of Sukhothai - Around 755 years ago, the Sukhothai people wove both cotton and silk fabrics. In particular, a cotton called "Benjarong Cotton", a traditional Thai five-colored cotton, became famous in the Rattanakosin period. The general public used common cotton, whereas top-grade cotton was used in the royal court, which had royal tailors and also ordered some fabrics from abroad, such as silk and satin from China. Moreover, fabrics were used to decorate houses or to make products such as mattresses, pillows and curtains.
Ayutthaya Kingdom - Around 400 years ago, fabrics were important in the country's trade and economy and were also used as currency instead of money. The king used them as rewards, or as part of the annual royal salary, in the form of embroidered and silk-woven cloth with a patterned center. Among the general public, men often wore a loincloth and a cummerbund (from the Persian "kamar band") and women wore a shawl.
Early Rattanakosin Kingdom - Local weaving in Thailand had spread to almost all regions, but especially to the Northeast and the North of Thailand. The pattern of the fabric varies according to the ideology, beliefs and traditions of each ethnic group.
Thai fabrics in regions of Thailand:
Local Thai fabrics can be divided by region as follows.
Fabrics in the north of Thailand
Woven fabrics of the north of Thailand come from the area of the former Lanna kingdom, which today comprises Chiang Rai, Phayao, Nan, Phrae, Lampang, Chiang Mai and Mae Hong Son, together with some land now in Myanmar, China and Laos. The Tai Yuan people of Thailand had their own fabric culture, from weaving and creating patterns to the way cloth was worn. For example, men wore garments hiked up to the thigh to show off tattoos extending from above the knee up to the thigh; they did not wear a shirt but draped a cloth across the shoulders, while the elite wore a shirt and cummerbund. Women wore sinhs (Lao-style tube skirts) with a striped, linear-patterned body, and wore a breast cloth instead of a shirt. They often wore their hair in a bun in the middle of the head, fastened with pins or flowers. This dress also appears in mural paintings, such as those at Wat Phra Sing Waramahavihan in Chiang Mai.
Fabrics in the northeast of Thailand
In general, the northeast is a farming society, and the major occupation is rice farming. It takes about 7–9 months from planting to harvest each year, through the rainy season and winter. During the roughly 3–5 months of the dry season, people in the northeast have spare time to prepare the equipment and tools used in everyday life, and to take part in traditions of philanthropy and relaxation; this is the yearly cycle of life. Local fabrics of the northeast have been well known since ancient times. There are two types of woven fabric, from cotton and from silk, although polyester blends such as synthetic thread and Toray silk are also used. Weaving here is characterized by the same people carrying out every step, from planting cotton and mulberry (to feed silkworms), to reeling thread from the cocoons, to purifying and dyeing the yarn.
Fabrics in the central region of Thailand
Tai–Lao migrants settled in various localities in the central region and as a result were dispersed among many places. Most of these communities still weave fabric for clothing in the typical style and popular tradition handed down from their predecessors. There are important local weaving groups, for instance the Tai Puan weaving group in Sukhothai province and the Tai Puan from Laos. In the era of King Rama III, some groups dispersed and settled in areas of Suphan Buri, Maha Sarakham and other provinces. The central region once had woven fabrics in many localities, but when weaving became industrialized, much local weaving declined. Some weaving continues, but the patterns have often been changed to meet the needs of consumers.
Fabrics in the south of Thailand
Movements of people, often for political reasons, brought not only Muslims but also Malay settlers to the southern provinces. Some families moved into Thailand and settled in lands that had formerly belonged to Thailand, for example communities that moved to Thailand from Kelantan and Thairaburi. This produced a mixture of peoples and a blending of cultures under pressure, which shaped the weaving culture of the South. It began first in Nakhon Si Thammarat and then spread to other areas.
Maintenance of Thai fabrics and Thai clothes:
Different types of fabric have different characteristics: some are thick, some are fine, and some have a visible texture. The following are suggestions on how to maintain Thai fabrics and Thai clothes.
People interested in Thai fabrics and Thai clothes often fear the trouble of maintenance or the high cost of dry cleaning. In fact, they can maintain Thai fabrics and Thai clothes themselves without great cost, provided they are informed, accurate and careful. Proper maintenance and cleaning keep the fabrics beautiful and durable while saving money.
Thai fabrics are mostly worn on special occasions such as traditional weddings and banquets. They can be worn several times before washing, because Thai fabrics are woven from natural fibers, especially silk, which do not readily collect dust.
When wearing Thai cloth, be careful when choosing a seat, and do not let water or spilled food get onto the clothes. After wearing, hang the clothes to dry in an airy place so that moisture and odors can escape, then brush off dust and clean them without washing.
Keep fabrics away from light, because light can damage them.
Consult an expert adviser on controlling lighting, temperature and humidity.
Consult an expert adviser on how best to care for valuable fabrics, including how to clean them.
Do not keep cotton and linen in wooden drawers.
Make sure that the area where you store clothes is dark, dry and cool.
Consult an expert adviser on the best way to store the fabric, whether hung or covered.
It is very important to check regularly for insects that might damage your clothes.
You should vacuum your clothes regularly. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Uterine rupture**
Uterine rupture:
Uterine rupture is when the muscular wall of the uterus tears during pregnancy or childbirth. Symptoms, while classically including increased pain, vaginal bleeding, or a change in contractions, are not always present. Disability or death of the mother or baby may result. Risk factors include vaginal birth after cesarean section (VBAC), other uterine scars, obstructed labor, induction of labor, trauma, and cocaine use. While rupture typically occurs during labor, it may occasionally happen earlier in pregnancy. Diagnosis may be suspected based on a rapid drop in the baby's heart rate during labor. Uterine dehiscence is a less severe condition in which there is only incomplete separation of the old scar. Treatment involves rapid surgery to control bleeding and delivery of the baby. A hysterectomy may be required to control the bleeding. Blood transfusions may be given to replace blood loss. Women who have had a prior rupture are generally recommended to have C-sections in subsequent pregnancies. Rates of uterine rupture during vaginal birth following one previous C-section, done by the typical technique, are estimated at 0.9%. Rates are greater among those who have had multiple prior C-sections or an atypical type of C-section. In those who do not have uterine scarring, the risk during a vaginal birth is about 1 per 12,000. Risk of death of the baby is about 6%. Those in the developing world appear to be affected more often and have worse outcomes.
Signs and symptoms:
Symptoms of a rupture may be initially quite subtle. An old cesarean scar may undergo dehiscence; with further labor the woman may experience abdominal pain and vaginal bleeding, though these signs are difficult to distinguish from normal labor. Often a deterioration of the fetal heart rate is a leading sign, but the cardinal sign of uterine rupture is loss of fetal station on manual vaginal exam. Intra-abdominal bleeding can lead to hypovolemic shock and death. Although the associated maternal mortality is now less than one percent, the fetal mortality rate is between two and six percent when rupture occurs in the hospital.
In pregnancy uterine rupture may cause a viable abdominal pregnancy. This is what accounts for most abdominal pregnancy births.
Signs and symptoms may include:
- Abdominal pain and tenderness. The pain may not be severe; it may occur suddenly at the peak of a contraction. The woman may describe a feeling that something "gave way" or "ripped."
- Chest pain, pain between the scapulae, or pain on inspiration, caused by the irritation of blood below the woman's diaphragm
- Hypovolemic shock caused by bleeding, evidenced by falling blood pressure, tachycardia, tachypnea, pallor, cool and clammy skin, and anxiety. The fall in blood pressure is often a late sign of haemorrhage
- Signs associated with fetal oxygenation, such as late deceleration, reduced variability, tachycardia, and bradycardia
- Absent fetal heart sounds with a large disruption of the placenta; absent fetal heart activity by ultrasound examination
- Cessation of uterine contractions
- Palpation of the fetus outside the uterus (usually occurs only with a large, complete rupture). The fetus is likely to be dead at this point.
- Signs of an abdominal pregnancy
- Post-term pregnancy
Risk factors:
A uterine scar from a previous cesarean section is the most common risk factor. (In one review, 52% had previous cesarean scars.) Other forms of uterine surgery that result in full-thickness incisions (such as a myomectomy), dysfunctional labor, labor augmentation by oxytocin or prostaglandins, and high parity may also set the stage for uterine rupture. In 2006, an extremely rare case of uterine rupture in a first pregnancy with no risk factors was reported. Uterine rupture during pregnancy without a prior cesarean section is one of the major diagnostic criteria for vascular Ehlers-Danlos syndrome (vEDS).
Mechanism:
In an incomplete rupture the peritoneum is still intact. With a complete rupture the contents of the uterus spill into the peritoneal cavity or the broad ligament.
Treatment:
Emergency exploratory laparotomy with cesarean delivery accompanied by fluid and blood transfusion are indicated for the management of uterine rupture. Depending on the nature of the rupture and the condition of the patient, the uterus may be either repaired or removed (cesarean hysterectomy). Delay in management places both mother and child at significant risk. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Visual Logic**
Visual Logic:
Visual Logic is a graphical authoring tool which allows students to write and execute programs using flowcharts. It is typically used in an academic setting to teach introductory programming concepts. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Trans-Resveratrol-3-O-glucuronide**
Trans-Resveratrol-3-O-glucuronide:
trans-Resveratrol-3-O-glucuronide is a metabolite of resveratrol and trans-resveratrol-3-O-glucoside (piceid). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ticket platform**
Ticket platform:
A ticket platform was a platform situated outside a passenger railway station to allow passengers' tickets to be collected.
These platforms were unpopular as they delayed trains just a short distance outside the station, but they did enable railway staff to collect tickets before passengers had a chance to leave the station.
Ticket platforms fell out of use when corridor coaches became common as these allowed on-board ticket collection.
The former ticket platform on the approach to Oban railway station in Scotland is still in place beside the railway, as is the one outside Liverpool Street Station, London, to the south of the line. Ticket platforms are not to be confused with platform tickets. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Maitri (missile)**
Maitri (missile):
The Maitri (Friendship) missile project was a cancelled proposal for a next-generation quick-reaction surface-to-air missile (QRSAM) with a claimed near-100 per cent kill probability, planned for development by India's Defence Research and Development Organisation. It was to be a short-range (strike range of about 25–30 km) surface-to-air defense missile system. The proposal was shelved and superseded by the QRSAM and VL-SRSAM missiles for use by the Indian Army and Indian Navy respectively.
Introduction:
The Maitri missile should not be confused with the similar Indian Army Low-Level Quick Reaction Missile (LLQRM) requirement. The missile was intended to fill the gap created by the Indian government's decision to wind up development of the Trishul point defense missile system. It was envisaged as a blend of the French Mica and the DRDO Trishul. Maitri was to build on the work done by DRDO while developing the Trishul missile, using technology transfer from MBDA to fill the technological gaps that led to the failure of the Trishul project.
Development:
On 15 July 2009, The Telegraph reported that the project was scrapped. But later, on 4 June 2010, the Indian Express reported that "After moving ahead with similar projects with Russia and Israel, India is set to finalise a missile co-development project with France to manufacture a new range of Short Range Surface to Air Missiles (SRSAM) for the armed forces." From 2007 to 2010, MBDA and DRDO finalised the design and performance parameters of the missile to suit the needs of the Indian armed forces. Besides providing the Indian armed forces with a modern air defence missile, the project would also add a new capability for France, which does not have a similar missile in production. The Maitri missile project involved a technological collaboration between MBDA, India's Defence Research and Development Organisation (DRDO) and defence public sector unit Bharat Dynamics Limited. The Defence Research and Development Laboratory (DRDL), a premier missile laboratory of DRDO, was to act as the main design centre in India.
Development:
The project, with a budget of US$500 million, was said to have been signed in May 2007. On 14 February 2013, India and France concluded negotiations on the Short Range Surface to Air Missile, worth nearly $6 billion, during talks between French President Francois Hollande and Prime Minister Manmohan Singh. On 30 March 2015, it was reported that the project had been revived, specifically at the request of the Indian Navy for a point air defence system, after it stated that the Akash missile defence system is not suitable for defending Indian warships. DRDO, with MBDA, planned to develop 9 short-range surface-to-air missile systems (SRSAM), with 40 missiles each, for the Indian Navy. Development of the missile was expected to be completed within three years of the project go-ahead, when initial testing would commence.
Development:
As of 2020, the project was expected to be cancelled, as DRDO had instead taken up the development of an alternative missile, the VL-SRSAM, derived from the Astra Mk1 air-to-air missile, for use by the Indian Navy, and the QRSAM for the Indian Army.
Design:
The principal contribution of MBDA was to be the active homing head, thrust vector control, terminal guidance system and composites for a modified propulsion system for the missile, while the software, command-and-control system, launchers and system integration work would be carried out by the DRDL.
MBDA agreed to transfer all sensitive technology, such as the seeker and thrust vector control system, to India, allowing India to manufacture the Maitri missile locally as well as to support it.
Radar:
The Electronics & Radar Development Establishment (LRDE), Bangalore, would develop two indigenous radars for the Maitri project. These would be new-generation variants of Central Acquisition Radar (3D-CAR), with the ability to track 150 targets simultaneously at a distance of 200 kilometers. The naval variant would be called the Revati and the air force version would be called Rohini.
Variants:
Two variants of the missile were planned: a ship-borne point and tactical air defence version for the Navy, and a mobile wheeled and tracked system for use by the Air Force and Army. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kim Seong-min**
Kim Seong-min:
Kim Seong-min is the common English spelling of a Korean name also spelled Kim Sung-min. It may refer to:
- Kim Sung-min (actor) (1973–2016), South Korean actor
- Kim Sung-min (footballer, born 1981), South Korean footballer
- Kim Sung-min (footballer, born 1985), South Korean footballer
- Kim Sung-min (judoka) (born 1987), South Korean judoka
- Kim Sung-min (volleyball), South Korean volleyball player
- Kim Seong-min (defector), North Korean defector who founded Free North Korea Radio
- Kim Seong-min (field hockey), participant in the 1999 Men's Hockey Champions Trophy
- Kim Seong-min (model), winner of 2012 Miss World Korea | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Basket winding**
Basket winding:
Basket winding (or basket-weave winding or honeycomb winding or scatter winding) is a winding method for electrical wire in a coil. The winding pattern is used for radio-frequency electronic components with many parallel wires, such as inductors and transformers. The winding pattern reduces the amount of wire running in adjacent, parallel turns. The wires in successive layers of a basket wound coil cross each other at large angles, as close to 90 degrees as possible, which reduces energy loss due to electrical cross-coupling between wires at radio frequencies.
Purpose:
The basket winding method is used for coils designed for use at frequencies of 50 kHz and higher to reduce two undesirable side effects, proximity effect and parasitic capacitance, that arise in long parallel segments of current-carrying wire.
Purpose:
The proximity effect is caused in a wire by the magnetic field from current flowing in nearby parallel wires, such as other loops in the same coil. If two adjacent wires carry a current in the same direction, then the effect is felt in both – the magnetic field of the nearby wires causes current in each wire to be concentrated in a small region on the wire’s surface farthest from the adjacent wires. The concentration of current along a small portion of the conductor increases the wire’s resistance and hence increases energy loss. At medium and high radio frequencies the increased resistance of the inductor can increase the bandwidth of tuned circuits and reduce the circuit’s frequency selectivity, or Q factor.
Purpose:
Parasitic capacitance is the consequence of parallel turns of wire acting as capacitor plates, storing charge between adjacent wires. The parasitic capacitance can cause the coil to become self-resonant at one or several frequencies, which interferes with the intended tuned resonance and blocks and reflects current at the self-resonant frequency.
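A coil's self-resonance follows from the familiar LC relation f = 1/(2π√(LC)), where C here is the effective inter-winding (parasitic) capacitance. The short Python sketch below uses purely illustrative component values (they are not taken from the article or from any specific coil) to show how reducing the parasitic capacitance, which is what basket winding aims to do, pushes the self-resonant frequency upward:

```python
import math

def self_resonant_frequency(inductance_h: float, capacitance_f: float) -> float:
    """f = 1 / (2*pi*sqrt(L*C)); L in henries, C in farads, result in hertz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values only: a 100 uH RF coil with 20 pF of inter-winding
# capacitance self-resonates near 3.6 MHz; the same inductance with the
# parasitic capacitance reduced to 5 pF resonates near 7.1 MHz, doubling
# the usable frequency range below self-resonance.
print(f"{self_resonant_frequency(100e-6, 20e-12) / 1e6:.2f} MHz")
print(f"{self_resonant_frequency(100e-6, 5e-12) / 1e6:.2f} MHz")
```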
Unfortunately basket-weave coil winding increases the physical size of the coil, which increases leakage inductance.
Methods:
Basket windings are often wound with Litz wire, a thin, multi-strand wire with each strand individually insulated, which further reduces losses. Cotton or fabric insulation is important from a mechanical point of view during the winding process, because a common enameled magnet wire does not provide sufficient surface friction between coil layers to hold the turns at large angles. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Instructional design**
Instructional design:
Instructional design (ID), also known as instructional systems design (ISD), is the practice of systematically designing, developing and delivering instructional materials and experiences, both digital and physical, in a consistent and reliable fashion toward an efficient, effective, appealing, engaging and inspiring acquisition of knowledge. The process consists broadly of determining the state and needs of the learner, defining the end goal of instruction, and creating some "intervention" to assist in the transition. The outcome of this instruction may be directly observable and scientifically measured or completely hidden and assumed. There are many instructional design models but many are based on the ADDIE model with the five phases: analysis, design, development, implementation, and evaluation.
Instructional design:
Robert M. Gagné is considered one of the founders of ISD due to the great influence his work, The Conditions of Learning, has had on the discipline.
History:
Origins
As a field, instructional design is historically and traditionally rooted in cognitive and behavioral psychology, though recently constructivism has influenced thinking in the field. This can be attributed to the way it emerged during a period when the behaviorist paradigm was dominating American psychology. There are also those who cite that, aside from behaviorist psychology, the origin of the concept could be traced back to systems engineering. While the impact of each of these fields is difficult to quantify, it is argued that the language and the "look and feel" of the early forms of instructional design and their progeny were derived from this engineering discipline. Specifically, they were linked to the training development model used by the U.S. military, which was based on a systems approach and was explained as "the idea of viewing a problem or situation in its entirety with all its ramifications, with all its interior interactions, with all its exterior connections and with full cognizance of its place in its context." The role of systems engineering in the early development of instructional design was demonstrated during World War II, when a considerable amount of training material for the military was developed based on the principles of instruction, learning, and human behavior. Tests for assessing a learner's abilities were used to screen candidates for the training programs. After the success of military training, psychologists began to view training as a system and developed various analysis, design, and evaluation procedures. In 1946, Edgar Dale outlined a hierarchy of instructional methods, organized intuitively by their concreteness. The framework first migrated to the industrial sector to train workers before it finally found its way to the education field.
1950s
B. F. Skinner's 1954 article "The Science of Learning and the Art of Teaching" suggested that effective instructional materials, called programmed instructional materials, should include small steps, frequent questions, and immediate feedback, and should allow self-pacing. Robert F. Mager popularized the use of learning objectives with his 1962 article "Preparing Objectives for Programmed Instruction". The article describes how to write objectives that include the desired behavior, the learning condition, and the assessment. In 1956, a committee led by Benjamin Bloom published an influential taxonomy with three domains of learning: cognitive (what one knows or thinks), psychomotor (what one does, physically) and affective (what one feels, or what attitudes one has). These taxonomies still influence the design of instruction.
1960s
Robert Glaser introduced "criterion-referenced measures" in 1962. In contrast to norm-referenced tests, in which an individual's performance is compared to group performance, a criterion-referenced test is designed to test an individual's behavior in relation to an objective standard. It can be used to assess the learners' entry-level behavior and the extent to which learners have developed mastery through an instructional program. In 1965, Robert Gagné (see below for more information) described three domains of learning outcomes (cognitive, affective, psychomotor), five learning outcomes (verbal information, intellectual skills, cognitive strategy, attitude, motor skills), and nine events of instruction in "The Conditions of Learning", which remain foundations of instructional design practices. Gagné's work on learning hierarchies and hierarchical analysis led to an important notion in instruction: to ensure that learners acquire prerequisite skills before attempting superordinate ones. In 1967, after analyzing the failure of training material, Michael Scriven suggested the need for formative assessment – e.g., to try out instructional materials with learners (and revise accordingly) before declaring them finalized.
1970s
During the 1970s, the number of instructional design models greatly increased and prospered in different sectors in the military, academia, and industry. Many instructional design theorists began to adopt an information-processing-based approach to the design of instruction. David Merrill, for instance, developed Component Display Theory (CDT), which concentrates on the means of presenting instructional materials (presentation techniques).
1980s
Although interest in instructional design continued to be strong in business and the military, there was little evolution of ID in schools or higher education.
However, educators and researchers began to consider how the personal computer could be used in a learning environment or a learning space. PLATO (Programmed Logic for Automatic Teaching Operation) is one example of how computers began to be integrated into instruction. Many of the first uses of computers in the classroom were for "drill and skill" exercises. There was a growing interest in how cognitive psychology could be applied to instructional design.
1990s
The influence of constructivist theory on instructional design became more prominent in the 1990s as a counterpoint to the more traditional cognitive learning theory. Constructivists believe that learning experiences should be "authentic" and produce real-world learning environments that allow learners to construct their own knowledge. This emphasis on the learner was a significant departure from traditional forms of instructional design. Performance improvement was also seen as an important outcome of learning that needed to be considered during the design process. The World Wide Web emerged as an online learning tool, with hypertext and hypermedia being recognized as good tools for learning. As technology advanced and constructivist theory gained popularity, technology's use in the classroom began to evolve from mostly drill-and-skill exercises to more interactive activities that required more complex thinking on the part of the learner. Rapid prototyping was first seen during the 1990s. In this process, an instructional design project is prototyped quickly and then vetted through a series of try-and-revise cycles. This is a big departure from traditional methods of instructional design, which took far longer to complete.
2000 - 2010
Online learning became common. Technology advances permitted sophisticated simulations with authentic and realistic learning experiences. In 2008, the Association for Educational Communications and Technology (AECT) changed the definition of educational technology to "the study and ethical practice of facilitating learning and improving performance by creating, using, and managing appropriate technological processes and resources".
2010 - 2020
Academic degrees focused on integrating technology, the internet, and human–computer interaction with education gained momentum with the introduction of Learning Design and Technology (LDT) majors. Universities such as Bowling Green State University, Penn State, Purdue, San Diego State University, Stanford, Harvard, the University of Georgia, California State University, Fullerton, and Carnegie Mellon University have established undergraduate and graduate degrees in technology-centered methods of designing and delivering education.
Informal learning became an area of growing importance in instructional design, particularly in the workplace. A 2014 study showed that formal training makes up only 4 percent of the 505 hours per year an average employee spends learning. It also found that the learning output of informal learning is equal to that of formal training. As a result of this and other research, more emphasis was placed on creating knowledge bases and other supports for self-directed learning.
Robert Gagné:
Robert Gagné's work is widely used and cited in the design of instruction, as exemplified by more than 130 citations in prominent journals in the field during the period from 1985 through 1990. Synthesizing ideas from behaviorism and cognitivism, he provided a clear template, which is easy to follow for designing instructional events. Instructional designers who follow Gagné's theory will likely have tightly focused, efficient instruction.
Taxonomy
Robert Gagné classified the types of learning outcomes by asking how learning might be demonstrated. His domains and outcomes of learning correspond to standard verbs.
Cognitive Domain
- Verbal information - is stated: state, recite, tell, declare
- Intellectual skills - label or classify the concepts
- Intellectual skills - apply the rules and principles
- Intellectual skills - problem solve by generating solutions or procedures
  - Discrimination: discriminate, distinguish, differentiate
  - Concrete concept: identify, name, specify, label
  - Defined concept: classify, categorize, type, sort (by definition)
  - Rule: demonstrate, show, solve (using one rule)
  - Higher-order rule: generate, develop, solve (using two or more rules)
- Cognitive strategies - are used for learning: adopt, create, originate
Affective Domain
- Attitudes - are demonstrated by preferring options: choose, prefer, elect, favor
Psychomotor Domain
- Motor skills - enable physical performance: execute, perform, carry out
Nine events
According to Gagné, learning occurs in a series of nine learning events, each of which is a condition for learning that must be accomplished before moving to the next in order. Similarly, instructional events should mirror the learning events:
Gaining attention: To ensure reception of coming instruction, the teacher gives the learners a stimulus. Before the learners can start to process any new information, the instructor must gain the attention of the learners. This might entail using abrupt changes in the instruction.
Informing learners of objectives: The teacher tells the learner what they will be able to do because of the instruction. The teacher communicates the desired outcome to the group.
Stimulating recall of prior learning: The teacher asks for recall of existing relevant knowledge.
Presenting the stimulus: The teacher gives emphasis to distinctive features.
Providing learning guidance: The teacher helps the students in understanding (semantic encoding) by providing organization and relevance.
Eliciting performance: The teacher asks the learners to respond, demonstrating learning.
Providing feedback: The teacher gives informative feedback on the learners' performance.
Assessing performance: The teacher requires more learner performance, and gives feedback, to reinforce learning.
Enhancing retention and transfer: The teacher provides varied practice to generalize the capability. Some educators believe that Gagné's taxonomy of learning outcomes and events of instruction oversimplify the learning process by over-prescribing. However, using them as part of a complete instructional package can assist many educators in becoming more organized and staying focused on the instructional goals.
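As a rough illustration of how the nine events can be used in practice, the sketch below (Python; the event names follow the list above, while the function and variable names are ours) treats them as an ordered checklist against which a draft lesson plan can be compared:

```python
# Gagné's nine events of instruction, in order, used as a simple
# lesson-planning checklist.  Only the event names come from the text;
# the rest of the code is illustrative.
GAGNE_EVENTS = [
    "Gaining attention",
    "Informing learners of objectives",
    "Stimulating recall of prior learning",
    "Presenting the stimulus",
    "Providing learning guidance",
    "Eliciting performance",
    "Providing feedback",
    "Assessing performance",
    "Enhancing retention and transfer",
]

def missing_events(planned_activities: dict) -> list:
    """Return the events, in order, that the draft plan does not yet address."""
    return [event for event in GAGNE_EVENTS if event not in planned_activities]

draft_plan = {
    "Gaining attention": "Open with a surprising demonstration",
    "Informing learners of objectives": "State what learners will be able to do",
}
print(missing_events(draft_plan))  # events still to be designed
```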
Influence
Robert Gagné's work has been the foundation of instructional design since the beginning of the 1960s, when he conducted research and developed training materials for the military. Among the first to coin the term "instructional design", Gagné developed some of the earliest instructional design models and ideas. These models have laid the groundwork for more present-day instructional design models from theorists like Dick, Carey, and Carey (the Dick and Carey Systems Approach Model), Jerold Kemp's Instructional Design Model, and David Merrill (Merrill's First Principles of Instruction). Each of these models is based on a core set of learning phases that include (1) activation of prior experience, (2) demonstration of skills, (3) application of skills, and (4) integration of these skills into real-world activities.
Gagné's main focus for instructional design was how instruction and learning could be systematically connected to the design of instruction. He emphasized the design principles and procedures that need to take place for effective teaching and learning. His initial ideas, along with the ideas of other early instructional designers were outlined in Psychological Principles in Systematic Development, written by Roberts B. Miller and edited by Gagné. Gagné believed in internal learning and motivation which paved the way for theorists like Merrill, Li, and Jones who designed the Instructional Transaction Theory, Reigeluth and Stein's Elaboration Theory, and most notably, Keller's ARCS Model of Motivation and Design.
Prior to Robert Gagné, learning was often thought of as a single, uniform process. There was little or no distinction made between "learning to load a rifle and learning to solve a complex mathematical problem". Gagné offered an alternative view which developed the idea that different learners required different learning strategies. Understanding and designing instruction based on a learning style defined by the individual brought about new theories and approaches to teaching.
Gagné's understanding and theories of human learning added significantly to the understanding of the stages of cognitive processing and instruction. For example, Gagné argued that instructional designers must understand the characteristics and functions of short-term and long-term memory to facilitate meaningful learning. This idea encouraged instructional designers to include cognitive needs as a top-down instructional approach. Gagné (1966) defines curriculum as a sequence of content units arranged in such a way that the learning of each unit may be accomplished as a single act, provided the capabilities described by specified prior units (in the sequence) have already been mastered by the learner. His definition of curriculum has been the basis of many important initiatives in schools and other educational environments. In the late 1950s and early 1960s, Gagné had expressed and established an interest in applying theory to practice, with particular interest in applications for teaching, training and learning. Increasing the effectiveness and efficiency of practice was of particular concern. His ongoing attention to practice while developing theory continues to influence education and training. Gagné's work has had a significant influence on American education and on military and industrial training. Gagné was one of the early developers of the concept of instructional systems design, which suggests the components of a lesson can be analyzed and should be designed to operate together as an integrated plan for instruction. In "Educational Technology and the Learning Process" (Educational Researcher, 1974), Gagné defined instruction as "the set of planned external events which influence the process of learning and thus promote learning".
Learning design:
The concept of learning design arrived in the literature of technology for education in the late 1990s and early 2000s with the idea that "designers and instructors need to choose for themselves the best mixture of behaviourist and constructivist learning experiences for their online courses". But the concept of learning design is probably as old as the concept of teaching. Learning design might be defined as "the description of the teaching-learning process that takes place in a unit of learning (e.g., a course, a lesson or any other designed learning event)". As summarized by Britain, learning design may be associated with:
The concept of learning design
The implementation of the concept made by learning design specifications like PALO, IMS Learning Design, LDL, SLD 2.0, etc.
The technical realisations around the implementation of the concept like TELOS, RELOAD LD-Author, etc.
Models:
ADDIE process
Perhaps the most common model used for creating instructional materials is the ADDIE model. This acronym stands for the 5 phases contained in the model (Analyze, Design, Develop, Implement, and Evaluate).
Brief History of ADDIE's Development – The ADDIE model was initially developed by Florida State University to explain "the processes involved in the formulation of an instructional systems development (ISD) program for military interservice training that will adequately train individuals to do a particular job and which can also be applied to any interservice curriculum development activity." The model originally contained several steps under its five original phases (Analyze, Design, Develop, Implement, and [Evaluation and] Control), whose completion was expected before movement to the next phase could occur. Over the years, the steps were revised and eventually the model itself became more dynamic and interactive than its original hierarchical rendition, until its most popular version appeared in the mid-80s, as we understand it today.
The five phases are listed and explained below: Analyze – The first phase of content development is Analysis. Analysis refers to the gathering of information about one's audience, the tasks to be completed, how the learners will view the content, and the project's overall goals. The instructional designer then classifies the information to make the content more applicable and successful.
Design – The second phase is the Design phase. In this phase, instructional designers begin to create their project. Information gathered from the analysis phase, in conjunction with the theories and models of instructional design, is meant to explain how the learning will be acquired. For example, the design phase begins with writing a learning objective. Tasks are then identified and broken down to be more manageable for the designer. The final step determines the kind of activities required for the audience in order to meet the goals identified in the Analyze phase.
Develop – The third phase, Development, involves the creation of the activities that will be implemented. It is in this stage that the blueprints of the design phase are assembled.
Implement – After the content is developed, it is then Implemented. This stage allows the instructional designer to test all materials to determine if they are functional and appropriate for the intended audience.
Evaluate – The final phase, Evaluate, ensures the materials achieved the desired goals. The evaluation phase consists of two parts: formative and summative assessment. The ADDIE model is an iterative process of instructional design, which means that at each stage the designer can assess the project's elements and revise them if necessary. This process incorporates formative assessment, while the summative assessments contain tests or evaluations created for the content being implemented. This final phase is vital for the instructional design team because it provides data used to alter and enhance the design.
Connecting all phases of the model are external and reciprocal revision opportunities. As in the internal Evaluation phase, revisions should and can be made throughout the entire process.
Most of the current instructional design models are variations of the ADDIE model.
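As a schematic illustration only (not a prescribed implementation, and with all function and variable names invented for the example), the ADDIE cycle described above can be pictured as a loop in which each phase is revised until its formative review passes:

```python
# Schematic rendering of the ADDIE cycle described above.  The phase names
# come from the text; the review logic is a placeholder for real formative
# evaluation by the design team.
PHASES = ["Analyze", "Design", "Develop", "Implement", "Evaluate"]

def formative_review(phase: str, deliverable: str) -> bool:
    """Placeholder review: in practice, check the deliverable against the
    findings of the Analyze phase and the learning objectives."""
    return bool(deliverable)

def addie_cycle(project_name: str, max_revisions: int = 3) -> dict:
    deliverables = {}
    for phase in PHASES:
        for attempt in range(1, max_revisions + 1):
            deliverable = f"{project_name}: {phase} deliverable (draft {attempt})"
            if formative_review(phase, deliverable):
                deliverables[phase] = deliverable
                break  # phase accepted, move on; otherwise revise and retry
    return deliverables

print(addie_cycle("Onboarding course"))
```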
Rapid prototyping
An adaptation of the ADDIE model, used in some settings, is a practice known as rapid prototyping.
Proponents suggest that through an iterative process the verification of the design documents saves time and money by catching problems while they are still easy to fix. This approach is not novel to the design of instruction, but appears in many design-related domains including software design, architecture, transportation planning, product development, message design, user experience design, etc. In fact, some proponents of design prototyping assert that a sophisticated understanding of a problem is incomplete without creating and evaluating some type of prototype, regardless of the analysis rigor that may have been applied up front. In other words, up-front analysis is rarely sufficient to allow one to confidently select an instructional model. For this reason many traditional methods of instructional design are beginning to be seen as incomplete, naive, and even counter-productive. However, some consider rapid prototyping to be a somewhat simplistic type of model. As this argument goes, at the heart of instructional design is the analysis phase; after thoroughly conducting the analysis, one can then choose a model based on the findings. That is the area where most people get snagged: they simply do not do a thorough-enough analysis.
Dick and Carey
Another well-known instructional design model is the Dick and Carey Systems Approach Model. The model was originally published in 1978 by Walter Dick and Lou Carey in their book entitled The Systematic Design of Instruction.
Dick and Carey made a significant contribution to the instructional design field by championing a systems view of instruction, in contrast to defining instruction as the sum of isolated parts. The model addresses instruction as an entire system, focusing on the interrelationship between context, content, learning and instruction. According to Dick and Carey, "Components such as the instructor, learners, materials, instructional activities, delivery system, and learning and performance environments interact with each other and work together to bring about the desired student learning outcomes". The components of the Systems Approach Model, also known as the Dick and Carey Model, are as follows:
- Identify Instructional Goal(s): A goal statement describes a skill, knowledge or attitude (SKA) that a learner will be expected to acquire
- Conduct Instructional Analysis: Identify what a learner must recall and what a learner must be able to do to perform a particular task
- Analyze Learners and Contexts: Identify general characteristics of the target audience, including prior skills, prior experience, and basic demographics; identify characteristics directly related to the skill to be taught; and perform analysis of the performance and learning settings.
- Write Performance Objectives: Objectives consist of a description of the behavior, the condition and the criteria. The component of an objective that describes the criteria will be used to judge the learner's performance.
- Develop Assessment Instruments: Purpose of entry behavior testing, purpose of pretesting, purpose of post-testing, purpose of practice items/practice problems
- Develop Instructional Strategy: Pre-instructional activities, content presentation, learner participation, assessment
- Develop and Select Instructional Materials
- Design and Conduct Formative Evaluation of Instruction: Designers try to identify areas of the instructional materials that need improvement.
- Revise Instruction: To identify poor test items and to identify poor instruction
- Design and Conduct Summative Evaluation
With this model, components are executed iteratively and in parallel, rather than linearly.
Guaranteed Learning
The instructional design model Guaranteed Learning was formerly known as the Instructional Development Learning System (IDLS). The model was originally published in 1970 by Peter J. Esseff, PhD and Mary Sullivan Esseff, PhD in their book entitled IDLS—Pro Trainer 1: How to Design, Develop, and Validate Instructional Materials. Peter (1968) and Mary (1972) Esseff both received their doctorates in Educational Technology from the Catholic University of America under the mentorship of Gabriel Ofiesh, a founding father of the military model mentioned above. Esseff and Esseff synthesized existing theories to develop their approach to systematic design, "Guaranteed Learning", also known as the "Instructional Development Learning System" (IDLS). In 2015, the Esseffs created an eLearning course to enable participants to take the GL course online under their direction.
The components of the Guaranteed Learning Model are the following:
- Design a task analysis
- Develop criterion tests and performance measures
- Develop interactive instructional materials
- Validate the interactive instructional materials
- Create simulations or performance activities (case studies, role plays, and demonstrations)
Other
Other useful instructional design models include: the Smith/Ragan Model, the Morrison/Ross/Kemp Model and the OAR Model of instructional design in higher education, as well as Wiggins' theory of backward design.
Learning theories also play an important role in the design of instructional materials. Theories such as behaviorism, constructivism, social learning, and cognitivism help shape and define the outcome of instructional materials.
Also see: Managing Learning in High Performance Organizations, by Ruth Stiehl and Barbara Bessey, from The Learning Organization, Corvallis, Oregon. ISBN 0-9637457-0-0.
Motivational design:
Motivation is defined as an internal drive that activates behavior and gives it direction. The term motivation theory is concerned with the process that describes why and how human behavior is activated and directed.
Motivation concepts
Intrinsic and extrinsic motivation
Intrinsic: defined as the doing of an activity for its inherent satisfactions rather than for some separable consequence. When intrinsically motivated, a person is moved to act for the fun or challenge entailed rather than because of external rewards. Intrinsic motivation reflects the desire to do something because it is enjoyable. If we are intrinsically motivated, we would not be worried about external rewards such as praise. Examples: writing short stories because you enjoy writing them, reading a book because you are curious about the topic, and playing chess because you enjoy effortful thinking.
Extrinsic: reflects the desire to do something because of external rewards such as awards, money and praise. People who are extrinsically motivated may not enjoy certain activities. They may only wish to engage in certain activities because they wish to receive some external reward. Examples: the writer who only writes poems to be submitted to poetry contests, a person who dislikes sales but accepts a sales position because he/she desires to earn an above-average salary, and a person selecting a major in college based on salary and prestige rather than personal interest.
John Keller has devoted his career to researching and understanding motivation in instructional systems. These decades of work constitute a major contribution to the instructional design field: first, by applying motivation theories systematically to design theory; and second, by developing a unique problem-solving process, the ARCS Model of Motivational Design (described below).
ARCS model
The ARCS Model of Motivational Design was created by John Keller while he was researching ways to supplement the learning process with motivation. The model is based on Tolman's and Lewin's expectancy-value theory, which presumes that people are motivated to learn if there is value in the knowledge presented (i.e. it fulfills personal needs) and if there is an optimistic expectation for success. The model consists of four main areas: Attention, Relevance, Confidence, and Satisfaction.
Attention and relevance, according to John Keller's ARCS motivational theory, are essential to learning. The first two of the four key components for motivating learners, attention and relevance, can be considered the backbone of the ARCS theory, with the latter components relying upon the former.
Components
Attention
The attention mentioned in this theory refers to the interest displayed by learners in taking in the concepts/ideas being taught. This component is split into three categories: perceptual arousal, which uses surprise or uncertain situations; inquiry arousal, which offers challenging questions and/or problems to answer/solve; and variability, which uses a variety of resources and methods of teaching. Within each of these categories, John Keller has provided further sub-divisions of types of stimuli to grab attention. Grabbing attention is the most important part of the model because it initiates the motivation for the learners. Once learners are interested in a topic, they are willing to invest their time, pay attention, and find out more.
Relevance Relevance, according to Keller, must be established by using language and examples that the learners are familiar with. The three major strategies Keller presents are goal-oriented, motive matching, and familiarity. Like the Attention category, Keller divided the three major strategies into subcategories, which provide examples of how to make a lesson plan relevant to the learner. Learners will cast concepts aside if their attention cannot be grabbed and sustained and if relevance is not conveyed.
Confidence The confidence aspect of the ARCS model focuses on establishing positive expectations for achieving success among learners. The confidence level of learners is often correlated with motivation and the amount of effort put forth in reaching a performance objective. For this reason, it's important that learning design provides students with a method for estimating their probability of success. This can be achieved in the form of a syllabus and grading policy, rubrics, or a time estimate to complete tasks. Additionally, confidence is built when positive reinforcement for personal achievements is given through timely, relevant feedback.
Satisfaction Finally, learners must obtain some type of satisfaction or reward from a learning experience. This satisfaction can be from a sense of achievement, praise from a higher-up, or mere entertainment. Feedback and reinforcement are important elements and when learners appreciate the results, they will be motivated to learn. Satisfaction is based upon motivation, which can be intrinsic or extrinsic. To keep learners satisfied, instruction should be designed to allow them to use their newly learned skills as soon as possible in as authentic a setting as possible.
Motivational Design Process Along with the motivational components (Attention, Relevance, Confidence, and Satisfaction), the ARCS model provides a process that can address motivational problems. This process has four phases (Analysis, Design, Development, and Evaluation) with ten steps within the phases:
Step 1: Obtain course information. Includes reviewing the description of the course, the instructor, and the way the information is delivered.
Step 2: Obtain audience information. Includes collecting the current skill level, attitudes towards the course, attitudes towards the teacher, and attitudes towards the school.
Step 3: Analyze audience. This should help identify the motivational problem that needs to be addressed.
Step 4: Analyze existing materials. Identifying positives of the current instructional material, as well as any problems.
Step 5: List objectives and assessments. This allows the creation of assessment tools that align with the objectives.
Step 6: List potential tactics. Brainstorming possible tactics that could fill in the motivational gaps.
Step 7: Select and design tactics. Integrates, enhances, and sustains tactics from the list that fit the situation.
Step 8: Integrate with instruction. Integrate the tactics that were chosen from the list into the instruction.
Step 9: Select and develop materials. Select materials, modify them to fit the situation, and develop new materials.
Step 10: Evaluate and revise. Obtain reactions from the learner and determine satisfaction level.
Motivating opportunities Although Keller's ARCS model currently dominates instructional design with respect to learner motivation, in 2006 Hardré and Miller proposed a need for a new design model that incorporates current research in human motivation, treats motivation comprehensively, integrates various fields of psychology, and gives designers the flexibility to apply it to a myriad of situations.
Hardré proposes an alternate model for designers called the Motivating Opportunities Model or MOM. Hardré's model incorporates cognitive, needs, and affective theories as well as social elements of learning to address learner motivation. MOM has seven key components spelling the acronym 'SUCCESS' – Situational, Utilization, Competence, Content, Emotional, Social, and Systemic.
Influential researchers and theorists:
Alphabetic by last name:
Bloom, Benjamin – Taxonomies of the cognitive, affective, and psychomotor domains – 1950s
Bransford, John D. – How People Learn: Bridging Research and Practice – 1990s
Bruner, Jerome – Constructivism – 1950s–1990s
Gagné, Robert M. – Nine Events of Instruction (Gagné and Merrill Video Seminar)
Gibbons, Andrew S. – developed the Theory of Model Centered Instruction, a theory rooted in Cognitive Psychology
Heinich, Robert – Instructional Media and the new technologies of instruction, 3rd ed. – Educational Technology – 1989
Jonassen, David – problem-solving strategies – 1990s
Kemp, Jerold E. – created a cognitive learning design model – 1980s
Mager, Robert F. – ABCD model for instructional objectives – 1962 – Criterion-Referenced Instruction and Learning Objectives
Marzano, Robert J. – "Dimensions of Learning", Formative Assessment – 2000s
Mayer, Richard E. – Multimedia Learning – 2000s
Merrill, M. David – Component Display Theory / Knowledge Objects / First Principles of Instruction
Osguthorpe, Russell T. – Overview of Instructional Design – The education of the heart: rediscovering the spiritual roots of learning
Papert, Seymour – Constructionism, LOGO – 1970s–1980s
Piaget, Jean – Cognitive development – 1960s
Reigeluth, Charles – Elaboration Theory, "Green Books" I, II, and III – 1990s–2010s
Richey, Rita – instructional design theory and research methods
Schank, Roger – Constructivist simulations – 1990s
Simonson, Michael – Instructional Systems and Design via Distance Education – 1980s
Skinner, B.F. – Radical Behaviorism, Programed Instruction – 1950s–1970s
Vygotsky, Lev – Learning as a social activity – 1930s
Wiley, David A. – influential work on open content, open educational resources, and informal online learning communities | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Single-reed instrument**
Single-reed instrument:
A single-reed instrument is a woodwind instrument that uses only one reed to produce sound. The very earliest single-reed instruments were documented in ancient Egypt, as well as the Middle East, Greece, and the Roman Empire. The earliest types of single-reed instruments used idioglottal reeds, where the vibrating reed is a tongue cut and shaped on the tube of cane. Much later, single-reed instruments started using heteroglottal reeds, where a reed is cut and separated from the tube of cane and attached to a mouthpiece of some sort. By contrast, in a double reed instrument (such as the oboe and bassoon), there is no mouthpiece; the two parts of the reed vibrate against one another. Reeds are traditionally made of cane and produce sound when air is blown across or through them. The main types of instruments that use a single reed are clarinets and saxophones. The timbre of single- and double-reed instruments is related to the harmonic series produced by the shape of the instrument's body. For example, the clarinet emphasizes mainly the odd harmonics, because its air-column modes cancel out the even harmonics; its timbre may therefore be compared to that of a square wave.
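The comparison with a square wave follows from the fact that a square wave's Fourier series contains only odd harmonics. A minimal sketch of that idea, assuming an arbitrary sample rate and fundamental frequency chosen purely for illustration:

```python
import numpy as np

# Minimal sketch: build a square-wave-like tone from odd harmonics only,
# mirroring the claim above that the clarinet emphasizes odd harmonics.
# Sample rate and pitch are illustrative assumptions, not from the article.
sample_rate = 44_100                      # samples per second
f0 = 220.0                                # fundamental frequency in Hz
t = np.arange(sample_rate) / sample_rate  # one second of time points

signal = np.zeros_like(t)
for k in range(1, 20, 2):                 # odd harmonics 1, 3, 5, ...
    signal += np.sin(2 * np.pi * k * f0 * t) / k
signal *= 4 / np.pi                       # Fourier-series scaling of an ideal square wave

# As more odd terms are added, the waveform approaches a square wave.
```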
Most single-reed instruments are descended from single-reed idioglot instruments called 'memet', found in Egypt as early as 2700 BCE. Due to their fragility, no instruments from antiquity were preserved, but iconographic evidence is prevalent. During the Old Kingdom in Egypt (2778–2723 BCE), memets were depicted on the reliefs of seven tombs at Saqqarra, six tombs at Giza, and the pyramids of Queen Khentkaus. Most memets were double-clarinets, where two reed tubes were tied or glued together to form one instrument. Multiple pipes were used to reinforce sound or generate a strong beat-tone with slight variations in tuning among the pipes. One of the tubes usually functioned as a drone, but the design of these simple instruments varied endlessly. The entire reed entered the mouth, meaning that the player could not easily articulate, so melodies were defined by quick movement of the fingers on the tone holes. These types of double-clarinets are still prevalent today, but they also developed into simplified single-clarinets and hornpipes. Modern-day idioglots found in Egypt include the arghul and the zummara. Examples include clarinets, saxophones, and some bagpipes. See links to other examples below.
Classification:
Single reed instruments fall under three Hornbostel–Sachs classes:
412.13 Free reeds.
422.2 Single reed instruments: The pipe has a single 'reed' consisting of a percussion lamella. These are the percussion reeds including clarinets and saxophones.
422.3 Reedpipes with free reeds: The reed vibrates through [at] a closely fitted frame, and there are fingerholes.
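As a small illustration, the three classes listed above can be kept in a lookup table keyed by class number; this sketch only restates the descriptions given in the text.

```python
# Minimal sketch: the three Hornbostel–Sachs classes above as a lookup table.
# Descriptions are paraphrased from the text; nothing here is authoritative.
HORNBOSTEL_SACHS_SINGLE_REED = {
    "412.13": "Free reeds",
    "422.2": "Single reed instruments (percussion lamella), e.g. clarinets and saxophones",
    "422.3": "Reedpipes with free reeds and fingerholes",
}

print(HORNBOSTEL_SACHS_SINGLE_REED["422.2"])
```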
Comparing clarinets and saxophones:
The following is a list of clarinets and saxophones, relative to their range and key of transposition from the opposite family. Note that if one were to compare clarinets to their saxophone counterparts while considering their approximate lowest (concert) pitch†, the order would shift. †The lowest possible pitch of each clarinet and saxophone depends on its manufacturer and model (the pitches used are typical of professional instruments).
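The comparison rests on transposition: a written pitch on a transposing instrument sounds at a fixed offset from concert pitch. A minimal sketch of that mapping follows; the offsets shown are typical values assumed for illustration and are not a reproduction of the article's comparison list.

```python
# Minimal sketch: map a written MIDI note to its sounding (concert) pitch.
# The offsets below are typical values for these instruments and are
# assumptions for illustration only.
TRANSPOSITION_SEMITONES = {
    "B-flat clarinet": -2,          # sounds a major 2nd below written pitch
    "E-flat alto saxophone": -9,    # sounds a major 6th below written pitch
    "B-flat tenor saxophone": -14,  # sounds a major 9th below written pitch
}

def concert_pitch(written_midi_note: int, instrument: str) -> int:
    """Return the sounding MIDI note for a written note on `instrument`."""
    return written_midi_note + TRANSPOSITION_SEMITONES[instrument]

# Written C5 (MIDI 72) on a B-flat clarinet sounds as B-flat4 (MIDI 70).
assert concert_pitch(72, "B-flat clarinet") == 70
```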
List of single-reed instruments:
Modern: Aulochrome, Clarinet, Heckel-clarina, Heckelphone-clarinet, Octavin, Saxophone, Tárogató, Xaphoon, Bass clarinet
Historical: Mock trumpet, Chalumeau
Traditional European: Alboka, Birbynė, Chalumeau, Diplica, Ganurags, Hornpipe, Launeddas, Pilili, Mock trumpet, Pibgorn, Pku, Sipsi, Stock-and-horn, Zhaleika
Middle Eastern: Arghul, Double clarinet, Mijwiz, Sipsi
Central Asian: Bülban
Southeast Asian: Pey pok, Sarune Etek, Sneng, Toleat
Playing a single reed instrument:
Although the clarinet and saxophone both have a single reed attached to their mouthpiece, their playing techniques, or embouchures, are distinct from each other.
The standard embouchures for single reed woodwinds like the clarinet and saxophone are variants of the single lip embouchure, formed by resting the reed upon the bottom lip, which rests on the teeth and is supported by the chin muscles and the buccinator muscles on the sides of the mouth. The top teeth rest on top of the mouthpiece. The manner in which the lower lip rests against the teeth differs between clarinet and saxophone embouchures. In clarinet playing, the lower lip is rolled over the teeth and corners of the mouth are drawn back, which has the effect of drawing the upper lip around the mouthpiece to create a seal due to the angle at which the mouthpiece rests in the mouth. With the saxophone embouchure, the lower lip rests against, but not over, the teeth as in pronouncing the letter "V" and the corners of the lip are drawn in (similar to a drawstring bag). With the less common double-lip embouchure, the top lip is placed under (around) the top teeth. In both instances, the position of the tongue in the mouth plays a vital role in focusing and accelerating the air stream blown by the player. This results in a more mature and full sound, rich in overtones. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MacProject**
MacProject:
MacProject was a project management and scheduling business application released along with the first Apple Macintosh systems in 1984. MacProject was one of the first major business tools for the Macintosh which enabled users to calculate the "critical path" to completion and estimate costs in money and time. If a project deadline was missed or if available resources changed, MacProject recalculated everything automatically.
MacProject was written by Debra Willrett at Solosoft, and was published and distributed by Apple Computer to promote the original Macintosh personal computer. It was developed from an earlier application written by Debra Willrett for Apple's Lisa computer, LisaProject. This was the first graphical user interface (GUI) for project management. There were many other project management applications on the market at the time, but LisaProject was the first to simplify the process by allowing the user to interactively draw their project on the computer in the form of a PERT chart. Constraints could be entered for each task, and the relationships between tasks would show which ones had to be completed before a task could begin. Given the task constraints and relationships, a "critical path", schedule and budget could be calculated dynamically using heuristic methods.
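The critical-path calculation described above can be sketched generically: given task durations and dependencies, the earliest finish of each task is its duration plus the latest earliest finish among its prerequisites. This is a minimal illustration of that idea, not MacProject's actual code; the tasks, durations and dependencies are invented.

```python
# Minimal critical-path (CPM) sketch of the kind of calculation described
# above. Tasks, durations and dependencies are invented for illustration.
durations = {"design": 3, "build": 5, "test": 2, "document": 1}
depends_on = {"build": ["design"], "test": ["build"], "document": ["design"]}

earliest_finish = {}

def finish(task):
    # Earliest finish = own duration + latest earliest finish among prerequisites.
    if task not in earliest_finish:
        preds = depends_on.get(task, [])
        earliest_finish[task] = durations[task] + max(
            (finish(p) for p in preds), default=0
        )
    return earliest_finish[task]

project_length = max(finish(t) for t in durations)
print(project_length)  # 10 time units for this made-up network

# Walking back from the task with the largest earliest finish along its
# binding predecessors recovers the critical path: design -> build -> test.
```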
One of the early proponents of MacProject was James Halcomb, a well known expert in the use of the Critical Path Method. Having supervised hand-drawn network diagrams for countless complex projects, Halcomb immediately recognized the promise of the WYSIWYG graphical interface and computerized calculation of the critical path. Using a Lisa computer housed in a case designed to fit under an airplane seat, Mr. Halcomb traveled the United States demonstrating this new technology in his CPM courses. In consultation with the software's developers he authored the book Planning Big with MacProject, which introduced a generation of Mac users to PERT and CPM.
In December 1987, an updated version of MacProject, called MacProject II, was introduced as a part of Claris's move to update its suite of Mac office applications.
In 1991, Microsoft Project was ported to the Macintosh from Microsoft Windows and became MacProject's main competitor. However, after the release of version 3.0 of Microsoft Project in 1993, Microsoft terminated support of the Macintosh release.
MacProject 1.0 is not Y2K-compliant as it cannot schedule tasks past 1999. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Molybdenum oxotransferase**
Molybdenum oxotransferase:
Molybdenum oxotransferases form an enzyme super-family whose members all contain molybdenum and promote oxygen atom transfer reactions. Enzymes in this family include DMSO reductase, xanthine oxidase, nitrite reductase, and sulfite oxidase. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Perron's irreducibility criterion**
Perron's irreducibility criterion:
Perron's irreducibility criterion is a sufficient condition for a polynomial to be irreducible in Z[x] —that is, for it to be unfactorable into the product of lower-degree polynomials with integer coefficients.
This criterion is applicable only to monic polynomials. However, unlike other commonly used criteria, Perron's criterion does not require any knowledge of prime decomposition of the polynomial's coefficients.
Criterion:
Suppose we have the following polynomial with integer coefficients: $f(x) = x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0$, where $a_0 \neq 0$. If either of the following two conditions applies: $|a_{n-1}| > 1 + |a_{n-2}| + \cdots + |a_0|$, or $|a_{n-1}| = 1 + |a_{n-2}| + \cdots + |a_0|$ and $f(\pm 1) \neq 0$, then $f$ is irreducible over the integers (and by Gauss's lemma also over the rational numbers).
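Since the criterion only inspects coefficient sizes, it is easy to check mechanically. A minimal sketch, with the example polynomial chosen arbitrarily:

```python
# Minimal sketch: test Perron's sufficient condition for a monic integer
# polynomial x^n + a_{n-1}x^{n-1} + ... + a_0, given as [a_0, a_1, ..., a_{n-1}].
def perron_irreducible(a):
    """Return True if Perron's criterion certifies irreducibility over Z."""
    if a[0] == 0:
        return False                          # the criterion requires a_0 != 0
    lead = abs(a[-1])                         # |a_{n-1}|
    rest = 1 + sum(abs(c) for c in a[:-1])    # 1 + |a_{n-2}| + ... + |a_0|
    if lead > rest:
        return True
    if lead == rest:
        n = len(a)
        def f(x):
            return x ** n + sum(c * x ** i for i, c in enumerate(a))
        return f(1) != 0 and f(-1) != 0
    return False  # inconclusive: the criterion is sufficient, not necessary

# Example (made up): x^3 + 5x^2 + x + 1 has |a_2| = 5 > 1 + |a_1| + |a_0| = 3,
# so the criterion certifies it is irreducible over the integers.
assert perron_irreducible([1, 1, 5])
```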
History:
The criterion was first published by Oskar Perron in 1907 in Journal für die reine und angewandte Mathematik.
Proof:
A short proof can be given based on the following lemma due to Panaitopol. Lemma: let $f(x) = x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0$ be a polynomial with $|a_{n-1}| > 1 + |a_{n-2}| + \cdots + |a_1| + |a_0|$. Then exactly one zero $z$ of $f$ satisfies $|z| > 1$, and the other $n-1$ zeroes of $f$ satisfy $|z| < 1$. Suppose that $f(x) = g(x)h(x)$, where $g$ and $h$ are non-constant integer polynomials. Since, by the above lemma, $f$ has only one zero with modulus not less than $1$, one of the polynomials $g, h$, say $g$, has all its zeroes strictly inside the unit circle. Let $z_1, \ldots, z_k$ be the zeroes of $g$, so that $|z_1|, \ldots, |z_k| < 1$. Note that $g(0)$ is a nonzero integer, yet $|g(0)| = |z_1 \cdots z_k| < 1$, a contradiction. Therefore, $f$ is irreducible.
Generalizations:
In his publication Perron provided variants of the criterion for multivariate polynomials over arbitrary fields. In 2010, Bonciocat published novel proofs of these criteria. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Adverse effect**
Adverse effect:
An adverse effect is an undesired harmful effect resulting from a medication or other intervention, such as surgery. An adverse effect may be termed a "side effect", when judged to be secondary to a main or therapeutic effect. The term complication is similar to adverse effect, but the latter is typically used in pharmacological contexts, or when the negative effect is expected or common. If the negative effect results from an unsuitable or incorrect dosage or procedure, this is called a medical error and not an adverse effect. Adverse effects are sometimes referred to as "iatrogenic" because they are generated by a physician/treatment. Some adverse effects occur only when starting, increasing or discontinuing a treatment. Adverse effects can also be caused by placebo treatments (in which case the adverse effects are referred to as nocebo effects).
Using a drug or other medical intervention which is contraindicated may increase the risk of adverse effects. Adverse effects may cause complications of a disease or procedure and negatively affect its prognosis. They may also lead to non-compliance with a treatment regimen. Adverse effects of medical treatment resulted in 142,000 deaths in 2013, up from 94,000 deaths in 1990, globally. The harmful outcome is usually indicated by some result such as morbidity, mortality, alteration in body weight, levels of enzymes, loss of function, or as a pathological change detected at the microscopic, macroscopic or physiological level. It may also be indicated by symptoms reported by a patient. Adverse effects may cause a reversible or irreversible change, including an increase or decrease in the susceptibility of the individual to other chemicals, foods, or procedures, such as drug interactions.
Classification:
In terms of drugs, adverse events may be defined as: "Any untoward medical occurrence in a patient or clinical investigation subject administered a pharmaceutical product and which does not necessarily have to have a causal relationship with this treatment." In clinical trials, a distinction is made between an adverse event and a serious adverse event. Generally, any event which causes death, permanent damage, birth defects, or requires hospitalization is considered a serious adverse event. The results of trials are often included in the labelling of the medication to provide information both for patients and the prescribing physicians.
The term "life-threatening" in the context of a serious adverse event refers to an event in which the patient was at risk of death at the time of the event; it does not refer to an event which hypothetically might have caused death if it were more severe.
Reporting systems:
In many countries, adverse effects are required by law to be reported, researched in clinical trials and included into the patient information accompanying medical devices and drugs for sale to the public. Investigators in human clinical trials are obligated to report these events in clinical study reports. Research suggests that these events are often inadequately reported in publicly available reports. Because of the lack of these data and uncertainty about methods for synthesising them, individuals conducting systematic reviews and meta-analyses of therapeutic interventions often unknowingly overemphasise health benefit. To balance the overemphasis on benefit, scholars have called for more complete reporting of harm from clinical trials.
United Kingdom The Yellow Card Scheme is a United Kingdom initiative run by the Medicines and Healthcare products Regulatory Agency (MHRA) and the Commission on Human Medicines (CHM) to gather information on adverse effects to medicines. This includes all licensed medicines, from medicines issued on prescription to medicines bought over the counter from a supermarket. The scheme also includes all herbal supplements and unlicensed medicines found in cosmetic treatments. Adverse drug reactions (ADRs) can be reported by a number of health care professionals including physicians, pharmacists and nurses, as well as patients.
United States In the United States several reporting systems have been built, such as the Vaccine Adverse Event Reporting System (VAERS), the Manufacturer and User Facility Device Experience Database (MAUDE) and the Special Nutritionals Adverse Event Monitoring System. MedWatch is the main reporting center, operated by the Food and Drug Administration.
Australia In Australia, adverse effect reporting is administered by the Adverse Drug Reactions Advisory Committee (ADRAC), a subcommittee of the Australian Drug Evaluation Committee (ADEC). Reporting is voluntary, and ADRAC requests healthcare professionals to report all adverse reactions to its current drugs of interest, and serious adverse reactions to any drug. ADRAC publishes the Australian Adverse Drug Reactions Bulletin every two months. The Government's Quality Use of Medicines program is tasked with acting on this reporting to reduce and minimize the number of preventable adverse effects each year.
New Zealand Adverse reaction reporting is an important component of New Zealand's pharmacovigilance activities. The Centre for Adverse Reactions Monitoring (CARM) in Dunedin is New Zealand's national monitoring centre for adverse reactions. It collects and evaluates spontaneous reports of adverse reactions to medicines, vaccines, herbal products and dietary supplements from health professionals in New Zealand. Currently the CARM database holds over 80,000 reports and provides New Zealand-specific information on adverse reactions to these products, and serves to support clinical decision making when unusual symptoms are thought to be therapy related.
Canada In Canada, adverse reaction reporting is an important component of the surveillance of marketed health products conducted by the Health Products and Food Branch (HPFB) of Health Canada. Within HPFB, the Marketed Health Products Directorate leads the coordination and implementation of consistent monitoring practices with regard to assessment of signals and safety trends, and risk communications concerning regulated marketed health products.
MHPD also works closely with international organizations to facilitate the sharing of information. Adverse reaction reporting is mandatory for the industry and voluntary for consumers and health professionals.
Limitations In principle, medical professionals are required to report all adverse effects related to a specific form of therapy. In practice, it is at the discretion of the professional to determine whether a medical event is at all related to the therapy. As a result, routine adverse effects reporting often may not include long-term and subtle effects that may ultimately be attributed to a therapy. Part of the difficulty is identifying the source of a complaint. A headache in a patient taking medication for influenza may be caused by the underlying disease or may be an adverse effect of the treatment. In patients with end-stage cancer, death is a very likely outcome and whether the drug is the cause or a bystander is often difficult to discern.
By situation:
Medical procedures Surgery may have a number of undesirable or harmful effects, such as infection, hemorrhage, inflammation, scarring, loss of function, or changes in local blood flow. They can be reversible or irreversible, and a compromise must be found by the physician and the patient between the beneficial or life-saving consequences of surgery versus its adverse effects. For example, a limb may be lost to amputation in case of untreatable gangrene, but the patient's life is saved. Presently, one of the greatest advantages of minimally invasive surgery, such as laparoscopic surgery, is the reduction of adverse effects.
Other nonsurgical physical procedures, such as high-intensity radiation therapy, may cause burns and alterations in the skin. In general, these therapies try to avoid damage to healthy tissues while maximizing the therapeutic effect.
Vaccination may have adverse effects due to the nature of its biological preparation, sometimes using attenuated pathogens and toxins. Common adverse effects may be fever, malaise and local reactions in the vaccination site. Very rarely, there is a serious adverse effect, such as eczema vaccinatum, a severe, sometimes fatal complication which may result in persons who have eczema or atopic dermatitis.
Diagnostic procedures may also have adverse effects, depending much on whether they are invasive, minimally invasive or noninvasive. For example, allergic reactions to radiocontrast materials often occur, and a colonoscopy may cause the perforation of the intestinal wall.
Medications Adverse effects can occur as a collateral or side effect of many interventions, but they are particularly important in pharmacology, due to its wider, and sometimes uncontrollable, use by way of self-medication. Thus, responsible drug use becomes an important issue here. Adverse effects, like therapeutic effects of drugs, are a function of dosage or drug levels at the target organs, so they may be avoided or decreased by means of careful and precise pharmacokinetics, the change of drug levels in the organism in function of time after administration.
Adverse effects may also be caused by drug interaction. This often occurs when patients fail to inform their physician and pharmacist of all the medications they are taking, including herbal and dietary supplements. The new medication may interact agonistically or antagonistically (potentiate or decrease the intended therapeutic effect), causing significant morbidity and mortality around the world. Drug-drug and food-drug interactions may occur, and so-called "natural drugs" used in alternative medicine can have dangerous adverse effects. For example, extracts of St John's wort (Hypericum perforatum), a phytotherapic used for treating mild depression are known to cause an increase in the cytochrome P450 enzymes responsible for the metabolism and elimination of many drugs, so patients taking it are likely to experience a reduction in blood levels of drugs they are taking for other purposes, such as cancer chemotherapeutic drugs, protease inhibitors for HIV and hormonal contraceptives.
The scientific field of activity associated with drug safety is increasingly government-regulated, and is of major concern for the public, as well as to drug manufacturers. The distinction between adverse and nonadverse effects is a major undertaking when a new drug is developed and tested before marketing it. This is done in toxicity studies to determine the no-observed-adverse-effect level (NOAEL). These studies are used to define the dosage to be used in human testing (phase I), as well as to calculate the maximum admissible daily intake. Imperfections in clinical trials, such as insufficient number of patients or short duration, sometimes lead to public health disasters, such as those of fenfluramine (the so-called fen-phen episode), thalidomide and, more recently, of cerivastatin (Baycol, Lipobay) and rofecoxib (Vioxx), where drastic adverse effects were observed, such as teratogenesis, pulmonary hypertension, stroke, heart disease, neuropathy, and a significant number of deaths, causing the forced or voluntary withdrawal of the drug from the market.
Most drugs have a large list of nonsevere or mild adverse effects which do not rule out continued usage. These effects, which have a widely variable incidence according to individual sensitivity, include nausea, dizziness, diarrhea, malaise, vomiting, headache, dermatitis, dry mouth, etc. These can be considered a form of pseudo-allergic reaction, as not all users experience these effects; many users experience none at all.
The Medication Appropriateness Tool for Comorbid Health Conditions in Dementia (MATCH-D) warns that people with dementia are more likely to experience adverse effects, and that they are less likely to be able to reliably report symptoms.
Examples with specific medications
Abortion, miscarriage or uterine hemorrhage associated with misoprostol (Cytotec), a labor-inducing drug (this is a case where the adverse effect has been used legally and illegally for performing abortions)
Addiction to many sedatives and analgesics, such as diazepam, morphine, etc.
Birth defects associated with thalidomide
Bleeding of the intestine associated with aspirin therapy
Cardiovascular disease associated with COX-2 inhibitors (i.e. Vioxx)
Deafness and kidney failure associated with gentamicin (an antibiotic)
Death, following sedation, in children using propofol (Diprivan)
Depression or hepatic injury caused by interferon
Diabetes caused by atypical antipsychotic medications (neuroleptic psychiatric drugs)
Diarrhea caused by the use of orlistat (Xenical)
Erectile dysfunction associated with many drugs, such as antidepressants
Fever associated with vaccination
Glaucoma associated with corticosteroid-based eye drops
Hair loss and anemia may be caused by chemotherapy against cancer, leukemia, etc.
Headache following spinal anaesthesia
Hypertension in ephedrine users, which prompted FDA to remove the dietary supplement status of ephedra extracts
Insomnia caused by stimulants, methylphenidate (Ritalin), Adderall, etc.
Lactic acidosis associated with the use of stavudine (Zerit, for HIV therapy) or metformin (for diabetes)
Mania caused by corticosteroids
Liver damage from paracetamol
Melasma and thrombosis associated with use of estrogen-containing hormonal contraception, such as the combined oral contraceptive pill
Priapism associated with the use of sildenafil
Rhabdomyolysis associated with statins (anticholesterol drugs)
Seizures caused by withdrawal from benzodiazepines
Drowsiness or increase in appetite due to antihistamine use. Some antihistamines are used in sleep aids explicitly because they cause drowsiness.
Stroke or heart attack associated with sildenafil (Viagra), when used with nitroglycerin
Suicide, increased tendency associated with the use of fluoxetine and other selective serotonin reuptake inhibitor (SSRI) antidepressants
Tardive dyskinesia associated with use of metoclopramide and many antipsychotic medications
Controversies:
Sometimes, putative medical adverse effects are regarded as controversial and generate heated discussions in society and lawsuits against drug manufacturers. One example is the recent controversy as to whether autism was linked to the MMR vaccine (or to thiomersal, a mercury-based preservative used in some vaccines). No link has been found in several large studies, and despite removal of thimerosal from most early childhood vaccines beginning with those manufactured in 2003, the rate of autism has not decreased as would be expected if it had been the causative agent. Another instance is the potential adverse effects of silicone breast implants, which led to class actions brought by tens of thousands of plaintiffs against manufacturers of gel-based implants, due to allegations of damage to the immune system which have not yet been conclusively proven. In 1998, Dow Corning settled its remaining suits for $3.2 billion and went into bankruptcy. Due to the exceedingly high impact on public health of widely used medications, such as hormonal contraception and hormone replacement therapy, which may affect millions of users, even marginal probabilities of adverse effects of a severe nature, such as breast cancer, have led to public outcry and changes in medical therapy, although their benefits largely surpassed the statistical risks. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Friesian Sporthorse**
Friesian Sporthorse:
The Friesian Sporthorse is a Friesian crossbred of sport horse type. The ideal Friesian Sporthorse is specifically bred to excel in FEI-recognized sport horse disciplines. Thus, "sporthorse" refers to the phenotype, breeding, and intended use of these horses. While some consider the Friesian Sporthorse a breed and others consider it a type, the term "Friesian Sport Horse" is also sometimes used as a generic, all-inclusive term to describe any Friesian cross horse.
Bloodlines:
Different registries have different standards that define what is considered to be a Friesian Sporthorse. One registry regards Friesian Sporthorses as a breed, with strict breeding requirements in addition to performance recognition. In this case, Friesians are crossbred primarily with warmbloods and Thoroughbreds, although limited percentages of American Saddlebred, draft and Arabian breeding are also acceptable into lower books of the studbook. Other registries contend that "sporthorse" is a type, and rather than breed-specific requirements, they require that horses meet certain performance requirements before the registry will deem them a Friesian Sporthorse.
Either way, the goal is to produce animals suitable for the sport disciplines of dressage, eventing, show jumping, and combined driving. Most registries agree that Friesian Sporthorses also must be a minimum of 25% Friesian. Although the crossbreeding of Friesians with many different types and breeds is popular, it is worth noting that the resulting offspring are not always considered Friesian Sporthorses.
Characteristics:
Friesian Sporthorses can come in a variety of colors and sizes, with no limitations on acceptable colors or markings. Their body type can range from a sport horse build to a heavier more Baroque build. A higher-set and more arched neck is also common among Friesian Sporthorses. They tend to have the gentle temperament and striking appearance of the Friesian, but with an increased athleticism, stamina, and hybrid vigor, when responsibly crossbred. They are most commonly used for dressage and carriage driving, but have also been successful as jumpers and eventing horses, as well as for all-around riding. They are also valued as pleasure and trail horses.
History:
People have been crossbreeding Friesians for more than a century. In 1879 the Friesian registry created two books for registration, one book for purebred Friesians, and another book for crossbreds. Crossbreeding had become so common by 1907 that the rules were again changed, combining the two books into one book again. This changed again in 1915, with concerns over the potential extinction of the purebred Friesian, and two books were again created. Eventually two separate Friesian registries were created, Dutch and German. Today the Dutch Friesian registry (FPS, Friese Paarden Stamboek) and its American counterpart (FHANA, Friesian Horse Association North America) prohibit their registered horses from being used to create crossbred horses. However, the German Friesian registry (FPZV, Friesenpferde Zuchtverband e. V.) and its American counterpart (FPZV USA) do allow their registered horses to be crossbred with other breeds, but they will not register the crossbred offspring. Both the Dutch and German registries have recognized the severe risks of inbreeding this has created in the breed, and have created policy committees to try to reduce these risks. In the last decade, the popularity of the Friesian crossbreds has increased, and additional registries have been formed specifically to register and recognize Friesian cross horses and Friesian Sporthorses as separate breeds. The studbook for Friesian Sporthorses was founded in 2007 by the Friesian Sporthorse Association (FSA) and in 2008 the FSA trademarked the name "Friesian Sporthorse". The Friesian Sporthorse Association was initially founded in the United States, but shortly thereafter a branch was added in Australia, and the Friesian Sporthorse Association now registers Friesian Sporthorses worldwide.
Georgian Grande:
The Georgian Grande horse was initially introduced as a sport horse cross between the Friesian and the American Saddlebred, and was first developed in 1976 by a horse breeder named George Wagner, Jr. from Piketon, Ohio in the United States. Wagner was one of the wealthiest and largest landowners in Pike County, Ohio, owning almost 1,800 acres of land, as well as the 300-acre Flying W Farm. The land and its buildings were valued at just over $4 million in 2018. While formally recognized as a "breed" by some sources - including the United States Dressage Federation (USDF) and the United States Equestrian Federation (USEF) - the Georgian Grande is now considered to be a subtype of Friesian Sporthorse and the American Warmblood, as opposed to its own horse breed. While other draft horses were used in Wagner's breeding programme - including horses of the Shire, Percheron, Clydesdale, Belgian Draught, and Irish Draught horse breeds - Wagner primarily used Friesian stallions to cross to Saddlebred mares as the foundation stock for the Georgian Grande. Wagner wanted to bring back the "heavier boned, bigger Saddlebreds of the historic past...ridden by officers of the United States Cavalry in the American Civil War". Wagner claimed, "The American Saddlebred of today has changed a good deal from its original appearance, and tends to have much less bone. Occasionally, an 'old fashioned' or Baroque-style Saddlebred can still be found, but most have disappeared from the equine scene." To this end, Wagner co-founded the International Georgian Grande Horse Registry (IGGHR) in 1994 alongside his wife, Fredericka Wagner, and their daughter, Robin Wagner. IGGHR became a member of the United States Dressage Federation All Breeds' Council, as well as a member of the American Horse Council, and Georgian Grande horses began to compete internationally in dressage, eventing, and show jumping. A few Georgian Grande cross horses were registered in the United Kingdom (UK), Norway, and Australia, with some Georgian Grandes being registered as Shire and Saddlebred, as well as Percheron and Saddlebred, crosses.
However, in November 2018, four members of the Wagner family were arrested in Ohio and Kentucky, and were charged with the murder of eight members of the Rhoden family in the Pike County shootings. Also arrested was Fredericka Wagner, who was charged with perjury and obstructing justice for allegedly misleading investigators. However, the charges against her were dropped in June 2019. While not charged with a crime, Robin Wagner continues to be involved in the case by supporting her brother and one of the accomplices in the Rhoden family murders, George “Billy” Wagner III. George "Billy" Wagner III is the son of George Wagner, Jr. and Fredericka Wagner.
The Pike County murders have called into question the future of the Georgian Grande cross and the International Georgian Grande Horse Registry (IGGHR), especially with the arrest of registry co-founder Fredericka Wagner in the case. However, as of 2020, Georgian Grande crosses were still participating in the United States Dressage Federation (USDF) events and championships. Some Georgian Grandes are instead being registered as Friesian Sporthorses, Baroque Pintos, and American Warmbloods. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ug (book)**
Ug (book):
Ug is a children's book by Raymond Briggs. In 2001 it won the Nestlé Smarties Book Prize Silver Award.
Plot:
The book is about a boy named Ug living in the Stone Age who is thought by others to "think too much". He wants to have soft trousers (the trousers he and all the other cavemen wear are made of granite) and believes mammoth skin would be good to use. In the end, he and his father Dug do make the trousers, but after realising they cannot sew them together, they call it a day and leave them. Ug then grows up to be a cave painter, as his mother Dugs warned him. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Myeloperoxidase deficiency**
Myeloperoxidase deficiency:
Myeloperoxidase deficiency is a disorder featuring lack in either the quantity or the function of myeloperoxidase–an iron-containing protein expressed primarily in neutrophil granules. There are two types of myeloperoxidase deficiency: primary/inherited and secondary/acquired. Lack of functional myeloperoxidase leads to less efficient killing of intracellular pathogens, particularly Candida albicans, as well as less efficient production and release of neutrophil extracellular traps (NETs) from the neutrophils to trap and kill extracellular pathogens. Despite these characteristics, more than 95% of individuals with myeloperoxidase deficiency experience no symptoms in their lifetime. For those who do experience symptoms, the most common symptom is frequent infections by Candida albicans. Individuals with myeloperoxidase deficiency also experience higher rates of chronic inflammatory conditions. Myeloperoxidase deficiency is diagnosed using flow cytometry or cytochemical stains. There is no treatment for myeloperoxidase deficiency itself. Rather, in the rare cases that individuals experience symptoms, these infections should be treated.
Pathophysiology:
The innate immune system responds quickly to infection, with neutrophils (a type of white blood cell) being the first responders. Neutrophils enter the site of infection and begin to phagocytose (take up) pathogens. Once engulfed, the neutrophils must then degrade the captured pathogens–a process known as intracellular killing. One method of intracellular killing, which takes place in the phagolysosomes of neutrophils, involves the reaction of myeloperoxidase with hydrogen peroxide (H2O2) acquired in the cells from NADPH oxidase through the respiratory burst. This reaction generates several acidic products including hypochlorous acid (HClO), which can break down pathogens. Bacteria such as Pseudomonas aeruginosa and fungi such as Candida albicans are killed in this manner. Neutrophils are also involved in killing extracellular pathogens (pathogens outside of the cell) through the release of NETs. These NETs contain myeloperoxidase, among other antimicrobial proteins. Once released outside of the cell, NETs trap pathogens and may in some cases kill them. Although myeloperoxidase is not required for all NET formation/release, NETs are only formed and released in response to Candida albicans when myeloperoxidase is present. Myeloperoxidase proteins in NETs can still react with H2O2 to form HClO and break down some extracellular pathogens. In myeloperoxidase-deficient individuals, this extracellular pathogen killing doesn't typically occur. Finally, during infection, neutrophils can migrate to the lymph nodes, where they deposit myeloperoxidase. Although the mechanisms of this process aren't well understood, there is evidence that this extracellular myeloperoxidase interacts with dendritic cells (cells of the adaptive immune system) in the lymph nodes, leading to a decrease in adaptive immune system activity in response to infection.
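The halogenation step described above is usually summarized by the standard textbook reaction scheme below (this compact form is a general statement of the chemistry, not a quotation from the article):

```latex
% Myeloperoxidase-catalysed formation of hypochlorous acid from the hydrogen
% peroxide supplied by NADPH oxidase (standard reaction scheme).
\mathrm{H_2O_2 + Cl^- + H^+ \;\xrightarrow{\text{MPO}}\; HClO + H_2O}
```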
Presentation:
About 1:1,000 to 1:4,000 individuals in the United States and Europe and 1:55,000 individuals in Japan experience myeloperoxidase deficiency. The most common symptom of myeloperoxidase deficiency is frequent infections, particularly by the fungus Candida albicans. This symptom is especially frequent in individuals who also experience diabetes mellitus. The majority of myeloperoxidase-deficient individuals, however, do not display any significant tendencies towards chronic infections from most bacteria. This is likely due to the fact that the absence of myeloperoxidase leads to increased neutrophil phagocytosis and degranulation as well as increased development of the adaptive immune system. That is, other aspects of the immune system typically compensate for the lack of myeloperoxidase, leading to relatively mild symptoms. Nonetheless, myeloperoxidase-deficient individuals have been found to experience more chronic inflammatory conditions (such as rheumatoid arthritis, pulmonary/skin inflammation, kidney/heart disease, etc.) than individuals with sufficient myeloperoxidase. Researchers hypothesize this may be a result of heightened adaptive immune system activity in individuals with myeloperoxidase deficiency. There is also some evidence that congenital myeloperoxidase deficiency is correlated with higher rates of malignant tumors.
Types:
MPO deficiency is broken down into two categories: primary/congenital and secondary/acquired. Primary MPO deficiency is an autosomal recessive genetic disorder, which is caused by mutations in the myeloperoxidase gene on chromosome 17q23. There are several different known mutations of this gene which all lead to myeloperoxidase deficiency. Secondary MPO deficiency, on the other hand, occurs in various clinical situations as a result of hematological neoplasm, disseminated cancers, some drugs, iron deficiency, lead intoxication, thrombotic disease, renal transplantation, severe infectious disease, diabetes mellitus, neuronal lipofuscinosis, or pregnancy. Secondary MPO deficiency is typically partial, meaning only a portion of the affected individual's neutrophils lack functional myeloperoxidase.
Diagnosis:
Myeloperoxidase deficiency can be diagnosed via flow cytometry and cytochemical stains. Various devices can divide up leukocyte (white blood cell) populations based on their size and peroxidase activity. Specific stains bind to myeloperoxidase, and individuals who display large, granulated cells without this stain through flow cytometry typically have myeloperoxidase deficiency. In this way, it's apparent when neutrophils are present in an individual but peroxidase activity is absent. Note that myeloperoxidase deficiency can cause false positives in the diagnosis of chronic granulomatous disease, a condition which includes dysfunctional NADPH oxidase. Both disorders interfere with neutrophils' abilities to kill pathogens through reaction with oxidative species. However, chronic granulomatous disease leads to inadequate H2O2 production, while myeloperoxidase deficiency is characterized by a lack of myeloperoxidase to interact with present H2O2. Testing with NADPH oxidase-specific assays can lead to positive results for chronic granulomatous disease and negative results for myeloperoxidase deficiency.
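The distinction drawn above between the two disorders can be sketched as a simple decision on the two test results. This is only an illustration of the reasoning in the preceding paragraph, not a clinical tool:

```python
# Illustrative sketch of the reasoning above (not a clinical decision aid):
# both conditions blunt oxidative killing, but they differ on which test fails.
def interpret(peroxidase_stain_positive: bool, nadph_oxidase_assay_normal: bool) -> str:
    if not nadph_oxidase_assay_normal:
        return "consistent with chronic granulomatous disease"
    if not peroxidase_stain_positive:
        return "consistent with myeloperoxidase deficiency"
    return "no deficiency suggested by these two tests"

print(interpret(peroxidase_stain_positive=False, nadph_oxidase_assay_normal=True))
```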
Treatment:
Most individuals with myeloperoxidase deficiency do not need regular treatment, as they experience only mild symptoms, if any at all. Continued antibiotic use is not recommended in myeloperoxidase-deficient patients who don't experience recurrent infections. Acquired myeloperoxidase deficiency typically goes away when the underlying condition is treated. In particular, when myeloperoxidase deficiency is caused by severe iron deficiency, treatment with iron returns myeloperoxidase function to normal. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nest box camera**
Nest box camera:
A nest box camera, also known as a bird box camera, is a photographic device fitted inside a nest box in order to monitor its inhabitants. Many Internet sites broadcast video streams and still images of nesting birds in real time.
Technology:
Most cameras use visible light to capture images. Infrared cameras may be used alone or in conjunction with visible light cameras if the birds are active at night. Infrared light is not dangerous to nesting birds. Wired and wireless systems are used. A webcam is frequently used by enthusiasts, but the quality is usually standard-definition. Wired network cameras allow the streaming of high-definition video to the internet or to internal or external storage. Some nest box cameras have microphones inside them. It is relatively easy to construct a nest box camera: it involves little more than installing a camera in a nest box, remembering only to choose or construct a nest box large enough to contain the camera, to have a box deep enough to enable proper focusing of the camera, and to use a camera suitable for outdoor conditions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
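As a minimal sketch of the capture side of such a setup, the snippet below grabs a single frame from a connected camera with OpenCV and saves it as a still image; the device index and output filename are assumptions for illustration.

```python
import cv2  # OpenCV; assumes a camera reachable as device 0

# Minimal sketch of a nest box camera capture: grab one frame and save a still.
# The device index, single-frame workflow and filename are illustrative choices.
capture = cv2.VideoCapture(0)
if not capture.isOpened():
    raise RuntimeError("camera not found")

ok, frame = capture.read()                    # one frame from the nest box camera
if ok:
    cv2.imwrite("nestbox_still.jpg", frame)   # save a snapshot, e.g. for a website
capture.release()
```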
**Orlicz–Pettis theorem**
Orlicz–Pettis theorem:
The Orlicz–Pettis theorem is a theorem in functional analysis concerning convergent series (Orlicz) or, equivalently, countable additivity of measures (Pettis) with values in abstract spaces.
Let $X$ be a Hausdorff locally convex topological vector space with dual $X^*$. A series $\sum_{n=1}^{\infty} x_n$ is subseries convergent (in $X$) if all its subseries $\sum_{k=1}^{\infty} x_{n_k}$ are convergent. The theorem says that, equivalently, (i) if a series $\sum_{n=1}^{\infty} x_n$ is weakly subseries convergent in $X$ (i.e., is subseries convergent in $X$ with respect to its weak topology $\sigma(X, X^*)$), then it is (subseries) convergent; or (ii) let $\mathcal{A}$ be a $\sigma$-algebra of sets and let $\mu : \mathcal{A} \to X$ be an additive set function. If $\mu$ is weakly countably additive, then it is countably additive (in the original topology of the space $X$). The history of the origins of the theorem is somewhat complicated. In numerous papers and books there are misquotations and/or misconceptions concerning the result. Assuming that $X$ is a weakly sequentially complete Banach space, W. Orlicz proved the following theorem: if a series $\sum_{n=1}^{\infty} x_n$ is weakly unconditionally Cauchy, i.e., $\sum_{n=1}^{\infty} |x^*(x_n)| < \infty$ for each linear functional $x^* \in X^*$, then the series is (norm) convergent in $X$. After the paper was published, Orlicz realized that in the proof of the theorem the weak sequential completeness of $X$ was only used to guarantee the existence of the weak limits of the considered series. Consequently, assuming the existence of those limits, which amounts to the assumption of the weak subseries convergence of the series, the same proof shows that the series is norm convergent. In other words, version (i) of the Orlicz–Pettis theorem holds. The theorem in this form, openly credited to Orlicz, appeared in Banach's monograph in the last chapter Remarques, in which no proofs were provided. Pettis directly referred to Orlicz's theorem in Banach's book. Needing the result in order to show the coincidence of the weak and strong measures, he provided a proof. Also Dunford gave a proof (with a remark that it is similar to the original proof of Orlicz).
A more thorough discussion of the origins of the Orlicz–Pettis theorem and, in particular, of the paper can be found in. See also footnote 5 on p. 839 of and the comments at the end of Section 2.4 of the 2nd edition of the quoted book by Albiac and Kalton. Though in Polish, there is also an adequate comment on page 284 of the quoted monograph of Alexiewicz, Orlicz’s first PhD-student, still in the occupied Lwów.
Grothendieck proved a theorem whose special case is the Orlicz–Pettis theorem in locally convex spaces. Later, more direct proofs of form (i) of the theorem in the locally convex case were provided by McArthur and by Robertson.
Orlicz-Pettis type theorems:
The theorem of Orlicz and Pettis has been strengthened and generalized in many directions. An early survey of this area of research is Kalton's paper. A natural setting for subseries convergence is that of an Abelian topological group $X$, and a representative result of this area of research is the following theorem, called by Kalton the Graves–Labuda–Pachl theorem. Theorem: let $X$ be an Abelian group and $\alpha, \beta$ two Hausdorff group topologies on $X$ such that $(X, \beta)$ is sequentially complete, $\alpha \subset \beta$, and the identity $j : (X, \alpha) \to (X, \beta)$ is universally measurable. Then subseries convergence for both topologies $\alpha$ and $\beta$ is the same.
As a consequence, if $(X, \beta)$ is a sequentially complete K-analytic group, then the conclusion of the theorem is true for every Hausdorff group topology $\alpha$ weaker than $\beta$. This is a generalization of an analogous result for a sequentially complete analytic group $(X, \beta)$ (in the original statement of the Andersen–Christensen theorem the assumption of sequential completeness is missing), which in turn extends the corresponding theorem of Kalton for a Polish group, a theorem that triggered this series of papers.
The limitations for this kind of result are provided by the weak* topology of the Banach space $\ell^\infty$ and by examples of F-spaces $X$ with separating dual $X^*$ such that weak (i.e., $\sigma(X, X^*)$) subseries convergence does not imply subseries convergence in the F-norm of the space $X$. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SmartQVT**
SmartQVT:
SmartQVT is an unmaintained (since 2013) full Java open-source implementation of the QVT-Operational language, which is dedicated to expressing model-to-model transformations. The tool compiles QVT transformations into Java programs in order to run them. The compiled Java programs are EMF-based applications. It is provided as Eclipse plug-ins running on top of the EMF metamodeling framework and is licensed under the EPL.
Components:
SmartQVT contains three main components: a code editor: this component helps the user write QVT code by highlighting keywords.
a parser: this component converts QVT code files into model representations of the QVT programs (abstract syntax).
a compiler: this component converts model representations of the QVT program into executable Java programs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SCM Anywhere**
SCM Anywhere:
SCM Anywhere is an SQL Server-based software configuration management tool with integrated revision control, bug tracking and build automation. It supports integration with CruiseControl.NET and ANT, and is developed by Dynamsoft. SCM Anywhere is a client/server system. The server manages a central database and a master repository of file versions. Users work on files in a local client working folder and submit changed files together in changesets. On-premises software and software-as-a-service editions are available.
Integration with Visual SourceSafe-compatible IDEs is supported, as is cross-platform use. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Section (category theory)**
Section (category theory):
In category theory, a branch of mathematics, a section is a right inverse of some morphism. Dually, a retraction is a left inverse of some morphism.
In other words, if f:X→Y and g:Y→X are morphisms whose composition f∘g:Y→Y is the identity morphism on Y , then g is a section of f , and f is a retraction of g .Every section is a monomorphism (every morphism with a left inverse is left-cancellative), and every retraction is an epimorphism (every morphism with a right inverse is right-cancellative).
In algebra, sections are also called split monomorphisms and retractions are also called split epimorphisms. In an abelian category, if f:X→Y is a split epimorphism with split monomorphism g:Y→X , then X is isomorphic to the direct sum of Y and the kernel of f . The synonym coretraction for section is sometimes seen in the literature, although rarely in recent work.
Properties:
A section that is also an epimorphism is an isomorphism. Dually a retraction that is also a monomorphism is an isomorphism.
Terminology:
The concept of a retraction in category theory comes from the essentially similar notion of a retraction in topology: f:X→Y where Y is a subspace of X is a retraction in the topological sense if it is a retraction of the inclusion map i:Y↪X in the category-theory sense. The concept in topology was defined by Karol Borsuk in 1931. Borsuk's student, Samuel Eilenberg, was, with Saunders Mac Lane, a founder of category theory, and (as the earliest publications on category theory concerned various topological spaces) one might have expected this term to have initially been used. In fact, their earlier publications, up to, e.g., Mac Lane (1963)'s Homology, used the term right inverse. It was not until 1965, when Eilenberg and John Coleman Moore coined the dual term 'coretraction', that Borsuk's term was lifted to category theory in general. The term coretraction gave way to the term section by the end of the 1960s.
Both use of left/right inverse and section/retraction are commonly seen in the literature: the former use has the advantage that it is familiar from the theory of semigroups and monoids; the latter is considered less confusing by some because one does not have to think about 'which way around' composition goes, an issue that has become greater with the increasing popularity of the synonym f;g for g∘f.
Examples:
In the category of sets, every monomorphism (injective function) with a non-empty domain is a section, and every epimorphism (surjective function) is a retraction; the latter statement is equivalent to the axiom of choice.
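A concrete check of the first claim, built by constructing a retraction for a given injective map; the sets and functions are made up for illustration.

```python
# Illustration of the claim above: an injective function with non-empty domain
# splits, because a retraction can be built by inverting it on its image.
A = [0, 1, 2]
B = ["a", "b", "c", "d", "e"]

def g(a):
    # An injective map A -> B (a monomorphism in the category of sets).
    return "abc"[a]

image_inverse = {g(a): a for a in A}

def f(b):
    # A retraction of g: invert g on its image, send everything else to 0 in A.
    return image_inverse.get(b, 0)

assert all(f(b) in A for b in B)        # f is defined on all of B
assert all(f(g(a)) == a for a in A)     # f ∘ g is the identity, so g is a section of f
```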
In the category of vector spaces over a field K, every monomorphism and every epimorphism splits; this follows from the fact that linear maps can be uniquely defined by specifying their values on a basis.
In the category of abelian groups, the epimorphism Z → Z/2Z which sends every integer to its remainder modulo 2 does not split; in fact the only morphism Z/2Z → Z is the zero map. Similarly, the natural monomorphism Z/2Z → Z/4Z doesn't split even though there is a non-trivial morphism Z/4Z → Z/2Z.
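To make the first of these claims concrete, here is a short check (a sketch; the bar notation for residue classes is ours, not the article's):

```latex
% Why the quotient map q : \mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} has no section:
% any homomorphism g : \mathbb{Z}/2\mathbb{Z} \to \mathbb{Z} satisfies
\[
  2\,g(\bar{1}) \;=\; g(\bar{1} + \bar{1}) \;=\; g(\bar{0}) \;=\; 0 ,
\]
% and since \mathbb{Z} is torsion-free this forces g(\bar{1}) = 0, i.e. g = 0.
% The zero map cannot satisfy q \circ g = \mathrm{id}, hence q does not split.
```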
The categorical concept of a section is important in homological algebra, and is also closely related to the notion of a section of a fiber bundle in topology: in the latter case, a section of a fiber bundle is a section of the bundle projection map of the fiber bundle.
Given a quotient space X¯ with quotient map π:X→X¯, a section of π is called a transversal. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chemical waste**
Chemical waste:
Chemical waste is any excess, unused, or unwanted chemical, especially those that cause damage to human health or the environment. Chemical waste may be classified as hazardous waste, non-hazardous waste, universal waste, or household hazardous waste. Hazardous waste is material that displays one or more of the following four characteristics: ignitability, corrosivity, reactivity, and toxicity. This information, along with chemical disposal requirements, is typically available on a chemical's Material Safety Data Sheet (MSDS). Radioactive waste requires special ways of handling and disposal due to its radioactive properties. Biohazardous waste, which may contain hazardous materials, is also handled differently.
Laboratory chemical waste in the US:
The U.S. Environmental Protection Agency (EPA) prohibits disposing of certain materials down drains. Therefore, when hazardous chemical waste is generated in a laboratory setting, it is usually stored on-site in an appropriate waste carboy where it is later collected and disposed of by a specialist contractor in order to meet safety, health, and legislative requirements. Many universities' Environment, Health, and Safety (EHS) divisions/departments serve this collection and oversight role. Organic solvents and other organic waste are typically incinerated. Some chemical wastes are recycled, such as waste elemental mercury.
Laboratory chemical waste in the US:
Laboratory waste containment: Packaging. During packaging, chemical liquid waste containers are filled to no further than 75% capacity to allow for vapor expansion and to reduce potential spills which can occur from transporting or moving overfilled containers. Containers for chemical liquid waste are typically constructed from materials compatible with the hazardous waste being stored, such as inert materials like polypropylene (PP) or polytetrafluoroethylene (PTFE). These containers are also constructed of mechanically robust materials in order to minimize leakage during storage or transit. In addition to the general packaging requirements mentioned above, precipitates, solids, and other non-fluid wastes are typically stored separately from liquid waste. Chemically contaminated glassware is disposed of separately from other chemical waste in containers that cannot be punctured by broken glass.
Laboratory chemical waste in the US:
Labelling. Containers are labelled with the group name from the chemical waste category and an itemized list of the contents. All chemicals or materials contaminated by chemicals pose a significant hazard. All waste must be appropriately packaged.
Laboratory chemical waste in the US:
Storage. Chemical waste containers are kept closed to prevent spillage, except when waste is being added. Suitable containers are labeled in order to inform disposal specialists of the contents, as well as to prevent addition of incompatible chemicals. Liquid waste is stored in containers with secure screw-top or similar lids that cannot be easily dislodged in transit. Solid waste is stored in various sturdy, chemically inert containers, such as large sealed buckets or thick plastic bags. A secondary containment (e.g., a flammable cabinet or large plastic bin) is used to capture spills and leaks from the primary container and to segregate incompatible hazardous wastes, such as acids and bases.
Laboratory chemical waste in the US:
Chemical compatibility guidelines. Many chemicals react adversely when combined. Incompatible chemicals are therefore stored in separate areas of laboratories. Acids are separated from alkalis, metals, cyanides, sulfides, azides, phosphides, and oxidizers, as when acids combine with these types of compounds, violent exothermic reactions can occur. In addition, some of these reactions produce flammable gases, which, combined with the heat produced, may cause explosions. In the case of cyanides, sulfides, azides, phosphides, and similar compounds, toxic gases are also produced.
Laboratory chemical waste in the US:
Oxidizers are separated from acids, organic materials, metals, reducing agents, and ammonia, as when oxidizers combine with these types of compounds, flammable and sometimes toxic compounds can be created. Oxidizers also increase the likelihood that any flammable material present will ignite, seen most readily in research laboratories with improper storage of organic solvents.
Environmental pollution:
Pharmaceuticals and PPCPs; river pollution; textile industry. The textile industry is one of the largest polluters in the globalized world of mostly free-market-dominated socioeconomic systems. Chemically polluted textile wastewater degrades the quality of the soil and water. The pollution comes from the chemical treatments used, e.g., in pretreatment, dyeing, printing, and finishing operations, which many or most market-driven companies use despite the existence of "eco-friendly alternatives". Textile industry wastewater is considered to be one of the largest polluters of water and soil ecosystems, causing "carcinogenic, mutagenic, genotoxic, cytotoxic and allergenic threats to living organisms". The textile industry uses over 8000 chemicals in its supply chain, also pollutes the environment with large amounts of microplastics, and has been identified in one review as the industry sector producing the largest amount of pollution. A campaign by big clothing brands like Nike, Adidas and Puma to voluntarily reform their manufacturing supply chains and commit to achieving zero discharges of hazardous chemicals by 2020 (a global goal) appears to have failed.
Environmental pollution:
The textile industry also creates substantial pollution that leads to externalities, which can cause large economic problems. These problems typically arise when there is no clear division of ownership rights, which means there is incomplete information about which company pollutes and on what scale the pollution causes damage.
Environmental pollution:
Planetary boundary. A study reported by "Scienmag" defines a 'planetary boundary' for novel entities such as plastic and chemical pollution. The study reported that the boundary has been crossed.
Regulation of chemical waste:
Chemical waste may fall under regulations such as COSHH in the United Kingdom, or the Clean Water Act and Resource Conservation and Recovery Act in the United States. In the U.S., the Environmental Protection Agency (EPA) and the Occupational Safety and Health Administration (OSHA), as well as state and local regulations, also regulate chemical use and disposal.
Regulation of chemical waste:
Chemical waste in Canadian aquaculture. Chemical waste in the oceans is becoming a major issue for marine life. Many studies have been conducted to try to determine the effects of chemicals in the oceans. In Canada, many of the studies concentrated on the Atlantic provinces, where fishing and aquaculture are an important part of the economy. In New Brunswick, a study was done on sea urchins in an attempt to identify the effects of toxic and chemical waste on life beneath the ocean, specifically the waste from salmon farms. Sea urchins were used to check the levels of metals in the environment. Green sea urchins have been used as they are widely distributed, abundant in many locations, and easily accessible. By investigating the concentrations of metals in the green sea urchins, the impacts of chemicals from salmon aquaculture activity could be assessed and detected. Samples were taken at 25-meter intervals along a transect in the direction of the main tidal flow. The study found that there were impacts to at least 75 meters based on the intestine metal concentrations. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Calvo (staggered) contracts**
Calvo (staggered) contracts:
A Calvo contract is the name given in macroeconomics to a pricing model in which, when a firm sets a nominal price, there is a constant probability in each period that the firm will be able to reset that price, independent of the time since the price was last set. The model was first put forward by Guillermo Calvo in his 1983 article "Staggered Prices in a Utility-Maximizing Framework". The original article was written in a continuous-time mathematical framework, but it is nowadays mostly used in its discrete-time version. The Calvo model is the most common way to model nominal rigidity in new Keynesian DSGE macroeconomic models.
The Calvo model of pricing:
We can define the probability that the firm can reset its price in any one period as h (the hazard rate), or equivalently the probability (1−h) that the price will remain unchanged in that period (the survival rate). The probability h is sometimes called the "Calvo probability" in this context. In the Calvo model the crucial feature is that the price-setter does not know how long the nominal price will remain in place. The probability that the current price lasts for exactly i more periods is Pr[i] = (1−h)^(i−1) h. The duration of the nominal price thus follows a geometric distribution, and its expected duration from when it is first set is E[i] = 1/h. For example, if the Calvo probability h is 0.25 per period, the expected duration is 4 periods. Since the Calvo probability is constant and does not depend on how long it has been since the price was set, the probability that it will survive i more periods is given by exactly the same geometric distribution for all i = 1, …, ∞. Thus if h = 0.25, then however old the price is, it is expected to last another 4 periods.
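The geometric-duration result can be checked with a short simulation; the following sketch (with h = 0.25 and the sample size picked arbitrarily for illustration) draws completed price durations and compares their mean with 1/h:

```python
import random

def simulate_price_duration(h, n_prices=200_000, seed=0):
    """Average completed duration of prices that are reset with probability h each period."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_prices):
        duration = 1
        while rng.random() >= h:      # the price survives this period with probability 1 - h
            duration += 1
        total += duration
    return total / n_prices

h = 0.25
print(simulate_price_duration(h))     # close to the expected duration 1/h = 4 periods
```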
Calvo pricing and nominal rigidity:
With the Calvo model the response of prices to a shock is spread out over time. Suppose a shock hits the economy at time t. A proportion h of prices can respond immediately and the rest (1−h) remain fixed. The next period, a proportion (1−h)^2 will still have remained fixed and not responded to the shock; i periods after the shock this proportion will have shrunk to (1−h)^i. After any finite time, there will still be some proportion of prices that have not responded and remain fixed. This contrasts with the Taylor model, where there is a fixed length for contracts - for example 4 periods - after which all firms will have reset their price.
Calvo pricing and nominal rigidity:
The Calvo pricing model played a key role in the derivation of the New Keynesian Phillips curve by John Roberts in 1995, and it has since been used in New Keynesian DSGE models.
New Keynesian Phillips curve: π_t = β·E_t[π_{t+1}] + κ·y_t,
Calvo pricing and nominal rigidity:
where κ = h[1−(1−h)β]γ/(1−h). The current expectations of next period's inflation are incorporated as βE_t[π_{t+1}]. The coefficient κ captures the responsiveness of current inflation to current output. The New Keynesian Phillips curve reflects the fact that price-setting is forward-looking, and what influences current inflation is not only the level of current demand (as represented by output) but also expected future inflation.
Calvo pricing and nominal rigidity:
There are different ways of measuring nominal rigidity in an economy. There will be many firms (or price-setters): some tend to change price frequently, others less so. Even a firm which changes its "normal" price infrequently might make a special offer or sale for a short period before returning to its normal price.
Calvo pricing and nominal rigidity:
Two possible ways of measuring nominal rigidity that have been suggested are: (i) The average age of contracts. One can take all of the firms and ask how long the prices have been set at their current level. With Calvo price setting, assuming that all firms have the same hazard rate h, there will be a proportion h which have just been reset, a proportion h·(1−h) which reset in the previous period and remain fixed this period, and in general the proportion of prices set i periods ago that survive today is given by α_i = (1−h)^(i−1) h. The average age of contracts A* is then A* = Σ_{i=1}^∞ i·h(1−h)^(i−1) = 1/h. The average age of contracts is one measure of nominal rigidity. However, it suffers from interruption bias: at any point in time, we only observe how long a price has been at its current level. We might wish to ask what its completed length will be at the next price change. This is the second measure.
Calvo pricing and nominal rigidity:
(ii) The average completed length of contracts. This is similar to the average age in that it looks at the current prices set by firms. However, rather than asking how long it has been since the price was last set (the age of the contract), it asks how long the price will have lasted when it next changes. Clearly for a single firm this is random. Across all firms, however, the law of large numbers kicks in and we can calculate the exact distribution of completed contract lengths. It can be shown that the average completed length of contracts is given by T = 2/h − 1 = 2A* − 1. That is, the completed length of contracts is twice the average age minus 1. Thus, for example, if h = 0.25, 25% of prices change each period. At any time, the average age of prices will be 4 periods. However, the corresponding average completed length of contracts is 7 periods.
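Both measures can be verified with a small cross-sectional simulation; again a sketch, with h = 0.25 and the sample sizes chosen purely for illustration:

```python
import random

def calvo_cross_section(h, n_firms=50_000, burn_in=100, seed=1):
    """Average age and average completed length of prices under Calvo pricing."""
    rng = random.Random(seed)
    total_age = total_completed = 0
    for _ in range(n_firms):
        age = 1
        for _ in range(burn_in):          # run forward so ages reach their long-run distribution
            age = 1 if rng.random() < h else age + 1
        remaining = 0
        while rng.random() >= h:          # extra periods the current price will still survive
            remaining += 1
        total_age += age
        total_completed += age + remaining
    return total_age / n_firms, total_completed / n_firms

avg_age, avg_completed = calvo_cross_section(h=0.25)
print(avg_age, avg_completed)             # roughly 1/h = 4 and 2/h - 1 = 7
```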
Development of the concept:
One of the major problems with the Calvo contract as a model of pricing is that the inflation dynamics it implies do not fit the data. Inflation is better described by the hybrid new Keynesian Phillips curve, which includes lagged inflation alongside expected future inflation.
Development of the concept:
This has led the original Calvo model to be developed in a number of directions: (a) Indexation. With indexation, prices are automatically updated in response to lagged inflation (at least to some degree), which gives rise to the hybrid new Keynesian Phillips curve. The Calvo probability then refers to the firm being able to choose the price it sets that period (which happens with probability h) rather than having the price rise by indexation (which happens with probability 1−h). The Calvo model with indexation is adopted by many new Keynesian researchers. (b) Duration-dependent hazard function h(i). A key feature of the Calvo model is that the hazard rate is constant: the probability of changing the price does not depend on how old the price is. In 1999, Wolman suggested that the model should be generalized to allow the hazard rate to vary with duration. The key idea is that an older price may be more or less likely to change than a newer price, which is captured by a hazard function h(i) that depends on the age i of the price. This generalized Calvo model with a duration-dependent hazard rate has been developed by several authors.
Sources:
David Romer, Advanced Macroeconomics, McGraw-Hill Higher Education, 4th edition (1 May 2011), ISBN 978-0073511375.
Carl Walsh, Monetary Theory and Policy (3rd edition), MIT Press, 2010, ISBN 978-0262013772.
Michael Woodford, Interest and Prices, Princeton University Press, 2003, ISBN 9781400830169. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Figure 8 roller coaster**
Figure 8 roller coaster:
Figure 8 roller coasters are a category of roller coasters where the train runs through a figure 8 shaped course before returning to the boarding station. This was one of the earliest layouts used in roller coaster design, along with the out and back roller coaster. The figure 8 design allowed for more turns than the out and back design, offering riders an alternative experience.
Figure 8 roller coaster:
An early and famous example of a Figure 8 is the Leap the Dips at Lakemont Park, in Altoona, Pennsylvania.
Many figure 8 roller coasters carry the name "Figure 8."
Figure 8 roller coasters:
An incomplete list of figure 8 roller coasters: Flying Fish, Thorpe Park (UK); Runaway Train, Chessington World of Adventures (UK). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GreenBrowser**
GreenBrowser:
GreenBrowser is a discontinued freeware web browser based on Internet Explorer's core, the Trident rendering engine.
GreenBrowser:
GreenBrowser is a full-featured browser, highly customizable but compact in size and low in memory requirements. GreenBrowser is similar to Maxthon, and closely related to the MyIE browser. Some addons and plugins designed for Maxthon will also work with GreenBrowser. GreenBrowser includes many automation features as standard, such as an ad filter, auto form fill, auto scroll, auto save, and auto refresh.
GreenBrowser:
GreenBrowser is a product from morequick, a software organization based in China. Simplified Chinese is built into the browser. The browser also has certain idiosyncrasies, such as many toolbars and icons being enabled by default. When GreenBrowser is running, the green G logo floats over all pages but can be turned off by right-clicking on it and unchecking the "Monitor" option. GreenBrowser was one of the twelve browsers offered to European Economic Area users of Microsoft Windows in 2010 at BrowserChoice.eu. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pet peeve**
Pet peeve:
A pet peeve, pet aversion, or pet hate is a minor annoyance that an individual finds particularly irritating, to a greater degree than would be expected based on the experience of others.
Origin of the concept:
The noun peeve, meaning an annoyance, is believed to have originated in the United States early in the twentieth century, derived by back-formation from the adjective peevish, meaning "ornery or ill-tempered", which dates from the late 14th century. The term pet peeve was introduced to a wide readership in the single-panel comic strip The Little Pet Peeve in the Chicago Tribune during the period 1916–1920. The strip was created by cartoonist Frank King, who also created the long-running Gasoline Alley strip. King's "little pet peeves" were humorous critiques of generally thoughtless behaviors and nuisance frustrations. Examples included people reading the inter-titles in silent films aloud, cracking an egg only to smell that it's gone rotten, back-seat drivers, and rugs that keep catching the bottom of the door and bunching up. King's readers submitted topics, including theater goers who unwrap candy in crinkly paper during a live performance, and (from a 12-year-old boy) having his mother come in to sweep when he has the pieces of a building toy spread out on the floor.
Current usage and examples:
Pet peeves often involve specific behaviors of someone close, such as a spouse or significant other. These behaviors may involve disrespect, manners, personal hygiene, relationships, and family issues. A key aspect of a pet peeve is that it may well seem acceptable or insignificant to others, while the person bothered by it is likewise not bothered by things that might upset others. For example, a supervisor may have a pet peeve about people leaving the lid up on the copier, about others interrupting while they are speaking, or about subordinates having messy desks. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Standing wave**
Standing wave:
In physics, a standing wave, also known as a stationary wave, is a wave that oscillates in time but whose peak amplitude profile does not move in space. The peak amplitude of the wave oscillations at any point in space is constant with respect to time, and the oscillations at different points throughout the wave are in phase. The locations at which the absolute value of the amplitude is minimum are called nodes, and the locations where the absolute value of the amplitude is maximum are called antinodes.
Standing wave:
Standing waves were first described scientifically by Michael Faraday in 1831. Faraday observed standing waves on the surface of a liquid in a vibrating container. Franz Melde coined the term "standing wave" (German: stehende Welle or Stehwelle) around 1860 and demonstrated the phenomenon in his classic experiment with vibrating strings. This phenomenon can occur because the medium is moving in the direction opposite to the movement of the wave, or it can arise in a stationary medium as a result of interference between two waves traveling in opposite directions. The most common cause of standing waves is the phenomenon of resonance, in which standing waves occur inside a resonator due to interference between waves reflected back and forth at the resonator's resonant frequency.
Standing wave:
For waves of equal amplitude traveling in opposing directions, there is on average no net propagation of energy.
Moving medium:
As an example of the first type, under certain meteorological conditions standing waves form in the atmosphere in the lee of mountain ranges. Such waves are often exploited by glider pilots.
Moving medium:
Standing waves and hydraulic jumps also form on fast-flowing river rapids and tidal currents such as the Saltstraumen maelstrom. A requirement for this in river currents is shallow, flowing water in which the inertia of the water overcomes its gravity because of the supercritical flow speed (Froude number 1.7–4.5; surpassing 4.5 results in a direct standing wave), so that the flow is neither significantly slowed down by the obstacle nor pushed to the side. Many standing river waves are popular river surfing breaks.
Opposing waves:
As an example of the second type, a standing wave in a transmission line is a wave in which the distribution of current, voltage, or field strength is formed by the superposition of two waves of the same frequency propagating in opposite directions. The effect is a series of nodes (zero displacement) and anti-nodes (maximum displacement) at fixed points along the transmission line. Such a standing wave may be formed when a wave is transmitted into one end of a transmission line and is reflected from the other end by an impedance mismatch, i.e., discontinuity, such as an open circuit or a short. The failure of the line to transfer power at the standing wave frequency will usually result in attenuation distortion.
Opposing waves:
In practice, losses in the transmission line and other components mean that a perfect reflection and a pure standing wave are never achieved. The result is a partial standing wave, which is a superposition of a standing wave and a traveling wave. The degree to which the wave resembles either a pure standing wave or a pure traveling wave is measured by the standing wave ratio (SWR). Another example is standing waves in the open ocean formed by waves with the same wave period moving in opposite directions. These may form near storm centres, or from reflection of a swell at the shore, and are the source of microbaroms and microseisms.
Mathematical description:
This section considers representative one- and two-dimensional cases of standing waves. First, an example of an infinite length string shows how identical waves traveling in opposite directions interfere to produce standing waves. Next, two finite length string examples with different boundary conditions demonstrate how the boundary conditions restrict the frequencies that can form standing waves. Next, the example of sound waves in a pipe demonstrates how the same principles can be applied to longitudinal waves with analogous boundary conditions.
Mathematical description:
Standing waves can also occur in two- or three-dimensional resonators. With standing waves on two-dimensional membranes such as drumheads, illustrated in the animations above, the nodes become nodal lines, lines on the surface at which there is no movement, that separate regions vibrating with opposite phase. These nodal line patterns are called Chladni figures. In three-dimensional resonators, such as musical instrument sound boxes and microwave cavity resonators, there are nodal surfaces. This section includes a two-dimensional standing wave example with a rectangular boundary to illustrate how to extend the concept to higher dimensions.
Mathematical description:
Standing wave on an infinite length string: To begin, consider a string of infinite length along the x-axis that is free to be stretched transversely in the y direction.
For a harmonic wave traveling to the right along the string, the string's displacement in the y direction as a function of position x and time t is y_R(x,t) = y_max sin(2πx/λ − ωt).
Mathematical description:
The displacement in the y-direction for an identical harmonic wave traveling to the left is y_L(x,t) = y_max sin(2πx/λ + ωt), where y_max is the amplitude of the displacement of the string for each wave, ω is the angular frequency or equivalently 2π times the frequency f, and λ is the wavelength of the wave. For identical right- and left-traveling waves on the same string, the total displacement of the string is the sum of y_R and y_L: y(x,t) = y_max sin(2πx/λ − ωt) + y_max sin(2πx/λ + ωt).
Mathematical description:
Using the trigonometric sum-to-product identity sin a + sin b = 2 sin((a+b)/2) cos((a−b)/2), this sum becomes y(x,t) = 2y_max sin(2πx/λ) cos(ωt).    (1) Note that Equation (1) does not describe a traveling wave. At any position x, y(x,t) simply oscillates in time with an amplitude that varies in the x-direction as 2y_max sin(2πx/λ). The animation at the beginning of this article depicts what is happening. As the left-traveling blue wave and right-traveling green wave interfere, they form the standing red wave that does not travel and instead oscillates in place.
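A few lines of NumPy confirm the identity numerically; the amplitude, wavelength, frequency and instant below are arbitrary illustrative values:

```python
import numpy as np

y_max, lam, omega = 1.0, 2.0, 2 * np.pi   # arbitrary amplitude, wavelength, angular frequency
x = np.linspace(0.0, 4 * lam, 500)
t = 0.37                                  # any instant in time

y_right = y_max * np.sin(2 * np.pi * x / lam - omega * t)   # right-traveling wave
y_left = y_max * np.sin(2 * np.pi * x / lam + omega * t)    # left-traveling wave
standing = 2 * y_max * np.sin(2 * np.pi * x / lam) * np.cos(omega * t)

# The superposition of the two traveling waves equals the standing-wave form of Equation (1).
assert np.allclose(y_right + y_left, standing)
```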
Mathematical description:
Because the string is of infinite length, it has no boundary condition for its displacement at any point along the x-axis. As a result, a standing wave can form at any frequency.
Mathematical description:
At locations on the x-axis that are even multiples of a quarter wavelength, x = …, −3λ/2, −λ, −λ/2, 0, λ/2, λ, 3λ/2, …, the amplitude is always zero. These locations are called nodes. At locations on the x-axis that are odd multiples of a quarter wavelength, x = …, −5λ/4, −3λ/4, −λ/4, λ/4, 3λ/4, 5λ/4, …, the amplitude is maximal, with a value of twice the amplitude of the right- and left-traveling waves that interfere to produce this standing wave pattern. These locations are called anti-nodes. The distance between two consecutive nodes or anti-nodes is half the wavelength, λ/2.
Mathematical description:
Standing wave on a string with two fixed ends: Next, consider a string with fixed ends at x = 0 and x = L. The string will have some damping as it is stretched by traveling waves, but assume the damping is very small. Suppose that at the x = 0 fixed end a sinusoidal force is applied that drives the string up and down in the y-direction with a small amplitude at some frequency f. In this situation, the driving force produces a right-traveling wave. That wave reflects off the right fixed end and travels back to the left, reflects again off the left fixed end and travels back to the right, and so on. Eventually, a steady state is reached where the string has identical right- and left-traveling waves as in the infinite-length case and the power dissipated by damping in the string equals the power supplied by the driving force so the waves have constant amplitude.
Mathematical description:
Equation (1) still describes the standing wave pattern that can form on this string, but now Equation (1) is subject to boundary conditions where y = 0 at x = 0 and x = L, because the string is fixed at x = L and because we assume the driving force at the fixed x = 0 end has small amplitude. Checking the values of y at the two ends: y(0,t) = 0 and y(L,t) = 2y_max sin(2πL/λ) cos(ωt) = 0.
Mathematical description:
This boundary condition is in the form of the Sturm–Liouville formulation. The latter boundary condition is satisfied when sin(2πL/λ) = 0. L is given, so the boundary condition restricts the wavelength of the standing waves to λ = 2L/n, n = 1, 2, 3, …    (2) Waves can only form standing waves on this string if they have a wavelength that satisfies this relationship with L. If waves travel with speed v along the string, then equivalently the frequency of the standing waves is restricted to f = v/λ = nv/(2L).
Mathematical description:
The standing wave with n = 1 oscillates at the fundamental frequency and has a wavelength that is twice the length of the string. Higher integer values of n correspond to modes of oscillation called harmonics or overtones. Any standing wave on the string will have n + 1 nodes including the fixed ends and n anti-nodes.
Mathematical description:
To compare this example's nodes to the description of nodes for standing waves in the infinite length string, note that Equation (2) can be rewritten as λ = 4L/n, n = 2, 4, 6, … In this variation of the expression for the wavelength, n must be even. Cross-multiplying, we see that because L is a node, it is an even multiple of a quarter wavelength: L = nλ/4, n = 2, 4, 6, … This example demonstrates a type of resonance, and the frequencies that produce standing waves can be referred to as resonant frequencies.
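As a numerical illustration, the allowed frequencies f = nv/(2L) can be tabulated for a hypothetical string; the length and wave speed below are made-up values:

```python
def string_harmonics(length_m, wave_speed_m_s, n_max=5):
    """Resonant frequencies f_n = n * v / (2 * L) of a string fixed at both ends."""
    return [n * wave_speed_m_s / (2 * length_m) for n in range(1, n_max + 1)]

# A 0.65 m string with wave speed 143 m/s: 110, 220, 330, 440, 550 Hz.
print(string_harmonics(length_m=0.65, wave_speed_m_s=143.0))
```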
Mathematical description:
Standing wave on a string with one fixed end: Next, consider the same string of length L, but this time it is only fixed at x = 0. At x = L, the string is free to move in the y direction. For example, the string might be tied at x = L to a ring that can slide freely up and down a pole. The string again has small damping and is driven by a small driving force at x = 0.
Mathematical description:
In this case, Equation (1) still describes the standing wave pattern that can form on the string, and the string has the same boundary condition of y = 0 at x = 0. However, at x = L where the string can move freely there should be an anti-node with maximal amplitude of y. Equivalently, this boundary condition of the "free end" can be stated as ∂y/∂x = 0 at x = L, which is in the form of the Sturm–Liouville formulation. The intuition for this boundary condition ∂y/∂x = 0 at x = L is that the motion of the "free end" will follow that of the point to its left.
Mathematical description:
Reviewing Equation (1), for x = L the largest amplitude of y occurs when ∂y/∂x = 0, or cos(2πL/λ) = 0.
This leads to a different set of wavelengths than in the two-fixed-ends example. Here, the wavelength of the standing waves is restricted to λ = 4L/n, n = 1, 3, 5, … Equivalently, the frequency is restricted to f = nv/(4L).
Mathematical description:
Note that in this example n only takes odd values. Because L is an anti-node, it is an odd multiple of a quarter wavelength. Thus the fundamental mode in this example only has one quarter of a complete sine cycle–zero at x = 0 and the first peak at x = L–the first harmonic has three quarters of a complete sine cycle, and so on.
Mathematical description:
This example also demonstrates a type of resonance and the frequencies that produce standing waves are called resonant frequencies.
Mathematical description:
Standing wave in a pipe: Consider a standing wave in a pipe of length L. The air inside the pipe serves as the medium for longitudinal sound waves traveling to the right or left through the pipe. While the transverse waves on the string from the previous examples vary in their displacement perpendicular to the direction of wave motion, the waves traveling through the air in the pipe vary in terms of their pressure and longitudinal displacement along the direction of wave motion. The wave propagates by alternately compressing and expanding air in segments of the pipe, which displaces the air slightly from its rest position and transfers energy to neighboring segments through the forces exerted by the alternating high and low air pressures. Equations resembling those for the wave on a string can be written for the change in pressure Δp due to a right- or left-traveling wave in the pipe.
Mathematical description:
Δp_R(x,t) = p_max sin(2πx/λ − ωt) and Δp_L(x,t) = p_max sin(2πx/λ + ωt), where p_max is the pressure amplitude or the maximum increase or decrease in air pressure due to each wave, ω is the angular frequency or equivalently 2π times the frequency f, and λ is the wavelength of the wave. If identical right- and left-traveling waves travel through the pipe, the resulting superposition is described by the sum Δp(x,t) = 2p_max sin(2πx/λ) cos(ωt).
Mathematical description:
Note that this formula for the pressure is of the same form as Equation (1), so a stationary pressure wave forms that is fixed in space and oscillates in time.
Mathematical description:
If the end of a pipe is closed, the pressure is maximal since the closed end of the pipe exerts a force that restricts the movement of air. This corresponds to a pressure anti-node (which is a node for molecular motions, because the molecules near the closed end cannot move). If the end of the pipe is open, the pressure variations are very small, corresponding to a pressure node (which is an anti-node for molecular motions, because the molecules near the open end can move freely). The exact location of the pressure node at an open end is actually slightly beyond the open end of the pipe, so the effective length of the pipe for the purpose of determining resonant frequencies is slightly longer than its physical length. This difference in length is ignored in this example. In terms of reflections, open ends partially reflect waves back into the pipe, allowing some energy to be released into the outside air. Ideally, closed ends reflect the entire wave back in the other direction. First consider a pipe that is open at both ends, for example an open organ pipe or a recorder. Given that the pressure must be zero at both open ends, the boundary conditions are analogous to the string with two fixed ends: Δp(0,t) = 0 and Δp(L,t) = 2p_max sin(2πL/λ) cos(ωt) = 0, which only occurs when the wavelength of standing waves is λ = 2L/n, n = 1, 2, 3, …, or equivalently when the frequency is f = nv/(2L), where v is the speed of sound.
Mathematical description:
Next, consider a pipe that is open at x = 0 (and therefore has a pressure node) and closed at x = L (and therefore has a pressure anti-node). The closed "free end" boundary condition for the pressure at x = L can be stated as ∂(Δp)/∂x = 0, which is in the form of the Sturm–Liouville formulation. The intuition for this boundary condition ∂(Δp)/∂x = 0 at x = L is that the pressure of the closed end will follow that of the point to its left. Examples of this setup include a bottle and a clarinet. This pipe has boundary conditions analogous to the string with only one fixed end. Its standing waves have wavelengths restricted to λ = 4L/n, n = 1, 3, 5, …, or equivalently the frequency of standing waves is restricted to f = nv/(4L).
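The two boundary conditions can be compared side by side; a sketch assuming a hypothetical 0.5 m pipe, a speed of sound of 343 m/s, and ignoring the end correction discussed above:

```python
def open_open_frequencies(L, v=343.0, count=4):
    """f = n * v / (2 * L) for n = 1, 2, 3, ... (pipe open at both ends)."""
    return [n * v / (2 * L) for n in range(1, count + 1)]

def open_closed_frequencies(L, v=343.0, count=4):
    """f = n * v / (4 * L) for odd n = 1, 3, 5, ... (pipe open at one end, closed at the other)."""
    return [n * v / (4 * L) for n in range(1, 2 * count, 2)]

L = 0.5
print(open_open_frequencies(L))     # [343.0, 686.0, 1029.0, 1372.0]
print(open_closed_frequencies(L))   # [171.5, 514.5, 857.5, 1200.5]
```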
Mathematical description:
Note that for the case where one end is closed, n only takes odd values just like in the case of the string fixed at only one end.
Mathematical description:
So far, the wave has been written in terms of its pressure as a function of position x and time. Alternatively, the wave can be written in terms of its longitudinal displacement of air, where air in a segment of the pipe moves back and forth slightly in the x-direction as the pressure varies and waves travel in either or both directions. The change in pressure Δp and longitudinal displacement s are related as Δp = −ρv² ∂s/∂x, where ρ is the density of the air. In terms of longitudinal displacement, closed ends of pipes correspond to nodes since air movement is restricted and open ends correspond to anti-nodes since the air is free to move. A similar, easier to visualize phenomenon occurs in longitudinal waves propagating along a spring. We can also consider a pipe that is closed at both ends. In this case, both ends will be pressure anti-nodes or equivalently both ends will be displacement nodes. This example is analogous to the case where both ends are open, except the standing wave pattern has a π⁄2 phase shift along the x-direction to shift the location of the nodes and anti-nodes. For example, the longest wavelength that resonates–the fundamental mode–is again twice the length of the pipe, except that the ends of the pipe have pressure anti-nodes instead of pressure nodes. Between the ends there is one pressure node. In the case of two closed ends, the wavelength is again restricted to λ = 2L/n, n = 1, 2, 3, …, and the frequency is again restricted to f = nv/(2L).
Mathematical description:
A Rubens tube provides a way to visualize the pressure variations of the standing waves in a tube with two closed ends.
Mathematical description:
2D standing wave with a rectangular boundary: Next, consider transverse waves that can move along a two-dimensional surface within a rectangular boundary of length L_x in the x-direction and length L_y in the y-direction. Examples of this type of wave are water waves in a pool or waves on a rectangular sheet that has been pulled taut. The waves displace the surface in the z-direction, with z = 0 defined as the height of the surface when it is still.
Mathematical description:
In two dimensions and Cartesian coordinates, the wave equation is ∂²z/∂t² = c²(∂²z/∂x² + ∂²z/∂y²), where z(x,y,t) is the displacement of the surface and c is the speed of the wave. To solve this differential equation, let's first solve for its Fourier transform, with Z(x,y,ω) = ∫ z(x,y,t) e^(−iωt) dt, integrating over all times t.
Taking the Fourier transform of the wave equation, ∂²Z/∂x² + ∂²Z/∂y² = −(ω²/c²) Z(x,y,ω).
This is an eigenvalue problem where the frequencies correspond to eigenvalues that then correspond to frequency-specific modes or eigenfunctions. Specifically, this is a form of the Helmholtz equation and it can be solved using separation of variables. Assume Z=X(x)Y(y).
Dividing the Helmholtz equation by Z, (1/X) ∂²X/∂x² + (1/Y) ∂²Y/∂y² + ω²/c² = 0.
This leads to two coupled ordinary differential equations. The x term equals a constant with respect to x that we can define as (1/X(x)) ∂²X/∂x² = (ik_x)².
Solving for X(x), X(x) = A e^(i k_x x) + B e^(−i k_x x).
This x-dependence is sinusoidal–recalling Euler's formula–with the constants A and B (which may depend on k_x) determined by the boundary conditions. Likewise, the y term equals a constant with respect to y that we can define as (1/Y(y)) ∂²Y/∂y² = (ik_y)² = k_x² − ω²/c², and the dispersion relation for this wave is therefore ω = c√(k_x² + k_y²).
Solving the differential equation for the y term, Y(y) = C e^(i k_y y) + D e^(−i k_y y).
Multiplying these functions together and applying the inverse Fourier transform, z(x,y,t) is a superposition of modes where each mode is the product of sinusoidal functions for x, y, and t: z(x,y,t) ∼ e^(±i k_x x) e^(±i k_y y) e^(±iωt).
Mathematical description:
The constants that determine the exact sinusoidal functions depend on the boundary conditions and initial conditions. To see how the boundary conditions apply, consider an example like the sheet that has been pulled taut where z(x,y,t) must be zero all around the rectangular boundary. For the x dependence, z(x,y,t) must vary in a way that it can be zero at both x = 0 and x = L_x for all values of y and t. As in the one dimensional example of the string fixed at both ends, the sinusoidal function that satisfies this boundary condition is sin(k_x x), with k_x restricted to k_x = nπ/L_x, n = 1, 2, 3, … Likewise, the y dependence of z(x,y,t) must be zero at both y = 0 and y = L_y, which is satisfied by sin(k_y y) with k_y = mπ/L_y, m = 1, 2, 3, … Restricting the wave numbers to these values also restricts the frequencies that resonate to ω = cπ√((n/L_x)² + (m/L_y)²).
Mathematical description:
If the initial conditions for z(x,y,0) and its time derivative ż(x,y,0) are chosen so the t-dependence is a cosine function, then standing waves for this system take the form z(x,y,t) = z_max sin(k_x x) sin(k_y y) cos(ωt).
Mathematical description:
With n = 1, 2, 3, … and m = 1, 2, 3, …, standing waves inside this fixed rectangular boundary oscillate in time at certain resonant frequencies parameterized by the integers n and m. As they oscillate in time, they do not travel and their spatial variation is sinusoidal in both the x- and y-directions such that they satisfy the boundary conditions. The fundamental mode, n = 1 and m = 1, has a single antinode in the middle of the rectangle. Varying n and m gives complicated but predictable two-dimensional patterns of nodes and antinodes inside the rectangle. Note from the dispersion relation that in certain situations different modes–meaning different combinations of n and m–may resonate at the same frequency even though they have different shapes for their x- and y-dependence. For example, if the boundary is square, L_x = L_y, the modes n = 1 and m = 7, n = 7 and m = 1, and n = 5 and m = 5 all resonate at ω = (cπ/L_x)√50.
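A short enumeration makes the degeneracy visible for a square boundary; L_x = L_y = 1 and c = 1 are placeholder values chosen only for illustration:

```python
from collections import defaultdict
from math import pi, sqrt

c, Lx, Ly = 1.0, 1.0, 1.0

# Group modes (n, m) by their resonant frequency omega = c*pi*sqrt((n/Lx)**2 + (m/Ly)**2).
modes_by_frequency = defaultdict(list)
for n in range(1, 8):
    for m in range(1, 8):
        omega = c * pi * sqrt((n / Lx) ** 2 + (m / Ly) ** 2)
        modes_by_frequency[round(omega, 9)].append((n, m))

# Degenerate frequencies are shared by several distinct mode shapes; for example
# (1, 7), (7, 1) and (5, 5) all give omega = c*pi*sqrt(50).
for omega, modes in sorted(modes_by_frequency.items()):
    if len(modes) > 1:
        print(f"omega = {omega:.4f}: {modes}")
```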
Mathematical description:
Recalling that ω determines the eigenvalue in the Helmholtz equation above, the number of modes corresponding to each frequency relates to the frequency's multiplicity as an eigenvalue.
Standing wave ratio, phase, and energy transfer:
If the two oppositely moving traveling waves are not of the same amplitude, they will not cancel completely at the nodes, the points where the waves are 180° out of phase, so the amplitude of the standing wave will not be zero at the nodes, but merely a minimum. Standing wave ratio (SWR) is the ratio of the amplitude at the antinode (maximum) to the amplitude at the node (minimum). A pure standing wave will have an infinite SWR. It will also have a constant phase at any point in space (but it may undergo a 180° inversion every half cycle). A finite, non-zero SWR indicates a wave that is partially stationary and partially travelling. Such waves can be decomposed into a superposition of two waves: a travelling wave component and a stationary wave component. An SWR of one indicates that the wave does not have a stationary component – it is purely a travelling wave, since the ratio of amplitudes is equal to 1. A pure standing wave does not transfer energy from the source to the destination. However, the wave is still subject to losses in the medium. Such losses will manifest as a finite SWR, indicating a travelling wave component leaving the source to supply the losses. Even though the SWR is now finite, it may still be the case that no energy reaches the destination because the travelling component is purely supplying the losses. However, in a lossless medium, a finite SWR implies a definite transfer of energy to the destination.
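For forward and reflected amplitudes A and B (with B ≤ A), the antinode amplitude is A + B and the node amplitude is A − B, so the ratio works out to SWR = (A + B)/(A − B); a minimal sketch with made-up amplitudes:

```python
def standing_wave_ratio(a_forward, a_reflected):
    """SWR = (A + B) / (A - B); infinite for a pure standing wave (equal amplitudes)."""
    if a_forward == a_reflected:
        return float("inf")
    return (a_forward + a_reflected) / (a_forward - a_reflected)

print(standing_wave_ratio(1.0, 0.0))   # 1.0 -> pure traveling wave
print(standing_wave_ratio(1.0, 0.5))   # 3.0 -> partial standing wave
print(standing_wave_ratio(1.0, 1.0))   # inf -> pure standing wave
```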
Examples:
One easy example to understand standing waves is two people shaking either end of a jump rope. If they shake in sync the rope can form a regular pattern of waves oscillating up and down, with stationary points along the rope where the rope is almost still (nodes) and points where the arc of the rope is maximum (antinodes).
Examples:
Acoustic resonance: Standing waves are also observed in physical media such as strings and columns of air. Any waves traveling along the medium will reflect back when they reach the end. This effect is most noticeable in musical instruments where, at various multiples of a vibrating string or air column's natural frequency, a standing wave is created, allowing harmonics to be identified. Nodes occur at fixed ends and anti-nodes at open ends. If fixed at only one end, only odd-numbered harmonics are available. At the open end of a pipe the anti-node will not be exactly at the end as it is altered by its contact with the air and so end correction is used to place it exactly. The density of a string will affect the frequency at which harmonics will be produced; the greater the density the lower the frequency needs to be to produce a standing wave of the same harmonic.
Examples:
Visible light: Standing waves are also observed in optical media such as optical waveguides and optical cavities. Lasers use optical cavities in the form of a pair of facing mirrors, which constitute a Fabry–Pérot interferometer. The gain medium in the cavity (such as a crystal) emits light coherently, exciting standing waves of light in the cavity. The wavelength of light is very short (in the range of nanometers, 10^−9 m) so the standing waves are microscopic in size. One use for standing light waves is to measure small distances, using optical flats.
Examples:
X-rays: Interference between X-ray beams can form an X-ray standing wave (XSW) field. Because of the short wavelength of X-rays (less than 1 nanometer), this phenomenon can be exploited for measuring atomic-scale events at material surfaces. The XSW is generated in the region where an X-ray beam interferes with a diffracted beam from a nearly perfect single crystal surface or a reflection from an X-ray mirror. By tuning the crystal geometry or X-ray wavelength, the XSW can be translated in space, causing a shift in the X-ray fluorescence or photoelectron yield from the atoms near the surface. This shift can be analyzed to pinpoint the location of a particular atomic species relative to the underlying crystal structure or mirror surface. The XSW method has been used to clarify the atomic-scale details of dopants in semiconductors, atomic and molecular adsorption on surfaces, and chemical transformations involved in catalysis.
Examples:
Mechanical waves: Standing waves can be mechanically induced into a solid medium using resonance. One easy-to-understand example is two people shaking either end of a jump rope. If they shake in sync, the rope will form a regular pattern with nodes and antinodes and appear to be stationary, hence the name standing wave. Similarly, a cantilever beam can have a standing wave imposed on it by applying a base excitation. In this case the free end moves the greatest distance laterally compared to any location along the beam. Such a device can be used as a sensor to track changes in frequency or phase of the resonance of the fiber. One application is as a measurement device for dimensional metrology.
Examples:
Seismic waves: Standing surface waves on the Earth are observed as free oscillations of the Earth.
Faraday waves: The Faraday wave is a non-linear standing wave at the air-liquid interface induced by hydrodynamic instability. It can be used as a liquid-based template to assemble microscale materials.
Examples:
Seiches: A seiche is an example of a standing wave in an enclosed body of water. It is characterised by the oscillatory behaviour of the water level at either end of the body and typically has a nodal point near the middle of the body where very little change in water level is observed. It should be distinguished from a simple storm surge, where no oscillation is present. In sizeable lakes, the period of such oscillations may be between minutes and hours; for example, Lake Geneva's longitudinal period is 73 minutes and its transversal seiche has a period of around 10 minutes, while Lake Huron can be seen to have resonances with periods between 1 and 2 hours. See Lake seiches. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Naphazoline/pheniramine**
Naphazoline/pheniramine:
Naphazoline/pheniramine, sold under the brand name Naphcon-A among others, is a combination eye drop used to help the symptoms of allergic conjunctivitis such as from hay fever. It contains naphazoline and pheniramine. It is used as an eye drop. Use is not recommended for more than three days. Side effects may include allergic reactions, eye pain, and dilated pupils. It is unclear if use in pregnancy is safe. Naphazoline works by constricting blood vessels, thus decreasing redness, while pheniramine works by blocking the effects of histamine to stop itching. The combination was approved for medical use in the United States in 1994. It is available over the counter. In 2017, it was the 203rd most commonly prescribed medication in the United States, with more than two million prescriptions.
Medical use:
It is administered topically with one to two drops applied to the affected eye(s) up to four times daily.
Adverse effects:
Pupils may become enlarged temporarily. Overuse may cause more redness. Those with heart disease, high blood pressure, narrow-angle glaucoma, or urination trouble are discouraged from using the product. It is recommended to remove contact lenses before use, as use with contact lenses can lead to reduced oxygenation of the underlying cornea. If infants or children accidentally ingest the drops, it may lead to coma and a significant reduction in body temperature; if such ingestion occurs, immediately calling a poison control center is recommended. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bellman equation**
Bellman equation:
A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's "principle of optimality" prescribes. The equation applies to algebraic structures with a total ordering; for algebraic structures with a partial ordering, the generic Bellman's equation can be used. The Bellman equation was first applied to engineering control theory and to other topics in applied mathematics, and subsequently became an important tool in economic theory; though the basic concepts of dynamic programming are prefigured in John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior and Abraham Wald's sequential analysis. The term 'Bellman equation' usually refers to the dynamic programming equation associated with discrete-time optimization problems. In continuous-time optimization problems, the analogous equation is a partial differential equation that is called the Hamilton–Jacobi–Bellman equation. In discrete time any multi-stage optimization problem can be solved by analyzing the appropriate Bellman equation. The appropriate Bellman equation can be found by introducing new state variables (state augmentation). However, the resulting augmented-state multi-stage optimization problem has a higher dimensional state space than the original multi-stage optimization problem - an issue that can potentially render the augmented problem intractable due to the "curse of dimensionality". Alternatively, it has been shown that if the cost function of the multi-stage optimization problem satisfies a "backward separable" structure, then the appropriate Bellman equation can be found without state augmentation.
Analytical concepts in dynamic programming:
To understand the Bellman equation, several underlying concepts must be understood. First, any optimization problem has some objective: minimizing travel time, minimizing cost, maximizing profits, maximizing utility, etc. The mathematical function that describes this objective is called the objective function.
Analytical concepts in dynamic programming:
Dynamic programming breaks a multi-period planning problem into simpler steps at different points in time. Therefore, it requires keeping track of how the decision situation is evolving over time. The information about the current situation that is needed to make a correct decision is called the "state". For example, to decide how much to consume and spend at each point in time, people would need to know (among other things) their initial wealth. Therefore, wealth (W) would be one of their state variables, but there would probably be others.
Analytical concepts in dynamic programming:
The variables chosen at any given point in time are often called the control variables. For instance, given their current wealth, people might decide how much to consume now. Choosing the control variables now may be equivalent to choosing the next state; more generally, the next state is affected by other factors in addition to the current control. For example, in the simplest case, today's wealth (the state) and consumption (the control) might exactly determine tomorrow's wealth (the new state), though typically other factors will affect tomorrow's wealth too.
Analytical concepts in dynamic programming:
The dynamic programming approach describes the optimal plan by finding a rule that tells what the controls should be, given any possible value of the state. For example, if consumption (c) depends only on wealth (W), we would seek a rule c(W) that gives consumption as a function of wealth. Such a rule, determining the controls as a function of the states, is called a policy function (see Bellman, 1957, Ch. III.2). Finally, by definition, the optimal decision rule is the one that achieves the best possible value of the objective. For example, if someone chooses consumption, given wealth, in order to maximize happiness (assuming happiness H can be represented by a mathematical function, such as a utility function, and is determined by wealth), then each level of wealth will be associated with some highest possible level of happiness, H(W). The best possible value of the objective, written as a function of the state, is called the value function.
Analytical concepts in dynamic programming:
Bellman showed that a dynamic optimization problem in discrete time can be stated in a recursive, step-by-step form known as backward induction by writing down the relationship between the value function in one period and the value function in the next period. The relationship between these two value functions is called the "Bellman equation". In this approach, the optimal policy in the last time period is specified in advance as a function of the state variable's value at that time, and the resulting optimal value of the objective function is thus expressed in terms of that value of the state variable. Next, the next-to-last period's optimization involves maximizing the sum of that period's period-specific objective function and the optimal value of the future objective function, giving that period's optimal policy contingent upon the value of the state variable as of the next-to-last period decision. This logic continues recursively back in time, until the first period decision rule is derived, as a function of the initial state variable value, by optimizing the sum of the first-period-specific objective function and the value of the second period's value function, which gives the value for all the future periods. Thus, each period's decision is made by explicitly acknowledging that all future decisions will be optimally made.
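A minimal finite-horizon sketch of this backward recursion, using a made-up cake-eating setup (the wealth grid, square-root utility, horizon, and discount factor are illustrative assumptions, not taken from the text):

```python
import math

beta, T = 0.95, 5                                  # discount factor and horizon (illustrative)
grid = [round(i * 0.01, 2) for i in range(101)]    # wealth levels 0.00 ... 1.00

V = {w: 0.0 for w in grid}                         # terminal value: V_T = 0 for every wealth level
policies = []
for t in reversed(range(T)):                       # recurse backward from the last period
    V_new, policy = {}, {}
    for w in grid:
        best_value, best_c = -math.inf, None
        for c in grid:                             # today's consumption choice
            if c > w:
                break
            value = math.sqrt(c) + beta * V[round(w - c, 2)]   # today's payoff + value of next state
            if value > best_value:
                best_value, best_c = value, c
        V_new[w], policy[w] = best_value, best_c
    V = V_new
    policies.insert(0, policy)

print(policies[0][grid[-1]])   # optimal first-period consumption starting from the highest wealth level
```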
Derivation:
A dynamic decision problem: Let x_t be the state at time t. For a decision that begins at time 0, we take as given the initial state x_0. At any time, the set of possible actions depends on the current state; we can write this as a_t ∈ Γ(x_t), where the action a_t represents one or more control variables. We also assume that the state changes from x to a new state T(x,a) when action a is taken, and that the current payoff from taking action a in state x is F(x,a). Finally, we assume impatience, represented by a discount factor 0 < β < 1. Under these assumptions, an infinite-horizon decision problem takes the following form: V(x_0) = max_{{a_t}} Σ_{t=0}^∞ β^t F(x_t, a_t), subject to the constraints a_t ∈ Γ(x_t), x_{t+1} = T(x_t, a_t), for all t = 0, 1, 2, … Notice that we have defined the notation V(x_0) to denote the optimal value that can be obtained by maximizing this objective function subject to the assumed constraints. This function is the value function. It is a function of the initial state variable x_0, since the best value obtainable depends on the initial situation.
Derivation:
Bellman's principle of optimality: The dynamic programming method breaks this decision problem into smaller subproblems. Bellman's principle of optimality describes how to do this: Principle of Optimality: An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. (See Bellman, 1957, Chap. III.3.) In computer science, a problem that can be broken apart like this is said to have optimal substructure. In the context of dynamic game theory, this principle is analogous to the concept of subgame perfect equilibrium, although what constitutes an optimal policy in this case is conditioned on the decision-maker's opponents choosing similarly optimal policies from their points of view.
Derivation:
As suggested by the principle of optimality, we will consider the first decision separately, setting aside all future decisions (we will start afresh from time 1 with the new state x_1). Collecting the future decisions in brackets on the right, the above infinite-horizon decision problem is equivalent to: max_{a_0} { F(x_0, a_0) + β [ max_{{a_t} for t≥1} Σ_{t=1}^∞ β^(t−1) F(x_t, a_t) : a_t ∈ Γ(x_t), x_{t+1} = T(x_t, a_t), ∀t ≥ 1 ] }, subject to the constraints a_0 ∈ Γ(x_0), x_1 = T(x_0, a_0).
Here we are choosing a_0, knowing that our choice will cause the time 1 state to be x_1 = T(x_0, a_0). That new state will then affect the decision problem from time 1 on. The whole future decision problem appears inside the square brackets on the right.
Derivation:
The Bellman equation

So far it seems we have only made the problem uglier by separating today's decision from future decisions. But we can simplify by noticing that what is inside the square brackets on the right is the value of the time 1 decision problem, starting from state x1=T(x0,a0). Therefore, we can rewrite the problem as a recursive definition of the value function:
$$V(x_0) = \max_{a_0} \left\{ F(x_0, a_0) + \beta V(x_1) \right\},$$
subject to the constraints
$$a_0 \in \Gamma(x_0), \qquad x_1 = T(x_0, a_0).$$
Derivation:
This is the Bellman equation. It can be simplified even further if we drop time subscripts and plug in the value of the next state:
$$V(x) = \max_{a \in \Gamma(x)} \left\{ F(x, a) + \beta V(T(x, a)) \right\}.$$
Derivation:
The Bellman equation is classified as a functional equation, because solving it means finding the unknown function V , which is the value function. Recall that the value function describes the best possible value of the objective, as a function of the state x . By calculating the value function, we will also find the function a(x) that describes the optimal action as a function of the state; this is called the policy function.
Derivation:
In a stochastic problem In the deterministic setting, other techniques besides dynamic programming can be used to tackle the above optimal control problem. However, the Bellman Equation is often the most convenient method of solving stochastic optimal control problems.
Derivation:
For a specific example from economics, consider an infinitely-lived consumer with initial wealth endowment a0 at period 0. They have an instantaneous utility function u(c), where c denotes consumption, and discount next-period utility at a rate of 0<β<1. Assume that what is not consumed in period t carries over to the next period with interest rate r. Then the consumer's utility maximization problem is to choose a consumption plan {ct} that solves
$$\max_{\{c_t\}} \sum_{t=0}^{\infty} \beta^t u(c_t)$$
subject to
$$a_{t+1} = (1+r)(a_t - c_t), \qquad c_t \geq 0, \qquad \text{and} \qquad \lim_{t \to \infty} a_t \geq 0.$$
Derivation:
The first constraint is the capital accumulation/law of motion specified by the problem, while the second constraint is a transversality condition that the consumer does not carry debt at the end of their life. The Bellman equation is
$$V(a) = \max_{0 \leq c \leq a} \left\{ u(c) + \beta V\bigl((1+r)(a-c)\bigr) \right\}.$$
Alternatively, one can treat the sequence problem directly using, for example, the Hamiltonian equations.
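As an illustration only (the parameters, utility function, and grid below are made up, and NumPy is assumed), the Bellman equation above can be solved numerically by iterating on the value function over an asset grid:

```python
# A minimal numerical sketch of the savings Bellman equation
# V(a) = max_{0<=c<=a} { u(c) + beta * V((1+r)(a-c)) }, solved by value-function iteration.
import numpy as np

beta, r = 0.95, 0.02
u = np.log                                # instantaneous utility u(c) = ln(c)
grid = np.linspace(1e-3, 10.0, 200)       # asset grid (values off the grid are clamped by interp)

V = np.zeros_like(grid)
for _ in range(1000):
    V_new = np.empty_like(V)
    for i, a in enumerate(grid):
        c = np.linspace(1e-6, a, 100)     # candidate consumption levels 0 < c <= a
        a_next = (1 + r) * (a - c)        # law of motion a' = (1+r)(a - c)
        V_new[i] = np.max(u(c) + beta * np.interp(a_next, grid, V))
    if np.max(np.abs(V_new - V)) < 1e-8:  # stop at the (approximate) fixed point
        V = V_new
        break
    V = V_new
print("V(a=5) ≈", np.interp(5.0, grid, V))
```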
Derivation:
Now, if the interest rate varies from period to period, the consumer is faced with a stochastic optimization problem. Let the interest r follow a Markov process with probability transition function Q(r,dμr) where dμr denotes the probability measure governing the distribution of interest rate next period if current interest rate is r . In this model the consumer decides their current period consumption after the current period interest rate is announced.
Derivation:
Rather than simply choosing a single sequence {ct}, the consumer now must choose a sequence {ct} for each possible realization of {rt} in such a way that their lifetime expected utility is maximized:
$$\max_{\{c_t\}_{t=0}^{\infty}} \mathbb{E}\left( \sum_{t=0}^{\infty} \beta^t u(c_t) \right).$$
The expectation E is taken with respect to the appropriate probability measure given by Q on the sequences of r's. Because r is governed by a Markov process, dynamic programming simplifies the problem significantly. Then the Bellman equation is simply:
$$V(a, r) = \max_{0 \leq c \leq a} \left\{ u(c) + \beta \int V\bigl((1+r)(a-c), r'\bigr)\, Q(r, d\mu_r) \right\}.$$
Under some reasonable assumption, the resulting optimal policy function g(a,r) is measurable.
For a general stochastic sequential optimization problem with Markovian shocks and where the agent is faced with their decision ex-post, the Bellman equation takes a very similar form:
$$V(x, z) = \max_{c \in \Gamma(x, z)} \left\{ F(x, c, z) + \beta \int V\bigl(T(x, c), z'\bigr)\, d\mu_z(z') \right\}.$$
Solution methods:
The method of undetermined coefficients, also known as 'guess and verify', can be used to solve some infinite-horizon, autonomous Bellman equations.
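For a sense of how guess-and-verify works, here is a standard textbook illustration (added here for exposition, not taken from this article): the optimal growth problem with log utility and capital law of motion $k_{t+1} = k_t^\alpha - c_t$. Its Bellman equation is

$$V(k) = \max_{0 \le c \le k^\alpha} \left\{ \ln c + \beta V(k^\alpha - c) \right\}.$$

Guessing the functional form $V(k) = E + F \ln k$, the first-order condition $1/c = \beta F / (k^\alpha - c)$ gives $c = k^\alpha/(1+\beta F)$; substituting back and matching the coefficient on $\ln k$ gives $F = \alpha(1 + \beta F)$, so

$$F = \frac{\alpha}{1 - \alpha\beta}, \qquad c(k) = (1 - \alpha\beta)\, k^\alpha,$$

which verifies the guess and delivers the policy function in closed form.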
Solution methods:
The Bellman equation can be solved by backwards induction, either analytically in a few special cases, or numerically on a computer. Numerical backwards induction is applicable to a wide variety of problems, but may be infeasible when there are many state variables, due to the curse of dimensionality. Approximate dynamic programming has been introduced by D. P. Bertsekas and J. N. Tsitsiklis with the use of artificial neural networks (multilayer perceptrons) for approximating the Bellman function. This is an effective mitigation strategy for reducing the impact of dimensionality by replacing the memorization of the complete function mapping for the whole space domain with the memorization of the sole neural network parameters. In particular, for continuous-time systems, an approximate dynamic programming approach that combines both policy iterations with neural networks was introduced. In discrete-time, an approach to solve the HJB equation combining value iterations and neural networks was introduced.
Solution methods:
By calculating the first-order conditions associated with the Bellman equation, and then using the envelope theorem to eliminate the derivatives of the value function, it is possible to obtain a system of difference equations or differential equations called the 'Euler equations'. Standard techniques for the solution of difference or differential equations can then be used to calculate the dynamics of the state variables and the control variables of the optimization problem.
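As a sketch of this procedure in the notation of the deterministic savings example above (a worked case added for illustration), the first-order condition of the Bellman equation is

$$u'(c) = \beta (1+r)\, V'\bigl((1+r)(a-c)\bigr),$$

and the envelope theorem applied to the maximized Bellman equation gives $V'(a) = \beta (1+r)\, V'\bigl((1+r)(a-c)\bigr) = u'(c(a))$. Combining the two conditions at consecutive dates eliminates the unknown derivative of the value function and yields the consumption Euler equation

$$u'(c_t) = \beta (1+r)\, u'(c_{t+1}).$$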
Applications in economics:
The first known application of a Bellman equation in economics is due to Martin Beckmann and Richard Muth. Martin Beckmann also wrote extensively on consumption theory using the Bellman equation in 1959. His work influenced Edmund S. Phelps, among others.
Applications in economics:
A celebrated economic application of a Bellman equation is Robert C. Merton's seminal 1973 article on the intertemporal capital asset pricing model. (See also Merton's portfolio problem). The solution to Merton's theoretical model, one in which investors chose between income today and future income or capital gains, is a form of Bellman's equation. Because economic applications of dynamic programming usually result in a Bellman equation that is a difference equation, economists refer to dynamic programming as a "recursive method" and a subfield of recursive economics is now recognized within economics.
Applications in economics:
Nancy Stokey, Robert E. Lucas, and Edward Prescott describe stochastic and nonstochastic dynamic programming in considerable detail, and develop theorems for the existence of solutions to problems meeting certain conditions. They also describe many examples of modeling theoretical problems in economics using recursive methods. This book led to dynamic programming being employed to solve a wide range of theoretical problems in economics, including optimal economic growth, resource extraction, principal–agent problems, public finance, business investment, asset pricing, factor supply, and industrial organization. Lars Ljungqvist and Thomas Sargent apply dynamic programming to study a variety of theoretical questions in monetary policy, fiscal policy, taxation, economic growth, search theory, and labor economics. Avinash Dixit and Robert Pindyck showed the value of the method for thinking about capital budgeting. Anderson adapted the technique to business valuation, including privately held businesses.

Using dynamic programming to solve concrete problems is complicated by informational difficulties, such as choosing the unobservable discount rate. There are also computational issues, the main one being the curse of dimensionality arising from the vast number of possible actions and potential state variables that must be considered before an optimal strategy can be selected. For an extensive discussion of computational issues, see Miranda and Fackler, and Meyn 2007.
Example:
In Markov decision processes, a Bellman equation is a recursion for expected rewards. For example, the expected reward for being in a particular state s and following some fixed policy π has the Bellman equation:
$$V^{\pi}(s) = R(s, \pi(s)) + \gamma \sum_{s'} P(s' \mid s, \pi(s))\, V^{\pi}(s').$$
This equation describes the expected reward for taking the action prescribed by some policy π. The equation for the optimal policy is referred to as the Bellman optimality equation:
$$V^{\pi^*}(s) = \max_{a} \left\{ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^{\pi^*}(s') \right\},$$
where π∗ is the optimal policy and Vπ∗ refers to the value function of the optimal policy. The equation above describes the reward for taking the action giving the highest expected return. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
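The Bellman optimality equation can be turned directly into the value-iteration algorithm. The following is a minimal illustrative sketch (the two-state, two-action MDP and all its numbers are made up; NumPy is assumed):

```python
# Value iteration on a tiny made-up MDP: V(s) <- max_a { R(s,a) + gamma * sum_s' P(s'|s,a) V(s') }.
import numpy as np

gamma = 0.9
# P[a, s, s'] = probability of moving from state s to s' under action a
P = np.array([[[0.8, 0.2], [0.1, 0.9]],    # action 0
              [[0.5, 0.5], [0.3, 0.7]]])   # action 1
# R[s, a] = expected immediate reward for taking action a in state s
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * np.einsum("asn,n->sa", P, V)   # Q(s,a) = R(s,a) + γ Σ_s' P(s'|s,a) V(s')
    V_new = Q.max(axis=1)                          # Bellman optimality update
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
print("V* ≈", V, "   greedy policy:", Q.argmax(axis=1))
```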
**Lithium carbide**
Lithium carbide:
Lithium carbide, Li2C2, often known as dilithium acetylide, is a chemical compound of lithium and carbon, an acetylide. It is an intermediate compound produced during radiocarbon dating procedures. Li2C2 is one of an extensive range of lithium-carbon compounds, which include the lithium-rich Li4C, Li6C2, Li8C3, Li6C3, Li4C3, Li4C5, and the graphite intercalation compounds LiC6, LiC12, and LiC18. Li2C2 is the most thermodynamically stable lithium-rich carbide and the only one that can be obtained directly from the elements. It was first produced by Moissan in 1896, who reacted coal with lithium carbonate:
Li2CO3 + 4 C → Li2C2 + 3 CO
The other lithium-rich compounds are produced by reacting lithium vapor with chlorinated hydrocarbons, e.g. CCl4. Lithium carbide is sometimes confused with the drug lithium carbonate, Li2CO3, because of the similarity of its name.
Preparation and chemistry:
In the laboratory, samples may be prepared by treating acetylene with a solution of lithium in ammonia at −40 °C. This forms the addition compound Li2C2·C2H2·2NH3, which decomposes in a stream of hydrogen at room temperature to give a white powder of Li2C2.
2 Li + C2H2 → Li2C2 + H2
Samples prepared in this manner generally are poorly crystalline. Crystalline samples may be prepared by a reaction between molten lithium and graphite at over 1000 °C. Li2C2 can also be prepared by reacting CO2 with molten lithium.
10 Li + 2 CO2 → Li2C2 + 4 Li2O
Another method for producing Li2C2 is heating metallic lithium in an atmosphere of ethylene:
6 Li + C2H4 → Li2C2 + 4 LiH
Lithium carbide hydrolyzes readily to form acetylene:
Li2C2 + 2 H2O → 2 LiOH + C2H2
Lithium hydride reacts with graphite at 400 °C, forming lithium carbide.
2 LiH + 4 C → Li2C2 + C2H2
Li2C2 can also be formed when the organometallic compound n-butyllithium reacts with ethyne in THF or Et2O as a solvent; the reaction is rapid and highly exothermic.
2 BuLi + C2H2 → Li2C2 + 2 BuH (in Et2O or THF)
Lithium carbide reacts rapidly with acetylene in liquid ammonia to give a clear solution of lithium acetylide.
LiC≡CLi + HC≡CH → 2 LiC≡CH Preparation of the reagent in this way sometimes improves the yield in an ethynylation over that obtained with reagent prepared from lithium and acetylene.
Structure:
Li2C2 is a Zintl phase compound and exists as a salt, 2Li+C22−. Its reactivity, combined with the difficulty in growing suitable single crystals, has made the determination of its crystal structure difficult. It adopts a distorted anti-fluorite crystal structure, similar to that of rubidium peroxide (Rb2O2) and caesium peroxide (Cs2O2). Each Li atom is surrounded by six carbon atoms from 4 different acetylides, with two acetylides co-ordinating side-on and the other two end-on. The observed C-C distance of 120 pm indicates the presence of a C≡C triple bond.
Structure:
At high temperatures Li2C2 transforms reversibly to a cubic anti-fluorite structure.
Use in radiocarbon dating:
There are a number of procedures employed, some that burn the sample producing CO2 that is then reacted with lithium, and others where the carbon containing sample is reacted directly with lithium metal. The outcome is the same: Li2C2 is produced, which can then be used to create species easy to use in mass spectroscopy, like acetylene and benzene. Note that lithium nitride may be formed and this produces ammonia when hydrolyzed, which contaminates the acetylene gas. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Money pump**
Money pump:
In economic theory, the money pump argument is a thought experiment showing that rational behavior requires transitive preferences. Classical economic theory assumes that preferences are transitive: if someone thinks A is better than B and B is better than C, then they must think A is better than C. In other words, there cannot be a "cycle" of preferences.
Money pump:
The money pump argument notes that if someone held a set of intransitive preferences, they could be exploited (pumped) for money until being forced to leave the market. Imagine Jane has twenty dollars to buy fruit. She can fill her basket with either oranges or apples. Jane would prefer to have a dollar rather than an apple, an apple rather than an orange, and an orange rather than a dollar. Because Jane would rather have an orange than a dollar, she is willing to buy an orange for just over a dollar (perhaps $1.10). Then, she trades her orange for an apple, because she would rather have an apple rather than an orange. Finally, she sells her apple for a dollar, because she would rather have a dollar than an apple. At this point, Jane is left with $19.90, and has lost 10¢ and gained nothing in return. This process can be repeated until Jane is left with no money. (Note that, if Jane truly holds these preferences, she would see nothing wrong with this process, and would not try to stop this process; at every step, Jane agrees she has been left better off.) After running out of money, Jane leaves the market, and her preferences and actions cease to be economically relevant.
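The arithmetic of the cycle can be written out as a tiny simulation (an illustrative sketch only; the $1.10 orange price mirrors the example above):

```python
# Each cycle: buy an orange for $1.10 (orange ≻ dollar), swap it for an apple
# (apple ≻ orange), then sell the apple for $1.00 (dollar ≻ apple).
cash_cents, cycles = 2000, 0          # Jane starts with $20.00
while cash_cents >= 110:              # she can still afford the next orange
    cash_cents -= 110                 # pay $1.10 for an orange
    cash_cents += 100                 # trade orange for apple, sell the apple for $1.00
    cycles += 1
print(f"After {cycles} cycles Jane has ${cash_cents / 100:.2f} left")
```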
Money pump:
Experiments in behavioral economics show that subjects can violate the requirement for transitive preferences when comparing bets. However, most subjects do not make these choices in within-subject comparisons where the contradiction would be obviously visible (in other words, the subjects do not hold genuinely intransitive preferences, but instead make mistakes when making choices using heuristics). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Open-source firmware**
Open-source firmware:
Open-source firmware is firmware that is published under an open-source license. It can be contrasted with proprietary firmware, which is published under a proprietary license or EULA.
Examples:
OpenWrt
coreboot
SeaBIOS
LinuxBoot
Libreboot
Marlin (firmware), Arduino-based firmware for 3D printers
PinePhone LTE modem
Rockbox, a replacement firmware for various digital audio players | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Prandtl–Glauert singularity**
Prandtl–Glauert singularity:
The Prandtl–Glauert singularity is a theoretical construct in flow physics, often incorrectly used to explain vapor cones in transonic flows.
It is the prediction by the Prandtl–Glauert transformation that infinite pressures would be experienced by an aircraft as it approaches the speed of sound. Because it is invalid to apply the transformation at these speeds, the predicted singularity does not emerge. The incorrect association is related to the early-20th-century misconception of the impenetrability of the sound barrier.
Reasons of invalidity around Mach 1:
The Prandtl–Glauert transformation assumes linearity (i.e. a small change will have a small effect that is proportional to its size). This assumption becomes inaccurate toward Mach 1 and is entirely invalid in places where the flow reaches supersonic speeds, since sonic shock waves are instantaneous (and thus manifestly non-linear) changes in the flow. Indeed, one assumption in the Prandtl–Glauert transformation is approximately constant Mach number throughout the flow, and the increasing slope in the transformation indicates that very small changes will have a very strong effect at higher Mach numbers, thus violating the assumption, which breaks down entirely at the speed of sound.
Reasons of invalidity around Mach 1:
This means that the singularity featured by the transformation near the sonic speed (M=1) is not within the area of validity. The aerodynamic forces are calculated to approach infinity at the so-called Prandtl–Glauert singularity; in reality, the aerodynamic and thermodynamic perturbations do get amplified strongly near the sonic speed, but they remain finite and a singularity does not occur. The Prandtl–Glauert transformation is a linearized approximation of compressible, inviscid potential flow. As the flow approaches sonic speed, the nonlinear phenomena dominate within the flow, which this transformation completely ignores for the sake of simplicity.
Prandtl–Glauert transformation:
The Prandtl–Glauert transformation is found by linearizing the potential equations associated with compressible, inviscid flow. For two-dimensional flow, the linearized pressures in such a flow are equal to those found from incompressible flow theory multiplied by a correction factor. This correction factor is given below:
$$c_p = \frac{c_{p0}}{\sqrt{1 - M_\infty^2}}$$
where cp is the compressible pressure coefficient, cp0 is the incompressible pressure coefficient, and M∞ is the freestream Mach number. This formula is known as "Prandtl's rule", and works well up to low-transonic Mach numbers (M < ~0.7). However, note the limit:
$$\lim_{M_\infty \to 1} c_p = \infty.$$
This obviously nonphysical result (of an infinite pressure) is known as the Prandtl–Glauert singularity.
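A short numerical sketch (illustrative only, with an assumed incompressible coefficient of −0.3) shows how the predicted correction blows up as the freestream Mach number approaches 1:

```python
# Prandtl's rule: c_p = c_p0 / sqrt(1 - M^2). The denominator vanishes as M -> 1,
# which is the (nonphysical) Prandtl–Glauert singularity.
import math

cp0 = -0.3                                  # assumed incompressible pressure coefficient
for M in (0.3, 0.5, 0.7, 0.9, 0.99, 0.999):
    cp = cp0 / math.sqrt(1.0 - M ** 2)
    print(f"M = {M:5.3f}  ->  c_p ≈ {cp:8.3f}")
```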
Reason for condensation clouds:
The reason that observable clouds sometimes form around high speed aircraft is that humid air is entering low-pressure regions, which also reduces local density and temperature sufficiently to cause water to supersaturate around the aircraft and to condense in the air, thus creating clouds. The clouds vanish as soon as the pressure increases again to ambient levels.
In the case of objects at transonic speeds, the local pressure increase happens at the shock wave location.
Condensation in free flow does not require supersonic flow. Given sufficiently high humidity, condensation clouds can be produced in purely subsonic flow over wings, in the cores of wingtip vortices, and even within or around vortices themselves. This can often be observed during humid days on aircraft approaching or departing airports. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**PDCA**
PDCA:
PDCA or plan–do–check–act (sometimes called plan–do–check–adjust) is an iterative design and management method used in business for the control and continual improvement of processes and products. It is also known as the Shewhart cycle, or the control circle/cycle. Another version of this PDCA cycle is OPDCA. The added "O" stands for observation or as some versions say: "Observe the current condition." This emphasis on observation and current condition has currency with the literature on lean manufacturing and the Toyota Production System. The PDCA cycle, with Ishikawa's changes, can be traced back to S. Mizuno of the Tokyo Institute of Technology in 1959.

The PDCA cycle is also known as PDSA cycle (where S stands for study). It was an early means of representing the task areas of traditional quality management. The cycle is sometimes referred to as the Shewhart / Deming cycle since it originated with physicist Walter Shewhart at the Bell Telephone Laboratories in the 1920s. W. Edwards Deming modified the Shewhart cycle in the 1940s and subsequently applied it to management practices in Japan in the 1950s.

Dr. Deming found that the focus on Check is more about the implementation of a change, with success or failure. His focus was on predicting the results of an improvement effort, studying the actual results, and comparing them to possibly revise the theory.
Meaning:
Plan Establish objectives and processes required to deliver the desired results.
Do Carry out the objectives from the previous step.
Meaning:
Check During the check phase, the data and results gathered from the do phase are evaluated. Data is compared to the expected outcomes to see any similarities and differences. The testing process is also evaluated to see if there were any changes from the original test created during the planning phase. If the data is placed in a chart it can make it easier to see any trends if the plan–do–check–act cycle is conducted multiple times. This helps to see what changes work better than others and if said changes can be improved as well.
Meaning:
Example: Gap analysis or appraisals Act Also called "adjust", this act phase is where a process is improved. Records from the "do" and "check" phases help identify issues with the process. These issues may include problems, non-conformities, opportunities for improvement, inefficiencies, and other issues that result in outcomes that are evidently less-than-optimal. Root causes of such issues are investigated, found, and eliminated by modifying the process. Risk is re-evaluated. At the end of the actions in this phase, the process has better instructions, standards, or goals. Planning for the next cycle can proceed with a better baseline. Work in the next do phase should not create a recurrence of the identified issues; if it does, then the action was not effective.
About:
Plan–do–check–act is associated with W. Edwards Deming, who is considered by many to be the father of modern quality control; however, he used PDSA (Plan-Do-Study-Act) and referred to it as the "Shewhart cycle". Later in Deming's career, he modified PDCA to "Plan, Do, Study, Act" (PDSA) because he felt that "check" emphasized inspection over analysis. The PDSA cycle was used to create the model of know-how transfer process, and other models.

The concept of PDCA is based on the scientific method, as developed from the work of Francis Bacon (Novum Organum, 1620). The scientific method can be written as "hypothesis–experiment–evaluation" or as "plan–do–check". Walter A. Shewhart described manufacture under "control"—under statistical control—as a three-step process of specification, production, and inspection.: 45 He also specifically related this to the scientific method of hypothesis, experiment, and evaluation. Shewhart says that the statistician "must help to change the demand [for goods] by showing [...] how to close up the tolerance range and to improve the quality of goods.": 48 Clearly, Shewhart intended the analyst to take action based on the conclusions of the evaluation. According to Deming, during his lectures in Japan in the early 1950s, the Japanese participants shortened the steps to the now traditional plan, do, check, act. Deming preferred plan, do, study, act because "study" has connotations in English closer to Shewhart's intent than "check".
About:
A fundamental principle of the scientific method and plan–do–check–act is iteration—once a hypothesis is confirmed (or negated), executing the cycle again will extend the knowledge further. Repeating the PDCA cycle can bring its users closer to the goal, usually a perfect operation and output.

Plan–do–check–act (and other forms of scientific problem solving) is also known as a system for developing critical thinking. At Toyota this is also known as "Building people before building cars". Toyota and other lean manufacturing companies propose that an engaged, problem-solving workforce using PDCA in a culture of critical thinking is better able to innovate and stay ahead of the competition through rigorous problem solving and the subsequent innovations.

Deming continually emphasized iterating towards an improved system, hence PDCA should be repeatedly implemented in spirals of increasing knowledge of the system that converge on the ultimate goal, each cycle closer than the previous. One can envision an open coil spring, with each loop being one cycle of the scientific method, and each complete cycle indicating an increase in our knowledge of the system under study. This approach is based on the belief that our knowledge and skills are limited, but improving. Especially at the start of a project, key information may not be known; the PDCA—scientific method—provides feedback to justify guesses (hypotheses) and increase knowledge. Rather than enter "analysis paralysis" to get it perfect the first time, it is better to be approximately right than exactly wrong. With improved knowledge, one may choose to refine or alter the goal (ideal state). The aim of the PDCA cycle is to bring its users closer to whatever goal they choose.: 160 When PDCA is used for complex projects or products with a certain controversy, checking with external stakeholders should happen before the Do stage, since changes to projects and products that are already in detailed design can be costly; this is also seen as Plan-Check-Do-Act.

The rate of change, that is, the rate of improvement, is a key competitive factor in today's world. PDCA allows for major "jumps" in performance ("breakthroughs" often desired in a Western approach), as well as kaizen (frequent small improvements). In the United States a PDCA approach is usually associated with a sizable project involving numerous people's time, and thus managers want to see large "breakthrough" improvements to justify the effort expended. However, the scientific method and PDCA apply to all sorts of projects and improvement activities.: 76 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mixed reality game**
Mixed reality game:
A mixed reality game (or hybrid reality game) is a game which takes place in both reality and virtual reality simultaneously. According to Souza de Silva and Sutko, the defining characteristic of such games is their "lack of primary play space; these games are played simultaneously in physical, digital or represented spaces (such as a game board)". There is equivalence in definitions pertaining to their existence in mixed reality. Given the definition for mixed reality by Paul Milgram and Fumio Kishino for the virtuality continuum, virtual reality games are not mixed reality games, because they take place only in virtual reality. Souza de Silva and Sutko state that pervasive games are a subset of hybrid reality games. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ring latency**
Ring latency:
In a ring network, such as Token Ring, ring latency is the time required for a signal to propagate once around the ring. Ring latency may be measured in seconds or in bits at the data transmission rate. Ring latency includes signal propagation delays in the ring medium, the drop cables, and the data stations connected to the ring network. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
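As an illustration of the two ways of expressing ring latency, here is a small sketch with made-up numbers (cable length, propagation speed, station count, per-station delay, and bit rate are all assumptions, not values from the source):

```python
# Ring latency = propagation time through the ring medium and drop cables
# plus the delays added by the attached stations; multiplying by the bit rate
# expresses the same latency in bits.
total_cable_m   = 1200          # ring plus drop cables (assumed)
prop_speed_mps  = 2.0e8         # signal speed in copper, roughly two-thirds of c (assumed)
stations        = 20            # attached data stations (assumed)
station_delay_s = 2.5e-7        # repeater delay per station, assumed 250 ns
bit_rate_bps    = 4_000_000     # 4 Mbit/s Token Ring

latency_s    = total_cable_m / prop_speed_mps + stations * station_delay_s
latency_bits = latency_s * bit_rate_bps
print(f"ring latency ≈ {latency_s * 1e6:.1f} µs ≈ {latency_bits:.0f} bit times")
```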
**Rhamnogalacturonan exolyase**
Rhamnogalacturonan exolyase:
Rhamnogalacturonan exolyase (EC 4.2.2.24, YesX) is an enzyme with systematic name α-L-rhamnopyranosyl-(1→4)-α-D-galactopyranosyluronate exolyase. This enzyme catalyses the following chemical reaction Exotype eliminative cleavage of α-L-rhamnopyranosyl-(1→4)-α-D-galactopyranosyluronic acid bonds of rhamnogalacturonan I oligosaccharides containing α-L-rhamnopyranose at the reducing end and 4-deoxy-4,5-unsaturated D-galactopyranosyluronic acid at the non-reducing end. The products are the disaccharide 2-O-(4-deoxy-β-L-threo-hex-4-enopyranuronosyl)-α-Lrhamnopyranose and the shortened rhamnogalacturonan oligosaccharide containing one 4-deoxy-4,5-unsaturated D-galactopyranosyluronic acid at the non-reducing end.The enzyme is part of the degradation system for rhamnogalacturonan I in Bacillus subtilis strain 168. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Obedience school**
Obedience school:
An obedience school is an institution that trains pets (particularly dogs) how to behave properly. When puppies are young and in the first stages of training, they are often taken by their owners to obedience schools. Training usually takes place in small groups. In addition to training pets themselves, obedience schools also teach pet owners how to train, praise, and scold their pets themselves. Schools can teach at various levels, ranging from the very basics for puppies to more advanced training for competition-level dogs. Most training in schools, however, focuses on making dogs listen through basic commands such as sit, stay, lie down, etc.
Costs of Obedience School:
The prices of obedience school can vary depending on location, age of the dog, and the amount of training a dog requires. For example, group or class training can cost anywhere from $40–$125 per class, while private training, which may take place in the owner's home or the trainer's place of business, may cost anywhere from $30–$100 per class. Dogs usually require 6–8 sessions. Other forms of classes are available as well, such as doggy boarding school, which can cost about $950–$2,500; this includes 2–4 weeks of board-and-train. The cost of training can vary depending on the age of the dog as well; training an adult dog will cost more than training a puppy. A trainer may also include small additional costs such as training treats or a leash and collar.
Other Forms of Dog Training:
Obedience schools are dedicated specifically to obedience training. There are other institutions, such as the major pet stores Petco and PetSmart, that offer specific classes dedicated to obedience training. Dog daycares and animal shelters also provide dog training classes for less money than obedience schools. These places may offer classes at varying skill levels. The American Kennel Club offers a couple of different training options for their members. These training options include puppy class, basic class, the Canine Good Citizen program, and other training classes for companion events.
What to expect:
Training classes are meant to teach the pet basic obedience. Beginner classes can include basic commands such as sit, stay, lie down, and roll over. They will learn not to pull on a leash and not to jump on or chew furniture. They will also gain social skills when meeting new people. Social skills will make the dog more friendly with other dogs as well as humans. This will allow the dog to go more places with the owner. You should also understand when your dog is ready for obedience school. According to the American Kennel Club, "classes are divided up between puppy classes, for dogs under five months of age, and adult or advanced classes, for dogs five months and older." This means you have to make sure you understand what level your dog is at so they have the best outcome. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Parricide**
Parricide:
Parricide refers to the deliberate killing of one's own father and mother, spouse (husband or wife), children, and/or close relative. However, the term is sometimes used more generally to refer to the intentional killing of a near relative. It is an umbrella term that can be used to refer to acts of matricide and patricide.
Matricide refers to the deliberate killing of one's own mother. Patricide refers to the deliberate killing of one's own father. The term parricide is also used to refer to many familicides (i.e. family annihilations wherein at least one parent is murdered along with other family members).
Parricide:
Societies consider parricide a serious crime and parricide offenders are subject to criminal prosecution under the homicide laws which are established in places (i.e. countries, states, etc.) in which parricides occur. According to the law, in most countries, an adult who is convicted of parricide faces a long-term prison sentence, a life sentence, or even capital punishment. Youthful parricide offenders who are younger than the age of majority (e.g. younger than 18 in the United States) may be prosecuted under less stringent laws which are designed to take their special needs and development into account, but these laws are usually waived and as a result, most youthful parricide offenders are transferred into the Adult Judicial System.

Parricide offenders are typically divided into two categories, 1) youthful parricide offenders (i.e. ages 8–24) and 2) adult parricide offenders (i.e. ages 25 and older) because the motivations and situations surrounding parricide events change as a child matures.
Prevalence:
As per the Parricide Prevention Institute, approximately 2–3% of all U.S. murders were parricides each year since 2010. The more than 300 parricides occurring in just the U.S. each year means there are 6 or more parricide events, on average, each week. This estimate does not include the murders of grandparents or stepparents by a child – only the murders of their natal or legally adoptive parents.
Youthful motives:
Youthful parricide is motivated by a variety of factors. Current research conducted by the Parricide Prevention Institute indicates the top five motives causing a child (aged 8–24 years old) to commit parricide are: issues of control - 38% (e.g. put on restriction, phone taken away, etc.); issues of money - 10% (access to life insurance, wants money for a party, etc.); stop abuse of self or family - 8%; fit of anger - 8%; wants a different life - 7% (e.g. wants to live with non-custodial parent, etc.).
Youthful motives:
Child abuse It is a common misconception that youthful parricide offenders murdered their parent/s to escape egregious child abuse. This is actually not the case. In fact, this notion was challenged beginning in 1999 when Hillbrand et al. suggested that child abuse is simply only one variable among myriad variables that lead to adolescent parricide, rather than the primary reason for youthful parricide occurrences. In a study published by Weisman et al. (2002), they noted there was a remarkable absence of child abuse and emphatically stated that their research did not statistically validate the generalization that prior child abuse had prompted the majority of these crimes. In 2006 Marleau et al. noted that in their study only 25% of all study participants had been subjected to any kind of family violence; refuting the generalization that child abuse is the primary motivator for parricide by youthful offenders. They called for more research on the alleged connection between child abuse and parricidal acts. Bourget et al. (2007) noted many shortcomings in the extant literature and suggested alternative causes of parricide rather than accepting a general notion that child abuse was the primary cause of parricide by youthful offenders. In their commentary on methodological problems plaguing parricide research, Hillbrand and Cipriano (2007) noted the challenges posed by studies on parricide; acknowledging that most studies utilized very small sample sizes that should not have been generalized. This call for more research was answered by a study in 2019 when the study by Thompson and Thompson statistically invalidated the general theory that most adolescent parricides were the result of abuse of the child at the hands of the parents who had been murdered. Their research (N = 754) revealed that only 15% of youthful parricide offenders alleged abuse at the hands of the parent/s they had killed. A full 66% were not abused, did not allege abuse and were not perpetrators of abuse. Of the remaining population, 13% of the offenders had alleged abuse that was not substantiated (some of these children had lied about abuse and it could not be proven that abuse had occurred in other cases). Additionally, 6% of the youthful parricide offenders had been found to have actually abused their parent/s prior to the murder/s. Child abuse, while a factor present in some youthful parricide occurrences, is not the primary motivator for these murders. As noted above, issues of control are the most typical motive behind the murder.
Notable modern-day cases:
Adam Lanza killed his mother before committing the Sandy Hook Elementary School shooting in 2012.
Henry Chau Hoi-leung killed and dismembered his parents in 2013.
Kip Kinkel killed his parents before committing the Thurston High School shooting in 1998.
Joel Michael Guy Jr. killed and dismembered both of his parents on the Saturday after Thanksgiving in 2016.
Charles Whitman killed his mother and his wife before climbing the bell tower at UT-Austin and randomly killing people in 1966. Upon autopsy he was found to have a tumor on his amygdala.
Dellen Millard killed his father in 2012 and inherited millions. He and his friend Mark Smich worked together as serial killers both before and after the murder; murdering Laura Babcock and Tim Bosma.
Dana Ewell hired two of his friends to murder his father, mother and sister in 1992. All three were convicted of murder.
Thomas Bartlett Whitaker killed his mother and his brother (and tried to kill his father but failed) in 2003.
Lyle and Erik Menéndez worked as a team to kill their parents in 1989.
Sarah Marie Johnson was the only female to kill both of her parents without the help of an accomplice in 2003.
Suzane von Richthofen killed her father and her mother with the help of her boyfriend and his brother in São Paulo in 2002.
Nicole Kasinskas killed her mother with the help of her boyfriend in 2003. Chandler Halderson killed and dismembered both of his parents on July 1, 2021. A 19 year old Japanese man in Tosu, Saga Prefecture, killed his parents by stabbing them with a knife in the neck on March 9, 2023.
Notable historical cases:
Lizzie Borden (1860–1927) was an American woman accused and acquitted of murdering her father and stepmother.
Lucius Hostius reportedly was the first parricide in Republican Rome, sometime after the Second Punic War.
The Criminal Code of Japan once determined that patricide brought capital punishment or life imprisonment. However, the law was abolished because of the trial of the Tochigi patricide case in which a woman killed her father in 1968 after she was sexually abused by him and bore their children.
Tullia the Younger, along with her husband, arranged the murder and overthrow of Servius Tullius, her father, securing the throne for her husband.
Mary Blandy (1720–1752) poisoned her father, Francis Blandy, with arsenic in England in 1751.
Legal definition in Roman times:
In the sixth century AD collection of earlier juristical sayings, the Digest, a precise enumeration of the victims' possible relations to the parricide is given by the 3rd century AD lawyer Modestinus: By the lex Pompeia on parricides it is laid down that if anyone kills his father, his mother, his grandfather, his grandmother, his brother, his sister, first cousin on his father's side, first cousin on his mother's side, paternal or maternal uncle, paternal (or maternal) aunt, first cousin (male or female) by mother's sister, wife, husband, father-in-law, son-in-law, mother-in-law, (daughter-in-law), stepfather, stepson, stepdaughter, patron, or patroness, or with malicious intent brings this about, shall be liable to the same penalty as that of the lex Cornelia on murderers. And a mother who kills her son or daughter suffers the penalty of the same statute, as does a grandfather who kills a grandson; and in addition, a person who buys poison to give to his father, even though he is unable to administer it. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Database abstraction layer**
Database abstraction layer:
A database abstraction layer (DBAL or DAL) is an application programming interface which unifies the communication between a computer application and databases such as SQL Server, IBM Db2, MySQL, PostgreSQL, Oracle or SQLite. Traditionally, all database vendors provide their own interface that is tailored to their products. It is up to the application programmer to implement code for the database interfaces that will be supported by the application. Database abstraction layers reduce the amount of work by providing a consistent API to the developer and hide the database specifics behind this interface as much as possible. There exist many abstraction layers with different interfaces in numerous programming languages. If an application has such a layer built in, it is called database-agnostic.
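A minimal sketch of the idea in Python (the `DatabaseAdapter` interface and the adapter class names are hypothetical; the SQLite backend uses only the standard-library sqlite3 module): application code is written once against the abstract interface, and only the concrete adapters know vendor specifics.

```python
from abc import ABC, abstractmethod
import sqlite3

class DatabaseAdapter(ABC):
    """The consistent API the application sees; each backend hides its own specifics."""
    @abstractmethod
    def execute(self, query, params=()): ...
    @abstractmethod
    def close(self): ...

class SQLiteAdapter(DatabaseAdapter):
    def __init__(self, path=":memory:"):
        self._conn = sqlite3.connect(path)

    def execute(self, query, params=()):
        cur = self._conn.execute(query, params)   # sqlite3 uses '?' placeholders
        self._conn.commit()
        return cur.fetchall()

    def close(self):
        self._conn.close()

# Application code depends only on the abstract interface, so a hypothetical
# PostgresAdapter or MySQLAdapter could be swapped in without changing it.
def add_user(db: DatabaseAdapter, name: str):
    db.execute("INSERT INTO users (name) VALUES (?)", (name,))

db = SQLiteAdapter()
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
add_user(db, "alice")
print(db.execute("SELECT * FROM users"))
db.close()
```

A fuller layer would also normalize differences such as placeholder styles and type mappings across backends, which is essentially the work the physical and conceptual levels described below perform.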
Database levels of abstraction:
Physical level (lowest level) The lowest level connects to the database and performs the actual operations required by the users. At this level the conceptual instruction has been translated into multiple instructions that the database understands. Executing the instructions in the correct order allows the DAL to perform the conceptual instruction.
Implementation of the physical layer may use database-specific APIs or the underlying language's standard database access technology and the database's version of SQL.
Implementation of data types and operations are the most database-specific at this level.
Conceptual or logical level (middle or next highest level) The conceptual level consolidates external concepts and instructions into an intermediate data structure that can be devolved into physical instructions. This layer is the most complex as it spans the external and physical levels. Additionally it needs to span all the supported databases and their quirks, APIs, and problems.
This level is aware of the differences between the databases and able to construct an execution path of operations in all cases. However the conceptual layer defers to the physical layer for the actual implementation of each individual operation.
External or view level The external level is exposed to users and developers and supplies a consistent pattern for performing database operations.
Database operations are represented only loosely as SQL or even database access at this level.
Every database should be treated equally at this level with no apparent difference despite varying physical data types and operations.
Database abstraction in the API:
Libraries unify access to databases by providing a single low-level programming interface to the application developer. Their advantages are most often speed and flexibility because they are not tied to a specific query language (subset) and only have to implement a thin layer to reach their goal. As all SQL dialects are similar to one another, application developers can use all the language features, possibly providing configurable elements for database-specific cases, such as typically user-IDs and credentials. A thin-layer allows the same queries and statements to run on a variety of database products with negligible overhead.
Database abstraction in the API:
Popular use for database abstraction layers are among object-oriented programming languages, which are similar to API-level abstraction layers. In an object-oriented language like C++ or Java, a database can be represented through an object, whose methods and members (or the equivalent thereof in other programming languages) represent various functionalities of the database. They also share advantages and disadvantages with API-level interfaces.
Language-level abstraction:
An example of a database abstraction layer on the language level would be ODBC that is a platform-independent implementation of a database abstraction layer. The user installs specific driver software, through which ODBC can communicate with a database or set of databases. The user then has the ability to have programs communicate with ODBC, which then relays the results back and forth between the user programs and the database. The downside of this abstraction level is the increased overhead to transform statements into constructs understood by the target database.
Language-level abstraction:
Alternatively, there are thin wrappers, often described as lightweight abstraction layers, such as OpenDBX and libzdb. Finally, large projects may develop their own libraries, such as, for example, libgda for GNOME.
Arguments:
In favor Development period: software developers only have to know the database abstraction layer's API instead of all APIs of the databases their application should support. The more databases that need to be supported, the bigger the time saving.
Wider potential install-base: using a database abstraction layer means that there is no requirement for new installations to utilise a specific database, i.e. new users who are unwilling or unable to switch databases can deploy on their existing infrastructure.
Future-proofing: as new database technologies emerge, software developers won't have to adapt to new interfaces.
Developer testing: a production database may be replaced with a desktop-level implementation of the data for developer-level unit tests.
Arguments:
Added Database Features: depending on the database and the DAL, it may be possible for the DAL to add features to the database. A DAL may use database programming facilities or other methods to create standard but unsupported functionality or completely new functionality. For instance, the DBvolution DAL implements the standard deviation function for several databases that do not support it.
Arguments:
Against it Speed: any abstraction layer will reduce the overall speed more or less depending on the amount of additional code that has to be executed. The more a database layer abstracts from the native database interface and tries to emulate features not present on all database backends, the slower the overall performance. This is especially true for database abstraction layers that try to unify the query language as well like ODBC.
Arguments:
Dependency: a database abstraction layer provides yet another functional dependency for a software system, i.e. a given database abstraction layer, like anything else, may eventually become obsolete, outmoded or unsupported.
Masked operations: database abstraction layers may limit the number of available database operations to a subset of those supported by the supported database backends. In particular, database abstraction layers may not fully support database backend-specific optimizations or debugging features. These problems magnify significantly with database size, scale, and complexity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sujeo**
Sujeo:
Sujeo (수저) is the Korean word for the set of eating utensils commonly used to eat Korean cuisine. The word is a portmanteau of the words sutgarak (숟가락, 'spoon') and jeotgarak (젓가락, 'chopsticks'). The sujeo set includes a pair of oval-shaped or rounded-rectangular metal (often stainless steel) chopsticks, and a long handled shallow spoon of the same material. One may use both at the same time, but this is a recent way to eat quicker. It is not considered good etiquette to hold the spoon and the chopstick together in one hand especially while eating with elders. More often food is eaten with chopsticks alone. Sometimes the spoon apart from chopsticks is referred to as sujeo.
Sujeo:
Chopsticks may be put down on a table, but never put into food standing up, particularly rice, as this is considered to bring bad luck since it resembles food offerings at a grave to deceased ancestors. The spoon may be laid down on the rice bowl, or soup bowl, if it has not been used. As food is eaten quickly, and portions are small, little time is spent in putting eating utensils down.
Sujeo:
Cases for sujeo in paper or Korean fabrics were often embroidered with symbols of longevity and given as gifts, particularly at weddings. They are now sold as souvenirs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mores**
Mores:
Mores (from Latin mōrēs [ˈmoːreːs], plural form of singular mōs, meaning "manner, custom, usage, or habit") are social norms that are widely observed within a particular society or culture. Mores determine what is considered morally acceptable or unacceptable within any given culture. A folkway is what is created through interaction, and that process is what organizes interactions through routine, repetition, habit and consistency.

William Graham Sumner (1840–1910), an early U.S. sociologist, introduced both the terms "mores" (1898) and "folkways" (1906) into modern sociology.

Mores are strict in the sense that they determine the difference between right and wrong in a given society; people may be punished for their immorality, which is commonplace in many societies in the world, at times with disapproval or ostracism. Examples of traditional customs and conventions that are mores include lying, cheating, causing harm, alcohol use, drug use, marriage beliefs, gossip, slander, jealousy, disgracing or disrespecting parents, refusal to attend a funeral, politically incorrect humor, sports cheating, vandalism, leaving trash, plagiarism, bribery, corruption, saving face, respecting your elders, religious prescriptions and fiduciary responsibility.

Folkways are ways of thinking, acting and behaving in social groups which are agreed upon by the masses and are useful for the ordering of society. Folkways are spread through imitation, oral means or observation, and are meant to encompass the material, spiritual and verbal aspects of culture. Folkways meet the problems of social life; we feel security and order from their acceptance and application. Examples of folkways include: acceptable dress, manners, social etiquette, body language, posture, level of privacy, working hours and the five-day work week, acceptability of social drinking (abstaining or not from drinking during certain working hours), actions and behaviours in public places, school, university, business and religious institutions, ceremonial situations, ritual, customary services and keeping personal space.
Terminology:
The English word morality comes from the same Latin root "mōrēs", as does the English noun moral. However, mores do not, as is commonly supposed, necessarily carry connotations of morality. Rather, morality can be seen as a subset of mores, held to be of central importance in view of their content, and often formalized into some kind of moral code or even into customary law. Etymological derivations include More danico, More judaico, More veneto, Coitus more ferarum, and O tempora, o mores!.
Terminology:
The Greek terms equivalent to Latin mores are ethos (ἔθος, ἦθος, 'character') or nomos (νόμος, 'law'). As with the relation of mores to morality, ethos is the basis of the term ethics, while nomos gives the suffix -onomy, as in astronomy.
Anthropology:
The meaning of all these terms extend to all customs of proper behavior in a given society, both religious and profane, from more trivial conventional aspects of custom, etiquette or politeness—"folkways" enforced by gentle social pressure, but going beyond mere "folkways" or conventions in including moral codes and notions of justice—down to strict taboos, behavior that is unthinkable within the society in question, very commonly including incest and murder, but also the commitment of outrages specific to the individual society such as blasphemy. Such religious or sacral customs may vary. Some examples include funerary services, matrimonial services; circumcision and covering of the hair in Judaism, Christian ten commandments, New Commandment and the sacraments or for example baptism, and Protestant work ethic, Shahada, prayer, alms, the fast and the pilgrimage as well as modesty in Islam, and religious diet.
Anthropology:
While cultural universals are by definition part of the mores of every society (hence also called "empty universals"), the customary norms specific to a given society are a defining aspect of the cultural identity of an ethnicity or a nation. Coping with the differences between two sets of cultural conventions is a question of intercultural competence.
Differences in the mores of various nations are at the root of ethnic stereotype, or in the case of reflection upon one's own mores, autostereotypes.
Anthropology:
The customary norms in a given society may include indigenous land rights, honour, filial piety, customary law, and the customary international law that affects countries which may not have codified their customary norms. Land rights of indigenous peoples fall under customary land tenure, a system of arrangement in line with customs and norms; this is often the case in former colonies. One example of such a norm is the culture of honor that exists in some societies, where the family is viewed as the main source of honor and the conduct of family members reflects upon their family honor. For instance, some writers say that in Rome an honorable standing, being the equal of someone, existed among those who were most similar to one another (family and friends); this could be due to competition for public recognition, and therefore for personal and public honor, over rhetoric, sport, war, wealth and virtue, and to the wish to protrude, stand out, be recognized and demonstrate this: "A Roman could win such a "competition" by pointing to past evidences of their honor" and "Or, a critic might be refuted by one's performance in a fresh showdown in which one's bona fides could be plainly demonstrated." An honor culture can only exist if the society's males share a code, a standard to uphold, guidelines and rules to follow, a wish not to break those rules, and an understanding of how to interact and engage successfully; this exists within a "closed" community of equals.

Filial piety is ethics towards one's family; as Fung Yu-lan states, it is "the ideological basis for traditional [Chinese] society". According to Confucius it means repaying a debt owed to one's parents or caregivers, but it is also traditional in another sense, fulfilling an obligation to one's own ancestors; to modern scholars it also suggests extending an attitude of respect to superiors, who are deserving of that respect. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Type (model theory)**
Type (model theory):
In model theory and related areas of mathematics, a type is an object that describes how a (real or possible) element or finite collection of elements in a mathematical structure might behave. More precisely, it is a set of first-order formulas in a language L with free variables x1, x2,…, xn that are true of a set of n-tuples of an L-structure M . Depending on the context, types can be complete or partial and they may use a fixed set of constants, A, from the structure M . The question of which types represent actual elements of M leads to the ideas of saturated models and omitting types.
Formal definition:
Consider a structure M for a language L. Let M be the universe of the structure. For every A ⊆ M, let L(A) be the language obtained from L by adding a constant ca for every a ∈ A. In other words, L(A)=L∪{ca:a∈A}.
A 1-type (of M ) over A is a set p(x) of formulas in L(A) with at most one free variable x (therefore 1-type) such that for every finite subset p0(x) ⊆ p(x) there is some b ∈ M, depending on p0(x), with M⊨p0(b) (i.e. all formulas in p0(x) are true in M when x is replaced by b).
Formal definition:
Similarly an n-type (of M ) over A is defined to be a set p(x1,…,xn) = p(x) of formulas in L(A), each having its free variables occurring only among the given n free variables x1,…,xn, such that for every finite subset p0(x) ⊆ p(x) there are some elements b1,…,bn ∈ M with M⊨p0(b1,…,bn).

A complete type of M over A is one that is maximal with respect to inclusion. Equivalently, for every ϕ(x)∈L(A,x) either ϕ(x)∈p(x) or ¬ϕ(x)∈p(x). Any non-complete type is called a partial type. So, the word type in general refers to any n-type, partial or complete, over any chosen set of parameters (possibly the empty set).
Formal definition:
An n-type p(x) is said to be realized in M if there is an element b ∈ Mn such that M⊨p(b) . The existence of such a realization is guaranteed for any type by the compactness theorem, although the realization might take place in some elementary extension of M , rather than in M itself. If a complete type is realized by b in M , then the type is typically denoted tpnM(b/A) and referred to as the complete type of b over A.
Formal definition:
A type p(x) is said to be isolated by φ , for φ∈p(x) , if for all ψ(x)∈p(x), we have Th (M)⊨φ(x)→ψ(x) . Since finite subsets of a type are always realized in M , there is always an element b ∈ Mn such that φ(b) is true in M ; i.e. M⊨φ(b) , thus b realizes the entire isolated type. So isolated types will be realized in every elementary substructure or extension. Because of this, isolated types can never be omitted (see below).
Formal definition:
A model that realizes the maximum possible variety of types is called a saturated model, and the ultrapower construction provides one way of producing saturated models.
Examples of types:
Consider the language L with one binary relation symbol, which we denote as ∈. Let M be the structure ⟨ω,∈ω⟩ for this language, which is the ordinal ω with its standard well-ordering. Let T denote the first-order theory of M. Consider the set of L(ω)-formulas p(x) := { n ∈ x ∣ n ∈ ω }. First, we claim this is a type. Let p0(x) ⊆ p(x) be a finite subset of p(x). We need to find a b∈ω that satisfies all the formulas in p0. Well, we can just take the successor of the largest ordinal mentioned in the set of formulas p0(x). Then this will clearly contain all the ordinals mentioned in p0(x). Thus we have that p(x) is a type.

Next, note that p(x) is not realized in M. For, if it were, there would be some n∈ω that contains every element of ω. If we wanted to realize the type, we might be tempted to consider the structure ⟨ω+1,∈ω+1⟩, which is indeed an extension of M that realizes the type. Unfortunately, this extension is not elementary; for example, it does not satisfy T. In particular, the sentence ∃x∀y(y∈x∨y=x) is satisfied by this structure and not by M.

So, we wish to realize the type in an elementary extension. We can do this by defining a new L-structure, which we will denote M′. The domain of the structure will be ω∪Z′, where Z′ is the set of integers adorned in such a way that Z′∩ω=∅. Let < denote the usual order of Z′. We interpret the symbol ∈ in our new structure by ∈M′=∈ω∪<∪(ω×Z′). The idea is that we are adding a "Z-chain", or copy of the integers, above all the finite ordinals. Clearly any element of Z′ realizes the type p(x). Moreover, one can verify that this extension is elementary.
Examples of types:
Another example: the complete type of the number 2 over the empty set, considered as a member of the natural numbers, would be the set of all first-order statements (in the language of Peano arithmetic), describing a variable x, that are true when x = 2. This set would include formulas such as x≠1+1+1, x≤1+1+1+1+1, and ∃y(y<x). This is an example of an isolated type, since, working over the theory of the naturals, the formula x=1+1 implies all other formulas that are true about the number 2.
Examples of types:
As a further example, the statements ∀y(y² < 2 ⟹ y < x) and ∀y((y > 0 ∧ y² > 2) ⟹ y > x) describing the square root of 2 are consistent with the axioms of ordered fields, and can be extended to a complete type. This type is not realized in the ordered field of rational numbers, but is realized in the ordered field of reals. Similarly, the infinite set of formulas (over the empty set) {x>1, x>1+1, x>1+1+1, ...} is not realized in the ordered field of real numbers, but is realized in the ordered field of hyperreals. Similarly, we can specify a type {0 < x < 1/n ∣ n ∈ ℕ} that is realized by an infinitesimal hyperreal, in violation of the Archimedean property.
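For concreteness, the two partial types just described can be written out in display form (this restatement is ours; the term 1/n abbreviates the multiplicative inverse of 1+⋯+1 with n ones):

```latex
p_{\sqrt{2}}(x) \supseteq \{\ \forall y\,(y^2 < 2 \rightarrow y < x),\ \ \forall y\,((y > 0 \wedge y^2 > 2) \rightarrow y > x)\ \},
\qquad
q(x) = \{\, 0 < x \,\} \cup \{\, x < 1/n \mid n \in \mathbb{N} \,\}.
```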
Examples of types:
The reason it is useful to restrict the parameters to a certain subset of the model is that it helps to distinguish the types that can be satisfied from those that cannot. For example, using the entire set of real numbers as parameters one could generate an uncountably infinite set of formulas like x≠1 , x≠π , ... that would explicitly rule out every possible real value for x, and therefore could never be realized within the real numbers.
Stone spaces:
It is useful to consider the set of complete n-types over A as a topological space. Consider the following equivalence relation on formulas in the free variables x1,…, xn with parameters in A: ψ≡ϕ⇔M⊨∀x1,…,xn(ψ(x1,…,xn)↔ϕ(x1,…,xn)).
One can show that ψ≡ϕ if and only if they are contained in exactly the same complete types.
Stone spaces:
The set of formulas in free variables x1,…,xn over A up to this equivalence relation is a Boolean algebra (and is canonically isomorphic to the set of A-definable subsets of Mn). The complete n-types correspond to ultrafilters of this Boolean algebra. The set of complete n-types can be made into a topological space by taking the sets of types containing a given formula as a basis of open sets. This constructs the Stone space associated to the Boolean algebra, which is a compact, Hausdorff, and totally disconnected space.
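In the notation commonly used for this construction (not spelled out in this article), the basic open sets are, for each formula φ in the variables x1,…,xn over A,

```latex
[\varphi] \;=\; \{\, p \in S_n(A) \mid \varphi \in p \,\},
```

where S_n(A) denotes the set of complete n-types over A. Since [¬φ] is the complement of [φ], each basic open set is in fact clopen, which is why the resulting Stone space is totally disconnected.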
Stone spaces:
Example. The complete theory of algebraically closed fields of characteristic 0 has quantifier elimination, which allows one to show that the possible complete 1-types (over the empty set) correspond to: Roots of a given irreducible non-constant polynomial over the rationals with leading coefficient 1. For example, the type of square roots of 2. Each of these types is an isolated point of the Stone space.
Stone spaces:
Transcendental elements, which are not roots of any non-zero polynomial. This type is a point in the Stone space that is closed but not isolated. In other words, the 1-types correspond exactly to the prime ideals of the polynomial ring Q[x] over the rationals Q: if r is an element of the model of type p, then the ideal corresponding to p is the set of polynomials with r as a root (which contains only the zero polynomial if r is transcendental). More generally, the complete n-types correspond to the prime ideals of the polynomial ring Q[x1,...,xn], in other words to the points of the prime spectrum of this ring. (The Stone space topology can in fact be viewed as the Zariski topology of a Boolean ring induced in a natural way from the Boolean algebra. While the Zariski topology is not in general Hausdorff, it is in the case of Boolean rings.) For example, if q(x,y) is an irreducible polynomial in two variables, there is a 2-type whose realizations are (informally) pairs (x,y) of elements with q(x,y)=0.
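As a concrete instance of this correspondence (our illustration): the isolated type of the square roots of 2 corresponds to the prime ideal

```latex
(x^2 - 2) \subset \mathbb{Q}[x],
```

while the non-isolated type of transcendental elements corresponds to the zero ideal (0).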
Omitting types theorem:
Given a complete n-type p one can ask if there is a model of the theory that omits p, in other words there is no n-tuple in the model that realizes p. If p is an isolated point in the Stone space, i.e. if {p} is an open set, it is easy to see that every model realizes p (at least if the theory is complete). The omitting types theorem says that conversely if p is not isolated then there is a countable model omitting p (provided that the language is countable).
Omitting types theorem:
Example: In the theory of algebraically closed fields of characteristic 0, there is a 1-type represented by elements that are transcendental over the prime field. This is a non-isolated point of the Stone space (in fact, the only non-isolated point). The field of algebraic numbers is a model omitting this type, and the algebraic closure of any transcendental extension of the rationals is a model realizing this type.
Omitting types theorem:
All the other types are "algebraic numbers" (more precisely, they are the sets of first-order statements satisfied by some given algebraic number), and all such types are realized in all algebraically closed fields of characteristic 0. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**K with diagonal stroke**
K with diagonal stroke:
K with diagonal stroke (Ꝃ, ꝃ) is a letter of the Latin alphabet, derived from K with the addition of a diagonal bar through the leg.
Usage:
This letter is used in medieval texts as an abbreviation for kalendas, calends, as well as for karta and kartam, a document or writ. The same function could also be performed by "K with stroke" (Ꝁ, ꝁ), or "K with stroke and diagonal stroke" (Ꝅ, ꝅ). In the Breton language, this letter is used, mainly from the fifteenth to the twentieth century, to abbreviate Ker, a prefix used in place names, similar to the Welsh caer.
Computer encodings:
Capital and small K with diagonal stroke is encoded in Unicode as of version 5.1, at codepoints U+A742 and U+A743. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**RNA recognition motif**
RNA recognition motif:
The RNA recognition motif, RNP-1, is a putative RNA-binding domain of about 90 amino acids that is known to bind single-stranded RNAs. It is found in many eukaryotic proteins. The largest group of single-stranded RNA-binding proteins is the eukaryotic RNA recognition motif (RRM) family, which contains an eight-amino-acid RNP-1 consensus sequence. RRM proteins have a variety of RNA binding preferences and functions, and include heterogeneous nuclear ribonucleoproteins (hnRNPs), proteins implicated in regulation of alternative splicing (SR, U2AF2, Sxl), protein components of small nuclear ribonucleoproteins (U1 and U2 snRNPs), and proteins that regulate RNA stability and translation (PABP, La, Hu). The RRM in the heterodimeric splicing factor U2 snRNP auxiliary factor appears to have two RRM-like domains with specialised features for protein recognition. The motif also appears in a few single-stranded DNA-binding proteins.
RNA recognition motif:
The typical RRM consists of four anti-parallel beta-strands and two alpha-helices arranged in a beta-alpha-beta-beta-alpha-beta fold with side chains that stack with RNA bases. A third helix is present during RNA binding in some cases. The RRM is reviewed in a number of publications.
Human proteins containing this domain:
A2BP1; ACF; BOLL; BRUNOL4; BRUNOL5; BRUNOL6; CCBL2; CGI-96; CIRBP; CNOT4; CPEB2; CPEB3; CPEB4; CPSF7; CSTF2; CSTF2T; CUGBP1; CUGBP2; D10S102; DAZ1; DAZ2; DAZ3; DAZ4; DAZAP1; DAZL; DNAJC17; DND1; EIF3S4; EIF3S9; EIF4B; EIF4H; ELAVL1; ELAVL2; ELAVL3; ELAVL4; ENOX1; ENOX2; EWSR1; FUS; FUSIP1; G3BP; G3BP1; G3BP2; GRSF1; HNRNPL; HNRPA0; HNRPA1; HNRPA2B1; HNRPA3; HNRPAB; HNRPC; HNRPCL1; HNRPD; HNRPDL; HNRPF; HNRPH1; HNRPH2; HNRPH3; HNRPL; HNRPLL; HNRPM; HNRPR; HRNBP1; HSU53209; HTATSF1; IGF2BP1; IGF2BP2; IGF2BP3; LARP7; MKI67IP; MSI1; MSI2; MSSP-2; MTHFSD; MYEF2; NCBP2; NCL; NOL8; NONO; P14; PABPC1; PABPC1L; PABPC3; PABPC4; PABPC5; PABPN1; POLDIP3; PPARGC1; PPARGC1A; PPARGC1B; PPIE; PPIL4; PPRC1; PSPC1; PTBP1; PTBP2; PUF60; RALY; RALYL; RAVER1; RAVER2; RBM10; RBM11; RBM12; RBM12B; RBM14; RBM15; RBM15B; RBM16; RBM17; RBM18; RBM19; RBM22; RBM23; RBM24; RBM25; RBM26; RBM27; RBM28; RBM3; RBM32B; RBM33; RBM34; RBM35A; RBM35B; RBM38; RBM39; RBM4; RBM41; RBM42; RBM44; RBM45; RBM46; RBM47; RBM4B; RBM5; RBM7; RBM8A; RBM9; RBMS1; RBMS2; RBMS3; RBMX; RBMX2; RBMXL2; RBMY1A1; RBMY1B; RBMY1E; RBMY1F; RBMY2FP; RBPMS; RBPMS2; RDBP; RNPC3; RNPC4; RNPS1; ROD1; SAFB; SAFB2; SART3; SETD1A; SF3B14; SF3B4; SFPQ; SFRS1; SFRS10; SFRS11; SFRS12; SFRS15; SFRS2; SFRS2B; SFRS3; SFRS4; SFRS5; SFRS6; SFRS7; SFRS9; SLIRP; SLTM; SNRP70; SNRPA; SNRPB2; SPEN; SR140; SRRP35; SSB; SYNCRIP; TAF15; TARDBP; THOC4; TIA1; TIAL1; TNRC4; TNRC6C; TRA2A; TRSPAP1; TUT1; U1SNRNPBP; U2AF1; U2AF2; UHMK1; ZCRB1; ZNF638; ZRSR1; ZRSR2; | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lexicon Recentis Latinitatis**
Lexicon Recentis Latinitatis:
The Lexicon Recentis Latinitatis is a Neo-Latin dictionary published by the Vatican-based Latinitas Foundation. The book is an attempt to update the Latin language with a definition of neologisms in Latin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Log driving**
Log driving:
Log driving is a means of moving logs (sawn tree trunks) from a forest to sawmills and pulp mills downstream using the current of a river. It was the main transportation method of the early logging industry in Europe and North America.
History:
When the first sawmills were established, they were usually small water-powered facilities located near the source of timber, which might be converted to grist mills after farming became established when the forests had been cleared. Later, bigger circular sawmills were developed in the lower reaches of a river, with the logs floated down to them by log drivers. In the broader, slower stretches of a river, the logs might be bound together into timber rafts. In the smaller, wilder stretches of a river where rafts could not get through, masses of individual logs were driven down the river like huge herds of cattle. "Log floating" had begun in Sweden (timmerflottning) by the 16th century, and in Finland (tukinuitto) by the 17th century. The total length of timber-floating routes in Finland was 40,000 km.
History:
The log drive was one step in a larger process of lumber-making in remote places. In a location with snowy winters, the yearly process typically began in autumn when a small team of men hauled tools upstream into the timbered area, chopped out a clearing, and constructed crude buildings for a logging camp. In the winter when things froze, a larger crew moved into the camp and proceeded to cut trees, cutting the trunks into 5-metre (16 ft) lengths, and hauling the logs with oxen or horses over iced trails to the riverbank. There the logs were decked onto "rollways." In spring when snow thawed and water levels rose, the logs were rolled into the river, and the drive commenced.
To ensure that logs drifted freely along the river, men called "log drivers" or "river pigs" were needed to guide the logs. The drivers typically divided into two groups. The more experienced and nimble men comprised the "jam" crew or "beat" crew. They watched the spots where logs were likely to jam, and when a jam started, tried to get to it quickly and dislodge the key logs before many logs stacked up. If they didn't, the river would keep piling on more logs, forming a partial dam which could raise the water level. Millions of board feet of lumber could back up for miles upriver, requiring weeks to break up, with some timber lost if it was shoved far enough into the shallows. When the jam crew saw a jam begin, they rushed to it and tried to break it up, using peaveys and possibly dynamite. This job required some understanding of physics, strong muscles, and extreme agility. Working on the jam crew was exceedingly dangerous, with the drivers standing on the moving logs and running from one to another. Many drivers lost their lives by falling and being crushed by the logs.
History:
Each crew was accompanied by an experienced boss often selected for his fighting skills to control the strong and reckless men of his team. The overall drive was controlled by the "walking boss" who moved from place to place to coordinate the various teams to keep logs moving past problem spots. Stalling a drive near a saloon often created a cascade of drunken personnel problems.
A larger group of less experienced men brought up the rear, pushing along the straggler logs that were stuck on the banks and in trees. They spent more time wading in icy water than balancing on moving logs. They were called the "rear crew." Other men worked with them from the bank, pushing logs away with pike poles. Others worked with horses and oxen to pull in the logs that had strayed furthest out into the flats.
Bateaux ferried log drivers using pike poles to dislodge stranded logs while maneuvering with the log drive. A wannigan was a kitchen built on a raft which followed the drivers down the river. The wannigan served four meals a day to fuel the men working in cold water. It also provided tents and blankets for the night if no better accommodations were available. A commissary wagon carrying clothing, plug tobacco and patent medicines for purchase by the log drivers was also called a wangan. The logging company wangan train, called a Mary Anne, was a caravan of wagons pulled by four- or six-horse teams where roads followed the river to transport the tents, blankets, food, stoves, and tools needed by the log drivers.
For log drives, the ideal river would have been straight and uniform, with sharp banks and a predictable flow of water. Wild rivers were not that, so men cut away the fallen trees that would snag logs, dynamited troublesome rocks, and built up the banks in places. To control the flow of water, they built "flash dams" or "driving dams" on smaller streams, so they could release water to push the logs down when they wanted.
Each timber firm had its own mark which was placed on the logs, called an "end mark". Obliterating or altering a timber mark was a crime. At the mill the logs were captured by a log boom, and the logs were sorted for ownership before being sawn.
Log drives were often in conflict with navigation, as logs would sometimes fill the entire river and make boat travel dangerous or impossible.
Floating logs down a river worked well for the most desirable pine timber, because it floated well. But hardwoods were more dense, and weren't buoyant enough to be easily driven, and some pines weren't near drivable streams. Log driving became increasingly unnecessary with the development of railroads and the use of trucks on logging roads. However, the practice survived in some remote locations where such infrastructure did not exist. Most log driving in the US and Canada ended with changes in environmental legislation in the 1970s. Some places, like the Catalan Pyrenees, still retain the practice as a popular holiday celebration once a year.
History:
In Sweden legal exemptions for log driving were eliminated in 1983. "The last float in southern Sweden was in the 1960s, with the floating era in the rest of the country ending completely with the last of the many log drives in the Klarälven river in 1991."
Popular culture:
The contemporary logrolling contest, Birling, is a demonstration of skills originally devised by log drivers.
Inclusive description of a complex assortment as "the whole Mary Anne" derives from the colorful characters of wangan caravans which periodically transformed quiet rural communities with the excitement of a passing log drive.
In Canada, "The Log Driver's Waltz" is a popular folk song which boasts about a log driver's dancing skills.
Popular culture:
The version of the Canadian one-dollar bank note issued in 1974 and withdrawn in 1989 featured a view of the Ottawa River with log driving taking place in the foreground and Parliament Hill rising in the background. This banknote was part of the fourth series of banknotes released by the Bank of Canada entitled "Scenes of Canada". The logs depicted in this bank note may have been destined for a half dozen pulp, paper and sawmills near the Chaudière Falls immediately upstream from Parliament Hill, or for other mills further downstream.
Popular culture:
An Englishman may have observed loggers loitering in Bangor, Maine when he reported in 1801: "His habits in the forest and the [river] voyage all break up the system of persevering industry and substitute one of alternate toil and indolence, hardship and debauch; and in the alteration, indolence and debauch will inevitably be indulged in the greatest possible proportion." In the first chapter of The Cider House Rules (1985), John Irving briefly describes a 1930s log drive.
Popular culture:
Harry Brandelius’ 1950s Swedish song Flottarkärlek tells the story of a young log driver.
Teuvo Pakkala’s 1899 Finnish play Tukkijoella started the so-called ‘log driver romantics’ phase, resulting in several movies and books about log drivers’ lives.
The song Breakfast in Hell by Slaid Cleaves tells the tale of the death of Sandy Grey, a driver in Ontario.
In the first four chapters of John Irving's novel Last Night in Twisted River (2009), the hard and dangerous life of log drivers in New Hampshire is described in detail.
Canadian band Great Big Sea included the song "River Driver" on their album The Hard and the Easy, about a log driver from Newfoundland.
Sources:
Holbrook, Stewart H. (1961). Yankee Loggers. International Paper Company. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Diner lingo**
Diner lingo:
Diner lingo is a kind of American verbal slang used by cooks and chefs in diners and diner-style restaurants, and by the wait staff to communicate their orders to the cooks. Usage of terms with similar meaning, propagated by oral culture within each establishment, may vary by region or even among restaurants in the same locale.
History:
The origin of the lingo is unknown, but there is evidence suggesting it may have been used by waiters as early as the 1870s and 1880s. Many of the terms used are lighthearted and tongue-in-cheek and some are a bit racy or ribald, but are helpful mnemonic devices for short-order cooks and staff. Diner lingo was most popular in diners and luncheonettes from the 1920s to the 1970s. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lindeman-Sobel approach to artistic wind performance**
Lindeman-Sobel approach to artistic wind performance:
The Lindeman-Sobel Approach to Artistic Wind Performance is an integrated musical approach that seeks to unite fingers, air stream (or bow), and rhythm relative to the notes on the page and the length of tube you are blowing through (or the length of string you are vibrating).
Lindeman-Sobel approach to artistic wind performance:
The Lindeman-Sobel Approach was pioneered by Henry Lindeman (July 28, 1902 – March 7, 1961), an American woodwind player and member of the Paul Whiteman Orchestra, and expanded by Phil Sobel (May 13, 1917 – September 14, 2008), first chair woodwind with the NBC staff orchestra, leader of the West Coast Saxophone Quartet and a student of Henry Lindeman from 1935 to 1946.
Summary:
According to the Lindeman-Sobel Approach, music is sound in motion, and that sound is created when the air stream (or bow) meets the fingers. The Lindeman-Sobel Approach seeks to create an awareness in the individual of how their sound is being played rhythmically relative to the resistance of the tube length and the notes on the page.
Motion and rhythm:
A fundamental tenet of the Lindeman-Sobel Approach is that the general tendency among musicians is to hold long notes too long and make short notes too short.
Motion and rhythm:
"When you hold a note too long, it causes you to miss your next entrance, and consequently, you may rush to make up for the lost time. Music is made up of entrances. If you are constantly in motion, you won't miss your entrances and you won't have to rush to make up for the lost time. Most musicians are constantly rushing because they are constantly late. We put weight on the wrong notes.....We play the long notes too long and the short notes too short." – Phil Sobel – March 1999.The Lindeman-Sobel Approach also contains an emphasis on being aware of note groupings. This awareness involves placing greater weight on the downbeats of groupings and bars.
Motion and rhythm:
"It's not about keeping up with the metronome (or the beat), it's about playing the correct mathematical combinations and always moving forward and going somewhere!" – Phil Sobel, March 1999.
Fingers:
Being in touch with and aware of the fingers can have a significant impact on the sound. If we lose this awareness of the fingers, we become disconnected from the instrument and the sound suffers.
Fingers:
Phil Sobel said, speaking about great saxophonists "they all have great fingers. Fingers that are intimate with the instrument. Fingers that barely move and are always in touch with the horn. What did they know that most saxophonists do not? That the speed at which you put down or pick up a finger affects the sound and the pitch of the note. The distance from an open saxophone key to a closed key is very minimal, so any extra distance, i.e. starting with the finger above the key but not yet on the key is a waste of motion. The opposite is true also, that in opening a key if you actually lift your finger off of the key and come out of contact with the instrument, you have now wasted energy and motion in two directions because now you will have to get back down on the key to play it again. It is impossible not to see how much motion most saxophonists waste because they don't pay enough attention to their fingers."
Timbre:
Timbre is viewed as a combination of sound and pitch. This is because if a note is out of tune it will not have a great sound. Rhythm is the glue that holds the sound together. Without rhythm to coordinate the air, tongue, embouchure, slide, fingers, bow, etc. the sound will not resonate at its full potential. The Lindeman-Sobel Approach also emphasizes an awareness of the resistances that occur naturally within the instrument and how those relate to where you are on the instrument and where you are going. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Genomic selection**
Genomic selection:
Genomic selection (GS) predicts the breeding values of individuals in a population by associating their traits (e.g. resistance to pests) with their high-density genetic marker scores. GS is a method proposed to address deficiencies of marker-assisted selection (MAS) in breeding programs. However, GS is a form of MAS that differs from it by estimating, at the same time, the effects of all genetic markers or haplotypes along the entire genome in order to calculate genomic estimated breeding values (GEBVs). The potential of GS is to capture the genetic diversity of a breeding program through a high coverage of genome-wide markers and to use the estimated effects of those markers to predict breeding values.
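As a rough illustration of that core idea (estimating all marker effects simultaneously rather than testing a handful of significant markers), the following is a minimal ridge-regression (RR-BLUP-style) sketch in Python; the marker matrix, phenotypes and shrinkage value are simulated purely for illustration and do not come from any real breeding program.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated training set: 200 genotyped and phenotyped individuals,
# 1,000 markers coded as 0/1/2 copies of the minor allele.
n_individuals, n_markers = 200, 1000
X = rng.integers(0, 3, size=(n_individuals, n_markers)).astype(float)
true_effects = rng.normal(0.0, 0.05, size=n_markers)      # many small marker effects
y = X @ true_effects + rng.normal(0.0, 1.0, size=n_individuals)

# Ridge-regression (RR-BLUP-style) estimate of ALL marker effects at once:
# beta_hat = (X'X + lambda*I)^(-1) X'y, where lambda shrinks every effect a little.
lam = 50.0
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_markers), X.T @ y)

# Genomic estimated breeding values (GEBVs) for new, unphenotyped selection candidates.
X_candidates = rng.integers(0, 3, size=(10, n_markers)).astype(float)
gebv = X_candidates @ beta_hat
print(gebv.round(2))
```

In practice GS pipelines use dedicated models (GBLUP, Bayesian alternatives, etc.), but the principle of fitting every marker jointly with shrinkage is the same.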
MAS limitations:
In contrast to MAS and its focus on a few significant markers, GS examines all markers in a population together. Since the initial proposal of GS for application in breeding populations, it has been emerging as a solution to the deficiencies of MAS. MAS has presented two main limitations in breeding applications. First, bi-parental mapping populations are used for most QTL analyses, limiting their accuracy. This represents a problem because a single bi-parental population cannot represent the allelic diversity and genetic background effects present in a breeding population.
MAS limitations:
Furthermore, polygenic (or complex) traits controlled by many small-effect markers have been particularly problematic for MAS. The statistical methods applied for identifying target markers and implementing MAS for the improvement of polygenic traits have been largely unsuccessful.
**Portsmouth Yardstick**
Portsmouth Yardstick:
The Portsmouth Yardstick (PY) or Portsmouth handicap scheme is a term used for a number of related systems of empirical handicapping used primarily in small sailboat racing.
The handicap is applied to the time taken to sail any course, and the handicaps can be used with widely differing types of sailboats. Portsmouth Numbers are updated with data from race results, normally annually. The various schemes are not directly linked, and ratings for the same class can and often do vary in the different schemes.
The most prominent Portsmouth Yardstick systems are probably those administered in the United States by the Portsmouth Numbers Committee, in the United Kingdom by the Royal Yachting Association (RYA) and in Australia by Victoria Yachting.
History:
The original UK Portsmouth Yardstick was developed by Stanley Milledge, who was in charge of handicapping racing at the Langstone Sailing Club in 1947 using the Island One design as the scratch boat (having a value 100). In 1950 he received support from the Portsmouth Harbour Racing & Sailing Association to produce the first edition of the Langstone tables for club use, when they would be known as Portsmouth numbers. In 1960 he handed over the administration to the RYA and in 1976 a new YR2 format was used, with the Langstone tables being removed in 1986. The Portsmouth Yardstick was extended to multihulls in 1973 and from 1977 four forms were used, for dinghy, multihull, keel and cruiser. Due to the increasing performance of boats, particularly multihulls, the base range of the numbers has been increased twice over the years and is now roughly centred on 1,000.
In the United States, the Thistle was chosen as primary yardstick for compilation in 1961 with a value of 83.0, which corresponded to its RYA PN rating at the time. Other boats were compared using their DIYRA (Dixie Inland Yacht Racing Association) rating to produce the D-PN (Dixie-Portsmouth Number). This proved successful and in 1973 the responsibility was passed from the DIYRA to the North American Yacht Racing Union. Wind Handicap Factors (HC) are an extension conceived by the DIYRA Portsmouth Numbers Committee to take a more realistic account of wind and wave conditions for different classes. This produces a factor based on F=100 for each point of the Beaufort Scale from 0 to 9. Further extensions are being evaluated for offshore classes to take account of sail inventories, excess weight, etc.
In Australia the most prominent Portsmouth Yardstick scheme is that run by Yachting Victoria Inc.
Application:
Each class of boat is assigned a "Portsmouth Number", with fast boats having low numbers and slow ones high numbers—so, for example, in the case of two dinghies, a 49er might have a RYA-PY of 697 while a Mirror has a RYA-PY of 1390 (these are the actual RYA Portsmouth numbers for 2018, but note that adjustments are made each year).
Application:
In a race involving a mixed fleet, finishing times can be adjusted using the formula: Corrected Time = Elapsed Time × Scale / Handicap, where Scale is 100 for US and AUS numbers and 1000 for UK numbers, and Handicap is the applicable Portsmouth Number for the given class of boat. Each boat's time is adjusted with the formula, and then the adjusted scores are compared to determine the outcome of the race.
Application:
For example, a PD Racer (a semi-open homebuilt class, and the slowest listed boat in the USA scheme) has a D-PN of 140, and an A-Scow (the fastest listed centreboard boat) has a D-PN of 61.3. If an A Scow takes 1 hour to finish a given course, and a PD Racer takes 2 hours, the handicapped times are: A Scow: 1 hour × 100 / 61.3 = 1.63 hours; PD Racer: 2 hours × 100 / 140 = 1.43 hours. So the PD Racer, although it took twice as long to finish the course, would be declared the winner.
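A minimal sketch (ours, not from any class association's documentation) of applying this correction in Python; the figures are simply the A Scow / PD Racer example above.

```python
def corrected_time(elapsed_hours, handicap, scale=100.0):
    """Portsmouth Yardstick correction: Elapsed Time x Scale / Handicap.

    Use scale=100 for US (D-PN) and Australian numbers, scale=1000 for UK RYA numbers.
    """
    return elapsed_hours * scale / handicap

# The worked example from this section (US D-PN scheme):
fleet = {
    "A Scow":   (1.0, 61.3),   # (elapsed hours, D-PN)
    "PD Racer": (2.0, 140.0),
}

for boat, (elapsed, dpn) in fleet.items():
    print(f"{boat}: {corrected_time(elapsed, dpn):.2f} corrected hours")
# A Scow: 1.63 corrected hours
# PD Racer: 1.43 corrected hours -> the PD Racer wins on corrected time
```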
Examples of boats and their Portsmouth Numbers:
There are hundreds of boats that have a Portsmouth Number, or D-PN, or both; the table below gives some notable examples. The classes included below are from those used at the 2012 Olympics, the 2012 Paralympic Games, and the 2012 ISAF Youth Worlds.
The official table of RYA PNs is published on the RYA Portsmouth Pages. The official table of USA D-PNs is published on the US Sailing website.
Other handicap systems:
Portsmouth Yardstick systems are typically used for dinghy racing and small keelboats or multihulls. Larger sailboats are more likely to use the Performance Handicap Racing Fleet handicapping system in North America, or the IRC handicapping system in Europe, Australia & New Zealand.
There are many other methods of handicapping sailboat racing, including performance handicapping systems such as Echo, used in Ireland, and NHC, used in the UK.
Conversions between different Systems:
USA - D-PN and PHRF: There is a linear correlation between D-PN and PHRF, allowing the following conversion formulae (2007 D-PN): D-PN = (PHRF / 6) + 55; PHRF = (D-PN − 55) × 6 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Integrated topside design**
Integrated topside design:
Integrated topside design is a design approach used by military ship and ship equipment designers to overcome the challenges of effectively operating shipboard antenna systems and equipment susceptible to electromagnetic fields in the high electromagnetic environment of a warship's topside. The approach primarily uses the well-understood physics of electromagnetism to simulate the topside environment before the equipment is tested for real.
Integrated topside design:
Advances in ship design to accommodate ever more high-power antenna systems, growing numbers of parasitic re-radiating metallic structures (such as cranes and masts), and a requirement for more sensitive sensors for littoral operations have led to a need for greater consideration of the operation of equipment prior to ship build or equipment deployment. Whilst this can be done post-deployment using measurement teams, using a modelling and simulation approach early in the ship design is more cost-effective than making corrections after the ship is built, and so is the preferred option for several of the world's advanced navies.
**Balinski's theorem**
Balinski's theorem:
In polyhedral combinatorics, a branch of mathematics, Balinski's theorem is a statement about the graph-theoretic structure of three-dimensional convex polyhedra and higher-dimensional convex polytopes. It states that, if one forms an undirected graph from the vertices and edges of a d-dimensional convex polyhedron or polytope (its skeleton), then the resulting graph is at least d-vertex-connected: the removal of any d − 1 vertices leaves a connected subgraph. For instance, for a three-dimensional polyhedron, even if two of its vertices (together with their incident edges) are removed, for any pair of vertices there will still exist a path of vertices and edges connecting the pair. Balinski's theorem is named after mathematician Michel Balinski, who published its proof in 1961, although the three-dimensional case dates back to the earlier part of the 20th century and the discovery of Steinitz's theorem that the graphs of three-dimensional polyhedra are exactly the three-connected planar graphs.
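As a quick illustrative check of the three-dimensional case (this example is ours, not part of the theorem's literature), the skeleton of the ordinary cube should be at least 3-vertex-connected; the widely used networkx library can confirm this.

```python
# Illustrative check of Balinski's theorem for d = 3 using the cube.
# The graph of the 3-cube (its skeleton) should be at least 3-vertex-connected.
import networkx as nx

d = 3
cube_skeleton = nx.hypercube_graph(d)    # 8 vertices, 12 edges: the cube's graph

connectivity = nx.node_connectivity(cube_skeleton)
print(connectivity)                      # prints 3
assert connectivity >= d                 # the bound guaranteed by Balinski's theorem
```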
Balinski's proof:
Balinski proves the result based on the correctness of the simplex method for finding the minimum or maximum of a linear function on a convex polytope (the linear programming problem). The simplex method starts at an arbitrary vertex of the polytope and repeatedly moves towards an adjacent vertex that improves the function value; when no improvement can be made, the optimal function value has been reached.
Balinski's proof:
If S is a set of fewer than d vertices to be removed from the graph of the polytope, Balinski adds one more vertex v0 to S and finds a linear function ƒ that has the value zero on the augmented set but is not identically zero on the whole space. Then, any remaining vertex at which ƒ is non-negative (including v0) can be connected by simplex steps to the vertex with the maximum value of ƒ, while any remaining vertex at which ƒ is non-positive (again including v0) can be similarly connected to the vertex with the minimum value of ƒ. Therefore, the entire remaining graph is connected. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SIN (Substitute It Now!) List**
SIN (Substitute It Now!) List:
The Substitute It Now! List is a database, developed by the International Chemical Secretariat (ChemSec), of chemicals whose uses are likely to become legally restricted under the EU REACH regulation. The list is being used by public interest groups as a campaign tool to advocate for increasing the pace of implementation of REACH and by commercial interests to identify substances for control in chemicals management programmes.
History and development:
The SIN List is composed of chemicals evaluated by the environmental NGO ChemSec as meeting EU criteria for being Substances of Very High Concern (SVHCs) under Article 57 of REACH, being either carcinogenic, mutagenic or reprotoxic (CMR), persistent, bioaccumulative and toxic (PBT), very persistent and very bioaccumulative (vPvB), or posing an equivalent environmental or health threat. The first SIN List, known as version 1.0, was published in 2008 and identified 267 chemicals as meeting the Article 57 criteria for being SVHCs. ChemSec's assessment was independently validated by the Technical University of Denmark. In 2009 a further 89 substances were added to the SIN List (Version 1.1), before in 2011 another 22 chemicals were added (Version 2.0) for fulfilling the REACH 57(f) criterion of equivalent concern as endocrine disrupting chemicals (EDCs). The 2011 EDC additions were made in consultation with TEDX, the US endocrine-disruption research NGO founded by Professor Theo Colborn, and coincided with EU plans over 2011–2012 to develop accepted criteria for identifying endocrine disrupting chemicals. In October 2014, the list was updated, this time with 28 new chemicals. With this update, the SIN List was also divided into 31 groups, and a tool for sustainable substitution based on the SIN List – SINimilarity – was presented.
SIN List Advisory Committee:
The development of the SIN List is guided by a nine-member NGO advisory committee: The Center for International Environmental Law; The European Consumers' Organisation; CHEM Trust; Clean Production Action (CPA); Greenpeace European Unit; European Environmental Bureau; ClientEarth; Friends of the Earth Europe; European Trade Union Institute; Women in Europe for a Common Future; The Health and Environment Alliance
Impact:
EU Legislation The disparity between the length of the SIN List and the 15 chemicals nominated by the EU as SVHCs in October 2008 was used to pressure the European regulatory authorities and Member States to accelerate the nomination process. In 2011 Members of the European Parliament's Environment Committee cited the SIN List in criticising the European Commission for continuing slow progress on EDCs and evaluation of safety of chemicals in mixtures. EU regulators have been cautiously welcoming of the SIN List. Margot Wallström, Vice-President of the European Commission, stated that she welcomed initiatives such as the SIN List "[which] draw the attention of the public and industry to the most hazardous chemicals that should be a priority for inclusion in the REACH authorisation procedure". European Commissioner for the Environment Janez Potočnik has referred to the SIN list as "[indicating] the substances the European Commission will take into consideration for placement on the candidate list". Industry representative group CEFIC has criticised the publication of the list for occurring outside the legal design of REACH.
Impact:
Commercial substitution Sony Ericsson, Sara Lee, Skanska, Marks & Spencer, Dell and Carrefour are on record as referring to the SIN List in their chemical substitution programmes. The SIN List is also used by other public interest groups in lobbying companies to substitute or phase out hazardous chemicals.
Impact:
Socially Responsible Investment The potential for legal restrictions on chemical use increasing costs associated with reformulating products and modifying processes has resulted in SIN List data being used by investment analysis firms concerned with Socially Responsible Investment, to aid in calculating financial risk posed by companies' sustainability profiles. In March 2013 ChemSec published the SIN Producers List, a list of the 709 companies manufacturing or importing SIN List substances in the EU. The list is derived from data presented in the European Chemicals Agency (ECHA) database of registered substances. ChemSec, together with ClientEarth, has requested information about producers of REACH registered substances to be made publicly available, and launched a lawsuit against the European Chemicals Agency on this issue in 2011.
Related documents:
Comprehensive SIN List methodology SIN 2.0 (Endocrine Disruptors) Methodology Summary of the SIN List for the public REACH Article 57: THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION, (2006). REGULATION (EC) No 1907/2006 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 18 December 2006 concerning the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH), establishing a European Chemicals Agency, amending Directive 1999/45/EC and repealing Council Regulation (EEC) No 793/93 and Commission Regulation (EC) No 1488/94 as well as Council Directive 76/769/EEC and Commission Directives 91/155/EEC, 93/67/EEC, 93/105/EC and 2000/21/EC. Official Journal of the European Union. p. 396/142 (PDF, English) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Universal Access**
Universal Access:
Apple Inc. operating systems include built-in accessibility features, as well as APIs for third-party developers to use in their applications. These accessibility features provide computing abilities to people with visual impairment, hearing impairment, or physical disability.
Components:
Accessibility (formerly Universal Access) is a preference pane of the System Preferences application. It includes four sub-components, each providing different options and settings.
Components:
Seeing: Turn On/Off Screen Zooming; Inverse Colors (White on Black, also known as reverse colors), ⌘ Command+⌥ Option+Control+8; Set Display to Greyscale (10.2 onwards); Enhance Contrast; Enable Access for Assistive Devices; Enable Text-To-Speech for Universal Access Preferences; Disable unnecessary automatic animations.
Hearing: Flash the screen when an alert sound occurs; Raise/Lower Volume.
Keyboard: Sticky Keys (Treat a sequence of modifier keys as a key combo); Slow Keys (Delay between key press and key acceptance).
Mouse: Mouse Keys (Use the numeric keypad in place of the mouse); Mouse Pointer Delay; Mouse Pointer Max Speed; Mouse Pointer enlarging. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Commander Shepard**
Commander Shepard:
Lieutenant Commander Shepard, better known as Commander Shepard, is the player character in the Mass Effect video game series by BioWare (Mass Effect, Mass Effect 2, and Mass Effect 3).
Commander Shepard:
A veteran soldier of the Systems Alliance Navy, an N7 graduate of the Interplanetary Combatives Training (ICT) military program, and the first human Citadel Council Spectre, Shepard works to stop the Reapers, a sentient machine race dedicated to wiping out all advanced organic life. Shepard is neither a hero nor a villain; depending upon the player's choices and actions, Shepard can act as either on occasion, and will take whatever action is deemed necessary when presented with impossible scenarios.
Commander Shepard:
Shepard's gender, class, first name and facial appearance are chosen and customized by the player. The default male Shepard's face and body were modelled after Mark Vanderloo, while Mark Meer provided the voice for the male Shepard. Jennifer Hale voiced the female Shepard. Since the player can choose the gender of Shepard, much of the dialogue revolving around the character is gender-neutral with only a few exceptions. However, in some other Mass Effect media, Shepard is called "he" regardless of player choice for the gender.
Commander Shepard:
The character is inspired by and named after American astronaut Alan Shepard. Shepard's armor developed over the series, and was originally intended to be red-and-white. Most promotional material for the series focused on the male Shepard, due to the studio's desire for a single identifiable hero, though both versions of the character were given equal priority during development. Various merchandise has been made, including several figurines. Shepard has made cameo appearances in other Electronic Arts games and is referenced in Mass Effect: Andromeda.
Concept and creation:
BioWare wanted players to feel special and empowered from the start of the game. Unlike other role-playing game protagonists, they felt Shepard should not be an entirely blank character for the player to create, in order to create a more "intense" experience; with Mass Effect being more cinematic than other BioWare video games, they felt they needed an "extra bit" with a sense of a specific flavor that can be caused by a memorable character, such as Star Trek's Captain Kirk or 24's Jack Bauer.
Concept and creation:
Developers wanted to at least give Shepard a last name so that other characters could address them. The developers wanted a name that was both "all-American" and common, which led them to start looking at the original seven astronauts. Alan Shepard was chosen due to fitting with the idea of "their" Shepard, being tough and respected, and fitting in with the character being the first human Spectre – Alan Shepard being the first American in space. During the development of the first game, the female Shepard was given equal importance as the male counterpart; unique lines were written for her as well as a unique romance option. In fact, the early model for animation tests featured a female Shepard. When describing her, Casey Hudson said "[s]he's not a caricature of the idea of role-playing as a female, but instead she's very impressive as a strong female character that's sensitive yet extremely confident and assertive".
Concept and creation:
Appearance and design Shepard's default armor was originally red-and-white, but this was changed to charcoal grey, with a red-and-white stripe and the N7 logo, as Shepard looked too much like a medic. The red stripe in the N7 logo is said to symbolise the blood the character must sacrifice to save the galaxy. The armor became piece-based in Mass Effect 2 to stress the character's silhouette, as well as making them look "stronger and able to take more punishment". Despite this, the colours, as well as other elements of the armor and the commander's appearance, are customisable in Mass Effect 2. For the character customisation at the start of the game, they focused on "quality and realism". In order to test out the customisation system, the team made various celebrity look-alikes to ensure it offered a wide enough variety. The default male face, as well as the male body, were based on Dutch model Mark Vanderloo. The default female face changed slightly between the first and second game, but underwent a big redesign for Mass Effect 3. Six different designs for the default female Shepard were hosted online, and fans were told to vote for whichever design they preferred via Facebook; many different designs were made before the vote, but were whittled down to six by BioWare staff. The blonde Shepard with freckles won, though BioWare later decided that the hairstyle may have interfered with the vote, and so made another competition to decide that. The red-haired Shepard won the subsequent competition.
Concept and creation:
Voice The male and female versions of Shepard were voiced by Mark Meer and Jennifer Hale respectively. Both of them had worked with BioWare many times previously. Meer had first worked with BioWare during the creation of Baldur's Gate II: Shadows of Amn, and went on to voice other bit parts in their games. When he was first called in to work on Mass Effect, he expected to voice more bit parts, and was "pleasantly surprised" to get the role of Shepard. Caroline Livingstone gave voice direction during recording, and lead writer Mac Walters would occasionally sit in during recording sessions, allowing lines to be changed quickly. Hale has said she is very invested in helping to "create" the stories of video games, though she herself is not a gamer. Although Hale does object to certain lines if they seem out-of-character in other works, she prefers not to mess with the words for Shepard and BioWare. That BioWare did not change the words based on gender considerations was one of Hale's favorite aspects of the series.
Appearances:
Lieutenant Commander Shepard serves as the player character of the main Mass Effect game trilogy. The commander is a graduate of the Systems Alliance's – the "galactic face of humanity" – military N7 program, the highest grade of their "Interplanetary Combatives Training" that commands a great deal of respect. Their service before joining the military and the military event that allowed them to rise to fame are both chosen by the player before the game starts, out of three options each. Also customizable is Shepard's gender, character class and physical appearance. The player is given paraphrased dialogue options via a radial command menu called the "dialogue wheel", which Shepard will expand on when clicked. Different choices on the dialogue wheel can grant either Paragon or Renegade points, which will, over time, affect their physical appearance in Mass Effect 2 and Mass Effect 3: a higher Renegade score will cause Shepard's scars to worsen and their eyes to start glowing red, while a higher Paragon score will cause Shepard's scars to gradually heal and fade away.
Appearances:
Outside of the main trilogy, Shepard has been briefly mentioned in the novels Mass Effect: Ascension, Retribution, and Deception, and has also made a brief appearance in the third issue of the comic Homeworlds, with only the N7 logo on their armour being shown in-shot. Redemption, taking place two years before the second game's main events, concerns how Shepard's body was retrieved by Liara T'Soni and then given to Cerberus after the character's death in Mass Effect 2's prologue. The character, however, will not be making any further appearances in any Mass Effect games now that the main trilogy is over, and BioWare have said that they do not wish the next Mass Effect protagonist to just be another soldier or "Shepard 2".
Appearances:
Mass Effect In the first game, the commander is serving under Captain David Anderson during the shakedown run of the highly advanced turian/human ship SSV Normandy, heading toward humanity's first ever colony, Eden Prime. However, it turns out the ship is actually being sent to collect a Prothean beacon (the Protheans being an advanced and now-extinct race whose technology could contain great discoveries) and give it to the Citadel Council, an executive committee who hold great sway in the galaxy, and who are recognised as an authority by most of explored space. A Spectre, an elite agent of the Council with the authority to deal with situations "in whatever way they deem necessary", named Nihlus Kryik accompanies the mission to observe Shepard's candidacy to join the Spectres; if successful, this would make Shepard the first ever human Spectre and an exemplar of humanity's progress in galactic politics. However, Nihlus is killed during the mission when the geth, a race of sentient AIs, and Saren Arterius, a rogue Spectre, attack the colony to steal the beacon. Shepard manages to stop the colony from being destroyed, but is hit by a blast from the damaged beacon before it blows up; as a result, Shepard begins to have visions of war and death.
Appearances:
After Saren's treachery is exposed to the Council through the use of an audio recording that mentions "the Reapers", which are believed to be a race of synthetic-organic starships that eradicate all organic civilization every 50,000 years, the Council revoke Saren's Spectre status and make Shepard the first ever human Spectre, though they believe the Reapers are merely a myth Saren is using to manipulate the geth. Shepard is instructed to take down Saren, and is placed in charge of the Normandy and given free rein of the galaxy. Over the course of the game, it becomes clear that the visions are images of the Protheans being destroyed by the Reapers; the commander speaks to one of these Reapers, referred to as Sovereign, on the planet of Virmire, though the Council still believes them to be a myth.
Appearances:
Eventually, Saren, Sovereign and the geth launch an attack on the Citadel, the "political, cultural, and financial capital of the galactic community" and home of the Council, intending to activate a mass relay inside it that will allow all the Reapers to arrive at once from dark space, destroy the Citadel, and begin their "harvest" of organic life. Shepard manages to stop them, destroy Sovereign, and save the Citadel. Depending on the player's choices, Shepard may either also save the Council, or leave them to die to ensure Sovereign is destroyed or to build a new human-centric Council.
Appearances:
Mass Effect 2 At the beginning of the game, Shepard is killed when the Collectors attack and destroy the Normandy. The commander is revived by Cerberus, a human-supremacist organization considered to be terrorists by the Citadel Council and the Systems Alliance, with instructions by Cerberus leader the Illusive Man to be brought back unaltered and exactly as they were before their death. The Illusive Man provides Shepard with both a new ship (the Normandy SR-2) and a crew, and sends them on various missions against the Collectors, who are revealed to be puppets of the Reapers.
Appearances:
Over the course of the game, Shepard must assemble a team to prepare for a final assault on the Collector base accessible only through the Omega-4 Relay, a relay that destroys all non-Collector ships that try to go through it. Depending on the player's choices during the final mission, it is possible that Shepard may fail to escape the Collector base and die, though the save cannot be imported to Mass Effect 3 if this is the case. At the end of this mission, the player is given the choice to either destroy the base or hand it over to Cerberus – if the player chooses the former, Shepard effectively cuts all ties with Cerberus, and the crew and squadmates join the commander.
Appearances:
Depending on the player's decision concerning the Council in the first game, Shepard can either be reinstated as a Spectre now that they have been revived, be rejected by Udina and the rest of the Council, or refuse Spectre status when offered it.
Appearances:
Mass Effect 3 Shepard has been grounded and stripped of rank by the Alliance before the game starts, due to either working with Cerberus or blowing up a Mass Relay in the Mass Effect 2 DLC pack Arrival. After Earth is invaded by the Reapers, the Alliance reinstates them and sends them to ask the Council for help, retaking command of the Alliance-refitted Normandy SR-2. Though the Council refuses, they either reinstate or uphold the commander's Spectre status.
Appearances:
Shepard must then work to forge alliances between the various alien races to ensure the survival of Earth and to stop the Reapers from eradicating all organic life from the galaxy. Among the major decisions made by the player as part of the branching narrative of Mass Effect 3 include the resolution of the Krogan Genophage storyline, the outcome of the war between the geth and the quarians, and ultimately the fate of the Reapers and the rest of the galactic community.
Appearances:
Following the conclusion of one of the three original endings of Mass Effect 3, where Shepard activates a superweapon known as the Crucible on the Citadel to deal with the Reapers, the Normandy crashes on a distant planet after being caught in a wave of energy emitted by the Crucible. In some of the possible ending scenarios, the crew eulogizes Shepard by putting his or her name on the ship's memorial and flying away. If Shepard chose to destroy the Reapers, however, the crew will hesitate placing Shepard's name on the memorial wall. In one possible ending, the chestpiece of a body with the N7 emblem is shown to be moving, suggesting Shepard might have survived.
Appearances:
Mass Effect: Andromeda Shepard does not make a direct appearance, though players can select Shepard's gender at the start of the game. Shepard is referenced both in conversations between characters, and in audio logs sent by Liara T'Soni to Alec Ryder, the player character's father.
Promotion and merchandise:
The default male Shepard was used heavily in marketing, being featured on the covers for all three games and most trailers. The female Shepard was confirmed to be making an appearance in one of the trailers for the third game, and on one side of Mass Effect 3's Collector's Edition, in June 2011. The female Shepard had not been advertised heavily previously as marketing wanted to only showcase one character, so that consumers could easily understand who the hero was. For Mass Effect 3, BioWare wished to "acknowledge" the demand for material with the female Shepard. Outside of the Mass Effect series, Shepard has also made cameo appearances in other Electronic Arts games. SkyHeroes features various different characters from EA games, acting as playable pilots during the game's multiplayer mode. Through downloadable content released on March 27, Shepard becomes available as an alternate skin for Serah and Noel within Final Fantasy XIII-2. An N7 armor and omni-blades become available in Kingdoms of Amalur: Reckoning if the demo for Mass Effect 3 has been completed; similarly, an N7 armor becomes available in Dead Space 3 if the player owns a copy of Mass Effect 3.
Reception:
Commander Shepard has received a generally positive reception and often appeared in readers' polls published by video game publications. The character was voted number 2 by readers in Game Informer's poll of the top 30 video game characters, behind Halo protagonist Master Chief. A readers' poll for their top ultimate RPG party choices, drawing from characters of several disparate RPG video game franchises, published by IGN in December 2014 placed Shepard at No. 2. Another readers' poll published by PC Gamer in 2015 revealed that Shepard was overall the fifth most popular Mass Effect character. Shepard was voted the primary Xbox 360 candidate in IGN's mock video game presidential election, but lost to the PlayStation 3 candidate. Shepard has appeared in numerous top video game character lists compiled by video game publications, such as GameZone, GameDaily, and Game Informer. Joe Juba, writing for Game Informer, chose Shepard as their favourite protagonist in their "2012 RPG of the Year Awards", saying that while the player changed the tone and context of many parts of Mass Effect, "Shepard never comes out of it looking any less awesome". Not all critical reception has been positive. Maxim described the visual design of Shepard's armor as derivative. Andrew Goldfarb, for IGN, criticized the decision to revisit Shepard during the downloadable content of Mass Effect 3, believing that 3's ending was "final", and saying that he would have preferred to have a look at a new squad separate from Shepard. Data published by BioWare between 2011 and 2013 for Mass Effect 2 and Mass Effect 3 showed that the player pick rate for male Shepard, sometimes referred to as "BroShep", was at 82% with the remainder choosing to play as a female Shepard, nicknamed "FemShep". In July 2021, the choice statistics for Mass Effect Legendary Edition released by BioWare revealed that 68% of players preferred to play as the male version compared to 32% for the female version.
Reception:
Female Shepard Although female Shepard is less popular with players compared to her male counterpart, the character and her voice actress have consistently been well received critically. Hale was nominated for "Best Performance By A Human Female" at the 2010 Spike Video Game Awards, though lost to fellow Mass Effect voice actor Tricia Helfer (playing Sarah Kerrigan in StarCraft II: Wings of Liberty). Polygon included Jane Shepard, the default name for a female Shepard, in their list of the 70 best video game characters of the 2010s decade, with Cass Marshall singling out Hale's voice acting for convincingly selling her character as "both a deadly space marine and a vulnerable protagonist facing impossible odds". Various media outlets like CNET, ONE37pm and CBR have included Commander Shepard in their lists of top female video game characters. The outcome of the design poll, which was initially won by the blonde Shepard, was described as controversial and proved divisive with critics. While commentators like Kirk Hamilton from Kotaku accepted what he perceived to be a legitimately democratic choice, others like Kim Richards from PC Gamer rejected the outcome. Richards in particular criticized the poll for encouraging players to go for the most "Barbie-like" conventionally attractive appearance. Carlen Lavigne's later analysis of the controversy concluded that the poll was presented like a beauty contest, which positioned Shepard in a sexualized manner for the pleasure of a straight male audience; this has a corrupting effect on Shepard's standing as a feminist lead, even though the original intention is to promote a female character as the face of the franchise. The authors of Bridging Game Studies and Feminist Theories noted that the poll produced "six avatars who have the exact same body shape, but are distinguished by different skin, hair, and eye colour." From their point of view, Shepard would either way "fit perfectly with beauty standards while creating the illusion of choice for players." Leandro Lima noted that the manner in which Shepard was included within the marketing campaign for Mass Effect 3 was still problematic for many players, as she is "perceived as very generic in terms of design". Patricia Hernandez from Polygon felt that the manner in which the female Shepard poll unfolded was "strange" and that BioWare's attempts to continue modifying her years after the release of the first game while her male counterpart's appearance remains unchanged is somewhat "off-putting". Nevertheless, she expressed relief that "in 2021, there's no vote, no massive fan campaign to get BioWare to even consider highlighting FemShep" in response to the character's prominence in promotional material released for Mass Effect Legendary Edition.
**Galaxy filament**
Galaxy filament:
In cosmology, galaxy filaments are the largest known structures in the universe, consisting of walls of gravitationally bound galactic superclusters. These massive, thread-like formations can reach 80 h−1 megaparsecs (of the order of 160 to 260 million light-years) and form the boundaries between voids. Galaxy filaments form the cosmic web and define the overall structure of the observable universe.
Discovery:
Discovery of structures larger than superclusters began in the late 1980s. In 1987, astronomer R. Brent Tully of the University of Hawaii's Institute of Astronomy identified what he called the Pisces–Cetus Supercluster Complex. In 1989, the CfA2 Great Wall was discovered, followed by the Sloan Great Wall in 2003. In January 2013, researchers led by Roger Clowes of the University of Central Lancashire announced the discovery of a large quasar group, the Huge-LQG, which dwarfs previously discovered galaxy filaments in size. In November 2013, using gamma-ray bursts as reference points, astronomers discovered the Hercules–Corona Borealis Great Wall, an extremely large filament measuring more than 10 billion light-years across.
Filaments:
The filament subtype of filaments has roughly similar major and minor axes in cross-section, along the lengthwise axis.
A short filament was proposed by Adi Zitrin and Noah Brosch—detected by identifying an alignment of star-forming galaxies—in the neighborhood of the Milky Way and the Local Group. The proposal of this filament, and of a similar but shorter filament, were the result of a study by McQuinn et al. (2014) based on distance measurements using the TRGB method.
Galaxy walls: The galaxy wall subtype of filaments has a significantly greater major axis than minor axis in cross-section, along the lengthwise axis.
Filaments:
A "Centaurus Great Wall" (or "Fornax Great Wall" or "Virgo Great Wall") has been proposed, which would include the Fornax Wall as a portion of it (visually created by the Zone of Avoidance) along with the Centaurus Supercluster and the Virgo Supercluster also known as our Local Supercluster within which the Milky Way galaxy is located (implying this to be the Local Great Wall).
Filaments:
A wall was proposed to be the physical embodiment of the Great Attractor, with the Norma Cluster as part of it. It is sometimes referred to as the Great Attractor Wall or Norma Wall. This suggestion was superseded by the proposal of a supercluster, Laniakea, that would encompass the Great Attractor, the Virgo Supercluster, and the Hydra–Centaurus Supercluster.
A wall was proposed in 2000 to lie at z=1.47 in the vicinity of radio galaxy B3 0003+387.
A wall was proposed in 2000 to lie at z=0.559 in the northern Hubble Deep Field (HDF North).
Map of nearest galaxy walls
Large Quasar Groups: Large quasar groups (LQGs) are some of the largest structures known. They are theorized to be protohyperclusters/proto-supercluster-complexes/galaxy filament precursors.
Supercluster complex | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Prime quadruplet**
Prime quadruplet:
In number theory, a prime quadruplet (sometimes called prime quadruple) is a set of four prime numbers of the form {p,p+2,p+6,p+8}.
This represents the closest possible grouping of four primes larger than 3, and is the only prime constellation of length 4.
Prime quadruplets:
The first eight prime quadruplets are: {5, 7, 11, 13}, {11, 13, 17, 19}, {101, 103, 107, 109}, {191, 193, 197, 199}, {821, 823, 827, 829}, {1481, 1483, 1487, 1489}, {1871, 1873, 1877, 1879}, {2081, 2083, 2087, 2089} (sequence A007530 in the OEIS) All prime quadruplets except {5, 7, 11, 13} are of the form {30n + 11, 30n + 13, 30n + 17, 30n + 19} for some integer n. (This structure is necessary to ensure that none of the four primes are divisible by 2, 3 or 5). A prime quadruplet of this form is also called a prime decade.
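The divisibility pattern above can be checked directly. The following is a minimal Python sketch (not part of the original article) that enumerates prime quadruplets by trial division and prints each starting prime modulo 30; apart from {5, 7, 11, 13}, every start should leave remainder 11.

```python
def is_prime(n):
    """Simple trial-division primality test, adequate for small illustrative ranges."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def prime_quadruplets(limit):
    """Yield prime quadruplets {p, p+2, p+6, p+8} with p <= limit."""
    for p in range(5, limit + 1, 2):
        if all(is_prime(p + k) for k in (0, 2, 6, 8)):
            yield (p, p + 2, p + 6, p + 8)

# Reproduces the eight quadruplets listed above and shows the 30n + 11 pattern.
for quad in prime_quadruplets(2100):
    print(quad, "start mod 30 =", quad[0] % 30)
```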
Prime quadruplets:
A prime quadruplet can be described as a consecutive pair of twin primes, two overlapping sets of prime triplets, or two intermixed pairs of sexy primes.
Prime quadruplets:
It is not known if there are infinitely many prime quadruplets. A proof that there are infinitely many would imply the twin prime conjecture, but it is consistent with current knowledge that there may be infinitely many pairs of twin primes and only finitely many prime quadruplets. The number of prime quadruplets with n digits in base 10 for n = 2, 3, 4, ... is 1, 3, 7, 27, 128, 733, 3869, 23620, 152141, 1028789, 7188960, 51672312, 381226246, 2873279651 (sequence A120120 in the OEIS).
Prime quadruplets:
As of February 2019 the largest known prime quadruplet has 10132 digits. It starts with p = 667674063382677 × 2^33608 − 1, found by Peter Kaiser.
Prime quadruplets:
The constant representing the sum of the reciprocals of all prime quadruplets, Brun's constant for prime quadruplets, denoted by B4, is the sum of the reciprocals of all prime quadruplets: B4 = (1/5 + 1/7 + 1/11 + 1/13) + (1/11 + 1/13 + 1/17 + 1/19) + (1/101 + 1/103 + 1/107 + 1/109) + ⋯ with value: B4 = 0.87058 83800 ± 0.00000 00005. This constant should not be confused with the Brun's constant for cousin primes, prime pairs of the form (p, p + 4), which is also written as B4.
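As an illustration of this definition (not a way to obtain the quoted precision), a partial sum of the series can be computed directly; the series converges slowly, so the partial sum only gradually approaches the value above. A minimal Python sketch, using the same trial-division test as the earlier example:

```python
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# Sum the reciprocals of all members of prime quadruplets with p <= 100000.
partial_sum = 0.0
p = 5
while p <= 100_000:
    if all(is_prime(p + k) for k in (0, 2, 6, 8)):
        partial_sum += sum(1.0 / (p + k) for k in (0, 2, 6, 8))
    p += 2

print("partial sum of B4 over quadruplets up to 100000:", partial_sum)
```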
Prime quadruplets:
The prime quadruplet {11, 13, 17, 19} is alleged to appear on the Ishango bone although this is disputed.
Excluding the first prime quadruplet, the shortest possible distance between two quadruplets {p, p+2, p+6, p+8} and {q, q+2, q+6, q+8} is q - p = 30. The first occurrences of this are for p = 1006301, 2594951, 3919211, 9600551, 10531061, ... (OEIS: A059925).
The Skewes number for prime quadruplets {p, p+2, p+6, p+8} is 1172531 (Tóth (2019)).
Prime quintuplets:
If {p, p+2, p+6, p+8} is a prime quadruplet and p−4 or p+12 is also prime, then the five primes form a prime quintuplet which is the closest admissible constellation of five primes.
Prime quintuplets:
The first few prime quintuplets with p+12 are: {5, 7, 11, 13, 17}, {11, 13, 17, 19, 23}, {101, 103, 107, 109, 113}, {1481, 1483, 1487, 1489, 1493}, {16061, 16063, 16067, 16069, 16073}, {19421, 19423, 19427, 19429, 19433}, {21011, 21013, 21017, 21019, 21023}, {22271, 22273, 22277, 22279, 22283}, {43781, 43783, 43787, 43789, 43793}, {55331, 55333, 55337, 55339, 55343} ... OEIS: A022006.
Prime quintuplets:
The first prime quintuplets with p−4 are: {7, 11, 13, 17, 19}, {97, 101, 103, 107, 109}, {1867, 1871, 1873, 1877, 1879}, {3457, 3461, 3463, 3467, 3469}, {5647, 5651, 5653, 5657, 5659}, {15727, 15731, 15733, 15737, 15739}, {16057, 16061, 16063, 16067, 16069}, {19417, 19421, 19423, 19427, 19429}, {43777, 43781, 43783, 43787, 43789}, {79687, 79691, 79693, 79697, 79699}, {88807, 88811, 88813, 88817, 88819} ... OEIS: A022007.
Prime quintuplets:
A prime quintuplet contains two close pairs of twin primes, a prime quadruplet, and three overlapping prime triplets.
It is not known if there are infinitely many prime quintuplets. Once again, proving the twin prime conjecture might not necessarily prove that there are also infinitely many prime quintuplets. Also, proving that there are infinitely many prime quadruplets might not necessarily prove that there are infinitely many prime quintuplets.
The Skewes number for prime quintuplets {p, p+2, p+6, p+8, p+12} is 21432401 (Tóth (2019)).
Prime sextuplets:
If both p−4 and p+12 are prime then it becomes a prime sextuplet. The first few: {7, 11, 13, 17, 19, 23}, {97, 101, 103, 107, 109, 113}, {16057, 16061, 16063, 16067, 16069, 16073}, {19417, 19421, 19423, 19427, 19429, 19433}, {43777, 43781, 43783, 43787, 43789, 43793} (OEIS: A022008). Some sources also call {5, 7, 11, 13, 17, 19} a prime sextuplet. Our definition, all cases of primes {p−4, p, p+2, p+6, p+8, p+12}, follows from defining a prime sextuplet as the closest admissible constellation of six primes. A prime sextuplet contains two close pairs of twin primes, a prime quadruplet, four overlapping prime triplets, and two overlapping prime quintuplets.
Prime sextuplets:
All prime sextuplets except {7, 11, 13, 17, 19, 23} are of the form {210n + 97, 210n + 101, 210n + 103, 210n + 107, 210n + 109, 210n + 113} for some integer n. (This structure is necessary to ensure that none of the six primes is divisible by 2, 3, 5 or 7).
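A quick way to see the 210n + 97 pattern is to check the smallest member of each listed sextuplet (beyond the first) modulo 210. A minimal Python sketch, using the starting primes quoted above:

```python
# Smallest members of the prime sextuplets listed above, except {7, ..., 23}.
sextuplet_starts = [97, 16057, 19417, 43777]

for start in sextuplet_starts:
    n, remainder = divmod(start - 97, 210)
    # remainder is 0 for every listed sextuplet, confirming the 210n + 97 form.
    print(f"{start} = 210*{n} + 97 (remainder {remainder})")
```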
It is not known if there are infinitely many prime sextuplets. Once again, proving the twin prime conjecture might not necessarily prove that there are also infinitely many prime sextuplets. Also, proving that there are infinitely many prime quintuplets might not necessarily prove that there are infinitely many prime sextuplets.
The Skewes number for the tuplet {p, p+4, p+6, p+10, p+12, p+16} is 251331775687 (Tóth (2019)).
Prime k-tuples:
Prime quadruplets, quintuplets, and sextuplets are examples of prime constellations, and prime constellations are in turn examples of prime k-tuples. A prime constellation is a grouping of k primes, with minimum prime p and maximum prime p+n , meeting the following two conditions: Not all residues modulo q are represented for any prime q For any given k , the value of n is the minimum possibleMore generally, a prime k-tuple occurs if the first condition but not necessarily the second condition is met. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Super Bloch oscillations**
Super Bloch oscillations:
In physics, a Super Bloch oscillation describes a certain type of motion of a particle in a lattice potential under external periodic driving. The term super refers to the fact that the amplitude in position space of such an oscillation is several orders of magnitude larger than for 'normal' Bloch oscillations.
Bloch oscillations vs. Super Bloch oscillations:
Normal Bloch oscillations and Super Bloch oscillations are closely connected. In general, Bloch oscillations are a consequence of the periodic structure of the lattice potential and the existence of a maximum value kmax of the Bloch wave vector. A constant force F0 results in the acceleration of the particle until the edge of the first Brillouin zone is reached. The following sudden change in velocity from +ħkmax/m to −ħkmax/m can be interpreted as a Bragg scattering of the particle by the lattice potential. As a result, the velocity of the particle never exceeds |ħkmax/m| but oscillates in a saw-tooth like manner, with a corresponding periodic oscillation in position space. Surprisingly, despite the constant force the particle does not translate, but just moves over very few lattice sites.
Bloch oscillations vs. Super Bloch oscillations:
Super Bloch oscillations arise when an additional periodic driving force is added to F0, resulting in a total force of the form F(t) = F0 + ΔF sin(ωt), where ΔF is the driving amplitude. The details of the motion depend on the ratio between the driving frequency ω and the Bloch frequency ωB. A small detuning ω − ωB results in a beat between the Bloch cycle and the drive, with a drastic change of the particle motion. On top of the Bloch oscillation, the motion shows a much larger oscillation in position space that extends over hundreds of lattice sites. Those Super Bloch oscillations directly correspond to the motion of normal Bloch oscillations, just rescaled in space and time.
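The contrast between the two regimes can be illustrated with a semiclassical toy model. The sketch below is not the quantum treatment referenced later; it assumes a single tight-binding band E(k) = −2J cos(kd), so that the group velocity is v = (2Jd/ħ) sin(kd), and integrates ħ dk/dt = F(t) numerically. All parameter values (F0, ΔF, the detuning) are arbitrary illustrative choices.

```python
import math

# Dimensionless illustrative units: hbar = d = J = 1.
hbar = d = J = 1.0
F0 = 0.5                          # static force; Bloch frequency omega_B = F0 * d / hbar
omega_B = F0 * d / hbar
dF = 0.2                          # drive amplitude (hypothetical value)
omega = 1.01 * omega_B            # drive slightly detuned from the Bloch frequency

def trajectory(force, t_max, dt=0.01):
    """Integrate hbar*dk/dt = F(t) and x'(t) = (2*J*d/hbar)*sin(k*d); return x(t)."""
    k = x = t = 0.0
    xs = []
    while t < t_max:
        k += force(t) * dt / hbar
        x += (2 * J * d / hbar) * math.sin(k * d) * dt
        xs.append(x)
        t += dt
    return xs

# Follow the motion for one full period of the slow beat, 2*pi/(omega - omega_B).
t_max = 2 * math.pi / (omega - omega_B)
bloch = trajectory(lambda t: F0, t_max)
super_bloch = trajectory(lambda t: F0 + dF * math.sin(omega * t), t_max)

print("position span, Bloch oscillation      :", max(bloch) - min(bloch))
print("position span, Super Bloch oscillation:", max(super_bloch) - min(super_bloch))
```

With these assumptions the purely static force confines the motion to a few lattice periods, while the weakly detuned drive produces a much larger, much slower excursion in position, which is the qualitative signature of a Super Bloch oscillation.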
Bloch oscillations vs. Super Bloch oscillations:
A quantum mechanical description of the rescaling has been published, and experimental realizations have also been demonstrated.
A theoretical analysis of the properties of Super-Bloch Oscillations, including dependence on the phase of the driving field is found here. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Propetandrol**
Propetandrol:
Propetandrol (INN) (brand name Solevar; former developmental code name SC-7294), or propethandrol, also known as 17α-ethyl-19-nortestosterone 3β-propionate or 17α-ethyl-19-nor-4-androstenediol 3β-propionate, as well as 17α-ethylestr-4-en-3β,17β-diol 3β-propionate, is a synthetic and orally active anabolic–androgenic steroid (AAS) and progestogen and a 17α-alkylated derivative of 19-nortestosterone. It is an androgen ester – specifically, the 3β-propionate ester of norethandrolone (17α-ethyl-19-nortestosterone). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GPS in the earthmoving industry**
GPS in the earthmoving industry:
GPS when applied in the earthmoving industry can be a viable asset to contractors and increase the overall efficiency of the job. Since GPS satellite positioning information is free to the public, it allows for everyone to take advantage of its uses. Heavy equipment manufacturers, in conjunction with GPS guidance system manufacturers, have been co-developing GPS guidance systems for heavy equipment since the late 1990s. These systems allow the equipment operator to use GPS position data to make decisions based on actual grade and design features. Some heavy equipment guidance systems can even operate the machine's implements automatically from a set design that was created for the particular jobsite. GPS guidance systems can have tolerances as small as two to three centimeters making them extremely accurate compared to relying on the operator's skill level. Since the machine's GPS system has the ability to know when it is off the design grade, this can reduce surveying and material costs required for a specific job.
History:
GPS Technology was officially introduced as a guidance system for earthmoving machines in the late 1990s. Since this time, many manufacturers of earthmoving equipment now offer GPS and other guidance systems, as a factory option. Many companies exist that also sell GPS guidance systems for the earthmoving industry as a retrofit option. The two main companies for heavy equipment guidance systems are Trimble and Topcon. In April 2002, Trimble and Caterpillar Inc. began a joint venture known as Caterpillar Trimble Controls Technology LLC (CTCT). "The joint venture develops machine control products that use site design information combined with accurate positioning technology to automatically control dozer blades and other machine tools". Though aftermarket kits were available from various companies to retrofit an existing machine for GPS guidance, Caterpillar Inc. was the first heavy equipment manufacturer to offer GPS guidance systems as a factory option from the dealer called an ARO (Attachment Ready Option). John Deere soon followed with their own version of ARO called "Integrated Grade Control" in 2006 on many Track-Type Tractors (TTT) and Motorgraders (MG).
Types:
While there are various GPS systems currently used in the heavy equipment industry, they can typically be categorized as either "indicate only" or "fully automatic". Both systems can utilize one or two GPS receivers. Using only one GPS receiver limits how the guidance system can orient the machine's position in respect to the site design. Using two GPS receivers gives the guidance system two points of position allowing it to calculate what angle the machine is on relative to the site plan. The following describes "indicate only" and "fully automatic" in more detail.
Indicate only:
Indicate only uses GPS positioning information as a guide to the operator. Depending on the system used, the machine position can be displayed over the specific design site that was created for the earthmoving project. This system relies on the operator to steer and move the machine's implements in order to match the site's design. Indicate only systems are typically cheaper and less complicated since they do not require hardware to tap into the machine's implement control systems. Indicate only systems typically utilize a single GPS receiver mounted on the machine itself and can use an angle sensor to calculate the machine's slope. Accuracy of these systems depends on whether the site has a base station that can relay site-specific corrections. If the site does not have a base station, indicate only systems can just use satellite information; however, the accuracy is usually in the one to two meter range. Utilizing a base station allows site-specific corrections to be transmitted to the machine, increasing the accuracy through Real Time Kinematics (RTK). Site-specific corrections can increase the accuracy of an indicate only system to around two to three centimeters. Machines that typically use indicate only consist of Soil Compactors (SC), Track-Type Tractors (TTT), and Motor Graders (MG). The use of a base station really depends on the accuracy requirements of the project. Some projects, such as clearing overburden at a mine site with a TTT, may not need two to three centimeter accuracy, whereas grading a road base with an MG does.
Fully automatic:
Fully automatic systems allow the ability of the machine's implements to be controlled by the GPS guidance system. This is typically used in the fine grading applications where precise levels of material need to be moved on a predetermined design or grade. The advantages to this system is due to the accuracy that can be achieved with GPS and RTK, but requires an onsite base station. These systems can use either one or two GPS receivers and are mounted on the machine's blade. The more advanced systems use two receivers since it allows the machine to be controlled in a three-dimensional design. Fully automatic systems require the GPS guidance system to be integrated in the machine's implement controls. Some manufacturers sell the machine with these controls already integrated into the machine as an option. Aftermarket kits are available that can retrofit your existing machine to fully automatic control, but requires the GPS system to interface with the machine's implement controls. This is typically done one of two ways. If the machine's implements are controlled using electric over hydraulic (EH), the GPS system can input lever commands in parallel with the machine's implement lever. The output from the GPS system is interpreted by the machine's electronic control module as a lever command given by the operator and moves the implements accordingly. The second method for integrating GPS in the machine's implement controls is by adding a second pilot hydraulic valve in parallel with the machine's pilot hydraulic valve. This second valve is controlled by the GPS system and moves the implement valve according to the system design and blade location. Types of machines that use fully automatic GPS systems include TTT and MG.
Applications:
The key to successfully using GPS in the earthmoving industry is having an accurate site design. The site design, typically created by an engineering firm, can be imported from the original design file into the machine's GPS display. Most GPS guidance systems also have the ability to allow the operator to define a specific grade elevation or grade angle without a specific design. The following describes common machine applications that utilize GPS guidance systems.
Applications:
Track-Type Tractors TTT are an extremely popular machine platform for GPS guidance systems specifically in the smaller sized models that are used for fine grading. Caterpillar Inc. and John Deere both offer fully automatic integrated GPS as an option from the factory on some of these models. One example of GPS being used on a TTT would be on a road project.
Applications:
Motorgraders Motorgraders are another popular machine platform since they also perform fine grading activities that can benefit from the GPS accuracy. Caterpillar Inc. and John Deere also offer some models with integrated GPS.
Applications:
Hydraulic Excavators Hydraulic excavators are just beginning to be integrated using GPS technology and are typically indicate only. Excavators use GPS technology in conjunction with angle sensors integrated in the machine's boom, stick, and bucket. This allows the operator to see how deep they are digging by comparing the actual bucket location to the site design on the GPS display.
Applications:
In recent years, Komatsu has released excavators offering semi-automatic functions. With these functions, the machine will automatically raise the boom and bucket to maintain the predetermined design grade. These machines also offer an auto stop function, preventing the bucket and boom from lowering beyond the predetermined design grade (see https://www.youtube.com/watch?v=X0ELceB420I).
Scrapers: Scrapers use GPS technology and are typically indicate only. The GPS antenna is typically mounted on the bowl of the scraper and allows the operator to compare the depth of the cut versus the site plan. This takes a lot of the ambiguity out of moving large amounts of material.
Applications:
Compactors GPS technology is applied in both trash compactors and soil compactors. Typical systems record where the compactor has been in order to create a map of the area's compaction. Usually the display has various colors that indicate that the machine has compacted the area.
Financial Information:
GPS systems typically have a high initial cost of around $100,000 per machine. When used properly GPS on average can increase productivity by as much as 30% over traditional methods. There is also cost reduction of material (since less is needed) because such high accuracy can be achieved. Some construction projects even require the use of GPS since it can bring down the overall cost of the project due to its efficiency advantages. Some GPS systems allow the user to switch systems to other machines making this tool very versatile. The contractor must plan for greater efficiency, since increasing one aspect of the job by 30% may not increase the overall efficiency, since another area may not be able to keep up. "If you do everything right and boost overall productivity say 30 percent, you’re going to have to line up 30 percent more work in the future or send crews home early".
GPS Limitations:
GPS is extremely versatile in the earthmoving industry, but it does have its limitations. GPS satellite signals can only be received in a non obstructed view of the sky with the exception of clouds. If a contractor wanted to perform grade work in preparation for a concrete floor within a building, for example, the roof would block the view to the GPS satellites, preventing the system from working. Working too close to a structure can also obstruct the machine's view of the sky creating dead zones. High-voltage power-lines can also create dead zones when working underneath them. GPS satellite coverage can also be weaker during certain parts of the day lowering the number of satellites the machine's system can use. This all depends on the geographical location and time of day. Improvements in GPS technology and the addition of GLONASS (Russian GNSS Satellites) satellites have reduced this issue. As mentioned earlier, in order to increase the overall accuracy of GPS you have to purchase and use a base station, which adds additional cost.
Future use:
GPS continues to be integrated in the construction industry and soon will be an industry standard. Autonomous cars that utilize GPS are currently being developed, and someday the earthmoving industry could incorporate such features. Already, new machines are coming equipped with GPS integrated from the factory. The possibilities are endless and who knows what other practical uses for GPS in the earthmoving industries will be discovered.
Resources:
The first user-oriented web resource for prospective 3D machine control users was created in 2010. The Kellogg Report publicized a detailed comparison of the major systems available on the market, evaluating more than 200 system features. The report continues to be updated as the technology evolves. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Shopper marketing**
Shopper marketing:
'Shopper marketing' is "a discipline that focuses on the customer experience and the customer journey." It focuses on the consumer's path to purchasing a product, from first being aware of the product, to consideration and through to the purchase of it. It separates itself from retail marketing which focuses on engaging the customer in-store only. 'Shopper marketing' is not limited to in-store marketing activities, a common and inaccurate assumption that impairs the spread of any industry definition. Shopper marketing is part of an overall integrated marketing approach that considers the needs and wants of a particular "shopper" in order to drive consumption. Shopper insight data collected by shopper marketers includes the consideration of their shopper needs, preferred retail environments and in-store activity.
Shopper marketing:
Unilever defines shopper insight as a "focus on the process that takes place between that first thought the consumer has about purchasing an item, all the way through the selection of that item." They describe it as the analysis of consumer behavior and decision-making from the moment they consider buying a product until they choose it. It aims to understand the motivations, preferences, and influences that affect the shopping experience and outcome.
Description:
Manufacturers are able to develop strategic plans using high-quality shopper marketing data, allowing for a clear understanding of consumer preferences and behaviours. According to industry studies prior to 2010, manufacturer investment in shopper marketing is growing more than 21% annually. According to the company's financial statements, Procter & Gamble invests at least 500 million dollars in shopper marketing each year. Shopper marketing is practiced by leading European companies such as Unilever and Beiersdorf, and the discipline is developed further by the likes of Phenomena Group, Europe's first shopper marketing agency. The following statistics have caused the reapportionment of marketing investment from consumer marketing to shopper marketing. Each brand performs differently based on shopper need states, shopper trip types, retailer formats, brand importance, brand relevance and a host of other factors: 70% of brand selections are made at stores; 68% of buying decisions are unplanned; 5% are loyal to the brand of one product group. Practitioners believe that effective shopper marketing is increasingly important to achieve success in the marketplace.
Partial areas:
History: For almost 50 years, large-scale consumer packaged goods manufacturers had many possibilities available to spark continued business growth:
1970s: product commoditisation
1980s: channel consolidation
1990s: increased consumerism
2000s: globalisation.
Partial areas:
The organisation itself was structured accordingly to maximise growth agents through the efficiencies of mass production, distribution and sales. Marketers were organised into silos depending on which function they served: product marketers developed and positioned goods for retailers to sell; distribution marketers took several product brands and managed lifecycle and supply chain issues by channel; and consumer-driven marketers, who were in the field among the channel(s), increased share or penetration. The marketing organisation structure was originally built around the four Ps of marketing: product, price, placement and promotion. The four Ps of marketing were a product of the 1950s. The inward-facing organisational structure of marketing ceased after the 1950s. Businesses no longer manufactured products with limited information provided to consumers. Marketing was used as a tool to become more consumer-centric towards customers who were privy to more information about products before purchase.
Partial areas:
Retail shopping environment: In late 2004, a new model for growth emerged as product manufacturers and retailers alike identified the need to uniquely influence the shopping experience. It was called shopper marketing (SM). It wasn't until 2010 that it was formally defined by the Retail Commission on Shopper Marketing as follows: "Shopper Marketing is the use of insights-driven marketing and merchandising initiatives to satisfy the needs of targeted shoppers, enhance the shopping experience and improve business results and brand equity for retailers and manufacturers."
Buying behaviour data:
Several different data collection methods provide information on the shopper's buying behaviour of a given brand: observations, intercepts, focus groups, diaries, point-of-sale and other data.
Observations made before entering a store, in the store, and after exiting a store clarify when, what, where, why, who and how shopper behaviour occurs.
Buying behaviour data:
Key insights into consumers include: the length of the buying process, the items the shopper noticed, touched, studied, the items the shopper bought, as well as the purchase methods influencing the process. Interviews help uncover motives guiding the buying behaviours. The matters commonly clarified are: the likelihood of product substitution and the identification of substitutes; values and attitudes; desires and motivational factors; as well as lifestyle and life situation. Point-of-sale data provide information on which products were bought, when and for how much (and sometimes by whom when a frequent shopper card can be used).
Buying behaviour data:
Another influence is the number of other shoppers in a store at a given time. For example, research by Martin (2012) in a retailing context found that male and female shoppers who were accidentally touched from behind by other shoppers left a store earlier than people who had not been touched and evaluated brands more negatively, resulting in the Accidental Interpersonal Touch effect.
Segmenting shoppers:
When conducting shopper segmenting, the market is divided into essential and measurable groups, that is, segments on the basis of the buying behaviour data. Shopper segmenting makes it easier to answer the requirements of individual segments. For example, price-sensitive and traditional shoppers clearly differ from one another as far as their buying behaviour is concerned. Segmenting makes it possible to target marketing measures at the most profitable shoppers.
Segmenting shoppers:
The value of segmenting shoppers is debated in the shopper marketing industry. For retailers it can provide direction on positioning relative to competitors as well as in terms of store locations. Loyalty cards can provide one of the richest sources of segmentation data. For consumer product manufacturers, shopper segmentation is less useful, at least in physical stores, as the shelf and displays communicate to all store shoppers in the same way. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Amlodipine**
Amlodipine:
Amlodipine, sold under the brand name Norvasc among others, is a calcium channel blocker medication used to treat high blood pressure and coronary artery disease. It is taken by mouth. Common side effects include swelling, feeling tired, abdominal pain, and nausea. Serious side effects may include low blood pressure or heart attack. Whether use is safe during pregnancy or breastfeeding is unclear. When used by people with liver problems, and in elderly individuals, doses should be reduced. Amlodipine works partly by increasing the size of arteries. It is a long-acting calcium channel blocker of the dihydropyridine type. Amlodipine was patented in 1982, and approved for medical use in 1990. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2020, it was the fifth most commonly prescribed medication in the United States, with more than 69 million prescriptions.
Medical uses:
Amlodipine is used in the management of hypertension and coronary artery disease in people with either stable angina (where chest pain occurs mostly after physical or emotional stress) or vasospastic angina (where it occurs in cycles) and without heart failure. It can be used as either monotherapy or combination therapy for the management of hypertension or coronary artery disease. Amlodipine can be administered to adults and children 6–17 years of age. Calcium channel blockers, including amlodipine, may provide greater protection against stroke than beta blockers. Evidence from two meta-analyses has reported no significant difference between calcium channel blockers, ACE inhibitors, diuretics and angiotensin receptor blockers in stroke protection, while one 2015 meta-analysis suggested that calcium channel blockers offer the greatest protection against stroke among the classes of antihypertensives. Amlodipine, along with other calcium channel blockers, is considered the first choice in the pharmacological management of Raynaud's phenomenon.
Medical uses:
Combination therapy Amlodipine can be given as a combination therapy with a variety of medications: Amlodipine/atorvastatin, where amlodipine is given for hypertension or CAD and atorvastatin prevents cardiovascular events, or if the person also has high cholesterol.
Amlodipine/aliskiren or amlodipine/aliskiren/hydrochlorothiazide if amlodipine alone cannot reduce blood pressure. Aliskiren is a renin inhibitor, which works to reduce primary hypertension (that with no known cause) by binding to renin and preventing it from initiating the renin–angiotensin system (RAAS) pathway to increase blood pressure. Hydrochlorothiazide is a diuretic and decreases overall blood volume.
Amlodipine/benazepril if either drug has failed individually, or amlodipine alone caused edema. Benazepril is an ACE inhibitor and blocks the conversion of angiotensin I to angiotensin II in the RAAS pathway.
Amlodipine/celecoxib Amlodipine/lisinopril Amlodipine/olmesartan or amlodipine/olmesartan/hydrochlorothiazide if amlodipine is insufficient in reducing blood pressure. Olmesartan is an angiotensin II receptor antagonist and blocks part of the RAAS pathway.
Amlodipine/perindopril if using amlodipine alone caused edema. Perindopril is a long-lasting ACE inhibitor.
Amlodipine/telmisartan, where telmisartan is an angiotensin II receptor antagonist.
Amlodipine/valsartan or amlodipine/valsartan/hydrochlorothiazide, where valsartan is an angiotensin II receptor antagonist.
Contraindications:
The only absolute contraindication to amlodipine is an allergy to amlodipine or any other dihydropyridine. Other situations occur, however, where amlodipine generally should not be used. In patients with cardiogenic shock, where the heart's ventricles are not able to pump enough blood, calcium channel blockers exacerbate the situation by preventing the flow of calcium ions into cardiac cells, which is required for the heart to pump. Use in patients with aortic stenosis (narrowing of the aorta where it meets the left ventricle) is generally safe, since amlodipine does not inhibit the ventricle's function, but it can still cause collapse in cases of severe stenosis. In unstable angina (excluding variant angina), amlodipine can cause a reflex increase in cardiac contractility (how hard the ventricles squeeze) and heart rate, which together increase the demand for oxygen by the heart itself. Patients with severe hypotension can have their low blood pressure exacerbated, and patients in heart failure can get pulmonary edema. Those with impaired liver function are unable to metabolize amlodipine to its full extent, giving it a longer half-life than typical. Amlodipine's safety in pregnancy has not been established, although reproductive toxicity at high doses is known. Whether amlodipine enters the milk of breastfeeding mothers is also unknown. Those who have heart failure, or recently had a heart attack, should take amlodipine with caution.
Adverse effects:
Some common dose-dependent adverse effects of amlodipine include vasodilatory effects, peripheral edema, dizziness, palpitations, and flushing. Peripheral edema (fluid accumulation in the tissues) occurs at a rate of 10.8% at a 10-mg dose (versus 0.6% for placebos), and is three times more likely in women than in men. Amlodipine causes more dilation in the arterioles and precapillary vessels than the postcapillary vessels and venules. The increased dilation allows for more blood, which is unable to push through to the relatively constricted postcapillary venules and vessels; the pressure causes much of the plasma to move into the interstitial space. Amlodipine-associated edema can be avoided by adding an ACE inhibitor or angiotensin II receptor antagonist. Of the other dose-dependent side effects, palpitations (4.5% at 10 mg vs. 0.6% in placebos) and flushing (2.6% vs. 0%) occurred more often in women; dizziness (3.4% vs. 1.5%) had no sex bias. Common but not dose-related adverse effects are fatigue (4.5% vs. 2.8% with a placebo), nausea (2.9% vs. 1.9%), abdominal pain (1.6% vs. 0.3%), and drowsiness (1.4% vs. 0.6%). Side effects occurring less than 1% of the time include: blood disorders, impotence, depression, peripheral neuropathy, insomnia, tachycardia, gingival enlargement, hepatitis, and jaundice. Amlodipine-associated gingival overgrowth is a relatively common side effect with exposure to amlodipine. Poor dental health and buildup of dental plaque are risk factors. Amlodipine may increase the risk of worsening angina or acute myocardial infarction, especially in those with severe obstructive coronary artery disease, upon dosage initiation or increase. However, depending on the situation, amlodipine inhibits constriction and restores blood flow in coronary arteries as a result of its acting directly on vascular smooth muscle, causing a reduction in peripheral vascular resistance and a consequent reduction in blood pressure.
Overdose:
Although rare, amlodipine overdose toxicity can result in widening of blood vessels, severe low blood pressure, and fast heart rate. Toxicity is generally managed with fluid replacement and monitoring of ECG results, vital signs, respiratory system function, glucose levels, kidney function, electrolyte levels, and urine output. Vasopressors are also administered when low blood pressure is not alleviated by fluid resuscitation.
Interactions:
Several drugs interact with amlodipine to increase its levels in the body. CYP3A inhibitors, by nature of inhibiting the enzyme that metabolizes amlodipine, CYP3A4, are one such class of drugs. Others include the calcium-channel blocker diltiazem, the antibiotic clarithromycin, and possibly some antifungals. Amlodipine causes several drugs to increase in levels, including cyclosporine, simvastatin, and tacrolimus (the increase in the last one being more likely in people with CYP3A5*3 genetic polymorphisms). When more than 20 mg of simvastatin, a lipid-lowering agent, are given with amlodipine, the risk of myopathy increases. The FDA issued a warning to limit simvastatin to a maximum dose of 20 mg if taken with amlodipine based on evidence from the SEARCH trial. Giving amlodipine with Viagra increases the risk of hypotension.
Pharmacology:
Amlodipine is a long-acting calcium channel antagonist that selectively inhibits calcium ion influx across cell membranes. It targets L-type calcium channels in muscle cells and N-type calcium channels in the central nervous system which are involved in nociceptive signalling and pain perception. Amlodipine has an inhibitory effect on calcium influx in smooth muscle cells to inhibit contraction. Amlodipine significantly reduces total vascular resistance without decreasing cardiac output, as expressed by the pressure-rate product and cardiac contractility, in comparison with verapamil, a non-dihydropyridine. In turn, following treatment lasting a month with amlodipine, cardiac output is significantly enhanced. Unlike verapamil, which has efficacy in moderation of emotional arousal and reduces cardiac load without lowering cardiac output demands, amlodipine increases the cardiac output response concomitantly with increased functional cardiac load.
Pharmacology:
Mechanism of action: Amlodipine is an angioselective calcium channel blocker and inhibits the movement of calcium ions into vascular smooth muscle cells and cardiac muscle cells, which inhibits the contraction of cardiac muscle and vascular smooth muscle cells. Amlodipine inhibits calcium ion influx across cell membranes, with a greater effect on vascular smooth muscle cells. This causes vasodilation and a reduction in peripheral vascular resistance, thus lowering blood pressure. Its effects on cardiac muscle also prevent excessive constriction in the coronary arteries. Negative inotropic effects can be detected in vitro, but such effects have not been seen in intact animals at therapeutic doses. Among the two stereoisomers [R(+), S(–)], the (–) isomer has been reported to be more active than the (+) isomer. Serum calcium concentration is not affected by amlodipine. It also specifically inhibits the currents of L-type Cav1.3 channels in the zona glomerulosa of the adrenal gland. The mechanisms by which amlodipine relieves angina are: Stable angina: amlodipine reduces the total peripheral resistance (afterload) against which the heart works and reduces the rate pressure product, thereby lowering myocardial oxygen demand, at any given level of exercise.
Pharmacology:
Variant angina: amlodipine blocks spasm of the coronary arteries and restores blood flow in coronary arteries and arterioles in response to calcium, potassium, epinephrine, serotonin, and thromboxane A2 analog in experimental animal models and in human coronary vessels in vitro.Amlodipine has additionally been found to act as an antagonist of the mineralocorticoid receptor, or as an antimineralocorticoid.
Pharmacology:
Pharmacokinetics: Amlodipine has been studied in healthy volunteers following oral administration of 14C-labelled drug. Amlodipine is well absorbed by the oral route with a mean oral bioavailability around 60%; the half-life of amlodipine is about 30 h to 50 h, and steady-state plasma concentrations are achieved after 7 to 8 days of daily dosing. In the blood it has a high plasma protein binding of 97.5%. Its long half-life and high bioavailability are largely due to its high pKa (8.6); it is ionized at physiological pH, and thus can strongly attract proteins. It is slowly metabolized in the liver by CYP3A4, with its amine group being oxidized and its side ester chain being hydrolyzed, resulting in an inactive pyridine metabolite. Renal elimination is the major route of excretion, with about 60% of an administered dose recovered in urine, largely as inactive pyridine metabolites. However, renal impairment does not significantly influence amlodipine elimination. 20–25% of the drug is excreted in the faeces.
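The relationship between the quoted half-life and the 7-to-8-day approach to steady state can be illustrated with a deliberately simplified accumulation calculation. The sketch below assumes a one-compartment model with first-order elimination, a half-life of 40 h picked from the 30–50 h range above, an arbitrary unit dose, and once-daily dosing; it illustrates the arithmetic only and is not a dosing model.

```python
import math

half_life_h = 40.0          # assumed value from the 30-50 h range quoted above
dose_interval_h = 24.0      # once-daily dosing
k_e = math.log(2) / half_life_h
remaining_per_day = math.exp(-k_e * dose_interval_h)   # fraction left after 24 h

# With repeated unit doses, the post-dose levels form a geometric series
# whose limit defines the steady-state level.
steady_state = 1.0 / (1.0 - remaining_per_day)

level = 0.0
for day in range(1, 11):
    level = level * remaining_per_day + 1.0            # eliminate for a day, then dose
    print(f"day {day:2d}: {100 * level / steady_state:5.1f}% of steady state")
```

Under these assumptions the level passes roughly 95% of steady state around days 7–8, consistent with the figure quoted above.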
History:
Pfizer's patent protection on Norvasc lasted until 2007; total patent expiration occurred later in 2007. A number of generic versions are available. In the United Kingdom, tablets of amlodipine from different suppliers may contain different salts. The strength of the tablets is expressed in terms of amlodipine base, i.e., without the salts. Tablets containing different salts are therefore considered interchangeable. A fixed-dose combination of amlodipine and perindopril, an angiotensin converting enzyme inhibitor, is also available. The medication comes as the besilate, mesylate or maleate salt.
Veterinary use:
Amlodipine is most often used to treat systemic hypertension in both cats and dogs. In cats, it is the first line of treatment due to its efficacy and few side effects. Systemic hypertension in cats is usually secondary to another abnormality, such as chronic kidney disease, and so amlodipine is most often administered to cats with kidney disease. While amlodipine is used in dogs with systemic hypertension, it is not as efficacious. Amlodipine is also used to treat congestive heart failure due to mitral valve regurgitation in dogs. By decreasing resistance to forward flow in the systemic circulation it results in a decrease in regurgitant flow into the left atrium. Similarly, it can be used on dogs and cats with left-to-right shunting lesions such as ventricular septal defect to reduce the shunt. Side effects are rare in cats. In dogs, the primary side effect is gingival hyperplasia. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mobile browser**
Mobile browser:
A mobile browser is a web browser designed for use on a mobile device such as a mobile phone or PDA. Mobile browsers are optimized to display Web content most effectively on small screens on portable devices. Mobile browser software must be small and efficient to accommodate the low memory capacity and low-bandwidth of wireless handheld devices. Traditional smaller feature phones use stripped-down mobile web browsers; however, most current smartphones have full-fledged browsers that can handle the latest web technologies, such as CSS 3, JavaScript, and Ajax.
Mobile browser:
Websites designed to be usable in mobile browsers may be collectively referred to as the mobile web. Today, over 75% of websites are "mobile friendly", by detecting when a request comes from a mobile device and automatically creating a "mobile" version of the page, designed to fit the device's screen and be usable with a touch interface, for example the Wikipedia website (see illustration).
Underlying technology:
The mobile browser usually connects via the cellular network, or increasingly via Wireless LAN, using standard HTTP over TCP/IP and displays web pages written in HTML. Historically, early feature phones were restricted to only displaying pages specifically designed for mobile use, written in XHTML Mobile Profile (WAP 2.0), or WML (which evolved from HDML). WML and HDML are stripped-down formats suitable for transmission across limited bandwidth, and wireless data connection called WAP. In Japan, DoCoMo defined the i-mode service based on i-mode HTML, which is an extension of Compact HTML (C-HTML), a simple subset of HTML.
Underlying technology:
WAP 2.0 specifies XHTML Mobile Profile plus WAP CSS, subsets of the W3C's standard XHTML and CSS with minor mobile extensions.
Smartphone mobile browsers are full-featured Web browsers capable of HTML, CSS, ECMAScript, as well as mobile technologies such as WML, i-mode HTML, or cHTML.
To accommodate small screens, they use Post-WIMP interfaces.
History:
The first mobile browser for a PDA was PocketWeb for the Apple Newton created at TecO in 1994, followed by the first commercial product NetHopper released in August 1996. The so-called "microbrowser" technologies such as WAP, NTT DoCoMo's i-mode platform and Openwave's HDML platform fueled the first wave of interest in wireless data services.
History:
The first deployment of a mobile browser on a mobile phone was probably in 1997 when Unwired Planet (later to become Openwave) put their "UP.Browser" on AT&T handsets to give users access to HDML content.A British company, STNC Ltd., developed a mobile browser (HitchHiker) in 1997 that was intended to present the entire device UI. The demonstration platform for this mobile browser (Webwalker) had 1 MIPS total processing power. This was a single core platform, running the GSM stack on the same processor as the application stack. In 1999 STNC was acquired by Microsoft and HitchHiker became Microsoft Mobile Explorer 2.0, not related to the primitive Microsoft Mobile Explorer 1.0. HitchHiker is believed to be the first mobile browser with a unified rendering model, handling HTML and WAP along with ECMAScript, WMLScript, POP3 and IMAP mail in a single client. Although it was not used, it was possible to combine HTML and WAP in the same pages although this would render the pages invalid for any other device. Mobile Explorer 2.0 was available on the Benefon Q, Sony CMD-Z5, CMD-J5, CMD-MZ5, CMD-J6, CMD-Z7, CMD-J7 and CMD-J70. With the addition of a messaging kernel and a driver model, this was powerful enough to be the operating system for certain embedded devices. One such device was the Amstrad e-m@iler and e-m@iler 2. This code formed the basis for MME3.
History:
Multiple companies offered browsers for the Palm OS platform. The first HTML browser for Palm OS 1.0 was HandWeb by Smartcode software, released in 1997. HandWeb included its own TCP/IP stack, and Smartcode was acquired by Palm in 1999. Mobile browsers for the Palm OS platform multiplied after the release of Palm OS 2.0, which included a TCP/IP stack. A freeware (although later shareware) browser for the Palm OS was Palmscape, written in 1998 by Kazuho Oku in Japan, who went on to found Ilinx. It was still in limited use as late as 2003. Qualcomm also developed the Eudora Web browser, and launched it with the Palm OS based QCP smartphone. ProxiWeb was a proxy-based Web browsing solution, developed by Ian Goldberg and others at the University of California, Berkeley and later acquired by PumaTech.
History:
Released in 2001, Mobile Explorer 3.0 added iMode compatibility (cHTML) plus numerous proprietary schemes. By imaginatively combining these proprietary schemes with WAP protocols, MME3.0 implemented OTA database synchronisation, push email, push information clients (not unlike a 'Today Screen') and PIM functionality. The cancelled Sony Ericsson CMD-Z700 was to feature heavy integration with MME3.0. Although Mobile Explorer was ahead of its time in the mobile phone space, development was stopped in 2002.
History:
Also in 2002, Palm, Inc. offered Web Pro on Tungsten PDAs based upon a Novarra browser. PalmSource offered a competing Web browser based on Access NetFront.
Opera software pioneered with its Small Screen Rendering and Medium Screen Rendering technology. The Opera web browser is able to reformat regular web pages for optimal fit on small screens and medium-sized (PDA) screens. It was also the first widely available mobile browser to support Ajax and the first mobile browser to pass the Acid2 test.
Distinct from a mobile browser is a web-based emulator, which uses a "Virtual Handset" to display WAP pages on a computer screen, implemented either in Java or as an HTML transcoder.
Popular mobile browsers:
The following are some of the more popular mobile browsers. Some mobile browsers are really miniaturized web browsers, so some mobile device providers also provide browsers for desktop and laptop computers.
Default browsers for mobile and tablet (current and defunct)
User-installable mobile browsers (current and defunct)
Mobile HTML transcoders: Mobile transcoders reformat and compress web content for mobile devices and must be used in conjunction with built-in or user-installed mobile browsers. The following are several leading mobile transcoding services.
Openwave Web Adapter - used by Vodacom
Vision Mobile Server
Skweezer - used by Orange, Etisalat, JumpTap, Medio, Miva, and others
Opera Mini
Defunct transcoders or sites with removed transcoding functionality:
Google Mobilizer (Google Web Transcoder) — Defunct since February 2016. Replaced with Google Web Light.
Smartphone site — The last extant snapshot of the site is from 5 September 2012.
Popular mobile browsers:
Device-Browser combinations on Cloud Finch — The last snapshot of a functional Finch site is from 28 February 2009. This defunct service should not be confused with Finch (software). Finch the transcoder became Squeezr!Beta as early as 8 December 2009.Squeezr!Beta — The last functional Squeezr!Beta page is dated 13 February 2010. As of 28 August 2010, Squeezr!Beta had closed; the last page of Squeezr as authored by Adam Brenecki is dated 2 January 2012. Since 2013, squeezr.net redirected to squeezr.it, which is a different service, and not related to Adam Brenecki.
Popular mobile browsers:
Microsoft Bing — the option to enable or disable "Optimize web pages for your phone" in "Search settings" is not visible in Bing's mobile version as of March 2018. (The mobile version can be accessed with a phone or tablet, or when setting a web browser to identify itself with a mobile-based user agent string.) MobileLeap Transcoding Engine, by MobileLeap Inc. As of March 2018, web page source code includes JavaScript from the domain parking company Sedo) — The site wouldn't allow entry without a cookie, so a typical crawler would be redirected to mlvb's cookiecheck page, the last snapshot of which is from 12 October 2017.
Popular mobile browsers:
Mowser (mowser.com) — Alternately marketed with the mowser.mobi domain name, which is now a permanent deadlink. The last snapshot of a working page is dated 22 September 2017. As of 30 March 2018, the site has been shut down. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Entrecôte**
Entrecôte:
In French, entrecôte (French pronunciation: [ɑ̃.tʁə.kot]) is a premium cut of beef used for steaks and roasts.
A traditional entrecôte is a boneless cut from the rib area corresponding to the steaks known in different parts of the English-speaking world as rib, rib eye, Scotch fillet, club, or Delmonico.
Entrecôte:
The muscle group concerned is the longissimus dorsi, which runs down the back of the animal adjacent to the vertebrae and above the rib cage, and continues into the hind quarter. Once past the rib cage into the area adjacent to the lumbar vertebrae, this muscle group is no longer called an "entrecôte"—at that point it becomes a sirloin/strip steak (UK/N.Am, respectively), or a contre-filet in French. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Derail**
Derail:
A derail or derailer is a device used to prevent fouling (blocking or compromising) of a rail track (or collision with anything present on the track, such as a person, or a train) by unauthorized movements of trains or unattended rolling stock. The device works by derailing the equipment as it rolls over or through it.
Although accidental derailment is damaging to equipment and track, and requires considerable time and expense to remedy, derails are used in situations where there is a risk of greater damage to equipment, injury or death if equipment is allowed to proceed past the derail point.
Applications:
Derails may be applied:
where sidings meet main lines or other through tracks
at junctions or other crossings to protect the interlocking against unauthorized movement
temporarily at an area where crews are working on a rail line
approaching a drawbridge, dead end, or similar hazard.
Design:
There are four basic forms of derail.
Design:
Wedge: The most common form is a wedge-shaped piece of steel which fits over the top of the rail. If a car or locomotive attempts to roll over it, the wheel flange is lifted over the rail to the outside, derailing it. When not in use, the derail folds away, leaving the rail unobstructed. It can be manually or remotely operated; in the former case it will have a lock applied to prevent it from being moved by unauthorized personnel. This type is common on North American railroads.
Design:
Split rail: The second type of derail is the "split rail" type. These are basically a complete or partial railroad switch which directs the errant rolling stock away from the main line. This form is common throughout the UK, where it is called trap points or catch points.
Design:
Portable: The third type of derail is the portable derail, which is used by railroad mechanical crews, as well as some industries. This is often used in conjunction with blue flag rules (meaning equipment on the track must not be moved, as workers are on or near the equipment) and is temporary in nature. Portable derails are placed onto one side of the rail with the derail pointed to the outside of the track; a part of the derail is then tightened down to the rail and secured with a locking mechanism. If the derail is left unlocked for any reason, or does not have a locking mechanism deployed, then the owner of the derail can face substantial fines if found by an FRA inspector (49 CFR 218.109).
Design:
Powered: The fourth type of derailer is the powered or motorized derailer, electronically powered through an actuator. This type of derailer can be controlled remotely from an external control panel or manually. It is commonly installed as a part of Depot Personnel Protection Systems, to ensure personnel safety in maintenance workshops and depots.
Failures:
Derails have failed on occasion. Examples include:
1958 Newark Bay rail accident: On September 15, 1958 in Newark Bay, New Jersey, United States, a Central Railroad of New Jersey (CNJ) morning commuter train, #3314, ran through a restricting and a stop signal, derailed, and slid off the open Newark Bay lift bridge. Although the derailer did work, it was insufficient, as #3314 had such great speed that it was unable to stop in time.
CSX 8888 incident: On May 15, 2001, CSX 8888, pulling a train of 47 cars including some loaded with hazardous chemicals, ran uncontrolled for two hours at up to 82 kilometers per hour (51 mph). A portable derail was used but failed.
Failures:
Englewood Railway incident: On April 20, 2017, three workers were killed in an accident on the Englewood Railway in Woss, British Columbia, when 11 runaway rail cars full of logs crashed into them and their equipment while they were working on the line. The railcars had become uncoupled at the top of the hill and as they rolled out-of-control down the hill, they overpowered the derails which had been installed incorrectly and into rotting rail ties. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sensor array**
Sensor array:
A sensor array is a group of sensors, usually deployed in a certain geometric pattern, used for collecting and processing electromagnetic or acoustic signals. The advantage of using a sensor array over a single sensor is that an array adds new dimensions to the observation, helping to estimate more parameters and improve estimation performance.
For example, an array of radio antenna elements used for beamforming can increase antenna gain in the direction of the signal while decreasing the gain in other directions, i.e., increasing the signal-to-noise ratio (SNR) by amplifying the signal coherently. Another example of a sensor array application is estimating the direction of arrival of impinging electromagnetic waves. The related processing method is called array signal processing. A third example is chemical sensor arrays, which use multiple chemical sensors for fingerprint detection in complex mixtures or sensing environments. Applications of array signal processing include radar/sonar, wireless communications, seismology, machine condition monitoring, astronomical observation, fault diagnosis, etc.
Using array signal processing, the temporal and spatial properties (or parameters) of the impinging signals, corrupted by noise and hidden in the data collected by the sensor array, can be estimated and revealed. This is known as parameter estimation.
Plane wave, time domain beamforming:
Figure 1 illustrates a six-element uniform linear array (ULA). In this example, the sensor array is assumed to be in the far field of the signal source, so that the impinging wavefront can be treated as planar.
Parameter estimation takes advantage of the fact that the distance from the source to each antenna in the array is different, which means that the input data at each antenna will be phase-shifted replicas of each other. Eq. (1) shows the calculation for the extra time it takes to reach each antenna in the array relative to the first one, where c is the velocity of the wave.
Each sensor is associated with a different delay:

\Delta t_i = \frac{(i-1)\,d\cos\theta}{c}, \qquad i = 1, 2, \ldots, M \qquad (1)

where d is the spacing between adjacent sensors. The delays are small but not trivial. In the frequency domain, they appear as phase shifts among the signals received by the sensors. The delays are closely related to the incident angle and the geometry of the sensor array. Given the geometry of the array, the delays or phase differences can be used to estimate the incident angle. Eq. (1) is the mathematical basis behind array signal processing. Simply summing the signals received by the sensors and calculating the mean value gives the result

y(t) = \frac{1}{M}\sum_{i=1}^{M} x_i(t-\Delta t_i) \qquad (2)

Because the received signals are out of phase, this mean value does not give an enhanced signal compared with the original source. Heuristically, if we can find the delays of each of the received signals and remove them prior to the summation, the mean value

\tilde{y}(t) = \frac{1}{M}\sum_{i=1}^{M} x_i(t) \qquad (3)

will result in an enhanced signal. The process of time-shifting signals using a well-selected set of delays for each channel of the sensor array so that the signal is added constructively is called beamforming.
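As a concrete illustration of Eqs. (1)–(3), the following minimal sketch (not from the source; the carrier frequency, element spacing and incident angle are assumed values) simulates a narrowband tone arriving at a six-element ULA and compares the naive mean of the received signals with the delay-compensated mean. For a pure tone the delay compensation reduces to a phase shift, so no fractional-sample interpolation is needed.

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the article).
M = 6                     # six-element ULA, as in Figure 1
f = 1.0e9                 # narrowband carrier frequency (Hz)
c = 3.0e8                 # propagation speed (m/s)
d = (c / f) / 2           # half-wavelength element spacing
theta = np.deg2rad(60.0)  # incident angle

t = np.arange(200) / (20 * f)                  # time samples
delays = np.arange(M) * d * np.cos(theta) / c  # Eq. (1)

# Each sensor receives a delayed replica of the same unit-amplitude tone.
x = np.exp(2j * np.pi * f * (t[None, :] - delays[:, None]))

y_naive = x.mean(axis=0)                       # Eq. (2): summands are out of phase
# For a tone, removing the known delays is a pure phase shift per channel.
y_aligned = (x * np.exp(2j * np.pi * f * delays)[:, None]).mean(axis=0)  # Eq. (3)

print(np.abs(y_naive).mean(), np.abs(y_aligned).mean())  # aligned mean is larger
```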
In addition to the delay-and-sum approach described above, a number of spectral-based (non-parametric) approaches and parametric approaches exist which improve various performance metrics. These beamforming algorithms are briefly described below.
Array design:
Sensor arrays have different geometrical designs, including linear, circular, planar, cylindrical and spherical arrays. There are sensor arrays with arbitrary array configurations, which require more complex signal processing techniques for parameter estimation. In a uniform linear array (ULA) the phase of the incoming signal ωτ should be limited to ±π to avoid grating lobes. This means that for an angle of arrival θ in the interval [−π/2, π/2] the sensor spacing should be smaller than half the wavelength, d ≤ λ/2. However, the width of the main beam, i.e., the resolution or directivity of the array, is determined by the length of the array compared to the wavelength. In order to have a decent directional resolution the length of the array should be several times larger than the radio wavelength.
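As a small worked example (all values below are assumed for illustration), the sketch computes the half-wavelength spacing limit for a given signal frequency and a rule-of-thumb main-beam width of roughly λ divided by the array aperture.

```python
import numpy as np

# Assumed example values: a 3 GHz signal observed with a 16-element ULA.
f = 3.0e9          # signal frequency (Hz)
c = 3.0e8          # propagation speed (m/s)
lam = c / f        # wavelength (0.1 m here)
M = 16             # number of elements

d_max = lam / 2            # spacing limit that avoids grating lobes (d <= lambda/2)
aperture = (M - 1) * d_max # physical length of the array

# Rule-of-thumb main-beam width: on the order of lambda / aperture (radians).
beamwidth_deg = np.rad2deg(lam / aperture)
print(d_max, aperture, beamwidth_deg)
```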
Types of sensor arrays:
Antenna arrays:
- Antenna array (electromagnetic), a geometrical arrangement of antenna elements with a deliberate relationship between their currents, forming a single antenna, usually to achieve a desired radiation pattern
- Directional array, an antenna array optimized for directionality
- Phased array, an antenna array where the phase shifts (and amplitudes) applied to the elements are modified electronically, typically in order to steer the antenna system's directional pattern without the use of moving parts
- Smart antenna, a phased array in which a signal processor computes phase shifts to optimize reception and/or transmission to a receiver on the fly, as performed by cellular telephone towers
- Digital antenna array, a smart antenna with multi-channel digital beamforming, usually using the FFT
- Interferometric array of radio telescopes or optical telescopes, used to achieve high resolution through interferometric correlation
- Watson-Watt / Adcock antenna array, using the Watson-Watt technique whereby two Adcock antenna pairs are used to perform an amplitude comparison on the incoming signal
Acoustic arrays:
- Microphone array, used in acoustic measurement and beamforming
- Loudspeaker array, used in acoustic measurement and beamforming
Other arrays:
- Geophone array, used in reflection seismology
- Sonar array, an array of hydrophones used in underwater imaging
Delay-and-sum beamforming:
If a time delay is added to the recorded signal from each microphone that is equal and opposite of the delay caused by the additional travel time, it will result in signals that are perfectly in-phase with each other. Summing these in-phase signals will result in constructive interference that will amplify the SNR by the number of antennas in the array. This is known as delay-and-sum beamforming. For direction of arrival (DOA) estimation, one can iteratively test time delays for all possible directions. If the guess is wrong, the signal will be interfered destructively, resulting in a diminished output signal, but the correct guess will result in the signal amplification described above.
The problem is that, before the incident angle has been estimated, it is impossible to know the time delays that are 'equal and opposite' of the delays caused by the extra travel time. The solution is to try a series of trial angles θ̂ ∈ [0, π] at sufficiently high resolution and, for each, calculate the resulting mean output signal of the array using Eq. (3). The trial angle that maximizes the mean output is the DOA estimate given by the delay-and-sum beamformer.
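The grid search can be sketched as follows (a minimal illustration with assumed parameters, not from the source). The scenario is narrowband, so compensating a hypothesised delay amounts to applying a phase shift to each channel before averaging.

```python
import numpy as np

# Assumed narrowband scenario (illustrative values only).
M, f, c = 6, 1.0e9, 3.0e8
d = (c / f) / 2                                   # half-wavelength spacing
true_theta = np.deg2rad(40.0)                     # DOA to be estimated

t = np.arange(400) / (20 * f)
true_delays = np.arange(M) * d * np.cos(true_theta) / c
rng = np.random.default_rng(0)
x = np.exp(2j * np.pi * f * (t[None, :] - true_delays[:, None]))
x = x + 0.3 * (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))

# Scan trial angles; "steering" removes the hypothesised delays as phase shifts.
trial_angles = np.deg2rad(np.arange(0.0, 180.0, 0.5))
powers = []
for th in trial_angles:
    delays = np.arange(M) * d * np.cos(th) / c
    y = (x * np.exp(2j * np.pi * f * delays)[:, None]).mean(axis=0)
    powers.append(np.mean(np.abs(y) ** 2))        # mean output power, cf. Eq. (3)

estimate = np.rad2deg(trial_angles[int(np.argmax(powers))])
print("delay-and-sum DOA estimate (degrees):", estimate)   # close to 40
```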
Adding an opposite delay to the input signals is equivalent to rotating the sensor array physically. Therefore, it is also known as beam steering.
Spectrum-based beamforming:
Delay-and-sum beamforming is a time domain approach. It is simple to implement, but it may poorly estimate the direction of arrival (DOA). The solution to this is a frequency domain approach. The Fourier transform transforms the signal from the time domain to the frequency domain. This converts the time delay between adjacent sensors into a phase shift. Thus, the array output vector at any time t can be denoted as

x(t) = x_1(t)\,[1,\ e^{-j\omega\Delta t},\ \cdots,\ e^{-j\omega(M-1)\Delta t}]^T

where x_1(t) stands for the signal received by the first sensor. Frequency domain beamforming algorithms use the spatial covariance matrix, represented by R = E\{x(t)x^T(t)\}. This M by M matrix carries the spatial and spectral information of the incoming signals. Assuming zero-mean Gaussian white noise, the basic model of the spatial covariance matrix is given by

R = V S V^H + \sigma^2 I \qquad (4)

where \sigma^2 is the variance of the white noise, I is the identity matrix and V is the array manifold vector V = [v_1\ \cdots\ v_k]^T with v_i = [1,\ e^{-j\omega\Delta t_i},\ \cdots,\ e^{-j\omega(M-1)\Delta t_i}]^T. This model is of central importance in frequency domain beamforming algorithms.
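A short sketch of how these quantities can be formed numerically is given below (assumed parameters; a single narrowband source on a half-wavelength ULA). Note that for complex baseband data the sample covariance is usually built with the conjugate (Hermitian) transpose.

```python
import numpy as np

# Assumed ULA and source parameters (illustrative only).
M, f, c = 8, 1.0e9, 3.0e8
d = (c / f) / 2
omega = 2 * np.pi * f

def steering_vector(theta):
    """v(theta) for a ULA; the phases follow the per-element delays of Eq. (1)."""
    delays = np.arange(M) * d * np.cos(theta) / c
    return np.exp(-1j * omega * delays)

# Simulate N snapshots of one narrowband source at 50 degrees plus white noise.
rng = np.random.default_rng(1)
N = 500
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)        # source amplitudes
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = steering_vector(np.deg2rad(50.0))[:, None] * s[None, :] + noise   # M x N data

# Sample spatial covariance matrix (Hermitian average over the snapshots).
R_hat = X @ X.conj().T / N
print(R_hat.shape)   # (M, M), the matrix used by Eqs. (5)-(8)
```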
Some spectrum-based beamforming approaches are listed below.
Conventional (Bartlett) beamformer:
The Bartlett beamformer is a natural extension of conventional spectral analysis (the spectrogram) to the sensor array. Its spectral power is represented by

\hat{P}_{\mathrm{Bartlett}}(\theta) = v^H R v \qquad (5)

The angle that maximizes this power is an estimation of the angle of arrival.
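A minimal numeric sketch of Eq. (5) follows (all scenario parameters are assumed): the Bartlett spectrum is evaluated on a grid of angles and its peak taken as the DOA estimate.

```python
import numpy as np

# Assumed scenario: 8-element half-wavelength ULA, one source at 50 degrees.
M, N = 8, 500
rng = np.random.default_rng(2)
v0 = np.exp(-1j * np.pi * np.arange(M) * np.cos(np.deg2rad(50.0)))
X = v0[:, None] * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
X = X + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
R = X @ X.conj().T / N                      # sample spatial covariance matrix

# Bartlett spectrum, Eq. (5): P(theta) = v(theta)^H R v(theta).
angles = np.deg2rad(np.arange(0.0, 180.0, 0.5))
P = []
for a in angles:
    v = np.exp(-1j * np.pi * np.arange(M) * np.cos(a))
    P.append(np.real(v.conj() @ R @ v))
print("Bartlett DOA estimate (degrees):", np.rad2deg(angles[int(np.argmax(P))]))
```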
MVDR (Capon) beamformer:
The Minimum Variance Distortionless Response beamformer, also known as the Capon beamforming algorithm, has a power given by

\hat{P}_{\mathrm{Capon}}(\theta) = \frac{1}{v^H R^{-1} v} \qquad (6)

Though the MVDR/Capon beamformer can achieve better resolution than the conventional (Bartlett) approach, this algorithm has higher complexity due to the full-rank matrix inversion. Technical advances in GPU computing have begun to narrow this gap and make real-time Capon beamforming possible.
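The same simulated scenario as in the Bartlett sketch (again with assumed values) can be reused to evaluate the Capon spectrum of Eq. (6); the only structural difference is the inversion of the sample covariance matrix.

```python
import numpy as np

# Same assumed scenario as the Bartlett sketch above.
M, N = 8, 500
rng = np.random.default_rng(2)
v0 = np.exp(-1j * np.pi * np.arange(M) * np.cos(np.deg2rad(50.0)))
X = v0[:, None] * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
X = X + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
R = X @ X.conj().T / N
R_inv = np.linalg.inv(R)          # the full-rank matrix inversion noted in the text

# Capon/MVDR spectrum, Eq. (6): P(theta) = 1 / (v^H R^-1 v).
angles = np.deg2rad(np.arange(0.0, 180.0, 0.5))
P = []
for a in angles:
    v = np.exp(-1j * np.pi * np.arange(M) * np.cos(a))
    P.append(1.0 / np.real(v.conj() @ R_inv @ v))
print("Capon DOA estimate (degrees):", np.rad2deg(angles[int(np.argmax(P))]))
```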
MUSIC beamformer:
The MUSIC (MUltiple SIgnal Classification) beamforming algorithm starts by decomposing the covariance matrix of Eq. (4) into its signal part and its noise part. The eigen-decomposition is represented by

R = U_s \Lambda_s U_s^H + U_n \Lambda_n U_n^H \qquad (7)

MUSIC uses the noise subspace of the spatial covariance matrix in the denominator of the Capon algorithm:

\hat{P}_{\mathrm{MUSIC}}(\theta) = \frac{1}{v^H U_n U_n^H v} \qquad (8)

Therefore the MUSIC beamformer is also known as a subspace beamformer. Compared to the Capon beamformer, it gives much better DOA estimation.
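A sketch of Eqs. (7)–(8) on the same assumed single-source scenario: the covariance matrix is eigen-decomposed, the eigenvectors associated with the smallest eigenvalues are taken as the noise subspace, and the pseudo-spectrum is scanned over angle.

```python
import numpy as np

# Same assumed scenario; one source, so the signal subspace has dimension K = 1.
M, N, K = 8, 500, 1
rng = np.random.default_rng(2)
v0 = np.exp(-1j * np.pi * np.arange(M) * np.cos(np.deg2rad(50.0)))
X = v0[:, None] * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
X = X + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
R = X @ X.conj().T / N

# Eigen-decomposition, Eq. (7); the M - K smallest eigenvalues span the noise subspace.
eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
Un = eigvecs[:, : M - K]                   # noise-subspace eigenvectors

# MUSIC pseudo-spectrum, Eq. (8): P(theta) = 1 / (v^H Un Un^H v).
angles = np.deg2rad(np.arange(0.0, 180.0, 0.5))
P = []
for a in angles:
    v = np.exp(-1j * np.pi * np.arange(M) * np.cos(a))
    proj = Un.conj().T @ v                 # Un^H v
    P.append(1.0 / np.real(proj.conj() @ proj))
print("MUSIC DOA estimate (degrees):", np.rad2deg(angles[int(np.argmax(P))]))
```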
SAMV beamformer:
The SAMV beamforming algorithm is a sparse-signal-reconstruction-based algorithm which explicitly exploits the time-invariant statistical characteristic of the covariance matrix. It achieves superresolution and is robust to highly correlated signals.
Parametric beamformers:
One of the major advantages of the spectrum-based beamformers is their lower computational complexity, but they may not give accurate DOA estimates if the signals are correlated or coherent. An alternative approach is the parametric beamformers, also known as maximum likelihood (ML) beamformers. One example of a maximum likelihood method commonly used in engineering is the least squares method. In the least squares approach, a quadratic penalty function is used. To get the minimum value (or least squared error) of the quadratic penalty function (or objective function), take its derivative (which is linear), set it equal to zero and solve the resulting system of linear equations.
In ML beamformers the quadratic penalty function is applied to the spatial covariance matrix and the signal model. One example of an ML beamformer penalty function is

L_{\mathrm{ML}}(\theta) = \left\| \hat{R} - R \right\|_F^2 = \left\| \hat{R} - \left( V S V^H + \sigma^2 I \right) \right\|_F^2 \qquad (9)

where \| \cdot \|_F is the Frobenius norm. It can be seen from Eq. (4) that the penalty function of Eq. (9) is minimized by approximating the signal model to the sample covariance matrix as accurately as possible. In other words, the maximum likelihood beamformer seeks the DOA θ, the independent variable of the matrix V, such that the penalty function in Eq. (9) is minimized. In practice, the penalty function may look different, depending on the signal and noise model. For this reason, there are two major categories of maximum likelihood beamformers: deterministic ML beamformers and stochastic ML beamformers, corresponding to a deterministic and a stochastic signal model, respectively.
Another way of modifying the penalty function is to simplify its minimization by differentiation. In order to simplify the optimization algorithm, logarithmic operations and the probability density function (PDF) of the observations may be used in some ML beamformers.
The optimization problem is solved by setting the derivative of the penalty function to zero and finding its roots. Because the equation is non-linear, a numerical search approach such as the Newton–Raphson method is usually employed. The Newton–Raphson method is an iterative root-search method with the iteration

x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} \qquad (10)

The search starts from an initial guess x_0. If the Newton–Raphson search method is employed to minimize the beamforming penalty function, the resulting beamformer is called a Newton ML beamformer. Several well-known ML beamformers are described below without providing further details, due to the complexity of the expressions.
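A generic sketch of the iteration in Eq. (10) is shown below, applied to a simple one-dimensional penalty function chosen purely for illustration (it is not an ML beamforming penalty); the iteration is run on the derivative of the penalty so that its root is a stationary point.

```python
# A minimal sketch of the iteration in Eq. (10): find a stationary point of a
# one-dimensional penalty L(x) by applying Newton-Raphson to f(x) = L'(x).
# The penalty below is an arbitrary smooth example, not an ML beamforming penalty.

def f(x):            # f = L', with L(x) = (x - 2)^4 + (x - 2)^2
    return 4.0 * (x - 2.0) ** 3 + 2.0 * (x - 2.0)

def f_prime(x):      # f' = L''
    return 12.0 * (x - 2.0) ** 2 + 2.0

x = 0.0              # initial guess x0
for _ in range(20):
    x = x - f(x) / f_prime(x)   # Eq. (10)

print(x)             # converges to the minimizer x = 2
```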
Deterministic maximum likelihood beamformer:
In the deterministic maximum likelihood beamformer (DML), the noise is modeled as a stationary Gaussian white random process, while the signal waveform is modeled as deterministic (but arbitrary) and unknown.
Stochastic maximum likelihood beamformer:
In the stochastic maximum likelihood beamformer (SML), the noise is modeled as a stationary Gaussian white random process (the same as in DML), whereas the signal waveform is modeled as a Gaussian random process.
Method of direction estimation:
The method of direction estimation (MODE) is a subspace maximum likelihood beamformer, just as MUSIC is a subspace spectral-based beamformer. Subspace ML beamforming is obtained by eigen-decomposition of the sample covariance matrix.
**Decodoku**
Decodoku:
Decodoku is a set of online citizen science games based on quantum error correction. The project is supported by the NCCR QSIT and the University of Basel, and allows the public to get involved with quantum error correction research. The games present the clues left in a quantum computer when errors occur, and encourage the players to work out how best to correct them. These puzzles are presented in a manner similar to typical casual puzzle games, like 2048, Threes or Sudoku, with the scientific background explained via the project website and YouTube channel. Thus far three games have been released: Decodoku, Decodoku:Puzzles and Decodoku:Colors.
**Reflected-wave switching**
Reflected-wave switching:
Reflected-wave switching is a signalling technique used in backplane computer buses such as PCI.
A backplane computer bus is implemented on a multilayer printed circuit board that has at least one (almost) solid layer of copper, called the ground plane, and at least one layer of copper tracks that are used as wires for the signals. Each signal travels along a transmission line formed by its track and the narrow strip of ground plane directly beneath it. This structure is known in radio engineering as microstrip line.
Each signal travels from a transmitter to one or more receivers. Most computer buses use binary digital signals, which are sequences of pulses of fixed amplitude. In order to receive the correct data, the receiver must detect each pulse once, and only once. To ensure this, the designer must take the high-frequency characteristics of the microstrip into account.
When a pulse is launched into the microstrip by the transmitter, its amplitude depends on the ratio of the impedances of the transmitter and the microstrip. The impedance of the transmitter is simply its output resistance. The impedance of the microstrip is its characteristic impedance, which depends on its dimensions and on the materials used in the backplane's construction. As the leading edge of the pulse (the incident wave) passes the receiver, it may or may not have sufficient amplitude to be detected. If it does, then the system is said to use incident-wave switching. This is the system used in most computer buses predating PCI, such as the VME bus.
When the pulse reaches the end of the microstrip, its behaviour depends on the circuit conditions at this point. If the microstrip is correctly terminated (usually with a combination of resistors), the pulse is absorbed and its energy is converted to heat. This is the case in an incident-wave switching bus. If, on the other hand, there is no termination at the end of the microstrip, and the pulse encounters an open circuit, it is reflected back towards its source. As this reflected wave travels back along the microstrip, its amplitude is added to that of the original pulse. As the reflected wave passes the receiver for a second time, this time from the opposite direction, it now has enough amplitude to be detected. This is what happens in a reflected-wave switching bus.
In incident-wave switching buses, reflections from the end of the bus are undesirable and must be prevented by adding termination. Terminating an incident-wave trace varies in complexity from a DC-balanced, AC-coupled termination to a single resistor series terminator, but all incident wave terminations consume both power and space (Johnson and Graham, 1993). However, incident-wave switching buses can be significantly longer than reflected-wave switching buses operating at the same frequency.
If the limited bus length is acceptable, a reflected-wave switching bus will use less power and fewer components to operate at a given frequency. The bus has to be short enough that a pulse can travel twice the length of the backplane (one complete journey for the incident wave, and another for the reflected wave) and stabilize sufficiently to be read within a single bus cycle. The travel time can be calculated by dividing the round-trip length of the bus by the speed of propagation of the signal (which is roughly one half to two-thirds of c, the speed of light in vacuum).
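As a rough worked example (the bus length, propagation speed and clock rate below are illustrative assumptions, not figures from the PCI specification), the round-trip time can be compared with the bus cycle time:

```python
# Rough timing check for a reflected-wave bus (all numbers below are assumptions).
c = 3.0e8            # speed of light in vacuum, m/s
v = 0.6 * c          # assumed propagation speed on the backplane (one half to two thirds of c)
bus_length = 0.30    # assumed backplane trace length, metres

round_trip = 2 * bus_length / v    # incident wave out, reflected wave back
clock = 33e6                       # assumed bus clock, Hz (PCI-like)
cycle = 1.0 / clock

print(round_trip * 1e9, "ns round trip")   # about 3.3 ns
print(cycle * 1e9, "ns bus cycle")         # about 30 ns, so the signal can settle in one cycle
```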
**Black hole**
Black hole:
A black hole is a region of spacetime where gravity is so strong that nothing, including light or other electromagnetic waves, has enough energy to escape it. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of no escape is called the event horizon. Although it has a great effect on the fate and circumstances of an object crossing it, it has no locally detectable features according to general relativity. In many ways, a black hole acts like an ideal black body, as it reflects no light. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly.
Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterize a black hole. David Finkelstein, in 1958, first published the interpretation of "black hole" as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars by Jocelyn Bell Burnell in 1967 sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971.Black holes of stellar mass form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses (M☉) may form by absorbing other stars and merging with other black holes. There is consensus that supermassive black holes exist in the centres of most galaxies.
The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Any matter that falls onto a black hole can form an external accretion disk heated by friction, forming quasars, some of the brightest objects in the universe. Stars passing too close to a supermassive black hole can be shredded into streamers that shine very brightly before being "swallowed." If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses.
History:
The idea of a body so big that even light could not escape was briefly proposed by English astronomical pioneer and clergyman John Michell in a letter published in November 1784. Michell's simplistic calculations assumed such a body might have the same density as the Sun, and concluded that one would form when a star's diameter exceeds the Sun's by a factor of 500, and its surface escape velocity exceeds the usual speed of light. Michell referred to these bodies as dark stars. He correctly noted that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. Scholars of the time were initially excited by the proposal that giant but invisible 'dark stars' might be hiding in plain view, but enthusiasm dampened when the wavelike nature of light became apparent in the early nineteenth century, as if light were a wave rather than a particle, it was unclear what, if any, influence gravity would have on escaping light waves.Modern physics discredits Michell's notion of a light ray shooting directly from the surface of a supermassive star, being slowed down by the star's gravity, stopping, and then free-falling back to the star's surface.
General relativity In 1915, Albert Einstein developed his theory of general relativity, having earlier shown that gravity does influence light's motion. Only a few months later, Karl Schwarzschild found a solution to the Einstein field equations that describes the gravitational field of a point mass and a spherical mass. A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass and wrote more extensively about its properties. This solution had a peculiar behaviour at what is now called the Schwarzschild radius, where it became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this surface was not quite understood at the time. In 1924, Arthur Eddington showed that the singularity disappeared after a change of coordinates, although it took until 1933 for Georges Lemaître to realize that this meant the singularity at the Schwarzschild radius was a non-physical coordinate singularity. Arthur Eddington did however comment on the possibility of a star with mass compressed to the Schwarzschild radius in a 1926 book, noting that Einstein's theory allows us to rule out overly large densities for visible stars like Betelgeuse because "a star of 250 million km radius could not possibly have so high a density as the Sun. Firstly, the force of gravitation would be so great that light would be unable to escape from it, the rays falling back to the star like a stone to the earth. Secondly, the red shift of the spectral lines would be so great that the spectrum would be shifted out of existence. Thirdly, the mass would produce so much curvature of the spacetime metric that space would close up around the star, leaving us outside (i.e., nowhere)."In 1931, Subrahmanyan Chandrasekhar calculated, using special relativity, that a non-rotating body of electron-degenerate matter above a certain limiting mass (now called the Chandrasekhar limit at 1.4 M☉) has no stable solutions. His arguments were opposed by many of his contemporaries like Eddington and Lev Landau, who argued that some yet unknown mechanism would stop the collapse. They were partly correct: a white dwarf slightly more massive than the Chandrasekhar limit will collapse into a neutron star, which is itself stable. But in 1939, Robert Oppenheimer and others predicted that neutron stars above another limit (the Tolman–Oppenheimer–Volkoff limit) would collapse further for the reasons presented by Chandrasekhar, and concluded that no law of physics was likely to intervene and stop at least some stars from collapsing to black holes. Their original calculations, based on the Pauli exclusion principle, gave it as 0.7 M☉; subsequent consideration of neutron-neutron repulsion mediated by the strong force raised the estimate to approximately 1.5 M☉ to 3.0 M☉. Observations of the neutron star merger GW170817, which is thought to have generated a black hole shortly afterward, have refined the TOV limit estimate to ~2.17 M☉. Oppenheimer and his co-authors interpreted the singularity at the boundary of the Schwarzschild radius as indicating that this was the boundary of a bubble in which time stopped. This is a valid point of view for external observers, but not for infalling observers. 
Because of this property, the collapsed stars were called "frozen stars", because an outside observer would see the surface of the star frozen in time at the instant where its collapse takes it to the Schwarzschild radius.Also in 1939, Einstein would attempt to prove that black holes were impossible in his publication "On a Stationary System with Spherical Symmetry Consisting of Many Gravitating Masses", using his theory of general relativity to defend his argument. Months later, Oppenheimer and his student Hartland Snyder would provide the Oppenheimer–Snyder model in their paper "On Continued Gravitational Contraction", which predicted the existence of black holes. In the paper, which made no reference to Einstein's recent publication, Oppenheimer and Snyder used Einstein's own theory of general relativity to show the conditions on how a black hole could develop for the first time in contemporary physics .
Golden age In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, "a perfect unidirectional membrane: causal influences can cross it in only one direction". This did not strictly contradict Oppenheimer's results, but extended them to include the point of view of infalling observers. Finkelstein's solution extended the Schwarzschild solution for the future of observers falling into a black hole. A complete extension had already been found by Martin Kruskal, who was urged to publish it.These results came at the beginning of the golden age of general relativity, which was marked by general relativity and black holes becoming mainstream subjects of research. This process was helped by the discovery of pulsars by Jocelyn Bell Burnell in 1967, which, by 1969, were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities; but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse.In this period more general black hole solutions were found. In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. Through the work of Werner Israel, Brandon Carter, and David Robinson the no-hair theorem emerged, stating that a stationary black hole solution is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge.At first, it was suspected that the strange features of the black hole solutions were pathological artifacts from the symmetry conditions imposed, and that the singularities would not appear in generic situations. This view was held in particular by Vladimir Belinsky, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions. However, in the late 1960s Roger Penrose and Stephen Hawking used global techniques to prove that singularities appear generically. For this work, Penrose received half of the 2020 Nobel Prize in Physics, Hawking having died in 2018. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole.Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation.
Observation On 11 February 2016, the LIGO Scientific Collaboration and the Virgo collaboration announced the first direct detection of gravitational waves, representing the first observation of a black hole merger. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. As of 2023, the nearest known body thought to be a black hole, Gaia BH1, is around 1,560 light-years (480 parsecs) away. Though only a couple dozen black holes have been found so far in the Milky Way, there are thought to be hundreds of millions, most of which are solitary and do not cause emission of radiation. Therefore, they would only be detectable by gravitational lensing.
Etymology John Michell used the term "dark star" in a November 1783 letter to Henry Cavendish, and in the early 20th century, physicists used the term "gravitationally collapsed object". Science writer Marcia Bartusiak traces the term "black hole" to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive.The term "black hole" was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio.In December 1967, a student reportedly suggested the phrase "black hole" at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and it quickly caught on, leading some to credit Wheeler with coining the phrase.
Properties and structure:
The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes under the laws of modern physics is currently an unsolved problem.These properties are special because they are visible from outside a black hole. For example, a charged black hole repels other like charges just like any other charged object. Similarly, the total mass inside a sphere containing a black hole can be found by using the gravitational analog of Gauss's law (through the ADM mass), far away from the black hole. Likewise, the angular momentum (or spin) can be measured from far away using frame dragging by the gravitomagnetic field, through for example the Lense–Thirring effect.When an object falls into a black hole, any information about the shape of the object or distribution of charge on it is evenly distributed along the horizon of the black hole, and is lost to outside observers. The behavior of the horizon in this situation is a dissipative system that is closely analogous to that of a conductive stretchy membrane with friction and electrical resistance—the membrane paradigm. This is different from other field theories such as electromagnetism, which do not have any friction or resistivity at the microscopic level, because they are time-reversible. Because a black hole eventually achieves a stable state with only three parameters, there is no way to avoid losing information about the initial conditions: the gravitational and electric fields of a black hole give very little information about what went in. The information that is lost includes every quantity that cannot be measured far away from the black hole horizon, including approximately conserved quantum numbers such as the total baryon number and lepton number. This behavior is so puzzling that it has been called the black hole information loss paradox.
Physical properties The simplest static black holes have mass but neither electric charge nor angular momentum. These black holes are often referred to as Schwarzschild black holes after Karl Schwarzschild, who discovered this solution in 1916. According to Birkhoff's theorem, it is the only vacuum solution that is spherically symmetric. This means there is no observable difference at a distance between the gravitational field of such a black hole and that of any other spherical object of the same mass. The popular notion of a black hole "sucking in everything" in its surroundings is therefore correct only near a black hole's horizon; far away, the external gravitational field is identical to that of any other body of the same mass. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. While the mass of a black hole can take any positive value, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality Q²/(4πε₀) + c²J²/(GM²) ≤ GM² for a black hole of mass M. Black holes with the minimum possible mass satisfying this inequality are called extremal. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These solutions have so-called naked singularities that can be observed from the outside, and hence are deemed unphysical. The cosmic censorship hypothesis rules out the formation of such singularities, when they are created through the gravitational collapse of realistic matter. This is supported by numerical simulations. Due to the relatively large strength of the electromagnetic force, black holes forming from the collapse of stars are expected to retain the nearly neutral charge of the star. Rotation, however, is expected to be a universal feature of compact astrophysical objects. The black-hole candidate binary X-ray source GRS 1915+105 appears to have an angular momentum near the maximum allowed value. That uncharged limit is J ≤ GM²/c, allowing definition of a dimensionless spin parameter such that 0 ≤ cJ/(GM²) ≤ 1.
Black holes are commonly classified according to their mass, independent of angular momentum, J. The size of a black hole, as determined by the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through r_s ≈ 2.95 (M/M☉) km, where r_s is the Schwarzschild radius and M☉ is the mass of the Sun. For a black hole with nonzero spin and/or electric charge, the radius is smaller, until an extremal black hole could have an event horizon close to r₊ = GM/c².
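A quick numeric check of this relation, r_s = 2GM/c², is sketched below; the Sagittarius A* mass used in the second line is the roughly 4.3 million M☉ value quoted elsewhere in the article, and the constants are standard values.

```python
# Schwarzschild radius r_s = 2GM/c^2, checked against the ~2.95 km per solar mass figure.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def schwarzschild_radius_km(mass_kg):
    return 2 * G * mass_kg / c**2 / 1000.0

print(schwarzschild_radius_km(M_sun))          # about 2.95 km for one solar mass
print(schwarzschild_radius_km(4.3e6 * M_sun))  # about 1.3e7 km for a Sagittarius A*-like mass
```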
Event horizon The defining feature of a black hole is the appearance of an event horizon—a boundary in spacetime through which matter and light can pass only inward towards the mass of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach an outside observer, making it impossible to determine whether such an event occurred.As predicted by general relativity, the presence of a mass deforms spacetime in such a way that the paths taken by particles bend towards the mass. At the event horizon of a black hole, this deformation becomes so strong that there are no paths that lead away from the black hole.To a distant observer, clocks near a black hole would appear to tick more slowly than those farther away from the black hole. Due to this effect, known as gravitational time dilation, an object falling into a black hole appears to slow as it approaches the event horizon, taking an infinite time to reach it. At the same time, all processes on this object slow down, from the viewpoint of a fixed outside observer, causing any light emitted by the object to appear redder and dimmer, an effect known as gravitational redshift. Eventually, the falling object fades away until it can no longer be seen. Typically this process happens very rapidly with an object disappearing from view within less than a second.On the other hand, indestructible observers falling into a black hole do not notice any of these effects as they cross the event horizon. According to their own clocks, which appear to them to tick normally, they cross the event horizon after a finite time without noting any singular behaviour; in classical general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.The topology of the event horizon of a black hole at equilibrium is always spherical. For non-rotating (static) black holes the geometry of the event horizon is precisely spherical, while for rotating black holes the event horizon is oblate.
Singularity At the centre of a black hole, as described by general relativity, may lie a gravitational singularity, a region where the spacetime curvature becomes infinite. For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation. In both cases, the singular region has zero volume. It can also be shown that the singular region contains all the mass of the black hole solution. The singular region can thus be thought of as having infinite density.Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. They can prolong the experience by accelerating away to slow their descent, but only up to a limit. When they reach the singularity, they are crushed to infinite density and their mass is added to the total of the black hole. Before that happens, they will have been torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the "noodle effect".In the case of a charged (Reissner–Nordström) or rotating (Kerr) black hole, it is possible to avoid the singularity. Extending these solutions as far as possible reveals the hypothetical possibility of exiting the black hole into a different spacetime with the black hole acting as a wormhole. The possibility of traveling to another universe is, however, only theoretical since any perturbation would destroy this possibility. It also appears to be possible to follow closed timelike curves (returning to one's own past) around the Kerr singularity, which leads to problems with causality like the grandfather paradox. It is expected that none of these peculiar effects would survive in a proper quantum treatment of rotating and charged black holes.The appearance of singularities in general relativity is commonly perceived as signaling the breakdown of the theory. This breakdown, however, is expected; it occurs in a situation where quantum effects should describe these actions, due to the extremely high density and therefore particle interactions. To date, it has not been possible to combine quantum and gravitational effects into a single theory, although there exist attempts to formulate such a theory of quantum gravity. It is generally expected that such a theory will not feature any singularities.
Photon sphere The photon sphere is a spherical boundary of zero thickness in which photons that move on tangents to that sphere would be trapped in a circular orbit about the black hole. For non-rotating black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius. Their orbits would be dynamically unstable, hence any small perturbation, such as a particle of infalling matter, would cause an instability that would grow over time, either setting the photon on an outward trajectory causing it to escape the black hole, or on an inward spiral where it would eventually cross the event horizon.While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Hence any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. For a Kerr black hole the radius of the photon sphere depends on the spin parameter and on the details of the photon orbit, which can be prograde (the photon rotates in the same sense of the black hole spin) or retrograde.
Ergosphere Rotating black holes are surrounded by a region of spacetime in which it is impossible to stand still, called the ergosphere. This is the result of a process known as frame-dragging; general relativity predicts that any rotating mass will tend to slightly "drag" along the spacetime immediately surrounding it. Any object near the rotating mass will tend to start moving in the direction of rotation. For a rotating black hole, this effect is so strong near the event horizon that an object would have to move faster than the speed of light in the opposite direction to just stand still.The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but is at a much greater distance around the equator.Objects and radiation can escape normally from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole. Thereby the rotation of the black hole slows down. A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei.
Innermost stable circular orbit (ISCO) In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists an innermost stable circular orbit (often called the ISCO), for which any infinitesimal inward perturbation to a circular orbit will lead to spiraling into the black hole, and any outward perturbation will, depending on the energy, result in spiraling in, stably orbiting between apastron and periastron, or escaping to infinity. The location of the ISCO depends on the spin of the black hole; in the case of a Schwarzschild black hole (spin zero) it is r_ISCO = 3r_s = 6GM/c², and it decreases with increasing black hole spin for particles orbiting in the same direction as the spin.
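For a non-rotating black hole the ISCO radius can be evaluated directly from this formula; a small sketch (standard constants, example masses) follows.

```python
# ISCO radius of a non-rotating (Schwarzschild) black hole: r_isco = 6GM/c^2 = 3 r_s.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # standard constants

def isco_radius_km(mass_kg):
    return 6 * G * mass_kg / c**2 / 1000.0

print(isco_radius_km(M_sun))           # about 8.9 km for a one solar-mass black hole
print(isco_radius_km(4.3e6 * M_sun))   # about 3.8e7 km for a Sagittarius A*-like mass
```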
Formation and evolution:
Given the bizarre character of black holes, it was long questioned whether such objects could actually exist in nature or whether they were merely pathological solutions to Einstein's equations. Einstein himself wrongly thought black holes would not form, because he held that the angular momentum of collapsing particles would stabilize their motion at some radius. This led the general relativity community to dismiss all results to the contrary for many years. However, a minority of relativists continued to contend that black holes were physical objects, and by the end of the 1960s, they had persuaded the majority of researchers in the field that there is no obstacle to the formation of an event horizon.
Penrose demonstrated that once an event horizon forms, general relativity without quantum mechanics requires that a singularity will form within. Shortly afterwards, Hawking showed that many cosmological solutions that describe the Big Bang have singularities without scalar fields or other exotic matter. The Kerr solution, the no-hair theorem, and the laws of black hole thermodynamics showed that the physical properties of black holes were simple and comprehensible, making them respectable subjects for research. Conventional black holes are formed by gravitational collapse of heavy objects such as stars, but they can also in theory be formed by other processes.
Gravitational collapse Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. For stars this usually occurs either because a star has too little "fuel" left to maintain its temperature through stellar nucleosynthesis, or because a star that would have been stable receives extra matter in a way that does not raise its core temperature. In either case the star's temperature is no longer high enough to prevent it from collapsing under its own weight.
The collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. The result is one of the various types of compact star. Which type forms depends on the mass of the remnant of the original star left if the outer layers have been blown away (for example, in a Type II supernova). The mass of the remnant, the collapsed object that survives the explosion, can be substantially less than that of the original star. Remnants exceeding 5 M☉ are produced by stars that were over 20 M☉ before the collapse.If the mass of the remnant exceeds about 3–4 M☉ (the Tolman–Oppenheimer–Volkoff limit), either because the original star was very heavy or because the remnant collected additional mass through accretion of matter, even the degeneracy pressure of neutrons is insufficient to stop the collapse. No known mechanism (except possibly quark degeneracy pressure) is powerful enough to stop the implosion and the object will inevitably collapse to form a black hole.
The gravitational collapse of heavy stars is assumed to be responsible for the formation of stellar mass black holes. Star formation in the early universe may have resulted in very massive stars, which upon their collapse would have produced black holes of up to 10^3 M☉. These black holes could be the seeds of the supermassive black holes found in the centres of most galaxies. It has further been suggested that massive black holes with typical masses of ~10^5 M☉ could have formed from the direct collapse of gas clouds in the young universe. These massive objects have been proposed as the seeds that eventually formed the earliest quasars observed already at redshift z ∼ 7. Some candidates for such objects have been found in observations of the young universe. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the light emitted just before the event horizon forms delayed an infinite amount of time. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away.
Primordial black holes and the Big Bang Gravitational collapse requires great density. In the current epoch of the universe these high densities are found only in stars, but in the early universe shortly after the Big Bang densities were much greater, possibly allowing for the creation of black holes. High density alone is not enough to allow black hole formation since a uniform mass distribution will not allow the mass to bunch up. In order for primordial black holes to have formed in such a dense medium, there must have been initial density perturbations that could then grow under their own gravity. Different models for the early universe vary widely in their predictions of the scale of these fluctuations. Various models predict the creation of primordial black holes ranging in size from a Planck mass (m_P = √(ℏc/G) ≈ 1.2×10^19 GeV/c² ≈ 2.2×10^−8 kg) to hundreds of thousands of solar masses. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the expansion rate was greater than the attraction. Following inflation theory there was a net repulsive gravitation in the beginning until the end of inflation. Since then the Hubble flow was slowed by the energy density of the universe.
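The Planck mass quoted above can be checked numerically from its definition, m_P = √(ℏc/G), using standard constants:

```python
import math

# Planck mass m_P = sqrt(hbar * c / G), using standard constants.
hbar = 1.055e-34     # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2

m_p_kg = math.sqrt(hbar * c / G)
m_p_GeV = m_p_kg * c**2 / 1.602e-10   # 1 GeV = 1.602e-10 J

print(m_p_kg)    # about 2.2e-8 kg
print(m_p_GeV)   # about 1.2e19 GeV/c^2
```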
Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang.
High-energy collisions Gravitational collapse is not the only process that could create black holes. In principle, black holes could be formed in high-energy collisions that achieve sufficient density. As of 2002, no such events have been detected, either directly or indirectly as a deficiency of the mass balance in particle accelerator experiments. This suggests that there must be a lower limit for the mass of black holes. Theoretically, this boundary is expected to lie around the Planck mass, where quantum effects are expected to invalidate the predictions of general relativity. This would put the creation of black holes firmly out of reach of any high-energy process occurring on or near the Earth. However, certain developments in quantum gravity suggest that the minimum black hole mass could be much lower: some braneworld scenarios for example put the boundary as low as 1 TeV/c². This would make it conceivable for micro black holes to be created in the high-energy collisions that occur when cosmic rays hit the Earth's atmosphere, or possibly in the Large Hadron Collider at CERN. These theories are very speculative, and the creation of black holes in these processes is deemed unlikely by many specialists. Even if micro black holes could be formed, it is expected that they would evaporate in about 10^−25 seconds, posing no threat to the Earth.
Growth Once a black hole has formed, it can continue to grow by absorbing additional matter. Any black hole will continually absorb gas and interstellar dust from its surroundings. This growth process is one possible way through which some supermassive black holes may have been formed, although the formation of supermassive black holes is still an open field of research. A similar process has been suggested for the formation of intermediate-mass black holes found in globular clusters. Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes.
Evaporation In 1974, Hawking predicted that black holes are not entirely black but emit small amounts of thermal radiation at a temperature ℏc³/(8πGMk_B); this effect has become known as Hawking radiation. By applying quantum field theory to a static black hole background, he determined that a black hole should emit particles that display a perfect black body spectrum. Since Hawking's publication, many others have verified the result through various approaches. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which, for a Schwarzschild black hole, is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes. A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than the Moon. Such a black hole would have a diameter of less than a tenth of a millimeter. If a black hole is very small, the radiation effects are expected to become very strong. A black hole with the mass of a car would have a diameter of about 10^−24 m and take a nanosecond to evaporate, during which time it would briefly have a luminosity of more than 200 times that of the Sun. Lower-mass black holes are expected to evaporate even faster; for example, a black hole of mass 1 TeV/c² would take less than 10^−88 seconds to evaporate completely. For such a small black hole, quantum gravity effects are expected to play an important role and could hypothetically make such a small black hole stable, although current developments in quantum gravity do not indicate this is the case. The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception, however, is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possibility of existence of low mass primordial black holes. NASA's Fermi Gamma-ray Space Telescope launched in 2008 will continue the search for these flashes. If black holes evaporate via Hawking radiation, a solar mass black hole will evaporate (beginning once the temperature of the cosmic microwave background drops below that of the black hole) over a period of 10^64 years. A supermassive black hole with a mass of 10^11 M☉ will evaporate in around 2×10^100 years. Some monster black holes in the universe are predicted to continue to grow up to perhaps 10^14 M☉ during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 10^106 years. Some models of quantum gravity predict modifications of the Hawking description of black holes. In particular, the evolution equations describing the mass loss rate and charge loss rate get modified.
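The figures above can be checked with a short calculation of the Hawking temperature, T = ℏc³/(8πGMk_B), using standard constants; the lunar mass in the comparison is the usual ~7.3×10^22 kg value.

```python
import math

# Hawking temperature T = hbar * c^3 / (8 * pi * G * M * k_B), standard constants.
hbar, c, G, k_B = 1.055e-34, 2.998e8, 6.674e-11, 1.381e-23
M_sun = 1.989e30     # solar mass, kg
M_moon = 7.35e22     # lunar mass, kg (for the comparison in the text)

def hawking_temperature(mass_kg):
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(hawking_temperature(M_sun))   # about 6.2e-8 K, i.e. ~62 nanokelvins

# Mass at which the Hawking temperature equals the 2.7 K microwave background:
M_27 = hbar * c**3 / (8 * math.pi * G * 2.7 * k_B)
print(M_27, M_27 < M_moon)          # about 4.5e22 kg, below the Moon's mass
```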
Observational evidence:
By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. For example, a black hole's existence can sometimes be inferred by observing its gravitational influence on its surroundings.On 10 April 2019, an image was released of a black hole, which is seen magnified because the light paths near the event horizon are highly bent. The dark shadow in the middle results from light paths absorbed by the black hole. The image is in false color, as the detected light halo in this image is not in the visible spectrum, but radio waves.
The Event Horizon Telescope (EHT) is an active program that directly observes the immediate environment of black holes' event horizons, such as the black hole at the centre of the Milky Way. In April 2017, EHT began observing the black hole at the centre of Messier 87. "In all, eight radio observatories on six mountains and four continents observed the galaxy in Virgo on and off for 10 days in April 2017" to provide the data yielding the image in April 2019. After two years of data processing, EHT released the first direct image of a black hole; specifically, the supermassive black hole that lies in the centre of the aforementioned galaxy. What is visible is not the black hole—which shows as black because of the loss of all light within this dark region. Instead, it is the gases at the edge of the event horizon (displayed as orange or red) that define the black hole.On 12 May 2022, the EHT released the first image of Sagittarius A*, the supermassive black hole at the centre of the Milky Way galaxy. The published image displayed the same ring-like structure and circular shadow as seen in the M87* black hole, and the image was created using the same techniques as for the M87 black hole. However, the imaging process for Sagittarius A*, which is more than a thousand times smaller and less massive than M87*, was significantly more complex because of the instability of its surroundings. The image of Sagittarius A* was also partially blurred by turbulent plasma on the way to the galactic centre, an effect which prevents resolution of the image at longer wavelengths.The brightening of this material in the 'bottom' half of the processed EHT image is thought to be caused by Doppler beaming, whereby material approaching the viewer at relativistic speeds is perceived as brighter than material moving away. In the case of a black hole, this phenomenon implies that the visible material is rotating at relativistic speeds (>1,000 km/s [2,200,000 mph]), the only speeds at which it is possible to centrifugally balance the immense gravitational attraction of the singularity, and thereby remain in orbit above the event horizon. This configuration of bright material implies that the EHT observed M87* from a perspective catching the black hole's accretion disc nearly edge-on, as the whole system rotated clockwise. However, the extreme gravitational lensing associated with black holes produces the illusion of a perspective that sees the accretion disc from above. In reality, most of the ring in the EHT image was created when the light emitted by the far side of the accretion disc bent around the black hole's gravity well and escaped, meaning that most of the possible perspectives on M87* can see the entire disc, even that directly behind the "shadow".
Observational evidence:
In 2015, the EHT detected magnetic fields just outside the event horizon of Sagittarius A* and even discerned some of their properties. The field lines that pass through the accretion disc were a complex mixture of ordered and tangled. Theoretical studies of black holes had predicted the existence of magnetic fields.
In April 2023, an image of the shadow of the Messier 87 black hole and the related high-energy jet, viewed together for the first time, was presented.
Observational evidence:
Detection of gravitational waves from merging black holes On 14 September 2015, the LIGO gravitational wave observatory made the first-ever successful direct observation of gravitational waves. The signal was consistent with theoretical predictions for the gravitational waves produced by the merger of two black holes: one with about 36 solar masses, and the other around 29 solar masses. This observation provides the most concrete evidence for the existence of black holes to date. For instance, the gravitational wave signal suggests that the separation of the two objects before the merger was just 350 km (or roughly four times the Schwarzschild radius corresponding to the inferred masses). The objects must therefore have been extremely compact, leaving black holes as the most plausible interpretation.More importantly, the signal observed by LIGO also included the start of the post-merger ringdown, the signal produced as the newly formed compact object settles down to a stationary state. Arguably, the ringdown is the most direct way of observing a black hole. From the LIGO signal, it is possible to extract the frequency and damping time of the dominant mode of the ringdown. From these, it is possible to infer the mass and angular momentum of the final object, which match independent predictions from numerical simulations of the merger. The frequency and decay time of the dominant mode are determined by the geometry of the photon sphere. Hence, observation of this mode confirms the presence of a photon sphere; however, it cannot exclude possible exotic alternatives to black holes that are compact enough to have a photon sphere.The observation also provides the first observational evidence for the existence of stellar-mass black hole binaries. Furthermore, it is the first observational evidence of stellar-mass black holes weighing 25 solar masses or more.Since then, many more gravitational wave events have been observed.
Observational evidence:
Proper motions of stars orbiting Sagittarius A* The proper motions of stars near the centre of our own Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*. By fitting their motions to Keplerian orbits, the astronomers were able to infer, in 1998, that a 2.6×10⁶ M☉ object must be contained in a volume with a radius of 0.02 light-years to cause the motions of those stars. Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the calculations of the mass to 4.3×10⁶ M☉ and a radius of less than 0.002 light-years for the object causing the orbital motion of those stars. The upper limit on the object's size is still too large to test whether it is smaller than its Schwarzschild radius; nevertheless, these observations strongly suggest that the central object is a supermassive black hole as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes.
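The quoted mass can be reproduced, to order of magnitude, from Kepler's third law applied to the orbit of S2. The sketch below is not from the source and uses approximate published orbital parameters (period roughly 16 years, semi-major axis roughly 1,000 AU), so the result should be read as illustrative only.

```python
# Kepler's third law in solar-system units: M (solar masses) = a^3 / T^2,
# with a in astronomical units and T in years.
a_au = 1.0e3   # approximate semi-major axis of S2's orbit, AU (assumed value)
T_yr = 16.0    # approximate orbital period of S2, years (assumed value)

mass_solar = a_au**3 / T_yr**2
print(f"Enclosed mass: {mass_solar:.2e} solar masses")   # ~4e6 M_sun
```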
Observational evidence:
Accretion of matter Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object. Artists' impressions such as the accompanying representation of a black hole with corona commonly depict the black hole as if it were a flat-space body hiding the part of the disk just behind it, but in reality gravitational lensing would greatly distort the image of the accretion disk.
Observational evidence:
Within such a disk, friction would cause angular momentum to be transported outward, allowing matter to fall farther inward, thus releasing potential energy and increasing the temperature of the gas.
Observational evidence:
When the accreting object is a neutron star or a black hole, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the compact object. The resulting friction is so significant that it heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays). These bright X-ray sources may be detected by telescopes. This process of accretion is one of the most efficient energy-producing processes known; up to 40% of the rest mass of the accreted material can be emitted as radiation. (In nuclear fusion only about 0.7% of the rest mass will be emitted as energy.) In many cases, accretion disks are accompanied by relativistic jets that are emitted along the poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data.As such, many of the universe's more energetic phenomena have been attributed to the accretion of matter on black holes. In particular, active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. Similarly, X-ray binaries are generally accepted to be binary star systems in which one of the two stars is a compact object accreting matter from its companion. It has also been suggested that some ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes.In November 2011 the first direct observation of a quasar accretion disk around a supermassive black hole was reported.
Observational evidence:
X-ray binaries X-ray binaries are binary star systems that emit a majority of their radiation in the X-ray part of the spectrum. These X-ray emissions are generally thought to result when one of the stars (compact object) accretes matter from another (regular) star. The presence of an ordinary star in such a system provides an opportunity for studying the central object and to determine if it might be a black hole.If such a system emits signals that can be directly traced back to the compact object, it cannot be a black hole. The absence of such a signal does, however, not exclude the possibility that the compact object is a neutron star. By studying the companion star it is often possible to obtain the orbital parameters of the system and to obtain an estimate for the mass of the compact object. If this is much larger than the Tolman–Oppenheimer–Volkoff limit (the maximum mass a star can have without collapsing) then the object cannot be a neutron star and is generally expected to be a black hole.The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Some doubt, however, remained due to the uncertainties that result from the companion star being much heavier than the candidate black hole. Currently, better candidates for black holes are found in a class of X-ray binaries called soft X-ray transients. In this class of system, the companion star is of relatively low mass allowing for more accurate estimates of the black hole mass. Moreover, these systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission (called quiescence), the accretion disk is extremely faint allowing detailed observation of the companion star during this period. One of the best such candidates is V404 Cygni.
Observational evidence:
Quasi-periodic oscillations The X-ray emissions from accretion disks sometimes flicker at certain frequencies. These signals are called quasi-periodic oscillations and are thought to be caused by material moving along the inner edge of the accretion disk (the innermost stable circular orbit). As such their frequency is linked to the mass of the compact object. They can thus be used as an alternative way to determine the mass of candidate black holes.
Observational evidence:
Galactic nuclei Astronomers use the term "active galaxy" to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the activity in these active galactic nuclei (AGN) may be explained by the presence of supermassive black holes, which can be millions of times more massive than stellar ones. The models of these AGN consist of a central black hole that may be millions or billions of times more massive than the Sun; a disk of interstellar gas and dust called an accretion disk; and two jets perpendicular to the accretion disk.
Observational evidence:
Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been more carefully studied in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, M32, M87, NGC 3115, NGC 3377, NGC 4258, NGC 4889, NGC 1277, OJ 287, APM 08279+5255 and the Sombrero Galaxy. It is now widely accepted that the centre of nearly every galaxy, not just active ones, contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
Observational evidence:
Microlensing Another way the black hole nature of an object may be tested is through observation of effects caused by a strong gravitational field in its vicinity. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, much as light passing through an optical lens. Observations have been made of weak gravitational lensing, in which light rays are deflected by only a few arcseconds. Microlensing occurs when the sources are unresolved and the observer sees a small brightening. In January 2022, astronomers reported the first possible detection of a microlensing event from an isolated black hole. Another possibility for observing gravitational lensing by a black hole would be to observe stars orbiting the black hole. There are several candidates for such an observation in orbit around Sagittarius A*.
Alternatives:
The evidence for stellar black holes strongly relies on the existence of an upper limit for the mass of a neutron star. The size of this limit heavily depends on the assumptions made about the properties of dense matter. New exotic phases of matter could push up this bound. A phase of free quarks at high density might allow the existence of dense quark stars, and some supersymmetric models predict the existence of Q stars. Some extensions of the standard model posit the existence of preons as fundamental building blocks of quarks and leptons, which could hypothetically form preon stars. These hypothetical models could potentially explain a number of observations of stellar black hole candidates. However, it can be shown from arguments in general relativity that any such object will have a maximum mass. Since the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass, supermassive black holes are much less dense than stellar black holes (the average density of a 10⁸ M☉ black hole is comparable to that of water). Consequently, the physics of matter forming a supermassive black hole is much better understood and the possible alternative explanations for supermassive black hole observations are much more mundane. For example, a supermassive black hole could be modelled by a large cluster of very dark objects. However, such alternatives are typically not stable enough to explain the supermassive black hole candidates. The evidence for the existence of stellar and supermassive black holes implies that in order for black holes to not form, general relativity must fail as a theory of gravity, perhaps due to the onset of quantum mechanical corrections. A much anticipated feature of a theory of quantum gravity is that it will not feature singularities or event horizons and thus black holes would not be real artifacts. For example, in the fuzzball model based on string theory, the individual states of a black hole solution do not generally have an event horizon or singularity, but for a classical/semi-classical observer the statistical average of such states appears just as an ordinary black hole as deduced from general relativity. A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. These include the gravastar, the black star, and the dark-energy star.
Open questions:
Entropy and thermodynamics In 1971, Hawking showed under general conditions that the total area of the event horizons of any collection of classical black holes can never decrease, even if they collide and merge. This result, now known as the second law of black hole mechanics, is remarkably similar to the second law of thermodynamics, which states that the total entropy of an isolated system can never decrease. As with classical objects at absolute zero temperature, it was assumed that black holes had zero entropy. If this were the case, the second law of thermodynamics would be violated by entropy-laden matter entering a black hole, resulting in a decrease in the total entropy of the universe. Therefore, Bekenstein proposed that a black hole should have an entropy, and that it should be proportional to its horizon area.The link with the laws of thermodynamics was further strengthened by Hawking's discovery in 1974 that quantum field theory predicts that a black hole radiates blackbody radiation at a constant temperature. This seemingly causes a violation of the second law of black hole mechanics, since the radiation will carry away energy from the black hole causing it to shrink. The radiation, however also carries away entropy, and it can be proven under general assumptions that the sum of the entropy of the matter surrounding a black hole and one quarter of the area of the horizon as measured in Planck units is in fact always increasing. This allows the formulation of the first law of black hole mechanics as an analogue of the first law of thermodynamics, with the mass acting as energy, the surface gravity as temperature and the area as entropy.One puzzling feature is that the entropy of a black hole scales with its area rather than with its volume, since entropy is normally an extensive quantity that scales linearly with the volume of the system. This odd property led Gerard 't Hooft and Leonard Susskind to propose the holographic principle, which suggests that anything that happens in a volume of spacetime can be described by data on the boundary of that volume.Although general relativity can be used to perform a semi-classical calculation of black hole entropy, this situation is theoretically unsatisfying. In statistical mechanics, entropy is understood as counting the number of microscopic configurations of a system that have the same macroscopic qualities (such as mass, charge, pressure, etc.). Without a satisfactory theory of quantum gravity, one cannot perform such a computation for black holes. Some progress has been made in various approaches to quantum gravity. In 1995, Andrew Strominger and Cumrun Vafa showed that counting the microstates of a specific supersymmetric black hole in string theory reproduced the Bekenstein–Hawking entropy. Since then, similar results have been reported for different black holes both in string theory and in other approaches to quantum gravity like loop quantum gravity.Another promising approach is constituted by treating gravity as an effective field theory. One first computes the quantum gravitational corrections to the radius of the event horizon of the black hole, then integrates over it to find the quantum gravitational corrections to the entropy as given by the Wald formula. The method was applied for Schwarzschild black holes by Calmet and Kuipers, then successfully generalised for charged black holes by Campos Delgado.
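For reference, the area–entropy relation discussed above is usually written as the Bekenstein–Hawking formula; the LaTeX below is a standard rendering added here for clarity, not taken from the source.

```latex
S_{\mathrm{BH}} \;=\; \frac{k_{\mathrm{B}}\, c^{3} A}{4 G \hbar}
\;=\; k_{\mathrm{B}}\, \frac{A}{4\, \ell_{\mathrm{P}}^{2}},
\qquad \ell_{\mathrm{P}} = \sqrt{\frac{G\hbar}{c^{3}}}
```

Here A is the horizon area and ℓ_P the Planck length, which is why the entropy is said to equal one quarter of the horizon area measured in Planck units.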
Open questions:
Information loss paradox Because a black hole has only a few internal parameters, most of the information about the matter that went into forming the black hole is lost. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum are conserved. As long as black holes were thought to persist forever this information loss is not that problematic, as the information can be thought of as existing inside the black hole, inaccessible from the outside, but represented on the event horizon in accordance with the holographic principle. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information appears to be gone forever.The question whether information is truly lost in black holes (the black hole information paradox) has divided the theoretical physics community. In quantum mechanics, loss of information corresponds to the violation of a property called unitarity, and it has been argued that loss of unitarity would also imply violation of conservation of energy, though this has also been disputed. Over recent years evidence has been building that indeed information and unitarity are preserved in a full quantum gravitational treatment of the problem.One attempt to resolve the black hole information paradox is known as black hole complementarity. In 2012, the "firewall paradox" was introduced with the goal of demonstrating that black hole complementarity fails to solve the information paradox. According to quantum field theory in curved spacetime, a single emission of Hawking radiation involves two mutually entangled particles. The outgoing particle escapes and is emitted as a quantum of Hawking radiation; the infalling particle is swallowed by the black hole. Assume a black hole formed a finite time in the past and will fully evaporate away in some finite time in the future. Then, it will emit only a finite amount of information encoded within its Hawking radiation. According to research by physicists like Don Page and Leonard Susskind, there will eventually be a time by which an outgoing particle must be entangled with all the Hawking radiation the black hole has previously emitted. This seemingly creates a paradox: a principle called "monogamy of entanglement" requires that, like any quantum system, the outgoing particle cannot be fully entangled with two other systems at the same time; yet here the outgoing particle appears to be entangled both with the infalling particle and, independently, with past Hawking radiation. In order to resolve this contradiction, physicists may eventually be forced to give up one of three time-tested principles: Einstein's equivalence principle, unitarity, or local quantum field theory. One possible solution, which violates the equivalence principle, is that a "firewall" destroys incoming particles at the event horizon. In general, which—if any—of these assumptions should be abandoned remains a topic of debate. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Slx1 structure-specific endonuclease subunit homolog b (s. cerevisiae)**
Slx1 structure-specific endonuclease subunit homolog b (s. cerevisiae):
SLX1 structure-specific endonuclease subunit homolog B (S. cerevisiae) is a protein in humans that is encoded by the SLX1B gene.
Slx1 structure-specific endonuclease subunit homolog b (s. cerevisiae):
This gene encodes a protein that is an important regulator of genome stability. The protein represents the catalytic subunit of the SLX1-SLX4 structure-specific endonuclease, which can resolve DNA secondary structures that are formed during repair and recombination processes. Two identical copies of this gene are located on the p arm of chromosome 16 due to a segmental duplication; this record represents the more telomeric copy. Alternative splicing results in multiple transcript variants. Read-through transcription also occurs between this gene and the downstream SULT1A4 (sulfotransferase family, cytosolic, 1A, phenol-preferring, member 4) gene. [provided by RefSeq, Nov 2010]. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Poly(N-isopropylacrylamide)**
Poly(N-isopropylacrylamide):
Poly(N-isopropylacrylamide) (variously abbreviated PNIPA, PNIPAM, PNIPAAm, NIPA, PNIPAA or PNIPAm) is a temperature-responsive polymer that was first synthesized in the 1950s. It can be synthesized from N-isopropylacrylamide which is commercially available. It is synthesized via free-radical polymerization and is readily functionalized making it useful in a variety of applications.
Poly(N-isopropylacrylamide):
PNIPA dissolves in water; however, when these solutions are heated above their cloud point temperature, they undergo a reversible lower critical solution temperature (LCST) phase transition from a soluble hydrated state to an insoluble dehydrated state. Although it is widely believed that this phase transition occurs at 32 °C (90 °F), the actual temperature may differ by 5 to 10 °C (or even more) depending on the polymer concentration, the molar mass of the polymer chains, the polymer dispersity, and the terminal moieties. Furthermore, other molecules in the polymer solution, such as salts or proteins, can alter the cloud point temperature. Since PNIPA expels its liquid contents at a temperature near that of the human body, PNIPA copolymers have been investigated by many researchers for possible applications in tissue engineering and controlled drug delivery.
History:
The synthesis of poly(N-isopropylacrylamide) began with the synthesis of the acrylamide monomer by Sprecht in 1956. In 1957, Shearer patented the first application of what would later be identified as PNIPA, for use as a rodent repellent. Early work was driven by theoretical curiosity about the material properties of PNIPA. The first report of PNIPA came in 1968 and elucidated its unique thermal behavior in aqueous solutions. The 1980s marked an explosion of interest in PNIPA once potential applications of this thermal behavior were recognized.
Chemical and Physical Properties:
PNIPA is one of the most studied thermosensitive hydrogels. In dilute solution, it undergoes a coil-to-globule transition. PNIPA possesses inverse solubility upon heating: it changes abruptly from hydrophilic to hydrophobic at its LCST. At lower temperatures, PNIPA orders itself in solution so as to hydrogen bond with the already arranged water molecules; the water molecules must reorient around the nonpolar regions of PNIPA, which results in decreased entropy. At lower temperatures, such as room temperature, the negative enthalpy term (ΔH) from hydrogen bonding dominates the Gibbs free energy, causing the PNIPA to absorb water and dissolve in solution. At higher temperatures, the entropy term (ΔS) dominates, causing the PNIPA to release water and phase separate.
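The enthalpy–entropy competition described above can be summarized in one line; the following expression is added for clarity (it is the usual mixing free-energy argument, not taken from the source), with the demixing transition occurring roughly where ΔG changes sign.

```latex
\Delta G_{\mathrm{mix}} = \Delta H_{\mathrm{mix}} - T\,\Delta S_{\mathrm{mix}},
\qquad
T_{\mathrm{LCST}} \approx \frac{\Delta H_{\mathrm{mix}}}{\Delta S_{\mathrm{mix}}}
\quad (\Delta H_{\mathrm{mix}} < 0,\ \Delta S_{\mathrm{mix}} < 0)
```

Below T_LCST the (negative) enthalpy term dominates and ΔG_mix < 0, so the polymer dissolves; above it the −TΔS term dominates, ΔG_mix becomes positive, and the chains dehydrate and collapse.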
Synthesis of Heat and pH Sensitive PNIPA:
Homopolymerization The process of free radical polymerization of a single type of monomer, in this case, N-isopropylacrylamide, to form the polymer is known as a homopolymerization. The radical initiator azobisisobutyronitrile (AIBN) is commonly used in radical polymerizations.
Copolymerization A free-radical polymerization of two different monomers is a copolymerization. An advantage of copolymerization is fine tuning of the LCST.
Terpolymerization A free-radical polymerization of three different monomers is known as a terpolymerization. Advantages of a terpolymerization may include enhancing multiple properties of the polymer, including thermosensitivity, pH sensitivity, and fine tuning of the LCST.
Cross-linked Hydrogel A terpolymerization can also be used to form a cross-linked hydrogel. The reactant ammonium persulfate (APS) is used in polymer chemistry as a strong oxidizing agent and is often used along with tetramethylethylenediamine (TMEDA) to catalyze the polymerization when making polyacrylamide gels.
Synthesis of Chain-End Functionalized PNIPA:
PNIPA can be functionalized at the chain end by carrying out the free-radical polymerization in the presence of chain transfer agents (CTAs), so that one end of the polymer carries the radical-initiator fragment and the other a functional group. Functionalization of the polymer chain end allows the polymer to be used in many diverse settings and applications. Advantages of functionalizing the chain end may include enhancing multiple properties of the polymer, including thermosensitivity, pH sensitivity, or fine tuning of the LCST.
Applications:
The versatility of PNIPA has led to uses in macroscopic gels, microgels, membranes, sensors, biosensors, thin films, tissue engineering, and drug delivery. The tendency of aqueous solutions of PNIPA to increase in viscosity in the presence of hydrophobic molecules has made it excellent for tertiary oil recovery.
Applications:
Adding additives or copolymerizing PNIPA can lower the lower critical solution temperature to around human body temperature, which makes it an excellent candidate for drug delivery applications. The PNIPA can be placed in a solution of bioactive molecules, which allows the bioactive molecules to penetrate the PNIPA. The PNIPA can then be placed in vivo, where there is a rapid release of biomolecules due to the initial gel collapse and an ejection of the biomolecules into the surrounding media, followed by a slow release of biomolecules due to surface pore closure. PNIPA has also been used in pH-sensitive drug delivery systems. Examples of such systems include the intestinal delivery of human calcitonin, the delivery of insulin, and the delivery of ibuprofen. When radiolabeled PNIPA copolymers with different molecular weights were intravenously injected into rats, it was found that the glomerular filtration threshold of the polymer was around 32,000 g/mol. PNIPA has also been used in gel actuators, which convert external stimuli into mechanical motion. Upon heating above the LCST, the hydrogel goes from a hydrophilic to a hydrophobic state. This conversion results in an expulsion of water, which causes a physical conformational change, creating a mechanical hinge movement.
Applications:
Furthermore, PNIPA-based thin films can be applied as nano-switches featuring multiple distinct thin-film states, which is based on the cononsolvency effect. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Reconfigurability**
Reconfigurability:
Reconfigurability denotes the reconfigurable computing capability of a system, whose behavior can be changed by reconfiguration, i.e. by loading different configware code. This static reconfigurability distinguishes between reconfiguration time and run time. Dynamic reconfigurability denotes the capability of a dynamically reconfigurable system that can change its behavior during run time, usually in response to dynamic changes in its environment.
Reconfigurability:
In the context of wireless communication dynamic reconfigurability tackles the changeable behavior of wireless networks and associated equipment, specifically in the fields of radio spectrum, radio access technologies, protocol stacks, and application services.
Reconfigurability:
Research regarding the (dynamic) reconfigurability of wireless communication systems is ongoing for example in working group 6 of the Wireless World Research Forum (WWRF), in the Wireless Innovation Forum (WINNF) (formerly Software Defined Radio Forum), and in the European FP6 project End-to-End Reconfigurability (E²R). Recently, E²R initiated a related standardization effort on the cohabitation of heterogeneous wireless radio systems in the framework of the IEEE P1900.4 Working Group.
Reconfigurability:
See cognitive radio.
In the context of Control reconfiguration, a field of fault-tolerant control within control engineering, reconfigurability is a property of faulty systems meaning that the original control goals specified for the fault-free system can be reached after suitable control reconfiguration. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Voiced retroflex fricative**
Voiced retroflex fricative:
The voiced retroflex sibilant fricative is a type of consonantal sound, used in some spoken languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨ʐ⟩, and the equivalent X-SAMPA symbol is z`. Like all the retroflex consonants, the IPA symbol is formed by adding a rightward-pointing hook extending from the bottom of a z (the letter used for the corresponding alveolar consonant).
Features:
Features of the voiced retroflex sibilant: Its manner of articulation is sibilant fricative, which means it is generally produced by channeling air flow along a groove in the back of the tongue up to the place of articulation, at which point it is focused against the sharp edge of the nearly clenched teeth, causing high-frequency turbulence.
Its place of articulation is retroflex, which prototypically means it is articulated subapical (with the tip of the tongue curled up), but more generally, it means that it is postalveolar without being palatalized. That is, besides the prototypical subapical articulation, the tongue contact can be apical (pointed) or laminal (flat).
Its phonation is voiced, which means the vocal cords vibrate during the articulation.
It is an oral consonant, which means air is allowed to escape through the mouth only.
It is a central consonant, which means it is produced by directing the airstream along the center of the tongue, rather than to the sides.
The airstream mechanism is pulmonic, which means it is articulated by pushing air solely with the intercostal muscles and diaphragm, as in most sounds.
Occurrence:
In the following transcriptions, diacritics may be used to distinguish between apical [ʐ̺] and laminal [ʐ̻].
The commonality of [ʐ] cross-linguistically is 2%, according to a phonological analysis of 2,155 languages.
Voiced retroflex non-sibilant fricative:
Features Features of the voiced retroflex non-sibilant fricative: Its manner of articulation is fricative, which means it is produced by constricting air flow through a narrow channel at the place of articulation, causing turbulence.
Its place of articulation is retroflex, which prototypically means it is articulated subapical (with the tip of the tongue curled up), but more generally, it means that it is postalveolar without being palatalized. That is, besides the prototypical subapical articulation, the tongue contact can be apical (pointed) or laminal (flat).
Its phonation is voiced, which means the vocal cords vibrate during the articulation.
It is an oral consonant, which means air is allowed to escape through the mouth only.
It is a central consonant, which means it is produced by directing the airstream along the center of the tongue, rather than to the sides.
The airstream mechanism is pulmonic, which means it is articulated by pushing air solely with the intercostal muscles and diaphragm, as in most sounds.
Occurrence | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mazer (video game)**
Mazer (video game):
Mazer is a video game developed and published by American Laser Games in arcades as well as the 3DO.
Gameplay:
Mazer is an isometric shooter with a three-quarter perspective.
Reception:
Next Generation reviewed the 3DO version of the game, rating it one star out of five, and stated that "this title gives you the most frustrating gaming experience you can remember [...] The CD might be suitable for use as a coaster, but that's about it." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Datolite**
Datolite:
Datolite is a calcium boron hydroxide nesosilicate, CaBSiO4(OH). It was first observed by Jens Esmark in 1806, and named by him from δατεῖσθαι, "to divide," and λίθος, "stone," in allusion to the granular structure of the massive mineral.Datolite crystallizes in the monoclinic system forming prismatic crystals and nodular masses. The luster is vitreous and may be brown, yellow, light green or colorless. The Mohs hardness is 5.5 and the specific gravity is 2.8 - 3.0. The type localities are in the diabases of the Connecticut River valley and Arendal, Aust-Agder, Norway. Associated minerals include prehnite, danburite, babingtonite, epidote, native copper, calcite, quartz and zeolites. It is common in the copper deposits of the Lake Superior region of Michigan. It occurs as a secondary mineral in mafic igneous rocks often filling vesicles along with zeolites in basalt. Unlike most localities throughout the world, the occurrence of datolite in the Lake Superior region is usually fine grained in texture and possesses colored banding. Much of the coloration is due to the inclusion of copper or associated minerals in progressive stages of hydrothermal precipitation.
Datolite:
Botryolite is a botryoidal form of datolite. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**JAR (file format)**
JAR (file format):
A JAR ("Java archive") file is a package file format typically used to aggregate many Java class files and associated metadata and resources (text, images, etc.) into one file for distribution.JAR files are archive files that include a Java-specific manifest file. They are built on the ZIP format and typically have a .jar file extension.
Design:
A JAR file allows Java runtimes to efficiently deploy an entire application, including its classes and their associated resources, in a single request. JAR file elements may be compressed, shortening download times.
A JAR file may contain a manifest file, located at META-INF/MANIFEST.MF. The entries in the manifest file describe how to use the JAR file. For instance, a Class-Path entry can be used to specify other JAR files to load with the JAR.
Extraction:
The contents of a file may be extracted using any archive extraction software that supports the ZIP format, or the jar command line utility provided by the Java Development Kit.
Security:
Developers can digitally sign JAR files. In that case, the signature information becomes part of the embedded manifest file. The JAR itself is not signed, but instead every file inside the archive is listed along with its checksum; it is these checksums that are signed. Multiple entities may sign the JAR file, changing the JAR file itself with each signing, although the signed files themselves remain valid. When the Java runtime loads signed JAR files, it can validate the signatures and refuse to load classes that do not match the signature. It can also support 'sealed' packages, in which the Classloader will only permit Java classes to be loaded into the same package if they are all signed by the same entities. This prevents malicious code from being inserted into an existing package, and so gaining access to package-scoped classes and data.
Security:
The content of JAR files may be obfuscated to make reverse engineering more difficult.
Executable JAR files:
An executable Java program can be packaged in a JAR file, along with any libraries the program uses. Executable JAR files have the manifest specifying the entry point class with Main-Class: myPrograms.MyClass and an explicit Class-Path (and the -cp argument is ignored). Some operating systems can run these directly when clicked. The typical invocation is java -jar foo.jar from a command line.
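As a quick illustration of the structure just described, the following sketch (not from the source; the file name app.jar is hypothetical) uses Python's standard zipfile module to read the manifest of a JAR and print the entry-point class, which is the attribute consulted by java -jar.

```python
import zipfile

# A JAR is a ZIP archive containing a META-INF/MANIFEST.MF entry.
# "app.jar" is a hypothetical local file used only for illustration.
with zipfile.ZipFile("app.jar") as jar:
    manifest = jar.read("META-INF/MANIFEST.MF").decode("utf-8")

# Manifest entries are "Name: value" pairs; find the entry point class.
for line in manifest.splitlines():
    if line.startswith("Main-Class:"):
        print("Entry point:", line.split(":", 1)[1].strip())
```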
Executable JAR files:
Native launchers can be created on most platforms. For instance, Microsoft Windows users who prefer having Windows EXE files can use tools such as JSmooth, Launch4J, WinRun4J or Nullsoft Scriptable Install System to wrap single JAR files into executables.
Manifest:
A manifest file is a metadata file contained within a JAR. It defines extension and package-related data. It contains name–value pairs organized in sections. If a JAR file is intended to be used as an executable file, the manifest file specifies the main class of the application. The manifest file is named MANIFEST.MF. The manifest directory has to be the first entry of the compressed archive.
Manifest:
Specifications The manifest appears at the canonical location META-INF/MANIFEST.MF. There can be only one manifest file in an archive and it must be at that location.
The content of the manifest file in a JAR file created with version 1.0 of the Java Development Kit is the following.
Manifest-Version: 1.0
The name is separated from its value by a colon. The default manifest shows that it conforms to version 1.0 of the manifest specification.
The manifest can contain information about the other files that are packaged in the archive. Manifest contents depend on the intended use for the JAR file. The default manifest file makes no assumptions about what information it should record about other files, so its single line contains data only about itself. It should be encoded in UTF-8.
Special-Purpose Manifest Headers JAR files created only for the purpose of archiving do not use the MANIFEST.MF file.
Most uses of JAR files go beyond simple archiving and compression and require special information in the manifest file.
Features The manifest allows developers to define several useful features for their jars. Properties are specified in key-value pairs.
Applications If an application is contained in a JAR file, the Java Virtual Machine needs to know the application's entry point. An entry point is any class with a public static void main(String[] args) method. This information is provided in the manifest Main-Class header, which has the general form:
Main-Class: com.example.MyClassName
In this example, com.example.MyClassName.main() executes at application launch.
Package Sealing Optionally, a package within a JAR file can be sealed, which means that all classes defined in that package are archived in the same JAR file. A package might be sealed to ensure version consistency among the classes in the software or as a security measure.
Manifest:
To seal a package, a Name entry needs to appear, followed by a Sealed header, as in the example after this paragraph. The Name header's value is the package's relative pathname. Note that it ends with a '/' to distinguish it from a filename. Any headers following a Name header, without any intervening blank lines, apply to the file or package specified in the Name header. In the example, because the Sealed header occurs after the Name: myCompany/myPackage header with no intervening blank lines, the Sealed header applies (only) to the package myCompany/myPackage.
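A representative snippet, consistent with the description above (the package name is illustrative), would be:

```
Name: myCompany/myPackage/
Sealed: true
```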
Manifest:
The feature of sealed packages is outmoded by the Java Platform Module System introduced in Java 9, in which modules cannot split packages.
Manifest:
Package Versioning Several manifest headers hold versioning information. One set of headers can be assigned to each package. The versioning headers appear directly beneath the Name header for the package; an illustrative set of versioning headers is included in the example at the end of this section.
Multi-Release A jar can optionally be marked as a multi-release jar. Using the multi-release feature allows library developers to load different code depending on the version of the Java runtime. This in turn allows developers to leverage new features without sacrificing compatibility.
Manifest:
A multi-release jar is enabled by a Multi-Release declaration in the manifest.
Dependencies The Class-Path entry in MANIFEST.MF can be used to specify the other JAR files that must be on the class path for an application to be able to run. Note that Class-Path entries are delimited with spaces, not with the system path delimiter; an illustrative example combining these headers is shown below.
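An illustrative manifest fragment combining the headers discussed in this section (the header names follow the standard JAR manifest specification, but the values and file names are hypothetical):

```
Manifest-Version: 1.0
Multi-Release: true
Class-Path: lib/utility.jar lib/helper.jar

Name: myCompany/myPackage/
Specification-Title: My Package
Specification-Version: 1.2
Specification-Vendor: My Company
Implementation-Title: myCompany.myPackage
Implementation-Version: 1.2.1
Implementation-Vendor: My Company
```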
Apache Ant Zip/JAR support:
The Apache Ant build tool has its own package to read and write Zip and JAR archives, including support for Unix filesystem extensions. The org.apache.tools.zip package is released under the Apache Software Foundation license and is designed to be usable outside Ant.
Related formats:
Several related file formats build on the JAR format: WAR (Web application archive) files, also Java archives, store XML files, Java classes, JavaServer Pages and other objects for Web Applications.
RAR (resource adapter archive) files (not to be confused with the RAR file format), also Java archives, store XML files, Java classes and other objects for J2EE Connector Architecture (JCA) applications.
EAR (enterprise archive) files provide composite Java archives that combine XML files, Java classes and other objects including JAR, WAR and RAR Java archive files for Enterprise Applications.
SAR (service archive) is similar to EAR. It provides a service.xml file and accompanying JAR files.
APK (Android application package), a variant of the Java archive format, is used for Android applications.
AAR (Android archive) is used for distribution of Android libraries, typically via Maven.
PAR (plan archive) - supported by Eclipse Virgo OSGi application server, allows the deployment of multi-bundle OSGi applications as a single archive and provides isolation from other PAR-based applications deployed in the same server.
KAR (Karaf archive) - supported by Apache Karaf OSGi application server, allows the deployment of multi-bundle, multi-feature OSGi applications. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Train ride**
Train ride:
A train ride or miniature train consists of miniature trains capable of carrying people. Some are considered amusement rides and some are located in amusement parks and municipal parks. Backyard railroads and ridable miniature railways run on tracks, and especially if the service is provided by a non-commercial hobbyist club, their trains may be exact scale models, often with a live steam locomotive. Some train rides are kiddie rides, which are commercial children's rides that often use simple, colorful equipment with the driving mechanism hidden under vacuum-formed plastic covers. Trackless trains do not use tracks and usually consist of railroad-like cars towed behind an ordinary, or modified motor vehicle. This type of ride is often used for sightseeing tours. Some roller coasters like the Big Thunder Mountain Railroad attractions in several Disney parks resemble train rides, but may not be available to children under a certain age or minimum height.
History:
One early maker of miniature train rides was Paul Allen Sturtevant, who began building model trains as rides for children in the 1930s. Sturtevant began this craft as a hobby, later making them for rental to department stores and eventually producing them in a plant in Addison, Illinois until the demands of World War II shifted production away from consumer goods. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fay and Wu's H**
Fay and Wu's H:
Fay and Wu's H is a statistical test created by and named after two researchers Justin Fay and Chung-I Wu. The purpose of the test is to distinguish between a DNA sequence evolving randomly ("neutrally") and one evolving under positive selection. This test is an advancement over Tajima's D, which is used to differentiate neutrally evolving sequences from those evolving non-randomly (through directional selection or balancing selection, demographic expansion or contraction or genetic hitchhiking). Fay and Wu's H is frequently used to identify sequences which have experienced selective sweeps in their evolutionary history.
Concept:
Imagine a DNA sequence which has very few polymorphisms in its alleles across different populations. This could arise from at least three causes: (1) the sequence is experiencing heavy negative selection, so any new mutation in the sequence is deleterious and is purged immediately; (2) the sequence just experienced a selective sweep (an allele rose to fixation or near fixation), so all alleles became homogenized and the rare polymorphisms you see are very recent; or (3) there was a population bottleneck, so all individuals in the population are derived from a small set of (or one) common ancestors. Now, when you calculate Tajima's D using all the alleles across all populations, because there is an excess of rare polymorphisms, Tajima's D will come out negative and will tell you that the particular sequence was evolving non-randomly. However, you don't know whether this is because of selection acting, a recent selective sweep, or population expansion/contraction. To find out, you calculate Fay and Wu's H. Fay and Wu's H uses not only population polymorphism data but also data from an outgroup species. Thanks to the outgroup, you can tell what the ancestral state of the allele was before the two lineages split. If, for example, the ancestral allele was different, you can say that there was a selective sweep in that region (it could also be due to linkage). The magnitude of the selective sweep is reflected in the strength of H. If the allele was the same, it means the sequence is experiencing negative selection and the ancestral state is maintained. On the other hand, an H close to 0 means that there is no evidence of deviation from neutrality.
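As a concrete illustration of how the statistic is computed, the sketch below (not from the source) uses the standard formulation H = θ_π − θ_H, in which θ_H gives extra weight to high-frequency derived variants, and evaluates it from an unfolded (outgroup-polarized) site frequency spectrum.

```python
def fay_wu_h(sfs, n):
    """Fay and Wu's H from an unfolded site frequency spectrum.

    sfs[i-1] = number of segregating sites whose derived allele is carried by
    i of the n sampled chromosomes (i = 1 .. n-1). Requires outgroup-polarized data.
    """
    denom = n * (n - 1)
    theta_pi = sum(s * 2 * i * (n - i) for i, s in enumerate(sfs, start=1)) / denom
    theta_h  = sum(s * 2 * i * i       for i, s in enumerate(sfs, start=1)) / denom
    return theta_pi - theta_h

# Toy example: 20 chromosomes with an excess of high-frequency derived sites,
# as expected shortly after a selective sweep, so H comes out negative.
n = 20
sfs = [5, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 3, 4]  # length n-1
print(fay_wu_h(sfs, n))
```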
Interpretation:
A significantly positive Fay and Wu's H indicates a deficit of moderate- and high-frequency derived single nucleotide polymorphisms (SNPs) relative to equilibrium expectations, whereas a significant negative Fay and Wu's H indicates an excess of high-frequency derived SNPs. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Progesterone 5alpha-reductase**
Progesterone 5alpha-reductase:
In enzymology, a progesterone 5alpha-reductase (EC 1.3.1.22) is an enzyme that catalyzes the chemical reaction:
5alpha-pregnan-3,20-dione + NADP+ ⇌ progesterone + NADPH + H+
Thus, the two substrates of this enzyme are 5alpha-pregnan-3,20-dione and NADP+, whereas its three products are progesterone, NADPH, and H+.
Progesterone 5alpha-reductase:
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-CH group of donors with NAD+ or NADP+ as acceptor. Its substrate, 5alpha-pregnan-3,20-dione, is a C21-steroid hormone: a 5α-pregnane substituted with oxo groups at positions 3 and 20, and an intermediate in the conversion of progesterone to allopregnanolone and isopregnanolone, two common neurosteroids. The systematic name of this enzyme class is 5alpha-pregnan-3,20-dione:NADP+ 5-oxidoreductase. Other names in common use include steroid 5-alpha-reductase and Delta4-steroid 5alpha-reductase (progesterone).
**Steam and water analysis system**
Steam and water analysis system:
Steam and water analysis system (SWAS) is a system dedicated to the analysis of steam or water. In power stations, it is usually used to analyze boiler steam and water to ensure the water used to generate electricity is clean from impurities which can cause corrosion to any metallic surface, such as in boiler and turbine.
Steam and water analysis system (SWAS):
Corrosion and erosion are major concerns in thermal power plants operating on steam. The steam reaching the turbines needs to be ultra-pure and hence needs to be monitored for its quality. A well-designed steam and water analysis system (SWAS) can help in monitoring the critical parameters in the steam. These parameters include pH, conductivity, silica, sodium, dissolved oxygen, phosphate and chlorides. A well-designed SWAS must ensure that the sample remains representative up to the point of analysis. To achieve this, it is important to take care of the following aspects of the sample: sample extraction, sample transport, sample conditioning, analysis and controls. These aspects are well explained in international standards such as ASME PTC 19.11-2008 and VGB S006 -00 2012_09_EN. The International Association for the Properties of Water and Steam (IAPWS) also gives good information on important measurement points and their significance.
Steam and water analysis system (SWAS):
Sample handling system components are the most important pressure parts of the sample handling system and need to have certification to ASME Section VIII Div 1 & Div 2 or PED. Country-specific certifications are also often required, for example: American: ASME Section VIII Div 1 and Div 2 / ASME U and S Stamp; Europe: Pressure Equipment Directive (PED); India: Indian Boiler Regulation (IBR) form IIIC; Malaysia: DOSH; Russia: CU TR Certification.
Sample extraction To ensure that the sample extracted for analysis represents the process conditions exactly, it is important to choose the correct sample extraction probe. The validity of the analysis is largely dependent on the sample being truly representative. As the probe is attached directly to the process pipework, it may have to withstand severe conditions. For most applications, the sample probe is manufactured to the stringent codes applicable to high-pressure, high-temperature pipework.
Steam and water analysis system (SWAS):
The selection of the right type of probe is a challenge. The choice depends on the process stream parameter to be measured, the required sample flow rate and the location of the sampling point (also called the 'tapping point'). An important aspect of sample extraction probe design is that the sample must enter the probe at the same velocity as the fluid flowing in the pipeline from which the sample (steam or water) is extracted; this is known as isokinetic sampling. Such probes are designed to the ASTM D1066 standard for steam extraction and must be designed and tested for structural integrity at high pressure, high temperature and high sample velocity.
Steam and water analysis system (SWAS):
Sample extraction probes are extremely important and necessary for proper analysis of suspended impurities such as corrosion products, total iron, copper and carryover effects.
Sample Transport Section 4 of the ASME PTC 19.11-2008 standard describes the design of sample transport lines. The following points need to be considered when designing these lines: (1) Line size selection: the following aspects are very important when sizing sample transport lines.
(a) The transport time (i.e. the velocity) of the sample from the isokinetic sample extraction probes to the sampling system should be kept to a minimum. The SWAS room should therefore be located close to the low-pressure water (condensate) sample points, such as the CEP discharge and the condensate polishing plant, where sample velocities are lower.
(b) Pressure drop in the lines is an important aspect. The sample should meet the least possible resistance, so joints and bends in the pipeline need to be kept to a minimum. Sample lines must also slope continuously to avoid accumulation of sample in the lines.
Steam and water analysis system (SWAS):
(2) Line material: at minimum, stainless steel SS316 grade material must be used for sample transport lines, to avoid corrosion of the lines, which would lead to incorrect measurement and analysis. For high-pressure, high-temperature samples (superheated steam, reheated steam, saturated steam, separator drains, feed water at economizer inlets), SS316H must be used, which withstands the high temperature of the samples.
Steam and water analysis system (SWAS):
Sample conditioning system The sample conditioning system, in some countries also called the sampling system, wet panel or wet rack, houses the various components for sample conditioning. It may be an open rack or a closed enclosure with a corridor in between. The system contains sample conditioning equipment and a grab sampling sink. In this stage the sample is first cooled in sample coolers, depressurized in a pressure regulator and then fed to the various analyzers, while the flow characteristics are kept constant by means of a back pressure regulator.
Steam and water analysis system (SWAS):
The need to condition the sample exists because the sensors used for online analysis are not able to handle the water/steam sample at high temperatures or pressures. To maintain a common reference for analysis, the sample analysis should be done at 25 °C. However, because temperature compensation logic is available in most analyzers today, it is common practice to cool the sample to 25–40 °C with the help of a well-engineered sample conditioning system and then feed the conditioned sample to the analyzers.
Steam and water analysis system (SWAS):
However, if an uncompensated sample is to be analyzed, it becomes essential to cool the sample to 25 °C ± 1 °C. This can be achieved by two-stage cooling. In the first stage (also known as 'primary cooling'), the sample is cooled using the available cooling water. In most countries, cooling water is available in the range of 30–32 °C. This cooling water can cool the sample down to about 35 °C (considering an approach temperature of 3 to 5 °C). A sample cooler is used to achieve this. The sample cooler is a heat exchanger specially designed for SWAS applications. The preferred sample cooler for primary cooling is a double-helix, coil-in-shell design providing contra-flow heat exchange.
Steam and water analysis system (SWAS):
The remaining part of the cooling (i.e. from 35 to 25 °C) is achieved using chilled water in a secondary cooling circuit. A chilled water supply is required from the plant, or else an independent chiller package can be considered for this purpose along with the SWAS. The sampling system can be an 'open-frame free standing' type design or a fully or partially closed design, depending on the choice of the user, the environment it is supposed to operate in and the criticality of operation.
Steam and water analysis system (SWAS):
Sample coolers In the sampling system, sample coolers play a major role in bringing down the temperature of hot steam (or water) to a temperature acceptable to the sensors of the on-line analyser. Some of the important design aspects of sample coolers are: preferably, a sample cooler should be of double-helix, coil-in-shell design, arranged to provide contra-flow heat exchange. This makes the sample cooler more compact, yet highly effective in terms of heat exchange.
Steam and water analysis system (SWAS):
Sample coils made of stainless steel SS-316 are suitable for normal cooling water conditions. However, if the chloride content in the cooling water is high (more than 35 ppm), then other suitable coil materials such as Monel or Inconel need to be used depending upon the quality of cooling water.
A “built-in” safety relief valve on shell side of the cooler is a must, so as to prevent explosion of the shell in event of sample coil failure.
Steam and water analysis system (SWAS):
The sample cooler design must meet the requirements of the ASME PTC 19.11 standard. These sample coolers handle steam and water samples at very high pressure and temperature, so it is very important to design these helical-tube heat exchangers in line with pressure vessel standards. They are unfired pressure vessels and are therefore designed in line with ASME Section VIII Div 1 & 2 and the Pressure Equipment Directive (PED). Many countries also ask for local certification, for example: American: ASME Section VIII Div 1 and Div 2 / ASME U and S Stamp; Europe: Pressure Equipment Directive (PED); India: Indian Boiler Regulation (IBR) form IIIC; Malaysia: DOSH; Russia: CU TR Certification.
Pressure reducers After the sample is cooled, the pressure of the sample must be reduced to meet the requirements of the sensors that receive it. Usually, sensors such as those for pH, conductivity, silica, sodium and hydrazine require a low-pressure sample for healthy operation.
Steam and water analysis system (SWAS):
A rod-in-tube type of pressure reducer is the most effective method of pressure reduction and is recommended in the ASME PTC 19.11-2008 standard.
Steam and water analysis system (SWAS):
With current technology, a rod-in-tube pressure reducer with a thermal and safety relief valve is considered the most reliable and safe device. The rod-in-tube unit is a system in itself that takes care of several important aspects of sample conditioning. The pressure reducer in the sampling system is rated for very high pressure (450 bar). There is no need for filters before rod-in-tube pressure reducers, as cleaning is done on-line without using any tools; no shutdown is required for cleaning these pressure reducers.
Steam and water analysis system (SWAS):
Safety of analyzers against high temperature: analyzers must be protected from high-temperature samples. This is to avoid damage in case of failure of the cooling water supply to the primary sample coolers. There are various methods of stopping the sample flow to the analyzer in such a situation. The most popular and simplest method is the use of mechanical thermal shut-off valves, which close and block the sample to the analyzer in case of cooling water failure.
Steam and water analysis system (SWAS):
These valves must be: (1) rated for high pressure and designed in line with ASME standards to assure the safety of the operator and of the instruments downstream;
(2) of a manual-reset design, as recommended in the ASME PTC 19.11-2008 standard;
(3) equipped with a potential-free alarm contact for operator indication in the control system.
Steam and water analysis system (SWAS):
Online analysis of steam and water cycle chemistry parameters: the sample analysis system is in some countries also called an analyser panel, dry panel or dry rack. It is usually a free-standing enclosed panel containing the transmitter electronics, which are usually mounted on panels. At this stage the sample is analyzed for its conductivity, pH, silica, phosphate, chloride, dissolved oxygen, hydrazine, sodium etc.[1] Online conductivity measurements in SWAS: in the steam and water cycle, conductivity is the most basic yet most important measurement. Specific conductivity (total conductivity), cation conductivity (conductivity after the cation exchanger, CACE) and degassed cation conductivity are measured continuously at different locations in the steam and water cycle. Conductivity measurements give an indication of contamination of the water or steam with salts of any kind. These salts can enter the water or steam from the atmosphere or through leakages in heat exchangers. The conductivity of ultrapure water is close to zero (as low as 0.05 microsiemens/cm), while the addition of even 1 ppm of a dissolved salt increases the conductivity many times over. Conductivity is therefore a very good general-purpose watchdog that can give a quick indication of plant malfunction or possible leakages. Typical points in the steam circuit where conductivity should be monitored are: drum steam, drum water, high-pressure heaters, low-pressure heaters, condenser, plant effluent, the D.M. (demineralization) plant and make-up water to the D.M. plant.
Steam and water analysis system (SWAS):
Three types of conductivity measurement are usually made: specific conductivity, cation conductivity and degassed cation conductivity. There are differences between these three measurements.
Steam and water analysis system (SWAS):
Specific conductivity gives the overall conductivity value of the sample and is the most generic measurement. Cation conductivity is the conductivity measured after the cation column. In the cation column, H+ resins replace the positive ions of all dissolved matter in the solution. When this happens, the treatment chemicals, which are the desired (basic or alkaline) species, are converted to water (e.g. NH4OH becomes H2O as the resin exchanges NH4+ for H+). The impurities, which are essentially salts of various kinds, are converted to the corresponding acids (e.g. NaCl becomes HCl). The masking effect of the treatment chemicals on the conductivity value is thus eliminated, while the conversion of salts to the corresponding acids increases their conductivity contribution to around three times the original value. In effect, cation conductivity acts as an amplifier of the conductivity due to impurities and an eliminator of the conductivity due to treatment chemicals.
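A small numerical illustration of this amplification effect follows. It is not taken from the article: the limiting molar conductivities are approximate textbook values for dilute solutions at 25 °C, the 10 ppb NaCl contamination is an arbitrary example, and the contribution of pure water itself (about 0.055 µS/cm) is ignored.

```python
# Illustration (assumed textbook values, not from the article): how the cation
# column roughly triples the conductivity contribution of a trace NaCl impurity
# by converting it to HCl. The pure-water background (~0.055 µS/cm) is ignored.

LAMBDA_NACL_S_CM2_MOL = 126.4   # approx. limiting molar conductivity of NaCl at 25 °C
LAMBDA_HCL_S_CM2_MOL = 426.2    # approx. limiting molar conductivity of HCl at 25 °C
MOLAR_MASS_NACL_G_MOL = 58.44

def trace_salt_conductivity_uS_cm(ppb: float, molar_lambda: float) -> float:
    """Conductivity contribution (µS/cm) of a trace salt given in ppb (µg/L)."""
    mol_per_litre = (ppb * 1e-6) / MOLAR_MASS_NACL_G_MOL   # ppb -> g/L -> mol/L
    mol_per_cm3 = mol_per_litre / 1000.0
    return molar_lambda * mol_per_cm3 * 1e6                 # S/cm -> µS/cm

specific = trace_salt_conductivity_uS_cm(10.0, LAMBDA_NACL_S_CM2_MOL)  # before cation column
cation = trace_salt_conductivity_uS_cm(10.0, LAMBDA_HCL_S_CM2_MOL)     # after cation column (as HCl)

print(f"10 ppb NaCl: specific ≈ {specific:.3f} µS/cm, "
      f"cation ≈ {cation:.3f} µS/cm, amplification ≈ {cation / specific:.1f}x")
```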
Steam and water analysis system (SWAS):
Degassed conductivity is the finest level of conductivity measurement; it removes the masking effect of dissolved gases, mainly CO2, on the measurement. In a degassed-conductivity system, a reboiler chamber heats the sample so that the dissolved gases are liberated, after which a cooling mechanism cools the hot liquid again. The conductivity measured after this process reflects the 'real' conductivity due to dissolved impurities, with the dissolved gases eliminated. Degas columns are designed in line with the ASTM D4519 standard. These measurements are also recommended in standards such as ASME PTC 19.11-2008 and VGB-S-006-00-2012-09-EN; the IAPWS guidelines provide further information.
Steam and water analysis system (SWAS):
These three conductivity measurements are very important and are also used to calculate pH and dissolved CO2 values in steam and water cycles.
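One way such a 'calculated pH' is obtained in practice is a differential-conductivity estimate for ammonia-conditioned cycles. The sketch below shows the idea only: the constants (the 8.11 offset and the one-third weighting of the cation conductivity) are assumed illustrative values, and real analysers use the correlations given in the IAPWS and VGB documents cited above.

```python
import math

def calculated_ph(specific_uS_cm: float, cation_uS_cm: float) -> float:
    """Illustrative differential-conductivity pH estimate for an
    ammonia-conditioned sample. The 8.11 offset and the 1/3 weighting
    are assumed example constants, not an authoritative correlation."""
    # Conductivity attributed to the alkalising agent (ammonia): specific
    # conductivity minus roughly one third of the cation conductivity.
    ammonia_part = specific_uS_cm - cation_uS_cm / 3.0
    if ammonia_part <= 0:
        raise ValueError("sample too impure or too dilute for this approximation")
    return 8.11 + math.log10(ammonia_part)

# Example: feed water with 6.0 µS/cm specific and 0.15 µS/cm cation conductivity
print(f"calculated pH ≈ {calculated_ph(6.0, 0.15):.2f}")   # roughly 8.9
```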
Steam and water analysis system (SWAS):
Online pH measurement: pH is another basic yet critical measurement for the steam and water cycle. Monitoring the pH of the feed water gives a direct indication of its alkalinity or acidity. Ultrapure water has a pH of 7. In the steam circuit it is normal practice to keep the feed water slightly alkaline by chemical dosing, which helps prevent corrosion of pipework and other equipment. Typical points in the steam circuit where pH should be monitored are: drum water, high-pressure heaters, make-up condensate, plant effluent, condenser and cooling water.
Steam and water analysis system (SWAS):
Online dissolved oxygen measurement: in the steam and water circuit, water is heated from room temperature to superheated steam temperatures. In the 200 to 250 °C range (feed water), dissolved oxygen causes corrosion of components and piping. Iron reacts with dissolved oxygen in the feed water circuit; the resulting pitting may eventually puncture and fail parts of the steam and water circuit. Parts such as condensers, low-pressure heaters (LPH), feed water tanks, high-pressure heaters and economizers need to be protected from dissolved oxygen attack. Dissolved oxygen also promotes electrolytic action between dissimilar metals, causing corrosion and leakage at joints and gaskets.
Steam and water analysis system (SWAS):
In power plants, various feed water treatments such as (1) all-volatile treatment (AVT-R or AVT-O), (2) oxygenated treatment (OT) and (3) combined water treatment (CWT) are adopted to minimize corrosion. It is therefore important and critical to monitor and control dissolved oxygen and pH values in feed water systems. Typical points in the steam circuit where dissolved oxygen monitoring is required are: condenser outlet, low-pressure heaters and economizer inlet.
Steam and water analysis system (SWAS):
Online hydrazine (oxygen scavenger) measurement: in All-Volatile Treatment Reducing (AVT-R), chemicals such as hydrazine, carbohydrazide or DEHA are dosed into the boiler feed water. Such treatments are used for steam and water circuits with mixed metallurgy. These chemicals act as oxygen scavengers and as a source of feed water alkalinity, which has well-known advantages, e.g.: a) It prevents foaming and carryover from the boiler.
Steam and water analysis system (SWAS):
b) It minimizes deposits on metal surfaces.
Steam and water analysis system (SWAS):
c) It reduces dissolved-oxygen corrosion. In addition to its oxygen-scavenging function, hydrazine helps to maintain a protective magnetite layer over steel surfaces and maintains feed water alkalinity to prevent acidic corrosion. The nominal dosage rate for hydrazine in feed water is about three times its oxygen level. Under-dosing of hydrazine leads to increased corrosion; overdosing represents a costly waste. Monitoring dissolved oxygen levels alone is not sufficient to control the optimum concentration because it provides no measure of any excess hydrazine. Typical points in the steam circuit where hydrazine monitoring is required are: re-heaters, economizer inlet and low-pressure heaters.
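The 3:1 rule of thumb above translates directly into a dosing set-point, and the need to watch both under- and over-dosing translates into a simple residual check. The following sketch only illustrates that logic: the 3:1 mass ratio comes from the text, while the alarm band is an invented example value; real set-points come from the plant's chemistry regime.

```python
# Sketch of the hydrazine dosing logic described above. The 3:1 mass ratio is
# the rule of thumb from the text; the 25 % alarm band is an assumption.

def hydrazine_target_ppb(dissolved_o2_ppb: float, ratio: float = 3.0) -> float:
    """Nominal hydrazine dose: about three times the measured dissolved oxygen."""
    return ratio * dissolved_o2_ppb

def dosing_status(measured_n2h4_ppb: float, target_ppb: float, band: float = 0.25) -> str:
    """Flag under-dosing (corrosion risk) or over-dosing (wasted chemical)."""
    if measured_n2h4_ppb < target_ppb * (1 - band):
        return "under-dosed: corrosion risk"
    if measured_n2h4_ppb > target_ppb * (1 + band):
        return "over-dosed: costly waste"
    return "within target band"

target = hydrazine_target_ppb(dissolved_o2_ppb=7.0)        # 7 ppb O2 -> ~21 ppb N2H4 target
print(target, dosing_status(measured_n2h4_ppb=12.0, target_ppb=target))
```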
Steam and water analysis system (SWAS):
Online silica (SiO2) measurement: when it comes to the safety and efficiency of the steam turbine and boiler in a power plant, silica is one of the most critical factors to be monitored. Deposition of impurities on turbine blades has been identified as one of the most common problems, and various compounds can deposit on the blades. Of these, silica (SiO2) deposits can form even at lower operating pressures; silica deposition is therefore more common in turbines than other types of deposits. Silica usually deposits in the intermediate-pressure and low-pressure sections of the turbine. These deposits are hard to remove, disturb the geometry of the turbine blades and ultimately result in vibrations, causing imbalance and loss of turbine output.
Steam and water analysis system (SWAS):
Another important area of concern for silica deposition is the boiler tubes. Silica scale is one of the hardest scales to remove. Because of its low thermal conductivity, even a very thin silica deposit can reduce heat transfer considerably, reducing efficiency and leading to hot spots and ultimately ruptures.
Because of all these issues, it is extremely important to closely monitor silica levels using on-line silica analyzers that can measure silica down to the ppb (parts per billion) level.
Steam and water analysis system (SWAS):
Online sodium (Na+) ion measurement: sodium measurement is one of the most critical measurements in the steam and water cycle for detecting leaks in the circuit. Among other chemical parameters, sodium is recognized as an effective telltale of the condition of a high-purity water/steam circuit. The presence of sodium signals contamination with potentially corrosive anions, e.g. chlorides and sulfates. Under conditions of high pressure and temperature, neutral sodium salts exhibit considerable steam solubility. NaCl and NaOH, in particular, are known to be associated with stress corrosion cracking of boiler and superheater tubes. Measuring sodium, as a carrier of potentially corrosive anions, is therefore recognized as an effective means of monitoring steam purity.
Steam and water analysis system (SWAS):
DM water after cation and mixed bed: sampling after cation exchange is one of the most important points for trace sodium monitoring because it rapidly alerts the operator to resin bed exhaustion. Sodium measurement is particularly valuable in plants cooled by saline water, especially if there is a high risk of condenser leakage and no provision for condensate polishing. While small leaks may be extremely difficult to locate and eliminate, their detection and escalation are most readily followed by sodium measurement. Some commercial sodium analyzers (such as SWAN's) can detect down to 0.001 ppb (1 ppt) of trace sodium in water treatment facilities. This sensitivity allows operators to follow trend changes before any leakage requires immediate action; it also gives time to analyze the origin of the leakage and to plan a production reduction, or even a shutdown far enough in advance, to avoid costly and unexpected emergency outages.
Steam and water analysis system (SWAS):
Boiler: solid conditioning agents such as trisodium phosphate (TSP) and sodium hydroxide (caustic) are used for boiler drum water treatment. If these chemicals are carried over with the steam, they may cause deposits in the turbine and therefore need to be treated as potentially corrosive impurities.
Steam and water analysis system (SWAS):
Steam: sodium is also measured in power plant water and steam samples because it is a common corrosive contaminant and can be detected at very low concentrations even in the presence of higher amounts of ammonia and/or amine treatment chemicals, which have a relatively high background conductivity. Steam purity can be more accurately assessed by measuring the sodium concentration in both steam and condensate, thus determining the “sodium balance”. The two concentrations should be equal. A higher level of sodium in the condensate indicates a condenser leakage; a lower level of sodium in the condensate indicates deposition of sodium in the steam circuit.
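The sodium-balance check described above can be expressed as a simple comparison of the two measurements. The sketch below only illustrates the interpretation given in the text; the tolerance is an invented example value, as plants set their own alarm limits.

```python
# Sketch of the "sodium balance" interpretation described above.
# The 0.05 ppb tolerance is an assumed example value, not a standard limit.

def sodium_balance(steam_na_ppb: float, condensate_na_ppb: float,
                   tolerance_ppb: float = 0.05) -> str:
    diff = condensate_na_ppb - steam_na_ppb
    if abs(diff) <= tolerance_ppb:
        return "balanced: steam and condensate sodium agree"
    if diff > 0:
        return "condensate sodium higher than steam: suspect condenser leakage"
    return "condensate sodium lower than steam: suspect sodium deposition in the circuit"

print(sodium_balance(steam_na_ppb=0.8, condensate_na_ppb=1.6))
```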
Steam and water analysis system (SWAS):
Condensate: sodium measurement should be the preferred option for early warning of leakage of impurities into the condensate. It also plays a key role in condensate polishing plant controls.
Steam and water analysis system (SWAS):
Online phosphate measurement in boiler drum water: phosphate measurement is important only for drum-type boilers. Solid conditioning agents such as trisodium phosphate (TSP) are widely used as dosing chemicals in boiler drums. Excess dosing of these chemicals can lead to issues such as foaming and carryover of salts into the steam. Controlling phosphate dosing under variable steam loads is a challenging task, mainly because of phosphate hideout. Users therefore generally prefer phosphate measurement in drum water samples. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Colliding-wind binary**
Colliding-wind binary:
A colliding-wind binary is a binary star system in which the two members are massive stars that emit powerful, radiatively driven stellar winds. The location where these two winds collide produces a strong shock front that can cause radio, X-ray and possibly synchrotron radiation emission. Wind compression in the bow shock region between the two stellar winds allows dust formation. When this dust streams away from the orbiting pair, it can form a pinwheel nebula of spiraling dust. Such pinwheels have been observed in the Quintuplet Cluster. The archetype of such a colliding-wind binary system is WR 140 (HD 193793), which consists of a 20 solar mass (M☉) Wolf-Rayet star orbiting about a 50 M☉, spectral class O4-5 main sequence star every 7.9 years. The high orbital eccentricity of the pair allows astronomers to observe changes in the colliding wind region as their separation varies. Another prominent example of a colliding-wind binary is thought to be Eta Carinae, one of the most luminous objects in the Milky Way galaxy. The first colliding-wind binary to be detected in the X-ray band outside the Milky Way galaxy was HD 5980, located in the Small Magellanic Cloud. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Comparison of eDonkey software**
Comparison of eDonkey software:
The following tables compare general and technical information for a number of available applications supporting the eDonkey network. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**History of geology**
History of geology:
The history of geology is concerned with the development of the natural science of geology. Geology is the scientific study of the origin, history, and structure of Earth.
Antiquity:
In the year 540 BC, Xenophanes described fossil fish and shells found in deposits on mountains. Similar fossils were noted by Herodotus (about 490 BC). Some of the first geological thoughts were about the origin of Earth. Ancient Greece developed some primary geological concepts concerning the origin of the Earth. Additionally, in the 4th century BC Aristotle made critical observations of the slow rate of geological change. He observed the composition of the land and formulated a theory that the Earth changes at a slow rate and that these changes cannot be observed during one person's lifetime. Aristotle thus developed one of the first evidence-based concepts in the geological realm, concerning the rate at which the Earth physically changes. However, it was his successor at the Lyceum, the philosopher Theophrastus, who made the greatest progress in antiquity in his work On Stones. He described many minerals and ores, both from local mines such as those at Laurium near Athens and from further afield. He also quite naturally discussed types of marble and building materials like limestones, and attempted a primitive classification of minerals by their properties, such as hardness.
Antiquity:
Much later in the Roman period, Pliny the Elder produced a very extensive discussion of many more minerals and metals then widely used for practical ends. He was among the first to correctly identify the origin of amber as a fossilized resin from trees by the observation of insects trapped within some pieces. He also laid the basis of crystallography by recognising the octahedral habit of diamond.
Middle Ages:
Abu al-Rayhan al-Biruni (AD 973–1048) was one of the earliest Muslim geologists, whose works included the earliest writings on the geology of India, hypothesizing that the Indian subcontinent was once a sea. Ibn Sina (Avicenna, AD 981–1037), a Persian polymath, made significant contributions to geology and the natural sciences (which he called Attabieyat), along with other natural philosophers such as Ikhwan Al-Safa and many others. Ibn Sina wrote an encyclopedic work entitled "Kitab al-Shifa" (the Book of Cure, Healing or Remedy from ignorance), in which Part 2, Section 5, contains his commentary on Aristotle's Mineralogy and Meteorology, in six chapters: formation of mountains; the advantages of mountains in the formation of clouds; sources of water; origin of earthquakes; formation of minerals; the diversity of the Earth's terrain.
Middle Ages:
In medieval China, one of the most intriguing naturalists was Shen Kuo (1031–1095), a polymath personality who dabbled in many fields of study in his age. In terms of geology, Shen Kuo is one of the first naturalists to have formulated a theory of geomorphology. This was based on his observations of sedimentary uplift, soil erosion, deposition of silt, and marine fossils found in the Taihang Mountains, located hundreds of miles from the Pacific Ocean. He also formulated a theory of gradual climate change, after his observation of ancient petrified bamboos found in a preserved state underground near Yanzhou (modern Yan'an), in the dry northern climate of Shaanxi province. He formulated a hypothesis for the process of land formation: based on his observation of fossil shells in a geological stratum in a mountain hundreds of miles from the ocean, he inferred that the land was formed by erosion of the mountains and by deposition of silt.
17th century:
It was not until the 17th century that geology made great strides in its development. At this time, geology became its own entity in the world of natural science. It was discovered by the Christian world that different translations of the Bible contained different versions of the biblical text. The one entity that remained consistent through all of the interpretations was that the Deluge had formed the world's geology and geography. To prove the Bible's authenticity, individuals felt the need to demonstrate with scientific evidence that the Great Flood had in fact occurred. With this enhanced desire for data came an increase in observations of the Earth's composition, which in turn led to the discovery of fossils. Although theories that resulted from the heightened interest in the Earth's composition were often manipulated to support the concept of the Deluge, a genuine outcome was a greater interest in the makeup of the Earth. Due to the strength of Christian beliefs during the 17th century, the theory of the origin of the Earth that was most widely accepted was A New Theory of the Earth published in 1696, by William Whiston. Whiston used Christian reasoning to "prove" that the Great Flood had occurred and that the flood had formed the rock strata of the Earth.
17th century:
During the 17th century, both religious and scientific speculation about Earth's origin further propelled interest in the Earth and brought about more systematic identification techniques of the Earth's strata. The Earth's strata can be defined as horizontal layers of rock having approximately the same composition throughout. An important pioneer in the science was Nicolas Steno. Steno was trained in the classical texts on science; however, by 1659 he seriously questioned accepted knowledge of the natural world. Importantly, he questioned the idea that fossils grew in the ground, as well as common explanations of rock formation. His investigations and his subsequent conclusions on these topics have led scholars to consider him one of the founders of modern stratigraphy and geology (Steno, who became a Catholic as an adult, was eventually made a bishop, and was beatified in 1988 by Pope John Paul II. Therefore, he is also called Blessed Nicolas Steno).
18th century:
From this increased interest in the nature of the Earth and its origin came heightened attention to minerals and other components of the Earth's crust. Moreover, the increasing economic importance of mining in Europe during the mid to late 18th century made the possession of accurate knowledge about ores and their natural distribution vital. Scholars began to study the makeup of the Earth in a systematic manner, with detailed comparisons and descriptions not only of the land itself, but of the semi-precious metals it contained, which had great commercial value. For example, in 1774 Abraham Gottlob Werner published the book Von den äusserlichen Kennzeichen der Fossilien (On the External Features of Fossils), which brought him widespread recognition because he presented a detailed system for identifying specific minerals based on external characteristics. The more efficiently productive mining land could be identified and semi-precious metals found, the more money could be made. This drive for economic gain propelled geology into the limelight and made it a popular subject to pursue. With an increased number of people studying it came more detailed observations and more information about the Earth.
18th century:
Also during the eighteenth century, aspects of the history of the Earth – namely the divergences between the accepted religious concept and factual evidence – once again became a popular topic for discussion in society. In 1749, the French naturalist Georges-Louis Leclerc, Comte de Buffon published his Histoire Naturelle, in which he attacked the popular Biblical accounts given by Whiston and other ecclesiastical theorists of the history of the earth. From experimentation with cooling globes, he found that the age of the Earth was not only 4,000 or 5,500 years as inferred from the Bible, but rather 75,000 years. Another individual who described the history of the Earth with reference to neither God nor the Bible was the philosopher Immanuel Kant, who published his Universal Natural History and Theory of the Heavens (Allgemeine Naturgeschichte und Theorie des Himmels) in 1755. From the works of these respected men, as well as others, it became acceptable by the mid eighteenth century to question the age of the Earth. This questioning represented a turning point in the study of the Earth. It was now possible to study the history of the Earth from a scientific perspective without religious preconceptions.
18th century:
With the application of scientific methods to the investigation of the Earth's history, the study of geology could become a distinct field of science. To begin with, the terminology and definition of what constituted geological study had to be worked out. The term "geology" was first used technically in publications by two Genevan naturalists, Jean-André Deluc and Horace-Bénédict de Saussure, though "geology" was not well received as a term until it was taken up in the very influential compendium, the Encyclopédie, published beginning in 1751 by Denis Diderot. Once the term was established to denote the study of the Earth and its history, geology slowly became more generally recognized as a distinct science that could be taught as a field of study at educational institutions. In 1741 the best-known institution in the field of natural history, the National Museum of Natural History in France, created the first teaching position designated specifically for geology. This was an important step in further promoting knowledge of geology as a science and in recognizing the value of widely disseminating such knowledge.
18th century:
By the 1770s, chemistry was starting to play a pivotal role in the theoretical foundation of geology and two opposite theories with committed followers emerged. These contrasting theories offered differing explanations of how the rock layers of the Earth's surface had formed. One suggested that a liquid inundation, perhaps like the biblical deluge, had created all geological strata. The theory extended chemical theories that had been developing since the seventeenth century and was promoted by Scotland's John Walker, Sweden's Johan Gottschalk Wallerius and Germany's Abraham Werner. Of these names, Werner's views became internationally influential around 1800. He argued that the Earth's layers, including basalt and granite, had formed as a precipitate from an ocean that covered the entire Earth. Werner's system was influential and those who accepted his theory were known as Diluvianists or Neptunists. The Neptunist thesis was the most popular during the late eighteenth century, especially for those who were chemically trained. However, another thesis slowly gained currency from the 1780s forward. Instead of water, some mid-eighteenth-century naturalists such as Buffon had suggested that strata had been formed through heat (or fire). The thesis was modified and expanded by the Scottish naturalist James Hutton during the 1780s. He argued against the theory of Neptunism, proposing instead a theory based on heat. Those who followed this thesis during the early nineteenth century referred to this view as Plutonism: the formation of the Earth through the gradual solidification of a molten mass at a slow rate by the same processes that had occurred throughout history and continued in the present day. This led him to the conclusion that the Earth was immeasurably old and could not possibly be explained within the limits of the chronology inferred from the Bible. Plutonists believed that volcanic processes were the chief agent in rock formation, not water from a Great Flood.
19th century:
In the early 19th century, the mining industry and Industrial Revolution stimulated the rapid development of the stratigraphic column – "the sequence of rock formations arranged according to their order of formation in time." In England, the mining surveyor William Smith, starting in the 1790s, found empirically that fossils were a highly effective means of distinguishing between otherwise similar formations of the landscape as he travelled the country working on the canal system, and produced the first geological map of Britain. At about the same time, the French comparative anatomist Georges Cuvier, assisted by his colleague Alexandre Brongniart at the École des Mines de Paris, realized that the relative ages of fossils could be determined from a geological standpoint, in terms of what layer of rock the fossils are located in and the distance these layers of rock are from the surface of the earth. Through the synthesis of their findings, Brongniart and Cuvier realized that different strata could be identified by fossil contents and thus each stratum could be assigned to a unique position in a sequence. After the publication of Cuvier and Brongniart's book, "Description géologique des environs de Paris", in 1811, which outlined the concept, stratigraphy became very popular amongst geologists; many hoped to apply this concept to all the rocks of the earth. During this century various geologists further refined and completed the stratigraphic column. For instance, in 1833 while Adam Sedgwick was mapping rocks that he had established were from the Cambrian Period, Charles Lyell was elsewhere suggesting a subdivision of the Tertiary Period; whilst Roderick Murchison, mapping into Wales from a different direction, was assigning the upper parts of Sedgwick's Cambrian to the lower parts of his own Silurian Period. The stratigraphic column was significant because it supplied a method to assign a relative age to these rocks by slotting them into different positions in their stratigraphical sequence. This created a global approach to dating the age of the Earth and allowed for further correlations to be drawn from similarities found in the makeup of the Earth's crust in various countries.
19th century:
In early nineteenth-century Britain, catastrophism was adapted with the aim of reconciling geological science with religious traditions of the biblical Great Flood. In the early 1820s English geologists including William Buckland and Adam Sedgwick interpreted "diluvial" deposits as the outcome of Noah's flood, but by the end of the decade they revised their opinions in favour of local inundations. Charles Lyell challenged catastrophism with the publication in 1830 of the first volume of his book Principles of Geology, which presented a variety of geological evidence from England, France, Italy and Spain to prove Hutton's ideas of gradualism correct. He argued that most geological change had been very gradual in human history. Lyell provided evidence for Uniformitarianism, a geological doctrine holding that processes occur at the same rates in the present as they did in the past and account for all of the Earth's geological features. Lyell's works were popular and widely read, and the concept of Uniformitarianism took a strong hold in geological society. In 1831 Captain Robert FitzRoy, given charge of the coastal survey expedition of HMS Beagle, sought a suitable naturalist to examine the land and give geological advice. This fell to Charles Darwin, who had just completed his BA degree and had accompanied Sedgwick on a two-week Welsh mapping expedition after taking his spring course on geology. FitzRoy gave Darwin Lyell's Principles of Geology, and Darwin became an advocate of Lyell's ideas, inventively theorising on uniformitarian principles about the geological processes he saw, and even challenging some of Lyell's ideas. He speculated about the Earth expanding to explain uplift, then, on the basis of the idea that ocean areas sank as land was uplifted, theorised that coral atolls grew from fringing coral reefs round sinking volcanic islands. This idea was confirmed when the Beagle surveyed the Cocos (Keeling) Islands, and in 1842 he published his theory on The Structure and Distribution of Coral Reefs. Darwin's discovery of giant fossils helped to establish his reputation as a geologist, and his theorising about the causes of their extinction led to his theory of evolution by natural selection, published in On the Origin of Species in 1859. The practical economic value of geological data motivated some governments to support geological research. During the 19th century several countries, including Canada, Australia, Great Britain and the United States, initiated geological surveying that would produce geological maps of vast areas of the countries. Geological mapping provides the location of useful rocks and minerals, and such information could be used to benefit the country's mining and quarrying industries. With the government and industrial funding of geological research, more individuals undertook the study of geology as technology and techniques improved, leading to the expansion of the field. In the 19th century, geological inquiry had estimated the age of the Earth in terms of millions of years. In 1862, the physicist William Thomson, 1st Baron Kelvin, published calculations that fixed the age of Earth at between 20 million and 400 million years. He assumed that Earth had formed as a completely molten object, and estimated the amount of time it would take for the near-surface to cool to its present temperature.
Many geologists contended that Thomson's estimates were inadequate to account for observed thicknesses of sedimentary rock, evolution of life, and the formation of the crystalline basement rocks beneath the sedimentary cover. The discovery of radioactivity in the early twentieth century provided an additional source of heat within the Earth, allowing for an increase in Thomson's calculated age, as well as a means of dating geological events.
20th century:
By the early 20th century, radiogenic isotopes had been discovered and radiometric dating had been developed. In 1911 Arthur Holmes, among the pioneers in the use of radioactive decay as a means of measuring geological time, dated a sample from Ceylon at 1.6 billion years old using lead isotopes. In 1913 Holmes was on the staff of Imperial College when he published his famous book The Age of the Earth, in which he argued strongly in favour of the use of radiometric dating methods rather than methods based on geological sedimentation or cooling of the Earth (many people still clung to Lord Kelvin's calculations of less than 100 million years). Holmes estimated the oldest Archean rocks to be 1,600 million years old, but did not speculate about the Earth's age. His promotion of the theory over the next decades earned him the nickname of Father of Modern Geochronology. In 1921, attendees at the yearly meeting of the British Association for the Advancement of Science came to a rough consensus that the age of the Earth was a few billion years and that radiometric dating was credible. Holmes published The Age of the Earth, an Introduction to Geological Ideas in 1927, in which he presented a range of 1.6 to 3.0 billion years, increasing the estimate in the 1940s to 4,500 ± 100 million years, based on measurements of the relative abundance of uranium isotopes established by Alfred O. C. Nier. Theories that did not comply with the scientific evidence that established the age of the Earth could no longer be accepted. The established age of the Earth has been refined since then but has not significantly changed.
20th century:
In 1912 Alfred Wegener proposed the theory of continental drift. This theory suggests that the shapes of continents and matching coastline geology between some continents indicates they were joined together in the past and formed a single landmass known as Pangaea; thereafter they separated and drifted like rafts over the ocean floor, currently reaching their present position. Additionally, the theory of continental drift offered a possible explanation as to the formation of mountains; plate tectonics built on the theory of continental drift.
20th century:
Unfortunately, Wegener provided no convincing mechanism for this drift, and his ideas were not generally accepted during his lifetime. Arthur Holmes accepted Wegener's theory and provided a mechanism, mantle convection, to cause the continents to move. However, it was not until after the Second World War that new evidence started to accumulate that supported continental drift. There followed a period of 20 years during which the theory of continental drift developed from being believed by a few to being the cornerstone of modern geology. Beginning in 1947 research provided new evidence about the ocean floor, and in 1960 Bruce C. Heezen published the concept of mid-ocean ridges. Soon after this, Robert S. Dietz and Harry H. Hess proposed that the oceanic crust forms as the seafloor spreads apart along mid-ocean ridges in seafloor spreading. This was seen as confirmation of mantle convection, and so the major stumbling block to the theory was removed. Geophysical evidence suggested lateral motion of continents and that oceanic crust is younger than continental crust. This geophysical evidence also spurred the study of paleomagnetism, the record of the orientation of the Earth's magnetic field preserved in magnetic minerals. The British geophysicist S. K. Runcorn suggested the concept of paleomagnetism from his finding that the continents had moved relative to the Earth's magnetic poles. Tuzo Wilson, who was a promoter of the sea floor spreading hypothesis and continental drift from the very beginning, added the concept of transform faults to the model, completing the classes of fault types necessary to make the mobility of the plates on the globe function. A symposium on continental drift held at the Royal Society of London in 1965 must be regarded as the official start of the acceptance of plate tectonics by the scientific community. The abstracts from the symposium were issued as Blackett, Bullard and Runcorn (1965). In this symposium, Edward Bullard and co-workers showed with a computer calculation how the continents along both sides of the Atlantic would best fit to close the ocean, which became known as the famous "Bullard's Fit". By the late 1960s the weight of the evidence available saw continental drift as the generally accepted theory.
Modern geology:
By applying sound stratigraphic principles to the distribution of craters on the Moon, it can be argued that, almost overnight, Gene Shoemaker took the study of the Moon away from lunar astronomers and gave it to lunar geologists.
Modern geology:
In recent years, geology has continued its tradition as the study of the character and origin of the earth, its surface features and internal structure. What changed in the later 20th century is the perspective of geological study. Geology was now studied using a more integrative approach, considering the Earth in a broader context encompassing the atmosphere, biosphere and hydrosphere. Satellites located in space that take wide scope photographs of the earth provide such a perspective. In 1972, The Landsat Program, a series of satellite missions jointly managed by NASA and the U.S. Geological Survey, began supplying satellite images that can be geologically analyzed. These images can be used to map major geological units, recognize and correlate rock types for vast regions and track the movements of Plate Tectonics. A few applications of this data include the ability to produce geologically detailed maps, locate sources of natural energy and predict possible natural disasters caused by plate shifts. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Accretion (coastal management)**
Accretion (coastal management):
Accretion is the process of coastal sediment returning to the visible portion of a beach or foreshore after a submersion event. A sustainable beach or foreshore often goes through a cycle of submersion during rough weather and later accretion during calmer periods.
If a coastline is not in a healthy sustainable state, erosion can be more serious, and accretion does not fully restore the original volume of the visible beach or foreshore, which leads to permanent beach loss. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Philosophy of statistics**
Philosophy of statistics:
The philosophy of statistics involves the meaning, justification, utility, use and abuse of statistics and its methodology, and ethical and epistemological issues involved in the consideration of choice and interpretation of data and methods of statistics.
Topics of interest:
Foundations of statistics involves issues in theoretical statistics, its goals and optimization methods to meet these goals, parametric assumptions or lack thereof considered in nonparametric statistics, model selection for the underlying probability distribution, and interpretation of the meaning of inferences made using statistics, related to the philosophy of probability and the philosophy of science. Discussion of the selection of the goals and the meaning of optimization, in foundations of statistics, are the subject of the philosophy of statistics. Selection of distribution models, and of the means of selection, is the subject of the philosophy of statistics, whereas the mathematics of optimization is the subject of nonparametric statistics.
Topics of interest:
David Cox makes the point that any kind of interpretation of evidence is in fact a statistical model, although it is known through Ian Hacking's work that many are ignorant of this subtlety.
Issues involving sample size, such as cost and efficiency, are common, for example in polling and pharmaceutical research.
Extra-mathematical considerations arise in the design of experiments, and accommodating these issues is necessary in most actual experiments.
The motivation and justification of data analysis and experimental design, as part of the scientific method, are considered.
Distinctions between induction and logical deduction relevant to inferences from data and evidence arise, such as when frequentist interpretations are compared with degrees of certainty derived from Bayesian inference. However, the difference between induction and ordinary reasoning is not generally appreciated.
Leo Breiman exposed the diversity of thinking in his article on 'The Two Cultures', making the point that statistics has several kinds of inference to make, modelling and prediction amongst them.
Issues in the philosophy of statistics arise throughout the history of statistics. Causality considerations arise with interpretations of, and definitions of, correlation, and in the theory of measurement.
Objectivity in statistics is often confused with truth whereas it is better understood as replicability, which then needs to be defined in the particular case. Theodore Porter develops this as being the path pursued when trust has evaporated, being replaced with criteria.
Topics of interest:
Ethics associated with epistemology and medical applications arise from the potential abuse of statistics, such as selection of method or transformations of the data to arrive at different probability conclusions for the same data set. An example is the meaning of applying a statistical inference to a single person, such as an individual cancer patient, when there is no frequentist interpretation for that patient to adopt.
Topics of interest:
Campaigns for statistical literacy must wrestle with the problem that most interesting questions around individual risk are very difficult to determine or interpret, even with the computer power currently available. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**TYPO3**
TYPO3:
TYPO3 is a Web content management system (CMS) written in the programming language PHP. It can run on a variety of web servers, such as Apache, Nginx, or Internet Information Services (IIS), and on many operating systems, including Linux, Microsoft Windows, FreeBSD, macOS, and OS/2. It is free and open-source software released under the GNU General Public License version 2.
TYPO3:
TYPO3 is similar to other popular content management systems such as Drupal, Joomla! and WordPress. It is used more widely in Europe than in other regions, with a larger market share in German-speaking countries. TYPO3 is credited with being highly flexible, as code and content are handled separately. It can be extended with new functions without writing any program code. TYPO3 supports publishing content in multiple languages through its built-in localization system. Because of features such as an editorial workplace and workflow, advanced frontend editing, scalability and maturity, TYPO3's makers classify it as an enterprise-level content management system.
History and usage:
TYPO3 was initially authored by the Dane Kasper Skårhøj in 1997. It is now developed by over 300 contributors under the leadership of Benjamin Mack (Core team leader) and Mathias Schreiber (Product Owner). Calculations from the TYPO3 Association show that it is currently used in more than 500,000 installations. The number of installations detected by the public website "CMS Crawler" was around 384,000 by February 2017.
Features:
TYPO3 provides a base set of interfaces, functions and modules. Most functionality exceeding the base set needs extensions. More than 5000 extensions are currently available for TYPO3 for download under the GNU General Public License from a repository called the TYPO3 Extension Repository, or TER. TYPO3 can run on most HTTP servers such as Apache, Nginx or IIS on top of Linux, Microsoft Windows or macOS. It uses PHP 7.2 or newer and any relational database supported by the TYPO3 DBAL, including MySQL, MariaDB, PostgreSQL, and SQLite. Some third-party extensions – not using the database API – support MySQL as the only database engine.
Features:
The system can be run on any web server with at least 256 MB RAM and a CPU appropriate for that RAM. The backend can be displayed in any modern browser with JavaScript. There is no browser restriction for displaying user-oriented content generated by TYPO3.
Features:
Since version 4.5, TYPO3 has been published with a demo website called "Introduction Package". The website serves as a tutorial for setting up a working example website and allows experimenting with built-in features. The package can be enabled from the install tool. Building basic proficiency in TYPO3 takes between a few weeks and a few months. For an author or editor who administers and operates a TYPO3-based website, this requirement can range from a few minutes to a few hours. A developer setting up a website with TYPO3 would need to work intensively with the meta-language TypoScript.
Features:
System architecture: conceptually, TYPO3 consists of two parts, the frontend (visible to visitors) and the administrative backend. The frontend displays the web content. The backend is responsible for administration and managing content. The core functions of TYPO3 include user privileges and user roles, timed display control of content (show/hide content elements), a search function for static and dynamic content, search-engine-friendly URLs, an automatic sitemap, multi-language capability for frontend and backend, and more.
Features:
Like most modern CMSes, TYPO3 follows the policy of separation of content and layout: The website content is stored in a relational database, while the page templates are stored on the file system. Therefore, both can be managed and updated separately.
TYPO3 defines various basic types of content data. Standard content elements are described as text, text with media, images, (plain) HTML, video etc. Various added types of content elements can be handled using extensions.
Features:
The fundamental content unit is a "page". Pages represent a URL in the frontend and are organized hierarchically in the backend's page tree. Standard pages serve as "containers" for one or multiple content elements. There are several additional special page types, including: shortcuts (which show content from another page), mount points (which insert a part of the page tree at the mount point), external URLs, and system folders (which handle complex data such as registered users). Internally, TYPO3 is managed by various PHP arrays. They contain all the information necessary to generate HTML code from the content stored in the database. This is achieved by a unique configuration language called TypoScript.
Features:
Design elements: designing and developing with TYPO3 is commonly based on the following elements, among others. Page tree: representation of all pages of a site, their structure and properties.
Features:
Constants: system-wide configuration parameters. Template: since TYPO3 6, the system runs on the templating engine Fluid, which combines HTML markup with conditions and control structures and can be extended by custom view helpers written in PHP. Until version 4.3, an HTML skeleton was used, with markers (e.g., ###MARKER###) and range markers called subparts (e.g., <!-- ###CONTENT### Start --> … <!-- ###CONTENT### End -->) that were replaced by various content elements or served as subtemplates; this template system can still be found in older extensions or installations. TypoScript: TypoScript is a purely declarative configuration language in which configuration values are defined and then parsed into a system-wide PHP array. TypoScript is object-based and organized in a tree-like structure.
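To illustrate the idea of dotted configuration keys being resolved into one tree, here is a toy parser in Python. It is not TYPO3's actual parser (which is written in PHP and, unlike this sketch, also keeps an object type such as page.10 = TEXT alongside the children of page.10); the input lines are hypothetical TypoScript-style assignments.

```python
# Toy illustration only: resolving dotted TypoScript-style keys into a nested
# structure, analogous to the system-wide array TYPO3 builds in PHP.
# Limitation: unlike real TypoScript, this sketch cannot store an object type
# (e.g. "page.10 = TEXT") and child properties under the same key.

def parse_typoscript_like(lines):
    tree = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue                                  # skip comments and non-assignments
        path, value = (part.strip() for part in line.split("=", 1))
        *parents, leaf = path.split(".")
        node = tree
        for key in parents:
            node = node.setdefault(key, {})           # walk/create the tree
        node[leaf] = value
    return tree

example = [
    "# hypothetical TypoScript-style input",
    "page.10.value = Hello world",
    "page.10.wrap = <h1>|</h1>",
    "config.language = en",
]
print(parse_typoscript_like(example))
# {'page': {'10': {'value': 'Hello world', 'wrap': '<h1>|</h1>'}}, 'config': {'language': 'en'}}
```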
Features:
Extensions: added plug-ins to enable more functions (see the Extensions section below).
PHP: TYPO3 CMS is written in PHP, so most features can be modified or extended by experienced users. For example, the XCLASS mechanism allows classes and methods to be overwritten and extended; where available, hooks are preferred.
Features:
Extensions: extensions are the cornerstone of the internal architecture of TYPO3. A feature that was introduced with version 3.5 in 2003 is the Extension Manager, a control center managing all TYPO3 extensions. The division between the TYPO3 core and the extensions is an important concept which determined the development of TYPO3 in the past years. Extensions are designed in a way so they can supplement the core seamlessly. This means that a TYPO3 system will appear as a unit while actually being composed of the core application and a set of extensions providing various features.
Features:
They can be downloaded from the online repository (TER) directly from the backend, and are installed and updated with a few clicks. Every extension is identified by a unique extension key (for example, tt_news). Also, developers can share new or modified extensions by uploading them to the repository. Generally, extensions are written in PHP. The full command set of PHP 5.3 can be used (subject to the system requirements of the specific TYPO3 version), but TYPO3 also provides several library classes for better efficiency. The best known and most used is the piBase library class. With the introduction of TYPO3 4.3 in 2009, piBase was replaced (or extended) by the Extbase library, which is a modern, model–view–controller (MVC) based development framework. To ensure backward compatibility, both libraries can be used in the same TYPO3 installation. Extbase is a backport of some features of FLOW3, renamed Neos Flow, a general web application framework.
Notable projects:
As it is classified as an enterprise CMS, many global companies and organisations base their web or intranet sites on TYPO3. The majority are based in German-speaking countries, such as the state of Saxony-Anhalt, the German Green Party, the University of Lucerne (Switzerland), the University of Vienna (Austria) and the Technical University of Berlin. International organisations running one or more TYPO3 sites are: Airbus, Konica-Minolta, Leica Microsystems, Air France, Greenpeace, and Meda (Sweden).
Releases:
Version history: Neos. A completely rewritten version (code-named "Phoenix") was originally planned as TYPO3 version 5.0. While working on this new release and analyzing the 10-year history and complexity of TYPO3 v4, the TYPO3 community decided to branch out version 5 as a completely separate product, one that wouldn't replace version 4 in the near future and as such needed to have its own name. Published as FLOW3, now renamed Neos Flow, it, along with various other packages, then served as the basis for the start of development of project Phoenix. In September 2012, the TYPO3 developers decided on the name for the new product, "TYPO3 Neos". With TYPO3 Neos 1.0 alpha1, a public test version was released in late 2012. In May 2015 the TYPO3 Association and the Neos team decided to go separate ways, with TYPO3 CMS remaining the only CMS product endorsed by the Association and the Neos team publishing Neos as a stand-alone CMS without any connection to the TYPO3 world. In January 2017, Neos 3.0 was published, along with a new version of the Flow framework and a change of the name of its configuration language from TypoScript2 to Fusion. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |