**Drug nomenclature**
Drug nomenclature is the systematic naming of drugs, especially pharmaceutical drugs. In most circumstances, drugs have three types of names: chemical names, the most important of which is the IUPAC name; generic or nonproprietary names, the most important of which are international nonproprietary names (INNs); and trade names, which are brand names. Under the INN system, generic names for drugs are constructed out of affixes and stems that classify the drugs into useful categories while keeping related names distinguishable. A marketed drug might also have a company code or compound code.
Legal regulation:
Drug names are often subject to legal regulation, including approval for new drugs (to avoid confusion with existing drugs) and on packaging to establish clear rules about adulterants and fraudulent or misleading labeling. A national formulary is often designated to define drug names (and purity standards) for regulatory purposes. The legally approved names in various countries include the Australian Approved Name, the Brazilian Nonproprietary Name, the British Approved Name, the Dénomination Commune Française (France), the Denominazione Comune Italiana (Italy, generic name), the Japanese Accepted Name, and the United States Adopted Name. The World Health Organization administers the international nonproprietary name list.
A company or person developing a drug can apply for a generic (nonproprietary) name through their national formulary or directly to the WHO INN Programme. To minimize confusion, many of the national naming bodies have policies of maintaining harmony between national nonproprietary names and INNs, and the European Union has mandated this harmonization for all member states. In the United States, the developer applies to the United States Adopted Name (USAN) Council, and a USAN negotiator applies for the INN on the developer's behalf.
Chemical names:
The chemical names are the scientific names, based on the molecular structure of the drug. There are various systems of chemical nomenclature and thus various chemical names for any one substance. The most important is the IUPAC name. Chemical names are typically very long and too complex to be commonly used in referring to a drug in speech or in prose documents. For example, "1-(isopropylamino)-3-(1-naphthyloxy)propan-2-ol" is a chemical name for propranolol. Sometimes, a company that is developing a drug might give the drug a company code, which is used to identify the drug while it is in development. For example, CDP870 was UCB's company code for certolizumab pegol; UCB later chose "Cimzia" as its trade name. Many of these codes, although not all, have prefixes that correspond to the company name.
Nonproprietary (generic) names:
Generic names are used for a variety of reasons. They provide a clear and unique identifier for active chemical substances, appearing on all drug labels, advertising, and other information about the substance. Relatedly, they help maintain a clear distinction between proprietary and nonproprietary products, which sellers of proprietary products have an incentive to blur, and so allow like-for-like comparison. They are used in scientific descriptions of the chemical, in discussions of the chemical in the scientific literature, and in descriptions of clinical trials. Generic names usually indicate via their stems what drug class the drug belongs to. For example, one can tell that aciclovir is an antiviral drug because its name ends in the -vir suffix.
History:
The earliest roots of standardization of generic names for drugs began with city pharmacopoeias, such as the London, Edinburgh, Dublin, Hamburg, and Berlin Pharmacopoeias. The fundamental advances in chemistry during the 19th century made that era the first time in which what we now call chemical nomenclature, a huge profusion of names based on atoms, functional groups, and molecules, was necessary or conceivable. In the second half of the 19th century and the early 20th, city pharmacopoeias were unified into national pharmacopoeias (such as the British Pharmacopoeia, United States Pharmacopeia, Pharmacopoeia Germanica (PhG or PG), Italian Pharmacopeia, and Japanese Pharmacopoeia) and national formularies (such as the British National Formulary, the Australian Pharmaceutical Formulary, and the National Formulary of India). International pharmacopeias, such as the European Pharmacopoeia and the International Pharmacopoeia of the World Health Organization (WHO), have been the next level.
In 1953 the WHO created the International Nonproprietary Name (INN) system, which issues INNs in various languages, including Latin, English, French, Spanish, Russian, Chinese, and Arabic. Several countries also have national-level systems for creating generic drug names, including the British Approved Name (BAN) system, the Australian Approved Name (AAN) system, the United States Adopted Name (USAN) system (which is mostly the same as the United States Pharmacopeia (USP) system), and the Japanese Accepted Name (JAN) system. At least several of these national-level Approved Name/Adopted Name/Accepted Name systems were not created until the 1960s, after the INN system already existed. In the 21st century, increasing globalization is encouraging maximal rationalization for new generic names for drugs, and there is an increasing expectation that new USANs, BANs, and JANs will not differ from new INNs without special justification.
During the first half of the 20th century, generic names for drugs were often coined by contracting the chemical names into fewer syllables. Such contraction was partially, informally, locally standardized, but it was not universally consistent. In the second half of the 20th century, the nomenclatural systems moved away from such contraction toward the present system of stems and affixes that show chemical relationships.
Biopharmaceuticals have posed a challenge in nonproprietary naming because, unlike smaller molecules made with total synthesis or semisynthesis, there is less assurance of complete fungibility between products from different manufacturers. Just as wine may vary by strain of yeast and year of grape harvest, so each product can be subtly different because living organisms are an integral part of production. The WHO MedNet community continually works to augment its system for biopharmaceuticals to ensure continued fulfillment of the goals served by having nonproprietary names; in recent years the development of the Biological Qualifier system has been an example. The prefixes and interfixes have no pharmacological significance and are used to separate the drug from others in the same class. Suffixes or stems may be found in the middle or, more often, at the end of the drug name, and normally suggest the action of the drug. Generic names often have suffixes that define what class the drug is in.
List of stems and affixes:
More comprehensive lists can be found at the National Library of Medicine's Drug Information Portal or in Appendix VII of the USP Dictionary.
Example breakdown of a drug name:
If the name of the drug solanezumab were to be broken down, it would be divided into two parts like this: solane-zumab. -Zumab is the suffix for a humanized monoclonal antibody. Monoclonal antibodies by definition contain only a single antibody clone and have binding specificity for one particular epitope. In the case of solanezumab, the antibody is designed to bind to the amyloid-β peptides that make up protein plaques on the neurons of people with Alzheimer's disease.
See also:
Time release technology § List of abbreviations for formulation suffixes.
Combination drug products:
For combination drug products—those with two or more drugs combined into a single dosage form—single nonproprietary names beginning with "co-" exist in both British Approved Name (BAN) form and in a formerly maintained USP name called the pharmacy equivalent name (PEN). Otherwise the two names are simply both given, joined by hyphens or slashes. For example, suspensions combining trimethoprim and sulfamethoxazole are called either trimethoprim/sulfamethoxazole or co-trimoxazole. Similarly, co-codamol is codeine-paracetamol (acetaminophen), and co-triamterzide is triamterene-hydrochlorothiazide. The USP ceased maintaining PENs, but the similar "co"-prefixed BANs are still current.
Pronunciation:
Most commonly, a nonproprietary drug name has one widely agreed pronunciation in each language; doxorubicin, for example, is pronounced consistently in English. Trade names almost always have one accepted pronunciation, because the sponsoring company that coined the name has an intended pronunciation for it.
However, it is also common for a nonproprietary drug name to have two pronunciation variants, or sometimes three. Paracetamol, for example, has two common pronunciations, and one medical dictionary records a third. Some of the variation comes from the fact that some stems and affixes themselves have pronunciation variants; the aforementioned third (and least common) pronunciation for paracetamol reflects a variant treatment of the acet affix (more than one pronunciation of acetyl is accepted).
The World Health Organization does not give suggested pronunciations for its INNs, but familiarity with the typical sounds and spellings of the stems and affixes often points to the widely accepted pronunciation of any given INN; the pronunciation of abciximab, for example, is predictable because the sound of INNs ending in -ciximab is familiar. The United States Pharmacopeia gives suggested pronunciations for most USANs in its USP Dictionary, which is published in annual editions. Medical dictionaries give pronunciations of many drugs that are both commonly used and have been commercially available for a decade or more, although many newer or less common drugs are not entered. Pharmacists also have access to pronunciations from various clinical decision support systems such as Lexi-comp.
Drug brands:
For drugs that make it all the way through development, testing, and regulatory acceptance, the pharmaceutical company then gives the drug a trade name, which is a standard term in the pharmaceutical industry for a brand name or trademark name. For example, Lipitor is Pfizer's trade name for atorvastatin, a cholesterol-lowering medication. Many drugs have multiple trade names, reflecting marketing in different countries, manufacture by different companies, or both. Thus the trade names for atorvastatin include not only Lipitor (in the U.S.) but also Atocor (in India).
Publication policies for nonproprietary and proprietary names:
In the scientific literature, there is a set of strong conventions for drug nomenclature regarding the letter case and placement of nonproprietary and proprietary names, as follows: Nonproprietary names begin in lowercase; trade names begin with a capital.
Unbiased mentions of a drug place the nonproprietary name first and follow it with the trade name in parentheses, if relevant (for example, "doxorubicin (Adriamycin)").
This pattern is important in the scientific literature, where conflict of interest is disclosed or avoided. The authors reporting on a study are not endorsing any particular brand of drug. They will often state which brand was used, for methodologic validity (fully disclosing all details that might possibly affect reproducibility), but they do so in a way that makes clear the absence of endorsement. For example, the 2015 American Society of Hematology (ASH) publication policies say, "Non-proprietary (generic/scientific) names should be used and should be lowercase." ... "[T]he first letter of the name of a proprietary drug should be capitalized." ... "If necessary, you may include a proprietary name in parentheses directly following the generic name after its first mention." Valid exceptions to the general pattern occur when a nonproprietary name starts a sentence (and thus takes a capital), when a proprietary name has intercapping (for example, GoLYTELY, MiraLAX), or when tall-man letters are used within nonproprietary names to prevent confusion of similar names (for example, predniSONE versus predniSOLONE).
**Engineering education**
Engineering education is the activity of teaching knowledge and principles related to the professional practice of engineering. It includes an initial education (bachelor's and/or master's degree) and any advanced education and specializations that follow. Engineering education is typically accompanied by additional postgraduate examinations and supervised training as the requirements for a professional engineering license. The length of education and training to qualify as a basic professional engineer is typically five years, with 15–20 years for an engineer who takes responsibility for major projects.
Science, technology, engineering, and mathematics (STEM) education in primary and secondary schools often serves as the foundation for engineering education at the university level. In the United States, engineering education is a part of the STEM initiative in public schools. Service-learning is gaining popularity across the disciplinary focuses within engineering education, including chemical, civil, mechanical, industrial, computer, electrical, and architectural engineering.
Africa:
Kenya:
Engineering training in Kenya is typically provided by the universities. Registration of engineers is governed by the Engineers Registration Act. A candidate qualifies as a registered engineer, R.Eng., if they hold a minimum of four years of post-secondary engineering education and a minimum of three years of postgraduate work experience. All registrations are undertaken by the Engineers Registration Board, a statutory body established through an Act of the Kenyan Parliament in 1969; a minor revision was made in 1992 to accommodate the Technician Engineer grade. The board has been given the responsibility of regulating the activities and conduct of practicing engineers in the Republic of Kenya in accordance with the functions and powers conferred upon it by the Act. Under CAP 530 of the Laws of Kenya, it is illegal for an engineer to practice or call themself an engineer if not registered with the board. Registration with the board is thus a license to practice engineering in Kenya.
Nigeria:
Engineering training is provided by universities subject to accreditation by the National Universities Commission (NUC) and the Council for the Regulation of Engineering in Nigeria (COREN). A candidate can be registered as an engineer after completion of a five-year Bachelor's degree (or equivalent) and four years of post-graduate work experience. Previously, postgraduate education in engineering could be counted towards work experience. A candidate trained through a polytechnic may also be certified as a registered engineer on completion of a two-year Ordinary National Diploma (OND), a two-year Higher National Diploma (HND), and a post-graduate diploma (PGD), all in the same engineering discipline, with two years of work experience after the PGD. Registration allows use of the protected title Registered Engineer (R.Eng). Any person not registered as an engineer may not use any title that implies that they are a Registered Engineer. It is illegal to carry out engineering practice without a COREN registration, and all unregistered engineers must work under the supervision of an R.Eng.
COREN also recognizes other cadres of engineering work: technologists, technicians, and craftsmen. Technologists and technicians are trained by polytechnics, while craftsmen are trained by technical colleges. Technologists can become Registered Engineering Technologists (R.Eng Tech) on completion of a two-year Ordinary National Diploma (OND), a two-year Higher National Diploma (HND), and three years of post-graduate work experience. A technician can be certified as a Registered Engineering Technician (Eng Tech) after completion of a two-year OND and two years of post-graduate work experience. A craftsman can become a Registered Engineering Craftsman after passing the technical exam of the West African Examinations Council or the National Business and Technical Examinations Board, or a Trade Test Grade 1 from the Federal Ministry of Labour; in addition, two years of work experience are required.
South Africa:
Engineering training in South Africa is typically provided by the universities, universities of technology, and colleges for Technical and Vocational Education and Training (previously Further Education and Training). The qualifications provided by these institutions must be accredited by the Engineering Council of South Africa (ECSA) for graduates and diplomates of these institutions to be registered as Candidate Certificated Engineers, Candidate Engineers, Candidate Engineering Technologists, or Candidate Engineering Technicians. Registration in these candidate categories carries a number of benefits.
The academic training performed by the universities is typically in the form of a four-year BSc(Eng), BIng, or BEng degree. For the degree to be accredited, the course material must conform to the ECSA Graduate Attributes (GA).
Professional Engineers (Pr Eng) are persons accredited by ECSA as engineering professionals. Legally, a Professional Engineer's sign-off is required for any major project to be implemented, in order to ensure the safety and standards of the project. Professional Engineering Technologists (Pr Tech Eng) and Professional Engineering Technicians (Pr Techni Eng) are other members of the engineering team.
Professional Certificated Engineers (Pr Cert Eng) are people who hold one of seven Government Certificates of Competency and who have been registered by ECSA as engineering professionals.
The categories of professionals are differentiated by the degree of complexity of the work carried out: Professional Engineers are expected to solve complex engineering problems; Professional Engineering Technologists and Professional Certificated Engineers, broadly defined engineering problems; and Professional Engineering Technicians, well-defined engineering problems.
Tanzania:
Engineering training in Tanzania is typically provided by various universities and technical institutions in the country. Graduate engineers are registered by the Engineers Registration Board (ERB) after undergoing three years of practical training. A candidate qualifies as a professional engineer, P.Eng., if they hold a minimum of four years of post-secondary engineering education and a minimum of three years of postgraduate work experience. The Engineers Registration Board is a statutory body established through an Act of the Tanzanian Parliament in 1968; a minor revision was made in 1997 to address the issue of engineering professional excellence in the country.
The board has been given the responsibility of regulating the activities and conduct of practicing engineers in the United Republic of Tanzania in accordance with the functions and powers conferred upon it by the Act. According to Tanzanian law, it is illegal for an engineer to practice or call themself an engineer if not registered with the board. Registration with the board is thus a license to practice engineering in the United Republic of Tanzania.
Asia:
Bangladesh:
Bangladesh University of Engineering and Technology (BUET), Dhaka University of Engineering & Technology (DUET), Rajshahi University of Engineering & Technology (RUET), Chittagong University of Engineering & Technology (CUET), Khulna University of Engineering & Technology (KUET), Islamic University of Technology (IUT), Sylhet Engineering College (SEC), and Mymensingh Engineering College (MEC).
Hong Kong:
In Hong Kong, engineering degree programmes (4-year bachelor's degrees) are offered by public universities funded by the University Grants Committee (UGC). There are 94 UGC-funded programmes in engineering and technology offered by City University of Hong Kong, the Chinese University of Hong Kong, the Hong Kong Polytechnic University, the Hong Kong University of Science and Technology, and the University of Hong Kong. For example, the Faculty of Engineering of the University of Hong Kong (HKU) has five departments providing undergraduate, postgraduate, and research degrees in Civil Engineering, Computer Science, Electrical and Electronic Engineering, Industrial and Manufacturing Systems Engineering, and Mechanical Engineering. All Bachelor of Engineering programmes offered under the Joint University Programmes Admissions System (JUPAS) code 6963 are accredited by the Hong Kong Institution of Engineers (HKIE). With that standing, the professional qualification of HKU engineering graduates is mutually recognized by most countries, such as the United States, Australia, Canada, Japan, Korea, New Zealand, Singapore, and South Africa. Applicants with other local, international, or national qualifications such as GCE A-level, International Baccalaureate (IB), or SAT can apply through the Non-JUPAS route. The Hong Kong Institution of Engineers (the HKIE) accredits individual engineering degree programmes; the process of professional accreditation also considers the appropriate faculty in terms of its overall philosophy, objectives, and resources.
The professional accreditation of engineering degree programmes in the universities is normally initiated by a university issuing an invitation to the HKIE's Accreditation Board to carry out appropriate accreditation exercises. To become a professional engineer, senior secondary (Form 4 to Form 6) school students start by choosing science- and technology-related subjects, while at least passing English and Mathematics in the Hong Kong Diploma of Secondary Education examinations. Secondary school graduates then enroll in an HKIE-accredited engineering programme, join the universities' engineering students' societies, and join the HKIE as student members. After completing a bachelor's degree in engineering, graduates undergo two to three years of engineering graduate training and gain another two to three years of relevant working experience. Upon passing the Professional Assessment, the candidate is admitted as a member of the HKIE, finally becoming a Professional Engineer. The engineering profession in Hong Kong has 21 engineering disciplines, namely Aircraft, Biomedical, Building, Building Services, Chemical, Civil, Control, Automation & Instrumentation, Electrical, Electronics, Energy, Environmental, Fire, Gas, Geotechnical, Information, Logistics & Transportation, Manufacturing & Industrial, Marine & Naval Architecture, Materials, Mechanical, and Structural engineering. In 2019, the Asian Society of Engineering Education (AsiaSEE) was founded in Hong Kong by Dr. Cecilia K.Y. Chan and over twenty founding members around Asia. AsiaSEE is the first Asian regional network of higher education institution leaders with a commitment to improving engineering education. The vision of AsiaSEE is to be the trusted body in Asia facilitating communication and cooperation in engineering education between members, institutions, industries, stakeholders, and like-minded societies in the world.
The mission of AsiaSEE is to contribute to the advancement and enhancement of engineering education via research and practice for the future generation.
Uzbekistan:
Turin Polytechnic University in Tashkent, Tashkent State Technical University, Tashkent Institute of Irrigation and Melioration, and Tashkent Automobile and Road Construction Institute.
India:
More than 5,000 universities and colleges offer engineering courses in India.
Indonesia:
Sepuluh Nopember Institute of Technology, Bandung Institute of Technology, and the Faculties of Engineering of Sebelas Maret University, Andalas University, Sultan Ageng Tirtayasa University, the University of Indonesia, Gadjah Mada University, Diponegoro University, Universitas Negeri Padang, Universitas Negeri Malang, Hasanuddin University, and the University of Surabaya.
Malaysia:
Activities in engineering education in Malaysia are spearheaded by the Society of Engineering Education Malaysia (SEEM). SEEM was established in 2008 and launched on 23 February 2009; the idea of establishing the society was initiated in April 2005 with the creation of a pro-tem committee for SEEM. The objectives of the society are to contribute to the development of education in the fields of engineering education and science and technology, including teaching and learning, counseling, research, service, and public relations.
Institutions active in this area include Universiti Teknologi Malaysia (Centre for Engineering Education, CEE), Universiti Tunku Abdul Rahman, Tunku Abdul Rahman University College, Southern University College, and Universiti Malaysia Pahang.
Pakistan:
In Pakistan, engineering education is accredited by the Pakistan Engineering Council, a statutory body constituted under the PEC Act No. V of 1976 of the constitution of Pakistan, amended vide Ordinance No. XXIII of 2006, to regulate the engineering profession in the country. It aims to achieve rapid and sustainable growth in all national, economic, and social fields. The council is responsible for maintaining realistic and internationally relevant standards of professional competence and ethics for engineers in the country. PEC interacts with the government, both at the federal and provincial level, by participating in commissions, committees, and advisory bodies. PEC is a fully representative body of the engineering community in the country and has full signatory status with the Washington Accord.
Philippines:
The Professional Regulation Commission is the regulating body for engineers in the Philippines.
Sri Lanka:
Taiwan:
Engineering is one of the most popular majors among universities in Taiwan, with engineering degrees accounting for over a quarter of bachelor's degrees awarded. Campuses include the National Taiwan University of Science and Technology.
Europe:
Austria:
In Austria, similar to Germany, an engineering degree can be obtained from either universities or Fachhochschulen (universities of applied sciences). As in most of Europe, the education usually consists of a 3-year bachelor's degree and a 2-year master's degree.
A lower engineering qualification is offered by Höhere Technische Lehranstalten (HTL, Higher Technical Institutes), a form of secondary college spanning grades 9 to 13, with disciplines such as civil engineering, electronics, and information technology.
In the 5th year of HTL, as in other secondary schools in Austria, there is a final exam, called Matura. Graduates obtain an Ingenieur engineering degree after three years of work in the studied field.
Bulgaria:
Higher engineering education in Bulgaria was established by the Law for Establishing a Higher Technical School in Sofia in 1941. Only two years later, however, because of the bombing of Sofia, the school was evacuated to Lovech and regular classes were discontinued. The learning process started again in 1945, when the school became a State Polytechnic.
In Bulgaria, engineers are trained in the three basic degrees: bachelor, master, and doctor. Since the Bologna declaration, students receive a bachelor's degree (4 years of studies), optionally followed by a master's degree (1 year of studies). The science and engineering courses include lecture and laboratory education, with mathematics, physics, chemistry, and electrical engineering among the main subjects studied. The degree is received after passing a state exam or defending a thesis. Graduates are awarded the Ing. title, which is always placed in front of the name.
Some engineering specialties are completely traditional, such as machine building, computer and software engineering, automation, electrical engineering, and electronics. Newer specialties include engineering design, mechatronics, aviation engineering, and industrial engineering.
Engineers in Bulgaria are prepared mainly by the following technical universities: Technical University Sofia, Technical University Varna, Technical University Gabrovo, the University of Forestry, the University of Architecture, Civil Engineering and Geodesy, the University of Chemical Technology and Metallurgy Sofia, the Agricultural University Plovdiv, and the University of Mining and Geology "St. Ivan Rilski". The Bulgarian engineers are united in the Federation of Scientific and Technical Unions, established in 1949, which comprises 33 territorial and 19 national unions.
Denmark:
In Denmark, the engineering degree is delivered by either universities or engineering colleges (e.g. the Engineering College of Aarhus).
Students first receive a baccalaureate degree (3 years of studies), followed by a master's degree (1–2 years of studies), according to the principles of the Bologna declaration. The engineering doctorate degree is the PhD (3 years of studies).
The quality of Danish engineering expertise has long been much vaunted. Danish engineers, especially from engineering colleges, have also been praised for being very practical (i.e. skilled at physical work related to their discipline), which is ascribed to the high quality of the apprenticeship courses many Danish engineers go through as part of their education.
Finland:
Finland's system is derived from Germany's. Two kinds of universities are recognized: universities and universities of applied sciences.
Universities typically award 'Bachelor of Science in Technology' and 'Master of Science in Technology' degrees. The bachelor's degree is a three-year degree, while the master's degree is equivalent to two years of full-time study. In Finnish the master's degree is called diplomi-insinööri, similarly to Germany's Diplom-Ingenieur. The degrees are awarded by engineering schools or faculties in universities (in Aalto University, Oulu, Turku, Vaasa, and Åbo Akademi University) or by separate universities of technology (Tampere UT and Lappeenranta UT). The degree is a scientific, theoretical taught master's degree, of which the master's thesis is an important part, and it qualifies the holder for further study toward a licentiate or doctorate. Because of the Bologna process, the degree tekniikan kandidaatti ("Bachelor of Technology"), corresponding to three years of study toward the master's degree, has been introduced.
The universities of applied sciences are regional universities that award 3.5- to 4-year engineer degrees, insinööri (AMK), normally comprising 240 ECTS credits. There are 20 universities of applied sciences in Finland with a wide range of disciplines. The aim of the degree is professional competency, with an emphasis on practical problem solving in engineering. Normally the teaching language is Finnish, but there are also universities with Swedish as the language of instruction, and most universities of applied sciences offer some degrees in English, too. These universities also award a Master of Engineering degree, designed for engineers already in working life with at least two years of professional experience.
France In France, the engineering degree is mainly delivered by "Grandes Écoles d'Ingénieurs" (graduate schools of engineering) upon completion of three years of master's studies. Many Écoles recruit undergraduate students from CPGE (a two- or three-year high-level program after the Baccalauréat), though some include an integrated undergraduate cycle. Other students entering these Grandes Écoles may come from other backgrounds, such as DUT or BTS (technical two-year university degrees) or standard two-year university degrees. In all cases, recruitment is highly selective. Graduate engineers in France have therefore studied a minimum of five years after the baccalaureate. Since 2013, the French engineering degree has been recognized by the AACRAO as a Master of Science in engineering.
To be able to deliver the engineering degree, an École's master's curriculum has to be validated by the Commission des titres d'ingénieur (Commission of the Engineering Title). The system in France is extremely demanding in its entrance requirements (a numerus clausus, using student rank in exams as the only criterion), despite being almost free of tuition fees, and much stricter in regard to the academic level of applying students than many other systems. The system selects students solely on their ability in the fundamental engineering disciplines (mathematics, physics) rather than their ability to pay large tuition fees, thus giving a wider population access to higher education. Being a graduate engineer in France is considered to be at or near the top of the social and professional ladder. The engineering profession grew from the military and the nobility in the 18th century. Before the French Revolution, engineers were trained in schools for technical officers, like the "École d'Arts et Métiers" (Arts et Métiers ParisTech), established in 1780. Other schools were created later, for instance the École polytechnique and the Conservatoire national des arts et métiers, which was established in 1794. Polytechnique is one of the grandes écoles that have traditionally prepared technocrats to lead French government and industry, and has been one of the most privileged routes into the elite divisions of the civil service known as the "grands corps de l'État".
Inside a French company the title of Ingénieur refers to a rank in qualification and is not restricted. Therefore, there are sometimes Ingénieurs des Ventes (Sales Engineers), Ingénieur Marketing, Ingénieur Bancaire (Banking Engineer), Ingénieur Recherche & Développement (R&D Engineer), etc.
Germany In Germany, the term Ingenieur (engineer) is legally protected and may only be used by graduates of a university degree program in engineering. Such degrees are offered by universities (Universitäten), including Technische Universitäten (universities of technology) and Technische Hochschulen, or Fachhochschulen (universities of applied sciences).
Since the Bologna reforms, students receive a bachelor's degree (3–4 years of studies), optionally followed by a master's degree (1–2 years of studies). Prior to the country adopting the Bologna system, the first and only pre-doctorate degree received after completing engineering education at university was the German Diplomingenieur (Dipl.-Ing.). The engineering doctorate is the Doktoringenieur (Dr.-Ing.).
The quality of German engineering expertise has long been much vaunted, especially in the field of mechanical engineering. This is supported by the degree to which the various theories governing aerodynamics and structural mechanics are named after German scientists and engineers such as Ludwig Prandtl. German engineers have also been praised for being very practical (i.e. skilled at physical work related to their discipline), a quality ascribed to the high quality of the apprenticeship courses many German engineers go through as part of their education.
Italy In Italy, the engineering degree and "engineer" title are delivered by polytechnic universities upon completion of 3 years of studies (laurea). Additional master's degree (2 years) and doctorate (3 years) programs provide the title of "dottore di ricerca in ingegneria". Students who started studies at polytechnic universities before 2005 (when Italy adopted the Bologna declaration) need to complete a 5-year program to get the engineer title; in this case the master's degree is obtained after 1 year of studies.
Only people with an engineer title can be employed as "engineers". However, people with competence and experience in an engineering field who do not hold such a title can still be employed to perform engineering tasks as a "specialist", "assistant", "technologist" or "technician". Only engineers, though, can take legal responsibility for, and provide a guarantee of, the work done by a team in their area of expertise. A company working in this area that temporarily has no employees with an engineer title must pay for an external engineering audit to provide a legal guarantee for its products or services.
The Netherlands In the Netherlands there are two ways to study engineering: at the Dutch 'technical hogeschool', a professional school (internationally equivalent to a university of applied sciences) that awards a practically oriented degree with the pre-nominal ing. after four years of study, or at a university, which offers a more academically oriented degree with the pre-nominal ir. after five years of study. Both are abbreviations of the title Ingenieur.
In 2002 the Netherlands switched to the bachelor-master system as a consequence of the Bologna process, an accord in which 29 European countries agreed to harmonize their higher education systems and create a European higher education area. In this system the professional schools award bachelor's degrees such as BEng or BASc after four years of study, and universities with engineering programs award the BSc after the third year. A university bachelor usually continues his education for one or two more years to earn the MSc. Alongside these degrees, the old titles of the pre-Bologna system are still in use. A vocational bachelor may be admitted to a university master's degree program, although additional courses are often required.
Poland In Poland, after 3.5–4 years of technical studies one obtains the inżynier degree (inż.), which corresponds to a BSc or BEng. After that, one can continue studies, and after a 2-year post-graduate (supplementary) programme obtain an additional MSc (or MEng) degree, called magister (mgr), at which point one holds the combined title magister inżynier, mgr inż. (literally: master engineer). The mgr degree could formerly (until the university's full adoption of the Bologna process) be obtained in an integrated 5-year BSc-MSc programme. Graduates holding the magister inżynier degree can start 4-year doctoral (PhD) studies, which require the opening of doctoral proceedings (przewód doktorski), carrying out one's own research, passing several exams (e.g. foreign language, philosophy, economics, core subjects), and writing and defending a doctoral thesis. Some PhD students also teach classes with undergraduate students (BSc, MSc). A graduate of doctoral studies at a technical university holds the scientific degree doktor nauk technicznych, dr inż. (literally: "doctor of technical sciences"), or another such as doktor nauk chemicznych (lit. "doctor of chemical sciences").
Portugal In Portugal, there are two paths to study engineering: the polytechnic path and the university path. In theory, though often less so in practice, the polytechnic path is more practically oriented, while the university path is more research oriented.
In this system, the polytechnic institutes award a licenciatura (bachelor's) degree in engineering after three years of study, which can be complemented by a mestrado (master's) in engineering after two further years of study.
The universities offer both engineering programs similar to those of the polytechnics (a three-year licenciatura plus a two-year mestrado) and mestrado integrado (integrated master's) programs in engineering. The mestrado integrado programs take five years of study to complete, awarding a licenciatura degree in engineering sciences after the first three years and a mestrado degree in engineering after the full five years. The universities also offer doutoramento (PhD) programs in engineering.
Holding an academic degree in engineering is not enough to practice the profession of engineer and to have the legal right to use the title engenheiro (engineer) in Portugal. For that, it is necessary to be admitted as a member of the Ordem dos Engenheiros (the Portuguese institution of engineers). At the Ordem dos Engenheiros, an engineer is classified at the E1, E2 or E3 grade, according to the highest engineering degree he or she holds. Holders of the old pre-Bologna five-year licenciatura degrees in engineering are classified as E2 engineers.
Romania In Romania, the engineering degree and "engineer" title are delivered by technology and polytechnic universities upon completion of 4 years of studies. Additional master's degree (2 years) and doctorate (4–5 years) programs provide the title of "doctor inginer". Students who started studies at polytechnic universities before 2005 (when Romania adopted the Bologna declaration) needed to complete a 5-year program to get the engineer title; in this case the master's degree is obtained after 1 year of studies.
Only people with an engineer title can be employed as engineers. However, people with competence and experience in an engineering field who do not hold such a title can still be employed to perform engineering tasks as a "specialist", "assistant", "technologist" or "technician". Only engineers, though, can take legal responsibility for, and provide a guarantee of, the work done by a team in their area of expertise. A company working in this area that temporarily has no employees with an engineer title must pay for an external engineering audit to provide a legal guarantee for its products or services.
Russia The Moscow School of Mathematics and Navigation was the first Russian educational institution of its kind, founded by Peter the Great in 1701. It provided Russians with technical education for the first time, and much of its curriculum was devoted to producing sailors, engineers, cartographers and bombardiers to support Russia's expanding navy and army.
Then, in 1810, the Saint Petersburg Military engineering-technical university became the first engineering higher learning institution in the Russian Empire, after the addition of officers' classes and the adoption of a five-year term of teaching. Rigorous standards and long terms of study thus became a traditional feature of Russian engineering education in the 19th century.
Slovakia In Slovakia, an engineer (inžinier) is a person holding a master's degree in technical sciences or economics. Several technical and economic universities offer 4–5-year master's studies in the fields of chemistry, agriculture, material technology, computer science, electrical and mechanical engineering, nuclear physics and technology, or economics. A bachelor's degree in a similar field is a prerequisite. Graduates are awarded the Ing. title, always placed before the name; follow-up doctoral study is offered both by universities and by some institutes of the Slovak Academy of Sciences.
Spain In Spain, the engineering degree is delivered by universities in engineering schools, called "Escuelas de Ingeniería". As with any other degree in Spain, students must pass a series of examinations based on Bachillerato subjects (Selectividad) and select their bachelor's degree; their marks determine whether they can access the degree they want.
Students first receive a grado degree (4 years of studies) followed by a master's degree (1–2 years of studies) according to the principles of the Bologna declaration, though traditionally the degree received after completing an engineering education is the Spanish title of "Ingeniero". Use of the title "Ingeniero" is legally regulated and limited to the corresponding academic graduates.
Sweden An institution offering engineering education is called a "teknisk högskola" (institute of technology). These schools primarily offer five-year programmes resulting in the civilingenjör degree (not to be confused with the narrower English term "civil engineer"), internationally corresponding to a Master of Science in Engineering. These programmes typically offer a strong grounding in the natural sciences, and the degree also opens the way to doctoral (PhD) studies toward the degree "teknologie doktor". Civilingenjör programmes are offered in a broad range of fields: engineering physics, chemistry, civil engineering, surveying, industrial engineering and management, etc. There are also shorter three-year programmes, called högskoleingenjör (Bachelor of Science in Engineering), that are typically more applied.
Turkey In Turkey, engineering degrees range from a four-year bachelor's degree in engineering, to a master's degree (two additional years), to a doctoral degree (usually four to five years).
The title is limited by law to people with an engineering degree, and the use of the title by others (even persons with much more work experience) is illegal.
The Union of Chambers of Turkish Engineers and Architects (UCTEA) was established in 1954. It organizes engineers and architects into professional branches, within the framework of laws and regulations and in accordance with current conditions, requirements and possibilities, and also establishes new Chambers for groups of engineers and architects whose professional or working areas are similar or the same.
UCTEA maintains its activities through its 23 Chambers, the 194 branches of those Chambers, and 39 Provincial Coordination Councils. Graduates of approximately 70 related academic disciplines in engineering, architecture and city planning are members of the Chambers of UCTEA.
United Kingdom In the UK, as in the United States and Canada, most professional engineers are trained in universities, but some start in a technical apprenticeship and either enroll in a university engineering degree later or enroll in one of the Engineering Council UK programmes (level 6 – bachelor's; level 7 – master's) administered by the City and Guilds of London Institute. A recent trend has seen the rise of both bachelor's- and master's-level higher engineering apprenticeships. All accredited engineering courses and apprenticeships are assessed and approved by the professional engineering institution relevant to the discipline covered: IMechE, IET, BCS, ICE, IStructE, etc. Many of these institutions date back to the 19th century and previously administered their own engineering examination programmes; they have become globally renowned as premier learned societies.
The degree then counts in part to qualifying as a Chartered Engineer after a period (usually 4–8 years beyond the first degree) of structured professional practice, professional practice peer review and, if required, further exams to then become a corporate member of the relevant professional body. The term 'Chartered Engineer' is regulated by Royal Assent and its use is restricted only to those registered; the awarding of this status is devolved to the professional institutions by the Engineering Council.
In the UK (except Scotland), most engineering courses take three years for an undergraduate bachelor's degree (BEng) and four years for an undergraduate master's. Students who read a four-year engineering course are awarded a Master of Engineering (as opposed to a Master of Science in Engineering). Some universities allow a student who leaves before completing the programme to receive a Higher National Diploma after successfully completing the second year, or a Higher National Certificate after successfully completing only the first year. Many courses also include an optional year in industry, usually taken in the year before completion; students who opt for this are awarded a 'sandwich degree'. BEng graduates may be registered as an "Incorporated Engineer" by the Engineering Council after a period of structured professional practice, professional practice peer review and, if required, further exams, to then become a member of the relevant professional body. Again, the term 'Incorporated Engineer' is regulated by Royal Assent and its use is restricted to those registered; the awarding of this status is devolved to the professional institutions by the Engineering Council.
Unlike in the US and Canada, engineers do not require a licence to practise the profession in the UK, and the term "engineer" can be applied to non-degree vocations such as technologists, technicians, draftsmen, machinists, mechanics, plumbers, electricians, repair people, and semi-skilled and even unskilled occupations. In recent developments by government and industry to address the growing skills deficit in many fields of UK engineering, a strong emphasis has been placed on engineering in schools and on providing students with positive role models from a young age.
North America:
Canada Engineering degree education in Canada is highly regulated by the Canadian Council of Professional Engineers (Engineers Canada) and its Canadian Engineering Accreditation Board (CEAB). In Canada, 43 institutions offer 278 accredited engineering programs delivering a bachelor's degree after a term of 4 years, and many schools also offer graduate-level degrees in the applied sciences. Accreditation means that students who successfully complete an accredited program will have received sufficient engineering knowledge to meet the knowledge requirements for licensure as a Professional Engineer. Alternatively, Canadian graduates of unaccredited 3-year diploma, BSc, BTech, or BEng programs can qualify for a professional licence through association examinations. The schools include Concordia University, École de technologie supérieure, École Polytechnique de Montréal, University of Toronto, University of Manitoba, University of Saskatchewan, University of Victoria, University of Calgary, University of Alberta, University of British Columbia, McGill University, Dalhousie University, Toronto Metropolitan University, York University, University of Regina, Carleton University, McMaster University, University of Ottawa, Queen's University, University of New Brunswick, UOIT, University of Waterloo, University of Guelph, University of Windsor, Memorial University of Newfoundland, and Royal Military College of Canada, to name a few. Every university offering engineering degrees in Canada must be accredited by the CEAB, ensuring high standards are enforced at all universities. Engineering degrees in Canada are distinct from degrees in engineering technology, which are more applied degrees or diplomas. An engineering education in Canada can culminate in qualifying as a professional engineer (P.Eng.) licensee.
Mexico In Mexico, engineering education is offered by both public and private universities. Both types of institutions can confer BEng, BSc, MEng, MSc and PhD degrees through the presentation and defense of a thesis or through other requirements such as technical reports and knowledge exams, among others.
The first university in Mexico to offer degrees in engineering fields was the Royal and Pontifical University of Mexico, established under Spanish rule; the degrees offered included mining engineering and physical-mathematical sciences, drawing on state-of-the-art knowledge from Europe.
The 19th century brought a lack of political stability. The universities founded under Spanish rule were closed and reopened, and the engineering teaching tradition was lost; the University of Mexico, the University of Guadalajara and the University of Mérida all suffered this fate. Under liberal rule, Arts and Handcraft schools were opened, without the same success as the universities. In the 20th century, after the success of the Mexican Revolution, some of the old colleges were reopened and the old Arts and Handcraft schools were merged into the new universities. In 1936 the National Polytechnic Institute of Mexico was created as an educational alternative for workers' sons and their families. A short time later the Regional Institutes of Technology were founded as branches of the Polytechnic Institute in several states of the republic, most of which did not have a university in their own territory.
The Regional Institutes of Technology have since been merged into a single entity, the Mexican National Technological Institute. The National Polytechnic Institute remains the flagship institution of the Mexican federal government for engineering education.
United States The first professional degree in engineering is a bachelor's degree, with few exceptions. Interest in engineering has grown since 1999; the number of bachelor's degrees issued has increased by 20%. Most bachelor's-degree engineering programs are four years long and require about two years of core courses followed by two years of specialized, discipline-specific courses. In the core, a typical engineering student learns mathematics (single- and multi-variable calculus and elementary differential equations), general chemistry, English composition, general and modern physics, computer science (typically programming), and introductory engineering in several areas required for a satisfactory engineering background and for success in the program of choice. Several courses in the social sciences or humanities are often also required, commonly as elective courses from a broad choice. Required common engineering courses typically include engineering drawing/computer-aided design, materials engineering, statics and dynamics, strength of materials, basic circuits, thermodynamics, fluid mechanics, and perhaps some systems or industrial engineering. The science and engineering courses include lecture and laboratory education, either in the same course(s) or in separate courses. However, some professors and educators believe that engineering programs should focus more on professional engineering practice, and that engineering courses should be taught more by professional engineering practitioners and less by engineering researchers. Many engineering degree programs admit students directly to a specialization in the first year, but those which don't often require students to decide on a specialization by the end of the first or second year of study.
Specializations often include architectural engineering, civil engineering (including structural engineering), mechanical engineering, electrical engineering (often including computer engineering), chemical engineering, nuclear engineering, biological engineering, industrial engineering, aerospace engineering, materials engineering (including metallurgical engineering), agricultural engineering, and many others. After choosing a specialization, an engineering student begins to take classes that build on the fundamentals and develop specialized knowledge and skills. Toward the end of their undergraduate education, engineering students often undertake an open-ended design or other special project specific to their field. It is common for university students studying engineering to take part in various forms of career development during their undergraduate studies. These often take the form of paid internships, cooperative education programs (also referred to as "co-ops"), research experiences, or service learning. Such experiences may be facilitated by the students' universities or sought out by the students independently.
Internships Engineering internships are typically pursued by undergraduate students during the summer recess between the Spring and Fall semesters of the standard semester-based academic cycle (although some US universities abide by a 'quarter' or 'trimester' cycle). These internships usually have a duration of 8–12 weeks and may be part-time or full-time as well as paid or unpaid depending on the company; sometimes, students receive academic credit as an alternative or in addition to a wage. Shorter duration full-time internships over winter and other breaks are often available too, especially for those who have completed summer internships with the same firm.
Internships are offered as temporary positions by engineering companies and are often competitive in certain fields. They provide a way for companies to recruit and become familiar with individual students as potential full-time employees after graduation. Engineering internships also have numerous benefits for participating students. They provide hands-on learning outside the classroom as well as an opportunity for students to discover whether their current choice of engineering discipline is appropriate, based on how much they enjoy the internship role. Additionally, research and internship experiences have been shown to have a positive effect on engineering task self-efficacy (ETSE), a measure of a student's perception of their ability to perform engineering functions and related tasks. It is also considered advantageous to have internship or co-op experience before completing undergraduate studies, as students with practical engineering experience are more attractive to engineering employers.
Cooperative Education Programs Cooperative education programs (often referred to as 'co-ops') are similar to internships insofar as they are employment opportunities offered to undergraduate students by engineering employers; however, they are intended to take place concurrently with the students' academic studies. Co-ops are sometimes part-time roles that run throughout the academic semester, with the student expected to invest between 10 and 30 hours a week depending on their course load. Some American universities, such as Northeastern University and Drexel University, incorporate co-ops into their students' plan of study in the form of alternating semesters of full-time work and full-time classes; these programs typically take an additional year to complete compared with most 4-year undergraduate engineering programs in the US, although Northeastern currently has a 4-year undergraduate program that integrates full-time co-ops with full-time studies. Co-ops are considered a valuable form of professional development and may be undertaken by students looking to bolster their resumes in hopes of securing better salary offers for their first job.
Licensing After formal education, the engineer will often enter an internship or engineer-in-training status for approximately four years. To achieve Engineering Intern (E.I.) or Engineer-in-Training (EIT) status, an individual must hold an engineering degree from an institution accredited by the Engineering Accreditation Commission (EAC) of ABET (formerly the Accreditation Board for Engineering and Technology, Inc.), as well as pass the Fundamentals of Engineering Exam (often abbreviated the 'FE Exam'). The FE Exam is offered by the National Council of Examiners for Engineering and Surveying (NCEES) for the following disciplines: Mechanical Engineering, Civil Engineering, Industrial & Systems Engineering, Chemical Engineering, Electrical & Computer Engineering, Environmental Engineering, or Other Disciplines (also referred to as "General Engineering"). The FE Exam is held at remote testing locations four times throughout the year and can be taken by college graduates as well as current college students. After passing the Fundamentals of Engineering Exam and receiving an ABET-accredited engineering degree, an aspiring engineer may apply for engineer-in-training status with their state's licensing board. If granted, they may use the suffix E.I.T. to denote their status as an engineer-in-training.
After that time, the engineer-in-training can decide whether or not to take a state licensing test to become a Professional Engineer. The licensing process varies state by state, but generally requires the engineer-in-training to have four years of verifiable work experience in their engineering field and to pass the NCEES Principles and Practice of Engineering (PE) Exam for their engineering discipline. After successful completion of that test, the engineer can place the suffix P.E. after their name, signifying that they are now a Professional Engineer, and can affix their P.E. seal to drawings and reports, for example. They can also serve as expert witnesses in their areas of expertise.
Achieving the status of Professional Engineer is one of the highest levels of achievement one can attain in the engineering industry. Engineers with this status are generally highly sought after by employers, especially in the field of civil engineering. There are also graduate degree options for an engineer: many engineers complete a master's degree in some field of engineering or business administration, or pursue education in law, medicine, or another field.
Two types of doctorate are also available: the traditional PhD and the Doctor of Engineering. The PhD focuses on research and academic excellence, whereas the Doctor of Engineering focuses on practical engineering. The education requirements are the same for both degrees; however, the dissertation required is different: the PhD requires a standard research problem, while the Doctor of Engineering centers on a practical dissertation.
In present undergraduate engineering education, the emphasis on linear systems develops a way of thinking that dismisses nonlinear dynamics as spurious oscillations. The linear systems approach oversimplifies the dynamics of nonlinear systems. Hence, undergraduate students and teachers should recognize the educational value of chaotic dynamics; practicing engineers will also gain more insight into nonlinear circuits and systems from exposure to chaotic phenomena.
After graduation, continuing education courses may be needed to keep a government-issued professional engineer (PE) license valid, to keep skills fresh, to expand skills, or to keep up with new technology.
Caribbean:
Trinidad and Tobago Engineering degree education in Trinidad and Tobago is not regulated by the Board of Professional Engineers of Trinidad and Tobago (BOETT) or the local engineering association (APETT). Professional engineers registered with BOETT are given the credentials "r.Eng.".
South America:
Argentina Engineering education programs at universities in Argentina span a variety of disciplines and typically require 5–6 years of study to complete. Most degree programs begin with foundational courses in mathematics, statistics, and the physical sciences during the first and second years, then move on to courses specific to the student's plan of study. After receiving a degree, an engineering student completes an external evaluation in order to become accredited as an engineer. There are many universities and technical schools across Argentina that offer degree programs in engineering. The National Technological University (Universidad Tecnológica Nacional, UTN) is recognized as one of the best engineering institutions in the country, with degrees offered across its 33 campuses in aeronautical, civil, electrical, electronics, electro-mechanical, automotive, information systems, railway, mechanical, metallurgical, naval, fisheries, chemical, and textile engineering. Outlined in the Argentinian law 'Ley de Educacion Superior No. 24521' is the requirement for all universities to include a compulsory external evaluation for accreditation of certain professions, such as law, medicine, and engineering, which are also strictly governed by other laws. Accreditation of engineers in Argentina is under the authority of CONEAU (Comision Nacional de Evaluación y Acreditación Universitaria, 1997), which coordinates and executes external evaluations and accredits graduate and post-graduate university studies in the field of engineering.
South America:
Brazil In Brazil, education in engineering is offered by both public and private institutions. A degree in engineering requires five to six years of studies, comprising the core courses, specific subjects, an internship and a Course Completion Paper.
South America:
Due to the nature of college admissions in Brazil, most students have to declare their major before entering college. This said, the first two years of a degree in engineering consist mostly of the core courses (calculus, physics, programming, etc.) along with a few specific subjects as well as some courses in humanities. After this period, some institutions offer specializations within the different fields of engineering (i.e. a student majoring in electrical engineering can choose to specialize in electronics or telecommunications) although most institutions balance their workload in order to give the students a consistent knowledge of every specialization.
South America:
Towards the end of their undergraduate education, students are required to develop the Course Completion Paper under the guidance of an adviser to be presented to and graded by a number of professors. In some institutions, students are also required to pursue an internship (the amount of time depends on the institution).
In order to pursue a career in engineering, graduates must first register with and abide by the rules of the Regional Council of Engineering and Agronomy of their state, a regional representative of the Federal Council of Engineering and Agronomy, a certification board for engineers, agronomists, geologists and other professionals of the applied sciences. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Electronic voting machine**
Electronic voting machine:
An electronic voting machine is a voting machine based on electronics. Two main technologies exist: optical scanning and direct recording (DRE).
Optical scanning:
In an optical scan voting system, or marksense, each voter's choices are marked on one or more pieces of paper, which then go through a scanner. The scanner creates an electronic image of each ballot, interprets it, creates a tally for each candidate, and usually stores the image for later review.
Optical scanning:
The voter may mark the paper directly, usually in a specific location for each candidate. Or the voter may select choices on an electronic screen, which then prints the chosen names, and a bar code or QR code summarizing all choices, on a sheet of paper to put in the scanner. Hundreds of errors in optical scan systems have been found, including ballots fed upside down, multiple ballots pulled through at once in central counts, paper jams, broken, blocked or overheated sensors which misinterpret some or many ballots, printing which does not align with the programming, programming errors, and loss of files. The cause of each programming error is rarely found, so it is not known how many were accidental or intentional.
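The tallying step described above can be sketched as a toy model; the ballot representation, candidate names, and the one-vote overvote rule here are illustrative assumptions, not any certified scanner's implementation:

```python
# Toy model of an optical-scan tally. Each scanned ballot is reduced to the
# set of marks the interpreter judged to be filled in for one contest.
# Real systems also store the ballot image for later review.

from collections import Counter

def tally(ballots, candidates):
    """Count a one-vote-per-ballot contest; overvoted ballots are set aside."""
    counts = Counter({c: 0 for c in candidates})
    overvotes = 0
    for marks in ballots:
        chosen = [c for c in candidates if c in marks]
        if len(chosen) == 1:
            counts[chosen[0]] += 1
        elif len(chosen) > 1:
            overvotes += 1  # more than one mark: no candidate credited
        # zero marks: an undervote, counted for no one
    return counts, overvotes

ballots = [{"Ada"}, {"Ada"}, {"Bo"}, {"Ada", "Bo"}, set()]
counts, overvotes = tally(ballots, ["Ada", "Bo"])
```

Real scanners add the failure modes listed above (sensor faults, misaligned printing) between the paper and this counting step, which is why the stored ballot images matter for review.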
Direct-recording electronic (DRE):
In a DRE voting machine system, a touch screen displays choices to the voter, who selects choices, and can change their mind as often as needed, before casting the vote. Staff initialize each voter once on the machine, to avoid repeat voting. Voting data are recorded in memory components, and can be copied out at the end of the election.
Direct-recording electronic (DRE):
Some of these machines also print the names of chosen candidates on paper for the voter to verify, though fewer than 40% of voters verify them. These names on paper are kept behind glass in the machine, and can be used for election audits and recounts if needed. The tally of the voting data is printed at the end of the paper tape, which is called a voter-verified paper audit trail (VVPAT). The VVPATs can be tallied at 20–43 seconds of staff time per vote (not per ballot). For machines without VVPAT, there is no record of individual votes to check. For machines with VVPAT, checking is more expensive than with paper ballots, because on the flimsy thermal paper in a long continuous roll staff often lose their place, and the printout records each change made by each voter, not just their final decisions. Problems have included public web access to the software before it is loaded into machines for each election, and programming errors which increment different candidates than voters select. The Federal Constitutional Court of Germany found that voting with the existing machines could not be allowed because they could not be monitored by the public. Successful hacks have been demonstrated under laboratory conditions.
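The quoted 20–43 seconds of staff time per vote gives a rough sense of how expensive a full VVPAT hand tally is; a back-of-the-envelope calculation (the 10,000-vote jurisdiction is a made-up example):

```python
# Staff time needed to hand-tally a VVPAT roll, using the 20-43 s/vote
# range quoted above. The 10,000-vote jurisdiction is hypothetical.

def audit_hours(votes, seconds_per_vote):
    return votes * seconds_per_vote / 3600

low = audit_hours(10_000, 20)    # optimistic rate: ~55.6 staff-hours
high = audit_hours(10_000, 43)   # pessimistic rate: ~119.4 staff-hours
```

Even at the optimistic rate, a modest jurisdiction faces dozens of staff-hours per contest, which is why VVPAT checking is described as more expensive than counting ordinary paper ballots.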
**Proctocolectomy**
Proctocolectomy:
Proctocolectomy is the surgical removal of the rectum and all or part of the colon. It is the most widely accepted surgical method for ulcerative colitis and familial adenomatous polyposis (FAP). A proctocolectomy is considered a cure for ulcerative colitis, as the disease attacks only the large intestine and the rectum and so cannot flare up again, but extra-intestinal symptoms will remain. It can also be performed for Crohn's disease that has damaged the entire large intestine and caused complications, but it does not cure or eliminate that disease.
**Indian Ocean coastal belt**
Indian Ocean coastal belt:
The Indian Ocean coastal belt is one of the nine recognised biomes of South Africa. The biomes are described in terms of their vegetation and climatic variations.
Location and description:
The Indian Ocean coastal belt is a region of coastal dunes and coastal grassy plains in KwaZulu-Natal and the Eastern Cape, from sea level to an altitude of about 600 m. Mean annual rainfall ranges from 819 to 1,272 mm, and falls throughout the year, peaking in summer. The mean annual temperature ranges from 19.1 °C near the Mbhashe River in the southwest to 22 °C in the northeast near the Mozambican border, with hot summers and mild, frost-free winters. The belt is about 800 km long and narrow, with a maximum width of about 35 km in the north to less than 10 km in parts of the Wild Coast, and the total area is relatively small. The relief of the region varies between flat in Maputaland and rolling hills with deeply incised valleys between Richards Bay and Port Edward in KwaZulu-Natal and further south as far as the Great Kei River mouth. The Pondoland coast and other areas with sandstone geology have elevated plateaus with deep gorges.
Flora:
The dominant forest cover is interrupted by areas of grassland, with part of the belt comprising dense savanna vegetation with scattered areas of forest and grassland. Most of the coastal belt outside the remaining patches of forest has been changed considerably. The following vegetation units have been identified: Maputaland Coastal Belt, Maputaland Wooded Grassland, KwaZulu-Natal Coastal Belt, and Pondoland-Ugu Sandstone Coastal Sourveld, which is characterised by grassland species with some scattered low shrubs and small trees.
Economic value:
The region provides water supplies and fodder for livestock grazing.
Threats and preservation:
The biome is fairly well protected relative to the other South African biomes in that about 45% of the 20-year target is protected.
Climate change impacts:
Three scenarios have been modeled for climate change impacts on the South African biomes. The low risk scenario suggests a possible increase in area for this biome, with warm, moist conditions expanding southwest along the coast, and extending further inland, but the intermediate and high risk models show a possibility of less water availability and parts of the biome shifting to a savanna climate.
**Cytochrome c oxidase subunit III**
Cytochrome c oxidase subunit III:
Cytochrome c oxidase subunit III (COX3) is an enzyme that in humans is encoded by the MT-CO3 gene. It is one of the main transmembrane subunits of cytochrome c oxidase, and one of the three mitochondrial DNA (mtDNA) encoded subunits (MT-CO1, MT-CO2, MT-CO3) of respiratory complex IV. Variants of it have been associated with isolated myopathy, severe encephalomyopathy, Leber hereditary optic neuropathy, mitochondrial complex IV deficiency, and recurrent myoglobinuria.
Structure:
The MT-CO3 gene produces a 30 kDa protein composed of 261 amino acids. COX3, the protein encoded by this gene, is a member of the cytochrome c oxidase subunit 3 family. This protein is located on the inner mitochondrial membrane. COX3 is a multi-pass transmembrane protein: in human, it contains 7 transmembrane domains at positions 15–35, 42–59, 81–101, 127–147, 159–179, 197–217, and 239–259.
Function:
Cytochrome c oxidase (EC 1.9.3.1) is the terminal enzyme of the respiratory chain of mitochondria and many aerobic bacteria. It catalyzes the transfer of electrons from reduced cytochrome c to molecular oxygen: 4 cytochrome c2+ + 4 H+ + O2 ⇌ 4 cytochrome c3+ + 2 H2O. This reaction is coupled to the pumping of four additional protons across the mitochondrial or bacterial membrane. Cytochrome c oxidase is an oligomeric enzymatic complex located in the mitochondrial inner membrane of eukaryotes and in the plasma membrane of aerobic prokaryotes. The core structure of prokaryotic and eukaryotic cytochrome c oxidase contains three common subunits, I, II and III. In prokaryotes, subunits I and III can be fused and a fourth subunit is sometimes found, whereas in eukaryotes there are a variable number of additional small subunits. As bacterial respiratory systems are branched, they have a number of distinct terminal oxidases, rather than the single cytochrome c oxidase present in eukaryotic mitochondrial systems. Although the cytochrome o oxidases catalyze the oxidation of quinol (ubiquinol) rather than of cytochrome c, they belong to the same haem-copper oxidase superfamily as the cytochrome c oxidases. Members of this family share sequence similarities in all three core subunits: subunit I is the most conserved, whereas subunit II is the least conserved.
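Combining the redox reaction with the four pumped protons gives the overall proton balance, which can be written as follows (a standard textbook rendering; "in" and "out" denote the matrix and intermembrane sides of the inner membrane):

```latex
% Overall reaction of cytochrome c oxidase: four protons are consumed in
% water formation and four more are translocated across the membrane
% ("in" = matrix side, "out" = intermembrane space).
4\,\mathrm{cyt}\,c^{2+} + 8\,\mathrm{H}^{+}_{\text{in}} + \mathrm{O}_{2}
  \longrightarrow
  4\,\mathrm{cyt}\,c^{3+} + 2\,\mathrm{H}_{2}\mathrm{O}
  + 4\,\mathrm{H}^{+}_{\text{out}}
```

The eight matrix protons split evenly: four end up in water and four contribute directly to the transmembrane proton gradient.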
Clinical significance:
Mutations in mtDNA-encoded cytochrome c oxidase subunit genes have been observed to be associated with isolated myopathy, severe encephalomyopathy, Leber hereditary optic neuropathy, mitochondrial complex IV deficiency, and recurrent myoglobinuria.
Clinical significance:
Leber hereditary optic neuropathy (LHON) LHON is a maternally inherited disease resulting in acute or subacute loss of central vision, due to optic nerve dysfunction. Cardiac conduction defects and neurological defects have also been described in some patients. LHON results from primary mitochondrial DNA mutations affecting the respiratory chain complexes. Mutations at positions 9438 and 9804, which result in glycine-78 to serine and alanine-200 to threonine amino acid changes, have been associated with this disease.
Clinical significance:
Mitochondrial complex IV deficiency (MT-C4D) Complex IV deficiency (COX deficiency) is a disorder of the mitochondrial respiratory chain with heterogeneous clinical manifestations, ranging from isolated myopathy to severe multisystem disease affecting several tissues and organs. Features include hypertrophic cardiomyopathy, hepatomegaly and liver dysfunction, hypotonia, muscle weakness, exercise intolerance, developmental delay, delayed motor development, mental retardation, lactic acidemia, encephalopathy, ataxia, and cardiac arrhythmia. Some affected individuals manifest a fatal hypertrophic cardiomyopathy resulting in neonatal death and a subset of patients manifest Leigh syndrome. The mutations G7970T and G9952A have been associated with this disease.
Clinical significance:
Recurrent myoglobinuria mitochondrial (RM-MT) Recurrent myoglobinuria is characterized by recurrent attacks of rhabdomyolysis (necrosis or disintegration of skeletal muscle) associated with muscle pain and weakness, and followed by excretion of myoglobin in the urine. It has been associated with mitochondrial complex IV deficiency.
Subfamilies:
Cytochrome o ubiquinol oxidase, subunit III (InterPro: IPR014206); Cytochrome aa3 quinol oxidase, subunit III (InterPro: IPR014246)
Interactions:
COX3 has been shown to have 15 binary protein-protein interactions including 8 co-complex interactions. COX3 appears to interact with SNCA, KRAS, RAC1, and HSPB2.
**Irish syntax**
Irish syntax:
Irish syntax is rather different from that of most Indo-European languages, especially because of its VSO word order.
Normal word order:
The normal word order in an Irish sentence is: preverbal particle, verb, subject, direct object or predicate adjective, indirect object, location descriptor, manner descriptor, time descriptor. Only the verb and subject are obligatory; all other parts are optional (unless the primary or finite verb is transitive, in which case a direct object is required). In synthetic verb forms, the verb and subject are united in a single word, so that even one-word sentences are possible, e.g. Tuigim "I understand."
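The slot order listed above can be illustrated with a toy sentence assembler (a sketch for illustration only; the slot names are invented and it performs no real morphology):

```python
# Toy illustration of Irish constituent order: fills the slots in the
# order listed above and drops any that are absent. Slot names are
# invented labels for this sketch.

SLOT_ORDER = [
    "preverbal_particle", "verb", "subject", "direct_object",
    "indirect_object", "location", "manner", "time",
]

def assemble(**slots):
    return " ".join(slots[s] for s in SLOT_ORDER if slots.get(s))

# "The teacher saw the dog yesterday" in VSO order with the time
# descriptor last:
sentence = assemble(verb="Chonaic", subject="an múinteoir",
                    direct_object="an madra", time="inné")

# A synthetic form like Tuigim fills verb and subject in one word,
# so a single slot already makes a full sentence:
one_word = assemble(verb="Tuigim")
```

Note how the one-word case falls out naturally: since only the verb-plus-subject slot is obligatory, every other slot can simply be absent.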
Questions and answers:
Irish has no words for "yes" and "no". The answer to a question contains a repetition (the same as in Latin) of the verb, either with or without a negative particle. For analytic forms, only the verb is given and the subject is not repeated. If a verb has different dependent and independent forms, the dependent form follows the interrogative or negative particle. The independent form is used where there is no particle.
Commands:
In a command the imperative mood is used, and no subject is given.
To express a negative command, the particle ná is used. This particle, which can be roughly translated "don't", causes neither eclipsis nor lenition, and attaches h to a following vowel.
Syntax of the verbal noun:
A progressive aspect can be formed by connecting the verbal noun to the existential verb with the progressive particle ag.
The object of a verbal noun is in the genitive, if it is definite.
If a nonfinite clause forms the complement of the verb, the verbal noun stands alone (without a preposition) in the clause.
The direct object of a verbal noun complement precedes the verbal noun; the leniting particle a "to" is placed between them. Other complements follow.
Object pronouns:
Generally, an object pronoun or a conjugated preposition stands at the end of a sentence in Irish.
Passive:
Irish commonly uses the impersonal form (also called the autonomous form) instead of the passive voice.
In the perfect, the passive voice is formed by using the passive participle with the existential verb.
Stative verbs:
Some verbs describing the state or condition of a person form a progressive present with the existential verb plus 'in (my, your, his etc.)' plus the verbal noun.
Forms meaning "to be":
Irish, like Spanish and other languages, has two forms that can express the English verb "to be". The two forms perform different grammatical functions.
Existential verb bí: The existential verb is bí. It is an irregular verb; see Irish verbs for its conjugation.
Existence, condition or location: This verb expresses the absolute existence of something, its condition, or its location. When accompanied by the adverb ann "there", it means "exist" or "there is/are". Otherwise, the verb is complemented by an adjective, an adverb or a prepositional phrase.
Definitions: A noun phrase alone cannot form the predicate of the existential verb. Instead, the noun complement is preceded by a form meaning "in my, in your, in his", etc.
The copula is: The Irish copula is not a verb but a particle, used to express a definition or identification. It may be complemented by a noun, a pronoun, an adjective, or a topicalized phrase. Because it is not a verb, it does not inflect for person or number, and pronouns appear in the disjunctive form.
The copula, which has the realis form is, is used for identification and definition: Definition: X is a Y. Here, the word order is "Is-Y-(pronoun)-X". X is a definite noun or a pronoun.
Identification: X is the Y. Here the word order is "Is-pronoun-X-Y", or "Is-pronoun-Y-X". There must always be a pronoun between a definite noun and the copula. It would be wrong to say *Is Seán an múinteoir, which would mean "The teacher is a Seán".
Forms meaning "to be":
To identify a first or second person pronoun with a definite noun, it is usual to use the longer form of the personal pronoun, which comes immediately after the copula:
(26a) Is mise an múinteoir. "I am the teacher."
(26b) Is tusa an scoláire. "You are the student."
(26c) Is sinne na múinteoirí. "We are the teachers."
(26d) Is sibhse na scoláirí. "You are the students."
The long form of the personal pronoun is very emphatic and stressed and often ejects the copula entirely. Thus, in the previous four examples, it is possible to leave out the copula, which will then be understood:
(27a) Mise an múinteoir.
Forms meaning "to be":
(27b) Tusa an scoláire.
(27c) Sinne na múinteoirí.
Forms meaning "to be":
(27d) Sibhse na scoláirí.
If a third-person pronoun is identified with a definite noun, the same construction may be used:
(28a) (Is) eisean an múinteoir. "He is the teacher."
(28b) (Is) ise an scoláire. "She is the student."
(28c) (Is) iadsan na saighdiúirí. "They are the soldiers."
However, in the third person this is perceived to be much more emphatic than in the first and second persons. The usual way to say "He is the teacher" is
(28d) Is é an múinteoir é.
in which the definite noun is flanked by two personal pronouns agreeing with it in gender and number.
Forms meaning "to be":
When saying "this is" or "that is", seo and sin are used, in which case is is usually dropped:
(29a) Seo í mo mháthair. "This is my mother."
(29b) Sin é an múinteoir. "That's the teacher."
One can also add "that is in him/her/it", especially when an adjective is used, if one wants to emphasise the quality. This construction sometimes appears in Hiberno-English, translated literally as "that is in it" or as "so it is".
Forms meaning "to be":
The present tense of the copula can be used for the future:
(32) Is múinteoir é. "He will be a teacher."
The past tense of the copula can be used for the conditional:
(33) Ba mhúinteoir í. "She would be a teacher."
The forms is and ba are not used after preverbal particles.
(34a) An múinteoir thú? "Are you a teacher?"
(34b) Níor mhúinteoirí sinn. "We were not teachers."
If the predicate is definite, the copula is followed by a disjunctive personal pronoun, which may be repeated at the end of the sentence.
(35a) Is í Siobhán an múinteoir. "Siobhán is the teacher."
(35b) Is iad na daoine sin na múinteoirí. "Those people are the teachers."
(35c) Is é an múinteoir é. "He is the teacher."
If the predicate is indefinite, it follows the copula directly, with the disjunctive pronoun and subject coming at the end.
Forms meaning "to be":
(36a) Is dalta mé. "I am a student."
(36b) Is múinteoir í Cáit. "Cáit is a teacher."
The copula can also be used to stress an adjective.
Topicalization: Topicalization in Irish is formed by clefting: the topicalized element is fronted as the predicate of the copula, while the rest of the sentence becomes a relative clause. Compare Dúirt mise é "I said it" with the clefted Is mise a dúirt é "It was I who said it."
Other uses for the copula: There are other set idiomatic phrases using the copula, as seen in the following examples. Here the predicate consists mostly of either a prepositional phrase or an adjective.
Forms meaning "to be":
(38a) Is maith liom "I like" (lit. "is good with me")
(38b) Ba mhaith liom "I would like" (lit. "would be good with me")
(38c) Is fearr liom "I prefer" (lit. "is better with me")
(38d) Is féidir liom "I can" (lit. "is possible with me")
(38e) Ba cheart "one should" (lit. "would be right")
(38f) Níor cheart "one shouldn't" (lit. "would not be right")
(38g) Is fuath liom "I hate" (lit. "is hatred with me")
(38h) Is cuma liom "I don't care" (lit. "is indifferent with me")
(38i) Is mian liom "I wish/would like" (lit. "is desire with me")
(38j) Is cuimhin liom "I remember" (lit. "is memory with me")
Answering questions with the copula: Since the copula cannot stand alone, the answer to a question must contain either a part of the predicate or a pronoun, both of which follow the copula.
Forms meaning "to be":
(42) An é Seán an múinteoir? "Is Seán the teacher?"
(42.1) Is é. "Yes, he is."
(42.2) Ní hé. "No, he isn't."
(43) An múinteoir é Seán? "Is Seán a teacher?"
(43.1) Is ea. "Yes, he is."
(43.2) Ní hea. "No, he isn't."
Omission of is: In all dialects, the copula is may be omitted if the predicate is a noun. (Ba cannot be deleted.) If is is omitted, the following é, í, iad preceding the noun is omitted as well.
Forms meaning "to be":
(44a) (Is) mise an múinteoir. "I am the teacher."
(44b) (Is é) Seán an múinteoir. "Seán is the teacher."
(44c) (Is) dalta mé. "I am a student."
Comparison of the existential verb and the copula: Both the existential verb and the copula may take a nominal predicate, but the two constructions have slightly different meanings. Is dochtúir é Seán sounds more permanent: it represents something absolute about Seán; it is a permanent characteristic of Seán that he is a doctor. That is known as an individual-level predicate. The sentence Tá Seán ina dhochtúir says rather that Seán performs the job of a doctor, is a doctor at the moment, or has become a doctor. That is known as a stage-level predicate.
Subordination:
Most complementizers (subordinating conjunctions) in Irish cause eclipsis and require the dependent form of irregular verbs. The word order in an Irish subordinate clause is the same as in a main clause. The types of subordination discussed here are: complementation, relative clauses, and wh-questions (which are formed as a kind of relative clause in Irish).
Complementation Syntactic complementation The subordinate clause is a part of the main clause in a purely syntactic complementation. In Irish it is introduced by go "that" in the positive and nach "that... not" in the negative.
Subordination:
Other examples of complex sentences using complementizers:
(47a) Bhí faitíos roimhe mar go raibh sé taghdach. "People were afraid of him because he was quick-tempered."
(47b) Ní chreidim é cé go bhfeicim é. "I don't believe it although I see it."
(47c) Scríobh sí síos é ar nós nach ndéanfadh sí dearmad air. "She wrote it down so that she wouldn't forget it."
(47d) Fan nó go dtiocfaidh sé. "Wait until he comes."
Conditional complementation: A conditional clause gives the condition under which something will happen. In Irish there are two kinds of conditional clauses, depending on the plausibility of the condition. The particle má introduces a conditional clause that is plausible, also called a realis condition. Má causes lenition and takes the independent form of irregular verbs. Its negated form is mura, which causes eclipsis; preceding the preterite it is murar, which causes lenition.
Subordination:
If the condition of the clause is hypothetical, also called an irrealis condition or counterfactual conditional, the word dá is used, which causes eclipsis and takes the dependent form of irregular verbs. The negated equivalent is either mura or murach go, meaning roughly "if it were not the case that...". The verb in both clauses is in the conditional.
Subordination:
(48a) Má chreideann sé an scéal sin, tá sé saonta go maith. "If he believes that story, he is pretty gullible." (realis)
(48b) Murar chaill sé é, ghoid sé é. "If he didn't lose it, then he stole it." (realis)
(48c) Dá bhfágfainn agat é ní dhéanfá é. "If I left it to you, you wouldn't do it." (irrealis)
Other examples of conditionals are:
(49a) Éireoidh leis an bhfiontar i gcleithiúnas go mbeidh cách páirteach ann. "The venture will succeed provided that all take part in it."
(49b) Tig leat é a bhriseadh ar chuntar go n-íocfaidh tú as. "You may break it provided that you pay for it."
Relative clauses, direct relative: There are two kinds of relative clauses in Irish: direct and indirect. Direct relative clauses begin with the leniting relativizer a, and the independent form of an irregular verb is used. The direct relative is used when the relative pronoun is the subject or direct object of its clause.
Subordination:
(50a) D'imigh na daoine a bhí míshásta thar sáile. "The people who were unhappy went overseas."
(50b) Sin í an obair a rinne mé. "That's the work that I did."
The direct relative is also used in topicalizations, e.g.:
(51) Is é Jimmy a chuaigh go Méiriceá. "It's Jimmy who went to America."
The direct relative is also used after the word uair "time":
(52) an chéad uair a bhí mé ann "the first time that I was there"
Indirect relative: Indirect relative clauses begin with the eclipsing relativizer a (in the preterite with leniting ar); the dependent form of an irregular verb is used. The indirect relative is used to signify a genitive or the object of a preposition. In these cases, there is a resumptive pronoun in the relative clause.
Subordination:
(53a) an fear a raibh a dheirfiúr san ospidéal "the man whose sister was in the hospital" (lit. "the man that his sister was in the hospital")
(53b) an fear ar thug a iníon céad punt dó "the man whose daughter gave him a hundred pounds" or "the man to whom his daughter gave a hundred pounds" (lit. "the man that his daughter gave him a hundred pounds")
(53c) an seomra ar chodail mé ann "the room that I slept in" (lit. "the room that I slept in it")
The negative form of a relative clause, direct or indirect, is formed with the eclipsing relativizer nach, or, before the preterite, with the leniting relativizer nár.
Subordination:
(54a) Sin rud nach dtuigim. "That's something I don't understand." (direct)
(54b) bean nach bhfuil a mac ag obair "a woman whose son isn't working" (indirect; lit. "a woman that her son isn't working")
Sometimes a direct relative clause can be ambiguous, leaving unclear whether the relative is accusative or nominative:
(55) an sagart a phóg an bhean "the priest who kissed the woman" or "the priest whom the woman kissed"
If the accusative reading is intended, one can use an indirect relative with a resumptive pronoun:
(56) an sagart ar phóg an bhean é "the priest whom the woman kissed" (lit. "the priest that the woman kissed him")
Wh-questions: A wh-question begins with a word such as "who, what, how, when, where, why", etc. In Irish, such questions are constructed as relative clauses, in that they can be either direct or indirect.
Subordination:
Direct relative wh-questions Questions with "who, what, how many, which, when" are constructed as direct relative clauses.
Subordination:
(57a) Cathain/Cá huair a tharla sé? "When did it happen?"
(57b) Cé a rinne é? "Who did it?"
(57c) Céard a fuair tú? "What did you get?"
(57d) Cé mhéad míle a shiúil tú? "How many miles did you walk?"
(57e) Cé acu is daoire, feoil nó iasc? "Which is more expensive, meat or fish?"
Indirect relative wh-questions: Questions with prepositions (i.e. "on what?", "with whom?") and questions with "why?" and "where?" are constructed as indirect relative clauses.
Subordination:
(58a) Cé aige a bhfuil an t-airgead? "Who has the money?" (lit. "who at him is the money")
(58b) Cá leis ar thóg tú an gluaisteán? "What did you lift the car with?" (lit. "what with it did you lift the car")
(58c) Cad chuige ar bhuail tú é? "Why did you hit him?"
(58d) Cén áit a bhfaca tú an bhean? "Where did you see the woman?"
Clauses introduced by "how": There are two words for "how" in Irish: the word conas takes the direct relative clause, while the phrase cén chaoi takes the indirect.
Subordination:
(59a) Conas a tharla sé? "How did it happen?"
(59b) Cén chaoi a mbaineann sin leat? "How does that concern you?/What business is that of yours?"
Complementary subordinate clauses in the form of a relative clause: Some complements in Irish take the form of a relative clause, in that they end in the relative particle a; both direct and indirect relatives are found.
Subordination:
Direct:
(60a) Nuair a bhí mé óg, bhí mé i mo chónaí i nDún na nGall. "When I was young, I lived in Donegal."
(60b) Glaofaidh sí chomh luath agus a thiocfaidh sí abhaile. "She will call as soon as she gets home."
(60c) Bhí sé ag caoineadh an t-achar a bhí sé ag caint liom. "He was crying while he was talking to me."
(60d) Seinneadh port ansin, mar a rinneadh go minic. "Then a melody was played, as was often done."
(60e) Bhog sé a cheann ar nós mar a bheadh sé ag seinm. "He moved his head as if he were playing music."
(60f) Tig leat é a choinneáil fad is a thugann tú aire dó. "You may hold it as long as you are careful with it."
Indirect:
(61a) Lorg iad mar ar chuir tú iad. "Look for them where you put them."
(61b) Fan san áit a bhfuil tú. "Stay where you are!"
(61c) An t-am ar tháinig sé, bhíodar díolta ar fad. "By the time he came, they were all sold out."
(61d) Inseoidh mé sin dó ach a bhfeicfidh mé é. "I will tell him that as soon as I see him."
(61e) D'fhág sí é sa gcaoi a raibh sé. "She left it as it was."
**Strong partition cardinal**
Strong partition cardinal:
In Zermelo–Fraenkel set theory without the axiom of choice, a strong partition cardinal is an uncountable well-ordered cardinal κ such that every partition of the set [κ]^κ of size-κ subsets of κ into fewer than κ pieces has a homogeneous set of size κ. The existence of strong partition cardinals contradicts the axiom of choice. The axiom of determinacy implies that ℵ1 is a strong partition cardinal.
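In Erdős–Rado partition-calculus notation, the defining property can be rendered as follows (a standard formalization of the definition in the text):

```latex
% Strong partition property of \kappa: every colouring of the size-\kappa
% subsets of \kappa with fewer than \kappa colours is constant on the
% size-\kappa subsets of some H of size \kappa.
\kappa \longrightarrow (\kappa)^{\kappa}_{\lambda}
  \quad\text{for every cardinal } \lambda < \kappa,
\qquad\text{i.e.}\qquad
\forall F : [\kappa]^{\kappa} \to \lambda \;\;
\exists H \in [\kappa]^{\kappa} \;\;
\bigl|\,F''[H]^{\kappa}\,\bigr| = 1 .
```

Under the axiom of choice even the much weaker relation κ → (κ)^ω fails for every κ, which is why the existence of a strong partition cardinal is inconsistent with choice.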
**Hemline**
Hemline:
The hemline is the line formed by the lower edge of a garment, such as a skirt, dress or coat, measured from the floor. The hemline is perhaps the most variable style line in fashion, changing shape and ranging in height from hip-high to floor-length. What is a fashionable style and height of hemline has varied considerably throughout the years, and has also depended on a number of factors such as the age of the wearer, the occasion for which the garment is worn and the choice of the individual.
Types:
Similar to necklines and waistlines, hemlines can be grouped by their height and shape: floor-length, ankle, midcalf, below-knee, above-knee, mid-thigh, hip-high, handkerchief, diagonal or asymmetric, and high-low hemlines (usually short in front and dipping behind), as well as other hemlines, such as modern-cut hemlines. Dresses and skirts are also classified in terms of their length: mini, ballerina length, midi, tea length, full length, maxi, and intermission length.
History:
In the history of Western fashion, the ordinary public clothes of upper- and middle-class women varied only between floor-length and slightly above ankle-length for many centuries before World War I. Skirts of lower-calf or mid-calf length were associated with the practical working garments of lower-class or pioneer women, while even shorter skirt lengths were seen only in certain specialized and restricted contexts (e.g. sea-bathing costumes, or outfits worn by ballerinas on stage). It was not until the mid-1910s that hemlines began to rise significantly (with many variations in height thereafter). Skirts rose all the way from floor-length to near knee-length in little more than fifteen years (from late in the decade of the 1900s to the mid-1920s). Between 1919 and 1923 they changed considerably, being almost to the floor in 1919, rising to the mid-calf in 1920, before dropping back to the ankles by 1923. 1927 saw "flapper length" skirts at the kneecap and higher, before shifting down again in the 1930s. From World War I to roughly 1970, women were under social pressure to wear skirts near to the currently fashionable length or be considered unstylish, but since the 1970s, women's options have widened, and there is no longer really only one single fashionable skirt-length at a time.
Another influence on the length of a woman's skirt is the hemline index, which, oversimplified, states that hemlines rise and fall in sync with the stock market. The idea was put forward by Wharton Business School professor George Taylor in 1926, at a time when hemlines rose with flapper dresses during the so-called Roaring '20s. The Great Depression subsequently set in and hemlines fell to the floor once again.
**APBB1**
Amyloid beta A4 precursor protein-binding family B member 1 is a protein that in humans is encoded by the APBB1 gene.
Function:
The protein encoded by this gene is a member of the Fe65 protein family. It is an adaptor protein localized in the nucleus. It interacts with the Alzheimer's disease amyloid precursor protein (APP), the transcription factor CP2/LSF/LBP1 and the low-density lipoprotein receptor-related protein. APP functions as a cytosolic anchoring site that can prevent the gene product's nuclear translocation. The encoded protein could play an important role in the pathogenesis of Alzheimer's disease. It is thought to regulate transcription, and it has also been observed to block cell cycle progression by downregulating thymidylate synthase expression. Multiple alternatively spliced transcript variants have been described for this gene, but the full-length sequences of some are not known.
Interactions:
APBB1 has been shown to interact with APLP2, TFCP2, LRP1 and amyloid precursor protein.
**Dehydrogluconokinase**
In enzymology, a dehydrogluconokinase (EC 2.7.1.13) is an enzyme that catalyzes the chemical reaction

ATP + 2-dehydro-D-gluconate ⇌ ADP + 6-phospho-2-dehydro-D-gluconate

Thus, the two substrates of this enzyme are ATP and 2-dehydro-D-gluconate, whereas its two products are ADP and 6-phospho-2-dehydro-D-gluconate.
This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:2-dehydro-D-gluconate 6-phosphotransferase. Other names in common use include ketogluconokinase, 2-ketogluconate kinase, ketogluconokinase (phosphorylating), and 2-ketogluconokinase. This enzyme participates in the pentose phosphate pathway.
**MDA framework**
In game design the Mechanics-Dynamics-Aesthetics (MDA) framework is a tool used to analyze games. It formalizes the consumption of games by breaking them down into three components: Mechanics, Dynamics and Aesthetics. These three words have been used informally for many years to describe various aspects of games, but the MDA framework provides precise definitions for these terms and seeks to explain how they relate to each other and influence the player's experience.
Mechanics are the base components of the game - its rules, every basic action the player can take in the game, the algorithms and data structures in the game engine etc.
Dynamics are the run-time behavior of the mechanics acting on player input and "cooperating" with other mechanics.
Aesthetics are the emotional responses evoked in the player. There are many types of aesthetics, including but not limited to the following eight stated by Hunicke, LeBlanc and Zubek: Sensation (Game as sense-pleasure): Player enjoys memorable audio-visual effects.
Fantasy (Game as make-believe): Imaginary world.
Narrative (Game as drama): A story that drives the player to keep coming back.
Challenge (Game as obstacle course): Urge to master something. Boosts a game's replayability.
Fellowship (Game as social framework): A community in which the player is an active part. Almost exclusive to multiplayer games.
Discovery (Game as uncharted territory): Urge to explore game world.
Expression (Game as self-discovery): The player's own creativity; for example, creating a character resembling the player's own avatar.
Submission (Game as pastime): Connection to the game, as a whole, despite constraints.

The paper also mentions a ninth kind of fun: competition. It seeks to better specify terms such as 'gameplay' and 'fun', and to extend the vocabulary of game studies, suggesting a non-exhaustive taxonomy of eight different types of play. The framework uses these definitions to demonstrate the incentivising and disincentivising properties of different dynamics on the eight subcategories of game use.
From the perspective of the designer, the mechanics generate dynamics, which in turn generate aesthetics. This relationship poses a challenge for game designers, who can directly shape only the mechanics and must produce meaningful dynamics and aesthetics for the player through them alone.
The perspective of the player is the other way around: they experience the game through its aesthetics, which are provided by the dynamics, which in turn emerge from the mechanics.
Criticism:
Despite its popularity, the original MDA framework has been criticized for several potential weaknesses. The eight kinds of fun form a rather arbitrary list of emotional targets that lacks a theoretical grounding and offers no guidance on how further types of emotional response might be explored. The framework has also been challenged for neglecting many design aspects of games while focusing too heavily on game mechanics, making it poorly suited to some types of games, particularly gamified content and other experience-oriented design.
**Net (polyhedron)**
In geometry, a net of a polyhedron is an arrangement of non-overlapping edge-joined polygons in the plane which can be folded (along edges) to become the faces of the polyhedron. Polyhedral nets are a useful aid to the study of polyhedra and solid geometry in general, as they allow for physical models of polyhedra to be constructed from material such as thin cardboard.

An early instance of polyhedral nets appears in the works of Albrecht Dürer, whose 1525 book A Course in the Art of Measurement with Compass and Ruler (Unterweysung der Messung mit dem Zyrkel und Rychtscheyd) included nets for the Platonic solids and several of the Archimedean solids. These constructions were first called nets in 1543 by Augustin Hirschvogel.
Existence and uniqueness:
Many different nets can exist for a given polyhedron, depending on the choices of which edges are joined and which are separated. The edges that are cut from a convex polyhedron to form a net must form a spanning tree of the polyhedron, but cutting some spanning trees may cause the polyhedron to self-overlap when unfolded, rather than forming a net. Conversely, a given net may fold into more than one different convex polyhedron, depending on the angles at which its edges are folded and the choice of which edges to glue together. If a net is given together with a pattern for gluing its edges together, such that each vertex of the resulting shape has positive angular defect and such that the sum of these defects is exactly 4π, then there necessarily exists exactly one polyhedron that can be folded from it; this is Alexandrov's uniqueness theorem. However, the polyhedron formed in this way may have different faces than the ones specified as part of the net: some of the net polygons may have folds across them, and some of the edges between net polygons may remain unfolded. Additionally, the same net may have multiple valid gluing patterns, leading to different folded polyhedra.
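As a quick sanity check of the defect condition, consider the cube (an illustrative example not taken from the text above): each of its eight vertices joins three squares, so each angular defect is π/2 and the defects sum to exactly 4π.

```python
import math

# Each cube vertex joins three squares, so the angle sum there is 3 * (pi/2)
# and the angular defect is 2*pi - 3*(pi/2) = pi/2.
defect_per_vertex = 2 * math.pi - 3 * (math.pi / 2)

# Alexandrov's condition requires every defect to be positive and the
# defects to total exactly 4*pi; the cube's eight vertices satisfy both.
total_defect = 8 * defect_per_vertex
print(math.isclose(total_defect, 4 * math.pi))  # True
```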
In 1975, G. C. Shephard asked whether every convex polyhedron has at least one net, or simple edge-unfolding. This question, which is also known as Dürer's conjecture, or Dürer's unfolding problem, remains unanswered. There exist non-convex polyhedra that do not have nets, and it is possible to subdivide the faces of every convex polyhedron (for instance along a cut locus) so that the set of subdivided faces has a net. In 2014 Mohammad Ghomi showed that every convex polyhedron admits a net after an affine transformation. Furthermore, in 2019 Barvinok and Ghomi showed that a generalization of Dürer's conjecture fails for pseudo edges, i.e., a network of geodesics which connect vertices of the polyhedron and form a graph with convex faces.
A related open question asks whether every net of a convex polyhedron has a blooming, a continuous non-self-intersecting motion from its flat to its folded state that keeps each face flat throughout the motion.
Shortest path:
The shortest path between two points on the surface of a polyhedron corresponds to a straight line on a suitable net for the subset of faces touched by the path. The net has to be such that the straight line is fully within it, and one may have to consider several nets to see which gives the shortest path. For example, in the case of a cube, if the points are on adjacent faces one candidate for the shortest path is the path crossing the common edge; the shortest path of this kind is found using a net where the two faces are also adjacent. Other candidates for the shortest path pass through the surface of a third face adjacent to both (of which there are two), and corresponding nets can be used to find the shortest path in each category.

The spider and the fly problem is a recreational mathematics puzzle which involves finding the shortest path between two points on a cuboid.
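The unfolding idea can be sketched numerically with the classic instance of two opposite corners of a unit cube (an assumed example for illustration, not one worked in the text above):

```python
import math

# Unfolding two adjacent unit faces into a 2 x 1 rectangle places the two
# opposite corners at the ends of its diagonal, so the candidate path that
# crosses one shared edge has the length of that diagonal.
via_two_faces = math.hypot(2.0, 1.0)   # sqrt(5), about 2.236

# For comparison, staying on the edges of the cube takes three unit edges.
along_edges = 3.0

# The surface path through the unfolded net is the shorter candidate.
shortest = min(via_two_faces, along_edges)
print(round(shortest, 3))  # 2.236
```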
Higher-dimensional polytope nets:
A net of a 4-polytope, a four-dimensional polytope, is composed of polyhedral cells that are connected by their faces and all occupy the same three-dimensional space, just as the polygon faces of a net of a polyhedron are connected by their edges and all occupy the same plane. The net of the tesseract, the four-dimensional hypercube, is used prominently in a painting by Salvador Dalí, Crucifixion (Corpus Hypercubus) (1954). The same tesseract net is central to the plot of the short story "—And He Built a Crooked House—" by Robert A. Heinlein.

The number of combinatorially distinct nets of n-dimensional hypercubes can be found by representing these nets as a tree on 2n nodes describing the pattern by which pairs of faces of the hypercube are glued together to form a net, together with a perfect matching on the complement graph of the tree describing the pairs of faces that are opposite each other on the folded hypercube. Using this representation, the number of different unfoldings for hypercubes of dimensions 2, 3, 4, ... have been counted as
**Hard sectoring**
Hard sectoring in a magnetic or optical data storage device is a form of sectoring which uses a physical mark or hole in the recording medium to reference sector locations.
In older 8- and 5¼-inch floppy disks, hard sectoring was implemented by punching sector holes in the disk to mark the start of each sector. These were equally spaced holes at a common radius. This was in addition to the index hole, situated between two sector holes, which marked the start of the entire track of sectors. When the index or sector hole was recognized by an optical sensor, a sector signal was generated. Timing electronics or software would use the faster timing of the index hole between sector holes to generate an index signal. Reading and writing data are faster with this technique than with soft sectoring, because no extra work is needed to locate the starting and ending points of sectors.
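The pulse timing can be sketched with assumed, illustrative figures (a 300 RPM 5¼-inch drive with 16 sector holes; neither number is stated above):

```python
# Assumed illustrative values: a 5.25-inch drive spinning at 300 RPM with
# 16 sector holes punched in the disk.
rpm = 300
sector_holes = 16

rev_time_ms = 60_000 / rpm                       # 200.0 ms per revolution
sector_interval_ms = rev_time_ms / sector_holes  # 12.5 ms between sector pulses

# The index hole sits midway between two sector holes, so once per revolution
# the sensor sees a pulse only half an interval after the previous one; that
# shorter gap is what identifies the start of the track.
index_gap_ms = sector_interval_ms / 2            # 6.25 ms

print(rev_time_ms, sector_interval_ms, index_gap_ms)  # 200.0 12.5 6.25
```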
Storage formats using hard sectoring:
32-sector 8-inch floppy disks
10-sector and 16-sector 5¼-inch floppy disks
Numerous magneto-optical formats
DVD-RAM
**High-temperature engineering test reactor**
The high-temperature engineering test reactor (HTTR) is a graphite-moderated, gas-cooled research reactor in Ōarai, Ibaraki, Japan, operated by the Japan Atomic Energy Agency. It uses long hexagonal fuel assemblies, unlike the competing pebble bed reactor designs.
HTTR first reached its full design power of 30 MW (thermal) in 1999. Other tests have shown that the core can reach temperatures sufficient for hydrogen production via the sulfur-iodine cycle.
Technical details:
The primary coolant is helium gas at a pressure of about 4 MPa, with an inlet temperature of 395 °C (743 °F) and an outlet temperature of 850–950 °C (1,560–1,740 °F). The fuel is uranium oxide, enriched to an average of about 6%.
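These figures allow a rough consistency check of the helium mass flow via Q = ṁ·cp·ΔT (a back-of-the-envelope sketch: the cp value is a standard property figure, and the result is an estimate, not a published design value):

```python
# Back-of-the-envelope helium mass-flow estimate for the 950 C outlet case.
q_thermal = 30e6          # W, full design thermal power (30 MW)
cp_helium = 5193.0        # J/(kg*K), roughly constant for helium gas
delta_t = 950.0 - 395.0   # K, outlet minus inlet temperature

# Q = m_dot * cp * dT  =>  m_dot = Q / (cp * dT)
m_dot = q_thermal / (cp_helium * delta_t)
print(round(m_dot, 1))  # about 10.4 kg/s
```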
**Periodic fever syndrome**
Periodic fever syndromes are a set of disorders characterized by recurrent episodes of systemic and organ-specific inflammation. Unlike autoimmune disorders such as systemic lupus erythematosus, in which the disease is caused by abnormalities of the adaptive immune system, people with autoinflammatory diseases do not produce autoantibodies or antigen-specific T or B cells. Instead, the autoinflammatory diseases are characterized by errors in the innate immune system. The syndromes are diverse, but tend to cause episodes of fever, joint pains, skin rashes and abdominal pains, and may lead to chronic complications such as amyloidosis.

Most autoinflammatory diseases are genetic and present during childhood. The most common genetic autoinflammatory syndrome is familial Mediterranean fever, which causes short episodes of fever, abdominal pain and serositis lasting less than 72 hours. It is caused by mutations in the MEFV gene, which codes for the protein pyrin. Pyrin is a protein normally present in the inflammasome. The mutated pyrin protein is thought to cause inappropriate activation of the inflammasome, leading to release of the pro-inflammatory cytokine IL-1β. Most other autoinflammatory diseases also cause disease by inappropriate release of IL-1β. Thus, IL-1β has become a common therapeutic target, and medications such as anakinra, rilonacept, and canakinumab have revolutionized the treatment of autoinflammatory diseases.

However, there are some autoinflammatory diseases that are not known to have a clear genetic cause. These include PFAPA, the most common autoinflammatory disease seen in children, characterized by episodes of fever, aphthous stomatitis, pharyngitis, and cervical adenitis. Other autoinflammatory diseases without clear genetic causes include adult-onset Still's disease, systemic-onset juvenile idiopathic arthritis, Schnitzler syndrome, and chronic recurrent multifocal osteomyelitis.
It is likely that these diseases are multifactorial, with genes that make people susceptible to these diseases, but they require an additional environmental factor to trigger the disease.
**Artery**
An artery (PL: arteries) (from Greek ἀρτηρία (artēríā) 'windpipe, artery') is a blood vessel in humans and most animals that takes blood away from the heart to one or more parts of the body (tissues, lungs, brain etc.). Most arteries carry oxygenated blood; the two exceptions are the pulmonary and the umbilical arteries, which carry deoxygenated blood to the organs that oxygenate it (lungs and placenta, respectively). The effective arterial blood volume is that extracellular fluid which fills the arterial system.
The arteries are part of the circulatory system, which is responsible for the delivery of oxygen and nutrients to all cells, as well as the removal of carbon dioxide and waste products, the maintenance of optimum blood pH, and the circulation of proteins and cells of the immune system.
Arteries contrast with veins, which carry blood back towards the heart.
Structure:
The anatomy of arteries can be separated into gross anatomy, at the macroscopic level, and microanatomy, which must be studied with a microscope. The arterial system of the human body is divided into systemic arteries, carrying blood from the heart to the whole body, and pulmonary arteries, carrying deoxygenated blood from the heart to the lungs.
The outermost layer of an artery (or vein) is known as the tunica externa, also known as tunica adventitia, and is composed of collagen fibres and elastic tissue, with the largest arteries containing vasa vasorum (small blood vessels that supply large blood vessels). Most of the layers have a clear boundary between them; the tunica externa, however, is ill-defined, and its boundary is conventionally taken to be where it meets the surrounding connective tissue. Inside this layer is the tunica media, or media, which is made up of smooth muscle cells, elastic tissue (also called connective tissue proper) and collagen fibres. The elastic tissue allows the artery to bend and fit through places in the body. The innermost layer, which is in direct contact with the flow of blood, is the tunica intima, commonly called the intima. This layer is mainly made up of endothelial cells (and a supporting layer of elastin-rich collagen in elastic arteries). The hollow internal cavity in which the blood flows is called the lumen.
Development
Arterial formation begins and ends when endothelial cells begin to express arterial-specific genes, such as ephrin B2.
Function:
Arteries form part of the circulatory system. They carry blood that is oxygenated after it has been pumped from the heart. Coronary arteries also aid the heart in pumping blood by sending oxygenated blood to the heart, allowing the muscles to function. Arteries carry oxygenated blood away from the heart to the tissues, except for pulmonary arteries, which carry blood to the lungs for oxygenation (usually veins carry deoxygenated blood to the heart but the pulmonary veins carry oxygenated blood as well). There are two unique types of arteries. The pulmonary artery carries blood from the heart to the lungs, where it receives oxygen. It is unique because the blood in it is not "oxygenated", as it has not yet passed through the lungs. The other unique artery is the umbilical artery, which carries deoxygenated blood from a fetus to the placenta.
Arteries have a blood pressure higher than other parts of the circulatory system. The pressure in arteries varies during the cardiac cycle: it is highest when the heart contracts and lowest when the heart relaxes. The variation in pressure produces a pulse, which can be felt in different areas of the body, such as the radial pulse. Arterioles have the greatest collective influence on both local blood flow and on overall blood pressure. They are the primary "adjustable nozzles" in the blood system, across which the greatest pressure drop occurs. The combination of heart output (cardiac output) and systemic vascular resistance, which refers to the collective resistance of all of the body's arterioles, is the principal determinant of arterial blood pressure at any given moment.
Arteries have the highest pressure in the circulatory system and a narrow lumen. The arterial wall consists of three tunics: the tunica intima, media, and externa.
Systemic arteries are the arteries (including the peripheral arteries), of the systemic circulation, which is the part of the cardiovascular system that carries oxygenated blood away from the heart, to the body, and returns deoxygenated blood back to the heart. Systemic arteries can be subdivided into two types—muscular and elastic—according to the relative compositions of elastic and muscle tissue in their tunica media as well as their size and the makeup of the internal and external elastic lamina. The larger arteries (>10 mm diameter) are generally elastic and the smaller ones (0.1–10 mm) tend to be muscular. Systemic arteries deliver blood to the arterioles, and then to the capillaries, where nutrients and gases are exchanged.
After traveling from the aorta, blood travels through peripheral arteries into smaller arteries called arterioles, and eventually to capillaries. Arterioles help in regulating blood pressure by the variable contraction of the smooth muscle of their walls, and deliver blood to the capillaries.
Aorta
The aorta is the root systemic artery (i.e., main artery). In humans, it receives blood directly from the left ventricle of the heart via the aortic valve. As the aorta branches and these arteries branch, in turn, they become successively smaller in diameter, down to the arterioles. The arterioles supply capillaries, which in turn empty into venules. The first branches off of the aorta are the coronary arteries, which supply blood to the heart muscle itself. These are followed by the branches of the aortic arch, namely the brachiocephalic artery, the left common carotid, and the left subclavian arteries.
Capillaries
The capillaries are the smallest of the blood vessels and are part of the microcirculation. The microvessels have a width of a single cell in diameter to aid in the fast and easy diffusion of gases, sugars and nutrients to surrounding tissues. Capillaries have no smooth muscle surrounding them and have a diameter less than that of red blood cells; a red blood cell is typically 7 micrometers outside diameter, capillaries typically 5 micrometers inside diameter. The red blood cells must distort in order to pass through the capillaries.
These small diameters of the capillaries provide a relatively large surface area for the exchange of gases and nutrients.
Clinical significance:
Systemic arterial pressures are generated by the forceful contractions of the heart's left ventricle. High blood pressure is a factor in causing arterial damage. Healthy resting arterial pressures are relatively low, mean systemic pressures typically being under 100 mmHg (1.9 psi; 13 kPa) above surrounding atmospheric pressure (about 760 mmHg, 14.7 psi, 101 kPa at sea level). To withstand and adapt to the pressures within, arteries are surrounded by varying thicknesses of smooth muscle which have extensive elastic and inelastic connective tissues. The pulse pressure, being the difference between systolic and diastolic pressure, is determined primarily by the amount of blood ejected by each heart beat, stroke volume, versus the volume and elasticity of the major arteries.
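As a worked example with typical textbook values (120/80 mmHg, assumed here for illustration; the mean-pressure rule of thumb is a common clinical estimate not stated above):

```python
# Typical resting blood-pressure values, assumed for illustration.
systolic = 120.0   # mmHg
diastolic = 80.0   # mmHg

# Pulse pressure is the difference between systolic and diastolic pressure.
pulse_pressure = systolic - diastolic          # 40.0 mmHg

# A common clinical estimate: mean arterial pressure is roughly diastolic
# plus one third of the pulse pressure (the heart spends longer in diastole
# than in systole), keeping the mean under 100 mmHg as noted above.
mean_pressure = diastolic + pulse_pressure / 3
print(pulse_pressure, round(mean_pressure, 1))  # 40.0 93.3
```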
A blood squirt, also known as an arterial gush, is the effect when an artery is cut: because of the higher arterial pressures, blood is spurted out at a rapid, intermittent rate that coincides with the heartbeat. The amount of blood loss can be copious, can occur very rapidly, and can be life-threatening.

Over time, factors such as elevated arterial blood sugar (particularly as seen in diabetes mellitus), lipoprotein, cholesterol, high blood pressure, stress and smoking are all implicated in damaging both the endothelium and walls of the arteries, resulting in atherosclerosis. Atherosclerosis is a disease marked by the hardening of arteries. It is caused by an atheroma or plaque in the artery wall: a build-up of cell debris containing lipids (cholesterol and fatty acids), calcium and a variable amount of fibrous connective tissue.
Accidental intraarterial injection either iatrogenically or through recreational drug use can cause symptoms such as intense pain, paresthesia and necrosis. It usually causes permanent damage to the limb; often amputation is necessary.
History:
Among the Ancient Greeks before Hippocrates, all blood vessels were called Φλέβες, phlebes. The word arteria then referred to the windpipe. Herophilos was the first to describe anatomical differences between the two types of blood vessel. While Empedocles believed that the blood moved to and fro through the blood vessels, there was no concept of the capillary vessels that join arteries and veins, and there was no notion of circulation. Diogenes of Apollonia developed the theory of pneuma, originally meaning just air but soon identified with the soul itself, and thought to co-exist with the blood in the blood vessels. The arteries were thought to be responsible for the transport of air to the tissues and to be connected to the trachea. This was as a result of finding the arteries of cadavers devoid of blood. In medieval times, it was supposed that arteries carried a fluid, called "spiritual blood" or "vital spirits", considered to be different from the contents of the veins. This theory went back to Galen. In the late medieval period, the trachea and ligaments were also called "arteries".

William Harvey described and popularized the modern concept of the circulatory system and the roles of arteries and veins in the 17th century. Alexis Carrel, at the beginning of the 20th century, first described the technique for vascular suturing and anastomosis and successfully performed many organ transplantations in animals; he thus opened the way to modern vascular surgery, which had previously been limited to the permanent ligation of vessels.
**Pallet jack**
A pallet jack, also known as a pallet truck, pallet pump, pump truck, scooter, dog, or jigger, is a tool used to lift and move pallets. Pallet jacks are the most basic form of forklift and are intended to move pallets within a warehouse.
Operational principle:
The jack is steered by a tiller-like lever called a 'tow bar' that also acts on the pump piston for raising the forks. A small lever on the tow bar's steering handle releases the hydraulic fluid, causing the forks to lower. The steering wheels are located directly below the tow bar and support the jacking mechanism.
The front wheels inside the end of the forks are mounted on push rods attached to linkages that go to levers attached to the jack cylinder. As the hydraulic jack at the 'tiller' end is raised, the links force the wheels down, raising the forks vertically above the front wheels, raising the load upward until it clears the floor. The pallet is only lifted enough to clear the floor for subsequent travel. Oftentimes, pallet jacks are used to move and organize pallets inside a trailer, especially when there is no forklift truck access or availability.
History:
Manual pallet jacks have existed since at least 1918. Early types lifted the forks and load only by mechanical linkages; more modern types use a hand-pumped hydraulic jack to lift.
Types:
Manual pallet jack
A manual pallet jack is a hand-powered jack most commonly seen in retail and personal warehousing operations. They are used predominantly for lifting, lowering and steering pallets from one place to another.
Powered pallet jack
Powered pallet jacks, also known as electric pallet trucks, walkies, single or double pallet jacks, or power jacks, are motorized to allow lifting and moving of heavier and stacked pallets. Some contain a platform for the user to stand while moving pallets. The powered pallet jack is generally moved by a throttle on the handle to move forward or in reverse and steered by swinging the handle in the intended direction. Some contain a type of dead man's switch rather than a brake to stop the machine should the user need to stop quickly or leave the machine while it is in use. Others use a system known as "plugging" wherein the driver turns the throttle from forward to reverse (or vice versa) to slow and stop the machine, as the dead man's switch is used in emergencies only.
Rough terrain pallet jack
Rough terrain pallet jacks are designed specifically for use on uneven ground. They are made using heavy-duty frames and robust pneumatic tyres so that they can be manoeuvred over rough surfaces with ease. Many manufacturers opt for watertight wheel bearings, a hydraulic elevator or a built-in pump to ensure their rough terrain pallet jacks are easy and comfortable to use, even in the harshest conditions.
Operational limitations:
Reversible pallets cannot be used.
Double-faced non-reversible pallets cannot have deck-boards where the front wheels extend to the floor.
Enables only two-way entry into a four-way notched-stringer pallet, because the forks cannot be inserted into the notches.
Power jacks have difficulty in confined spaces (coolers) and narrow openings.
Operational risks:
Pallet jacks are classed as material-handling equipment (MHE). Under most health and safety law, training is required in their use (particularly for powered pallet jacks) and, as the loads carried are heavy, there is a substantial risk of accidents resulting in injuries.
Typical dimensions:
Industry seems to have 'standardized' pallet jacks in several ways:
Width of each of the two forks: 180 mm (7 in)
Fork width (the dimension between the outer edges of the forks): available as 510 and 690 mm (20¼ and 27 in)
Fork length: available as 910, 1,070 and 1,220 mm (36, 42 and 48 in)
Lowered height: 74 mm (2.9 in)
Raised height: at least 190 mm (7.5 in), but some will raise higher
In Eurasia the overall dimensions are similar, as modern container palletization has forced standardization in the dimensional domain globally.
**Logic programming**
Logic programming is a programming paradigm which is largely based on formal logic. Any program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, answer set programming (ASP) and Datalog. In all of these languages, rules are written in the form of clauses:

H :- B1, …, Bn.

and are read declaratively as logical implications:

H if B1 and … and Bn.

H is called the head of the rule and B1, ..., Bn is called the body. Facts are rules that have no body, and are written in the simplified form:

H.

In the simplest case in which H, B1, ..., Bn are all atomic formulae, these clauses are called definite clauses or Horn clauses. However, there are many extensions of this simple case, the most important one being the case in which conditions in the body of a clause can also be negations of atomic formulas. Logic programming languages that include this extension have the knowledge representation capabilities of a non-monotonic logic.
In ASP and Datalog, logic programs have only a declarative reading, and their execution is performed by means of a proof procedure or model generator whose behaviour is not meant to be controlled by the programmer. However, in the Prolog family of languages, logic programs also have a procedural interpretation as goal-reduction procedures: to solve H, solve B1, and ... and solve Bn.

Consider the following clause as an example:

fallible(X) :- human(X).

based on an example used by Terry Winograd to illustrate the programming language Planner. As a clause in a logic program, it can be used both as a procedure to test whether X is fallible by testing whether X is human, and as a procedure to find an X which is fallible by finding an X which is human. Even facts have a procedural interpretation. For example, the clause:

human(socrates).

can be used both as a procedure to show that socrates is human, and as a procedure to find an X that is human by "assigning" socrates to X.
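The goal-reduction reading can be sketched as a toy interpreter (an illustrative Python sketch, not how real Prolog systems are implemented; atoms are tuples and variables are capitalized strings, both conventions assumed here):

```python
import itertools

# The two clauses from the example above.
RULES = [
    (("fallible", "X"), [("human", "X")]),   # fallible(X) :- human(X).
    (("human", "socrates"), []),             # human(socrates).  (a fact)
]

_fresh = itertools.count()

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    # Follow variable bindings in substitution s to their current value.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Make atoms a and b equal under s; return the extended substitution,
    # or None on failure.
    if len(a) != len(b) or a[0] != b[0]:
        return None
    s = dict(s)
    for x, y in zip(a[1:], b[1:]):
        x, y = walk(x, s), walk(y, s)
        if x == y:
            continue
        if is_var(x):
            s[x] = y
        elif is_var(y):
            s[y] = x
        else:
            return None
    return s

def rename(head, body):
    # Give the clause's variables fresh names so separate uses don't clash.
    n = next(_fresh)
    ren = lambda t: tuple(f"{a}_{n}" if is_var(a) else a for a in t)
    return ren(head), [ren(g) for g in body]

def solve(goals, s=None):
    # Goal reduction: to solve H, solve B1, ..., Bn.
    s = {} if s is None else s
    if not goals:
        yield s
        return
    goal, rest = goals[0], goals[1:]
    for head, body in RULES:
        head, body = rename(head, body)
        s2 = unify(goal, head, s)
        if s2 is not None:
            yield from solve(body + rest, s2)

# "Find an X which is fallible" by finding an X which is human:
answers = [walk("X", s) for s in solve([("fallible", "X")])]
print(answers)  # ['socrates']
```

The same `solve` call also works in test mode: `solve([("fallible", "socrates")])` yields a (trivial) substitution, confirming the ground goal, which mirrors the dual reading of clauses described above.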
The declarative reading of logic programs can be used by a programmer to verify their correctness. Moreover, logic-based program transformation techniques can also be used to transform logic programs into logically equivalent programs that are more efficient. In addition, the programmer can use knowledge of the problem-solving behaviour of the implementation of the language, to write programs that exploit that behaviour for the sake of greater efficiency.
History:
The use of mathematical logic to represent and execute computer programs is also a feature of the lambda calculus, developed by Alonzo Church in the 1930s. However, the first proposal to use the clausal form of logic for representing computer programs was made by Cordell Green. This used an axiomatization of a subset of LISP, together with a representation of an input-output relation, to compute the relation by simulating the execution of the program in LISP. Foster and Elcock's Absys, on the other hand, employed a combination of equations and lambda calculus in an assertional programming language that places no constraints on the order in which operations are performed.

Logic programming in its present form can be traced back to debates in the late 1960s and early 1970s about declarative versus procedural representations of knowledge in artificial intelligence. Advocates of declarative representations were notably working at Stanford, associated with John McCarthy, Bertram Raphael and Cordell Green, and in Edinburgh, with John Alan Robinson (an academic visitor from Syracuse University), Pat Hayes, and Robert Kowalski. Advocates of procedural representations were mainly centered at MIT, under the leadership of Marvin Minsky and Seymour Papert.

Although it was based on the proof methods of logic, Planner, developed at MIT, was the first language to emerge within this proceduralist paradigm. Planner featured pattern-directed invocation of procedural plans from goals (i.e. goal-reduction or backward chaining) and from assertions (i.e. forward chaining). The most influential implementation of Planner was the subset of Planner, called Micro-Planner, implemented by Gerry Sussman, Eugene Charniak and Terry Winograd. It was used to implement Winograd's natural-language understanding program SHRDLU, which was a landmark at that time.
To cope with the very limited memory systems at the time, Planner used a backtracking control structure so that only one possible computation path had to be stored at a time. Planner gave rise to the programming languages QA4, Popler, Conniver, QLISP, and the concurrent language Ether.

Hayes and Kowalski in Edinburgh tried to reconcile the logic-based declarative approach to knowledge representation with Planner's procedural approach. Hayes (1973) developed an equational language, Golux, in which different procedures could be obtained by altering the behavior of the theorem prover. Kowalski, on the other hand, developed SLD resolution, a variant of SL-resolution, and showed how it treats implications as goal-reduction procedures. Kowalski collaborated with Colmerauer in Marseille, who developed these ideas in the design and implementation of the programming language Prolog.
The Association for Logic Programming was founded in 1986 to promote logic programming.
Prolog gave rise to the programming languages ALF, Fril, Gödel, Mercury, Oz, Ciao, Visual Prolog, XSB, and λProlog, as well as a variety of concurrent logic programming languages, constraint logic programming languages, Datalog and Answer Set Programming.
Concepts:
Semantics Maarten van Emden and Robert Kowalski defined three semantics for Horn clause logic programs (model-theoretic, fixed-point, and proof-theoretic) and showed that they are equivalent.
Logic and control Logic programming can be viewed as controlled deduction. An important concept in logic programming is the separation of programs into their logic component and their control component. With pure logic programming languages, the logic component alone determines the solutions produced. The control component can be varied to provide alternative ways of executing a logic program. This notion is captured by the slogan

Algorithm = Logic + Control

where "Logic" represents a logic program and "Control" represents different theorem-proving strategies.
Problem solving In the simplified, propositional case in which a logic program and a top-level atomic goal contain no variables, backward reasoning determines an and-or tree, which constitutes the search space for solving the goal. The top-level goal is the root of the tree. Given any node in the tree and any clause whose head matches the node, there exists a set of child nodes corresponding to the sub-goals in the body of the clause. These child nodes are grouped together by an "and". The alternative sets of children corresponding to alternative ways of solving the node are grouped together by an "or".
Any search strategy can be used to search this space. Prolog uses a sequential, last-in-first-out, backtracking strategy, in which only one alternative and one sub-goal are considered at a time. Other search strategies, such as parallel search, intelligent backtracking, or best-first search to find an optimal solution, are also possible.
In the more general case, where sub-goals share variables, other strategies can be used, such as choosing the subgoal that is most highly instantiated or that is sufficiently instantiated so that only one procedure applies. Such strategies are used, for example, in concurrent logic programming.
Negation as failure For most practical applications, as well as for applications that require non-monotonic reasoning in artificial intelligence, Horn clause logic programs need to be extended to normal logic programs with negative conditions. A clause in a normal logic program has the form:

H :- A1, …, An, not B1, …, not Bn.

and is read declaratively as a logical implication:

H if A1 and … and An and not B1 and … and not Bn.

where H and all the Ai and Bi are atomic formulas. The negation in the negative literals not Bi is commonly referred to as "negation as failure", because in most implementations, a negative condition not Bi is shown to hold by showing that the positive condition Bi fails to hold. For example, given the goal of finding something that can fly, there are two candidate solutions, which solve the first subgoal bird(X), namely X = john and X = mary. The second subgoal not abnormal(john) of the first candidate solution fails, because wounded(john) succeeds and therefore abnormal(john) succeeds. However, the second subgoal not abnormal(mary) of the second candidate solution succeeds, because wounded(mary) fails and therefore abnormal(mary) fails. Therefore, X = mary is the only solution of the goal.
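The program that the worked example refers to appears to have been lost in extraction. A standard reconstruction, consistent with the predicates named in the surrounding text (and with the completion given later in this section), is:

```prolog
% Birds can fly, unless they are abnormal.
canfly(X) :- bird(X), not abnormal(X).
% Wounded birds are abnormal.
abnormal(X) :- wounded(X).
bird(john).
bird(mary).
wounded(john).
```

The goal of finding something that can fly is then the goal clause :- canfly(X).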
Micro-Planner had a construct, called "thnot", which when applied to an expression returns the value true if (and only if) the evaluation of the expression fails. An equivalent operator typically exists in modern Prolog implementations. It is typically written as not(Goal) or \+ Goal, where Goal is some goal (proposition) to be proved by the program. This operator differs from negation in first-order logic: a negation such as \+ X == 1 fails when the variable X has been bound to the atom 1, but it succeeds in all other cases, including when X is unbound. This makes Prolog's reasoning non-monotonic: X = 1, \+ X == 1 always fails, while \+ X == 1, X = 1 can succeed, binding X to 1, depending on whether X was initially bound (note that standard Prolog executes goals in left-to-right order).
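The order-dependence described above can be checked at a Prolog top level; the two queries below are exactly the goal sequences from the text, with their outcomes as the text states them:

```prolog
% Goal order matters for negation as failure:
?- X = 1, \+ X == 1.   % fails: X is already bound to 1 when \+ runs
?- \+ X == 1, X = 1.   % succeeds with X = 1: X was unbound when \+ ran
```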
The logical status of negation as failure was unresolved until Keith Clark [1978] showed that, under certain natural conditions, it is a correct (and sometimes complete) implementation of classical negation with respect to the completion of the program. Completion amounts roughly to regarding the set of all the program clauses with the same predicate on the left hand side, say H :- Body1.
…
H :- Bodyk.

as a definition of the predicate

H iff (Body1 or … or Bodyk)

where "iff" means "if and only if". Writing the completion also requires explicit use of the equality predicate and the inclusion of a set of appropriate axioms for equality. However, the implementation of negation as failure needs only the if-halves of the definitions without the axioms of equality.
For example, the completion of the program above is: canfly(X) iff bird(X), not abnormal(X).
abnormal(X) iff wounded(X).
bird(X) iff X = john or X = mary.
X = X.
not john = mary.
not mary = john.

The notion of completion is closely related to McCarthy's circumscription semantics for default reasoning, and to the closed world assumption.
As an alternative to the completion semantics, negation as failure can also be interpreted epistemically, as in the stable model semantics of answer set programming. In this interpretation not(Bi) means literally that Bi is not known or not believed. The epistemic interpretation has the advantage that it can be combined very simply with classical negation, as in "extended logic programming", to formalise such phrases as "the contrary can not be shown", where "contrary" is classical negation and "can not be shown" is the epistemic interpretation of negation as failure.
Knowledge representation The fact that Horn clauses can be given a procedural interpretation and, vice versa, that goal-reduction procedures can be understood as Horn clauses + backward reasoning means that logic programs combine declarative and procedural representations of knowledge. The inclusion of negation as failure means that logic programming is a kind of non-monotonic logic.
Despite its simplicity compared with classical logic, this combination of Horn clauses and negation as failure has proved to be surprisingly expressive. For example, it provides a natural representation for the common-sense laws of cause and effect, as formalised by both the situation calculus and event calculus. It has also been shown to correspond quite naturally to the semi-formal language of legislation. In particular, Prakken and Sartor credit the representation of the British Nationality Act as a logic program with being "hugely influential for the development of computational representations of legislation, showing how logic programming enables intuitively appealing representations that can be directly deployed to generate automatic inferences".
Variants and extensions:
Prolog The programming language Prolog was developed in 1972 by Alain Colmerauer. It emerged from a collaboration between Colmerauer in Marseille and Robert Kowalski in Edinburgh. Colmerauer was working on natural-language understanding, using logic to represent semantics and using resolution for question-answering. During the summer of 1971, Colmerauer and Kowalski discovered that the clausal form of logic could be used to represent formal grammars and that resolution theorem provers could be used for parsing. They observed that some theorem-proving strategies, like hyper-resolution, behave as bottom-up parsers, and others, like SL-resolution (1971), behave as top-down parsers.
It was in the following summer of 1972 that Kowalski, again working with Colmerauer, developed the procedural interpretation of implications. This dual declarative/procedural interpretation later became formalised in the Prolog notation

H :- B1, …, Bn.

which can be read (and used) both declaratively and procedurally. It also became clear that such clauses could be restricted to definite clauses or Horn clauses, where H, B1, ..., Bn are all atomic predicate logic formulae, and that SL-resolution could be restricted (and generalised) to LUSH or SLD-resolution. Kowalski's procedural interpretation and LUSH were described in a 1973 memo, published in 1974.

Colmerauer, with Philippe Roussel, used this dual interpretation of clauses as the basis of Prolog, which was implemented in the summer and autumn of 1972. The first Prolog program, also written in 1972 and implemented in Marseille, was a French question-answering system. The use of Prolog as a practical programming language was given great momentum by the development of a compiler by David Warren in Edinburgh in 1977. Experiments demonstrated that Edinburgh Prolog could compete with the processing speed of other symbolic programming languages such as Lisp. Edinburgh Prolog became the de facto standard and strongly influenced the definition of ISO standard Prolog.
Abductive logic programming Abductive logic programming is an extension of normal logic programming that allows some predicates, declared as abducible predicates, to be "open" or undefined. A clause in an abductive logic program has the form:

H :- B1, …, Bn, A1, …, An.

where H is an atomic formula that is not abducible, all the Bi are literals whose predicates are not abducible, and the Ai are atomic formulas whose predicates are abducible. The abducible predicates can be constrained by integrity constraints, which can have the form:

false :- L1, …, Ln.

where the Li are arbitrary literals (defined or abducible, and atomic or negated). For example, the program for finding something that can fly can be rewritten so that the predicate normal is abducible.
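The example program itself is missing from this text; a reconstruction consistent with the discussion that follows (the hypothesis normal(mary) explaining the observation canfly(mary)) is:

```prolog
% Birds can fly if it can be assumed (abduced) that they are normal.
canfly(X) :- bird(X), normal(X).
% Integrity constraint: a wounded bird cannot be assumed to be normal.
false :- normal(X), wounded(X).
bird(john).
bird(mary).
wounded(john).
```

The goal of finding something which can fly can then be posed as :- canfly(X), with normal(mary) as the abduced hypothesis.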
Problem-solving is achieved by deriving hypotheses expressed in terms of the abducible predicates as solutions to problems to be solved. These problems can be either observations that need to be explained (as in classical abductive reasoning) or goals to be solved (as in normal logic programming). For example, the hypothesis normal(mary) explains the observation canfly(mary). Moreover, the same hypothesis entails the only solution X = mary of the goal of finding something which can fly. Abductive logic programming has been used for fault diagnosis, planning, natural language processing and machine learning. It has also been used to interpret negation as failure as a form of abductive reasoning.
Metalogic programming Because mathematical logic has a long tradition of distinguishing between object language and metalanguage, logic programming also allows metalevel programming. The simplest metalogic program is the so-called "vanilla" meta-interpreter, in which true represents an empty conjunction, and clause(A,B) means that there is an object-level clause of the form A :- B.
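The meta-interpreter itself appears to have been dropped in extraction; the usual three-clause "vanilla" version, matching the description above, is:

```prolog
% solve(Goal) succeeds when Goal holds in the object-level program.
solve(true).                          % the empty conjunction holds trivially
solve((A, B)) :- solve(A), solve(B).  % solve a conjunction by solving each conjunct
solve(A) :- clause(A, B), solve(B).   % reduce an atomic goal via an object-level clause A :- B
```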
Metalogic programming allows object-level and metalevel representations to be combined, as in natural language. It can also be used to implement any logic which is specified as inference rules. Metalogic is used in logic programming to implement metaprograms, which manipulate other programs, databases, knowledge bases or axiomatic theories as data.
Constraint logic programming Constraint logic programming combines Horn clause logic programming with constraint solving. It extends Horn clauses by allowing some predicates, declared as constraint predicates, to occur as literals in the body of clauses. A constraint logic program is a set of clauses of the form:

H :- C1, …, Cn ◊ B1, …, Bn.

where H and all the Bi are atomic formulas, and the Ci are constraints. Declaratively, such clauses are read as ordinary logical implications:

H if C1 and … and Cn and B1 and … and Bn.

However, whereas the predicates in the heads of clauses are defined by the constraint logic program, the predicates in the constraints are predefined by some domain-specific model-theoretic structure or theory.
Procedurally, subgoals whose predicates are defined by the program are solved by goal-reduction, as in ordinary logic programming, but constraints are checked for satisfiability by a domain-specific constraint-solver, which implements the semantics of the constraint predicates. An initial problem is solved by reducing it to a satisfiable conjunction of constraints.
The following constraint logic program represents a toy temporal database of john's history as a teacher. Here ≤ and < are constraint predicates, with their usual intended semantics. The following goal clause queries the database to find out when john both taught logic and was a professor:

:- teaches(john, logic, T), rank(john, professor, T).

The solution is 2010 ≤ T, T ≤ 2012.
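The database program that the passage describes appears to have been lost in extraction. A reconstruction consistent with the query and its solution is sketched below; only the logic and professor clauses are pinned down by the text, so the other courses, ranks, and year bounds are illustrative assumptions:

```prolog
% Hypothetical temporal database of john's history as a teacher.
teaches(john, hardware, T) :- 1990 ≤ T, T < 1999.
teaches(john, software, T) :- 1999 ≤ T, T < 2005.
teaches(john, logic, T)    :- 2005 ≤ T, T ≤ 2012.
rank(john, instructor, T)  :- 1990 ≤ T, T < 2010.
rank(john, professor, T)   :- 2010 ≤ T, T < 2014.
```

Against this database, the query :- teaches(john, logic, T), rank(john, professor, T) reduces to the satisfiable conjunction of constraints 2010 ≤ T, T ≤ 2012, as stated in the text.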
Constraint logic programming has been used to solve problems in such fields as civil engineering, mechanical engineering, digital circuit verification, automated timetabling, air traffic control, and finance. It is closely related to abductive logic programming.
Concurrent logic programming Concurrent logic programming integrates concepts of logic programming with concurrent programming. Its development was given a big impetus in the 1980s by its choice as the systems programming language of the Japanese Fifth Generation Project (FGCS).

A concurrent logic program is a set of guarded Horn clauses of the form:

H :- G1, …, Gn | B1, …, Bn.

The conjunction G1, ... , Gn is called the guard of the clause, and | is the commitment operator. Declaratively, guarded Horn clauses are read as ordinary logical implications:

H if G1 and … and Gn and B1 and … and Bn.

However, procedurally, when there are several clauses whose heads H match a given goal, then all of the clauses are executed in parallel, checking whether their guards G1, ... , Gn hold. If the guards of more than one clause hold, then a committed choice is made to one of the clauses, and execution proceeds with the subgoals B1, ..., Bn of the chosen clause. These subgoals can also be executed in parallel. Thus concurrent logic programming implements a form of "don't care nondeterminism", rather than "don't know nondeterminism".
For example, the following concurrent logic program defines a predicate shuffle(Left, Right, Merge), which can be used to shuffle two lists Left and Right, combining them into a single list Merge that preserves the ordering of the two lists Left and Right. Here, [] represents the empty list, and [Head | Tail] represents a list with first element Head followed by list Tail, as in Prolog. (Notice that the first occurrence of | in the second and third clauses is the list constructor, whereas the second occurrence of | is the commitment operator.) The program can be used, for example, to shuffle the lists [ace, queen, king] and [1, 4, 2]. The program will non-deterministically generate a single solution, for example Merge = [ace, queen, 1, king, 4, 2].
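The program itself appears to have been dropped in extraction; a reconstruction matching the description (three guarded clauses, with | as the list constructor in the guards of the second and third clauses and as the commitment operator after each guard) is:

```prolog
% Guarded Horn clauses for shuffling two lists into one.
shuffle([], [], []).
shuffle(Left, Right, Merge) :-
    Left = [First | Rest] |
    Merge = [First | ShortMerge],
    shuffle(Rest, Right, ShortMerge).
shuffle(Left, Right, Merge) :-
    Right = [First | Rest] |
    Merge = [First | ShortMerge],
    shuffle(Left, Rest, ShortMerge).
```

The example in the text then corresponds to invoking the goal clause :- shuffle([ace, queen, king], [1, 4, 2], Merge).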
Arguably, concurrent logic programming is based on message passing, so it is subject to the same indeterminacy as other concurrent message-passing systems, such as Actors (see Indeterminacy in concurrent computation). Carl Hewitt has argued that concurrent logic programming is not based on logic, in the sense that its computational steps cannot be logically deduced. However, in concurrent logic programming, any result of a terminating computation is a logical consequence of the program, and any partial result of a partial computation is a logical consequence of the program and the residual goal (process network). Thus the indeterminacy of computations implies that not all logical consequences of the program can be deduced.
Concurrent constraint logic programming Concurrent constraint logic programming combines concurrent logic programming and constraint logic programming, using constraints to control concurrency. A clause can contain a guard, which is a set of constraints that may block the applicability of the clause. When the guards of several clauses are satisfied, concurrent constraint logic programming makes a committed choice to use only one.
Inductive logic programming Inductive logic programming is concerned with generalizing positive and negative examples in the context of background knowledge: machine learning of logic programs. Recent work in this area, combining logic programming, learning and probability, has given rise to the new field of statistical relational learning and probabilistic inductive logic programming.
Higher-order logic programming Several researchers have extended logic programming with higher-order programming features derived from higher-order logic, such as predicate variables. Such languages include the Prolog extensions HiLog and λProlog.
Linear logic programming Basing logic programming within linear logic has resulted in the design of logic programming languages that are considerably more expressive than those based on classical logic. Horn clause programs can only represent state change by the change in arguments to predicates. In linear logic programming, one can use the ambient linear logic to support state change. Some early designs of logic programming languages based on linear logic include LO, Lolli, ACL, and Forum. Forum provides a goal-directed interpretation of all linear logic.
Object-oriented logic programming F-logic extends logic programming with objects and the frame syntax.
Logtalk extends the Prolog programming language with support for objects, protocols, and other OOP concepts. It supports most standard-compliant Prolog systems as backend compilers.
Transaction logic programming Transaction logic is an extension of logic programming with a logical theory of state-modifying updates. It has both a model-theoretic semantics and a procedural one. An implementation of a subset of Transaction logic is available in the Flora-2 system. Other prototypes are also available.
Sources:
General introductions Baral, C.; Gelfond, M. (1994). "Logic programming and knowledge representation" (PDF). The Journal of Logic Programming. 19–20: 73–148. doi:10.1016/0743-1066(94)90025-6.
Kowalski, R. A. (1988). "The early years of logic programming" (PDF). Communications of the ACM. 31: 38–43. doi:10.1145/35043.35046. S2CID 12259230.
Lloyd, J. W. (1987). Foundations of Logic Programming.
Other sources John McCarthy. "Programs with common sense". Symposium on Mechanization of Thought Processes. National Physical Laboratory. Teddington, England. 1958.
Miller, Dale; Nadathur, Gopalan; Pfenning, Frank; Scedrov, Andre (1991). "Uniform proofs as a foundation for logic programming". Annals of Pure and Applied Logic. 51 (1–2): 125–157. doi:10.1016/0168-0072(91)90068-W.
Ehud Shapiro (Editor). Concurrent Prolog. MIT Press. 1987.
James Slagle. "Experiments with a Deductive Question-Answering Program". CACM. December 1965.
Gabbay, Dov M.; Hogger, Christopher John; Robinson, J.A., eds. (1993–1998). Handbook of Logic in Artificial Intelligence and Logic Programming. Vols. 1–5, Oxford University Press.
**Phase-out of fossil fuel vehicles**
Phase-out of fossil fuel vehicles:
Vehicles that are powered by fossil fuels, such as gasoline (petrol), diesel, kerosene, and fuel oil, are set to be phased out by a number of countries. It is one of the three most important parts of the general fossil fuel phase-out process, the others being the phase-out of fossil fuel power plants for electricity generation and the decarbonisation of industry.

Many countries and cities around the world have stated they will ban the sale of passenger vehicles (primarily cars and buses) powered by fossil fuels such as petrol, liquefied petroleum gas, and diesel at some time in the future. Synonyms for the bans include phrases like "banning gas cars", "banning petrol cars", "the petrol and diesel car ban", or simply "the diesel ban". Another method of phase-out is the use of zero-emission zones in cities.
A few places have set dates for banning other types of vehicles, such as fossil-fuelled ships and lorries.
Background:
Reasons for banning the further sale of fossil fuel vehicles include: reducing health risks from pollution particulates, notably diesel PM10s, and other emissions, notably nitrogen oxides; meeting national greenhouse gas targets (such as for CO2) under international agreements such as the Kyoto Protocol and the Paris Agreement; and achieving energy independence. The intent to ban vehicles powered by fossil fuels is attractive to governments as it offers a simpler compliance target, compared with a carbon tax or a phase-out of fossil fuels.
The automotive industry is working to introduce electric vehicles to adapt to bans, with varying success, and this is seen by some in the industry as a possible source of money in a declining market. A 2020 study from the Eindhoven University of Technology showed that the manufacturing emissions of batteries of new electric cars are much smaller than what was assumed in the 2017 IVL study (around 75 kg CO2/kWh) and that the lifespan of lithium batteries is also much longer than previously thought (at least 12 years with a mileage of 15,000 km annually): they are cleaner than internal combustion cars powered by diesel or petrol.

There is some opposition to simply moving from fossil-fuel-powered cars to electric cars, as they would still require a large proportion of urban land. On the other hand, there are many types of (electric) vehicles that take up little space, such as (cargo) bicycles and electric motorcycles and scooters. Making cycling and walking over short distances, especially in urban areas, more attractive and feasible with measures such as removing roads and parking spaces and improving cycling infrastructure and footpaths (including pavements), provides a partial alternative to replacing all fossil-fuelled vehicles with electric vehicles. Although there are as yet very few completely car-free cities (such as Venice), several are banning all cars in parts of the city, such as city centres.
Methods:
The banning of fossil-fuelled vehicles of a defined scope requires authorities to enact legislation that restricts them in a certain way. Proposed methods include: A prohibition on further sales or registration of new vehicles powered with specific fuels from a certain date in a certain area. At the date of implementation, existing vehicles would remain legal to drive on public highways.
A prohibition on the importation of new vehicles powered with specific fuels from a certain date into a certain area. This is planned in countries such as Denmark, Israel, and Switzerland; however, some countries, such as Israel, have no legislation on the subject.
A prohibition on any use of certain vehicles powered with specific fuels from a certain date within a certain area. Restrictions such as these are already in place in many European cities, usually in the context of their low-emission zones (LEZs).
Making emission legislation so strict that it cannot realistically be fulfilled.

Fuel cell (electric) vehicles (FCVs or FCEVs) also allow running on (some) non-fossil fuels (e.g., hydrogen, ethanol, methanol).
Cities generally use the introduction of low-emission zones (LEZs) or zero-emission zones (ZEZs), sometimes with an accompanying air quality certificate sticker such as Crit'air (France), to restrict the use of fossil-fuelled cars in some or all of their territory. These zones are growing in number, size, and strictness. Some city bans in countries such as Italy, Germany, and Switzerland are only temporarily activated during particular times of the day, during winter, or when there is a smog alert (for example, in Italy in January 2020); these do not directly contribute to the phase-out of fossil fuel vehicles, but they make owning and using such vehicles less attractive as their utility is restricted and the cost of driving them increases.

Some countries have given consumers various incentives such as subsidies or tax breaks to stimulate the purchase of electric vehicles, while fossil-fuelled vehicles are taxed increasingly heavily.

Helped by government incentives, Norway became the first country to have the majority of new vehicles sold in 2021 be electric. In January 2022, 88 per cent of new vehicles sold in the country were electric, and based upon current trends, they would most likely hit the goal of no new fossil fuel cars being sold by 2025.
Places with planned fossil-fuel vehicle restrictions:
International At the 2021 United Nations Climate Change Conference held in Glasgow, multiple governments and companies signed a non-legally-binding declaration to accelerate the transition to 100% zero-emission cars and vans (the Glasgow Declaration). They wanted all new cars and vans to not emit any greenhouse gas at the tailpipe by 2035 in leading markets and by 2040 globally. The United States and China (the biggest car markets) did not sign, and neither did Germany (the biggest car market in the EU). Also absent from the list of signatories were major car manufacturers Volkswagen, Toyota, Renault-Nissan and Hyundai-Kia.
European Union In 2018, Denmark proposed an EU-wide prohibition on petrol and diesel cars, but that turned out to be contrary to EU regulations. In October 2019, Denmark made a proposal for phasing out fossil fuel vehicles on the member state level by 2030 which was supported by 10 other EU member states.
In July 2021, France opposed a ban on combustion-powered cars and in particular on hybrid vehicles.
In July 2021, the European Commission proposed a 100% reduction of emissions for new sales of cars and vans as of 2035. On 8 June 2022, the European Parliament voted in favour of the proposal of the European Commission, but agreement with the European Union member states was necessary before a final law could be passed. On 22 June 2022, German Finance Minister Christian Lindner stated that his government would refuse to agree on the ban. But on 29 June 2022, after 16 hours of negotiations, all climate ministers of the 27 EU member states agreed to the commission's proposal (part of the 'Fit for 55' package) to effectively ban the sale of new internal combustion vehicles by 2035 (through '[introducing] a 100% CO2 emissions reduction target by 2035 for new cars and vans'). Germany backed the 2035 target, asking the Commission whether hybrid vehicles or CO2-neutral fuels could also comply with the proposal; Frans Timmermans responded that the Commission kept an "open mind", but at the time 'hybrids did not deliver sufficient emissions cuts and alternative fuels were prohibitively expensive.' The law for "zero CO2 emissions for new cars and vans in 2035" was approved by the European Parliament on 14 February 2023.
Countries Countries with proposed bans or implementing 100% sales of zero-emissions vehicles include China (including Hong Kong and Macau), Japan, Singapore, the UK, South Korea, Iceland, Denmark, Sweden, Norway, Slovenia, Germany, Italy, France, Belgium, the Netherlands, Portugal, Canada, the 12 U.S. states that adhered to California's Zero-Emission Vehicle (ZEV) Program, Sri Lanka, Cabo Verde, and Costa Rica.
Some politicians in some countries have made broad announcements but have implemented no legislation, so there is no binding phase-out. Ireland, for example, had made announcements but ultimately did not ban diesel or petrol vehicles.

The International Energy Agency predicted in 2021 that 70% of India's new car sales would be fossil powered in 2030, despite earlier government announcements that were discarded in 2018. In November 2021, the Indian government was amongst 30 national governments and six major automakers who pledged to phase out the sale of all new petrol and diesel vehicles by 2040 worldwide, and by 2035 in "leading markets".
Cities and territories Some cities or territories have planned or taken measures to partially or entirely phase out fossil fuel vehicles earlier than their national governments. In some cases, this is achieved through local or regional government initiatives, in other cases through legal challenges brought on by citizens or civil organisations enforcing partial phase-outs based on the right to clean air.

Some cities listed have signed the Fossil Fuel Free Streets Declaration, committing to banning emitting vehicles by 2030, but this does not necessarily have the force of law in those jurisdictions. The bans typically apply to a select number of streets in the urban centre of the city where most people live, not to its entire territory. Some cities take a gradual approach, prohibiting the most polluting categories of vehicles first, then the next-most polluting, all the way up to a complete ban on all fossil-fuel vehicles; some cities have not yet set a deadline for a complete ban, and/or are waiting for the national government to set such a date.

In California, emissions requirements for automakers to be permitted to sell any vehicles in the state were expected to force 15% of new vehicles offered for sale between 2018 and 2025 to be zero emission. Much cleaner emissions and increased efficiency in petrol engines mean this will be met with just 8% of ZEV vehicles. The "Ditching Dirt Diesel" law SB 44, sponsored by Nancy Skinner and adopted on 20 September 2019, requires the California Air Resources Board (CARB) to "create a comprehensive strategy for deploying medium- and heavy-duty vehicles" to make California meet federal ambient air quality standards, and to 'establish goals and spur technology advancements for reducing GHG emissions from the medium- and heavy-duty vehicle sectors by 2030 and 2050'.
It stops short of directly requiring a phase-out of all diesel vehicles by 2050 (as the original bill did), but it would be the most obvious means of achieving the reduction goals. In August 2022, California Governor Gavin Newsom signed off on a new EV mandate. The plan's targets are 35% ZEV market share by 2026, 68% by 2030, and 100% by 2035. This plan is accompanied by supporting funding for infrastructure and ZEV rebates totaling $10 billion. Newsom has stated his commitment to keep California at the forefront of zero-emission transportation.
Places with planned fossil-fuel vehicle restrictions:
In the European Union, Council Directive 96/62/EC on ambient air quality assessment and management and Directive 2008/50/EC on ambient air quality form the legal basis for EU citizens' right to clean air. On 25 July 2008, in the case Dieter Janecek v Freistaat Bayern, the European Court of Justice ruled that under Directive 96/62/EC citizens have the right to require national authorities to implement a short-term action plan aiming to maintain or achieve compliance with air quality limit values. A ruling of the German Federal Administrative Court in Leipzig on 5 September 2013 significantly strengthened the right of environmental associations and consumer protection organisations to sue local authorities to enforce compliance with air quality limits throughout an entire city. The Administrative Court of Wiesbaden declared on 30 June 2015 that financial or economic aspects were not a valid excuse to refrain from taking measures to ensure that the limit values were observed; the Administrative Court of Düsseldorf ruled on 13 September 2016 that driving bans on certain diesel vehicles were legally possible in order to comply with the limit values as quickly as possible; and on 26 July 2017, the Administrative Court of Stuttgart ordered the state of Baden-Württemberg to consider a year-round ban on diesel-powered vehicles.
By mid-February 2018, citizens in the EU member states the Czech Republic, France, Germany, Hungary, Italy, Romania, Slovakia, Spain, and the United Kingdom were suing their governments for violating the limit of 40 micrograms per cubic metre of air stipulated in the Ambient Air Quality Directive. A landmark ruling by the German Federal Administrative Court in Leipzig on 27 February 2018 declared that the cities of Stuttgart and Düsseldorf could legally prohibit older, more polluting diesel vehicles from driving in the zones worst affected by pollution, rejecting appeals by German states against the bans imposed by the two cities' local courts. The case was strongly influenced by the ongoing Volkswagen emissions scandal (also known as Dieselgate), which in 2015 revealed that many Volkswagen diesel engines had been deceptively tested and marketed as much cleaner than they were. The decision was predicted to set a precedent for other places in the country and in Europe. Indeed, the ruling triggered a wave of dozens of local diesel restrictions, brought about by Environmental Action Germany (DUH) suing city authorities and winning legal challenges across Germany. While some groups and parties such as the AfD tried to overturn them, others such as the Greens advocated a national phase-out of diesel cars by 2030. On 13 December 2018, the European Court of Justice overturned a 2016 European Commission relaxation of car NOx emission limits to 168 mg/km, which the Court declared illegal. This allowed the cities of Brussels, Madrid, and Paris, which had filed the complaint, to proceed with their plans to also reject Euro 6 diesel vehicles from their urban centres, based on the original 80 mg/km limit set by EU law.
Manufacturers with planned fossil-fuel vehicle phase-out roadmaps:
In 2017, Volvo announced plans to phase out internal-combustion-only vehicle production by 2019, after which all new cars manufactured by Volvo would be either fully electric or electric hybrids. In 2020, the Volvo Group and other truck makers including DAF Trucks, Daimler AG, Ford, Iveco, MAN SE, and Scania AB pledged to end diesel truck sales by 2040. In 2018, Volkswagen Group's strategy chief said "the year 2026 will be the last product start on a combustion engine platform" for its core brand, Volkswagen. In 2021, General Motors announced plans to go fully electric by 2035. In the same year, Thierry Bolloré, the CEO of Jaguar Land Rover, said the company would "achieve zero tailpipe emissions by 2036" and that its Jaguar brand would be electric-only by 2025. In March 2021, Volvo Cars announced that by 2030 it "intends to only sell fully electric cars and phase out any car in its global portfolio with an internal combustion engine, including hybrids". In April 2021, Honda announced that it would stop selling gas-powered vehicles by 2040. In July 2021, Mercedes-Benz announced that its new vehicle platforms would be EV-only by 2025. In October 2021, Rolls-Royce announced that it would be fully electric by 2030. In November 2021, at the 2021 United Nations Climate Change Conference, car manufacturers including BYD Auto, Ford Motor Company, General Motors, Jaguar Land Rover, Mercedes-Benz, and Volvo committed to "work towards all sales of new cars and vans being zero emission globally by 2040, and by no later than 2035 in leading markets". In 2022, Maserati announced plans to offer fully electric variants of all its models by 2025 and its intention to halt production of combustion engine vehicles by 2030.
Railways:
While railway electrification is often pursued for reasons unrelated to the emissions caused by fossil fuels, there has been an increased push in the 21st century to replace diesel locomotives with alternatives such as battery electric multiple units, hydrogen fuel-cell trains like the Alstom Coradia iLint, or overhead-wire electrification. Switzerland, to date the only country (other than micro- and city-states) to have electrified its entire mainline railway network, pursued this phase-out of fossil fuel vehicles before the term or concept existed in its modern form, in large part because importing coal for steam locomotives had proven difficult during the World Wars, while the country has abundant domestic hydropower to power electric trains. Israel Railways, which had no electrified mainline rail services prior to 2018, when the Tel Aviv–Jerusalem railway became the first line to see electric train operation, plans to electrify most or all of its network and to phase out diesel locomotives and diesel multiple units. The project was accelerated in 2020, as the temporary shutdown of rail traffic due to the COVID-19 pandemic in Israel allowed faster construction while ERTMS Level 2 was being rolled out. However, in 2019 Israel Railways ordered diesel-powered rolling stock to replace its ageing IC3 trains, with media reports citing delays in the electrification program as the main reason. In California's Bay Area, the Caltrain electrification program approved in 2016 is nearing completion. Although Caltrain previously operated no electric locomotives, its infrastructure has been upgraded to support electric operation; funding was awarded in 2018, and train assembly and testing were completed in 2022. Under a multi-stage phase-out plan, the new electric trains will supplement and eventually replace diesel-powered locomotives by 2024.
Shipping:
Emissions will be banned from Norway's World Heritage Sites Geirangerfjord and Nærøyfjord from 2026. Besides boats driven by batteries, or indeed trolley boats, there have been several attempts to adapt nuclear marine propulsion (for decades a part of the military naval forces of many countries, in the form of nuclear submarines, nuclear aircraft carriers, and nuclear icebreakers) to civilian uses. While prototypes such as the German Otto Hahn, the American NS Savannah, and the Japanese RV Mirai were built, the only non-icebreaker nuclear-powered ship to remain in civilian service is the Russian Sevmorput, built in the late 1980s by the Soviet Union. The Soviet Union and its successor state Russia have also maintained a fleet of nuclear icebreakers to keep the Northern Sea Route open.
Shipping:
Sailing ships and oared vessels rely on renewable resources rather than fossil fuels (wind and human muscle-power respectively) but have disadvantages in terms of speed and labour costs, and have thus been phased out of virtually all commercial uses. There are some attempts to use wind-powered ships for commercial purposes, but as of 2022 they have remained marginal.
Aviation:
Norway, and possibly other Scandinavian countries, are aiming for all domestic flights to be emission-free by 2040. A major obstacle to decarbonising air travel is the low energy density of current and foreseeable battery technology. Thus alternatives to electric planes, such as so-called sustainable aviation fuels or e-fuels (fuels derived from electrochemical conversion of substances like water and carbon dioxide into hydrocarbons), are also proposed as a future replacement for current jet fuels. In 2021 the first production-scale plant for e-fuels to be used in aviation opened in northern Germany, with production capacity planned to reach 8 barrels a day by 2022. Lufthansa will be among the chief users of the synthetic fuel produced in the new facility, and Germany's plan to transform aviation to net-zero carbon emissions relies heavily on e-fuels. Besides the need to rapidly scale up currently minuscule production capacity, the main obstacle to wider deployment of sustainable aviation fuels and e-fuels is their much higher cost in the absence of meaningful carbon pricing in aviation. Furthermore, with current CORSIA regulations allowing sustainable aviation fuels to emit up to 90% as much as conventional fuels, even those options are currently far from carbon neutral. There were attempts at building nuclear-powered aircraft during the Cold War, which, unlike nuclear marine propulsion, never got very far and were only ever proposed for military uses; as of 2022 no country or private enterprise is seriously pursuing nuclear propulsion for passenger aircraft. However, short-haul, low-demand routes can be flown using electric aircraft, and manufacturers such as Heart Aerospace are planning to introduce them with United Airlines in 2026.
Unintended side-effects:
Second-hand vehicle dumping: The European Union already has an export market that sends millions of used cars to Eastern Europe, the Caucasus, Central Asia, and Africa. According to UNECE, the global on-road vehicle fleet is expected to double by 2050 (from 1.2 billion to 2.5 billion, see introduction), with most future car purchases taking place in developing countries. Some experts predict that the number of vehicles in developing countries will increase four- or five-fold by 2050 (compared to current levels), and that the majority of these will be second-hand. There are currently no global or even regional agreements that rationalise and govern the flow of second-hand vehicles. Others expect that new electric two-wheelers, being affordable, may sell widely in developing countries. Internal combustion engine cars that no longer comply with local environmental standards are exported to developing countries, where legislation on vehicle emissions is often less strict. In some developing countries, such as Uganda, the average age of an imported car is already 16.5 years, and it will likely be driven for another 20 years; the fuel efficiency of such vehicles worsens as they age. In addition, national vehicle inspection requirements vary widely by country.
Unintended side-effects:
Potential solutions: Export prohibitions: Some propose that the European Union could implement a rule barring the most polluting cars from leaving the EU. The European Union itself is of the opinion that it "should stop exporting its waste outside of the EU" and it will therefore "revisit the rules on waste shipments and illegal exports".
Import prohibitions: This includes used vehicle bans, used vehicle import age limits, taxation and inspection tests as a precondition to vehicle registration.
Convert fossil fuel vehicles to electric: As of 2021, this is expensive, so it tends to only be done for classic cars.
Unintended side-effects:
Mandatory recycling: The European Commission is considering plans to introduce rules on mandatory recycled content in specific product groups, for instance packaging, vehicles, construction materials, and batteries. The EU announced a new Circular Economy Action Plan in March 2020, which mentioned that the Commission will also propose to revise the rules on end-of-life vehicles with a view to promoting more circular business models.
Unintended side-effects:
Scrappage programs: Governments can offer a premium to owners who have their fossil fuel vehicles voluntarily scrapped, which they may put toward a cleaner vehicle if they so choose. For example, the city of Ghent offers a scrapping premium of €1,000 for diesel vehicles and €750 for petrol vehicles; as of December 2019, the city had allocated €1.2 million to the scrapping fund for this purpose.
Mobility transition:
In Germany, activists have coined the term Verkehrswende (mobility transition, analogous to "Energiewende", energy transition) for a project of not only changing the motive power of cars (from fossil fuels to renewable power sources) but the entire mobility system to one of walkability, complete streets, public transit, electrified railways and bicycle infrastructure.
Mobility transition:
There is similar research being done in the United States around the term mobility justice. Geologist Dr. Jason Henderson of the University of California, San Francisco, argues that supporting electric vehicles while neglecting compact city design and public transportation will lead to car-oriented city design. This comes with numerous sustainability issues that disproportionately affect disadvantaged communities, such as environmental gentrification, less low-income housing, and unequal access to the benefits of electric vehicle adoption. In addition, the production of electric vehicles can come at the price of laborers in other countries, and the environmental costs there are seldom taken into account when calculating the environmental benefits of electric vehicles. According to mobility justice critiques, relying primarily on electric vehicles for the phase-out of fossil fuels comes at an opportunity cost of investing in other types of sustainable transportation such as bike lanes, safe walking spaces, electric trains, and electric buses. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Eutactic lattice**
Eutactic lattice:
In mathematics, a eutactic lattice (or eutactic form) is a lattice in Euclidean space whose minimal vectors form a eutactic star. This means they have a set of positive eutactic coefficients c_i such that (v, v) = Σ_i c_i (v, m_i)^2, where the sum is over the minimal vectors m_i. "Eutactic" is derived from the Greek, and means "well-situated" or "well-arranged".
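As an illustration of the defining identity, the square lattice Z² is eutactic: its four minimal vectors ±e₁, ±e₂ admit the eutactic coefficients c_i = 1/2. A short numerical check (this standard example, and the random test vectors, are not taken from the text above):

```python
import numpy as np

# Minimal vectors of the square lattice Z^2: +/-e1 and +/-e2.
minimal_vectors = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
# With c_i = 1/2, (v, v) = sum_i c_i (v, m_i)^2 holds for every v.
coefficients = np.full(4, 0.5)

rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.standard_normal(2)
    lhs = v @ v
    rhs = sum(c * (v @ m) ** 2 for c, m in zip(coefficients, minimal_vectors))
    assert np.isclose(lhs, rhs)
```

The same check with unequal coefficients fails, which is why the existence of *some* positive c_i is the defining condition.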
Eutactic lattice:
Voronoi (1908) proved that a lattice is extreme if and only if it is both perfect and eutactic.
Conway & Sloane (1988) summarize the properties of eutactic lattices of dimension up to 7. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Post hole digger**
Post hole digger:
A post hole clam-shell digger, also called post hole pincer or simply post hole digger, is a tool consisting of two articulated shovel-like blades, forming an incomplete hollow cylinder about a foot long and a few inches wide, with two long handles that can put the blades in an "open" (parallel) position or a "closed" (convergent) position.
Post hole digger:
The tool is used to dig holes in the ground, typically from a few inches to about a foot in diameter, for general purposes such as setting fence and sign posts or planting saplings. In operation, the tool is jabbed into the ground with the blades in the open position. The handles are then operated to close the blades, thus grabbing the portion of soil between them. The tool is then pulled out and the soil is deposited by the side. The process is repeated until the hole is deep enough. Dry or sandy soils sometimes will not stay secured between the blades.
Comparison with earth augers:
An earth auger is another tool used to dig holes in the ground, consisting of a rotating shaft with one or more blades attached at the lower end. A hand-powered auger is generally easier to use than a clam-shell digger, and can in principle dig deeper and better remove dry, sandy soils. It naturally creates a round, straight hole, but only of a fixed diameter. The shovel-like shape of a clam-shell-type digger allows it to dig holes of any shape and of any diameter greater than that of the open blades.
History and patent info:
Clam-shell-type post hole diggers seem to be a relatively recent invention, newer than earth augers. A patent was filed by J. Lawry of Lenoir City, Tennessee, in 1908. The patent has the traditional clam-shell design with an extra spike in the center. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Orders of magnitude (radiation)**
Orders of magnitude (radiation):
Recognized effects of higher acute radiation doses are described in more detail in the article on radiation poisoning. Although the International System of Units (SI) defines the sievert (Sv) as the unit of radiation dose equivalent, chronic radiation levels and standards are still often given in units of millirems (mrem), where 1 mrem equals 1/1,000 of a rem and 1 rem equals 0.01 Sv. Light radiation sickness begins at about 50–100 rad (0.5–1 gray (Gy), 0.5–1 Sv, 50–100 rem, 50,000–100,000 mrem).
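The unit relationships stated above (1 mrem = 1/1,000 rem, 1 rem = 0.01 Sv) can be captured in a small conversion helper; this sketch uses only those definitions and the quoted sickness threshold:

```python
def mrem_to_sievert(mrem: float) -> float:
    """Convert millirems to sieverts: 1 mrem = 0.001 rem, 1 rem = 0.01 Sv."""
    rem = mrem / 1000.0
    return rem * 0.01

# The threshold for light radiation sickness quoted above:
# 50,000-100,000 mrem corresponds to 0.5-1 Sv.
assert mrem_to_sievert(50_000) == 0.5
assert mrem_to_sievert(100_000) == 1.0
```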
Orders of magnitude (radiation):
The following table includes some dosages for comparison purposes, using millisieverts (mSv) (one thousandth of a sievert). The concept of radiation hormesis is relevant to this table – radiation hormesis is a hypothesis stating that the effects of a given acute dose may differ from the effects of an equal fractionated dose. Thus 100 mSv is considered twice in the table below – once as received over a 5-year period, and once as an acute dose, received over a short period of time, with differing predicted effects. The table describes doses and their official limits, rather than effects. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Timeline of the history of the scientific method**
Timeline of the history of the scientific method:
This timeline of the history of the scientific method shows an overview of the development of the scientific method up to the present time. For a detailed account, see History of the scientific method.
BC:
c.1600 BC – The Edwin Smith Papyrus, a unique ancient Egyptian text, contains practical and objective advice to physicians regarding the examination, diagnosis, treatment and prognosis of injuries and ailments. It provides evidence that medicine in Egypt was at this time practised as a quantifiable science.
624 – 548 BC – Thales of Miletus raises the study of nature from the realm of the mythical to the level of empirical study.
610 – 547 BC – The Greek philosopher Anaximander extends the idea of law from human society to the physical world, and is the first to use maps and models.
c.400 BC – In China, the philosopher Mozi founds the Mohist school of philosophy and introduces the 'three-prong method' for testing the truth or falsehood of statements.
c.400 BC – The Greek philosopher Democritus advocates inductive reasoning through a process of examining the causes of perceptions and drawing conclusions about the outside world.
c.400 BC – Plato provides the first detailed definitions of the concepts of idea, matter, form and appearance.
c.320 BC – Aristotle categorises and subdivides knowledge into physics, poetry, zoology, logic, rhetoric, politics, and biology. His Posterior Analytics defended the ideal of science as originating from known axioms. Aristotle believed that the world was real and that we can learn the truth by experience.
c.341-270 BC – Epicurus and his followers develop an epistemology as a result of their rivalry with other philosophical schools. His treatise Κανών ('Rule'), now lost, explained his methods of investigation and theory of knowledge.
c.300 BC – Euclid's Elements expounds geometry as a system of theorems following logically from axioms.
c.240 BC – The Greek polymath Eratosthenes calculates the circumference of the Earth to a remarkable degree of accuracy, using stadia, then a standard unit for measuring distances.
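Eratosthenes' calculation can be reproduced by simple proportion. The figures below (a shadow angle of about 7.2° at Alexandria while the Sun was overhead at Syene, roughly 5,000 stadia away) are the commonly cited values, not taken from the entry above:

```python
# If 7.2 degrees of shadow corresponds to 5,000 stadia of north-south
# distance, then the full 360-degree circumference follows by proportion.
shadow_angle_deg = 7.2
alexandria_to_syene_stadia = 5_000

circumference_stadia = alexandria_to_syene_stadia * (360 / shadow_angle_deg)
print(circumference_stadia)  # 250000.0 stadia
```

Since 7.2° is exactly 1/50 of a full circle, the result is 50 times the measured distance, the classical figure of 250,000 stadia.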
c.200 BC – The Great Library of Alexandria is built as part of a larger research institution called the Mouseion, with the intention that it becomes a collection of all Greek knowledge.
c.150 BC – The first chapter of the Book of Daniel describes an early (and flawed) version of a clinical trial proposed by the young Jewish noble Daniel, in which he and his three companions eat vegetables and water for ten days, rather than the royal food and wine.
1st–12th centuries:
c.90–168 – Ptolemy writes the astronomical treatise now known as the Almagest. His writings reveal his understanding of the scientific method, his recognition of the importance of both systematically ordered observations and hypotheses.
c. 800–900 – Early Muslim scientists such as al-Kindi (801–873) and the authors writing under the name of Jabir ibn Hayyan (died c. 806–816) start to put a greater emphasis on the use of experiment as a source of knowledge.
1021 – The astronomer, physicist and mathematician Ibn al-Haytham combines observations, experiments and rational arguments in his Book of Optics.
c. 1025 – The scholar al-Biruni develops experimental methods for mineralogy and mechanics, and conducts elaborate experiments related to astronomical phenomena.
1027 – In the treatise al-Burhân ('On Demonstration') of his book Kitāb al-Šifāʾ ('The Book of Healing'), the Persian polymath Ibn Sīnā (known in the Western world as Avicenna) censures the Aristotelian method of induction.
1200–1700:
1220–1235 – Robert Grosseteste, an English scholastic philosopher and theologian, later Bishop of Lincoln (1235–1253), publishes his Aristotelian commentaries, laying out the framework for the proper methods of science.
1265 – The English friar Roger Bacon, inspired by the writings of Robert Grosseteste, describes a scientific method based on a repeating cycle of observation, hypothesis, experimentation, and the need for independent verification. He records the manner in which he conducts his experiments in precise detail so that others can reproduce and independently test his results.
1327 – Ockham's razor appears, a principle which states that among competing hypotheses, the one with the fewest assumptions should be selected.
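Ockham's razor can be phrased operationally: among hypotheses that account for the observations equally well, select the one resting on the fewest assumptions. A toy sketch (the hypothesis names and assumption counts are invented for illustration):

```python
# Each candidate hypothesis records whether it fits the observations
# and how many independent assumptions it requires.
hypotheses = [
    {"name": "epicycles-on-epicycles", "fits_data": True, "assumptions": 12},
    {"name": "single-ellipse", "fits_data": True, "assumptions": 3},
    {"name": "no-motion", "fits_data": False, "assumptions": 1},
]

# Ockham's razor only breaks ties among hypotheses that fit the data;
# it never rescues a hypothesis that fails to explain the observations.
viable = [h for h in hypotheses if h["fits_data"]]
best = min(viable, key=lambda h: h["assumptions"])
print(best["name"])  # single-ellipse
```

Note that "no-motion" is cheapest but excluded first: parsimony is applied only after empirical adequacy.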
1408 – The Yongle Encyclopedia (Chinese: 永樂大典), the largest encyclopedia in book form ever made, is completed.
1581 – The sceptic Francisco Sanches uses classical sceptical arguments to show that science, in the Aristotelian sense of giving necessary reasons or causes for the behavior of nature, cannot be attained.
1581 – The Danish astronomer Tycho Brahe builds Uraniborg and Stjerneborg on the island of Ven. Research done in the fields of astronomy, alchemy, and meteorology by Tycho and his assistants produces high precision measurements of the planets.
1595 – The microscope is invented in the Netherlands.
1608 – Evidence of the earliest known telescope appears in the Netherlands, when a patent is submitted by Hans Lipperhey.
1609 – The first 'public chemical laboratory' is set up at the University of Marburg.
1620 – The Novum Organum, fully Novum Organum, sive indicia vera de Interpretatione Naturae ("New Organon, or true directions concerning the interpretation of nature"), a philosophical work by English philosopher and statesman Francis Bacon, is published.
1637 – The French philosopher, mathematician and scientist René Descartes publishes his Discourse on the Method of Rightly Conducting One's Reason and of Seeking Truth in the Sciences, an important work in the development of the natural sciences.
1200–1700:
1638 – Galileo's Discorsi e dimostrazioni matematiche intorno a due nuove scienze (commonly known as Two New Sciences), his scientific testament covering much of his work in physics over the preceding thirty years, is published. It contains two thought experiments, now referred to as his Leaning Tower of Pisa experiment and Galileo's ship, each invented to disprove a physical theory by showing that it has a contradictory consequence.
1660 – The world's oldest national scientific institution, the Royal Society, is founded in London. It establishes experimental evidence as the arbiter of truth.
1200–1700:
c.1665 – The British scientist Robert Boyle reveals his scientific methods in his writings, and recommends that a subject be generally researched before detailed experiments are undertaken; that results inconsistent with current theories be reported; that experiments be regarded as 'provisional' in nature; and that experiments be shown to be repeatable.
1665 – Academic journals are published for the first time, in France and Great Britain.
1675 – To encourage the publicising of new discoveries in science, the German-born Henry Oldenburg pioneers the practice now known as peer reviewing, by sending scientific manuscripts to experts to judge their quality.
1687 – Sir Isaac Newton's book Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), is first published. It laid the foundations of classical mechanics. Newton also made seminal contributions to optics, and shares credit with Gottfried Wilhelm Leibniz for developing the infinitesimal calculus.
1700–1900:
1739 – David Hume's Treatise of Human Nature argues that the problem of induction is unsolvable.
1753 – The first description of a controlled experiment using identical populations with only one variable is published, when James Lind, a Scottish doctor, conducts research into scurvy among sailors.
1763 – Reverend Thomas Bayes' An Essay towards solving a Problem in the Doctrine of Chances is published posthumously. The Essay laid the basis for Bayesian inference, used to update the probability estimate for a hypothesis as additional evidence is acquired.
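Bayesian inference, as described in the entry above, updates the probability of a hypothesis as evidence accumulates. A minimal sketch of the update rule (the coin-bias numbers are illustrative, not from the Essay):

```python
def update(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """Return P(H | E) via Bayes' theorem: posterior = P(E|H)P(H) / P(E)."""
    numerator = likelihood_e_given_h * prior_h
    evidence = numerator + likelihood_e_given_not_h * (1 - prior_h)
    return numerator / evidence

# Hypothesis H: the coin lands heads 80% of the time (vs. a fair coin).
p_h = 0.5
for _ in range(3):              # observe three heads in a row
    p_h = update(p_h, 0.8, 0.5)
print(round(p_h, 3))  # 0.804
```

Each observation feeds the previous posterior back in as the new prior, which is exactly the "additional evidence" mechanism the Essay laid the basis for.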
1812 – Hans Christian Ørsted formulates the Latin-German mixed term Gedankenexperiment, meaning 'thought experiment', a method used since antiquity.
1815 – An optimal design for polynomial regression is published by the French logician Joseph Diaz Gergonne.
1833, 1840 – William Whewell invents the term scientist (previously 'natural philosopher' or 'man of science'). In his Philosophy of the Inductive Sciences he coins the term "consilience", the principle that evidence from independent, unrelated sources can 'converge' to strong conclusions.
1877–1878 – The American scientist Charles Sanders Peirce writes his Illustrations of the Logic of Science. The work popularises his trichotomy of abduction, deduction and induction.
1885 – Peirce and Joseph Jastrow first describe blinded, randomized experiments.
1897 – The American geologist Thomas Chrowder Chamberlin proposes the use of multiple hypotheses to assist in the design of experiments.
1900–present:
1905 – The German-born theoretical physicist Albert Einstein proposes the theory of special relativity.
1926 – Randomized design is popularized and analyzed by the British statistician Ronald Fisher.
1934 – Falsifiability as a criterion for evaluating new hypotheses is popularized by Karl Popper's The Logic of Scientific Discovery.
1937 – The first complete placebo trial is undertaken. The American pharmacologist Harry Gold, studying the effect of xanthines on cardiac pain, alternates them with a placebo and shows them to be ineffective.
1946 – Work begins on the first computer simulation in history, a digital flight simulator developed by the Massachusetts Institute of Technology, for training bomber crews.
1950 – Research based on the double blind test is published for the first time, by Greiner et al.
1962 – The American physicist Thomas S. Kuhn publishes his book The Structure of Scientific Revolutions, which controversially challenged powerful and entrenched philosophical assumptions about the progress of science through history.
1964 – Strong inference—a model of scientific inquiry that emphasizes the need for alternative hypotheses—is proposed by the American physicist John R. Platt.
1976 – The British-born George E. P. Box, professor emeritus of statistics at the University of Wisconsin–Madison, publishes his journal article "Science and Statistics", which sets out a framework for the statistical modeling of phenomena and argues that models need only as much complexity as is appropriate.
2009 – Robot Scientist (also known as Adam) is created, the first machine in history to have discovered new scientific knowledge independently of its human creators.
2012 – Constructor theory, a proposal for a new mode of explanation in fundamental physics, is sketched out by the British physicist David Deutsch. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Capacity to be alone**
Capacity to be alone:
Capacity to be alone is a developmentally acquired ability, considered by object relations theory to be a key to creative living.
Julia Kristeva sees it as central to an authentic inner life, as well as to creative sublimations in life and art.
Conceptual development:
D. W. Winnicott in his article of that name (1958/64) highlighted the importance of the capacity to be alone, distinguishing it from both withdrawal and loneliness, and seeing it as derived from an internalisation of the non-intrusive background presence of a mothering figure. Winnicott in his writings always stressed the importance of the baby being allowed "just to lie back and float", and of the "opportunity that the baby has to experience separation without separation". Out of those early experiences emerges the capacity to be alone in (or out of) the presence of others - something which might have to be re-acquired later in life through psychotherapy. A later strand of analysis, drawing on the work on listening of Theodore Reik, has emphasised the importance of the analyst's capacity to be alone in the analytic situation - to remain centred in themselves in the face of the projections and resistances of the patient.
Creative adaptations:
André Green saw the fertile interaction of reading/writing as rooted in the capacity to be alone. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Meenhard Herlyn**
Meenhard Herlyn:
Meenhard Herlyn, D.V.M., D.Sc., is a researcher who works as director of The Wistar Institute Melanoma Research in Philadelphia. Herlyn obtained his D.V.M. degree from the University of Veterinary Medicine, Hanover, in 1970. Following that, in 1976, he earned a D.Sc. in medical microbiology from the University of Munich. In 1976, he joined The Wistar Institute as an associate scientist, focusing on the emerging field of monoclonal antibodies, a groundbreaking technology that now underlies a significant portion of targeted therapeutics. Transitioning to the role of assistant professor in 1981, Herlyn established a renowned laboratory dedicated to studying melanoma biology, which remains highly regarded in the field to this day. His primary research focus is the underlying biology of melanoma, the most aggressive form of skin cancer. Over the course of his career, he has been responsible for the use of three-dimensional artificial skin cultures to study tumor and normal cells, a clearer understanding of stem cells and how they relate to cancer, and of signaling pathways related to cancer. The Wistar Melanoma (WM) cell lines that Herlyn has used and helped discover in his laboratory are responsible for a better understanding of the major steps of tumor progression in human cases of melanoma. In 2003, Herlyn helped launch the first International Melanoma Research Congress in Philadelphia, the first scientific meeting specifically aimed at melanoma scientists. This was done in collaboration with the Foundation for Melanoma Research, prompted by his first direct interaction with a melanoma patient after more than two decades of research on the disease. He also established the Society for Melanoma Research in 2004 and served as its president for three years. One of the more recent discoveries made by Herlyn's lab found that the diabetes drug phenformin could be used to treat certain slow-growing, drug-resistant tumor cells in melanoma.
In December 2013, it was announced that Herlyn and his colleagues had received a five-year, $12.5 million program project grant (P01) to continue research on melanoma from multiple angles. In May 2013, L'Oréal Paris announced that Herlyn would lead the Melanoma Research Alliance Team Science Award, which provides more than $750,000 over three years to research the prevention, treatment, and potential cures for melanoma. He is a member of the editorial board of Cancer and Metastasis Reviews.
Select publications:
Balaburski, G.M., Leu, J.I., Beeharry, N., Hayik, S., Andrake, M.D., Zhang, G., Herlyn, M., Villanueva, J., Dunbrack, R.L., Yen, T., George, D.L., Murphy, M.E.: A modified HSP70 inhibitor shows broad activity as an anticancer agent. Mol. Cancer Res. 11:219-229, 2013.
Krepler, C., Chunduru, S.K., Halloran, M.B., He, X., Xiao, M., Vultur, A., Villanueva, J., Mitsuuchi, Y., Neiman, E.M., Benetatos, C., Nathanson, K.L., Amaravadi, R.K., Pehamberger, H., McKinlay, M., and Herlyn, M.: The novel SMAC inhibitor birinapant exhibits potent activity against human melanoma cells. Clin. Cancer Res. 19: 1784-1794, 2013.
Desai, B.M., Villanueva, J., Nguyen, T-T.K., Lioni, M., Xiao, M., Kong, J., Krepler, C., Vultur, A., Flaherty, K.T., Nathanson, K.L., Smalley, K.S.M., Herlyn, M.: The anti-melanoma activity of dinaciclib, a cyclin-dependent kinase inhibitor, is dependent on p53 signaling. PLoS One, 8(3):e59588 doi:10.1371/journal.pone.0059588, 2013.
Vultur, A., Villanueva, J., Krepler, C., Rajan, G., Chen, Q., Li, L., Gimotty, P., Wilson, M., Hayden, J., Keeney, F., Nathanson, K.L., Herlyn, M. MEK inhibition affects STAT3 signaling and invasion in human melanoma cell lines. Oncogene April 29 [Epub ahead of print], 2013.
Aird, K.M., Zhang, G., Li, H., Tu, Z., Bitler, B.G., Garipov, A., Wu, H., Wei, Z., Wagner, S.N., Herlyn, M., Zhang, R. Suppression of nucleotide metabolism underlies the establishment and maintenance of oncogene-induced senescence. Cell Rep. 3:1-4, 2013.
**Dissolved organic carbon**
Dissolved organic carbon:
Dissolved organic carbon (DOC) is the fraction of organic carbon operationally defined as that which can pass through a filter with a pore size typically between 0.22 and 0.7 micrometers. The fraction remaining on the filter is called particulate organic carbon (POC). Dissolved organic matter (DOM) is a closely related term often used interchangeably with DOC. While DOC refers specifically to the mass of carbon in the dissolved organic material, DOM refers to the total mass of the dissolved organic matter. So DOM also includes the mass of other elements present in the organic material, such as nitrogen, oxygen and hydrogen. DOC is a component of DOM and there is typically about twice as much DOM as DOC. Many statements that can be made about DOC apply equally to DOM, and vice versa.
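The rough two-to-one relationship between DOM and DOC described above can be expressed as a one-line conversion. The carbon fraction used here (~0.5, i.e. carbon making up about half the mass of the organic matter) is an assumption consistent with the article's rule of thumb, not a measured constant:

```python
# Hypothetical illustration of the DOC/DOM relationship: DOC is the carbon
# mass within DOM, and DOM is roughly twice DOC, implying a carbon fraction
# of about 0.5. Both the function name and the default fraction are assumptions.

def doc_from_dom(dom_mg_per_l: float, carbon_fraction: float = 0.5) -> float:
    """Estimate DOC (mg C/L) from a DOM concentration (mg/L)."""
    return dom_mg_per_l * carbon_fraction

print(doc_from_dom(4.0))  # 2.0 mg C/L for a 4.0 mg/L DOM sample
```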
Dissolved organic carbon:
DOC is abundant in marine and freshwater systems and is one of the greatest cycled reservoirs of organic matter on Earth, accounting for the same amount of carbon as in the atmosphere and up to 20% of all organic carbon. In general, organic carbon compounds are the result of decomposition processes from dead organic matter including plants and animals. DOC can originate from within or outside any given body of water. DOC originating from within the body of water is known as autochthonous DOC and typically comes from aquatic plants or algae, while DOC originating outside the body of water is known as allochthonous DOC and typically comes from soils or terrestrial plants. When water originates from land areas with a high proportion of organic soils, these components can drain into rivers and lakes as DOC.
Dissolved organic carbon:
The marine DOC pool is important for the functioning of marine ecosystems because it lies at the interface between the chemical and the biological worlds. DOC fuels marine food webs and is a major component of the Earth's carbon cycle.
Overview:
DOC is a basic nutrient, supporting the growth of microorganisms, and plays an important role in the global carbon cycle through the microbial loop. In some organisms (or life stages) that do not feed in the traditional sense, dissolved matter may be the only external food source. Moreover, DOC is an indicator of organic loadings in streams, as well as supporting terrestrial processing (e.g., within soil, forests, and wetlands) of organic matter. Dissolved organic carbon has a high proportion of biodegradable dissolved organic carbon (BDOC) in first-order streams compared to higher-order streams. In the absence of extensive wetlands, bogs, or swamps, baseflow concentrations of DOC in undisturbed watersheds generally range from approximately 1 to 20 mg/L carbon. Carbon concentrations vary considerably across ecosystems: the Everglades, for example, may be near the top of the range and the middle of the oceans near the bottom. Occasionally, high concentrations of organic carbon indicate anthropogenic influences, but most DOC originates naturally.
The BDOC fraction consists of organic molecules that heterotrophic bacteria can use as a source of energy and carbon. Some subset of DOC constitutes the precursors of disinfection byproducts in drinking water, and BDOC can contribute to undesirable biological regrowth within water distribution systems.
The dissolved fraction of total organic carbon (TOC) is an operational classification. Many researchers use the term "dissolved" for compounds that pass through a 0.45 μm filter, but 0.22 μm filters have also been used to remove higher colloidal concentrations. A practical definition of dissolved typically used in marine chemistry is all substances that pass through a GF/F filter, which has a nominal pore size of approximately 0.7 μm (Whatman glass microfiber filter, 0.6–0.8 μm particle retention).
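Because "dissolved" is an operational definition, the same colloid can count as DOC under one protocol and POC under another. A minimal sketch, with the pore sizes taken from the text; the strict less-than cutoff is an assumption about how a given lab applies the filter criterion:

```python
# Operational DOC/POC classification: whatever passes the chosen filter is
# "dissolved", the rest is "particulate". Pore sizes 0.22, 0.45, and 0.7 um
# are those mentioned in the article.

def classify(particle_um: float, pore_um: float = 0.45) -> str:
    """Classify organic carbon by filter pore size (operational definition)."""
    return "DOC" if particle_um < pore_um else "POC"

# A 0.3 um colloid falls on either side depending on the filter used:
print(classify(0.3, pore_um=0.45))  # DOC
print(classify(0.3, pore_um=0.22))  # POC
```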
The recommended procedure is the HTCO technique, which calls for filtration through pre-combusted glass fiber filters, typically the GF/F classification.
Overview:
Labile and recalcitrant Dissolved organic matter can be classified as labile or as recalcitrant, depending on its reactivity. Recalcitrant DOC is also called refractory DOC, and the two terms are used interchangeably in the context of DOC. Depending on the origin and composition of DOC, its behavior and cycling differ: the labile fraction of DOC decomposes rapidly through microbially or photochemically mediated processes, whereas refractory DOC is resistant to degradation and can persist in the ocean for millennia. In the coastal ocean, organic matter from terrestrial plant litter or soils appears to be more refractory and thus often behaves conservatively. In addition, refractory DOC is produced in the ocean by the bacterial transformation of labile DOC, which reshapes its composition. Due to continuous production and degradation in natural systems, the DOC pool contains a spectrum of reactive compounds, each with its own reactivity, which have been divided into fractions from labile to recalcitrant depending on their turnover times, as shown in the following table...
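The turnover times that distinguish these fractions can be read as first-order decay constants. The sketch below is conceptual only; the two turnover times are invented stand-ins for a labile pool (days) and a refractory pool (millennia), not values from the table:

```python
import math

# First-order decay sketch linking "turnover time" to persistence:
# fraction remaining after time t is exp(-t / tau). The tau values below
# are illustrative assumptions, not measured turnover times.

def fraction_remaining(t_years: float, turnover_years: float) -> float:
    return math.exp(-t_years / turnover_years)

labile = fraction_remaining(1.0, 0.01)      # tau ~ days: essentially gone after a year
refractory = fraction_remaining(1.0, 5000)  # tau ~ millennia: essentially intact
print(f"{labile:.2e}, {refractory:.4f}")
```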
Overview:
This wide range in turnover or degradation times has been linked with chemical composition, structure and molecular size, but degradation also depends on the environmental conditions (e.g., nutrients), prokaryote diversity, redox state, iron availability, mineral-particle associations, temperature, sunlight exposure, biological production of recalcitrant compounds, and the effect of priming or dilution of individual molecules. For example, lignin can be degraded in aerobic soils but is relatively recalcitrant in anoxic marine sediments. This example shows that bioavailability varies as a function of the ecosystem's properties. Accordingly, even normally ancient and recalcitrant compounds, such as petroleum and carboxyl-rich alicyclic molecules, can be degraded in the appropriate environmental setting.
Terrestrial ecosystems:
Soil Dissolved organic matter (DOM) is one of the most active and mobile carbon pools and has an important role in global carbon cycling. In addition, dissolved organic carbon (DOC) affects the soil's negative electrical charges, the denitrification process, acid-base reactions in the soil solution, retention and translocation of nutrients (cations), and immobilization of heavy metals and xenobiotics. Soil DOM can be derived from different sources (inputs), such as atmospheric carbon dissolved in rainfall, litter and crop residues, manure, root exudates, and decomposition of soil organic matter (SOM). In the soil, DOM availability depends on its interactions with mineral components (e.g., clays, Fe and Al oxides), modulated by adsorption and desorption processes. It also depends on SOM fractions (e.g., stabilized organic molecules and microbial biomass) through mineralization and immobilization processes. In addition, the intensity of these interactions changes according to inherent soil properties, land use, and crop management.
During the decomposition of organic material, most carbon is lost as CO2 to the atmosphere by microbial oxidation. Soil type and landscape slope, leaching, and runoff are also important processes associated with DOM losses in the soil. In well-drained soils, leached DOC can reach the water table and release nutrients and pollutants that can contaminate groundwater, whereas runoff transports DOM and xenobiotics to other areas, rivers, and lakes.
Terrestrial ecosystems:
Groundwater Precipitation and surface water leach dissolved organic carbon (DOC) from vegetation and plant litter, and it percolates through the soil column to the saturated zone. The concentration, composition, and bioavailability of DOC are altered during transport through the soil column by various physicochemical and biological processes, including sorption, desorption, biodegradation, and biosynthesis. Hydrophobic molecules are preferentially partitioned onto soil minerals and have a longer retention time in soils than hydrophilic molecules. The hydrophobicity and retention time of colloids and dissolved molecules in soils are controlled by their size, polarity, charge, and bioavailability. Bioavailable DOM is subjected to microbial decomposition, resulting in a reduction in size and molecular weight. Novel molecules are synthesized by soil microbes, and some of these metabolites enter the DOC reservoir in groundwater.
Terrestrial ecosystems:
Freshwater ecosystems Aquatic carbon occurs in different forms. Firstly, a division is made between organic and inorganic carbon. Organic carbon is a mixture of organic compounds originating from detritus or primary producers. It can be divided into POC (particulate organic carbon; particles > 0.45 μm) and DOC (dissolved organic carbon; particles < 0.45 μm). DOC usually makes up 90% of the total amount of aquatic organic carbon. Its concentration ranges from 0.1 to >300 mg L−1.
Likewise, inorganic carbon also consists of a particulate (PIC) and a dissolved (DIC) phase. PIC mainly consists of carbonates (e.g., CaCO3), while DIC consists of carbonate (CO32−), bicarbonate (HCO3−), CO2, and a negligibly small fraction of carbonic acid (H2CO3). The inorganic carbon compounds exist in an equilibrium that depends on the pH of the water. DIC concentrations in freshwater range from about zero in acidic waters to 60 mg C L−1 in areas with carbonate-rich sediments.
POC can be degraded to form DOC, and DOC can become POC by flocculation. Inorganic and organic carbon are linked through aquatic organisms. CO2 is used in photosynthesis (P) by, for instance, macrophytes, produced by respiration (R), and exchanged with the atmosphere. Organic carbon is produced by organisms and is released during and after their life; in rivers, for example, 1–20% of the total amount of DOC is produced by macrophytes. Carbon can enter the system from the catchment and is transported to the oceans by rivers and streams. There is also exchange with carbon in the sediments, e.g., burial of organic carbon, which is important for carbon sequestration in aquatic habitats.
Aquatic systems are very important in global carbon sequestration; when different European ecosystems are compared, inland aquatic systems form the second largest carbon sink (19–41 Tg C y−1); only forests take up more carbon (125–223 Tg C y−1).
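The pH-dependent equilibrium among the DIC species can be sketched with standard carbonate-system chemistry. The equilibrium constants below (pK1 ≈ 6.35, pK2 ≈ 10.33 at 25 °C in fresh water) are textbook assumptions, not values from this article, and shift with temperature and ionic strength:

```python
# Fraction of DIC present as dissolved CO2, bicarbonate, and carbonate,
# as a function of pH (a standard Bjerrum-plot calculation).

def dic_fractions(pH: float, pK1: float = 6.35, pK2: float = 10.33):
    h = 10 ** -pH
    k1, k2 = 10 ** -pK1, 10 ** -pK2
    denom = h * h + k1 * h + k1 * k2
    return (h * h / denom,    # CO2 (aq) + H2CO3
            k1 * h / denom,   # bicarbonate, HCO3-
            k1 * k2 / denom)  # carbonate, CO3 2-

co2, hco3, co3 = dic_fractions(8.1)  # typical seawater-like pH
print(f"CO2 {co2:.3f}, HCO3- {hco3:.3f}, CO3-- {co3:.3f}")
```

At near-neutral to slightly basic pH, bicarbonate dominates, consistent with the article's note that carbonic acid is a negligibly small fraction.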
Marine ecosystems:
Sources In marine systems DOC originates from either autochthonous or allochthonous sources. Autochthonous DOC is produced within the system, primarily by plankton organisms and, in coastal waters, additionally by benthic microalgae, benthic fluxes, and macrophytes, whereas allochthonous DOC is mainly of terrestrial origin supplemented by groundwater and atmospheric inputs. In addition to soil-derived humic substances, terrestrial DOC also includes material leached from plants and exported during rain events, emissions of plant materials to the atmosphere and their deposition in aquatic environments (e.g., volatile organic carbon and pollens), and thousands of synthetic human-made organic chemicals that can be measured in the ocean at trace concentrations.
Dissolved organic carbon (DOC) represents one of the Earth's major carbon pools. It contains a similar amount of carbon as the atmosphere and exceeds the amount of carbon bound in marine biomass by more than two hundred times. DOC is mainly produced in the near-surface layers during primary production and zooplankton grazing processes. Other sources of marine DOC are dissolution from particles, terrestrial and hydrothermal vent input, and microbial production. Prokaryotes (bacteria and archaea) contribute to the DOC pool via release of capsular material, exopolymers, and hydrolytic enzymes, as well as via mortality (e.g., the viral shunt). Prokaryotes are also the main decomposers of DOC, although for some of the most recalcitrant forms of DOC very slow abiotic degradation in hydrothermal systems, or possibly sorption to sinking particles, may be the main removal mechanism. Mechanistic knowledge about DOC-microbe interactions is crucial to understanding the cycling and distribution of this active carbon reservoir.
Marine ecosystems:
Phytoplankton Phytoplankton produce DOC by extracellular release, commonly accounting for between 5 and 30% of their total primary production, although this varies from species to species. This release of extracellular DOC is enhanced under high light and low nutrient levels, and thus should increase relatively from eutrophic to oligotrophic areas, probably as a mechanism for dissipating cellular energy. Phytoplankton can also produce DOC by autolysis during physiological stress situations, e.g., nutrient limitation. Other studies have demonstrated DOC production in association with meso- and macro-zooplankton feeding on phytoplankton and bacteria.
Marine ecosystems:
Zooplankton Zooplankton-mediated release of DOC occurs through sloppy feeding, excretion and defecation which can be important energy sources for microbes. Such DOC production is largest during periods with high food concentration and dominance of large zooplankton species.
Marine ecosystems:
Bacteria and viruses Bacteria are often viewed as the main consumers of DOC, but they can also produce DOC during cell division and viral lysis. The biochemical components of bacteria are largely the same as other organisms, but some compounds from the cell wall are unique and are used to trace bacterial derived DOC (e.g., peptidoglycan). These compounds are widely distributed in the ocean, suggesting that bacterial DOC production could be important in marine systems. Viruses are the most abundant life forms in the oceans infecting all life forms including algae, bacteria and zooplankton. After infection, the virus either enters a dormant (lysogenic) or productive (lytic) state. The lytic cycle causes disruption of the cell(s) and release of DOC.
Marine ecosystems:
Macrophytes Marine macrophytes (i.e., macroalgae and seagrass) are highly productive and extend over large areas in coastal waters but their production of DOC has not received much attention. Macrophytes release DOC during growth with a conservative estimate (excluding release from decaying tissues) suggesting that macroalgae release between 1-39% of their gross primary production, while seagrasses release less than 5% as DOC of their gross primary production. The released DOC has been shown to be rich in carbohydrates, with rates depending on temperature and light availability. Globally the macrophyte communities have been suggested to produce ~160 Tg C yr−1 of DOC, which is approximately half the annual global river DOC input (250 Tg C yr−1).
Marine ecosystems:
Marine sediments Marine sediments represent the main sites of OM degradation and burial in the ocean, hosting microbes in densities up to 1000 times higher than found in the water column. The DOC concentrations in sediments are often an order of magnitude higher than in the overlying water column. This concentration difference results in a continued diffusive flux and suggests that sediments are a major DOC source, releasing 350 Tg C yr−1, which is comparable to the input of DOC from rivers. This estimate is based on calculated diffusive fluxes and does not include resuspension events, which also release DOC, so the estimate could be conservative. Some studies have also shown that geothermal systems and petroleum seepage contribute pre-aged DOC to the deep ocean basins, but consistent global estimates of the overall input are currently lacking. Globally, groundwaters account for an unknown part of the freshwater DOC flux to the oceans. The DOC in groundwater is a mixture of terrestrial, infiltrated marine, and in situ microbially produced material. This flux of DOC to coastal waters could be important, as concentrations in groundwater are generally higher than in coastal seawater, but reliable global estimates are also currently lacking.
Marine ecosystems:
Sinks The main processes that remove DOC from the ocean water column are: (1) Thermal degradation in e.g., submarine hydrothermal systems; (2) bubble coagulation and abiotic flocculation into microparticles or sorption to particles; (3) abiotic degradation via photochemical reactions; and (4) biotic degradation by heterotrophic marine prokaryotes. It has been suggested that the combined effects of photochemical and microbial degradation represent the major sinks of DOC.
Marine ecosystems:
Thermal degradation Thermal degradation of DOC has been found at high-temperature hydrothermal ridge-flanks, where outflow DOC concentrations are lower than in the inflow. While the global impact of these processes has not been investigated, current data suggest it is a minor DOC sink. Abiotic DOC flocculation is often observed during rapid (minutes) shifts in salinity when fresh and marine waters mix. Flocculation changes the DOC chemical composition, by removing humic compounds and reducing molecular size, transforming DOC to particulate organic flocs which can sediment and/or be consumed by grazers and filter feeders, but it also stimulates the bacterial degradation of the flocculated DOC. The impacts of flocculation on the removal of DOC from coastal waters are highly variable with some studies suggesting it can remove up to 30% of the DOC pool, while others find much lower values (3–6%;). Such differences could be explained by seasonal and system differences in the DOC chemical composition, pH, metallic cation concentration, microbial reactivity, and ionic strength.
Marine ecosystems:
CDOM The colored fraction of DOC (CDOM) absorbs light in the blue and UV-light range and therefore influences plankton productivity both negatively by absorbing light, that otherwise would be available for photosynthesis, and positively by protecting plankton organisms from harmful UV-light. However, as the impact of UV damage and ability to repair is extremely variable, there is no consensus on how UV-light changes might impact overall plankton communities. The CDOM absorption of light initiates a complex range of photochemical processes, which can impact nutrient, trace metal and DOC chemical composition, and promote DOC degradation.
Marine ecosystems:
Photodegradation Photodegradation involves the transformation of CDOM into smaller and less colored molecules (e.g., organic acids), into inorganic carbon (CO, CO2), and into nutrient salts (NH4+, HPO42−). Photodegradation thus generally transforms recalcitrant into labile DOC molecules that can be rapidly used by prokaryotes for biomass production and respiration. However, it can also increase CDOM through the transformation of compounds such as triglycerides into more complex aromatic compounds, which are less degradable by microbes. Moreover, UV radiation can produce, for example, reactive oxygen species, which are harmful to microbes. The impact of photochemical processes on the DOC pool also depends on its chemical composition, with some studies suggesting that recently produced autochthonous DOC becomes less bioavailable while allochthonous DOC becomes more bioavailable to prokaryotes after sunlight exposure, albeit others have found the contrary. Photochemical reactions are particularly important in coastal waters, which receive high loads of terrestrially derived CDOM, with an estimated ~20–30% of terrestrial DOC being rapidly photodegraded and consumed. Global estimates also suggest that in marine systems photodegradation of DOC produces ~180 Tg C yr−1 of inorganic carbon, with an additional 100 Tg C yr−1 of DOC made more available to microbial degradation. Another attempt at global ocean estimates suggests that photodegradation (210 Tg C yr−1) is approximately the same as the annual global input of riverine DOC (250 Tg C yr−1), while others suggest that direct photodegradation exceeds the riverine DOC inputs.
Marine ecosystems:
Recalcitrant DOC DOC is conceptually divided into labile DOC, which is rapidly taken up by heterotrophic microbes, and the recalcitrant DOC reservoir, which has accumulated in the ocean (following a definition by Hansell). As a consequence of its recalcitrance, the accumulated DOC reaches average radiocarbon ages between 1,000 and 4,000 years in surface waters, and between 3,000 and 6,000 years in the deep ocean, indicating that it persists through several deep ocean mixing cycles between 300 and 1,400 years each. Behind these average radiocarbon ages, a large spectrum of ages is hidden. Follett et al. showed DOC comprises a fraction of modern radiocarbon age, as well as DOC reaching radiocarbon ages of up to 12,000 years.
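The radiocarbon ages quoted above relate to measured 14C content via the conventional radiocarbon-age relation, age = −8033 · ln(Fm), where 8033 yr is the Libby mean life and Fm is the fraction modern. The example Fm of 0.6 below is an illustrative assumption chosen to land in the deep-ocean age range the text cites:

```python
import math

# Conventional radiocarbon age from fraction modern (Fm), and the inverse.
LIBBY_MEAN_LIFE = 8033.0  # years

def radiocarbon_age(fraction_modern: float) -> float:
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

def fraction_modern(age_years: float) -> float:
    return math.exp(-age_years / LIBBY_MEAN_LIFE)

# An assumed deep-ocean-like Fm of 0.6 corresponds to roughly 4,100 yr,
# within the 3,000-6,000 yr range quoted for deep-ocean DOC.
print(round(radiocarbon_age(0.6)))
```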
Marine ecosystems:
Distribution More precise measurement techniques developed in the late 1990s have allowed a good understanding of how dissolved organic carbon is distributed in marine environments, both vertically and across the surface. It is now understood that dissolved organic carbon in the ocean spans a range from very labile to very recalcitrant (refractory). The labile dissolved organic carbon is mainly produced by marine organisms and is consumed in the surface ocean; it consists of sugars, proteins, and other compounds that are easily used by marine bacteria. Recalcitrant dissolved organic carbon is evenly spread throughout the water column and consists of high-molecular-weight, structurally complex compounds that are difficult for marine organisms to use, such as lignin, pollen, or humic acids. As a result, the observed vertical distribution consists of high concentrations of labile DOC in the upper water column and low concentrations at depth.
Marine ecosystems:
In addition to vertical distributions, horizontal distributions have been modeled and sampled as well. In the surface ocean at a depth of 30 meters, the highest dissolved organic carbon concentrations are found in the South Pacific Gyre, the South Atlantic Gyre, and the Indian Ocean. At a depth of 3,000 meters, the highest concentrations are in the North Atlantic Deep Water, where dissolved organic carbon from the high-concentration surface ocean is removed to depth, while in the northern Indian Ocean high DOC is observed due to high freshwater flux and sediments. Since the time scales of horizontal motion along the ocean bottom are in the thousands of years, the refractory dissolved organic carbon is slowly consumed on its way from the North Atlantic and reaches a minimum in the North Pacific.
Marine ecosystems:
As emergent Dissolved organic matter is a heterogeneous pool of thousands, likely millions, of organic compounds. These compounds differ not only in composition and concentration (from pM to μM), but also originate from various organisms (phytoplankton, zooplankton, and bacteria) and environments (terrestrial vegetation and soils, coastal fringe ecosystems) and may have been produced recently or thousands of years ago. Moreover, even organic compounds deriving from the same source and of the same age may have been subjected to different processing histories prior to accumulating within the same pool of DOM.
Interior ocean DOM is a highly modified fraction that remains after years of exposure to sunlight, utilization by heterotrophs, flocculation and coagulation, and interaction with particles. Many of these processes within the DOM pool are compound- or class-specific. For example, condensed aromatic compounds are highly photosensitive, whereas proteins, carbohydrates, and their monomers are readily taken up by bacteria. Microbes and other consumers are selective in the type of DOM they utilize and typically prefer certain organic compounds over others. Consequently, DOM becomes less reactive as it is continually reworked. Said another way, the DOM pool becomes less labile and more refractory with degradation. As it is reworked, organic compounds are continually being added to the bulk DOM pool by physical mixing, exchange with particles, and/or production of organic molecules by the consumer community. As such, the compositional changes that occur during degradation are more complex than the simple removal of more labile components and resultant accumulation of remaining, less labile compounds. Dissolved organic matter recalcitrance (i.e., its overall reactivity toward degradation and/or utilization) is therefore an emergent property.
The perception of DOM recalcitrance changes during organic matter degradation and in conjunction with any other process that removes or adds organic compounds to the DOM pool under consideration. The surprising resistance of high concentrations of DOC to microbial degradation has been addressed by several hypotheses. The prevalent notion is that the recalcitrant fraction of DOC has certain chemical properties which prevent decomposition by microbes (the "intrinsic stability hypothesis"). An alternative or additional explanation is given by the "dilution hypothesis": all compounds are labile but exist in concentrations individually too low to sustain microbial populations, though collectively they form a large pool. The dilution hypothesis has found support in recent experimental and theoretical studies.
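The dilution hypothesis lends itself to a toy calculation: many compounds, each individually below a microbial uptake threshold, can still sum to a substantial pool. All numbers below (compound count, per-compound concentration, threshold) are purely hypothetical:

```python
# Toy illustration of the "dilution hypothesis": no single compound crosses
# the (assumed) concentration threshold needed to sustain a microbial
# population, yet the aggregate pool is large.

n_compounds = 100_000
per_compound_nM = 0.001    # each compound at 1 pM: individually "too dilute"
uptake_threshold_nM = 1.0  # hypothetical minimum for net microbial growth

total_nM = n_compounds * per_compound_nM
individually_usable = per_compound_nM >= uptake_threshold_nM

print(total_nM)              # 100.0 nM in aggregate
print(individually_usable)   # False: each compound is below the threshold
```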
DOM isolation and analysis:
DOM occurs in nature at concentrations too low for direct analysis with NMR or MS. Moreover, DOM samples often contain high concentrations of inorganic salts that are incompatible with such techniques. A concentration and isolation step is therefore necessary. The most widely used isolation techniques are ultrafiltration, reverse osmosis, and solid-phase extraction; among them, solid-phase extraction is considered the cheapest and easiest.
**Speech tempo**
Speech tempo:
Speech tempo is a measure of the number of speech units of a given type produced within a given amount of time. Speech tempo is believed to vary within the speech of one person according to contextual and emotional factors, between speakers and also between different languages and dialects. However, there are many problems involved in investigating this variance scientifically.
Problems of definition:
While most people seem to believe that they can judge how quickly someone is speaking, it is generally said that subjective judgements and opinions cannot serve as scientific evidence for statements about speech tempo; John Laver has written that analyzing tempo can be "dangerously open to subjective bias ... listeners' judgements rapidly begin to lose objectivity when the utterance concerned comes either from an unfamiliar accent or ... from an unfamiliar language". Scientific observation depends on accurate segmenting of recorded speech along the time course of an utterance, usually using one of the acoustic analysis software tools available on the internet such as Audacity or, specifically for speech research, Praat, Intelligent Speech Analyser and SIL Speech Analyzer.
Problems of definition:
Measurements of speech tempo can be strongly affected by pauses and hesitations. For this reason, it is usual to distinguish between speech tempo including pauses and hesitations and speech tempo excluding them. The former is called speaking rate and the latter articulation rate. Various units of speech have been used as a basis for measurement. The traditional measure of speed in typing and Morse code transmission has been words per minute (wpm). However, in the study of speech the word is not well defined (being primarily a unit of grammar), and speech is not usually temporally stable over a period as long as a minute. Many studies have used the measure of syllables per second, but this is not completely reliable because, although the syllable as a phonological unit of a given language is well-defined, it is not always possible to get agreement on the phonetic syllable. For example, the English word 'particularly' in the form in which it occurs in dictionaries is, phonologically speaking, composed of five syllables /pə.tɪk.jə.lə.li/. Phonetic realizations of the word, however, may be heard as comprising five [pə.tɪk.jə.lə.li], four [pə.tɪk.jə.li], three [pə.tɪk.li] or even two syllables [ptɪk.li], and listeners are likely to have different opinions about the number of syllables heard.
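The speaking-rate/articulation-rate distinction amounts to whether pause time enters the denominator. A minimal sketch with invented timings:

```python
# Speaking rate includes pauses and hesitations in the elapsed time;
# articulation rate excludes them. The utterance timings are invented
# purely for illustration.

def rates(n_syllables: int, speech_sec: float, pause_sec: float):
    speaking = n_syllables / (speech_sec + pause_sec)  # pauses included
    articulation = n_syllables / speech_sec            # pauses excluded
    return speaking, articulation

spk, art = rates(n_syllables=20, speech_sec=4.0, pause_sec=1.0)
print(f"speaking {spk:.1f} syl/s, articulation {art:.1f} syl/s")
```

The articulation rate is always at least as high as the speaking rate, which is why the two must not be compared across studies without checking which was measured.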
Problems of definition:
An alternative measure that has been proposed is that of sounds per second. One study found rates varying from an average of 9.4 sounds per second for poetry reading to 13.83 per second for sports commentary. The problem with this approach is that the researcher must be clear as to whether the "sounds" s/he is counting are phonemes or physically observable phonetic units (sometimes called "phones"). As an example, the utterance 'Don't forget to record it' might in slow, careful speech be pronounced /dəʊnt fəget tə rɪkɔːd ɪt/, with 19 phonemes, each of which is phonetically realized. When the sentence is said at high speed it might be pronounced as [də̃ʊ̃ʔ fɡeʔtrɪkɔːd ɪt], with 16 units. If we are counting only units that can be observed and measured, it is clear that at faster speeds of utterance the number of sounds produced per second does not necessarily increase.
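The arithmetic behind this point can be made explicit: if faster speech also drops segments, sounds per second need not rise in proportion to perceived speed. The segment counts (19 careful, 16 fast) come from the example above; the durations are invented:

```python
# Sounds-per-second comparison for the 'Don't forget to record it' example.
# 19 phonetically realized units in careful speech vs 16 in fast speech;
# the 2.0 s and 1.5 s durations are hypothetical.

careful = 19 / 2.0  # 9.5 sounds/s in slow, careful speech
fast = 16 / 1.5     # ~10.7 sounds/s at high speed: only a modest increase
print(round(careful, 1), round(fast, 1))
```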
Within-speaker variability:
Speakers vary their speed of speaking according to contextual and physical factors. A typical speaking rate for English is 4 syllables per second, but in different emotional or social contexts the rate may vary, one study reporting a range between 3.3 and 5.9 syl/sec. Another study found significant differences in speaking rate between story-telling and taking part in an interview. Speech tempo may be regarded as one of the components of prosody. Possibly the most detailed analytical framework for the role of tempo in English prosody is that of David Crystal. His system, which uses terms mostly borrowed from musical usage, allows for simple variation away from normal in tempo, where monosyllables may be pronounced as "clipped", "drawled" or "held" and polysyllabic utterances may be spoken at "allegro", "allegrissimo", "lento" and "lentissimo". Complex variation includes "accelerando" and "rallentando". Crystal claims that "tempo has probably the most highly discrete grammatical function of all prosodic parameters other than pitch ...". He cites from his corpus-based analysis instances of increased tempo in cases of speakers' self-corrections of speech errors, and in citing embedded material in the form of titles and names, e.g. "I'm sorry, but we won't be able to start So you think you know what's happening for a few moments" and "This is the I'll show you a picture and you tell me what it is technique" (where the italicized text is spoken at faster tempo).
Between-language differences:
Subjective impressions of tempo differences between different languages and dialects are difficult to substantiate with scientific data. Counting syllables per second will result in differences caused by the different syllable structures found in different languages; many languages have a predominantly CV (consonant+vowel) syllable structure while English syllables may begin with up to 3 consonants and end with up to 4. Consequently, it is likely that a Japanese speaker can produce more syllables in their language per second than an English speaker can in theirs. Counting sounds per second is also problematic for the reason mentioned above, i.e. that the researcher needs to be sure what objects it is that they are counting.
Between-language differences:
Howard Giles has studied the relationship between perceived tempo and perceived competence of speakers of different accents of English, and found a positive linear relationship between the two (i.e. people who speak faster are perceived as more competent). Osser and Peng counted sounds per second for Japanese and English and found no significant difference. The study by Kowal et al., referred to above, comparing story-telling with speaking in an interview, looked at English, Finnish, French, German and Spanish. They found no significant differences in rate between the languages, but highly significant differences between the speaking styles. Similarly, Barik found that differences in tempo between French and English were due to speaking style rather than to the language. From the point of view of the perception of tempo differences between languages, Vaane used spoken Dutch, English, French, Spanish and Arabic produced at three different rates and found that untrained and phonetically trained listeners performed equally well at judging the rate of speaking for familiar and unfamiliar languages. In the absence of reliable evidence to support it, it seems that the widespread view that some languages are spoken more rapidly than others is an illusion. This illusion may well be related to other factors such as differences of rhythm and pausing. In another study, an analysis of speech rate and perception in radio bulletins, the average rate of bulletins varied from 168 words per minute (English, BBC) to 210 words per minute (Spanish, RNE). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cut (music)**
Cut (music):
In African American music, and in deejaying and turntablism, a cut "overtly insists on the repetitive nature of the music, by abruptly skipping it back to another beginning which we have already heard. Moreover, the greater the insistence on the pure beauty and value of repetition, the greater the awareness must also be that repetition takes place not on the level of musical development or progression, but on the purest tonal and timbric level" (Snead 1984, p. 69, drawing on Chernoff 1979).
Cut (music):
David Brackett (Interpreting Popular Music, 2000, p. 118) describes the cut (repetition on the level of the beat, the ostinato, and the harmonic sequence) as what makes improvisation possible. In a cut, repetition is not considered accumulation. "Progress in the sense of 'avoidance of repetition' would at once sabotage such an effort" (Snead, "Repetition as a Figure of Black Culture", 1984, p. 68).
Cut (music):
Brackett (ibid) finds the cut in all African American folk and popular music "from ring to rap" and lists the blues (AAB), "Rhythm" changes in jazz, the AABA form of bebop, the ostinato vamps at the end of gospel songs allowing improvisation and a rise in energy, the short ostinatos of funk which spread that intensity throughout the song, and samples in rap. The last of these cuts on two levels: the repetition of the sample itself and its intertextual repetition.
Cut (music):
The cuts of African American music are not to be confused with those of traditional Irish music, especially on the tin whistle. "Cuts and rolls" are used as a form of ornamentation in Irish traditional, and sometimes Scottish, tunes.
Sources:
Brackett, David (1995/2000). Interpreting Popular Music. ISBN 0-520-22541-4.
Snead (1984). "Repetition as a Figure of Black Culture", in: Black Literature and Literary Theory, ed. Henry Louis Gates, Jr. London: Routledge, pp. 59–80.
Chernoff, John (1979). African Rhythm and African Sensibility: Aesthetics and Social Action in African Musical Idioms. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cognitive ethology**
Cognitive ethology:
Cognitive ethology is a branch of ethology concerned with the influence of conscious awareness and intention on the behaviour of an animal. Donald Griffin, a zoology professor in the United States, laid the foundations for research into the cognitive awareness of animals within their habitats. The fusion of cognitive science and classical ethology into cognitive ethology "emphasizes observing animals under more-or-less natural conditions, with the objective of understanding the evolution, adaptation (function), causation, and development of the species-specific behavioral repertoire" (Niko Tinbergen 1963).
Cognitive ethology:
According to Jamieson & Bekoff (1993), "Tinbergen's four questions about the evolution, adaptation, causation and development of behavior can be applied to the cognitive and mental abilities of animals." Allen & Bekoff (1997, chapter 5) attempt to show how cognitive ethology can take on the central questions of cognitive science, taking as their starting point the four questions described by Barbara Von Eckardt in her 1993 book What is Cognitive Science?, generalizing the four questions and adding a fifth. Kingstone, Smilek & Eastwood (2008) suggested that cognitive ethology should include human behavior. They proposed that researchers should first study how people behave in their natural, real-world environments and then move to the lab. Anthropocentric claims for the ways non-human animals interact in their social and non-social worlds are often used to influence decisions on how the non-human animals can or should be used by humans.
Relation to laboratory experimental psychology:
Traditionally, cognitive ethologists have questioned research methods that isolate animals in unnatural surroundings and present them with a limited set of artificial stimuli, arguing that such techniques favor the study of artificial issues that are not relevant to an understanding of the natural behavior of animals. However, many modern researchers favor a judicious combination of field and laboratory methods.
Relation to ethics:
Bekoff and Allen (1997) "identify three major groups of people (among some of whose members there are blurred distinctions) with different views on cognitive ethology, namely, slayers, skeptics, and proponents." The last group is seemingly convergent with animal-rights thinking in seeing animal experience as worthy in itself.
Ethicist Peter Singer is an example of a "proponent" in this sense, as is biologist E. O. Wilson, who coined the term biophilia to describe the basis of a direct moral cognition that 'higher' animals would use to perceive moral implications in the environment directly.
Three views:
According to Marc Bekoff, there are three different views towards whether a science of cognitive ethology is even possible. Slayers deny any possibility of success in cognitive ethology, proponents keep an open mind about animal cognition and the utility of cognitive ethological investigation, and skeptics stand somewhere in between.
Sources:
Philosophy of Cognitive Ethology, Colin Allen, Texas A&M University; Cognitive ethology: slayers, skeptics and proponents | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**XML Protocol**
XML Protocol:
The XML Protocol ("XMLP") is a standard being developed by the W3C XML Protocol Working Group according to the following guidelines, outlined in the group's charter: An envelope for encapsulating XML data to be transferred in an interoperable manner that allows for distributed extensibility.
A convention for the content of the envelope when used for RPC (Remote Procedure Call) applications. The protocol aspects of this should be coordinated closely with the IETF and make an effort to leverage any work they are doing, see below for details.
A mechanism for serializing data representing non-syntactic data models such as object graphs and directed labeled graphs, based on the data types of XML Schema.
XML Protocol:
A mechanism for using HTTP transport in the context of an XML Protocol. This does not mean that HTTP is the only transport mechanism that can be used for the technologies developed, nor that support for HTTP transport is mandatory. This component merely addresses the fact that HTTP transport is expected to be widely used, and so should be addressed by this Working Group. There will be coordination with the Internet Engineering Task Force (IETF). (See Blocks Extensible Exchange Protocol) Further, the protocol developed must meet the following requirements, as per the working group's charter: The envelope and the serialization mechanisms developed by the Working Group may not preclude any programming model nor assume any particular mode of communication between peers.
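The envelope-plus-RPC-convention idea can be sketched with Python's standard xml.etree.ElementTree. The namespace URI, element names, and the getQuote payload below are invented for illustration; they are not the actual XMLP/SOAP vocabulary:

```python
import xml.etree.ElementTree as ET

# Illustrative envelope wrapping an RPC-style payload. The namespace
# URI is a placeholder, not a real XMLP/SOAP namespace.
ENV = "http://example.org/envelope"   # hypothetical namespace
ET.register_namespace("env", ENV)

envelope = ET.Element(f"{{{ENV}}}Envelope")
header = ET.SubElement(envelope, f"{{{ENV}}}Header")   # extension point
body = ET.SubElement(envelope, f"{{{ENV}}}Body")

# RPC convention: the call is an element named after the method,
# with one child element per parameter.
call = ET.SubElement(body, "getQuote")
ET.SubElement(call, "symbol").text = "XYZ"

print(ET.tostring(envelope, encoding="unicode"))
```

The header element is the distributed-extensibility hook: intermediaries can add entries there without parties needing a priori knowledge of each other's extensions.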
XML Protocol:
Focus must be put on simplicity and modularity and must support the kind of extensibility actually seen on the Web. In particular, it must support distributed extensibility where the communicating parties do not have a priori knowledge of each other. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Zirconyl chloride**
Zirconyl chloride:
Zirconyl chloride is the inorganic compound with the formula of [Zr4(OH)8(H2O)16]Cl8(H2O)12, more commonly written ZrOCl2·8H2O, and referred to as zirconyl chloride octahydrate. It is a white solid and is the most common water-soluble derivative of zirconium. A compound with the formula ZrOCl2 has not been characterized.
Production and structure:
The salt is produced by hydrolysis of zirconium tetrachloride or by treating zirconium oxide with hydrochloric acid. It adopts a tetrameric structure consisting of the cation [Zr4(OH)8(H2O)16]8+, which features four pairs of hydroxide bridging ligands linking the four Zr4+ centers. The chloride anions are not ligands, consistent with the high oxophilicity of Zr(IV). The salt crystallizes as tetragonal crystals. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Piromelatine**
Piromelatine:
Piromelatine (Neu-P11) is a multimodal sleep drug under development by Neurim Pharmaceuticals. It is an agonist at melatonin MT1/MT2 and serotonin 5-HT1A/5-HT1D receptors. Neurim is conducting a phase II randomized, placebo-controlled trial of cognitive and sleep effects in Alzheimer's disease.
Results of a phase II trial on insomnia in 120 adults were announced in 2013, finding that piromelatine 20/50 mg improved sleep over 4 weeks versus placebo. Phase 1A/1B studies in 2011 showed safe, dose-dependent improvement in sleep. Pre-clinical studies showed antinociceptive, antihypertensive, and cognitive benefits in rat disease models of pain, hypertension, and Alzheimer's disease.
Antidepressant and anti-anxiety effects were also demonstrated in animal models. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Isosceles triangle**
Isosceles triangle:
In geometry, an isosceles triangle is a triangle that has two sides of equal length. Sometimes it is specified as having exactly two sides of equal length, and sometimes as having at least two sides of equal length, the latter version thus including the equilateral triangle as a special case.
Examples of isosceles triangles include the isosceles right triangle, the golden triangle, and the faces of bipyramids and certain Catalan solids.
The mathematical study of isosceles triangles dates back to ancient Egyptian mathematics and Babylonian mathematics. Isosceles triangles have been used as decoration from even earlier times, and appear frequently in architecture and design, for instance in the pediments and gables of buildings.
The two equal sides are called the legs and the third side is called the base of the triangle. The other dimensions of the triangle, such as its height, area, and perimeter, can be calculated by simple formulas from the lengths of the legs and base.
Every isosceles triangle has an axis of symmetry along the perpendicular bisector of its base. The two angles opposite the legs are equal and are always acute, so the classification of the triangle as acute, right, or obtuse depends only on the angle between its two legs.
Terminology, classification, and examples:
Euclid defined an isosceles triangle as a triangle with exactly two equal sides, but modern treatments prefer to define isosceles triangles as having at least two equal sides. The difference between these two definitions is that the modern version makes equilateral triangles (with three equal sides) a special case of isosceles triangles. A triangle that is not isosceles (having three unequal sides) is called scalene.
Terminology, classification, and examples:
"Isosceles" is made from the Greek roots "isos" (equal) and "skelos" (leg). The same word is used, for instance, for isosceles trapezoids, trapezoids with two equal sides, and for isosceles sets, sets of points every three of which form an isosceles triangle. In an isosceles triangle that has exactly two equal sides, the equal sides are called legs and the third side is called the base. The angle included by the legs is called the vertex angle and the angles that have the base as one of their sides are called the base angles. The vertex opposite the base is called the apex. In the equilateral triangle case, since all sides are equal, any side can be called the base.
Terminology, classification, and examples:
Whether an isosceles triangle is acute, right or obtuse depends only on the angle at its apex. In Euclidean geometry, the base angles cannot be obtuse (greater than 90°) or right (equal to 90°) because their measures would sum to at least 180°, the total of all angles in any Euclidean triangle. Since a triangle is obtuse or right if and only if one of its angles is obtuse or right, respectively, an isosceles triangle is obtuse, right or acute if and only if its apex angle is respectively obtuse, right or acute. In Edwin Abbott's book Flatland, this classification of shapes was used as a satire of social hierarchy: isosceles triangles represented the working class, with acute isosceles triangles higher in the hierarchy than right or obtuse isosceles triangles. As well as the isosceles right triangle, several other specific shapes of isosceles triangles have been studied.
Terminology, classification, and examples:
These include the Calabi triangle (a triangle with three congruent inscribed squares), the golden triangle and golden gnomon (two isosceles triangles whose sides and base are in the golden ratio), the 80-80-20 triangle appearing in the Langley's Adventitious Angles puzzle, and the 30-30-120 triangle of the triakis triangular tiling.
Five Catalan solids, the triakis tetrahedron, triakis octahedron, tetrakis hexahedron, pentakis dodecahedron, and triakis icosahedron, each have isosceles-triangle faces, as do infinitely many pyramids and bipyramids.
Formulas:
Height: For any isosceles triangle, the following six line segments coincide: the altitude, a line segment from the apex perpendicular to the base; the angle bisector from the apex to the base; the median from the apex to the midpoint of the base; the perpendicular bisector of the base within the triangle; the segment within the triangle of the unique axis of symmetry of the triangle; and the segment within the triangle of the Euler line of the triangle, except when the triangle is equilateral. Their common length is the height h of the triangle.
Formulas:
If the triangle has equal sides of length a and base of length b, the general triangle formulas for the lengths of these segments all simplify to h = √(a² − b²/4).
Formulas:
This formula can also be derived from the Pythagorean theorem using the fact that the altitude bisects the base and partitions the isosceles triangle into two congruent right triangles. The Euler line of any triangle goes through the triangle's orthocenter (the intersection of its three altitudes), its centroid (the intersection of its three medians), and its circumcenter (the intersection of the perpendicular bisectors of its three sides, which is also the center of the circumcircle that passes through the three vertices). In an isosceles triangle with exactly two equal sides, these three points are distinct, and (by symmetry) all lie on the symmetry axis of the triangle, from which it follows that the Euler line coincides with the axis of symmetry. The incenter of the triangle also lies on the Euler line, something that is not true for other triangles. If any two of an angle bisector, median, or altitude coincide in a given triangle, that triangle must be isosceles.
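The height formula and its Pythagorean derivation can be checked with a short computation. The 5-5-6 triangle is chosen here because the altitude splits it into two 3-4-5 right triangles:

```python
import math

# Height of an isosceles triangle from leg a and base b,
# using h = sqrt(a^2 - b^2/4).
def isosceles_height(a, b):
    return math.sqrt(a*a - b*b/4)

# Legs 5 and base 6: the altitude bisects the base into two halves
# of length 3, producing two 3-4-5 right triangles.
h = isosceles_height(5, 6)
print(h)  # 4.0

# Pythagorean check on one half-triangle: (b/2)^2 + h^2 == a^2.
assert math.isclose((6/2)**2 + h**2, 5**2)
```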
Formulas:
Area: The area T of an isosceles triangle can be derived from the formula for its height, and from the general formula for the area of a triangle as half the product of base and height: T = (b/4)√(4a² − b²).
Formulas:
The same area formula can also be derived from Heron's formula for the area of a triangle from its three sides. However, applying Heron's formula directly can be numerically unstable for isosceles triangles with very sharp angles, because of the near-cancellation between the semiperimeter and side length in those triangles. If the apex angle (θ) and leg lengths (a) of an isosceles triangle are known, then the area of that triangle is: T = (1/2)a² sin θ.
Formulas:
This is a special case of the general formula for the area of a triangle as half the product of two sides times the sine of the included angle.
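The equivalence of the area formulas above can be verified numerically. Here the 5-5-6 triangle (area 12) is computed four ways: base times height, the closed form, the apex-angle form, and Heron's formula:

```python
import math

# Area of the 5-5-6 isosceles triangle computed four equivalent ways.
a, b = 5.0, 6.0
h = math.sqrt(a*a - b*b/4)                    # height, here 4.0
theta = 2*math.asin(b/(2*a))                  # apex angle between the legs

area_bh = b*h/2                               # half base times height
area_closed = (b/4)*math.sqrt(4*a*a - b*b)    # T = (b/4)*sqrt(4a^2 - b^2)
area_sine = 0.5*a*a*math.sin(theta)           # T = (1/2)*a^2*sin(theta)
s = (2*a + b)/2                               # semiperimeter for Heron
area_heron = math.sqrt(s*(s - a)**2*(s - b))  # Heron's formula

assert math.isclose(area_closed, area_bh)
assert math.isclose(area_sine, area_bh)
assert math.isclose(area_heron, area_bh)
print(area_bh)  # 12.0
```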
Perimeter: The perimeter p of an isosceles triangle with equal sides a and base b is just p = 2a + b.
As in any triangle, the area T and perimeter p are related by the isoperimetric inequality p² ≥ 12√3 T.
This is a strict inequality for isosceles triangles with sides unequal to the base, and becomes an equality for the equilateral triangle.
The area, perimeter, and base can also be related to each other by the equation 2pb³ − p²b² + 16T² = 0.
If the base and perimeter are fixed, then this formula determines the area of the resulting isosceles triangle, which is the maximum possible among all triangles with the same base and perimeter.
On the other hand, if the area and perimeter are fixed, this formula can be used to recover the base length, but not uniquely: there are in general two distinct isosceles triangles with given area T and perimeter p . When the isoperimetric inequality becomes an equality, there is only one such triangle, which is equilateral.
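The generic two-solution behaviour can be demonstrated by solving the cubic relation 2pb³ − p²b² + 16T² = 0 numerically for b on the valid range 0 < b < p/2 (the constraint b < 2a). The grid-scan-plus-bisection root finder below is only an illustrative sketch:

```python
import math

# Recover the base b of an isosceles triangle from its area T and
# perimeter p, as roots of f(b) = 2*p*b^3 - p^2*b^2 + 16*T^2 on
# 0 < b < p/2. A grid scan plus bisection suffices for a sketch.
def bases_for(T, p, steps=4096):
    f = lambda b: 2*p*b**3 - p*p*b*b + 16*T*T
    roots, prev = [], 1e-9
    for i in range(1, steps + 1):
        cur = (p/2)*i/steps
        if f(cur) == 0.0:                     # grid point hits a root
            roots.append(cur)
        elif f(prev)*f(cur) < 0:              # sign change: bisect
            lo, hi = prev, cur
            for _ in range(60):
                mid = (lo + hi)/2
                if f(lo)*f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append((lo + hi)/2)
        prev = cur
    return roots

# The 5-5-6 triangle has T = 12 and p = 16; a second, non-congruent
# isosceles triangle (base 1 + sqrt(13)) shares both values.
print(bases_for(12, 16))  # approximately [4.6056, 6.0]
```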
Formulas:
Angle bisector length: If the two equal sides have length a and the other side has length b, then the internal angle bisector t from one of the two equal-angled vertices satisfies 2ab/(a + b) > t > ab√2/(a + b), as well as t < 4a/3; and conversely, if the latter condition holds, an isosceles triangle parametrized by a and t exists. The Steiner–Lehmus theorem states that every triangle with two angle bisectors of equal lengths is isosceles. It was formulated in 1840 by C. L. Lehmus. Its other namesake, Jakob Steiner, was one of the first to provide a solution.
Formulas:
Although originally formulated only for internal angle bisectors, it works for many (but not all) cases when, instead, two external angle bisectors are equal.
The 30-30-120 isosceles triangle makes a boundary case for this variation of the theorem, as it has four equal angle bisectors (two internal, two external).
Radii: The inradius and circumradius formulas for an isosceles triangle may be derived from their formulas for arbitrary triangles.
The radius of the inscribed circle of an isosceles triangle with side length a, base b, and height h is: r = (2ab − b²)/(4h).
The center of the circle lies on the symmetry axis of the triangle, this distance above the base.
An isosceles triangle has the largest possible inscribed circle among the triangles with the same base and apex angle, as well as also having the largest area and perimeter among the same class of triangles. The radius of the circumscribed circle is: R = a²/(2h).
The center of the circle lies on the symmetry axis of the triangle, this distance below the apex.
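Both radius formulas can be cross-checked against the general-triangle identities r = T/s and R = abc/(4T), again using the 5-5-6 triangle:

```python
import math

a, b = 5.0, 6.0
h = math.sqrt(a*a - b*b/4)     # height, 4.0 for the 5-5-6 triangle

r = (2*a*b - b*b) / (4*h)      # inradius:    r = (2ab - b^2)/(4h)
R = a*a / (2*h)                # circumradius: R = a^2/(2h)

# Cross-check with the general formulas r = T/s and R = abc/(4T),
# where T is the area and s the semiperimeter.
T = b*h/2
s = (2*a + b)/2
assert math.isclose(r, T/s)
assert math.isclose(R, a*a*b/(4*T))
print(r, R)  # 1.5 3.125
```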
Formulas:
Inscribed square: For any isosceles triangle, there is a unique square with one side collinear with the base of the triangle and the opposite two corners on its sides. The Calabi triangle is a special isosceles triangle with the property that the other two inscribed squares, with sides collinear with the sides of the triangle, are of the same size as the base square. A much older theorem, preserved in the works of Hero of Alexandria, states that, for an isosceles triangle with base b and height h, the side length of the inscribed square on the base of the triangle is bh/(b + h).
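Hero's inscribed-square formula has a one-line similar-triangles check: at height s above the base, the triangle has narrowed to width b(1 − s/h), which must equal the square's side s:

```python
import math

# Side of the square inscribed on the base: s = b*h/(b + h).
b, h = 6.0, 4.0            # the 5-5-6 triangle again
s = b*h / (b + h)
print(s)  # 2.4

# Similar-triangles check: at height s the triangle's width is
# b*(1 - s/h), which must equal the square's side s.
assert math.isclose(b * (1 - s/h), s)
```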
Isosceles subdivision of other shapes:
For any integer n≥4 , any triangle can be partitioned into n isosceles triangles.
Isosceles subdivision of other shapes:
In a right triangle, the median from the hypotenuse (that is, the line segment from the midpoint of the hypotenuse to the right-angled vertex) divides the right triangle into two isosceles triangles. This is because the midpoint of the hypotenuse is the center of the circumcircle of the right triangle, and each of the two triangles created by the partition has two equal radii as two of its sides.
Isosceles subdivision of other shapes:
Similarly, an acute triangle can be partitioned into three isosceles triangles by segments from its circumcenter, but this method does not work for obtuse triangles, because the circumcenter lies outside the triangle. Generalizing the partition of an acute triangle, any cyclic polygon that contains the center of its circumscribed circle can be partitioned into isosceles triangles by the radii of this circle through its vertices. The fact that all radii of a circle have equal length implies that all of these triangles are isosceles. This partition can be used to derive a formula for the area of the polygon as a function of its side lengths, even for cyclic polygons that do not contain their circumcenters. This formula generalizes Heron's formula for triangles and Brahmagupta's formula for cyclic quadrilaterals. Either diagonal of a rhombus divides it into two congruent isosceles triangles. Similarly, one of the two diagonals of a kite divides it into two isosceles triangles, which are not congruent except when the kite is a rhombus.
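The radial partition gives a direct area computation: each isosceles slice with two sides equal to the circumradius R and central angle θ contributes (1/2)R² sin θ. A regular hexagon, which splits into six equilateral slices, makes a convenient test case:

```python
import math

# Area of a cyclic polygon containing its circumcenter, via the
# partition into isosceles triangles with two sides equal to the
# circumradius R: each slice contributes (1/2)*R^2*sin(central angle).
def cyclic_polygon_area(R, central_angles):
    assert math.isclose(sum(central_angles), 2*math.pi)
    return sum(0.5 * R*R * math.sin(t) for t in central_angles)

# A regular hexagon with circumradius 1 is six equilateral slices:
hexagon = cyclic_polygon_area(1.0, [math.pi/3]*6)
print(hexagon)  # 3*sqrt(3)/2, about 2.598
assert math.isclose(hexagon, 3*math.sqrt(3)/2)
```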
Applications:
In architecture and design: Isosceles triangles commonly appear in architecture as the shapes of gables and pediments. In ancient Greek architecture and its later imitations, the obtuse isosceles triangle was used; in Gothic architecture this was replaced by the acute isosceles triangle. In the architecture of the Middle Ages, another isosceles triangle shape became popular: the Egyptian isosceles triangle. This is an isosceles triangle that is acute, but less so than the equilateral triangle; its height is proportional to 5/8 of its base. The Egyptian isosceles triangle was brought back into use in modern architecture by Dutch architect Hendrik Petrus Berlage.
Applications:
Warren truss structures, such as bridges, are commonly arranged in isosceles triangles, although sometimes vertical beams are also included for additional strength.
Applications:
Surfaces tessellated by obtuse isosceles triangles can be used to form deployable structures that have two stable states: an unfolded state in which the surface expands to a cylindrical column, and a folded state in which it folds into a more compact prism shape that can be more easily transported. The same tessellation pattern forms the basis of Yoshimura buckling, a pattern formed when cylindrical surfaces are axially compressed, and of the Schwarz lantern, an example used in mathematics to show that the area of a smooth surface cannot always be accurately approximated by polyhedra converging to the surface.
Applications:
In graphic design and the decorative arts, isosceles triangles have been a frequent design element in cultures around the world from at least the Early Neolithic to modern times. They are a common design element in flags and heraldry, appearing prominently with a vertical base, for instance, in the flag of Guyana, or with a horizontal base in the flag of Saint Lucia, where they form a stylized image of a mountain island. They also have been used in designs with religious or mystic significance, for instance in the Sri Yantra of Hindu meditational practice.
Applications:
In other areas of mathematics: If a cubic equation with real coefficients has three roots that are not all real numbers, then when these roots are plotted in the complex plane as an Argand diagram they form the vertices of an isosceles triangle whose axis of symmetry coincides with the horizontal (real) axis. This is because the complex roots are complex conjugates and hence are symmetric about the real axis. In celestial mechanics, the three-body problem has been studied in the special case that the three bodies form an isosceles triangle, because assuming that the bodies are arranged in this way reduces the number of degrees of freedom of the system without reducing it to the solved Lagrangian point case when the bodies form an equilateral triangle. The first instances of the three-body problem shown to have unbounded oscillations were in the isosceles three-body problem.
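The cubic-roots observation is easy to verify with a concrete polynomial. The cubic below was constructed (by expanding (x − 2)(x − (1 + 2i))(x − (1 − 2i))) so that its roots are known; the check confirms the conjugate pair gives two equal legs:

```python
import math

# x^3 - 4x^2 + 9x - 10 has real coefficients, one real root (2) and
# the conjugate pair 1 ± 2i. Verify the roots, then check that the
# three points form an isosceles triangle symmetric about the real axis.
def p(x):
    return x**3 - 4*x**2 + 9*x - 10

roots = [2 + 0j, 1 + 2j, 1 - 2j]
assert all(abs(p(r)) < 1e-9 for r in roots)

# The sides from the real root to each complex root are the equal legs:
d1 = abs(roots[0] - roots[1])
d2 = abs(roots[0] - roots[2])
assert math.isclose(d1, d2)
print(d1)  # sqrt(5), about 2.236
```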
History and fallacies:
Long before isosceles triangles were studied by the ancient Greek mathematicians, the practitioners of Ancient Egyptian mathematics and Babylonian mathematics knew how to calculate their area. Problems of this type are included in the Moscow Mathematical Papyrus and Rhind Mathematical Papyrus. The theorem that the base angles of an isosceles triangle are equal appears as Proposition I.5 in Euclid. This result has been called the pons asinorum (the bridge of asses) or the isosceles triangle theorem. Rival explanations for this name include the theory that it is because the diagram used by Euclid in his demonstration of the result resembles a bridge, or because this is the first difficult result in Euclid, and acts to separate those who can understand Euclid's geometry from those who cannot. A well-known fallacy is the false proof of the statement that all triangles are isosceles. Robin Wilson credits this argument to Lewis Carroll, who published it in 1899, but W. W. Rouse Ball published it in 1892 and later wrote that Carroll obtained the argument from him. The fallacy is rooted in Euclid's lack of recognition of the concept of betweenness and the resulting ambiguity of inside versus outside of figures. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Meroterpene**
Meroterpene:
A meroterpene (or meroterpenoid) is a chemical compound having a partial terpenoid structure.
Examples:
Terpenophenolics: Terpenophenolics are compounds that are part terpene, part natural phenol. Plants in the genera Humulus and Cannabis produce terpenophenolic metabolites. Examples of terpenophenolics are bakuchiol, ferruginol, mutisianthol, and totarol. Terpenophenolics can also be isolated from animals. The terpenophenolics methoxyconidiol, epiconicol and didehydroconicol, isolated from the ascidian Aplidium aff. densum, show antiproliferative activity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stigmator**
Stigmator:
A stigmator is a component of electron microscopes that reduces astigmatism of the beam by imposing a weak electric or magnetic quadrupole field on the electron beam.
Background:
For early electron microscopes, between the 1940s and 1960s, astigmatism was one of the main performance-limiting factors. Sources of this astigmatism included misaligned objectives; non-uniform magnetic fields of the lenses, which were especially hard to correct; lenses that are not perfectly circular; and contamination on the objective aperture. Therefore, to improve the resolving power, the astigmatism had to be corrected. The first commercially used stigmators on electron microscopes were installed in the early 1960s. The stigmatic correction is done using an electric or magnetic field perpendicular to the beam. By adjusting the magnitude and azimuth of the stigmator field, asymmetric astigmatism can be compensated for. Stigmators produce weak fields compared to the electromagnetic lenses they correct, as usually only minor corrections are necessary.
Number of poles:
Stigmators create a quadrupole field, and thus must consist of at least four poles, but hexapole, octopole and dodecapole stigmators are also used, with octopole stigmators being the most common. The octopole (or higher-order) stigmators also produce a quadrupole field, but use their additional poles to align the imposed field with the direction of the astigmatism's ellipticity.
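The alignment trick rests on a trigonometric identity: a quadrupole rotated to azimuth φ, cos(2(θ − φ)), decomposes into a "normal" term cos 2θ and a "skew" term sin 2θ. The sketch below (the drive pattern and amplitudes are illustrative, not a real instrument's excitation scheme) shows an octopole's eight pole excitations reproducing exactly that decomposition:

```python
import math

# Sketch: an octopole synthesizing a quadrupole at an arbitrary azimuth.
# Pole k sits at angle k*pi/4 and is driven with V_k = A*cos(2*theta_k - 2*phi),
# so the lowest-order multipole is a quadrupole aligned with phi:
#   cos(2(theta - phi)) = cos(2phi)cos(2theta) + sin(2phi)sin(2theta).
A, phi = 1.0, math.radians(30)     # illustrative strength and azimuth
poles = [k * math.pi / 4 for k in range(8)]
excitations = [A * math.cos(2*t - 2*phi) for t in poles]

# Project the excitation pattern onto the two fixed quadrupole terms.
normal = sum(v * math.cos(2*t) for v, t in zip(excitations, poles)) / 4
skew = sum(v * math.sin(2*t) for v, t in zip(excitations, poles)) / 4
assert math.isclose(normal, A*math.cos(2*phi))
assert math.isclose(skew, A*math.sin(2*phi))
```

Rotating φ thus only re-weights two fixed quadrupole components, which is why extra poles suffice to steer the correction without mechanically rotating anything.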
Types:
Magnetic stigmator: The magnetic stigmator is a weak cylindrical lens that can correct the cylindrical component of the beam. It can consist of metal rods, inserted with their long axes toward the beam center, which induce a magnetic field. By retracting or extending the rods, the astigmatism can be compensated.
Electromagnetic: Electromagnetic stigmators are integrated with the lenses and directly deform the magnetic field of the lens(es). These were the first types of stigmators to be used.
Automatic stigmators:
In most cases, the astigmatism can be corrected using a constant stigmator field which is adjusted by the microscope operator. The main cause of astigmatism, the non-uniform magnetic field produced by the lenses, usually does not change noticeably during a TEM session. A recent development is computer-controlled stigmators, which usually use the Fourier transform of the image to find the ideal stigmator setting. The Fourier transform of an astigmatic image is usually elliptically shaped; for a stigmatic image, it is round. This property can be used by algorithms to reduce the astigmatic aberration.
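The roundness criterion such algorithms minimize can be sketched without a real micrograph. Below, a synthetic Gaussian "power spectrum" stands in for the Fourier transform of an image (an assumption for illustration only), and its elongation is measured from second moments; an auto-stigmator would adjust the correction until the ratio approaches 1:

```python
import math

# Elongation measure for a synthetic, Gaussian-shaped "power spectrum"
# with widths sx, sy, standing in for the Fourier transform of a
# micrograph. Second moments give the axis ratio: ~1 when stigmatic
# (round spectrum), >1 when astigmatic (elliptical spectrum).
def ellipticity(sx, sy, n=64):
    mxx = myy = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            w = math.exp(-(i*i)/(2*sx*sx) - (j*j)/(2*sy*sy))
            mxx += w * i * i
            myy += w * j * j
    return math.sqrt(max(mxx, myy) / min(mxx, myy))

print(ellipticity(8, 8))   # ~1.0: stigmatic
print(ellipticity(12, 6))  # ~2.0: astigmatic
```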
Multiple stigmator systems:
Normally, one stigmator is sufficient, but TEMs normally contain three stigmators: one to stigmatize the source beam, one to stigmatize real-space images, and one to stigmatize diffraction patterns. These are commonly referred to as condenser, objective, and intermediate (or diffraction) stigmators. The use of three post-sample stigmators has been proposed to reduce linear distortion. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Active Stabilizer Suspension System**
Active Stabilizer Suspension System:
Active Power Stabilizer Suspension System (APSSS) is an electric active suspension system with active anti-roll bars developed by Toyota Motor Corporation for its high-end vehicles, including Lexus models. By altering stabilizer bar stiffness, this system acts to reduce body tilt during cornering, keeping the vehicle more level during turns and improving handling, as opposed to the natural tendency of a vehicle to roll due to the lateral forces experienced during high-speed maneuvering. The active stabilizer system relies on vehicle body sensors and electric motors. The first production usage of this system was introduced in August 2005 with the Lexus GS430 sport sedan, followed by the 2008 Lexus LS 600h luxury sedan. APSSS is claimed to be the world's first electric active stabilizer system. It is a system improvement of an earlier Toyota technology called Toyota TEMS (Toyota Electronic Modulated Suspension).
How it works:
The APSSS utilizes sensors for steering wheel speed and steering angle, along with yaw and acceleration/deceleration sensors. These sensors are tied to an electronic control unit (ECU), which in turn connects with actuators consisting of 46 V DC brushless motors and reduction mechanisms. Mounted with the vehicle suspension stabilizer bars, each reduction mechanism houses a wave generator, flexible gear, and circular gear. The system is activated when the vehicle enters a high-speed turn, and the sensors register vertical, longitudinal, and transverse forces which contribute to body lean and additional movements. Along with steering data, these are sent to the ECU where they are processed, with the forces necessary to counteract body roll movements calculated. Corrective instructions are then sent to the suspension motors and reduction mechanisms. The reduction mechanism gears activate to adjust suspension rigidity, torquing the stabilizer bars and thus increasing sway resistance and reducing body roll movements. Developed jointly with Aisin, APSSS was found by its engineers to offer a faster response time (within 20 milliseconds) and reduced energy consumption compared with prior hydraulically actuated active suspension systems, which rely on hydraulic servomechanisms.
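The sense-compute-actuate loop described above can be caricatured in a few lines. Everything here is invented for illustration (the gain, the saturation limit, and the linear torque demand); the real ECU fuses several sensors through proprietary control laws:

```python
# Illustrative sketch only: a simplified anti-roll calculation of the
# kind an active-stabilizer ECU performs. The gain and torque limit
# are invented numbers, not Toyota/Aisin specifications.
def antiroll_torque(lateral_accel_g, roll_gain_nm_per_g=900.0,
                    max_torque_nm=600.0):
    """Counter-torque (N*m) commanded to the stabilizer-bar actuator."""
    demand = roll_gain_nm_per_g * lateral_accel_g
    # Saturate at the actuator's mechanical limit.
    return max(-max_torque_nm, min(max_torque_nm, demand))

print(antiroll_torque(0.25))  # 225.0, within the actuator limit
print(antiroll_torque(1.0))   # 600.0, clipped at the limit
```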
Vehicles:
Vehicles that have offered the Active Power Stabilizer Suspension System (APSSS) to date, listed by model year (the system was offered as an option):
- 2006 Lexus GS 350
- 2006 Lexus GS 430
- 2006 Lexus GS 450h
- 2008 Lexus GS 460
- 2008 Lexus LS 600h
- 2008 Lexus LS 600h L
- 2010 Lexus RX 450h
- 2013 Lexus LS 600h F SPORT
**The Dave Fanning Show**
The Dave Fanning Show:
The Dave Fanning Show is a radio program broadcast on RTÉ Radio. The show is presented by Dave Fanning and has, at various times, been broadcast on both RTÉ Radio 1 and RTÉ 2fm.
History:
The first "Dave Fanning Show" was broadcast on RTÉ Radio 2 from the 1970s. This music-based show featured the Fanning Sessions, where aspiring bands were afforded a full recording session which was subsequently played on Fanning's radio show. U2 recorded one such Fanning Session, in return for which Fanning was given the world-exclusive first play of all new U2 singles. Other bands to record Fanning Sessions have included The Cranberries, JJ72, Kerbdog and Therapy?. As schedules changed, the "Dave Fanning Show" was broadcast in different time-slots throughout the 1980s and 1990s on RTÉ Radio 2 (rebranded "2FM" in 1988). The show focused on music until the early 1990s, when the format was updated to include a mix of music, movie news, lifestyle items, competitions, and guests. The last such incarnation of this weekday "Dave Fanning Show" was broadcast on RTÉ 2fm from March 2002 until July 2006. In 2006, the "Dave Fanning Show" moved to RTÉ Radio 1. By 2009, however, it had returned to RTÉ 2FM as an evening weekday show. As of 2020, the "Dave Fanning Show" is broadcast as a weekend midday magazine/chat show on RTÉ 2FM. On 22 February 2023, Fanning announced that he was stepping away from his weekend show on RTÉ 2FM but would continue broadcasting on digital radio, on TV and online.
Signature tunes:
- Until the mid-1990s – "Oh Well" by Fleetwood Mac
- Late 1990s – "Another Girl, Another Planet" by The Only Ones
- Up to July 2006 – "The Modern Age" by The Strokes
**Hugo (software)**
Hugo (software):
Hugo is a static site generator written in Go. Steve Francia originally created Hugo as an open-source project in 2013. Since v0.14 in 2015, Hugo has continued development under the lead of Bjørn Erik Pedersen with other contributors. Hugo is licensed under the Apache License 2.0. Hugo is particularly noted for its speed, and its official website states it is "the world's fastest framework for building websites". In July 2015, Netlify began providing Hugo hosting. Notable adopters include Smashing Magazine, which migrated from WordPress to a Jamstack solution with Hugo in 2017, and Cloudflare, which switched its Developer Docs from Gatsby to Hugo in 2022.
Features:
Hugo takes data files, i18n bundles, configuration, layout templates, static files, assets, and content written in Markdown, HTML, AsciiDoc, or Org-mode and renders a static website. Some notable features are multilingual support, image processing, asset management, custom output formats, Markdown render hooks and shortcodes. Nested sections allow different types of content to be separated, e.g. for a website containing both a blog and a podcast. Hugo can be used in combination with front-end frameworks such as Bootstrap or Tailwind. Hugo sites can be connected to cloud-based CMS software such as Netlify CMS, CloudCannon or Forestry, enabling content editors to modify site content without coding knowledge.
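As a concrete illustration of the configuration and multilingual support mentioned above, a minimal `hugo.toml` might look like the following (the site values are placeholders):

```toml
# Minimal Hugo site configuration (placeholder values).
baseURL = "https://example.org/"
languageCode = "en-us"
title = "Example Site"

# Multilingual support: each language gets its own content tree or
# translated files; `weight` controls ordering in language menus.
[languages]
  [languages.en]
    languageName = "English"
    weight = 1
  [languages.de]
    languageName = "Deutsch"
    weight = 2
```

Content then lives in nested sections under `content/` (e.g. `content/blog/` and `content/podcast/`), and running `hugo` renders the whole tree to `public/`.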
**Reflecting instrument**
Reflecting instrument:
Reflecting instruments are those that use mirrors to enhance their ability to make measurements. In particular, the use of mirrors permits one to observe two objects simultaneously while measuring the angular distance between them. While reflecting instruments are used in many professions, they are primarily associated with celestial navigation, as the need to solve navigation problems, in particular the problem of longitude, was the primary motivation in their development.
Objectives of the instruments:
The purpose of reflecting instruments is to allow an observer to measure the altitude of a celestial object or measure the angular distance between two objects. The driving force behind the developments discussed here was the solution to the problem of finding one's longitude at sea. The solution to this problem was seen to require an accurate means of measuring angles and the accuracy was seen to rely on the observer's ability to measure this angle by simultaneously observing two objects.
The deficiency of prior instruments was well known: requiring the observer to sight two objects along two divergent lines of sight increased the likelihood of error. Those who considered the problem realized that the use of specula (mirrors, in modern parlance) could permit two objects to be observed in a single view. What followed was a series of inventions and improvements that refined the instrument to the point that its accuracy exceeded what was required for determining longitude. Any further improvement required a completely new technology.
Early reflecting instruments:
Some of the early reflecting instruments were proposed by scientists such as Robert Hooke and Isaac Newton. These were little used, and some may never have been built or tested extensively. The van Breen instrument was the exception, in that it was used by the Dutch; however, it had little influence outside the Netherlands.
Joost van Breen's reflecting cross-staff: Invented in 1660 by the Dutchman Joost van Breen, the spiegelboog (mirror-bow) was a reflecting cross-staff. This instrument appears to have been used for approximately 100 years, mainly in the Zeeland Chamber of the VOC (the Dutch East India Company).
Robert Hooke's single-reflecting instrument: Hooke's instrument used a single mirror to reflect the image of an astronomical object to the observer's eye. It was first described in 1666, and a working model was presented by Hooke at a meeting of the Royal Society some time later.
The device consisted of three primary components: an index arm, a radial arm and a graduated chord. The three were arranged in a triangle as in the image on the right. A telescopic sight was mounted on the index arm. At the point of rotation of the radial arm, a single mirror was mounted. This point of rotation allowed the angle between the index arm and the radial arm to be changed. The graduated chord was connected to the opposite end of the radial arm and was permitted to rotate about that end. The chord was held against the distant end of the index arm and slid against it. The graduations on the chord were uniform and, by using it to measure the distance between the ends of the index arm and the radial arm, the angle between those arms could be determined. A table of chords was used to convert a measurement of distance to a measurement of angle. The use of the mirror meant that the measured angle was twice the angle included between the index arm and the radial arm.
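The conversion a table of chords performed can be reconstructed numerically (a modern sketch, not period practice). With the index and radial arms of equal length r, a measured chord c between their ends subtends an arm angle of 2·arcsin(c/2r), and the mirror doubles that angle:

```python
import math

# Sketch of the chord-to-angle conversion a table of chords performed.
# A chord c subtending angle theta on a circle of radius r satisfies
# c = 2 * r * sin(theta / 2).

def angle_from_chord(chord: float, radius: float) -> float:
    """Angle (degrees) between the index and radial arms for a given chord."""
    return math.degrees(2 * math.asin(chord / (2 * radius)))

def measured_angle(chord: float, radius: float) -> float:
    """The mirror doubles the included arm angle, so the observed
    angular distance is twice the arm angle."""
    return 2 * angle_from_chord(chord, radius)

# Arms of unit length separated so the chord equals the radius:
# 60 degrees between the arms, 120 degrees measured.
assert abs(angle_from_chord(1.0, 1.0) - 60.0) < 1e-9
assert abs(measured_angle(1.0, 1.0) - 120.0) < 1e-9
```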
The mirror on the radial arm was small enough that the observer could see the reflection of an object in half of the telescope's view while seeing straight ahead in the other half. This allowed the observer to see both objects at once. Aligning the two objects in the telescope's view caused the angular distance between them to be represented on the graduated chord.
While Hooke's instrument was novel and attracted some attention at the time, there is no evidence that it was subjected to any tests at sea. The instrument was little used and did not have any significant effect on astronomy or navigation.
Halley's reflecting instrument: In 1692, Edmond Halley presented the design of a reflecting instrument to the Royal Society. This is an interesting instrument, combining the functionality of a radio latino with a double telescope. The telescope (AB in the adjacent image) has an eyepiece at one end, a mirror (D) partway along its length, and one objective lens at the far end (B). The mirror obstructs only half the field (either left or right) and permits the objective to be seen in the other half. Reflected in the mirror is the image from the second objective lens (C). This permits the observer to see both images, one straight through and one reflected, side by side simultaneously. It is essential that the focal lengths of the two objective lenses be the same and that the distances from the mirror to either lens be identical; if this condition is not met, the two images cannot be brought to a common focus.
The mirror is mounted on the staff (DF) of the radio latino portion of the instrument and rotates with it. The angle this side of the radio latino's rhombus makes to the telescope can be set by adjusting the rhombus' diagonal length. In order to facilitate this and allow for fine adjustment of the angle, a screw (EC) is mounted so as to allow the observer to change the distance between the two vertexes (E and C).
The observer sights the horizon with the direct lens' view and sights a celestial object in the mirror. Turning the screw to bring the two images directly adjacent sets the instrument. The angle is determined by taking the length of the screw between E and C and converting this to an angle in a table of chords.
Halley specified that the telescope tube be rectangular in cross section. This makes construction easy, but is not a requirement as other cross section shapes can be accommodated. The four sides of the radio latino portion (CD, DE, EF, FC) must be equal in length in order for the angle between the telescope and the objective lens side (ADC) to be precisely twice the angle between the telescope and the mirror (ADF) (or in other words – to enforce the angle of incidence being equal to the angle of reflection). Otherwise, instrument collimation will be compromised and the resulting measurements would be in error.
The celestial object's elevation angle could have been determined by reading graduations on the staff at the slider; however, that is not how Halley designed the instrument. This may suggest that the overall design was only coincidentally like a radio latino and that Halley may not have been familiar with that instrument.
There is no knowledge of whether this instrument was ever tested at sea.
Newton's reflecting quadrant: Newton's design was similar in many respects to Hadley's first reflecting quadrant, which followed it.
Newton had communicated the design to Edmond Halley around 1699. Halley, however, did nothing with the document, and it remained among his papers, only to be discovered after his death. He did discuss Newton's design with members of the Royal Society when Hadley presented his reflecting quadrant in 1731, noting that Hadley's design was quite similar to the earlier Newtonian instrument. As a result of this inadvertent secrecy, Newton's invention played little role in the development of reflecting instruments.
The octant:
What is remarkable about the octant is the number of persons who independently invented the device in a short period of time. John Hadley and Thomas Godfrey both get credit for inventing the octant. They independently developed the same instrument around 1731. They were not the only ones, however.
In Hadley's case, two instruments were designed. The first was an instrument very similar to Newton's reflecting quadrant. The second had essentially the same form as the modern sextant. Few of the first design were constructed, while the second became the standard instrument from which the sextant derived and, along with the sextant, displaced all prior navigation instruments used for celestial navigation.
Caleb Smith, an English insurance broker with a strong interest in astronomy, created an octant in 1734. He called it an Astroscope or Sea-Quadrant. He used a fixed prism in addition to an index mirror as reflective elements. Prisms offered advantages over mirrors in an era when polished speculum metal mirrors were inferior and both the silvering of a mirror and the production of glass with flat, parallel surfaces were difficult. However, the other design elements of Smith's instrument made it inferior to Hadley's octant, and it was not used significantly. Jean-Paul Fouchy, a mathematics professor and astronomer in France, invented an octant in 1732. His was essentially the same as Hadley's. Fouchy did not know of the developments in England at the time, since communication between the two countries' instrument makers was limited and the publications of the Royal Society, particularly the Philosophical Transactions, were not distributed in France. Fouchy's octant was overshadowed by Hadley's.
The sextant:
The main article, Sextant, covers the use of the instrument in navigation. This article concentrates on the history and development of the instrument. The origin of the sextant is straightforward and not in dispute. Admiral John Campbell, having used Hadley's octant in sea trials of the method of lunar distances, found that it was wanting. The 90° angle subtended by the arc of the instrument was insufficient to measure some of the angular distances required for the method. He suggested that the angle be increased to 120°, yielding the sextant. John Bird made the first such sextant in 1757. With the development of the sextant, the octant became something of a second-class instrument. The octant, while occasionally constructed entirely of brass, remained primarily a wooden-framed instrument. Most of the developments in advanced materials and construction techniques were reserved for the sextant.
There are examples of sextants made of wood; however, most are made of brass. To ensure the frame was stiff, instrument makers used thicker frames. This had the drawback of making the instrument heavier, which could reduce accuracy as the navigator's hand shook while working against its weight. To avoid this problem, the frames were modified. Edward Troughton patented the double-framed sextant in 1788. This used two frames held in parallel with spacers, about a centimetre apart, which significantly increased the stiffness of the frame. An earlier version had a second frame that covered only the upper part of the instrument, securing the mirrors and telescope; later versions used two full frames. Since the spacers looked like little pillars, these were also called pillar sextants.
Troughton also experimented with alternative materials. The scales were plated with silver, gold or platinum. Gold and platinum both minimized corrosion problems. The platinum-plated instruments were expensive, due to the scarcity of the metal, though less expensive than gold. Troughton knew William Hyde Wollaston through the Royal Society, and this gave him access to the precious metal. Instruments from Troughton's company that used platinum can be easily identified by the word Platina engraved on the frame. These instruments remain highly valued as collector's items and are as accurate today as when they were constructed. As developments in dividing engines progressed, sextants became more accurate and could be made smaller. To permit easy reading of the vernier, a small magnifying lens was added, and to reduce glare on the frame, some instruments had a diffuser surrounding the magnifier to soften the light. As accuracy increased, the circular arc vernier was replaced with a drum vernier.
Frame designs were modified over time to create a frame that would not be adversely affected by temperature changes. These frame patterns became standardized and one can see the same general shape in many instruments from many different manufacturers.
In order to control costs, modern sextants are now available in precision-made plastic. These are light, affordable and of high quality.
Types of sextants:
While most people think of navigation when they hear the term sextant, the instrument has been used in other professions.
Navigator's sextant: The common type of instrument most people think of when they hear the term sextant.
Sounding sextants: Sextants constructed for use horizontally rather than vertically, developed for use in hydrographic surveys.
Surveyor's sextants: Constructed for use exclusively on land for horizontal angular measurements. Instead of a handle on the frame, they had a socket to allow the attachment of a surveyor's Jacob's staff.
Box or pocket sextants: These are small sextants entirely contained within a metal case. First developed by Edward Troughton, they are usually all brass, with most of the mechanical components inside the case. The telescope extends from an opening in the side, and the index and other parts are completely covered when the case cover is slipped on. Popular with surveyors for their small size (typically only 6.5–8 cm [2+1⁄2–3+1⁄4 in] in diameter and 5 cm [2 in] deep), their accuracy was enabled by improvements in the dividing engines used to graduate the arcs. The arcs are so small that magnifiers are attached to allow them to be read. In addition to these types, there are terms used for various sextants.
A pillar sextant can be either:
- a double-framed sextant, as patented by Edward Troughton in 1788, or
- a surveyor's sextant with a socket for a surveyor's staff (the pillar).
The former is the most common use of the term.
Beyond the sextant:
Quintant and others: Several makers offered instruments with sizes other than one-eighth or one-sixth of a circle. One of the most common was the quintant, or fifth of a circle (a 72° arc reading to 144°). Other sizes were also available, but the odd sizes never became common. Many instruments are found with scales reading to, for example, 135°, but they are simply referred to as sextants. Similarly, there are 100° octants, but these are not treated as unique types of instruments.
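The arithmetic behind these names is simple: double reflection makes the scale read twice the arc's physical span, so an instrument is named for its arc, not its range. A one-line check (illustrative only):

```python
# Double reflection: maximum readable angle = 2 x the physical arc span.
def max_reading_deg(arc_fraction_of_circle: float) -> float:
    """Maximum measurable angle for an instrument whose arc spans the
    given fraction of a full circle."""
    return 2 * 360.0 * arc_fraction_of_circle

assert max_reading_deg(1 / 8) == 90.0               # octant
assert abs(max_reading_deg(1 / 6) - 120.0) < 1e-9   # sextant
assert abs(max_reading_deg(1 / 5) - 144.0) < 1e-9   # quintant (72° arc)
```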
There was interest in much larger instruments for special purposes. In particular a number of full circle instruments were made, categorized as reflecting circles and repeating circles.
Reflecting circles: The reflecting circle was invented by the German geometer and astronomer Tobias Mayer in 1752, with details published in 1767. His development preceded the sextant and was motivated by the need to create a superior surveying instrument. The reflecting circle is a complete circular instrument graduated to 720° (to measure distances between heavenly bodies there is no need to read an angle greater than 180°, since the minimum distance will always be less than 180°). Mayer presented a detailed description of this instrument to the Board of Longitude, and John Bird used the information to construct one, sixteen inches in diameter, for evaluation by the Royal Navy. This instrument was one of those used by Admiral John Campbell during his evaluation of the lunar distance method. It differed from Mayer's design in that it was graduated to 360°, and it was so heavy that it was fitted with a support that attached to a belt. It was not considered better than the Hadley octant and was less convenient to use. As a result, Campbell recommended the construction of the sextant.
Jean-Charles de Borda further developed the reflecting circle. He modified the position of the telescopic sight in such a way that the mirror could be used to receive an image from either side relative to the telescope. This eliminated the need to ascertain that the mirrors were precisely parallel when reading zero. This simplified the use of the instrument. Further refinements were performed with the help of Etienne Lenoir. The two of them refined the instrument to its definitive form in 1777. This instrument was so distinctive it was given the name Borda circle or repeating circle.
Borda and Lenoir developed the instrument for geodetic surveying. Since it was not used for celestial measurements, it did not use double reflection, substituting two telescope sights instead. As such, it was not a reflecting instrument. It was notable as being the equal of the great theodolite created by the renowned instrument maker Jesse Ramsden.
Josef de Mendoza y Ríos redesigned Borda's reflecting circle (London, 1801), with the goal of using it together with his lunar tables published by the Royal Society (London, 1805). He made a design with two concentric circles and a vernier scale and recommended averaging three sequential readings to reduce the error. Borda's system was based not on a circle of 360° but on one of 400 grads (Borda spent years calculating his tables with a circle divided into 400 grads). Mendoza's lunar tables were used through almost the entire nineteenth century (see Lunar distance (navigation)).
Edward Troughton also modified the reflecting circle. He created a design with three index arms and verniers. This permitted three simultaneous readings to average out the error.
As a navigation instrument, the reflecting circle was more popular with the French navy than with the British.
Bris sextant: The Bris sextant is not a true sextant, but it is a true reflecting instrument based on the principle of double reflection and subject to the same rules and errors as common octants and sextants. Unlike those instruments, which can measure any angle within their range, the Bris sextant is a fixed-angle instrument capable of accurately measuring only a few specific angles. It is particularly suited to determining the altitude of the sun or moon.
Surveying sector: Francis Ronalds invented an instrument for recording angles in 1829 by modifying the octant. A disadvantage of reflecting instruments in surveying applications is that optics dictate that the mirror and index arm rotate through half the angular separation of the two objects. The angle thus needs to be read, noted and a protractor employed to draw the angle on a plan. Ronalds’ idea was to configure the index arm to rotate through twice the angle of the mirror, so that the arm could then be used to draw a line at the correct angle directly onto the drawing. He used a sector as the basis of his instrument and placed the horizon glass at one tip and the index mirror near the hinge connecting the two rulers. The two revolving elements were linked mechanically and the barrel supporting the mirror was twice the diameter of the hinge to give the required angular ratio.
**Dopamine receptor D4**
Dopamine receptor D4:
The dopamine receptor D4 is a dopamine D2-like G protein-coupled receptor encoded by the DRD4 gene on chromosome 11 at 11p15.5. The structure of DRD4 was recently reported in complex with the antipsychotic drug nemonapride. As with other dopamine receptor subtypes, the D4 receptor is activated by the neurotransmitter dopamine. It is linked to many neurological and psychiatric conditions, including schizophrenia, bipolar disorder, ADHD, addictive behaviors, Parkinson's disease, and eating disorders such as anorexia nervosa. A weak association has been drawn between DRD4 and borderline personality disorder.
It is also a target for drugs which treat schizophrenia and Parkinson's disease. The D4 receptor is considered to be D2-like in which the activated receptor inhibits the enzyme adenylate cyclase, thereby reducing the intracellular concentration of the second messenger cyclic AMP.
Genetics:
The human protein is coded by the DRD4 gene on chromosome 11 at 11p15.5. There are slight variations (mutations/polymorphisms) in the human gene:
- a 48-base pair VNTR in exon 3
- C-521T in the promoter
- a 13-base pair deletion of bases 235 to 247 in exon 1
- a 12-base pair repeat in exon 1
- Val194Gly
- a polymorphic tandem duplication of 48 bp
Mutations in this gene have been associated with various behavioral phenotypes, including autonomic nervous system dysfunction, attention deficit/hyperactivity disorder, schizophrenia and the personality trait of novelty seeking.
48-base pair VNTR: The 48-base pair variable number tandem repeat (VNTR) in exon 3 ranges from 2 to 11 repeats. Dopamine is more potent at the D4 receptor with the 2-repeat or 7-repeat alleles than with the 4-repeat variant. DRD4-7R, the 7-repeat (7R) variant of DRD4, has been linked to a susceptibility to developing ADHD in several meta-analyses, as well as to other psychological traits and disorders. Adults and children with the DRD4 7-repeat polymorphism show variations in auditory-evoked gamma oscillations, which may be related to attention processing. The frequency of the alleles varies greatly between populations; e.g., the 7-repeat version has a high incidence in the Americas and a low one in Asia. "Long" versions of the polymorphism are the alleles with 6 to 10 repeats. 7R appears to react less strongly to dopamine molecules. The 48-base pair VNTR has been the subject of much speculation about its evolution and role in human behaviors cross-culturally. The 7R allele appears to have been selected for about 40,000 years ago. In 1999, Chen and colleagues observed that populations that migrated farther between 30,000 and 1,000 years ago had a higher frequency of 7R/long alleles. They also showed that nomadic populations had higher frequencies of 7R alleles than sedentary ones. More recently it was observed that the health status of nomadic Ariaal men was higher if they had 7R alleles, whereas in recently settled (non-nomadic) Ariaal, those with 7R alleles appeared to have slightly poorer health.
Novelty seeking:
Despite early findings of an association between the DRD4 48bp VNTR and novelty seeking (a normal characteristic of exploratory and excitable people), a 2008 meta-analysis compared 36 published studies of novelty seeking and the polymorphism and found no effect. Results are consistent with novelty-seeking behavior being a complex trait associated with many genes, and the variance attributable to DRD4 by itself being very small. The meta-analysis of 11 studies did find that another polymorphism in the gene, the -521C/T, showed an association with novelty seeking. While human results are not strong, research in animals has suggested stronger associations and new evidence suggests that human encroachment may exert selection pressure in favor of DRD4 variants associated with novelty seeking.
Cognition:
Several studies have shown that agonists that activate the D4 receptor increase working memory performance and fear acquisition in monkeys and rodents according to a U-shaped dose response curve. However, antagonists of the D4 receptor reverse stress-induced or drug-induced working memory deficits. Gamma oscillations, which may be correlated with cognitive processing, can be increased by D4R agonists, but are not significantly reduced by D4R antagonists.
Cognitive development:
Several studies have suggested that parenting may affect the cognitive development of children with the 7-repeat allele of DRD4. Parenting characterized by maternal sensitivity, mindfulness, and autonomy support at 15 months was found to alter children's executive functions at 18 to 20 months. Children with poorer-quality parenting were more impulsive and sensation-seeking than those with higher-quality parenting, and higher-quality parenting was associated with better executive control in 4-year-olds.
Ligands:
Agonists:
- WAY-100635: potent full agonist, with a 5-HT1A antagonistic component
- A-412,997: full agonist, >100-fold selective over a panel of seventy different receptors and ion channels
- ABT-724: developed for treatment of erectile dysfunction
- ABT-670: better oral bioavailability than ABT-724
- FAUC 316: partial agonist, >8600-fold selective over other dopamine receptor subtypes
- FAUC 299: partial agonist
- F-15063: antipsychotic with partial D4 agonism
- (E)-1-aryl-3-(4-pyridinepiperazin-1-yl)propanone oximes
- PIP3EA: partial agonist
- Flibanserin: partial agonist
- PD-168,077: D4 selective, but also binds to α1A, α2C and 5-HT1A
- CP-226,269: D4 selective, but also binds to D2, D3, α2A, α2C and 5-HT1A
- Ro10-5824: partial agonist
- Roxindole: D4 selective, but also a D2 and D3 autoreceptor agonist, 5-HT1A receptor agonist and serotonin reuptake inhibitor
- Apomorphine: D4 selective, but also a D2 and D3 agonist and a weak α-adrenergic and serotonergic antagonist
Antagonists:
- A-381393: potent, subtype-selective antagonist (>2700-fold)
- FAUC 213
- L-745,870
- L-750,667
- ML-398
- S 18126: also σ1 affinity
- Fananserin: mixed 5-HT2A/D4 antagonist
- Olanzapine: an atypical antipsychotic
- Buspirone: an anxiolytic
Inverse agonists:
- FAUC F41: subtype selectivity of more than 3 orders of magnitude over D2 and D3
In popular culture:
Michael Connelly's 2020 crime novel Fair Warning (ISBN 978-0-316-53942-5) revolves around a serial killer who uses DNA profiles obtained on the Dark Web to target female victims, specifically those whose DRD4 profiles allegedly make them more susceptible to risk taking and sexual promiscuity.
**Active heave compensation**
Active heave compensation:
Active heave compensation (AHC) is a technique used on lifting equipment to reduce the influence of waves upon offshore operations. AHC differs from Passive Heave Compensation by having a control system that actively tries to compensate for any movement at a specific point, using power to gain accuracy.
Principles:
The purpose of AHC is to keep a load, held by equipment on a moving vessel, motionless with regard to the seabed or another vessel. Commercial offshore cranes usually use a motion reference unit (MRU) or pre-set measurement position detection to detect the current ship displacements and rotations in all directions. A control system, often PLC- or computer-based, then calculates how the active parts of the system are to react to the movement. The performance of an AHC system is normally limited by power, motor speed and torque, by measurement accuracy and delay, or by the computing algorithms. The choice of control method, such as using preset values or delayed signals, may affect performance and give large residual motions, especially with unusual waves.
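A minimal sketch of the compensation loop described above, assuming an ideal winch and a perfectly measured crane-tip heave velocity (the gain and the sinusoidal sea state are hypothetical):

```python
import math

# Illustrative AHC loop (hypothetical gain and sea state): the winch drives
# the hook at the opposite of the measured vertical crane-tip velocity, so
# the load's absolute position stays nearly constant.

def ahc_winch_velocity(tip_heave_velocity_ms: float, gain: float = 1.0) -> float:
    """Commanded hook velocity relative to the vessel (m/s): the winch
    pays out or hauls in at the opposite of the tip's heave velocity."""
    return -gain * tip_heave_velocity_ms

# Simulate a sinusoidal heave; load deviation = tip motion + relative hook motion.
dt = 0.01          # control period (s)
hook_rel = 0.0     # hook position relative to the crane tip (m)
max_dev = 0.0
for step in range(1000):
    t = step * dt
    tip_vel = 0.5 * math.cos(t)                    # measured tip velocity (m/s)
    hook_rel += ahc_winch_velocity(tip_vel) * dt   # winch response (ideal)
    tip_pos = 0.5 * math.sin(t)                    # tip displacement (m)
    max_dev = max(max_dev, abs(tip_pos + hook_rel))

# With this ideal loop, the residual deviation stays around a centimetre
# despite ±0.5 m of heave.
```

Real systems add feed-forward, actuator rate limits, and the measurement delays the text mentions, all of which enlarge the residual motion.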
State of the art AHC systems are real time systems that can calculate and compensate any displacement in a matter of milliseconds. Accuracy then depends on the forces on the system, and thus the shape of the waves more than the size of the waves.
Application:
Electric winch systems: In an electric winch system, the wave movement is compensated by automatically driving the winch in the opposite direction at the same speed, so that the hook keeps its position relative to the seabed. AHC winches are used in ROV systems and for lifting equipment that is to operate near or at the seabed. Active compensation can include tension control, which aims to keep wire tension at a certain level while operating in waves. Guide wires, used to guide a load to an underwater position, may use AHC and tension control in combination.
Hydraulic winch systems: Hydraulic cranes can use hydraulic cylinders to compensate, or they can utilize a hydraulic winch.
Hydraulic "active boost" winches control the oil flow from the pump(s) to the winch so that the target position is reached. Hydraulic winch systems can use accumulators and passive heave compensation to form a semi-active system with both an active and a passive component. In such systems the active part will take over when the passive system is too slow or inaccurate to meet the target of the AHC control system. AHC cranes need to calculate the vertical displacement and/or the velocity of the crane tip position in order to actively heave compensate a load sub-sea.
A good AHC crane is able to keep its load steady to within a few centimeters in waves of up to 8 m (±4 m).
AHC cranes are typically used for subsea lifting or construction operations, and special rules apply to certified heave-compensating equipment. The US Navy has used AHC to create a roll-on/roll-off (Ro/Ro) system for two vessels or floating platforms at sea; the system uses AHC via hydraulic cylinders. According to some, it is currently not commercially interesting due to cost, limited applicability and the large amount of power required.
The latest development is to compensate not only in the vertical direction but also in the horizontal directions, making it possible to perform operations on offshore wind turbines.
AHC for towing operations
When towing side-scan sonars or scientific sampling systems, the stability of the towed equipment is important for data quality. These towed systems usually have low water resistance, and constant tension (CT) does not help stabilize the equipment when the vessel is affected by waves. AHC is in most cases a better solution for stabilizing the towed body. The AHC controller uses information about the towing depth and the length of the towing line to calculate the angle of the towing line. This is used to drive the winch to compensate for the towing-point movement and ensures that the towed device moves smoothly through the sea.
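Under the simplifying assumption of a straight towing line, the geometry described here can be sketched as follows (the function names are illustrative, not from any real winch controller):

```python
import math

def towline_angle(depth_m: float, line_length_m: float) -> float:
    """Angle of the towing line below horizontal, assuming a straight line."""
    return math.asin(depth_m / line_length_m)

def winch_speed_for_compensation(heave_velocity_m_s: float,
                                 depth_m: float, line_length_m: float) -> float:
    """Line speed needed so that vertical tow-point motion is absorbed by
    paying line in/out along the inclined towing line."""
    angle = towline_angle(depth_m, line_length_m)
    return -heave_velocity_m_s / math.sin(angle)

# With a 300 m line towing at 150 m depth the line sits 30° below horizontal,
# so 1 m/s of tow-point heave requires 2 m/s of line speed to cancel.
print(round(math.degrees(towline_angle(150.0, 300.0)), 1))        # 30.0
print(round(winch_speed_for_compensation(1.0, 150.0, 300.0), 2))  # -2.0
```

The shallower the line angle, the more line speed is needed per unit of heave, which is one reason low-drag towed bodies benefit from AHC rather than constant tension.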
New generation AHC Controllers:
Active Heave Compensation has mainly been applied in the offshore oil and gas sector where the development has been focused on increasing the capacities of the compensating winches or cylinders. Cost and complexity of AHC systems have limited the use of this technology in other subsea applications, such as marine research. Control technology advancements in recent years are allowing AHC to become more standardized and available for applications where cost and simplicity are significant.
A new-generation AHC controller with an integrated MRU (motion sensor) is now available, making it easier for winch and crane manufacturers to integrate AHC into their products without involving external experts.
**Plantar fascial rupture**
Plantar fascial rupture:
A plantar fascial rupture is a painful tear in the plantar fascia, the connective tissue that spans the bottom of the foot. The condition plantar fasciitis may increase the likelihood of rupture. A plantar fascial rupture may be mistaken for plantar fasciitis or even a calcaneal fracture; an MRI is often needed for proper diagnosis.
Causes:
The risk of plantar fascia tears can be increased by certain factors, including: being overweight; non-supportive footwear; flat-arched feet; high-arched feet; a sudden increase in activity or overuse; hormone problems; lack of flexibility of the calf, Achilles tendon and plantar fascia; and connective tissue disorders such as rheumatoid arthritis.
Types:
Complete
Complete tears of the plantar fascia are often due to sudden trauma or injury. Often, the rupture will be accompanied by a popping sound and a painful snapping sensation. The bottom of the foot often bruises and swells. Former NFL athlete Peyton Manning suffered a complete rupture in 2015. The surgical procedure known as a plantar fascia release actually involves the purposeful infliction of a complete tear of the plantar fascia. This is intended to relieve plantar fasciitis symptoms when the tissue recovers by building more tissue, elongating the previously tight plantar fascia.
Partial
Partial tears are seemingly even less common than complete tears. They are more likely to arise from overuse in activities such as daily running. The bottom of the foot may be swollen or bruised.
Treatment:
Full recovery from both complete and partial tears typically takes 12 weeks or more. However, activities may gradually resume after 6–8 weeks when the plantar fascia will be mostly recovered. Surgery is typically a last resort. At home, it might be advisable to follow the RICE method to reduce inflammation and ease pain.
Immobilization
For the first 2–4 weeks after diagnosis, patients are often instructed to use a walking boot to immobilize and protect the foot.
Physical therapy
During the immobilization period, it is important to keep the foot flexible by lightly stretching the foot and calf regularly.
As the plantar fascia recovers, physical therapy exercises help stabilize the ankle and correct gait patterns that may have contributed to the tear. Stretching and strengthening exercises decrease the chance of reinjury.
Other treatments
Platelet-rich plasma injections may be used to help accelerate recovery and decrease the chance of reinjury.
Cortisone injections may ease pain.
**ArchBang**
ArchBang:
ArchBang Linux is a simple, lightweight rolling-release Linux distribution based on a minimal Arch Linux operating system with the i3 window manager; it previously used the Openbox window manager. ArchBang is especially suitable for high performance on old or low-end hardware with limited resources. ArchBang's aim is to provide a simple out-of-the-box Arch-based Linux distribution with a pre-configured i3 desktop suite, adhering to Arch principles. ArchBang has also been recommended as a fast installation method for people who have experience installing Arch Linux but want to avoid the more demanding default installation of Arch Linux when reinstalling it on another PC.
History:
Inspired by CrunchBang Linux (which was derived from Debian), ArchBang was originally conceived and founded in a forum thread posted on the CrunchBang Forums by Willensky Aristide (a.k.a. Will X TrEmE). Aristide wanted a rolling release with the Openbox setup that CrunchBang came with. Arch Linux provided the light, configurable rolling-release system that was needed as a base for the Openbox desktop. With the encouragement and help of many in the CrunchBang community, and the addition of developer Pritam Dasgupta (a.k.a. sHyLoCk), the project began to take form. The goal was to make Arch Linux look like CrunchBang. As of April 16, 2012, the new project leader is Stan McLaren.
Installation:
ArchBang is available as an x86-64 ISO file for live CD use or for installation on a USB flash drive. The live CD is designed to allow the user to test the operating system prior to installation. ArchBang comes with a modified Arch Linux graphical installation script and also provides a simple, easy-to-follow, step-by-step installation guide.
Reception:
Jesse Smith reviewed the ArchBang 2011 for DistroWatch Weekly: The ISO for ArchBang's live disc weighs in at approximately 530 MB and, after showing us a boot menu, it boots into an Openbox environment in under a minute. The default desktop is dark, the background mostly black. A task switcher sits at the bottom of the screen and a Conky panel displays resource usage information to the right-hand side of the display. Right-clicking on the desktop brings up a menu that allows us to launch applications (including the installer), change settings or logout/shutdown.
Smith also reviewed ArchBang 2013.09.01. Whitson Gordon of Lifehacker reviewed ArchBang in 2011: ArchBang has all of that, without the arduous installation process. ArchBang, like most other Linux distributions, comes on a Live CD. Just boot it up, and you'll head straight into a desktop, from which you can try out the system or install it directly to your computer. The installation is actually very similar to Arch's, only without the config file editing, the driver installations, or the pain of running startx and seeing nothing happen. You just pick your drives, hit the install button, and in five minutes, you're done. Of course, you can edit the config files if you so desire—you just don't have to.
**Clinica Chimica Acta**
Clinica Chimica Acta:
Clinica Chimica Acta, the International Journal of Clinical Chemistry is a peer-reviewed medical journal covering research in clinical chemistry and laboratory medicine.
Abstracting and indexing:
This journal is abstracted and indexed in BIOSIS, Chemical Abstracts, Clinical Chemistry Lookout, Current Clinical Chemistry, Current Contents/Life Sciences, EMBASE, EMBiology, FRANCIS, Index Chemica, Informedicus, MEDLINE, PASCAL, Reference Update, and Scopus.
**Human betaherpesvirus 6B**
Human betaherpesvirus 6B:
Human betaherpesvirus 6B (HHV-6B) is a species of virus in the genus Roseolovirus, subfamily Betaherpesvirinae, family Herpesviridae, and order Herpesvirales.
Taxonomy:
In 1992 the two variants were recognised within Human herpesvirus 6 on the basis of differing restriction endonuclease cleavages, monoclonal antibody reactions, and growth patterns. In 2012 these two variants were officially recognised as distinct species by the International Committee on Taxonomy of Viruses and named Human betaherpesvirus 6A and Human betaherpesvirus 6B. Despite now being recognised as paraphyletic, the name Human herpesvirus 6 still sees usage in clinical contexts.
Pathology:
Human betaherpesvirus 6B affects humans. Primary infection with this virus is the cause of the common childhood illness exanthema subitum (also known as roseola infantum or sixth disease). Additionally, reactivation is common in transplant recipients, which can cause several clinical manifestations such as encephalitis, bone marrow suppression, and pneumonitis.
**Coverity**
Coverity:
Coverity is a proprietary static code analysis tool from Synopsys. This product enables engineers and security teams to find and fix software defects.
Coverity started as an independent software company in 2002 at the Computer Systems Laboratory at Stanford University in Palo Alto, California. It was founded by Benjamin Chelf, Andy Chou, and Seth Hallem with Stanford professor Dawson Engler as a technical adviser. The headquarters was moved to San Francisco. In June 2008, Coverity acquired Solidware Technologies. In February 2014, Coverity announced an agreement to be acquired by Synopsys, an electronic design automation company, for $350 million net of cash on hand.
Products:
Coverity is a static code analysis tool for C, C++, C#, Java, JavaScript, PHP, Python, .NET, ASP.NET, Objective-C, Go, JSP, Ruby, Swift, Fortran, Scala, VB.NET, and TypeScript. It also supports more than 70 different frameworks for Java, JavaScript, C# and other languages. Coverity Scan is a free cloud-based static-analysis service for the open-source community.
Applications:
Under a United States Department of Homeland Security contract in 2006, the tool was used to examine over 150 open source applications for bugs; 6,000 bugs found by the scan were fixed across 53 projects. The National Highway Traffic Safety Administration used the tool in its 2010–2011 investigation into reports of sudden unintended acceleration in Toyota vehicles. The tool was used by CERN on the software employed in the Large Hadron Collider and by the NASA Jet Propulsion Laboratory during the flight software development of the Mars rover Curiosity.
**Sulfate chloride**
Sulfate chloride:
The sulfate chlorides are double salts containing both sulfate (SO42–) and chloride (Cl–) anions. They are distinct from the chlorosulfates, which have a chlorine atom attached to the sulfur as the ClSO3− anion.
Many minerals in this family exist. Many are found associated with volcanoes and fumaroles. As minerals they are included in the Nickel-Strunz classification group 7.DG.
The book Hey's Chemical Index of Minerals groups these in subgroup 12.2.
List:
Artificial
Some "chloride sulfates" are sold as solutions in water and used for water treatment. These include ferric chloride sulfate and polyaluminium sulfate chloride. The solutions may also be called "chlorosulfates" even though they do not contain a chlorosulfate group.
**Coders at Work**
Coders at Work:
Coders at Work: Reflections on the Craft of Programming (ISBN 1-430-21948-3) is a 2009 book by Peter Seibel comprising interviews with 15 highly accomplished programmers. The primary topics in these interviews include how the interviewees learned programming, how they debug code, their favorite languages and tools, their opinions on literate programming, proofs, code reading and so on.
Interviewees:
Jamie Zawinski
Brad Fitzpatrick
For studying Perl, he recommends Higher-Order Perl by Mark Jason Dominus.
Douglas Crockford
Brendan Eich
Joshua Bloch
Joe Armstrong
Simon Peyton Jones
Mentions David Turner's paper on S-K combinators (cf. SKI combinator calculus). The S-K combinators are a way of translating and then executing the lambda calculus. Turner showed in his paper how to translate the lambda calculus into the three combinators S, K and I, which are all just closed lambda terms, with I = SKK. So in effect you take a lambda term and compile it to just Ss and Ks.
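The combinators Peyton Jones describes can be demonstrated in a few lines of Python, using curried one-argument functions as an illustrative encoding of the lambda terms:

```python
# S x y z = x z (y z);  K x y = x;  I is the identity.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = S(K)(K)  # derived identity: S K K z = K z (K z) = z

print(I(42))        # 42
print(I("hello"))   # hello
```

The derivation I = SKK is visible by expansion: S K K z reduces to K z (K z), and K discards its second argument, leaving z.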
Recalls his first instance of learning functional programming when taking a course by Arthur Norman who showed how to build doubly linked lists without any side effects at all.
Mentions the paper "Can Programming be Liberated from the von Neumann Style" by John Backus.
Wants John Hughes to write a paper for the Journal of Functional Programming on why static typing is bad. Hughes has written a popular paper titled "Why Functional Programming Matters".
Mentions a data structure called "zipper" that is a very useful functional data structure. Peyton Jones also mentions the 4-5 line program that Hughes wrote to calculate an arbitrary number of digits of e lazily.
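The zipper's list form can be sketched in a few lines; this is an illustrative Python rendering of the idea (a pair of reversed prefix and suffix), with hypothetical helper names:

```python
def from_list(xs):
    """Zipper focused on the first element: (reversed prefix, suffix)."""
    return ([], list(xs))

def right(z):
    before, after = z
    return ([after[0]] + before, after[1:])  # move focus one step right

def focus(z):
    return z[1][0]

def replace(z, x):
    before, after = z
    return (before, [x] + after[1:])  # O(1) update at the focus

z = right(from_list([1, 2, 3]))  # focus moves from 1 to 2
print(focus(z))                  # 2
print(replace(z, 9)[1])          # [9, 3]
```

The same idea generalizes to trees, where the "before" part records the path from the root to the focused node.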
Mentions that the sequential implementation of a double-ended queue is a first year undergraduate programming problem. For a concurrent implementation with a lock per node, it's a research paper problem. With transactional memory, it's an undergraduate problem again.
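The "first-year undergraduate" sequential version might look like the following sketch of a doubly linked deque (illustrative, minimal, with no concurrency):

```python
class Node:
    def __init__(self, value):
        self.value, self.prev, self.next = value, None, None

class Deque:
    """Sequential double-ended queue on a doubly linked list."""
    def __init__(self):
        self.head = self.tail = None

    def push_front(self, value):
        node = Node(value)
        node.next = self.head
        if self.head:
            self.head.prev = node
        else:
            self.tail = node
        self.head = node

    def push_back(self, value):
        node = Node(value)
        node.prev = self.tail
        if self.tail:
            self.tail.next = node
        else:
            self.head = node
        self.tail = node

    def pop_front(self):
        node = self.head
        self.head = node.next
        if self.head:
            self.head.prev = None
        else:
            self.tail = None
        return node.value

d = Deque()
d.push_back(1); d.push_back(2); d.push_front(0)
print(d.pop_front())  # 0
print(d.pop_front())  # 1
```

The difficulty Peyton Jones points to is entirely in making operations at both ends safe under fine-grained concurrent access, not in the sequential pointer manipulation shown here.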
Favorite books/authors: Programming Pearls by Jon Bentley; a chapter titled "Writing Programs for 'The Book'" by Brian Hayes from the book Beautiful Code, where he explores the problem of determining which side of a line a given point lies on; The Art of Computer Programming by Don Knuth; Purely Functional Data Structures by Chris Okasaki, exploring how to build data structures like queues and heaps without side effects and with reasonable complexity bounds; Structure and Interpretation of Computer Programs by Abelson and Sussman; Compiling with Continuations by Andrew Appel; A Discipline of Programming by Dijkstra; Per Brinch Hansen's book about writing concurrent operating systems.
Peyton Jones mentions a Fred Brooks paper that he reread and liked, "The Computer Scientist as Toolsmith".
Peter Norvig
In 1972/73, when Norvig was still in high school, he found the Knuth algorithm for shuffling cards.
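The algorithm in question, the Fisher–Yates shuffle as presented by Knuth, produces each of the n! orderings with equal probability and runs in place in O(n). A minimal sketch:

```python
import random

def knuth_shuffle(deck, rng=random):
    """Fisher–Yates: swap each position with a uniformly chosen
    earlier (or same) position, working from the back."""
    for i in range(len(deck) - 1, 0, -1):
        j = rng.randint(0, i)  # inclusive bounds
        deck[i], deck[j] = deck[j], deck[i]
    return deck

deck = list(range(52))
shuffled = knuth_shuffle(deck[:], random.Random(0))
print(sorted(shuffled) == list(range(52)))  # True: still a permutation of the deck
```

The common mistake is to draw j from the whole range at every step, which biases the result; restricting j to 0..i is what makes the distribution uniform.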
The first interesting program that Norvig wrote was Game of Life.
Wrote an essay called "Teach Yourself Programming in Ten Years".
On practical applications of academic concepts, he mentions that part of the problem is that academics do not see the whole problem and another part is education. If you have a bunch of programmers who don't understand what a monad is and haven't taken courses in category theory, there's a gap.
Books/Authors he recommends include Knuth; Cormen, Leiserson and Rivest; Sally Goldman; Abelson and Sussman; McConnell.
Knuth has written an essay about developing TeX where he talks about flipping over to his pure, destructive QA personality and doing his darnedest to break his own code.
Talks about the job interview process at Google and says that the best signal is if somebody has worked with one of their employees and they can vouch for the candidate. He also talks about "resume predictor" that takes resume attributes such as experience, winning a programming contest, working on open source project etc. and predicts fit. He also mentions assigning of scores 1 to 4 by interviewers and generally turning down candidates who get a 1 by any of the interviewers unless someone at Google fights for hiring them.
Guy Steele
Collaborated with Gerald Sussman on a series of papers now known as "The Lambda Papers", which included the original definition of the Scheme programming language.
On getting a degree in computer science, Guy mentions that he had set out to be a pure math major, but realized he had no intuition whatsoever for infinite-dimensional Banach spaces, and that is what made him switch to an applied math major.
Favorite authors and books: Knuth; Aho, Hopcroft and Ullman (Guy says that this book is where he learned sorting for "real"); Gerald Weinberg on The Psychology of Computer Programming; Fred Brooks's The Mythical Man-Month.
Suggests that you want to design the specification of what's in the middle in such a way that it is naturally also correct on the boundaries, rather than treating boundaries as special cases.
Mentions a parallel garbage collection algorithm developed by Dijkstra that fit on half a page. David Gries wrote a paper for CACM using techniques developed by his student Susan Owicki to prove the correctness of this algorithm.
Dan Ingalls
L. Peter Deutsch
Ken Thompson
Fran Allen
Bernie Cosell
Donald Knuth
**IBM Advanced Program-to-Program Communication**
IBM Advanced Program-to-Program Communication:
In computing, Advanced Program-to-Program Communication (APPC) is a protocol which computer programs can use to communicate over a network. APPC operates at the application layer of the OSI model and enables communication between programs on different computers, from portables and workstations to midrange and host computers. APPC is defined as VTAM LU 6.2 (logical unit type 6.2). APPC was developed in 1982 as a component of IBM's Systems Network Architecture (SNA). Several APIs were developed for programming languages such as COBOL, PL/I, C and REXX.
APPC software is available for many different IBM and non-IBM operating systems, either as part of the operating system or as a separate software package. APPC serves as a translator between application programs and the network. When an application on your computer passes information to the APPC software, APPC translates the information and passes it to a network interface, such as a LAN adapter card. The information travels across the network to another computer, where the APPC software receives the information from the network interface. APPC translates the information back into its original format and passes it to the corresponding partner application.
APPC is mainly used by IBM installations running operating systems such as z/OS (formerly MVS, then OS/390), z/VM (formerly VM/CMS), z/TPF, IBM i (formerly OS/400), OS/2, AIX and z/VSE (formerly DOS/VSE). Microsoft also includes SNA support in its Host Integration Server. Major IBM software products also support APPC, including CICS, Db2, CIM and WebSphere MQ.
Unlike TCP/IP, in which both communication partners always have a fixed role (one is always the server, the other always the client), APPC is a peer-to-peer protocol. The communication partners in APPC are equal; every application can act as both server and client. The roles, and the number of parallel sessions between the partners, are negotiated over CNOS (Change Number Of Sessions) sessions with a special log mode (e.g. at IBM, 'snasvcmg'). Data are then transmitted over 'data sessions', whose log modes can be configured in detail by the VTAM administrator (e.g. length of the data blocks, coding, etc.).
It was also apparent to the architects of APPC that it could be used to provide operating system services on remote computers. A separate architecture group was formed to use APPC to enable programs on one computer to transparently use the data management services of remote computers. For each such use, an APPC session is created and used in a client–server fashion by the Conversational Communications Manager of the Distributed Data Management Architecture (DDM). Message formats and protocols were defined for accessing and managing record-oriented files, stream-oriented files, relational databases (as the base architecture of the Distributed Relational Database Architecture, DRDA), and other services. A variety of DDM and DRDA products were implemented by IBM and other vendors. With the increasing prevalence of TCP/IP, APPC has declined, although many IBM systems have translators, such as Enterprise Extender (RFC 2353), to allow sending APPC-formatted traffic over IP networks. APPC should not be confused with the similarly named APPN (Advanced Peer-to-Peer Networking). APPC manages communication between programs, operating at the application and presentation layers. By contrast, APPN manages communication between machines, including routing, and operates at the transport and network layers.
**Pub cheese**
Pub cheese:
Pub cheese is a type of soft cheese spread and dip prepared using cheese as a primary ingredient, usually with some type of beer or ale added. It can be made with smoked cheeses, or with liquid smoke added to impart a smoky flavor. It is typically served with crackers or vegetables, whereby the cheese is spread onto these foods, or the foods may be dipped in it. It is also used as a topping on sandwiches, such as hamburgers. Pub cheese is a traditional bar snack in the United States. Pub cheese is sometimes prepared using a mix of processed cheese and pure cheese. It is a mass-produced product in the United States: for example, Président is a brand that includes pub cheese in its line, and Trader Joe's has a store brand of pub cheese. Some bars, breweries, public houses and restaurants produce their own versions of pub cheese.
**UTC+07:00**
UTC+07:00:
UTC+07:00 is an identifier for a time offset from UTC of +07:00. In ISO 8601 the associated time would be written as 2023-08-19T15:03:30+07:00. It is 7 hours ahead of UTC, meaning that when the time in UTC areas is midnight (00:00), the time in UTC+07:00 areas would be 7:00 in the morning.
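The example timestamp can be reproduced with, e.g., Python's standard library, which represents fixed offsets like UTC+07:00 directly:

```python
from datetime import datetime, timezone, timedelta

ict = timezone(timedelta(hours=7))  # fixed offset UTC+07:00 (Indochina Time)
t = datetime(2023, 8, 19, 15, 3, 30, tzinfo=ict)
print(t.isoformat())                           # 2023-08-19T15:03:30+07:00
print(t.astimezone(timezone.utc).isoformat())  # 2023-08-19T08:03:30+00:00
```

Converting to UTC subtracts the seven-hour offset, matching the statement that 07:00 local corresponds to midnight UTC.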
Also known as Indochina Time (ICT) and Western Indonesian Time (Indonesian: Waktu Indonesia Barat, WIB) (in Indonesia), it is used in:
As standard time (year-round):
Principal cities: Ho Chi Minh City, Hanoi, Phnom Penh, Vientiane, Bangkok, Krasnoyarsk, Novosibirsk, Jakarta, Batang, Pekalongan, Tegal, Banjarnegara, Wonosobo, Pemalang, Kendal, Brebes, Magelang, Temanggung, Kebumen, Purworejo, Purbalingga, Banyumas, Cilacap, Surakarta, Klaten, Demak, Kudus, Sleman, Sragen, Blora, Kulon Progo, Pacitan, Medan, Lhokseumawe, Langsa, Garut, Gunungsitoli, Karawang, Jepara, Lubuklinggau, Batam, Pangkal Pinang, Palangkaraya, Pagar Alam, Probolinggo, Pasuruan, Purwakarta, Purwokerto, Prabumulih, Palembang, Pematangsiantar, Padangsidempuan, Pekanbaru, Padang, Padang Panjang, Pontianak, Pariaman, Payakumbuh, Dumai, Binjai, Bogor, Cimahi, Cirebon, Bukittinggi, Bandar Lampung, Bandung, Semarang, Surat Thani, Nakhon Si Thammarat, Solok, Sawahlunto, Tanjungpinang, Singkawang, Tebing Tinggi, Sibolga, Sungai Penuh, Sukabumi, Sumedang, Salatiga, Tasikmalaya, Udon Thani, Yogyakarta, Surabaya.
North Asia
Russia – Krasnoyarsk Time: Siberian Federal District (Altai Krai, Altai Republic, Kemerovo Oblast, Khakassia, Krasnoyarsk Krai, Novosibirsk Oblast, Tomsk Oblast, Tuva)
East Asia
It is considered the westernmost time zone in East Asia.
Mongolia – Time in Mongolia: western part, including Khovd, Uvs, Bayan-Ölgii, Govi-Altai and Zavkhan
Southeast Asia
Indonesia – Western Indonesia Time: western zone, including all provinces of Java and surrounding islands (Banten, Jakarta, West Java, Central Java, East Java, and the Special Region of Yogyakarta); all provinces of Sumatra and surrounding islands (Aceh, North Sumatra, West Sumatra, Riau, Riau Islands, Jambi, South Sumatra, Bangka Belitung Islands, Bengkulu, and Lampung); and parts of Kalimantan (West Kalimantan, Central Kalimantan)
Cambodia – Time in Cambodia (Indochina Time)
Laos – Time in Laos (Indochina Time)
Thailand – Time in Thailand (Indochina Time)
Vietnam – Time in Vietnam (Indochina Time)
Oceania / Indian Ocean
Australia – Christmas Island Time: Christmas Island
Antarctica / Southern Ocean
Some bases in Antarctica. See also Time in Antarctica.
Discrepancies between official UTC+07:00 and geographical UTC+07:00:
Since legal, political, and economic in addition to physical or geographical criteria are used in the drawing of time zones, it follows that official time zones do not precisely adhere to meridian lines. The UTC+07:00 time zone, were it drawn by purely geographical terms, would consist of exactly the area between meridians 97°30′ E and 112°30′ E. As a result, there are places which, despite lying in an area with a "physical" UTC+07:00 time, actually use another time zone. Conversely, there are areas that have adopted UTC+07:00, even though their "physical" time zone is UTC+08:00, UTC+06:00, or even UTC+05:00.
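The "physical" zone of a longitude is simply its nearest 15° meridian (15° of longitude per hour), which can be sketched with a small illustrative helper:

```python
def physical_utc_offset(longitude_deg: float) -> int:
    """Nearest whole-hour UTC offset for a longitude, at 15° per hour."""
    return round(longitude_deg / 15.0)

print(physical_utc_offset(105.0))  # 7: Hanoi (~105.8° E) is "physically" UTC+07:00
print(physical_utc_offset(96.0))   # 6: just west of the 97°30' E boundary
print(physical_utc_offset(113.0))  # 8: just east of the 112°30' E boundary
```

The half-hour boundaries at 97°30′ E and 112°30′ E fall exactly where this rounding flips between +6, +7 and +8.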
Areas within UTC+07:00 longitudes using other time zones
This concerns areas within 97°30′ E to 112°30′ E longitude.
Using UTC+06:30
Eastern part of Myanmar.
Using UTC+08:00
In China, many parts of central China, including these divisions: Hainan, Guangxi, Yunnan, Guizhou, Sichuan, Chongqing, Shaanxi, Ningxia, Gansu; the western two-thirds of Hunan; the western half of Hubei, Shanxi, and Inner Mongolia (including its capital Hohhot); and the western third of Guangdong and Henan.
In Russia: Irkutsk Oblast and Buryatia.
Outside China and Russia: most of central Mongolia, including the capital Ulaanbaatar; Peninsular Malaysia; the western part of Sarawak in Malaysian Borneo, including Kuching; and Singapore.
Using UTC+09:00
A western part of the Sakha Republic in Russia, including the urban localities Aykhal and Udachny.
Areas outside UTC+07:00 longitudes using UTC+07:00 time
Areas between 67°30′ E and 97°30′ E ("physical" UTC+05:00 and UTC+06:00)
The westernmost part of Indonesia, including most of the province of Aceh with its capital Banda Aceh.
The westernmost part of Mongolia.
Parts of Russia: a large part of Krasnoyarsk Krai; Tuva; Khakassia; Altai Republic; Altai Krai; Kemerovo Oblast; Novosibirsk Oblast (mostly within the "physical" UTC+05:00 area); Tomsk Oblast (partly within the "physical" UTC+05:00 area).
Areas between 112°30′ E and 127°30′ E ("physical" UTC+08:00)
In Indonesia: the easternmost part of Java, including East Java's capital Surabaya, as well as Sidoarjo, Malang, and Banyuwangi.
The island of Bawean and Madura, and islands of Kangean and Masalembu, which administratively belong to East Java Province.
Eastern part of West Kalimantan and most of Central Kalimantan, including the capital Palangka Raya.
In Russia: the very easternmost part of Krasnoyarsk Krai.
Historical time offsets:
The Republic of China's name for this time offset was Kansu-Szechwan, used until 1949, when the Chinese Communist Party took control of mainland China following the Chinese Civil War and made UTC+08:00 the standard time for all areas under its control. Formerly, from 1918 to 1949, this time offset was used in eastern Sikang and Tsinghai, central Outer Mongolia (1921–1924), and all of Yunnan, Kwangsi, Kweichow, Ningsia, Suiyuan, Kansu, and Shensi.
This time zone was also the standard used in Malaysia and Singapore from 1 June 1905 to 31 December 1932.
**Moonsault**
Moonsault:
A moonsault, moonsault press, or back flip splash is a professional wrestling aerial technique. It was innovated by Mando Guerrero. Much of its popularity in both Japanese and American wrestling is attributed to The Great Muta, despite it being used in North America by "Leaping" Lanny Poffo years before Muta came from Japan.
In a standard moonsault, which is generally attempted from the top rope, a wrestler faces away from the supine opponent and executes a backflip, landing on the opponent in a splash/press position but facing towards the elevated position. Though this move is generally attempted from the top rope to an opponent lying face up on the mat, myriad variations exist, including moonsaults that see the wrestler land on a standing opponent and force them down to the mat. The move is considered a higher-impact version of a splash, since the wrestler utilizes rotational speed. A less common variation sees the wrestler perform a moonsault on a standing opponent, with the torso of the wrestler striking the torso of the opponent (albeit upside down), forcing the opponent backwards and to the ground with the opponent on top of them, usually placing the opponent in a pinning predicament. Most of the variations listed below can also be performed on standing opponents.
Danger and precautions:
When executed properly the moonsault is generally considered safe, but as with any aerial maneuver, there is inherent high risk when it is not executed properly. The wrestler performing the move often misses and lands on their stomach unharmed (such as Keiji Mutoh during Starrcade (1989), when he went for a moonsault on Sting but missed; he was able to land on his feet and deliver a kick). Mutoh underwent double knee replacement surgery on February 18, 2018, and has not performed the moonsault since. In an interview with Tokyo Sports, Mutoh said that he was lucky to be alive after botching a moonsault. In an example of a moonsault gone spectacularly wrong, Eiji Ezaki, better known as Hayabusa, suffered a life-threatening injury on October 22, 2001, while working for the Japanese wrestling promotion Frontier Martial-Arts Wrestling. As Hayabusa began executing a springboard moonsault from the second rope, his feet slipped off the rope and struck the first rope below. As a result, Hayabusa did not have enough height within which to execute the full 360° of the move, causing him to land head first on his neck. He broke two vertebrae and was left quadriplegic, completely ending his career. Hayabusa was eventually able to regain some movement in his lower body, but was never able to wrestle again.
Variations:
Corkscrew moonsault
The corkscrew moonsault is a twisting moonsault in which the wrestler, standing or on an elevated platform such as the top rope or the corner of the ring, performs a moonsault with a 360° twist or multiple twists, landing as if performing a normal moonsault. It was used by KUSHIDA early in his career as the Midnight Express, while Tetsuya Naito previously used it as the Stardust Press.
Diving moonsault
This is a moonsault from the top rope: the wrestler faces away from the supine opponent and executes a diving backflip, landing on the opponent in a splash position but facing towards the elevated position. In this moonsault the wrestler may also land on a standing opponent, forcing them down to the mat. The move is considered a higher-impact version of a splash, since the wrestler utilizes rotational speed.
Double jump moonsault This is a variation of a springboard moonsault. This variation sees the wrestler bounce off the middle rope to elevate themself to the top rope, from where they bounce off to perform the moonsault. This version of a moonsault is often referred to as a picture perfect moonsault or double springboard moonsault. It was used by Christopher Daniels, who called the move the BME (Best Moonsault Ever).
Double rotation moonsault This is a moonsault in which another rotation is performed after the initial moonsault; the first rotation follows the arc of the back. There are two major variants of the double moonsault: an Asai moonsault version, and a normal moonsault from the top turnbuckle to the inside of the ring with two rotations. The first variation sees a wrestler standing on the apron, with an opponent on the floor behind them, jump up on to either the first or second rope and perform a backflip as if to perform an Asai moonsault, but while in mid-air tuck their legs, reducing resistance, and perform a second complete backflip after the first one, landing on a standing opponent below. This is the more common of the two variants due to the increased airtime and height gained from the springboard to the floor. This variant is closely associated with Jack Evans, who popularized it as the Stuntin' 101. Evans is also known to perform a corkscrew version of this variant.
The second variation sees a wrestler ascend to the top rope and perform a backflip while tucking their legs. The tuck reduces resistance, allowing the wrestler to continue rotating after the initial 360° for another 270°, completing the second rotation onto an opponent lying on the mat. This was popularized by Ricochet.
Triple jump moonsault This is a variation of the double jump moonsault where, from a running start, the attacking wrestler jumps to a chair or other elevated platform, onto the top rope, and then does a moonsault from there onto the opponent. This move has been popularized by wrestler Sabu. It is also used by Tiffany Stratton, who called the move "Prettiest Moonsault Ever".
Moonsault side slam Invented by Naomichi Marufuji and called the Shiranui Kai, this is any move in which the wrestler stands on an elevated position, grabs hold of the opponent, and performs a moonsault while still holding on to the opponent, driving them down to the mat. This move is also known as a Solo Spanish Fly. Multiple variations exist, such as a belly-to-belly version used by Matt Sydal, which sees him hold the opponent in a belly-to-belly position while performing the moonsault to land on top of them in a seated senton; he calls this version the Sydal Special. There are also a side slam version and a rolling version, which can also be performed while standing: John Morrison used the standing version as the C4, while Frankie Kazarian uses the rolling version as the Flux Capacitor.
Rounding moonsault This variation is also referred to as a sideways moonsault, rolling moonsault, rounding splash, and original-style moonsault. The attacker climbs the top rope, or another elevated position, facing away from the opponent. Instead of doing a backflip as in a normal moonsault, the attacker rotates their body off to one side diagonally and lands on the opponent chest-first, facing the turnbuckle as in a normal moonsault. It was innovated by Tiger Mask I and used by Bam Bam Bigelow as the Bam Bam-Sault and by Vader as the Vadersault.
Another variation of this move sees the attacker facing the prone opponent, leaping forward into the air and rotating their body in a semi-circle to end upside down, as if doing a midair cartwheel, then landing on the opponent chest first facing the turnbuckle. Alexa Bliss uses this move as her finisher, which she calls Twisted Bliss. Dana Brooke uses a variation of this move while running toward an opponent lying on the mat, rotating in the opposite direction.
Split-legged moonsault This moonsault variation sees the performer jump up and split their legs onto the left and right top ropes surrounding the top turnbuckle, using the impact of their thighs on the ropes to flip themselves over, executing a moonsault onto a prone opponent. A variation of the split-legged moonsault is the Arabian Press, in which the performer's thighs both land on a single top rope; the performer then uses the impact of their thighs on the rope to flip themselves over, executing a moonsault onto a prone opponent. Naomi uses this move. It is also known for being used by Rob Van Dam as the Hollywood Star Press.
Split-legged corkscrew moonsault This variation sees the performer use the impact of their thighs on the ropes to flip themselves over, then perform a corkscrew moonsault. It was popularized by John Morrison, who called the move Starship Pain and The End of the World.
Springboard moonsault This is a move in which a wrestler springboards (bounces off ropes), then executes a backflip and lands on an opponent. This move is known as La Quebrada in lucha libre, sometimes shortened to simply Quebrada. A variation performed off the second rope from a running start, popularized by Chris Jericho, is known as the Lionsault.
When a springboard moonsault is performed onto an opponent on the floor outside the ring, rather than one in the ring, it is called an Asai Moonsault. It is named after Yoshihiro Asai, also known by his ring name Último Dragón, who popularized the move. This can also be used as a setup for an inverted DDT, as popularized by AJ Styles.
Standing moonsault This move sees the wrestler perform a backflip from a standing position on the mat, landing on the opponent. It can be set up by a preceding roundoff. WWE wrestler Apollo Crews uses this as his finishing maneuver. Jeff Cobb also uses the move as the Gachimuchi-Sault.
Fallaway moonsault slam This move sees a wrestler grab an opponent as in a fallaway slam, but instead of simply throwing them backwards, perform a backflip, slamming the opponent's back into the mat. This move is used by Cameron Grimes and was innovated by Scott Steiner as a counter to a running crossbody. A diving/avalanche version is used by Bandido as the Guerrero Moonsault. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Antimicrobial spectrum**
Antimicrobial spectrum:
The antimicrobial spectrum of an antibiotic is the range of microorganisms it can kill or inhibit. Antibiotics can be divided into broad-spectrum, extended-spectrum and narrow-spectrum antibiotics based on their spectrum of activity. Specifically, broad-spectrum antibiotics can kill or inhibit a wide range of microorganisms; extended-spectrum antibiotics can kill or inhibit Gram-positive bacteria and some Gram-negative bacteria; and narrow-spectrum antibiotics can only kill or inhibit a limited range of bacterial species. Currently, no antibiotic's spectrum completely covers all types of microorganisms.
Determination:
The antimicrobial spectrum of an antibiotic can be determined by testing its antimicrobial activity against a wide range of microbes in vitro. Nonetheless, the range of microorganisms an antibiotic can kill or inhibit in vivo may not always match the antimicrobial spectrum determined from in vitro data.
Significance:
Narrow-spectrum antibiotics have a low propensity to induce bacterial resistance and are less likely to disrupt the microbiome (normal microflora). On the other hand, indiscriminate use of broad-spectrum antibiotics may not only induce the development of bacterial resistance and promote the emergence of multidrug-resistant organisms, but also cause off-target effects due to dysbiosis. They may also have side effects, such as diarrhea or rash. Generally, a broad-spectrum antibiotic has more clinical indications and is therefore more widely used. The Healthcare Infection Control Practices Advisory Committee (HICPAC) recommends the use of narrow-spectrum antibiotics whenever possible.
Examples:
Broad-spectrum antibiotics: Ciprofloxacin, Doxycycline, Minocycline, Tetracycline, Imipenem, Azithromycin
Extended-spectrum antibiotic: Ampicillin
Narrow-spectrum antibiotics: Sarecycline, Vancomycin, Isoniazid | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fiscorn**
Fiscorn:
A fiscorn (Catalan pronunciation: [fisˈkɔɾn]) is a brass instrument. It is a bass flugelhorn in the key of C. In the cobla, it has the deepest sound among the brass instruments.
Background:
Originally played in polka bands throughout Germany and the former Czechoslovakia, as well as in military bands in Italy, the instrument has migrated to find its home today in Catalonia. Favored over other valved low brass for its haunting mellow tone and bell-front projection, the fiscorn quickly became an essential instrument of the cobla band, along with the tenora, tible and flabiol. While the instrument has been dropped from most music ensembles over intonation concerns, its powerful baritone voice has no counterpart (save perhaps the piston-valved marching baritone) in organology. The forward-projected sound makes a pair of fiscorns the most suitable instruments to join the shrill Catalan shawms (tenoras and tibles) and flabiol pipe, together with trumpets and trombone, in the outdoor town-square setting where cobla bands play for sardana dancers.
Musicians:
While the instrument is often played by trombonists, its conical nature is easily mastered by tuba or euphonium players. Several instrument manufacturers have attempted to solve the intonational discrepancies of the instrument in recent years, with varied success. The instrument is taught throughout Catalonia, most notably in the traditional music departments of the ESMUC (Catalonia College of Music) in Barcelona and at the CRR (Conservatoire à rayonnement régional) in Perpignan, France. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Solar storm**
Solar storm:
A solar storm is a disturbance on the Sun, which can emanate outward across the heliosphere, affecting the entire Solar System, including Earth and its magnetosphere, and is the cause of space weather in the short-term with long-term patterns comprising space climate.
Types:
Solar storms include:
Solar flare, a large explosion in the Sun's atmosphere caused by tangling, crossing or reorganizing of magnetic field lines
Coronal mass ejection (CME), a massive burst of plasma from the Sun, sometimes associated with solar flares
Geomagnetic storm, the interaction of the Sun's outburst with Earth's magnetic field
Solar particle event (SPE), a proton or solar energetic particle (SEP) storm | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rectiformer**
Rectiformer:
A rectiformer is a rectifier and transformer designed and built as a single unit for converting alternating current into direct current. It is a piece of power systems equipment rather than an electronics component. Rectiformers are used to supply power to the different fields of an electrostatic precipitator (ESP). Rectiformers are also used to create the DC supply for Hall process cells in the aluminium smelting industry.
Rectiformers are commonly found in electrowinning operations, where a direct current is required to convert base metal ions such as copper to a metal at the cathode. The passage of an electric current through a purified copper sulfate solution produces cathode copper. The equation is as follows: Cu²⁺(aq) + 2e⁻ → Cu⁰
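To illustrate why a dedicated high-current DC supply matters in electrowinning, the standard Faraday's-law relation (general electrochemistry, not stated in this article) connects the DC output to the mass of copper deposited. The operating figures in this sketch are hypothetical:

```python
# Hedged illustration using Faraday's law (standard electrochemistry,
# not taken from the article): mass of copper deposited by a DC current.
F = 96485.0          # Faraday constant, C/mol
M_CU = 63.55         # molar mass of copper, g/mol
N_ELECTRONS = 2      # Cu2+ + 2e- -> Cu

def copper_deposited_kg(current_a, hours, efficiency=1.0):
    """Mass of cathode copper (kg) for a given DC current and duration."""
    charge = current_a * hours * 3600.0              # coulombs
    moles_cu = efficiency * charge / (N_ELECTRONS * F)
    return moles_cu * M_CU / 1000.0

# e.g. a hypothetical cell line drawing 30 kA for 24 h at 90% current efficiency
print(round(copper_deposited_kg(30_000, 24, 0.9), 1))   # roughly 768 kg
```

The quadratic scaling of resistive losses with current is one reason the rectifier and transformer are integrated into a single unit close to the cells.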
Physical Characteristics:
Rectiformers may be designed to output voltages from 30 V to over 120 kV DC and can weigh over 400 tons. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Oat sensitivity**
Oat sensitivity:
Oat sensitivity represents a sensitivity to the proteins found in oats, Avena sativa. Sensitivity to oats can manifest as a result of allergy to oat seed storage proteins either inhaled or ingested. A more complex condition affects individuals who have gluten-sensitive enteropathy in which there is an autoimmune response to avenin, the glutinous protein in oats similar to the gluten within wheat. Sensitivity to oat foods can also result from their frequent contamination by wheat, barley, or rye particles.
Oat allergy:
Studies on farmers with grain dust allergy and children with atopic dermatitis reveal that oat proteins can act as both respiratory and skin allergens. A study of oat dust sensitivity on farms found that 53% of subjects showed reactivity to oat dust, second only to barley (70%) and almost double that of wheat dust. The 66 kDa protein in oats was visualized by 28 of 33 sera (84%). However, there was evident non-specific binding to this region, so it may also represent lectin-like binding. IgA and IgG responses, meanwhile, like those seen to anti-gliadin antibodies in celiac disease or dermatitis herpetiformis, are not seen in response to avenins in atopic dermatitis patients. Food allergies to oats can accompany atopic dermatitis. Oat avenins share similarities with the γ- and ω-gliadins of wheat; based on these similarities they could potentiate both enteropathic and anaphylactic responses. Oat allergy in gluten-sensitive enteropathy can explain an avenin-sensitive individual with no histological abnormality, no T-cell reaction to avenin, bearing the rarer DQ2.5trans phenotype, and with an anaphylactic reaction to avenin.
Avenin-sensitive enteropathy:
Oat toxicity in people with gluten-related disorders depends on the oat cultivar consumed, because the immunoreactivities of toxic prolamins differ among oat varieties. Furthermore, oats are frequently cross-contaminated with the other gluten-containing cereals. Pure oat (labelled as "pure oat" or "gluten-free oat") refers to oats uncontaminated with any of the other gluten-containing cereals. Some cultivars of pure oat could be a safe part of a gluten-free diet, requiring knowledge of the oat variety used in food products for a gluten-free diet. Nevertheless, the long-term effects of pure oat consumption are still unclear, and further studies identifying the cultivars used are needed before making final recommendations on their inclusion in the gluten-free diet.
Immunological evidence Anti-avenin antibodies In 1992, six proteins were extracted from oats that reacted with a single coeliac serum. Three of the proteins were prolamins, and have been called CIP1 (γ-avenin), CIP2, and CIP3. They had the following amino acid sequences:
Antibody recognition sites on three avenins:
CIP1 (γ-avenin): P S E Q Y Q P Y P E Q Q Q P F
CIP2 (γ-avenin): T T T V Q Y D P S E Q Y Q P Y P E Q Q Q P F V Q Q Q P P F
CIP3 (α-avenin): T T T V Q Y N P S E Q Y Q P Y
Within the same study, three other proteins were identified, one of them an α-amylase inhibitor as identified by protein homology. A follow-up study showed that most celiacs have anti-avenin antibodies (AVAs), with a specificity and sensitivity comparable to anti-gliadin antibodies. A subsequent study found that these AVAs did not result from cross-reaction with wheat. However, it has recently been found that AVA levels drop as soon as Triticeae glutens are removed from the diet. Anti-avenin antibodies declined in 136 treated celiacs on an oat diet, suggesting oats can be involved in celiac disease when wheat is present, but are not involved when wheat is removed from the diet. The study did, however, find an increased number of patients with elevated intraepithelial lymphocytes (IELs, a type of white blood cell) in the oat-eating cohort. Regardless of whether or not this observation is a direct allergic immune response, by itself it is essentially benign.
Cellular immunity In gluten-sensitive enteropathy, prolamins mediate between T-cells and antigen-presenting cells, whereas anti-transglutaminase antibodies confer autoimmunity via covalent attachment to gliadin. In 16 examined coeliacs, none produced a significant Th1 response. Th1 responses are needed to stimulate T-helper cells that mediate disease. This could indicate that coeliac disease does not directly involve avenin or that the sample size was too small to detect the occasional responder.
Evidence that there are exceptional cases came in a 2004 study on oats. The patients drafted for this study were those who had symptoms of celiac disease when on a "pure-oat" challenge, and were therefore not representative of a celiac sample. This study found that four patients had symptoms after oat ingestion, and three had elevated Marsh scores for histology and avenin-responsive T-cells, indicating avenin-sensitive enteropathy (ASE). All three patients were of the DQ2.5/DQ2 (HLA DR3-DQ2/DR7-DQ2) phenotype. Patients with DQ2.5/DQ2.2 tend to be the most prone to gluten-sensitive enteropathy (GSE), have the highest risk for GS-EATL, and show signs of more severe disease at diagnosis.
While the DQ2.5/DQ2 phenotype represents only 25% of celiac patients, it accounts for all of the ASE celiacs, and 60-70% of patients with GS-EATL.
Synthetic avenin peptides were synthesized in either native or deamidated form, and the deamidated peptides showed a higher response.
DQ2.5/T-cell receptor recognition from two oat-sensitive coeliacs:
TCR-Site1: Y Q P Y P E Q E~E~P F V
TCR-Site2: Q Y Q P Y P E Q Q Q P F V Q Q Q Q
Antibody recognition site (see above):
CIP2 (γ-avenin): T T T V Q Y D P S E Q Y Q P Y P E Q Q Q P F V Q Q Q P P F
The overlap of the antibody and T-cell sites, given trypsin digestion of avenin, suggests this region is dominant in immunity. TCR-Site1 was synthesized in deamidated form ("~E~"), and the native peptide requires transglutaminase to reach full activation. Two studies to date have looked at the ability of different oat strains to promote various immunochemical aspects of celiac disease. While preliminary, these studies indicate different strains may carry different risks for avenin sensitivity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mass-flux fraction**
Mass-flux fraction:
The mass-flux fraction (or Hirschfelder–Curtiss variable or Kármán–Penner variable) is the ratio of the mass flux of a particular chemical species to the total mass flux of a gaseous mixture. It includes both the convectional mass flux and the diffusional mass flux. It was introduced by Joseph O. Hirschfelder and Charles F. Curtiss in 1948 and later by Theodore von Kármán and Sol Penner in 1954. The mass-flux fraction of a species i is defined as
ε_i = ρ_i(v + V_i) / (ρv) = Y_i(1 + V_i/v)
where
Y_i = ρ_i/ρ is the mass fraction
v is the mass-average velocity of the gaseous mixture
V_i is the average velocity with which species i diffuses relative to v
ρ_i is the density of species i
ρ is the gas density.
It satisfies the identity
∑_i ε_i = 1
similar to the mass fraction, but the mass-flux fraction can take both positive and negative values. This variable is used in steady, one-dimensional combustion problems in place of the mass fraction. For one-dimensional (x-direction) steady flows, the conservation equation for the mass-flux fraction reduces to
dε_i/dx = w_i/(ρv)
where w_i is the mass production rate of species i. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
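The definition ε_i = Y_i(1 + V_i/v) and the identity ∑ε_i = 1 can be checked numerically. This is an illustrative sketch (function names and the two-species values are my own, not from the literature); the identity follows because the diffusion fluxes cancel, ∑Y_iV_i = 0, by definition of the mass-average velocity:

```python
# Illustrative check of the mass-flux fraction definition for a
# hypothetical two-species mixture.
def mass_flux_fractions(Y, V, v):
    """Y: mass fractions, V: diffusion velocities relative to v,
    v: mass-average velocity. Returns epsilon_i for each species."""
    return [Yi * (1.0 + Vi / v) for Yi, Vi in zip(Y, V)]

Y = [0.3, 0.7]
V = [0.7, -0.3]            # chosen so that sum(Y_i * V_i) = 0
eps = mass_flux_fractions(Y, V, v=1.0)
print(eps)                  # note: epsilon_i need not lie in [0, 1]
print(sum(eps))             # -> 1.0, matching the identity above
```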
**Mothers against decapentaplegic homolog 7**
Mothers against decapentaplegic homolog 7:
Mothers against decapentaplegic homolog 7, or SMAD7, is a protein that in humans is encoded by the SMAD7 gene. SMAD7 is, as its name describes, a homolog of the Drosophila gene "Mothers against decapentaplegic". It belongs to the SMAD family of proteins, which transduce signals for the TGFβ superfamily of ligands. Like many other TGFβ family members, SMAD7 is involved in cell signalling. It is a TGFβ type 1 receptor antagonist: it blocks TGFβ1 and activin from associating with the receptor, blocking access to SMAD2. It is an inhibitory SMAD (I-SMAD) and is enhanced by SMURF2.
Structure:
Smad proteins contain two conserved domains. The Mad Homology domain 1 (MH1 domain) is at the N-terminal and the Mad Homology domain 2 (MH2 domain) is at the C-terminal. Between them there is a linker region which is full of regulatory sites. The MH1 domain has DNA-binding activity while the MH2 domain has transcriptional activity. The linker region contains important regulatory peptide motifs, including potential phosphorylation sites for mitogen-activated protein kinases (MAPKs), Erk-family MAP kinases, the Ca2+/calmodulin-dependent protein kinase II (CamKII) and protein kinase C (PKC). Smad7 lacks the MH1 domain. A proline-tyrosine (PY) motif present in its linker region enables its interaction with the WW domains of the E3 ubiquitin ligase Smad ubiquitination-related factor 2 (Smurf2). It resides predominantly in the nucleus in the basal state and translocates to the cytoplasm upon TGF-β stimulation.
Function:
SMAD7 inhibits TGF-β signaling by preventing formation of the Smad2/Smad4 complexes that initiate TGF-β signaling. It interacts with the activated TGF-β type I receptor, thereby blocking the association, phosphorylation and activation of Smad2. By occupying type I receptors for activin and bone morphogenetic protein (BMP), it also plays a role in the negative feedback of these pathways. Upon TGF-β treatment, Smad7 binds to discrete regions of Pellino-1 via distinct regions of the Smad MH2 domain. The interaction blocks the formation of the IRAK1-mediated IL-1R/TLR signaling complex and therefore abrogates NF-κB activity, which subsequently causes reduced expression of pro-inflammatory genes. While Smad7 is induced by TGF-β, it is also induced by other stimuli, such as epidermal growth factor (EGF), interferon-γ and tumor necrosis factor (TNF)-α. It therefore provides cross-talk between TGF-β signaling and other cellular signaling pathways.
Role in cancer:
A mutation located in the SMAD7 gene is a cause of susceptibility to colorectal cancer (CRC) type 3. Perturbation of Smad7 and suppression of TGF-β signaling were found to be involved in CRC. Case-control studies and meta-analyses in Asian and European populations also provided evidence that this mutation is associated with colorectal cancer risk. TGF-β is one of the important growth factors in pancreatic cancer. By controlling the TGF-β pathway, Smad7 is believed to be related to this disease. Some earlier studies showed over-expression of Smad7 in pancreatic cells, but a more recent study showed low Smad7 expression. The role of Smad7 in pancreatic cancer is still controversial. Over-expression or constitutive activation of the epidermal growth factor receptor (EGFR) can promote tumor processes. EGF-induced MMP-9 expression enhances tumor invasion and metastasis in some kinds of tumor cells, such as breast cancer and ovarian cancer. Smad7 exerts an inhibitory effect on the EGF signaling pathway, and may therefore play a role in the prevention of cancer metastasis.
Use in Pharmacology:
SMAD7 signaling has been studied in a recent Celgene Phase III trial, NCT ID number 94, of Mongersen, a drug that interacts with the SMAD7 pathway. The drug was studied in patients with Crohn's disease.
Interactions:
Mothers against decapentaplegic homolog 7 has been shown to interact with: CTNNB1, EP300, TAB1, PIAS4, RNF111, SMAD3, SMAD6, SMURF2, STRAP, TGFBR1, and YAP1. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Haugh unit**
Haugh unit:
The Haugh unit is a measure of egg protein quality based on the height of its egg white (albumen). The test was introduced by Raymond Haugh in 1937 and is an important industry measure of egg quality, alongside other measures such as shell thickness and strength. An egg is weighed, then broken onto a flat surface (breakout method), and a micrometer is used to determine the height of the thick albumen (egg white) that immediately surrounds the yolk. The height, correlated with the weight, determines the Haugh unit (HU) rating. The higher the number, the better the quality of the egg (fresher, higher quality eggs have thicker whites). Although the measurement determines the protein content and freshness of the egg, it does not measure other important nutrient contents such as the micronutrients or vitamins present in the egg.
Formula:
The formula for calculating the Haugh unit is:
HU = 100 · log₁₀(h − 1.7w^0.37 + 7.6)
Where:
HU = Haugh unit
h = observed height of the albumen in millimeters
w = weight of the egg in grams
Haugh index:
AA: 72 or more
A: 60–71
B: 31–59
C: 30 or less
Below are the USDA's terms describing egg white and the corresponding Haugh unit: (a) Clear. A white that is free from discolorations or from any foreign bodies floating in it. (Prominent chalazas should not be confused with foreign bodies such as spots or blood clots.) (b) Firm (AA quality). A white that is sufficiently thick or viscous to prevent the yolk outline from being more than slightly defined or indistinctly indicated when the egg is twirled. With respect to a broken-out egg, a firm white has a Haugh unit value of 72 or higher when measured at a temperature between 45 °F and 60 °F.
(c) Reasonably firm (A quality). A white that is somewhat less thick or viscous than a firm white. A reasonably firm white permits the yolk to approach the shell more closely, which results in a fairly well defined yolk outline when the egg is twirled. With respect to a broken-out egg, a reasonably firm white has a Haugh unit value of 60 up to, but not including, 72 when measured at a temperature between 45 °F and 60 °F.
(d) Weak and watery (B quality). A white that is weak, thin, and generally lacking in viscosity. A weak and watery white permits the yolk to approach the shell closely, thus causing the yolk outline to appear plainly visible and dark when the egg is twirled. With respect to a broken-out egg, a weak and watery white has a Haugh unit value lower than 60 when measured at a temperature between 45 °F and 60 °F.
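The formula and the grade thresholds above can be combined in a few lines; this is an illustrative sketch and the function names are my own:

```python
import math

def haugh_unit(albumen_height_mm, egg_weight_g):
    """Haugh unit from albumen height h (mm) and egg weight w (g):
    HU = 100 * log10(h - 1.7 * w**0.37 + 7.6)"""
    h, w = albumen_height_mm, egg_weight_g
    return 100.0 * math.log10(h - 1.7 * w ** 0.37 + 7.6)

def usda_grade(hu):
    """Map a Haugh unit value onto the USDA grades quoted above."""
    if hu >= 72:
        return "AA"
    if hu >= 60:
        return "A"
    if hu >= 31:
        return "B"
    return "C"

hu = haugh_unit(7.0, 60.0)   # a hypothetical 60 g egg with 7 mm thick albumen
print(round(hu, 1), usda_grade(hu))
```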
Excerpt from the United States Department of Agriculture (USDA), United States Standards, Grades, and Weight Classes for Shell Eggs, AMS 56, effective July 20, 2000. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Photo recovery**
Photo recovery:
Photo recovery is the process of salvaging digital photographs from damaged, failed, corrupted, or inaccessible secondary storage media when they cannot be accessed normally. Photo recovery can be considered a subset of the overall data recovery field. Photo loss or deletion may be due to hardware or software failures/errors.
Recovering data after logical failure:
Logical damage, or the inability to view photos, can occur for several reasons. The most common are: deletion of photos, corruption of the boot sector of the media, corruption of the file system, disk formatting, and move or copy errors.
Photo recovery using file carving The majority of photo recovery programs work by using a technique called file carving (data carving). There are many different file carving techniques used to recover photos. Most of these techniques fail in the presence of file system fragmentation. Simson Garfinkel showed that on average 16% of JPEGs are fragmented, which means that on average 16% of JPEGs are recovered only partially or appear corrupt when recovered using techniques that cannot handle fragmented photos. Header-footer carving, along with header-size carving, are by far the most common techniques for photo recovery.
Header-footer carving In header-footer carving, a recovery program attempts to recover photos based on the standard starting and ending byte signatures of the photo format. For example, JPEGs always begin with the hex sequence "FFD8" and must end with the hex sequence "FFD9". Header-footer carving cannot be used to recover fragmented photos, and fragmented photos will appear partially recovered or corrupt if incorrect data is added. Use of footers can often truncate a photo, as many JPEGs contain thumbnails as embedded objects: a file terminated at the first FFD9 encountered will be corrupted unless nested FFD8/FFD9 pairs are counted.
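A minimal sketch of header-footer carving, assuming unfragmented JPEGs and ignoring the nested-thumbnail complication described above (function names are illustrative, not from any particular recovery tool):

```python
# Scan a raw disk image for JPEG start (FFD8) and end (FFD9) markers.
JPEG_HEADER = b"\xff\xd8"
JPEG_FOOTER = b"\xff\xd9"

def carve_jpegs(raw: bytes):
    """Return (start, end) byte ranges of candidate JPEGs in a disk image."""
    results, pos = [], 0
    while True:
        start = raw.find(JPEG_HEADER, pos)
        if start == -1:
            break
        end = raw.find(JPEG_FOOTER, start + 2)
        if end == -1:
            break
        results.append((start, end + 2))   # include the footer bytes
        pos = end + 2
    return results

image = b"junk" + JPEG_HEADER + b"photo-data" + JPEG_FOOTER + b"slack"
print(carve_jpegs(image))   # -> [(4, 18)]
```

A real carver would additionally count nested FFD8/FFD9 pairs so an embedded thumbnail's end marker does not truncate the photo.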
Header-size carving In header-size carving, a recovery program attempts to recover photos based on the standard starting byte signature of the photo format, along with the size of the photo, which is either derived or explicitly stated in the photo format. Header-size carving cannot be used to recover fragmented photos, and fragmented photos will appear partially recovered or corrupt if incorrect data is added.
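A hedged sketch of the idea using the BMP format, whose header states the file size explicitly in a 4-byte little-endian field at offset 2 (the carving logic and names here are illustrative):

```python
import struct

def carve_bmp(raw: bytes):
    """Yield candidate BMP byte ranges based on the declared file size."""
    results, pos = [], 0
    while (start := raw.find(b"BM", pos)) != -1:
        if start + 6 <= len(raw):
            (size,) = struct.unpack_from("<I", raw, start + 2)
            # sanity check: declared size must fit in the remaining data
            if 14 < size <= len(raw) - start:
                results.append((start, start + size))
        pos = start + 2
    return results

# A fake 20-byte "BMP": magic, declared size 20, then padding.
blob = b"xx" + b"BM" + struct.pack("<I", 20) + b"\x00" * 14
print(carve_bmp(blob))   # -> [(2, 22)]
```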
File-structure carving A more advanced form of carving, a recovery program attempts to recover photos based on detailed knowledge of the structure rules of the photo format. This will enable a recovery program to identify when a photo is not complete or fragmented, but more needs to be done to see if a fragmented photo can be recovered. This technique is rarely used by most photo recovery programs.
Validated carving In validated carving, a decoder is used to detect any errors in recovery of a photo. More advanced forms of validated carving occur when each part of the recovered photo is compared against the rest of the photo to see if it "fits" visually. Validated carving is superb at detecting photos that are either fragmented or have parts that are over-written or missing. Validated carving alone cannot be used to recover fragmented photos.
Log carving Log carving occurs when a recovery program uses information left over in either file system structures or the log to recover a deleted photo. For example, occasionally NTFS will store in the logs the exact location of where the file was located prior to its deletion. A program using log carving will be able to then recover the photo. To be sure about the quality of recovery, validated carving or file-structure carving should also be used to validate the recovered photo.
Bi-fragment gap carving A fragmented photo recovery technique where a header and footer are identified and then all combinations of blocks between the header and footer are validated to determine which combination results in the correct recovery of the photo. This technique will only work if the file is fragmented into two parts.
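The combination search described above can be sketched with a toy validator standing in for a real format decoder (all names and the integer "block" model are illustrative; a real carver would validate candidate byte sequences with, e.g., a JPEG decoder):

```python
# Toy bi-fragment gap carving: given the blocks between a header and a
# footer, try every contiguous "gap" of foreign blocks and keep the first
# reconstruction the validator accepts.
def valid(candidate):
    # Stand-in validator: blocks of our toy "file" are consecutive integers.
    return all(b == a + 1 for a, b in zip(candidate, candidate[1:]))

def bifragment_carve(blocks):
    n = len(blocks)
    for g1 in range(1, n):                 # gap start (after the header block)
        for g2 in range(g1, n - 1):        # gap end (before the footer block)
            candidate = blocks[:g1] + blocks[g2 + 1:]
            if valid(candidate):
                return candidate
    return None

# Fragment 1 = [10, 11, 12], foreign gap = [77, 88], fragment 2 = [13, 14]
print(bifragment_carve([10, 11, 12, 77, 88, 13, 14]))   # -> [10, 11, 12, 13, 14]
```

The quadratic number of gap placements is why this technique only scales to files split into two fragments.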
Smart carving A process by which fragmented photos are recovered by looking at blocks on the disk and determining which block is the best visual match for the photo being recovered. This is done in parallel for all blocks that are not part of a recovered file. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Iced coffee**
Iced coffee:
Iced coffee is a coffee beverage served cold. It may be prepared either by brewing coffee normally (i.e. carafe, French press, etc.) and then serving it over ice or in cold milk, or by brewing the coffee cold. In hot brewing, sweeteners and flavoring may be added before cooling, as they dissolve faster. Iced coffee can also be sweetened with sugar pre-dissolved in water.
Iced coffee is regularly available in most coffee shops. It is generally brewed at a higher strength than normal coffee, given that it is diluted by the melting ice. In Australia, "iced coffee" is a common term for a packaged coffee-flavored and sweetened milk beverage.
History:
Mazagran, a cold, sweetened coffee beverage that originated in Algeria circa 1840, has been described by locals as "the original iced coffee". It was prepared with coffee syrup and cold water. Frozen coffee beverages, similar to slush, are documented in the 19th century. The Italian granita al caffè is a similar beverage. "Iced coffee", coffee that has been brewed and then chilled with ice, appeared in menus and recipes in the late 19th century. Iced coffee was popularized by a marketing campaign of the Joint Coffee Trade Publicity Committee of the United States in 1920. Much later, it was marketed by chain outlets like Burger King, Dunkin' Donuts and Starbucks.
Variations by country:
Australia In Australia, iced coffee may include syrup, cream, cocoa powder or coffee beans. The café style is something like an unblended milkshake, and may be made from espresso coffee or only coffee flavoring. Bushells Coffee And Chicory Essence has been sold commercially since the late 19th century in the form of a syrup. The packaged iced coffee beverage is a different product altogether. In South Australia, Farmers Union Iced Coffee has outsold Coca-Cola and is one of the state's biggest brands. Pauls "Territory's Own" Iced Coffee is popular in the Northern Territory and Norco Real Iced Coffee is prominent in Northern New South Wales and South East Queensland. Other brands include Breaka, Big M, Brownes Chill, Moove, Masters, Dare, Max, Fleurieu, Rush, Oak and Ice Break.
Variations by country:
Canada In Canada, the popular Tim Hortons coffee chain sells iced cappuccinos known locally as Ice Capps. The chain has also recently introduced traditional iced coffee to its Canadian menu in addition to its U.S. menu. Other fast-food and beverage chains also provide iced coffee. A June 2016 study by research firm NPD found that the popularity of iced coffee drinks had increased by about 16 percent over the same period a year earlier.
Variations by country:
Chile In Chile, iced coffee is called café helado and is very popular in the summertime. It is made with espresso or instant coffee powder, to which ice cream is added, along with sweet flavorings such as vanilla, cinnamon, or dulce de leche. It is enjoyed during the summer at breakfast and at parties, and may also be topped with whipped cream and chopped nuts.
Variations by country:
Germany In Germany there are different types of Eiskaffee (coffee with ice cream). The most widespread form is a flavoured milk drink similar to Australian iced coffee, available in German coffeehouses and in Eisdielen (ice cream parlours). It consists of filtered, hot brewed and cooled coffee with vanilla ice cream and whipped cream on top. However, this type of iced coffee is rarely available in German supermarkets. The most widespread form of iced coffee in supermarkets is a canned version from a variety of brands with different flavours such as Cappuccino and Espresso. This iced coffee is very similar to the canned iced coffee in the UK and in the case of some brands (particularly Nestlé) actually the same product.
Variations by country:
Greece In Greece, the most popular iced coffee beverage is the frappé, made of instant coffee (generally Nescafé), water and, optionally, sugar, using either an electric mixer or a shaker to create foam. Ice cubes and, optionally, milk are added. Frappés became known outside of Greece as a result of the 2004 Summer Olympics in Athens, and have become very popular in Cyprus and Romania.
Variations by country:
The second most popular iced coffee beverages in Greece are the freddo cappuccino, which is topped with a cold milk foam known as afrógala (Greek: αφρόγαλα), and the freddo espresso, a double shot of espresso blended with ice cubes and served over ice.
Variations by country:
Italy In Italy, the Nestlé company introduced Frappé coffee under its Nescafé Red Cup line, with the name Red Cup Iced Coffee. Many Italian coffee bars serve "caffè freddo", which is straight espresso kept in a freezer and served as icy slush. In the Salento region of Apulia, this was perfected by brewing the espresso freshly, adding the desired amount of sugar or almond milk and finally pouring it into a whiskey glass filled with ice cubes right before being served, known as Caffè in ghiaccio, or coffee in ice. Affogato (espresso poured over a scoop of vanilla gelato or ice cream) is also served, typically as a dessert.
Variations by country:
Japan In Japan, iced coffee (アイスコーヒー, aisu kōhī) has been drunk since the Taishō period (around the 1920s) in coffeehouses. It is served with gum syrup and milk. Cold tea was already popular, so it was natural to drink coffee cold as well. Cold brew coffee is also common in Japan, where it is known as Dutch coffee (ダッチ・コーヒー, dacchi kōhī), due to the historical Dutch coffee trade from Indonesia. In 1969, UCC Ueshima Coffee released canned coffee, which made coffee available everywhere. Today, canned liquid coffee is consumed both cold and hot.
Variations by country:
New Zealand In New Zealand, iced coffee is popular and served in a number of cafes. It is often served with vanilla ice-cream or whipped cream.
Variations by country:
Thailand Thai iced coffee is brewed using strong black coffee, sweetened with sugar, heavy cream (or half-and-half) and cardamom, and quickly cooled and served over ice. Some variations are brewed using espresso. Thai iced coffee can be served with whipped cream on top for a layered effect and garnished with cinnamon, vanilla or anise. It is a common menu item at Thai restaurants.
Variations by country:
United States Iced coffee is prepared many different ways in the U.S., including cold-brew coffee and chilled conventional coffee.
Variations by country:
Iced coffee can be made from cold-brew coffee, for which coffee grounds are soaked for several hours and then strained; the result is a very strong coffee concentrate that is mixed with milk and sweetened. Many coffee retailers simply use hot-brewed coffee in their iced coffee drinks. Starbucks specifically uses the double-strength method, in which the coffee is brewed hot with twice the amount of grounds; with this method, the melted ice does not dilute the strength and flavor of the coffee. Unlike the cold-brew process, this method does not eliminate the acidity inherent in hot-brewed coffee.
**Tour operator**
Tour operator:
A tour operator is a business that typically combines and organizes accommodations, meals, sightseeing and transportation components, in order to create a package tour. They advertise and produce brochures to promote their products, holidays and itineraries. Tour operators can sell directly to the public or sell through travel agents or a combination of both.
The most common example of a tour operator's product would be a flight on a charter airline, plus a transfer from the airport to a hotel and the services of a local representative, all for one price. Each tour operator may specialise in certain destinations, e.g. Italy, activities and experiences, e.g. skiing, or a combination thereof.
Operations:
The original raison d'être of tour operating was the difficulty ordinary travellers faced in making arrangements in far-flung places, with problems of language, currency and communication. The advent of the Internet has led to a rapid increase in self-packaging of holidays. However, tour operators retain their expertise in arranging tours for those who do not have time for do-it-yourself holidays, and specialize in large group events and meetings such as conferences or seminars. Tour operators also still exercise contracting power with suppliers (airlines, hotels, other land arrangements, cruise companies and so on) and influence over other entities (tourism boards and other government authorities) in order to create packages and special group departures for destinations that might otherwise be difficult and expensive to visit.
Trade associations:
The three major tour operator associations in the U.S. are the National Tour Association (NTA), the United States Tour Operators Association (USTOA), and the American Bus Association (ABA). In Europe, there are the European Tour Operators Association (ETOA), and in the UK, the ABTA – The Travel Association and the Association of Independent Tour Operators (AITO). The primary association for receptive North American inbound tour operators is the International Inbound Travel Association.
**Clinical data repository**
Clinical data repository:
A Clinical Data Repository (CDR) or Clinical Data Warehouse (CDW) is a real time database that consolidates data from a variety of clinical sources to present a unified view of a single patient. It is optimized to allow clinicians to retrieve data for a single patient rather than to identify a population of patients with common characteristics or to facilitate the management of a specific clinical department. Typical data types which are often found within a CDR include: clinical laboratory test results, patient demographics, pharmacy information, radiology reports and images, pathology reports, hospital admission, discharge and transfer dates, ICD-9 codes, discharge summaries, and progress notes.
Clinical data repository:
A clinical data repository can be used in the hospital setting to track prescribing trends and to monitor infectious diseases. One area where CDRs could potentially be used is monitoring the prescribing of antibiotics in hospitals, especially as the number of antibiotic-resistant bacteria is ever increasing. In 1995, a study at the Beth Israel Deaconess Medical Center conducted by Harvard Medical School used a CDR to monitor vancomycin use and prescribing trends, since vancomycin-resistant enterococci are a growing problem. The researchers tracked prescribing by linking the individual patient, the medication, and the microbiology lab results, all of which were contained within the CDR. If the microbiology lab result did not support the use of vancomycin, a change to a more appropriate medication was suggested, in line with Centers for Disease Control and Prevention (CDC) guidelines. The use of CDRs could thus help monitor infectious diseases in the hospital and support appropriate prescribing based on lab results. Clinical data repositories could also provide a wealth of knowledge about patients, their medical conditions, and their outcomes. The database could serve as a way to study the relationships and potential patterns between disease progression and management; the term "medical data mining" has been coined for this method of research. Past epidemiological studies may not have had information as complete as that contained in a CDR, which could lead to inconclusive results. Medical data mining and correlative studies using the CDR could therefore serve as a valuable resource for the future of healthcare in all facets of medicine. Data mining of a CDW has been used to screen for variables associated with diabetes and poor glycemic control, allowing novel correlations that might not have been discovered otherwise. One potential use of a clinical data repository would be for clinical trials: researchers would have all the information from a study in one place, and other researchers could benefit from the data to further innovation. Because the repository is digital and real-time, logging data and keeping it accurate is easier than with paper records.
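The vancomycin-monitoring workflow described above amounts to joining prescription records against microbiology results held in the same repository. The following is an illustrative sketch only (not the Beth Israel system): the record shapes and the rule that a methicillin-resistant isolate justifies vancomycin are assumptions made for the example.

```python
# Illustrative CDR-style cross-check: flag vancomycin orders that lack
# a supporting microbiology result in the repository. Record fields and
# the "supports vancomycin" rule are assumptions for this sketch.

def flag_unsupported_vancomycin(orders, micro_results):
    """orders        : list of {"patient_id", "drug"} dicts
       micro_results : list of {"patient_id", "organism", "resistant_to"} dicts
       Returns patient ids whose vancomycin use is not backed by a lab
       result showing an organism that warrants the drug."""
    supported = {
        r["patient_id"]
        for r in micro_results
        # assumed rule: a methicillin-resistant isolate (e.g. MRSA)
        # justifies vancomycin therapy
        if "methicillin" in r.get("resistant_to", [])
    }
    return [
        o["patient_id"]
        for o in orders
        if o["drug"] == "vancomycin" and o["patient_id"] not in supported
    ]
```

In a real repository the same join would run over laboratory and pharmacy tables rather than in-memory dicts, but the linkage logic is the essence of the monitoring described in the study.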
Clinical data repository:
The clinical data repository is not without its weaknesses, however. Since repositories usually do not integrate with non-clinical sources, following patient treatment across the care continuum becomes very difficult, and tracking the true cost per case for each patient is not feasible. IT teams spend most of their time gathering and compiling data instead of interpreting information and finding opportunities for cutting costs and improving patient care.
**Infusion pump**
Infusion pump:
An infusion pump infuses fluids, medication or nutrients into a patient's circulatory system. It is generally used intravenously, although subcutaneous, arterial and epidural infusions are occasionally used.
Infusion pump:
Infusion pumps can administer fluids in ways that would be impractically expensive or unreliable if performed manually by nursing staff. For example, they can administer as little as 0.1 mL per hour injections (too small for a drip), injections every minute, injections with repeated boluses requested by the patient, up to maximum number per hour (e.g. in patient-controlled analgesia), or fluids whose volumes vary by the time of day.
Infusion pump:
Because they can also produce quite high but controlled pressures, they can inject controlled amounts of fluids subcutaneously (beneath the skin), or epidurally (just within the surface of the central nervous system – a very popular local spinal anesthesia for childbirth).
Types of infusion:
The user interface of pumps usually requests details on the type of infusion from the technician or nurse that sets them up: Continuous infusion usually consists of small pulses of infusion, usually between 500 nanoliters and 10 milliliters, depending on the pump's design, with the rate of these pulses depending on the programmed infusion speed.
Types of infusion:
Intermittent infusion alternates a "high" infusion rate with a low programmable rate that keeps the cannula open. The timings are programmable. This mode is often used to administer antibiotics or other drugs that can irritate a blood vessel. To get the entire dose of antibiotics into the patient, the "volume to be infused" (VTBI) must be programmed for at least 30 mL more than is in the medication bag; failure to do so can potentially leave up to half of the antibiotic in the IV tubing.
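The VTBI rule above is simple arithmetic. A sketch, where the 30 mL tubing margin is the figure given in the text and the function name is illustrative:

```python
# Sketch of the VTBI rule: program the pump for more volume than the
# medication bag holds, so the dose that would otherwise remain in the
# IV tubing is pushed through to the patient.

TUBING_MARGIN_ML = 30  # minimum extra volume needed to clear the line

def volume_to_be_infused(bag_volume_ml, margin_ml=TUBING_MARGIN_ML):
    """Return the VTBI to program so the full dose reaches the patient."""
    return bag_volume_ml + margin_ml
```

For a 100 mL antibiotic bag, the pump would be programmed for 130 mL so the line is flushed after the bag empties.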
Types of infusion:
Patient-controlled is infusion on-demand, usually with a preprogrammed ceiling to avoid intoxication. The rate is controlled by a pressure pad or button that can be activated by the patient. It is the method of choice for patient-controlled analgesia (PCA), in which repeated small doses of opioid analgesics are delivered, with the device coded to stop administration before a dose that may cause hazardous respiratory depression is reached.
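The PCA gating described above combines a lockout interval with a dose ceiling. A minimal sketch of that logic follows; the class, parameters and values are illustrative, not clinical settings or any vendor's firmware.

```python
# Minimal sketch of patient-controlled analgesia gating: a demand bolus
# is delivered only if the lockout interval has elapsed and the hourly
# ceiling would not be exceeded.

class PCAController:
    def __init__(self, bolus_ml, lockout_s, hourly_max_ml):
        self.bolus_ml = bolus_ml
        self.lockout_s = lockout_s
        self.hourly_max_ml = hourly_max_ml
        self.deliveries = []  # timestamps (seconds) of granted boluses

    def request_bolus(self, now_s):
        """Return True and record the dose if the demand is granted."""
        if self.deliveries and now_s - self.deliveries[-1] < self.lockout_s:
            return False  # still inside the lockout interval
        recent = [t for t in self.deliveries if now_s - t < 3600]
        if (len(recent) + 1) * self.bolus_ml > self.hourly_max_ml:
            return False  # would exceed the preprogrammed ceiling
        self.deliveries.append(now_s)
        return True
```

The two denial branches correspond to the text's "preprogrammed ceiling to avoid intoxication": repeated presses during the lockout, or beyond the hourly maximum, deliver nothing.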
Types of infusion:
Total parenteral nutrition usually requires an infusion curve similar to normal mealtimes. Some pumps offer modes in which the amounts can be scaled or controlled based on the time of day. This allows for circadian cycles which may be required for certain types of medication.
Types of pump:
There are two basic classes of pumps. Large volume pumps can pump fluid replacement such as saline solution, medications such as antibiotics or nutrient solutions large enough to feed a patient. Small-volume pumps infuse hormones, such as insulin, or other medicines, such as opiates.
Within these classes, some pumps are designed to be portable, others are designed to be used in a hospital, and there are special systems for charity and battlefield use.
Large-volume pumps usually use some form of peristaltic pump. Classically, they use computer-controlled rollers compressing a silicone-rubber tube through which the medicine flows. Another common form is a set of fingers that press on the tube in sequence.
Small-volume pumps usually use a computer-controlled motor turning a screw that pushes the plunger on a syringe.
Types of pump:
The classic medical improvisation for an infusion pump is to place a blood pressure cuff around a bag of fluid. The battlefield equivalent is to place the bag under the patient. The pressure on the bag sets the infusion pressure. The pressure can actually be read-out at the cuff's indicator. The problem is that the flow varies dramatically with the cuff's pressure (or patient's weight), and the needed pressure varies with the administration route, potentially causing risk when attempted by an individual not trained in this method.
Types of pump:
Places that must provide the least-expensive care often use pressurized infusion systems. One common system has a purpose-designed plastic "pressure bottle" pressurized with a large disposable plastic syringe. A combined flow restrictor, air filter and drip chamber helps a nurse set the flow. The parts are reusable, mass-produced sterile plastic, and can be produced by the same machines that make plastic soft-drink bottles and caps. A pressure bottle, restrictor and chamber requires more nursing attention than an electronically controlled pump; in the areas where these systems are used, nursing care is often provided by volunteers or at very low cost.
Types of pump:
The restrictor and high pressure helps control the flow better than the improvised schemes because the high pressure through the small restrictor orifice reduces the variation of flow caused by patients' blood pressures.
Types of pump:
An air filter is an essential safety device in a pressure infusor, to keep air out of the patient's veins. Small bubbles could cause harm in arteries, but in the veins they pass through the heart and leave via the patient's lungs. The air filter is simply a membrane that passes gas but not fluid or pathogens; when a large air bubble reaches it, the air bleeds off.
Types of pump:
Some of the smallest infusion pumps use osmotic power. Basically, a bag of salt solution absorbs water through a membrane, swelling its volume. The bag presses medicine out. The rate is precisely controlled by the salt concentrations and pump volume. Osmotic pumps are usually recharged with a syringe.
Spring-powered clockwork infusion pumps have been developed, and are sometimes still used in veterinary work and for ambulatory small-volume pumps. They generally have one spring to power the infusion, and another for the alarm bell when the infusion completes.
Battlefields often have a need to perfuse large amounts of fluid quickly, with dramatically changing blood pressures and patient condition. Specialized infusion pumps have been designed for this purpose, although they have not been deployed.
Types of pump:
Many infusion pumps are controlled by a small embedded system. They are carefully designed so that no single cause of failure can harm the patient. For example, most have batteries in case the wall-socket power fails. Additional hazards are uncontrolled flow causing an overdose, uncontrolled lack of flow, causing an underdose, reverse flow, which can siphon blood from a patient, and air in the line, which can cause an air embolism.
Safety features available on some pumps:
The range of safety features varies widely with the age and make of the pump. A state-of-the-art pump in 2003 might have had the following safety features: Certified to have no single point of failure. That is, no single cause of failure should cause the pump to silently fail to operate correctly; it should at least stop pumping and give an audible error indication. This is a minimum requirement on all human-rated infusion pumps of whatever age; it is not required for veterinary infusion pumps.
Safety features available on some pumps:
Batteries, so the pump can operate if the power fails or is unplugged.
Anti-free-flow devices prevent blood from draining from the patient, or infusate from freely entering the patient, when the infusion pump is being set up.
A "down pressure" sensor will detect when the patient's vein is blocked, or the line to the patient is kinked. This may be configurable for high (subcutaneous and epidural) or low (venous) applications.
An "air-in-line" detector. A typical detector will use an ultrasonic transmitter and receiver to detect when air is being pumped. Some pumps actually measure the volume, and may even have configurable volumes, from 0.1 to 2 ml of air. None of these amounts can cause harm, but sometimes the air can interfere with the infusion of a low-dose medicine.
An "up pressure" sensor can detect when the bag or syringe is empty, or even if the bag or syringe is being squeezed.
A drug library with customizable programmable limits for individual drugs that helps to avoid medication errors.
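The air-in-line detector described in the list above accumulates the volume of air seen by the sensor and alarms above a configurable limit (0.1 to 2 mL in the text). A minimal sketch, assuming an idealized ultrasonic sensor that samples the line at fixed volume increments:

```python
# Sketch of an air-in-line check: accumulate the volume of consecutive
# "air" samples and report when a single contiguous bubble exceeds the
# configured limit. Sensor model and parameters are illustrative.

def detect_air_bubble(samples, ml_per_sample, limit_ml):
    """samples: iterable of booleans, True where the sensor saw air.
    Returns True if any contiguous air run exceeds limit_ml."""
    run_ml = 0.0
    for is_air in samples:
        run_ml = run_ml + ml_per_sample if is_air else 0.0
        if run_ml > limit_ml:
            return True
    return False
```

Resetting the running total on each fluid sample is what makes the check measure individual bubbles rather than the total air pumped, matching the per-bubble volume thresholds the text describes.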
Safety features available on some pumps:
Mechanisms to avoid uncontrolled flow of drugs in large-volume pumps (often a free-flow clamp built into the giving set), and increasingly also in syringe pumps (a piston brake). Many pumps include an internal electronic log of the last several thousand therapy events, usually tagged with the time and date from the pump's clock. Erasing the log is usually a feature protected by a security code, specifically to detect staff abuse of the pump or patient.
Safety features available on some pumps:
Many makes of infusion pump can be configured to display only a small subset of features while they are operating, in order to prevent tampering by patients, untrained staff and visitors. By 2019, intravenous smart pumps were being introduced. They could include wireless connectivity, drug libraries, profiles of care areas, and soft and hard limits.
Safety issues:
Infusion pumps have been a source of multiple patient safety concerns, and problems with such pumps have been linked to more than 56,000 adverse event reports from 2005 to 2009, including at least 500 deaths. As a result, the U.S. Food and Drug Administration (FDA) has launched a comprehensive initiative to improve their safety, called the Infusion Pump Improvement Initiative. The initiative proposed stricter regulation of infusion pumps. It cited software defects, user interface issues, and mechanical or electrical failures as the main causes of adverse events.
**Semiconductor Bloch equations**
Semiconductor Bloch equations:
The semiconductor Bloch equations (abbreviated as SBEs) describe the optical response of semiconductors excited by coherent classical light sources, such as lasers. They are based on a full quantum theory, and form a closed set of integro-differential equations for the quantum dynamics of microscopic polarization and charge carrier distribution. The SBEs are named after the structural analogy to the optical Bloch equations that describe the excitation dynamics in a two-level atom interacting with a classical electromagnetic field. As the major complication beyond the atomic approach, the SBEs must address the many-body interactions resulting from Coulomb force among charges and the coupling among lattice vibrations and electrons.
Background:
The optical response of a semiconductor follows if one can determine its macroscopic polarization $P$ as a function of the electric field $E$ that excites it. The connection between $P$ and the microscopic polarization $P_{\mathbf{k}}$ is given by $P = d\sum_{\mathbf{k}} P_{\mathbf{k}} + \text{c.c.}$, where the sum runs over the crystal momenta $\hbar\mathbf{k}$ of all relevant electronic states. In semiconductor optics, one typically excites transitions between a valence and a conduction band. In this connection, $d$ is the dipole matrix element between the conduction and valence band and $P_{\mathbf{k}}$ defines the corresponding transition amplitude.
Background:
The derivation of the SBEs starts from a system Hamiltonian $\hat{H}_{\text{System}}$ that fully includes the free particles, the Coulomb interaction, the dipole interaction between classical light and electronic states, and the phonon contributions. As almost always in many-body physics, it is most convenient to apply the second-quantization formalism once the appropriate system Hamiltonian is identified. One can then derive the quantum dynamics of relevant observables $\hat{O}$ by using the Heisenberg equation of motion, $i\hbar\,\frac{d}{dt}\langle\hat{O}\rangle = \langle[\hat{O},\hat{H}_{\text{System}}]_{-}\rangle$.
Background:
Due to the many-body interactions within $\hat{H}_{\text{System}}$, the dynamics of the observable $\hat{O}$ couples to new observables and the equation structure cannot be closed. This is the well-known BBGKY hierarchy problem, which can be systematically truncated with different methods such as the cluster-expansion approach. At the operator level, the microscopic polarization is defined by an expectation value for a single electronic transition between a valence and a conduction band. In second quantization, conduction-band electrons are described by the fermionic creation and annihilation operators $\hat{a}^{\dagger}_{c,\mathbf{k}}$ and $\hat{a}_{c,\mathbf{k}}$, respectively; an analogous identification, $\hat{a}^{\dagger}_{v,\mathbf{k}}$ and $\hat{a}_{v,\mathbf{k}}$, is made for the valence-band electrons. The corresponding electronic interband transitions then become $P^{\star}_{\mathbf{k}} = \langle \hat{a}^{\dagger}_{c,\mathbf{k}}\hat{a}_{v,\mathbf{k}} \rangle$ and $P_{\mathbf{k}} = \langle \hat{a}^{\dagger}_{v,\mathbf{k}}\hat{a}_{c,\mathbf{k}} \rangle$, which describe transition amplitudes for moving an electron from the conduction to the valence band (the $P^{\star}_{\mathbf{k}}$ term) or vice versa (the $P_{\mathbf{k}}$ term). At the same time, the electron distribution follows from $f^{e}_{\mathbf{k}} = \langle \hat{a}^{\dagger}_{c,\mathbf{k}}\hat{a}_{c,\mathbf{k}} \rangle$.
Background:
It is also convenient to follow the distribution of electronic vacancies, i.e., holes, $f^{h}_{\mathbf{k}} = 1 - \langle \hat{a}^{\dagger}_{v,\mathbf{k}}\hat{a}_{v,\mathbf{k}} \rangle = \langle \hat{a}_{v,\mathbf{k}}\hat{a}^{\dagger}_{v,\mathbf{k}} \rangle$, that are left in the valence band by optical excitation processes.
Principal structure of SBEs:
The quantum dynamics of optical excitations yields a set of integro-differential equations that constitute the SBEs. These contain the renormalized Rabi energy $\Omega_{\mathbf{k}} = d\cdot E + \sum_{\mathbf{k}'\neq\mathbf{k}} V_{\mathbf{k}-\mathbf{k}'}\,P_{\mathbf{k}'}$ as well as the renormalized carrier energy $\tilde{\varepsilon}_{\mathbf{k}} = \varepsilon_{\mathbf{k}} - \sum_{\mathbf{k}'\neq\mathbf{k}} V_{\mathbf{k}-\mathbf{k}'}\left[f^{e}_{\mathbf{k}'} + f^{h}_{\mathbf{k}'}\right]$, where $\varepsilon_{\mathbf{k}}$ corresponds to the energy of free electron–hole pairs and $V_{\mathbf{k}}$ is the Coulomb matrix element, given here in terms of the carrier wave vector $\mathbf{k}$. The symbolically denoted $\cdots|_{\text{scatter}}$ contributions stem from the hierarchical coupling due to many-body interactions. Conceptually, $P_{\mathbf{k}}$, $f^{e}_{\mathbf{k}}$, and $f^{h}_{\mathbf{k}}$ are single-particle expectation values, while the hierarchical coupling originates from two-particle correlations such as polarization–density correlations or polarization–phonon correlations. Physically, these two-particle correlations introduce several nontrivial effects such as screening of the Coulomb interaction, Boltzmann-type scattering of $f^{e}_{\mathbf{k}}$ and $f^{h}_{\mathbf{k}}$ toward Fermi–Dirac distributions, excitation-induced dephasing, and further renormalization of energies due to correlations.
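In the coherent two-band limit, the principal structure described above is commonly written in the following form (a schematic sketch following standard textbook conventions; signs and prefactors vary between references):

```latex
i\hbar \frac{\partial P_{\mathbf{k}}}{\partial t}
  = \tilde{\varepsilon}_{\mathbf{k}}\, P_{\mathbf{k}}
    - \left(1 - f^{e}_{\mathbf{k}} - f^{h}_{\mathbf{k}}\right) \Omega_{\mathbf{k}}
    + \left. i\hbar \frac{\partial P_{\mathbf{k}}}{\partial t} \right|_{\mathrm{scatter}},
\qquad
\hbar \frac{\partial f^{e(h)}_{\mathbf{k}}}{\partial t}
  = 2\, \mathrm{Im}\!\left[ \Omega_{\mathbf{k}}\, P^{\star}_{\mathbf{k}} \right]
    + \left. \hbar \frac{\partial f^{e(h)}_{\mathbf{k}}}{\partial t} \right|_{\mathrm{scatter}}
```

Here the renormalized Rabi energy and carrier energy are those defined in the text; the Pauli-blocking factor $(1 - f^{e}_{\mathbf{k}} - f^{h}_{\mathbf{k}})$ and the Coulomb renormalizations are what distinguish the SBEs from the atomic optical Bloch equations.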
Principal structure of SBEs:
All these correlation effects can be systematically included by solving also the dynamics of the two-particle correlations. At this level of sophistication, one can use the SBEs to predict the optical response of semiconductors without phenomenological parameters, which gives the SBEs a very high degree of predictability. Indeed, one can use the SBEs to predict suitable laser designs through the accurate knowledge they produce about the semiconductor's gain spectrum. One can even use the SBEs to deduce the existence of correlations, such as bound excitons, from quantitative measurements. The presented SBEs are formulated in momentum space, since the carrier's crystal momentum follows from $\hbar\mathbf{k}$. An equivalent set of equations can also be formulated in position space. However, the correlation computations in particular are much simpler to perform in momentum space.
Interpretation and consequences:
The $P_{\mathbf{k}}$ dynamics shows a structure where an individual $P_{\mathbf{k}}$ is coupled to all other microscopic polarizations due to the Coulomb interaction $V_{\mathbf{k}}$. Therefore, the transition amplitude $P_{\mathbf{k}}$ is collectively modified by the presence of the other transition amplitudes. Only if one sets $V_{\mathbf{k}}$ to zero does one find isolated transitions within each $\mathbf{k}$ state that follow exactly the same dynamics as the optical Bloch equations predict. Therefore, already the Coulomb interaction among the $P_{\mathbf{k}}$ produces a new solid-state effect compared with optical transitions in simple atoms.
Interpretation and consequences:
Conceptually, $P_{\mathbf{k}}$ is just a transition amplitude for exciting an electron from the valence to the conduction band. At the same time, the homogeneous part of the $P_{\mathbf{k}}$ dynamics yields an eigenvalue problem that can be expressed through the generalized Wannier equation. The eigenstates of the Wannier equation are analogous to the bound solutions of the hydrogen problem of quantum mechanics. These are often referred to as exciton solutions, and they formally describe the Coulombic binding of oppositely charged electrons and holes.
Interpretation and consequences:
However, a real exciton is a true two-particle correlation because one must then have a correlation between an electron and a hole. Therefore, the appearance of exciton resonances in the polarization does not signify the presence of excitons, because $P_{\mathbf{k}}$ is a single-particle transition amplitude. The excitonic resonances are a direct consequence of the Coulomb coupling among all transitions possible in the system. In other words, the single-particle transitions themselves are influenced by the Coulomb interaction, making it possible to detect an exciton resonance in the optical response even when true excitons are not present. Therefore, it is often customary to speak of excitonic resonances instead of exciton resonances. The actual role of excitons in the optical response can only be deduced from the quantitative changes they induce in the linewidth and energy shift of the excitonic resonances. The solutions of the Wannier equation provide valuable insight into the basic properties of a semiconductor's optical response. In particular, one can solve the steady-state solutions of the SBEs to predict the optical absorption spectrum analytically with the so-called Elliott formula. In this form, one can verify that an unexcited semiconductor shows several excitonic absorption resonances well below the fundamental bandgap energy. Obviously, this situation cannot be probing excitons, because the initial many-body system contains no electrons and holes to begin with. Furthermore, the probing can, in principle, be performed so gently that one essentially does not excite electron–hole pairs. This gedanken experiment illustrates nicely why one can detect excitonic resonances without having excitons in the system, all by virtue of the Coulomb coupling among the transition amplitudes.
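As a sketch of the analytic result mentioned above, the Elliott formula expresses the absorption of an unexcited semiconductor as a sum over the Wannier-equation eigenstates $\phi_{\lambda}$ with energies $E_{\lambda}$ (schematic form only; prefactors omitted and a constant dephasing $\gamma$ assumed):

```latex
\alpha(\omega) \;\propto\; \sum_{\lambda}
  \left| \phi_{\lambda}(r = 0) \right|^{2}
  \, \mathrm{Im}\!\left[ \frac{1}{E_{\lambda} - \hbar\omega - i\gamma} \right]
```

The bound-state energies $E_{\lambda}$ below the bandgap produce the excitonic absorption resonances discussed in the text, weighted by the probability of finding electron and hole at the same position.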
Extensions:
The SBEs are particularly useful when solving the light propagation through a semiconductor structure. In this case, one needs to solve the SBEs together with the Maxwell's equations driven by the optical polarization. This self-consistent set is called the Maxwell–SBEs and is frequently applied to analyze present-day experiments and to simulate device designs.
Extensions:
At this level, the SBEs provide an extremely versatile method that describes linear as well as nonlinear phenomena such as excitonic effects, propagation effects, semiconductor microcavity effects, four-wave-mixing, polaritons in semiconductor microcavities, gain spectroscopy, and so on. One can also generalize the SBEs by including excitation with terahertz (THz) fields that are typically resonant with intraband transitions. One can also quantize the light field and investigate quantum-optical effects that result. In this situation, the SBEs become coupled to the semiconductor luminescence equations.
**Eisenstein integral**
Eisenstein integral:
In mathematical representation theory, the Eisenstein integral is an integral introduced by Harish-Chandra in the representation theory of semisimple Lie groups, analogous to Eisenstein series in the theory of automorphic forms. Harish-Chandra used Eisenstein integrals to decompose the regular representation of a semisimple Lie group into representations induced from parabolic subgroups. Trombi gave a survey of Harish-Chandra's work on this.
Definition:
Harish-Chandra defined the Eisenstein integral by an integral over $K$ of the form $\int_K \cdots \exp\big((i\nu - \rho_P)\,H_P(xk)\big)\,dk$, where: $x$ is an element of a semisimple group $G$; $P = MAN$ is a cuspidal parabolic subgroup of $G$; $\nu$ is an element of the complexification of $\mathfrak{a}$, where $\mathfrak{a}$ is the Lie algebra of $A$ in the Langlands decomposition $P = MAN$.
$K$ is a maximal compact subgroup of $G$, with $G = KP$; $\psi$ is a cuspidal function on $M$, satisfying some extra conditions; $\tau$ is a finite-dimensional unitary double representation of $K$; and $H_P(x) = \log a$, where $x = kman$ is the decomposition of $x$ in $G = KMAN$.
**User intent**
User intent:
User intent, otherwise known as query intent or search intent, is the identification and categorization of what an online user intended or wanted to find when they typed their search terms into a web search engine, for the purposes of search engine optimisation or conversion rate optimisation. Examples of user intent are fact-checking, comparison shopping or navigating to other websites.
Optimizing For User Intent:
To increase rankings on search engines, marketers need to create content that best satisfies the queries users enter on their smartphones or desktops. Creating content with user intent in mind helps increase the value of the information being showcased. Keyword research can help determine user intent: the search terms a user enters into a web search engine to find content, services, or products are the words that should be used on the webpage to optimise for user intent. Google can show SERP features such as featured snippets, knowledge cards or knowledge panels for queries where the search intent is clear. SEO practitioners take this into account because Google can often satisfy the user intent without the user ever leaving the Google SERP. The better Google gets at figuring out user intent, the fewer users click on search results; as of 2019, less than half of Google searches resulted in clicks.
Types:
Though there are various ways of classifying the categories of user intent, overall, they tend to follow the same clusters. Until recently, there were three broad categories: informational, transactional, and navigational. However, after the rise of mobile search, other categories have appeared or have segmented into more specific categories. For example, as mobile users may want to find directions or information about a specific physical location, some marketers have proposed categories such as "local intent," as in searches like "XY near me." Additionally, there is commercial search intent, which is when someone searches for a product or service to learn more about it or to compare alternatives before finalizing a purchase. The major types, with examples, are:
Informational intent: Donald Trump, Who is Maradona?, How to lose weight?
Navigational intent: Facebook login, Wikipedia contribution page
Transactional intent: latest iPhone, Amazon coupons, cheap Dell laptop, fence installers
Commercial intent: top headphones, best marketing agency, X protein powder review
Local search intent: restaurants near me, nearest gas station
Many search queries also have mixed search intent. For example, the query "Best iPhone repair shop near me" carries both transactional and local search intent. Mixed search intent can easily occur with homonyms, and such SERPs tend to be volatile because user signals differ. User intent is often misinterpreted, and assuming there are only a few user intent types does not give a complete picture of user behavior.
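The classification above can be sketched as a toy rule-based classifier. This is purely illustrative: the keyword lists, the function name, and the premise that keyword matching alone suffices are all assumptions for the example, not a real SEO tool.

```python
import re

# Toy intent classifier for the categories described above.
# Keyword lists are invented examples, not a production taxonomy.
INTENT_KEYWORDS = {
    "navigational": ["login", "homepage", "wikipedia"],
    "transactional": ["buy", "cheap", "coupons", "price", "order"],
    "commercial": ["best", "top", "review", "compare"],
    "local": ["near me", "nearest", "open now"],
}

def classify_intent(query: str) -> list[str]:
    """Return every matching intent category; a query can match several
    (mixed search intent). Defaults to 'informational'."""
    q = query.lower()
    matches = [
        intent
        for intent, words in INTENT_KEYWORDS.items()
        # \b word boundaries stop "top" matching inside "laptop".
        if any(re.search(rf"\b{re.escape(w)}\b", q) for w in words)
    ]
    return matches or ["informational"]

print(classify_intent("cheap Dell laptop"))    # ['transactional']
print(classify_intent("restaurants near me"))  # ['local']
print(classify_intent("Who is Maradona?"))     # ['informational']
```

A query such as "best pizza near me" returns both 'commercial' and 'local', mirroring the mixed-intent behavior described above.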
Types:
It is also a term to describe what type of activity, business or services users are searching for (not only the user behavior after the search).
Types:
Example: when you type 'Spanish games' into a search engine (with your browser settings in English), you get results for methods of learning Spanish, not actual games of Spanish origin. In this example, the user intent is to learn the Spanish language, not to play typical games. This intent is recognized by Google and the other search engines, which strive to display their SERP results based on the user's interest. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Acid–base homeostasis**
Acid–base homeostasis:
Acid–base homeostasis is the homeostatic regulation of the pH of the body's extracellular fluid (ECF). The proper balance between the acids and bases (i.e. the pH) in the ECF is crucial for the normal physiology of the body and for cellular metabolism. The pH of the intracellular fluid and the extracellular fluid need to be maintained at a constant level. The three-dimensional structures of many extracellular proteins, such as the plasma proteins and the membrane proteins of the body's cells, are very sensitive to the extracellular pH. Stringent mechanisms therefore exist to maintain the pH within very narrow limits. Outside the acceptable range of pH, proteins are denatured (i.e. their 3D structure is disrupted), causing enzymes and ion channels (among others) to malfunction.
Acid–base homeostasis:
An acid–base imbalance is known as acidemia when the pH is acidic, or alkalemia when the pH is alkaline.
Lines of defense:
In humans and many other animals, acid–base homeostasis is maintained by multiple mechanisms involved in three lines of defense: Chemical: The first lines of defense are immediate, consisting of the various chemical buffers which minimize pH changes that would otherwise occur in their absence. These buffers include the bicarbonate buffer system, the phosphate buffer system, and the protein buffer system.
Lines of defense:
Respiratory component: The second line of defense is rapid, consisting of the control of the carbonic acid (H2CO3) concentration in the ECF by changing the rate and depth of breathing through hyperventilation or hypoventilation. This blows off or retains carbon dioxide (and thus carbonic acid) in the blood plasma as required.
Lines of defense:
Metabolic component: The third line of defense is slow, best measured by the base excess, and mostly depends on the renal system, which can add or remove bicarbonate ions (HCO−3) to or from the ECF. Bicarbonate ions are derived from metabolic carbon dioxide, which is enzymatically converted to carbonic acid in the renal tubular cells. There, carbonic acid spontaneously dissociates into hydrogen ions and bicarbonate ions. When the pH in the ECF falls, hydrogen ions are excreted into urine, while bicarbonate ions are secreted into blood plasma, causing the plasma pH to rise. The converse happens if the pH in the ECF tends to rise: bicarbonate ions are then excreted into the urine and hydrogen ions into the blood plasma. The second and third lines of defense operate by making changes to the buffers, each of which consists of two components: a weak acid and its conjugate base. It is the ratio of the concentration of the weak acid to that of its conjugate base that determines the pH of the solution. Thus, by manipulating firstly the concentration of the weak acid, and secondly that of its conjugate base, the pH of the extracellular fluid (ECF) can be adjusted very accurately to the correct value. The bicarbonate buffer, consisting of a mixture of carbonic acid (H2CO3) and a bicarbonate (HCO−3) salt in solution, is the most abundant buffer in the extracellular fluid, and it is also the buffer whose acid-to-base ratio can be changed very easily and rapidly.
Acid–base balance:
The pH of the extracellular fluid, including the blood plasma, is normally tightly regulated between 7.32 and 7.42 by the chemical buffers, the respiratory system, and the renal system. The normal pH in the fetus differs from that in the adult. In the fetus, the pH in the umbilical vein is normally 7.25 to 7.45 and that in the umbilical artery is normally 7.18 to 7.38. Aqueous buffer solutions react with strong acids or strong bases by absorbing excess H+ ions or OH− ions, replacing the strong acids and bases with weak acids and weak bases. This has the effect of damping pH changes, or reducing the pH change that would otherwise have occurred. But buffers cannot correct abnormal pH levels in a solution, be that solution in a test tube or in the extracellular fluid. Buffers typically consist of a pair of compounds in solution, one of which is a weak acid and the other a weak base. The most abundant buffer in the ECF consists of a solution of carbonic acid (H2CO3) and the bicarbonate (HCO−3) salt of, usually, sodium (Na+). Thus, when there is an excess of OH− ions in the solution, carbonic acid partially neutralizes them by forming H2O and bicarbonate (HCO−3) ions. Similarly, an excess of H+ ions is partially neutralized by the bicarbonate component of the buffer solution to form carbonic acid (H2CO3), which, because it is a weak acid, remains largely in the undissociated form, releasing far fewer H+ ions into the solution than the original strong acid would have done. The pH of a buffer solution depends solely on the ratio of the molar concentrations of the weak acid to the weak base. The higher the concentration of the weak acid in the solution (compared to the weak base), the lower the resulting pH; similarly, the more the weak base predominates, the higher the resulting pH.
Acid–base balance:
This principle is exploited to regulate the pH of the extracellular fluids (rather than just buffering the pH). For the carbonic acid-bicarbonate buffer, a molar ratio of weak acid to weak base of 1:20 produces a pH of 7.4; and vice versa—when the pH of the extracellular fluids is 7.4 then the ratio of carbonic acid to bicarbonate ions in that fluid is 1:20.
Acid–base balance:
Henderson–Hasselbalch equation The Henderson–Hasselbalch equation, when applied to the carbonic acid–bicarbonate buffer system in the extracellular fluids, states that: pH = pKa H2CO3 + log10 ([HCO−3] / [H2CO3]), where: pH is the negative logarithm (or cologarithm) of the molar concentration of hydrogen ions in the extracellular fluid.
pKa H2CO3 is the cologarithm of the acid dissociation constant of carbonic acid. It is equal to 6.1.
[HCO−3] is the molar concentration of bicarbonate in the blood plasma.
Acid–base balance:
[H2CO3] is the molar concentration of carbonic acid in the extracellular fluid. However, since the carbonic acid concentration is directly proportional to the partial pressure of carbon dioxide (PCO2) in the extracellular fluid, the equation can be rewritten as follows: pH = 6.1 + log10 ([HCO−3] / (0.0307 × PCO2)), where: pH is the negative logarithm of the molar concentration of hydrogen ions in the extracellular fluid.
Acid–base balance:
[HCO−3] is the molar concentration of bicarbonate in the plasma.
PCO2 is the partial pressure of carbon dioxide in the blood plasma.The pH of the extracellular fluids can thus be controlled by the regulation of PCO2 and the other metabolic acids.
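The rewritten equation lends itself to a quick numerical check. The sketch below assumes typical normal arterial values ([HCO−3] ≈ 24 mmol/L, PCO2 ≈ 40 mmHg), which are illustrative inputs not stated in the text; the constants 6.1 and 0.0307 are those quoted above.

```python
import math

# Henderson-Hasselbalch calculation for the carbonic acid-bicarbonate
# buffer. Constants are those quoted in the text; the input values
# (24 mmol/L bicarbonate, 40 mmHg PCO2) are assumed typical normals.
PKA_CARBONIC = 6.1
CO2_SOLUBILITY = 0.0307  # mmol/L of dissolved CO2 per mmHg of PCO2

def plasma_ph(bicarbonate_mmol_l: float, pco2_mmhg: float) -> float:
    """pH = 6.1 + log10([HCO3-] / (0.0307 * PCO2))."""
    carbonic_acid = CO2_SOLUBILITY * pco2_mmhg  # proportional to [H2CO3]
    return PKA_CARBONIC + math.log10(bicarbonate_mmol_l / carbonic_acid)

print(round(plasma_ph(24, 40), 2))  # ~7.39, within the normal range

# The 1:20 acid-to-base ratio mentioned earlier gives the same answer:
print(round(6.1 + math.log10(20), 2))  # ~7.4
```

Note how doubling PCO2 while holding bicarbonate constant lowers the computed pH, matching the respiratory component described above.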
Acid–base balance:
Homeostatic mechanisms Homeostatic control can change the PCO2 and hence the pH of the arterial plasma within a few seconds. The partial pressure of carbon dioxide in the arterial blood is monitored by the central chemoreceptors of the medulla oblongata. These chemoreceptors are sensitive to the levels of carbon dioxide and pH in the cerebrospinal fluid. The central chemoreceptors send their information to the respiratory centers in the medulla oblongata and pons of the brainstem. The respiratory centers then determine the average rate of ventilation of the alveoli of the lungs, to keep the PCO2 in the arterial blood constant. The respiratory center does so via motor neurons which activate the muscles of respiration (in particular, the diaphragm). A rise in the PCO2 in the arterial blood plasma above 5.3 kPa (40 mmHg) reflexively causes an increase in the rate and depth of breathing. Normal breathing is resumed when the partial pressure of carbon dioxide has returned to 5.3 kPa. The converse happens if the partial pressure of carbon dioxide falls below the normal range: breathing may be temporarily halted, or slowed down, to allow carbon dioxide to accumulate once more in the lungs and arterial blood.
Acid–base balance:
The sensor for the plasma HCO−3 concentration is not known for certain. It is very probable that the renal tubular cells of the distal convoluted tubules are themselves sensitive to the pH of the plasma. The metabolism of these cells produces CO2, which is rapidly converted to H+ and HCO−3 through the action of carbonic anhydrase. When the extracellular fluids tend towards acidity, the renal tubular cells secrete the H+ ions into the tubular fluid from where they exit the body via the urine. The HCO−3 ions are simultaneously secreted into the blood plasma, thus raising the bicarbonate ion concentration in the plasma, lowering the carbonic acid/bicarbonate ion ratio, and consequently raising the pH of the plasma. The converse happens when the plasma pH rises above normal: bicarbonate ions are excreted into the urine, and hydrogen ions into the plasma. These combine with the bicarbonate ions in the plasma to form carbonic acid (H+ + HCO−3 ⇌ H2CO3), thus raising the carbonic acid:bicarbonate ratio in the extracellular fluids, and returning its pH to normal.In general, metabolism produces more waste acids than bases. Urine produced is generally acidic and is partially neutralized by the ammonia (NH3) that is excreted into the urine when glutamate and glutamine (carriers of excess, no longer needed, amino groups) are deaminated by the distal renal tubular epithelial cells. Thus some of the "acid content" of the urine resides in the resulting ammonium ion (NH4+) content of the urine, though this has no effect on pH homeostasis of the extracellular fluids.
Imbalance:
Acid–base imbalance occurs when a significant insult causes the blood pH to shift out of the normal range (7.32 to 7.42). An abnormally low pH in the extracellular fluid is called an acidemia and an abnormally high pH is called an alkalemia.
Imbalance:
Acidemia and alkalemia unambiguously refer to the actual change in the pH of the extracellular fluid (ECF). Two other similar-sounding terms are acidosis and alkalosis. They refer to the customary effect of a component, respiratory or metabolic: an acidosis would cause an acidemia on its own (i.e. if left "uncompensated" by an alkalosis), and similarly an alkalosis would cause an alkalemia on its own. In medical terminology, the terms acidosis and alkalosis should always be qualified by an adjective to indicate the etiology of the disturbance: respiratory (indicating a change in the partial pressure of carbon dioxide) or metabolic (indicating a change in the base excess of the ECF). There are therefore four different acid–base problems: metabolic acidosis, respiratory acidosis, metabolic alkalosis, and respiratory alkalosis. One or a combination of these conditions may occur simultaneously. For instance, a metabolic acidosis (as in uncontrolled diabetes mellitus) is almost always partially compensated by a respiratory alkalosis (hyperventilation). Similarly, a respiratory acidosis can be completely or partially corrected by a metabolic alkalosis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
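The four primary disturbances can be illustrated with a deliberately simplified decision rule. The pH limits follow the normal range quoted above (7.32–7.42); the PCO2 cut-offs (35–45 mmHg around the normal ~40 mmHg) are assumed illustrative values, and real blood-gas interpretation must also assess compensation and mixed disorders.

```python
# Simplified sketch of the four primary acid-base disturbances named
# above. pH limits follow the text's normal range; the PCO2 cut-offs
# are assumed illustrative values, not a clinical algorithm.
def primary_disturbance(ph: float, pco2_mmhg: float) -> str:
    if ph < 7.32:  # acidemia
        # High CO2 (carbonic acid) drives pH down -> respiratory;
        # otherwise the cause is metabolic (a base deficit).
        return "respiratory acidosis" if pco2_mmhg > 45 else "metabolic acidosis"
    if ph > 7.42:  # alkalemia
        # Low CO2 (hyperventilation) drives pH up -> respiratory.
        return "respiratory alkalosis" if pco2_mmhg < 35 else "metabolic alkalosis"
    return "pH within normal range"

print(primary_disturbance(7.25, 60))  # respiratory acidosis
print(primary_disturbance(7.25, 30))  # metabolic acidosis (partly compensated)
print(primary_disturbance(7.50, 25))  # respiratory alkalosis
```

The second example shows the compensation pattern described above: a low PCO2 alongside an acidemia suggests a metabolic acidosis partially compensated by hyperventilation.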
**Conjoined twins**
Conjoined twins:
Conjoined twins – popularly referred to as Siamese twins – are twins joined in utero. It is a very rare phenomenon, estimated to occur in anywhere between one in 49,000 births and one in 189,000 births, with a somewhat higher incidence in Southwest Asia and Africa. Approximately half are stillborn, and an additional one-third die within 24 hours. Most live births are female, with a ratio of 3:1. Two possible explanations of the cause of conjoined twins have been proposed. The one that is generally accepted is fission, in which the fertilized egg splits partially. The other explanation, no longer believed to be accurate, is fusion, in which the fertilized egg completely separates, but stem cells (which search for similar cells) find similar stem cells on the other twin and fuse the twins together. Conjoined twins share a single common chorion, placenta, and amniotic sac in utero, but so do some monozygotic but non-conjoined twins. Chang and Eng Bunker (1811–1874) were brothers born in Siam (now Thailand) who traveled widely for many years and were known internationally as the Siamese Twins. Chang and Eng were joined at the torso by a band of flesh and cartilage, and by their fused livers. In modern times, they could easily have been separated. Due to the brothers' fame and the rarity of the condition, the term Siamese twins came to be associated with conjoined twins.
Causes:
There are two theories about the development of conjoined twins. The first is that a single fertilized egg does not fully split during the process of forming identical twins. If the zygote division occurs after two weeks of the development of the embryonic disc, it results in the formation of conjoined twins. The second theory is that a fusion of two fertilized eggs occurs earlier in development. Partial splitting of the primitive node and streak may result in the formation of conjoined twins. These twins are classified according to the nature and degree of their union. Occasionally, monozygotic twins are connected only by a common skin bridge or by a common liver bridge. The type of twins formed depends on when and to what extent abnormalities of the node and streak occurred. Misexpression of genes, such as Goosecoid, may also result in conjoined twins. Goosecoid activates inhibitors of BMP4 and contributes to regulation of head development. Over- or underexpression of this gene in laboratory animals results in severe malformations of the head region, including duplications, similar to some types of conjoined twins.
Types:
Conjoined twins are typically classified by the point at which their bodies are joined. The most common types of conjoined twins are: Thoraco-omphalopagus (28% of cases): Two bodies fused from the upper chest to the lower chest. These twins usually share a heart and may also share the liver or part of the digestive system.
Thoracopagus (18.5%): Two bodies fused from the upper chest to lower belly. The heart is always shared in these cases. As of 2015, twins who share a heart have not been able to both survive separation; a designated twin who is allotted the heart may survive if the other twin is sacrificed.
Omphalopagus (10%): Two bodies fused at the lower abdomen. Unlike thoracopagus, the heart is not shared; however, the twins often share a liver, a digestive system, a diaphragm and other organs.
Parasitic twins (10%): Twins that are asymmetrically conjoined, resulting in one twin that is small, less formed, and dependent on the larger twin's organs for survival.
Types:
Craniopagus (6%): Fused skulls, but separate bodies. These twins' heads may be conjoined at the back, front, or side of the head, but not on the face or at the base of the skull. Other, less common types of conjoined twins include: Cephalopagus: Two faces on opposite sides of a single, conjoined head; the upper portion of the body is fused while the bottom portions are separate. These twins generally cannot survive due to severe malformations of the brain. This is also known as janiceps (after the two-faced Roman deity Janus).
Types:
Syncephalus: One head with a single face but four ears and two bodies.
Cephalothoracopagus: Bodies fused at the head and thorax, with two faces facing in opposite directions, or sometimes with a single face and an enlarged skull.
Xiphopagus: Two bodies fused in the xiphoid cartilage, which extends approximately from the navel to the lower breastbone. These twins almost never share any vital organs, with the exception of the liver. A famous example is Chang and Eng Bunker.
Ischiopagus: Fused lower half of the two bodies, with spines conjoined end-to-end at a 180° angle. These twins have four arms; one, two, three or four legs; and typically one set of external genitalia and one anus.
Omphalo-Ischiopagus: Fused in a similar fashion to ischiopagus twins, but facing each other, with a joined abdomen, akin to omphalopagus. These twins have four arms, and two, three, or four legs.
Parapagus: Fused side by side with a shared pelvis. Those that are dithoracic parapagus are fused at the abdomen and pelvis, but not at the thorax. Those that are diprosopic parapagus have one trunk and two faces. Those that are dicephalic parapagus have one trunk and two heads, and may have two (dibrachius), three (tribrachius), or four (tetrabrachius) arms.
Craniopagus parasiticus: Like craniopagus, but with a second bodiless head attached to the dominant head.
Pygopagus or Iliopagus: Two bodies joined at the pelvis.
Rachipagus: Twins joined along the back of their bodies, with fusion of the vertebral arches and the soft tissue from the head to the buttocks. Tricephalus (conjoined triplets): Extremely rare conjoining of three fetuses. Very few confirmed cases, both human and animal, are known.
Management:
Separation Surgery to separate conjoined twins may range from very easy to very difficult depending on the point of attachment and the internal parts that are shared. Due to the complex nature of these cases, some medical organizations such as the Children's Hospital of Philadelphia (US, Pennsylvania) have assembled multidisciplinary medical teams that specialize in conjoined twins. Most cases of separation are extremely risky and life-threatening. Though there have been a number of successful separations throughout history, in many cases, the surgery results in the death of one or both of the twins, particularly if they are joined at the head or share a vital organ. This makes the ethics of surgical separation, where the twins can survive if not separated, contentious. Alice Dreger of Northwestern University found the quality of life of twins who remain conjoined to be higher than is commonly supposed. Lori and George Schappell and Abby and Brittany Hensel are notable examples.
Management:
The first recorded separation of conjoined twins took place in the Byzantine Empire in the 900s. One of the conjoined twins had already died, so the doctors of the town attempted to separate the dead twin from the surviving twin. The result was briefly successful, as the remaining twin lived for three days after separation. The next recorded case of separating conjoined twins was several centuries later, in Germany, in 1689; this first recorded successful separation was performed by Johannes Fatio. In 1955, neurosurgeon Harold Voris (1902–1980) and his team at Mercy Hospital in Chicago performed the first successful operation to separate craniopagus twins (conjoined at the head) that resulted in long-term survival for both. The larger girl was reported in 1963 as developing normally, but the smaller girl was permanently impaired. In 1957, Bertram Katz and his surgical team made international medical history by performing the world's first successful separation of conjoined twins sharing a vital organ. Omphalopagus twins John Nelson and James Edward Freeman (Johnny and Jimmy) were born in Youngstown, Ohio, on April 27, 1956. The boys shared a liver but had separate hearts and were successfully separated at North Side Hospital in Youngstown, Ohio, by Bertram Katz. The operation was funded by the Ohio Crippled Children's Service Society. Recent successful separations of conjoined twins include that of Ganga and Jamuna Shrestha in 2001, who were born in Kathmandu, Nepal, in 2000. The 97-hour surgery on the pair of craniopagus twins was a landmark one which took place in Singapore; the team was led by neurosurgeons Chumpon Chan and Keith Goh. The surgery left Ganga with brain damage and Jamuna unable to walk.
Seven years later, Ganga Shrestha died at the Model Hospital in Kathmandu in July 2009, at the age of eight, three days after being admitted for treatment of a severe chest infection. Infants Rose and Grace Attard, conjoined twins from Malta, were separated in the United Kingdom by court order Re A over the religious objections of their parents, Michaelangelo and Rina Attard. The twins were attached at the lower abdomen and spine. The surgery took place in November 2000 at St Mary's Hospital in Manchester. The operation was controversial because Rose, the weaker twin, would die as a result of the procedure, as her heart and lungs were dependent upon Grace's. However, if the operation had not taken place, it was certain that both twins would die. Grace survived to enjoy a normal childhood. In 2003, two 29-year-old women from Iran, Ladan and Laleh Bijani, who were joined at the head but had separate brains (craniopagus), were surgically separated in Singapore, despite surgeons' warnings that the operation could be fatal to one or both. Their complex case was accepted only because technologically advanced graphical imagery and modeling would allow the medical team to plan the risky surgery. However, an undetected major vein hidden from the scans was discovered during the operation. The separation was completed, but both women died while still in surgery.
Management:
In 2019, Safa and Marwa Ullah were separated at Great Ormond Street Hospital in London, England. The twins, born in January 2017, were joined at the top of the head, with separate brains and a cylindrical shared skull, each twin facing in the opposite direction to the other. The surgery was jointly led by neurosurgeon Owase Jeelani and plastic surgeon Professor David Dunaway. It presented particular difficulties due to a number of shared veins and a distortion in the shape of the girls' brains, causing them to overlap; the distortion needed to be corrected in order for the separation to go ahead. The surgery utilized a team of more than 100, including bioengineers, 3D modelers, and a virtual reality designer. The separation was completed in February 2019 following a total of 52 hours of surgery over three separate operations. As of July 2019, both girls remained healthy and the family planned to return to their home in Pakistan in 2020.
History:
The Moche culture of ancient Peru depicted conjoined twins in their ceramics dating back to 300 CE. Writing around 415 AD, St. Augustine of Hippo, in his book City of God, refers to a man "double in his upper, but single in his lower half—having two heads, two chests, four hands, but one body and two feet like an ordinary man." Theophanes the Confessor, a Byzantine historian of the 9th century, writes that around 385/386 AD, "in the village of Emmaus in Palestine, a child was born perfectly normal below the navel but divided above it, so that it had two chests and two heads, each possessing the senses. One would eat and drink but the other did not eat; one would sleep but the other stayed awake. There were times when they played with each other, when both cried and hit each other. They lived for a little over two years. One died while the other lived for another four days and it, too, died." In Arabia, the twin brothers Hashim ibn Abd Manaf and 'Abd Shams were born with Hashim's leg attached to his twin brother's head. Legend says that their father, Abd Manaf ibn Qusai, separated his conjoined sons with a sword and that some priests believed that the blood that had flowed between them signified wars between their progeny (confrontations did occur between Banu al'Abbas and Banu Ummaya ibn 'Abd Shams in the year 750 AH). The Muslim polymath Abū al-Rayhān al-Bīrūnī described conjoined twins in his book Kitab-al-Saidana. The English twin sisters Mary and Eliza Chulkhurst, who were conjoined at the back (pygopagus), lived from 1100 to 1134 (or 1500 to 1534) and were perhaps the best-known early historical example of conjoined twins.
Other early conjoined twins to attain notice were the "Scottish brothers", allegedly of the dicephalus type, essentially two heads sharing the same body (1460–1488, although the dates vary); the pygopagus Helen and Judith of Szőny, Hungary (1701–1723), who enjoyed a brief career in music before being sent to live in a convent; and Rita and Cristina of Parodi of Sardinia, born in 1829. Rita and Cristina were dicephalus tetrabrachius (one body with four arms) twins and although they died at only eight months of age, they gained much attention as a curiosity when their parents exhibited them in Paris.
History:
Several sets of conjoined twins lived during the nineteenth century and made careers for themselves in the performing arts, though none achieved quite the same level of fame and fortune as Chang and Eng. Most notably, Millie and Christine McCoy (or McKoy), pygopagus twins, were born into slavery in North Carolina in 1851. They were sold to a showman, J.P. Smith, at birth, but were soon kidnapped by a rival showman. The kidnapper fled to England but was thwarted because England had already banned slavery. Smith traveled to England to collect the girls and brought with him their mother, Monimia, from whom they had been separated. He and his wife provided the twins with an education and taught them to speak five languages, play music, and sing. For the rest of the century, the twins enjoyed a successful career as "The Two-Headed Nightingale" and appeared with the Barnum Circus. In 1912, they died of tuberculosis, 17 hours apart.
History:
Giacomo and Giovanni Tocci, from Locana, Italy, were immortalized in Mark Twain's short story "Those Extraordinary Twins" as fictitious twins Angelo and Luigi. The Toccis, born in 1877, were dicephalus tetrabrachius twins, having one body with two legs, two heads, and four arms. From birth they were forced by their parents to perform and never learned to walk, as each twin controlled one leg (in modern times, physical therapy allows twins like the Toccis to learn to walk on their own). They are said to have disliked show business. In 1886, after touring the United States, the twins returned to Europe with their family. They are believed to have died around this time, though some sources claim they survived until 1940, living in seclusion in Italy.
Notable people:
Born 19th century and earlier
Mary and Eliza Chulkhurst, alleged names of the Biddenden Maids (per tradition, born in the 12th century) of Kent, England. They are the earliest set of conjoined twins whose names are (purportedly) known.
Lazarus and Joannes Baptista Colloredo (1617 – after 1646), autosite-and-parasite pair.
Helen and Judith of Szony (Hungary, 1701–1723), pygopagus.
Chang and Eng Bunker (1811–1874). The Bunker twins were born of Chinese origin in Siam (now Thailand), and the expression Siamese twins is derived from their case. They were joined by the areas around their xiphoid cartilages, but over time, the connective tissue stretched.
In 1834, a set of conjoined triplets was born in Catania. Two of the heads shared a neck while the other head had its own. The infant, a male, was described by Galvagni.
Millie and Christine McCoy (July 11, 1851 – October 8, 1912), (oblique pygopagus). The McCoy twins were born into slavery in Columbus County, North Carolina, United States. They went by the stage names "The Two-Headed Nightingale" and "The Eighth Wonder of the World" and had an extensive career before retiring to the farm on which they were born.
Notable people:
Giacomo and Giovanni Battista Tocci (1875? – 1912?), dicephalus tetrabrachius dipus.
Josefa and Rosa Blazek (January 20, 1878 – March 30, 1922), pygopagus. The Blazek twins were born in Skrejšov, Bohemia (now the Czech Republic). They began performing in public exhibitions at the age of 13, and their act later included Rosa's son Franz. The sisters died in Chicago, Illinois.
Notable people:
Born 20th century
Daisy and Violet Hilton of Brighton, England (1908–1969), pygopagus. The Hilton twins were performers who played musical instruments, sang, and danced. At the height of their career, they had the highest paid act in vaudeville. They also appeared in the movies Freaks and Chained for Life.
Lucio and Simplicio Godina of Samar, Philippines (1908–1936).
Masha and Dasha Krivoshlyapova of Moscow, Russia (1950–2003), the rarest form of conjoined twins, one of few cases of dicephalus tetrabrachius tripus (two heads, four arms, three legs).
Ronnie and Donnie Galyon of Ohio (1951–2020), omphalopagus; longest-lived conjoined twins in the world at 68 years and 250 days.
Tjitske and Folkje de Vries of Mûnein, Netherlands (b. 1953).
Wariboko and Tamunotonye Davies, born July 25, 1953, in Kano, Nigeria. Separated in London by a team led by Ian Aird. Tamunotonye died postoperatively; Wariboko became a nurse.
Lori and George Schappell, born September 18, 1961, in Reading, Pennsylvania, American entertainers, craniopagus. As of 2022, they are the world's oldest living conjoined twins. Guinness World Records noted that George's gender transition made him and Lori the first same-sex conjoined twins to identify as different genders.
Ganga and Jamuna Mondal of India, born 1969 or 1970, known professionally as The Spider Girls and The Spider Sisters. Ischiopagus.
Anna and Barbara Rozycki (born 1970), the first conjoined twins successfully separated in the UK.
Ma Nan Soe and Ma Nan San (born 1971 in Myanmar), separated in July 1971 at Yangon Pediatric Hospital. They were joined from chest to belly button. Ma Nan San died one month and seven days after the operation.
Notable people:
Elisa and Lisa Hansen of Ogden, Utah, born by Caesarean section on October 18, 1977, were conjoined at the top of the head (craniopagus). They were separated in 1979 after a 16-hour surgery and were the first craniopagus twins to both survive the operation. Elisa lost the use of her right side after the surgery, but went on to complete school, win medals in the Special Olympics, work, and act in the theatre. Elisa and Lisa died in 2020 (aged 42).
Notable people:
Ladan and Laleh Bijani of Shiraz, Iran (1974–2003); died during separation surgery in Singapore. Craniopagus.
Baby Girl A and Baby Girl B (born 1977 in New Jersey) shared a single six-chambered heart. Separation surgery, led by C. Everett Koop, involved the instant death of Baby Girl A; the difficult ethical and religious concerns generated significant local newspaper coverage. Baby Girl B survived for three months.
Viet and Duc Nguyen, born on February 25, 1981, in Kon Tum Province, Vietnam, and separated in 1988 in Ho Chi Minh City. Viet died on October 6, 2007. Ischiopagus.
Maria and Consolata Mwakikuti of Tanzania (1996–2018); conjoined by the abdomen; died of respiratory problems resulting from an abnormal, inoperable chest deformity.
Patrick and Benjamin Binder, separated in 1987 by a team of doctors led by Ben Carson. Craniopagus.
Andrew and Alex Olson, born in 1987, separated in April 1988 at the University of Nebraska Medical Center. Omphalopagus. Alex died in 2018.
Katie and Eilish Holton, born August 1988 in Ireland, separated at age 3 and a half. Katie died 4 days after the separation surgery due to a weak heart which went into cardiac arrest.
Abigail and Brittany Hensel are dicephalic parapagus twins born on March 7, 1990, in Carver County, Minnesota. Both graduated in 2012 from Bethel University, St. Paul, hired as teachers.
Tiesha and Iesha Turner (born 1991 in Texas), separated in 1992 at Texas Children's Hospital in Houston, Texas. Omphalopagus.
Ashley and Ashil Fokeer, born on November 2, 1992, in Mauritius.
Joseph and Luka Banda (born January 23, 1997, in Zambia), separated in 1997 in South Africa by Ben Carson (with a later intervention in 2001 to artificially close their skulls). Craniopagus.
José Armando and José Luis Cevallos Herrera were born in September 1999 in Milagro, Ecuador. They were accepted in 2021 to the State University of Milagro.
Maria del Carmen Andrade Solis and Maria Guadalupe Andrade Solis (better known as Carmen and Lupita) were born in June 2000 in Veracruz, Mexico. They later moved to the United States for healthcare with their parents.
Born 21st century:
Carl and Clarence Aguirre, born with vertical craniopagus in Silay City, Negros Occidental, on April 21, 2002. They were successfully separated on August 4, 2004.
Tabea and Lea Block, from Lemgo, Germany, were born as craniopagus twins joined on the tops of their heads on August 9, 2003. The girls shared some major veins, but their brains were separate. They were separated on September 16, 2004, although Tabea died about 90 minutes later.
Sohna and Mohna from Amritsar, India. Born in New Delhi on June 14, 2003. They have two hearts, arms, kidneys and spinal cords, while sharing a liver, gall bladder and legs.
Anastasia and Tatiana Dogaru, born outside Rome in Lazio, Italy, on January 13, 2004. As craniopagus twins, the top of Tatiana's head is attached to the back of Anastasia's head.
Lakshmi Tatma (born 2005) was an ischiopagus conjoined twin born in Araria district in the state of Bihar, India. She had four arms and four legs, resulting from a joining at the pelvis with a headless undeveloped parasitic twin.
In 2005 a set of conjoined triplets was detected, characterized as tricephalus, tetrabrachius, and tetrapus parapagothoracopagus, and the pregnancy was interrupted at 22 weeks.
Kendra and Maliyah Herrin, ischiopagus twins separated in 2006 at age 4.
Krista and Tatiana Hogan, Canadian twins conjoined at the head. Born October 25, 2006. They share part of their brain and can pass sensory information and thoughts between each other.
Trishna and Krishna from Bangladesh were born in December 2006. They are craniopagus twins, joined on the tops of their skulls and sharing a small amount of brain tissue. In 2009, they were separated in Melbourne, Australia.
Maria and Teresa Tapia, born in the Dominican Republic on April 8, 2010. Conjoined by the liver, pancreas, and a small portion of their small intestine. Separation occurred on November 7, 2011, at Children's Hospital of Richmond at VCU.
Aung Myat Kyaw and Aung Khant Kyaw (born in May 2011, Mandalay, Myanmar), connected at pelvis.
Jesus and Emanuel de Nazaré are dicephalic parapagus twins born in Pará, Brazil on December 19, 2011.
Zheng Han Wei and Zheng Han Jing, born in China on August 11, 2013. Conjoined by their sternum, pericardium, and liver. In 2014, they were separated in Shanghai, China, at the Shanghai Children's Medical Center.
Asa and Eli Hamby were born in 2014 in Georgia but died less than two days after birth due to heart failure. The twins were dicephalic parapagus, having two heads but being conjoined at the torso, arms and legs. They had separate spinal columns but one heart, making postnatal operations impossible.
Jadon and Anias McDonald, born in September 2015. Conjoined by the head. Successfully separated at Children's Hospital of Montefiore Medical Center by James T. Goodrich in October 2016.
Erin and Abby Delaney, born in Philadelphia, Pennsylvania on July 24, 2016. Conjoined by the head. They were successfully separated at Children's Hospital of Philadelphia on June 16, 2017.
Marieme and Ndeye Ndiaye, twin girls born in Senegal in 2017, living in Cardiff, UK in 2019.
Safa and Marwa Bibi, twin girls born in Hayatabad, Pakistan on January 17, 2017, conjoined by the head. Successfully separated at Great Ormond Street Hospital in February 2019.
Callie and Carter Torres, born January 30, 2017, in Houston, Texas, from Blackfoot, Idaho. They are omphalo-ischiopagus conjoined twins, attached at the pelvic area and sharing all organs from the belly button down, with just one leg each.
Yiğit and Derman Evrensel, twin boys born on June 21, 2018, Antalya, Turkey. They are craniopagus twins and were separated at Great Ormond Street Hospital in 2019 by the same surgeons that separated Safa and Marwa Bibi.
Ervina and Prefina, born June 29, 2018, in the Central African Republic. They were separated on June 5, 2020, at the Bambino Gesù Pediatric Hospital in Rome, Italy.
Mercy and Goodness Ede, born August 13, 2019, conjoined by the chest and abdomen. Successfully separated at the National Hospital in Abuja, Nigeria in November 2019.
Marie-Cléa and Marie-Cléanne Papillon, born in Mauritius in 2019. Conjoined from neck to abdomen, but also at the heart which had seven chambers, instead of four. Marie-Cléa did not survive the surgery to separate the two.
Valentina and Kristina, born in 2019 in Croatia, shared part of the digestive system and liver (xypho-omphalopagus). Several months after birth they developed twin-to-twin transfusion syndrome due to blood shunting, and they were successfully separated at University Hospital Centre Zagreb (Rebro) in an urgent procedure.
Susannah and Elizabeth Castle, born April 22, 2021, and separated December 10, 2021, in Philadelphia, Pennsylvania.
AmieLynn Rose and JamieLynn Rae Finley, born October 3, 2022 and separated January 23, 2023, in Fort Worth, Texas.
In fiction:
Conjoined twins have been the focus of several noteworthy works of entertainment, including: Stuck on You, a 2003 American comedy film written and directed by the Farrelly brothers and starring Matt Damon and Greg Kinnear as conjoined twin brothers, whose conflicting aspirations provide both conflict and humorous situations, in particular when one of them wishes to move to Hollywood to pursue a career as an actor.
Alone, a Thai horror film following Pim after the death of her sister Ploy and their subsequent separation.
Blood Sisters focuses on a French Canadian model who has a separated conjoined twin.
In the manga Claymore, Rafaela fuses with her sister Luciela, and their subsequent awakened form resembles the Twin Goddesses of Love.
The graphic novel Tarot: Witch of the Black Rose debuts a ghost twin; she/they are the constant companion(s) of Skeleton Man, her protector.
The Broadway musical Side Show depicts the lives of real-life conjoined twins Daisy and Violet Hilton, portrayed in the original Broadway production by Alice Ripley and Emily Skinner.
Reiko the Zombie Shop, a 1990s horror manga for women; bonus chapters focus on the unexpected life of Noriko and her "sister". A summoner, Dr. Zero, can resurrect and control fused zombies called "medicinal death".
MA GI & CA L., a conjoined magical android, from the psychological horror manga Magical Girl Apocalypse.
In the TV series The Addams Family, there are extended family members of the Addams Family who are mentioned to have two heads. In "Mother Lurch Visits the Addams Family," Morticia Addams mentions that she has a Cousin Slimy who has two good heads on his shoulder. In "Progress and the Addams Family," Morticia was making a knitted hat for Cousin Plato where Gomez Addams has stated that his left head is size 6 and his right head is size 8 3/4. In "Lurch's Little Helper," Morticia made a portrait of Cousin Crimp who has a male head and a female head.
Tamil actor Suriya portrays Vimalan and Akilan, conjoined twins in the 2012 film, Maattrraan.
The book The Girls, by Canadian novelist Lori Lansens, published in 2005, is the fictional autobiography of Canadian craniopagus twins Rose and Ruby Darlen with Slovakian background.
Irish author Sarah Crossan won the Carnegie Medal for her verse novel, One. The story follows the life and survival of conjoined twin sisters. The book also won The Bookseller's 2016 prize for young adult fiction and the Irish Children's Book of the Year.
In Lilo & Stitch: The Series, Swapper is an ischiopagus-twin experiment: a green, stubby-limbed, lizard-like creature with black eyes, purple markings on his back and three purple-tipped tendrils on each head, able to emit a green ray from each head's eyes. The ray swaps the minds and voices of its targets, and the only means of returning to normal is through Swapper choosing to do so. Because Swapper is two heads on the same body, Swapper is two beings cooperating as one, though their personalities mirror each other: they can be indecisive at times but usually work well together.
In Big-Top Pee-wee, the Cabrini Circus has some conjoined twins named Ruth and Dot (portrayed by Helen Infield Siff and Carol Infield Sender).
In The Addams Family and Addams Family Values, there are conjoined twins named Flora and Fauna Amor (portrayed by Darlene and Maureen Sue Levin) who were once dates to Gomez Addams and Uncle Fester. Both films also featured a two-headed relative named Dexter and Donald Addams (portrayed by Douglas Brian Martin and Steven M. Martin).
In The Addams Family cartoon in 1992, the episode "N.J. Addams" featured Aunt Noggin who was a two-headed person who wears a Victorian dress. One head is black and speaks in a Jamaican accent and the other head is Caucasian and speaks in a Brooklyn accent.
In Midnighter issue #13, Shock & Awe is a superheroine working for Los Angeles Strike Force.
Delilah and Jezebel in the video game Bully.
CatDog depicts Cat and Dog, a hybrid of a dog and cat who are brothers.
Vaka-Waka and Nurp-Naut in Cartoon Network and The Lego Group's Mixels.
Fender and Bender (also known individually as HeadBanger) are characters in the 1990s television series Toxic Crusaders, based on The Toxic Avenger films by Troma Entertainment. Fender is supposed to be a mad scientist while Bender is a surfer.
Dragon Tales, a children's show, depicts Zak and Wheezie (voiced by Jason Michas and Kathleen Barr) as a two-headed dragon that are brother and sister making them dicephalic parapagus twins.
The Simpsons features Hugo in "Treehouse of Horror VII", who is Bart Simpson's conjoined twin. They were separated at birth by Dr. Hibbert and Hugo was imprisoned in the Simpsons' attic.
The Oblongs, depicts Biff Oblong (Randy Sklar) and Chip Oblong (Jason Sklar)—17-year-old conjoined twins who are attached at the waist and share a middle leg due to their valley's pollution and radiation.
In the DC Comics series Hitman, villain Moe and Joe Dubelz is a conjoined twin gangster. Moe was alive at the time of introduction, but Joe had already died and is, in fact, undergoing putrefaction.
In the episode "Humbug" of The X-Files, Vincent Schiavelli portrayed a circus performer named Lanny, with an underdeveloped conjoined twin named Leonard.
In the anime Naruto, Sakon (左近) and his conjoined twin brother Ukon (右近) are the strongest of the Sound Four and count as one member due to their abilities to merge bodies and kill an opponent at a cellular level. They both serve as antagonists.
The American medical drama Grey's Anatomy featured several cases of conjoined twins.
The 2001 movie Not Another Teen Movie depicts Kara and Sara Fratelli, conjoined twins portrayed by Samaire Armstrong and Nectar Rose.
The musical group Evelyn, Evelyn depicts a pair of conjoined twin sisters—often referred to as "The Evelyn Sisters"—in many of their songs and music videos. The fictional sisters are shown to be child prostitutes in the music video for "Sandy Fishnets", and the song "Evelyn, Evelyn" describes their longing for privacy and to be separated from one another.
The Bride with White Hair, a 1993 Hong Kong movie, features conjoined twin villains.
The animated series Duckman featured Eric T. Duckman's sons Charles (voiced by Dana Hill in 1994–1996, Pat Musick in 1997) and Mambo (voiced by E. G. Daily) who are dicephalic parapagus twins where their heads share a body.
Fran Bow, a 2015 indie psychological horror game, includes Clara and Mia Buhalmet, a set of mentally ill conjoined twins, as characters. They were surgically sewn together, much like an experiment performed by Josef Mengele, also known as the Angel of Death, in which a pair of twins were sewn together back to back by blood vessels and organs, in an attempt to create conjoined twins.
The Peach Tree, a Korean novel and film, portrays conjoined twin brothers falling in love with the same woman.
The 1999 movie Twin Falls Idaho portrays conjoined twin brothers who are played by two non-conjoined identical twin brothers, one of whom directed the film, and both of whom co-wrote the screenplay.
In the fourth season of the American television series American Horror Story, titled American Horror Story: Freak Show, the main characters Bette and Dot Tattler (Sarah Paulson in a dual role) are dicephalic parapagus twins whose two heads sit side by side on one torso. The performance was achieved with the help of CGI.
In season two, episode eight of Rick and Morty, Michael and Pichael Thompson (voiced by Justin Roiland) are depicted as conjoined twins hosting separate TV shows at the same time.
In the Cirque du Soleil show Kurios: Cabinet of Curiosities, a pair of conjoined twins are among the Seeker's collection. They later split during an aerial straps duo and reunite for the rest of the show.
The bilingual film Chaarulatha stars Priyamani as a conjoined twin.
On the television series Ruby Gloom, the characters Frank and Len are conjoined twins who comprise a rock group called RIP.
In the film Monsters University, two of the members of the fictitious fraternity Oozma Kappa are named Terri and Terry Perry (voiced by Sean Hayes and Dave Foley). They are dicephalic parapagus twins where they have four arms and share the same tentacles that are in place of their legs.
In the children's cartoon Steven Universe, the Rutile twins are conjoined.
Fire and Water are conjoined twins in Chris Abani's 2014 novel The Secret History of Las Vegas.
The Knick portrays conjoined twins Zoya and Nika, who share a liver. They are successfully separated by the doctors.
Brian Aldiss's 1977 novel Brothers of the Head depicts conjoined twins who become rock stars. In the 2005 film version, they are played by non-conjoined identical twins Harry Treadaway and Luke Treadaway.
Admirals Watson and Crick are presumably conjoined twins joined at the torso in the 2015 children's show Miles from Tomorrowland.
In the Ultimate Marvel reality of Marvel Comics, Syndicate is a pair of conjoined twins in Ultimate X-Men. They were created by Brian K. Vaughan and Steve Dillon, and first appeared in Ultimate X-Men #58. They were killed during the crossover event Ultimatum.
In the horror video game Dead by Daylight, the playable characters "The Twins" are a brother and sister who are conjoined twins. However, the brother is able to detach from the sister.
In episodes 11 and 12 of the first season of The Good Doctor, Marcus Andrews and Neil Melendez perform a kidney transplant on a pair of conjoined twins; the operation leads to several complications and multiple further operations to try to save the girls.
In the 1982 film Basket Case and its sequels, Duane and Belial Bradley were separated after their father's death and Belial is hidden in the basket.
The horror comedy Conjoined features conjoined twins, one of whom is a serial killer.
**Passive accessory intervertebral movements**
Passive accessory intervertebral movements:
Passive accessory intervertebral movements (PAIVM) refers to a spinal physical therapy assessment and treatment technique developed by Geoff Maitland. The purpose of PAIVM is to assess the amount and quality of movement at various intervertebral levels, and to treat pain and stiffness of the cervical and lumbar spine.
Technique:
During assessment, the aim of PAIVM is to reproduce patient symptoms, and assess the endfeel of cervical movement, quality of resistance, behaviour of pain throughout the range of movement, and observe any muscle spasm. A posterior to anterior force of varying strength is applied by the therapist either centrally onto the spinous process, or unilaterally on either the left or right articular pillar. As a treatment technique, pain is treated by oscillations of small amplitude short of resistance, whilst stiffness is treated by larger amplitudes 50% into resistance.
Contraindications:
The technique is contraindicated by bone disease, malignancy, pregnancy, vertebral artery insufficiency, active ankylosing spondylitis, rheumatoid arthritis, spinal instability, acute irritation or compression of the nerve root, and recent whiplash.
Clinical evidence:
A 2005 study by Abbott et al. suggested that as an assessment technique, PAIVMs are highly specific, but not sensitive, in the detection of lumbar segmental instability. A 1993 study by Watson and Trott suggested that PAIVM examinations are reliable when identifying symptomatic vertebral joints when assessing for cervicogenic headache.
**Input shaping**
Input shaping:
In control theory, input shaping is an open-loop control technique for reducing vibrations in computer-controlled machines. The method works by creating a command signal that cancels its own vibration. That is, a vibration excited by previous parts of the command signal is cancelled by vibration excited by latter parts of the command. Input shaping is implemented by convolving a sequence of impulses, known as an input shaper, with any arbitrary command. The shaped command that results from the convolution is then used to drive the system. If the impulses in the shaper are chosen correctly, then the shaped command will excite less residual vibration than the unshaped command. The amplitudes and time locations of the impulses are obtained from the system's natural frequencies and damping ratios. Shaping can be made very robust to errors in the system parameters.
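As a concrete illustration, the simplest input shaper is the two-impulse Zero Vibration (ZV) shaper. The amplitude and timing formulas below are the standard ZV expressions; the 1 Hz mode, 5% damping ratio, and step command are arbitrary example values (a sketch, not a definitive implementation):

```python
import numpy as np

def zv_shaper(wn, zeta, dt):
    """Two-impulse Zero Vibration (ZV) shaper for natural frequency wn
    (rad/s) and damping ratio zeta, discretized on a grid of step dt."""
    wd = wn * np.sqrt(1.0 - zeta**2)               # damped frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)          # impulse amplitudes (sum to 1)
    times = np.array([0.0, np.pi / wd])            # impulse times: 0 and half a damped period
    shaper = np.zeros(int(round(times[-1] / dt)) + 1)
    for a, t in zip(amps, times):
        shaper[int(round(t / dt))] += a
    return shaper

dt = 0.001
shaper = zv_shaper(wn=2 * np.pi * 1.0, zeta=0.05, dt=dt)  # 1 Hz flexible mode
step = np.ones(2000)                    # unshaped step command
shaped = np.convolve(step, shaper)      # shaped command: a two-level "staircase"
```

Convolving the step with the shaper turns it into a two-level staircase: the second impulse, placed half a damped period after the first, launches vibration out of phase with that excited by the first, so the two contributions cancel.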
**PX domain**
PX domain:
The PX domain is a phosphoinositide-binding structural domain involved in targeting of proteins to cell membranes.
This domain was first found in P40phox and p47phox domains of NADPH oxidase (phox stands for phagocytic oxidase). It was also identified in many other proteins involved in membrane trafficking, including nexins, Phospholipase D, and phosphoinositide-3-kinases.
The PX domain is structurally conserved in eukaryotes, although amino acid sequences show little similarity. PX domains interact primarily with PtdIns(3)P lipids. However some of them bind to phosphatidic acid, PtdIns(3,4)P2, PtdIns(3,5)P2, PtdIns(4,5)P2, and PtdIns(3,4,5)P3. The PX-domain can also interact with other domains and proteins.
Human proteins containing this domain:
Sorting nexins contain this domain. Other examples include: HS1BP3; KIF16B (SNX23); NCF1; NCF1C; NCF4; NISCH; PIK3C2A; PIK3C2B; PIK3C2G; PLD1; PLD2; PXK; RPS6KC1; SGK3; SH3PXD2A; SNAG1; SNX9.
**Honeycomb housing**
Honeycomb housing:
Honeycomb housing is an urban planning model pertaining to residential subdivision design.
The defining hexagonal tessellation, or "honeycomb" pattern, consists of multiple housing clusters containing 5-16 houses and centered around a courtyard in a cul-de-sac arrangement at its smallest unit of organization. Multiple clusters are connected to each other to form larger cul-de-sac communities with up to 42 houses in total. These courtyard communities are in turn also connected to one another, making up a distinct neighborhood of up to 300 houses. The honeycomb concept was first introduced in Malaysia as an alternative to terrace houses and the predominantly rectilinear form of residential layouts.
It can also be described as a new form of cul-de-sac layout.
From Cul-de-sac to Honeycomb:
Cul-de-sacs are popular: they are perceived as safer, more exclusive and more neighbourly. According to one study comparing the 'grid', 'loops' and cul-de-sacs, the latter were the most popular. However, in developing countries like Malaysia, only the very rich can afford to live in quarter-acre single-family houses located in a cul-de-sac. The honeycomb concept was a response to two questions:
• How can the cul-de-sac be made affordable for more people and for the environment?
• Is it possible to have cul-de-sacs without sprawl?
First, the cul-de-sac is made bigger so as to fit a public green area in the middle, in order to meet local planning regulations that require 10% of any residential development to be open space. Then an interlocking arrangement of cul-de-sacs is created such that each building lot faces two or three cul-de-sacs. If the buildings in this layout were detached houses, they would be priced in the top range of the market. Instead, the buildings are divided into 2, 3, 4 or 6, creating duplex, triplex, quadruplex or sextuplex units.
As the buildings are divided, the land area and the built-up area per unit become smaller; the number of units in the layout and the density of the development rise to rival those of terrace house developments. In this way, the housing units become less expensive, yet each house still retains public access. The size and shape of the external environment are not changed – only now more units share it. Since houses are built around a small park with plentiful shady trees, this communal garden is easily accessible to all in the cul-de-sac, allowing it to act as a social focus that can encourage social interaction and neighbourly spirit. The courtyard area is also a "defensible space", as it naturally reduces crime in the sense that strangers are quickly spotted. The short winding roads put a stop to speeding traffic and help dissuade snatch thieves on motorcycles, making the area safe for children, pedestrians and cyclists. Apart from the social advantages, it is also claimed that, compared to the terrace house layout, the honeycomb layout uses land efficiently and offers savings in the cost of infrastructure. The honeycomb layout may be said to be inspired by the geometrical design of Islamic tiles and the structure of beehives. Introduced by Kuala Lumpur-based architect Mazlin Ghazali, it has received a patent in Australia.
Honeycomb Housing projects under construction:
The honeycomb concept has been applied to a hillside development on 14 acres of land at Kampung Nong Chik the edge of Johor Bahru business district in a development which advertises a modern version of the traditional village or "kampong" lifestyle.
Criticism:
Being so new, many developers worry about the difficulty of obtaining approvals from the local authorities and so hesitate to be the first to adopt the honeycomb concept. Another problem is that the houses are not rectangular, so the house designs end up with odd corners.
Another criticism comes from followers of feng shui, the ancient Chinese art of geomancy, who believe that in a cul-de-sac 'the chi energy coming to a house placed at the end of a road is usually fast, so the energy is pernicious and non-beneficial. Instead of bringing good fortune, it brings misfortune'. Nowadays cul-de-sacs are often frowned upon in planning circles, especially by supporters of the New Urbanism: they do not permit higher densities, are car-dependent, and, unlike grid systems, do not lend themselves to redevelopment and change. However, the honeycomb housing concept, which allows relatively high density, does appear to overcome some of these concerns.
**Vertex normal**
Vertex normal:
In the geometry of computer graphics, a vertex normal at a vertex of a polyhedron is a directional vector associated with a vertex, intended as a replacement to the true geometric normal of the surface. Commonly, it is computed as the normalized average of the surface normals of the faces that contain that vertex. The average can be weighted, for example by the area of the face, or it can be unweighted. Vertex normals can also be computed for polygonal approximations to surfaces such as NURBS, or specified explicitly for artistic purposes. Vertex normals are used in Gouraud shading, Phong shading and other lighting models. Using vertex normals, much smoother shading than flat shading can be achieved; however, without some modifications to topology, such as support loops, it cannot produce a sharper edge.
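The area-weighted average described above can be sketched in a few lines of NumPy. The function name and the triangle-mesh input format are illustrative assumptions; the cross product of two triangle edges conveniently has magnitude equal to twice the face area, so summing unnormalized face normals gives area weighting for free:

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Area-weighted vertex normals for a triangle mesh.
    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices."""
    normals = np.zeros_like(vertices, dtype=float)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    face_n = np.cross(v1 - v0, v2 - v0)   # unnormalized; magnitude = 2 * face area
    for i in range(3):
        # np.add.at handles repeated vertex indices correctly
        np.add.at(normals, faces[:, i], face_n)
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0, lengths, 1.0)

# A flat unit quad split into two triangles: every vertex normal
# should come out as (0, 0, 1), matching the true surface normal.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
n = vertex_normals(verts, faces)
```

For an unweighted average, each face normal would be normalized to unit length before accumulation.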
**PER1**
PER1:
The PER1 gene encodes the period circadian protein homolog 1 protein in humans.
Function:
The PER1 protein is important to the maintenance of circadian rhythms in cells, and may also play a role in the development of cancer. This gene is a member of the period family of genes. It is expressed with a daily oscillating circadian rhythm, or an oscillation that cycles with a period of approximately 24 hours. PER1 is most notably expressed in the region of the brain called the suprachiasmatic nucleus (SCN), which is the primary circadian pacemaker in the mammalian brain. PER1 is also expressed throughout mammalian peripheral tissues. Genes in this family encode components of the circadian rhythms of locomotor activity, metabolism, and behavior. Circadian expression of PER1 in the suprachiasmatic nucleus will free-run in constant darkness, meaning that the 24-hour period of the cycle will persist without the aid of external light cues. Subsequently, a shift in the light/dark cycle evokes a proportional shift of gene expression in the suprachiasmatic nucleus. The time of gene expression is sensitive to light, as light during a mammal's subjective night results in a sudden increase in per expression and thus a shift in phase in the suprachiasmatic nucleus. Alternative splicing has been observed in this gene; however, these variants have not been fully described. There is some disagreement among experts over the occurrence of polymorphisms with functional significance. Many scientists state that there are no known polymorphisms of the human PER1 gene with significance at a population level that result in measurable behavioral or physiological changes. Still, some believe that even silent mutations can cause significant behavioral phenotypes and result in major phase changes. Functional conservation of the PER gene is shown in a study by Shigeyoshi et al. 2002. In this study, mouse mPer1 and mPer2 genes were driven by the Drosophila timeless promoter in Drosophila melanogaster.
They found that both mPer constructs could restore rhythm to arrhythmic flies (per01 flies). Thus mPer1 and mPer2 can function as clock components in flies and may have implications concerning the homology of per genes.
Role in chronobiology:
The PER1 gene, also called rigui, is a characteristic circadian oscillator. PER1 is rhythmically transcribed in the SCN, keeping a period of approximately 24 hours. This rhythm is sustained in constant darkness, and can also be entrained to changing light cycles. PER1 is involved in generating circadian rhythms in the SCN, and also has an effect on other oscillations throughout the body. For example, PER1 knockouts affect food-entrainable oscillators and methamphetamine-sensitive circadian oscillators, whose periods are altered in the absence of PER1. In addition, mice with knockouts in both the PER1 and PER2 genes show no circadian rhythmicity. Phase shifts in PER1 neurons can be induced by a strong, brief light stimulus to the SCN of rats. This light exposure causes increases in PER1 mRNA, suggesting that the PER1 gene plays an important role in entrainment of the mammalian biological clock to the light-dark cycle.
Feedback mechanism:
The PER1 mRNA is expressed in all cells, acting as part of a transcription-translation negative feedback mechanism, which creates a cell-autonomous molecular clock. PER1 transcription is regulated by protein interactions with its five E-box and one D-box elements in its promoter region. The heterodimer CLOCK-BMAL1 activates E-box elements present in the PER1 promoter, as well as activating the E-box promoters of other components of the molecular clock such as PER2, CRY1, and CRY2. The phase of PER1 mRNA expression varies between tissues. The transcript leaves the nucleus and is translated into a protein with PAS domains, which enable protein-protein interactions. PER1 and PER2 are phosphorylated by CK1ε, which leads to increased ubiquitylation and degradation. This phosphorylation is counteracted by PP1 phosphatase, resulting in a more gradual increase in phosphorylated PER and an additional control over the period of the molecular clock. Phosphorylation of PER1 can also lead to masking of its leucine-rich nuclear localization sequence and thus impeded heterodimer import. PER interacts with other PER proteins as well as the E-box-regulated, clock-controlled proteins CRY1 and CRY2 to create a heterodimer which translocates into the nucleus. There it inhibits CLOCK-BMAL1 activation. PER1 is not necessary for the creation of circadian rhythms, but homozygous PER1 mutants display a shortened period of mRNA expression. While PER1 must be mutated in conjunction with PER2 to result in arrhythmicity, the two translated PER proteins have been shown to have slightly different roles, as PER1 acts preferentially through interaction with other clock proteins.
Clinical significance:
PER1 expression may have significant effects on the cell cycle. Cancer is often a result of unregulated cell growth and division, which can be controlled by circadian mechanisms. Therefore, a cell's circadian clock may play a large role in its likelihood of developing into a cancer cell. PER1 is a gene that plays an important role in such a circadian mechanism. Its overexpression, in particular, causes DNA-damage-induced apoptosis. In addition, down-regulation of PER1 can enhance tumor growth in mammals. PER1 also interacts with the proteins ATM and Chk2, which are key checkpoint proteins in the cell cycle. Cancer patients have a lowered expression of PER1. Gery et al. suggest that regulation of PER1 expression may be useful for cancer treatment in the future.
Gene:
Orthologs: the PER1 gene has orthologs in many other species (the original list is not reproduced here).
Paralogs: PER2, PER3.
Location: the human PER1 gene is located on chromosome 17 (start: 8,140,470; end: 8,156,405; length: 15,936; exons: 24). PER1 has 19 transcripts (splice variants).
Discovery:
The PER1 ortholog in Drosophila, period, was first discovered by Ronald Konopka and Seymour Benzer in 1971. In 1997, the Period 1 (mPer1) and Period 2 (mPer2) genes were discovered through homology screens with the Drosophila per gene (Sun et al., 1997; Albrecht et al., 1997). PER1 was independently identified by Sun et al. 1997, who named it RIGUI, and by Tei et al. 1997, who named it hPer because of the protein sequence similarity with Drosophila per. They found that the mouse homolog had the properties of a circadian regulator: circadian expression in the suprachiasmatic nucleus (SCN), self-sustained oscillation, and entrainment of circadian expression by external light cues.
**S-1 Gard**
S-1 Gard:
The S-1 Gard (also known as the Dangerzone Deflector and the People Catcher) is a safety device consisting of a curved polyurethane guard, mounted in front of the right rear wheels of transit buses, designed to deflect a person out of the path of the wheels in order to prevent injury or death. It is distributed by Public Transportation Safety International (PTS) and is currently installed on bus fleets in major cities such as Los Angeles, Washington, D.C., Chicago, Austin (TX) and Baltimore, among others. The S-1 Gard was invented by Mark B. Barron of PTS in 1994.
Where no S-1 Gard is present, the prevalent type of accident involving transit vehicles and individuals is pedestrians or cyclists falling into the path of and/or being pulled under the wheel, as opposed to individuals being struck by the vehicle body. Among bus wheel accidents, approximately 85% occur at the right-hand drive wheel (curbside rear wheel). The accidents most often occur as buses are making right-hand turns, when pedestrians who are not fully on the sidewalk or who fall into the street risk being run over by the right rear wheels of the bus. It is estimated that 40–100 people are seriously injured this way annually in the United States, although precise statistics are lacking. In 2001 in Chicago, a total of nine pedestrians were injured in incidents involving the rear right wheels of Chicago Transit Authority buses, prior to the CTA's retrofitting of its bus fleet with the S-1 Gard.
Because of Bernoulli's principle, a decrease in air pressure between a larger object (such as a transit bus or large truck) and a smaller object (such as a pedestrian or cyclist) is created when passing in close proximity, resulting in a force that pulls the smaller object towards the larger object. The greater the velocity of the larger object, and the greater the difference in mass of the two objects, the greater the force.
This principle accounts for accidents wherein, for instance, a cyclist being closely passed by a moving bus is pulled toward the bus, loses control and is thrown into the path of the wheels. The curvature of the guard is designed to counteract Bernoulli's principle by producing a net outward pressure away from the direction of travel of the bus, in addition to physically pushing an object out of the way.
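As a rough illustration of the effect described above, Bernoulli's principle for incompressible flow along a streamline relates a local increase in air speed to a drop in static pressure. The function name and the sample numbers below are hypothetical, chosen only to show the sign and scale of the effect:

```python
def bernoulli_pressure_drop(rho, v_free, v_gap):
    """Static-pressure drop (in pascals) when air accelerates from v_free
    to v_gap along a streamline, per the incompressible Bernoulli relation:
    p_free + 0.5*rho*v_free**2 == p_gap + 0.5*rho*v_gap**2
    """
    return 0.5 * rho * (v_gap ** 2 - v_free ** 2)

# Hypothetical numbers: air density ~1.225 kg/m^3, flow speeding up
# from 5 m/s to 10 m/s in the narrow gap between bus and cyclist.
drop = bernoulli_pressure_drop(1.225, 5.0, 10.0)
```

A positive result means the pressure in the gap is lower than the free-stream pressure, which is the direction of the net force pulling the smaller object toward the bus.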
S-1 Gard:
It is manufactured in two pieces, totaling approximately two feet wide by one foot high, and is curved (convex). It is mounted onto the chassis of the bus in front of the rear right wheels and behind the rear door, if present. Because of its curved design, the force vector is outward and to the right, helping move a fallen body out from under the bus.
S-1 Gard:
The S-1 Gard is recommended by the Transportation Research Board as a strategy for mitigating bus collisions with pedestrians, and has been credited by public transit officials with significantly reducing, if not eliminating, injuries and fatalities involving the rear right wheels of transit buses.
MDZ Shield:
In 2010, PTS announced the MDZ Shield as an alternative to the S-1 Gard for school buses.
**Exo-poly-alpha-galacturonosidase**
Exo-poly-alpha-galacturonosidase:
Exo-poly-α-galacturonosidase (EC 3.2.1.82, exopolygalacturonosidase, exopolygalacturanosidase, poly(1,4-α-D-galactosiduronate) digalacturonohydrolase) is an enzyme with systematic name poly[(1→4)-α-D-galactosiduronate] digalacturonohydrolase. It catalyses the hydrolysis of pectic acid from the non-reducing end, releasing digalacturonate.
**Bacillus virus G**
Bacillus virus G:
Bacillus virus G is a bacteriophage (phage) that infects Bacillus bacteria. The phage has been reported to have the largest genome of all discovered Myoviridae with nearly 700 protein-coding genes.
**WinRoll**
WinRoll:
WinRoll is an open-source, free software utility for Windows 2000, Windows XP and Windows 7 which allows the user to "roll up" windows into their title bars, in addition to other window-management features. It is written in assembly language.
History:
WinRoll 1.0 was first released on April 10, 2003. The most recent version, 2.0, was released on April 7, 2004; it is unclear whether it is still maintained by Wil Palma. Being an open-source program, its source code was freely available from the website, which is now down.
Features:
The purpose of WinRoll is to allow users to have many windows on the screen, while keeping them organized and manageable. The main feature of the program is enabling the user to "roll" windows up into their title bars. It also allows users to minimize programs to the tray, and to adjust the opacity of windows.
**Rnd3**
Rnd3:
Rnd3 is a small (~21 kDa) signaling G protein (to be specific, a GTPase), and is a member of the Rnd subgroup of the Rho family of GTPases. It is encoded by the gene RND3. Like other members of the Rho family of Ras-related GTPases, it regulates the organization of the actin cytoskeleton in response to extracellular growth factors.
Regulation:
Most Rho family members cycle between an inactive GDP-bound form and an active GTP-bound form. However, members of the Rnd subgroup of the Rho family are exceptions to this, binding detectably only to GTP, while having low GTPase activity, if any. Instead, Rnd family proteins are regulated through other mechanisms that control their production, degradation, phosphorylation, and localization.
Interactions:
In its GTP-bound form, RhoA exposes regions that allow it to interact with downstream targets. Rnd3 contains a region which is similar to the one RhoA exposes for interaction with ROCK1, allowing Rnd3 to compete with RhoA for interaction with ROCK1. By binding to ROCK1, Rnd3 inhibits it from phosphorylating downstream targets necessary for stress fiber formation. Rnd3 is also directly involved in controlling RhoA activity through suppression of PLEKHG5 and activation of ARHGAP5. Interaction with UBXD5 has also been shown.
**Ground speed**
Ground speed:
Ground speed is the horizontal speed of an aircraft relative to the Earth’s surface. It is vital for accurate navigation that the pilot has an estimate of the ground speed that will be achieved during each leg of a flight.
An aircraft diving vertically would have a ground speed of zero. Information displayed to passengers through the entertainment system of airline aircraft usually gives the aircraft ground speed rather than airspeed.
Ground speed can be determined by the vector sum of the aircraft's true airspeed and the current wind speed and direction; a headwind subtracts from the ground speed, while a tailwind adds to it. Winds at other angles to the heading will have components of either headwind or tailwind as well as a crosswind component.
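The vector sum just described can be sketched in a few lines of Python; `ground_speed` and its argument conventions (heading in degrees true, wind direction given as the direction the wind blows from) are illustrative, not any standard aviation library:

```python
import math

def ground_speed(tas, heading_deg, wind_speed, wind_from_deg):
    """Magnitude of the vector sum of true airspeed and wind.

    tas: true airspeed; heading_deg: direction the nose points (deg true);
    wind_from_deg: direction the wind blows FROM (deg true).
    Uses a north = +y, east = +x convention; sideslip is ignored.
    """
    h = math.radians(heading_deg)
    w = math.radians(wind_from_deg + 180.0)  # wind blows TOWARD from+180
    vx = tas * math.sin(h) + wind_speed * math.sin(w)
    vy = tas * math.cos(h) + wind_speed * math.cos(w)
    return math.hypot(vx, vy)

# Heading north at 100 kn: a 20 kn headwind gives 80 kn over the ground,
# a 20 kn tailwind gives 120 kn.
gs_head = ground_speed(100, 0, 20, 0)    # wind from the north: headwind
gs_tail = ground_speed(100, 0, 20, 180)  # wind from the south: tailwind
```

Note that a pure crosswind also raises ground speed slightly, since the wind vector then adds at right angles to the airspeed vector.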
Ground speed:
An airspeed indicator indicates the aircraft's speed relative to the air mass. The air mass may be moving over the ground due to wind, so some additional means of establishing position over the ground is required. This might be navigation using landmarks, radio-aided position location, an inertial navigation system, or GPS. When more advanced technology is unavailable, an E6B flight computer may be used to calculate ground speed. Ground speed radar can measure it directly.
Ground speed:
Ground speed is quite different from airspeed. When an aircraft is airborne, ground speed does not determine when the aircraft will stall, nor does it influence aircraft performance such as rate of climb.
**Technical Advisory Service for Images**
Technical Advisory Service for Images:
The Technical Advisory Service for Images (TASI) provides advice to the United Kingdom's Further Education (FE) and Higher Education (HE) communities on the creation and use of digital images. Its services include a Web site, helpdesk, training programme, and mailing list. TASI is funded by the Joint Information Systems Committee (Jisc) and based within the Institute for Learning and Research Technology (ILRT) of the University of Bristol.
**Post Office Protocol**
Post Office Protocol:
In computing, the Post Office Protocol (POP) is an application-layer Internet standard protocol used by e-mail clients to retrieve e-mail from a mail server. Today, POP version 3 (POP3) is the most commonly used version. Together with IMAP, it is one of the most common protocols for email retrieval.
Purpose:
The Post Office Protocol provides access via an Internet Protocol (IP) network for a user client application to a mailbox (maildrop) maintained on a mail server. The protocol supports list, retrieve and delete operations for messages. POP3 clients connect, retrieve all messages, store them on the client computer, and finally delete them from the server. This design of POP and its procedures was driven by the need of users having only temporary Internet connections, such as dial-up access, allowing these users to retrieve e-mail when connected, and subsequently to view and manipulate the retrieved messages when offline.
Purpose:
POP3 clients also have an option to leave mail on the server after retrieval, and in this mode of operation, clients will only download new messages which are identified by using the UIDL command (unique-id list). By contrast, the Internet Message Access Protocol (IMAP) was designed to normally leave all messages on the server to permit management with multiple client applications, and to support both connected (online) and disconnected (offline) modes of operation.
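A leave-mail-on-server client's new-message logic can be sketched as follows; the helper name is made up for illustration, and the sample unique-ids are in the style of the UIDL examples in RFC 1939:

```python
def messages_to_fetch(uidl_listing, seen_uids):
    """Given a UIDL response as (message_number, unique_id) pairs and the
    set of unique-ids already downloaded in earlier sessions, return the
    message numbers of new messages still to retrieve."""
    return [num for num, uid in uidl_listing if uid not in seen_uids]

# Hypothetical UIDL response; message 1 was fetched in an earlier session.
listing = [(1, "whqtswO00WBw418f9t5JxYwZ"), (2, "QhdPYR:00WBw1Ph7x7")]
new_messages = messages_to_fetch(listing, seen_uids={"whqtswO00WBw418f9t5JxYwZ"})
```

Because unique-ids are persistent while message numbers are only valid within one session, the client keys its "already seen" state on the unique-id, never the message number.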
Purpose:
A POP3 server listens on well-known port number 110 for service requests. Encrypted communication for POP3 is either requested after protocol initiation, using the STLS command, if supported, or by POP3S, which connects to the server using Transport Layer Security (TLS) or Secure Sockets Layer (SSL) on well-known TCP port number 995.
Purpose:
Messages available to the client are determined when a POP3 session opens the maildrop, and are identified by message-number local to that session or, optionally, by a unique identifier assigned to the message by the POP server. This unique identifier is permanent and unique to the maildrop and allows a client to access the same message in different POP sessions. Mail is retrieved and marked for deletion by the message-number. When the client exits the session, mail marked for deletion is removed from the maildrop.
History:
The first version of the Post Office Protocol, POP1, was specified in RFC 918 (1984) by Joyce K. Reynolds. POP2 was specified in RFC 937 (1985).
History:
POP3 is the version in most common use. It originated with RFC 1081 (1988) but the most recent specification is RFC 1939, updated with an extension mechanism (RFC 2449) and an authentication mechanism in RFC 1734. This led to a number of POP implementations such as Pine, POPmail, and other early mail clients. While the original POP3 specification supported only an unencrypted USER/PASS login mechanism or Berkeley .rhosts access control, today POP3 supports several authentication methods to provide varying levels of protection against illegitimate access to a user's e-mail. Most are provided by the POP3 extension mechanisms. POP3 clients support SASL authentication methods via the AUTH extension. MIT Project Athena also produced a Kerberized version. RFC 1460 introduced APOP into the core protocol. APOP is a challenge–response protocol which uses the MD5 hash function in an attempt to avoid replay attacks and disclosure of the shared secret. Clients implementing APOP include Mozilla Thunderbird, Opera Mail, Eudora, KMail, Novell Evolution, RimArts' Becky!, Windows Live Mail, PowerMail, Apple Mail, and Mutt. RFC 1460 was obsoleted by RFC 1725, which was in turn obsoleted by RFC 1939.
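APOP's challenge-response can be reproduced directly: the digest is the MD5 hash of the server's greeting timestamp (angle brackets included) concatenated with the shared secret. A minimal sketch using Python's standard hashlib, with the timestamp, secret ("tanstaaf") and digest taken from the example session in RFC 1939:

```python
import hashlib

def apop_digest(timestamp, shared_secret):
    """APOP digest: MD5 over the server's timestamp banner followed by
    the shared secret, returned as a lowercase hex string."""
    return hashlib.md5((timestamp + shared_secret).encode("ascii")).hexdigest()

# Values from the RFC 1939 example session:
digest = apop_digest("<1896.697170952@dbc.mtview.ca.us>", "tanstaaf")
# The client then sends: APOP mrose <digest>
```

Because the secret itself never crosses the wire and the timestamp changes every session, a captured digest cannot be replayed, though MD5 is no longer considered a strong hash.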
History:
POP4
POP4 exists only as an informal proposal adding basic folder management, multipart message support, and message flag management to compete with IMAP; however, its development has not progressed since 2003.
Extensions and specifications:
An extension mechanism was proposed in RFC 2449 to accommodate general extensions as well as announce in an organized manner support for optional commands, such as TOP and UIDL. The RFC did not intend to encourage extensions, and reaffirmed that the role of POP3 is to provide simple support for mainly download-and-delete requirements of mailbox handling.
The extensions are termed capabilities and are listed by the CAPA command. With the exception of APOP, the optional commands were included in the initial set of capabilities. Following the lead of ESMTP (RFC 5321), capabilities beginning with an X signify local capabilities.
STARTTLS
The STARTTLS extension allows the use of Transport Layer Security (TLS) or Secure Sockets Layer (SSL) to be negotiated using the STLS command, on the standard POP3 port, rather than an alternate. Some clients and servers instead use the alternate-port method, which uses TCP port 995 (POP3S).
SDPS
Demon Internet introduced extensions to POP3 that allow multiple accounts per domain, which have become known as Standard Dial-up POP3 Service (SDPS). To access each account, the username includes the hostname, as john@hostname or john+hostname.
Google Apps uses the same method.
Extensions and specifications:
Kerberized Post Office Protocol
In computing, local e-mail clients can use the Kerberized Post Office Protocol (KPOP), an application-layer Internet standard protocol, to retrieve e-mail from a remote server over a TCP/IP connection. The KPOP protocol is based on the POP3 protocol, differing in that it adds Kerberos security and that it runs by default over TCP port number 1109 instead of 110. One mail server software implementation is found in the Cyrus IMAP server.
Session example:
The following POP3 session dialog is an example in RFC 1939:
S: <wait for connection on TCP port 110>
C: <open connection>
S: +OK POP3 server ready <1896.697170952@dbc.mtview.ca.us>
C: APOP mrose c4c9334bac560ecc979e58001b3e22fb
S: +OK mrose's maildrop has 2 messages (320 octets)
C: STAT
S: +OK 2 320
C: LIST
S: +OK 2 messages (320 octets)
S: 1 120
S: 2 200
S: .
C: RETR 1
S: +OK 120 octets
S: <the POP3 server sends message 1>
S: .
C: DELE 1
S: +OK message 1 deleted
C: RETR 2
S: +OK 200 octets
S: <the POP3 server sends message 2>
S: .
C: DELE 2
S: +OK message 2 deleted
C: QUIT
S: +OK dewey POP3 server signing off (maildrop empty)
C: <close connection>
S: <wait for next connection>
POP3 servers without the optional APOP command expect the client to log in with the USER and PASS commands:
C: USER mrose
S: +OK User accepted
C: PASS tanstaaf
S: +OK Pass accepted
Comparison with IMAP:
The Internet Message Access Protocol (IMAP) is an alternative and more recent mailbox access protocol. The highlights of the differences are:
POP is a simpler protocol, making implementation easier.
POP moves the message from the email server to the local computer, although there is usually an option in email clients to leave the messages on the email server as well. IMAP defaults to leaving the message on the email server, simply downloading a local copy.
Comparison with IMAP:
POP treats the mailbox as a single store, and has no concept of folders. An IMAP client performs complex queries, asking the server for headers, or the bodies of specified messages, or to search for messages meeting certain criteria. Messages in the mail repository can be marked with various status flags (e.g. "deleted" or "answered") and they stay in the repository until explicitly removed by the user, which may not be until a later session. In short: IMAP is designed to permit manipulation of remote mailboxes as if they were local. Depending on the IMAP client implementation and the mail architecture desired by the system manager, the user may save messages directly on the client machine, or save them on the server, or be given the choice of doing either.
Comparison with IMAP:
POP provides a completely static view of the current state of the mailbox, and does not provide a mechanism to show any external changes in state during the session.
IMAP provides a dynamic view, and sends responses for external changes in state, including newly arrived messages, as well as changes made to the mailbox by other concurrently connected clients.
POP retrieves an entire message with the RETR command; on servers that support it, the headers and a specified number of body lines can be retrieved with the TOP command.
IMAP allows clients to retrieve any of the individual MIME parts separately – for example, retrieving the plain text without retrieving attached files, or retrieving only one of many attached files.
IMAP supports flags on the server to keep track of message state: for example, whether or not the message has been read, replied to, forwarded, or deleted.
Related requests for comments (RFCs):
RFC 918 – POST OFFICE PROTOCOL
RFC 937 – POST OFFICE PROTOCOL – VERSION 2
RFC 1081 – Post Office Protocol – Version 3
RFC 1939 – Post Office Protocol – Version 3 (STD 53)
RFC 1957 – Some Observations on Implementations of the Post Office Protocol (POP3)
RFC 2195 – IMAP/POP AUTHorize Extension for Simple Challenge/Response
RFC 2384 – POP URL Scheme
RFC 2449 – POP3 Extension Mechanism
RFC 2595 – Using TLS with IMAP, POP3 and ACAP
RFC 3206 – The SYS and AUTH POP Response Codes
RFC 5034 – The Post Office Protocol (POP3) Simple Authentication and Security Layer (SASL) Authentication Mechanism
RFC 8314 – Cleartext Considered Obsolete: Use of Transport Layer Security (TLS) for Email Submission and Access
**HegartyMaths**
HegartyMaths:
HegartyMaths is an educational subscription tool used by schools in the United Kingdom. It is sometimes used as a replacement for general mathematics homework tasks. Its creator, Colin Hegarty, was the UK Teacher of the Year in 2015 and shortlisted for the Varkey Foundation's Global Teacher Prize in 2016.
Usage:
HegartyMaths covers a variety of topics and has 943 tasks to complete.
Usage:
A task includes an educational video with an explanation and examples on the topic. Afterwards, there is a quiz to complete, containing topic specific questions. The site is regularly updated and more topics added to keep up with the GCSE mathematics curriculum. Students can complete tasks by themselves, or teachers can assign these tasks to students to complete as homework or for revision purposes and then track the student's progress.
History:
HegartyMaths was created by co-founders and teachers Colin Hegarty and Brian Arnold. In 2011 they started to make maths videos on YouTube to support their own classes with maths homework and revision. Since the videos were freely available on YouTube, students from all over the country and the world started using the videos too. In 2012 Colin won £15,000 of funding from education charity SHINE, through its Let Teachers SHINE competition, to make a website to host the videos and make more content. The original website, launched on 12 July 2013, was called mathswebsite.com. It was built to contain free maths videos to assist students in revision and is still accessible today. In February 2016, a new site was launched: HegartyMaths.com. In 2019, Colin Hegarty sold HegartyMaths to Sparx (a company selling GCSE revision packages) for an undisclosed sum. Colin became part of the leadership team for Sparx and continued to lead development of HegartyMaths.
**Context-sensitive language**
Context-sensitive language:
In formal language theory, a context-sensitive language is a language that can be defined by a context-sensitive grammar (and equivalently by a noncontracting grammar). Context-sensitive grammars are one of the four types of grammars in the Chomsky hierarchy.
Computational properties:
Computationally, a context-sensitive language is equivalent to a linear bounded nondeterministic Turing machine, also called a linear bounded automaton. That is a non-deterministic Turing machine with a tape of only kn cells, where n is the size of the input and k is a constant associated with the machine. This means that every formal language that can be decided by such a machine is a context-sensitive language, and every context-sensitive language can be decided by such a machine.
Computational properties:
This set of languages is also known as NLINSPACE or NSPACE(O(n)), because they can be accepted using linear space on a non-deterministic Turing machine. The class LINSPACE (or DSPACE(O(n))) is defined the same, except using a deterministic Turing machine. Clearly LINSPACE is a subset of NLINSPACE, but it is not known whether LINSPACE = NLINSPACE.
Examples:
One of the simplest context-sensitive but not context-free languages is L = { a^n b^n c^n : n ≥ 1 }: the language of all strings consisting of n occurrences of the symbol "a", then n "b"s, then n "c"s (abc, aabbcc, aaabbbccc, etc.). A superset of this language, called the Bach language, is defined as the set of all strings where "a", "b" and "c" (or any other set of three symbols) occur equally often (aabccb, baabcaccb, etc.), and is also context-sensitive. L can be shown to be a context-sensitive language by constructing a linear bounded automaton which accepts L. The language can easily be shown to be neither regular nor context-free by applying the respective pumping lemmas for each of the language classes to L.
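Deciding membership in L is straightforward (a linear bounded automaton can do it by repeatedly crossing off one "a", one "b" and one "c"); the direct check below is purely for illustration and is not a model of the automaton itself:

```python
def in_anbncn(s):
    """Decide membership in L = { a^n b^n c^n : n >= 1 }."""
    n = len(s) // 3
    return n >= 1 and len(s) == 3 * n and s == "a" * n + "b" * n + "c" * n
```

The same check generalizes to the other block-counting languages in this section by comparing the candidate string against the canonical string its length implies.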
Examples:
Similarly, L_Cross = { a^m b^n c^m d^n : m ≥ 1, n ≥ 1 } is another context-sensitive language; the corresponding context-sensitive grammar can easily be projected starting with two context-free grammars generating sentential forms in the formats a^m C^m and B^n d^n and then supplementing them with a permutation production like CB → BC, a new starting symbol and standard syntactic sugar.
Examples:
L_MUL3 = { a^m b^n c^(mn) : m ≥ 1, n ≥ 1 } is another context-sensitive language (the "3" in the name of this language is intended to mean a ternary alphabet); that is, the "product" operation defines a context-sensitive language (but the "sum" defines only a context-free language, as the grammar S → aSc | R and R → bRc | bc shows). Because of the commutative property of the product, the most intuitive grammar for L_MUL3 is ambiguous. This problem can be avoided by considering a somewhat more restrictive definition of the language, e.g. L_ORDMUL3 = { a^m b^n c^(mn) : 1 < m < n }. This can be specialized to L_MUL1 = { a^(mn) : m > 1, n > 1 } and, from this, to L_m2 = { a^(m^2) : m > 1 }, L_m3 = { a^(m^3) : m > 1 }, etc.
Examples:
L_REP = { w^(|w|) : w ∈ Σ* } is a context-sensitive language. The corresponding context-sensitive grammar can be obtained as a generalization of the context-sensitive grammars for L_Square = { w^2 : w ∈ Σ* }, L_Cube = { w^3 : w ∈ Σ* }, etc.
L_EXP = { a^(2^n) : n ≥ 1 } is a context-sensitive language.
Examples:
L_PRIMES2 = { w : w is the binary representation of a prime number } is a context-sensitive language (the "2" in the name of this language is intended to mean a binary alphabet). This was proved by Hartmanis using pumping lemmas for regular and context-free languages over a binary alphabet and, after that, sketching a linear bounded multitape automaton accepting L_PRIMES2. L_PRIMES1 = { a^p : p is a prime number } is a context-sensitive language (the "1" in the name of this language is intended to mean a unary alphabet). This was credited by A. Salomaa to Matti Soittola by means of a linear bounded automaton over a unary alphabet (pages 213–214, exercise 6.8) and also to Marti Penttonen by means of a context-sensitive grammar also over a unary alphabet (see: Formal Languages by A. Salomaa, page 14, Example 2.5).
Examples:
An example of a recursive language that is not context-sensitive is any recursive language whose decision is an EXPSPACE-hard problem, for example, the set of pairs of equivalent regular expressions with exponentiation.
Properties of context-sensitive languages:
The union, intersection, and concatenation of two context-sensitive languages is context-sensitive; the Kleene plus of a context-sensitive language is also context-sensitive.
The complement of a context-sensitive language is itself context-sensitive, a result known as the Immerman–Szelepcsényi theorem.
Membership of a string in a language defined by an arbitrary context-sensitive grammar, or by an arbitrary deterministic context-sensitive grammar, is a PSPACE-complete problem.
**Piperaquine**
Piperaquine:
Piperaquine is an antiparasitic drug used in combination with dihydroartemisinin to treat malaria. Piperaquine was developed under the Chinese National Malaria Elimination Programme in the 1960s and was adopted throughout China as a replacement for the structurally similar antimalarial drug chloroquine. Due to widespread parasite resistance to piperaquine, the drug fell out of use as a monotherapy, and is instead used as a partner drug for artemisinin combination therapy. Piperaquine kills parasites by disrupting the detoxification of host heme.
Medical uses:
Piperaquine is used in combination with dihydroartemisinin for the treatment of malaria. This combination is one of several artemisinin combination therapies recommended by the World Health Organization for treatment of uncomplicated malaria. This combination is also recommended by the World Health Organization for treatment of severe malaria after administration of artesunate. Piperaquine is also registered for use in some countries in combination with arterolane. However, this combination is not recommended by the World Health Organization due to insufficient data.
Contraindications:
Like chloroquine, piperaquine can prolong the QT interval. Although large randomized clinical trials have not revealed evidence of cardiotoxicity, the World Health Organization recommends not using piperaquine in patients with congenital QT prolongation or who are on other drugs that prolong the QT interval.
Pharmacology:
Mechanism of action
Like chloroquine, piperaquine is thought to function by accumulating in the parasite digestive vacuole and interfering with the detoxification of heme into hemozoin.
Pharmacology:
Resistance
Parasites that survive piperaquine treatment have been increasingly reported since 2010, particularly in Southeast Asia. The epicenter of piperaquine resistance appears to be western Cambodia, where in 2014 over 40% of dihydroartemisinin-piperaquine treatments failed to eliminate parasites from the patient's blood. Characterizing piperaquine-resistant parasites has been technically challenging, as parasites that survive piperaquine treatment in patients appear to remain sensitive to piperaquine in vitro; i.e. piperaquine appears to have the same IC50 in sensitive parasites and resistant parasites. The mechanism by which parasites become resistant to piperaquine remains unclear. Amplification of the parasite proteases plasmepsin 2 and plasmepsin 3, both involved in degrading host hemoglobin, is associated with resistance to piperaquine. Similarly, mutations in a gene related to chloroquine resistance, PfCRT, have been associated with piperaquine resistance; however, parasites that are resistant to chloroquine remain sensitive to piperaquine. In contrast, amplification of the gene for the parasite transporter PfMDR1, a mechanism of parasite resistance to mefloquine, is inversely correlated with piperaquine resistance.
Pharmacology:
Pharmacokinetics
Piperaquine is a lipophilic drug and therefore is rapidly absorbed and distributed across much of the body. The drug reaches its maximal concentrations approximately 2 hours after administration.
Chemistry:
Piperaquine is available as a base, and as a water-soluble tetraphosphate salt.
History:
Piperaquine was discovered in the 1960s by two separate groups working independently of one another: the Shanghai Pharmaceutical Industry Research Institute in China and Rhône-Poulenc in France. In the 1970s and 1980s piperaquine became the primary antimalarial drug of the Chinese National Malaria Control Programme due to increased parasite resistance to chloroquine. By the late 1980s, the use of piperaquine as an antimalarial monotherapy diminished as increasing parasite resistance to piperaquine was observed. Beginning in the 1990s, piperaquine was tested and adopted as a partner drug for artemisinin combination therapy.
**Retting**
Retting:
Retting is a process employing the action of micro-organisms and moisture on plants to dissolve or rot away much of the cellular tissues and pectins surrounding bast-fibre bundles, facilitating the separation of the fibre from the stem. It is used in the production of linen from flax stalks and coir from coconut husks.
Water retting:
The most widely practiced method of retting, water retting, is performed by submerging bundles of stalks in water. The water, penetrating the central stalk portion, swells the inner cells, bursting the outermost layer, thus increasing the absorption of both moisture and decay-producing bacteria. Retting time must be carefully judged; under-retting makes separation difficult, and over-retting weakens the fibre. In double retting, a gentle process producing excellent fibre, the stalks are removed from the water before retting is completed, dried for several months, then retted again. Natural water retting employs stagnant or slow-moving waters, such as ponds, bogs, and slow streams and rivers. The stalk bundles are weighted down, usually with stones or wood, for about 8 to 14 days, depending on water temperature and mineral content. Tank retting, by contrast, employs vats usually made of concrete, requires about four to six days, and is feasible in any season. In the first six to eight hours, called the leaching period, much of the dirt and colouring matter is removed by the water, which is usually changed to assure clean fibre. Waste retting water, which requires treatment to reduce harmful toxic elements before its release, is rich in plant minerals, such as nitrates, and can be used as liquid fertilizer.
Dew retting:
This is a common method in areas with limited water resources. It is most effective in climates with heavy night time dews and warm daytime temperatures. The harvested plant stalks are spread evenly in grassy fields, where the combined action of bacteria, sun, air, and dew produces fermentation, dissolving much of the stem material surrounding the fibre bundles. Within two to three weeks, depending upon climatic conditions, the fibre can be separated. Dew-retted fibre is generally darker in color and of poorer quality than water-retted fibre.
After retting:
The retted stalks, called straw, are dried in open air or by mechanical means, and are frequently stored for a short period to allow "curing" to occur, facilitating fibre removal. Final separation of the fibre is accomplished by a breaking process in which the brittle woody portion of the straw is broken, either by hand or by passing through rollers, followed by the scutching operation, which removes the broken woody pieces (shives) by beating or scraping. Some machines combine breaking and scutching operations. Waste material from the first scutching, consisting of shives and short fibres, is usually treated a second time. The short fibre or tow thus obtained is frequently used in paper manufacture and rope making, and the shives may serve as fuel to heat the retting water or may be made into wallboard.
**D-threo-aldose 1-dehydrogenase**
D-threo-aldose 1-dehydrogenase:
In enzymology, a D-threo-aldose 1-dehydrogenase (EC 1.1.1.122) is an enzyme that catalyzes the chemical reaction:
a D-threo-aldose + NAD+ ⇌ a D-threo-aldono-1,5-lactone + NADH + H+
Thus, the two substrates of this enzyme are D-threo-aldose and NAD+, whereas its 3 products are D-threo-aldono-1,5-lactone, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of a donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is D-threo-aldose:NAD+ 1-oxidoreductase. Other names in common use include L-fucose dehydrogenase, (2S,3R)-aldose dehydrogenase, dehydrogenase, L-fucose, and L-fucose (D-arabinose) dehydrogenase. This enzyme participates in ascorbate and aldarate metabolism.
**PARK3**
PARK3:
Parkinson disease 3 (autosomal dominant, Lewy body) is a protein that in humans is encoded by the PARK3 gene.
**Trimethoxyamphetamine**
Trimethoxyamphetamine:
Trimethoxyamphetamines (TMAs) are a family of isomeric psychedelic hallucinogenic drugs. There exist six different TMAs that differ only in the position of the three methoxy groups: TMA, TMA-2, TMA-3, TMA-4, TMA-5, and TMA-6. The TMAs are analogs of the phenethylamine cactus alkaloid mescaline. The TMAs are substituted amphetamines; however, their mechanism of action is more complex than that of the unsubstituted compound amphetamine, probably involving agonist activity at serotonin receptors such as the 5-HT2A receptor in addition to the generalised dopamine receptor agonism typical of most amphetamines. This action on serotonergic receptors likely underlies the psychedelic effects of these compounds. Some TMAs are reported to elicit a range of emotions ranging from sadness to empathy and euphoria. TMA was first synthesized by Hey in 1947. Synthesis data as well as human activity data have been published in the book PiHKAL.
Trimethoxyamphetamine:
The most important TMA compound from a pharmacological standpoint is TMA-2, as this isomer has been much more widely used as a recreational drug and sold on the grey market as a so-called research chemical; TMA (sometimes referred to as "mescalamphetamine" or TMA-1) and TMA-6 have also been used in this way to a lesser extent. These three isomers are significantly more active as hallucinogenic drugs, and have consequently been placed onto the illegal drug schedules in some countries such as the Netherlands and Japan. The other three isomers TMA-3, TMA-4, and TMA-5 are not known to have been used as recreational drugs to any great extent.
TMAs:
Note: Because they are isomers, the TMAs have the same chemical formula, C12H19NO3, and the same molecular mass, 225.28 g/mol.
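As a quick arithmetic check of the shared formula above, the quoted molecular mass can be recomputed from standard IUPAC atomic weights (the weight values below are standard reference data, not taken from this article):

```python
# Molar mass of C12H19NO3 from standard atomic weights (g/mol).
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(composition):
    """Sum atomic weights over an element -> atom-count mapping."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in composition.items())

tma = {"C": 12, "H": 19, "N": 1, "O": 3}
print(round(molar_mass(tma), 2))  # 225.29, matching the ~225.28 g/mol quoted
```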
Legality:
Sweden: The Sveriges riksdag added TMA-2 to Schedule I ("substances, plant materials and fungi which normally do not have medical use") as a narcotic in Sweden as of 30 December 1999, published by the Medical Products Agency in its regulation LVFS 2004:3, listed as 2,4,5-trimetoxiamfetamin (TMA-2).
United Kingdom: Illegal under the Psychoactive Substances Act 2016.
United States: 3,4,5-Trimethoxyamphetamine is listed as a Schedule I controlled substance, along with the positional isomers 2,4,5-trimethoxyamphetamine (TMA-2) and 2,4,6-trimethoxyamphetamine (TMA-6), as well as Escaline.
**Allianz (arts)**
Allianz (arts):
Allianz was a group of Swiss artists which formed in 1937. The Allianz group advocated the concrete art theories of Max Bill, with more emphasis on color than their Constructivist counterparts.
Allianz (arts):
Their first group exhibition, Neue Kunst in der Schweiz, was held in Kunsthalle Basel in 1937 (January 9-February 2) and was followed by a second at the Kunsthaus in Zürich in 1942 and then in 1947 (October 18-November 23). Further shows were held at the Galerie des Eaux Vives in Zürich, starting with two in 1944. The founder and Director of Galerie des Eaux Vives, as well as a prominent founding artist of the Allianz, was John Konstantin Hansegger, born in St. Gallen in 1908.
Allianz (arts):
The Almanach Neuer Kunst in der Schweiz, published by the group in 1941, showed reproductions of their works with those of artists such as Paul Klee, Le Corbusier, Gérard Vulliamy and Kurt Seligmann. The publication included texts by Bill, Leuppi, Le Corbusier, Seligmann, Sigfried Giedion, Gérard Vulliamy and others. Editions des Eaux-Vives Zurich (connected with the Galerie) published important illustrated bulletins of Allianz shows with texts by Hansegger, Johannes Sorge, Max Bill and Ugo Pirogallo.
Allianz (arts):
Allianz exhibitions continued into the 1950s.
Allianz artists:
Max Bill, Camille Graeser, Hansegger, André Evard, Fritz Glarner, Max Huber, Leo Leuppi, Richard Paul Lohse, Verena Loewensberg
**PISat**
PISat:
PISat (PESIT Imaging Satellite) is a remote sensing nanosatellite developed by PES University, Bengaluru. The satellite weighs 5 kg and carries an imaging camera that can capture pictures with 80-metre resolution.
Mission:
The main mission of the satellite was to develop the capability of designing satellites on campus with collaboration from students and professors.
Launch:
The satellite was launched on 26 September 2016 by ISRO using the PSLV-C35 rocket.
**Terrestrial ecosystem**
Terrestrial ecosystem:
Terrestrial ecosystems are ecosystems that are found on land. Examples include tundra, taiga, temperate deciduous forest, tropical rain forest, grassland, and desert. Terrestrial ecosystems differ from aquatic ecosystems by the predominant presence of soil rather than water at the surface and by the extension of plants above this soil/water surface. There is a wide range of water availability among terrestrial ecosystems (including water scarcity in some cases), whereas water is seldom a limiting factor to organisms in aquatic ecosystems. Because water buffers temperature fluctuations, terrestrial ecosystems usually experience greater diurnal and seasonal temperature fluctuations than do aquatic ecosystems in similar climates. Terrestrial ecosystems are of particular importance in meeting Sustainable Development Goal 15, which targets the conservation, restoration, and sustainable use of terrestrial ecosystems.
Organisms and processes:
Organisms in terrestrial ecosystems have adaptations that allow them to obtain water when the entire body is no longer bathed in that fluid, means of transporting the water from limited sites of acquisition to the rest of the body, and means of preventing the evaporation of water from body surfaces. They also have traits that provide body support in the atmosphere, a much less buoyant medium than water, and other traits that render them capable of withstanding the extremes of temperature, wind, and humidity that characterize terrestrial ecosystems. Finally, the organisms in terrestrial ecosystems have evolved many methods of transporting gametes in environments where fluid flow is much less effective as a transport medium.
Size and plants:
Terrestrial ecosystems occupy 55,660,000 mi2 (144,150,000 km2), or 28.26% of Earth's surface. Major plant taxa in terrestrial ecosystems are members of the division Magnoliophyta (flowering plants), of which there are about 275,000 species, and the division Pinophyta (conifers), of which there are about 500 species. Members of the division Bryophyta (mosses and liverworts), of which there are about 24,000 species, are also important in some terrestrial ecosystems. Major animal taxa in terrestrial ecosystems include the classes Insecta (insects) with about 900,000 species, Aves (birds) with 8,500 species, and Mammalia (mammals) with approximately 4,100 species.
**Turret punch**
Turret punch:
A turret punch or turret press is a type of punch press used for metal forming by punching.
Turret punch:
Punching, and press work in general, is a process well suited to mass production. However, the initial tooling costs, of both the machine and the job-specific press tool, are high. This limits the use of punch work for small-volume and prototype work. A turret punch is one way of addressing this cost. The tooling of a turret punch uses a large number of standard punch tools: holes of varying sizes, straight edges, commonly-used notches or mounting holes. By using a large number of strokes, with several different tools in turn, a turret press may make a wide variety of parts without having to first make a specialised press tool for that task. This saves both time and money, allowing rapid prototyping or low-volume production to start without tooling delays.
Turret punch:
A typical CNC turret punch has a choice of up to 60 tools in a "turret" that can be rotated to bring any tool to the punching position. A simple shape (e.g., a square, circle, or hexagon) is cut directly from the sheet. A complex shape can be cut out by making many square or rounded cuts around the perimeter. As a press tool requires a matching punch and die set, there are two corresponding turrets, above and below the bed, for punch and die. These two turrets must rotate in precise synchronisation and with their alignment carefully maintained. Several punches of identical shape may be used in the turret, each one turned to a different angle, as there is usually no feature to rotate the sheet workpiece relative to the tool.
Turret punch:
A punch is less flexible than a laser for cutting compound shapes, but faster for repetitive shapes (for example, the grille of an air-conditioning unit). Some units combine both laser and punch features in one machine.
Most turret punches are CNC-controlled, with automatic positioning of the metal sheet beneath the tool and programmed selection of particular tools. A CAM process first converts the CAD design for the finished item into the number of individual punch operations needed, depending on the tools available in the turret.
Turret punch:
The precise load-out of tools may change according to a particular job's needs. The CAD stage is also optimised for turret punching: an operation such as rounding a corner may be much quicker with a single chamfered cut than a fully rounded corner requiring several strokes. Changing an unimportant dimension such as the width of a ventilation slot may match an available tool, requiring a single cut, rather than cutting each side separately. CAD support may also manage the selection of tools to be loaded into the turret before starting work.
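The stroke-count trade-off described above can be illustrated with a toy calculation; the function, tool sizes, and overlap value below are hypothetical illustrations, not taken from any real CAM package:

```python
import math

# Illustrative sketch: estimating stroke counts for a turret-punch CAM plan,
# showing why a single dedicated tool can beat nibbling the same feature.

def nibble_strokes(length_mm, punch_mm, overlap_mm):
    """Strokes needed to nibble a cut of the given length with one punch,
    each stroke after the first advancing by (punch width - overlap)."""
    if length_mm <= punch_mm:
        return 1
    step = punch_mm - overlap_mm
    return 1 + math.ceil((length_mm - punch_mm) / step)

# A 30 mm corner chamfer: one stroke with a dedicated chamfer tool...
chamfer_strokes = 1
# ...versus nibbling the same edge with a 5 mm round punch at 1 mm overlap.
print(nibble_strokes(30, 5, 1))  # 8 strokes -- hence the CAD-stage optimisation
```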
Turret punch:
As each tool in a turret press is relatively small, the press requires little power compared to a press manufacturing similar parts with a single press stroke. This allows the tool to be lighter and sometimes cheaper, although this is offset by the increased complexity of the turret and sheet positioning. Turret punches can operate faster per stroke than a heavier tool press, although of course many strokes are required. A turret punch can achieve 600 strokes per minute.
Turret punch:
The most sophisticated recent machines may also add facilities for forming and bending, as well as punch cutting. Although unlikely to replace a press brake for box making, the ability to form even small lugs may turn a two machine process into a one machine process, reducing materials handling time.
Manual punches:
Manual turret punches have also been used. These are C-frame presses, usually with a rack-actuated ram. There is no CNC, for either sheet positioning or tool changing. Using such a manual press requires great familiarity, as the correct tool must be selected from the turret each time for every one of the many press operations performed. Such manual presses are rarely found, but they have their place in labour-intensive tasks such as hand-worked sheet metal shops, making such products as custom car bodywork. They are often used in conjunction with other highly skilled artisan processes such as the English wheel.
**Forensic chemistry**
Forensic chemistry:
Forensic chemistry is the application of chemistry and its subfield, forensic toxicology, in a legal setting. A forensic chemist can assist in the identification of unknown materials found at a crime scene. Specialists in this field have a wide array of methods and instruments to help identify unknown substances. These include high-performance liquid chromatography, gas chromatography-mass spectrometry, atomic absorption spectroscopy, Fourier transform infrared spectroscopy, and thin layer chromatography. The range of different methods is important due to the destructive nature of some instruments and the number of possible unknown substances that can be found at a scene. Forensic chemists prefer using nondestructive methods first, to preserve evidence and to determine which destructive methods will produce the best results.
Forensic chemistry:
Along with other forensic specialists, forensic chemists commonly testify in court as expert witnesses regarding their findings. Forensic chemists follow a set of standards that have been proposed by various agencies and governing bodies, including the Scientific Working Group on the Analysis of Seized Drugs. In addition to the standard operating procedures proposed by the group, specific agencies have their own standards regarding the quality assurance and quality control of their results and their instruments. To ensure the accuracy of what they are reporting, forensic chemists routinely check and verify that their instruments are working correctly and are still able to detect and measure various quantities of different substances.
Role in investigations:
Forensic chemists' analysis can provide leads for investigators, and they can confirm or refute their suspicions. The identification of the various substances found at the scene can tell investigators what to look for during their search. During fire investigations, forensic chemists can determine if an accelerant such as gasoline or kerosene was used; if so, this suggests that the fire was intentionally set. Forensic chemists can also narrow down the suspect list to people who would have access to the substance used in a crime. For example, in explosive investigations, the identification of RDX or C-4 would indicate a military connection as those substances are military grade explosives. On the other hand, the identification of TNT would create a wider suspect list, since it is used by demolition companies as well as in the military. During poisoning investigations, the detection of specific poisons can give detectives an idea of what to look for when they are interviewing potential suspects. For example, an investigation that involves ricin would tell investigators to look for ricin's precursors, the seeds of the castor oil plant.Forensic chemists also help to confirm or refute investigators' suspicions in drug or alcohol cases. The instruments used by forensic chemists can detect minute quantities, and accurate measurement can be important in crimes such as driving under the influence as there are specific blood alcohol content cutoffs where penalties begin or increase. In suspected overdose cases, the quantity of the drug found in the person's system can confirm or rule out overdose as the cause of death.
History:
Early history Throughout history, a variety of poisons have been used to commit murder, including arsenic, nightshade, hemlock, strychnine, and curare. Until the early 19th century, there were no methods to accurately determine if a particular chemical was present, and poisoners were rarely punished for their crimes. In 1836, one of the first major contributions to forensic chemistry was introduced by British chemist James Marsh. He created the Marsh test for arsenic detection, which was subsequently used successfully in a murder trial. It was also during this time that forensic toxicology began to be recognized as a distinct field. Mathieu Orfila, the "father of toxicology", made great advancements to the field during the early 19th century. A pioneer in the development of forensic microscopy, Orfila contributed to the advancement of this method for the detection of blood and semen. Orfila was also the first chemist to successfully classify different chemicals into categories such as corrosives, narcotics, and astringents. The next advancement in the detection of poisons came in 1850 when a valid method for detecting vegetable alkaloids in human tissue was created by chemist Jean Stas. Stas's method was quickly adopted and used successfully in court to convict Count Hippolyte Visart de Bocarmé of murdering his brother-in-law by nicotine poisoning. Stas was able to successfully isolate the alkaloid from the organs of the victim. Stas's protocol was subsequently altered to incorporate tests for caffeine, quinine, morphine, strychnine, atropine, and opium. The wide range of instrumentation for forensic chemical analysis also began to be developed during this time period. The early 19th century saw the invention of the spectroscope by Joseph von Fraunhofer. In 1859, chemist Robert Bunsen and physicist Gustav Kirchhoff expanded on Fraunhofer's invention.
Their experiments with spectroscopy showed that specific substances created a unique spectrum when exposed to specific wavelengths of light. Using spectroscopy, the two scientists were able to identify substances based on their spectrum, providing a method of identification for unknown materials. In 1906 botanist Mikhail Tsvet invented paper chromatography, an early predecessor to thin layer chromatography, and used it to separate and examine the plant proteins that make up chlorophyll. The ability to separate mixtures into their individual components allows forensic chemists to examine the parts of an unknown material against a database of known products. By matching the retention factors for the separated components with known values, materials can be identified.
History:
Modernization Modern forensic chemists rely on numerous instruments to identify unknown materials found at a crime scene. The 20th century saw many advancements in technology that allowed chemists to detect smaller amounts of material more accurately. The first major advancement in this century came during the 1930s with the invention of a spectrometer that could measure the signal produced with infrared (IR) light. Early IR spectrometers used a monochromator and could only measure light absorption in a very narrow wavelength band. It was not until the coupling of an interferometer with an IR spectrometer in 1949 by Peter Fellgett that the complete infrared spectrum could be measured at once.: 202 Fellgett also used the Fourier transform, a mathematical method that can break down a signal into its individual frequencies, to make sense of the enormous amount of data received from the complete infrared analysis of a material. Since then, Fourier transform infrared spectroscopy (FTIR) instruments have become critical in the forensic analysis of unknown material because they are nondestructive and extremely quick to use. Spectroscopy was further advanced in 1955 with the invention of the modern atomic absorption (AA) spectrophotometer by Alan Walsh. AA analysis can detect specific elements that make up a sample along with their concentrations, allowing for the easy detection of heavy metals such as arsenic and cadmium. Advancements in the field of chromatography arrived in 1953 with the invention of the gas chromatograph by Anthony T. James and Archer John Porter Martin, allowing for the separation of volatile liquid mixtures with components which have similar boiling points. Nonvolatile liquid mixtures could be separated with liquid chromatography, but substances with similar retention times could not be resolved until the invention of high-performance liquid chromatography (HPLC) by Csaba Horváth in 1970.
Modern HPLC instruments are capable of detecting and resolving substances whose concentrations are as low as parts per trillion. One of the most important advancements in forensic chemistry came in 1955 with the invention of gas chromatography-mass spectrometry (GC-MS) by Fred McLafferty and Roland Gohlke. The coupling of a gas chromatograph with a mass spectrometer allowed for the identification of a wide range of substances. GC-MS analysis is widely considered the "gold standard" for forensic analysis due to its sensitivity and versatility along with its ability to quantify the amount of substance present. The increase in the sensitivity of instrumentation has advanced to the point that minute impurities within compounds can be detected, potentially allowing investigators to trace chemicals to a specific batch and lot from a manufacturer.
Methods:
Forensic chemists rely on a multitude of instruments to identify unknown substances found at a scene. Different methods can be used to determine the identity of the same substance, and it is up to the examiner to determine which method will produce the best results. Factors that forensic chemists might consider when performing an examination are the length of time a specific instrument will take to examine a substance and the destructive nature of that instrument. They prefer using nondestructive methods first, to preserve the evidence for further examination. Nondestructive techniques can also be used to narrow down the possibilities, making it more likely that the correct method will be used the first time when a destructive method is used.
Methods:
Spectroscopy The two main standalone spectroscopy techniques for forensic chemistry are FTIR and AA spectroscopy. FTIR is a nondestructive process that uses infrared light to identify a substance. The attenuated total reflectance sampling technique eliminates the need for substances to be prepared before analysis. The combination of nondestructiveness and zero preparation makes ATR FTIR analysis a quick and easy first step in the analysis of unknown substances. To facilitate the positive identification of the substance, FTIR instruments are loaded with databases that can be searched for known spectra that match the unknown's spectrum. FTIR analysis of mixtures, while not impossible, presents specific difficulties due to the cumulative nature of the response. When analyzing an unknown that contains more than one substance, the resulting spectrum will be a combination of the individual spectra of each component. While common mixtures have known spectra on file, novel mixtures can be difficult to resolve, making FTIR an unacceptable means of identification. However, the instrument can be used to determine the general chemical structures present, allowing forensic chemists to determine the best method for analysis with other instruments. For example, a methoxy group will result in a peak between 3,030 and 2,950 wavenumbers (cm−1). Atomic absorption spectroscopy (AAS) is a destructive technique that is able to determine the elements that make up the analyzed sample. AAS performs this analysis by subjecting the sample to an extremely high heat source, breaking the atomic bonds of the substance, leaving free atoms.
Radiation in the form of light is then passed through the sample, forcing the atoms to jump to a higher energy state.: 2 Forensic chemists can test for each element by using a corresponding wavelength of light that forces that element's atoms to a higher energy state during the analysis.: 256 For this reason, and due to the destructive nature of this method, AAS is generally used as a confirmatory technique after preliminary tests have indicated the presence of a specific element in the sample. The concentration of the element in the sample is proportional to the amount of light absorbed when compared to a blank sample. AAS is useful in cases of suspected heavy metal poisoning such as with arsenic, lead, mercury, and cadmium. The concentration of the substance in the sample can indicate whether heavy metals were the cause of death.
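The proportionality between absorbed light and element concentration described above is usually exploited through a linear calibration curve built from standards; the sketch below uses invented absorbance readings for arsenic standards to show the idea:

```python
# Sketch of AAS quantification via a linear calibration curve
# (absorbance proportional to concentration). Data values are invented.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

# Absorbance readings for standards of known concentration (ppm).
std_conc = [0.0, 1.0, 2.0, 4.0]
std_abs = [0.002, 0.101, 0.199, 0.401]

m, b = fit_line(std_conc, std_abs)

def concentration(absorbance):
    """Invert the calibration line to read a sample concentration."""
    return (absorbance - b) / m

print(round(concentration(0.25), 2))  # 2.49 ppm for a 0.25-absorbance sample
```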
Methods:
Chromatography Spectroscopy techniques are useful when the sample being tested is pure, or a very common mixture. When an unknown mixture is being analyzed it must be broken down into its individual parts. Chromatography techniques can be used to break apart mixtures into their components allowing for each part to be analyzed separately.
Methods:
Thin layer chromatography (TLC) is a quick alternative to more complex chromatography methods. TLC can be used to analyze inks and dyes by extracting the individual components. This can be used to investigate notes or fibers left at the scene, since each company's product is slightly different and those differences can be seen with TLC. The only limiting factor with TLC analysis is the necessity for the components to be soluble in whatever solution is used to carry the components up the analysis plate. This solution is called the mobile phase. The forensic chemist can compare unknowns with known standards by looking at the distance each component travelled. The ratio of the distance travelled by a component to the distance travelled by the solvent front is known as the retention factor (Rf) for each extracted component. If each Rf value matches a known sample, that is an indication of the unknown's identity. High-performance liquid chromatography (HPLC) can be used to extract individual components from a mixture dissolved in a solution. HPLC is used for nonvolatile mixtures that would not be suitable for gas chromatography. This is useful in drug analysis where the pharmaceutical is a combination drug, since the components would separate, or elute, at different times, allowing for the verification of each component. The eluates from the HPLC column are then fed into various detectors that produce a peak on a graph relative to its concentration as it elutes off the column. The most common type of detector is an ultraviolet-visible spectrometer, as the most common items of interest tested with HPLC, pharmaceuticals, have UV absorbance. Gas chromatography (GC) performs the same function as liquid chromatography, but it is used for volatile mixtures. In forensic chemistry, the most common GC instruments use mass spectrometry as their detector. GC-MS can be used in investigations of arson, poisoning, and explosions to determine exactly what was used.
In theory, GC-MS instruments can detect substances whose concentrations are in the femtogram (10−15) range. However, in practice, due to signal-to-noise ratios and other limiting factors, such as the age of the individual parts of the instrument, the practical detection limit for GC-MS is in the picogram (10−12) range. GC-MS is also capable of quantifying the substances it detects; chemists can use this information to determine the effect the substance would have on an individual. GC-MS instruments need around 1,000 times more of the substance to quantify the amount than they need simply to detect it; the limit of quantification is typically in the nanogram (10−9) range.
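Looking back at the TLC discussion, the retention-factor comparison against known standards amounts to simple ratios; a minimal sketch, with invented plate measurements and invented standard values:

```python
# TLC retention factors: Rf = spot distance / solvent-front distance.
# Measurements (mm) and standard Rf values are invented for illustration.

def rf(spot_mm, front_mm):
    """Retention factor for one separated component."""
    return spot_mm / front_mm

# Known dye standards and their Rf values under one mobile phase.
standards = {"dye A": 0.24, "dye B": 0.55, "dye C": 0.81}

def match(spot_mm, front_mm, tol=0.03):
    """Name standards whose Rf lies within tol of the unknown's Rf."""
    value = rf(spot_mm, front_mm)
    return [name for name, known in standards.items() if abs(known - value) <= tol]

print(match(44, 80))  # Rf = 0.55 -> ['dye B']
```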
Forensic toxicology:
Forensic toxicology is the study of the pharmacodynamics, or what a substance does to the body, and pharmacokinetics, or what the body does to the substance. To accurately determine the effect a particular drug has on the human body, forensic toxicologists must be aware of various levels of drug tolerance that an individual can build up as well as the therapeutic index for various pharmaceuticals. Toxicologists are tasked with determining whether any toxin found in a body was the cause of or contributed to an incident, or whether it was at too low a level to have had an effect. While the determination of the specific toxin can be time-consuming due to the number of different substances that can cause injury or death, certain clues can narrow down the possibilities. For example, carbon monoxide poisoning would result in bright red blood, while death from hydrogen sulfide poisoning would cause the brain to have a green hue. Toxicologists are also aware of the different metabolites that a specific drug could break down into inside the body. For example, a toxicologist can confirm that a person took heroin by the presence in a sample of 6-monoacetylmorphine, which only comes from the breakdown of heroin. The constant creation of new drugs, both legal and illicit, forces toxicologists to keep themselves apprised of new research and methods to test for these novel substances. The stream of new formulations means that a negative test result does not necessarily rule out drugs. To avoid detection, illicit drug manufacturers frequently change the chemicals' structure slightly. These compounds are often not detected by routine toxicology tests and can be masked by the presence of a known compound in the same sample. As new compounds are discovered, known spectra are determined and entered into databases that can be downloaded and used as reference standards. Laboratories also tend to keep in-house databases for the substances they find locally.
Standards:
Guidelines have been set up by various governing bodies regarding the standards that are followed by practicing forensic scientists. For forensic chemists, the international Scientific Working Group for the Analysis of Seized Drugs (SWGDRUG) presents recommendations for the quality assurance and quality control of tested materials. In the identification of unknown samples, protocols have been grouped into three categories based on the probability for false positives. Instruments and protocols in category A are considered the best for uniquely identifying an unknown material, followed by categories B and then C. To ensure the accuracy of identifications, SWGDRUG recommends that multiple tests using different instruments be performed on each sample, and that one category A technique and at least one other technique be used. If a category A technique is not available, or the forensic chemist decides not to use one, SWGDRUG recommends that at least three techniques be used, two of which must be from category B.: 14–15 Combination instruments, such as GC-MS, are considered two separate tests as long as the results are compared to known values individually. For example, the GC elution times would be compared to known values along with the MS spectra. If both of those match a known substance, no further tests are needed.: 16 Standards and controls are necessary in the quality control of the various instruments used to test samples. Due to the nature of their work in the legal system, chemists must ensure that their instruments are working accurately. To do this, known controls are tested consecutively with unknown samples. By comparing the readouts of the controls with their known profiles, the instrument can be confirmed to have been working properly at the time the unknowns were tested. Standards are also used to determine the instrument's limit of detection and limit of quantification for various common substances.
Calculated quantities must be above the limit of detection to be confirmed as present and above the limit of quantification to be quantified. If the value is below the limit the value is not considered reliable.
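The reporting rule above (confirm presence only above the limit of detection, report a quantity only above the limit of quantification) can be sketched as follows; the limit values are invented examples:

```python
# Sketch of LOD/LOQ reporting logic. The limits (in ng) are invented.
LOD = 0.5   # limit of detection
LOQ = 1.5   # limit of quantification

def report(measured_ng):
    """Classify a calculated quantity against the instrument's limits."""
    if measured_ng < LOD:
        return "not detected"
    if measured_ng < LOQ:
        return "detected, not quantifiable"
    return f"quantified: {measured_ng:.1f} ng"

print(report(0.2))   # not detected
print(report(0.9))   # detected, not quantifiable
print(report(3.7))   # quantified: 3.7 ng
```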
Standards:
Testimony The standardized procedures for testimony by forensic chemists are provided by the various agencies that employ the scientists as well as SWGDRUG. Forensic chemists are ethically bound to present testimony in a neutral manner and to be open to reconsidering their statements if new information is found.: 3 Chemists should also limit their testimony to areas they have been qualified in regardless of questions during direct or cross-examination.: 27 Individuals called to testify must be able to relay scientific information and processes in a manner that lay individuals can understand. By being qualified as an expert, chemists are allowed to give their opinions on the evidence as opposed to just stating the facts. This can lead to competing opinions from experts hired by the opposing side. Ethical guidelines for forensic chemists require that testimony be given in an objective manner, regardless of what side the expert is testifying for. Forensic experts that are called to testify are expected to work with the lawyer who issued the summons and to assist in their understanding of the material they will be asking questions about.
Standards:
Education:
Forensic chemistry positions require a bachelor's degree or similar in a natural or physical science, as well as laboratory experience in general, organic, and analytical chemistry. Once in the position, individuals are trained in the protocols performed at that specific lab until they are proven competent to perform all experiments without supervision. Practicing chemists in the field are expected to complete continuing education to maintain their proficiency.
**Brennan conjecture**
Brennan conjecture:
The Brennan conjecture is a mathematical hypothesis (in complex analysis) for estimating (under specified conditions) the integral powers of the moduli of the derivatives of conformal maps into the open unit disk. The conjecture was formulated by James E. Brennan in 1978. Let W be a simply connected open subset of C with at least two boundary points in the extended complex plane. Let φ be a conformal map of W onto the open unit disk. The Brennan conjecture states that ∫_W |φ′|^p dx dy < ∞ whenever 4/3 < p < 4. Brennan proved the result when 4/3 < p < p₀ for some constant p₀ > 3. Bertilsson proved in 1999 that the result holds when p < 3.422, but the full result remains open.
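Stated cleanly in display form, with W ⊂ C simply connected and φ the conformal map of W onto the open unit disk as above, the conjecture reads:

```latex
\int_W |\varphi'(z)|^p \, dx \, dy < \infty
\qquad \text{whenever } \tfrac{4}{3} < p < 4 .
```

The lower endpoint 4/3 is known to be necessary, and the content of the conjecture is the full range up to the upper endpoint 4.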
**Corrosion Science**
Corrosion Science:
Corrosion Science is a peer-reviewed scientific journal published by Elsevier in 16 issues per year. Established in 1961, it covers a wide range of topics in the study of pure/applied corrosion and corrosion engineering, including but not limited to oxidation, biochemical corrosion, stress corrosion cracking, and corrosion control methods, as well as surface science and engineering. The editors-in-chief are J.M.C. Mol (Delft University of Technology) and O.R. Mattos (Federal University of Rio de Janeiro).
Abstracting and indexing:
The journal is abstracted and indexed in: Chemical Abstracts, Current Contents/Engineering, Computing & Technology, Inspec, Materials Science Citation Index, and Scopus. According to the Journal Citation Reports, the journal has a 2020 impact factor of 7.205.
**Blue Onion**
Blue Onion:
Blue Onion (German: Zwiebelmuster) is a porcelain tableware pattern for dishware. Originally manufactured by Meissen porcelain since the 18th century, it has been copied by other companies since the late 19th century.
History:
The "onion" pattern was originally named the "bulb" pattern.While modelled after a pattern first produced by Chinese porcelain painters, which featured pomegranates unfamiliar in Saxony, the plates and bowls produced in the Meissen factory in 1740 created their own style and feel. Among the earliest Chinese examples are underglaze blue and white porcelains of the early Ming Dynasty. The Meissen painters created hybrids that resembled flora more familiar to Europeans. The so-called "onions" are not onions at all, but, according to historians, are most likely mutations of the peaches and pomegranates, modelled on the original Chinese pattern. The design is a grouping of several floral motifs, with peonies and asters in the pattern's centre, and winding stems around a bamboo stalk.
History:
Before the end of the 18th century, other porcelain factories were copying the Meissen Zwiebelmuster. In the 19th century almost all the European manufactories offered a version, with transfer-printed outlines that were coloured in by hand. Enoch Wedgwood's pattern in the 1870s was known as "Meissen". Today, a Japanese version called "Blue Danube" is well known among tableware patterns.
Characteristics:
While the design is considered to have originated from an east Asian model, likely Chinese, it also demonstrates the European influence within the abstract stylisation. It is connected with the rhythm and rules of rococo ornamentation: for instance, the asymmetrical motif is composed according to type in various areas, giving the impression of symmetry. The onion pattern was designed as a white ware decorated with a cobalt blue underglaze pattern. Some rare dishes have a green, red, pink, or black pattern instead of the cobalt blue. A very rare type is called red bud because there are red accents on the blue-and-white dishes.