**Cartographica**
Cartographica:
Cartographica is an interdisciplinary peer-reviewed academic journal and the official publication of the Canadian Cartographic Association, in affiliation with the International Cartographic Association. It is published four times a year by the University of Toronto Press.
Abstracting and indexing:
The journal is abstracted and indexed in: Academic Search Alumni Edition, Academic Search Complete, Academic Search Elite, Academic Search Premier, Canadian Reference Centre, Emerging Sources Citation Index (ESCI), Microsoft Academic Search, Scopus, and Ulrich's Periodicals Directory.
**Evans Tuning**
Evans Tuning:
Evans Tuning, LLC is an automotive engine tuning and aftermarket modification shop that specializes in the reprogramming of engine control units (ECUs) to provide a smooth driving experience and safe engine conditions after modifications to a stock automotive configuration have been performed.
Overview:
The company was founded in 2004 by Jeffrey Evans and is located in Mount Bethel, Pennsylvania. It was initially started in Easton, PA, and in late 2009 moved to its current location in Mount Bethel. The company initially serviced only modified Honda and Acura models. In mid-2004, it purchased its first 2WD Dynapack dynamometer (dyno) and began in-house dyno tuning operations. In early 2009 an additional 2WD Dynapack dyno was purchased, expanding the operation into an AWD-capable dyno shop.
Tuning Services:
Dyno Tuning Evans Tuning's dyno tuning service requires the car to be present for tuning. After a car has been put on the dyno it can be tuned (usually in real-time) and air/fuel ratios, timing, cam angles, and other adjustments can be made to optimize the engine. All of these tuning services are included with the standard tuning rate (which varies depending on the car and tuning system used).
Tuning Services:
The use of a Dynapack Dyno requires the wheels to be removed prior to tuning. Adapters are then bolted onto the wheel hubs using lug nuts to properly center them. The dyno is then connected to the adapters and locked into place.
eTuning Evans Tuning began eTuning in February 2010. The eTuning service was discontinued in late 2014.
Drag Racing:
2011: Evans Tuning returned to drag racing in 2011, after taking four racing seasons off, with a 1995 Acura Integra built for the True Street class. The car was built over two years by Jeffrey Evans before it was ready for competition. It made its debut in the 4th Annual $10,000 Outlaw FWD Shootout, with Brian Ballard driving, during Fall Nationals at Old Bridge Township Raceway Park in Englishtown, NJ.
Drag Racing:
Record and crash: Jeffrey Evans set a record in the class, running an 8.980 at 167.34 mph during the first round of qualifying, on only the third full-power pass in the car. He was the first in the class to run an 8-second quarter mile at a True Street event in full legal race trim.
Control of the car was lost after it reached the end of the track; the car swerved across lanes and crashed into the wall. Although Evans Tuning said on its official Facebook page that it would compete in 2012, it is unknown whether this car will be repaired for next season.
2012: Evans Tuning is building a new race car based on the Honda EK hatchback chassis for the 2012 season. The car debuted at World Cup Finals at Maryland International Raceway, piloted by Andrea Evans, but had issues preventing it from running.
2013: The Evans Tuning True Street Civic made its first appearance of the 2013 season at Maryland International Raceway, piloted by Andrea Evans.
**Hafada piercing**
Hafada piercing:
A hafada piercing is a surface piercing anywhere on the skin of the scrotum. Piercings on the scrotal raphe or "seam" of the scrotum are common. This piercing does not penetrate deep into the scrotum, and due to the looseness and flexibility of the skin in that area, does not migrate or reject as much as many other surface piercings. The main motives are beautification and individualization. A piercing that passes through the scrotum, from front-to-back, or from side-to-side, is known as a transscrotal piercing. Multiple hafada piercings are not uncommon, often as an extension of a frenum ladder or Jacob's Ladder, which is a series of piercings from the frenulum to the scrotum.
Historical origin:
The hafada piercing may have originated in Arabia and spread from there to the Middle East and North Africa. According to piercing lore, it was a ritual usually performed when a young man entered puberty, and it was most commonly applied on the left side. In Europe, the hafada piercing was adopted by some members of the French Foreign Legion who were active in the areas of Syria and Lebanon. While a hafada piercing originally referred to a scrotal piercing with a ring or barbell placed high and laterally (i.e., on the side of the scrotum), the term is now used interchangeably with scrotal piercing and can refer to piercings anywhere on the scrotum.
Jewelry:
Hafada piercings are usually pierced with a captive bead ring (also called a BCR or ball closure ring), a curved barbell, or a straight barbell. One source states that while rings were popular in the past, "barbells are more common nowadays." Since the skin of the scrotum is thin, titanium jewelry is advantageous due to its lower weight. Horizontal piercings (with one hole beside the other) are most common. Although vertical scrotum piercings are rare, they have been done successfully, using straight or curved barbells.
Healing:
Healing is relatively uncomplicated and lasts normally between six and eight weeks according to some sources, or up to 13 weeks according to other sources. A single scrotal piercing will tend to heal faster than multiple piercings. For this reason, many piercers will not place more than two or three ‘rungs’ of a ladder at a time, scheduling another set a month or two later.
Advantages:
While piercings on the penis can break a condom during intercourse, that is not a risk with piercings on the scrotum. This piercing does not interfere with sex. Due to the looseness of the skin, the rate of rejection is lower than for other surface piercings. While this piercing is primarily done for aesthetic reasons, piercings high on the scrotum (close to the penis shaft) may provide stimulation to a sexual partner during intercourse. Since the scrotum is sexually sensitive, Hafada piercings may enhance pleasure when the scrotum is rubbed or orally stimulated by a partner or during masturbation.
Advantages:
In comparison to facial piercings, a scrotal piercing is private, except in circumstances the pierced person chooses.
Disadvantages:
In some cases, Hafada piercings might induce discomfort while walking or running, or when riding a motorcycle or on horseback, especially during the healing process. Avoidance of tight clothing would minimize any sensitivity while walking. Piercings might present minor interference when shaving the scrotum. Piercings on the scrotal raphe or "seam" of the scrotum may not be particularly visible when the penis is flaccid. Piercings anywhere on the scrotum may become hidden should the wearer choose to not shave the scrotum.
Contraindications:
Scrotal piercings would not be advisable for anyone with tinea cruris (jock itch) or other dermatological conditions. Men who have had a recent vasectomy should wait for incisions to heal prior to obtaining a scrotal piercing.
Motivations:
Scrotal piercings are done primarily for aesthetic reasons and as an artistic expression of personal style. Unlike most other male genital piercings, scrotal piercings were not devised for and are not promoted for the enhancement of sexual pleasure, either for the wearer or for a sexual partner. Any such benefits are incidental. The presumed motivation for obtaining a scrotal piercing is simply as an adornment, either on its own, or in juxtaposition to other genital piercings. Beyond that, motivations may be simple or complex, and might not even be fully understood by the person obtaining the piercing. Some people say that even if they were the only person who ever saw their genital piercings, they would be happy with them. Some men may get a piercing to please their partner, perhaps to surprise or test their partner, or possibly in the hope of attracting, amusing or pleasing some future partner. Mention or display of a particular genital piercing such as a hafada piercing may serve as a personal marketing device (analogous to product differentiation) on online dating websites and apps and in sexting. Some might hope to use their piercing as a conversation starter. For someone with facial or other normally visible piercings, a scrotal piercing might be an answer to the potential question, "Do you have any other piercings?" For some, a scrotal piercing, which is said to be one of the least risky genital piercings, might be an "entry piercing" to test one's resolve or willingness to proceed with other genital piercings.
Motivations:
For many who have been sexually abused, teased, or psychologically hurt in other ways, genital piercings serve as a means to reclaim their sexuality or their ownership of their genitals.
Motivations:
Some people who obtain a genital piercing may seek a sense of uniqueness or intend to make a statement of non-conformity. Recent research suggests, however, that genital piercings are becoming mainstream, at least within some age groups, so they are unlikely to succeed in providing a sense of uniqueness, signs of individuality or of subcultural identity, or visual declarations of non-conformity. One piercer observed that, as of 2018, it was becoming more mainstream and acceptable for men to have one or multiple genital piercings, whereas in the late 1970s and early 1980s it was still very taboo. In social situations where one is naked, such as skinny dipping or naturism, genital piercings may serve as an implicit invitation to others to admire the wearer's genital area. People new to naturism usually feel they must avoid glancing at, and certainly avoid staring at, other naturists' genitals. However, jewelry is intended to attract attention, so genital piercings such as hafada piercings may be taken as an indication that fellow naturists are welcome to let their eyes wander and indeed linger without feeling they are visually trespassing or making the pierced individual uncomfortable. Elaborate piercings such as scrotal ladders might be taken as a clear message that the wearer is totally comfortable with being naked and with having others look at, and maybe even discuss, his piercings.
Motivations:
Conversely, individuals who do not practice naturism may simply wish to get a body piercing that is private or a secret shared only with their intimate partner. Unlike some penis piercings, a scrotal piercing is unlikely to be noticeable even when wearing a tight bathing suit or using a urinal in a public washroom. Teenagers may wish to get a piercing that their parents won't know about. (Note, however, that many jurisdictions prohibit genital piercings of minors or require parental consent.) Fathers might wish to get a piercing that (in conservative families) their children won't see (and take as an implicit endorsement of body piercing). For some males, a scrotal piercing may simply be the most discreet and least risky body piercing option. It is a body piercing that won't become an object of discussion or derision in the workplace or at the dinner table.
**Sxy 5′ UTR element**
Sxy 5′ UTR element:
The Sxy 5′ UTR element is an RNA element that controls expression of the sxy gene in H. influenzae. The sxy gene encodes a transcription factor (also known as TfoX) that regulates competence, the ability of bacteria to take up DNA from their environment. When the sxy gene is deleted, the bacterium loses the ability to express genes in the competence regulon. Cameron et al. showed that mutations in the 5′ end of the sxy gene lead to hypercompetence. They showed that this region forms an RNA secondary structure that occludes the Shine-Dalgarno sequence. Mutations that interfere with the stability of this secondary structure lead to increased translation of sxy, followed by upregulation of the competence regulon.
tfoR RNA:
In the fellow gammaproteobacterium Vibrio cholerae, a different RNA regulatory system is used. Here, a small RNA (sRNA) named tfoR positively regulates expression of the sxy (tfoX) protein. The RNA element responds to chitin, which is an important regulator of competence in V. cholerae. Deletion of tfoR removed all competence for exogenous DNA in V. cholerae in vivo.
**INH1**
INH1:
INH1, a thiazolyl benzamide compound, is a cell-permeable Hec1/Nek2 mitotic pathway inhibitor I.
Biological activity:
INH1 inhibits the biological activity of the Hec1/Nek2 mitotic pathway. It specifically disrupts the Hec1/Nek2 interaction by binding directly to Hec1, resulting in defective Hec1 kinetochore localization and reduced cellular Nek2 protein levels. INH1 induces a transient mitotic arrest, exhibiting metaphase chromosome misalignment and spindle abnormality, and consequently cancer cell apoptosis.
Experiments show that INH1 potently inhibits the proliferation of multiple human breast cancer cell lines, cervical HeLa cells, and colon cancer cells in vitro.
**Bernold Fiedler**
Bernold Fiedler:
Bernold Fiedler (born 15 May 1956) is a German mathematician, specializing in nonlinear dynamics.
Bernold Fiedler:
Fiedler received a Diploma from Heidelberg University in 1980 for his thesis Ein Räuber-Beute-System mit zwei time lags ("A predator-prey system with two time lags") and his doctorate with his thesis Stabilitätswechsel und globale Hopf-Verzweigung ("Change of stability and global Hopf bifurcation"), written under the direction of Willi Jäger. Fiedler is a professor at the Institute for Mathematics of the Free University of Berlin. His research includes, among other topics, global bifurcation, global attractors, and pattern formation in reaction-diffusion equations (an area of research pioneered by Alan Turing). In 2008, Fiedler gave the Gauss Lecture with a talk titled "Aus Nichts wird nichts? Mathematik der Selbstorganisation" ("Does nothing come from nothing? The mathematics of self-organization"). In 2002 he was, with Stefan Liebscher, an Invited Speaker at the ICM in Beijing, with a talk titled "Bifurcations without parameters: some ODE and PDE examples".
Selected publications:
Articles:
with S. B. Angenent: "The dynamics of rotating waves in scalar reaction diffusion equations", Trans. Amer. Math. Soc. 307 (1988), 545–568. doi:10.1090/S0002-9947-1988-0940217-X
with Peter Poláčik: "Complicated dynamics of scalar reaction diffusion equations with a nonlocal term", Proceedings of the Royal Society of Edinburgh Section A: Mathematics 115, no. 1–2 (1990), 167–192. doi:10.1017/S0308210500024641
with Shui-Nee Chow and Bo Deng: "Homoclinic bifurcation at resonant eigenvalues", Journal of Dynamics and Differential Equations 2, no. 2 (1990), 177–244. doi:10.1007/BF01057418
with Carlos Rocha: "Orbit equivalence of global attractors of semilinear parabolic differential equations", Trans. Amer. Math. Soc. 352 (2000), 257–284. doi:10.1090/S0002-9947-99-02209-6
"Spatio-Temporal Dynamics of Reaction-Diffusion Patterns", in M. Kirkilionis, S. Krömker, R. Rannacher, F. Tomi (eds.), Trends in Nonlinear Analysis, Festschrift dedicated to Willi Jäger for his 60th birthday, Springer-Verlag, 2003, pp. 23–152. doi:10.1007/978-3-662-05281-5_2
"Romeo und Julia, spontane Musterbildung und Turings Instabilität" ("Romeo and Juliet, spontaneous pattern formation, and Turing's instability"), in Martin Aigner, Ehrhard Behrends (eds.), Alles Mathematik. Von Pythagoras zum CD Player, Vieweg, 3rd edition 2009. doi:10.1007/978-3-658-09990-9_7
Books:
Fiedler, Bernold; Scheurle, Jürgen (1996). Discretization of Homoclinic Orbits, Rapid Forcing, and "Invisible Chaos". Providence, RI: American Mathematical Society. ISBN 978-1-4704-0149-8. OCLC 851088509.
Selected publications:
Fiedler, Bernold (2001). Ergodic Theory, Analysis, and Efficient Simulation of Dynamical Systems. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-642-56589-2. OCLC 840292245.
Hasselblatt, Boris; Katok, A. B. (2002). Handbook of dynamical systems. Amsterdam: N.H. North Holland. ISBN 978-0-08-093226-2. OCLC 162578012.
Fiedler, Bernold (1988). Global Bifurcation of Periodic Solutions with Symmetry. Berlin: Springer-Verlag. ISBN 978-3-540-39150-0. OCLC 294802397.
**Soapbox Science**
Soapbox Science:
Soapbox Science is a public outreach platform that promotes women working in science and the research that they do. The events turn public spaces into arenas for learning and debate, in the spirit of Hyde Park's Speakers' Corner. Soapbox Science encourages scientists to explain their research to members of the public using non-traditional methods (for example, there is no use of a projector or slides). Speakers typically make props at home to explain the processes behind their research.
Soapbox Science:
Soapbox Science launched in London in 2011, where it was led by Seirian Sumner and Nathalie Pettorelli. It aims to showcase eminent female scientists across the world.
History:
Soapbox Science launched in London in 2011, led by Seirian Sumner and Nathalie Pettorelli and funded by the L'Oréal-UNESCO For Women in Science scheme, the Zoological Society of London, and the Science & Technology Facilities Council. Soapbox Science formed a partnership with Speakezee in 2016.
The first three annual events (2011–2013) ran in London; in 2014 events ran in London, Bristol, Dublin, and Swansea. In 2015 more cities joined, including Exeter, Manchester, Newcastle, Belfast, and Glasgow.
In 2016, Cambridge, Cardiff, Edinburgh, Milton Keynes, Oxford, Galway, Reading, and Brisbane ran events. By 2021, there were 45 events in 15 countries worldwide.
Impact:
Soapbox Science was established to complement other initiatives such as Athena SWAN that tackle the low numbers of women in Science, Technology, Engineering and Mathematics (STEM) in the UK.
Awards and honours:
Seirian Sumner and Nathalie Pettorelli were awarded a Point of Light Award in 2015 from the UK Prime Minister, a Silver Medal from the Zoological Society of London in 2016, presented by Sir John Beddington, and an Equality & Diversity Champion Award from the British Ecological Society in 2017, in recognition of their work on the Soapbox Science initiative.
Notable alumni: Prof Athene Donald, Dr Sue Black OBE, Prof Julie Williams, Prof Hilary Lappin-Scott, Prof Karen Holford, Prof Sunetra Gupta, Prof Georgina Mace, Prof Lesley Yellowlees, Dr Maggie Aderin-Pocock, Dr Victoria Foster, Dr Goedele De Clerck, Prof Siwan Davies, Prof Farah Bhatti
**NDUFA5**
NDUFA5:
NADH dehydrogenase [ubiquinone] 1 alpha subcomplex subunit 5 is an enzyme that in humans is encoded by the NDUFA5 gene. The NDUFA5 protein is a subunit of NADH dehydrogenase (ubiquinone), which is located in the mitochondrial inner membrane and is the largest of the five complexes of the electron transport chain.
Structure:
The NDUFA5 gene is located on the q arm of chromosome 7 and it spans 64,655 base pairs. The gene produces a 13.5 kDa protein composed of 116 amino acids. NDUFA5 is a subunit of the enzyme NADH dehydrogenase (ubiquinone), the largest of the respiratory complexes. The structure is L-shaped with a long, hydrophobic transmembrane domain and a hydrophilic domain for the peripheral arm that includes all the known redox centers and the NADH binding site. It has been noted that the N-terminal hydrophobic domain has the potential to be folded into an alpha helix spanning the inner mitochondrial membrane with a C-terminal hydrophilic domain interacting with globular subunits of Complex I. The highly conserved two-domain structure suggests that this feature is critical for the protein function and that the hydrophobic domain acts as an anchor for the NADH dehydrogenase (ubiquinone) complex at the inner mitochondrial membrane. NDUFA5 is one of about 31 hydrophobic subunits that form the transmembrane region of Complex I. The protein localizes to the inner mitochondrial membrane as part of the 7 component-containing, water-soluble iron-sulfur protein (IP) fraction of complex I, although its specific role is unknown. It is assumed to undergo post-translational removal of the initiator methionine and N-acetylation of the next amino acid. The predicted secondary structure is primarily alpha helix, but the carboxy-terminal half of the protein has high potential to adopt a coiled-coil form. The amino-terminal part contains a putative beta sheet rich in hydrophobic amino acids that may serve as mitochondrial import signal. Related pseudogenes have also been identified on four other chromosomes.
Function:
The human NDUFA5 gene codes for the B13 subunit of complex I of the respiratory chain, which transfers electrons from NADH to ubiquinone. The NDUFA5 protein localizes to the mitochondrial inner membrane and it is thought to aid in this transfer of electrons. Initially, NADH binds to Complex I and transfers two electrons to the isoalloxazine ring of the flavin mononucleotide (FMN) prosthetic arm to form FMNH2. The electrons are transferred through a series of iron-sulfur (Fe-S) clusters in the prosthetic arm and finally to coenzyme Q10 (CoQ), which is reduced to ubiquinol (CoQH2). The flow of electrons changes the redox state of the protein, resulting in a conformational change and pK shift of the ionizable side chain, which pumps four hydrogen ions out of the mitochondrial matrix. The high degree of conservation of NDUFA5 extending to plants and fungi indicates its functional significance in the enzyme complex.
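In summary form, the electron and proton bookkeeping described above is conventionally written as the overall Complex I reaction (a standard textbook formulation, not specific to NDUFA5 itself):

```latex
\mathrm{NADH} + \mathrm{H}^{+} + \mathrm{CoQ} + 4\,\mathrm{H}^{+}_{\text{matrix}}
  \longrightarrow
\mathrm{NAD}^{+} + \mathrm{CoQH}_{2} + 4\,\mathrm{H}^{+}_{\text{intermembrane}}
```

The four translocated protons correspond to the conformational pumping described above, while the extra proton on the left is consumed, together with the hydride from NADH, in reducing CoQ to CoQH2.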
Clinical significance:
NDUFA5 and ATP5A1 both show consistently reduced expression in the brains of autism patients. Mitochondrial dysfunction and impaired ATP synthesis can result in oxidative stress, which may play a role in the development of autism.
Interactions:
NDUFA5 has many protein-protein interactions, including with ubiquitin C and with members of the NADH dehydrogenase [ubiquinone] 1 beta subcomplex such as NDUFB1, NDUFB9, and NDUFB10.
**PassMap**
PassMap:
PassMap is a map-based graphical password method of authentication, similar to passwords, proposed by National Tsing Hua University researchers. The name PassMap derives from the word password by replacing word with map.
History and usage:
PassMap was proposed by National Tsing Hua University researchers Hung-Min Sun, Yao-Hsin Chen, Chiung-Cheng Fang, and Shih-Ying Chang at the 7th Association for Computing Machinery Symposium on Information, Computer and Communications Security. They defined PassMap as letting a user authenticate by choosing a series of points on a big world map. Their study showed that PassMap passwords are more user-friendly and memorable. Users are shown Google Maps on their screen, through which they can zoom in to choose any two points they want to become their PassMap password. Since PassMap uses Google Maps, it cannot be used in applications that lack Internet access or Google Maps integration.
History and usage:
By default, PassMap's screen is set to the eighth zoom level and is centered on Taiwan. PassMap places no constraints on the zoom level, so users are allowed to select points at less safe, lower zoom levels such as level 8. It does not normalize error tolerance based on the screen's zoom position. PassMap's effective login rate is 92.59%.
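The paper's exact encoding is not reproduced here; the following is a minimal sketch of how a map-click password with a fixed pixel-space tolerance might be verified. All names and parameters (TOLERANCE_PX, quantize, passmap_digest, and the coordinates) are hypothetical illustrations, not details from the PassMap design.

```python
import hashlib

TOLERANCE_PX = 20  # hypothetical click tolerance, in screen pixels

def quantize(x_px, y_px, zoom):
    """Snap a clicked map point to a tolerance grid in pixel space.

    Because the grid lives in pixels rather than ground distance, one
    grid cell covers far more territory at low zoom levels -- the
    un-normalized error tolerance issue noted above.
    """
    return (x_px // TOLERANCE_PX, y_px // TOLERANCE_PX, zoom)

def passmap_digest(points):
    """Hash a sequence of quantized map clicks into a password digest."""
    cells = (quantize(*p) for p in points)
    data = "|".join(f"{cx},{cy},{z}" for cx, cy, z in cells)
    return hashlib.sha256(data.encode()).hexdigest()

# Enrollment: the user picks two points (x_px, y_px, zoom) on the map.
enrolled = passmap_digest([(5123, 2890, 8), (5410, 3055, 8)])

# Login: nearby clicks that land in the same grid cells reproduce it.
attempt = passmap_digest([(5130, 2885, 8), (5405, 3050, 8)])
print(attempt == enrolled)  # True
```

A real implementation would also have to handle clicks that fall near cell boundaries, which this naive grid scheme rejects even when they lie within the tolerance distance.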
Commentary:
Ritika Sachdev wrote in the International Journal of Pure and Applied Research in Engineering and Technology that, based on psychological studies, people can effortlessly recall the milestones they have visited. Sachdev called PassMap a "highly subjective or customized based password to ensure security". S. Rajarajan, M. Prabhu, and S. Palanivel praised PassMap for having "good memorability due to the usage of map for the password mechanism", but they noted that, like many graphical passwords, PassMap is susceptible to a shoulder surfing intrusion.
**Quick reaction force**
Quick reaction force:
A rapid reaction force / rapid response force (RRF), quick reaction force / quick response force (QRF), rapid deployment force (RDF) or quick maneuver force (QMF) is a military or police unit capable of responding to emergencies in a very short time frame. When used in reference to law enforcement and security forces, such as police tactical units, the time frame is usually minutes, while in military applications, such as paratroopers or commandos, the time frame can be minutes, hours or days. Rapid reaction forces are designed to intervene quickly as a spearhead to gain and hold ground in quickly unfolding combat or low-intensity conflicts, such as uprisings that necessitate the evacuation of foreign embassies. They are usually transported by air. Rapid reaction forces are usually lightly armed—limited to small arms and light crew-served weapons, and lacking vehicles, armor, and heavy equipment—but are often very well-trained to compensate.
Types:
Rapid reaction force: A rapid reaction force is an armed military unit capable of rapidly responding to developing situations, usually to assist allied units in need. They are equipped to respond to any type of emergency within a short time frame, often only a few minutes, based on unit standard operating procedures (SOPs). Cavalry units are frequently postured as rapid reaction forces, with a main mission of security and reconnaissance. They are generally platoon-sized in the U.S. military's combat arms.
Types:
A rapid reaction force is a military reserve unit that belongs directly to the commander of the unit it is created from. Depending on the unit size and protocols, the commander may be the only person authorized to control a RRF, or they may delegate this responsibility to one or more additional people. RRFs are commonly found in maneuver battalion-level task forces and above, in addition to many operating bases having their own dedicated RRF to react to threats on or immediately around the base.
Types:
The readiness level of a RRF is based on unit SOPs. Since maintaining extremely high levels of readiness is draining on equipment, resources, and personnel, a RRF is postured based on the likelihood of being called up. During a high-intensity conflict, a RRF may be forced to maintain high readiness, with all members waiting in their vehicles to respond. However, during a low-intensity conflict, when deployment is less likely and may be more readily predicted, command establishes how fast a RRF must be able to react, which can range from vehicles and personnel in a central location with the troops rotating regularly, to the vehicles staged close to a unit area with all personnel staying close enough for rapid recall. The speed at which a RRF is expected to react is defined by its readiness condition level.
Types:
The mission of a RRF can vary widely, as they are used to respond to any threat the commander chooses to employ them for. Depending on the mission requirement, additional units can be attached to an organic platoon to expand its capabilities. Examples include attaching explosive ordnance disposal teams to a RRF responding to bombs or similar threats, and vehicle recovery assets to a RRF expected to recover damaged trucks.
Rapid deployment force: A rapid deployment force (RDF) is a military formation that is capable of fast deployment outside its country's borders. Rapid deployment forces typically consist of well-trained military units (special forces, paratroopers, marines, etc.) that can be deployed fairly quickly.
List:
Rapid reaction force: The concept of a United Nations rapid reaction force was proposed in the mid-1990s by several commentators and officials, including Secretary-General Boutros Boutros-Ghali. The UN rapid reaction force would consist of personnel stationed in their home countries, but they would have the same training, equipment, and procedures, and would conduct joint exercises. The force would remain at high readiness at all times so as to quickly deploy where necessary.
List:
The Allied Rapid Reaction Corps (ARRC) is a NATO rapid reaction force, established in 1992. A successor to the British Army's I Corps, the ARRC is capable of rapidly deploying a NATO headquarters for operations and crisis response.
The European Gendarmerie Force (EUROGENDFOR) is a European rapid reaction force under the European Union, established in 2006. An alliance of gendarmerie forces from Italy, France, the Netherlands, Poland, Portugal, Romania, and Spain, it serves as a unified intervention force of European militarized police.
List:
The European Rapid Operational Force (EUROFOR) was a European rapid reaction force under the European Union and Western European Union, established in 1995 and composed of military units from Italy, France, Portugal, and Spain. EUROFOR was tasked with performing duties outlined in the Petersberg Tasks. EUROFOR deployed to Kosovo from 2000 to 2001, and North Macedonia as part of EUFOR Concordia in 2003. After being converted into an EU Battlegroup, EUROFOR was dissolved in 2012.
List:
The European Rapid Reaction Force (ERRF) was the intended result of the Helsinki Headline Goal. Though many media reports suggested the ERRF would be a European Union army, the Helsinki Headline Goal was little more than headquarters arrangements and a list of theoretically available national forces for a rapid reaction force.
The NATO Response Force (NRF) is a NATO rapid reaction force, established in 2003. Distinct from the ARRC, the NRF comprises land, sea, air, and special forces units that can be deployed quickly.
Riot Police Units (RPU) are the rapid reaction forces of Japanese prefectural police. They combine riot police, police tactical units, and disaster response squads under one unit. Each prefectural police force operates RPUs, sometimes under different names.
Other examples include the 2nd Quick Response Division and the ROK Marine Corps Quick Maneuver Force. The Immediate Response Force (IRF) is an American rapid reaction force composed of units from the United States Army and United States Air Force. It is capable of responding to any location in the world within 18 hours of notice.
List:
The Joint Rapid Reaction Force (JRRF) was a British Armed Forces capability concept created in 1999. The force was composed of units from all three branches of the British military, and was able to rapidly deploy anywhere in the world at short notice. However, the War in Afghanistan and 2003 invasion of Iraq siphoned British personnel and equipment, leaving the JRRF with insufficient forces. The JRRF was succeeded by the Combined Joint Expeditionary Force in 2010 and the UK Joint Expeditionary Force in 2014.
List:
Rapid deployment force: Argentine Rapid Deployment Force, 3rd Brigade Rapid Deployment Force, Egyptian Rapid Deployment Forces, Finnish Rapid Deployment Force / Rapid Forces Division, Indonesian Quick Reaction Forces Command, Indonesian Army Strategic Command, Indonesian Marine Corps, NEDSA Corps / NATO Rapid Deployable Corps – Italy, Central Readiness Regiment, ROKMC Quick Maneuver Force, 10th Parachute Brigade, Netherlands Marine Corps, Norwegian Telemark Battalion, 710th Special Operations Wing, Rapid Reaction Brigade, Guards Air Mobile Brigade, 31st Infantry Regiment, King's Guard.
The Rapid Deployment Joint Task Force (RDJTF) was a former United States Department of Defense joint task force. It was formed in 1979 as the Rapid Deployment Force (RDF), envisioned as a mobile force that could quickly deploy U.S. forces to any location outside the usual American deployment areas of Western Europe and East Asia, soon coming to focus on the Middle East. It was inactivated in 1983 and reorganized as the United States Central Command.
List:
Marine Expeditionary Unit, XVIII Airborne Corps, 75th Ranger Regiment / Russian Airborne Forces, EU Battlegroup
**Giant magnetoimpedance**
Giant magnetoimpedance:
In materials science, giant magnetoimpedance (GMI) is an effect occurring in some materials in which an external magnetic field causes a large variation in the electrical impedance of the material. It should not be confused with the separate physical phenomenon of giant magnetoresistance.
The phenomenology of the GMI:
GMI originates in the penetration depth (the skin-depth effect), a measure of how deeply an AC electrical current flows inside a conductor. The penetration depth increases with the square root of the electrical resistivity of the material and is inversely proportional to the square root of the product of the magnetic permeability and the frequency of the AC current. Thus, in materials with very high magnetic permeability, such as soft ferromagnetic materials, the penetration depth can be much less than the thickness of the conductor even at moderate frequencies, confining the current near the surface of the material. When an external magnetic field is applied, the permeability diminishes, increasing the penetration of the current into the magnetic material. Large variations are observed in both the in-phase and out-of-phase components of the magnetoimpedance for applied magnetic fields ranging from values close to the Earth's magnetic field up to a few tens of oersteds. For comparison, in normal electrical conductors the skin-depth effect becomes important only for frequencies in the microwave range. Although the dependence of the GMI on the geometry of the electrical conductor (ribbons, wires, multilayers, meander-like structures) and on external parameters is somewhat complex, theoretical models allow calculation of the GMI within some approximations. Besides the frequency of the current, other sources contribute to the frequency dependence of the GMI, such as domain-wall motion and ferromagnetic resonance.
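Written out, the dependence just described is the classical skin depth; with ρ the resistivity, μ the (field-dependent) permeability, and f the frequency of the AC current:

```latex
\delta = \sqrt{\frac{\rho}{\pi f \mu(H)}}
```

Because μ(H) drops sharply in an applied field H, δ grows and the high-frequency impedance Z falls. A common figure of merit (one widespread convention, not the only one) is the GMI ratio

```latex
\frac{\Delta Z}{Z}\,(\%) = 100 \times \frac{Z(H) - Z(H_{\max})}{Z(H_{\max})}
```

where H_max is the largest applied field used in the measurement.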
Experimental measurement:
A typical experimental set-up for investigating the GMI in research laboratories is shown below. It includes an alternating current source, a phase sensitive amplifier for detecting the ac voltage across the sample and an electromagnet for applying a dc magnetic field. A cryostat or an oven may be required for measuring the temperature dependence of the GMI.
Several experimental measurements were also performed to characterize the long-term stability and the thermal drift of the GMI, which were supported by a theoretical model describing the physical modeling of the sensing element.
History:
The influence of the frequency and amplitude of applied magnetic fields on the impedance of soft-magnetic materials was first observed in the 1930s. These initial studies were limited to frequencies of a few hundred Hz, and the changes of impedance reported in those works were not large. Starting in the 1990s, the phenomenon was investigated again, this time using currents with frequencies of hundreds of kHz. Because of the huge variations observed in the magnetic field dependence of the magnetoimpedance, it was named giant magnetoimpedance. Due to the high sensitivity of sensors using the GMI effect, they have been used in compasses, accelerometers, virus detection, and biomagnetism, among other applications.
**Universal quantification**
Universal quantification:
In mathematical logic, a universal quantification is a type of quantifier, a logical constant which is interpreted as "given any", "for all", or "for any". It expresses that a predicate can be satisfied by every member of a domain of discourse. In other words, it is the predication of a property or relation to every member of the domain. It asserts that a predicate within the scope of a universal quantifier is true of every value of a predicate variable.
Universal quantification:
It is usually denoted by the turned A (∀) logical operator symbol, which, when used together with a predicate variable, is called a universal quantifier ("∀x", "∀(x)", or sometimes by "(x)" alone). Universal quantification is distinct from existential quantification ("there exists"), which only asserts that the property or relation holds for at least one member of the domain.
Quantification in general is covered in the article on quantification (logic). The universal quantifier is encoded as U+2200 ∀ FOR ALL in Unicode, and as \forall in LaTeX and related formula editors.
Basics:
Suppose it is given that 2·0 = 0 + 0, and 2·1 = 1 + 1, and 2·2 = 2 + 2, etc.
This would seem to be a logical conjunction because of the repeated use of "and". However, the "etc." cannot be interpreted as a conjunction in formal logic. Instead, the statement must be rephrased: For all natural numbers n, one has 2·n = n + n.
This is a single statement using universal quantification.
This statement can be said to be more precise than the original one. While the "etc." informally includes natural numbers, and nothing more, this was not rigorously given. In the universal quantification, on the other hand, the natural numbers are mentioned explicitly.
Basics:
This particular example is true, because any natural number could be substituted for n and the statement "2·n = n + n" would be true. In contrast, For all natural numbers n, one has 2·n > 2 + n is false, because if n is substituted with, for instance, 1, the statement "2·1 > 2 + 1" is false. It is immaterial that "2·n > 2 + n" is true for most natural numbers n: even the existence of a single counterexample is enough to prove the universal quantification false.
Basics:
On the other hand, for all composite numbers n, one has 2·n > 2 + n is true, because none of the counterexamples are composite numbers. This indicates the importance of the domain of discourse, which specifies which values n can take. In particular, note that if the domain of discourse is restricted to consist only of those objects that satisfy a certain predicate, then for universal quantification this requires a logical conditional. For example, For all composite numbers n, one has 2·n > 2 + n is logically equivalent to For all natural numbers n, if n is composite, then 2·n > 2 + n.
Basics:
Here the "if ... then" construction indicates the logical conditional.
Basics:
Notation: In symbolic logic, the universal quantifier symbol ∀ (a turned "A" in a sans-serif font, Unicode U+2200) is used to indicate universal quantification. It was first used in this way by Gerhard Gentzen in 1935, by analogy with Giuseppe Peano's ∃ (turned E) notation for existential quantification and the later use of Peano's notation by Bertrand Russell. For example, if P(n) is the predicate "2·n > 2 + n" and N is the set of natural numbers, then ∀n∈N P(n) is the (false) statement "for all natural numbers n, one has 2·n > 2 + n". Similarly, if Q(n) is the predicate "n is composite", then ∀n∈N (Q(n) → P(n)) is the (true) statement "for all natural numbers n, if n is composite, then 2·n > 2 + n". Several variations in the notation for quantification (which apply to all forms) can be found in the Quantifier article.
Properties:
Negation: The negation of a universally quantified function is obtained by changing the universal quantifier into an existential quantifier and negating the quantified formula. That is, ¬∀x P(x) is equivalent to ∃x ¬P(x), where ¬ denotes negation.
Properties:
For example, if P(x) is the propositional function "x is married", then, for the set X of all living human beings, the universal quantification

Given any living person x, that person is married

is written ∀x∈X P(x). This statement is false. Truthfully, it is stated that

It is not the case that, given any living person x, that person is married

or, symbolically: ¬∀x∈X P(x). If the function P(x) is not true for every element of X, then there must be at least one element for which the statement is false. That is, the negation of ∀x∈X P(x) is logically equivalent to "There exists a living person x who is not married", or: ∃x∈X ¬P(x). It is erroneous to confuse "all persons are not married" (i.e. "there exists no person who is married") with "not all persons are married" (i.e. "there exists a person who is not married"):

¬∃x∈X P(x) ≡ ∀x∈X ¬P(x) ≢ ¬∀x∈X P(x) ≡ ∃x∈X ¬P(x)

Other connectives: The universal (and existential) quantifier moves unchanged across the logical connectives ∧, ∨, →, and ↚, as long as the other operand is not affected; that is:

P(x) ∧ (∀y∈Y Q(y)) ≡ ∀y∈Y (P(x) ∧ Q(y)), provided that Y ≠ ∅
P(x) ∨ (∀y∈Y Q(y)) ≡ ∀y∈Y (P(x) ∨ Q(y)), provided that Y ≠ ∅
P(x) → (∀y∈Y Q(y)) ≡ ∀y∈Y (P(x) → Q(y)), provided that Y ≠ ∅
P(x) ↚ (∀y∈Y Q(y)) ≡ ∀y∈Y (P(x) ↚ Q(y)), provided that Y ≠ ∅

Conversely, for the logical connectives ↑, ↓, ↛, and ←, the quantifiers flip:

P(x) ↑ (∀y∈Y Q(y)) ≡ ∃y∈Y (P(x) ↑ Q(y)), provided that Y ≠ ∅
P(x) ↓ (∀y∈Y Q(y)) ≡ ∃y∈Y (P(x) ↓ Q(y)), provided that Y ≠ ∅
P(x) ↛ (∀y∈Y Q(y)) ≡ ∃y∈Y (P(x) ↛ Q(y)), provided that Y ≠ ∅
P(x) ← (∀y∈Y Q(y)) ≡ ∃y∈Y (P(x) ← Q(y)), provided that Y ≠ ∅

Rules of inference: A rule of inference is a rule justifying a logical step from hypothesis to conclusion. There are several rules of inference which utilize the universal quantifier.
Properties:
Universal instantiation concludes that, if the propositional function is known to be universally true, then it must be true for any arbitrary element of the universe of discourse. Symbolically, this is represented as ∀x∈XP(x)→P(c) where c is a completely arbitrary element of the universe of discourse.
Universal generalization concludes the propositional function must be universally true if it is true for any arbitrary element of the universe of discourse. Symbolically, for an arbitrary c, P(c)→∀x∈XP(x).
The element c must be completely arbitrary; else, the logic does not follow: if c is not arbitrary, and is instead a specific element of the universe of discourse, then P(c) only implies an existential quantification of the propositional function.
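Both rules can be stated and checked in a proof assistant; here is a minimal sketch in Lean 4 (the names X, P, Q, and c are arbitrary placeholders):

```lean
-- Universal instantiation: from ∀ x, P x conclude P c for a particular c.
example (X : Type) (P : X → Prop) (h : ∀ x : X, P x) (c : X) : P c :=
  h c

-- Universal generalization: to prove ∀ x, Q x, fix an arbitrary x and
-- derive Q x for it; here Q x follows from P x via h.
example (X : Type) (P Q : X → Prop)
    (h : ∀ x : X, P x → Q x) (hp : ∀ x : X, P x) : ∀ x : X, Q x :=
  fun x => h x (hp x)
```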
The empty set: By convention, the formula ∀x∈∅ P(x) is always true, regardless of the formula P(x); see vacuous truth.
Universal closure:
The universal closure of a formula φ is the formula with no free variables obtained by adding a universal quantifier for every free variable in φ. For example, the universal closure of P(y)∧∃xQ(x,z) is ∀y∀z(P(y)∧∃xQ(x,z))
As adjoint:
In category theory and the theory of elementary topoi, the universal quantifier can be understood as the right adjoint of a functor between power sets, the inverse image functor of a function between sets; likewise, the existential quantifier is the left adjoint. For a set X, let PX denote its powerset. For any function f : X → Y between sets X and Y, there is an inverse image functor f∗ : PY → PX between powersets, which takes subsets of the codomain of f back to subsets of its domain. The left adjoint of this functor is the existential quantifier ∃f and the right adjoint is the universal quantifier ∀f. That is, ∃f : PX → PY is a functor that, for each subset S ⊂ X, gives the subset ∃fS ⊂ Y given by ∃fS = {y ∈ Y | ∃x ∈ X. f(x) = y ∧ x ∈ S}, those y in the image of S under f. Similarly, the universal quantifier ∀f : PX → PY is a functor that, for each subset S ⊂ X, gives the subset ∀fS ⊂ Y given by ∀fS = {y ∈ Y | ∀x ∈ X. f(x) = y ⟹ x ∈ S}, those y whose preimage under f is contained in S. The more familiar form of the quantifiers as used in first-order logic is obtained by taking the function f to be the unique function ! : X → 1, so that P(1) = {T, F} is the two-element set holding the values true and false, a subset S is that subset for which the predicate S(x) holds, and P(!) : P(1) → P(X) is given by T ↦ X and F ↦ ∅. Then ∃!S = ∃x. S(x), which is true if S is not empty, and ∀!S = ∀x. S(x), which is false if S is not X.
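For finite sets, both adjoints can be computed directly from the set-builder definitions above. The sketch below is ours (the names exists_f and forall_f are illustrative, not standard library API):

```python
def exists_f(f, S, Y):
    """Left adjoint ∃_f : P(X) → P(Y): the image of S under f."""
    return {y for y in Y if any(f(x) == y for x in S)}

def forall_f(f, S, X, Y):
    """Right adjoint ∀_f : P(X) → P(Y): keep y iff f⁻¹({y}) ⊆ S."""
    return {y for y in Y if all(x in S for x in X if f(x) == y)}

# Example: X = {0,...,5}, Y = {0,1,2}, f(x) = x mod 3, S = {0,1,3}.
X, Y = set(range(6)), {0, 1, 2}
f = lambda x: x % 3
S = {0, 1, 3}

print(exists_f(f, S, Y))      # {0, 1}: some preimage element lies in S
print(forall_f(f, S, X, Y))   # {0}: only f⁻¹({0}) = {0, 3} is wholly in S
```

As the output shows, the two quantifiers pick out different subsets of Y from the same S, which is exactly the left/right adjoint asymmetry described above.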
As adjoint:
The universal and existential quantifiers given above generalize to the presheaf category.
**Snow shovel**
Snow shovel:
A snow shovel is a specialized shovel designed for snow removal. Snow shovels come in several different designs, each of which is designed to move snow in a different way. Removing snow with a snow shovel has health and injury risks, but can also have significant health benefits when the snow shovel is used correctly.
History:
The earliest known snow shovel was found in a bog in Russia. Estimated to be 6,000 years old, its blade was made from a carved elk antler section. According to archaeologists, the antler piece was tied to a wood or bone handle.
Features:
All snow shovels consist of a handle and a scoop. Sometimes there may be a shaft connecting handle and scoop, while in other snow shovels the handle is extended and attaches directly to the scoop. Most snow shovels are designed for either pushing snow or lifting snow, although some are crossovers which can do either job. Some snow shovel scoops have sharpened blades which can chip away and lever up slabs of ice.

Handles may be straight or bent. Straight handles make the pushing angle easier to adjust and snow throwing easier compared to a bent handle. Long handles enable the user to leverage their weight for pushing snow, but shorter handles make tossing snow easier. Plastic and fiberglass handles are lightweight, while wood handles are heavy. Metal handles conduct heat away from the hands more readily than other kinds of handles, so they feel colder. Some handles include a D-shaped grip or padded grip at the end of the handle. There may also be extra grips in the middle of the handle to assist with the snow shovel's lever action when lifting snow.

Snow shovels designed for lifting snow generally have smaller scoops than snow shovels designed for throwing snow. A typical push-type shovel scoop would be about 24 inches across with a wide, blunt blade, while a lift-type shovel scoop may be half that size. A narrower scoop makes the removal of deep, wet, or heavy snow easier. Scoops with a large curve can carry more snow, while those with a shallow curve are intended to push snow rather than carry it. Metal scoops are sturdier than plastic but heavier, and they also require more maintenance. Steel and steel-edged scoops are heavier than aluminum or plastic, but are also more durable. Although they are very good for dealing with ice, they can also damage delicate outdoor home surfaces.

Snow shovel designs which let one push aside snow without lifting it are sometimes called snow sled shovels, or snow scoops and sleigh shovels. They are large and deep hopper-like implements fitted with a wide handle and designed to scoop up a load of snow and slide it to another location without lifting. These tools may be effective for dealing with lighter accumulations of snow, but cannot handle thick or heavy snow or ice.
Features:
Many homeowners who deal with large amounts of snow have multiple snow shovels for different types of snow. If lifting is a concern, then they may choose separate shovels for lifting versus pushing. Otherwise, users may wish to have a shovel for fresh light snow and another one to manage icy hard snow.
Safe usage:
Shoveling snow is hard work. In a single winter, shoveling a typical driveway can involve moving more than 25 tons of snow. Health risks associated with shoveling snow include heart attacks (myocardial infarction), worsening of existing breathing issues, sprains and strains, slips and falls, back injuries, hypothermia and frostbite, and accidents involving road traffic.

Persons doing snow shoveling can reduce their risk of injury by shoveling snow when it is fresh and light. Slip-resistant boots protect against user falls. Appropriate clothing prevents hypothermia and frostbite. Ideal snow shoveling clothing for the rest of the body is lightweight, layered, and water-repellent to increase ventilation while maintaining insulation.

Proper snow throwing technique minimizes strains and back injuries. Recommended technique is that when lifting snow, the user bends their knees to collect the snow while maintaining a straight upright back, then straightens the legs to stand and lift. It is best to lift snow by using the shovel as a lever. Never lift snow with a side-twisting motion, as that can lead to injury.

Shoveling snow is a known trigger for myocardial infarction among people at risk for heart problems who do not regularly engage in strenuous physical activity. People who suffer from pre-existing heart or breathing problems should consult their doctor before shoveling snow.

When done correctly, snow shoveling can provide good exercise. One hour of shoveling snow can burn 600 calories. Shoveling snow also builds bone and muscle mass and is a good form of aerobic exercise.
In popular culture:
In Advance of the Broken Arm, a 1915 readymade sculpture by Marcel Duchamp, consisted of a regular snow shovel with "from Marcel Duchamp 1915" painted on the handle. The original artwork, which used to hang in Duchamp's studio, is now lost, but an authorized replica is in the collection of the Yale University Art Gallery.
**Alpine skiing**
Alpine skiing:
Alpine skiing, or downhill skiing, is the pastime of sliding down snow-covered slopes on skis with fixed-heel bindings, unlike other types of skiing (cross-country, Telemark, or ski jumping), which use skis with free-heel bindings. Whether for recreation or for sport, it is typically practiced at ski resorts, which provide such services as ski lifts, artificial snow making, snow grooming, restaurants, and ski patrol.
Alpine skiing:
"Off-piste" skiers—those skiing outside ski area boundaries—may employ snowmobiles, helicopters or snowcats to deliver them to the top of a slope. Back-country skiers may use specialized equipment with a free-heel mode, including 'sticky' skins on the bottoms of the skis to stop them sliding backwards during an ascent, then locking the heel and removing the skins for their descent.
Alpine ski racing has been held at the Winter Olympics since 1936. A competition corresponding to modern slalom was introduced in Norway at Oslo in 1886.
Participants and venues:
As of 2023, there were estimated to be 55 million people worldwide who engaged in alpine skiing. The estimated number of skiers who practiced alpine skiing, cross-country skiing, and related snow sports amounted to 30 million in Europe, 20 million in North America, and 14 million in Japan. As of 1996, there were reportedly 4,500 ski areas operating 26,000 ski lifts. The predominant region for downhill skiing was Europe, followed by Japan and the US.
History:
The ancient origins of skiing can be traced back to prehistoric times in Russia, Finland, Sweden, and Norway, where wooden planks of varying sizes and shapes were found preserved in peat bogs. The word ski is related to the Old Norse word skíð, which means "split piece of wood or firewood." Skis were first invented to cross wetlands and marshes in the winter when they froze over. Skiing was an integral part of transportation in colder countries for thousands of years. In the 1760s, skiing was recorded as being used in military training: the Norwegian army held skill competitions involving skiing down slopes, around trees, and over obstacles while shooting. The birth of modern alpine skiing is often dated to the 1850s, and during the late 19th century skiing was adapted from a method of transportation into a competitive and recreational sport. The Norwegian legend Sondre Norheim began the trend of skis with curved sides and of bindings with stiff heel bands made of willow; he also popularized the slalom turn style. The wooden skis designed by Norheim closely resemble the shape of modern slalom skis. Norheim was the champion of the first downhill skiing competition, reportedly held in Oslo, Norway in 1868. He impressed spectators when he used the stem christie in Christiania (Oslo) in 1868; the technique was originally called the christiania turn (Norwegian: christianiasving or kristianiasving) after the city (first printed in 1901 in guidelines for ski jumping). The telemark turn was the alternative technique. The christiania turn later developed into the parallel turn, the standard technique in alpine skiing.

The term "slalom" comes from the Norwegian dialect word slalåm, meaning a trail (låm) on a slope (sla). In Telemark in the 1800s, the steeper and more difficult trails were called ville låmir (wild trails). Skiing competitions in Telemark often began on a steep mountain, continued along logging slides (tømmerslepe), and were completed with a sharp turn (Telemark turn) on a field or frozen lake. This type of competition used the natural and typical terrain of Telemark. Some races were on "bumpy courses" (kneikelåm) and sometimes included "steep jumps" (sprøytehopp) for difficulty. The first known slalom competitions were presumably held in Telemark around 1870 in conjunction with ski jumping competitions, involving the same athletes and on slopes next to the ski jump. Husebyrennet from 1886 included svingrenn (a turning competition on hills); the term slalåm had not been introduced at that time. Slalom was first used at a skiing competition in Sonnenberg in 1906. Two to three decades later, the sport spread to the rest of Europe and the US. The first slalom ski competition occurred in Mürren, Switzerland, in 1922.
Technique:
A skier following the fall line will reach the maximum possible speed for that slope. A skier with skis pointed perpendicular to the fall line, across the hill instead of down it, will accelerate more slowly. The speed of descent down any given hill can be controlled by changing the angle of motion in relation to the fall line, skiing across the hill rather than down it.
Technique:
Downhill skiing technique focuses on the use of turns to smoothly turn the skis from one direction to another. Additionally, the skier can use the same techniques to turn the ski away from the direction of movement, generating skidding forces between the skis and snow which further slow the descent. Good technique results in a fluid flowing motion from one descent angle to another one, adjusting the angle as needed to match changes in the steepness of the run. This looks more like a single series of S's than turns followed by straight sections.
Technique:
Stemming: The oldest and still common type of turn on skis is the stem: angling the tail of the ski off to the side while the tips remain close together. In doing so, the snow resists passage of the stemmed ski, creating a force that retards downhill speed and sustains a turn in the opposite direction. When both skis are stemmed, there is no net turning force, only retardation of downhill speed.
Technique:
Carving: Carving is based on the shape of the ski itself; when the ski is rotated onto its edge, the pattern cut into its side causes it to bend into an arc. The contact between the arc of the ski edges and the snow naturally causes the ski to move along that arc, changing the skier's direction of motion.
Technique:
Checking: This is an advanced form of speed control: the skier increases the pressure on one inside edge (for example the right ski), then releases the pressure and immediately shifts to increasing pressure on the other inside edge (the left ski), repeating as necessary. Each increase in pressure slows the speed. Alternating right and left allows the skis to remain parallel and point ahead without turning. The increase-and-release sequence produces the up-and-down motion of the upper body. Some skiers go over the tops of moguls and control their speed by checking at the tops; this is how they can go practically straight down the fall line without gaining speed.
Technique:
Snowplough turn: The snowplough turn is the simplest form of turning and is usually learned by beginners. To perform a snowplough turn, the skier assumes the snowplough position while going down the slope and applies more pressure to the inside of the foot opposite the direction in which they want to turn. This type of turn lets the skier keep a controlled speed and introduces the idea of turning across the fall line.
Equipment:
Skis: Modern alpine skis are shaped to enable carve turning and have evolved significantly since the 1980s, with variants including powder skis, freestyle skis, all-mountain skis, and children's skis. Powder skis are usually used when there is a large amount of fresh snow; the shape of a powder ski is wide, allowing the ski to float on top of the snow, whereas a normal downhill ski would most likely sink into it. Freestyle skis are used by skiers who ski terrain parks; these skis are meant to help a skier who skis jumps, rails, and other features placed throughout the terrain park. Freestyle skis are usually fully symmetric, meaning they have the same dimensions from the tip of the ski to the tail. All-mountain skis are the most common type of ski and tend to be used as a typical alpine ski. All-mountain skis are built to do a little bit of everything; they can be used in fresh snow (powder) or on groomed runs. Slalom race skis, usually referred to as race skis, are short, narrow skis which tend to be stiffer because they are meant for those who want to go fast and make quick, sharp turns.
Equipment:
Bindings The binding is a device used to connect the skier's boot to the ski. The purpose of the binding is to allow the skier to stay connected to the ski, but if the skier falls the binding can safely release them from the ski to prevent injury. There are two types of bindings: the heel and toe system (step-in) and the plate system binding.
Equipment:
Boots Ski boots are one of the most important accessories in skiing. They connect the skier to the skis, allowing full control over the ski. Ski boots were originally made of leather and fastened with laces. The leather boots started off low-cut but gradually became taller, allowing for more ankle support as injuries became more common. Eventually the laces were replaced with buckles and the leather with plastic, which allowed the bindings to be more closely matched to the fit of the boot and offered improved performance. The plastic model consists of two parts: an inner boot and an outer shell. The inner part (also called the liner) is the cushioning part of the boot and contains a footbed along with padding to keep the skier's foot warm and comfortable. The outer shell is the plastic part and carries the buckles. Most ski boots also have a strap at shin level for extra strength when tightening the boots.
Equipment:
Poles Ski poles, one in each hand, are used for balance and propulsion.
Equipment:
Helmet Ski helmets reduce the chances of head injury while skiing. Ski helmets also help to provide warmth to the head since they incorporate an inner liner that traps warmth. Helmets are available in many styles, and typically consist of a hard plastic/resin shell with inner padding. Modern ski helmets may include many additional features such as vents, earmuffs, headphones, goggle mounts, and camera mounts.
Equipment:
Protective gear The protective gear used in alpine skiing includes helmets, mouth guards, shin guards, chin guards, arm guards, back protectors, pole guards, and padding. Mouth guards can reduce the effects of a concussion and protect the athlete's teeth. Shin guards, pole guards, arm guards and chin guards are mainly used in slalom skiing to protect the body parts that strike the gates. Back protectors and padding, also known as stealth, are worn for giant slalom and other speed events to better protect the body if an athlete has an accident at high speed.
Competition:
Elite competitive skiers participate in the FIS World Cup, the World Championships, and the Winter Olympics. Broadly speaking, competitive skiing is divided into two disciplines: Racing, comprising slalom, giant slalom, super giant slalom, downhill, combined, parallel slalom and parallel giant slalom.
Freestyle skiing, incorporating events such as moguls, aerials, halfpipe, and ski cross. Other disciplines administered by the FIS but not usually considered part of alpine skiing are speed skiing and grass skiing.
Ski trail ratings:
In most ski resorts, the runs are graded according to comparative difficulty so that skiers can select appropriate routes. The grading schemes around the world are related, although with significant regional variations. A beginner-rated trail at a large mountain may be comparable to an intermediate-rated trail at a smaller mountain.
Ski trail ratings:
In the United States and Canada, there are four rating symbols: Easy (green circle), Intermediate (blue square), Difficult (black diamond), and Experts Only (double black diamond). Ski trail difficulty is measured by percent slope, not degree angle; a 100% slope is a 45-degree angle. In general, beginner slopes (green circle) are between 6% and 25%, intermediate slopes (blue square) between 25% and 40%, and difficult slopes (black diamond) 40% and up. Although slope gradient is the primary consideration in assigning a trail difficulty rating, other factors come into play. A trail is rated by its most difficult part, even if the rest of the trail is easy. Ski resorts assign ratings to their own trails, comparing each trail only with the other trails at that resort. Also considered are the width of the trail, its sharpest turns, terrain roughness, and whether the resort regularly grooms the trail.
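The percent/degree relationship is simple trigonometry; the following worked conversion (a hedged illustration computed from the percentages quoted above, not an official rating rule) makes it concrete:

```latex
% Percent slope is the tangent of the slope angle times 100:
\[ \text{slope\%} = 100\,\tan\theta \qquad\Rightarrow\qquad 100\,\tan 45^{\circ} = 100\% \]
% Conversely, the rating boundaries quoted above correspond to:
\[ \arctan(0.25) \approx 14.0^{\circ} \quad (25\%), \qquad \arctan(0.40) \approx 21.8^{\circ} \quad (40\%) \]
```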
Safety:
In 2014, there were more than 114,000 alpine skiing-related injuries treated in hospitals, doctor's offices, and emergency rooms.
Safety:
The most common types of ski injuries are those of the knee, head, neck and shoulder area, hands and back. Ski helmets are highly recommended by professionals as well as doctors. Head injuries caused in skiing can lead to death or permanent brain damage. In alpine skiing, for every 1000 people skiing in a day, on average between two and four will require medical attention. Most accidents are the result of user error leading to an isolated fall. Learning how to fall correctly and safely can reduce the risk of injury.
Health:
According to a 2004 Harvard Medical School study, alpine skiing burns between 360 and 532 calories per hour.
Climate change:
Winter season lengths are projected to decline at ski areas across North America and Europe due to the effects of global warming. In the United States, winter season lengths are projected to decline by more than 50 percent by 2050 and by 80 percent by 2090 if greenhouse gas emissions continue at current rates. About half of the 103 ski resorts in the Northeastern United States operating in 2012 may not be able to maintain an economically viable ski season by 2050. In Europe, half of the glacial ice in the Alps has melted and the European Geosciences Union projects snowpack in the mountains could decline 70 percent by 2100 (however, if humans manage to keep global warming below 2 °C, the snow-cover reduction would be limited to 30 percent by 2100).
**Dock7**
Dock7:
Dedicator of cytokinesis protein 7 (Dock7) is a large (~240 kDa) protein encoded in humans by the DOCK7 gene and involved in intracellular signalling networks. It is a member of the DOCK-C subfamily of the DOCK family of guanine nucleotide exchange factors (GEFs), which function as activators of small G proteins. Dock7 activates isoforms of the small G protein Rac.
Discovery:
Dock7 was identified as one of a number of proteins which share high sequence similarity with the previously described protein Dock180, the archetypal member of the DOCK family. Dock7 expression has been reported in neurons and in the HEK 293 cell line.
Structure and function:
Dock7 is part of a large class of proteins (GEFs) which contribute to cellular signalling events by activating small G proteins. In their resting state G proteins are bound to Guanosine diphosphate (GDP) and their activation requires the dissociation of GDP and binding of guanosine triphosphate (GTP). GEFs activate G proteins by promoting this nucleotide exchange.
Structure and function:
Dock7 and other DOCK family proteins differ from other GEFs in that they do not possess the canonical structure of tandem DH-PH domains known to elicit nucleotide exchange. Instead they possess a DHR2 domain which mediates G protein activation by stabilising it in its nucleotide-free state. They also contain a DHR1 domain which, in many DOCK family members, interacts with phospholipids. Dock7 shares the highest level of sequence similarity with Dock6 and Dock8, the other members of the DOCK-C subfamily. However, the specificity of the Dock7 DHR2 domain appears to resemble that of DOCK-A/B subfamily proteins in that it binds Rac but not Cdc42. Many DOCK family proteins contain important structural features at their N- and C-termini; however, these regions in Dock7 are poorly characterised thus far and no such features have been identified.
Regulation of Dock7 Activity:
Many members of the DOCK family are regulated by protein-protein interactions mediated via domains at their N- and C-termini; however, the mechanisms by which Dock7 is regulated are largely unknown. There is evidence that the production of PtdIns(3,4,5)P3 by members of the Phosphoinositide 3-kinase (PI3K) family is important for efficient recruitment of Dock7, since the PI3K inhibitor LY294002 was shown to block Dock7-dependent functions in neurons. This observation is consistent with the role of the DHR1 domain in other DOCK family proteins. In neurons of the hippocampus Dock7 undergoes striking changes in subcellular localisation during the progressive stages of neuronal development, resulting in an abundance of this protein in a single neurite, which goes on to form the axon of the polarised neuron. In Schwann cells (which generate an insulating layer, known as the myelin sheath, around axons of the peripheral nervous system), Dock7 appears to be activated downstream of the neuregulin receptor ErbB2, which receives signals from the axon that induce Schwann cell proliferation, migration and myelination. ErbB2 has been shown to tyrosine-phosphorylate Dock7 and thus promote Schwann cell migration.
Signalling downstream of Dock7:
DOCK proteins are known activators of small G proteins of the Rho family. A study of Dock7 in HEK 293 cells and hippocampal neurons has shown that it can bind and promote nucleotide exchange on the Rac subfamily isoforms Rac1 and Rac3. This work suggests that Dock7 is a key mediator of the process that specifies which of the many neurites will become the axon. Indeed, overexpression of Dock7 induced the formation of multiple axons, and RNA interference knock-down of Dock7 prevented axon formation. In Schwann cells Dock7 was shown to regulate the activation of Cdc42 as well as Rac1; however, no direct interaction between Dock7 and Cdc42 has been demonstrated. Dock7 has also been reported to interact with the TSC1-TSC2 (also known as hamartin-tuberin) complex, the normal function of which is disrupted in sufferers of Tuberous sclerosis. It was subsequently suggested that Dock7 may function as a GEF for Rheb, a small G protein that functions downstream of the TSC1-TSC2 complex. Although DOCK family proteins are generally considered GEFs specific for Rho family G proteins, Dock4 has been shown to bind and activate Rap1, which is not a member of the Rho family. This apparent promiscuity among DOCK proteins and their targets, coupled with the fact that Rheb is highly expressed in the brain, means that Dock7 GEF activity towards Rheb, although not yet demonstrated, would not be surprising.
**Web server**
Web server:
A web server is computer software and underlying hardware that accepts requests via HTTP (the network protocol created to distribute web content) or its secure variant HTTPS. A user agent, commonly a web browser or web crawler, initiates communication by requesting a web page or other resource using HTTP, and the server responds with the content of that resource or an error message. A web server can also accept and store resources sent from the user agent if configured to do so. The hardware used to run a web server can vary according to the volume of requests that it needs to handle. At the low end of the range are embedded systems, such as a router that runs a small web server as its configuration interface. A high-traffic Internet website might handle requests with hundreds of servers that run on racks of high-speed computers.
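As a minimal sketch of this request/response cycle (assuming Python's standard library; the port and served directory are arbitrary illustrative choices), a tiny static-content web server can be started like this:

```python
# Minimal static web server sketch using Python's standard library.
# Serves files from the current directory over HTTP on port 8000
# (both values are arbitrary choices for this illustration).
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
print("Serving on http://localhost:8000 ...")
server.serve_forever()  # each GET/HEAD request is answered with a file or an error
```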
Web server:
A resource sent from a web server can be a pre-existing file (static content) available to the web server, or it can be generated at the time of the request (dynamic content) by another program that communicates with the server software. The former usually can be served faster and can be more easily cached for repeated requests, while the latter supports a broader range of applications.
Web server:
Technologies such as REST and SOAP, which use HTTP as a basis for general computer-to-computer communication, as well as support for WebDAV extensions, have extended the application of web servers well beyond their original purpose of serving human-readable pages.
History:
This is a very brief history of web server programs, so some information necessarily overlaps with the histories of web browsers, the World Wide Web, and the Internet; for the sake of clarity, some key historical information reported below may therefore also appear in one or more of those history articles.
History:
Initial WWW project (1989–1991) In March 1989 Sir Tim Berners-Lee proposed a new project to his employer CERN, with the goal of easing the exchange of information between scientists by using a hypertext system. The proposal, titled "HyperText and CERN", asked for comments and was read by several people. In October 1990 the proposal was reformulated and enriched (with Robert Cailliau as co-author) and finally approved. Between late 1990 and early 1991 the project resulted in Berners-Lee and his developers writing and testing several software libraries along with three programs, which initially ran on NeXTSTEP OS installed on NeXT workstations: a graphical web browser, called WorldWideWeb; a portable line-mode web browser; and a web server, later known as CERN httpd. Those early browsers retrieved web pages, written in a simple early form of HTML, from web servers using a new basic communication protocol named HTTP 0.9.
History:
In August 1991 Tim Berners-Lee announced the birth of WWW technology and encouraged scientists to adopt and develop it. Soon after, those programs, along with their source code, were made available to people interested in their usage. Although the source code was not formally licensed or placed in the public domain, CERN informally allowed users and developers to experiment and to build further on top of them. Berners-Lee started promoting the adoption and usage of those programs, along with their porting to other operating systems.
History:
Fast and wild development (1991–1995) In December 1991 the first web server outside Europe was installed at SLAC (U.S.A.). This was a very important event because it started trans-continental web communications between web browsers and web servers.
In 1991–1993 the CERN web server program continued to be actively developed by the WWW group; meanwhile, thanks to the availability of its source code and the public specifications of the HTTP protocol, many other implementations of web servers started to be developed.
History:
In April 1993 CERN issued a public official statement stating that the three components of Web software (the basic line-mode client, the web server and the library of common code), along with their source code, were put in the public domain. This statement freed web server developers from any possible legal issue about the development of derivative work based on that source code (a threat that in practice never existed).
History:
At the beginning of 1994, the most notable among new web servers was NCSA httpd which ran on a variety of Unix-based OSs and could serve dynamically generated content by implementing the POST HTTP method and the CGI to communicate with external programs. These capabilities, along with the multimedia features of NCSA's Mosaic browser (also able to manage HTML FORMs in order to send data to a web server) highlighted the potential of web technology for publishing and distributed computing applications.
History:
In the second half of 1994, the development of NCSA httpd stalled to the point that a group of external software developers, webmasters, and other professionals interested in that server started to write and collect patches, made possible by the NCSA httpd source code being in the public domain. At the beginning of 1995 those patches were all applied to the last release of the NCSA source code and, after several tests, the Apache HTTP Server project was started. At the end of 1994 a new commercial web server, named Netsite, was released with specific features. It was the first of many similar products developed first by Netscape, then by Sun Microsystems, and finally by Oracle Corporation.
History:
In mid-1995 the first version of IIS was released, for Windows NT OS, by Microsoft. This marked the entry into the field of World Wide Web technologies of a very important commercial developer and vendor that has played, and still plays, a key role on both sides (client and server) of the web.
In the second half of 1995 usage of the CERN and NCSA web servers started to decline (in global percentage terms) because of the widespread adoption of new web servers which had a much faster development cycle, more features, more fixes applied, and better performance than the previous ones.
Explosive growth and competition (1996–2014) At the end of 1996 there were already over fifty known (different) web server software programs that were available to everybody who wanted to own an Internet domain name and/or to host websites. Many of them lived only shortly and were replaced by other web servers.
The publication of RFCs about protocol versions HTTP/1.0 (1996) and HTTP/1.1 (1997, 1999) forced most web servers to comply (not always completely) with those standards. The use of TCP/IP persistent connections (HTTP/1.1) required web servers both to greatly increase the maximum number of concurrent connections allowed and to improve their level of scalability.
Between 1996 and 1999 Netscape Enterprise Server and Microsoft's IIS emerged among the leading commercial options whereas among the freely available and open-source programs Apache HTTP Server held the lead as the preferred server (because of its reliability and its many features).
In those years there was also another commercial, highly innovative and thus notable web server called Zeus (now discontinued), known as one of the fastest and most scalable web servers on the market, at least until the first decade of the 2000s, despite its low percentage of usage.
Apache was the most used web server from mid-1996 to the end of 2015 when, after a few years of decline, it was surpassed initially by IIS and then by Nginx. Afterward IIS dropped to much lower percentages of usage than Apache (see also market share).
History:
From 2005–2006 Apache started to improve its speed and scalability by introducing new performance features (e.g. the event MPM and new content caches). Because these new performance improvements were initially marked as experimental, they were not enabled by its users for a long time, and so Apache suffered all the more from the competition of commercial servers and, above all, of other open-source servers which had achieved far superior performance (mostly when serving static content) from the beginning of their development and which, by the time of Apache's decline, could also offer a long list of well-tested advanced features.
History:
In fact, in the early 2000s not only did other commercial and highly competitive web servers emerge (e.g. LiteSpeed), but also many other open-source programs, often of excellent quality and very high performance, among which should be noted Hiawatha, Cherokee HTTP server, Lighttpd and Nginx, with related or derived products also available with commercial support.
History:
Around 2007–2008 most popular web browsers increased their previous default limit of 2 persistent connections per host-domain (a limit recommended by RFC 2616) to 4, 6 or 8 persistent connections per host-domain, in order to speed up the retrieval of heavy web pages with lots of images and to mitigate the shortage of persistent connections dedicated to dynamic objects used for bi-directional notification of events in web pages. Within a year, these changes, on average, nearly tripled the maximum number of persistent connections that web servers had to manage. This trend (of increasing the number of persistent connections) gave a strong impetus to the adoption of reverse proxies in front of slower web servers, and it also gave one more chance to the emerging new web servers that could show all their speed and their capability to handle very high numbers of concurrent connections without requiring too many hardware resources (expensive computers with lots of CPUs, RAM and fast disks).
History:
New challenges (2015 and later years) In 2015 the new protocol version HTTP/2 was published (RFC 7540). Because implementing the new specification was not trivial at all, a dilemma arose among developers of less popular web servers (e.g. those with a percentage of usage lower than 1%–2%) about whether to add support for the new protocol version. In fact, supporting HTTP/2 often required radical changes to their internal implementation due to many factors (practically always-required encrypted connections, the capability to distinguish between HTTP/1.x and HTTP/2 connections on the same TCP port, binary representation of HTTP messages, message priority, compression of HTTP headers, use of streams, also known as TCP/IP sub-connections, and related flow control, etc.), and so a few developers of those web servers opted not to support the new HTTP/2 version (at least in the near future), mainly for these reasons: the HTTP/1.x protocols would in any case be supported by browsers for a very long time (maybe forever), so that there would be no incompatibility between clients and servers in the near future; implementing HTTP/2 was considered a task of overwhelming complexity that could open the door to a whole new class of bugs that until 2015 did not exist, and so would require notable investment in developing and testing the implementation of the new protocol; and HTTP/2 support could always be added later if the effort became justified. Instead, developers of the most popular web servers rushed to offer the new protocol, not only because they had the workforce and the time to do so, but also because usually their previous implementation of the SPDY protocol could be reused as a starting point, and because the most used web browsers implemented it very quickly for the same reason. Another reason that prompted those developers to act quickly was that webmasters felt the pressure of ever-increasing web traffic and really wanted to install and try, as soon as possible, something that could drastically lower the number of TCP/IP connections and speed up access to hosted websites. In 2020–2021 the HTTP/2 dynamics about its implementation (by top web servers and popular web browsers) were partly replicated after the publication of advanced drafts of the future RFC about the HTTP/3 protocol.
Technical overview:
The following technical overview is only an attempt to give a few limited examples of features that may be implemented in a web server and of tasks that it may perform, in order to sketch a sufficiently broad picture of the topic.
A web server program plays the role of a server in a client–server model by implementing one or more versions of HTTP protocol, often including the HTTPS secure variant and other features and extensions that are considered useful for its planned usage.
Technical overview:
The complexity and the efficiency of a web server program may vary a lot depending on (e.g.): common features implemented; common tasks performed; the performance and scalability level targeted; the software model and techniques adopted to achieve the desired performance and scalability level; and the target hardware and category of usage, e.g. embedded system, low-medium traffic web server, or high-traffic Internet web server.
Technical overview:
Common features Although web server programs differ in how they are implemented, most of them offer the following common features.
These are basic features that most web servers usually have.
Static content serving: to be able to serve static content (web files) to clients via HTTP protocol.
HTTP: support for one or more versions of HTTP protocol in order to send versions of HTTP responses compatible with versions of client HTTP requests, e.g. HTTP/1.0, HTTP/1.1 (optionally also with encrypted HTTPS connections), plus, if available, HTTP/2 and HTTP/3.
Logging: usually web servers also have the capability of logging some information about client requests and server responses to log files, for security and statistical purposes. A few other more advanced and popular features (only a very short selection) are the following ones.
Dynamic content serving: to be able to serve dynamic content (generated on the fly) to clients via HTTP protocol.
Virtual hosting: to be able to serve many websites (domain names) using only one IP address.
Authorization: to be able to allow, to forbid or to authorize access to portions of website paths (web resources).
Content cache: to be able to cache static and/or dynamic content in order to speed up server responses.
Large file support: to be able to serve files whose size is greater than 2 GB on 32-bit OS.
Bandwidth throttling: to limit the speed of content responses in order to not saturate the network and to be able to serve more clients.
Rewrite engine: to map parts of clean URLs (found in client requests) to their real names.
Custom error pages: support for customized HTTP error messages.
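As an example of one of these features, the following hedged sketch shows the core of virtual hosting: choosing a per-site document root from the HTTP Host header so that one IP address can serve many websites (the host-to-root table is hypothetical; real servers read an equivalent mapping from their configuration files):

```python
# Virtual hosting sketch: pick a document root from the Host request header.
# The host-to-root mapping below is hypothetical; production servers read
# an equivalent table from their configuration files.
VHOSTS = {
    "www.example.com":    "/home/www/www.example.com",
    "static.example.com": "/home/www/static.example.com",
}
DEFAULT_ROOT = "/home/www/default"

def document_root(host_header: str) -> str:
    host = host_header.split(":")[0].lower()  # strip optional ":port", normalize case
    return VHOSTS.get(host, DEFAULT_ROOT)
```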
Technical overview:
Common tasks A web server program, when it is running, usually performs several general tasks, e.g.:
starts, optionally reads and applies settings found in its configuration file(s) or elsewhere, optionally opens a log file, and starts listening to client connections / requests;
optionally tries to adapt its general behavior according to its settings and its current operating conditions;
manages client connection(s) (accepting new ones or closing the existing ones as required);
receives client requests (by reading HTTP messages): reads and verifies each HTTP request message; usually performs URL normalization; usually performs URL mapping (which may default to URL path translation); usually performs URL path translation along with various security checks;
executes or refuses the requested HTTP method: optionally manages URL authorizations; optionally manages URL redirections; optionally manages requests for static resources (file contents), including directory index files and regular files; optionally manages requests for dynamic resources, including directory listings and program or module processing (checking the availability, the start and eventually the stop of the execution of external programs used to generate dynamic content, and managing the communications with external programs / internal modules used to generate dynamic content);
replies to client requests by sending proper HTTP responses (e.g. requested resources or error messages), eventually verifying or adding HTTP headers to those sent by dynamic programs / modules;
optionally logs (partially or totally) client requests and/or its responses to an external user log file or to a system log file by syslog, usually using common log format;
optionally logs process messages about detected anomalies or other notable events (e.g. in client requests or in its internal functioning) using syslog or some other system facilities; these log messages usually have a debug, warning, error or alert level which can be filtered (not logged) depending on some settings (see also severity level);
optionally generates statistics about web traffic managed and/or its performances;
other custom tasks.
Technical overview:
Read request message Web server programs are able to read an HTTP request message, to interpret it, to verify its syntax, and to identify known HTTP headers and extract their values. Once an HTTP request message has been decoded and verified, its values can be used to determine whether that request can be satisfied or not. This requires many other steps, including security checks.
Technical overview:
URL normalization Web server programs usually perform some type of URL normalization (of the URL found in most HTTP request messages) in order: to make the resource path a clean uniform path from the root directory of the website; to lower security risks (e.g. by more easily intercepting attempts to access static resources outside the root directory of the website, or to access portions of paths below the website root directory that are forbidden or require authorization); and to make the paths of web resources more recognizable by human beings and web log analysis programs (also known as log analyzers / statistical applications). The term URL normalization refers to the process of modifying and standardizing a URL in a consistent manner. There are several types of normalization that may be performed, including the conversion of the scheme and host to lowercase. Among the most important normalizations are the removal of "." and ".." path segments and the addition of a trailing slash to a non-empty path component.
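A minimal normalization sketch, assuming Python's standard library and covering only the normalizations named above (lower-casing scheme and host, removing "." and ".." segments, preserving a trailing slash):

```python
# URL normalization sketch: lower-case scheme and host, resolve "." and ".."
# path segments, and reject relative paths that try to climb above the root.
from urllib.parse import urlsplit, urlunsplit
import posixpath

def normalize(url: str) -> str:
    parts = urlsplit(url)
    path = posixpath.normpath(parts.path or "/")   # removes "." and ".." segments
    if path.startswith(".."):                      # escaped above the root: forbid
        raise ValueError("path traversal attempt")
    if parts.path.endswith("/") and path != "/":
        path += "/"                                # normpath drops the trailing slash
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       path, parts.query, parts.fragment))

print(normalize("HTTP://WWW.Example.COM/a/./b/../c/"))
# -> http://www.example.com/a/c/
```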
Technical overview:
URL mapping "URL mapping is the process by which a URL is analyzed to figure out what resource it is referring to, so that that resource can be returned to the requesting client. This process is performed with every request that is made to a web server, with some of the requests being served with a file, such as an HTML document, or a gif image, others with the results of running a CGI program, and others by some other process, such as a built-in module handler, a PHP document, or a Java servlet."In practice, web server programs that implement advanced features, beyond the simple static content serving (e.g. URL rewrite engine, dynamic content serving), usually have to figure out how that URL has to be handled, e.g.: as a URL redirection, a redirection to another URL; as a static request of file content; as a dynamic request of: directory listing of files or other sub-directories contained in that directory; other types of dynamic request in order to identify the program / module processor able to handle that kind of URL path and to pass to it other URL parts, i.e. usually path-info and query string variables.One or more configuration files of web server may specify the mapping of parts of URL path (e.g. initial parts of file path, filename extension and other path components) to a specific URL handler (file, directory, external program or internal module).When a web server implements one or more of the above-mentioned advanced features then the path part of a valid URL may not always match an existing file system path under website directory tree (a file or a directory in file system) because it can refer to a virtual name of an internal or external module processor for dynamic requests.
Technical overview:
URL path translation to file system Web server programs are able to translate a URL path (all or part of it) that refers to a physical file system path into an absolute path under the target website's root directory. The website's root directory may be specified by a configuration file or by some internal rule of the web server using the name of the website, which is the host part of the URL found in the HTTP client request. Path translation to the file system is done for the following types of web resources: a local, usually non-executable, file (static request for file content); a local directory (dynamic request: directory listing generated on the fly); a program name (dynamic request executed using a CGI or SCGI interface, whose output is read by the web server and resent to the client who made the HTTP request). The web server takes the path found in the requested URL (HTTP request message) and appends it to the path of the (Host) website root directory. On an Apache server, this is commonly /home/www/website (on Unix machines, usually it is /var/www/website). See the following examples of how it may result.
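A hedged sketch of this translation step, following the /home/www/<website> layout used in the examples below and refusing any result that escapes the website root (a common security check; the function name is illustrative):

```python
# URL path translation sketch: append the request path to the website root
# and refuse any result that escapes the root directory.
import os

def translate(root: str, url_path: str) -> str:
    rootn = os.path.normpath(root)
    candidate = os.path.normpath(os.path.join(rootn, url_path.lstrip("/")))
    if candidate != rootn and not candidate.startswith(rootn + os.sep):
        raise PermissionError("path escapes website root")
    return candidate

print(translate("/home/www/www.example.com", "/path/file.html"))
# -> /home/www/www.example.com/path/file.html
```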
Technical overview:
URL path translation for a static file request Example of a static request of an existing file specified by the following URL:
http://www.example.com/path/file.html
The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:
GET /path/file.html HTTP/1.1
Host: www.example.com
Connection: keep-alive
The result is the local file system resource:
/home/www/www.example.com/path/file.html
The web server then reads the file, if it exists, and sends a response to the client's web browser. The response will describe the content of the file and contain the file itself, or an error message will be returned saying that the file does not exist or that access to it is forbidden.
Technical overview:
URL path translation for a directory request (without a static index file) Example of an implicit dynamic request of an existing directory specified by the following URL:
http://www.example.com/directory1/directory2/
The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:
GET /directory1/directory2 HTTP/1.1
Host: www.example.com
Connection: keep-alive
The result is the local directory path:
/home/www/www.example.com/directory1/directory2/
The web server then verifies the existence of the directory and, if it exists and can be accessed, tries to find an index file (which in this case does not exist); it therefore passes the request to an internal module or a program dedicated to directory listings, reads the data output, and finally sends a response to the client's web browser. The response will describe the content of the directory (a list of contained subdirectories and files), or an error message will be returned saying that the directory does not exist or that access to it is forbidden.
Technical overview:
URL path translation for a dynamic program request For a dynamic request the URL path specified by the client should refer to an existing external program (usually an executable file with a CGI) used by the web server to generate dynamic content. Example of a dynamic request using a program file to generate output:
http://www.example.com/cgi-bin/forum.php?action=view&orderby=thread&date=2021-10-15
The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:
GET /cgi-bin/forum.php?action=view&orderby=thread&date=2021-10-15 HTTP/1.1
Host: www.example.com
Connection: keep-alive
The result is the local file path of the program (in this example, a PHP program):
/home/www/www.example.com/cgi-bin/forum.php
The web server executes that program, passing in the path-info and the query string action=view&orderby=thread&date=2021-10-15 so that the program has the information it needs to run (in this case, it will return an HTML document containing a view of forum entries ordered by thread from October 15, 2021). In addition, the web server reads the data sent by the external program and resends that data to the client that made the request.
Technical overview:
Manage request message Once a request has been read, interpreted, and verified, it has to be managed depending on its method, its URL, and its parameters, which may include values of HTTP headers.
Technical overview:
In practice, the web server has to handle the request by using one of these response paths:
if something in the request was not acceptable (in the status line or message headers), the web server has already sent an error response;
if the request has a method (e.g. OPTIONS) that can be satisfied by the general code of the web server, a successful response is sent;
if the URL requires authorization, an authorization error message is sent;
if the URL maps to a redirection, a redirect message is sent;
if the URL maps to a dynamic resource (a virtual path or a directory listing), its handler (an internal module or an external program) is called and the request parameters (query string and path info) are passed to it so that it can reply to the request;
if the URL maps to a static resource (usually a file on the file system), the internal static handler is called to send that file;
if the request method is not known or some other unacceptable condition holds (e.g. resource not found, internal server error, etc.), an error response is sent.
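The chain of checks above can be pictured as a simple dispatcher; the following hedged sketch (the request is modelled as a plain dict, and the rule tables are hypothetical placeholders for a real server's configuration) mirrors that order:

```python
# Response-path dispatch sketch mirroring the checks listed above.
# The rule tables below are hypothetical placeholders.
PROTECTED = {"/admin"}
REDIRECTS = {"/old": "/new"}
DYNAMIC_PREFIX = "/cgi-bin/"
STATIC_FILES = {"/index.html", "/logo.png"}

def dispatch(request: dict) -> str:
    path = request.get("path", "")
    if not path.startswith("/"):                 # malformed request line
        return "400 error response"
    if request.get("method") == "OPTIONS":       # handled by general server code
        return "OPTIONS response"
    if path in PROTECTED and not request.get("authorized", False):
        return "401 authorization error"
    if path in REDIRECTS:
        return f"redirect to {REDIRECTS[path]}"
    if path.startswith(DYNAMIC_PREFIX):          # dynamic resource handler
        return "invoke dynamic handler"
    if path in STATIC_FILES:                     # static resource handler
        return "send file"
    return "404 error response"

print(dispatch({"method": "GET", "path": "/index.html"}))  # -> send file
```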
Technical overview:
Serve static content If a web server program is capable of serving static content and has been configured to do so, then it is able to send file content whenever a request message has a valid URL path matching (after URL mapping, URL translation and URL redirection) that of an existing file under the root directory of a website, and the file has attributes matching those required by the internal rules of the web server program. That kind of content is called static because usually it is not changed by the web server when it is sent to clients and because it remains the same until it is modified (file modification) by some program.
Technical overview:
NOTE: when serving static content only, a web server program usually does not change the file contents of the served websites (as they are only read and never written), and so it suffices to support only these HTTP methods: OPTIONS, HEAD, GET. Responses with static file content can be sped up by a file cache.
Technical overview:
Directory index files If a web server program receives a client request message with a URL whose path matches that of an existing directory, that directory is accessible, and serving directory index file(s) is enabled, then the web server program may try to serve the first of the known (or configured) static index file names (a regular file) found in that directory; if no index file is found or other conditions are not met, an error message is returned.
Technical overview:
Most used names for static index files are: index.html, index.htm and Default.htm.
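A minimal resolution sketch for index files, assuming the common names quoted above are tried in order:

```python
# Directory index resolution sketch: try the configured index file names
# in order, else signal "no index" (listing or error follows).
import os

INDEX_NAMES = ["index.html", "index.htm", "Default.htm"]

def find_index(directory: str):
    for name in INDEX_NAMES:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate        # serve this regular file
    return None                     # fall back to a listing or an error
```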
Regular files If a web server program receives a client request message with a URL whose path matches the file name of an existing file, that file is accessible by the web server program, and its attributes match the internal rules of the web server program, then the web server program can send that file to the client.
Usually, for security reasons, most web server programs are pre-configured to serve only regular files and to avoid serving special file types like device files, along with symbolic links or hard links to them. The aim is to avoid undesirable side effects when serving static web resources.
Technical overview:
Serve dynamic content If a web server program is capable of serving dynamic content and has been configured to do so, then it is able to communicate with the proper internal module or external program (associated with the requested URL path) in order to pass it the parameters of the client request; after that, the web server program reads its data response (generated, often on the fly) and resends it to the client program that made the request. NOTE: when serving static and dynamic content, a web server program usually also has to support the POST HTTP method, in order to be able to safely receive data from client(s) and thus host websites with interactive form(s) that may send large data sets (e.g. lots of data entry or file uploads) to the web server / external programs / modules. In order to communicate with its internal modules and/or external programs, a web server program must implement one or more of the many available gateway interfaces (see also Web Server Gateway Interfaces used for dynamic content).
Technical overview:
The three standard and historical gateway interfaces are the following ones.
Technical overview:
CGI An external CGI program is run by the web server program for each dynamic request; the web server program then reads the generated data response from it and resends that response to the client.
SCGI An external SCGI program (usually a long-running process) is started once, by the web server program or by some other program / process, and then waits for network connections; every time there is a new request for it, the web server program makes a new network connection to it in order to send the request parameters and read its data response, after which the network connection is closed.
FastCGI An external FastCGI program (usually a long-running process) is started once, by the web server program or by some other program / process, and then waits for a network connection which is established permanently by the web server; through that connection the request parameters are sent and the data responses are read.
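A hedged sketch of the oldest of the three, CGI (REQUEST_METHOD, QUERY_STRING and GATEWAY_INTERFACE are standard CGI environment variables; the helper function itself and its arguments are illustrative):

```python
# CGI invocation sketch: run one external program per request, passing the
# request context through standard CGI environment variables, then read its
# output (headers + blank line + body) from stdout.
import subprocess

def run_cgi(script_path: str, method: str, query_string: str) -> bytes:
    env = {
        "GATEWAY_INTERFACE": "CGI/1.1",
        "REQUEST_METHOD": method,        # e.g. "GET"
        "QUERY_STRING": query_string,    # e.g. "action=view&orderby=thread"
    }
    result = subprocess.run([script_path], env=env,
                            capture_output=True, check=True)
    return result.stdout                 # the CGI response to resend to the client
```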
Technical overview:
Directory listings A web server program may be capable of managing the dynamic generation (on the fly) of a directory index listing files and sub-directories. If a web server program is configured to do so, a requested URL path matches an existing directory, its access is allowed, and no static index file is found under that directory, then a web page (usually in HTML format) containing the list of files and/or subdirectories of the above-mentioned directory is dynamically generated (on the fly). If it cannot be generated, an error is returned.
Technical overview:
Some web server programs allow the customization of directory listings through a web page template (an HTML document containing placeholders, e.g. $(FILE_NAME), $(FILE_SIZE), etc., that are replaced with the field values of each file entry found in the directory by the web server), e.g. index.tpl, or through HTML with embedded source code that is interpreted and executed on the fly, e.g. index.asp, and/or by supporting the usage of dynamic index programs such as CGIs, SCGIs and FCGIs, e.g. index.cgi, index.php, index.fcgi.
Technical overview:
Usage of dynamically generated directory listings is usually avoided, or limited to a few selected directories of a website, because that generation consumes many more OS resources than sending a static index page.
The main usage of directory listings is to allow the download of files (usually when their names, sizes, modification date-times or file attributes may change randomly / frequently) as they are, without requiring further information from the requesting user.
Technical overview:
Program or module processing An external program or an internal module (processing unit) can execute some sort of application function that may be used to get data from, or store data to, one or more data repositories, e.g.: files (file system); databases (DBs); other sources located on the local computer or on other computers. A processing unit can return any kind of web content, possibly using data retrieved from a data repository, e.g.: a document (e.g. HTML, XML, etc.); an image; a video; or structured data, e.g. data that may be used to update one or more values displayed by a dynamic page (DHTML) of a web interface, perhaps requested by an XMLHttpRequest API (see also: dynamic page). In practice, whenever there is content that may vary depending on one or more parameters contained in the client request or in configuration settings, it is usually generated dynamically.
Technical overview:
Send response message Web server programs are able to send response messages as replies to client request messages. An error response message may be sent because a request message could not be successfully read, decoded, analyzed or executed. NOTE: the following sections are reported only as examples to help understand what a web server, more or less, does; these sections are by no means exhaustive or complete.
Technical overview:
Error message A web server program may reply to a client request message with many kinds of error messages; these errors fall mainly into two categories: HTTP client errors, due to the type of request message or to the availability of the requested web resource; and HTTP server errors, due to internal server errors. When an error response / message is received by a client browser, if it is related to the main user request (e.g. a URL of a web resource such as a web page) then usually that error message is shown in some browser window / message.
Technical overview:
URL authorization A web server program may be able to verify whether the requested URL path: can be freely accessed by everybody; requires user authentication (a request for user credentials, such as user name and password); or is forbidden to some or all kinds of users. If the authorization / access rights feature has been implemented and enabled and access to the web resource is not granted, then, depending on the required access rights, a web server program: can deny access by sending a specific error message (e.g. access forbidden); or may deny access by sending a specific error message (e.g. access unauthorized) that usually forces the client browser to ask the human user to provide the required user credentials; if authentication credentials are provided, the web server program verifies them and accepts or rejects them.
Technical overview:
URL redirection A web server program may have the capability of doing URL redirections to new URLs (new locations), which consists of replying to a client request message with a response message containing a new URL suited to access a valid or existing web resource (the client should redo the request with the new URL). URL redirection of location is used: to fix a directory name by adding a final slash '/'; to give a new URL for a URL path that no longer exists, pointing to a new path where that kind of web resource can be found.
Technical overview:
to give a new URL to another domain when the current domain has too much load. Example 1: a URL path points to a directory name but does not have a final slash '/', so the web server sends a redirect to the client in order to instruct it to redo the request with the fixed path name.
From: /directory1/directory2
To: /directory1/directory2/
Example 2: a whole set of documents has been moved inside the website in order to reorganize their file system paths.
Technical overview:
From: /directory1/directory2/2021-10-08/
To: /directory1/directory2/2021/10/08/
Example 3: a whole set of documents has been moved to a new website and now it is mandatory to use secure HTTPS connections to access them.
From: http://www.example.com/directory1/directory2/2021-10-08/
To: https://docs.example.com/directory1/2021-10-08/
The above examples are only a few of the possible kinds of redirection.
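On the wire, a redirection is just a response with a 3xx status and a Location header; a minimal sketch (status codes 301 and 302 are the classic permanent/temporary redirects):

```python
# URL redirection sketch: build the raw HTTP response that tells the client
# to repeat the request at the new location (e.g. fixing a missing final '/').
def redirect_response(new_url: str, permanent: bool = True) -> bytes:
    status = "301 Moved Permanently" if permanent else "302 Found"
    return (f"HTTP/1.1 {status}\r\n"
            f"Location: {new_url}\r\n"
            f"Content-Length: 0\r\n"
            f"\r\n").encode("ascii")

print(redirect_response("/directory1/directory2/").decode())
```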
Technical overview:
Successful message A web server program is able to reply to a valid client request message with a successful message, optionally containing the requested web resource data. If web resource data is sent back to the client, it can be static content or dynamic content depending on how it has been retrieved (from a file or from the output of some program / module).
Technical overview:
Content cache In order to speed up web server responses by lowering average HTTP response times and the hardware resources used, many popular web servers implement one or more content caches, each specialized in a content category. Content is usually cached by its origin, e.g.: static content: file cache; dynamic content: dynamic cache (module / program output).
Technical overview:
File cache Historically, static content found in files which had to be accessed frequently, randomly and quickly has been stored mostly on electro-mechanical disks since the mid-late 1960s / 1970s; regrettably, reads from and writes to those kinds of devices have always been considered very slow operations compared to RAM speed, and so, since early OSs, first disk caches and then OS file cache sub-systems were developed to speed up I/O operations on frequently accessed data / files.
Technical overview:
Even with the aid of an OS file cache, the relative / occasional slowness of I/O operations involving directories and files stored on disks soon became a bottleneck in the increase of performance expected from top-level web servers, especially since the mid-late 1990s, when Internet web traffic started to grow exponentially along with the constant increase in speed of Internet / network lines.
Technical overview:
The problem of how to further and efficiently speed up the serving of static files, thus increasing the maximum number of requests/responses per second (RPS), started to be studied / researched in the mid-1990s, with the aim of proposing useful cache models that could be implemented in web server programs. In practice, nowadays, many popular / high-performance web server programs include their own userland file cache, tailored for web server usage and using their own specific implementation and parameters. The widespread adoption of RAID and/or fast solid-state drives (storage hardware with very high I/O speed) has slightly reduced, but of course not eliminated, the advantage of having a file cache incorporated in a web server.
Technical overview:
Dynamic cache Dynamic content, output by an internal module or an external program, may not always change very frequently (given a unique URL with keys / parameters), and so, maybe for a while (e.g. from 1 second to several hours or more), the resulting output can be cached in RAM or even on a fast disk. The typical usage of a dynamic cache is when a website has dynamic web pages about news, weather, images, maps, etc. that do not change frequently (e.g. every n minutes) and that are accessed by a huge number of clients per minute / hour; in those cases it is useful to return cached content too (without calling the internal module or the external program), because clients often do not have an updated copy of the requested content in their browser caches. Anyway, in most cases those kinds of caches are implemented by external servers (e.g. reverse proxy) or by storing dynamic data output in separate computers managed by specific applications (e.g. memcached), in order not to compete for hardware resources (CPU, RAM, disks) with the web server(s).
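A minimal sketch of such a dynamic cache, assuming a single-process server and an in-RAM store keyed by URL; the time-to-live value is an arbitrary illustrative choice:

```python
# Dynamic content cache sketch: keep the output generated for a given URL
# (with its query parameters) in RAM for a fixed time-to-live, so repeated
# requests within that window skip the module/program call entirely.
import time

TTL_SECONDS = 60          # arbitrary choice: re-generate at most once a minute
_cache: dict[str, tuple[float, bytes]] = {}

def cached(url: str, generate) -> bytes:
    entry = _cache.get(url)
    if entry and time.monotonic() - entry[0] < TTL_SECONDS:
        return entry[1]                      # fresh enough: serve from cache
    body = generate(url)                     # slow path: call the real generator
    _cache[url] = (time.monotonic(), body)
    return body
```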
Technical overview:
Kernel-mode and user-mode web servers A web server software can be either incorporated into the OS and executed in kernel space, or it can be executed in user space (like other regular applications).
Technical overview:
Web servers that run in kernel mode (usually called kernel-space web servers) can have direct access to kernel resources and so they can be, in theory, faster than those running in user mode; however, there are disadvantages in running a web server in kernel mode, e.g. difficulties in developing (debugging) the software, and the fact that run-time critical errors may lead to serious problems in the OS kernel.
Technical overview:
Web servers that run in user-mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they might not always be satisfied because the system reserves resources for its own usage and has the responsibility to share hardware resources with all the other running applications. Executing in user mode can also mean using more buffer/data copies (between user-space and kernel-space) which can lead to a decrease in the performance of a user-mode web server.
Technical overview:
Nowadays almost all web server software is executed in user mode (because many of the aforementioned small disadvantages have been overcome by faster hardware, new OS versions, much faster OS system calls and new optimized web server software). See also the comparison of web server software to discover which of them run in kernel mode or in user mode (also referred to as kernel space or user space).
Performances:
To improve the user experience (on the client / browser side), a web server should reply quickly (as soon as possible) to client requests; unless content responses are throttled (by configuration) for some types of files (e.g. big or huge files), the returned data content should also be sent as fast as possible (high transfer speed).
In other words, a web server should always be very responsive, even under high load of web traffic, in order to keep total user's wait (sum of browser time + network time + web server response time) for a response as low as possible.
Performances:
Performance metrics For web server software, the main key performance metrics (measured under varying operating conditions) usually are at least the following:
number of requests per second (RPS, similar to QPS, depending on HTTP version and configuration, type of HTTP requests and other operating conditions);
number of connections per second (CPS), i.e. the number of connections per second accepted by the web server (useful when using HTTP/1.0 or HTTP/1.1 with a very low limit of requests / responses per connection, i.e. 1–20);
network latency + response time for each new client request; usually the benchmark tool shows how many requests have been satisfied within a range of time lapses (e.g. within 1 ms, 3 ms, 5 ms, 10 ms, 20 ms, 30 ms, 40 ms) and / or the shortest, the average and the longest response time;
throughput of responses, in bytes per second.
Among the operating conditions, the number (1–n) of concurrent client connections used during a test is an important parameter, because it allows one to correlate the concurrency level supported by the web server with the results of the tested performance metrics.
Performances:
Software efficiency The specific web server software design and model adopted (e.g. single process or multi-process; single thread (no threads) or multi-thread for each process; usage of coroutines or not) and the other programming techniques used to implement it (e.g. zero copy; minimization of possible CPU cache misses; minimization of possible CPU branch mispredictions in critical paths for speed; minimization of the number of system calls used to perform a certain function / task; other tricks) can greatly bias the performance, and in particular the scalability level, that can be achieved under heavy load or when using high-end hardware (many CPUs, disks and lots of RAM).
Performances:
In practice some web server software models may require more OS resources (especially more CPUs and more RAM) than others to be able to work well and thus achieve target performance.
Performances:
Operating conditions There are many operating conditions that can affect the performance of a web server; performance values may vary depending on (e.g.): the settings of the web server (including whether the log file is enabled or not, etc.); the HTTP version used by client requests; the average HTTP request type (method, length of HTTP headers and optional body); whether the requested content is static or dynamic; whether the content is cached or not cached (by server and/or by client); whether the content is compressed on the fly (when transferred), pre-compressed (i.e. when a file resource is stored on disk already compressed so that the web server can send that file directly to the network with the only indication that its content is compressed) or not compressed at all; whether the connections are encrypted or not; the average network speed between the web server and its clients; the number of active TCP connections; the number of active processes managed by the web server (including external CGI, SCGI and FCGI programs); the hardware and software limitations or settings of the OS of the computer(s) on which the web server runs; other minor conditions.
Performances:
Benchmarking Performances of a web server are typically benchmarked by using one or more of the available automated load testing tools.
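As a toy illustration of the metrics defined above (requests per second and per-request latency), the following sketch issues sequential requests from Python; real benchmarks use dedicated load-testing tools with many concurrent connections, and the URL and request count here are placeholders:

```python
# Toy benchmark sketch: issue N sequential requests and report requests per
# second and per-request latency. Real load tests use dedicated tools and
# many concurrent connections; URL and N here are placeholders.
import time
import urllib.request

URL, N = "http://localhost:8000/", 100
latencies = []
for _ in range(N):
    t0 = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    latencies.append(time.perf_counter() - t0)

total = sum(latencies)
print(f"RPS: {N / total:.1f}")
print(f"avg latency: {1000 * total / N:.2f} ms, "
      f"max: {1000 * max(latencies):.2f} ms")
```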
Load limits:
A web server (program installation) usually has pre-defined load limits for each combination of operating conditions, also because it is limited by OS resources and because it can handle only a limited number of concurrent client connections (usually between 2 and several tens of thousands for each active web server process, see also the C10k problem and the C10M problem).
Load limits:
When a web server is near to or over its load limits, it gets overloaded and so it may become unresponsive.
Causes of overload At any time web servers can be overloaded due to one or more of the following causes (e.g.).
Excess legitimate web traffic. Thousands or even millions of clients connecting to the website in a short amount of time, e.g., Slashdot effect.
Distributed Denial of Service attacks. A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer or network resource unavailable to its intended users.
Computer worms that sometimes cause abnormal traffic because of millions of infected computers (not coordinated among them).
XSS worms can cause high traffic because of millions of infected browsers or web servers.
Internet bots: traffic not filtered / limited on large websites with very few network resources (e.g. bandwidth) and/or hardware resources (CPUs, RAM, disks).
Internet (network) slowdowns (e.g. due to packet losses) so that client requests are served more slowly and the number of connections increases so much that server limits are reached.
Load limits:
Web servers serving dynamic content, waiting for slow responses from back-end computer(s) (e.g. databases), perhaps because of too many queries mixed with too many inserts or updates of DB data; in these cases web servers have to wait for back-end data before replying to HTTP clients, but during these waits too many new client connections / requests arrive, and so they become overloaded.
Load limits:
Partial unavailability of web servers (computers). This can happen because of required or urgent maintenance or upgrades, or because of hardware or software failures such as back-end (e.g. database) failures; in these cases the remaining web servers may receive too much traffic and become overloaded.
Symptoms of overload The symptoms of an overloaded web server are usually the following (e.g.):
Requests are served with (possibly long) delays (from 1 second to a few hundred seconds).
The web server returns an HTTP error code, such as 500, 502, 503, 504, 408, or even an intermittent 404.
The web server refuses or resets (interrupts) TCP connections before it returns any content.
In very rare cases, the web server returns only a part of the requested content. This behavior can be considered a bug, even if it usually arises as a symptom of overload.
Anti-overload techniques To partially overcome above-average load limits and to prevent overload, most popular websites use common techniques like the following (e.g.):
Tuning OS parameters for hardware capabilities and usage.
Tuning web server parameters to improve security and performance.
Deploying web cache techniques (not only for static contents but, whenever possible, for dynamic contents too).
Managing network traffic, by using: firewalls to block unwanted traffic coming from bad IP sources or having bad patterns; HTTP traffic managers to drop, redirect or rewrite requests having bad HTTP patterns; bandwidth management and traffic shaping, in order to smooth down peaks in network usage.
Load limits:
Using different domain names, IP addresses and computers to serve different kinds of content (static and dynamic); the aim is to separate big or huge files (download.*, a domain that might also be replaced by a CDN) from small and medium-sized files (static.*) and from the main dynamic site (www.*, where some contents may be stored in a back-end database); the idea is to be able to efficiently serve big or huge (over 10 – 1000 MB) files (maybe throttling downloads) and to fully cache small and medium-sized files, without affecting the performance of the dynamic site under heavy load, by using different settings for each (group of) web server computer(s), e.g.: https://download.example.com https://static.example.com https://www.example.com
Using many web servers (computers) grouped together behind a load balancer so that they act, or are seen, as one big web server.
Load limits:
Adding more hardware resources (e.g. RAM, faster disks) to each computer.
Using more efficient computer programs for web servers (see also: software efficiency).
Using the most efficient Web Server Gateway Interface to process dynamic requests (spawning one or more external programs every time a dynamic page is retrieved kills performance); see the WSGI sketch after this list.
Using other programming techniques and workarounds, especially if dynamic content is involved, to speed up HTTP responses (e.g. by avoiding dynamic calls to retrieve objects, such as style sheets, images and scripts, that never change or change very rarely, by copying that content to static files once and then keeping it synchronized with the dynamic content).
Load limits:
Using the latest efficient versions of HTTP (e.g. beyond common HTTP/1.1, also enabling HTTP/2 and maybe HTTP/3 too, whenever the available web server software reliably supports the latter two protocols) in order to greatly reduce the number of TCP/IP connections started by each client and the size of the data exchanged (because of more compact HTTP header representation and maybe data compression).
Caveats about using HTTP/2 and HTTP/3 protocols Even if the newer HTTP (2 and 3) protocols usually generate less network traffic for each request / response, they may require more OS resources (i.e. RAM and CPU) in the web server software (because of encrypted data, many stream buffers and other implementation details); besides this, HTTP/2, and maybe HTTP/3 too, depending also on the settings of web server and client program, may not be the best option for uploading big or huge files at very high speed, because their data streams are optimized for concurrency of requests, so in many cases using HTTP/1.1 TCP/IP connections may lead to better results / higher upload speeds (your mileage may vary).
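Referring to the gateway-interface item in the list above, here is a minimal sketch of a persistent in-process interface, using Python's standard WSGI: the application stays resident in the server process, so no external program is spawned per dynamic request (the port and response body are illustrative assumptions).

```python
from wsgiref.simple_server import make_server

# A WSGI application is a plain callable living inside a long-running
# server process, avoiding per-request process spawning.
def app(environ, start_response):
    body = f"Hello from {environ['PATH_INFO']}".encode()
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    with make_server("127.0.0.1", 8000, app) as httpd:  # hypothetical port
        httpd.serve_forever()
```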
Market share:
Below are the latest statistics on the market share of all sites for the top web servers on the Internet, according to Netcraft.
NOTE: (*) percentage rounded to an integer number, because decimal values are not publicly reported by the source page (only the rounded value is reported in the graph). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Maxwell–Boltzmann statistics**
Maxwell–Boltzmann statistics:
In statistical mechanics, Maxwell–Boltzmann statistics describes the distribution of classical material particles over various energy states in thermal equilibrium. It is applicable when the temperature is high enough or the particle density is low enough to render quantum effects negligible.
Maxwell–Boltzmann statistics:
The expected number of particles with energy εi for Maxwell–Boltzmann statistics is
$$\langle N_i \rangle = \frac{g_i}{e^{(\varepsilon_i - \mu)/kT}} = \frac{N}{Z}\, g_i e^{-\varepsilon_i/kT},$$
where: εi is the energy of the i-th energy level, ⟨Ni⟩ is the average number of particles in the set of states with energy εi, gi is the degeneracy of energy level i, that is, the number of states with energy εi which may nevertheless be distinguished from each other by some other means, μ is the chemical potential, k is the Boltzmann constant, T is absolute temperature, N is the total number of particles, $N = \sum_i N_i$, Z is the partition function, $Z = \sum_i g_i e^{-\varepsilon_i/kT}$, and e is Euler's number. Equivalently, the number of particles is sometimes expressed as
$$\langle N_i \rangle = \frac{1}{e^{(\varepsilon_i - \mu)/kT}} = \frac{N}{Z}\, e^{-\varepsilon_i/kT},$$
where the index i now specifies a particular state rather than the set of all states with energy εi, and $Z = \sum_i e^{-\varepsilon_i/kT}$.
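As a toy numerical check of the first formula, assuming an arbitrary three-level system (the energies, degeneracies, temperature and particle count are made-up values for illustration):

```python
import math

k = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                 # assumed temperature, K
N = 1e6                   # assumed total number of particles
# Made-up energy levels (J) and degeneracies for a toy system:
eps = [0.0, 1e-21, 3e-21]
g = [1, 2, 2]

# Partition function Z = sum_i g_i exp(-eps_i / kT)
Z = sum(gi * math.exp(-ei / (k * T)) for gi, ei in zip(g, eps))
# Populations <N_i> = (N / Z) g_i exp(-eps_i / kT)
populations = [N / Z * gi * math.exp(-ei / (k * T)) for gi, ei in zip(g, eps)]

for i, Ni in enumerate(populations):
    print(f"level {i}: <N_{i}> = {Ni:.3e}")
print("sum check:", sum(populations))  # reproduces N
```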
History:
Maxwell–Boltzmann statistics grew out of the Maxwell–Boltzmann distribution, most likely as a distillation of the underlying technique. The distribution was first derived by Maxwell in 1860 on heuristic grounds. Boltzmann later, in the 1870s, carried out significant investigations into the physical origins of this distribution. The distribution can be derived on the ground that it maximizes the entropy of the system.
Applicability:
Maxwell–Boltzmann statistics is used to derive the Maxwell–Boltzmann distribution of an ideal gas. However, it can also be used to extend that distribution to particles with a different energy–momentum relation, such as relativistic particles (resulting in Maxwell–Jüttner distribution), and to other than three-dimensional spaces.
Applicability:
Maxwell–Boltzmann statistics is often described as the statistics of "distinguishable" classical particles. In other words, the configuration of particle A in state 1 and particle B in state 2 is different from the case in which particle B is in state 1 and particle A is in state 2. This assumption leads to the proper (Boltzmann) statistics of particles in the energy states, but yields non-physical results for the entropy, as embodied in the Gibbs paradox.
Applicability:
At the same time, there are no real particles that have the characteristics required by Maxwell–Boltzmann statistics. Indeed, the Gibbs paradox is resolved if we treat all particles of a certain type (e.g., electrons, protons, etc.) as principally indistinguishable. Once this assumption is made, the particle statistics change. The change in entropy in the entropy of mixing example may be viewed as an example of a non-extensive entropy resulting from the distinguishability of the two types of particles being mixed.
Applicability:
Quantum particles are either bosons (following instead Bose–Einstein statistics) or fermions (subject to the Pauli exclusion principle, following instead Fermi–Dirac statistics). Both of these quantum statistics approach the Maxwell–Boltzmann statistics in the limit of high temperature and low particle density.
Derivations:
Maxwell–Boltzmann statistics can be derived in various statistical mechanical thermodynamic ensembles: The grand canonical ensemble, exactly.
The canonical ensemble, exactly.
The microcanonical ensemble, but only in the thermodynamic limit. In each case it is necessary to assume that the particles are non-interacting, and that multiple particles can occupy the same state and do so independently.
Derivations:
Derivation from microcanonical ensemble Suppose we have a container with a huge number of very small particles all with identical physical characteristics (such as mass, charge, etc.). Let's refer to this as the system. Assume that though the particles have identical properties, they are distinguishable. For example, we might identify each particle by continually observing their trajectories, or by placing a marking on each one, e.g., drawing a different number on each one as is done with lottery balls.
Derivations:
The particles are moving inside that container in all directions with great speed. Because the particles are speeding around, they possess some energy. The Maxwell–Boltzmann distribution is a mathematical function that describes how many particles in the container have a certain energy. More precisely, the Maxwell–Boltzmann distribution gives the non-normalized probability (meaning that the probabilities do not add up to 1) that the state corresponding to a particular energy is occupied.
Derivations:
In general, there may be many particles with the same amount of energy ε . Let the number of particles with the same energy ε1 be N1 , the number of particles possessing another energy ε2 be N2 , and so forth for all the possible energies {εi∣i=1,2,3,…}.
To describe this situation, we say that Ni is the occupation number of the energy level i.
Derivations:
If we know all the occupation numbers {Ni∣i=1,2,3,…}, then we know the total energy of the system. However, because we can distinguish between which particles are occupying each energy level, the set of occupation numbers {Ni∣i=1,2,3,…} does not completely describe the state of the system. To completely describe the state of the system, or the microstate, we must specify exactly which particles are in each energy level. Thus when we count the number of possible states of the system, we must count each and every microstate, and not just the possible sets of occupation numbers.
Derivations:
To begin with, assume that there is only one state at each energy level i (there is no degeneracy). What follows next is a bit of combinatorial thinking which has little to do with accurately describing the reservoir of particles. For instance, let's say there is a total of k boxes labelled a,b,…,k. With the concept of combination, we could calculate how many ways there are to arrange N balls into the set of boxes, where the order of balls within each box isn't tracked. First, we select Na balls from a total of N balls to place into box a, and continue to select for each box from the remaining balls, ensuring that every ball is placed in one of the boxes. The total number of ways that the balls can be arranged is
$$W = \frac{N!}{N_a!\,(N-N_a)!} \times \frac{(N-N_a)!}{N_b!\,(N-N_a-N_b)!} \times \frac{(N-N_a-N_b)!}{N_c!\,(N-N_a-N_b-N_c)!} \times \cdots \times \frac{(N-\cdots-N_\ell)!}{N_k!\,(N-\cdots-N_\ell-N_k)!} = \frac{N!}{N_a!\,N_b!\,N_c!\cdots N_k!\,(N-N_a-\cdots-N_\ell-N_k)!}$$
As every ball has been placed into a box, $(N-N_a-N_b-\cdots-N_k)! = 0! = 1$, and we simplify the expression as
$$W = N! \prod_{\ell=a,b,\ldots,k} \frac{1}{N_\ell!}$$
This is just the multinomial coefficient, the number of ways of arranging N items into k boxes, the ℓ-th box holding Nℓ items, ignoring the permutation of items in each box.
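A quick numerical sanity check of this multinomial count, with made-up occupation numbers, comparing the closed form against brute-force enumeration:

```python
import math
from itertools import product

# Toy occupation numbers for 3 boxes (made-up values):
occ = (2, 1, 1)
N = sum(occ)

# Closed form: multinomial coefficient N! / (N_a! N_b! N_c!)
closed = math.factorial(N) // math.prod(math.factorial(n) for n in occ)

# Brute force: count assignments of N labelled balls to boxes 0..2
# that realize exactly these occupation numbers.
brute = sum(
    1
    for assignment in product(range(len(occ)), repeat=N)
    if tuple(assignment.count(b) for b in range(len(occ))) == occ
)

print(closed, brute)  # both print 12
```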
Derivations:
Now, consider the case where there is more than one way to put Ni particles in the box i (i.e. taking the degeneracy problem into consideration). If the i-th box has a "degeneracy" of gi, that is, it has gi "sub-boxes" (gi boxes with the same energy εi; these states/boxes with the same energy are called degenerate states), such that any way of filling the i-th box where the number in the sub-boxes is changed is a distinct way of filling the box, then the number of ways of filling the i-th box must be increased by the number of ways of distributing the Ni objects in the gi "sub-boxes". The number of ways of placing Ni distinguishable objects in gi "sub-boxes" is $g_i^{N_i}$ (the first object can go into any of the gi boxes, the second object can also go into any of the gi boxes, and so on). Thus the number of ways W that a total of N particles can be classified into energy levels according to their energies, while each level i has gi distinct states such that the i-th level accommodates Ni particles, is:
$$W = N! \prod_i \frac{g_i^{N_i}}{N_i!}$$
This is the form for W first derived by Boltzmann. Boltzmann's fundamental equation $S = k \ln W$ relates the thermodynamic entropy S to the number of microstates W, where k is the Boltzmann constant. It was pointed out by Gibbs, however, that the above expression for W does not yield an extensive entropy, and is therefore faulty. This problem is known as the Gibbs paradox. The problem is that the particles considered by the above equation are not indistinguishable. In other words, for two particles (A and B) in two energy sublevels the population represented by [A,B] is considered distinct from the population [B,A], while for indistinguishable particles they are not. If we carry out the argument for indistinguishable particles, we are led to the Bose–Einstein expression for W:
$$W = \prod_i \frac{(N_i + g_i - 1)!}{N_i!\,(g_i - 1)!}$$
The Maxwell–Boltzmann distribution follows from this Bose–Einstein distribution for temperatures well above absolute zero, implying that $g_i \gg 1$. The Maxwell–Boltzmann distribution also requires low density, implying that $g_i \gg N_i$. Under these conditions, we may use Stirling's approximation for the factorial, $N! \approx N^N e^{-N}$, to write:
$$W \approx \prod_i \frac{(N_i+g_i)^{N_i+g_i}}{N_i^{N_i}\, g_i^{g_i}} \approx \prod_i \frac{g_i^{N_i}\,(1+N_i/g_i)^{g_i}}{N_i^{N_i}}$$
Using the fact that $(1+N_i/g_i)^{g_i} \approx e^{N_i}$ for $g_i \gg N_i$, we can again use Stirling's approximation to write:
$$W \approx \prod_i \frac{g_i^{N_i}}{N_i!}$$
This is essentially a division by N! of Boltzmann's original expression for W, and this correction is referred to as correct Boltzmann counting.
Derivations:
We wish to find the Ni for which the function W is maximized, while considering the constraint that there is a fixed number of particles $\left(N=\sum N_i\right)$ and a fixed energy $\left(E=\sum N_i \varepsilon_i\right)$ in the container. The maxima of W and ln(W) are achieved by the same values of Ni and, since it is easier to accomplish mathematically, we will maximize the latter function instead. We constrain our solution using Lagrange multipliers, forming the function:
$$f(N_1,N_2,\ldots,N_n) = \ln(W) + \alpha\Big(N - \sum N_i\Big) + \beta\Big(E - \sum N_i \varepsilon_i\Big)$$
Using Stirling's approximation, $\ln N! \approx N \ln N - N$,
$$\ln W \approx \sum_i \left(N_i \ln g_i - N_i \ln N_i + N_i\right)$$
Finally
$$f(N_1,N_2,\ldots,N_n) = \alpha N + \beta E + \sum_i \left(N_i \ln g_i - N_i \ln N_i + N_i - (\alpha+\beta\varepsilon_i)N_i\right)$$
In order to maximize the expression above we apply Fermat's theorem (stationary points), according to which local extrema, if they exist, must be at critical points (where the partial derivatives vanish):
$$\ln g_i - \ln N_i - (\alpha+\beta\varepsilon_i) = 0$$
By solving the equations above ($i=1,\ldots,n$) we arrive at an expression for Ni:
$$N_i = \frac{g_i}{e^{\alpha+\beta\varepsilon_i}}$$
Substituting this expression for Ni into the equation for ln W and assuming that N ≫ 1 yields:
$$\ln W = (\alpha+1)N + \beta E$$
or, rearranging:
$$E = \frac{\ln W}{\beta} - \frac{N}{\beta} - \frac{\alpha N}{\beta}$$
Boltzmann realized that this is just an expression of the Euler-integrated fundamental equation of thermodynamics. Identifying E as the internal energy, the Euler-integrated fundamental equation states that:
$$E = TS - PV + \mu N$$
where T is the temperature, P is pressure, V is volume, and μ is the chemical potential. Boltzmann's famous equation $S = k \ln W$ is the realization that the entropy is proportional to ln W with the constant of proportionality being the Boltzmann constant. Using the ideal gas equation of state (PV = NkT), it follows immediately that $\beta = 1/kT$ and $\alpha = -\mu/kT$, so that the populations may now be written:
$$N_i = \frac{g_i}{e^{(\varepsilon_i-\mu)/kT}}$$
Note that the above formula is sometimes written:
$$N_i = \frac{g_i}{e^{\varepsilon_i/kT}/z}$$
where $z = \exp(\mu/kT)$ is the absolute activity.
Derivations:
Alternatively, we may use the fact that $\sum_i N_i = N$ to obtain the population numbers as
$$N_i = N\,\frac{g_i e^{-\varepsilon_i/kT}}{Z}$$
where Z is the partition function defined by:
$$Z = \sum_i g_i e^{-\varepsilon_i/kT}$$
In an approximation where εi is considered to be a continuous variable, the Thomas–Fermi approximation yields a continuous degeneracy g proportional to $\sqrt{\varepsilon}$, so that:
$$\frac{\sqrt{\varepsilon}\, e^{-\varepsilon/kT}}{\int_0^\infty \sqrt{\varepsilon}\, e^{-\varepsilon/kT}\, d\varepsilon} = \frac{\sqrt{\varepsilon}\, e^{-\varepsilon/kT}}{\frac{\sqrt{\pi}}{2}(kT)^{3/2}} = \frac{2\sqrt{\varepsilon}\, e^{-\varepsilon/kT}}{\sqrt{\pi (kT)^{3}}}$$
which is just the Maxwell–Boltzmann distribution for the energy.
Derivations:
Derivation from canonical ensemble In the above discussion, the Boltzmann distribution function was obtained by directly analysing the multiplicities of a system. Alternatively, one can make use of the canonical ensemble. In a canonical ensemble, a system is in thermal contact with a reservoir. While energy is free to flow between the system and the reservoir, the reservoir is thought to have an infinitely large heat capacity so as to maintain a constant temperature, T, for the combined system.
Derivations:
In the present context, our system is assumed to have the energy levels εi with degeneracies gi. As before, we would like to calculate the probability that our system has energy εi. If our system is in state s1, then there would be a corresponding number of microstates available to the reservoir. Call this number ΩR(s1). By assumption, the combined system (of the system we are interested in and the reservoir) is isolated, so all microstates are equally probable. Therefore, for instance, if ΩR(s1) = 2 ΩR(s2), we can conclude that our system is twice as likely to be in state s1 as in state s2. In general, if P(si) is the probability that our system is in state si, then
$$\frac{P(s_1)}{P(s_2)} = \frac{\Omega_R(s_1)}{\Omega_R(s_2)}.$$
Derivations:
Since the entropy of the reservoir is $S_R = k \ln \Omega_R$, the above becomes
$$\frac{P(s_1)}{P(s_2)} = \frac{e^{S_R(s_1)/k}}{e^{S_R(s_2)/k}} = e^{(S_R(s_1) - S_R(s_2))/k}.$$
Next we recall the thermodynamic identity (from the first law of thermodynamics):
$$dS_R = \frac{1}{T}\left(dU_R + P\,dV_R - \mu\,dN_R\right).$$
In a canonical ensemble, there is no exchange of particles, so the dNR term is zero. Similarly, $dV_R = 0$, since the reservoir's volume is fixed.
Derivations:
This gives
$$S_R(s_1) - S_R(s_2) = \frac{1}{T}\left(U_R(s_1) - U_R(s_2)\right) = -\frac{1}{T}\left(E(s_1) - E(s_2)\right),$$
where UR(si) and E(si) denote the energies of the reservoir and the system at si, respectively. For the second equality we have used the conservation of energy. Substituting into the first equation relating P(s1), P(s2):
$$\frac{P(s_1)}{P(s_2)} = \frac{e^{-E(s_1)/kT}}{e^{-E(s_2)/kT}},$$
which implies, for any state s of the system,
$$P(s) = \frac{1}{Z}\, e^{-E(s)/kT},$$
where Z is an appropriately chosen "constant" to make total probability 1. (Z is constant provided that the temperature T is invariant.)
$$Z = \sum_s e^{-E(s)/kT},$$
where the index s runs through all microstates of the system. Z is sometimes called the Boltzmann sum over states (or "Zustandssumme" in the original German). If we index the summation via the energy eigenvalues instead of all possible states, degeneracy must be taken into account. The probability of our system having energy εi is simply the sum of the probabilities of all corresponding microstates:
$$P(\varepsilon_i) = \frac{1}{Z}\, g_i e^{-\varepsilon_i/kT}$$
where, with the obvious modification,
$$Z = \sum_j g_j e^{-\varepsilon_j/kT};$$
this is the same result as before.
Derivations:
Comments on this derivation: Notice that in this formulation, the initial assumption "... suppose the system has total N particles..." is dispensed with. Indeed, the number of particles possessed by the system plays no role in arriving at the distribution. Rather, how many particles would occupy states with energy εi follows as an easy consequence.
What has been presented above is essentially a derivation of the canonical partition function. As one can see by comparing the definitions, the Boltzmann sum over states is equal to the canonical partition function.
Derivations:
Exactly the same approach can be used to derive Fermi–Dirac and Bose–Einstein statistics. However, there one would replace the canonical ensemble with the grand canonical ensemble, since there is exchange of particles between the system and the reservoir. Also, the system one considers in those cases is a single particle state, not a particle. (In the above discussion, we could have assumed our system to be a single atom.) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Thermal capillary wave**
Thermal capillary wave:
Thermal motion is able to produce capillary waves at the molecular scale. At this scale, gravity and hydrodynamics can be neglected, and only the surface tension contribution is relevant.
Capillary wave theory (CWT) is a classic account of how thermal fluctuations distort an interface.
It starts from some intrinsic surface h(x,y,t) that is distorted. Its energy will be proportional to its area:
$$E_{st} = \sigma \int dx\,dy \left[\sqrt{1+\left(\frac{\partial h}{\partial x}\right)^2+\left(\frac{\partial h}{\partial y}\right)^2} - 1\right] \approx \frac{\sigma}{2}\int dx\,dy \left[\left(\frac{\partial h}{\partial x}\right)^2+\left(\frac{\partial h}{\partial y}\right)^2\right],$$
where the first equality is the area in this (Monge) representation, and the second applies for small values of the derivatives (surfaces not too rough). The constant of proportionality, σ, is the surface tension.
Thermal capillary wave:
By performing a Fourier analysis treatment, normal modes are easily found. Each contributes an energy proportional to the square of its amplitude; therefore, according to classical statistical mechanics, equipartition holds, and the mean energy of each mode will be kT/2 . Surprisingly, this result leads to a divergent surface (the width of the interface is bound to diverge with its area). This divergence is nevertheless very mild: even for displacements on the order of meters the deviation of the surface is comparable to the size of the molecules. Moreover, the introduction of an external field removes the divergence: the action of gravity is sufficient to keep the width fluctuation on the order of one molecular diameter for areas larger than about 1 mm2 (Ref. 2). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
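Written out, the equipartition step above takes the following form (a sketch; Fourier normalization conventions vary between references, and this is one common choice). Expanding the surface in Fourier modes over an area A, $h(\mathbf{r}) = \sum_{\mathbf{q}} h_{\mathbf{q}}\, e^{i\mathbf{q}\cdot\mathbf{r}}$, the small-gradient energy above becomes a sum of independent quadratic modes,
$$E_{st} = \frac{\sigma A}{2} \sum_{\mathbf{q}} q^2\, |h_{\mathbf{q}}|^2,$$
so equipartition assigns each mode a mean energy of kT/2 and
$$\langle |h_{\mathbf{q}}|^2 \rangle = \frac{kT}{\sigma A q^2}.$$
Summing over modes, the mean-square interfacial width $\sum_{\mathbf{q}} \langle |h_{\mathbf{q}}|^2 \rangle \sim \int dq/q$ grows logarithmically with system size, which is the mild divergence described above.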
**Fundamental matrix (computer vision)**
Fundamental matrix (computer vision):
In computer vision, the fundamental matrix F is a 3×3 matrix which relates corresponding points in stereo images. In epipolar geometry, with homogeneous image coordinates, x and x′, of corresponding points in a stereo image pair, Fx describes a line (an epipolar line) on which the corresponding point x′ on the other image must lie. That means, for all pairs of corresponding points,
$$x'^\top F\, x = 0$$
holds.
Fundamental matrix (computer vision):
Being of rank two and determined only up to scale, the fundamental matrix can be estimated given at least seven point correspondences. Its seven parameters represent the only geometric information about cameras that can be obtained through point correspondences alone.
The term "fundamental matrix" was coined by QT Luong in his influential PhD thesis. It is sometimes also referred to as the "bifocal tensor". As a tensor it is a two-point tensor in that it is a bilinear form relating points in distinct coordinate systems.
The above relation which defines the fundamental matrix was published in 1992 by both Olivier Faugeras and Richard Hartley. Although H. Christopher Longuet-Higgins' essential matrix satisfies a similar relationship, the essential matrix is a metric object pertaining to calibrated cameras, while the fundamental matrix describes the correspondence in more general and fundamental terms of projective geometry.
This is captured mathematically by the relationship between a fundamental matrix F and its corresponding essential matrix E, which is
$$E = (K')^\top F\, K,$$
K and K′ being the intrinsic calibration matrices of the two images involved.
Introduction:
The fundamental matrix is a relationship between any two images of the same scene that constrains where the projection of points from the scene can occur in both images. Given the projection of a scene point into one of the images the corresponding point in the other image is constrained to a line, helping the search, and allowing for the detection of wrong correspondences. The relation between corresponding points, which the fundamental matrix represents, is referred to as epipolar constraint, matching constraint, discrete matching constraint, or incidence relation.
Projective reconstruction theorem:
The fundamental matrix can be determined by a set of point correspondences. Additionally, these corresponding image points may be triangulated to world points with the help of camera matrices derived directly from this fundamental matrix. The scene composed of these world points is within a projective transformation of the true scene.
Proof Say that the image point correspondence x ↔ x′ derives from the world point X under the camera matrices (P, P′) as
$$x = PX, \qquad x' = P'X.$$
Say we transform space by a general homography matrix $H_{4\times4}$ such that $X_0 = HX$. The cameras then transform as
$$P_0 = PH^{-1}, \qquad P_0' = P'H^{-1}.$$
Then $P_0 X_0 = PH^{-1}HX = PX = x$, and likewise with $P_0'$, still giving us the same image points.
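As a numerical illustration (a sketch with synthetic, noise-free data, not a production estimator), the fundamental matrix can be recovered from eight or more correspondences via the classic eight-point algorithm, after which the epipolar constraint can be verified:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup (made-up cameras): P = [I | 0], P' = [R | t].
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
t = np.array([1.0, 0.2, 0.1])
R = np.eye(3)  # pure translation, for simplicity
P2 = np.hstack([R, t[:, None]])

# Project random world points into both views (homogeneous coordinates).
X = np.vstack([rng.uniform(-1, 1, (3, 12)), np.ones((1, 12))])
X[2] += 5.0  # push points in front of the cameras
x1 = P1 @ X; x1 /= x1[2]
x2 = P2 @ X; x2 /= x2[2]

# Eight-point algorithm: each correspondence gives one row of A f = 0,
# where f holds the 9 entries of F in row-major order.
A = np.array([np.kron(x2[:, i], x1[:, i]) for i in range(X.shape[1])])
_, _, Vt = np.linalg.svd(A)
F = Vt[-1].reshape(3, 3)

# Verify the epipolar constraint x2^T F x1 = 0 for every correspondence.
residuals = np.einsum('ij,jk,ki->i', x2.T, F, x1)
print(np.allclose(residuals, 0, atol=1e-9))  # True (noise-free data)
```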
Derivation of the fundamental matrix using coplanarity condition:
The fundamental matrix can also be derived using the coplanarity condition.
For satellite images:
The fundamental matrix expresses the epipolar geometry in stereo images. The epipolar geometry in images taken with perspective cameras appears as straight lines. However, in satellite images, the image is formed during the sensor movement along its orbit (pushbroom sensor). Therefore, there are multiple projection centers for one image scene and the epipolar line is formed as an epipolar curve. However, in special conditions such as small image tiles, the satellite images could be rectified using the fundamental matrix.
Properties:
The fundamental matrix is of rank 2. Its kernel defines the epipole.
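As a short sketch continuing in the same vein, both epipoles can be recovered numerically as null vectors of F and Fᵀ via SVD (the final normalization assumes finite epipoles):

```python
import numpy as np

def epipoles(F):
    """Return the right and left epipoles of a fundamental matrix F.

    The right epipole e satisfies F e = 0 (the kernel of F); the left
    epipole e' satisfies F^T e' = 0. Both are homogeneous 3-vectors.
    For a noisy, not-exactly-rank-2 estimate of F, the smallest singular
    vector is the least-squares null vector.
    """
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]            # null vector of F
    _, _, Vt_T = np.linalg.svd(F.T)
    e_prime = Vt_T[-1]    # null vector of F^T
    return e / e[2], e_prime / e_prime[2]  # assumes finite epipoles
```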
Toolboxes:
fundest is a GPL C/C++ library for robust, non-linear (based on the Levenberg–Marquardt algorithm) fundamental matrix estimation from matched point pairs and various objective functions (Manolis Lourakis).
Structure and Motion Toolkit in MATLAB (Philip H. S. Torr)
Fundamental Matrix Estimation Toolbox (Joaquim Salvi)
The Epipolar Geometry Toolbox (EGT) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**TNFRSF18**
TNFRSF18:
Tumor necrosis factor receptor superfamily member 18 (TNFRSF18), also known as glucocorticoid-induced TNFR-related protein (GITR) or CD357, is a protein that in mice is encoded by the Tnfrsf18 gene on chromosome 4. GITR is a type I transmembrane protein and is described in 4 different isoforms. The human orthologue of GITR, also called activation-inducible TNFR family receptor (AITR), is encoded by the TNFRSF18 gene on chromosome 1.
Function:
GITR is a member of the TNFR superfamily and shares high homology in the cytoplasmic domain, characterized by cysteine pseudo-repeats, with other members of the TNFRSF, such as CD137, OX40 or CD27. GITR is constitutively expressed on CD25+CD4+ regulatory T cells, and its expression is upregulated on all T cell subsets after activation. GITR is also expressed on murine neutrophils and NK cells. GITR interacts with its ligand (GITRL), which is expressed on antigen-presenting cells (APC) and endothelial cells.
Function:
AITR Human activation-inducible tumor necrosis factor receptor (AITR) and its ligand, AITRL, are important costimulatory molecules in the pathogenesis of autoimmune diseases. Despite the importance of these costimulatory molecules in autoimmune disease, their role in the autoimmune reaction to herniated disc fragments has yet to be explored.
Function:
GITR GITR was identified as a new member of the TNF receptor superfamily by comparing gene expression in untreated and DEX-treated murine T-cell lines. GITR is a co-stimulatory surface receptor for T cells; after interaction with GITRL it maintains T cell activation, proliferation and cytokine production, and rescues T cells from anti-CD3-induced apoptosis. GITR can be used as a Treg marker, and its signaling abrogates the suppressive function of regulatory T cells. GITR also plays a role in Treg development, as it is already expressed on CD4+CD25+Foxp3− Treg progenitors. GITR−/− mice have no developmental problems and are fertile. They have a complete block in anti-CD3-induced T cell activation and a decrease in regulatory T cell progenitors. After an infection challenge, GITR−/− mice developed less inflammation than their WT littermates.
GITR signaling:
GITR does not have any enzymatic activity, and signaling is propagated via recruiting TRAF-family members, specifically TRAF1, TRAF2 and TRAF5, to the GITR signaling complex. The signaling is then mediated through the NF-kB and MAPK pathways. There is evidence that GITR has a unique role for CD8+ and CD4+ T cells. GITR signaling lowers the threshold for CD28 signaling on CD8+ T cells or induces expression of CD137 on CD8+ memory T cells. For CD4+ regulatory T cells, GITR signaling promotes their expansion, inhibits Treg suppressive capacity and promotes resistance of effector T cells to Treg suppression.
GITR in disease:
GITR is of high interest as one of the immune checkpoint molecules with potential in cancer treatment. GITR signaling can promote antitumor and anti-infective immune responses, but it can also be a driver of autoimmune diseases. The different responses to GITR signaling rely on GITR expression on different immune cell types; how GITR signaling is modulated in the different cells remains unknown. GITR agonistic antibodies are in clinical trials as activators of effector CD8 T cells that also decrease the number of circulating suppressive regulatory T cells. The limited response to GITR agonistic antibodies is enhanced in combination with anti-PD-1 or anti-CTLA-4 therapies. In a pancreatitis model, GITR−/− mice have reduced IkBα and decreased expression of NF-kB p65 protein in pancreatic tissue, as well as increased pro-apoptotic markers (e.g. Bax) and decreased anti-apoptotic markers (e.g. Bcl-2). Asthma model: GITR activation drives an infiltration of eosinophils into the lungs and induces production of cytokines. Arthritis model: GITR activation increases the number of Th17 cells in secondary lymphoid organs and stimulates cytokine production. Atopic dermatitis model: GITR-GITRL pathway activation supports the production of attractants of regulatory T cells (CCL17 and CCL27) and promotes production of Th2-induced cytokines. Inhibition of the GITR-GITRL pathway may potentially decrease the severity of different diseases, such as asthma, arthritis or atopic dermatitis.
GITR in disease:
Atherosclerosis Atherosclerosis is an autoinflammatory disease that belongs to the group of cardiovascular diseases (CVD). During atherosclerosis progression, plaques containing modified low-density lipoprotein (LDL) are formed. GITR expression was detected in plaque macrophages and T cells. Moreover, soluble GITR (sGITR) was present in patients' plasma. GITR might potentially be used as a biomarker for CVD patients, as its plaque expression and plasma levels can distinguish CVD patients from healthy controls. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Throw shade**
Throw shade:
The expressions "throw shade", "throwing shade", or simply "shade", are slang terms for a certain type of insult, often nonverbal. Journalist Anna Holmes called shade "the art of the sidelong insult". Merriam-Webster defines it as "subtle, sneering expression of contempt for or disgust with someone—sometimes verbal, and sometimes not".
History:
The term can be found in Jane Austen's novel Mansfield Park (1814). Young Edmund Bertram is displeased with a dinner guest's disparagement of the uncle who took her in: "With such warm feelings and lively spirits it must be difficult to do justice to her affection for Mrs. Crawford, without throwing a shade on the Admiral." The slang version of "shade" originated in the black community. According to gender studies scholar John C. Hawley, the expression "throwing shade" was used in the 1980s by New York City's ethnic working class in the "ballroom and vogue culture", particularly by gender nonconformists. He writes that it refers to "the processes of a publicly performed dissimulation that aims either to protect oneself from ridicule or to verbally or psychologically attack others in a haughty or derogatory manner."
Later use:
The first major use of "shade" that introduced the slang to the greater public was in Jennie Livingston's documentary film, Paris Is Burning (1990), about the mid-1980s drag scene in Manhattan. In the documentary, one of the drag queens, Dorian Corey, explains that shade derives from "reading", the "real art form of insults". Shade is a developed form of reading: "Shade is, I don't tell you you're ugly. But I don't have to tell you, because you know you're ugly. And that's shade." Willi Ninja, who also appeared in Paris Is Burning, described "shade" in 1994 as a "nonverbal response to verbal or nonverbal abuse. Shade is about using certain mannerisms in battle. If you said something nasty to me, I would just turn on you, and give you a look like: 'Bitch please, you're not even worth my time, go on.' ... It's like watching Joan Collins going against Linda Evans on Dynasty. ... Or when George Bush ran against Bill Clinton, they were throwing shade. Who got the bigger shade? Bush did because Clinton won." A New York Times letter to the editor in 1993 criticized the newspaper for commenting on Bill Clinton's hair: "The Sunday Stylers are the last people I'd expect to throw shade on President Bill's hair pursuits." According to E. Patrick Johnson, to throw shade is to ignore someone: "If a shade thrower wishes to acknowledge the presence of the third party, he or she might roll his or her eyes and neck while poking out his or her lips. People throw shade if they do not like a particular person or if that person has dissed them in the past. ... In the playful mode, however, a person may throw shade at a person with whom he or she is a best friend." The expression was further popularized by the American reality television series RuPaul's Drag Race, which premiered in 2009. In 2015, Anna Holmes of The New York Times Magazine wrote: Shade can take many forms — a hard, deep look that could be either aggressive or searching, a compliment that could be interpreted as the opposite of one. E. Patrick Johnson, who teaches performance studies and African-American studies at Northwestern University, and who has written about the tradition of insults in the gay and black communities, explains: "If someone walks into a room with a hideous dress, but you don't want to say it's hideous, you might say, 'Oooh … look at you!'" At its most refined, shade should have an element of plausible deniability, so that the shade-thrower can pretend that he or she didn't actually mean to behave with incivility, making it all the more delicious. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lookalike audience**
Lookalike audience:
A lookalike audience is a group of social network members who are determined as sharing characteristics with another group of members. In digital advertising, it refers to a targeting tool for digital marketing, first initiated by Facebook, which helps to reach potential customers online who are likely to share similar interests and behaviors with existing customers. Since Facebook debuted this feature in 2013, additional advertising platforms have followed suit, including Google Ads, Outbrain, Taboola, LinkedIn Ads and others.
Considerations:
Lookalike audiences analyze existing customers and their user profiles to find commonalities among the existing audience. This helps to find highly qualified customers who previously would have been difficult to identify and reach. It expands the potential audience into different countries and applies to new, differentiated audience segments; this approach saves time and lowers advertising costs for the acquisition of a new audience.
Considerations:
In order to be effective, a lookalike audience seed needs to be homogeneous. This is commonly achieved using a consistent behavioral pattern. The homogeneity of the lookalike seed has a greater influence on the audience's effectiveness than the size of this sample group. On Facebook, the minimal lookalike seed size is 100 users from the same country. Facebook generally recommends creating a seed from an audience of 1,000 to 5,000 users. Lookalike audiences might have limited effects for small companies or startups because of the small sample size of their existing audience, which inevitably leads to insufficient data drawn from the current audience and interference from outliers. Namely, there would be no high bounce rate with these companies' websites.
Examples of seeds:
Marketers use many data sources to create lookalike seeds. Some examples of eCommerce lookalike seeds include: CRM-based – A seed based on an email or phone number list of customers who have had a past interaction with the business. This can be further segmented, for example customers with the highest lifetime value or past purchases of a specific product.
Conversion-based – A seed based on users that have performed an action such as a Purchase or Lead form submission on the website.
Engagement-based – A seed based on users segmented by their engagement, such as pages viewed, time spent on the site, video views, etc.
Methodology:
Facebook, as an example, takes three steps to build a lookalike audience: Choose the audience seed to build a lookalike audience from. This can range from page fans to website visitors to customer lists. Generally, the base audience should be composed of a minimum of 500 people; larger pools will increase the accuracy of the lookalike audience.
Choose the specific location (country or region) to find a similar audience in.
Methodology:
Customize the audience size. Facebook offers a range of percentiles from 1% to 10%, indicating the size of the combined population of the locations selected. Larger audiences provide a wider reach, but a smaller lookalike audience is more targeted, which means ads are seen by fewer people, but they are likely to be better aligned to the features of the audience's seed.
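As an illustration of the general idea only (the platforms' actual models are proprietary), here is a toy sketch that ranks candidate users by cosine similarity to the centroid of a seed audience and keeps a chosen top percentile:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up feature vectors: rows are users, columns are behavioral features.
seed = rng.normal(loc=1.0, size=(200, 8))           # existing customers (the seed)
candidates = rng.normal(loc=0.0, size=(10_000, 8))  # wider user base

def lookalikes(seed, candidates, top_pct=1.0):
    """Return indices of the top_pct% of candidates most similar to the seed.

    Similarity is cosine similarity to the seed centroid -- a deliberately
    simple stand-in for whatever model a real ad platform uses.
    """
    centroid = seed.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    normed = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    scores = normed @ centroid
    k = max(1, int(len(candidates) * top_pct / 100))
    return np.argsort(scores)[::-1][:k]

audience = lookalikes(seed, candidates, top_pct=1.0)  # ~1% most similar users
print(len(audience), "users selected")
```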
Debate:
One study has shown that the tool of lookalike audiences, to some degree, performs well in general advertising results. It is also listed as an important trend in pay-per-click (PPC) advertising by the Delhi School of Internet Marketing. However, the debate over such third-party behavioral targeting being used for digital marketing has not stopped either, because using customers' data conflicts with online privacy expectations. In 2019, limitations were put in place by Facebook to stop discriminatory targeting of audiences according to zip code, income levels and demographics (age and gender). In June 2022, the U.S. Justice Department Civil Rights Division filed a lawsuit in the U.S. District Court for the Southern District of New York against Meta Platforms, alleging that the Lookalike audience tool for targeted advertising on Facebook discriminates against users based on their race, color, religion, sex, disability, familial status, and national origin in its distribution of housing advertisements, in violation of Title VIII of the Civil Rights Act of 1968. Meta Platforms settled with the Justice Department on the same day the lawsuit was filed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spillage**
Spillage:
In industrial production, spillage is the loss of production output due to production of a series of defective or unacceptable products which must be rejected. Spillage is an often costly event which occurs in manufacturing when a process degradation or failure occurs that is not immediately detected and corrected, and in which defective or reject product therefore continues to be produced for some extended period of time.
Spillage:
Spillage results in costs due to lost production volume, excessive scrap, delayed delivery of product, and wastage of human and capital equipment resources. Minimizing the occurrence and duration of manufacturing spillage requires that closed-loop control and the associated process monitoring and metrology functions be integrated into critical steps of the overall manufacturing process. The extent to which process control is complete, and metrology is of high enough resolution to be comprehensive, determines the extent to which spillages will be prevented. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Angstrom exponent**
Angstrom exponent:
The Angstrom exponent or Ångström exponent is a parameter that describes how the optical thickness of an aerosol typically depends on the wavelength of the light.
Definition:
In 1929, the Swedish physicist Anders K. Ångström found that the optical thickness of an aerosol depends on the wavelength of light according to the power law
$$\frac{\tau_\lambda}{\tau_{\lambda_0}} = \left(\frac{\lambda}{\lambda_0}\right)^{-\alpha}$$
where τλ is the optical thickness at wavelength λ, and τλ0 is the optical thickness at the reference wavelength λ0. The parameter α is the Angstrom exponent of the aerosol.
Significance:
The Angstrom exponent is inversely related to the average size of the particles in the aerosol: the smaller the particles, the larger the exponent. For example, cloud droplets are usually large, and thus clouds have a very small Angstrom exponent (nearly zero), and the optical depth does not change with wavelength. That is why clouds appear white or grey. This relation can be used to estimate the particle size of an aerosol by measuring its optical depth at different wavelengths.
Determining the exponent:
In principle, if the optical thickness at one wavelength and the Angstrom exponent are known, the optical thickness can be computed at a different wavelength. In practice, measurements are made of the optical thickness of an aerosol layer at two different wavelengths, and the Angstrom exponent is estimated from these measurements using this formula. The aerosol optical thickness can then be derived at all other wavelengths, within the range of validity of this formula.
Determining the exponent:
For measurements of optical thickness τλ1 and τλ2 taken at two different wavelengths λ1 and λ2 respectively, the Angstrom exponent is given by
$$\alpha = -\frac{\log\left(\tau_{\lambda_1}/\tau_{\lambda_2}\right)}{\log\left(\lambda_1/\lambda_2\right)}$$
The Angstrom exponent is now routinely estimated by analyzing radiation measurements acquired on Earth Observation platforms, such as the AErosol RObotic NETwork (AERONET). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
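A short numerical sketch of this two-wavelength estimate, using made-up optical thickness values, followed by extrapolation to a third wavelength via the power law above:

```python
import math

# Made-up two-wavelength measurements (wavelengths in micrometers):
lam1, tau1 = 0.440, 0.30
lam2, tau2 = 0.870, 0.12

# Angstrom exponent from the two-wavelength formula:
alpha = -math.log(tau1 / tau2) / math.log(lam1 / lam2)
print(f"alpha = {alpha:.3f}")  # ~1.34 for these values

# Extrapolate optical thickness to another wavelength using the power law:
lam3 = 0.550
tau3 = tau1 * (lam3 / lam1) ** (-alpha)
print(f"tau({lam3} um) = {tau3:.3f}")
```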
**Fingers (game)**
Fingers (game):
Fingers or finger spoof is a drinking game where players guess the number of participating players who will keep their finger on a cup at the end of a countdown. A correct guess eliminates the player from the game and ensures they will not have to drink the cup. The last person in the game loses and must consume the cup contents. The cup could be a pint glass, pitcher, or other vessel (large enough for all players to put one finger on the rim) that is filled with a sip or small sample of all players' own beverage prior to the start of the game.
Rules and setup:
Equipment
Alcoholic beverages, typically wine, beer or mixed spirits
A pint glass, pitcher, or other vessel, but ideally a bowl.
Rules and setup:
Setup and common rules Fingers starts with a participant offering his empty or almost-empty pint glass, pitcher, or other vessel to be used as "the cup." In circles where the game is called "Scoff", the game starts with someone yelling "Scoff!", followed by players assembling around the cup. Each player pours a small amount of their own beverage into "the cup".
Rules and setup:
The game progresses in a series of turns with the first turn going to the game participant who suggested playing the game. Each turn starts with all players putting one finger on the rim of the cup. When all fingers are on the rim, the player whose turn it is announces, "three - two - one" followed by a number. The number is the player's guess at how many fingers will remain on the cup. All participating players, including the player whose turn it is, have the option to keep their finger on the cup or to remove it from the cup after the "three - two - one" count. A correct guess eliminates the player from the game (a win), an incorrect guess keeps the player active in the game.
Rules and setup:
The game progresses clockwise as each player takes their turn. The game ends when only one person remains: the loser. The loser must drink the contents of the cup. If the game is played again (a second round), the loser is the first to start the game.
Variations and other rules Two-man Fingers: a version of fingers played with only 2 players. Each player uses both index fingers (4 fingers total) to start the game. Fingers are ordered player - opponent - player - opponent. The game progresses as if 4 individuals were playing.
Rules and setup:
Balk: a balk is when the player whose turn it is starts the "three - two - one" count and does not announce, or waits too long to announce, their guess number. The player loses his / her turn if a balk occurs. There should be no gap in timing when announcing the guess number after the "three - two - one" series.
Rules and setup:
Slow Pull: a slow pull is when a participant is slow to remove, or decides late to remove, their finger from the cup (within a second). Most players will agree that counting the remaining fingers after a number is called and then deciding to remove one's finger (within a second), to cheat the currently active player, is next to impossible. For this reason, slow pulls should be considered fair game unless the pull is unreasonably delayed or there are fewer than 3-4 players remaining. All players (eliminated players included) should make the judgement call.
Rules and setup:
Social: a social is when all players take one sip of their own drink. Socials occur when everyone coincidentally removes their finger during a call.
Non-Celebration for the truly advanced: if you guess correctly and eliminate yourself from the game, you cannot show any emotion that might offend the other participants (celebrating, fist-pumping, smiling, etc.). If you do, you must apologize to the remaining players for your unwarranted celebration and re-enter the game.
Idiot Cup: when one player calls out a number but makes the result impossible by their own action (i.e., calling 0 while leaving their own finger on the cup, or calling 5 with only 4 players left, like an idiot). In this case, the player has to finish the drink.
Penalty: where a player calls a number out of turn, they must down the drink.
Finger Spoof Variations: Traditionally played in the pubs and sports clubs of Gloucestershire as an alternative to full (Three-Coin) spoof.
The game is played for any pre-agreed forfeit including purchasing a round of drinks, drinking an unpleasant drink as above, snorting snuff/mustard, etc. Penalties should also be pre-agreed.
Zero is referred to as Spoof.
No countdown occurs. Players should be alert.
False/impossible shout results in a penalty. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Advanced Distributed Learning**
Advanced Distributed Learning:
The Advanced Distributed Learning (ADL) Initiative is a US government program that conducts research and development on distributed learning and coordinates related efforts broadly across public and private organizations. ADL reports to the Defense Human Resources Activity (DHRA), under the Director, DHRA. Although it is a DoD program, ADL serves the entire US federal government, operates a global partnership network including international defense ministries and US-based academic partners, and collaborates closely with industry and academia. ADL advises the DoD and US government on emerging learning technologies, best practices for improving learning effectiveness and efficiency, and methods for enhancing interoperability. Notable ADL contributions to distributed learning include the Sharable Content Object Reference Model (SCORM), Experience API (xAPI), and the DoD Instruction 1322.26.
History:
The ADL Initiative traces its antecedents to the early 1990s, when Congress authorized and appropriated funds for the National Guard to build prototype electronic classrooms and learning networks to increase personnel's access to learning opportunities. By the mid-1990s, DoD realized the need for a more coordinated approach, and the 1996 Quadrennial Defense Review formalized this by directing development of a department-wide strategy for modernizing technology-based education and training. This strategy became the original ADL Initiative, and in 1998 the Deputy Secretary of Defense directed the Under Secretary of Defense for Personnel and Readiness (USD(P&R)), in collaboration with the Services, the Joint Staff, the Under Secretaries of Defense for Acquisition and Technology and the Comptroller, to lead ADL. The Deputy Secretary of Defense also directed the USD(P&R) to produce the department-wide policy for advanced distributed learning, develop a corresponding "master plan" to carry out the policy, and ensure sufficient programs and resources were available for the associated implementation (see the 1999 ADL Strategic Plan in Appendix 1 for more details).
By 1998, the DoD and other Federal agencies (e.g., the Department of Labor) had each established their own ADL projects, and the Office of Science and Technology Policy (OSTP) moved to consolidate these via the Federal Training Technology Initiative. Thus, following guidance from Congress, OSTP, and the National Partnership for Reinventing Government, the DoD ADL Initiative was expanded into a Federal-wide program. Specific direction for this can be found in Section 378 of Public Law 105-261, the Strom Thurmond National Defense Authorization Act for Fiscal Year 1999, which required the Secretary of Defense to develop a strategic plan for expanding distance learning initiatives, as well as in Executive Order 13111 (President William J. Clinton, 12 January 1999). The Executive Order, titled "Using Technology to Improve Training Opportunities for Federal Government Employees," established a task force and advisory committee to explore how federal training programs, initiatives, and policies can better support lifelong learning through the use of learning technologies, and to provide learning standards, specifications, and applications which can be sustained and extended to incorporate new technologies and learning science as they occur.
History:
Shortly after President Clinton signed Executive Order 13111, the Pentagon released the Department of Defense Strategic Plan for Advanced Distributed Learning (April 30, 1999) and the corresponding Department of Defense Implementation Plan for Advanced Distributed Learning (May 19, 2000). This strategy empowers the ADL Initiative to:
Influence the development/use of common industry standards
Enable acquisition of interoperable tools and content
Create a robust and dynamic network infrastructure for distribution
Enable the modernization of supporting resources
Engender cultural change to move from "classroom-centric" to "learner-centric"
Since its inception in the 1990s, the ADL Initiative has achieved several notable milestones, including the development of SCORM, ADL PlugFests, xAPI, and the Total Learning Architecture. More information about the history and products of ADL can be found in the ADL-sponsored book, Learning on Demand: ADL and the Future of e-Learning, published in 2010.
Organization:
The ADL Initiative reports to the Defense Human Resources Activity (DHRA), under the Director, DHRA. Originally, the ADL Initiative reported to the OSD Director for Readiness and Training Policy and Programs in the Under Secretary of Defense for Personnel and Readiness (USD(P&R)) chain of command. Until April 18, 2021, the ADL Initiative reported to the Deputy Assistant Secretary of Defense for Force Education and Training (DASD(FE&T)), who reported to the Assistant Secretary of Defense for Readiness (ASD(R)), who in turn reported to the Under Secretary for Personnel and Readiness (USD(P&R)) within the Office of the Secretary of Defense.
Subject Areas:
ADL uses the term "distributed learning" broadly, to refer to all network-centric learning technologies and the corresponding best practices for their use. Similarly, ADL uses the term "learning" to include education, training, operational performance support, and other forms of ad hoc, just-in-time, or self-directed learning. Within these topical areas, ADL conducts research and development (Budget Area 6.3, Advanced Technology Development), facilitates coordination, and assists with the implementation of emerging science and technologies. More precisely, ADL's work emphasizes the following six areas: [7]
e-Learning (e.g., traditional web-based courseware)
Mobile learning (m-Learning) and associated mobile performance support
Web-based virtual worlds and simulations
Learning analytics and performance modeling
Associated learning theory (e.g., pedagogy, andragogy, instructional design)
Distributed learning interoperability specifications
SCORM:
When ADL was established, the use of Learning Management Systems (LMSs) was increasing rapidly, but the content delivered through those systems remained separated (locked into silos). For example, while the Navy and the Army have standard courses with similar content, that content could not be shared and reused from one service to another because their LMSs would not allow it. The siloed nature of content delivered through LMSs was not cost-efficient, and it became one of ADL's first challenges to tackle, resulting in the development of SCORM (the Sharable Content Object Reference Model).
SCORM, which integrates a set of related technical standards, specifications, and guidelines designed to meet high-level requirements (accessible, reusable, interoperable, and durable content and systems), is arguably one of ADL's most well-known projects. SCORM content can be delivered to learners via any SCORM-conformant LMS using the same version of SCORM. Due to Department of Defense Instruction (DoDI) 1322.26, SCORM is a mature technology which has been widely adopted.
Experience API:
In 2011, ADL recognized the need for a software specification that tracks learning experiences that occur outside of a LMS and a web browser. As a result, ADL issued a Broad Agency Announcement (BAA) asking for assistance in improving SCORM. The BAA was awarded to Rustici Software, a Nashville-based software company experienced with SCORM.
Experience API:
Rustici Software conducted numerous interviews with the e-learning community to determine where improvements needed to be made and developed the research version of the Experience API specification as a result. This process was called Project Tin Can. The moniker “Tin Can API” was derived from Project Tin Can. When version 1.0 was officially released in April 2013, the specification was dubbed “xAPI” but by that time, some people already knew the specification by the original moniker. The Experience API (xAPI) allows the capture of big data on human performance, along with associated instructional content or performance context information. xAPI applies “activity streams” to tracking data and provides sub-APIs to access and store information about state and content. This enables nearly dynamic tracking of activities from any platform or software system—from traditional LMSs to mobile devices, simulations, wearables, physical beacons, and more.
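For orientation, here is a minimal sketch of an xAPI activity statement and how a client might POST it to a Learning Record Store; the endpoint, credentials and course identifiers are hypothetical, while the actor/verb/object field names and the version header follow the public xAPI specification.

```python
import json
import urllib.request

# A minimal xAPI statement: "actor verb-ed object".
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/courses/intro-101",  # hypothetical activity
               "definition": {"name": {"en-US": "Intro Course 101"}}},
}

# POST to a (hypothetical) Learning Record Store's statements endpoint.
req = urllib.request.Request(
    "https://lrs.example.com/xapi/statements",       # hypothetical LRS URL
    data=json.dumps(statement).encode(),
    headers={
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.3",         # version header per spec
        "Authorization": "Basic replace-with-credentials",  # placeholder
    },
    method="POST",
)
# urllib.request.urlopen(req)  # commented out: no real LRS at this URL
```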
Department of Defense Instruction 1322.26:
Under delegated authority, ADL stewards DoDI 1322.26, “Development, Management, and Delivery of Distributed Learning.” This DoDI provides guidance to support implementation of DoD Directive (DoDD) 1322.18, “Military Training.” | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Chiraphos**
Chiraphos:
Chiraphos is a chiral diphosphine employed as a ligand in organometallic chemistry. This bidentate ligand chelates metals via the two phosphine groups. Its name is derived from its description — being both chiral and a phosphine. As a C2-symmetric ligand, chiraphos is available in two enantiomeric forms, S,S and R,R, each with C2 symmetry.
Preparation:
Chiraphos is prepared from S,S or R,R-2,3-butanediol, which are derived from commercially available S,S or R,R-tartaric acid; the technique of using cheaply available enantiopure starting materials is known as chiral pool synthesis. The diol is tosylated and then the ditosylate is treated with lithium diphenylphosphide. The ligand was an important demonstration of how the conformation of the chelate ring can affect asymmetric induction by a metal catalyst. Prior to this work, in most chiral phosphines, e.g., DIPAMP, phosphorus was the stereogenic center. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GALNT13**
GALNT13:
Polypeptide N-acetylgalactosaminyltransferase 13 is an enzyme that in humans is encoded by the GALNT13 gene. The GALNT13 protein is a member of the UDP-N-acetyl-alpha-D-galactosamine:polypeptide N-acetylgalactosaminyltransferase (GalNAcT; EC 2.4.1.41) family, whose members initiate O-linked glycosylation of mucins (see MUC3A, MIM 158371) by the initial transfer of N-acetylgalactosamine (GalNAc) with an alpha-linkage to a serine or threonine residue. [supplied by OMIM] | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Junior showmanship**
Junior showmanship:
Junior showmanship (also called junior handling) is a sport for young people (called "Juniors") in which they exhibit their dog handling skills in an event similar to a conformation dog show. Unlike a conformation show, it is the young handlers who are judged, not their dogs.
History:
County agricultural fairs in the United States began holding livestock judging contests for members of the 4-H, a club run by state agricultural extensions for children of farm families, in the early 1900s. Showmanship, in which the child was judged on the ability to display "an animal to its greatest advantage," was a component of livestock judging. As the idea of 4-H as a youth development club, not just a club for future agriculturalists, spread around the world, horses and pet animals were added to showmanship categories. The first dog handling competition for children at a formal dog show was held in 1932 at the Westbury Kennel Club Show in Long Island, New York, in the United States. In 1933 the Westminster Kennel Club in New York offered a children's handling class, and prizes were established in the names of early promoters of children's events, Leonard Brumby, Sr., and George F. Foley. The American Kennel Club recognized Junior Showmanship as a dog show class in 1971. Today, major Junior Showmanship competition is offered worldwide through Fédération Cynologique Internationale clubs, as well as through the Kennel Club (UK), the Canadian Kennel Club, the American Kennel Club, the United Kennel Club (US), and 4-H and similar clubs. Other show-giving dog clubs and businesses may also offer Junior Showmanship events.
Purpose:
Learning sportsmanship and developing knowledge of the dog are given as the purpose for Junior Showmanship (Junior Handling) by most organizations. The Junior learns sportsmanship, ring procedures, and grooming and showing techniques specific to the dog he or she is showing, and develops a close bond with the dog. For the major show-giving bodies, Junior Showmanship can also be an apprenticeship in dog handling, preparing young people for careers in dog handling, raising, and training.
Eligibility:
In general, children and young people may compete with dogs of any breed or in some cases, mixed breed dogs. Competition is by age group, in various classification levels. Rules are specific to each show giving organization.
Eligibility:
Children as young as two years old are allowed in the ring by the United Kennel Club (US), at age four by the Canadian Kennel Club, at age six by the Kennel Club (UK), and at age nine by the American Kennel Club and the 4-H. All end eligibility at age eighteen except for the Kennel Club (UK), which allows Juniors to compete until they are 24. Fédération Cynologique Internationale clubs have similar rules.
Nature of the competition:
The Junior Showmanship competition is organized in a similar manner to a conformation dog show. The Juniors are separated into their age and experience levels, and enter the ring in order of the size of their dog. The Juniors must move their dogs around the ring in predetermined patterns according to the instructions of the judge. The judge notes whether or not the Junior follows instructions correctly and presents the dog properly according to the dog's breed or type. Dogs are examined as in a conformation show, but the emphasis is on how the Junior interacts with the dog and the judge, not on the quality of the dog.
Nature of the competition:
At the basic or novice level, children are judged on how well they follow the judge's instructions, their understanding of ring procedure and of the standard of the breed or type of dog they are showing. In some clubs, the children may be quizzed or questioned by the judge.
In close competition between advanced Juniors, judging is also based on the Junior's knowledge of his or her dog's faults, and how well they disguise the faults through skillful handling so that "what a judge observes are animals at the top of their form."
Related activities:
Junior Showmanship is a sport limited to children and young people, but many young handlers also enter adult show classes in conformation and performance (obedience, agility, hunting events, flyball, etc.) as well. Some organizations have set up separate performance event categories for junior handlers.
Judging:
As junior handling is a separate sport from regular conformation showing, judges are usually given separate training and are expected to know the rules for the sport. Westminster continues to be the "crown jewel" of the Juniors Competition. Only one winner of Best Junior Handler at this prestigious show has ever returned to judge the event. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Diamond Quadrilateral**
Diamond Quadrilateral:
The Diamond Quadrilateral is a project of the Indian Railways to establish a high-speed rail network in India. The Diamond Quadrilateral will connect the four mega cities of India, viz. Delhi, Mumbai, Kolkata and Chennai, similar to the Golden Quadrilateral highway system.
The high-speed train on the Mumbai–Ahmedabad section will be the first high-speed rail corridor to be implemented in the country. On 9 June 2014, the President of India, Pranab Mukherjee, officially announced that the government led by Prime Minister Narendra Modi would launch the Diamond Quadrilateral project of high-speed trains.
History:
Prior to the 2014 general election, the two major national parties (Bharatiya Janata Party and Indian National Congress) pledged to introduce high-speed rail. The INC pledged to connect all of India's million-plus cities by high-speed rail, whereas the BJP, which won the election, promised to build the "Diamond Quadrilateral" project, which would connect the cities of Chennai, Delhi, Kolkata, and Mumbai via high-speed rail. This project was approved as a priority for the new government in the incoming president's speech. Construction of one kilometer of high-speed railway track costs ₹100 crore (US$13 million) – ₹140 crore (US$18 million), which is 10–14 times the cost of constructing standard railway track. India's Union Council of Ministers approved Japan's proposal to build India's first high-speed railway on 10 December 2015. The planned line will run approximately 500 km (310 mi) between Mumbai and the western city of Ahmedabad, at a top speed of 320 km/h (200 mph). Under this proposal, construction began in 2017, with completion expected in 2022. The estimated cost of the project is ₹980 billion (US$12 billion), financed by a low-interest loan from Japan. Operation is officially targeted to begin in 2023, but India has announced intentions to attempt to bring the line into operation one year earlier. It will carry passengers from Ahmedabad to Mumbai in just 3 hours, with ticket fares cheaper than air travel, at about ₹2,500–₹3,000.
Current status:
As of July 2020, NHSRCL had floated almost 60% of the tenders for civil works and acquired almost 60% of the land for the first Mumbai–Ahmedabad corridor; the project deadline is December 2023.
Current status:
The National High Speed Rail Corporation Limited, the implementing body of the project, has planned seven routes: Delhi to Varanasi via Noida, Agra and Lucknow; Varanasi to Howrah via Patna; Delhi to Ahmedabad via Jaipur and Udaipur; Delhi to Amritsar via Chandigarh, Ludhiana and Jalandhar; Mumbai to Nagpur via Nasik; Mumbai to Hyderabad via Pune; and Chennai to Mysore via Bangalore.
Current status:
According to reports, the National Highways Authority of India (NHAI) will soon acquire land to lay tracks for high-speed trains along greenfield expressways, for integrated development of the country's rail transport network. To expedite the project, the Indian Railways, along with the NHAI, will begin the process of acquiring additional land.
The decision to acquire additional land was taken during a recent meeting of a group of infrastructure ministers led by Union Minister Nitin Gadkari. During the infra sector group meeting, it was decided that the NHAI will take over land acquisition and a 4-member committee was constituted to take this process forward.
The four-member task force will work out the modalities for acquiring land and sharing the cost. It may be noted that the Indian Railways is in the process of preparing the blueprint of 7 high-speed rail routes in the country.
As per reports, the Railway Board has also written to the NHAI with details of the seven high-speed rail corridors intended for bullet trains, for which detailed project reports are being prepared.
The NHAI has been asked to depute a nodal officer for this purpose, for better integration of the mammoth planning exercise. The Railways plans to run bullet trains on seven important new routes across the country. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Demand priority**
Demand priority:
Demand priority is a media-access method used in 100BaseVG, a 100 megabit per second (Mbit/s) Ethernet implementation proposed by Hewlett-Packard (HP) and AT&T Microelectronics, later standardized as IEEE 802.12. Demand priority shifts network access control from the workstation to a hub. This access method works with a star topology.
In this method, a node that wishes to transmit indicates this wish to the hub and also requests high- or regular-priority service for its transmission. After it obtains permission, the node begins transmitting to the hub.
The hub is responsible for passing the transmission on to the destination node; that is, the hub is responsible for providing access to the network. A hub will pass high priority transmissions through immediately, and will pass regular-priority transmissions through as the opportunity arises.
By letting the hub manage access, the architecture is able to guarantee required bandwidths and requested service priority to particular applications or nodes. It also can guarantee that the network can be scaled up (enlarged) without loss of bandwidth.
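The hub's arbitration logic can be sketched in a few lines of Python. The class below is an illustrative toy model, not the IEEE 802.12 algorithm (which additionally round-robins among ports, among other refinements): pending high-priority requests are always served before regular ones, and each frame is forwarded only to its destination.

```python
from collections import deque

class DemandPriorityHub:
    """Toy model of demand priority: nodes request permission to transmit at a
    chosen service level, and the hub forwards each frame to its destination,
    always serving pending high-priority requests first."""

    def __init__(self):
        self.high = deque()      # pending high-priority requests
        self.regular = deque()   # pending regular-priority requests

    def request(self, node, dest, frame, high_priority=False):
        # A node signals its wish to transmit and the requested priority.
        (self.high if high_priority else self.regular).append((node, dest, frame))

    def service_one(self):
        # High-priority traffic passes through immediately; regular-priority
        # traffic is forwarded as the opportunity arises.
        queue = self.high if self.high else self.regular
        if not queue:
            return None
        node, dest, frame = queue.popleft()
        return f"forwarded {node} -> {dest} only (not broadcast)"

hub = DemandPriorityHub()
hub.request("A", "C", b"bulk data")
hub.request("B", "D", b"voice sample", high_priority=True)
print(hub.service_one())  # B's high-priority frame is served first
print(hub.service_one())  # then A's regular-priority frame
```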
Demand priority:
Demand priority helps increase bandwidth in several ways. A node does not need to keep checking whether the network is idle before transmitting; in conventional Ethernet implementations, a wire pair is dedicated to this task. By making network checking unnecessary, demand priority frees a wire pair. This is fortunate, because the 100BaseVG specifications use quartet signalling, which needs four available wire pairs. Heavy traffic, by contrast, can effectively bring standard Ethernet networks to a standstill, because nodes spend most of their time trying to access the network.
Demand priority:
With demand priority, the hub needs to pass a transmission on only to its destination, so that overall network traffic is decreased. This means there is more bandwidth available for heavy network traffic. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Retraction (topology)**
Retraction (topology):
In topology, a branch of mathematics, a retraction is a continuous mapping from a topological space into a subspace that preserves the position of all points in that subspace. The subspace is then called a retract of the original space. A deformation retraction is a mapping that captures the idea of continuously shrinking a space into a subspace.
An absolute neighborhood retract (ANR) is a particularly well-behaved type of topological space. For example, every topological manifold is an ANR. Every ANR has the homotopy type of a very simple topological space, a CW complex.
Definitions:
Retract Let X be a topological space and A a subspace of X. Then a continuous map r: X → A is a retraction if the restriction of r to A is the identity map on A; that is, r(a) = a for all a in A. Equivalently, denoting by ι: A ↪ X the inclusion, a retraction is a continuous map r such that r ∘ ι = id_A, that is, the composition of r with the inclusion is the identity of A. Note that, by definition, a retraction maps X onto A. A subspace A is called a retract of X if such a retraction exists. For instance, any non-empty space retracts to a point in the obvious way (the constant map yields a retraction). If X is Hausdorff, then A must be a closed subset of X.
Definitions:
If r: X → A is a retraction, then the composition ι ∘ r is an idempotent continuous map from X to X. Conversely, given any idempotent continuous map s: X → X, we obtain a retraction onto the image of s by restricting the codomain.
Deformation retract and strong deformation retract A continuous map F: X × [0,1] → X is a deformation retraction of a space X onto a subspace A if, for every x in X and a in A, F(x,0) = x, F(x,1) ∈ A, and F(a,1) = a.
In other words, a deformation retraction is a homotopy between a retraction and the identity map on X. The subspace A is called a deformation retract of X. A deformation retraction is a special case of a homotopy equivalence.
A retract need not be a deformation retract. For instance, having a single point as a deformation retract of a space X would imply that X is path connected (and in fact that X is contractible).
Note: An equivalent definition of deformation retraction is the following. A continuous map r: X → A is a deformation retraction if it is a retraction and its composition with the inclusion is homotopic to the identity map on X. In this formulation, a deformation retraction carries with it a homotopy between the identity map on X and the composition ι ∘ r.
Definitions:
If, in the definition of a deformation retraction, we add the requirement that F(a,t) = a for all t in [0, 1] and a in A, then F is called a strong deformation retraction. In other words, a strong deformation retraction leaves points in A fixed throughout the homotopy. (Some authors, such as Hatcher, take this as the definition of deformation retraction.) As an example, the n-sphere Sⁿ is a strong deformation retract of ℝⁿ⁺¹ \ {0}; as strong deformation retraction one can choose the map F(x,t) = ((1 − t) + t/‖x‖)x.
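The defining properties are quick to verify for this map; the following routine check (spelled out here for convenience, not taken from the source) confirms it is a strong deformation retraction:

```latex
% F(x,t) = ((1-t) + t/\lVert x\rVert)\,x  on  \mathbb{R}^{n+1} \setminus \{0\}
\begin{align*}
  F(x,0) &= x && \text{(the homotopy starts at the identity)} \\
  F(x,1) &= \frac{x}{\lVert x\rVert} \in S^{n} && \text{(it ends inside the subspace)} \\
  F(a,t) &= \bigl((1-t) + t\bigr)\,a = a && \text{for } a \in S^{n}, \text{ since } \lVert a\rVert = 1.
\end{align*}
```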
Definitions:
Cofibration and neighborhood deformation retract A map f: A → X of topological spaces is a (Hurewicz) cofibration if it has the homotopy extension property for maps to any space. This is one of the central concepts of homotopy theory. A cofibration f is always injective, in fact a homeomorphism to its image. If X is Hausdorff (or a compactly generated weak Hausdorff space), then the image of a cofibration f is closed in X.
Definitions:
Among all closed inclusions, cofibrations can be characterized as follows. The inclusion of a closed subspace A in a space X is a cofibration if and only if A is a neighborhood deformation retract of X, meaning that there is a continuous map u: X → [0,1] with A = u⁻¹(0) and a homotopy H: X × [0,1] → X such that H(x,0) = x for all x ∈ X, H(a,t) = a for all a ∈ A and t ∈ [0,1], and H(x,1) ∈ A if u(x) < 1. For example, the inclusion of a subcomplex in a CW complex is a cofibration.
Properties:
One basic property of a retract A of X (with retraction r: X → A) is that every continuous map f: A → Y has at least one extension g: X → Y, namely g = f ∘ r.
Deformation retraction is a particular case of homotopy equivalence. In fact, two spaces are homotopy equivalent if and only if they are both homeomorphic to deformation retracts of a single larger space.
Any topological space that deformation retracts to a point is contractible and vice versa. However, there exist contractible spaces that do not strongly deformation retract to a point.
No-retraction theorem:
The boundary of the n-dimensional ball, that is, the (n−1)-sphere, is not a retract of the ball. (See Brouwer fixed-point theorem § A proof using homology or cohomology.)
Absolute neighborhood retract (ANR):
A closed subset X of a topological space Y is called a neighborhood retract of Y if X is a retract of some open subset of Y that contains X. Let C be a class of topological spaces, closed under homeomorphisms and passage to closed subsets. Following Borsuk (starting in 1931), a space X is called an absolute retract for the class C, written AR(C), if X is in C and whenever X is a closed subset of a space Y in C, X is a retract of Y. A space X is an absolute neighborhood retract for the class C, written ANR(C), if X is in C and whenever X is a closed subset of a space Y in C, X is a neighborhood retract of Y. Various classes C, such as normal spaces, have been considered in this definition, but the class M of metrizable spaces has been found to give the most satisfactory theory. For that reason, the notations AR and ANR by themselves are used in this article to mean AR(M) and ANR(M). A metrizable space is an AR if and only if it is contractible and an ANR. By Dugundji, every locally convex metrizable topological vector space V is an AR; more generally, every nonempty convex subset of such a vector space V is an AR. For example, any normed vector space (complete or not) is an AR. More concretely, Euclidean space ℝⁿ, the unit cube Iⁿ, and the Hilbert cube I^ω are ARs.
Absolute neighborhood retract (ANR):
ANRs form a remarkable class of "well-behaved" topological spaces. Among their properties are: Every open subset of an ANR is an ANR.
Absolute neighborhood retract (ANR):
By Hanner, a metrizable space that has an open cover by ANRs is an ANR. (That is, being an ANR is a local property for metrizable spaces.) It follows that every topological manifold is an ANR. For example, the sphere Sⁿ is an ANR but not an AR (because it is not contractible). In infinite dimensions, Hanner's theorem implies that every Hilbert cube manifold as well as the (rather different, for example not locally compact) Hilbert manifolds and Banach manifolds are ANRs.
Absolute neighborhood retract (ANR):
Every locally finite CW complex is an ANR. An arbitrary CW complex need not be metrizable, but every CW complex has the homotopy type of an ANR (which is metrizable, by definition).
Absolute neighborhood retract (ANR):
Every ANR X is locally contractible in the sense that for every open neighborhood U of a point x in X, there is an open neighborhood V of x contained in U such that the inclusion V ↪ U is homotopic to a constant map. A finite-dimensional metrizable space is an ANR if and only if it is locally contractible in this sense. For example, the Cantor set is a compact subset of the real line that is not an ANR, since it is not even locally connected.
Absolute neighborhood retract (ANR):
Counterexamples: Borsuk found a compact subset of ℝ³ that is an ANR but not strictly locally contractible. (A space is strictly locally contractible if every open neighborhood U of each point x contains a contractible open neighborhood of x.) Borsuk also found a compact subset of the Hilbert cube that is locally contractible (as defined above) but not an ANR.
Absolute neighborhood retract (ANR):
Every ANR has the homotopy type of a CW complex, by Whitehead and Milnor. Moreover, a locally compact ANR has the homotopy type of a locally finite CW complex; and, by West, a compact ANR has the homotopy type of a finite CW complex. In this sense, ANRs avoid all the homotopy-theoretic pathologies of arbitrary topological spaces. For example, the Whitehead theorem holds for ANRs: a map of ANRs that induces an isomorphism on homotopy groups (for all choices of base point) is a homotopy equivalence. Since ANRs include topological manifolds, Hilbert cube manifolds, Banach manifolds, and so on, these results apply to a large class of spaces.
Absolute neighborhood retract (ANR):
Many mapping spaces are ANRs. In particular, let Y be an ANR with a closed subspace A that is an ANR, and let X be any compact metrizable space with a closed subspace B. Then the space (Y, A)^(X, B) of maps of pairs (X, B) → (Y, A) (with the compact-open topology on the mapping space) is an ANR. It follows, for example, that the loop space of any CW complex has the homotopy type of a CW complex.
Absolute neighborhood retract (ANR):
By Cauty, a metrizable space X is an ANR if and only if every open subset of X has the homotopy type of a CW complex.
Absolute neighborhood retract (ANR):
By Cauty, there is a metric linear space V (meaning a topological vector space with a translation-invariant metric) that is not an AR. One can take V to be separable and an F-space (that is, a complete metric linear space). (By Dugundji's theorem above, V cannot be locally convex.) Since V is contractible and not an AR, it is also not an ANR. By Cauty's theorem above, V has an open subset U that is not homotopy equivalent to a CW complex. Thus there is a metrizable space U that is strictly locally contractible but is not homotopy equivalent to a CW complex. It is not known whether a compact (or locally compact) metrizable space that is strictly locally contractible must be an ANR. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Collagen, type VIII, alpha 1**
Collagen, type VIII, alpha 1:
Collagen alpha-1(VIII) chain is a protein that in humans is encoded by the COL8A1 gene. This gene encodes one of the two alpha chains of type VIII collagen. The gene product is a short chain collagen and a major component of the basement membrane of the corneal endothelium. The type VIII collagen fibril can be either a homo- or a heterotrimer. Alternatively spliced transcript variants encoding the same isoform have been observed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dendrimer**
Dendrimer:
Dendrimers are highly ordered, branched polymeric molecules. Synonymous terms for dendrimer include arborols and cascade molecules. Typically, dendrimers are symmetric about the core, and often adopt a spherical three-dimensional morphology. The word dendron is also encountered frequently. A dendron usually contains a single chemically addressable group called the focal point or core. Although dendrons and dendrimers are distinct species, the terms are typically encountered interchangeably.
Dendrimer:
The first dendrimers were made by divergent synthesis approaches by Fritz Vögtle in 1978, R.G. Denkewalter at Allied Corporation in 1981, Donald Tomalia at Dow Chemical in 1983 and in 1985, and by George R. Newkome in 1985. In 1990 a convergent synthetic approach was introduced by Craig Hawker and Jean Fréchet. Dendrimer popularity then greatly increased, resulting in more than 5,000 scientific papers and patents by the year 2005.
Properties:
Dendritic molecules are characterized by structural perfection. Dendrimers and dendrons are monodisperse and usually highly symmetric, spherical compounds. The field of dendritic molecules can be roughly divided into low-molecular weight and high-molecular weight species. The first category includes dendrimers and dendrons, and the latter includes dendronized polymers, hyperbranched polymers, and the polymer brush.
Properties:
The properties of dendrimers are dominated by the functional groups on the molecular surface; however, there are examples of dendrimers with internal functionality. Dendritic encapsulation of functional molecules allows for the isolation of the active site, a structure that mimics that of active sites in biomaterials. Also, it is possible to make dendrimers water-soluble, unlike most polymers, by functionalizing their outer shell with charged species or other hydrophilic groups. Other controllable properties of dendrimers include toxicity, crystallinity, tecto-dendrimer formation, and chirality. Dendrimers are also classified by generation, which refers to the number of repeated branching cycles that are performed during synthesis. For example, if a dendrimer is made by convergent synthesis (see below), and the branching reactions are performed onto the core molecule three times, the resulting dendrimer is considered a third-generation dendrimer. Each successive generation results in a dendrimer roughly twice the molecular weight of the previous generation. Higher-generation dendrimers also have more exposed functional groups on the surface, which can later be used to customize the dendrimer for a given application.
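As a rough numerical illustration of this generational growth, the sketch below assumes an idealized, defect-free dendrimer whose core carries four branches and whose chain-end count doubles each generation (as in PAMAM-type dendrimers); the function name and assumptions are illustrative, not taken from the text.

```python
def surface_groups(generation, core_multiplicity=4):
    """Idealized surface-group count: `core_multiplicity` branches at the core,
    doubling once per full generation (defect-free PAMAM-type growth)."""
    return core_multiplicity * 2 ** generation

for g in range(8):
    print(f"G-{g}: ~{surface_groups(g)} surface groups")
# G-4: ~64 groups; by G-7 (~512) the outer shell is densely packed, which is
# consistent with the description of high-generation dendrimers as dense particles.
```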
Synthesis:
One of the first dendrimers, the Newkome dendrimer, was synthesized in 1985. This macromolecule is also commonly known by the name arborol. The first two generations of arborol were prepared through a divergent route (discussed below). The synthesis is started by nucleophilic substitution of 1-bromopentane by triethyl sodiomethanetricarboxylate in dimethylformamide and benzene. The ester groups were then reduced by lithium aluminium hydride to a triol in a deprotection step. Activation of the chain ends was achieved by converting the alcohol groups to tosylate groups with tosyl chloride and pyridine. The tosyl groups then served as leaving groups in another reaction with the tricarboxylate, forming generation two. Further repetition of the two steps leads to higher generations of arborol. Poly(amidoamine), or PAMAM, is perhaps the most well known dendrimer. The core of PAMAM is a diamine (commonly ethylenediamine), which is reacted with methyl acrylate, and then another ethylenediamine to make the generation-0 (G-0) PAMAM. Successive reactions create higher generations, which tend to have different properties. Lower generations can be thought of as flexible molecules with no appreciable inner regions, while medium-sized (G-3 or G-4) dendrimers do have internal space that is essentially separated from the outer shell. Very large (G-7 and greater) dendrimers can be thought of more like solid particles with very dense surfaces due to the structure of their outer shell. The functional groups on the surface of PAMAM dendrimers are ideal for click chemistry, which gives rise to many potential applications. Dendrimers can be considered to have three major portions: a core, an inner shell, and an outer shell. Ideally, a dendrimer can be synthesized to have different functionality in each of these portions to control properties such as solubility, thermal stability, and attachment of compounds for particular applications. Synthetic processes can also precisely control the size and number of branches on the dendrimer. There are two defined methods of dendrimer synthesis, divergent synthesis and convergent synthesis. However, because the actual reactions consist of many steps needed to protect the active site, it is difficult to synthesize dendrimers using either method. This makes dendrimers hard to make and very expensive to purchase. At this time, only a few companies sell dendrimers; Polymer Factory Sweden AB commercializes biocompatible bis-MPA dendrimers, and Dendritech is the only kilogram-scale producer of PAMAM dendrimers. NanoSynthons, LLC, from Mount Pleasant, Michigan, USA, produces PAMAM dendrimers and other proprietary dendrimers.
Synthesis:
Divergent methods The dendrimer is assembled from a multifunctional core, which is extended outward by a series of reactions, commonly a Michael reaction. Each step of the reaction must be driven to full completion to prevent mistakes in the dendrimer, which can cause trailing generations (some branches are shorter than the others). Such impurities can impact the functionality and symmetry of the dendrimer, but are extremely difficult to purify out because the relative size difference between perfect and imperfect dendrimers is very small.
Synthesis:
Convergent methods Dendrimers are built from small molecules that end up at the surface of the sphere; reactions proceed inward, and the branches are eventually attached to a core. This method makes it much easier to remove impurities and shorter branches along the way, so that the final dendrimer is more monodisperse. However, dendrimers made this way are not as large as those made by divergent methods, because crowding due to steric effects along the core is limiting.
Synthesis:
Click chemistry Dendrimers have been prepared via click chemistry, employing Diels-Alder reactions, thiol-ene and thiol-yne reactions, and azide-alkyne reactions. There are ample avenues that can be opened by exploring this chemistry in dendrimer synthesis.
Applications:
Applications of dendrimers typically involve conjugating other chemical species to the dendrimer surface that can function as detecting agents (such as a dye molecule), affinity ligands, targeting components, radioligands, imaging agents, or pharmaceutically active compounds. Dendrimers have very strong potential for these applications because their structure can lead to multivalent systems. In other words, one dendrimer molecule has hundreds of possible sites to couple to an active species. Researchers have aimed to utilize the hydrophobic environments of the dendritic media to conduct photochemical reactions that generate synthetically challenging products. Carboxylic acid- and phenol-terminated water-soluble dendrimers were synthesized to establish their utility in drug delivery, as well as for conducting chemical reactions in their interiors. This might allow researchers to attach both targeting molecules and drug molecules to the same dendrimer, which could reduce negative side effects of medications on healthy cells. Dendrimers can also be used as solubilizing agents. Since their introduction in the mid-1980s, this novel class of dendrimer architecture has been a prime candidate for host–guest chemistry. Dendrimers with a hydrophobic core and hydrophilic periphery have been shown to exhibit micelle-like behavior and have container properties in solution. The use of dendrimers as unimolecular micelles was proposed by Newkome in 1985. This analogy highlighted the utility of dendrimers as solubilizing agents. The majority of drugs available in the pharmaceutical industry are hydrophobic in nature, and this property in particular creates major formulation problems. This drawback of drugs can be ameliorated by dendrimeric scaffolding, which can be used to encapsulate as well as to solubilize the drugs because of the capability of such scaffolds to participate in extensive hydrogen bonding with water. Dendrimer labs are trying to exploit this solubilizing trait, to explore dendrimers for drug delivery and to target specific carriers. For dendrimers to be used in pharmaceutical applications, they must surmount the required regulatory hurdles to reach market. One dendrimer scaffold designed to achieve this is the polyethoxyethylglycinamide (PEE-G) dendrimer. This dendrimer scaffold has been designed and shown to have high HPLC purity, stability, aqueous solubility, and low inherent toxicity.
Applications:
Drug delivery Approaches for delivering unaltered natural products using polymeric carriers are of widespread interest. Dendrimers have been explored for the encapsulation of hydrophobic compounds and for the delivery of anticancer drugs. The physical characteristics of dendrimers, including their monodispersity, water solubility, encapsulation ability, and large number of functionalizable peripheral groups, make these macromolecules appropriate candidates for drug delivery vehicles.
Applications:
Role of dendrimer chemical modifications in drug delivery Dendrimers are particularly versatile drug delivery devices due to the wide range of chemical modifications that can be made to increase in vivo suitability and allow for site-specific targeted drug delivery.
Applications:
Drug attachment to the dendrimer may be accomplished by (1) covalent attachment or conjugation to the external surface of the dendrimer, forming a dendrimer prodrug, (2) ionic coordination to charged outer functional groups, or (3) micelle-like encapsulation of a drug via a dendrimer-drug supramolecular assembly. In the case of a dendrimer prodrug structure, linking of a drug to a dendrimer may be direct or linker-mediated, depending on the desired release kinetics. Such a linker may be pH-sensitive, enzyme-catalyzed, or a disulfide bridge. The wide range of terminal functional groups available for dendrimers allows for many different types of linker chemistries, providing yet another tunable component of the system. Key parameters to consider for linker chemistry are (1) the release mechanism upon arrival at the target site, whether that be within the cell or in a certain organ system, (2) drug-dendrimer spacing, so as to prevent lipophilic drugs from folding into the dendrimer, and (3) linker degradability and post-release trace modifications on drugs. Polyethylene glycol (PEG) is a common modification for dendrimers to modify their surface charge and circulation time. Surface charge can influence the interactions of dendrimers with biological systems; for example, amine-terminated dendrimers have a propensity to interact with anionically charged cell membranes. Certain in vivo studies have shown polycationic dendrimers to be cytotoxic through membrane permeabilization, a phenomenon that could be partially mitigated via the addition of PEGylation caps on amine groups, resulting in lower cytotoxicity and lower red blood cell hemolysis. Additionally, studies have found that PEGylation of dendrimers results in higher drug loading, slower drug release, longer circulation times in vivo, and lower toxicity in comparison to counterparts without PEG modifications. Numerous targeting moieties have been used to modify dendrimer biodistribution and allow for targeting to specific organs. For example, folate receptors are overexpressed in tumor cells and are therefore promising targets for localized drug delivery of chemotherapeutics. Folic acid conjugation to PAMAM dendrimers has been shown to increase targeting and decrease off-target toxicity, while maintaining on-target cytotoxicity of chemotherapeutics such as methotrexate, in mouse models of cancer. Antibody-mediated targeting of dendrimers to cell targets has also shown promise for targeted drug delivery. As epidermal growth factor receptors (EGFRs) are often overexpressed in brain tumors, EGFRs are a convenient target for site-specific drug delivery. The delivery of boron to cancerous cells is important for effective neutron capture therapy, a cancer treatment which requires a large concentration of boron in cancerous cells and a low concentration in healthy cells. A boronated dendrimer conjugated with a monoclonal antibody drug that targets EGFRs was used in rats to successfully deliver boron to cancerous cells. Modifying nanoparticle dendrimers with peptides has also been successful for targeted destruction of colorectal (HCT-116) cancer cells in a co-culture scenario. Targeting peptides can be used to achieve site- or cell-specific delivery, and it has been shown that these peptides increase in targeting specificity when paired with dendrimers. Specifically, gemcitabine-loaded YIGSR-CMCht/PAMAM, a unique kind of dendrimer nanoparticle, induces targeted death of these cancer cells.
This is performed via selective interaction of the dendrimer with laminin receptors. Peptide dendrimers may be employed in the future to precisely target cancer cells and deliver chemotherapeutic agents. The cellular uptake mechanism of dendrimers can also be tuned using chemical targeting modifications. Non-modified PAMAM-G4 dendrimer is taken up into activated microglia by fluid-phase endocytosis. Conversely, mannose modification of hydroxyl PAMAM-G4 dendrimers was able to change the mechanism of internalization to mannose-receptor (CD206) mediated endocytosis. Additionally, mannose modification was able to change the biodistribution in the rest of the body in rabbits.
Applications:
Pharmacokinetics and pharmacodynamics Dendrimers have the potential to completely change the pharmacokinetic and pharmacodynamic (PK/PD) profiles of a drug. As carriers, the PK/PD is no longer determined by the drug itself but by the dendrimer’s localization, drug release, and dendrimer excretion. ADME properties are very highly tunable by varying dendrimer size, structure, and surface characteristics. While G9 dendrimers biodistribute very heavily to the liver and spleen, G6 dendrimers tend to biodistribute more broadly. As molecular weight increases, urinary clearance and plasma clearance decrease while terminal half-life increases.
Applications:
Routes of delivery To increase patient compliance with prescribed treatment, delivery of drugs orally is often preferred to other routes of drug administration. However, the oral bioavailability of many drugs tends to be very low. Dendrimers can be used to increase the solubility and stability of orally administered drugs and increase drug penetration through the intestinal membrane. The bioavailability of PAMAM dendrimers conjugated to a chemotherapeutic has been studied in mice; it was found that around 9% of dendrimer administered orally was found intact in circulation, and that minimal dendrimer degradation occurred in the gut. Intravenous dendrimer delivery shows promise for gene vectors delivering genes to various organs in the body, and even tumors. One study found that through intravenous injection, a combination of PPI dendrimers and gene complexes resulted in gene expression in the liver, and another study showed that a similar injection regressed the growth of tumors in observed animals. The primary obstacle to transdermal drug delivery is the epidermis. Hydrophobic drugs have a very difficult time penetrating the skin layer, as they partition heavily into skin oils. Recently, PAMAM dendrimers have been used as delivery vehicles for NSAIDs to increase hydrophilicity, allowing greater drug penetration. These modifications act as polymeric transdermal enhancers, allowing drugs to more easily penetrate the skin barrier.
Applications:
Dendrimers may also act as new ophthalmic vehicles for drug delivery, which are different from the polymers currently used for this purpose. A study by Vanndamme and Bobeck used PAMAM dendrimers as ophthalmic delivery vehicles in rabbits for two model drugs and measured the ocular residence time of this delivery to be comparable to, and in some cases greater than, that of current bioadhesive polymers used in ocular delivery. This result indicates that administered drugs were more active and had increased bioavailability when delivered via dendrimers than their free-drug counterparts. Additionally, photo-curable, drug-eluting dendrimer-hyaluronic acid hydrogels have been used as corneal sutures applied directly to the eye. These hydrogel sutures have shown efficacy as a medical device in rabbit models that surpasses traditional sutures and minimizes corneal scarring.
Applications:
Brain drug delivery Dendrimer drug delivery has also shown major promise as a potential solution for many traditionally difficult drug delivery problems. In the case of drug delivery to the brain, dendrimers are able to take advantage of the EPR effect and blood-brain barrier (BBB) impairment to cross the BBB effectively in vivo. For example, hydroxyl-terminated PAMAM dendrimers possess an intrinsic targeting ability to inflamed macrophages in the brain, verified using fluorescently labeled neutral generation dendrimers in a rabbit model of cerebral palsy. This intrinsic targeting has enabled drug delivery in a variety of conditions, ranging from cerebral palsy and other neuroinflammatory disorders to traumatic brain injury and hypothermic circulatory arrest, across a variety of animal models ranging from mice and rabbits to canines. Dendrimer uptake into the brain correlates with severity of inflammation and BBB impairment and it is believed that the BBB impairment is the key driving factor allowing dendrimer penetration. Localization is heavily skewed towards activated microglia. Dendrimer-conjugated N-acetyl cysteine has shown efficacy in vivo as an anti-inflammatory at more than 1000-fold lower dose than free drug on a drug basis, reversing the phenotype of cerebral palsy, Rett syndrome, macular degeneration and other inflammatory diseases.
Applications:
Clinical trials Starpharma, an Australian pharmaceutical company, has multiple products that have either already been approved for use or are in the clinical trial phase. SPL7013, also known as astodrimer sodium, is a hyperbranched polymer used in Starpharma’s VivaGel line of pharmaceuticals that is currently approved to treat bacterial vaginosis and prevent the spread of HIV, HPV, and HSV in Europe, Southeast Asia, Japan, Canada, and Australia. Due to SPL7013’s broad antiviral action, it has recently been tested by the company as a potential drug to treat SARS-CoV-2. The company states preliminary in-vitro studies show high efficacy in preventing SARS-CoV-2 infection in cells.
Applications:
Gene delivery and transfection The ability to deliver pieces of DNA to the required parts of a cell includes many challenges. Current research is being performed to find ways to use dendrimers to traffic genes into cells without damaging or deactivating the DNA. To maintain the activity of DNA during dehydration, the dendrimer/DNA complexes were encapsulated in a water-soluble polymer, and then deposited on or sandwiched in functional polymer films with a fast degradation rate to mediate gene transfection. Based on this method, PAMAM dendrimer/DNA complexes were used to encapsulate functional biodegradable polymer films for substrate mediated gene delivery. Research has shown that the fast-degrading functional polymer has great potential for localized transfection.
Applications:
Sensors Dendrimers have potential applications in sensors. Studied systems include proton or pH sensors using poly(propylene imine), cadmium-sulfide/polypropylenimine tetrahexacontaamine dendrimer composites to detect fluorescence signal quenching, and poly(propylenamine) first and second generation dendrimers for metal cation photodetection amongst others. Research in this field is vast and ongoing due to the potential for multiple detection and binding sites in dendritic structures.
Applications:
Nanoparticles Dendrimers also are used in the synthesis of monodisperse metallic nanoparticles. Poly(amidoamine), or PAMAM, dendrimers are utilized for their tertiary amine groups at the branching points within the dendrimer. Metal ions are introduced to an aqueous dendrimer solution, and the metal ions form a complex with the lone pair of electrons present at the tertiary amines. After complexation, the ions are reduced to their zerovalent states to form a nanoparticle that is encapsulated within the dendrimer. These nanoparticles range in width from 1.5 to 10 nanometers and are called dendrimer-encapsulated nanoparticles.
Applications:
Other applications Given the widespread use of pesticides, herbicides and insecticides in modern farming, dendrimers are also being used by companies to help improve the delivery of agrochemicals, to enable healthier plant growth and to help fight plant diseases. Dendrimers are also being investigated for use as blood substitutes. Their steric bulk surrounding a heme-mimetic centre significantly slows degradation compared to free heme, and prevents the cytotoxicity exhibited by free heme.
Applications:
The dendritic functional polymer polyamidoamine (PAMAM) is used to prepare core-shell structures, i.e., microcapsules, which are utilized in the formulation of self-healing coatings of conventional and renewable origin.
Applications:
Drug delivery Dendrimers in drug-delivery systems are an example of various host–guest interactions. The interaction between host and guest (the dendrimer and the drug, respectively) can be either hydrophobic or covalent. A hydrophobic interaction between host and guest is considered "encapsulation," while covalent interactions are considered conjugation. The use of dendrimers in medicine has been shown to improve drug delivery by increasing the solubility and bioavailability of the drug. In conjunction, dendrimers can increase both cellular uptake and targeting ability, and decrease drug resistance. The solubility of various nonsteroidal anti-inflammatory drugs (NSAIDs) increases when they are encapsulated in PAMAM dendrimers. This study showed that the enhancement of NSAID solubility is due to the electrostatic interactions between the surface amine groups in PAMAM and the carboxyl groups found in NSAIDs. Contributing to the increase in solubility are the hydrophobic interactions between the aromatic groups in the drugs and the interior cavities of the dendrimer. When a drug is encapsulated within a dendrimer, its physical and physiological properties remain unaltered, including non-specificity and toxicity. However, when the dendrimer and the drug are covalently linked together, the conjugate can be used for specific tissue targeting and controlled release rates. Covalent conjugation of multiple drugs on dendrimer surfaces can pose a problem of insolubility. This principle is also being studied for cancer treatment applications. Several groups have encapsulated anti-cancer medications such as camptothecin, methotrexate, and doxorubicin. Results from this research have shown that dendrimers have increased aqueous solubility, slowed release rates, and possibly controlled cytotoxicity of the drugs. Cisplatin has been conjugated to PAMAM dendrimers with the same pharmacological results as listed above, but the conjugation also helped to accumulate cisplatin in solid tumors upon intravenous administration. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Iron(III) fluoride**
Iron(III) fluoride:
Iron(III) fluoride, also known as ferric fluoride, refers to inorganic compounds with the formula FeF3(H2O)x where x = 0 or 3. These compounds are mainly of interest to researchers, unlike the related iron(III) chloride. Anhydrous iron(III) fluoride is white, whereas the hydrated forms are light pink.
Chemical and physical properties:
Iron(III) fluoride is a thermally robust, antiferromagnetic solid consisting of high spin Fe(III) centers, which is consistent with the pale colors of all forms of this material. Both anhydrous iron(III) fluoride as well as its hydrates are hygroscopic.
Structure:
The anhydrous form adopts a simple structure with octahedral Fe(III)F6 centres interconnected by linear Fe-F-Fe linkages. In the language of crystallography, the crystals are classified as rhombohedral with an R-3c space group. The structural motif is similar to that seen in ReO3. Although the solid is nonvolatile, it evaporates at high temperatures; the gas at 987 °C consists of FeF3, a planar molecule of D3h symmetry with three equal Fe-F bonds, each of length 176.3 pm. At very high temperatures, it decomposes to give FeF2 and F2. Two crystalline forms—or more technically, polymorphs—of FeF3·3H2O are known, the α and β forms. These are prepared by evaporation of an HF solution containing Fe3+ at room temperature (α form) and above 50 °C (β form). The space group of the β form is P4/m, and the α form maintains a P4/m space group with a J6 substructure. The solid α form is unstable and converts to the β form within days. The two forms are distinguished by the difference in quadrupole splitting in their Mössbauer spectra.
Preparation, occurrence, reactions:
Anhydrous iron(III) fluoride is prepared by treating virtually any anhydrous iron compound with fluorine. More practically, and like most metal fluorides, it is prepared by treating the corresponding chloride with hydrogen fluoride: FeCl3 + 3 HF → FeF3 + 3 HCl. It also forms as a passivating film upon contact between iron (and steel) and hydrogen fluoride. The hydrates crystallize from aqueous hydrofluoric acid. The material is a fluoride acceptor; with xenon hexafluoride it forms [FeF4][XeF5]. Pure FeF3 is not yet known among minerals. However, the hydrated form is known as the very rare fumarolic mineral topsøeite. Generally a trihydrate, its chemistry is slightly more complex: FeF[F0.5(H2O)0.5]4·H2O.
Applications:
The primary commercial use of iron(III) fluoride is in the production of ceramics. Some cross-coupling reactions are catalyzed by ferric fluoride-based compounds. Specifically, the coupling of biaryl compounds is catalyzed by hydrated iron(II) fluoride complexes of N-heterocyclic carbene ligands. Other metal fluorides also catalyze similar reactions. Iron(III) fluoride has also been shown to catalyze the chemoselective addition of cyanide to aldehydes to give cyanohydrins.
Safety:
The anhydrous material is a powerful dehydrating agent. The formation of ferric fluoride may have been responsible for the explosion of a cylinder of hydrogen fluoride gas. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ATtiny microcontroller comparison chart**
ATtiny microcontroller comparison chart:
ATtiny (also known as TinyAVR) is a subfamily of the popular 8-bit AVR microcontrollers, which typically has fewer features, fewer I/O pins, and less memory than other AVR series chips. The first members of this family were released in 1999 by Atmel (later acquired by Microchip Technology in 2016).
Features:
ATtiny microcontrollers specifically exclude various common features, such as a USB peripheral, a DMA controller, a crypto engine, or an external memory bus.
The following table summarizes common features of the ATtiny microcontrollers, for easy comparison. This table is not meant to be an unabridged feature list.
Features:
Notes Package column - the number after the dash is the number of pins on the package. DIP packages in this table are 0.3 inches (7.62 mm) row-to-row. SOwww means a SOIC package with a case width of 'www' in thousandths of an inch. Though some package types are known by more than one name, a common name was chosen to make it easier to compare packages.
Features:
UART/I²C/SPI columns - a green cell means a dedicated peripheral; a yellow cell (*) means a multi-feature peripheral that is selected by setting configuration bits. Most USART peripherals support at minimum a choice between UART and SPI, whereas some support additional choices, such as LIN, IrDA, or RS-485.
Timers column - more recent families have wider timers. RTT is a 16-bit Real Time Timer that is driven by a 32.768 kHz clock, though Microchip calls it RTC for Real Time Counter (easily confused with Real Time Clock).
ADC pins column - the total number of analog channels that are accessible via pins that multiplex into the ADC input. Most parts have one ADC; a few have two ADCs.
Features:
Pgm/Dbg column - flash programming and debugging protocols: HVPP means the High Voltage Parallel Programming (12 V) protocol, HVSP means the High Voltage Serial Programming (12 V) protocol, and ISP means the In-System Programming protocol, which uses SPI to program the internal flash. TPI is the Tiny Programming Interface. dW means the debugWIRE protocol. UPDI means the Unified Program and Debug Interface protocol (newest). Abbreviations: TWI: Many of Atmel's microcontrollers contain built-in support for interfacing to a two-wire bus, called the Two-Wire Interface. This is essentially the same thing as the I²C interface by Philips, but that term is avoided in Atmel's documentation due to trademark issues.
Features:
USI: Universal Serial Interface (not to be confused with USB). The USI is a multi-purpose hardware communication module. With appropriate software support, it can be used to implement an SPI, I²C or UART interface. USART peripherals have more features than USI peripherals.
Timeline:
The following table lists each ATtiny microcontroller by the first release date of each datasheet.
Development boards:
The following are ATtiny development boards sold by Microchip Technology: ATtiny104 Xplained Nano ATtiny416 Xplained Nano ATtiny817 AVR Parrot ATtiny817 Xplained Mini ATtiny817 Xplained Pro ATtiny3217 Xplained Pro | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Memory bandwidth**
Memory bandwidth:
Memory bandwidth is the rate at which data can be read from or stored into a semiconductor memory by a processor. Memory bandwidth is usually expressed in units of bytes/second, though this can vary for systems with natural data sizes that are not a multiple of the commonly used 8-bit bytes.
Memory bandwidth:
Memory bandwidth that is advertised for a given memory or system is usually the maximum theoretical bandwidth. In practice the observed memory bandwidth will be less than (and is guaranteed not to exceed) the advertised bandwidth. A variety of computer benchmarks exist to measure sustained memory bandwidth using a variety of access patterns. These are intended to provide insight into the memory bandwidth that a system should sustain on various classes of real applications.
Measurement conventions:
There are three different conventions for defining the quantity of data transferred in the numerator of "bytes/second": The bcopy convention: counts the amount of data copied from one location in memory to another location per unit time. For example, copying 1 million bytes from one location in memory to another location in memory in one second would be counted as 1 million bytes per second. The bcopy convention is self-consistent, but is not easily extended to cover cases with more complex access patterns, for example three reads and one write.
Measurement conventions:
The Stream convention: sums the amount of data that the application code explicitly reads plus the amount of data that the application code explicitly writes. Using the previous 1 million byte copy example, the STREAM bandwidth would be counted as 1 million bytes read plus 1 million bytes written in one second, for a total of 2 million bytes per second. The STREAM convention is most directly tied to the user code, but may not count all the data traffic that the hardware is actually required to perform.
Measurement conventions:
The hardware convention: counts the actual amount of data read or written by the hardware, whether the data motion was explicitly requested by the user code or not. Using the same 1 million byte copy example, the hardware bandwidth on computer systems with a write-allocate cache policy would include an additional 1 million bytes of traffic, because the hardware reads the target array from memory into cache before performing the stores. This gives a total of 3 million bytes per second actually transferred by the hardware. The hardware convention is most directly tied to the hardware, but may not represent the minimum amount of data traffic required to implement the user's code. For example, some computer systems have the ability to avoid write-allocate traffic using special instructions, leading to the possibility of misleading comparisons of bandwidth based on different amounts of data traffic performed.
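For a quick concreteness check, the helper below (an illustrative sketch with invented names, not a benchmark) computes the 1-million-byte copy example under all three conventions:

```python
def copy_bandwidths(bytes_copied, seconds, write_allocate=True):
    """Bytes/second reported for a simple memory-to-memory copy under the
    three counting conventions described above."""
    bcopy = bytes_copied / seconds                 # data copied, counted once
    stream = 2 * bytes_copied / seconds            # explicit reads + explicit writes
    factor = 3 if write_allocate else 2            # write-allocate adds a read of the target
    hardware = factor * bytes_copied / seconds
    return bcopy, stream, hardware

# The 1-million-byte, one-second copy from the text:
print(copy_bandwidths(1_000_000, 1.0))  # (1000000.0, 2000000.0, 3000000.0)
```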
Bandwidth computation and nomenclature:
The nomenclature differs across memory technologies, but for commodity DDR SDRAM, DDR2 SDRAM, and DDR3 SDRAM memory, the total bandwidth is the product of: Base DRAM clock frequency. Number of data transfers per clock: two, in the case of "double data rate" (DDR, DDR2, DDR3, DDR4) memory.
Bandwidth computation and nomenclature:
Memory bus (interface) width: Each DDR, DDR2, or DDR3 memory interface is 64 bits wide. Those 64 bits are sometimes referred to as a "line." Number of interfaces: Modern personal computers typically use two memory interfaces (dual-channel mode) for an effective 128-bit bus width. For example, a computer with dual-channel memory and one DDR2-800 module per channel running at 400 MHz would have a theoretical maximum memory bandwidth of: 400,000,000 clocks per second × 2 lines per clock × 64 bits per line × 2 interfaces = 102,400,000,000 (102.4 billion) bits per second (in bytes, 12,800 MB/s or 12.8 GB/s). This theoretical maximum memory bandwidth is referred to as the "burst rate," which may not be sustainable.
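The same multiplication can be packaged as a small helper; the example values reproduce the dual-channel DDR2-800 figure above (the function name and structure are illustrative):

```python
def peak_bandwidth_bytes_per_s(clock_hz, transfers_per_clock, bus_bits, interfaces):
    """Theoretical maximum ("burst") memory bandwidth in bytes per second."""
    return clock_hz * transfers_per_clock * bus_bits * interfaces // 8

# Dual-channel DDR2-800: 400 MHz base clock, 2 transfers/clock (DDR), 64-bit bus.
bw = peak_bandwidth_bytes_per_s(400_000_000, 2, 64, 2)
print(f"{bw:,} bytes/s")  # 12,800,000,000 bytes/s = 12.8 GB/s
```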
Bandwidth computation and nomenclature:
The naming convention for DDR, DDR2 and DDR3 modules specifies either a maximum speed (e.g., DDR2-800) or a maximum bandwidth (e.g., PC2-6400). The speed rating (800) is not the maximum clock speed, but twice that (because of the doubled data rate). The specified bandwidth (6400) is the maximum megabytes transferred per second using a 64-bit width. In a dual-channel mode configuration, this is effectively a 128-bit width. Thus, the memory configuration in the example can be simplified as: two DDR2-800 modules running in dual-channel mode.
Bandwidth computation and nomenclature:
Two memory interfaces per module is a common configuration for PC system memory, but single-channel configurations are common in older, low-end, or low-power devices. Some personal computers and most modern graphics cards use more than two memory interfaces (e.g., four for Intel's LGA 2011 platform and the NVIDIA GeForce GTX 980). High-performance graphics cards running many interfaces in parallel can attain very high total memory bus width (e.g., 384 bits in the NVIDIA GeForce GTX TITAN and 512 bits in the AMD Radeon R9 290X using six and eight 64-bit interfaces respectively).
ECC bits:
In systems with error-correcting memory (ECC), the additional width of the interfaces (typically 72 rather than 64 bits) is not counted in bandwidth specifications because the extra bits are unavailable to store user data. ECC bits are better thought of as part of the memory hardware rather than as information stored in that hardware. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cyclic language**
Cyclic language:
In computer science, more particularly in formal language theory, a cyclic language is a set of strings that is closed with respect to repetition, root, and cyclic shift.
Definition:
If A is a set of symbols, and A* is the set of all strings built from symbols in A, then a string set L ⊆ A* is called a formal language over the alphabet A.
The language L is called cyclic if ∀w∈A*. ∀n>0. w ∈ L ⇔ w^n ∈ L, and ∀v,w∈A*. vw ∈ L ⇔ wv ∈ L, where w^n denotes the n-fold repetition of the string w, and vw denotes the concatenation of the strings v and w.: Def.1
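Since the definition quantifies over all strings, it cannot be verified exhaustively, but both closure conditions can be spot-checked on a finite sample of strings. A minimal Python sketch (the membership predicate and the bounds are illustrative choices):

```python
from itertools import product

def cyclic_shifts(w):
    return {w[i:] + w[:i] for i in range(max(1, len(w)))}

def looks_cyclic(member, alphabet, max_len=6, max_power=3):
    """Spot-check the two closure conditions on all strings up to max_len.
    `member` is a membership predicate; this is a finite sanity check, not a proof."""
    words = [''.join(p) for n in range(max_len + 1) for p in product(alphabet, repeat=n)]
    for w in words:
        for n in range(1, max_power + 1):
            if member(w) != member(w * n):   # w in L  <=>  w^n in L
                return False
        for s in cyclic_shifts(w):
            if member(w) != member(s):       # vw in L <=>  wv in L
                return False
    return True

# The set of strings over {a, b} with equally many a's and b's is cyclic:
# repetition and rotation both preserve the letter counts.
print(looks_cyclic(lambda w: w.count('a') == w.count('b'), 'ab'))  # True
```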
Examples:
For example, using the alphabet A = {a, b}, the language L obtained as the cyclic-shift closure of M = { a^(n_1)b^(n_1) a^(n_2)b^(n_2) ... a^(n_k)b^(n_k) : n_i ≥ 0 } is cyclic, but not regular.: Exm.2 However, L is context-free, since M is, and context-free languages are closed under circular shift. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mothing**
Mothing:
Mothing or moth-watching is a form of wildlife observation where moths are observed, both for recreation and for citizen science activities. It is analogous to birdwatching, but for moths. Many bird observatories also run moth traps.
Techniques:
Mothing is frequently done with the aid of attractants, such as sugary solutions painted on tree trunks, or with lights. There are also moth traps, designed specifically for mothing, available in both do-it-yourself and commercial versions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Least distance of distinct vision**
Least distance of distinct vision:
In optometry, the least distance of distinct vision (LDDV) or the reference seeing distance (RSD) is the closest someone with "normal" vision (20/20 vision) can comfortably look at something. In other words, LDDV is the minimum comfortable distance between the naked human eye and a visible object.
The magnifying power (M) of a lens with focal length f (in millimeters), when the image is viewed by the naked human eye, can be calculated as M = 250/f, where 250 mm is the conventional value of the least distance of distinct vision. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
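A quick numerical check of the formula above:

```python
def magnifying_power(focal_length_mm):
    """M = 250 / f: magnification of a simple magnifier, relative to viewing
    the object at the least distance of distinct vision (250 mm)."""
    return 250.0 / focal_length_mm

print(magnifying_power(50.0))  # 5.0: a 50 mm lens magnifies about 5x
print(magnifying_power(25.0))  # 10.0
```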
**Battery tester**
Battery tester:
A battery tester is an electronic device intended for testing the state of an electric battery. Testers range from simple devices that measure the charge actually present in the cells and/or the battery's voltage output, to more comprehensive instruments that assess the battery's condition, namely its capacity for accumulating charge and any possible flaws affecting the battery's performance and safety.
Simple battery testers:
The simplest battery tester is a DC ammeter, which indicates the battery's charge rate.
DC voltmeters can be used to estimate a battery's state of charge, provided that its nominal voltage is known.
Integrated battery testers:
There are many types of integrated battery testers, each corresponding to a specific condition-testing procedure for the type of battery being tested, such as the “421” test for lead-acid vehicle batteries. Their common principle is the empirical fact that, after a given current has been applied to the battery for a given number of seconds, the resulting voltage output, compared with that of a healthy battery, indicates the battery's overall condition. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
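The common principle described above can be sketched in a few lines; the 0.90 cutoff below is purely illustrative and does not come from the “421” test or any other standard.

```python
def load_test_verdict(loaded_voltage_v, healthy_voltage_v, good_fraction=0.90):
    """Crude condition estimate: after applying a given current for a given
    number of seconds, compare the resulting voltage with a healthy battery's.
    The threshold is an illustrative placeholder, not a standardized value."""
    return "good" if loaded_voltage_v / healthy_voltage_v >= good_fraction else "degraded"

# Example figures only: a 12 V lead-acid battery sagging to 9.0 V under load,
# versus 10.5 V for a known healthy unit.
print(load_test_verdict(9.0, 10.5))  # degraded
```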
**Dow Jones Transportation Average**
Dow Jones Transportation Average:
The Dow Jones Transportation Average (DJTA, also called the "Dow Jones Transports") is a U.S. stock market index from S&P Dow Jones Indices of the transportation sector, and is the most widely recognized gauge of the American transportation sector. It is the oldest stock index still in use, even older than its better-known relative, the Dow Jones Industrial Average (DJIA).
Components:
The index is a running average of the stock prices of twenty transportation corporations, with each stock's price weighted to adjust for stock splits and other factors. As a result, it can change at any time the markets are open. The figure mentioned in news reports is usually the figure derived from the prices at the close of the market for the day.
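A price-weighted average of this kind is maintained with a divisor that is adjusted whenever a split or substitution occurs, so the published value does not jump. A toy Python sketch of the mechanism (prices and divisor are made up; the real divisor is maintained by S&P Dow Jones Indices):

```python
def dow_style_average(prices, divisor):
    """A Dow-style average: sum of component prices over a divisor."""
    return sum(prices) / divisor

def adjust_divisor(prices_before, prices_after, old_divisor):
    """Pick a new divisor so the average is unchanged across a split."""
    return sum(prices_after) / (sum(prices_before) / old_divisor)

# Three stocks; the first does a 2-for-1 split (100 -> 50):
before, after, d0 = [100.0, 40.0, 60.0], [50.0, 40.0, 60.0], 3.0
d1 = adjust_divisor(before, after, d0)
print(dow_style_average(before, d0), dow_style_average(after, d1))  # both ~66.67
```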
Components:
Changes in the index's composition are rare, and generally occur only after corporate acquisitions or other dramatic shifts in a component's core business. Should such an event require that one component be replaced, the entire index is reviewed. As of December 14, 2021, the index consists of 20 companies. Recent changes include the following: Alaska Air Group replaced AMR Corporation on December 2, 2011, after AMR Corp. filed for bankruptcy protection. Effective October 30, 2012, Kirby Corp. replaced Overseas Shipholding Group, Inc. Effective October 1, 2014, Avis Budget Group Inc. replaced GATX Corporation. On October 15, 2015, American Airlines Group replaced Con-way. Effective December 14, 2021, Old Dominion Freight Line replaced Kansas City Southern.
History:
The average was created on July 3, 1884, by Charles Dow, co-founder of Dow Jones & Company, as part of the "Customer's Afternoon Letter". At its inception, it consisted of eleven transportation companies (nine railroads and two non-rail companies): Chicago, Milwaukee and St. Paul Railway; Chicago and North Western Railway; Delaware, Lackawanna and Western Railroad; Lake Shore and Michigan Southern Railway; Louisville and Nashville Railroad; Missouri Pacific Railway; New York Central Railroad; Northern Pacific Railroad preferred stock; Pacific Mail Steamship Company (not a railroad); Union Pacific Railway; and Western Union (not a railroad). As a result of the dominating presence of railroads, the Transportation Average was often referred to as "rails" in financial discussions in the early and middle part of the 20th century.
Use in Dow theory:
The Transportation Average is an important factor in Dow theory.
Price history:
In 1964, the index first broke 200, slightly over where it was in 1929.
In 1983, the index first broke 500. In 1987, the index broke 1000. It closed at 2146.89 on March 9, 2009, reaching a low coincident with those of several other indices; this was slightly above its low of 1942.19 on March 11, 2003.
Price history:
The index broke above the mid-5000s to begin a run of record highs on January 15, 2013, at a time when the better-known Industrials stood about 5% below all-time highs achieved more than five years earlier. By May, the Industrials and all other major indexes except the NASDAQ group were making all-time highs, including the Transports, which reached new closing and intraday records above the 6,500 level. On October 24, 2013, the Transports closed at 7,022.79, for its first close above 7,000 points. It closed the year at a record high of 7,400.57. On May 27, 2014, it first closed above 8,000 points. The index closed above 9000 on November 10, 2014. At the close of 2014, the index hit 9139.92. At the close of 2015, the index hit 7508.71, a loss of 17.85% on the year.
Annual returns:
The following table shows the price return of the Dow Jones Transportation Average, which was calculated back to 1924.
Investing:
There is no fund that tracks this index. There are funds that have a similar behavior, such as iShares Transportation Average ETF (NYSE Arca: IYT). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hit the ball twice**
Hit the ball twice:
Hit the ball twice, or "double-hit", is a method of dismissal in the sport of cricket. Its occurrence in modern cricket is exceptionally rare.
Definition:
Law 34.1 of the Laws of Cricket states: 34.1 Out Hit the ball twice 34.1.1 The striker is out Hit the ball twice if, while the ball is in play, it strikes any part of his/her person or is struck by his/her bat and, before the ball has been touched by a fielder, the striker wilfully strikes it again with his/her bat or person, other than a hand not holding the bat, except for the sole purpose of guarding his/her wicket.
Definition:
34.1.2 For the purpose of this Law ‘struck’ or ‘strike’ shall include contact with the person of the striker.
A player may hit the ball a second time in order to prevent it from hitting his/her stumps, but not with a hand that is not in contact with the bat, and not if doing so prevents a catch from being taken (in which case the batsman would be out obstructing the field). The bowler does not get credit for the wicket.
History:
Cricket is often considered to be a rather gentle pastime but it has a history of extreme violence. In its early days, before the modern rules had universal effect, batsmen could go to almost any lengths to avoid being out. They could obstruct the fielders and they could hit the ball as many times as necessary to preserve their wicket. This had fatal consequences on more than one occasion and, ultimately, strict rules were introduced to prevent the batsman from physically attacking the fielders.
History:
In 1622, several parishioners of Boxgrove, near Chichester in West Sussex, were prosecuted for playing cricket in a churchyard on Sunday 5 May. There were three reasons for the prosecution: one was that it contravened a local by-law; another reflected concern about church windows, which may or may not have been broken; the third was that "a little childe had like to have her braines beaten out with a cricket batt". The latter situation arose because the rules at the time allowed the batsman to hit the ball more than once, so fielding near the batsman was very hazardous, as two later incidents confirm.
History:
In 1624, a fatality occurred at Horsted Keynes in East Sussex when a fielder called Jasper Vinall was struck on the head by the batsman, Edward Tye, who was trying to hit the ball a second time to avoid being caught. Vinall is thus the earliest known cricketing fatality. The matter was recorded in a coroner's court, which returned a verdict of misadventure. In 1647, another fatality was recorded at Selsey, West Sussex, when a fielder called Henry Brand was hit on the head by a batsman trying to hit the ball a second time. It is not known when the rules were changed to outlaw striking the ball a second time or when the offence of obstructing the field was introduced, but both those rules were clearly stated in the 1744 codification of the Laws of Cricket, which were drawn up by the London Cricket Club and are believed to be based on a much earlier code that has been lost. The first definite record of a batsman being dismissed for hitting the ball twice occurred in the Hampshire v Kent match at Windmill Down on 13–15 July 1786. Tom Sueter of Hampshire, who had scored 3, was the player in question, as recorded in Scores and Biographies.
Unusual dismissal:
An example of the dismissal occurred in 1906 when John King, playing for Leicestershire against Surrey at The Oval, tried to score a run after playing the ball twice to avoid being bowled. Had he not tried to score a run, he would not have been out. Based on the history of the game, this method of dismissal is the second rarest after timed out, having been recorded on only twenty-one occasions, although in modern times timed out has become more common. One relatively recent example of a batsman being out "Hit the ball twice" was Kurt Wilkinson's dismissal when playing for Barbados against Rest of Leeward Islands in the 2002–03 Red Stripe Bowl. The dismissal was controversial as there was doubt as to whether Wilkinson had "wilfully" struck the ball twice as required under the relevant law of cricket. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Water hole (radio)**
Water hole (radio):
The waterhole, or water hole, is an especially quiet band of the electromagnetic spectrum between 1420 and 1662 megahertz, corresponding to wavelengths of 21 and 18 centimeters, respectively. It is a popular observing band for radio telescopes in radio astronomy. The strongest hydroxyl radical spectral line radiates at 18 centimeters, and atomic hydrogen at 21 centimeters (the hydrogen line). These two species, which combine to form water, are widespread in interstellar gas, which means this gas tends to absorb radio noise at these frequencies. Therefore, the spectrum between these frequencies forms a relatively "quiet" channel in the interstellar radio noise background.
Water hole (radio):
Bernard M. Oliver, who coined the term in 1971, theorized that the waterhole would be an obvious band for communication with extraterrestrial intelligence, hence the name, which is a pun: in English, a watering hole is a vernacular reference to a common place to meet and talk. Several programs involved in the search for extraterrestrial intelligence, including SETI@home, search in the waterhole radio frequencies. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Galactic Emission Mapping**
Galactic Emission Mapping:
The Galactic Emission Mapping survey (GEM) is an international project with the goal of making a precise map of our galaxy's electromagnetic emission at low frequencies (radio and microwaves).
Description of the project:
The GEM radio telescope measures the radio emission of our galaxy at five frequencies between 408 MHz and 10 GHz, from different places on Earth. These data will be used to calibrate other telescopes, most notably the Planck Surveyor, and will provide the means to filter the cyclotron radiation and the free-free radiation from other maps, so that the only radiation left on the map is the cosmic microwave background.
Description of the project:
The telescope is under construction at Pampilhosa da Serra, Portugal, but the receiver has already made measurements in Cachoeira Paulista (Brazil), in Antarctica, in Bishop (U.S.), in Villa de Leyva (Colombia) and in Tenerife (Canary Islands). The main reflector is a parabolic dish 5.5 m in diameter. The telescope was designed and is operated by an international collaboration coordinated by the University of California, Berkeley and by the Lawrence Berkeley National Laboratory, under the guidance of George Smoot, who was awarded the Nobel Prize in Physics in 2006.
Description of the project:
In Brazil, the radio telescope is under the responsibility of the Instituto Nacional de Pesquisas Espaciais (National Institute of Space Research), with the participation of the astrophysics group of the Universidade Federal de Itajubá (Itajubá Federal University). Portugal joined the project in 2005 through the Instituto de Telecomunicações of Aveiro (Telecommunications Institute of Aveiro), which is responsible for the planning and construction of the radio telescope.
Description of the project:
GEM in Portugal: scanning process. In Portugal the radio telescope will perform scans by rotating on its base at a speed greater than one rotation per minute, thereby averaging out the error fluctuations caused by water vapour in the atmosphere. This scanning process will provide an important contribution to the data processing.
Description of the project:
Telescope. A ground shield will be built to avoid contamination of the signal by thermal radiation coming from below the horizon, to reflect side lobes to the sky, and to reduce the noise reaching the receiver from diffraction at the edges of the reflector. It will consist of an aluminium grid surrounding the radio telescope, 10 meters wide but only 8 meters high because it will be inclined towards the exterior.
Description of the project:
The edges will be curved with a radius larger than ¼ of the wavelength so that diffraction is reduced.
Location. The antenna is located at Pampilhosa da Serra at an altitude of 800 m above sea level. This location was chosen because it is surrounded by a mountain range peaking at about 1000 m above sea level, which gives a natural "shielding" from the electromagnetic noise of the neighboring cities.
Description of the project:
The same features that made this location a good choice also created additional problems, since many of the necessary infrastructures had to be prepared and installed. The telescope foundations were studied by the Department of Civil Engineering of the Universidade de Aveiro, and the city hall of Pampilhosa da Serra donated 120 tons of concrete. A new connection to the electric grid was made, sized with the transformer in mind to avoid noise at the observed frequencies; this was necessary because the wavelength of the observed radiation is close to the size of the transformer. A small meteorological station was also installed to measure wind intensity and help protect the telescope against wind damage.
Description of the project:
A second telescope is planned on the same site, to study solar phenomena. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nonlinear eigenproblem**
Nonlinear eigenproblem:
In mathematics, a nonlinear eigenproblem, sometimes nonlinear eigenvalue problem, is a generalization of the (ordinary) eigenvalue problem to equations that depend nonlinearly on the eigenvalue. Specifically, it refers to equations of the form M(λ)x = 0, where x ≠ 0 is a vector, and M is a matrix-valued function of the number λ. The number λ is known as the (nonlinear) eigenvalue, the vector x as the (nonlinear) eigenvector, and (λ, x) as the eigenpair. The matrix M(λ) is singular at an eigenvalue λ.
Definition:
In the discipline of numerical linear algebra the following definition is typically used. Let Ω ⊆ C, and let M: Ω → C^(n×n) be a function that maps scalars to matrices. A scalar λ ∈ C is called an eigenvalue, and a nonzero vector x ∈ C^n is called a right eigenvector, if M(λ)x = 0. Moreover, a nonzero vector y ∈ C^n is called a left eigenvector if y^H M(λ) = 0^H, where the superscript H denotes the Hermitian transpose. The definition of the eigenvalue is equivalent to det(M(λ)) = 0, where det(·) denotes the determinant. The function M is usually required to be a holomorphic function of λ (in some domain Ω).
Definition:
In general, M(λ) could be a linear map, but most commonly it is a finite-dimensional, usually square, matrix.
Definition:
Definition: The problem is said to be regular if there exists a z ∈ Ω such that det(M(z)) ≠ 0. Otherwise it is said to be singular. Definition: An eigenvalue λ is said to have algebraic multiplicity k if k is the smallest integer such that the k-th derivative of det(M(z)) with respect to z is nonzero at λ; in formulas, d^k det(M(z))/dz^k |_(z=λ) ≠ 0 but d^ℓ det(M(z))/dz^ℓ |_(z=λ) = 0 for ℓ = 0, 1, 2, …, k−1. Definition: The geometric multiplicity of an eigenvalue λ is the dimension of the nullspace of M(λ).
Special cases:
The following examples are special cases of the nonlinear eigenproblem.
The (ordinary) eigenvalue problem: M(λ) = A − λI.
The generalized eigenvalue problem: M(λ) = A − λB.
The quadratic eigenvalue problem: M(λ) = A_0 + λA_1 + λ²A_2 (see the sketch following this list).
The polynomial eigenvalue problem: M(λ) = Σ_(i=0)^m λ^i A_i.
The rational eigenvalue problem: M(λ) = Σ_(i=0)^(m_1) λ^i A_i + Σ_(i=1)^(m_2) B_i r_i(λ), where the r_i(λ) are rational functions.
The delay eigenvalue problem: M(λ) = −λI + A_0 + Σ_(i=1)^m A_i e^(−τ_i λ), where τ_1, τ_2, …, τ_m are given scalars, known as delays.
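As a concrete illustration, the quadratic case can be reduced to an ordinary generalized eigenproblem of twice the size by a companion linearization, one of the standard solution routes. A minimal sketch using NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eig(A0, A1, A2):
    """Solve (A0 + lam*A1 + lam^2*A2) x = 0 via companion linearization
    to the generalized eigenproblem  L z = lam B z  with z = [x; lam*x]."""
    n = A0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    L = np.block([[Z, I], [-A0, -A1]])
    B = np.block([[I, Z], [Z, A2]])
    lam, z = eig(L, B)
    return lam, z[:n, :]   # the top block of z is the eigenvector x

# Scalar sanity check: 2 + 3*lam + lam^2 = 0 has roots -1 and -2.
lam, _ = quadratic_eig(np.array([[2.0]]), np.array([[3.0]]), np.array([[1.0]]))
print(np.sort(lam.real))   # [-2. -1.]
```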
Jordan chains:
Definition: Let (λ_0, x_0) be an eigenpair. A tuple of vectors (x_0, x_1, …, x_(r−1)) ∈ C^n × C^n × ⋯ × C^n is called a Jordan chain if Σ_(k=0)^ℓ (1/k!) M^((k))(λ_0) x_(ℓ−k) = 0 for ℓ = 0, 1, …, r−1, where M^((k))(λ_0) denotes the k-th derivative of M with respect to λ, evaluated at λ = λ_0. The vectors x_0, x_1, …, x_(r−1) are called generalized eigenvectors, r is called the length of the Jordan chain, and the maximal length of a Jordan chain starting with x_0 is called the rank of x_0. Theorem: A tuple of vectors (x_0, x_1, …, x_(r−1)) ∈ C^n × C^n × ⋯ × C^n is a Jordan chain if and only if the function M(λ)χ_ℓ(λ) has a root at λ = λ_0 of multiplicity at least ℓ for ℓ = 0, 1, …, r−1, where the vector-valued function χ_ℓ(λ) is defined as χ_ℓ(λ) = Σ_(k=0)^ℓ x_k (λ − λ_0)^k.
Mathematical software:
The eigenvalue solver package SLEPc contains C-implementations of many numerical methods for nonlinear eigenvalue problems.
The NLEVP collection of nonlinear eigenvalue problems is a MATLAB package containing many nonlinear eigenvalue problems with various properties. The FEAST eigenvalue solver is a software package for standard eigenvalue problems as well as nonlinear eigenvalue problems, designed from density-matrix representation in quantum mechanics combined with contour integration techniques.
The MATLAB toolbox NLEIGS contains an implementation of fully rational Krylov with a dynamically constructed rational interpolant.
The MATLAB toolbox CORK contains an implementation of the compact rational Krylov algorithm that exploits the Kronecker structure of the linearization pencils.
The MATLAB toolbox AAA-EIGS contains an implementation of CORK with rational approximation by set-valued AAA.
The MATLAB toolbox RKToolbox (Rational Krylov Toolbox) contains implementations of the rational Krylov method for nonlinear eigenvalue problems as well as features for rational approximation. The Julia package NEP-PACK contains many implementations of various numerical methods for nonlinear eigenvalue problems, as well as many benchmark problems.
The review paper of Güttel & Tisseur contains MATLAB code snippets implementing basic Newton-type methods and contour integration methods for nonlinear eigenproblems.
Eigenvector nonlinearity:
Eigenvector nonlinearity is a related, but different, form of nonlinearity that is sometimes studied. In this case the function M maps vectors to matrices, or sometimes Hermitian matrices to Hermitian matrices. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CARMENES survey**
CARMENES survey:
The CARMENES survey (Calar Alto high-Resolution search for M-dwarfs with Exoearths with Near-infrared and optical Échelle Spectrographs) is a project to examine approximately 300 M-dwarf stars for signs of exoplanets with the CARMENES instrument on the Spanish Calar Alto Observatory's 3.5 m telescope. Operating since 2016, it aims to find Earth-sized exoplanets of around 2 M_Earth (Earth masses) using Doppler spectroscopy (also called the radial velocity method). More than 20 exoplanets have been found through CARMENES, among them Teegarden b, considered one of the most potentially habitable exoplanets. Another potentially habitable planet found is GJ 357 d. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Triple DES**
Triple DES:
In cryptography, Triple DES (3DES or TDES), officially the Triple Data Encryption Algorithm (TDEA or Triple DEA), is a symmetric-key block cipher which applies the DES cipher algorithm three times to each data block. The Data Encryption Standard's (DES) 56-bit key is no longer considered adequate in the face of modern cryptanalytic techniques and supercomputing power. CVE-2016-2183, released in 2016, disclosed a major security vulnerability in the DES and 3DES encryption algorithms. This CVE, combined with the inadequate key size of DES and 3DES, led NIST to deprecate DES and 3DES for new applications in 2017, and for all applications by the end of 2023. It has been replaced with the more secure, more robust AES.
Triple DES:
While the government and industry standards abbreviate the algorithm's name as TDES (Triple DES) and TDEA (Triple Data Encryption Algorithm), RFC 1851 referred to it as 3DES from the time it first promulgated the idea, and this name has since come into wide use by most vendors, users, and cryptographers.
History:
In 1978, a triple encryption method using DES with two 56-bit keys was proposed by Walter Tuchman; in 1981 Merkle and Hellman proposed a more secure triple key version of 3DES with 112 bits of security.
Standards:
The Triple Data Encryption Algorithm is variously defined in several standards documents: RFC 1851, The ESP Triple DES Transform (approved in 1995); ANSI ANS X9.52-1998, Triple Data Encryption Algorithm Modes of Operation (approved in 1998, withdrawn in 2008); FIPS PUB 46-3, Data Encryption Standard (DES) (approved in 1999, withdrawn in 2005); NIST Special Publication 800-67 Revision 2, Recommendation for the Triple Data Encryption Algorithm (TDEA) Block Cipher (approved in 2017); and ISO/IEC 18033-3:2010, Part 3: Block ciphers (approved in 2005).
Algorithm:
The original DES cipher's key size of 56 bits was generally sufficient when that algorithm was designed, but the availability of increasing computational power made brute-force attacks feasible. Triple DES provides a relatively simple method of increasing the key size of DES to protect against such attacks, without the need to design a completely new block cipher algorithm.
Algorithm:
A naive approach to increase the strength of a block encryption algorithm with short key length (like DES) would be to use two keys (K1, K2) instead of one, and encrypt each block twice: ciphertext = E_K2(E_K1(plaintext)). If the original key length is n bits, one would hope this scheme provides security equivalent to using a key 2n bits long. Unfortunately, this approach is vulnerable to the meet-in-the-middle attack: given a known plaintext pair (x, y) such that y = E_K2(E_K1(x)), one can recover the key pair (K1, K2) in 2^(n+1) steps, instead of the 2^(2n) steps one would expect from an ideally secure algorithm with 2n bits of key.
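The attack can be demonstrated end to end on a toy cipher. The 8-bit "cipher" below is an arbitrary keyed byte permutation standing in for DES (so n = 8 here); the point is the shape of the attack: tabulate the forward half, then meet in the middle, at a cost of roughly 2·2^n instead of 2^(2n).

```python
def toy_encrypt(k, x):    # keyed byte permutation; NOT DES, illustration only
    return ((x ^ k) * 167 + k) % 256

def toy_decrypt(k, y):
    return (((y - k) * pow(167, -1, 256)) % 256) ^ k

def meet_in_the_middle(x, y):
    """Find all (k1, k2) with y = E_k2(E_k1(x)) in ~2*2^8 cipher calls."""
    forward = {}
    for k1 in range(256):                       # tabulate E_k1(x)
        forward.setdefault(toy_encrypt(k1, x), []).append(k1)
    hits = []
    for k2 in range(256):                       # meet from the ciphertext side
        for k1 in forward.get(toy_decrypt(k2, y), []):
            hits.append((k1, k2))
    return hits   # candidate pairs; more known (x, y) pairs narrow them down

x, (k1, k2) = 0x3A, (111, 222)
y = toy_encrypt(k2, toy_encrypt(k1, x))
print((k1, k2) in meet_in_the_middle(x, y))    # True
```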
Algorithm:
Therefore, Triple DES uses a "key bundle" that comprises three DES keys, K1, K2 and K3, each of 56 bits (excluding parity bits). The encryption algorithm is: ciphertext = E_K3(D_K2(E_K1(plaintext))).
That is, DES encrypt with K1, DES decrypt with K2, then DES encrypt with K3. Decryption is the reverse: plaintext = D_K1(E_K2(D_K3(ciphertext))).
That is, decrypt with K3, encrypt with K2, then decrypt with K1. Each triple encryption encrypts one block of 64 bits of data.
In each case the middle operation is the reverse of the first and last. This improves the strength of the algorithm when using keying option 2 and provides backward compatibility with DES with keying option 3.
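A sketch of the EDE construction and its reversal, using toy keyed byte permutations in place of real single-DES primitives so that the example runs; a real implementation would substitute an actual DES library.

```python
def E(k, b):   # toy stand-in for single-DES encryption (NOT DES)
    return ((b ^ k) * 89 + k) % 256

def D(k, b):   # inverse of E
    return (((b - k) * pow(89, -1, 256)) % 256) ^ k

def tdes_encrypt(k1, k2, k3, b):    # ciphertext = E_K3(D_K2(E_K1(plaintext)))
    return E(k3, D(k2, E(k1, b)))

def tdes_decrypt(k1, k2, k3, b):    # plaintext = D_K1(E_K2(D_K3(ciphertext)))
    return D(k1, E(k2, D(k3, b)))

p = 0x5C
assert tdes_decrypt(1, 2, 3, tdes_encrypt(1, 2, 3, p)) == p  # decryption reverses EDE
assert tdes_encrypt(7, 7, 7, p) == E(7, p)   # keying option 3 degenerates to single "DES"
```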
Keying options:
The standards define three keying options:
Keying option 1: All three keys are independent. Sometimes known as 3TDEA or triple-length keys. This is the strongest option, with 3 × 56 = 168 independent key bits. It is still vulnerable to the meet-in-the-middle attack, but the attack requires 2^(2×56) steps.
Keying option 2: K1 and K2 are independent, and K3 = K1. Sometimes known as 2TDEA or double-length keys. This provides a shorter key length of 2 × 56 = 112 bits and a reasonable compromise between DES and keying option 1, with the same caveat as above. This is an improvement over "double DES", which only requires 2^56 steps to attack. NIST has deprecated this option.
Keying option 3: All three keys are identical, i.e. K1 = K2 = K3.
Keying options:
This is backward compatible with DES, since two operations cancel out. ISO/IEC 18033-3 never allowed this option, and NIST no longer allows K1 = K2 or K2 = K3. Each DES key is 8 odd-parity bytes, with 56 bits of key and 8 bits of error-detection. A key bundle requires 24 bytes for option 1, 16 for option 2, or 8 for option 3.
Keying options:
NIST (and the current TCG specifications version 2.0 of approved algorithms for Trusted Platform Module) also disallows using any one of the 64 following 64-bit values in any keys (note that 32 of them are the binary complement of the 32 others; and that 32 of these keys are also the reverse permutation of bytes of the 32 others), listed here in hexadecimal (in each byte, the least significant bit is an odd-parity generated bit, it is discarded when forming the effective 56-bit keys): 01.01.01.01.01.01.01.01, FE.FE.FE.FE.FE.FE.FE.FE, E0.FE.FE.E0.F1.FE.FE.F1, 1F.01.01.1F.0E.01.01.0E, 01.01.FE.FE.01.01.FE.FE, FE.FE.01.01.FE.FE.01.01, E0.FE.01.1F.F1.FE.01.0E, 1F.01.FE.E0.0E.01.FE.F1, 01.01.E0.E0.01.01.F1.F1, FE.FE.1F.1F.FE.FE.0E.0E, E0.FE.1F.01.F1.FE.0E.01, 1F.01.E0.FE.0E.01.F1.FE, 01.01.1F.1F.01.01.0E.0E, FE.FE.E0.E0.FE.FE.F1.F1, E0.FE.E0.FE.F1.FE.F1.FE, 1F.01.1F.01.0E.01.0E.01, 01.FE.01.FE.01.FE.01.FE, FE.01.FE.01.FE.01.FE.01, E0.01.FE.1F.F1.01.FE.0E, 1F.FE.01.E0.0E.FE.01.F1, 01.FE.FE.01.01.FE.FE.01, FE.01.01.FE.FE.01.01.FE, E0.01.01.E0.F1.01.01.F1, 1F.FE.FE.1F.0E.FE.FE.0E, 01.FE.E0.1F.01.FE.F1.0E, FE.01.1F.E0.FE.01.0E.F1, E0.01.1F.FE.F1.01.0E.FE, 1F.FE.E0.01.0E.FE.F1.01, 01.FE.1F.E0.01.FE.0E.F1, FE.01.E0.1F.FE.01.F1.0E, E0.01.E0.01.F1.01.F1.01, 1F.FE.1F.FE.0E.FE.0E.FE, 01.E0.01.E0.01.F1.01.F1, FE.1F.FE.1F.FE.0E.FE.0E, E0.1F.FE.01.F1.0E.FE.01, 1F.E0.01.FE.0E.F1.01.FE, 01.E0.FE.1F.01.F1.FE.0E, FE.1F.01.E0.FE.0E.01.F1, E0.1F.01.FE.F1.0E.01.FE, 1F.E0.FE.01.0E.F1.FE.01, 01.E0.E0.01.01.F1.F1.01, FE.1F.1F.FE.FE.0E.0E.FE, E0.1F.1F.E0.F1.0E.0E.F1, 1F.E0.E0.1F.0E.F1.F1.0E, 01.E0.1F.FE.01.F1.0E.FE, FE.1F.E0.01.FE.0E.F1.01, E0.1F.E0.1F.F1.0E.F1.0E, 1F.E0.1F.E0.0E.F1.0E.F1, 01.1F.01.1F.01.0E.01.0E, FE.E0.FE.E0.FE.F1.FE.F1, E0.E0.FE.FE.F1.F1.FE.FE, 1F.1F.01.01.0E.0E.01.01, 01.1F.FE.E0.01.0E.FE.F1, FE.E0.01.1F.FE.F1.01.0E, E0.E0.01.01.F1.F1.01.01, 1F.1F.FE.FE.0E.0E.FE.FE, 01.1F.E0.FE.01.0E.F1.FE, FE.E0.1F.01.FE.F1.0E.01, E0.E0.1F.1F.F1.F1.0E.0E, 1F.1F.E0.E0.0E.0E.F1.F1, 01.1F.1F.01.01.0E.0E.01, FE.E0.E0.FE.FE.F1.F1.FE, E0.E0.E0.E0.F1.F1.F1.F1, 1F.1F.1F.1F.0E.0E.0E.0E, With these restrictions on allowed keys, Triple DES has been reapproved with keying options 1 and 2 only. Generally the three keys are generated by taking 24 bytes from a strong random generator and only keying option 1 should be used (option 2 needs only 16 random bytes, but strong random generators are hard to assert and it's considered best practice to use only option 1).
Encryption of more than one block:
As with all block ciphers, encryption and decryption of multiple blocks of data may be performed using a variety of modes of operation, which can generally be defined independently of the block cipher algorithm. However, ANS X9.52 specifies directly, and NIST SP 800-67 specifies via SP 800-38A that some modes shall only be used with certain constraints on them that do not necessarily apply to general specifications of those modes. For example, ANS X9.52 specifies that for cipher block chaining, the initialization vector shall be different each time, whereas ISO/IEC 10116 does not. FIPS PUB 46-3 and ISO/IEC 18033-3 define only the single block algorithm, and do not place any restrictions on the modes of operation for multiple blocks.
Security:
In general, Triple DES with three independent keys (keying option 1) has a key length of 168 bits (three 56-bit DES keys), but due to the meet-in-the-middle attack, the effective security it provides is only 112 bits. Keying option 2 reduces the effective key size to 112 bits (because the third key is the same as the first). However, this option is susceptible to certain chosen-plaintext or known-plaintext attacks, and thus it is designated by NIST to have only 80 bits of security. This can be considered insecure and, as a consequence, Triple DES was deprecated by NIST in 2017.
Security:
The short block size of 64 bits makes 3DES vulnerable to block collision attacks if it is used to encrypt large amounts of data with the same key. The Sweet32 attack shows how this can be exploited in TLS and OpenVPN. A practical Sweet32 attack on 3DES-based cipher suites in TLS required 2^36.6 blocks (785 GB) for a full attack, but researchers were lucky to get a collision just after around 2^20 blocks, which took only 25 minutes.
Security:
The security of TDEA is affected by the number of blocks processed with one key bundle. One key bundle shall not be used to apply cryptographic protection (e.g., encrypt) to more than 2^20 64-bit data blocks.
OpenSSL does not include 3DES by default since version 1.1.0 (August 2016) and considers it a "weak cipher".
Usage:
As of 2008, the electronic payment industry uses Triple DES and continues to develop and promulgate standards based upon it, such as EMV. Earlier versions of Microsoft OneNote, Microsoft Outlook 2007 and Microsoft System Center Configuration Manager 2012 use Triple DES to password-protect user content and system data. However, in December 2018, Microsoft announced the retirement of 3DES throughout their Office 365 service. Firefox and Mozilla Thunderbird use Triple DES in CBC mode to encrypt website authentication login credentials when using a master password.
Implementations:
Below is a list of cryptography libraries that support Triple DES: Botan, Bouncy Castle, cryptlib, Crypto++, Libgcrypt, Nettle, OpenSSL, wolfSSL, and the Trusted Platform Module (alias TPM, a hardware implementation). Some implementations above may not include 3DES in the default build, in later or more recent versions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rugby socks**
Rugby socks:
Rugby socks are socks similar to the long socks worn in other sports such as association football. They are intended to be worn pulled up just below the knee, covering the shins and calves, and are designed to be hardwearing. Knee-high rugby socks are designed to fit tightly around the calves and feet; a proper fit is important because it helps keep players on their feet during play and assists in preventing blisters. Historically, rugby socks were made from a much thicker weave of material to cope with the more aggressive demands of the game compared to association football, but this is now less common, and the two types are barely distinguishable. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ANAPC2**
ANAPC2:
Anaphase-promoting complex subunit 2 is an enzyme that in humans is encoded by the ANAPC2 gene.A large protein complex, termed the anaphase-promoting complex (APC), or the cyclosome, promotes metaphase-anaphase transition by ubiquitinating its specific substrates such as mitotic cyclins and anaphase inhibitor, which are subsequently degraded by the 26S proteasome. Biochemical studies have shown that the vertebrate APC contains eight subunits. The composition of the APC is highly conserved in organisms from yeast to humans. The product of this gene is a component of the complex and shares sequence similarity with a recently identified family of proteins called cullins, which may also be involved in ubiquitin-mediated degradation.
Interactions:
ANAPC2 has been shown to interact with ANAPC1 and ANAPC11. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Far-eastern blot**
Far-eastern blot:
The far-eastern blot, or far-eastern blotting, is a technique for the analysis of lipids separated by high-performance thin layer chromatography (HPTLC). When executing the technique, lipids are transferred from HPTLC plates to a PVDF membrane for further analysis, for example by enzymatic or ligand binding assays and mass spectrometry. It was developed in 1994 by Taki and colleagues at the Tokyo Medical and Dental University, Japan.
Analysis:
Cholesterol, glycerophospholipids and sphingolipids are major constituents of the cell membrane and in certain cases function as second messengers in cell proliferation, apoptosis and cell adhesion in inflammation and tumor metastasis. Far-eastern blot was established as a method for transferring lipids from an HPTLC plate to a polyvinylidene difluoride (PVDF) membrane within a minute. Applications of this with other methods have been studied. Far-eastern blotting allows for the purification of glycosphingolipids and phospholipids, structural analysis of lipids in conjunction with direct mass spectrometry, binding studies using various ligands such as antibodies, lectins, bacterium, viruses, and toxins, and enzyme reaction on membranes.
Analysis:
Far-eastern blot is adaptable to the analysis of lipids as well as metabolites of drugs and natural compounds from plants and environmental hormones.
Etymology:
The name is a dual reference to eastern blot and the geographical concept of the Far East (which includes Japan). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fuculose**
Fuculose:
Fuculose or 6-deoxy-tagatose is a ketohexose deoxy sugar. Fuculose is involved in the process of sugar metabolism. l-Fuculose can be formed from l-fucose by l-fucose isomerase and converted to l-fuculose-1-phosphate by l-fuculose kinase. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hermite transform**
Hermite transform:
In mathematics, the Hermite transform is an integral transform named after the mathematician Charles Hermite, which uses the Hermite polynomials H_n(x) as kernels of the transform. It was first introduced by Lokenath Debnath in 1964. The Hermite transform of a function F(x) is f_H(n) = ∫_(−∞)^(+∞) e^(−x²) H_n(x) F(x) dx. The inverse Hermite transform is given by F(x) = Σ_(n=0)^(∞) f_H(n) H_n(x) / (√π 2^n n!). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
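Numerically, the forward transform pairs naturally with Gauss-Hermite quadrature, whose weight function is exactly e^(−x²). A minimal NumPy sketch of the transform pair above, truncating the inverse series:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermgauss, Hermite

def hermite_transform(F, n, quad_points=80):
    """f_H(n) = integral of exp(-x^2) H_n(x) F(x) dx, by Gauss-Hermite quadrature."""
    x, w = hermgauss(quad_points)            # nodes and weights for weight exp(-x^2)
    return float(np.sum(w * Hermite.basis(n)(x) * F(x)))

def inverse_hermite(coeffs, x):
    """F(x) ~ sum_n f_H(n) H_n(x) / (sqrt(pi) 2^n n!), truncated at len(coeffs)."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for n, c in enumerate(coeffs):
        total += c * Hermite.basis(n)(x) / (np.sqrt(np.pi) * 2.0**n * factorial(n))
    return total

# Round trip on F = H_2: only the n = 2 coefficient survives, equal to the
# orthogonality constant sqrt(pi) * 2^2 * 2! ~ 14.18.
coeffs = [hermite_transform(Hermite.basis(2), n) for n in range(5)]
print(np.round(coeffs, 4))                   # [ 0.  0.  14.1796  0.  0. ]
print(inverse_hermite(coeffs, [0.0, 1.0]))   # ~ [-2.  2.], i.e. H_2 at 0 and 1
```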
**Pistoleer**
Pistoleer:
A pistoleer is a mounted soldier trained to use a pistol, or more generally anyone armed with such a weapon. It is derived from pistolier, a French word for an expert marksman.
History:
The earliest kind of pistoleer was the mounted German Reiter, who came to prominence in Europe after the Battle of St. Quentin in 1557. These soldiers were equipped with a number of single-shot, muzzle-loading wheel-lock or snaphance horse pistols, amongst the most advanced weapons of the era. Although mounted pistoleers were effective against heavy cavalry, they gradually fell out of use during the Thirty Years' War. After this time, cavalry in Western armies used swords or lances as their primary arm, although they still generally carried a pistol as a sidearm. During the English Civil War, the Roundhead Ironside cavalry were issued with a pair of flintlock pistols. Cavaliers used similar weapons, often ornately decorated, including an early breechloader with a barrel that could be unscrewed. Before 1700, cavalrymen were recruited from the wealthy gentry, and generally purchased their own nonstandard pistols. The industrial revolution enabled armies to mass-produce firearms with interchangeable parts, and cheaply issue large quantities of standardised firearms to enlisted personnel. However, officers in the British Army and Royal Navy continued to privately commission pistols from London gunsmiths such as Joseph Manton, Robert Wogdon, Henry Nock and Durs Egg until the mid 19th century.
Equipment:
Light cavalry of the early modern period were equipped with a sabre and specialised horse pistols, carried in saddle holsters. These large calibre single-shot handguns, also known as holster pistols, horsemen's pistols, cavalry pistols or musket calibre pistols, saw extensive use among the British and French armies during the Napoleonic Wars. These were deadliest at close range, but massed pistol fire from horseback proved moderately effective at medium range. Many were made in .71, .65 and .58 calibre, to enable the use of standard infantry musket balls. During the early Victorian era, most horse pistols in the arsenals of Britain, France and America were converted to caplock ignition. These remained in service until .44 calibre revolvers such as the Colt Dragoon of 1847 or the Adams revolver of 1851 were introduced.
Equipment:
British horse pistol. Horse pistols made at the Tower of London used the same lock as the Brown Bess musket. Pistols made before 1790 had wood instead of steel ramrods. The lock was stamped with the crown of George III of Great Britain and the barrel received arrow proof marks. Due to the high demand for arms during the wars against France, regulation .71 calibre horse pistols were also manufactured in Birmingham, and by private gunsmiths. Britain's German allies produced similar pistols in .71 and .65 calibre, including the Prussian Potzdam horse pistols of 1733, 1774 and 1789. British light cavalry such as the hussars fought as pistoliers during the Napoleonic Wars, being trained to draw and fire both pistols before closing in with the sabre. Dragoons were issued with a pair, or brace, of pistols as secondary weapons to their carbines. Although designed for use by cavalry, horse pistols were also issued to mounted staff officers for personal defence, and it was a widespread if unauthorised practice for colour sergeants to carry a pistol in addition to the half-pike and spadroon. After the war, surplus horse pistols were issued to the coast guard, customs officers, and the Metropolitan mounted police.
Equipment:
Similar weapons, issued to the Royal Navy as the Sea Service pistol, had brass rather than steel barrels to prevent corrosion, a belt hook, and a brass butt cap for close quarters fighting. Blackbeard the pirate was infamous for carrying seven pistols of this type in a bandolier.
Equipment:
India pattern pistol. An improved variant of the regulation .71 Tower horse pistol, known as the Indian pattern, was manufactured in British India from 1787 to 1832, for use by officers of the East India Company and British Indian cavalry. Indian or New Land Pattern pistols produced after 1802 had captive ramrods, raised waterproof frizzens for use in India's monsoons, and an attachment on the buttcap for a lanyard. These features would later be retro-fitted to the Tower Model 1835 and Model 1840 pistols. Indian horse pistols in .65 and later .577 calibre were produced at British-controlled arsenals such as Lucknow from 1796 to 1856, and were favoured by big game hunters before the invention of the double barreled howdah pistol. Additionally, many were exported to England and saw use during the later years of the Napoleonic Wars. During the Indian Mutiny, caplock conversions of the India pattern pistol with rifled barrels were used by British forces and mutinous sepoys alike.
Equipment:
French and American horse pistols. The French army first issued horse pistols to their cavalry in 1733, with an improved model introduced in 1764. French horse pistols were used primarily by cuirassiers, and as a secondary weapon by lancers. During the Napoleonic Wars, the most commonly issued pistols were the Pistolet Modele An. IX of 1798, and the Pistolet Modele An. XIII in service from 1806 to 1840. The latter was half-stocked, had a bird's head grip, and included an attachment for a lanyard. An improved model was introduced in 1822, and was converted to caplock ignition in 1842. Copies of the French An. XIII pistol were manufactured in Holland, Belgium, Switzerland and Prussia and were issued to the armies of those countries from the 1820s onwards.
Equipment:
During the Revolutionary War the Americans manufactured copies of the British horse pistol, and it is likely that the 1804–1806 Lewis and Clark Expedition procured horsemen's pistols of this type. British and American horse pistols were also acquired by indigenous American warriors, either from dead white men or through trade. The Americans manufactured their first standardised horse pistol at Harpers Ferry in 1805, copied from the French An. IX pattern. Improved models of the Harpers Ferry pistol were produced in 1806, 1807, 1812, 1818, and 1835. These were issued to the US Army during the War of 1812, Indian Wars and Mexican War, and were used by gunfighters and mountain men in the early days of the Old West, including Kit Carson. The US Navy used similar pistols from 1813 until after the American Civil War, and the Confederate army issued large quantities of Harpers Ferry horse pistols.
Equipment:
Russian horse pistols. The hussars of the Tsarist army filled a similar role to their British counterparts, being trained to fight with sword and pistol. Before the standardised Model 1808 horse pistol in 7 Line (.71-inch) caliber was introduced, the Tsarist cavalry were equipped with a mixture of weapons in different calibers, some made before 1700. The Model 1808 pistol was full-stocked, with a brass barrel band, belt hook and the initials of Tsar Alexander I stamped on the buttplate. New pistols were manufactured at Tula, Izhevsk, Sestroretsk, Moscow, Leningrad, and Kiev in 1818, 1824 and 1836, and most older weapons were converted to percussion from 1844 to 1848. Many were painted black as thermal insulation from the Russian winter, and leather-wrapped grips were not uncommon. Ukrainian Cossacks were equipped with their own distinctive horse pistol, featuring a miquelet lock imported from Spain or Italy, a stock carved from an elm root, a bulbous ivory or bone butt, and niello silver decoration. These were in use among the Cossacks, Chechens, Georgians, Abkhazians and other inhabitants of the Caucasus from the Russo-Turkish Wars of the 17th century until after the Crimean War. Some Cossack tribes of the early 1800s scorned the pistol as the weapon of an inexperienced recruit or coward, but others celebrated skilled pistoliers and assigned the best marksmen to elite companies of dismounted skirmishers. By the 1840s, it had become mandatory for every Ukrainian youth to be as competent in the use of the pistol and carbine as he was with the sabre, lance, wolf hunting, and horse-breaking. Unlike regular cavalry, Cossacks carried their pistols on the left side of their belt or around their neck rather than in a saddle holster, so they would never be unarmed if attacked while away from their horses.
Revival:
Horse-mounted pistoleers of a kind made a brief comeback in North America during the American Civil War (particularly among the Confederates) as well as in the Indian Wars of the 1860s and 70s. This was a consequence of the adoption of the multi-shot Colt revolver, which gave horsemen greater range and firepower. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Light ergonomics**
Light ergonomics:
Light ergonomics is the relationship between the light source and the individual. Poor light can be divided into the following: individual or socio-cultural expectations; insufficient light; poor distribution of light; improper contrast; glare; flicker; thermal heating (over or under); acoustic noise (especially from fluorescents); and color spectrum (full-spectrum light, color temperature, etc.).
Effects of poor light:
The effects of poor light can include the following: low productivity; high human error rates; inability to match or select correct colors; eyestrain; headache; a reduction in mental alertness; general malaise; and low employee morale. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tin foil**
Tin foil:
Tin foil, also spelled tinfoil, is a thin foil made of tin. Tin foil was superseded after World War II by cheaper and more durable aluminium foil, which is still referred to as "tin foil" in many regions (an example of a misnomer).
History:
Foil made from a thin leaf of tin was commercially available before its aluminium counterpart. In the late 19th century and early 20th century, tin foil was in common use, and some people continue to refer to the new product by the name of the old one. Tin foil is stiffer than aluminium foil. It tends to give a slight tin taste to food wrapped in it, which is a major reason it has largely been replaced by aluminium and other materials for wrapping food.
History:
Because of its corrosion resistance, oxidation resistance, availability, low cost, low toxicity, and slight malleability, tin foil was used as a filling for tooth cavities prior to the 20th century.
The first audio recordings on phonograph cylinders were made on tin foil. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Component video**
Component video:
Component video is an analog video signal that has been split into two or more component channels. In popular use, it refers to a type of component analog video (CAV) information that is transmitted or stored as three separate signals. Component video can be contrasted with composite video in which all the video information is combined into a single signal that is used in analog television. Like composite, component cables do not carry audio and are often paired with audio cables.
Component video:
When used without any other qualifications, the term component video usually refers to analog YPBPR component video with sync on luma (Y) found on analog high-definition televisions and associated equipment from the 1990s through the 2000s when they were largely replaced with HDMI and other all-digital standards. Component video cables and their RCA jack connectors on equipment are normally color-coded red, green and blue, although the signal is not in RGB. YPbPr component video can be losslessly converted to the RGB signal that internally drives the monitor; the encoding is useful as the Y signal will also work on black and white monitors.
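The conversion mentioned above is a fixed linear transform. A minimal sketch using the Rec. 601 (standard-definition) coefficients on R, G, B values normalized to [0, 1]; HD equipment uses the Rec. 709 matrix instead, so the choice of coefficients here is an assumption:

```python
def rgb_to_ypbpr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma alone drives a B&W monitor
    pb = 0.564 * (b - y)                    # scaled blue-difference
    pr = 0.713 * (r - y)                    # scaled red-difference
    return y, pb, pr

def ypbpr_to_rgb(y, pb, pr):
    b = y + pb / 0.564
    r = y + pr / 0.713
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

print(rgb_to_ypbpr(1.0, 0.0, 0.0))                 # pure red: (0.299, -0.169, 0.5)
print(ypbpr_to_rgb(*rgb_to_ypbpr(0.2, 0.5, 0.8)))  # round-trips to (0.2, 0.5, 0.8)
```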
Analog component video:
Reproducing a video signal on a display device (for example, a cathode ray tube, or CRT) is a straightforward process complicated by the multitude of signal sources. DVD, VHS, computers and video game consoles all store, process and transmit video signals using different methods, and often each will provide more than one signal option. One way of maintaining signal clarity is by separating the components of a video signal so that they do not interfere with each other. A signal separated in this way is called "component video". S-Video, RGB and YPBPR signals comprise two or more separate signals, and thus are all component-video signals. For most consumer-level video applications, the common three-cable system using BNC or RCA connectors was used to carry analog component video. Typical formats are 480i (480 lines visible, 525 full for NTSC) and 576i (576 lines visible, 625 full for PAL). For personal computer displays, the 15-pin D-sub connector (IBM VGA) provided screen resolutions including 640×480, 800×600, 1024×768, 1152×864, and 1280×1024.
Analog component video:
RGB analog component video. The various RGB (red, green, blue) analog component video standards (e.g., RGBS, RGBHV, RGsB) use no compression and impose no real limit on color depth or resolution, but require large bandwidth to carry the signal and contain a lot of redundant data, since each channel typically includes much of the same black-and-white image. Early personal computers such as the IBM PS/2 offered this signal via a VGA port. Many televisions, especially in Europe, can utilize RGB via the SCART connector. All arcade video games, other than early vector and black-and-white games, use RGB monitors. In addition to the red, green and blue color signals, RGB requires two additional signals to synchronize the video display. Several methods are used: composite sync, where the horizontal and vertical signals are mixed together on a separate wire (the S in RGBS); separate sync, where the horizontal and vertical are each on their own wire (the H and V in RGBHV; also the acronym HD/VD, meaning horizontal deflection/vertical deflection, is used); and sync on green, where a composite sync signal is overlaid on the wire used to transport the green signal (SoG, Sync on G, or RGsB).
Analog component video:
sync on red or sync on blue, where a composite sync signal is overlaid on either the red or blue wire sync on composite (not to be confused with composite sync), where the signal normally used for composite video is used alongside the RGB signal only for the purposes of sync.
Analog component video:
sync on luma, where the Y signal from S-Video is used alongside the RGB signal only for the purposes of sync. Composite sync is common in the European SCART connection scheme (using pins 17 [ground] and 19 [composite-out] or 20 [composite-in]). RGBS requires four wires – red, green, blue and sync. If separate cables are used, the sync cable is usually colored yellow (as is the standard for composite video) or white.
Analog component video:
Separate sync is most common with VGA, used worldwide for analog computer monitors. This is sometimes known as RGBHV, as the horizontal and vertical synchronization pulses are sent in separate channels. This mode requires five conductors. If separate cables are used, the sync lines are usually yellow (H) and white (V), yellow (H) and black (V), or gray (H) and black (V).
Analog component video:
Sync on Green (SoG) is less common, and while some VGA monitors support it, most do not. Sony is a big proponent of SoG, and most of their monitors (and their PlayStation line of video game consoles) use it. Like devices that use composite video or S-video, SoG devices require additional circuitry to remove the sync signal from the green line. A monitor that is not equipped to handle SoG will display an image with an extreme green tint, if any image at all, when given a SoG input.
Analog component video:
Sync on red and sync on blue are even rarer than sync on green, and are typically used only in certain specialized equipment.
Analog component video:
Sync on composite, not to be confused with composite sync, is commonly used on devices that output both composite video and RGB over SCART. The RGB signal is used for color information, while the composite video signal is only used to extract the sync information. This is generally an inferior sync method, as this often causes checkerboards to appear on an image, but the image quality is still much sharper than standalone composite video.
Analog component video:
Sync on luma is very similar to sync on composite, but uses the Y signal from S-Video instead of a composite video signal. This is sometimes used on SCART, since both composite video and S-Video luma ride along the same pins. This generally does not suffer from the same checkerboard issue as sync on composite, and is generally acceptable on devices that do not feature composite sync, such as the Sony PlayStation and some modded Nintendo 64 models.
Analog component video:
Luma-based analog component video. Further types of component analog video signals do not use separate red, green and blue components but rather a colorless component, termed luma, which provides brightness information (as in black-and-white video). This combines with one or more color-carrying components, termed chroma, that give only color information. Both the S-Video component video output (two separate signals) and the YPBPR component video output (three separate signals) seen on DVD players are examples of this method.
Analog component video:
Converting video into luma and chroma allows for chroma subsampling, a method used by JPEG and MPEG compression schemes to reduce the storage requirements for images and video (respectively).
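For instance, 4:2:0 subsampling keeps the luma plane at full resolution while storing each chroma plane at half resolution in both directions. A minimal NumPy sketch, averaging 2×2 blocks (real encoders may use other filters and sample positions):

```python
import numpy as np

def subsample_420(chroma_plane):
    """4:2:0 chroma subsampling: average each 2x2 block of a chroma plane
    down to one sample, quartering its storage; luma is left untouched."""
    h, w = chroma_plane.shape
    return chroma_plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

chroma = np.arange(16, dtype=float).reshape(4, 4)
print(subsample_420(chroma))
# [[ 2.5  4.5]
#  [10.5 12.5]]
```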
Many consumer TVs, DVD players, monitors, video projectors and other video devices at one time used YPBPR output or input.
When used for connecting a video source to a video display where both support 4:3 and 16:9 display formats, the PAL television standard provides for signaling pulses that will automatically switch the display from one format to the other.
Connectors used: D-Terminal: used mostly on Japanese electronics.
Three BNC (professional) or RCA connectors (consumer): Typically colored green (Y), blue (PB) and red (PR).
SCART used in Europe.
Video-in video-out (VIVO): 9-pin Mini-DIN-connectors called "TV Out" in computer video cards, which usually include an adaptor for component RCA, composite RCA and 4-pin S-Video-Mini-DIN.
Synchronization:
Component video requires an extra synchronization signal to be sent along with the video. Component video sync signals can be sent in several different ways: Separate sync: uses separate wires for horizontal and vertical synchronization. When used in RGB (i.e. VGA) connections, five separate signals are sent (red, green, blue, horizontal sync, vertical sync).
Composite sync: combines horizontal and vertical synchronization onto one pair of wires. When used in RGB connections, four separate signals are sent (red, green, blue, sync).
Sync-on-green (SOG): combines composite sync with the green signal in RGB. Only three signals are sent (red, green with sync, blue). This synchronization system is used in, among other applications, many systems by Silicon Graphics and Sun Microsystems through a DB13W3 connector.
Sync-on-luminance: similar to sync-on-green, but combines sync with the luminance signal (Y) of a color system such as YPbPr and S-Video. This is the synchronization system normally used in home theater systems.
Synchronization:
Sync-on-composite: the connector carries a standard composite video signal along with the RGB components, for use with devices that cannot process RGB signals. For devices that do understand RGB, the sync component of that composite signal is used along with the color information from the RGB lines. This arrangement is found in the SCART connector in common use in Europe and some other PAL/SECAM areas.
Digital component video:
Digital component video makes use of single cables with signal lines/connector pins dedicated to digital signals, transmitting digital color space values and allowing higher resolutions such as 480i, 480p, 576i, 576p, 720p, 1080i, and 1080p. RGB component video has largely been replaced by modern digital formats, such as DisplayPort or Digital Visual Interface (DVI) digital connections, while home theater systems increasingly favor High-Definition Multimedia Interface (HDMI), which supports higher resolutions and higher dynamic range and can be made to support digital rights management. The demise of analog is largely due to screens moving to large flat digital panels and to the desire for a single cable for both audio and video, but also due to a slight loss of clarity when converting from a digital media source to analog and back again for a flat digital display, particularly at higher resolutions, where analog signals are highly susceptible to noise.
International standards:
Examples of international component video standards are:
RS-170 RGB (525 lines, based on NTSC timings, now EIA/TIA-343)
RS-343 RGB (525, 625 or 875 lines)
STANAG 3350 Analogue Video Standard (NATO military version of RS-343 RGB, now EIA-343A)
CEA-770.3 High Definition TV Analog Component Video Interface (Consumer Electronics Association)
Component versus composite:
In a composite signal, such as NTSC, PAL or SECAM, the luminance or brightness (Y) signal and the chrominance or color (C) signals are encoded together into one signal. When the color components are kept as separate signals, the video is called component analog video (CAV), which requires three separate signals: the luminance signal (Y) and the color difference signals (R-Y and B-Y).
Component versus composite:
Since component video does not undergo the encoding process, the color quality is noticeably better than composite video. Component video connectors are not unique: the same connectors are used for several different standards, so making a component video connection often does not lead to a satisfactory video signal being transferred. Many DVD players and TVs may need to be set to indicate the type of input/output being used, and if set incorrectly the image may not be properly displayed. Progressive scan, for example, is often not enabled by default, even when component video output is selected. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Scientific essentialism**
Scientific essentialism:
Scientific essentialism, a view espoused by Saul Kripke and Hilary Putnam, maintains that there exist essential properties that objects possess (or instantiate) necessarily. In other words, having such and such essential properties is a necessary condition for membership in a given natural kind. For example, tigers are tigers in virtue of possessing a particular set of genetic properties, but identifying (or appearance-based) properties are nonessential properties. If a tiger lost a leg, or didn't possess stripes, we would still call it a tiger; such identifying properties are not necessary for being a member of the class of tigers.
Scientific essentialism:
It is important, however, that the set of essential properties of an object not be used to identify or be identified with that object because they are not necessary and sufficient, but only necessary. Having such and such a genetic code does not suffice for being a tiger. We wouldn't call a piece of tiger tail a tiger, even though a piece of tiger tail contains the genetic information essential to being a tiger.
Scientific essentialism:
Other advocates of scientific essentialism include Brian Ellis, Caroline Lierse, John Bigelow, and Alexander Bird. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Contemporary architecture**
Contemporary architecture:
Contemporary architecture is the architecture of the 21st century. No single style is dominant. Contemporary architects work in several different styles, from postmodernism, high-tech architecture and new interpretations of traditional architecture to highly conceptual forms and designs, resembling sculpture on an enormous scale. Some of these styles and approaches make use of very advanced technology and modern building materials, such as tube structures which allow construction of buildings that are taller, lighter and stronger than those in the 20th century, while others prioritize the use of natural and ecological materials like stone, wood and lime. One technology that is common to all forms of contemporary architecture is the use of new techniques of computer-aided design, which allow buildings to be designed and modeled on computers in three dimensions, and constructed with more precision and speed.
Contemporary architecture:
Contemporary buildings and styles vary greatly. Some feature concrete structures wrapped in glass or aluminium screens, very asymmetric facades, and cantilevered sections which hang over the street. Skyscrapers twist, or break into crystal-like facets. Facades are designed to shimmer or change color at different times of day.
Contemporary architecture:
Whereas the major monuments of modern architecture in the 20th century were mostly concentrated in the United States and western Europe, contemporary architecture is global; important new buildings have been built in China, Russia, Latin America, and particularly in Arab states of the Persian Gulf; the Burj Khalifa in Dubai was the tallest building in the world in 2019, and the Shanghai Tower in China was the second-tallest.
Contemporary architecture:
Additionally, in the late 20th century, New Classical Architecture, a traditionalist response to modernist architecture, emerged, continuing into the 21st century. The 21st century saw the emergence of multiple organizations dedicated to the promotion of local and/or traditional architecture. Examples include the International Network for Traditional Building, Architecture & Urbanism (INTBAU), the Institute of Classical Architecture & Art (ICAA), the Driehaus Architecture Prize, and the Complementary architecture movement. New traditional architects include Michael Graves, Léon Krier, Yasmeen Lari, Robert Stern and Abdel-Wahed El-Wakil.
Contemporary architecture:
Most of the landmarks of contemporary architecture are the works of a small group of architects who work on an international scale. Many were designed by architects already famous in the late 20th century, including Mario Botta, Frank Gehry, Jean Nouvel, Norman Foster, Ieoh Ming Pei and Renzo Piano, while others are the work of a new generation born during or after World War II, including Zaha Hadid, Santiago Calatrava, Daniel Libeskind, Jacques Herzog, Pierre de Meuron, Rem Koolhaas, and Shigeru Ban. Other projects are the work of collectives of several architects, such as UNStudio and SANAA, or large multinational agencies such as Skidmore, Owings & Merrill, with thirty associate architects and large teams of engineers and designers, and Gensler, with 5,000 employees in 16 countries.
Museums:
Some of the most striking and innovative works of contemporary architecture are art museums, which are often examples of sculptural architecture, and are the signature works of major architects. The Quadracci Pavilion of the Milwaukee Art Museum in Milwaukee, Wisconsin, was designed by Spanish architect Santiago Calatrava. Its structure includes a movable, wing-like brise soleil that opens up for a wingspan of 217 feet (66 m) during the day, folding over the tall, arched structure at night or during bad weather. The Walker Art Center in Minneapolis (2005) was designed by the Swiss architects Herzog and de Meuron, who designed the Tate Modern museum in London and who won the Pritzker Architecture Prize, the most prestigious award in architecture, in 2001. It updates and provides a contrast to the austere earlier Modernist structure designed by Edward Larrabee Barnes by adding a five-story tower clad in panels of delicately sculpted gray aluminum, which change color with the changing light, connected by a wide glass gallery leading to the older building. It also harmonizes with two stone churches opposite. The Polish-born American architect Daniel Libeskind (born 1946) is one of the most prolific of contemporary museum architects; he was an academic before he began designing buildings and was one of the early proponents of the architectural theory of Deconstructivism. The exterior of his Imperial War Museum North in Manchester, England (2002), resembles, depending upon the light and time of day, huge broken pieces of earth or armor plates, and is said to symbolize the destruction of war. In 2006 Libeskind finished the Hamilton Building of the Denver Art Museum in Denver, Colorado, composed of twenty sloping planes, none of them parallel or perpendicular, covered with 230,000 square feet of titanium panels. Inside, the walls of the galleries are all different, sloping and asymmetric. Libeskind completed another striking museum, the Royal Ontario Museum in Toronto, Ontario, Canada (2007), also known as "The Crystal," a building whose form resembles a shattered crystal. Libeskind's museums have been both admired and attacked by critics. While admiring many features of the Denver Art Museum, The New York Times' architecture critic Nicolai Ouroussoff wrote that "In a building of canted walls and asymmetrical rooms—tortured geometries generated purely by formal considerations — it is virtually impossible to enjoy the art." The De Young Museum in San Francisco was designed by the Swiss architects Herzog & de Meuron. It opened in 2005, replacing an older structure that was badly damaged in an earthquake in 1989. The new museum was designed to blend with the park's natural landscape and resist strong earthquakes. The building can move up to three feet (91 centimeters) on ball-bearing sliding plates and viscous fluid dampers that absorb kinetic energy.
Museums:
The Zentrum Paul Klee by Renzo Piano is an art museum near Berne, Switzerland, located next to an autoroute in the Swiss countryside. The museum blends into the landscape by taking the form of three rolling hills made of steel and glass. One "hill" houses the gallery, which is almost entirely underground to preserve the fragile drawings of Klee from the effects of sunlight, while the other two contain an education center and administrative offices. The Centre Pompidou-Metz in Metz, France (2010), a branch of the Centre Pompidou museum of modern art in Paris, was designed by Shigeru Ban, a Japanese architect who won the Pritzker Prize for Architecture in 2014. The roof is the most dramatic feature of the building; it is a 90 m (300 ft) wide hexagon with a surface area of 8,000 m2 (86,000 sq ft), composed of sixteen kilometers of glued laminated timber that intersect to form hexagonal wooden units resembling the cane-work pattern of a Chinese hat. The roof's geometry is irregular, featuring curves and counter-curves over the entire building, particularly the three exhibition galleries. The entire wooden structure is covered with a white fiberglass membrane, and a coating of Teflon protects it from direct sunlight while allowing light to pass through.
Museums:
The Louis Vuitton Foundation by Frank Gehry (2014) is a gallery of contemporary art located adjacent to the Bois de Boulogne in Paris; it opened in October 2014. Gehry described his architecture as inspired by the glass Grand Palais of the 1900 Paris Exposition and by the enormous glass greenhouses of the Jardin des Serres d'Auteuil near the park, built by Jean-Camille Formigé in 1894–95. Gehry had to work within strict height and volume restrictions, which required any part of the building over two stories to be made of glass. The building is low because of the height limits, and is sited in an artificial lake with water cascading beneath the building. The interior gallery structures are covered in a white fiber-reinforced concrete called Ductal. Similar in concept to Gehry's Walt Disney Concert Hall, the building is wrapped in curving glass panels resembling sails inflated by the wind. The glass "sails" are made of 3,584 laminated glass panels, each one a different shape, specially curved for its place in the design. Inside the sails is a cluster of two-story towers containing 11 galleries of different sizes, with flower garden terraces and rooftop spaces for displays. The new Whitney Museum of American Art in New York City by Renzo Piano (2015) took a very different approach from the sculptural museums of Frank Gehry. The Whitney has an industrial-looking facade and blends into the neighborhood. Michael Kimmelman, the architecture critic of The New York Times, called the building a "mishmash of styles" but noted its similarity to Piano's Centre Pompidou in Paris in the way that it mixed with the public spaces around it. "Unlike so much big-name architecture," Kimmelman wrote, "it's not some weirdly shaped trophy building into which all the practical stuff of a working museum must be fitted." The San Francisco Museum of Modern Art is actually two buildings by different architects fit together: an earlier (1995) five-story postmodernist structure by the Swiss architect Mario Botta, to which has been joined a much larger ten-story white gallery by the Norwegian-based firm of Snøhetta (2016). The expanded building includes a green living wall of native plants; a free ground-floor gallery with 25-foot (7.6 m) tall glass walls that places art on view to passersby; and glass skylights that flood the upper floors of offices (though not the galleries) with light. The facades are clad with lightweight panels made of fibre-reinforced plastic. The critical reaction to the building was mixed. Roberta Smith of The New York Times said the building set a new standard for museums and wrote: "The new building's rippling, sloping facade, rife with subtle curves and bulges, establishes a brilliant alternative to the straight-edged boxes of traditional modernism and the rebellion against them initiated by Frank Gehry, with his computer-inspired acrobatics." On the other hand, the critic of The Guardian of London compared the facade of the building to "a gigantic meringue with a hint of Ikea."
Concert halls:
Santiago Calatrava designed the Auditorio de Tenerife, the concert hall of Santa Cruz de Tenerife, the major city of the Canary Islands, with a shell-like wing of reinforced concrete.
Concert halls:
The shell touches the ground at only two points. The Walt Disney Concert Hall in Los Angeles (2003) is one of the major works of California architect Frank Gehry. The exterior is stainless steel, formed like the sails of sailboats. The interior is in the vineyard style, with the audience surrounding the stage. Gehry designed the dramatic array of organ pipes to complement the exterior style of the building.
Concert halls:
The Casa da Musica in Porto, Portugal, by the Dutch architect Rem Koolhaas (2005) is unique among concert halls in having two walls made entirely of glass. Nicolai Ouroussoff, architecture critic of The New York Times, wrote that "the building's chiseled concrete form, resting on a carpet of polished stone, suggests a bomb about to explode," and declared that in its originality it was one of the most important concert halls built in the last 100 years, ranking with the Walt Disney Concert Hall in Los Angeles and the Berliner Philharmonie. The interior of the Copenhagen Opera House by Henning Larsen (2005) has an oak floor and maple walls to enhance the acoustics. The royal box of the Queen, usually placed in the back, is next to the stage.
Concert halls:
The Schermerhorn Symphony Center in Nashville, Tennessee, by David M. Schwarz & Earl Swensson (2006), is an example of Neo-Classical architecture, borrowed literally from Roman and Greek models. It complements another Nashville landmark, a full-scale replica of the Parthenon.
Concert halls:
The Philharmonie de Paris by French architect Jean Nouvel opened in 2015. The concert hall is at La Villette, in a park at the edge of Paris devoted to museums, a music school and other cultural institutions, where its unusual shape blends with the late 20th-century modern architecture. The exterior of the building takes the form of a glittering irregular cliff, cut by horizontal fins which reveal a ramp leading upwards. The exterior is clad in thousands of small pieces of aluminum in three different colors, from white to gray to black. A path leads up the ramp to the top of the building, to a terrace with a dramatic view of the peripheral highway around the city and of the neighborhood. The hall, like the Disney Hall in Los Angeles, has vineyard-style seating, with the audience surrounding the main stage. The seating can be re-arranged in different styles depending upon the type of music performed. When it opened, the architectural critic of the London Guardian compared it to a spaceship that had crash-landed on the outskirts of the city. The Elbphilharmonie concert hall in Hamburg, Germany, by Herzog & de Meuron, inaugurated in January 2017, is the tallest inhabited building in the city, with a height of 110 meters (360 feet). The glass concert hall, which has 2,100 seats in vineyard style, is perched atop a former warehouse. One side of the building contains a hotel, while the structure on the other side above the concert hall contains forty-five apartments. The concert hall in the middle is isolated from the sound of the other parts of the building by an "eggshell" of plaster and paper panels and insulation resembling feather pillows.
Skyscrapers:
The skyscraper (usually defined as a building over 40 stories high) first appeared in Chicago in the 1890s, and was largely an American style in the mid-20th century, but in the 21st century skyscrapers are found in almost every large city on every continent. A new construction technology, the framed tube structure, was first developed in the United States in 1963 by structural engineer Fazlur Rahman Khan of Skidmore, Owings & Merrill; it permitted the construction of super-tall buildings, which needed fewer interior walls, had more window space, and could better resist lateral forces, such as strong winds. The Burj Khalifa in Dubai, United Arab Emirates, is the tallest structure in the world, standing at 829.8 m (2,722 ft). Construction of the Burj Khalifa began in 2004, with the exterior completed five years later in 2009. The primary structure is reinforced concrete. The Burj Khalifa was designed by Adrian Smith, then of Skidmore, Owings & Merrill (SOM), who was also lead architect on the Jin Mao Tower, Pearl River Tower, and Trump International Hotel & Tower.
Skyscrapers:
Adrian Smith and his own firm are the architects of the building intended to replace the Burj Khalifa as the tallest building in the world. The Jeddah Tower in Jeddah, Saudi Arabia, is planned to be 1,008 meters (3,307 ft) tall, which would make it the tallest building in the world and the first building to be more than one kilometer in height. Construction began in 2013, and the project was scheduled to be completed in 2020. After the destruction of the twin towers of the World Trade Center in the September 11 terrorist attacks, a new trade center was designed, with the main tower designed by David Childs of SOM. One World Trade Center, opened in 2015, is 1,776 feet (541 m) tall, making it the tallest building in the western hemisphere.
Skyscrapers:
In London, one of the most notable contemporary landmarks is 30 St Mary Axe, popularly known as "The Gherkin," designed by Norman Foster (2004). It replaced the London Millennium Tower, a much taller project that Foster had earlier proposed for the same site, which would have been the tallest building in Europe but was so tall that it interfered with the flight paths into Heathrow Airport. The steel framework of the Gherkin is integrated into the glass facade. The tallest building in Moscow is the Federation Tower, designed by the Russian architect Sergei Tchoban with Peter Schweger. Completed in 2017 with a height of 373 meters, it surpassed Mercury City Tower, another Moscow skyscraper, as the tallest building in Europe.
Skyscrapers:
The tallest building in China as of 2015 is the Shanghai Tower by the U.S. architectural and design firm Gensler. It is 632 meters (2,073 feet) tall, with 127 floors, making it in 2016 the second-tallest building in the world. It also features the fastest elevators, which reach a speed of 20.5 meters per second (67 feet per second; 74 kilometers per hour). Most skyscrapers are designed to express modernity; the most notable exception is the Abraj Al Bait, a complex of seven skyscraper hotels built by the government of Saudi Arabia to house pilgrims coming to the holy shrine of Mecca. The centerpiece of the group is the Makkah Royal Clock Tower Hotel, with a gothic revival tower; at 581.1 meters (1,906 feet) high, it was the fourth-tallest building in the world in 2016.
Residential buildings:
A tendency in contemporary residential architecture, particularly in the rebuilding of older neighborhoods in large cities, is the luxury condominium tower, with very expensive apartments for sale designed by "starchitects", that is, internationally famous architects. These buildings frequently have little relationship with the architecture of their neighborhood, but stand like signature works of their architects.
Residential buildings:
Daniel Libeskind (born 1946) was born in Poland and studied, taught and practiced architecture in the United States. In 2016 he was professor of architecture at UCLA in Los Angeles. He is known as much for his writings as his architecture; he was a founder of the movement called Deconstructivism. Best known for his museums, he also constructed a notable complex of residential apartment buildings in Singapore (2011) and The Ascent at Roebling's Bridge, a 22-story apartment building in Covington, Kentucky (2008). The name of the latter is taken from the nearby Roebling Suspension Bridge on the Ohio River, but the structure of the building of luxury condominiums is extremely contemporary, sloping upward like the bridge cables to a peak, with a sharp edge and leaning slightly outward as the building rises.
Residential buildings:
One cheerful feature of contemporary residential architecture is color; Bernard Tschumi uses colored ceramic tiles on facades as well as unusual forms to make his buildings stand out. One example is the Blue Condominium in New York City (2007). Another contemporary tendency is the conversion of industrial buildings into mixed residential communities. An example is the Gasometer in Vienna, a group of four massive brick gas production towers constructed at the end of the 19th century. They have been transformed into a mixed residential, office and commercial complex, completed between 1999 and 2001. Some residences are located inside the towers, and others are in new buildings attached to them. The upper floors are devoted to housing units, the middle floors to offices, and the ground floors to entertainment and shopping malls, with sky bridges connecting the shopping mall levels. Each tower was built by a prominent architect; the participants were Jean Nouvel, Coop Himmelblau, Manfred Wehdorn and Wilhelm Holzbauer. The historic exterior walls of the towers were preserved. The Isbjerget, Danish for "iceberg," in Aarhus, Denmark (2013), is a group of four buildings with 210 apartments, both rented and owned, for residents with a range of incomes, located on the waterfront of a former industrial port. The complex was designed by the Danish firms CEBRA and JDS Architects, French architect Louis Paillard and the Dutch firm SeARCH, and was financed by a Danish pension fund. The buildings are designed so that all the units, even those in the back, have a view of the sea. The design and color of the buildings are inspired by icebergs: the buildings are clad in white terrazzo and have balconies made of blue glass.
Religious architecture:
Surprisingly few contemporary churches were built between 2000 and 2017. Church architects, with a few exceptions, rarely showed the same freedom of expression as architects of museums, concert halls and other large buildings. The new cathedral for the city of Los Angeles, California, was designed in a postmodern style by the Spanish architect Rafael Moneo. The previous cathedral had been seriously damaged in the 1994 Northridge earthquake; the new building was specially designed to resist similar shocks.
Religious architecture:
The Northern Lights Cathedral, by the Denmark-based international firm of Schmidt, Hammer and Lassen, is located in Alta, Norway, one of the northernmost cities in the world. The firm's other important works include the National Library of Denmark in Copenhagen. The Vrindavan Chandrodaya Mandir is a Hindu temple in Vrindavan, in Uttar Pradesh state in India, which was under construction at the end of 2016. The architects are InGenious Studio Pvt. Ltd. of Gurgaon and Quintessence Design Studio of Noida, in India. The entrance is in the traditional Nagara style of Indian architecture, while the tower is contemporary, with a glass facade up to the 70th floor. It was scheduled for completion in 2019; when completed, at 700 feet (210 meters) or 70 floors, it will be the tallest religious structure in the world. One of the most unusual contemporary churches is St. Jude's Anglican Cathedral in Iqaluit, the capital of Nunavut, the northernmost and least populous region of Canada. The church is built in the shape of an igloo, and serves the Inuktitut-speaking population of the region.
Religious architecture:
Another unusual contemporary church is the Cardboard Cathedral in Christchurch, New Zealand, designed by Japanese architect Shigeru Ban. It replaced the city's main cathedral, damaged in the 2011 Christchurch earthquake. The cathedral, which seats seven hundred persons, rises 21 metres (69 ft) above the altar. Materials used include 60-centimetre (24 in) diameter cardboard tubes, timber and steel; the roof is of polycarbonate, with eight shipping containers forming the walls. The tubes are "coated with waterproof polyurethane and flame retardants," with two-inch gaps between them so that light can filter inside.
Stadiums:
The Swiss architects Jacques Herzog and Pierre de Meuron designed the Allianz Arena in Munich, Germany, completed in 2005. It seats 75,000 spectators. The structure is wrapped in 2,874 ETFE-foil air panels that are kept inflated with dry air; each panel can be independently illuminated red, white, or blue. When illuminated, the stadium is visible from the Austrian Alps, fifty miles (80 kilometers) away. Among the most prestigious and best-known projects in contemporary architecture are the stadiums for the Olympic Games, whose architects are chosen through highly publicized international competitions. The Beijing National Stadium, built for the 2008 Games and popularly known as the Bird's Nest because of its intricate exterior framework, was designed by the Swiss firm of Herzog & de Meuron with Chinese architect Li Xinggang. It was designed to seat 91,000 spectators, and when constructed had a retractable roof, since removed. Like many contemporary buildings, it is actually two structures: a concrete bowl in which the spectators sit, surrounded at a distance of fifty feet by a glass and steel framework. The exterior "bird's nest" design was inspired by the pattern of Chinese ceramics. The stadium when completed was the largest enclosed space in the world, and was also the largest steel structure, with 26 kilometers of unwrapped steel. The National Stadium in Kaohsiung, Taiwan, by Japanese architect Toyo Ito (2009), is in the form of a dragon. Its other distinctive feature is the array of solar panels that cover almost all of the exterior, providing most of the energy needed by the complex.
Government buildings:
Government buildings, once almost universally serious and sober looking, usually in variations of the school of neoclassical architecture, began to appear in more sculptural and even whimsical forms. One of the most dramatic examples was the London City Hall by Norman Foster (2002), the headquarters of the Greater London Authority. The unusual egg-like building design was intended to reduce the amount of exposed wall and to save energy, though the results have not entirely met expectations. One unusual feature is a helical stairway that spirals from the lobby up to the top of the building.
Government buildings:
Some new government buildings, such as the Parliament House in Valletta, Malta, by Renzo Piano (2015), created controversy because of the contrast between their style and the historic architecture around them. Most new government buildings attempt to express solidity and seriousness; an exception is the Port Authority (Havenhuis) in Antwerp, Belgium, by Zaha Hadid (2016), where a ship-like structure of glass and steel on a white concrete perch seems to have landed atop the old port building constructed in 1922. The faceted glass structure also resembles a diamond, a symbol of Antwerp's role as the major diamond market of Europe. It was one of the last works of Hadid, who died in 2016.
University buildings:
The Dr Chau Chak Wing Building is a business school building of the University of Technology Sydney in Sydney, New South Wales, Australia, designed by Frank Gehry and completed in 2015. It is the first building in Australia designed by Gehry. The building's facade, made of 320,000 custom-designed bricks, was described by one critic as a "squashed brown paper bag." Frank Gehry responded, "Maybe it's a brown paper bag, but it's flexible on the inside, there's a lot of room for changes or movement." The "Siamese Twins" towers at the Pontifical Catholic University of Chile in Santiago, Chile, are by the Chilean architect Alejandro Aravena (born 1967), and were completed in 2013. Aravena was the winner of the 2016 Pritzker Architecture Prize.
Libraries:
The Bibliotheca Alexandrina in Alexandria, Egypt, by the Norwegian firm of Snøhetta (2002), attempts to recreate, in modern form, the famous Alexandria Library of antiquity. The building, by the edge of the Mediterranean, has shelf space for eight million books and a main reading room covering 20,000 square metres (220,000 sq ft) on eleven cascading levels, plus galleries for expositions and a planetarium. The main reading room is covered by a 32-meter-high glass-panelled roof, tilted out toward the sea like a sundial and measuring some 160 m in diameter. The walls are of gray Aswan granite, carved with characters from 120 different human scripts.
Libraries:
The Seattle Central Library by Dutch architect Rem Koolhaas (2006) features a glass and steel wrapping around a stack of platforms. One unusual feature is a ramp with continuous bookshelves spiraling upward four floors.
Malls and retail stores:
Shopping malls are the elephants of commercial architecture: massive structures which combine retail stores, food outlets, and entertainment under a single roof. The largest in area (though not in retail space, since much of the mall is devoted to entertainment and public space) and perhaps most extravagant is the Dubai Mall in the United Arab Emirates, designed by DP Architects of Singapore and opened in 2008, which features, in addition to shops and restaurants, a gigantic walk-through aquarium and underwater zoo, plus a huge ice skating rink and, just outside, the highest fountain and the tallest building in the world.
Malls and retail stores:
In competition with shopping malls are downtown department stores and the shops of individual designer brands. These buildings are traditionally designed to attract attention and to express the modernity of the products they sell. A notable example is the Selfridges department store in Birmingham, England, designed by the firm Future Systems, founded in 1979 by Jan Kaplický (1937–2009). The store's exterior is composed of undulating concrete in convex and concave forms, entirely covered with gleaming blue and white ceramic tiles. Designer brand shops try to make their logo visible and to set themselves apart from department stores. One notable example is the Louis Vuitton store in the Ginza district of Tokyo, with a new facade designed by the Japanese studio of Jun Aoki and Associates, featuring a patterned and perforated shell based on the brand's logo.
Airports, railway stations and transport hubs:
Beijing Capital International Airport has been one of the fastest-growing airports in the world. The new Terminal Three was designed by Norman Foster to handle the increased number of passengers coming for the 2008 Beijing Olympics. The terminal is the second largest in the world, after the Dulles Airport terminal near Washington, DC, and in 2008 was the sixth largest building in the world. From above, the flat-roofed building looks like part of the runway.
Airports, railway stations and transport hubs:
The World Trade Center Transportation Hub is a station constructed beneath the fountain and plaza honoring the victims of the 2001 terrorist attacks in New York City. It was designed by Spanish architect Santiago Calatrava and opened in 2016. The above-ground structure, called the Oculus, has been compared to a bird about to take flight, and leads passengers down to the train station below the plaza. Michael Kimmelman, the architecture critic of The New York Times, praised the soaring upward view inside the Oculus, but condemned the building's cost (it is the most expensive railroad station ever built) and what he called its "scale, monotony of materials and color, preening formalism and disregard for the gritty urban fabric."
Bridges:
Several of the most prominent contemporary architects, including Norman Foster, Santiago Calatrava and Zaha Hadid, have turned their attention to designing bridges. One of the most remarkable examples of contemporary architecture and engineering is the Millau Viaduct in southern France, designed by architect Norman Foster and structural engineer Michel Virlogeux. The Millau Viaduct crosses the valley of the River Tarn and is part of the A75–A71 autoroute axis from Paris to Béziers and Montpellier. It was formally inaugurated on 14 December 2004. It is the tallest bridge in the world, with one mast's summit at 343.0 metres (1,125 ft) above the base of the structure. The British-Iraqi architect Zaha Hadid constructed the Bridge Pavilion in Zaragoza, Spain, for an international exposition there in 2008. The bridge, which also served as an exposition hall, is constructed of concrete reinforced with an external layer of fiberglass in different shades of gray. Since the event closed, the bridge has been used to host expositions and shows.
Bridges:
Some smaller new bridges also offer simple but very innovative designs. The Gateshead Millennium Bridge in Newcastle upon Tyne, England (2001), designed by Wilkinson Eyre to enable pedestrians and cyclists to cross the River Tyne, tilts to one side to permit boats to pass beneath.
Eco-architecture:
A growing tendency in the 21st century is eco-architecture, also termed sustainable architecture: buildings with features which conserve heat and energy, sometimes produce their own energy through solar cells and windmills, and use solar heat to provide hot water. They may also be built with their own wastewater treatment and sometimes rainwater harvesting. Some buildings integrate gardens, green walls and green roofs into their structures. Other features of eco-architecture include the use of wood and recycled materials. There are several green building certification programs, the best-known of which is Leadership in Energy and Environmental Design, or LEED, a rating which measures the environmental impact of buildings.
Eco-architecture:
Many urban skyscrapers, such as 30 St Mary Axe in London, use a double skin of glass to conserve energy. The double skin and curved shape of the building create differences in air pressure which help keep the building cooler in summer and warmer in winter, reducing the need for air conditioning. BedZED, designed by British architect Bill Dunster, is an entire community of eighty-two homes in Hackbridge, near London, built according to eco-architecture principles. Houses face south to take advantage of sunlight and have triple-glazed windows for insulation; a significant portion of the energy comes from solar panels; rainwater is collected and reused; and automobiles are discouraged. BedZED successfully reduced electricity usage by 45 percent and hot water usage by 81 percent compared with the borough average in 2010, though a successful system for producing heat by burning wood chips proved elusive and difficult. The CaixaForum Madrid, a museum and cultural center at Paseo del Prado 36 in Madrid by the Swiss architects Herzog & de Meuron, built between 2001 and 2007, is an example of both green architecture and recycling. The main structure is an abandoned brick electric power station, with new floors constructed on top. The new floors are encased in oxidized cast iron, which has a rusty red color matching the brick of the old power station below. The building next to it features a green wall designed by French botanist Patrick Blanc. The red of the top floors contrasts with the plants on the wall, while the green wall harmonizes with the botanical garden next door to the cultural center. Unusual materials are sometimes recycled for use in eco-architecture: they include denim from old blue jeans for insulation, and panels made from paper flakes, baked earth, flax, sisal, or coconut, and particularly fast-growing bamboo. Lumber and stone from demolished buildings are often reclaimed and reused for flooring, while hardware, windows and other details from older buildings are reused.
| kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Operand**
Operand:
In mathematics, an operand is the object of a mathematical operation, i.e., it is the object or quantity that is operated on.
Example:
The following arithmetic expression shows an example of operators and operands: 3 + 6 = 9. In this example, '+' is the symbol for the operation called addition.
The operand '3' is one of the inputs (quantities) followed by the addition operator, and the operand '6' is the other input necessary for the operation.
The result of the operation is 9. (The number '9' is also called the sum of the augend 3 and the addend 6.) An operand, then, is also referred to as "one of the inputs (quantities) for an operation".
Notation:
Expressions as operands Operands may be complex, and may consist of expressions also made up of operators with operands.
(3+5)×2 In the above expression '(3 + 5)' is the first operand for the multiplication operator and '2' the second. The operand '(3 + 5)' is an expression in itself, which contains an addition operator, with the operands '3' and '5'.
Order of operations Rules of precedence affect which values form operands for which operators: 3 + 5 × 2. In this expression, the multiplication operator has higher precedence than the addition operator, so the multiplication operator takes '5' and '2' as its operands. The addition operator then has operands '3' and '5 × 2'.
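As an illustration, Python's own expression parser applies the same precedence rules; this minimal sketch (using only the standard ast module) shows that in 3 + 5 * 2 the multiplication's operands are grouped first:

```python
import ast

# Parse "3 + 5 * 2" and dump the tree: multiplication binds tighter,
# so (5 * 2) appears as the right-hand operand of the addition.
tree = ast.parse("3 + 5 * 2", mode="eval")
print(ast.dump(tree.body))
# Typical output (details vary slightly across Python versions):
# BinOp(left=Constant(value=3), op=Add(),
#       right=BinOp(left=Constant(value=5), op=Mult(), right=Constant(value=2)))
```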
Positioning of operands Depending on the mathematical notation being used, the position of an operator in relation to its operand(s) may vary. In everyday usage infix notation is the most common; however, other notations also exist, such as the prefix and postfix notations. These alternative notations are most common within computer science.
Notation:
Below is a comparison of three different notations, all representing an addition of the numbers '1' and '2':
1 + 2 (infix notation)
+ 1 2 (prefix notation)
1 2 + (postfix notation)
Infix and the order of operation: In a mathematical expression, the order of operation is carried out from left to right. Start with the leftmost value and seek the first operation to be carried out in accordance with the order specified above (i.e., start with parentheses and end with the addition/subtraction group). For example, in the expression 4 × 2² − (2 + 2²), the first operation to be acted upon is any and all expressions found inside a parenthesis. So, beginning at the left and moving to the right, find the first (and in this case, the only) parenthesis, that is, (2 + 2²). Within the parenthesis itself is found the expression 2². The reader is required to find the value of 2² before going any further. The value of 2² is 4. Having found this value, the remaining expression looks like this: 4 × 2² − (2 + 4). The next step is to calculate the value of the expression inside the parenthesis itself, that is, (2 + 4) = 6. Our expression now looks like this: 4 × 2² − 6. Having calculated the parenthetical part of the expression, we start over again beginning with the leftmost value and move right. The next order of operation (according to the rules) is exponents. Start at the leftmost value, that is, 4, and scan to the right for the first exponent; the first (and only) expression expressed with an exponent is 2², whose value is 4. What we have left is the expression 4 × 4 − 6. The next order of operation is multiplication: 4 × 4 is 16. Now our expression looks like this: 16 − 6. The next order of operation according to the rules is division. However, there is no division operator (÷) in the expression 16 − 6, so we move on to the next order of operation, i.e., addition and subtraction, which have the same precedence and are done left to right.
Notation:
16 − 6 = 10. So the correct value for our original expression, 4 × 2² − (2 + 2²), is 10.
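Postfix notation makes the operand–operator relationship explicit enough that no precedence rules or parentheses are needed. A minimal stack-based evaluator (an illustrative sketch, not drawn from the text above) shows this for the worked example 4 × 2² − (2 + 2²):

```python
def eval_postfix(tokens):
    """Evaluate a postfix (reverse Polish) expression: each operator
    consumes the two operands most recently pushed onto the stack."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "^": lambda a, b: a ** b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()        # second operand (pushed last)
            a = stack.pop()        # first operand
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# 4 * 2^2 - (2 + 2^2) written in postfix:
print(eval_postfix("4 2 2 ^ * 2 2 2 ^ + -".split()))  # 10.0
```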
Notation:
It is important to carry out the order of operation in accordance with rules set by convention. If the reader evaluates an expression but does not follow the correct order of operation, the reader will come forth with a different value. The different value will be the incorrect value because the order of operation was not followed. The reader will arrive at the correct value for the expression if and only if each operation is carried out in the proper order.
Notation:
Arity The number of operands of an operator is called its arity. Based on arity, operators are chiefly classified as nullary (no operands), unary (1 operand), binary (2 operands), or ternary (3 operands). Higher arities are less frequently given specific names, especially since function composition or currying can be used to avoid them.
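A few Python examples of each arity (the particular functions chosen here are illustrative assumptions, not taken from the text):

```python
import random

# Operations of different arities:
nullary = random.random()        # 0 operands: the function takes no inputs
unary   = abs(-7)                # 1 operand
binary  = pow(2, 10)             # 2 operands
ternary = 1 if unary > 5 else 0  # 3 operands: condition, then-value, else-value
```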
Computer science:
In computer programming languages, the definitions of operator and operand are almost the same as in mathematics.
In computing, an operand is the part of a computer instruction which specifies what data is to be manipulated or operated on, while at the same time representing the data itself.
A computer instruction describes an operation such as add or multiply X, while the operand (or operands, as there can be more than one) specify on which X to operate as well as the value of X.
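As a sketch of this idea, an instruction can be modeled as a mnemonic plus a list of operands, each of which may be a register, a memory address, a literal constant, or a label (the class names here are hypothetical, not from any real assembler):

```python
from dataclasses import dataclass

# Hypothetical model: the mnemonic names the operation, the operands
# say what data it acts on.
@dataclass
class Operand:
    kind: str      # e.g. "register", "memory", "immediate", "label"
    value: object

@dataclass
class Instruction:
    mnemonic: str
    operands: list

# "Move the value in register AX into register DS," represented as data:
mov = Instruction("MOV", [Operand("register", "DS"), Operand("register", "AX")])
print(mov.mnemonic, [op.value for op in mov.operands])  # MOV ['DS', 'AX']
```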
Computer science:
Additionally, in assembly language, an operand is a value (an argument) on which the instruction, named by mnemonic, operates. The operand may be a processor register, a memory address, a literal constant, or a label. A simple example (in the x86 architecture) is MOV DS, AX, where the value in register operand AX is to be moved (MOV) into register DS. Depending on the instruction, there may be zero, one, two, or more operands. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pauson–Khand reaction**
Pauson–Khand reaction:
The Pauson–Khand (PK) reaction is a chemical reaction, described as a [2+2+1] cycloaddition, in which an alkyne, an alkene and carbon monoxide combine into an α,β-cyclopentenone in the presence of a metal-carbonyl catalyst. Ihsan Ullah Khand (1935–1980) discovered the reaction around 1970, while working as a postdoctoral associate with Peter Ludwig Pauson (1925–2013) at the University of Strathclyde in Glasgow. Pauson and Khand's initial findings were intermolecular in nature, but the intermolecular reaction has poor selectivity; most modern applications instead run the reaction intramolecularly. The traditional catalyst is a stoichiometric amount of dicobalt octacarbonyl, stabilized by a carbon monoxide atmosphere. Catalytic metal quantities, enhanced reactivity and yield, or stereoinduction are all possible with the right chiral auxiliaries, choice of transition metal (Ti, Mo, W, Fe, Co, Ni, Ru, Rh, Ir and Pd), and additives.
Mechanism:
While the mechanism has not yet been fully elucidated, Magnus' 1985 proposal is widely accepted for both mono- and dinuclear catalysts, and was corroborated by computational studies published by Nakamura and Yamanaka in 2001. The reaction starts with the dicobalt hexacarbonyl–acetylene complex. Binding of an alkene gives a metallacyclopentene complex; CO then migratorily inserts into an M–C bond, and reductive elimination delivers the cyclopentenone. Typically, the dissociation of carbon monoxide from the organometallic complex is rate-limiting.
Selectivity:
The reaction works with both terminal and internal alkynes, although internal alkynes tend to give lower yields. The order of reactivity for the alkene is: strained cyclic > terminal > disubstituted > trisubstituted. Tetrasubstituted alkenes and alkenes with strongly electron-withdrawing groups are unsuitable.
With unsymmetrical alkenes or alkynes, the reaction is rarely regioselective, although some patterns can be observed.
For mono-substituted alkenes, alkyne substituents typically direct: larger groups prefer the C2 position, and electron-withdrawing groups prefer the C3 position.
But the alkene itself struggles to discriminate between the C4 and C5 position, unless the C2 position is sterically congested or the alkene has a chelating heteroatom.
The reaction's poor selectivity is ameliorated in intramolecular reactions. For this reason, the intramolecular Pauson-Khand is common in total synthesis, particularly the formation of 5,5- and 6,5-membered fused bicycles.
Generally, the reaction is highly syn-selective about the bridgehead hydrogen and substituents on the cyclopentane.
Appropriate chiral ligands or auxiliaries can make the reaction enantioselective (see § Amine N-oxides). BINAP is commonly employed.
Additives:
Typical Pauson-Khand conditions are elevated temperatures and pressures in aromatic hydrocarbon (benzene, toluene) or ethereal (tetrahydrofuran, 1,2-dichloroethane) solvents. These harsh conditions may be attenuated with the addition of various additives.
Adsorbent surfaces Adsorbing the metallic complex onto silica or alumina can enhance the rate of decarbonylative ligand exchange, because the donor positions itself on the solid surface (i.e. silica). Additionally, using a solid support restricts conformational movement (the rotamer effect).
Lewis bases Traditional catalytic aids such as phosphine ligands make the cobalt complex too stable, but bulky phosphite ligands are operable.
Lewis basic additives, such as n-BuSMe, are also believed to accelerate the decarbonylative ligand exchange process. However, an alternative view holds that the additives make olefin insertion irreversible instead. Sulfur compounds are typically hard to handle and smelly, but n-dodecyl methyl sulfide and tetramethylthiourea do not suffer from those problems and can improve reaction performance.
Additives:
Amine N-oxides The two most common amine N-oxides are N-methylmorpholine N-oxide (NMO) and trimethylamine N-oxide (TMANO). It is believed that these additives remove carbon monoxide ligands via nucleophilic attack of the N-oxide onto the CO carbonyl, oxidizing the CO into CO2 and generating an unsaturated organometallic complex. This renders the first step of the mechanism irreversible, and allows for milder conditions. Hydrates of the aforementioned amine N-oxides have a similar effect.
Additives:
N-oxide additives can also improve enantio- and diastereoselectivity, although the mechanism thereby is not clear.
Alternative catalysts:
The original Pauson-Khand catalyst is a low-oxidation-state cobalt complex that is unstable in air. Multinuclear cobalt catalysts like Co4(CO)12 and Co3(CO)9(μ3-CH) suffer from the same flaw, although Takayama et al. detail a reaction catalyzed by dicobalt octacarbonyl.
One stabilization method is to generate the catalyst in situ. Chung reports that Co(acac)2 can serve as a precatalyst, activated by sodium borohydride.
Other metals Wilkinson's rhodium-based catalyst requires a silver triflate co-catalyst to effect the Pauson-Khand reaction. Molybdenum hexacarbonyl is a carbon monoxide donor in PK-type reactions between allenes and alkynes with dimethyl sulfoxide in toluene. Titanium, nickel, and zirconium complexes also admit the reaction, and other metals can be employed in these transformations as well.
Substrate tolerance:
In general, allenes support the Pauson-Khand reaction; regioselectivity is determined by the choice of metal catalyst. Density functional investigations show the variation arises from different transition-state metal geometries.
Heteroatoms are also acceptable: Mukai et al's total synthesis of physostigmine applied the Pauson-Khand reaction to a carbodiimide.
Cyclobutadiene also lends itself to a [2+2+1] cycloaddition, although this reactant is too reactive to store in bulk. Instead, cyclobutadiene is generated in situ by decomplexation of the stable cyclobutadiene iron tricarbonyl complex with ceric ammonium nitrate (CAN).
An example of a newer version is the use of the chlorodicarbonylrhodium(I) dimer, [(CO)2RhCl]2, in the synthesis of (+)-phorbol by Phil Baran. In addition to using a rhodium catalyst, this synthesis features an intramolecular cyclization that produces the normal 5-membered α,β-cyclopentenone as well as a 7-membered ring.
Substrate tolerance:
Carbon monoxide generation in situ Recently, several groups have published work avoiding the use of toxic carbon monoxide, and instead generate the cyclopentenone carbonyl motif from aldehydes, carboxylic acids, and formates. These examples typically employ rhodium as the organometallic transition metal, as it is commonly used in decarbonylation reactions. The decarbonylation and PK reaction occur in the same reaction vessel. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Stress majorization**
Stress majorization:
Stress majorization is an optimization strategy used in multidimensional scaling (MDS) where, for a set of $n$ $m$-dimensional data items, a configuration $X$ of $n$ points in $r (\ll m)$-dimensional space is sought that minimizes the so-called stress function $\sigma(X)$. Usually $r$ is 2 or 3, i.e. the $n \times r$ matrix $X$ lists points in 2- or 3-dimensional Euclidean space so that the result may be visualised (i.e. an MDS plot). The function $\sigma$ is a cost or loss function that measures the squared differences between ideal ($m$-dimensional) distances and actual distances in $r$-dimensional space. It is defined as:

$$\sigma(X)=\sum_{i<j\le n}w_{ij}\bigl(d_{ij}(X)-\delta_{ij}\bigr)^{2}$$

where $w_{ij}\ge 0$ is a weight for the measurement between a pair of points $(i,j)$, $d_{ij}(X)$ is the Euclidean distance between $i$ and $j$, and $\delta_{ij}$ is the ideal distance between the points (their separation) in the $m$-dimensional data space. Note that $w_{ij}$ can be used to specify a degree of confidence in the similarity between points (e.g. 0 can be specified if there is no information for a particular pair).
Stress majorization:
A configuration $X$ which minimizes $\sigma(X)$ gives a plot in which points that are close together correspond to points that are also close together in the original $m$-dimensional data space.
Stress majorization:
There are many ways that $\sigma(X)$ could be minimized. For example, Kruskal recommended an iterative steepest descent approach. However, a significantly better (in terms of guarantees on, and rate of, convergence) method for minimizing stress was introduced by Jan de Leeuw. De Leeuw's iterative majorization method at each step minimizes a simple convex function which both bounds $\sigma$ from above and touches the surface of $\sigma$ at a point $Z$, called the supporting point. In convex analysis such a function is called a majorizing function. This iterative majorization process is also referred to as the SMACOF algorithm ("Scaling by MAjorizing a COmplicated Function").
The SMACOF algorithm:
The stress function $\sigma$ can be expanded as follows:

$$\sigma(X)=\sum_{i<j\le n}w_{ij}\bigl(d_{ij}(X)-\delta_{ij}\bigr)^{2}=\sum_{i<j}w_{ij}\delta_{ij}^{2}+\sum_{i<j}w_{ij}d_{ij}^{2}(X)-2\sum_{i<j}w_{ij}\delta_{ij}d_{ij}(X)$$

Note that the first term is a constant $C$ and the second term is quadratic in $X$ (i.e., for the Hessian matrix $V$, the second term is equivalent to $\operatorname{tr}X'VX$) and therefore relatively easily solved. The third term is bounded by:

$$\sum_{i<j}w_{ij}\delta_{ij}d_{ij}(X)\ \ge\ \operatorname{tr}X'B(Z)Z$$

where $B(Z)$ has elements

$$b_{ij}=-\frac{w_{ij}\delta_{ij}}{d_{ij}(Z)}\quad\text{for }d_{ij}(Z)\ne 0,\ i\ne j,\qquad b_{ij}=0\quad\text{for }d_{ij}(Z)=0,\ i\ne j,$$

and $b_{ii}=-\sum_{j=1,\,j\ne i}^{n}b_{ij}$. Proof of this inequality is by the Cauchy–Schwarz inequality; see Borg (pp. 152–153).
The SMACOF algorithm:
Thus, we have a simple quadratic function $\tau(X,Z)$ that majorizes stress:

$$\sigma(X)=C+\operatorname{tr}X'VX-2\operatorname{tr}X'B(X)X\ \le\ C+\operatorname{tr}X'VX-2\operatorname{tr}X'B(Z)Z=\tau(X,Z)$$

The iterative minimization procedure is then: at the $k$-th step, set $Z\leftarrow X^{k-1}$ and $X^{k}\leftarrow\arg\min_{X}\tau(X,Z)$ (setting the gradient $2VX-2B(Z)Z$ to zero gives the minimizer $X^{k}=V^{+}B(Z)Z$, the Guttman transform, where $V^{+}$ is the Moore–Penrose pseudoinverse of $V$); stop if $\sigma(X^{k-1})-\sigma(X^{k})<\epsilon$, otherwise repeat. This algorithm has been shown to decrease stress monotonically (see de Leeuw).
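Below is a minimal numpy sketch of this majorization loop under the definitions above (the function names, random initialization, and convergence tolerance are illustrative choices, not part of the algorithm's specification):

```python
import numpy as np

def stress(X, delta, W):
    """sigma(X) = sum_{i<j} w_ij * (d_ij(X) - delta_ij)**2"""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices_from(D, k=1)
    return np.sum(W[iu] * (D[iu] - delta[iu]) ** 2)

def smacof(delta, W=None, r=2, n_iter=300, eps=1e-9, seed=0):
    """Iterative majorization: at each step set Z <- X and take
    X <- V^+ B(Z) Z, the minimizer of the majorizing function tau."""
    n = delta.shape[0]
    W = np.ones((n, n)) if W is None else W    # uniform weights by default
    V = -W.copy()
    np.fill_diagonal(V, 0.0)
    np.fill_diagonal(V, -V.sum(axis=1))        # v_ii = sum_{j != i} w_ij
    Vp = np.linalg.pinv(V)                     # Moore-Penrose pseudoinverse
    X = np.random.default_rng(seed).standard_normal((n, r))
    old = stress(X, delta, W)
    for _ in range(n_iter):
        Z = X
        D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        with np.errstate(divide="ignore", invalid="ignore"):
            B = np.where(D > 0, -W * delta / D, 0.0)  # b_ij = -w_ij delta_ij / d_ij(Z)
        np.fill_diagonal(B, 0.0)
        np.fill_diagonal(B, -B.sum(axis=1))           # b_ii = -sum_{j != i} b_ij
        X = Vp @ B @ Z                                # Guttman transform
        new = stress(X, delta, W)
        if old - new < eps:
            break
        old = new
    return X
```

With uniform weights the update reduces to $X\leftarrow\frac{1}{n}B(Z)Z$; passing off-diagonal weights $w_{ij}=\delta_{ij}^{-2}$ gives the graph-drawing variant discussed below.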
Use in graph drawing:
Stress majorization and algorithms similar to SMACOF also have application in the field of graph drawing. That is, one can find a reasonably aesthetically appealing layout for a network or graph by minimizing a stress function over the positions of the nodes in the graph. In this case, the $\delta_{ij}$ are usually set to the graph-theoretic distances between nodes $i$ and $j$, and the weights $w_{ij}$ are taken to be $\delta_{ij}^{-\alpha}$ (weights which the sketch above accepts directly). Here, $\alpha$ is chosen as a trade-off between preserving long- or short-range ideal distances. Good results have been shown for $\alpha=2$. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ZiiLABS**
ZiiLABS:
ZiiLABS is a global electronics company, producing a line of media-oriented application processors, reference platforms and enabling software, in a series of platforms named ZMS. Its products are found in low-power consumer electronics and embedded devices, including Android-based phones and tablets.
History:
ZiiLABS was founded in 1994 as 3Dlabs, which became a wholly owned subsidiary of Creative Technology Ltd in 2002. In January 2009 the company re-branded as ZiiLABS. This re-branding reflected 3Dlabs' focus on supplying low-power, media-rich application processors, hardware platforms and middleware, rather than just 3D GPUs as had previously been the case.
The company announced its first applications/media processor, the DMS-02 in 2005 and this has been followed by the ZMS-05, ZMS-08 and most recently the ZMS-20 and ZMS-40. The ZMS processors combine ZiiLABS’ core asset, the "Stemcell Computing Array" with ARM cores and integrated peripheral functions to create a system on a chip (SoC).
As 3Dlabs, the company developed the GLINT and Permedia GPUs used in both personal and workstation graphics cards. In 2002 the company acquired the Intense3D group to become a vertically integrated graphics board vendor, supplying workstation graphics cards under the RealiZm brand. 3Dlabs stopped developing graphics GPUs and cards in 2006 to focus on its media processor business.
History:
In November 2012, Creative Technology Limited announced it has entered into an agreement with Intel Corporation for Intel to license certain technology and patents from ZiiLABS Inc. Ltd and acquire certain engineering resources and assets related to its UK branch as a part of a $50 million deal. ZiiLABS (still wholly owned by Creative) continues to retain all ownership of its StemCell media processor technologies and patents, and will continue to supply and support its ZMS series of chips to its customers.
Products:
The company's products include a range of ARM-based ZMS processors that feature its so-called StemCell media processing architecture, plus a portfolio of tablet reference platforms based on its in-house Android board support package and application software. The most recent platform, the JAGUAR Android reference tablet, was announced in May 2011.
Products:
StemCell cores The core asset of the ZiiLABS ZMS chips appears to be an array of processing units called StemCells that are programmed to perform media processing. These are described as 32-bit floating-point processing units and are likely some form of digital signal processor cores used to accelerate various operations. All video codec and 3D graphics handling in the ZMS processors is handled by programming this array of coprocessors to do the job.
Products:
Processors ZMS-40 (quad Cortex-A9 with 96 StemCell cores) ZMS-20 (dual Cortex-A9 with 48 StemCell cores) ZMS-08 (single Cortex-A8 with 64 StemCell cores) ZMS-05 (dual ARM9 with 24 StemCell cores) Reference platforms Zii EGG (this product is now end-of-life) JAGUAR (Android 3.2 Reference Tablet) JAGUAR3 (Slim Android 3.2 Reference Tablet). Over the years a number of other development platforms have been introduced, including the Zii Development Kits (traditional large-form-factor systems). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Syntactic hierarchy**
Syntactic hierarchy:
Syntax is concerned with the way sentences are constructed from smaller parts, such as words and phrases. Two steps can be distinguished in the study of syntax. The first step is to identify different types of units in the stream of speech and writing. In natural languages, such units include sentences, phrases, and words. The second step is to analyze how these units build up larger patterns, and in particular to find general rules that govern the construction of sentences (see http://people.dsv.su.se/~vadim/cmnew/chapter2/ch2_21.htm). Sentences can be broken down into constituents, groups of words that are organized as units; these constituents are themselves organized in a hierarchical structure, embedding inside one another to form larger constituents.
History:
Universal Grammar: the Chomskyan view In Chomsky’s view, humans are born with innate knowledge of certain principles that guide them in developing the grammar of their language. In other words, Chomsky’s theory is that language learning is facilitated by a predisposition that our brains have for certain structures of language. This implies in turn that all languages have a common structural basis: the set of rules known as "universal grammar".
History:
Structural Linguistics: Saussurean Influences Regarded as the "Founder of Structural Linguistics", reflecting the concept of structuralism, Ferdinand de Saussure described the ways in which human culture requires an overarching structure in order to communicate. He defines language as distinct from human speech, though speech is fundamental and essential to language. While speech is a combination of several disciplines (i.e. physical, psychological, etc.) and belongs to an individual and their society, language is a self-contained system of classification. De Saussure argues that in speaking, words are chained together in sequence, and therefore gain relations based on the linear nature of language. Language, however, is not simply a classification scheme for universal concepts, as translating from one language to another proves to be a difficult task. Each language organizes its own world differently; languages do not name pre-existing categories but rather articulate their own. This idea explains that though languages can differ within levels of the syntactic hierarchy, all languages encompass the same set of levels.
The Levels of Syntactic Hierarchy:
Groups of sentences (text): Separate sentences can combine to create one sentence. For example, the sentences “The boy chased the ball” and “He didn’t catch it” can be combined. This can be done in several ways, including stringing one sentence after the other or joining the sentences with conjunctions: 1. “The boy chased the ball; he didn’t catch it.” 2. “The boy chased the ball, but he didn’t catch it.” 3. “The boy chased the ball, and he didn’t catch it.”
Sentence: Phrases combine to form sentences. The phrase “the girl” combines with the phrase “ran away” to form the sentence “The girl ran away.”
Phrases: Words combine to form phrases. The word “the” combines with “girl” to form the phrase “the girl”.
Words: Morphemes combine to form words. Morphemes belong to categories that determine how they combine. For example, the word 'manageable' is made up of two morphemes: 'manage', which is a verb, and '-able', which forms an adjective; these categories determine how the morphemes combine to form the word 'manageable'.
Morphemes: The smallest meaningful units in a word. For example, “boys” has two morphemes, “boy” and “-s.”
Phonemes: Units of sound, such as the individual consonants and vowels of a language. For example, /p/, /t/ and /æ/ are phonemes in English.
The Levels of Syntactic Hierarchy:
Phonetic form A subset of sounds which demonstrates the different variations of a phoneme. Diacritics can be used to represent these variations; for example, aspiration can be marked on /p/ in the word /pʰɪt/.
Unsegmented Speech Refers to a continuous, ambiguous stream of speech without clear boundaries between words. For example, this can represent how an infant may hear human speech.
Analysis of Syntactic Hierarchy Levels:
Sentence Sentences are the hierarchical structure of combined phrases. When constructing a sentence, two types of phrases are always necessary: a Noun Phrase (NP) and a Verb Phrase (VP), which together form the simplest possible sentence. What determines whether a sentence is grammatically constructed (i.e. the sentence makes sense to a native speaker of the language) is its adherence to the language's phrase structure rules, which allow a language to generate large numbers of sentences. Languages differ cross-linguistically in their phrase structure rules, resulting in differences in the order of the NP and VP and of other phrases included in a sentence. Languages that share similar phrase structure rules, however, translate more directly: a sentence from one language remains grammatical when translated into the target language.
Analysis of Syntactic Hierarchy Levels:
French example: Sentence: "Le chien a aimé la fille" (translation: "The dog loved the girl") The French sentence directly translates to English, demonstrating that the phrase structures of both languages are very similar.
Analysis of Syntactic Hierarchy Levels:
Phrase Words combine to form phrases. For example, the word “the” combines with the word “dog” to form the phrase “the dog”. A phrase is a sequence of words or a group of words arranged in a grammatical way that combine to help form a sentence. There are five commonly occurring types of phrases: Noun phrases (NP), Adjective phrases (AdjP), Verb phrases (VP), Adverb phrases (AdvP), and Prepositional phrases (PP).
Analysis of Syntactic Hierarchy Levels:
Hierarchical combinations of words in their formation of phrasal categories are apparent cross-linguistically; for example, in French: French examples: Noun phrase: "Le chien" (translation: "The dog") Verb phrase: "a aimé la fille" (translation: "loved the girl") Full sentence: "Le chien a aimé la fille" Noun phrase A noun phrase refers to a phrase that is built upon a noun. For example, “the dog” and “the girl” in the sentence “the dog loved the girl” act as noun phrases.
Analysis of Syntactic Hierarchy Levels:
Verb phrase A verb phrase is composed of at least one main verb, possibly together with one or more helping/auxiliary verbs (every sentence needs at least one main verb). For example, “loved the girl” in the sentence “the dog loved the girl” acts as a verb phrase.
see also Adjective phrases (AdjP), Adverb phrases (AdvP), and Prepositional phrases (PP). Phrase structure rules A phrase structure tree shows that a sentence is both a linear string of words and a hierarchical structure with phrases nested in phrases (combinations of phrase structures).
A phrase structure tree is a formal device for representing a speaker’s knowledge about phrase structure in speech. The syntactic category of each individual word appears immediately above that word. In this way, “the” is shown to be a determiner, “child” is a noun, and so on.
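As a small illustration of such a tree, the sketch below builds the phrase structure of the example sentence used in this article with NLTK's Tree class; the bracketed notation and category labels (S, NP, VP, Det, N, V) are conventional, and the nltk package is assumed to be installed.

```python
# Sketch of a phrase structure tree using NLTK's Tree class.
from nltk import Tree

sentence = Tree.fromstring(
    "(S (NP (Det the) (N dog)) (VP (V loved) (NP (Det the) (N girl))))")
sentence.pretty_print()               # draws the hierarchy as ASCII art
print(sentence.label())               # 'S' -- the root constituent
print([t.label() for t in sentence])  # immediate constituents: ['NP', 'VP']
```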
Analysis of Syntactic Hierarchy Levels:
Word The subdomain that deals with words is morphology, which states that words are made up of morphemes which combine in a regular and rule-governed fashion. For example, the word 'national' is made up of two morphemes: 'nation', which is a noun, and '-al', a suffix meaning "pertaining to", whose meaning helps us categorize it as adjective-forming. These categories aid in arranging the morphemes so they can combine in a way that forms the word 'national', which is an adjective.
Analysis of Syntactic Hierarchy Levels:
Morphology states that words come in categories, and the morphemes that join together to create the word assign the category. The two main categories are open class, where new words can be created, and closed class, where there is a limited number of members. Within both of these categories there are further subcategories. Open class includes: nouns, verbs, adjectives, and adverbs; closed class includes: prepositions, determiners, numerals, complementizers, auxiliaries, modals, coordinators, and negation/affirmation. These subcategories can be further broken down; for example, verbs can be either transitive or intransitive. These categories can be identified with semantic criteria about what a word means; for instance, nouns are said to be people, places, or things, while verbs are actions. Words are all placed in these categories and, depending on their category, they follow specific rules that determine their word order. Cross-linguistically, other languages form words similarly to English: morphemes of different categories combine in a rule-governed fashion to create words.
Analysis of Syntactic Hierarchy Levels:
French example: "é"- masculine past participle root verb suffix of regular -er verbs "aimer" (translation: to like/love)-> "aim" + "é" = "aimé" (translation: liked)
Minimalist Theory of Syntax:
Syntactic hierarchy may be the most basic and assumed component of almost all syntactic theories, and yet the minimalist theory of syntax views a clause or group of words as a string, rather than as components of a hierarchical system. While this theory prioritizes linearity over hierarchy, hierarchical structure is still analyzed if it "generates correct data" or if there is "direct evidence for it". In this way, it appears that syntactic hierarchy still plays an important role in even the minimalist theories.
Artificial language:
In artificial languages, lexemes, tokens, and formulas are usually found among the basic units. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Groundwater flow equation**
Groundwater flow equation:
Used in hydrogeology, the groundwater flow equation is the mathematical relationship which is used to describe the flow of groundwater through an aquifer. The transient flow of groundwater is described by a form of the diffusion equation, similar to that used in heat transfer to describe the flow of heat in a solid (heat conduction). The steady-state flow of groundwater is described by a form of the Laplace equation, which is a form of potential flow and has analogs in numerous fields.
Groundwater flow equation:
The groundwater flow equation is often derived for a small representative elemental volume (REV), where the properties of the medium are assumed to be effectively constant. A mass balance is done on the water flowing in and out of this small volume, the flux terms in the relationship being expressed in terms of head by using the constitutive equation called Darcy's law, which requires that the flow is laminar. Other approaches are based on agent-based models to incorporate the effect of complex aquifers such as karstic or fractured rocks (i.e. volcanic).
Mass balance:
A mass balance must be performed, and used along with Darcy's law, to arrive at the transient groundwater flow equation. This balance is analogous to the energy balance used in heat transfer to arrive at the heat equation. It is simply a statement of accounting, that for a given control volume, aside from sources or sinks, mass cannot be created or destroyed. The conservation of mass states that, for a given increment of time (Δt), the difference between the mass flowing in across the boundaries, the mass flowing out across the boundaries, and the sources within the volume, is the change in storage.
Diffusion equation (transient flow):
Mass can be represented as density times volume, and under most conditions, water can be considered incompressible (density does not depend on pressure). The mass fluxes across the boundaries then become volume fluxes (as are found in Darcy's law). Using Taylor series to represent the in and out flux terms across the boundaries of the control volume, and using the divergence theorem to turn the flux across the boundary into a flux over the entire volume, the final form of the groundwater flow equation (in differential form) is: $S_s \frac{\partial h}{\partial t} = -\nabla \cdot q - G.$
Diffusion equation (transient flow):
This is known in other fields as the diffusion equation or heat equation; it is a parabolic partial differential equation (PDE). This mathematical statement indicates that the change in hydraulic head with time (left-hand side) equals the negative divergence of the flux (q) and the source terms (G). This equation has both head and flux as unknowns, but Darcy's law relates flux to hydraulic heads, so substituting it in for the flux (q) leads to $S_s \frac{\partial h}{\partial t} = -\nabla \cdot (-K \nabla h) - G.$
Diffusion equation (transient flow):
Now if hydraulic conductivity (K) is spatially uniform and isotropic (rather than a tensor), it can be taken out of the spatial derivative, simplifying the divergence to the Laplacian; this makes the equation $S_s \frac{\partial h}{\partial t} = K \nabla^2 h - G.$
Dividing through by the specific storage (Ss) puts hydraulic diffusivity (α = K/Ss or, equivalently, α = T/S) on the right-hand side. The hydraulic diffusivity is proportional to the speed at which a finite pressure pulse will propagate through the system (large values of α lead to fast propagation of signals). The groundwater flow equation then becomes $\frac{\partial h}{\partial t} = \alpha \nabla^2 h - G.$
Where the sink/source term, G, now has the same units but is divided by the appropriate storage term (as defined by the hydraulic diffusivity substitution).
Rectangular Cartesian coordinates Especially when using rectangular grid finite-difference models (e.g. MODFLOW, made by the USGS), we deal with Cartesian coordinates. In these coordinates the Laplacian operator expands, giving (for three-dimensional flow): $\frac{\partial h}{\partial t} = \alpha \left[ \frac{\partial^2 h}{\partial x^2} + \frac{\partial^2 h}{\partial y^2} + \frac{\partial^2 h}{\partial z^2} \right] - G.$
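As a toy illustration of how such finite-difference models advance the diffusion equation in Cartesian coordinates, the following sketch steps an explicit scheme for the 2-D, source-free case with fixed-head boundaries. It is purely illustrative and is not how MODFLOW itself is implemented; all parameter values are made up.

```python
# Explicit finite-difference sketch of dh/dt = alpha * laplacian(h) in 2-D
# (no source term, fixed-head boundaries). Parameter values are invented.
import numpy as np

alpha = 1.0          # hydraulic diffusivity K/Ss [m^2/s]
dx, dt = 0.1, 0.002  # grid spacing [m] and time step [s]

# the explicit scheme is stable in 2-D only if alpha*dt/dx**2 <= 1/4
assert alpha * dt / dx**2 <= 0.25

h = np.zeros((50, 50))   # initial head [m]
h[:, 0] = 1.0            # fixed head of 1 m along the left boundary

for _ in range(2000):
    lap = (h[:-2, 1:-1] + h[2:, 1:-1] + h[1:-1, :-2] + h[1:-1, 2:]
           - 4.0 * h[1:-1, 1:-1]) / dx**2     # five-point Laplacian
    h[1:-1, 1:-1] += alpha * dt * lap         # only interior nodes updated,
                                              # so boundary heads stay fixed
```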
Diffusion equation (transient flow):
MODFLOW code discretizes and simulates an orthogonal 3-D form of the governing groundwater flow equation. However, it has an option to run in a "quasi-3D" mode if the user wishes to do so; in this case the model deals with the vertically averaged T and S, rather than K and Ss. In the quasi-3D mode, flow is calculated between 2D horizontal layers using the concept of leakage.
Diffusion equation (transient flow):
Circular cylindrical coordinates Another useful coordinate system is 3D cylindrical coordinates (typically where a pumping well is a line source located at the origin — parallel to the z axis — causing converging radial flow). Under these conditions the above equation becomes (r being radial distance and θ being angle): $\frac{\partial h}{\partial t} = \alpha \left[ \frac{\partial^2 h}{\partial r^2} + \frac{1}{r}\frac{\partial h}{\partial r} + \frac{1}{r^2}\frac{\partial^2 h}{\partial \theta^2} + \frac{\partial^2 h}{\partial z^2} \right] - G.$
Diffusion equation (transient flow):
Assumptions This equation represents flow to a pumping well (a sink of strength G), located at the origin. Both this equation and the Cartesian version above are fundamental equations in groundwater flow, but arriving at this point requires considerable simplification. Some of the main assumptions which went into both these equations are: the aquifer material is incompressible (no change in matrix due to changes in pressure — aka subsidence), the water is of constant density (incompressible), any external loads on the aquifer (e.g., overburden, atmospheric pressure) are constant, for the 1D radial problem the pumping well is fully penetrating a non-leaky aquifer, the groundwater is flowing slowly (Reynolds number less than unity), and the hydraulic conductivity (K) is an isotropic scalar. Despite these large assumptions, the groundwater flow equation does a good job of representing the distribution of heads in aquifers due to a transient distribution of sources and sinks.
Laplace equation (steady-state flow):
If the aquifer has recharging boundary conditions a steady-state may be reached (or it may be used as an approximation in many cases), and the diffusion equation (above) simplifies to the Laplace equation.
$0 = \alpha \nabla^2 h$ This equation states that hydraulic head is a harmonic function, and it has many analogs in other fields. The Laplace equation can be solved using analytical and graphical techniques, under assumptions similar to those stated above but with the additional requirement of a steady-state flow field.
A common method for the solution of this equation in civil engineering and soil mechanics is the graphical technique of drawing flownets, in which contour lines of hydraulic head and the stream function make a curvilinear grid, allowing complex geometries to be solved approximately.
Steady-state flow to a pumping well (which never truly occurs, but is sometimes a useful approximation) is commonly called the Thiem solution.
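A minimal sketch of the Thiem solution follows: it gives the steady head field around a fully penetrating well pumping at rate Q from a confined aquifer of transmissivity T, with the head fixed at some outer radius R. The parameter values are invented for illustration.

```python
# Thiem solution for steady radial flow to a pumping well:
#   h(r) = h_R - (Q / (2*pi*T)) * ln(R / r), head h_R fixed at radius R.
import numpy as np

Q = 0.01               # pumping rate [m^3/s]
T = 1e-3               # transmissivity [m^2/s]
R, h_R = 300.0, 50.0   # radius of influence [m] and head there [m]

def thiem_head(r):
    return h_R - Q / (2 * np.pi * T) * np.log(R / r)

for r in (1.0, 10.0, 100.0):
    print(f"head at r = {r:6.1f} m : {thiem_head(r):6.2f} m")
```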
Two-dimensional groundwater flow:
The above groundwater flow equations are valid for three dimensional flow. In unconfined aquifers, the solution to the 3D form of the equation is complicated by the presence of a free surface water table boundary condition: in addition to solving for the spatial distribution of heads, the location of this surface is also an unknown. This is a non-linear problem, even though the governing equation is linear.
Two-dimensional groundwater flow:
An alternative formulation of the groundwater flow equation may be obtained by invoking the Dupuit–Forchheimer assumption, where it is assumed that heads do not vary in the vertical direction (i.e., ∂h/∂z=0 ). A horizontal water balance is applied to a long vertical column with area δxδy extending from the aquifer base to the unsaturated surface. This distance is referred to as the saturated thickness, b. In a confined aquifer, the saturated thickness is determined by the height of the aquifer, H, and the pressure head is non-zero everywhere. In an unconfined aquifer, the saturated thickness is defined as the vertical distance between the water table surface and the aquifer base. If ∂h/∂z=0 , and the aquifer base is at the zero datum, then the unconfined saturated thickness is equal to the head, i.e., b=h.
Two-dimensional groundwater flow:
Assuming both the hydraulic conductivity and the horizontal components of flow are uniform along the entire saturated thickness of the aquifer (i.e., $\partial q_x/\partial z = 0$ and $\partial K/\partial z = 0$), we can express Darcy's law in terms of integrated groundwater discharges, $Q_x$ and $Q_y$:

$$Q_x = \int_0^b q_x \, dz = -Kb\frac{\partial h}{\partial x}, \qquad Q_y = \int_0^b q_y \, dz = -Kb\frac{\partial h}{\partial y}.$$

Inserting these into our mass balance expression, we obtain the general 2D governing equation for incompressible saturated groundwater flow:

$$\frac{\partial (nb)}{\partial t} = \nabla \cdot (Kb\nabla h) + N.$$
Two-dimensional groundwater flow:
Where n is the aquifer porosity. The source term, N (length per time), represents the addition of water in the vertical direction (e.g., recharge). By incorporating the correct definitions for saturated thickness, specific storage, and specific yield, we can transform this into two unique governing equations for confined and unconfined conditions:

$$S\frac{\partial h}{\partial t} = \nabla \cdot (Kb\nabla h) + N \quad \text{(confined)},$$

where $S = S_s b$ is the aquifer storativity, and

$$S_y\frac{\partial h}{\partial t} = \nabla \cdot (Kh\nabla h) + N \quad \text{(unconfined)},$$

where $S_y$ is the specific yield of the aquifer.
Note that the partial differential equation in the unconfined case is non-linear, whereas it is linear in the confined case. For unconfined steady-state flow, this non-linearity may be removed by expressing the PDE in terms of the head squared: $\nabla \cdot (K \nabla h^2) = -2N$, or, for homogeneous aquifers, $\nabla^2 h^2 = -\frac{2N}{K}.$
This formulation allows us to apply standard methods for solving linear PDEs in the case of unconfined flow. For heterogeneous aquifers with no recharge, Potential flow methods may be applied for mixed confined/unconfined cases. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
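As a concrete instance of the head-squared linearization above: for 1-D unconfined steady flow between two fixed heads h1 and h2 a distance L apart, with uniform recharge N and conductivity K, integrating $\nabla^2 h^2 = -2N/K$ twice gives $h(x)^2 = h_1^2 + (h_2^2 - h_1^2)\,x/L + (N/K)\,x(L - x)$. The sketch below evaluates this profile with invented parameter values.

```python
# Water-table profile from the head-squared (Dupuit) solution in 1-D.
# All parameter values are illustrative only.
import numpy as np

K, N = 1e-4, 1e-8                # conductivity [m/s], recharge rate [m/s]
L, h1, h2 = 1000.0, 20.0, 18.0   # domain length and boundary heads [m]

x = np.linspace(0.0, L, 11)
h = np.sqrt(h1**2 + (h2**2 - h1**2) * x / L + (N / K) * x * (L - x))
print(np.round(h, 2))            # heads between the two fixed boundaries
```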
**Celestine (mineral)**
Celestine (mineral):
Celestine (the IMA-accepted name) or celestite is a mineral consisting of strontium sulfate (SrSO4). The mineral is named for its occasional delicate blue color. Celestine and the carbonate mineral strontianite are the principal sources of the element strontium, commonly used in fireworks and in various metal alloys.
Etymology:
Celestine derives its name from the Latin word caelestis meaning celestial which in turn is derived from the Latin word caelum meaning sky or heaven.
Occurrence:
Celestine occurs as crystals, and also in compact massive and fibrous forms. It is mostly found in sedimentary rocks, often associated with the minerals gypsum, anhydrite, and halite. On occasion, in some localities, it may also be found with sulfur inclusions.
The mineral is found worldwide, usually in small quantities. Pale blue crystal specimens are found in Madagascar. White and orange variants also occurred at Yate, Bristol, UK, where it was extracted for commercial purposes until April 1991. The skeletons of the protozoan Acantharea are made of celestine, unlike those of other radiolarians, which are made of silica.
In carbonate marine sediments, burial dissolution is a recognized mechanism of celestine precipitation. It is sometimes used as a gemstone.
Geodes:
Celestine crystals are found in some geodes. The world's largest known geode, a celestine geode 35 feet (11 m) in diameter at its widest point, is located near the village of Put-in-Bay, Ohio, on South Bass Island in Lake Erie. The geode has been converted into a viewing cave, Crystal Cave, with the crystals which once composed the floor of the geode removed. The geode has celestine crystals as wide as 18 inches (46 cm) across, estimated to weigh up to 300 pounds (140 kg) each.
Geodes:
Celestine geodes are understood to form by replacement of alabaster nodules consisting of the calcium sulfates gypsum or anhydrite. Calcium sulfate is sparingly soluble, but strontium sulfate is mostly insoluble. Strontium-bearing solutions that come into contact with calcium sulfate nodules dissolve the calcium away, leaving a cavity. The strontium is immediately precipitated as celestine, with the crystals growing into the newly-formed cavity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Apple Pencil**
Apple Pencil:
Apple Pencil is a line of wireless stylus pen accessories designed and developed by Apple Inc. for use with supported iPad tablets.
Apple Pencil:
The first-generation Apple Pencil was announced alongside the first iPad Pro on September 9, 2015. It communicates wirelessly via Bluetooth and has a removable cap that conceals a Lightning connector used for charging. The Pencil is compatible with the first- and second-generation iPad Pro models, and the sixth through tenth-generation iPad models (with the latter requiring a USB-C adapter).The second-generation Apple Pencil was announced on October 30, 2018, alongside the third-generation iPad Pro, and is used with most iPad models that contain a USB-C port (excluding the tenth-generation iPad). It uses a magnetic connector on the side of the tablet for charging rather than a Lightning connector, and includes touch-sensitive areas that can be tapped to perform actions within supported apps.
Apple Pencil:
Apple has promoted the Pencil as being oriented towards creative work and productivity; during its unveiling, the Pencil's drawing capabilities were demonstrated using the mobile version of Adobe Photoshop, and its document-annotation capabilities were shown on several Microsoft Office apps.
Specifications:
First generation The Apple Pencil has pressure sensitivity and angle detection, and it was designed for low latency to enable smooth marking on the screen. The Pencil and the user's fingers can be used simultaneously, with input from the user's palm rejected. One end of the device has a magnetically fastened removable cap that covers a Lightning connector used for charging from an iPad's Lightning port. The initial charge lasts about twelve hours, but fifteen seconds of charging provides sufficient power for 30 minutes of use. It also ships with a female-to-female Lightning adapter that allows it to be charged from standard Lightning cables. The Apple Pencil uses an STMicroelectronics STM32L151UCY6 ultra-low-power 32-bit RISC ARM-based Cortex-M3 MCU running at 32 MHz with 64 KB of flash memory, a Bosch Sensortec BMA280 3-axis accelerometer and a Cambridge Silicon Radio (Qualcomm) CSR1012A05 Bluetooth Smart IC for its Bluetooth connection to the iPad. It is powered by a rechargeable 3.82 V, 0.329 Wh lithium-ion battery.
Specifications:
The first-generation Apple Pencil is compatible with Lightning-equipped iPad models, including the first- and second-generation iPad Pro models, third-generation iPad Air, fifth-generation iPad Mini, sixth-generation 9.7-inch iPad, and the seventh-, eighth-, and ninth-generation 10.2-inch iPad models. It also supports the tenth-generation, 10.9-inch iPad released in 2022, but requires a dongle (similar to the aforementioned Lightning adapter) to connect it to a USB-C cable for charging. Apple began to bundle this dongle with Pencil units in October 2022, and it can be purchased separately by existing owners.
Specifications:
Second generation On October 30, 2018, Apple announced an updated Pencil alongside the third-generation iPad Pro. It is similar in design and specifications to the first model, but without the detachable connector, and part of the stylus is flattened to inhibit rolling. It contains tap-sensitive zones on its sides that can be mapped to functions within apps. The sixth-generation iPad Pro added the ability (known as Hover) to detect the position and angle of the Pencil up to 12 millimetres (0.47 in) above the screen. Custom laser engraving is available when purchased via the Apple Store online. Rather than a physical Lightning connector, the second-generation Pencil is paired and charged using a proprietary magnetic wireless charging connector on the tablet. As such, it is only supported by the third-, fourth-, fifth- and sixth-generation iPad Pro, sixth-generation iPad Mini, and the fourth- and fifth-generation iPad Air. All of these models have USB-C connectors instead of Lightning, making them physically incompatible with the first-generation Pencil. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rabbit pie**
Rabbit pie:
Rabbit pie is a game pie consisting of rabbit meat in a gravy with other ingredients (typically onions, celery and carrots) enclosed in a pastry crust. Rabbit pie is part of traditional American and English cuisine. It has recently found renewed popularity.
Ingredients:
Wild rabbit, as opposed to farmed, is most often used as it is easily and affordably obtained, and is described as more flavoursome. Along with rabbit meat, ingredients of the filling of a rabbit pie typically include onions, celery and carrots. Other ingredients may include prunes, bacon and cider. Australian recipes for rabbit pie sometimes include the food paste Vegemite as an ingredient.
In culture:
Rabbit pie was a staple dish of the American pioneers. Thanks to the increasing demand for wild and fresh ingredients, rabbit pie is often seen on the menus of fashionable restaurants and gastropubs. Two huge rabbit pies are part of traditional Easter celebrations in the English village of Hallaton, Leicestershire. In Beatrix Potter's children's book The Tale of Peter Rabbit, Peter Rabbit and his siblings are warned "[not to] go into Mr. McGregor's garden" because their father "had an accident there; he was put in a pie by Mrs. McGregor." "Rabbit pie day" is ostensibly invoked in the song Run, Rabbit, Run. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SureThing**
SureThing:
SureThing is a line of label printing software created by MicroVision Development. Its most popular program is SureThing CD Labeler, which is designed to produce CD and DVD labels as well as LightScribe writing. SureThing CD Labeler allows clipart and images to be added to labels to improve their design. The program supports playlists as well. SureThing has pre-produced templates for LightScribe labels, 45 rpm vinyl records, CDs, DVDs, and pocket CDs. It allows customers to create song labels electronically from the playlist of a CD player or other device. In 1999, Electronic Musician's David Rubin wrote that the "SureThing label applicator is arguably the weakest part of the package. Designed around a CD jewel case, it's awkward to use and susceptible to damage" and that "[c]learly, the SureThing software is the main reason that someone would buy this kit". Katherine Stevenson wrote in a 2002 review of SureThing that the "handy preview and browse capabilities make the vast number of options manageable" and that "Depending how flat the label lies, and how evenly you apply pressure with your fingers, you may still end up with wrinkles". In 2002, SureThing CD Labeler received awards from SharewareJunkies for "Best Program of the Year" and "Best Windows Program". A 2005 review found that "SureThing is a bit better, but lacks niceties like the alignment tools we've all grown accustomed to from the menu design functions in DVD-authoring programs." Sally Wiener Grotta and Daniel Grotta wrote in PC World in 2009 that "SureThing CD Labeler is an intuitive, easy-to-learn program" and "SureThing can make the difference between having your CDs and DVDs look homemade or professional". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Semantic lexicon**
Semantic lexicon:
A semantic lexicon is a digital dictionary of words labeled with semantic classes so associations can be drawn between words that have not previously been encountered. Semantic lexicons are built upon semantic networks, which represent the semantic relations between words. The difference between a semantic lexicon and a semantic network is that a semantic lexicon has definitions for each word, or a "gloss".
Structure:
Semantic lexicons are made up of lexical entries. These entries are not orthographic, but semantic, eliminating issues of homonymy and polysemy. These lexical entries are interconnected with semantic relations, such as hyperonymy, hyponymy, meronymy, or troponymy. Synonymous entries are grouped together in what the Princeton WordNet calls "synsets". Most semantic lexicons are made up of four different "sub-nets": nouns, verbs, adjectives, and adverbs, though some researchers have taken steps to add an "artificial node" interconnecting the sub-nets.
Structure:
Nouns Nouns are ordered into a taxonomy, structured as a hierarchy in which the broadest and most encompassing noun, such as "thing", is located at the top, with nouns becoming more and more specific the further they are from the top. The very top noun in a semantic lexicon is called a unique beginner. The most specific nouns (those that do not have any subordinates) are terminal nodes. Semantic lexicons also distinguish between types, where a type of something has the characteristics of that thing (such as a Rhodesian Ridgeback being a type of dog), and instances, where something is an example of a thing (such as Dave Grohl being an instance of a musician). Instances are always terminal nodes because they are solitary and don’t have other words or ontological categories belonging to them. Semantic lexicons also address meronymy, which is a “part-to-whole” relationship, such as keys being part of a laptop. The necessary attributes that define a specific entry are also necessarily present in that entry’s hyponym. So, if a computer has keys, and a laptop is a type of computer, then a laptop must have keys. However, there are many instances where this distinction can become vague. A good example of this is the item chair. Most would define a chair as having legs and a seat (as in the part one sits on). However, there are some very “artistic” and “modern” chairs in overpriced boutiques that do not have legs at all. Beanbags also do not have legs, but few would argue that they aren't chairs. Questions like this are the core questions that drive research and work in the fields of taxonomy and ontology.
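These hypernym (type-of), instance, and meronym relations can be inspected programmatically. Below is a short sketch using NLTK's interface to the Princeton WordNet; it assumes the nltk package and its wordnet corpus are installed, and the specific synsets returned may differ between WordNet versions.

```python
# Browsing the noun sub-net of WordNet with NLTK.
from nltk.corpus import wordnet as wn

dog = wn.synset('dog.n.01')
print(dog.definition())       # the "gloss" that a semantic lexicon adds
print(dog.hypernyms())        # more general synsets (type-of relations)
print(dog.part_meronyms())    # part-to-whole relations for this synset

musician = wn.synset('musician.n.01')
print(musician.instance_hyponyms())  # instances, if any, are terminal nodes;
                                     # they may attach to more specific synsets
```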
Structure:
Verbs Verb synsets are arranged much like their noun counterparts: the more general and encompassing verbs are near the top of the hierarchy while troponyms (verbs that describe a more specific way of doing something) are grouped beneath. Verb specificity moves along a vector, with the verbs becoming more and more specific in reference to a certain quality. For example, the set "walk / run / sprint" becomes more specific in terms of speed, and "dislike / hate / abhor" becomes more specific in terms of the intensity of the emotion.
Structure:
The ontological groupings and separations of verbs are far more arguable than those of their noun counterparts. It is widely accepted that a dog is a type of animal and that a stool is a type of chair, but it can be argued that abhor is on the same emotional plane as hate (that they are synonyms and not super/subordinates). It can also be argued that love and adore are synonyms, or that one is more specific than the other. Thus, the relations between verbs are not as agreed upon as those of nouns.
Structure:
Another attribute of verb synset relations is that verbs are also ordered into pairs. In these pairs, one verb necessarily entails the other, in the way that massacre entails kill, and know entails believe. These verb pairs can be troponyms and their superordinates, as in the first example, or they can be in completely different ontological categories, as in the second example.
Structure:
Adjectives Adjective synset relations are very similar to verb synset relations. They are not quite as neatly hierarchical as the noun synset relations, and they have fewer tiers and more terminal nodes. However, there are generally fewer terminal nodes per ontological category in adjective synset relations than in verb synset relations. Adjectives in semantic lexicons are organized in word pairs as well, with the difference being that their word pairs are antonyms instead of entailments. More generic polar adjectives such as hot and cold, or happy and sad, are paired. Then other adjectives that are semantically similar are linked to each of these words. Hot is linked to warm, heated, sizzling, and sweltering, while cold is linked to cool, chilly, freezing, and nippy. These semantically similar adjectives are considered indirect antonyms to the opposite polar adjective (i.e. nippy is an indirect antonym to hot). Adjectives that are derived from a verb or a noun are also directly linked to said verb or noun across sub-nets. For example, enjoyable is linked to the semantically similar adjectives agreeable and pleasant, as well as to its origin verb, enjoy.
Structure:
Adverbs There are very few adverbs accounted for in semantic lexicons. This is because most adverbs are taken directly from their adjective counterparts, in both meaning and form, and changed only morphologically (i.e. happily is derived from happy, and luckily is derived from lucky, which is derived from luck). The only adverbs that are accounted for specifically are ones without these connections, such as really, mostly, and hardly.
Challenges facing semantic lexicons:
The effects of the Princeton WordNet project extend far past English, though most research in the field revolves around the English language. Creating a semantic lexicon for other languages has proved to be very useful for Natural Language Processing applications. One of the main focuses of research in semantic lexicons is linking lexicons of different languages to aid in machine translation. The most common approach is to attempt to create a shared ontology that serves as a “middleman” of sorts between semantic lexicons of two different languages. This is an extremely challenging and as-yet-unsolved issue in the machine translation field. One issue arises from the fact that no two languages are word-for-word translations of each other. That is, every language has some sort of structural or syntactic difference from every other. In addition, languages often have words that don’t translate easily into other languages, and certainly not with an exact word-to-word match. Proposals have been made to create a set framework for wordnets. Research has shown that every known human language has some sort of concept resembling synonymy, hyponymy, meronymy, and antonymy. However, every idea so far proposed has been met with criticism for using a pattern that works best for English and less well for other languages. Another obstacle in the field is that no solid guidelines exist for semantic lexicon framework and contents. Each lexicon project in each different language has had a slightly (or not so slightly) different approach to its wordnet. There is not even an agreed-upon definition of what a “word” is. Orthographically, words are defined as a string of letters with spaces on either side, but semantically this becomes a very debated subject. For example, it is not difficult to define dog or rod as words, but what about guard dog or lightning rod? The latter two examples would be considered orthographically separate words, though semantically they make up one concept: one is a type of dog and one is a type of rod. In addition to these confusions, wordnets are also idiosyncratic, in that they do not consistently label items. They are redundant, in that they often have several words assigned to each meaning (synsets). They are also open-ended, in that they often focus on and extend into terminology and domain-specific vocabulary.
List of semantic lexicons:
WordNet EuroWordNet Multilingual Central Repository Global Wordnet MindNet | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Device configuration overlay**
Device configuration overlay:
Device configuration overlay (DCO) is a hidden area on many of today's hard disk drives (HDDs). Usually when information is stored in either the DCO or host protected area (HPA), it is not accessible by the BIOS (or UEFI), OS, or the user. However, certain tools can be used to modify the HPA or DCO. The system uses the IDENTIFY_DEVICE command to determine the supported features of a given hard drive, but the DCO can report to this command that supported features are nonexistent or that the drive is smaller than it actually is. To determine the actual size and features of a disk, the DEVICE_CONFIGURATION_IDENTIFY command is used, and the output of this command can be compared to the output of IDENTIFY_DEVICE to see if a DCO is present on a given hard drive. Most major tools will remove the DCO in order to fully image a hard drive, using the DEVICE_CONFIGURATION_RESET command. This permanently alters the disk, unlike with the host protected area (HPA), which can be temporarily removed for a power cycle.
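A hedged sketch of this comparison from a Linux environment follows, shelling out to the hdparm utility (which exposes IDENTIFY_DEVICE via -I and DEVICE_CONFIGURATION_IDENTIFY via --dco-identify). The exact output strings parsed below vary between hdparm versions and are assumptions, as is the device path; running it requires root privileges.

```python
# Detect a possible DCO by comparing the visible sector count (IDENTIFY_DEVICE,
# via `hdparm -I`) with the real one (DEVICE_CONFIGURATION_IDENTIFY, via
# `hdparm --dco-identify`). Output formats are assumptions and may vary.
import re
import subprocess

def sectors(args, pattern, device="/dev/sdX"):   # /dev/sdX is a placeholder
    out = subprocess.run(["hdparm", *args, device],
                         capture_output=True, text=True).stdout
    match = re.search(pattern, out)
    return int(match.group(1)) if match else None

visible = sectors(["-I"], r"LBA48\s+user addressable sectors:\s+(\d+)")
real = sectors(["--dco-identify"], r"Real max sectors:\s+(\d+)")

if visible is not None and real is not None and real > visible:
    print(f"possible DCO: drive hides {real - visible} sectors")
```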
Uses:
The Device Configuration Overlay (DCO), which was first introduced in the ATA-6 standard, "allows system vendors to purchase HDDs from different manufacturers with potentially different sizes, and then configure all HDDs to have the same number of sectors. An example of this would be using DCO to make an 80-gigabyte HDD appear as a 60-gigabyte HDD to both the (OS) and the BIOS.... Given the potential to place data in these hidden areas, this is an area of concern for computer forensics investigators. An additional issue for forensic investigators is imaging the HDD that has the HPA and/or DCO on it. While certain vendors claim that their tools are able to both properly detect and image the HPA, they are either silent on the handling of the DCO or indicate that this is beyond the capabilities of their tool."
DCO Software tools:
Detection tools HDAT2, a free software program for MS-DOS, can be used to create/remove a Host Protected Area (HPA) (using the command SET MAX) and to create/remove a DCO hidden area (using the command DCO MODIFY). It can also perform other functions on the DCO.
Data Synergy's freeware ATATool utility can be used to detect a DCO from a Windows environment. Recent versions allow a DCO to be created, removed or frozen.
DCO Software tools:
Software imaging tools Guidance Software's EnCase comes with a Linux-based hard-drive imaging tool called LinEn. LinEn 6.01 was validated by the National Institute of Justice (NIJ) in October 2008, and they found that "The tool does not remove either Host Protected Areas (HPAs) or DCOs. However, the Linux test environment automatically removed the HPA on the test drive, allowing the tool to image sectors hidden by an HPA. The tool did not acquire sectors hidden by a DCO." AccessData's FTK Imager 2.5.3.14 was validated by the NIJ in June 2008. Their findings indicated that "If a physical acquisition is made of a drive with hidden sectors in either a Host Protected Area or a Device Configuration Overlay, the tool does not remove either an HPA or a DCO. The tool did not acquire sectors hidden by an HPA." Hardware imaging tools A variety of hardware imaging tools have been found to successfully detect and remove DCOs. The NIJ routinely tests digital forensics tools; these publications can be found at www.ojp.gov or from NIST at https://www.nist.gov/itl/ssd/software-quality-group/computer-forensics-tool-testing-program-cftt | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**1-Testosterone**
1-Testosterone:
1-Testosterone (abbreviated and nicknamed as 1-Testo, 1-T), also known as δ1-dihydrotestosterone (δ1-DHT), as well as dihydroboldenone, is a synthetic anabolic–androgenic steroid (AAS) and a 5α-reduced derivative of boldenone (Δ1-testosterone). It differs from testosterone by having a 1(2)-double bond instead of a 4(5)-double bond in its A ring. It was legally sold online in the United States until 2005, when it was reclassified as a Schedule III drug.
Pharmacology:
Pharmacodynamics A 2006 study determined that 1-testosterone has a high androgenic and anabolic potency even without being metabolized, so it can be characterized as a typical anabolic steroid. 1-Testosterone binds in a manner that is highly selective to the androgen receptor (AR) and has a high potency to stimulate AR-dependent transactivation. In vivo, an equimolar dose of 1-testosterone has the same potency to stimulate the growth of the prostate, the seminal vesicles and the androgen-sensitive levator ani muscle as the reference anabolic steroid testosterone propionate, but, unlike testosterone propionate, 1-testosterone also increases liver weight.
Chemistry:
1-Testosterone, IUPAC name 17β-hydroxy-5α-androst-1-en-3-one, also known as 4,5α-dihydro-δ1-testosterone (Δ1-DHT) or as 5α-androst-1-en-17β-ol-3-one, is a synthetic androstane steroid and a derivative of dihydrotestosterone (DHT).
Derivatives Two prohormones of 1-testosterone are 1-androstenediol and 1-androstenedione, the latter of which may be synthesized from stanolone acetate. Mesabolone is a ketal made from 1-testosterone. 1-Testosterone is also used to synthesize mestanolone and metenolone. Methyl-1-testosterone is the 17α-methyl derivative of 1-testosterone.
Detection in body fluids Doping with 1-testosterone can be detected in urine samples using gas chromatography. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gallium(III) fluoride**
Gallium(III) fluoride:
Gallium(III) fluoride (GaF3) is a chemical compound. It is a white solid that melts under pressure above 1000 °C but sublimes around 950 °C. It has the FeF3 structure where the gallium atoms are 6-coordinate. GaF3 can be prepared by reacting F2 or HF with Ga2O3 or by thermal decomposition of (NH4)3GaF6. GaF3 is virtually insoluble in water. Solutions of GaF3 in HF can be evaporated to form the trihydrate, GaF3·3H2O, which on heating gives a hydrated form of GaF2(OH). Gallium(III) fluoride reacts with mineral acids to form hydrofluoric acid. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**C1orf141**
C1orf141:
Chromosome 1 open reading frame 141, or C1orf141, is a protein which, in humans, is encoded by the gene C1orf141. It is a precursor protein that becomes active after cleavage. The function is not yet well understood, but it is suggested to be active during development.
Gene:
Locus This gene is located on chromosome 1 at position 1p31.3. It is encoded on the antisense strand of DNA spanning from 67,092,176 to 67,141,646 and has 10 total exons. It overlaps slightly with the gene IL23R being encoded on the sense strand.
Gene:
Transcription regulation A specific promoter region has not been predicted for C1orf141, so the 1,000 base pairs upstream of the start of transcription were analyzed for transcription factor binding sites. The transcription factors below represent a subset of the binding sites found within this region, giving an idea of the kinds of factors that could bind to the promoter: vertebrate TATA binding protein factor, CCAAT binding factor, Lim homeodomain factor Cart-1, homeodomain transcription factor, fork head domain factor, nuclear receptor subfamily, and Brn POU domain.
mRNA:
Alternative Splicing The C1orf141 gene appears to have two common isoforms and seven less common transcript variants.
Protein:
The primary encoded precursor protein (C1orf141 isoform 1) consists of 400 amino acid residues, encoded by a 2,177-base-pair transcript. The transcript consists of 7 exons, and the protein contains a domain of unknown function, DUF4545. Its predicted molecular mass is 54.4 kDa and its predicted isoelectric point is 9.63.
Composition The C1orf141 precursor protein has more lysine residues and fewer glycine residues than expected when compared to other human proteins. The sequence is 11.7% lysine and only 2.1% glycine.
Post-translational modifications C1orf141 is modified post-translationally to form a mature protein product. It undergoes O-linked glycosylation, sumoylation, glycation, and phosphorylation. One N-terminal cleavage occurs, followed by acetylation. Propeptide cleavage occurs at the start site of the final exon.
Protein:
Structure The secondary structure for uncleaved C1orf141 consists primarily of alpha helices with a few small segments of beta sheets. These helices can be seen in the model of the tertiary structure predicted by the I-TASSER program. The program Phyre2 also predicts the protein to be made up primarily of alpha helices. After propeptide cleavage of C1orf141, I-TASSER predicts that only alpha helices remain.
Protein:
Interactions There are currently no experimentally confirmed interactions for C1orf141. The STRING database of protein interactions identified, through text mining, ten proteins that potentially interact with C1orf141: SALT1, C8orf74, SHCBP1L, ACTL9, RBM44, CCDC116, ADO, WDR78, ZNF365, and SPATA45. Investigation of the papers from which these predictions were mined did not reveal a solid link for any of the identified proteins.
Expression:
C1orf141 is expressed in 30 different tissues but primarily in the testes. Other tissues where expression is above baseline levels are the brain, lungs, and ovaries.
Localization The subcellular localization for C1orf141 is predicted to be in the nucleus. There are two nuclear localization signals within the protein sequence, one of which stays present after propeptide cleavage.
Function:
The function of C1orf141 is not yet fully understood and has not been experimentally confirmed. However, expression data show that the protein is active in some developmental stages. RNA-Seq data taken at different stages of development show expression at varying levels throughout. In the protein's EST profile, expression is seen at higher levels in the fetal developmental stage than in the adult. Microarray data for cumulus cells during natural and stimulated in vitro fertilization show relatively high levels of expression. There is no significant change in expression in adult tissue disease states.
Homology:
Paralogs There are no paralogs for C1orf141. Orthologs Orthologous sequences are found primarily in other mammalian species. The most distant ortholog identified through an NCBI BLAST search is a reptilian species, which is the only non-mammalian species found. The list contains a subset of the species identified as orthologs, chosen to display the diversity of species in which orthologs can be found. Each species was compared to the human C1orf141 isoform that includes every coding exon, isoform X1.
Homology:
Evolutionary History Using the molecular clock hypothesis, the m value (the number of corrected amino acid changes per 100 residues) was calculated for C1orf141 and plotted against the divergence times of species. When compared to the same m-value plots for hemoglobin, fibrinogen alpha chain, and cytochrome c, it is clear that the C1orf141 gene is evolving at a faster rate than all three.
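A small sketch of the m-value calculation described here, using the standard Poisson correction m = −100·ln(1 − p), where p is the observed fraction of differing residues; the percent-identity figures below are placeholders, not the article's data.

```python
# Poisson-corrected amino acid changes per 100 residues (molecular clock).
import math

def corrected_changes(percent_identity):
    p = 1.0 - percent_identity / 100.0   # observed fraction of differences
    return -100.0 * math.log(1.0 - p)    # corrects for multiple hits per site

for ident in (95.0, 70.0, 40.0):         # placeholder alignment identities
    print(f"{ident:5.1f}% identity -> m = {corrected_changes(ident):6.1f}")
```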
**Insignificant Details of a Random Episode**
Insignificant Details of a Random Episode:
Insignificant Details of a Random Episode (Russian: Незначительные подробности случайного эпизода) is a 2011 Russian comedy short film directed by Mikhail Mestetsky.
Plot:
Two trains stop opposite each other, and strange relationships begin to develop between the passengers.
Cast:
Kirill Käro, Miriam Sekhon, Ilya Zaslavskiy | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flux balance analysis**
Flux balance analysis:
Flux balance analysis (FBA) is a mathematical method for simulating metabolism in genome-scale reconstructions of metabolic networks. In comparison to traditional methods of modeling, FBA is less intensive in terms of the input data required for constructing the model. Simulations performed using FBA are computationally inexpensive and can calculate steady-state metabolic fluxes for large models (over 2000 reactions) in a few seconds on modern personal computers. The related method of metabolic pathway analysis seeks to find and list all possible pathways between metabolites.
Flux balance analysis:
FBA finds applications in bioprocess engineering to systematically identify modifications to the metabolic networks of microbes used in fermentation processes that improve product yields of industrially important chemicals such as ethanol and succinic acid. It has also been used for the identification of putative drug targets in cancer and pathogens, rational design of culture media, and host–pathogen interactions. The results of FBA can be visualized using flux maps, in which the thickness of each arrow is proportional to the flux through the reaction; such a map can illustrate, for example, the steady-state fluxes carried by the reactions of glycolysis.
Flux balance analysis:
FBA formalizes the system of equations describing the concentration changes in a metabolic network as the dot product of a matrix of the stoichiometric coefficients (the stoichiometric matrix S) and the vector v of the unsolved fluxes. The right-hand side of the dot product is a vector of zeros representing the system at steady state. Linear programming is then used to calculate a solution of fluxes corresponding to the steady state.
History:
Some of the earliest work in FBA dates back to the early 1980s. Papoutsakis demonstrated that it was possible to construct flux balance equations using a metabolic map. It was Watson, however, who first introduced the idea of using linear programming and an objective function to solve for the fluxes in a pathway. The first significant study was subsequently published by Fell and Small, who used flux balance analysis together with more elaborate objective functions to study the constraints in fat synthesis.
Simulations:
FBA is not computationally intensive, taking on the order of seconds to calculate optimal fluxes for biomass production for a typical network (around 2000 reactions). This means that the effect of deleting reactions from the network and/or changing flux constraints can be sensibly modelled on a single computer.
Simulations:
Gene/reaction deletion and perturbation studies Single reaction deletion Single reaction deletion is a frequently used technique for searching a metabolic network for reactions that are particularly critical to the production of biomass. By removing each reaction in a network in turn and measuring the predicted flux through the biomass function, each reaction can be classified as either essential (if the flux through the biomass function is substantially reduced) or non-essential (if the flux through the biomass function is unchanged or only slightly reduced).
Simulations:
Pairwise reaction deletion Pairwise reaction deletion of all possible pairs of reactions is useful when looking for drug targets, as it allows the simulation of multi-target treatments, either by a single drug with multiple targets or by drug combinations. Double deletion studies can also quantify the synthetic lethal interactions between different pathways providing a measure of the contribution of the pathway to overall network robustness.
Simulations:
Single and multiple gene deletions Genes are connected to enzyme-catalyzed reactions by Boolean expressions known as Gene-Protein-Reaction expressions (GPR). Typically a GPR takes the form (Gene A AND Gene B) to indicate that the products of genes A and B are protein sub-units that assemble to form the complete protein and therefore the absence of either would result in deletion of the reaction. On the other hand, if the GPR is (Gene A OR Gene B) it implies that the products of genes A and B are isozymes. Therefore, it is possible to evaluate the effect of single or multiple gene deletions by evaluation of the GPR as a Boolean expression. If the GPR evaluates to false, the reaction is constrained to zero in the model prior to performing FBA. Thus gene knockouts can be simulated using FBA.
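A minimal sketch of this GPR evaluation step follows; the reaction names, gene symbols, and rule strings are invented for illustration.

```python
# Evaluating Gene-Protein-Reaction (GPR) rules for a gene-knockout simulation.
# Reaction names, gene symbols, and rules below are invented examples.
gpr_rules = {
    "R_hex1": "gA and gB",   # subunits: both gene products required
    "R_pgi":  "gC or gD",    # isozymes: either gene product suffices
}
genes = ("gA", "gB", "gC", "gD")

def reaction_active(rule, knocked_out):
    # map each gene symbol to True/False, then evaluate the Boolean rule
    env = {g: (g not in knocked_out) for g in genes}
    return eval(rule, {"__builtins__": {}}, env)

# a reaction whose GPR evaluates to False gets its flux constrained to zero
for rxn, rule in gpr_rules.items():
    if not reaction_active(rule, knocked_out={"gA"}):
        print(f"{rxn}: constrain flux to 0 before running FBA")
```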
Simulations:
Interpretation of gene and reaction deletion results The utility of reaction inhibition and deletion analyses becomes most apparent if a gene-protein-reaction matrix has been assembled for the network being studied with FBA. The gene-protein-reaction matrix is a binary matrix connecting genes with the proteins made from them. Using this matrix, reaction essentiality can be converted into gene essentiality, indicating the gene defects which may cause a certain disease phenotype or the proteins/enzymes which are essential (and thus the most promising drug targets in pathogens). However, the gene-protein-reaction matrix does not specify the Boolean relationship between genes with respect to the enzyme; instead it merely indicates an association between them. Therefore, it should be used only if the Boolean GPR expression is unavailable.
Simulations:
Reaction inhibition The effect of inhibiting a reaction, rather than removing it entirely, can be simulated in FBA by restricting the allowed flux through it. The effect of an inhibition can be classified as lethal or non-lethal by applying the same criteria as in the case of a deletion where a suitable threshold is used to distinguish “substantially reduced” from “slightly reduced”. Generally the choice of threshold is arbitrary but a reasonable estimate can be obtained from growth experiments where the simulated inhibitions/deletions are actually performed and growth rate is measured.
Simulations:
Growth media optimization To design optimal growth media with respect to enhanced growth rates or useful by-product secretion, it is possible to use a method known as Phenotypic Phase Plane (PhPP) analysis. PhPP involves applying FBA repeatedly on the model while co-varying the nutrient uptake constraints and observing the value of the objective function (or by-product fluxes). PhPP makes it possible to find the optimal combination of nutrients that favor a particular phenotype or a mode of metabolism resulting in higher growth rates or secretion of industrially useful by-products. The predicted growth rates of bacteria in varying media have been shown to correlate well with experimental results, as well as to define precise minimal media for the culture of Salmonella typhimurium. Host-pathogen interactions The human microbiota is a complex system with as many as 400 trillion microbes and bacteria interacting with each other and the host. To understand key factors in this system, a multi-scale, dynamic flux balance analysis has been proposed, since FBA is comparatively inexpensive computationally.
Mathematical description:
In contrast to the traditionally followed approach of metabolic modeling using coupled ordinary differential equations, flux balance analysis requires very little information in terms of the enzyme kinetic parameters and concentrations of metabolites in the system. It achieves this by making two assumptions: steady state and optimality. The first assumption is that the modeled system has entered a steady state, where the metabolite concentrations no longer change, i.e. in each metabolite node the producing and consuming fluxes cancel each other out. The second assumption is that the organism has been optimized through evolution for some biological goal, such as optimal growth or conservation of resources. The steady-state assumption reduces the system to a set of linear equations, which is then solved to find a flux distribution that satisfies the steady-state condition subject to the stoichiometry constraints while maximizing the value of a pseudo-reaction (the objective function) representing the conversion of biomass precursors into biomass.
Mathematical description:
The steady-state assumption dates to the ideas of material balance developed to model the growth of microbial cells in fermenters in bioprocess engineering. During microbial growth, a substrate consisting of a complex mixture of carbon, hydrogen, oxygen and nitrogen sources along with trace elements are consumed to generate biomass.
Mathematical description:
The material balance model for this process becomes:

$$\text{Input} = \text{Output} + \text{Accumulation}$$

If we consider the system of microbial cells to be at steady state, then we may set the accumulation term to zero and reduce the material balance equations to simple algebraic equations. In such a system, substrate becomes the input to the system, which is consumed, and biomass is produced, becoming the output from the system. The material balance may then be represented as:

$$\text{Input} = \text{Output}, \qquad \text{Input} - \text{Output} = 0$$

Mathematically, the algebraic equations can be represented as a dot product of a matrix of coefficients and a vector of the unknowns; since the steady-state assumption sets the accumulation term to zero, the system can be written as:

$$A \cdot x = 0$$

Extending this idea to metabolic networks, it is possible to represent a metabolic network as a stoichiometrically balanced set of equations. Moving to the matrix formalism, we can represent the equations as the dot product of a matrix of stoichiometric coefficients (the stoichiometric matrix S) and the vector of fluxes v as the unknowns, and set the right-hand side to 0, implying the steady state.
Mathematical description:
$$S \cdot v = 0$$

Metabolic networks typically have more reactions than metabolites, and this gives an under-determined system of linear equations containing more variables than equations. The standard approach to solving such under-determined systems is to apply linear programming.
Mathematical description:
Linear programs are problems that can be expressed in canonical form:

maximize $c^T x$ subject to $Ax \le b$ and $x \ge 0$

where x represents the vector of variables (to be determined), c and b are vectors of (known) coefficients, A is a (known) matrix of coefficients, and $(\cdot)^T$ is the matrix transpose. The expression to be maximized or minimized is called the objective function ($c^T x$ in this case). The inequalities $Ax \le b$ are the constraints which specify a convex polytope over which the objective function is to be optimized.
Mathematical description:
Linear Programming requires the definition of an objective function. The optimal solution to the LP problem is considered to be the solution which maximizes or minimizes the value of the objective function depending on the case in point. In the case of flux balance analysis, the objective function Z for the LP is often defined as biomass production. Biomass production is simulated by an equation representing a lumped reaction that converts various biomass precursors into one unit of biomass.
Mathematical description:
Therefore, the canonical form of a flux balance analysis problem would be:

maximize $c^T v$ subject to $Sv = 0$ and $\text{lowerbound} \le v \le \text{upperbound}$

where v represents the vector of fluxes (to be determined) and S is a (known) matrix of coefficients. The expression to be maximized or minimized is called the objective function ($c^T v$ in this case). The inequalities $\text{lowerbound} \le v$ and $v \le \text{upperbound}$ define, respectively, the minimal and maximal rates of flux for every reaction corresponding to the columns of the S matrix. These rates can be experimentally determined to constrain and improve the predictive accuracy of the model even further, or they can be set to an arbitrarily high value indicating no constraint on the flux through the reaction.
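In this canonical form, a complete FBA calculation is only a few lines with any LP solver. The following sketch uses SciPy's `linprog` on a made-up three-reaction toy network (uptake, conversion, biomass drain); the stoichiometry, bounds, and objective are illustrative, not taken from any real model:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network. Rows: metabolites A, B. Columns: reactions
# R1: -> A (uptake), R2: A -> B, R3: B -> (biomass drain, the objective)
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
c = np.array([0.0, 0.0, 1.0])        # objective: maximize flux through R3
bounds = [(0.0, 10.0),               # uptake capped at 10 flux units
          (0.0, 1000.0),             # effectively unconstrained
          (0.0, 1000.0)]

# linprog minimizes, so negate c; A_eq v = 0 encodes the steady state S.v = 0.
res = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds,
              method="highs")
print(res.x, -res.fun)               # optimal flux vector and objective value
```

Here the optimum simply pushes all three fluxes to the uptake limit of 10, which is the expected behaviour for a linear chain.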
Mathematical description:
The main advantage of the flux balance approach is that it does not require any knowledge of the metabolite concentrations, or more importantly, the enzyme kinetics of the system; the homeostasis assumption precludes the need for knowledge of metabolite concentrations at any time as long as that quantity remains constant, and additionally it removes the need for specific rate laws since it assumes that at steady state, there is no change in the size of the metabolite pool in the system. The stoichiometric coefficients alone are sufficient for the mathematical maximization of a specific objective function.
Mathematical description:
The objective function is essentially a measure of how each component in the system contributes to the production of the desired product. The product itself depends on the purpose of the model, but one of the most common examples is the study of total biomass. A notable example of the success of FBA is the ability to accurately predict the growth rate of the prokaryote E. coli when cultured in different conditions. In this case, the metabolic system was optimized to maximize the biomass objective function. However this model can be used to optimize the production of any product, and is often used to determine the output level of some biotechnologically relevant product. The model itself can be experimentally verified by cultivating organisms using a chemostat or similar tools to ensure that nutrient concentrations are held constant. Measurements of the production of the desired objective can then be used to correct the model.
Mathematical description:
A good description of the basic concepts of FBA can be found in the freely available supplementary material to Edwards et al. 2001, hosted on the Nature website. Further sources include the book "Systems Biology" by B. Palsson, dedicated to the subject, and a useful tutorial and paper by J. Orth. Many other sources of information on the technique exist in the published scientific literature, including Lee et al. 2006, Feist et al. 2008, and Lewis et al. 2012.
Model preparation and refinement:
The key parts of model preparation are: creating a metabolic network without gaps, adding constraints to the model, and finally adding an objective function (often called the Biomass function), usually to simulate the growth of the organism being modelled.
Model preparation and refinement:
Metabolic network and software tools Metabolic networks can vary in scope from those describing a single pathway, up to the cell, tissue or organism. The main requirement of a metabolic network that forms the basis of an FBA-ready network is that it contains no gaps. This typically means that extensive manual curation is required, making the preparation of a metabolic network for flux-balance analysis a process that can take months or years. However, recent advances such as so-called gap-filling methods can reduce the required time to weeks or months.
Model preparation and refinement:
Software packages for the creation of FBA models include Pathway Tools/MetaFlux, Simpheny, MetNetMaker, COBRApy, CarveMe, MIOM, and COBREXA.jl. Generally, models are created in BioPAX or SBML format so that further analysis or visualization can take place in other software, although this is not a requirement.
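For orientation, building and solving the same kind of toy model in COBRApy might look roughly like the following. This is a sketch based on COBRApy's documented basics; the metabolite and reaction IDs are invented:

```python
import cobra

model = cobra.Model("toy")
a = cobra.Metabolite("A_c")
b = cobra.Metabolite("B_c")

uptake = cobra.Reaction("R1")                 # -> A
uptake.add_metabolites({a: 1.0})
uptake.bounds = (0.0, 10.0)
convert = cobra.Reaction("R2")                # A -> B
convert.add_metabolites({a: -1.0, b: 1.0})
drain = cobra.Reaction("R3")                  # B -> (objective drain)
drain.add_metabolites({b: -1.0})

model.add_reactions([uptake, convert, drain])
model.objective = "R3"                        # set the objective by reaction id
solution = model.optimize()
print(solution.objective_value, solution.fluxes)
```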
Model preparation and refinement:
Constraints A key part of FBA is the ability to add constraints to the flux rates of reactions within networks, forcing them to stay within a range of selected values. This lets the model more accurately simulate real metabolism. The constraints belong to two subsets from a biological perspective; boundary constraints that limit nutrient uptake/excretion and internal constraints that limit the flux through reactions within the organism. In mathematical terms, the application of constraints can be considered to reduce the solution space of the FBA model. In addition to constraints applied at the edges of a metabolic network, constraints can be applied to reactions deep within the network. These constraints are usually simple; they may constrain the direction of a reaction due to energy considerations or constrain the maximum speed of a reaction due to the finite speed of all reactions in nature.
Model preparation and refinement:
Growth media constraints Organisms, and all other metabolic systems, require some input of nutrients. Typically the rate of uptake of nutrients is dictated by their availability (a nutrient that is not present cannot be absorbed), their concentration and diffusion constants (higher concentrations of quickly-diffusing metabolites are absorbed more quickly) and the method of absorption (such as active transport or facilitated diffusion versus simple diffusion).
Model preparation and refinement:
If the rate of absorption (and/or excretion) of certain nutrients can be experimentally measured then this information can be added as a constraint on the flux rate at the edges of a metabolic model. This ensures that nutrients that are not present or not absorbed by the organism do not enter its metabolism (the flux rate is constrained to zero) and also means that known nutrient uptake rates are adhered to by the simulation. This provides a secondary method of making sure that the simulated metabolism has experimentally verified properties rather than just mathematically acceptable ones.
Model preparation and refinement:
Thermodynamical reaction constraints
In principle, all reactions are reversible; in practice, however, reactions often effectively occur in only one direction. This may be due to a significantly higher concentration of reactants compared to the concentration of the products of the reaction. More often, it happens because the products of a reaction have a much lower free energy than the reactants, and therefore the forward direction of the reaction is favored.
Model preparation and refinement:
For ideal reactions, $-\infty < v_i < \infty$. For certain reactions a thermodynamic constraint can be applied, implying direction (in this case forward): $0 < v_i < \infty$. Realistically, the flux through a reaction cannot be infinite (given that enzymes in the real system are finite), which implies that $0 \le v_i \le v_{max}$.
Experimentally measured flux constraints
Certain flux rates can be measured experimentally ($v_{i,m}$), and the fluxes within a metabolic model can be constrained, within some error ($\varepsilon$), to ensure these known flux rates are accurately reproduced in the simulation:
Model preparation and refinement:
$$v_{i,m} - \varepsilon \le v_i \le v_{i,m} + \varepsilon$$

Flux rates are most easily measured for nutrient uptake at the edge of the network. Measurement of internal fluxes is possible using radioactively labelled or NMR-visible metabolites.
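In an LP formulation this simply means replacing the default bounds of the measured reaction with the interval above. Continuing the illustrative toy network used earlier (the measured value and error are invented):

```python
import numpy as np
from scipy.optimize import linprog

S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
c = np.array([0.0, 0.0, 1.0])

v_m, eps = 8.4, 0.5                       # hypothetical measured uptake +/- error
bounds = [(v_m - eps, v_m + eps),         # R1 must reproduce the measurement
          (0.0, 1000.0),
          (0.0, 1000.0)]

res = linprog(-c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print(res.x)                              # the whole chain is pinned near 8.4-8.9
```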
Model preparation and refinement:
Constrained FBA-ready metabolic models can be analyzed using software such as the COBRA toolbox (with implementations in MATLAB and Python), SurreyFBA, or the web-based FAME. Additional software packages have been listed elsewhere, and a comprehensive review of such software and its functionality is available. An open-source alternative is available in the R programming language as the packages abcdeFBA and sybil, which perform FBA and other constraint-based modeling techniques.
Model preparation and refinement:
Objective function
FBA can give a large number of mathematically acceptable solutions to the steady-state problem $Sv = 0$. However, solutions of biological interest are the ones which produce the desired metabolites in the correct proportion. The objective function defines the proportion of these metabolites. For instance, when modelling the growth of an organism, the objective function is generally defined as biomass. Mathematically, it is a column in the stoichiometry matrix whose entries place a "demand" on, or act as a "sink" for, biosynthetic precursors such as fatty acids, amino acids and cell wall components, which are present on the corresponding rows of the S matrix. These entries represent experimentally measured dry-weight proportions of cellular components, so this column becomes a lumped reaction that simulates growth and reproduction. The accuracy of experimental measurements therefore plays an essential role in the correct definition of the biomass function, and it makes the results of FBA biologically applicable by ensuring that the correct proportion of metabolites is produced by metabolism.
Model preparation and refinement:
When modeling smaller networks the objective function can be changed accordingly. An example of this would be in the study of the carbohydrate metabolism pathways where the objective function would probably be defined as a certain proportion of ATP and NADH and thus simulate the production of high energy metabolites by this pathway.
Model preparation and refinement:
Optimization of the objective/biomass function
Linear programming can be used to find a single optimal solution. The most common biological optimization goal for a whole-organism metabolic network would be to choose the flux vector v that maximises the flux through a biomass function composed of the constituent metabolites of the organism, placed into the stoichiometric matrix and denoted biomass (or simply $v_b$):

maximize $v_b$ subject to $Sv = 0$

Model preparation and refinement:
In the more general case, any reaction can be defined and added to the biomass function, with either the condition that it be maximised or minimised if a single "optimal" solution is desired. Alternatively, and in the most general case, a vector c can be introduced, which defines the weighted set of reactions that the linear programming model should aim to maximise or minimise:

maximize $c^T v$ subject to $Sv = 0$
Model preparation and refinement:
In the case of there being only a single separate biomass function/reaction within the stoichiometric matrix, c would simplify to all zeroes with a value of 1 (or any non-zero value) in the position corresponding to that biomass function. Where there are multiple separate objective functions, c would simplify to all zeroes with weighted values in the positions corresponding to all objective functions.
Model preparation and refinement:
Reducing the solution space – biological considerations for the system The analysis of the null space of matrices is implemented in software packages specialized for matrix operations such as Matlab and Octave. Determination of the null space of S tells us all the possible collections of flux vectors (or linear combinations thereof) that balance fluxes within the biological network. The advantage of this approach becomes evident in biological systems which are described by differential equation systems with many unknowns. The velocities in the differential equations above - v1 and v2 - are dependent on the reaction rates of the underlying equations. The velocities are generally taken from the Michaelis–Menten kinetic theory, which involves the kinetic parameters of the enzymes catalyzing the reactions and the concentration of the metabolites themselves. Isolating enzymes from living organisms and measuring their kinetic parameters is a difficult task, as is measuring the internal concentrations and diffusion constants of metabolites within an organism. Therefore, the differential equation approach to metabolic modeling is beyond the current scope of science for all but the most studied organisms. FBA avoids this impediment by applying the homeostatic assumption, which is a reasonably approximate description of biological systems.
Model preparation and refinement:
Although FBA avoids that biological obstacle, the mathematical issue of a large solution space remains. FBA has a two-fold purpose: accurately representing the biological limits of the system, and returning the flux distribution closest to the natural fluxes within the target system or organism. Certain biological principles can help overcome the mathematical difficulties. While the stoichiometric matrix is almost always under-determined initially (meaning that the solution space of $Sv = 0$ is very large), the size of the solution space can be reduced and made more reflective of the biology of the problem through the application of certain constraints on the solutions.
Extensions:
The success of FBA and the realization of its limitations has led to extensions that attempt to mediate the limitations of the technique.
Flux variability analysis
The optimal solution to the flux-balance problem is rarely unique, with many possible, and equally optimal, solutions existing. Flux variability analysis (FVA), built into some analysis software, returns the boundaries for the flux through each reaction that can, paired with the right combination of other fluxes, still achieve the optimal solution.
Reactions which tolerate only a low variability of flux through them are likely to be of higher importance to an organism, and FVA is a promising technique for identifying these important reactions.
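A bare-bones FVA can be written as a loop of LPs: first find the optimal objective value, then pin the objective at that optimum and minimize and maximize each flux in turn. The sketch below reuses the illustrative toy network from the FBA example above:

```python
import numpy as np
from scipy.optimize import linprog

S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
c = np.array([0.0, 0.0, 1.0])
bounds = [(0.0, 10.0), (0.0, 1000.0), (0.0, 1000.0)]
b0 = np.zeros(S.shape[0])

# Step 1: solve the FBA problem for the optimal objective value z.
z = -linprog(-c, A_eq=S, b_eq=b0, bounds=bounds, method="highs").fun

# Step 2: add c.v = z as an extra equality row, then min/max each flux.
A_eq = np.vstack([S, c])
b_eq = np.append(b0, z)
for i in range(S.shape[1]):
    e = np.zeros(S.shape[1]); e[i] = 1.0
    lo = linprog(e, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs").fun
    hi = -linprog(-e, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs").fun
    print(f"v{i + 1}: [{lo:.2f}, {hi:.2f}]")
```

In this rigid linear chain every range collapses to a single point; in a realistic network, wide ranges flag reactions with redundant alternatives, while narrow ranges flag reactions the optimum truly depends on. (In practice the optimum is often relaxed slightly, e.g. to 99% of z, to avoid numerical infeasibility.)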
Extensions:
Minimization of metabolic adjustment (MOMA)
When simulating knockouts or growth on media, FBA gives the final steady-state flux distribution. This final steady state is reached on varying time-scales. For example, the predicted growth rate of E. coli on glycerol as the primary carbon source did not initially match the FBA predictions; however, on sub-culturing for 40 days or 700 generations, the growth rate adaptively evolved to match the FBA prediction. Sometimes it is of interest to find out the immediate effect of a perturbation or knockout, since it takes time for regulatory changes to occur and for the organism to re-organize fluxes to optimally utilize a different carbon source or circumvent the effect of the knockout. MOMA predicts the immediate sub-optimal flux distribution following the perturbation by minimizing the Euclidean distance between the wild-type FBA flux distribution and the mutant flux distribution using quadratic programming. This yields an optimization problem of the form:
Extensions:
$$\min \|v_w - v_d\|^2 \quad \text{s.t.} \quad S \cdot v_d = 0$$

where $v_w$ represents the wild-type (or unperturbed-state) flux distribution and $v_d$ represents the flux distribution on gene deletion that is to be solved for. This simplifies to:

$$\min \tfrac{1}{2} v_d^T I v_d + (-v_w)^T v_d \quad \text{s.t.} \quad S \cdot v_d = 0$$

This is the MOMA solution, which represents the flux distribution immediately post-perturbation.
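The quadratic program can be sketched with a general-purpose solver; here `scipy.optimize.minimize` (SLSQP) is used on the same invented toy network, with the wild-type fluxes assumed known and reaction R2 knocked out via its bounds:

```python
import numpy as np
from scipy.optimize import minimize

S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
v_w = np.array([10.0, 10.0, 10.0])     # wild-type FBA flux distribution (toy)
bounds = [(0.0, 10.0),
          (0.0, 0.0),                  # knockout: R2 constrained to zero
          (0.0, 1000.0)]

res = minimize(lambda v: np.sum((v_w - v) ** 2),   # squared Euclidean distance
               x0=np.zeros(3),
               constraints=[{"type": "eq", "fun": lambda v: S @ v}],
               bounds=bounds, method="SLSQP")
print(res.x)   # the mutant flux distribution v_d closest to the wild type
```

Because the knocked-out reaction is the only route from A to B in this toy chain, the steady-state constraint forces all fluxes to zero; a branched network would instead reroute flux as close to the wild type as possible.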
Extensions:
Regulatory on-off minimization (ROOM) ROOM attempts to improve the prediction of the metabolic state of an organism after a gene knockout. It follows the same premise as MOMA that an organism would try to restore a flux distribution as close as possible to the wild-type after a knockout. However it further hypothesizes that this steady state would be reached through a series of transient metabolic changes by the regulatory network and that the organism would try to minimize the number of regulatory changes required to reach the wild-type state. Instead of using a distance metric minimization however it uses a mixed integer linear programming method.
Extensions:
Dynamic FBA Dynamic FBA attempts to add the ability for models to change over time, thus in some ways avoiding the strict steady state condition of pure FBA. Typically the technique involves running an FBA simulation, changing the model based on the outputs of that simulation, and rerunning the simulation. By repeating this process an element of feedback is achieved over time.
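A minimal dynamic-FBA loop might look like this (again on the invented toy network, with a made-up yield factor and time step):

```python
import numpy as np
from scipy.optimize import linprog

S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
c = np.array([0.0, 0.0, 1.0])

substrate, biomass, dt = 50.0, 1.0, 1.0
for t in range(5):
    # Uptake is capped by what the remaining substrate pool can supply.
    vmax = min(10.0, substrate / (biomass * dt))
    res = linprog(-c, A_eq=S, b_eq=np.zeros(2),
                  bounds=[(0.0, vmax), (0.0, 1e3), (0.0, 1e3)], method="highs")
    growth = -res.fun
    substrate -= res.x[0] * biomass * dt    # deplete the medium
    biomass += 0.1 * growth * biomass * dt  # hypothetical yield factor
    print(f"t={t}: substrate={substrate:5.1f}, biomass={biomass:5.2f}")
```

Each pass re-solves the steady-state LP under bounds updated from the previous step's outputs, which is the feedback element described above.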
Comparison with other techniques:
FBA provides a less simplistic analysis than Choke Point Analysis while requiring far less information on reaction rates and a much less complete network reconstruction than a full dynamic simulation would require. In filling this niche, FBA has been shown to be a very useful technique for analysis of the metabolic capabilities of cellular systems.
Comparison with other techniques:
Choke point analysis
Unlike choke point analysis, which only considers points in the network where metabolites are produced but not consumed (or vice versa), FBA is a true form of metabolic network modelling because it considers the metabolic network as a single complete entity (the stoichiometric matrix) at all stages of analysis. This means that network effects, such as chemical reactions in distant pathways affecting each other, can be reproduced in the model. The flip side of choke point analysis's inability to simulate network effects is that it considers each reaction within a network in isolation, and thus can suggest important reactions in a network even if that network is highly fragmented and contains many gaps.
Comparison with other techniques:
Dynamic metabolic simulation
Unlike dynamic metabolic simulation, FBA assumes that the internal concentrations of metabolites within a system stay constant over time, and it is thus unable to provide anything other than steady-state solutions. It is unlikely that FBA could, for example, simulate the functioning of a nerve cell. Since the internal concentrations of metabolites are not considered within a model, it is possible that an FBA solution could contain metabolites at a concentration too high to be biologically acceptable. This is a problem that dynamic metabolic simulations would probably avoid. One advantage of the simplicity of FBA over dynamic simulations is that FBA is far less computationally expensive, allowing the simulation of large numbers of perturbations to the network. A second advantage is that the reconstructed model can be substantially simpler, by avoiding the need to consider enzyme rates and the effect of complex interactions on enzyme kinetics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mobile app development**
Mobile app development:
Mobile app development is the act or process by which a mobile app is developed for one or more mobile devices, which can include personal digital assistants (PDA), enterprise digital assistants (EDA), or mobile phones. Such software applications are specifically designed to run on mobile devices, taking numerous hardware constraints into consideration. Common constraints include CPU architecture and speed, available memory (RAM), limited data storage capacity, and considerable variation in displays (technology, size, dimensions, resolution) and input methods (buttons, keyboard, touch screens with/without styluses). These applications (or 'apps') can be pre-installed on phones during manufacturing or delivered as web applications, using server-side or client-side processing (e.g., JavaScript) to provide an "application-like" experience within a web browser. Mobile app development has been growing steadily, in revenues and jobs created. A 2013 analyst report estimated that there were 529,000 direct app-economy jobs within the EU's then 28 members (including the UK), 60 percent of which were mobile app developers.
Overview:
In order to facilitate the development of applications for mobile devices, and consistency thereof, various approaches have been taken.
Overview:
Most companies that ship a hardware product (e.g. Apple with the iPod/iPhone/iPad) provide an official software development kit (SDK). They may also opt to provide some form of testing and/or quality assurance (QA). In exchange for being provided the SDK or other tools, it may be necessary for a prospective developer to sign some form of non-disclosure agreement (NDA), which restricts the sharing of privileged information.
Overview:
As part of the development process, mobile user interface (UI) design is an essential step in the creation of mobile apps. Mobile UI designers consider constraints, contexts, screen space, input methods, and mobility as outlines for design. Constraints in mobile UI design include the limited attention span of the user and form factors such as a mobile device's screen size and its fit to the user's hand(s). Mobile UI context includes signal cues from user activity, such as the location where or the time when the device is in use, that can be observed from user interactions within a mobile app. Such context clues can be used to provide automatic suggestions when scheduling an appointment or activity, or to filter a list of various services for the user.
Overview:
The user is often the focus of interaction with their device, and the interface entails components of both hardware and software. User input allows for the users to manipulate a system, and device's output allows the system to indicate the effects of the users' manipulation.
Overall, the main goal of mobile UI design is an understandable, user-friendly interface. Functionality is supported by mobile enterprise application platforms or integrated development environments (IDEs).
Developers of mobile applications must also consider a large array of devices with different screen sizes, hardware specifications, and configurations because of intense competition in mobile hardware and changes within each of the platforms.
Today, mobile apps are usually distributed via an official online outlet or marketplace (e.g. Apple - The App Store, Google - Google Play) and there is a formalized process by which developers submit their apps for approval and inclusion in those marketplaces. Historically, however, that was not always the case.
Mobile UIs, or front-ends, rely on mobile back-ends to support access to enterprise systems. The mobile back-end facilitates data routing, security, authentication, authorization, working off-line, and service orchestration. This functionality is supported by a mix of middleware components including mobile app server, mobile backend as a service (MBaaS), and service-oriented architecture (SOA) infrastructure.
Platform:
The software development packages needed to develop, deploy, and manage mobile apps are made from many components and tools which allow a developer to write, test, and deploy applications for one or more target platforms.
Front-end development tools
Front-end development tools are focused on the user interface and user experience (UI/UX) and provide the following abilities:
- UI design tools
- SDKs to access device features
- Cross-platform accommodations/support
Notable tools are listed below.
First-Party First party tools include official SDKs published by, or on behalf of, the company responsible for the design of a particular hardware platform (e.g. Apple, Google, etc) as well as any third-party software that is officially supported for the purpose of developing mobile apps for that hardware.
Second Party
Third Party
Back-end servers
Back-end tools pick up where the front-end tools leave off, providing a set of reusable services that are centrally managed and controlled. These tools provide the following abilities:
- Integration with back-end systems
- User authentication and authorization
- Data services
- Reusable business logic
Available tools are listed below.
Platform:
Security add-on layers
With bring your own device (BYOD) becoming the norm within more enterprises, IT departments often need stop-gap, tactical solutions that layer atop existing apps, phones, and platform components. Features include:
- App wrapping for security
- Data encryption
- Client actions
- Reporting and statistics
System software
Many system-level components are needed to have a functioning platform for developing mobile apps.
Platform:
Criteria for selecting a development platform usually include the target mobile platforms, the existing infrastructure, and the available development skills. When targeting more than one platform with cross-platform development, it is also important to consider the impact of the tool on the user experience. Performance is another important criterion, as research on mobile apps indicates a strong correlation between application performance and user satisfaction. Along with performance and other criteria, the availability of the technology and the project's requirements may drive the choice between native and cross-platform environments. To aid this choice, some guidelines and benchmarks have been published. Typically, cross-platform environments are reusable across multiple platforms, leveraging a native container while using HTML, CSS, and JavaScript for the user interface. In contrast, native environments are each targeted at a single platform: for example, Android development occurs in the Eclipse IDE using Android Developer Tools (ADT) plugins, Apple iOS development occurs using the Xcode IDE with Objective-C and/or Swift, and Windows and BlackBerry each have their own development environments.
Platform:
Mobile app testing Mobile applications are first tested within the development environment using emulators and later subjected to field testing. Emulators provide an inexpensive way to test applications on mobile phones to which developers may not have physical access. The following are examples of tools used for testing application across the most popular mobile operating systems.
Google Android Emulator - an Android emulator that is patched to run on a Windows PC as a standalone app, without having to download and install the complete and complex Android SDK. It can be installed and Android compatible apps can be tested on it.
The official Android SDK Emulator - a mobile device emulator which mimics all of the hardware and software features of a typical mobile device (without the calls).
TestiPhone - a web browser-based simulator for quickly testing iPhone web applications. This tool has been tested and works using Internet Explorer 7, Firefox 2 and Safari 3.
Platform:
iPhoney - gives a pixel-accurate web browsing environment and it is powered by Safari. It can be used while developing web sites for the iPhone. It is not an iPhone simulator but instead is designed for web developers who want to create 320 by 480 (or 480 by 320) websites for use with iPhone. iPhoney will only run on OS X 10.4.7 or later.
Platform:
BlackBerry Simulator - There are a variety of official BlackBerry simulators available to emulate the functionality of actual BlackBerry products and test how the device software, screen, keyboard and trackwheel will work with an application.
Windows UI Automation - Testing applications that use the Microsoft UI Automation technology requires Windows Automation API 3.0. It is pre-installed on Windows 7, Windows Server 2008 R2 and later versions of Windows. On other operating systems, it can be installed via Windows Update or downloaded from the Microsoft Web site.
Platform:
MobiOne Developer - a mobile Web integrated development environment (IDE) for Windows that helps developers to code, test, debug, package, and deploy mobile Web applications to devices such as iPhone, BlackBerry, Android, and the Palm Pre. MobiOne Developer was officially declared End of Life by the end of 2014.
Tools include:
eggPlant: A GUI-based automated test tool for mobile apps across all operating systems and devices.
Platform:
Ranorex: Test automation tools for mobile, web and desktop apps.
Testdroid: Real mobile devices and test automation tools for testing mobile and web apps.
Patents:
Many patent applications are pending for new mobile phone apps. Most of these are in the technological fields of business methods, database management, data transfer, and operator interface. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Big memory**
Big memory:
Big memory computers are machines with a large amount of random-access memory (RAM). Such computers are required for databases, graph analytics, and, more generally, high-performance computing, data science and big data. Some database systems are designed to run mostly in memory, rarely if ever retrieving data from disk or flash memory. See list of in-memory databases.
Details:
The performance of big memory systems depends on how the central processing units (CPUs) access the memory, via a conventional memory controller or via non-uniform memory access (NUMA). Performance also depends on the size and design of the CPU cache.
Details:
Performance also depends on operating system (OS) design. The huge pages feature in Linux and other OSes can improve the efficiency of virtual memory. The transparent huge pages feature in Linux can offer better performance for some big-memory workloads. The "Large-Page Support" in Microsoft Windows enables server applications to establish large-page memory regions which are typically three orders of magnitude larger than the native page size. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**UNIQUAC**
UNIQUAC:
In statistical thermodynamics, UNIQUAC (a portmanteau of universal quasichemical) is an activity coefficient model used in description of phase equilibria. The model is a so-called lattice model and has been derived from a first order approximation of interacting molecule surfaces. The model is, however, not fully thermodynamically consistent due to its two-liquid mixture approach. In this approach the local concentration around one central molecule is assumed to be independent from the local composition around another type of molecule.
UNIQUAC:
The UNIQUAC model can be considered a second-generation activity coefficient model because its expression for the excess Gibbs energy consists of an entropy term in addition to an enthalpy term. Earlier activity coefficient models such as the Wilson equation and the non-random two-liquid model (NRTL model) consist only of enthalpy terms.
UNIQUAC:
Today the UNIQUAC model is frequently applied in the description of phase equilibria (i.e. liquid–solid, liquid–liquid or liquid–vapor equilibrium). The UNIQUAC model also serves as the basis of the development of the group contribution method UNIFAC, where molecules are subdivided into functional groups. In fact, UNIQUAC is equal to UNIFAC for mixtures of molecules which are not subdivided, e.g. the binary systems water–methanol, methanol–acrylonitrile and formaldehyde–DMF.
UNIQUAC:
A more thermodynamically consistent form of UNIQUAC is given by the more recent COSMOSPACE and the equivalent GEQUAC model.
Equations:
Like most local composition models, UNIQUAC splits the excess Gibbs free energy into a combinatorial and a residual contribution:

$$G^E = (G^E)^C + (G^E)^R$$

The calculated activity coefficient of the ith component then splits likewise:

$$\ln\gamma_i = \ln\gamma_i^C + \ln\gamma_i^R$$

The first is an entropic term quantifying the deviation from ideal solubility as a result of differences in molecule shape. The latter is an enthalpic correction caused by the change in interacting forces between different molecules upon mixing.
Equations:
Combinatorial contribution
The combinatorial contribution accounts for shape differences between molecules, affects the entropy of the mixture, and is based on lattice theory. The Stavermann–Guggenheim equation is used to approximate this term from pure-component parameters, using the relative Van der Waals volumes $r_i$ and surface areas $q_i$ of the pure chemicals:

$$\frac{(G^E)^C}{RT} = \sum_i x_i \ln\frac{V_i}{x_i} + \frac{z}{2} \sum_i q_i x_i \ln\frac{F_i}{V_i}$$

Differentiating yields the combinatorial part of the activity coefficient, $\gamma_i^C$:

$$\ln\gamma_i^C = 1 - V_i + \ln V_i - \frac{z}{2}\, q_i \left(1 - \frac{V_i}{F_i} + \ln\frac{V_i}{F_i}\right)$$

with the volume fraction per mixture mole fraction, $V_i$, for the ith component given by:

$$V_i = \frac{r_i}{\sum_j x_j r_j}$$

The surface area fraction per mixture mole fraction, $F_i$, for the ith component is given by:

$$F_i = \frac{q_i}{\sum_j x_j q_j}$$

The first three terms on the right-hand side of the combinatorial term form the Flory–Huggins contribution, while the remaining term, the Guggenheim–Staverman correction, reduces it because connecting segments cannot be placed in all directions in space. This spatial correction shifts the result of the Flory–Huggins term about 5% towards an ideal solution. The coordination number z, i.e. the number of closely interacting molecules around a central molecule, is frequently set to 10. It can be regarded as an average value that lies between cubic (z = 6) and hexagonal packing (z = 12) of molecules that are simplified by spheres.
Equations:
In the case of infinite dilution for a binary mixture, the equations for the combinatorial contribution reduce to:

$$\ln\gamma_1^{C,\infty} = 1 - \frac{r_1}{r_2} + \ln\frac{r_1}{r_2} - \frac{z}{2}\, q_1 \left(1 - \frac{r_1 q_2}{r_2 q_1} + \ln\frac{r_1 q_2}{r_2 q_1}\right)$$

$$\ln\gamma_2^{C,\infty} = 1 - \frac{r_2}{r_1} + \ln\frac{r_2}{r_1} - \frac{z}{2}\, q_2 \left(1 - \frac{r_2 q_1}{r_1 q_2} + \ln\frac{r_2 q_1}{r_1 q_2}\right)$$

This pair of equations shows that molecules of the same shape, i.e. the same r and q parameters, have $\gamma_1^{C,\infty} = \gamma_2^{C,\infty} = 1$.

Residual contribution
The residual, enthalpic term contains an empirical parameter, $\tau_{ij}$, which is determined from the binary interaction energy parameters. The expression for the residual activity coefficient for molecule i is:

$$\ln\gamma_i^R = q_i \left(1 - \ln\frac{\sum_j q_j x_j \tau_{ji}}{\sum_j q_j x_j} - \sum_j \frac{q_j x_j \tau_{ij}}{\sum_k q_k x_k \tau_{kj}}\right)$$

with

$$\tau_{ij} = e^{-\Delta u_{ij}/RT}.$$

$\Delta u_{ij}$ [J/mol] is the binary interaction energy parameter. Theory defines $\Delta u_{ij} = u_{ij} - u_{ii}$ and $\Delta u_{ji} = u_{ji} - u_{jj}$, where $u_{ij}$ is the interaction energy between molecules i and j. The interaction energy parameters are usually determined from activity coefficients, or from vapor–liquid, liquid–liquid, or liquid–solid equilibrium data.
Equations:
Usually $\Delta u_{ij} \ne \Delta u_{ji}$, because the energies of evaporation (i.e. the like-pair energies $u_{ii}$) are in many cases different, while the energy of interaction between molecules i and j is symmetric, and therefore $u_{ij} = u_{ji}$. If the interactions between unlike molecules are the same as those between like molecules ($\Delta u_{ij} = \Delta u_{ji} = 0$), there is no excess energy of mixing, and thus $\gamma_i^R = 1$. Alternatively, in some process simulation software, $\tau_{ij}$ can be expressed as follows:

$$\ln\tau_{ij} = A_{ij} + \frac{B_{ij}}{T} + C_{ij} \ln T + D_{ij} T + \frac{E_{ij}}{T^2}$$

The C, D, and E coefficients are primarily used in fitting liquid–liquid equilibria data (with D and E rarely used at that). The C coefficient is useful for vapor–liquid equilibria data as well. The use of such an expression ignores the fact that on a molecular level the energy $\Delta u_{ij}$ is temperature independent; it is a correction to repair the simplifications applied in the derivation of the model.
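Put together, the binary model is straightforward to evaluate numerically. The sketch below implements the combinatorial and residual expressions above with z = 10; the r, q, and Δu values in the usage line are placeholders, not fitted parameters for any real system:

```python
import numpy as np

def uniquac_binary(x, r, q, du, T, z=10.0, R=8.314):
    """Activity coefficients for a binary mixture.
    x, r, q: length-2 sequences; du[i][j] = delta-u_ij in J/mol."""
    x, r, q, du = map(np.asarray, (x, r, q, du))
    tau = np.exp(-du / (R * T))            # tau_ij = exp(-du_ij / RT)
    V = r / np.dot(x, r)                   # volume fraction / mole fraction
    F = q / np.dot(x, q)                   # surface fraction / mole fraction
    ln_gc = 1 - V + np.log(V) - (z / 2) * q * (1 - V / F + np.log(V / F))
    theta = x * q / np.dot(x, q)           # surface-area fractions
    s = theta @ tau                        # s_i = sum_j theta_j * tau_ji
    ln_gr = q * (1.0 - np.log(s) - tau @ (theta / s))
    return np.exp(ln_gc + ln_gr)

print(uniquac_binary(x=[0.3, 0.7], r=[0.92, 2.11], q=[1.40, 1.97],
                     du=[[0.0, 300.0], [-100.0, 0.0]], T=298.15))
```

A quick consistency check: setting both off-diagonal Δu parameters to zero makes every τ_ij equal to 1, the residual term vanishes, and only the shape-driven combinatorial contribution remains.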
Applications (phase equilibrium calculations):
Activity coefficients can be used to predict simple phase equilibria (vapour–liquid, liquid–liquid, solid–liquid), or to estimate other physical properties (e.g. viscosity of mixtures). Models such as UNIQUAC allow chemical engineers to predict the phase behavior of multicomponent chemical mixtures. They are commonly used in process simulation programs to calculate the mass balance in and around separation units.
Parameters determination:
UNIQUAC requires two sets of basic underlying parameters:
- Relative surface and volume fractions, which are chemical constants that must be known for all chemicals (the qi and ri parameters, respectively).
- Empirical parameters between components that describe the intermolecular behaviour. These parameters must be known for all binary pairs in the mixture. In a quaternary mixture there are six such parameters (1–2, 1–3, 1–4, 2–3, 2–4, 3–4) and the number rapidly increases with additional chemical components.
The empirical parameters are obtained by a correlation process from experimental equilibrium compositions or activity coefficients, or from phase diagrams from which the activity coefficients themselves can be calculated. An alternative is to obtain activity coefficients with a method such as UNIFAC, and then fit the UNIQUAC parameters to the UNIFAC results. This allows for more rapid calculation of activity coefficients than direct use of the more complex method.
Parameters determination:
Note that the determination of parameters from LLE data can be difficult, depending on the complexity of the studied system. For this reason it is necessary to confirm the consistency of the obtained parameters across the whole range of compositions (including binary subsystems, experimental and calculated tie-lines, the Hessian matrix, etc.).
Parameters determination:
Newer developments
UNIQUAC has been extended by several research groups. Some selected derivatives are:
- UNIFAC, a method which permits the volume, surface and, in particular, the binary interaction parameters to be estimated, eliminating the use of experimental data to calculate the UNIQUAC parameters;
- extensions for the estimation of activity coefficients for electrolytic mixtures;
- extensions for better describing the temperature dependence of activity coefficients;
- solutions for specific molecular arrangements.
The DISQUAC model advances UNIFAC by replacing UNIFAC's semi-empirical group-contribution model with an extension of the consistent theory of Guggenheim's UNIQUAC. By adding a "dispersive" or "random-mixing physical" term, it better predicts mixtures of molecules with both polar and non-polar groups. However, the separate calculation of the dispersive and quasi-chemical terms means the contact surfaces are not uniquely defined. The GEQUAC model advances DISQUAC slightly, by breaking polar groups into individual poles and merging the dispersive and quasi-chemical terms. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Panasonic Lumix DMC-TZ20**
Panasonic Lumix DMC-TZ20:
Panasonic Lumix DMC-TZ20, also known as Panasonic Lumix DMC-TZ22 or Panasonic Lumix DMC-ZS10, is a digital camera by Panasonic Lumix. It records pictures at up to 14.1 megapixels through its 24 mm ultra-wide-angle Leica DC VARIO-ELMAR lens.
Property:
- 24 mm LEICA DC lens with 16× optical zoom
- Full HD movies, 1,920 × 1,080 (50i)
- Integrated GPS
- Touch-screen LCD
- 3D photos
- iA (Intelligent Auto) mode with free-hand night shot | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hodge structure**
Hodge structure:
In mathematics, a Hodge structure, named after W. V. D. Hodge, is an algebraic structure at the level of linear algebra, similar to the one that Hodge theory gives to the cohomology groups of a smooth and compact Kähler manifold. Hodge structures have been generalized for all complex varieties (even if they are singular and non-complete) in the form of mixed Hodge structures, defined by Pierre Deligne (1970). A variation of Hodge structure is a family of Hodge structures parameterized by a manifold, first studied by Phillip Griffiths (1968). All these concepts were further generalized to mixed Hodge modules over complex varieties by Morihiko Saito (1989).
Hodge structures:
Definition of Hodge structures
A pure Hodge structure of integer weight n consists of an abelian group $H_{\mathbb{Z}}$ and a decomposition of its complexification H into a direct sum of complex subspaces $H^{p,q}$, where $p + q = n$, with the property that the complex conjugate of $H^{p,q}$ is $H^{q,p}$:

$$H := H_{\mathbb{Z}} \otimes_{\mathbb{Z}} \mathbb{C} = \bigoplus_{p+q=n} H^{p,q}, \qquad \overline{H^{p,q}} = H^{q,p}.$$
An equivalent definition is obtained by replacing the direct sum decomposition of H by the Hodge filtration, a finite decreasing filtration of H by complex subspaces $F^pH$ ($p \in \mathbb{Z}$), subject to the condition that, whenever $p + q = n + 1$,

$$F^pH \oplus \overline{F^qH} = H.$$
The relation between these two descriptions is given as follows:

$$H^{p,q} = F^pH \cap \overline{F^qH}, \qquad F^pH = \bigoplus_{i \ge p} H^{i,\,n-i}.$$
Hodge structures:
For example, if X is a compact Kähler manifold and $H_{\mathbb{Z}} = H^n(X, \mathbb{Z})$ is the n-th cohomology group of X with integer coefficients, then $H = H^n(X, \mathbb{C})$ is its n-th cohomology group with complex coefficients, and Hodge theory provides the decomposition of H into a direct sum as above, so that these data define a pure Hodge structure of weight n. On the other hand, the Hodge–de Rham spectral sequence supplies $H^n$ with the decreasing filtration by $F^pH$ as in the second definition. For applications in algebraic geometry, namely the classification of complex projective varieties by their periods, the set of all Hodge structures of weight n on $H_{\mathbb{Z}}$ is too big. Using the Riemann bilinear relations, in this case called Hodge–Riemann bilinear relations, it can be substantially simplified. A polarized Hodge structure of weight n consists of a Hodge structure $(H_{\mathbb{Z}}, H^{p,q})$ and a non-degenerate integer bilinear form Q on $H_{\mathbb{Z}}$ (polarization), which is extended to H by linearity and satisfies the conditions:

$$Q(\varphi, \psi) = 0 \quad \text{for } \varphi \in H^{p,q},\ \psi \in H^{p',q'},\ p \ne q';$$
$$i^{p-q}\, Q(\varphi, \bar{\varphi}) > 0 \quad \text{for } \varphi \in H^{p,q},\ \varphi \ne 0.$$
Hodge structures:
In terms of the Hodge filtration, these conditions imply that

$$Q(F^p, F^{n-p+1}) = 0, \qquad Q(C\varphi, \bar{\varphi}) > 0 \quad \text{for } \varphi \ne 0,$$

where C is the Weil operator on H, given by $C = i^{p-q}$ on $H^{p,q}$. Yet another definition of a Hodge structure is based on the equivalence between the $\mathbb{Z}$-grading on a complex vector space and the action of the circle group U(1). In this definition, an action of the multiplicative group of complex numbers $\mathbb{C}^*$, viewed as a two-dimensional real algebraic torus, is given on H. This action must have the property that a real number a acts by $a^n$. The subspace $H^{p,q}$ is the subspace on which $z \in \mathbb{C}^*$ acts as multiplication by $z^p \bar{z}^q$.
Hodge structures:
A-Hodge structure In the theory of motives, it becomes important to allow more general coefficients for the cohomology. The definition of a Hodge structure is modified by fixing a Noetherian subring A of the field R of real numbers, for which A⊗ZR is a field. Then a pure Hodge A-structure of weight n is defined as before, replacing Z with A. There are natural functors of base change and restriction relating Hodge A-structures and B-structures for A a subring of B.
Mixed Hodge structures:
It was noticed by Jean-Pierre Serre in the 1960s, based on the Weil conjectures, that even singular (possibly reducible) and non-complete algebraic varieties should admit 'virtual Betti numbers'. More precisely, one should be able to assign to any algebraic variety X a polynomial $P_X(t)$, called its virtual Poincaré polynomial, with the properties:
- If X is nonsingular and projective (or complete), then $P_X(t)$ is its ordinary Poincaré polynomial $\sum_n \dim H^n(X)\, t^n$;
- If Y is a closed algebraic subset of X and U = X \ Y, then $P_X(t) = P_Y(t) + P_U(t)$.
The existence of such polynomials would follow from the existence of an analogue of Hodge structure in the cohomologies of a general (singular and non-complete) algebraic variety. The novel feature is that the nth cohomology of a general variety looks as if it contained pieces of different weights. This led Alexander Grothendieck to his conjectural theory of motives and motivated a search for an extension of Hodge theory, which culminated in the work of Pierre Deligne. He introduced the notion of a mixed Hodge structure, developed techniques for working with them, gave their construction (based on Heisuke Hironaka's resolution of singularities) and related them to the weights on l-adic cohomology, proving the last part of the Weil conjectures.
Mixed Hodge structures:
Example of curves To motivate the definition, consider the case of a reducible complex algebraic curve X consisting of two nonsingular components, X1 and X2 , which transversally intersect at the points Q1 and Q2 . Further, assume that the components are not compact, but can be compactified by adding the points P1,…,Pn . The first cohomology group of the curve X (with compact support) is dual to the first homology group, which is easier to visualize. There are three types of one-cycles in this group. First, there are elements αi representing small loops around the punctures Pi . Then there are elements βj that are coming from the first homology of the compactification of each of the components. The one-cycle in Xk⊂X (k=1,2 ) corresponding to a cycle in the compactification of this component, is not canonical: these elements are determined modulo the span of α1,…,αn . Finally, modulo the first two types, the group is generated by a combinatorial cycle γ which goes from Q1 to Q2 along a path in one component X1 and comes back along a path in the other component X2 . This suggests that H1(X) admits an increasing filtration 0⊂W0⊂W1⊂W2=H1(X), whose successive quotients Wn/Wn−1 originate from the cohomology of smooth complete varieties, hence admit (pure) Hodge structures, albeit of different weights. Further examples can be found in "A Naive Guide to Mixed Hodge Theory".
Mixed Hodge structures:
Definition of mixed Hodge structure
A mixed Hodge structure on an abelian group $H_{\mathbb{Z}}$ consists of a finite decreasing filtration $F^p$ on the complex vector space H (the complexification of $H_{\mathbb{Z}}$), called the Hodge filtration, and a finite increasing filtration $W_i$ on the rational vector space $H_{\mathbb{Q}} = H_{\mathbb{Z}} \otimes_{\mathbb{Z}} \mathbb{Q}$ (obtained by extending the scalars to rational numbers), called the weight filtration, subject to the requirement that the n-th associated graded quotient of $H_{\mathbb{Q}}$ with respect to the weight filtration, together with the filtration induced by F on its complexification, is a pure Hodge structure of weight n, for every integer n. Here the induced filtration on

$$\mathrm{gr}_n^W H = (W_n \otimes \mathbb{C}) / (W_{n-1} \otimes \mathbb{C})$$

is defined by

$$F^p\, \mathrm{gr}_n^W H = \left(F^p \cap (W_n \otimes \mathbb{C}) + W_{n-1} \otimes \mathbb{C}\right) / (W_{n-1} \otimes \mathbb{C}).$$
Mixed Hodge structures:
One can define a notion of a morphism of mixed Hodge structures, which has to be compatible with the filtrations F and W and prove the following: Theorem. Mixed Hodge structures form an abelian category. The kernels and cokernels in this category coincide with the usual kernels and cokernels in the category of vector spaces, with the induced filtrations.The total cohomology of a compact Kähler manifold has a mixed Hodge structure, where the nth space of the weight filtration Wn is the direct sum of the cohomology groups (with rational coefficients) of degree less than or equal to n. Therefore, one can think of classical Hodge theory in the compact, complex case as providing a double grading on the complex cohomology group, which defines an increasing filtration Fp and a decreasing filtration Wn that are compatible in certain way. In general, the total cohomology space still has these two filtrations, but they no longer come from a direct sum decomposition. In relation with the third definition of the pure Hodge structure, one can say that a mixed Hodge structure cannot be described using the action of the group C∗.
Mixed Hodge structures:
An important insight of Deligne is that in the mixed case there is a more complicated noncommutative proalgebraic group that can be used to the same effect using Tannakian formalism.
Mixed Hodge structures:
Moreover, the category of (mixed) Hodge structures admits a good notion of tensor product, corresponding to the product of varieties, as well as related concepts of inner Hom and dual object, making it into a Tannakian category. By the Tannaka–Krein philosophy, this category is equivalent to the category of finite-dimensional representations of a certain group, which Deligne and Milne have explicitly described; see Deligne & Milne (1982) and Deligne (1994). The description of this group was recast in more geometrical terms by Kapranov (2012). The corresponding (much more involved) analysis for rational pure polarizable Hodge structures was done by Patrikis (2016).
Mixed Hodge structures:
Mixed Hodge structure in cohomology (Deligne's theorem) Deligne has proved that the nth cohomology group of an arbitrary algebraic variety has a canonical mixed Hodge structure. This structure is functorial, and compatible with the products of varieties (Künneth isomorphism) and the product in cohomology. For a complete nonsingular variety X this structure is pure of weight n, and the Hodge filtration can be defined through the hypercohomology of the truncated de Rham complex.
Mixed Hodge structures:
The proof roughly consists of two parts, taking care of noncompactness and singularities. Both parts use the resolution of singularities (due to Hironaka) in an essential way. In the singular case, varieties are replaced by simplicial schemes, leading to more complicated homological algebra, and a technical notion of a Hodge structure on complexes (as opposed to cohomology) is used.
Using the theory of motives, it is possible to refine the weight filtration on the cohomology with rational coefficients to one with integral coefficients.
Examples:
The Tate–Hodge structure Z(1) is the Hodge structure with underlying Z module given by 2πiZ (a subgroup of C ), with Z(1)⊗C=H−1,−1.
So it is pure of weight −2 by definition and it is the unique 1-dimensional pure Hodge structure of weight −2 up to isomorphisms. More generally, its nth tensor power is denoted by Z(n); it is 1-dimensional and pure of weight −2n.
The cohomology of a compact Kähler manifold has a Hodge structure, and the nth cohomology group is pure of weight n.
The cohomology of a complex variety (possibly singular or non-proper) has a mixed Hodge structure. This was shown for smooth varieties by Deligne (1971), Deligne (1971a) and in general by Deligne (1974).
For a projective variety X with normal crossing singularities there is a spectral sequence with a degenerate E2-page which computes all of its mixed Hodge structures. The E1-page has explicit terms with a differential coming from a simplicial set.
Any smooth variety X admits a smooth compactification with complement a normal crossing divisor. The corresponding logarithmic forms can be used to describe the mixed Hodge structure on the cohomology of X explicitly.
Examples:
The Hodge structure for a smooth projective hypersurface $X \subset \mathbb{P}^{n+1}$ of degree d was worked out explicitly by Griffiths in his "Period Integrals of Algebraic Manifolds" paper. If $f \in \mathbb{C}[x_0,\ldots,x_{n+1}]$ is the polynomial defining the hypersurface X, then the graded Jacobian quotient ring

$$R(f) = \mathbb{C}[x_0,\ldots,x_{n+1}] \Big/ \left(\frac{\partial f}{\partial x_0},\ldots,\frac{\partial f}{\partial x_{n+1}}\right)$$

contains all of the information of the middle cohomology of X. He shows that

$$H^{n-q,q}_{\mathrm{prim}}(X) \cong R(f)_{(q+1)d - n - 2}.$$

For example, consider the K3 surface given by $g = x_0^4 + \cdots + x_3^4$, hence $d = 4$ and $n = 2$. Then the graded Jacobian ring is

$$R(g) = \mathbb{C}[x_0, x_1, x_2, x_3]/(x_0^3, x_1^3, x_2^3, x_3^3).$$

The isomorphism for the primitive cohomology groups then reads $H^{2-q,q}_{\mathrm{prim}}(X) \cong R(g)_{4(q+1)-4} = R(g)_{4q}$, hence

$$H^{2,0}_{\mathrm{prim}} \cong R(g)_0, \qquad H^{1,1}_{\mathrm{prim}} \cong R(g)_4, \qquad H^{0,2}_{\mathrm{prim}} \cong R(g)_8.$$

Notice that $R(g)_4$ is the vector space spanned by the degree-4 monomials in which every variable appears with exponent at most 2, which is 19-dimensional. There is an extra vector in $H^{1,1}(X)$ given by the Lefschetz class [L]. From the Lefschetz hyperplane theorem and Hodge duality, the rest of the cohomology is in $H^{k,k}(X)$ and is 1-dimensional. Hence the Hodge diamond reads

1
0 0
1 20 1
0 0
1

We can also use the previous isomorphism to verify the genus of a degree d plane curve. Since $x^d + y^d + z^d$ is a smooth curve, and the Ehresmann fibration theorem guarantees that every other smooth plane curve of degree d is diffeomorphic to it, all such curves have the same genus. So, using the isomorphism of primitive cohomology with the graded part of the Jacobian ring, we see that

$$H^{1,0}_{\mathrm{prim}} \cong R(f)_{d-3},$$

whose dimension is the number of degree-(d − 3) monomials in three variables, namely $\binom{d-1}{2} = \frac{(d-1)(d-2)}{2}$, as desired.
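The dimension counts in this example are easy to verify by brute force, since for the Fermat hypersurface the Jacobian ideal is generated by the powers $x_i^{d-1}$, so $\dim R_k$ just counts degree-k monomials in which every exponent is at most d − 2. A small illustrative script:

```python
from itertools import product

def jacobian_dim(n, d, k):
    """dim R_k for the degree-d Fermat hypersurface in P^(n+1):
    monomials of degree k in n+2 variables, all exponents <= d-2."""
    return sum(1 for e in product(range(d - 1), repeat=n + 2) if sum(e) == k)

# Quartic K3 surface in P^3 (n = 2, d = 4):
print(jacobian_dim(2, 4, 0))   # 1  = dim H^{2,0}
print(jacobian_dim(2, 4, 4))   # 19 = dim H^{1,1}_prim
print(jacobian_dim(2, 4, 8))   # 1  = dim H^{0,2}

# Genus of a smooth plane curve of degree 6 (n = 1): dim R_{d-3} = 10
print(jacobian_dim(1, 6, 3))
```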
Examples:
The Hodge numbers for a complete intersection are also readily computable: there is a combinatorial formula found by Friedrich Hirzebruch.
Applications:
The machinery based on the notions of Hodge structure and mixed Hodge structure forms a part of the still largely conjectural theory of motives envisaged by Alexander Grothendieck. Arithmetic information for a nonsingular algebraic variety X, encoded by the eigenvalues of Frobenius elements acting on its l-adic cohomology, has something in common with the Hodge structure arising from X considered as a complex algebraic variety. Sergei Gelfand and Yuri Manin remarked around 1988 in their Methods of Homological Algebra that, unlike Galois symmetries acting on other cohomology groups, the origin of "Hodge symmetries" is very mysterious, although formally they are expressed through the action of the fairly uncomplicated group $R_{\mathbb{C}/\mathbb{R}}\mathbb{C}^*$ (the real algebraic group obtained from $\mathbb{C}^*$ by restriction of scalars) on the de Rham cohomology. Since then, the mystery has deepened with the discovery and mathematical formulation of mirror symmetry.
Variation of Hodge structure:
A variation of Hodge structure (Griffiths (1968), Griffiths (1968a), Griffiths (1970)) is a family of Hodge structures parameterized by a complex manifold X. More precisely, a variation of Hodge structure of weight n on a complex manifold X consists of a locally constant sheaf S of finitely generated abelian groups on X, together with a decreasing Hodge filtration F on S ⊗ OX, subject to the following two conditions:
- The filtration induces a Hodge structure of weight n on each stalk of the sheaf S;
- (Griffiths transversality) The natural connection on S ⊗ OX maps $F^n$ into $F^{n-1} \otimes \Omega_X^1$.
Variation of Hodge structure:
Here the natural (flat) connection on S ⊗ OX induced by the flat connection on S and the flat connection d on OX, and OX is the sheaf of holomorphic functions on X, and ΩX1 is the sheaf of 1-forms on X. This natural flat connection is a Gauss–Manin connection ∇ and can be described by the Picard–Fuchs equation.
A variation of mixed Hodge structure can be defined in a similar way, by adding a grading or filtration W to S. Typical examples arise from algebraic morphisms $f : \mathbb{C}^n \to \mathbb{C}$. For example, the morphism $f : \mathbb{C}^2 \to \mathbb{C}$ defined by $f(x,y) = y^6 - x^6$ has fibers

$$X_t = f^{-1}(\{t\}) = \{(x,y) \in \mathbb{C}^2 : y^6 - x^6 = t\}$$

which are smooth plane curves of genus 10 for $t \ne 0$ and degenerate to a singular curve at $t = 0$.
Then the cohomology sheaves $R^i f_* (\underline{\mathbb{Q}}_{\mathbb{C}^2})$ give variations of mixed Hodge structures.
Hodge modules:
Hodge modules are a generalization of variation of Hodge structures on a complex manifold. They can be thought of informally as something like sheaves of Hodge structures on a manifold; the precise definition Saito (1989) is rather technical and complicated. There are generalizations to mixed Hodge modules, and to manifolds with singularities.
For each smooth complex variety, there is an abelian category of mixed Hodge modules associated with it. These behave formally like the categories of sheaves over the manifolds; for example, morphisms f between manifolds induce functors f∗, f*, f!, f! between (derived categories of) mixed Hodge modules similar to the ones for sheaves.
Introductory references:
Debarre, Olivier, Periods and Moduli.
Arapura, Donu, Complex Algebraic Varieties and their Cohomology (PDF), pp. 120–123, archived from the original (PDF) on 2020-01-04. (Gives tools for computing Hodge numbers using sheaf cohomology.)
A Naive Guide to Mixed Hodge Theory.
Dimca, Alexandru (1992). Singularities and Topology of Hypersurfaces. Universitext. New York: Springer-Verlag. pp. 240, 261. doi:10.1007/978-1-4612-4404-2. ISBN 0-387-97709-0. MR 1194180. S2CID 117095021. (Gives a formula and generators for mixed Hodge numbers of the affine Milnor fiber of a weighted homogeneous polynomial, and also a formula for complements of weighted homogeneous polynomials in a weighted projective space.)
Survey articles:
Arapura, Donu (2006), Mixed Hodge Structures Associated to Geometric Variations (PDF), arXiv:math/0611837, Bibcode:2006math.....11837A | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**FTH1P5**
FTH1P5:
Ferritin, heavy polypeptide 1 pseudogene 5 is a protein that in humans is encoded by the FTH1P5 gene. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**ZYpp**
ZYpp:
ZYpp (or libzypp; "Zen / YaST Packages Patches Patterns Products") is a package manager engine that powers Linux applications like YaST, Zypper and the implementation of PackageKit for openSUSE and SUSE Linux Enterprise. Unlike some more basic package managers, it provides a satisfiability solver to compute package dependencies. It is a free and open-source software project sponsored by SUSE and licensed under the terms of the GNU General Public License v2 or later. ZYpp is implemented mostly in the programming language C++.
ZYpp:
Zypper is the native command-line interface of the ZYpp package manager to install, remove, update and query software packages of local or remote (networked) media. Its graphical equivalent is the YaST package manager module. It has been used in openSUSE since version 10.2 beta1. In openSUSE 11.1, Zypper reached version 1.0. On June 2, 2009, Ark Linux announced that it had completed its review of dependency solvers and had chosen ZYpp and its tools to replace the aging APT-RPM, becoming the first distribution to do so. Zypper is also part of the mobile Linux distributions MeeGo, Sailfish OS, and Tizen.
History:
Purpose Following its consecutive acquisitions of Ximian and SuSE GmbH in 2003, Novell decided to merge both package management systems, the YaST package manager and Red Carpet, in a best-of-breed approach, as the two solutions were both in use at Novell at the time. Looking at the open-source tools available back in 2005 and their maturity, none fulfilled the requirements or worked smoothly with the existing Linux management infrastructure software developed by Ximian and SUSE, so it was decided to take the best ideas from the existing pieces and work on a new implementation. Libzypp, the resulting library, was planned to be the software management engine of the SUSE distributions and of the Linux management component of the Novell ZENworks management suite.
History:
Early days Libzypp's solver was a port of the Red Carpet solver, which was written to update packages in installed systems. Using it for the full installation process brought it to its limits, and adding extensions such as support for weak dependencies and patches made it fragile and unpredictable. Although this first version of ZYpp's solver worked satisfactorily on the company's enterprise products with the coupled ZMD daemon, it led to an openSUSE 10.1 release, which came out in May 2006, with a package system not working as expected. In December 2006, the openSUSE 10.2 release corrected some defects of the prior release, using the revised ZYpp v2. ZMD was subsequently removed from the 10.3 release and reserved for the company's enterprise products only. While ZYpp v3 provided openSUSE with a relatively good package manager, equivalent to other existing package managers, it suffered from some flaws in its implementation which greatly limited its speed.
History:
SAT solver integration An area where libzypp needed improvement was the speed of the dependency solver. Projects like the Optimal Package Install/Uninstall Manager (OPIUM) and MANCOOSI were trying to fix dependency-solving issues with a SAT solver, as traditional solvers like the Advanced Packaging Tool (APT) sometimes show unacceptable deficiencies. It was decided to integrate SAT algorithms into the ZYpp stack; the solver algorithms used were based on the popular minisat solver, and the resulting library, libsolv, was written and released under the revised BSD license. The SAT solver implementation as it appears in openSUSE 11.0 is based on two major, but independent, blocks: Using a data dictionary approach to store and retrieve package and dependency information. A new solv format was created, which stores a repository as a string dictionary, a relation dictionary and then all package dependencies. Reading and merging multiple solv repositories takes only milliseconds.
History:
Using satisfiability for computing package dependencies. The Boolean satisfiability problem is a well-researched problem with many exemplar solvers available. It is very fast, as package-solving complexity is very low compared to other areas where SAT solvers are used. Also, it does not need complex algorithms and can provide understandable suggestions by calculating a proof of why a problem is unsolvable. After several months of work, the benchmark results of this fourth ZYpp version integrated with the SAT solver were more than encouraging, moving YaST and Zypper ahead of other RPM-based package managers in speed and size. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
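The encoding idea is easy to illustrate. Here is a toy sketch with an invented four-package universe; real solvers such as the minisat-derived one in libsolv use unit propagation and conflict-driven clause learning rather than the brute-force enumeration used below:

```python
from itertools import product

# Toy package universe (hypothetical names, not a real repository).
packages = ["app", "liba", "libb", "oldlib"]

# CNF clauses over package variables (True = installed):
#   app is requested:             (app)
#   app depends on liba OR libb:  (not app OR liba OR libb)
#   liba conflicts with oldlib:   (not liba OR not oldlib)
# A literal is a package name, negated with a leading "-".
clauses = [
    ["app"],
    ["-app", "liba", "libb"],
    ["-liba", "-oldlib"],
]

def satisfied(assignment, clause):
    """A clause holds if at least one of its literals is true."""
    return any(assignment[lit.lstrip("-")] ^ lit.startswith("-")
               for lit in clause)

# Print every installation set that satisfies all dependency clauses.
for values in product([False, True], repeat=len(packages)):
    assignment = dict(zip(packages, values))
    if all(satisfied(assignment, c) for c in clauses):
        print(sorted(p for p, v in assignment.items() if v))
```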
**Lead hydrogen arsenate**
Lead hydrogen arsenate:
Lead hydrogen arsenate, also called lead arsenate, acid lead arsenate or LA, chemical formula PbHAsO4, is an inorganic insecticide used primarily against the potato beetle.
Lead arsenate was the most extensively used arsenical insecticide. Two principal formulations of lead arsenate were marketed: basic lead arsenate (Pb5OH(AsO4)3, CASN: 1327-31-7) and acid lead arsenate (PbHAsO4).
Production and structure:
It is usually produced using the following reaction, which leads to formation of the desired product as a solid precipitate:
Pb(NO3)2 + H3AsO4 → PbHAsO4 + 2 HNO3
It has the same structure as the hydrogen phosphate PbHPO4. Like lead sulfate PbSO4, these salts are poorly soluble.
Uses:
As an insecticide, it was introduced in 1898 for use against the gypsy moth in Massachusetts. It represented a less soluble and less toxic alternative to the then-used Paris Green, which is about ten times more toxic. It also adhered better to the surface of the plants, further enhancing and prolonging its insecticidal effect.
Uses:
Lead arsenate was widely used in Australia, Canada, New Zealand, the US, England, France, North Africa, and many other areas, principally against the codling moth and the snow-white linden moth. It was used mainly on apples, but also on other fruit trees, garden crops, turfgrasses, and against mosquitoes. In combination with ammonium sulfate, it was used in southern California as a winter treatment on lawns to kill crabgrass seed. The search for a substitute began in 1919, when it was found that its residues remain in the products despite washing their surfaces. Alternatives were found to be less effective or more toxic to plants and animals until 1947, when DDT was found. The US EPA banned the use of lead arsenate on food crops in 1988.
Safety:
The LD50 is 1050 mg/kg (rat, oral). Morel mushrooms growing in old apple orchards that had been treated with lead arsenate may accumulate levels of toxic lead and arsenic that are unhealthy for human consumption. Lead arsenate was used as an insecticide on deciduous fruit trees from 1892 until around 1947 in Washington. Peryea et al. studied the distribution of Pb and As in these soils, concluding that the levels were above maximum tolerance levels. This indicates that these levels could be of environmental concern and could potentially be contaminating the groundwater in the area. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ice XII**
Ice XII:
Ice XII is a metastable, dense, crystalline phase of solid water, a type of ice. Ice XII was first reported in 1996 by C. Lobban, J.L. Finney and W.F. Kuhs and, after initial caution, was properly identified in 1998.
Ice XII:
It was first obtained by cooling liquid water to 260 K (−13 °C; 8 °F) at a pressure of 0.55 gigapascals (5,400 atm). Ice XII was discovered existing within the phase stability region of ice V. Later research showed that ice XII could be created outside that range. Pure ice XII can be created from ice Ih at 77 K (−196.2 °C; −321.1 °F) by rapid compression (0.81-1.00 GPa/min) or by warming high density amorphous ice at pressures between 0.8 to 1.6 gigapascals (7,900 to 15,800 atm).
Ice XII:
While it is similar in density (1.29 g/cm3 at 127 K (−146 °C; −231 °F)) to ice IV (which is also found in the phase space of ice V), it exists as a tetragonal crystal. Topologically, it is a mix of seven- and eight-membered rings, a 4-connected net (4-coordinate sphere packing), the densest possible arrangement without hydrogen bond interpenetration.
Ordinary water ice is known as ice Ih, (in the Bridgman nomenclature). Different types of ice, from ice II to ice XVI, have been created in the laboratory at different temperatures and pressures.
Ice XIV:
When hydrochloric-acid-doped ice XII is cooled down to about 110 K, it undergoes a phase transition into a partially hydrogen-ordered phase, namely ice XIV. The transition entropy from ice XIV to ice XII is estimated to be 60% of Pauling entropy based on DSC measurements. The formation of ice XIV from ice XII is more favoured at high pressure. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kirkwood approximation**
Kirkwood approximation:
The Kirkwood superposition approximation was introduced in 1935 by John G. Kirkwood as a means of representing a discrete probability distribution. The Kirkwood approximation for a discrete probability density function $P(x_1, x_2, \ldots, x_n)$ is given by
$$P'(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n-1} \left[ \prod_{\mathcal{T}_i \subseteq V} p(\mathcal{T}_i) \right]^{(-1)^{n-1-i}}$$
where $\prod_{\mathcal{T}_i \subseteq V} p(\mathcal{T}_i)$ is the product of probabilities over all subsets of variables of size $i$ in the variable set $V$; the marginals of order $n-1$ thus appear in the numerator, those of order $n-2$ in the denominator, and so on with alternating exponents. This kind of formula has been considered by Watanabe (1960) and, according to Watanabe, also by Robert Fano. For the three-variable case, it reduces to simply
$$P'(x_1, x_2, x_3) = \frac{p(x_1, x_2)\, p(x_2, x_3)\, p(x_1, x_3)}{p(x_1)\, p(x_2)\, p(x_3)}$$
The Kirkwood approximation does not generally produce a valid probability distribution (the normalization condition is violated). Watanabe claims that for this reason informational expressions of this type are not meaningful, and indeed there has been very little written about the properties of this measure. The Kirkwood approximation is the probabilistic counterpart of the interaction information.
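The normalization failure is easy to see numerically. A minimal numpy sketch for three binary variables with an arbitrary random joint distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random joint distribution over three binary variables.
P = rng.random((2, 2, 2))
P /= P.sum()

# Pairwise and single-variable marginals.
p12 = P.sum(axis=2); p23 = P.sum(axis=0); p13 = P.sum(axis=1)
p1 = P.sum(axis=(1, 2)); p2 = P.sum(axis=(0, 2)); p3 = P.sum(axis=(0, 1))

# Kirkwood approximation for the three-variable case.
Pk = np.empty_like(P)
for i, j, k in np.ndindex(2, 2, 2):
    Pk[i, j, k] = (p12[i, j] * p23[j, k] * p13[i, k]) / (p1[i] * p2[j] * p3[k])

print(Pk.sum())  # generally != 1: the approximation is not normalized
```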
Kirkwood approximation:
Judea Pearl (1988 §3.2.4) indicates that an expression of this type can be exact in the case of a decomposable model, that is, a probability distribution that admits a graph structure whose cliques form a tree. In such cases, the numerator contains the product of the intra-clique joint distributions and the denominator contains the product of the clique intersection distributions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Point coordination function**
Point coordination function:
Point coordination function (PCF) is a media access control (MAC) technique used in IEEE 802.11 based WLANs, including Wi-Fi. It resides in a point coordinator, also known as an access point (AP), to coordinate communication within the network. The AP waits for the PIFS duration rather than the DIFS duration to grasp the channel. PIFS is less than DIFS, and hence the point coordinator always has priority to access the channel. The PCF is located directly above the distributed coordination function (DCF) in the IEEE 802.11 MAC architecture. Channel access in PCF mode is centralized, hence the point coordinator sends a CF-Poll frame to a PCF-capable station to permit it to transmit a frame. In case the polled station does not have any frames to send, it must transmit a null frame.
Point coordination function:
Due to the priority of PCF over DCF, stations that only use DCF might not gain access to the medium. To prevent this, a repetition interval has been designed to cover both (Contention free) PCF & (Contention Based) DCF traffic. The repetition interval which is repeated continuously, starts with a special control frame called Beacon Frame. When stations hear the beacon frame, they start their network allocation vector for the duration of the contention free period of the repetition period. Since most APs have logical bus topologies (they are shared circuits) only one message can be processed at one time (it is a contention based system), and thus a media access control technique is required.
Point coordination function:
Wireless networks may suffer from a hidden node problem where some regular nodes (which communicate only with the AP) cannot see other nodes on the extreme edge of the geographical radius of the network because the wireless signal attenuates before it can reach that far. Thus having an AP in the middle allows the distance to be halved, allowing all nodes to see the AP, and consequentially, halving the maximum distance between two nodes on the extreme edges of a circle-star topology.
Point coordination function:
PCF seems to be implemented only in very few hardware devices as it is not part of the Wi-Fi Alliance's interoperability standard.
PCF Interframe Space:
PCF Interframe Space (PIFS) is one of the interframe spaces used in IEEE 802.11 based wireless LANs. A PCF-enabled access point waits for the PIFS duration rather than DIFS to occupy the wireless medium. The PIFS duration is less than DIFS and greater than SIFS (DIFS > PIFS > SIFS). Hence the AP always has higher priority to access the medium.
PIFS duration can be calculated as follows: PIFS = SIFS + Slot time | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
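The relation can be checked with concrete numbers. A small sketch, assuming the 802.11b DSSS PHY parameters (SIFS = 10 µs, slot time = 20 µs) and the standard relation DIFS = SIFS + 2 × slot time:

```python
# Illustrative 802.11b (DSSS) PHY parameters, in microseconds.
SIFS_US, SLOT_US = 10, 20

PIFS_US = SIFS_US + SLOT_US          # PIFS = SIFS + one slot  -> 30 us
DIFS_US = SIFS_US + 2 * SLOT_US      # DIFS = SIFS + two slots -> 50 us

assert SIFS_US < PIFS_US < DIFS_US   # the priority ordering from the text
print(PIFS_US, DIFS_US)
```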
**Entanglement witness**
Entanglement witness:
In quantum information theory, an entanglement witness is a functional which distinguishes a specific entangled state from separable ones. Entanglement witnesses can be linear or nonlinear functionals of the density matrix. If linear, then they can also be viewed as observables for which the expectation value of the entangled state is strictly outside the range of possible expectation values of any separable state.
Details:
Let a composite quantum system have state space $H_A \otimes H_B$. A mixed state ρ is then a trace-class positive operator on the state space which has trace 1. We can view the family of states as a subset of the real Banach space generated by the Hermitian trace-class operators, with the trace norm. A mixed state ρ is separable if it can be approximated, in the trace norm, by states of the form
$$\xi = \sum_{i=1}^{k} p_i \, \rho_i^A \otimes \rho_i^B,$$
where $\rho_i^A$ and $\rho_i^B$ are pure states on the subsystems A and B respectively. So the family of separable states is the closed convex hull of pure product states. We will make use of the following variant of the Hahn–Banach theorem:
Theorem. Let $S_1$ and $S_2$ be disjoint convex closed sets in a real Banach space, one of which is compact. Then there exists a bounded functional f separating the two sets.
Details:
This is a generalization of the fact that, in real Euclidean space, given a convex set and a point outside it, there always exists an affine subspace separating the two. The affine subspace manifests itself as the functional f. In the present context, the family of separable states is a convex set in the space of trace-class operators. If ρ is an entangled state (thus lying outside the convex set), then by the theorem above there is a functional f separating ρ from the separable states. It is this functional f, or its identification as an operator, that we call an entanglement witness. There is more than one hyperplane separating a closed convex set from a point lying outside of it, so for an entangled state there is more than one entanglement witness. Recall that the dual space of the Banach space of trace-class operators is isomorphic to the set of bounded operators. Therefore, we can identify f with a Hermitian operator A. Thus, modulo a few details, we have shown the existence of an entanglement witness given an entangled state:
Theorem. For every entangled state ρ, there exists a Hermitian operator A such that Tr(Aρ) < 0 and Tr(Aσ) ≥ 0 for all separable states σ.
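In finite dimensions the theorem can be made concrete. A standard textbook example (not drawn from the text above) is the witness W = (1/2)I − |Φ+⟩⟨Φ+| for the Bell state |Φ+⟩ = (|00⟩ + |11⟩)/√2: every product state has squared overlap at most 1/2 with a maximally entangled state, so Tr(Wσ) ≥ 0 on separable states, while Tr(Wρ) = −1/2 on the Bell state itself. A quick numerical check:

```python
import numpy as np

# Witness W = (1/2) I - |Phi+><Phi+| for |Phi+> = (|00> + |11>)/sqrt(2).
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
W = np.eye(4) / 2 - np.outer(phi, phi)

bell = np.outer(phi, phi)                 # the entangled state itself
print(np.trace(W @ bell))                 # -0.5 < 0: entanglement detected

# Random pure product states a (x) b stay on the separable side.
rng = np.random.default_rng(1)
for _ in range(5):
    a = rng.normal(size=2) + 1j * rng.normal(size=2); a /= np.linalg.norm(a)
    b = rng.normal(size=2) + 1j * rng.normal(size=2); b /= np.linalg.norm(b)
    ab = np.kron(a, b)
    sigma = np.outer(ab, ab.conj())
    print(np.trace(W @ sigma).real)       # always >= 0
```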
Details:
When both HA and HB have finite dimension, there is no difference between trace-class and Hilbert–Schmidt operators. So in that case A can be given by the Riesz representation theorem. As an immediate corollary, we have:
Theorem. A mixed state σ is separable if and only if Tr(Aσ) ≥ 0 for any bounded operator A satisfying Tr(A ⋅ P⊗Q) ≥ 0 for all product pure states P⊗Q.
If a state is separable, clearly the desired implication from the theorem must hold. On the other hand, given an entangled state, one of its entanglement witnesses will violate the given condition.
Details:
Thus, if a bounded functional f of the trace-class Banach space is positive on the product pure states, then f, or its identification as a Hermitian operator, is an entanglement witness. Such an f indicates the entanglement of some state.
Details:
Using the isomorphism between entanglement witnesses and non-completely positive maps, it was shown (by the Horodeckis) that:
Theorem. Assume that HA, HB are finite-dimensional. A mixed state σ ∈ L(HA) ⊗ L(HB) is separable if and only if, for every positive map Λ from bounded operators on HB to bounded operators on HA, the operator (IA ⊗ Λ)(σ) is positive, where IA is the identity map on L(HA), the bounded operators on HA. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Bounded type (mathematics)**
Bounded type (mathematics):
In mathematics, a function defined on a region of the complex plane is said to be of bounded type if it is equal to the ratio of two analytic functions bounded in that region. But more generally, a function f is of bounded type in a region Ω if and only if f is analytic on Ω and $\log^+|f(z)|$ has a harmonic majorant on Ω, where $\log^+(x) = \max(0, \log x)$. Being the ratio of two bounded analytic functions is a sufficient condition for a function to be of bounded type (defined in terms of a harmonic majorant), and if Ω is simply connected the condition is also necessary.
Bounded type (mathematics):
The class of all such f on Ω is commonly denoted N(Ω) and is sometimes called the Nevanlinna class for Ω . The Nevanlinna class includes all the Hardy classes.
Functions of bounded type are not necessarily bounded, nor do they have a property called "type" which is bounded. The reason for the name is probably that when defined on a disc, the Nevanlinna characteristic (a function of distance from the centre of the disc) is bounded.
Clearly, if a function is the ratio of two bounded functions, then it can be expressed as the ratio of two functions which are bounded by 1: $f(z) = P(z)/Q(z)$. The logarithms of $|1/P(z)|$ and of $|1/Q(z)|$ are non-negative in the region, so
$$\log^+|f(z)| \le \log|1/P(z)| + \log|1/Q(z)| = \mathrm{Re}\left(-\log P(z) - \log Q(z)\right).$$
The latter is the real part of an analytic function and is therefore harmonic, showing that log +|f(z)| has a harmonic majorant on Ω.
For a given region, sums, differences, and products of functions of bounded type are of bounded type, as is the quotient of two such functions as long as the denominator is not identically zero.
Examples:
Polynomials are of bounded type in any bounded region. They are also of bounded type in the upper half-plane (UHP), because a polynomial $f(z)$ of degree n can be expressed as a ratio of two analytic functions bounded in the UHP: $f(z) = P(z)/Q(z)$ with $P(z) = f(z)/(z+i)^n$ and $Q(z) = 1/(z+i)^n$.
The inverse of a polynomial is also of bounded type in a region, as is any rational function.
Examples:
The function $\exp(aiz)$ is of bounded type in the UHP if and only if $a$ is real. If $a$ is positive the function itself is bounded in the UHP (so we can use $Q(z) = 1$), and if $a$ is negative then the function equals $1/Q(z)$ with $Q(z) = \exp(|a|iz)$. Sine and cosine are of bounded type in the UHP. Indeed, $\sin(z) = P(z)/Q(z)$ with $P(z) = \sin(z)\exp(iz)$ and $Q(z) = \exp(iz)$, both of which are bounded in the UHP.
Examples:
All of the above examples are of bounded type in the lower half-plane as well, using different P and Q functions. But the region mentioned in the definition of the term "bounded type" cannot be the whole complex plane unless the function is constant because one must use the same P and Q over the whole region, and the only entire functions (that is, analytic in the whole complex plane) which are bounded are constants, by Liouville's theorem.
Examples:
Another example in the upper half-plane is a "Nevanlinna function", that is, an analytic function that maps the UHP to the closed UHP. If $f(z)$ is of this type, then $f(z) = P(z)/Q(z)$ where $P$ and $Q$ are the bounded functions
$$P(z) = \frac{f(z)}{f(z)+i}, \qquad Q(z) = \frac{1}{f(z)+i}.$$
(This obviously applies as well to $f(z)/i$, that is, a function whose real part is non-negative in the UHP.)
Properties:
For a given region, the sum, product, or quotient of two (non-null) functions of bounded type is also of bounded type. The set of functions of bounded type is an algebra over the complex numbers and is in fact a field.
Properties:
Any function of bounded type in the upper half-plane (with a finite number of roots in some neighborhood of 0) can be expressed as a Blaschke product (an analytic function, bounded in the region, which factors out the zeros) multiplying the quotient $P(z)/Q(z)$ where $P(z)$ and $Q(z)$ are bounded by 1 and have no zeros in the UHP. One can then express this quotient as
$$\frac{P(z)}{Q(z)} = \frac{\exp(-U(z))}{\exp(-V(z))}$$
where $U(z)$ and $V(z)$ are analytic functions having non-negative real part in the UHP. Each of these in turn can be expressed by a Poisson representation (see Nevanlinna functions):
$$U(z) = c - ipz - i\int_{\mathbb{R}}\left(\frac{1}{\lambda - z} - \frac{\lambda}{1+\lambda^2}\right) d\mu(\lambda)$$
$$V(z) = d - iqz - i\int_{\mathbb{R}}\left(\frac{1}{\lambda - z} - \frac{\lambda}{1+\lambda^2}\right) d\nu(\lambda)$$
where $c$ and $d$ are imaginary constants, $p$ and $q$ are non-negative real constants, and $\mu$ and $\nu$ are non-decreasing functions of a real variable (well behaved so the integrals converge). The difference $q - p$ has been given the name "mean type" by Louis de Branges and describes the growth or decay of the function along the imaginary axis:
$$\limsup_{y\to\infty} \frac{\ln|f(iy)|}{y}$$
The mean type in the upper half-plane is the limit of a weighted average of the logarithm of the function's absolute value divided by distance from zero, normalized in such a way that the value for $\exp(-iz)$ is 1:
$$\limsup_{r\to\infty} \frac{2}{\pi r}\int_0^\pi \ln|f(re^{i\theta})| \sin\theta \, d\theta$$
If an entire function is of bounded type in both the upper and the lower half-plane then it is of exponential type equal to the higher of the two respective "mean types" (and the higher one will be non-negative). An entire function of order greater than 1 (which means that in some direction it grows faster than a function of exponential type) cannot be of bounded type in any half-plane.
Properties:
We may thus produce a function of bounded type by using an appropriate exponential of $z$ and exponentials of arbitrary Nevanlinna functions multiplied by $i$, for example $\exp(-iz)\exp\!\big(\exp(-i/z)\big)$. Concerning the examples given above, the mean type of polynomials or their inverses is zero. The mean type of $\exp(aiz)$ in the upper half-plane is $-a$, while in the lower half-plane it is $a$. The mean type of $\sin(z)$ in both half-planes is 1.
Properties:
Functions of bounded type in the upper half-plane with non-positive mean type and having a continuous, square-integrable extension to the real axis have the interesting property (useful in applications) that the integral (along the real axis)
$$\frac{1}{2\pi i}\int_{-\infty}^{\infty} \frac{f(t)}{t-z}\,dt$$
equals $f(z)$ if $z$ is in the upper half-plane and zero if $z$ is in the lower half-plane. This may be termed the Cauchy formula for the upper half-plane. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Range mode query**
Range mode query:
In data structures, the range mode query problem asks to build a data structure on some input data to efficiently answer queries asking for the mode of any consecutive subset of the input.
Problem statement:
Given an array A[1:n] = [a1, a2, ..., an], we wish to answer queries of the form mode(A, i:j), where 1 ≤ i ≤ j ≤ n. The mode mode(S) of any array S = [s1, s2, ..., sk] is an element si such that the frequency of si is greater than or equal to the frequency of sj for all j ∈ {1, ..., k}. For example, if S = [1, 2, 4, 2, 3, 4, 2], then mode(S) = 2 because it occurs three times, while all other values occur fewer times. In this problem, the queries ask for the mode of subarrays of the form A[i:j] = [ai, ai+1, ..., aj].
Theorem 1. Let A and B be any multisets. If c is a mode of A ∪ B and c ∉ A, then c is a mode of B.
Proof. Let c ∉ A be a mode of C = A ∪ B and let fc be its frequency in C; since c ∉ A, its frequency in B is also fc. Suppose that c is not a mode of B. Then there exists an element b with frequency fb > fc that is the mode of B. But then the frequency of b in C is at least fb > fc, so b rather than c would be the mode of C, which is a contradiction.
Lower bound:
Any data structure using S cells of w bits each needs Ω(log n / log(Sw/n)) time to answer a range mode query. This contrasts with other range query problems, such as the range minimum query, which have solutions offering constant query time and linear space. This is due to the hardness of the mode problem: even if we know the mode of A[i:j] and the mode of A[j+1:k], there is no simple way of computing the mode of A[i:k]. Any element of A[i:j] or A[j+1:k] could be the mode. For example, if mode(A[i:j]) = a with frequency fa, and mode(A[j+1:k]) = b also with frequency fa, there could be an element c with frequency fa−1 in A[i:j] and frequency fa−1 in A[j+1:k]. Then a ≠ c ≠ b, but the frequency of c in A[i:k] is greater than the frequency of a and b, which makes c a better candidate for mode(A[i:k]) than a or b.
Linear space data structure with square root query time:
This method by Chan et al. uses O(n + s²) space and O(n/s) query time. By setting s = √n, we get O(n) space and O(√n) query time.
Linear space data structure with square root query time:
Preprocessing
Let A[1:n] be an array, and let D[1:Δ] be an array that contains the distinct values of A, where Δ is the number of distinct elements. We define B[1:n] to be an array such that, for each i, B[i] contains the rank (position) of A[i] in D. Arrays B and D can be created by a linear scan of A.
Arrays Q1, Q2, ..., QΔ are also created, such that, for each a ∈ {1, ..., Δ}, Qa = {b | B[b] = a}. We then create an array B′[1:n] such that, for all b ∈ {1, ..., n}, B′[b] contains the rank of b in QB[b]. Again, a linear scan of B suffices to create the arrays Q1, Q2, ..., QΔ and B′.
It is now possible to answer queries of the form "is the frequency of B[i] in B[i:j] at least q" in constant time, by checking whether QB[i][B′[i] + q − 1] ≤ j.
The array B is split into s blocks b1, b2, ..., bs, each of size t = ⌈n/s⌉. Thus, a block bi spans over B[i⋅t+1 : (i+1)t]. The mode and the frequency of each block or set of consecutive blocks are pre-computed in two tables S and S′: S[bi, bj] is the mode of bi ∪ bi+1 ∪ ... ∪ bj, or equivalently, the mode of B[bi⋅t+1 : (bj+1)t], and S′ stores the corresponding frequency. These two tables can be stored in O(s²) space, and can be populated in O(s⋅n) time by scanning B s times, computing a row of S and S′ each time with the following algorithm:

algorithm computeS_Sprime is
    input: Array B = [0:n - 1], Array D = [0:Delta - 1], Integer s, Integer t
    output: Tables S and Sprime
    let S ← Table(0:s - 1, 0:s - 1)
    let Sprime ← Table(0:s - 1, 0:s - 1)
    let firstOccurrence ← Array(0:Delta - 1)
    for all d in {0, ..., Delta - 1} do
        firstOccurrence[d] ← -1
    end for
    for i ← 0 to s - 1 do
        let j ← i × t
        let c ← 0
        let fc ← 0
        let noBlock ← i
        let block_end ← min{(i + 1) × t - 1, n - 1}
        while j < n do
            if firstOccurrence[B[j]] = -1 then
                firstOccurrence[B[j]] ← j
            end if
            if atLeastQInstances(firstOccurrence[B[j]], block_end, fc + 1) then
                c ← B[j]
                fc ← fc + 1
            end if
            if j = block_end then
                S[i, noBlock] ← c
                Sprime[i, noBlock] ← fc
                noBlock ← noBlock + 1
                block_end ← min{block_end + t, n - 1}
            end if
            j ← j + 1
        end while
        for all d in {0, ..., Delta - 1} do
            firstOccurrence[d] ← -1
        end for
    end for

Query
We will define the query algorithm over the array B. This can be translated to an answer over A, since for any a, i, j, B[a] is a mode for B[i:j] if and only if A[a] is a mode for A[i:j]. We can convert an answer for B to an answer for A in constant time by looking in A or B at the corresponding index.
Linear space data structure with square root query time:
Given a query mode(B, i, j), the query is split into three parts: the prefix, the span and the suffix. Let bi = ⌈(i−1)/t⌉ and bj = ⌊j/t⌋ − 1. These denote the indices of the first and the last block that are completely contained in B[i:j]. The range of these blocks is called the span. The prefix is then B[i : min{bi⋅t, j}] (the set of indices before the span), and the suffix is B[max{(bj+1)t + 1, i} : j] (the set of indices after the span). The prefix, the suffix or the span can be empty; the span is empty if bj < bi.
For the span, the mode c is already stored in S[bi, bj]. Let fc be the frequency of the mode, which is stored in S′[bi, bj]. If the span is empty, let fc = 0. Recall that, by Theorem 1, the mode of B[i:j] is either an element of the prefix, the span or the suffix. A linear scan is performed over each element in the prefix and in the suffix to check if its frequency is greater than that of the current candidate c, in which case c and fc are updated to the new value. At the end of the scan, c contains the mode of B[i:j] and fc its frequency.
Linear space data structure with square root query time:
Scanning procedure The procedure is similar for both prefix and suffix, so it suffice to run this procedure for both: Let x be the index of the current element. There are three cases: If QB[x][B′[x]−1]≥i , then it was present in B[i:x−1] and its frequency has already been counted. Pass to the next element.
Otherwise, check if the frequency of B[x] in B[i:j] is at least fc (this can be done in constant time since it is the equivalent of checking it for B[x:j] ).
If it is not, then pass to the next element.
Linear space data structure with square root query time:
If it is, then compute the actual frequency fx of B[x] in B[i:j] by a linear scan (starting at index B′[x] + fc − 1) or a binary search in QB[x], and set c := B[x] and fc := fx. This linear scan (excluding the frequency computations) is bounded by the block size t, since neither the prefix nor the suffix can be greater than t. A further analysis of the linear scans done for frequency computations shows that it is also bounded by the block size. Thus, the query time is O(t) = O(n/s).
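A compact Python sketch of this structure follows. It keeps the block tables S and S′ and the occurrence lists Q, but for brevity the prefix/suffix scan counts each candidate's frequency with binary search in its Q-list instead of the constant-time rank trick above, so it illustrates the idea rather than the exact O(n/s) algorithm:

```python
import bisect
import math
from collections import defaultdict

class RangeMode:
    """Block structure sketched above, 0-indexed; query(i, j) is inclusive."""

    def __init__(self, A):
        self.A = A
        n = len(A)
        self.t = max(1, math.isqrt(n))            # block size ~ sqrt(n)
        s = (n + self.t - 1) // self.t            # number of blocks
        self.Q = defaultdict(list)                # value -> sorted positions
        for idx, v in enumerate(A):
            self.Q[v].append(idx)
        # S[a][b] / Sp[a][b]: mode and its frequency over blocks a..b.
        self.S = [[None] * s for _ in range(s)]
        self.Sp = [[0] * s for _ in range(s)]
        for a in range(s):
            freq, c, fc = defaultdict(int), None, 0
            for b in range(a, s):
                for idx in range(b * self.t, min((b + 1) * self.t, n)):
                    freq[A[idx]] += 1
                    if freq[A[idx]] > fc:
                        c, fc = A[idx], freq[A[idx]]
                self.S[a][b], self.Sp[a][b] = c, fc

    def query(self, i, j):
        t = self.t
        bi, bj = -(-i // t), (j + 1) // t - 1     # first/last full block
        c, fc = (self.S[bi][bj], self.Sp[bi][bj]) if bi <= bj else (None, 0)
        if bi <= bj:                              # prefix + suffix positions
            outside = list(range(i, bi * t)) + list(range((bj + 1) * t, j + 1))
        else:
            outside = range(i, j + 1)
        seen = set()
        for x in outside:
            v = self.A[x]
            if v in seen:
                continue
            seen.add(v)
            lo = bisect.bisect_left(self.Q[v], i)  # occurrences of v in [i, j]
            hi = bisect.bisect_right(self.Q[v], j)
            if hi - lo > fc:                       # Theorem 1: candidate wins
                c, fc = v, hi - lo
        return c, fc

# Example with the array from the problem statement.
print(RangeMode([1, 2, 4, 2, 3, 4, 2]).query(0, 6))   # (2, 3)
```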
Subquadratic space data structure with constant query time:
This method uses O(n² log log n / log n) space for a constant time query. We can observe that, if a constant query time is desired, this is a better solution than the one proposed by Chan et al., as the latter gives a space of O(n²) for constant query time if s = n.
Preprocessing
Let A[1:n] be an array. The preprocessing is done in three steps:
Split the array A in s blocks b1, b2, ..., bs, where the size of each block is t = ⌈n/s⌉. Build a table S of size s × s where S[i,j] is the mode of bi ∪ bi+1 ∪ ... ∪ bj. The total space for this step is O(s²).
For any query mode(A,i,j), let bi′ be the block that contains i and bj′ be the block that contains j. Let the span be the set of blocks completely contained in A[i:j]. The mode c of the span can be retrieved from S. By Theorem 1, the mode can be either an element of the prefix (indices of A[i:j] before the start of the span), an element of the suffix (indices of A[i:j] after the end of the span), or c. The size of the prefix plus the size of the suffix is bounded by 2t, thus the position of the mode is stored as an integer ranging from 0 to 2t, where [0:2t−1] indicates a position in the prefix/suffix and 2t indicates that the mode is the mode of the span. There are t² possible queries involving blocks bi′ and bj′, so these values are stored in a table of size t². Furthermore, there are at most (2t+1)^(t²) such tables, so the total space required for this step is O(t²(2t+1)^(t²)). To access those tables, a pointer is added in addition to the mode in the table S for each pair of blocks.
Subquadratic space data structure with constant query time:
To handle queries mode(A,i,j) where i and j are in the same block, all such solutions are precomputed. There are O(st²) of them, and they are stored in a three-dimensional table T of this size. The total space used by this data structure is O(s² + t²(2t+1)^(t²) + st²), which reduces to O(n² log log n / log n) if we take t = O(√(log n / log log n)).
Query
Given a query mode(A,i,j), check if it is completely contained inside a block, in which case the answer is stored in table T. If the query spans exactly one or more blocks, then the answer is found in table S. Otherwise, use the pointer stored in table S at position S[bi′, bj′], where bi′, bj′ are the indices of the blocks that contain respectively i and j, to find the table Ubi′,bj′ that contains the positions of the mode for these blocks, and use the position to find the mode in A. This can be done in constant time. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Jacqueline Chen**
Jacqueline Chen:
Jacqueline H. Chen is an American mechanical engineer. She works in the Combustion Research Facility of Sandia National Laboratories, where she is a Senior Scientist. Her research applies massively parallel computing to the simulation of turbulent combustion.
Education and career:
Chen grew up as a child of Chinese immigrants in Ohio, and graduated from the Ohio State University with a bachelor's degree in mechanical engineering in 1981. After earning a master's degree in mechanical engineering in 1982 at the University of California, Berkeley, under the mentorship of Boris Rubinsky, she continued at Stanford University for doctoral study in the same subject. She completed her Ph.D. in 1989; her doctoral advisor at Stanford was Brian J. Cantwell. She has worked at Sandia since finishing her education and is a pioneer of massively parallel direct numerical simulation of turbulent combustion with complex chemistry. She has led teams of computer scientists, applied mathematicians and computational engineers on the co-design of combustion simulation software for exascale computing (10^18 flops).
Recognition:
In 2018, Chen was elected to the National Academy of Engineering "for contributions to the computational simulation of turbulent reacting flows with complex chemistry".
Recognition:
In the same year, the Society of Women Engineers gave her an Achievement Award, their top honor, and the Combustion Institute awarded her the Bernard Lewis Gold Medal, "for her exceptional skill in linking high performance computing and combustion research to deliver fundamental insights into turbulence-chemistry interactions". The Combustion Institute and the American Physical Society also named her as one of its fellows. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flöz Mittel Formation**
Flöz Mittel Formation:
The Flöz Mittel Formation is a geologic formation in Germany. It preserves fossils dating back to the Carboniferous period. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**LINC00520**
LINC00520:
Long intergenic non-protein coding RNA 520 is a long non-coding RNA that in humans is encoded by the LINC00520 gene. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**UniHan IME**
UniHan IME:
UniHan IME is an input method based on the framework of IIIMF developed by Hong Kong Sun Wah Hi-Tech Ltd.
UniHan IME is an input method interface that maps the keyboard keys string to the Han character in the latest version of Unicode Table.
UniHan IME:
UniHan is the CJKV character section, which occupies more than half the storage space of the Unicode table. There are more than 75,000 characters coded in version 6.0.0, released in 2010. Chinese, Japanese, Korean and Vietnamese have shared Han characters for naming for more than a thousand years. The input methods for the Han characters from Unicode are mainly keyboard typing, mouse pointing on screen or handwriting on a pad. The popular methods are the pinyin keyboard method and the handwriting method. A complete font set for Unihan version 6.0.0 is yet to come, and so is the Unihan IME.
UniHan IME:
A similar IME called 8 Steps Unihan was developed by the 8 Steps Unihan company in Melbourne, Australia. The 8StepsA font, coupled with the Microsoft Windows 10 SimSunExtB font, is able to display all the characters in Unihan 10.0, which includes the extension F character set. The glyphs that are repeated have been linked together, and only one of the linked codes is used by the IME so that all the displayable characters are unique. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Danalite**
Danalite:
Danalite is an iron beryllium silicate sulfide mineral with formula: Fe2+4Be3(SiO4)3S. It is a rare mineral which occurs in granites, tin-bearing pegmatites, contact metamorphic skarns, gneisses and in hydrothermal deposits. It occurs in association with magnetite, garnet, fluorite, albite, cassiterite, pyrite, muscovite, arsenopyrite, quartz, and chlorite. Danalite was first described in 1866 from a deposit in Essex County, Massachusetts and named for American mineralogist James Dwight Dana (1813–1895). It has been found in Massachusetts, New Hampshire, Sierra County, New Mexico; Yavapai County, Arizona; Needlepoint Mountain, British Columbia; Walrus Island, James Bay, Quebec; Sweden; Cornwall, England; Imalka and Transbaikal, Russia; Kazakhstan; Somalia; Tasmania; Western Australia and Hiroshima Prefecture, Japan. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Stepwise regression**
Stepwise regression:
In statistics, stepwise regression is a method of fitting regression models in which the choice of predictive variables is carried out by an automatic procedure. In each step, a variable is considered for addition to or subtraction from the set of explanatory variables based on some prespecified criterion. Usually, this takes the form of a forward, backward, or combined sequence of F-tests or t-tests. The frequent practice of fitting the final selected model followed by reporting estimates and confidence intervals without adjusting them to take the model building process into account has led to calls to stop using stepwise model building altogether or to at least make sure model uncertainty is correctly reflected.
Stepwise regression:
Alternatives include other model selection techniques, such as adjusted R2, Akaike information criterion, Bayesian information criterion, Mallows's Cp, PRESS, or false discovery rate.
Main approaches:
The main approaches for stepwise regression are: Forward selection, which involves starting with no variables in the model, testing the addition of each variable using a chosen model fit criterion, adding the variable (if any) whose inclusion gives the most statistically significant improvement of the fit, and repeating this process until none improves the model to a statistically significant extent (a minimal sketch of this loop follows the list below).
Main approaches:
Backward elimination, which involves starting with all candidate variables, testing the deletion of each variable using a chosen model fit criterion, deleting the variable (if any) whose loss gives the most statistically insignificant deterioration of the model fit, and repeating this process until no further variables can be deleted without a statistically significant loss of fit.
Bidirectional elimination, a combination of the above, testing at each step for variables to be included or excluded.
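As a concrete illustration of the forward-selection loop described above, here is a minimal sketch; the function interface, the alpha = 0.05 threshold, and the choice of a partial F-test as the fit criterion are illustrative assumptions, not a prescription:

```python
import numpy as np
from scipy import stats

def forward_select(X, y, alpha=0.05):
    """Greedy forward selection by partial F-test (illustrative sketch).

    X: (n, p) array of candidate predictors; y: (n,) response.
    Returns the list of selected column indices, in order of entry."""
    n, p = X.shape
    chosen, remaining = [], list(range(p))
    rss_cur = float(np.sum((y - y.mean()) ** 2))    # intercept-only model
    while remaining:
        best = None
        for k in remaining:
            cols = np.column_stack([np.ones(n)] + [X[:, c] for c in chosen + [k]])
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            resid = y - cols @ beta
            rss = float(resid @ resid)
            df2 = n - cols.shape[1]
            f_stat = (rss_cur - rss) / (rss / df2)  # 1 numerator df
            pval = stats.f.sf(f_stat, 1, df2)
            if best is None or pval < best[0]:
                best = (pval, k, rss)
        pval, k, rss = best
        if pval >= alpha:                           # nothing significant left
            break
        chosen.append(k)
        remaining.remove(k)
        rss_cur = rss
    return chosen
```

Note that a fixed alpha of this kind is exactly the loose criterion criticized later in this article; a stiffer, Bonferroni-style cutoff reduces the risk of admitting spurious variables.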
Alternatives:
A widely used algorithm was first proposed by Efroymson (1960). This is an automatic procedure for statistical model selection in cases where there is a large number of potential explanatory variables, and no underlying theory on which to base the model selection. The procedure is used primarily in regression analysis, though the basic approach is applicable in many forms of model selection. This is a variation on forward selection. At each stage in the process, after a new variable is added, a test is made to check if some variables can be deleted without appreciably increasing the residual sum of squares (RSS). The procedure terminates when the measure is (locally) maximized, or when the available improvement falls below some critical value.
Alternatives:
One of the main issues with stepwise regression is that it searches a large space of possible models. Hence it is prone to overfitting the data. In other words, stepwise regression will often fit much better in sample than it does on new out-of-sample data. Extreme cases have been noted where models have achieved statistical significance working on random numbers. This problem can be mitigated if the criterion for adding (or deleting) a variable is stiff enough. The key line in the sand is at what can be thought of as the Bonferroni point: namely how significant the best spurious variable should be based on chance alone. On a t-statistic scale, this occurs at about √(2 log p), where p is the number of predictors. Unfortunately, this means that many variables which actually carry signal will not be included. This fence turns out to be the right trade-off between over-fitting and missing signal. If we look at the risk of different cutoffs, then using this bound will be within a 2 log p factor of the best possible risk. Any other cutoff will end up having a larger such risk inflation.
Model accuracy:
A way to test for errors in models created by step-wise regression, is to not rely on the model's F-statistic, significance, or multiple R, but instead assess the model against a set of data that was not used to create the model. This is often done by building a model based on a sample of the dataset available (e.g., 70%) – the “training set” – and use the remainder of the dataset (e.g., 30%) as a validation set to assess the accuracy of the model. Accuracy is then often measured as the actual standard error (SE), MAPE (Mean absolute percentage error), or mean error between the predicted value and the actual value in the hold-out sample. This method is particularly valuable when data are collected in different settings (e.g., different times, social vs. solitary situations) or when models are assumed to be generalizable.
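A minimal sketch of this hold-out evaluation; the fit/predict interface and the 70/30 split are illustrative assumptions:

```python
import numpy as np

def holdout_mape(X, y, fit, train_frac=0.7, seed=0):
    """Fit on a random 70% split and report MAPE on the 30% hold-out.

    `fit(X, y)` must return a callable predict(X) (hypothetical interface)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(train_frac * len(y))
    train, valid = idx[:cut], idx[cut:]
    predict = fit(X[train], y[train])
    err = predict(X[valid]) - y[valid]
    return np.mean(np.abs(err / y[valid])) * 100   # mean absolute % error
```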
Criticism:
Stepwise regression procedures are used in data mining, but are controversial. Several points of criticism have been made.
The tests themselves are biased, since they are based on the same data. Wilkinson and Dallal (1981) computed percentage points of the multiple correlation coefficient by simulation and showed that a final regression obtained by forward selection, said by the F-procedure to be significant at 0.1%, was in fact only significant at 5%.
Criticism:
When estimating the degrees of freedom, the number of the candidate independent variables from the best fit selected may be smaller than the total number of final model variables, causing the fit to appear better than it is when adjusting the r2 value for the number of degrees of freedom. It is important to consider how many degrees of freedom have been used in the entire model, not just count the number of independent variables in the resulting fit.
Criticism:
Models that are created may be over-simplifications of the real models of the data.Such criticisms, based upon limitations of the relationship between a model and procedure and data set used to fit it, are usually addressed by verifying the model on an independent data set, as in the PRESS procedure.
Criticism:
Critics regard the procedure as a paradigmatic example of data dredging, intense computation often being an inadequate substitute for subject area expertise. Additionally, the results of stepwise regression are often used incorrectly without adjusting them for the occurrence of model selection. Especially the practice of fitting the final selected model as if no model selection had taken place and reporting of estimates and confidence intervals as if least-squares theory were valid for them, has been described as a scandal. Widespread incorrect usage and the availability of alternatives such as ensemble learning, leaving all variables in the model, or using expert judgement to identify relevant variables have led to calls to totally avoid stepwise model selection. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Orange creamsicle cake**
Orange creamsicle cake:
Orange creamsicle cake is a cake containing orange and vanilla flavors that is named after the Popsicle-brand "Creamsicle" ice cream treat on a stick: a vanilla ice cream center coated in orange-flavored popsicle ice. A traditional version might just be an orange- and vanilla-flavored bundt cake, but there are no-bake versions of the cake made with ice cream and pudding, and other versions made with cakes that are frosted or served with orange marmalade.
Ice cream cakes:
Some versions of the cake are no-bake ice cream cakes made with orange and vanilla layers. The layers can be made with vanilla ice cream and orange sherbet over a vanilla wafer crust, although there is a lot of flexibility in how the cake is assembled, like using gingersnaps or white cake for the crust, or frozen yogurt instead of vanilla ice cream. Pound cake is used in some recipes to line a loaf pan, then the pound cake shell is filled with orange sherbet and the top is covered with pound cake. The sherbet-filled pound cake loaf is left in the fridge to set for several hours before it is frosted with vanilla frosting.
Angel food cake:
Other versions are made with angel food cake which can be flavored with orange-marmalade or orange zest and frosted with an orange and vanilla flavored whipped custard, or simply with orange marmalade.
Other:
Another version of the cake can be made by frosting orange cake with vanilla pudding frosting. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mir-127**
Mir-127:
mir-127 microRNA is a short non-coding RNA molecule with an interesting overlapping gene structure. miR-127 functions to regulate the expression levels of genes involved in lung development, placental formation and apoptosis. Aberrant expression of miR-127 has been linked to different cancers.
Gene structure:
pri-miR-127 is derived from a separate but overlapping conserved gene cluster coding for miR-433/127. miR-127 and miR-433 overlap in a 5'-3' direction. Although the loci can be found on different chromosomes in different species, the structure has been conserved. In mammals including human, chimpanzee, horse, dog, monkey, rat, cow, and mouse, multiple sequence alignments (MSA) between miR-433 and miR-127 have shown 95-100% similarity, with a conserved distance between miR-433 and miR-127 of 986 to 1007 bp. Moreover, the upstream response elements in the miR-433/127 promoters, including the estrogen-related receptor response element (ERRE), have been conserved among the above species. Data have suggested that the miR-433/127 loci may have evolved from a common gene of origin.
Transcription regulation:
Transcription factor binding sites positioned upstream of miRNA precursor play a role in regulating transcription. Activation of miR-127 and miR-433 promoters is mediated by estrogen-related receptor gamma (ERRgamma, NR3B3), which physically associates with their endogenous promoters. Inhibition is regulated by Small heterodimer partner (SHP), which acts in trans. Although miR-127 and miR-433 have common regulatory elements, they have independent promoters and their differential expression pattern is observed.
Functional roles:
Down-regulation of the imprinted gene Rtl1 Rtl1 is a key gene in placenta formation, and loss or overexpression of Rtl1 has led to late-fetal or neonatal lethality in mice. miR-127 is located near CpG islands in the imprinted region encoding Rtl1 and is normally transcribed in an antisense orientation to the gene. Ectopic expression of miR-127 resulted in a reduction in Rtl1 expression in human HeLa cells and mouse Hepa-1 cells.
Functional roles:
Experiments performed in mice showed that Rtl1 was only transcribed from the paternal chromosome, while the maternal allele was degraded. miR-127 and miR-136 however, are only maternally expressed in the somatic cells and thus play a role in antisense regulation of Rtl1 imprinting. Aberrant methylation status of Rtl1 and miR-127 indicated that epigenetic programming is also involved in the process.
Functional roles:
Control of fetal lung development miR-127 is highly expressed in late state of fetal development. A disruption to the system by overexpressing miR-127 in a fetal lung organ culture system resulted in defective development shown by a decrease in terminal bud counts and varied bud sizes.
Role in disease:
Diffuse large B-cell lymphoma Upregulation of miR-127 caused a downregulation of the B-cell lymphoma 6 protein, a proto-oncogene which is usually hypermutated in diffuse large B-cell lymphoma (DLCL). Moreover, differential expression of miR-127 was detected in different types of DLCL. miR-127 levels were significantly higher in testicular DLCL compared with nodal and central nervous system DLCL, implying that DLCL at different locations are different biological entities.
Role in disease:
Hepatocellular carcinoma Inhibition of miR-127 expression is linked with Hepatocellular carcinoma. The mechanistic link was confirmed by a change in BCL6 protein, which is targeted by miR-127. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Perspective geological correlation**
Perspective geological correlation:
Geological perspective correlation is a theory in geology describing geometrical regularities in the layering of sediments. Seventy percent of the Earth's surface is occupied by sedimentary basins: volumes consisting of sediments accumulated over millions of years and punctuated by long interruptions in sedimentation (hiatuses). The most noticeable feature of the rocks that fill the basins is layering (stratification). Stratigraphy is the part of geology that investigates the phenomenon of layering. It describes the sequence of layers in the basin as consisting of stratigraphic units, which are defined on the basis of their lithology and have no sharp definition. Geological Perspective Correlation (GPC) is a theory that divides the geological cross-section into units according to a strong mathematical rule: all borders of layers in a given unit obey the law of perspective geometry.

Sedimentary layers are mainly created in the shallow waters of oceans, seas, and lakes. As new layers are deposited, the old ones sink deeper under the weight of the accumulating sediments. The content of sedimentary layers (lithological and biological), their order in the sequence, and their geometrical characteristics keep records of the history of the Earth, of past climate, sea level and environment. Most knowledge about sedimentary basins comes from exploration drilling in the search for oil and gas. The essential feature of this information is that each layer is penetrated by the wells at a number of scattered locations. This raises the problem of identifying each layer in all wells: the geological correlation problem. The identification is based on comparison of 1) physical and mineralogical characteristics of the particular layer (lithostratigraphy), or 2) petrified remnants in this layer (biostratigraphy). The similarity of layers decreases as the distance between the cross-sections increases, which leads to ambiguity of the correlation scheme that indicates which layers penetrated at different locations belong to the same body (see A).

To improve the results, geologists take into consideration the spatial relations between layers, which restrict the number of acceptable correlations. The first restriction was formulated in the 17th century: the sequence of layers is the same in any cross-section. The second one was discovered by Haites in 1963: in an undisturbed sequence of layers (strata), the thicknesses (H1 and H2) of any layer observed in two different locations obey the law of perspective geometry, i.e. the perspective ratio K = H1/H2 is the same for all layers in this succession. This theory attracted attention around the world, and particularly in Russia. The theory is also a basis of the method of graphical correlation in biostratigraphy, widely used in the oil and coal industries.
Overview:
Geometry is the main lead in natural resources exploration. For example, oil geologists look for permeable layers of a particular geometry that allows keeping the oil in place (for instance, the dome-shaped anticlinal trap). Ore geologists look for faults in the sediments: the pathways that deliver melted mantle material to the upper crust. Knowledge about the underground geometry of sedimentary basins comes from geological observations, geophysical measurements and drilling. Drilling gives the most detailed information about the position, thickness, and physical, chemical and biological characteristics of each layer, but the point is that each well presents all this information at one location on the layer. Because the geometry of a layer can be very complicated, recovering it becomes a difficult problem and requires a significant number of drilled wells.
Overview:
The challenge is identifying in each well the interval that belongs to the same layer now or in the past (see A). To do this, geologists use all available characteristics of the layer. Only after this is it possible to begin the recovery of the geometry of the layer (to be more precise, the geometry of the top and bottom surfaces of the layer). This procedure is called geological correlation, and the results are presented as a correlation scheme (A). It is natural that at the beginning of exploration, when the number of wells is small, the correlation scheme contains expensive mistakes.
Basics of geological correlation:
The Danish scientist Nicolas Steno (1638–1686) is credited with three principles of sedimentation:
superposition: in undeformed stratigraphic sequences the oldest strata will be at the bottom of the succession;
original horizontality: layers of sediment are originally deposited horizontally;
lateral continuity: layers of sediment initially extend laterally in all directions.
Principle 1 allows defining the temporal relations between neighboring geological bodies, principle 2 organizes the geometrical pattern of the succession of layers, and principle 3 helps unite the parts of a layer found in separated geological cross-sections.
Basics of geological correlation:
Practical correlation has a lot of difficulties: fuzzy borders of the layers, variations in the composition and structure of the rocks in the layer, unconformities in the sequence of layers, etc. This is why errors in correlation schemes are not rare. When the distances between available cross-sections decrease (for example, by drilling new wells), the quality of correlation improves, but in the meantime wrong geological decisions may be made, which increases the expenses of geological projects. From Steno's principle of original horizontality it follows that the top borders of the layers (tops) were initially flat, and remain flat as long as the complete succession stays undisturbed by subsequent tectonic movements, but no regularities about the geometric relations between these flat surfaces in the succession were known. The first to shed light on the problem was the Canadian geologist Haites, who published the Geological Perspective Correlation hypothesis in 1963. Perspective geological correlation is a theory that establishes strong geometrical restrictions on the geometry of the layers in sedimentary deposits.
Perspective geometry in undisturbed succession of layers:
In 1963 the Canadian geologist Haites discovered a strong regularity of the layering in sedimentary basins: the thicknesses of layers within each stratigraphic unit are governed by the law of perspective correspondence. It means that in an undisturbed succession, on the correlation scheme the straight lines drawn through the border points of the same layer in two cross-sections intersect at one point, the center of perspectivity (see B). For geological purposes a more convenient geometrical presentation of perspective relations is the correlation plot proposed by Jekhowsky (see C): the depths of the layers' borders in one geological cross-section are plotted along axis h′ (h1′, h2′, h3′, ...), and the positions of the same layers in another cross-section are plotted along axis h′′ (h1′′, h2′′, h3′′, ...). Points 1, 2, 3, ... with coordinates (h1′, h1′′), (h2′, h2′′), and (h3′, h3′′), accordingly, are called correlation points, and a curve drawn through these points, a correlation line. Black dots (connectors) represent the relative position of correlated borders on the plot. When the layer geometry satisfies the conditions of perspective correspondence, the correlation line is a straight line. In the particular case of parallel layers the inclination of the correlation line is 45°. Perspective Geological Correlation also states that each sedimentary basin consists of a number of stratigraphic units (sequences of layers without unconformities), and in each unit the relations between the thicknesses of the layers in two cross-sections satisfy the perspective geometry conditions with an individual ratio K. Haites also concludes that all strata in each unit were governed by the same rate of deposition, and their borders are synchronous time-planes. Each layer has different thicknesses in different locations, but they lasted equally long. This was a significant contribution to chronostratigraphy.
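On a Jekhowsky-style correlation plot this check reduces to fitting a straight line through the correlation points. A minimal sketch, with hypothetical depth picks chosen so that K = 1.6, echoing the ratio in the Saskatchewan-Manitoba example below:

```python
import numpy as np

# Depths of the same layer borders in two wells (hypothetical numbers).
h1 = np.array([100.0, 180.0, 260.0, 420.0, 500.0])   # well 1
h2 = np.array([160.0, 288.0, 416.0, 672.0, 800.0])   # well 2

# Perspective correlation predicts a straight correlation line
# h2 = K * h1 + b; fit it and inspect the residuals.
K, b = np.polyfit(h1, h2, 1)
residuals = h2 - (K * h1 + b)

print(f"perspective ratio K = {K:.2f}")               # 1.60 for this data
print(f"max residual = {np.abs(residuals).max():.3f}")  # ~0: perspective holds
```

Large residuals at a single correlation point would flag a miscorrelated layer top or a lithologic replacement, exactly as in the consequences listed below.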
Perspective geometry in undisturbed succession of layers:
The following are consequences of the basic statements: In different stratigraphic horizons the slopes of the correlation lines are different.
If two adjacent sections of the correlation line have the same slope, both belong to the same stratigraphic horizon; a gap between the lines indicates a fault.
If one correlation point on the correlation line that represents an undisturbed stratigraphic succession does not fit the line, it means that (a) the correlation of the tops of the corresponding layer is wrong, or (b) there is a lithologic replacement.
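These consequences suggest a simple screening procedure. A rough illustration (hypothetical depths and a user-chosen tolerance) that flags correlation points falling off the fitted correlation line, i.e. candidates for a mis-picked top or a lithologic replacement:

```python
# Flag correlation points that deviate from the fitted correlation line.
# Both the depths and the tolerance are made up for illustration.
import numpy as np

def misfit_tops(h_prime, h_double_prime, tol_m=8.0):
    """Indices of correlation points farther than tol_m (m) from the
    straight correlation line fitted to all points."""
    K, b = np.polyfit(h_prime, h_double_prime, deg=1)
    residuals = h_double_prime - (K * h_prime + b)
    return np.flatnonzero(np.abs(residuals) > tol_m)

h1 = np.array([1200.0, 1260.0, 1335.0, 1420.0, 1510.0])
h2 = np.array([980.0, 1028.0, 1110.0, 1156.0, 1228.0])  # third top mis-picked
print(misfit_tops(h1, h2))  # -> [2]
```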
Connections to traditional lithostratigraphy:
The Perspective Geological Correlation is well grounded in traditional geology. The method of convergence maps serves for determining the structure of a layer based on the known structure of the layers lying above it. It is based on the assumption that the layers are close to parallel. A convergence map shows lines of equal distance (isopach lines) between a key layer and a target layer. If the layers are parallel, the distance between them is constant, the structures of both layers are identical, and to determine the depths of the target horizon it is enough to drill only one deep well that reaches the target layer. But in reality such conditions are extremely rare, and restoring the geometry of the target horizon demands a number of deep wells in the area. In this case the standard procedure for calculating the distance between the target layer and the key layer at any point in the area is linear interpolation between the known wells. The reliability of the result (the geometrical structure of the target horizon) is estimated by analyzing the trend of the distances between the key horizon and the target horizon (isopachs): if the trend is regular, for example if the distances change monotonically in one direction, it is a sign that the reconstruction is reliable. In the simplest case the surface of the target horizon is a plane in general position, and linear interpolation gives the correct result. The assumptions of the convergence method are consequences of the perspective correlation theory, so the method acquires a theoretical grounding. The theory also gives an additional criterion for the validity of the reconstructed surface: it defines the stratigraphic interval where layers were deposited without interruption and where the layers' thicknesses satisfy the law of perspective geometry. The convergence maps deliver the correct result only when the layers belong to such a stratigraphic unit.
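The convergence procedure itself is short. A sketch, with made-up well coordinates and isopach values and assuming SciPy is available, of the standard linear-interpolation step: the target-horizon depth at any point is the key-horizon depth plus the interpolated key-to-target distance:

```python
# A minimal sketch of the convergence-map procedure (hypothetical data).
import numpy as np
from scipy.interpolate import griddata

# (x, y) positions (km) of deep wells that reached the target horizon,
# and the measured key-to-target distance (isopach, m) in each well
wells_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
isopach_m = np.array([150.0, 180.0, 160.0, 200.0])

def target_depth(xy, key_depth_m):
    """Target-horizon depth at points xy: key-horizon depth plus the
    linearly interpolated isopach (the standard convergence procedure)."""
    thickness = griddata(wells_xy, isopach_m, xy, method="linear")
    return key_depth_m + thickness

# Depth of the target horizon between the wells, given the key horizon
# lies at 1200 m there
print(target_depth(np.array([[5.0, 5.0]]), key_depth_m=1200.0))
```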
Testing:
The description of the theory was accompanied by a number of case studies in its support.
Testing:
The plot (D) shows the correlation plot for two wells in Alberta (Canada): Innisfail 15-8-35-1W5 and Innisfail 7-33-25-1W5. The cross-section of the Innisfail field contains a middle Proterozoic to Paleocene sedimentary succession in excess of 6 km in thickness. The graph shows that the relations between the thicknesses of all corresponding layers in these two cross-sections lie on a straight line, i.e. they conform to the law of geometrical perspective with the same perspective ratio K. The markers are from a conventional correlation scheme. The deviation of the correlation points from the straight line is about 5 feet on average. The plot (E) demonstrates that Perspective Geological Correlation works at long distances as well. It shows the correlation between two wells in Canada 300 miles apart (Saskatchewan and Manitoba) in Silurian–Ordovician carbonates (the tilt angle of 53° corresponds to K = 1.6).
Testing:
The first review of Haites' publication appeared in 1964 in Russia. It describes the hypothesis in detail and rates its potential very highly. The idea attracted programmers working on the automation of correlation on computers: the known rules of correlation were fuzzy, and it was impossible to formalize them and turn them into algorithms. The restrictions on the geometry of layering observed by Haites compensated for the lack of non-formal human knowledge. A group of Russian scientists (Guberman, Ovchinnikova, Maximov) positively tested Haites' hypothesis in different oil-bearing provinces using a computer program (in Central Asia, the Volga-Ural province, West and East Siberia, and the Russian Platform); for example, see plot (F). The activity of this group continued into the 2000s and covered new geological provinces around the globe: Canada, Kansas, Louisiana, and South Wales. O. Karpenko demonstrated an effective use of perspective correlation in resolving very practical problems of oil exploration. The law of perspective correspondence allowed discovering the boundaries of changes in the paleotectonic regime in thin-layered sedimentary rocks where the regular correlation technique did not work. Using the example of the Rubanivsk gas field, the author demonstrated that the Dashava deposits of the Precarpathian External Zone depression can be divided into a number of zones of stable sediment accumulation under different conditions; some zones correlate with intervals of enhanced gas flow rate. These works show that the hypothesis is correct in a wide variety of geological conditions; that it works at long distances; that it can serve as a solid test for stratigraphic schemes made by geologists; that it reveals unconformities of layers as small as 1° (G–I) and faults with displacement amplitudes as small as 1–2 m (G–II); that the number of correctly correlated tops in a stratigraphic unit without unconformities has to be at least three, and the bigger this number, the greater the reliability of the result; and that it is an instrument for correcting mistakes. Since the publication of Haites' theory in 1963 it has been covered in a number of reviews on quantitative methods of correlation (including automatic correlation). Some of the reports (Hansen, Salin, Barinova) demonstrate that perspective correlation allowed better reconstruction of geological structure at the early stages of exploration. Hansen describes the controversial history of investigating the complicated Patapsco Formation in Maryland and Virginia (USA), and states that "an adaptation of Haites' (1963) technique of perspective correlation is used to subdivide the Patapsco Formation into consistently defined mapping units". Salin was able to simplify the stratigraphic description of the Khatyr depression (Siberia) by applying perspective correlation. Barinova analysed the structure of the Osipovichy underground gas storage (Eastern Europe) with an automatic correlation program based on Haites' principles. Because of the high resolving power of the method, a number of geological faults that break the leakproofness of the storage were recognized. Because of the small displacements of these faults they had not been found by the traditional methods of correlation, and they were rejected by the geological service of the project; very soon after the storage started functioning, significant leakage of gas was recognized.
Extension to biostratigraphy:
In 1964 Shaw proposed a method of correlating fossiliferous stratigraphic profiles using a two-axis graph (H). The markers on each axis are the observed depths of the lowest (FAD) and highest (LAD) occurrences of a specially defined group of fossils (taxa). The appearances and disappearances of taxa are regarded as synchronous and are used as markers of correlation. When projected on a graph, the corresponding points of two compared profiles form the Line of Correlation (LOC). Shaw showed that the ideal LOC consists of linear segments (H). Such conditions occur when the number of collected fossils is large and one can be sure that the material covers the complete range of fossil occurrence, so that FADs and LADs can be accurately determined. In reality, some sampled ranges will be shorter than the true ranges, and this can disturb the linearity of the LOC.
Extension to biostratigraphy:
In every stratigraphic interval the correlated ends of the range (FAD or LAD) belong to the same time surface, and in each geological cross-section (well or outcrop) this interval has identical duration but different thickness. This means that accumulation rates (the thickness-to-duration ratio, tan β) are different in different locations. From the fact that the relation between the durations of the units and their thicknesses is linear, it follows that within the limits of a linear section of the LOC all strata have the same accumulation rate.
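A small numeric illustration (with invented numbers) of this relation: within one linear LOC segment the slope equals the ratio of the two sections' accumulation rates, because the interval spans the same duration in both sections:

```python
# Hypothetical interval: same duration in both sections, different thickness.
thickness_a, thickness_b = 40.0, 64.0  # metres in sections A and B
duration_myr = 2.0                     # million years, equal by assumption

rate_a = thickness_a / duration_myr    # tan(beta) in section A, m/Myr
rate_b = thickness_b / duration_myr    # tan(beta) in section B, m/Myr
loc_slope = thickness_b / thickness_a  # slope of the linear LOC segment

print(rate_a, rate_b, loc_slope)       # 20.0 32.0 1.6; rate_b/rate_a == loc_slope
```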
Extension to biostratigraphy:
The reliability and accuracy of Shaw's method have been tested by Edwards, using a computer simulation on hypothetical data sets, and by Rubel and Pak in terms of formal logic and stochastic theory.
The graphical correlation became a very important tool of stratigraphy in coal and oil industries.
In 1988 Nemec showed the equivalence of Haites' perspective correlation and Shaw's graphical correlation.
Sedimentation model:
Based on the theory of perspective correlation, in 1986 S. Guberman proposed a model of the process of sedimentation. According to Haites' theory, in a given sedimentary basin the conditions of perspective correspondence are satisfied in each stratigraphic unit for any pair of wells. From this it follows that the tops and bases of the layers in the stratigraphic unit satisfy the conditions of perspective correspondence in 3-D space (K). Any three non-collinear points define a complete plane. This means that if the thicknesses of the layers belonging to the same stratigraphic unit are known in three wells, then the thicknesses of these layers can be calculated for any location in the basin. Accordingly, if the structure of the top border of the stratigraphic unit is known, the structure of any other border in the unit can be calculated.
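A minimal sketch of this consequence, with made-up well coordinates: three wells with known unit thickness define a plane z = ax + by + c, from which the thickness can be estimated at any location in the basin:

```python
import numpy as np

# (x, y) well positions (km) and unit thickness z (m) in each well
wells = np.array([[0.0, 0.0], [12.0, 2.0], [4.0, 9.0]])
thickness = np.array([85.0, 97.0, 76.0])

# Solve the exact 3x3 system for the plane coefficients a, b, c
A = np.column_stack([wells, np.ones(3)])
a, b, c = np.linalg.solve(A, thickness)

def thickness_at(x, y):
    """Predicted unit thickness (m) at an arbitrary point (x, y)."""
    return a * x + b * y + c

print(f"{thickness_at(6.0, 4.0):.1f} m")
```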
Sedimentation model:
The model that creates such a sophisticated geometrical pattern is based on Steno's first principle: the strata are originally horizontal, i.e. they are planes. This occurs in shallow water due to the turbulence of the near-surface layer of water. Steno's second principle, which describes the creation of a series of sedimentary layers lying on top of each other, presupposes the subsidence of the basin. The sinking of the basin follows strong geometrical restrictions: the tectonic block that carries the basin rotates around a straight line parallel to the water surface and located onshore (L). As a result, until the moment of the main tectonic disturbance all borders of the layers remain flat and their geometrical inter-relations are described by perspective correspondence. Later tectonic movements distort the shape of the layers (the borders are no longer planes), but in the majority of cases the changes are smooth and the perspective relations are maintained.
Sedimentation model:
This model allows some geological terms to be specified. Steno's horizontality principle should state: the top surface of the sediments is horizontal. Conformity is a fundamental notion in stratigraphy. Until now the term has been used in two different meanings: a surface between two stratigraphic sequences, and the relationship between two stratigraphic units; sometimes both are used in the same paragraph (see page 84). The perspective correlation principle allows defining the notion of conformity: a sequence of layers that obeys the conditions of geometrical perspective is a unit of conformity, and two neighboring units of conformity are in a relation of unconformity. Here is an example showing that the borders of an undisturbed stratigraphic unit in the Middle Carboniferous (Volga-Ural oil province, Russia) were initially planes. In the central part of the area (about 100 km in diameter) three wells were chosen at distances of 10–15 km. The three tops of the stratigraphic unit in the three wells are points in 3-D space with coordinates x, y, z, where x and y represent the position of the well on the surface (M) and z is the thickness of the stratigraphic unit at that location. They determine the top plane of the unit as it was at the time of its creation. The three bases determine the bottom plane of the unit as it was at the same time. This allowed calculating the thickness of the stratigraphic unit at any point in the area. Because the area was densely drilled, the calculated numbers can be compared with the real ones: the average difference equals 2%. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Momoiro Clover Z**
Momoiro Clover Z:
Momoiro Clover Z (ももいろクローバーZ, Momoiro Kurōbā Zetto) is a Japanese idol girl group, commonly abbreviated as MCZ or Momoclo (ももクロ, Momokuro).
Momoiro Clover Z:
The four members of MCZ are known for energetic performances incorporating elements of ballet, gymnastics, and action movies. MCZ is notable for being the first female group to hold a solo concert at the National Olympic Stadium in Japan, as well as for providing theme music for anime television series such as Sailor Moon, Dragon Ball, and Pokémon. In 2013, the group grossed the fourth-highest total sales revenue of any music artist in Japan, with over ¥5.2 billion. During 2016, about 636,000 people attended their live concerts, the most ever for a Japanese female group. MCZ was ranked as the most popular female Japanese group from 2013 to 2018, and from 2020 to 2022. MCZ has collaborated with other performers, including a 2015 recording with the American hard rock band KISS, marking KISS's first collaborative recording. In 2016, their first overseas tour, titled Trans America Ultra Live, was held in Hawaii, Los Angeles, and New York. They have sold more than 3 million physical copies in Japan.
Members:
On stage, Momoiro Clover Z members are easily distinguished by the colors of their clothes, similar to the characters from Super Sentai or Power Rangers. In some songs and music videos, the group loosely parodies them.
Before the group made its debut, other girls were in the lineup: Sumire Fujishiro, Manami Ikura, Yukina Kashiwa (later a member of Nogizaka46), Tsukina Takai (later a member of SKE48), Miyū Wagawa, and Runa Yumikawa.
History:
2008–2009: Conception and beginnings The group was formed in the spring of 2008 as a five-member unit, originally named Momoiro Clover ("Pink Clover" or, literally, "Peach-Colored Clover"). The name was chosen to imply that the group was composed of innocent girls who wanted to bring happiness to people. Momoka Ariyasu joined the group after their first single. Later, in 2011, after the departure of Akari Hayami, management added the letter "Z" to the group's name. The group's slogan is "Idols you can meet right now" (いま、会えるアイドル, Ima, aeru aidoru).
History:
Momoiro Clover began as a street act in 2008, performing for bystanders in Tokyo's Yoyogi Park. As most members were students attending school on weekdays, the group was active mainly on weekends, leading them to be nicknamed "Weekend Heroines" (週末ヒロイン, Shūmatsu Hiroin). In its first year Momoiro Clover had a number of line-up changes; in March 2009, it became a five-member unit composed of Reni Takagi, Kanako Momota, Akari Hayami, Shiori Tamai, and Ayaka Sasaki. To support and promote their first indie single, "Momoiro Punch", Momoiro Clover took advantage of school holidays from May to August and went by minibus on a long tour across Japan, giving a total of 104 concerts in 24 electronic stores of the Yamada Denki network. The girls slept in the minibus, and the group's managers drove. In the middle of the tour, Momoka Ariyasu was added to the group as a sixth member. The single was sold only at the group's live events, and those sales were enough for it to place 11th in the Oricon Daily Singles Chart and 23rd in the weekly chart.
History:
2010: Major debut In March 2010, the girls stated their goals: to take first place on the Oricon chart, to participate in Kōhaku Uta Gassen, and to perform at Budokan. They usually performed in small clubs with live music or on the roof of a department store, where they sometimes set up a simulated stage of the National Olympic Stadium, a venue where only notable musicians are allowed to perform. Their first major-label single "Ikuze! Kaitō Shōjo" was released in May. The single debuted on Japan's Oricon Daily Singles Chart in first position, and at number 3 for the week. Momoiro Clover then moved to King Records. The group's first single with King was "Pinky Jones", composed by Narasaki from the Japanese rock band Coaltar of the Deepers with a "more chaotic" approach than previous songs. December 24 marked Momoiro Clover's first solo concert at a concert hall: Nihon Seinenkan, a venue with a capacity of 1,300 seats, was sold out in 30 minutes.
History:
2011: Shift to Momoiro Clover "Z" In January 2011, at the release event for a new song, sub-leader Akari Hayami announced that she had decided to withdraw from the group in April. Hayami explained that her character was not suited to being an idol and that her dream was to become an actress. At Akari Hayami's "graduation" concert on April 10, the group's management announced the name change to Momoiro Clover Z. In Japan, Z (ゼット) symbolizes ultimateness, and the letter is often appended to a title (e.g., Mazinger Z and Dragon Ball Z). The Z is officially pronounced "zed" (the non-US pronunciation) when the name is used in spoken English. The band has said in an interview that the Z in the name is a reference to the famous anime series Dragon Ball Z, stating "The Z in our name is a very obvious reference to Dragon Ball Z" and that "It's an awesome and very influential series".
History:
Momoiro Clover Z's first single after Hayami's departure was "Z Densetsu: Owarinaki Kakumei", accompanied by a new group image and stage performance. The girls wore outfits with helmets and so-called "transformation belts" reminiscent of Japanese superhero movies, and the music video also borrowed from such "Super Sentai" imagery. In July, Momoiro Clover Z released their first album, Battle and Romance. In December, Hotexpress described the band as the number-one breakthrough idol artist of 2011 and stated that the album had become a big turning point for them. The following February, Battle and Romance won the Grand Prix at the CD Shop Awards as the best CD of the year, selected by music shop employees from all over the country; Momoiro Clover Z was the first idol group to win the award. On Christmas Day 2011, Momoiro Clover Z gave a concert at Saitama Super Arena to their biggest audience to date: all 10,000 tickets were sold out.
History:
2012: Rising popularity in Japan In May 2012, Momoiro Clover Z performed in Putrajaya, Malaysia. The former Prime Minister, Najib Razak, personally greeted the group. In June, Momoiro Clover Z opened a national tour, which closed with a sold-out show at Seibu Dome in August to a capacity crowd of 37,000 fans. Both dates were broadcast live to selected cinemas across Japan, the latter also to Taiwan and Hong Kong.
History:
The group recorded an ending theme song for Pokémon's Best Wishes series (titled "Mite Mite Kocchichi" and included in the eighth single "Otome Sensō" as a coupling track). In July, Momoiro Clover Z performed at Japan Expo 2012 in Paris. Momoiro Clover Z's ninth single "Saraba, Itoshiki Kanashimitachi yo", which appeared in November, topped the Billboard Japan Hot 100 chart, becoming their first single to do so. They contributed "Nippon Egao Hyakkei" as the ending theme to the anime Joshiraku; it was released on 5 September with a prior early release on iTunes Japan on 22 August. On December 31, Momoiro Clover Z performed for the first time at Kōhaku Uta Gassen, an annual New Year's Eve music show hosted by NHK. Going to Kōhaku had been the group's goal for a long time. During the January 1 Ustream broadcast, Momoiro Clover Z made several announcements: the band set itself a new goal of giving a concert at the National Olympic Stadium, an arena with a capacity of 60,000–70,000; they would release a new album in spring; and Momoka Ariyasu had to undergo throat treatment and would not sing or even talk until the end of January. The treatment was subsequently prolonged for another month, until the end of February. During the group's live Ustream broadcasts, Momoka communicated by drawing and writing on a markerboard; at live performances, other members took turns singing her parts.
History:
2013: 5th Dimension Momoiro Clover Z's second full-length album 5th Dimension was released in April. It sold 180,000 copies in the first week and debuted on top of the Oricon charts, with the first album Battle and Romance resurging to number two; it eventually won a platinum disc award. In August, Momoiro Clover Z held a concert at Nissan Stadium, the stadium with the largest capacity in Japan.
History:
2014: Dream come true In March, the group held a solo concert at the National Olympic Stadium, realizing one of their dreams since their debut. Until then, such solo concerts had been given by only six groups; Momoiro Clover Z was the first female group among them and the fastest ever, achieving it in six years. Over the two-day concert, a total of 150,000 people watched in the stadium and at live-viewing venues. In May, the group released their 11th single "Naite mo Iin Da yo"; the B-side "My Dear Fellow" made its debut at Yankee Stadium when it was used for Masahiro Tanaka's warm-up before his first game with the New York Yankees. The group also provided the theme music for the anime Sailor Moon Crystal: "Moon Pride", the group's 12th single, released in July. In August, the group performed as an opening act at Lady Gaga's concert, part of Gaga's world tour "ArtRave: The Artpop Ball" held in Japan. Momoiro Clover Z was selected by Gaga herself.
History:
2015: Collaboration with KISS On January 28, 2015, Momoiro Clover Z released a collaboration single with the American hard rock band KISS, titled "Yume no Ukiyo ni Saitemina". It was the first time KISS had released a collaboration CD with another artist. In Japan, it was released physically in two versions: a Momoiro Clover Z edition (CD+Blu-ray) and a KISS edition (CD only). An alternate mix of the single's title song was also included as the opening track on the Japanese-only SHM-CD album Best of KISS 40, released in Japan on the same day. In February 2015, Momoiro Clover Z were removed from a television performance due to controversy surrounding an appearance in blackface alongside Rats & Star. Momoiro Clover Z provided the theme song "Z no Chikai", released as their fifteenth single on April 29, 2015, for the theatrical anime film Dragon Ball Z: Resurrection 'F'. The group also voiced the Angels at the end of the film.
History:
2016: Amaranthus/Hakkin no Yoake and Trans America Ultra Live The group released their third studio album Amaranthus and fourth studio album Hakkin no Yoake in a double release in Japan on February 17, 2016. The albums debuted at #1 and #2 on the Oricon weekly albums chart, and the group held a dome trek tour for the two albums. In early April 2016, the group announced their first overseas tour, titled Trans America Ultra Live, and appeared in Hawaii, Los Angeles, and New York.
2017: MTV Unplugged
2018: 10th Anniversary Best Album On January 21, Momoka Ariyasu graduated from the group, leaving MCZ with only four members. In April, they released their 18th single, "Xiao yi Xiao". On May 23, they released a new best-of album for their tenth anniversary called Momo mo Juu, Bancha mo Debana.
History:
2019–2021: Self-Titled Album On May 17, 2019, Momoiro Clover Z released their self-titled fifth studio album, their first studio album to not feature Momoka Ariyasu and their first as a four-member group. In 2021, they performed the theme song for the Sailor Moon Eternal movie.
2021–2022: Shukuten
2023–present: Upcoming seventh studio album
Music style:
The band's songs are intentionally ridiculous "hyperactive J-pop numbers". Their live performances are heavily choreographed and feature acrobatic stunts. The group is noted for an "anarchic energy" similar to that of punk bands, and the response from the audience has been characterised as "seismic". Some of Momoiro Clover's works are quite complex, switching from one musical style to another within one song and connecting "seemingly unconnected melodies". The group has worked with many noted songwriters and musicians belonging to different genres of music, from pop to punk and heavy metal. Overall, the group and its music have been noted as progressive and forward-thinking. Ian Martin from The Japan Times dubbed Momoiro Clover "a pop group who provoke squealing, teenage admiration from punks, indie kids, noise musicians and heavy-psychedelic longhairs throughout the Japanese underground music scene". Momoiro Clover "is known for upbeat tunes, eccentric choreography and the members' costumes". A music critic from The Japan Times cites Momoiro Clover as an example of "a seamless integration of personality, image, and music, with each element mutually complementary".
Discography:
Battle and Romance (2011) 5th Dimension (2013) Amaranthus (2016) Hakkin no Yoake (2016) Momoiro Clover Z (2019) Shukuten (2022)
Collaboration:
Momoiro Clover Z have collaborated with overseas artists.
KISS released a collaboration single with Momoiro Clover Z, titled "Yume no Ukiyo ni Saitemina" (January 2015).
Lady Gaga designated Momoiro Clover Z for an opening act of her concert (August 2014).
Marty Friedman participated as a guitarist in "Mōretsu Uchū Kōkyōkyoku Dai 7 Gakushō "Mugen no Ai"" (March 2012) and "Moon Pride" (July 2014).
Yngwie Malmsteen participated as a guitarist in "Mōretsu Uchū Kōkyōkyoku Dai 7 Gakushō "Mugen no Ai" -Emperor Style-" (June 2014). The group sings the theme music for the following anime.
Collaboration:
Yosuga no Sora - "Pinky Jones" (November 2010) Dragon Crisis! - "Mirai Bowl" (January 2011) Bodacious Space Pirates - "Mōretsu Uchū Kōkyōkyoku Dai 7 Gakushō "Mugen no Ai"" (March 2012) Pokémon - "Mite Mite Kocchichi" (June 2012) Joshiraku - "Nippon Egao Hyakkei" (June 2012) in collaboration with Yoshida Brothers Pretty Guardian Sailor Moon Crystal - "Moon Pride", "Moon Rainbow" (月虹, Gekkō) (July 2014) Pretty Guardian Sailor Moon Crystal Season III - "Fall in Love with a New Moon" (ニュームーンに恋して, Nyū Mūn ni Koishite) (June 2016) Pretty Guardian Sailor Moon Eternal: The Movie - "Moon Color Chainon" (月色Chainon, Tsukiiro Chainon) (January 2021, with main voice actresses: Kotono Mitsuishi, Hisako Kanemoto, Rina Sato, Ami Koshimizu, and Shizuka Ito) Dragon Ball Z: Resurrection 'F' - "Z no Chikai" (April 2015) Crayon Shin-chan: Burst Serving! Kung Fu Boys ~Ramen Rebellion~ - "Xiao Yi Xiao" (April 2018).
Overseas performances:
Japan Media Arts Festival 2011 in Dortmund, Germany (September 9) Hari Belia Negara 2012 in Putrajaya, Malaysia (May 26) Japan Expo 2012 in Paris, France (July 5) Anime Expo 2015 in Los Angeles, California (July 2) Japan SAKURA Festival 2016 in Hanoi, Vietnam (April 16, 17) Bilibili Macro Link 2016 in Shanghai, China (July 23) Trans America Ultra Live 2016 in Hawaii, Los Angeles and New York (November 15–19)
Awards:
In 2012, their first album Battle and Romance won the CD Shop Award as the best CD of the previous year, as voted by music shop salesclerks from all over Japan. It was the first time an idol group had won this prize.
Filmography:
Shirome (シロメ) - August 2010. A horror film; during filming, the girls were reportedly led to believe they were participating in a documentary about an urban legend and that everything happening was genuine. The Citizen Police 69 (市民ポリス69) - March 2011 Ninifuni - February 2012 Momodora (ももドラ momo+dra) - February 2012. A 5-episode internet drama omnibus film. Maku ga Agaru (幕が上がる) - February 2015. The five members played leading roles and later won the Japan Academy Prize. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded